Databricks Engineering: Senior Data Engineer
Job Requirements:
Role Overview:
We are seeking a highly skilled Databricks Engineer to design, develop, and optimize data pipelines and analytics solutions. The ideal candidate will have strong expertise in Databricks, PySpark, and SQL, with experience building scalable data solutions across cloud platforms.
Key Responsibilities:
• Design and implement data pipelines and ETL processes using Databricks and PySpark (see the sketch after this list).
• Develop and optimize SQL queries for data transformation and analytics.
• Collaborate with data architects, analysts, and business stakeholders to deliver high-quality data solutions.
• Ensure data quality, governance, and compliance across all processes.
• Integrate data from multiple sources and manage large-scale datasets efficiently.
• Troubleshoot and optimize the performance of data workflows and queries.
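By way of illustration, the first responsibility above might look like the following minimal PySpark sketch, assuming a Databricks workspace where Delta Lake is available. The landing path, schema, table, and column names (/mnt/landing/orders/, analytics.orders_clean, order_id, order_ts, amount) are hypothetical placeholders, not part of this posting.

```python
# Illustrative only: a minimal Databricks-style ETL step in PySpark.
# All paths, tables, and columns below are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Ingest raw JSON events from a hypothetical landing zone.
raw = spark.read.json("/mnt/landing/orders/")

# Basic cleansing: deduplicate, normalize timestamps, drop invalid rows.
orders = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("amount") > 0)
)

# Persist as a managed Delta table (assumes the "analytics" schema exists;
# Delta Lake is the default table format on Databricks).
(orders.write.format("delta")
       .mode("overwrite")
       .saveAsTable("analytics.orders_clean"))

# Downstream analytics in Spark SQL over the cleansed table.
spark.sql("""
    SELECT date_trunc('day', order_ts) AS order_day,
           sum(amount)                 AS daily_revenue
    FROM analytics.orders_clean
    GROUP BY 1
    ORDER BY 1
""").show()
```

Writing to a managed Delta table keeps downstream SQL analytics straightforward, which is why Delta Lake sits alongside PySpark and SQL in the must-have skills below.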
Primary Skills (Must-Have):
• Databricks (including Delta Lake and ML capabilities)
• PySpark for distributed data processing
• SQL for data manipulation and analytics
Good-to-Have Skills:
• Azure Services: Azure Data Factory (ADF), Synapse, Purview, Microsoft Fabric
• AWS Services: Glue, Lambda, Step Functions
• Workflow Orchestration: Airflow
• Data Transformation & Integration: dbt, Fivetran, Informatica
• Streaming & Messaging: Kafka
• BI & Visualization: Power BI
• Data Governance Tools: Collibra, Alation
• GCP Services: BigQuery
Qualifications:
• Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
• Proven experience in building and managing data pipelines in cloud environments.
• Strong problem-solving and analytical skills.
• Excellent communication and collaboration abilities.
Nice to Have:
• Experience with multi-cloud environments (Azure, AWS, GCP).
• Familiarity with the modern data stack and data governance frameworks.