Our client reserves the right not to make an appointment. In considering candidates for appointment into advertised posts, preference will be accorded to persons from a designated group in accordance with the approved Employment Equity Plan.
Intermediate Data Engineer - Contract - Remote
(PDG2000483537)
Overview
Reference
PDG2000483537
Salary
ZAR550 - ZAR650/hour
Job Location
South Africa -- Johannesburg Metro -- Sandton
Job Type
Contract
Posted
11 August 2025
Closing date
11 September 2025 19:59
We are looking for a highly experienced Data Engineer to design and implement robust data solutions across cloud and hybrid environments. This role involves building scalable ETL pipelines, integrating diverse data sources, and ensuring data quality and governance. The ideal candidate will have strong expertise in Azure technologies, data modelling, and enterprise data integration, with a proven ability to collaborate across technical and business teams.
Responsibilities:
- Create and/or extend existing data models to include the data required for consumption by the Analytics teams.
- Apply the relevant business and technical rules in the ETL jobs to correctly move data.
- Use the SDLC as defined, including testing and aligning to release management.
- Produce design documents that can be reviewed by the Design Authority.
- Builds must align to standards as defined by Enterprise Architecture.
- Includes knowledge transfer (KT), hypercare, and PGLS of work delivered.
- Design, develop, and maintain ETL pipelines using Azure Data Factory and Databricks.
- Implement data movement and transformation across cloud, on-premises, and hybrid systems.
- Ensure seamless data exchange and integration using Azure Synapse Analytics, Azure Data Lake, and SQL Server.
- Develop and consume RESTful and SOAP APIs for real-time and batch data integration.
- Work with API gateways and secure authentication methods (OAuth, JWT, API keys, certificates).
- Apply data validation, cleansing, and enrichment techniques.
- Execute reconciliation processes to ensure data accuracy and completeness.
- Adhere to data governance and security compliance standards.
- Troubleshoot ETL failures and optimize SQL queries and stored procedures.
- Provide operational support and enhancements for existing data pipelines.
- Partner with data analysts, business analysts, and stakeholders to understand data needs.
- Document data workflows, mappings, and ETL processes for maintainability.
- Share best practices and mentor junior engineers.
Experience:
- Matric and a tertiary qualification.
- Experience in large-scale enterprise data integration projects.
- 5-7 years in data engineering, ETL development, and SQL scripting.
- Strong expertise in Azure Data Factory, Databricks, Synapse, and Pentaho.
- Proficiency in SQL, Python, PySpark, and performance tuning.
- Experience with Git, Azure DevOps, and CI/CD pipelines.
- Solid understanding of data modelling, warehousing, and governance.