

Cyberobotix
Azure Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Data Engineer in Cincinnati, OH, on a contract basis, requiring 10+ years of experience. Key skills include Azure Data Factory, PySpark, Python, and SQL. Candidates must be prepared for a face-to-face final-round interview.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
January 22, 2026
Duration
Unknown
-
Location
On-site
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Cincinnati, OH
-
Skills detailed
#Data Processing #Regression #Observability #Automated Testing #Deployment #Computer Science #GitHub #Logging #Pytest #Azure Data Factory #Data Pipeline #Debugging #GIT #Stories #Azure #AWS (Amazon Web Services) #Spark (Apache Spark) #Java #Programming #GCP (Google Cloud Platform) #Microsoft Azure #Python #Monitoring #SQL (Structured Query Language) #Agile #Data Quality #Mathematics #Scrum #ETL (Extract, Transform, Load) #Version Control #Data Engineering #Infrastructure as Code (IaC) #Scala #ADF (Azure Data Factory) #PySpark #Cloud
Role description
Job Title: Azure Data Engineer (ADF / PySpark / Python)
Location: Cincinnati, OH
Interview: Final-round face-to-face in Ohio (submit only candidates ready for F2F)
Experience Required: 10+ years
Employment Type: Contract
Job Summary:
We are seeking an experienced Azure Data Engineer with strong expertise in Azure Data Factory (ADF), PySpark, and Python to design, build, and maintain scalable data pipelines. The role involves ownership of data engineering stories end-to-end, working in an Agile environment, and supporting both internal and external data applications.
Key Responsibilities:
• Take ownership of user stories and drive them through all phases of the SDLC.
• Design and develop distributed data processing pipelines using PySpark.
• Build and orchestrate multi-step data transformation pipelines using Azure Data Factory.
• Develop transformation logic using object-oriented Python.
• Perform unit, integration, and regression testing on data pipelines and packaged code.
• Enhance and maintain CI/CD pipelines for production deployment.
• Implement data quality checks for ingested and post-processed data.
• Ensure data observability through monitoring, alerting, and logging.
• Maintain and enhance existing data applications and pipelines.
• Provision and manage cloud infrastructure using Infrastructure as Code (IaC).
• Participate in sprint planning, estimations, and retrospective meetings.
• Mentor junior team members and promote engineering best practices.
• Continuously identify opportunities to improve processes, performance, and quality.
Required Skills & Experience:
Core Technical Skills:
• Strong hands-on experience with Microsoft Azure, including:
  • Azure Data Factory (ADF)
  • Azure data services
• Python (advanced) with object-oriented programming principles
• PySpark (distributed data processing)
• Strong SQL development experience
• Experience with data pipeline orchestration
• Experience with CI/CD pipelines
• Experience with Git/GitHub (version control)
• Experience with automated testing (PyTest or similar)
• Experience with data observability, monitoring, and alerting
• Experience implementing data quality frameworks and checks
• Experience with Infrastructure as Code
Additional Technical Knowledge:
• Java and Spring Framework (working knowledge)
• Cloud experience (Azure preferred; AWS/GCP acceptable)
• Dependency management tools (Conda, venv, etc.)
• Debugging and performance tuning of enterprise applications
• Understanding of SOLID principles
• Agile/Scrum development experience
Education:
• Bachelor's degree in Computer Science, MIS, Mathematics, Business Analytics, or a related technical field.
Soft Skills:
• Strong ownership mindset and problem-solving skills
• Ability to work in a fast-paced Agile environment
• Strong collaboration and communication skills
• Continuous learning and improvement mindset






