

Opening for Data Engineer :: Contract :: Texas - Remote
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a contract basis, working remotely from Texas. Requires 4+ years of Python (3.11+), strong SQL skills, and experience with data pipelines. Familiarity with Azure Machine Learning and Azure Data Factory is preferred. Pay rate and contract length unspecified.
Country: United States
Currency: $ USD
Day rate: Unspecified
Date discovered: July 3, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: Dallas, TX
Skills detailed
#SQL (Structured Query Language) #Data Science #DevOps #Azure DevOps #Data Pipeline #Deployment #Version Control #Azure Data Factory #Data Orchestration #Data Storage #Batch #Databricks #Docker #Kafka (Apache Kafka) #Data Framework #Python #Data Engineering #Kubernetes #Big Data #Spark (Apache Spark) #Azure #Data Warehouse #ADF (Azure Data Factory) #ETL (Extract, Transform, Load) #Storage #Azure Databricks #DynamoDB #Airflow #NoSQL #Monitoring #PySpark #REST (Representational State Transfer) #REST API #Data Processing #Scala #PostgreSQL #Azure Machine Learning #ML (Machine Learning) #GitLab
Role description
Job Title: Data Engineer
Location: Texas (Remote)
Job Type: Contract
Travel: 10% to Dallas / Dallas Airport
Must Have
• Python expert (idiomatic Python 3.11+)
• Heavy data engineering experience
• Experience working alongside Data Scientists, assisting with model training and prediction frameworks
Job Summary
We're seeking an experienced Data Engineer with deep expertise in Python 3.11+ who can design and develop high-throughput data pipelines and collaborate closely with Data Scientists on model training and production prediction systems. Familiarity with Azure Machine Learning and Azure Data Factory is a strong advantage.
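For illustration only (not part of the original posting): a minimal sketch of what "idiomatic Python 3.11+" pipeline code can look like, leaning on typed dataclasses, PEP 604 unions, and structural pattern matching. All names here (Record, clean, the sample rows) are hypothetical.

```python
# Sketch only: a typed, lazy batch-transform step in idiomatic Python 3.11+.
from dataclasses import dataclass
from collections.abc import Iterable, Iterator


@dataclass(frozen=True, slots=True)
class Record:
    user_id: int
    amount_cents: int
    country: str | None = None  # PEP 604 union syntax


def clean(rows: Iterable[dict]) -> Iterator[Record]:
    """Validate raw rows lazily, skipping malformed ones."""
    for row in rows:
        match row:  # structural pattern matching (3.10+)
            case {"user_id": int(uid), "amount_cents": int(cents)}:
                yield Record(uid, cents, row.get("country"))
            case _:
                continue  # a real pipeline would route these to a dead-letter sink


if __name__ == "__main__":
    raw = [{"user_id": 1, "amount_cents": 250}, {"bad": "row"}]
    print(list(clean(raw)))  # -> [Record(user_id=1, amount_cents=250, country=None)]
```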
Key Responsibilities
• Design, build, and maintain scalable ETL/ELT data pipelines in idiomatic Python 3.11+, handling both batch and streaming workloads
• Collaborate with Data Scientists to operationalize ML models: assist with model training, deployment, and end-to-end prediction frameworks
• Optimize data storage solutions (SQL/NoSQL/data warehouses) and implement transformations in Python and SQL for analytical and ML use cases
• Integrate and manage message queues (e.g., Kafka, RabbitMQ) for asynchronous data processing workflows (see the consumer sketch after this list)
• Monitor, log, troubleshoot, and optimize data pipeline performance and reliability
• (Nice to have) Design and maintain ML workflows with Azure Machine Learning, and manage data orchestration with Azure Data Factory
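As a rough sketch of the message-queue responsibility above, here is a minimal Kafka consumer loop using the confluent-kafka client. The broker address, topic, and consumer group are hypothetical placeholders, not details from the posting.

```python
# Sketch only: poll a Kafka topic and hand each message to the next pipeline stage.
from confluent_kafka import Consumer, KafkaError

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "group.id": "pipeline-workers",          # placeholder consumer group
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["raw-events"])           # placeholder topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)     # block up to 1s for a message
        if msg is None:
            continue
        if msg.error():
            if msg.error().code() != KafkaError._PARTITION_EOF:
                raise RuntimeError(msg.error())
            continue                         # end of partition; keep polling
        payload = msg.value().decode("utf-8")
        # transform/load stage would go here
        print(f"{msg.topic()}[{msg.partition()}] @ {msg.offset()}: {payload}")
finally:
    consumer.close()                         # commit offsets and leave the group cleanly
```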
Required Qualifications
• 4+ years of professional experience in Python (3.11+), including clean, idiomatic code practices
• Proven history of building scalable data pipelines, data models, and ETL/ELT workflows
• Strong SQL and database expertise (PostgreSQL, DynamoDB, or equivalents)
• Experience integrating REST APIs and message-driven frameworks in data engineering contexts
• Familiarity with version control and CI/CD systems (e.g., GitLab, Azure DevOps)
• Excellent collaboration and communication skills with technical and data science stakeholders
Preferred (Nice-to-Have)
• Hands-on experience with Azure Machine Learning and Azure Data Factory
• Familiarity with Azure Databricks, Spark/PySpark, or similar big data frameworks
• Experience with Airflow or similar orchestration tools (see the DAG sketch after this list)
• Experience containerizing workloads (Docker/Kubernetes) for deployment and scaling
• Experience in MLOps or supporting production ML systems with monitoring, retraining, and versioning
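For the orchestration item above, a minimal sketch of a daily DAG using the TaskFlow API from Airflow 2.x (the `schedule` argument assumes Airflow 2.4+). The DAG id and task bodies are hypothetical stand-ins for real extract/transform/load logic.

```python
# Sketch only: a three-step daily ETL DAG with Airflow's TaskFlow API.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def example_etl():
    @task
    def extract() -> list[dict]:
        return [{"user_id": 1, "amount_cents": 250}]  # stand-in for a source read

    @task
    def transform(rows: list[dict]) -> list[dict]:
        return [r for r in rows if r["amount_cents"] > 0]

    @task
    def load(rows: list[dict]) -> None:
        print(f"loading {len(rows)} rows")  # stand-in for a warehouse write

    load(transform(extract()))  # TaskFlow infers the dependency chain


example_etl()
```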