

Robert Half
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with expertise in AWS, Databricks, and data governance. Contract length is unspecified, with a competitive pay rate. Key skills include data architecture, ETL/ELT pipelines, and AI/ML integration. Experience in cloud environments is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date
April 30, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Houston, TX
-
🧠 - Skills detailed
#Batch #Data Pipeline #Data Governance #Data Integration #Metadata #Data Processing #AI (Artificial Intelligence) #Data Lineage #Data Management #Data Lake #ML (Machine Learning) #SageMaker #Data Architecture #AWS (Amazon Web Services) #Data Access #Data Engineering #AWS Glue #ETL (Extract, Transform, Load) #Scala #Migration #Compliance #Cloud #Databricks
Role description
We are seeking an experienced Data Engineer with strong expertise in AWS, Databricks, and data governance to design, build, and modernize enterprise data platforms. This role will focus on creating scalable data lake, lakehouse, and data mesh architectures, enabling trusted data access across the organization, and supporting advanced analytics and AI/ML initiatives.
Key Responsibilities:
• Design and implement scalable data architectures, including Data Lakes, Lakehouse, and Data Mesh frameworks.
• Lead cloud migration and data modernization initiatives to improve performance, scalability, and reliability.
• Architect, build, and optimize batch and streaming data pipelines using AWS-native tools such as AWS Glue and related services.
• Partner with business leaders, data consumers, and technology teams to ensure solutions align with enterprise architecture and business goals.
• Establish and manage data governance practices, including metadata management, data lineage, cataloging, quality controls, and compliance frameworks.
• Develop and maintain data solutions in Databricks to support analytics, reporting, and large-scale data processing.
• Integrate enterprise data platforms with AI/ML ecosystems, including Amazon SageMaker, Amazon Bedrock, and Databricks.
• Ensure data platforms are secure, reliable, and optimized for both operational and analytical use cases.
• Promote best practices for data engineering, cloud architecture, governance, and platform scalability.
Required Qualifications:
• Proven experience as a Data Engineer in AWS cloud environments.
• Hands-on experience with Databricks and modern data platform design.
• Strong experience building and supporting ETL/ELT pipelines, batch processing, and streaming data workflows.
• Experience with AWS Glue and related AWS data services.
• Knowledge of data governance, metadata management, lineage, and compliance requirements.
• Experience supporting AI/ML data integration and enabling platforms such as SageMaker and Bedrock.
• Strong understanding of modern data architecture patterns, including Lakehouse and Data Mesh.
• Excellent stakeholder communication and cross-functional collaboration skills.