

Robert Half
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown," offering a pay rate of "$XX/hour." Key skills include Apache Kafka, Airflow, Databricks, and cloud-based data solutions. A minimum of seven years of experience is required.
Country
United States
Currency
$ USD
-
Day rate
536
-
Date
October 24, 2025
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Dallas, TX
-
Skills detailed
#Data Engineering #Apache NiFi #Automation #Data Manipulation #Data Modeling #NoSQL #Java #Storage #Data Governance #Apache Kafka #Kubernetes #Data Science #Programming #Databricks #Apache Airflow #Data Architecture #Kafka (Apache Kafka) #Scala #Databases #Python #JSON (JavaScript Object Notation) #Spark (Apache Spark) #Data Framework #Data Quality #Computer Science #Data Access #S3 (Amazon Simple Storage Service) #Security #SQL (Structured Query Language) #Data Processing #Quality Assurance #ETL (Extract, Transform, Load) #Monitoring #Hadoop #Data Storage #Cloud #NiFi (Apache NiFi) #Data Lifecycle #Deployment #Airflow #Data Pipeline
Role description
We are looking for a highly experienced Data Platform Engineer to join our team. You will be instrumental in architecting, building, and optimizing our next-generation, cloud-based data ecosystem. This role requires deep expertise in modern data frameworks to deliver the scalable, reliable data solutions that fuel organizational intelligence.
You will be expected to design and implement robust data pipelines, utilizing key technologies such as Apache Kafka, Apache Airflow, Apache NiFi, Databricks, Hadoop, Apache Flink, and Amazon S3. Success in this position means collaborating closely with our data science, analytics, and infrastructure teams to ensure the deployment of secure, high-quality data products essential for critical business decision-making.
Core Responsibilities
• Pipeline Development: Architect, code, and maintain highly scalable and dependable data pipelines. Leverage tools like Airflow, NiFi, and Databricks to automate the entire data lifecycle: ingestion, cleansing, transformation, and delivery (see the orchestration sketch after this list).
• Real-Time Architecture: Implement and manage event-driven and streaming data architectures using technologies such as Apache Kafka and Flink.
• Data Storage & Optimization: Develop and refine data persistence strategies on platforms like Hadoop and Amazon S3, focusing on performance, cost efficiency, and future scalability.
• Quality & Governance: Establish and enforce best practices for data quality, governance, and security, including the deployment of comprehensive monitoring solutions.
• Orchestration: Oversee and manage complex ETL/ELT processes and workflow orchestration across diverse hybrid and multi-cloud environments.
• Interoperability: Work extensively with various data formats (e.g., Parquet, Avro, JSON) to maximize data accessibility and ensure seamless interoperability across systems.
• Performance Tuning: Diagnose and resolve performance bottlenecks within distributed data processing systems.
• Mentorship: Guide and mentor junior team members, promoting technical excellence and a commitment to continuous learning within the team.
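To make the pipeline and orchestration responsibilities concrete, the following is a minimal sketch of the kind of Airflow DAG this role would own: a three-step ingest, transform, and publish workflow. It assumes Airflow 2.4 or newer (where the schedule argument is available), and every identifier here (DAG id, task ids, placeholder callables) is hypothetical rather than part of this listing.

```python
# Minimal sketch of an Airflow-orchestrated ingest -> transform -> publish pipeline.
# All identifiers (DAG id, task ids, callables) are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_raw_events(**context):
    """Placeholder: land a batch of raw events (e.g., from Kafka or S3)."""


def transform_events(**context):
    """Placeholder: cleanse the batch and convert it to a columnar format such as Parquet."""


def publish_curated(**context):
    """Placeholder: deliver the curated dataset to consumers (e.g., Databricks or S3)."""


with DAG(
    dag_id="example_events_pipeline",          # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@hourly",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    ingest = PythonOperator(task_id="ingest_raw_events", python_callable=ingest_raw_events)
    transform = PythonOperator(task_id="transform_events", python_callable=transform_events)
    publish = PythonOperator(task_id="publish_curated", python_callable=publish_curated)

    # Ingestion -> cleansing/transformation -> delivery, mirroring the lifecycle described above.
    ingest >> transform >> publish
```

In practice the placeholder callables would be replaced with calls into the stack named above, for example NiFi-driven ingestion, Databricks or Spark transformations, and S3 delivery.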
Qualifications
• Education: A Bachelor's degree in Computer Science, Engineering, or a closely related technical field.
• Experience: A minimum of seven (7) years of hands-on experience in the Data Engineering domain, specifically focused on pipeline design, development, and workflow orchestration.
• Technology Mastery: Proven, expert-level proficiency with data platform technologies including Kafka, Airflow, NiFi, Databricks, Spark, Hadoop, Flink, and S3 (see the streaming sketch after this list).
• Programming Skills: Strong command of high-level programming languages such as Python, Scala, or Java for data manipulation and automation.
• Database Fluency: Excellent SQL skills and working knowledge of both relational and NoSQL databases.
• Modern Environments: Experience with on-premise and Kubernetes-based data engineering deployments.
• Conceptual Knowledge: Familiarity with modern concepts in data modeling, data governance, and data quality assurance frameworks.
• Soft Skills: Outstanding analytical capabilities, superior problem-solving acumen, and excellent written and verbal communication skills.
• Preferred: Prior experience operating within a multi-cloud or hybrid data infrastructure is highly desirable.
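As a companion to the real-time architecture responsibility and the Kafka proficiency listed above, here is a minimal sketch of consuming a Kafka topic from Python with the kafka-python client. The topic name, broker address, and consumer group are hypothetical placeholders, and an equivalent could just as well be written against Flink or Spark Structured Streaming.

```python
# Minimal sketch of a Kafka consumer, assuming the kafka-python client is installed.
# Topic, broker, and consumer-group names are hypothetical placeholders.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "example-events",                      # hypothetical topic
    bootstrap_servers="localhost:9092",    # placeholder broker address
    group_id="example-consumer-group",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Placeholder: validate, enrich, and route the event to downstream storage (e.g., S3).
    print(event)
```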






