

Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with 5–8 years of experience in Big Data. The contract is W2, based in Texas, with an unspecified duration. Key skills include Spark, AWS/GCP/Azure, and data lakehouse architectures. Certifications are not specified.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
624
🗓️ - Date
November 28, 2025
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Texas, United States
🧠 - Skills detailed
#Data Science #Data Modeling #SQL (Structured Query Language) #Azure #AWS (Amazon Web Services) #Data Lake #Scala #BI (Business Intelligence) #Automation #ML (Machine Learning) #Data Lakehouse #Data Management #Cloud #Storage #Snowflake #Batch #Data Pipeline #Datasets #Distributed Computing #dbt (data build tool) #Docker #Spark (Apache Spark) #Redshift #Airflow #Data Engineering #Databricks #Data Quality #AI (Artificial Intelligence) #ETL (Extract, Transform, Load) #GCP (Google Cloud Platform) #Metadata #Delta Lake #Data Ingestion #Kafka (Apache Kafka) #Big Data #PySpark #BigQuery
Role description
Big Data Engineer
Experience: 5–8 Years
Location: Texas
Contract: W2
We’re hiring a Big Data Engineer to design, build, and optimize large-scale distributed data systems that support advanced analytics, AI/ML workloads, and real-time business applications. The ideal candidate brings hands-on expertise with Spark, cloud big data stacks, streaming frameworks, data lakehouse architectures, and enterprise-grade data engineering practices.
Responsibilities:
• Build scalable ETL/ELT pipelines, data ingestion frameworks, and orchestration workflows.
• Develop batch & streaming data pipelines using Kafka, Spark Streaming, Flink, or Kinesis (a minimal streaming sketch follows this list).
• Build and optimize data lakes & lakehouses using Delta Lake, Iceberg, or Hudi.
• Deploy and manage distributed data systems on AWS/GCP/Azure.
• Optimize big data compute environments (Spark, EMR, Databricks, Snowflake, Redshift, BigQuery).
• Implement data quality, metadata management, lineage tracking, and governance controls.
• Support ML engineers & data scientists with feature pipelines and model-ready datasets.
• Enable analytics workloads across BI, dashboards, real-time insights, and predictive models.
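To make the streaming and lakehouse responsibilities above concrete, here is a minimal sketch of the kind of pipeline this role describes: a PySpark Structured Streaming job that reads JSON events from Kafka and appends them to a Delta Lake table. The broker, topic, schema, and storage paths are placeholders, and the snippet assumes a Spark environment with the Kafka connector and Delta Lake already configured.

# Minimal sketch (assumptions noted above): Kafka -> Spark Structured Streaming -> Delta Lake.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = (
    SparkSession.builder
    .appName("events-to-delta")  # hypothetical app name
    .getOrCreate()
)

# Assumed shape of each Kafka message value.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
)

# Kafka delivers bytes; cast the value to string and parse the JSON payload.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

# Append to a Delta table; the checkpoint lets the stream restart safely.
query = (
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://bucket/checkpoints/events")  # placeholder path
    .outputMode("append")
    .start("s3://bucket/lake/events")                                # placeholder path
)

query.awaitTermination()

The checkpoint location is what allows the stream to recover after a restart without reprocessing or dropping records, which is central to the enterprise-grade reliability this posting calls for.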
Requirements:
• 5–8 years in Big Data or Data Platform Engineering.
• Strong in Spark (PySpark/Scala), SQL, and distributed computing.
• Hands-on experience with AWS/GCP/Azure big data services.
• Experience with Snowflake, Redshift, BigQuery, and data lakehouse architectures.
• Proficient with Kafka, Flink, Kinesis, Airflow, dbt, or similar tools (a minimal DAG sketch follows this list).
• Strong understanding of ETL/ELT pipelines, data modeling, storage formats (Parquet/Delta), and performance optimization.
• Experience with CI/CD, Docker, and infrastructure automation.
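As a companion to the orchestration and ETL/ELT requirements, here is a minimal sketch of a daily Airflow DAG that submits a Spark ingestion job and then runs dbt transformations. It assumes Airflow 2.4+, and the DAG name, schedule, commands, and paths are illustrative placeholders rather than details from the posting.

# Minimal sketch (assumptions noted above): daily ingest + dbt build.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_lakehouse_refresh",  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    # Ingest raw files into the bronze layer (placeholder spark-submit call).
    ingest = BashOperator(
        task_id="ingest_raw",
        bash_command="spark-submit jobs/ingest_raw.py --run-date {{ ds }}",
    )

    # Transform bronze -> silver/gold models with dbt (placeholder project path).
    transform = BashOperator(
        task_id="dbt_build",
        bash_command="dbt build --project-dir /opt/dbt/lakehouse",
    )

    ingest >> transform

Keeping ingestion and transformation as separate tasks lets either step be retried independently, a common pattern alongside the CI/CD and automation practices listed in the requirements.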






