

MindSource
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 6+ years of experience in software/data engineering, strong SQL skills, and proficiency in Python, Java, or Scala. Contract length and pay rate are unspecified. Experience with Airflow, Spark, Trino, and Kafka is required; cloud platform experience (AWS, GCP, or Azure) is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 12, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Austin, TX
-
🧠 - Skills detailed
#Azure #Python #SQL (Structured Query Language) #AWS (Amazon Web Services) #Libraries #Spark (Apache Spark) #Cloud #Terraform #Kafka (Apache Kafka) #Version Control #Data Engineering #Trino #Computer Science #Java #Datasets #Kubernetes #GCP (Google Cloud Platform) #Scala #Infrastructure as Code (IaC) #Data Quality #Airflow
Role description
Data Engineer / Software Engineer – 6+ Years
Requirements
• 6+ years in software/data engineering with strong SQL skills
• Proficiency in Python, Java, or Scala
• Experience with Airflow, Spark, Trino, Kafka (see the illustrative sketch after this list)
• Proven ability to analyze complex datasets and build efficient, high-quality solutions
• Familiar with SDLC, version control, and CI/CD
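To illustrate the Airflow and Spark experience listed above, here is a minimal sketch of an Airflow DAG that submits a Spark job. It assumes Airflow 2.x with the apache-spark provider installed; the DAG id, schedule, connection id, and script path are hypothetical examples, not details of this role.

```python
# Minimal illustrative Airflow DAG that submits a Spark job once a day.
# Assumes apache-airflow>=2.4 and apache-airflow-providers-apache-spark.
# All names below (dag_id, application path, conn_id) are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="daily_events_pipeline",   # hypothetical pipeline name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",                # run once per day
    catchup=False,                    # do not backfill missed runs
) as dag:
    SparkSubmitOperator(
        task_id="transform_events",
        application="/opt/jobs/transform_events.py",  # hypothetical PySpark script
        conn_id="spark_default",                      # Airflow connection to the Spark cluster
    )
```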
Role
• Architect, build, and test large-scale data solutions
• Design scalable pipelines from diverse data sources (a minimal sketch follows this list)
• Create reusable data products, libraries, and frameworks
• Optimize performance, data quality, and operational reliability
• Collaborate with cross-functional teams in a fast-paced environment
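As a hedged sketch of the pipeline and data-quality work described above, the following PySpark snippet reads a raw source, applies basic quality rules, and writes a partitioned output. The bucket paths and column names are hypothetical, not part of the posting.

```python
# Minimal PySpark pipeline sketch: ingest, enforce simple data-quality
# rules, and write a partitioned, columnar output. Paths and column
# names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_pipeline").getOrCreate()

raw = spark.read.json("s3://example-bucket/raw/events/")  # hypothetical source

clean = (
    raw.filter(F.col("event_id").isNotNull())         # drop rows missing the business key
    .dropDuplicates(["event_id"])                     # deduplicate on that key
    .withColumn("event_date", F.to_date("event_ts"))  # derive a partition column
)

clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/events/"  # hypothetical curated sink
)
```

Partitioning by date keeps downstream scans cheap, which is one common way the performance and reliability goals above show up in practice.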
Preferred
• BS/MS in Computer Science or Engineering
• Cloud experience (AWS / GCP / Azure)
• Knowledge of Terraform, Kubernetes, or other IaC/container tools