

MindSource
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer position for a long-term contract in a hybrid setting in Austin, TX. Candidates should have 6+ years of software engineering experience, strong SQL skills, and expertise in Python, Java, or Scala, along with data technologies like Airflow and Spark.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: February 17, 2026
Duration: Unknown
Location: Hybrid
Contract: Unknown
Security: Unknown
Location detailed: Austin, TX
Skills detailed: #Trino #Spark (Apache Spark) #Java #Kubernetes #Scala #Computer Science #Datasets #Libraries #Airflow #Version Control #Data Engineering #Terraform #Storage #Infrastructure as Code (IaC) #Kafka (Apache Kafka) #Data Quality #AWS (Amazon Web Services) #Azure #GCP (Google Cloud Platform) #Leadership #Cloud #SQL (Structured Query Language) #Python
Role description
Data Engineer
Hybrid – Austin, TX
Long-Term Contract
About the Role
We are seeking an experienced Data Engineer to architect, develop, and optimize large-scale data solutions that empower leadership with accurate, high-quality data to drive business decisions. This role is ideal for a self-starter who thrives in a fast-paced, collaborative environment and enjoys solving complex technical and business challenges.
Key Responsibilities
• Architect, develop, and test scalable, high-performance data solutions
• Design and implement efficient methods for consuming data from diverse and variable-quality sources
• Build and maintain data products that enable self-service analytics and predictability for consumers
• Develop reusable libraries and frameworks to enhance team productivity
• Optimize and maintain existing solutions to improve efficiency, data quality, and operational excellence
• Collaborate cross-functionally to adapt to evolving business and technical requirements
Required Qualifications
• 6+ years of software engineering experience with a strong focus on data and SQL
• Expertise in at least one of the following: Python, Java, or Scala
• Experience with data technologies such as Airflow, Spark, Trino, and Kafka
• Strong ability to analyze complex datasets and design efficient, high-quality solutions
• Solid understanding of SDLC best practices, version control systems, and CI/CD processes
Preferred Qualifications
• Bachelor's or Master's degree in Engineering, Computer Science, or a related field
• Experience with cloud platforms such as AWS, GCP, or Azure for data infrastructure and storage
• Knowledge of Infrastructure as Code tools (e.g., Terraform)
• Experience with container orchestration tools (e.g., Kubernetes)





