MindSource

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Austin, TX (Hybrid) on a long-term contract. Requires 6+ years in software engineering, proficiency in Python, Java, or Scala, and experience with Airflow, Spark, and Kafka.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
πŸ—“οΈ - Date
December 15, 2025
🕒 - Duration
Unknown
🏝️ - Location
Hybrid
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
πŸ“ - Location detailed
Austin, TX
🧠 - Skills detailed
#Azure #Computer Science #Python #Airflow #Libraries #AWS (Amazon Web Services) #Data Ingestion #Kafka (Apache Kafka) #Terraform #Data Engineering #Data Quality #Infrastructure as Code (IaC) #Java #Version Control #Trino #GCP (Google Cloud Platform) #Cloud #SQL (Structured Query Language) #Kubernetes #Scala #Spark (Apache Spark)
Role description
Job Title: Senior Data Engineer
Location: Austin, TX (Hybrid)
Duration: Long Term
Employment Type: Contract (W2 / C2C)

Job Overview
We are seeking an experienced Senior Data Engineer to architect, build, and maintain large-scale data solutions that empower business leaders with accurate, timely, and actionable insights. The ideal candidate is a self-starter who thrives in a fast-paced environment, adapts quickly to changing requirements, and collaborates effectively with cross-functional teams.

Key Responsibilities
• Architect, develop, and test scalable, high-performance data solutions
• Design and implement efficient data ingestion pipelines from diverse and variable-quality data sources
• Create data products that enable self-service analytics and predictability for consumers
• Build reusable libraries and frameworks to improve team productivity
• Optimize and maintain data solutions to improve efficiency, data quality, and operational excellence
• Collaborate with engineering, analytics, and business stakeholders

Required Qualifications
• 6+ years of software engineering experience with a strong focus on data and SQL
• Proficiency in Python, Java, or Scala
• Experience with Airflow, Spark, Trino, and Kafka
• Strong analytical skills with the ability to design efficient, high-quality data solutions
• Solid understanding of SDLC best practices, version control, and CI/CD

Preferred Qualifications
• Bachelor’s or Master’s degree in Engineering, Computer Science, or a related field
• Experience with cloud platforms such as AWS, GCP, or Azure
• Familiarity with Infrastructure as Code tools (e.g., Terraform)
• Knowledge of containerization and orchestration tools such as Kubernetes
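
For a rough sense of the orchestration work the responsibilities describe, here is a minimal, illustrative Airflow DAG. It is not part of the listing: the DAG id, task names, and schedule are hypothetical, and it assumes Airflow 2.4+.

```python
# Illustrative sketch only: an ingest-then-validate pipeline of the kind
# the posting describes. All names are hypothetical; assumes Airflow 2.4+.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest_events() -> None:
    """Land a batch of raw events from an upstream source (stubbed)."""
    # A real task might consume from a Kafka topic or trigger a Spark job.
    print("ingesting raw events...")


def check_quality() -> None:
    """Run basic data-quality checks before downstream use (stubbed)."""
    print("checking row counts and schema...")


with DAG(
    dag_id="example_events_pipeline",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@hourly",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_events", python_callable=ingest_events)
    validate = PythonOperator(task_id="check_quality", python_callable=check_quality)

    # The quality gate runs only after ingestion succeeds.
    ingest >> validate
```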