EPITEC

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 5+ years of experience, specializing in building data pipelines for automotive or time-series data formats. Contract length is unspecified, with a competitive pay rate. Key skills include Python, Databricks, and cloud platforms.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
680
-
🗓️ - Date
November 18, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Austin, Texas Metropolitan Area
-
🧠 - Skills detailed
#PySpark #Azure #Security #GCP (Google Cloud Platform) #AWS (Amazon Web Services) #Data Lake #Data Processing #Python #ETL (Extract, Transform, Load) #Cloud #Databricks #Delta Lake #Storage #Scala #Data Storage #Spark (Apache Spark) #Data Engineering #Data Pipeline #Datasets
Role description
ABOUT THE ROLE: We are seeking a highly experienced Data Engineer with a strong background in building production-grade data pipelines, working with complex automotive or time-series data formats (especially MF4/MDF4), and deploying large-scale solutions in Databricks. The ideal candidate is fluent in Python and data storage formats, and comfortable working at the intersection of data engineering and data analytics. This is a senior technical role requiring deep expertise, independence, and the ability to drive end-to-end data delivery.

Key Responsibilities
• Design and build robust data pipelines in Python to extract, transform, and load data from MF4/MDF4 files (e.g., automotive telemetry, sensor logs); an illustrative sketch of such a step follows the qualifications below.
• Architect scalable ETL/ELT workflows in Databricks, leveraging Delta Lake and cloud-native storage.
• Optimize performance and ensure reliability of pipelines handling large-scale, high-frequency time-series datasets.
• Mentor junior engineers and contribute to technical design reviews, architecture discussions, and code quality.
• Stay ahead of industry trends, data lakehouse architecture, and data workflows.

Required Qualifications
• 5+ years of experience in data engineering and software development.
• Advanced proficiency in Python, with experience in performance tuning and large-scale data processing.
• Strong experience with Databricks, Delta Lake, and Spark (PySpark or Scala).
• Demonstrated ability to design and implement high-throughput, fault-tolerant pipelines in production environments.
• Familiarity with cloud platforms (AWS, Azure, or GCP), including data storage, compute, and security best practices.
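
For candidates unfamiliar with MF4/MDF4 ingestion, here is a minimal sketch of what one pipeline step described above might look like. It is not from the posting: it assumes the open-source asammdf library for MDF4 parsing and a Databricks/Spark runtime with Delta Lake, and the paths, the vehicle_id partition column, and the mf4_to_delta helper are hypothetical placeholders.

```python
# Hypothetical sketch: extract signals from one MF4/MDF4 file and append them
# to a Delta Lake table. Assumes the asammdf library and a Spark runtime with
# Delta Lake (e.g., Databricks); paths and column names are placeholders.
import pandas as pd
from asammdf import MDF
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # available as `spark` on Databricks


def mf4_to_delta(mf4_path: str, delta_path: str, vehicle_id: str) -> None:
    """Parse one MF4 file, tag its source, and append it to a Delta table."""
    mdf = MDF(mf4_path)                      # parse the MDF4 container
    pdf: pd.DataFrame = mdf.to_dataframe()   # one column per channel, time index
    pdf.index.name = "timestamp"             # expose the time axis as a column
    pdf = pdf.reset_index()
    pdf["vehicle_id"] = vehicle_id           # illustrative partition key

    # A production pipeline would also sanitize channel names (Delta disallows
    # spaces and some punctuation) and chunk large recordings rather than
    # materializing the whole file in memory as done here.
    (spark.createDataFrame(pdf)
        .write.format("delta")
        .mode("append")
        .partitionBy("vehicle_id")
        .save(delta_path))


# Example usage with placeholder paths:
# mf4_to_delta("/dbfs/raw/drive_001.mf4", "/dbfs/delta/telemetry", "veh_001")
```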