Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Austin, TX, offering a 12+ month W2 contract. Key skills include 9+ years in Data Engineering, proficiency in SQL and Python/Scala, and experience with big data tools and cloud platforms.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
August 12, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
W2 Contractor
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Austin, Texas Metropolitan Area
-
🧠 - Skills detailed
#Python #Data Lake #GCP (Google Cloud Platform) #Data Ingestion #Snowflake #Data Warehouse #Azure #Spark (Apache Spark) #SQL (Structured Query Language) #Databases #Docker #Data Quality #Data Processing #Airflow #Kafka (Apache Kafka) #AWS (Amazon Web Services) #BigQuery #Data Engineering #Hadoop #Redshift #Data Science #PostgreSQL #ETL (Extract, Transform, Load) #Luigi #Scala #Big Data #Security #MySQL #Kubernetes #Cloud
Role description
Job Title: Senior Data Engineer (W2 Only)
Location: Austin, TX (On-site / Hybrid – as per project needs)
Duration: 12+ Months Contract
Employment Type: W2 Only (No C2C)

Job Description:
We are seeking a highly skilled Senior Data Engineer to join our team in Austin, TX. The ideal candidate will have strong experience in designing, developing, and maintaining large-scale data processing systems, ensuring the availability, reliability, and performance of our data infrastructure.

Responsibilities:
• Design and develop robust ETL/ELT pipelines for structured and unstructured data.
• Build and optimize data warehouse and data lake solutions.
• Implement and enforce data quality, governance, and security standards.
• Collaborate with Data Scientists, Analysts, and Developers to enable analytics solutions.
• Optimize data workflows for speed, scalability, and cost efficiency.
• Automate data ingestion, transformation, and delivery processes.

Required Skills & Qualifications:
• 9+ years of experience in Data Engineering.
• Proficiency in SQL and relational databases (PostgreSQL, MySQL, etc.).
• Strong coding skills in Python or Scala.
• Hands-on experience with big data tools (Spark, Hadoop, Hive).
• Experience with cloud platforms (AWS, Azure, or GCP).
• Knowledge of pipeline orchestration tools (Airflow, Luigi, etc.).

Preferred Skills:
• Streaming data processing with Kafka or Kinesis.
• Experience with Snowflake, Redshift, or BigQuery.
• Familiarity with Docker/Kubernetes.