

aKUBE
Senior Data Engineer – Data Foundations (Databricks, Snowflake, AWS)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer – Data Foundations in Los Angeles, CA, for 12 months at $96/hr C2C or $89/hr W2. Key skills include Databricks, Snowflake, AWS, advanced SQL, and Python. Requires 7+ years of data engineering experience.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
768
🗓️ - Date
May 2, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Los Angeles, CA
🧠 - Skills detailed
#ML (Machine Learning) #Scala #Jenkins #Snowflake #Data Catalog #Data Engineering #dbt (data build tool) #Python #S3 (Amazon Simple Storage Service) #Data Pipeline #GitHub #Data Science #SQL (Structured Query Language) #Big Data #Data Quality #AWS (Amazon Web Services) #Redshift #AWS S3 (Amazon Simple Storage Service) #Observability #Databricks #Datasets #Lambda (AWS Lambda) #Airflow #Data Processing #Batch #SQL Queries
Role description
Location: Los Angeles, CA
Onsite/Hybrid/Remote: Hybrid
Duration: 12 Months
Rate Range: Up to $96/hr on C2C or $89/hr on W2
Work Authorization: GC, USC, and all valid EADs; H1B, OPT, and CPT not accepted
Must Have:
• Databricks
• Snowflake
• Redshift
• AWS (S3, Glue, Lambda)
• Advanced SQL (performance tuning)
• Python
• Airflow or dbt
Responsibilities:
• Design and build scalable data pipelines for large datasets
• Develop batch and real-time data processing solutions
• Work with Databricks, Snowflake, and Redshift for data platforms
• Optimize SQL queries and improve data performance
• Build and maintain workflows using Airflow or dbt
• Ensure data quality, reliability, and governance
• Partner with data scientists to deploy ML models
• Collaborate with teams to translate business needs into data solutions
Qualifications:
• 7+ years of data engineering experience
• Strong experience with AWS data services
• Hands-on coding in Python and SQL
• Experience with distributed data systems and big data tools
• Experience building enterprise-scale data platforms
Nice to Have:
• Monte Carlo (data observability)
• Atlan (data catalog)
• CI/CD (GitHub Actions or Jenkins)
• Experience with ML or statistical modeling