Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
736
🗓️ - Date discovered
September 16, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Hybrid
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Los Angeles Metropolitan Area
🧠 - Skills detailed
#Snowflake #S3 (Amazon Simple Storage Service) #Documentation #SQL (Structured Query Language) #Big Data #Python #Spark (Apache Spark) #Data Pipeline #Agile #BigQuery #Cloud #Apache Spark #Data Science #Data Orchestration #Hadoop #HDFS (Hadoop Distributed File System) #Presto #Airflow #Databases #Java #ETL (Extract, Transform, Load) #Programming #Scrum #Data Governance #Redshift #Datasets #EC2 #Scala #AWS (Amazon Web Services) #Data Engineering #Data Warehouse #Data Modeling #PySpark
Role description
City: LA, CA
Onsite/Hybrid/Remote: Remote
Duration: 6 months
Rate Range: Up to $92.50/hr on W2, depending on experience (no C2C, 1099, or subcontracting)
Work Authorization: GC, USC, all valid EADs except OPT, CPT, and H1B
Core Skills: Expertise in big data engineering pipelines, Spark, Python, MPP databases/SQL (Snowflake), and cloud environments (AWS)
Must Have:
• Expertise in big data engineering pipelines
• Strong SQL and MPP databases (Snowflake, Redshift, or BigQuery)
• Apache Spark (PySpark, Scala, Hadoop ecosystem)
• Python/Scala/Java programming
• Cloud environments (AWS – S3, EMR, EC2)
• Data warehousing and data modeling
• Data orchestration/ETL tools (Airflow or similar)
Responsibilities:
• Design, build, and optimize large-scale data pipelines and warehousing solutions.
• Develop ETL workflows in big data environments across cloud, on-prem, or hybrid setups.
• Collaborate with data product managers, architects, and engineers to deliver scalable, reliable data solutions.
• Define data models and frameworks for data warehouses and marts supporting analytics and audience engagement.
• Maintain strong documentation practices for data governance and quality standards.
• Ensure solutions meet SLAs, run efficiently, and support analytics and data science teams.
• Contribute to Agile/Scrum processes and continuously drive team improvements.
Qualifications:
• 6+ years of experience in data engineering with large, distributed data systems.
• Strong SQL expertise and the ability to create performant datasets.
• Hands-on experience with Spark and the Hadoop ecosystem (HDFS, Hive, Presto, PySpark).
• Proficiency in Python, Scala, or Java.
• Experience with at least one major MPP or cloud database (Snowflake preferred; Redshift or BigQuery acceptable).
• Experience with orchestration tools such as Airflow.
• Strong knowledge of data modeling techniques and data warehousing best practices.
• Familiarity with Agile methodologies.
• Excellent problem-solving, analytical, and communication skills.
• Bachelor's degree in STEM required.