

Data Engineer Manager
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer Manager with 6+ years of experience, proficient in Python, Hive, SQL, and PySpark. Contract length is unspecified, with a competitive pay rate. Strong AWS skills and finance/asset management experience are required.
Country
United States
Currency
$ USD
Day rate
520
Date discovered
August 13, 2025
Project duration
Unknown
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Irvine, CA
Skills detailed
#Airflow #Data Engineering #SQL (Structured Query Language) #AWS (Amazon Web Services) #Cloud #BitBucket #Data Warehouse #Databricks #DevOps #GitHub #Python #PySpark #Redshift #Data Quality #Spark (Apache Spark) #Version Control
Role description
Experience & Core Requirements
• 6+ years of data engineering experience
• Proficiency in Python, Hive, SQL, and PySpark
• Strong experience with AWS services
• Hands-on experience integrating Databricks into existing AWS data platforms
• Expertise in real-time streaming design using Medallion Architecture
• Skilled in troubleshooting and resolving AWS platform failures
Primary Skills
• PySpark, Hive, SQL, Python
• AWS cloud environment experience
• Familiarity with Airflow and version control tools (GitHub, Bitbucket)
• Exposure to MPP data warehouses (SQL DW, Redshift)
• Strong understanding of data warehousing concepts
• Proficient with Databricks workflows and integration
• Excellent communication and client engagement skills
• Experience collaborating with offshore teams
Secondary Skills (Preferred)
• Experience building enterprise data models in the finance/asset management domain (fixed income, equities, etc.)
• DevOps exposure
• AWS Data Quality (AWS DQ) knowledge