

Senior Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Irvine, CA (Hybrid) with a contract length of "Unknown." Pay rate is "Unknown." Key skills include GCP Data Architecture, MDM, Python, and experience in AWS or Azure. Proficiency in data management and analytics is required.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
September 25, 2025
Project duration
Unknown
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
Irvine, CA
Skills detailed
#Azure #Data Engineering #AWS (Amazon Web Services) #Data Lake #MDM (Master Data Management) #Spark (Apache Spark) #Strategy #Data Management #Visualization #PySpark #GCP (Google Cloud Platform) #Data Lakehouse #Observability #Python #Logging #Data Architecture
Role description
Role: Data Engineering Lead
Location: Irvine, CA (Hybrid)
Mandatory Skills: GCP Data Architecture, MDM (conceptual), Data Architecture, Data Lakehouse Architecture, Dimensional Data Modelling
Job Description:
Engineering leader with a strong hands-on Python/PySpark platform background.
Ability to design Python/PySpark logging, troubleshooting, observability, resilience, pipeline instrumentation, and testing frameworks.
Lead the Data Engineering team to speed up product delivery; must be hands-on in data engineering.
Must have experience and proficiency in all aspects of data management and data analytics solution design and implementation.
Must be experienced in either the AWS or Azure data analytics and visualization stack; if experienced in AWS, the candidate is expected to become proficient in Azure, and vice versa.
Define the testing strategy and reusable components.
Proficiency in the asset management / asset servicing business domain is a plus.
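As a rough illustration of the "logging, observability, pipeline instrumentation" requirement, here is a minimal plain-Python sketch of stage instrumentation (the stage names, record shape, and decorator are invented for the example; a production PySpark pipeline would typically use Spark accumulators or listeners instead):

```python
import logging
import time
from functools import wraps

logging.basicConfig(format="%(asctime)s %(levelname)s %(message)s", level=logging.INFO)
log = logging.getLogger("pipeline")

def instrumented(stage_name):
    """Log input/output record counts and elapsed time for a pipeline stage."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(records):
            start = time.perf_counter()
            out = fn(records)
            log.info("stage=%s in=%d out=%d elapsed_ms=%.1f",
                     stage_name, len(records), len(out),
                     (time.perf_counter() - start) * 1000)
            return out
        return wrapper
    return decorator

@instrumented("filter_valid")
def filter_valid(records):
    # Drop records with a missing amount.
    return [r for r in records if r.get("amount") is not None]

@instrumented("enrich")
def enrich(records):
    # Hypothetical enrichment step: copy amount into an amount_usd field.
    return [{**r, "amount_usd": r["amount"]} for r in records]

rows = [{"amount": 10}, {"amount": None}, {"amount": 5}]
result = enrich(filter_valid(rows))
```

Per-stage record counts like these make it easy to spot where a pipeline silently drops data, which is the core idea behind the instrumentation the role asks for.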