CXC
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Los Angeles, California; the contract length and pay rate are unspecified. Key skills include Azure, Databricks, Python, SQL, and experience with ETL/ELT pipelines and data governance.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 14, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Los Angeles, CA
-
🧠 - Skills detailed
#Storage #Data Lake #GitHub #AI (Artificial Intelligence) #Data Warehouse #Azure Data Factory #Data Pipeline #Spark (Apache Spark) #BI (Business Intelligence) #Azure #Data Governance #Microsoft Power BI #PySpark #Tableau #Automation #Compliance #Schema Design #Terraform #Deployment #Python #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Computer Science #ML (Machine Learning) #Apache Spark #Data Security #Data Modeling #Scala #Security #ADF (Azure Data Factory) #Data Architecture #Databricks #Data Processing #Data Engineering
Role description
We are seeking an experienced Senior Data Engineer to join a dynamic, enterprise-scale environment supporting large and complex data initiatives. This role involves designing and building modern data platforms, enabling advanced analytics, and driving data-driven decision-making.
⚠️ Note: Candidates must be local to Los Angeles, California, and willing to work in a hybrid setup.
Key Responsibilities
• Design, build, and maintain scalable ETL/ELT data pipelines
• Develop robust solutions using Azure Data Factory, Data Lake, and related services
• Work with Databricks, Apache Spark, and PySpark for large-scale data processing
• Implement data modeling, schema design, and data warehousing solutions
• Collaborate with cross-functional teams to support analytics, reporting, and AI/ML initiatives
• Ensure adherence to data governance, security, and compliance standards
• Automate deployments using CI/CD pipelines, GitHub, and Terraform
• Optimize performance and troubleshoot complex data issues
Required Skills & Experience
• Strong experience with Azure ecosystem (ADF, Data Lake, Functions, Blob Storage)
• Hands-on experience with Databricks, Spark, and PySpark
• Advanced proficiency in Python and SQL
• Experience building data pipelines and data warehouse solutions
• Familiarity with BI tools (Power BI, Tableau)
• Knowledge of data governance, RBAC/ABAC, and data security practices
• Experience with CI/CD, Terraform, and infrastructure automation
Preferred Qualifications
• Experience working with AI/ML or predictive analytics solutions
• Exposure to enterprise data architecture and large-scale systems
• Strong understanding of OLAP/OLTP systems and performance tuning
• Bachelor's degree in Computer Science, IT, or related field