Pyramid Technology Solutions

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Los Angeles, CA, offering a 12-month contract at an unspecified pay rate. Key skills include Azure Data Factory, Databricks, Python, SQL, and data governance. The role requires seven years of experience applying enterprise architecture principles and five years of hands-on experience with Azure data technologies.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 13, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Los Angeles, CA
-
🧠 - Skills detailed
#Version Control #Terraform #Data Lake #Python #ML (Machine Learning) #Deployment #Data Modeling #Security #YAML (YAML Ain't Markup Language) #API (Application Programming Interface) #Data Pipeline #Storage #Big Data #Forecasting #Spark (Apache Spark) #ADF (Azure Data Factory) #Scala #Azure Databricks #Programming #ETL (Extract, Transform, Load) #Compliance #Azure Data Factory #GitHub #Data Governance #Apache Spark #Cloud #Databricks #Monitoring #Scripting #Automation #Tableau #AWS (Amazon Web Services) #Delta Lake #BI (Business Intelligence) #MLflow #Data Security #Azure #R #Schema Design #SQL (Structured Query Language) #Data Lineage #AI (Artificial Intelligence) #Microsoft Power BI #PySpark #Data Engineering
Role description
Job Role: Senior Data Engineer
Location: Los Angeles, CA (90012)
Duration: 12-Month Contract
Additional Skills Required:
Cloud Platforms: Deep understanding of the Azure ecosystem, including Azure Data Factory, Data Lake Storage, Blob Storage, Power Apps, and Azure Functions. In-depth understanding and implementation experience with API management platforms such as Apigee.
Big Data Technologies: Proficiency in Databricks, Spark, PySpark, Scala, and SQL.
Data Engineering Fundamentals: Expertise in ETL/ELT processes, data pipelines, data modeling, schema design, and data warehousing.
Programming Languages: Strong Python and SQL skills; knowledge of other languages such as Scala or R is beneficial.
Data Warehousing and Business Intelligence: Strong grasp of ERD concepts, designs, and patterns; understanding of OLAP/OLTP systems, performance tuning, database server concepts, and BI tools (Power BI, Tableau).
Data Governance: Strong understanding of RBAC/ABAC, data lineage, data leak prevention, data security, and compliance. Deep understanding and implementation knowledge of auditing and monitoring in the cloud.
Infrastructure Deployment: GitHub version control, CI/CD pipelines, release management, Terraform and YAML templates, and script-based deployments.
Additional Experience Required:
Seven (7) years of experience applying enterprise architecture principles, with at least five (5) years in a lead capacity.
Five (5) years of hands-on experience with Azure Data Factory, Azure Databricks, API implementation and management solutions, and managing Azure resources.
Five (5) years of experience in each of the following: developing data models and pipelines using Python; working with Lakehouse platforms; building GitHub CI/CD pipelines, infrastructure automation, and Terraform scripting; and working with data warehousing systems, OLAP/OLTP systems, and BI tool integration, including designing, developing, and deploying AI/ML and predictive analytics solutions using Databricks, Apache Spark, MLflow, Delta Lake, Python, and cloud platforms such as AWS or Azure.
Proven ability to build and operationalize predictive models, generative AI solutions, and enterprise-scale analytics pipelines to support forecasting, operational intelligence, and data-driven decision-making.