Aroha Technologies, Inc

Sr. Data Engineer – Snowflake

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer – Snowflake in Los Angeles, CA, on a contract basis. It requires 8–12 years of data engineering experience, 5+ years with Azure data platforms, and 3+ years with Azure Databricks. Key skills include Snowflake, SQL, and data modeling.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
496
-
🗓️ - Date
March 17, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Los Angeles, CA
-
🧠 - Skills detailed
#Scala #Snowpark #Azure DevOps #SQL Server #Synapse #Data Modeling #Microsoft Power BI #Anomaly Detection #Snowflake #Datasets #Azure Data Platforms #SQL (Structured Query Language) #Data Architecture #DevOps #Cloud #Azure Security #Azure Databricks #ETL (Extract, Transform, Load) #ML (Machine Learning) #Batch #Azure Data Factory #Databases #Azure #Data Vault #Java #Security #Data Quality #Data Pipeline #Vault #ADF (Azure Data Factory) #Data Processing #ADLS (Azure Data Lake Storage) #Azure SQL #BI (Business Intelligence) #Python #Data Engineering #Databricks #Data Ingestion
Role description
Job Title: Senior Data Engineer – Snowflake
Location: Los Angeles, CA (Onsite, 5 days per week)
Contract Type: Contract
Experience Required: 8–12 years
Positions: 1

Role Overview
We are looking for an experienced Senior Data Engineer with strong expertise in Snowflake and Azure data technologies. This role involves designing scalable data pipelines, building cloud-native data models, and ensuring high-performance data processing across the enterprise.

Required Experience
• 8–12+ years of experience in Data Engineering
• 5+ years of experience with Azure Data Platforms
• 3+ years of hands-on experience with Azure Databricks

Key Responsibilities
• Design and implement data ingestion, transformation, and orchestration pipelines on Snowflake for batch and near real-time workloads
• Build and maintain Snowflake objects such as databases, schemas, tables, views, streams, tasks, stages, and file formats
• Develop transformation logic using SQL and Snowpark (Python, Scala, or Java)
• Implement secure data sharing and governed access controls across domains
• Ingest data from Azure sources including ADLS Gen2, Azure SQL, SQL Server, Synapse, APIs, and event-driven sources
• Implement orchestration using Azure Data Factory, Synapse Pipelines, or similar tools
• Leverage Azure security components such as Key Vault, Managed Identities, Private Endpoints, and RBAC
• Build scalable analytics-ready data models (star schema, denormalized marts, or Data Vault)
• Design layered data architectures (raw, refined, curated) to support BI, reporting, self-service analytics, and machine learning
• Collaborate with BI teams to build semantic layers and optimized reporting datasets, particularly for Power BI
• Optimize Snowflake performance and query efficiency for large-scale workloads
• Implement data validation, quality checks, anomaly detection, and reconciliation frameworks
• Build CI/CD pipelines for Snowflake and data workflows using Azure DevOps or similar tooling
• Translate business requirements into scalable, high-performing technical solutions
• Collaborate with onsite and offshore teams as well as IT and project stakeholders

Required Skills
Snowflake, Azure Databricks, Azure Data Factory, ADLS Gen2, SQL, Snowpark, Azure Synapse, Data Modeling, Azure DevOps, Pipeline Orchestration, Data Quality Frameworks
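To give candidates a feel for the data validation and reconciliation duties listed above, here is a minimal sketch of a source-to-target row-count check. The helper name, counts, and tolerance are illustrative assumptions, not part of the role description; a real framework would query Snowflake metadata (e.g. via Snowpark) rather than take counts as plain integers.

```python
# Hypothetical reconciliation check: verify that a target table's row count
# stays within a percentage tolerance of the source system's row count.
def reconcile_counts(source_count: int, target_count: int,
                     tolerance_pct: float = 0.0) -> bool:
    """Return True when target_count is within tolerance_pct percent
    of source_count (exact match required when tolerance is zero)."""
    if source_count == 0:
        return target_count == 0
    drift = abs(source_count - target_count) / source_count * 100
    return drift <= tolerance_pct

# A batch load that lands every source row passes with zero tolerance.
assert reconcile_counts(1_000_000, 1_000_000)

# A near-real-time feed may tolerate small in-flight drift.
assert reconcile_counts(1_000_000, 999_500, tolerance_pct=0.1)
assert not reconcile_counts(1_000_000, 990_000, tolerance_pct=0.1)
```

In practice such checks run after each pipeline load and feed alerting, which is the "reconciliation frameworks" responsibility in the listing.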