Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (Azure) contractor in Pleasanton, CA, with a preference for on-site or hybrid work. Requires expertise in Python, Azure services, SQL, and data pipeline orchestration. Contract length and pay rate are unspecified.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 25, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Hybrid
📄 - Contract type
1099 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Pleasanton, CA
🧠 - Skills detailed
#"ETL (Extract, Transform, Load)" #ADF (Azure Data Factory) #Azure ADLS (Azure Data Lake Storage) #Programming #Azure Synapse Analytics #PySpark #Azure SQL #Azure Data Factory #Databricks #Storage #SQL (Structured Query Language) #Data Integration #Azure #Spark (Apache Spark) #PostgreSQL #Synapse #Python #Azure Cosmos DB #Cloud #Apache Spark #API (Application Programming Interface) #Azure Databricks #Azure SQL Database #DevOps #Data Lake #Data Pipeline #Databases #Azure cloud #Data Science #Scala #Data Engineering #ADLS (Azure Data Lake Storage) #Data Processing
Role description

Senior Data Engineer (Azure)

Type: Contractor (C2C / W2 / 1099)

Location: Pleasanton, CA

Availability: On-site or hybrid presence preferred

We are seeking an experienced and independent Senior Data Engineer to help design and build data systems that are clean, scalable, and reusable. If you enjoy working with modern data tools, solving real-world data problems, and collaborating with a capable and supportive team, we’d love to connect.

What You’ll Do

   • Build modular and scalable data pipelines that integrate data from multiple sources

   • Analyze raw data and organize it based on business requirements

   • Collaborate with data scientists, engineers, and application teams to design effective solutions

   • Ensure that systems are reliable, reusable, and aligned with best practices

   • Identify and address data gaps, quality issues, or discrepancies proactively

   • Contribute to architectural discussions with a strong understanding of cloud-native patterns

What We’re Looking For

   • Strong foundation in cloud-native application architecture

   • Hands-on experience with both relational and non-relational databases, including understanding their trade-offs and design considerations

   • Familiarity with API design and implementation

   • Comfortable working independently and communicating clearly in a collaborative environment

   • A solid grasp of software engineering principles, clean code practices, and system design

Technical Skills

Python and Data Engineering

   • Proficient in Python, with a strong understanding of object-oriented programming, data structures, and algorithms

   • Experience with Apache Spark and PySpark, especially using Azure Databricks for large-scale data processing

Azure Cloud Platform

   • Practical experience with Azure services including Function Apps, Blob Storage, DevOps, Networking, and Access Control

   • Familiarity with Azure data services such as:

        • Azure SQL Database

        • Azure Synapse Analytics

        • Azure Cosmos DB

        • Azure Database for PostgreSQL

        • Azure Data Lake Storage

        • Azure Data Factory (ADF) for orchestration and data integration

Database and SQL

   • Strong SQL skills, including querying, transforming, and managing data in relational databases

Workflow and Orchestration

   • Familiarity with scheduling and orchestrating data pipelines using workflow tools