

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown," offering a pay rate of "$XX/hour." Key skills include Azure Data Services, SQL, Python, and PySpark. Experience with data warehousing and ETL is required.
Country: United States
Currency: $ USD
Day rate: $576
Date discovered: June 3, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Sacramento, CA
Skills detailed: #Data Warehouse #Spark (Apache Spark) #Indexing #Oracle #SQL Server #Synapse #Microservices #Visualization #Data Ingestion #Data Security #DevOps #Data Modeling #BI (Business Intelligence) #Storage #Data Lake #Data Pipeline #Azure DevOps #Database Design #ETL (Extract, Transform, Load) #Physical Data Model #Databases #SQL (Structured Query Language) #Agile #Python #Compliance #Data Governance #Data Quality #Jenkins #Scala #Data Engineering #Code Reviews #Azure #NoSQL #Security #PySpark
Role description
JOB DESCRIPTION
Data Pipeline Development
- Design and implement complex, scalable ETL pipelines using Azure Fabric, Data Factory, and Synapse.
- Build and maintain transformation pipelines and data flows in Azure Fabric.
- Source data from diverse systems including APIs, legacy systems, and mainframes (nice to have).
- Automate data ingestion, transformation, and validation processes using PySpark and Python (see the illustrative sketch after this description).
- Maintain source control hygiene and CI/CD pipelines (e.g., Azure DevOps, Jenkins).
Database Design & Optimization
- Design and maintain relational and NoSQL databases (SQL Server, Oracle, etc.).
- Ensure referential integrity, indexing, and query optimization for performance.
Data Infrastructure Management
- Manage data warehouses, data lakes, and other storage solutions on Azure.
- Monitor system performance, ensure data security, and maintain compliance.
Data Modeling & Governance
- Develop and maintain logical and physical data models.
- Implement data governance policies and ensure data quality standards.
Collaboration & Agile Development
- Work closely with business and technical teams to gather requirements and deliver solutions.
- Participate in agile ceremonies, sprint planning, and code reviews.
- Provide technical guidance and mentorship to team members.
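To illustrate the ingest-transform-validate-load responsibilities above, here is a minimal PySpark sketch. It is not part of the posting: the landing and curated paths, the orders dataset, and the column names are hypothetical placeholders chosen only to show the shape of such a pipeline.

from pyspark.sql import SparkSession, functions as F

# Illustrative sketch only: paths, dataset name, and columns are hypothetical placeholders.
spark = SparkSession.builder.appName("orders_ingestion").getOrCreate()

# Ingest: read raw files dropped in a landing zone (e.g., by Azure Data Factory).
raw = spark.read.option("header", "true").csv("/landing/orders/*.csv")

# Transform: cast types, standardize dates, and drop rows missing the key.
orders = (
    raw.withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull())
)

# Validate: a simple data-quality gate before publishing downstream.
bad_rows = orders.filter(F.col("amount").isNull() | (F.col("amount") < 0)).count()
if bad_rows:
    raise ValueError(f"Validation failed: {bad_rows} rows with missing or negative amount")

# Load: write curated output to a data lake zone, partitioned by date.
orders.write.mode("overwrite").partitionBy("order_date").parquet("/curated/orders/")

In a production setting this job would typically be parameterized and triggered from an orchestrator such as Data Factory or an Azure DevOps pipeline rather than run ad hoc.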
REQUIRED SKILLS AND EXPERIENCE
- Proven experience with Azure Data Services, especially Azure Fabric for data flows and transformation pipelines.
- Strong proficiency in SQL, Python, and PySpark.
- Experience with data warehousing, ETL/ELT, and data modeling.
- Familiarity with CI/CD, DevOps, and microservices architecture.
- Experience with relational (SQL Server, Oracle) and NoSQL databases.
- Strong analytical and problem-solving skills.
- Excellent communication skills, both written and verbal.
NICE TO HAVE SKILLS AND EXPERIENCE
- Experience integrating data from mainframes or other legacy systems.
- Familiarity with mock data generation, data validation frameworks, and data quality tools (see the sketch below).
- Exposure to data visualization and BI tools for delivering insights.
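As a rough illustration of the mock-data and data-quality items above, the following PySpark sketch generates a tiny synthetic dataset and runs a simple completeness check. The schema, values, and 10% null threshold are assumptions for the example, not requirements from the posting.

from pyspark.sql import SparkSession, functions as F

# Illustrative sketch only: schema, sample values, and thresholds are hypothetical.
spark = SparkSession.builder.appName("mock_data_checks").getOrCreate()

# Mock data generation: a small synthetic dataset for exercising a pipeline.
# The rows intentionally include nulls so the quality gate below fires.
mock = spark.createDataFrame(
    [(1, "2025-06-01", 120.0), (2, "2025-06-02", None), (3, None, 75.5)],
    ["order_id", "order_date", "amount"],
)

# Data quality check: null ratio per column.
total = mock.count()
null_ratios = {c: mock.filter(F.col(c).isNull()).count() / total for c in mock.columns}

# Fail fast if any column exceeds the assumed 10% null threshold.
violations = {c: r for c, r in null_ratios.items() if r > 0.10}
if violations:
    raise ValueError(f"Data quality check failed: {violations}")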