

CloudIngest
Azure Cosmos Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for an Azure Cosmos Data Engineer on a 6-12 month remote contract at $50/hr. Requires strong experience in Azure Cosmos DB, Azure Synapse Analytics, and building end-to-end data pipelines. Must have international data mapping expertise.
Country: United States
Currency: $ USD
Day rate: 400
Date: April 7, 2026
Duration: More than 6 months
Location: Remote
Contract: W2 Contractor
Security: Unknown
Location detailed: United States
Skills detailed: #Database Design #Programming #Data Modeling #Big Data #Databricks #Synapse #Databases #Data Engineering #Apache Spark #Azure Cosmos DB #Data Conversion #GIT #Data Lake #Database Management #Normalization #Azure Data Factory #Data Processing #Oracle #Datasets #Monitoring #Spark (Apache Spark) #SQL Server #Azure #Replication #Version Control #PySpark #Data Pipeline #Data Architecture #ETL (Extract, Transform, Load) #Delta Lake #Data Integration #NoSQL #Cloud #Schema Design #Python #Data Mapping #ADF (Azure Data Factory) #SQL (Structured Query Language) #Azure Synapse Analytics
Role description
Job Title: Azure Cosmos Data Engineer
Location: Remote
Duration: 6-12 month contract with multiple extensions
Rate: $50/hr. on W2 (NO C2C)
Azure Cosmos Data Engineer
• Expertise in Azure Synapse and Cosmos DB for this role.
• Experience in international data mapping, data conversion, and setup is preferred.
• Azure Synapse Analytics: Data integration, big data processing.
• Databricks: Apache Spark, Delta Lake, notebooks.
• SQL: Advanced querying, performance optimization, database management.
• PySpark: ETL processes, data transformation, performance tuning (a minimal sketch follows this list).
• Data Pipelines: Design, development, orchestration, monitoring.
• Data Modeling & Database Design: Schema design, normalization/denormalization.
• NoSQL Database: Azure Cosmos DB.
• Relational Databases: SQL Server and Oracle.
• ETL/ELT Tools: Azure Data Factory.
• Version Control Systems: Git proficiency.
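As a rough illustration of the Databricks, Delta Lake, and PySpark work described above (the storage paths, schema, and column names are hypothetical placeholders, not details from this posting), a minimal PySpark ETL job of that shape might look like this:

```python
# Minimal PySpark ETL sketch: read raw files from a data lake, apply a simple
# transformation, and write the result as a Delta table (Delta support is
# built into Databricks). All paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV landed in the data lake (placeholder path)
raw = (spark.read
            .option("header", True)
            .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

# Transform: normalize country codes, cast amounts, drop malformed rows
clean = (raw.withColumn("country", F.upper(F.col("country")))
            .withColumn("amount", F.col("amount").cast("double"))
            .dropna(subset=["order_id", "country", "amount"]))

# Load: write as a Delta table, partitioned by country for downstream analytics
(clean.write
      .format("delta")
      .mode("overwrite")
      .partitionBy("country")
      .save("abfss://curated@examplelake.dfs.core.windows.net/orders_delta/"))
```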
Must-Have Experience
• Strong experience building end-to-end data pipelines (ETL/ELT)
• Hands-on with cloud platforms (Azure preferred)
• Experience working on enterprise/global applications with large datasets
Core Technical Skills (High Priority)
• Strong hands-on experience with Azure Cosmos DB (required; minimum 2 years; see the SDK sketch after this list)
• Experience with Azure Synapse Analytics (required) for analytics and data warehousing
• Experience with Azure Data Factory, Databricks (PySpark), and Data Lake
• Strong programming in Python, SQL, and PySpark
• Good understanding of SQL and NoSQL databases
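For the Cosmos DB and Python skills above, a hypothetical sketch of basic reads and writes with the azure-cosmos Python SDK is shown below; the account endpoint, key, database, container, and item fields are illustrative placeholders rather than details from this posting:

```python
# Hypothetical sketch of basic Azure Cosmos DB usage with the azure-cosmos SDK.
# Endpoint, key, database, container, and item fields are placeholders.
from azure.cosmos import CosmosClient

client = CosmosClient(
    "https://example-account.documents.azure.com:443/",
    credential="<account-key>",
)
database = client.get_database_client("sales")
container = database.get_container_client("orders")

# Upsert a document (items are JSON; "country" is assumed to be the partition key)
container.upsert_item({
    "id": "order-1001",
    "country": "US",
    "amount": 125.50,
})

# Query with the Cosmos DB SQL API, fanned out across partitions
items = container.query_items(
    query="SELECT c.id, c.amount FROM c WHERE c.country = @country",
    parameters=[{"name": "@country", "value": "US"}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item)
```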
Data Architecture & Global Use Case Experience
• Experience in international data mapping / data standardization across regions
• Exposure to distributed systems and multi-region data processing
• Understanding of data partitioning, replication, and performance optimization (see the partition-key sketch after this list)
• Experience supporting global applications / enterprise-wide analytics platforms
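To make the partitioning and multi-region points above concrete, the hypothetical sketch below creates a Cosmos DB container with an explicit partition key and a client that prefers specific read regions; the account, partition key path, throughput, and region list are assumptions for illustration only:

```python
# Hypothetical sketch: partition-key design and multi-region reads in Cosmos DB.
# The account, database, partition key path, throughput, and regions are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

# preferred_locations steers reads toward the listed replicated regions
client = CosmosClient(
    "https://example-account.documents.azure.com:443/",
    credential="<account-key>",
    preferred_locations=["West Europe", "East US"],
)

database = client.create_database_if_not_exists(id="sales")

# A high-cardinality, evenly distributed partition key (here /region) spreads
# storage and throughput across physical partitions.
container = database.create_container_if_not_exists(
    id="orders",
    partition_key=PartitionKey(path="/region"),
    offer_throughput=400,
)
```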
Soft Skills / Functional Fit
• Strong collaboration with cross-functional and global teams
• Ability to translate business requirements into data solutions
• Experience working in fast-paced, high-impact environments
• Ownership mindset (handling large-scale, business-critical pipelines)






