

Sadup Softech
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 3-6 years of experience, focusing on Azure data engineering technologies. Contract length and pay rate are unspecified. Key skills include Azure Databricks, PySpark, and Azure Synapse Analytics.
🌎 - Country
United States
💱 - Currency
Unknown
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 25, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#PySpark #Synapse #Snowflake #SQL (Structured Query Language) #Database Systems #Teradata #Data Modeling #Azure Data Factory #JSON (JavaScript Object Notation) #Datasets #Databricks #Azure Synapse Analytics #Azure Databricks #ETL (Extract, Transform, Load) #Data Engineering #Agile #ADF (Azure Data Factory) #Python #Scala #Documentation #Data Pipeline #Spark (Apache Spark) #Azure #Deployment #DevOps #Azure DevOps #ADLS (Azure Data Lake Storage)
Role description
USA
Contract
Experience: 3-6 years
Overview:
We are seeking an experienced Data Engineer to join our team in a Developer role. This role focuses on enhancing and supporting existing Data & Analytics solutions by leveraging Azure data engineering technologies. The engineer will develop, maintain, and deploy IT products and solutions that serve various business users, with a strong emphasis on performance, scalability, and reliability.
Must-Have Skills:
Azure Databricks
PySpark
Azure Synapse Analytics
Key Responsibilities:
Design, develop, and support data pipelines and solutions using Azure data engineering services.
Implement data flow and ETL techniques leveraging Azure Data Factory, Databricks, and Synapse.
Cleanse, transform, and enrich datasets using Databricks notebooks and PySpark.
Orchestrate and automate workflows across services and systems.
Collaborate with business and technical teams to deliver robust and scalable data solutions.
Work in a support role to resolve incidents, handle change/service requests, and monitor performance.
Contribute to CI/CD pipeline implementation using Azure DevOps.
Technical Requirements:
3 to 6 years of experience in IT and Azure data engineering technologies.
Strong experience in Azure Databricks, Azure Synapse, and ADLS Gen2.
Proficient in Python, PySpark, and SQL.
Experience with file formats such as JSON and Parquet.
Working knowledge of database systems, with a preference for Teradata and Snowflake.
Hands-on experience with Azure DevOps and CI/CD pipeline deployments.
Understanding of Data Warehousing concepts and data modeling best practices.
Familiarity with SNOW (ServiceNow) for incident and change management.
Non-Technical Requirements:
Ability to work independently and collaboratively in virtual teams across geographies.
Strong analytical and problem-solving skills.
Experience in Agile development practices, including estimation, testing, and deployment.
Effective task and time management with the ability to prioritize under pressure.
Clear communication and documentation skills for project updates and technical processes.
Interested candidates, please send your CV to hr@sadupsoft.com
