

HYR Global Source Inc
Data Engineer - Fully Remote
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (W2, fully remote) focusing on designing and maintaining data pipelines. Key skills include Python, SQL, Apache Spark, Databricks, Azure Data Factory, and Kafka. No H1B, CPT, or OPT candidates.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 11, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Python #Programming #Data Quality #Azure #Databricks #Batch #Azure Data Factory #Data Processing #Spark (Apache Spark) #Data Engineering #Apache Spark #Scala #ETL (Extract, Transform, Load) #ADF (Azure Data Factory) #SQL (Structured Query Language) #Data Pipeline #Kafka (Apache Kafka) #Data Modeling #Cloud
Role description
Job Title: Data Engineer
Location: Remote (US-based)
Job Type: W2
Note: No H1B, CPT, or OPT candidates.
Job Overview
We are seeking a skilled Data Engineer to design, build, and maintain scalable data pipelines and platforms. The ideal candidate has strong experience working with modern data processing frameworks, cloud-based data tools, and real-time streaming systems. You’ll collaborate closely with analytics, engineering, and business teams to deliver reliable, high-quality data solutions.
Key Responsibilities
• Design, develop, and maintain robust data pipelines using Python and SQL
• Build and optimize batch and streaming data processing solutions using Apache Spark
• Develop and manage data workflows in Databricks
• Implement and orchestrate ETL/ELT pipelines using Azure Data Factory
• Work with Kafka to support real-time data streaming and event-driven architectures
• Ensure data quality, reliability, and performance across data systems
• Collaborate with cross-functional teams to understand data requirements and deliver solutions
• Monitor, troubleshoot, and optimize existing data pipelines and infrastructure
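As an illustrative sketch of the Python + SQL batch pipeline work described above (table and column names are hypothetical, and SQLite stands in for a real warehouse):

```python
import sqlite3

def run_batch_etl(conn):
    """One hypothetical extract-transform-load step."""
    cur = conn.cursor()
    # Extract: read raw event rows
    rows = cur.execute("SELECT user_id, amount FROM raw_events").fetchall()
    # Transform: drop null/negative amounts and normalize dollars to cents
    clean = [(uid, int(round(amt * 100)))
             for uid, amt in rows
             if amt is not None and amt >= 0]
    # Load: write into the curated table inside one transaction
    cur.execute("CREATE TABLE IF NOT EXISTS events_clean "
                "(user_id TEXT, amount_cents INTEGER)")
    cur.executemany("INSERT INTO events_clean VALUES (?, ?)", clean)
    conn.commit()
    return len(clean)

# Demo run against an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id TEXT, amount REAL)")
conn.executemany("INSERT INTO raw_events VALUES (?, ?)",
                 [("a", 1.25), ("b", None), ("c", -3.0), ("d", 2.0)])
loaded = run_batch_etl(conn)
```

In production, the same extract-transform-load shape would typically run on Spark or Databricks, with Azure Data Factory handling orchestration.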
Required Skills & Qualifications
• Strong programming experience in Python and SQL
• Hands-on experience with Apache Spark and distributed data processing
• Proficiency with Databricks for data engineering workloads
• Experience building data pipelines using Azure Data Factory
• Knowledge of real-time data streaming using Kafka
• Solid understanding of data modeling, ETL/ELT concepts, and data warehousing
• Experience working in cloud-based data environments
• Strong problem-solving and communication skills
Follow us on LinkedIn to get more job notifications: https://www.linkedin.com/company/hyr-global-source-inc






