JMD Technologies Inc.
Data Engineer (Databricks, Spark, ETL)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer II contract position focused on designing scalable ETL pipelines using Python and Apache Spark. Requires 5–10 years of experience, a Bachelor's degree in a related field, and expertise in data modeling and distributed systems. Remote work available.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
April 23, 2026
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Distributed Computing #Data Engineering #Computer Science #Scala #Apache Spark #Spark (Apache Spark) #Data Pipeline #Python #Databricks #"ETL (Extract, Transform, Load)" #Data Modeling #Data Processing #R
Role description
Title: Data Engineer II
Location: Remote (EST/CST Support)
Employment Type: Contract
Status: Accepting Candidates
About the role
Seeking a Data Engineer to design and optimize scalable data pipelines and support large-scale data processing initiatives. This role focuses on building efficient ETL workflows and leveraging distributed systems to drive data reliability and performance.
Key Responsibilities
• Develop and maintain ETL pipelines for large-scale data processing
• Build solutions using Python and Apache Spark (see the sketch at the end of this description)
• Implement and optimize data models and distributed computing frameworks
• Apply best practices in data structures, algorithms, and design patterns
• Collaborate with cross-functional teams to support data-driven initiatives
Qualifications
• 5–10 years of experience in data engineering or a related field
• Strong expertise in Python, Apache Spark, and R
• Hands-on experience with ETL processes, data modeling, and distributed systems
• Solid understanding of software design patterns and algorithms
• Bachelor’s degree in Computer Science, Computer Engineering, or a related field
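To give a concrete sense of the work described above, here is a minimal sketch of the kind of PySpark ETL job this role involves. It is illustrative only: the input and output paths, the column names (event_id, event_ts), and the cleanup rules are assumptions for demonstration, not details taken from the posting.

```python
# Illustrative PySpark ETL sketch -- paths, columns, and cleanup rules
# are assumptions for demonstration, not details from the posting.
from pyspark.sql import SparkSession, functions as F

def run_pipeline(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Extract: read raw JSON events. A production job would pin an
    # explicit schema rather than rely on inference.
    raw = spark.read.json(input_path)

    # Transform: deduplicate on the event key, parse the timestamp,
    # drop unparseable rows, and derive a date column for partitioning.
    cleaned = (
        raw.dropDuplicates(["event_id"])
           .withColumn("event_ts", F.to_timestamp("event_ts"))
           .filter(F.col("event_ts").isNotNull())
           .withColumn("event_date", F.to_date("event_ts"))
    )

    # Load: write date-partitioned Parquet for downstream consumers.
    cleaned.write.mode("overwrite").partitionBy("event_date").parquet(output_path)

if __name__ == "__main__":
    run_pipeline("s3://example-bucket/raw/events/", "s3://example-bucket/curated/events/")
```

On Databricks, which the posting tags as a skill, the same pattern would typically write to a Delta table (.format("delta").saveAsTable(...)) rather than plain Parquet.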