

HireOn Tech
Data Engineer – SparkFlow Framework
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer specializing in the SparkFlow Framework, located in Charlotte, NC (Hybrid). The contract lasts 12 months+, with a focus on Apache Spark, API design, and enterprise data ecosystems. W2 candidates only.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 10, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Cloud #API (Application Programming Interface) #Storage #GCP (Google Cloud Platform) #Python #JSON (JavaScript Object Notation) #SQL (Structured Query Language) #Metadata #Scala #Hadoop #Apache Spark #AI (Artificial Intelligence) #Spark SQL #Data Processing #GIT #Data Engineering #Deployment #ETL (Extract, Transform, Load) #Observability #Libraries #Spark (Apache Spark) #Code Reviews #Kafka (Apache Kafka) #Java #Integration Testing #Automation
Role description
W2 candidates only.
Job Title: Data Engineer – SparkFlow Framework
Location: Charlotte, NC (Hybrid)
Duration: 12 Months+
Role Summary:
We are seeking a Senior Software Engineer to contribute to SparkFlow, an enterprise data processing framework built on Apache Spark. This engineer will implement new functional framework features, strengthen existing components, improve developer ergonomics, and help deliver AI-enabled capabilities that make the framework easier to use and operate. The role will also support integrating SparkFlow into the Unity control plane by building and hardening the required interfaces and workflows.
Key Responsibilities
• Build and enhance new functional features in the SparkFlow framework (sources/targets, transformations, governance/controls, reliability features).
• Implement and refine framework extension points (APIs, configs, libraries) to improve composability and reuse.
• Improve developer experience: simplify configuration patterns (e.g., pipeline JSON/configs), reduce onboarding friction, and improve diagnostics/observability hooks.
• Develop AI-enabled solutions that assist developers (e.g., guided config generation, validation, troubleshooting accelerators) and improve framework usability.
• Contribute to Unity control plane integration work: implement adapters/operators, automation, and integration testing for consistent orchestration.
• Participate in code reviews, design discussions, and on-call/operational support as needed.
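To give candidates a feel for the configuration-validation work described above, here is a minimal sketch. SparkFlow's actual config schema is not public, so the field names (`name`, `source`, `target`, `transformations`) and the function itself are purely hypothetical illustrations of config-driven pipeline validation:

```python
import json

# Hypothetical required fields; SparkFlow's real schema may differ.
REQUIRED_FIELDS = {"name", "source", "target"}

def validate_pipeline_config(raw: str) -> list[str]:
    """Return a list of human-readable problems found in a pipeline JSON config."""
    problems = []
    try:
        config = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    missing = REQUIRED_FIELDS - config.keys()
    problems.extend(f"missing required field: {field}" for field in sorted(missing))
    if not isinstance(config.get("transformations", []), list):
        problems.append("'transformations' must be a list")
    return problems

example = '{"name": "orders_daily", "source": "kafka://orders", "target": "hive://dw.orders"}'
print(validate_pipeline_config(example))  # → []
```

Early, specific validation errors like these are one way to reduce the onboarding friction the role targets.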
Required Experience
• Strong hands-on engineering experience with Apache Spark (Scala and/or Java; Python a plus), including Spark SQL–based processing.
• Experience building frameworks/libraries (not just applications), including API and abstraction design.
• Working knowledge of CI/CD and engineering fundamentals (Git, build tooling, unit/integration testing).
• Experience with enterprise data ecosystem components (e.g., Hadoop/Hive, Kafka, cloud storage/warehouse patterns) and production hardening.
Nice to Have
• Experience improving pipeline onboarding and deployment patterns (config-driven artifacts, launcher scripts, scheduler integration).
• Familiarity with governance capabilities such as audit trail capture, metadata/lineage integration, and data-in-motion controls.
• Cloud/hybrid experience (e.g., GCP Dataproc patterns) supporting Spark workloads.
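As a rough illustration of the "config-driven artifacts, launcher scripts" item above, a launcher might assemble a `spark-submit` invocation from a pipeline name and config path. The class name, jar, and paths below are invented for this sketch, and the command is printed rather than executed:

```shell
#!/bin/sh
# Hypothetical SparkFlow launcher sketch: build (but do not run) a
# spark-submit command from a pipeline name and a JSON config path.
PIPELINE_NAME="${1:-orders_daily}"
CONFIG_PATH="${2:-configs/orders_daily.json}"

CMD="spark-submit --deploy-mode cluster \
  --class com.example.sparkflow.Launcher sparkflow-app.jar \
  --pipeline ${PIPELINE_NAME} --config ${CONFIG_PATH}"

# Print the assembled command so a scheduler wrapper can inspect or run it.
echo "$CMD"
```

Keeping the launcher as a thin, printable wrapper makes scheduler integration and dry-run testing straightforward.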
--------
Thanks.
Regards,
Ashish
Email ID: ashish@Hireontech.com






