OM Housing

Hadoop (Spark) Developer - MN Locals - H1, GC, and Citizens

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Hadoop (Spark) Developer in Minneapolis, MN, offering a contract of over 6 months at a competitive pay rate. Candidates must have 5-8 years of data engineering experience, preferably in financial services, and strong skills in Hadoop, Spark, and SQL.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 14, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Minneapolis, MN
-
🧠 - Skills detailed
#Data Storage #Databricks #Security #Big Data #Business Analysis #GDPR (General Data Protection Regulation) #Java #CRM (Customer Relationship Management) #Azure Databricks #Azure #HBase #Azure DevOps #Programming #Cloud #Jenkins #Data Governance #Spark (Apache Spark) #AWS EMR (Amazon Elastic MapReduce) #GCP (Google Cloud Platform) #Data Lake #Data Ingestion #Datasets #SQL (Structured Query Language) #Kafka (Apache Kafka) #Data Science #Storage #Hadoop #PySpark #Compliance #Data Engineering #Apache Spark #Data Pipeline #Data Integration #ETL (Extract, Transform, Load) #Computer Science #Python #DevOps #AWS (Amazon Web Services) #Sqoop (Apache Sqoop) #Scala
Role description
🚀 Now Hiring: Hadoop/Spark Developer | Hybrid – Minneapolis, MN
📍 Location: Minneapolis, MN (Hybrid – Local Candidates Only)
💼 Business Line: Financial Services / Life Insurance
🧾 Employment Type: Full-Time
🛂 Work Authorization: USC / GC / H1B
Interested? Apply today or send your updated resume to 📧 charneet@omtechllc.com. Let’s connect and discuss how this opportunity aligns with your career goals!

About the Role
We’re seeking an experienced Hadoop/Spark Developer to join our client’s dynamic data engineering team. This role involves building and optimizing scalable data pipelines, enabling advanced analytics, and supporting enterprise data lake initiatives within the financial services domain.

Key Responsibilities
🔹 Data Engineering & Pipeline Development
• Design and develop high-performance data ingestion and transformation pipelines using Apache Spark, Hadoop, Hive, and Kafka (a minimal PySpark sketch of this pattern follows the description below).
• Build scalable ETL/ELT frameworks for analytical and operational data systems.
• Ensure efficient data storage, partitioning, and query performance for large datasets.
🔹 Data Integration & Quality Management
• Integrate data from multiple systems (policy, claims, CRM, and financial).
• Implement robust data validation and cleansing processes.
• Ensure compliance with SOX, NAIC, and GDPR standards.
🔹 Collaboration & Cloud Modernization
• Work with data scientists, business analysts, and cloud teams to modernize on-prem Hadoop clusters to cloud-based ecosystems (AWS, Azure, or GCP).
• Support enterprise data lake and real-time analytics initiatives.
🔹 Performance Tuning & Optimization
• Optimize Spark jobs, monitor cluster performance, and resolve data workflow issues (see the tuning sketch below).
• Participate in root-cause analysis and implement long-term data solutions.

Required Qualifications
✅ Bachelor’s degree in Computer Science, Engineering, or a related field.
✅ 5–8 years of experience in Data Engineering or Big Data development (financial or insurance domain preferred).
✅ Strong hands-on experience with Hadoop, Spark (PySpark/Scala), Hive, HBase, Kafka, Sqoop, and Oozie.
✅ Solid proficiency in SQL and at least one programming language (Python, Java, or Scala).
✅ Familiarity with ETL pipelines, data lakes, and distributed systems.

Preferred Skills
⭐ Experience with Azure Databricks, AWS EMR, or GCP Dataproc.
⭐ Understanding of data governance, compliance, and lineage.
⭐ Knowledge of security frameworks (OAuth2, SSL/TLS).
⭐ Exposure to CI/CD pipelines (Jenkins, Azure DevOps).

Why Join
✨ Work with a leading financial services client driving innovation in data modernization.
🌐 Contribute to cutting-edge big data and cloud transformation projects.
🤝 Collaborate with talented engineers, analysts, and data scientists in a growth-oriented environment.
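For candidates gauging fit, the "Data Engineering & Pipeline Development" bullets describe a pattern like the minimal PySpark sketch below: ingest a raw table, validate and cleanse it, then write it partitioned for efficient querying. The database, table, column, and path names here are hypothetical placeholders, not the client's actual schema.

```python
# Illustrative sketch only: a minimal batch ingest -> validate -> write
# pipeline. All names (raw_db.policy_events, /data/curated/...) are
# hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("policy-events-ingest")   # hypothetical job name
    .enableHiveSupport()               # allows reading Hive-managed tables
    .getOrCreate()
)

# Ingest: read a raw Hive table (assumed schema with policy_id, event_ts).
raw = spark.table("raw_db.policy_events")

# Cleanse/validate: keep records with required keys, derive a partition
# column, and quarantine rejects for later review.
required = ["policy_id", "event_ts"]
valid = raw.dropna(subset=required).withColumn(
    "event_date", F.to_date("event_ts")
)
rejects = raw.filter(
    F.col("policy_id").isNull() | F.col("event_ts").isNull()
)

# Write: partition by date so downstream queries can prune partitions.
(valid.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("/data/curated/policy_events"))

(rejects.write
        .mode("append")
        .parquet("/data/quarantine/policy_events"))
```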
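Similarly, the "Performance Tuning & Optimization" work typically revolves around levers like the ones sketched here. The config values and paths are illustrative assumptions to adapt per workload, not settings from any specific cluster.

```python
# Illustrative sketch only: common Spark tuning levers. Values and paths
# are example assumptions, not recommendations for a particular cluster.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder
    .appName("tuning-example")
    # Adaptive Query Execution coalesces shuffle partitions at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    # Baseline shuffle parallelism before AQE adjusts it.
    .config("spark.sql.shuffle.partitions", "400")
    .getOrCreate()
)

facts = spark.read.parquet("/data/curated/policy_events")  # hypothetical path
dims = spark.read.parquet("/data/curated/policy_dim")      # hypothetical path

# Broadcasting a small dimension table avoids shuffling the large side.
joined = facts.join(broadcast(dims), "policy_id")

# explain() prints the physical plan, the usual starting point when
# diagnosing a slow job.
joined.explain()
```

Adaptive Query Execution (standard since Spark 3.x) and broadcast joins are typical first steps before deeper profiling of executor memory and skew.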