Jobs via Dice

Big Data Engineer : Columbus, Ohio : Long-Term Contract

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Engineer in Columbus, Ohio, on a long-term contract. Key skills include strong Java and Python coding, Hadoop ecosystem expertise, Spark, PySpark, AWS, and experience in Agile environments.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 3, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Columbus, OH
-
🧠 - Skills detailed
#Security #Storage #Java #Documentation #HDFS (Hadoop Distributed File System) #Agile #Greenplum #"ETL (Extract, Transform, Load)" #Spark (Apache Spark) #Compliance #Data Pipeline #Data Processing #Scala #Data Engineering #Datasets #Data Quality #Python #Monitoring #PySpark #Big Data #AWS (Amazon Web Services) #Databases #Hadoop #Kafka (Apache Kafka)
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, HAN IT Staffing Inc., is seeking the following. Apply via Dice today! Hi, I'd like you to take a look at a great new position we now have available! Your opinion would be valued with regards to this opening. I really appreciate sending an updated resume & the best number with the best time to reach you. Role: BigData Engineer Location: Columbus, Ohio Duration: Long-Term contract Job Description Note: Look for a Strong Big Data Engineer , with Very Strong Java Coding and a little in Python. also look for good Spark, Pyspark and AWS and Very Strong Agile Environment • Strong work experience - Agile environment preferred Data Engineer (Big Data – Hadoop, Green Plum, etc. , Data Owner). • Designs and builds scalable data pipelines, integrates diverse sources, and optimizes storage/processing using Hadoop ecosystem and Greenplum. Ensures data quality, security, and compliance through governance frameworks. • Implements orchestration, monitoring, and performance tuning for reliable, cost-efficient operations. • Expertise in Hadoop ecosystem (HDFS, Hive, Spark, Kafka) and MPP databases like Greenplum for large-scale data processing and optimization. Collaborates with Data Owners and stakeholders to translate business rules into technical solutions. • Delivers curated datasets, lineage, and documentation aligned with SLAs and regulatory standards. • Subject mater expert having experiance of interacting with client, understanding the requirement and guiding the team. • Documenting the requirements crlearly with defined scope and must play a anchor role in setting the reight expectations and delivering as per the schedule. • Design and develop scalable data pipelines using Hadoop ecosystem and Greenplum for ingestion, transformation, and storage of large datasets. 
• Optimizes data models and queries for performance and reliability, ensuring compliance with security and governance standards. Implements data quality checks, monitoring, and orchestration workflows for timely delivery.

--
Thanks & Regards,
HAN IT STAFFING
Chigiri Rohith Reddy
Sr. Technical Recruiter
100 Wood Ave S, Suite 102, Iselin, NJ 08830