Jobs via Dice

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract duration of 12-18 months, located in Jacksonville, FL; Chicago, IL; Denver, CO; or Charlotte, NC. Pay is $65-$68 per hour. Requires strong SQL skills, Spark experience, and data ingestion expertise.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 10, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Jacksonville, FL
-
🧠 - Skills detailed
#PHP #Sqoop (Apache Sqoop) #REST (Representational State Transfer) #Python #API (Application Programming Interface) #Cloud #REST API #JSON (JavaScript Object Notation) #XML (eXtensible Markup Language) #Data Ingestion #Spark (Apache Spark) #Data Integrity #Data Pipeline #ETL (Extract, Transform, Load) #Data Engineering #Scala #Data Framework #Base #Big Data #Hadoop #Spark SQL #Kafka (Apache Kafka) #Programming #HDFS (Hadoop Distributed File System) #MySQL #HBase #Impala #NoSQL #Databases #Cloudera #SQL (Structured Query Language) #Batch
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Collabera LLC, is seeking the following. Apply via Dice today!

MAJOR BANKING CLIENT IS HIRING FOR A DATA ENGINEER!

Job Title: Data Engineer
Location: Jacksonville, FL; Chicago, IL; Denver, CO; Charlotte, NC
Work Arrangement: 5 days in office
Client Industry: Banking
Duration: 12-18 months
Contract Schedule: Monday to Friday

About The Role

Day-to-Day:
• Design, develop, and maintain batch and near-real-time data pipelines using Spark Structured Streaming, MapReduce, and other Big Data frameworks.
• Ingest data from multiple sources such as message queues (Kafka), file shares, REST APIs, and relational databases.
• Transform, clean, and validate data in HDFS, Hive, Impala, or Spark SQL.
• Convert and manage data in formats like JSON, CSV, and XML to support downstream analytics.
• Perform data validation, profiling, and analysis to identify anomalies and ensure data integrity.
• Troubleshoot issues in data pipelines, SQL jobs, or Spark applications, including slow-running jobs or failures.

Must-Haves:
• Strong SQL skills in one or more of MySQL, Hive, Impala, or Spark SQL
• Data ingestion experience from message queues, file shares, REST APIs, relational databases, etc., and experience with data formats such as JSON, CSV, and XML
• Experience working with Spark Structured Streaming
• Experience working with Hadoop/Big Data and distributed systems
• Working experience with Spark, Sqoop, Kafka, MapReduce, and NoSQL databases such as HBase, SOLR, CDP or HDP (Cloudera or Hortonworks), Elasticsearch, Kibana, etc.
• Hands-on programming experience in at least one of Scala, Python, or PHP

Compensation

Hourly Rate: $65-$68 per hour
This range reflects base compensation and may vary based on location, market conditions, experience, and candidate qualifications.
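As an illustrative sketch only (not part of the posting), the "data validation, profiling, and analysis" duties above might look like the following in plain Python. Field names ("id", "amount") and thresholds are hypothetical; in the role itself this logic would typically run at scale in Spark SQL or Structured Streaming rather than in the standard library.

```python
import csv
import io
import json

def validate_record(rec):
    """Return a list of anomaly descriptions for one record (empty = clean).

    Hypothetical schema: a non-empty "id" and a numeric "amount"
    within an assumed plausible range.
    """
    problems = []
    if not rec.get("id"):
        problems.append("missing id")
    try:
        amount = float(rec.get("amount", ""))
        if not (0 <= amount <= 1_000_000):
            problems.append("amount out of range")
    except ValueError:
        problems.append("non-numeric amount")
    return problems

def load_records(payload, fmt):
    """Normalize JSON or CSV input into a list of dicts."""
    if fmt == "json":
        return json.loads(payload)
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(payload)))
    raise ValueError(f"unsupported format: {fmt}")

def profile(records):
    """Split records into clean rows and (record, problems) pairs."""
    clean, anomalies = [], []
    for rec in records:
        problems = validate_record(rec)
        if problems:
            anomalies.append((rec, problems))
        else:
            clean.append(rec)
    return clean, anomalies

if __name__ == "__main__":
    payload = '[{"id": "a1", "amount": "42.5"}, {"id": "", "amount": "oops"}]'
    clean, bad = profile(load_records(payload, "json"))
    print(len(clean), len(bad))  # 1 clean record, 1 anomalous record
```

The same `profile` function works unchanged on CSV input because `load_records` normalizes both formats into dicts first, mirroring the posting's requirement to handle JSON, CSV, and XML uniformly before downstream analytics.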
Benefits: The Company offers the following benefits for this position, subject to applicable eligibility requirements: medical insurance, dental insurance, vision insurance, 401(k) retirement plan, life insurance, long-term disability insurance, short-term disability insurance, paid parking/public transportation, and (as applicable) paid time off, paid sick and safe time, paid vacation time, paid parental leave, and paid holidays.

About Us

At Collabera, we don't just offer jobs; we build careers. As a global leader in talent solutions, we provide opportunities to work with top organizations, cutting-edge technologies, and dynamic teams. Our culture thrives on innovation, collaboration, and a commitment to excellence. With continuous learning, career growth, and a people-first approach, we empower you to achieve your full potential. Join us and be part of a company that values passion, integrity, and making an impact.

Ready to Apply?

Apply now or reach out to Ashwini Pawar for more information. We look forward to speaking with you!