Pronix Inc

Big Data Developer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Developer in Chicago, IL, Denver, CO, or Charlotte, NC, with a 12-month contract. Key skills include strong SQL, Hadoop, Spark, data ingestion, and programming in Scala or Python. On-site work required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 21, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#SQL (Structured Query Language) #Version Control #PHP #Scala #REST API #Shell Scripting #Hadoop #Spark (Apache Spark) #Data Pipeline #Agile #Programming #Big Data #Cloudera #HBase #Python #Batch #API (Application Programming Interface) #NoSQL #Data Ingestion #Linux #GIT #REST (Representational State Transfer) #MySQL #Scripting #Spark SQL #Impala #Cloud #Jenkins #XML (eXtensible Markup Language) #JSON (JavaScript Object Notation) #Kafka (Apache Kafka) #Sqoop (Apache Sqoop)
Role description
Position: Big Data/Hadoop Developers (heavy coding focus) – W2 only
Location: Chicago, IL; Denver, CO; or Charlotte, NC (5 days in office with the GIS team)
Duration: 12-month contract
Interview Mode: One WebEx interview (expect a coding assessment), followed by an in-person final interview (also a coding assessment)
This role will join the GIS team as a Big Data Developer/Engineer within a tight-knit, supportive community passionate about delivering the best experience for customers. This specific role must be fulfilled onsite 5 days a week in a client facility.
Requirements:
• Strong SQL skills – one or more of MySQL, Hive, Impala, Spark SQL
• Data ingestion experience from message queues, file shares, REST APIs, relational databases, etc., and experience with data formats such as JSON, CSV, and XML
• Experience working with Spark Structured Streaming
• Experience working with Hadoop/Big Data and distributed systems
• Working experience with Spark, Sqoop, Kafka, MapReduce, NoSQL databases such as HBase, Solr, CDP or HDP (Cloudera or Hortonworks), Elasticsearch, Kibana, etc.
• Hands-on programming experience in at least one of Scala, Python, PHP, or shell scripting
• Performance-tuning experience with Spark/MapReduce or SQL jobs
• Experience and proficiency with the Linux operating system is a must
• Experience in the end-to-end design and build of near-real-time and batch data pipelines
• Experience working in an Agile development process and a deep understanding of the phases of the Software Development Life Cycle
• Experience using source code and version control systems such as SVN, Git, and Bitbucket
• Experience working with Jenkins and JAR management