

Pronix Inc
Big Data Developer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Developer/Engineer on a 12+ month contract, requiring onsite work in Charlotte, NC, Denver, CO, or Chicago, IL. Key skills include SQL, Spark, Hadoop, and programming in Scala or Python.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 28, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Scripting #Jenkins #Impala #Linux #Data Pipeline #Shell Scripting #PHP #Spark SQL #Cloud #API (Application Programming Interface) #MySQL #Sqoop (Apache Sqoop) #Agile #GIT #Batch #REST API #Kafka (Apache Kafka) #Data Ingestion #Python #SQL (Structured Query Language) #Cloudera #NoSQL #Hadoop #Programming #Version Control #Spark (Apache Spark) #XML (eXtensible Markup Language) #Big Data #JSON (JavaScript Object Notation) #Scala #HBase #REST (Representational State Transfer)
Role description
Pronix is hiring a Big Data Developer/Engineer for an onsite contract position with a global financial institution located in Charlotte, NC, Denver, CO, or Chicago, IL. This is a 12+ month, W2-only contract, and an in-person interview is required.
This role will join the GIS team as a Big Data Developer/Engineer within a tight-knit, supportive community passionate about delivering the best experience for customers. This specific role is required to be fulfilled onsite 5 days a week in a client facility.
Requirements:
• Strong SQL skills – one or more of MySQL, Hive, Impala, Spark SQL
• Data ingestion experience from message queues, file shares, REST APIs, relational databases, etc., and experience with data formats such as JSON, CSV, and XML
• Experience working with Spark Structured Streaming
• Experience working with Hadoop/Big Data and distributed systems
• Working experience with Spark, Sqoop, Kafka, MapReduce, NoSQL databases such as HBase, Solr, CDP or HDP (Cloudera or Hortonworks), Elasticsearch, Kibana, etc.
• Hands-on programming experience in at least one of Scala, Python, PHP, or shell scripting
• Performance tuning experience with Spark/MapReduce or SQL jobs
• Experience and proficiency with the Linux operating system is a must
• Experience in the end-to-end design and build of near-real-time and batch data pipelines
• Experience working in an Agile development process and a deep understanding of the phases of the Software Development Life Cycle
• Experience using source and version control systems such as SVN, Git, Bitbucket, etc.
• Experience working with Jenkins and JAR management
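As a rough illustration of the data-format skills listed above, here is a minimal Python sketch (standard library only; all field names and payloads are illustrative, not from the role description) that normalizes one record arriving as JSON, CSV, or XML into a common dict, the kind of step an ingestion pipeline performs before handing records to Spark:

```python
import csv
import io
import json
import xml.etree.ElementTree as ET


def parse_record(payload: str, fmt: str) -> dict:
    """Normalize one record from JSON, CSV, or XML into a plain dict.

    Illustrative only -- a real pipeline would add schema validation
    and error handling for malformed payloads.
    """
    if fmt == "json":
        return json.loads(payload)
    if fmt == "csv":
        # Assume a header row followed by a single data row.
        rows = list(csv.DictReader(io.StringIO(payload)))
        return dict(rows[0])
    if fmt == "xml":
        # Assume a flat element whose children are the record's fields.
        root = ET.fromstring(payload)
        return {child.tag: child.text for child in root}
    raise ValueError(f"unsupported format: {fmt}")


# The same logical record in three wire formats:
as_json = '{"id": "1", "name": "spark"}'
as_csv = "id,name\n1,spark\n"
as_xml = "<rec><id>1</id><name>spark</name></rec>"

assert (
    parse_record(as_json, "json")
    == parse_record(as_csv, "csv")
    == parse_record(as_xml, "xml")
)
```

In practice the same normalization is usually expressed through Spark's own readers (`spark.read.json`, `spark.read.csv`) so it scales across a cluster; the sketch above just shows the format-handling idea in isolation.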
Desired skills:
• Self-starter who works with minimal supervision and can collaborate in a team with diverse skill sets
• Ability to understand customer requests and provide the right solution
• Strong analytical mind for taking on complicated problems
• Willingness to resolve issues and dig into potential problems
• Ability to adapt and keep learning new technologies






