Triveni IT

Big Data Developer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Big Data Developer on a long-term contract in Charlotte, NC, requiring 5+ years of experience. Key skills include Python, SQL, and major Big Data tools like Spark and Kafka, with a focus on financial crimes technology.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
March 13, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#RDS (Amazon Relational Database Service) #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Python #PySpark #SQL (Structured Query Language) #Linux #Data Engineering #Java #Kafka (Apache Kafka) #Scala #Data Processing #HDFS (Hadoop Distributed File System) #Big Data #Datasets #Monitoring
Role description
Role: Big Data Engineer / Data Engineer
Duration: Long-term contract
Location: Charlotte, NC

About the Company
We are seeking a highly experienced Big Data Developer to support a Financial Crimes Technology organization.

About the Role
This role focuses on building, enhancing, and maintaining large-scale data solutions that enable critical analytics, monitoring, and risk-mitigation initiatives.

Responsibilities
• Build, enhance, and maintain large-scale data solutions.
• Enable critical analytics, monitoring, and risk-mitigation initiatives.

Qualifications
• 5+ years of experience as a Big Data Developer or in a similar data-engineering role.

Required Skills
• Strong proficiency in Python, including experience with RDS.
• Hands-on experience with major Big Data platforms and tools, such as:
  • Linux
  • HDFS
  • Hive
  • Hortonworks
  • Spark
  • Scala
  • Python
  • Java
  • Kafka and related streaming technologies
• Experience designing and/or consuming APIs.
• Strong SQL skills and the ability to interact with large, complex datasets.
• Demonstrated experience with data processing operations (join, merge, transform, summarize, etc.).
• Ability to design and implement data control checks and validation frameworks.
• Hands-on experience with PySpark.
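The data processing operations and control checks called out above (join, summarize, row-count validation) can be sketched in miniature. This is an illustrative example only, not part of the role's actual codebase: it uses plain Python with hypothetical sample records rather than a Spark cluster, so the same ideas would translate to PySpark DataFrame joins and aggregations in practice.

```python
from collections import defaultdict

# Hypothetical sample records standing in for two large datasets.
transactions = [
    {"txn_id": 1, "account": "A-100", "amount": 250.0},
    {"txn_id": 2, "account": "A-200", "amount": 125.5},
    {"txn_id": 3, "account": "A-100", "amount": 75.0},
]
accounts = [
    {"account": "A-100", "risk_tier": "high"},
    {"account": "A-200", "risk_tier": "low"},
]

# Join: attach each account's risk tier to its transactions.
tier_by_account = {a["account"]: a["risk_tier"] for a in accounts}
joined = [{**t, "risk_tier": tier_by_account[t["account"]]} for t in transactions]

# Summarize: total transaction amount per risk tier.
totals = defaultdict(float)
for row in joined:
    totals[row["risk_tier"]] += row["amount"]

# Data control checks: the join must not drop or orphan any rows.
assert len(joined) == len(transactions), "row-count check failed after join"
assert all(r["risk_tier"] is not None for r in joined), "null-tier check failed"

print(dict(totals))  # → {'high': 325.0, 'low': 125.5}
```

In PySpark the join would be `transactions_df.join(accounts_df, on="account")` and the summary a `groupBy("risk_tier").sum("amount")`; the same row-count and null checks are the kind of validation framework the posting asks candidates to design.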