TrueSkilla

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer on a 15+ month contract, hybrid in Charlotte, NC. Key skills include data engineering, SQL, Python/Java, ETL tools, big data technologies, and cloud platforms. Agile methodology experience is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
520
-
🗓️ - Date
April 1, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#S3 (Amazon Simple Storage Service) #Kafka (Apache Kafka) #Big Data #AWS (Amazon Web Services) #AI (Artificial Intelligence) #Web Services #Spark (Apache Spark) #Hadoop #Data Engineering #Cloud #Compliance #Microsoft Azure #GCP (Google Cloud Platform) #Python #Data Pipeline #ETL (Extract, Transform, Load) #NoSQL #Java #SQL (Structured Query Language) #Agile #Automation
Role description
Job Title: Data Engineer

Type: W2 Only

Duration: 15+ months, with possibility to extend

Location: 300 S Brevard St., Charlotte, NC 28202 – Hybrid role, onsite 3 days per week

Job Description:

In this contingent resource assignment, you may:

• Consult on complex initiatives with broad impact and large-scale planning for Software Engineering.

• Review and analyze complex, multi-faceted, larger-scale or longer-term Software Engineering challenges that require in-depth evaluation of multiple factors, including intangibles or unprecedented factors.

• Contribute to the resolution of complex, multi-faceted situations requiring a solid understanding of the function, policies, procedures, and compliance requirements needed to meet deliverables.

• Strategically collaborate and consult with client personnel.

Required Qualifications:

• Data Engineering experience

• Proficiency in database, SQL, and PL/SQL skills

• Experience with Python or Java

• Experience with ETL tools and data pipeline frameworks

• Experience with Parquet files in S3 buckets

• Familiarity with relational and NoSQL databases

• Familiarity with big data technologies such as Apache Spark, Hadoop, or Kafka

• Performance tuning and automation: hands-on experience in performance tuning, including any report automation involved

• Agile methodology: experience working on Agile projects, including participation in backlog grooming, sprint planning, and daily stand-up meetings

• Experience with Artificial Intelligence

• Experience with cloud platforms such as Amazon Web Services (AWS), Google Cloud Platform (GCP), or Microsoft Azure