Insight Global

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 6+ years of Data Engineering experience, proficient in Python/PySpark and AWS services. It is an 18-month hybrid contract-to-hire in Charlotte, NC, requiring strong SQL skills and familiarity with Agile methodologies.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
600
-
🗓️ - Date
December 5, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#PySpark #Scala #Oracle #BitBucket #Batch #DevOps #Jenkins #Data Analysis #Kafka (Apache Kafka) #Sybase #Data Extraction #Spark (Apache Spark) #SQL Server #Data Integration #SQL (Structured Query Language) #Programming #Lambda (AWS Lambda) #Scrum #RDS (Amazon Relational Database Service) #AWS (Amazon Web Services) #GitHub #Data Manipulation #Agile #S3 (Amazon Simple Storage Service) #Classification #Databases #Unit Testing #Python #Business Analysis #Data Modeling #"ETL (Extract, Transform, Load)" #Data Mining #Triggers #Data Engineering #Data Profiling
Role description
Position: Python/PySpark Data Engineer
Location: Charlotte (hybrid, on-site 2x a week – Tuesday and Wednesday)
Duration: 18-month contract-to-hire

Skills and Knowledge:
• At least 6 years of developer experience specifically focused on Data Engineering
• Strong hands-on experience in Data Engineering development using Python and PySpark as an ETL tool
• Hands-on experience with AWS services such as Glue, RDS, S3, Step Functions, EventBridge, Lambda, MSK (Kafka), and EKS
• Hands-on experience with databases such as Postgres, SQL Server, Oracle, and Sybase
• Hands-on experience with SQL database programming, SQL performance tuning, relational model analysis, queries, stored procedures, views, functions, and triggers
• Strong technical experience in design (mapping specifications, HLD, LLD) and development (coding, unit testing)
• Good knowledge of CI/CD DevOps processes and tools such as Bitbucket, GitHub, and Jenkins
• Strong foundation and experience with data modeling, data warehousing, data mining, data analysis, and data profiling
• Strong experience with Agile/Scrum methodology
• Good communication and interpersonal skills

The primary role of the Data Engineer–Developer is to function as a critical member of a data team by designing data integration solutions that deliver business value in line with the company's objectives. They are responsible for the design and development of data/batch processing, data manipulation, data mining, and data extraction/transformation/loading into large data domains using Python/PySpark and AWS tools.

Responsibilities:
• Provide scoping, estimating, planning, design, development, and support services to a project.
• Identify and develop the technical detailed-design document.
• Work with developers and business areas to design, configure, deploy, and maintain custom ETL infrastructure to support project initiatives.
• Design and develop data/batch processing, data manipulation, data mining, and data extraction/transformation/loading (ETL pipelines) into large data domains.
• Document and present solution alternatives to clients that support business processes and business objectives.
• Work with business analysts to understand and prioritize user requirements.
• Design, develop, test, and implement application code.
• Follow proper software development lifecycle processes and standards.
• Perform quality analysis of the products; responsible for defect tracking and classification.
• Track progress and intervene as needed to eliminate barriers and ensure delivery.
• Resolve or escalate problems and manage risk for both development and production support.
• Maintain deep knowledge and awareness of technical and industry best practices and trends, especially in technology and methodologies.