

Insight Global
Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This is an 18-month contract-to-hire Senior Data Engineer role paying $65/hr, based in Charlotte (hybrid, 2 days a week). It requires 6+ years of data engineering experience, proficiency in Python/PySpark and AWS services, and strong SQL skills.
Country
United States
Currency
$ USD
Day rate
$520
Date
November 21, 2025
Duration
More than 6 months
Location
Hybrid
Contract
1099 Contractor
Security
Unknown
Location detailed
Charlotte Metro
Skills detailed
#Data Engineering #Data Analysis #Python #Data Profiling #Triggers #Scrum #ETL (Extract, Transform, Load) #Data Extraction #Sybase #GitHub #AWS (Amazon Web Services) #Business Analysis #Jenkins #Agile #Data Mining #S3 (Amazon Simple Storage Service) #Lambda (AWS Lambda) #Programming #Kafka (Apache Kafka) #Data Integration #Oracle #Unit Testing #BitBucket #Classification #Spark (Apache Spark) #SQL Server #PySpark #RDS (Amazon Relational Database Service) #SQL (Structured Query Language) #Scala #DevOps #Databases #Data Modeling #Data Manipulation #Batch
Role description
Position: Python/PySpark Data Engineer
Location: Charlotte, hybrid 2x a week (Tues and Wed)
Duration: 18-month contract to hire
Pay Rate: $65/hr
The primary role of the Data Engineer-Developer is to function as a critical member of a data team by designing data integration solutions that deliver business value in line with the company's objectives. They are responsible for designing and developing data/batch processing, data manipulation, data mining, and data extraction/transformation/loading into large data domains using Python/PySpark and AWS tools.
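For illustration only (not from the posting): a minimal sketch of the kind of PySpark batch ETL job this paragraph describes. The bucket paths, column names, and transformations are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical batch ETL: extract raw CSV from an S3 landing zone,
# transform it, and load partitioned Parquet into a curated zone.
spark = SparkSession.builder.appName("example-orders-etl").getOrCreate()

# Extract: read raw order data (made-up bucket and schema).
orders = spark.read.csv("s3://example-landing/orders/", header=True, inferSchema=True)

# Transform: drop bad rows, derive revenue, normalize the order date.
cleaned = (
    orders
    .filter(F.col("quantity") > 0)
    .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
    .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write date-partitioned Parquet (made-up target bucket).
cleaned.write.mode("overwrite").partitionBy("order_date").parquet("s3://example-curated/orders/")

spark.stop()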
Responsibilities:
• Provide scoping, estimating, planning, design, development, and support services to a project.
• Identify requirements and develop the technical detailed design document.
• Work with developers and business areas to design, configure, deploy, and maintain custom ETL infrastructure to support project initiatives.
• Design and develop data/batch processing, data manipulation, data mining, and data extraction/transformation/loading (ETL pipelines) into large data domains.
• Document and present solution alternatives to clients that support business processes and business objectives.
• Work with business analysts to understand and prioritize user requirements.
• Design, develop, test, and implement application code.
• Follow proper software development lifecycle processes and standards.
• Perform quality analysis of the products; own defect tracking and classification.
• Track progress and intervene as needed to eliminate barriers and ensure delivery.
• Resolve or escalate problems and manage risk for both development and production support.
• Maintain deep knowledge and awareness of technical and industry best practices and trends, especially in technology and methodologies.
Skills and Knowledge:
• 6+ years of developer experience focused specifically on data engineering
• Strong hands-on experience in data engineering development using Python and PySpark as ETL tools
• Hands-on experience with AWS services such as Glue, RDS, S3, Step Functions, EventBridge, Lambda, MSK (Kafka), and EKS (a minimal orchestration sketch follows this list)
• Hands-on experience with databases such as Postgres, SQL Server, Oracle, and Sybase
• Hands-on experience with SQL database programming, SQL performance tuning, relational model analysis, queries, stored procedures, views, functions, and triggers
• Strong technical experience in design (mapping specifications, HLD, LLD) and development (coding, unit testing)
• Good knowledge of CI/CD DevOps processes and tools such as Bitbucket, GitHub, and Jenkins
• Strong foundation and experience with data modeling, data warehousing, data mining, data analysis, and data profiling
• Strong experience with Agile/Scrum methodology
• Good communication and interpersonal skills
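As referenced in the AWS services bullet above, here is a minimal, hypothetical sketch of the kind of orchestration those services enable: a Lambda handler, fired on an EventBridge schedule, that starts a Glue ETL job via boto3. The job name and argument are made up for illustration.

import boto3

glue = boto3.client("glue")

def handler(event, context):
    # Scheduled EventBridge events carry an ISO-8601 "time" field;
    # pass it through as a job argument (argument name is illustrative).
    run = glue.start_job_run(
        JobName="example-orders-etl",
        Arguments={"--run_date": event.get("time", "")},
    )
    return {"JobRunId": run["JobRunId"]}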






