

Insight Global
Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with 10+ years of experience in Data Engineering, strong AWS and Python skills, and expertise in ETL and data warehousing. It offers an 18-month remote contract at $60-65/hr.
Country: United States
Currency: $ USD
Day rate: $520
Date: January 28, 2026
Duration: More than 6 months
Location: Remote
Contract: Unknown
Security: Unknown
Location detailed: Charlotte Metro
Skills detailed: #Data Quality #Data Warehouse #Python #ML (Machine Learning) #AWS S3 (Amazon Simple Storage Service) #PySpark #Spark (Apache Spark) #Data Processing #Lambda (AWS Lambda) #ETL (Extract, Transform, Load) #Data Engineering #Code Reviews #AI (Artificial Intelligence) #Data Ingestion #Scala #FastAPI #AWS (Amazon Web Services) #Data Pipeline #S3 (Amazon Simple Storage Service)
Role description
Position: Python/PySpark Data Engineer
Location: Remote or Charlotte
Duration: 18-month contract-to-hire, with potential to extend
Pay: $60-65/hr
Required Skills & Experience
• 10+ years of experience in Data Engineering
• Strong hands-on experience with AWS (S3, Lambda, Glue)
• Proficiency in Python, including PySpark
• Solid background in ETL development and data warehousing concepts
• Experience building or working with APIs (FastAPI preferred, but equivalent experience is acceptable; see the sketch after this list)
• Ability to clearly walk through past projects and explain design decisions, trade-offs, and outcomes
• Exposure to RAG (Retrieval-Augmented Generation) or AI/ML implementations within data engineering (approx. 5% of role)
• Personal projects may be included
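
To make the API expectation concrete, here is a minimal sketch of the kind of FastAPI endpoint the role describes. The `Order` model, field names, and the in-memory lookup are hypothetical stand-ins for a real warehouse or table query; they are not from the posting.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Orders API")

class Order(BaseModel):
    order_id: int
    customer: str
    total: float

# Hypothetical stand-in for a real warehouse/table lookup.
_ORDERS = {
    1: Order(order_id=1, customer="acme", total=99.50),
    2: Order(order_id=2, customer="globex", total=12.00),
}

@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: int) -> Order:
    order = _ORDERS.get(order_id)
    if order is None:
        raise HTTPException(status_code=404, detail="order not found")
    return order
```

Run locally with `uvicorn app:app --reload` and query `/orders/1` to exercise the endpoint.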
Nice to Have
• Experience integrating data pipelines with AI or analytics use cases
• Familiarity with modern data platforms and orchestration tools (a minimal sketch follows this list)
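
As a hedged illustration of the orchestration tooling mentioned above, a daily ETL in Airflow 2.x might be wired up like this. The DAG id, schedule, and task bodies are hypothetical; Airflow itself is one example of a tool the posting does not name.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract() -> None:
    ...  # e.g., pull raw files from an API or an S3 prefix

def transform() -> None:
    ...  # e.g., run a PySpark job or SQL transformation

def load() -> None:
    ...  # e.g., publish curated tables to the warehouse

with DAG(
    dag_id="orders_daily_etl",      # hypothetical pipeline name
    start_date=datetime(2026, 1, 1),
    schedule="@daily",              # the 'schedule' argument requires Airflow 2.4+
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Declare the dependency chain: extract, then transform, then load.
    extract_task >> transform_task >> load_task
```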
What We're Looking For
• Strong problem-solving skills and a practical, hands-on mindset
• Clear communicator who can explain technical work to both technical and non-technical audiences
• Someone who can confidently walk us through a real project they've built or contributed to
• Someone who is enthusiastic about new tools and incorporating AI into their skillset
Job Overview
We are looking for a skilled Data Engineer with a strong AWS and Python background to help design, build, and maintain scalable data pipelines and APIs. This role is ideal for someone who enjoys working across data ingestion, transformation, and delivery, and who can clearly explain their work through real project experience.
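
One hedged sketch of the ingestion side described above: an AWS Lambda handler triggered by new objects landing in S3. The bucket layout, payload format, and downstream hand-off are hypothetical; the event structure and boto3 calls are standard.

```python
import json
import urllib.parse

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """Triggered by S3 ObjectCreated events; reads each new object."""
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        obj = s3.get_object(Bucket=bucket, Key=key)
        payload = json.loads(obj["Body"].read())
        # Hypothetical next step: validate and hand records off to the pipeline.
        print(f"ingested {len(payload)} records from s3://{bucket}/{key}")
```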
Key Responsibilities
• Design, build, and maintain ETL pipelines using AWS services such as S3, Lambda, and Glue
• Develop and optimize data processing workflows using Python and PySpark (see the sketch after this list)
• Build and support APIs (preferably using FastAPI) to expose data to downstream systems
• Work with data warehouse architectures, ensuring data quality, reliability, and performance
• Collaborate with cross-functional teams to understand data requirements and translate them into scalable solutions
• Participate in code reviews and contribute to best practices in data engineering
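
A minimal sketch of the first two responsibilities, assuming a Glue or EMR environment where the S3 connector is already configured (on Glue, the SparkSession is provided for you). Bucket names and column names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read raw CSV files landed in a hypothetical raw-zone bucket.
raw = spark.read.csv(
    "s3://example-raw-bucket/orders/", header=True, inferSchema=True
)

cleaned = (
    raw.dropDuplicates(["order_id"])                       # basic data-quality step
       .withColumn("order_date", F.to_date("order_date"))  # normalize types
       .filter(F.col("total") > 0)                         # drop invalid rows
)

# Write curated, partitioned Parquet back to a hypothetical curated-zone bucket.
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))
```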