

Insight Global
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This is a Senior Data Engineer role on an 18-month remote contract, with a pay rate of $60-$70/hr. It requires 10+ years in Data Engineering, strong AWS and Python skills, ETL development experience, and API experience (FastAPI preferred).
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
560
🗓️ - Date
January 30, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Charlotte, NC
🧠 - Skills detailed
#Spark (Apache Spark) #PySpark #AWS S3 (Amazon Simple Storage Service) #Data Ingestion #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Python #Data Engineering #Data Warehouse #AI (Artificial Intelligence) #ML (Machine Learning) #Data Pipeline #AWS (Amazon Web Services) #FastAPI #Data Quality #S3 (Amazon Simple Storage Service) #Code Reviews #Data Processing #Scala
Role description
We are looking for a skilled Data Engineer with a strong AWS and Python background to help design, build, and maintain scalable data pipelines and APIs. This role is ideal for someone who enjoys working across data ingestion, transformation, and delivery, and who can clearly explain their work through real project experience. This is a fully remote opportunity, slated to be an 18-month contract engagement. Target rate for this role on W2 is $60-$70/hr depending on previous experience.
Key Responsibilities
• Design, build, and maintain ETL pipelines using AWS services such as S3, Lambda, and Glue
• Develop and optimize data processing workflows using Python and PySpark (a minimal ETL sketch follows this list)
• Build and support APIs (preferably using FastAPI) to expose data to downstream systems
• Work with data warehouse architectures, ensuring data quality, reliability, and performance
• Collaborate with cross-functional teams to understand data requirements and translate them into scalable solutions
• Participate in code reviews and contribute to best practices in data engineering
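As a rough illustration of the pipeline work above, here is a minimal, hypothetical PySpark ETL job that reads raw files from S3 and writes curated Parquet back, in the style of an AWS Glue Spark job. Bucket names, paths, and column names are placeholders, not details from this role.

```python
# Hypothetical PySpark ETL sketch: extract raw CSV from S3, clean it,
# and load partitioned Parquet for downstream consumption.
# All bucket, path, and column names below are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: raw CSV files landed in S3 by an upstream ingestion process
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: drop rows missing key fields, fix types, stamp the load date
clean = (
    raw.dropna(subset=["order_id", "amount"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("load_date", F.current_date())
)

# Load: write partitioned Parquet for the warehouse layer to pick up
clean.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3://example-curated-bucket/orders/"
)
```

In an actual Glue job the session would typically come from a GlueContext rather than being built directly, but the plain-Spark version keeps the sketch self-contained.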
Required Skills & Experience
• 10+ years of experience in Data Engineering
• Strong hands-on experience with AWS (S3, Lambda, Glue)
• Proficiency in Python, including PySpark
• Solid background in ETL development and data warehousing concepts
• Experience building or working with APIs (FastAPI preferred, but equivalent experience is acceptable; a minimal FastAPI sketch follows this list)
• Ability to clearly walk through past projects and explain design decisions, trade-offs, and outcomes
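Since FastAPI is called out in both lists, here is a minimal, hypothetical sketch of the kind of read endpoint that exposes curated data to a downstream system. The storage layer is stubbed with an in-memory dict; the route and model names are illustrative only.

```python
# Hypothetical FastAPI sketch: expose a curated record to downstream callers.
# The in-memory dict stands in for a real warehouse or data-lake query layer.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-api")

class Order(BaseModel):
    order_id: str
    amount: float

# Stand-in for the real query layer (illustrative data only)
_ORDERS = {"o-1001": Order(order_id="o-1001", amount=42.5)}

@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: str) -> Order:
    order = _ORDERS.get(order_id)
    if order is None:
        raise HTTPException(status_code=404, detail="order not found")
    return order
```

Run locally with uvicorn main:app and request GET /orders/o-1001; FastAPI validates the response against the Order model and returns JSON.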
Nice to Have
• Experience integrating data pipelines with AI or analytics use cases
• Familiarity with modern data platforms and orchestration tools (a minimal orchestration sketch follows this list)
• Exposure to RAG (Retrieval-Augmented Generation) or AI/ML implementations within data engineering (approx. 5% of role)
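The posting does not name an orchestrator, so as one common, hypothetical example of the orchestration item above, here is a minimal Apache Airflow DAG that would schedule a daily run of a pipeline like the one sketched earlier.

```python
# Hypothetical Airflow DAG: schedule a daily run of the ETL job.
# Airflow is an assumption here; the posting names no specific orchestrator.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_etl():
    # Placeholder: in practice this would submit the Glue/PySpark job
    print("triggering orders ETL")

with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",  # the schedule argument requires Airflow 2.4+
    catchup=False,
):
    PythonOperator(task_id="run_etl", python_callable=run_etl)
```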