

Motion Recruitment
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with 5+ years of experience, focusing on AWS and serverless technologies. It is a contract-to-hire position, 100% remote, requiring expertise in SQL, ETL, Python, and data warehousing solutions.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
April 2, 2026
Duration
Unknown
-
Location
Remote
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
United States
-
Skills detailed
#Scala #SNS (Simple Notification Service) #Looker #AWS Lambda #Cloud #Complex Queries #Datasets #Agile #ML (Machine Learning) #ETL (Extract, Transform, Load) #AWS (Amazon Web Services) #Automation #Data Engineering #DynamoDB #Redshift #Data Pipeline #Microservices #SQS (Simple Queue Service) #Python #SQL (Structured Query Language) #Tableau #AI (Artificial Intelligence) #Data Warehouse #S3 (Amazon Simple Storage Service) #SQL Queries #DevOps #Lambda (AWS Lambda) #BI (Business Intelligence)
Role description
We are looking for a highly skilled Senior Data Engineer to join a fast-paced, innovative team building scalable data platforms and analytics solutions. This role is ideal for someone who thrives in a cloud-native environment, enjoys solving complex data challenges, and is passionate about building high-quality, resilient data systems.
You will play a key role in designing and developing modern data pipelines, working with large-scale datasets, and leveraging AWS serverless technologies to power data-driven products.
Senior Data Engineer (AWS, Serverless, Data Platforms)
Location: 100% Remote (EST/CST preferred)
Duration: Contract to Hire
What You'll Do
• Design, build, and maintain scalable data pipelines and ETL processes
• Develop and optimize complex SQL queries for data transformation and reporting
• Work with AWS services such as Lambda, S3, DynamoDB, SNS, SQS, and Step Functions
• Build and manage data solutions using Redshift as a core data warehouse
• Develop microservices and backend components using Python and Node.js
• Collaborate with cross-functional Agile teams to design and deliver robust solutions
• Integrate BI tools such as Tableau, QuickSight, or Looker for analytics and dashboards
• Implement testing, automation, and performance optimization for data workflows
• Contribute to the adoption of modern engineering practices and cloud technologies
• Explore and apply AI/GenAI capabilities within data and analytics workflows
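As a rough sketch of the serverless pipeline work described above, the snippet below shows a minimal Lambda-style handler that transforms an SQS-shaped event batch. The event structure follows the standard AWS SQS-to-Lambda record format, but the record fields (`order_id`, `amount_usd`, `status`) are invented for illustration and are not part of the role description; a real pipeline would go on to load the rows into S3 or Redshift.

```python
import json


def transform_record(record: dict) -> dict:
    """Normalize one raw event record into a warehouse-friendly shape.

    The field names here are illustrative assumptions, not a real schema.
    """
    return {
        "order_id": str(record["id"]),
        "amount_usd": round(float(record["amount"]), 2),
        "status": record.get("status", "unknown").lower(),
    }


def handler(event, context=None):
    """Lambda-style entry point: each SQS message body is a JSON record."""
    rows = [transform_record(json.loads(msg["body"])) for msg in event["Records"]]
    # In a real deployment these rows would be written to S3 and COPY-ed
    # into Redshift; returning them keeps this sketch self-contained.
    return {"transformed": len(rows), "rows": rows}
```

Keeping the transform a pure function, separate from the `handler` glue, is what makes this kind of pipeline code unit-testable without any AWS infrastructure.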
What We're Looking For
• 5+ years of experience in Data Engineering or a related field
• Strong expertise in SQL, including complex queries and performance tuning
• Hands-on experience with AWS cloud services and serverless architecture
• Experience building ETL/data pipelines using tools like AWS Lambda and Redshift
• Proficiency in Python and working knowledge of Node.js
• Experience working with large-scale data systems and data warehousing solutions
• Familiarity with BI tools such as Tableau, QuickSight, or Looker
• Exposure to AI/ML or Generative AI technologies is a plus
• Understanding of microservices architecture and event-driven systems
• Experience with DevOps tools and CI/CD pipelines is a plus
• Strong communication skills and ability to work in collaborative environments
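To illustrate the kind of complex SQL the requirements above call for, here is a window-function query (a running total per customer) run against in-memory SQLite as a stand-in for Redshift. The table and column names are invented for the example; the `SUM(...) OVER (PARTITION BY ... ORDER BY ...)` pattern itself is portable to Redshift.

```python
import sqlite3

# In-memory SQLite as a lightweight stand-in for a Redshift warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, order_day INTEGER, amount REAL);
    INSERT INTO orders VALUES
        ('a', 1, 10.0), ('a', 2, 15.0), ('b', 1, 7.5), ('b', 3, 2.5);
""")

# Running total per customer: a typical transformation/reporting query.
query = """
    SELECT customer, order_day, amount,
           SUM(amount) OVER (PARTITION BY customer ORDER BY order_day)
               AS running_total
    FROM orders
    ORDER BY customer, order_day
"""
rows = conn.execute(query).fetchall()
# rows: [('a', 1, 10.0, 10.0), ('a', 2, 15.0, 25.0),
#        ('b', 1, 7.5, 7.5), ('b', 3, 2.5, 10.0)]
```

Window functions like this usually replace correlated subqueries, which is one of the simplest performance-tuning wins in warehouse SQL.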
