

Revel IT
Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This is a fully remote, contract-to-hire Senior Data Engineer role offering a competitive pay rate. It requires 5+ years of experience, expert proficiency in Python and PySpark, and strong AWS skills for building scalable data solutions.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
December 18, 2025
Duration
Unknown
-
Location
Remote
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
United States
-
Skills detailed
#Scala #Jenkins #Athena #ML (Machine Learning) #Spark (Apache Spark) #PySpark #Code Reviews #AWS Glue #S3 (Amazon Simple Storage Service) #Strategy #Lambda (AWS Lambda) #Data Strategy #Python #DynamoDB #Data Quality #Distributed Computing #Data Management #BI (Business Intelligence) #Scrum #AWS (Amazon Web Services) #Data Pipeline #Data Ingestion #GitHub #Data Governance #Redshift #Data Lineage #Big Data #SQL (Structured Query Language) #Datasets #RDS (Amazon Relational Database Service) #Cloud #Visualization #Tableau #Data Modeling #Microservices #Metadata #Programming #Data Engineering #ETL (Extract, Transform, Load) #IAM (Identity and Access Management)
Role description
Our direct client has a fully remote contract-to-hire opportunity for a Senior Data Engineer.
We are seeking a highly skilled Analytics Data Engineer with deep expertise in building scalable data solutions on the AWS platform. The ideal candidate is a 10/10 expert in Python and PySpark, with strong working knowledge of SQL. This engineer will play a critical role in translating business and end-user needs into robust analytics products, spanning ingestion, transformation, curation, and enablement for downstream reporting and visualization.
You will work closely with both business stakeholders and IT teams to design, develop, and deploy advanced data pipelines and analytical capabilities that power enterprise decision-making.
Key Responsibilities
Data Engineering & Pipeline Development
• Design, develop, and optimize scalable data ingestion pipelines using Python, PySpark, and AWS native services.
• Build end-to-end solutions to move large-scale big data from source systems into AWS environments (e.g., S3, Redshift, DynamoDB, RDS).
• Develop and maintain robust data transformation and curation processes to support analytics, dashboards, and business intelligence tools.
• Implement best practices for data quality, validation, auditing, and error-handling within pipelines.
Analytics Solution Design
• Collaborate with business users to understand analytical needs and translate them into technical specifications, data models, and solution architectures.
• Build curated datasets optimized for reporting, visualization, machine learning, and self-service analytics.
• Contribute to solution design for analytics products leveraging AWS services such as AWS Glue, Lambda, EMR, Athena, Step Functions, Redshift, Kinesis, Lake Formation, etc.
Cross-Functional Collaboration
• Work with IT and business partners to define requirements, architecture, and KPIs for analytical solutions.
• Participate in Daily Scrum meetings, code reviews, and architecture discussions to ensure alignment with enterprise data strategy and coding standards.
• Provide mentorship and guidance to junior engineers and analysts as needed.
Engineering (Supporting Skills)
• Employ strong skills in Python, PySpark, and SQL to support data engineering tasks, broader system integration requirements, and application-layer needs.
• Implement scripts, utilities, and microservices as needed to support analytics workloads.
Required Qualifications
• 5+ years of professional experience in data engineering, analytics engineering, or full-stack data development roles.
• Expert-level proficiency (10/10) in:
  • Python
  • PySpark
• Strong working knowledge of:
  • SQL and other programming languages
• Demonstrated experience designing and delivering big-data ingestion and transformation solutions on AWS.
• Hands-on experience with AWS services such as Glue, EMR, Lambda, Redshift, S3, Kinesis, CloudFormation, IAM, etc.
• Strong understanding of data warehousing, ETL/ELT, distributed computing, and data modeling.
• Ability to partner effectively with business stakeholders and translate requirements into technical solutions.
• Strong problem-solving skills and the ability to work independently in a fast-paced environment.
Preferred Qualifications
• Experience with BI/visualization tools such as Tableau.
• Experience building CI/CD pipelines for data products (e.g., Jenkins, GitHub Actions).
• Familiarity with machine learning workflows or MLOps frameworks.
• Knowledge of metadata management, data governance, and data lineage tools.






