

Amtec Inc.
Data Warehouse Engineer - REMOTE
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Warehouse Engineer, 6-month contract, fully remote, paying $70 - $74.54/hr. Requires 8+ years in data engineering, expertise in Redshift, AWS Glue, and ETL pipelines, plus a Bachelor’s/Master’s in Computer Science. US Citizenship/Green Card required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
592
🗓️ - Date
October 24, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#Data Warehouse #Data Engineering #Data Modeling #Dataiku #NoSQL #Database Design #Java #Storage #Scrum #Oracle #Data Governance #AWS (Amazon Web Services) #Data Cleansing #PySpark #Apache Kafka #Indexing #Data Science #Programming #Snowflake #Apache Airflow #AWS Glue #Kafka (Apache Kafka) #Scala #Fivetran #Databases #Python #Spark (Apache Spark) #Kanban #Data Quality #Computer Science #Amazon Redshift #Datasets #Redshift #Security #SQL (Structured Query Language) #Data Processing #Tableau #ETL (Extract, Transform, Load) #Agile #Base #Data Storage #SQL Server #Cloud #Alteryx #Documentation #Airflow #Data Pipeline
Role description
Job Title: Data Warehouse Developer/Engineer - REMOTE
Duration: 6 Months
Work Schedule: 9/80
Location: Falls Church, VA
Pay: $70 - $74.54/hr
This position can be FULL REMOTE (100%); candidates on the East Coast are preferred
- REDSHIFT IS REQUIRED
US Citizenship or Green Card required
About the Role
We’re seeking an experienced Data Warehouse Developer/Engineer to design, develop, and optimize large-scale data systems. In this role, you’ll focus on migrating data from a legacy system to an Amazon Redshift cloud-based data warehouse. You’ll also build robust ETL pipelines, develop data models, and ensure data quality across multiple cloud and on-prem environments. You’ll collaborate closely with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver high-impact data solutions.
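To give a concrete flavor of the migration work: legacy extracts are typically staged in S3 and bulk-loaded with Redshift's COPY command. A minimal sketch in Python with psycopg2 follows; the cluster endpoint, table, bucket, and IAM role names are all hypothetical placeholders, not details from this engagement.

```python
import psycopg2

# Hypothetical connection details; real credentials would come from a secrets manager.
conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="warehouse",
    user="etl_user",
    password="REPLACE_ME",
)

# COPY is Redshift's high-throughput ingestion path: it loads the staged
# Parquet files in parallel across the cluster's slices.
with conn, conn.cursor() as cur:
    cur.execute("""
        COPY staging.legacy_orders
        FROM 's3://example-bucket/exports/orders/'
        IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-copy'
        FORMAT AS PARQUET;
    """)
```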
Responsibilities
• Design and implement ETL (extract, transform, load) pipelines that ingest, transform, and store large datasets from various sources (a sketch follows this list)
• Build and maintain data warehouses, including data modeling, data governance, and data quality
• Ensure data quality, integrity, and security by implementing data validation, data cleansing, and data governance policies
• Optimize data systems for performance, scalability, and reliability
• Collaborate with customers to understand their technical requirements and provide guidance on best practices for using Amazon Redshift
• Work with cross-functional teams, including data scientists, analysts, and business stakeholders, to understand data requirements and deliver data solutions
• Provide technical support for Amazon Redshift, including troubleshooting, performance optimization, and data modeling
• Identify and resolve data-related issues, including data pipeline failures, data quality issues, and performance bottlenecks
• Develop technical documentation and knowledge base articles to help customers and AWS engineers troubleshoot common issues
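As a rough illustration of the pipeline work above, a minimal AWS Glue job in PySpark might look like the sketch below. The catalog database, table, and bucket names are hypothetical, and a production job would add the test cases and error handling the role calls for.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job boilerplate: resolve job arguments, set up contexts.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a source table registered in the Glue Data Catalog
# ("legacy_db" / "orders" are hypothetical names).
source = glue_context.create_dynamic_frame.from_catalog(
    database="legacy_db", table_name="orders"
)

# Light cleansing: drop rows missing a key, normalize a timestamp column.
df = (
    source.toDF()
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_ts", F.to_timestamp("order_ts"))
)

# Write cleansed Parquet to S3 for a downstream Redshift COPY.
df.write.mode("overwrite").parquet("s3://example-bucket/cleansed/orders/")

job.commit()
```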
Required Qualifications
• Bachelor's or Master's degree in Computer Science or a related field, with at least 6 years of experience in Information Technology
• Proficiency in one or more programming languages (e.g., Python, Java, Scala)
• 8+ years of experience in data engineering, with a focus on designing and implementing large-scale data systems
• 5+ years of hands-on experience writing complex, highly optimized queries across large datasets using Oracle, SQL Server, and Redshift (see the sketch after this list)
• 5+ years of hands-on experience using AWS Glue and Python/PySpark to build ETL pipelines in a production setting, including writing test cases
• Strong understanding of database design principles, data modeling, and data governance
• Proficiency in SQL, including query optimization, indexing, and performance tuning
• Experience with data warehousing concepts, including star and snowflake schemas
• Strong analytical and problem-solving skills, with the ability to break down complex problems into manageable components
• Experience with data storage solutions such as relational databases (Oracle, SQL Server), NoSQL databases, or cloud-based data warehouses (Redshift)
• Experience with data processing frameworks such as Apache Kafka and Fivetran
• Experience building ETL pipelines using AWS Glue, Apache Airflow, and programming languages including Python and PySpark
• Understanding of data quality and governance principles and best practices
• Experience with agile development methodologies such as Scrum or Kanban
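On the query side, "highly optimized" for Redshift usually means joining a large fact table to small dimensions and filtering on the fact table's sort key so the planner can skip blocks. A hedged sketch follows; the star-schema tables, columns, and key choices are hypothetical, not part of this posting.

```python
import psycopg2

QUERY = """
    SELECT d.region,
           SUM(f.amount) AS total_sales
    FROM fact_sales AS f               -- large fact table, SORTKEY (sale_date)
    JOIN dim_store AS d                -- small dimension, DISTSTYLE ALL
      ON f.store_id = d.store_id
    WHERE f.sale_date >= DATE '2025-01-01'  -- sort-key filter enables block pruning
    GROUP BY d.region
"""

conn = psycopg2.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    port=5439, dbname="warehouse", user="analyst", password="REPLACE_ME",
)
with conn, conn.cursor() as cur:
    # EXPLAIN shows the join strategy and whether the scan is range-restricted.
    cur.execute("EXPLAIN " + QUERY)
    for (line,) in cur.fetchall():
        print(line)
```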
Preferred Skills
• Experience with Dataiku
• Experience building reports with Power BI and Tableau
• Experience with Alteryx
• Relevant cloud certifications (e.g., AWS Certified Data Analytics - Specialty)
• Experience with AWS services and best practices
Education Preferred
• Bachelor's or Master's degree in Computer Science or a related field, with at least 8 years of experience in Information Technology
Additional Information:
• 100% Remote or Hybrid work arrangement, with occasional visits to the office