Vantage Point Consulting Inc.

Data Warehouse Engineer with Redshift

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Warehouse Engineer with Redshift, offering a 12-month+ contract in Alpharetta or Cincinnati (Hybrid). Requires 3+ years of data engineering experience, strong SQL skills, and expertise in Amazon Redshift. AWS Certification preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 10, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Alpharetta, GA
-
🧠 - Skills detailed
#RDS (Amazon Relational Database Service) #S3 (Amazon Simple Storage Service) #Data Warehouse #Athena #BI (Business Intelligence) #Matillion #Redshift #Scala #Microsoft Power BI #Python #Compliance #Snowflake #Cloud #Apache Airflow #ETL (Extract, Transform, Load) #Normalization #Lambda (AWS Lambda) #Data Architecture #Complex Queries #Data Quality #AWS Glue #Airflow #AWS (Amazon Web Services) #Amazon Redshift #IAM (Identity and Access Management) #Data Engineering #Data Pipeline #Security #Data Integration #SQL (Structured Query Language) #Tableau #SQL Queries #Data Modeling #dbt (data build tool) #Databases
Role description
Job Title: Data Warehouse Engineer with Redshift
Location: Alpharetta or Cincinnati (Hybrid)
Job Type: 12+ month contract
Job Summary:
We are looking for a skilled Data Warehouse Engineer with deep expertise in Amazon Redshift to design, build, and manage scalable data warehousing solutions. This role is critical for ensuring efficient data availability and performance for reporting, analytics, and business decision-making.
Key Responsibilities:
• Design, develop, and maintain scalable, high-performance data warehouse solutions on Amazon Redshift.
• Create and optimize complex SQL queries and stored procedures for ETL/ELT processes.
• Collaborate with data engineers, analysts, and stakeholders to understand data requirements and implement efficient data models.
• Implement data partitioning, distribution styles, and performance-tuning techniques in Redshift.
• Integrate data from various sources, including S3, RDS, APIs, and third-party applications.
• Ensure data quality, consistency, and integrity across data sources and pipelines.
• Monitor and troubleshoot data loads, query performance, and system health.
• Maintain and document data architecture, data flows, and technical standards.
Required Qualifications:
• 3+ years of experience in data engineering or data warehousing roles.
• Strong hands-on experience with Amazon Redshift, including query optimization and performance tuning.
• Proficiency in SQL, with the ability to write complex queries and debug performance issues.
• Experience with ETL/ELT tools (e.g., AWS Glue, Apache Airflow, dbt, Matillion).
• Familiarity with data modeling concepts (star/snowflake schemas, normalization/denormalization).
• Knowledge of data integration from diverse sources (flat files, APIs, databases).
• Experience with the AWS ecosystem, especially S3, Lambda, IAM, CloudWatch, and Athena.
• Understanding of security and compliance best practices for data handling.
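The data-quality responsibility above (ensuring consistency across sources and pipelines) can be sketched as a minimal row-count reconciliation gate. This is an illustrative assumption, not part of the posting: in a real pipeline the counts would come from, e.g., `SELECT COUNT(*)` against a staging table and the Redshift target, while here they are plain integers.

```python
# Hypothetical data-quality gate for a pipeline load: compares source and
# target row counts and flags drift beyond a tolerance. The function name
# and threshold are illustrative only.

def row_count_check(source_rows, target_rows, tolerance=0.0):
    """Return (ok, drift_ratio) for a load's row-count reconciliation."""
    if source_rows == 0:
        # An empty source is only consistent with an empty target.
        return target_rows == 0, 0.0
    drift = abs(source_rows - target_rows) / source_rows
    return drift <= tolerance, drift

# Example: 10 missing rows out of 10,000 is 0.1% drift, within a 0.5% tolerance.
ok, drift = row_count_check(10_000, 9_990, tolerance=0.005)
```

A check like this would typically run after each load and fail the pipeline (or raise an alert) when `ok` is false.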
Preferred Qualifications:
• Experience with Python or Scala for data transformation and orchestration.
• Experience with dbt (data build tool) for modeling in Redshift.
• AWS Certification (e.g., Data Analytics – Specialty or Solutions Architect – Associate).
• Familiarity with CI/CD practices for data pipelines.
• Exposure to BI tools such as Tableau, Power BI, or QuickSight.
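The "distribution styles and performance tuning" responsibility can be illustrated with a minimal sketch of Redshift-specific DDL: picking a `DISTKEY` so joins are co-located on one slice and a `SORTKEY` so range scans prune blocks. The table and column names (`sales`, `customer_id`, `sale_date`) are hypothetical, not from the posting.

```python
# Minimal sketch: build a Redshift CREATE TABLE statement with DISTSTYLE KEY,
# DISTKEY, and SORTKEY clauses. Names are illustrative only.

def redshift_create_table(table, columns, distkey=None, sortkeys=()):
    """Compose CREATE TABLE DDL with Redshift distribution/sort options."""
    cols = ",\n    ".join(f"{name} {ctype}" for name, ctype in columns)
    ddl = f"CREATE TABLE {table} (\n    {cols}\n)"
    if distkey:
        ddl += f"\nDISTSTYLE KEY\nDISTKEY ({distkey})"
    if sortkeys:
        ddl += f"\nSORTKEY ({', '.join(sortkeys)})"
    return ddl + ";"

ddl = redshift_create_table(
    "sales",
    [("sale_id", "BIGINT"), ("customer_id", "BIGINT"),
     ("sale_date", "DATE"), ("amount", "DECIMAL(12,2)")],
    distkey="customer_id",   # co-locate rows joined on customer_id
    sortkeys=("sale_date",), # prune blocks for date-range filters
)
```

In practice the key choices come from the workload: distribute on the highest-cardinality join column and sort on the most common filter column, then verify with `EXPLAIN`.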