

Data Engineer/Analytics Engineer (TEMP)
Our Pay Rate Range reflects the cost of labor across several US geographic markets. The pay rate for this position ranges from $63.51/hr in our lowest geographic market up to $94.52/hr in our highest geographic market. Pay Rate is based on a number of factors including market location and may vary depending on job-related knowledge, skills, and experience.
Supporting the data product team in the development, deployment, maintenance, and enhancement of data assets and data pipelines that support the broader VINCI organization, with a potential specialization in data science and decision science data applications.
Hyper-focused on the following three technologies (3+ years of experience):
- Data build tool (dbt)
- Apache Airflow
- AWS Redshift
ABOUT THIS ROLE
You'll help build and maintain the data infrastructure that powers our client's analytics capabilities. Working at the intersection of data engineering and analytics, you'll design scalable data models, develop efficient ETL processes, and collaborate with stakeholders to transform raw data into actionable insights. Your work will directly impact business decision-making and enhance the listener experience through data-driven improvements.
ABOUT YOU
You're passionate about building well-structured data models and writing clean, efficient code. You thrive in collaborative environments where you can share your technical expertise while learning from others. You have a curious mindset that drives you to continuously improve data processes and find innovative solutions to complex data problems. You're excited about working with modern data stack technologies and have a commitment to data quality and reliability.
As an Analytics Engineer, you will...
• Design, implement, and maintain data models using dbt to transform raw data into analytics-ready datasets
• Create and optimize data pipelines using Airflow to ensure reliable and timely data delivery
• Collaborate with data scientists, analysts, and business stakeholders to understand requirements and deliver appropriate data solutions
• Establish testing frameworks and data quality checks to maintain data integrity throughout the pipeline
• Apply software engineering best practices to data engineering workflows to improve code quality and maintainability
BASIC QUALIFICATIONS
• Bachelor's degree in Computer Science, Engineering, Mathematics, or related field
• 2+ years of experience building data pipelines and data models
• Experience with dbt materializations, including view, table, and incremental models
• Experience with Airflow including DAG creation and backfill processes
• Experience with SQL and data warehouse technologies like Redshift
• Proficiency in Python including package management and modular programming
• Experience managing code with Git and providing feedback in code reviews
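As a rough illustration of the dbt experience listed above, an incremental model typically looks like the following sketch; the source (`raw.events`), model, and column names here are hypothetical, not part of the actual codebase:

```sql
-- models/stg_events.sql (hypothetical model and column names)
{{ config(materialized='incremental', unique_key='event_id') }}

select
    event_id,
    user_id,
    event_type,
    created_at
from {{ source('raw', 'events') }}

{% if is_incremental() %}
  -- on incremental runs, only process rows newer than what is already loaded
  where created_at > (select max(created_at) from {{ this }})
{% endif %}
```

On the first run dbt builds the full table; on subsequent runs the `is_incremental()` block restricts the scan to new rows, which is the pattern the incremental-models qualification refers to.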
PREFERRED QUALIFICATIONS
• Experience with Jinja templating in dbt and advanced dbt usage such as testing and versioning models
• Knowledge of data warehouse optimization techniques including distribution and sort keys in Redshift
• Experience with dynamic DAG generation using JSON or dictionaries in Airflow
• Experience with object-oriented programming in Python
• Experience working in an agile development environment
• Knowledge of data visualization tools like Tableau or Looker
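The Redshift optimization knowledge mentioned above (distribution and sort keys) comes down to DDL choices like the following sketch; the schema, table, and column names are hypothetical:

```sql
-- Hypothetical fact table: distribute on the join key, sort on the filter key.
create table analytics.fact_plays (
    play_id     bigint,
    user_id     bigint,
    track_id    bigint,
    played_at   timestamp
)
diststyle key
distkey (user_id)      -- co-locates rows with other tables distributed on user_id
sortkey (played_at);   -- range-restricted scans on played_at can skip blocks
```

Choosing a distribution key that matches common join columns avoids cross-node data shuffling, and a sort key on the most common filter column lets Redshift prune blocks during scans.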