Vertek Solutions, Inc.

Consultant Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Consultant Data Engineer in Nashville, TN (Hybrid) for an 18-24 month contract, offering a competitive pay rate. Required skills include Snowflake, Fivetran, dbt, Python, and SQL, with 4+ years of relevant experience.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
February 21, 2026
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Nashville Metropolitan Area
🧠 - Skills detailed
#Data Warehouse #Scala #Snowflake #Data Analysis #Airflow #Cloud #Data Modeling #Monitoring #Spark (Apache Spark) #Data Quality #Documentation #Python #BI (Business Intelligence) #SQL (Structured Query Language) #Data Governance #Big Data #Fivetran #Data Pipeline #dbt (data build tool) #ETL (Extract, Transform, Load) #Automation #Data Processing #Data Engineering
Role description
Position: Consultant Data Engineer
Location: Nashville, TN (Hybrid), 3 days onsite, 2 remote preferred
Duration: 18-24 month contract

Who you'll work with:
We are seeking a Data Engineer to join our dynamic Data Team. This role is ideal for a data engineer with experience in Snowflake, Fivetran, dbt, Python, and SQL who thrives on ingesting and transforming data from diverse sources, ensuring it is optimized for analytics and business intelligence. The ideal candidate is a hands-on problem solver, eager to learn, and a strong team player who contributes to a collaborative, data-driven culture.

Required Qualifications
• 4+ years of experience in data engineering, data warehousing, or related fields.
• Proficiency in Snowflake, dbt, Fivetran, Python, and SQL for data transformation, automation, and pipeline development.
• Experience with data modeling techniques and best practices for modern cloud-based data warehouses.

Preferred Qualifications
• Experience with orchestration tools (e.g., Airflow) and data governance frameworks.
• Familiarity with CI/CD for data pipelines and infrastructure-as-code principles.
• Exposure to big data processing frameworks (e.g., Spark) is a plus.

What you'll do:
• Design, build, and maintain scalable data pipelines to ingest and integrate data from multiple sources using Fivetran and Snowflake.
• Develop and optimize dbt transformation models to structure and prepare data for analytics and consumption (see the sketch after this list).
• Ensure data quality, integrity, and reliability through robust testing, monitoring, and governance practices.
• Implement data modeling best practices in a data warehouse environment, optimizing for performance and usability.
• Work closely with data analysts, engineers, and business teams to understand data requirements and improve accessibility.
• Collaborate on improving internal data engineering processes, documentation, and best practices.
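To illustrate the dbt work described above, here is a minimal sketch of the kind of staging model this role would build on top of Fivetran-loaded Snowflake data. It is not taken from the posting: the source and column names (raw.orders, order_id, ordered_at) are hypothetical, and it assumes a raw source has been declared in the project's sources YAML.

```sql
-- models/staging/stg_orders.sql
-- Minimal dbt staging model (illustrative only): renames and types raw
-- columns so downstream marts can rely on a consistent shape.
{{ config(materialized='view') }}

select
    id                             as order_id,
    customer_id,
    cast(ordered_at as timestamp)  as ordered_at,
    status
from {{ source('raw', 'orders') }}   -- presumes a 'raw' source defined in YAML
where id is not null                 -- drop rows with no primary key
```

In practice, the data-quality responsibility listed above would typically be covered by dbt schema tests (e.g., unique and not_null on order_id) declared in the model's YAML, run as part of CI alongside the pipeline.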