

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a 22-month contract, hybrid in Orlando, FL, paying $75-$85/hr. Requires 3+ years in data engineering, proficiency in Python and DBT, and experience with cloud environments and CI/CD tools.
Country: United States
Currency: $ USD
Day rate: 680
Date discovered: August 27, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: Orlando, FL
Skills detailed: #Cloud #Automation #Airflow #pydantic #Snowflake #DevOps #AWS (Amazon Web Services) #SQL (Structured Query Language) #Data Quality #Data Engineering #Data Modeling #Deployment #BigQuery #Documentation #GitHub #Azure #Python #Scala #ETL (Extract, Transform, Load) #Data Pipeline #Data Warehouse #dbt (data build tool) #GCP (Google Cloud Platform)
Role description
Compensation:
$75/hr to $85/hr.
Exact compensation may vary based on several factors, including skills, experience, and education.
Position: Data Engineer
Location: Hybrid in Orlando, FL (1 day/week)
Duration: 22-month contract
Required Qualifications:
3+ years of experience in data engineering or data operations.
Proficiency in Python, especially with Pydantic for data validation.
Hands-on experience with DBT for data modeling.
Familiarity with GitHub Actions or similar CI/CD tools.
Strong understanding of data contracts and schema evolution (a minimal compatibility check is sketched after this list).
Experience working in cloud environments (e.g., AWS, GCP, Azure).
Knowledge of SQL and modern data warehouses (e.g., Snowflake, BigQuery).
Background in supporting data engineering teams or managing internal tooling.
Familiarity with orchestration tools like Airflow or Prefect.
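For illustration only (not part of the posting's requirements), here is a minimal sketch of the kind of schema-evolution check a data-contract workflow might include, assuming Pydantic v2; the OrderV1/OrderV2 contracts and the compatibility rule are hypothetical examples:

```python
# Hypothetical contracts; names and the compatibility rule are illustrative only.
from pydantic import BaseModel


class OrderV1(BaseModel):
    order_id: str
    amount_usd: float


class OrderV2(BaseModel):
    order_id: str
    amount_usd: float
    currency: str = "USD"  # new field with a default: additive, non-breaking


def is_backward_compatible(old: type[BaseModel], new: type[BaseModel]) -> bool:
    """New contract must keep every old field and only add non-required ones."""
    old_fields, new_fields = old.model_fields, new.model_fields  # Pydantic v2 API
    if any(name not in new_fields for name in old_fields):
        return False  # a field was removed or renamed: breaking change
    added = set(new_fields) - set(old_fields)
    return all(not new_fields[name].is_required() for name in added)


print(is_backward_compatible(OrderV1, OrderV2))  # True: only an optional field was added
```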
Day To Day:
We are seeking a Data Engineer / Data Ops Specialist to join our growing data team. This role will focus on building scalable data models using DBT, automating data pipelines based on Pydantic data contracts, and integrating CI/CD workflows using GitHub Actions. The ideal candidate will have a strong background in Python, data modeling, and DevOps practices, and will also provide support to data engineering users across the organization.
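As a rough sketch of what "data pipelines based on Pydantic data contracts" can look like in practice (the OrderEvent model and its fields below are assumptions, not taken from this posting), a contract might validate raw records before they reach the DBT layer:

```python
# Minimal sketch of a Pydantic data contract used to validate incoming rows.
# OrderEvent and its fields are hypothetical examples, not from the posting.
from datetime import datetime
from pydantic import BaseModel, Field, ValidationError


class OrderEvent(BaseModel):
    order_id: str = Field(min_length=1)
    customer_id: str = Field(min_length=1)
    amount_usd: float = Field(ge=0)
    created_at: datetime


def validate_batch(rows: list[dict]) -> list[OrderEvent]:
    """Validate raw pipeline rows against the contract; fail loudly if any row breaks it."""
    valid, failures = [], []
    for row in rows:
        try:
            valid.append(OrderEvent(**row))
        except ValidationError as exc:
            failures.append((row, exc))
    if failures:
        raise ValueError(f"{len(failures)} row(s) violated the OrderEvent contract")
    return valid
```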
Responsibilities:
Data Modeling & Transformation
• Design and implement robust DBT models based on Pydantic data contracts.
• Ensure data quality, consistency, and documentation across models and pipelines.
Automation & CI/CD
• Develop and maintain automated workflows using GitHub Actions for data pipeline deployment and validation.
• Integrate data contracts into the Python codebase to drive dynamic DBT model generation (see the sketch after this list).
Data Engineering Support
• Provide technical support and guidance to data engineers on tooling, best practices, and troubleshooting.
• Collaborate with cross-functional teams to understand data requirements and deliver scalable solutions.
Process Optimization
• Identify opportunities to streamline data operations and improve pipeline efficiency.
• Contribute to the development of internal tools and frameworks for data contract management.
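To make the "dynamic DBT model generation" bullet above concrete, one possible sketch follows, assuming Pydantic v2 and PyYAML; the OrderEvent contract and the stg_orders model name are illustrative, and a real implementation would depend on the team's conventions:

```python
# Sketch: derive a dbt schema.yml entry (with not_null tests on required fields)
# from a Pydantic contract. All names here are hypothetical examples.
import yaml
from pydantic import BaseModel


class OrderEvent(BaseModel):
    order_id: str
    amount_usd: float
    notes: str | None = None


def contract_to_dbt_schema(contract: type[BaseModel], dbt_model_name: str) -> str:
    """Render a dbt schema.yml snippet whose columns mirror the contract's fields."""
    schema = contract.model_json_schema()  # Pydantic v2 API
    required = set(schema.get("required", []))
    columns = []
    for name in schema["properties"]:
        column = {"name": name}
        if name in required:
            column["tests"] = ["not_null"]
        columns.append(column)
    doc = {"version": 2, "models": [{"name": dbt_model_name, "columns": columns}]}
    return yaml.safe_dump(doc, sort_keys=False)


print(contract_to_dbt_schema(OrderEvent, "stg_orders"))
```

A CI job (for example, one run by GitHub Actions) could execute this kind of script and fail the build when the generated schema drifts from the checked-in DBT models.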
Benefit packages for this role will start on the 31st day of employment and include medical, dental, and vision insurance, as well as HSA, FSA, and DCFSA account options, and 401k retirement account access with employer matching. Employees in this role are also entitled to paid sick leave and/or other paid time off as provided by applicable law.