

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer on a 2-month remote contract, requiring 5+ years of experience and proficiency in Databricks, Python, SQL, and ETL tooling. Experience with data manipulation strategies and agile methodologies is essential.
Country
United States
Currency
$ USD
Day rate
520
Date discovered
September 19, 2025
Project duration
1 to 3 months
Location type
Remote
Contract type
Unknown
Security clearance
Unknown
Location detailed
United States
Skills detailed
#AI (Artificial Intelligence) #Requirements Gathering #Tableau #Lambda (AWS Lambda) #Azure Data Factory #AWS (Amazon Web Services) #Python #SQL (Structured Query Language) #Data Manipulation #BI (Business Intelligence) #AWS Glue #Data Integrity #Documentation #Spark (Apache Spark) #Data Pipeline #ADF (Azure Data Factory) #SQL Queries #Databricks #Informatica #Microsoft Power BI #ETL (Extract, Transform, Load) #Agile #Data Enrichment #Metadata #Azure #Talend #DataStage #Data Engineering #Consulting #Data Quality #Kafka (Apache Kafka) #Hadoop #Data Ingestion
Role description
Title: Data Engineer
Duration: 2-month contract (possible extension)
Location: Remote
Must Have: Databricks, Python, SQL, ETL Tooling experience
Our client within the Technology Consulting space is looking to bring on a Data Engineer for a 2-month contract to help oversee and deliver a wide variety of data/AI-focused projects centered on their client-facing products.
Responsibilities:
• Support projects leveraging an agile delivery model through requirements gathering and documentation, data ingestion pipeline development, and technical testing.
• Coordinate activities with data source application owners to understand the underlying data and to ensure integration and data integrity.
• Contribute to project delivery in the following ways:
  • Gather technical requirements and develop solutions in collaboration with the DE&A team
  • Add configurations for data enrichments such as lookups, type conversions, calculations, and concatenations (see the sketch after this list)
  • Add data quality rules
  • Connect to new data sources
  • Map multiple sources into a single output
  • Interpret error messages to troubleshoot data pipelines
  • Create outputs that facilitate reporting, dashboarding, and semantic modeling for tools such as Tableau and Power BI
  • Test data pipelines to ensure accuracy
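For illustration only, here is a minimal PySpark sketch of the kind of enrichment and data quality work named in the list above. This is not the client's actual framework; every table and column name is a hypothetical placeholder.

```python
# Minimal sketch of enrichment configurations: a lookup join, a type
# conversion, a calculated column, a concatenation, and one data quality
# rule. All table and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("enrichment-sketch").getOrCreate()

orders = spark.read.table("raw.orders")          # hypothetical source table
regions = spark.read.table("ref.region_lookup")  # hypothetical lookup table

enriched = (
    orders
    # Lookup: enrich each order with its region attributes.
    .join(regions, on="region_code", how="left")
    # Type conversion: raw amounts arrive as strings.
    .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
    # Calculation: derive a taxed total (illustrative rate).
    .withColumn("amount_with_tax", F.col("amount") * F.lit(1.07))
    # Concatenation: build a composite business key.
    .withColumn("order_key", F.concat_ws("-", F.col("customer_id"), F.col("order_id")))
)

# Data quality rule: keep valid rows, quarantine the rest for review.
valid = enriched.filter(F.col("amount").isNotNull() & (F.col("amount") >= 0))
rejects = enriched.subtract(valid)  # simple set difference for the sketch

valid.write.mode("overwrite").saveAsTable("curated.orders_enriched")
rejects.write.mode("append").saveAsTable("quarantine.orders_rejects")
```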
Qualifications:
• 5+ years of hands-on professional development experience; consulting experience preferred.
• 3+ years of experience working with Databricks.
• Experience writing complex SQL queries related to data manipulation strategies.
• Experience building or working with metadata-driven ETL/ELT patterns and concepts (a minimal sketch follows this list).
• Experience with Spark, Databricks, Kafka, Kinesis, Hadoop, and/or Lambda.
• Exposure to ETL tools such as Azure Data Factory, AWS Glue, Informatica, Talend, IBM DataStage, etc.
• Experience in technical testing, issue identification, and resolution.
• Experience with AWS is a plus.
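Similarly, here is a minimal sketch of a metadata-driven ETL/ELT pattern, assuming a Databricks/PySpark runtime. The config record, table names, and SQL are hypothetical placeholders, not the client's implementation.

```python
# Minimal sketch of a metadata-driven ETL pattern: pipeline behavior is
# described in a metadata record rather than hard-coded. In practice the
# metadata would live in a control table, not inline. All names hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("metadata-etl-sketch").getOrCreate()

# One metadata record per feed: source, target, load mode, and transform.
pipeline_config = {
    "source_table": "raw.customers",
    "target_table": "curated.customers",
    "load_mode": "overwrite",
    "transform_sql": """
        SELECT
            customer_id,
            UPPER(TRIM(customer_name)) AS customer_name,
            CAST(signup_date AS DATE)  AS signup_date
        FROM {source}
        WHERE customer_id IS NOT NULL
    """,
}

def run_pipeline(cfg: dict) -> None:
    """Generic runner: the same code executes any feed the metadata describes."""
    sql = cfg["transform_sql"].format(source=cfg["source_table"])
    df = spark.sql(sql)
    df.write.mode(cfg["load_mode"]).saveAsTable(cfg["target_table"])

run_pipeline(pipeline_config)
```

The point of the pattern is that onboarding a new feed means adding a metadata record, not writing new pipeline code.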