

Data Warehouse Engineer
Featured Role | Apply directly with Data Freelance Hub
This is a 6-month contract role for a Data Warehouse Engineer, offering a pay rate of "$X/hour". Key skills include Azure Data Lake Storage, Azure Databricks, ETL/ELT processes, and strong programming in Python or SQL. Requires 5+ years of Data Engineering experience.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 8, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: Austin, TX
Skills detailed: #Java #ETL (Extract, Transform, Load) #Programming #Data Lake #Scala #Data Modeling #Data Access #Azure ADLS (Azure Data Lake Storage) #SQL (Structured Query Language) #Python #ADLS (Azure Data Lake Storage) #Azure Databricks #Compliance #Data Pipeline #Data Warehouse #Data Storage #Azure #Databricks #Computer Science #Data Engineering #Data Integrity #Storage #Data Architecture #Data Science #Leadership #Data Governance
Role description
The ideal candidate will be instrumental in designing and implementing robust data pipelines, ensuring the integrity and accessibility of data across our organization.
RESPONSIBILITIES:
• Design, build, and maintain efficient and reliable data pipelines to move and transform data, in both large and small volumes, within our Azure ecosystem.
• Work closely with Azure Data Lake Storage, Azure Databricks, and Azure Data Explorer to manage and optimize data processes.
• Develop and maintain scalable ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes (an illustrative sketch follows this list).
• Ensure the seamless integration and compatibility of data solutions with the Databricks Unity Catalog Data Warehouse and adhere to general data warehousing principles.
• Collaborate with data scientists, analysts, and other stakeholders to support data-centric needs.
• Implement data governance and quality processes, ensuring data integrity and compliance.
• Optimize data flow and collection for cross-functional teams.
• Provide technical leadership and mentorship to junior team members.
• Stay current with industry trends and developments in data architecture and processing.
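For context, below is a minimal sketch of the kind of pipeline step these responsibilities describe: extract raw files from Azure Data Lake Storage, apply a light transform in Azure Databricks with PySpark, and load the result into a Unity Catalog Delta table. It is not part of the role description; the storage paths, column names, and the target table name main.sales.orders_bronze are hypothetical placeholders, and it assumes a Databricks workspace with Unity Catalog enabled.

# Illustrative ADLS-to-Unity-Catalog ETL step (hypothetical names throughout).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Extract: read raw files landed in Azure Data Lake Storage (ADLS Gen2, abfss://).
raw = (
    spark.read.format("json")
    .load("abfss://raw@examplestorage.dfs.core.windows.net/orders/2025/08/")
)

# Transform: deduplicate, enforce types, and drop records without a key.
orders = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())
)

# Load: append into a Unity Catalog managed Delta table (catalog.schema.table).
(
    orders.write.format("delta")
    .mode("append")
    .saveAsTable("main.sales.orders_bronze")
)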
REQUIREMENTS:
• Bachelor's or master's degree in Computer Science, Engineering, or a related field.
• Minimum of 5 years of experience in a Data Engineering role.
• Strong expertise in the Azure data ecosystem, including Azure Data Lake Storage, Azure Databricks, and Azure Data Explorer.
• Proficiency with the Databricks Data Warehouse and a solid understanding of data warehousing principles.
• Experience with ETL and ELT processes and tools.
• Strong programming skills in languages such as Python, SQL, Scala, or Java.
• Experience with data modeling, data access, and data storage techniques.
• Ability to work in a fast-paced environment and manage multiple projects simultaneously.
• Excellent problem-solving skills and attention to detail.
• Strong communication and teamwork skills.