Senior Data Warehouse Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Warehouse Engineer on a 12-month contract, hybrid in Austin, TX. It requires 5+ years in Data Engineering, expertise in the Azure ecosystem, Databricks, and ETL/ELT processes, and strong Python/SQL skills. Bachelor's degree required.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
August 8, 2025
πŸ•’ - Project duration
More than 6 months
🏝️ - Location type
Hybrid
πŸ“„ - Contract type
W2 Contractor
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
Austin, Texas Metropolitan Area
🧠 - Skills detailed
#BI (Business Intelligence) #Java #ETL (Extract, Transform, Load) #Programming #Data Lake #Scala #Data Modeling #Data Access #Azure ADLS (Azure Data Lake Storage) #SQL (Structured Query Language) #Python #ADLS (Azure Data Lake Storage) #Azure Databricks #Compliance #Data Pipeline #Data Warehouse #Data Storage #Azure #Databricks #Computer Science #Data Engineering #Data Integrity #Microsoft Power BI #Storage #Data Architecture #Data Science #Leadership #Data Governance
Role description
Note: This is a W2-only role. C2C candidates and candidates seeking sponsorship, please excuse.

Title: Senior Data Warehouse Engineer
Duration: 12 months
Location: Austin, TX - Hybrid (onsite 3 days a week)

Interviews:
β€’ One screening call, followed by two panel interviews.
β€’ Virtual interviews are fine, but they must be conducted on video.
β€’ Candidates also need to interview from a location with good connectivity.

Top 3 skills:
β€’ Databricks Data Warehouse and a solid understanding of data warehousing principles.
β€’ Developing and maintaining scalable ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes.
β€’ Programming skills in languages such as Python, SQL, Scala, or Java; Python and SQL are the most important.
β€’ Power BI knowledge would be beneficial, but is not a requirement.

Responsibilities:
β€’ Design, build, and maintain efficient and reliable data pipelines to move and transform data (both large and small volumes) within our Azure ecosystem (see the illustrative sketch after this description).
β€’ Work closely with Azure Data Lake Storage, Azure Databricks, and Azure Data Explorer to manage and optimize data processes.
β€’ Develop and maintain scalable ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) processes.
β€’ Ensure the seamless integration and compatibility of data solutions with the Databricks Unity Catalog data warehouse, adhering to general data warehousing principles.
β€’ Collaborate with data scientists, analysts, and other stakeholders to support data-centric needs.
β€’ Implement data governance and quality processes, ensuring data integrity and compliance.
β€’ Optimize data flow and collection for cross-functional teams.
β€’ Provide technical leadership and mentorship to junior team members.
β€’ Stay current with industry trends and developments in data architecture and processing.

Requirements:
β€’ Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
β€’ Minimum of 5 years of experience in a Data Engineering role.
β€’ Strong expertise in the Azure data ecosystem, including Azure Data Lake Storage, Azure Databricks, and Azure Data Explorer.
β€’ Proficiency with Databricks Data Warehouse and a solid understanding of data warehousing principles.
β€’ Experience with ETL and ELT processes and tools.
β€’ Strong programming skills in languages such as Python, SQL, Scala, or Java.
β€’ Experience with data modeling, data access, and data storage techniques.
β€’ Ability to work in a fast-paced environment and manage multiple projects simultaneously.
β€’ Excellent problem-solving skills and attention to detail.
β€’ Strong communication and teamwork skills.
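For candidates gauging fit, here is a minimal, illustrative sketch of the kind of pipeline this role describes: extracting raw files from Azure Data Lake Storage in Azure Databricks with PySpark, applying light transformations, and loading a Delta table registered in Unity Catalog. The storage account, container, table, and column names below are hypothetical placeholders, not details from this posting.

# Minimal illustrative ETL sketch for Azure Databricks (PySpark).
# All paths, table names, and columns are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# In a Databricks notebook, `spark` is already provided; this also works standalone.
spark = SparkSession.builder.getOrCreate()

# Extract: read raw CSV files landed in an ADLS Gen2 container (abfss:// is the standard scheme).
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://landing@examplestorageacct.dfs.core.windows.net/sales/2025/")
)

# Transform: basic cleanup -- typed columns, deduplication, and a load timestamp.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
       .dropDuplicates(["order_id"])
       .withColumn("_loaded_at", F.current_timestamp())
)

# Load: write a Delta table governed by Unity Catalog (catalog.schema.table naming).
clean.write.format("delta").mode("overwrite").saveAsTable("main.sales.orders_clean")

An ELT variant of the same flow would land the raw files unchanged and push the transformation step into Databricks SQL, for example with a MERGE INTO statement against the target Delta table.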