

Neos Consulting Group
Sr. Data Engineer (Azure Data Services/Databricks)
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer (Azure Data Services/Databricks) on a long-term contract in Austin, TX. Requires 10+ years IT experience, 5+ years with Azure, proficiency in Python, SQL, and Databricks, and strong ETL optimization skills.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 16, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Austin, TX
-
🧠 - Skills detailed
#Snowflake #Data Lake #Storage #Cloud #Agile #GitHub #Spark (Apache Spark) #SQL (Structured Query Language) #ADLS (Azure Data Lake Storage) #Azure cloud #Scala #Azure Blob Storage #Data Pipeline #Synapse #Microsoft Power BI #API (Application Programming Interface) #ML (Machine Learning) #Databricks #Data Engineering #Data Warehouse #Power Automate #Azure SQL #Azure #PySpark #Vault #Scrum #BI (Business Intelligence) #Python #Security #DevOps #ETL (Extract, Transform, Load) #AI (Artificial Intelligence) #Migration
Role description
Neos is seeking a Sr. Data Engineer (Azure Data Services/Databricks) for a long-term contract role with our client in Austin, TX.
• HYBRID – ONLY CANDIDATES CURRENTLY RESIDING IN THE AUSTIN, TX AREA (within 50 miles) WILL BE CONSIDERED
• This role is HYBRID - 3 days onsite (Austin, TX 78723)
No calls, no emails, please respond directly to the “apply” link with your resume and contact details.
Job Description
We are seeking a Senior Data Engineer with extensive hands-on experience in Azure data services and Databricks.
Key Skill Set: Azure data services, Databricks, Microsoft Fabric, Python, SQL, PySpark, GitHub
Department/Project: Enterprise Data & Analytics/EDM Team
Responsibilities
• Design and implement data solutions using Azure services (e.g., Synapse Analytics, Databricks, Snowflake, Azure SQL, Azure Blob Storage, ADLS Gen2, Azure Functions, Azure Key Vault, AI/ML).
• Develop data pipelines and transformations (ETL/ELT).
• Collaborate with stakeholders to understand data requirements and deliver solutions.
• Ensure data solutions are scalable, reliable, and secure.
• Implement CI/CD and DevOps practices.
• Maintain and publish code to GitHub.
• Follow PMO/Scrum Master direction using Agile/Scrum methodologies.
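The pipeline work described above follows the standard extract-transform-load (ETL) pattern. The sketch below is a purely illustrative, dependency-free Python example of that pattern; the record layout, field names, and cleaning rules are hypothetical assumptions, not part of the role or the client's systems.

```python
# Minimal sketch of the ETL pattern (extract -> transform -> load).
# All field names (customer_id, region, amount) are illustrative assumptions.

def extract(rows):
    """Extract: read raw source records (here, an in-memory list)."""
    return list(rows)

def transform(records):
    """Transform: normalize fields and drop rows missing the business key."""
    cleaned = []
    for rec in records:
        if rec.get("customer_id") is None:
            continue  # skip rows with no business key
        cleaned.append({
            "customer_id": rec["customer_id"],
            "region": rec.get("region", "unknown").strip().lower(),
            "amount_usd": round(float(rec.get("amount", 0)), 2),
        })
    return cleaned

def load(records, target):
    """Load: append curated rows to a target store (here, a list)."""
    target.extend(records)
    return len(records)

raw = [
    {"customer_id": 1, "region": " West ", "amount": "19.995"},
    {"customer_id": None, "region": "East", "amount": "5"},
]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
print(loaded)                  # 1 (the keyless row is dropped)
print(warehouse[0]["region"])  # west
```

In a Databricks/PySpark setting the same shape appears as DataFrame reads, transformations, and writes, but the separation of the three stages is the portable idea.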
Requirements
Must Have
• 10+ years of IT experience, including a minimum of 5 years as a Data Engineer
• 5+ years of experience with the Azure cloud platform
• Proven experience with Azure data services (e.g., Azure Key Vault, Azure Blob Storage, ADLS Gen2, Databricks pipelines, Log Analytics, Logic Apps, Purview, Azure Functions)
• Experience with Databricks, Microsoft Fabric, Power Automate, Power BI, and SQL
• Proficiency in Python, SQL, PySpark, and API development
• Expertise in optimizing ETL processes and managing data warehouses and data lakes
• Experience with CI/CD and DevOps practices
• Experience with structured, unstructured, and semi-structured data
• Experience with maintaining and publishing code to GitHub
• Ability to work in a cross-functional team and coordinate with Infra/Ops and Information Security departments
• Experience with requirement gathering and Scrum methodologies
Good to Have
• Experience with on-premises to Azure cloud migration
• Knowledge of Snowflake, Microsoft Fabric, Microsoft Purview, AI/ML, streaming data services, and marketplace data services
• Relevant certifications
• Experience with multiple cloud implementations
#DICE






