SWITS DIGITAL Private Limited

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer in Spring, TX, with an unspecified contract length and pay rate. Key skills include Azure, Databricks, Delta Lake, and CI/CD automation; experience in the Oil & Gas domain is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 29, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Spring, TX
-
🧠 - Skills detailed
#Databricks #Storage #ACID (Atomicity, Consistency, Isolation, Durability) #ML (Machine Learning) #SQL (Structured Query Language) #Azure #Agile #Cloud #Data Governance #Data Engineering #Data Quality #PySpark #AI (Artificial Intelligence) #Python #Automation #Data Pipeline #Data Processing #Snowflake #Delta Lake #Data Science #ETL (Extract, Transform, Load) #Azure cloud #Datasets #Scala #Spark (Apache Spark) #Scrum #Version Control
Role description
Job Description

Greetings from Smartwork IT Services! We are seeking a Senior Data Engineer with strong experience in cloud and on-premise data platforms for an on-site role in Spring, Texas.

Job Title: Senior Data Engineer
Location: Spring, TX (On-Site)
Domain: Oil & Gas (Good to Have)

Job Summary
We are seeking a highly skilled Senior Data Engineer to lead the design, development, automation, and optimization of scalable data infrastructure supporting enterprise applications, advanced analytics, and AI/ML workloads. The ideal candidate will have deep expertise in Azure-based data platforms, Databricks Lakehouse architecture, and CI/CD automation, and will collaborate closely with cross-functional teams in an agile environment.

Required Skills & Qualifications
• Proven experience as a Senior Data Engineer in enterprise-scale environments
• Strong hands-on experience with Azure cloud services
• Expertise in Databricks, Delta Lake, and Snowflake
• Advanced proficiency in PySpark, Python, and SQL
• Strong experience designing and optimizing ETL/ELT pipelines
• In-depth knowledge of Databricks Lakehouse architecture, including:
  • Schema evolution
  • ACID transactions
  • Data quality enforcement
  • Cost and performance optimization for large-scale datasets
• Experience implementing Bronze/Silver/Gold Lakehouse architectures
• Strong understanding of data governance, data quality frameworks (e.g., Great Expectations), and performance tuning
• Hands-on experience with CI/CD automation, version control, and testing frameworks for data pipelines
• Experience working in Agile/Scrum environments
• Strong analytical, problem-solving, and communication skills
• Oil & Gas domain experience is a plus

Key Responsibilities
• Design, build, and optimize scalable data pipelines using Databricks, Delta Lake, PySpark, Python, and SQL
• Develop and maintain high-performance data solutions supporting AI/ML workloads, advanced analytics, and real-time data processing
• Optimize data processing efficiency and storage through Lakehouse architecture best practices
• Collaborate with data scientists, software engineers, product owners, and business stakeholders to translate business requirements into robust technical solutions
• Implement and strengthen CI/CD pipelines, automation, and governance standards for enterprise data platforms
• Ensure secure, reliable, and production-grade data workflows aligned with enterprise standards

Top 3 Skills
• Azure
• Databricks, Delta Lake & Snowflake
• CI/CD Automation
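For candidates less familiar with the Bronze/Silver/Gold (medallion) layering and data quality enforcement the role calls for, the pattern can be sketched in plain Python. This is a minimal illustration only: the record shape, field names (`well_id`, `output_bbl`), and in-memory lists are hypothetical, and in a real Lakehouse each layer would be a Delta Lake table maintained with PySpark.

```python
# Minimal medallion-architecture sketch: raw records flow Bronze -> Silver -> Gold.
# Field names and data are hypothetical; in practice each layer is a Delta table.
from datetime import datetime, timezone

def to_bronze(raw_records):
    """Bronze: land raw data as-is, tagged with ingestion metadata."""
    ingested_at = datetime.now(timezone.utc).isoformat()
    return [{**r, "_ingested_at": ingested_at} for r in raw_records]

def to_silver(bronze_records):
    """Silver: enforce data quality -- drop incomplete rows, normalize types."""
    silver = []
    for r in bronze_records:
        if r.get("well_id") is None or r.get("output_bbl") is None:
            continue  # quality enforcement: reject rows missing required fields
        silver.append({
            "well_id": str(r["well_id"]).strip().upper(),
            "output_bbl": float(r["output_bbl"]),
        })
    return silver

def to_gold(silver_records):
    """Gold: business-level aggregate -- total output per well."""
    totals = {}
    for r in silver_records:
        totals[r["well_id"]] = totals.get(r["well_id"], 0.0) + r["output_bbl"]
    return totals

raw = [
    {"well_id": " w-101 ", "output_bbl": "120.5"},
    {"well_id": "W-101", "output_bbl": 80},
    {"well_id": None, "output_bbl": 50},  # rejected in the Silver layer
]
gold = to_gold(to_silver(to_bronze(raw)))
print(gold)  # {'W-101': 200.5}
```

The same shape carries over to PySpark: Bronze appends raw files with ingestion metadata, Silver applies schema and quality rules, and Gold holds aggregates for analytics and AI/ML consumers.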