

Siri InfoSolutions, Inc.
Sr Databricks Engineer/SME
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr Databricks Engineer/SME with a contract length of "unknown" and a pay rate of "unknown." Candidates should have 12+ years in data engineering, 3–5 years with Databricks and Spark, and strong SQL proficiency. Azure experience preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 12, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Monitoring #Data Modeling #Data Analysis #Data Pipeline #Apache Spark #Cloud #Spark (Apache Spark) #Data Engineering #Azure #Databricks #Delta Lake #Data Architecture #BI (Business Intelligence) #SQL Queries #Data Quality #Scala #ETL (Extract, Transform, Load) #Microsoft Power BI #PySpark #SQL (Structured Query Language) #Python
Role description
Sr Databricks SME
Descriptions:
Must Have Technical/Functional Skills
12+ years overall experience in data engineering or related fields.
3–5 years hands-on experience with Databricks and Spark.
Strong proficiency in SQL and data analysis techniques.
Experience with ETL processes, data modeling, and performance tuning.
Familiarity with Python or Scala for data engineering tasks.
Excellent problem-solving and communication skills.
Roles & Responsibilities
We are seeking a hands-on Sr. Databricks Data Engineer to design, develop, and optimize data pipelines and analytics solutions. The ideal candidate will have strong experience in data engineering, ETL development, and production support, ensuring reliable, scalable, and high-performing data operations within an Azure environment, and will be comfortable working in a fast-paced setting. Knowledge of the insurance domain and Power BI is a plus but not mandatory.
Development
Design, develop, and deploy scalable ETL/ELT data pipelines using Apache Spark, PySpark, and Databricks.
Develop and optimize SQL queries for data transformation and analysis.
Collaborate with product owners, data architects, and analysts to build data models, Delta Lake structures, and data workflows.
Collaborate with data analysts and business teams to deliver actionable insights.
Build job orchestration and monitoring solutions.
Ensure data quality, performance, and reliability across workflows.
Develop and maintain CI/CD pipelines for Databricks notebooks, jobs, and workflows.
Work with cloud-based data platforms (Azure preferred).
Required Skills & Experience:
10+ years overall experience in data engineering or related fields.
3–5 years hands-on experience with Databricks and Spark.
Strong proficiency in SQL and data analysis techniques.
Experience with ETL processes, data modeling, and performance tuning.
Familiarity with Python or Scala for data engineering tasks.
Excellent problem-solving and communication skills.
