

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 7-12 years of experience in Databricks and cloud technologies, offering a competitive pay rate for a contract length of "X months". Key skills include proficiency in PySpark, Python, SQL, and Azure components.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: September 20, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed: #Cloud #ETL (Extract, Transform, Load) #Automation #Data Lake #SQL (Structured Query Language) #Databricks #Spark (Apache Spark) #Azure Databricks #Data Modeling #Azure cloud #Python #PySpark #Azure Data Factory #Data Engineering #Delta Lake #Computer Science #ADF (Azure Data Factory) #Azure #Data Pipeline
Role description
• 7-12 years of experience in a Data Engineering role working with Databricks and cloud technologies.
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• Strong proficiency in PySpark, Python, and SQL.
• Strong experience in data modeling, ETL/ELT pipeline development, and automation.
• Hands-on experience with performance tuning of data pipelines and workflows.
• Proficient with Azure cloud components such as Azure Data Factory, Azure Databricks, and Azure Data Lake.
• Experience with data modeling, ETL processes, Delta Lake, and data warehousing.
• Experience with Delta Live Tables, Auto Loader, and Unity Catalog (a brief illustrative sketch follows this list).
• Preferred: knowledge of the insurance industry and its data requirements.
• Strong analytical skills, with the ability to collect, organize, analyze, and disseminate large amounts of information with attention to detail and accuracy.
• Excellent communication and problem-solving skills, with the ability to work effectively with diverse teams and under tight deadlines.
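For context on the Databricks-specific items above, here is a minimal PySpark sketch of the kind of pipeline this role describes: Auto Loader (cloudFiles) incrementally ingesting raw files into a Delta table registered in Unity Catalog. All paths and the table name are hypothetical placeholders, and the sketch assumes a Databricks runtime (or a Spark 3.3+ environment where Auto Loader is available).

```python
from pyspark.sql import SparkSession

# On Databricks a SparkSession named `spark` already exists; this is a fallback.
spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS Gen2 locations and Unity Catalog table name.
source_path = "abfss://raw@examplestorage.dfs.core.windows.net/events/"
checkpoint_path = "abfss://meta@examplestorage.dfs.core.windows.net/checkpoints/events/"
target_table = "main.bronze.events"  # catalog.schema.table

# Auto Loader discovers new files incrementally and tracks the inferred schema
# at schemaLocation, so the stream handles schema evolution across runs.
stream = (
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", checkpoint_path)
    .load(source_path)
)

# Write into a Delta table governed by Unity Catalog. The checkpoint makes the
# stream restartable with exactly-once file processing.
(
    stream.writeStream
    .option("checkpointLocation", checkpoint_path)
    .trigger(availableNow=True)  # drain all pending files, then stop
    .toTable(target_table)
)
```

The availableNow trigger processes everything that has accumulated and then stops, so the same job can serve scheduled batch runs and ad-hoc backfills; in a Delta Live Tables pipeline, the equivalent read would typically live inside a @dlt.table-decorated function instead.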