Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with 7-12 years of experience in Databricks and cloud technologies, offering a competitive pay rate for a contract length of "X months". Key skills include proficiency in PySpark, Python, SQL, and Azure components.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
September 20, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Cloud #"ETL (Extract #Transform #Load)" #Automation #Data Lake #SQL (Structured Query Language) #Databricks #Spark (Apache Spark) #Azure Databricks #Data Modeling #Azure cloud #Python #PySpark #Azure Data Factory #Data Engineering #Delta Lake #Computer Science #ADF (Azure Data Factory) #Azure #Data Pipeline
Role description
β€’ 7-12 years experience on Data Engineering role working with Databricks & Cloud technologies. β€’ Bachelor’s degree in computer science, Information Technology, or related field. β€’ Strong proficiency in PySpark, Python, SQL. β€’ Strong experience in data modeling, ETL/ELT pipeline development, and automation β€’ Hands-on experience with performance tuning of data pipelines and workflows β€’ Proficient in working on Azure cloud components Azure Data Factory, Azure DataBricks, Azure Data Lake etc. β€’ Experience with data modeling, ETL processes, Delta Lake and data warehousing. β€’ Experience on Delta Live Tables, Autoloader & Unity Catalog. β€’ Preferred - Knowledge of the insurance industry and its data requirements. β€’ Strong analytical skills with the ability to collect, organize, analyze, and disseminate significant amounts of information with attention to detail and accuracy. β€’ Excellent communication and problem-solving skills to work effectively with diverse teams β€’ Excellent problem-solving skills and ability to work under tight deadlines.