KPG99 INC

Databricks - W2 Only

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is a Data Engineer position based in Birmingham, AL, on a 6-month contract-to-hire basis. Key skills include Databricks, Azure, SQL, Python, and Power BI. Requires 5–10 years of data engineering experience and strong ETL capabilities.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
April 9, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid (4 days on-site / 1 day remote)
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Birmingham, AL
-
🧠 - Skills detailed
#Data Quality #Databricks #Big Data #Oracle GoldenGate #Batch #Data Modeling #Data Governance #Azure Data Factory #Agile #R #Docker #ML (Machine Learning) #ADF (Azure Data Factory) #SQL (Structured Query Language) #SSAS (SQL Server Analysis Services) #Data Processing #SSIS (SQL Server Integration Services) #Datasets #AI (Artificial Intelligence) #API (Application Programming Interface) #Web Services #Scala #ETL (Extract, Transform, Load) #Python #Hadoop #Vault #Spark (Apache Spark) #Synapse #Azure #Data Engineering #Data Lake #DevOps #Microsoft Power BI #Oracle #Data Analysis #Azure SQL #BI (Business Intelligence) #Informatica #Data Pipeline
Role description
Title: Databricks - W2 Only
Location: Birmingham, AL - 4 days on-site / 1 day remote (relocation from Georgia is fine)
Duration: 6-month contract-to-hire
Interview Mode: Phone + Video

Top Skills - Must Haves
• Development
• SQL
• Power BI
• Dashboarding
• Business intelligence
• Analysis
• Data
• Data model
• Dashboard
• Data analysis
• Reporting
• Azure
• Python
• Databricks
• Medallion architecture

Top Skills' Details
1. Experience creating a pipeline
2. Strong ETL skills
3. Helping curate data
4. Creating data models and pipelines
5. Strong Azure, SQL, Python, and Databricks experience
6. Ability to analyze data, create dashboards and charts, and make data more usable

Job Description
We are looking for a skilled Data Engineer to support the development of a modern data platform. This role involves building scalable data pipelines, transforming raw data into structured formats, and enabling data-driven insights through dashboards and reporting tools. The ideal candidate will have strong experience with Databricks, Azure, SQL, and Python, along with a solid understanding of data modeling and ETL processes.
Key Responsibilities
• Design and develop data pipelines for multiple data sources
• Perform ETL processes to clean, transform, and curate raw data
• Build and maintain data models and datasets
• Develop dashboards and reports using BI tools (Power BI preferred)
• Analyze data and convert it into meaningful insights
• Work with cross-functional teams to gather and understand data requirements
• Implement Medallion Architecture (Bronze, Silver, Gold layers)
• Improve data quality, accessibility, and usability

Required Skills
• Strong experience with Azure (Data Factory, Data Lake, Databricks, Synapse, Key Vault)
• Proficiency in SQL and Python
• Hands-on experience with Databricks pipelines
• Strong background in ETL and data modeling
• Experience with Power BI or similar BI tools
• Knowledge of big data technologies (Spark, Hive, Hadoop)
• Experience with batch and real-time data processing
• Familiarity with data governance and data quality tools

Preferred Qualifications
• Experience with MSBI tools (SSIS/SSAS) or similar
• Exposure to Informatica, Oracle GoldenGate, or similar tools
• Experience with API integrations and web services
• Familiarity with containerization tools (Docker, OpenShift)
• Understanding of Agile, DevOps, and CI/CD practices
• Exposure to AI/ML models using Python or R

Experience
• 5–10 years of experience in data engineering or related field
• Strong experience working with large-scale data systems and data lakes
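For candidates weighing the Medallion Architecture requirement above, the layering can be sketched in plain Python. This is only an illustrative sketch: in the actual role these layers would be Delta tables built with Databricks/PySpark, and the record fields (`order_id`, `region`, `amount`) are hypothetical stand-ins, not part of the job description.

```python
# Illustrative sketch of the Medallion Architecture (Bronze -> Silver -> Gold).
# Plain Python dicts stand in for Delta tables; field names are hypothetical.

def bronze_ingest(raw_rows):
    """Bronze: land raw source data as-is, adding only ingestion metadata."""
    return [{**row, "_ingested": True} for row in raw_rows]

def silver_clean(bronze_rows):
    """Silver: drop invalid records, deduplicate, and conform types."""
    seen, cleaned = set(), []
    for row in bronze_rows:
        if row.get("order_id") is None:   # drop records missing a key
            continue
        if row["order_id"] in seen:       # deduplicate on the key
            continue
        seen.add(row["order_id"])
        cleaned.append({
            "order_id": row["order_id"],
            "region": str(row.get("region", "unknown")).lower(),
            "amount": float(row.get("amount", 0)),
        })
    return cleaned

def gold_aggregate(silver_rows):
    """Gold: business-level aggregate, the shape a Power BI dashboard reads."""
    totals = {}
    for row in silver_rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

raw = [
    {"order_id": 1, "region": "East", "amount": "100.0"},
    {"order_id": 1, "region": "East", "amount": "100.0"},  # duplicate
    {"order_id": 2, "region": "West", "amount": "50.5"},
    {"order_id": None, "region": "East", "amount": "10"},  # invalid: no key
]
gold = gold_aggregate(silver_clean(bronze_ingest(raw)))
print(gold)  # {'east': 100.0, 'west': 50.5}
```

The point of the pattern is that each layer has one job: Bronze preserves the raw feed for replay, Silver enforces quality, and Gold serves reporting.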