

CBase Inc
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Warren, MI, with a contract length of "unknown" and a pay rate of "unknown." Key skills include SQL, SSIS, Azure Data Factory, and Databricks. Requires 4-8+ years of data engineering experience, preferably with ERP systems.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
May 12, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Warren, MI
-
🧠 - Skills detailed
#Data Extraction #Migration #Storage #Monitoring #Data Lake #Strategy #Data Modeling #Datasets #Documentation #ADF (Azure Data Factory) #Data Governance #GIT #Data Transformations #Data Pipeline #Spark SQL #ADLS (Azure Data Lake Storage) #Version Control #SSIS (SQL Server Integration Services) #Cloud #Data Lineage #Spark (Apache Spark) #Data Accuracy #Data Engineering #Azure #Databricks #Azure Data Factory #Delta Lake #BI (Business Intelligence) #Indexing #Forecasting #SQL Queries #Data Quality #ETL (Extract, Transform, Load) #Scala #dbt (data build tool) #Microsoft Power BI #SQL Server #Semantic Models #PySpark #Data Warehouse #SQL (Structured Query Language) #Azure ADLS (Azure Data Lake Storage) #Python
Role description
Visa-independent consultants only - No C2C - W2 only
Warren, MI - Onsite role
Data Engineer – Role Summary & Job Description
(Legacy Support & Cloud Modernization – Azure + Databricks)
Role Summary
We are seeking a Data Engineer to support and evolve our enterprise data platform, which integrates data from multiple ERP systems into a centralized analytics environment. This role is responsible for maintaining our existing SQL Server and SSIS-based data warehouse while driving the transition to a modern Azure-based architecture leveraging Azure Data Factory, Databricks (Lakehouse), and Power BI.
The position requires a balance of strong technical expertise and business acumen. The ideal candidate will not only build and maintain data pipelines, but also partner with business stakeholders to deliver high-quality, trusted data that drives decision-making, operational efficiency, and measurable business value.
________________________________________
Key Responsibilities
Legacy Data Platform Support
• Maintain and enhance SSIS packages for data extraction, transformation, and loading
• Support SQL Server data warehouse (staging, ODS, reporting layers)
• Troubleshoot data issues, job failures, and performance bottlenecks
• Optimize SQL queries, stored procedures, and indexing strategies
• Ensure reliability of scheduled jobs via SQL Server Agent
________________________________________
Cloud Data Engineering (Azure + Databricks)
• Design and develop data pipelines using Azure Data Factory (ADF)
• Ingest and organize data into Azure Data Lake (Bronze/Silver/Gold layers)
• Build scalable data transformations using Databricks (Spark SQL, PySpark)
• Create curated, analytics-ready datasets for Power BI
• Implement Delta Lake and support data governance (e.g., Unity Catalog)
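For illustration, the Bronze/Silver/Gold layering named above can be sketched as follows. This is a minimal stand-in using plain Python lists of dicts; in Databricks the same logic would run as PySpark against Delta tables, and the field names (`order_id`, `erp_source`, `amount`) are hypothetical.

```python
# Bronze -> Silver -> Gold sketch with plain Python collections as
# stand-ins for Spark DataFrames (illustrative only).

def to_silver(bronze_rows):
    """Clean raw (Bronze) records: drop incomplete rows, normalize types."""
    silver = []
    for row in bronze_rows:
        if row.get("order_id") is None or row.get("amount") is None:
            continue  # a real pipeline would quarantine these for review
        silver.append({
            "order_id": int(row["order_id"]),
            "erp_source": row.get("erp_source", "unknown").lower(),
            "amount": float(row["amount"]),
        })
    return silver

def to_gold(silver_rows):
    """Aggregate cleaned data into an analytics-ready (Gold) summary."""
    totals = {}
    for row in silver_rows:
        totals[row["erp_source"]] = totals.get(row["erp_source"], 0.0) + row["amount"]
    return totals

bronze = [
    {"order_id": "1", "erp_source": "SAP", "amount": "100.50"},
    {"order_id": "2", "erp_source": "Oracle", "amount": "75.00"},
    {"order_id": None, "erp_source": "SAP", "amount": "10.00"},  # dropped in Silver
]
gold = to_gold(to_silver(bronze))
print(gold)  # {'sap': 100.5, 'oracle': 75.0}
```

The point of the layering is that each table has one job: Bronze preserves raw source data, Silver applies cleansing and typing, Gold serves curated aggregates to Power BI.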
________________________________________
Migration & Modernization
• Analyze and document existing SSIS/SQL pipelines
• Translate legacy ETL processes into modern ELT patterns
• Support phased migration strategy (coexistence of legacy and modern platforms)
• Reduce technical debt and improve pipeline maintainability
• Establish standards for data modeling, naming, and architecture
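The ETL-to-ELT translation above can be sketched with an in-memory database: instead of transforming each row in an SSIS data flow before loading, raw data lands as-is and a single set-based SQL statement does the transformation inside the target. Table and column names are hypothetical, and `sqlite3` stands in here for SQL Server or Databricks SQL.

```python
# ELT sketch: load raw, then transform in the database with set-based SQL.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE stg_orders (order_id TEXT, amount TEXT)")  # raw landing zone
conn.executemany("INSERT INTO stg_orders VALUES (?, ?)",
                 [("1", "100.50"), ("2", "bad"), ("3", "75.00")])

# One set-based statement cleans and casts in place of a row-by-row
# SSIS transform: easier to test, and the engine can optimize it.
conn.execute("""
    CREATE TABLE orders AS
    SELECT CAST(order_id AS INTEGER) AS order_id,
           CAST(amount AS REAL)      AS amount
    FROM stg_orders
    WHERE amount GLOB '[0-9]*'       -- filter out unparseable amounts
""")
rows = conn.execute("SELECT order_id, amount FROM orders ORDER BY order_id").fetchall()
print(rows)  # [(1, 100.5), (3, 75.0)]
```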
________________________________________
Data Modeling & Business Value Creation
• Design dimensional models (fact and dimension tables) aligned to business processes
• Integrate and standardize data across multiple ERP systems
• Translate business requirements into scalable data solutions
• Partner with stakeholders to identify high-impact use cases for data and analytics
• Deliver datasets that enable reporting, forecasting, and operational insights
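A minimal sketch of the fact/dimension split described above, using hypothetical ERP order records. A real model would live in the warehouse; this only shows the Kimball-style shape: surrogate keys, a conformed customer dimension shared across ERP sources, and a fact table of measures plus foreign keys.

```python
# Star-schema sketch: conformed dimension + fact table (illustrative data).
raw_orders = [
    {"erp": "SAP",    "customer": "Acme",   "qty": 5, "unit_price": 10.0},
    {"erp": "Oracle", "customer": "Acme",   "qty": 2, "unit_price": 10.0},
    {"erp": "SAP",    "customer": "Globex", "qty": 1, "unit_price": 99.0},
]

# Conformed customer dimension: one row per customer across all ERP sources,
# keyed by a surrogate integer rather than any source system's natural key.
dim_customer = {}
for order in raw_orders:
    if order["customer"] not in dim_customer:
        dim_customer[order["customer"]] = len(dim_customer) + 1  # surrogate key

# Fact table: additive measures plus foreign keys into the dimension.
fact_sales = [
    {"customer_key": dim_customer[o["customer"]],
     "source_erp": o["erp"],
     "revenue": o["qty"] * o["unit_price"]}
    for o in raw_orders
]
print(fact_sales[0])  # {'customer_key': 1, 'source_erp': 'SAP', 'revenue': 50.0}
```

Conforming the dimension is what lets reporting roll up "Acme" revenue across both ERP systems in one query.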
________________________________________
Data Quality & Governance
• Implement data validation, reconciliation, and monitoring processes
• Ensure data accuracy and consistency across systems during migration
• Define and enforce data quality standards and controls
• Support data lineage, documentation, and transparency initiatives
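The reconciliation checks mentioned above often reduce to comparing row counts and control totals between a legacy extract and its migrated copy. A minimal sketch, with illustrative field names and tolerance:

```python
# Reconciliation sketch: row-count and control-total checks between a
# legacy dataset and its migrated counterpart (assumed field: "amount").

def reconcile(legacy_rows, migrated_rows, amount_field="amount", tolerance=0.01):
    """Return a list of discrepancy messages; an empty list means reconciled."""
    issues = []
    if len(legacy_rows) != len(migrated_rows):
        issues.append(f"row count mismatch: {len(legacy_rows)} vs {len(migrated_rows)}")
    legacy_total = sum(r[amount_field] for r in legacy_rows)
    migrated_total = sum(r[amount_field] for r in migrated_rows)
    if abs(legacy_total - migrated_total) > tolerance:
        issues.append(f"control total mismatch: {legacy_total} vs {migrated_total}")
    return issues

legacy = [{"amount": 100.0}, {"amount": 50.0}]
migrated = [{"amount": 100.0}, {"amount": 49.0}]
print(reconcile(legacy, migrated))  # ['control total mismatch: 150.0 vs 149.0']
```

During a phased migration these checks typically run on every load, so drift between the legacy and modern platforms is caught the day it appears rather than at cutover.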
________________________________________
Collaboration & Stakeholder Engagement
• Work closely with business stakeholders, analysts, and BI developers
• Support Power BI semantic models and reporting solutions
• Communicate technical solutions in business terms
• Act as a bridge between IT/data teams and business functions
________________________________________
Required Qualifications
• 4–8+ years of experience in data engineering or data warehousing
• Strong SQL skills (T-SQL and/or Spark SQL)
• Hands-on experience with SSIS and SQL Server
• Experience with Azure Data Factory (ADF) or similar tools
• Experience with Databricks (Spark, Delta Lake, or similar platforms)
• Solid understanding of data warehousing concepts (star schema, fact/dimension modeling)
• Experience integrating data from multiple source systems (ERP experience preferred)
• Proven ability to translate business requirements into technical solutions
________________________________________
Preferred Qualifications
• Experience migrating legacy ETL systems (SSIS) to cloud-based architectures
• Proficiency in Python or PySpark
• Familiarity with Medallion architecture (Bronze/Silver/Gold)
• Experience with Power BI data modeling and performance optimization
• Knowledge of data governance tools (e.g., Unity Catalog)
• Experience with Git and CI/CD pipelines
• Exposure to dbt or similar frameworks
________________________________________
Technical Skills
• SQL Server (T-SQL), SSIS
• Azure Data Factory (ADF)
• Azure Data Lake Storage (ADLS)
• Databricks (Spark SQL, PySpark, Delta Lake)
• Data modeling (Kimball methodology preferred)
• Performance tuning and query optimization
• Version control (Git)