

Compunnel Inc.
Data Engineer - III -- PRADC5697387
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer - III, with a contract length of "unknown," offering a pay rate of "unknown," and is located in "unknown." Candidates must have 12+ years of Data Engineering experience, proficiency in Databricks, Python, SQL, and AWS, and a Bachelor's degree in a related field.
Country
United States
Currency
$ USD
Day rate
Unknown
Date
December 23, 2025
Duration
Unknown
Location
Unknown
Contract
Unknown
Security
Unknown
Location detailed
Chicago, IL
Skills detailed
#ETL (Extract, Transform, Load) #Strategy #Cloud #Scripting #Data Architecture #Power Automate #Python #SQL (Structured Query Language) #Big Data #Tableau #BI (Business Intelligence) #SQL Server #Batch #Agile #Logical Data Model #Scala #Databricks #Visualization #Data Security #Data Mart #Data Modeling #Data Engineering #Data Pipeline #Data Strategy #Data Management #Microsoft Power BI #Data Quality #Security #Data Warehouse #Computer Science #AWS (Amazon Web Services)
Role description
Job Summary
We are seeking an intermediate-level Data Engineer to design and develop scalable, repeatable, and secure data pipelines for ingestion and ETL processes.
The ideal candidate will support business intelligence efforts, optimize data architecture, and ensure data quality and reliability across the organization.
MUST HAVE SKILLS:
• 12+ years of hands-on Data Engineering experience
• Databricks
• Python
• SQL
• Power BI
• Tableau
• Power Automate
• AWS
Key Responsibilities
• Design and implement data management strategies to support business intelligence initiatives.
• Build and maintain data architecture aligned with organizational data strategy.
• Develop logical data models and implement physical database structures.
• Ensure quality, security, and reliability of data systems; troubleshoot issues as needed.
• Construct operational data stores and data marts.
• Support deployed data applications and analytical models; act as a trusted advisor to data consumers.
• Create real-time and batch ETL processes aligned with business needs.
• Optimize data pipelines for performance and readability.
• Document data flow diagrams, security access, and data quality standards.
• Maintain a query library for recurring data requests.
Required Qualifications
• Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent experience).
• Minimum 1 year of relevant work experience.
• Intermediate understanding of data warehouse architectures, ETL/ELT processes, and data modeling.
• Knowledge of scripting languages and big data tools.
• Familiarity with CI/CD pipelines and cloud-based data platforms.
• Understanding of data security, privacy regulations, and Agile methodologies.
• Strong critical thinking, decision-making, and communication skills.
Preferred Qualifications
• Experience with SQL, SQL Server, Tableau, Power BI, or similar tools.
• Familiarity with predictive analytics and advanced data visualization techniques.






