Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of over 6 months, offering a pay rate of $90,000 - $130,000 per year. Key skills include Databricks, SQL, Python, and data pipeline development. A bachelor's degree and 5+ years of relevant experience are required.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
May 16, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
On-site
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Chicago, IL 60654
🧠 - Skills detailed
#AWS (Amazon Web Services) #Data Analysis #ETL (Extract, Transform, Load) #BitBucket #Data Architecture #Delta Lake #MS SQL (Microsoft SQL Server) #Python #Microsoft SQL Server #Data Lifecycle #SSIS (SQL Server Integration Services) #Azure DevOps #ML (Machine Learning) #Data Integrity #GIT #SQL (Structured Query Language) #Lambda (AWS Lambda) #Airflow #Data Modeling #Data Lake #Cloud #Data Processing #Data Pipeline #Scala #Monitoring #Computer Science #Microsoft SQL #S3 (Amazon Simple Storage Service) #Redshift #Azure #SQL Server #Code Reviews #Data Warehouse #Data Lineage #Data Engineering #Databricks #DevOps
Role description
Senior Databricks Data Engineer

We are seeking a hands-on Senior Data Engineer to join our team at the end client's site. This role focuses heavily on data transformation and pipeline development using Databricks, SQL, and Python, within a modern cloud-based data warehouse environment.

Your Role
As a Senior Data Engineer, you will:
• Design, develop, and support data transformation pipelines, primarily focused on the Silver and Gold layers of a data lake architecture (e.g., Databricks Delta Lake).
• Collaborate with data analysts, scientists, and engineers to deliver clean, reliable, and well-modeled data for analytics, reporting, and machine learning workloads.
• Ensure pipelines are well-structured, efficient, and maintainable using DLT pipelines, job orchestration, and workflow management tools.
• Assist in improving and maintaining a scalable data and analytics platform, ensuring data integrity and performance.

Responsibilities
• Build and maintain robust, reusable data transformation pipelines in Databricks, primarily using SQL and Python.
• Collaborate with data architects and business stakeholders to understand data requirements and translate them into reliable pipeline solutions.
• Optimize data processing workflows from the Bronze to Silver to Gold layers.
• Leverage Delta Live Tables (DLT) and Databricks Jobs to build production-grade workflows.
• Participate in peer code reviews, implement CI/CD practices, and document design and operational procedures.
• Mentor junior team members on best practices in data pipeline design and Databricks usage.

Required Qualifications
• Bachelor's degree in Computer Science, Engineering, or a related field.
• 5+ years of experience building data pipelines, with a strong focus on transformation and data modeling in data lake or warehouse environments.
• Hands-on experience with Databricks (minimum 1 year), including building and maintaining DLT pipelines and Delta tables, and managing workflows using Databricks Jobs.
• Proficiency in SQL and Python for data transformation and manipulation.
• Strong understanding of data warehousing principles, including dimensional modeling, data dependencies, and ETL best practices.
• Familiarity with data lineage, job orchestration, and monitoring practices within Databricks.
• Comfortable communicating in both English and Vietnamese, with the ability to clearly explain data processes and architectures.
• Excellent problem-solving skills and methodical attention to detail.

Preferred Qualifications
• Experience with Microsoft SQL Server and SSIS.
• Familiarity with Airflow/Astronomer for orchestration (nice to have).
• Exposure to cloud environments such as AWS, including S3, Lambda, Glue, or Redshift.
• Experience optimizing Delta Lake performance, partitioning strategies, and data lifecycle management.
• Familiarity with source control tools such as Git, Bitbucket, or Azure DevOps.

Job Types: Full-time, Contract
Pay: $90,000.00 - $130,000.00 per year
Schedule: Monday to Friday
Experience:
• Building data pipelines: 6 years (Required)
• Data transformation in Databricks: 5 years (Required)
• Python: 5 years (Required)
• SQL: 5 years (Preferred)
Work Location: In person
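For candidates less familiar with the Bronze-to-Silver work described in the responsibilities above, here is a minimal sketch of a Delta Live Tables transformation in Python. The table names (orders_bronze, orders_silver), columns, and the quality expectation are hypothetical and purely illustrative; the actual pipelines, schemas, and conventions at the client site will differ.

```python
# Minimal Delta Live Tables (DLT) sketch: promoting raw Bronze data to a cleaned Silver table.
# All table and column names below are hypothetical examples, not the client's schema.
import dlt
from pyspark.sql import functions as F


@dlt.table(
    name="orders_silver",
    comment="Cleaned, deduplicated orders promoted from the Bronze layer.",
)
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # drop rows failing the rule
def orders_silver():
    bronze = dlt.read("orders_bronze")  # raw ingested data in the Bronze layer
    return (
        bronze
        .dropDuplicates(["order_id"])
        .withColumn("order_date", F.to_date("order_ts"))
        .withColumn("amount_usd", F.col("amount").cast("decimal(18,2)"))
        .select("order_id", "customer_id", "order_date", "amount_usd")
    )
```

In practice, a pipeline like this would be attached to a DLT pipeline definition and scheduled or triggered via Databricks Jobs (or an external orchestrator such as Airflow/Astronomer), in line with the orchestration and monitoring duties listed in the role.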