LHH

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer III, onsite in Louisville, Kentucky, with a contract length of unspecified duration. The pay rate is $60-$65/hr. Requires 10+ years of experience in Data Engineering, Azure, SQL, Data Warehousing, and Power BI.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
520
-
🗓️ - Date
April 28, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Yes
-
📍 - Location detailed
Louisville, KY
-
🧠 - Skills detailed
#Scripting #Azure ADLS (Azure Data Lake Storage) #Data Lakehouse #Azure #ADF (Azure Data Factory) #Azure cloud #Data Lake #Apache Spark #Logic Apps #Data Engineering #Data Pipeline #Microsoft Power BI #Cloud #Spark SQL #Data Quality #Databricks #Statistics #Documentation #ADLS (Azure Data Lake Storage) #Agile #Metadata #Data Lineage #Data Ingestion #Delta Lake #Snowflake #Python #Data Science #Computer Science #Azure Data Factory #BI (Business Intelligence) #Azure SQL #PySpark #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Security #Spark (Apache Spark) #SSIS (SQL Server Integration Services) #Compliance #Storage #Scala #Data Architecture
Role description
Data Engineer III - Onsite

We're looking for a Data Engineer III to join our client's dynamic finance team. If you have 10+ years of experience in Data Engineering, Azure, SQL, and Data Warehousing with Power BI, this is a great opportunity to grow your career with a company known for excellence.

What You'll Do

The Senior Data Engineer will support Data Warehousing in a Databricks and Azure SQL environment by designing, building, and maintaining scalable, reliable data pipelines and data models that enable analytics and reporting across the organization. This role is responsible for ingesting data from diverse source systems, applying data quality and transformation logic, and modeling data to support BI reporting and downstream data needs. Working closely with BI developers, product owners, and business stakeholders, the Senior Data Engineer ensures data is reliable, well governed, and aligned to enterprise standards.

Essential Job Responsibilities
• Create and maintain optimal data pipeline patterns and architecture.
• Assemble large, complex data sets that meet functional and non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, and re-designing infrastructure for greater scalability.
• Build data pipelines that extract, transform, and load data from a wide variety of data sources using Databricks and Azure technologies.
• Design and implement data models.
• Create automated tests to continuously monitor the quality of the data models.
• Work with stakeholders to assist with data-related technical issues and support their data infrastructure needs.
• Keep data separated and secure across Azure regions, ensuring compliance with HIPAA, HITECH, and other applicable regulations.

Education & Experience
• Bachelor's degree in computer science, statistics, informatics, information systems, or another quantitative field.
• 10+ years of experience in a Data Engineer/BI Developer role.
• Extensive SQL experience, including complex query development, SSIS, performance tuning, and optimization in Azure SQL and distributed query environments.
• Experience with data lake and data lakehouse architecture is preferred.
• Experience designing and engineering Databricks Delta tables, Spark Declarative Pipelines, and Jobs/Workflows is preferred.
• Experience with Azure cloud services: Databricks, Azure Data Factory, Azure SQL DB, Azure Data Lake Storage Gen2, PySpark, Logic Apps.
• Experience with object-oriented/object-function scripting languages: Python, Scala.
• Experience with structured, semi-structured, and unstructured data.
• Experience with data pipeline architecture is preferred.
• Experience with development and maintenance of APIs and web services is preferred.

Knowledge, Skills & Abilities
• Advanced proficiency in SQL, including complex query development, performance tuning, and optimization in Azure SQL and distributed query environments.
• Strong hands-on experience designing and building ETL/ELT pipelines using Databricks, leveraging Apache Spark (PySpark/Spark SQL) for large-scale data ingestion, transformation, and processing.
• Deep understanding of data lake and lakehouse architectures, including structured, semi-structured, and incremental data ingestion patterns using Delta Lake, partitioning, and schema enforcement.
• Proven ability to design and implement analytics-ready data models (star, snowflake, and dimensional models) to support Power BI and other BI/analytics consumption patterns.
• Experience managing metadata, data lineage, dependencies, and workload orchestration, ensuring reliable and repeatable data pipelines across development and production environments.
• Strong analytical and troubleshooting skills, with the ability to perform root cause analysis across source systems, data pipelines, and downstream reporting to resolve data quality and performance issues.
• Ability to collaborate closely with data architects, DBAs, BI developers, and data scientists to align engineering solutions with enterprise architecture and analytics standards.
• Proactive mindset with the ability to identify data quality risks, pipeline failures, scalability constraints, and performance bottlenecks, and to escalate or remediate them early.
• Detail-oriented with strong documentation, versioning, and governance discipline, supporting maintainable, auditable, and compliant data solutions.
• Comfortable working in an Agile, cross-functional environment, managing multiple pipelines and priorities while maintaining production stability.
• Customer-focused approach to delivering reliable, scalable, and business-ready data assets that support operational and strategic decision making.
• Strategic thinker who understands how data engineering choices impact cost, performance, scalability, and business outcomes.
• Ability and willingness to mentor junior data engineers, promote Spark and Databricks best practices, and contribute to team technical standards.

📍 Location: Louisville, Kentucky (Onsite)
💵 Pay: $60–$65/hr.
📩 Apply now to take the next step in your data engineering career!

Benefit offerings include medical, dental, vision, life insurance, short-term disability, additional voluntary benefits, an EAP program, commuter benefits, and a 401(k) plan. Our program provides employees the flexibility to choose the type of coverage that meets their individual needs. Available paid leave may include Paid Sick Leave, where required by law; any other paid leave required by federal, state, or local law; and holiday pay upon meeting eligibility criteria.
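One of the responsibilities above is creating automated tests that continuously monitor data-model quality. As a minimal illustrative sketch only (plain Python standing in for the PySpark/Databricks checks the role would actually involve; the `fact_sales` table, field names, and rules are invented for this example), such a check might look like:

```python
# Minimal data-quality check sketch: validates rows against simple rules
# before they are loaded downstream. In the actual role this logic would
# run in Databricks over Delta tables; plain Python dicts stand in for
# rows here, and all names are hypothetical.

def check_data_quality(rows, required_fields, non_negative_fields):
    """Return a list of (row_index, issue) tuples for rows that fail checks."""
    issues = []
    for i, row in enumerate(rows):
        # Rule 1: required fields must be present and non-null.
        for field in required_fields:
            if row.get(field) is None:
                issues.append((i, f"missing required field: {field}"))
        # Rule 2: measures such as amounts must not be negative.
        for field in non_negative_fields:
            value = row.get(field)
            if value is not None and value < 0:
                issues.append((i, f"negative value in {field}: {value}"))
    return issues

# Example rows from an invented 'fact_sales' table.
rows = [
    {"order_id": 1, "customer_id": 10, "amount": 250.0},
    {"order_id": 2, "customer_id": None, "amount": 99.0},  # missing key
    {"order_id": 3, "customer_id": 12, "amount": -5.0},    # bad measure
]

issues = check_data_quality(
    rows,
    required_fields=["order_id", "customer_id"],
    non_negative_fields=["amount"],
)
for idx, issue in issues:
    print(f"row {idx}: {issue}")
```

In a production pipeline a check like this would typically run as a scheduled job after each load, with failures surfaced to monitoring rather than printed.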
Equal Opportunity Employer/Veterans/Disabled

To read our Candidate Privacy Information Statement, which explains how we will use your information, please navigate to https://www.lhh.com/us/en/candidate-privacy

The Company will consider qualified applicants with arrest and conviction records in accordance with federal, state, and local laws and/or security clearance requirements, including, as applicable:
• The California Fair Chance Act
• Los Angeles City Fair Chance Ordinance
• Los Angeles County Fair Chance Ordinance for Employers
• San Francisco Fair Chance Ordinance