HVR/Databricks Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an HVR/Databricks Data Engineer with a contract length of "unknown," offering a pay rate of "$XX/hour." Key skills include Python, SQL, ETL processes, and experience with Databricks and cloud services like AWS and Azure.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
September 30, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#dbt (data build tool) #Batch #Data Processing #SQL Server #AWS (Amazon Web Services) #Data Ingestion #Storage #Data Lake #BI (Business Intelligence) #Scala #Synapse #Databricks #Data Engineering #Azure SQL #Python #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Programming #Azure cloud #Big Data #Documentation #Data Integration #Azure #MySQL #Data Mart #Oracle #Databases #Cloud
Role description
The Opportunity: We are seeking a software engineer/developer or ETL/data integration/big data developer with experience in projects emphasizing data processing and storage. This person will support data ingestion, transformation, and distribution to end consumers. The candidate will perform requirements analysis, design and develop process flows, run unit and integration tests, and create and update process documentation.
· Work with the Business Intelligence team and operational stakeholders to design and implement both the data presentation layer available to the user community and the underlying technical architecture of the data warehousing environment.
· Develop scalable and reliable data solutions to move data across systems from multiple sources in both real-time and batch modes.
· Design and develop database objects: tables, stored procedures, views, etc.
· Independently analyze, solve, and correct issues in real time, providing end-to-end problem resolution.
· Design and develop ETL processes that transform a variety of raw data, flat files, and Excel spreadsheets into SQL databases.
· Understand the concepts of data marts and data lakes, with experience migrating legacy systems to data marts/lakes.
· Use additional cloud technologies (e.g., understand the concept of cloud services such as Azure SQL Server).
· Maintain comprehensive project documentation.
· Demonstrate an aptitude for learning new technologies and the ability to perform continuous research, analysis, and process improvement.
· Apply strong interpersonal and communication skills to work in a team environment that includes customer and contractor technical staff, end users, and management team members.
· Manage multiple projects, responsibilities, and competing priorities.
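Several of the responsibilities above center on ETL work that loads flat files into SQL databases. As a rough illustration of that kind of task (not part of the posting), here is a minimal Python sketch using only the standard library; the table name, columns, and sample data are hypothetical assumptions.

```python
# Illustrative ETL sketch: extract rows from a CSV flat file,
# transform header names into SQL-friendly column names, and
# load the rows into a SQL table (SQLite here for simplicity).
import csv
import io
import sqlite3

def load_csv_to_sql(csv_text: str, conn: sqlite3.Connection, table: str) -> int:
    """Load CSV text into `table`, returning the number of rows inserted.

    The table name is assumed to come from trusted code, not user input,
    since identifiers cannot be parameterized in SQL.
    """
    reader = csv.DictReader(io.StringIO(csv_text))
    # Transform: normalize headers like "Order ID" -> "order_id".
    cols = [c.strip().lower().replace(" ", "_") for c in reader.fieldnames]
    conn.execute(f"CREATE TABLE IF NOT EXISTS {table} ({', '.join(cols)})")
    placeholders = ", ".join("?" for _ in cols)
    rows = [tuple(row[f] for f in reader.fieldnames) for row in reader]
    conn.executemany(f"INSERT INTO {table} VALUES ({placeholders})", rows)
    conn.commit()
    return len(rows)

# Example usage with hypothetical data and an in-memory database.
raw = "Order ID,Amount\n1,10.50\n2,7.25\n"
conn = sqlite3.connect(":memory:")
n = load_csv_to_sql(raw, conn, "orders")
```

A production version of this pattern would typically target a managed warehouse (e.g., Databricks or Azure SQL, as named in the requirements) rather than SQLite, but the extract-transform-load shape is the same.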
Requirements
Experience Needed:
· Programming languages, frameworks, and file formats such as: Python, SQL, PL/SQL, dbt
· Database platforms such as: Oracle, SQL Server, MySQL
· Big data concepts and technologies such as Synapse and Databricks
· AWS and Azure cloud computing