

HVR/Databricks Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an HVR/Databricks Data Engineer, offering a contract of "length" at a pay rate of "$$$". Key skills include Python, SQL, Databricks, and cloud technologies (AWS/Azure). Experience with data integration and ETL processes is required.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 2, 2025
Project duration: Unknown
Location type: Unknown
Contract type: Unknown
Security clearance: Unknown
Location detailed: United States
Skills detailed
#Azure #AWS (Amazon Web Services) #Programming #Databricks #Azure cloud #Replication #BI (Business Intelligence) #Informatica #Big Data #Data Engineering #Documentation #Python #ETL (Extract, Transform, Load) #Data Governance #Data Integration #SQL Server #Batch #SQL (Structured Query Language) #Databases #Storage #Data Lake #Azure SQL #Data Processing #Cloud #Data Mart #Oracle #Synapse #Data Replication #Scala #Data Ingestion #MySQL
Role description
The Opportunity:
We are seeking a software engineer/developer or ETL/data integration/big data developer with experience in projects emphasizing data processing and storage. This person will be responsible for supporting data ingestion, transformation, and distribution to end consumers.
The candidate will perform requirements analysis, design and develop process flows, run unit and integration tests, and create and update process documentation.
• Work with the Business Intelligence team and operational stakeholders to design and implement both the data presentation layer available to the user community and the underlying technical architecture of the data warehousing environment.
• Develop scalable and reliable data solutions that move data across systems from multiple sources in both real-time and batch modes.
• Design and develop database objects: tables, stored procedures, views, etc.
• Independently analyze, solve, and correct issues in real time, providing end-to-end problem resolution.
• Design and develop ETL processes that transform a variety of raw data, flat files, and Excel spreadsheets into SQL databases.
• Understand the concepts of data marts and data lakes, with experience migrating legacy systems to data marts/lakes.
• Use additional cloud technologies (e.g., cloud services such as Azure SQL Server).
• Maintain comprehensive project documentation.
• Learn new technologies quickly and perform continuous research, analysis, and process improvement.
• Communicate and collaborate effectively in a team environment that includes customer and contractor technical staff, end users, and management.
• Manage multiple projects, responsibilities, and competing priorities.
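As an illustration of the flat-file-to-database responsibility above, here is a minimal sketch using only Python's standard library (csv + sqlite3) rather than the Oracle/SQL Server platforms named in this posting; the table name and sample columns are hypothetical:

```python
import csv
import io
import sqlite3

def load_csv_to_sql(fh, conn, table):
    """Create `table` from the CSV header and bulk-insert every row."""
    reader = csv.reader(fh)
    header = next(reader)
    # Derive a simple all-TEXT schema from the header row.
    cols = ", ".join(f'"{c}" TEXT' for c in header)
    conn.execute(f'CREATE TABLE IF NOT EXISTS "{table}" ({cols})')
    placeholders = ", ".join("?" for _ in header)
    rows = [tuple(r) for r in reader]
    conn.executemany(f'INSERT INTO "{table}" VALUES ({placeholders})', rows)
    conn.commit()
    return len(rows)

# Hypothetical sample data; a real pipeline would read from flat files or exports.
conn = sqlite3.connect(":memory:")
sample = io.StringIO("id,name\n1,alice\n2,bob\n")
loaded = load_csv_to_sql(sample, conn, "customers")
```

A production version would add type inference, validation, and incremental loads, but the load pattern (derive schema, parameterized bulk insert, commit) is the same.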
Experience Needed:
• Programming languages and frameworks such as Python, SQL, PL/SQL, and VB
• Database platforms such as Oracle, SQL Server, and MySQL
• Big data concepts and technologies such as Synapse and Databricks
• AWS and Azure cloud computing
• HVR data replication
• Informatica and/or Immuta for data governance
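For context on the replication requirement: HVR itself performs log-based change data capture against database transaction logs, which is far more efficient than comparing snapshots. The sketch below is only a conceptual illustration of the sync problem replication tools solve, not HVR's actual mechanism; all names and sample rows are hypothetical:

```python
def plan_sync(source, target):
    """Given {primary_key: row} snapshots of a source and target table,
    compute the insert/update/delete operations that bring the target
    in sync with the source."""
    inserts = {k: v for k, v in source.items() if k not in target}
    updates = {k: v for k, v in source.items() if k in target and target[k] != v}
    deletes = [k for k in target if k not in source]
    return inserts, updates, deletes

# Hypothetical table snapshots keyed by primary key.
src = {1: ("alice", 10), 2: ("bob", 20), 3: ("carol", 30)}
tgt = {2: ("bob", 99), 3: ("carol", 30), 4: ("dave", 40)}
inserts, updates, deletes = plan_sync(src, tgt)
```

Log-based CDC avoids this full comparison by streaming each committed change as it happens, which is why it scales to the real-time modes the role describes.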