Data Engineer - Databricks - REMOTE

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer - Databricks, offering a remote contract position. The pay rate is unknown. The role requires expertise in Python, SQL, ETL processes, and big data technologies such as Databricks and Azure; experience with data lakes and cloud services is essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 25, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Boston, MA
🧠 - Skills detailed
#"ETL (Extract #Transform #Load)" #Batch #Programming #Data Ingestion #Azure SQL #Databricks #SQL Server #Storage #SQL (Structured Query Language) #Data Integration #Azure #Big Data #Synapse #Python #AWS (Amazon Web Services) #MySQL #Cloud #Oracle #Documentation #Data Lake #Databases #Azure cloud #BI (Business Intelligence) #Scala #Data Mart #Data Engineering #Data Processing
Role description

We are seeking a software engineer/developer or an ETL/data integration/big data developer with experience in projects emphasizing data processing and storage.

This person will be responsible for supporting data ingestion, transformation, and distribution to end consumers. The candidate will perform requirements analysis, design and develop process flows, write unit and integration tests, and create and update process documentation.

   • Work with the Business Intelligence team and operational stakeholders to design and implement both the data presentation layer available to the user community and the underlying technical architecture of the data warehousing environment.

   • Develop scalable and reliable data solutions to move data across systems from multiple sources in both real-time and batch modes.

   • Design and develop database objects (tables, stored procedures, views, etc.).

   • Independently analyze, solve, and correct issues in real time, providing end-to-end problem resolution.

   • Design and develop ETL processes that transform a variety of raw data, flat files, and Excel spreadsheets into SQL databases (a minimal sketch follows this list).

   • Understand the concepts of data marts and data lakes, with experience migrating legacy systems to a data mart/lake.

   • Use additional cloud technologies (e.g., understand cloud services such as Azure SQL Server).

   • Maintain comprehensive project documentation.

   • Demonstrate the aptitude to learn new technologies and the ability to perform continuous research, analysis, and process improvement.

   • Strong interpersonal and communication skills for working in a team environment that includes customer and contractor technical staff, end users, and management team members.

   • Manage multiple projects, responsibilities, and competing priorities.
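
As a rough illustration of the ETL work described above, here is a minimal PySpark sketch of a batch flat-file-to-SQL pipeline. It assumes a Spark environment with a SQL Server JDBC driver on the classpath; the file path, column names, table name, and connection details are all hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("flat-file-etl").getOrCreate()

# Extract: read a raw flat file (path and schema options are placeholders)
raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("/mnt/raw/orders.csv")
)

# Transform: normalize a column name and drop rows missing the key
clean = (
    raw
    .withColumnRenamed("Order ID", "order_id")
    .filter(F.col("order_id").isNotNull())
)

# Load: append the result to a SQL Server table over JDBC
# (connection details below are placeholders, not real credentials)
(
    clean.write
    .format("jdbc")
    .option("url", "jdbc:sqlserver://<host>:1433;databaseName=<db>")
    .option("dbtable", "dbo.orders_clean")
    .option("user", "<user>")
    .option("password", "<password>")
    .mode("append")
    .save()
)
```

The same pattern extends to Excel sources (for example, via an intermediate CSV export or a spreadsheet reader library) and to other JDBC-compatible databases such as Oracle or MySQL.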

Experience Needed:

   • Programming languages and frameworks such as Python, SQL, PL/SQL, and VB

   • Database platforms such as Oracle, SQL Server, and MySQL

   • Big data concepts and technologies such as Synapse and Databricks

   • AWS and Azure cloud computing
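
To make the Databricks and data-lake experience concrete, here is a minimal sketch of landing data in a Delta table in both batch and real-time modes. It assumes a Databricks runtime (the cloudFiles Auto Loader source is Databricks-specific); all paths and table names are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-ingestion").getOrCreate()

# Batch mode: append one day's extract to a Delta table in the lake
(
    spark.read.parquet("/mnt/landing/2025-04-25/")
    .write.format("delta")
    .mode("append")
    .saveAsTable("sales.daily_orders")
)

# Real-time mode: continuously pick up new files as they land,
# using Databricks Auto Loader (the cloudFiles source)
query = (
    spark.readStream
    .format("cloudFiles")
    .option("cloudFiles.format", "parquet")
    .option("cloudFiles.schemaLocation", "/mnt/checkpoints/daily_orders/schema")
    .load("/mnt/landing/stream/")
    .writeStream
    .option("checkpointLocation", "/mnt/checkpoints/daily_orders")
    .toTable("sales.daily_orders")
)
```

Writing both modes to the same Delta table lets downstream consumers (BI tools, data marts) read one consistent source regardless of how the data arrived.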