

JSG (Johnson Service Group, Inc.)
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (Databricks) with a contract length of "unknown" and a pay rate of "unknown," working remotely in the USA. Key skills include Databricks, Azure Data Factory, SQL, and PySpark. Experience in data pipeline orchestration and Azure-based transformations is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 14, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Boston, MA
-
🧠 - Skills detailed
#Data Lake #Azure Logic Apps #Azure Function Apps #Data Pipeline #ETL (Extract, Transform, Load) #UAT (User Acceptance Testing) #Azure Data Factory #Azure DevOps #Databricks #SQL (Structured Query Language) #ADF (Azure Data Factory) #Documentation #Synapse #Delta Lake #Data Transformations #Unit Testing #Spark (Apache Spark) #DevOps #Data Engineering #PySpark #Python #Azure #Logic Apps #Data Quality #Integration Testing
Role description
Our client is seeking a Data Engineer (Databricks) responsible for building, orchestrating, and optimizing Azure-based data pipelines and transformations to deliver reliable data into enterprise lakehouse and warehouse platforms. (Remote, USA)
Must-have skills
• Databricks (Data Engineering)
• DLT (Delta Live Tables; see the sketch after this list)
• Azure Data Factory
• SQL
• PySpark
• Synapse (Dedicated SQL Pool)
• Azure DevOps
• Python
• Azure Function Apps
• Azure Logic Apps
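Since DLT is called out as a must-have, here is a minimal sketch of what a Delta Live Tables pipeline step looks like in Python. The table names and the quality rule are illustrative assumptions, not details from the posting.

```python
import dlt
from pyspark.sql import functions as F

# Hypothetical silver-layer table; "bronze_orders" is an assumed upstream DLT table.
@dlt.table(comment="Cleansed orders promoted from the bronze layer")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")  # data quality rule: drop failing rows
def silver_orders():
    return (
        dlt.read_stream("bronze_orders")                       # incremental read of the upstream table
           .withColumn("ingested_at", F.current_timestamp())   # audit column added during promotion
    )
```

Declaring expectations on the table, rather than filtering ad hoc, is what lets DLT surface data quality metrics in the pipeline UI.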
Responsibilities
• Create and enhance data pipelines leveraging existing ingestion frameworks and tools.
• Orchestrate data pipelines using Azure Data Factory (a trigger sketch follows this list).
• Develop/enhance data transformations to parse, transform, and load data into the Enterprise Data Lake, Delta Lake, and Enterprise DWH (Synapse Analytics); see the PySpark sketch under Other information.
• Perform unit testing and coordinate integration testing and UAT.
• Create pipeline documentation including high-level design (HLD), detailed design (DD), and runbooks.
• Configure compute, implement data quality rules, and manage pipeline maintenance.
• Conduct performance tuning and optimization.
• Provide production support and operational troubleshooting.
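As a concrete illustration of the orchestration bullet above, the sketch below triggers an ADF pipeline run programmatically with the azure-mgmt-datafactory SDK. The subscription, resource group, factory, pipeline name, and parameter are hypothetical placeholders, not details from the posting.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()  # picks up CLI / managed-identity credentials
adf = DataFactoryManagementClient(credential, "<subscription-id>")  # placeholder subscription

# All names below are hypothetical, for illustration only.
run = adf.pipelines.create_run(
    resource_group_name="rg-data-platform",
    factory_name="adf-enterprise",
    pipeline_name="pl_ingest_orders",
    parameters={"load_date": "2026-02-14"},
)
print(f"Started ADF run: {run.run_id}")
```

In practice a scheduled or event-based ADF trigger would start the run; the SDK call is handy for ad hoc reruns and Azure DevOps release steps.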
Other information
• Primary platforms/tools: Azure Data Factory, Databricks (DLT), Synapse Dedicated SQL Pool, Azure DevOps, Python/PySpark, Azure Function Apps, Azure Logic Apps.
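To make the transformation-and-load responsibility concrete, here is a minimal PySpark sketch that parses raw JSON and appends it to a Delta table. The storage path and table name are assumptions for illustration; on Databricks, the `spark` session is already provided.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # on Databricks this session already exists

# Hypothetical ADLS Gen2 landing path; replace the placeholders for a real run.
raw = spark.read.json("abfss://raw@<storage-account>.dfs.core.windows.net/orders/")

clean = (
    raw.filter(F.col("order_id").isNotNull())            # basic data quality gate
       .withColumn("order_date", F.to_date("order_ts"))  # derive a partition key from the timestamp
)

(clean.write
      .format("delta")
      .mode("append")
      .partitionBy("order_date")
      .saveAsTable("enterprise_lake.silver_orders"))     # hypothetical Delta Lake table
```

From there, curated Delta tables would typically be loaded into the Synapse Dedicated SQL Pool for warehouse consumers.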






