Seneca Resources

Sr Consultant

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer (Azure Databricks / PySpark) on a contract basis, paying $60/hour, fully remote. Requires 5+ years of data engineering experience, strong expertise in Azure Databricks and PySpark, and experience with healthcare data environments.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
480
-
🗓️ - Date
March 18, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Texas, United States
-
🧠 - Skills detailed
#Azure #Spark (Apache Spark) #Agile #Data Modeling #Data Integration #Azure Data Factory #Azure Databricks #Data Architecture #Data Engineering #ADF (Azure Data Factory) #Data Processing #ETL (Extract, Transform, Load) #Data Quality #Databricks #PySpark #Data Pipeline #Scala
Role description
Position Title: Senior Data Engineer (Azure Databricks / PySpark)
Location: Philadelphia, Pennsylvania (100% Remote)
Position Status: Contract
Pay Rate: $60/hour on W2

Position Description
We are seeking a highly skilled Senior Data Engineer to support a critical data engineering initiative within the healthcare operations domain. This role focuses on building and optimizing scalable data pipelines, enabling advanced analytics, and supporting enterprise data transformation efforts. The ideal candidate brings strong expertise in Azure Data Platform technologies, Databricks, PySpark, and data modeling, along with experience in workflow orchestration and large-scale data processing. This is a fully remote opportunity with a fast-moving, collaborative team.

Key Responsibilities
• Design, develop, and maintain scalable data pipelines using Azure Databricks and PySpark
• Implement and manage ETL/ELT workflows using Azure Data Factory (ADF) or similar orchestration tools
• Perform data modeling to support analytics and reporting use cases
• Optimize data processing for performance, scalability, and reliability
• Collaborate with cross-functional teams to understand data requirements and deliver solutions
• Ensure data quality, integrity, and consistency across systems
• Troubleshoot and resolve complex data and pipeline issues
• Support data integration across multiple systems and platforms
• Document data architecture, pipelines, and workflows
• Contribute to continuous improvement of data engineering best practices

Required Skills/Education
• 5+ years of experience in Data Engineering
• Strong hands-on experience with:
   • Azure Databricks
   • PySpark
   • Azure Data Factory (ADF) or similar orchestration tools
• Solid experience in data modeling and data architecture
• Experience building and managing data pipelines and workflow orchestration
• Strong problem-solving and analytical skills
• Excellent communication and collaboration skills
• Experience working in Agile environments

Preferred Qualifications
• Experience working in healthcare data environments (Epic, clinical, or operational data)
• Familiarity with large enterprise data platforms
• Experience with data warehousing and lakehouse architectures
• Exposure to predictive analytics or advanced analytics platforms