

Intuition IT – Intuitive Technology Recruitment
Senior Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer focused on clinical data integration, requiring strong expertise in Data Engineering, ETL/ELT, Python, Spark, Databricks, and Denodo. Remote work is available, with a focus on life sciences experience.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
March 21, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Integration Testing #Spark (Apache Spark) #Data Pipeline #Data Processing #Databricks #ETL (Extract, Transform, Load) #PySpark #Scala #Data Engineering #Delta Lake #Snowflake #Monitoring #Python #Data Integration #DevOps
Role description
Job Title: Senior Data Engineer – Clinical Data Integration
Location: USA Remote
Job Summary:
We are looking for a Senior Data Engineer to build and maintain scalable data pipelines for clinical and operational data. The role focuses on upstream and downstream integration, ensuring high-quality, reliable data across systems.
Key Responsibilities:
• Develop and maintain data pipelines using Databricks, Snowflake, and Spark/PySpark
• Implement data integration using Denodo (VQL)
• Work with Delta Lake for optimized data processing
• Integrate data from Veeva Clinical systems
• Support clinical operations and monitoring data
• Perform unit and integration testing
• Collaborate with cross-functional teams and support DevOps/CI-CD processes
Required Skills:
• Strong experience in Data Engineering & ETL/ELT
• Hands-on with Python, Spark, Databricks, Snowflake
• Experience with Denodo / VQL
• Knowledge of clinical data or life sciences domain
• Familiarity with testing and DevOps practices
Nice to Have:
• Experience with Veeva Clinical platforms
• Exposure to GxP / regulated environments
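Candidates for a role like this are often asked to show small, unit-testable transformation steps. As a minimal, hypothetical sketch (the function, field names, and cleaning rules below are illustrative assumptions, not taken from the posting), a testable cleaning step for clinical operational records might look like:

```python
# Hypothetical example: a small, unit-testable cleaning step for
# clinical operational records. Field names and rules are illustrative
# assumptions, not taken from the job posting.

def clean_records(records):
    """Drop records missing a subject_id; normalise site codes and visits."""
    cleaned = []
    for rec in records:
        if not rec.get("subject_id"):
            continue  # discard records without a subject identifier
        cleaned.append({
            "subject_id": rec["subject_id"].strip(),
            "site": rec.get("site", "").strip().upper(),
            "visit": rec.get("visit", "UNSCHEDULED"),
        })
    return cleaned


# A simple unit test in the spirit of the "unit and integration
# testing" responsibility listed above.
def test_clean_records():
    raw = [
        {"subject_id": " S001 ", "site": "us-01", "visit": "BASELINE"},
        {"subject_id": "", "site": "us-02"},      # dropped: no subject_id
        {"subject_id": "S003", "site": "de-01"},  # visit defaulted
    ]
    out = clean_records(raw)
    assert [r["subject_id"] for r in out] == ["S001", "S003"]
    assert out[0]["site"] == "US-01"
    assert out[1]["visit"] == "UNSCHEDULED"


test_clean_records()
```

The same pattern scales to the posting's stack: the per-record logic becomes a DataFrame transformation in PySpark on Databricks, with the test asserting on a small in-memory DataFrame instead of a list of dicts.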






