

Eliassen Group
Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 5–7+ years of data engineering experience and strong proficiency in SQL, Databricks, and CI/CD pipelines. The contract is for "X months" at "$Y/hour", with the location listed as "Remote". Pharmaceutical industry experience is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 20, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Data Pipeline #Version Control #Data Architecture #Data Processing #Data Engineering #Databricks #Cloud #SQL (Structured Query Language) #Compliance #AWS (Amazon Web Services) #Data Modeling #Deployment #Scala #GCP (Google Cloud Platform) #Migration #Spark (Apache Spark) #Data Governance #Automation #Azure #Python #GitHub
Role description
Job Description
Our pharmaceutical client is seeking a skilled and proactive Data Engineer to join their team and help drive the automation and scalability of their data workflows. This role is ideal for someone with a strong foundation in data engineering who thrives in a fast-paced environment and can work independently while collaborating across teams.
You'll play a key role in evolving the client's data architecture, automating critical processes, and supporting cross-functional data needs. This position offers the opportunity to work with modern technologies such as Databricks, GitHub Actions, and CI/CD pipelines, while contributing to impactful projects in the pharmaceutical and cloud computing space.
Key Responsibilities:
• Design, build, and maintain scalable data pipelines and workflows.
• Automate manual steps across the data architecture to improve efficiency and reliability.
• Collaborate with stakeholders to understand end-to-end data workflows and identify opportunities for optimization.
• Support project scaling initiatives, ensuring robust and maintainable data infrastructure.
• Provide occasional support to team members (e.g., Deepak and Maisy) for data requests, updates, and troubleshooting.
• Contribute to the migration and automation of data processes into Databricks.
• Ensure best practices in version control and deployment using GitHub and CI/CD pipelines.
Required Skills & Experience:
• 5–7+ years of experience in data engineering.
• Strong proficiency in SQL and data modeling.
• Hands-on experience with CI/CD pipelines, especially in conjunction with GitHub.
• Familiarity with GitHub Actions for automation and deployment.
• Experience working with Databricks for data processing and analytics.
• Solid understanding of cloud computing platforms (e.g., AWS, Azure, GCP).
• Background or experience in pharmaceutical data is a plus.
• Ability to work independently and manage multiple priorities effectively.
Preferred Qualifications:
• Experience with data governance and compliance in regulated industries.
• Familiarity with Python, Spark, or other data engineering tools.
• Strong communication and collaboration skills.