Themesoft Inc.

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "Unknown" and a pay rate of "Unknown." It requires 6+ years of software development experience, 3+ years in data engineering, proficiency in Python and Azure, and a background in life sciences.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 19, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Cloud #Datasets #Databases #Scala #Data Pipeline #PySpark #Data Modeling #Data Processing #Database Design #Spark (Apache Spark) #ETL (Extract, Transform, Load) #NoSQL #Deployment #Databricks #SQL (Structured Query Language) #Data Engineering #Python #Data Quality #R #Security #Azure Cloud #Compliance #Azure #Model Deployment
Role description
Role: Data Engineer
Location: US Remote

Required Skills:
• 6+ years of software development experience, with 3+ years focused on data engineering
• Advanced proficiency in Python, PySpark, and distributed data processing frameworks
• Strong experience with the Databricks platform and Azure cloud services
• Proven experience in client-facing roles with technical solution delivery
• Strong presentation skills with the ability to communicate complex technical concepts clearly
• Experience gathering requirements and translating business needs into technical solutions
• Expertise in data modeling, database design, and modern data warehousing concepts
• Experience with SQL, NoSQL databases, and data pipeline orchestration tools
• Strong analytical and problem-solving skills with attention to detail
• Ability to work independently while maintaining strong collaboration with team members
• Background in life sciences, biotechnology, or healthcare domains
• Knowledge of MLOps practices and model deployment pipelines
• Knowledge of GxP compliance and validation in regulated environments

Responsibilities:
• Build and optimize scalable data pipelines using PySpark and Databricks for veterinary diagnostic data processing
• Develop robust ETL/ELT workflows that handle complex, multi-source veterinary research datasets
• Implement data models and schemas that support advanced analytics and research workflows
• Create efficient data processing solutions using Azure cloud services and modern data stack technologies
• Develop APIs and data services that enable seamless integration with client systems
• Ensure data quality, governance, and security standards across all technical deliverables

Regards,
Praveen Kumar
Talent Acquisition Group – Strategic Recruitment Manager
praveen.r@themesoft.com | Themesoft Inc.