

CBL Solutions
Solutions Architect
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an Azure Data Solutions Architect on a remote contract basis, offering a competitive pay rate. Candidates should have a BS/MS/PhD in a relevant field, 4+ years of experience in biotech/pharmaceuticals, and expertise in Azure services, data engineering, and team leadership.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 1, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Azure #Code Reviews #AI (Artificial Intelligence) #Data Processing #Database Design #Strategy #Scala #ETL (Extract, Transform, Load) #BigQuery #GCP (Google Cloud Platform) #Leadership #DevOps #Automation #Agile #Alation #Security #Azure Data Factory #ADLS (Azure Data Lake Storage) #Visualization #Deployment #Databricks #Kubernetes #Data Engineering #Computer Science #Cloud #Data Quality #Data Ingestion #Virtualization #ADF (Azure Data Factory) #Python #Logging #R #Vault #Documentation #Synapse
Role description
Role: Azure Data Solutions Architect
Location: Remote
Contract Position
Our client is looking for a leading technical contributor who can consistently take a business or technical problem, refine it into a well-defined data problem or specification, present the solution to peers, and execute it at a high level. They maintain a strong focus on metrics, both for the impact of their work and for its inner workings and operations. They model best practices for software development in general (and data engineering in particular), including code quality, documentation, DevOps practices, and testing, and consistently mentor junior members of the team. They ensure the robustness of our services and serve as an escalation point in the operation of existing services, pipelines, and workflows.
This lead should demonstrate core engineering knowledge of and experience with industry technologies, practices, and frameworks, e.g. Databricks, Kubernetes, ArgoCD, ADO, Azure Service Bus and Pub/Sub, CI/CD, OpenTelemetry, networking principles, and scaling applications.
They must be expert at working closely and collaborating with nearshore and offshore delivery teams.
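As a rough illustration of the observability expectations above (OpenTelemetry, logging, lineage), the sketch below instruments a single pipeline step with OpenTelemetry's Python API; the service name, span attributes, and console exporter are illustrative assumptions rather than client specifics.
```python
# Minimal OpenTelemetry tracing sketch for a pipeline step (illustrative only;
# the service name, span names, and console exporter are assumptions).
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider(resource=Resource.create({"service.name": "ingest-pipeline"}))
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)

def ingest_batch(source: str, row_count: int) -> None:
    # Wrap the ingestion step in a span so operational metrics are queryable later.
    with tracer.start_as_current_span("ingest_batch") as span:
        span.set_attribute("source.name", source)
        span.set_attribute("rows.ingested", row_count)
        # ... actual ingestion logic would go here ...

ingest_batch("example_source_extract", 1250)
```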
Primary responsibilities include the following:
• Uses Azure or GCP cloud services and proprietary data platform tools to ingest, egress, and transform data from multiple sources (see the ingestion sketch after this list).
• Confidently optimizes the design and execution of complex data ingestion and data transformation solutions, using established patterns or improving those patterns.
• Produces well-engineered software, including appropriate automated test suites, technical documentation, and operational strategy.
• Provides input into roadmaps, e.g. for the Data Platforms and other Data Engineering teams, to help improve the overall program of work.
• Ensures consistent application of platform capabilities to maintain quality and consistency in logging and lineage.
• Is fully versed in coding best practices and ways of working, participates in code reviews, and partners to implement established standards within the team and to improve those standards where needed.
• Adheres to the QMS framework, Security & Regulatory Standards, and CI/CD best practices, and helps guide improvements to them that improve ways of working.
• Provides leadership to team members to help others get the job done right.
• Supports engineering teams in the creation and adoption of data mesh best practices.
• Maintains best practices for engineering and architecture on our Confluence site.
• Proactively engages in experimentation and innovation to drive relentless improvement.
• Provides leadership and technical input to architecture and engineering teams.
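For context on the ingestion and transformation responsibilities above, here is a deliberately simplified PySpark sketch of the kind of step a Databricks-based pipeline might run; the ADLS path, column names, and target table are hypothetical placeholders, not client details.
```python
# Hypothetical PySpark ingestion/transformation step on Databricks
# (paths, schema, and table names are placeholders, not client specifics).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-and-curate").getOrCreate()

# Ingest raw CSV landed in an ADLS Gen2 container (abfss path is illustrative).
raw = (
    spark.read
    .option("header", "true")
    .csv("abfss://raw@examplelake.dfs.core.windows.net/batches/2025-11-01/")
)

# Basic transformation: standardize types, drop obviously bad rows, stamp ingestion time.
curated = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("record_id").isNotNull())
       .withColumn("_ingested_at", F.current_timestamp())
)

# Persist as a Delta table for downstream consumers (Synapse, BI, etc.).
curated.write.format("delta").mode("append").saveAsTable("curated.transactions")
```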
Basic Qualifications:
We are looking for professionals with these required skills to achieve our goals:
• BS in Computer Science, Software Engineering, Biomedical Engineering, Engineering, or Bioinformatics/Computational Biology, with 4+ years of experience (or MS with 2+ years of experience, or PhD) in the biotech/pharmaceutical/healthcare/diagnostics/health insurance space.
• Extensive architecture, coding, and testing experience; excellent teamwork.
• Proficient in at least three of the skills below, with demonstrable knowledge, value, and relevant experience across the following competencies:
• Data Engineering development, architecture design & technology platforms/frameworks.
• Hands-on experience with Azure data analytics services, e.g. ADLS, Azure Data Factory, Azure Databricks, Purview, Azure Synapse, etc. (a brief illustration follows this list).
• Data Platforms and Domain-driven design.
• Agile, DevOps & Automation [of testing, build, deployment, CI/CD, etc.]
• Data analytics & data quality/integrity.
• Testing strategies & frameworks.
• Kubernetes and ArgoCD/FluxCD.
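Purely as an illustration of hands-on work with the Azure services listed above (ADLS, Azure Data Factory), the sketch below triggers and polls an ADF pipeline run using the azure-identity and azure-mgmt-datafactory Python packages; the subscription ID, resource group, factory, and pipeline names are placeholders.
```python
# Hypothetical example of triggering and polling an ADF pipeline run
# (all identifiers below are placeholders, not client resources).
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

# Kick off a pipeline run with a run-date parameter.
run = adf_client.pipelines.create_run(
    resource_group_name="rg-data-platform",
    factory_name="adf-example",
    pipeline_name="pl_ingest_daily",
    parameters={"run_date": "2025-11-01"},
)

# Poll until the run leaves the queued/in-progress states, then report status.
while True:
    status = adf_client.pipeline_runs.get("rg-data-platform", "adf-example", run.run_id)
    if status.status not in ("Queued", "InProgress"):
        break
    time.sleep(30)

print(f"Pipeline run {run.run_id} finished with status: {status.status}")
```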
Role requires:
• The soft skills needed to lead a large data engineering team.
• Demonstrated skill in delivering high-quality engineered data products.
• Knowledge of industry standards and technology platforms.
• Excellent communication, negotiation, influencing, and stakeholder management skills.
• Customer focus and excellent problem-solving skills.
• Familiarity with and use of various cloud ecosystem services, including BigQuery, Databricks, Key Vault, object stores, etc.
• Good understanding of various software paradigms: domain-driven, procedural, data-driven, object-oriented, and functional (briefly illustrated after this list).
• Deep knowledge of Python.
• Demonstrable depth of knowledge in more than one area of software engineering and technology.
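To make the paradigm point above concrete, here is a tiny illustrative Python snippet showing the same transformation written in a functional style and an object-oriented style; the names and data are made up for the example.
```python
# Illustrative only: one small transformation in two styles (hypothetical names/data).
from dataclasses import dataclass
from typing import Iterable

# Functional style: a pure function over plain data.
def normalize(amounts: Iterable[float], rate: float) -> list[float]:
    return [round(a * rate, 2) for a in amounts]

# Object-oriented style: the same behavior bound to state on a small domain object.
@dataclass
class CurrencyNormalizer:
    rate: float

    def normalize(self, amounts: Iterable[float]) -> list[float]:
        return [round(a * self.rate, 2) for a in amounts]

print(normalize([10.0, 25.5], rate=0.92))                # functional
print(CurrencyNormalizer(rate=0.92).normalize([10.0]))   # object-oriented
```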
Good to have Qualifications:
If you have the following characteristics, it would be a plus:
• Experience with data structures (i.e. information management), data models, or relational database design.
• A background in biomedical data processing.
• Experience in GenAI and Agentic AI.
• Subject matter expertise in Pharma CMC and scientific domains.
• Experience applying data curation, virtualization, workflow, and advanced visualization techniques to enable decision support across multiple products and assets and to drive results across R&D business operations.






