

Compunnel Inc.
Data Architect/Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Consultant on a long-term contract, located in San Francisco, CA; Los Angeles, CA; Portland, OR; Salt Lake City, UT; or Renton, WA. Required skills include 2+ years with Databricks and Collibra, 3+ years with Python and PySpark, and experience with AWS and modern data stacks.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
October 2, 2025
Duration
Unknown
-
Location
Unknown
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Los Angeles, CA
-
Skills detailed
#NoSQL #Monitoring #Scala #Collibra #Data Warehouse #Python #Redshift #Data Architecture #Data Engineering #AWS (Amazon Web Services) #Airflow #PySpark #Data Pipeline #Automation #Data Science #Jupyter #Databricks #Spark (Apache Spark) #Big Data #Cloud #S3 (Amazon Simple Storage Service) #Security #Computer Science #Agile #Strategy #Snowflake #Databases #ETL (Extract, Transform, Load) #Unit Testing
Role description
Job Title: Senior Data Consultant
Location: San Francisco, CA / Los Angeles, CA / Portland, OR / Salt Lake City, UT / Renton, WA
Duration: Long Term
Job Description:
Role Overview: As a Data Engineer, this contingent worker will be responsible for collecting, parsing, managing, analyzing, and visualizing large sets of data to turn information into actionable insights. They will work across multiple platforms to ensure that data pipelines are scalable, repeatable, secure, and capable of serving multiple users.
Qualifications:
• Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent experience.
• 2+ years' experience with tools such as Databricks, Collibra, and Starburst.
• 3+ years' experience with Python and PySpark.
• Experience using Jupyter notebooks, including coding and unit testing.
• Recent accomplishments working with relational and NoSQL data stores, methods, and approaches (star schema, dimensional modeling).
• 2+ years of experience with a modern data stack (object stores such as S3, Spark, Airflow, lakehouse architectures, real-time databases) and cloud data warehouses such as Redshift and Snowflake.
• Overall data engineering experience across traditional ETL and Big Data, either on-prem or in the cloud.
• Data engineering experience in AWS (any CFS2/EDS), highlighting the services/tools used.
• Experience building end-to-end data pipelines to ingest and process unstructured and semi-structured data using Spark architecture (a minimal PySpark sketch follows this list).
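Illustrative only, not part of the posting: a minimal PySpark sketch of the kind of semi-structured ingestion described above, reading JSON events from S3, applying a light curation step, and writing a partitioned dataset. The bucket paths, column names, and job name are hypothetical placeholders.

    # Minimal PySpark sketch: ingest semi-structured JSON from S3, curate it,
    # and write a partitioned dataset. All paths and column names are assumed.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("curated-events-ingest")  # assumed job name
        .getOrCreate()
    )

    # Read semi-structured JSON events from an object store (path is illustrative).
    raw = spark.read.json("s3a://example-raw-bucket/events/2025/10/")

    # Basic curation: drop malformed rows, normalize the timestamp, keep trusted columns.
    curated = (
        raw.where(F.col("event_id").isNotNull())
           .withColumn("event_ts", F.to_timestamp("event_time"))
           .withColumn("event_date", F.to_date("event_ts"))
           .select("event_id", "event_ts", "event_date", "user_id", "payload")
    )

    # Persist as a date-partitioned Parquet dataset; a Delta/lakehouse table would be similar.
    (
        curated.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3a://example-curated-bucket/events/")
    )

In practice the curated output would typically land in a governed lakehouse table (for example, Delta on Databricks) and be cataloged in a tool such as Collibra.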
Key Responsibilities:
• Design, develop, and maintain robust and efficient data pipelines to ingest, transform, catalog, and deliver curated, trusted, high-quality data from disparate sources into our Common Data Platform.
• Actively participate in Agile rituals and follow Scaled Agile processes as set forth by the CDP Program team.
• Deliver high-quality data products and services following SAFe (Scaled Agile Framework) practices.
• Proactively identify and resolve issues with data pipelines and analytical data stores.
• Deploy monitoring and alerting for data pipelines and data stores, implementing auto-remediation where possible to ensure system availability and reliability (a minimal orchestration-and-alerting sketch follows this list).
• Employ a security-first, testing, and automation strategy, adhering to data engineering best practices.
• Collaborate with cross-functional teams, including product management, data scientists, analysts, and business stakeholders, to understand their data requirements and provide them with the necessary infrastructure and tools.
• Keep up with the latest trends and technologies, evaluating and recommending new tools, frameworks, and technologies to improve data engineering processes and efficiency.
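Illustrative only, not part of the posting: a minimal Airflow sketch showing how a pipeline task might be orchestrated with retries and a failure callback for alerting, in the spirit of the monitoring and auto-remediation responsibility above. The DAG id, schedule, and the notify/ingest functions are hypothetical placeholders.

    # Minimal Airflow sketch: one ingest task with retries (simple auto-remediation)
    # and a failure callback for alerting. Names and schedule are assumed.
    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.python import PythonOperator


    def notify_on_failure(context):
        # Placeholder alert hook; a real deployment might post to Slack or PagerDuty.
        print(f"Task {context['task_instance'].task_id} failed")


    def run_ingest(**_):
        # Placeholder for the real PySpark/Databricks ingest step.
        print("running ingest")


    with DAG(
        dag_id="curated_events_pipeline",      # assumed name
        start_date=datetime(2025, 1, 1),
        schedule="@daily",                     # Airflow 2.4+ style; older versions use schedule_interval
        catchup=False,
        default_args={
            "retries": 2,                      # retry before alerting
            "retry_delay": timedelta(minutes=5),
            "on_failure_callback": notify_on_failure,
        },
    ) as dag:
        ingest = PythonOperator(task_id="ingest_events", python_callable=run_ingest)

A production setup would typically replace the print-based callback with a messaging or paging integration and delegate the ingest step to Databricks or a Spark cluster.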