

Compunnel Inc.
Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer; the contract length and pay rate are unknown. Required skills include 2+ years with Databricks and Collibra, 3+ years with Python and PySpark, and AWS experience.
Country: United States
Currency: $ USD
Day rate: Unknown
Date: December 2, 2025
Duration: Unknown
Location: Unknown
Contract: Unknown
Security: Unknown
Location detailed: San Francisco Bay Area
Skills detailed: #Agile #NoSQL #Scala #S3 (Amazon Simple Storage Service) #Databricks #PySpark #Unit Testing #Data Engineering #Datasets #Big Data #Spark (Apache Spark) #Python #AWS (Amazon Web Services) #Monitoring #Jupyter #Data Warehouse #ETL (Extract, Transform, Load) #Airflow #Databases #Computer Science #Redshift #Security #Data Pipeline #Collibra #Data Science #Snowflake #Automation #Cloud
Role description
Job Summary:
The Data Engineer will be responsible for collecting, parsing, managing, analyzing, and visualizing large datasets to transform information into actionable insights. The role involves building scalable, repeatable, and secure data pipelines across various platforms. The engineer will ensure that data workflows support multiple users while maintaining high performance, security, and reliability.
Key Responsibilities:
• Design, develop, and maintain robust and efficient data pipelines to ingest, transform, catalog, and deliver curated, trusted, high-quality data into the Common Data Platform (a minimal pipeline sketch follows this list).
• Actively participate in Agile ceremonies and follow the Scaled Agile processes defined by the CDP Program team.
• Deliver high-quality data products and services following SAFe Agile practices.
• Identify and resolve issues related to data pipelines and analytical data stores.
• Implement monitoring and alerting for pipelines and data stores, enabling auto-remediation to ensure uptime and reliability (a scheduling and alerting sketch also follows this list).
• Apply a security-first approach with strong testing and automation practices.
• Collaborate with product managers, data scientists, analysts, and business stakeholders to understand data needs and provide the right tools and infrastructure.
• Stay current with modern data engineering trends and evaluate new tools, frameworks, and technologies to improve efficiency.
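By way of illustration (not part of the original listing): a minimal PySpark sketch of the ingest-transform-deliver pattern the first responsibility describes. The bucket path, column names, and table name are hypothetical, and the Delta write assumes a Databricks-style runtime with Delta Lake available.

```python
# Illustrative sketch only; all paths and names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("cdp-ingest-sketch").getOrCreate()

# Ingest: read semi-structured JSON events from an S3 landing zone.
raw = spark.read.json("s3://example-landing/events/")  # hypothetical bucket

# Transform: derive a partition date and filter out malformed rows.
curated = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_id").isNotNull())
       .select("event_id", "event_date", "payload")
)

# Deliver: write curated data to a lakehouse table for downstream users.
(curated.write.format("delta")
        .mode("append")
        .partitionBy("event_date")
        .saveAsTable("cdp.curated_events"))  # hypothetical schema/table
```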
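Similarly illustrative: an Airflow 2.x-style sketch of the monitoring-and-alerting responsibility, using task retries as a simple form of auto-remediation and a failure callback as the alerting hook. The DAG id, callback body, and schedule are hypothetical.

```python
# Illustrative Airflow sketch; DAG id, callback, and schedule are hypothetical.
from datetime import datetime, timedelta
from airflow import DAG
from airflow.operators.python import PythonOperator

def notify_on_failure(context):
    # Hypothetical alerting hook: in practice this would page an on-call channel.
    print(f"Pipeline task failed: {context['task_instance'].task_id}")

def run_ingest():
    print("running ingest step")  # placeholder for the real pipeline call

with DAG(
    dag_id="cdp_pipeline_monitoring_sketch",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={
        "retries": 3,                              # auto-remediation: retry transient failures
        "retry_delay": timedelta(minutes=5),
        "on_failure_callback": notify_on_failure,  # alerting hook
    },
) as dag:
    PythonOperator(task_id="ingest", python_callable=run_ingest)
```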
Required Skills:
• Bachelor's degree in Computer Science, Information Systems, or a related field, or equivalent experience.
• 2+ years' experience with Databricks, Collibra, and Starburst.
• 3+ years' experience with Python and PySpark.
• Experience using Jupyter notebooks for coding and unit testing (see the test sketch after this list).
• Strong recent experience with relational and NoSQL data stores and with star-schema and dimensional modeling methods.
• 2+ years' experience with a modern data stack (S3, Spark, Airflow, lakehouse architecture, real-time databases).
• Experience with cloud data warehouses such as Redshift or Snowflake.
• Overall data engineering experience across traditional ETL and big data (on-premises or cloud).
• AWS data engineering experience, detailing the services and tools used.
• Experience building end-to-end pipelines for unstructured and semi-structured data using Spark.
• Ability to work with confidential supervisory information (must meet Protected Individual requirements).
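Purely illustrative: a pytest-style unit test for a small PySpark transformation of semi-structured data, the kind of notebook-adjacent unit testing the skills list calls for. The function, fixture, and sample rows are hypothetical.

```python
# Illustrative test sketch; the transform and data are hypothetical.
import pytest
from pyspark.sql import Row, SparkSession
from pyspark.sql import functions as F

def flatten_events(df):
    """Hypothetical transform: pull a nested field up and drop null ids."""
    return (df.withColumn("user_id", F.col("payload.user.id"))
              .filter(F.col("event_id").isNotNull()))

@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_flatten_events_drops_null_ids(spark):
    df = spark.createDataFrame([
        Row(event_id="e1", payload=Row(user=Row(id="u1"))),
        Row(event_id=None, payload=Row(user=Row(id="u2"))),
    ])
    out = flatten_events(df)
    assert out.count() == 1           # the null-id row is dropped
    assert out.first().user_id == "u1"  # the nested id is flattened
```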
Education:
Bachelor's degree in Computer Science, Information Systems, or a related field (or equivalent experience).