

Enterprise Solutions Inc.
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "unknown," offering a day rate of $720 USD, located in San Francisco, CA. Requires a Bachelor’s degree, 2+ years with Databricks, Collibra, and Starburst, and 3+ years in Python and PySpark.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
720
🗓️ - Date
October 4, 2025
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
San Francisco, CA
🧠 - Skills detailed
#Automation #S3 (Amazon Simple Storage Service) #Airflow #Data Pipeline #Cloud #Python #Redshift #Data Warehouse #Data Engineering #Agile #Databases #Big Data #NoSQL #Unit Testing #PySpark #Jupyter #Computer Science #Security #Strategy #Monitoring #Databricks #Snowflake #Spark (Apache Spark) #AWS (Amazon Web Services) #Collibra #Data Science #ETL (Extract, Transform, Load)
Role description
Qualifications:
• Bachelor’s degree in Computer Science, Information Systems, or a related field, or equivalent experience.
• 2+ years’ experience with tools such as Databricks, Collibra, and Starburst.
• 3+ years’ experience with Python and PySpark.
• Experience using Jupyter notebooks, including coding and unit testing.
• Recent, hands-on experience with relational and NoSQL data stores and with data modeling methods and approaches (star schema, dimensional modeling).
• 2+ years of experience with a modern data stack (object stores such as S3, Spark, Airflow, Lakehouse architectures, real-time databases) and cloud data warehouses such as Redshift and Snowflake.
• Overall data engineering experience across traditional ETL and Big Data, either on-premises or in the cloud.
• Data engineering experience in AWS (any CFS2/EDS), highlighting the services and tools used.
• Experience building end-to-end data pipelines to ingest and process unstructured and semi-structured data using Spark architecture.
Responsibilities:
• Design, develop, and maintain robust and efficient data pipelines to ingest, transform, catalog, and deliver curated, trusted, and quality data from disparate sources into our Common Data Platform.
• Actively participate in Agile rituals and follow Scaled Agile processes as set forth by the CDP Program team.
• Deliver high-quality data products and services following SAFe (Scaled Agile Framework) practices.
• Proactively identify and resolve issues with data pipelines and analytical data stores.
• Deploy monitoring and alerting for data pipelines and data stores, implementing auto-remediation where possible to ensure system availability and reliability.
• Employ a security-first, testing, and automation strategy, adhering to data engineering best practices.
• Collaborate with cross-functional teams, including product management, data scientists, analysts, and business stakeholders, to understand their data requirements and provide them with the necessary infrastructure and tools.
• Keep up with the latest trends and technologies, evaluating and recommending new tools, frameworks, and technologies to improve data engineering processes and efficiencies.