

Jobs via Dice
Senior Data Engineer (34455)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (34455) on a fully remote contract, requiring 5+ years of Data Engineering experience; expertise in Databricks, Snowflake, and AWS Cloud; and strong Python and SQL skills. Certifications are a plus.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
October 17, 2025
🕒 - Duration
Unknown
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
United States
🧠 - Skills detailed
#GitHub #Data Lakehouse #Data Manipulation #Azure Data Factory #Agile #Databricks #PySpark #Security #Data Ingestion #Data Engineering #Data Modeling #Automation #Azure #Airflow #Documentation #Terraform #Scala #Compliance #SQL (Structured Query Language) #DevOps #Data Science #Data Security #Kafka (Apache Kafka) #Redshift #dbt (data build tool) #Lambda (AWS Lambda) #Azure DevOps #BI (Business Intelligence) #Infrastructure as Code (IaC) #Spark (Apache Spark) #Data Pipeline #Delta Lake #Snowflake #Data Encryption #IAM (Identity and Access Management) #BigQuery #Python #S3 (Amazon Simple Storage Service) #ADF (Azure Data Factory) #AWS Glue #Cloud #Deployment #GIT #AWS (Amazon Web Services) #Data Lake #ETL (Extract, Transform, Load) #Data Governance #Athena
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Myticas LLC, is seeking the following. Apply via Dice today!
We are seeking a Senior Data Engineer with proven expertise in Databricks, Snowflake, and AWS Cloud. This fully remote contract role requires hands-on experience designing, developing, and optimizing scalable, secure, and high-performance data pipelines across modern cloud-based ecosystems.
Responsibilities
• Design, develop, and maintain data pipelines and ETL workflows using Databricks and Snowflake (a minimal PySpark sketch follows this list).
• Implement data ingestion, transformation, and orchestration solutions across structured and unstructured data sources.
• Develop and optimize data models, warehouse schemas, and partitioning strategies for analytical performance.
• Build and maintain AWS-based data infrastructure (e.g., S3, Lambda, Glue, Redshift, IAM, CloudFormation).
• Ensure data security and compliance through encryption/decryption processes and governance frameworks (e.g., encrypt/decrypt guest reservation data).
• Implement CI/CD pipelines for data engineering using tools like GitHub Actions, AWS CodePipeline, or Azure DevOps.
• Collaborate with data scientists, analysts, and architects to align infrastructure with business intelligence needs.
• Monitor, troubleshoot, and resolve data pipeline performance or reliability issues.
• Document technical solutions and follow best practices for code versioning, testing, and deployment.
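To make the first responsibility concrete, here is a minimal PySpark/Delta Lake sketch of the kind of Databricks batch transform implied above. The S3 path, the reservation_id and checkin_date columns, and the analytics.reservations_clean table are hypothetical placeholders, not details from the posting.

```python
# Minimal sketch, assuming hypothetical source/target names; not taken from the posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("reservations-etl").getOrCreate()

# Ingest raw JSON landed in S3 (example path).
raw = spark.read.json("s3://example-bucket/raw/reservations/")

# Basic cleansing: drop duplicate keys, normalise the check-in date, keep valid rows.
clean = (
    raw.dropDuplicates(["reservation_id"])
       .withColumn("checkin_date", F.to_date("checkin_date"))
       .filter(F.col("reservation_id").isNotNull())
)

# Persist as a Delta table partitioned by check-in date for analytical scans.
(
    clean.write.format("delta")
         .mode("overwrite")
         .partitionBy("checkin_date")
         .saveAsTable("analytics.reservations_clean")
)
```

Partitioning the Delta table by a date column is one common way to serve the "partitioning strategies for analytical performance" point above, since it limits how much data downstream queries have to scan.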
Must Have
• 5+ years of experience in Data Engineering, building and maintaining cloud-based data solutions.
• Hands-on experience with Snowflake (mandatory):
  • Expertise in Snowflake SQL, data modeling, staging, warehouse optimization, time travel, and data sharing.
  • Experience integrating Snowflake with Databricks and AWS data services.
• Strong proficiency in Databricks (PySpark, Delta Lake, notebook development).
• Solid knowledge of AWS Cloud services such as S3, Glue, Athena, Lambda, Step Functions, and Redshift.
• Proficiency with Python and SQL for data manipulation and ETL logic (see the Snowflake loading sketch after this list).
• Strong understanding of ETL/ELT frameworks, data lakehouse architectures, and data governance principles.
• Experience with data encryption, decryption, and key management best practices.
• Excellent communication, documentation, and collaboration skills.
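As a rough illustration of the Python-and-SQL ETL logic and Snowflake/AWS integration listed above, the sketch below uses the snowflake-connector-python package to bulk-load S3-staged Parquet files and merge them into a reporting table. The credential environment variables, stage, warehouse, schema, and table names are all assumptions made for the example.

```python
# Minimal sketch, assuming hypothetical stage, warehouse, and table names.
import os
import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Bulk-load Parquet files from an external stage into the staging table.
    cur.execute("""
        COPY INTO STAGING.RESERVATIONS
        FROM @S3_RAW_STAGE/reservations/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
    # Upsert the newly staged rows into the reporting table.
    cur.execute("""
        MERGE INTO REPORTING.RESERVATIONS tgt
        USING STAGING.RESERVATIONS src
          ON tgt.reservation_id = src.reservation_id
        WHEN MATCHED THEN UPDATE SET tgt.status = src.status
        WHEN NOT MATCHED THEN INSERT (reservation_id, status)
             VALUES (src.reservation_id, src.status)
    """)
finally:
    conn.close()
```

In practice the COPY/MERGE pair would typically run against an external stage backed by S3, which is one way the Snowflake-plus-AWS integration requirement tends to show up.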
Nice to Have
• Experience with Airflow, dbt, or AWS Glue Workflows for orchestration (a minimal Airflow DAG sketch follows this list).
• Familiarity with Terraform or CloudFormation for infrastructure as code.
• Exposure to Azure Data Factory, Google BigQuery, or Kafka streaming pipelines.
• Knowledge of CI/CD automation for data pipelines.
• Certifications (e.g., AWS Certified Data Analytics - Specialty, Databricks Certified Data Engineer, SnowPro Core).
• Experience working in agile environments with DevOps and Git-based workflows.
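For the orchestration nice-to-have, here is a minimal Airflow 2.x DAG sketch. The DAG id, task callables, and schedule are hypothetical placeholders, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Minimal sketch of an Airflow 2.x DAG; callables and schedule are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_to_s3(**context):
    """Placeholder: pull source data and land it in S3."""
    print("extracting...")


def load_to_snowflake(**context):
    """Placeholder: trigger the Snowflake COPY/MERGE step."""
    print("loading...")


with DAG(
    dag_id="reservations_daily",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+ style schedule argument
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3", python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)

    extract >> load  # run the load only after the extraction succeeds
```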