Heyer Expectations LLC

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer (Remote, USA) position with a high-growth Data Engineering & Analytics company, listed at a day rate of $560 USD with an unspecified contract length. Key skills include Python, SQL, Apache Spark, Airflow, and experience with Snowflake and AWS.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
560
-
🗓️ - Date
December 9, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Apache Spark #Data Pipeline #Scala #Deployment #Spark (Apache Spark) #Batch #Docker #Python #Snowflake #Data Warehouse #Data Lake #Kafka (Apache Kafka) #SQL (Structured Query Language) #Monitoring #Databricks #Data Processing #Cloud #ETL (Extract, Transform, Load) #BI (Business Intelligence) #Airflow #ML (Machine Learning) #Observability #Terraform #Apache Kafka #AWS (Amazon Web Services) #Data Quality #Apache Airflow #Data Engineering #Automation #Data Science
Role description
About The Opportunity
A high-growth player in the Data Engineering & Analytics sector, we build scalable, secure data infrastructure and analytics platforms that power business intelligence and operational analytics for enterprise customers. We deliver production-grade ETL/ELT pipelines, data warehouses, and streaming systems to support data-driven decision making across the organization.

Primary Title: Data Engineer (Remote, USA)

Role & Responsibilities
• Design, build, and maintain scalable ETL/ELT pipelines for batch and streaming data to support analytics and ML use cases (an illustrative PySpark sketch follows this listing).
• Author and optimize SQL and Python-based data processing jobs using Spark and cloud-native services to ensure reliability and cost-efficiency.
• Develop and operate orchestration workflows (Airflow) and CI/CD for data deployments, monitoring, and automated recovery (see the Airflow sketch after this listing).
• Implement and enforce data modelling, partitioning, and governance best practices across data lakes and warehouses (Snowflake/Databricks).
• Collaborate with Data Scientists, Analysts, and Product teams to translate requirements into performant data solutions and delivery timelines.
• Troubleshoot production incidents, tune pipeline performance, and document operational runbooks and observability metrics.

Skills & Qualifications
Must-Have
• Proficiency in Python for data engineering and automation tasks.
• Strong SQL skills for analytics, ETL validation, and performance tuning.
• Hands-on experience with Apache Spark for large-scale data processing.
• Experience building and scheduling workflows with Apache Airflow (or equivalent).
• Familiarity with cloud data platforms and services (AWS preferred) and Snowflake.
• Proven experience designing production-grade data pipelines and implementing data quality/observability.

Preferred
• Experience with Databricks for collaborative Spark workloads.
• Knowledge of streaming platforms such as Apache Kafka.
• Infrastructure-as-code experience (Terraform) and containerization (Docker).

Benefits & Culture Highlights
• Fully remote role with flexible work hours to support work–life balance.
• Opportunities for career growth, cross-functional collaboration, and technical mentorship.
• Competitive compensation, learning and development support, and modern cloud-first tech stack.

Skills: sql, snowflake, apache kafka, aws, data, python, pipelines, apache spark
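As a rough illustration of the batch transformation and data quality work described in the responsibilities above, here is a minimal PySpark sketch. The paths, column names, and checks are hypothetical assumptions for illustration only, not details taken from this listing.

```python
# Minimal sketch of a daily batch transformation with a simple data quality
# gate. All identifiers (bucket paths, column names) are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_transform").getOrCreate()

# Read one day's worth of raw events from a data lake path (placeholder).
raw = spark.read.parquet("s3a://example-data-lake/raw/orders/ds=2025-12-09/")

# Basic cleanup: drop rows missing the key and derive analytics-friendly columns.
orders = (
    raw.dropna(subset=["order_id"])
    .withColumn("amount_usd", F.col("amount_cents") / 100.0)
    .withColumn("order_date", F.to_date("created_at"))
)

# Simple data quality gate: fail the run if duplicate keys slip through.
total = orders.count()
distinct_keys = orders.select("order_id").distinct().count()
if total != distinct_keys:
    raise ValueError(f"duplicate order_id rows: {total - distinct_keys}")

# Write partitioned output for downstream warehouse loading (e.g. Snowflake COPY).
orders.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-data-lake/curated/orders/"
)
```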
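Likewise, a minimal sketch of the kind of Airflow orchestration the role covers, assuming Airflow 2.4+ and purely hypothetical task, table, and DAG names:

```python
# Minimal sketch of a daily extract -> transform -> load DAG. All names and
# schedules are illustrative assumptions, not details from this listing.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_orders(**context):
    # Placeholder: pull the previous day's raw files from object storage.
    print(f"extracting orders for {context['ds']}")


def transform_orders(**context):
    # Placeholder: trigger the Spark transformation over the raw data.
    print("transforming orders")


def load_to_snowflake(**context):
    # Placeholder: COPY the curated output into a Snowflake table.
    print("loading orders into analytics.orders_daily")


with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_orders", python_callable=extract_orders)
    transform = PythonOperator(task_id="transform_orders", python_callable=transform_orders)
    load = PythonOperator(task_id="load_to_snowflake", python_callable=load_to_snowflake)

    extract >> transform >> load
```

The retry defaults gesture at the automated-recovery expectation in the responsibilities; a real deployment would replace the print placeholders with Spark/Snowflake operators and wire in alerting and observability.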