

Hope Tech
Senior Data Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "unknown" and a day rate of $360. Key skills include PySpark, SQL, AWS services, and Control-M. Requires 3+ years of experience in data engineering and production support.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
360
🗓️ - Date
February 20, 2026
🕒 - Duration
Unknown
🏝️ - Location
Unknown
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
McLean, VA
🧠 - Skills detailed
#Spark (Apache Spark) #AWS (Amazon Web Services) #S3 (Amazon Simple Storage Service) #Data Management #Elasticsearch #Data Quality #Kubernetes #ETL (Extract, Transform, Load) #Data Processing #MongoDB #Storage #Data Pipeline #Snowflake #SQL (Structured Query Language) #Data Storage #Python #Monitoring #Data Engineering #Documentation #Spark SQL #PySpark #Cloud
Role description
Primary Skills: Spark - PySpark, SQL (Basic + Advanced), Python, Hive, Data Modelling Fundamentals
Job Requirements:
3+ years of hands-on experience in production support for data engineering or analytics applications.
Proficiency in PySpark for managing and troubleshooting data processing jobs (a minimal sketch of such a job follows this list).
Solid experience with AWS cloud services, including EKS (Elastic Kubernetes Service), EMR (Elastic MapReduce), S3, Glue, and Step Functions.
Hands-on experience with Control-M for workflow orchestration and monitoring.
Working knowledge of MongoDB and Snowflake for production data management.
Experience with log monitoring and analysis using Elasticsearch Service (ESS).
Good understanding of ETL processes, incident management, and production monitoring best practices.
Strong problem-solving, troubleshooting, and analytical skills.
Excellent communication and collaboration abilities.
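As a rough illustration of the PySpark troubleshooting work described above, here is a minimal sketch of a defensive batch job with logging and a simple data-quality gate. The S3 paths, app name, and column names are hypothetical placeholders, not details from this posting.

```python
# Minimal sketch of a defensive PySpark batch job; paths and column
# names are hypothetical, not taken from this posting.
import logging

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("events-batch")

spark = SparkSession.builder.appName("events-batch").getOrCreate()

try:
    # Read raw events; a missing or empty input path is a common production failure.
    events = spark.read.parquet("s3://example-bucket/raw/events/")

    # Basic data-quality gate: drop rows missing a key and log how many were dropped.
    total = events.count()
    clean = events.filter(F.col("event_id").isNotNull())
    dropped = total - clean.count()
    if dropped:
        log.warning("Dropped %d of %d rows with null event_id", dropped, total)

    # Aggregate and write out; overwrite mode keeps reruns idempotent.
    daily = clean.groupBy(F.to_date("event_ts").alias("day")).count()
    daily.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_counts/")
except Exception:
    # Surface the full stack trace so the failure is visible in job logs.
    log.exception("events-batch failed")
    raise
finally:
    spark.stop()
```

Wrapping the job in try/except/finally keeps failures visible in the job logs and always releases the Spark session, which matters when jobs run under an orchestrator such as Control-M.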
Responsibilities:
Provide production support for data pipelines and workflows, ensuring high availability and timely resolution of incidents.
Monitor, troubleshoot, and optimize PySpark-based data processing jobs.
Support and maintain data processing applications deployed on AWS services, including EKS and EMR.
Manage and orchestrate workflows using Control-M, including scheduling, monitoring, and error handling.
Perform root cause analysis and implement permanent fixes for recurring production issues.
Oversee data storage and management in MongoDB and Snowflake, ensuring data quality and consistency.
Monitor and analyze application and system logs using Elasticsearch Service (ESS) to proactively identify and resolve issues (see the query sketch after this list).
Collaborate with data engineering, analytics, and operations teams to deliver reliable data solutions.
Maintain and update technical documentation related to production support processes and incident management.
Participate in on-call rotations and respond to critical production alerts as needed.
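On the log-monitoring side, the sketch below shows one way to pull recent error entries with the official elasticsearch Python client. The cluster URL, index pattern, and field names are assumptions; the keyword-argument search() style shown follows the 8.x client, while older 7.x clients take a body= dict instead.

```python
# Minimal sketch: pull recent ERROR-level entries from Elasticsearch.
# The cluster URL, index pattern, and field names are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("https://logs.example.internal:9200", api_key="...")

resp = es.search(
    index="app-logs-*",
    query={
        "bool": {
            "filter": [
                {"term": {"level": "ERROR"}},
                {"range": {"@timestamp": {"gte": "now-15m"}}},
            ]
        }
    },
    size=20,
    sort=[{"@timestamp": "desc"}],
)

for hit in resp["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("@timestamp"), src.get("message"))
```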
#Workwolf