FUSTIS LLC

Data Engineer (Snowflake, ETL, AWS)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer (Snowflake, ETL, AWS) on a contract-to-hire basis at a pay rate of up to $50/hr on W2. Located onsite in Seattle, WA, it requires 5+ years of experience in Data Engineering and expertise in AWS and Snowflake.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
440
🗓️ - Date
October 16, 2025
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Seattle, WA
🧠 - Skills detailed
#Monitoring #Data Warehouse #Datadog #AWS Glue #Scala #Bash #Data Modeling #Splunk #Scripting #Cloud #Data Pipeline #Snowflake #Data Quality #Python #Automation #Kafka (Apache Kafka) #Logging #Terraform #DevOps #Data Engineering #ETL (Extract, Transform, Load) #Infrastructure as Code (IaC) #Lambda (AWS Lambda) #Security #Storage #S3 (Amazon Simple Storage Service) #Airflow #SQL (Structured Query Language) #Agile #AWS (Amazon Web Services) #Data Security #dbt (data build tool)
Role description
Job Title: Data Operations Engineer / Data Engineer
Location: Onsite – Seattle, WA (local candidates only)
Employment Type: Contract to Hire
Eligibility: USC/GC only
Pay Rate: $50/hr on W2 (maximum)

Job Overview:
We are seeking a highly skilled Data Operations Engineer with strong expertise in AWS, Snowflake, and DevOps-driven ETL workflows. The ideal candidate will manage data pipelines, monitor data flows, optimize performance, and ensure reliability across our cloud-based data infrastructure. This role requires hands-on cloud engineering skills and operational excellence in a fast-paced environment.

Key Responsibilities:
• Design, build, and maintain scalable ETL/ELT pipelines using AWS and Snowflake (an illustrative sketch follows the listing).
• Monitor daily data operations, troubleshoot failures, and ensure high data availability and integrity.
• Automate workflows using DevOps tools and CI/CD pipelines.
• Collaborate with data engineers, analysts, and platform teams to optimize data flow and storage mechanisms.
• Implement best practices for data security, governance, and performance tuning.
• Develop monitoring and alerting solutions for proactive incident management.
• Participate in on-call rotations to maintain operational readiness.

Required Skills & Experience:
• 5+ years of experience in Data Engineering or Data Operations roles.
• Strong expertise in AWS services such as S3, Lambda, Glue, ECS/EKS, Step Functions, or similar.
• Hands-on experience with Snowflake (data modeling, performance tuning, query optimization).
• Solid understanding of ETL/ELT processes and orchestration tools (Airflow, dbt, AWS Glue, etc.).
• Experience with DevOps practices, including CI/CD pipelines, automation, and infrastructure as code (Terraform/CloudFormation).
• Proficiency in Python, SQL, or Bash scripting for automation.
• Familiarity with monitoring and logging tools (CloudWatch, Datadog, Splunk, etc.).

Preferred Qualifications:
• Experience working in agile, cloud-native environments.
• Knowledge of data warehouse best practices and data quality frameworks.
• Exposure to Kafka, Kinesis, or other streaming platforms.
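The first responsibility above spans the whole stack named in this listing. As a minimal sketch only, assuming Airflow 2.4+ and the snowflake-connector-python package, here is one shape an S3-to-Snowflake ELT pipeline like this might take; every identifier (DAG id, account, credentials, the @raw_stage stage, the RAW.EVENTS table) is a hypothetical placeholder, not something from the posting.

```python
# Illustrative sketch of an S3 -> Snowflake ELT DAG.
# Assumes Airflow 2.4+ and snowflake-connector-python; all names below are
# hypothetical placeholders, not values from the job posting.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def copy_into_snowflake(**context):
    """Run a COPY INTO statement that loads staged S3 files into Snowflake."""
    import snowflake.connector  # requires snowflake-connector-python

    conn = snowflake.connector.connect(
        account="my_account",   # hypothetical; use an Airflow connection
        user="etl_user",        # hypothetical
        password="***",         # placeholder: pull from a secrets backend
        warehouse="LOAD_WH",    # hypothetical warehouse/database/schema
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        # @raw_stage is a hypothetical external stage over the S3 bucket.
        conn.cursor().execute(
            "COPY INTO RAW.EVENTS FROM @raw_stage/events/ "
            "FILE_FORMAT = (TYPE = 'JSON') ON_ERROR = 'ABORT_STATEMENT'"
        )
    finally:
        conn.close()


with DAG(
    dag_id="s3_to_snowflake_events",  # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={
        "retries": 2,                         # retry transient load failures
        "retry_delay": timedelta(minutes=5),
        # an on_failure_callback here could page via Datadog/CloudWatch,
        # covering the monitoring-and-alerting responsibility above
    },
) as dag:
    PythonOperator(
        task_id="copy_events_into_snowflake",
        python_callable=copy_into_snowflake,
    )
```

COPY INTO over an external stage skips files Snowflake has already loaded, which keeps daily reruns safe, and the retry settings plus a failure callback are one common way to meet the monitoring and on-call expectations listed above.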