Mid-Level Analytics Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Mid-Level Analytics Engineer on a contract of unspecified length and unspecified hourly rate. Key skills include 2+ years in data engineering, proficiency in SQL and Python, and AWS experience; Snowflake knowledge is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
September 20, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
Unknown
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Kansas City, MO
🧠 - Skills detailed
#Deployment #Cloud #ETL (Extract, Transform, Load) #Automation #Docker #GIT #SNS (Simple Notification Service) #SQL (Structured Query Language) #S3 (Amazon Simple Storage Service) #Scala #Data Modeling #Python #AWS (Amazon Web Services) #AI (Artificial Intelligence) #ML (Machine Learning) #Batch #Data Engineering #Data Ingestion #SQS (Simple Queue Service) #Snowflake #Monitoring #Schema Design #Normalization #Data Quality #Airflow #Data Pipeline #Lambda (AWS Lambda)
Role description
Key Responsibilities
• Design, build, and maintain event-driven ETL pipelines leveraging AWS services (SNS, SQS, Lambda, ECS, S3) and Snowflake (a minimal sketch of this pattern follows the lists below).
• Develop and optimize data ingestion frameworks in Python for both batch and near real-time workloads.
• Partner with the Feature Development team to ensure reliable, scalable data pipelines that enable product features.
• Implement data quality checks, monitoring, and alerting to ensure trustworthy pipelines (see the second sketch below).
• Optimize queries, schema design, and performance within Snowflake.
• Collaborate with product and engineering teams to understand feature requirements and translate them into robust data solutions.
• Contribute to CI/CD practices and infrastructure-as-code for pipeline deployments.

Must-Have Technical Skills
• 2+ years of professional experience in data engineering or backend development.
• Strong proficiency in SQL (analytical queries, schema design, performance tuning).
• Hands-on experience with Python for ETL and automation.
• Experience with AWS cloud services (SNS, SQS, Lambda, ECS, S3, etc.).
• Knowledge of event-driven or streaming architectures.
• Proficiency with Git workflows and collaborative development.

Preferred Skills
• Experience working with Snowflake at scale (query optimization, task orchestration, warehouse management).
• Familiarity with orchestration tools (Airflow, Dagster, Prefect).
• Understanding of data modeling best practices (star schema, normalization, incremental loads); an incremental-load sketch follows below.
• Exposure to containerization (Docker) and CI/CD pipelines.
• Interest in AI/ML-driven analytics and semantic layers (Snowflake Cortex).
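To make the event-driven pattern above concrete, here is a minimal sketch of an SQS-triggered Lambda handler that copies newly landed S3 files into Snowflake. It assumes an S3 → SNS → SQS → Lambda flow and a hypothetical RAW_EVENTS table behind an external stage named raw_stage; all names and the credential scheme are illustrative, not part of the posting.

```python
"""Minimal sketch of the event-driven ingestion path (S3 -> SNS -> SQS ->
Lambda -> Snowflake). RAW_EVENTS, raw_stage, and the env-var credential
scheme are assumptions for illustration only."""
import json
import os

import snowflake.connector


def get_connection():
    # Env-var credentials keep the sketch short; a real deployment would
    # more likely use Secrets Manager or key-pair authentication.
    return snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="INGEST_WH",
        database="ANALYTICS",
        schema="RAW",
    )


def handler(event, context):
    """SQS trigger: each record body wraps an SNS envelope, which in turn
    wraps the original S3 object-created notification."""
    conn = get_connection()
    try:
        cur = conn.cursor()
        for record in event["Records"]:
            envelope = json.loads(record["body"])
            s3_event = json.loads(envelope["Message"])
            for rec in s3_event.get("Records", []):
                key = rec["s3"]["object"]["key"]
                # Load only the file named in the event through an external
                # stage assumed to cover the source bucket. Keys come from
                # trusted AWS events here; validate before interpolating
                # into SQL in production code.
                cur.execute(
                    "COPY INTO RAW_EVENTS FROM @raw_stage "
                    f"FILES = ('{key}') FILE_FORMAT = (TYPE = JSON)"
                )
    finally:
        conn.close()
    return {"processed": len(event["Records"])}
```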
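The data quality and alerting responsibility could look something like the following: scheduled checks that query Snowflake and publish any failures to an SNS topic. The table, columns, and topic name are again assumptions made for the sketch.

```python
"""Sketch of a scheduled data quality gate: freshness and null checks on
the (assumed) RAW_EVENTS table, with failures published to an SNS topic."""
import os

import boto3
import snowflake.connector


def run_checks(conn):
    """Return a list of human-readable failure messages (empty = healthy)."""
    failures = []
    cur = conn.cursor()

    # Freshness: expect at least one row loaded within the past hour.
    cur.execute(
        "SELECT COUNT(*) FROM RAW_EVENTS "
        "WHERE loaded_at >= DATEADD('hour', -1, CURRENT_TIMESTAMP())"
    )
    if cur.fetchone()[0] == 0:
        failures.append("RAW_EVENTS: no rows loaded in the last hour")

    # Completeness: event_id is treated as the natural key, never null.
    cur.execute("SELECT COUNT(*) FROM RAW_EVENTS WHERE event_id IS NULL")
    nulls = cur.fetchone()[0]
    if nulls:
        failures.append(f"RAW_EVENTS: {nulls} rows with null event_id")
    return failures


def alert_if_failing(failures):
    # Publishing to SNS lets email/Slack subscriptions fan out downstream.
    if failures:
        boto3.client("sns").publish(
            TopicArn=os.environ["ALERT_TOPIC_ARN"],  # hypothetical topic
            Subject="Data quality check failed",
            Message="\n".join(failures),
        )


if __name__ == "__main__":
    conn = snowflake.connector.connect(  # same env-var scheme as above
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="INGEST_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        alert_if_failing(run_checks(conn))
    finally:
        conn.close()
```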
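For the incremental-load item under Preferred Skills, one common shape is a windowed MERGE from the raw landing table into a fact table. All table and column names below are invented for illustration.

```python
"""Illustrative incremental load: upsert the last day's raw rows into a
fact table. FACT_EVENTS, RAW_EVENTS, and their columns are assumptions."""

MERGE_SQL = """
MERGE INTO FACT_EVENTS AS tgt
USING (
    SELECT event_id, user_id, event_type, event_ts
    FROM RAW_EVENTS
    WHERE loaded_at >= DATEADD('day', -1, CURRENT_TIMESTAMP())
) AS src
ON tgt.event_id = src.event_id
WHEN MATCHED THEN UPDATE SET
    tgt.user_id = src.user_id,
    tgt.event_type = src.event_type,
    tgt.event_ts = src.event_ts
WHEN NOT MATCHED THEN INSERT (event_id, user_id, event_type, event_ts)
    VALUES (src.event_id, src.user_id, src.event_type, src.event_ts)
"""


def run_incremental_load(conn):
    # MERGE keyed on event_id keeps the window idempotent: replaying the
    # same day never duplicates fact rows.
    conn.cursor().execute(MERGE_SQL)
```

An orchestrator such as Airflow, one of the tools named in the posting, would typically own the daily scheduling of a step like this.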