Qwiktechy LLC

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 10+ years of experience, focusing on Snowflake, AWS, Python, and PySpark. Contract length is unspecified, with a competitive pay rate. Requires strong data warehousing knowledge and proficiency in SQL and data pipeline frameworks.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 3, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Data Engineering #"ETL (Extract #Transform #Load)" #Lambda (AWS Lambda) #Programming #Azure #Azure Data Factory #Data Transformations #Automation #Data Pipeline #IAM (Identity and Access Management) #Python #AWS S3 (Amazon Simple Storage Service) #Spark (Apache Spark) #S3 (Amazon Simple Storage Service) #Agile #Scala #Data Modeling #Airflow #AWS (Amazon Web Services) #Schema Design #SQL (Structured Query Language) #ADF (Azure Data Factory) #SnowPipe #GIT #PySpark #JavaScript #Storage #Data Processing #Data Integration #dbt (data build tool) #Version Control #DevOps #Cloud #Snowflake
Role description
We are looking for a highly experienced Data Engineer who can help design, build, and optimize scalable data solutions using Snowflake, AWS, Python, and PySpark. This role is ideal for someone who enjoys solving complex data challenges, building modern data pipelines, and working in a fast-paced, collaborative environment.

What You’ll Do
• Design and develop robust end-to-end data pipelines and integration solutions using Snowflake, AWS services, and PySpark.
• Build new ingestion and data processing frameworks leveraging Snowflake capabilities, Python, and cloud-native tools (a minimal example is sketched after this description).
• Work hands-on with Snowflake architecture, including Virtual Warehouses, Storage/Caching, Snowpipe, Streams, Tasks, and Stages.
• Write optimized SQL, stored procedures (JavaScript- or Python-based), and scripts to support data transformations and automation.
• Collaborate across teams to implement data warehousing best practices (ETL/ELT, dimensional modeling, star/snowflake schemas).
• Use orchestration tools such as Airflow, dbt, Azure Data Factory, or similar to schedule and manage workflows.
• Apply Git-based version control and CI/CD best practices to ensure high-quality, maintainable code.
• Work within an Agile environment, communicating clearly with team members and stakeholders.

What You Bring
• 10+ years of experience in data engineering or data integration roles.
• Expert-level skills in Snowflake, along with deep experience integrating Snowflake with AWS services and PySpark.
• 8+ years of hands-on experience with core data engineering tools and technologies:
  • Snowflake ecosystem
  • AWS (S3, Lambda, IAM, Glue, etc.)
  • Core SQL
  • Python programming
• 5+ years building new data pipeline frameworks using AWS, Snowflake, and Python.
• Strong understanding of data warehousing concepts, including data modeling and schema design.
• Proficiency in SQL query optimization and performance tuning.
• Familiarity with Git, DevOps practices, and CI/CD pipelines.
• Excellent communication, teamwork, and problem-solving skills.
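To illustrate the kind of pipeline work described above, here is a minimal, hypothetical sketch of an S3-to-Snowflake load in PySpark. It assumes the Snowflake Spark connector is available on the Spark classpath; the bucket, table, and connection values are placeholders, not details from this posting.

```python
# Minimal sketch: read raw data from S3 with PySpark, apply a simple
# transformation, and write the result to a Snowflake table via the
# Snowflake Spark connector. All names and credentials below are
# illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3_to_snowflake").getOrCreate()

# Read raw events from S3 (hypothetical bucket/prefix).
raw = spark.read.parquet("s3a://example-bucket/raw/events/")

# Simple transformation: deduplicate and stamp the load time.
events = (
    raw.dropDuplicates(["event_id"])
       .withColumn("load_ts", F.current_timestamp())
)

# Snowflake connection options (placeholders; in practice these would
# come from a secrets manager, never hard-coded values).
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "LOAD_WH",
}

# Append the transformed data to a staging table in Snowflake.
(
    events.write.format("net.snowflake.spark.snowflake")
          .options(**sf_options)
          .option("dbtable", "STG_EVENTS")
          .mode("append")
          .save()
)
```

The orchestration responsibility could likewise be served by a scheduler such as Airflow. The sketch below assumes Apache Airflow 2.x; the DAG id, schedule, and callable are illustrative only.

```python
# Minimal sketch of scheduling a daily ingestion job, assuming Airflow 2.x.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def run_ingestion():
    # Placeholder: trigger the PySpark/Snowflake load shown above,
    # e.g. via a spark-submit wrapper or job-submission API.
    pass

with DAG(
    dag_id="daily_events_load",
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_events", python_callable=run_ingestion)
```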