

InfoVision Inc.
Snowflake Data Engineering with AWS, Python and PySpark
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Snowflake Data Engineer with AWS, Python, and PySpark, based in Frisco, TX for 12 months. Requires 10+ years in data engineering, expertise in Snowflake and AWS, and proficiency in SQL and Python.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
December 5, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Frisco, TX
🧠 - Skills detailed
#JavaScript #PySpark #Snowflake #GIT #DevOps #Spark (Apache Spark) #Cloud #Azure #Data Integration #SQL (Structured Query Language) #Storage #Programming #AWS (Amazon Web Services) #Airflow #Data Ingestion #Agile #Version Control #Data Pipeline #Python #dbt (data build tool) #Data Modeling #ETL (Extract, Transform, Load) #GCP (Google Cloud Platform) #ADF (Azure Data Factory) #Azure Data Factory #Data Engineering #Automation
Role description
Job Title: Snowflake Data Engineering with AWS, Python and PySpark
Location: Frisco, TX (3 days per week in office)
Duration: 12 months
Required Skills & Experience:
• 10+ years of experience in data engineering and data integration roles.
• Expert at working with the Snowflake ecosystem integrated with AWS services and PySpark (a PySpark sketch follows this list).
• 8+ years of core data engineering skills: hands-on experience with the Snowflake ecosystem and AWS, core SQL, and Python programming.
• 5+ years of hands-on experience building new data pipeline frameworks with AWS, Snowflake, and Python, and able to evaluate new ingestion frameworks.
• Hands-on with Snowflake architecture: virtual warehouses, storage and caching, Snowpipe, Streams, Tasks, and Stages (sketched below).
• Experience with cloud platforms (AWS, Azure, or GCP) and integration with Snowflake.
• Snowflake SQL and stored procedures (JavaScript or Python-based); a Python-based example is sketched below.
• Proficient in Python for data ingestion, transformation, and automation.
• Solid understanding of data warehousing concepts (ETL, ELT, data modeling, star/snowflake schema).
• Hands-on with orchestration tools (Airflow, dbt, Azure Data Factory, or similar); an Airflow sketch follows this list.
• Proficiency in SQL and performance tuning.
• Familiar with Git-based version control, CI/CD pipelines, and DevOps best practices.
• Strong communication skills and ability to collaborate in agile teams.
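
The bullets above reference several hands-on skills; the short sketches below illustrate them. Each is a minimal example under stated assumptions, not a statement of this team's actual codebase. First, PySpark writing a DataFrame to Snowflake through the Spark-Snowflake connector; the connection values are placeholders and the target table SPARK_DEMO is hypothetical.

    # Minimal PySpark-to-Snowflake write via the Spark-Snowflake connector.
    # The connector and JDBC driver jars must be on the classpath, e.g.
    # spark-submit --packages net.snowflake:spark-snowflake_2.12:<version>
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("snowflake-write-demo").getOrCreate()

    sf_options = {                      # placeholder credentials
        "sfURL": "<account>.snowflakecomputing.com",
        "sfUser": "<user>",
        "sfPassword": "<password>",
        "sfDatabase": "DEMO_DB",
        "sfSchema": "CURATED",
        "sfWarehouse": "ETL_WH",
    }

    df = spark.createDataFrame([(1, "alpha"), (2, "beta")], ["id", "label"])

    (df.write
       .format("net.snowflake.spark.snowflake")   # connector's source name
       .options(**sf_options)
       .option("dbtable", "SPARK_DEMO")           # hypothetical target table
       .mode("overwrite")
       .save())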
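
Next, the Snowpipe / Streams / Tasks trio named above, issued through snowflake-connector-python. Every object name (raw_events, s3_events_stage, events_stream, merge_events_task) is hypothetical, and raw_events is assumed to hold a single VARIANT column for the JSON load.

    # Snowpipe + Stream + Task pattern, sketched via snowflake-connector-python.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="<account>", user="<user>", password="<password>",
        warehouse="INGEST_WH", database="DEMO_DB", schema="RAW",
    )
    cur = conn.cursor()

    # Snowpipe: auto-load files as they land in an external (e.g. S3) stage.
    cur.execute("""
        CREATE PIPE IF NOT EXISTS raw_events_pipe AUTO_INGEST = TRUE AS
        COPY INTO raw_events FROM @s3_events_stage FILE_FORMAT = (TYPE = JSON)
    """)

    # Stream: track new rows in the raw table for incremental processing.
    cur.execute("CREATE STREAM IF NOT EXISTS events_stream ON TABLE raw_events")

    # Task: merge captured changes downstream every five minutes,
    # but only when the stream actually has data.
    cur.execute("""
        CREATE TASK IF NOT EXISTS merge_events_task
          WAREHOUSE = INGEST_WH
          SCHEDULE = '5 MINUTE'
        WHEN SYSTEM$STREAM_HAS_DATA('EVENTS_STREAM')
        AS INSERT INTO curated.events SELECT * FROM events_stream
    """)
    cur.execute("ALTER TASK merge_events_task RESUME")  # tasks start suspended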
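
Next, a Python-based stored procedure handler in the Snowpark style, one of the two procedure languages the role lists (JavaScript being the other); the table names are hypothetical.

    # A Snowpark-style Python stored procedure handler.
    from snowflake.snowpark import Session
    from snowflake.snowpark.functions import col

    def dedupe_orders(session: Session) -> str:
        """Drop rows with a NULL order id, then report the surviving count."""
        df = session.table("DEMO_DB.RAW.ORDERS").filter(
            col("ORDER_ID").is_not_null()
        )
        df.write.save_as_table("DEMO_DB.CURATED.ORDERS", mode="overwrite")
        return f"curated rows: {df.count()}"

    # Server-side registration (types are inferred from the hints above):
    # session.sproc.register(dedupe_orders, name="dedupe_orders",
    #                        packages=["snowflake-snowpark-python"], replace=True)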
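
Finally, orchestration: a minimal Airflow DAG running a daily Snowflake COPY. It assumes the apache-airflow-providers-snowflake package and a preconfigured snowflake_default connection; the DAG, task, and object names are illustrative.

    # Daily Snowflake load orchestrated by Airflow.
    from datetime import datetime
    from airflow import DAG
    from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

    with DAG(
        dag_id="daily_snowflake_load",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",          # `schedule_interval` on Airflow < 2.4
        catchup=False,
    ) as dag:
        SnowflakeOperator(
            task_id="copy_into_raw",
            snowflake_conn_id="snowflake_default",
            sql="COPY INTO raw_events FROM @s3_events_stage "
                "FILE_FORMAT = (TYPE = JSON)",
        )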






