Data Engineer II - Snowflake - SQL - Python, GIT, AWS 100% Remote 12+ Month Contract

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Data Engineer II position focused on Snowflake, SQL, and Python, requiring 2–4 years of experience in healthcare data analysis. It offers a 12+ month remote contract with a pay rate of "TBD." Key skills include AWS and Git.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
September 23, 2025
πŸ•’ - Project duration
More than 6 months
-
🏝️ - Location type
Remote
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#Version Control #Snowflake #AWS (Amazon Web Services) #Databases #SQL (Structured Query Language) #Data Pipeline #Kafka (Apache Kafka) #ETL (Extract, Transform, Load) #Data Engineering #GIT #Data Analysis #Agile #Python #JSON (JavaScript Object Notation) #SQL Server #Datasets #Informatica #Oracle #MongoDB
Role description
Ideally looking for a Healthcare Data Analyst with SQL, Snowflake, Python, and AWS. We have a 12-month contract, with an opportunity to hire, for a highly motivated and collaborative professional with 2–4 years of experience developing and operationalizing data pipelines for ingestion, transformation, validation, and optimization. They possess strong expertise in Snowflake, SQL, and Python, with a proven ability to source and manage data from diverse systems, including relational databases such as Oracle and SQL Server and JSON-based sources such as MongoDB and Kafka. This individual has hands-on experience leveraging tools in the AWS ecosystem and vendor platforms such as Upsolver and Informatica IDMC, as well as using Git for version control. They thrive in agile, sprint-based environments, can work independently after onboarding, and excel at collaborating with source and downstream teams to deliver clean, structured data for reporting and analytics. A strong problem-solver and communicator, they contribute valuable insights to team discussions and help drive continuous improvement in data engineering practices. 100% Remote.
MUST HAVES:
• 2–4+ years of experience
• Snowflake
• SQL
• Python
• AWS
NICE TO HAVES:
• Git experience
Disqualifiers:
• Missing Snowflake / SQL experience
About the Role:
• The Data Sourcing team was established to help source data into our Snowflake environment. We source from multiple applications rooted in relational sources such as Oracle / SQL Server, or JSON-based sources from MongoDB / Kafka. We leverage different utilities to write the data to Snowflake, including, but not limited to, Upsolver and Informatica IDMC. Once the data is in Snowflake, we write it to the different layers of our Snowflake applications using stored procedures written in Python.
• We work with different source teams, who are SMEs on the data sets, and with downstream teams, who help further refine the data into different datasets for reporting needs.
Our team works in an agile fashion in two-week sprints, leveraging agile best practices.
Responsibilities:
• This role is focused primarily on our JSON-based sourcing into Snowflake, leveraging different tools in the AWS ecosystem and various vendor tools. The engineer will also work within the Snowflake platform to write data to different Snowflake tables via stored procedures.
• You are expected to take on work independently or as part of a group after a comprehensive onboarding process. The team always works together, but expectations for this role are higher than for an entry-level position, with fewer basic explanations required. We also look for this individual to be engaged in team conversations and to add perspective to discussions around projects.
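The JSON-based sourcing described above typically involves flattening nested documents (from sources like MongoDB or Kafka) into column-style rows before loading them into Snowflake tables. As a rough illustration of that step only, here is a minimal sketch in plain Python; the function and key names are assumptions, not part of the team's actual pipeline or any vendor tool:

```python
import json
from typing import Any, Dict


def flatten_record(raw: str) -> Dict[str, Any]:
    """Parse one JSON payload and flatten nested objects into
    flat, column-style keys, e.g. {"user": {"name": "a"}}
    becomes {"user_name": "a"}.

    Illustrative sketch only: real pipelines would also handle
    arrays, type casting, and schema drift.
    """

    def walk(prefix: str, obj: Any, out: Dict[str, Any]) -> None:
        if isinstance(obj, dict):
            # Recurse into nested objects, joining keys with "_".
            for key, value in obj.items():
                walk(f"{prefix}{key}_", value, out)
        else:
            # Scalar (or list) value: strip the trailing "_" and store.
            out[prefix[:-1]] = obj

    row: Dict[str, Any] = {}
    walk("", json.loads(raw), row)
    return row


# Example: a Mongo/Kafka-style event payload (hypothetical shape).
record = flatten_record('{"id": 1, "user": {"name": "a", "age": 2}}')
# record -> {"id": 1, "user_name": "a", "user_age": 2}
```

Rows in this shape map naturally onto a Snowflake table insert; in the pipeline described above, the equivalent transformation would live inside a Python stored procedure or a vendor tool such as Upsolver or Informatica IDMC.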