

Sr Data Analyst Snowflake ADF_C2H_W2_Remote
Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr Data Analyst specializing in Snowflake and Azure Data Factory, offering a 6-month contract (to hire) with a pay rate listed as unknown. Candidates should have over 5 years of data engineering experience, more than 2 years in Snowflake, and proficiency in Python and ETL processes.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: August 14, 2025
Project duration: More than 6 months
Location type: Remote
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Little Rock, AR
Skills detailed:
#Data Quality #Snowpark #SQL Server #Database Design #JSON (JavaScript Object Notation) #Scala #AWS (Amazon Web Services) #Automation #Security #Data Mart #Data Engineering #Data Pipeline #ETL (Extract, Transform, Load) #Data Modeling #DevOps #Python #Spark (Apache Spark) #Scripting #Pandas #Data Ingestion #Snowflake #Data Manipulation #Storage #Compliance #Data Security #Cloud #NumPy #Data Governance #Data Processing #ADF (Azure Data Factory) #Apache Spark #SnowSQL #Azure Data Factory #SQL (Structured Query Language) #Azure #Load Balancing #Data Analysis #Data Science #Data Integration #Data Accuracy
Role description
Sr Data Analyst Snowflake ADF
6-month contract (to hire)
100% Telecommute
9am-5pm day
Responsibilities
• Lead the development and management of ETL processes for Program Integrity data marts in Snowflake and SQL Server.
• Design, build, and maintain data pipelines using Azure Data Factory (ADF) to support ETL operations and Peer Group Profiling procedures and outputs (a minimal pipeline sketch follows this list).
• Implement and manage DevOps practices to ensure efficient data loading and workload balancing within the Program Integrity data mart.
• Collaborate with cross-functional teams to ensure seamless data integration and consistent data flow across platforms.
• Monitor, optimize, and troubleshoot data pipelines and workflows to maintain performance and reliability.
• Independently identify and resolve complex technical issues to maintain operational efficiency.
• Communicate effectively and foster collaboration across teams to support project execution and alignment.
• Apply advanced analytical skills and attention to detail to uphold data accuracy and quality in all deliverables.
• Lead cloud-based project delivery, ensuring adherence to timelines, scope, and performance benchmarks.
• Ensure data security and compliance with relevant industry standards and regulatory requirements.
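For illustration only, here is a minimal sketch of how such a pipeline might be defined in Python with the azure-mgmt-datafactory SDK. It assumes the linked services and datasets already exist in the factory; every resource name (subscription, resource group, factory, datasets, pipeline) is a hypothetical placeholder, not taken from this engagement.

```python
# A minimal sketch of defining an ADF copy pipeline in Python.
# Assumes the azure-identity and azure-mgmt-datafactory packages and
# pre-existing linked services/datasets; all names are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    CopyActivity,
    DatasetReference,
    PipelineResource,
    SnowflakeSink,
    SqlServerSource,
)

adf_client = DataFactoryManagementClient(
    DefaultAzureCredential(), "<subscription-id>"  # hypothetical subscription
)

# Copy activity: read from a SQL Server dataset, write to a Snowflake dataset.
copy_claims = CopyActivity(
    name="CopyClaimsToSnowflake",
    inputs=[DatasetReference(type="DatasetReference", reference_name="SqlServerClaims")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="SnowflakeClaims")],
    source=SqlServerSource(),
    sink=SnowflakeSink(),
)

# Publish the pipeline to the factory.
adf_client.pipelines.create_or_update(
    "<resource-group>", "<factory-name>", "LoadPIDataMart",
    PipelineResource(activities=[copy_claims]),
)
```

The reverse leg (Snowflake back to SQL Server) would be a second copy activity with the source and sink datasets swapped.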
Requirements
• Over 5 years of hands-on expertise in cloud-based data engineering and data warehousing, with a strong emphasis on building scalable and efficient data solutions.
• More than 2 years of focused experience in Snowflake, including the design, development, and maintenance of automated data ingestion workflows using Snowflake SQL procedures and Python scripting.
• Practical experience with Azure data tools such as Azure Data Factory (ADF), SQL Server, and Blob Storage, alongside Snowflake.
• Skilled in managing various data formats (CSV, JSON, VARIANT) and executing data loading/exporting tasks using SnowSQL, with orchestration via ADF (see the loading sketch after this list).
• Proficient in using data science and analytics tools like Snowpark, Apache Spark, pandas, NumPy, and Scikit-learn for complex data processing and modeling.
• Strong experience with Python and other scripting languages for data manipulation and automation.
• Proven ability to develop cloud-native ETL processes and data pipelines using modern technologies on Azure and Snowflake.
• Demonstrated excellence in analytical thinking and independent problem-solving.
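As a concrete illustration of the SnowSQL-style loading mentioned above, the sketch below uses snowflake-connector-python to COPY staged JSON files into a VARIANT landing column. The account, stage, and table names are hypothetical placeholders; the same COPY INTO statement could equally be run from the SnowSQL CLI or orchestrated from ADF.

```python
# A minimal sketch of loading JSON into a VARIANT column, assuming the
# snowflake-connector-python package; the account, stage, and table
# names below are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="PI_DM", schema="STAGING",
)

with conn.cursor() as cur:
    # Raw landing table with a single VARIANT column for the JSON payload.
    cur.execute("CREATE TABLE IF NOT EXISTS CLAIMS_RAW (payload VARIANT)")
    # Load staged JSON files; @claims_stage is a hypothetical named stage.
    cur.execute("""
        COPY INTO CLAIMS_RAW
        FROM @claims_stage/claims/
        FILE_FORMAT = (TYPE = 'JSON')
        ON_ERROR = 'ABORT_STATEMENT'
    """)
conn.close()
```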
Preferred Skills
• Certification in Azure or Snowflake
• Experience with data modeling and database design
• Knowledge of data governance and data quality best practices
• Familiarity with other cloud platforms (e.g., AWS, Google Cloud)
Project
This individual will use a mapping document to write the ETL process that loads the PI data mart. They will test the load process and create a balancing routine to be executed after each data load. They will work closely with data analysts and QA testers to ensure that the data is loaded correctly. The ETL process will include procedures for summarizing data. They will also be responsible for creating the ADF pipelines that move data from SQL Server to Snowflake and from Snowflake to SQL Server.
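A balancing routine of this kind typically re-computes control totals (row counts, summed amounts) on both sides of the load and fails if they disagree. Below is a minimal sketch under assumed names: the pyodbc DSN, the Snowflake connection parameters, and the claims table and paid_amount column are all hypothetical placeholders.

```python
# A minimal sketch of a post-load balancing routine, assuming the pyodbc
# and snowflake-connector-python packages; connection details, table, and
# column names are hypothetical placeholders.
import pyodbc
import snowflake.connector

BALANCE_SQL = "SELECT COUNT(*), SUM(paid_amount) FROM claims"

# Control totals from the SQL Server source.
src = pyodbc.connect("DSN=pi_source")  # hypothetical ODBC DSN
src_count, src_sum = src.cursor().execute(BALANCE_SQL).fetchone()
src.close()

# Control totals from the Snowflake target.
tgt = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",
    warehouse="<warehouse>", database="PI_DM", schema="MART",
)
with tgt.cursor() as cur:
    tgt_count, tgt_sum = cur.execute(BALANCE_SQL).fetchone()
tgt.close()

# Fail loudly when totals disagree so the pipeline can surface the error
# (in practice the sums may need decimal normalization before comparing).
if (src_count, src_sum) != (tgt_count, tgt_sum):
    raise RuntimeError(
        f"Out of balance: source ({src_count}, {src_sum}) "
        f"vs target ({tgt_count}, {tgt_sum})"
    )
print("Load balanced: row counts and paid amounts match.")
```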
Ideal Background
Experience with setting up DDL, mapping data, and extract, transform, and load (ETL) procedures in Snowflake and SQL Server. Experience with Azure services, including Azure Data Factory pipelines for ETL and movement of data between Snowflake and SQL Server. Experience with identifying and creating balancing procedures for medical claim data, and the ability to resolve balancing issues. Ability to clearly communicate processes to technical as well as non-technical staff. Experience with creating algorithms or models using tools like Snowpark, Apache Spark, pandas, NumPy, and Scikit-learn for complex data processing and modeling. Experience with Python, used for data manipulation and automation.
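To make the algorithm/model expectation concrete, here is one small, hypothetical example of peer-group profiling with pandas: each provider's average paid amount is z-scored against its specialty peer group, and outliers above an illustrative threshold are flagged. The column names, sample data, and threshold are assumptions for illustration, not the client's actual method.

```python
# A minimal, hypothetical sketch of peer-group profiling with pandas:
# flag providers whose average paid amount stands out within their
# specialty peer group. All names and values are illustrative.
import pandas as pd

claims = pd.DataFrame({
    "provider_id": [1, 1, 2, 3, 4, 4, 5, 6],
    "specialty":   ["cardiology"] * 6 + ["oncology"] * 2,
    "paid_amount": [115.0, 125.0, 130.0, 140.0, 900.0, 950.0, 300.0, 310.0],
})

# Average paid amount per provider, keyed to the provider's peer group.
per_provider = (
    claims.groupby(["specialty", "provider_id"])["paid_amount"]
    .mean()
    .reset_index(name="avg_paid")
)

# Z-score of each provider against its specialty peer group.
grp = per_provider.groupby("specialty")["avg_paid"]
per_provider["z"] = (per_provider["avg_paid"] - grp.transform("mean")) / grp.transform("std")

# Flag providers well above their peers (threshold is an illustrative choice).
print(per_provider[per_provider["z"] > 1.2])
```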
Top 3 Requirements
β’ Over 5 years of hands-on expertise in cloud-based data engineering and data warehousing, with a strong emphasis on building scalable and efficient data solutions.
β’ More than 2 years of focused experience in Snowflake, including the design, development, and maintenance of automated data ingestion workflows using Snowflake SQL procedures and Python scripting.
β’ Practical experience with Azure data tools such as Azure Data Factory (ADF), SQL Server, and Blob Storage, alongside Snowflake.
What experience will set candidates apart from one another?
Recent medical claim processing experience. Data science and analytics experience.
Team
1 Product Owner, 1 PM/Implementation Manager, 1 SME/Analytics Manager, 2 ETL/Algorithm Developers, 2 Report Developers, and 7 Data Analysts.
2 Rounds of Interviews
Verbal quizzes to confirm the individual's knowledge. There is no written/virtual test.
Required Skills: Snowflake (including Snowpark), Azure Data Factory, Python, SQL, ETL