

SRS Consulting Inc
Data Engineer with Python
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with Python, offering a long-term remote contract. Required skills include 12-14 years of experience building data pipelines, 10 years with Snowflake, 8 years with BI tools, and strong Python and SQL proficiency.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 19, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, United States
-
🧠 - Skills detailed
#Data Warehouse #Scala #Data Processing #Batch #AWS DMS (AWS Database Migration Service) #dbt (data build tool) #DMS (Data Migration Service) #Tableau #Kafka (Apache Kafka) #Documentation #AWS S3 (Amazon Simple Storage Service) #Data Engineering #Snowflake #Cloud #SnowPipe #S3 (Amazon Simple Storage Service) #BI (Business Intelligence) #Schema Design #DevOps #SQS (Simple Queue Service) #ETL (Extract, Transform, Load) #Jupyter #SQL (Structured Query Language) #API (Application Programming Interface) #Security #Qlik #Data Pipeline #AWS (Amazon Web Services) #Data Integration #Looker #Microsoft Power BI #Lambda (AWS Lambda) #Python
Role description
Role: Data Engineer with Python
Duration: Long Term
Location: Remote – EST working hours
• 12-14+ years of experience developing data pipelines for streaming and batch processing to move data in and out of the Snowflake data warehouse
• Collaborate with engineering and DevOps team members to implement, test, deploy, and operate data pipelines and ETL solutions
• 10 years of hands-on experience with Snowflake, including schema design, query optimization, and data loading techniques
• 8 years of hands-on reporting experience leveraging Business Intelligence tools such as Looker, Qlik, Tableau, Power BI, etc.
• Experience with dbt, including model development, testing, and documentation
• Optimize data pipelines, ETL processes, and data integrations for large-scale data analytics use cases
• 7+ years of experience in building and maintaining data pipelines and ETL/ELT processes in data-intensive organizations
• Design, build, and maintain scalable data pipelines using Snowflake and dbt.
• Develop and manage ETL processes to ingest data from various sources into Snowflake (see the Python sketch after this list).
• Strong coding skills with Python and SQL for manipulating and analyzing data
• Strong experience with Anaconda and Jupyter Notebook
• Hands-on experience with data movement using Snowpipe, SnowSQL, etc.
• Able to build data integrations and ingestion pipelines for streaming and batch data
• 5 years of experience designing and implementing operational, production-grade, large-scale data pipelines, ETL/ELT, and data integration solutions
• Hands-on experience with cloud platforms such as AWS and Google Cloud
• Proficiency with Kafka, AWS S3, SQS, Lambda, Pub/Sub, AWS DMS, Glue
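The core of the role is moving batch and streaming data into Snowflake, as listed above. As a rough illustration only, the sketch below shows two common batch load paths in Python using the snowflake-connector-python package; the connection settings, warehouse/database/schema names, stage @RAW_STAGE, and table ORDERS_RAW are hypothetical placeholders, not details from this posting.

```python
# Minimal batch-ingestion sketch: load a pandas DataFrame into Snowflake and
# run a COPY INTO from an external stage. All object names below are
# hypothetical placeholders.
import os

import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="LOAD_WH",   # hypothetical warehouse
    database="ANALYTICS",  # hypothetical database
    schema="RAW",          # hypothetical schema
)

# Path 1: small batch from memory -- write a DataFrame straight to an
# existing table.
df = pd.DataFrame({"order_id": [1, 2], "amount": [19.99, 5.00]})
success, _, nrows, _ = write_pandas(conn, df, "ORDERS_RAW")
print(f"write_pandas ok={success}, rows={nrows}")

# Path 2: larger batch already staged in cloud storage -- COPY INTO from an
# external stage. Snowpipe would run an equivalent COPY automatically as new
# files arrive on the stage.
conn.cursor().execute(
    """
    COPY INTO ORDERS_RAW
    FROM @RAW_STAGE/orders/
    FILE_FORMAT = (TYPE = 'CSV' SKIP_HEADER = 1)
    """
)
conn.close()
```

In practice the COPY path scales better for large files, while write_pandas is convenient for small, ad-hoc loads from notebooks.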
Preferred - Good to Have
• Familiarity with API security frameworks, token management, and user access control, including OAuth, JWT, etc. (see the sketch after this list)
• Background in healthcare data, especially patient-centric clinical data and provider data, is a plus
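For the API-security preference above, here is a minimal token-handling sketch using the PyJWT package; the secret, issuer, and user ID are illustrative only, and a real OAuth setup would typically verify tokens against the provider's published keys rather than a shared secret.

```python
# Minimal JWT issue/verify sketch with PyJWT (pip install PyJWT).
# SECRET, the issuer, and the user ID are hypothetical placeholders.
import jwt

SECRET = "replace-me"  # hypothetical shared signing secret

def issue_token(user_id: str) -> str:
    """Create a signed access token carrying the user's identity."""
    return jwt.encode({"sub": user_id, "iss": "example-api"}, SECRET, algorithm="HS256")

def verify_token(token: str) -> dict:
    """Decode and verify the token; raises jwt.InvalidTokenError on failure."""
    return jwt.decode(token, SECRET, algorithms=["HS256"], issuer="example-api")

if __name__ == "__main__":
    token = issue_token("user-123")
    print(verify_token(token)["sub"])  # -> user-123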






