TechLink Resources, Inc

Sr. Data Engineer (Snowpark, Python, SQL, Azure) - No H-1B or C2C

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer in Los Angeles, CA (on-site), with an unknown contract length and a day rate of $720. Key skills include Python, SQL, Snowflake, and Azure Data Factory. Requires 3-5+ years in data engineering and experience with cloud data migrations.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
720
-
πŸ—“οΈ - Date
May 5, 2026
πŸ•’ - Duration
Unknown
-
🏝️ - Location
On-site
-
πŸ“„ - Contract
Corp-to-Corp (C2C)
-
πŸ”’ - Security
Unknown
-
πŸ“ - Location detailed
Los Angeles, CA
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #ETL (Extract, Transform, Load) #Cloud #Azure #Data Transformations #ADF (Azure Data Factory) #Data Engineering #Azure Cloud #Scrum #Data Pipeline #Storage #Debugging #Data Governance #Data Integration #Snowpark #Data Quality #Azure Data Factory #Migration #Python #Scala #Agile #Data Migration #Security #AWS (Amazon Web Services) #Snowflake #Leadership #Data Processing #SQL (Structured Query Language)
Role description
PLEASE READ THE JOB DESCRIPTION CAREFULLY. THIS IS AN ON-SITE LOS ANGELES POSITION. MUST BE ABLE TO INTERVIEW IN PERSON. NO H-1B AND NO C2C.

Senior Data Engineer (Snowflake / Python / AI-Assisted Development)

About the Role
We're looking for a Senior Data Engineer to help build and modernize enterprise data pipelines in a cloud-based environment. This role is highly hands-on and focused on execution and delivery, working closely with architects and team leads to implement scalable data solutions. You'll play a key role in migrating existing data pipelines from Azure Data Factory into Snowflake (Snowpark) while leveraging modern AI-assisted development tools (like Cursor and Copilot) to improve speed, quality, and efficiency.

What You'll Be Doing
• Build, refactor, and maintain scalable data pipelines for enterprise data systems
• Migrate existing ADF pipelines into Snowflake Snowpark solutions (a minimal sketch of this pattern appears at the end of this posting)
• Design and implement data transformations for reporting, analytics, and KPIs
• Work heavily with Snowflake (SQL + Snowpark) for data processing and storage
• Develop and optimize Python-based data workflows
• Integrate data from multiple sources and ensure high data quality
• Collaborate with engineering leadership to execute on defined requirements
• Leverage AI tools (Cursor, Copilot, etc.) for coding, debugging, and optimization
• Ensure solutions meet performance, security, and data governance standards

What We're Looking For
• Strong hands-on coder (not just design/architecture)
• Deep experience with Python + SQL for data engineering
• Proven experience working with Snowflake in production
• Experience with Azure Data Factory and translating pipelines into code
• Someone who has done cloud data migrations (Azure → Snowflake preferred)
• Comfortable working in Agile/Scrum environments
• Familiar with using AI coding tools to move faster and write better code

Basic Qualifications
• 3-5+ years in Data Engineering / Data Integration
• Advanced experience with Python and SQL (complex transformations + tuning)
• Strong experience with Snowflake
• Experience with Azure Data Factory (ADF)
• Experience working in Agile teams
• Experience using AI-assisted development tools (Cursor, Copilot, etc.)

Nice to Have
• Experience with Snowflake Snowpark, Tasks, and Streams
• Exposure to AWS or Azure cloud environments
• Experience with CI/CD pipelines for data engineering
• Background in large-scale data migration projects

Why This Role Stands Out
• High-impact work modernizing enterprise data systems
• Exposure to cutting-edge AI-assisted development practices
• Strong engineering culture focused on delivery and quality
• Opportunity to work on large-scale cloud data transformations
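For context on the Snowpark work described above, here is a minimal sketch of the kind of ADF-style aggregation that might be rebuilt as a Snowpark Python job. This is illustrative only: the table names, column names, and connection details are hypothetical placeholders and are not part of the actual project.

```python
# Minimal sketch: an ADF-style aggregation rebuilt as a Snowpark Python job.
# Requires the snowflake-snowpark-python package; all names below are hypothetical.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col, sum as sum_, avg

# Connection parameters would normally come from a secrets manager or config store.
connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "RAW",
    "schema": "SALES",
}

def build_daily_kpis(session: Session) -> None:
    # Read the source table that an ADF copy activity might have landed.
    orders = session.table("RAW.SALES.ORDERS")

    # Transform: keep completed orders and aggregate KPIs per region and day.
    kpis = (
        orders.filter(col("ORDER_STATUS") == "COMPLETED")
        .group_by("REGION", "ORDER_DATE")
        .agg(
            sum_("ORDER_AMOUNT").alias("TOTAL_REVENUE"),
            avg("ORDER_AMOUNT").alias("AVG_ORDER_VALUE"),
        )
    )

    # Write the result to a reporting table, replacing the previous load.
    kpis.write.mode("overwrite").save_as_table("ANALYTICS.REPORTING.DAILY_SALES_KPIS")

if __name__ == "__main__":
    session = Session.builder.configs(connection_parameters).create()
    try:
        build_daily_kpis(session)
    finally:
        session.close()
```

In production, logic like this would typically be scheduled and orchestrated, for example via Snowflake Tasks and Streams or a CI/CD pipeline, in line with the "Nice to Have" items above.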