Jobs via Dice

Sr. Data Engineer / Dallas, Orlando, Chicago or NYC (hybrid) - 6 Months Contract

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Data Engineer with 5+ years of PySpark and 3+ years of Snowflake experience, focusing on AWS data pipelines. It's a 6+ month hybrid contract based in Dallas, Orlando, Chicago, or NYC, and an in-person interview in one of those locations is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 16, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Data Pipeline #AWS (Amazon Web Services) #IAM (Identity and Access Management) #Databases #PySpark #Spark (Apache Spark) #Spark SQL #Snowflake #Automation #Scala #Data Processing #SNS (Simple Notification Service) #Clustering #Data Architecture #Data Ingestion #Lambda (AWS Lambda) #SQL (Structured Query Language) #Triggers #Data Engineering #Complex Queries #Cloud #ETL (Extract, Transform, Load) #Datasets #S3 (Amazon Simple Storage Service) #SQS (Simple Queue Service) #SnowPipe
Role description
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Suncap Technology, is seeking the following. Apply via Dice today!

Title: Sr. Data Engineer
Location: Dallas, TX; Orlando, FL; Chicago, IL; or NYC
Duration: 6+ months

• Candidates must be local to Dallas, Orlando, Chicago, or NYC.
• The role is hybrid (2-3 days/week) and will relax to remote once established.
• An in-person interview in one of these locations will be required.

Required Qualifications
• 5+ years of experience with PySpark, including performance tuning, DataFrames, Spark SQL, and distributed data processing.
• 3+ years of hands-on experience with Snowflake, including Snowpipe, stages, tasks, streams, and performance optimization.
• Strong experience building data pipelines on AWS.
• Strong SQL skills with the ability to write optimized, complex queries.
• Solid understanding of ETL/ELT concepts, data warehousing, and modern data architecture.

Job Description: Data Engineer (PySpark + Snowflake, AWS)

Position Overview
We are seeking an experienced Data Engineer with strong PySpark skills and hands-on Snowflake expertise on the AWS platform. The ideal candidate has 5+ years of PySpark experience and 3+ years working with Snowflake, with a proven ability to build, optimize, and maintain large-scale data pipelines.

Key Responsibilities

Data Pipeline Engineering
• Design, build, and maintain high-performance ETL/ELT pipelines using PySpark on AWS (a minimal pipeline sketch follows this description).
• Develop automated ingestion, transformation, and validation workflows for large structured and semi-structured datasets.
• Optimize Spark jobs for performance, scalability, and cost efficiency.

Snowflake Development
• Build and manage data pipelines that load into Snowflake using PySpark, Snowpipe, and external stages (see the Snowflake sketch below).
• Create and maintain Snowflake objects, including:
  • Databases, schemas, tables
  • Virtual warehouses
  • Internal/external stages, file formats
  • Streams, Tasks, Dynamic Tables
• Implement Snowpipe for continuous or incremental ingestion.
• Apply Snowflake optimization techniques (clustering, micro-partitioning, query profiling, etc.).

AWS Integration
• Work with AWS services such as S3, IAM, Lambda, CloudWatch, and EventBridge for data ingestion and automation.
• Implement event-driven ingestion using SNS/SQS or other AWS-native triggers (see the Lambda sketch below).
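To make the pipeline-engineering responsibilities concrete, here is a minimal sketch of the kind of PySpark ETL job described: it reads semi-structured JSON from S3, applies basic validation and transformation, and writes partitioned Parquet back to S3. The bucket names, paths, and column names (order_id, order_ts) are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark ETL sketch; all paths and column names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-etl")  # hypothetical job name
    .getOrCreate()
)

# Ingest semi-structured JSON landed in S3.
raw = spark.read.json("s3://example-landing-bucket/orders/2025/11/")

# Basic validation and transformation: drop rows missing a key,
# normalize the timestamp, and derive a partition column.
clean = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Write partitioned Parquet back to S3 for downstream loading into Snowflake.
(clean.repartition("order_date")
      .write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))

spark.stop()
```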
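The Snowflake objects named above (file formats, external stages, Snowpipe) are created with plain SQL; the sketch below issues that DDL through the Snowflake Python connector. All object names, the s3_int storage integration, and the connection details are assumptions for illustration, not specifics from the listing.

```python
# Sketch of Snowflake ingestion objects (stage + Snowpipe) created via the
# Snowflake Python connector. Names and credentials are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",   # placeholder
    user="example_user",         # placeholder
    password="...",              # use a secrets manager in practice
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

ddl_statements = [
    # File format for the curated Parquet produced by the PySpark job above.
    "CREATE OR REPLACE FILE FORMAT parquet_fmt TYPE = PARQUET",
    # External stage over the curated S3 prefix; 's3_int' is an assumed
    # pre-existing storage integration.
    """
    CREATE OR REPLACE STAGE orders_stage
      URL = 's3://example-curated-bucket/orders/'
      STORAGE_INTEGRATION = s3_int
      FILE_FORMAT = parquet_fmt
    """,
    # Snowpipe with auto-ingest: Snowflake consumes S3 event notifications
    # and loads newly arrived files continuously.
    """
    CREATE OR REPLACE PIPE orders_pipe AUTO_INGEST = TRUE AS
      COPY INTO orders
      FROM @orders_stage
      FILE_FORMAT = (FORMAT_NAME = parquet_fmt)
      MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """,
]

cur = conn.cursor()
try:
    for stmt in ddl_statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()
```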
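For the event-driven ingestion item, a common AWS-native pattern is S3 ObjectCreated notifications delivered through SQS (or fanned out via SNS) to a Lambda function. The handler below assumes an SQS event source mapping with S3 notifications sent directly to the queue; the queue wiring and downstream action are hypothetical.

```python
# Sketch of an event-driven ingestion trigger: a Lambda handler consuming
# S3 ObjectCreated notifications delivered through SQS. All wiring is assumed.
import json

def handler(event, context):
    """Lambda entry point for an SQS event source mapping."""
    for sqs_record in event["Records"]:
        # Each SQS message body wraps an S3 event notification.
        # (If SNS sits between S3 and SQS, the body carries an SNS envelope
        # and the S3 event is inside its "Message" field.)
        s3_event = json.loads(sqs_record["body"])
        for s3_record in s3_event.get("Records", []):
            bucket = s3_record["s3"]["bucket"]["name"]
            key = s3_record["s3"]["object"]["key"]
            # The downstream action is pipeline-specific: e.g. submit a Spark
            # job, notify Snowpipe, or enqueue the file for batch processing.
            print(f"New object landed: s3://{bucket}/{key}")
```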