

Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Frisco, TX, on a 6-month contract at an unspecified pay rate. Key skills include AWS, S3, PySpark, Snowflake, and StreamSets. The role requires 10-15 years of data engineering experience and a relevant degree.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 25, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Frisco, TX
Skills detailed: #Data Engineering #SnowSQL #Data Pipeline #Cloud #AWS (Amazon Web Services) #Data Migration #StreamSets #Databases #Documentation #Complex Queries #Spark (Apache Spark) #Computer Science #Scripting #Data Processing #Scala #Leadership #Data Architecture #S3 (Amazon Simple Storage Service) #Compliance #PySpark #Programming #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Snowpark #Migration #Snowflake #Data Transformations #Data Ingestion #Data Security #Security #Python
Role description
Role: Data Engineer
Location: Frisco, TX (100% on-site)
Mode: Contract (6 months)
Mandate: AWS, S3, PySpark, Python, end-to-end Snowflake, StreamSets pipelines
Job Overview:
We are seeking a highly skilled Lead Data Engineer to design, build, and maintain scalable data solutions on the Snowflake platform. The ideal candidate will have strong hands-on experience with StreamSets for data ingestion; expertise in Snowflake, including stored procedures, SnowSQL, Snowpark, and job scheduling; and proficiency in SQL and Python (especially PySpark). Experience with performance tuning, end-to-end solution design, and data migration projects is essential.
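To give a flavor of the Snowflake-plus-Python work this overview describes, here is a minimal Snowpark sketch; the connection parameters and the table names RAW.ORDERS and ANALYTICS.OPEN_ORDERS are hypothetical placeholders, not details of this engagement.

```python
# Minimal Snowpark sketch: read a table, filter it, and persist the result.
# All connection parameters and table names are hypothetical placeholders.
from snowflake.snowpark import Session
from snowflake.snowpark.functions import col

connection_parameters = {
    "account": "<account_identifier>",
    "user": "<user>",
    "password": "<password>",
    "warehouse": "<warehouse>",
    "database": "<database>",
    "schema": "<schema>",
}

session = Session.builder.configs(connection_parameters).create()

# Lazily reference a source table, apply a transformation, and write it back.
orders = session.table("RAW.ORDERS")
open_orders = orders.filter(col("STATUS") == "OPEN")
open_orders.write.mode("overwrite").save_as_table("ANALYTICS.OPEN_ORDERS")

session.close()
```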
Key Responsibilities:
• Design and Develop Data Pipelines: Build and maintain robust, scalable StreamSets jobs to ingest and load data into Snowflake databases from various sources.
• Snowflake Development: Develop, optimize, and maintain Snowflake stored procedures, functions, and scripts using SnowSQL.
• Job Scheduling: Set up and manage job scheduling within Snowflake for automated data loads and other processes (see the scheduling sketch after this list).
• Performance Tuning: Analyze, optimize, and tune data pipelines and Snowflake queries for optimal performance.
• Solution Architecture: Design end-to-end data solutions, ensuring security, scalability, and best practices are followed across all data flows.
• Data Migration: Plan and execute complex data migration projects, including transformation and validation, from legacy or external systems to Snowflake.
• Programming: Write clean, efficient, and reusable SQL and Python code, with a focus on data processing using PySpark and Snowpark.
• Collaboration: Work closely with data architects, analysts, and other engineers to understand business requirements and translate them into technical solutions.
• Documentation: Prepare and maintain clear documentation for all processes, pipelines, and architecture.
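For the job-scheduling responsibility above, here is a hedged sketch of one common approach: creating a Snowflake TASK through the Python connector. The warehouse (LOAD_WH), task (NIGHTLY_LOAD), and procedure (LOAD_DAILY_SALES) names are hypothetical placeholders.

```python
# Sketch: schedule a nightly data load inside Snowflake with a TASK.
# Warehouse, task, and procedure names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    warehouse="<warehouse>",
    database="<database>",
    schema="<schema>",
)
cur = conn.cursor()
try:
    # A task runs a SQL statement (here, a stored-procedure call) on a CRON schedule.
    cur.execute("""
        CREATE OR REPLACE TASK NIGHTLY_LOAD
          WAREHOUSE = LOAD_WH
          SCHEDULE = 'USING CRON 0 2 * * * America/Chicago'
        AS
          CALL LOAD_DAILY_SALES()
    """)
    # Tasks are created suspended; resuming is what activates the schedule.
    cur.execute("ALTER TASK NIGHTLY_LOAD RESUME")
finally:
    cur.close()
    conn.close()
```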
Required Skills & Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.
• 10 to 15 years of experience in data engineering roles.
• Hands-on expertise in building StreamSets pipelines for data ingestion and transformation.
• Advanced knowledge of Snowflake, including stored procedures, user-defined functions, and SnowSQL scripting.
• Strong experience in job scheduling and orchestration within Snowflake.
• Deep proficiency in SQL for complex queries, data transformations, and performance tuning.
• Strong programming skills in Python, especially in building and optimizing data pipelines with PySpark (a minimal sketch follows this list).
• End-to-end solution design and architecture experience on Snowflake.
• Proven experience in large-scale data migration projects.
• Excellent problem-solving, communication, and leadership skills.
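To make the PySpark expectation concrete, here is a minimal pipeline sketch under assumed inputs; the S3 paths and column names (event_ts, customer_id, amount) are hypothetical, and the repartition call is shown only as one common output-tuning lever.

```python
# Minimal PySpark pipeline sketch: ingest, aggregate, and write out.
# File paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_sales_rollup").getOrCreate()

# Read raw events (schema handling kept simple for the sketch).
events = spark.read.option("header", True).csv("s3://<bucket>/raw/events/")

# Aggregate revenue per customer per day.
daily = (
    events
    .withColumn("event_date", F.to_date(F.col("event_ts")))
    .groupBy("customer_id", "event_date")
    .agg(F.sum("amount").alias("revenue"))
)

# One common tuning lever: control output file count before writing.
daily.repartition(8).write.mode("overwrite").parquet("s3://<bucket>/curated/daily_sales/")

spark.stop()
```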
Preferred Qualifications:
• Experience with cloud platforms (AWS) and their integration with Snowflake.
• Familiarity with other ETL tools and orchestration frameworks.
• Knowledge of data security, governance, and compliance standards.
Thanks,
Harinath M
Harinathm@vbeyond.com