

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 10+ years of experience, based in Plano, TX. It requires SnowPro and Databricks certifications, proficiency in SQL and Spark, and expertise in Snowflake and Apache Spark for ETL/ELT pipeline development.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: July 29, 2025
Project duration: Unknown
Location type: On-site
Contract type: W2 Contractor
Security clearance: Unknown
Location detailed: Plano, TX
Skills detailed
#Data Science #MLflow #Delta Lake #RDBMS (Relational Database Management System) #Data Pipeline #Data Quality #Agile #Version Control #Java #Scrum #Kafka (Apache Kafka) #Data Lake #SQL Queries #AWS (Amazon Web Services) #Azure #dbt (data build tool) #Airflow #Data Processing #Python #Storage #Data Governance #Apache Spark #Data Modeling #Data Architecture #Spark (Apache Spark) #Scala #SnowPipe #Security #GitHub #Databricks #Distributed Computing #ETL (Extract, Transform, Load) #Monitoring #Data Engineering #Compliance #Cloud #PySpark #GIT #Documentation #GCP (Google Cloud Platform) #Schema Design #Snowflake #Unit Testing #SQL (Structured Query Language)
Role description
Role: Data Engineer - Snowflake & Apache Spark (Certified)
Location: Plano, TX
Experience Required: 10+ years
Job type: W2
Key Responsibilities:
• Design, develop, and maintain robust, scalable ETL/ELT pipelines using Apache Spark and Snowflake (a minimal sketch follows this list).
• Leverage Databricks for data processing, transformation, and analytics in distributed environments.
• Develop efficient SQL and Spark applications to process and analyze large volumes of data.
• Implement and maintain data warehousing solutions using Snowflake with best practices for performance, cost, and security.
• Collaborate with data scientists, analysts, and business stakeholders to meet data needs.
• Ensure data quality and integrity through unit testing, data validation, and monitoring.
• Optimize and troubleshoot Spark jobs, SQL queries, and Snowflake data workflows.
• Integrate with various data sources (cloud storage, APIs, RDBMS) and tools (Airflow, dbt, etc.).
• Apply data governance and compliance policies in data pipeline design and execution.
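For context, here is a minimal sketch of the kind of Spark-to-Snowflake ETL job described above, assuming PySpark with the Snowflake Spark connector available (as it is on Databricks). The storage path, table names, and connection values are hypothetical placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw JSON files landed in cloud storage (path is a placeholder)
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Transform: drop incomplete rows, type the timestamp, deduplicate
orders = (
    raw.filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .dropDuplicates(["order_id"])
)

# Load: write to a Snowflake staging table via the Spark-Snowflake connector
sf_options = {
    "sfURL": "example_account.snowflakecomputing.com",  # placeholder account
    "sfUser": "ETL_USER",
    "sfPassword": "...",  # use a secrets manager in practice
    "sfDatabase": "ANALYTICS",
    "sfSchema": "STAGING",
    "sfWarehouse": "ETL_WH",
}
(orders.write
    .format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "ORDERS_STG")
    .mode("overwrite")
    .save())

Writing to a staging table and merging downstream (see the Streams/Tasks sketch under Required Qualifications) keeps reloads idempotent.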
Required Qualifications:
• Certifications:
  • SnowPro Core / Advanced Certification (e.g., SnowPro Advanced: Architect or Data Engineer)
  • Databricks Certified Associate Developer for Apache Spark (latest version preferred)
• Experience:
  • 3+ years of experience working with Snowflake, including schema design, query optimization, and Snowpipe/Streams/Tasks (see the sketch after this list).
  • 2+ years of hands-on development with Apache Spark (PySpark, Scala, or Java) in Databricks or open-source environments.
  • Strong understanding of distributed computing, data lakes, and modern data architectures.
• Technical Skills:
  • Proficient in SQL, Spark (RDD/DataFrame APIs), and Python or Scala
  • Experience with cloud platforms (AWS, Azure, or GCP), especially integrating Snowflake and Databricks
  • Familiarity with data modeling, data quality, and orchestration tools (e.g., Airflow, Prefect)
  • Knowledge of CI/CD pipelines and version control (e.g., Git, GitHub Actions)
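As a rough illustration of the Streams/Tasks experience named above, here is a sketch that drives Snowflake change capture from Python using the snowflake-connector-python package. The account credentials, table, and column names are hypothetical.

import snowflake.connector

# Connection values are placeholders; credentials belong in a secrets manager
conn = snowflake.connector.connect(
    account="example_account",
    user="ETL_USER",
    password="...",
    database="ANALYTICS",
    schema="STAGING",
    warehouse="ETL_WH",
)

statements = [
    # A stream records row-level changes on the staging table
    "CREATE OR REPLACE STREAM ORDERS_STG_STREAM ON TABLE ORDERS_STG",
    # A task periodically merges those changes into the target table,
    # running only when the stream actually has data
    """
    CREATE OR REPLACE TASK MERGE_ORDERS
      WAREHOUSE = ETL_WH
      SCHEDULE = '5 MINUTE'
    WHEN SYSTEM$STREAM_HAS_DATA('ORDERS_STG_STREAM')
    AS
      MERGE INTO ORDERS t
      USING ORDERS_STG_STREAM s ON t.ORDER_ID = s.ORDER_ID
      WHEN MATCHED THEN UPDATE SET t.STATUS = s.STATUS
      WHEN NOT MATCHED THEN INSERT (ORDER_ID, STATUS) VALUES (s.ORDER_ID, s.STATUS)
    """,
    "ALTER TASK MERGE_ORDERS RESUME",  # tasks are created suspended
]
cur = conn.cursor()
try:
    for stmt in statements:
        cur.execute(stmt)
finally:
    cur.close()
    conn.close()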
Preferred Qualifications:
• Experience with Delta Lake, MLflow, and data governance frameworks
• Familiarity with real-time data streaming (Kafka, Spark Structured Streaming); see the sketch after this list
• Strong communication and documentation skills
• Experience working in Agile/Scrum teams
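For the streaming item above, a minimal sketch of Kafka ingestion with Spark Structured Streaming into Delta Lake, assuming the spark-sql-kafka package is on the classpath (it is built into Databricks). The broker, topic, event schema, and paths are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("orders_stream").getOrCreate()

# Assumed event shape; adjust to the real payload
schema = StructType([
    StructField("order_id", StringType()),
    StructField("status", StringType()),
    StructField("order_ts", TimestampType()),
])

# Read JSON events from a Kafka topic (broker and topic are placeholders)
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "orders")
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append to a Delta table; the checkpoint lets restarts resume cleanly
(events.writeStream
    .format("delta")
    .option("checkpointLocation", "/chk/orders")
    .outputMode("append")
    .start("/delta/orders")
    .awaitTermination())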