

Senior Data Engineer | W2 Only | Local Candidates Only
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with a contract length of "unknown" and a pay rate of "$XX/hour." Local candidates only. Key skills include Python, PySpark, Databricks, cloud experience, and agile methodologies. Requires 7+ years in data/software engineering and a Bachelor's in computer science.
Country
United States
Currency
$ USD
Day rate
-
Date discovered
August 20, 2025
Project duration
Unknown
Location type
Unknown
Contract type
W2 Contractor
Security clearance
Unknown
Location detailed
Seattle, WA
Skills detailed
#PySpark #Scrum #Databases #Spark (Apache Spark) #Databricks #NoSQL #Spring Boot #Agile #Data Processing #Python #Data Engineering #Computer Science #Docker #GIT #Kafka (Apache Kafka) #Kubernetes #Monitoring #Cloud #SQL (Structured Query Language) #Java #Automated Testing #Automation #Datasets #ETL (Extract, Transform, Load) #Data Quality #Version Control #Containers #Data Pipeline #Scala
Role description
Our client is seeking a top-tier Senior Data Engineer to join their engineering team.
The team supports various mid-layer services for the organization and external partners, and is seeking a strong Data Engineer with cloud experience to work with customer data.
The ideal candidate will have strong experience with Python, PySpark, Databricks, cloud platforms, containerization, data streaming, and Agile/Scrum.
Required Skills -
Data Engineering with Python & PySpark, Databricks, Containers, Cloud, Automated testing, Agile
Job Duties -
Be an active participant in all scrum ceremonies and take a leading role in delivering the strategic roadmap.
Take responsibility for the full application development lifecycle and ongoing support.
Provide subject matter expertise, direction, and guidance on complex engagements.
Collaborate with the Architecture, Product, Development, and other Information Technology (IT) teams. Initiate process improvements for new and existing systems.
Design, build, and maintain scalable and efficient data pipelines using Python and PySpark.
Develop and optimize ETL/ELT processes to ingest, transform, and load large datasets from various sources.
Implement data quality checks and monitoring to ensure the accuracy and integrity of our data (see the sketch below).
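To illustrate the kind of pipeline work these duties describe, here is a minimal PySpark sketch of an ETL step with a simple data quality gate. The file paths, column names, and the 5% null threshold are illustrative assumptions, not details from this posting.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Illustrative sketch only: paths, columns, and threshold are assumptions.
spark = SparkSession.builder.appName("customer-etl-sketch").getOrCreate()

# Extract: read a raw customer dataset (hypothetical path and layout).
raw = spark.read.option("header", True).csv("/data/raw/customers.csv")

# Transform: normalize the email field and de-duplicate on a key column.
cleaned = (
    raw.withColumn("email", F.lower(F.trim(F.col("email"))))
       .dropDuplicates(["customer_id"])
)

# Data quality check: fail the run if too many rows are missing an email.
total = cleaned.count()
missing = cleaned.filter(F.col("email").isNull()).count()
if total > 0 and missing / total > 0.05:  # 5% threshold is an assumption
    raise ValueError(f"Data quality check failed: {missing}/{total} rows missing email")

# Load: write the curated dataset for downstream consumers, then clean up.
cleaned.write.mode("overwrite").parquet("/data/curated/customers")
spark.stop()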
Job Requirements -
7+ years of Data/Software Engineering experience
5+ years of hands-on development experience in Python and PySpark for large-scale data processing
4+ years of experience with containers (Docker) and orchestration (e.g., Kubernetes, Amazon EKS, or Red Hat OpenShift)
4+ years of experience with test automation
5+ years of experience working with SQL and NoSQL databases
Experience with version control systems, such as Git
Work experience in complex enterprise application delivery using agile development methodology
Bachelor's in Computer Science or equivalent work experience
Desired Skills & Experience -
Java
Spring Boot
Kafka
Cassandra
Elasticsearch