

Sr. Python Data Engineer - Onsite Interview
Featured Role | Apply direct with Data Freelance Hub
This role is for a Sr. Python Data Engineer in Seattle, WA, offering a 6-12 month contract at $70/hr. It requires 7+ years of data engineering experience and expertise in Python, PySpark, containers, and Agile methodology. Hybrid work; locals only.
Country: United States
Currency: $ USD
Day rate: $560 ($70/hr over an 8-hour day)
Date discovered: August 20, 2025
Project duration: More than 6 months
Location type: Hybrid
Contract type: 1099 Contractor
Security clearance: Unknown
Location detailed: Seattle, WA
Skills detailed: #PySpark #Scrum #Databases #Spark (Apache Spark) #Databricks #NoSQL #Spring Boot #Agile #Data Processing #Python #Data Engineering #Computer Science #Docker #Git #Kafka (Apache Kafka) #Kubernetes #Monitoring #Cloud #SQL (Structured Query Language) #Java #Automated Testing #Automation #Datasets #ETL (Extract, Transform, Load) #Data Quality #Version Control #Containers #Data Pipeline #Scala
Role description
Job Title: Senior Data Engineer
Location: Seattle, WA - Hybrid - Need Locals Only
Duration: 6-12 Months
Eligibility: USC, GC, GC-EAD and H4-EAD
C2C Rate: $70/hr on 1099/C2C
Mode of Interview: Onsite/Face-To-Face
Job Description:
Our client is seeking a top-tier Senior Data Engineer to join their engineering team.
The team supports various mid-layer services for the organization and for external partners, and is seeking a strong Data Engineer with cloud experience to work with customer data.
The ideal candidate will have strong experience in Python, PySpark, Databricks, cloud, containerization, data streaming, and Agile/Scrum.
Required Skills -
Data Engineering with Python & PySpark, Databricks, Containers, Cloud, Automated testing, Agile
Job Duties -
Be an active participant in all scrum ceremonies and help lead delivery of the strategic roadmap.
Take responsibility for the full application development lifecycle and its ongoing support.
Provide subject matter expertise and direction, guidance, and support on complex engagements.
Collaborate with the Architecture, Product, Development, and other Information Technology (IT) teams. Initiate process improvements for new and existing systems.
Design, build, and maintain scalable and efficient data pipelines using Python and PySpark.
Develop and optimize ETL/ELT processes to ingest, transform, and load large datasets from various sources.
Implement data quality checks and monitoring to ensure the accuracy and integrity of our data (a brief illustrative sketch follows this list).
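To make the pipeline and data quality duties above concrete, here is a minimal PySpark sketch covering ingest, transform, a simple quality gate, and a curated write. It is a hedged illustration only: the paths, dataset, and columns (order_id, amount, order_ts) are hypothetical and not taken from this posting.
```python
# Illustrative only: a minimal PySpark ingest -> transform -> quality-check ->
# load pipeline. Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl-sketch").getOrCreate()

# Ingest: read raw JSON from a placeholder location.
raw = spark.read.json("s3://example-bucket/orders_raw/")

# Transform: enforce types, derive a partition column, drop rows missing keys.
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_id").isNotNull())
)

# Data quality gate: fail the run rather than load bad records downstream.
negative = orders.filter(F.col("amount") < 0).count()
if negative:
    raise ValueError(f"Data quality check failed: {negative} negative amounts")

# Load: write curated Parquet, partitioned for downstream consumers.
orders.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/orders_curated/"
)
```
In practice the quality gate would often live in a dedicated framework (e.g., Delta Live Tables expectations or Great Expectations); a hard failure before the write is the simplest version of the same idea.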
Job Requirements -
7+ years of Data/Software Engineering experience
5+ years hands-on development experience in Python and PySpark for large-scale data processing
4+ years experience with containers (Docker, plus orchestration, e.g., Kubernetes, EKS, or Red Hat OpenShift)
4+ years experience with test automation (a brief pytest sketch follows this list)
5+ years experience working with SQL & NoSQL databases
Experience with version control systems, such as Git
Work experience in complex enterprise application delivery using agile development methodology
Bachelor's in Computer Science or equivalent work experience
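As a hedged illustration of the test-automation requirement, the sketch below unit-tests a small PySpark transform with pytest. The function normalize_amounts and its columns are hypothetical stand-ins, not anything specified by the role.
```python
# Illustrative only: unit-testing a hypothetical PySpark transform with pytest.
import pytest
from pyspark.sql import SparkSession, functions as F


def normalize_amounts(df):
    # Cast amount to double and drop rows with a null order_id.
    return (
        df.withColumn("amount", F.col("amount").cast("double"))
          .filter(F.col("order_id").isNotNull())
    )


@pytest.fixture(scope="session")
def spark():
    # Small local session so tests run fast and in isolation.
    session = (
        SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    )
    yield session
    session.stop()


def test_normalize_amounts_drops_null_ids_and_casts(spark):
    df = spark.createDataFrame(
        [("o1", "10.5"), (None, "3.0")], ["order_id", "amount"]
    )
    result = normalize_amounts(df)
    assert result.count() == 1
    assert result.first()["amount"] == 10.5
```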
Desired Skills & Experience -
The following would all be a plus:
Java
Spring Boot
Kafka
Cassandra
Elasticsearch