

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Seattle, WA, for 16 months, offering competitive pay. Requires 5+ years in Scala/Python, 3+ years in big data (Spark, Flink), and strong skills in data modeling and cloud technologies. Bachelor's degree required.
Country
United States
Currency
$ USD
Day rate
760
Date discovered
September 19, 2025
Project duration
More than 6 months
Location type
Unknown
Contract type
Unknown
Security clearance
Unknown
Location detailed
Seattle, WA
Skills detailed
#Lambda (AWS Lambda) #Airflow #Data Modeling #AWS (Amazon Web Services) #Python #Observability #Spring Cloud #Java #Scala #Spark (Apache Spark) #FastAPI #Cloud #Data Pipeline #Big Data #Databricks #Compliance #Terraform #Agile #Deployment #Snowflake #IAM (Identity and Access Management) #Programming #Data Governance #Data Engineering #Spring Boot #Kafka (Apache Kafka)
Role description
Location: Seattle, WA
Duration: 16 months
Description/Comment:
- 5+ years of professional programming experience in Scala, Python, etc.
- 3+ years of big data development experience with technical stacks such as Spark, Flink, Airflow, SingleStore, Kafka, and AWS big data technologies
- Deep understanding of data modeling, distributed systems, and performance optimization
- Knowledge of system and application design and architecture
- Experience building industry-grade highly available, scalable services
- Passion for technology and openness to interdisciplinary work
- Excellent communication and collaboration skills
Preferred Qualifications:
- Hands-on experience with Databricks for development and deployment of data pipelines.
- Experience with data governance, compliance, or observability tooling.
- Demonstrated ability with cloud infrastructure technologies, including Terraform, Kubernetes (K8s), Spinnaker, IAM, and ALB.
- Experience with Snowflake, Kinesis, Lambda, etc.
- Experience with microservice frameworks such as Spring Boot, Spring Cloud, FastAPI, or NestJS.
Required Education:
- Bachelor's Degree
Must Have Skills:
Scala/Java Development: Primary coding language is Scala (must be highly proficient). Strong Python or Java skills may be considered secondary.
Big Data: Hands-on experience with Spark is required (team also leverages Flink).
Programming Expertise: Solid background in object-oriented programming from a software engineering perspective.
Agile Practices: Familiarity with Agile development methodologies.