

Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Beaverton, OR, on an 8+ month contract. It requires a Bachelor's degree and 10 years of experience, including 4-6 years with Databricks, AWS, Apache Spark, Python, and PySpark. Hybrid schedule, 4:1.
Country
United States
Currency
$ USD
Day rate
560
Date discovered
August 30, 2025
Project duration
More than 6 months
Location type
Hybrid
Contract type
Unknown
Security clearance
Unknown
Location detailed
Beaverton, OR
Skills detailed
#Apache Spark #Data Engineering #Python #AWS (Amazon Web Services) #Databricks #Spark (Apache Spark) #ETL (Extract, Transform, Load) #PySpark
Role description
Data Engineer
Beaverton, OR
8+ months
Typically requires a Bachelor's degree and a minimum of 10 years of directly relevant experience, including comprehensive experience as a business/process leader or industry expert. Note: one of the following alternatives may be accepted: PhD or Law degree + 8 years; Master's degree + 9 years; Associate's degree + 11 years; High school diploma + 12 years.
Comments for Suppliers:
Must be onsite at WHQ in Beaverton for Hybrid schedule, 4:1.
Data engineers build and maintain systems that collect, manage, and transform data into usable information. They work with large amounts of data from various sources and ensure that data is accessible and flows smoothly to its destination.
Must have 4-6 years of experience with:
Databricks
AWS
Apache Spark
Python
PySpark
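The "collect, transform, and deliver" flow described above can be sketched as a toy extract/transform/load pipeline. This is purely illustrative and not from the posting: all names and data below are hypothetical, and it uses plain Python for self-containedness, whereas a pipeline at this level of seniority would typically run as PySpark on Databricks/AWS.

```python
# Toy ETL sketch (hypothetical data; illustrative only).
import csv
import io

RAW_CSV = """order_id,region,amount
1,OR,120.50
2,WA,89.99
3,OR,15.00
"""

def extract(source: str) -> list[dict]:
    """Collect raw records from a CSV source."""
    return list(csv.DictReader(io.StringIO(source)))

def transform(rows: list[dict]) -> dict[str, float]:
    """Aggregate amounts per region, turning raw rows into usable information."""
    totals: dict[str, float] = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + float(row["amount"])
    return totals

def load(totals: dict[str, float]) -> list[str]:
    """Format the result for its destination (here, printable lines)."""
    return [f"{region}: {total:.2f}" for region, total in sorted(totals.items())]

if __name__ == "__main__":
    print("\n".join(load(transform(extract(RAW_CSV)))))
```

In PySpark the same shape would be a `spark.read.csv(...)` extract, a `groupBy("region").sum("amount")` transform, and a `write` to the destination, distributed across a cluster rather than held in memory.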