

Senior Data Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is a Senior Data Engineer contract-to-hire position based in Bellevue, WA, offering $75-$85/hr. It requires 5+ years in data processing and cloud services (Azure/AWS), plus strong coding skills in C#/Java/Python. Familiarity with ETL and distributed systems is preferred.
Country
United States
Currency
$ USD
Day rate
680
Date discovered
September 30, 2025
Project duration
Unknown
Location type
Hybrid
Contract type
W2 Contractor
Security clearance
Unknown
Location detailed
Bellevue, WA
Skills detailed
#Distributed Computing #Data Mining #Monitoring #C# #Data Processing #GCP (Google Cloud Platform) #AWS (Amazon Web Services) #Computer Science #Data Pipeline #SaaS (Software as a Service) #Apache Kafka #Storage #REST API #Web Services #API (Application Programming Interface) #Databricks #Data Engineering #Kafka (Apache Kafka) #REST (Representational State Transfer) #Apache Spark #Python #ETL (Extract, Transform, Load) #Spark (Apache Spark) #Big Data #C++ #GraphQL #Azure Databricks #Azure #ML (Machine Learning) #Leadership #Data Storage #Agile #Data Science #Microsoft Azure #Debugging #Java #Cloud
Role description
Red Oak Technologies is a leading provider of comprehensive resourcing solutions across a variety of industries and sectors including IT, Marketing, Finance, Business Operations, Manufacturing and Engineering.
Our client is a financial services company. They are looking for a Senior Data Engineer to join their team. This role is a CONTRACT TO HIRE position. Position will be HYBRID. Candidates must be able to commute to Bellevue, WA.
• Hybrid Role – 3 days per week in office
• Contract to Hire
• $75/hr - $85/hr on W2
Position Summary:
The Senior Data Engineer is a hands-on engineer who works from design through implementation of large-scale, data-centric systems for the company platform. This is a thought-leadership role in the Data Domain across the organization, with the expectation that the candidate will demonstrate and propagate best practices and processes in software development. The candidate is expected to drive work forward independently with minimal supervision.
Position Duties:
• Design, code, test, and ship features supporting large-scale data processing pipelines for our multi-cloud SaaS platform, with good quality, maintainability, and end-to-end ownership (see the sketch after this list).
• Define and leverage data models to understand cost drivers and create concrete action plans that address platform concerns around data.
• Develop highly efficient code that scales to thousands of active users while minimizing operational costs.
• Work across all aspects of high-throughput, distributed, multi-tenant services on Azure, AWS, and/or GCP.
• Collaborate with team members to plan, architect, implement, and deliver new features and experiences.
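To make the pipeline duty above concrete, here is a minimal sketch of a batch aggregation job of the kind described, assuming PySpark; the storage paths, column names, and table layout are hypothetical and not taken from the client's actual platform.

```python
# Illustrative only: a minimal PySpark batch job of the kind this role would
# own end to end. Paths, column names, and schema below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-event-rollup").getOrCreate()

# Read raw events from cloud object storage (hypothetical path).
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Transform: drop malformed rows, then aggregate per tenant per day,
# the kind of cost-driver rollup the duties above describe.
daily_counts = (
    events
    .filter(F.col("event_ts").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("tenant_id", "event_date")
    .agg(F.count("*").alias("event_count"))
)

# Write a partitioned, queryable output for downstream consumers.
daily_counts.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_event_counts/"
)

spark.stop()
```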
Requirements:
• 5+ years of experience building and shipping production-grade software systems or services, with one or more of the following: distributed systems, large-scale data processing, data storage, information retrieval and/or data mining, machine learning fundamentals.
• BS/MS in Computer Science or equivalent industry experience.
• Experience building and operating online services and fault-tolerant distributed systems at internet scale.
• Demonstrable experience shipping software and internet-scale services using GraphQL/REST APIs on Microsoft Azure and/or Amazon Web Services (AWS).
• Experience writing code in C#, Java, C++, or Python using agile and test-driven development (TDD).
• 3+ years in cloud service development on Azure or AWS.
• Experience building cloud-scale infrastructure components.
• Awareness of, passion for, and experience with cloud-scale distributed design and patterns.
• Familiarity with secure software design concepts.
• Proven track record of delivering projects that include multiple components.
• Demonstrated problem-solving and debugging skills.
• Data-driven approach to solving problems iteratively and measuring success.
• Commitment to collaboration and teamwork, and the ability to deliver through influence.
Preferred Requirements:
• Excellent verbal and written communication skills (to engage with both technical and non-technical stakeholders at all levels).
• Familiarity with Extract, Transform, Load (ETL) pipelines, data modeling, and data engineering; past ML experience is a plus.
• Experience with Azure Databricks and/or Microsoft Fabric is an added plus.
• Hands-on experience using distributed computing platforms such as Apache Spark, Apache Flink, Apache Kafka, or Azure Event Hubs (see the sketch after this list).
• Familiarity with big data, data science/analytics, and large-scale distributed systems.
• Experience building data pipelines, collecting metrics/logs, and moving data to storage for specific needs (service monitoring, business monitoring, etc.).
• Experience working with Big Data or ML platform tooling.
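As an illustration of the streaming-platform experience listed above, here is a minimal sketch of a Spark Structured Streaming job consuming an Apache Kafka topic, again assuming Python/PySpark; the broker address, topic name, and storage paths are hypothetical, and the cluster is assumed to have the spark-sql-kafka connector package available.

```python
# Illustrative only: a minimal Spark Structured Streaming job that consumes
# an Apache Kafka topic and lands the records in cloud storage. Broker
# address, topic name, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("metrics-ingest").getOrCreate()

# Subscribe to a Kafka topic; Spark tracks offsets via the checkpoint below.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "service-metrics")
    .load()
)

# Kafka delivers raw bytes; decode the payload and keep the broker timestamp.
decoded = raw.select(
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp").alias("ingest_ts"),
)

# Append to storage; the checkpoint makes the file sink fault tolerant.
query = (
    decoded.writeStream.format("parquet")
    .option("path", "s3://example-bucket/metrics/raw/")
    .option("checkpointLocation", "s3://example-bucket/metrics/_checkpoints/")
    .outputMode("append")
    .start()
)
query.awaitTermination()
```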
Red Oak Technologies is made up of people from a wide variety of backgrounds and lifestyles. We embrace diversity and invite applications from people of all walks of life. See what it's like to be at the top; connect with one of our recruiters and apply today.
Red Oak Tech: Quality | Talent | Integrity
Note: Compensation rates are based on years of experience and/or level of skills relevant to the opportunity.