

Intellectt Inc
Data Engineer
Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Renton, WA, on a contract basis, requiring 3+ years of experience in data engineering with a focus on automotive data. Key skills include Confluent Kafka, AWS S3, Snowflake, SQL, and Python.
Country
United States
Currency
$ USD
-
Day rate
520
-
Date
February 20, 2026
Duration
Unknown
-
Location
On-site
-
Contract
Unknown
-
Security
Unknown
-
Location detailed
Renton, WA
-
Skills detailed
#Programming #AWS (Amazon Web Services) #API (Application Programming Interface) #S3 (Amazon Simple Storage Service) #Visualization #Data Quality #Computer Science #ETL (Extract, Transform, Load) #Data Processing #Scala #Automation #AWS S3 (Amazon Simple Storage Service) #GCP (Google Cloud Platform) #Storage #Data Pipeline #Snowflake #Dataflow #Tableau #SQL (Structured Query Language) #Python #Kafka (Apache Kafka) #Data Engineering #Batch #Data Modeling #Cloud
Role description
Job Title: Data Engineer
Location: Renton, WA
Contract
Automotive/vehicle motors background required
Role Overview:
It is important that candidates' LinkedIn profiles match their resumes, as the customer may review their profiles.
The Data Engineer will play a critical role in building scalable, reliable data pipelines to support real-time and batch processing workflows. You will work closely with cross-functional teams to integrate multiple data sources, build Operational Data Stores and transformations, and enable timely data availability for reporting and analytics through dashboards.
Required Technologies & Skills:
Event Streaming: Confluent Kafka (proficiency), Kafka Connectors
API Management: Apigee (proficiency)
Cloud Storage & Data Warehousing: AWS S3, Snowflake
Data Processing: Google Dataflow
Programming: SQL, Python (proficiency)
Batch & Real-Time Pipeline Development
Data Visualization Support: Tableau (basic understanding for data publishing)
Experience building Operational Data Stores (ODS) and data transformation pipelines in Snowflake
Familiarity with truck industry aftersales or automotive service and repair data is a plus
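For a sense of how this stack typically fits together, the sketch below shows one possible streaming-to-landing flow: a Confluent Kafka consumer (confluent-kafka Python client) that micro-batches events into AWS S3 via boto3 for later loading into Snowflake. This is an illustrative sketch only; the broker address, topic, consumer group, and bucket names are placeholders and not taken from the role description.

```python
# Minimal sketch, assuming a Kafka topic "vehicle-telemetry" and an S3 bucket
# "example-ods-landing" -- both names are hypothetical placeholders.
import json

import boto3
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # assumption: local broker for illustration
    "group.id": "ods-landing",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["vehicle-telemetry"])

s3 = boto3.client("s3")
batch = []

try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            continue  # skip errored messages in this sketch
        batch.append(json.loads(msg.value()))
        if len(batch) >= 500:
            # Land a micro-batch in S3 as newline-delimited JSON,
            # ready to be copied into a Snowflake staging table.
            key = f"landing/vehicle-telemetry/{msg.offset()}.jsonl"
            body = "\n".join(json.dumps(record) for record in batch)
            s3.put_object(Bucket="example-ods-landing", Key=key, Body=body.encode("utf-8"))
            batch.clear()
finally:
    consumer.close()
```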
Qualifications
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or related field.
3+ years of proven experience in data engineering, especially with streaming and batch data pipelines.
Hands-on experience with Kafka ecosystem (Confluent Kafka, Connectors) and cloud data platforms (Snowflake, AWS).
Skilled in Python programming for data processing and automation.
Experience with Google Cloud Platform services, especially Google Dataflow, is highly desirable.
Strong understanding of data modeling, ETL/ELT processes, and data quality principles.
Ability to work collaboratively in cross-functional teams and communicate technical concepts effectively.
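As a rough illustration of the Operational Data Store (ODS) and transformation work described above, the sketch below uses the snowflake-connector-python package to run a MERGE that upserts a staged micro-batch into an ODS table. All table, database, warehouse, and credential names are hypothetical; it assumes a staging table has already been loaded from S3 (for example via COPY INTO).

```python
# Minimal sketch of an ODS upsert in Snowflake; every identifier below is a placeholder.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",     # assumption: placeholder credentials
    user="example_user",
    password="example_password",
    warehouse="EXAMPLE_WH",
    database="EXAMPLE_DB",
    schema="ODS",
)

merge_sql = """
MERGE INTO ODS.VEHICLE_EVENTS AS tgt
USING ODS.VEHICLE_EVENTS_STG AS src
    ON tgt.EVENT_ID = src.EVENT_ID
WHEN MATCHED THEN UPDATE SET
    tgt.EVENT_TS = src.EVENT_TS,
    tgt.PAYLOAD  = src.PAYLOAD
WHEN NOT MATCHED THEN INSERT (EVENT_ID, EVENT_TS, PAYLOAD)
    VALUES (src.EVENT_ID, src.EVENT_TS, src.PAYLOAD)
"""

try:
    # Upsert the staged micro-batch into the Operational Data Store table.
    conn.cursor().execute(merge_sql)
finally:
    conn.close()
```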






