Intellectt Inc

Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer in Renton, WA, on a contract basis, requiring 3+ years of experience in data engineering with a focus on automotive data. Key skills include Confluent Kafka, AWS S3, Snowflake, SQL, and Python.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
520
-
πŸ—“οΈ - Date
February 20, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Renton, WA
-
🧠 - Skills detailed
#Programming #AWS (Amazon Web Services) #API (Application Programming Interface) #S3 (Amazon Simple Storage Service) #Visualization #Data Quality #Computer Science #ETL (Extract, Transform, Load) #Data Processing #Scala #Automation #AWS S3 (Amazon Simple Storage Service) #GCP (Google Cloud Platform) #Storage #Data Pipeline #Snowflake #Dataflow #Tableau #SQL (Structured Query Language) #Python #Kafka (Apache Kafka) #Data Engineering #Batch #Data Modeling #Cloud
Role description
Job Title: Data Engineer
Location: Renton, WA
Contract role
Need: Automotive/vehicle motors background

Role Overview:
Note that candidates' LinkedIn profiles should match their resumes, as the customer may review them. The Data Engineer will play a critical role in building scalable, reliable data pipelines to support real-time and batch processing workflows. You will work closely with cross-functional teams to integrate multiple data sources, build Operational Data Stores and transformations, and enable timely data availability for reporting and analytics through dashboards (see the illustrative pipeline sketch after the qualifications below).

Required Technologies & Skills:
Event Streaming: Confluent Kafka (proficiency), Kafka Connectors
API Management: Apigee (proficiency)
Cloud Storage & Data Warehousing: AWS S3, Snowflake
Data Processing: Google Dataflow
Programming: SQL, Python (proficiency)
Batch & Real-Time Pipeline Development
Data Visualization Support: Tableau (basic understanding for data publishing)
Experience building Operational Data Stores (ODS) and data transformation pipelines in Snowflake
Familiarity with truck industry aftersales or automotive service and repair data is a plus

Qualifications:
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field.
3+ years of proven experience in data engineering, especially with streaming and batch data pipelines.
Hands-on experience with the Kafka ecosystem (Confluent Kafka, Connectors) and cloud data platforms (Snowflake, AWS).
Skilled in Python programming for data processing and automation.
Experience with Google Cloud Platform services, especially Google Dataflow, is highly desirable.
Strong understanding of data modeling, ETL/ELT processes, and data quality principles.
Ability to work collaboratively in cross-functional teams and communicate technical concepts effectively.
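
Illustrative pipeline sketch:
To give a concrete sense of the kind of work this role involves, below is a minimal Python sketch of a micro-batch pipeline that consumes JSON events from a Confluent Kafka topic and loads them into a Snowflake ODS table. It is not part of the posting; the topic name, table name, schema, and all connection settings are placeholder assumptions.

```python
# Minimal illustrative sketch (assumptions, not from the posting): consume JSON
# events from a Confluent Kafka topic and micro-batch insert them into a
# hypothetical Snowflake ODS table. All names and credentials are placeholders.
import json

from confluent_kafka import Consumer
import snowflake.connector

BATCH_SIZE = 500  # flush to Snowflake every 500 events (arbitrary choice)

# Kafka consumer against a hypothetical Confluent Cloud cluster
consumer = Consumer({
    "bootstrap.servers": "pkc-xxxxx.us-west-2.aws.confluent.cloud:9092",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanisms": "PLAIN",
    "sasl.username": "<API_KEY>",       # placeholder credentials
    "sasl.password": "<API_SECRET>",
    "group.id": "ods-loader",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["vehicle-telemetry"])  # hypothetical topic name

# Snowflake connection to a hypothetical ODS schema
conn = snowflake.connector.connect(
    account="<ACCOUNT>", user="<USER>", password="<PASSWORD>",
    warehouse="ETL_WH", database="ODS", schema="AUTOMOTIVE",
)

def flush(rows):
    """Insert a batch of (vehicle_id, event_ts, payload) rows into the ODS table."""
    if not rows:
        return
    with conn.cursor() as cur:
        cur.executemany(
            "INSERT INTO VEHICLE_EVENTS (VEHICLE_ID, EVENT_TS, PAYLOAD) "
            "VALUES (%s, %s, %s)",
            rows,
        )
    conn.commit()

buffer = []
try:
    while True:
        msg = consumer.poll(1.0)           # wait up to 1s for the next event
        if msg is None:
            continue
        if msg.error():
            print(f"Kafka error: {msg.error()}")
            continue
        event = json.loads(msg.value())    # assume JSON-encoded event payloads
        buffer.append((event.get("vehicle_id"),
                       event.get("event_ts"),
                       json.dumps(event)))
        if len(buffer) >= BATCH_SIZE:
            flush(buffer)
            buffer.clear()
finally:
    flush(buffer)       # load any remaining buffered events
    consumer.close()
    conn.close()
```

In a production pipeline of this kind, offset commits would typically be tied to successful Snowflake loads (for example, disabling enable.auto.commit and calling consumer.commit() after each flush) so that events are not lost if a batch insert fails.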