

Data Engineer II
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer II in Seattle, WA, for 8 months at a negotiable pay rate. Key skills include data modeling, ETL pipeline development, and AWS technologies. Requires 5 years of experience and proficiency in SQL and Python.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
June 13, 2025
🕒 - Project duration
More than 6 months
-
🏝️ - Location type
On-site
-
📄 - Contract type
Fixed Term
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Seattle, WA
-
🧠 - Skills detailed
#Data Engineering #Java #Spark (Apache Spark) #Lambda (AWS Lambda) #Programming #Airflow #Cloud #AWS (Amazon Web Services) #RDBMS (Relational Database Management System) #Redshift #Data Processing #Apache Airflow #Scala #SSIS (SQL Server Integration Services) #Big Data #Databases #Data Quality #Data Warehouse #Python #Leadership #Snowflake #SQL (Structured Query Language) #ETL (Extract, Transform, Load) #Hadoop #S3 (Amazon Simple Storage Service) #Informatica #Deployment #DataStage #Automation #Data Modeling #Scripting #HiveQL #ODI (Oracle Data Integrator) #Data Access
Role description
Overview
TekWissen is a global workforce management provider headquartered in Ann Arbor, Michigan, that offers strategic talent solutions to clients worldwide.
Position: Data Engineer II
Location: Seattle, WA 98121
Duration: 8 Months
Job Type: Contract
Work Type: Onsite
Job Description:
• Client is a premium streaming service that offers customers a vast collection of TV shows and movies - all with the ease of finding what they love to watch in one place.
• We offer customers thousands of popular movies and TV shows from Originals and Exclusive content to exciting live sports events.
• We also offer our members the opportunity to subscribe to add-on channels, which they can cancel at any time, and to rent or buy new release movies and TV box sets on the Client Store.
• Client is a fast-paced growth business, available in over 240 countries and territories worldwide.
• The team works in a dynamic environment where innovating on behalf of our customers is at the heart of everything we do.
• If this sounds exciting to you, please read on.
• The team presents opportunities to work on very large data sets in one of the world's largest and most complex data warehouse environments.
• Our data warehouse is built on AWS cloud technologies such as Redshift, Kinesis, Lambda, S3, and MWAA, performing ETL processing over multiple terabytes of relational data in a matter of hours (a minimal pipeline sketch follows this list).
• Our team is serious about great design and about redefining best practices with a cloud-based approach to scale, resilience, and automation.
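To make the stack above concrete, here is a minimal, hypothetical sketch of the kind of MWAA (Managed Workflows for Apache Airflow) pipeline such an environment implies: a daily COPY of event files from S3 into Redshift. The DAG id, bucket, schema, table, and connection names are placeholders, not details from the posting.

```python
# A minimal, hypothetical MWAA DAG: copy one day of event files from S3 into Redshift.
# All names below are invented for illustration.
from datetime import datetime

from airflow import DAG
from airflow.providers.amazon.aws.transfers.s3_to_redshift import S3ToRedshiftOperator

with DAG(
    dag_id="daily_events_to_redshift",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load_events = S3ToRedshiftOperator(
        task_id="load_events",
        schema="analytics",                    # placeholder schema
        table="stream_events",                 # placeholder table
        s3_bucket="example-events-bucket",     # placeholder bucket
        s3_key="events/{{ ds }}/",             # one S3 prefix per execution date
        redshift_conn_id="redshift_default",
        copy_options=["FORMAT AS PARQUET"],    # Redshift COPY loads in parallel across slices
    )
```

At multi-terabyte scale, the relevant design choice is that COPY fans the load out across Redshift slices, which is why a few hours can suffice for volumes that would overwhelm a row-at-a-time insert path.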
Key Job Responsibilities
• You'll solve data warehousing problems at massive scale, applying cloud-based AWS services to challenges in big data processing, data warehouse design, self-service data access, automated data quality detection (a sketch of such a check follows this list), and building infrastructure as code.
• You'll be part of the team that focuses on automation and optimization for all areas of DW/ETL maintenance and deployment.
• You'll work closely with global business partners and technical teams on many non-standard and unique business problems, using creative problem solving to deliver data products that underpin the Client's strategic decision making, from content selection to on-platform customer experience.
• You'll develop efficient systems and tools to process data, using technologies that can scale to seasonal spikes and easily accommodate future growth. Your work will have a direct impact on day-to-day decision making across the Client.
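As an illustration of what "automated data quality detection" can mean in this kind of pipeline, here is a hedged Python sketch of a post-load gate: each check is a SQL assertion that returns true on failure, and any failure aborts the run. The table, column, and check names are hypothetical.

```python
# Hypothetical post-load data-quality gate. Redshift speaks the PostgreSQL
# wire protocol, so psycopg2 works as the client library.
import psycopg2

# Each query returns TRUE when the named problem is present.
CHECKS = {
    "empty load":     "SELECT COUNT(*) = 0 FROM analytics.stream_events WHERE event_date = %(ds)s",
    "null event ids": "SELECT COUNT(*) > 0 FROM analytics.stream_events WHERE event_id IS NULL",
}

def run_quality_checks(conn, ds: str) -> None:
    """Raise ValueError if any check fires for the given load date."""
    with conn.cursor() as cur:
        for name, sql in CHECKS.items():
            cur.execute(sql, {"ds": ds})
            (failed,) = cur.fetchone()
            if failed:
                raise ValueError(f"Data quality check failed: {name}")
```

In practice a gate like this would run as its own task downstream of the load, so a bad day's data never reaches consumers.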
Basic Qualifications
• Experience with data modeling, warehousing and building ETL pipelines
• Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
• Experience with one or more scripting languages (e.g., Python, KornShell)
• Experience with SQL
• Experience with big data technologies such as Spark and EMR (a brief PySpark sketch follows these lists)
Preferred Qualifications
• Experience with big data technologies such as Hadoop and Hive
• Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage
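Since Spark on EMR appears in both lists, here is a small, hypothetical PySpark sketch of the kind of large-scale aggregation this work involves. The S3 paths and column names are invented for the example.

```python
# Hypothetical PySpark job: aggregate per-customer daily watch time from
# raw event files and write a partitioned, curated dataset back to S3.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_watch_time").getOrCreate()

events = spark.read.parquet("s3://example-events-bucket/events/")  # placeholder input

daily = (
    events
    .groupBy("customer_id", "event_date")
    .agg(F.sum("watch_seconds").alias("watch_seconds"))  # viewing time per customer per day
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/daily_watch_time/"      # placeholder output
)
```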
Typical Day in the Role
• Extension? Potentially
• Conversion? No
Training / Ramp-up
• Candidates should be ready to do the job on Day 1 or at least within a week.
Compelling Story & Candidate Value Proposition
Why the role is interesting:
• Opportunity to learn a wide range of technologies
• Work with cutting-edge AWS technologies
• Experience with MWAA (Managed Workflows for Apache Airflow)
• High visibility across a much wider group at the Client
• Work impacts 300+ teams across Client.
Candidate Requirements:
• Years of experience: 5
Basic Qualifications
• Experience with data modeling, warehousing and building ETL pipelines
• Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
• Experience with one or more scripting languages (e.g., Python, KornShell)
• Experience with SQL
• Experience as a Data Engineer or in a similar role
Preferred Qualifications
• Experience with big data technologies such as: Hadoop, Hive, Spark, EMR
• Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage
• Prior experience as a Client employee is a plus
Leadership Principles
• Deliver results
• Bias for action
• Invent and simplify
Top 3 Must-have Hard Skills
• Experience with data modeling, warehousing, and building ETL pipelines
• Programming/scripting: Python or Java
• Database experience (detailed below)
Database Experience:
• Redshift or Snowflake specifically mentioned as ideal
• Experience with an MPP (Massively Parallel Processing) RDBMS
• Able to write SQL on top of these databases (an illustrative sketch follows this list)
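To make the MPP point concrete, here is an illustrative, hypothetical Redshift-style DDL, held in a Python string so it could be issued from a pipeline task. Choosing a distribution key and a sort key is what distinguishes MPP table design from a single-node RDBMS; all names here are invented.

```python
# Illustrative Redshift-style DDL with MPP-aware physical design.
# Table and column names are placeholders.
CREATE_EVENTS = """
CREATE TABLE analytics.stream_events (
    event_id      BIGINT       NOT NULL,
    customer_id   BIGINT       NOT NULL,
    event_type    VARCHAR(64),
    watch_seconds INTEGER,
    event_date    DATE         NOT NULL
)
DISTKEY (customer_id)   -- co-locate each customer's rows on one slice to reduce shuffle in joins
SORTKEY (event_date);   -- date-range predicates skip irrelevant blocks entirely
"""
```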
TekWissen® Group is an equal opportunity employer supporting workforce diversity.