W2 Candidates & Locals Only (No C2C) :: Sr. Data Engineer/Lead in Austin, TX or Seattle, WA (Hybrid Role)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr. Data Engineer/Lead in Austin, TX or Seattle, WA (Hybrid); the contract length and pay rate are unknown. It requires 5+ years of development experience, expertise in AWS and Python/Java, and experience with data pipelines and ETL processes.
🌎 - Country
United States
πŸ’± - Currency
$ USD
πŸ’° - Day rate
Unknown
πŸ—“οΈ - Date discovered
June 3, 2025
πŸ•’ - Project duration
Unknown
🏝️ - Location type
Hybrid
πŸ“„ - Contract type
W2 (no Corp-to-Corp)
πŸ”’ - Security clearance
Unknown
πŸ“ - Location detailed
Austin, TX or Seattle, WA
🧠 - Skills detailed
#Spark (Apache Spark) #Data Processing #AWS EC2 (Amazon Elastic Compute Cloud) #Java #Batch #AWS (Amazon Web Services) #Big Data #Kafka (Apache Kafka) #S3 (Amazon Simple Storage Service) #User Stories #Programming #Data Ingestion #Redshift #HDFS (Hadoop Distributed File System) #Data Modeling #Automation #Storage #Data Pipeline #Database Design #ETL (Extract, Transform, Load) #Hadoop #Databases #SQL (Structured Query Language) #Agile #Data Framework #Python #Lambda (AWS Lambda) #DynamoDB #Data Engineering #SQS (Simple Queue Service) #NoSQL #Unit Testing
Role description
Responsibilities:
• Develop various facets of data capture, data processing, storage, and distribution.
• Understand and apply AWS standard methodologies and products (compute, storage, databases).
• Translate marketing concepts and requirements into functional specifications.
• Write clean, maintainable, and well-tested code.
• Propose new approaches and contribute to the system architecture.
• Manage ETL of data between the client's entities and third-party solutions.

Skills Required:
• 5+ years of development experience, particularly using marketing acquisition technologies to automate multiple channels and drive operational efficiencies.
• 4+ years of experience with programming languages such as Python or the Java stack.
• Experience building data pipelines from multiple sources (APIs, CSV files, event streams, NoSQL stores, etc.) using distributed data frameworks.
• Experience across data systems, including database design, data ingestion, data modeling, unit testing, performance optimization, and SQL.
• Demonstrable history of building on and leveraging AWS.
• Experience in batch and/or stream processing (using Spark) and with streaming systems/queues such as Kafka or SQS; a minimal sketch of such a pipeline follows this list.
• Daily practice of agile methods, including sprints, backlogs, and user stories.
• Experience with the AWS ecosystem and other big data technologies, such as EC2, S3, Redshift, Batch, and AppFlow.
• AWS: EC2, S3, Lambda, DynamoDB; also Cassandra and SQL.
• Hadoop, Hive, HDFS, Spark, and other big data technologies.
• Understand, analyze, design, develop, and implement RESTful services and APIs.
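For candidates gauging fit, here is a minimal sketch of the kind of Spark-plus-Kafka pipeline work the posting describes: a Structured Streaming job that reads marketing events from a Kafka topic and lands them in S3 as Parquet. It assumes a PySpark environment with the spark-sql-kafka connector available; the broker address, topic, bucket, and field names are illustrative placeholders, not details from the posting.

```python
# Illustrative sketch only: broker, topic, bucket, and field names are placeholders.
# Requires the org.apache.spark:spark-sql-kafka-0-10 package on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("marketing-event-ingest").getOrCreate()

# Expected shape of the incoming JSON events (hypothetical schema).
event_schema = StructType([
    StructField("campaign_id", StringType()),
    StructField("channel", StringType()),
    StructField("event_time", TimestampType()),
])

# Read the raw event stream from Kafka.
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "marketing-events")
    .load()
)

# Parse the JSON payload and keep only well-formed records.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .where(col("campaign_id").isNotNull())
)

# Land parsed events in S3 as Parquet; the checkpoint lets the job restart without data loss.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://example-bucket/marketing_events/")
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/marketing_events/")
    .start()
)
query.awaitTermination()
```

Writing Parquet with a checkpoint location is the standard route to fault-tolerant file sinks in Structured Streaming; a batch variant of the same job would simply swap readStream/writeStream for read/write over an S3 or HDFS source.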