

AWS Big Data Engineer with AI/ML
Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Big Data Engineer with AI/ML in Seattle, WA, for a long-term contract. Requires 10+ years in data architecture, expertise in AWS Glue, Redshift, and Apache Iceberg, and proficiency in Python, Scala, or Java.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 24, 2025
Project duration: Unknown
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Seattle, WA
Skills detailed: #Big Data #SageMaker #Data Engineering #AI (Artificial Intelligence) #Snowflake #Java #AWS (Amazon Web Services) #IAM (Identity and Access Management) #Redshift #ML (Machine Learning) #Data Pipeline #Batch #Spark (Apache Spark) #Python #Leadership #Data Lake #ETL (Extract, Transform, Load) #Databricks #AWS Glue #Scala #Data Architecture #Kafka (Apache Kafka) #Apache Iceberg
Role description
Title: AWS Big Data Engineer with AI/ML
Location: Seattle, WA - Onsite
Duration: Long term
1. Key Result Areas and Activities:
• Seeking a Data Engineer with 10+ years of hands-on data architecture experience designing and managing large-scale data lakes and warehouses.
• Solid understanding and hands-on experience with Apache Iceberg and AWS Glue (a minimal ETL sketch follows this list).
• Deep experience with AWS services such as Lake Formation, IAM, and Redshift.
• Proficiency in Python, Scala, or Java for ETL and custom tooling is essential.
• Proven expertise in complex schema and ERD design, maintaining data dictionaries, and developing real-time/batch data pipelines with tools such as Kafka or Spark (see the streaming sketch after the Essential Skills list).
• Familiarity with AI/ML integration, leadership ability, and strong problem-solving skills.
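As a concrete illustration of the Iceberg/Glue/PySpark stack named above, here is a minimal sketch of a batch ETL job that lands raw JSON in an Apache Iceberg table, roughly as it might run on AWS Glue. It is not from the posting: the bucket, catalog, and table names are hypothetical placeholders, and it assumes the Iceberg Spark runtime is on the classpath (on Glue, Iceberg support is enabled through job configuration).

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical names throughout; assumes the iceberg-spark-runtime jar
# is available to the Spark session.
spark = (
    SparkSession.builder.appName("orders-etl")
    # Register an Iceberg catalog named "lake" backed by an S3 warehouse path.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

# Read raw JSON landed in the data lake (path is illustrative).
raw = spark.read.json("s3://example-bucket/raw/orders/")

# Light transformation: parse the timestamp and derive a partition column.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Create (or replace) the Iceberg table, partitioned by day.
(
    orders.writeTo("lake.analytics.orders")
    .partitionedBy(F.col("order_date"))
    .createOrReplace()
)
```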
2. Work and Technical Experience:
Essential Skills:
• Big Data technologies
• AWS experience: AWS Glue, Redshift, SageMaker
• Databricks
• Snowflake
• Building large-scale data pipelines
• Experience across the data and ML lifecycle
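The real-time side of the pipeline skills above (Kafka, Spark) could look like the following hedged sketch: a Spark Structured Streaming job that consumes JSON events from a Kafka topic and writes micro-batches to the lake. The broker address, topic name, schema, and S3 paths are all hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Hypothetical event schema for illustration.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")  # requires the spark-sql-kafka package
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .load()
    # Kafka delivers raw bytes; decode and parse the JSON payload into columns.
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Land micro-batches as Parquet; the checkpoint gives exactly-once file output.
query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/stream/events/")
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```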