AJNA INFOTECH

AWS Data Architect - 15+ Years Experience Mandatory

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Architect requiring 15+ years of experience. The contract is onsite in Torrance, CA. Key skills include AWS services, data modeling, ETL development, and proficiency in Python and SQL.
🌎 - Country
United States
💱 - Currency
Unknown
💰 - Day rate
Unknown
πŸ—“οΈ - Date
October 17, 2025
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
πŸ“ - Location detailed
Torrance, CA
🧠 - Skills detailed
#Storage #Databases #Data Cleansing #Programming #PySpark #Conceptual Data Model #Data Ingestion #Data Engineering #Data Modeling #Informatica #Data Lineage #IICS (Informatica Intelligent Cloud Services) #Scala #SQL (Structured Query Language) #Physical Data Model #Data Quality #Data Architecture #Redshift #Lambda (AWS Lambda) #Normalization #Logical Data Model #Spark (Apache Spark) #Data Pipeline #Snowflake #Data Migration #Classification #Python #S3 (Amazon Simple Storage Service) #Batch #AWS Glue #Cloud #AWS (Amazon Web Services) #Data Lake #ETL (Extract, Transform, Load) #Data Governance #Data Processing #Migration #Athena
Role description
Job Description
Need 15+ years of experience. Please do not send profiles with 10 or fewer years of experience. Read the job description carefully before applying.
Role: AWS Data Architect
Location: Torrance, CA - Onsite
Type: Contract
Key Skills
The role requires specific skills in developing the CDM (Conceptual Data Model), LDM (Logical Data Model), and PDM (Physical Data Model), along with data modeling, integration, data migration, and ETL development on the AWS platform.
Key Responsibilities
• Architect and implement a scalable data hub solution on AWS using best practices for data ingestion, transformation, storage, and access control
• Define data models, data lineage, and data quality standards for the DataHub
• Select appropriate AWS services (S3, Glue, Redshift, Athena, Lambda) based on data volume, access patterns, and performance requirements
• Design and build data pipelines to extract, transform, and load data from various sources (databases, APIs, flat files) into the DataHub using AWS Glue, AWS Batch, or custom ETL processes; a minimal sketch of such a pipeline follows this description
• Implement data cleansing and normalization techniques to ensure data quality
• Manage data ingestion schedules and error-handling mechanisms
Required Skills and Experience
• AWS Expertise: Deep understanding of AWS data services, including S3, Glue, Redshift, Athena, Lake Formation, Step Functions, CloudWatch, and EventBridge
• Data Modeling: Proficiency in designing dimensional and snowflake data models for data warehousing and data lakes
• Data Engineering Skills: Experience with ETL/ELT processes, data cleansing, data transformation, and data quality checks; experience with Informatica IICS and ICDQ is a plus
• Programming Languages: Proficiency in Python, SQL, and potentially PySpark for data processing and manipulation
• Data Governance: Knowledge of data governance best practices, including data classification, access control, and data lineage tracking
Additional Information
All your information will be kept confidential according to EEO guidelines.
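As a rough illustration of the pipeline work described above, here is a minimal sketch of an AWS Glue PySpark job that ingests raw CSV files from S3, applies light cleansing, and loads the result into Redshift. All bucket names, table names, and connection identifiers are hypothetical placeholders; a production job would add the error handling and scheduling the responsibilities above call for.

```python
# Minimal AWS Glue job sketch: S3 ingest -> cleanse -> Redshift load.
# Every name below (buckets, tables, connections) is a hypothetical placeholder.
import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Ingest: read a raw CSV drop from S3.
raw = glue_context.create_dynamic_frame.from_options(
    connection_type="s3",
    connection_options={"paths": ["s3://example-datahub-raw/orders/"]},
    format="csv",
    format_options={"withHeader": True},
)

# Transform: rename/retype columns to match the target physical model.
mapped = ApplyMapping.apply(
    frame=raw,
    mappings=[
        ("order_id", "string", "order_id", "string"),
        ("order_total", "string", "order_total", "double"),
        ("order_date", "string", "order_date", "date"),
    ],
)

# Cleanse: drop rows missing key fields (a simple data quality gate).
cleaned = mapped.toDF().dropna(subset=["order_id", "order_total"])

# Load: write into Redshift through a preconfigured Glue catalog connection.
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=DynamicFrame.fromDF(cleaned, glue_context, "cleaned"),
    catalog_connection="example-redshift-connection",
    connection_options={"dbtable": "datahub.orders", "database": "analytics"},
    redshift_tmp_dir="s3://example-datahub-tmp/redshift/",
)
job.commit()
```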