
Enterprise Data Architect
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Enterprise Data Architect on a 7-month contract, offering remote work at a competitive pay rate. Key skills include AWS, SQL, NoSQL, ETL, and big data technologies. Experience with data governance and architecture in enterprise settings is required.
🌎 - Country
United States
💱 - Currency
Unknown
💰 - Day rate
Unknown
🗓️ - Date discovered
May 19, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Remote
🧠 - Skills detailed
#Data Processing #Database Design #Data Lake #Data Architecture #Scala #ETL (Extract, Transform, Load) #NoSQL #AWS Glue #Agile #Monitoring #IAM (Identity and Access Management) #Java #Lambda (AWS Lambda) #Data Governance #SQL (Structured Query Language) #Data Modeling #Indexing #Databases #Azure #GDPR (General Data Protection Regulation) #Spark (Apache Spark) #S3 (Amazon Simple Storage Service) #Athena #Amazon Redshift #Big Data #Data Pipeline #Data Management #Aurora #Compliance #Data Catalog #Data Lakehouse #Storage #AWS IAM (AWS Identity and Access Management) #Data Quality #AWS Lambda #Security #Data Warehouse #AWS (Amazon Web Services) #Redshift
Role description
Job Overview
We are seeking an experienced and innovative Enterprise Data Architect to join our dynamic team. In this role, you will be responsible for designing and implementing robust data architectures that support our enterprise-level data management strategies. The ideal candidate will have a strong background in big data technologies, database design, and a deep understanding of both SQL and NoSQL databases. You will work closely with cross-functional teams to ensure that our data solutions align with business objectives and enhance decision-making processes.
Duties
Develop and maintain enterprise data architecture frameworks that support business goals.
Collaborate with stakeholders to gather requirements and translate them into technical specifications.
Design and implement scalable data models using both SQL and NoSQL databases.
Utilize big data technologies such as Azure, Spark, and Java to optimize data processing workflows.
Ensure data quality, integrity, and security across all platforms.
Participate in Agile development processes to deliver timely solutions.
Provide guidance on best practices for database design and management.
Conduct performance tuning and optimization of existing databases.
Stay current with industry trends and emerging technologies related to data architecture.
Requirements
Proven experience as a Data Architect or similar role in enterprise environments.
Strong AWS knowledge required, particularly warehouse/lakehouse services (S3, Redshift, Data Cookbook).
Needed mid-to-late May for at least 7 months, across multiple phases.
Architecture & Design
Design and implement AWS data lakehouse architecture.
Build out data lake and data warehouse capabilities using services such as Amazon S3, Amazon Aurora, Amazon Redshift, and Amazon Athena.
Implement data governance frameworks, including data cataloging (Data Cookbook experience preferred), access controls (AWS IAM, Lake Formation), and compliance with regulations (e.g., GDPR, CCPA).
ETL: Design and lead development of ETL pipelines to ingest structured, semi-structured, and unstructured data from various sources into the lakehouse using Mulesoft. Experience with products such as AWS Glue, AWS Data Pipeline, or AWS Lambda is also beneficial.
Data Modeling: Create and maintain data models (e.g., star schemas, dimensional models) optimized for analytics and reporting, ensuring scalability and performance.
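To make the star-schema expectation concrete, here is a minimal sketch in plain Python of how a fact table joins to a dimension for aggregation; the table and column names (fact_sales, dim_product) are illustrative assumptions, not part of the posting.

```python
# Hypothetical star-schema sketch: a fact table keyed to a dimension
# by a surrogate key, aggregated by a dimension attribute.

fact_sales = [  # fact table: one row per sale
    {"sale_id": 1, "product_key": 10, "amount": 250.0},
    {"sale_id": 2, "product_key": 20, "amount": 99.5},
    {"sale_id": 3, "product_key": 10, "amount": 175.0},
]

dim_product = {  # product dimension, keyed by surrogate key
    10: {"name": "Widget", "category": "Hardware"},
    20: {"name": "Gizmo", "category": "Electronics"},
}

def sales_by_category(facts, products):
    """Aggregate fact rows by a dimension attribute (category)."""
    totals = {}
    for row in facts:
        category = products[row["product_key"]]["category"]
        totals[category] = totals.get(category, 0.0) + row["amount"]
    return totals

print(sales_by_category(fact_sales, dim_product))
# {'Hardware': 425.0, 'Electronics': 99.5}
```

In a real lakehouse the same shape would be expressed as Redshift or Athena tables, with the dimension join resolved in SQL rather than Python.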
Optimization: Optimize query performance and storage costs using partitioning, indexing, and caching strategies in AWS services.
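As a sketch of the partitioning strategy mentioned above: S3 data lakes typically use Hive-style key=value prefixes so engines like Athena can prune partitions rather than scan every object. The bucket and table names below are hypothetical.

```python
from datetime import date

# Hypothetical sketch of Hive-style S3 partitioning and partition pruning.
# Bucket/table names are illustrative only.

def partition_prefix(bucket: str, table: str, d: date) -> str:
    """Build an S3 prefix partitioned by year/month/day."""
    return f"s3://{bucket}/{table}/year={d.year}/month={d.month:02d}/day={d.day:02d}/"

def prune(prefixes, year: int, month: int):
    """Keep only prefixes matching the requested partition values."""
    needle = f"/year={year}/month={month:02d}/"
    return [p for p in prefixes if needle in p]

all_parts = [partition_prefix("analytics-lake", "sales", date(2025, m, 1))
             for m in (3, 4, 5)]
print(prune(all_parts, 2025, 5))  # only the May partition survives
```

A query filtered on the partition columns then touches one prefix instead of three, which is where both the performance and storage-cost savings come from.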
Establish monitoring and alerting around performance.
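The alerting duty can be sketched in the style of a CloudWatch alarm evaluation: fire only when several consecutive datapoints breach a threshold, to avoid alerting on a single spike. The metric and numbers here are assumptions for illustration.

```python
# Hypothetical sketch of consecutive-breach alerting, modeled loosely on
# how CloudWatch alarms evaluate N datapoints against a threshold.

def alarm_state(datapoints, threshold: float, periods: int = 3) -> str:
    """Return 'ALARM' if the last `periods` datapoints all exceed threshold."""
    recent = datapoints[-periods:]
    if len(recent) == periods and all(v > threshold for v in recent):
        return "ALARM"
    return "OK"

query_latency_ms = [120, 150, 900, 950, 1100]  # e.g. p95 query latency
print(alarm_state(query_latency_ms, threshold=800))  # 'ALARM'
```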
Job Type: Contract
Schedule:
8-hour shift
Monday to Friday
Work Location: Remote