Sharp Decisions

Senior Data Architect

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Architect in Marysville, OH, on a contract of unspecified duration. The position is W2 only and open to local candidates, paying $640 per day. Requires 8-10+ years in Data Engineering, AWS expertise, and strong skills in Python, SQL, and ETL processes.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
640
🗓️ - Date
January 13, 2026
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
W2 Contractor
🔒 - Security
Unknown
📍 - Location detailed
Marysville, OH
🧠 - Skills detailed
#Data Architecture #SAP #GitHub #RDS (Amazon Relational Database Service) #SQL (Structured Query Language) #Datasets #Informatica #Deployment #Cloud #Compliance #SQL Server #Data Modeling #API (Application Programming Interface) #Data Management #Python #AWS (Amazon Web Services) #Physical Data Model #Monitoring #DMS (Data Migration Service) #Athena #BI (Business Intelligence) #Metadata #IAM (Identity and Access Management) #Scala #EC2 #GIT #Spark (Apache Spark) #Lambda (AWS Lambda) #Spark SQL #PySpark #DevOps #Data Quality #Automation #Data Governance #Redshift #S3 (Amazon Simple Storage Service) #Security #ETL (Extract, Transform, Load) #Data Engineering
Role description
A client of Sharp Decisions Inc. is looking for a Senior Data Architect to be based in Marysville, OH. The position is an on-site contract role with a possible extension. W2 and local candidates only.

Title: Senior Data Architect

Job Description: The Senior Data Architect designs, builds, and maintains scalable data solutions to enable advanced analytics and business intelligence across Honda's enterprise.

What will this person be working on:
• Design and implement ETL pipelines using AWS services (Glue, EMR, DMS, S3, Redshift); a PySpark sketch of this kind of work follows this description.
• Orchestrate workflows with AWS Step Functions, EventBridge, and Lambda.
• Integrate CI/CD pipelines with GitHub and AWS CDK for automated deployments; a CDK sketch follows this description.
• Develop conceptual, logical, and physical data models for operational and analytical systems.
• Optimize queries, normalize datasets, and apply performance-tuning techniques.
• Use Python, PySpark, and SQL for data transformation and automation.
• Monitor pipeline performance using CloudWatch and Glue job logs.
• Troubleshoot and resolve data quality and performance issues proactively.

Minimum Experience:
• 8-10+ years in Data Engineering or related roles.
• Proven track record in AWS-based data solutions and orchestration.
• Integration with ERP systems (SAP, homegrown ERP systems).
• API-based data exchange between manufacturing and supply chain legacy applications and AWS pipelines.
• Metadata management for compliance attributes.
• Audit trails and reporting for compliance verification.
• Expertise in cloud services to design, build, and maintain data-driven solutions.
• Skilled in Data Architecture and Data Engineering with a strong background in the supply chain domain.
• Experienced in data modeling (conceptual, logical, and physical), ETL optimization, query optimization, and performance tuning.

Technical Skills:
• Languages: Python, PySpark, SQL
• AWS Services: Glue, EMR, EC2, Lambda, DMS, S3, Redshift, RDS
• Data Governance: Informatica CDGC/CDQ
• DevOps Tools: Git, GitHub, AWS CDK
• Security: IAM, encryption policies
• Monitoring: CloudWatch, Glue Catalog, Athena
• Strong integration background with DB2, UDB, SQL Server, etc.

Soft Skills:
• Strong communication and collaboration skills.
• Ability to work with cross-functional teams and stakeholders.
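For candidates gauging fit, here is a minimal sketch of the kind of Glue/PySpark ETL pipeline the responsibilities describe: reading from a Glue Catalog table, applying basic data-quality transformations, and writing partitioned Parquet to S3. The database, table, and bucket names are hypothetical, not taken from the posting.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Resolve standard Glue job arguments (JOB_NAME is supplied by the Glue runtime).
args = getResolvedOptions(sys.argv, ["JOB_NAME"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw records from a Glue Catalog table (database and table names are hypothetical).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="supply_chain_raw", table_name="shipments"
).toDF()

# Basic data-quality filtering and transformation in PySpark.
cleaned = (
    raw.dropDuplicates(["shipment_id"])
       .filter(F.col("shipped_at").isNotNull())
       .withColumn("load_date", F.current_date())
)

# Write curated output to S3 as partitioned Parquet, queryable via Athena or
# Redshift Spectrum (bucket name is hypothetical).
cleaned.write.mode("overwrite").partitionBy("load_date").parquet(
    "s3://example-curated-bucket/shipments/"
)

job.commit()
```

A job like this would typically be monitored through CloudWatch and the Glue job logs, matching the monitoring duties listed above.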
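Likewise, a minimal AWS CDK (Python) sketch of the Step Functions and Lambda orchestration the role calls for, deployable through a GitHub-integrated CI/CD pipeline. The Glue job name and the Lambda asset path are hypothetical placeholders.

```python
from aws_cdk import (
    App, Stack, Duration,
    aws_lambda as lambda_,
    aws_stepfunctions as sfn,
    aws_stepfunctions_tasks as tasks,
)
from constructs import Construct

class PipelineOrchestrationStack(Stack):
    """Sketch: a Step Functions workflow that runs a Glue job, then a Lambda."""

    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Lambda that notifies downstream consumers (handler code is assumed
        # to live in ./lambda relative to the CDK app).
        trigger_fn = lambda_.Function(
            self, "TriggerFn",
            runtime=lambda_.Runtime.PYTHON_3_12,
            handler="index.handler",
            code=lambda_.Code.from_asset("lambda"),
            timeout=Duration.minutes(1),
        )

        # Run an existing Glue job (name is hypothetical) as a synchronous task.
        run_etl = tasks.GlueStartJobRun(
            self, "RunEtl",
            glue_job_name="shipments-curation-job",
            integration_pattern=sfn.IntegrationPattern.RUN_JOB,
        )

        notify = tasks.LambdaInvoke(self, "NotifyDownstream", lambda_function=trigger_fn)

        # Chain the steps into a state machine: ETL first, then notification.
        sfn.StateMachine(
            self, "EtlStateMachine",
            definition_body=sfn.DefinitionBody.from_chainable(run_etl.next(notify)),
        )

app = App()
PipelineOrchestrationStack(app, "PipelineOrchestrationStack")
app.synth()
```

In practice, `cdk deploy` would run from a GitHub-driven pipeline, which is the automated-deployment pattern the responsibilities mention.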