

iSpace, Inc.
Senior Data Architect
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Architect in Marysville, OH (Hybrid) for a 12+ month contract at $85/hr. Requires 8-10+ years in Data Engineering, expertise in AWS services, strong background in Supply Chain, and proficiency in Python, PySpark, and SQL.
Country: United States
Currency: $ USD
Day rate: 680
Date: January 13, 2026
Duration: More than 6 months
Location: Hybrid
Contract: W2 Contractor
Security: Unknown
Location detailed: Marysville, OH
Skills detailed: #Data Architecture #SAP #GitHub #RDS (Amazon Relational Database Service) #SQL (Structured Query Language) #Datasets #Informatica #Deployment #Cloud #Compliance #SQL Server #Data Modeling #API (Application Programming Interface) #Data Management #Python #AWS (Amazon Web Services) #Physical Data Model #Monitoring #DMS (Data Migration Service) #Athena #BI (Business Intelligence) #Metadata #IAM (Identity and Access Management) #Scala #EC2 #GIT #Spark (Apache Spark) #Lambda (AWS Lambda) #Spark SQL #PySpark #DevOps #Data Quality #Automation #Data Governance #Redshift #S3 (Amazon Simple Storage Service) #Security #ETL (Extract, Transform, Load) #Data Engineering
Role description
Title: Senior Data Architect
Location: Marysville, OH (Hybrid; 4 days in office)
Duration: 12+ month contract
Pay rate: $85 per hour on W2
Description
The Data Engineer designs, builds, and maintains scalable data solutions that enable advanced analytics and business intelligence across the client's enterprise.
What will this person be working on?
• Design and implement ETL pipelines using AWS services (Glue, EMR, DMS, S3, Redshift); a PySpark sketch follows this list.
• Orchestrate workflows with AWS Step Functions, EventBridge, and Lambda (see the CDK sketch after this list).
• Integrate CI/CD pipelines with GitHub and AWS CDK for automated deployments.
• Develop conceptual, logical, and physical data models for operational and analytical systems.
• Optimize queries, normalize datasets, and apply performance tuning techniques.
• Use Python, PySpark, and SQL for data transformation and automation.
• Monitor pipeline performance using CloudWatch and Glue job logs.
• Troubleshoot and resolve data quality and performance issues proactively.
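
To make the ETL bullets above concrete, here is a minimal PySpark sketch of the kind of S3-to-S3 transformation job the role describes. It is an illustration only, not the client's code: the bucket paths and the column names (order_id, order_ts, quantity, unit_price, status) are hypothetical.

from pyspark.sql import SparkSession, functions as F

# Read raw supply-chain orders from S3 (placeholder path).
spark = SparkSession.builder.appName("orders_etl").getOrCreate()
orders = spark.read.json("s3://example-raw-bucket/orders/")

# Deduplicate, derive analytic columns, and drop incomplete rows.
cleaned = (
    orders
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_ts"))
    .withColumn("line_total", F.col("quantity") * F.col("unit_price"))
    .filter(F.col("status").isNotNull())
)

# Write partitioned Parquet to a curated zone for Redshift/Athena consumers.
(cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/"))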
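The orchestration bullet can be sketched the same way. Assuming AWS CDK v2 in Python, this hypothetical stack wires a single Lambda step into a Step Functions state machine; the construct names, asset path, and one-step flow are placeholders for a real multi-step pipeline with error handling and EventBridge triggers.

from aws_cdk import Stack, aws_lambda as lambda_, aws_stepfunctions as sfn, aws_stepfunctions_tasks as tasks
from constructs import Construct

class EtlOrchestrationStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Hypothetical Lambda that triggers an ETL step.
        start_etl = lambda_.Function(
            self, "StartEtl",
            runtime=lambda_.Runtime.PYTHON_3_12,
            handler="index.handler",
            code=lambda_.Code.from_asset("lambda/start_etl"),  # placeholder path
        )

        # Chain the Lambda invocation into a minimal state machine.
        definition = tasks.LambdaInvoke(
            self, "RunEtlStep", lambda_function=start_etl
        ).next(sfn.Succeed(self, "Done"))

        sfn.StateMachine(
            self, "EtlStateMachine",
            definition_body=sfn.DefinitionBody.from_chainable(definition),
        )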
Minimum Experience
• 8–10+ years in Data Engineering or related roles.
• Proven track record in AWS-based data solutions and orchestration.
• Integration with ERP systems (SAP, homegrown ERP systems).
• API-based data exchange between manufacturing and supply chain legacy applications and AWS pipelines.
• Metadata management for compliance attributes.
• Audit trails and reporting for compliance verification.
• Expertise in cloud platforms for designing, building, and maintaining data-driven solutions.
• Skilled in Data Architecture and Data Engineering, with a strong background in the Supply Chain domain.
• Experienced in data modeling (conceptual, logical, and physical), ETL optimization, query optimization, and performance tuning.
Technical Skills
• Languages: Python, PySpark, SQL
• AWS Services: Glue, EMR, EC2, Lambda, DMS, S3, Redshift, RDS
• Data Governance: Informatica CDGC/CDQ
• DevOps Tools: Git, GitHub, AWS CDK
• Security: IAM, encryption policies
• Monitoring: CloudWatch, Glue Catalog, Athena (see the Athena sketch after this list)
• Strong integration background with DB2, UDB, SQL Server, etc.
• Soft Skills:
• Strong communication and collaboration skills.
• Ability to work with cross-functional teams and stakeholders.
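
As a final illustration, the monitoring line above (CloudWatch, Glue Catalog, Athena) often reduces to programmatic checks like this boto3 sketch, which runs an Athena query against a Glue Catalog table and polls until it completes. The database, table, and results bucket are placeholder names, not from the posting.

import time
import boto3

athena = boto3.client("athena")

# Hypothetical data-quality check: count today's rows in a curated table.
qid = athena.start_query_execution(
    QueryString="SELECT count(*) FROM orders WHERE order_date = current_date",
    QueryExecutionContext={"Database": "example_curated_db"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)["QueryExecutionId"]

# Poll until the query reaches a terminal state.
while True:
    state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    for row in athena.get_query_results(QueryExecutionId=qid)["ResultSet"]["Rows"]:
        print(row)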
If you're interested in this role, please send your updated resume to chakravarthi.savalam@ispace.com.