

Infojini Inc
Senior AWS Cloud Data Engineer/Architect | W2 Contract
Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior AWS Cloud Data Engineer/Architect on a W2 contract, remote with quarterly onsite requirements. Pay rate is "unknown." Key skills include AWS, Big Data, Hadoop/Spark, and IaC. Requires 7+ years of relevant experience and a Bachelor's degree.
Country
United States
Currency
$ USD
-
Day rate
Unknown
-
Date
March 27, 2026
Duration
Unknown
-
Location
Remote
-
Contract
W2 Contractor
-
Security
Unknown
-
Location detailed
St. Louis City County, MO
-
Skills detailed
#Data Strategy #Data Pipeline #Data Architecture #RDS (Amazon Relational Database Service) #Data Analysis #Data Quality #Scala #Visualization #Vulnerability Management #Big Data #Unix #Code Reviews #EC2 #AWS (Amazon Web Services) #Leadership #Cloud #Agile #Databases #Ansible #Lambda (AWS Lambda) #Data Management #Linux #Data Security #Redshift #Data Integration #Infrastructure as Code (IaC) #Spark (Apache Spark) #S3 (Amazon Simple Storage Service) #SageMaker #Data Lake #ML (Machine Learning) #Data Processing #Security #Terraform #API (Application Programming Interface) #DevOps #AI (Artificial Intelligence) #R #Strategy #Data Engineering #Data Science #Hadoop #ECR (Elastic Container Registry) #Python #SQL (Structured Query Language)
Role description
We have two openings below.
Must have working experience with the following (and it should be reflected in the resume):
Big Data, Hadoop/Spark, IaC, AWS, Redshift, AWS-based data lake architectures, API integrations
Nice to have:
Working experience on transactional applications or systems.
Details below:
Location: Remote; onsite required every 3 months
Interview Process: two rounds of virtual interviews; the final interview will be in person (all travel expenses and accommodations will be paid)
ROLE 1:
We are looking for a Senior AWS Cloud Data Engineer.
Note: The candidate should be equally strong on both the data and application sides.
ABOUT THE ROLE:
We are looking for a Data Engineer to design and build capabilities for a cutting-edge, cloud-based big data analytics platform.
You will report to an engineering leader and be a part of an agile engineering team responsible for developing complex cloud-native data processing capabilities as part of an AWS-based data analytics platform.
You will also work with data scientists, as users of the platform, to analyze and visualize data and develop machine learning/AI models.
Responsibilities
• Develop, enhance, and troubleshoot complex data engineering, data visualization, and data integration capabilities using Python, R, Lambda, Glue, Redshift, EMR, QuickSight, SageMaker, and related AWS data processing and visualization services.
• Provide technical thought leadership and collaborate with software developers, data engineers, database architects, data analysts, and data scientists to ensure data delivery and align data processing architecture and services across multiple ongoing projects.
• Contribute to the team through peer code reviews, database defect support, security enhancement support, vulnerability management, and occasional backup production support.
• Apply DevOps skills to build and release Infrastructure as Code, Configuration as Code, software, and cloud-native capabilities, ensuring the process follows appropriate change management guidelines.
• In partnership with the product owner and engineering leader, ensure the team has a clear understanding of the business vision and goals and how they connect with technology solutions.
Qualifications
• Bachelor's degree with a major or specialized coursework in Information Technology, or commensurate experience.
• 7+ years of proven experience with a combination of the following:
• Designing and building complex data processing and streaming pipelines
• Designing big data solutions with common tools (Hadoop, Spark, etc.)
• Relational SQL databases, especially Redshift
• IaC tools such as Terraform, Ansible, and AWS CDK
• Containerization services such as EKS and ECR
• AWS cloud services: EC2, S3, RDS, Redshift, Glue, Lambda, Step Functions, SageMaker, QuickSight, Config, Security Hub, Inspector
• Designing, building, and implementing high-performance APIs and programs using architectural frameworks and guidelines
• UNIX/Linux operating systems
ROLE 2:
We are looking for a Senior AWS Cloud Data Architect.
ABOUT THE ROLE:
We are looking for a Data Architect to architect, design, and build capabilities for a cutting-edge, cloud-based big data analytics platform and portal/transactional application. You will report to an engineering leader and be part of an agile engineering team responsible for developing complex cloud-native data processing capabilities as part of an AWS-based data analytics platform and portal application.
Responsibilities
• Support multiple products simultaneously and effectively; communicate and collaborate with stakeholders and meet commitments as planned
• Understand requirements and render them as architectural data models that operate at large scale and high performance, and advise on how to implement these models in cloud-native database technologies
• Document data management best practices, standards, and reference architectures
• Publish data models for the development team's and the BAs' reference
• Investigate and advise on data quality and on ways to improve system reliability and performance
• Collaborate with peer data architects and application architects to eliminate duplicated work and establish common standards
• Take the lead on impact analysis of DDL/DML changes
• Discuss and escalate potential risks in current and planned implementations
• Implement data security and audit best practices in a timely manner
• Develop and automate where possible
• Partner with application engineering in product development and lead the selection of best-of-breed cloud data management technologies
• Conduct data architecture evaluations and report on deficiencies
• Participate in planning and effort estimation
• Work toward an enterprise-level data strategy that derives the best value from treating data as an asset
• Ensure CI/CD pipeline and release practices are followed consistently
Qualifications
• Bachelor's degree with a major or specialized coursework in Information Technology, or commensurate experience.
• 7+ years of proven experience with a combination of the following:
• Designing and delivering cross-functional data solutions, data pipelines, and data delivery using advanced technologies
• Designing big data solutions with common tools (Hadoop, Spark, etc.)
• Relational and analytical SQL databases, especially Postgres and Redshift
• AWS cloud services: EC2, S3, Glue, Lambda, Step Functions, SageMaker, QuickSight, Config, Security Hub, Inspector
• Data management best practices
• Working experience with Agile development methodologies and the SDLC
• Must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products
• Must be resourceful and creative in identifying ways to mitigate issues and risks and avoid project delays
• Effective written and verbal communication skills





