Infojini Inc

Senior AWS Cloud Data Engineer - W2 Contract

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior AWS Cloud Data Engineer on a W2 contract, remote with quarterly onsite requirements. Key skills include AWS services, Python, data engineering, and DevOps. Requires 7+ years of experience and a relevant Bachelor's degree.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 14, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
St. Louis County, MO
-
🧠 - Skills detailed
#Lambda (AWS Lambda) #Vulnerability Management #ECR (Elastic Container Registry) #Unix #Visualization #Redshift #RDS (Amazon Relational Database Service) #Data Analysis #R #API (Application Programming Interface) #Leadership #AI (Artificial Intelligence) #SageMaker #Hadoop #ML (Machine Learning) #SQL (Structured Query Language) #Databases #Data Science #Infrastructure as Code (IaC) #Code Reviews #AWS (Amazon Web Services) #Data Processing #Big Data #Spark (Apache Spark) #DevOps #Ansible #Data Engineering #EC2 #Linux #Security #Terraform #Python #Data Integration #Cloud #Agile #S3 (Amazon Simple Storage Service)
Role description
We are looking for a Senior AWS Cloud Data Engineer. Details below:
Location: Remote, with onsite attendance required every 3 months. In-person interviews may be requested; travel expenses will be paid.
Note: We need a candidate who is equally strong on both the data and application sides.

ABOUT THE ROLE:
We are looking for a Data Engineer to design and build capabilities for a cutting-edge, cloud-based big data analytics platform. You will report to an engineering leader and be part of an agile engineering team responsible for developing complex cloud-native data processing capabilities as part of an AWS-based data analytics platform. You will also work with data scientists, as users of the platform, to analyze and visualize data and develop machine learning/AI models.

Responsibilities
• Develop, enhance, and troubleshoot complex data engineering, data visualization, and data integration capabilities using Python, R, Lambda, Glue, Redshift, EMR, QuickSight, SageMaker, and related AWS data processing and visualization services.
• Provide technical thought leadership and collaborate with software developers, data engineers, database architects, data analysts, and data scientists to ensure data delivery and to align data processing architecture and services across multiple ongoing projects.
• Contribute to the team through peer code reviews, database defect support, security enhancement support, vulnerability management, and occasional backup production support.
• Leverage DevOps skills to build and release Infrastructure as Code, Configuration as Code, software, and cloud-native capabilities, ensuring the process follows appropriate change management guidelines.
• In partnership with the product owner and engineering leader, ensure the team has a clear understanding of the business vision and goals and how they connect with technology solutions.

Qualifications
• Bachelor's degree with a major or specialized coursework in Information Technology, or commensurate experience.
• 7+ years of proven experience with a combination of the following:
• Designing and building complex data processing pipelines, including streaming.
• Designing big data solutions and using common tools (Hadoop, Spark, etc.).
• Relational SQL databases, especially Redshift.
• IaC tools such as Terraform, Ansible, and AWS CDK.
• Containerization services such as EKS and ECR.
• AWS cloud services: EC2, S3, RDS, Redshift, Glue, Lambda, Step Functions, SageMaker, QuickSight, Config, Security Hub, Inspector.
• Designing, building, and implementing high-performance APIs and programs using architectural frameworks and guidelines.
• UNIX/Linux operating systems.