Eliassen Group

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with 7+ years of experience, focused on AWS-based data analytics. Contract length is unspecified; the day rate is $680 USD. U.S. citizenship is required. Key skills include Python, R, Redshift, and DevOps practices.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
680
-
🗓️ - Date
January 15, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
St. Louis City County, MO
-
🧠 - Skills detailed
#Data Integration #Vulnerability Management #EC2 #Terraform #Visualization #Cloud #Linux #Databases #Leadership #Security #API (Application Programming Interface) #Agile #AWS (Amazon Web Services) #Data Processing #Infrastructure as Code (IaC) #Data Science #Code Reviews #Spark (Apache Spark) #Big Data #R #Unix #Python #RDS (Amazon Relational Database Service) #AI (Artificial Intelligence) #Hadoop #ML (Machine Learning) #SQL (Structured Query Language) #Data Analysis #DevOps #Data Engineering #S3 (Amazon Simple Storage Service) #Lambda (AWS Lambda) #Ansible #Redshift #SageMaker #ECR (Elastic Container Registry)
Role description
JOB DESCRIPTION: We are looking for a Data Engineer to design and build capabilities for a cutting-edge, cloud-based big data analytics platform. You will report to an engineering leader and be part of an agile engineering team responsible for developing complex cloud-native data processing capabilities for an AWS-based data analytics platform. You will also work with data scientists, as users of the platform, to analyze and visualize data and develop machine learning/AI models. Due to federal government contract requirements, this position is limited to U.S. citizens.

Responsibilities
• Develop, enhance, and troubleshoot complex data engineering, data visualization, and data integration capabilities using Python, R, Lambda, Glue, Redshift, EMR, QuickSight, SageMaker, and related AWS data processing and visualization services.
• Provide technical thought leadership and collaborate with software developers, data engineers, database architects, data analysts, and data scientists to ensure data delivery and to align data processing architecture and services across multiple ongoing projects.
• Contribute to the team through peer code reviews, database defect support, security enhancement support, vulnerability management, and occasional backup production support.
• Leverage DevOps skills to build and release Infrastructure as Code, Configuration as Code, software, and cloud-native capabilities, ensuring the process follows appropriate change management guidelines.
• In partnership with the product owner and engineering leader, ensure the team has a clear understanding of the business vision and goals and how they connect with technology solutions.

Qualifications
• Bachelor's degree with a major or specialized coursework in Information Technology, or commensurate experience.
• 7+ years of proven experience with a combination of the following:
   • Designing and building complex data processing and streaming pipelines.
   • Designing big data solutions using common tools (Hadoop, Spark, etc.).
   • Relational SQL databases, especially Redshift.
   • IaC tools such as Terraform, Ansible, and AWS CDK.
   • Container services such as EKS and ECR.
   • AWS cloud services: EC2, S3, RDS, Redshift, Glue, Lambda, Step Functions, SageMaker, QuickSight, Config, Security Hub, Inspector.
   • Designing, building, and implementing high-performance APIs and programs using architectural frameworks and guidelines.
   • UNIX/Linux operating systems.