$180K/YR - Data Engineer (Flink)- 100% Remote

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer (Flink) with 5+ years of experience, offering $180K/YR for more than 6 months, 100% remote. Key skills include AWS, Kubernetes, Apache Flink, and Terraform. Experience with data lakehouse and cloud platforms is essential.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 24, 2025
🕒 - Project duration
More than 6 months
🏝️ - Location type
Remote
📄 - Contract type
Unknown
🔒 - Security clearance
Unknown
📍 - Location detailed
Charlotte, NC
🧠 - Skills detailed
#Lambda (AWS Lambda) #Big Data #PostgreSQL #Athena #GIT #Data Lake #Scala #API (Application Programming Interface) #Aurora #Data Quality #Agile #Java #S3 (Amazon Simple Storage Service) #Hadoop #AWS (Amazon Web Services) #RDS (Amazon Relational Database Service) #Redshift #Cloud #Spark (Apache Spark) #Data Lakehouse #Kubernetes #Security #Logging #Python #Terraform #SQS (Simple Queue Service) #Kafka (Apache Kafka) #Vault #SQL (Structured Query Language) #Data Encryption #Storage #Deployment #Debugging #IAM (Identity and Access Management) #Containers #Data Engineering #SNS (Simple Notification Service) #Data Security #ETL (Extract, Transform, Load)
Role description

Data Engineer with Flink experience needed!

$180K/YR

100% Remote

Enterprise Data Platforms Team is seeking a Subject Matter Expert to help develop Duke's Data Fabric as an interconnected network of data capabilities and data products designed to deliver data efficiently and at scale. Candidates should have expertise in developing and building data platforms, demonstrating experience with overcoming obstacles and avoiding pitfalls. They should also possess skills in optimizing and automating deliverables to production using the required tech stack. Additionally, candidates should be experienced and adaptable to changing demands and priorities in an Agile development environment.

We are specifically looking for individuals with 5+ years of experience in Data Engineering and/or Software Engineering roles who can provide knowledge and support to our existing engineers.

Must have experience with similar platform engineering/management solutions:

   • Building/optimizing Data Lakehouses with Open Table formats

   • Kubernetes deployments/cluster administration

   • Transitioning on-premise big data platforms to scalable cloud-based platforms like AWS

   • Distributed Systems, Microservice architecture, and containers

   • Cloud Streaming use cases in Big Data Ecosystems (e.g., EMR, EKS, Hadoop, Spark, Hudi, Kafka/Kinesis); see the sketch below
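
To make the streaming-to-lakehouse expectation concrete, here is a minimal PyFlink sketch that reads JSON events from Kafka and upserts them into an Apache Hudi table on S3. The topic, broker address, schema, and S3 path are hypothetical placeholders, and the Kafka and Hudi Flink connector JARs are assumed to be on the Flink classpath.

```python
# Minimal sketch: Kafka source -> Apache Hudi (open table format) sink on S3.
# Assumes the flink-sql-connector-kafka and hudi-flink bundle JARs are available.
from pyflink.table import EnvironmentSettings, TableEnvironment

t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Kafka source with event-time watermarks (topic/broker names are placeholders).
t_env.execute_sql("""
    CREATE TABLE orders_src (
        order_id STRING,
        amount DOUBLE,
        ts TIMESTAMP(3),
        WATERMARK FOR ts AS ts - INTERVAL '5' SECOND
    ) WITH (
        'connector' = 'kafka',
        'topic' = 'orders',
        'properties.bootstrap.servers' = 'broker:9092',
        'properties.group.id' = 'orders-ingest',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
""")

# Hudi sink: a keyed, upsert-capable table on S3 (path is a placeholder).
t_env.execute_sql("""
    CREATE TABLE orders_hudi (
        order_id STRING PRIMARY KEY NOT ENFORCED,
        amount DOUBLE,
        ts TIMESTAMP(3)
    ) WITH (
        'connector' = 'hudi',
        'path' = 's3a://example-data-lake/lakehouse/orders',
        'table.type' = 'MERGE_ON_READ'
    )
""")

# Submits a continuous streaming job that upserts Kafka events into Hudi.
t_env.execute_sql("INSERT INTO orders_hudi SELECT order_id, amount, ts FROM orders_src")
```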

Must have experience with the below tech stack:

   • GitHub and GitHub Actions

   • AWS

o IAM

o API Gateway

o Lambda

o Step Functions

o Lake Formation

o EKS & Kubernetes

o Glue: Catalog, ETL, Crawler

o Athena

o S3 (strong foundational concepts like object data store vs block data store, encryption/decryption, storage tiers, etc.; see the boto3 sketch after this list)

   • Apache Hudi

   • Apache Flink

   • PostgreSQL and SQL

   • RDS (Relational Database Service)

   • Python

   • Java

   • Terraform Enterprise

o Must be able to explain what TF is used for

o Understand and explain basic principles (e.g. modules, providers, functions)

o Must be able to write and debug TF
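
For the S3 foundational concepts called out above, a minimal boto3 sketch: S3 stores whole immutable objects under keys (unlike block storage), encrypts at rest via a server-side setting, and exposes storage tiers through the StorageClass parameter. The bucket and key names here are hypothetical.

```python
# Minimal sketch of S3 as an object store with encryption and storage tiers.
import boto3

s3 = boto3.client("s3")

# PUT stores a whole object under a key; there is no partial block rewrite.
s3.put_object(
    Bucket="example-data-lake",                      # hypothetical bucket
    Key="raw/orders/2025/04/24/orders.json",
    Body=b'{"order_id": "42", "amount": 19.99}',
    ServerSideEncryption="aws:kms",                  # encryption at rest via KMS
    StorageClass="STANDARD_IA",                      # cheaper infrequent-access tier
)

# GET retrieves the whole object; decryption is transparent to the caller.
obj = s3.get_object(Bucket="example-data-lake", Key="raw/orders/2025/04/24/orders.json")
print(obj["Body"].read())
```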

Helpful tech stack experience would include:

   • Helm

   • Kafka and Kafka Schema Registry

   • AWS Services: CloudTrail, SNS, SQS, CloudWatch, Step Functions, Aurora, EMR, Redshift

   • Secrets Management Platform: Vault, AWS Secrets Manager (see the sketch below)
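
As an illustration of the secrets-management item, a minimal boto3 sketch that fetches database credentials from AWS Secrets Manager. The secret name and region are hypothetical; Vault would be accessed through its own client (e.g., hvac) instead.

```python
# Minimal sketch: read database credentials from AWS Secrets Manager.
import json

import boto3

client = boto3.client("secretsmanager", region_name="us-east-1")

# SecretId is a hypothetical secret name; the value is a JSON string.
resp = client.get_secret_value(SecretId="prod/postgres/credentials")
creds = json.loads(resp["SecretString"])  # e.g. {"username": ..., "password": ...}
print(creds["username"])
```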

Core Responsibilities and Soft Skills

   • Provides technical direction, engages the team in discussions on how best to guide/build features on key technical aspects, and is responsible for product tech delivery

   • Works closely with the Product Owner and team to align on delivery goals and timing

   • Collaborates with architects on key technical decisions for data and overall solution

   • Leads the design and implementation of data quality check methods

   • Ensures data security and permissions solutions, including data encryption, user access controls, and logging

   • Able to think unconventionally to find the best way to solve a defined use case with fuzzy requirements

   • Self-starter mentality

o Willing to do their own research to solve problems, clearly present findings, and engage in conversation about what makes one solution better than another

   • Thrives in a fail-fast environment involving mini PoCs and participates in an inspect-and-adapt process

   • Questioning and Improvement mindset

o Must be ready to ask questions about why something is currently done the way it is and suggest alternative solutions

   • Customer-facing skills

o Interfacing with stakeholders and other product teams via pairing, troubleshooting support, and debugging issues they encounter with our products

Impellam Group and its brands are equal-opportunity employers committed to diversity and inclusion. All qualified applicants will receive consideration without regard to race, color, religion, gender, sexual orientation, pregnancy or maternity, national origin, age, disability, veteran status, or any other factor determined to be unlawful under applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application, interview process, pre-employment activity, and the performance of crucial job functions.

If you require additional disability considerations, modifications, or adjustments please let us know by contacting HR-InfoImpellamNA@impellam.com or fill out this form to request accommodations.