

Optomi
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer II with an unknown contract length, offering a day rate of $440. Located in Charlotte, NC (hybrid), or remote within the EST timezone, it requires 2+ years of AWS experience and proficiency in AWS services, Kafka, Python, SQL, and Terraform.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
440
-
🗓️ - Date
March 9, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#Athena #SNS (Simple Notification Service) #Infrastructure as Code (IaC) #AWS (Amazon Web Services) #Data Warehouse #SQL (Structured Query Language) #Data Quality #Data Modeling #Airflow #DevOps #IAM (Identity and Access Management) #ETL (Extract, Transform, Load) #GitHub #Database Management #PySpark #SQS (Simple Queue Service) #Migration #Cloud #Data Orchestration #Lambda (AWS Lambda) #Data Engineering #Terraform #Data Pipeline #Python #Spark (Apache Spark) #S3 (Amazon Simple Storage Service) #Kafka (Apache Kafka) #Security
Role description
Optomi, in partnership with a leading organization, is seeking a Data Engineer II to join their team in Charlotte, NC. The position offers a hybrid work arrangement for local candidates, or fully remote work for well-qualified candidates based in the EST timezone.
Position Summary: The AWS Data Engineer II will play a critical role in developing and maintaining data pipelines, ensuring long-term reliability and high data quality. The role involves designing and implementing data warehousing solutions tailored to end-user needs, managing production environments, and collaborating with cross-functional teams to deliver technical solutions. The ideal candidate will have hands-on experience with AWS services, Kafka, and data orchestration tools, and will demonstrate a deep understanding of database management and ETL processes.
What the right candidate will enjoy:
• Opportunity to work with cutting-edge AWS technologies
• Hybrid work environment for local candidates, with flexibility for remote work
• A collaborative and innovative team environment
What type of experience does the right candidate have:
• 2+ years of AWS experience
• Proficiency in AWS services: S3, EMR, Glue Jobs, Lambda, Athena, CloudTrail, SNS, SQS, CloudWatch, Step Functions
• Experience with Kafka, preferably Confluent Kafka
• Strong SQL and data modeling skills for ETL processes tailored to data warehousing
• Proficiency in Python, including PySpark experience
• Knowledge of Terraform for Infrastructure as Code
• Experience with DevOps pipelines (CI/CD) using GitHub
• Deep understanding of IAM roles and policies
• Familiarity with AWS workflow orchestration tools like Airflow or Step Functions
What the responsibilities of the right candidate are:
• Develop and refine data pipelines, focusing on reliability and data quality
• Design and implement user-friendly data warehousing solutions that prioritize performance
• Manage and resolve issues in production data warehouse environments on AWS
• Collaborate with data and solution architects to make key technical decisions
• Stand up development instances and migration paths with required security and access roles
• Build new data pipelines, identify data gaps, and provide automated solutions to enhance analytical capabilities
This is an excellent opportunity for a skilled Data Engineer II to make a significant impact within a forward-thinking organization. Apply today to join the team and contribute to innovative data solutions!






