Optomi

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer II on a contract of unspecified length, with a listed day rate of $464 USD. It requires 2+ years of AWS experience, proficiency in Python and SQL, and expertise in data pipelines and data warehousing solutions. The position is hybrid, based in Charlotte, NC.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
464
-
πŸ—“οΈ - Date
January 16, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Charlotte, NC
-
🧠 - Skills detailed
#S3 (Amazon Simple Storage Service) #Data Quality #Migration #Data Pipeline #Security #Data Architecture #Kafka (Apache Kafka) #Data Engineering #Data Warehouse #ETL (Extract, Transform, Load) #Athena #Infrastructure as Code (IaC) #Python #Terraform #PySpark #SNS (Simple Notification Service) #Spark SQL #SQL (Structured Query Language) #AWS (Amazon Web Services) #Spark (Apache Spark) #GitHub #DevOps #SQS (Simple Queue Service) #Airflow #Lambda (AWS Lambda) #Data Modeling #IAM (Identity and Access Management) #Cloud
Role description
Position Summary:
• The Data Engineer II will leverage their expertise in AWS services and data engineering to design, develop, and refine data pipelines and warehousing solutions. This role requires close collaboration with Lead Developers, Data Architects, and Solution Architects to ensure high-quality delivery of technical solutions. The ideal candidate will have a strong background in Python with PySpark, SQL, and AWS workflow orchestration tools like Airflow or Step Functions. This position offers the opportunity to work with cutting-edge technologies and contribute to innovative data initiatives.

What the right candidate will enjoy:
• Working in a hybrid/local environment with flexibility.
• Collaborating with a cross-functional team of developers and architects.
• Utilizing advanced AWS services and data engineering tools.

What type of experience does the right candidate have:
• 2+ years of AWS experience, including services like S3, EMR, Glue Jobs, Lambda, Athena, CloudTrail, SNS, SQS, CloudWatch, and Step Functions.
• Experience with Kafka, preferably Confluent Kafka.
• Proficiency in Python, particularly PySpark, and SQL.
• Competence in data modeling, executing ETL processes, and developing data pipelines.
• Knowledge of Infrastructure as Code technologies such as Terraform.
• Familiarity with DevOps pipelines using GitHub.
• Deep understanding of IAM roles and policies.
• Experience with AWS workflow orchestration tools like Airflow or Step Functions.

What the responsibilities are of the right candidate:
• Collaborate with Lead Developers and Solution Architects to understand requirements and deliver technical solutions.
• Develop reliable data pipelines and ensure high data quality.
• Design data warehousing solutions optimized for end-user performance.
• Manage and resolve issues in production data warehouse environments on AWS.
• Implement data pipelines with attention to durability and data quality.
• Stand up development instances and migration paths with required security and access roles.
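For candidates gauging fit, here is a minimal, illustrative PySpark ETL sketch of the kind of pipeline work described above: reading raw data from S3, applying basic data-quality filtering, and writing partitioned output for downstream Athena or warehouse queries. The bucket names, paths, and column names are hypothetical and not taken from the role.

```python
# Minimal PySpark ETL sketch; buckets, paths, and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl-example").getOrCreate()

# Extract: read raw JSON events from S3.
raw = spark.read.json("s3://example-raw-bucket/orders/2026/01/")

# Transform: basic cleansing and data-quality filtering.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("order_id").isNotNull() & (F.col("amount") > 0))
       .dropDuplicates(["order_id"])
)

# Load: write partitioned Parquet for downstream Athena / warehouse queries.
(orders.withColumn("order_date", F.to_date("order_ts"))
       .write.mode("overwrite")
       .partitionBy("order_date")
       .parquet("s3://example-curated-bucket/orders/"))
```

And a minimal orchestration sketch, assuming a recent Airflow 2.x install (2.4+ for the `schedule` argument), showing how such a job might be scheduled daily. The DAG id and task callable are likewise hypothetical; the role equally accepts Step Functions for this kind of orchestration.

```python
# Minimal Airflow scheduling sketch for the ETL job above; all names are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_orders_etl():
    # Placeholder: trigger the PySpark/Glue job here (e.g. via boto3).
    pass

with DAG(
    dag_id="orders_etl_example",
    start_date=datetime(2026, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="run_orders_etl", python_callable=run_orders_etl)
```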