Compunnel Inc.

AWS Data Engineer II

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Data Engineer II, with a contract length of "unknown" and an advertised rate of $416 per day. Key skills include AWS services, Python, SQL, and experience with data pipelines. A minimum of eight years in a similar role is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
416
-
🗓️ - Date
November 4, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Westbrook, ME
-
🧠 - Skills detailed
#SNS (Simple Notification Service) #Metadata #Compliance #SQL (Structured Query Language) #Data Pipeline #ETL (Extract, Transform, Load) #Data Storage #Infrastructure as Code (IaC) #AWS (Amazon Web Services) #Storage #dbt (data build tool) #Security #Cloud #Lambda (AWS Lambda) #Data Processing #Snowflake #SQS (Simple Queue Service) #Scripting #Data Accuracy #Programming #S3 (Amazon Simple Storage Service) #Terraform #Python #Airflow #Agile #Databases #Scala #Data Engineering #Apache Iceberg
Role description
We are seeking a highly motivated and experienced Data Engineer to join our team and support the accelerated advancement of the client's global instrument data pipelines. This role is pivotal in developing changes that deliver instrument data to stakeholders.

Reporting and Collaboration
This position reports directly to the Development Manager, works on an Agile team, and collaborates with cross-functional teams including engineering, product, and quality.

TOP (3) REQUIRED SKILLSETS:
• Strong background in AWS cloud services, with a focus on data storage and pub/sub platforms (S3, SNS, SQS, Lambda, etc.)
• Proven experience building and maintaining operational data pipelines, particularly with modern technologies like dbt, Airflow, and Snowflake
• Strong proficiency in Python and SQL

NICE TO HAVE SKILLSETS:
• Familiarity with Infrastructure as Code solutions like Terraform
• Experience with Quality Engineering processes

Key Responsibilities
• Design and implement ingestion and storage solutions using AWS services such as S3, SNS, SQS, and Lambda (see the ingestion sketch at the end of this description).
• Develop and implement analytical solutions leveraging Snowflake, Airflow, and Apache Iceberg (see the orchestration sketch at the end of this description).
• Collaborate with cross-functional teams to understand and meet data processing needs.
• Develop and maintain scalable and reliable data solutions that support the client's operational and business requirements.
• Document data flows, architecture decisions, and metadata to ensure maintainability and knowledge sharing.
• Design and implement fault-tolerant systems, ensuring high availability and resilience in data processing pipelines.
• Actively participate in testing and quality engineering (QE) processes, collaborating closely with the QE team to ensure the reliability and accuracy of data solutions.

Required Skills
• Strong problem-solving skills and the ability to operate independently, sometimes with limited information.
• Strong communication skills, both verbal and written, including the ability to explain complex issues to technical and non-technical users in a professional, positive, and clear manner.
• Initiative and self-motivation with strong planning and organizational skills.
• Ability to prioritize and adapt to changing business needs.
• Proven experience building and maintaining operational data pipelines, particularly with modern technologies like dbt, Airflow, and Snowflake.
• Strong proficiency in Python and SQL.
• Strong background in AWS cloud services, with a focus on data storage and pub/sub platforms (S3, SNS, SQS, Lambda, etc.).
  o Familiarity with Infrastructure as Code solutions like Terraform is a plus.
• Familiarity with a broad range of technologies, including:
  o Cloud-native data processing and analytics
  o SQL databases, specifically for analytics
  o Orchestration and ELT technologies like Airflow and dbt
  o Scripting and programming with Python or similar languages
  o Infrastructure as Code languages like Terraform
• Ability to translate complex business requirements into scalable and efficient data solutions.
• Strong multitasking skills and the ability to prioritize effectively in a fast-paced environment.

Preferred Background
• Candidates should have a minimum of eight years of experience in a similar role, preferably within a technology-driven environment.
• Experience building data services and ETL/ELT pipelines in the cloud using Infrastructure as Code and large-scale data processing engines.
• Strong experience with SQL and one or more programming languages (Python preferred).

Success Metrics
• Meeting delivery timelines for project milestones.
• Effective collaboration with cross-functional teams.
• Maintaining high standards of data accuracy and accessibility in a fast-paced, dynamic environment.
• Reduction in data pipeline failures or downtime through resilient and fault-tolerant design.
• Demonstrated contribution to the stability and scalability of the platform through well-architected, maintainable code.
• Positive feedback from stakeholders (engineering, product, or customer-facing teams) on delivered solutions.
• Active contribution to Agile ceremonies and improvement of team velocity or estimation accuracy.
• Proactive identification and mitigation of data-related risks, including security or compliance issues.
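
For candidates gauging fit, the following is a minimal, illustrative sketch of the S3/SNS/SQS/Lambda ingestion pattern named in the Key Responsibilities, assuming SNS notifications fan out to an SQS queue that triggers a Lambda function landing raw payloads in S3. The bucket name, key layout, and message shape are hypothetical and are not taken from the role description.

```python
# Hypothetical sketch of the S3/SNS/SQS/Lambda ingestion pattern described above.
# Bucket name, key layout, and message shape are illustrative assumptions.
import json
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
LANDING_BUCKET = "example-instrument-landing"  # assumed bucket name


def handler(event, context):
    """Consume SQS records (fanned out from SNS) and land raw payloads in S3."""
    records = event.get("Records", [])
    for record in records:
        envelope = json.loads(record["body"])      # SQS body holds the SNS envelope
        payload = json.loads(envelope["Message"])  # SNS message carries the instrument data
        ts = datetime.now(timezone.utc).strftime("%Y/%m/%d/%H%M%S%f")
        key = f"raw/instruments/{ts}-{record['messageId']}.json"
        s3.put_object(
            Bucket=LANDING_BUCKET,
            Key=key,
            Body=json.dumps(payload).encode("utf-8"),
            ContentType="application/json",
        )
    return {"processed": len(records)}
```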
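
Likewise, a hedged sketch of the Airflow/dbt/Snowflake orchestration mentioned in the responsibilities, assuming a recent Airflow 2.x deployment and an existing dbt project that models the landed instrument data in Snowflake. The DAG id, schedule, dbt project path, and staging-load script are illustrative assumptions, not details from the posting.

```python
# Hypothetical sketch of Airflow orchestrating a dbt build against Snowflake.
# DAG id, schedule, paths, and helper script are assumptions (Airflow 2.4+ syntax).
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="instrument_data_elt",        # assumed DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@hourly",
    catchup=False,
    tags=["snowflake", "dbt"],
) as dag:
    # Load newly landed S3 objects into Snowflake staging (e.g. via COPY INTO),
    # then let dbt build and test the downstream models.
    load_staging = BashOperator(
        task_id="load_staging",
        bash_command="python load_staging.py",  # assumed helper script
    )
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="dbt run --project-dir /opt/dbt/instruments --target prod",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="dbt test --project-dir /opt/dbt/instruments --target prod",
    )

    load_staging >> dbt_run >> dbt_test
```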