Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown" and a pay rate of "unknown." Located in Westbrook, ME, the role requires strong AWS, Python, SQL, and data pipeline experience, ideally with at least eight years of experience in a technology-driven environment.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
-
πŸ—“οΈ - Date discovered
September 24, 2025
πŸ•’ - Project duration
Unknown
-
🏝️ - Location type
Hybrid
-
πŸ“„ - Contract type
Unknown
-
πŸ”’ - Security clearance
Unknown
-
πŸ“ - Location detailed
Westbrook, ME
-
🧠 - Skills detailed
#Programming #Airflow #Data Storage #ETL (Extract, Transform, Load) #Infrastructure as Code (IaC) #Cloud #SQS (Simple Queue Service) #Scripting #SNS (Simple Notification Service) #Terraform #Data Pipeline #Python #Compliance #Snowflake #SQL (Structured Query Language) #Data Engineering #dbt (data build tool) #Agile #Apache Iceberg #S3 (Amazon Simple Storage Service) #Security #Metadata #Databases #Scala #Data Accuracy #Data Processing #Lambda (AWS Lambda) #AWS (Amazon Web Services) #Storage
Role description
Hybrid role. Candidates must be able to work on-site in Westbrook as required.

We are seeking a highly motivated and experienced Data Engineer to join our team and support the accelerated advancement of instrument data pipelines. This role is pivotal in developing changes that deliver instrument data to stakeholders. The position reports directly to the Development Manager, works on an Agile team, and involves collaboration with cross-functional teams including engineering, product, and quality.

TOP (3) REQUIRED SKILLSETS:
• Strong background in AWS cloud services, with a focus on data storage and pub/sub platforms (S3, SNS, SQS, Lambda, etc.)
• Proven experience in building and maintaining operational data pipelines, particularly with modern technologies like dbt, Airflow, and Snowflake
• Strong proficiency in Python and SQL

NICE TO HAVE SKILLSETS:
• Familiarity with Infrastructure as Code solutions like Terraform
• Experience with Quality Engineering processes

Key Responsibilities
• Design and implement ingestion and storage solutions using AWS services such as S3, SNS, SQS, and Lambda (an illustrative sketch follows this description).
• Develop and implement analytical solutions leveraging Snowflake, Airflow, and Apache Iceberg (see the orchestration sketch after this description).
• Collaborate with cross-functional teams to understand and meet data needs for processing.
• Develop and maintain scalable and reliable data solutions that support IDEXX's operational and business requirements.
• Document data flows, architecture decisions, and metadata to ensure maintainability and knowledge sharing.
• Design and implement fault-tolerant systems, ensuring high availability and resilience in our data processing pipelines.
• Actively participate in testing and quality engineering (QE) processes, collaborating closely with the QE team to ensure the reliability and accuracy of data solutions.

Required Skills
• Strong problem-solving skills and the ability to operate independently, sometimes with limited information.
• Strong communication skills, both verbal and written, including the ability to communicate complex issues to both technical and non-technical users in a professional, positive, and clear manner.
• Initiative and self-motivation with strong planning and organizational skills.
• Ability to prioritize and adapt to changing business needs.
• Proven experience in building and maintaining operational data pipelines, particularly with modern technologies like dbt, Airflow, and Snowflake.
• Strong proficiency in Python and SQL.
• Strong background in AWS cloud services, with a focus on data storage and pub/sub platforms (S3, SNS, SQS, Lambda, etc.).
  o Familiarity with Infrastructure as Code solutions like Terraform is a plus
• Familiarity with a broad range of technologies, including:
  o Cloud-native data processing and analytics
  o SQL databases, specifically for analytics
  o Orchestration and ELT technologies like Airflow and dbt
  o Scripting and programming with Python or similar languages
  o Infrastructure as Code languages like Terraform
• Ability to translate complex business requirements into scalable and efficient data solutions.
• Strong multitasking skills and the ability to prioritize effectively in a fast-paced environment.

Preferred Background
• Candidates should have a minimum of eight years of experience in a similar role, preferably within a technology-driven environment.
• Experience building data services and ETL/ELT pipelines in the cloud using Infrastructure as Code and large-scale data processing engines.
• Strong experience with SQL and one or more programming languages (Python preferred).

Success Metrics
• Meeting delivery timelines for project milestones.
• Effective collaboration with cross-functional teams.
• High standards of data accuracy and accessibility maintained in a fast-paced, dynamic environment.
• Reduction in data pipeline failures or downtime through resilient and fault-tolerant design.
• Demonstrated contribution to the stability and scalability of the platform through well-architected, maintainable code.
• Positive feedback from stakeholders (engineering, product, or customer-facing teams) on delivered solutions.
• Active contribution to Agile ceremonies and improvement of team velocity or estimation accuracy.
• Proactive identification and mitigation of data-related risks, including security or compliance issues.
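To make the ingestion responsibility concrete, below is a minimal, hypothetical sketch of an SQS-triggered Lambda that lands instrument events in S3. The bucket name, environment variable, and event shape are illustrative assumptions, not details taken from the posting.

```python
# Hypothetical sketch: an AWS Lambda handler that consumes instrument events
# from an SQS queue (fed by an SNS fan-out) and lands the raw payloads in S3.
# RAW_BUCKET, the key layout, and the payload shape are assumptions.
import json
import os
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
RAW_BUCKET = os.environ.get("RAW_BUCKET", "example-instrument-raw")  # assumed env var


def handler(event, context):
    """Persist each SQS record to S3 as a timestamped JSON object."""
    records = event.get("Records", [])
    for record in records:
        body = json.loads(record["body"])
        # SNS -> SQS subscriptions wrap the original payload in a "Message" field.
        payload = json.loads(body["Message"]) if "Message" in body else body
        key = "raw/{}/{}.json".format(
            datetime.now(timezone.utc).strftime("%Y/%m/%d"),
            record["messageId"],
        )
        s3.put_object(
            Bucket=RAW_BUCKET,
            Key=key,
            Body=json.dumps(payload).encode("utf-8"),
            ContentType="application/json",
        )
    return {"records_written": len(records)}
```

Partitioning keys by date keeps the raw zone queryable by downstream tools (for example, Snowflake external stages or Iceberg tables) without rewriting objects.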
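Likewise, here is a minimal sketch of how Airflow might orchestrate dbt models that build in Snowflake, assuming Airflow 2.4+ and a dbt project installed on the worker. The project path, target name, and schedule are illustrative assumptions; a real deployment might use Cosmos or the dbt Cloud provider instead of a plain BashOperator.

```python
# Hypothetical sketch: a daily Airflow DAG that runs and tests a dbt project
# targeting Snowflake. Paths and names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="instrument_dbt_daily",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    tags=["dbt", "snowflake"],
) as dag:
    # Build/refresh the instrument models defined in the dbt project.
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command="cd /opt/dbt/instrument_project && dbt run --target prod",
    )

    # Run dbt tests so data-quality failures surface as Airflow task failures.
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command="cd /opt/dbt/instrument_project && dbt test --target prod",
    )

    dbt_run >> dbt_test
```

Chaining the test task after the build step gives the fault-tolerance and data-accuracy checks the posting emphasizes a visible failure point in the orchestrator.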