Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
🗓️ - Date discovered
September 16, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Hybrid
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
📍 - Location detailed
Westbrook, ME
-
🧠 - Skills detailed
#Apache Iceberg #Snowflake #S3 (Amazon Simple Storage Service) #Data Security #Security #SQS (Simple Queue Service) #Data Storage #Terraform #SQL (Structured Query Language) #Data Lakehouse #Python #Data Pipeline #SNS (Simple Notification Service) #Agile #Cloud #Data Lake #Infrastructure as Code (IaC) #Airflow #Data Accuracy #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #dbt (data build tool) #Scala #AWS (Amazon Web Services) #Data Engineering #Compliance #Code Reviews #Storage
Role description
Due to the requirements of the employer’s government contracts, U.S. citizenship is required for this position.

ProSearch is seeking a highly motivated and experienced Senior Data Engineer to join a cutting-edge data engineering team supporting the accelerated advancement of global instrument data pipelines for one of Maine’s most innovative companies. This role is central to building scalable, high-performance data solutions that deliver instrument data to internal stakeholders across product, engineering, and business operations.

This is a hybrid role based in Maine, with on-site presence in Westbrook required two days a week. Candidates must currently reside in Maine or be able to commute reliably.

Location: Hybrid in Maine (on-site presence in Westbrook required 2 days a week)
Experience: 8+ years
Employment Type: Contract
Company: ProSearch (on behalf of an anonymous client)

Key Responsibilities
• Design and implement ingestion and storage pipelines using AWS services including S3, SNS, SQS, and Lambda
• Build scalable analytical solutions leveraging modern data platforms such as Snowflake, dbt, and Airflow
• Develop, maintain, and optimize fault-tolerant systems to ensure high availability of data services
• Collaborate with cross-functional teams to define data needs and deliver solutions that meet evolving business requirements
• Document architecture, data flows, and design decisions to support knowledge sharing and long-term maintainability
• Participate in Agile ceremonies and partner closely with Quality Engineering to validate data accuracy and performance
• Contribute to platform stability through well-architected, resilient code
• Proactively identify and mitigate risks related to data security, privacy, and compliance

Top Required Skillsets
• Strong background in AWS cloud services, especially data storage and pub/sub platforms (S3, SNS, SQS, Lambda)
• Proven experience building and maintaining operational data pipelines using tools such as dbt, Airflow, and Snowflake
• Strong proficiency in Python and SQL

Preferred Skillsets
• Familiarity with Infrastructure as Code tools such as Terraform
• Experience working in Quality Engineering or strong collaboration with QE teams
• Exposure to Apache Iceberg or similar modern data lakehouse architectures
• Ability to translate business requirements into scalable data solutions
• Experience designing cloud-native analytics infrastructure

Required Attributes
• Ability to work independently and navigate ambiguity
• Strong verbal and written communication skills
• Comfort engaging with both technical and non-technical stakeholders
• Planning and organizational skills, with the ability to manage shifting priorities
• Team-oriented mindset with a focus on delivering results in a fast-paced environment

Preferred Background
• 8 or more years of experience in data engineering or a similar technical role
• Demonstrated success building large-scale ETL/ELT pipelines in the cloud
• Strong hands-on experience with Python and modern orchestration platforms
• Track record of delivering solutions that are both scalable and maintainable
• Familiarity with Agile methodologies and cross-functional team environments

Success Metrics
• Timely delivery of project milestones
• Reduction in data pipeline downtime and failures
• High-quality, accessible, and trusted data made available to stakeholders
• Positive feedback from engineering, product, and business partners
• Active participation in team ceremonies, code reviews, and architectural planning
• Contribution to velocity improvements and process optimization
• Early identification and mitigation of security or compliance risks

About the Opportunity
This is a high-impact opportunity to join a modern data engineering team supporting business-critical applications. The ideal candidate brings technical depth, strategic thinking, and a collaborative mindset. You will work in an environment that values innovation, ownership, and continuous improvement.

How to Apply
You can apply directly through ProSearch. We look forward to connecting with experienced data engineers ready to take on their next challenge.
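For candidates less familiar with the S3 → SNS/SQS → Lambda ingestion pattern this posting emphasizes, here is a minimal sketch of the Lambda side: a handler that unpacks S3 event notifications delivered through an SQS queue. All names are hypothetical, and it assumes notifications go from S3 straight to SQS (an SNS hop in between would add one more envelope to unwrap); a real handler would go on to land the objects in the lakehouse or a Snowflake stage.

```python
import json


def handler(event, context=None):
    """Hypothetical Lambda handler for the S3 -> SQS -> Lambda pattern.

    Each SQS record's "body" is a JSON-encoded S3 event notification;
    we extract the (bucket, key) pairs that identify new objects to ingest.
    """
    objects = []
    for sqs_record in event.get("Records", []):
        # SQS delivers the S3 notification as a JSON string in "body"
        notification = json.loads(sqs_record["body"])
        for s3_record in notification.get("Records", []):
            s3 = s3_record["s3"]
            objects.append((s3["bucket"]["name"], s3["object"]["key"]))
    # A real pipeline would now copy/register these objects downstream
    return objects
```

Batching, retries, and dead-letter queues are what make this pattern fault-tolerant in practice; SQS redrives failed batches so the handler above only has to be idempotent.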