ECLARO

Senior Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Senior Data Engineer with 5-9 years of experience in data engineering, proficient in SQL and Python, and skilled in Azure, Databricks, and data integration tools. Contract length is unspecified; the pay rate is $65.00-$70.00/hour, and the role is remote.
🌎 - Country
United States
πŸ’± - Currency
$ USD
-
πŸ’° - Day rate
560
-
πŸ—“οΈ - Date
April 24, 2026
πŸ•’ - Duration
Unknown
-
🏝️ - Location
Remote
-
πŸ“„ - Contract
Unknown
-
πŸ”’ - Security
Unknown
-
πŸ“ - Location detailed
New York, United States
-
🧠 - Skills detailed
#Data Integration #Data Pipeline #Data Quality #Data Engineering #Data Modeling #Compliance #ETL (Extract, Transform, Load) #GitHub #Agile #Debugging #Terraform #Data Strategy #Deployment #Scala #SQL (Structured Query Language) #Azure #API (Application Programming Interface) #Documentation #Python #Schema Design #Databricks #Strategy #Azure Data Factory #Batch #Data Science #Observability #Version Control #Data Ingestion #ADF (Azure Data Factory) #Jenkins #Data Architecture #Storage
Role description
Job Number: 26-00677
Find your next opportunity in the Financial Services Industry. ECLARO is looking for a Senior Data Engineer for our client in Remote, NY. ECLARO's client provides retirement savings, health & investment management, and insurance services. If you're up to the challenge, take a chance at this rewarding opportunity.
Position Overview:
• The Senior Data Engineer leads the design and implementation of robust data solutions across multiple domains, driving technical excellence and scalability.
• This role mentors others, shapes best practices, and influences data architecture.
• This role is expected to proactively identify opportunities to improve systems, drive reliability, and collaborate with product and business stakeholders to align data strategy with company goals.
Responsibilities:
• Design, build, and scale robust, high-performing batch and real-time data pipelines.
• Drive architectural decisions for transformation logic, storage formats, and schema design.
• Lead complex data ingestion efforts and mentor peers on performance optimization and scalability.
• Lead the design and optimization of complex data models and storage architecture, balancing performance, scalability, and usability.
• Partner with stakeholders to translate business requirements into robust data structures.
• Contribute significantly to delivery planning and execution, mentor junior engineers on agile approaches, and ensure timely completion of tasks by managing dependencies and escalating delivery challenges.
• Design and standardize advanced data validation frameworks and testing strategies across platforms.
• Lead root cause analysis of data quality issues and mentor others on quality best practices.
• Partner with stakeholders to define SLAs and quality metrics.
• Lead efforts to automate, monitor, and scale deployment of production-grade data pipelines.
• Design resilient workflows with retry logic, failure handling, and resource optimization.
• Proactively address performance and reliability issues and contribute to runbooks and on-call documentation.
• Lead the creation and maintenance of detailed technical documentation for complex pipelines, data models, and system integrations.
• Establish and enforce documentation and development standards across projects.
• Mentor junior engineers on clear, consistent coding and documentation habits.
• Act as a key technical partner to product, analytics, and data science teams.
• Lead design discussions, communicate complex data trade-offs with clarity, and proactively surface risks and blockers.
• Support collaborative planning and mentor junior team members in effective communication and partnership.
Required Qualifications:
• 5-9 years in data engineering, data modeling, and pipeline development
• Expert in SQL and Python for developing and debugging scalable data pipelines
• Deep hands-on experience with Azure and Databricks, including Delta Live Tables and Unity Catalog
• Skilled with data integration/orchestration tools (SnapLogic, Azure Data Factory, Jenkins)
• Strong use of infrastructure-as-code tools such as Terraform to manage deployment pipelines
• Able to design and optimize API integrations in pipelines
• Familiar with data quality observability tools such as Soda or similar
• Proficient in version control and CI/CD workflows using GitHub
• Advanced understanding of dimensional modeling and data warehousing concepts
• Comfortable leading efforts in agile environments, with strong ownership and collaboration
Pay Rate: $65.00-$70.00/Hour
If hired, you will enjoy the following ECLARO Benefits:
• 401k Retirement Savings Plan administered by Merrill Lynch
• Commuter Check Pretax Commuter Benefits
• Eligibility to purchase Medical, Dental & Vision Insurance through ECLARO
If interested, you may contact:
Lester Candilado
jan.candilado@eclaro.com
646-680-0168
Lester Candilado | LinkedIn
Equal Opportunity Employer: ECLARO values diversity and does not discriminate based on Race, Color, Religion, Sex, Sexual Orientation, National Origin, Age, Genetic Information, Disability, Protected Veteran Status, or any other legally protected group status, in compliance with all applicable laws.