Masis Staffing Solutions

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is a Senior Data Engineer contract-to-hire position lasting 3-6 months, offering a competitive pay rate. Key skills include SQL, Python, Azure, Databricks, and data pipeline development. Requires 5-9 years of relevant experience and expertise in data architecture.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
544
-
🗓️ - Date
April 29, 2026
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
United States
-
🧠 - Skills detailed
#Deployment #Databricks #API (Application Programming Interface) #Version Control #SQL (Structured Query Language) #Observability #Data Integration #ETL (Extract, Transform, Load) #Jenkins #Documentation #Data Pipeline #Scala #ADF (Azure Data Factory) #Data Strategy #Terraform #Data Engineering #Data Architecture #Data Science #Data Modeling #Batch #Storage #Agile #Schema Design #Data Ingestion #Python #Azure #Debugging #GitHub #Data Quality #Azure Data Factory #Strategy
Role description
This is a Contract-to-Hire position. Conversion will happen in 3-6 months.

Senior Data Engineer

The Senior Data Engineer leads the design and implementation of robust data solutions across multiple domains, driving technical excellence and scalability. This role mentors others, shapes best practices, and influences data architecture. The role is expected to proactively identify opportunities to improve systems, drive reliability, and collaborate with product and business stakeholders to align data strategy with company goals.

Profile Description:
• Design, build, and scale robust, high-performing batch and real-time data pipelines.
• Drive architectural decisions for transformation logic, storage formats, and schema design.
• Lead complex data ingestion efforts and mentor peers on performance optimization and scalability.
• Lead the design and optimization of complex data models and storage architecture, balancing performance, scalability, and usability.
• Partner with stakeholders to translate business requirements into robust data structures.
• Contribute significantly to delivery planning and execution, mentor junior engineers on agile approaches, and ensure timely completion of tasks by managing dependencies and escalating delivery challenges.
• Design and standardize advanced data validation frameworks and testing strategies across platforms.
• Lead root cause analysis of data quality issues and mentor others on quality best practices.
• Partner with stakeholders to define SLAs and quality metrics.
• Lead efforts to automate, monitor, and scale deployment of production-grade data pipelines.
• Design resilient workflows with retry logic, failure handling, and resource optimization.
• Proactively address performance and reliability issues and contribute to runbooks and on-call documentation.
• Lead the creation and maintenance of detailed technical documentation for complex pipelines, data models, and system integrations.
• Establish and enforce documentation and development standards across projects.
• Mentor junior engineers on clear, consistent coding and documentation habits.
• Act as a key technical partner to product, analytics, and data science teams.
• Lead design discussions, communicate complex data trade-offs with clarity, and proactively surface risks and blockers.
• Support collaborative planning and mentor junior team members in effective communication and partnership.

Knowledge & Experience:
• 5-9 years in data engineering, data modeling, and pipeline development
• Expert in SQL and Python for developing and debugging scalable data pipelines
• Deep hands-on experience with Azure and Databricks, including Delta Live Tables and Unity Catalog
• Skilled with data integration/orchestration tools (SnapLogic, Azure Data Factory, Jenkins)
• Strong use of infrastructure-as-code tools such as Terraform to manage deployment pipelines
• Experience designing and optimizing API integrations in pipelines
• Familiar with data quality observability tools such as Soda or similar
• Proficient in version control and CI/CD workflows using GitHub
• Advanced understanding of dimensional modeling and data warehousing concepts
• Comfortable leading efforts in agile environments, with strong ownership and collaboration
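To give a flavor of the "retry logic and failure handling" responsibility listed above, here is a minimal Python sketch of a retry-with-exponential-backoff wrapper for a flaky pipeline step. All names (`retry`, `ingest_batch`) and parameters are hypothetical illustrations, not part of this posting or any specific employer's stack.

```python
import functools
import random
import time

def retry(max_attempts=3, base_delay=0.1):
    """Retry a transiently failing task with exponential backoff (illustrative sketch)."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise  # retries exhausted: surface the failure to the orchestrator
                    # back off before the next attempt; jitter avoids synchronized retries
                    time.sleep(base_delay * 2 ** (attempt - 1) + random.uniform(0, base_delay))
        return wrapper
    return decorator

@retry(max_attempts=3)
def ingest_batch(records):
    # hypothetical ingestion step; a real pipeline would write to storage here
    return len(records)
```

In production this pattern is usually delegated to the orchestrator (e.g. Azure Data Factory or Databricks job retry policies) rather than hand-rolled, but the shape of the logic is the same.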