Lead OR Sr. Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Lead or Sr. Data Engineer with an unspecified contract length and pay rate, and is remote. Key skills include Snowflake, Kafka, JSON, and AWS. Requires 6+ years of experience and leadership ability.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
September 17, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Boston, MA
-
🧠 - Skills detailed
#Kafka (Apache Kafka) #MongoDB #Migration #Data Modeling #GIT #Agile #Snowflake #Version Control #Data Pipeline #JSON (JavaScript Object Notation) #Data Engineering #SQL (Structured Query Language) #Jira #Python #ETL (Extract, Transform, Load) #AWS (Amazon Web Services) #Leadership
Role description
Lead Data Engineer

Responsibilities:
• Lead data engineering efforts focused on streaming pipelines from MongoDB through Kafka into Snowflake.
• Perform data modeling and transformation of complex JSON structures into standardized Snowflake tables.
• Conduct discovery and analysis for new data sets; identify caveats and align with standard patterns.
• Mentor mid-level data engineers; provide guidance on ticket scope, development practices, and troubleshooting.
• Ensure adherence to agile practices (sprint planning, Jira ticket management, story refinement).
• Validate and test data flows before handoff to downstream teams.

Requirements:
• Strong Snowflake experience, especially stored procedure development.
• Deep knowledge of Kafka and JSON streaming.
• Proficiency with Git (branching, merging, troubleshooting).
• AWS ecosystem familiarity.
• Proven leadership/mentorship ability.
• QA mindset with strong validation/testing skills (Tosca is a plus).
• Experience in enterprise-scale migrations to Snowflake.
• SQL/Python.
• 6 years of experience required; more is preferred.

Nice-to-have:
• Familiarity with boutique tools like UpSolver (internal onboarding provided).
• Tosca.

Soft skills:
• Strong problem-solving and independence.
• Collaborative, team-oriented, effective communicator.

Mid-Level Data Engineer

Responsibilities:
• Build and maintain streaming data pipelines with Kafka, MongoDB, and Snowflake.
• Transform nested JSON into flattened Snowflake tables.
• Participate in agile sprints: analyzing requirements, scoping tickets, developing and testing pipelines.
• Perform self-validation and testing before promoting code.

Requirements:
• Database experience with Snowflake exposure.
• Familiarity with Kafka, JSON, and AWS-based pipelines.
• Hands-on experience with Git for version control.
• Agile team experience (ticket-based work, sprint cycles).
• QA/testing practices with data pipelines.
• SQL/Python.
• 4–6 years of experience.

Nice-to-have:
• Prior exposure to UpSolver or similar boutique ETL tools.
• Healthcare background not required but a plus.
• Tosca for testing.

Soft skills:
• Willingness to learn proprietary tools through onboarding.
• Team-oriented, open to feedback, detail-oriented.
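Both roles center on turning nested JSON into flat, column-shaped records for Snowflake. As a rough illustration of that kind of task, here is a minimal Python sketch; the event shape and key names are hypothetical examples, not part of the actual pipeline.

```python
# Hypothetical sketch: flatten a nested JSON document (as it might arrive
# from MongoDB via Kafka) into a single-level dict whose keys could map to
# columns in a standardized Snowflake table.

def flatten(record: dict, parent_key: str = "", sep: str = "_") -> dict:
    """Recursively flatten nested dicts, joining key paths with `sep`."""
    flat = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            flat.update(flatten(value, new_key, sep))
        else:
            flat[new_key] = value
    return flat

# Hypothetical event as it might land on a Kafka topic
event = {
    "order_id": "A-1001",
    "customer": {"id": 42, "address": {"city": "Boston", "state": "MA"}},
    "total": 99.5,
}

row = flatten(event)
# row: {'order_id': 'A-1001', 'customer_id': 42,
#       'customer_address_city': 'Boston',
#       'customer_address_state': 'MA', 'total': 99.5}
```

In practice this transformation is often done in SQL instead, e.g. with Snowflake's `LATERAL FLATTEN` over a `VARIANT` column, but the idea is the same: nested paths become flat column names.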