Matlen Silver

AI/ML/GenAI (Senior Data Engineer)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Tech Lead – AI/ML/GenAI (Senior Data Engineer) on an 18-month W2 contract, paying $60-$90/hour. Requires 8+ years in data science platform development, strong leadership, and expertise in ML pipelines, data governance, and distributed computing. Hybrid location.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
720
-
🗓️ - Date
December 20, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Jersey City, NJ
-
🧠 - Skills detailed
#R #YARN (Yet Another Resource Negotiator) #Pandas #Storage #Data Lineage #Big Data #Spark (Apache Spark) #Data Management #Strategy #Jupyter #Data Engineering #Datasets #Kubernetes #Data Pipeline #ML (Machine Learning) #NumPy #Security #AI (Artificial Intelligence) #Scala #Agile #Libraries #Data Science #Automation #Leadership #Kafka (Apache Kafka) #Data Analysis #Distributed Computing #Python #Monitoring #Batch #Deployment #Java #Data Governance #Metadata #Computer Science #Containers #DevOps
Role description
Tech Lead – AI/ML/GenAI (Senior Data Engineer)
Jersey City, NJ | Dallas, TX | Charlotte, NC
Hybrid: 3 days onsite, 2 days remote
18-Month W2 Contract, $60-$90/hour
The candidate must possess a passion for producing high-quality software and solutions for AI/ML and be ready to jump in, solve complex problems, and write and review code. This role is responsible for providing leadership, technical direction, and oversight to a team as it delivers technology solutions. Key responsibilities include developing solutions and processes for delivering features based on knowledge of design/architectural patterns and Agile/DevOps practices. The role ensures that the systems design and requirements are aligned to achieve the desired business outcomes, and that team practices and coding/quality principles are aligned to achieve the desired technology outcomes. The ideal candidate has built significant experience through multiple software implementations and has developed both depth and breadth across a number of technical competencies.
Required Qualifications
• Bachelor’s or master’s degree in computer science, engineering, or a related field
• 8+ years of experience in platform development, architecture, and strategy for data science, modeling, and advanced analytics
• Experience building end-to-end (E2E) analytics platforms focused on self-service, e.g. for data science, Big Data, and analytics workloads
• Experience building E2E ML pipelines: data prep, model build, training, deployment, scoring, monitoring, and optimization
• Experience working with technical and line-of-business users to gather requirements, write BRDs for building the platform, brainstorm with technical and non-technical audiences, document details with conceptual diagrams, and validate the technical feasibility of capabilities by researching and working with tech teams, architects, and engineers
• Reviews technical designs to ensure they are consistent with defined architecture principles, standards, and best practices; owns technical decisions and guides application developers in creating architectural decisions and artifacts; communicates clearly with the team and stakeholders
• Collaborate with product teams, data analysts, and data scientists to design and build solutions
• Manage next-generation architectural decisions for the advanced analytics platform; create strategy and roadmaps; present to technical and non-technical leaders
• Strong understanding of modern open-source data science platform architecture: storage/compute separation, interactive development workbenches, virtual environments, containers, and toolsets such as Jupyter, Spyder, and VS Code, and how they work with open-source languages and libraries such as Python, R, H2O, scikit-learn, Pandas, and NumPy
• Hands-on experience implementing CI/CD and automation using the Atlassian ecosystem
• Knowledge of metadata management, data lineage, and data governance principles
• Experience designing and building full-stack solutions on distributed computing architectures
• Design, build, and deploy streaming and batch data pipelines capable of processing and storing large datasets quickly and reliably using Kafka, Spark, and YARN
• Good understanding of processing and deployment technologies such as YARN, Kubernetes/containers, and serverless
• Experience working with Java-, Scala-, or Python-based tools and technologies
• Accountable for the availability, stability, scalability, security, and recoverability enabled by the designs
• Support the company’s commitment to protect the integrity and confidentiality of systems and data
Desired Qualifications
• Agile SDLC frameworks
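For candidates mapping their experience to the E2E ML pipeline stages named above (data prep, model build/training, scoring, monitoring), here is a minimal illustrative sketch of those stages as plain Python functions. It is a stdlib-only stand-in, not the team's actual stack; in this role the stages would run on tools like Spark, Kafka, and scikit-learn, and the model here is a toy least-squares fit chosen only to make each stage concrete.

```python
# Sketch of E2E ML pipeline stages: prep -> train -> score -> monitor.
# Stdlib only; real pipelines would use Spark/Kafka/scikit-learn etc.
from statistics import fmean


def prepare(raw):
    """Data prep: drop records with missing values."""
    return [(x, y) for x, y in raw if x is not None and y is not None]


def train(data):
    """Model build/training: fit y = a*x + b by ordinary least squares."""
    xs, ys = zip(*data)
    mx, my = fmean(xs), fmean(ys)
    a = sum((x - mx) * (y - my) for x, y in data) / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx


def score(model, xs):
    """Scoring: apply the trained model to new inputs."""
    a, b = model
    return [a * x + b for x in xs]


def monitor(data, preds):
    """Monitoring: mean absolute error against known labels."""
    return fmean(abs(y - p) for (_, y), p in zip(data, preds))


raw = [(1, 2.0), (2, 4.1), (None, 5.0), (3, 5.9)]
data = prepare(raw)          # 3 clean records remain
model = train(data)          # (slope, intercept)
preds = score(model, [x for x, _ in data])
mae = monitor(data, preds)
```

Each function corresponds to one stage of the pipeline; in practice deployment and optimization (the remaining stages in the bullet) would wrap this flow in CI/CD and retraining loops.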