Net2Source Inc.

Data Engineer – SQL/Python (SPECTRA Platform)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer – SQL/Python (SPECTRA Platform) in Menlo Park, CA, from 30/03/2026 to 31/12/2026, at $50/HR. Required skills include strong SQL and Python proficiency, ETL pipeline experience, and dashboarding with Tableau.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
400
-
🗓️ - Date
February 25, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Menlo Park, CA
-
🧠 - Skills detailed
#Datasets #Complex Queries #Data Processing #Tableau #ML (Machine Learning) #SQL (Structured Query Language) #Computer Science #Data Quality #Data Science #Data Integrity #Classification #Data Pipeline #Data Privacy #Scala #Debugging #Automated Testing #Data Modeling #ETL (Extract, Transform, Load) #Automation #Monitoring #Python #Data Engineering
Role description
Job Title: Data Engineer – SQL/Python (SPECTRA Platform)
Location: Menlo Park, CA (Onsite – 5 Days/Week, No Remote/Hybrid)
Duration: 30/03/2026 – 31/12/2026
Pay Rate: $50/HR on W2, All Inclusive – No Benefits

About the Team
Client’s Safe Ads Experiences (SAE) team is building the next-generation data collection platform, SPECTRA (Scalable Platform for Enhanced Data Collection, Targeted Sampling and Rater Assistance). This platform powers human-labeled data collection systems that directly impact responsible advertising, user data privacy, and trust across Client’s advertising ecosystem. This role will contribute to large-scale data pipeline development, quality monitoring, dashboarding, and vendor-supported labeling workflows.

Key Responsibilities
• Drive technical implementation and testing of complex data pipeline features within the SPECTRA platform.
• Design and develop scalable data processing pipelines using SQL and Python.
• Handle large-scale data collection, transformation, and quality monitoring workflows.
• Build and maintain dashboards (Tableau or similar) for:
  • Data quality monitoring
  • Rater performance tracking
  • Platform health metrics
• Develop end-to-end testing and monitoring alerts to ensure:
  • Data integrity
  • System reliability
  • Proactive issue detection
• Debug and troubleshoot complex data flow issues across the platform.
• Collaborate with engineering teams to implement backend data processing logic.
• Partner with Product Data Operations (PDO) teams on:
  • Human labeling workflows
  • Vendor coordination
  • Budget planning
  • Rater performance optimization
• Work with Taxonomists and Data Labeling Analysts to ensure proper data classification and quality standards.
• Support smart sampling and targeted data collection strategies.

Required Skills & Qualifications
• Strong SQL development experience (complex queries, transformations, large datasets in production).
• Proficiency in Python (data processing, automation, OOP-based pipeline development).
• Experience building ETL pipelines, scalable workflows, and data modeling.
• Hands-on experience with Tableau or similar dashboarding tools.
• Strong analytical and debugging skills for complex data issues.
• Experience with data quality monitoring and automated testing frameworks.

Preferred Qualifications
• Bachelor’s degree in Computer Science or related field.
• Machine Learning knowledge (training data, feature engineering, evaluation metrics).
• Experience building and scaling large systems/products.
• Exposure to privacy or advertising-related platforms.
• Experience with human labeling/annotation systems and vendor management.
• Prior experience working with Facebook/Meta tools (nice to have).

Relevant Keywords for Sourcing
Data Engineer, SQL Developer, Python Developer, ETL Developer, Data Analytics, Dashboarding, Tableau, Data Science, Data Pipelines, Data Quality, Machine Learning Data Ops