Trilyon, Inc.
Data Scientist
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Business & Marketing Data Scientist (II) in New York, NY, for 6+ months at a day rate of $960. Key skills include SQL, Python, data integration, and experience with GCP. A Master's degree (or equivalent practical experience) and 3 years of relevant experience are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
960
-
🗓️ - Date
March 19, 2026
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York City Metropolitan Area
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #Compliance #Agile #Security #Libraries #Looker #Data Science #Data Manipulation #Scala #Consulting #Monitoring #Cloud #Datasets #Documentation #Data Cleaning #Databases #Statistics #Data Integration #Storage #SQL (Structured Query Language) #Python #BigQuery #Data Accuracy #GCP (Google Cloud Platform) #R
Role description
About Company:
Trilyon, Inc. is a trusted partner to some of the world’s leading organizations—including Fortune 500 companies—delivering innovative technology consulting and customized workforce solutions to address complex business challenges.
With a strong presence across North America, LATAM, EMEA, and APAC, we combine global reach with deep local expertise to deliver impactful results. Our agile and scalable approach enables clients to adapt swiftly in today’s fast-changing business landscape.
Job Details:
Title: Business & Marketing Data Scientist (II)
Location: New York, NY (Hybrid)
Duration: 6+ months
Job Description:
Work Location: New York - Hybrid (Tuesday-Thursday in the office; Monday & Friday remote)
Work Schedule: Monday-Friday, normal business hours
Top Responsibilities:
- Pipeline Maintenance & Optimization: Proactively maintain, monitor, and optimize existing SQL pipelines within the team's codebase to reduce failure rates, ensure data accuracy, and improve efficiency.
- Data Preparation & Cleaning: Drive the formatting, restructuring, and validation of complex datasets. Take ownership of data cleaning tasks for research panels (e.g., mobile and living room panels) to accelerate the delivery of insights and ensure data is ready for analysis.
- Dashboard Enhancement: Maintain and enhance SQL-based dashboards (e.g., Looker Studio, internal dashboarding tools) to improve loading times, ensure data freshness, and provide reliable reporting for stakeholders.
- Data Integration & Monitoring: Monitor and validate data transfers from third-party suppliers to ensure accurate ingestion, processing integrity, and compliance with privacy and security standards.
- Documentation & Best Practices: Ensure effective, up-to-date documentation of pipeline architectures, changes, and optimization efforts. Help scale existing solutions by creating repeatable processes and sample code.
- Cross-Functional Collaboration: Work closely with data scientists to understand their data requirements. Communicate technical issues and pipeline updates clearly to internal teams.
Minimum role qualifications:
- Master's degree in a quantitative discipline such as Statistics, Engineering, Sciences, or equivalent practical experience.
- 3 years of experience using analytics to solve product or business problems, coding (e.g., Python, R, SQL), querying databases, or performing statistical analysis; or a relevant PhD.
Preferred/Additional Skills:
- Experience with Google's internal data infrastructure (e.g., Plx, F1, ZugZug, Procella).
- Familiarity with Cloud platforms (specifically GCP components like BigQuery and Cloud Storage).
- Proficiency in Python, particularly for data manipulation or migrating custom libraries into centralized repositories.
- Experience or interest in setting up Cloud Composer instances for ad hoc analyses.
- Proficiency with AI tools is a plus.
Thanks & Regards
Mohammad Fahad Khan
Lead Technical Recruiter
Email - Fahad@trilyonservices.com
LinkedIn: https://www.linkedin.com/in/mohd-fahad-khan/
Trilyon LinkedIn: https://www.linkedin.com/company/trilyon/