

Data Scientist
Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Scientist in Boise, ID (hybrid) with a long-term contract. Key skills include SQL, Python, and data orchestration. Requires experience in data quality, migration analysis, and ML model evaluation.
Country: United States
Currency: $ USD
Day rate: -
Date discovered: June 26, 2025
Project duration: Unknown
Location type: Hybrid
Contract type: Unknown
Security clearance: Unknown
Location detailed: Boise, ID
Skills detailed: #Data Cleaning #Quality Assurance #Strategy #Data Pipeline #Scripting #Data Integrity #Forecasting #Data Accuracy #Data Strategy #SQL (Structured Query Language) #Databases #Monitoring #Spark (Apache Spark) #Datasets #Trend Analysis #Data Reconciliation #Model Evaluation #Data Science #Migration #Data Orchestration #Pandas #Data Quality #Classification #Python #ML (Machine Learning) #ETL (Extract, Transform, Load)
Role description
Role: Data Scientist
Location: Boise, ID - hybrid
Duration: Long Term
Key Responsibilities: This role is critical for ensuring data integrity, driving data-driven insights for the migration, and supporting the enhancement and analysis of our Random Forest classification models.
1. Data Strategy & Quality Assurance
Data Acquisition & Integration: Lead the identification, collection, and extraction of essential data from diverse source systems, including legacy platforms, new payment systems, internal databases, and other relevant sources. This involves hands-on work with SQL, Python, and other scripting languages.
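As a rough sketch of what this extraction work can look like in Python (the database, table, and column names below are hypothetical, not taken from the posting):

```python
import sqlite3

import pandas as pd

# Stand-in for a legacy platform's database; in practice this would be an
# enterprise source reached through SQLAlchemy or an ODBC driver.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE legacy_transactions
                (txn_id INTEGER, customer_id INTEGER, txn_code TEXT,
                 amount REAL, posted_date TEXT)""")
conn.execute(
    "INSERT INTO legacy_transactions VALUES (1, 101, 'ACH', 250.0, '2025-03-01')"
)

# Filter in SQL so only the fields and rows needed for migration analysis
# ever reach Python.
query = """
    SELECT txn_id, customer_id, txn_code, amount, posted_date
    FROM legacy_transactions
    WHERE posted_date >= :start_date
"""
df = pd.read_sql(query, conn, params={"start_date": "2025-01-01"})
conn.close()
print(f"extracted {len(df)} legacy transactions")
```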
Data Quality & Governance: Implement rigorous data cleaning and validation processes to ensure accuracy, completeness, and consistency across all datasets. This includes identifying and resolving discrepancies, managing missing values, and validating data against defined business rules.
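A minimal illustration of rule-based validation in pandas, assuming hypothetical payment fields and made-up business rules:

```python
import pandas as pd

def validate_transactions(df: pd.DataFrame) -> pd.DataFrame:
    # One boolean column per business rule; real rules would come from the
    # migration's data governance requirements.
    issues = pd.DataFrame(index=df.index)
    issues["missing_customer"] = df["customer_id"].isna()
    issues["nonpositive_amount"] = df["amount"] <= 0
    issues["bad_txn_code"] = ~df["txn_code"].isin({"ACH", "WIRE", "CARD"})
    return issues

df = pd.DataFrame({
    "customer_id": [101, None, 103],
    "amount": [250.0, 75.5, -10.0],
    "txn_code": ["ACH", "WIRE", "CHECK"],
})
issues = validate_transactions(df)

# Rows failing any rule get routed to remediation rather than silently
# dropped, preserving completeness for reconciliation later.
flagged = df[issues.any(axis=1)]
print(flagged)
```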
New Data Element Identification: Collaborate closely with engineers and data teams to identify and seamlessly integrate new data elements that become available after the payments migration. These new elements (e.g., new transaction codes, updated customer attributes, new payment channel identifiers) will be crucial for improving model performance and enriching analytical insights.
Risk Management: Proactively identify, assess, and propose mitigation strategies for potential data-related risks throughout the migration process (e.g., data loss, corruption, inconsistencies, privacy concerns).
2. Automated Analytics & ML Pipeline Development
Data Preparation Pipelines: Design, develop, and maintain robust, automated data pipelines for efficient ingestion, cleaning, transformation, and preparation of data for model retraining and ongoing analysis. This leverages expertise in scripting (Spark/Python/Pandas) and data orchestration tools.
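A compact pandas sketch of such a preparation pipeline; at production scale the same stages would map onto Spark transformations, with an orchestrator scheduling each step. All function, path, and column names are illustrative:

```python
import pandas as pd

def ingest(path: str) -> pd.DataFrame:
    # Ingestion stage: load a raw extract with typed dates.
    return pd.read_csv(path, parse_dates=["posted_date"])

def clean(df: pd.DataFrame) -> pd.DataFrame:
    # Cleaning stage: drop incomplete rows, invalid amounts, duplicates.
    return (df.dropna(subset=["customer_id", "amount"])
              .query("amount > 0")
              .drop_duplicates(subset=["txn_id"]))

def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Transformation stage: derive model features (names are made up).
    df = df.assign(txn_month=df["posted_date"].dt.to_period("M"))
    return pd.get_dummies(df, columns=["txn_code"])

def prepare(path: str) -> pd.DataFrame:
    # End-to-end run, e.g. prepare("extracts/legacy_txns.csv") before
    # each retraining cycle.
    return transform(clean(ingest(path)))
```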
Model Evaluation Pipelines: Develop and automate comprehensive analysis pipelines for evaluating the Random Forest classification models. This includes generating detailed performance reports (e.g., precision, recall, F1-score, AUC, confusion matrices) that are segmented by critical customer migration characteristics.
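A sketch of what a segmented evaluation step might look like with scikit-learn, using synthetic stand-in data; the segment labels here are placeholders for whatever customer migration characteristics the real pipeline tracks:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (classification_report, confusion_matrix,
                             roc_auc_score)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic features and labels standing in for prepared migration data.
X = pd.DataFrame(rng.normal(size=(1000, 4)), columns=list("abcd"))
y = (X["a"] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
segment = rng.choice(["wave_1", "wave_2"], size=1000)

X_tr, X_te, y_tr, y_te, _, seg_te = train_test_split(
    X, y, segment, test_size=0.3, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Per-segment report: precision/recall/F1, AUC, and the confusion matrix.
for seg in np.unique(seg_te):
    mask = seg_te == seg
    preds = model.predict(X_te[mask])
    proba = model.predict_proba(X_te[mask])[:, 1]
    print(f"--- {seg} ---")
    print(classification_report(y_te[mask], preds))
    print("AUC:", roc_auc_score(y_te[mask], proba))
    print(confusion_matrix(y_te[mask], preds))
```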
3. Migration Performance & Model Analysis
Transaction & Trend Analysis: Conduct in-depth analysis of historical and real-time payment transaction data to understand patterns, volumes, and trends. This analysis will support forecasting, identify critical data elements for migration, and uncover potential data quality issues.
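For example, a monthly roll-up of volume and value by payment channel (hypothetical data below) is a typical starting point for this kind of trend analysis:

```python
import pandas as pd

# Hypothetical transaction extract; real data would come from the
# preparation pipelines above.
txns = pd.DataFrame({
    "posted_date": pd.date_range("2025-01-01", periods=120, freq="D"),
    "amount": [float(i % 50 + 1) for i in range(120)],
    "channel": ["ach", "card", "wire"] * 40,
})

# Monthly count and total value per channel: the roll-up used to size
# migration waves and flag anomalous drops in activity after cutover.
monthly = (txns.groupby(["channel",
                         pd.Grouper(key="posted_date", freq="ME")])["amount"]
               .agg(volume="count", value="sum"))
print(monthly)
```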
Performance Monitoring & KPIs: Define, track, and analyze key performance indicators (KPIs) crucial for the payments migration's success. This includes metrics such as migration completion rates, overall data accuracy, transaction success rates on the new system, and post-migration system performance.
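A toy illustration of how two such KPIs might be computed from a migration tracking extract (all fields hypothetical):

```python
import pandas as pd

# One row per account attempted during a migration wave.
status = pd.DataFrame({
    "account_id": range(1, 6),
    "migrated": [True, True, True, False, True],
    "post_txn_attempts": [12, 8, 15, 0, 9],
    "post_txn_failures": [0, 1, 0, 0, 2],
})

# Completion rate across accounts and transaction success rate on the
# new system, compared against agreed thresholds in the real report.
completion_rate = status["migrated"].mean()
success_rate = 1 - (status["post_txn_failures"].sum()
                    / status["post_txn_attempts"].sum())
print(f"migration completion: {completion_rate:.1%}")
print(f"new-system txn success: {success_rate:.1%}")
```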
Data Reconciliation & Validation: Develop and execute strategies for data reconciliation and validation. This involves creating and utilizing detailed reconciliation reports to compare data before and after migration, ensuring all payment data is accurately transferred, accounted for, and aligned with defined success thresholds.
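A minimal pandas sketch of before/after reconciliation, assuming both extracts share a transaction key; a real report would cover many more fields and tolerance rules:

```python
import pandas as pd

# Hypothetical pre- and post-migration extracts keyed by transaction id.
before = pd.DataFrame({"txn_id": [1, 2, 3], "amount": [10.0, 20.0, 30.0]})
after = pd.DataFrame({"txn_id": [1, 2, 4], "amount": [10.0, 25.0, 40.0]})

recon = before.merge(after, on="txn_id", how="outer",
                     suffixes=("_before", "_after"), indicator=True)

# Three failure modes: records dropped, records appearing only on the new
# system, and records whose values changed in flight.
dropped = recon[recon["_merge"] == "left_only"]
added = recon[recon["_merge"] == "right_only"]
changed = recon[(recon["_merge"] == "both")
                & (recon["amount_before"] != recon["amount_after"])]

match_rate = 1 - (len(dropped) + len(changed)) / len(before)
print(f"match rate: {match_rate:.1%}")  # compare against success threshold
```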
ML Model Insights: Utilize insights from the model to explain observed payment behaviors and customer migration patterns to business stakeholders, fostering a deeper understanding of the factors driving success or challenges.
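One common way to ground such explanations is a feature-importance ranking; this sketch uses synthetic data and scikit-learn's permutation importance, with column names that are illustrative only:

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)

# Synthetic stand-in features for customer migration behavior.
X = pd.DataFrame(rng.normal(size=(500, 3)),
                 columns=["txn_volume", "tenure_months", "channel_mix"])
y = (X["txn_volume"] > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt
# accuracy? Large drops identify the factors worth surfacing to
# business stakeholders.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
ranking = pd.Series(result.importances_mean, index=X.columns)
print(ranking.sort_values(ascending=False))
```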
Uttam Kumar
+1 470-634-0313, Ext. 108
uttam@datumtg.com