Data Scientist

⭐ - Featured Role | Apply directly with Data Freelance Hub
This is a 6-month, W2-only Data Scientist contract position, fully remote (candidates must reside in Georgia; Atlanta metro preferred). It requires 3+ years in data engineering, strong Python and SQL skills, GCP experience, and familiarity with machine learning concepts.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
April 23, 2025
🕒 - Project duration
6 months
🏝️ - Location type
Remote
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Atlanta Metropolitan Area
🧠 - Skills detailed
#Data Engineering #PySpark #ETL (Extract, Transform, Load) #TensorFlow #GCP (Google Cloud Platform) #BigQuery #AI (Artificial Intelligence) #ML (Machine Learning) #Kubernetes #Python #Scripting #Dataflow #GitHub #Big Data #Data Migration #Migration #Impala #Scala #Deployment #Programming #Data Management #Spark (Apache Spark) #Data Science #Bash #SQL (Structured Query Language) #PyTorch #Data Pipeline #Computer Science #Data Wrangling #Automation #Cloud
Role description

Data Engineer (Contract)

Location: Remote (Candidates MUST reside in Georgia; Atlanta/Alpharetta preferred)

📆 6-Month Contract | 💼 W2 Only | No C2C or 3rd Party Vendors - No Exceptions

About the Opportunity:

We are representing a leading global organization in the financial services and technology space that is seeking a Data Scientist to join its team. This role focuses on supporting machine learning initiatives and migrating data processes to Google Cloud Platform (GCP).

As a key member of the analytics engineering group, you will work on cloud-based automation, data pipeline design, and ML/AI module creation in a high-performance, collaborative environment.

What You’ll Do:

   • Build and optimize automated ML/AI pipelines for data preparation and processing.

   • Support GCP data migration and project transitions in partnership with Data Scientists.

   • Develop Python and BigQuery-based analysis scripts for performance and scalability.

   • Integrate and cleanse data from multiple sources to support identity and fraud solutions.

   • Participate in end-to-end dataset creation, testing, and deployment.

   • Collaborate with cross-functional teams to implement business and technical requirements.

   • Ensure adherence to best practices for data management, maintenance, and reporting.

What We’re Looking For:

   • 3+ years of professional experience as a Data Engineer or in data wrangling roles.

   • Strong Python programming background, including experience with PySpark.

   • Proficiency with SQL and Big Data querying tools (e.g., Hive, Impala, BigQuery).

   • Hands-on experience with cloud platforms (preferably GCP).

   • Familiarity with bash scripting and ETL workflows.

   • Working knowledge of GitHub, Data Studio, Bigtable, and Cloud Composer/Dataflow is a plus.

   • Basic knowledge of machine learning, TensorFlow, PyTorch, or graph mining is highly desirable.

   • Understanding of Kubernetes or container orchestration systems is a strong plus.

   • Strong analytical and communication skills; ability to work independently and collaboratively.

Preferred Qualifications:

   • Bachelor's degree in Computer Science, Engineering, or a related field.

   • Master’s degree in Data Science or Computer Science is a plus.

   • Previous experience in banking, e-commerce, or high-volume data environments.

   • GCP certification is a plus, but not required.

STONE Resource Group is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, sexual orientation, or national origin.