

Jr. GCP Data Engineer – ML Pipelines & Big Data Integration | Alpharetta, GA (W2/Local)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Jr. GCP Data Engineer in Alpharetta, GA, with a contract length of "unknown" and a pay rate of "unknown." It requires 3-5 years of data engineering experience, proficiency in Python and BigQuery, and familiarity with machine learning and GCP tools.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date discovered
June 11, 2025
🕒 - Project duration
Unknown
🏝️ - Location type
On-site
📄 - Contract type
W2 Contractor
🔒 - Security clearance
Unknown
📍 - Location detailed
Alpharetta, GA
🧠 - Skills detailed
#Data Integration #GitHub #Scala #Programming #ETL (Extract, Transform, Load) #Automation #AI (Artificial Intelligence) #Libraries #Hadoop #Impala #Python #Informatica BDM (Big Data Management) #Cloud #Data Management #Deployment #ML (Machine Learning) #API (Application Programming Interface) #Spark (Apache Spark) #GCP (Google Cloud Platform) #Scripting #Computer Science #Bash #Data Engineering #Migration #Pig #PyTorch #BigQuery #Data Science #Kubernetes #Big Data #Datasets #Data Wrangling #TensorFlow #PySpark
Role description
Jr. GCP Data Engineer – ML Pipelines & Big Data Integration
Location: Alpharetta, GA
On-site / F2F
Our client is looking for a Statistical Consultant/Data Engineer to join their world-class Global Identity and Fraud Analytics team. In this role, you will work on a variety of challenging projects across multiple industries, including Financial Services, Telecommunications, eCommerce, Healthcare, Insurance, and Government.
What You'll Do:
- Work with the data science team to migrate analytical data and projects to the GCP environment and ensure a smooth project transition
- Prepare and build data and analytics automation pipelines for self-service machine learning projects: gather data from multiple sources and systems; integrate, consolidate, and cleanse the data; and structure it for use in our client-facing projects
- Design and code analysis scripts that run on GCP using BigQuery, Python, or Scala, leveraging multiple core data sources (a minimal sketch follows this list)
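For illustration only, here is a minimal sketch of the kind of BigQuery analysis script this list describes, assuming the google-cloud-bigquery client library and default application credentials. The project, dataset, table, and column names are hypothetical placeholders, not the client's actual schema.

```python
# Hypothetical example: summarize transaction counts per account in BigQuery.
from google.cloud import bigquery

client = bigquery.Client()  # uses GOOGLE_APPLICATION_CREDENTIALS by default

query = """
    SELECT account_id, COUNT(*) AS txn_count
    FROM `my-project.core_data.transactions`
    WHERE event_date >= @start_date
    GROUP BY account_id
    ORDER BY txn_count DESC
    LIMIT 100
"""
# Parameterized queries keep filter values out of the SQL string itself.
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start_date", "DATE", "2025-01-01"),
    ]
)

for row in client.query(query, job_config=job_config).result():
    print(row.account_id, row.txn_count)
```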
Qualifications: 3-5 years of professional data engineering or data wrangling experience, including:
- Working with Hadoop-based or cloud-based big data management environments
- Bash scripting, or similar experience, for data movement and ETL
- Big data queries in Hive/Impala/Pig/BigQuery (proficiency with the BigQuery API libraries for data-prep automation is a plus)
- Advanced Python programming, including PySpark (Scala is a plus), with strong coding experience; proficiency with Data Studio, Bigtable, and GitHub (Cloud Composer and Dataflow are a plus); see the PySpark sketch after this list
- A basic GCP certification is a plus
- Knowledge of Kubernetes is a plus (or of other GCP-native container-orchestration tools for automating application deployment, scaling, and management)
- Basic knowledge of machine learning (ensemble models, unsupervised models); experience with TensorFlow and PyTorch is a plus
- Basic knowledge of graph mining and graph data models is a plus
- Understand best practices for data management, maintenance, and reporting, and use that knowledge to implement improvements in our solutions
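As a rough illustration of the PySpark data-wrangling skill above, here is a minimal cleansing sketch; the bucket paths, column names, and Spark environment (for example, Dataproc) are assumptions for the example, not details from the posting.

```python
# Hypothetical example: deduplicate and standardize raw account extracts.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("account-cleansing").getOrCreate()

raw = spark.read.option("header", True).csv("gs://my-bucket/raw/accounts/*.csv")
clean = (
    raw.dropDuplicates(["account_id"])                        # remove repeat rows
       .withColumn("email", F.lower(F.trim(F.col("email"))))  # normalize emails
       .filter(F.col("account_id").isNotNull())               # drop unusable rows
)

# Parquet output is a common handoff format for BigQuery loads or ML features.
clean.write.mode("overwrite").parquet("gs://my-bucket/clean/accounts/")
spark.stop()
```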
What You'll Do
• Build automated ML/AI modules, jobs, and data-preparation pipelines by gathering data from multiple sources and systems; integrating, consolidating, and cleansing the data; and structuring data and analytical procedures for use by our clients in our solutions (an illustrative sketch follows this list)
• Perform design, creation, and interpretation of large and highly complex datasets
• Consult with internal and external clients to understand business requirements, then build datasets and implement complex big data solutions (under a senior lead's supervision)
• Work with Technology and D&A teams to review, understand, and interpret business requirements, and design and build missing functionality to support identity and fraud analytics needs (under a senior lead's supervision)
• Work on the end-to-end interpretation, design, creation, and build of large and highly complex analytics capabilities (under a senior lead's supervision)
• Communicate clearly in speech and writing, and collaborate with cross-functional partners
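To make the data-preparation bullet above concrete, here is an illustrative consolidation sketch in pandas; the file names, join key, and label column are invented for the example.

```python
# Hypothetical example: merge two source extracts into one modeling dataset.
import pandas as pd

accounts = pd.read_csv("accounts_extract.csv")         # source system A
fraud_flags = pd.read_csv("fraud_labels_extract.csv")  # source system B

dataset = (
    accounts.merge(fraud_flags, on="account_id", how="left")  # integrate
            .drop_duplicates(subset="account_id")             # consolidate
            .assign(is_fraud=lambda d: d["is_fraud"].fillna(0).astype(int))
)

# Structured output ready for a downstream ML training job.
dataset.to_parquet("training_dataset.parquet", index=False)
```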
Things that would stand out on a resume:
1. Master's degree in Computer Science & Data Science
2. Previous employer: a bank or eCommerce company
I look forward to your response.
Vishnu Singh
Email: vishnu@datumtg.com
Phone: 470-451-0404