SAN R&D Business Solutions

Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Data Engineer in Alpharetta, GA, on a contract (C2C) basis. It requires 9+ years of experience, strong skills in Python, SQL, and GCP, and expertise in building scalable data pipelines. Only US citizens and Green Card holders local to Georgia will be considered.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
January 10, 2026
🕒 - Duration
Unknown
🏝️ - Location
On-site
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Alpharetta, GA
🧠 - Skills detailed
#Apache Spark #Data Accuracy #Apache Beam #Data Storage #Data Modeling #Cloud #Data Science #Computer Science #Python #Datasets #Data Security #Data Pipeline #Version Control #Java #Monitoring #Data Quality #Azure #Data Engineering #Data Processing #BigQuery #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Data Governance #AWS (Amazon Web Services) #Automation #Data Architecture #Dataflow #GCP (Google Cloud Platform) #Scala #SQL Queries #Git
Role description
Job Title: Data Engineer
Location: Alpharetta, GA (Onsite)
Interview: In-Person (Mandatory)
Employment Type: Contract (C2C)
Visa Requirement: USC / GC holders only
Locality: Only candidates local to Georgia will be considered

About the Role:
We are seeking an experienced Data Engineer to join our team. The ideal candidate will have strong expertise in Python, SQL, and Google Cloud Platform (GCP), with hands-on experience building and maintaining scalable data pipelines. This role involves working with large datasets, optimizing data workflows, and collaborating closely with data scientists, analysts, and engineering teams to deliver reliable, high-quality data solutions.

Key Responsibilities:
• Design, build, and maintain scalable, reliable data pipelines to process large volumes of data
• Develop, optimize, and maintain ETL/ELT workflows using Python, SQL, and cloud-native tools
• Write and optimize complex SQL queries for data transformation and analysis
• Work extensively with BigQuery for data storage, processing, and performance optimization
• Develop and maintain data pipelines using Apache Spark, Apache Beam, and GCP Dataflow (see the sketch after this section)
• Implement data validation, quality checks, and monitoring to ensure data accuracy and integrity
• Troubleshoot and resolve pipeline failures, performance issues, and data inconsistencies
• Collaborate with data scientists, analysts, and business stakeholders to understand data requirements
• Apply best practices in data engineering, data modeling, and cloud architecture
• Stay current with emerging data engineering tools and technologies

Required Skills & Qualifications:
• Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience)
• 9+ years of experience as a Data Engineer
• Strong proficiency in Python for data processing and automation
• Advanced SQL skills, including complex query writing and performance optimization
• Hands-on experience with Google Cloud Platform (GCP)
• Strong experience with BigQuery, Apache Spark, Apache Beam, and GCP Dataflow
• Working knowledge of Java for data processing or pipeline development
• Solid understanding of data warehousing concepts and data modeling
• Strong problem-solving skills and attention to detail
• Excellent communication skills and the ability to work in a team-oriented environment

Preferred Skills:
• Experience with data governance, data quality, and data security best practices
• Familiarity with CI/CD pipelines and version control systems (e.g., Git)
• Experience working in cloud-native data architectures
• Exposure to additional cloud platforms (AWS or Azure)
• Experience supporting large-scale, enterprise data environments
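Illustrative example (not part of the posting): for candidates gauging fit, the stack named above (Python, Apache Beam, GCP Dataflow, BigQuery, data-quality checks) typically comes together in a pipeline along the lines of this minimal sketch. All resource names (project, bucket, table) are hypothetical placeholders, and the parsing and validation logic is deliberately simplified.

```python
# Minimal Apache Beam pipeline sketch targeting GCP Dataflow (illustrative only).
# Every resource name below (project, bucket, table) is a hypothetical placeholder.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(line):
    """Parse one CSV line of the form "user_id,amount" into a row dict (no error handling)."""
    user_id, amount = line.split(",")
    return {"user_id": user_id, "amount": float(amount)}


def run():
    opts = PipelineOptions(
        runner="DataflowRunner",                  # switch to "DirectRunner" to test locally
        project="example-gcp-project",            # hypothetical project id
        region="us-east1",
        temp_location="gs://example-bucket/tmp",  # hypothetical staging bucket
    )
    with beam.Pipeline(options=opts) as p:
        (
            p
            | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.csv")
            | "Parse" >> beam.Map(parse_event)
            # A basic data-quality gate of the kind the responsibilities describe
            | "Validate" >> beam.Filter(lambda row: row["amount"] >= 0)
            | "Write" >> beam.io.WriteToBigQuery(
                "example-gcp-project:analytics.events",  # hypothetical table
                schema="user_id:STRING,amount:FLOAT",
                create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            )
        )


if __name__ == "__main__":
    run()
```

The same code runs unchanged on the Dataflow service or locally under the DirectRunner, which is part of why Beam is a common choice for the kind of managed, scalable pipeline work this role centers on.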