

Kastech Software Solutions Group
Big Data Developer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Big Data Developer in Alpharetta, GA, lasting for an unspecified duration. It requires 8+ years of Big Data experience, expertise in GCP services, and proficiency in Java, Python, or Scala.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
February 27, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Alpharetta, GA
-
🧠 - Skills detailed
#Debugging #Python #"ETL (Extract, Transform, Load)" #Security #Data Architecture #Monitoring #Batch #Compliance #Data Quality #BigQuery #Dataflow #Big Data #Storage #Deployment #Scala #IAM (Identity and Access Management) #Kubernetes #Apache Beam #Data Modeling #Data Governance #Data Processing #Docker #GIT #Java #GCP (Google Cloud Platform) #Spark (Apache Spark) #Cloud #Hadoop #Data Pipeline
Role description
Job Title: Big Data Developer
Location: Alpharetta, GA - (Onsite)
Duration:
Independent Candidates only
Job Summary:
We are seeking an experienced Big Data Developer with strong expertise in Google Cloud Platform (GCP), Dataflow, and modern Big Data technologies. The ideal candidate will design, develop, and optimize scalable data pipelines and processing frameworks to support enterprise analytics and data-driven applications.
Key Responsibilities
• Design, build, and maintain scalable Big Data pipelines using GCP services.
• Develop and manage batch and streaming data processing using Dataflow / Apache Beam.
• Work with BigQuery, Cloud Storage, Pub/Sub, and Dataproc for end-to-end data solutions.
• Optimize data processing performance, reliability, and cost efficiency in GCP.
• Collaborate with data architects, analysts, and application teams to deliver data solutions.
• Implement data quality, validation, and monitoring mechanisms.
• Support production deployments, troubleshooting, and performance tuning.
• Follow best practices for security, governance, and compliance in cloud environments.
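To give a flavor of the batch/streaming work described above, here is a minimal sketch of the kind of element-wise parse-and-validate step a Dataflow/Apache Beam pipeline would apply to incoming records. The field names and validation rules are illustrative assumptions, not part of this role's actual codebase.

```python
# Element-wise transform of the kind run inside a Beam Map/DoFn.
# Record format ("user_id,event,amount") is a hypothetical example.
from typing import Optional


def parse_event(line: str) -> Optional[dict]:
    """Parse a CSV event record; return None for malformed rows
    so a downstream Filter can drop them (a basic data-quality gate)."""
    parts = line.strip().split(",")
    if len(parts) != 3:
        return None
    user_id, event, amount = parts
    try:
        return {"user_id": user_id, "event": event, "amount": float(amount)}
    except ValueError:
        return None


# In a real Dataflow job this would be wired up roughly as:
#   lines | beam.Map(parse_event) | beam.Filter(lambda r: r is not None)
```

Keeping the transform a pure function like this makes it unit-testable outside the pipeline, which is standard practice for Beam development.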
Required Skills
• Strong hands-on experience in Big Data development (8+ years).
• Expertise in GCP services – Dataflow, BigQuery, Pub/Sub, Cloud Storage, Dataproc.
• Experience with Apache Beam, Spark, or Hadoop ecosystem.
• Proficiency in Java, Python, or Scala.
• Solid understanding of ETL/ELT, data modeling, and distributed systems.
• Experience with CI/CD, Git, and cloud deployment practices.
• Strong problem-solving and debugging skills.
Preferred Qualifications
• Experience with real-time streaming architectures.
• Knowledge of data governance, security, and IAM in GCP.
• Familiarity with containerization or orchestration tools (Docker, Kubernetes).
• Prior experience working in enterprise or large-scale data environments.





