IPrime Info Solutions Inc.

Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer in Atlanta, GA, requiring 12+ years of experience. The engagement is W2 only. Key skills include SQL, cloud platforms (AWS/Azure/GCP), Python/Scala, and distributed processing frameworks. On-site work is mandatory.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
December 3, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
Atlanta, GA
-
🧠 - Skills detailed
#Data Engineering #Docker #BI (Business Intelligence) #Delta Lake #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #Azure #Azure Data Factory #GCP (Google Cloud Platform) #Kafka (Apache Kafka) #Databricks #Apache Spark #Automation #Data Quality #Data Architecture #Data Science #Python #Synapse #Kubernetes #Dataflow #ML (Machine Learning) #Spark (Apache Spark) #Redshift #S3 (Amazon Simple Storage Service) #Data Warehouse #ADLS (Azure Data Lake Storage) #BigQuery #Scala #Airflow #AWS (Amazon Web Services) #Security #SQL (Structured Query Language) #ADF (Azure Data Factory) #GIT #Compliance #Data Lake #Storage #Data Processing #Data Governance #Batch #Data Lakehouse #Version Control #DevOps #Cloud
Role description
Role: Senior Data Engineer
Experience: 12+ Years
Location: Atlanta, GA
Employment Type: W2 Only

Job Summary:
We are seeking a highly experienced Senior Data Engineer with 12+ years of hands-on expertise in designing, building, and optimizing large-scale data platforms. The ideal candidate will have deep knowledge of modern data engineering, cloud technologies, data warehousing, and distributed processing frameworks. You will play a key role in architecting end-to-end data solutions, building robust pipelines, and ensuring data quality, reliability, and scalability across the organization.

Key Responsibilities:
- Design, build, and maintain scalable, secure, and high-performance ETL/ELT pipelines for batch and real-time data processing.
- Architect and optimize data lake, data warehouse, and analytics platforms on cloud environments (AWS/Azure/GCP).
- Lead the development of data models, schemas, and storage structures to support analytics, BI, and ML use cases.
- Implement and enhance data governance, cataloging, lineage, and quality frameworks.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver solutions.
- Optimize data workflows, query performance, and storage costs across distributed systems.
- Integrate data from various structured and unstructured sources using modern ETL tools and frameworks.
- Mentor junior engineers, review code, and enforce engineering best practices.
- Ensure security, compliance, and reliability of data platforms in production environments.
- Troubleshoot production issues, perform root cause analysis, and implement long-term fixes.

Required Skills & Qualifications:
- 12+ years of experience in data engineering, data architecture, or related fields.
- Strong expertise in SQL, data modelling, and performance tuning.
- Deep hands-on experience with cloud platforms (AWS / Azure / GCP) and managed data services such as:
  - AWS: Redshift, Glue, EMR, S3, Lambda, Kinesis
  - Azure: Data Factory, Synapse, Databricks, ADLS
  - GCP: BigQuery, Dataflow, Composer
- Proficiency in Python or Scala for data engineering workloads.
- Strong background in distributed data processing frameworks: Apache Spark, Kafka, Hive, Airflow, etc.
- Experience with modern data lakehouse architectures (Delta Lake / Iceberg / Hudi).
- Hands-on experience with ETL/ELT tools, workflow orchestration, and automation.
- Strong understanding of CI/CD, DevOps concepts, and version control (Git).
- Experience with Docker/Kubernetes is a plus.