

Aptonet Inc
Senior Data Engineer (AWS & Databricks)
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer (AWS & Databricks) in Metro Atlanta, GA. It’s a long-term contract with an unspecified pay rate. Requires 5+ years of experience; strong Databricks, AWS, Python, and SQL skills; and data pipeline expertise.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
October 10, 2025
🕒 - Duration
More than 6 months
🏝️ - Location
Hybrid
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Atlanta, GA
🧠 - Skills detailed
#Data Science #Azure #S3 (Amazon Simple Storage Service) #Data Warehouse #Data Accuracy #Data Governance #BI (Business Intelligence) #Spark (Apache Spark) #Redshift #Scala #Agile #Python #Automation #Compliance #Fivetran #Cloud #Data Manipulation #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #DevOps #Databricks #GCP (Google Cloud Platform) #Data Quality #Data Lake #PySpark #Data Analysis #Airflow #Monitoring #AWS (Amazon Web Services) #Observability #Storage #Schema Design #Data Engineering #Data Pipeline #Data Processing #Data Integration #Datasets #Apache Spark #Delta Lake #Security #Computer Science #SQL (Structured Query Language) #Data Modeling #dbt (data build tool) #Databases
Role description
Senior Data Engineer (Databricks) – Hybrid, Metro Atlanta
Location: Metro Atlanta, GA (Hybrid – 3 days onsite per week)
Type: Long-term contract / Full-time opportunity
Overview
We are seeking a Senior Data Engineer with strong hands-on experience in Databricks and modern data engineering practices. This role is ideal for an engineer who enjoys building, optimizing, and maintaining scalable data pipelines that enable analytics, business intelligence, and data-driven decision-making across the organization.
The ideal candidate will be highly proficient in data modeling, data pipeline orchestration, ETL/ELT design, and cloud data engineering (preferably AWS), with the ability to work cross-functionally with data analysts, data scientists, and application teams.
Key Responsibilities
• Design & Build Data Pipelines: Architect, implement, and maintain scalable end-to-end data pipelines using Databricks, Spark, and related technologies (a minimal sketch follows this list).
• Data Transformation & Optimization: Develop efficient data processing and transformation workflows to support analytics and reporting use cases.
• Data Integration: Integrate diverse data sources including APIs, databases, and cloud storage into unified datasets.
• Performance Tuning: Optimize Spark jobs, queries, and workflows for efficiency, scalability, and cost-effectiveness.
• Collaboration: Work closely with cross-functional teams (data science, analytics, business units) to design and implement data solutions that align with business goals.
• Data Quality & Validation: Implement robust validation, monitoring, and observability processes to ensure data accuracy, completeness, and reliability.
• Automation & Governance: Contribute to data governance, security, and automation initiatives within the data ecosystem.
• Cloud Environment: Leverage AWS services (e.g., S3, Glue, Lambda, Redshift) to build and deploy data solutions in a cloud-native environment.
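To make the pipeline work above concrete, here is a minimal PySpark sketch of the kind of day-to-day task this role describes: reading raw JSON from S3, applying basic cleanup and validation, and writing a partitioned Delta table. All bucket names, paths, and column names (order_id, order_ts, amount) are illustrative assumptions, not details from this posting; the Delta write assumes a Databricks (or Delta Lake-enabled Spark) runtime.

```python
# Hypothetical sketch: raw JSON lands in S3, is cleaned with PySpark,
# and is written out as a partitioned Delta table. Paths and columns
# are invented for illustration.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Ingest: read one day's raw drop from cloud storage (hypothetical path).
raw = spark.read.json("s3://example-raw-zone/orders/2025/10/10/")

# Transform: dedupe, type, and validate to support downstream analytics.
orders = (
    raw.dropDuplicates(["order_id"])                      # dedupe on the business key
       .withColumn("order_date", F.to_date("order_ts"))   # derive a partition column
       .withColumn("amount", F.col("amount").cast("decimal(12,2)"))
       .filter(F.col("order_id").isNotNull())             # simple validation gate
)

# Load: append into a Delta table partitioned for query pruning
# (Delta Lake libraries are bundled on Databricks).
(orders.write.format("delta")
       .mode("append")
       .partitionBy("order_date")
       .save("s3://example-curated-zone/orders/"))
```

Partitioning on a derived date column, as sketched here, is a common choice because it lets downstream queries prune to the days they need and keeps incremental appends cheap.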
Qualifications
• Bachelor’s or Master’s degree in Computer Science, Information Systems, or a related field.
• 5+ years of experience as a Data Engineer or Senior Data Engineer in enterprise-scale environments.
• Strong expertise in Databricks, Apache Spark, and PySpark for data engineering and analytics.
• Proficiency with Python and SQL for data manipulation, automation, and orchestration.
• Experience designing and maintaining ETL/ELT processes and data pipelines for large datasets.
• Working knowledge of AWS (preferred) or other cloud platforms (Azure, GCP).
• Familiarity with data modeling, schema design, and performance tuning in data lake or data warehouse environments.
• Solid understanding of data governance, security, and compliance principles.
• Excellent communication, analytical, and problem-solving skills.
• Strong teamwork skills with the ability to collaborate across distributed teams.
Nice to Have
• Experience with tools like Fivetran, Prophecy, or Precisely Connect.
• Exposure to Delta Lake, Airflow, or dbt.
• Prior work in Agile or DevOps-oriented environments.
Benefits (with employee contribution):
• Health insurance
• Health savings account
• Dental insurance
• Vision insurance
• Flexible spending accounts
• Life insurance
• Retirement plan
All qualified applicants will receive consideration for employment without regard to age, race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran.