

NLB Services
Data Engineer
⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Engineer with a contract length of "unknown," offering a pay rate of "unknown." Key skills include Python, AWS, Databricks, and PySpark. Experience building data pipelines and an understanding of distributed systems are required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 8, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Unknown
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Atlanta, GA
-
🧠 - Skills detailed
#AWS (Amazon Web Services) #Python #Lambda (AWS Lambda) #Data Quality #Data Science #Databricks #Scala #PySpark #Spark (Apache Spark) #ETL (Extract, Transform, Load) #Apache Spark #Data Processing #Data Analysis #Data Engineering #EC2 #S3 (Amazon Simple Storage Service) #Data Pipeline
Role description
Key Responsibilities:
• Design, develop, and maintain robust and scalable data pipelines.
• Write clean, efficient, and well-tested Python code for data processing and transformation.
• Work extensively with AWS services to build and optimize data solutions.
• Develop and optimize Spark/Databricks jobs using PySpark (a brief illustrative sketch follows this list).
• Ensure data quality, reliability, and performance across data pipelines.
• Collaborate with data analysts, data scientists, and stakeholders to understand data requirements.
• Troubleshoot and resolve data pipeline and performance issues.
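A minimal PySpark sketch of the kind of pipeline work described above. The bucket paths, column names, and "events" dataset are hypothetical placeholders, not details from this posting.
```python
# Minimal PySpark pipeline sketch: read raw data, clean it, write curated output.
# All paths and column names are hypothetical examples.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("example-events-pipeline").getOrCreate()

# Read raw JSON events from a (hypothetical) S3 landing zone.
raw = spark.read.json("s3://example-bucket/landing/events/")

# Basic cleaning: drop malformed rows, normalize types, derive a partition column,
# and de-duplicate on the event key.
clean = (
    raw.dropna(subset=["event_id", "event_ts"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])
)

# Write curated data back out as partitioned Parquet.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://example-bucket/curated/events/"))
```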
Required Skills & Qualifications:
• Strong hands-on experience in Python development.
• Experience working with AWS (e.g., S3, EC2, Glue, Lambda, EMR); see the short S3 sketch after this list.
• Solid understanding of Databricks or Apache Spark.
• Hands-on experience with PySpark.
• Experience in building and managing end-to-end data pipelines.
• Good understanding of data processing concepts and distributed systems.
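On the AWS side, a small boto3 sketch of the sort of S3 interaction the role might involve, for example checking which raw files have landed before kicking off a downstream job. The bucket and prefix names are hypothetical, and credentials are assumed to come from the standard AWS environment or instance configuration.
```python
# Small boto3 sketch: list objects under a (hypothetical) S3 landing prefix.
import boto3

s3 = boto3.client("s3")

# Inspect the raw files under the landing prefix, e.g. to decide which
# partitions a downstream Glue/EMR job should process.
response = s3.list_objects_v2(Bucket="example-bucket", Prefix="landing/events/")
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"], obj["LastModified"])
```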