Prestige Staffing

Principal Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Principal Data Engineer on a 6-month contract (with potential to convert to permanent), 100% remote, offering a competitive salary. It requires 7-10 years in data engineering and expertise in Python, AWS services, and ETL/ELT workflows. Experience with GoLang and Infrastructure as Code tools is preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
January 30, 2026
-
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Fixed Term
-
🔒 - Security
Unknown
-
📍 - Location detailed
Atlanta, GA
-
🧠 - Skills detailed
#DataOps #Computer Science #Golang #DevOps #ETL (Extract, Transform, Load) #Lambda (AWS Lambda) #SQL (Structured Query Language) #Python #Infrastructure as Code (IaC) #Terraform #Data Engineering #Security #Deployment #AWS Glue #Data Pipeline #Cloud #AWS (Amazon Web Services) #Data Quality #Data Management #Automated Testing #Scala
Role description
Principal Data Engineer

Pay: Competitive salary commensurate with experience, with potential bonus incentives.
Location: 100% Remote
Contract: 6 months, with potential to convert to permanent

Summary
Seeking an experienced Principal Data Engineer to join a dynamic team working on innovative, high-impact data projects. This role involves designing and implementing scalable data pipelines, optimizing data workflows, and collaborating with cross-functional stakeholders to deliver actionable insights. The position offers the opportunity to work with modern cloud-native technologies in a fast-paced environment, supporting new projects with a focus on data quality and system reliability.

Requirements
• 7-10 years of development experience in data engineering
• Degree in Computer Science or a relevant field, or equivalent experience
• Extensive hands-on experience with Python and designing data pipelines
• Proven ability to work with cloud services such as AWS Glue, Lambda, and Step Functions (a minimal sketch of this kind of work follows this description)
• Expertise in building and maintaining ETL/ELT workflows and integrating data from various sources, including APIs and non-SQL systems
• Strong understanding of data quality practices and CI/CD deployment pipelines
• Excellent problem-solving skills and the ability to work independently in ambiguous environments
• Preferred: experience with GoLang and Infrastructure as Code tools such as Terraform or Pulumi

Responsibilities
• Design, develop, and optimize robust, scalable data pipelines, ensuring high data quality and performance
• Implement DataOps/DevOps practices to streamline data workflows and improve system reliability
• Collaborate with data stakeholders to understand business goals and translate them into technical roadmaps
• Identify opportunities for data-driven improvements and contribute to high-level planning and estimation
• Mentor team members and promote best practices in data engineering
• Troubleshoot and resolve complex data pipeline issues, ensuring consistent, reliable data flows
• Advocate for security, data management, and architecture standards, including automated testing and CI/CD processes
• Work with cross-departmental teams to support long-term data strategies and projects with large or uncertain scopes

Benefits
• Opportunity to work with smart, talented colleagues on interesting projects
• Exposure to modern, cloud-native data technologies
• Flexible, remote work environment with a hybrid model available
• Supportive company culture recognized as a Great Place to Work™
• Potential for professional growth and development through challenging projects and mentorship
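The requirements above name Python alongside AWS Glue, Lambda, and Step Functions for ETL/ELT work. As a purely illustrative sketch of that kind of serverless ETL step (not code from the employer; the endpoint, bucket, and field names are all invented for the example), here is a minimal AWS Lambda handler that extracts records from a hypothetical API, applies a light transform, and loads the result to S3:

```python
"""Illustrative sketch only: a minimal extract-transform-load Lambda step.
All names (SOURCE_URL, DEST_BUCKET, record fields) are hypothetical."""

import json
import os
import urllib.request
from datetime import datetime, timezone

import boto3  # available by default in the AWS Lambda Python runtime

s3 = boto3.client("s3")

# Hypothetical configuration; a real pipeline would have these injected as
# environment variables by an IaC tool such as Terraform.
SOURCE_URL = os.environ.get("SOURCE_URL", "https://example.com/api/orders")
DEST_BUCKET = os.environ.get("DEST_BUCKET", "example-data-lake")


def transform(record: dict) -> dict:
    """Example-only transform: normalize keys and stamp the ingest time."""
    return {
        "order_id": record.get("id"),
        "amount_usd": float(record.get("amount", 0)),
        "ingested_at": datetime.now(timezone.utc).isoformat(),
    }


def handler(event, context):
    """Lambda entry point: extract from the API, transform, load to S3."""
    # Extract: pull a batch of JSON records from the (hypothetical) source API.
    with urllib.request.urlopen(SOURCE_URL, timeout=10) as resp:
        records = json.load(resp)

    # Transform: apply the per-record cleanup defined above.
    cleaned = [transform(r) for r in records]

    # Load: write the batch to a date-partitioned key in the landing bucket.
    key = f"orders/dt={datetime.now(timezone.utc):%Y-%m-%d}/batch.json"
    s3.put_object(
        Bucket=DEST_BUCKET,
        Key=key,
        Body=json.dumps(cleaned).encode("utf-8"),
        ContentType="application/json",
    )
    return {"records_written": len(cleaned), "s3_key": key}
```

In a pipeline of the sort the posting describes, a Step Functions state machine would typically orchestrate steps like this one, retrying failures and fanning out over batches, with the environment variables supplied by Terraform- or Pulumi-managed infrastructure.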