Senior Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Senior Data Engineer, a 6-month contract position, offering a pay rate of "$X/hour". Key skills include MuleSoft APIs, FastAPI, AWS services, PostgreSQL, and CI/CD pipelines. Requires 10+ years of data engineering experience and a relevant degree.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
-
πŸ—“οΈ - Date discovered
May 23, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
Atlanta, GA
-
🧠 - Skills detailed
#ETL (Extract, Transform, Load) #Python #PostgreSQL #Storage #GitHub #Security #S3 (Amazon Simple Storage Service) #Databases #AWS (Amazon Web Services) #AWS Glue #Automated Testing #API (Application Programming Interface) #Computer Science #Data Engineering #Data Pipeline #Deployment #Data Security #Big Data #Athena #Data Integration #Database Performance #Scala #Data Storage
Role description
Job Description: As a Sr. Data Engineer, you will play a critical role in designing, developing, and maintaining our data infrastructure. You will work closely with cross-functional teams to ensure seamless data integration, performance optimization, and reliable production support. Your expertise in MuleSoft APIs, FastAPI, Python, and the AWS stack will be essential in driving our data initiatives forward.

Key Responsibilities:
• Design, develop, and maintain scalable data pipelines and ETL processes using AWS Glue, S3, and other AWS services.
• Implement and manage APIs using MuleSoft and FastAPI to facilitate data integration and communication.
• Optimize database performance and ensure efficient data storage and retrieval using PostgreSQL and Athena.
• Collaborate with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
• Coordinate with offshore teams to ensure timely delivery of data projects and maintain effective communication.
• Implement CI/CD pipelines using GitHub for automated testing and deployment of data solutions.
• Provide production support, troubleshoot issues, and ensure high availability and reliability of data systems.
• Perform hands-on coding and development to build robust data solutions.
• Conduct performance tuning and optimization of data processes and queries.

Qualifications:
• Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
• 10+ years of experience in data engineering or a related role.
• Strong proficiency in MuleSoft APIs, FastAPI, and AWS services (Glue, S3, Athena).
• Experience with PostgreSQL and performance tuning of databases.
• Hands-on experience with CI/CD pipelines using GitHub.
• Proven ability to work with cross-functional teams and coordinate with offshore teams.
• Excellent problem-solving skills and attention to detail.
• Strong communication skills and the ability to work in a collaborative environment.
• Experience in production support and troubleshooting data systems.

Soft Skills:
• Ownership: Demonstrates a strong sense of responsibility and takes initiative to drive projects to completion.
• Accountability: Holds oneself accountable for delivering high-quality work and meeting deadlines.
• Great Communication Skills: Communicates effectively with team members, stakeholders, and management, both verbally and in writing.

Preferred Qualifications:
• Experience with other data technologies and tools.
• Knowledge of data security best practices.
• Familiarity with big data technologies and frameworks.