Galent

AWS Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AWS Data Engineer in Phoenix, AZ, on a 12+ month contract at a competitive pay rate. It requires 12+ years of experience, expertise in AWS services and big data technologies, and strong programming skills in Python and SQL.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 12, 2025
🕒 - Duration
Unknown
-
🏝️ - Location
On-site
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Phoenix, AZ
-
🧠 - Skills detailed
#Big Data #Data Security #ETL (Extract, Transform, Load) #SQL (Structured Query Language) #Batch #Lambda (AWS Lambda) #Compliance #Data Ingestion #Data Engineering #Data Access #Databases #DynamoDB #AWS (Amazon Web Services) #Complex Queries #GIT #Snowflake #Java #Programming #Agile #Apache Spark #Delta Lake #Computer Science #AWS S3 (Amazon Simple Storage Service) #Kafka (Apache Kafka) #Data Accuracy #Data Science #Data Lake #Deployment #Athena #Security #Graph Databases #JSON (JavaScript Object Notation) #Data Quality #Cloud #Data Pipeline #Data Architecture #Data Warehouse #DevOps #Kubernetes #Data Manipulation #Redshift #Scala #Version Control #Web Services #Jenkins #Python #Docker #Hadoop #Schema Design #AWS EMR (Amazon Elastic MapReduce) #AWS Glue #Data Modeling
Role description
Job Title: AWS Data Engineer
Location: Phoenix, AZ (Day 1 onsite, 5 days a week)
Experience: 12+ years

An AWS Data Engineer is responsible for designing, building, and maintaining robust, scalable data architectures on Amazon Web Services (AWS). They play a vital role in transforming raw data into actionable insights, enabling data-driven decision-making within the organization.

Key responsibilities
• Lead the design and development of robust ETL/ELT data pipelines, ensuring efficient ingestion, processing, and transformation of data from diverse sources into AWS data warehouses and data lakes. This includes batch and streaming solutions and handling data formats such as JSON, CSV, Parquet, and Avro (see the sketch after this list).
• Architect, build, and optimize scalable data architectures on AWS, including data lakes (e.g., S3, Delta Lake, Iceberg) and data warehouses (e.g., Redshift, Snowflake), ensuring optimal performance and data accessibility.
• Collaborate closely with data scientists, analysts, and other stakeholders to understand data requirements, design appropriate data models and schemas, and deliver tailored data solutions that enable data-driven decision-making.
• Implement advanced data quality and governance practices, ensuring data accuracy, consistency, and compliance with relevant regulations.
• Optimize data retrieval and develop dashboards and reports using various tools, leveraging a deep understanding of data warehousing and analytics concepts.
• Mentor and guide junior data engineers, fostering a culture of technical excellence and continuous improvement within the team.
• Proactively identify and resolve operational issues, troubleshoot complex data pipeline failures, and implement recommendations for system improvements.
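For illustration only (not part of the posting), here is a minimal PySpark sketch of the kind of batch pipeline the first responsibility describes: raw JSON in S3 is normalized and written back as partitioned Parquet. All bucket names, paths, and column names are hypothetical placeholders.

```python
# Minimal batch ETL sketch: raw JSON in S3 -> partitioned Parquet.
# All bucket names, paths, and columns below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-batch-etl")  # hypothetical job name
    .getOrCreate()
)

# Ingest: raw JSON landed by an upstream producer (assumed layout).
raw = spark.read.json("s3://example-raw-bucket/orders/")

# Transform: cast the event timestamp, derive a partition column,
# and drop records missing a primary key.
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .filter(F.col("order_id").isNotNull())
)

# Load: columnar, date-partitioned output that Athena or Redshift
# Spectrum can query directly.
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-curated-bucket/orders/"))
```

The same skeleton extends to the streaming side of the role by swapping spark.read for spark.readStream with a Kafka or Kinesis connector.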
Required experience and skills
• Hands-on experience as a Data Engineer with a strong focus on AWS cloud services for data solutions.
• Expertise in designing, developing, optimizing, and troubleshooting complex data pipelines using AWS services such as Glue, Lambda, EMR, S3, Redshift, Athena, Kinesis, Step Functions, DynamoDB, and Lake Formation.
• Proficiency in Python (including PySpark) and SQL (including advanced SQL, PL/SQL, and query tuning), with the ability to write and optimize complex queries and scripts for data manipulation and transformation (see the sketch at the end of this description).
• Experience with big data technologies such as Apache Spark, Hadoop, and Kafka, and potentially with real-time streaming platforms such as Kinesis or Flink.
• In-depth knowledge of data modeling techniques, including relational and dimensional models (star and snowflake schemas) and potentially graph databases, as well as schema design for data warehouses and data lakes.
• Experience with containerization technologies such as Docker and Kubernetes for scalable deployments.
• Strong understanding of data security and compliance principles and best practices.
• Familiarity with DevOps practices for managing data infrastructure and CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, or others.
• Demonstrated experience supporting ML/AI projects, including deploying pipelines for feature engineering and model inference.
• Excellent problem-solving, analytical, and communication skills, with the ability to collaborate effectively with cross-functional teams and stakeholders.
• Experience with version control systems such as Git.

Educational background and certifications
• Bachelor's degree in Computer Science, Information Systems, or a related field is typically required or preferred; a Master's degree is preferred.
• AWS certifications such as the AWS Certified Data Analytics - Specialty or AWS Certified Solutions Architect - Associate can further demonstrate expertise.

Desired soft skills
• Strong communication and collaboration skills for interacting with various stakeholders.
• Ability to mentor and support junior team members.
• Strong analytical and problem-solving skills, with a focus on data quality and operational efficiency.
• Ability to adapt and thrive in a fast-paced, agile environment.

Skills
Mandatory Skills: Apache Spark, Java, Python, AWS EMR, AWS Glue, AWS Lambda, AWS S3, Scala, Snowflake, ANSI-SQL, AWS Step Functions, Python for Data
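As an illustrative sketch of the "advanced SQL" bar in the requirements above, here is a window-function query that keeps only the latest version of each record, expressed through PySpark's spark.sql. The table, columns, and output path are hypothetical.

```python
# Dedupe sketch: keep only the most recent record per order_id using
# ROW_NUMBER(). Table, columns, and output path are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders-dedupe").getOrCreate()

latest = spark.sql("""
    SELECT order_id, customer_id, amount, order_ts
    FROM (
        SELECT o.*,
               ROW_NUMBER() OVER (
                   PARTITION BY order_id
                   ORDER BY order_ts DESC
               ) AS rn
        FROM curated.orders o  -- hypothetical catalog table
    ) ranked
    WHERE rn = 1
""")

# Persist the deduplicated snapshot for downstream consumers.
latest.write.mode("overwrite").parquet(
    "s3://example-curated-bucket/orders_latest/"
)
```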