

AWS Engineer
Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS Engineer in Austin, TX, lasting 6+ months at a listed day rate of $560. Key skills include AWS services (Glue, EMR, Redshift, S3, SageMaker), SQL, Python, and PySpark. Requires 5+ years of Data Engineering experience; AWS certifications are preferred.
Country: United States
Currency: $ USD
Day rate: 560
Date discovered: June 17, 2025
Project duration: More than 6 months
Location type: On-site
Contract type: Unknown
Security clearance: Unknown
Location detailed: Texas, United States
Skills detailed: #Cloud #Leadership #Redshift #AWS Glue #EC2 #SageMaker #SQL (Structured Query Language) #Scala #Python #S3 (Amazon Simple Storage Service) #AWS (Amazon Web Services) #Docker #Spark (Apache Spark) #Terraform #Git #Data Science #Data Ingestion #PySpark #Data Lake #ETL (Extract, Transform, Load) #Data Engineering #Jenkins #Data Pipeline #Data Warehouse #DevOps #Jupyter
Role description
Role: AWS Engineer
Location: Austin, TX
Duration: 6+ months
(Multiple Roles, multiple levels)
Summary: We are seeking a highly skilled AWS Engineer to join a cutting-edge data platform team. The ideal candidate will have deep experience in AWS infrastructure, data lake architecture, and large-scale data pipeline development. This role demands hands-on expertise in AWS services such as Glue, EMR, Redshift, S3, and SageMaker, along with strong SQL, Python, and PySpark skills.
Key Responsibilities:
• Architect, develop, and maintain scalable AWS-based data lake and ETL/ELT solutions.
• Leverage AWS Glue, EMR, CloudFormation, Development Endpoints, S3, Redshift, and EC2 to build distributed and secure data platforms.
• Set up and optimize Jupyter/SageMaker notebooks for advanced analytics and data science collaboration.
• Develop robust data pipelines using Spark clusters, ensuring performance, fault tolerance, and maintainability.
• Build connectors to ingest and process data from distributed sources using various integration tools and frameworks.
• Write efficient, production-grade SQL, Python, and PySpark code for data transformation and analysis (a brief PySpark sketch follows this list).
• Lead proof-of-concept (PoC) efforts and scale them into production-ready systems.
• Stay current with emerging data and cloud technologies, offering guidance on how to apply them effectively to solve complex technical and business challenges.
• Collaborate with cross-functional teams, including data scientists, analysts, and product stakeholders.
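To make the day-to-day concrete, the sketch below shows a minimal PySpark job of the kind this role would own: land raw CSV from S3, clean and type it, and publish partitioned Parquet to a curated data-lake zone. It is illustrative only; the bucket names, columns, and schema are assumptions rather than details from the client, and a real AWS Glue job would wrap this in the Glue job boilerplate.

```python
# Hypothetical S3-to-data-lake ETL sketch; paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Ingest raw CSV from the data lake's raw zone on S3.
raw = spark.read.option("header", "true").csv("s3://example-raw-zone/orders/")

# Basic cleansing and typing before exposing the data to analysts.
cleaned = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
)

# Publish partitioned Parquet to the curated zone, where a Glue crawler or
# Redshift Spectrum could pick it up downstream.
(cleaned.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-zone/orders/"))
```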
Required Skills:
• Proven experience setting up and managing AWS infrastructure with CloudFormation, Glue, EMR, Redshift, S3, EC2, and SageMaker.
• Strong knowledge of Data Lake architecture and data ingestion frameworks.
• 5+ years of experience in Data Engineering and Data Warehouse development.
• Advanced proficiency in SQL, Python, and PySpark.
• Experience designing and optimizing complex Spark-based data pipelines on AWS (see the tuning sketch after this list).
• Ability to troubleshoot performance bottlenecks and production issues in large-scale distributed systems.
• Strong leadership in taking PoCs to production through structured engineering practices.
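As a companion to the optimization and troubleshooting requirements above, here is a minimal, hypothetical sketch of common Spark tuning moves on AWS: broadcasting a small dimension table to avoid a shuffle, caching only results that are reused, and controlling output file counts on S3. Dataset names and layouts are assumptions, not the client's.

```python
# Illustrative PySpark tuning patterns; datasets and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("pipeline-tuning").getOrCreate()

orders = spark.read.parquet("s3://example-curated-zone/orders/")        # large fact table
customers = spark.read.parquet("s3://example-curated-zone/customers/")  # small dimension

# Broadcast the small dimension to replace a shuffle-heavy sort-merge join.
enriched = orders.join(F.broadcast(customers), "customer_id", "left")

# Cache only because the enriched frame feeds multiple downstream aggregates.
enriched.cache()

daily_revenue = enriched.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
top_customers = enriched.groupBy("customer_id").agg(F.sum("amount").alias("spend"))

# Coalesce before writing so each output is a handful of files rather than
# thousands of tiny S3 objects.
daily_revenue.coalesce(1).write.mode("overwrite").parquet("s3://example-curated-zone/daily_revenue/")
top_customers.coalesce(8).write.mode("overwrite").parquet("s3://example-curated-zone/top_customers/")
```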
Preferred Qualifications:
• AWS certifications (e.g., AWS Certified Data Analytics – Specialty, Solutions Architect).
• Prior experience at an enterprise-scale client such as Amazon or other FAANG companies.
• Familiarity with DevOps practices and tools like Terraform, Jenkins, Docker, and Git.