

AWS/Spark Data & AI Engineer | Remote
Featured Role | Apply direct with Data Freelance Hub
This role is for an AWS/Spark Data & AI Engineer, 100% remote, on a contract of unspecified duration. The pay rate is not provided. Candidates must have extensive experience with AWS services, Apache Spark, Python, and AI/machine learning concepts.
Country: United States
Currency: $ USD
Day rate: Not specified
Date discovered: August 8, 2025
Project duration: Unknown
Location type: Remote
Contract type: Unknown
Security clearance: Unknown
Location detailed: District of Columbia, United States
Skills detailed: #Data Quality #ETL (Extract, Transform, Load) #Data Framework #Lambda (AWS Lambda) #Data Lake #Security #Scala #Data Modeling #Big Data #Databases #Cloud #SQL (Structured Query Language) #Python #Code Reviews #Hadoop #Data Pipeline #Scripting #Data Warehouse #S3 (Amazon Simple Storage Service) #PySpark #Apache Spark #ML (Machine Learning) #Computer Science #Data Engineering #Redshift #Storage #Data Science #NoSQL #Spark (Apache Spark) #PyTorch #AI (Artificial Intelligence) #Data Manipulation #AWS Glue #AWS (Amazon Web Services) #TensorFlow #Data Processing #Libraries
Role description
AWS/Spark Data Engineer
100% remote
US Citizens or Green Card holders only
Job Description:
We are seeking a highly skilled and experienced AWS/Spark Data & AI Engineer with a strong background in Python and artificial intelligence to join our client's team. The ideal candidate will be responsible for designing, developing, and deploying scalable data pipelines and AI solutions on the AWS cloud platform. This role requires a deep understanding of big data technologies and machine learning concepts, and a proven ability to leverage Python to build robust and efficient data and AI applications.
Responsibilities:
• Design, develop, and maintain large-scale, distributed data pipelines using Apache Spark, PySpark, and AWS services such as AWS Glue, EMR, S3, and Redshift.
• Implement ETL (Extract, Transform, Load) processes to ingest, transform, and load data from various sources into data lakes and data warehouses (a minimal PySpark sketch follows this list).
• Develop, train, and deploy machine learning and AI models using Python and relevant libraries (e.g., scikit-learn, TensorFlow, PyTorch).
• Collaborate with data scientists and business stakeholders to understand data requirements and translate them into technical solutions.
• Optimize and fine-tune Spark jobs and other data processing applications for performance, scalability, and cost-efficiency.
• Ensure data quality, integrity, and security across all data processing and storage systems.
• Troubleshoot and resolve issues related to data pipelines, Spark jobs, and AWS infrastructure.
• Stay up to date with the latest trends and technologies in big data, cloud computing, and artificial intelligence.
• Participate in code reviews and contribute to a culture of engineering excellence and best practices.
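As a rough illustration of the kind of ETL pipeline described above, the sketch below reads raw CSV data from S3 with PySpark, applies a simple transformation, and writes partitioned Parquet back to S3. The bucket names, paths, and column names are hypothetical placeholders, not details taken from the role description.

```python
# Minimal PySpark ETL sketch (illustrative only; all S3 paths and column
# names below are hypothetical placeholders).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw CSV data from a hypothetical S3 location
raw = spark.read.option("header", True).csv("s3://example-raw-bucket/orders/")

# Transform: drop incomplete rows, cast types, derive a partition column
cleaned = (
    raw.dropna(subset=["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: write partitioned Parquet to a hypothetical curated zone
(
    cleaned.write
           .mode("overwrite")
           .partitionBy("order_date")
           .parquet("s3://example-curated-bucket/orders/")
)

spark.stop()
```

On EMR or AWS Glue, logic like this would typically run as a scheduled job, with S3 access provided by the platform's connectors.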
Qualifications:
• Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
• Proven professional experience with AWS cloud services, particularly those related to data engineering (S3, Glue, EMR, Redshift, Lambda).
• Extensive experience with Apache Spark and PySpark for big data processing and analytics.
• Advanced proficiency in Python for data manipulation, scripting, and application development.
• Strong understanding of AI/machine learning concepts and algorithms, with practical experience building and deploying ML models (a minimal training-and-persistence sketch follows this list).
• Experience with big data frameworks and technologies (e.g., Hadoop, Hive).
• Solid knowledge of data warehousing concepts, data modeling, and ETL processes.
• Proficiency in SQL and working with relational and NoSQL databases.
• Excellent problem-solving, analytical, and communication skills.
• Ability to work both independently and collaboratively in a fast-paced environment.
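To illustrate the ML qualification above, here is a minimal sketch: it trains a scikit-learn classifier on a bundled toy dataset and persists the model artifact with joblib so it could later be served, for example behind a Lambda or SageMaker endpoint. The dataset, model choice, and output filename are illustrative assumptions, not requirements of the role.

```python
# Minimal model training-and-persistence sketch (illustrative only).
# The dataset and output filename are arbitrary placeholders.
import joblib
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load a small bundled dataset and split it for a quick sanity check
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Train a simple classifier
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.3f}")

# Persist the trained model artifact for later deployment
joblib.dump(model, "model.joblib")
```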