INSPYR Solutions

Lead AWS Data Engineer

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Lead AWS Data Engineer, 100% remote, with a contract length of 6 months. Pay rate is competitive. Requires 5+ years of data engineering experience, strong AWS and Snowflake expertise, and proficiency in Python/Spark.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 14, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
📍 - Location detailed
Dallas, TX
-
🧠 - Skills detailed
#Monitoring #Data Modeling #Infrastructure as Code (IaC) #Data Architecture #Data Ingestion #Data Pipeline #Storage #Data Governance #Scala #ML (Machine Learning) #S3 (Amazon Simple Storage Service) #Security #API (Application Programming Interface) #Cloud #Compliance #AI (Artificial Intelligence) #Data Catalog #Snowflake #Kafka (Apache Kafka) #Airflow #Lambda (AWS Lambda) #AWS Glue #Data Engineering #Logging #PySpark #AWS (Amazon Web Services) #Python #Automation #REST (Representational State Transfer) #Terraform #Agile #Spark (Apache Spark) #Databases #IAM (Identity and Access Management) #REST API #Data Access #ETL (Extract, Transform, Load) #Observability
Role description
Title: Lead AWS Data Engineer
Location: 100% remote
Duration: 6 months, with extension based on performance
Work Requirements: US Citizen, GC Holders, or Authorized to Work in the U.S.

Skillset / Experience:
We are seeking two Senior Data Engineers to join the POP Platform build-out initiative for Q4 2025. These engineers will play a key role in designing, developing, and optimizing cloud-based data pipelines and integrations that support large-scale analytics and data-driven decision-making. The ideal candidates will have strong AWS expertise, hands-on experience with Snowflake, and a proven background in Python/Spark-based data engineering within a global, cross-functional environment.

Key Responsibilities
• Design, build, and maintain scalable ETL/ELT pipelines using Python, PySpark, and AWS services (Glue, Lambda, S3, EMR, Step Functions).
• Develop and optimize data models and warehouse schemas in Snowflake, ensuring high performance and reliability.
• Implement data ingestion frameworks integrating structured, semi-structured, and unstructured data from multiple sources (APIs, files, databases).
• Build and maintain REST APIs for data access and integration, including secure authentication and performance optimization.
• Collaborate with data architects, analysts, and business stakeholders in a global, agile team environment to deliver high-quality data solutions.
• Ensure compliance with AWS security best practices, including IAM roles, encryption, and data governance policies.
• Set up and manage CI/CD pipelines, orchestration, and workflow automation using Airflow, Glue, or similar tools.
• Implement robust monitoring, logging, and alerting to ensure reliability and observability across data pipelines.

Required Skills & Experience
• 5+ years of experience as a Data Engineer or in a similar role, with a strong background in cloud-native data platforms.
• Deep expertise in AWS (networking, compute, storage, IAM, CloudWatch, security, cost optimization).
• Strong proficiency in Snowflake: data modeling, performance tuning, and external integrations.
• Advanced skills in Python and Spark for large-scale data transformation and processing.
• Proven experience with API development (REST, authentication, scaling, and automation).
• Hands-on experience with CI/CD pipelines and orchestration tools (AWS Glue, Airflow, Step Functions, or similar).
• Strong understanding of monitoring and logging frameworks for data pipelines and applications.
• Experience working in a global, agile, cross-functional environment, collaborating with distributed teams.

Preferred Qualifications
• Experience with infrastructure as code (Terraform, CloudFormation).
• Familiarity with modern data governance frameworks and data cataloging tools.
• Exposure to real-time data streaming (Kinesis, Kafka) is a plus.
• Experience integrating AI/ML or analytics platforms on Snowflake or AWS.

Why Join
• Be part of a high-impact, global data platform initiative (POP Platform).
• Opportunity to work with cutting-edge cloud and data technologies in a collaborative environment.
• Contract extensions are possible based on performance and project needs.

Our benefits package includes:
• Comprehensive medical benefits
• Competitive pay
• 401(k) retirement plan
• …and much more!

About INSPYR Solutions
Technology is our focus and quality is our commitment. As a national expert in delivering flexible technology and talent solutions, we strategically align industry and technical expertise with our clients' business objectives and cultural needs. Our solutions are tailored to each client and include a wide variety of professional services, project, and talent solutions.
By always striving for excellence and focusing on the human aspect of our business, we work seamlessly with our talent and clients to match the right solutions to the right opportunities. Learn more about us at inspyrsolutions.com.

INSPYR Solutions provides Equal Employment Opportunities (EEO) to all employees and applicants for employment without regard to race, color, religion, sex, national origin, age, disability, or genetics. In addition to federal law requirements, INSPYR Solutions complies with applicable state and local laws governing nondiscrimination in employment in every location in which the company has facilities.

Information collected and processed through your application with INSPYR Solutions (including any job applications you choose to submit) is subject to INSPYR Solutions' Privacy Policy and INSPYR Solutions' AI and Automated Employment Decision Tool Policy: https://www.inspyrsolutions.com/policies/. By submitting an application, you are consenting to being contacted by INSPYR Solutions through phone, email, or text.