IntraEdge

Data Architect

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for a Data Architect on a 6-month contract, paying "$XX per hour," working remotely out of Phoenix, AZ. It requires 2+ years of AWS experience, strong Python and SQL skills, and experience building knowledge bases and conversational agents.
🌎 - Country
United States
💱 - Currency
$ USD
💰 - Day rate
Unknown
🗓️ - Date
October 21, 2025
🕒 - Duration
6 months
🏝️ - Location
Remote
📄 - Contract
Unknown
🔒 - Security
Unknown
📍 - Location detailed
Phoenix, AZ
🧠 - Skills detailed
#AI (Artificial Intelligence) #Data Storage #AWS (Amazon Web Services) #Deployment #API (Application Programming Interface) #ML (Machine Learning) #Cloud #Automation #SageMaker #Security #Data Engineering #Athena #Monitoring #Storage #Python #DevOps #S3 (Amazon Simple Storage Service) #ETL (Extract, Transform, Load) #Scala #SQL (Structured Query Language) #Redshift #Lambda (AWS Lambda) #Data Pipeline #Data Architecture #Logging #Terraform #Data Lineage
Role description
IntraEdge has an immediate need for a Data Architect in Phoenix, AZ. This role is remote, but we are searching for local Phoenix candidates who can attend meetings as needed. NO C2C FOR THIS POSITION.

Job Requirements:
We are looking for a Data Engineer with strong expertise in AWS, Python, and conversational analytics, with a focus on secure, scalable, and resilient systems. This role will focus on developing knowledge bases and agents that allow business users to interact with data through natural language, while ensuring reliability, scalability, and efficiency across AWS cloud infrastructure.

Key Responsibilities
• Design, build, and optimize data pipelines using AWS.
• Implement security best practices across data and AI pipelines.
• Develop and maintain knowledge bases and agents, integrating structured and unstructured data sources.
• Implement and support machine learning workflows in SageMaker for model training, deployment, and monitoring.
• Leverage AWS Bedrock to build and deploy conversational AI solutions.
• Ensure data lineage and quality control across all conversational AI systems.
• Automate orchestration, monitoring, and logging for data pipelines and conversational agents.
• Build and maintain CI/CD pipelines to enable secure, automated, and reliable deployments.
• Collaborate with analytics, product, and business teams to design solutions that improve decision-making and data democratization.

Required Skills & Qualifications
• 2+ years of hands-on experience with AWS services (S3, Athena, Lambda, Step Functions, Redshift Serverless, SageMaker, Bedrock), including:
• S3 for data storage and management
• Athena for querying data
• Step Functions & Lambda for orchestration and automation
• SageMaker for ML workflows
• AWS Bedrock for LLM-powered conversational agents
• Strong proficiency in Python for data engineering, automation, and API integration.
• Experience building knowledge bases and conversational agents (Conversight or equivalent).
• Strong SQL skills and familiarity with large-scale data environments.
• Solid understanding of ETL design, orchestration, and optimization in cloud platforms.

Preferred Qualifications
• Experience with LLM integration into business workflows.
• Familiarity with lakehouse frameworks (Iceberg, Delta) for scalable analytics.
• Experience with DevOps/Infrastructure-as-Code (Terraform, CloudFormation).
• Background in marketing analytics or customer intelligence use cases.

The sketches below illustrate the kind of Python-on-AWS work this role describes.
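As a rough, non-authoritative illustration of the Athena querying and Python automation called out above, here is a minimal sketch of running an Athena query with boto3 and polling for the result. The region, database name, table, and S3 output location are placeholder assumptions, not details from this posting.

```python
import time

import boto3

# Placeholder names: this posting does not specify a database, table,
# or results bucket, so these are illustrative assumptions.
DATABASE = "analytics_db"
OUTPUT_LOCATION = "s3://example-athena-results/queries/"

athena = boto3.client("athena", region_name="us-west-2")

def run_athena_query(sql: str) -> list[dict]:
    """Start an Athena query, wait for it to finish, and return raw result rows."""
    execution = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": DATABASE},
        ResultConfiguration={"OutputLocation": OUTPUT_LOCATION},
    )
    query_id = execution["QueryExecutionId"]

    # Poll until the query reaches a terminal state.
    while True:
        state = athena.get_query_execution(QueryExecutionId=query_id)[
            "QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query {query_id} ended in state {state}")

    return athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]

if __name__ == "__main__":
    # Hypothetical table and columns, purely for demonstration.
    print(run_athena_query("SELECT customer_id, COUNT(*) FROM events GROUP BY 1 LIMIT 10"))
```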
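The knowledge-base and conversational-agent work could take many forms; one plausible shape, using Bedrock's knowledge-base retrieval API through boto3, is sketched below. The knowledge base ID, model ARN, and example question are hypothetical placeholders, not specifics from the role.

```python
import boto3

# Hypothetical identifiers: a real knowledge base ID and model ARN would come
# from the Bedrock console or IaC, not from this posting.
KNOWLEDGE_BASE_ID = "EXAMPLEKBID"
MODEL_ARN = (
    "arn:aws:bedrock:us-west-2::foundation-model/"
    "anthropic.claude-3-haiku-20240307-v1:0"
)

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-west-2")

def ask(question: str) -> str:
    """Answer a natural-language question grounded in a Bedrock knowledge base."""
    response = agent_runtime.retrieve_and_generate(
        input={"text": question},
        retrieveAndGenerateConfiguration={
            "type": "KNOWLEDGE_BASE",
            "knowledgeBaseConfiguration": {
                "knowledgeBaseId": KNOWLEDGE_BASE_ID,
                "modelArn": MODEL_ARN,
            },
        },
    )
    return response["output"]["text"]

if __name__ == "__main__":
    print(ask("Which marketing campaigns drove the most signups last quarter?"))
```

This is the pattern that lets business users query data in natural language: Bedrock retrieves relevant documents from the knowledge base and has the model compose a grounded answer, rather than the application hand-building prompts.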
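For the orchestration-and-automation side, below is a minimal sketch of a Lambda handler that kicks off a Step Functions state machine run, the kind of glue this role's pipelines would rely on. The state machine ARN, account ID, and event fields are assumptions for illustration only.

```python
import json

import boto3

sfn = boto3.client("stepfunctions", region_name="us-west-2")

# Placeholder ARN; a real pipeline would reference a deployed state machine,
# typically provisioned via Terraform or CloudFormation.
STATE_MACHINE_ARN = "arn:aws:states:us-west-2:123456789012:stateMachine:nightly-etl"

def lambda_handler(event, context):
    """Hypothetical Lambda entry point: start the nightly ETL state machine.

    In a real deployment this might be triggered by an EventBridge schedule
    or by an S3 object-created event when new raw data lands.
    """
    execution = sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps({"source_prefix": event.get("source_prefix", "raw/")}),
    )
    # Returning the execution ARN makes downstream monitoring and logging easy.
    return {"executionArn": execution["executionArn"]}
```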