Avacend Inc

Cloud Cost Optimization

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Cloud Cost Optimization specialist on contract from 11/17/2025 to 4/25/2026, remote (EST preferred). It requires a degree in Computer Science or Data Science, 2+ years in cloud operations, expertise in AWS, GCP, and Azure, and proficiency in Python, ETL, and ML.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
November 4, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Remote
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
United States
-
🧠 - Skills detailed
#GCP (Google Cloud Platform) #Bash #Spark (Apache Spark) #Oracle #AWS Glue #Apache Airflow #DevOps #Data Pipeline #ETL (Extract, Transform, Load) #Flask #ML (Machine Learning) #Data Storage #PySpark #GitHub #AWS (Amazon Web Services) #Athena #Data Science #Version Control #Storage #Anomaly Detection #Computer Science #Documentation #Cloud #Data Lake #Jira #Snowflake #Libraries #AI (Artificial Intelligence) #Scripting #Public Cloud #Angular #Golang #Programming #TensorFlow #Python #PyTorch #Azure #GIT #Airflow #Agile #Forecasting #Jenkins #Scala #Security #GitLab
Role description
Duration: Potential contract-to-hire
Start/End Dates: 11/17/2025 - 4/25/2026
Location: Remote; EST time zone preferred

What You Will Do:
• Design and implement AI agents to monitor and optimize cloud resources based on findings and recommendations from cloud service providers.
• Develop predictive models for drift detection, cost anomaly detection, and forecasting of public cloud resources and spend.
• Automate operational workflows using machine learning and intelligent scripting.
• Integrate AI-driven insights with cloud service providers such as AWS, GCP, and Azure, and with existing data and tools.
• Conduct anomaly detection for security, cost optimization, and performance analytics (see the cost-anomaly sketch after this description).
• Design, build, and maintain scalable ETL pipelines using AWS Glue and other cloud-native services.
• Utilize AWS Athena for interactive querying of data stored in data lakes (see the Athena sketch below).
• Manage and optimize data storage and processing using the Snowflake cloud data platform.
• Orchestrate complex workflows and data pipelines using Apache Airflow DAGs (see the DAG sketch below).
• Continuously evaluate emerging AI technologies and tools for operational improvements.
• Maintain documentation and best practices for AI/ML integration in cloud systems.

Our Minimum Requirements Include:
• Bachelor's or Master's degree in Computer Science, Data Science, or a related technical field, or equivalent experience.
• Proven experience building and deploying ML models, with at least 2 years focused on cloud operations.
• Solid knowledge of cloud technologies (AWS, GCP, Azure, OCI).
• Experience with Python, PySpark, and ML libraries such as PyTorch, TensorFlow, or scikit-learn.
• Comfort working with streaming data, APIs, and telemetry systems.
• Experience with AWS Glue ETL, AWS Athena, Snowflake, and Apache Airflow DAGs.
• Strong communication and cross-functional collaboration skills.
• Experience with Agile and DevOps operating models, including project-tracking tools (e.g., Jira), Git or other version control systems, and CI/CD systems (e.g., GitLab, GitHub Actions, Jenkins).
• Proficiency in general-purpose programming languages (Python, Golang, Bash) and related development platforms and technologies.

Preferred Qualifications:
• Understanding of the cloud technologies and services of one or more providers, including AWS, GCP, Azure, Oracle, and Alibaba.
• Established record of leading technical initiatives, delivering results, and fostering a supportive work environment.
• Hard-working and dedicated to providing quality support for your customers.
• Full-stack development experience with Angular for the frontend and Flask for the backend.
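The cost anomaly detection responsibility could look like the following minimal Python sketch, which flags days whose spend deviates sharply from recent history. It is an illustration only: the posting does not prescribe a method, and the window, threshold, and synthetic spend figures are assumptions.

```python
import pandas as pd

def flag_cost_anomalies(daily_spend: pd.Series,
                        window: int = 14,
                        z_threshold: float = 3.0) -> pd.Series:
    """Mark days whose spend lies more than z_threshold rolling
    standard deviations from the trailing mean."""
    rolling = daily_spend.rolling(window=window, min_periods=window)
    mean = rolling.mean().shift(1)  # shift so each day is scored against prior history only
    std = rolling.std().shift(1)
    z_scores = (daily_spend - mean) / std
    return z_scores.abs() > z_threshold

# Illustrative usage with synthetic daily spend (USD)
spend = pd.Series(
    [100, 102, 98, 101, 99, 103, 100, 97, 104, 101, 99, 102, 100, 98, 250, 101],
    index=pd.date_range("2025-01-01", periods=16, freq="D"),
)
print(spend[flag_cost_anomalies(spend)])  # flags the 250 USD spike
```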
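For the Athena querying responsibility, a sketch of kicking off an interactive query with boto3 is shown below. The region, database, table, and S3 bucket names are hypothetical placeholders, and the column names follow the AWS Cost and Usage Report convention, which this role may or may not use.

```python
import boto3

# Region, database, table, and bucket names are hypothetical placeholders.
athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString=(
        "SELECT line_item_product_code, SUM(line_item_unblended_cost) AS usd "
        "FROM cur_table GROUP BY line_item_product_code ORDER BY usd DESC"
    ),
    QueryExecutionContext={"Database": "billing_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
# Poll get_query_execution with this ID until the query completes,
# then fetch rows with get_query_results.
print(response["QueryExecutionId"])
```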
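And the Airflow orchestration bullet might translate into a DAG along these lines (Airflow 2.x TaskFlow API; the schedule, task names, and placeholder logic are assumptions for illustration).

```python
from datetime import datetime

from airflow.decorators import dag, task

@dag(
    schedule="@daily",                 # run once per day
    start_date=datetime(2025, 11, 17),
    catchup=False,
    tags=["cloud-cost-optimization"],
)
def daily_cost_pipeline():
    @task
    def extract_spend() -> list[dict]:
        # Placeholder: in practice this might trigger an AWS Glue job
        # or an Athena query like the one sketched above.
        return [{"date": "2025-11-17", "usd": 100.0}]

    @task
    def detect_anomalies(rows: list[dict]) -> None:
        # Placeholder: apply a model such as the z-score sketch above.
        print(f"scored {len(rows)} rows")

    detect_anomalies(extract_spend())

daily_cost_pipeline()
```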