Synergy Technologies

Need AI Engineer with ML/DL Frameworks, Python, Data Engineering & Cloud Exp.: St. Louis, MO (or) Atlanta, GA (Hybrid Position)

⭐ - Featured Role | Apply direct with Data Freelance Hub
This role is for an AI Engineer with expertise in ML/DL frameworks, Python, data engineering, and cloud technologies. It is a hybrid position in St. Louis, MO or Atlanta, GA, requiring a degree in a related field and experience deploying machine learning models.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
πŸ—“οΈ - Date
March 26, 2026
🕒 - Duration
Unknown
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
Atlanta, GA
-
🧠 - Skills detailed
#SageMaker #Data Science #Kubernetes #Programming #GCP (Google Cloud Platform) #Version Control #SQL (Structured Query Language) #Azure #Data Engineering #Scala #Data Pipeline #Model Evaluation #Computer Science #Cloud #Monitoring #NLP (Natural Language Processing) #Datasets #GIT #TensorFlow #Deep Learning #AI (Artificial Intelligence) #PyTorch #Airflow #Batch #ML (Machine Learning) #Python #AWS (Amazon Web Services) #MLflow #Docker
Role description
Job Title: AI Engineer
Location: St. Louis, MO (or) Atlanta, GA (Hybrid Position)

Role Overview:
We're looking for an AI Engineer who loves turning data into intelligent systems that actually work in the real world. You'll design, build, and deploy machine learning and AI solutions that power products, improve decisions, and automate complex tasks. This role blends research thinking with hands-on engineering. If you enjoy experimenting with models but also care about performance, scalability, and clean production code, you'll fit right in.

Key Responsibilities:
• Design, develop, and deploy machine learning and deep learning models
• Build and maintain data pipelines for training and inference
• Work closely with product managers, data scientists, and software engineers to turn business needs into AI solutions
• Fine-tune and optimize models for accuracy, speed, and scalability
• Deploy models into production using cloud or on-prem infrastructure
• Monitor model performance and retrain when needed
• Stay up to date with the latest AI research and evaluate new techniques
• Document systems, experiments, and model behavior clearly

Required Skills and Qualifications:
• Strong programming skills in Python (and familiarity with software engineering best practices)
• Experience with ML/DL frameworks such as TensorFlow, PyTorch, or JAX
• Solid understanding of machine learning fundamentals (supervised, unsupervised, and deep learning)
• Experience with data preprocessing, feature engineering, and model evaluation
• Familiarity with SQL and working with large datasets
• Experience deploying models via APIs, batch jobs, or streaming systems
• Understanding of version control (Git) and collaborative development workflows
• Experience with LLMs, NLP, or computer vision
• Knowledge of MLOps tools such as MLflow, Kubeflow, Airflow, or SageMaker
• Experience with Docker and Kubernetes
• Familiarity with cloud platforms (AWS, GCP, or Azure)
• Background in data engineering or distributed systems
• Experience with model monitoring, drift detection, and retraining pipelines

Education & Experience:
• Bachelor's or Master's degree in Computer Science, AI, Data Science, or a related field, or equivalent hands-on experience
• Experience in building and deploying machine learning models (flexible based on skill level)