Sr Python AI/ML Engineer (NYC)

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for a Sr Python AI/ML Engineer in NYC, offering a contract of unknown length at a day rate of $760. Key skills include Python, Django, AWS, and AI/ML integration. Experience with data governance and compliance is required.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
760
-
πŸ—“οΈ - Date discovered
September 5, 2025
🕒 - Project duration
Unknown
-
🏝️ - Location type
Unknown
-
📄 - Contract type
Unknown
-
🔒 - Security clearance
Unknown
-
πŸ“ - Location detailed
New York City Metropolitan Area
-
🧠 - Skills detailed
#Jenkins #AWS EC2 (Amazon Elastic Compute Cloud) #Python #Data Science #Monitoring #SNS (Simple Notification Service) #DevOps #NLP (Natural Language Processing) #Compliance #SQL (Structured Query Language) #Lambda (AWS Lambda) #React #Microservices #Cloud #EC2 #MLflow #S3 (Amazon Simple Storage Service) #PyTorch #Elasticsearch #SQS (Simple Queue Service) #REST (Representational State Transfer) #Django #Kubernetes #Angular #TensorFlow #AWS (Amazon Web Services) #SageMaker #Data Governance #Docker #ML (Machine Learning) #RDS (Amazon Relational Database Service) #AI (Artificial Intelligence)
Role description
We are looking for a Senior Software Engineer to extend our Python/Django REST microservices platform for a modern academic information system. Beyond core development, this engineer will integrate, deploy, and maintain machine-learning models (e.g., recommendation engines, predictive analytics, NLP interfaces) in AWS.

Key Technical Areas
• Core Stack: Python 3.x, Django REST Framework, SQL/RDS, Angular or React front-ends
• Cloud & DevOps: AWS (EC2, S3, Lambda, RDS, Elasticsearch, SQS/SNS, SageMaker), Docker/Kubernetes, Jenkins or CodePipeline
• AI/ML Integration: Collaborate with data scientists to productionize TensorFlow/PyTorch models, build MLOps pipelines (MLflow, Kubeflow), and implement CI/CD for model retraining and monitoring
• Data & Compliance: Design data schemas and pipelines for training/inference; ensure FERPA-compliant data governance and privacy
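The Data & Compliance responsibility implies scrubbing student identifiers from records before they reach training, inference, or logging pipelines. A minimal Python sketch of that idea follows; the field names (`student_id`, `email`, `full_name`) and the regex are illustrative assumptions, not taken from the posting, and a real FERPA review would define the exact identifier list:

```python
import re

# Hypothetical direct-identifier fields; a FERPA review would fix the real list.
DIRECT_IDENTIFIERS = {"student_id", "email", "full_name"}
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def redact_record(record: dict) -> dict:
    """Return a copy of `record` safe for ML pipelines: direct identifiers
    are dropped and stray email addresses in free text are masked."""
    clean = {}
    for key, value in record.items():
        if key in DIRECT_IDENTIFIERS:
            continue  # drop direct identifiers entirely
        if isinstance(value, str):
            value = EMAIL_RE.sub("[REDACTED]", value)  # mask emails in text
        clean[key] = value
    return clean

record = {
    "student_id": "S123456",
    "email": "jane@example.edu",
    "notes": "Contact jane@example.edu about advising.",
    "gpa": 3.7,
}
print(redact_record(record))
```

In practice this kind of filter would sit at the boundary where records leave the system of record (e.g., RDS) and enter a training or feature pipeline.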
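The CI/CD-for-retraining responsibility is often implemented as a promotion gate: a pipeline stage that compares a freshly retrained candidate model's validation metric against the production model's and only then triggers deployment. A minimal sketch, assuming AUC as the metric and an arbitrary improvement threshold (both are illustrative, not from the posting):

```python
def should_promote(candidate_auc: float, production_auc: float,
                   min_gain: float = 0.005) -> bool:
    """Gate step in a retraining pipeline: promote the candidate model
    only if it beats production by at least `min_gain` AUC.
    (Metric choice and threshold are illustrative assumptions.)"""
    return candidate_auc >= production_auc + min_gain

# In a Jenkins or CodePipeline stage, this decision would trigger (or skip)
# the step that updates the serving endpoint, e.g. on SageMaker.
print(should_promote(0.91, 0.90))   # clear improvement over production
print(should_promote(0.901, 0.90))  # gain too small to justify a rollout
```

Tracking the metrics that feed this gate is exactly what tools like MLflow are used for in the stack described above.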