InterSources Inc

AI Data Engineer

⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an AI Data Engineer in San Francisco, CA (Hybrid) on a 3-6 month contract paying $909/day. It requires 12+ years of data engineering experience; proficiency in Python, SQL, and Scala; and expertise in big data tools and cloud platforms.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
909
-
πŸ—“οΈ - Date
October 23, 2025
🕒 - Duration
3 to 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
Unknown
-
🔒 - Security
Unknown
-
πŸ“ - Location detailed
San Francisco Bay Area
-
🧠 - Skills detailed
#AI (Artificial Intelligence) #REST (Representational State Transfer) #Data Lake #Model Deployment #Airflow #Data Engineering #Data Quality #Programming #Deployment #NLP (Natural Language Processing) #SQL (Structured Query Language) #Azure #Data Management #ETL (Extract, Transform, Load) #Statistics #Data Pipeline #Monitoring #AWS (Amazon Web Services) #Data Science #Kafka (Apache Kafka) #MLflow #Python #REST services #Scala #ML (Machine Learning) #Data Governance #Metadata #Datasets #PyTorch #Spark (Apache Spark) #Hadoop #TensorFlow #Databricks #Cloud #Mathematics #Computer Science #Data Modeling #Big Data #Automation
Role description
Job Title: AI Data Engineer
Location: San Francisco, CA (Hybrid)
Duration: 3-6 Months CTH
Experience: 12+ Years

Job Description:
• Design, build, and maintain large-scale data pipelines and architectures to support AI and machine learning initiatives.
• Collaborate with data scientists, AI engineers, and analytics teams to ensure seamless data flow for model development and deployment.
• Develop and optimize ETL processes to extract, transform, and load data from multiple structured and unstructured sources.
• Build data platforms that enable high-performance AI/ML model training, testing, and monitoring.
• Ensure data quality, integrity, and availability across all AI-driven systems.
• Work with massive datasets to preprocess, cleanse, and structure data for analytics and machine learning applications.
• Integrate cloud-based data services and tools to support large-scale AI workloads.
• Automate data workflows and support the deployment of AI models using MLOps best practices.
• Research and implement the latest advancements in data engineering and AI infrastructure technologies.

Required Skills:
• Minimum 12 years of experience in data engineering with a strong focus on AI/ML data systems.
• Proficiency in programming languages such as Python, SQL, and Scala.
• Strong experience with big data tools and frameworks such as Spark, Hadoop, Hive, and Kafka.
• Hands-on experience with data pipeline orchestration tools like Airflow or Databricks.
• Expertise in cloud platforms such as AWS, Azure, or Google Cloud (preferably with AI/ML services).
• Solid understanding of data modeling, warehousing, and data lake architectures.
• Experience integrating data for machine learning workflows and managing feature stores.
• Familiarity with ML frameworks such as TensorFlow, PyTorch, and Scikit-learn.
• Knowledge of MLOps tools like MLflow, Kubeflow, or DataRobot for production model deployment.
• Strong understanding of APIs, REST services, and real-time data streaming solutions.

Preferred Skills:
• Experience with data governance, metadata management, and lineage tracking.
• Familiarity with generative AI data preparation and LLM data pipelines.
• Exposure to NLP, computer vision, or AI model integration within data systems.
• Experience with CI/CD pipelines for data and AI model automation.
• Excellent analytical, problem-solving, and communication skills.
• Strong background in mathematics, statistics, or computer science fundamentals.

Education:
• Bachelor's degree in Computer Science, Data Engineering, or a related field required.
• Master's degree preferred.