

PTR Global
Artificial Intelligence Engineer
⭐ - Featured Role | Apply directly with Data Freelance Hub
This role is for an Artificial Intelligence Engineer on a contract longer than 6 months, hybrid in New York, with an undisclosed pay rate. It requires 3-4 years in technology audit or AI/ML, strong audit knowledge, and communication skills; certifications such as CISA or CISSP are preferred.
🌎 - Country
United States
💱 - Currency
$ USD
-
💰 - Day rate
Unknown
-
🗓️ - Date
October 1, 2025
🕒 - Duration
More than 6 months
-
🏝️ - Location
Hybrid
-
📄 - Contract
W2 Contractor
-
🔒 - Security
Unknown
-
📍 - Location detailed
New York, NY
-
🧠 - Skills detailed
#Data Lineage #Deployment #ML (Machine Learning) #AI (Artificial Intelligence) #Computer Science #Compliance #Data Science #Data Governance #GDPR (General Data Protection Regulation) #Data Privacy #Security
Role description
Job Title: AI Security and Controls Subject Matter Expert - Full Time Consultant (Internal Audit)
Location: New York, Hybrid (3 days per week onsite)
Client Industry: Investment Banking
Type of Hire: Contract
Job Overview:
The client, a leading investment bank based in New York, is seeking a talented and experienced AI Security and Controls Subject Matter Expert to join its Internal Audit Department (IAD) as a full-time consultant. In this role, you will work as part of the technology audit team to manage and execute risk-based assurance activities focused specifically on the Firm’s use of Generative AI (GenAI) and Artificial Intelligence (AI) systems.
The ideal candidate will have a deep understanding of AI/ML systems, data privacy, information security, and audit principles, with a passion for identifying and mitigating risks within AI models. You will be responsible for developing and executing AI assurance strategies, evaluating risk, and defining the necessary controls to ensure the safe and effective use of AI across the organization.
About Internal Audit (IAD):
The Internal Audit Department (IAD) operates independently and reports directly to the Board Audit Committee. IAD plays a critical role in assisting senior management and the Audit Committee in their oversight and fiduciary duties. The department focuses on providing independent assurance on the effectiveness of the Firm’s internal controls, risk management, and governance processes.
With over 400 employees globally, the department continually strives to improve the Firm’s risk management processes, identify operating risks, and evaluate the adequacy of the Firm’s internal controls. IAD’s work enables the organization to identify vulnerabilities and prioritize resources accordingly.
What You'll Do in the Role:
• Conduct Model Audits: Execute a range of assurance activities to evaluate controls, governance, and risk management of generative AI models used within the organization.
• Model Security & Privacy Reviews: Review and assess privacy controls, data protection, and security measures, including compliance with regulatory standards (e.g., GDPR, CCPA).
• GenAI Model Familiarity: Demonstrate a strong understanding of both current and upcoming Generative AI models to assess their risk, security, and control frameworks.
• Adopt New Audit Tools: Implement and stay updated on new audit tools and techniques relevant to AI/ML systems, focusing on model interpretability, fairness, and robustness.
• Risk Communication: Create clear, concise reports and presentations regarding AI-related risks, including model bias, drift, security vulnerabilities, and their potential business impacts.
• Data-Driven Analysis: Identify, collect, and analyze relevant data to assess model performance, security, and privacy using structured and unstructured data sources.
• Control Testing: Test and evaluate the controls over AI model development, deployment, and lifecycle management, including aspects such as data lineage, model versioning, and access controls.
• Issue Identification: Identify control gaps, raise insightful questions, and report findings with clear recommendations for remediation.
What You'll Bring to the Role:
• Experience: 3-4 years of relevant experience in technology audit, AI/ML, data privacy, or information security.
• Audit Knowledge: Strong understanding of audit principles, tools, and processes, including risk assessments, planning, testing, and reporting, with a focus on AI/ML systems.
• Communication Skills: Ability to articulate complex technical issues in a clear and concise manner, tailored for both technical and non-technical audiences.
• Analytical Skills: A sharp eye for identifying patterns, anomalies, and risks in AI model behavior and data.
• Education: Bachelor’s or Master’s degree in Computer Science, Data Science, Information Security, or related fields (preferred).
• Certifications: CISA, CISSP, or relevant AI/ML certifications (preferred but not required).
• Technical Knowledge:
   • Strong understanding of AI/ML model development, deployment processes, and related frameworks.
   • Expertise in model interpretability, fairness, and robustness.
   • Familiarity with privacy frameworks (e.g., GDPR, CCPA) and security standards (e.g., NIST, ISO 27001/27002).
   • Knowledge of data governance and protection practices.