AI Security and Controls Subject Matter Expert to define, design, and execute the AI assurance strategy, risk and control matrix, and related guidance.
We're seeking someone to join our team as a full-time Consultant in the technology audit team, within Internal Audit, to manage and execute risk-based assurance activities covering the Firm's use of GenAI and Artificial Intelligence more broadly.
Internal Audit
The Internal Audit Department (IAD) reports directly to the Board Audit Committee (BAC) and is an objective and independent function within the risk management framework. IAD assists senior management and the BAC in the effective discharge of their legal, fiduciary, and oversight responsibilities.
What you'll do in the role:
- Conduct Model Audits: Execute a wide range of assurance activities focused on the controls, governance, and risk management of generative AI models used within the organization.
- Model Security & Privacy Reviews: Review and assess privacy controls, data protection measures, and security protocols applied to AI models, including data handling, access management, and compliance with regulatory standards.
- GenAI Model Familiarity: Maintain a good understanding of current and upcoming GenAI models.
- Adopt New Audit Tools: Stay current with and implement new audit tools and techniques relevant to AI/ML systems, including model interpretability, fairness, and robustness assessment tools.
- Risk Communication: Develop clear and concise messages regarding risks and business impact related to AI models, including model bias, drift, and security vulnerabilities.
- Data-Driven Analysis: Identify, collect, and analyze data relevant to model performance, privacy, and security, leveraging both structured and unstructured sources.
- Control Testing: Test controls over AI model development, deployment, monitoring, and lifecycle management, including data lineage, model versioning, and access controls.
- Issue Identification: Identify control gaps and open risks, raise insightful questions to determine root causes and business impact, and draw appropriate conclusions.
What you'll bring to the role:
- Experience: At least 3-4 years of relevant experience in technology audit, AI/ML, data privacy, or information security.
- Audit Knowledge: Understanding of audit principles, tools, and processes (risk assessments, planning, testing, reporting, and continuous monitoring), with a focus on AI/ML systems.
- Communication: Ability to communicate clearly and concisely, adapting messages for technical and non-technical audiences.
- Analytical Skills: Ability to identify patterns, anomalies, and risks in model behavior and data.
- Education: Master's or Bachelor's degree (Computer Science, Data Science, Information Security, or related field preferred).
- Certifications: CISA, CISSP, or relevant AI/ML certifications (preferred, not required).
- Technical Knowledge: Strong understanding of AI/ML model development and deployment processes; model interpretability, fairness, and robustness concepts; privacy frameworks (e.g., GDPR, CCPA); security standards (e.g., NIST, ISO 27001/27002); and data governance and protection practices.
- Location: NYC (Hybrid: 3 days onsite, 2 remote)
Seniority level: Mid-Senior level
Employment type: Contract
Job function: Information Technology