Talent.com
AI Adversarial / Penetration Tester (REMOTE)

AI Adversarial / Penetration Tester (REMOTE)

Insight GlobalUnited States
job_description.job_card.variable_hours_ago
serp_jobs.job_preview.job_type
  • serp_jobs.job_card.full_time
  • serp_jobs.filters.remote
job_description.job_card.job_description

Job Description

A large global bank is looking for a strong AI Adversarial Tester to join their Application Security & Testing team within Infrastructure Security group. The role can be hybrid remote in Boston MA, NYC, Miami FL, or Dallas TX.

The bank has close to $90B in assets, 9K employees and more than 2M customers along the east coast. In addition to providing excellent banking experiences for their customers, the bank also very involved in the community through charitable giving and philanthropy to give back to low / moderate income and underserved communities.

The bank is expanding embedded AI technology into day to day products, such as Voice bots supporting customers service centers, internal chatbots and they need AI security around these efforts. They will leverage some publicly available resources and threat models for common attack vectors and injection points and do some manual testing. However, there will need to be some additional tests after production and deployment because there are failures. They need to establish a secure AI practice. This resource will engage the AI security tower as a whole, and fully align AI initiatives and security solutions architecture, threat modeling and testing. This resource must think about products critically and build out threat models for use cases for AI implementations and guide teams on how to perform testing; understanding what are edge cases and defining prompts on what can possibly be breaking within the environment.

This role is focused on proactively identifying vulnerabilities in GenAI systems and embedding adversarial resilience across the AI development lifecycle.

You’ll lead hands-on testing efforts, collaborate across cybersecurity and engineering teams, and contribute to the bank’s enterprise AI security strategy.

Key Responsibilities

Adversarial Testing

Design and execute controlled adversarial attacks (e.g., prompt injection, input / output evaluation, data exfiltration, misinformation generation).

Evaluate GenAI models against known and emerging AI-specific attack vectors.

Develop reusable test repositories, scripts, and automation to continuously challenge models.

Partner with developers to recommend remediation strategies for discovered vulnerabilities.

Threat Monitoring & Intelligence

Monitor the external threat landscape for new GenAI-related attack methods (e.g., malicious prompt engineering, fine-tuned model abuse).

Correlate findings with internal AI deployments to identify potential exposure points.

Assess existing technical controls and identify enhancements.

Build relationships with threat intelligence providers, industry groups, and regulators.

Cross-Functional Collaboration

Partner with Cybersecurity, SOC, and DevSecOps teams to integrate adversarial testing into the broader security framework.

Collaborate with ML / AI engineering teams to embed adversarial resilience into the development lifecycle (“shift-left” AI security).

Provide training and awareness sessions for business units leveraging GenAI.

Continuous Improvement & Innovation

Develop custom adversarial testing frameworks tailored to the bank’s use cases.

Evaluate and recommend security tools and platforms for AI model monitoring, testing, and threat detection.

Contribute to enterprise AI security strategy by introducing new practices, frameworks, and technologies.

Compensation :

$72 / hr to $82 / hr.

Exact compensation may vary based on several factors, including location, skills, experience, and education.

Employees in this role will enjoy a comprehensive benefits package starting on day one of employment, including options for medical, dental, and vision insurance. Eligibility to enroll in the 401(k) retirement plan begins after 90 days of employment. Additionally, employees in this role will have access to paid sick leave and other paid time off benefits as required under the applicable law of the worksite location.

We are a company committed to creating diverse and inclusive environments where people can bring their full, authentic selves to work every day. We are an equal opportunity / affirmative action employer that believes everyone matters. Qualified candidates will receive consideration for employment regardless of their race, color, ethnicity, religion, sex (including pregnancy), sexual orientation, gender identity and expression, marital status, national origin, ancestry, genetic factors, age, disability, protected veteran status, military or uniformed service member status, or any other status or characteristic protected by applicable laws, regulations, and ordinances. If you need assistance and / or a reasonable accommodation due to a disability during the application or recruiting process, please send a request to HR@insightglobal.com.To learn more about how we collect, keep, and process your private information, please review Insight Global's Workforce Privacy Policy :

Skills and Requirements

5+ years of experience in cybersecurity (focus on red teaming)

previous Red Team experience for offensive security (penetration testing, threat hunting, vulnerability exploitation, etc)

exposure to machine learning and generative AI fundamentals (LLMs, diffusion models, embeddings, etc)

Hands-on experience with adversarial ML techniques : model extraction, poisoning, prompt injection.

Hands on threat modeling and security testing

Familiarity with AI security frameworks : NIST AI RMF, MITRE ATLAS, OWASP Top 10 for LLMs.

Python and scripting languages

Experience with AI / MLOps platforms and integration frameworks : Azure AI, AWS SageMaker, OpenAI API, Hugging Face, LangChain or equivalent

Exposure to SIEM, SOAR, and threat intelligence platforms. - CISSP, CRTP, GIAC, CRTA, OSCP (Offensive Security Certified Professional), PNPT (Practical Network Penetration Tester), CPTS (Certified Penetration Testing Specialist), CBBH (Certified Bug Bounty Hunter), CRT (Certified Red Team)

AI Security Certifications : AAISM (Advanced in AI Security Management), CAISF (Certified AI Security Fundamentals), CAISA (Certified AI Security Auditor), CAISS (Certified Artificial Intelligence Security Specialist), CAISP (Certified AI Security Professional), BlueCert AI Security Certifications, AI-Powered Threat Detection Certification, AI Risk Management Certification, AI Testing Certification, etc

previous background with Penetration testing tools (e.g., Metasploit, Burp Suite, Nessus)

serp_jobs.job_alerts.create_a_job

Ai Tester • United States