Cyber Security Engineer • New York, New York, United States

Job Description:
- Develop defensive algorithms against adversarial ML attacks (data poisoning, model inversion, evasion attacks).
- Conduct red teaming of AI/ML models to simulate real-world exploitation scenarios.
- Build explainable AI (XAI) frameworks ensuring transparency and fairness.
- Implement secure MLOps pipelines compliant with the NIST AI Risk Management Framework.
- Research emerging threats in generative AI, LLM security, and synthetic data misuse.
- Collaborate with data science, security, and legal teams on AI compliance and ethics.

Required Skills:
- Hands-on expertise in TensorFlow, PyTorch, and scikit-learn with a focus on security testing.
- Strong understanding of adversarial ML attack/defense strategies.
- Knowledge of federated learning, differential privacy, and secure multiparty computation (SMPC).
- Familiarity with LLM security (prompt injection, model theft, jailbreaking).
- Strong programming skills in Python, R, and C++.

Certifications:
- GIAC Machine Learning Security Essentials (GMLE)
- TensorFlow Developer Certificate
- Certified Artificial Intelligence Practitioner (CAIP)
- Certified Information Systems Security Professional (CISSP)