Job Title: Senior / Principal AI Security Engineer (W2)
Location: Remote
Duration: 8+ months
Client: Mastercard
Rate: $80/hr
Job Description Summary
We are seeking a highly technical AI Security Engineer to design, test, and defend modern AI systems against real-world threats. This role requires hands-on experience building, attacking, and securing LLM-based systems, with a focus on practical implementation over theoretical knowledge.
You will work across the full lifecycle of AI systems, from architecture and design to active testing, exploitation, and mitigation, ensuring the security of solutions such as RAG pipelines, tool-augmented models (MCP), and agentic systems. This is a deeply technical role for individuals who have direct experience securing AI systems in practice, not just advising on policy or governance.
Key Responsibilities:
- Vulnerability Assessment: Identify, implement, and manage tooling and methodologies for penetration testing of AI models and systems to uncover and remediate security weaknesses.
- Secure AI Development: Collaborate with data scientists and software engineers to integrate security best practices into the AI development lifecycle, including secure model training, validation, and deployment. Support security engineers in the evaluation of AI systems being developed and implemented.
- Compliance and Standards: Keep track of emerging industry standards, regulations, and best practices for AI security, such as NIST, ISO, and GDPR.
- Research and Innovation: Stay abreast of the latest advancements in AI security, conduct research, and contribute to the development of innovative security solutions.
- Documentation and Reporting: Prepare and document standard operating procedures, protocols, and security reports, including assessment-based findings and recommendations for further system security enhancement.
- Advisory and Support: Answer queries, provide feedback, and advise teams on AI security best practices.
- Technical Training and Mentorship: Provide technical training and mentorship to team members and stakeholders on AI security principles and practices.
- Experimentation and POCs: Design and execute experiments and proofs of concept (POCs).
Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Security, or a related field.
- Extensive experience in information security, with a strong focus on AI security.
- In-depth knowledge of AI technologies, machine learning algorithms, and data protection techniques.
- Proven expertise in designing and implementing security measures for AI systems, including secure coding, encryption, and access controls.
- Strong analytical and problem-solving skills, with the ability to conduct vulnerability assessments and penetration testing.
- Excellent technical communication and collaboration skills to work effectively with diverse teams.
- Relevant certifications such as CISSP, CEH, OSCP, or equivalent are highly desirable.