
Senior Product Security Engineer
Phaidra • Remote
Posted: January 23, 2026
Job Description
The Opportunity
At Phaidra, security is the bedrock of trust for our customers operating the world's most critical infrastructure. We are looking for a Senior Product Security Engineer to partner directly with our Agentic AI department.
This team is at the forefront of our mission, building autonomous agents responsible for optimizing the operational fabric of AI factories. These agents don't just chat; they act. They make real-time decisions to optimize power usage, cooling efficiency, and hardware health, creating a more stable and efficient environment for massive-scale compute.
This is a high-stakes environment where the integrity and security of AI-driven decisions are paramount. You will tackle the unique security challenges of deploying autonomous agents that interact directly with physical control systems. Security failures here don't just mean data leaks; they could mean operational downtime or physical degradation of critical hardware.
We need a security expert who thrives working hand-in-hand with AI researchers and engineers. Your role is to embed security into the DNA of our Agentic platform, ensuring that as our agents learn and explore, they do so within unbreakable safety boundaries.
We are seeking a team member located in the following area: UK
Responsibilities
- Champion Secure Agentic AI Development: Drive the adoption of Phaidra’s Secure AI/ML Development Lifecycle (SAIDL) within the Agentic AI team. Adapt security practices to fit the iterative and experimental nature of Reinforcement Learning and agent development.
- Agentic Threat Modeling: Partner with researchers to model threats specific to autonomous agents. Beyond standard AI risks, you will analyze risks unique to agents, such as goal misalignment, reward hacking, infinite looping, and insecure tool execution (e.g., an agent executing a command that exceeds safety limits).
- Secure Agent Architecture & Safety Boundaries: Design secure-by-default architectures for autonomous agents. Crucially, this involves defining deterministic safety guardrails that sit between the probabilistic AI model and the physical hardware controls. Ensure "Zero Trust" applies to the agent—it should only have the minimum permissions needed to adjust specific parameters.
- Secure Agent Tools & Memory: Architect security controls for the "tools" the agent uses (APIs to read sensors or change settings) and the agent's long-term memory. Ensure the agent cannot be manipulated into using a tool to perform unauthorized actions or "poisoned" via its memory context.
- MLSecOps for RL Pipelines: Secure the training and simulation pipelines used for Reinforcement Learning. Ensure the integrity of the simulation environments (Digital Twins) used to train agents, preventing attackers from influencing agent behavior during the training phase.
- Adversarial Testing & Red Teaming: Lead AI Red Teaming exercises focused on behavioral manipulation. Can you trick the agent into making a suboptimal decision? Can you manipulate the observations the agent receives?
- Incident Preparedness: Develop incident response playbooks tailored for autonomous systems, focusing on "Kill Switches" and rapid rollback capabilities in the event of rogue agent behavior.
- Cross-Functional Partnership: Build strong relationships with the Agentic AI researchers, SREs, and Data Scientists. Act as an enabler who helps them deploy powerful agents safely, rather than a blocker.
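To make the "deterministic safety guardrails" responsibility concrete, here is a minimal sketch of the kind of hard limit that sits between a probabilistic agent and physical controls. All names (`SafetyLimits`, `guard_setpoint`, the numeric limits) are illustrative assumptions, not Phaidra's actual API:

```python
# Hypothetical guardrail: clamp an agent-proposed setpoint so it can never
# exceed hard operational limits, regardless of what the model "intends".
from dataclasses import dataclass


@dataclass(frozen=True)
class SafetyLimits:
    min_value: float  # absolute floor (e.g., supply-air temperature in C)
    max_value: float  # absolute ceiling
    max_step: float   # largest allowed change per decision cycle


def guard_setpoint(proposed: float, current: float, limits: SafetyLimits) -> float:
    # Bound the rate of change first, then the absolute range.
    step = max(-limits.max_step, min(limits.max_step, proposed - current))
    bounded = current + step
    return max(limits.min_value, min(limits.max_value, bounded))


limits = SafetyLimits(min_value=18.0, max_value=27.0, max_step=0.5)
# An agent proposing a wild jump to 35.0 is limited to a 0.5-degree step.
print(guard_setpoint(proposed=35.0, current=22.0, limits=limits))  # 22.5
```

The key property is that the guardrail is plain deterministic code: its behavior is auditable and testable independently of the model that feeds it.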
Key Qualifications
- Agentic AI & RL Security: Proven understanding of the security risks associated with Reinforcement Learning, Autonomous Agents, or automated decision-making systems.
- AI Partnership: Demonstrated experience working embedded with AI system developers and researchers. You understand the difference between "probabilistic" (AI) and "deterministic" (Code) and how to secure the bridge between them.
- Core Experience: 5+ years of work experience in product security, application security, or a closely related security engineering role.
- Safety Engineering Mindset: You understand that in physical systems, "Availability" and "Safety" often outrank "Confidentiality." You are familiar with concepts like fail-safes and human-in-the-loop controls.
- Technical Depth:
- Strong programming experience, ideally with Python (essential for ML/AI ecosystems) or Go.
- Familiarity with agent frameworks (e.g., LangChain, AutoGPT) or RL libraries (e.g., Ray RLlib).
- Proven experience securing Cloud infrastructure (GCP) and Kubernetes.
- Deep understanding of Authentication & Authorization (specifically non-human identities/workload identity).
- Advanced MLOps: Direct, hands-on experience securing MLOps tooling (e.g., Kubeflow, MLflow) and deep understanding of securing complex data and model-training pipelines.
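The non-human-identity qualification above can be illustrated with a default-deny tool allowlist: each agent identity may invoke only the tools it has been explicitly granted. The agent and tool names here are hypothetical examples, not a real deployment:

```python
# Illustrative least-privilege authorization for agent "tools".
# Unknown agents and unlisted tools are rejected by default.
TOOL_ALLOWLIST: dict[str, set[str]] = {
    "cooling-agent": {"read_sensor", "set_fan_speed"},
    "power-agent": {"read_sensor", "set_power_cap"},
}


def authorize(agent_id: str, tool: str) -> bool:
    # Default-deny: an agent absent from the allowlist gets an empty set.
    return tool in TOOL_ALLOWLIST.get(agent_id, set())


print(authorize("cooling-agent", "set_fan_speed"))  # True
print(authorize("cooling-agent", "set_power_cap"))  # False
```

In production this check would typically be enforced server-side (e.g., via workload identity on the tool API), so a compromised or manipulated agent cannot simply skip it.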
Preferred Skills & Experience
- Industrial / OT Context: Experience working with systems that interface with the physical world (IoT, Robotics, ICS/OT). Understanding of the "IT/OT convergence."
- Formal Verification for AI: Experience using mathematical methods to prove that an AI model or agent will not violate specific safety constraints.
- Sim-to-Real Security: Experience securing simulation environments (Digital Twins) and managing the security risks of transferring policies from simulation to the real world.
- Protocol Fuzzing: Ability to test industrial protocols (e.g., Modbus, BACnet) for robustness against automated or adversarial inputs.
- AI Governance: Familiarity with emerging standards like the NIST AI RMF or ISO 42001.
- Critical Systems: Experience securing "closed loops" or control systems where latency and reliability are critical.
- Certifications: Relevant advanced certifications, such as GICSP (Global Industrial Cyber Security Professional), ISA/IEC 62443 Cybersecurity Expert, NVIDIA Agentic AI, OSEP (Offensive Security Experienced Penetration Tester), CISSP, or OSCP.
Our Stack
- AI/ML: PyTorch, TensorFlow, Ray (RL), LangChain, Gemini/OpenAI/Anthropic models.
- Languages: Python, Go.
- Infrastructure: Docker, Kubernetes, Terraform.
- Cloud: GCP (GKE, Pub/Sub, Bigtable).
Onboarding
In your first 30 days — (Foundation and AI Landscape Familiarization):
- Understand the Mission: Deep dive into "Building AI for AI Factories." Understand the specific physical parameters (power, cooling, airflow) our agents are optimizing.
- Build Trust: Sit with the Agentic AI researchers to understand their workflow. How do they train agents? How do they simulate environments?
- Initial Review: Conduct a high-level review of the current "Safety Layer" that sits between the Agent and the control systems.
In your first 60 days — (Threat Modeling & Guardrails):
- Agent Threat Model: Lead a detailed threat modeling session for a specific Agentic workflow. Focus on the Interface between the Agent and the Physical Hardware.
- Guardrail Implementation: Propose and begin implementing technical controls (guardrails) that enforce deterministic safety rules, ensuring the AI cannot exceed operational limits regardless of its intent.
- Secure the Tools: Review the security of the internal APIs (Tools) that the agents use to sense and act on the environment.
In your first 90 days — (Strategy & Automation):
- Reference Architecture: Publish a "Secure Agent Reference Architecture" for future agent development.
- Driving Initiatives: Drive the implementation of the secure reference architectures and remediation of key findings from the threat modeling exercises.
- Demonstrable Impact: Showcase measurable improvements in the security of the AI/ML pipeline (e.g., implementation of runtime monitoring for anomalous model behavior, reduction of AI-specific vulnerabilities).
- Strategic Contributions: Establish yourself as the key security partner and expert for the Agentic AI Department.
General Interview Process
All of our interviews are held via Google Meet, and an active camera connection is required.
- Meeting with People Operations team member (30 minutes)
- Meeting with Hiring Manager (30 minutes)
- Technical Interview with our Senior Product Security Engineer (60 minutes)
- Meeting with Agentic AI team member (30 minutes)
- Culture fit interview with Phaidra’s co-founders (30 minutes)
Base Salary
UK Residents:
- Tier 1 (London): 95,200 GBP - 142,000 GBP
- Tier 2 (Manchester, Birmingham, Edinburgh, Bristol): 89,600 GBP - 134,400 GBP
- Tier 3 (Other areas): 84,000 GBP - 126,000 GBP
In addition to base salary, this position is eligible for equity. Final salary will be determined based on several factors, including a candidate’s qualifications, skills, competencies, experience, expertise, education and location. In some cases, final compensation may fall outside the posted range. Salary ranges are regularly reviewed and may be adjusted in response to market trends.