All Episodes


Episode 70 — Analyze Model Inversion Risks: What Can Leak and How to Reduce It

This episode focuses on model inversion risk as a privacy and confidentiality concern, because SecAI+ expects you to understand how attackers may try to infer sensitiv...

Episode 69 — Investigate Model Poisoning: Artifact Integrity, Supply Chain, and Remediation

This episode teaches model poisoning as an artifact and supply chain problem, because SecAI+ scenarios often involve compromised checkpoints, tampered weights, malici...

Episode 68 — Investigate Data Poisoning: Detection Clues, Impact Analysis, and Recovery Steps

This episode focuses on data poisoning investigations, because SecAI+ expects you to recognize how poisoned inputs can degrade performance, embed attacker goals, or cr...

Episode 67 — Defend Against Jailbreaking: Common Tactics and Practical Mitigations

This episode teaches jailbreak defense as a layered control strategy, because SecAI+ expects you to recognize that jailbreaks are not just “bad prompts,” they are syst...

Episode 66 — Detect Prompt Injection Attempts: Indicators, Triage, and Containment Options

This episode focuses on detecting prompt injection as an active defense capability, because SecAI+ scenarios frequently involve untrusted inputs that try to override ...

Episode 65 — Interpret Confidence Signals: Limits, Miscalibration, and Operational Risk

This episode teaches confidence as a risk signal that must be handled carefully, because SecAI+ expects you to understand that model confidence can be miscalibrated, c...

Episode 64 — Audit AI Use at Scale: Who Asked What, When, and With What Data

This episode focuses on auditing AI usage as a governance and security requirement, because SecAI+ expects you to prove accountability across prompts, retrieval, tools...

Episode 63 — Log AI Interactions Safely: Sanitization, Redaction, and Tamper-Resistance

This episode teaches secure logging for AI interactions, because SecAI+ scenarios regularly involve logs that accidentally become a secondary data breach, especially w...

Episode 62 — Monitor Prompts as Telemetry: Signals, Patterns, and Threat-Hunting Hooks

This episode explains how prompts and context assembly can be treated as security telemetry, because SecAI+ expects you to detect emerging abuse, injection attempts, a...

Episode 61 — Apply Key Management Right: Rotation, Storage, and Separation of Duties

This episode focuses on key management as a foundational control for AI systems, because SecAI+ scenarios often involve encrypted datasets, protected model artifacts, ...

Episode 60 — Apply Access Controls Across Layers: Data, Models, Tools, and Agents

This episode ties access control together across the entire AI ecosystem, because SecAI+ scenarios often fail when organizations secure one layer, like the model endpo...

Episode 59 — Lock Down Endpoints: Network Controls, Segmentation, and Service Hardening

This episode teaches endpoint security for AI services as a familiar discipline applied to a new workload, because SecAI+ expects you to defend inference endpoints, r...

Episode 58 — Secure Agent Toolchains: Least Privilege, Scoped Credentials, and Audit Trails

This episode focuses on agent toolchains as a high-risk area, because SecAI+ scenarios often involve agents that can call APIs, query internal systems, create tickets...

Episode 57 — Control Outputs Safely: Dangerous Content Filters and Secure Output Encoding

This episode teaches safe output handling as a concrete security requirement, because SecAI+ expects you to prevent situations where AI outputs create harm through uns...

Episode 56 — Validate Inputs Rigorously: File Types, Length Limits, and Content Sanitization

This episode focuses on input validation as a first-line defense for AI systems, because SecAI+ scenarios frequently involve attackers using oversized payloads, malici...

Episode 55 — Set Rate Limits and Quotas: Token Caps, Cost Controls, and Abuse Prevention

This episode explains rate limiting and quotas as both a security control and a reliability control, because SecAI+ expects you to mitigate abuse patterns that includ...

Episode 54 — Build Prompt Firewalls: Filtering, Classification, and Instruction Boundary Checks

This episode teaches prompt firewalls as a practical defense pattern, because SecAI+ scenarios often involve untrusted user input, untrusted documents, and integrated...

Episode 53 — Implement Guardrails That Hold: Policy Rules, Validators, and Refusal Logic

This episode focuses on guardrails as enforceable controls, because SecAI+ expects you to design guardrails that still work when inputs are messy, users are persistent...

Episode 52 — Model the Attack Surface: Data, Model, Agent, Tooling, and Integrations

This episode builds an AI-specific attack surface map you can apply quickly on the SecAI+ exam, because many scenario questions are really asking which layer is being ...

Episode 51 — Track AI Vulnerabilities: CVE Workflows, Advisories, and Exposure Management

This episode teaches vulnerability management for AI and adjacent components in a way that matches SecAI+ scenario questions, where the right answer is often a discip...
