All Episodes

Displaying 61–80 of 91 in total

Episode 30 — Use Labeling Safely: Quality Controls, Annotation Bias, and Poisoning Exposure

This episode focuses on labeling as both a quality risk and a security risk, because SecAI+ expects you to understand how labels shape model behavior and how attackers...

Episode 29 — Apply Data Minimization: Collect Less, Store Less, and Expose Far Less

This episode explains data minimization as a practical security strategy, because SecAI+ scenarios often involve unnecessary data collection that expands breach impact...

Episode 28 — Handle Structured, Semi-Structured, and Unstructured Data With Safe Controls

This episode teaches safe handling across data types, because SecAI+ expects you to apply appropriate controls whether you are dealing with clean tables, messy logs, d...

Episode 27 — Prevent Training Data Leakage: Secrets, PII, and Tokenization Side Effects

This episode focuses on preventing training data leakage, because SecAI+ will test whether you can recognize how secrets and personal data can enter pipelines and late...

Episode 26 — Clean and Normalize Data Without Losing Security-Relevant Signal and Context

This episode teaches data cleaning as a careful tradeoff, because SecAI+ expects you to preserve security-relevant signals while still producing datasets that models c...

Episode 25 — Secure Data Intake: Authenticity Checks, Source Trust, and Provenance Tracking

This episode covers data intake as the start of the AI security chain, because SecAI+ often frames failures that begin with untrusted sources, weak authenticity check...

Episode 24 — Manage Model Output Formats: Schemas, Parsing, and Safe Downstream Handling

This episode explains why output formatting is a security issue, not just a developer convenience, because SecAI+ expects you to prevent failures where loosely struct...

Episode 23 — Calibrate Confidence Carefully: When to Trust Outputs and When to Escalate

This episode teaches confidence calibration as a safety control, because SecAI+ scenarios frequently require you to decide when an AI output is “good enough,” when it...

Episode 22 — Reduce Hallucinations Practically: Grounding, Constraints, and Verification Patterns

This episode focuses on reducing hallucinations as an operational discipline, because SecAI+ tests whether you can select controls that improve reliability without pre...

Episode 21 — Separate System, Developer, and User Instructions to Prevent Confused Authority

This episode explains instruction hierarchy as a security control, because SecAI+ scenarios often involve an AI system receiving competing directions from system promp...

Episode 20 — Control Tool Use in Agents: Permissions, Scope, and Safe Action Boundaries

This episode teaches tool-using agents as a high-impact risk area, because SecAI+ will test whether you understand that once an AI system can take actions, the primary...

Episode 19 — Write Prompt Templates That Reduce Variance and Prevent Risky Behaviors

This episode focuses on prompt templates as a standardization control, because SecAI+ expects you to think like an operator who needs consistent outputs, predictable s...

Episode 18 — Use Zero-Shot, One-Shot, and Few-Shot Prompting With Clear Guardrails

This episode teaches when and how to use zero-shot, one-shot, and few-shot prompting in ways that improve reliability without creating new security problems, because S...

Episode 17 — Build Prompt Foundations: Roles, Instructions, Context, and Output Constraints

This episode establishes prompt fundamentals the way SecAI+ tests them, treating prompts as a control surface that can reduce variance and risk when they are structure...

Episode 16 — Choose Vector Stores Wisely: Indexing, Latency, Recall, and Access Controls

This episode focuses on selecting and operating vector stores with a security-first mindset, because SecAI+ expects you to balance performance goals like low latency a...

Episode 15 — Design Retrieval-Augmented Generation That Resists Abuse and Data Spillover

This episode teaches retrieval-augmented generation as a security architecture pattern, because SecAI+ frequently frames scenarios where an LLM is connected to enterpr...

Episode 14 — Understand Embeddings Deeply: Similarity Search, Semantic Space, and Leakage Risks

This episode explains embeddings in a way that makes similarity search and semantic retrieval feel concrete, because SecAI+ will test your ability to reason about how ...

Episode 13 — Apply Pruning and Quantization Without Breaking Security Expectations and Accuracy

This episode covers pruning and quantization from a security-aware perspective, because SecAI+ scenarios often involve performance constraints, edge deployment, or co...

Episode 12 — Fine-Tune Safely: Epochs, Learning Rates, and Catastrophic Forgetting Risks

This episode teaches fine-tuning as a controlled engineering activity with security consequences, not a casual “make it better” step, because SecAI+ expects you to und...

Episode 11 — Explain Model Lifecycle States: Training, Tuning, Serving, and Retirement Criteria

This episode explains the full model lifecycle in a way that maps directly to SecAI+ governance, risk, and operational control questions, because exam scenarios often ...
