Episode 77 — Use AI for Code Review: Linting, SAST Triage, and False-Positive Control
In this episode, we’re going to focus on using A I not to write code, but to review code, which is a different skill with different risks. Beginners often imagine code review as something only senior developers do, like a gatekeeping ritual. In reality, code review is one of the best learning tools in software, because it teaches you to notice patterns, ask good questions, and spot mistakes before they become bugs or vulnerabilities. A I can help here because it can summarize what code is doing, point out suspicious patterns, and help triage warnings from automated tools that can overwhelm humans. The risk is that if you treat A I as an authority, you can accept incorrect assessments or ignore important issues because the tool sounded confident. The goal is to use A I as a partner in a disciplined review process, where it accelerates understanding and prioritization without replacing human judgment. In secure development, A I can help with linting-style feedback, help interpret Static Application Security Testing (S A S T) alerts, and help reduce noise by controlling false positives. But to do that safely, you need to understand what these tools are meant to do, what they miss, and how to keep the review process grounded in evidence.
Before we continue, a quick note: this audio course is a companion to our course companion books. The first book is about the exam and provides detailed information on how best to pass it. The second book is a Kindle-only eBook that contains 1,000 flashcards that can be used on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
A useful starting point is to define what linting and S A S T are, because beginners often treat them as the same thing. Linting is automated checking that focuses on code style, common mistakes, and patterns that may cause bugs, like unused variables or suspicious formatting. It is often about clarity and correctness rather than security, although some lint rules can catch security smells. Static Application Security Testing (S A S T) is automated analysis that looks for potential security vulnerabilities in code without running it, such as injection risks, insecure cryptography usage, or dangerous functions. Both tools can generate a lot of findings, and beginners can feel overwhelmed because the list looks long and technical. This is where A I can help by explaining what each finding means in plain language and by grouping similar issues. However, it is crucial to remember that linting and S A S T outputs are hints, not verdicts. They identify patterns that might be risky, but they do not always understand context. A I can help interpret, but it can also misunderstand context in the same way. So you want a workflow that uses A I to clarify and prioritize, while still requiring a human to confirm what is actually true.
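To make the distinction concrete, here is a small illustrative sketch in Python. The function and table are invented for the example, but the patterns are the kind each tool looks for: a linter would flag the unused variable, while a S A S T tool would flag user input concatenated into a query.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Lint-style finding: a variable that is assigned but never used.
    unused_total = 0

    # S A S T-style finding (shown commented out): untrusted input
    # concatenated into a query, a classic injection pattern.
    # conn.execute("SELECT id FROM users WHERE name = '" + username + "'")

    # The safer pattern a reviewer should confirm: a parameterized query.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchone()

# Minimal in-memory setup so the sketch runs end to end.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")
print(find_user(conn, "alice"))  # prints (1,)
```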
When A I helps with code review, it is often acting like a fast reader. It can summarize what a function does, identify data flows, and point out where user input is used in dangerous ways. For beginners, that can be extremely valuable because understanding code is the first step to reviewing it. If you cannot explain what the code is doing, you cannot reliably assess whether it is safe. A I can also help highlight trust boundaries, like where an API request enters the system, how it is validated, and where it reaches a database or a file system. This is the kind of reasoning a security reviewer does: follow the data and look for places where untrusted input becomes an instruction. But here is the important caution: a model may produce a plausible summary even if it misread the code. It may assume a variable is validated simply because its name suggests it is, or it may miss a subtle check hidden behind a conditional. That means you should treat A I summaries as a starting hypothesis. The human reviewer should verify by looking at the specific lines and confirming that the described checks actually exist. In secure review, details matter, so any tool that gives you a narrative must be anchored back to the code.
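Here is a hypothetical example of that trap. The names are invented, but they show how a summary that trusts a function's name can describe a check that does not actually exist.

```python
import os

def sanitize_path(user_path: str) -> str:
    # The name promises validation, but this only trims whitespace;
    # nothing here stops path traversal like "../../etc/passwd".
    return user_path.strip()

def read_report(user_path: str) -> str:
    # A plausible A I summary might say "the path is sanitized before
    # use." Anchoring back to the code shows the "sanitized" value can
    # still escape the reports directory, so the claim must be verified.
    full_path = os.path.join("/var/reports", sanitize_path(user_path))
    with open(full_path) as handle:
        return handle.read()
```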
A I can also assist in linting-style review by pointing out clarity problems that indirectly affect security. Confusing code is risky because people maintain it incorrectly, and maintenance errors create vulnerabilities. For example, if a function has unclear parameter names or mixed responsibilities, a future developer might reuse it in an unsafe way. If error handling is inconsistent, logging might leak secrets or fail to capture important events. If input validation is scattered, it is easy to miss. Linting tools catch some of this, but not all, because many clarity issues are semantic rather than syntactic. A I can help by suggesting refactors that improve readability, such as splitting complex functions, clarifying variable names, and making validation logic explicit. These improvements do not directly patch a vulnerability, but they reduce the chance that vulnerabilities are introduced later. Beginners should see that secure code is not only about blocking attacks; it is also about being understandable enough to be reviewed and maintained safely. If you use A I to improve clarity, you are strengthening the foundation on which security checks rely.
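As a sketch of that kind of refactor, compare a compressed function with one whose validation is explicit. The rule and names are invented for illustration, but the second version is easier to review, and both humans and tools can see exactly what is allowed.

```python
import re

saved_usernames = []  # stand-in for real persistence in this sketch

def save(username: str) -> None:
    saved_usernames.append(username)

# Before: implicit validation and vague names, easy to reuse unsafely.
def handle(d):
    if d.get("n"):
        save(d["n"].lower())

# After: the validation rule is explicit, named, and reviewable.
USERNAME_PATTERN = re.compile(r"^[a-z0-9_]{3,32}$")

def validate_username(raw: str) -> str:
    username = raw.strip().lower()
    if not USERNAME_PATTERN.fullmatch(username):
        raise ValueError("username must be 3 to 32 chars: a-z, 0-9, underscore")
    return username

def handle_signup(payload: dict) -> None:
    save(validate_username(payload["username"]))
```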
Now let’s talk about S A S T triage, because this is where many teams struggle. S A S T tools can produce a large number of findings, and many of them are false positives, meaning the tool flagged something that is not actually exploitable. False positives are not harmless; they create alert fatigue, which makes people ignore the tool entirely. Beginners might think you should fix everything the tool flags, but in real life, that can waste time and even introduce new bugs if changes are rushed. Triage is the process of sorting findings into categories like likely real, likely false, and needs more investigation. A I can help by explaining the pattern the tool detected and what evidence would confirm whether it is real. For example, if the tool flags a possible injection, A I can help you look for whether the code uses parameterized queries or whether it constructs a query by concatenating strings. If the tool flags insecure randomness, A I can explain what kind of randomness is expected and where the randomness is used. The benefit is speed: the reviewer can understand why the tool complained without reading pages of dense documentation. The risk is that A I might overconfidently label something a false positive when it is real. That is why triage must be evidence-based, with the code as the ultimate reference.
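The insecure randomness case is worth seeing in code, because the same flagged pattern can be a real issue or a false positive depending on where the value is used. This sketch uses Python's standard random and secrets modules; the function names are invented.

```python
import random
import secrets

# Likely a real finding: a session token must be unpredictable, and
# the random module's generator is not cryptographically secure.
def session_token_weak() -> str:
    return "".join(random.choices("0123456789abcdef", k=32))

def session_token_strong() -> str:
    return secrets.token_hex(16)  # designed for security-sensitive values

# Likely a false positive: jittering a retry delay has no security
# requirement, so the same "insecure randomness" pattern is acceptable
# here and worth documenting rather than "fixing."
def retry_jitter_seconds() -> float:
    return random.uniform(0.0, 0.5)
```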
A practical triage mindset is to ask, what is the source, the sink, and the sanitization. The source is where data comes from, like a user input. The sink is where data becomes dangerous, like a database query, a command execution, or a file path. Sanitization is the set of checks and transformations that make data safe before it reaches the sink. Many S A S T findings can be evaluated by tracing these three elements. A I can help by outlining the data path and pointing to where checks appear. But the human must confirm those checks are complete and correct. A beginner mistake is to accept a check that looks like validation but is not sufficient, such as checking that a string is not empty but not limiting its allowed characters. Another beginner mistake is to assume that encoding, escaping, and validation are interchangeable. They are related but not identical. Triage therefore benefits from A I explanations, but it still requires understanding enough to ask, does this check actually prevent the exploit. When you consistently use the source-sink-sanitization mindset, you reduce reliance on vague impressions.
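Here is the source-sink-sanitization trace as a small sketch. The endpoint is hypothetical, but it shows why a non-empty check is not enough when the sink is command execution.

```python
import subprocess

def ping_host(hostname: str) -> int:
    # Source: hostname arrives from an untrusted request parameter.

    # The insufficient check: non-empty is not the same as safe.
    if not hostname:
        raise ValueError("hostname required")

    # Real sanitization: restrict to an allowlist of characters.
    if not all(ch.isalnum() or ch in ".-" for ch in hostname):
        raise ValueError("hostname contains disallowed characters")

    # Sink: command execution. Passing a list (no shell) plus the
    # allowlist keeps the input from becoming an instruction.
    return subprocess.run(["ping", "-c", "1", hostname]).returncode
```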
False-positive control is not only about labeling alerts; it is about improving the system so the signal-to-noise ratio gets better over time. If you treat every S A S T run as a fresh flood of alerts, the process becomes exhausting. A better approach is to tune rules, add suppressions where appropriate, and create patterns for safe usage that the tool can recognize. A I can help here by summarizing which alerts are recurring and by suggesting how to restructure code so the tool understands the safety checks. For example, sometimes code is safe, but the tool cannot see the validation because it happens in a separate function or behind abstraction layers. Refactoring can make safety checks more explicit and therefore reduce false positives. However, beginners should be cautious about suppressions. Suppressing an alert should be done with evidence, because suppressions can hide real issues if misused. A safe practice is to require justification for suppressions and to review them periodically. False-positive control is not about making the tool quiet; it is about making the tool accurate enough that people trust it and respond appropriately. A I can support that goal, but it cannot replace disciplined governance.
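One way to make suppressions reviewable is to record the justification next to the suppression itself. This sketch uses Bandit's nosec marker as an example, since Bandit's rule B311 flags the random module; the function is hypothetical, and your tool's suppression syntax may differ.

```python
import random

def shuffle_quiz_questions(questions: list) -> list:
    # Justification for the suppression below: randomness here only
    # varies question order for display; it is never used for tokens,
    # keys, or security decisions. Re-check if this helper is reused.
    shuffled = questions[:]
    random.shuffle(shuffled)  # nosec B311
    return shuffled
```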
Another way A I can help is by prioritizing findings based on likely impact. Not all vulnerabilities are equally dangerous, and not all code paths are equally exposed. A potential injection in an internal-only admin tool may be lower priority than the same issue in a public-facing endpoint, although both still matter. A I can help by asking clarifying questions about exposure and by helping you reason about threat models, such as who can reach the vulnerable path and what an attacker could achieve. In a beginner context, threat modeling can feel intimidating, but it is simply thinking about who can do what. Prioritization decisions should be grounded in the environment, not in the tool’s severity label alone. Tools sometimes label issues based on generic risk, but your context determines real risk. A I can help you connect the generic risk to your specific context. Still, the human team must own these decisions because they involve business impact, user trust, and operational constraints. A tool can assist with reasoning, but it should not be the one making tradeoffs silently.
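A deliberately simple, hypothetical scoring sketch can make this concrete: the tool's generic severity is one input, and exposure and data sensitivity come from your environment. The fields and weights here are invented; a real team would tune them and still own the final ordering.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    rule: str
    tool_severity: int            # 1 (low) to 3 (high), from the scanner
    internet_facing: bool         # can untrusted users reach this path?
    handles_sensitive_data: bool

def triage_priority(finding: Finding) -> int:
    # Context raises or lowers the generic severity the tool assigned.
    score = finding.tool_severity
    if finding.internet_facing:
        score += 2
    if finding.handles_sensitive_data:
        score += 1
    return score

findings = [
    Finding("possible-sql-injection", 3, True, True),    # public endpoint
    Finding("possible-sql-injection", 3, False, False),  # internal admin tool
]
for finding in sorted(findings, key=triage_priority, reverse=True):
    print(triage_priority(finding), finding.rule,
          "public" if finding.internet_facing else "internal")
```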
It is also important to be clear about what A I cannot do reliably in code review. It cannot fully simulate runtime behavior, especially in complex systems. It cannot guarantee that a fix it suggests is correct, secure, and compatible. It may miss issues that require deep domain knowledge or environment-specific understanding. It may also produce plausible-sounding but incorrect security advice if the prompt is ambiguous. Beginners should treat A I as a reviewer’s assistant, not as a reviewer. That means you use it to accelerate reading, to generate questions, and to interpret tool output, but you still perform human review steps like checking code diffs carefully and validating assumptions. You also use independent evidence, like tests, documentation, and security policies, to confirm decisions. This is especially important for high-risk code areas such as authentication, authorization, cryptography, and input validation. A I can help you understand these areas, but it should not be the only voice in the room. Overreliance on A I in code review can create a false sense of assurance, which is a security risk in itself.
Finally, you want to connect A I-assisted review to a healthy team workflow. Code review is not just about catching mistakes; it is about sharing knowledge and improving quality. If A I is used privately to speed up review, it should still result in clear human-readable feedback. That feedback should point to specific code behaviors, explain risk in plain language, and propose safer patterns. A I can help draft that feedback, but the reviewer should ensure it is accurate and appropriate. Over time, the team can develop common review check themes, such as always checking input validation, always checking authorization, and always checking logging hygiene. A I can help remind reviewers of these themes without turning review into a checklist recitation. In secure coding, consistency matters because vulnerabilities often hide in the exceptions, the places no one looked because they assumed it was fine. When A I helps make review more consistent and less exhausting, it can improve security culture. The key is to keep humans responsible for the final decision and to treat A I outputs as guidance that must be validated.
To close, using A I for code review can strengthen security by accelerating understanding, supporting linting-style clarity improvements, and helping triage S A S T findings so humans focus on what matters. Linting catches style and common mistake patterns, S A S T looks for security-relevant patterns, and both can generate noisy outputs that beginners find overwhelming. A I can translate findings into plain language, trace data flows from sources to sinks, and suggest evidence to confirm whether an alert is real. False-positive control requires disciplined suppressions, rule tuning, and sometimes refactoring so tools can recognize safe patterns, and A I can help by summarizing recurring issues and suggesting clearer structures. The critical safety principle is that A I is not an authority; it is a helper. Human reviewers must confirm claims against actual code, use independent evidence, and own prioritization decisions based on real exposure and impact. When you use A I to make review faster and clearer without surrendering judgment, you get the best of both worlds: improved productivity and stronger security outcomes.