Episode 10 — Detect unauthorized software quickly using discovery signals, baselines, and change patterns
In this episode, we focus on detecting unauthorized software early, because time is the attacker’s best friend and unauthorized software often becomes the first quiet foothold for persistence, credential theft, or data movement. When software authority exists on paper but detection is weak, the environment drifts into a state where you only discover unauthorized tools during an incident, and by then the blast radius is often larger than it needed to be. Early detection is not about punishing users or creating constant noise; it is about seeing change as it happens and responding proportionally. The most effective programs treat unauthorized software detection as a change-detection problem, not a guessing game, and they use baselines plus reliable telemetry to surface what is new, rare, or out of place. This approach aligns cleanly with governance expectations because it creates evidence that you are operating controls continuously rather than performing occasional inventory snapshots. By the end, you should have a workable model for baselining, identifying deltas, triaging findings, and responding in a way that reduces risk without turning your security team into the software police.
Before we continue, a quick note: this audio course is a companion to our course companion books. The first book covers the exam in detail and explains how best to pass it. The second book is a Kindle-only eBook containing 1,000 flashcards you can use on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
Begin by establishing baselines for common roles and standard builds, because you cannot detect unauthorized software reliably if you have no definition of normal. A baseline is the expected set of applications, services, and key components for a given device class, such as a developer workstation, a finance endpoint, a call center system, or a server role. Baselines should reflect what the role actually needs, not an idealized minimal build that no one can work with, because unrealistic baselines drive exceptions and exceptions create blind spots. You also want baselines to capture the things that matter for risk, such as administrative tools, remote access utilities, scripting environments, and browser extensions with elevated permissions. Standard builds provide a starting point, but the baseline must be maintained as roles evolve, otherwise the baseline becomes obsolete and every change becomes noise. A useful baseline includes not just application names but also publisher expectations, installation paths, and version ranges, because those attributes help distinguish legitimate software from lookalikes. When baselines are role-aware, you reduce false positives and you make detection meaningful.
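To make that concrete, here is a minimal sketch in Python of what a role-aware baseline record might look like; the class name, fields, and example entries are illustrative assumptions rather than a prescribed schema.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class BaselineEntry:
    """One expected application for a device role (illustrative fields)."""
    name: str            # application or package name
    publisher: str       # expected signing publisher or vendor
    install_paths: tuple  # directories where the binary should live
    version_range: tuple  # (min_version, max_version) accepted for this role


# A hypothetical baseline for a "finance endpoint" role; names and paths are examples only.
FINANCE_ENDPOINT_BASELINE = {
    "AcmeLedger": BaselineEntry(
        name="AcmeLedger",
        publisher="Acme Software Ltd",
        install_paths=(r"C:\Program Files\AcmeLedger",),
        version_range=("12.0", "12.9"),
    ),
    "CorpBrowser": BaselineEntry(
        name="CorpBrowser",
        publisher="Corp IT",
        install_paths=(r"C:\Program Files\CorpBrowser",),
        version_range=("120.0", "125.0"),
    ),
}
```

Capturing publisher, path, and version expectations alongside the name is what later lets detection distinguish a legitimate update from a lookalike.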
With baselines defined, use discovery signals from Endpoint Detection and Response (E D R) telemetry, logs, and package managers, because unauthorized software can enter through different channels and you need visibility across those channels. E D R telemetry can show process execution, parent-child process chains, file creation events, and network connections that help you understand what ran and what it did. Operating system logs can show installation events, service creation, scheduled tasks, and user activity patterns that indicate software introduction and persistence behaviors. Package managers and endpoint management inventories can show declared installations, version states, and update activity, which is valuable for spotting unauthorized packages or unusual repository sources. For cloud-hosted workloads, control plane telemetry can reveal new images, new dependencies, or newly enabled features that effectively introduce new software into the environment. The strength of combining signals is that each one compensates for the others’ blind spots, such as portable executables that do not register as installed software. The goal is to make unauthorized software detection a data problem you can solve systematically. When signals are layered, you can confirm findings and reduce the risk of chasing ghosts.
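As a rough illustration of layering signals, the sketch below flags executables that E D R telemetry saw running but that no package manager or endpoint-management inventory accounts for, which is exactly the portable-executable blind spot mentioned above; the field names such as image_path and binary_path are assumptions about pre-collected data, not any specific product's API.

```python
def undeclared_executables(edr_process_events, package_inventory):
    """
    Flag executables observed running (from E D R telemetry) that no package
    manager or endpoint-management inventory accounts for -- a common blind
    spot for portable tools. Both inputs are assumed to be pre-collected
    lists of dicts; the field names are illustrative.
    """
    declared = {entry["binary_path"].lower() for entry in package_inventory}
    findings = []
    for event in edr_process_events:
        path = event["image_path"].lower()
        if path not in declared:
            findings.append({
                "host": event["host"],
                "path": event["image_path"],
                "publisher": event.get("signer", "unknown"),
                "first_seen": event["timestamp"],
            })
    return findings
```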
Now practice spotting anomalies such as new executables, rare publishers, and odd paths, because unauthorized software often reveals itself through small deviations from normal patterns. New executables can be legitimate updates, but they can also be ad hoc tools dropped into user directories, temporary folders, or application data paths that are commonly abused. Rare publishers are a strong signal because most enterprise environments have a small set of common vendors, and a new or unknown publisher deserves scrutiny, especially if it appears on a privileged system. Odd paths are a useful indicator because legitimate enterprise software usually installs in consistent directories, while unauthorized and malicious tools often run from downloads folders, user profile locations, or unusual nested paths designed to evade casual inspection. You can also look for unusual execution patterns, such as a document process spawning a scripting engine that then launches an executable from a temporary location. Anomaly spotting is not about assuming maliciousness; it is about identifying items that warrant review because they fall outside the baseline. When you train analysts to look for these patterns, you improve detection precision and response confidence.
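One way to encode these heuristics is a simple review score; in the sketch below the path fragments, the rarity threshold, and the parent-process check are illustrative assumptions that would need tuning for each environment.

```python
# User-writable or temporary locations that enterprise software rarely runs from.
SUSPECT_PATH_FRAGMENTS = ("\\downloads\\", "\\appdata\\local\\temp\\", "\\users\\public\\")


def score_anomaly(event, publisher_counts, rare_threshold=3):
    """
    Assign a simple review score to a process event based on how rare its
    publisher is across the fleet and whether it runs from an unusual path.
    publisher_counts is assumed to map publisher name to fleet-wide count.
    """
    score = 0
    publisher = event.get("signer", "unknown")
    if publisher_counts.get(publisher, 0) <= rare_threshold:
        score += 2  # rare or unknown publisher deserves scrutiny
    path = event["image_path"].lower()
    if any(fragment in path for fragment in SUSPECT_PATH_FRAGMENTS):
        score += 2  # running from a user-writable or temporary location
    if event.get("parent_process", "").lower() in ("winword.exe", "excel.exe"):
        score += 1  # document process spawning an executable is unusual
    return score
```

A higher score does not mean malicious; it means the item falls outside the baseline and warrants review.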
As you build this capability, avoid pitfalls like alert fatigue and ignoring low-frequency events, because both failures can quietly undermine the whole program. Alert fatigue happens when the detection system produces too many low-value alerts, causing analysts to skim, delay, or dismiss signals that actually matter. Low-frequency events are dangerous because rare activity is often where unauthorized tools appear, and rarity can be a feature of stealth rather than a sign of harmlessness. A common anti-pattern is tuning out everything that appears on only one device or one user, even though targeted attacks often start that way. Another anti-pattern is treating repeated alerts as more important simply because they are loud, when they may be noise from a misconfiguration. The right approach is to tune signals so they are actionable, and to maintain a triage method that can give rare but high-risk events the attention they deserve. You also need to ensure that detection logic is aligned with baselines, otherwise every normal software update becomes an alert. Managing noise is not optional; it is a core part of sustaining detection over time.
A quick win that provides immediate value is running weekly diffs against approved baselines, because diffs turn a large inventory into a focused list of changes. A weekly diff compares the current software state of systems in a role group to the baseline and highlights additions, removals, and key attribute changes such as publisher and version. Weekly is a good cadence because it is frequent enough to catch drift before it becomes normal, while avoiding the churn that can occur with daily full diffs in high-change environments. The diff report should prioritize anomalies that are out-of-policy, such as prohibited tools, unknown publishers, and software running from unusual paths. It should also separate expected changes, such as planned deployments, so analysts are not forced to re-investigate known activity. When diffs are consistent and scoped, they become a reliable operational tool rather than a compliance artifact. Over time, weekly diffs also improve baseline accuracy, because they reveal where the baseline needs to evolve based on legitimate business needs.
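A weekly diff can be as simple as a set comparison plus a few attribute checks; in the sketch below the inventory and baseline shapes, including the min_version and max_version fields, are assumptions for illustration.

```python
def parse_version(version):
    """Turn '12.3.1' into (12, 3, 1) so versions compare numerically."""
    return tuple(int(part) for part in version.split("."))


def weekly_diff(current_inventory, baseline):
    """
    Compare the current software state of a role group against its approved
    baseline and report additions, removals, and attribute drift. Both inputs
    are dicts keyed by application name; the field names are illustrative.
    """
    current_names = set(current_inventory)
    baseline_names = set(baseline)

    additions = sorted(current_names - baseline_names)  # present but not approved
    removals = sorted(baseline_names - current_names)   # expected but missing
    drift = []
    for name in current_names & baseline_names:
        observed, expected = current_inventory[name], baseline[name]
        if observed["publisher"] != expected["publisher"]:
            drift.append((name, "publisher", observed["publisher"]))
        version = parse_version(observed["version"])
        low = parse_version(expected["min_version"])
        high = parse_version(expected["max_version"])
        if not (low <= version <= high):
            drift.append((name, "version", observed["version"]))
    return {"additions": additions, "removals": removals, "drift": drift}
```

The additions list is where out-of-policy tools, unknown publishers, and odd paths surface; planned deployments should be filtered out before the report reaches analysts.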
Once a detection occurs, correlate it with user actions and deployment records, because context is how you move from signal to decision. User actions can include authentication events, administrative activity, downloads, and execution patterns that indicate whether software was introduced intentionally or through compromise. Deployment records can include endpoint management jobs, packaging pipelines, procurement approvals, or change tickets that show whether the software was installed through sanctioned processes. Correlation reduces false positives, because it helps you quickly separate planned rollouts from unauthorized installs. It also improves incident response, because it can reveal whether the appearance is isolated or part of a broader pattern, such as the same executable showing up across multiple systems after a specific event. Correlation also supports respectful handling, because you can approach a user with facts about what was observed and when, rather than with assumptions. In environments with strong governance, correlation should be fast because control points leave records, and detection should connect to those records naturally. When correlation is weak, it often indicates gaps in governance, such as unmanaged deployment channels or informal software sharing.
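A minimal correlation step might look like the sketch below, which checks whether a finding lines up with a sanctioned deployment record on the same host within a time window; the record fields and the twenty-four-hour window are illustrative assumptions.

```python
from datetime import timedelta


def correlate_with_deployments(finding, deployment_records, window_hours=24):
    """
    Check whether a detected installation lines up with a sanctioned
    deployment job or change ticket on the same host within a time window.
    Record shapes are assumptions; timestamps are assumed to be datetimes.
    """
    window = timedelta(hours=window_hours)
    matches = [
        record for record in deployment_records
        if record["host"] == finding["host"]
        and record["package"].lower() == finding["name"].lower()
        and abs(record["deployed_at"] - finding["first_seen"]) <= window
    ]
    if matches:
        return {"status": "sanctioned", "evidence": matches}
    return {"status": "unexplained", "evidence": []}
```

An "unexplained" result does not mean compromise; it means the finding needs user context or further investigation before a decision.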
After you have context, triage findings by risk, prevalence, and business impact, because not every unauthorized software event deserves the same response. Risk includes factors like whether the software enables remote access, whether it touches credentials, whether it runs with elevated privileges, and whether it communicates externally in unexpected ways. Prevalence includes how widely the software appears and whether it is spreading, because widespread presence can indicate either legitimate need or broad compromise. Business impact includes whether removing the software would break critical workflows, and whether there is an approved alternative that can replace it quickly. A high-risk remote access utility on a privileged server is different from an unapproved PDF viewer on a low-risk endpoint, even though both are unauthorized. Triage should also consider whether the software is known prohibited or merely unapproved, because that affects urgency and the likelihood that a rapid containment step is justified. When triage is structured, response is consistent, and consistent response builds trust across the organization. It also ensures you spend analyst time where it reduces the most risk.
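Triage rules like these can be expressed as a small scoring function; the weights, flags, and cut-offs in the sketch below are illustrative assumptions, not a standard rubric.

```python
def triage_priority(finding):
    """
    Combine risk, prevalence, and business-impact signals into a simple
    priority label. Weights and thresholds are illustrative and would be
    tuned to the organization's own risk appetite.
    """
    score = 0
    if finding.get("remote_access_capable"):
        score += 3  # tool can open a path into the host
    if finding.get("runs_elevated"):
        score += 2  # elevated privileges raise potential impact
    if finding.get("external_connections"):
        score += 2  # unexpected outbound communication
    if finding.get("host_count", 1) > 5:
        score += 1  # spreading across multiple systems
    if finding.get("on_critical_system"):
        score += 2  # business impact of the affected host
    if finding.get("explicitly_prohibited"):
        score += 3  # policy already says no

    if score >= 7:
        return "contain-now"
    if score >= 4:
        return "investigate-today"
    return "review-this-week"
```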
Because humans are involved, mentally rehearse investigating without blaming users immediately, because blame-driven security creates hiding behavior. Many unauthorized software cases are caused by process friction, missing approved tools, or unclear guidance, not malicious intent. If your first move is accusation, users will learn to avoid reporting and will become less cooperative during investigations. A better stance is curiosity and evidence, where you start by validating what was observed, determining whether it is legitimate, and understanding why it appeared. You can ask whether the software was needed for a work task, whether it was installed through a sanctioned method, and whether the user was aware of policies and alternatives. If compromise is suspected, you focus on containment and root cause rather than on fault. This approach does not eliminate accountability; it preserves it by keeping the investigation factual and by directing accountability toward fixing control weaknesses. Calm investigation also aligns with governance expectations because it demonstrates due process and proportional response. When the team can investigate respectfully, you get better information and faster resolution.
To keep the detection model easy to remember and apply, create a memory anchor: baseline plus delta equals detection. The baseline is your definition of normal for a role or system class, including expected software, publishers, and paths. The delta is the change, such as a new executable, a new publisher, a version shift outside expected ranges, or a program running from an unusual location. Detection happens when you compare the delta to the baseline and treat mismatches as candidates for review. This anchor also keeps teams from chasing raw signals without context, because a signal is only meaningful when compared to what should be there. It also guides tuning, because if you see too many deltas that are legitimate, the baseline may be outdated or too strict for the role. If you see too few deltas despite known drift, your telemetry may be incomplete or your comparison logic may be weak. The anchor helps you diagnose where the program needs improvement without turning it into an abstract debate.
When you detect unauthorized software, document response options clearly, because response should be proportional and repeatable rather than improvised. One option is removal, where the software is uninstalled or deleted and the system is returned to baseline, which is appropriate when the tool is clearly prohibited or unnecessary. Another option is quarantine, where you isolate the system or block execution while you investigate, which is appropriate when risk is high or compromise is suspected. A third option is approval, where the software is evaluated and moved into the approved list if it is legitimate and needed, which is appropriate when demand is real and the tool meets criteria. A fourth option is restriction, where you allow limited use under constraints such as reduced privileges, limited network access, or scope-limited deployment, which is appropriate when full approval is not yet justified but immediate business need exists. Documenting these options reduces confusion and reduces inconsistent treatment across teams. It also creates a clear interface between detection and governance, because approvals and restrictions should feed back into baseline updates and software authority records. When response options are clear, analysts can act quickly and defend their decisions.
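A playbook can encode these options explicitly so analysts apply them consistently; the decision rules in the sketch below are illustrative assumptions, and a real playbook would reflect the organization's own criteria and approval workflow.

```python
from enum import Enum


class ResponseOption(Enum):
    REMOVE = "uninstall and return the system to baseline"
    QUARANTINE = "isolate the host or block execution pending investigation"
    APPROVE = "evaluate and add to the approved list, then update the baseline"
    RESTRICT = "allow limited use under reduced privilege or network scope"


def choose_response(finding):
    """
    Map a triaged finding to a documented response option. The flags and
    ordering here are illustrative, not a definitive decision tree.
    """
    if finding.get("compromise_suspected") or finding.get("priority") == "contain-now":
        return ResponseOption.QUARANTINE
    if finding.get("explicitly_prohibited"):
        return ResponseOption.REMOVE
    if finding.get("legitimate_business_need") and finding.get("meets_approval_criteria"):
        return ResponseOption.APPROVE
    if finding.get("legitimate_business_need"):
        return ResponseOption.RESTRICT
    return ResponseOption.REMOVE
```

Whatever the outcome, approvals and restrictions should feed back into the baseline so the same finding does not resurface as noise next week.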
Now do a mini-review by restating the triage steps in your own words, because triage is where detection becomes useful action. You confirm the signal using reliable telemetry and determine what exactly executed, where it ran, and what identifiers and publishers are involved. You correlate with deployment and user context to separate sanctioned activity from unauthorized introduction and to assess whether the event is isolated or part of a pattern. You assess risk based on capabilities, privilege level, and external communication, and you assess business impact based on the role of the system and the workflow dependencies. You then choose a response option that matches the risk and impact, documenting the decision and any follow-up actions such as baseline updates, approval review, or policy enforcement improvements. If the tool is prohibited or suspicious, you prioritize containment and evidence preservation while you investigate further. If the tool is legitimate but unapproved, you treat it as a governance and process improvement opportunity rather than only as an enforcement event. When you can state triage steps clearly, your team will apply them consistently, which is what makes detection sustainable.
To keep the program operationally visible, pick one dashboard metric to monitor daily, because daily monitoring prevents slow drift from becoming normal. A strong daily metric is one that signals change patterns, such as the count of new unapproved executables observed, the number of unique rare publishers introduced, or the number of baseline mismatches on high-risk systems. The metric should be scoped enough that it can be reviewed quickly, and it should have a defined response expectation so it does not become a number that no one acts on. Daily monitoring is not about reacting to every blip; it is about noticing trends early and verifying that controls are working as intended. If daily metrics suddenly spike, it may indicate a new deployment channel, a process bypass, or an active threat, and catching that early reduces risk. If daily metrics fall to zero unexpectedly, it may indicate telemetry gaps or misconfigured detection, which is also a risk. The purpose is situational awareness for software change, because software change is where exposure often enters.
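A daily metric check can be a few lines that compare today's count against recent history; the fourteen-day window and the spike factor in the sketch below are illustrative assumptions.

```python
from statistics import mean


def check_daily_metric(history, today_count, spike_factor=3):
    """
    Review one daily metric, such as the count of new unapproved executables.
    A sudden spike and an unexpected drop to zero both warrant a look.
    history is assumed to be a list of prior daily counts, newest last.
    """
    recent = history[-14:]                      # last two weeks of daily counts
    typical = mean(recent) if recent else 0

    if today_count == 0 and typical > 0:
        return "investigate: possible telemetry gap or broken detection"
    if typical and today_count > spike_factor * typical:
        return "investigate: possible new deployment channel, bypass, or active threat"
    return "normal: record the value and move on"
```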
To conclude, detecting unauthorized software quickly depends on establishing role-aware baselines, collecting discovery signals through E D R telemetry, logs, and package manager data, and watching change patterns that reveal what is new, rare, or out of place. Anomaly spotting focuses on new executables, unusual publishers, and odd paths, while careful tuning prevents alert fatigue and ensures rare events are not ignored. Weekly diffs against approved baselines provide a practical change report, and correlation with user actions and deployment records turns raw signals into context-rich decisions. Structured triage by risk, prevalence, and business impact supports proportional response without immediate blame, and documented response options keep actions consistent across analysts and teams. The memory anchor baseline plus delta equals detection keeps the workflow simple and guides continuous tuning. Now tune signals for clarity by reducing noise and improving baseline accuracy, because a detection program only protects the enterprise when analysts can see meaningful change and respond quickly with confidence.