Episode 43 — Protect backups as high-value targets: access controls, encryption, and isolation strategy
Backups are supposed to be the safety net, but modern attackers treat them like the first line of defense they need to cut through. In this episode, we begin with a simple but sobering reality: if an adversary can encrypt production data and also delete or corrupt your backups, they control the outcome of the incident. That is why backup security is not a storage discussion or a routine operations checkbox. It is an adversary-focused control area that deserves the same rigor you apply to identity systems, privileged access, and critical data stores. The goal is to preserve recovery options under hostile conditions, not just to meet a retention policy on paper. When you design backup protections, assume the attacker understands your environment and will look for backup consoles, backup credentials, and high-value repositories. A backup you cannot trust or cannot access safely during an incident is not a backup; it is a false sense of continuity.
Before we continue, a quick note: this audio course has two companion books. The first covers the exam and provides detailed guidance on how best to pass it. The second is a Kindle-only eBook containing 1,000 flashcards you can study on your mobile device or Kindle. Check them both out at Cyber Author dot me, in the Bare Metal Study Guides Series.
The first mindset shift is to treat backups as sensitive data, because that is exactly what they are. Backups often contain complete copies of databases, file shares, and system images, which means they concentrate sensitive content into fewer places. They can include personal data, authentication artifacts, configuration secrets, or older records that no longer exist in production but still carry risk. From an attacker's perspective, backups are valuable both for destruction and for theft, because stealing a backup can be more efficient than scraping production systems. That reality should drive strong access restrictions, because the impact of unauthorized access to backups is usually broader than the impact of unauthorized access to a single application. Once you classify backups as sensitive, you naturally stop treating backup repositories as lower tier infrastructure and start treating them as controlled vaults. This mindset also affects how you talk about them in risk discussions, because compromise of backups can expand the incident from availability impact to confidentiality impact. A good program makes it clear that backups are part of your most sensitive data landscape.
Strong access restrictions start with the principle that most people and most services should not be able to touch backups at all. Backup access should be constrained by role, by function, and by time, meaning that people should only have the minimum permissions needed to perform their job and only during the windows when those actions are required. The backup administration plane should not be reachable from general user networks, and it should not share the same identity boundaries as everyday operational tooling. Even within an infrastructure team, you want separation between those who can initiate backup jobs, those who can modify backup policies, and those who can delete or expire backups. This is where least privilege becomes tangible, because the consequences of a single overly broad permission can be catastrophic during a ransomware incident. Access control also extends to API permissions, because many modern backup operations are automated through APIs that can be abused if tokens are stolen. When you design access restrictions, think about how an attacker would use stolen credentials and then remove the most dangerous paths.
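To make that tangible on paper, here is a minimal sketch of a backup operator policy, assuming an AWS-style environment; the bucket name, account details, and role split are hypothetical illustrations, not a recommendation for any specific product. The shape to notice is an allow list for routine work paired with an explicit deny on the destructive paths.

    import json

    # Minimal sketch of a least-privilege policy for a backup *operator* role.
    # The operator can read backups and verify results, but is explicitly
    # denied deletion and retention changes; those require a separate role.
    # Bucket name and actions shown are illustrative assumptions.
    backup_operator_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowReadAndVerify",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket", "s3:GetObjectRetention"],
                "Resource": [
                    "arn:aws:s3:::example-backup-vault",
                    "arn:aws:s3:::example-backup-vault/*",
                ],
            },
            {
                "Sid": "DenyDestructivePaths",
                "Effect": "Deny",
                "Action": [
                    "s3:DeleteObject",
                    "s3:DeleteObjectVersion",
                    "s3:PutBucketLifecycleConfiguration",
                    "s3:PutObjectLockConfiguration",
                ],
                "Resource": "*",
            },
        ],
    }

    print(json.dumps(backup_operator_policy, indent=2))

The explicit deny matters because it wins over any allow the operator might pick up from another group membership, which is exactly the kind of quiet privilege accumulation attackers exploit.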
Encryption is the next layer, and it must be applied both at rest and in transit to protect backup content from interception and unauthorized reading. Encrypting backups at rest means that even if someone can access the storage, the data remains protected without the keys. Encrypting in transit means that moving backups across networks does not expose them through packet capture, misconfigured proxies, or compromised intermediate systems. The most practical approach is to use managed keys, because key lifecycle and access control are as important as the encryption algorithm itself. Managed key services typically provide centralized control, auditability, rotation support, and the ability to set strict policies about who can use keys and from where. The key point is that encryption should not be a box you check; it should be a deliberate design where the ability to decrypt backups is tightly controlled and logged. If any administrator can retrieve keys casually, you have not meaningfully reduced risk. Proper backup encryption makes theft less valuable and reduces the blast radius when storage access boundaries fail.
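As a sketch of what "both at rest and in transit" can look like in practice, here is an example using boto3 against a hypothetical bucket and customer-managed key; other platforms expose equivalent settings under different names.

    import json
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-backup-vault"  # hypothetical repository name
    KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE"  # hypothetical CMK

    # At rest: default server-side encryption with a customer-managed KMS key,
    # so decryption is governed by the key policy rather than storage access alone.
    s3.put_bucket_encryption(
        Bucket=BUCKET,
        ServerSideEncryptionConfiguration={
            "Rules": [{
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KEY_ARN,
                }
            }]
        },
    )

    # In transit: refuse any request that arrives over plain HTTP.
    s3.put_bucket_policy(
        Bucket=BUCKET,
        Policy=json.dumps({
            "Version": "2012-10-17",
            "Statement": [{
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            }],
        }),
    )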
Key management deserves a careful mental model because attackers increasingly target keys and key usage permissions rather than trying to break encryption. If an attacker can use your managed keys through your own permissions model, they can decrypt backups as easily as legitimate operators. That is why you want strict policies around key usage, including separation between key administrators and backup operators, and restrictions that prevent keys from being used from unexpected environments. You also want clear procedures for what happens when you suspect key compromise, because recovering trust often requires revocation, rotation, and re-encryption decisions that can be operationally heavy. The point of managed keys is not merely convenience; it is the ability to enforce and audit policy at the key layer. In mature designs, backup repositories and backup encryption keys are treated as coupled assets, with aligned ownership and aligned access controls. When you do this well, you gain an important security property: even if storage access is breached, the attacker still faces a controlled barrier at the key layer. That barrier buys time, reduces impact, and improves incident response options.
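Here is a hedged sketch of that coupling at the key layer, again assuming AWS KMS with hypothetical role names; a production key policy would also retain an account-level statement so administrators cannot lock themselves out of the key entirely.

    import json
    import boto3

    kms = boto3.client("kms")

    # Key administrators manage the key but cannot use it to decrypt; the
    # backup writer can use the key, but only via the storage service, so a
    # stolen token cannot decrypt backups from an arbitrary environment.
    key_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "KeyAdministration",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:role/BackupKeyAdmin"},
                "Action": ["kms:DescribeKey", "kms:EnableKeyRotation",
                           "kms:PutKeyPolicy", "kms:ScheduleKeyDeletion"],
                "Resource": "*",
            },
            {
                "Sid": "BackupUseOnly",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::111122223333:role/BackupWriter"},
                "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
                "Resource": "*",
                "Condition": {
                    "StringEquals": {"kms:ViaService": "s3.us-east-1.amazonaws.com"}
                },
            },
        ],
    }

    kms.put_key_policy(
        KeyId="EXAMPLE-KEY-ID",   # hypothetical key identifier
        PolicyName="default",     # the only policy name KMS accepts
        Policy=json.dumps(key_policy),
    )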
Isolation is what keeps backup integrity intact when attackers are inside your environment. Isolation can be physical, logical, administrative, or some combination, but the outcome you want is that compromise of production or common administrative accounts does not automatically grant the ability to alter or delete backups. Immutable storage is a key technique here, because it prevents modification or deletion for a defined retention period, even by users who normally have broad permissions. Separate administrative accounts are another key technique, because they prevent a single compromised credential set from controlling both production and backup planes. Isolation also includes network boundaries, such as keeping backup storage endpoints off common networks and limiting management interfaces to controlled paths. The overarching idea is to make backup tampering require additional steps and additional privileges beyond what an attacker typically gains early in an intrusion. If backups are isolated properly, ransomware can still encrypt production systems, but it cannot easily erase the recovery path. That difference often determines whether an incident is a disruption or an existential crisis.
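Immutability in code can be as small as one call. This sketch uses S3 Object Lock in compliance mode as an example; the bucket name is hypothetical, the 30-day window is illustrative, and it assumes the bucket was created with Object Lock and versioning enabled.

    import boto3

    s3 = boto3.client("s3")

    # Default retention in COMPLIANCE mode: for the retention window, nobody,
    # including administrators, can shorten retention or delete the versions.
    s3.put_object_lock_configuration(
        Bucket="example-backup-vault",  # hypothetical repository name
        ObjectLockConfiguration={
            "ObjectLockEnabled": "Enabled",
            "Rule": {"DefaultRetention": {"Mode": "COMPLIANCE", "Days": 30}},
        },
    )

Compliance mode is the adversarial choice here: governance-style modes that privileged users can override are convenient, but they fail exactly when a privileged credential is what the attacker holds.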
Designing isolation also means being honest about administrative convenience, because convenience is where isolation quietly disappears. If the same privileged identity can administer production compute, modify backup policies, and delete backup repositories, then your environment is one phishing email away from a complete loss scenario. Separate administrative accounts are not just separate usernames; they are separate privilege domains, protected with stronger authentication, tighter access windows, and stricter monitoring. Immutability also needs careful configuration, because if immutability can be disabled casually or if retention windows are too short, you have created a feature that looks good in a report but fails under adversarial pressure. Isolation strategies should also consider how you will restore, because the restoration workflow must be possible without reintroducing the same risks that caused the failure. In other words, you want isolation that preserves operational capability while resisting attacker control. If you cannot restore safely, isolation has not been designed as part of the full lifecycle.
A practical exercise is to design backup access for operators who need to do their job without full administrative privilege. Operators may need to start restore jobs, verify backup success, and investigate failures, but they should not necessarily be able to change retention policies, disable immutability, or delete repositories. This is where role design matters, because you can build a model where routine tasks are possible under constrained permissions, while sensitive configuration changes require a separate approval path and a higher privilege role. You can also design time-bound elevation, where operators gain additional permissions only when a ticketed change is approved and only for a limited duration. The goal is to reduce the number of standing permissions that could be abused if an operator account is compromised. This is also a good place to use separation of duties, where one person initiates a change and another person approves it, reducing the chance of a single compromised identity causing irreversible harm. When operators can do routine work safely, the organization is less likely to bypass controls in the name of speed.
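Time-bound elevation is easy to sketch with a short-lived session, shown here with AWS STS and a hypothetical role and ticket number; most privileged access tools implement the same pattern.

    import boto3

    sts = boto3.client("sts")

    # Elevation is granted per approved change, not held as a standing
    # permission. The session name ties the elevation to a ticket, and the
    # credentials expire after one hour regardless of what the operator does.
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/BackupPolicyAdmin",  # hypothetical
        RoleSessionName="change-CHG-1234",  # hypothetical ticket reference
        DurationSeconds=3600,
    )
    creds = resp["Credentials"]
    elevated_s3 = boto3.client(
        "s3",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )
    # Sensitive configuration changes run through elevated_s3 and nothing else.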
The common pitfalls in backup protection tend to cluster around the idea that backups are internal and therefore safe. Leaving backups online and broadly accessible is the classic mistake, because it creates a reachable target that an attacker can enumerate and destroy. Another pitfall is allowing backup consoles to be reachable from general administrative networks, where compromise of any admin workstation becomes a path to backup control. A third pitfall is failing to protect backup credentials, such as storing them in places where they can be extracted from scripts, configuration files, or automation pipelines. Organizations also sometimes assume that because backups exist, they are automatically restorable, without validating that the backups are intact, uncorrupted, and accessible under incident conditions. These pitfalls compound, because broad access often leads to poor monitoring, and poor monitoring means you discover tampering only when you try to restore. By the time you realize backups are gone, you are negotiating with attackers or rebuilding from scratch. Avoiding these pitfalls is not glamorous work, but it is foundational to resilience.
One quick win that delivers real security value is separating backup credentials from normal accounts. Backup credentials should not be the same identities used for day-to-day administration, and they should not be shared across environments in ways that allow lateral movement. Separate credentials make it harder for an attacker who compromises common accounts to immediately gain backup control, and they also allow you to apply stronger authentication and stricter access policies to the backup plane. This separation can include dedicated service accounts for backup operations and dedicated human accounts for backup administration, each with tightly scoped permissions. Credential separation also enables better monitoring, because backup actions become easier to distinguish from normal operational activity. When you combine separate credentials with isolation and immutability, you create layered defenses that force an attacker to cross multiple boundaries to destroy recovery options. Even if you cannot rebuild every aspect of the program immediately, separating credentials is a meaningful step that reduces single point of failure risk. It also tends to be achievable without major architecture change, which makes it a practical starting point.
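One way to encode that separation, sketched here with hypothetical names in an AWS-style environment, is a trust policy that only lets named backup administrators assume the backup-admin role, and only with MFA present, so a stolen day-to-day credential is not enough on its own.

    import json
    import boto3

    iam = boto3.client("iam")

    # Only the named identity may assume the backup-admin role, and only
    # with MFA; day-to-day admin roles are deliberately not listed here.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:user/backup-admin-alice"},
            "Action": "sts:AssumeRole",
            "Condition": {"Bool": {"aws:MultiFactorAuthPresent": "true"}},
        }],
    }

    iam.create_role(
        RoleName="BackupAdmin",  # hypothetical dedicated privilege domain
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )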
Monitoring is what tells you whether your controls are holding when the environment is under stress or attack. Backup actions should generate logs that you treat as high-signal events, because deletion attempts, retention changes, and access spikes often indicate malicious intent. You want to monitor for unusual deletions, for changes to retention periods, for disabling immutability features, and for unexpected access patterns such as access from unfamiliar networks or at unusual times. You also want to monitor for spikes in restore attempts, because attackers sometimes test restoration workflows or attempt to exfiltrate backup data by restoring it to systems they control. Effective monitoring is not just collecting logs; it is building detection logic that understands what normal looks like for backup operations. That includes expected backup windows, expected administrative actions, and expected service account behaviors. When monitoring is strong, you gain early warning before backups are fully compromised, and early warning can be the difference between containment and catastrophe.
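To show what high-signal detection logic might look like, here is a small triage function over CloudTrail-style audit events; the event names, backup window, and alert strings are illustrative assumptions you would tune to your own platform and baselines.

    from datetime import datetime

    # Actions that should page someone when they touch a backup repository.
    HIGH_SIGNAL = {
        "DeleteObject", "DeleteBucket", "PutBucketLifecycle",
        "PutObjectLockConfiguration", "ScheduleKeyDeletion", "DisableKey",
    }
    BACKUP_WINDOW_HOURS = range(1, 5)  # expected job window, 01:00-04:59 UTC

    def triage(event):
        """Return an alert string for a CloudTrail-style audit event, or None."""
        name = event.get("eventName", "")
        actor = event.get("userIdentity", {}).get("arn", "unknown")
        hour = datetime.fromisoformat(event["eventTime"].rstrip("Z")).hour
        if name in HIGH_SIGNAL:
            return f"ALERT: high-signal action {name} by {actor}"
        if name == "RestoreObject" and hour not in BACKUP_WINDOW_HOURS:
            return f"REVIEW: off-hours restore activity by {actor}"
        return None

    # Example: a retention or immutability change outside normal tooling.
    print(triage({
        "eventName": "PutObjectLockConfiguration",
        "eventTime": "2024-05-01T02:13:00Z",
        "userIdentity": {"arn": "arn:aws:iam::111122223333:user/unknown"},
    }))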
A mental rehearsal of a ransomware attempt helps clarify what you need to detect and what you need to prevent. Imagine an attacker has gained privileged access and begins to encrypt production systems while simultaneously racing to delete backups. Their likely steps include locating backup consoles, enumerating repositories, disabling immutability if possible, modifying retention to expire backups immediately, and deleting backup jobs or snapshots. They may also attempt to delete or corrupt backup catalogs, because that slows restoration even if the data still exists. Under this pressure, your defenses need to be layered, because you cannot rely on one control to stop a fast-moving adversary. Separate credentials slow credential reuse, immutability resists destructive actions, and monitoring provides detection signals that can trigger incident response actions. The rehearsal also highlights the importance of access path control, because if backup management interfaces are reachable broadly, the attacker’s search becomes trivial. Rehearsal is not about fear; it is about designing controls that anticipate the attacker’s sequence.
A useful memory anchor for teams is simple and operational: isolate, encrypt, restrict, monitor backups always. Isolation reminds you to break the direct control path from compromised production to backup destruction. Encryption reminds you that backups are sensitive and that theft is as real as deletion. Restrict reminds you that access should be narrow and deliberate, with separation of duties and minimal standing privileges. Monitor reminds you that controls need visibility, because silent failure is what turns minor compromises into unrecoverable disasters. This anchor is also helpful when teams argue about priorities, because it frames backup security as a set of evergreen principles rather than a collection of vendor features. If a proposed design does not satisfy each part of the anchor, it is probably incomplete. If a design satisfies the anchor and is testable, it is likely to be resilient. In practice, this anchor guides both architecture and daily operational decisions.
Backup inventories are often overlooked, yet they are critical for both governance and recovery execution. An inventory should capture where backups exist, what they contain, how long they are retained, and which services they are intended to restore first. Locations matter because backups can be spread across cloud regions, on-premises storage, third-party services, and offline media, each with different access controls and threat profiles. Retention matters because immutability and lifecycle policies depend on clear retention targets, and because keeping backups too long can create unnecessary risk and cost. Restoration priority matters because during an incident you cannot restore everything at once, and you need a defensible plan for what returns first. An inventory also helps with auditability, because it connects business services to backup repositories and makes it possible to demonstrate that backups exist and are governed. Just as importantly, inventories help during change, because new systems appear and old systems retire, and backups must follow those lifecycle shifts. If you do not know what you have, you cannot protect it consistently.
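An inventory does not need a product to get started. Here is a minimal sketch of an inventory record as a data structure, with hypothetical services and fields you would adapt to your environment.

    from dataclasses import dataclass

    @dataclass
    class BackupRecord:
        service: str           # business service the backup restores
        location: str          # region, site, or offline vault
        contents: str          # data classes held (e.g. PII, configs)
        retention_days: int    # drives immutability and lifecycle policy
        restore_priority: int  # 1 = restore first during an incident
        owner: str             # accountable team

    inventory = [
        BackupRecord("payments-db", "us-east-1", "PII, transactions", 35, 1, "dba-team"),
        BackupRecord("internal-wiki", "onprem-vault", "internal docs", 90, 3, "it-ops"),
    ]

    # During an incident, the inventory answers "what comes back first?"
    for rec in sorted(inventory, key=lambda r: r.restore_priority):
        print(rec.restore_priority, rec.service, rec.location)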
At a minimum, three controls keep backups trustworthy, meaning you can rely on them when you need them most. Strong access controls prevent unauthorized reading, modification, and deletion, and they enforce least privilege for both human and service identities. Encryption at rest and in transit protects confidentiality and ensures that storage compromise does not immediately become data disclosure, especially when key usage is tightly controlled. Isolation through immutability and separate administrative accounts protects integrity and availability, making it difficult for an attacker to erase recovery options quickly. These controls are mutually reinforcing: encryption mitigates access control failures, encryption alone cannot help once integrity is destroyed, and isolation holds even when credentials are compromised. Trustworthiness is also supported by monitoring and testing, because you need to detect tampering and confirm restorability, but the core three controls define the baseline security posture. When you can articulate these controls clearly, you can also evaluate vendor claims and architecture proposals more effectively. Trust is not a feeling here; it is a property you build through layered controls and verifiable evidence.
To turn this into action, pick one backup repository to harden this month and treat it like a pilot that sets the standard. Choose a repository that supports a critical service or contains highly sensitive data, because improvements there will have immediate risk reduction value. Start by validating who can access it today and whether those permissions are justified, then remove broad access paths that exist only for convenience. Ensure encryption is enabled both in transit and at rest, and confirm that key usage is restricted so decryption is not casually available to broad administrative roles. Then implement or strengthen isolation, such as enabling immutability with a retention period that matches your recovery needs and ensuring that immutability settings cannot be disabled by normal admin accounts. Finally, configure monitoring so that deletion attempts, policy changes, and access spikes are visible and actionable. By choosing one repository and hardening it fully, you create a pattern you can reuse across the rest of your environment. The goal is to convert general principles into a working, testable design in a real system.
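A read-only audit script is a good first artifact for the pilot, because it turns the checklist into something repeatable. This sketch checks three of the controls discussed above on a hypothetical S3-style repository; each lookup raises an error when the control is absent.

    import boto3
    from botocore.exceptions import ClientError

    s3 = boto3.client("s3")
    BUCKET = "example-backup-vault"  # the pilot repository, hypothetical name

    def check(label, call):
        """Run a read-only config lookup; the API raises if the control is absent."""
        try:
            call()
            print(f"PASS {label}")
        except ClientError as err:
            print(f"FAIL {label}: {err.response['Error']['Code']}")

    check("encryption-at-rest", lambda: s3.get_bucket_encryption(Bucket=BUCKET))
    check("immutability", lambda: s3.get_object_lock_configuration(Bucket=BUCKET))
    check("public-access-block", lambda: s3.get_public_access_block(Bucket=BUCKET))

Because the script only reads configuration, it can run on a schedule under a narrowly scoped identity, and a FAIL line becomes an actionable finding rather than a surprise during an incident.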
To conclude, protecting backups is about preserving recovery power in the face of an intelligent adversary who understands that backups are the last line of defense. When you treat backups as sensitive data and restrict access tightly, you reduce the chance of theft and unauthorized tampering. When you encrypt backups at rest and in transit with managed keys, you reduce the damage if storage boundaries fail and you improve auditability of key usage. When you isolate backups using immutable storage and separate administrative accounts, you make it much harder for ransomware to erase recovery options quickly. When you monitor backup actions and maintain an accurate inventory, you improve detection, response, and restoration prioritization under pressure. The next step is to audit backup access today, because an access review is often where you find the most dangerous hidden paths, and closing just one of those paths can be the difference between a successful recovery and a prolonged outage.