The 400-Zettabyte Question: Where Will Your Backup Data Live?

We crossed 180 zettabytes of global data in 2025, according to IDC’s Global DataSphere forecast. By 2028, we’ll hit almost 400 zettabytes. That’s not a typo. We’re more than doubling the world’s data footprint in three years.

While artificial intelligence dominates every conference keynote and vendor pitch, there’s a less glamorous truth driving this exponential growth: unstructured data. IDC predicts that 80% of global data will be unstructured by 2025: documents, images, video, backups. The ungoverned, sprawling reality of how organisations actually work. And within that tsunami, there’s one category that’s simultaneously mission-critical and dangerously underestimated: backup data.
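A quick sanity check on the implied growth rate (a back-of-the-envelope sketch in Python, using only the IDC figures above):

```python
# Implied compound annual growth rate from IDC's figures:
# 180 ZB in 2025 growing to ~400 ZB by 2028.
start_zb, end_zb, years = 180, 400, 3

cagr = (end_zb / start_zb) ** (1 / years) - 1
print(f"Growth multiple: {end_zb / start_zb:.2f}x over {years} years")  # 2.22x
print(f"Implied annual growth: {cagr:.1%}")                             # ~30.5%
```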
The Target on Backup’s Back

Newsflash. Ransomware operators got smarter. They stopped encrypting production systems first. Now, they hunt for backups. Compromise the backup infrastructure, encrypt or delete recovery points, then strike production. No backups, no recovery options, maximum leverage.
The results are devastating. Veeam’s 2025 Ransomware Trends Report found that 69% of organisations experienced a ransomware attack in the past year. But here’s the gut-punch: of those attacked, only 10% recovered more than 90% of their data, while 57% recovered less than 50%. Read that again. More than half of ransomware victims lose the majority of their data permanently.
The one thing designed to save you has become the primary target of the attacker. Yet how many organisations treat backup storage with the same security posture as their production environment? How many can confidently answer where their backup data physically resides, under which jurisdiction it’s governed, or who has access to it?
These aren’t rhetorical questions anymore. They’re audit requirements.
Regulation Is Rewriting the Rules
The compliance landscape has fundamentally shifted. GDPR was the opening act. Now we’re seeing data residency mandates proliferate globally: Australia’s Privacy Act amendments, Brazil’s LGPD enforcement, the EU’s Data Act, regional requirements across the Middle East and Asia-Pacific. The common thread? Organisations must demonstrate not just data protection, but data sovereignty. You need to prove where data lives, how it’s protected, and who governs access.
For managed service providers and enterprises relying on centralised hyperscale storage, this creates a painful paradox (AWS’s us-east-1 region, anyone?). The very model that promised simplicity through centralisation now delivers complexity through opacity. When your backup data sits in someone else’s data centre, in someone else’s jurisdiction, subject to someone else’s access policies, compliance becomes a confidence game you can’t win.
Proximity Isn’t Just About Latency

The conversation around data proximity usually focuses on performance. Lower latency, faster recovery time objectives. Those matter. But proximity delivers something more fundamental: control.
When backup data resides locally, in-region, on infrastructure you manage, several things happen simultaneously. You can answer the auditor’s question about data residency with absolute certainty. You can implement air-gapped recovery procedures without navigating hyperscaler APIs. You can enforce immutability at the storage layer. You can recover at LAN speeds, not internet speeds. And when a breach occurs, you’re not waiting for a support ticket to understand what happened to your backups.
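As an illustration of enforcing immutability at the storage layer, here’s a minimal sketch using the standard S3 Object Lock API via boto3. The endpoint, bucket, key, and 30-day retention are hypothetical; the same calls work against any S3-compatible store that supports Object Lock:

```python
from datetime import datetime, timedelta, timezone

import boto3

# Hypothetical in-region, S3-compatible endpoint under your own governance.
s3 = boto3.client("s3", endpoint_url="https://s3.backup.example.internal")

# Object Lock must be enabled when the bucket is created.
s3.create_bucket(Bucket="backups", ObjectLockEnabledForBucket=True)

# Write a recovery point in COMPLIANCE mode: no credential, including root,
# can delete or overwrite it until the retention date passes.
s3.put_object(
    Bucket="backups",
    Key="daily/2025-06-01.vbk",
    Body=open("2025-06-01.vbk", "rb"),
    ObjectLockMode="COMPLIANCE",
    ObjectLockRetainUntilDate=datetime.now(timezone.utc) + timedelta(days=30),
)
```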
This is why we’re seeing the emergence of what we call LocalScale architecture, distributed storage clusters deployed close to where data is created and consumed, delivering hyperscale capabilities without hyperscale compromises. It’s not a return to legacy on-premises infrastructure, and it’s not edge computing. It’s purpose-built sovereign storage that sits in your region, under your governance, accessible at LAN speeds.
The 3-2-1-1-0 Reality Check
The industry standard has evolved. It’s no longer just 3-2-1 (three copies, two media types, one offsite). It’s now 3-2-1-1-0: add one immutable/air-gapped copy and zero errors in recovery testing. This isn’t gold-plating. It’s the minimum viable defence against modern ransomware tactics. Veeam’s 2025 research emphasises that organisations following this rule can recover from attacks up to seven times faster than those that don’t.
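That trailing zero is testable, not aspirational. Below is an illustrative sketch (the manifest format and paths are hypothetical) of an automated restore check that hashes every recovered file against checksums recorded at backup time:

```python
import hashlib
import json
from pathlib import Path

def sha256(path: Path) -> str:
    """Stream a file through SHA-256 so large restores don't exhaust memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# manifest.json maps relative file paths to checksums captured at backup time.
manifest = json.loads(Path("manifest.json").read_text())
restore_root = Path("/mnt/restore-test")

errors = [
    rel for rel, expected in manifest.items()
    if not (restore_root / rel).exists() or sha256(restore_root / rel) != expected
]

# 3-2-1-1-0 demands zero errors: fail the test run if anything mismatches.
assert not errors, f"Recovery test failed for {len(errors)} file(s): {errors[:5]}"
print(f"Recovery test passed: {len(manifest)} files verified, 0 errors")
```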
But here’s the economic reality nobody talks about: 3-2-1-1-0 dramatically increases backup storage requirements. That immutable copy? It can’t be deduplicated away or thin-provisioned. It sits there, consuming capacity, because that’s exactly what makes it resilient. For organisations already struggling with backup storage costs, this enhanced standard feels like choosing between security and budget.
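To put rough numbers on that, here’s an illustrative capacity model. The 100 TB dataset, 3:1 deduplication ratio, and copy layout are assumptions for the sake of the arithmetic, not vendor figures:

```python
# Illustrative capacity model for 3-2-1-1-0 protecting 100 TB of data.
protected_tb = 100
dedupe_ratio = 3.0   # assumed reduction on copies that can be deduplicated
copies_deduped = 3   # primary backup + second media type + offsite copy

deduped_tb = copies_deduped * protected_tb / dedupe_ratio   # 100 TB
immutable_tb = protected_tb                                  # locked copy stays full size

total_tb = deduped_tb + immutable_tb
print(f"Deduplicated copies: {deduped_tb:.0f} TB")
print(f"Immutable copy:      {immutable_tb:.0f} TB")
print(f"Total footprint:     {total_tb:.0f} TB "
      f"({total_tb / protected_tb:.1f}x the protected data)")
```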
“The cost crisis in backup storage isn’t a future problem, it’s happening right now,” says Peter Boyle, CEO & Co-Founder of Exaba. “MSPs are being asked to deliver more copies, with immutability, under stricter compliance, while maintaining margin. The hyperscale model wasn’t built for this. We founded Exaba with a single-minded focus: make sovereign (local), secure backup storage economically viable for service providers and enterprises. Not as a luxury, but as standard infrastructure they can build their business on.”
This is where service providers face a critical inflection point. Your customers need 3-2-1-1-0 compliance. They need sovereign, local storage for regulatory requirements. They need immutability for ransomware protection. And they need all of this without their backup costs spiralling out of control.
Traditional hyperscale providers built their models for different workloads. Their pricing assumptions don’t account for the retention requirements, immutability needs, or data sovereignty mandates that backup now demands. MSPs end up trapped: deliver the security customers require, or maintain margin. Pick one.
The Economics of Taking Control
This is precisely why LocalScale architecture changes the game for service providers. When you can deliver S3-compatible backup storage at <$2 per terabyte per month with no egress fees, no surprise charges, and full sovereignty control, the 3-2-1-1-0 equation shifts from “impossible” to “profitable.”
For MSPs, this isn’t just about cost reduction. It’s about value creation. You’re not reselling someone else’s infrastructure at shrinking margins while explaining unpredictable invoices to angry customers. You’re delivering a branded, sovereign backup service that solves real problems: compliance, security, performance, and cost predictability. That’s a conversation where you control the relationship and the economics.
The hyperscale objection, “but their scale makes them cheaper,” collapses under scrutiny. Cheaper for whom? When you factor in egress fees for recovery scenarios (exactly when customers are most vulnerable), repatriation costs for compliance audits, API charges for immutability features, and the operational overhead of managing opaque pricing models, the true cost of hyperscale backup becomes clear. And it’s rarely in the service provider’s favour.
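A simple worked comparison makes the point. All rates below are illustrative placeholders: the $2/TB/month figure comes from the section above, while the per-GB storage and egress prices are assumptions, not any provider’s published rates:

```python
# Illustrative annual economics for 500 TB of backup storage, with one
# full 500 TB recovery event during the year (hypothetical rates throughout).
capacity_tb = 500

# Hyperscale-style model: per-GB storage plus per-GB egress on recovery.
hyper_storage = capacity_tb * 1000 * 0.023        # $0.023/GB/month (assumed)
hyper_egress = capacity_tb * 1000 * 0.09          # $0.09/GB egress (assumed)
hyper_annual = hyper_storage * 12 + hyper_egress

# Flat sovereign model: fixed $/TB/month, no egress fees on recovery.
flat_annual = capacity_tb * 2 * 12                # $2/TB/month

print(f"Hyperscale: ${hyper_annual:,.0f}/yr "
      f"(incl. ${hyper_egress:,.0f} recovery egress)")
print(f"Flat local: ${flat_annual:,.0f}/yr")
```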
Backup as Strategic Infrastructure

As we hurtle toward 400 zettabytes, the industry needs to reframe how we think about backup. It’s not a boring necessity in the corner of the budget. It’s the foundation of resilience. The last line of defence. The asset attackers prioritise above everything else.
That demands a different architecture. One built on proximity, transparency, and control. One where you know exactly where every byte resides. One where recovery doesn’t require permission from a hyperscaler or navigating geopolitical access restrictions.
The future of backup isn’t about backing up to the cloud. It’s about backing up to your cloud: distributed, sovereign, and secure. Because when the 400-zettabyte question lands on your desk, the answer shouldn’t be “I think it’s somewhere in AWS.” It should be “I know exactly where it is, and I control it.”
The data explosion is here. The regulatory pressure is intensifying. The threat landscape is evolving. So, the million-dollar question… where will your backup data live?