Insight

Maintaining data usability and security

Dave Russell, senior vice president and head of strategy at Veeam Software, tells Comms Business about the challenges of balancing data demands with regulation.

Securing data while also keeping it accessible has been a complex balancing act since, quite literally, records began. And, it’s only getting harder to stay on top of. The last few decades have seen an explosion in the sheer amount of data being collected, stored and used. If that wasn’t enough to contend with, data is about to go through another growth spurt as artificial intelligence (AI) reaches widespread adoption. 

Naturally, governments across the globe are trying to keep up by implementing a flurry of data-focused regulations. Despite the good intentions, they are putting organisations under immense pressure as they struggle to comply with everything being thrown their way while ensuring their data resilience processes evolve in the face of AI. Not only that, channel partners are having to adapt quickly as they work to find their role in this new world of AI-led process and regulation. 

As 2025 begins, enterprises and channel partners alike are having to find a balance between keeping data secure, resilient and usable, all while navigating evolving regulations and the rapid, and often unchecked, adoption of AI. 

Not just any data

The pressures on enterprise data have never been greater, as AI depends on accessible, accurate and usable data at all times. As the hype around the flashier applications has died down, organisations have rushed to adopt AI and unlock new business value from their existing data. According to the latest McKinsey Global Survey on AI, 65 per cent of respondents worldwide reported that their organisations are already regularly using AI. But what does this mean in practice for data resilience?

It's common knowledge that AI relies on data. And not just any data: accurate and relevant data. Most AI applications require live access to a data pool to analyse and react to changes in real time. The smallest inaccuracy or inconsistency in data across an organisation can quickly render AI’s output useless. If it receives incorrect data, it will produce an incorrect output. Preventing the input of sensitive, mission-critical or customer data is also paramount. There’s very much still a balance to be struck as more and more organisations embrace AI.

Despite the pressures they bring, the wave of regulations demanding greater data resilience and responsibility both in AI and more broadly will help. These regulations, including the EU AI Act and the NIS2 Directive, place increased responsibility on organisations to ensure data security. They also focus largely on extending the line of custody that organisations have on their data, requiring them to consider how it will be secured when used with new technologies such as AI. 

It’s important to bear in mind that when organisations started collecting data, AI wasn’t even a practical reality, let alone something that might use said data. While these new considerations fall primarily under the responsibility of information governance teams, achieving compliance with AI-related regulations will require effort across the entire organisation. And all of this must happen while ensuring that relevant teams still have access to the data they need to innovate and grow.

Not quite as bad as it seems? 

As organisations and their channel partners navigate the balance between suitable speed of access to data and maintaining data resilience in line with regulation, they might fear it’s a never-ending task. However, it’s not so different from challenges they have faced before, just wrapped up in a new set of systems and circumstances. AI might be about to reinvent the technological wheel, but businesses don’t need to in order to keep up.

The core issue is constant and the principles remain, but the environment, the technology and the scale keep changing. According to the Veeam Data Protection Trends Report 2024, 76 per cent of organisations recognise a protection gap between how much data they can afford to lose and how often their data is protected. While this gap has been shrinking in recent years, AI’s data boom could widen it again unless action is taken.

Team collaboration, spanning data governance, security, IT and production, has always been, and continues to be, vital to staying on top of data resilience. Creating a new set of business risk assessments together will lead the way forward for organisations using data in AI models.

Although they bring additional work for organisations and their channel partners, these regulations are perfectly timed, demanding a re-evaluation of data security practices as we enter an AI boom, which brings with it heightened security threats. But organisations shouldn’t be reliant on new regulations to prompt investment in resilience. Monitoring and adjusting to risk levels should be a regular, ongoing process. 

Going back to backup

As is often the case, it comes back to data backups. Backups are a key aspect of modern data regulation, and their role will only get bigger in 2025. They will provide those teams developing AI and large language models with a much-needed anchor in a constantly changing environment.

One of the most useful tools in the data toolbox, backup keeps data accurate, secure and usable while also providing a comprehensive record for organisations to prove their adherence to regulations. It’s an increasingly rare source of truth when dealing with AI, whose very nature makes it difficult to account for exactly how it has used the data it has been fed or trained on. But, by using data backups, organisations can account for the security of their data at any given time and place.

Sadly, it’s impossible to achieve complete security when it comes to data; weighing risk against reward will always be a consideration for enterprises and their channel partners alike. But quality data backups provide the best safety net possible, catching you so that any slip-ups are righted as soon as possible.

 
