Maximizing Value from AI Deployments Requires Rethinking Storage Strategy

For artificial intelligence and machine learning technologies to do their important work, they need data – a lot of data. The same holds true for big data analytics. The amount of data generated and replicated worldwide was expected to exceed 70 zettabytes last year (for context, that’s a 70 followed by 21 zeros, in bytes) – and that number continues to soar.

The information and insights within this data can be incredibly valuable, sometimes representing once-in-a-lifetime events and often irreplaceable. That means this data must be stored safely and securely, driving demand for storage capacity. 

But data generation is expected to increase at a compound annual rate of 23% through 2025, according to IDC analysts – outpacing the 19.2% growth of global storage capacity. How do enterprises grapple with this conundrum? It requires a new approach to storage, one that is secure, reliable and infinitely scalable. Enter fast object storage.

Let’s examine three data-intensive use cases that could benefit from such an approach.

Digging into digital pathology

Digital pathologists handle massive amounts of data. A single whole-slide image can be a gigabyte or larger, and once analyzed, that single slide can generate still more data. Historically, the field has struggled to put these tremendous volumes to work – doing so would enable pathologists to automatically detect pathologies in tissue samples, perform remote diagnostics and so on. But current storage paradigms are limiting usage: images with useful resolution are too large to store economically.

However, fast object storage will enable new capabilities – like image banks that can serve as a key training resource, and the use of space-filling curves to name, store and retrieve multiresolution images in an object store. It also enables extensible and flexible metadata tagging, which makes it easier to search for and make sense of this information.
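To make the space-filling-curve idea concrete, here is a minimal sketch in Python, assuming a Z-order (Morton) curve; the key layout and helper names are invented for illustration and are not any vendor’s actual scheme. Interleaving the bits of a tile’s x/y coordinates yields object keys that keep spatial neighbors adjacent, so a region of interest can be fetched with a narrow key-range listing.

    def morton_encode(x: int, y: int, bits: int = 16) -> int:
        # Interleave the bits of x and y into a Z-order (Morton) index,
        # so tiles that are close in 2-D image space get nearby keys.
        code = 0
        for i in range(bits):
            code |= ((x >> i) & 1) << (2 * i)
            code |= ((y >> i) & 1) << (2 * i + 1)
        return code

    def tile_key(slide_id: str, level: int, x: int, y: int) -> str:
        # Object-store key for one tile of a multiresolution slide image.
        # Zero-padding keeps lexicographic order equal to numeric order.
        return f"slides/{slide_id}/level-{level:02d}/{morton_encode(x, y):010d}"

    # Example: tile (3, 5) at resolution level 2 of a hypothetical slide.
    print(tile_key("case-0042", 2, 3, 5))  # slides/case-0042/level-02/0000000039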

The time-sensitive travel industry

After two years of lockdowns and restricted movement, the travel industry is working hard to get back to pre-pandemic levels. That’s driving the need for a better way to put data to use.

Consider the possibilities of knowing where most of the world’s airline travelers will go next – or where they’re going tomorrow. That would be huge for a travel agency. But sorting through the volumes of data is a massive undertaking: about a petabyte of data is generated each day, and some of it is duplicated across sites like Kayak. This data is also time-sensitive, so travel companies need to quickly discover which of it is meaningful. They need a way to manage this level of scale more effectively.

The safety-first automotive industry 

A use case that gets some of the biggest headlines is autonomous vehicles. Cars today are computers on wheels, and the industry has been hard at work on driver-assistance tools like lane keeping, collision avoidance and the like. These tools rely on sensors, all of which bring in great quantities of data. And that’s not to mention the development, testing and verification of self-driving algorithms.

To make optimal use of this stored data, the automotive industry needs a more effective way to analyze it – to better understand incidents where something went wrong, curate sensor outputs as test cases, test algorithms against sensor data and more. Older approaches to storage simply can’t keep up with this scale.
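As one illustration of what curating sensor outputs against an object store could look like, here is a hedged sketch using the standard S3 API via boto3, which most fast object stores expose; the endpoint, bucket, key and tag names are hypothetical placeholders, not a prescribed design.

    import boto3

    # Any S3-compatible object store works here; the endpoint is a placeholder.
    s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")

    # Attach descriptive metadata when a sensor clip is uploaded...
    with open("clip-0042.bag", "rb") as clip:
        s3.put_object(
            Bucket="drive-logs",                        # hypothetical bucket
            Key="vehicle-17/2022-06-01/clip-0042.bag",  # hypothetical key
            Body=clip,
            Metadata={"event": "hard-brake", "sensor": "front-lidar"},
        )

    # ...then mark it as a regression test case via object tags, so the
    # curated set can be pulled later without a separate database.
    s3.put_object_tagging(
        Bucket="drive-logs",
        Key="vehicle-17/2022-06-01/clip-0042.bag",
        Tagging={"TagSet": [{"Key": "testcase", "Value": "true"}]},
    )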

A new framework for AI workloads

These three use cases alone underscore the importance of being able to aggregate and orchestrate vast amounts of data related to AI/ML workloads. Data sets today often reach multi-petabyte scale, with performance demands that could saturate the whole infrastructure. To successfully make the most of this data, organizations must overcome storage bottlenecks and capacity limitations. 

AI/ML and deep learning (DL) workloads demand a new storage framework, one that can keep data flowing through the pipeline, with both excellent raw I/O performance and the ability to scale capacity. Storage infrastructure must keep pace with increasingly demanding requirements across all stages of the AI/ML/DL pipeline. That requires a fast object storage solution that’s specifically built for speed and limitless scale.
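To ground the idea of keeping data flowing, here is a minimal sketch of a training loop reading directly from an S3-compatible object store with boto3, rather than staging files to local disk first; the endpoint, bucket and prefix are invented for illustration.

    import boto3

    s3 = boto3.client("s3", endpoint_url="https://objectstore.example.com")  # placeholder

    def stream_samples(bucket: str, prefix: str):
        # Yield raw sample bytes object by object, so the training loop
        # consumes data straight from the store as it arrives.
        paginator = s3.get_paginator("list_objects_v2")
        for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
            for obj in page.get("Contents", []):
                body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"]
                yield body.read()

    for sample in stream_samples("training-data", "sensor-logs/2022/"):
        ...  # hand each sample to preprocessing / the training step

A real pipeline would parallelize these reads; the point is that an object store sustaining high raw I/O lets the compute tier, not the storage tier, set the pace.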

Realizing business value

Use cases continue to arise from the possibilities inherent in AI and ML – use cases that can change not only how business gets done but daily life as well. Some have already been tested and proven beneficial. However, enterprise AI initiatives typically involve vast data sets – and storage solutions that can’t handle them. Without a resolution to the storage issue, the automotive, healthcare and other sectors can’t implement the innovations they’re working toward. Fast object storage lets enterprises lift the burden of big data retention so they can get the insights they need from their treasure troves of data and realize real business value.


Authored by Brad King, CTO of Scality.
Brad King
Brad King, CTO of Scality. As field CTO, Brad is responsible for the design of the largest systems Scality deploys around the world, including multi-petabyte, multi-site systems with hundreds of servers. He is one of the co-founders of Scality and holds a PhD in naval architecture and marine engineering from the University of Michigan.


Brad King is an opinion columnist for the CEOWORLD magazine. You can follow him on LinkedIn.