Failure is a friend during AI deployments, says Hitachi Vantara’s Jason Hardy


After being exposed to AI startup publicity material for the better part of two years, one would be forgiven for thinking that models like ChatGPT or Claude could be thrown into the hot, bubbling mess of a corporate back-end system like a bouquet garni and successfully add a little je ne sais quoi. Most CIOs can’t afford such flights of fancy. The wisest among them realise that harnessing the potential of AI requires more than a prepackaged model: hard graft to clean, chop and sieve their firm’s data to make the whole proposition sing.

You also need to plan carefully, says Jason Hardy. The VP and CTO of AI at Hitachi Vantara has seen plenty of businesses crunch numbers using the latest generative models, only to realise that their envisioned applications are more limited than first assumed. That’s usually because the IT team has failed to consult all the stakeholders in the company who could be affected by the deployment, Hardy explained to Tech Monitor – a step that CIOs skip at their peril.

A data product veteran, Hardy has seen it all, from when data was considered the ‘new oil’, to the rush for analytics and virtualisation, to the ChatGPT and Nvidia-powered hype of the last few years. As he explains in the following interview with Tech Monitor, edited for length and clarity, AI deployments are nothing without good planning – not sustainable, not efficient, and certainly not financially sensible.

Involve all possible corporate stakeholders in your AI deployment, Jason Hardy urges CIOs. (Photo: Hitachi Vantara)

Let’s say you’re a CIO and you’ve been tasked by your superiors with deploying the latest and greatest AI within your company. Where do you start?

Businesses will already have the data, tools and information they need to cut through the noise and identify what they actually require. That’s enough to stop them walking blindly into an implementation based on hope – an approach that won’t result in being AI-enabled!

You need a plan. That requires pragmatism, intentionality and a willingness to both embrace failure and define what success should look like. It means roadmapping the application’s implementation in the same manner as any other digital transformation, using the same rules. It just might have to be a faster process because, yes, the market is always shifting – to the point where AI is changing who we are, fundamentally, as a society – but such deployments still demand a practical and methodical approach because of such wide-reaching impacts.

How do you do that without disrupting the core functions of the business?

Consciousness around sustainability and environmental concerns has to interact with bottom-line priorities. When implementing AI, we have to ask: how does it improve the business, and how can the business mature with the technology? Again, this is about understanding risk, and there will be some risk, as the technology is bleeding-edge.

My advice is: don’t use unfamiliar technology to change the core of the business. Start on the edge, and as you build maturity and skillsets, develop it further; fail on the edge, experiment on the edge, and then aim for the core. It might take years, it won’t happen overnight, and there will be more failures than successes, but that approach will keep your people sane. Businesses can’t forget the impact on people – who they are doing this for.

But what about the people who have to use these tools – how do you win their trust and support?

You bring them into the tent. AI is a very new paradigm, so having multiple opinions on how the technology should help is always valuable. Those can come from outside partners or from the right people within the organisation. Everyone from the legal and security departments to customer service can help craft the product, strategy and messaging behind the deployment. This isn’t just an IT responsibility, or that of a specific line of business – it’s an organisational responsibility.

Winning over receptive audiences is one thing. But what about those individuals or even companies that feel burned by previous failed deployments?

In that case, having a multi-voice approach becomes critical. Any AI deployment is, to a certain extent, a leap of faith, but you have to make it an educated one. You constantly have to ask yourself what the ROI is, what risks you could be taking, what rewards are in store, and whether or not your data estate is truly ready for the project. That process will naturally lead to a maturation of the organisation’s thinking on AI.

Trust in an application ultimately derives from its reliability and from those who bear responsibility for its operation. These are two inextricable concepts. If you build a more reliable platform in a responsible manner, you will naturally gain the trust of users. But it has to be built from the ground up, using the right data, the right governance, and the right privacy, fairness, safety, accountability and ethical standards. That’s the ecosystem the whole AI-build cycle should rest upon.

Read more: The only strong cybersecurity strategies are holistic cybersecurity strategies, says Black Duck’s Ishpreet Singh
