by Michael Voellinger
"It's computers, it has nothing to do with music. It can't destroy a hotel room, it can't throw a TV off the fifth floor into the pool and get it right in the middle. When AI knows how to destroy a hotel room, I'll pay attention to it."
That was Eagles guitarist and music icon Joe Walsh’s recent response when asked for his take on the intersection between AI and the music industry – a highly relevant question as AI-produced deepfakes have quickly become difficult to tell apart from the real thing.
Obviously, Walsh is a rock star, not a tech expert. But despite property damage not being part of the Turing test, he’s indirectly making an incredibly important point about AI and humanity: We are more than the sum of our parts. The technical barrier to putting Joe Walsh’s face and Joe Walsh’s voice and Joe Walsh’s songwriting style together using AI has gotten very low – yet while those elements combined may create a convincing replica, they do not amount to Joe Walsh.
To be clear, I'm not denying that AI – and generative AI specifically – is a fundamentally disruptive technology. There’s no question that we’ve reached a technological inflection point where machine learning is intersecting with high levels of compute and storage – a potent blend that will reshape just about everything we do over the next 3 to 5 years, with increasing velocity and depth of impact. However, as Joe Walsh implied, it's not artificial general intelligence or "strong AI." We're not launching Skynet just yet.
It's easy to get caught up in the excitement that generative AI has created – and in the trepidation regarding the new levels of risk it presents to individuals, companies, governments, and humanity. That is precisely why it's so important for executives to take on the active task of separating signals from noise and conveying that information to employees, peers, customers, and all stakeholders. This is an especially critical role for CIOs, CTOs, CISOs, Chief Digital Officers, and others with technical capability or risk responsibility. They remain the go-to source for counsel, guidance, and strategy related to AI.
In the course of our work and our discussions with senior leaders at the Spark Executive Forum, my colleagues and I have picked up on several trends in the ways that large enterprises are using and approaching generative AI. So what exactly are they doing?
Containing the beast
Unsurprisingly, many enterprises are approaching this technology with caution – attempting to contain or block access and usage of generative AI tools (our unscientific survey of Spark attendees showed that about 30% of them were outright blocking ChatGPT). Most organizations have been forced to implement a brand-new set of generative AI policies on the fly, and we’re also seeing the rapid incubation and standup of private generative tools.
It’s paramount for large organizations to mitigate the many risks related to data, intellectual property, compliance, privacy, and reputation. At the same time, it’s great to see so many companies making secure, private tools and platforms available to their employees. Doing so helps alleviate the dreaded “shadow IT” effect long term while granting the business and technical staff the opportunity to experiment, innovate, and take advantage of the opportunity.
Can we talk?
However aggressively you may be acting to mitigate AI risk internally, the same rigor may not extend to everyone in your enterprise supply chain. From Microsoft and Amazon to small manufacturers and logistics companies, your data is constantly on the move outside your organization. That makes it mandatory to assess generative AI opportunity and risk with strategic suppliers and partners. Transparency in relationships with your ecosystem participants has never been more critical, and it needs to cover more than just the risk of data exposure – extending to platform roadmaps, third-party integrations, and even ESG impact.
Show me the money!
Use cases are incredibly important in the generative AI equation. It’s thought-provoking (and a proof point that we’re still early in the cycle) that the projected impact of generative AI typically starts with the same categories that currently benefit from basic automation and machine learning – administrative tasks, back-office functions, marketing and sales, etc. Generative AI has the potential to add value in all of these categories and beyond, but these focus areas demonstrate how we're still thinking with a constrained "art of the possible."
We’re now seeing forward-looking organizations focusing on AI incubation and development driven by business value, very often in a CoE model or as an expansion of existing data and analytics functions. This focus is a refreshing change from previous "disruptions." I'm not picking on cloud, but if you moved there with the sole business case and motivation of reducing cost, I'm sorry and available to console you as needed.
Built for speed
Speed, agility, and iteration are immensely valuable to businesses – and AI is perhaps their ultimate enabler. The experimentation is fast and furious, and agile enterprises are having a field day iterating on the opportunities. If you can amplify the output of key groups in your organization (think DevOps, Marketing, Customer Service, etc.) by 100% or more, you’ve got line of sight to major uplifts in efficiency, scalability, resilience, and profitability.
By way of personal example at Yates Ltd, we’ve integrated generative AI into several of our processes, generating profound business value almost instantaneously. For instance, we've applied AI to several steps of the app rationalization process – accelerating analysis by more than 50%, enhancing the depth of analysis by more than 25%, and increasing savings outcomes by 15% or more. When you apply those statistics to thousands of titles and hundreds of millions in software spend, the result is a massive uplift in results for our clients and for our business.
Change is hard
Generative AI is a new catalyst for an old problem: Change is never easy, and you need to ensure you take everyone along on this journey – employees, customers, partners, investors, and so on. The senior executives we work with are, at the end of the day, responsible not just for implementing AI technology, but also for dealing with its consequences and managing the change it brings with it.
With employees, it’s important to get a head start on unwinding the fear and doubt about being replaced by a machine. Yes, there will always be casualties in the course of major change. However, the opportunity to upskill and fill the millions of jobs AI will create will overshadow those losses in the long run – so be upfront and transparent. The absence of change management and real, one-on-one communications will land you squarely in a different Joe Walsh quote: "I got myself in / the worst mess I've been / and I find myself startin' to doubt you."
Final thoughts
In a 1950 edition of Collier’s Weekly, Kurt Vonnegut wrote “EPICAC,” a very human AI love story:
“Clickety-clack, and out popped two inches of paper ribbon. I glanced at the nonsense answer to a nonsense problem: "23-8-1-20-19-20-8-5-20-18-15-21-2-12-5." The odds against its being by chance a sensible message, against its even containing a meaningful word of more than three letters, were staggering. Apathetically, I decoded it. There it was, staring up at me: "What's the trouble?"

I laughed out loud at the absurd coincidence. Playfully, I typed, "My girl doesn't love me."”
We’re just starting our story with AI, and like any maturing relationship, we need to respect the awesome potential it has to change our lives, good and bad. Read the signals, move forward with caution and purpose, and most of all remember that people will remain the key to AI success for a very long time.
Yates Ltd partners with senior executives to create the strategy, blueprints, financial mechanisms, and execution plans to drive and achieve transformation. Our clients gain measurable cost savings, new capabilities, and the ability to outperform the competition.