The AI Enigma, Part II: Evolving AI Standards
Cathy White, Head of AI and Automation
Standards are core to most organizations’ ability to deploy, implement, and improve technology. In the famous words of Taiichi Ohno, “Without standards, there can be no improvement.”
Through governing bodies and the hard work of internal enterprise technology teams, standards help us rationalize the world of hardware, software, infrastructure, and services. When it comes to AI, however, standards are hard to come by: it’s a nascent space, and there is little consensus on what the standards should be or who should set them.
The National Institute of Standards and Technology (NIST) is actively driving discussions with the public and private sectors around the development of federal standards to create building blocks for reliable, robust, and trustworthy AI systems. State lawmakers are increasingly focused on AI’s benefits and challenges, funding universities and private labs to study the impact of AI and algorithmic decision-making and the potential roles for policymakers.
While NIST and the NCSL (National Conference of State Legislatures) have offered some guidance on what needs to be done, the content is vague. In general, they advocate a “do no harm” approach and focus heavily on decision-engine systems, a subset of the AI space. There’s a very human reason behind this dynamic: people tend to fear what they don’t understand, and AI is certainly not well understood. It’s easy to focus on the potential misuse or unintended consequences of AI, and on the impacts on job security and labor markets.
Pitching the Intent
It’s important for leaders to ensure messaging around AI standards is nuanced and kept separate from governance and oversight; they are two different concepts with two different outcomes. Look at Elon Musk and Jeff Bezos: Tesla and Amazon’s Alexa are two powerful but very different types of commercialized AI that have been wildly successful, and it didn’t happen overnight. Musk and Bezos both set standards and have enforced them to the extreme.
They started simple and grew over time. Through great engineers and developers who agreed to follow and build on those standards, they were able to evolve their products to the point of real productivity and market success, reflected in enviable revenue and stock prices. Your message to stakeholders should be simple: we need strong standards for AI to accelerate development, adoption, and business value.
Asking the Right Questions
Standards for AI, like the business case, should always begin with mission clarity. It can be difficult for technology leaders to explain the value of investing in AI standards, especially to business partners; if they don’t understand the intent and outcome, you will create resistance. What is the business problem AI will solve? What’s the expected ROI? What am I building toward? (See the types of AI initiatives in my first article in this series.)
AI is an advanced technology that can deliver competitive and market advantage, but it’s expensive, and that investment can easily spin out of control. Mission clarity helps control costs, and it also helps determine how stringent your standards truly need to be. The more complex your business and technology requirements for AI, the stricter you will need to be in setting and enforcing your standards.
Focusing the mission is critical, but leaders also need to focus the standards story. To help articulate the key messaging points, I encourage leaders to develop rock-solid answers to the basic questions outlined above.
There are common functional areas that will require standards in AI projects and initiatives, so it’s wise to pay attention to these categories as you consider your roadmap. They include data sources, cleansing and schema, API accessibility, operating systems, database types, security, hardware, delivery model (e.g., agile), coding platforms, communications, and a framework for checks and balances.
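For teams that want to make these categories actionable, the minimal sketch below shows one way to capture them as a machine-readable checklist that can be versioned and reviewed like any other engineering artifact. Python is used purely for illustration; the class name, field names, and example values are assumptions, not a prescribed schema.

from dataclasses import dataclass, field

# Illustrative sketch only: a simple checklist covering the functional
# areas named above. Field names and example values are assumptions,
# not a prescribed schema.
@dataclass
class AIStandardsChecklist:
    data_sources: list[str] = field(default_factory=list)  # approved systems of record
    data_cleansing: str = ""       # required validation/cleansing pipeline
    schema: str = ""               # canonical data schema or modeling convention
    api_accessibility: str = ""    # e.g., versioned, documented interfaces
    operating_systems: list[str] = field(default_factory=list)
    database_types: list[str] = field(default_factory=list)
    security: str = ""             # the security baseline the initiative must meet
    hardware: str = ""             # approved compute targets (on-prem, cloud, GPU)
    delivery_model: str = ""       # e.g., "agile"
    coding_platforms: list[str] = field(default_factory=list)
    communications: str = ""       # how decisions and changes are broadcast
    checks_and_balances: str = ""  # the review and approval framework

    def gaps(self) -> list[str]:
        """Return the categories where no standard has been set yet."""
        return [name for name, value in vars(self).items() if not value]

# A partially completed checklist immediately flags what still needs a decision.
checklist = AIStandardsChecklist(
    data_sources=["CRM", "enterprise data warehouse"],
    delivery_model="agile",
)
print("Standards still undefined:", ", ".join(checklist.gaps()))

Even this trivial structure makes the conversation concrete: every empty field is a standards decision the team has not yet made.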
Ethics in AI is a difficult topic, and one I will address directly in our next article in this series. From a standards perspective, the question of ethics should be focused on what can be done in the present to mitigate negative consequences and unintended bias.
Key Points: Getting Started
There are few stakeholders or peers in your organization who will look forward to a long discussion about standards. You can, however, change the tone and tenor with AI. As you begin your journey, here are six tenets to help guide development:
Establish the business connection. Make sure your stakeholders understand the fundamental need for standards in AI initiatives as a business enabler. Accelerating AI development, speed to market, financial efficiency and risk mitigation are always a winning combination.
Set clear goals and simplify. The standards you set should be accretive to the mission and outcome; the goal is not to develop the greatest set of standards ever written. AI is a complex topic, so make sure the message is relevant to stakeholders and action-oriented, and that it bypasses stakeholder skepticism.
Be agile. AI is a rapidly evolving space, and that needs to be accommodated in the standards you create. Make sure there’s enough room and process in your plans to enable creativity, exploration and innovation as opportunities present themselves.
Involve stakeholders. There’s one sure-fire way to create active stakeholders – and that’s giving them a voice and a reason to use it. Make sure they’re involved from concept to approval and keep them talking.
Ask for feedback. Provide a transparent process for feedback on AI standards. It will encourage long-term participation from key stakeholders, and create a strong source of content for pitching the AI narrative across the organization.
Collaboration, not dictation. Technology organizations have a reputation, right or wrong, for saying “no” with little explanation. AI needs to be a team sport, with business and technology leaders sharing an equal stake in the risks and benefits. AI should not be another breeding ground for shadow technology investments.
Do you have questions about AI strategy, technology, or suppliers? Get in touch to set up a briefing.
About Cathy White
Catherine White is the Head of AI and Automation at Yates Ltd. She brings 25 years of success as a Fortune 500 leader creating and capturing opportunities for global competitive advantage, business growth, and efficiency through innovative IT strategy, operations, restructuring, and transformation initiatives. She has deep technical experience in planning and implementing hybrid cloud, machine learning and AI, automation engineering, DevOps planning, and Agile processes.
Prior to Yates, Catherine was a Vice President at Johnson & Johnson, responsible for all technology infrastructure globally as well as architecture and platform engineering. She initially joined J&J in early 2018 to run technical operations and hybrid cloud. Prior to J&J, she was at J.P. Morgan Chase, leading IaaS automation and directing several infrastructure functions (AIX Power Series, Monitoring, and Linux Engineering). She later took responsibility for enterprise portfolio management in consumer and community banking, along with architecture governance and total cost of ownership optimization. Following her portfolio management role, Cathy was Executive Director of Digital Technologies, responsible for driving automation, AI, and machine learning into digital marketing and customer experience platforms.
Catherine holds a Master of Science in Technology Management from Stevens Institute of Technology.
Yates Ltd partners with senior executives to create the strategy, blueprints, financial mechanisms, and execution plans to drive and achieve transformation. Our clients gain measurable cost savings, new capabilities, and the ability to outperform the competition.