A New Intelligence Seen Through Transparent AI


By Lori Witzel, Director of Research for Analytics and Data Management, TIBCO

Artificial Intelligence (AI) has moved far beyond the fanciful manifestations imagined in movies and science fiction to become a tangible part of our lives through chatbots, shopping recommendations and more. But, as with many nascent technologies, we humans tend to go through a period of sceptical assessment, socially fuelled distrust and hesitant partial adoption before we reach any more widespread level of validated acceptance.

Executed, deployed and integrated properly, AI can deliver a quantum leap in operational intelligence, driving more profitable business operations and improved customer experience across the board. A new level of ‘user intimacy’ is achieved as business systems are reinvented and entire supply chains are augmented, advanced and enhanced. Given the ‘rise of the robots’ fear-mongering that AI faces, how can we move to a new level of adoption where artificial intelligence is trusted, tested, toughened and, above all, transparent?

A clear road to AI transparency

The responsibility for enabling this new era of AI rests not only with business leaders, but with all interested parties and stakeholders who seek the many benefits of faster, smarter systems. Work to achieve a new level of trust and transparency is under way at the international level, and we all hope it will prove effective and robust.

Currently still in its draft form, the European Union Artificial Intelligence Act (EU AI Act) will likely impact enterprises inside and outside of the union given the nature of international business.

At its core, the EU AI Act insists that humans remain at the centre of AI innovation and governance. As we apply the efficiencies of AI and Machine Learning (ML) to systems, services and products, we need to ensure that human decision making underpins the logic and algorithms that AI uses. This human-centred AI is needed to properly govern personal privacy, human ethics and corporate compliance.

In terms of human agency and oversight, the EU AIA team has said, “AI systems should empower human beings, allowing them to make informed decisions and fostering their fundamental rights. At the same time, proper oversight mechanisms are needed through human-in-the-loop, human-on-the-loop and human-in-command approaches.”

Much like the EU’s General Data Protection Regulation (GDPR), the EU AI Act may take time to implement, but it will be just as consequential for businesses, and it is likely to take a tough stance on how and where it applies significant penalties for non-compliance.

Transparency starts with auditability

We know that getting the benefit of AI requires trust. It’s clear the best AI systems will therefore be the most transparent and the most auditable. These are systems that exhibit traceability and explainability, enabling clear communication channels to illustrate, clarify and ratify the AI models that they are constructed upon.

If we have a clear line of sight into the algorithms and ML processes that go towards making an AI model function, then we have transparency into the processes, tools, data and ‘actors’ (mathematical models of computation) involved in the production of the total AI process itself.

The most auditable (and therefore the most transparent) AI processes are those built with documentation clear and comprehensive enough for auditors to access and use. An AI auditor should be able to use that documentation to reproduce the same results with the same AI method using a new data science team. In many ways, we could call this a form of reverse engineering designed to prove, validate and corroborate the required level of transparency.
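To make that concrete, here is a minimal sketch in Python of the kind of manifest such audit documentation might capture. Every name, file path and value below is an illustrative assumption, not a prescribed standard:

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass
class AuditManifest:
    """Minimal documentation an auditor could use to rerun a training job."""
    model_name: str
    training_data_sha256: str  # fingerprint of the exact training set
    code_commit: str           # version of the training code
    hyperparameters: dict      # everything needed to reproduce the fit
    random_seed: int           # removes nondeterminism between reruns

def fingerprint_dataset(path: str) -> str:
    """Hash the raw training file so an auditor can verify they hold the same data."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical model and training file, purely for illustration.
manifest = AuditManifest(
    model_name="churn-classifier",
    training_data_sha256=fingerprint_dataset("train.csv"),
    code_commit="9f3c2ab",
    hyperparameters={"max_depth": 6, "n_estimators": 200},
    random_seed=42,
)
print(json.dumps(asdict(manifest), indent=2))
```

With a record like this in hand, a second data science team has everything it needs to rerun the method and check that the results match.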

Down the line with AI model lineage

In order to achieve true transparency and trust in an AI system, we need to be able to understand its lineage: the set of associations between any given ML model used by an AI and all the components involved in its creation. Tracking a model’s lineage is difficult without robust, scalable model operations, typically because the components involved are numerous, dynamic and hard to trace.
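One way to picture lineage is as a graph that can be walked from a model back through every upstream component. The sketch below uses hypothetical component names to show the idea:

```python
from dataclasses import dataclass, field

@dataclass
class LineageNode:
    """One component in a model's lineage: a dataset, pipeline step or model."""
    name: str
    kind: str      # e.g. "dataset", "feature_pipeline", "model"
    version: str
    parents: list = field(default_factory=list)  # upstream components

def trace(node: LineageNode, depth: int = 0) -> None:
    """Walk upstream from a model to every component involved in its creation."""
    print("  " * depth + f"{node.kind}: {node.name} (v{node.version})")
    for parent in node.parents:
        trace(parent, depth + 1)

# Illustrative chain: raw data feeds a feature pipeline, which feeds the model.
raw = LineageNode("clickstream_raw", "dataset", "2024-01-07")
features = LineageNode("session_features", "feature_pipeline", "1.4", parents=[raw])
model = LineageNode("recommender", "model", "3.2", parents=[features])

trace(model)  # prints the full chain: model -> pipeline -> raw data
```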

Both trust and transparency can be addressed with robust, scalable model management and model operations. Model operations – the development and management of the ML models that support AI initiatives – is key to operationalising AI. But it can be difficult to scale, so organisations need to work diligently with their data science and IT teams to understand their individual operational challenges.
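In practice, teams often lean on a tracking tool for this. As a rough illustration (assuming the open-source MLflow library; the experiment name, parameters and metric value are made up), a tracked training run might look like:

```python
import mlflow

mlflow.set_experiment("churn-model")

with mlflow.start_run(run_name="baseline"):
    # Record the inputs an auditor would need to reproduce this run.
    mlflow.log_param("max_depth", 6)
    mlflow.log_param("training_data_version", "2024-01-07")

    # ... train and evaluate the model here ...
    rmse = 0.71  # placeholder for a real evaluation result

    # Record the outcome so runs can be compared and traced over time.
    mlflow.log_metric("rmse", rmse)
```

Every run is then stored with its parameters, metrics and timestamps, which is exactly the raw material that lineage tracking and auditing depend on.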

In working practice, robust transparency is a blend of proper disclosure, documentation and technology. In more specific technical terms, data fabrics and model operationalisation tools track and expose data transparency through changelogs and history. Access to these assets enables us to trace and play back the actions of AI models – the mechanics of transparent AI in working motion.
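As a simple sketch of that idea (the log path, field names and decision values below are illustrative assumptions), an append-only changelog lets every model decision be recorded and replayed later:

```python
import json
import time

LOG_PATH = "decision_log.jsonl"

def log_decision(model_version: str, inputs: dict, output, path: str = LOG_PATH) -> None:
    """Append one model decision to an append-only, replayable changelog."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

def replay(path: str = LOG_PATH):
    """Play back every logged decision, in order, for audit or debugging."""
    with open(path) as f:
        for line in f:
            yield json.loads(line)

# Hypothetical decision from a hypothetical recommender model.
log_decision("recommender-3.2", {"customer_id": "c-101", "basket": ["tea"]}, "offer_biscuits")

for record in replay():
    print(record["model_version"], record["inputs"], "->", record["output"])
```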

This data and model traceability, combined with proper disclosures and documentation, helps make the data used, the decisions made and the implications of those decisions more transparent across an entire organisation. Without transparency at this level, we cannot reasonably expect customers, partners or indeed a company’s own employees to engage with – let alone trust – business systems driven by AI decisions.

A better (more transparent) world

We now have an enviable, exciting and enriching opportunity to embrace AI and ML as we use them to make our lives better on so many levels.

Consider the unmatched speed at which vaccines were produced in response to the pandemic at the start of this decade: that achievement (and many others besides) illustrates how AI and ML let us do so much more, so much faster and with so much more accuracy than at any time in our lives up until this point.

When we take these actions forward within the boundaries of regulatory compliance and governance that communities, practitioners and lawmakers will set and establish, we can be confident in embracing AI and the many benefits – powerfully positive customer experiences, medical breakthroughs and operational excellence – that we know it can deliver.
