Are you familiar with this classic nursery rhyme: Old Mother Hubbard, went to the cupboard, to fetch her poor dog a bone, And when she went there, the cupboard was bare, and so the poor dog had none (the rhyme was the charming work of Sarah Catherine Martin and published in 1805)?
I’m sure you’ve heard it before.
The crucial catchphrase within this enchanting verse is that sometimes the cupboard can be bare. You know how that can be. At times, we find ourselves up a creek without a paddle. You might have hoped that the cupboard was stocked or that your canoe had a paddle, but upon closer inspection, you sadly discovered that the things you needed or perhaps dreamily wanted weren’t there. This can be disconcerting or even downright debilitating.
Shift gears into modern-day Artificial Intelligence (AI) and the rising tsunami of AI systems that we daily come in contact with. When it comes to auditing AI systems, especially the thorny matter of AI Ethics adherence, you might lamentably say that the cupboard is bare. You see, we are woefully lacking in fully embraced across-the-board methods, techniques, tools, and standardized approaches for the solemn and necessary act of auditing complex AI.
That is both a shame and an outright perilous omission or insidious state of affairs.
Auditing is a long-revered process to assess and attest to something being of an appropriate or shall we say proper condition. We often rely upon the analyses and conclusions reached by auditors when they have sought to examine whether a business is fair-and-square about reporting its financial status. Without some form of validation, we cannot be sure of the validity of what a business claims it is doing or has done. As we have witnessed throughout history, many businesses have opted to cook their books, trying to hide devious schemes or downplay their precarious financial dealings.
That being said, let’s all acknowledge that auditing is not a panacea. Can a business still manage to keep secret their deceitful trickery, despite undergoing an audit? Yes, absolutely. Is it possible to pull the wool over the eyes of the auditors or perhaps sneakily feed the auditors a plate of falsehoods that they fall for? Unfortunately, yes.
But just because auditing is not a perfect means of ferreting out what is happening, we should be cautious in unduly throwing out the baby with the bathwater. Auditing can indeed reveal that which was sought to be kept from prying eyes. Auditing can also act as a type of scare tactic that keeps potential wrongdoers from veering into the evil grasp of doing wrongs. Those leaning into the disturbingly foul territory are apt to realize they might get caught with their hands in the cookie jar.
If you don’t do audits, a potential Pandora’s box of devilish practices might get started and grow, with nary a concern that the house of cards is conning others and someday might crash and devastate innocents.
And that brings us to the emerging and pervasive adoption of AI.
Consider these seemingly straightforward questions that ought to be always answerable:
- How does a company know that its AI is working properly and has been well-devised?
- How do consumers know that the AI they are using is doing the right things?
- How do regulators and government overseers know that the AI being used by the public at large is balanced and fair?
The chilling answer is that by and large they don’t know.
Companies are pell-mell rushing forward to push shoddy AI out into the marketplace since they realize that if they don’t grab the market first, a competitor will do so in their stead. Sometimes a firm does this with the best of intentions and is clueless that their AI is fraught with badness. Other times a firm realizes that the AI is perhaps less-than-optimal, but they take a chance anyway on it. There are the evildoers that know exactly the wrongs they are committing with atrocious and unethical AI, yet they proceed unabated and do so partially because the specter of an AI audit is generally unlikely. For my ongoing and extensive coverage of AI Ethics and Ethical AI, see the link here and the link here, just to name a few.
Among the many and varied ways of dealing with bad AI, we should assuredly be clamoring for AI audits.
To clarify, this is not a silver bullet. It is certainly not the only means of coping with or stopping the emergence of bad AI. Nonetheless, it is a vital option and one that surprisingly is not taking place as much as you might assume.
Auditing AI is not especially a top priority for many companies. One reason is that firms are often unaware that auditing AI is something that can be done. Believe it or not, some executives seem to think that AI is entirely unamenable to being audited. Worse still, some of those executives also do not see any reason to audit AI at all, even if it seemed doable. They blindly have faith in AI and believe that AI is magically infallible.
We can add more frightening thoughts to this pile-on of reasons that AI auditing is not often undertaken. An oft-used excuse is that a normal audit will decidedly reveal any concerns with the AI systems being devised or utilized by a company. Regrettably, that is not necessarily the case. Conventional audits are frequently ill-prepared to handle the AI side of things or skirt over the AI and act like it isn’t worthwhile to probe into. An escape clause then might be included in the audit analysis that quietly and unobtrusively says in small print that the AI systems are not encompassed by the scope of the audit.
Meanwhile, executives that didn’t read the fine print will incorrectly presume that their AI was given a clean bill of health. Sneaky executives might have gotten out their magnifying glass and carefully read the stated exceptions about AI, and yet act as though AI was presumably included. You might say that they will try the classic ploy of plausible deniability, hoping that no one will ever catch them having (shockingly, wink-wink) missed noticing that AI was carved out of the audit and not given due attention.
With all of that chicanery about auditing AI, we have the icing on the cake that there aren’t fully agreed-upon standards about how to best audit complex AI systems. The audit camps can argue that they are lacking in the means to do AI audits. They might also worry that if AI audits are indeed performed, they will be on shaky ground if the audit doesn’t catch heinous AI badness. Clients might come after the auditors, as might shareholders and possibly the government.
It is risky business to do an AI audit.
A counterbalancing factor is that AI audits can be monetarily rewarding. Where there is a buck to be made, auditors will find a sufficient way to do audits. In the right circumstances, the tradeoff between performing a bona fide and defensible AI audit and the exposure from audits later found to be inadequate yields a potential ROI (return on investment) that is attractive enough to make dollars-and-cents sense, including for the auditing of AI.
I guess you could also say that where there is a need, a means will be found to meet the need. Thus, despite the immature state of AI auditing, many are stridently trying to devise AI auditing approaches and put those into practice. Auditing of AI is like the little engine that could: it is chugging and slogging its way up the hill toward providing valuable and usable auditing methods and tools to get the AI auditing job satisfactorily done.
Rather than perhaps starkly contending that the cupboard is completely bare, it might be more suitable to suggest that the cupboard has a smorgasbord of items that are somewhat piecemeal gradually coming together to make an increasingly satisfactory AI auditing meal, as it were. A common approach entails borrowing from conventional tech-oriented auditing principles and techniques and recasting them in the context of AI systems. This is not a slam dunk. There are substantive differences in how AI works versus the traditional forms of everyday tech capabilities.
Let’s take a look at what the cupboard already has and then ruminate on what we might expect to see as a further rise in the realm of auditing AI.
First, consider a handy report by the well-known professional association on tech auditing referred to as ISACA (Information Systems Audit and Control Association). Their Auditing Artificial Intelligence study proffered this important point: “AI’s rise has been accompanied by the traditional lag time between early adoption and the establishment of regulatory and compliance frameworks. There is, for example, no mature auditing framework in place detailing AI subprocesses, nor are there any AI-specific regulations, standards or mandates.”
You see, a conundrum exists as to the nature and scope of AI as an area of focus. It is indubitably challenging to try and audit something that you cannot even at the get-go put your arms around. Not only is the definition of AI rather loosey-goosey (I’ve covered the legal difficulties of trying to pin down what AI is, see my discussion at the link here), the manner in which AI is devised varies widely too. Per the ISACA study: “Moreover, AI systems and solutions vary widely from each other, and the vast set of existing and emerging technologies foundational to AI architecture give birth to complex systems. This complexity points to a high likelihood of uncertainty around the scope of AI within the business.”
I especially like this list in the ISACA report that spells out the key reasons that the auditing of AI has yet to become an agreed-upon, formalized, across-the-board standardized approach (this list is excerpted from the Auditing Artificial Intelligence report):
1. Immature auditing frameworks or regulations specific to AI
2. Limited precedents for AI use cases
3. Uncertain definitions and taxonomies of AI
4. Wide variance among AI systems and solutions
5. Emerging nature of AI technology
6. Lack of explicit AI auditing guidance
7. Lack of strategic starting points
8. Possibly steep learning curve for the AI auditor
9. Supplier risk created by AI outsourcing to third parties
Some of the proposed solutions in the report that are aimed to prod along the maturation of doing AI auditing include these stated steps:
- Adopt and adapt existing frameworks and regulations.
- Explain and communicate proactively about AI with stakeholders.
- Become informed about AI design and architecture to set proper scope.
- Focus on transparency through an iterative process.
- Focus on controls and governance, not algorithms.
- Involve all stakeholders.
- Become informed about AI design and engage specialists as needed.
- Document architectural practices for cross-team transparency.
When it comes to auditing AI, there is a particular element that sometimes does not get the attention it deserves. I am referring to AI Ethics and the production and use of Ethical AI. A bona fide audit of AI should encompass whether the AI is being devised to abide by the core precepts of AI Ethics. This is more than simply assessing the AI hardware or software for the usual kinds of auditing concerns.
A research paper in the journal Minds and Machines made a telling observation that audits of AI need to incorporate a form of ethics-based auditing. The effort tends to examine the basis for the AI, the code of the AI system, and the effects that the AI brings forth: “Rather than attempting to codify ethics, ethics-based auditing helps identify, visualize, and communicate whichever normative values are embedded in a system. Although standards have yet to emerge, a range of different approaches to ethics-based auditing of AI already exists: Functionality audits focus on the rationale behind the decision, code audits entail reviewing the source code, and impact audits investigate the effects of an algorithm’s outputs” (by Jakob Mökander and Luciano Floridi in “Ethics-Based Auditing To Develop Trustworthy AI”).
Allow me a brief sidebar to cover some essentials about AI Ethics.
One particular segment or portion of AI Ethics that has been getting a lot of media attention consists of AI that exhibits untoward biases and inequities. You might be aware that when the latest era of AI got underway there was a huge burst of enthusiasm for what some now call AI For Good. Unfortunately, on the heels of that gushing excitement, we began to witness AI For Bad. For example, various AI-based facial recognition systems have been revealed as containing racial biases and gender biases, which I’ve discussed at the link here.
Efforts to fight back against AI For Bad are actively underway. Besides vociferous legal pursuits of reining in the wrongdoing, there is also a substantive push toward embracing AI Ethics to righten the AI vileness. The notion is that we ought to adopt and endorse key Ethical AI principles for the development and fielding of AI, doing so to undercut the AI For Bad and simultaneously herald and promote the preferable AI For Good.
On a related notion, I am an advocate of trying to use AI as part of the solution to AI woes, fighting fire with fire in that manner of thinking. We might for example embed Ethical AI components into an AI system that will monitor how the rest of the AI is doing things and thus potentially catch in real-time any discriminatory efforts, see my discussion at the link here. We could also have a separate AI system that acts as a type of AI Ethics monitor. The AI system serves as an overseer to track and detect when another AI is going into the unethical abyss (see my analysis of such capabilities at the link here).
In a moment, I’ll share with you some overarching principles underlying AI Ethics. There are lots of these kinds of lists floating around here and there. You could say that there isn’t as yet a singular list of universal appeal and concurrence. That’s the unfortunate news. The good news is that at least there are readily available AI Ethics lists and they tend to be quite similar. All told, this suggests that by a form of reasoned convergence of sorts that we are finding our way toward a general commonality of what AI Ethics consists of.
First, let’s cover briefly some of the overall Ethical AI precepts to illustrate what ought to be a vital consideration for anyone crafting, fielding, or using AI.
For example, as stated by the Vatican in the Rome Call For AI Ethics and as I’ve covered in-depth at the link here, these are their identified six primary AI ethics principles:
- Transparency: In principle, AI systems must be explainable
- Inclusion: The needs of all human beings must be taken into consideration so that everyone can benefit, and all individuals can be offered the best possible conditions to express themselves and develop
- Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency
- Impartiality: Do not create or act according to bias, thus safeguarding fairness and human dignity
- Reliability: AI systems must be able to work reliably
- Security and privacy: AI systems must work securely and respect the privacy of users.
As stated by the U.S. Department of Defense (DoD) in their Ethical Principles For The Use Of Artificial Intelligence and as I’ve covered in-depth at the link here, these are their six primary AI ethics principles:
- Responsible: DoD personnel will exercise appropriate levels of judgment and care while remaining responsible for the development, deployment, and use of AI capabilities.
- Equitable: The Department will take deliberate steps to minimize unintended bias in AI capabilities.
- Traceable: The Department’s AI capabilities will be developed and deployed such that relevant personnel possess an appropriate understanding of the technology, development processes, and operational methods applicable to AI capabilities, including transparent and auditable methodologies, data sources, and design procedure and documentation.
- Reliable: The Department’s AI capabilities will have explicit, well-defined uses, and the safety, security, and effectiveness of such capabilities will be subject to testing and assurance within those defined uses across their entire lifecycles.
- Governable: The Department will design and engineer AI capabilities to fulfill their intended functions while possessing the ability to detect and avoid unintended consequences, and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior.
I’ve also discussed various collective analyses of AI ethics principles, including having covered a set devised by researchers that examined and condensed the essence of numerous national and international AI ethics tenets in a paper entitled “The Global Landscape Of AI Ethics Guidelines” (published in Nature), and that my coverage explores at the link here, which led to this keystone list:
- Transparency
- Justice & Fairness
- Non-Maleficence
- Responsibility
- Privacy
- Beneficence
- Freedom & Autonomy
- Trust
- Sustainability
- Dignity
- Solidarity
As you might directly guess, trying to pin down the specifics underlying these principles can be extremely hard to do. Even more so, the effort to turn those broad principles into something entirely tangible and detailed enough to be used when crafting AI systems is also a tough nut to crack. It is easy to do some overall handwaving about what the AI Ethics precepts are and how they should be generally observed, while it is a much more complicated situation when the AI coding has to be the veritable rubber that meets the road.
The AI Ethics principles are to be utilized by AI developers, along with those that manage AI development efforts, and even those that ultimately field and perform upkeep on AI systems. All stakeholders throughout the entire AI life cycle of development and usage are considered within the scope of abiding by the being-established norms of Ethical AI. This is an important highlight since the usual assumption is that “only coders” or those that program the AI are subject to adhering to the AI Ethics notions. As earlier stated, it takes a village to devise and field AI, and the entire village has to be versed in and abide by AI Ethics precepts.
Let’s also make sure we are on the same page about the nature of today’s AI.
There isn’t any AI today that is sentient. We don’t have this. We don’t know if sentient AI will be possible. Nobody can aptly predict whether we will attain sentient AI, nor whether sentient AI will somehow miraculously spontaneously arise in a form of computational cognitive supernova (usually referred to as the singularity, see my coverage at the link here).
The type of AI that I am focusing on consists of the non-sentient AI that we have today. If we wanted to wildly speculate about sentient AI, this discussion could go in a radically different direction. A sentient AI would supposedly be of human quality. You would need to consider that the sentient AI is the cognitive equivalent of a human. More so, since some speculate we might have super-intelligent AI, it is conceivable that such AI could end up being smarter than humans (for my exploration of super-intelligent AI as a possibility, see the coverage here).
Let’s keep things more down to earth and consider today’s computational non-sentient AI.
Realize that today’s AI is not able to “think” in any fashion on par with human thinking. When you interact with Alexa or Siri, the conversational capacities might seem akin to human capacities, but the reality is that it is computational and lacks human cognition. The latest era of AI has made extensive use of Machine Learning (ML) and Deep Learning (DL), which leverage computational pattern matching. This has led to AI systems that have the appearance of human-like proclivities. Meanwhile, there isn’t any AI today that has a semblance of common sense, nor does it have any of the cognitive wonderment of robust human thinking.
ML/DL is a form of computational pattern matching. The usual approach is that you assemble data about a decision-making task. You feed the data into the ML/DL computer models. Those models seek to find mathematical patterns. After finding such patterns, if so found, the AI system then will use those patterns when encountering new data. Upon the presentation of new data, the patterns based on the “old” or historical data are applied to render a current decision.
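That historical-patterns-applied-to-new-data loop can be sketched in a few lines. This is a deliberately simplistic stand-in for real ML/DL: a toy "learner" that fits a single decision threshold to hypothetical past loan decisions and then applies it to new applicants. All data, names, and the threshold-fitting scheme are illustrative assumptions, not any particular production system.

```python
# Toy stand-in for the ML/DL loop described above: fit a pattern to
# historical decisions, then apply that pattern to new data.
# All data here is hypothetical.

def fit_threshold(records):
    """Find the score threshold that best reproduces historical approvals."""
    best_t, best_acc = None, -1.0
    for t in sorted({score for score, _ in records}):
        acc = sum((score >= t) == approved for score, approved in records) / len(records)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# Historical decisions as (score, was_approved) -- the "old" data.
history = [(300, False), (450, False), (600, True), (720, True), (800, True)]
threshold = fit_threshold(history)  # learns the pattern: approve at 600+

# The learned pattern is then used to render decisions on new data.
new_applicants = [410, 650]
decisions = [score >= threshold for score in new_applicants]
```

Note that the learner never asks whether the historical approvals were fair; it only mimics them, which is precisely how embedded biases get carried forward.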
I think you can guess where this is heading. If humans that have been making the patterned upon decisions have been incorporating untoward biases, the odds are that the data reflects this in subtle but significant ways. Machine Learning or Deep Learning computational pattern matching will simply try to mathematically mimic the data accordingly. There is no semblance of common sense or other sentient aspects of AI-crafted modeling per se.
Furthermore, the AI developers might not realize what is going on either. The arcane mathematics in the ML/DL might make it difficult to ferret out the now hidden biases. You would rightfully hope and expect that the AI developers would test for the potentially buried biases, though this is trickier than it might seem. A solid chance exists that even with relatively extensive testing that there will be biases still embedded within the pattern matching models of the ML/DL.
You could somewhat use the famous or infamous adage of garbage-in garbage-out. The thing is, this is more akin to biases-in that insidiously get infused as biases submerged within the AI. The algorithm decision-making (ADM) of AI axiomatically becomes laden with inequities.
I think you can readily discern why the auditing of AI would be of such crucial importance. The AI builder might not realize they have veered into unethical AI waters. The leadership or executives managing the AI efforts might not realize that the AI is potentially going to go astray. All told, stakeholders that are vested in the AI production and deployment can get caught completely off-guard that the AI is not being devised to strictly abide by AI Ethics precepts.
What to do?
One especially sensible answer is to carry out an audit of the AI.
Amongst the various avenues for auditing the AI would be the particular focus on AI Ethics. I would say that AI Ethics is instrumental and should not be neglected. At the same time, I would also assert that AI Ethics alone is not the only matter to be of auditing attention. Of course, you could employ a targeted audit to examine solely the Ethical AI aspects, though this should be done in a larger context amid an overarching audit of the AI.
The audit results of a budding AI system can then be shared with a myriad of stakeholders. In addition to showcasing any audit concerns, there would oftentimes be an indication of how the audit points can be potentially rectified. This is not necessarily at any detailed level and instead more so as to broad guidance of what the AI should be exhibiting and how the AI ought to be better devised.
Along those lines, a notable research paper related to the auditing of AI that is entitled “Toward Trustworthy AI Development Mechanisms For Supporting Verifiable Claims” has proposed that an AI audit would usually examine three sets of the mechanism underlying the AI:
- Institutional mechanisms
- Software mechanisms
- Hardware mechanisms
They define institutional mechanisms this way: “Institutional mechanisms are processes that shape or clarify the incentives of the people involved in AI development, make their behavior more transparent, or enable accountability for their behavior. Institutional mechanisms help to ensure that individuals or organizations making claims regarding AI development are incentivized to be diligent in developing AI responsibly and that other stakeholders can verify that behavior. Institutions can shape incentives or constrain behavior in various ways.”
The gist is that you need to audit not just the software and hardware but also ensure that you audit the organizational envelopment that surrounds the building and deployment of the AI. This might seem like an obvious point. Well, you’d be surprised to then know that many firms will willingly have their AI-related software and hardware activities audited, but the top executives bristle when told that the overall institutional and managerial milieu is also going to come under the audit gaze. The usual knee-jerk response is that this seems afield of the AI.
Do not fall for that kind of thinking. The three-legged stool of institutional, software, and hardware will fall apart if one of the legs is omitted. You can’t skip over the institutional aspects. In that same way of thinking, it would equally be lopsided and ineffective to skirt past the hardware aspects or do likewise for the software facets.
The paper defines software mechanisms this way: “Software mechanisms involve shaping and revealing the functionality of existing AI systems. They can support verification of new types of claims or verify existing claims with higher confidence. This section begins with an overview of the landscape of software mechanisms relevant to verifying claims, and then highlights several key problems, mechanisms, and associated recommendations.”
And hardware mechanisms are defined this way: “Computing hardware enables the training, testing, and use of AI systems. Hardware relevant to AI development ranges from sensors, networking, and memory to, perhaps most crucially, processing power. Concerns about the security and other properties of computing hardware, as well as methods to address those concerns in a verifiable manner, long precede the current growth in adoption of AI. However, because of the increasing capabilities and impacts of AI systems and the particular hardware demands of the field, there is a need for novel approaches to assuring the verifiability of claims about the hardware used in AI development.”
You might not be familiar with the usage of the word “claims” that permeates the aforementioned definitions. The notion is straightforward. An AI system should be represented via a series of tangible claims or assertions about what the AI is intended to accomplish. By having explicitly stated and measurable claims, you can then have a fighting chance to ascertain the verifiability of those claims. The AI either does or does not satisfy the stated claims.
Many AI projects do not overtly identify their supposed claims. That is a recipe for disaster. If the stipulations of what the AI is supposed to do, and also not supposed to do, remain in a haze or fog, you are undoubtedly heading toward an AI system of immense problematic concerns. The old line that you cannot manage that which you cannot measure is a truism worthy of rapt reverence for the auditing of AI.
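To make the verifiable-claims idea concrete, here is a minimal sketch of what it looks like to turn loosely stated assertions into explicitly measurable, checkable ones. The claim wordings, metric names, and thresholds are hypothetical examples I've invented for illustration, not prescriptions from the cited paper.

```python
# Sketch: express claims about an AI system as measurable checks,
# then evaluate them against metrics gathered during an audit.
# Claim names, metrics, and thresholds are all hypothetical.

claims = {
    "approval-rate gap between groups stays under 5 points":
        lambda m: abs(m["approval_rate_a"] - m["approval_rate_b"]) < 0.05,
    "overall accuracy is at least 90%":
        lambda m: m["accuracy"] >= 0.90,
}

# Metrics an auditor might gather by testing the system (toy values).
measured = {"approval_rate_a": 0.62, "approval_rate_b": 0.55, "accuracy": 0.93}

# Each claim either is or is not satisfied -- no haze, no fog.
results = {claim: check(measured) for claim, check in claims.items()}
```

The point of the exercise is that once the claims are written down this crisply, an audit can render a yes-or-no verdict on each, rather than arguing over what the AI was supposed to do in the first place.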
A handwringing qualm about today’s AI efforts is that there often is an accountability gap, with firms pushing AI systems out the door simply to get their AI into the marketplace as soon as possible. Who has taken responsibility for the AI? The AI builders might insist they aren’t to be held accountable if the AI goes astray and that instead, it is the fault of management. Management might exhort that it is the AI builders that have the responsibility. Massive amounts of finger-pointing will ensue the moment that the AI gets the firm in trouble.
I had mentioned earlier that examining the AI Ethics components is an oft-overlooked part of auditing AI. Sometimes the examination of AI Ethics is referred to as doing a social assessment of an AI system. A research paper entitled Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing helpfully explores the accountability gap and the need for a social assessment of AI systems: “A social impact assessment should inform the ethical review. Social impact assessments are commonly defined as a method to analyze and mitigate the unintended social consequences, both positive and negative, that occur when a new development, program, or policy engages with human populations and communities. In it, we describe how the use of an artificial intelligence system might change people’s ways of life, their culture, their community, their political systems, their environment, their health and well-being, their personal and property rights, and their experiences (positive or negative).”
A potential blind spot for many AI builders is that they do not anticipate the unintended consequences of their AI system. They are usually preoccupied with what they purposefully intend their AI to attain. Meanwhile, giving any semblance of consideration to how the AI might generate unintended consequences is just not in their frame of mind. The danger there is that AI systems are especially likely to promulgate unintended consequences, some of which are adverse and some of which might be beneficial. An audit of AI should seek to uncover any such unintended consequences that might later emerge via the AI.
An audit of AI needs to be organized and performed in a logical and explicitly laid out manner.
There are steps or stages that an audit of AI would normally undertake. In the paper on closing the AI accountability gap, they propose a set of five core steps coined as SMACTR, which are Scoping, Mapping, Artifact Collection, Testing, and Reflection. There is also a follow-on sixth step for doing a post-audit review. By and large, audits are best arranged when a methodology is utilized, for which there are lots of tech-oriented auditing methodologies and they provide a handy step-by-step indication of what should be done. The same is true for doing an AI audit.
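The SMACTR stages named above lend themselves to a simple step-by-step structure. The sketch below encodes the five stages plus the follow-on post-audit review as an ordered checklist; the idea of enforcing stage order programmatically is my own illustrative framing, not a prescribed tooling requirement of the framework.

```python
# The SMACTR audit stages, sketched as an ordered checklist.
# Enforcing sequential progress here is an illustrative assumption.

SMACTR_STAGES = ["Scoping", "Mapping", "Artifact Collection", "Testing", "Reflection"]

def audit_progress(completed):
    """Return the next stage due, given the set of stages completed so far."""
    for stage in SMACTR_STAGES:
        if stage not in completed:
            return stage
    return "Post-audit review"  # the follow-on sixth step

# With Scoping and Mapping done, the audit is due for Artifact Collection.
next_stage = audit_progress({"Scoping", "Mapping"})
```

Even this trivial scaffolding captures the methodological point: an AI audit should proceed through explicitly laid out stages rather than as an ad hoc poke-around.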
You might know that business audits are conventionally conducted in one of two ways, either done as an internal audit or performed as an external audit. Internal auditors typically are considered “internal” in that they work for the company they are auditing. They provide an essential function within a firm. At the same time, there is always a hint of suspicion that perhaps an internally executed audit might not be as stringent as one that is done by external auditors.
The assumption is that external auditors would seemingly be less biased toward the firm they are auditing. This does not always work out and there have been famous cases of external auditors that dropped the ball. In any case, external audits are an important part of doing the auditing of AI, along with undertaking internal audits of AI too. You can also anticipate that the tsunami of AI regulations will serve as an impetus for firms opting to have external audits of their AI undertaken. Indeed, it is likely that the government might mandate AI audits in certain circumstances.
Is the audit of your AI a legitimate audit?
That is the question that seriously will be asked if a firm contends that they have done audits of their AI. Some companies are bound to try and boast that they’ve had their AI audited and yet will have done the auditing in the thinnest of ways. All they really wanted was a checkmark that they did an audit of AI. They don’t care that the audit was sparse, perhaps only covering the most obvious elements. All I can say is that hopefully they will ultimately be held accountable for taking this kind of auditing dodge, and take it hard on the chin when their AI goes bad and lawsuits or the long arm of the law comes after them.
Before we move on, I’d like to also mention that AI can be used for aiding the auditing effort. This might seem like a tongue twister or a mental puzzle. Not really. The idea is that computer-based tools used by auditors are getting better and better via being infused with AI capabilities. It might seem curious or ironic to imagine that an AI-based auditing tool could possibly be used to audit an AI system that someone is devising. I suppose it is akin to the spy-versus-spy notion whereby we use AI to gauge the nature of some other AI.
At this juncture of this weighty discussion, I’d bet that you are desirous of some illustrative examples that might showcase the AI auditing auspices. There is a special and assuredly popular set of examples that are close to my heart. You see, in my capacity as an expert on AI including the ethical and legal ramifications, I am frequently asked to identify realistic examples that showcase AI Ethics dilemmas so that the somewhat theoretical nature of the topic can be more readily grasped. One of the most evocative areas that vividly presents this ethical AI quandary is the advent of AI-based true self-driving cars. This will serve as a handy use case or exemplar for ample discussion on the topic.
Here’s then a noteworthy question that is worth contemplating: Does the advent of AI-based true self-driving cars illuminate anything about the auditing of AI, and if so, what does this showcase?
Allow me a moment to unpack the question.
First, note that there isn’t a human driver involved in a true self-driving car. Keep in mind that true self-driving cars are driven via an AI driving system. There isn’t a need for a human driver at the wheel, nor is there a provision for a human to drive the vehicle. For my extensive and ongoing coverage of Autonomous Vehicles (AVs) and especially self-driving cars, see the link here.
I’d like to further clarify what is meant when I refer to true self-driving cars.
Understanding The Levels Of Self-Driving Cars
As a clarification, true self-driving cars are ones where the AI drives the car entirely on its own and there isn’t any human assistance during the driving task.
These driverless vehicles are considered Level 4 and Level 5 (see my explanation at this link here), while a car that requires a human driver to co-share the driving effort is usually considered at Level 2 or Level 3. The cars that co-share the driving task are described as being semi-autonomous, and typically contain a variety of automated add-ons that are referred to as ADAS (Advanced Driver-Assistance Systems).
There is not yet a true self-driving car at Level 5, and we don’t yet even know if this will be possible to achieve, nor how long it will take to get there.
Meanwhile, the Level 4 efforts are gradually trying to get some traction by undergoing very narrow and selective public roadway trials, though there is controversy over whether this testing should be allowed per se (we are all life-or-death guinea pigs in an experiment taking place on our highways and byways, some contend, see my coverage at this link here).
Since semi-autonomous cars require a human driver, the adoption of those types of cars won't be markedly different from driving conventional vehicles, so there's not much new per se to cover about them on this topic (though, as you'll see in a moment, the points made next are generally applicable).
For semi-autonomous cars, the public needs to be forewarned about a disturbing aspect that's been arising lately: despite human drivers posting videos of themselves falling asleep at the wheel of a Level 2 or Level 3 car, we all need to avoid being misled into believing that a driver can take their attention away from the driving task while driving a semi-autonomous car.
You are the responsible party for the driving actions of the vehicle, regardless of how much automation might be tossed into a Level 2 or Level 3 vehicle.
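The levels just described can be summarized in a short sketch. To be clear, this is a simplified illustration of my own making, not an official SAE artifact, and the one-line descriptions are rough paraphrases:

```python
# Simplified, illustrative mapping of the driving automation levels
# discussed above (not an official SAE J3016 definition).
SAE_LEVELS = {
    0: "No automation: the human does all the driving",
    1: "Driver assistance: a single automated aid (e.g., cruise control)",
    2: "Partial automation: co-shared driving, human must stay attentive",
    3: "Conditional automation: co-shared driving, human must be ready to take over",
    4: "High automation: true self-driving within a limited operating domain",
    5: "Full automation: true self-driving under all conditions",
}

def human_driver_responsible(level: int) -> bool:
    """At Levels 0 through 3, a licensed human remains the responsible driver."""
    if level not in SAE_LEVELS:
        raise ValueError(f"Unknown SAE level: {level}")
    return level <= 3

print(human_driver_responsible(2))  # True: semi-autonomous, human is responsible
print(human_driver_responsible(4))  # False: the AI is doing the driving
```

The key takeaway the sketch encodes is the dividing line at Level 4: below it, the human at the wheel bears responsibility; at Level 4 and Level 5, there is no human driver at all.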
Self-Driving Cars And AI Auditing
For Level 4 and Level 5 true self-driving vehicles, there won’t be a human driver involved in the driving task.
All occupants will be passengers.
The AI is doing the driving.
One aspect to immediately discuss entails the fact that the AI involved in today’s AI driving systems is not sentient. In other words, the AI is altogether a collective of computer-based programming and algorithms, and most assuredly not able to reason in the same manner that humans can.
Why is this added emphasis about the AI not being sentient?
Because I want to underscore that when discussing the role of the AI driving system, I am not ascribing human qualities to the AI. Please be aware that there is an ongoing and dangerous tendency these days to anthropomorphize AI. In essence, people are assigning human-like sentience to today’s AI, despite the undeniable and inarguable fact that no such AI exists as yet.
With that clarification, you can envision that the AI driving system won’t natively somehow “know” about the facets of driving. Driving and all that it entails will need to be programmed as part of the hardware and software of the self-driving car.
Let’s dive into the myriad of aspects that come to play on this topic.
First, it is important to realize that not all AI self-driving cars are the same. Each automaker and self-driving tech firm is taking its own approach to devising self-driving cars. As such, it is difficult to make sweeping statements about what AI driving systems will do or not do.
Furthermore, whenever stating that an AI driving system doesn’t do some particular thing, this can, later on, be overtaken by developers that in fact program the computer to do that very thing. Step by step, AI driving systems are being gradually improved and extended. An existing limitation today might no longer exist in a future iteration or version of the system.
I trust that provides a sufficient litany of caveats to underlie what I am about to relate.
We are primed now to do a deep dive into self-driving cars and the auditing of AI.
Generally, the bona fide self-driving car firms are routinely doing overall audits of the firm as a whole and, at times, tech-specific audits too. You might say that this is the business-as-usual kind of mindset.
Fewer are also specifically doing AI audits.
This can be explained by several factors. First, the auditing of AI is considered less well-understood and less likely to come to the minds of those self-driving tech firms or automakers that are building AI driving systems. Second, the assumption is that an overall audit of the firm or any tech-specific audit would be sufficient to somehow encapsulate the AI driving systems. It is a weak assumption and tends to gloss over the fact that the AI driving system is either not being audited or being audited in the most tangential of ways.
Another viewpoint is that auditing the AI underlying the AI driving system is presumed to be nearly impossible. The complexities of the AI driving system would appear to be of such a large-scale magnitude that no auditors could conceivably do anything substantive by attempting to audit it. That is yet another falsehood. Auditors who are equipped in the right ways and provided the right kind of access can materially perform such audits.
Period, full stop.
In terms of the specifics associated with auditing AI driving systems, particularly the AI Ethics elements, I’ve extensively covered those matters in my ongoing coverage of self-driving cars, including the link here and the link here, among many other of my postings.
Imagine that you were going to buy a used car.
Wouldn’t you want to look under the hood?
Yes, you most certainly would want to do so. You need to know what is happening there and whether you are getting a lemon or a shining star. By popping open the hood, you have a reasonable chance of figuring out whether the engine is going to work and whether it will last longer than the time it takes to drive it off the car lot.
You might, though, not know much about cars. In that case, looking at the engine is probably not going to be especially fruitful. Seeing that there is an engine there is not much of a useful inspection. Then again, even if you know about cars, perhaps you don't have the time available to give the vehicle an in-depth kick of the tires.
I bring up that automotive-related analogy to try and illustrate the demonstrative value of doing an audit of your AI system. You might not know what to look at when trying to discern whether the AI is being built sensibly. You might not have the time available to do so. It could be that even if you did do the assessment, investors or other outsiders would not especially find your audit findings credible.
Time to bring in the AI auditors.
The field of AI auditing is growing and will continue to expand. The advent of AI and the ubiquitous fielding of AI is going to be accompanied by AI that goes awry. Public pressure will mount to make sure that audits of AI are being performed. New laws and regulations will inspire or possibly even force firms into doing AI audits. Companies that make AI systems will be the initial targets. Gradually, any company that deploys AI, meaning just about all companies, will need to do AI audits of how they are fielding and using their licensed or acquired AI systems.
The business of auditing AI is going to be booming.
That being said, there will be some firms that earnestly embrace those AI auditors, while other companies will grimace, fight fiercely to keep from being audited, and try to keep the auditors entirely at bay.
AI auditors that walk in the door never know exactly how they will be treated, nor what they might find. They are brave souls.
May the AI audit be with them.