Don’t fear AI, fear complacency
As published on Little Black Book
By Jonathan Hart
One of the most salient responses in industry to the Covid-19 outbreak has been increased investment in Artificial Intelligence (AI). Businesses presume that this will allow them to serve customers more readily through digital channels and to be more resilient to the challenges of relying on human labour to physically staff those customer touchpoints. The International Data Corporation, a leading industry research provider, reports an expected 33% increase in AI and related technology investments over the next two years in the post-pandemic world.
With any new technology, the challenges and pitfalls remain largely unknown. Perhaps uniquely, AI has a long lineage of disaster scenarios painted by Hollywood and dystopian authors. The result is that much of the concern over the technology is focused on a distant and hypothetical outcome in which the seeds planted today germinate into an autonomy that spells certain doom for humanity. Having taken such a far-flung view, we have spent very little time examining the more practical risks of AI adoption, or ways to mitigate them.
AI, as we know it today, is nothing more than existing data science methods running autonomously, and the current tension between predictive analytics and data science shows very clearly where potential issues may arise. The largest divide between these approaches is essentially the human interpretability of their outputs. Predictive analytics generally leverages interpretable models, meaning a human can easily understand both the logical path and the resulting decision just by looking at the outputs. Conversely, data science (and ergo AI) principally uses uninterpretable models which, like the human brain itself, can be inscrutable.
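To make that distinction concrete, here is a minimal sketch (my own illustration, not drawn from any client work), assuming scikit-learn and a made-up candidate dataset. The interpretable model exposes a readable weight for each attribute; the black-box model may be just as accurate, but its internal workings offer no such explanation.

# Illustrative sketch only: hypothetical feature names and synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
feature_names = ["years_experience", "referral", "distance_to_office"]
X = rng.normal(size=(500, 3))
# Outcome driven mainly by the first two (synthetic) attributes.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

# Interpretable: each coefficient states how an attribute pushes the decision.
interpretable = LogisticRegression().fit(X, y)
for name, coef in zip(feature_names, interpretable.coef_[0]):
    print(f"{name}: {coef:+.2f}")

# Black box: comparable accuracy, but its weights do not map to a readable rule.
black_box = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=1000, random_state=0).fit(X, y)
print("black-box accuracy:", round(black_box.score(X, y), 2))

The point is not that one model is better, but that only the first can be interrogated by a non-specialist.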
Recently, I worked with a client who was concerned with identifying high-value candidates early in a job recruitment process. We contracted a predictive analytics firm to build a model to identify which attributes of a candidate indicated they were the most likely to accept an offer. The resulting model was extremely accurate, correctly identifying the outcome more than 95% of the time. When we examined which variables the model was using to make this prediction, two attributes had an overwhelming influence: gender and race. Boiled down to its most basic level, the conclusion was that white males were the ones most likely to get the job. While this result may seem shocking, it should not at all be surprising.
A decision engine built on data is restricted by the data that has been generated from real-world events, a real world in which these jobs were given disproportionately to white men. The model cannot simulate a different world in which all genders and ethnic groups were given equal opportunities, so it can only serve to reinforce existing biases. It is worth noting that a team of seasoned analytics professionals built and presented this model for approval without once considering this consequence.
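One practical safeguard this episode points toward, sketched here as my own hypothetical rather than the firm's actual process, is to audit how much influence protected attributes have on a model before it is approved. Assuming scikit-learn and illustrative column names:

# Hypothetical audit sketch: flag a model that leans on protected attributes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["gender", "race", "years_experience", "interview_score"]
protected = {"gender", "race"}

rng = np.random.default_rng(1)
X = rng.integers(0, 4, size=(800, len(feature_names))).astype(float)
# Simulate a biased history: past offers tracked protected attributes more than merit.
y = (X[:, 0] + X[:, 1] + 0.3 * X[:, 3] + rng.normal(size=800) > 4).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# The 0.05 threshold is an arbitrary illustrative cut-off, not a standard.
for name, score in zip(feature_names, result.importances_mean):
    flag = "  <-- relies on a protected attribute" if name in protected and score > 0.05 else ""
    print(f"{name}: {score:.3f}{flag}")

Had a check like this been a routine gate, the reliance on gender and race would have surfaced automatically rather than being discovered after the fact.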
If this use case had been solved using AI, the outcome would have been the same, but the consequences much worse. The biases that are so clearly visible in the predictive analytics model would be invisible. It would have taken an existing social inequity and cemented it into a decision engine that could continue to exacerbate it at scale, unchecked and without anyone’s knowledge.
Our fear should not be that AI becomes sentient; it should be that it does not. Like a child, it can only learn from what is put in front of it and can only reproduce what it learns from, so we need to be very careful to give it a broad education and the tools to fill the gaps in its knowledge. Without the ability to create, to challenge and to experiment, we risk building a technological infrastructure that merely props up our existing way of doing things. We should not fear an AI that can think for itself; we should fear one that cannot.
Today’s AI does not have the power to introduce randomness or experimentation, or to improve the quality of the data in any way, but we should work to provide it. We should intentionally create net new data rather than rely on the data exhaust created by ‘business as usual’ activities. We call this process Designing for Data Creation, and we use it with our clients every day when creating communications, planning media and designing consumer experiences, so that when we start training AI for them, a broad and solid foundation already exists.