July 26, 2018

The Unseen Human Problem With AI

By Caity Dalby, Content Manager at Figaro Digital

We hear all too often that consumers are surprised, shocked, and horrified (okay, a little exaggeration there) to find out that the customer service or sales representative they have been talking to is actually a robot – more commonly known as an artificial intelligence or machine-learning-powered service, often in the form of a chatbot.

But what happens, and what does it mean for the digital marketing industry as it becomes increasingly reliant upon AI technology, when the robot consumers think they are interacting with is actually human?

The lines of transparency and honesty already seem blurred when it comes to the implementation of AI in companies, but ‘pseudo-AI’ is a whole other ball game. Numerous startups have been funded and set up on the promise of current, or future, AI capabilities, and are instead relying on a ‘Wizard of Oz’ technique – cheaply employing humans to do the work of an ‘AI’-powered service.

In some cases, as revealed in the Wall Street Journal scoop on Google granting hundreds of third-party software developers access to the content of users’ emails, these humans have access to an unprecedented level of personal information and data. This is made even more chilling by the loss of the (no-longer) blissful ignorance of those having their personal data mined by software engineers attempting to improve AI-powered services, or by human employees doing data-sensitive work purported to be done by machines (The Guardian).

Upon realising that building a service powered by AI is, unsurprisingly, both incredibly time-consuming and expensive, many companies have reverted to the very thing machine-learning-driven services were meant to eliminate: employing large numbers of people, as cheaply as possible. In doing so, they often mislead investors who believe their money is being used to develop or implement these technologies.

A July 2017 study by Asgard Capital of the 715 companies based in Europe claiming to be in the AI industry found that only 409 of them (just over 57 per cent) actually use a form of AI or machine learning. That number may not sound too extreme, but more than 40 per cent of companies claiming to use AI when they, in fact, aren’t is indicative of a wider global problem. And in an era where transparency has become absolutely necessary within businesses, thanks to the far-reaching effects of the implementation of GDPR, the Cambridge Analytica fiasco, Gender Pay Gap Reports, and the well-publicised transparency issues within Oxfam earlier in the year, it is more important than ever to fulfil business promises, or at least not get caught breaking them.

Tongue-in-cheek articles, such as Robbie Allen’s How to convince Venture Capitalists you’re an expert in Artificial Intelligence, may make light of the huge gap between companies demanding AI and startups that feel they can supply it, but they carry a thinly veiled sinister undertone. In the article Allen provides a comedic ‘script’ for bluffing your expertise, and it is hilarious to imagine a twenty-something startup CEO heading into a meeting and saying, “We created a system called [Popular Gen-X first name] that is based on the work of [Socher, Karpathy, Lecun, Hinton, Thrun, Koller, Goodfellow, Bengio] and uses a [50,100,1000] dimension proprietary dataset.” But take the humour out of the situation and this isn’t far from the truth.

The problem isn’t a new one; it has been around as long as AI services have, with Bloomberg revealing in 2016 that the reason the artificial intelligence personal assistant from startup X.ai, lovingly named Amy Ingram, sounds remarkably like a real person is that it is. Well, multiple real people, actually. With the most recent revelations of this kind of fraudulent behaviour hitting the front pages and making waves across every sector of the tech industry, the trend doesn’t seem to be slowing down anytime soon.

This may all seem alarming and a little like scaremongering, especially in the current climate, but it is exactly because of the current climate surrounding personal data and transparency that we should take note of these issues and the breaches of trust that take place in these transactions of data. Across many industries and sectors, consumers are becoming increasingly aware of the inner workings of the companies they interact with, the value of their data, and what they are worth as customers; now more than ever, trust in a brand plays a hugely significant role in a company’s ability to gain and retain custom.

However, it is not all doom and gloom on the AI front: the tables are turning and the power of artificial intelligence has been (in a very small way) placed in the hands of consumers. The industry was thoroughly shocked at the recent unveiling of Google’s virtual assistant’s staggering ability to make unnervingly life-like phone calls. The AI-powered personal assistant has been demonstrated booking tables at restaurants and haircuts at salons, with ‘umms’ and ‘mhmms’ scattered throughout to mimic natural human linguistic idiosyncrasies and patterns.

This in itself conjures up images of I, Robot and Ex Machina (post-robot revolution), but for some reason placing such sophisticated AI-powered technology into the hands of the consumer feels like a minor victory in the client experience (or non-experience) with AI. The lines of transparency and honesty here are still blurred, in this case because of the ability of the AI, but it is somehow refreshing to be concerned about this in relation to our incredible advancements in technology, rather than the human dishonesty and duplicity that ‘pseudo-AI’ represents.

AI’s influence on the industry is in constant flux and growth, so only time will tell which is worse: the artificial problem of human interaction with AI, or the unseen human problem with AI.

 

