How Companies Have Been Able to Lie About Their AI

Through sleight of hand, companies purport to use AI – or to be all about it – but are they really?

By VICTOR ANJOS

It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.

“It’s essentially prototyping the AI with human beings,” he said.
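To make that pattern concrete, here is a purely illustrative Python sketch of how such a "pseudo-AI" can be wired up: the user-facing endpoint presents itself as a bot, while every message is quietly queued for a human operator. All names in it are hypothetical and not drawn from any company mentioned in this article.

```python
# Purely illustrative "Wizard of Oz" sketch: the endpoint looks like an AI
# assistant to the user, but each message is silently handed to a human.
# Nothing here is any real company's code; all names are made up.
import queue

human_queue = queue.Queue()   # messages awaiting a human operator
pending_replies = {}          # conversation_id -> reply typed by the operator

def handle_user_message(conversation_id: str, text: str) -> str:
    """What the user sees as a bot endpoint; a person answers behind it."""
    human_queue.put((conversation_id, text))  # hand off to a human, not a model
    return "Assistant is typing..."           # keep up the AI appearance

def operator_reply(conversation_id: str, text: str) -> None:
    """Called from the human operators' console to answer a queued message."""
    pending_replies[conversation_id] = text
```

The trick is that nothing about the interface changes when, later, a real model replaces `human_queue` – which is exactly what makes the deception so easy to sustain.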

This practice was exposed two years ago in a Wall Street Journal article highlighting the hundreds of third-party app developers that Google allows to access people’s inboxes.

“Using what one expert calls a ‘Wizard of Oz technique’, some companies keep their reliance on humans a secret from investors”

In the case of the San Jose-based company Edison Software, artificial intelligence engineers went through the personal email messages of hundreds of users – with their identities redacted – to improve a “smart replies” feature. The company did not mention that humans would view users’ emails in its privacy policy.

The third parties highlighted in the WSJ article are far from the first to do this. In 2008, Spinvox, a company that converted voicemails into text messages, was accused of using humans in overseas call centres rather than machines to do its work.

Even when companies appear to be faking it, I have still found their services personally useful – as with the next two examples.

In 2016, Bloomberg highlighted the plight of the humans spending 12 hours a day pretending to be chatbots for calendar scheduling services such as X.ai and Clara. The job was so mind-numbing that human employees said they were looking forward to being replaced by bots.

In 2017, the business expense management app Expensify admitted that it had been using humans to transcribe at least some of the receipts it claimed to process using its “smartscan technology”. Scans of the receipts were being posted to Amazon’s Mechanical Turk crowdsourced labour tool, where low-paid workers were reading and transcribing them.

“I wonder if Expensify SmartScan users know MTurk workers enter their receipts,” said Rochelle LaPlante, a “Turker” and advocate for gig economy workers on Twitter. “I’m looking at someone’s Uber receipt with their full name, pick-up and drop-off addresses.”
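For illustration only – this is not Expensify’s actual code – here is a minimal sketch of how any app could route a receipt scan to human workers through Mechanical Turk’s public API, using the boto3 client. The title, reward, image URL and HTML form are all assumed values.

```python
# Hypothetical sketch: posting one receipt image as a Mechanical Turk HIT.
# Uses boto3's real MTurk client, but every value below is illustrative.
import boto3

mturk = boto3.client("mturk", region_name="us-east-1")  # MTurk runs in us-east-1

question_xml = """
<HTMLQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2011-11-11/HTMLQuestion.xsd">
  <HTMLContent><![CDATA[
    <html><body>
      <img src="https://example.com/receipts/12345.jpg"/>
      <p>Type the merchant, date and total from this receipt.</p>
      <form><input name="merchant"/><input name="date"/><input name="total"/></form>
    </body></html>
  ]]></HTMLContent>
  <FrameHeight>600</FrameHeight>
</HTMLQuestion>
"""

hit = mturk.create_hit(
    Title="Transcribe a receipt",
    Description="Read a receipt image and type its fields",
    Reward="0.05",                    # low pay, as the article notes
    MaxAssignments=1,
    LifetimeInSeconds=86400,
    AssignmentDurationInSeconds=600,
    Question=question_xml,
)
print(hit["HIT"]["HITId"])
```

Note what the worker sees in this sketch: the raw image, names and addresses included – exactly the privacy problem LaPlante describes above.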

Even Facebook, which has invested heavily in AI, relied on humans for its virtual assistant for Messenger, M.

In some cases, humans are used to train the AI system and improve its accuracy. A company called Scale offers a bank of human workers to provide training data for self-driving cars and other AI-powered systems. “Scalers” will, for example, look at camera or sensor feeds and label cars, pedestrians and cyclists in the frame. With enough of this human calibration, the AI will learn to recognize these objects itself.
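As a hypothetical illustration – the field names are invented, not Scale’s real schema – one labelled frame from such a pipeline might look like this:

```python
# Invented example of the kind of record a human labeller might produce for
# one camera frame. Thousands of such records become an object detector's
# training data; the "AI" begins life as a pile of human judgments.
frame_label = {
    "image_url": "https://example.com/frames/000042.jpg",
    "annotations": [
        {"label": "car",        "box": [112, 310, 296, 420]},  # [x1, y1, x2, y2] pixels
        {"label": "pedestrian", "box": [540, 280, 585, 400]},
        {"label": "cyclist",    "box": [610, 265, 690, 410]},
    ],
}
```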

In other cases, companies fake it until they make it, telling investors and users they have developed a scalable AI technology while secretly relying on human intelligence.

Alison Darcy, a psychologist and founder of Woebot, a mental health support chatbot, describes this as the “Wizard of Oz design technique”.

“You simulate what the ultimate experience of something is going to be. And a lot of time when it comes to AI, there is a person behind the curtain rather than an algorithm,” she said, adding that building a good AI system required a “ton of data” and that sometimes designers wanted to know if there was sufficient demand for a service before making the investment.

This approach was not appropriate in the case of a psychological support service like Woebot, she said.

“As psychologists we are guided by a code of ethics. Not deceiving people is very clearly one of those ethical principles.”

Research has shown that people tend to disclose more when they think they are talking to a machine, rather than a person, because of the stigma associated with seeking help for one’s mental health.

A team from the University of Southern California tested this with a virtual therapist called Ellie. They found that veterans with post-traumatic stress disorder were more likely to divulge their symptoms when they knew that Ellie was an AI system versus when they were told there was a human operating the machine.

Others think companies should always be transparent about how their services operate.

“I don’t like it,” said LaPlante of companies that pretend to offer AI-powered services but actually employ humans. “It feels dishonest and deceptive to me, neither of which is something I’d want from a business I’m using.

“And on the worker side, it feels like we’re being pushed behind a curtain. I don’t like my labour being used by a company that will turn around and lie to their customers about what’s really happening.”

This ethical quandary also rears its head with AI systems that pretend to be human. One recent example of this is Google Duplex, a robot assistant that makes eerily lifelike phone calls complete with “ums” and “ers” to book appointments and make reservations.

After an initial backlash, Google said its AI would identify itself to the humans it spoke to.

“In their demo version, it feels marginally deceptive in a low-impact conversation,” said Darcy. Although booking a table at a restaurant might seem like a low-stakes interaction, the same technology could be much more manipulative in the wrong hands.

What would happen if you could make lifelike calls simulating the voice of a celebrity or politician, for example?

“There’s already major fear around AI and it’s not really helping the conversation when there’s a lack of transparency,” Darcy said.

So what do you do when all signs point to needing a university degree to gain any sort of advantage? Unfortunately, in the current state of affairs, most employers will not hire you without a degree, even for junior or entry-level jobs. Once you have that degree, coming to my Mentor Program at 1000ml, with our patent-pending training system – the only such system in the world – is the only way to gain the practical knowledge and experience that will jump-start your career.

Check out the dates below for our upcoming seminars, labs and programs; we’d love to have you there.

Be a friend, spread the word!