

The rise of the killer bots

Vijay Verghese, Editor, Smart Travel Asia

Can travel or conversation ever be the same again? How social media users – and random innocents – have become collateral damage in the shootout at the FB Corral. But the Trump army marches on.


by Vijay Verghese/ Editor


English idiom vs idiot algorithms

We're all going down but this man's cool as a cucumber. Why it's a crime to post smileys and jest with friends. English idioms vs idiot algorithms in the new Wild West.

AND, AS I sip my tea in the morning, there it is again, that warm and fuzzy message. “Vijay we care about you and the memories you share here.” Not the Church. Not a post on the office board. The Facebook whisper is ingratiating and tinged with a compassionate urgency.

It is a deception as dishonest as it is mechanical – not unlike a confidence trickster’s patter to gain trust – and designed expressly to put people out of work because someone believes computers can do the job better. They can’t.

You cannot expect a bot or even superior artificial intelligence to actually understand anything apart from coded commands. True or false. Yes or no. Computers inhabit a simple binary world and caring about my feelings has absolutely nothing to do with it. Yet we see ourselves through this false lens and are reassured daily that we are valued because someone clicked on our post. Or some bot sent us a message masquerading as a dear friend.


AI lacks a sense of humour, stumbles over idiom, and hits a wall with ‘open-ended inference’ – sentences that can mean entirely different things depending on the placement of a comma or a single word.

Take humour. In the Cambridge Dictionary, ‘wicked’ means ‘morally wrong or bad’ as well as ‘excellent’. Terms like ‘badass’ or ‘killing’ are a minefield for AI. Slang words like ‘dope’ and ‘slay’ make things murkier still. Now combine sentence structure with slang, the notorious chameleon shifts of the English language, or open-ended inference. The show bombed. The show was bombed. He drives a boss car. He drives the boss’s car. The boss drives a car. Peter promised Lisa to leave. Peter promised to leave Lisa… The list is endless. This is where the human brain proves its worth.


On a recent photo post of my Hong Kong hike, the FB Police instantly swung into action, serving me with a curt notice. “Your comment goes against our Community Standards on hate speech and inferiority,” it cautioned, citing my quip to a friend that said, “Wicked Canadian” (followed by a laughter emoji). I was asked if I agreed or disagreed. I disagreed keenly and punched that button.

Seconds later I was served with a sterner notice: “We reviewed your comment again and it doesn’t follow our Community Standards.” There were those imposing capital letters again like some morally damning Hammurabi’s Code. Facebook suggested I appeal to its Oversight Board with a lengthy set of procedures akin to clearing my name at The Hague.

A single bot intent on saving the world from grating incivility had secretly eavesdropped on and interrupted my conversation with a friend while managing to get everything wrong. But it did set off an eruption of mirth from South Africa to Vancouver. Meanwhile Donald Trump Jr’s desperate video to raise an army for Trump – not quite Dad’s Army – remains on Facebook because, as company spokeswoman Monika Bickert explained, “When we change our policies, we generally do not apply them retroactively.”

FB’s Oversight Board, which has operated for the better part of a year, is to be commended for upholding things like the Trump ban and fighting for a breast cancer awareness post that Instagram whipped off on account of a bared nipple. But somehow I cannot visualise these august personages taking a heroic stand over my humble two-word jest plus emoji.

In 2018, Facebook announced it had deleted 99 percent of all ISIS and al-Qaeda related posts, thus saving humanity from the slavering maws of terrorism. A worthy strike. Yet, in the process, investigative newspaper and magazine articles also disappeared, prompting many to seek answers about the company’s algorithms. How many times had they failed? What was the ratio of success to failure? Was there a human in the loop to determine an actual offence? There has been no response. This is a dark process that FB zealously guards.

Algorithms and smart computers often make mistakes, sometimes with serious and unfortunate consequences. This is unsurprising. Machines can go spectacularly wrong. Computers are fast but not intelligent. And they cannot be creative – a process rooted in lived experience (computers have none), mood and skill.

AI mistakes and racial profiling – often due to unintended human input bias – have landed people in jail, as in the 2020 Michigan case of Robert Julian-Borchak Williams, who protested to his captors, “You think all black men look alike.” Williams won an unconditional apology from the prosecutor. In 2015 Google was forced to apologise after its Photos app tagged a black computer engineer and his girlfriend as ‘gorillas’.

With facial recognition for a long while favouring lighter skin tones and lacking a database of diverse subjects, Google’s app often labelled darker people ‘dogs’. It also sometimes tagged dogs as horses. The animals gamely galloped, or loped, on. Humans fought back. How did this happen? For many years Kodak – long a benchmark of photographic perfection – used its ‘Shirley Card’ (so named after one of its more photogenic Caucasian employees) to measure skin tone, light and shadow during the printing process. There was no such fine-tuning for races with less than milky skin.

In 2016 Microsoft had to deactivate Tay, a machine-learning chatbot, within a day of launch after it began tweeting all manner of Nazi gibberish and even endorsing genocide. It had been conversing with the wrong crowd and quickly turned racist and abusive. That is the difference between human learning and AI. Humans make weighted judgements, they have common sense, and they take moral positions.

Over a decade ago our magazine tested an AI-driven ‘talking head’ to field simple questions for certain destinations. We named her Michelle. In no time at all our fetching frontline bot had been inundated with all manner of sexual innuendo and requests for dates. “I am sorry, I do not have the data to answer that,” she would respond, wide eyes blinking, hair swirling alluringly, the chin tilting a bit this way and that, the expression resolutely blank. We decided to throw the switch and put her out of her binary misery.

Artificial intelligence in travel has come a long way since then. Airline check-in kiosks use AI. Robots can deliver room service and vacuum floors. And Siri can make reservations for your flights or dinner. Yet there are other reservations too. As Stephen Hawking put it, “Whereas the short-term impact of AI depends on who controls it, the long term impact depends on whether it can be controlled at all.” Elon Musk once described it as mankind’s greatest existential threat. He didn’t mince words: “With artificial intelligence, we are summoning the devil.”

For Facebook I am apparently no different from Tay. I am guilty of ‘hate speech and inferiority’ (whatever that may mean). Now that I have a record, the algorithms will prowl my comments with greater fervour, perhaps serving up posts featuring the Reichstag takeover and the V2 bombing of London.

With this will come the creepy stalker advertising triggered by a random visit to a cheap hotel website to check on quarantine arrangements, or kittens on Instagram because you clicked on a kid’s birthday party photos.

In the early days of FB, ‘boosted’ visual posts would often get held up because “text should not cover more than 15 percent of the image”. So I was told. By a human. In those days a locally based FB person would communicate through email (a short-lived privilege) to explain the company’s head-scratching policies. Large fonts on a photograph were probably deemed advertising (which attracted higher rates). How then could you post a photo of a poster that said in bold type, CHE LIVES, or SAVE THE PLANET, or GET LOST ZUCKERBERG? These are enduring mysteries. Now how the heck do I enrol for Trump’s army?

Send us your Feedback / Letter to the Editor

