When software takes over from human decision-making
An iPhone 4S buyer is suing Apple over alleged failings in its Siri voice-activated assistant software. The dissatisfied user claims Siri often did not understand what it was asked to do, or responded with the wrong answer.
In the litigation-happy US, these types of class-action consumer suits are common, of course, and allegations of serious product failings have to be taken with a pinch of salt.
Often, the main motivation is to force a big company into an out-of-court settlement or a refund rather than endure a lengthy court case which, regardless of the outcome, could damage its brand reputation and hit sales.
Indeed, I suspect that if Siri had been developed by a small cash-strapped start-up, rather than deep-pocketed Apple, Frank Fazio, the New York-based consumer who has filed the complaint, would have been less inclined to sue.
But whatever the merits of this case, the lawsuit raises an interesting issue that is likely to be heard more often as devices and websites with artificial intelligence play a bigger role in our daily lives and, increasingly, make decisions for us.
One of the more controversial examples of software taking over from human decision-making is in so-called “fly-by-wire” aircraft. The tragic loss of an Air France aircraft carrying 228 people over the Atlantic in 2009 highlighted to the general public just how dependent modern aircraft have become on software. Initially, some reports speculated that the computers had contributed to the crash.
But last year’s inquest into the AF447 accident exonerated the fly-by-wire technology. The computers controlling the flight had switched off the autopilot after becoming confused by conflicting speed readings caused by faulty sensors. In the confusion in the cockpit, one of the pilots then made the fatal assumption that the plane was flying too fast, and the crew’s misguided attempts to correct the plane caused it to plunge into the ocean.
Much less dramatic are those everyday tales of satellite navigation systems directing car drivers to make prohibited turns or take non-existent roads. Indeed, Fazio complains specifically about the poor direction-giving capabilities of the Siri assistant in his iPhone.
Misleading satellite navigation systems caused more than £200 million worth of damage to cars in the UK last year by directing drivers to go the wrong way, according to a recent survey. As many as 83% of users have been misled by their system, the survey found, and 68% of those who have been led astray said they ended up making longer journeys and so wasting fuel.
As the survey was conducted by a motor insurance website, its objectivity is perhaps open to question, and I suspect some of the more glaring examples of satnav errors are apocryphal rather than based on personal experience.
Nevertheless, I have no problem accepting that satellite navigation systems do sometimes get the directions wrong.
What I do have difficulty understanding is why some users expect 100% accuracy from what is, after all, a “best effort” consumer service, and one that is provided essentially free of charge.
In the days before satnav systems, drivers had to use paper maps to find their way in a strange city or country. How many disgruntled satnav users would prefer to go back to paper maps?
And for those who could not read maps well, the UK’s Automobile Association used to publish written route directions to make it easier for its members to get from A to B — “In half a mile, turn left onto the A456”, etc.
Needless to say, the information was provided in good faith, and no-one thought to sue the AA if the printed directions caused them to lose their way — which, of course, is a lot more likely with printed information, as it cannot easily be updated to show new roads, temporary road works or changed traffic signs.
Technology makes a good servant but a bad master.