Air Canada chatbot decision a reminder of company liability: experts

Experts say a ruling on Air Canada's liability for what its chatbot said shows how companies need to be cautious when relying on the technology. An Air Canada jet taxis at the airport in Vancouver, B.C., Wednesday, Nov. 15, 2023. THE CANADIAN PRESS/Adrian Wyld

TORONTO — A decision on Air Canada's liability for what its chatbot said is a reminder of how companies need to be cautious when relying on artificial intelligence, experts say.

The B.C. Civil Resolution Tribunal decision issued Wednesday showed that Air Canada tried to deny liability when its chatbot gave misleading information about the airline's bereavement fares.

"In effect, Air sa¹ú¼Ê´«Ã½ suggests the chatbot is a separate legal entity that is responsible for its own actions," tribunal member Christopher Rivers said in his decision.

"This is a remarkable submission," he said.

Jake Moffatt brought the challenge after Air Canada denied his claim for its lower bereavement fare on a flight he had already paid full price for. The chatbot had implied he could apply for the fare retroactively, but the airline said he had to apply before taking the trip.

Rivers said in his decision that it should be obvious Air Canada is responsible for the information on its website, and in this case the airline did not take reasonable care to ensure its chatbot was accurate.

Air Canada said in a statement that it will comply with the ruling, and that since it considers the matter closed, it has no additional information.

While the decision at a tribunal — which doesn't set a precedent — was fairly low stakes, with about $650 in dispute, it shows some of the ways companies can get caught out as they increasingly rely on the technology, said Ira Parghi, a lawyer with expertise in information and AI law.

"If an organization or a company decides to go down that road, it has to get it right," she said.

As AI-powered systems become capable of answering increasingly complex questions, companies have to decide if it's worth the risk. 

"If an area is too thorny or complicated, or it's not rule-based enough, or it relies too much on individual discretion, then maybe bots need to stay away," said Parghi.

Laws are still catching up to some of the gaps presented by AI, gaps that pending federal legislation aims to bridge, but in many cases existing law can cover the issues, she said.

"They relied on good old-fashioned tort law of negligent misrepresentation, and got to the right result based on, sort of, very conventional reasoning."

The argument that a company isn't liable for its own chatbot is a novel one, said Brent Arnold, a partner at Gowling WLG.

"That's the first time that I've seen that argument," he said.

If a company wants to avoid liability for its chatbot, it would have to prominently state that it takes no responsibility for the information the bot provides, which would make the tool of questionable use to consumers, said Arnold.

"That's about as good as the chatbot saying, 'Hey, why don't you eat this thing I found on the sidewalk?' Why would I do that?"

Companies will have to start disclosing more about which of their systems are AI-powered under the coming legislation, and they'll also have to test high-impact systems more thoroughly before rolling them out to the public, he said.

As rules around these practices evolve, companies will have to be careful about both civil and regulatory liability, said Arnold.

In the U.S., the Consumer Financial Protection Bureau issued guidance last year about problems with chatbots, warning that banks risk violating their legal obligations, eroding customer trust and causing consumer harm when they deploy the tools.

"When a person’s financial life is at risk, the consequences of being wrong can be grave," the regulator said.

The CFPB warned of numerous negative outcomes that many people are likely familiar with, including wasted time, inaccurate information, and the frustration of being stuck in "doom loops" of chatbot answers with no way to reach a human customer service representative.

While the Air Canada example was straightforward, the extent of companies' liability for potential errors has yet to be widely tested, said Arnold, as it's still early days for these AI systems.

"It will be interesting to see what a Superior Court does with a similar circumstance, where there's a large amount of money at stake," he said.

Gabor Lukacs, president of the Air Passenger Rights consumer advocacy group, said the Air Canada ruling does justice for the traveller and shows that the B.C. Civil Resolution Tribunal is a forum where passengers can get a fair hearing.

He also noted that the tribunal called out Air Canada for filing a boilerplate response that denied every allegation without offering any supporting evidence.

This report by The Canadian Press was first published Feb. 15, 2024.

Companies in this story: (TSX:AC)

Ian Bickis, The Canadian Press