
Better fraud detection, no travel warnings

In mid-March, a scammer in California attempted to purchase $150 worth of Wingstop using my debit card. Aside from being impressed by the sheer size of the order, I was also relieved, because Citibank, which had issued my card, immediately declined the transaction and alerted me to the fraud. Within a few minutes I was able to block my card, prevent further purchases by the fraudsters, and order a new card. All’s well that ends well.

When I traveled to Buenos Aires in April, I thought I might find myself in a similar situation. Sure, the banks say you no longer have to call ahead when traveling, but I assumed some purchases would still be flagged as potential fraud, as had happened on previous international trips. Miraculously, everything went smoothly. I don’t know how JPMorgan Chase knew it would be me spending $200 on Botox in Argentina, but it did. (No, I didn’t book my flight on the same card, and whatever, everyone gets Botox now.)

It’s great that banks and credit card companies are getting better at distinguishing which payments are fraudulent and which are legitimate. Many people have a horror story about having their credit card stolen or having their own legitimate transactions flagged as suspicious. And it’s nice not to have to spend 20 minutes on the phone before a vacation explaining where and when you’re going. Credit card fraud protection is far from perfect, but there’s no denying that technology is improving. On the other hand, it’s also pretty crazy to think about how much financial institutions need to know about you to make the right decisions.

I was curious about how it all worked – and to be honest, I was a little unsettled. So I reached out to a few credit card companies and researchers to find out more. Why don’t people have to inform their credit card companies about travel anymore? And more generally, how have banks gotten so good at figuring out what is normal and what isn’t about our spending habits?

The Federal Trade Commission receives thousands of card fraud complaints each year. The Nilson Report, which tracks the card industry, said payment card fraud resulted in $33 billion in losses globally, $13.6 billion of that in the United States, in 2022. So credit card issuers and banks have every incentive to detect fraud: they want to keep their customers happy and, more importantly, cut their losses. In the United States, major credit card issuers and banks generally have a zero-liability policy, meaning that if a customer is defrauded, the company, not the customer, is on the hook for the costs.

Years ago, successfully completing a transaction depended on whether there was a physical card, whether you had enough money for the purchase, and (if the cashier wanted to check) whether your signature on the receipt matched the one on the back of your card. In some cases, the cashier might even have asked for ID or called the bank to verify the amount. We’ve moved far beyond those bad old days thanks to the same tools that drive most innovation: data and computers. Credit card companies and banks know a lot about us—where we shop, when we spend money, and how much we’re typically willing to pay for things—and they’re getting better at putting that knowledge into action.

The models evaluate a trillion dollars’ worth of transactions every year.

While new forms of artificial intelligence are all the rage, fraud detection owes a lot to machine learning, an area of AI that has been around for years. Enormous amounts of transaction data are fed into computer systems, and algorithms pick out patterns and relationships. The algorithms build decision trees to predict the likelihood of different outcomes and to figure out what counts as normal and what looks fraudulent. It’s not that your credit card company knows you’ll spend a lot of money specifically on A and not B – it knows that customers with your profile fall into the “Likes A” camp and not the “Likes B” camp.
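As a toy illustration of the idea – my own sketch, not any issuer’s actual model, with made-up names like `fraud_score` and hand-picked thresholds – a system can score a new transaction by how far it sits from the cardholder’s historical profile. Real systems learn thousands of features and decision rules from data instead of hard-coding three:

```python
from dataclasses import dataclass
from statistics import mean, stdev

@dataclass
class Transaction:
    amount: float
    merchant_category: str
    country: str

def fraud_score(history: list[Transaction], new: Transaction) -> float:
    """Toy anomaly score comparing a new transaction to past behavior.
    Higher score = more unusual. Thresholds here are arbitrary."""
    amounts = [t.amount for t in history]
    mu, sigma = mean(amounts), stdev(amounts)
    score = 0.0
    # Feature 1: amount far outside the usual spending range
    if sigma > 0 and abs(new.amount - mu) / sigma > 3:
        score += 0.5
    # Feature 2: a merchant category the cardholder has never used
    if new.merchant_category not in {t.merchant_category for t in history}:
        score += 0.3
    # Feature 3: a country not seen in the cardholder's history
    if new.country not in {t.country for t in history}:
        score += 0.2
    return score

history = [Transaction(40, "grocery", "US"), Transaction(12, "coffee", "US"),
           Transaction(60, "grocery", "US"), Transaction(25, "restaurant", "US")]
suspicious = Transaction(150, "fast_food", "US")  # e.g., the Wingstop order
print(fraud_score(history, suspicious))  # high score -> flag for review
```

The catch, as the researchers note below, is that once a model combines hundreds of learned features instead of three legible rules, no human can point to the one that tipped the decision.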

“It’s looking at whether what’s happening is completely out of the ordinary for your general behavior,” said Tina Eide, executive vice president of global fraud risk at American Express. “And when I talk about general behavior, that’s generalized, right? It doesn’t depend on the specific purchase or the specific retailer.” Eide added: “The models evaluate a trillion dollars’ worth of transactions every year.”

The machines now know more than ever. Mike Lemberger, Visa’s regional risk officer for North America, said the number of data points people generate with their credit cards has increased dramatically over the past five years. More and more people are using cards instead of cash. And they don’t just have a physical card that they pull out at the store – they have their card details stored in their Amazon account, Netflix account, iPhone, etc. The more purchases the card issuer can analyze, the more accurate fraud detection will be.

“At Visa, we don’t have consumer information — that’s your financial institution that has it — but what we do have is this triangulation of all of those data points,” Lemberger said. “We can get more and better results by building machine learning and AI capabilities on top of it, and it becomes a much, much more powerful predictor that we then feed to all of our partners to say, ‘Hey guys, if you want to make the best decisions, there’s a lot of really good information here.’”

Visa won’t block your card directly, but it will alert your bank that your purchase appears suspicious or that fraud has been detected at the merchant you’re doing business with.

This all seemed pretty simple until I spoke to Yann-Aël Le Borgne and Gianluca Bontempi, a pair of researchers at the Université Libre de Bruxelles in Belgium who study machine learning and card fraud. They emphasized the enormity of this fraud-detection technology. Companies and their algorithms ingest millions of transactions and build so many decision trees to categorize activity that the result can defy human logic, they said. Basically, the computer can be right when it says your transaction looks weird even if it happened in your hometown at a fairly innocuous merchant, and it can be right that a transaction is fine even though it happened at a faraway location – but if a human tries to work out what did or didn’t trigger the alarm, they may never be able to pin down the reason.

“Machines can accommodate many more features, and at the end of the day it is not clear whether all of these features are meaningful to humans,” Bontempi said. “Humans are used to working with two, three, at most five features, while machines can work with hundreds of features. So there are really different levels between what a machine and a human can do.”

There are human-written rules, which are generally open to interpretation, and there are machine-generated rules, which can be a black box. The latter are more accurate, but they can be more difficult, if not impossible, for humans to reverse engineer. And banks may use several different algorithms, which complicates things further. Data scientists are the ultimate decision makers, but the information they work with comes out of highly complex technology.

Humans are used to working with two, three, at most five features, while machines can work with hundreds of features.

When I explained my somewhat embarrassing wings-and-Botox conundrum to the experts and asked what might have triggered one warning and not the other, they offered different explanations. Eide from American Express said that even though I didn’t book my trip to Argentina with the credit card I used to purchase the Botox, something else probably tipped off the system that I was there. I realized I had also bought a package of spin classes in Buenos Aires on my phone using the same card, and I had paid for a meal in town. Visa’s Lemberger emphasized that it’s all about the data and spending patterns, and said that given mine, the Botox was probably a better fit for my profile than the massive delivery order.

“I hate to break it to you, but at the end of the day, all of these data points form personas. Just like in marketing someone would use personas to market to you, we use the same technology to protect you,” he said. “And the fact is, we use these data points to protect not only you, but the entire ecosystem.”

At some point it occurred to me that the supercomputers that run credit card companies and banks might know more about me than I even know or understand about myself.

“It is likely that the exact reason why a transaction led to your card being blocked cannot be clearly interpreted,” said Le Borgne.

I also asked whether there was a big difference between credit card protection and debit card protection, and was told not really, though banks may be a little more restrictive on credit, since they’re technically lending you money that isn’t limited by your actual cash balance. I also asked whether companies stopped worrying about pre-clearing trips because they no longer care as much about losing money to fraud. The answer was a resounding no.

“Ultimately, someone has to pay for the fraudulent activity,” Lemberger said. Credit card companies will give you your money back if you fall victim to fraud, but as always, they will find another place to get the money back.

Instinctively, I’m not a tech enthusiast – if AI really is going to kill us, I think we should just turn it off. I’m not too worried about privacy, but I also don’t love the idea of AmEx, JPMorgan, and Citi knowing me like this. Still, it’s cool that companies really are getting better at fraud detection, especially in a world where the fraudsters themselves are constantly improving. I don’t want to say, “Yay, banks!” but maybe the answer here actually is a bit “Yay, banks!” At least until the next big data breach – then I’ll regret everything.


Emily Stewart is a senior correspondent at Business Insider and writes about business and economics.