Avoiding financial exploitation in the age of AI
While emerging technology like artificial intelligence (AI) creates opportunities, it can also present risks – including the increased risk of financial exploitation. Wealth Strategist Ben Rizzuto discusses those risks, as well as resources for mitigating them, with Portfolio Managers Denny Fish and Jonathan Cofsky.
5-minute read
- Continued improvements in AI technology have made it easier for criminals to create content, which has in turn made it more difficult for individuals to tell the difference between a real document and a fake, AI-generated document.
- With the rise of AI, investors will be subjected to an increasing number of attacks via email, mobile phone, and other methods as AI essentially allows criminals to put their phishing attacks on steroids.
- Financial professionals can play a key role in raising awareness of the risks of financial exploitation and helping clients protect themselves against tech-enabled scams.
“Shall we play a game?”
That quote from the 1983 film WarGames was, for many of us, one of our first experiences with artificial intelligence (AI). Since the film’s release, we’ve gone from computers that filled whole rooms (like the WOPR in the movie) to a world where we interact with AI daily and may have apps like ChatGPT on our phones.
The speed at which one can create computer-generated visual and audio content that looks and sounds like the real thing has increased exponentially, leading us to question what we’re really seeing and hearing. Unfortunately, this can pose a number of risks – including an increased risk of financial exploitation.
The FBI Internet Crime Complaint Center reported $5.6 billion in losses due to internet fraud in 2021.1 Older individuals are at even greater risk: The FBI Elder Fraud Report found that losses tied to tech support fraud totaled nearly $350 million in 2021, a 137% increase from the previous year.2
With this in mind, I sat down with Denny Fish and Jonathan Cofsky, Portfolio Managers for the Janus Henderson Global Technology and Innovation Fund. They follow the tech sector daily and can offer valuable perspectives on how AI is changing and how the technology is being used by companies and end users. Their insights can also help us protect ourselves from ever-improving AI-related scams.
Phishing on steroids
Continued improvements in AI technology have made it easier for criminals to create content. That content could take the form of an “official” letter from a financial institution, which, as you can imagine, poses serious risks. Denny and Jon believe it will be “virtually impossible” to tell the difference between a real document and a fake, AI-generated document in the future.
It takes just a few seconds of sampled audio for AI to generate a voice deepfake. These sorts of fakes and scams have been used to exploit many individuals, leading to more and more unsettling headlines, such as a New York Times article titled “Voice Deepfakes Are Coming for Your Bank Balance.”3
While AI clearly has many positive applications, its rapid rise also means that investors will inevitably be subjected to an increasing number of attacks via email, mobile phone, and other methods as AI essentially allows criminals to put their phishing attacks on steroids. This helps explain why the top three AI-enabled scams are romance scams, ransomware, and fake requests from the IRS or other government agencies claiming an individual owes money.4
Life decisions are human decisions
So, what can advisors and their clients do to help protect themselves from these threats? Denny, Jon, and I believe the increased risk AI poses to clients elevates the importance of professional financial advice. As clients age and their financial decisions grow in consequence, advisors and other service providers need to educate clients on the importance of making sure they are working with real people.
For example, clients need to make sure they are going to a company’s real homepage. In fact, Jon noted that we may someday see digital watermarks on websites to signify their authenticity. Indeed, companies may be forced to make things a little less easy – or to create “extra friction,” as Jon and Denny put it – when it comes to transactions. While this goes against companies’ goal of making customer interaction as quick and smooth as possible, that extra friction may help ensure transactions are authentic and protect customers from being exploited.
Another thing investors should consider is the type of decision or transaction that is taking place. If it’s a service-related issue with low stakes, then working through an AI chatbot may be most efficient. If, however, the transaction is related to your assets or could have a lasting effect on your life, then human interaction – and potentially some level of inefficiency – should be sought.
Resources to help protect against financial exploitation
A resource that we have found helpful comes from the work of Dr. Peter Lichtenberg at Wayne State University in Detroit. His website, Older Adult Nest Egg, provides tools for advisors, older adults, and their family members. For example, the Financial Decision Tracker is a 10-item interview that enables financial professionals to better assess an older client’s financial decisions in the context of their overall vulnerability.
Additionally, Janus Henderson has partnered with Dr. Lichtenberg to create several other resources to help advisors and their clients better understand the factors that may increase the risk of financial exploitation in older people, help mitigate those risks, and find ways to raise the topic of exploitation in client meetings.
The importance of human interaction
As new technologies emerge, there is a constant push and pull between the past and the future. We think back to simpler times while waiting for a drone to drop off an online order we placed an hour ago. And while technological advancements create opportunities and allow us to live more efficient and fulfilling lives, they also pose risks. Those risks – and those who exploit them – serve as a reminder of the importance of leaning into our “humanness” and maintaining the interactions that are so critical to our social and financial lives.
1 FBI Internet Crime Complaint Center, 2021.
2 2022 Elder Fraud Annual Report, Federal Bureau of Investigation.
3 “Voice Deepfakes Are Coming for Your Bank Balance.” New York Times, August 2023.
4 “Artificial intelligence is coming for seniors: AI’s dark side targets older adults in scams.” MarketWatch, June 2023.