During most rideshare trips, I like to talk with the driver. I enjoy learning about people, and on past trips I’ve had some memorable conversations with my Uber or Lyft drivers about their day jobs, their religious beliefs, their politics, and other topics.
On a recent ride, my driver and I got to talking about healthcare. He told me about a somewhat personal medical issue he was having (not sure if it was my trustworthy face or my rider rating that inspired him to open up). He then explained how he had used ChatGPT to help him diagnose the issue and walk him through how he should care for himself over the next several days. He said the app was exactly right in its diagnosis and treatment instructions.
This got me thinking about the many ways people are now using AI.
It turns out a lot of us, me included, are using AI for health-related advice. The Annenberg Public Policy Center’s April 2025 health survey found that:
- 79% of U.S. adults say they’re likely to look online for the answer to a question about a health symptom or condition.
- 75% of people who search online say that AI-generated responses “sometimes” (45%) or “often or more” (31%) provide them with the answer they need.
- Most Americans (63%) think AI-generated health information is somewhat (55%) or very (8%) reliable.1
Doctors and nurses may not like this trend. In fact, the idea of patient research via the web or AI, known more colloquially as “Dr. Google,” has shown up at least a couple of times in the HBO Max show The Pitt. For example:
“Oh, yeah. I’ve got Dr. Google in Central 9 bombarding me with questions like my four-year-old.” – The Pitt Season 1, Episode 7 (“1:00 P.M.”)
AI has joined the financial chat
Perhaps not surprisingly, Dr. Google and Dr. Chat have also entered the realm of financial services. A recent New York Times article noted that:
Two-thirds of adults who have used generative AI said they had used it for financial advice, and around 80 percent of those who acted on that advice said it had improved their financial situation, according to a recent survey of more than 1,000 people by Intuit Credit Karma. Younger generations are especially receptive: Around 82 percent of Generation Z and millennial AI users reported using it for financial guidance.2
So, if people are using AI tools for financial advice, what sorts of information are they seeking?
The most common topics listed in the Credit Karma survey referenced above were:3
- Basic personal finance concepts (35%)
- Financial goal setting and action plans (35%)
- Budgeting and expense management (34%)
- Optimizing savings (33%)
- Investing in the stock market (32%)
- Saving for retirement (31%)
The question becomes: what should investors use AI tools for? I’m all for using AI to better understand concepts or to help investors think about how to save more, but it’s also important to have a guide, or as those in the AI community say, “a human in the loop.”
It may not cost you, but it’s going to cost you
Some recent research highlights the importance of a human guide. In their 2025 paper entitled “Financial advice behavior: humans versus AI”, Baeckström and Matkovskyy studied the advice given by advisors and AI tools like ChatGPT, Gemini, and others. They did this by using a “vignette-based survey experiment to compare portfolio recommendations made by professional human advisors and GenAI (large language models, LLMs).”4
The main finding for us to be aware of when it comes to using AI tools for investing was that AI advisors are consistently more conservative than human advisors. The study found that LLMs generally recommend safer and less variable portfolios than professionals, and that trend held across the AI platforms tested.
Notably, the AI advisors systematically under‑risked clients, leading to lower long‑term returns. More specifically, the researchers found that the economic cost of this under-risking could amount to approximately 1.0 to 1.1 fewer percentage points of annual return. Importantly, those lower returns were not meaningfully offset by lower long‑run volatility.
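To get a feel for why a single percentage point of annual return matters so much over a lifetime of investing, a rough compounding sketch helps. The 7% baseline return, $100,000 starting balance, and 30-year horizon below are hypothetical assumptions chosen purely for illustration, not figures from the study:

```python
# Illustrative compounding of a ~1 percentage point return gap.
# All inputs (7% baseline, $100,000 balance, 30 years) are hypothetical.

def grow(balance: float, annual_return: float, years: int) -> float:
    """Compound a starting balance at a constant annual return."""
    return balance * (1 + annual_return) ** years

start = 100_000
years = 30

baseline = grow(start, 0.07, years)      # assumed baseline portfolio
under_risked = grow(start, 0.06, years)  # ~1 pp lower annual return

print(f"Baseline (7%):     ${baseline:,.0f}")
print(f"Under-risked (6%): ${under_risked:,.0f}")
print(f"Shortfall:         ${baseline - under_risked:,.0f}")
```

Under these assumed inputs, the gap compounds into a six-figure shortfall over the full horizon, which is why the study’s finding of a 1.0 to 1.1 percentage point drag is far from trivial.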
Noisy but adaptable
While AI’s major shortcoming was under-risking, the researchers also found that human financial advisors systematically project their own risk preferences onto clients. In practice, an advisor’s personal portfolio risk strongly influenced the risk they recommended to clients, a tendency most visible in recommendations for very high-risk portfolios rather than across all risk levels.
Clearly, this could pose problems, especially if there is a risk tolerance mismatch between advisor and client. However, the researchers found that this tendency declined with advisor age and experience.
Overall, the research shows that human advisors, though sometimes noisy or biased, are far more context-responsive and growth-oriented than AI advisors. By contrast, AI advisors tend to be more scalable and consistent, but structurally conservative and under‑personalized.
To put this in different terms, we might imagine that AI advice is like a car with a built‑in speed limiter: it feels steady and safe, but never goes as fast as it reasonably could, even on a clear highway. A human advisor, on the other hand, is like an experienced driver: they may sometimes drive too fast and sometimes too slowly, but they can adjust their speed based on road conditions, destination, and time horizon.
Replacement or complement?
Another recent paper, “The Role of Large Language Models in Financial Planning: Replace or Complement Advisors?” by Greene and Fairhurst, uncovers some more important findings for clients who may be using AI when creating their financial plans.
In their study, the authors sought to examine the potential for LLMs to replace or complement human financial planners. They did this by creating “a series of structured prompts that incorporate detailed fictitious client descriptions and evaluate the models’ responses…across seven broad areas: Financial Statements, Insurance, Education Planning, Retirement Planning, Investments, Tax Planning, and Estate Planning.”5
Overall, they found that LLMs did a good job providing general advice, but struggled when it came to personalized, client‑specific guidance. For example, when asked to provide concrete numbers (e.g., savings amounts, allocation targets, 529 contributions), models were often vague or non‑committal. Along with that, the evaluators explicitly noted that models couldn’t prioritize competing client goals or adapt recommendations to nuanced personal circumstances.
This really highlights the value of sitting down with a human advisor. Human advisors excel at translating goals and helping clients consider tradeoffs and constraints to craft precise recommendations. It also speaks to the core differentiators that advisors provide, namely trust, relationships, and emotional intelligence.
So while investors may find the quick, free information of an AI advisor enticing, it’s important to remind them how human connection, emotional understanding, and trust‑building create an environment where they may be better able to meet their financial goals.
Human experience matters
As more investors and clients – and all of us – use AI, we must remember why we are using it. AI helps us comb through vast amounts of data, provides us with information, and may even help automate repetitive tasks.
Working with a financial advisor is different. Financial advice clients are not just buying information; they are buying confidence, reassurance, accountability, and advocacy. These are intangible benefits that require a human relationship that cannot be provided by an AI.
Another key advantage financial advisors hold over AI is experience that has helped them hone their craft. The same is true for doctors and nurses. Experiencing different markets, different cases, or different ailments firsthand develops a deeper understanding of how things work.
Dr. Google or Advisor Chat may help with information and the easy stuff, but it can’t help with the hard stuff. And the hard stuff is the decisions that affect our lives most, such as how to manage our health and our finances. That is why I believe life decisions should be human decisions.
1 “Many in U.S. Consider AI-Generated Health Information Useful and Reliable.” Annenberg Public Policy Center, July 2025.
2 “They Had Money Problems. They Turned to ChatGPT for Solutions.” New York Times, September 2025.
3 “The Rise of Fin-AI: Why Americans Are Trusting Generative AI With Their Wallets.” Creditkarma.com, September 2025.
4 Baeckström, Y., & Matkovskyy, R. (2025). “Financial advice behavior: humans versus AI.”
5 Greene, D., & Fairhurst, D. J. (2025). “The role of large language models in financial planning: Replace or complement advisors?”
Volatility measures risk using the dispersion of returns for a given investment.
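For readers who want the formal definition, volatility is conventionally computed as the sample standard deviation of periodic returns (here \(r_i\) denotes each period’s return and \(N\) the number of periods):

```latex
\sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}\left(r_i - \bar{r}\right)^2},
\qquad
\bar{r} = \frac{1}{N}\sum_{i=1}^{N} r_i
```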