Mastering AI for Financial Advice: Why the Quality of Your Prompt Matters Far More Than the Model – MIT Prof

Many Americans are now turning to ChatGPT, Claude, or Gemini for financial guidance, but the usefulness of what they get back depends far less on the sophistication of the AI and far more on how skillfully they phrase their questions.

That is the view of Andrew Lo, director of MIT’s Laboratory for Financial Engineering and a principal investigator at its Computer Science and Artificial Intelligence Laboratory.

“I think that there’s a real art and science to prompt engineering,” he said in a recent web presentation for Harvard University’s Griffin Graduate School of Arts and Sciences, first published by CNBC.


AI can deliver clear, high-level explanations on many topics. It is often very good at outlining why diversification matters, when exchange-traded funds might outperform mutual funds, or the basic mechanics of retirement accounts. Yet experts are quick to highlight its serious limitations when the conversation turns personal or precise.

The Clear Limits of AI in Personal Finance

Andrew Lo stressed that AI still struggles with individualized planning. Tax strategies are a prime example. While it can discuss general rules or potential deductions, asking it to run detailed calculations based on someone’s actual situation is risky.

“When it comes to very, very specific calculations of your own personal situation, that’s where you have to be very, very careful,” Lo said.

Another persistent weakness is hallucination—the tendency of large language models to invent plausible-sounding answers that are simply wrong. Lo finds this especially troubling in finance.

“One of the things about [large language models] that I find particularly concerning is that no matter what you ask it, it’ll always come back with an answer that sounds authoritative, even if it’s not,” he said.

But despite these shortcomings, adoption is surging. An Intuit Credit Karma poll of 1,019 adults released in September found that 66% of Americans who have tried generative AI have used it for financial advice. Among millennials and Gen Z, the share exceeds 80%, and 85% of those who received recommendations went ahead and acted on them.

Lo’s bottom-line advice is pragmatic: “[People] should be using AI for financial planning — but it’s how they use it that’s important.”

Crafting Prompts That Actually Work

The difference between generic advice and genuinely useful guidance often comes down to the prompt itself. A vague question like “How should I retire?” typically produces boilerplate answers that are of little practical value—“garbage in, garbage out,” as Lo put it during the Harvard webinar.

A far stronger prompt, he explained, gives the AI clear context and structure: “Assume you are a fee-only fiduciary [financial] advisor. Here are my goals, constraints, tax bracket, state, assets, risk tolerance and timeline. Provide me with, number one: base case strategy. Number two: key assumptions. Three: risks. Four: what could invalidate this plan. Five: what information you are missing, and in particular, what are you uncertain about.”

By explicitly instructing the model to act as a fiduciary, legally bound to put the client’s interests first, and by demanding transparency on assumptions, risks, and gaps in knowledge, users extract far more thoughtful and cautious responses.
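Lo’s five-part structure lends itself to a reusable template. A minimal sketch in Python follows; the profile fields and helper function are illustrative placeholders, not part of any tool Lo mentioned:

```python
# Sketch of Lo's five-part advisory prompt as a reusable template.
# The profile keys below are example placeholders, not a fixed schema.

ASKS = [
    "Base case strategy",
    "Key assumptions",
    "Risks",
    "What could invalidate this plan",
    "What information you are missing, and what you are uncertain about",
]

def build_advisor_prompt(profile: dict) -> str:
    """Compose a structured prompt from a dict of personal context."""
    context = "\n".join(f"- {key}: {value}" for key, value in profile.items())
    numbered = "\n".join(f"{i}. {ask}" for i, ask in enumerate(ASKS, 1))
    return (
        "Assume you are a fee-only fiduciary financial advisor.\n"
        f"Here is my situation:\n{context}\n"
        f"Provide:\n{numbered}"
    )

prompt = build_advisor_prompt({
    "Goals": "retire at 60",
    "Tax bracket": "24%",
    "State": "MA",
    "Risk tolerance": "moderate",
    "Timeline": "25 years",
})
print(prompt)
```

The point of the template is consistency: every question carries the same role instruction, the same personal context, and the same five numbered demands for transparency.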

Certified financial planner Brenton Harrison, founder of New Money New Problems, echoed the point.

“Even if it’s the best model in the world, if it’s fed a bad prompt,” it will only be able to do so much, he said.

He noted that a strong prompt must contain enough specific detail for the AI to tailor its output rather than fall back on generic platitudes.

Lo described the process as iterative, almost conversational: he said it can take more than 20 back-and-forth exchanges to refine an answer until it feels reliable.

“It’s a process of trial and error,” he told CNBC.

Practical Techniques to Sharpen Results

One of Lo’s most useful shortcuts is what he calls “reverse engineering” the prompt. After receiving a solid answer, simply ask the AI: “What prompt should I have asked you in order to generate the answer that I was looking for?”

The response can then be saved and reused for similar future questions, making prompt engineering much more efficient over time.
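A reverse-engineered prompt is only efficient if it can actually be found again. One simple approach, sketched below under the assumption of a local JSON file keyed by topic (nothing Lo prescribes), is a small saved-prompt library:

```python
import json
from pathlib import Path

# Lo's reverse-engineering follow-up, kept verbatim for reuse.
REVERSE_ENGINEERING_PROBE = (
    "What prompt should I have asked you in order to generate "
    "the answer that I was looking for?"
)

def save_prompt(library_path: Path, topic: str, prompt: str) -> None:
    """Store a refined prompt under a topic key for later reuse."""
    library = json.loads(library_path.read_text()) if library_path.exists() else {}
    library[topic] = prompt
    library_path.write_text(json.dumps(library, indent=2))

def load_prompt(library_path: Path, topic: str):
    """Fetch a previously saved prompt, or None if none was saved."""
    if not library_path.exists():
        return None
    return json.loads(library_path.read_text()).get(topic)
```

Once a prompt that worked for, say, retirement planning is saved, the next retirement question starts from the refined version rather than from scratch.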

He also recommends pressing the model to reveal its own limitations. After getting what seems like a good answer, follow up with targeted questions such as: “What kind of information did you not have in order to be able to make that recommendation, and that could lead to some unreliable outcomes?”

Or: “How convinced are you that this is the correct answer? What kind of uncertainties do you have about the answer, and what kinds of things don’t you know that you need to in order to come up with a conclusive answer to the question?”

These probes help cut through the false sense of authority that large language models routinely project.

Harrison takes verification one step further. He instructs the AI to list its sources and, when possible, to limit those sources to reputable, verifiable ones.

“If you don’t require it to verify the sources, it’ll give an opinion, which isn’t what I’m looking for,” he said.
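Taken together, Lo’s uncertainty probes and Harrison’s sourcing requirement amount to a short audit checklist that can be appended to any exchange. A sketch, with the wrapper function as an illustrative assumption:

```python
# Standard follow-up probes, paraphrased from Lo's and Harrison's advice.
VERIFICATION_PROBES = [
    "What information did you not have in order to make that "
    "recommendation, and could that lead to unreliable outcomes?",
    "How convinced are you that this is the correct answer, and what "
    "uncertainties do you have about it?",
    "List your sources, limited to reputable, verifiable ones.",
]

def with_verification(question: str) -> list:
    """Return the user's question followed by the standard audit probes."""
    return [question] + VERIFICATION_PROBES
```

Each turn in the returned list would be sent in sequence, so that no answer is accepted without being pressed on its gaps, its confidence, and its sources.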

Why Human Judgment Still Matters

Even the best-crafted prompts cannot fully replicate the nuance a human advisor brings. Every person’s financial life contains layers of context—family obligations, emotional tolerance for risk, shifting life circumstances, and subtle tax interactions—that are difficult to capture completely in text.

“Looking to [AI] for advice implies you are giving it enough information to form an opinion and make a recommendation, and that’s a step further than I’d go with AI,” Harrison said.

He pointed out that a skilled planner teases out subtleties through conversation that a user might not even realize they need to include in a prompt. That human element remains hard to replicate.

The takeaway from both experts is consistent and reassuring: AI can be a powerful, accessible tool for financial education and initial planning, but it works best as a well-informed assistant rather than a replacement for professional judgment.

The real skill—and the real protection—lies in learning how to ask better questions, verifying every output, and knowing when to bring in a qualified human advisor for complex or high-stakes decisions.

In an era when financial lives have never been more complicated, experts suggest that mastering the art of prompting may be one of the most practical financial skills anyone can develop. Those who treat AI as a conversation partner rather than an oracle will get far more value, and far less risk, from the technology.
