The PhD Is The New MBA

In the PhD program, we train students to ask the right questions, to probe the basis for every inference, and to keep coming up with new approaches to problems. With AI, MBAs might need these very skills.

The teaching semester is fast approaching at Columbia Business School, and I have spent the last three weeks updating the presentation slides for my class on fundamental analysis, titled FAIME. In this class, we perform a 360-degree analysis of one or two companies over the course of the semester. This time, I have picked Home Depot and Starbucks.

The slides took much longer to update this time around, not because Home Depot and Starbucks have become more involved or difficult to understand. It’s because of my attempt, nudged by our deans, to deeply integrate AI into what we teach. Incidentally, we should measure AI’s contribution to productivity by both the time saved and the quality of the work produced.

As I spent more time with ChatGPT5 and Gemini (I did not try the other AI tools), I kept falling back on four skills I learned during my PhD program:

  1. Keep all inferences evidence-based.
  2. Always ask what the basis for an inference is.
  3. Learn to ask the right questions. The answers usually matter less.
  4. Keep asking, “What is your incremental contribution?” relative to everything that is already out there.

I’ll weave these themes through my experience using GPT5 for my class notes:

1. AI is excellent at scraping secondhand information from the web

Let me elaborate. In my experience, ChatGPT5 is a very efficient seeker, scraper and organizer of secondhand information from the web, at least for my application, financial analysis. AI tools are very good at interpolating between what others have worked on and left digital footprints of, as in finding answers to the common ratios and common questions my quizzes pose. The tools are not all that good at extrapolating, or going beyond what is obviously available and what others have already done.

For instance, one of the elementary steps in financial analysis is to separate the operating from the financing functions of a firm when calculating ratios. The intuition is that a dollar of EPS that comes from a mainstream non-financial firm’s operating activities deserves a higher valuation multiple than a dollar of financing income. This is because a dollar that Starbucks makes from selling coffee (its core business) is more likely to be repeatable, and more likely to be protected by barriers to entry that management can influence, than a dollar made from, say, returns on passive financial instruments the firm might hold.

When you ask ChatGPT5 to decompose Starbucks’ ROE (return on equity) into its operating and financing components, it comes back with something sensible but does not quite grasp that net debt is best defined as long-term debt minus cash, or that interest expense should be netted against interest income and other financial gains and losses on a post-tax basis. Why?

Because the typical analysis found on the web, on which ChatGPT5 is trained, does not decompose the data the way I would like. Of course, I can trivially coach GPT5 to run the numbers the way I want. But that is not the point. Unless the user is content with the secondhand approaches others have relied on, GPT5 will give you good but less-than-excellent answers.
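To make the decomposition concrete, here is a minimal Python sketch of one standard textbook version of the split, ROE = RNOA + FLEV × (RNOA − NBC), using the definitions above. Every input figure is a hypothetical placeholder, not actual Starbucks data, and the 25% tax rate is an assumption:

```python
# A minimal sketch of the operating/financing split described above.
# All inputs are hypothetical placeholders, NOT actual Starbucks data,
# and the 25% tax rate is an assumption.

TAX_RATE = 0.25

# Hypothetical inputs ($ millions)
operating_income = 4_500   # pre-tax operating income
interest_expense = 550     # gross interest expense
interest_income = 120      # interest and other financial income
long_term_debt = 15_500
cash = 3_500
common_equity = 8_000      # book value of shareholders' equity

# Net the financing items on a post-tax basis, as argued above
net_financial_expense = (interest_expense - interest_income) * (1 - TAX_RATE)
net_debt = long_term_debt - cash              # long-term debt minus cash
nopat = operating_income * (1 - TAX_RATE)     # after-tax operating income

# Accounting identity: net operating assets = common equity + net debt
net_operating_assets = common_equity + net_debt

rnoa = nopat / net_operating_assets           # return on net operating assets
nbc = net_financial_expense / net_debt        # net borrowing cost
flev = net_debt / common_equity               # financial leverage

roe = rnoa + flev * (rnoa - nbc)
print(f"RNOA = {rnoa:.1%}, NBC = {nbc:.1%}, FLEV = {flev:.2f}")
print(f"Decomposed ROE = {roe:.1%}")

# Sanity check against ROE computed directly from the bottom line
print(f"Direct ROE = {(nopat - net_financial_expense) / common_equity:.1%}")
```

The last line checks the decomposition against ROE computed directly from net income to common equity, which the accounting identity guarantees will match.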

To give you another example, I asked GPT5 to find me the companies with the largest unfunded health care plans. It came back with a report dated December 2005, because no one else on the web appears to have compiled more current data. To illustrate the difficulty with extrapolation, I asked GPT5 why the ROIC (return on invested capital) that Home Depot uses in its 10-K differs from the version of ROIC used to compensate the CEO. It came back with some gobbledygook, presumably because no one had asked a similar question before.

Of course, GPT5 is fantastic at retrieving relevant data from the web, if it is available and you know what to ask. In another application, I was trying to automate a comparison of how Exxon and Chevron are perceived by their employees and what their regulatory rap sheets with various law enforcement agencies look like. With directed prompts, GPT5 got me this data in five minutes. Manual execution of this task would have taken me at least two hours.

2. Comparability of financial data on the web will become even more important

Functional fixation, the technical term for investors consuming information at face value without deeper analysis or understanding, will arguably increase as AI becomes more mainstream. For instance, if you want to compare same-store sales growth for Home Depot and Lowe’s, you can get that information in an instant with GPT5.

Except the superficial user perhaps has not accounted for the fact that Lowe’s reports numbers on a calendar-year basis whereas Home Depot closes its books on January 31. And that is before considering that there is no regulatory guidance on how same-store sales are calculated, so the two companies might have different assumptions baked into that number.
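As a rough illustration of that mismatch, here is a short Python sketch of how two quarters carrying the same label can cover different calendar windows. The dates are hypothetical stand-ins, not the companies’ actual reporting calendars:

```python
# A minimal sketch of the fiscal-calendar mismatch described above.
# The dates below are hypothetical illustrations, not the companies'
# actual reporting calendars.

from datetime import date, timedelta

def quarter_window(fiscal_quarter_end: date) -> tuple[date, date]:
    """Approximate the ~13-week calendar window a fiscal quarter covers."""
    start = fiscal_quarter_end - timedelta(weeks=13) + timedelta(days=1)
    return start, fiscal_quarter_end

# Retailer A closes its books near January 31; Retailer B reports
# on a calendar-year basis. Both call the period below "Q4".
quarters = {
    "Retailer A Q4": date(2024, 1, 28),
    "Retailer B Q4": date(2023, 12, 31),
}

for label, quarter_end in quarters.items():
    start, end = quarter_window(quarter_end)
    print(f"{label} covers {start} through {end}")

# Both periods carry the same label, yet roughly a month of the two
# windows does not overlap, so a naive side-by-side comparison of
# "Q4 same-store sales" mixes different calendar periods.
```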

3. Domain knowledge will continue to be important

The hard grind associated with internships and apprenticeships in the first few years of a career enables a manager to develop “sense making” abilities in the particular context of a company and its industry. That is, how do the disparate pieces fit together? Is the stated data accurate, or is something missing? And then there is the “Spidey Sense” that goes off when something looks amiss.

A somewhat trivial case in point: I was trying to model the wage bill for Starbucks. GPT5 does an excellent job of pulling average wage rates by function (barista, supervisor, etc.). The figures come to around $18/hour. Tracing the source becomes important here. Why? GPT5 is pulling employees’ reports of their pay. I am more interested in the employer’s cost, the full wage bill. One has to add at least 30% to the $18/hour to cover benefits paid by the employer.

Since I have dabbled with sustainability reports in the past, I always skim through these reports for every company I cover. It turns out there is an innocuous entry there essentially saying that the cost of compensating an average US worker is $30/hour. If I had not dabbled in adjacent areas, or known that employer costs differ from employee pay, I would have underestimated wage costs by 40%.
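The arithmetic is simple enough to sketch in a few lines of Python, using the figures quoted above:

```python
# A rough version of the wage arithmetic above. The $18/hour reported
# wage, the 30% benefits load, and the $30/hour disclosed employer cost
# are the figures quoted in the text.

reported_wage = 18.00    # $/hour, as reported by employees
benefits_load = 0.30     # minimum uplift for employer-paid benefits
disclosed_cost = 30.00   # $/hour, employer's all-in cost per disclosure

adjusted_estimate = reported_wage * (1 + benefits_load)   # $23.40/hour
understatement = 1 - reported_wage / disclosed_cost       # 0.40

print(f"Benefits-adjusted estimate: ${adjusted_estimate:.2f}/hour")
print(f"Raw reported wage understates the true cost by {understatement:.0%}")
```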

In general, I worry that future generations will find it difficult to develop such a “Spidey Sense,” especially if entry-level investment banking and consulting jobs disappear (as seems likely with AI tools). These entry-level managers are, by and large, glorified data gatherers, PowerPoint jockeys, or folks who work DCF (discounted cash flow) spreadsheets. ChatGPT5 does 90% or more of what these entry-level folks do.

4. The IP of the future is the prompts or the questions we ask the machine

This somewhat obvious point helps me illustrate the analogy to a PhD program. While training PhD students, we focus on developing the skill of asking the right questions. The answers will eventually follow. AI presents almost a case of business “Jeopardy”: you have the answers, but are you asking the right questions? And do you have enough “sense making” skills to know what to ask next?

What is the basis for the inference, or the answer, proposed by AI? When should I take it at face value and when should I dig deeper? Is the answer really backed by evidence, or is it the pie-in-the-sky estimate of someone on the web? How did the original source acquire the information it did?

These are questions doctoral students grapple with all the time as they read hundreds of research papers claiming to have found numerous treatment effects (changes in an outcome supposedly caused by an intervention). Any seasoned manager will tell you that producing a true treatment effect, even when you control the people, materials and resources, is hard and takes skill.

5. Incremental contribution

Almost everything that is scientifically known about a topic is online and accessible via AI. Of course, many small and medium businesses will do well to apply older insights profitably for a long time. But the premium on coming up with something new is now even higher. That may partially explain the turbocharged superstar economics at play in paying top AI talent millions of dollars.

We usually teach MBAs so-called “best practices.” That’s fine for now. But to keep their jobs and thrive in the future, we should train them to keep coming up with new things, to the point where that skill becomes muscle memory. You have to become a perpetual hypothesis-testing machine. True life-long learning will become the norm.

In essence, are we training our MBAs the way we train our PhDs? It may be time to do just that.
