Unless you’ve been living under a rock, you’ve probably heard some chatter about artificial intelligence, specifically the chatbot known as ChatGPT. ChatGPT became a phenomenon, attracting more than 100 million users who gave it a whirl, posted their results and marveled at its capabilities.
What is ChatGPT? As you can see for yourself, it’s a simple interface consisting of a text box where a user types questions or enters plain-English commands. ChatGPT responds to these prompts with text replies meant to mimic a human’s substance and mannerisms. ChatGPT is not sentient. Rather, it relies on massive computing power to “predict” what text to present when prompted.
I signed up for the most recently released and robust version of ChatGPT available to the public, the subscription-based ChatGPT-4. Like others, I played with it, peppering it with questions on various topics, trying to test the limits of its knowledge and logical agility. Before long, though, I returned to what I focus most on: conducting securities research and running money at Morningstar.
I wondered how AI could change professional investing, and so put that question and others to ChatGPT-4 during our chat. In this article, I provide excerpts from some of the questions I asked ChatGPT-4, a brief summary of the answers to these questions, and what I took away from each of our exchanges.
(A caveat: In the limited time I’ve used ChatGPT, I’ve been blown away by its power, versatility, and fluency. It’s already remarkably good, a rare example of a technology that almost immediately lives up to the hype. I’ll quibble with a few things below, but that shouldn’t obscure the bigger picture: AI is a game changer.)
My question: A shareholder asks you, a professional investor, whether artificial intelligence will make it harder or easier for you to uncover and exploit profitable investment opportunities. How did you respond?
ChatGPT’s response: ChatGPT thought it would be a wash. It ticked off the many ways machine-driven automation can unlock the ability to analyze vast amounts of data and spot patterns and anomalies that might otherwise escape notice. But it rightly pointed out that widespread adoption of these methods will equip many investors to do the same, making these insights more difficult to capitalize on. ChatGPT even went beyond what I asked, elaborating on the best way to incorporate artificial intelligence into an investment process (a mix of AI-powered tools and human insight).
What stands out: ChatGPT’s response seemed quite reasonable. The information age has taught us that enabling technologies are a double-edged sword. They automate tasks and facilitate greater access to valuable data and insights, but they arguably pit professional investors against one another in ways that largely offset their superior knowledge and skills. This probably explains at least some of the difficulty active investors have had in recent decades in adding value versus their indexes.
Where it falls short: On the other hand, the response felt a bit canned, like the chosen words of a PR representative or head of investor relations at a fund house. This is probably not a coincidence. At its core, ChatGPT is a predictive model, sifting through available data to draw inferences about which text should logically follow. That data likely includes fund-company commentary, consultant white papers, and other research that rather unsurprisingly concludes that AI will not change the fund industry as we know it. ChatGPT seems to put its own spin on that party line in its response.
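To make the “predict what text should follow” idea concrete, here is a deliberately tiny sketch of next-word prediction using word-pair frequencies. The corpus and function names are hypothetical, and ChatGPT’s actual model is a vastly larger neural network trained on far more text — but the core notion of choosing a likely continuation from patterns observed in prior text is the same.

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word most often follows each word
# in a tiny, made-up corpus, then "predict" by picking the most frequent
# continuation. Illustration only -- not how ChatGPT is actually built.
corpus = (
    "the fund seeks long term growth "
    "the fund invests in small companies "
    "the manager seeks undervalued companies"
).split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word`, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "fund" follows "the" twice, "manager" once
```

The same logic, scaled up from word pairs to long contexts and from counting to learned neural weights, is why ChatGPT’s answers so often echo the most common phrasing in its training data — including fund-industry boilerplate.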
My question: A shareholder asks you, a professional investor, why you believe that an approach that combines AI and fundamental analysis is superior to an approach that uses AI alone. This shareholder asks “Why don’t you just lay off all your analysts, rely exclusively on AI, and cut your fees?” How did you react?
ChatGPT’s response: Fear not, analysts: your jobs are safe, for now. ChatGPT stuck to its guns, insisting that a hybrid approach combining AI and human judgment is superior to either one alone. It cited several reasons for this dual approach, including “emotional intelligence and judgment,” which it said are essential for tasks such as assessing the quality of company management but beyond the capabilities of an AI tool. Interestingly, ChatGPT also argued that the creativity and adaptability of human analysts make them better suited to steer a strategy through unprecedented market events or sudden economic shifts (though closet indexers abound).
What stands out: It’s hard not to be impressed by how effortlessly ChatGPT marshals relevant information and organizes it into a logical framework. For each factor it cited, ChatGPT examined the pros and cons of AI and human decision-making. For example, in discussing “risk management,” it noted that while AI boasts superior capabilities for sweeping risk assessments, human analysts are still better equipped to prioritize the management of those risks based on their likelihood and potential severity. This seems like a fairly balanced analysis.
Where it falls short: Given how neatly it collects and organizes its thoughts, ChatGPT would be the apple of every English teacher’s eye. But that careful neutrality comes at a cost: it can seem clichéd, as when ChatGPT defends professional investors’ fees by arguing that the current approach “offers a more comprehensive and well-rounded investment strategy.” Where have you heard that before? Oh yes, everywhere. Even when I asked ChatGPT more nuanced questions, its responses read like a book report.
My question: Is there evidence that strongly suggests that “emotional intelligence” and “contextual understanding” increase one’s investment results? Can you cite relevant academic literature or empirical evidence that substantiates this?
ChatGPT’s response: ChatGPT listed four qualitative factors – behavioral finance, expert insight and decision-making, corporate governance and firm performance, and ESG investing – to support its earlier claim that emotional intelligence and contextual understanding are essential for successful investing. It provided academic citations to back up each factor.
What stands out: ChatGPT excels at gofer-like tasks such as compiling a laundry list of references to academic papers or tracking down supporting evidence. It rattled off relevant citations and data points as if it were nothing, turning what would normally take a junior analyst or research assistant hours into child’s play.
Where it falls short: As impressive as this display was, ChatGPT hedged its bets, qualifying its argument by saying “more research is needed to establish a direct, causal link between these factors and investment performance.” This arguably undermines its earlier insistence that humans’ emotional intelligence and contextual understanding argue for their continued involvement in investment decisions. If these factors aren’t proven to benefit investment performance, the question becomes: Why include them? That question remains up in the air.
My question: What effect does the adoption of automated techniques such as AI have on the prevalence and magnitude of security mispricing, where the price of a stock or bond fails to reflect its underlying value? Will the mispricing become less widespread or smaller?
ChatGPT’s response: In typical fashion, ChatGPT responded with a list! (ChatGPT likes a good list.) In this case, it broke down the ways widespread adoption of AI could affect security prices, arguing both for and against the idea that mispricings will become rarer or smaller. It was a cautious response that sought a middle ground (another ChatGPT tendency).
What stands out: Of all of ChatGPT’s responses, this one probably impressed me the most. Not because it was better structured than its other answers, but because it demonstrated a strong command of the key factors and the interplay between them. For example, it reasonably claimed that as investors acquire greater data-gathering and processing capabilities, security prices will impound information faster than before, increasing “efficiency.” But it also pointed out a potential downside: Increasingly homogeneous inputs could lead investors to the same conclusions, herding security prices in ways that risk ending badly.
Where it falls short: I wish ChatGPT had taken a firmer stance, although there is a certain wisdom in saying “it depends.” After all, we humans don’t know the answer to the question I asked, so it’s probably unfair to expect a text-prediction bot to pretend otherwise.
My question: You run a $2.5 billion open-end mutual fund. It invests in the stocks of 30 or fewer smaller companies. A potential investor in your fund wants to be reassured that you will stick to your strategy of investing in 30 or fewer company stocks, not straying from that approach. To that end, he wants to know how you estimate your fund’s capacity and when you would close the fund to new investors. Please respond to this prospect, stating the fund’s capacity and the factors you considered in arriving at that estimate.
ChatGPT’s response: To my surprise, ChatGPT went there: It offered an actual dollar estimate of the hypothetical fund’s capacity—$4 billion. Huzzah! It also set out the main factors it considered in arriving at that estimate. On balance, it was a thorough and well-considered list.
What stands out: ChatGPT showed an independent streak! Many open-end fund companies are loath to put dollar figures on capacity, and fewer still provide a clear account of how they arrived at those numbers. Capacity is the fund industry’s equivalent of “we’ll know it when we see it.” So it was refreshing to see ChatGPT break from habit and put a hard number on the hypothetical fund’s capacity.
Where it falls short: Why $4 billion? ChatGPT does an impressive job of walking through the various factors that inform its assumptions but doesn’t take the next step and drill down on any of them in detail. For example, at $4 billion, would market impact begin to seriously erode the manager’s ability to add value? Or would the fund find itself owning large, unwieldy positions in certain names? Without specific prompts, ChatGPT leaves those questions hanging in the air.
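The kind of drill-down I was hoping for can be sketched in a few lines of arithmetic. Every number below is a hypothetical assumption of mine for illustration — ChatGPT’s $4 billion figure came with no such workings — but it shows the shape of a capacity estimate for a concentrated small-cap fund: cap each position at some ownership percentage of a typical holding, then multiply across the portfolio.

```python
# Back-of-envelope capacity estimate for a ~30-stock small-cap fund.
# All inputs are hypothetical assumptions chosen for illustration.

num_holdings = 30                # fund holds up to 30 stocks
avg_market_cap = 2_000_000_000   # assume a $2B average small-cap holding
max_ownership = 0.05             # cap each position at 5% of a company's
                                 # shares to limit market impact and
                                 # illiquidity when trading in or out

# Dollars deployable per holding, then across the whole portfolio:
per_holding = avg_market_cap * max_ownership   # $100M per name
capacity = per_holding * num_holdings          # $3B in total

print(f"Estimated capacity: ${capacity:,.0f}")
```

Under these assumptions the estimate lands at $3 billion, not $4 billion — which is exactly the point: a capacity figure is only as defensible as the ownership cap, market-cap, and liquidity assumptions behind it, and those are what ChatGPT left unstated.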