
We investigate whether large language models (LLMs) can successfully perform financial statement analysis in a way similar to a professional human analyst. We provide standardized and anonymized financial statements to GPT-4 and instruct the model to analyze them to determine the direction of firms' future earnings. Even without narrative or industry-specific information, the LLM outperforms financial analysts in its ability to predict earnings changes directionally. The LLM exhibits a relative advantage over human analysts in situations where analysts tend to struggle. Furthermore, we find that the prediction accuracy of the LLM is on par with a narrowly trained state-of-the-art ML model. The LLM's predictions do not stem from its training memory. Instead, we find that the LLM generates useful narrative insights about a company's future performance. Lastly, our trading strategies based on GPT's predictions yield higher Sharpe ratios and alphas than strategies based on other models. Our results suggest that LLMs may take a central role in analysis and decision-making.
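For anyone curious what the setup described in the abstract might look like in practice, here is a minimal sketch of feeding a standardized, anonymized statement to GPT-4 through the OpenAI Python client and asking for a directional earnings call. The prompt wording, the sample `statement`, and the model name are illustrative assumptions, not the authors' actual protocol.

```python
# Minimal sketch: ask GPT-4 for a directional earnings prediction from an
# anonymized, standardized financial statement. The prompt and the sample
# data below are illustrative assumptions, not the paper's actual inputs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical standardized statement with company name and dates removed,
# mimicking the "standardized and anonymized" input format.
statement = """
Income statement (t-1 -> t, scaled by total assets):
  Revenue:            0.82 -> 0.91
  Cost of goods sold: 0.51 -> 0.55
  Operating income:   0.12 -> 0.15
  Net income:         0.07 -> 0.09
Balance sheet (t):
  Total assets: 1.00, Total liabilities: 0.58, Equity: 0.42
"""

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system",
         "content": "You are a financial analyst. Analyze the statements "
                    "and state whether earnings will INCREASE or DECREASE "
                    "next period, with a brief rationale."},
        {"role": "user", "content": statement},
    ],
    temperature=0,  # keep output stable for a classification-style task
)

print(response.choices[0].message.content)
```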
Researchers tested GPT-4's ability to analyze financial statements and predict future earnings. Key findings:
  1. GPT-4 outperformed human analysts in predicting earnings changes.
  2. GPT-4 excelled in situations where human analysts struggled.
  3. Its accuracy matched that of a specialized machine learning model.
  4. GPT-4 generated useful narrative insights, rather than relying on training data.
  5. Trading strategies based on GPT-4's predictions yielded higher Sharpe ratios and alphas than strategies based on other models (a rough Sharpe-ratio sketch follows this list).
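As a rough illustration of how such predictions get scored as a trading strategy (point 5 above), the sketch below computes an annualized Sharpe ratio for a hypothetical long-short portfolio built from the model's directional calls. The monthly return series and the 2% risk-free rate are made-up assumptions, not figures from the paper.

```python
# Rough sketch: annualized Sharpe ratio of a hypothetical long-short strategy
# that goes long firms GPT-4 flags as "earnings increase" and short the rest.
# The return series and risk-free rate below are assumptions for illustration.
import numpy as np

monthly_returns = np.array([0.021, -0.004, 0.015, 0.008, -0.011,
                            0.019, 0.006, 0.012, -0.002, 0.017,
                            0.009, 0.004])      # 12 months of strategy returns
risk_free_monthly = 0.02 / 12                   # assumed 2% annual risk-free rate

excess = monthly_returns - risk_free_monthly
sharpe_annualized = np.sqrt(12) * excess.mean() / excess.std(ddof=1)
print(f"Annualized Sharpe ratio: {sharpe_annualized:.2f}")
```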
Coming for your jobs, Wall Streeters!