In a recent article from the Business Valuation Update, we covered a court case that was a good example of a cost of capital analysis gone awry. One problem the financial expert faced was that she applied professional judgment to one of the inputs, but the data she used did not support her opinion. She thus found herself in the uncomfortable position of trying to explain why she chose not to go where the data were leading her.
About the case
When estimating the cost of capital, an appraiser used data going back to 1926 that suggested a certain size premium. The appraiser did not feel that the size premium the data pointed to was appropriate for her subject company, so she did not use it. Instead, she came up with her own size premium, which was much less than what the data suggested.
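To see what is at stake, consider a common build-up approach, in which the size premium is one additive component of the cost of equity. The sketch below uses entirely hypothetical inputs (the case opinion does not disclose the actual figures) to show how much the choice of size premium can move the final estimate:

```python
# Build-up method sketch (all inputs are hypothetical, for illustration only):
#   cost of equity = risk-free rate + equity risk premium
#                    + size premium + company-specific risk premium

def build_up_cost_of_equity(risk_free, equity_risk_premium,
                            size_premium, company_specific=0.0):
    """Sum the build-up components into a cost of equity (as a decimal)."""
    return risk_free + equity_risk_premium + size_premium + company_specific

# Hypothetical comparison: the larger premium implied by the historical data
# versus a smaller premium substituted by judgment.
data_implied = build_up_cost_of_equity(0.03, 0.055, 0.05)  # 0.135, i.e., 13.5%
substituted  = build_up_cost_of_equity(0.03, 0.055, 0.01)  # 0.095, i.e., 9.5%
```

A four-point swing in the size premium translates directly into a four-point swing in the discount rate, which is why a deviation from the data demands a well-supported explanation.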
In court, she was challenged about why she deviated from what the data indicated, but she could not adequately explain it. Because of this (and other reasons), the court deemed the expert’s opinion inadmissible. The case is Rover Pipeline LLC v. 10.55 Acres, 2018 U.S. Dist. LEXIS 157188 (Sept. 14, 2018), and the court’s full opinion is available at BVLaw.
This case is disturbing because it illustrates how experts can be backed into a corner and then have to fight their way out of it. That is, a methodology, formula, or computer algorithm gives you output that you may not agree with, and you find yourself on the defensive for not using it. If you don’t have the data to back up your conclusion, it won’t be trusted. In this case, the size premium data the expert had available to her did not support her opinion. To make matters worse, her explanation as to why she chose a smaller size premium was not adequate.
Most appraisers apply a size effect when estimating the cost of capital. At a recent business valuation conference, a show of hands revealed that everyone in the audience uses a size effect, so it seems to be a settled matter with practitioners. But it’s not a settled matter if you talk with academics. A review of academic papers (many of them recently published) reveals that the size effect is conditional and more complex than the simple notion that smaller firms have higher returns than larger ones. The papers also show a substantial disconnect in thinking about the size effect between practitioners and academics. Most empirical studies find that the size effect has diminished or disappeared since it was first documented in academic research (Banz, 1981). What we have now is almost 40 years of research without a lot of conclusions. Therefore, practitioners need to give the size effect issue more thought and not just plug in a size premium by rote, but also consider other factors, such as firm quality.
If you happen to believe that the size effect has disappeared, but you use a methodology developed by those who believe it still exists, their opinion will be embedded into that methodology or formula. When that is the case, be mindful that the data may not back up your opinion.
In theory, risk premiums in cost of capital models are forward-looking. They represent components of what investors expect to receive as a return on investment. An expert who uses historical returns to estimate those expected returns must choose the slice of history that he or she believes best represents investor expectations of the future, and that choice begins with picking a starting year for the historical return data.
If you’re a user of BVR’s Cost of Capital Professional online platform, you already know you have control over the period of historical return data for the analysis. You can choose the time horizon (i.e., the specific historical years) of data that is appropriate for your case. Some analysts might start with the earliest year of data available, presumably believing that the future will resemble the average return of the roughly 90 years since the 1920s. Other analysts might start with 1963, the first full year of Compustat’s database. Still others might start with 1982, the first year after the Banz research. Over that more recent time frame, the size effect looks very different from what it does if you go all the way back to the 1920s, and other empirical research supports this. Of course, the practitioner can choose another starting year.
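The sensitivity to the starting year can be made concrete with a minimal sketch. The annual small-minus-large return spreads below are hypothetical, not real market data; the point is only that averaging from 1926 versus averaging from 1982 can produce very different size premiums:

```python
# Illustrative sketch with hypothetical return spreads (decimals), showing
# how the chosen starting year drives the estimated size premium.

def average_premium(spreads_by_year, start_year):
    """Average the small-minus-large return spread from start_year onward."""
    window = [s for year, s in spreads_by_year.items() if year >= start_year]
    return sum(window) / len(window)

# Hypothetical annual small-stock-minus-large-stock return spreads.
spreads = {
    1926: 0.05, 1950: 0.06, 1963: 0.04, 1975: 0.05,
    1982: 0.01, 1995: 0.00, 2005: -0.01, 2015: 0.00,
}

full_history = average_premium(spreads, 1926)  # 0.025 with these inputs
post_banz = average_premium(spreads, 1982)     # 0.0 with these inputs
```

With these made-up numbers, the full-history average implies a 2.5% size premium while the post-1981 window implies essentially none, mirroring the empirical finding that the size effect has diminished since it was first documented.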
No matter what set of data you use, you need to be prepared to explain why that specific data belongs in your analysis. Part of your explanation can be to point to the recent academic research that has been done. And, of course, the actual data set that you use will speak volumes.
Developing the cost of capital for smaller, closely held enterprises is especially challenging given the lack of empirical data that correlate directly to the size of the valuation target. Distinguishing between risk related to the size of the entity and unsystematic risk related to the specific company and the industry in which it operates requires professional judgment based on sufficient qualitative and quantitative analysis.