The fault in our stars - the problem with Mutual Fund star ratings and why you shouldn't use them

There are several popular websites like Value Research Online and Moneycontrol that provide historical and comparative data on Mutual Funds.

One aspect that has become especially popular is the star ratings of Mutual Funds on these websites.

The ratings on VRO are their own, while the ones on Moneycontrol are provided by Crisil.

Many people assume that if they simply go to one of these websites and pick 5-star rated funds they will do great, and that any fund without a 5-star rating is a bad investment.

I really wish that were true.

Because it would save us a lot of headache.

We could just provide a bunch of 5-star rated funds (probably the subset that is rated 5-star by both VRO and Moneycontrol) and be done with it.

We wouldn't have to bother coming up with our own Mutual Fund selection strategy.

We would not have to explain why some of our recommended funds don't have 5-star ratings and risk people not trusting us.

Given the amount of mean-reversion that exists, it is not trivial to come up with a Mutual Fund selection strategy that beats an average selection of funds over the long term.

So why do we go through all this trouble of doing things differently and putting our reputation at stake instead of just riding on the coat-tails of VRO and Moneycontrol?

Because star ratings are not a good predictor of future performance.

Nor are they meant to be.

If you don't believe me, you can hear it from the horse's mouth:

"The rating does not reflect Value Research's opinion of the future potential of any fund."


This is exactly the opposite of how most people treat the star ratings - as some sort of approval or recommendation from VRO experts.

I am taking VRO as an example but it is the same with all such ratings.

In fact Morningstar, another fund rating agency, found in an analysis of its own ratings of Equity Funds that in most cases 4-star rated funds outperformed 5-star rated funds over 3- and 5-year horizons.

One explanation could be that 5-star rated funds get to the top by generating a lot of out-performance, which then attracts a lot of inflows. Slowly, mean-reversion kicks in, and the bigger size of the fund makes matters worse. Subsequent returns can therefore be disappointing compared to, say, a 3- or 4-star fund still on its way to becoming a 5-star fund. And the cycle repeats.
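To see how mean-reversion alone can produce this pattern, here is a toy simulation (entirely synthetic numbers, not real fund data, and the skill/luck parameters are assumptions for illustration): funds have a small persistent skill component and a large luck component each period. The top decile by one period's returns looks stellar, yet its next-period average lands back near the pack.

```python
import random

random.seed(0)

# Toy model (illustrative assumption, not real fund data):
# each fund's return = small persistent skill + large one-period luck.
N_FUNDS = 1000
SKILL_SD = 0.01   # assumed: little persistent skill across periods
NOISE_SD = 0.10   # assumed: lots of period-to-period luck

skills = [random.gauss(0, SKILL_SD) for _ in range(N_FUNDS)]
period1 = [s + random.gauss(0, NOISE_SD) for s in skills]
period2 = [s + random.gauss(0, NOISE_SD) for s in skills]

# Treat the top 10% by period-1 return as the "5-star" funds
ranked = sorted(range(N_FUNDS), key=lambda i: period1[i], reverse=True)
top = ranked[:N_FUNDS // 10]

avg_top_p1 = sum(period1[i] for i in top) / len(top)
avg_top_p2 = sum(period2[i] for i in top) / len(top)
avg_all_p2 = sum(period2) / N_FUNDS

print(f"Top decile, period 1: {avg_top_p1:+.1%}")
print(f"Same funds, period 2: {avg_top_p2:+.1%}")
print(f"All funds,  period 2: {avg_all_p2:+.1%}")
```

Because period-1 rankings are dominated by luck, the "5-star" group's period-2 average collapses toward the overall average, even before accounting for the size-related drag mentioned above.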

Whatever the explanation, if these ratings were predictive enough, anyone could become a wildly successful wealth manager by just following these publicly available ratings, which we know isn't the case.

The fault is not in our stars, it is in us.

Given how explicitly rating websites like VRO admit that their ratings are not an indicator of the future potential of a fund, it would be fair to say that the fault is not in their stars but in us.

According to Nobel Prize-winning behavioural economist Daniel Kahneman, this is our substitution bias at work - "When faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution".

A difficult question - "Will this Mutual Fund give me good returns in future?" - gets substituted, almost automatically, by a much simpler one - "Is this a 5-star rated fund?" - without our ever demanding any proof that the two can indeed be substituted.

So what are we supposed to do?

Instead of just using the most easily available metrics (star ratings, past returns) that we have access to, we need to dig deeper.

In other words, we need to use a method of selecting Mutual Funds that is explicitly tested for its predictive ability or to put it simply, a method that picks future winners rather than just shining the light on past ones.

Hence, mere sorting of funds on past returns or star ratings will not do. Neither the returns nor the star ratings persist year after year for any fund.

This is why we had to devise our own method of analysing and selecting the best funds that gives us and our investors a reasonable chance of outperforming.

Also, it's not a one-time thing; we keep researching ways to improve our selection methodology lest it lose its edge.

Of course, even a tested method won't always be correct (there is too much uncertainty in investing for this) but it should work more often than not, provided we stick to it through its ups and downs. :)