I am very critical of the ratings/rankings services. And it is not all based on the bias factor, though that is one part of it. I don't think that I've ever argued that the players targeted by Alabama or Georgia or any other school that has a GREAT scouting department are "wrong". But there are some other things at work too:
1. Too many services, not enough good internet journalist-analysts. Now that you have Rivals and 247 and On3, plus a bit of ESPN and SI in the mix too, you are seeing situations where the "independent" analysis is flawed, defective, or negligent. Even guys who cover the Elite 11 can give very divergent opinions, and that's just 20 kids to watch. So to think that, what, a few hundred writers can cover thousands and thousands of football games, camps, 7-on-7s, and every other event where these recruits showcase their abilities is just nuts. So if the ratings and rankings are not coming from independent observations...
2. Overreliance on what college coaches tell the internet journalist-analysts. Now, to the extent that the Alabama and Georgia writers are getting info from a solid bunch of coaches, you can have reliable information and ratings. But what about the writers who cover Manny Diaz at Miami? I would have no problem if, in an ideal world, these writers had multiple data points that included BOTH their own observations and the commentary of the college coaches. But that is very rare, so you get a compounding effect where the opinions of coaches outweigh everything else. And we've seen a lot of coaches give false information about who they like and don't like...and then...
3. Inaccurate assessments by certain coaches can distort things. All these national services try to be "national" and not just reserve the 5-star and 4-star ratings for kids in the Southeast. So if some of the writers cover certain colleges NOT named Alabama/Georgia, and those coaching staffs are pursuing players who are NOT as good (either because of poor evaluations or a "we can probably sign this kid" mentality), then you can get a distorted picture of "who the coaches think the best players are". All we have to do is go over to Gaytor Tears to see how THREE different coaching staffs have chased borderline 4-stars in order to raise their "class rankings".
4. The Gaytor Bumpz is out there too. And if our fanbase were bigger and more prone to giving Wiltfong ****, we would be just like those fanbases who insist, because of their past championships (and sizable subscriber numbers), that the services automatically upgrade all their 3-star recruits.
5. The services' hesitance to make big changes or late changes to the rankings is also a problem, especially with the rise of the December signing day and the shifts in the overall recruiting calendar. A few years ago, Texas used to LOCK UP nearly its entire recruiting class a year in advance. Then they would find out that some of those players had "peaked early" and that there were a lot of late-blooming kids they had passed on, but by then it was too late.
None of those factors is the "only reason", but they all play a part in the overall picture. So what you have is a relatively accurate assessment of the Top 5 or 10 schools, and then a bunch of other schools scrambling to make their classes look as good as possible by taking the highest-rated players they can. And that's where evaluations come into play and make all the difference. There are only about 450 blue-chip recruits each year and 65 P5 schools. That's roughly 7 "difference makers" per school on average, but we all know that some schools get more and some get less.
I rarely argue about 5-stars. Very few of them are "really" 3-stars.
But there's a lot of imprecision at the 3-star vs. 4-star boundary.