Harrison-Hunte Update

Ok, that works. It's a perfectly fine criterion to say that you'd assign star ratings based on the schools recruiting a given kid.

But then what do you do with the kids who are highly recruited by Alabama, UGA, OSU, Clemson, etc., and then turn out not to be elite college players and aren't drafted? Take a kid like Ermon Lane as an example. He was recruited like a 5 star. Alabama, UGA, Clemson, etc., all wanted him. Yet, after the fact, do you think he was overrated? Using one of your metrics, i.e., his ultimate NFL draft selection, he was overrated. Yet using your other metric, i.e., the quality of the teams recruiting a given kid, Lane was properly ranked. So which is it, D$?

Do you see the inherent flaw in the argument you've been making, whereby you judge rankings to have been wrong in hindsight depending on draft position? Even your very own criterion, which is based on the programs that recruit a given kid, isn't a perfect predictor of future college or draft success. So what do you then do with that?
Here's the challenge. If the rating services are systematically wrong in a predictive way on *some* kids, then there should be an algorithm that could tell you which kids they are likely underrating or overrating. An arbitrage algorithm, in effect. Maybe it's kids with good measurables from small schools. Or two-sport stars with insufficient football experience. Or maybe kids who have multiple teammates being recruited by major programs are overrated. Whatever it is, you can potentially find a basis to identify rating errors. And if you did, the rating services would incorporate that into their process over time and the ratings would improve. There would be fewer predictable errors, but still outcome variances, because uncertainty remains.
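
To make that concrete, here's a minimal sketch of the kind of check such an arbitrage hunt could start from. Everything in it is hypothetical (the records, the small-school flag, the rates); the point is only the method: within each star tier, see whether some identifiable subgroup gets drafted at a different rate than its tier peers.

```python
from collections import defaultdict

# Hypothetical recruit records: (stars, small_school, was_drafted).
# All rows are made up for illustration; imagine thousands of them
# pulled from historical classes.
recruits = [
    (4, True, True), (4, True, True), (4, False, True), (4, False, False),
    (3, True, True), (3, True, False), (3, False, False), (3, False, False),
]

# Tally draft rates within each (star tier, subgroup) cell.
tally = defaultdict(lambda: [0, 0])  # cell -> [drafted, total]
for stars, small_school, drafted in recruits:
    tally[(stars, small_school)][0] += drafted
    tally[(stars, small_school)][1] += 1

# If small-school kids in a tier get drafted more often than their
# same-tier peers, the services are predictably underrating them --
# an arbitrage signal they would eventually fold back into the ratings.
for (stars, small_school), (drafted, total) in sorted(tally.items()):
    label = "small school" if small_school else "big school"
    print(f"{stars}-star / {label}: {drafted}/{total} drafted ({drafted/total:.0%})")
```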

But if there is no clear way to identify flaws in the service rankings, then there is nothing interesting to discuss here.
 

There's a lot of truth to that. Personally, I see the distribution of elite college players/NFL draft picks generally occurring along the lines of the star ratings; i.e., more 5 stars achieve elite status and NFL draft success than 4 stars, more 4 stars than 3 stars, etc. So I don't know that the star ratings are grossly wrong. They are what they are. No, we wouldn't want to analyze the epidemiology of cancer treatments using such an imperfect process, but there's a whole lot less at stake in evaluating HS football recruits.

And in any event, I don't know that anyone's ever developed the sort of arbitrage algorithm that attempts to quantify all this and then measure deviations from the expected outcomes implied by the probabilities of the various star rankings. Arguably it could be improved greatly, but I don't think anyone cares enough to expend the time or money to do it. So we have what we have, which isn't perfect but seems to be directionally correct. But it's a whole lot more imperfect than perfect.
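
For what a baseline might even look like, here's a toy binomial sketch. The per-tier draft probabilities and class sizes are invented; the exercise is just computing the expected draft count and its standard deviation per tier, which is what observed results would be compared against.

```python
import math

# Invented per-tier draft probabilities and recruits per class.
tiers = {
    "5-star": (0.70, 35),
    "4-star": (0.30, 350),
    "3-star": (0.08, 2000),
}

# Under a binomial model: expected draftees = n*p, SD = sqrt(n*p*(1-p)).
# Observed counts well outside ~2 SD would suggest a tier's implied
# probability is miscalibrated.
for tier, (p, n) in tiers.items():
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))
    print(f"{tier}: expect {mean:.0f} drafted, SD {sd:.1f} "
          f"(range {mean - 2*sd:.0f} to {mean + 2*sd:.0f})")
```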
 
DMoney said:
You mean the guy who killed Notre Dame, was good enough to leave early, got sick, and still was deemed a Top 150 prospect in the country?

If we sign a pair of three-stars like McIntosh every year, we will never have a problem at DT.

Do we still get steamrolled against Pitt?
 
I am not used to seeing you make such loose and controversial statements. CFB success impacts millions of fans. Some specific branch of cancer research may impact a small number of people you have never met, years from now. How to trade off those matters is not obvious, at least unless central planners figure out the answer. Otherwise we’d shut sports down and deploy everyone into cancer research.

That said, two comments. Noting that the star rankings are directionally accurate doesn't tell us that much. You really need a baseline expectation to judge them against. How much better should we expect the top 1% to perform than the top 10% or the top 50%? How much better or worse are the services relative to each other at predicting? Any differences? Any obvious biases? Regional biases? Program biases? Position biases?

Also, how do you relatively measure a kid by position? QB is more important than LB, but an average QB isn't distinguished. To the contrary, an average LB may help special teams a lot; an average QB only helps when your roster is full of Rosier and Perry. Rating kids as one group may be fun, but it's likely also wrong. Kickers rarely get highly rated, but the NFL shows you what a great kicker is worth. I'd suggest viewing the ratings only by position.

And then what do you do with position switches? McIntosh was the 40th highest rated SDE on Rivals his senior year. Maybe they were RIGHT? He ended up being better as a DT. How do we criticize them for rating him differently at DE? And when you compare offers, how do you handle staff differences on a kid's position? UM wanted a kid as a DB. He wanted to be a RB. No UM offer. Kid goes to SDSU. Rating services downgrade. That kid ended up in the Pro Football Hall of Fame as a RB (Marshall Faulk).

There is at least some money available for doing this, actually, and it probably isn't hard. Major programs have sizeable recruiting budgets. Alabama may well do a version of this in-house.
 
What was I thinking! Spending too much time in Cali these days I suppose. I stand corrected!!

No disagreement on any of that. As I stated, the analysis and methodology can be improved.

Ultimately this will occur. Just a matter of when. ****, Alabama may well already be developing something.
 
 
That example doesn't fly. Nobody ranks lottery tickets.

Let's expand on this for a minute. If we were to rank various types of lottery tickets, the most practical way I can think of to rank them would be based on expected rate of return. Let's assume a hypothetical Florida lotto ticket has an expected rate of return of 10% (e.g., $1.10 back per $1 ticket purchased), a California ticket has an expected rate of return of 0% (e.g., $1.00 per $1 ticket), and a Nevada ticket has an expected rate of return of -20% (e.g., $0.80 per $1 ticket). By expected rate of return, the rankings would be: 1. Florida, 2. California, 3. Nevada.

Let's say I purchase one of each ticket. I win $100 on the Nevada ticket, while getting $0 on the other two. That result, while absolutely within the realm of possibility, does not mean the initial ranking was off -- after all, the expected return was better on the Florida ticket. It just means that in this very limited sample size of one ticket each, the lower-ranked lottery netted me the best return.
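
A quick numerical version of the same point, with made-up payout structures chosen to match those expected returns: the Nevada ticket loses on average, yet any single draw can still land on its rare win.

```python
import random

random.seed(7)  # make the single draw reproducible

# Made-up payout structures matching the stated expected returns
# (each ticket costs $1): EV = payout * probability.
tickets = {
    "Florida":    (110.0, 0.010),  # EV $1.10 -> +10%
    "California": (100.0, 0.010),  # EV $1.00 ->   0%
    "Nevada":     (100.0, 0.008),  # EV $0.80 -> -20%
}

for name, (payout, p) in tickets.items():
    ev = payout * p
    won = random.random() < p
    print(f"{name}: expected return {ev - 1:+.0%} per $1; "
          f"this draw paid ${payout * won:.2f}")
```

Buy enough tickets and the expected-return ranking wins out; buy one of each and anything can happen.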
 
Lu, I was headed towards the topic you are mentioning. A huge gap in this discussion is a lack of understanding about what the rating services actually do. The _reality_ of what they do is a lot closer to compiling info on who is recruiting a kid and then rating kids based on who is recruiting them than it is a true evaluation process. If they were only using info on who is recruiting a kid, the circularity D$ is talking about would be obvious. It's there, just not entirely circular.

@PalyCane, this is for you also.

Let's say a bored quant geek bothered to build an algorithm to rank HS kids. His inputs were solely which schools are recruiting the kid (and which aren't), where he's from, what position he plays, his measurables, and the roster needs of the schools recruiting him (and of the ones that aren't but that geography suggests should be).

A little machine learning would likely come up with a better ranking than Rivals with that info. Except nowhere in that info is there an actual evaluation. And the offer data is self-reported and not confirmed. Schools are not even allowed to talk about recruits. They could be recruiting a kid as a courtesy to his coach, to help him get attention for other offers, because they want his teammate to commit, or just to confuse their rivals about who they really want. We just don't know.
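
As a rough illustration of what that would look like, here's a sketch using scikit-learn on synthetic data. Every feature and the "future production" target below are fabricated stand-ins for the inputs described above; no on-field evaluation appears anywhere in them.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 500  # synthetic recruits

# Fabricated features: power-conference offers, 40-yard dash,
# weight, and a roster-need score for the schools pursuing the kid.
X = np.column_stack([
    rng.poisson(3, n),
    rng.normal(4.7, 0.15, n),
    rng.normal(200, 25, n),
    rng.uniform(0, 1, n),
])

# Fabricated "future production" target, loosely driven by offers
# and speed plus noise.
y = 2.0 * X[:, 0] - 5.0 * (X[:, 1] - 4.7) + rng.normal(0, 1, n)

model = GradientBoostingRegressor().fit(X, y)

# Rank the class by predicted production, best first.
ranking = np.argsort(-model.predict(X))
print("Top 5 synthetic recruits:", ranking[:5])
```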

The optimal algorithm would be the best predictive measure of future outcomes, and yet it would be missing critical information that, if considered, might well lead to a different assessment of some subset of kids: how they actually perform on the field, whether they like contact or not, whether they're still developing physically or already maxed out. So you could well create an optimal general algorithm and still leave room for D$ to validly point out some kids that the algorithm is wrong on. Not because of uncertain future outcomes, but because, at the time of the estimate, the algorithm missed important inputs.

Missed this one. Completely agree with the part directed at me. And heck, even something as difficult to estimate as future physical development enters into it.

Khris Bogle is ranked as the 3rd best WDE recruit in the US. He weighs 215 pounds. His #3 ranking assumes that he grows and physically matures into an all-conference college player and future NFL draft choice. But what if he doesn't grow and physically mature? What if he ends up having John Square's metabolism? If that turns out to be the case, then Bogle won't become an elite player and draft pick. Would that mean that Bogle had been overrated in hindsight?

D$'s analysis of what star ratings are supposed to mean would say "yes," Bogle would have been overrated. But how in the world, and who in the world, can consistently and accurately project the future weight, strength, and physical development of HS kids 4-5 years down the line? No one claims to be able to do that, yet it's one of many well-acknowledged but inseparable uncertainties attached to coming up with star rankings. So no, Bogle's 4 star rating would not be wrong in hindsight. He is, in fact, right now, a kid that the top programs believe will develop into an elite college player. By D$'s metrics that makes him a 4 star. Alabama wants him.

No, if Khris Bogle doesn't develop past 225 pounds and doesn't become an elite college player and NFL draft choice, that doesn't change the fact that Alabama and others want him, and right now in Jan 2019 that interest translates into him being ranked as a 4 star player. If in fact he doesn't develop accordingly, it simply means he's part of the small but expected share of outcomes that didn't pan out because he didn't grow. There's no larger implication about the accuracy of the star system if Bogle individually doesn't perform, just like there was no larger implication about the accuracy of the star system because McIntosh was a 3 star and was drafted.
 
Here's where you go astray. Ranking a player is an individual decision that needs to be judged individually. You can't judge that kind of individual decision based on group data.

The roulette example highlights the flaw in your approach. The odds in roulette are fixed. One out of 38. A better example would be making a bet on a football game. You can make bad bets and good bets. Individual decisions with different probabilities of success. If I win 60% of my bets, that doesn't mean that every individual bet I make is sound or based on proper analysis.
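
To put a number on that, here's a tiny sketch with invented odds: a bettor can win 60% overall even though a big chunk of his individual bets were bad ones.

```python
# Hypothetical mix of even-money bets: (win probability, count).
# A 45%-to-win bet at even money is unsound regardless of the record.
good_bets = (0.70, 60)
bad_bets = (0.45, 40)

total = good_bets[1] + bad_bets[1]
expected_wins = good_bets[0] * good_bets[1] + bad_bets[0] * bad_bets[1]
print(f"Overall expected win rate: {expected_wins / total:.0%}")
# -> 60%, even though 40 of the 100 bets were losing propositions.
```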

That's the situation here. Rivals made a bad bet on McIntosh.

Missed this one too.

No, I haven't gone astray. You have, by trying to draw statistical conclusions from the deviation of one player's outcome from his expected value. And your example further makes my point, not yours.
 
Yes. Thank you. This is correct. Just because there can be outlier results doesn't mean the probabilities were wrong initially.
 
I just hot tagged in Good Brother PalyCane for the rest of this discussion. He came in hot like Robert Gibson, cleaning house.
 
Here is what I think @PalyCane keeps missing: the "expected rate of return" here isn't based on anything tangible. Rivals sets that expectation themselves, subjectively, by assigning a star rating. It's not objective. They control that.

If Rivals decides to make me a four-star, does my probability of success go up? Of course not. I'm the same bad football player.

When I criticize Rivals for making RJ McIntosh a three-star, I am questioning their assignment of expected value to this individual player. I did the same thing in 2015 without the benefit of hindsight. They got it wrong.

Again, this is an individual evaluation. Nobody is saying that four-stars v. three-stars don't matter because McIntosh outplayed a bunch of four-stars. I am saying Rivals set the wrong "expected value" on McIntosh.
 
This is the debate that never ends, yes it goes on and on my friend. Some people started debating it, not knowing what it was, and they'll continue debating it forever just because...
 
Again, this is an individual evaluation. Outliers don't come into play. Rivals set the wrong "expected value" on McIntosh.

Nobody is saying that four-stars v. three-stars don't matter because McIntosh outplayed a bunch of four-stars. I am saying that they got McIntosh wrong.

For you to know that as certainly as you think you know it, you must believe in Fate and the Book being written already. What you are missing here is variance. Here’s an illustration:

Two bets:

- Option (a): pay a dollar, get one of two equally likely outcomes: $2.00 or $0.00.
- Option (b): pay a dollar, get one of two equally likely outcomes: $1.50 or $0.60.

It's an absolute fact that option (b) is the better choice: its expected value is $1.05 against $1.00 for option (a), with far less variance. All sane people would rank it superior to option (a). It's also an absolute fact that option (a) will pay out more than option (b) literally 50% of the time. If you contend that option (a) was the superior choice because of something unknowable at the time you made the choice (how it would pay out), you'd be wrong. I could easily construct the payouts so the lower-ranked option is superior more than 50% of the time. Mean and variance are different parameters.
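
For the record, here's the arithmetic on the two gambles exactly as stated, plus the head-to-head comparison (assuming the two draws are independent):

```python
from itertools import product

# Each gamble costs $1; the two outcomes are equally likely.
a = [2.00, 0.00]
b = [1.50, 0.60]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

print(f"EV(a) = ${mean(a):.2f}, Var(a) = {variance(a):.4f}")  # $1.00, 1.0000
print(f"EV(b) = ${mean(b):.2f}, Var(b) = {variance(b):.4f}")  # $1.05, 0.2025

# Enumerate the four equally likely outcome pairs: option (a) still
# pays more than option (b) in exactly half of them.
p_a_wins = sum(x > y for x, y in product(a, b)) / 4
print(f"P(a pays more than b) = {p_a_wins:.0%}")  # 50%
```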
 