Miami Ranked #27

One should be able to hit 50% simply by flipping a coin on each game. Here's what I'm seeing.

SP+ is below 50% picking against the spread YTD. He's underperforming a coin flip.

Am I missing something?

View attachment 337532
You're not missing anything.

What's bizarre to me is how much SP+ gets gassed up as a predictive metric. I see 60-65% ATS figures get floated by ESPN, but actual third-party comparisons don't seem to bear that out.
 
But they're major components of what the CFP Committee uses to justify its selections.
yup.

As Canes fans we should remember the computer poll travesty in 2000, when we beat FSU head to head, had the same record, and FSU was selected to play Oklahoma in the championship game.

Both human polls had UM ranked #2 and FSU #3. However, the one computer poll had FSU #2, and by the slightest of margins it put FSU in the championship. (We were relegated to the Sugar Bowl, where we proceeded to badly beat UF's ***.)

The outrage across the sports world over UM being left out was so loud that the computer poll was eliminated and never used again. Until now, when it's been re-introduced.
 
You're not missing anything.

What's bizarre to me is how much SP+ gets gassed up as a predictive metric. I see 60-65% ATS figures get floated by ESPN, but actual third-party comparisons don't seem to bear that out.
SP+ tracks the spread very closely.

If you actually think it's crap...quit your job, borrow as much money as you possibly can, and become a full-time gambler.
 
SP+ tracks the spread very closely.

If you actually think it's crap...quit your job, borrow as much money as you possibly can, and become a full-time gambler.

His picks against the spread literally underperform flipping a coin. Anybody who has bet on his picks has lost money this year. That's the simple fact.

I don't know what you're arguing. He posts his own stats. They're losers in the aggregate. See cell B20.

He underperformed flipping a coin again this week:

[attached screenshot of this week's ATS results]
 
You're missing a basic understanding of sample sizes lol

If you're saying that 4-5 games is too small a sample size to allow the model to outperform a coin flip, I very much agree. You're making the very point below that you at first challenged as wrong.

I don't get your point. You're all over the place with your responses. You cite facts that are demonstrably wrong (his success rate picking against the spread), reject the critique that his sample size is too small early in the year, and then turn around and claim his sample size is too small.

Me on Sept 9th: This model is meaningless early in the season. The sample size is too small

You on Sept 9th: His model beats the spread [factually wrong], so his sample size must be fine.

[attached screenshot of the Sept 9th exchange]


Me on Sept 28th: He has underperformed flipping a coin.

You on Sept 28th: "You don't understand sample sizes"

:ROFLMAO:

[attached screenshot of the Sept 28th exchange]
 
I'll also say that Bill Connelly himself acknowledges that early in the season sample sizes are too small to yield accurate results. Most of us on this thread understand this, except for one poster it seems.

Here's Connelly below, sarcastically responding to a poster who criticized his early season model results after only 4 games.

[attached screenshot of Connelly's reply]


Bottom line, algorithmic models need a meaningful amount of data to be accurate. That data is scarce early in the season because teams haven't played enough games against one another.

Sure, his model uses other, off-the-field metrics like recruiting, previous years' results, etc. But using those metrics assumes that Connelly can accurately assess them, weight them properly, not miss other metrics, and so on. There's little reason to assume he has properly identified, assessed, and weighted the correct off-field metrics.

Using previous years' results was likely more relevant in pre-NIL, pre-portal times. Now teams can flip their fortunes in the span of a single year, rendering previous years' results less predictive.

The best data source is on-field results, which accrue over the course of the season. By week 12 I expect SP+ (and FPI) to cluster around the human polls; that's what usually happens with these models. But early in the season the models aren't particularly meaningful, as Connelly himself acknowledges above.
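
To put rough numbers on how noisy a small sample is, here's a minimal sketch (plain Python; the pick counts of 60, 250, and 1,000 are illustrative assumptions, not Connelly's actual pick totals) of how wide the uncertainty on an observed ATS hit rate is at different sample sizes:

```python
import math

def ats_margin_of_error(games: int, hit_rate: float = 0.5) -> float:
    """Approximate 95% margin of error on an observed ATS hit rate
    after `games` picks, using the normal approximation to the binomial."""
    se = math.sqrt(hit_rate * (1.0 - hit_rate) / games)
    return 1.96 * se

# Illustrative sample sizes: roughly a month of full-slate picks,
# a couple of months, and a full season-plus.
for n in (60, 250, 1000):
    print(f"{n:>5} picks: hit rate only pinned down to about +/- {ats_margin_of_error(n):.1%}")
```

With only a month or so of picks, even a genuinely good model can land a few games either side of .500 purely by chance.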
 
yup.

As Canes fans we should remember the computer poll travesty in 2000, when we beat FSU head to head, had the same record, and FSU was selected to play Oklahoma in the championship game.

Both human polls had UM ranked #2 and FSU #3. However, the one computer poll had FSU #2, and by the slightest of margins it put FSU in the championship. (We were relegated to the Sugar Bowl, where we proceeded to badly beat UF's ***.)

The outrage across the sports world over UM being left out was so loud that the computer poll was eliminated and never used again. Until now, when it's been re-introduced.
i went to that sugar bowl game january 1, 2001 with my daddy. neither of us had ever been there. we dined at arnaud's, commander's palace, brennan's and had coffee at cafe du monde. on friday night on bourbon street, we got to see the remnants of the beatdown of al blades v. jabar gaffney (there were a bunch of dudes cuffed when we walked by) and then at the superdome we got to see the main beatdown of um v. florida. dorsey was the mvp and rex grossman had 2 or 3 picks that game. we came home happy
 
If you're saying that 4-5 games is too small a sample size to allow the model to outperform a coin flip, I very much agree. You're making the very point below that you at first challenged as wrong.

I don't get your point. You're all over the place with your responses. You cite facts that are demonstrably wrong (his success rate picking against the spread), reject the critique that his sample size is too small early in the year, and then turn around and claim his sample size is too small.

Me on Sept 9th: This model is meaningless early in the season. The sample size is too small

You on Sept 9th: His model beats the spread [factually wrong], so his sample size must be fine.

View attachment 337708

Me on Sept 28th: He has underperformed flipping a coin.

You on Sept 28th: "You don't understand sample sizes"

:ROFLMAO:

View attachment 337709
Geez, are you seriously that bad at statistics? Lol

SP+ has a multi-year track record of beating the spread. A couple of games away from .500 in September is not a meaningful miss.

In fact, SP+ has done so well over the years that now the spreads track SP+ very closely. So if you think it's crap...borrow as much money as you possibly can and gamble full-time.
 
His picks against the spread literally underperform flipping a coin. Anybody who has bet on his picks has lost money this year. That's the simple fact.

I don't know what you're arguing. He posts his own stats. They're losers in the aggregate. See cell B20.

He underperformed flipping a coin again this week:

View attachment 337707
Oh sorry, you cited "this year" and "this week"

I should've read that first before assuming you know what the words "sample" and "size" mean when they're next to each other

Did they teach statistics when you went to UM?
 
I'll also say that Bill Connelly himself acknowledges that early in the season sample sizes are too small to yield accurate results. Most of us on this thread understand this, except for one poster it seems.

Here's Connelly below, sarcastically responding to a poster who criticized his early season model results after only 4 games.

View attachment 337710

Bottom line, algorithmic models need a meaningful amount of data to be accurate. That data is scarce early in the season because teams haven't played enough games against one another.

Sure, his model uses other, off-the-field metrics like recruiting, previous years' results, etc. But using those metrics assumes that Connelly can accurately assess them, weight them properly, not miss other metrics, and so on. There's little reason to assume he has properly identified, assessed, and weighted the correct off-field metrics.

Using previous years' results was likely more relevant in pre-NIL, pre-portal times. Now teams can flip their fortunes in the span of a single year, rendering previous years' results less predictive.

The best data source is on-field results, which accrue over the course of the season. By week 12 I expect SP+ (and FPI) to cluster around the human polls; that's what usually happens with these models. But early in the season the models aren't particularly meaningful, as Connelly himself acknowledges above.
Oh wow LOL

Every post just gets better.

His comment is very specifically regarding his resume rating, not SP+...lol

Let me know if you need me to explain the difference to you, bud
 
Oh wow LOL

Every post just gets better.

His comment is very specifically regarding his resume rating, not SP+...lol

Let me know if you need me to explain the difference to you, bud

You have nothing to explain to anyone on this subject. You're a fraud.
 
Geez, are you seriously that bad at statistics? Lol

SP+ has a multi-year track record of beating the spread. A couple of games away from .500 in September is not a meaningful miss.

In fact, SP+ has done so well over the years that now the spreads track SP+ very closely. So if you think it's crap...borrow as much money as you possibly can and gamble full-time.

"SP+ has a multi-year track record of beating the spread".

Combining his full end-of-year 2024 results with the 5 weeks of his 2025 results yields a 522-516 record against the spread. That's 17 weeks of games.

He's had a 50.2% success rate against the spread over the last (almost) year and a half. Like I said, his model is no better than flipping a coin.

[attached screenshot of the combined ATS results spreadsheet]
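
For anyone who wants to sanity-check that, here's a minimal back-of-the-envelope sketch in plain Python (it assumes the 522-516 record above is accurate, ignores any pushes, and treats the picks as independent), asking whether that record is distinguishable from a fair coin:

```python
import math

# Record cited above: 522 wins, 516 losses against the spread.
wins, losses = 522, 516
n = wins + losses
observed = wins / n  # roughly 0.503

# Two-sided z-test against a fair coin (p = 0.5), normal approximation.
se = math.sqrt(0.25 / n)
z = (observed - 0.5) / se
p_value = math.erfc(abs(z) / math.sqrt(2))

print(f"hit rate = {observed:.3f}, z = {z:.2f}, two-sided p = {p_value:.2f}")
# With a p-value this large, the record is statistically
# indistinguishable from coin-flipping over this span.
```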
 
For others who are interested, when I went to pull the SP+ 2024 info I also came across his final rankings, from after the playoffs and the national championship game:

His model loves the SEC. ESPN bends over backwards for the SEC. Sankey has been crying that all 12 of the CFP playoff spots aren't given to the top 12 SEC teams. And lo and behold, the model ESPN picked added two additional SEC teams, putting 5 SEC teams in the top 10.

Here's his final Top 10:

#1 He got OSU right (coaches poll #1)

#2 He has Ole Miss as #2. They didn't even make the playoffs. SP+'s masters at ESPN and the SEC are determined to get SEC teams at-large berths, though, because the schedule is oh so tough that the entire CFP field should be made up of SEC teams. (coaches poll #13)

#3 Oregon. Matches human polls (coaches poll #4)

#4 Alabama. They didn't make the playoffs either. 9-4 record. SP+ loves 'em some SEC though. No wonder the ESPN/SEC cartel wants SP+ to be used in coming up with playoff berths. (coaches poll #17)

#5 Penn St. Matches human polls (coaches poll #5)

#6 Georgia. Matches human polls (coaches polls #6)

#7 Texas. Not much difference from human polls (coaches poll #3)

#8 Notre Dame. Weird rank. ND gave OSU a better game than Oregon did, yet Oregon is somehow ranked #3 by SP+. ND also beat Penn St in the playoffs, but SP+ has Penn St ranked higher. (coaches poll #2)

#9 Tennessee. Matches human poll (coaches poll #8)

#10 Miami. Who knows. Best offense in the country, but our defense couldn't stop anybody, let alone a Top 15 team (coaches poll #17)


[attached screenshot of the final SP+ rankings]
 

"SP+ has a multi-year track record of beating the spread".

Combining his full end-of-year 2024 results with the 5 weeks of his 2025 results yields a 522-516 record against the spread. That's 17 weeks of games.

He's had a 50.2% success rate against the spread over the last (almost) year and a half. Like I said, his model is no better than flipping a coin.

View attachment 337757
"Multi-year"

Amazing stuff from you, bud.
 
For others who are interested, when I went to pull the SP+ 2024 info I also came across his final rankings, from after the playoffs and the national championship game:

His model loves the SEC. ESPN bends over backwards for the SEC. Sankey has been crying that all 12 of the CFP playoff spots aren't given to the top 12 SEC teams. And lo and behold, the model ESPN picked added two additional SEC teams, putting 5 SEC teams in the top 10.

Here's his final Top 10:

#1 He got OSU right (coaches poll #1)

#2 He has Ole Miss as #2. They didn't even make the playoffs. SP+'s masters at ESPN and the SEC are determined to get SEC teams at-large berths, though, because the schedule is oh so tough that the entire CFP field should be made up of SEC teams. (coaches poll #13)

#3 Oregon. Matches human polls (coaches poll #4)

#4 Alabama. They didn't make the playoffs either. 9-4 record. SP+ loves 'em some SEC though. No wonder the ESPN/SEC cartel wants SP+ to be used in coming up with playoff berths. (coaches poll #17)

#5 Penn St. Matches human polls (coaches poll #5)

#6 Georgia. Matches human polls (coaches polls #6)

#7 Texas. Not much difference from human polls (coaches poll #3)

#8 Notre Dame. Weird rank. ND gave OSU a better game than Oregon did, yet Oregon is somehow ranked #3 by SP+. ND also beat Penn St in the playoffs, but SP+ has Penn St ranked higher.

#9 Tennessee. Matches human poll (coaches poll #8)

#10 Miami. Who knows. Best offense in the country, but our defense couldn't stop anybody, let alone a Top 15 team (coaches poll #17)


View attachment 337759
This is ******* gold, lmao
 