Youth Soccer Rankings?

Isn't that what we'd expect if the teams were routinely swapping rosters...?

Perhaps. What it says is that however they populate the rosters for the 2007 and 2008 teams, they end up with teams that are roughly equivalent. That could mean the 2008s are playing on both and it's essentially the same team, or that the players on the 2007 and 2008 rosters aren't, in aggregate, dramatically different. What it means in practice, though, is that opponents meeting either the 2007 or 2008 team should expect that level of play over time.
 
The issue is that it lowers the rank of the '07 team, and when the actual '07 team plays (at an important tournament or playoffs, say), they're better than their rank suggests.



They're playing the '06s or getting the day off. There are a lot of games in an MLS Next season.


Nope. I've seen it happen where the entire 08s played up and so did the 09s. You'd have to ask the LAFC parents how often they do this.


I disagree. It's pretty rare that a club's U19s won't beat their U17s and 17 > 16 and so on. Yes, there are exceptions where a club has a particularly good or bad team in one age group, but for the most part age and size still matter. One way to see this is to look at the teams' ratings.


I guess I'm not convinced these should be considered different teams. If the club is managing the younger rostered players' availability to meet the requirements of a registered team that needs players, I don't see how it should be considered a different team. But it doesn't matter what I think. You can easily have more than one team on the Soccer Rankings app. Someone just needs to go into the paid pro section and separate the results accordingly. If there are enough recent games for each roster, they will become separate teams in the Ranking App. I've seen a couple of teams do this. The funny thing is that the separate team rankings usually migrate to be very close to each other after a few events.
 
Anyone check Surf Cup’s brackets to see how well they seeded teams? Did they seed it again to ensure their teams have higher success rates, or did someone there actually use the rankings and ratings to seed it properly?
 

Well, you got me interested enough to take a look. :confused::confused: I dug into the G2007 brackets, because I am familiar with those teams. Surf Cup absolutely did not seed them properly. But who knows, there is still time to fix it. ;) I don't want to hijack this thread into an age-group-specific debate, so I posted the analysis in a new thread. https://socalsoccer.com/threads/2023-surf-cup-seeding-sh-how.21160/
 
Surf has often, if not always, seeded teams based on the league they play in rather than using any rankings.
Most of the time it makes sense; sometimes it clearly doesn't.
 

Doesn't look like they used leagues as a seeding tool either. Of course the Best of Best bracket will be ECNL heavy, but there really is no correlation between league and bracket that I can see.
 
I heard that one of the coaches of a 2nd-bracket G2010 ECNL team told parents they were close to being placed in the 1st bracket but just missed out because of their league record.

If this is true, it implies that league records somehow factor into which bracket Surf Cup places a team. Also, if they "just missed" getting into the 1st bracket, some kind of numerical ranking is being used.

Or maybe by "just missed" Surf Cup was being literal and that club's dart just missed the board.
 

In the Girls 09, for example, there are a couple of DPL teams that, according to YSR rankings, could be in Best of the Best (1st); instead they are in Super White (3rd) and White (4th).
 

There are 8 ECNL teams in the Super Black (aka second) bracket for G2010. While I haven’t gone through each team to evaluate their rankings, I don’t see any there that should be in Best of the Best. Who am I overlooking? You might argue that LA Breakers FC ECNL should be in rather than DMCV Sharks ECNL.
 
Sporting, Liverpool, or Hawaii would be my choice over Breakers and Sharks.

I think Breakers and Sharks would be more evenly matched against Slammers RL (last year's Surf Cup winner, Vegas Cup finalist, and Mojave division champ), Blues RL (Southwest and Sonoran division champ and last year's finalist), and Beach RL (played in Virginia and competes with Slammers and Blues).

I think some of the Super White and Super Black brackets could be interchanged for better balance (bring up the high-level RL teams to play those mid-tier NL teams).
 

If that’s Hawaii Rush then I agree that it should be Best of Best. Crossfire would be the one to drop, though. Otherwise, I’m still not really seeing anything drastically off in Best of Best.

By rankings, BoB Group 4 is the toughest, with all four teams in the national top 40. BoB Group 3 is the weakest (among BoB). If it were me, I would have put Beach in Group 3.

There are a couple of teams that probably don’t belong in the top 3 brackets (Rage, Bay Area Surf) and a couple that could have been placed higher (Beach RL, LFCIA).

I went ahead and pulled the #s into a spreadsheet this morning.
[spreadsheet screenshot attached]
ETA: The #s support your assertion that Super Black and Super White could be mixed better.
 
Very. A higher-rated team will beat a lower-rated team 82% of the time. A higher-rated team in the national top 100 will beat another top-100 team 75% of the time. Does that mean every ranking from 1 to 2000 in each age group is exactly correct and will predict who wins the next game with 100% certainty? Of course not. Something that could do that would be science fiction rather than an actual rating system. But anyone claiming the rankings are terribly inaccurate is either being intentionally obtuse or doesn't understand how probability works.

Koge's win from last month hadn't been linked yet, but once it is, you can see how well they did in the finals. They overperformed in all four games (all four marked green), and in doing so upped their own rating/ranking significantly.

[screenshots: Koge's four finals results]

That said, the 05/04 rankings (and soon, the 06/05 rankings) are probably the wonkiest in terms of making sure each team has all of their games (and none of anyone else's) correctly assigned, since that's when the age groups shift from one birth year per team to two. Some clubs keep single-year teams, some go to two years, and some go to two years but keep the single-year name. None of that complication exists for the other age groups from U9 to U17.

Appreciate the response. Not sure I appreciate the insult… Never said they were “terribly inaccurate”. Simply questioned the accuracy.
 

My apologies. Someone else had said something silly about the ratings on a different thread because of where Koge was showing at the time, and I conflated two separate users. For what it's worth, Koge's recent performance (and the relative performance of everyone else since then) has pushed them up to #3 in the country for U19G in SR, out of 1,468 ranked teams.

[screenshot: Koge's SR ranking]

Sadly, all of these ratings/rankings for U19 are going away on 8/1 as the years roll over to the next season, so anyone interested/invested in a particular U19 team should capture screenshots soon if relevant.
 

That's not the Hawaii Rush team that just won National Cup.

Will be good to see all the teams with their new rosters.
 
I wanted to come back and bump this thread with some new information about the Soccer Rankings (SR) app. I weighed the options of just starting a new thread, but figured it might make more sense to have the information consolidated here where there has already been so much discussion about the ratings/rankings/algorithm/etc.

So today Mark made a pretty incredible discovery, and I'm giddy because it was at least partially based on a suggestion I gave him. But before I get there, a little background might be helpful to ground the discussion. First off, the way this system works is pretty well known and well described at this point, at least to folks who frequent this board. Game data is pulled in from various electronic sources and assigned to a team entity. If a correct team entity for the data can't be identified, a new team entity is created. Rinse and repeat, continuing to add game results to each entity. If a game result has a rated team on the other side of it, the rating for each team is adjusted based on the new result. The ratings of the two teams are compared, and if the actual goal difference is more than the existing ratings expected, the team that overperformed has its rating bumped up a smidge. If the goal difference is less than expected, the team that underperformed has its rating bumped down a smidge. If the goal difference is pretty much spot on with what was expected, neither team's rating moves much at all. (More details on this are in the FAQ for the app.)
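
To make that update rule concrete, here's a minimal sketch in Python. This is not Mark's actual algorithm: the constant K and the expected-margin function are hypothetical placeholders, since the app's real constants and functional form aren't public.

```python
# Minimal sketch of a goal-difference rating update -- NOT Mark's actual
# algorithm. K and expected_goal_diff() are hypothetical placeholders.

K = 0.1  # hypothetical learning rate: how far one game can move a rating


def expected_goal_diff(rating_a: float, rating_b: float) -> float:
    """Assume ratings are scaled so their difference approximates the
    expected goal margin (an assumption of this sketch)."""
    return rating_a - rating_b


def update_ratings(rating_a: float, rating_b: float,
                   actual_goal_diff: float) -> tuple[float, float]:
    """Nudge both ratings by the gap between the actual and expected margin:
    the overperformer moves up a smidge, the underperformer down."""
    surprise = actual_goal_diff - expected_goal_diff(rating_a, rating_b)
    return rating_a + K * surprise, rating_b - K * surprise


# Example: team A is rated 1.0 goals better but only draws 0-0,
# so A's rating drops slightly and B's rises slightly.
a, b = update_ratings(1.0, 0.0, actual_goal_diff=0)
```

The design point is that one surprising result only nudges a rating; it takes a sustained run of over- or under-performance to move a team significantly.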

There are a couple of outcomes of these ratings, but essentially they are useful for predicting what will happen when two rated teams compete. Those predictions can be used to flight tournaments, choose proper league brackets, or as a fun prediction of how an upcoming weekend might play out. Now, these predictions are never going to be 100% accurate (right every time) or 0% accurate (wrong every time); but the better the data, and the better the algorithm, the better the quality of the predictions. For definitions, Mark uses "predictive power" for these same concepts: 0% predictive power means a coin flip (getting no better than 50% correct), and 100% predictive power = god. You can convert predictive power to the % of results correctly predicted by dividing by 2 and adding 50%, so 70% predictive power translates to getting 85% of predictions correct. In all of these trials, "correct" is defined as picking the correct winner for games that produce a winner; if the wrong winner is chosen, it's a failure, and tie games are excluded from these predictivity results.
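
That conversion is simple enough to write down directly; a tiny helper, checked against the numbers in this thread:

```python
def accuracy_from_predictive_power(power_pct: float) -> float:
    """Map 'predictive power' to the % of winners correctly picked:
    0% power -> 50% (coin flip), 100% power -> 100% (god)."""
    return power_pct / 2 + 50


assert accuracy_from_predictive_power(70) == 85.0                # example above
assert abs(accuracy_from_predictive_power(66.7) - 83.35) < 1e-9  # overall number quoted below
```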

With this setup, the predictivity of the app isn't an estimate or a guess - it's a specific number that can be calculated as often as desired. Run through all the stored games in the database right now, compare the predictions implied by the comparative ratings against the actual game results, divide the correct predictions by all the games being predicted, and one number gets spit out. Turns out this number, as of today, is 66.7% predictive over all games, which translates into picking the correct winner of a soccer game 83.35% of the time. So, as expected, it's way better than a coin flip, and will pick the right winner about 5 out of 6 times. This predictive number validates that the ratings derived from the algorithm have a certain level of accuracy. If the ratings were wildly inaccurate, the predictive number would trend toward 0%; if the ratings were supernatural, it would trend toward 100%. But by any measure, the real, provable, actual predictivity number is pretty darned good (and better than another well-known ranking system by more than 50 points, which is insane). For any skeptics who doubt that youth soccer can be ranked/rated, or even skeptics of this particular algorithm/ranking system, the predictivity number is what mathematically shows the expected probability - and it's an admirable number.
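
For illustration, that full-database sweep could look something like the Python below. The Game record and its fields are hypothetical stand-ins of mine; the app's actual schema and query aren't public.

```python
from dataclasses import dataclass


@dataclass
class Game:
    home_rating: float  # each side's rating going *into* the game
    away_rating: float
    home_goals: int
    away_goals: int


def predictive_power(games: list[Game]) -> float:
    """% of winners correctly picked among decided games, rescaled so a
    coin flip scores 0% and perfection scores 100%."""
    decided = [g for g in games if g.home_goals != g.away_goals]  # ties excluded
    if not decided:
        return 0.0
    correct = sum(
        (g.home_rating > g.away_rating) == (g.home_goals > g.away_goals)
        for g in decided
    )
    accuracy = 100 * correct / len(decided)  # % of winners picked
    return (accuracy - 50) * 2               # 83.35% accuracy -> 66.7% power
```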

But that still isn't the interesting discovery. Here comes the interesting discovery. There is an intuition, even among proponents of this type of comparative rating system that uses goal differences, that the quality of the data (and the predictions) depends on how close the compared teams are to each other and how many shared opponents they have. The more interplay, the better; the less interplay, the more drift. I believed that to be the case, as it seems reasonable. For example, if teams are in the same league, same conference, or even same state, they play each other enough that their comparative ratings will be honed and sharpened by each other, and should have a higher predictive value. Conversely, if you're comparing teams that are not in the same league or the same location, may have never seen each other before, and have few if any common opponents, it makes intuitive sense that their comparative ratings would drift a bit more and be somewhat less accurate. Remember, this actual predictivity - the quality of each prediction - can be calculated by looking at the existing data for games that fit this category.
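
As a toy illustration of "interplay," one could count the common opponents two teams share in the stored schedule. The helper below is purely hypothetical - it isn't anything the app actually exposes.

```python
from collections import defaultdict


def shared_opponents(games: list[tuple[str, str]], team_a: str, team_b: str) -> int:
    """Count opponents both teams have faced, given (home, away) name pairs.
    A hypothetical proxy for 'interplay': more shared opponents should mean
    better-calibrated comparative ratings, per the intuition above."""
    opponents: dict[str, set[str]] = defaultdict(set)
    for home, away in games:
        opponents[home].add(away)
        opponents[away].add(home)
    return len(opponents[team_a] & opponents[team_b])
```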

So what I suggested to Mark - and to be fair, he had also thought of it himself within the past few days - was that he should exclude all in-state games and measure the predictivity of interstate games exclusively: CA teams playing AZ, TX playing OK, or any other permutation where the opposing teams are in different states. What this would do is measure how good the predictions are when there is very little shared information going into the upcoming game - interplay is low. This represents what happens when you go to a big tournament elsewhere, as opposed to predicting a local league game. He coded the query, ran the data, and a few hours later the number was spat out. It turns out that for these interstate games, the algorithm is 67.0% predictive, which translates into picking the correct winner 83.5% of the time. So all of the intuitive worry about drift, or local data being more refined than remote data, turned out to be a false intuition. The comparative ratings, even when used across different states, provide predictions just as good (in fact, a teensy bit better) as when they are applied to local/in-league contests. If a team has sufficient data to be rated, that rating can be trusted regardless of how much interplay there is. It's an incredible finding, and it validates all the work and effort Mark and his team have done over the years to polish and refine the algorithm, tying game data to a useful rating.
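
In code, the interstate slice is just a filter in front of the same measurement. Sketched against the hypothetical Game record from earlier, with state fields bolted on (again, my stand-ins, not Mark's actual query):

```python
from dataclasses import dataclass


@dataclass
class GameWithState(Game):  # Game and predictive_power() from the sketch above
    home_state: str = ""    # hypothetical two-letter state codes
    away_state: str = ""


def interstate_predictive_power(games: list[GameWithState]) -> float:
    """The same metric, restricted to cross-state matchups (low interplay)."""
    interstate = [g for g in games if g.home_state != g.away_state]
    return predictive_power(interstate)  # 67.0% per the post, vs 66.7% overall
```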

And now for a real-world use: it looks like we're predicted to lose both games this Saturday with my youngest's team, so what's the leading recommendation for filling my thermos?
I had recently wondered about the interstate calculation. I saw that it came out fairly predictive in real-world results and was properly impressed.
 
Are you doing this through an API?

If Mark is holding out the knowledge of an API for us - there will be pitchforks! :D I've spent way too much time transposing from the app into spreadsheets this past year!

League play kicked off yesterday for the fall season for us. Predictions for both games were as accurate as expected. Unfortunately, the first one was predicted to be a 2-1 loss. Having the final result turn out to be a 2-1 loss is a small consolation. 2nd game was predicted to be a 2 goal win, ended up being a 3 goal win.
 

If API means “Arduously Pulled by I” then, yes!!!

No, I copied/pasted the brackets from the website into Excel then manually looked up each team’s ranking in SR.
 