
Isn’t it essential to also consider the pool size for 5-star, 4-star, and 3-star players when making these kinds of comparisons? I see no accounting for the discrepancy in sample sizes between the groups. Each year there are roughly 30 5-star players, another 300 or so 4-star players, and then literally thousands of players rated 3 stars and below. Isn’t it simply disproportionate mathematics that leads to a higher raw number of hits for the lower-rated players if you are merely counting players who have had a top-24 fantasy season, without accounting for the size of the pool they came from? I just don’t find it surprising that a group of thousands of players produces more hits than a group of 300, or of 30.
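To make the point concrete, here is a quick sketch of the distinction between raw hit counts and per-player hit rates. The pool sizes loosely match the numbers above (30 / 300 / thousands), but the hit counts are entirely hypothetical, invented only to show the arithmetic:

```python
# Hypothetical illustration: raw hits vs. per-player hit rates.
# Pool sizes roughly match the 30 / 300 / thousands split above;
# the "top24_hits" values are made up for demonstration only.
pools = {
    "5-star": {"pool": 30, "top24_hits": 6},
    "4-star": {"pool": 300, "top24_hits": 24},
    "3-star and below": {"pool": 3000, "top24_hits": 45},
}

for tier, d in pools.items():
    rate = d["top24_hits"] / d["pool"]
    print(f"{tier}: {d['top24_hits']} hits out of {d['pool']} players "
          f"({rate:.1%} hit rate)")
```

With these made-up numbers, the 3-star group produces the most raw hits (45) yet has by far the lowest hit rate (1.5%), while the 5-star group hits 20% of the time. That is exactly the distortion you get by counting top-24 seasons without dividing by pool size.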
Recruiting and player ranking is an inexact science, especially because it involves 18-year-old kids, all from different backgrounds, with different work ethics, different situations, and different goals. This is especially true for positions like offensive and defensive line, where most kids get by in middle school and early high school simply by being bigger than everyone else. At those positions, technique, nutrition, weight training, and development happen much later, and projecting that during recruiting is extremely difficult. Other kids simply get overlooked because of where they come from, their economic status, and their limited opportunities for exposure at skills camps and 7v7 events. The internet has made visibility less of a hurdle, but there are still hundreds of “diamonds in the rough” out there who go unnoticed every year. It’s how guys like Justin Jefferson, Josh Allen, and Austin Ekeler come out of nowhere and become all-pros.
Players are going to wash out, and rise to success, at every ranking level. But when one group of those players is so disproportionately larger than the other two, I’m not sure you can accurately compare their hit rates without accounting for the discrepancy in their initial sizes.