John Hollinger devised an all-in-one basketball rating, adjusted for pace of play and an individual’s playing time, to rank NBA players past and present. He readily admits it isn’t foolproof, and yet it’s widely accepted and used. He also concedes that the defensive side of the rating can be skewed, since blocks and steals alone can lead to false conclusions about a player’s defense.
The average NBA player is a 15 in Hollinger’s system. A great player rates in the mid-to-high 20s, and a poor one falls below 10. (When you Google his system, you’ll see the whole range.)
Be that as it may, you can examine his rating system (and others) and get into them as deeply as you’d like. Hollinger’s system and others like it are built by adding up a series of positive stats and then subtracting the negative ones. With rudimentary math skills, any coach can devise his own system, tweak it along the way, and be in the “analytics” game.
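To make that concrete, here is a minimal sketch of the kind of homemade, PER-style rating a coach could build: positives minus negatives, normalized for minutes played. The stat weights below are illustrative assumptions, not Hollinger’s actual (far more complex) formula, and the tier cutoffs simply echo the scale mentioned above (roughly 15 average, mid-to-high 20s great, below 10 poor).

```python
# A coach-devised efficiency rating in the spirit described above.
# Weights are made up for illustration; tweak them as you see fit.

def simple_rating(stats, minutes):
    """Crude rating: positive stats minus negative stats, per 36 minutes."""
    positives = (stats["pts"]
                 + 0.7 * stats["reb"]
                 + 0.7 * stats["ast"]
                 + 1.0 * stats["stl"]
                 + 0.7 * stats["blk"])
    negatives = (0.7 * stats["fga_missed"]
                 + 0.4 * stats["fta_missed"]
                 + 1.0 * stats["tov"]
                 + 0.4 * stats["pf"])
    # Dividing by minutes and scaling to 36 keeps bench players comparable
    # to starters, the same idea as PER's playing-time adjustment.
    return (positives - negatives) / minutes * 36

def label(rating):
    """Rough tiers echoing the scale: ~15 average, 25+ great, under 10 poor."""
    if rating >= 25:
        return "great"
    if rating < 10:
        return "poor"
    return "average"
```

A coach could feed in season totals, compare his own players against opponents, and adjust the weights until the rankings match what his eyes tell him.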
However, at the end of the day, each coach has to decide the worth of utilizing one PER system or another for his own players and/or opponents. Then, the next question becomes “how do I interpret this info?”
With all of the above as a backdrop, let me use several 2014-15 NCAA players as examples to illustrate “holes” in using analytics alone in evaluation.
Hollinger also annually assigns PERs to college players, and those numbers are used by NBA personnel as supporting data in evaluating draft prospects. The problem is the uneven level of play in NCAA hoops: not everyone plays the same strength of schedule. Consequently, some mid- and low-level D1 players post higher PERs than many players from power conferences. In the NBA, by contrast, every player faces close to the same schedule as any other.
(Some argue that certain NBA subs’ PERs are less than accurate because they so often play against other subs.)
With that said, Wisconsin’s Frank Kaminsky led the NCAA PER rankings by a wide margin with a 35.70 rating. That’s no real surprise: Kaminsky will be a high draft choice, and anyone who watched him knew categorically that he is very skilled and efficient.
The second-ranked player was a surprise: Northern Iowa’s 6-8 senior forward, Seth Tuttle. The argument could be made that his stats were a function of level of play, but the MVC is nothing to sneeze at. I saw Tuttle play at the Portsmouth Invitational, the annual pre-draft senior showcase. I looked forward to seeing him, but I and everyone else watched him go 0-for-the-tournament from the floor (0-for-13, to be exact). In spite of that, and confusing matters even further, a colleague and I agreed that he was still a viable NBA prospect as a “blend” player because he did so many little things well. He just didn’t finish at Portsmouth.
Numbers 3 and 4, to no one’s surprise, were Duke’s Jahlil Okafor and Kentucky’s Karl-Anthony Towns, regarded as the top two players in the draft.
Numbers 7 and 12 bear mentioning. #7 was Notre Dame freshman sub Bonzie Colson, who averaged only 12.1 mpg but hit 30.86 on the PER chart. #12 was Bowling Green’s 6-10 Richaun Holmes. I saw him in person during the regular season and labeled him a solid NBA prospect in spite of his getting very few touches and shots. PER helped get him to the PIT, and thereafter he played his way into wider consideration.
So, what is the moral of the story? Use analytics as a tool to further evaluate players, but the “eye test” is still king. Kaminsky, Okafor and Towns all pass the eye test easily. But PER can help you evaluate players and opponents who fly under the radar, like the underutilized Holmes, the bad-luck “blend” prospect Tuttle, and the limited-minutes Colson. There are many other such players who don’t necessarily fail the eye test; we just sometimes go blind to them because they don’t smack us in the face.
Keeping an open mind to analytics, and PER in particular, can serve to complement your visual impressions.