Speed Kills

It seems unnecessary, even silly to say it, but basketball is a very simple game. Ever since the late 1930s, when the college rules committee eliminated the mandatory center jump after every score, the game has been one of transition, the competitors converting from offense to defense and back again in near-seamless fashion.

Essentially, the game requires you to trade possessions and attempt to score when it’s your turn. You win by outscoring your opponent as the possessions unfold, one after another. It’s a possession-by-possession game.

Modern analytics, and the effort to determine how best to measure a team’s performance, find their origins in this understanding.

Like many insights, a number of people likely came to the same conclusion in different places, under different circumstances, so it’s hard to say who gets credit or took the first steps. But conventional wisdom cites the work of a young Air Force Academy assistant who questioned the accuracy of using traditional stats like points per game to measure a team’s offensive and defensive efficiency.

Dean Smith had majored in mathematics at Kansas while playing for the legendary Phog Allen, and claimed he would have been happy to become a high school math teacher. But in 1955, head coach Bob Spear offered him a job at the newly opened Academy.

In his role as Spear’s assistant, Smith began to probe the possession-by-possession nature of the game and reasoned that a game’s tempo or the pace of play skewed statistical conclusions because teams played at different speeds.

“I have never felt it was possible to illustrate how well we were doing offensively based on the number of points we scored. The tempo of the game could limit us to fifty points and yet we could be playing great offense. In some games we might score eighty-five points and yet be playing poorly from an offensive viewpoint… From a defensive point of view, one of my pet peeves is to hear a team referred to as the ‘defensive champion’ strictly on the basis of giving up the fewest points per game over a season. Generally, a low-scoring game is attributable to a ball control offense rather than a sound, successful defense.”

To Smith, the only way to remove the bias of tempo was to accurately count each team’s possessions and ask “who made the most of the possessions they had?” By calculating how many points a team scored and allowed per possession, one could gain a clearer picture of how “efficient” a team had performed and compare it with other teams no matter whether they favored a slow tempo or a fast one. 

To that end, Smith devised a statistical tool he called “possession evaluation” and began measuring the Academy’s performance through its prism. Four years later, in 1959, while serving as an assistant for Frank McGuire at North Carolina, Smith described the system in McGuire’s book, Defensive Basketball.

“Possession evaluation is determined by the average number of points scored in each possession of the ball by a team during the game. A perfect game from an offensive viewpoint would be an average of 2.00 points for each possession. The defensive game would result in holding the opponent to 0.00 (scoreless). How well we do offensively is determined by how closely we approach 2.00 points per possession. How close we come to holding the opponent to 0.00 points per possession (as opposed to holding down the opponent’s total score) determines the effectiveness of our defensive efforts. Our goals are to exceed .85 points per possession on offense and keep our opponents below .75 point per possession through our defensive efforts.” 
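Smith’s metric reduces to simple division. Here’s a minimal sketch in Python; the game totals are hypothetical, but the 0.85 and 0.75 benchmarks are Smith’s own:

```python
def points_per_possession(points, possessions):
    """Smith's 'possession evaluation': average points per possession."""
    return points / possessions

# Smith's benchmarks: exceed 0.85 on offense, hold opponents under 0.75.
# Hypothetical game: 62 points scored on 70 possessions, 55 allowed on 71.
offense = points_per_possession(62, 70)   # ≈ 0.89 -> clears the 0.85 goal
defense = points_per_possession(55, 71)   # ≈ 0.77 -> misses the 0.75 goal
```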

Four decades later, Smith’s pioneering work found its way into Dean Oliver’s 2004 breakthrough book on statistical analysis, Basketball on Paper, in which he outlined his now famous “four factors” of basketball efficiency.

Oliver took the three broad statistical categories we’ve always used to measure performance – shooting, rebounding, and turnovers – added some nuance by including free throws and offensive rebounds to his equations, calculated the per possession value of each, and then multiplied the resulting percentages by 100 to determine a team’s offensive and defensive efficiency ratings. For Oliver, a team that had a higher offensive rating (points scored per 100 possessions) than its defensive rating (points allowed per 100 possessions) was more efficient than an opponent with lower ratings and the likely winner of a game between the two. 
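Oliver’s ratings are the same per-possession idea scaled to 100 possessions. A sketch, plugging in the final score and possession counts this post cites later for the 1963 title game (Loyola 60 points on 69 possessions, Cincinnati 58 on 67):

```python
def rating_per_100(points, possessions):
    """Points scored (or allowed) per 100 possessions."""
    return 100 * points / possessions

loyola_off = rating_per_100(60, 69)   # Loyola's offensive rating, ≈ 87.0
cincy_off = rating_per_100(58, 67)    # Cincinnati's, ≈ 86.6
# Each team's defensive rating is simply its opponent's offensive rating.
```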

Combined with the onslaught of the three-pointer and its dramatic impact on coaching strategy, advanced analytics became a mainstay in sports journalism. You would be hard pressed to watch a game on television today or to read an account of it tomorrow without hearing references to one or more of Oliver’s descendants in the field – Ken Pomeroy, Kirk Goldsberry, Seth Partnow, Jordan Sperber, or Jeff Haley, to name a few. 

And it makes good sense. The data is fascinating and often insightful.

• Team A plays fast, averaging 85 points per game but from an efficiency perspective, barely a point per possession while its upcoming opponent, Team B, averages 1.2 points per trip down the floor. Can Team A force Team B into a running game in hopes of disrupting Team B’s preferred rhythm and offensive efficiency? Or might an adjustment in Team A’s defensive tactics push Team B deeper into the shot clock where the data reveals that their shooting percentage drops?

• In its last game, our opponent, Team C, had an excellent offensive efficiency rating of 1.20, averaging 79 points on 66 possessions, but turned the ball over 16 times. In the 50 possessions when they protected the ball, their efficiency rating jumped to 1.58. What can they do to cut down on turnovers in the season ahead and grow more efficient?

• Team D is a weak shooting team but a high volume of their shots are 3-point attempts. They make enough of them to offset their misses and keep the score tight. What can we do to either reduce the number of 3-pointers they take or further erode their shooting percentage?

But there’s a problem.

While data comparisons can be purified by eliminating tempo mathematically, the game itself is not tempo-free.

Not tempo or pace narrowly defined as the number of possessions a team experiences in the course of a game, but the rate of speed or intensity at which things take place during those possessions… or perhaps more precisely, the speed and intensity of a team’s actions relative to the speed and intensity of its opponents’ actions.

What do the military boys say? Speed kills.

In this sense, tempo is a weapon: speed or intensity, or a combination of the two, can shatter an opponent’s cohesion, throw them off balance, confuse and disrupt their preferred rhythm, and challenge their confidence. It often creates fatigue and doubt. For example, determined and aggressive offensive rebounding not only creates more scoring opportunities, it disheartens defenders who have worked hard to prevent easy shots, only to see their opponent get the ball back for another try.

Basketball is filled with sweat and blood, and wild swings of emotion. Games run the gamut from stupid, unforced errors, missed shots, confusion and fatigue to gut-wrenching, scrambling defensive stands and exhilarating, fast-paced scoring runs … and moments of just plain luck. In combination, these can lead to what coaches call “game slippage” – a growing dread that your lead is slipping away and there is nothing you can do to stop it. 

It’s impossible for data, tempo-free or not, to fully capture or quantify such realities.

In his recent book, The Midrange Theory, NBA analyst Seth Partnow eloquently acknowledges this fact, citing the foundational principle of general semantics, coined by Alfred Korzybski and popularized by linguist S.I. Hayakawa: The map is not the territory… the word is not the thing.

A word or a symbol or a mathematical equation or even a lengthy written description is only a representation of reality… it’s never the reality itself, just as a map is not the territory it attempts to depict. No matter how nuanced and precise it becomes, advanced analytics can never capture the totality of the decisions, actions, and outcomes that comprise a basketball game. 

Today’s advanced, computer-generated analytics often miss the context, the circumstances, the “how and why” of what takes place on the court. 

The stats of the 1963 NCAA championship game between Loyola and Cincinnati that we introduced in our last post don’t explain why – after shooting 73% on 8 of 11 shots in the opening minutes of the second half – Cincy’s Ron Bonham and his teammates took only 3 more shots in the remaining 13 minutes of regulation and blew a 15-point lead. 

Nor do they explain how and why Bonham’s All-American counterpart on Loyola, senior captain Jerry Harkness, failed to make a single basket in the first 35 minutes of play, but then exploded. With 4:34 left in regulation, he scored his first field goal, followed ten seconds later by a steal and a second basket. At the 2:45 mark Harkness scored a third basket and with only four seconds left on the clock, connected on a 12’ jumper to tie the game and send it to overtime. On the opening tip of the overtime period, he scored his fifth and final field goal. 

In the course of 242 desperate seconds, Harkness scored 13 points on five field goals and three free throws, propelling Loyola to the national championship. His frenetic intensity on defense, coupled with mad dashes up the floor hoping to spark the Ramblers’ fast break, typified the Chicagoans’ up-tempo, less than efficient, but highly effective approach to the situation in which they found themselves.

Predictably, according to the conventions of advanced analytics, both teams had roughly the same number of possessions, 67 for Cincinnati, 69 for Loyola… an average of 68 apiece. As the game unfolded, they traded possessions, each taking their turn with the ball. But while this accounts for the relatively slow “mathematical” tempo or pace of the game, it masks the relative speed and intensity – the operational tempo – of what took place within those possessions. 

For example, in roughly a third of its possessions, Loyola generated 42 field goal attempts, nearly equaling Cincinnati’s total output for the entire game.

You won’t find those numbers revealed in the efficiency stats of the game.

Loyola was horribly inefficient but, in the end, effective because their rapid pace and intensity generated 30 more scoring opportunities: 77 field goal attempts to the Bearcats’ 47. Even in the overtime period, Loyola fired six more times than Cincy. They converted only 33% of them, half of Cincinnati’s 67% rate, but enough to give them one more basket as the clock expired.

The sheer volume or statistical raw count of Loyola’s attack confounds the tempo-free, efficiency percentages of the Four Factors. 

Analytics picks up the underlying drivers – Loyola’s offensive rebounding and tenacious defense, which led to Cincy turnovers, turned the game around – but assigns values based on percentages that are misleading.

Let’s start with the key indicator of efficiency and likely victory in both the NBA and college ranks, the eFG% or “effective field goal shooting percentage.”

It’s calculated just like the traditional FG%: you divide a team’s “makes” by its “attempts” to determine its shooting percentage. For example, 20 baskets divided by 50 attempts yields a shooting percentage of 40%. But the modern efficiency calculation makes an important adjustment to the formula. Recognizing the impact of today’s 3-pointer, it grants 50% more credit for made 3-pointers. If 6 of those 20 baskets were 3-pointers, the efficiency calculation climbs from 40% to 46%. From an efficiency standpoint, it’s as if the team had made 23 two-point baskets instead of only 20.
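The adjustment above is a one-line formula. A quick sketch reproducing the worked example:

```python
def efg(fgm, fga, fg3m):
    """Effective field goal %: a made three earns 50% extra credit,
    i.e., it counts as 1.5 made two-pointers."""
    return (fgm + 0.5 * fg3m) / fga

traditional = efg(20, 50, 0)   # no threes: 0.40
adjusted = efg(20, 50, 6)      # 6 of the 20 makes were threes: 0.46
```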

Since there were no 3-pointers in 1963, the eFG% formula for Loyola and Cincinnati is no different than using the traditional version. Clearly, Loyola was horrendous, making only 30% of its 77 shots while Cincinnati scored a very respectable 47%. But the sheer volume of Loyola’s attempts renders an efficiency comparison between the two meaningless.

Cincy’s first, last, and total 47 attempts yielded 22 baskets… while Loyola’s first 47 attempts yielded only 12, putting them 20 points behind. But the Ramblers went on to attempt an additional 30 field goals… and made 11 of them, winning the game by 2. 

Comparing each team’s ORB% or offensive rebounding percentage leads to the same problem.

This measurement calculates the percent of offensive rebounds a team secures on its missed FG attempts. It’s an important indicator of offensive efficiency because an offensive rebound extends a possession, creating an opportunity for a second attempt to score more points. 

Once again, Cincy achieved a higher efficiency rating than Loyola, snagging 55% of their offensive rebound opportunities compared to Loyola’s 47%. But Cincy’s advantage in the comparison is the “mathematical” consequence of attempting 30 fewer field goals.
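The textbook ORB% divides a team’s offensive boards by the total available to it: its own offensive rebounds plus the opponent’s defensive rebounds. A sketch; Loyola’s 28 offensive boards come from this post, while Cincinnati’s 31 defensive rebounds are inferred from the totals cited here and should be treated as illustrative:

```python
def orb_pct(team_orb, opp_drb):
    """ORB%: share of available offensive rebounds a team secures."""
    return team_orb / (team_orb + opp_drb)

# 28 Loyola ORBs; 31 Cincinnati DRBs (inferred, illustrative only).
loyola = orb_pct(28, 31)   # ≈ 0.47, matching the figure above
```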

Loyola’s raw numbers tell us more about the actual pace, intensity, and outcome of play than the analytic scores do. 

In just 24 of its possessions, Loyola grabbed 28 ORBs and generated 42 FGAs, nearly as many as Cincy attempted in the entire game. The Bearcats outperformed Loyola in the ORB% factor but produced 12 fewer scoring opportunities and 11 fewer points. 

The efficiency factor called the FTA% captures a team’s ability to generate free throws in proportion to the number of FGs it attempts. The idea is that free throws are statistically easy, highly efficient points to score because the shooter is unguarded, so a team with a higher FTA% than its opponent is likely the one that was more proficient in attacking the defense and getting to the line. In the modern game the FTA% is increasingly relied upon to measure how much pressure a team or specific scorers exert on the defense.
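The calculation is straightforward division. A sketch using the totals cited in this post (both teams attempted 21 free throws; Cincinnati took 47 field goal attempts, Loyola 77):

```python
def ft_rate(fta, fga):
    """Free throw attempt rate: FTAs in proportion to FGAs."""
    return fta / fga

cincy = ft_rate(21, 47)    # ≈ 0.45 (the post rounds this to 44%)
loyola = ft_rate(21, 77)   # ≈ 0.27
```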

Once again, the wide disparity in the number of field goal attempts between Loyola and Cincinnati makes the comparison irrelevant. Both teams attempted the same number of free throws and made the same number, 14 for 21. Cincinnati’s higher rating had nothing to do with the outcome of the game.

Next to shooting percentage, the most important indicator of offensive efficiency and likelihood of victory in the NBA and college ranks is the TO% or turnover rate. (On the high school level, many coaches believe that it is the most critical of the Four Factors.) 

The TO% tells us how well a team protected the ball by determining the percentage of its possessions that resulted in a turnover. Its importance makes sense because if you lose the ball, you may end up with an “empty possession” – one in which you lost the opportunity to score. 

In the Loyola – Cincinnati tilt, it is the only one of the Four Factors that demonstrates actual efficiency. Loyola turned the ball over 6 times in 69 possessions – 9% of the time – compared to Cincinnati’s horrendous 20 times, or 30%. But the reality was even worse.

That’s because two of Loyola’s turnovers occurred during possessions in which they rebounded missed FGAs and then lost the ball. In other words, these possessions were not “empty” as they attempted a field goal and only lost the ball after securing the offensive rebound. In Cincinnati’s case, all twenty turnovers resulted in empty possessions.  

Perhaps a clearer way to look at it is to compare the number of “true” or pure empty possessions. From this perspective, Loyola had chances to score in 94% of its possessions, not the 91% suggested by the TO% formula. Conversely, all twenty of Cincy’s turnovers resulted in an empty possession so they had chances to score in only 70% of their possessions.   
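The distinction between the standard TO% and the “true empty possession” view can be sketched directly from the numbers above:

```python
def to_pct(turnovers, possessions):
    """Turnover rate: share of possessions that ended in a turnover."""
    return turnovers / possessions

def scoring_chance_pct(possessions, empty):
    """Share of possessions in which the team got at least one shot off."""
    return (possessions - empty) / possessions

loyola_to = to_pct(6, 69)                    # ≈ 0.09
loyola_chances = scoring_chance_pct(69, 4)   # ≈ 0.94 (only 4 truly empty)
cincy_chances = scoring_chance_pct(67, 20)   # ≈ 0.70 (all 20 TOs empty)
```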

The 1963 NCAA championship is a dramatic example of how advanced analytics often misses the significant impact of operational tempo on the outcome of a game. 

Ironically, though Oliver and his fellow practitioners trace the origins of their work to Dean Smith’s “possession evaluation” in the 1950s, the underlying culprit may lie in how they departed from Smith’s definition of a possession and, more importantly, from his reasons for evaluating possessions in the first place.

Smith was interested in measuring his team’s efficiency regardless of the pace of the game. Comparing his team with his opponent did not require equalizing the number of each team’s possessions but looking, instead, at the per-possession performance of each team, given its particular pace or tempo. In other words, he didn’t strip tempo out of his calculations, he included it. 

In Smith’s world, a possession ended and a new one began as soon as a team lost “uninterrupted control of the basketball.” Throw the ball out of bounds and your possession ended; shoot the ball and whether you made it or missed it, your possession ended. And if you regained control of a missed free throw or field goal with an offensive rebound, you started a new possession, a new opportunity to score.

In Oliver’s world, an offensive rebound continues the same possession. You don’t get credit for an additional one. That’s why by game’s end both teams will tally the same or close to the same number of possessions and why modern analytics can perform a “tempo-free” comparison of each team’s efficiency.  
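The two counting conventions can be sketched side by side. The 0.44 weight is the conventional estimate of the share of free throw attempts that end a possession; since the counts in this post come from charting the game directly, the Oliver-style formula is only an estimate and won’t reproduce them exactly:

```python
def smith_possessions(fga, turnovers, ft_trips):
    """Smith: every shot attempt or turnover ends a possession, and an
    offensive rebound starts a new one, so ORBs are never subtracted.
    ft_trips = trips to the line that end a possession."""
    return fga + turnovers + ft_trips

def oliver_possessions(fga, orb, turnovers, fta, ft_weight=0.44):
    """Oliver: an offensive rebound continues the same possession, so
    ORBs are subtracted; ft_weight estimates possession-ending FT trips."""
    return fga - orb + turnovers + ft_weight * fta

# The same hypothetical stat line yields two very different counts:
smith = smith_possessions(fga=70, turnovers=10, ft_trips=8)        # 88
oliver = oliver_possessions(fga=70, orb=15, turnovers=10, fta=20)  # 73.8
```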

Smith wasn’t interested in making an exacting mathematical comparison of efficiency. Regardless of the game’s total number of possessions, whether the overall pace had been fast or slow, or whether one team had generated greater or fewer possessions than its opponent, he asked the same two questions: how many possessions or scoring opportunities did we create and how well did we perform in each of our possessions? 

We know for certain that Dean Smith was in Freedom Hall the night Loyola beat Cincinnati for the national championship. We don’t know if he charted the game and ran the results through his efficiency formula, but if he did, here’s what it would have looked like compared to the numbers using Oliver’s formula.

The end results are the same; do the multiplication and Loyola wins by two, 60-58… but Smith’s method of counting possessions paints a clearer picture of the operational tempo of the game. 

Possession-by-possession, Loyola’s performance was woeful but in Smith’s view, they played at a faster pace, generating sixteen more possessions than Cincinnati. Couple Smith’s numbers with the game’s traditional stats and you get quick confirmation of what the average fan saw in the arena or on television that night: Loyola played terribly but shot more often, grabbed lots of rebounds when they missed, leading to more shots, and rarely turned the ball over. They created more chances to score than Cincinnati did.

Smith’s method reminds us that pace or tempo has always been defined by the number of possessions, but his simpler method of calculation – adding a team’s shot attempts, turnovers, and specific kinds of free throw attempts – comes closer to exposing the all-important but missing element of pace in today’s analytics: the passage of time. Consider the following:

Whose count gets us closer to the actual pace of the game? 

In 40 minutes of regulation play, Smith credits Loyola with 94 possessions to Cincy’s 83. Basically, every two minutes that passed off the game clock saw Loyola producing five scoring opportunities to Cincinnati’s four. When regulation time finally expired, the Ramblers had accumulated 11 more possessions and tied the game, pushing it into overtime.

Not only does Smith’s 70-year-old method more accurately reveal the game’s true speed and intensity, it suggests that the proverbial eye test is alive and well, and not easily dismissed by technology, algorithms, and today’s mathematical whiz kids – a theory we’ll explore in our next post. 

But… before we go, one more irony to explore, this one pleasantly surprising.

Two nights before the 1963 NCAA championship game, George Ireland was preparing for his semi-final game against Duke, the ACC champs, when he was interrupted by a knock on his hotel room door. Standing outside was North Carolina’s young assistant, who handed him a gift from UNC’s head coach, the legendary Frank McGuire. It was UNC’s scouting report on Duke, which had defeated the Tar Heels to secure its bid to the NCAA tournament.

The young assistant? Dean Smith.

A Hypothetical

Imagine two teams squaring off in a big game. Call them Blue and Red.

Team Blue is college basketball’s two-time defending national champion led by experienced veterans, three seniors and two juniors. They’re renowned for their deliberate pace, disciplined ball control, and for taking high percentage, quality shots. 

Team Red? 

The exact opposite. They start four juniors and a senior, have never won a national championship or even appeared in the NCAA tourney, and play a helter-skelter pressing and running game, averaging 92 points during the season.

Red gets off to a dreadful start, missing 13 of its first 14 shots, and 26 of 34 first half attempts. Though they’ve defended well and are only down 8, they head to the locker room with a meager 21 points, 25 fewer than their seasonal first half average.

Red fares no better in the opening minutes of the second half. With 12:28 left in the game, their prospects are dire. Blue is now ahead by 15 and Red’s best player, a consensus 1st team All-American, has scored but a single point.

Meanwhile, Blue is well on pace to match its seasonal margin of victory average of 17 points… and, most importantly, to win a third consecutive national championship.

By game’s end, Blue has dominated Red in virtually every statistical category. In fact, except for turning the ball over more than Red, they are a model of offensive efficiency, outpacing Red in three of the “four factors” that separate winners from losers – first defined by analyst Dean Oliver in 2004, popularized by Ken Pomeroy on his site kenpom.com, and frequently cited by college basketball broadcasters and sports writers ever since.

• Not only did Blue out-rebound Red, 47 to 41; on the offensive glass they snagged 55% of their misses, leading to additional scoring opportunities.

• From the charity stripe, they more than doubled Red’s “offensive free throw rate,” 44% to 27%. In other words, in proportion to their 47 field goal attempts, Blue got to the free throw line more frequently, with a chance to score a greater number of “easy” points than Red did.

• Most importantly, Blue significantly dominated Red in basketball’s key indicator of offensive efficiency and likely victory – “effective field goal percentage” – converting 47% of their field goal attempts to Red’s dismal 30%. 

And yet… Blue lost the game.

By now, of course, you’ve likely guessed that Blue versus Red is not an imaginary game but a real one. The actual contest took place sixty seasons ago when Loyola Chicago overcame a 15-point deficit to beat the University of Cincinnati in overtime and capture the 1963 NCAA championship. 

The contest is frequently ranked as one of the best tournament finals of all time and was celebrated during last season’s Final Four in a CBS Sports Network / Paramount+ documentary called The Loyola Project.

The contest pitted two extremes: the nation’s highest scoring team versus the team that allowed the least. 

Loyola was noted for its “iron-man” approach, wearing ankle weights in practice to improve rebounding and using baskets outfitted with special rim inserts that reduced the cylinder by two inches to sharpen their shooting. They relied on balanced scoring with all five starters averaging in double figures and high scoring runs when they would catch fire and bury their opponents in an avalanche of quick points. For example, in their semi-final game against Duke, they were ahead by only three, 74 – 71 at the 4:30 mark, but went on a final run, outscoring the ACC champs 20 – 4 and winning the game by 19. Moreover, throughout the season they seldom substituted, the five starters accounting for 95% of the team’s total scoring. In the national championship game with Cincinnati, Loyola’s starting five played the entire 40 minutes of regulation followed by the 5-minute overtime period. Not a single substitution in 45 minutes of play.

While Loyola’s coach, George Ireland, liked to run and score, Cincy’s Ed Jucker preferred a slower, more deliberate pace that he honed over five seasons at the helm, winning 80% of his games and capturing two national championships in three attempts. When he took over the head job in 1960, he replaced the Bearcats’ run-and-gun offense led by the legendary Oscar Robertson with a half court attack that stressed high percentage shots. He designed an offense that operated from the center of the cylinder to the foul line, an area circumscribed by an arc of 13 feet, 9 inches. He persuaded his best shooter, All-American forward Ron Bonham, “that 15 well-selected shots could be as persuasive as 25 random ones.”

Cincinnati entered the game against Loyola ranked #1 in the nation, averaging 70 points while holding its opponents to only 53, and relying on Ron Bonham to carry the offensive load. Over the course of the season, Bonham accounted for 28% of the team’s field goal attempts and makes, and 30% of its scoring… just as Jucker wanted.

To be sure, compared to its seasonal averages, Cincy fell short, but the Ramblers’ performance was absolutely dreadful.

Not only from the perspective of basketball’s traditional stats, but as we have seen, dismal in terms of the game’s modern efficiency stats as well.

How then, did Loyola win? 

And why isn’t their victory revealed in the statistical data? How does a team get out-rebounded, shoot an abysmal 30% from the field, and score a whopping 32 points below its seasonal average, and yet defeat an experienced team that played a methodical, fairly efficient game?

The answer is epitomized in the performance of one player, Cincy’s All-American forward, Ron Bonham.

Ironically, his game stats tell a very positive story. He not only carried the offensive load as Jucker prescribed, he out-performed his seasonal averages in every category, sometimes in dramatic fashion, accounting for 34% of the team’s field goal attempts, 36% of its baskets, and 38% of Cincinnati’s overall scoring. He did everything asked of him and more.

But Bonham’s statistical prowess hides the significance of what actually happened. 

He generated all of his offensive firepower in the first 26:43 of play. After making four of his first six shots in the opening minutes of the second half, he never took another shot.

A total of 18:17 ticked off the game clock without a Bonham field goal attempt – 13:17 in regulation, plus 5 minutes in overtime.

Not a single attempt. Zero. Nada.

To put a fine point on it, consider the following: Ron Bonham, arguably the best, most efficient and productive player on the floor that night, effectively stopped playing.

Over the course of Cincy’s first 43 possessions, Bonham attempted 16 field goals and made 8 of them. But during the team’s final 24 possessions, 19 in regulation, another 5 in overtime, none. He touched the ball only five times. 

In effect, for nearly half of the 45-minute game, Ron Bonham was not an offensive threat. 

And his teammates?

They fared no better, taking only three shots and scoring a single basket in the final 13:17 of regulation, echoing Bonham’s withdrawal as an offensive threat.

In the opening minutes of the second half, they had gone on a tear, making four of five shots and joining Bonham in an 8-for-11, 73% break-out run… and a 15-point lead.

But, then, under orders from the bench, their guns fell silent.

Cincinnati had begun to foul, three of its starters – Wilson, Thacker and Yates – each tagged with four. And Loyola’s pestering full-court defense was disrupting and rushing Cincy’s methodical attack, creating uncharacteristic turnovers. And so, to preserve his starters, slow the pace, and protect his lead, Ed Jucker ordered his team to take the air out of the ball. There was no shot clock in 1963 so each Bearcat possession became a game of keep-away. 

Cincinnati was no longer playing to score and win, but playing to avoid losing.

And, Loyola? 

In an evening of horrendous shooting, they had no choice. To cut the Cincinnati lead, they had to play frenetic defense, force turnovers, grab offensive rebounds, and above all, keep firing.

Here are several illustrations charting Cincy and Loyola’s possessions in specific time periods. Note in particular the yellow highlighted boxes.

During the second half, Cincinnati converted 64% of its field goal attempts, doubling Loyola’s dismal 32%. In overtime, the national championship in the balance, Cincy did even better, shooting 67% from the field while Loyola managed only 33%. 

But look at the shot attempts for each team. 

In the game’s final 18:17, Bonham and his teammates shot the ball only 6 times for 3 baskets while Loyola generated 34 attempts and 12 baskets. In a game where free throws were even – both teams going 14 for 21 – that’s a swing of 18 points.

Yes, by the game’s end, Loyola shot only 30% from the field compared to Cincy’s highly efficient 47%, but the Ramblers fired 30 more times… thirty more opportunities to score, and they needed to make only one of them to win the game.

One out of thirty yields an infinitesimal shooting percentage of 3%. But, in this case, it marked the difference between victory and defeat, highlighting the factor that advanced analytics often misses in its quest to measure mathematical efficiency: volume and its impact on effectiveness.

More on this in my next post.