Turning Points

Here’s Part III in our “Charlie Coles” series on the evolution of offensive theory and best practice. 

If you’ve been following along, you’ve met Charlie, the colorful and highly successful coach of Miami of Ohio back in 1999, the last time the school played in the NCAA tourney prior to this past season’s appearance. You’ve also learned of his fascination with all things offense – its enduring principles and axioms, its elements and underlying structure. 

In his honor, Part II took us on a pseudo archaeological dig in which we explored “artifacts” of three different offenses we called DocMac, and Noah.

Recall that these offenses were developed and played in eras spanning roughly 100 years, yet were remarkably similar to one another, even as they responded to vastly different circumstances, rules, and “customs” particular to their own spot on basketball’s historic timeline. 

Continue reading…

Charlie Coles, Where Are You, Part II

In Part I, we met the late Charlie Coles, the highly successful and colorful coach of Miami of Ohio who, in addition to coaching, taught a highly entertaining course in basketball theory during his 16 years at the university. 

Charlie’s focus was offensive theory. In both the gym and classroom, he searched for the key that would unlock its secret code, revealing its enduring principles and best practices, its elements and underlying structure. 

With your help, I promised to continue his quest, but before we can assume Charlie’s role as code breaker, we’ve got to become archaeologists… we’ve got to excavate different offenses from different eras and study the artifacts we find. 

That’s the only way to unpack offense’s secret code.

Continue

Charlie Coles, Where Are You?

“Why are we starting with offense, Matt?” the Professor asked. 

“Because it’s more exciting?”

“No. How ‘bout you, Mr. Hatcher? What do you think?”

“Because you need offense to win?”

“Good try… but no. Offense is the hardest thing to teach. Why?”

“Because it’s hard to get people to play together?”

“Yesssss,” the Professor exclaimed. “It’s like saying, ‘I’m putting $50,000 in cash up here, now come up in an orderly fashion and get your share.’ That’s what a basketball coach does.”

The Professor, of course, was the late Charlie Coles, and that’s my paraphrase of a classroom discussion that sportswriter Grant Wahl recounted back in 2002 in his feature story on the legendary Miami of Ohio coach.

As they enter the NCAA tournament this week, Miami is the darling of college basketball’s mid-majors, winner of 31 games and ranked in the Associated Press top 25 for the first time since Coles prowled the RedHawks’ sideline 28 seasons ago in 1999, when they went 24 and 8 and made it to the NCAA regional semi-finals. With seven players averaging in double figures and shooting 35% or higher from 3-point range, Miami is currently the best shooting team in the nation, converting 52% of their field goal attempts and averaging 91 PPG.

Win or lose tonight in their First Four game against SMU, basketball junkie and offensive savant Charlie Coles would be very happy. 

When Wahl featured Coles’s college-credit course on basketball theory in Sports Illustrated, Charlie was the only remaining head coach among the nation’s D-1 schools who still taught such a class. Bob Knight used to teach a similar one at Indiana but had given it up, leaving Coles to carry on the tradition. So, for nine weeks in the early fall each year, Coles would convene a two-hour, twice-weekly seminar on all things basketball before his season got underway on the NCAA’s designated annual start date of October 15th.

He liked teaching and had been inspired by the football theory class he had once taken during his sophomore year at Miami, taught by a rookie head coach named Bo Schembechler, one in the long line of coaching luminaries the Oxford college had spawned, making it America’s celebrated “Cradle of Coaches.”

“Best teacher I’ve ever had,” Coles said. “He taught it so well, you ran to class every day.” 

After teaching and coaching in high school for nineteen years, followed by six years at Central Michigan University, Coles returned to his alma mater in 1996, coaching basketball and teaching his seminar until retirement in 2012.

“I’ve always enjoyed this,” he told Wahl. “One of the things about coaches now, the students hardly ever see them. They’re famous, but what does a coach do during the day other than coach his guys?”

To the delight of his students, Charlie was a man of enticing aphorisms, peppering his presentations with them and intriguing his listeners in the process.  

“Dribble right at the middle of a 1-3-1 zone.”  

“You can’t coach speed, but you can coach quickness.”

“Good offense is like a good shoe; it will fit immediately.”

“If you keep the ball in the middle of the floor, the defense can’t go help-side.”

“High post offenses create space behind the defense, leading to back cuts.”

Predictably, they were mostly about offense, dramatizing the starting point for Charlie’s seminar each season. 

“Offense is the hardest thing to teach… because it’s hard to get people to play together… that’s what a basketball coach does.”

The ball is round and has no sides, but to win, you’ve got to be able to play… on both sides. I suspect Charlie believed, as I do, that good defense is vitally important… it keeps you from losing, but offense wins championships.

He started with offense both in the gym and the classroom because it was “the hardest thing to teach”… and to learn. 

Good defense is a comparatively easier task. You play it with your heart and feet, aligning yourself in relationship to the ball, the man you’re guarding, and to the game’s true north, the basket. It doesn’t take long to master its basic requirements. In competition, you may be disadvantaged by physical limitation – size, strength, and quickness – or waning resolve as good defense demands tenacity over the course of a game, possession after possession. But the spatial requisites of good defense are relatively easy to grasp and execute, even for the inexperienced.

Offense, of course, is an entirely different animal. 

It’s complex, a cypher or secret language that can’t be understood without knowing the key that unlocks the code, revealing its principles and axioms, its elements and underlying structure.

In this regard, Charlie Coles was a basketball code breaker, searching for a key to unlock basketball’s enduring principles and best practices for his students and players. 

For example, Charlie noted that high post attacks create space behind the defense that can be exploited with back cuts if the defenders exert too much pressure on the perimeter.

But other offenses reverse this concept, expanding space above the defense to offset the pressure it can exert: by aligning their men close to the basket, they pack the defenders along the baseline, isolating them from one another and curbing their reaction time.

Is one approach more effective than the other? More efficient?

What are the axioms or principles that underwrite each approach? Are they the same but applied differently, or are they fundamentally at odds with one another, existing in contradiction? 

If they contradict one another yet are equally successful, does that mean that basketball’s so-called principles are negotiable… that there aren’t any genuine universals or absolutes governing the game? 

What, then, are the specific building blocks needed to construct an effective offense? Historically, how did offense evolve and under what conditions? 

These are the kinds of questions a basketball code breaker like Charlie Coles asks. 

In this post and several to follow, I’ll try to pick up where Charlie left off and continue his exploration. There will be some repetition as I’ve done bits and pieces of this in the past. For example: 

• a three-part series on the evolution of shooting;

• several in-depth pieces exploring the concepts of time and space and how they impact both offense and defense;

• the unintended consequences of particular rule changes both inside and outside the game that affected offensive theory;

• how players develop court sense;

• teaching a game of freelance.

This time, though, I want to expand our search, bringing the pieces together in a more comprehensive manner. 

We’ll get started in the next post, Part II of the Charlie Coles, Where Are You? series.

60/40 & An Occasional Championship

In our recent series of posts, we learned that modern analytics provides descriptors or language in the form of numbers to represent what happens in a basketball game. 

On the basis of those numbers, we’re able to evaluate or measure the quality of play, demonstrating how well or poorly a team performed compared to its opponent… and in ways often more revealing than what the final score alone might suggest. In fact, as the data accumulates over a series of games, it exposes a team’s tendencies in such fine detail that we can forecast or reasonably guess the likely outcome of its future games. 

But no matter how insightful, the numbers never tell the whole story. 

At best, the computations remain an approximation of reality — a mathematical reduction of the lived experience that sometimes misses or even distorts the larger context of how and why things happened the way they did.

In fact, in last week’s post, we argued that the old-fashioned “eye test” may work better than the math as it detects the nuances and context that the numbers often miss. 

Not convinced? Let’s take a different tack.

By total coincidence, it was sixty years ago, the same season Loyola Chicago defeated Cincinnati in the ’63 national championship we’ve been exploring, that I first learned what a school president said to his struggling athletic staff. It’s stuck with me ever since. 

“60/40 and an occasional championship.”

That’s what he told them. 

Play a competitive schedule, win 60% of your games and an occasional championship, and you’ve achieved athletic excellence. 

Run the history of college basketball through that prism and some interesting patterns emerge.

Start with a self-evident fact we seldom acknowledge: on any given night, 100% of the time, one team and its coach will lose. Every night of competition, half the participants lose. There aren’t any ties. There’s one winner and one loser, every game played. 

Last season, there were 6,159 basketball games played in Division I, pitting teams of varying ability under a variety of circumstances. Teams that had horrible shooting nights or lost their best players to injury, family deaths or tragic accidents; players whose girlfriends had dumped them; coaches whose careers were on the line; teams that shot the lights out and won their conference championship or a holiday tournament; Cinderellas who upset better squads in the NCAA tourney only to lose in the next round; teams and coaches who fought their way to the Final Four.

Whatever the myriad of reasons, though, 50% of the teams and coaches who competed in those games found themselves on the losing end… 6,159 times. 

So, if over the course of a career, you manage to win 60% or more of your games, you join a very small and unique club. And very often, it has little or nothing to do with analytic efficiency.

The popular website sports-reference.com tracks the performance of 490 college basketball teams, spanning 132 seasons from 1893 to the current season of 2023-24. Presently, the NCAA recognizes 363 of these colleges as Division I members: 351 of them are eligible for this season’s NCAA tournament; 11 are currently ineligible because they’re transitioning from Divisions II and III.

Of the 490 schools, 403 have competed in ten or more seasons. (I’m not including the current, incomplete season.) 

Only 81 of them have won 60% or more of their games. Here’s a breakdown based on the number of seasons in which they’ve played:

Surprised to see the following schools on the list?

And when we compare the 81 with their 322 competitors whose winning percentages fell below 60%, it looks like this:

Narrowing our selection to the 363 teams that competed last season, 2022-23, only 54 of them won 60% or more of their games. Every other school – 85% of them – either lost more games than they won, or split their victories with losses.

Winning is not easy. For most teams, it’s a crap shoot. Winning 60% or more of your games over a sustained period of time is extraordinary.

What about the “occasional championship”?  

I mentioned that we have historic data on 490 schools since the first official season of college competition 132 seasons ago. Over the years, the number of these schools eligible for invitations to the NCAA tournament and the number of actual participants has varied dramatically for a host of reasons. 

For example, during the tournament’s first year – 1938-39 – three different champions were crowned by three different associations: the NCAA, the NIT, and the NY Sports Writers. 161 schools were eligible for the NCAA tourney that year but only 8 received bids, representing eight geographic regions or “districts” that the NCAA had established. Villanova, Brown, Ohio State, Wake Forest, Texas, Oregon, Utah State, and Oklahoma were invited, while Long Island University, led by legendary coach and future novelist of the Chip Hilton series Clair Bee, beat out five other schools in that year’s NIT. 

In the years that followed, the three post season tournaments eventually collapsed to two as the NY Sports Writers event fell by the wayside. The NIT continued to battle the NCAA for prominence even as the NCAA gradually tinkered with its brackets and increased the number of participants. 

In 1951, the NCAA tourney expanded to 16 teams and two seasons later, to 22. For the next two decades, the number of participants hovered between 22 and 25, and the NIT slowly declined in stature. By 1975, the NCAA had swelled to 32 teams and nine years later to 64. Finally, in 2011 the NCAA completed its evolution with 68 participants, regional seeding and pods to spread the talent evenly, and an 8-team play-in, or “First Four,” leading to the 64-team, single elimination extravaganza we have today.

Regardless of the number of participants and how the brackets were arranged over the years, the tournament has always produced a “final four” – four survivors of the single elimination competition who pair up in semi-final matches culminating in the championship game. 

Since that first tournament in 1938-39, there have been 84 Final Fours. (85 seasons in total but the 2019-20 tournament was cancelled because of the Covid pandemic). 

84 Final Fours means that there have been 336 available spots for the last weekend of competition, yet a very small number of schools – 101 to be exact – filled those spots. In fact, a mere ten of those schools account for 137 or 41% of the spots. 

Add five more to the list and you discover that 15 schools own 58 of the 84 championships and 168 or 50% of the Final Four appearances.

Then, mentally round out the list with the “next best” five performers and…

we arrive at 20 schools that account for 194 or 58% of the available 336 spots … and collectively have won an incredible 73% of the 84 possible championships.

When we shift our perspective from great teams to great coaches, the same pattern emerges. 

Beginning in 1895 and extending through 2022-23, our last complete season, there have been 3,794 head coaches in college basketball, ranging in tenure from one season to forty-eight. Phog Allen leads the pack with 48 years at the helm – all at Kansas – while 780 coaches served no more than one season.

If we reclassify this list by winning percentage, one-fourth of the coaches make the 60% and higher category.

But if we overlay their years of tenure and examine only those who coached ten or more seasons, the list narrows significantly. Only 31% emerge as members of our “60/40” club.

Focus on those who coached 20 or more years and the list holds no surprises. 

And, then, there are those who don’t make the top 20 but are pretty prominent coaches. Here’s a representative list:

Keep descending through the coaching ranks to those with many victories but not enough to merit “60/40” recognition and you find prominent names like these:

Finally, consider coaches who have dominated the Final Four. Since its inception in 1939, a small cohort of 20 men own nearly 60% of the championships and 40% of the appearances… all of them with career winning percentages of 60% or greater. Unsurprisingly, they align pretty closely with our list of frequent Final Four teams.

The mantra, “60/40 and an occasional championship,” is both revealing and compelling, demonstrating that the margin between consistent winning and losing is, indeed, very small. 

If over the decades, the same schools and coaches consistently out-performed the competition, then many of their victories necessarily occurred before today’s era of advanced analytics even took hold. The same schools and coaches were apparently doing something right in the years predating analytics, as well as after. 

What, then, is the value of advanced analytics?

Do analytics merely reflect or mirror the results of doing the “right things” or does the data identify strategies for others to emulate… or a bit of both?

• With or without analytics, why do so few schools and coaches reach the “60/40” plateau? What role does sheer talent play? Last season, three newcomers appeared in the Final Four —   San Diego State, Florida Atlantic, and Miami — yet none of them had loads of “recognized” talent according to the recruiting services. They were a mixture of older kids, transfers, and diamonds-in-the-rough that the blue blood programs had missed. Yet, in the end, the only true blue blood in the field took home the trophy. As Connecticut’s coach Dan Hurley said, “This isn’t that hard. I have three NBA players and we put the right pieces around them.” What role might analytics play in “managing” the talent that you do have? 

• Shortly before tipoff in last season’s SEC conference finals against Alabama, Texas A&M coach Buzz Williams talked about the need to contain Alabama’s fast-paced tempo: 86% of the time, they shot the ball in the first 12 seconds of their possessions. But how is the precision of this statistic helpful? Does it reveal anything “operative” as to how Texas A&M might respond beyond what a simple eye test would have suggested? Alabama plays fast. If Alabama fired in the first 10 seconds or the first 14 seconds of their possessions, 75% instead of 86% of the time, would it change Williams’ response? What is the point of diminishing returns in knowing such precise data?

• Speaking of Alabama, can strict adherence or even blind obedience to data hurt a team? Since his arrival in Tuscaloosa in 2019, Nate Oats has built a highly talented roster and taken them to the NCAA tournament three seasons in a row. From the beginning, he has enthusiastically preached a fast-paced, high scoring strategy that religiously ignores midrange jumpers in favor of more efficient 3-pointers and shots at the rim. Yet Alabama has been bounced unceremoniously from the tournament each year, including last March when they were the overall #1 seed. In ‘21, as a 2-seed, they lost to 11-seed UCLA; in ’22 as a 6-seed to 11-seed Notre Dame; and then last year, upset by 5-seed San Diego State. In those three losses they fired 39% of their shot attempts from 3-point range and shot a dismal 22.7%, including 3 for 27 last year. Is there a lesson here?

• Analyst Seth Partnow points out that generally speaking, the worst team in the NBA starts every game with a statistical baseline: 80 points, 25 rebounds, and 10 assists. In other words, if you’re good enough to play in the NBA, a team comprised of such players is going to start each game with this baseline in place. The worst team comprised of the worst players in the league is going to make some shots and free throws, probably enough to score at least 80 points, and pick up 25 rebounds and 10 assists in the process. What is the baseline for college basketball? And what are the marginal differences between the baseline and the game’s consistent winners?

These are just some of the questions I wish to explore in the weeks ahead. 

The Eye Test Still Works

As we saw in our last post, Speed Kills, modern analytics often misses the big picture.

Just like the data revealed in a traditional box score, today’s enhanced efficiency stats remain only an approximation of reality, a representation that never fully captures the entirety or whole of what takes place in a basketball game. 

Moreover, in its zeal to eliminate the bias of tempo from its representation, advanced analytics unintentionally hides or masks the operational pace of play – the relative rate of speed or intensity at which things occur during each team’s possessions. 

Instead, analytics harmonizes or levels out the mathematical differences in each team’s pace of play by counting the game’s possessions in a particular way. Because a shot attempt followed by an offensive rebound is tallied not as a new possession but as the continuation of the current one, each team ends up with the same or about the same number of possessions.

Team A advances up the floor and attempts to score, followed by Team B making its own attempt. In this manner, the teams “take turns,” alternating possessions until the game clock expires and one team has scored more points than the other. This makes it easy to measure the outcome of each possession, identifying which team got the most out of its respective turns with the ball.

For example, in a game of 130 possessions, 65 apiece, if Team A scored 80 points to Team B’s 70, Team A not only won the game; on a possession-by-possession basis, it also performed more efficiently, producing an average of 1.23 points each time it had the ball while Team B yielded slightly less, at 1.08. A suite of additional efficiency stats flows naturally from this statistical baseline, offering insight into Team A and B’s respective performances – offensive rebounding ratio, effective shooting percentage, turnover rate, and the like. 
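The two ideas above – a possession continues through an offensive rebound, and efficiency is simply points divided by possessions – can be sketched in a few lines of code. This is my own toy illustration; the event names are invented, not drawn from any real scoring feed.

```python
def count_possessions(events):
    """Count one team's possessions from its ordered event list.

    A possession ends on a made shot, a turnover, or a miss that is
    NOT immediately rebounded by the offense ('oreb'); a miss followed
    by an offensive rebound continues the current possession.
    """
    possessions = 0
    for i, ev in enumerate(events):
        if ev in ("fga_make", "turnover"):
            possessions += 1
        elif ev == "fga_miss" and (i + 1 == len(events) or events[i + 1] != "oreb"):
            possessions += 1
    return possessions

def points_per_possession(points, possessions):
    """Average points produced per turn with the ball."""
    return round(points / possessions, 2)

# Miss, offensive rebound, putback: tallied as ONE possession, not two.
print(count_possessions(["fga_miss", "oreb", "fga_make"]))   # → 1

# The example from the text: 65 possessions apiece.
print(points_per_possession(80, 65))   # Team A → 1.23
print(points_per_possession(70, 65))   # Team B → 1.08
```

Note that the rounding makes Team B’s figure 1.08, a hair higher than eyeballing 70/65 might suggest; the gap between the teams, 0.15 points per turn, is the whole efficiency story in miniature.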

In all, though, the flesh and blood, real or actual pace of the game is artificially constrained so that the faster or slower tempo of either team does not skew the mathematical outcome of the comparison. 

The fact that one team approaches the game in a risk-averse, slow and deliberate manner while its opponent gambles with a full-court, trapping defense, denies every passing lane in the half court, runs the ball up the floor to generate quicker shots, and when it misses, rebounds furiously to garner additional shot attempts, is ignored in the data. 

And yet, those stylistic differences often separate victory from defeat, at times rendering today’s efficiency stats irrelevant, if not meaningless.

We saw this in our exploration of the 1963 NCAA championship game when Loyola Chicago overcame a dismal 30% shooting performance and a 15-point deficit to win the national title. Even though both teams enjoyed roughly the same number of possessions and from an analytic standpoint, competed at the same rate of tempo, the Ramblers generated 30 more scoring opportunities than their “more efficient” opponent.

But here’s the rub. 

While today’s efficiency stats often mask the stylistic differences that distinguish a game’s true, operational pace, the human eye detects them immediately… perhaps not in fine detail, but it grasps the general gist of what is occurring in real time.

In the case of the Loyola – Cincinnati contest, a simple eye test revealed all you needed to know: one team stopped shooting while the other continued to fire away; one team committed turnovers while the other seldom lost the ball even though it played at a more frenetic pace.

Even casual fans sitting in Louisville’s Freedom Hall that night or watching the game on television could easily grasp what was happening. They didn’t need traditional stats, let alone today’s enhanced ones to comprehend that Loyola was struggling but fearlessly competing to win while Cincy was trying not to lose. Imagine a discussion between two fans sitting side-by-side in the arena:

“Loyola seems to be getting a lot of second shots… they’re not making many but they keep trying.”

“Yeh… and on defense they keep pressing. They’re frustrated and maybe a bit desperate but they’re not quitting.”

“Bonham’s playing a great game… seems to make every shot he takes… but I haven’t seen him take a shot in a long time… and what’s the deal with the Harkness kid? The game’s almost over and I don’t think he’s made a shot yet.”

“How many more times is Cincinnati going to throw the ball away?”

“Cincy may have started their stall too early… they’re playing the clock instead of playing Loyola and the Chicago kids are catching up.”

There’s a lesson here. The eye test still works.

Human beings are learning machines. Our senses, especially the eyes, process the world around us. To make sense of what we experience, we look for similarities, placing random, discrete observations into mental categories. We put “like” things together – shapes, sizes, causes, effects, events, etc. – seeking patterns or connections between them, and drawing conclusions or inferences about what we have seen. The process is inductive, moving from specific observations to general theories or broad concepts. That’s how we learn.

In basketball, when a team takes possession of the ball, there are really only five things that can happen. In other words, in the course of a game, every one of a team’s possessions will fall into one of the following general categories:

• Team A shoots and scores 

• Team A shoots and misses 

• Team A is fouled, inbounds the ball and starts again, or is awarded one or more free throws 

• Team A shoots and misses but rebounds the ball to continue the possession and get another chance to score 

• Team A turns the ball over, losing possession before it has a chance to score.

At the end of each, Team B takes its own turn with the ball and repeats one of the five categories. 
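The five categories above can be expressed as a tiny classifier. This is my own sketch – the event names and the tie-breaking order of the checks are assumptions – with letter labels matching the A-through-E shorthand used in the Loyola – Cincinnati breakdown later in this post.

```python
from collections import Counter

def possession_type(events):
    """Map one possession's ordered events to one of the five categories.

    Letters follow the shorthand used later in the post (e.g. B = single
    FGA & miss, D = multiple scoring opportunities, E = empty possession);
    the order of the checks below is my own tie-breaking assumption.
    """
    if "oreb" in events:
        return "D"   # miss + offensive rebound: extra chances, same possession
    if "turnover" in events:
        return "E"   # ball lost before a chance to score
    if "foul" in events:
        return "C"   # fouled: free throws or a fresh inbound
    if events == ["fga_make"]:
        return "A"   # shoots and scores
    if events == ["fga_miss"]:
        return "B"   # single attempt, no points
    return "?"       # anything these toy rules don't cover

# One invented possession of each type, tagged and counted.
game = [
    ["fga_make"],
    ["fga_miss"],
    ["foul", "ft_make", "ft_make"],
    ["fga_miss", "oreb", "fga_make"],
    ["turnover"],
]
print(Counter(possession_type(p) for p in game))
```

Real play-by-play is messier, of course – a possession can include a foul *and* an offensive rebound – which is exactly why a fixed check order has to be chosen up front.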

A spectator isn’t likely to record these possession types or even be conscious of them, but if you showed him the list and answered a few obvious questions – “Where do you place jump balls?” – he’d likely say, “Yeh… okay, I get it. That pretty much describes what happens in any game.”

He wouldn’t need access to stats or knowledge of modern analytics to know this. The categories are self-evident and as he experiences them in real time, he forms conclusions about the style, quality, and pace of play as the game unfolds. Later on, the game stats may confirm or qualify or in some way sharpen what he has seen, but they’ll seldom replace what his eyes have already told him.

The neat thing about film, of course, is that it extends the eye.

With the help of Loyola University, I got hold of an old VHS copy of the ’63 championship game and digitized it. Understandably, it was grainy, a bit jumpy in parts, the narration not always in sync with the video, yet very revealing. 

The first thing I did was to compose a play-by-play log of the game – a brief set of notes outlining what happened each time the ball changed possession. Basic things: who shot, was it made or missed, an errant pass and turnover, an offensive rebound and another field goal attempt, and the like. 

If the camera happened to settle on the game clock or the TV announcer noted the time, I recorded those in my play log. And by re-watching the film and noting the elapsed time on my computer, I was able to compute and note additional game times for particular exchanges that I felt were important. 
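The arithmetic behind those reconstructed clock times is simple enough to sketch. The anchor values below are invented for illustration, and the sketch assumes the game clock ran continuously over the span – that is, it only holds between stoppages.

```python
def estimate_clock(anchor_video_s, anchor_clock_s, video_s):
    """Estimate seconds remaining on the game clock at a given video time,
    from one anchor moment where both times were known.

    Assumes the clock ran continuously over the span (no whistles,
    timeouts, or made baskets stopping it in between).
    """
    return anchor_clock_s - (video_s - anchor_video_s)

def mmss(seconds):
    """Format seconds remaining as the familiar m:ss game-clock string."""
    return f"{seconds // 60}:{seconds % 60:02d}"

# Invented anchor: at 312 s of video, the camera showed 8:40 (520 s) left.
# Eighteen seconds of video later, the clock should read about 8:22.
print(mmss(estimate_clock(312, 520, 330)))   # → 8:22
```

In practice, each whistle resets the anchor, so the estimates are only as good as the nearest on-screen glimpse of the clock.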

The ability to replay portions of the game as often as I wanted meant that I could keep refining my play log until I was sure that I had an accurate account of the game. Unlike the sportswriters and sports information directors who surely created their own logs from the sideline sixty years ago, I had an opportunity to sharpen what my eyes were telling me. How many tips did that kid just attempt? Did someone else get their hands on the ball or does the kid get credit for each of them?

Initially, aside from possession counts for each team, I didn’t attempt any kind of statistical analysis or make any value judgements about what I was seeing and recording. Only after I had transferred my log to an Excel spreadsheet did I run counts of the typical data points found in a traditional box score – the number of field goal attempts and makes, offensive and defensive rebounds, turnovers, and the like.

I quickly discovered that the official statistical record of the game found on the NCAA website and sports-reference.com, and widely reported in numerous newspapers and several books over the last sixty years is flawed. 

And, then the plot thickened.

Armed with my play-by-play in Excel format, I “tagged” each possession with one of the five “possession types” described above and ran simple counts to see if such groupings or categories might provide insight to the game’s outcome. 

Keep in mind that there’s nothing special about these possession categories. As noted above, they’re just simple groupings of “like things” that comprise a basketball game, organizing what the eye has naturally seen: a shot is taken and made, a shot is taken and missed, and so forth. There’s no deep dive into math… no attempt to calculate and compare one team’s “efficiency” score with its opponent. Just simple counts of key actions that occurred in each category. 

Effectively, instead of reviewing a game’s possessions in specific time periods – quarters or halves – the five “type” categories provide a way to reexamine the game based on the similarity of actions that made up each possession. In either case, the totals at the bottom of each chart are the same… exactly what you’d expect to find in an old-fashioned box score.  
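As a sketch of those “simple counts,” here is how the tagged spreadsheet rows might be totaled per category. The rows below are invented stand-ins for the real play-by-play log, not the actual game figures.

```python
from collections import defaultdict

def category_totals(tagged_possessions):
    """Sum simple per-possession counts within each possession type.

    tagged_possessions: (type_letter, stats) pairs, one per possession,
    where stats is a dict of plain counts (fga, points, oreb, turnover...).
    """
    totals = defaultdict(lambda: defaultdict(int))
    for ptype, stats in tagged_possessions:
        totals[ptype]["possessions"] += 1
        for key, n in stats.items():
            totals[ptype][key] += n
    return totals

# Invented rows standing in for the tagged log: two single-shot misses (B),
# one multi-chance possession (D), one empty possession (E).
log = [
    ("B", {"fga": 1, "points": 0}),
    ("B", {"fga": 1, "points": 0}),
    ("D", {"fga": 3, "points": 2, "oreb": 2}),
    ("E", {"fga": 0, "points": 0, "turnover": 1}),
]
totals = category_totals(log)
print(totals["B"]["possessions"], totals["D"]["fga"], totals["E"]["turnover"])
```

Summing each category’s columns back together recovers the familiar box-score totals, which is exactly the point made above: same numbers, regrouped by similarity of action instead of by time period.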

Here’s Cincinnati’s breakdown followed by Loyola’s.

Two categories – B and E – jump out immediately. 

• B: Single FGA & Miss: Loyola had 22 possessions in which they attempted a single field goal and missed, while Cincy had only 9. In other words, 32% of Loyola’s possessions generated a shot attempt, but no points. This category is indicative of Loyola’s inefficiency throughout the game. Lots of shot attempts but few baskets. 

• E: Empty Possessions: 20 times or 30% of its 67 possessions, Cincinnati threw the ball away and with it, any chance to score. Clearly, Loyola’s pesky defense helped compensate for the team’s horrible shooting night. Moreover, Cincy’s turnovers immediately triggered Loyola possessions in which they attempted 21 field goals, 7 free throws, and scored 17 points. Inefficient shooting, to be sure, but numerous scoring opportunities that Cincy gave away. 

But most telling of all is category D: Multiple Scoring Opportunities. This possession type features an initial shot attempt followed by an offensive rebound, leading to additional scoring opportunities within the same possession. The differences here are startling. 

In 35% of its possessions, Loyola snagged 28 offensive rebounds and generated 42 field goal attempts, nearly equaling Cincinnati’s FGA totals for the entire game. Along with free throws, Loyola scored almost half its total points in just those 24 possessions. 

Coupled with category E: Empty Possessions, the counts in this possession type reveal the true operational pace of the game. They confirm what the eye immediately grasped: Loyola’s aggressive offensive rebounding and tenacious, disruptive defense produced numerous scoring opportunities that overcame a horrific, inefficient shooting performance. Loyola played at a pace that generated 30 more field goal attempts than Cincinnati and needed only to convert one of those “extra” attempts to win the game.

The eye test and the intuitive leaps it stimulates are often more revealing than statistical analysis because they provide important context.

The ’63 championship game is a dramatic example of the inherent limits of data. My aim in zeroing in on a single game is not to refute the potency of analytics, but to question our contemporary fascination with it and our sometimes rigid allegiance to it.

Over the course of a season or a series of games, advanced analytics can help us evaluate performance and set quantifiable team goals; it can provide valuable insights to help players improve “on the margins,” but the larger context it so often misses is important, too. Oftentimes, more important.

Imagine a single shot that misses and is rebounded by the defense. From an efficiency standpoint, the possession failed, but did the offensive scheme you designed produce the shot you wanted? Did the right player attempt the shot from the right location and under the right circumstances? If so, then your scheme was well-conceived even though the result was a miss and the possession deemed “inefficient.” A coach can’t dictate outcomes. All he can do is arrange the pieces intended to create the shot he desires; the shot goes in or it doesn’t, but a miss doesn’t necessarily mean his team “ran bad offense.” 

This post and the two that preceded it, as well as several more I’ll drop in the weeks ahead, are really about widening the lens… achieving a broader perspective.

Are there other ways to measure performance that may be more revealing than efficiency measurements and comparisons? If analytic data falls short of our expectations, does a solution lie elsewhere? Is there a different, more convincing barometer of performance and predictor of future success? 

Stay tuned.