Even without OMG/fx, there’s still plenty to learn

Do umpires call strikes differently depending on the count?

Rick Yeatts/Getty Images

So it’s been more than a week, but I’m finally getting caught up on everything I learned (or should have learned) at the MIT Sloan Sports Analytics Conference. I did write last week about the biggest baseball news at the conference, but skipped a number of other fascinating arguments and studies. For example, while I don’t have an abstract for you, Dan Rosenheck’s presentation about the importance of holding runners — in some cases, anyway — was mighty convincing. I hope he publishes this work soon.

But I do have three other abstracts, and the full papers can be found here. Here are those abstracts, along with my quick-and-dirty reactions.

First, a study of umpire behavior in potentially decisive counts by Etan Green and David Daniels:

Yes, I think "dramatic" is a fair assessment of this effect, which we might call Decision Aversion Disorder (yes, DAD). Essentially, the umpires, bless their hearts, want the games decided by the players. Which is lovely! Except it’s subconscious, and it’s ultimately both counter-competitive and time-consuming. There’s another question here that I wish someone would answer: What would baseball look like if the umpires actually called the strike zone as it exists, regardless of the situation? Yes, we might assume the games would become a bit quicker and shorter … but would there also be significantly more walks and strikeouts? It seems so. And if that’s the case … well, you know how I feel about walks and (especially) strikeouts. If accuracy means more of those … man, the universe is a scary and complicated place, isn’t it? Would you vote for a corrupt politician if you knew he would make the world a better place?
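To make the Green and Daniels finding concrete, here’s a minimal sketch of the kind of comparison they run, using invented tallies — none of these numbers come from their paper, which uses real pitch-tracking data. The idea: on taken pitches at the edge of the rulebook zone, compare the called-strike rate across counts.

```python
# Minimal sketch of a count-dependent strike-zone check.
# All numbers here are invented for illustration -- they are NOT
# from Green and Daniels' paper, which uses real pitch-tracking data.

# (called strikes, total taken pitches) on the edge of the rulebook zone:
edge_pitches = {
    "3-0": (180, 300),  # hitter's count -- does the zone expand?
    "0-2": (60, 300),   # pitcher's count -- does the zone shrink?
    "1-1": (120, 300),  # near-neutral count as a baseline
}

for count, (strikes, total) in edge_pitches.items():
    print(f"{count}: called-strike rate on edge pitches = {strikes / total:.1%}")
```

If umpires called the zone the same way regardless of count, those three rates would be roughly equal; a large gap between the 3-0 and 0-2 rates is the DAD effect in miniature.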

Next, Martin Kleinbard’s look at the relationship between payroll and performance:

Contrary to popular belief, payroll’s explanatory value on wins is currently at a near-all-time low, in spite of rising payroll inequality. How is this possible? In a word: youth. Pre free-agency-eligible players continue to outperform their elder, more expensive peers at a staggering rate. With an increasingly large percentage of the best players not eligible for purchase on the open market, the payroll-wins relationship continues to erode.


It’s impossible to know for certain whether this trend of weakening "win-buying" ability will continue, but increasingly stringent penalties for performance-enhancing drugs — widely assumed to offer greater benefits to older players — could help maintain the youth dominance effect for the foreseeable future. While today’s headlines may speak of baseball’s "haves" and "have nots" in terms of financial clout, they may need some revision for a future in which the reigning currency is not money but youth.
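For readers wondering what "payroll’s explanatory value on wins" actually measures: it’s essentially the R² of a simple payroll-on-wins regression, computed season by season. Here’s a self-contained sketch with a made-up six-team league — the payroll and win figures are invented for illustration, not Kleinbard’s data.

```python
# Sketch of "explanatory value": R^2 of a simple linear fit of wins on payroll.
# The payroll/wins figures below are invented, NOT from Kleinbard's study.

def r_squared(xs, ys):
    """R^2 of a simple linear fit of ys on xs (equals corr(x, y) squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

# Hypothetical league: payrolls (in $MM) and wins for six teams in one season.
payrolls = [230, 180, 150, 110, 80, 60]
wins     = [ 92,  85,  88,  79,  84,  70]

print(f"R^2 of wins on payroll = {r_squared(payrolls, wins):.2f}")
```

Kleinbard’s claim, restated in these terms, is that this R² — however impressive it looked a decade ago — has sunk toward an all-time low, because so much of the league’s production now comes from cheap, pre-free-agency players whose salaries barely register in the payroll column.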

Yes, this is heartening. Unless your favorite team is the Yankees or the Dodgers or the Angels or the Phillies, you’re probably not a big fan of win-buying. And in the short term, I’m confident that these results will hold up. In the medium term, though? Say, 2017-2020? I’m less confident. For two reasons. One, the new crazy money might be crazy enough for the haves to sign nearly all the best Cuban and Japanese players, and in fact we’re already seeing that. Two, while the ability to sign Dominican and Venezuelan players is supposedly constrained by financial penalties, there’s really no good reason for the Dodgers and the Yankees to give a tinker’s damn about those penalties. In the medium term, the only things likely to hold back the haves are stupidity and arrogance. Which will be enough in some cases, and not enough in others. In the long term, baseball will probably have to revisit the issue of massive income disparities. Because it’s not good for any of us if the have-nots haven’t a fighting chance.

Finally, Gartheeban Ganeshapillai and John Guttag conclude that managers aren’t going to their bullpen early enough in games:

The results suggest that using our model would frequently lead to different decisions late in games than those made by major league managers. From the 5th inning on in close games, for those games in which a manager left a pitcher in that our model would have removed, the pitcher ended up surrendering at least one run in that inning 60% of the time (compared to 43% overall).

We look at the predictions for the Red Sox in the 2013 postseason. There were 96 innings pitched by Red Sox starters, of which 33 were beyond the 4th inning. In 24 of those innings our model would have agreed with the manager to keep the starter in. The starter ended up giving up a run in 3 (12.5%) of those innings. There were 9 innings where the manager kept the starter in, but our model wouldn’t have, and the starter ended up giving up a run in 5 (55%) of those innings.
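The arithmetic in that Red Sox example is easy to verify yourself — here’s the check, using only the inning and run counts quoted above:

```python
# Checking the quoted Red Sox numbers from the Ganeshapillai/Guttag abstract.
agreed_innings, agreed_runs = 24, 3        # model agreed: keep the starter in
disagreed_innings, disagreed_runs = 9, 5   # model disagreed: would have pulled him

# 3/24 = 12.5%; 5/9 = 55.6% (the paper rounds down to 55%).
print(f"Model agreed, starter stayed:    {agreed_runs / agreed_innings:.1%} of innings scored on")
print(f"Model disagreed, starter stayed: {disagreed_runs / disagreed_innings:.1%} of innings scored on")
```

So in the small sample where the model and John Farrell disagreed, the starter got dinged more than four times as often as in the innings where they agreed.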

These results shouldn’t be surprising, because we know that managers are reluctant to remove starting pitchers before the fifth inning, for at least two reasons: a) they want their pitchers to qualify for the win, and b) they don’t want to start blowing through their bullpen so early. Will we reach a point at which these factors will become significantly less important? Almost certainly. For one thing, wins are becoming less and less relevant with each passing season. And for another, there will be more and more pressure to expand bullpens, either by liberalizing the transaction rules or simply expanding the rosters. Ultimately, I think the only balancing factor might be baseball’s concerns about game times; more relief pitchers means more pitching changes which means longer (and slower) games which means more complaints from jerks like me.