Strike that: Stats show aces don't get neighborhood calls

Stats show MLB umpires don't give favorable calls to 'aces.'


As long as there are humans in charge of the strike zone, there are going to be inconsistencies. And as long as there are inconsistencies, there are going to be suspicions of bias.

Sometimes these suspicions are specific and a little paranoid: there are fans who believe umpires are biased against their favorite teams. Other times the paranoia is general and accepted without any hard proof. For example, there's a common belief that pitchers considered aces get a bigger strike zone, that they're given the benefit of the doubt around the borders, and that this is just a part of the game, nothing to be done about it, really.

But is this really a part of the game?

We can accept it, or we can investigate it, and we might as well investigate it before we decide whether or not to accept it. This wouldn't have been an option in the '90s, when people believed Greg Maddux and Tom Glavine were getting calls. But we've had PITCHf/x data for the past several years, and we can make use of it toward this end. What strike zones do aces get, relative to the non-aces?

AROUND THE HORN

We'll cover the 2008-13 window. First, we must define an ace. This is subjective, but I'm going with at least five Wins Above Replacement, based on runs allowed. In other words, an ace-level season is any season within the six-year window worth at least five WAR as a starting pitcher. That gives me a sample of 99 pitcher-seasons, or about 17 a year, which sounds fine to me. These seasons will be compared against the other, inferior starting-pitcher seasons of at least 50 innings each.
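If you wanted to reproduce the sample split, it might look something like the sketch below. This is an illustration only: the `seasons` table and its column names (season, ra9_war, innings) are my assumptions, not an actual FanGraphs export.

```python
import pandas as pd

def split_sample(seasons: pd.DataFrame) -> tuple[pd.DataFrame, pd.DataFrame]:
    """Return (ace-level seasons, comparison seasons) within the 2008-13 window."""
    window = seasons[seasons["season"].between(2008, 2013)]
    aces = window[window["ra9_war"] >= 5.0]                                  # 5+ WAR (runs allowed) as a starter
    others = window[(window["ra9_war"] < 5.0) & (window["innings"] >= 50)]   # everyone else with 50+ IP
    return aces, others
```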

The next step is more complicated, involving a little math. For every pitcher-season, we have to figure out what kind of strike zone the pitcher actually worked with. It turns out this is pretty simple. Over on FanGraphs, we offer PITCHf/x-based plate-discipline data: you can see the rate of pitches thrown in the strike zone (Zone%), and you can see the rate of swings at pitches out of the strike zone (O-Swing%). FanGraphs also offers raw strike and pitch totals. From all this information, one can calculate an "expected strike" total.
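To make that concrete, here's a minimal sketch of the calculation as I've described it. The function and argument names are mine; the inputs are the total pitch count plus Zone% and O-Swing% from the plate-discipline leaderboards.

```python
def expected_strikes(pitches: float, zone_pct: float, o_swing_pct: float) -> float:
    """Strikes you'd expect against a rulebook zone: every in-zone pitch counts
    as a strike in this accounting, and an out-of-zone pitch counts only when
    the batter chases it."""
    in_zone = pitches * zone_pct                        # pitches inside the zone
    chased = pitches * (1.0 - zone_pct) * o_swing_pct   # out-of-zone swings
    return in_zone + chased

# Example: 3,000 pitches, 45% in the zone, 30% chase rate out of the zone
print(expected_strikes(3000, 0.45, 0.30))  # 1350 + 495 = 1845 expected strikes
```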

This can then be compared to the actual strike total. If a pitcher got more actual strikes than expected strikes, it can be said he pitched to a more favorable zone. If a pitcher got fewer actual strikes than expected strikes, it can be said he pitched to a less favorable zone. The theory here is that ace-level pitchers end up with more actual strikes than expected strikes, because they get more calls off the edges.

For every pitcher-season, I calculated the difference between actual strikes and expected strikes, per 200 innings. So what do we find from the resulting data?
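The per-200-inning comparison reduces to something like the following; the numbers in the sample call are made up purely for illustration.

```python
def extra_strikes_per_200_ip(actual: float, expected: float, innings: float) -> float:
    """Actual minus expected strikes, scaled to a 200-inning rate. Positive means
    the pitcher got more strikes than a neutral zone would predict."""
    return (actual - expected) * 200.0 / innings

# Made-up example: 5 extra strikes over 230 innings
print(extra_strikes_per_200_ip(2305, 2300, 230))  # ~4.3 extra strikes per 200 IP
```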

Ace-level pitchers: Plus-2 strikes per 200 innings

Non-ace-level pitchers: Minus-2 strikes per 200 innings

There is a difference. On average, per 200 innings, ace-level pitchers have gotten four more strikes than non-ace-level pitchers. But, even if you believe in the difference, we’re talking about four pitches, over 200 frames. Current estimates put the value of an extra strike around 0.14 runs, so you’re talking about a benefit of maybe half of one run, over an entire year. For all intents and purposes, that’s nothing.
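Using the figures above and the ~0.14 runs-per-strike estimate, the back-of-envelope math works out like this:

```python
RUNS_PER_EXTRA_STRIKE = 0.14       # rough value of turning one ball into one strike

ace_rate, non_ace_rate = 2, -2     # extra strikes per 200 innings, from the data above
gap = ace_rate - non_ace_rate      # 4 strikes per 200 innings
print(gap * RUNS_PER_EXTRA_STRIKE) # 0.56 -- roughly half a run over a full season
```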

What if the effect lags? What if it shows up more the next year, after the pitcher has established himself? Here’s data for the same pitchers, but for the season following:

Ace-level pitchers: Plus-6 strikes per 200 innings

Non-ace-level pitchers: Plus-0 strikes per 200 innings

Again, you can point out that there’s a difference, but it’s so small as to be rather insignificant. It takes roughly seven or eight extra strikes to be worth even one run, which means it takes something like 70-to-80 extra strikes to be worth roughly one win. That isn’t observed in the data we have. If aces have gotten the benefit of the doubt, the effect has been so small it’s basically impossible to measure.

And there are some issues with the approach, anyway. For one, we'd expect better seasons to involve pitching to better strike zones, because pitching to a better strike zone makes a better season more likely. And one thing we've learned from pitch-framing research is that pitchers with better command are easier to receive than pitchers with worse command, and pitchers with better command are more likely to be ace-level starters. So what might look like the benefit of the doubt could just be better command, and we barely see any effect in the first place. I guess I'm making excuses for an effect that doesn't exist.

There is some evidence that being a veteran earns a slight benefit of the doubt, but that could very easily be survivor bias (good pitchers last and worse pitchers don't), and the effect is small even at the extremes. As far as aces are concerned, Roy Halladay (17-10, 2.79 ERA, 1.126 WHIP) pitched to a favorable zone in 2009.

Adam Wainwright (19-8, 2.63 ERA, 1.210 WHIP) also pitched to a favorable zone that same year. But Felix Hernandez (19-5, 2.49 ERA, 1.135 WHIP), also in 2009, pitched to a terrible zone. Somehow, Jose Fernandez (12-6, 2.19 ERA, 0.979 WHIP) in 2013 pitched to an even worse zone. Some of the most favorable zones in recent history have belonged to Livan Hernandez, Jason Marquis and late-career Derek Lowe.

It isn't about being an ace. It's about command and, probably more importantly, about the catcher. A good-receiving catcher can make a favorable strike zone for anybody. A bad-receiving catcher can make life worse for anybody, aces included.

An umpire behind home plate isn’t making calls based on the identity of the pitcher on the mound. He’s making calls based on the accuracy of the pitch, and on the movement of the catcher, and, most significantly, on whether or not the pitch looks like it’s in the strike zone.

The alleged ace bias is imaginary.