Ah, the playoffs. The smell of fall in the air, the sight of towel waving and packed stadiums across the country, and the endless stream of pontification on social media. Are the Rays just not built for the postseason due to a lack of star power? Have the Dodgers been playoff slouches because they’re too dependent on their stars? Do the Astros know something about how Martín Maldonado manages a pitching staff that we don’t? Do we know more about how to manage a pitching staff than John Schneider? The list goes on.
Especially with all the new opportunities to weigh in afforded by the expanded playoff structure, it’s been harder than ever to home in on ideas worth pondering, let alone hypotheses that are falsifiable. But the other day, a xweet from MLB Network researcher Jessica Brand caught my eye:
> Of the 145 pitchers with 50.0+ innings pitched in the playoffs, their average ERA changes by -0.32. Genuinely surprised it’s to their favor too!
>
> Worried about outliers? Median further cements this point, at -0.36.
>
> 44 of 145 go up, 100 go down, 1 stays the same: Catfish Hunter. https://t.co/AcqKFqyJyA
>
> — Jessica Brand (@JessicaDBrand) October 9, 2023
Thanks to our handy new postseason leaderboards, this was indeed an interesting assertion that I could test. I limited my sample to hurlers who not only tossed at least 50 frames in the playoffs, but who also managed 500 innings in the regular season. There were 142 pitchers who met these criteria, and they averaged an ERA three tenths of a run lower in the playoffs. Per a paired-samples t-test, this result was statistically significant.
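If you’d like to poke at the numbers yourself, here’s a minimal sketch of that paired-samples t-test, assuming a hypothetical CSV export of the sample with one regular season ERA and one postseason ERA column per pitcher (the file and column names here are mine, not the leaderboard’s):

```python
# Minimal sketch, not the actual code behind the article: paired-samples t-test
# on each pitcher's regular season vs. postseason ERA. Names are hypothetical.
import pandas as pd
from scipy import stats

pitchers = pd.read_csv("qualified_pitchers.csv")  # one row per pitcher: reg_era, post_era

diff = pitchers["post_era"] - pitchers["reg_era"]
t_stat, p_value = stats.ttest_rel(pitchers["post_era"], pitchers["reg_era"])

print(f"Mean ERA change (PS - RS): {diff.mean():+.2f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```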
Before we assume that this means pitchers have the edge in the playoffs, or that pitchers are better at bearing down in big moments, there are a few caveats to address. First, especially given recent trends, we have to keep in mind how pitcher usage changes in the postseason. Consider the following, where RS indicates “regular season” and PS indicates “postseason”:
Pre- vs. Post-Wild Card Playoff Bump
| Era | RS BF/G | PS BF/G | RS ERA | PS ERA | RS FIP | PS FIP |
|---|---|---|---|---|---|---|
| Pre-WC | 22.5 | 26.0 | 3.19 | 2.71 | 3.34 | 3.17 |
| Post-WC | 22.7 | 20.4 | 3.67 | 3.49 | 3.72 | 3.97 |
Conveniently, there were 61 pitchers who pitched in the postseason exclusively before the Wild Card era began in 1995 and 61 who pitched in the postseason exclusively after. If limiting the number of batters faced is the driving force behind the postseason ERA improvement, we’d expect that improvement to be substantially more pronounced since the Wild Card era began. Yet, the playoff bump is actually more pronounced pre-Wild Card. In fact, FIP thinks there isn’t even any playoff bump at all post-Wild Card.
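Here’s roughly how that era split could be tabulated, under the same hypothetical export, with an assumed wc_era label marking careers that fall entirely before or entirely after 1995:

```python
# Sketch only: compare pre- and post-Wild Card groups on usage and run prevention.
# Assumes each row carries a hypothetical "wc_era" label ("pre_wc" or "post_wc").
import pandas as pd

pitchers = pd.read_csv("qualified_pitchers.csv")
cols = ["reg_bf_g", "post_bf_g", "reg_era", "post_era", "reg_fip", "post_fip"]

era_summary = pitchers.groupby("wc_era")[cols].mean().round(2)
print(era_summary)
```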
After removing outliers (changes in batters faced per appearance of more than 10), the difference between playoff and regular-season leash explains just under 15% of the variation in playoff bump, but in the opposite direction of what I had anticipated:
This is likely because quicker hooks are reserved for pitchers who aren’t performing. Yes, managers have shorter leashes in the playoffs these days, but those quicker hooks prevent struggling pitchers from regressing to the mean after a big inning; as a result, they’ve actually diminished the playoff bump.
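For anyone who wants to replicate that check, here’s a sketch of the leash-versus-bump regression, again with hypothetical column names, where the bump is postseason ERA minus regular season ERA and delta_bf is the change in batters faced per appearance:

```python
# Sketch only: regress playoff bump on the change in leash, after dropping
# the outliers described above (changes of more than 10 BF per appearance).
import pandas as pd
from scipy import stats

pitchers = pd.read_csv("qualified_pitchers.csv")
pitchers["bump"] = pitchers["post_era"] - pitchers["reg_era"]
pitchers["delta_bf"] = pitchers["post_bf_g"] - pitchers["reg_bf_g"]

sample = pitchers[pitchers["delta_bf"].abs() <= 10]
fit = stats.linregress(sample["delta_bf"], sample["bump"])

print(f"R^2 = {fit.rvalue ** 2:.3f}, slope = {fit.slope:+.3f}")
```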
What established it in the first place, then? What if pitchers are typically appearing in the playoffs during their primes, while their regular season stats also include the dawns and twilights of their careers? Perhaps, due to advancements in training that have served to prolong careers, this distinction has not been as powerful in recent years; along with the quicker hooks, that would explain the diminished bump in the Wild Card era.
To test this theory, I took separate weighted averages of each pitcher’s age (with an assist from Stathead) for their postseason and regular season careers, weighted by innings pitched. I found no relationship between the playoff bump and the difference between these averages; in other words, appearing in the playoffs more often during their primes had no impact on whether these pitchers performed better in the postseason or the regular season:
On the other hand, I also found little difference on average between playoff age and regular season age. So, even if there are some age-related effects that I’m missing, they wouldn’t really explain the playoff bump for this sample.
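For reference, this is roughly what the innings-weighted age comparison looks like, assuming a hypothetical table of pitcher-seasons with age, innings, and a postseason flag on each row:

```python
# Sketch only: innings-weighted average age, split by regular season vs. postseason.
import numpy as np
import pandas as pd

seasons = pd.read_csv("pitcher_seasons.csv")  # hypothetical per-season export

def ip_weighted_age(group: pd.DataFrame) -> float:
    # Weight each season's age by the innings pitched in that season
    return float(np.average(group["age"], weights=group["ip"]))

ages = (
    seasons.groupby(["pitcher_id", "is_postseason"])
    .apply(ip_weighted_age)
    .unstack("is_postseason")
    .rename(columns={False: "rs_age", True: "ps_age"})
)
ages["age_diff"] = ages["ps_age"] - ages["rs_age"]  # postseason age minus regular season age
```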
Back to the drawing board. Maybe this question is a lot simpler than it appeared at first glance. To determine if pitchers hold some kind of advantage over hitters in the playoffs, why not run the same procedure on hitters? This will also rule out the possibility that selection bias is playing a role here — maybe the pitchers in my sample are better in the playoffs because they’re better than the average pitcher, and they’ve stuck around long enough to meet my sampling criteria as a result.
So I compiled a dataset of the 192 hitters who made at least 150 postseason trips to the plate and 1,500 regular season ones. These players averaged a 99 wRC+ in the playoffs compared to a 116 mark in the regular season; it wasn’t just selection bias after all.
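The hitter-side check mirrors the pitcher one; here’s a sketch under assumed file and column names, with the sample pre-filtered to the plate appearance thresholds above:

```python
# Sketch only: the same paired comparison, this time for hitters' wRC+.
# Assumes a hypothetical CSV limited to 150+ postseason PA and 1,500+ regular season PA.
import pandas as pd
from scipy import stats

hitters = pd.read_csv("qualified_hitters.csv")  # columns: reg_wrc_plus, post_wrc_plus

diff = hitters["post_wrc_plus"] - hitters["reg_wrc_plus"]
t_stat, p_value = stats.ttest_rel(hitters["post_wrc_plus"], hitters["reg_wrc_plus"])

print(f"Mean wRC+ change (PS - RS): {diff.mean():+.1f}")
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
```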
What’s really going on here? For one, while the quicker hooks may be robbing starters of the opportunity to experience regression, those starters would probably still be more vulnerable to giving up runs than capable relievers even if the starters stayed in and that regression did arrive. Sure enough, since the Wild Card era began and those capable relievers and quicker hooks arrived en masse, the divide between postseason and regular season wRC+ has been 19 points (97 to 116); before, it was merely 14 (106 to 120).
Additionally, though this is harder to monitor, workload constraints are typically loosened come October. Since pitchers are subject to these constraints more than hitters are, owing to their propensity for injury, the pitching side stands to gain more when those limits are lifted. This should be more evident at the team level: effectiveness per appearance might dip with greater usage, but even a slightly diminished elite reliever will lower a team’s ERA when they’re turned to more often. Along those lines, at least anecdotally, pitchers seem less afraid to max out their velocity in the postseason. Whether that’s intentional or due to adrenaline, it should serve to widen the pitching/hitting postseason divide as well.
It can be tough to sift through the noise of grand postseason theories, but there is some truth to the maxim that the playoffs are a different beast from the regular season. It’s hard to say how much of the difference is a byproduct of tactical changes brought about by wary managers versus the nature of the tournament and the way players respond to it, but the pitching/hitting divide is further evidence that we should take the postseason/regular season gulf seriously regardless.