Monday, August 10, 2009

Why do we bother trying to judge trades as they happen?

There's a tendency in the media and among fans to evaluate trades after the fact. You wait a few years, see how things play out, and then decide which team got the most value.

On the other hand, many of us also like to judge a trade as it happens, some using a qualitative approach and others a quantitative one (e.g. my analysis of the Rolen trade). Yet some would say that we shouldn't evaluate a trade until after the fact, as there's just too much uncertainty in trying to forecast the future--and what matters are results, right?

I certainly agree that there are big error bars around quantitative trade analyses. But I still think there's a lot of merit in trying to judge trades as they happen, because it helps us recognize quality moves by GMs--regardless of the eventual outcome. You can make what, by all available information, is a great trade at the time, only to have bad luck (injuries, etc.) make it look like a loser in a post-hoc analysis. We should still recognize that the GM made a smart move, even if it falls apart later.

Paul DePodesta wrote a great article about this that I've been meaning to link to for a long time. Here's an excerpt:
As tough as a good process/bad outcome combination is, nothing compares to the bottom left: bad process/good outcome. This is the wolf in sheep's clothing that allows for one-time success but almost always cripples any chance of sustained success - the player hitting on 17 and getting a four. Here's the rub: it's incredibly difficult to look in the mirror after a victory, any victory, and admit that you were lucky. If you fail to make that admission, however, the bad process will continue and the good outcome that occurred once will elude you in the future. Quite frankly, this is one of the things that makes Billy Beane as good as he is. He is quick to notice good luck embedded in a good outcome, and he refuses to pat himself on the back for it.

4 comments:

  1. While I largely agree with what you're saying, I wonder if it devalues the intangible qualities that players bring if we primarily evaluate trades at the time they are made.

    For example, if it turns out that Scott Rolen does provide exceptional leadership qualities to the Reds, it will only show up quantitatively as W/L for the team. Even then it would be very difficult to draw a causal relationship.

    Where in the "judge the trade at the time" paradigm is there room for post-facto analysis like this: "It turned out Walt Jocketty was right, trading for Scott Rolen seemed to provide the Reds the leadership they were missing. In his first full season with the team, the Reds made the playoffs."

    Intangibles are an undeniable factor in whether a team wins or loses. As subjective as those factors are, the only possible way to even begin to judge them is as the trade plays out.

  2. What you're describing is basically the definition of confirmation bias: if the Reds win, then you conclude that a lack of leadership must have been why they weren't winning before.

    The problem with this approach, in my mind, is that there's no control. If your entire means of evaluating a trade is to wait and see what happens, knowing how much uncertainty there is, you're setting yourself up for a faulty conclusion (the "wolf in sheep's clothing" DePodesta mentioned).

    Let's say instead that Jocketty said that the reason the Reds weren't winning is that they didn't have enough players whose last names started with the letter "R." Of course, that's absurd. But the test you describe would also support that claim, or any other claim that Jocketty happened to make. It's just not a good test.

    I'm sure that leadership is important. But I don't know how important it is. Teams don't pay like it's particularly important, at least not compared to how much they spend on "tangibles." My opinion is that (almost) everything that has a causal impact on the baseball field is measurable. Some things are very hard to measure, but if something is having a significant impact on performance, it should be measurable, at least at some point (fielding positioning is very difficult, for example, but the GPS tracking stuff is going to be a huge step forward).

    The Cliff Floyd example I linked above is one take. And I can think of other ways to address the question of leadership. David Gassko's study in the Hardball Times Annual a few years back on managers was another nice and potentially related example--he showed that some managers do seem to have a significant positive (or negative) impact on individual player performance, above what they'd otherwise be expected to do. Dusty was among the best, which I believe. Might (emphasis on might) even justify his salary.
    -j

  3. I'm not saying that it should be the entire way you evaluate a trade. On the contrary, I'm saying that if you ONLY look at it ahead of time, and only using the available player statistics, you can't measure the effects of leadership at all.

    What if Scott Rolen's presence makes Joey Votto and Jay Bruce better hitters? What if his presence makes our next shortstop a better defender?

    Those variables (a) can't be measured by looking at Rolen's statistics, and (b) can't be known ahead of time.

    I agree that you can't just look post-facto and say, aha, it must have been the leadership. On the other hand, what if Rolen's leadership DOES in fact lead to the Reds winning five more games next year? How can we ever measure that?

    Greg Vaughn's leadership has been widely credited with making a big difference for the 1999 Reds (not just his 45 HR). Should that, even though it is after the fact, be ignored in deciding whether trading for him or letting him go was a good idea?

  4. Again, though, I don't see how you can assign credit to the leadership in a post-hoc manner. It's all well and good to say that Greg Vaughn's leadership was a big factor in 1999, but how do you know that? Some players on that team might think that, but it's an untested hypothesis at this point. Your sample size = 1.

    Here's one way that might work to measure the "leadership" impact of a player (or, at least, a player's impact on other players). Get projections for all of the focal player's teammates, on every team he played for, over time. Then assess how those players did versus their projections. You'd expect that great "leadership" would show up as an across-the-board boost (on average) to his teammates' performances. You'd need to do it over a lot of seasons, because you don't want to base it all on one semi-fluky season (like '99 with the Reds). There's a rough sketch of this calculation below the comments.

    So if we did this with Vaughn, we'd want to see that on most teams he played for, at least after his first few seasons, the average player outperformed his projection. That would give us an indication of how much of a positive impact he has on his teammates... which would be something we could factor into his value at the time of a trade or free agent contract.

    Something like what I described above is very similar to what Gassko did in his manager study. And it seemingly worked. Dusty did well by that measure of managing, even after ignoring Bonds' ridiculousness.
    -j

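To make the idea in comment 4 a bit more concrete, here is a minimal sketch of that teammate-vs.-projection comparison in Python. Everything in it is a hypothetical illustration: the teammate names, the projected and actual wOBA values, and the choice of wOBA itself are placeholders for whatever projection system and performance measure a real study would use, pulled for every teammate across many seasons.

```python
from statistics import mean

# Hypothetical records: (season, teammate, projected_wOBA, actual_wOBA).
# A real study would pull preseason projections and end-of-season results
# for every teammate of the focal player, over his whole career.
teammate_seasons = [
    (1999, "Teammate A", 0.330, 0.345),
    (1999, "Teammate B", 0.315, 0.322),
    (2000, "Teammate C", 0.340, 0.334),
    (2000, "Teammate D", 0.305, 0.318),
    # ... one row per teammate per season the focal player was on the roster
]

def season_deltas(records):
    """Average (actual - projected) performance of teammates, by season."""
    by_season = {}
    for season, _name, projected, actual in records:
        by_season.setdefault(season, []).append(actual - projected)
    return {season: mean(deltas) for season, deltas in sorted(by_season.items())}

def overall_delta(records):
    """Average teammate over/under-performance, all seasons pooled."""
    return mean(actual - projected for _s, _name, projected, actual in records)

if __name__ == "__main__":
    for season, delta in season_deltas(teammate_seasons).items():
        print(f"{season}: teammates averaged {delta:+.3f} wOBA vs. projection")
    print(f"Overall: {overall_delta(teammate_seasons):+.3f} wOBA vs. projection")
```

A consistently positive average delta across many seasons and rosters would be the kind of signal the comment describes; a single boost in one season (like '99) wouldn't separate leadership from luck, which is why the comment stresses doing this over a lot of seasons.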