Wednesday, April 17, 2013

Tax policy colloquium, week 11 - Sarah Lawsky's "Modeling Uncertainty in Tax Law"

Yesterday we discussed the above paper, which appeared in the Stanford Law Review recently.  (We don't entirely insist on current works in progress, so long as a paper and topic are fresh for our audience and the author to discuss.)  It concerns uncertainty aversion, aka ambiguity aversion, as distinct from risk aversion.

Ambiguity aversion can be illustrated via the Ellsberg Paradox (yes, that Daniel Ellsberg), based on lab experiments such as the following one (quoting from Lawsky's paper):

"Imagine two urns. Known Urn has 100 balls, 50 black and 50 red. Unknown Urn also has 100 balls, some red and some black, but the number of red and black balls, respectively, is unknown.

"First, Picker is told that he must bet on red, but he can choose which urn to draw from. Research shows that most people would choose to draw from Known Urn. That is, most people would prefer to bet that a red ball will be drawn from Known Urn, rather than to bet that a red ball will be drawn from Unknown Urn. If Picker prefers to draw from Known Urn, he is acting as if the probability of drawing a red ball from Known Urn is greater than the probability of drawing a red ball from Unknown Urn.

"Next, Picker is told he must bet on black. But again, he can choose which urn to draw from. And again, if Picker is like most people, he will prefer to draw from Known Urn. So Picker is acting as if the probability of drawing a black ball from Known Urn is greater than the probability of drawing a black ball from Unknown Urn."

The conclusion commonly drawn from this is that people hate uncertainty / ambiguity / second-order risk (i.e., not knowing what the probability is).  Note that Picker's paired choices cannot be reconciled with any single probability estimate for the Unknown Urn: since the red and black probabilities must sum to one, they cannot both be less than 50%.  The paper takes the view that this phenomenon, which I will call ambiguity aversion (although the paper calls it uncertainty aversion), is worth adding to the "expected utility" models that researchers use to model taxpayers' behavior with regard to compliance.
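
To make the inconsistency concrete, here is a minimal Python sketch (my own illustration, not the paper's) that searches for a single probability capable of rationalizing both of Picker's choices:

```python
# No single probability p for "red from Unknown Urn" can rationalize both choices:
# the bet-on-red choice implies p < 0.5; the bet-on-black choice implies
# 1 - p < 0.5, i.e., p > 0.5 -- and p cannot be both below and above 0.5.

p_red_known = 0.5  # Known Urn: 50 red balls out of 100

candidates = [i / 100 for i in range(101)]  # every p from 0.00 to 1.00
consistent = [p for p in candidates
              if p < p_red_known and (1 - p) < p_red_known]
print(consistent)  # [] -- no p satisfies both, hence the "paradox"
```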

Expected utility models treat the decision to comply versus cheat in tax filing as a risky financial investment.  Say I could reduce my tax bill by $100,000 by taking a very aggressive and dubious position that has only a 10% chance of being sustained if audited.  But there is only a 20% chance that I will be audited.  If I am audited and lose, suppose I will have to pay the tax, plus face a $300,000 penalty.  Since I lose only if I am both audited (a 20% chance) and unsuccessful on the merits (a 90% chance given audit), 82% of the time I save $100,000 through this strategy, and 18% of the time I lose $300,000 - an expected gain of $28,000.  Sounds like a winner, from the pure financial standpoint, unless I am very risk-averse.
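
For concreteness, here is a quick sketch of the example's arithmetic, along with a risk-aversion check under an assumed log utility function and a purely illustrative $2 million wealth level (neither assumption comes from the paper):

```python
from math import log

# Arithmetic of the aggressive-position gamble described above.
p_audit = 0.20            # chance of being audited
p_lose_if_audited = 0.90  # position sustained only 10% of the time if audited

p_lose = p_audit * p_lose_if_audited  # 0.18: audited AND position rejected
p_win = 1 - p_lose                    # 0.82: the tax saving sticks

gain, loss = 100_000, -300_000
print(p_win * gain + p_lose * loss)   # 28000.0: positive expected value

# Whether a risk-averse taxpayer still takes the gamble depends on preferences.
# Assuming log utility over final wealth, at an illustrative $2,000,000:
wealth = 2_000_000
eu_cheat = p_win * log(wealth + gain) + p_lose * log(wealth + loss)
eu_comply = log(wealth)
print(eu_cheat > eu_comply)  # True here; at lower wealth levels it can flip
```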

It's widely believed that, given very low U.S. audit rates and also fairly low penalties (leaving aside jail time for outright fraud), the expected utility model, applied in light of evidence concerning people's manifested risk aversion in other contexts, greatly under-predicts actual compliance.  In other words, people cheat (or take very dubious positions) less than they "should" according to the model.  One could view this either as evidence of "irrationally" cautious behavior by taxpayers, given what we take to be their attitudes towards risky investment, or else as reflecting that the "arguments" typically permitted in the models - which may be limited to liking positive financial payouts and disliking negative ones - are too restrictive.  The conclusion commonly drawn is that complying or not, and being super-aggressive or not, responds not just to financial incentives but is also, to a degree, a "consumer" act.  For example, people may like being honest and socially responsible, at least if they are not angry at the government or convinced that everyone else is cheating.

OK, all that is old hat.  Lawsky's paper doesn't deny that any of that may matter, but it takes the very different tack of examining how ambiguity aversion could be added to the standard expected utility compliance model, thereby potentially increasing its degree of realism and predictive accuracy.

In effect, in her paper, the ambiguity-averse are modeled as if they were "pessimists," who act as if, in Ellsberg's Unknown Urn, they are likely to lose whether betting on red or on black.  The paper recognizes that ambiguity aversion is not actually pessimism, which would imply lowering one's probability estimate rather than being nonplussed by one's inability to specify it.  More generally, one of the difficulties in modeling ambiguity aversion is that, as soon as one converts it into a range of probabilistic estimates (e.g., "I think there's a 60% chance that I have a 40% chance to win, and a 40% chance that I have a 70% chance"), it's actually just a more refined version of standard risk, with determinate odds and payouts given one's beliefs.
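
One common way to formalize the "pessimist" move is maxmin expected utility, under which the agent entertains a set of candidate probabilities and evaluates a bet by its worst case.  The sketch below is my own illustration of that idea (the probability range and stakes are hypothetical, not the paper's model), and it also shows the reduction point just described:

```python
# Two ways to handle not knowing the win probability p.

# (1) "Pessimist" (maxmin) evaluation, in the spirit of ambiguity aversion:
# entertain a range of candidate probabilities and score the bet by its
# worst case.  The range below is hypothetical, purely for illustration.
candidate_ps = [0.3, 0.4, 0.5, 0.6, 0.7]  # possible chances of winning $100
payoff_win, payoff_lose = 100, -100

def expected_payoff(p):
    return p * payoff_win + (1 - p) * payoff_lose

maxmin_value = min(expected_payoff(p) for p in candidate_ps)
print(maxmin_value)  # -40.0: the ambiguity-averse agent acts on the worst case

# (2) The reduction problem: attach probabilities to the candidate
# probabilities (second-order beliefs) and ambiguity collapses into risk.
# E.g., "60% chance my win probability is 40%; 40% chance it is 70%":
p_reduced = 0.6 * 0.4 + 0.4 * 0.7   # = 0.52, a single determinate probability
print(expected_payoff(p_reduced))   # 4.0 -- just a standard risky bet now
```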

The paper does a nice job of working with the problem, showing how ambiguity aversion might be modeled, and briefly discussing some possible implications - e.g., for IRS secrecy regarding its criteria for selecting audit targets, and perhaps for responding to the possibility that tax advisors, by reducing perceived ambiguity through the issuance of confident probability estimates, may encourage aggressive tax planning that has an unduly positive payoff by reason of the audit lottery.  But it remains unclear to me to what extent focusing on this rather amorphous phenomenon actually produces a significant analytical payoff that would merit adding it to formal compliance models.  (Which does not detract from the value of exploring and modeling it in this paper, if only to see where it might lead.)

I view ambiguity aversion, as in the Two Urns experiment, as reflecting a social instinct that, based on introspection, I surmise people may have.  Suppose you are playing poker with people who know the odds much better than you do.  You are likely to get reamed but good in next to no time.  Or suppose the Three Card Monte guy on the street corner tries to get you to bet on red or on black in the Unknown Urn.  You are going to be rightly suspicious.

More generally, I surmise that we may frequently be inclined to really dislike acting under ambiguity or uncertainty, especially when we fear or suspect that others may have better information than we do.  And this could be hardwired emotionally, not just a rational calculation.  E.g., people may be inclined to feel that they are ripe for exploitation when they know less rather than more about how to estimate the likely payoffs in a given situation, and they may feel unhappy if they learn that they had the odds wrong, or even merely surmise this from the fact that they have lost.

What does "better information" about the odds mean in the tax setting?  For audit odds, there may actually be a frequentist probability estimate given the IRS discriminant function, yielding a determinate likelihood of audit even if you do not know what it is.  For questions of legal interpretation, it's easier to think of the underlying probability in subjectivist terms.  E.g., suppose an expert says that you are "60% likely to win" in a unique fact setting.  The issue will only arise and be resolved once.  So, rather than attempting a frequentist account, we might view this as stating odds under which the expert asserts that, if risk-neutral, he would be equally willing to bet either way.

In a frequentist setting, it's easy to say what it means to have worse rather than better information.  Someone could count the balls in the Unknown Urn, and perhaps the Picker suspects that the Asker has done so.  In the subjectivist setting, one might think instead in terms of people whom one considers better versus worse at gauging the odds.  For example, you might feel more confident about the accuracy of the stated odds if the #1 tax lawyer in New York said that you were 60% likely to win than if this prediction came from a college student who had a summer job at H&R Block.

Obviously, rich individuals and big corporations are highly likely to be able to get better rather than worse estimates, relative to the universe that's potentially available.  Thus, if the ambiguity parameter is significant (and potentially usable) to begin with, the well-advised appear unlikely to be its optimal targets.
