
Thursday, March 23, 2017

Project idea: Incentives, measurement, and fairness

Among the problems that I find most fascinating and most frustrating is that of people who want to systematize education and incentives.  We all agree that what we really want is meaningful, intensive, in-depth discussion of important topics, creative assignments, and lots of attention to students as individuals.  We all agree that this is hard to systematize and measure, even if some approaches are clearly less bad than others.  We all agree that the things that are easiest to measure are not quite what we desire.  We all agree that the more you incentivize something, the more people will focus on hitting the metric rather than doing what the metric is supposed to be a proxy for (sometimes called Campbell's Law).  And we all agree that without incentives of SOME sort people will get lazy.  So we have a hard problem.

My thought is that if a simple metric isn't TOO distant from what you really want, and if the competition isn't TOO intense, then being measured will keep people from being lazy.  Because you still have some flexibility (the competition isn't so intense as to push you into a single-minded focus on that metric), and because you yourself actually value the thing that REALLY matters (and the metric isn't completely decoupled from it), you'll show up to work and split your effort between hitting the metric and doing the more meaningful thing that everyone REALLY values.

But if the competition is more intense then you have to hit that number no matter what.

As to the people measuring you, on some level they know that the simple metric is flawed, but they face two other pressures:
1) Measuring something closer to what they want would be more expensive.
2) Because those measurements are of limited precision and may be subjective, it would be seen as unfair to focus so much on them.  Indeed, I sometimes think that the current focus on the findings of bias research plays into this:  The output of the bias research community has demonstrated that everything we do can be unfairly biased.  It's thus tempting to seek simpler and more transparent technical evaluations instead of subjective appraisals of nebulous quality.

So here's my idea for a model:

An administrator has a pot of money to allocate among people.  The administrator assigns money based on two measures: an easy one and a hard one.

The researchers getting the money have to allocate their efforts among three things: Leisure (bad!), hitting an easy target (which generates some utility, but quickly reaches the point of diminishing marginal returns), and hitting a hard target (which takes more effort but yields more satisfaction).  
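To make this concrete for myself, here's a rough sketch in Python.  The names and the funding rule are just one illustrative way to set things up, and for the moment I'm pretending that both measures track effort perfectly (which is, of course, exactly the assumption the model is supposed to question):

from dataclasses import dataclass

@dataclass
class Effort:
    """A researcher's strategy: shares of effort, summing to 1."""
    leisure: float
    easy: float
    hard: float

    def __post_init__(self):
        assert abs(self.leisure + self.easy + self.hard - 1.0) < 1e-9

def funding(effort, easy_weight, budget=1.0):
    # Administrator's allocation rule: a weighted mix of the two measures.
    # Illustrative only -- in reality the hard measure is noisy and costly.
    return budget * (easy_weight * effort.easy + (1.0 - easy_weight) * effort.hard)

e = Effort(leisure=0.2, easy=0.5, hard=0.3)
print(funding(e, easy_weight=0.8))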

The administrator's utility payoff is based on 3 things:
1) How much effort people allocate to doing hard things.  The payoffs here are huge, because the administrator gets to take credit for good things that happen in the system that he/she oversees.
2) How fair and transparent the administrator is.  The more the administrator rewards easy things, the more that political bosses and/or the public will perceive him/her as fair and transparent.
3) How much time the administrator invests in measuring the hard thing.  The payoffs here are negative because it's hard to measure.
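As a toy functional form (the weights and the linearity are arbitrary placeholders; only the signs matter for the story):

def admin_utility(total_hard_effort, easy_weight, measurement_effort,
                  w_credit=3.0, w_fair=1.0, w_cost=1.0):
    # Toy administrator payoff; the three terms mirror the list above.
    return (w_credit * total_hard_effort      # 1) credit for good outcomes
            + w_fair * easy_weight            # 2) fairness/transparency optics
            - w_cost * measurement_effort)    # 3) cost of measuring the hard thing

print(admin_utility(total_hard_effort=0.6, easy_weight=0.7, measurement_effort=0.4))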

The researchers' utility payoff is based on 3 things:
1) How much time they spend on leisure.  There are diminishing marginal returns here.  Zero vacation days will kill a person, but after too much vacation they want to get back to the lab.
2) How much time they spend on hard things.  There are increasing returns to scale here, because you don't get anywhere until you've invested a lot of effort.
3) How much money they get from the administrator.
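Again a toy functional form, with shapes matching the three items above (a square root for diminishing returns, a square for increasing returns, linear in money; the constants are arbitrary).  Note that easy effort pays off only indirectly here, through whatever money it brings in:

import math

def researcher_utility(leisure, hard, money, a=1.0, b=2.0, c=1.0):
    # 1) sqrt(leisure): diminishing marginal returns to vacation.
    # 2) hard**2: increasing returns to scale; little payoff until a lot is invested.
    # 3) money: linear in whatever the administrator allocates.
    return a * math.sqrt(leisure) + b * hard ** 2 + c * money

print(researcher_utility(leisure=0.2, hard=0.3, money=0.7))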

What can we say about the Nash Equilibria of this problem?  It strikes me as posing issues similar to Holmstrom's Theorem.
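I haven't tried to work anything out analytically.  As a sanity check, though, one could hunt for equilibria numerically in a deliberately stripped-down sub-game: two symmetric researchers competing for a fixed pot that gets split according to relative performance on the easy measure, with the administrator's rule held fixed.  Everything below (the contest-style split, the constants, the grid) is my own illustrative choice, not a claim about the full problem:

import itertools
import math

BUDGET = 2.0   # size of the pot split between the two researchers
K = 4.0        # competition intensity: how sharply funding tracks the easy score

def funding_share(my_easy, rival_easy, k=K):
    # Contest-style split: money follows relative performance on the easy measure.
    a, b = math.exp(k * my_easy), math.exp(k * rival_easy)
    return BUDGET * a / (a + b)

def utility(leisure, easy, hard, rival_easy):
    # Same toy shapes as above: sqrt for leisure, squared for hard, linear in money.
    return math.sqrt(leisure) + 2.0 * hard ** 2 + funding_share(easy, rival_easy)

def simplex(step=0.05):
    # All effort splits (leisure, easy, hard) on a coarse grid.
    n = int(round(1 / step))
    for i, j in itertools.product(range(n + 1), repeat=2):
        if i + j <= n:
            yield i * step, j * step, (n - i - j) * step

def best_response(rival_easy):
    return max(simplex(), key=lambda x: utility(x[0], x[1], x[2], rival_easy))

# Iterate best responses between the two symmetric researchers.  A fixed point
# is a (grid-approximate) symmetric Nash equilibrium of this toy sub-game.
rival_easy, converged = 0.3, False
for _ in range(100):
    leisure, easy, hard = best_response(rival_easy)
    if abs(easy - rival_easy) < 1e-9:
        converged = True
        break
    rival_easy = easy
print(f"converged={converged}, leisure={leisure:.2f}, easy={easy:.2f}, hard={hard:.2f}")

Whether the iteration settles, and where, depends entirely on the made-up constants; the point is only that the question is easy to pose numerically even if it's hard to answer in general.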

On some level I've basically outlined the issue to the point where a key idea can be presented:  The easy measure is useful to the extent that it satisfies the public's need to see that the system is honest, and also keeps researchers from being lazy.  The easy measure is parasitic to the extent that it diverts time away from what is meaningful.  You can say "Just improve the easy measure so it's more aligned with what you value!" but the key points are:
1) Measuring valuable things is hard (by assumption).
2) Aligning the easy measure more closely with what you really value is great, but if the competitive pressures are high enough, the gap between the easy measure and what you really value will eventually dominate behavior.  When competitive effects are weak, people will still allocate a lot of time to hard but satisfying efforts.  When competitive effects are strong, people will devote much more time to easy things (sketched below).
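To illustrate point 2 with the same toy payoffs: sweep how much money rides on the easy measure (a crude stand-in for competitive pressure) and look at where a lone researcher's best response puts the effort.  With these particular constants, low stakes leave most of the effort on the hard thing and high stakes flip it to the easy thing:

import itertools
import math

def best_split(stakes, step=0.05):
    # Best effort split for a lone researcher when money = stakes * easy score.
    # "stakes" stands in for how much competitive pressure rides on the easy measure.
    n = int(round(1 / step))
    grid = [(i * step, j * step, (n - i - j) * step)   # (leisure, easy, hard)
            for i, j in itertools.product(range(n + 1), repeat=2) if i + j <= n]
    return max(grid, key=lambda x: math.sqrt(x[0]) + 2.0 * x[2] ** 2 + stakes * x[1])

for stakes in (0.5, 1.0, 2.0, 4.0, 8.0):
    leisure, easy, hard = best_split(stakes)
    print(f"stakes={stakes}: leisure={leisure:.2f}, easy={easy:.2f}, hard={hard:.2f}")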

Of course, it's nice to make these points, but the system that I work in doesn't reward blog posts.  The system that I work in rewards peer-reviewed journal articles, and journals reward mathematical formalism.  I could just sit down and do it but I have lots of other projects on my plate, so I need a collaborator to prod me.  If anyone wants to help please let me know!
