I'm posting my thoughts on Academaze as a single review rather than a series of chapter responses, because I was asked to post a review in exchange for an advance copy of the book. The book is written by the pseudonymous Sydney Phlox, also known by the pseudonym Xykademiqz, under which she has a blog. If you've been reading her blog for several years (as I have) then the book may be more valuable as a reference than as a cover-to-cover read. The book is not a transcript of her blog, but many sections are edited versions of her blog posts. Fortunately, it can be read in pieces or in nonlinear order, for those who want to just focus on certain topics. However, you really should read the whole thing at some point, because she builds up a pretty comprehensive view of an academic career.
The best audience for this book is new professors. "How to survive the tenure track" seems to be a staple of the academic blogosphere (blogs being a genre that was big in the early 2000's, but now seems a little musty compared to short clickbait articles linked on Twitter or Facebook or whatever Kids These Days are using), but this one is better-written and more thorough than most writings in that genre. Also, so much of the academic blogosphere is focused either on biomedical researchers (a very different path from physical science) or on humanities and social science (great people, but very different career issues). The book and blog by Xykademiqz fill a void for physical scientists. She says plenty of things that are conventional, but also plenty that are just her own genuine opinion, and she neither treats The System as cruelly illegitimate and in need of being completely trashed and replaced with something "transformative" (which the enthusiastic types of the Right-Thinking Classes seem to like) nor acts as an apologist.
I would highly recommend her book for anyone who is considering a career at a research university, at least in physical science or engineering. She's pretty practical about what it takes to get tenure. For people like myself, at undergraduate-focused institutions, I would still recommend it, but with the caveat that while some of it is absolutely dead-on applicable to places off the R1 pedestal, other parts are a way to learn the things you didn't know about "how the sausage is made" when you were a grad student or postdoc. People at undergraduate-focused institutions will face a lot of similar issues with teaching (only more so), we still need to get onto sustainable research trajectories (only without quite the same dizzying heights of money-grubbing and name recognition), and we still gripe with colleagues (even if the specifics of the political battles are a bit different when you're not fighting over massive research lab space). If you're at a research-focused institution, read it as a manual. If you're at an undergraduate-focused institution, read it as an exploration of faculty psychology with some practical specifics mixed in.
One nice feature of the book is her cartoons, mostly one to four panels, drawn by the author herself. They add a nice mood to the book, injecting some levity into a genre (advice books) that too often induces either neurosis or a feeling of "I Am Now Enlightened!"
Finally, although her academic readers won't need to be sold on this, the book does a nice job of conveying all of the many things that professors do when we aren't in the classroom. The general public seems to think that our only "real work" is teaching classes and any time not in the classroom (especially summer!) is just for slacking off. Because so much of it is an insider take with advice for aspiring insiders, it's probably not quite the right book to recommend to non-academics who think we're lazy and under-worked, but it is in the right direction. Maybe some excerpts would be useful reading for the non-academic public. Even better, maybe she could write a second book telling the public what professors do when we're not in a classroom!
In short, buy this book as a gift for anyone seriously considering a career at a research university, and share excerpts with anyone who thinks professors don't do much work.
Saturday, June 18, 2016
Friday, June 17, 2016
Theranos is falling in the marketplace, Lizzie is at risk of being banned
OK, you loyal followers of my grumpy blog. We need to talk about Theranos, the start-up that got so much attention for its claims that it can do medical tests with just a tiny pin-prick's worth of blood instead of the larger samples that medical labs usually require. Why do we need to talk about Theranos? It's not that I know anything about blood testing, but rather that Right-Thinking People drank that kool-aid and drank it hard. Just today I came across an article (from last fall) by a very enthusiastic and progressive-minded professor talking about how wonderful Theranos is and how visionary their CEO is.
Why did Right-Thinking People develop such a massive crush on Theranos CEO Elizabeth Holmes? Well, first and foremost, Holmes dropped out of Stanford, always wears a black turtleneck (just like another sainted figure that Right-Thinking People idolize), and has given a TED Talk. She uses phrases like "transformative change." To a breathlessly enthusiastic person, that's pretty much all you need to be great. Bona fide achievement is optional, and that's good because in the past few weeks Theranos has lost their biggest customer and Holmes' net worth estimate has been downgraded to zero. The federal government is also considering whether to impose sanctions on her. It seems that this is related to the fact that her company had to repudiate two years' worth of test results. Oops.
And a lot of doubts were already swirling around at least by last fall, if not earlier.
I don't have anything against Holmes. I don't know her, and I understand that some great ideas are worth trying even if they fail. Thing is, the vast majority of the people idolizing her also didn't know her, didn't work with her or for her, and didn't invest in her, but they had no problem going all ga-ga about transformative leadership and big ideas and OMG TURTLENECK!!!
Right-Thinking People are so enthusiastically gullible.
Interestingly, the Theranos board is very heavy on military and government types, and somewhat light on medical types. This might have something to do with the fact that her father held high-level positions in the CIA...um, I mean, the US Agency for International Development. I have to love the idea of Right-Thinking People getting suckered by a daughter of great privilege. It's a metaphor for, well, just about everything that's wrong with the Right-Thinking classes.
But, if you are dismayed by my harsh and sarcastic writings, here's my parting conciliatory comment: It's quite likely that the next person who excites you will be exactly what they seem to be and not someone with a high ratio of hype to results. There's hope!
Thursday, June 16, 2016
I prefer my statistics free-range, not restricted in range
OK, kids, gather around and learn some statistics. Suppose that you're well-meaning and conscientious and want to make sure that colleges are doing the best possible job in admitting students and selecting them only on the basis of meaningful variables, not meaningless garbage input. So you recommend that every college--EVERY!!!--do a study of whether its admissions criteria actually predict success, and particularly whether standardized tests predict success on their campus.
There are two problems with this. One is actually an ethical problem that gets to the heart of equity concerns: Why should predictors of success be the only factors in determining who gets an opportunity to study at a college? You might argue that predictors of success are pretty important in justifying that investment of time, money, and other resources, or you might argue that giving a chance to people who are less likely to succeed is a justifiable endeavor. We can't decide between those propositions empirically. We can use empirical studies to determine if a test, or a GPA, or an essay, or an interview, or whatever "is" a statistically sound predictor of success, but we can't use empirical studies to decide if opportunities "ought" to be extended only on the basis of statistically likely success. That "is"/"ought" distinction is something that philosophers have long discussed. One could determine that a given test or whatever "is" a sound predictor but a student "ought" to be admitted in spite of a low score (or whatever) because of a commitment to opportunity. Whether you agree or disagree with that action probably depends on just how weak the student is, but resolving that conundrum is ultimately a value judgment.
But there's a second problem: Let's say that you've decided that you care about the statistical power of different predictors of success. Your "ought" questions are resolved, but answering "is" questions is tricky. If a student gets into your college with low grades, or low test scores, or weak extracurricular accomplishments, or weak letters of recommendation, or whatever it may be, then they're probably either strong by some other measure or else they're from a rich family and your development office has a lot of internal political clout. The second issue is beyond our scope here, but the first issue is a big one. If many of the students with low test scores are strong by some other measure, and many of the students with high test scores may not be as strong by some other measure, then it's hard to do "apples to apples" comparisons. If one group does better than another you won't know if it's because of how they differ on one measure or how they differ on another measure. If both groups perform similarly you won't know if the differences between them are actually meaningless, or if the differences simply cancel out. If the differences are meaningless then you shouldn't look at test scores (or whatever) at all. If the differences cancel out then you should absolutely look at test scores, but also look at other variables.
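The compensation dynamic above can be demonstrated with a toy simulation (Python, with made-up trait names and numbers, purely illustrative): generate two independent applicant traits, admit anyone whose combined strength clears a bar, and the traits come out negatively correlated among the admitted even though they were unrelated in the full pool.

```python
import random
import statistics

random.seed(0)

# Two independent applicant traits, e.g. "test score" and "other strengths",
# each on a standardized scale. The names and numbers are illustrative.
applicants = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100_000)]

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in pairs) / (len(pairs) * sx * sy)

# Among all applicants, the two traits are uncorrelated (near zero).
print(corr(applicants))

# Admit anyone whose combined strength clears a bar: a weakness on one
# measure has to be compensated by the other.
admitted = [(t, o) for t, o in applicants if t + o > 1.0]

# Among the admitted, the same traits are now strongly negatively
# correlated, so comparisons on one measure are confounded by the other.
print(corr(admitted))
```

This is why the students with low test scores at your school are disproportionately strong by some other measure: the admissions process itself manufactures that pattern.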
Now, some of you may be thinking "Wait, why is it that you're assuming people will only be strong by one measure but not another?" I'm not assuming that everyone fits that dichotomy, I'm just assuming that the people who are weakest by one measure could only get in by compensating in some way, while people who are average or stronger by that measure have more latitude on the other criteria. Of course there will be people who are strong by multiple measures, and looking at them will only tell you that being talented and accomplished and well-prepared by multiple measures is a good thing (but we already knew that). But there's a good chance that the people who are strongest by multiple measures will be poorly-represented at your school because they got into a more prestigious place. So the range of students that you can observe at most schools will be limited to a narrow band composed mostly (but not exclusively) of people who are decent by multiple measures but not amazing and people who are strong on one measure and weak on another. The result is a phenomenon called "range restriction" in the statistical literature.
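Here is a minimal sketch, under assumed numbers, of what range restriction does to a perfectly real correlation: a test score that genuinely predicts success at r ≈ 0.5 across the whole applicant pool looks much weaker when a school can only observe a narrow band of scores.

```python
import random
import statistics

random.seed(1)

def corr(pairs):
    """Pearson correlation of a list of (x, y) pairs."""
    xs, ys = zip(*pairs)
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in pairs) / (len(pairs) * sx * sy)

# Suppose a test score genuinely predicts later success with r ~ 0.5
# across the whole applicant pool (numbers are made up for illustration).
pool = []
for _ in range(100_000):
    score = random.gauss(0, 1)
    success = 0.5 * score + random.gauss(0, (1 - 0.5**2) ** 0.5)
    pool.append((score, success))

print(corr(pool))  # close to the true 0.5 in the full pool

# A single school observes only a narrow band: the strongest applicants
# went somewhere more prestigious, the weakest weren't admitted.
enrolled = [(s, y) for s, y in pool if -0.5 < s < 1.0]

print(corr(enrolled))  # noticeably attenuated: range restriction
```

A campus-level study on the enrolled students would conclude the test is a weak predictor, even though it works fine across the full pool.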
There are ways to attempt to correct for range restriction, but those techniques work best if you have a decent understanding of the relationship between your sample and a wider pool. The best thing is to look at a wider pool that includes not only your students but also the people who were worse (or better) than most of your students and went elsewhere. This is why it's best if each school not attempt its own DIY social science, but instead people look at the wider literature and larger studies.
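One classic textbook technique is Thorndike's Case II correction for direct range restriction on the predictor; a sketch with made-up numbers follows. Note that the formula needs the predictor's standard deviation in the unrestricted pool, which is exactly the information a single school doing DIY social science usually lacks.

```python
def thorndike_case2(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike's Case II correction for direct range restriction.

    Estimates the correlation in the wider pool from the correlation
    observed in a range-restricted sample, given the predictor's spread
    in both. Only valid under its own assumptions (direct selection on
    the predictor, linearity, homoscedasticity).
    """
    u = sd_unrestricted / sd_restricted
    return (r_restricted * u) / (1 - r_restricted**2 + r_restricted**2 * u**2) ** 0.5

# Made-up example: an observed r of 0.23 in a sample whose test-score SD
# is 0.42, versus an SD of 1.0 in the full applicant pool.
print(round(thorndike_case2(0.23, 1.0, 0.42), 2))  # 0.49
```

The correction roughly doubles the estimated correlation here, but it is only as good as your estimate of the wider pool's spread, which reinforces the point: lean on the larger literature rather than a single-campus study.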
Labels: "Is" vs "Ought", College Admissions, statistics