I'm posting my thoughts on Academaze as a single review rather than a series of chapter responses, because I was asked to post a review in exchange for an advance copy of the book. The book is written by the pseudonymous Sydney Phlox, also known by the pseudonym Xykademiqz, under which she has a blog. If you've been reading her blog for several years (as I have) then the book may be more valuable as a reference than as a cover-to-cover read. The book is not a transcript of her blog, but many sections are edited versions of her blog posts. Fortunately, it can be read in pieces or in nonlinear order, for those who want to just focus on certain topics. However, you really should read the whole thing at some point, because she builds up a pretty comprehensive view of an academic career.
The best audience for this book is new professors. "How to survive the tenure track" seems to be a staple of the academic blogosphere (blogs being a genre that was big in the early 2000's, but now seems a little musty compared to short clickbait articles linked on Twitter or Facebook or whatever Kids These Days are using), but this one is better-written and more thorough than most writings in that genre. Also, so much of the academic blogosphere is focused either on biomedical researchers (a very different path from physical science) or on humanities and social science (great people, but very different career issues). The book and blog by Xykademiqz fill a void for physical scientists. She says plenty of conventional things, but also plenty that are purely her own genuine opinions, and she neither treats The System as cruelly illegitimate and in need of being completely trashed and replaced with something "transformative" (which the enthusiastic types of the Right-Thinking Classes seem to like) nor acts as an apologist.
I would highly recommend her book for anyone who is considering a career at a research university, at least in physical science or engineering. She's pretty practical about what it takes to get tenure. For people like myself, at undergraduate-focused institutions, I would still recommend it, but with the caveat that while some of it is absolutely dead-on applicable to places off the R1 pedestal, other parts are a way to learn the things you didn't know about "how the sausage is made" when you were a grad student or postdoc. People at undergraduate-focused institutions face a lot of similar issues with teaching (only more so), we still need to get onto sustainable research trajectories (only without quite the same dizzying heights of money-grubbing and name recognition), and we still gripe with colleagues (even if the specifics of the political battles are a bit different when you're not fighting over massive research lab space). If you're at a research-focused institution, read it as a manual. If you're at an undergraduate-focused institution, read it as an exploration of faculty psychology with some practical specifics mixed in.
One nice feature of the book is her cartoons, mostly one to four panels each and drawn by the author herself. They add a nice mood to the book, injecting some levity into a genre (advice books) that too often induces either neurosis or a feeling of "I Am Now Enlightened!"
Finally, although her academic readers won't need to be sold on this, the book does a nice job of conveying all of the many things that professors do when we aren't in the classroom. The general public seems to think that our only "real work" is teaching classes and any time not in the classroom (especially summer!) is just for slacking off. Because so much of it is an insider take with advice for aspiring insiders, it's probably not quite the right book to recommend to non-academics who think we're lazy and under-worked, but it is in the right direction. Maybe some excerpts would be useful reading for the non-academic public. Even better, maybe she could write a second book telling the public what professors do when we're not in a classroom!
In short, buy this book as a gift for anyone seriously considering a career at a research university, and share excerpts with anyone who thinks professors don't do much work.
Friday, June 17, 2016
Theranos is falling in the marketplace, Lizzie is at risk of being banned
OK, you loyal followers of my grumpy blog. We need to talk about Theranos, the start-up that got so much attention for its claims that it can do medical tests with just a tiny pin-prick's worth of blood instead of the larger samples that medical labs usually require. Why do we need to talk about Theranos? It's not that I know anything about blood testing, but rather that Right-Thinking People drank that kool-aid and drank it hard. Just today I came across an article (from last fall) by a very enthusiastic and progressive-minded professor talking about how wonderful Theranos is and how visionary their CEO is.
Why did Right-Thinking People develop such a massive crush on Theranos CEO Elizabeth Holmes? Well, first and foremost, Holmes dropped out of Stanford, always wears a black turtleneck (just like another sainted figure that Right-Thinking People idolize), and has given a TED Talk. She uses phrases like "transformative change." To a breathlessly enthusiastic person, that's pretty much all you need to be great. Bona fide achievement is optional, and that's good because in the past few weeks Theranos has lost their biggest customer and Holmes' net worth estimate has been downgraded to zero. The federal government is also considering whether to impose sanctions on her. It seems that this is related to the fact that her company had to repudiate two years' worth of test results. Oops.
And a lot of doubts were already swirling around at least by last fall, if not earlier.
I don't have anything against Holmes. I don't know her, and I understand that some great ideas are worth trying even if they fail. Thing is, the vast majority of the people idolizing her also didn't know her, didn't work with her or for her, and didn't invest in her, but they had no problem going all ga-ga about transformative leadership and big ideas and OMG TURTLENECK!!!
Right-Thinking People are so enthusiastically gullible.
Interestingly, the Theranos board is very heavy on military and government types, and somewhat light on medical types. This might have something to do with the fact that her father held high-level positions in the CIA...um, I mean, the US Agency for International Development. I have to love the idea of Right-Thinking People getting suckered by a daughter of great privilege. It's a metaphor for, well, just about everything that's wrong with the Right-Thinking classes.
But, if you are dismayed by my harsh and sarcastic writings, here's my parting conciliatory comment: It's quite likely that the next person who excites you will be exactly what they seem to be and not someone with a high ratio of hype to results. There's hope!
Thursday, June 16, 2016
I prefer my statistics free-range, not restricted in range
OK, kids, gather around and learn some statistics. Suppose that you're well-meaning and conscientious and want to make sure that colleges are doing the best possible job in admitting students and selecting them only on the basis of meaningful variables, not meaningless garbage input. So you recommend that every college--EVERY!!!--do a study of whether its admissions criteria actually predict success, and particularly whether standardized tests predict success on their campus.
There are two problems with this. One is actually an ethical problem that gets to the heart of equity concerns: Why should predictors of success be the only factors in determining who gets an opportunity to study at a college? You might argue that predictors of success are pretty important in justifying that investment of time, money, and other resources, or you might argue that giving a chance to people who are less likely to succeed is a justifiable endeavor. We can't decide between those propositions empirically. We can use empirical studies to determine if a test, or a GPA, or an essay, or an interview, or whatever "is" a statistically sound predictor of success, but we can't use empirical studies to decide if opportunities "ought" to be extended only on the basis of statistically likely success. That "is"/"ought" distinction is something that philosophers have long discussed. One could determine that a given test or whatever "is" a sound predictor but a student "ought" to be admitted in spite of a low score (or whatever) because of a commitment to opportunity. Whether you agree or disagree with that action probably depends on just how weak the student is, but resolving that conundrum is ultimately a value judgment.
But there's a second problem: Let's say that you've decided that you care about the statistical power of different predictors of success. Your "ought" questions are resolved, but answering "is" questions is tricky. If a student gets into your college with low grades, or low test scores, or weak extracurricular accomplishments, or weak letters of recommendation, or whatever it may be, then they're probably either strong by some other measure or else they're from a rich family and your development office has a lot of internal political clout. The second issue is beyond our scope here, but the first issue is a big one. If many of the students with low test scores are strong by some other measure, and many of the students with high test scores may not be as strong by some other measure, then it's hard to do "apples to apples" comparisons. If one group does better than another you won't know if it's because of how they differ on one measure or how they differ on another measure. If both groups perform similarly you won't know if the differences between them are actually meaningless, or if the differences simply cancel out. If the differences are meaningless then you shouldn't look at test scores (or whatever) at all. If the differences cancel out then you should absolutely look at test scores, but also look at other variables.
Now, some of you may be thinking "Wait, why is it that you're assuming people will only be strong by one measure but not another?" I'm not assuming that everyone fits that dichotomy, I'm just assuming that the people who are weakest by one measure could only get in by compensating in some way, while people who are average or stronger by that measure have more latitude on the other criteria. Of course there will be people who are strong by multiple measures, and looking at them will only tell you that being talented and accomplished and well-prepared by multiple measures is a good thing (but we already knew that). But there's a good chance that the people who are strongest by multiple measures will be poorly-represented at your school because they got into a more prestigious place. So the range of students that you can observe at most schools will be limited to a narrow band composed mostly (but not exclusively) of people who are decent by multiple measures but not amazing and people who are strong on one measure and weak on another. The result is a phenomenon called "range restriction" in the statistical literature.
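If you want to see this effect without trusting my hand-waving, here's a quick simulation sketch (my own illustration, with made-up numbers, not real admissions data): generate an applicant pool where a test score and "everything else" both feed into success, admit on a compensatory basis, and watch the test's apparent predictive power shrink inside the admitted group.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Hypothetical applicant pool: two admissions measures and a "success" outcome
# that depends on both (all numbers invented, purely for illustration).
test = rng.standard_normal(n)       # standardized test score
other = rng.standard_normal(n)      # everything else: GPA, essays, letters...
success = 0.5 * test + 0.5 * other + rng.standard_normal(n)

# Compensatory admissions: you get in if you're strong enough overall,
# so a weak test score can be offset by strength on the other measure.
admitted = (test + other) > 1.5

def r(x, y):
    return np.corrcoef(x, y)[0, 1]

print(f"corr(test, success), full pool:     {r(test, success):.2f}")
print(f"corr(test, success), admitted only: {r(test[admitted], success[admitted]):.2f}")
print(f"corr(test, other),   admitted only: {r(test[admitted], other[admitted]):.2f}")
# Within the admitted group the test looks like a much weaker predictor than it
# is in the full pool, and the two measures look negatively related even though
# they were independent in the pool -- exactly the apples-to-oranges problem above.
```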
There are ways to attempt to correct for range restriction, but those techniques work best if you have a decent understanding of the relationship between your sample and a wider pool. The best thing is to look at a wider pool that includes not only your students but also the people who did worse (or better) than most of your students and went elsewhere. This is why it's best if each school doesn't attempt its own DIY social science, and instead people look at the wider literature and larger studies.
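For the statistically inclined, the classic textbook fix for selection made directly on the predictor is Thorndike's "Case 2" correction. The little function below is my own sketch of it, with invented example numbers; note the assumptions spelled out in the docstring, which is part of why I'd rather you read the big multi-campus studies than trust a DIY correction.

```python
import math

def correct_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike's 'Case 2' correction for direct range restriction.

    r_restricted:    correlation observed in the restricted (admitted) sample
    sd_unrestricted: SD of the predictor in the wider pool (e.g., all applicants
                     or national test-taker norms)
    sd_restricted:   SD of the predictor in the restricted sample

    Assumes selection was made directly on the predictor and that the
    relationship is linear and homoscedastic -- assumptions that rarely hold
    exactly, which is why single-campus DIY studies are shaky.
    """
    u = sd_unrestricted / sd_restricted
    return (r_restricted * u) / math.sqrt(1.0 + r_restricted**2 * (u**2 - 1.0))

# Invented example: a correlation of 0.25 among enrolled students, where the
# enrolled students' test-score SD is half that of the full applicant pool.
print(round(correct_range_restriction(0.25, sd_unrestricted=1.0, sd_restricted=0.5), 2))
# -> about 0.46
```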
Labels:
"Is" vs "Ought",
College Admissions,
statistics
Tuesday, June 7, 2016
Defeated by a Russian...and it's summer!
I can't read any more of Anna Karenina. I can't. I tried, and made it 100 pages in, but it's so slow, and the day-to-day activities of these pampered rich people are so mundane, so boring, that I keep hoping some Communist revolutionaries will come along and send them to Siberia. I actually like the inner lives of these characters, but I hate their mundane activities, and I wish the story would move. I toughed it out to 120 pages and that was enough.
Sorry, Tolstoy. Maybe I'll try War and Peace one of these decades.
My next book is Academaze by friend-of-the-blog Sydney Phlox, aka Xykademiqz.
Monday, June 6, 2016
Amazingly, doing homework problems is a good way to learn physics material
I have graded my final exam and looked at correlations between homework scores and test scores. Besides the correlation between the overall homework average and the final exam score, I also looked at the correlation between each midterm and the homework assignments covering that midterm's material. Some points to note:
- I give two types of homework. The first is a long problem set full of calculations every week. There are 9 of these in the quarter.
- I give 1-2 shorter assignments per week that are either warm-ups for the day’s lecture topics, reviews of the previous class session’s lecture topics, or reviews of the reading. There are 14 of these in the quarter.
- I drop the worst long homework and 2 worst short homeworks, so that there’s no penalty for having one particularly bad week during the quarter.
- Each midterm covers 3 problem sets and 4 short assignments. Half of the final exam was on the last 3 problem sets and the last 6 short assignments, and the other half was a review of topics from the first two midterms.
- I was able to break out the final exam problems related to problem sets 7-9, and correlate those final exam problems with the relevant problem sets and relevant short homeworks.
- I also computed correlations between the total final exam score and the homework averages for the quarter (with the lowest dropped to remove penalties for a bad week).
- There are 16 students who took all 3 tests.
Without further ado, the correlations:
| Homework group    | Corresponding exam or section | Final exam score |
|-------------------|-------------------------------|------------------|
| Short HW 1-4      | 0.46                          |                  |
| Problem sets 1-3  | 0.46                          |                  |
| Short HW 5-8      | 0.45                          |                  |
| Problem sets 4-6  | 0.53                          |                  |
| Short HW 9-14     | 0.46                          |                  |
| Problem sets 7-9  | 0.63                          |                  |
| All Short HW      |                               | 0.78             |
| All problem sets  |                               | 0.81             |
Not surprisingly, the correlation between one test and a small number of assignments is weaker than the correlation of the final with a larger number of assignments, because when I average 9 assignments instead of 3 some noise averages out.
The fact that the homework (which is not timed) is so strongly correlated with the tests (which are timed) suggests that the homework and tests are measuring the same thing, in spite of possible issues of test anxiety, etc. Also, while the correlations between the subsets of homework and relevant exam problems are weaker than the correlation of the average homework score with the final exam, the correlations are quite close to each other, suggesting that all of the tests are similarly representative of the homework.
Most importantly, it appears that people who work hard at homework problems do well in the class overall. Who knew that doing homework is the best way to learn the material?
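If you want to try this with your own gradebook, the computation is nothing fancy: average each student's homework with the lowest score(s) dropped, then compute a Pearson correlation against the exam scores. Here's a minimal sketch with toy numbers (the five "students" below are invented, not my actual class data):

```python
import numpy as np

def average_with_drops(scores, n_drop):
    """Average each student's homework scores after dropping their n_drop lowest."""
    scores = np.sort(np.asarray(scores, dtype=float), axis=1)  # rows = students
    return scores[:, n_drop:].mean(axis=1)

def pearson_r(x, y):
    return np.corrcoef(x, y)[0, 1]

# Toy data: 9 problem-set scores per student, plus final exam scores out of 100.
problem_sets = np.array([
    [90, 85, 88, 92, 80, 95, 87, 91, 60],
    [70, 65, 72, 68, 75, 60, 71, 66, 40],
    [95, 98, 92, 97, 96, 94, 99, 93, 90],
    [55, 60, 58, 62, 50, 65, 57, 61, 30],
    [80, 78, 82, 85, 79, 81, 77, 83, 70],
])
final_exam = np.array([88, 65, 97, 52, 79])

hw_avg = average_with_drops(problem_sets, n_drop=1)  # drop the worst problem set
print(f"corr(problem-set average, final exam) = {pearson_r(hw_avg, final_exam):.2f}")
```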
The voice of the opposition
I was interviewed to provide a critical perspective for an article about Eric Mazur in the Chronicle of Higher Education (paywall). There are a few quotes from a short interview, a link to an article I published in the Chronicle a few years ago, and a quote from a blog post. I'm basically quoted as saying that if people take notes and organize their thoughts they can learn, that expert perspectives should be important and held up alongside group activities and whatnot, and that what may have been revolutionary 25 years ago is now mainstream to the point of being nearly obligatory.
Friday, June 3, 2016
Resurfacing
I have been reading fiction lately, and the themes of this blog are much more suited to discussing non-fiction than fiction (unless we count the promises of education reformers as fiction). But the LA Times has an article on how the Gates Foundation is beginning to acknowledge that silver bullets are in short supply, and that is worth sharing. I used to be a big fan of the TV show Once Upon A Time, and in Once Upon A Time there are two lines particularly worth quoting in discussions of education reform:
1) "All magic comes at a price." Said repeatedly by Rumpelstiltskin, the point is that whenever you use magic to solve a problem you might get a a quick and easy way to do something, but you'll pay for it down the road.
2) "That's the problem with this world. Everybody wants a magical solution to their problems but nobody wants to believe in magic." Said by the Mad Hatter, one of the few characters in the first season who knows that magic and fairytales are real, he's complaining about people who want something that's as quick and easy as magic, but somehow fits into their (supposedly) rational world-view. They want things neat and tidy. He was talking to a character who wanted to believe that there was a way to fix her situation without accepting the reality of other worlds and much bigger things than she had previously contemplated, things that could shatter her world and thrust her into a much bigger drama.
But it could just as easily be applied to technocrats. They want to believe that with properly-designed studies they will find the tricks that nobody else found before, and use those tricks to solve the problems of education. They are forgetting a few inconvenient facts. For starters, scientific discovery is never as clean and linear as the technocrats want to believe. You don't simply follow a best practice for research and thereby identify a best practice for teaching or social work or whatever. True insights, let alone truly new insights, are dearly bought. They come at unexpected moments and from unexpected directions. New knowledge is the most expensive thing imaginable. There's a reason why thousands of years ago the Egyptian scribe Khekheperre-Sonbu, living in a time when so many of today's great innovations were unknown, lamented the difficulty of developing new and meaningful ideas.
Also, they want to think that success is just a matter of telling people to do the right things and then watching it happen. Just today I was talking to a couple of people who found in their research that teaching assistants who were sent to a seminar in which they were told about the (supposedly) best practices for teaching would start out following those practices but would then deviate. Their belief was that they needed to find some way to persuade or train the teaching assistants to stick with the best practices. Well, first, let me note that maybe some of these practices are only best for particular people in particular settings, and the teaching assistants might not be the right audience for these practices.
But second, getting people to do the right thing is more complicated than telling them to do the right thing. Perhaps the best way to instill a practice is not with a seminar that meets for a couple hours once per week over a period of 10 weeks, but rather to provide a role model. Perhaps they need to take a class from an inspiring teacher who uses the (supposedly) newer and better practices. Perhaps that is a better way to instill something into the brain. If you want people to believe down to their very bones that a particular method of teaching physics (or whatever subject) is better than any other method, perhaps they need to actually experience the method itself, and actually learn and appreciate physics via that method, and experience enough growth and insight into physics that they will emulate that method as much subconsciously as consciously. Perhaps role models and meaningful experiences matter more than simply hearing and believing best practices.
Think about how much we emulate our parents, and ask yourselves this: Do you do what your parents told you to do, or do you do what your parents did? You probably got some good habits from them, but did you get any bad habits from them? And did they tell you to adopt those habits, or did your mothers tell their children not to do what they had done? (To quote a great song.) I'm pretty sure that your parents discouraged you from emulating their worst habits, but that probably didn't stop you from emulating them (probably without consciously intending to do so). We know from more than a century of research on K-12 education that parents and family background predict educational outcomes better than any practice on the part of teachers. Why? Because people spend far more time around parents than around teachers, and (usually) get more attention from parents than from teachers. (If they get more attention from teachers than parents then there's an entirely different set of problems here.) Likewise, people spend far more time with teachers trying to teach them subject matter than with teachers trying to explicitly teach them about teaching. Perhaps the best way to shape the next generation of teachers is to teach subject matter as well as possible and inspire people to do what you did, not to spend 2 hours per week over 10 weeks telling people about best practices.
Of course, teaching the subject matter as well as possible is slower-impact than a special seminar, even if it is higher-impact. And we want the fast solution in our society. We want to believe in the magic without accepting that when the magic actually does happen it happens in a much bigger context. So I'm quite confident that when Bill Gates has left the stage Mark Zuckerberg or some similar figure will attempt to implement an education reform agenda, convinced that This Time It's Different. All of this has happened before and will happen again.
1) "All magic comes at a price." Said repeatedly by Rumpelstiltskin, the point is that whenever you use magic to solve a problem you might get a a quick and easy way to do something, but you'll pay for it down the road.
2) "That's the problem with this world. Everybody wants a magical solution to their problems but nobody wants to believe in magic." Said by the Mad Hatter, one of the few characters in the first season who knows that magic and fairytales are real, he's complaining about people who want something that's as quick and easy as magic, but somehow fits into their (supposedly) rational world-view. They want things neat and tidy. He was talking to a character who wanted to believe that there was a way to fix her situation without accepting the reality of other worlds and much bigger things than she had previously contemplated, things that could shatter her world and thrust her into a much bigger drama.
But it could just as easily be applied to technocrats. They want to believe that with properly-designed studies they will find the tricks that nobody else found before, and use those tricks to solve the problems of education. They are forgetting a few inconvenient facts. For starters, scientific discovery is never as clean and linear as the technocrats want to believe. You don't simply follow a best practice for research and thereby identify a best practice for teaching or social work or whatever. True insights, let along truly new insights, are dearly bought. They come at unexpected moments and from unexpected directions. New knowledge is the most expensive thing imaginable. There's a reason why thousands of years ago the Egyptian scribe Khekheperre-Sonbu, living in a time when so many of today's great innovations were unknown, lamented the difficulty of developing new and meaningful ideas.
Also, they want to think that success is just a matter of telling people to do the right things and then watching it happen. Just today I was talking to a couple people who found in their research that teaching assistants who were sent to a seminar in which they were told about the (supposedly) best practices for teaching would start out following those practices but would then deviate. Their belief was that they need to find some way to persuade or train the teaching assistants to stick with the best practices. Well, first, let me note that maybe some of these practices are only best for particular people in particular settings, and the teaching assistants might not be the right audience for these practices.
But second, getting people to do the right thing is more complicated than telling them to do the right thing. Perhaps the best way to instill a practice is not with a seminar that meets for a couple hours once per week over a period of 10 weeks, but rather to provide a role model. Perhaps they need to take a class from an inspiring teacher who uses the (supposedly) newer and better practices. Perhaps that is a better way to instill something into the brain. If you want people to believe down to their very bones that a particular method of teaching physics (or whatever subject) is better than any other method, perhaps they need to actually experience the method itself, and actually learn and appreciate physics via that method, and experience enough growth and insight into physics that they will emulate that method as much subconsciously as consciously. Perhaps role models and meaningful experiences matter more than simply hearing and believing best practices.
Think about how much we emulate our parents, and ask yourselves this: Do you do what your parents told you to do, or do you do what your parents did? You probably got some good habits from them, but did you get any bad habits from them? And did they tell you to adopt those habits, or did your mothers tell their children not to do what they had done? (To quote a great song.) I'm pretty sure that your parents discouraged you from emulating their worst habits, but that probably didn't stop you from emulating them (probably without consciously intending to do so). We know from more than a century of research on k-12 education that parents and family background predict educational outcomes better than any practice on the part of teachers. Why? Because people spend far more time around parents than around teachers, and (usually) get more attention from parents than teachers. (If they get more attention from teachers than parents then there's an entirely different set of problems here.) Likewise, people spend far more time with teachers trying to teach them subject matter than teachers trying to explicitly teach them about teaching. Perhaps the best way to shape the next generation of teachers is to teach subject matter as well as possible and inspire people to do what you did, not to spend 2 hours per week over 10 weeks telling people about best practices.
Of course, teaching the subject matter as well as possible is slower-impact than a special seminar, even if it is higher-impact. And we want the fast solution in our society. We want to believe in the magic without accepting that when the magic actually does happen it happens in a much bigger context. So I'm quite confident that when Bill Gates has left the stage Mark Zuckerberg or some similar figure will attempt to implement an education reform agenda, convinced that This Time It's Different. All of this has happened before and will happen again.
Labels:
Battlestar Galactica,
Edufads,
K-12,
kool-aid,
LA Times,
Once Upon A Time,
Technocrats