A random list of things my students refuse to do (maybe you’ll actually try them)

Just needed to do some venting. After I find myself saying the same things repeatedly, I start to think that perhaps I should just make a recording and hit the “play” button whenever someone neglects to do one of these things…for the fiftieth time.

1) When you get down to two answers on Critical Reading, GO BACK TO THE FRIGGIN’ PASSAGE AND CHECK TO SEE WHICH ONE IT DIRECTLY SUPPORTS. Pick the most concrete, specific aspect of one answer choice, and check to see whether the passage explicitly addresses it. If it doesn’t, it’s not the answer. If one of the answers contains extreme language, start by assuming it’s wrong and focus extra-hard on connecting the other answer to the passage. (more…)

In praise of multiple choice tests

I have to say I never thought I’d write a post singing the virtues of multiple choice tests (well, sort of). Despite the fact that much of my professional life is dictated by such exams, I’ve never had any overwhelming liking for them. Rather, I’ve generally seen them as a necessary evil, a crudely pragmatic way of assessing fundamental skills on a very large scale. Sure, the logic and elimination aspects are interesting, but they’ve always paled in comparison to the difficulty of, say, teaching a student to write out a close reading of a passage in their own words. People might argue that learning to do so is irrelevant (obviously I disagree, but I’m not going into that here), but basically no one is disputing that it’s hard. At any rate, I’ve always assumed that given the choice between an essay-based test and a multiple-choice one, the former would invariably be superior. (more…)

Can we get something straight? The SAT does not test “rote learning”

People seem to be throwing around the term “rote learning” a whole lot these days in regard to the SAT, without any apparent understanding of what it actually means. So in a modest — and perhaps vain — attempt at cutting through some of this linguistic obfuscation, I offer the following explanation.

This is an example of a question that tests rote knowledge:

The dates of the American Civil War were:

(A) 1849-1853
(B) 1855-1860
(C) 1861-1865
(D) 1866-1871
(E) 1872-1876

This question does not require any thought whatsoever, nor does it require the test-taker to have any actual knowledge of the American Civil War beyond when it occurred. It is simply necessary to have memorized a set of dates, end of story. This is what “rote learning” actually means: memorizing isolated bits and pieces of information, without any consideration of how they fit into a larger context. (more…)

In response to the SAT overhaul

There are many, many things I could say about the overhaul of the SAT (coming to a testing center near you in 2016!), but I don’t want this to turn into an endless rant, and so I’ll do my best not to ramble on too long.

The elimination of the sentence completions and the 1/4-point penalty, as well as the changes to the essay, didn’t surprise me in the least; the combination of Reading and Writing into one section caught me a bit off guard, however. In retrospect, I should have seen it coming. If more time is going to be allotted to the essay — the only possibility if you’re giving a more in-depth assignment — that time is going to get cut from somewhere else. (more…)

Why standardized test scores are compatible with holistic admissions

Among the favorite arguments regularly trotted out by critics of standardized testing is the fact that scores correlate so closely with income. Sure, there might be an occasional outlier, but for the most part, the correlation holds steady. Students who come from well-off families will obtain high scores, while students who come from poor families will score far lower. So if standardized test scores are nothing more than a reflection of socioeconomic status, why bother even having the tests in the first place?

Well, I can think of a couple reasons. For the purposes of this discussion, I’m going to restrict myself to highly competitive/elite colleges — the schools that the SAT was developed for in the first place.

Let’s start with the fact that in 2013, the average score for a student from a family with an income of over $200,000 a year was 1714: 565 Reading, 586 Math, and 563 Writing. Even with very good grades, a student who earns those scores is nowhere near a shoo-in for admission at even a second-tier university. (If they don’t need financial aid, they’ll probably have a better chance.) At the most competitive schools, they fall below the 25th percentile. Absent a very significant hook, they’re not even in the pool.

By the way, I’ve been trying to locate income statistics about the very highest scorers, but thus far, I’ve been unable to find them. If anyone has a link, I’d be really interested to see the breakdown. If the score curve continued to follow the income curve into the 700s, that might change my perspective, but I haven’t yet seen any evidence that the students scoring in the 750-800 range come from the very highest-earning subset of families.

Second, top colleges draw the majority of their applicant pools from the upper end of the socio-economic spectrum. Fair or not, that’s unlikely to change anytime soon. These schools are perfectly aware that wealthy students can pay thousands of dollars for tutoring, and they consider it a given that most of their applicants have had some sort of prep. (Although they might publicly bemoan the hysteria they’ve created, the reality is that they have too much — namely their U.S. News and World Report ranking — riding on the average scores of their admitted students to discount their importance.) Their primary concern is whether that prep actually got the student someplace.

This is where the “holistic” part comes in. These schools are not admitting statistics; they are admitting individual students, and it’s their job to worry about the outliers. As a general rule, a high score from a highly advantaged applicant is a prerequisite for serious consideration; provided it’s somewhere around the average for admitted students, it becomes more or less irrelevant.

On the other hand, a low to middling score from a well-off applicant is a serious red flag; it suggests that even given every advantage, the student still isn’t capable of performing at a top level academically. Under normal circumstances, colleges have no reason to accept someone who is genuinely likely to struggle. That’s unfair to everyone, the student included. And given that some prep schools go so far as to invent their own grading systems to obscure where their students actually stand, and prevent comparison between their students and those from other schools, test scores may provide the only clear-cut assessment of an applicant’s actual abilities.

On the flip side, a seriously disadvantaged applicant with exceptionally high scores, or an applicant from a high school that rarely sends its graduates to top colleges — or any college — can make admissions officers sit up and take notice. That’s why the SAT was developed in the first place, and occasionally it does what it was designed to do.

I’m not trying to trivialize the serious problems with the system, most notably the tendency to judge merely middle class applicants by the same standards as the wealthy (test prep aside, merit can be awfully expensive to attain). But given the choice between a somewhat objective standard and a completely subjective one, I’d vote in favor of the former. Colleges have a right to ensure that entering students have obtained a baseline level of knowledge; the fact that the existing educational system fails to prepare students adequately across the board shouldn’t detract from that right.