What's wrong with testing? Except for suppressing the talented, protecting incompetents, and making us less productive - not much.

James Fallows, a contributing editor of The Washington Monthly, writes for The Atlantic from Yokohama, Japan. This is excerpted from his book MORE LIKE US: Making America Great Again, published by Houghton Mifflin Company, Boston. Copyright (c) 1989 by James Fallows. Reprinted by permission.

The term "meritocracy" was popularized 30 years ago by the English sociologist Michael Young, who introduced it in his short satire, The Rise of the Meritocracy. Taken literally, meritocracy means "rule by the meritorious," and such a system is what America, among other societies, has always dreamed of attaining. Many people assume, even without thinking, that the current system of school tracking, tests, and professional organizations is about as efficient a meritocracy as we're likely to devise. But is the connection between intelligence and success really so necessary and natural? It is not, as an examination of the meritocracy's premises will show.

The starting point for today's meritocracy, of course, is the idea that intelligence exists and can be measured, like weight or strength or fluency in French. The most obvious difference between intelligence and these other traits is that all the others are presumably changeable. If someone weighs too much, he can go on a diet; if he's weak, he can lift weights; if he wants to learn French, he can take a course. But in principle he can't change his intelligence. There is another important difference between intelligence and other traits. Height and weight and speed and strength and even conversational fluency are real things; there's no doubt about what's being measured. Intelligence is a much murkier concept. Some people are generally smarter than others, and some are obviously talented in specific ways; they're chess masters, math prodigies. But can the factors that make one person seem quicker than another be measured precisely, like height and weight?

Think for a moment about the difference between measuring intelligence and measuring anything else. We know that some natural traits are distributed according to what statisticians call a "normal distribution," better known as a bell curve. Height is the classic example. If you randomly chose a thousand American men and measured them, you'd find that most would be slightly over or under six feet, smaller numbers would be four inches taller or shorter, and only a few would be at the top and bottom of the scale. Many other natural characteristics-the number of hairs on a person's head, the size of fish in a lake-follow a normal distribution. But some other, equally natural, features don't. Hair color among Japanese citizens does not have a normal distribution: almost everyone's is black. The ability to walk is another example. It is not "normally" distributed, since the great majority of people can walk without difficulty, and only a minority-those who are too old, too young, or too sick-cannot.

There is, then, nothing in nature that dictates that intelligence be distributed along a bell curve, with the normal proportions of geniuses and morons and people with average IQs. So how can we be sure that intelligence really is distributed that way? In fact, we can't; no one is sure just how it is distributed. The bell curve was invented for analytic convenience, not because anyone believed that it resembled the real, underlying pattern of intelligence. Test makers simply assumed the distribution was normal: if they produced a set of questions and students' scores on the test did not follow a "normal" distribution, the questions themselves were judged to be bad. Good questions were those that yielded a bell-shaped curve.

IQ scores now fall into a bell curve mainly because that is where the original English and American psychometricians thought they should fall. But suppose they'd started out with different preconceptions. Suppose they had believed that "intelligence" was something like "health": some people were weak and some were strong, but most people were "healthy enough." Their bodies might be shaped differently, but one type couldn't be called healthier than another. In that case, the distribution of IQ scores would have been very different. Indeed, it would look more like the distribution in Japan, where the prevailing idea is that intelligence (among Japanese) is like health. Most people are thought to have "enough."

This brings us to the second question. Whatever intelligence may be and however it may be distributed, is it really the main factor in determining how far people can go in life? Unless IQ is an important limit, the entire tracking system makes no sense. Why start channeling people early if most of them really can handle most jobs? Why not let them end up where they will, by trial and error, or encourage them to keep starting over? Why hive people off to trade school if, given a later chance, they could become scientists, doctors, or inventors?

As it happens, there have been some studies designed to test precisely this hypothesis: that only a small fraction of the public is intelligent enough to do complicated professional work. If the hypothesis were true, you would expect to see a correlation between IQ scores and positions on the job ladder. The greatest variety of IQ scores would be at the bottom of the ladder, because people in society's bad jobs would be (a) those people who weren't smart enough to do anything else and (b) those people who were smart enough but for one reason or another-weak ambition, negligent parents, sickness, alcoholism, character defects, simple bad luck-never fulfilled their potential. At the top of the ladder, there would not be much variety-there couldn't be, since only those with high IQs could handle professional or managerial work.

In I.Q. in the Meritocracy, R.J. Herrnstein discussed one important study that confirmed this expectation. He compared the intelligence-test scores of tens of thousands of recruits during World War II with the jobs they'd held before induction. As he had predicted, there was much variety at the bottom and less among those in good white-collar jobs.

But Herrnstein's subjects were young, just starting out in life. When Michael Olneck, of the University of Wisconsin, and James Crouse, of the University of Delaware, worked from data that followed men later into their careers, they found just the opposite. Their principal source of information was the "Kalamazoo brothers" study, one of sociology's longest-running and most thorough surveys, which followed thousands of boys from their childhood in Kalamazoo well into adulthood. Because the study lasted so long, early guesses about the boys' potential could be matched against the way their careers actually turned out.

When Crouse and Olneck compared men's first jobs with their test scores, they found a pattern like Herrnstein's. But the longer they followed subjects, the more the pattern changed. Of the Kalamazoo brothers who ended up as professionals, 10 percent had been considered "high-grade morons" as boys. Their childhood IQs were below 85, putting them in the bottom sixth of the population. One third of all the adult professionals, and 42 percent of the managers, had childhood IQs below 100. On average, managers were smarter than normal, but many managers were dumb. The greatest diversity of IQ scores was not among unskilled laborers, as Herrnstein had predicted, but among those in professional jobs. "While men...
