BMCR 1993.05.15


The preceding message carried two important letters regarding the National Research Council’s survey of graduate program quality now under way; see BMCR 4.3.1 and 4.3.2 for discussion earlier this summer. Bill Ziobro wrote to the NRC on behalf of the APA. As I understand it, we were not the only learned society to complain, and so Douglas Greenberg, Vice President of the American Council of Learned Societies, wrote to the NRC to emphasize the wider concern; hence the reply from an officer of the NRC was addressed to Greenberg. The documents speak for themselves, but they merit further comment.

The underlying flaw in the survey remains, and is now ardently defended by the NRC. A data set created by amateurs in voluntary compliance with vague and imprecise questions is going to be a bad data set. If one presents this set to qualified judges to evaluate, they will do a poor job, no matter how qualified they are. But if, as the NRC did, one selects judges exclusively from among the individuals named in the data set, the effect of the erratic data collection will be multiplied dramatically. And so classics professors who work on Plato and hold a courtesy listing in a philosophy program are given equal voice in assessing philosophy programs nationally with full members of philosophy departments. Is it reasonable to argue, as the NRC reply seems to do, that if the NRC does not have the money to do a decent survey, no one should blame them if they do a bad one? (How bad was the listing? My own department turns out to be the largest in the nation, because our deans innocently sent in the listing of our “graduate group”, which comprises not only departmental faculty but numerous collaborators in adjacent fields; Princeton turns out to be one of the smallest, because they reported only tenured faculty in the Classics Department. Practices in reporting varied just as widely everywhere. In other words, measured against a consistent standard of reporting, listings seem to vary by about 50-75% either up or down.)

The survey is in fact a dinosaur, a relic of the go-go competitive days of the 1960s, when deans and departments alike could scrutinize these lists, curse their fate if by rough chance they had come in thirtieth or so, and swear to work harder and spend more money to improve their ranking next time around. What has changed is that we are no longer rich and profligate, but pressed and pinched. To trifle with the perceived reputation of departments is a game that now puts departments at dire risk. Just in the last month at Penn we have seen three departments of the School of Arts and Sciences closed, another put in receivership, and a fifth forced into a shotgun merger with a much larger partner. Our friends in the threatened Department of Religious Studies are expressly told that their national reputation is insufficiently distinguished. To conduct a national survey that will create lists of winners and losers at this point is serious business. To do so in a light and irresponsible fashion is cruel. (Just since writing the first draft of this note, we have heard that the classics program at the University of North Dakota, refreshed by new appointments and a charge to revitalize the subject only two years ago, has now been targeted for closure.)

(How should we be evaluated? The APA’s own directory of graduate programs, recently published, reminds us by its chaste example that it is better to count things that lend themselves to counting. Size of program [accurately represented], number of degrees awarded, placement of degree recipients, number of applicants, number of students admitted, etc.—all these things tell us important things about departments and about the profession. We need to do more and better analysis of that kind. But opinions about reputation are dangerous things to try to count: margins of error multiply dizzyingly out of control. We should not fear evaluation, but we should insist on, and participate in, evaluation that produces reliable results.)

The competition that now matters most to us as classicists is one that no national survey I can think of could get at. We compete not with each other in a kind of old-boy rivalry from ivied campus to ivied campus, but rather with our colleagues intramurally. In an era of managed resources, every faculty position on the books in a classics department is at least notionally a position denied to other departments. We must justify our value to our institution every day, not to prove that we are better classicists than the team over at Siwash State, but to prove that we are as valuable to our institution as the economist or chemist—or the daring young professor of film studies perhaps.

So what is to be done? There is little value in this survey as constituted. (Declaration of self-interest: since the voting, however it was conducted, is presumably over by now, I will venture the suggestion that my own department would probably climb at least a couple of notches in this ranking from last time.) The best outcome would be for the APA to make a formal request to the NRC that the results of the survey regarding classics programs be suppressed completely (no leaks, no informal distribution through an old-boy network of deans), and that a clear public statement of the reasons be made. We should also press the matter through the ACLS in the hope that any future survey can be made to meet minimum standards of responsible polling. Better still, we can fight to have the whole business stopped. With any luck, this could and should be the last survey of its kind. Then the APA could and should turn to consider how responsible self-evaluation can help us improve our own value to the profession and provide objective and reliable information for making the case for classics to our deans. JO’D
October 15, 1993