Editor’s Note: Early in the summer of 1993, the APA, as well as some of the other learned societies which are constituent members of the ACLS, expressed concern about the accuracy of the data presented for review in a survey of Research-Doctorate Programs conducted by the National Research Council. The NRC responded to these inquiries in August through the ACLS. Copies of the correspondence follow:
June 9, 1993
Prof. Marvin L. Goldberger and Prof. Brendan Maher
National Research Council
2101 Constitution Avenue
Washington, D.C. 20418
Dear Professors Goldberger and Maher:
I write to you in your capacity as Co-Chairs of the Committee for the Study of Research-Doctorate Programs in the United States to express the serious concern of the American Philological Association about the accuracy of the data which you have presented for review of programs in the Classics. The APA’s concern is based on a review of two questionnaires which you have forwarded for peer review to APA members and which, in turn, have been shared with the APA Office. Although your May 4, 1993 cover letter states that “up to 50 programs in the Classics” will be reviewed, both questionnaires contained only 29 programs for review. Omitted from this list are the following universities which, according to APA records, grant the Ph.D. degree: University of California, Irvine; City University of New York; Colorado; Georgia; Indiana; Iowa; Loyola, Chicago; Missouri; Pittsburgh; Rutgers; State University of New York, Buffalo; University of Southern California; Vanderbilt University.
Of these institutions, one granted five Ph.D.s in 1992 and another granted three in the same year. The APA’s data are based on two major self-studies and initiatives which the Association conducted in 1992: one was a Data Questionnaire (copy enclosed) of all graduate and undergraduate programs in the Classics, and the other was the APA Guide to Graduate Programs in the Classics in the United States and Canada. From these studies the APA collected a considerable amount of data about the state of the discipline and its professional profile in 1992, and this information remains current. We have compared the data which you are presenting for review against these two sources, and we unfortunately discovered that much of your data is inaccurate or out-of-date. For example, we compared the faculty whom you listed in your 29 selected programs against the faculty whom we have recorded for the same institutions in 1992. This comparison revealed the following discrepancies:
Number of Discrepancies    Number of Institutions
None                       6
1                          2
2                          2
3                          4
4                          3
5                          6
6                          1
9                          1
11                         1
12                         2
17                         1
Total                      29
In short, the high proportion of institutions with discrepancies (80%) raises a question about the correctness of the data provided in your questionnaire about individual institutions, and this fact may undermine the credibility of the NRC Survey even before it is completed.
The APA, however, is fully aware that it may be drawing false conclusions about the NRC Survey, since our observations are based on only two questionnaires which have been shared with us. We are eager to work with the NRC so that reports for individual graduate programs are based on accurate and current information and so that institutions which should be evaluated, but are not presently included, become part of the survey. The APA requests from the NRC a listing of all graduate programs in the Classics which are being evaluated and a statement of the criteria used for the inclusion or exclusion of a program. I look forward to discussing these concerns with you as soon as possible.
Yours sincerely,
William J. Ziobro
Secretary-Treasurer
August 2, 1993
Dr. Douglas Greenberg, Vice President
American Council of Learned Societies
228 East 45th Street
New York, NY 10017-3398
It was a pleasure to talk with you recently about the status of our study of research-doctorate programs in the United States. The National Survey of Graduate Faculty is well underway and the purpose of this note is to provide you with some further details about survey methodology.
ELIGIBILITY CRITERIA: RESEARCH-DOCTORATE PROGRAMS
About 4,000 research-doctorate programs in 41 fields have been included in our study. In 1992, as part of the study design, the Committee reviewed and approved criteria for determining which programs would be included in the National Survey of Graduate Faculty—and therefore, in the study as a whole. The first cut identified those universities eligible for inclusion in the study:
Criterion 1. Level of Ph.D. Production
Based on data from the Doctorate Records File, maintained by the National Research Council, we generated a count of Ph.D.s produced between 1988 and 1990 by field and by institution within field. We then allowed into the study any institution (within a field) that produced at least three Ph.D.s during the target period, and either one Ph.D. in 1991 or a rating of 2.0 or better in that field in the 1982 NRC study of research-doctorate programs.
We allowed into the study institutions meeting those criteria, even if they had only one program in one of the 41 disciplines included in the study.
Criterion 2. Interest in Participation in the Study (University President)
We then wrote to the Presidents of 300 universities meeting the first criterion and invited them to participate in the study. Thirteen universities did not respond to the invitation or declined to participate. We also asked Presidents in participating institutions to name an “Institutional Coordinator”—usually a graduate dean—with whom we worked at succeeding stages of program selection.
Criterion 3. Interest in Participation (Institutional Coordinator)
Institutional Coordinators at 284 universities were sent a list and a form for each Ph.D. program we had identified as eligible for inclusion in the study under Criterion 1. We sent these forms at the same time as our initial contact, asking for detailed information about program faculty, etc. The Institutional Coordinators either sent us program information, told us that they were not interested in having a program rated, or told us that the program no longer existed (if it had been identified by its inclusion in the 1982 study). In addition, we sent every Institutional Coordinator a blank form, inviting them to nominate a strong program in any of the 41 fields.
Criterion 4. Unified Field Lists in the Biological Sciences
For the sake of completeness, let me add that we used a slightly different approach in the biological sciences. We asked Institutional Coordinators to send us faculty lists for “programs” that were offered within seven broad fields established by the Committee. If, however, Ph.D. training in the biological sciences did not match the “unified field” name, and the Institutional Coordinator was interested in having the programs reviewed, the survey form was designed to permit the analysis of component programs within the field.
Thus, at the conclusion of this entire process, we had a pool of about 4,000 research-doctorate programs which form the basis for the study.
Problem Cases
Jim Voytuk has personally handled problems that have arisen about program “coverage”—that is, cases where someone has brought to our attention the fact that a Ph.D. program had been omitted from the study.
Before handling specific cases, we established a hard and fast rule in the office: The NRC will correct errors that we may have introduced into the handling of program/faculty information, but we cannot correct errors that may have arisen at the campus level. [BMCR editor’s note: Emphasis in the original]
Thus, if in the process of printing questionnaires we omitted programs that were actually submitted by an Institutional Coordinator, we have arranged for the programs to be included in an auxiliary mailing. Only one such case has come to our attention, which occurred in our handling of the questionnaire in Pharmacology. (An institution submitted two program lists, pharmacology and toxicology, requesting that they be rated separately; we included only one list in the questionnaire for the broad field of “Pharmacology”. We are in the process of correcting that error.)
In the case of “Classics”, we were asked by John D’Arms to check the eligibility of seven programs. Based on a review of our data, we found that all were ineligible under Criterion 1. Jim Voytuk spoke with the Institutional Coordinators at all seven institutions. Six were satisfied with our explanation; the seventh said she might have nominated the program but had not actively participated in the institutional response, which was handled by her staff. In the end, the seventh Institutional Coordinator was satisfied with the approach we had taken in gathering information.
FACULTY LISTS
We have relied on Institutional Coordinators for all faculty lists. Institutional Coordinators were instructed to include faculty primarily involved in doctoral education. Some institutions chose to submit faculty lists for entire departments, some “winnowed out” faculty not involved in doctoral education and then sent us a list, and some added faculty from other departments involved in Ph.D. training in that program, in addition to those working in the department.
Thus, our faculty lists are for the most part not department lists. They list faculty considered to be involved in doctoral education in that field. Omissions of faculty names occur as a result of actions taken by the Institutional Coordinators.
Problem Cases
We have identified and corrected a few errors introduced by the handling of faculty lists. Most notably, we switched two lists at one institution. Raters have now been sent the correct faculty list for that institution and have been asked to re-rate the programs in question.
I hope this information is useful. We plan to summarize full details of the survey methodology at the next meeting of the committee in September. In the meantime, please don’t hesitate to call if you have other questions about the study.
Regards,