BMCR 1995.03.09

Response: Johnson on Neuburg on Golemo on Neuburg

Response to 1995.03.05

Response by Johnson

As a software developer for many years, I too have sometimes felt it unjust when someone criticized a piece of software which I had freely offered, especially when I had produced the software on my own time, explicitly as work-in-progress, and solely in order to be helpful to the scholarly community. So I can understand, to some extent, Matt Neuburg’s sensitivity on this score.

With that said, N.’s reply to the CAI reviews which we offered, also in a spirit of helpfulness to the scholarly community, is itself unjust. The level of invective is wholly inappropriate to what anyone should be trying to accomplish here. What, pray tell, does a collection of reviews of CAI software have to do with the “Great Competition of -isms and power games” which caused N. to leave America for New Zealand?

The point of our collection of reviews was to try to offer a critical overview of some of the CAI software available today. It was intended to assist those developing introductory Latin and Greek courses that make use of computers, since this is an area where information seems scarce. As editor of the collection, I tried wherever possible to argue for consistency in level of description and critical tone.

On reading N.’s response, I first thought that I (as the editor) must have made some mistake, for one cannot help but come away with the impression that Karl Golemo’s review savaged this CAI package and that its critical style was inappropriate (thus, according to N., G. “carps” and “trumpets” and “punishes”; indulges in “crass hyper-criticism”; and is “unable to conceive” of a “helpful” motive). In fact a rereading proves that the opposite is the case. The review is extremely respectful, on the whole quite positive, and twice takes note of the very helpful “teacher stacks” (the “authoring tool” whose supposed neglect seems to be the central point of N.’s complaint). G. does make the occasional critical observation, but that, I take it, is a normal part of a critical review.

N.’s central complaint is that the review focuses on the exercises which he supplies “as a courtesy, to save time and to get you started,” and that it does not adequately address the central point of the software, namely that it “is really an authoring tool whereby teachers can easily make their own exercises to go with the textbook.” In reply I can only quote from the “blurb” which N. himself distributes.

The blurb reads as follows:
If you teach Greek 1 with the JACT Cambridge texts (Reading Greek), or know someone who does, please ftp yourself a copy of these stacks, which let your students drill and test themselves on (a lot of) the exercises and (all of) the forms and vocab from these texts.
These stacks, now four years in use, are a system for students to do most of the exercises in the JACT Reading Greek textbook, as well as to drill forms and vocab, and for teachers to modify the exercises and / or create new ones. To put it in a nutshell:
For MOST of the exercises in the JACT Greek, and ALL the vocab and drilling, YOU DON’T HAVE TO GO OVER IT IN CLASS ANY MORE! Why waste time checking the results of the student’s exercises when the computer can do it for you? Plus it’s more fun for them, private, convenient, etc. Works great; I have made graphs of students’ grades vs. time doing these computer exercises, and they show a clear correlation.

N. complains that G.’s review concentrates on the stacks themselves, and treats the ability to modify and create new stacks merely as a feature of what is distributed rather than as the central point of the distribution. But, as the above shows, this clearly is no more than a reflection of N.’s own documentation.

These comments should not be taken to detract from N.’s CAI package, which was, after all, positively reviewed. N. should be particularly commended for distributing these materials as freeware. No one intends to say anything but “thank you” for what N. has freely offered. But inasmuch as the materials are useful, and are used, they should be subject to the same critical review as any other scholarly tool.

I am personally very concerned about the general lack of critical review for computer software and data used by Classicists, especially since computer-aided tools are becoming increasingly central to what we do as scholars and teachers. Ibycus was, to my knowledge, reviewed only once. Most of the commonly used databases, and the software tools for accessing them, have been reviewed once or not at all. When I set out to construct a computer lab locally with a variety of instructional software, I was able to locate very little by way of review. This set of reviews was accordingly offered in a spirit of constructive criticism, with the hope that more reviews would follow elsewhere, thereby offering a multiplicity of viewpoints. I am saddened and perplexed to see that, in at least one case, the criticism was not taken in the way it was intended.