Copyright 2000, Neal R. Wagner. All rights reserved.

Many forms of academic dishonesty are found in programming courses. This article looks at plagiarism of programs: when students copy all or part of a program from some source and submit the copy as their own work. This includes students who collaborate and submit similar work. Such plagiarism is widely believed to be common, though its true extent is hard to assess. Quoting from [12]: ``As with Nessie of Loch Ness there does not seem to be a clear picture of this beast, only a variety of eyewitness reports.''

This article focuses on two events, referred to here as the UTSA experiment and the MIT incident. During the fall 1990 and spring 1991 semesters at the University of Texas at San Antonio, an experiment detected 29 students in a data structures course involved in plagiarism. In the spring 1990 semester, the Massachusetts Institute of Technology disciplined 73 students for cheating in a beginning programming course [1]. In both cases there was a special check for copying that was not common practice at the school and was unanticipated by the students. These events span the spectrum from a smaller state university to a prestigious private school, and in both cases the students involved ranged from freshmen to seniors. The events suggest a significant plagiarism problem in programming courses everywhere -- perhaps more of a problem than many realize.

Plagiarism of student programs differs fundamentally from ordinary plagiarism in ways that make it easier to carry out and harder to detect. In contrast with a student writing an English term paper, beginning programming students typically work in the same environment on the same problem, and one expects similar resulting programs. The output data may have an exact required form. Students can copy and exchange programs in machine-readable form, and can use an editor to make extensive changes that do not affect program execution.

The terms ``copying'' and ``plagiarism'' refer here to unequivocal cases of copying an entire program or most of a program. Even if many edited changes have been made, there must be a structural similarity between the two versions. This article uses the criterion that someone knowledgeable in computer science should be convinced beyond reasonable doubt that the similarities result from copying. In the event of suspected copying, the school or department would have to apply its own criteria for plagiarism. What seems obvious to a computer scientist might not be convincing to someone else.

This article first considers what students do when they copy and why they do it, including an exhaustive list of reasons for copying. Next the two events mentioned above are examined in detail. Finally the article presents extensive recommendations. The focus is on prevention, rather than detection and punishment. To quote from the excellent article [12]: ``Preventing the incidence of cheating is far preferable to expending effort establishing that cheating has occurred and taking action against the students involved. When it is possible to do so without compromising educational quality, courses and supporting computer usage [should] be organized to avoid situations in which students might be pressured or tempted to cheat.'' The present article covers some of the same material as [12], though with a focus on two striking and sobering instances of plagiarism. One conclusion is that an unannounced and unexpected check for plagiarism in any programming course might uncover a surprising amount.
As the author of this paper, I am not trying to claim some self-righteous moral high ground. Students who plagiarize must be dealt with firmly but compassionately, with an understanding of their right to a presumption of innocence. In the end plagiarism must be resisted one way or another, because it undermines learning.
What Students Do

Academic dishonesty is not limited to plagiarism -- consider the various ways of cheating on exams. By definition the most successful episodes go undetected. Certainly students can show incredible ingenuity. For example, the author encountered a student in a large-section computer course who, after an exam was passed back, retyped it on new ditto masters, ran off a single copy, and filled in mostly correct answers from the answer sheet. As a final twist, the student ``graded'' it, arriving at a grade of 84, when the actual grade had been 48. The student later produced this new exam, claiming that the teaching assistant had transposed digits in recording the grade. This student was caught only because the new exam, though typed with amazing accuracy, was in the wrong typeface.

Students commonly copy all or part of a program for an assignment. They copy from other current students, from past students, from files of old programs, and from textbooks and other sources of programs. Some of this copying might be acceptable to certain instructors. Students also get help from others in writing or debugging their programs. The amount of help given and the level of help acceptable to an instructor vary greatly.

As another common practice, students edit a program's output before printing. Even an otherwise reasonable student may complete 90% of the code, producing most of the correct output, and then fabricate the remaining 10% of the code along with its correct output. This practice argues in favor of having students mail their program source for compilation and execution by the instructor.
How Students Copy

Some students submit a copy of another student's program with almost no changes -- perhaps only the name changed. (In the UTSA experiment, one student forgot to change the name.) Other students go to great effort to make the copied program appear different from the original. Some of the changes they make are listed in Table 1. Some students obtain program source in electronic form, usually from someone cooperating with them, or less frequently by breaking into a computer account. Others will copy from a listing, either being given the listing, stealing it, or finding a discarded one. Collaborating students may work together on a single source program.

As an example, two separate programs turned in during the UTSA experiment had many similarities, including the executable statement ``count := count'' as the target of an if-statement in one program and ``Counts := Counts'' as the target of the corresponding if-statement in the other. This is a bit of redundant insanity that has no effect on program execution. Most likely the students did not want to negate the if expression, so they supplied a do-nothing assignment in the unwanted branch. The same unusual construct in both programs is what one might call a ``smoking gun'' -- an unequivocal indication of copying.
Why Students Copy

Students copy a program because they have trouble writing it themselves (lack of ability), or because they do not have time to finish it (lack of time), or because they want a better result. Table 2 presents a surprisingly large number of factors that add to the chances of copying. Each item is followed by typical justifying comments. There seem to be enough factors here for every student to find some reason to copy -- perhaps that is part of the problem. Table 5 examines these factors with a goal of altering them to lessen the likelihood of copying.

Many of the motivations from Table 2 were found among MIT's students. The article [1] refers to a ``sense of entitlement'' developed by some students -- they felt they had put in so much time that they should get a good grade. One student is quoted as saying: ``You could check for cheating in any class and you'd certainly find a significant portion of the people cheating -- I think it's one way of getting through MIT.'' One senior mechanical engineering student said he ``felt under intense pressure because everyone else was getting 100's and his program did not quite work.'' One official is quoted as saying: ``What is emerging from our experience is a sense that many MIT students see the institute as an obstacle course set up by the faculty. Many feel that the required work is clearly impossible by straightforward means, and that any means that makes survival possible is allowed. We found students who felt that the major problem was getting caught.''
The UTSA Experiment

The author devised an experiment at UTSA to look for copied programs in all sections of a data structures course over two semesters. Students received the following written rules about copying: ``In practice, for this course, you may discuss assignments in general terms, but you are not allowed to share any details of actual algorithms or of program code. You may help someone else debug their program as long as you do not start substituting in your own code when there are problems. Turning in a copy of someone else's program, even a copy with extensive changes made to it, is a very serious offense in this course.''

Students were told to hand in a program listing and to E-mail the program source. They were not told about the experiment but were told in writing that mailing the source was ``another basic course requirement, to resolve any possible questions or problems.'' The author suggested to the instructors that they exercise ``normal'' vigilance in checking for copying, i.e., no elaborate checking and no checking at all across sections. The instructors knew about the experiment but understood that no results would be available until after they turned in grades. In setting up the experiment, the author decided to use the results only for statistical purposes and not to initiate any disciplinary action.
The MIT Incident

During the spring 1990 semester, 239 students finished the course ``CE 1.00: Introduction to Computers and Problem Solving'' at MIT. This course, described as ``popular but difficult'' in [1], was required of the six civil engineering majors enrolled, but is taken as an elective by about 40% of MIT undergraduates. The enrollment figures in Table 4 show that many students take the course early in their studies. Non-civil-engineering majors take the course because it satisfies a science distribution requirement and helps in finding employment.

The instructor, Professor Nigel Wilson, did not give written rules for the course (he does now), but said [1] he had ``spelled out what he felt were clear guidelines on acceptable collaboration.... It would be all right [for students] to work together in the initial stages of each assignment .... But ... it would not be acceptable for students to collaborate on writing the programs.'' Help was available from teaching assistants, and students were encouraged to seek this help when they got stuck. They could also get help from other students to get around specific bugs -- they just could not code jointly.

Late one night a student gave Professor Wilson a tip that there was a plagiarism problem in the course. Professor Wilson is quoted as saying [1]: ``The student had worked very hard and was very frustrated that others were getting more credit than they deserved.'' Source for assignments had been submitted both in hard copy and electronically. Professor Wilson and his teaching assistant first monitored one assignment and, after seeing the results, extended the monitoring to the four remaining ones.
Software to Detect Plagiarism

Ideally it should be harder to copy successfully than to write the program from scratch, and software detection should work even if students know the particular detection algorithm. The problem at issue is a special case of file or string comparison. There is a large body of literature on such problems [11], most of which is not of use here because of the changes that are often made to copied programs. Reference [3] gathers statistics about program features, but this approach makes too weak a comparison and generates too many false alarms. Reference [6] considers a complex comparison of the structure of programs as trees of procedures.

At MIT, the software used a statistical approach similar to [3], with an objective function based on the number of for's, and's, if's, else's, and or's. Possible duplicates identified by this function were examined visually.

The author and two undergraduate students at UTSA used the approach of [5], along with a preprocessing stage that removed comments, made letters lowercase, deleted names except for reserved words, replaced constant names by the constant, prettyprinted, and removed material in quote marks. Then for each pair of resulting files, the software counted the number of lines that occur exactly once in each file.

Dick Grune of Vrije Universiteit in Amsterdam has written a sophisticated program to test for plagiarism in a variety of languages. This software does a drastic transformation of each source program to a much shorter character string, and then, for each pair of programs, finds matching substrings of decreasing lengths. It does a good job of detecting plagiarism and is available via anonymous ftp [4]. The author used this tool to recheck all the programs in the UTSA experiment.
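To make the UTSA-style comparison concrete, here is a minimal sketch of such a check for Pascal sources, written in Python. It is an illustration under stated assumptions, not the experiment's actual software: the reserved-word list is partial, the function names are invented for this sketch, and the real preprocessing also replaced constant names by their values and prettyprinted the result.

    import re
    import sys
    from collections import Counter

    # Partial set of Pascal reserved words kept during normalization
    # (an assumption -- the paper does not list the exact set used).
    RESERVED = {
        "and", "array", "begin", "case", "const", "div", "do", "downto",
        "else", "end", "for", "function", "if", "mod", "not", "of", "or",
        "procedure", "program", "record", "repeat", "then", "to", "type",
        "until", "var", "while",
    }

    def normalize(source):
        """Preprocess in the spirit of the UTSA experiment: strip comments
        and quoted material, lowercase, and replace every identifier that
        is not a reserved word by a fixed token."""
        text = source.lower()
        text = re.sub(r"\{.*?\}|\(\*.*?\*\)", " ", text, flags=re.S)  # Pascal comments
        text = re.sub(r"'[^']*'", "''", text)                         # quoted material
        text = re.sub(r"[a-z_][a-z0-9_]*",
                      lambda m: m.group(0) if m.group(0) in RESERVED else "id",
                      text)
        # Crude stand-in for prettyprinting: one statement per line,
        # whitespace canonicalized.
        lines = (" ".join(ln.split()) for ln in text.replace(";", ";\n").splitlines())
        return [ln for ln in lines if ln]

    def shared_unique_lines(a, b):
        """Count normalized lines occurring exactly once in each file.
        A high count relative to file size is a signal worth a closer look."""
        ca, cb = Counter(normalize(a)), Counter(normalize(b))
        return sum(1 for ln, n in ca.items() if n == 1 and cb.get(ln) == 1)

    if __name__ == "__main__":
        with open(sys.argv[1]) as f1, open(sys.argv[2]) as f2:
            print(shared_unique_lines(f1.read(), f2.read()))

In the experiment such counts served only to identify candidate pairs; as at MIT, anything flagged by the software was still examined visually before anyone concluded that copying had occurred.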
Results of the UTSA Experiment

The UTSA experiment uncovered an alarming amount of copying -- far more than the author expected (see Table 3). Eight students in the fall and 21 in the spring were involved, though involvement does not necessarily mean a student knew that copying occurred. Two ``A'' students reported four incidents of apparently stolen listings. In each case a student involved in other episodes of copying later submitted a copy of the stolen program. Two collaborating students who turned in copied programs mailed their source at nearly identical times -- in one case the timestamps differed by 1.5 seconds (see the sketch at the end of this section).

From the stored data it would be possible to retrieve the names of the students involved. There are students who did not earn their grade, though sorting out who copied from whom would be a daunting task, even involving students who have graduated and left town. Also, two instructors gave permission to carry out the experiment with the understanding that only statistics would ever be released. For these reasons the author has decided not to follow up on any students. Some individuals who reviewed this article expressed misgivings about course failures caused by hidden detection systems. Others were upset that nothing was done to discipline any dishonest students.
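The near-identical mailing times mentioned above suggest a cheap corroborating signal that is easy to automate. The sketch below is illustrative only; the log format, student identifiers, and one-minute threshold are assumptions, not part of the experiment's actual tooling.

    from datetime import datetime
    from itertools import combinations

    # Hypothetical submission log: (student id, time the source was mailed).
    submissions = [
        ("s01", datetime(1991, 3, 4, 23, 58, 12)),
        ("s02", datetime(1991, 3, 4, 23, 58, 13)),
        ("s03", datetime(1991, 3, 5, 9, 14, 2)),
    ]

    THRESHOLD_SECONDS = 60  # flag pairs mailed within a minute of each other

    def near_simultaneous(log, threshold=THRESHOLD_SECONDS):
        """Return pairs of students whose submissions arrived almost together.
        Such pairs are not proof of copying, only a reason to look closer."""
        flagged = []
        for (a, ta), (b, tb) in combinations(log, 2):
            if abs((ta - tb).total_seconds()) <= threshold:
                flagged.append((a, b))
        return flagged

    print(near_simultaneous(submissions))  # e.g. [('s01', 's02')]

A flagged pair is of course not evidence by itself -- it is merely one more reason to compare the two submissions closely.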
Results of the MIT Incident

At MIT an astounding amount of copying occurred -- 80 of the 239 students enrolled were referred to the MIT Committee on Discipline. In the end, 73 students were disciplined. Quoting the instructor, Nigel Wilson [1]: ``... an eight-month investigation determined that [two] students stole others' [listings], and at least one student [obtained another's password] and stole the program electronically. But the majority of the 73 students worked together in some fashion, and it was difficult to tell who did what work. A great majority felt what they had done was inappropriate. But some felt it was O.K., and some had not understood my instructions. They had genuinely engaged in joint coding.''

The largest collaborating group was 8, but 2 was the most typical size. About 10% of the submitted assignments were copies with no changes (except the name). Probation was the dominant punishment, but several students were suspended for one semester. No students were expelled. About half of the reduced grades were lowered to zero, and the remaining ones were cut by 50%. Table 4 gives additional statistics related to the incident.
General Recommendations

There are two ways to reduce plagiarism. One is the hardline strategy: the instructor must catch students who copy and make sure they never copy again. Such approaches involve vigilance, no concessions, and strong punishment. Advocates of this approach might object to elements of a course that lessen the need to copy, and might favor ``entrapment'' strategies, where apparent opportunities to copy are clever traps for copiers. Many people feel this hardline approach does not create a good atmosphere for learning. Reference [12] recommends avoiding ``an oppressive environment in which students are constantly aware that they must not even give the appearance of cheating.''

Instead, this article promotes a compassionate strategy: try to eliminate copying by changing the factors that induce students to copy. Make no mistake, the author is not proposing to condone copying. One needs firm policies for dealing with instances of copying, including a graduated scale of punishments. Also, one ought to be able to explain to students why they should not copy. The real issue is learning to program, and copying is one of many things that get in the way of learning. In catching and punishing copiers, one is not directly promoting any learning. The summary recommendation: emphasize prevention rather than punishment.
Strategies to Lessen Copying

This section suggests specific actions to lessen a student's perceived need to copy or to lessen the likelihood of copying. The items of Table 5 mirror the reasons for copying given in Table 2. Table 6 lists additional actions. Many of these recommendations would improve a course independent of the copying issue. Table 5 and Table 6 can be used as exhaustive checklists. Several items in Tables 5 and 6 are controversial or deserve further comment.

The issue of deadlines and late programs is complex. No deadlines at all, or too lenient a policy toward late programs, is detrimental -- students have many demands on them from other courses. If deadlines are not tight, a late program can start a domino effect, where each program suffers in quality or is late due to the work required by the previous late program. Some people argue that a policy of tight deadlines with no late programs accepted is best for the students; they say this matches the situation in industry. If this is the policy, it should be clearly stated, and there should also be a policy for students with unusual problems. One compromise allows students to work past the deadline, but they must turn in something, their best effort, by the deadline.

Surely all programmers have encountered a persistent bug that was difficult to isolate. Students have particular problems with such bugs -- finding them and recovering from them is an important part of their education. But one should provide help in debugging and should be flexible in special circumstances.

The level of difficulty of programming assignments is another problem area. Some instructors add features until an assignment is quite hard. They may add artificial features that serve only to require a more difficult program. The author recommends the opposite: taking a substantial task and adding artificial simplifying assumptions that make the program easier to complete. Instructors should also consider assignments that focus on a single topic or goal.

Getting the rules disseminated, understood, and agreed to is hard. This was a problem in the MIT incident [1], where a student said Professor Wilson's instructions on what was acceptable collaboration and what was cheating were unclear, whereas Professor Wilson felt he had given clear (verbal) guidelines. Rules must be carefully crafted to be clear, complete, and unambiguous, and they must be written. A colleague of the author once gave a class written rules stating in part: ``You are to complete your assignments individually .... The work you turn in must be your own work.'' He also said verbally: ``Do not work together. In case of any doubt about what is acceptable, come talk to me first.'' At a university hearing, the lawyer for a student accused of collaborating argued successfully that the rules were not clear. The student completed his work individually, it was argued, even though he did not start it individually. The work he handed in was his own work -- and also someone else's.

This last anecdote illustrates the increasing fear of litigation. Students and faculty now operate under a new set of rules, where a student's case might be defensible not because of whether the student actually cheated, and not because of whether the student knew he or she was cheating, but because the instructor could not prove that the student was undeniably cheating and could not prove that the student knew the cheating was against the rules.
The need for ethics education has been recognized repeatedly, both by the CSAB accrediting agency and in the new Computing Curriculum 1991 [13]. The author recommends a short segment (several lectures introducing ethical issues) in the first or second programming course, and another short segment in an advanced course such as software engineering. Students in such segments carry on a surprisingly lively discussion. In fact they often expect no controversy initially -- they expect agreement with their position and are themselves surprised by the opinions of their peers. Good texts for segments on ethics are [2], [7], and [8]. See [9] for a short article that would stimulate discussion. The ACM Self-Assessment Procedure dealing with ethics in computing [14] contains the ACM Code of Ethics and case studies. See also [10] for case studies.

Unfortunately some computer scientists regard ethics instruction as a waste of time. They argue that there is no room in the syllabus for ethics, and that it would be hard to present in a uniform and responsible manner. They also argue that teaching ethics will not make students behave ethically. All these reasons conspire to keep ethics out of the curriculum. Ideally, a department should have one or two faculty members responsible for helping students discuss ethics. Then students and faculty alike would realize that no one is trying to make them behave according to some ethical norm. The goal is for students to think about these issues and form their own opinions.
What to do in Case of Copying

Now suppose you are an instructor faced with copying. Table 7 gives actions to consider. Some instructors feel the best outcome is to avoid a formal complaint, perhaps by giving a zero grade on the assignment instead. This is often a quick and convenient resolution, but not necessarily the best course -- a student may later contest the outcome, and there is no record of the offense and no possibility of an escalated penalty in case of a repetition.

A common outcome of an interview with two students who submit copied programs is for both to deny any wrongdoing. One could ask the students to explain their code, as recommended in [12], but such a small ``oral exam'' puts a lot of pressure on students. A second common outcome of an interview is for students to admit to some collaboration but to deny line-by-line copying, even in the face of clear line-by-line similarity. One should be careful here: for example, a teaching assistant may have helped several students with exactly the same suggested code.

The discovery of plagiarism can be disconcerting and frustrating, even demoralizing. Support from peers, university discipline committees, and administrators may be weak. Student reactions can be unsettling, as students deny any plagiarism or justify their actions with no sign of remorse, with faked remorse, or with remorse only at getting caught -- or as they bring their own lawyer to a hearing. The actions listed in Table 7 often fall due when the instructor has many other commitments.
Why Not Copy?

If one feels strongly that students should not copy, then one ought to give them reasons. Table 8 attempts an answer. In the end one hopes that each student finds reasons for not copying. These issues could also be discussed in an ethics segment in the computer science curriculum.
Sample Forms

Table 9 presents a sample rules form, asking students to sign to indicate their understanding. One could use a similar form that asks them to agree to abide by the rules, or a form with no signature at all. In contrast, Table 10 promotes a more humane strategy involving a commitment, almost like a contract.
Conclusions

This article has described two significant plagiarism events: an experiment at the University of Texas at San Antonio that detected cases of program copying, and an unexpected plagiarism incident at the Massachusetts Institute of Technology. A great deal of copying occurred during the two semesters of the UTSA experiment, and even more took place in the MIT incident. These results suggest that copying may be widespread. The article has also presented reasons why students copy and strategies to lessen this copying, suggesting positive alternatives to threats of punishment.

In summary, the author recommends that schools make educational goals foremost and that they emphasize prevention. A school should study its plagiarism problem carefully, interviewing faculty and students and reviewing all the items of Table 5 and Table 6. The author particularly suggests various improvements -- to the hardware, to the working environment, to assignments, and to the curriculum, including the addition of ethics instruction. Finally, a school should use software to detect plagiarism.
Acknowledgements

The author first thanks Walt Moore and Chris Kanute, who worked on the software in the UTSA experiment. Nigel Wilson provided information about the MIT incident. Jeff Popyack read several revisions and gave many useful suggestions. Additional help came from George Butchee, Dave Eberly, Bob Hazy, Dennis Kern, Hugh Maynard, Myles McNally, and Holly Roe.
References
Table 1. Changes Students Make to Copied Programs.
Table 2. Factors Contributing to Copying.
(Refer to Table 5 for strategies to lessen the need and the opportunities to copy.)
I. Course organization.
II. School environment, CS curriculum, computer consultants.
Table 3. Results of the UTSA Experiment.
Fall 1990 (3 sections), Spring 1991 (3 sections)
Explanation: On each assignment line, students who submitted similar programs are marked with the same digit. Thus students 4, 5, and 13 submitted similar programs for assignment two, and they received course grades of ``A'', ``F'', and ``W''. Italics = copying detected during the term. Further notes:
Table 4. Results of the MIT Incident.
Introduction to Computers and Problem Solving (CE 1.00)
Notes:
Table 5. Strategies to Lessen the Need and the Opportunities to Copy