
The Autograders


The autograder is the raison d'être of the TheoryAssistant package. Two sample autograder programs are provided in this repo, and they are described on this page.

The First Autograder

The first autograder can be found in the SampleGrader/ directory at SampleGrader/src/main/scala/grader/SampleGrader1.scala. It is intended to be used with the first sample solution set at data/solutions1/. The idea behind this autograder is that the "hard work" is completed by the instructor on the front end, while the autograder itself simply parses the solutions and submissions and compares them for equality. This is not necessarily the smartest use of the TheoryAssistant package, but it is a viable one.

Structure of the First Autograder

First, a Map object is created to store the evaluation score for each exercise on the assignment. Next, a Parser object is created, which can parse the files describing the solution and submission machines for the assignment. The file containing the names of the exercises is provided to the application as a command-line argument and is accessible via the args array.
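
A rough sketch of this setup is shown below. The constructor signature of Parser and the package's import path are assumptions made for illustration, not the package's exact API.

```scala
import scala.collection.mutable

object SampleGrader1 {
  def main(args: Array[String]): Unit = {
    // Map from exercise name to the points awarded for that exercise.
    val scores = mutable.Map[String, Double]()

    // Parser for the machine-description files (constructor signature assumed).
    val parser = new Parser()

    // Path to the file listing the exercise names, passed on the command line.
    val assignmentFile = args(0)

    // ... iterate over the exercises, as described in the next section ...
  }
}
```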

The meat of the autograder is contained in the for loop at line 30 of the autograder. The loop is a comprehension over the lines of the assignment file, obtained via the Source.fromFile() method from the scala.io package. Since the method is given the path to the assignment file, each iteration yields the name of one of the exercises in the assignment, which is then passed to the Parser object to parse the corresponding files in the submissions and solutions directories. Parsing those files yields a submission machine and a solution machine. The equals method of the TheoryAssistant DFA class indicates whether the two machines recognize the same language. If they do, the scores data structure is updated to reflect the score for the submission.
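
The loop might look roughly like the sketch below. The file layout, file names, point value, and the parseDFA/equals signatures are assumptions based on the description above.

```scala
import scala.io.Source

// Each line of the assignment file names one exercise. For each one, parse the
// solution and submission machines and compare them for language equality.
for (exercise <- Source.fromFile(assignmentFile).getLines()) {
  val solution   = parser.parseDFA(s"solutions/$exercise")
  val submission = parser.parseDFA(s"submissions/$exercise")

  if (solution.equals(submission))   // true iff the two machines recognize the same language
    scores(exercise) = 10.0          // hypothetical full-credit point value
}
```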

Evaluating an Incorrect Submission

If, on the other hand, the two machines do not recognize the same language, then one of three possibilities is true:

  1. the submission machine accepts some strings which it should not accept,
  2. the submission does not accept all of the strings that it should accept, or
  3. both of the above are true.

The DFA class in the TheoryAssistant package can be used to determine which of the above applies. In the first case, we must check whether there are any strings recognized by the submission machine but not by the solution machine: if so, the intersection of the language of the submission machine with the complement of the language of the solution machine is non-empty, a question we can easily answer using the public API of the TheoryAssistant package. In the second case, we must check whether there are any strings recognized by the solution machine but not by the submission machine: here, the intersection of the language of the solution machine with the complement of the language of the submission machine is non-empty, and again we can test this easily using the public API. Both cases are tested, the score for the exercise is updated, and feedback is printed for the student. In practice it is often more appropriate to write the feedback to a file that can be returned to the student; printing to standard out is done here simply for demonstration purposes.
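
The two checks can be sketched as follows. The intersect() and complement() methods are part of the package's DFA API as described on this page; the isEmpty() emptiness test and the feedback messages are hypothetical and used here only for illustration.

```scala
// Strings accepted by the submission but not by the solution.
val extraStrings   = submission.intersect(solution.complement())
// Strings accepted by the solution but not by the submission.
val missingStrings = solution.intersect(submission.complement())

if (!extraStrings.isEmpty())
  println(s"$exercise: your machine accepts strings outside the target language")
if (!missingStrings.isEmpty())
  println(s"$exercise: your machine rejects strings that belong to the target language")
```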

The Try/Catch Block

It should be noted that the logic executed on each iteration of the for loop is contained within a try/catch block, because three different exceptions might be thrown by the methods called in the program:

  1. If the student does not include a submission for one of the exercises, then the call to parseDFA at line 35 will throw a NoSubmissionFound exception.

  2. If the student's submission contains syntax errors in the description of their machine, then the call to parseDFA will throw a ParserException.

  3. If the student's submission defines a machine with an alphabet different from the alphabet specified in the solution, then the call to the equals method at line 38 will throw a MachineException.

All of these exceptions are handled appropriately in the catch block.
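
The overall shape of that block is sketched below. The exception types come from the TheoryAssistant package as listed above; the file paths and the feedback messages are placeholders.

```scala
try {
  val solution   = parser.parseDFA(s"solutions/$exercise")
  val submission = parser.parseDFA(s"submissions/$exercise")
  // ... comparison and scoring as described above ...
} catch {
  case _: NoSubmissionFound => println(s"$exercise: no submission found")
  case _: ParserException   => println(s"$exercise: submission could not be parsed")
  case _: MachineException  => println(s"$exercise: submission alphabet does not match the solution")
}
```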

The Second Autograder

The second autograder was written to demonstrate the more powerful aspects of the TheoryAssistant package. Here, the solutions to the exercises are more easily implemented as smaller machines that can be combined or manipulated to yield the desired solution. By leveraging this fact together with the public API of the TheoryAssistant package, the instructor performs less work in designing the solutions, effectively handing the hard work off to the TheoryAssistant package. This is a much smarter way to use the package than the approach taken by the first autograder.

The Structure of the Second Autograder

The meat of this autograder is implemented in the compareDFAs method, which accepts two DFAs as parameters (a solution and a submission), compares them, and provides feedback on the comparison. The comparison and feedback logic is identical to that found in the for loop of the first autograder. The point value earned for the exercise is returned by the compareDFAs method.
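
A sketch of such a method is shown below. The parameter list, point values, feedback messages, and the isEmpty() emptiness test are assumptions; intersect(), complement(), and equals() are the API methods described on this page.

```scala
def compareDFAs(exercise: String, solution: DFA, submission: DFA, points: Double): Double = {
  if (solution.equals(submission)) {
    points                                                   // full credit: same language
  } else {
    // Report which direction(s) the submission fails in, as in the first autograder.
    if (!submission.intersect(solution.complement()).isEmpty())
      println(s"$exercise: accepts strings it should reject")
    if (!solution.intersect(submission.complement()).isEmpty())
      println(s"$exercise: rejects strings it should accept")
    0.0                                                      // no partial credit in this sketch
  }
}
```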

Like the first autograder, the second autograder defines a map associating each exercise in the assignment with the point value earned by the submission for that exercise, as well as a Parser object to parse the descriptions of the solution and submission machines for each exercise.

This is where the two autograders differ in implementation. Whereas in the first autograder the instructor performed the hard work of creating each solution machine directly, in the second autograder that work is offloaded to the TheoryAssistant API. Since the solution to problem 1.4a is the intersection of two smaller machines, the two smaller machines are described in the solution files, and the machine that recognizes their intersection is returned by a call to the intersect() method of the DFA class. The hard work is similarly exported to the TheoryAssistant package for problem 1.5c: its solution is the complement of a simpler machine, so instead of implementing the solution directly, the instructor simply describes the simpler machine and uses the TheoryAssistant package to compute the correct solution.
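
In code, these two constructions might look like the sketch below. The file names are placeholders, not the repository's actual paths.

```scala
// Problem 1.4a: the solution is the intersection of two simpler machines.
val partA  = parser.parseDFA("solutions/1.4a-part1")
val partB  = parser.parseDFA("solutions/1.4a-part2")
val sol14a = partA.intersect(partB)

// Problem 1.5c: the solution is the complement of a simpler machine.
val simpler = parser.parseDFA("solutions/1.5c-simple")
val sol15c  = simpler.complement()
```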

The answer to problem 1.5d makes even more interesting use of the TheoryAssistant API. The solution to this exercise is the complement of a much simpler machine; however, that simpler machine is more easily expressed as an NFA than as a DFA. In this case, the NFA is described in the corresponding solution file, and the parser's parseNFA() method is called instead of parseDFA() to construct it. Then, using the complement() and DFAify() methods, the instructor obtains the correct DFA solution for this exercise, which can be compared to the student's submission.
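
One plausible way to chain those calls is sketched below. The ordering (determinize with DFAify(), then complement()) and the file name are assumptions made for illustration.

```scala
// Problem 1.5d: the simpler machine is described as an NFA, and the solution
// is the complement of the language it recognizes.
val simplerNfa = parser.parseNFA("solutions/1.5d-simple")
val sol15d     = simplerNfa.DFAify().complement()
```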

Try/Catch Block

Again, the entire logic of the evaluation method is wrapped in a try/catch block that handles the errors raised by a missing submission, incorrect syntax, or an incongruent alphabet in the submission files. This is critically important to the success of the autograder.

Next Steps

This concludes the wiki orientation to the TheoryAssistant package. Please feel free to use the package and contact the author with any feedback or questions you may have.

Good luck and Godspeed!