Once marking is done (or very nearly done) it is time to reassemble the papers, build the spreadsheet, and return results to students. But before that we need to check that everything is finished, so that we can assign any remaining tasks. We will use `plom-finish` to do this. Note that it can only be run by the “manager” account.
Once we are mopping up the last few questions, it becomes very important to know which tasks are left to do. One way to access this information is through the manager-tools, but a simpler way of getting a quick overview is to use the `status` command:

```
$ plom-finish status
Please enter the 'manager' password:
** Completion data **
Produced papers: 20
Scanned papers: 20 (currently)
Completed papers: 1–20
Identified papers: 1–20
Totalled papers: 1–20
Number of papers with 0 questions marked = 0. Tests numbers =
Number of papers with 1 questions marked = 0. Tests numbers =
Number of papers with 2 questions marked = 0. Tests numbers =
Number of papers with 3 questions marked = 20. Tests numbers = 1–20
20 of 20 complete
```
This shows that everything is actually done.
Now that everything is done, Plom can build a CSV spreadsheet for us:

```
$ plom-finish csv
Please enter the 'manager' password:
>>> Warning <<<
This script currently outputs all scanned papers whether or not they have been marked completely.
Marks written to "marks.csv"
```
Please do note the warning — Plom will include all scanned papers in this sheet. While you can run this at any stage in the marking process, the sheet will not be complete until the marking is all done.
The sheet is saved as `marks.csv` and is human-readable. Take a quick look at the first few rows:
| StudentID | StudentName | TestNumber | Question 1 Mark | Question 2 Mark | Question 3 Mark | Total | Question 1 Version | Question 2 Version | Question 3 Version | Warnings |
It contains the student's ID and name, the number of the test-paper they wrote, their marks for each question, and the total. It also includes the version of each question and a “Warnings” column. This last one will warn you:

- `[unidentified]`: this test has not yet been identified
- `[unmarked]`: at least one question on this test is unmarked
- `[no ID]`: no ID was given on the test, but some questions were answered
- `[blank ID]`: no ID was given and the test is blank
It should not be too difficult to tweak the resulting spreadsheet for upload into your favourite LMS (or at least the one you have to use).
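Since `marks.csv` is a plain CSV file, a few lines of Python are enough to separate out flagged rows and reshape the rest for upload. The sketch below writes a tiny sample file for illustration (not real data), and the `upload.csv` column names are made-up placeholders; check the import format your own LMS actually expects:

```python
import csv

# A tiny sample marks.csv for illustration (not real data); the real
# file comes from "plom-finish csv" and has more columns.
sample = [
    {"StudentID": "10000001", "StudentName": "Ada L.", "Total": "25", "Warnings": ""},
    {"StudentID": "10000002", "StudentName": "Alan T.", "Total": "", "Warnings": "[unmarked]"},
]
with open("marks.csv", "w", newline="") as f:
    w = csv.DictWriter(f, fieldnames=sample[0].keys())
    w.writeheader()
    w.writerows(sample)

# Split rows into clean ones and those flagged in the Warnings column.
with open("marks.csv", newline="") as f:
    rows = list(csv.DictReader(f))
clean = [r for r in rows if not r["Warnings"]]
flagged = [r for r in rows if r["Warnings"]]

# Write a minimal upload sheet; "Student ID" / "Grade" are
# hypothetical column names, not a real LMS format.
with open("upload.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["Student ID", "Grade"])
    for r in clean:
        w.writerow([r["StudentID"], r["Total"]])

print(len(clean), "clean;", len(flagged), "flagged")
```

The same pattern extends to whatever column renaming or grade scaling your LMS requires.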
Once everything is IDd and marked, and you’ve done any necessary mopping up and reviewing, it is time to reassemble all the annotated page-images into complete papers with simple cover-pages. Run `plom-finish reassemble`. But first a quick caveat:
NOTE: To reassemble the annotated page-images into papers you must work in the same directory in which `plom-server` ran. This is because `plom-finish` needs access to every annotated page-image. Doing this over the internet would place a considerable load on your network and would also be considerably slower. On with the show.

```
$ plom-finish reassemble
Please enter the "manager" password:
Reassembling 20 papers...
100%|████████████████████████████████████████████████████████| 20/20 [00:04<00:00, 4.16it/s]
>>> Warning <<<
This still gets files by looking into server directory.
In future this should be done over http.
```
Note that for a long paper and a large class this could take some time (despite being parallelised). The resulting papers now reside in `reassembled`. Each is named `<testName>_<studentID>.pdf`, where `<testName>` is the short name that you gave your test in the specification and `<studentID>` is the student's ID-number. Here is a sample paper (very obviously not real data, nor real annotations).
Returning the PDFs to your students turns out to be slightly more complicated than one would like. Unfortunately, one cannot simply email out hundreds of PDFs without running the risk of being designated a spam-server (not to mention that sending private information over email is a bit of a no-no). And since every LMS is awful in its own special way, it is not at all easy to get PDFs into the right place inside the LMS via some API.
Consequently we return the tests with as little LMS interaction as possible, while still maintaining a good level of security. More precisely, Plom

- assigns each student a short random 12-digit “return code”,
- copies each student’s paper to a file whose name includes both their ID and their return code, and
- builds a simple website into which students can type their ID and return-code, and which passes back their paper.
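The idea behind the coded filenames can be sketched in a few lines. This is not Plom's actual implementation, just an illustration of the scheme (the function names here are made up): a uniformly random 12-digit code gives about 10¹² possibilities, so a student cannot realistically guess another student's file URL.

```python
import secrets

def make_return_code(ndigits: int = 12) -> str:
    # A uniformly random 12-digit code: ~10**12 possibilities,
    # far too many to find by guessing URLs.
    return "".join(secrets.choice("0123456789") for _ in range(ndigits))

def coded_filename(test_name: str, student_id: str, code: str) -> str:
    # Mirrors the naming visible in the plom-finish output:
    # <testName>_<studentID>_<returnCode>.pdf
    return f"{test_name}_{student_id}_{code}.pdf"

print(coded_filename("plomdemo", "10000001", make_return_code()))
```

Note the use of `secrets` rather than `random`: return codes are security tokens, so they should come from a cryptographically strong source.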
To do all this we simply run `plom-finish` again, now with the `webpage` command:

```
$ plom-finish webpage
We will take pdf files from "reassembled".
Generating return codes spreadsheet...
extracting the following columns: ['StudentID', 'StudentName']
The return codes are in "return_codes.csv"
Searching for foo_<studentnumber>.pdf files in reassembled...
found SN XXXX: code YYYY, copying "plomdemo_XXXX.pdf" to "codedReturn/plomdemo_XXXX_YYYY.pdf"
...
Copied (and renamed) 20 files
Adding codedReturn/index.html file
All done! Next tasks:
  * Copy "codedReturn/" to your webserver
  * Privately communicate info from "return_codes.csv"
  * Read docs about the security implications of all this.
```
where we have replaced each student number with `XXXX` and each return code with `YYYY`.
The list of return codes has been assembled in the new `return_codes.csv` file, which is of the form
and so on. You will need to upload the codes to your LMS so that your students can access them.
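Getting the codes into the LMS is again just CSV reshaping. A minimal sketch, with two loud assumptions: the sample `return_codes.csv` content and its column names are made up for illustration (the real file is produced by `plom-finish webpage`), and the `codes_for_lms.csv` columns are a hypothetical import format, not any particular LMS's:

```python
import csv

# Sample return_codes.csv for illustration; the real file comes from
# "plom-finish webpage" and its exact column names may differ.
with open("return_codes.csv", "w", newline="") as f:
    w = csv.writer(f)
    w.writerow(["StudentID", "StudentName", "Return Code"])
    w.writerow(["10000001", "Ada L.", "123456789012"])

# Reshape into a hypothetical LMS import sheet: ID plus a text column
# holding the code ("Plom Return Code" is a made-up column name).
with open("return_codes.csv", newline="") as f_in, \
        open("codes_for_lms.csv", "w", newline="") as f_out:
    w = csv.writer(f_out)
    w.writerow(["Student ID", "Plom Return Code"])
    for row in csv.DictReader(f_in):
        w.writerow([row["StudentID"], row["Return Code"]])
```

One practical tip: make sure the LMS column holding the code is a *text* column, or a leading zero in a code may be silently stripped.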
You will also find a new subdirectory `codedReturn`, which contains the renamed reassembled PDFs and a simple webpage for serving those PDFs to students. The directory is self-contained and just needs to be uploaded to a webserver accessible to your students.
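Before uploading, you may want to preview the page locally. Since the directory is just static files, any static webserver will do; the sketch below uses Python's built-in `http.server`, and fabricates a stand-in `codedReturn/index.html` so it runs on its own (in real use the directory comes from `plom-finish webpage`):

```python
import functools
import http.server
import pathlib
import threading
import urllib.request

# Stand-in content so this sketch is self-contained; in real use
# codedReturn/ already exists and should not be touched.
pathlib.Path("codedReturn").mkdir(exist_ok=True)
pathlib.Path("codedReturn/index.html").write_text("<html>Plom return page</html>")

# Serve codedReturn/ on a free local port in a background thread.
handler = functools.partial(http.server.SimpleHTTPRequestHandler, directory="codedReturn")
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the index page once as a smoke test, then shut down.
port = server.server_address[1]
page = urllib.request.urlopen(f"http://127.0.0.1:{port}/index.html").read().decode()
print(page)
server.shutdown()
```

In practice you would simply open `http://localhost:<port>/` in a browser and try the form by hand before copying the directory to your real webserver.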
We show the standard automagically-built-by-Plom webpage in action in the frame below. To try it out, use the following student number / return code pairs (also try entering an incorrect ID/code pair and see the result):
| Student ID | Return Code |