Revamping the current state of TABOT

From REU@MU
Revision as of 15:03, 6 August 2021 by ANakvosaite

Student Researchers: Alex Gebhard, Jack Forden, Agne Nakvosaite

Mentor: Dennis Brylow

Project Description:

TABOT currently acts as an incentive for students to start their assignments early. As it stands, students submit their code to TABOT, which runs a nightly test-case script that compares their output against predetermined results from a correct implementation of the assignment. When a test case fails, the student's incorrect output is appended to their TABOT result email, along with what the expected output should have been. While this system has motivated some students to start their assignments earlier, a common theme in feedback from past students is the confusing nature of the TABOT email. Because the email only provides students with the expected output, another common complaint is that students don't know what the test cases are actually doing.
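A minimal sketch of the nightly comparison step described above, assuming one input file and one expected-output file per test case; the function name, file layout, and 30-second timeout are illustrative assumptions, not TABOT's actual implementation:

```python
import subprocess
from pathlib import Path

def run_test_case(binary, input_file, expected_file):
    """Run a submission on one test input and compare its output
    against the predetermined expected output. Returns None on a
    pass; on a failure, returns the report text that would be
    appended to the student's result email."""
    result = subprocess.run(
        [binary],
        stdin=open(input_file),
        capture_output=True,
        text=True,
        timeout=30,  # assumed per-test limit
    )
    expected = Path(expected_file).read_text()
    if result.stdout == expected:
        return None  # test case passed
    # On failure, include both outputs, as the TABOT email does.
    return (
        f"FAILED {input_file}\n"
        f"--- expected ---\n{expected}"
        f"--- got ---\n{result.stdout}"
    )
```

A nightly driver would loop this over every test case for every submission and mail the collected failure reports.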

Our goal over the summer is to expand upon the structures that TABOT already provides. We also want to investigate the feasibility of adding a linting script that would promote better, more standardized code. This linting process would encourage better coding practices and general readability. Studies have shown that students who adhere to these coding practices see a notable increase in their grade averages compared to students who do not (Investigating Static Analysis Errors).
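As a sketch of the kind of check such a linting script could perform (a real deployment would more likely wrap an existing linter; the two rules here, line length and trailing whitespace, are illustrative assumptions, not TABOT's actual rule set):

```python
def lint_source(text, max_len=79):
    """Scan source text and return a list of (line_number, message)
    flags. Both checks below are illustrative examples of the
    'standardized code' style rules discussed above."""
    flags = []
    for num, line in enumerate(text.splitlines(), start=1):
        if len(line) > max_len:
            flags.append((num, f"line longer than {max_len} characters"))
        if line != line.rstrip():
            flags.append((num, "trailing whitespace"))
    return flags
```

The (line number, message) pairs this produces are the raw material for the per-line highlighting described in the goals below.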

Project Goals:

  • Create a web-based TABOT interface
  • Upon student submission, test cases would run and their results would be reflected in the website UI
  • Upon student submission, the code would be passed to a linting program that analyzes it and provides "best coding practice" suggestions on how to improve its readability
  • These suggestions would then be mapped to the submission, and each line flagged by the linter would be correspondingly highlighted in the student's submission (visible in the UI)
  • For each linting flag, the student would also be given a link explaining the flagged issue and why it is not best practice, as a way to provide a real-life example of why making the change actually matters (e.g., an incorrect variable name)
  • On demand, a student should be able to click on a failed test case and get a brief synopsis of what that test case is testing
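The mapping and highlighting goals above could be sketched as follows; the function name and the HTML `<mark>` output format are assumptions, and any front end could consume the same line-to-message mapping:

```python
import html

def highlight_submission(source, flags):
    """Map linter flags onto a student's submission for display in
    a web UI: each flagged line is wrapped in a <mark> tag whose
    title attribute carries the linter's suggestion(s)."""
    flagged = {}
    for line_no, message in flags:
        flagged.setdefault(line_no, []).append(message)
    rendered = []
    for num, line in enumerate(source.splitlines(), start=1):
        escaped = html.escape(line)
        if num in flagged:
            title = html.escape("; ".join(flagged[num]))
            rendered.append(f'<mark title="{title}">{escaped}</mark>')
        else:
            rendered.append(escaped)
    return "\n".join(rendered)
```

Keeping the flag-to-line mapping separate from the rendering means the same data could also drive the explanatory links for each flag.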