User:Matthew

From REU@MU
Revision as of 15:17, 15 June 2023 by Matthew (Talk | contribs)


Tuesday, May 30

  • met new people
  • heard some people talk
  • ate some Panera
  • heard more talking
  • Started to get up to speed with TA Bot project.
  • In particular, I read a paper ("Experiences with TA-Bot in CS1") highlighting experiences with using TA Bot in Marquette CS classes, and read a survey that was given out to assess the effectiveness of the TA Bot.
    • starting to understand the leveling system: it only concerns test cases, and the levels represent test difficulty (higher levels cover corner/edge cases, for example)
    • big emphasis on the TBS system to encourage students to start work early, but many did not like it
    • how are the learning outcomes affected? how do we measure good learning outcomes?
  • Also, got started with looking at the database of submission scores from TA Bot, and am beginning to look at Python libraries that will help me manipulate this data.
    • using base code as a reference
    • pandas library: I understand the very basics of series and data frames
    • need to understand sorting/grouping/splitting
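The sorting/grouping operations I still need can be sketched with toy data (the column names here are my own stand-ins, not the real TA Bot schema):

```python
import pandas as pd

# Toy stand-in for the TA Bot submissions table; real column names differ.
subs = pd.DataFrame({
    "student":     ["a", "a", "b", "b", "b"],
    "assignment":  [1, 1, 1, 1, 1],
    "score":       [40, 100, 10, 70, 100],
    "lint_errors": [5, 2, 9, 6, 3],
})

# A single column is a Series; the whole table is a DataFrame.
scores = subs["score"]

# Sorting: order submissions by score.
by_score = subs.sort_values("score")

# Grouping/splitting: one group per (student, assignment) pair.
per_student = subs.groupby(["student", "assignment"])["lint_errors"].mean()
```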

May 31, 2023

  • heard some talking
  • getting comfortable with `pandas` and getting pertinent parts of the TA Bot submissions database
  • brainstorming ideas for comparisons/visualizations we want between TBS and non TBS semesters to assess positive/negative student outcomes
    • right now, we focus on the effects on linter errors: how much do linter errors go down with TBS vs. without, and do students correct linter errors even after attaining 100% on an assignment?
  • made a graph comparing the average reduction in linter errors from a student's first submission to their last, per assignment, with TBS vs. no TBS
    • assignments 1-5 show a clear trend: TBS semesters had a higher reduction in linter errors
    • assignments 6-10 are not so clear. Brylow: either students aren't making as many errors or they are just not correcting them
  • some other data gathered, needing visualizations
    • students submit far fewer times on average using TBS for a given assignment
    • students tend to resubmit more often after reaching 100% without TBS though the numbers are both low
    • we also studied the number of linter errors reduced after reaching 100% with/without TBS, but the data does not make any overarching trends entirely clear (that might also help explain the 1-5/6-10 disparity)
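A minimal sketch of the first-to-last reduction metric above, on toy data with made-up column names (submissions assumed already ordered by time within each group):

```python
import pandas as pd

# Toy submissions log, ordered by submission time within each group.
subs = pd.DataFrame({
    "student":     ["a", "a", "a", "b", "b"],
    "assignment":  [1, 1, 1, 1, 1],
    "lint_errors": [8, 5, 2, 6, 6],
})

g = subs.groupby(["student", "assignment"])["lint_errors"]
# Reduction = errors in the first submission minus errors in the last.
reduction = g.first() - g.last()

# Average reduction per assignment; run separately on the TBS and
# non-TBS semesters' data, then compare the results side by side.
avg_reduction = reduction.groupby(level="assignment").mean()
```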

First of June, 2023

  • refactored visualization code
  • made visualizations of data from two more semesters
    • those semesters did not use TBS
    • improvements not very obvious, but the two new semesters did not use the same projects, so other factors may be at play
  • thought of idea for new visualizations
    • looking at percent change of linter errors reduced instead of just the number reduced
    • instead of comparing to a student's first submission (which may be a throwaway test or a mess, so it's unreliable), look at submissions beyond a certain scoring threshold (like 70%)
  • meeting with Dr. Islam
  • read the following papers studying failure rates of introductory CS courses:
    • "My Program is Correct But it Doesn’t Run: A Preliminary Investigation of Novice Programmers’ Problems"
    • "Failure Rates in Introductory Programming Revisited"
    • "Pass Rates in Introductory Programming and in other STEM Disciplines"
    • "Failure Rates in Introductory Programming — 12 Years Later"
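The percent-change idea above could look roughly like this on toy data (the 70% threshold filters out throwaway early submissions; column names are my stand-ins):

```python
import pandas as pd

# Toy data; only submissions at or above the 70% threshold count,
# so a throwaway first submission doesn't skew the baseline.
subs = pd.DataFrame({
    "student":     ["a", "a", "a"],
    "assignment":  [1, 1, 1],
    "score":       [30, 80, 100],
    "lint_errors": [10, 8, 2],
})

serious = subs[subs["score"] >= 70]
g = serious.groupby(["student", "assignment"])["lint_errors"]
# Percent change instead of a raw count, so assignments of different
# sizes stay comparable.
pct_reduced = (g.first() - g.last()) / g.first() * 100
```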

Friday, June 2 2023

  • created new visualizations comparing reductions in pylint errors between submissions that score 70% or more, and submissions that are passing
    • clear data that suggests TBS is helping reduce more pylint errors
    • further work is needed on the later assignments: is TBS helping students create fewer linter errors there (so that they wouldn't have many to fix)?
  • talked to Dr. Brylow about the results so far
    • without TBS there was also no grade for linters (so do students really reduce linter errors when they pass all correctness tests?)
    • need to move on from looking at just averages and start looking at measures of spread and outliers in linter numbers
    • also got many tips on writing the paper and telling a story about the data with the visualizations
    • who are these overachievers?
  • made pie graphs representing percentage of students with passing submissions who resubmitted
    • total is about 20%. need further analysis on who these people are
    • comparing the number of linter errors for students who submitted only once vs. multiple times
  • worked on visualizations regarding students reducing pylint errors even after getting all the test cases, comparing this to students who did not resubmit
    • in Fall 2021: students who did not resubmit had a lower average number of linter errors than the resubmitters (comparing first passing submissions); the resubmitters later lowered their linter counts to numbers comparable to the non-resubmitters'
    • in Spring 2022: no large trends showing that resubmitters resubmitted to lower their pylint error counts; values stayed the same as the non-resubmitters' (and still remain larger than in the TBS semester)
  • RCR training

6/5/2023

  • RCR talk with Brylow
  • read paper "Investigating Static Analysis Errors in Student Java Programs" in preparation for presentation the next day

Tues 6/6

  • created line graph of number of pylint errors on average per day before the due date
    • no useful information gained
  • met with Dr. Islam
    • talking about paper
    • looking at the data more and seeing new patterns emerge
    • qualitative and quantitative data
    • reading SIGCSE papers from the last 5 years
  • listened to paper summary presentations and gave my own
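The per-day line graph above reduces to one groupby (days-before-deadline is assumed precomputed per submission; column names are made up):

```python
import pandas as pd

# Toy log; days_before_due is assumed precomputed for each submission.
subs = pd.DataFrame({
    "days_before_due": [3, 3, 2, 1, 1, 1],
    "lint_errors":     [9, 7, 6, 4, 2, 0],
})

# Average pylint errors for each day leading up to the deadline;
# daily_avg.plot() would draw the line graph.
daily_avg = (subs.groupby("days_before_due")["lint_errors"]
                 .mean()
                 .sort_index(ascending=False))
```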

06/07

  • untangled two kinks with the data
    • TA submissions were previously counted when computing the statistics; now they're excluded
    • Fall 2021 was taught by two different instructors, one of whom was the sole instructor for Spring 2022
  • grades in Spring 2022 were much lower than grades in Fall 2021
    • the grades for the instructor who taught both semesters were the same
    • still, a good result in looking at the reduction of pylint errors
    • may remove the extra instructor to keep the comparisons between F21 and S22 pure
  • looked up Blockly for presentation
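The two data kinks above amount to filtering, roughly like this (the `is_ta` and `section` columns are invented stand-ins for however the real database flags TA accounts and course sections):

```python
import pandas as pd

# Toy table; is_ta and section are my invented stand-in columns.
subs = pd.DataFrame({
    "student": ["ta1", "a", "b", "c"],
    "is_ta":   [True, False, False, False],
    "section": ["x", "x", "x", "y"],
})

# Kink 1: drop TA submissions before computing any statistics.
clean = subs[~subs["is_ta"]]

# Kink 2: keep only the section of the instructor who taught both
# semesters, so the F21-vs-S22 comparison is apples to apples.
shared = clean[clean["section"] == "x"]
```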

Jun 08

  • looked more at overall project grade distribution
  • looking at progressions through the week
    • seeing if students submit earlier/later with TBS
    • if scores are better/getting better with TBS
    • if pylint errors are going down more quickly over the week with TBS
  • presenting on Blockly and hearing presentations on other elementary learning tools
  • talked to Dr. Islam
  • talked to Dr. Brylow
  • writing nice notes on the findings/visualizations so far

June 9 (Friday)

  • read the following papers:
    • "Investigating Static Analysis Errors in Student Java Programs"
    • "Experiences with Marmoset: Designing and Using an Advanced Submission and Testing System for Programming Courses"
    • "Can Industrial-Strength Static Analysis Be Used to Help Students Who Are Struggling to Complete Programming Activities?"
  • identifying/dealing with other irregularities with the data
  • answered the following questions:
    • are students submitting earlier with TBS? yes
    • are students passing earlier with TBS? yes
  • students are submitting more in the early days, but this evens out by the last few days (think about this more)
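The earlier-submission/earlier-passing questions reduce to per-student extremes, sketched here with stand-in column names:

```python
import pandas as pd

# Toy log; larger days_before_due means earlier in the week.
subs = pd.DataFrame({
    "student":         ["a", "a", "b", "b"],
    "days_before_due": [6, 1, 2, 1],
    "score":           [50, 100, 90, 100],
})

# Earliest day each student touched the assignment at all...
first_sub = subs.groupby("student")["days_before_due"].max()
# ...and the earliest day they had a passing (100%) submission.
first_pass = (subs[subs["score"] == 100]
                  .groupby("student")["days_before_due"].max())
# Comparing these two distributions between TBS and non-TBS semesters
# answers the "submitting earlier / passing earlier" questions.
```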

Twelfth Day of June

  • prepared presentation for weekly paper summary
  • visualizations to show if students were submitting earlier or passing earlier with/without TBS (Tina F21 vs. S22)
    • looking at trends throughout the week
  • for submitting earlier:
    • students are definitely beginning projects earlier; however, the lead that TBS has over non-TBS diminishes over the course of the week
    • looking to reduce the number of students who start the day before: a noticeable but not massive reduction
    • does it depend on how difficult the assignment is?
  • for passing earlier: some students passing sooner in the week, but many still able to pass on the last day (lead shrunk)
  • also looking at trends for pylint/points averages throughout the week
  • updating onenote notes

The Ides of June (13)

  • reorganizing current work into a Google Colab notebook
  • gave presentation with Brylow group
  • discussed future project and meeting work for next week, and the Connecticut "travel"
  • also discussed irregularity in spring 2022 assignment timeline
    • Brylow is stumped
    • punt on the issue

Flag Day (6/14)

  • took a survey
  • heard a sample research presentation
  • created a table showing the following information:
    • statistics for total number of submissions, average submissions per student, pass rates
    • submissions per day, new submissions per day (and percent of totals)
    • new unique students making submissions per day
    • how quickly students are passing
  • discussing Scratch and Blockly with Brylow and John; preliminary ideas for a presentation next week
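The per-day table above can be built with a named aggregation plus a first-submission count (column names are mine, not the real schema):

```python
import pandas as pd

# Toy log; "day" is the day of the assignment window.
subs = pd.DataFrame({
    "student": ["a", "b", "a", "c"],
    "day":     [1, 1, 2, 2],
})

per_day = subs.groupby("day").agg(
    submissions=("student", "size"),
    unique_students=("student", "nunique"),
)
# "New" students on a day: those whose first-ever submission is that day.
first_day = subs.groupby("student")["day"].min()
per_day["new_students"] = (first_day.value_counts()
                                    .reindex(per_day.index, fill_value=0))
```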