User:Matthew

Tuesday, May 30

  • met new people
  • heard some people talk
  • ate some Panera
  • heard more talking
  • Started to get up to speed with the TA-Bot project.
  • In particular, I read a paper ("Experiences with TA-Bot in CS1") highlighting experiences with using TA-Bot for Marquette CS classes, and read a survey that was given out to assess the effectiveness of TA-Bot.
    • starting to understand the leveling system: it only concerns test cases, and the levels represent difficulty of the tests (higher levels cover corner/edge cases, for example)
    • big emphasis on the TBS system to encourage students to start work early, but many did not like it
    • how are the learning outcomes affected? how do we measure good learning outcomes?
  • Also, got started looking at the database of submission scores from TA-Bot, and am beginning to look at Python libraries that will help me manipulate this data.
    • using base code as a reference
    • pandas library: I understand the very basics of series and data frames
    • need to understand sorting/grouping/splitting (see the sketch below)
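
A minimal `pandas` sketch of those three operations, assuming a hypothetical CSV export with student, assignment, timestamp, score, and pylint_errors columns (not the real TA-Bot schema):

    import pandas as pd

    # Load the submissions export; the path and column names are assumptions.
    subs = pd.read_csv("submissions.csv", parse_dates=["timestamp"])

    # Sorting: order each student's attempts chronologically.
    subs = subs.sort_values(["student", "assignment", "timestamp"])

    # Grouping: one group per (student, assignment) pair.
    attempts = subs.groupby(["student", "assignment"])
    print(attempts["score"].max().head())  # best score per pair

    # Splitting: pull out a single assignment to inspect on its own.
    hw1 = subs[subs["assignment"] == 1]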

May 31, 2023

  • heard some talking
  • getting comfortable with `pandas` and pulling the pertinent parts out of the TA-Bot submissions database
  • brainstorming ideas for the comparisons/visualizations we want between TBS and non-TBS semesters to assess positive/negative student outcomes
    • right now, we focus on the effects on linter errors: how much linter error counts go down using TBS vs. no TBS, and whether students correct linter errors even after attaining 100% on an assignment
  • made a graph comparing the average reduction in linter errors from a student's first submission to their last, per assignment, with TBS vs. no TBS (sketch at the end of this entry)
    • a clear trend in assignments 1-5 showed that TBS semesters had a higher reduction in linter errors
    • assignments 6-10 are not so clear. Brylow: either students aren't making as many errors or they are just not correcting them
  • some other data gathered, needing visualizations
    • students submit far fewer times on average using TBS for a given assignment
    • students tend to resubmit more often after reaching 100% without TBS, though the numbers are low in both cases
    • we also studied the reduction in linter errors after reaching 100% with/without TBS, but the data does not show any clear overarching trends (which might also help explain the 1-5/6-10 disparity)
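
The sketch mentioned above, a rough version of the first-to-last reduction metric; it reuses the hypothetical columns from the earlier sketch and is not the actual analysis code:

    import pandas as pd

    def avg_linter_reduction(subs: pd.DataFrame) -> pd.Series:
        """Average drop in pylint errors from each student's first to last
        submission, per assignment (column names are assumptions)."""
        subs = subs.sort_values("timestamp")
        per_pair = subs.groupby(["assignment", "student"])["pylint_errors"]
        reduction = per_pair.first() - per_pair.last()
        return reduction.groupby(level="assignment").mean()

    # Compare semesters, e.g. avg_linter_reduction(tbs_subs) vs.
    # avg_linter_reduction(no_tbs_subs), then plot the two per assignment.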

First of June, 2023

  • refactored visualization code
  • made visualizations of data from two more semesters
    • those semesters did not use TBS
    • improvements not very obvious, but the two new semesters did not use the same projects, so other factors may be at play
  • thought of idea for new visualizations
    • looking at the percent change of linter errors reduced instead of just the raw number reduced
    • instead of comparing to a student's first submission (which may be a test run or a mess, and so unreliable), look at submissions beyond a certain scoring threshold, like 70% (sketch at the end of this entry)
  • meeting with Dr. Islam
  • read the following papers studying failure rates of introductory CS courses:
    • "My Program is Correct But it Doesn’t Run: A Preliminary Investigation of Novice Programmers’ Problems"
    • "Failure Rates in Introductory Programming Revisited"
    • "Pass Rates in Introductory Programming and in other STEM Disciplines"
    • "Failure Rates in Introductory Programming — 12 Years Later"
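
The sketch mentioned above, combining both ideas (percent change plus the 70% threshold from this entry); the column names are still the hypothetical ones:

    import pandas as pd

    def pct_linter_reduction(subs: pd.DataFrame, threshold: float = 70.0) -> pd.Series:
        """Percent reduction in pylint errors per (assignment, student),
        baselined at the first submission scoring >= threshold instead of
        the very first submission (column names are assumptions)."""
        subs = subs.sort_values("timestamp")
        good = subs[subs["score"] >= threshold]
        per_pair = good.groupby(["assignment", "student"])["pylint_errors"]
        first, last = per_pair.first(), per_pair.last()
        baseline = first.replace(0, float("nan"))  # avoid divide-by-zero
        return (first - last) / baseline * 100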

Friday, June 2, 2023

  • created new visualizations comparing reductions in pylint errors between submissions that score 70% or more, and submissions that are passing
    • clear data that suggests TBS is helping reduce more pylint errors
    • further study is needed on the later assignments: is TBS helping students create fewer linter errors in the later semesters (so that they wouldn't have many left to fix)?
  • talked to Dr. Brylow about the results and the paper
    • without TBS there was also no grade for linters (so do students really reduce linter errors when they pass all correctness tests?)
    • need to move on from looking at just averages and start looking at measures of spread and outliers in linter numbers
    • also got many tips on writing the paper and telling a story about the data with the visualizations
    • who are these overachievers?
  • made pie graphs representing the percentage of students with passing submissions who resubmitted (sketch at the end of this entry)
    • the total is about 20%; need further analysis on who these people are
    • comparing the number of linter errors for people who submitted only once vs. multiple times
  • worked on visualizations of students reducing pylint errors even after passing all the test cases, compared to students who did not resubmit
    • in Fall 2021: students who did not resubmit had a lower average number of linter errors than students who passed (compared at their first passing submission), and the resubmitters later lowered their linter counts to levels comparable to the non-resubmitters
    • in Spring 2022: no large trends showing that resubmitters resubmitted to lower their pylint error counts; values stayed the same as the non-resubmitters (and still remain larger than in the TBS semester)
  • RCR training
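
The sketch mentioned above, one way the resubmitter share behind the pie graphs could be computed; passing is assumed to mean a score of 100, and the column names are the hypothetical ones from earlier:

    import pandas as pd

    def resubmit_flags(subs: pd.DataFrame) -> pd.Series:
        """One boolean per (assignment, student) pair that reached a passing
        score: did the student submit again after first passing?"""
        subs = subs.sort_values("timestamp")
        # Keep only the pairs that passed at least once.
        passed = subs.groupby(["assignment", "student"]).filter(
            lambda g: (g["score"] >= 100).any())

        def resubmitted(g: pd.DataFrame) -> bool:
            first_pass = g.loc[g["score"] >= 100, "timestamp"].min()
            return bool((g["timestamp"] > first_pass).any())

        return passed.groupby(["assignment", "student"]).apply(resubmitted)

    flags = resubmit_flags(subs)        # `subs` as in the earlier sketches
    print(flags.mean())                 # share of resubmitters (about 0.20 here)
    flags.value_counts().plot.pie()     # the pie graph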

6/5/2023

  • RCR talk with Brylow
  • read the paper "Investigating Static Analysis Errors in Student Java Programs" in preparation for the presentation the next day

Tues 6/6

  • created a line graph of the average number of pylint errors per day before the due date (sketch at the end of this entry)
    • no useful information gained
  • met with Dr. Islam
    • talking about the paper
    • looking at the data more and seeing new patterns emerge
    • qualitative and quantitative data
    • reading SIGCSE papers from the last 5 years
  • listened to paper summary presentations and gave my own
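
The sketch mentioned above, a rough version of that line graph; it assumes hypothetical timestamp, due_date, and pylint_errors columns:

    import matplotlib.pyplot as plt
    import pandas as pd

    def plot_daily_pylint(subs: pd.DataFrame) -> None:
        """Average pylint errors vs. number of days before the due date."""
        days_early = (subs["due_date"] - subs["timestamp"]).dt.days
        daily = subs.groupby(days_early)["pylint_errors"].mean().sort_index()
        ax = daily.plot(marker="o")
        ax.invert_xaxis()  # so the due date sits at the right edge
        ax.set_xlabel("days before due date")
        ax.set_ylabel("average pylint errors")
        plt.show()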

06/07

  • untangled two kinks with the data
    • TA submissions were previously counted when computing the statistics; they are now excluded
    • Fall 2021 was taught by two different instructors, and one of them was the sole instructor for Spring 2022
  • grades in Spring 2022 were much lower than grades in Fall 2021
    • the grades of the instructor who taught both semesters were the same
    • still, a good result in looking at the reduction of pylint errors
    • may remove the extra instructor to keep the comparisons between F21 and S22 pure
  • looked up Blockly for the presentation

Jun 08

  • looked more at overall project grade distribution
  • looking at progressions through the week
    • seeing if students submit earlier/later with TBS (sketch at the end of this entry)
    • if scores are better/getting better with TBS
    • if pylint errors are going down more quickly over the week with TBS
  • presenting on Blockly and hearing presentations on other elementary learning tools
  • talked to Dr. Islam
  • talked to Dr. Brylow
  • writing nice notes on the findings/visualizations so far
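
The sketch mentioned above, a rough version of the submit-earlier check; the column names are the hypothetical ones used in the earlier sketches:

    import pandas as pd

    def first_submission_lead(subs: pd.DataFrame) -> pd.Series:
        """Days before the due date of each student's first submission,
        per assignment (assumes no missing values in these columns)."""
        first = (subs.sort_values("timestamp")
                     .groupby(["assignment", "student"])
                     .first())
        return (first["due_date"] - first["timestamp"]).dt.days

    # Compare the distributions between semesters, e.g.:
    #   first_submission_lead(tbs_subs).describe()
    #   first_submission_lead(no_tbs_subs).describe()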

June 9 (Friday)

  • read the following papers:
    • "Investigating Static Analysis Errors in Student Java Programs"
    • "Experiences with Marmoset: Designing and Using an Advanced Submission and Testing System for Programming Courses"
    • "Can Industrial-Strength Static Analysis Be Used to Help Students Who Are Struggling to Complete Programming Activities?"
  • identifying/dealing with other irregularities with the data
  • answered the following questions:
    • are students submitting earlier with TBS? yes
    • are students passing earlier with TBS? yes
  • students are submitting more in the early days, but this gap closes by the last few days (needs more thought)

Twelfth Day of June

  • prepared presentation for weekly paper summary
  • visualizations to show if students were submitting earlier or passing earlier with/without TBS (Tina F21 vs. S22)
    • looking at trends throughout the week
  • for submitting earlier:
    • students are definitely beginning projects earlier; however, the lead that TBS has over non-TBS diminishes over the course of the week
    • looking to reduce the number of students who start the day before: a noticeable but not massive reduction
    • does it depend on how difficult the assignment is?
  • for passing earlier: some students passing sooner in the week, but many still able to pass on the last day (lead shrunk)
  • also looking at trends for pylint/points averages throughout the week
  • updating OneNote notes

The Ides of June (13)

  • reorganizing current work into a Google Colab notebook
  • gave presentation with Brylow group
  • discussed Project Future, meeting work for next week, and the Connecticut "travel"
  • also discussed the irregularity in the Spring 2022 assignment timeline
    • Brylow is stumped
    • punt on the issue

Flag Day (6/14)

  • took a survey
  • heard a sample research presentation
  • created a table showing the following information (sketch at the end of this entry):
    • statistics for total number of submissions, average submissions per students, pass rates
    • submissions per day, new submissions per day (and percent of totals)
    • new unique students making submissions per day
    • how quickly students are passing
  • discussing Scratch and Blockly with Brylow and John; preliminary ideas for a presentation for next week
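
The sketch mentioned above, one way such a summary table can be assembled with `pandas`; the passing score of 100 and the column names are assumptions:

    import pandas as pd

    def summary_table(subs: pd.DataFrame) -> pd.DataFrame:
        """Per-assignment totals, average submissions per student, and
        pass rate."""
        def summarize(g: pd.DataFrame) -> pd.Series:
            students = g["student"].nunique()
            return pd.Series({
                "total_submissions": len(g),
                "avg_per_student": len(g) / students,
                "pass_rate": (g.groupby("student")["score"].max() >= 100).mean(),
            })
        return subs.groupby("assignment").apply(summarize)

    # Submissions per day are a similar groupby on the calendar date:
    #   subs.groupby(subs["timestamp"].dt.date).size()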

15 June 2023

  • made the Scratch/Blockly presentation for next week
  • weekly meeting with Dr. Islam, discussing results and more work to do
    • suggestions for new features for TA-Bot
    • computed statistics for the survey data
    • compiled student concerns from the survey data
  • computed fall 2022 weekly submission statistics

Return of Jack (16)

  • identified some common themes in the TA-Bot survey data
  • read "Pedal: An Infrastructure for Automated Feedback Systems" and started making a presentation on it for next week
  • more submission statistics charts (need to make line graphs later)
    • Spring 2023 data
    • Fall 2021/Spring 2022 data with assignments bunched together (to fix ordering issues)
    • Split up F22 and S23 data for students who eventually ended up passing to see if there are any majorly different behaviors

June Nineteenth

  • creating Bitmoji for the presentation
  • presenting on Scratch to the school teachers for Project Future
  • more data to compare Fall 2021 and Spring 2022 passing students' submission data
    • Students are beginning projects sooner, making more submissions before the last day, also passing sooner
    • Pass rates overall are roughly the same (no big increase across the board), which is problematic
    • for some assignments, no one who started on the last day passed (in either semester): a benefit of TBS forcing students to start sooner
    • students who started earlier were more likely to pass, even in non-TBS semester
  • Conclusion: students have a better chance of passing if they start early, which TBS forces them to do
  • also looking at Justin's Fall 2021 data
  • got assignment descriptions for Fall 2021-Fall 2022
    • looking at topics added in each assignment
    • hoping to group them up based on length/topic group/complexity

2023-06-20

  • constructed some line graphs of the weekly submission trends
  • talked with teachers about the use of Scratch in the classroom
  • rerouted towards developing a long division practice game for 4th/5th graders
  • developed the model for said long division game (sketch below)
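
The sketch mentioned above: the real model lives in the Project Future game code, so this is purely a hypothetical illustration of the digit-by-digit state a long-division practice game has to track:

    def long_division_steps(dividend: int, divisor: int) -> list[dict]:
        """Hypothetical sketch of a long-division game model: one dict of
        expected work per digit of the dividend."""
        steps, remainder = [], 0
        for digit in str(dividend):
            remainder = remainder * 10 + int(digit)  # bring down the digit
            q = remainder // divisor                 # divide
            steps.append({"working": remainder,
                          "quotient_digit": q,
                          "subtract": q * divisor})  # multiply step
            remainder -= q * divisor                 # subtract step
        steps.append({"remainder": remainder})
        return steps

    # e.g. long_division_steps(1234, 5) yields quotient digits 0, 2, 4, 6
    # with remainder 4; the view renders each step and the controller
    # checks the student's answers against it.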

6/21/23

  • fixed bugs in the game model and added some new features
  • attended another research talk and group meeting
  • helping to set up the view and controller for the game

22

  • rewrote significant portions of the model
  • helped get the visual elements up and running

23/6

  • presented the prototype to the teachers
  • gathered feedback for next version

6/24

  • getting info on new projects

6/25

  • starting work on new projects
  • working on research mini presentation

6/26

  • working on playground
  • presenting research progress so far

6/27

  • meeting with Dr. Islam
  • working on playground
  • working on mood machine

6/28

  • presenting playground

7/3

  • playground 2.0

7/4

  • playground 2.0
  • America

7/5

  • playground 2.0
  • hamburger

7/6

  • Harley-Davidson

7/7

  • some more draggable features on the playground
  • a big meeting on what to do with the Project Future projects
  • GitHubbing

7/10

  • even more GitHubbing (mood meter)
  • set up Overleaf/outline for the TA-Bot paper
  • meeting about the GitHub repos

7/11

  • work on the paper
  • reviewing the past literature
  • meeting with Dr. Islam

7/12

  • Zimmer talked about stuff
  • more literature review
  • meeting with Brylow
  • more work on the paper

7/13

  • more visualizations
  • more work on the paper
  • more meetings with Dr. Islam

7/14

  • writing