Difference between revisions of "User:Quinci Henry"
Revision as of 08:18, 13 June 2018
5/29/18: I attended the inaugural orientation meeting. Dr. Brylow clarified expectations for this summer. I then met Dr. Guha, Chris Supinger, and the rest of our research team. Dr. Guha elaborated on our goals. I briefly reviewed some clustering algorithms, specifically the K-means and hierarchical methods. I watched some videos covering the basics of Python.
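For my own reference, a minimal sketch of the K-means idea I reviewed (Lloyd's algorithm), using only the standard library; the sample points and starting centroids are made-up illustrations, not project data:

```python
# Minimal K-means (Lloyd's algorithm) sketch: repeatedly assign each
# point to its nearest centroid, then recompute each centroid as the
# mean of its assigned points. Data below is purely illustrative.
import math

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assignment step: group points by nearest centroid.
        clusters = [[] for _ in centroids]
        for p in points:
            distances = [math.dist(p, c) for c in centroids]
            clusters[distances.index(min(distances))].append(p)
        # Update step: move each centroid to its cluster's mean
        # (keep the old centroid if a cluster ends up empty).
        centroids = [
            tuple(sum(coord) / len(cluster) for coord in zip(*cluster))
            if cluster else c
            for cluster, c in zip(clusters, centroids)
        ]
    return centroids

points = [(1, 1), (1.5, 2), (8, 8), (9, 9)]
print(kmeans(points, centroids=[(0, 0), (10, 10)]))
# → [(1.25, 1.5), (8.5, 8.5)]
```

Hierarchical methods differ in that they build a tree of merges (or splits) instead of iterating to a fixed number of centers.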
5/30/18: My fellow REU students, Katy Weathington, Laura Schultz, and Dominique, met with Chris Supinger, Dr. Guha's graduate assistant. Chris showed us the raw data we are to clean. He then explained that we will use Python's regular expressions and dictionaries to make addresses API-friendly. He also roughly defined the legal terminology [i.e. guilty, not guilty, dismissed with prejudice, dismissed without prejudice, & suspended] accompanying cases in our data. Later in the evening we received the data as well as some fundamental criminology readings via Dropbox. I skimmed over an article intended to explain "hotspots", a crime mapping staple.
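A hypothetical sketch of the kind of cleanup Chris described: regular expressions to normalize whitespace, case, and punctuation, plus a dictionary to expand abbreviations. The abbreviation table and sample address below are my own illustrative assumptions, not the project's actual rules:

```python
# Sketch of address normalization with regex + a dictionary lookup.
# ABBREVIATIONS and the sample address are illustrative assumptions.
import re

ABBREVIATIONS = {"ST": "STREET", "AVE": "AVENUE", "N": "NORTH", "S": "SOUTH"}

def clean_address(raw):
    addr = raw.strip().upper()
    addr = re.sub(r"\s+", " ", addr)    # collapse runs of whitespace
    addr = re.sub(r"[.,]", "", addr)    # drop stray punctuation
    # Expand any abbreviated tokens via the dictionary.
    words = [ABBREVIATIONS.get(w, w) for w in addr.split(" ")]
    return " ".join(words)

print(clean_address("  123 n.  Water st. "))
# → 123 NORTH WATER STREET
```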
5/31/18: I thoroughly read the introductory part of the "understanding hotspots" article and examined an intuitive table included in it. Later, I began a running list of words that may be helpful in my research efforts. At Dr. Guha's bidding, I downloaded several software packages and read some tutorials covering their applications. I then continued Python familiarization, particularly with regular expressions.
6/3/18: I continued reading over articles shared by our team in Dropbox.
6/4/18: The meeting in Cudahy Hall today gave me a better view of the big picture for this project, and I am now more aware of our immediate direction. I continued foundational reading on the pros and cons of a broad range of clustering algorithms. The interdisciplinary readings [criminological, computational, sociological] are converging on a tidy intersection. I am glad to have discovered common threads today, as the literature review will be significantly expedited going forward.
6/5/18: My fellow REU students and I met with Dr. Guha. He elaborated on the philosophical premises of our research. He gave a brief exposition on the history of Milwaukee in terms of its economy, demographics, and public policy. A point he highlighted several times was that, contrary to inexpert sentiment, algorithms lose objectivity upon user interaction. This was a segue into a small project we are to complete this week: we are going to classify clustering algorithms by the extent to which the user encounters decision points.
6/7/18: The entire REU group had our weekly working lunch. We were essentially given an outline for an academic oral presentation along with a summary of "do's and don'ts" for such a presentation. Katy, a data science major here at Marquette, helped me create a Jupyter notebook. She also briefly went over some of the relevant work she had been doing prior to the start of this project. We devised a collaborative approach to complete the classification project. I did some very basic Python tutorials in an effort to further internalize the syntax. I did my final preliminary literature reviews concerning criminology [social disorganization theory, routine activity theory].
6/8/18: Arbitrary data manipulation in Python.
6/11/18: Last week Chris mentioned that we need to pitch in as we use Google's geocoding API to clean the dataset. I began to investigate how I can do that efficiently. I continued to familiarize myself with Python. I also completed my contribution to last week's project in the form of a PowerPoint draft, and the team collaborated on a presentation plan.
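A note on how a cleaned address would reach the Geocoding API. The endpoint below is Google's documented one; the address and the `API_KEY` placeholder are my own illustrative assumptions, and no request is actually sent in this sketch:

```python
# Sketch: build a request URL for Google's Geocoding API.
# API_KEY is a placeholder; the request is constructed but not sent.
from urllib.parse import urlencode

GEOCODE_ENDPOINT = "https://maps.googleapis.com/maps/api/geocode/json"

def geocode_url(address, api_key):
    """Return the GET URL that would geocode one address."""
    return GEOCODE_ENDPOINT + "?" + urlencode({"address": address, "key": api_key})

print(geocode_url("123 NORTH WATER STREET, MILWAUKEE, WI", "API_KEY"))
```

Fetching that URL returns JSON whose `results` entries carry the matched coordinates; batching and rate limits are the part I still need to investigate for efficiency.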
6/12/18: Today the group presented to Dr. Guha and received a new project for this week. We will be implementing the algorithms temporally.