User:Cnapun

Personal Information

  • Case Western Reserve University Class of 2019
  • Applied Mathematics and Computer Science Major

Weekly Log

Week 0 (30 May - 2 Jun)

  • Met with Dr. Povinelli four times: to discuss possible project topics, get oriented in the lab, decide on a project topic, and establish weekly milestones
  • Obtained MU and MSCS account logins and ID card
  • Read most of "Learning Deep Architectures for AI"
  • Read the Sequence Modeling chapter (Ch. 10) of the Deep Learning book
  • Read various other papers

Week 1 (5 Jun - 9 Jun)

  • Attended GasDay camp and learned what GasDay does
  • Attended responsible conduct of research training
  • Continued to read papers
  • Decided to use TensorFlow and Keras for now
  • Got Anaconda, TensorFlow, and Keras installed on a couple of lab computers
  • Learned how to access customer data

Week 2 (12 Jun - 16 Jun)

  • Started experimenting with TensorFlow, mostly using the GEFCom2014-E dataset so that I can keep working when not in the lab
  • Things I got working in TensorFlow:
    • n-layer sequence-to-sequence (seq2seq) model (encoder-decoder architecture); see the sketch below
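
For reference, here is a minimal sketch of the encoder-decoder idea using the Keras functional API (not my actual training code; the univariate input, state size, and single-layer depth are illustrative assumptions):

  from keras.layers import Dense, Input, LSTM
  from keras.models import Model

  n_features = 1   # assumption: univariate series, e.g. hourly load
  latent_dim = 64  # assumption: illustrative LSTM state size

  # Encoder: consume the input window and keep only its final LSTM state.
  encoder_inputs = Input(shape=(None, n_features))
  _, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)

  # Decoder: emit the forecast horizon, initialized from the encoder state.
  # Stacking more LSTMs (all with return_sequences=True) gives the n-layer variant.
  decoder_inputs = Input(shape=(None, n_features))
  decoder_seq = LSTM(latent_dim, return_sequences=True)(
      decoder_inputs, initial_state=[state_h, state_c])
  outputs = Dense(n_features)(decoder_seq)  # one prediction per decoder step

  model = Model([encoder_inputs, decoder_inputs], outputs)
  model.compile(optimizer='adam', loss='mse')
  model.summary()

During training, the decoder input is the target sequence shifted back by one step (teacher forcing); at forecast time the decoder is stepped one output at a time, feeding each prediction back in as the next input.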

Reading

Here are some of the papers I have read, skimmed, or partially read:

Forecasting with Deep Learning

Hybrid Methods

Review Papers

LSTMs, Training, and Possible Improvements