Shota Nemoto
University: Case Western Reserve University
Currently pursuing a Bachelor's degree in Computer Science
Currently working with Dr. Perouli on the project: Identifying Appropriate Machine Learning Models for Multi Robot Secure Coordination in a Healthcare Facility.
Weekly Logs
Week 1
- Read papers on neural network inversion and the HopSkipJump attack
- Read abstracts of the HumptyDumpty and MemGuard papers
- Attended Orientation
- Filled out pre-REU survey
- Reviewed Python skills
- Learned basics of Pandas and DataFrame manipulation
- Learned basics of creating and evaluating machine learning models (a brief sketch of both appears after this list)
- Learned basic security considerations for applications
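A minimal sketch of the Pandas and scikit-learn basics covered this week is shown below. The toy dataset, column names, and choice of logistic regression are illustrative assumptions only, not part of the REU project itself.

```python
# Minimal sketch of basic DataFrame manipulation and model evaluation.
# The toy data, column names, and model choice are illustrative assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Basic DataFrame creation and manipulation
df = pd.DataFrame({
    "feature_a": [0.1, 0.4, 0.35, 0.8, 0.95, 0.2, 0.6, 0.75],
    "feature_b": [1.0, 0.9, 0.7, 0.3, 0.1, 0.8, 0.4, 0.2],
    "label":     [0,   0,   0,   1,   1,   0,   1,   1],
})
df["feature_sum"] = df["feature_a"] + df["feature_b"]   # derived column
positives = df[df["label"] == 1]                        # boolean filtering
print("positive examples:", len(positives))
print(df.groupby("label")["feature_sum"].mean())        # simple aggregation

# Basic model creation and evaluation
X = df[["feature_a", "feature_b"]]
y = df["label"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
model = LogisticRegression()
model.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
```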
Week 2
- Attended Responsible Conduct of Research Training
- Attended Technical Writing instruction session
- Completed CITI Modules
- Began tutorial on Temi Robot SDK
- Read papers:
  - Procedural Noise Adversarial Examples For Black-Box Attacks on Deep Convolutional Networks (https://dl.acm.org/doi/10.1145/3319535.3345660)
  - Certified Robustness to Adversarial Examples with Differential Privacy (https://www.computer.org/csdl/proceedings-article/sp/2019/666000a726/19skfWzmB1K)
- Read introductions and conclusions for papers:
  - Latent Backdoor Attacks on Deep Neural Networks (https://dl.acm.org/doi/10.1145/3319535.3354209)
  - Membership Inference Attacks against Adversarially Robust Deep Learning Models (https://ieeexplore.ieee.org/abstract/document/8844607)
  - Seeing isn’t Believing: Towards More Robust Adversarial Attack Against Real World Object Detectors (https://dl.acm.org/doi/10.1145/3319535.3354259)
  - Neural Cleanse: Identifying and Mitigating Backdoor Attacks in Neural Networks (https://www.computer.org/csdl/proceedings-article/sp/2019/666000a530/19skfH8dcqc)