Shota Nemoto
University: Case Western Reserve University
Currently pursuing a Bachelor's degree in Computer Science
Currently working with Dr. Perouli on the project: Identifying Appropriate Machine Learning Models for Multi Robot Secure Coordination in a Healthcare Facility.
Weekly Logs
Week 1
- Read papers on neural network inversion and HopSkipJump attacks.
- Read the abstracts of the HumptyDumpty and MemGuard papers
- Attended Orientation
- Filled out pre-REU survey
- Reviewed Python skills
- Learned basics of Pandas and DataFrame manipulation
- Learned basics of creating and evaluating machine learning models (a short illustrative sketch follows this list)
- Learned basic security considerations for applications
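
The Pandas and scikit-learn basics noted above can be illustrated with a minimal sketch. The DataFrame columns, toy values, and the choice of LogisticRegression are assumptions for illustration only, not the project's actual data or models.

# Minimal sketch of the Pandas and scikit-learn basics mentioned above:
# build a small DataFrame, do basic manipulation, then train and evaluate
# a simple classifier. All column names and values are made up.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Build a toy DataFrame and manipulate it: filter rows and add a derived column.
df = pd.DataFrame({
    "feature_a": [0.1, 0.4, 0.35, 0.8, 0.9, 0.2, 0.7, 0.05],
    "feature_b": [1.0, 0.9, 0.2, 0.1, 0.3, 0.8, 0.15, 0.95],
    "label":     [0,   0,   1,   1,   1,   0,   1,   0],
})
df = df[df["feature_a"] >= 0.0]                            # row filtering
df["ratio"] = df["feature_a"] / (df["feature_b"] + 1e-9)   # derived column

# Split features and labels, fit a simple model, and evaluate it on held-out data.
X = df[["feature_a", "feature_b", "ratio"]]
y = df["label"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression()
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

Evaluating accuracy on a held-out split is only the simplest form of model evaluation; the models and metrics used in the project itself may differ.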
Week 2
- Attended Responsible Conduct of Research Training
- Attended Technical Writing instruction session
- Completed CITI Modules
- Read papers:
  - Procedural Noise Adversarial Examples For Black-Box Attacks on Deep Convolutional Networks (https://dl.acm.org/doi/10.1145/3319535.3345660)
  - Certified Robustness to Adversarial Examples with Differential Privacy (https://www.computer.org/csdl/proceedings-article/sp/2019/666000a726/19skfWzmB1K)
- Read introductions and conclusions for papers:
  - Latent Backdoor Attacks on Deep Neural Networks
  - Membership Inference Attacks against Adversarially Robust Deep Learning Models
  - Seeing isn’t Believing: Towards More Robust Adversarial Attack Against Real World Object Detectors