International Journal of Computer & Software Engineering Volume 6 (2021), Article ID 6:IJCSE-165, 7 pages
https://doi.org/10.15344/2456-4451/2021/165
Research Article
Special Issue: Computational Analysis and Modeling
Improving Accuracy of Out-of-Distribution Detection and In-Distribution Classification by Incorporating JSD Consistency Loss

Kaiyu Suzuki and Tomofumi Matsuzawa*

Department of Information Sciences, Tokyo University of Science, Chiba, Japan
*Corresponding author: Prof. Tomofumi Matsuzawa, Department of Information Sciences, Tokyo University of Science, Chiba 278-8510, Japan; E-mail: t-matsu@is.noda.tus.ac.jp
Received: 07 June 2021; Accepted: 23 June 2021; Published: 25 June 2021
Citation: Suzuki K, Matsuzawa T (2021) Improving Accuracy of Out-of-Distribution Detection and In-Distribution Classification by Incorporating JSD Consistency Loss. Int J Comput Softw Eng 6: 165. doi: https://doi.org/10.15344/2456-4451/2021/165
