J-GLOBAL ID: 202002232294478998  Reference number: 20A0249036

Efficient Learning for Distillation of DNN by Self Distillation

Original title (Japanese): 自己蒸留によるDNNの蒸留の効率化
Author (2):
Material:
Volume: 139  Issue: 12  Pages: 1509-1516 (J-STAGE)  Publication year: 2019
JST Material Number: S0810A  ISSN: 0385-4221  Document type: Article
Article type: Original paper (原著論文)  Country of issue: Japan (JPN)  Language: Japanese (JA)
Thesaurus term:

Semi thesaurus term:

Author keywords (6):
JST classification (2):
Artificial intelligence, Pattern recognition
Reference (18):
  • (1) K. Simonyan and A. Zisserman: “Very deep convolutional networks for large-scale image recognition”, In ICLR (2015)
  • (2) K. He, X. Zhang, S. Ren, and J. Sun: “Deep residual learning for image recognition”, In CVPR (2016)
  • (3) G. Hinton, O. Vinyals, and J. Dean: “Distilling the knowledge in a neural network”, In NIPS 2014 Deep Learning Workshop (2014)
  • (4) R. Anil, G. Pereyra, A. Passos, R. Ormandi, G. E. Dahl, and G. E. Hinton: “Large scale distributed neural network training through online distillation”, In ICLR (2018)
  • (5) A. Romero, N. Ballas, S. E. Kahou, A. Chassang, C. Gatta, and Y. Bengio: “Fitnets: Hints for thin deep nets”, In ICLR (2015)
  • … (remaining 13 references not shown)
Terms in the title (3):
