J-GLOBAL ID:200901099669491268   Update date: Nov. 16, 2018

Yoichi Sato

Affiliation and department:
Research field (1): Perceptual information processing / Intelligent robotics
Papers (233):
  • Jinze Yu, Cewu Lu, Yoichi Sato. Sparsity-based color quantization with preserved image details. SIGGRAPH Asia 2014 Posters, Shenzhen, China, December 3-6, 2014. 2014. 32
  • Minjie Cai, Kris M. Kitani, Yoichi Sato. Understanding hand-object manipulation by modeling the contextual relationship between actions, grasp types and object attributes. CoRR. 2018. abs/1807.08254
  • Yifei Huang, Minjie Cai, Zhenqiang Li, Yoichi Sato. Predicting Gaze in Egocentric Video by Learning Task-dependent Attention Transition. CoRR. 2018. abs/1803.09125
  • Yoichi Sato. Sensing, predicting, and utilizing human visual attention. 4th International Conference on Image Processing Theory, Tools and Applications, IPTA 2014, Paris, France, October 14-17, 2014. 2014. 5
  • Rie Kamikubo, Keita Higuchi, Ryo Yonetani, Hideki Koike, Yoichi Sato. Exploring the Role of Tunnel Vision Simulation in the Design Cycle of Accessible Interfaces. The 15th Web for All Conference, W4A 2018, Lyon, France, April 23-25, 2018, Proceedings. 2018. 13:1-13:10
Awards (6):
  • 2008/12 - IAPR International Conference on Pattern Recognition (ICPR 2008) Best Industry Related Paper Award: Recovering Audio-to-Video Synchronization by Audiovisual Correlation Analysis
  • 2007/11 - Asian Conference on Computer Vision (ACCV 2007) Honorable Mention: Pose-Invariant Facial Expression Recognition using Variable-Intensity Templates
  • 2006/06 - IEEE International Workshop on Projector-Camera Systems (PROCAMS 2006) Best Paper Award: Robust Content-Dependent Photometric Projector Compensation
  • 2001/03 - IEEE Virtual Reality Conference (VR 2001) Honorable Mention for the Best Paper Award: Real-time input of 3D pose and gestures of a user's hand and its applications for HCI
  • 2000/04 - IEEE International Conference on Robotics and Automation (ICRA 2000) Finalist for the Best Vision Paper Award: Robust localization for 3D object recognition using local EGI and 3D template matching with M-estimators
※ Researcher's information displayed in J-GLOBAL is based on the information registered in researchmap.
