SalGaze: Personalizing gaze estimation using visual saliency

Zhuoqing Chang, J. Matias Di Martino, Qiang Qiu, Steven Espinosa, Guillermo Sapiro

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

Traditional gaze estimation methods typically require explicit user calibration to achieve high accuracy. This process is cumbersome, and recalibration is often required when factors such as illumination and pose change. To address this challenge, we introduce SalGaze, a framework that utilizes saliency information in the visual content to transparently adapt the gaze estimation algorithm to the user without explicit user calibration. We design an algorithm to transform a saliency map into a differentiable loss map that can be used for the optimization of CNN-based models. SalGaze is also able to greatly augment standard point calibration data with implicit video saliency calibration data using a unified framework. We show accuracy improvements of over 24% when applying our technique to existing methods.
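The abstract's central technical step, converting a saliency map into a differentiable loss map for optimizing a CNN-based gaze estimator, could look roughly like the minimal PyTorch sketch below. This is not the authors' implementation: the normalization, the negative-log transform, and the bilinear sampling via grid_sample are assumptions about one plausible way to realize the idea, and all function names are hypothetical.

```python
# Minimal sketch (not the paper's code): turn a saliency map into a
# differentiable loss surface, then sample it at a predicted gaze point
# so that gradients can flow back into a CNN-based gaze estimator.
import torch
import torch.nn.functional as F

def saliency_to_loss_map(saliency: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Convert a saliency map (H, W) into a loss map: low cost where salient.

    Assumption: treat normalized saliency as a probability-like map and use
    its negative log as a per-pixel loss.
    """
    p = saliency / (saliency.sum() + eps)   # normalize to sum to ~1
    return -torch.log(p + eps)              # negative log-likelihood per pixel

def gaze_loss(loss_map: torch.Tensor, gaze_xy: torch.Tensor) -> torch.Tensor:
    """Bilinearly sample the loss map at a predicted gaze point, differentiably.

    loss_map: (H, W); gaze_xy: (2,) with (x, y) normalized to [-1, 1],
    e.g. the output of a gaze-estimation CNN.
    """
    grid = gaze_xy.view(1, 1, 1, 2)          # grid_sample expects (N, Ho, Wo, 2)
    lm = loss_map.unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)
    return F.grid_sample(lm, grid, align_corners=True).squeeze()

# Usage: gradients flow through the sampled loss back to the gaze prediction.
sal = torch.rand(224, 224)                            # placeholder saliency map
pred = torch.tensor([0.1, -0.3], requires_grad=True)  # predicted gaze point
loss = gaze_loss(saliency_to_loss_map(sal), pred)
loss.backward()
```

Because the sampling is differentiable, such a loss can in principle be combined with standard point-calibration losses in a single objective, which is in the spirit of the unified framework the abstract describes.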

Original language: English
Title of host publication: Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1169-1178
Number of pages: 10
ISBN (Electronic): 9781728150239
DOIs
State: Published - Oct 2019
Externally published: Yes
Event: 17th IEEE/CVF International Conference on Computer Vision Workshop, ICCVW 2019 - Seoul, Korea, Republic of
Duration: 27 Oct 2019 - 28 Oct 2019

Publication series

Name: Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019

Conference

Conference: 17th IEEE/CVF International Conference on Computer Vision Workshop, ICCVW 2019
Country/Territory: Korea, Republic of
City: Seoul
Period: 27/10/19 - 28/10/19

Keywords

  • Calibration
  • Convolutional neural network
  • Deep learning
  • Gaze estimation
  • Saliency
