SalGaze: Personalizing gaze estimation using visual saliency

Zhuoqing Chang, J. Matias Di Martino, Qiang Qiu, Steven Espinosa, Guillermo Sapiro

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

9 Citations (Scopus)

Abstract

Traditional gaze estimation methods typically require explicit user calibration to achieve high accuracy. This process is cumbersome, and recalibration is often required when factors such as illumination and pose change. To address this challenge, we introduce SalGaze, a framework that uses saliency information in the visual content to transparently adapt the gaze estimation algorithm to the user without explicit user calibration. We design an algorithm that transforms a saliency map into a differentiable loss map that can be used for the optimization of CNN-based models. SalGaze is also able to greatly augment standard point calibration data with implicit video saliency calibration data within a unified framework. We show accuracy improvements of over 24% when applying our technique to existing methods.
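The abstract does not spell out how a saliency map becomes a differentiable loss map, so the following is only a hypothetical illustration of the idea: treat the (normalized) saliency as a gaze probability distribution and take its pixel-wise negative log-likelihood, so that salient regions incur low loss. The helper name `saliency_to_loss_map` and the transform itself are assumptions, not the paper's method.

```python
import numpy as np

def saliency_to_loss_map(saliency, eps=1e-6):
    """Turn a saliency map into a loss map: low loss where saliency is high.

    Hypothetical negative-log-likelihood transform; the paper's actual
    transform is not specified in the abstract. In a CNN training setup,
    the loss at a predicted gaze point would be read off this map with a
    differentiable sampler (e.g. bilinear interpolation).
    """
    p = saliency.astype(np.float64)
    p = p / (p.sum() + eps)      # normalize to a probability map
    return -np.log(p + eps)      # pixel-wise negative log-likelihood

# Toy 5x5 saliency map with a single peak at the center.
sal = np.zeros((5, 5))
sal[2, 2] = 4.0
sal[2, 1] = sal[2, 3] = sal[1, 2] = sal[3, 2] = 1.0

loss_map = saliency_to_loss_map(sal)
# The most salient pixel carries the smallest loss, so a gaze
# prediction landing there is penalized least.
```

Under this sketch, minimizing the sampled loss pulls predicted gaze toward salient regions of the stimulus, which is what allows video saliency to act as implicit calibration data.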

Original language: English
Title of host publication: Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1169-1178
Number of pages: 10
ISBN (electronic): 9781728150239
DOI
State: Published - Oct 2019
Externally published: Yes
Event: 17th IEEE/CVF International Conference on Computer Vision Workshop, ICCVW 2019 - Seoul
Duration: 27 Oct 2019 - 28 Oct 2019

Publication series

Name: Proceedings - 2019 International Conference on Computer Vision Workshop, ICCVW 2019

Conference

Conference: 17th IEEE/CVF International Conference on Computer Vision Workshop, ICCVW 2019
Country/Territory: Korea, Republic of
City: Seoul
Period: 27/10/19 - 28/10/19

