dor_id: 45889

506.#.#.a: Public

590.#.#.d: Articles submitted to the journal "Journal of Applied Research and Technology" are evaluated through a peer-review process

510.0.#.a: Scopus; Directory of Open Access Journals (DOAJ); Sistema Regional de Información en Línea para Revistas Científicas de América Latina, el Caribe, España y Portugal (Latindex); Índice de Revistas Latinoamericanas en Ciencias (Periódica); La Red de Revistas Científicas de América Latina y el Caribe, España y Portugal (Redalyc); Consejo Nacional de Ciencia y Tecnología (CONACyT); Google Scholar Citation

561.#.#.u: https://www.icat.unam.mx/

650.#.4.x: Engineering

336.#.#.b: article

336.#.#.3: Research Article

336.#.#.a: Article

351.#.#.6: https://jart.icat.unam.mx/index.php/jart

351.#.#.b: Journal of Applied Research and Technology

351.#.#.a: Articles

harvesting_group: RevistasUNAM

270.1.#.p: Revistas UNAM. Dirección General de Publicaciones y Fomento Editorial, UNAM, at revistas@unam.mx

590.#.#.c: Open Journal Systems (OJS)

270.#.#.d: MX

270.1.#.d: México

590.#.#.b: Aggregator

883.#.#.u: https://revistas.unam.mx/catalogo/

883.#.#.a: Revistas UNAM

590.#.#.a: Coordinación de Difusión Cultural

883.#.#.1: https://www.publicaciones.unam.mx/

883.#.#.q: Dirección General de Publicaciones y Fomento Editorial

850.#.#.a: Universidad Nacional Autónoma de México

856.4.0.u: https://jart.icat.unam.mx/index.php/jart/article/view/8/8

100.1.#.a: Woodward, Alexander; Chan, Yuk Hin; Gong, Rui; Nguyen, Minh; Gee, Trevor; Delmas, Patrice; Gimel’farb, Georgy; Marquez Flores, Jorge Alberto

524.#.#.a: Woodward et al. (2017). A low cost framework for real-time marker based 3-D human expression modeling. Journal of Applied Research and Technology; Vol. 15, No. 1. Retrieved from https://repositorio.unam.mx/contenidos/45889

245.1.0.a: A low cost framework for real-time marker based 3-D human expression modeling

502.#.#.c: Universidad Nacional Autónoma de México

561.1.#.a: Instituto de Ciencias Aplicadas y Tecnología, UNAM

264.#.0.c: 2017

264.#.1.c: 2017-02-01

653.#.#.a: Facial motion capture; Marker based motion capture; Expression recognition; Low cost; Stereo vision

506.1.#.a: The economic rights in this work are held by the publishing institutions. Its use is governed by a Creative Commons BY-NC-SA 4.0 International license, https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode.es; for any other use, contact the repository's legal officer by e-mail at gabriel.ascanio@icat.unam.mx

884.#.#.k: https://jart.icat.unam.mx/index.php/jart/article/view/8

001.#.#.#: 074.oai:ojs2.localhost:article/8

041.#.7.h: eng

520.3.#.a: This work presents a robust and low-cost framework for real-time marker based 3-D human expression modeling using off-the-shelf stereo web-cameras and inexpensive adhesive markers applied to the face. The system has low computational requirements, runs on standard hardware, and is portable with minimal set-up time and no training. It does not require a controlled lab environment (lighting or set-up) and is robust under varying conditions, e.g., illumination, facial hair, or skin tone variation. Stereo web-cameras perform 3-D marker tracking to obtain head rigid motion and the non-rigid motion of expressions. Tracked markers are then mapped onto a 3-D face model with a virtual muscle animation system. Muscle inverse kinematics update muscle contraction parameters based on marker motion in order to create a virtual character’s expression performance. The parametrization of the muscle-based animation encodes a face performance with little bandwidth. Additionally, a radial basis function mapping approach was used to easily remap motion capture data to any face model. In this way the automated creation of a personalized 3-D face model and animation system from 3-D data is elucidated. The expressive power of the system and its ability to recognize new expressions were evaluated on a group of test subjects with respect to the six universally recognized facial expressions. Results show that the use of abstract muscle definitions reduces the effect of potential noise in the motion capture data and allows the seamless animation of any virtual anthropomorphic face model with data acquired through human face performance.
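
As a rough illustration of the retargeting idea named in the abstract (a radial basis function mapping that remaps tracked 3-D marker motion onto an arbitrary face model), a minimal Python sketch follows. It is not the authors' implementation, which this record does not include; the marker count, the neutral-pose correspondences, and the use of SciPy's RBFInterpolator are assumptions made for the sketch.

import numpy as np
from scipy.interpolate import RBFInterpolator  # SciPy >= 1.7

# Hypothetical neutral-pose data: n tracked markers in capture space and their
# chosen correspondences on the target face model (both n x 3).
rng = np.random.default_rng(0)
neutral_markers = rng.random((30, 3))
neutral_model_points = rng.random((30, 3))

# Fit an RBF mapping from capture space to model space using the neutral pose.
remap = RBFInterpolator(neutral_markers, neutral_model_points,
                        kernel="thin_plate_spline")

# For each tracked frame, remap marker positions into model space; the offsets
# from the neutral model points give the deformation that drives the animation.
tracked_frame = neutral_markers + 0.01 * rng.standard_normal((30, 3))
remapped = remap(tracked_frame)                   # shape (30, 3), model space
displacements = remapped - neutral_model_points   # per-marker deformation

In the pipeline the abstract describes, such remapped marker motion would then feed the muscle inverse kinematics that update the virtual face model's contraction parameters.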

773.1.#.t: Journal of Applied Research and Technology; Vol. 15, No. 1

773.1.#.o: https://jart.icat.unam.mx/index.php/jart

022.#.#.a: Electronic ISSN: 2448-6736; ISSN: 1665-6423

310.#.#.a: Bimonthly

264.#.1.b: Instituto de Ciencias Aplicadas y Tecnología, UNAM

doi: https://doi.org/10.22201/icat.16656423.2017.15.1.8

harvesting_date: 2023-11-08 13:10:00.0

856.#.0.q: application/pdf

last_modified: 2023-11-08 13:00:00

license_url: https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode.es

license_type: by-nc-sa


