Working Memory Model for Multi-party Interaction Using Audiovisual Perception

Authors

  • Asma Kanwal, Government College University, Lahore
  • Ayesha Abdullah, Government College University

Abstract

The human mind integrates visual and natural-language information simultaneously, performs context switching between different tasks, and learns from its environment. Interactive systems with cognitive capabilities can likewise learn and acquire knowledge within a dynamic environment, and their responses are more human-like than those of static conversational agents. However, such systems are limited in their ability to perform multi-party interaction on the basis of cognitive constructs with audiovisual stimuli. There is therefore a need for a working memory model that serves as an executive control for the cognitive processing of multisensory modalities perceived within the context of multi-party interaction. The proposed agent is capable of fusing data collected from different sensors, which may allow it to act more humanly owing to the subjective nature of its context selection and its attention to the aforementioned percepts. An agent with such multi-party capabilities can serve as an effective conversational agent in any multi-party environment.
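
As an illustration of this idea, the following is a minimal sketch (not the authors' implementation) of a bounded working-memory buffer that stores timestamped audio and visual percepts per participant and orders them by a simple recency weight when attention is directed at one speaker. All names (Percept, WorkingMemory, perceive, attend) and the capacity and decay parameters are hypothetical placeholders for whatever the actual model uses.

```python
# Illustrative sketch only: a bounded working-memory store fusing audiovisual percepts
# per participant in a multi-party setting. Names and parameters are hypothetical.
import time
from dataclasses import dataclass, field

@dataclass
class Percept:
    speaker: str          # participant the percept is attributed to
    modality: str         # "audio" or "visual"
    content: str          # e.g. recognized utterance or detected gesture
    timestamp: float = field(default_factory=time.time)

class WorkingMemory:
    def __init__(self, capacity: int = 20, decay: float = 0.5):
        self.capacity = capacity      # bounded store, as in working-memory models
        self.decay = decay            # per-second decay of attention weight
        self.buffer: list = []

    def perceive(self, percept: Percept) -> None:
        """Add a percept; evict the oldest item when capacity is exceeded."""
        self.buffer.append(percept)
        if len(self.buffer) > self.capacity:
            self.buffer.pop(0)

    def attend(self, speaker: str) -> list:
        """Fuse modalities for one participant, most recent (highest weight) first."""
        now = time.time()
        relevant = [p for p in self.buffer if p.speaker == speaker]
        return sorted(relevant,
                      key=lambda p: self.decay ** (now - p.timestamp),
                      reverse=True)

wm = WorkingMemory()
wm.perceive(Percept("Alice", "visual", "raised hand"))
wm.perceive(Percept("Alice", "audio", "May I ask a question?"))
print([p.content for p in wm.attend("Alice")])
```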

Published

2022-06-30

How to Cite

Kanwal, A., & Abdullah, A. (2022). Working Memory Model for Multi-party Interaction Using Audiovisual Perception. Journal of NCBAE, 1(2), 45–55. Retrieved from http://jncbae.com/index.php/home3/article/view/11

Issue

Vol. 1 No. 2 (2022)

Section

Articles