Modeling Virtual Humans for Understanding the Mind
<References>
<Reference>https://www.jspsych.org/
<Reference>jsPsych is a JavaScript library for running behavioral experiments in a web browser. The library provides a flexible framework for building a wide range of laboratory-like experiments that can be run online.</Reference>
<Reference>To use jsPsych, you provide a description of the experiment in the form of a timeline. jsPsych handles things like determining which trial to run next, storing data, and randomization. jsPsych uses plugins to define what to do at each point on the timeline. Plugins are ready-made templates for simple experimental tasks like displaying instructions or displaying a stimulus and collecting a keyboard response. Plugins are very flexible to support a wide variety of experiments. It is easy to create your own plugin if you have experience with JavaScript programming.</Reference>
</Reference>
<Reference>https://github.com/bernuly
<Reference>Ulysses Bernardet</Reference>
</Reference>
<Reference>https://github.com/onyalcin
<Reference>Ozge Nilay Yalcin</Reference>
</Reference>
<Reference>https://github.com/onyalcin/M-PATH
<Reference>An Empathic Embodied Conversational Agent</Reference>
</Reference>
<Reference>https://github.com/onyalcin/echo_bot
<Reference>A simple pipeline for embodied conversational agents that echoes the speech and facial emotion inputs.</Reference>
</Reference>
<Reference>https://ivizlab.org/wp-content/uploads/sites/2/2017/09/iva2016a.pdf
<Reference>Simulink Toolbox for Real-time Virtual Character Control. Ulysses Bernardet, Maryam Saberi, and Steve DiPaola, Simon Fraser University, Vancouver, Canada. {ubernard,msaberi,sdipaola}@sfu.ca</Reference>
<Reference>Abstract. Building virtual humans is a task of formidable complexity. We believe that, especially when building agents that interact with biological humans in real-time over multiple sensorial channels, graphical, data flow oriented programming environments are the development tool of choice. In this paper, we describe a toolbox for the system control and block diagramming environment Simulink that supports the construction of virtual humans. Available blocks include sources for stochastic processes, utilities for coordinate transformation and messaging, as well as modules for controlling gaze and facial expressions.</Reference>
</Reference>
<Reference>https://ivizlab.org/people/ulysses-bernardet/
<Reference>iVizLab - Research Lab
<Reference>AI Affective Virtual Human
Our affective real-time 3D AI virtual human project with face emotion recognition, movement recognition and full AI talking, gesture and reasoning.</Reference>
</Reference>
<Reference>Ulysses Bernardet
<Reference>I’m a lecturer at Aston University, Birmingham, UK. I hold a doctorate in psychology from the University of Zurich, and have a background in psychology, computer science and neurobiology. I’m the main author of the large-scale neural systems simulator iqr, have developed models of insect cognition, and conceptualized and realized a number of complex, real-time interactive systems. My research follows an interdisciplinary approach that brings together psychological and neurobiological models of behavior regulation, motivation, and emotion with mixed and virtual reality. At the core of my current research interests is the development of models of personality and nonverbal communication. These models are embodied in virtual humans and interact with biological humans in real-time. I like to refer to this approach of “understanding humans by building them” as Synthetic Psychology.</Reference>
</Reference>
<Reference>Research</Reference>
<Reference>Publications</Reference>
</Reference>
<Reference>https://scholar.google.com/citations?user=NRTa3bEAAAAJ
<Reference>Özge Nilay Yalçın
Simon Fraser University
Verified email at sfu.ca</Reference>
</Reference>
<Reference>https://www.frontiersin.org/journals/psychology#research-topics</Reference>
<Reference>https://loop.frontiersin.org/people/893581/overview
<Reference>Steve Richard DiPaola
Professor
Simon Fraser University
Burnaby, Canada</Reference>
<Reference>A Professor at Simon Fraser University, Steve directs the I-Viz Lab (ivizlab.sfu.ca) which strives to make computer systems bend more to the human experience via cognitive based AI approaches. He came to SFU from Stanford University and before that spent 10 years as a senior researcher at NYIT Computer Graphics Lab, an early pioneering lab in high-end 3D techniques</Reference>
<Reference>41 Publications</Reference>
</Reference>
<Reference>https://www.scienceopen.com/document_file/32acde6c-ef16-4ca9-a0e8-085ef4f1468a/ScienceOpen/BHCI-2018_Bernardet.pdf
<Reference>Bio-Inspired Human-Computer-Interaction. Ulysses Bernardet, Aston University, Aston Triangle, B4 7ET, Birmingham, UK. u.bernardet@aston.ac.uk</Reference>
<Reference>Bio-inspiration, the use of principles derived from biological systems in the construction of artefacts, has been successfully applied in many areas of engineering. Here I argue that Human-Computer-Interaction can greatly benefit from applying principles found in different areas of biology. While HCI systems, in general, can learn from biology, the recent trend of moving away from conventional user interfaces to a more naturalistic interaction makes bio-inspiration timely. To support the case, the paper maps four domains of HCI to areas of biological sciences and gives examples of works that applied the underlying principles.</Reference>
</Reference>
<Reference>https://www.igi-global.com/gateway/article/169931#pnlRecommendationForm
<Reference>An Interactive Space as a Creature: Mechanisms of Agency Attribution and Autotelic Experience
Ulysses Bernardet (Simon Fraser University, Vancouver, Canada), Jaume Subirats Aleixandri (Universitat Pompeu Fabra, Barcelona, Spain) and Paul F.M.J. Verschure (Universitat Pompeu Fabra, Barcelona, Spain)</Reference>
<Reference>Abstract
Interacting with an animal is a highly immersing and satisfactory experience. How can interaction with an artifact be imbued with the quality of an interaction with a living being? The authors propose a theoretical relationship that puts the predictability of the human-artifact interaction at the center of the attribution of agency and experience of “flow.” They empirically explored three modes of interaction that differed in the level of predictability of the interactive space's behavior. The results of the authors' study give support to the notion that there is a sweet spot of predictability in the reactions of the space that leads users to perceive the space as a creature. Flow factors discriminated between the different modes of interaction and showed the expected nonlinear relationship with the predictability of the interaction. The authors' results show that predictability is a key factor to induce an attribution of agency, and they hope that their study can contribute to a more systematic approach to designing satisfactory and rich interaction between humans and machines.</Reference>
</Reference>
<Reference>https://onlinelibrary.wiley.com/doi/full/10.1002/cav.1600
<Reference>A model for social spatial behavior in virtual characters
Nahid Karimaghalou
Ulysses Bernardet
Steve DiPaola
First published: 16 May 2014
https://doi.org/10.1002/cav.1600</Reference>
<Reference>ABSTRACT
<Reference>Plausible spatial behavior is a key capability that autonomous virtual characters need in order to provide ecologically valid social interactions. However, there is a lack of psychological data on spatial behavior in the larger scale social settings and over extended periods of time. In this paper, we present a social navigation model that aims at generating human‐like spatial behavior for a virtual human in a social setting with group dynamics. We employ an engineering approach by defining a dynamic representation of interest and then using it as the psychometric function that regulates the behavior of the agent. We evaluate our model by means of two test cases that address different aspect of the model and serve as a proof of concept. Our work is a step toward models for generating more plausible social spatial behavior for virtual characters that is based on both internal dynamics and attributes of the social environment.</Reference>
</Reference>
</Reference>
<Reference>https://loop.frontiersin.org/people/92953/publications
<Reference>Ulysses Bernardet
Doctorate
Lecturer / Senior Lecturer
Aston University
Birmingham, United Kingdom</Reference>
<Reference>49 Publications</Reference>
</Reference>
<Reference>https://loop.frontiersin.org/people/220629/overview
<Reference>Jonathan Gratch</Reference>
<Reference>2 Publications</Reference>
</Reference>
<Reference>https://www.frontiersin.org/research-topics/13415/modeling-virtual-humans-for-understanding-the-mind
<Reference>About this Research Topic</Reference>
</Reference>
</References>
# Automatically generated by Xholon version 0.9.1
# using Xholon2Yaml.java and YamlStrWriter.
# Tue Jun 16 08:21:01 GMT-400 2020 1592310061781
# model: Xholon with interact.js 1.9.19
# www.primordion.com/Xholon
# See also: http://yaml-online-parser.appspot.com/
# See also: https://yamlvalidator.com/
# See also: http://yamllint.com/
%YAML 1.1
---
- References:
_:
- Reference: https://www.jspsych.org/
_:
- Reference: jsPsych is a JavaScript library for running behavioral experiments in a web browser. The library provides a flexible framework for building a wide range of laboratory-like experiments that can be run online.
- Reference: To use jsPsych, you provide a description of the experiment in the form of a timeline. jsPsych handles things like determining which trial to run next, storing data, and randomization. jsPsych uses plugins to define what to do at each point on the timeline. Plugins are ready-made templates for simple experimental tasks like displaying instructions or displaying a stimulus and collecting a keyboard response. Plugins are very flexible to support a wide variety of experiments. It is easy to create your own plugin if you have experience with JavaScript programming.
- Reference: https://github.com/bernuly
_:
- Reference: Ulysses Bernardet
- Reference: https://github.com/onyalcin
_:
- Reference: Ozge Nilay Yalcin
- Reference: https://github.com/onyalcin/M-PATH
_:
- Reference: An Empathic Embodied Conversational Agent
- Reference: https://github.com/onyalcin/echo_bot
_:
- Reference: A simple pipeline for embodied conversational agents that echoes the speech and facial emotion inputs.
- Reference: https://ivizlab.org/wp-content/uploads/sites/2/2017/09/iva2016a.pdf
_:
- Reference: Simulink Toolbox for Real-time VirtualCharacter ControlUlysses Bernardet, Maryam Saberi, and Steve DiPaolaSimon Fraser University Vancouver, Canada{ubernard,msaberi,sdipaola}@sfu.ca
- Reference: Abstract.Building virtual humans is a task of formidable complexity.We believe that, especially when building agents that interact with bi-ological humans in real-time over multiple sensorial channels, graphical,data flow oriented programming environments are the development toolof choice. In this paper, we describe a toolbox for the system control andblock diagramming environment Simulink that supports the construc-tion of virtual humans. Available blocks include sources for stochasticprocesses, utilities for coordinate transformation and messaging, as wellas modules for controlling gaze and facial expressions.
- Reference: https://ivizlab.org/people/ulysses-bernardet/
_:
- Reference: iVizLab - Research Lab
_:
- Reference: AI Affective Virtual Human
Our affective real-time 3D AI virtual human project with face emotion recognition, movement recognition and full AI talking, gesture and reasoning.
- Reference: Ulysses Bernardet
_:
- Reference: I’m a lecturer at Aston University, Birmingham, UK. I hold a doctorate in psychology from the University of Zurich, and have a background in psychology, computer science and neurobiology. I’m the main author of the large-scale neural systems simulator iqr, have developed models of insect cognition, and conceptualized and realized a number of complex, real-time interactive systems. My research follows an interdisciplinary approach that brings together psychological and neurobiological models of behavior regulation, motivation, and emotion with mixed and virtual reality. At the core of my current research interests is the development of models of personality and nonverbal communication. These models are embodied in virtual humans and interact with biological humans in real-time. I like to refer to this approach of “understanding humans by building them” as Synthetic Psychology.
- Reference: Research
- Reference: Publications
- Reference: https://scholar.google.com/citations?user=NRTa3bEAAAAJ
_:
- Reference: Özge Nilay Yalçın
Simon Fraser University
Verified email at sfu.ca
- Reference: https://www.frontiersin.org/journals/psychology#research-topics
- Reference: https://loop.frontiersin.org/people/893581/overview
_:
- Reference: Steve Richard DiPaola
Professor
Simon Fraser University
Burnaby, Canada
- Reference: A Professor at Simon Fraser University, Steve directs the I-Viz Lab (ivizlab.sfu.ca) which strives to make computer systems bend more to the human experience via cognitive based AI approaches. He came to SFU from Stanford University and before that spent 10 years as a senior researcher at NYIT Computer Graphics Lab, an early pioneering lab in high-end 3D techniques
- Reference: 41 Publications
- Reference: https://www.scienceopen.com/document_file/32acde6c-ef16-4ca9-a0e8-085ef4f1468a/ScienceOpen/BHCI-2018_Bernardet.pdf
_:
- Reference: Bio-Inspired Human-Computer-InteractionUlysses BernardetAston UniversityAston Triangle, B4 7ET, Birmingham, UKu.bernardet@aston.ac.uk
- Reference: Bio-inspiration, the use of principles derived from biological system to the construction of artefacts, has been successfully applied in many areas of engineering. Here I argue that Human-Computer-Interaction can greatly benefit fromapplying principles found in different areas ofbiology. While HCI system, in general,can learn from biology, the recent trend ofmoving away from conventional user interfacesto a more naturalistic interactionmakes bio-inspiration timely. To support the case, the paper maps four domains of HCI to areas ofbiological sciences and gives examples of works that applied the underlying principles.
- Reference: https://www.igi-global.com/gateway/article/169931#pnlRecommendationForm
_:
- Reference: An Interactive Space as a Creature: Mechanisms of Agency Attribution and Autotelic Experience
Ulysses Bernardet (Simon Fraser University, Vancouver, Canada), Jaume Subirats Aleixandri (Universitat Pompeu Fabra, Barcelona, Spain) and Paul F.M.J. Verschure (Universitat Pompeu Fabra, Barcelona, Spain)
- Reference: Abstract
Interacting with an animal is a highly immersing and satisfactory experience. How can interaction with an artifact can be imbued with the quality of an interaction with a living being? The authors propose a theoretical relationship that puts the predictability of the human-artifact interaction at the center of the attribution of agency and experience of “flow.” They empirically explored three modes of interaction that differed in the level of predictability of the interactive space's behavior. The results of the authors' study give support to the notion that there is a sweet spot of predictability in the reactions of the space that leads users to perceive the space as a creature. Flow factors discriminated between the different modes of interaction and showed the expected nonlinear relationship with the predictability of the interaction. The authors' results show that predictability is a key factor to induce an attribution of agency, and they hope that their study can contribute to a more systematic approach to designing satisfactory and rich interaction between humans and machines.
- Reference: https://onlinelibrary.wiley.com/doi/full/10.1002/cav.1600
_:
- Reference: A model for social spatial behavior in virtual characters
Nahid Karimaghalou
Ulysses Bernardet
Steve DiPaola
First published: 16 May 2014
https://doi.org/10.1002/cav.1600
- Reference: ABSTRACT
_:
- Reference: Plausible spatial behavior is a key capability that autonomous virtual characters need in order to provide ecologically valid social interactions. However, there is a lack of psychological data on spatial behavior in the larger scale social settings and over extended periods of time. In this paper, we present a social navigation model that aims at generating human‐like spatial behavior for a virtual human in a social setting with group dynamics. We employ an engineering approach by defining a dynamic representation of interest and then using it as the psychometric function that regulates the behavior of the agent. We evaluate our model by means of two test cases that address different aspect of the model and serve as a proof of concept. Our work is a step toward models for generating more plausible social spatial behavior for virtual characters that is based on both internal dynamics and attributes of the social environment.
- Reference: https://loop.frontiersin.org/people/92953/publications
_:
- Reference: Ulysses Bernardet
Doctorate
Lecturer / Senior Lecturer
Aston University
Birmingham, United Kingdom
- Reference: 49 Publications
- Reference: https://loop.frontiersin.org/people/220629/overview
_:
- Reference: Jonathan Gratch
- Reference: 2 Publications
- Reference: https://www.frontiersin.org/research-topics/13415/modeling-virtual-humans-for-understanding-the-mind
_:
- Reference: About this Research Topic
...
<?xml version="1.0" encoding="UTF-8"?>
<!--Xholon Workbook http://www.primordion.com/Xholon/gwt/ MIT License, Copyright (C) Ken Webb, Thu Jun 25 2020 12:21:35 GMT-0400 (Eastern Daylight Time)-->
<XholonWorkbook>
<Notes><![CDATA[
Xholon
------
Title: Modeling Virtual Humans for Understanding the Mind
Description:
Url: http://www.primordion.com/Xholon/gwt/
InternalName: fbfb35d0cdef5d679114d5457d44c545
Keywords:
My Notes
--------
June 16, 2020
Modeling Virtual Humans for Understanding the Mind [ref 15]
see also: Xholon with interact.js 1.9.19 e86ec1b3e77ea793b68c04253beb4fe7
http://127.0.0.1:8080/war/Xholon.html?app=Modeling+Virtual+Humans+for+Understanding+the+Mind&src=lstr&gui=none&jslib=interact-1.9.19.min,FreeformText2XmlString
References
----------
(0) https://www.jspsych.org/
jsPsych is a JavaScript library for running behavioral experiments in a web browser. The library provides a flexible framework for building a wide range of laboratory-like experiments that can be run online.
To use jsPsych, you provide a description of the experiment in the form of a timeline. jsPsych handles things like determining which trial to run next, storing data, and randomization. jsPsych uses plugins to define what to do at each point on the timeline. Plugins are ready-made templates for simple experimental tasks like displaying instructions or displaying a stimulus and collecting a keyboard response. Plugins are very flexible to support a wide variety of experiments. It is easy to create your own plugin if you have experience with JavaScript programming.
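A minimal sketch of that timeline idea, assuming the jsPsych 6.x API (current as of these notes); the plugin name and parameters follow the jsPsych documentation, but treat the details as illustrative rather than tested:

// assumes jspsych.js and the jspsych-html-keyboard-response plugin are loaded in the page
var hello = {
  type: 'html-keyboard-response',   // plugin: show some HTML, wait for a key press
  stimulus: '<p>Hello, world</p>',
  choices: ['f', 'j']               // accept either key as a response
};
jsPsych.init({
  timeline: [hello],                // the experiment is an ordered list of trials
  on_finish: function() { jsPsych.data.displayData(); }  // show the collected data at the end
});
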
(1) https://github.com/bernuly
Ulysses Bernardet
(2) https://github.com/onyalcin
Ozge Nilay Yalcin
(3) https://github.com/onyalcin/M-PATH
An Empathic Embodied Conversational Agent
(4) https://github.com/onyalcin/echo_bot
A simple pipeline for embodied conversational agents that echoes the speech and facial emotion inputs.
(5) https://ivizlab.org/wp-content/uploads/sites/2/2017/09/iva2016a.pdf
Simulink Toolbox for Real-time Virtual Character Control
Ulysses Bernardet, Maryam Saberi, and Steve DiPaola, Simon Fraser University, Vancouver, Canada. {ubernard,msaberi,sdipaola}@sfu.ca
Abstract
Building virtual humans is a task of formidable complexity.
We believe that, especially when building agents that interact with biological humans in real-time over multiple sensorial channels, graphical, data flow oriented programming environments are the development tool of choice.
In this paper, we describe a toolbox for the system control and block diagramming environment Simulink that supports the construction of virtual humans.
Available blocks include sources for stochastic processes, utilities for coordinate transformation and messaging, as well as modules for controlling gaze and facial expressions.
(6) https://ivizlab.org/people/ulysses-bernardet/
iVizLab - Research Lab
AI Affective Virtual Human
Our affective real-time 3D AI virtual human project with face emotion recognition, movement recognition and full AI talking, gesture and reasoning.
Ulysses Bernardet
I’m a lecturer at Aston University, Birmingham, UK. I hold a doctorate in psychology from the University of Zurich, and have a background in psychology, computer science and neurobiology. I’m the main author of the large-scale neural systems simulator iqr, have developed models of insect cognition, and conceptualized and realized a number of complex, real-time interactive systems. My research follows an interdisciplinary approach that brings together psychological and neurobiological models of behavior regulation, motivation, and emotion with mixed and virtual reality. At the core of my current research interests is the development of models of personality and nonverbal communication. These models are embodied in virtual humans and interact with biological humans in real-time. I like to refer to this approach of “understanding humans by building them” as Synthetic Psychology.
Research
Publications
(7) https://scholar.google.com/citations?user=NRTa3bEAAAAJ
Özge Nilay Yalçın
Simon Fraser University
Verified email at sfu.ca
(8) https://www.frontiersin.org/journals/psychology#research-topics
(9) https://loop.frontiersin.org/people/893581/overview
Steve Richard DiPaola
Professor
Simon Fraser University
Burnaby, Canada
A Professor at Simon Fraser University, Steve directs the I-Viz Lab (ivizlab.sfu.ca) which strives to make computer systems bend more to the human experience via cognitive based AI approaches. He came to SFU from Stanford University and before that spent 10 years as a senior researcher at NYIT Computer Graphics Lab, an early pioneering lab in high-end 3D techniques
41 Publications
(10) https://www.scienceopen.com/document_file/32acde6c-ef16-4ca9-a0e8-085ef4f1468a/ScienceOpen/BHCI-2018_Bernardet.pdf
Bio-Inspired Human-Computer-Interaction. Ulysses Bernardet, Aston University, Aston Triangle, B4 7ET, Birmingham, UK. u.bernardet@aston.ac.uk
Bio-inspiration, the use of principles derived from biological systems in the construction of artefacts, has been successfully applied in many areas of engineering. Here I argue that Human-Computer-Interaction can greatly benefit from applying principles found in different areas of biology. While HCI systems, in general, can learn from biology, the recent trend of moving away from conventional user interfaces to a more naturalistic interaction makes bio-inspiration timely. To support the case, the paper maps four domains of HCI to areas of biological sciences and gives examples of works that applied the underlying principles.
(11) https://www.igi-global.com/gateway/article/169931#pnlRecommendationForm
An Interactive Space as a Creature: Mechanisms of Agency Attribution and Autotelic Experience
Ulysses Bernardet (Simon Fraser University, Vancouver, Canada), Jaume Subirats Aleixandri (Universitat Pompeu Fabra, Barcelona, Spain) and Paul F.M.J. Verschure (Universitat Pompeu Fabra, Barcelona, Spain)
Abstract
Interacting with an animal is a highly immersing and satisfactory experience. How can interaction with an artifact be imbued with the quality of an interaction with a living being? The authors propose a theoretical relationship that puts the predictability of the human-artifact interaction at the center of the attribution of agency and experience of “flow.” They empirically explored three modes of interaction that differed in the level of predictability of the interactive space's behavior. The results of the authors' study give support to the notion that there is a sweet spot of predictability in the reactions of the space that leads users to perceive the space as a creature. Flow factors discriminated between the different modes of interaction and showed the expected nonlinear relationship with the predictability of the interaction. The authors' results show that predictability is a key factor to induce an attribution of agency, and they hope that their study can contribute to a more systematic approach to designing satisfactory and rich interaction between humans and machines.
(12) https://onlinelibrary.wiley.com/doi/full/10.1002/cav.1600
A model for social spatial behavior in virtual characters
Nahid Karimaghalou
Ulysses Bernardet
Steve DiPaola
First published: 16 May 2014
https://doi.org/10.1002/cav.1600
ABSTRACT
Plausible spatial behavior is a key capability that autonomous virtual characters need in order to provide ecologically valid social interactions. However, there is a lack of psychological data on spatial behavior in the larger scale social settings and over extended periods of time. In this paper, we present a social navigation model that aims at generating human‐like spatial behavior for a virtual human in a social setting with group dynamics. We employ an engineering approach by defining a dynamic representation of interest and then using it as the psychometric function that regulates the behavior of the agent. We evaluate our model by means of two test cases that address different aspect of the model and serve as a proof of concept. Our work is a step toward models for generating more plausible social spatial behavior for virtual characters that is based on both internal dynamics and attributes of the social environment.
(13) https://loop.frontiersin.org/people/92953/publications
Ulysses Bernardet
Doctorate
Lecturer / Senior Lecturer
Aston University
Birmingham, United Kingdom
49 Publications
(14) https://loop.frontiersin.org/people/220629/overview
Jonathan Gratch
2 Publications
(15) https://www.frontiersin.org/research-topics/13415/modeling-virtual-humans-for-understanding-the-mind
About this Research Topic
(16) https://ivizlab.org/research/ai-affective-virtual-human
AI Affective Virtual Human
Research Collaborators: Steve DiPaola , Nilay Ozge Yalcin , Ulysses Bernardet , Maryam Saberi , Michael Nixon
About:
Our open-source toolkit / cognitive research in an AI 3D Virtual Human (an embodied IVA: Intelligent Virtual Agent): a real-time system that can converse with a human by sensing their
emotions and conversation (via facial emotion recognition, voice stress, and the semantics of the speech and words) and respond affectively and emotionally (voice, facial animation, gesture,
etc.) to a user in front of it, via a host of gestural, motion, and bio-sensor systems and several in-lab AI systems, giving coherent, personality-based conversational answers through speech,
expression, and gesture. The system uses Unity and the SmartBody (USC) API, whose developers we have collaborated with for years. We use cognitive modeling, empathy modelling, NLP, and a variety
of AI-based modules in our system (see papers).
The Research
Our affective real-time 3D AI virtual human setup with face emotion recognition, movement recognition and data glove recognition. See overview video or specific videos or papers below.
Embodied Conversational Agent (ECA)
(17) https://www.sfu.ca/~msaberi/RealActDocumentation/
Dissertation Material:
lots of detailed diagrams
Matlab, state machines
(18) https://github.com/maryamsab/realact
Maryam Saberi
mostly C code, 5 years ago
(19) https://www.youtube.com/channel/UCMCLjA-uXGBajlp6Dj6Mj8A
ivizlab youtube videos
(20) https://ivizlab.org/research/bio-sensing-system-2d-3d-vr/
BioSensing 2D / 3D / VR Systems
(21) https://ivizlab.org/research/ai-anonymization/
AI Anonymization
(22) https://ivizlab.org/research/deep-learning-ai-creativity-research/
Deep Learning AI Creativity For Visuals / Words
(23) https://ivizlab.org/research/cognitive-ai-based-abstraction/
Cognitive (AI) Based Abstraction
(24) https://ivizlab.org/research/facetoolkit-3d-facial-toolkit/
FaceToolKit - A 3D Facial ToolKit
(25) https://ivizlab.org/research/iface-comprehensive-envir-interactive-face-anim/
IFACE: Interactive Face Animation - Comprehensive Environment
(26) https://ivizlab.org/publications/
(27) https://smartbody.ict.usc.edu/
) http://sourceforge.net/projects/smartbody/
) https://cas.ict.usc.edu/
SmartBody is a character animation platform originally developed at the USC Institute for Creative Technologies. SmartBody provides locomotion, steering, object manipulation, lip syncing, gazing, nonverbal behavior and retargeting in real time.
SmartBody is written in C++ and can be incorporated into most game and simulation engines. SmartBody is a Behavioral Markup Language (BML) realization engine that transforms BML behavior descriptions into realtime animations. SmartBody runs on Windows, Linux, OSx as well as the iPhone and Android devices.
LGPL license, C++
I downloaded SmartBodySDK-r6296-linux.tar.gz, and unzipped to ~/cspace/smartbody
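To make the "BML realization engine" idea above concrete, here is roughly what a small BML behavior description looks like, written as a JavaScript string (JavaScript being the scripting language used elsewhere in this workbook). The element names follow the BML papers cited below at (33) and (34); the character name, sync point, and exact attributes are illustrative assumptions, not checked against SmartBody's BML dialect:

// a tiny BML block: speak a line and time a beat gesture to the word marked by the <sync> point;
// a realizer such as SmartBody turns a description like this into synchronized speech + animation
var bmlRequest =
  '<bml character="ChrBrad">' +
  '<speech id="s1"><text>Hello <sync id="tm1"/> there</text></speech>' +
  '<gesture id="g1" lexeme="BEAT" stroke="s1:tm1"/>' +
  '</bml>';
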
(28) https://smartbody.ict.usc.edu/news/experimental-web-based-verison-javascript-online
Experimental Web-based version (JavaScript) online
October 3rd, 2015
We have put together a Web-enabled (Emscripten-based) version of SmartBody. This version can be run within a web browser, and nearly the entire codebase has been compiled to Javascript by the efforts of Zengrui Wang here at USC over the summer.
You can try out this demo (remember, this is just experimental…). Most of the functionality works (except for sound and some other features). We are still experimenting with best practices for such a web-based application. Here is the link:
http://smartbody.ict.usc.edu/Javascript/smartbodyJS/ (server not found)
The SmartBody/Javascript engine weighs in at about 20 mb, not including data. Performance is reasonable for a small number of characters, and we haven’t yet experimented with more lightweight character data.
Regards, Ari Shapiro
(29) https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=Ulysses+Bernardet&btnG=
) https://scholar.google.com/scholar?start=0&q=Ulysses+Bernardet&hl=en&as_sdt=0,5
publications through Google Scholar
(30) https://en.wikipedia.org/wiki/Behavior_authoring
Behavior authoring is a technique that is widely used in crowd simulations and in simulations and computer games that involve multiple autonomous or non-player characters (NPCs). There has been growing academic and industry interest in the behavioral animation of autonomous actors in virtual worlds. However, it remains a considerable challenge to author complicated interactions between multiple actors in a way that balances automation and control flexibility.
Several varieties of behavior authoring systems have been created.
The BML Sequencer and Smartbody
Behavior Markup Language (BML) is a tool for describing autonomous actor behavior in simulations and computer games. SmartBody is a framework for animation of artificial intelligence conversation agents to provide a more lifelike simulation.[2] Combining both of these concepts, the BML sequencer is a tool to allow artists to create SmartBody compliant BML animation sequences for multiple virtual humans. SmartBody allows for complex behavior realization, synchronizing speech recordings with non-verbal behaviors by using the Behavior Markup Language (BML). However, there remain two problems for using BML and SmartBody to achieve the vision that an artist has for animating the character: the authoring problem and multi-party behavior synchronization. The BML Sequencer addresses both.
Behavior authoring in real-time strategy games
Behavior authoring for computer games consists of first writing the behaviors in a programming language, iteratively refining these behaviors, testing the revisions by executing them, identifying new problems and then refining the behaviors again.
(31) https://people.ict.usc.edu/~traum/Papers/bmlsequencer2011
The BML Sequencer: A Tool for authoring multi-character animations. Priti Aggarwal and David Traum, Institute for Creative Technologies
The BML sequencer is a tool to allow artists to create SmartBody compliant BML animation sequences for multiple virtual humans. SmartBody allows for complex behavior realization, synchronizing speech recordings with non-verbal behaviors by using the Behavior Markup Language (BML).
(32) https://www.cc.gatech.edu/faculty/ashwin/papers/er-08-08.pdf
An Intelligent IDE for Behavior Authoring in Real-Time Strategy Games
Suhas Virmani, Yatin Kanetkar, Manish Mehta, Santiago Ontanon, Ashwin Ram
Cognitive Computing Lab (CCL)
(33) https://www.researchgate.net/publication/221588350_The_Behavior_Markup_Language_Recent_Developments_and_Challenges
The Behavior Markup Language: Recent Developments and Challenges, Hannes Vilhjalmsson, et al
(34) https://ict.usc.edu/pubs/Towards%20a%20Common%20Framework%20for%20Multimodal%20Generation-%20The%20Behavior%20Markup%20Language.pdf
Towards a Common Framework for Multimodal Generation: The Behavior Markup Language, Stefan Kopp, et al
(35) https://vhtoolkit.ict.usc.edu/
The ICT Virtual Human Toolkit is a collection of modules, tools, and libraries designed to aid and support researchers and developers with the creation of virtual human conversational characters. The Toolkit is an on-going, ever-changing, innovative system fueled by basic research performed at the University of Southern California (USC) Institute for Creative Technologies (ICT) and its partners.
Designed for easy mixing and matching with a research project’s proprietary or 3rd-party software, the Toolkit provides a widely accepted platform on which new technologies can be built. It is our hope that, together as a research community, we can further develop and explore virtual human research and technologies.
Windows only, 20GB
(36) https://vhtoolkit.ict.usc.edu/publications/
some of the papers are authored/co-authored by Jonathan Gratch
(37) https://github.com/saiba
BML, several projects dated 2013 - 2016, Java
(38) https://web.cs.wpi.edu/~rich/engagement/publications/HolroydRich2012_HRI.pdf
Using the Behavior Markup Language for Human-Robot Interaction, Aaron Holroyd and Charles Rich
(39) https://www.academia.edu/download/51205503/s0925-2312_2802_2900412-520170105-12038-l72sv7.pdf
IQR: a distributed system for real-time real-world neuronal simulation
U Bernardet, M Blanchard, PFMJ Verschure - Neurocomputing, 2002 - Elsevier
IQR is a new simulator which allows neuronal models to control the behaviour of real-world
devices in real-time. Data from several levels of description can be combined. IQR uses a
distributed architecture to provide real-time processing. We present the key features of IQR
(40) http://www.academia.edu/download/43894107/iqr_a_tool_for_the_construction_of_multi20160319-12499-1mvfbdv.pdf
iqr: A Tool for the Construction of Multi-level Simulations of Brain and Behaviour
U Bernardet, PFMJ Verschure - Neuroinformatics, 2010 - Springer
The brain is the most complex system we know of. Despite the wealth of data available in
neuroscience, our understanding of this system is still very limited. Here we argue that an
essential component in our arsenal of methods to advance our understanding of the brain is …
(41) https://en.wikipedia.org/wiki/Apache_ActiveMQ
(42) https://activemq.apache.org/
Apache ActiveMQ™ is the most popular open source, multi-protocol, Java-based messaging server. It supports industry standard protocols so users get the benefits of client choices across a broad range of languages and platforms. Connectivity from C, C++, Python, .Net, and more is available. Integrate your multi-platform applications using the ubiquitous AMQP protocol. Exchange messages between your web applications using STOMP over websockets. Manage your IoT devices using MQTT. Support your existing JMS infrastructure and beyond. ActiveMQ offers the power and flexibility to support any messaging use-case.
(43) https://en.wikipedia.org/wiki/Advanced_Message_Queuing_Protocol
AMQP
(44) https://en.wikipedia.org/wiki/Streaming_Text_Oriented_Messaging_Protocol
STOMP
(45) https://en.wikipedia.org/wiki/Extensible_Messaging_and_Presence_Protocol
XMPP
(46) http://rtcquickstart.org/
) http://rtcquickstart.org/guide/multi/index.html
an ebook: Real-Time Communications Quick Start Guide, Daniel Pocock, Copyright © 2013, 2014, 2015 Daniel Pocock
(47) https://xmpp.org/
(48) https://xmpp.org/uses/webrtc.html
WebRTC + XMPP
WebRTC is a free, open project that provides browsers and mobile applications with real-time communications capabilities.
Jingle, the XMPP framework for establishing p2p sessions, makes for a great pairing with WebRTC.
XMPP is particularly a great fit with WebRTC in settings where there is a desire to pair WebRTC audio/video calls with text chat
Because WebRTC is a peer-to-peer protocol, multi-user experiences become exponentially complex.
Pairing a WebRTC service with XMPP allows developers to dramatically reduce this complexity.
Projects using WebRTC with XMPP
There are many people pairing WebRTC with XMPP.
The Jitsi Videobridge uses the COLIBRI XEP to manage connections and conference mixing.
Jitsi Meet is an open source instant videoconferencing web application, which uses XMPP.
Combining Jitsi videobridge and Jitsi Meet into a single package, Openfire Meetings makes WebRTC video conferences simple to deploy and use.
Otalk is an open-source platform for building realtime applications using XMPP. Talky is an example of an application built using these libraries.
(49) http://www.igniterealtime.org/index.jsp
Open Realtime
(50) https://www.simplewebrtc.com/
Affordable realtime for React
Empowering developers of all skill levels to build advanced realtime apps without breaking the bank
basic plan is $5 per month for 1GB
(51) https://cs.aston.ac.uk/chl/
Cybernetic Human Lab
(52) [BOOK] Integrating cognitive architectures into virtual character design
JO Turner, M Nixon, U Bernardet, S DiPaola - 2016 - books.google.com
Cognitive architectures represent an umbrella term to describe ways in which the flow of
thought can be engineered towards cerebral and behavioral outcomes. Cognitive
Architectures are meant to provide top-down guidance, a knowledge base, interactive …
) https://www.igi-global.com/book/integrating-cognitive-architectures-into-virtual/146983
Integrating Cognitive Architectures into Virtual Character Design
Jeremy Owen Turner (Simon Fraser University, Canada), Michael Nixon (Simon Fraser University, Canada), Ulysses Bernardet (Simon Fraser University, Canada) and Steve DiPaola (Simon Fraser University, Canada)
Release Date: June, 2016|Copyright: © 2016 |Pages: 346
price: $148
a collection of papers by different authors
) https://www.igi-global.com/pdf.aspx?tid=154999&ptid=146983&ctid=15&t=Detailed%20Table%20of%20Contents&isxn=9781522504542
Detailed Table of Contents, with abstracts
(53) https://hplustech.com/
(54) https://hplustech.com/blogs/news/m-m-middleware
(55) https://dl.acm.org/doi/pdf/10.1145/2948910.2948942
m+m: A novel Middleware for Distributed, Movement based Interactive Multimedia Systems, Ulysses Bernardet, et al, 2016
ABSTRACT
Embodied interaction has the potential to provide users with uniquely engaging and meaningful experiences.
m+m: Movement + Meaning middleware is an open source software framework that enables users to construct real-time, interactive systems that are based on movement data.
The acquisition, processing, and rendering of movement data can be local or distributed, real-time or off-line.
Key features of the m+m middleware are a small footprint in terms of computational resources, portability between different platforms, and high performance in terms of reduced latency and increased bandwidth.
Examples of systems that can be built with m+m as the internal communication middleware include those for the semantic interpretation of human movement data,
machine-learning models for movement recognition, and the mapping of movement data as a controller for online navigation, collaboration, and distributed performance.
(56) https://cs.aston.ac.uk/chl/portfolio/060%20Simulink%20Toolbox%20of%20control%20of%20virtual%20humans/
Virtual Humans Control Simulink Toolbox
Control systems for virtual humans tend to become complex very rapidly. Graphical tools provide better overview and understanding of what is happening within the system.
A number of graphical simulation environments exist, with Simulink being one of the most established ones.
Simulink, running on top of Matlab, is a block diagram environment that supports different types of simulations such as continuous, discrete, and finite state.
The goal of this project is to provide a library of re-usable components for commonly used tasks such as gaze control, categorical and PAD facial expressions, 3D coordinate transforms, etc.
The toolbox is based on the BML standard and provides abstract encapsulation while still allowing inspection of the internal mechanisms.
(57) https://github.com/bernuly/VCSimulinkTlbx
Simulink Toolbox for Real-time Virtual Character Control
code is 99% C++ and C, for Matlab
(58) http://iqr.sourceforge.net/?file=kop1.php
IQR
The brain is organized at many different levels.
One of the key challenges we face in studying the brain, is how this system can be effectively described and studied.
In addition, these different levels of organization are not independent but intricately coupled.
We have developed a multi-level neuronal simulation environment, iqr, that exactly deals with this challenge.
iqr provides an efficient graphical environment to design large-scale multi-level neuronal systems that can control real-world devices - robots in the broader sense - in real-time.
(59) http://iqr.sourceforge.net/?file=kop15.php
news, 2015 and older
(60) https://www.igi-global.com/chapter/a-universal-architecture-for-migrating-cognitive-agents/155011
Designing migrating agents is a recent approach to developing generic embodied agents that meet some requirements of AGI.
Migration is the ability of an abstract entity to morph from one embodiment into another and control the new body without altering the internal cognitive processes of the transferred entity.
(61) http://www.site.uottawa.ca/~wslee/publication/ACM_SIGGRAPH_ASIA_SGAVH_2014.pdf
On Designing Migrating Agents: From Autonomous Virtual Agents to Intelligent Robotic Systems
Kaveh Hassani and Won-Sook Lee
School of Electrical Engineering and Computer Science, University of Ottawa
p. 4
Each agent can control other agents or be controlled by them.
In the former case, the agent sends its goals, in a formal format, to the agents being controlled and waits for their feedback,
whereas in the latter case, the agent receives goals from the controlling agents, plans and executes the appropriate actions, and sends the execution feedback back to the controlling agent.
(62) http://bernuly.blogspot.com/2017/03/stompsender-send-control-commands-from.html
StompSender: Send control commands from python to SmartBody via activeMQ
The StompSender class allows sending control messages to SmartBody via ActiveMQ (using the STOMP protocol).
Code available from: https://github.com/bernuly/StompSender
2018, Python code
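StompSender itself is Python, but the same round trip can be sketched in JavaScript (the scripting language used elsewhere in this workbook) with the @stomp/stompjs client, using the STOMP-over-WebSockets transport that ActiveMQ advertises (ref 42). The broker URL, port, topic name, and payload below are illustrative assumptions; the real ones depend on how the ActiveMQ broker and the SmartBody-side listener are configured:

// minimal sketch: connect to ActiveMQ over WebSockets and publish one control message
const { Client } = require('@stomp/stompjs');          // npm install @stomp/stompjs ws
Object.assign(global, { WebSocket: require('ws') });   // Node needs a WebSocket implementation

const client = new Client({ brokerURL: 'ws://localhost:61614' });  // ActiveMQ ws transport connector (illustrative)
client.onConnect = function () {
  client.publish({
    destination: '/topic/sbCommands',                      // hypothetical topic the SmartBody side would subscribe to
    body: '<gaze character="ChrBrad" target="Camera"/>'    // placeholder payload; see the BML sketch under ref (27)
  });
  client.deactivate();  // disconnect once the message has been handed to the broker
};
client.activate();      // open the connection; onConnect fires when the broker acknowledges
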
(63) https://sourceforge.net/projects/iqr/
latest is 2.5.5 2020-03-04 for Windows, 2018-09-17 for Debian
C++, Qt5
(64) https://www.sfu.ca/~msaberi/
(65) https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=muscettola+nicola&oq=muscettola
papers by Nicola Muscettola
(66) Remote agent: To boldly go where no AI system has gone before
N Muscettola, PP Nayak, B Pell, BC Williams - Artificial intelligence, 1998 - Elsevier
Renewed motives for space exploration have inspired NASA to work toward the goal of
establishing a virtual presence in space, through heterogeneous fleets of robotic explorers.
Information technology, and Artificial Intelligence in particular, will play a central role in this …
Cited by 881 Related articles All 23 versions
Remote Agent: to boldly go where no AI system has gone before
Nicola Muscettola, P. Pandurang Nayak, Barney Pell, Brian C. Williams
NASA Ames Research Center, MS 269-2, Moffett Field, CA 94035, USA
3.1.2. Goals
The DS1 problem can only be expressed by making use of a disparate set of classical and non-classical goal types.
(67) Idea: Planning at the core of autonomous reactive agents
N Muscettola, GA Dorais, C Fry, R Levinson, C Plaunt… - 2002 - ntrs.nasa.gov
Several successful autonomous systems are separated into technologically diverse
functional layers operating at different levels of abstraction. This diversity makes them
difficult to implement and validate. In this paper, we present IDEA (Intelligent Distributed …
Cited by 232 Related articles
]]></Notes>
<_-.XholonClass>
<PhysicalSystem/>
<BlocksAndBricks/>
<!-- dropzones -->
<Dropzones>
<Annotations/>
<Attribute_Strings/>
<Cats/>
<Notes/>
<References/>
<XholonModules/>
</Dropzones>
<Note superClass="Attribute_String"/>
<Reference superClass="Attribute_String"/>
</_-.XholonClass>
<xholonClassDetails>
<!-- dropzones -->
<Dropzones><Color>rgba(255,255,255,1.0)</Color></Dropzones>
<Annotations><Color>red</Color></Annotations>
<Attribute_Strings><Color>orange</Color></Attribute_Strings>
<Cats><Color>yellow</Color></Cats>
<Notes><Color>green</Color></Notes>
<References><Color>blue</Color></References>
<XholonModules><Color>purple</Color></XholonModules>
</xholonClassDetails>
<PhysicalSystem>
<BlocksAndBricks roleName="Virtuoso">
<!-- dropzones -->
<Dropzones>
<Annotations/>
<Attribute_Strings/>
<Cats/>
<Notes/>
<References/>
<XholonModules/>
<script>
// this needs to be done before the Animate nodes are set up
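// note: "#xhanim" is presumably the animation container element, and ["one", "two"] presumably set up the #one/#two selections used by the Animate nodes below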
$wnd.xh.fftxt2xmlstr.configGui("#xhanim", ["one", "two"], true);
$wnd.xh.fftxt2xmlstr.configInteract();
</script>
</Dropzones>
</BlocksAndBricks>
<!-- these work
xpath="./PhysicalSystem/BlocksAndBricks"
xpath="../.."
-->
<Animate duration="1" selection="#two" xpath="./PhysicalSystem/BlocksAndBricks" cssStyle=".d3cpnode circle {stroke-width: 0.5px;}" efParams="{&quot;selection&quot;:&quot;#two&quot;,&quot;sort&quot;:&quot;disable&quot;,&quot;width&quot;:400,&quot;height&quot;:400,&quot;mode&quot;:&quot;tween&quot;,&quot;labelContainers&quot;:true,&quot;includeClass&quot;:true,&quot;includeId&quot;:true,&quot;shape&quot;:&quot;circle&quot;}"/>
<Animate duration="1" selection="#one" xpath="../.." cssStyle=".d3cpnode circle {stroke-width: 0.5px;}" efParams="{&quot;selection&quot;:&quot;#one&quot;,&quot;sort&quot;:&quot;disable&quot;,&quot;width&quot;:800,&quot;height&quot;:800,&quot;mode&quot;:&quot;tween&quot;,&quot;labelContainers&quot;:true,&quot;includeClass&quot;:true,&quot;includeId&quot;:true,&quot;shape&quot;:&quot;circle&quot;}"/>
</PhysicalSystem>
<Chameleonbehavior implName="org.primordion.xholon.base.Behavior_gwtjs"><![CDATA[
var me, count, beh = {
postConfigure: function() {count = 0;},
act: function() {
if (count == 0) {
// this can't be done until everything else is set up
$wnd.xh.param("AttributePostConfigAction", "0");
count = 1;
}
}
}
//# sourceURL=Chameleonbehavior.js
]]></Chameleonbehavior>
<SvgClient><Attribute_String roleName="svgUri"><![CDATA[data:image/svg+xml,
<svg width="100" height="50" xmlns="http://www.w3.org/2000/svg">
<g>
<title>BlocksAndBricks</title>
<rect id="PhysicalSystem/BlocksAndBricks" fill="#98FB98" height="50" width="50" x="25" y="0"/>
<g>
<title>Dropzones</title>
<rect id="PhysicalSystem/BlocksAndBricks/Dropzones" fill="#6AB06A" height="50" width="10" x="80" y="0"/>
</g>
</g>
</svg>
]]></Attribute_String><Attribute_String roleName="setup">${MODELNAME_DEFAULT},${SVGURI_DEFAULT}</Attribute_String></SvgClient>
</XholonWorkbook>