Created August 1, 2015 00:47
Gist: ssfrr/74280e5ad34345c08c4d
[JavaScript Error: "No matching references found"]
version => 4.0.28, platform => MacIntel, oscpu => Intel Mac OS X 10.10, locale => en-US, appName => Zotero, appVersion => 4.0.28
=========================================================
(5)(+0000000): SELECT indexedPages, totalPages AS total FROM fulltextItems WHERE itemID=?
(5)(+0000000): Binding parameter 1 of type int: 67
(3)(+0000005): MIME type application/pdf cannot be handled internally
(3)(+0002527): RecognizePDF: Running /Users/srussell/Library/Application Support/Zotero/Profiles/awagxnkf.default/zotero/pdftotext-MacIntel '-enc' 'UTF-8' '-nopgbrk' '-layout' '-l' '15' '/Users/srussell/Library/Application Support/Zotero/Profiles/awagxnkf.default/zotero/storage/AP6CABZG/networking_virtual_sound_and_physical_space_in_audio_augmented_environments.pdf' '/Users/srussell/Library/Application Support/Zotero/Profiles/awagxnkf.default/zotero/recognizePDFcache.txt'
(3)(+0000374): LISTEN: Contextualized Presentation for Audio-Augmented Environments
Andreas Zimmermann and Andreas Lorenz
Fraunhofer Institute for Applied Information Technology
Schloß Birlinghoven
53754 Sankt Augustin, Germany
{Andreas.Zimmermann, Andreas.Lorenz}@fit.fraunhofer.de

Abstract

The paper deals with the awareness of and the adaptation to the context in audio-augmented environments. Taking into account the relationship between aural and visual perceptions, we focus on the issues and the potential of adapting intelligent audio interfaces to augment the visual real environment. The description of the LISTEN project, a system for the creation of immersive audio-augmented environments, is taken as a showcase; in particular, our focus is on the modelling and personalization methods affecting the audio design and presentation. The outcomes of the preliminary tests are reported in order to describe the issues and complexity of intelligent user modelling in audio-augmented environments.

1 Introduction

The LISTEN project, conducted by the Fraunhofer Institute in St. Augustin, deals with the audio augmentation of real and virtual environments. The users of this system move in space wearing headphones and listen to audio sequences emitted by virtual sound sources placed in the environment. The audio pieces vary according to the user's spatial position and the orientation of his/her head.

At the beginning of July 2003, a first LISTEN prototype was applied to an art exhibition at the Kunstmuseum in Bonn [20]. The visitors of the museum experience personalized audio information about exhibits through their headphones. In such a scenario, the distribution and authoring of this valuable information on exhibits is a non-trivial task [22]. The audio presentation takes into account the visitor's profile. Besides presentation, the system provides recommendations to the visitor according to his/her context. Recommended exhibition objects attract the visitor's attention by emitting sounds from their positions.

Furthermore, the LISTEN environment offers the ability to adapt the audio presentation (i.e. the order of audio pieces, their content, and sound source) to the users' contexts (e.g. interests, motion, focus, etc.) and thus perform personalization. Since the user's movement in physical space is the only interface, these movements have to be interpreted in order to derive a meaningful interest model of the users.

This paper aims at describing the main issues in the user modelling components that have been employed for the personalization of the LISTEN system. For this reason we have investigated three different approaches that seem to be suitable for our purposes: Content-Based Filtering Systems (e.g. [1,2]), Collaborative Filtering Systems (e.g. [6,15]) and Context-Aware Systems (e.g. [11,16,17]). Furthermore, we examined hybrid systems (like [3,15]), which combine several approaches. They are often able to overcome some limitations of pure approaches and improve the quality of recommendations.

Section 2 of this paper gives a brief overview of context-aware systems, discussing how context can be defined and giving an idea of what context is in the LISTEN system. A detailed description of the LISTEN system is provided in Section 3. The interpretation of the users' context and the monitoring of the context's evolution over time answer the question of why the system should adapt. Section 4 gives an answer to that question from a LISTEN perspective and focuses on what can be adapted within an audio-augmented environment. Thus, we describe possibilities of combining the order, the source (position and motion), and the content of audio pieces for adaptation purposes. Furthermore, we provide an overview of the first evaluation process of the LISTEN system in Section 5.

2 Context-Aware Systems

By definition, context-aware systems are aware of and adapt to the context of the user. Several approaches have defined context models and described different aspects of a context taken into account for context-aware systems. For example, Schilit et al. have mentioned [17]: where you are, who you are, and what resources are nearby. Dey and Abowd [9] discuss several approaches for taking the computing environment, the user environment, and the physical environment into account. Furthermore, they distinguish between primary and secondary context types: primary context types describe the situation of an entity and are used as indices for retrieving second-level types of contextual information. In this work we base our context modelling approach on the four dimensions of a context which Gross and Specht have considered in [11]:

Identity: The identity of a person gives access to the second level of contextual information. In some context-aware applications, highly sophisticated user models hold detailed activity logs of physical-space movements and electronic artefact manipulations, and infer information about the user's needs, interests, preferences, knowledge, etc.

Location: We consider location as a parameter that can be specified in electronic and physical space. An artefact can have a physical position or an electronic location described by URIs or URLs. Location-based services, as one type of context-aware application, can be based on a mapping between the physical presence of an artefact and the presentation of the corresponding electronic artefact [10].

Time: Time is an important dimension for describing a context. Beside the specification of time in CET format, categorical scales as an overlay for the time dimension are mostly used in context-aware applications (e.g., working hours vs. weekend). For nomadic information systems a process-oriented approach can be time-dependent (similar to a workflow).

Environment or Activity: The environment describes the artefacts and the physical location of the current situation. In several projects, approaches for modelling the artefacts and building taxonomies or ontologies about their interrelations are used for selecting and presenting information to a user. A group of people sharing a context is also part of the environment (social context), as is the technical context describing which devices are used.

Context awareness enhances the possibility to design intelligent user interfaces: their context dependency builds a bridge between user and system in order to improve interaction usability. Smart modelling techniques are essential to acquire, represent and exploit context awareness. In particular, context modelling includes user modelling as a key issue. User models can be deduced explicitly or implicitly [12]. In the first case, the user states information explicitly, such as his/her interests, preferences, skills etc. In the latter case, inference techniques based on domain modelling, understanding the typical behaviour of similar users, and user interaction observation (tracking systems) are used in order to draw hypotheses about user interests, needs, and plans. Inference systems pose the cha… (42064 chars)

…perception, developing an immersive audio-augmented environment.

The key idea of the LISTEN concept [8] is to place the individual perception of space at the centre of the interface design so as to convey an immersive user experience. By moving through real space, users additionally navigate an acoustic information space designed as a complement or extension of the real space. Virtual acoustic landmarks play an equally important role as the visual ones for the orientation of the users in this augmented environment. Acoustic labels are attached to visual objects, thus affecting the soundscape and its perception. Fine-grain motion tracking is essential for the LISTEN approach because full auditory immersion can only be reached when the binaural rendering process takes into account the rotation of the user's head. The users of the LISTEN system move in the physical space wearing wireless headphones, which are able to render 3-dimensional sound, and listen to audio sequences emitted by virtual sound sources placed in the environment.

A first prototype of the LISTEN application was applied for the August Macke art exhibition at the Kunstmuseum in Bonn in 2001 (Figure 1). Museums and exhibitions as domains have already been explored in several research projects. Many of them focus on the goal to provide content concerning the artworks [9], to immerse the user in a virtual augmented environment built in virtual museums [4], or to provide orientation and routing functionalities [5]. The visitors of the prototype experience personalized audio information about exhibits through their headphones. In July 2003, the first public exhibition was presented in the mentioned museum.

Fig. 1. Image of the LISTEN System Applied at the Kunstmuseum in Bonn

In the following sections we focus on the goals to be achieved, the modelling of an audio-augmented environment, and on the possibilities of combining the order, the source, and the content of sound items in order…
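The four context dimensions in the extracted text above (identity, location, time, environment/activity) map naturally onto a small record type. A minimal sketch in Python; the class, field names, and the time-categorization helper are illustrative assumptions, not the LISTEN system's actual data model:

```python
from dataclasses import dataclass, field

# Illustrative sketch of the four context dimensions of Gross and Specht [11]
# as described in the extracted paper text. Field names and types are
# assumptions, not the LISTEN implementation.
@dataclass
class Context:
    identity: str          # key into second-level information (interests, knowledge, ...)
    location: tuple        # physical position, e.g. (x, y) in exhibition-space metres
    time: str              # categorical overlay, e.g. "working_hours" vs. "weekend"
    environment: dict = field(default_factory=dict)  # artefacts, social/technical context

def time_category(hour: int, weekday: bool) -> str:
    """Categorical time scale of the kind the Time dimension describes."""
    if not weekday:
        return "weekend"
    return "working_hours" if 9 <= hour < 17 else "off_hours"

# Hypothetical visitor snapshot of the kind a tracking system might produce.
ctx = Context(identity="visitor-42",
              location=(3.5, 1.2),
              time=time_category(14, weekday=True),
              environment={"nearby_artefact": "painting-07",
                           "device": "wireless-headphones"})
```

The categorical time overlay (rather than a raw timestamp) mirrors the paper's "working hours vs. weekend" example.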
(3)(+0000001): RecognizePDF: No DOI found in text
(3)(+0000001): RecognizePDF: No ISBN found in text.
(3)(+0000003): RecognizePDF: Query string "focus is on the modelling and" "LISTEN project conducted by the Fraunhofer" "the beginning of July 2003, a first LISTEN" "the system provides recommendations" "contexts (e.g. interests, motion, focus, etc.)"
(3)(+0000002): HTTP GET http://scholar.google.com/scholar?q=%22focus%20is%20on%20the%20modelling%20and%22%20%22LISTEN%20project%20conducted%20by%20the%20Fraunhofer%22%20%22the%20beginning%20of%20July%202003%2C%20a%20first%20LISTEN%22%20%22the%20system%20provides%20recommendations%22%20%22contexts%20(e.g.%20interests%2C%20motion%2C%20focus%2C%20etc.)%22%20&hl=en&lr=&btnG=Search
(3)(+0000377): RecognizePDF: (200) Got page with title "focus is on the modelling and" "LISTEN project conducted by the Fraunhofer" "the beginning of July 2003, a first LISTEN" "the system provides recommendations" "contexts (e.g. interests, motion, focus, etc.)" - Google Scholar
(5)(+0000002): SELECT leafName, translatorJSON, code, lastModifiedTime FROM translatorCache
(3)(+0000172): Cached 466 translators in 172 ms
(3)(+0000000): Translators: Looking for translators for http://scholar.google.com/scholar?q=%22focus%20is%20on%20the%20modelling%20and%22%20%22LISTEN%20project%20conducted%20by%20the%20Fraunhofer%22%20%22the%20beginning%20of%20July%202003%2C%20a%20first%20LISTEN%22%20%22the%20system%20provides%20recommendations%22%20%22contexts%20(e.g.%20interests%2C%20motion%2C%20focus%2C%20etc.)%22%20&hl=en&lr=&btnG=Search
(4)(+0000024): Translate: Binding sandbox to http://scholar.google.com/scholar?q=%22focus%20is%20on%20the%20modelling%20and%22%20%22LISTEN%20project%20conducted%20by%20the%20Fraunhofer%22%20%22the%20beginning%20of%20July%202003%2C%20a%20first%20LISTEN%22%20%22the%20system%20provides%20recommendations%22%20%22contexts%20(e.g.%20interests%2C%20motion%2C%20focus%2C%20etc.)%22%20&hl=en&lr=&btnG=Search
(4)(+0000007): Translate: Parsing code for Google Scholar (57a00950-f0d1-4b41-b6ba-44ff0fc30289, 2015-06-30 15:25:13)
(4)(+0000005): Translate: Parsing code for unAPI (e7e01cac-1e37-4da6-b078-a0e8343b0e98, 2015-06-04 03:25:27)
(4)(+0000003): Translate: Parsing code for COinS (05d07af9-105a-4572-99f6-a8e231c0daef, 2015-06-04 03:25:10)
(4)(+0000003): Translate: Parsing code for DOI (c159dcfe-8a53-4301-a499-30f6549c340d, 2015-02-12 07:40:24)
(4)(+0000000): Translate: Parsing code for Embedded Metadata (951c027d-74ac-47d4-a107-9c3069ab7b48, 2015-07-12 15:17:36)
(3)(+0000005): Translate: Embedded Metadata: found 5 meta tags.
(3)(+0000002): Translate: All translator detect calls and RPC calls complete:
(3)(+0000000): No suitable translators found
(5)(+0000000): Translate: Running handler 0 for translators
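Each RecognizePDF round in this log builds a Google Scholar query by double-quoting a few distinctive phrases from the extracted text and percent-encoding the result. A rough reconstruction of that URL construction (the helper below is illustrative, not Zotero's actual code; the phrases are taken from the first round above):

```python
from urllib.parse import quote

# Sketch of how the Scholar URLs in the log lines above are formed:
# each phrase is wrapped in double quotes so Scholar matches it verbatim,
# then the whole query string is percent-encoded. Illustrative only.
def scholar_url(phrases):
    query = " ".join('"%s"' % p for p in phrases) + " "
    # safe="()" mirrors the log, where parentheses stay unencoded
    return ("http://scholar.google.com/scholar?q=" + quote(query, safe="()")
            + "&hl=en&lr=&btnG=Search")

url = scholar_url(["focus is on the modelling and",
                   "LISTEN project conducted by the Fraunhofer"])
```

With all five phrases from the first round, this reproduces the HTTP GET line logged above.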
(3)(+0000001): RecognizePDF: Query string "methods affecting the audio design" "real and virtual environments. The users of this" "personalized audio information about" "approach on the four dimensions of a" "interests, preferences, knowledge,"
(3)(+0001402): HTTP GET http://scholar.google.com/scholar?q=%22methods%20affecting%20the%20audio%20design%22%20%22real%20and%20virtual%20environments.%20The%20users%20of%20this%22%20%22personalized%20audio%20information%20about%22%20%22approach%20on%20the%20four%20dimensions%20of%20a%22%20%22interests%2C%20preferences%2C%20knowledge%2C%22%20&hl=en&lr=&btnG=Search
(3)(+0000256): RecognizePDF: (200) Got page with title "methods affecting the audio design" "real and virtual environments. The users of this" "personalized audio information about" "approach on the four dimensions of a" "interests, preferences, knowledge," - Google Scholar
(3)(+0000002): Translators: Looking for translators for http://scholar.google.com/scholar?q=%22methods%20affecting%20the%20audio%20design%22%20%22real%20and%20virtual%20environments.%20The%20users%20of%20this%22%20%22personalized%20audio%20information%20about%22%20%22approach%20on%20the%20four%20dimensions%20of%20a%22%20%22interests%2C%20preferences%2C%20knowledge%2C%22%20&hl=en&lr=&btnG=Search
(4)(+0000001): Translate: Binding sandbox to http://scholar.google.com/scholar?q=%22methods%20affecting%20the%20audio%20design%22%20%22real%20and%20virtual%20environments.%20The%20users%20of%20this%22%20%22personalized%20audio%20information%20about%22%20%22approach%20on%20the%20four%20dimensions%20of%20a%22%20%22interests%2C%20preferences%2C%20knowledge%2C%22%20&hl=en&lr=&btnG=Search
(4)(+0000007): Translate: Parsing code for Google Scholar (57a00950-f0d1-4b41-b6ba-44ff0fc30289, 2015-06-30 15:25:13)
(4)(+0000004): Translate: Parsing code for unAPI (e7e01cac-1e37-4da6-b078-a0e8343b0e98, 2015-06-04 03:25:27)
(4)(+0000002): Translate: Parsing code for COinS (05d07af9-105a-4572-99f6-a8e231c0daef, 2015-06-04 03:25:10)
(4)(+0000001): Translate: Parsing code for DOI (c159dcfe-8a53-4301-a499-30f6549c340d, 2015-02-12 07:40:24)
(4)(+0000001): Translate: Parsing code for Embedded Metadata (951c027d-74ac-47d4-a107-9c3069ab7b48, 2015-07-12 15:17:36)
(3)(+0000007): Translate: Embedded Metadata: found 5 meta tags.
(3)(+0000002): Translate: All translator detect calls and RPC calls complete:
(3)(+0000000): No suitable translators found
(5)(+0000000): Translate: Running handler 0 for translators
(3)(+0000001): RecognizePDF: Query string "The outcomes of the" "sequences emitted by virtual sound" "is a non-trivial task [22]. The audio" "perform personalization. Since the user" "specified in electronic and physical space."
(3)(+0001720): HTTP GET http://scholar.google.com/scholar?q=%22The%20outcomes%20of%20the%22%20%22sequences%20emitted%20by%20virtual%20sound%22%20%22is%20a%20non-trivial%20task%20%5B22%5D.%20The%20audio%22%20%22perform%20personalization.%20Since%20the%20user%22%20%22specified%20in%20electronic%20and%20physical%20space.%22%20&hl=en&lr=&btnG=Search
(3)(+0000220): RecognizePDF: (200) Got page with title "The outcomes of the" "sequences emitted by virtual sound" "is a non-trivial task [22]. The audio" "perform personalization. Since the user" "specified in electronic and physical space." - Google Scholar
(3)(+0000001): Translators: Looking for translators for http://scholar.google.com/scholar?q=%22The%20outcomes%20of%20the%22%20%22sequences%20emitted%20by%20virtual%20sound%22%20%22is%20a%20non-trivial%20task%20%5B22%5D.%20The%20audio%22%20%22perform%20personalization.%20Since%20the%20user%22%20%22specified%20in%20electronic%20and%20physical%20space.%22%20&hl=en&lr=&btnG=Search
(4)(+0000020): Translate: Binding sandbox to http://scholar.google.com/scholar?q=%22The%20outcomes%20of%20the%22%20%22sequences%20emitted%20by%20virtual%20sound%22%20%22is%20a%20non-trivial%20task%20%5B22%5D.%20The%20audio%22%20%22perform%20personalization.%20Since%20the%20user%22%20%22specified%20in%20electronic%20and%20physical%20space.%22%20&hl=en&lr=&btnG=Search
(4)(+0000005): Translate: Parsing code for Google Scholar (57a00950-f0d1-4b41-b6ba-44ff0fc30289, 2015-06-30 15:25:13)
(4)(+0000004): Translate: Parsing code for unAPI (e7e01cac-1e37-4da6-b078-a0e8343b0e98, 2015-06-04 03:25:27)
(4)(+0000002): Translate: Parsing code for COinS (05d07af9-105a-4572-99f6-a8e231c0daef, 2015-06-04 03:25:10)
(4)(+0000002): Translate: Parsing code for DOI (c159dcfe-8a53-4301-a499-30f6549c340d, 2015-02-12 07:40:24)
(4)(+0000001): Translate: Parsing code for Embedded Metadata (951c027d-74ac-47d4-a107-9c3069ab7b48, 2015-07-12 15:17:36)
(3)(+0000004): Translate: Embedded Metadata: found 5 meta tags.
(3)(+0000002): Translate: All translator detect calls and RPC calls complete:
(3)(+0000000): No suitable translators found
(5)(+0000000): Translate: Running handler 0 for translators
(3)(+0000003): [object Object] {
  "name": "recognizePDF.noMatches",
  "params": [],
  "_title": "general.error",
  "cause": undefined,
  "title": "Error",
  "message": "No matching references found",
  "toString": function() {...},
  "present": function(window) {...},
  "log": function() {...}
}
(1)(+0000000): [object Object] {
  "name": "recognizePDF.noMatches",
  "params": [],
  "_title": "general.error",
  "cause": undefined,
  "title": "Error",
  "message": "No matching references found",
  "toString": function() {...},
  "present": function(window) {...},
  "log": function() {...}
}