NSF_id,Title,abstract,KMEANS_LABEL,organization
1020001000389662,EAGER: Using Crowdsourced Virtual Students to Create Intelligent Tutors,"This project will develop and evaluate the potential of a new human-computer system that bridges the roles of virtual student and virtual teacher to allow humans and computers to take turns teaching and learning from each other. The key insight is that reading comprehension activities (e.g., vocabulary building, summarizing, question generation, concept mapping) closely parallel the knowledge engineering required to create virtual teachers for intelligent tutoring systems (ITSs). The system links these activities so that when students read online, they engage a virtual student in educational tasks that both improve their reading comprehension and simultaneously contribute to the creation of ITSs for future students. An important aspect of the proposed research is to find the optimum balance between student learning (which benefits the individual) and the creation of ITS knowledge representations (which benefits many). Specific research objectives are: (1) to develop a baseline platform (called BrainTrust) such that students can create ITS knowledge representations by teaching a virtual student; (2) to study the relationship between the student's ability, the virtual student's ability, the student's learning outcomes, and the quality of knowledge representations produced. A distinctive characteristic of the proposed research is the study of these questions in ecologically valid conditions, as students engage in authentic study, while also participating in randomized experiments.<br/><br/>The research may lead to the development of systems that improve reading comprehension, which may have broad benefits given the centrality of reading comprehension to all learning. In particular, problems with reading comprehension have been linked to first-year college student dropout that disproportionately affects African-American students. The research will also enhance infrastructure for research and education through the development and dissemination of the BrainTrust platform, a next-generation computing infrastructure to rapidly create and deploy ITSs tailored to specific needs. If this exploratory project demonstrates that the dual outcomes of human learning and high-quality knowledge representations can be achieved, it will open a new area of research that brings teaching these virtual students full circle with learning from their derived intelligent tutoring systems.",13,University of Memphis
1020001000389672,EAGER: Generating and Understanding Narratives for Dynamic Environments,"Narrative generation is the process of generating textual descriptions of action in dynamic environments such as movies, sports events and educational programs. On-line narration of a dynamic environment is beneficial in a wide range of contexts, from entertainment to training and education. For example, successfully narrating a video would allow blind and visually-impaired people to follow visual cues that are important to understanding the video. Key to attaining this goal is the ability to translate natural language into a form that is understandable by computers. A particular challenge is to do this not for a specifically chosen domain, but in a general way that is suitable for adaptation to a wide range of natural dynamic environments. This project explores new directions to tackle these extremely challenging, yet crucial, issues, undertaking exploratory research towards building essential components of a domain-adaptive framework that learns to understand and generate narratives on-line for natural dynamic environments with minimal supervision by human experts. This research explores methods to generate narratives on-line by learning the natural dynamics of the environment, automatically forming templates, and deciding when and what to mention. <br/><br/>Many natural language applications are concerned with recognition of paraphrases and semantic understanding. The software and data resulting from this project are potentially useful for semantic analysis in natural language processing, and are being made available for research purposes. This work is designed for significant social impact through a broad range of applications, including education, entertainment, and accessibility. A narrative generation system could be beneficial to visually-impaired people to better understand videos over the internet. In addition, such a system can help broadcasting companies to report news or sports events with customized commentaries for different users. This project also provides research and collaborative work experience to undergraduate and graduate students, including those from under-represented and minority groups.",0,University of Washington
1020001000389706,EAGER: PARTIAL: An Exploratory Study on Practical Approaches for Robust NLP Tools with Integrated Annotation Languages,"In order to develop natural language processing (NLP) technologies for text in a wider range of languages, dialects, genres, and styles, this Early Grant for Exploratory Research investigates a novel methodological approach. Conventionally, linguistic experts are employed to create gold-standard linguistically annotated datasets to which supervised machine learning algorithms are applied. This project frees annotators from the requirement that annotations be complete by moving more of the burden to learning algorithms. Algorithms are developed that are robust to partial evidence, annotator variation, and noise due to errors. As a result, any language enthusiast (not just trained experts) can provide annotations so that NLP can be developed for more kinds of text in more languages for less money. In this exploration, the focus is on dependency parsing, a fundamental NLP component that predicts the grammatical relationships between words in sentences, with experimentation on data in English (two genres), Chinese, and Farsi. The formal basis for the approach is a framework called Graph Fragment Language (GFL). The project assesses the quality of parsers learned from GFL and the productivity of annotators accorded this new flexibility.<br/><br/>Beyond documentation and assessment of the new methodology, this project produces open-source software tools for gathering annotated data and constructing NLP tools using the data. It emphasizes the usability of these tools in classrooms, contributing exercises that can be used in NLP and linguistics courses to allow students to engage directly with data, with the models that make use of the data, and with the technological goals that data annotation supports.",0,Carnegie-Mellon University
1020001000389752,EAGER: Define and Construct an Enhanced Graph Representation for Multiscale Vector Field Data Summarization,"Vector field data analysis is indispensable for many applications in science and engineering, ranging from climate study, physics, chemistry, automobile design, to medical practice. Most existing analysis techniques for vector field data are not scalable to the real-world data with ever-increasing sizes and complexity. More importantly, the inherently limited visual perception channel largely constrains the ability to understand the complex geometric and physical behaviors of vector fields as a whole or in detail. To address these challenges, this exploratory project investigates a graph-based vector field data reduction for the subsequent extraction of a multi-scale vector field data summary. The summary serves as a condensed, yet informative, representation of the original vector field, supporting data interpretation and interaction and shielding the user from the underlying complexity of the flow dynamics. The key to computing such a summary representation is the construction of a novel, enhanced graph representation that encodes both the global structural information and local characteristics of the vector field, as well as other derived information. The approach focuses on development and validation of critical issues in graph-based vector field data reduction, including: (1) identification of the key information of a vector field for the construction of the enhanced graph; (2) efficient storage of the graph; and (3) new graph algorithms for extracting features of interest from the obtained graph. To address these issues, theories and algorithms from dynamical systems, algebraic topology, tensor calculus, information theory, and graph theory are extended and integrated in a novel framework. To validate the approach, the PI is working closely with domain scientists from mechanical engineering and aerodynamics to receive advice on the representation of the summary and its utility in specific applications. <br/><br/>The expected results in vector field summarization will yield an important addition to the existing summarization techniques for various data forms. The analysis and abstraction are based on the enhanced graph and can enrich the conventional graph theory and graph algorithms. The ability to handle both steady and unsteady vector fields improves the theory and practice of dynamical systems in describing fluid dynamic phenomena, benefiting a wide variety of disciplines. Knowledge learned from the vector field summarization can be adapted to the study of summarized representation of more complex geometric data, such as tensor field data. In addition, the research on vector field summary represents one step towards a unified framework of knowledge discovery and integrity from heterogeneous data forms. The developed techniques are expected to be implemented as a software tool that will be applicable in a wider range of scientific and engineering domains. Furthermore, the new theory stemming from this work is expected to enrich the existing education on data analysis and visualization, enabling the development of new courses at both undergraduate and graduate levels in many academic disciplines. The project web site (http://www2.cs.uh.edu/~chengu/vf_summary/vf_summary.html) will provide access to project results, including developed software tools.",11,University of Houston
1020001000389760,"Collaborative Research: EAGER: A Virtual eXchange to Support Networks of Creativity and Innovation Amongst Science, Engineering, Arts and Design (XSEAD)","Collaborative Projects: EAGER: A virtual eXchange to support networks of creativity and innovation amongst Science, Engineering, Arts and Design (XSEAD)<br/><br/>Intellectual Merit<br/>One of the greatest challenges facing the United States in research and education is how to fundamentally encourage innovation across all sectors and spawn new solutions to address global challenges. Increasing research evidence and industrial innovations (i.e. mobile computing, social media) confirm that broad interdisciplinary collaborations that include both science and art fields have great potential for spawning creativity and innovation in computer science, engineering and the sciences. An emerging hybrid community of scientists, engineers, artists and designers is producing innovative and entrepreneurial research that advances new knowledge and proposes holistic solutions to societal challenges including health, education and environmental change. Yet, this burgeoning interdisciplinary community continues to face problems in its efforts to self-organize among constraints imposed by academic systems and historical biases; it continues to seek a dynamic and synergizing research and outreach exchange.<br/><br/>Building upon lessons-learned, a new Virtual eXchange to support networks of creativity and innovation amongst Science, Engineering, Art and Design (XSEAD) will be developed. The XSEAD project will address the following urgent needs of the interdisciplinary science-art community: establish a cohesive view of the field and provide a mechanism to attract entrepreneurs and industry; create a venue for multimodal documentation of research outcomes; provide extensive databases of prior and current research; allow rapid dissemination of research outcomes; facilitate forming of collaborations and specialized sub-communities; document and help evolve science-art curricula efforts and evaluation approaches; provide context and support mechanisms for science-arts careers; establish evidence of the societal impact of interdisciplinary science-art integration. The software engineering development components of XSEAD will contribute further knowledge in three technical areas: Content organization (improve the effectiveness of algorithms for dynamic, usage based, organization of large multimedia databases); Recommendation algorithms (promote the use of multi-relational structures for providing effective recommendations); Community dynamics (develop novel algorithms to extract structures that encode meaningful interactions in online social networks).<br/><br/>Broader Impact<br/>XSEAD will expose general non-expert audiences to the evolution and potential of collaborative research across science and arts. It will attract the interest of young people searching for careers that combine the rigor of science and engineering with the creativity and reflection of arts and design. It will serve teachers and informal learning communities seeking exemplars for curricular development, active practitioners looking for further institutional opportunities to present and support their ongoing work, academics developing related interdisciplinary efforts and commercial companies seeking cross-trained expertise. 
XSEAD will enable rapid research exchange and in-depth peer-reviewed scholarship between the worlds of science and art and provide a unique and deeply engaging inroad to a vast and creative repository. XSEAD will help promote new paradigms for developing human centric solutions to complex societal problems (i.e. cost effective health and wellness, globalization and conflict, adaptive K-12 learning, electronic communication and security). These paradigms will combine knowledge across broad and diverse areas of human knowledge.",10,Carnegie-Mellon University
1020001000389763,ACM Multimedia 2013 Student Travel Grant,"This student travel grant supports graduate students enrolled in U.S. institutions to attend the premier conference in the multimedia area, ACM Multimedia 2013. It is a top scientific conference in the area of multimedia, covering the latest technical trends, system demos, grand challenges, open source software competition, innovative new ideas, doctoral symposium, and many other innovative activities. Participation in such an event allows students to interact with the leading experts from around the world, learn about cutting-edge research, develop professional networks, and interact with industry leaders involved in practical applications and technology transfer. The opportunity is broadcast to the broad community, and students from underrepresented groups are particularly encouraged to apply.",10,Columbia University
1020001000389770,EAGER: Collaborative Research: Visualizing Event Dynamics with Narrative Animation,"Discovering and understanding the temporal evolution of events hidden in text corpora is a complex yet critical task for knowledge discovery. Although mining event dynamics has been an important research topic leading to many successful algorithms, researchers, research and development managers, intelligence analysts and the general public are still in dire need of effective tools to explore the evolutionary trends and patterns. This exploratory project focuses on developing and validating a novel idea called narrative animation. Narrative animation uses animated visualizations to narrate, explore, and share event dynamics conveyed in temporally evolving text collections. Film art techniques are employed to leverage the animated visualizations in information organization and change detection, with the goals of enhancing analytical power and user engagement. A prototype system called CityStories is being developed to generate narrative animations of events in cities derived from web-based text. <br/><br/>If this novel, risky research is successful, it is expected to yield fundamental results in narrative animation that can advance the current paradigm in information visualization and visual analytics by developing novel techniques for using animations to present and analyze dynamic abstract data at a large scale. The pilot CityStories system is expected to provide a novel network platform for education, entertainment, and data analytics. It will engage general users such as students, teachers, journalists, bloggers, and many others in web information visualization and study. Results of this research will be disseminated through publications, the World Wide Web, and collaborations with researchers and analysts. The project web site (http://coitweb.uncc.edu/~jyang13/narrativeanimation/narrativeanimation.htm) will include research outcomes, publications, developed software, videos, and datasets for wide dissemination to the public.",15,University of North Carolina at Charlotte
1020001000389773,EAGER: Identifying Barriers and Opportunities for Building SocioTechnical Capital,"This project advances a scientific understanding of how ""socio-technical capital"" - ties that are created, maintained, or exploited through the use of information and communication technologies - is developed and used across different socioeconomic groups and populations. It is hypothesized that, like other valuable resources, the benefits of computer-mediated opportunities for building socio-technical capital are unequally distributed in society. For example, it appears as if far more effort has gone into building social networking tools and online markets for highly paid professionals (such as LinkedIn) than for handymen or day laborers. And online labor markets that exist for low-skill, low-commitment jobs (such as Amazon Mechanical Turk) do not appear to offer a path toward building socio-technical capital that might lead to more-stable, higher-wage jobs.<br/><br/>This project seeks to understand the prospects of tailoring the technologies of social networking tools and online labor markets to meet the needs of specific socio-economic populations, such as populations in Detroit, Michigan, and other cities in economic decline. The project will follow a human-centered approach of contextual inquiry, conducting interviews and focus groups employing a range of ""design probes"". These probes will examine technologies that currently support the employment process (such as LinkedIn, CareerBuilder, ODesk, TaskRabbit, and Angie's List) as a springboard to identify fundamental barriers to usage, and also to generate ideas for features that might be especially useful. In later sessions, design probes will include low-fidelity prototypes embodying features generated in earlier sessions. The end result will be an articulation of the special needs, barriers, and opportunities for using technology to help people in economically vulnerable communities to build, maintain, and use social capital to start moving up the economic ladder. The project will investigate how information and communication technology can help to create and maintain social and economic bridges between individuals within specific economic communities and people who can provide access to employment opportunities outside of those communities.<br/><br/>Broader Impacts: The project will have broad social impact by informing approaches to cultivating pathways to upward mobility in communities hit hardest by economic decline. If the research finds promising opportunities and surmountable barriers to the use of social networking tools and online labor markets, it will inform the design of future technologies and computer-mediated approaches to help these populations better prosper. If the research finds limited opportunities or insurmountable barriers, practitioners will know to look elsewhere to help vulnerable populations find essential socio-technical capital.",11,University of Michigan Ann Arbor
1020001000389774,EAGER: Enhancing Mobile Device Users' Levels of Situational Awareness through Tactile Feedback,"In this project the PI will explore a novel approach to allowing individuals to monitor their wider environment for potential obstacles and threats while engaged in a task where the eyes are occupied. Specifically, he will focus on mobile device users, who often perform visually-demanding tasks such as composing and reading text messages while ambulatory, such that they may fail to notice the presence of pedestrians, approaching vehicular traffic or other objects which they are at risk of encountering. The PI's approach is to present tactile feedback via a head-mounted interface in order to communicate the presence of obstacles. While situational awareness technologies have been designed to assist ambulatory users, alerts are often presented using visual or auditory feedback. But if the user is engaged with a mobile task, precious time may be taken to identify the presence of graphical indicators, whereas auditory alerts may be masked by environmental sounds so that the user misses vital cues. The PI argues that tactile feedback offers considerable advantages when the user's other senses are blocked or restricted, and there is the additional benefit that tactile alerts can be presented discreetly without drawing the attention of others. To test these hypotheses, the PI will conduct a sequence of studies to determine whether it is possible to design tactile cues that are effective in supporting informed decisions by the user. Project outcomes will include design of a head-mounted interface prototype using object-recognition and sensor-based technologies to track obstacles in the user's vicinity, along with innovative tactile interface design guidelines. <br/><br/>Broader Impacts: This research will advance our understanding of issues relating to situational awareness among mobile device users, and it will also contribute to the body of knowledge on presenting tactile feedback to locations on the head (a field still in its infancy). The development of a library of tactile icons to convey concepts such as the number of obstacles, their location, and their proximity to the user, will have application across diverse domains.",4,University of Maryland Baltimore County
1020001000389775,EAGER: Collaborative Research: Visualizing Event Dynamics with Narrative Animation,"Discovering and understanding the temporal evolution of events hidden in text corpora is a complex yet critical task for knowledge discovery. Although mining event dynamics has been an important research topic leading to many successful algorithms, researchers, research and development managers, intelligence analysts and the general public are still in dire need of effective tools to explore the evolutionary trends and patterns. This exploratory project focuses on developing and validating a novel idea called narrative animation. Narrative animation uses animated visualizations to narrate, explore, and share event dynamics conveyed in temporally evolving text collections. Film art techniques are employed to leverage the animated visualizations in information organization and change detection, with the goals of enhancing analytical power and user engagement. A prototype system called CityStories is being developed to generate narrative animations of events in cities derived from web-based text. <br/><br/>If this novel, risky research is successful, it is expected to yield fundamental results in narrative animation that can advance the current paradigm in information visualization and visual analytics by developing novel techniques for using animations to present and analyze dynamic abstract data at a large scale. The pilot CityStories system is expected to provide a novel network platform for education, entertainment, and data analytics. It will engage general users such as students, teachers, journalists, bloggers, and many others in web information visualization and study. Results of this research will be disseminated through publications, the World Wide Web, and collaborations with researchers and analysts. The project web site (http://coitweb.uncc.edu/~jyang13/narrativeanimation/narrativeanimation.htm) will include research outcomes, publications, developed software, videos, and datasets for wide dissemination to the public.",15,Kent State University
1020001000389777,EAGER: CIFRAM: Understanding High Frequency Trading Activity on the Nanosecond Time Scale,"The multidisciplinary project, led by Mao Ye of the University of Illinois at Urbana-Champaign and Robert Sinkovits of the San Diego Supercomputing Center, funded as part of the OFR-NSF Partnership in Support of Research Collaborations in Finance Informatics, aims to examine the impact of trading activity at the nanosecond (10^-9 second) timeframe on the financial system. This exploratory project aims to address two key questions: First, does competition in speed at the nanosecond level create or destroy social value? To be more specific, does an increase of speed improve our usual measures of market quality such as bid-ask spread, market depth, and market efficiency, or does it increase levels of volatility and instability? Second, are there any new types of market manipulations in the nanosecond world? The project leverages powerful supercomputers to investigate abnormal trading activity and suspicious market events on the nanosecond timescale by constructing an order-by-order snapshot of financial markets with nanosecond-resolution time stamps. <br/><br/>The project aims to develop effective computational and analytic approaches to exploring whether new regulations are needed when trading occurs on a nanosecond scale, and if so, how to design optimal regulatory policies. This is a challenging problem due to both the endogenous relationship between liquidity, price discovery, order cancellation, and speed on the one hand and the massive amounts of data generated on the other. The research team plans to use exogenous technology shocks and NASDAQ channel assignment as an identification strategy. By identifying exogenous technology shocks in the data, the team aims to establish the causal relationship between speed and market quality measures. They also use random NASDAQ channel assignment to examine whether there is abnormal co-movement in message flow within each channel, evidence consistent with quote stuffing. <br/><br/>In recent years, equity markets have experienced a number of problems related to high frequency trading, including the May 6, 2010 Flash Crash, the initial public offerings of BATS and Facebook, the losses by Knight Capital and, more recently, the crash which was precipitated by a rumor originating on Twitter. However, little is known about the market impacts of trading on the nanosecond scale. Development of effective policies and alarm systems requires understanding of the causal relationships between trading patterns and their effects on the financial system. The project, if successful, will help identify and characterize policy options and market discipline to ensure a safe, sound, and fair financial system in a world where trades are made at lightning speed. The project fosters interdisciplinary collaborations and research-based advanced training at the intersection of computer science, high performance computing, and finance. It enriches the curriculum and training of the next generation of researchers in the emerging interdisciplinary field of Finance Informatics.",11,University of Illinois at Urbana-Champaign
1020001000389780,EAGER: Investigating the Role of Discourse Context in Speech-Driven Facial Animations,"This EArly-concept Grant for Exploratory Research analyzes the role of discourse and dialog context in the generation of believable, human-like behaviors for a conversational agent (CA), i.e., a virtual agent that interacts with a user. CAs aim to engage users by displaying human-like behaviors not only through speech but also through facial gestures. One useful modality to drive facial behaviors is speech. Spoken language carries important information beyond the verbal message that a CA engine should capitalize on. A challenge in speech-driven animation is to generate behaviors that respond to the discourse context. This proposal presents a top-down approach to explore the importance of considering contextual information in the modeling of speech-driven facial gestures. The project starts with speech-driven models, based on dynamic Bayesian networks, which do not capture the specific discourse context, responding only to the properties of the acoustic features. Then, the study considers discourse-specific models in which the intent of the gestures is known. The study defines a specific, controlled domain as a testbed, recording multiple human interactions. Similar speech-driven models are trained, constrained by the specific discourse function. The study evaluates the differences in the perceived naturalness, appropriateness and rapport of generated facial gestures. <br/><br/>The study explores which discourse aspects affect the facial animation models, and which are more domain specific or independent. By incorporating the intrinsic discourse information, the proposed models generate behaviors that respond to conversational functions, addressing one of the limitations in speech-driven facial animations. The findings have a long-term impact in a variety of health care applications, such as helping hearing-impaired individuals and teaching social skills to autistic children. Likewise, discourse-dependent speech-driven models can play a key role in better tutoring systems that display human-like behaviors to communicate and engage with students.",12,University of Texas at Dallas
1020001000389789,EAGER: The LIT ROOM - A Networked Suite of Architectural-Robotic Artifacts Embedded in the Library for Advancing Literacy in Children,"In an effort to cultivate literacy skills in the United States, the nation's public libraries have embraced digital technologies, mostly in the form of public-access computers loaded with software, supported by library staff. But interacting with a ""keyboard-mouse-screen"" may not offer the immediacy of interacting with the printed page, and both forms of interaction are far removed from the physical, tangible and social world in which young children thrive. The PI argues that literacy can be cultivated in a space that is at once physical and digital and evocative of the book being read. To this end, in this project the PI will explore the LIT ROOM, a literacy support tool at room-scale that consists of a novel suite of user-friendly, networked, ""architectural-robotic"" artifacts embedded in the everyday physical space of the library. This physical-digital environment is transformed by words read by its young visitors, so that the everyday space of the library ""merges"" with the imaginary space of the book; the book becomes the room, the room becomes the book. And should the LIT ROOM's intelligent reconfigurations not match the imagined spaces of young readers, they can ""fine-tune"" the room through tangible interfaces. The work will proceed in two phases. First, the PI will ask children to decide what makes for a compelling LIT ROOM. He will present to children, ages 4-8, low-fidelity ""architectural-robotic"" artifacts within a library space to help capture how children define and employ this digital-physical suite to ""create the book."" Then, he will iteratively develop and evaluate the suite as a fully-working environment that embodies what was learned from the child-centered participatory design process. The test bed implementation will be situated in the Richland County Public Library of Columbia, South Carolina, the largest public library in a state that ranks among the lowest in the State Technology and Science Index and the highest in numbers of people who are both illiterate and living below the poverty line. To tackle this challenge, the interdisciplinary team includes two investigators with complementary expertise in continuum and architectural robotics, and literacy education.<br/><br/>Broader Impacts: Because the prototype implementation will be located in a real-world public space, it will have exposure to a large audience. Project outcomes will also advance the state of the art of robotic systems for a real-world environment and application, using a mix of sensing/actuating and deformable continuum surfaces. The findings will further advance knowledge and understanding in literacy by studying and providing experimental data on the efficacy of tangible environmental technologies in promoting literacy in children.",4,Clemson University
1020001000389796,EAGER: Automatically Generating Formal Human-Computer Interface Designs From Task Analytic Models,"The concurrent nature of human-computer interaction (HCI) can result in situations unanticipated by designers. Usability may not always be properly maintained or human operators may not be able to complete the task goals that a system was designed to support. This can result in poor adoption of the system, decreased productivity with its use, or unsafe operating conditions. Mathematical tools and techniques called ""formal methods"" exist for modeling and providing proof-based evaluations of different elements of HCI including the human-computer interface, the human operator's task analytic behavior, and usability. Unfortunately, these approaches require the creation of formal models of interface designs, something that is non-standard practice and prone to modeling error. This project will show that a formal-methods approach can be used to automatically generate formal human-computer interface designs that are guaranteed to adhere to usability properties and to support human operator tasks. Specifically, a system that uses the L* machine learning algorithm will be created that will generate formal interface designs using task analytic behavior models and formal representations of usability properties.<br/><br/>The researchers will implement an interface generation system, test its performance with a suite of benchmark examples, and evaluate its ability to generate an interface for a realistic application. To implement the generator, the researchers will first construct an oracle system capable of accepting or rejecting interface state transition sequences based on analyst-specified task models and usability properties. This oracle system will be connected to an implementation of the L* algorithm that will progressively learn a formal interface model by observing how generated sequences of interface state transitions are accepted or rejected by the oracle. Artificial test cases that exploit the different features of the system will be used to generate interface designs, and formal verification will be used to check that the designs exhibit the intended properties. The system will be used to generate the human-computer interface for programming a patient controlled analgesia pump, a medical device that automatically delivers pain medication to patients intravenously. The generated interface will then be compared against the formal interface design standard that exists for these devices.<br/><br/>The automatic generation of human-computer interface designs from task analytic models and usability properties constitutes a novel approach to user-centered design. By using this method in the creation of interfaces, designs will be guaranteed to always exhibit certain properties. This will potentially help ensure that designs will be accepted by users, improve the associated system's efficiency, and facilitate safer operation. The formal representation of user interfaces that result from the implementation of this method will also permit HCI designers to pursue formal analysis and verification of other interface properties, and will facilitate the automated generation of test cases for usability verification and certification purposes.<br/><br/>Broader Impacts: The proposed research has the potential to significantly change the way human-computer interfaces are designed. 
By guaranteeing that generated interfaces are always usable, this research could improve the usability and safety of user interfaces across many domains. The performance guarantees of the generated designs could allow development and testing times to be reduced, thus decreasing development and software costs. This work will also enhance the education and research experience of UIC's diverse engineering student body. The computational resources acquired for this work will be made available to students for research projects, and study results will be incorporated into the curriculum of the PI's graduate and undergraduate courses. Project results will be presented at conferences by student researchers and published with open access in high-quality journals. A dedicated website will be used to rapidly disseminate results and tools produced during this effort.",4,University of Illinois at Chicago
1020001000389820,EAGER: 3D Event Reconstruction from Social Cameras,"This EAGER project explores the use of social cameras to reconstruct and understand social activities in the wild. Social cameras are an emerging phenomenon, producing video captures of social activity from the point of view of members of the social group itself. They are proliferating at an unprecedented rate, as smartphones, camcorders, and recently wearable cameras, become broadly adopted around the world. Users naturally direct social cameras at areas of activity they consider significant, by turning their heads towards them (with wearable cameras) or by pointing their smartphone cameras at them. The core scientific contribution of this work is the joint analysis of both the 3D motion of social cameras (that encodes group attention) and the 3D motion in the scene (that encodes social activity) towards understanding the social interactions in a scene. A number of internal models (such as maximizing rigidity or minimizing effort) for event reconstruction are being investigated to address the ill-posed inverse problems involved.<br/><br/>This research is establishing a new area of visual analysis by providing the requisite framework for social activity understanding in 3D rather than in 2D. The ability to analyze social videos in 3D space and time provides useful tools for almost any activity that involves social groups working together, such as citizen journalism, search-and-rescue team coordination, or collaborative assembly teams. The project is integrated with education through teaching and student training, and outreaches industry through collaborations.",11,Carnegie-Mellon University
1020001000389829,EAGER: Image and Video Forensics: Detecting Image Manipulation by Content Analysis,"The advent of sophisticated photo editing software has made it increasingly easier to manipulate digital images. Often visual inspection cannot definitively distinguish the resulting forgeries from authentic photographs. In response, forensic techniques have emerged to detect geometric or statistical inconsistencies that result from specific forms of photo manipulation. The PI aspires to develop new forensic methods based on geometric content analysis, which focus on finding inconsistencies in the geometric relationships among objects depicted in a photograph. The geometric relationships in a 2D image correspond to the projection of the relations that exist in the 3D scene; if a scene is known to contain a given relationship but the projected relation does not hold in the photograph, then one may conclude that the photograph is not a true projective image of the scene. With this in mind, the PI's goal in this exploratory project is to build a set of testable constraints that must be satisfied in real images, so that an unsatisfied constraint constitutes definitive, objective evidence of image manipulation. Fundamental challenges of this work include: developing tools for analysis from incomplete lighting information, building testable models of skin reflectance, accounting for structured uncertainty in feature comparison, and establishing method guidelines for forensic image analysis.<br/><br/>Broader Impacts: This project will create tools for objectively detecting image manipulation, which will help reporters, law enforcement, scientists, and others differentiate between legitimate photographs and forged images. The products of this research will be communicated via academic publications and online source code. Collaborations with industrial partners will allow the research to have practical impact as well.",1,University of California-Berkeley
1020001000389843,EAGER: A Local-Global Approach Towards Omnipresent Vision,"This project constructs an Omnipresent Vision system - a computational system that allows us to navigate, share, enhance, and understand the visual data captured by a slew of fixed and moving cameras. The society is flooded with various cameras. Almost every cell phone has a video camera and wearable cameras are starting to permeate our lives. These local cameras capture visual experiences from personal perspectives. Static cameras at various outdoor and indoor locations are also constantly capturing videos. These fixed-view cameras offer global, persistent looks into our daily lives. The key idea of this project is to fully leverage the combination of these local and global cameras to enable new visual experiences and facilitate the understanding of the scene and the people within. This is achieved with novel algorithms and computational tools that bring together the local and global views into an integrated platform, model the dynamic scene by joining those two sets of perspectives, and recognize the actions and events in them. <br/><br/>The research, at a personal level, enables the spatio-temporal and contextual expansion of the person's view, and at a scene level, it enables the interpretation of the scene at various scales of spatial and temporal resolutions. It also provides new means to understand people and scenes. For instance, it facilitates the understanding of people who cannot communicate their intentions. The research activities also furnish graduate and undergraduate students educational opportunities to take part in spawning this new area of research.",1,Drexel University
1020001000389860,EAGER: Understanding Barriers to Workplace Collaboration for People with Visual Impairments,"People with visual impairments face significant barriers to participation in public life and productive contributions in the workplace. Barriers due to inaccessible information and communication technology can make it difficult for a person with visual impairments to excel in school or find employment commensurate with his or her abilities and potential. Fortunately, accessible computing technology enables people with visual impairments to access information that was previously inaccessible. For example, people with visual impairments can use screen reader software to translate electronic documents into speech or Braille. However, barriers still exist in the context of using computers to participate in the contemporary classroom or workplace because these are now places where active, real-time, face-to-face collaboration with other people has become the norm, and because there is a gulf between the look and feel of assistive technology such as screen readers and the look and feel of contemporary graphical user interfaces such as those used routinely by people without disabilities. Screen readers are difficult to learn and use, and operate quite differently than contemporary graphical user interfaces, and most sighted people do not know how to use screen readers, and so the opportunity for shared artifacts in communication and collaboration is reduced.<br/><br/>The focus of this proposal is to explore and identify barriers to collaboration between people with and without visual impairments. This research will support the development of tools that will bridge the gap between the non-graphical user interfaces used by people with visual impairments and the graphical interfaces used by sighted people, enabling individuals with any level of visual ability to collaborate at school or work. Building on prior research on developing accessible user interfaces for people with visual impairments, this project will explore the challenges of collocated synchronous collaboration between people with and without visual impairments. The researchers will conduct formative interviews and observational studies of professional adults with and without visual impairments to understand the barriers to collaboration within and across these populations. Very little prior research has explored these issues. This project will conduct a deep exploration to reveal accessibility barriers that were previously hidden and unknown.<br/><br/>Broader Impacts: This research will identify existing barriers to employment that can be addressed through changes in policy or practice, or through the development of new technology. Enabling people with visual impairments to work alongside their sighted peers will significantly improve educational and employment outcomes for millions of people. The research project itself will be developed in the context of a collaboration between people with and without visual impairments, and will include blind and visually impaired people from the local community.",15,University of Maryland Baltimore County
1020001000389869,EAGER: Immunization in Influence and Virus Propagation on Large Networks,"Given a graph, like a social/computer network or the blogosphere, in which an infection (or meme or virus) has been spreading for some time, how does one select the k best nodes for immunization/quarantining immediately? This team was the first to show that the propagation (specifically, the so-called ""epidemic threshold"") depends on a single number, the first eigenvalue of the adjacency matrix of the network, for any graph and almost any propagation model in the literature. This team also gave linear-time provably near-optimal algorithms for static pre-emptive node/edge removal, by minimizing the eigenvalue on arbitrary graphs. They were also the first to give a linear-time algorithm to automatically detect the number and identity of possible culprits under perfect information, carefully using the Minimum Description Length principle, again on arbitrary graphs. <br/><br/>The major thrust of this proposal is: Given a graph, a virus model (SIR, SIS etc.), a set of already infected nodes, and a fixed budget of k nodes/edges to immunize or quarantine, can one quickly find an optimal or near-optimal solution to best contain the virus?<br/><br/>Technical Merit: This is the first work to study the short-term immunization problem on arbitrary graphs. The problem has received limited attention in past literature: the few current results (except the PI's past work; see related work) are all on specific graphs like random graphs, not arbitrary graphs. The focus of this work is on scalable techniques (linear or sub-quadratic on nodes/edges) which can be applied to large graphs.<br/><br/>Impact: The work has numerous immediate applications in public health and epidemiology, e.g., designing dynamic ""what to do next"" policies. Leveraging state-of-the-art simulators from the Virginia Bio-Informatics Institute, this work helps in realistic simulations, as well as in making more informed choices and policy decisions for the future. The work also has high broader impact, as propagation-style processes on networks appear in many other settings like viral marketing, cyber security, and social media like Twitter and blogs.<br/><br/>Education: The PI will incorporate research findings in graduate-level classes, give tutorials at conferences, and aim to engage undergraduate students from underrepresented groups in this exciting area of research through programs like NSF REU and MAOP/VTURCS (Minority Academic Opportunities Program and VT Undergraduate Research in CS) at VT.<br/><br/>For further information, please see the project web page: <br/>URL: http://www.cs.vt.edu/~badityap/NSF-PROJECTS/EAGER-13/",11,Virginia Polytechnic Institute and State University
1020001000389877,EAGER: Collaborative Research: Establishing Trustworthy-Citizen-Created Data for Disaster Response and Humanitarian Action,"Often referred to as microblogging, the practice of average citizens reporting on activities ""on-the-ground"" during a disaster is increasingly common. The contents of these messages are potentially valuable to responder organizations and victims, but their volume makes it difficult to separate valuable messages from the stream. This project will examine microblogged messages sent during disasters to determine what aspects of the messages (individually and collectively) indicate that they are relevant, verifiable and actionable. Factors to be considered include the content of the messages, the identity of the sender and the overall pattern and spread of messages. The identified factors will then be used to instruct crowdsourced workers who will label messages to create a large corpus of labeled messages. <br/><br/>The project is important because microblogging data are seen as increasingly important: they are ubiquitous, rapid and accessible, and they are believed to empower average citizens to become more situationally aware during disasters and to coordinate to help themselves. The result of the project, if it is successful, will be evidence that it is possible to identify relevant, verifiable and actionable messages from a stream of microblogged messages and identification of the evidentiary factors. A further outcome will be a disaster-related, labeled dataset of messages, which will be useful to researchers, e.g., those seeking to automatically classify information within a microblogged data stream.",0,Pennsylvania State Univ University Park
1020001000389880,EAGER: Collaborative Research: Establishing Trustworthy-Citizen-Created Data for Disaster Response and Humanitarian Action,"Often referred to as microblogging, the practice of average citizens reporting on activities ""on-the-ground"" during a disaster is increasingly common. The contents of these messages are potentially valuable to responder organizations and victims, but their volume makes it difficult to separate valuable messages from the stream. This project will examine microblogged messages sent during disasters to determine what aspects of the messages (individually and collectively) indicate that they are relevant, verifiable and actionable. Factors to be considered include the content of the messages, the identity of the sender and the overall pattern and spread of messages. The identified factors will then be used to instruct crowdsourced workers who will label messages to create a large corpus of labeled messages. <br/><br/>The project is important because microblogging data are seen as increasingly important: they are ubiquitous, rapid and accessible, and they are believed to empower average citizens to become more situationally aware during disasters and to coordinate to help themselves. The result of the project, if it is successful, will be evidence that it is possible to identify relevant, verifiable and actionable messages from a stream of microblogged messages and identification of the evidentiary factors. A further outcome will be a disaster-related, labeled dataset of messages, which will be useful to researchers, e.g., those seeking to automatically classify information within a microblogged data stream.",0,University of North Texas
1020001000389915,"CAREER: A Scalable, Declarative, Imprecise Database Management System","The unprecedented amounts of data available to individuals, companies, governments, and scientists promise to revolutionize the way entertainment, business, governance, and science operate. And while data are cheap and plentiful, much of this data is lower quality than the precise data that has been managed for the last 30 years. Building an application that processes this imprecise data is difficult: it requires that developers handle standard data management challenges (e.g., concurrency and scalability) while at the same time coping with imprecise and incomplete data, which is typically done using statistical or machine learning techniques (e.g., interpolation and classification). The Hazy project addresses this challenge by building a system that integrates the paradigms of relational database management systems with statistical machine learning techniques. This project conducts the following major tasks: (I) designing a language to integrate these techniques with standard SQL, (II) proposing an algebra to implement this language along with support for automatic optimization (similar to a standard RDBMS), and (III) discovering techniques to efficiently maintain the statistical models as the underlying data are changed or updated. The end goal is a system that makes it as easy to develop scalable applications that use imprecise data as it is to develop their precise counterparts. Hazy allows users to process larger amounts of data with more sophisticated statistical processing than ever before. In turn, this enables new applications in a diverse set of areas, such as life and physical science sensing applications, health-care and environmental monitoring, and enterprise-based and Web-based information extraction.<br/><br/>The research of this project is used to develop the data and infrastructure for new practicum-style courses that are under development at the University of Wisconsin-Madison. In addition, this infrastructure will be used as part of an outreach effort to enable high school students to gain access to data analysis tools. The source code of Hazy is released as open source and the results are disseminated on the project Web site (http://www.cs.wisc.edu/hazy/).",5,Stanford University
1020001000389941,EAGER: Diverse M-Best Predictions from Probabilistic Models,"Computer Vision systems must deal with significant levels of ambiguity - from inter- and intra-object occlusion and varying appearance, lighting, and pose. Probabilistic models provide a principled framework for dealing with uncertainty and for converting evidence into a posteriori belief about the world. Typically, a vision system uses this belief to predict the ""most likely"" or maximum a posteriori hypothesis. Unfortunately, our current models are inaccurate and this single-best hypothesis is often incorrect. <br/><br/>This project explores a novel way to allow vision systems to hedge against uncertainty by producing multiple plausible hypotheses. Specifically, this project develops techniques for finding a diverse set of high-probability solutions from probabilistic models. The project focuses on (a) interactive object cutout (where multiple segmentations are shown to the user to expedite convergence to an acceptable result); (b) semantic segmentation (where multiple plausible scene labelings are propagated to subsequent stages of a cascade for higher-order processing); (c) person/object tracking (where multiple localization hypotheses on each frame reduce the search space of a sequence tracker). <br/><br/>This project is producing new scientific knowledge in the context of probabilistic reasoning and advancing the state of the art in computer vision. The techniques developed are useful for other AI domains such as Speech and Natural Language Processing. The PI and his students are broadly disseminating the produced work by organizing workshops, tutorials, and journal special issues, and publicly sharing code and results. The project is engaging undergraduate students and women in computer science research.",0,Virginia Polytechnic Institute and State University
1020001000389948,EAGER: Designing Reflective Opportunities in Human-Computer Interaction,"Many people presently interact more with and through computers than they do with other people directly, but such interaction may have negative effects that go unobserved. This balance of positive and negative effects could be considered more thoughtfully if the design of the technology created opportunities for reflection, e.g., by creating ""seams"" in interaction that make the balance between the machine's influence and the human push-back more obvious, or by allowing people to nudge one another and themselves in particular directions. The goal of this project is to create at least one proof-of-concept implementation and demonstration of such a reflective design that will provide a paradigm or model for similar development and serve as a basis for future research. The pilot project has four parts: development of a task and example reflective opportunity, technology and experimental development, conduct of an experiment, and analysis. <br/><br/>The project is important because of the enormous role that computer systems play in the interstices of everyday existence, because the influences that computer systems have are not necessarily well-understood by users, creators, or analysts, and because of the potential for novel and more beneficial approaches to design that bring these influences to greater awareness. If the project succeeds in demonstrating the value of a reflective design approach, it may influence the ways people in society conceptualize computation, increasing their awareness of the need to cultivate a pro-active stance as users and designers and so provide a new paradigm for human-computer interaction research and development.",4,Virginia Polytechnic Institute and State University
1020001000390065,EAGER: An Efficient and Effective Distributed Information System,"The tremendous growth of information in the data-intensive world and a new wave of big data are creating a promising future for global ultra-large-scale data sharing, where widely-scattered massive data will be pooled and shared globally. A distributed data-intensive information system is a critical component for realizing this future. The system will allow users to efficiently and effectively search similar data. However, the unprecedented amount of data, along with the large-scale environment and autonomous nature of participants, poses high efficiency and effectiveness challenges to the development of such a system. This research will provide collaborative research opportunities for faculty, graduate and undergraduate students, as well as K-12 students in South Carolina.<br/><br/>A growing need persists for developing an efficient and effective information searching system, and this challenge represents one of the more formidable hurdles facing data-intensive computing. This proposal is aimed at addressing this need through the development of a distributed information system supporting efficient and effective data searching. This system achieves both high efficiency and effectiveness. Efficiency means the speed and overhead of sorting and searching data, while effectiveness means the ability to find all matching data in the system with fewer false positives and false negatives. This system translates data items to IDs, maps the data items to nodes in a distributed system, and enables similarity searching in a distributed manner. First, previous data translation methods relying on a multi-dimensional space to hash a data item to one index achieve high efficiency but suffer from low effectiveness due to the curse of dimensionality in data dimension reduction. Previous exact mapping methods that hash each keyword of a data item for data search are highly effective but inefficient. By eliminating the need for a multi-dimensional space, this system is both highly efficient and effective. Second, unlike some previous systems relying on a centralized or hierarchical structure for data searching, this system builds a distributed hash table (DHT) structure, which provides highly efficient data searching in a distributed manner. Unlike most traditional DHT-based data sharing which provides only exact matching services, this system offers similarity searching.",8,Clemson University
1020001000390120,"EAGER: Individualized Musculoskeletal Modeling for Diagnosis, Rehabilitation and Real-time Feedback","The goal of this project is to vastly improve the way musculosketal modeling is performed and utilized by creating a data driven model that builds on the product of exponentials formulation for joints. By using parameter fitting, a kinematic chain of joints can be fitted the individual rather than scaling a generic template which does not take into account the large diversity in body shapes and sizes. The team will look at a structured, yet data driven approach to modeling a person, making it possible to compare joint ranges and muscular limitations both contralaterally, as well as against their peers. It is also possible to compare a patient and against their past history allowing for better understanding and diagnosis within a specified patient groups (scoliosis, the elderly, hip replacement recovery) as well as with the general population. The PI proposes a hybrid optimal control approach for determining muscular activation based on segmenting different dynamical modes. These modes can take into account changes in mass or geometric constraints such as assistive devices (e.g. crutches walkers or exoskeletons). They propose to apply these methods of musculoskeletal modeling to the upper limbs in the elderly group who experiences muscle weakness, joint damage and may have artificial prostheses.<br/><br/>The tools developed under this proposal can be used to analyze any number of biological creatures by modeling their joints in a similar manner. It expands to a wider robotic community where robotic kinematic chains can be directly compared to biological chains. This can be used for teleoperation of robotic devices where particular joints can be mapped between each other. It also adds to the tools that can be used to design assistive exoskeletons and prosthetic devices, as it allows biological and mechanical joints to be modeled in together- potentially improving the methods of controlling these devices. The project team consists of a PI from Computer Science and two consultants, one from Mechanical Engineering (UC Berkeley) and a MD from UCDMC who have extensive experience in workspace assessment techniques, human modeling, human-machine interaction, and control. Research findings will also be outreached to K-12 students and their parents though various official events at the University. The Center for Information Technology Research in the Interest of Society (CITRIS) at UC Berkeley provides a unique environment and opportunity for the investigators to interact and share research findings with other researchers, students, and broader public.",20,University of California-Berkeley
1020001000390121,EAGER: CISSDA: A Unified Cyberinfrastructure Framework for Scalable Spatiotemporal Data Analytics,"Massive spatiotemporal datasets are often collected using global positioning systems (GPS) and other location-aware devices. Spatiotemporal data mining and analysis have become increasingly important as continued growth in geographic information science and technology enables scientific investigations and decision-making support in a plethora of fields. Big data and extensive computational capabilities are needed to mine and analyze the massive quantities of complex spatiotemporal data collected across multiple scales and used for diverse applications. However, conventional methods and tools for spatiotemporal data mining and analysis are developed primarily using sequential computing, and cannot adequately handle this increasing data intensity, complexity, and diversity of applications. Only by seamlessly harnessing heterogeneous and advanced computing and information infrastructure - cyberinfrastructure - can large and complex spatiotemporal data be efficiently analyzed on a wide scale.<br/><br/>This project creates a unified cyberinfrastructure framework by adapting and integrating heterogeneous modalities of computing and information infrastructure (e.g., cloud, high-performance computing, and high-throughput computing) for scalable spatiotemporal data analytics. The framework encompasses two types of novel and complementary capabilities: 1) a suite of methods and algorithms for scalable spatiotemporal data analytics through synthesis of data mining, information network analysis, and parallel and cloud computing; and 2) a geographic information system (GIS) based on advanced cyberinfrastructure (i.e., cyberGIS) to facilitate the use of the methods and algorithms by a large number of users. These novel capabilities help overcome many current limitations in geographic and social science research involving huge amounts of spatiotemporal data, and bring forth useful insights for formulating new policies. The framework is designed to gain new fundamental understanding about individual activity patterns and spaces in the domain of environmental health through scalable analysis of massive space-time trajectory data that depict the movement of individuals over space and time. Given the ubiquitous use of spatiotemporal data, the project will lead to both transformative and broad impacts on almost all disciplines that employ geospatial technologies for scientific problem solving and decision-making support.",5,University of Illinois at Urbana-Champaign
1020001000390150,NRI: NSF: Challenges and Opportunities in Utilizing Robotics in Small and Medium Manufacturing Enterprises,This grant supports the organization of a workshop that brings together thought leaders from industry and academia to discuss use-case scenarios for use of robots in small and medium-sized enterprises. The objective of the workshop is (1) to present and discuss industry use-cases for robotics in manufacturing and in particular examples where a move from fixed automation to flexible automation is an economic enabler and (2) to have broad discussions across academia and industry to internalize a common set of challenges and opportunities. The workshop participants gain new insights into what advances are needed to deploy robotics in small and medium manufacturers. The workshop report provides a roadmap of challenges and research opportunities to guide the robotics community. The workshop stimulates future research towards development of robots for manufacturing applications. These activities directly impact U.S. manufacturing.,7,Georgia Tech Research Corporation
1020001000390317,EAGER: Building and analyzing dynamic brain functional networks,"Functional Magnetic Resonance Imaging (fMRI) offers a rich source of data for understanding brain structure and function. There is an urgent need for computational approaches to eliciting brain structure-function relationships from such data. While current approaches that rely on network representations of fMRI data offer useful insights into brain connectivity, they have significant limitations in terms of discovering and validating complex functional mechanisms that underlie brain function: the very notion of a node in the functional network is difficult to define; inter- and intra-subject variability leads to network representations that vary a great deal within and among subjects; and static representations of correlations between activities of different brain regions offer only a partial picture of brain activity, which is inherently dynamic. <br/><br/>This exploratory research project aims to introduce dynamic network analysis techniques to discover functional brain regions and study their evolution over time and across subjects. It leverages the research team's experience in the analysis of continuous and highly variable spatio-temporal climate data to address several of the outstanding challenges in analyzing brain networks built from fMRI data. A key goal of this exploratory study is to explore the feasibility of analyzing network representations of fMRI data to answer questions such as the following: Do nodes change dynamically with time during an fMRI scan? What community detection techniques work best for dynamic networks? What patterns arise in dynamic brain networks? What is the statistical validity of the observed patterns?<br/><br/>Broader Impact: If successful, the project could establish the feasibility of research directions that could eventually lead to computational tools that enable better characterization of normal and abnormal brain function; better understanding of the variation of brain function within and across individuals over time, including patterns that characterize the brain activities of different populations, e.g., adolescents, those suffering from specific brain disorders, etc. The project offers enhanced opportunities for interdisciplinary collaborations between neuroscientists and computer scientists, and research-based advanced training of graduate and postdoctoral students at the University of Minnesota. Free dissemination of open source implementations of the algorithms resulting from the project to the larger research community contributes to its broader impact.",20,University of Minnesota-Twin Cities
1020001000390367,EAGER: Virtual Personality Assessment Laboratory,"The goal of this project is to develop the principles to design a virtual personality assessment laboratory. This requires development of a preliminary taxonomy of mechanics, grounded in personality research, that allows researchers to use behavioral patterns of individuals in computer-generated virtual environments to assess their real-world personality. Such virtual personality detection mechanisms can then be used by other researchers to adapt the laboratory further toward an ultimate goal: the design of more personalized and adaptive applications that may increase impact on large societal problems. The research activity will lead to a new methodology with the potential to transform current practices across disciplines, from social psychology to human-centered computing.<br/><br/>The virtual personality assessment laboratory will be developed as a set of modular challenges and situations that make use of the mechanics identified in the taxonomy. These situations are constructed to elicit personality preferences. The design is driven by personality theory and validated by a wide range of personality measures such as the Need For Cognition, the California Q-Sort, the Reiss Motivation Profiler, and the Five Factor Model. The system will be validated through two iterations to ensure that scenarios are assimilated and that they conform to the intention of the designers. A final summative evaluation will be administered utilizing data on the behavior of research subjects inside the environment, as well as various personality measures such as scores from personality questionnaires, informant interviews, and behavior coding. Correlation analysis will be used to investigate relationships between choices emerging from the context of action in the virtual environment and personality scores. <br/><br/>This research directly impacts a number of disciplines, from the psychology of personality to adaptive technologies and personalization. The research affects our understanding of personality within virtual environments, which are becoming a major part of our lives. It also has the potential to enable customizable learning environments. Understanding individual differences through the analysis of consumption and behavior of digital entertainments allows for a deeper level of adaptation and personalization of persuasive technologies aimed at fostering education, health, or training, potentially increasing participation from all segments of society.",13,Northeastern University
1020001000390429,EAGER: Towards a self-organizing map and hyper-dimensional information network for the human genome,"Genome data enable scientists to pose a host of compelling questions spanning diverse disciplines. However, relational databases are inefficient at modeling the complex relationships between genes and the proteins they encode. The PI will enable biologists to answer these questions efficiently and automatically by developing a computational infrastructure that models the inherent structure of biological data, specifically by creating a graph database of genome and proteome data for the human genome and related eukaryotic genomes to model relationships (evolutionary, interaction, regulatory) that cannot be represented effectively in relational databases. Nodes will represent different biological entities - genes, proteins, species - and edges between nodes will represent different relationships between these entities. For example, edges between genes and proteins can represent ""Gene G encodes protein P"" or ""Gene G is regulated by protein P"". Edges between proteins can represent physical interaction or homology. Different types of features for these entities at each node will be stored, and the team will use the network structure and statistical modeling methods to enable precise predictions of various aspects of ""function"" -- molecular function, metabolic pathway, biological process, cellular localization, inter-molecular interactions, protein 3D structure, etc. Functional annotation will be automated, with results produced in both machine-readable and human-readable formats. Intuitive web-based interfaces will be provided for navigation and interpretation of data by experimental biologists. Provenance of predicted functions will be provided, allowing biologists to drill down to examine the underlying support and evidence. All core software tools will be provided in open source, and data will be downloadable. This project will contribute curriculum materials suitable for inclusion in undergraduate and graduate courses in bioinformatics, genomics, phylogenomics, and evolutionary biology, and provide a resource for researchers in vertebrate genomes.",3,University of California-Berkeley
1020001000390430,EAGER: Physio-linguistic Models of Deception Detection,"The goal of this Early-concept Grant for Exploratory Research is to explore a new generation of computational tools for joint modeling of physiological and linguistic signals of human behavior. The project is the first to investigate physio-linguistic models for deception analysis. To achieve this goal, the following three research objectives are pursued. First, a novel physio-linguistic dataset of deceit is built, covering several different domains. Second, rule-based classifiers for deception detection are explored, using physiological features (e.g., heart rate, respiration rate, galvanic skin response, skin temperature), as well as linguistic features. Third, data-driven learning approaches for multimodal deception detection are developed, taking advantage of the recent progress in early, late, and temporal fusion models. <br/><br/>The project is exploratory in nature, and acts as a catalyst for novel research problems. First, it explores rich sets of multimodal features extracted from physiological and linguistic modalities, analyzing their effectiveness in the recognition of deceit. Second, it also explores the integration of multiple physio-linguistic modalities, through experiments with rule-based and data-driven techniques that fuse multimodal features into joint deception analysis models. To address the challenges of multimodal research work, the team working on this project brings together experts from the fields of bio-sensors, computational linguistics, and physiology and behavioral sciences.<br/><br/>The project has high potential payoffs, as models of deception detection have broad applicability, including: the development of critical tools for various applications in fields such as criminal justice, intelligence, and security; the enhancement of applications that can be negatively affected by the presence of deceit, such as opinion analysis or modeling of human communication; and a deeper understanding of fundamental aspects of human behavior, which can positively impact medical applications in psychiatry and psychology. The tools and datasets produced during this project will be made freely available for the research community.<br/><br/>For further information see the project web site at: http://web.eecs.umich.edu/~mihalcea/deceptiondetection/",5,University of Michigan Ann Arbor
1020001000390444,NRI: Small: Rapid exploration of robotic ankle exoskeleton control strategies,"This project compares different techniques for assisting individuals with stroke-related mobility impairments using a robotic ankle orthosis. Several promising assistance techniques have been developed for robotic prostheses and rehabilitation platforms, which have been extended to exoskeletons worn at the ankle joint and adapted for individuals with stroke. An ankle exoskeleton test bed is used to emulate each assistance technique, allowing comparisons of efficiency and effectiveness within the same platform. Each technique is first programmed and verified in pilot tests with this emulator, followed by multi-dimensional parameter studies, conducted first on subjects without neurological impairment and then on subjects with hemiparesis following stroke. The results for each technique are used to identify ideal parameters and their settings, which facilitates across-technique comparisons and development of a standardized set of quantitative performance metrics, including measures of effort, preferred speed, and stability. These studies will contribute to a scientific foundation for the design and prescription of robotic ankle-foot orthoses that will benefit millions of individuals with mobility impairments.",7,Carnegie-Mellon University
1020001000408018,EAGER: WeatherBlur,"The Cyberlearning and Future Learning Technologies Program funds efforts that will help envision the next generation of learning technologies and advance what we know about how people learn in technology-rich environments. One promising approach to support learning is the use of online learning communities. Another approach is place-based education, in which people learn about concepts through their application in the places people know: their homes, towns, and regions. This project examines an existing experimental online community called WeatherBlur, which uses both approaches as well as citizen science to connect kids, teachers, fishermen and -women, and scientists to learn about the local impacts of weather and climate in Alaska and Maine. The project will examine how this unique community connects people and will collect data on other learning communities to compare what features of the community allow people to collaborate and learn well across such diverse audiences. The research will yield a set of guiding principles for how to effectively structure online communities like WeatherBlur.<br/><br/>This project aims to contribute to theories of computer-supported collaborative learning (CSCL) by exploring the theory of non-hierarchical learning communities. Data will be collected from the WeatherBlur community including social network data and a series of stakeholder interviews to document current practices within WeatherBlur; this data will be used inductively to help elaborate the theory. Stakeholder perceptions and theory building will be reciprocal and iterative through at least two iterations of member checks. Then, the project will conduct a cross-case comparative analysis of the WeatherBlur community and other citizen science online communities to help characterize the relationship between different types of communities for learning. The cross-case analysis will rely on self-report surveys and semi-structured interviews with both community architects and community participants. Furthermore, a community observation protocol will be constructed to operationalize the non-hierarchical learning community concept, and will be used to enrich the cross-case comparison.",9,Maine Mathematics and Science Alliance
1020001000408170,"SCH: INT: Collaborative Research: Replicating Clinic Physical Therapy at Home: Touch, Depth, and Epidermal Electronics in an Interactive Avatar System","Physical therapy is often hampered by lack of access to therapists, and lack of adherence to home therapy regimens. This research develops a physical therapy assistance system for home use, with emphasis on stroke rehabilitation. As a person exercises, inexpensive cameras observe color and depth, and unobtrusive tattoo sensors monitor detailed muscle activity. The 3D movement trajectory is derived and compared against the exercise done with an expert therapist. The patient watches a screen avatar where arrows and color coding guide the patient to move correctly. In addition to advancing fields such as movement tracking, skin sensors, and assistive systems, the project has the potential for broad impact by attracting women and under-represented minorities to engineering through health-related engineering coursework and projects, and because home physical therapy assistance can especially help rural and under-served populations. <br/><br/>This project uses bio-electronics, computer vision, computer gaming, high-dimensional machine learning, and human factors to develop a home physical therapy assistance system. During home exercises, patient kinematics and physiology are monitored with a Kinect color/depth camera and wireless epidermal electronics transferable to the skin with a temporary tattoo. The project involves optimization of electrode design and wireless signaling for epidermal electronics to monitor spatiotemporal aspects of muscle recruitment, hand and body pose estimation and tracking algorithms that are robust to rapid motion and occlusions, and development of machine learning and avatar rendering algorithms for multi-modal sensor fusion and expert-trained optimal control guidance logic, for both cloud and local usage. The system aims to provide real-time feedback to make home sessions as effective as office visits with an expert therapist, reducing the time and money required for full recovery.",12,University of California-Santa Cruz
1020001000408222,SCH: INT: Collaborative Research: Monitoring and Modeling Family Eating Dynamics (M2 FED): Reducing Obesity Without Focusing on Diet and Activity,"This project is funded under a joint solicitation between the National Science Foundation and the National Institutes of Health, named ""Smart and Connected Health"" (SCH), which aims to accelerate the development and use of innovative approaches that would support the much-needed transformation of healthcare across the entire population. The obesity epidemic is the primary cause of recent increases in heart disease, diabetes, cancer, and other diseases that place an untenable strain on healthcare and public health. One of the primary behavioral causes, i.e. dietary intake, is a behavior that science has had little success in understanding, much less affecting. Recent advances in remote sensing have provided a new paradigm for tracking human behavior, but obesity-related efforts focused directly on diet and activity have been hampered by not only the accuracy of behavior tracking (especially dietary intake) but also the lack of behavioral theories and dynamic models for personalized just-in-time, adaptive interventions (JITAIs). Current behavioral science suggests that family eating dynamics (FED) have high potential to impact child and parent dietary intake and obesity rates. The confluence of technology research and behavioral science research creates the opportunity to change the focus of in situ obesity research and intervention from behaviors that have proven difficult to monitor, model, and modify (e.g., what and how much is being eaten) to the family mealtime and home food environment (e.g., who is eating, when, where, with whom, interpersonal stress), providing opportunities for monitoring and modeling (M2) behavior via remote sensing, and the potential for successful behavior modification via personalized, adaptable, real-time feedback.<br/><br/>This project proposes M2FED, an integrated system of in-home beacons, wireless and wearable sensors, and smartphones that collects synchronized real-time FED data that will be used to iteratively develop dynamic, contextualized FED systems models based on that data. The technology, idiographic models, and techniques to iteratively develop those models can guide future JITAIs and thus have a downstream positive impact on diet and ultimately obesity. The project brings together behavioral scientists, system scientists, obesity experts, computer scientists, and electrical engineers to address fundamental challenges of remote, continuous data capture for real-time behavior modeling for obesity prevention and treatment. Behavioral scientists traditionally have not had access to real-time data and dynamic models, while engineers have not had the expertise to identify what to monitor and model or what feedback to provide. This project connects complementary expertise to develop a dramatically different approach to childhood obesity, focusing on behaviors, i.e. FED rather than diet, that can be more accurately monitored and modeled and have greater potential for positive and long-term modification. Fundamental technology research challenges in realizing the M2FED system include unique individual in-home localization, eating detection, conversation stress and mood assessment in reverberant environments, and a system-of-systems framework that includes heterogeneous sensing and communication systems across the family system itself. 
Fundamental behavioral research challenges include real-time modeling of FED based on past and ongoing observations of FED states and intra- and interpersonal states and events that create temporal and causal impact on FED. While this project is performed within the context of the obesity/FED relationship (which itself has the potential for sweeping impacts on human health and healthcare costs), the project also generalizes a framework, including both an evidence-based system and an experimental platform that extends to systems and applications beyond childhood obesity and behavior modification. The multidisciplinary nature of this work also provides new outreach and educational opportunities, informing (and being informed by) the public and preparing a workforce that is better equipped to address the fundamental human-behavior-centric challenges of health management and wellness preservation.",12,University of Virginia Main Campus
1020001000408226,SCH: INT: Collaborative Research: Monitoring and Modeling Family Eating Dynamics (M2 FED): Reducing Obesity Without Focusing on Diet and Activity,"This project is funded under a joint solicitation between the National Science Foundation and the National Institutes of Health, named ""Smart and Connected Health"" (SCH), which aims to accelerate the development and use of innovative approaches that would support the much-needed transformation of healthcare across the entire population. The obesity epidemic is the primary cause of recent increases in heart disease, diabetes, cancer, and other diseases that place an untenable strain on healthcare and public health. One of the primary behavioral causes, i.e. dietary intake, is a behavior that science has had little success in understanding, much less affecting. Recent advances in remote sensing have provided a new paradigm for tracking human behavior, but obesity-related efforts focused directly on diet and activity have been hampered by not only the accuracy of behavior tracking (especially dietary intake) but also the lack of behavioral theories and dynamic models for personalized just-in-time, adaptive interventions (JITAIs). Current behavioral science suggests that family eating dynamics (FED) have high potential to impact child and parent dietary intake and obesity rates. The confluence of technology research and behavioral science research creates the opportunity to change the focus of in situ obesity research and intervention from behaviors that have proven difficult to monitor, model, and modify (e.g., what and how much is being eaten) to the family mealtime and home food environment (e.g., who is eating, when, where, with whom, interpersonal stress), providing opportunities for monitoring and modeling (M2) behavior via remote sensing, and the potential for successful behavior modification via personalized, adaptable, real-time feedback.<br/><br/>This project proposes M2FED, an integrated system of in-home beacons, wireless and wearable sensors, and smartphones that collects synchronized real-time FED data that will be used to iteratively develop dynamic, contextualized FED systems models based on that data. The technology, idiographic models, and techniques to iteratively develop those models can guide future JITAIs and thus have a downstream positive impact on diet and ultimately obesity. The project brings together behavioral scientists, system scientists, obesity experts, computer scientists, and electrical engineers to address fundamental challenges of remote, continuous data capture for real-time behavior modeling for obesity prevention and treatment. Behavioral scientists traditionally have not had access to real-time data and dynamic models, while engineers have not had the expertise to identify what to monitor and model or what feedback to provide. This project connects complementary expertise to develop a dramatically different approach to childhood obesity, focusing on behaviors, i.e. FED rather than diet, that can be more accurately monitored and modeled and have greater potential for positive and long-term modification. Fundamental technology research challenges in realizing the M2FED system include unique individual in-home localization, eating detection, conversation stress and mood assessment in reverberant environments, and a system-of-systems framework that includes heterogeneous sensing and communication systems across the family system itself. 
Fundamental behavioral research challenges include real-time modeling of FED based on past and ongoing observations of FED states and intra- and interpersonal states and events that create temporal and causal impact on FED. While this project is performed within the context of the obesity/FED relationship (which itself has the potential for sweeping impacts on human health and healthcare costs), the project also generalizes a framework, including both an evidence-based system and an experimental platform that extends to systems and applications beyond childhood obesity and behavior modification. The multidisciplinary nature of this work also provides new outreach and educational opportunities, informing (and being informed by) the public and preparing a workforce that is better equipped to address the fundamental human-behavior-centric challenges of health management and wellness preservation.",12,University of Southern California
1020001000408249,SCH: INT: Disposable High Sensitivity Point of Care Immunosensors for Multiple Disease and Pathogen Detection,"This research program proposes to investigate substances sometimes found in sweat, such as proteins, that can indicate health and disease (biomarkers). Sweat is a biofluid that can provide a window into a person's health. At this time, there is no rapid method to measure the changes in proteins in sweat that signal disease. The goal is to measure changes in these proteins in sweat before, during, and after physical activity, including measuring levels of stress hormones and immune markers using a low-cost and disposable point-of-care biosensor. In essence, the aim of this proposal is to develop a cheap and disposable patch sensor that can be worn on the skin, and eventually become as widely used as the box of bandages or oral thermometer found in most home medicine cabinets. One of the key engineering roadblocks to such a device is the availability of a low-cost manufacturing technology that can be combined with state-of-the-art biomarker proteomic detection technology to diagnose and monitor multiple diseases with medical laboratory level sensitivity. This proposal presents a new approach combining low-cost commercial display technology (found in your TV, computer monitor, or cell phone) with protein microarray printing technology to fabricate a low-cost, disposable skin patch for more effective self-management of patient health care in the home or in other non-clinical settings. The measurements will be available in real time for both patients and their caregivers using smart-phone technology. To meet this research objective, a fabric-like flexible skin patch is proposed that would provide non-invasive monitoring of multiple biomarkers in human sweat. This new approach is designed to use existing commercial manufacturing technology that can be readily adapted to produce hundreds of millions of devices per year at a cost of only a few dollars each.<br/><br/>The research program proposes to utilize multiple proteomic research platforms (mass spectrometry, protein, and antibody microarrays) to establish a profile of the sweat proteome that will have a fundamental impact on furthering the biological understanding of sweat and skin, and transition high-cost fluorescent-based assay platforms into a low-cost, disposable, highly sensitive fluorescent point-of-care based assay that is minimally invasive to non-invasive. Highlights of the proposed development effort include: a detailed proteomic analysis utilizing mass spectrometry as well as custom protein and antibody microarrays for the discovery of proteins and antibodies present in sweat; development and optimization of the biosensor's optical and electronic components for the utilization of a low-cost fluorescent recognition lateral flow assay; development of fluorescent recognition lateral flow assays utilizing identified serum and sweat biomarkers and the integration of the assay with our optical biosensor. This research program accelerates the development of technology that will transform healthcare from reactive and hospital-centered to preventative, proactive, evidence-based, and person-centered. This next-generation health care solution will provide breakthrough sensor technology for in-home monitoring. 
This transformation in healthcare is especially critical for immune-compromised individuals, such as organ transplant recipients, cancer patients, post-operative patients, and the elderly.",12,Arizona State University
1020001000408257,SCH: INT: Collaborative Research: Diagnostic Driving: Real Time Driver Condition Detection Through Analysis of Driving Behavior,"The automobile presents a great opportunity for healthcare monitoring. For one, most Americans engage in daily driving, and patients' time spent in vehicles is a missed opportunity to monitor their condition and general wellbeing. The goal of this project is to develop and evaluate technology for automatic in-vehicle monitoring of early symptoms of medical conditions and disrupted medications of patients, and to provide preventive care. Specifically, in this project we will focus on Attention-Deficit/Hyperactivity Disorder (ADHD) in teenagers and young adults, a prevalent chronic medical condition that, when uncontrolled, has the potential for known negative health and quality-of-life consequences. The approach of using driving behavior to monitor ADHD symptoms could be applied to many other medical conditions (such as diabetes, failing eyesight, intoxication, fatigue, or heart attacks), thereby transforming medical management into real-time sensing and management. Identification of all these conditions from driving behavior and alerting the proper agent could transform how we think about health monitoring and result in saved lives and reduced injuries.<br/><br/>The main goal of this project is to leverage the large amounts of health data that can be collected while driving via machine learning, in order to detect subtle changes in behavior due to out-of-control ADHD symptoms that can, for example, indicate the onset of episodes of inattention before they happen. Via lab-based driving simulator as well as on-road studies, the research team will investigate the individualized behaviors and patterns in vehicle control behaviors that are characteristic of ADHD patients under various states of medication usage. The team will develop a machine learning framework based on case-based and context-based reasoning to match the current driving behavior of the patient with previously recorded driving behavior corresponding to different ADHD symptoms. The key machine learning challenge is to define appropriate similarity measures to compare driving behavior that take into account the key distinctive features of ADHD driving behavior identified during our study. The team will evaluate the accuracy with which the proposed approach can identify and distinguish between different out-of-control ADHD symptoms, and the implications for long-term management of ADHD patients, via driving simulator experiments as well as using instrumented cars with real patients.",12,Drexel University
1020001000408259,SCH: INT: Collaborative Research: Diagnostic Driving: Real Time Driver Condition Detection Through Analysis of Driving Behavior,"The automobile presents a great opportunity for healthcare monitoring. For one, most Americans engage in daily driving, and patients' time spent in vehicles is a missed opportunity to monitor their condition and general wellbeing. The goal of this project is to develop and evaluate technology for automatic in-vehicle monitoring of early symptoms of medical conditions and disrupted medications of patients, and to provide preventive care. Specifically, in this project we will focus on Attention-Deficit/Hyperactivity Disorder (ADHD) in teenagers and young adults, a prevalent chronic medical condition that, when uncontrolled, has the potential for known negative health and quality-of-life consequences. The approach of using driving behavior to monitor ADHD symptoms could be applied to many other medical conditions (such as diabetes, failing eyesight, intoxication, fatigue, or heart attacks), thereby transforming medical management into real-time sensing and management. Identification of all these conditions from driving behavior and alerting the proper agent could transform how we think about health monitoring and result in saved lives and reduced injuries.<br/><br/>The main goal of this project is to leverage the large amounts of health data that can be collected while driving via machine learning, in order to detect subtle changes in behavior due to out-of-control ADHD symptoms that can, for example, indicate the onset of episodes of inattention before they happen. Via lab-based driving simulator as well as on-road studies, the research team will investigate the individualized behaviors and patterns in vehicle control behaviors that are characteristic of ADHD patients under various states of medication usage. The team will develop a machine learning framework based on case-based and context-based reasoning to match the current driving behavior of the patient with previously recorded driving behavior corresponding to different ADHD symptoms. The key machine learning challenge is to define appropriate similarity measures to compare driving behavior that take into account the key distinctive features of ADHD driving behavior identified during our study. The team will evaluate the accuracy with which the proposed approach can identify and distinguish between different out-of-control ADHD symptoms, and the implications for long-term management of ADHD patients, via driving simulator experiments as well as using instrumented cars with real patients.",12,The Children's Hospital of Philadelphia
1020001000408261,SCH: INT: Collaborative Research: Diagnostic Driving: Real Time Driver Condition Detection Through Analysis of Driving Behavior,"The automobile presents a great opportunity for healthcare monitoring. For one, most Americans engage in daily driving, and patients' time spent in vehicles is a missed opportunity to monitor their condition and general wellbeing. The goal of this project is to develop and evaluate technology for automatic in-vehicle monitoring of early symptoms of medical conditions and disrupted medications of patients, and to provide preventive care. Specifically, in this project we will focus on Attention-Deficit/Hyperactivity Disorder (ADHD) in teenagers and young adults, a prevalent chronic medical condition that, when uncontrolled, has the potential for known negative health and quality-of-life consequences. The approach of using driving behavior to monitor ADHD symptoms could be applied to many other medical conditions (such as diabetes, failing eyesight, intoxication, fatigue, or heart attacks), thereby transforming medical management into real-time sensing and management. Identification of all these conditions from driving behavior and alerting the proper agent could transform how we think about health monitoring and result in saved lives and reduced injuries.<br/><br/>The main goal of this project is to leverage the large amounts of health data that can be collected while driving via machine learning, in order to detect subtle changes in behavior due to out-of-control ADHD symptoms that can, for example, indicate the onset of episodes of inattention before they happen. Via lab-based driving simulator as well as on-road studies, the research team will investigate the individualized behaviors and patterns in vehicle control behaviors that are characteristic of ADHD patients under various states of medication usage. The team will develop a machine learning framework based on case-based and context-based reasoning to match the current driving behavior of the patient with previously recorded driving behavior corresponding to different ADHD symptoms. The key machine learning challenge is to define appropriate similarity measures to compare driving behavior that take into account the key distinctive features of ADHD driving behavior identified during our study. The team will evaluate the accuracy with which the proposed approach can identify and distinguish between different out-of-control ADHD symptoms, and the implications for long-term management of ADHD patients, via driving simulator experiments as well as using instrumented cars with real patients.",12,University of Central Florida
1020001000408266,SCH: INT: Collaborative Research: Wearable Devices for In-Home Monitoring of Patients at Risk for Heart Failure,"In the US, heart failure affects over 6 million people and is the most common cause of hospitalization. New technologies are needed to enable in-home monitoring and treatment of patients at risk of developing acute decompensated heart failure. Early symptom detection will help clinicians reduce disease progression and prevent hospitalization. Current home-based management relies on monitoring weight gain, but weight alone does not accurately gauge the fluid accumulation that predicts worsening of heart failure. Atrial fibrillation, a heart rhythm abnormality associated with stroke, hospitalization, and 2-fold higher risk of death, is also common in heart failure patients. Because atrial fibrillation symptoms can be vague, intermittent, and short lasting, atrial fibrillation detection requires continuous monitoring--a challenge in ambulatory patients. A device that can measure vital signs (including heart rate, respiration rate, and tidal volume), intrathoracic fluid status (using thoracic bioimpedance) and heart rhythm will both detect atrial fibrillation and promote quick and accurate identification of the early stages of acute decompensation in chronic heart failure patients. Thus, the goal of this work is to develop a novel device for in-home monitoring of heart failure patients who are at risk of developing acute decompensated heart failure. The system will consist of a bioimpedance monitor (a wearable vest with dry electrodes--no adhesives, gels, or wetting required) to detect trends toward acute decompensated heart failure and a smart watch to measure atrial fibrillation. These components will require the development of novel sensor and information technologies, wearable monitors, signal processing algorithms, and telecommunication systems. The long-term goal of this project is to enable in-home monitoring of patients at risk of catastrophic cardiovascular events in order to provide prompt, targeted intervention, reduce symptom progression and avert preventable hospitalization. A wireless cardiac monitoring system that will be comfortable to wear, easy to use, cost effective, and feasible for home use with ambulatory patients will be developed. The system will provide patients and their healthcare providers with early warnings about cardiac decompensation and serious but difficult-to-detect arrhythmias, which can empower patients and caregivers, improve quality of life, and reduce health care costs.<br/><br/>This project aims to: 1) develop reusable carbon-black and polydimethylsiloxane (CB/PDMS) electrodes that capture bioimpedance and electrocardiogram data; 2) develop hardware and algorithms for acute decompensated heart failure detection, resulting in a wearable monitor with embedded CB/PDMS electrodes; 3) develop hardware and algorithms for atrial fibrillation detection using a smart watch; and 4) evaluate the performance and usability of both detection systems in a prospectively recruited cohort study. The clinical study will target populations, including patients with psychosocial or cognitive limitations, that are at high risk for acute decompensated heart failure and atrial fibrillation progression but that typically are omitted from clinical studies. 
The central hypothesis is that an innovative bioimpedance monitor with re-usable, non-wetted bioimpedance electrodes embedded in a wearable vest device used in conjunction with a smart watch will continuously collect and transmit key physiologic data. Devices running decision-support algorithms will analyze this data to identify patients with emergent acute decompensated heart failure symptoms that require prompt attention. A second hypothesis is that a wrist-based continuous monitoring device will provide unobtrusive and acceptable 24-hour heart rhythm monitoring and will augment data collected by the bioimpedance monitor. Heart rhythm, vital signs, and intrathoracic fluid accumulation measurements will be collected via 24-hour data recordings. This data will be used to develop a robust clinical decision support algorithm that accurately detects atrial fibrillation and early acute decompensated heart failure.",12,University of Massachusetts Medical School
1020001000408268,SCH: INT: Collaborative Research: S.E.P.S.I.S.: Sepsis Early Prediction Support Implementation System,"Sepsis, infection plus systemic manifestations of infection, is the leading cause of in-hospital mortality. About 700,000 people die annually in US hospitals and 16% of them were diagnosed with sepsis (including a high prevalence of severe sepsis with major complications). In addition to being deadly, sepsis is the most expensive condition associated with in-hospital stay, resulting in a 75% longer stay than any other condition. The total burden of sepsis to the US healthcare system is estimated to be $20.3 billion, most of which is paid by Medicare and Medicaid. In fact, in June 2015 the Centers for Medicare & Medicaid Services (CMS) reported that sepsis accounted for over $7 billion in Medicare payments (second only to major joint replacement), a close to 10% increase from the previous year. This pervasive drain on health care resources is due, in part, to difficulties in diagnosis and delayed treatment. For example, every one-hour delay in treatment of severe sepsis/shock with antibiotics decreases a patient's survival probability by 10%. Many of these deaths could have been averted or postponed if a better system of care were in place. The goal of this research is to overcome these barriers by integrating electronic health records (EHR) and clinical expertise to provide an evidence-based framework to diagnose and accurately risk-stratify patients within the sepsis spectrum, and develop and validate intervention policies that inform sepsis treatment decisions. The project brings together health care providers, researchers, educators, and students to add value to patient care by integrating machine learning, decision analytical models, human factors analysis, and system and process modeling to advance scientific knowledge, predict sepsis, and prevent sepsis-related health deterioration. In addition to the societal impact that clinical translation of these findings may bring, the project will provide engineering and computer science students and health services researchers with a cross-disciplinary educational experience.<br/><br/>The proposed research will apply engineering and computer science methodologies to analyze patient-level EHR data across two large-scale health care facilities, Mayo Clinic Rochester and Christiana Care Health System, and to inform clinical decision making for sepsis. The multi-institutional, interdisciplinary collaboration will enable the development of health care solutions for sepsis by describing and accurately risk-stratifying hospitalized patients, and developing decision analytical models to personalize and inform diagnostic and treatment decisions considering patient outcomes and response implications. The Sepsis Early Prediction Support Implementation System (S.E.P.S.I.S.) project aims to: 1) Develop data-driven models to classify patients according to their clinical progression to diagnose sepsis and predict risk of deterioration, thus informing therapeutic actions. 2) Develop personalized intervention policies for patients within the sepsis spectrum. 3) Develop decision support systems (DSS) for personalized interventions focusing on resource implications and usability within a real hospital setting. 
The team will 1) identify important factors that uncover patient profiles based on Bayesian exponential family principal components analysis; 2) develop hidden Markov models (HMMs) and input-output HMMs to identify clusters of patients with similar progression patterns within the sepsis spectrum; 3) provide an analytical framework to support sepsis staging in clinical practice using bilevel optimization. They will 1) predict short- and long-term individual patient outcomes using multivariate statistical models and simulation; 2) develop semi-Markov decision process and partially observable semi-Markov decision process models to identify timing of therapeutic actions and diagnostic tests. Furthermore, the team will 1) predict demand for resources and develop and validate a hybrid mixed integer programming and queueing model to optimize system level allocations; 2) utilize human factors analysis and usability testing to assess the implementation of the DSS.",12,Christiana Care Health Services Incorporated
1020001000408270,SCH: INT: Collaborative Research: Wearable Devices for In-Home Monitoring of Patients at Risk for Heart Failure,"In the US, heart failure affects over 6 million people and is the most common cause of hospitalization. New technologies are needed to enable in-home monitoring and treatment of patients at risk of developing acute decompensated heart failure. Early symptom detection will help clinicians reduce disease progression and prevent hospitalization. Current home-based management relies on monitoring weight gain, but weight alone does not accurately gauge the fluid accumulation that predicts worsening of heart failure. Atrial fibrillation, a heart rhythm abnormality associated with stroke, hospitalization, and 2-fold higher risk of death, is also common in heart failure patients. Because atrial fibrillation symptoms can be vague, intermittent, and short lasting, atrial fibrillation detection requires continuous monitoring--a challenge in ambulatory patients. A device that can measure vital signs (including heart rate, respiration rate, and tidal volume), intrathoracic fluid status (using thoracic bioimpedance) and heart rhythm will both detect atrial fibrillation and promote quick and accurate identification of the early stages of acute decompensation in chronic heart failure patients. Thus, the goal of this work is to develop a novel device for in-home monitoring of heart failure patients who are at risk of developing acute decompensated heart failure. The system will consist of a bioimpedance monitor (a wearable vest with dry electrodes--no adhesives, gels, or wetting required) to detect trends toward acute decompensated heart failure and a smart watch to measure atrial fibrillation. These components will require the development of novel sensor and information technologies, wearable monitors, signal processing algorithms, and telecommunication systems. The long-term goal of this project is to enable in-home monitoring of patients at risk of catastrophic cardiovascular events in order to provide prompt, targeted intervention, reduce symptom progression and avert preventable hospitalization. A wireless cardiac monitoring system that will be comfortable to wear, easy to use, cost effective, and feasible for home use with ambulatory patients will be developed. The system will provide patients and their healthcare providers with early warnings about cardiac decompensation and serious but difficult-to-detect arrhythmias, which can empower patients and caregivers, improve quality of life, and reduce health care costs.<br/><br/>This project aims to: 1) develop reusable carbon-black and polydimethylsiloxane (CB/PDMS) electrodes that capture bioimpedance and electrocardiogram data; 2) develop hardware and algorithms for acute decompensated heart failure detection, resulting in a wearable monitor with embedded CB/PDMS electrodes; 3) develop hardware and algorithms for atrial fibrillation detection using a smart watch; and 4) evaluate the performance and usability of both detection systems in a prospectively recruited cohort study. The clinical study will target populations, including patients with psychosocial or cognitive limitations, that are at high risk for acute decompensated heart failure and atrial fibrillation progression but that typically are omitted from clinical studies. 
The central hypothesis is that an innovative bioimpedance monitor with re-usable, non-wetted bioimpedance electrodes embedded in a wearable vest device used in conjunction with a smart watch will continuously collect and transmit key physiologic data. Devices running decision-support algorithms will analyze this data to identify patients with emergent acute decompensated heart failure symptoms that require prompt attention. A second hypothesis is that a wrist-based continuous monitoring device will provide unobtrusive and acceptable 24-hour heart rhythm monitoring and will augment data collected by the bioimpedance monitor. Heart rhythm, vital signs, and intrathoracic fluid accumulation measurements will be collected via 24-hour data recordings. This data will be used to develop a robust clinical decision support algorithm that accurately detects atrial fibrillation and early acute decompensated heart failure.",12,Worcester Polytechnic Institute
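The smart-watch atrial fibrillation aim rests on detecting irregular beat-to-beat (RR) timing. A minimal sketch of a generic irregularity heuristic, not the project's algorithm, with an invented window and threshold:

import math

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences, in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def possible_af(rr_ms, threshold=0.10):
    """Flag a window as possibly AF when normalized RMSSD exceeds a threshold."""
    mean_rr = sum(rr_ms) / len(rr_ms)
    return rmssd(rr_ms) / mean_rr > threshold

regular = [800, 805, 798, 802, 799, 801, 803, 800]      # steady ~75 bpm
irregular = [620, 940, 710, 1050, 560, 880, 730, 990]   # erratic intervals
print(possible_af(regular), possible_af(irregular))     # expected: False True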
1020001000408271,SCH: INT: Collaborative Research: Wearable Devices for In-Home Monitoring of Patients at Risk for Heart Failure,"In the US, heart failure affects over 6 million people and is the most common cause of hospitalization. New technologies are needed to enable in-home monitoring and treatment of patients at risk of developing acute decompensated heart failure. Early symptom detection will help clinicians reduce disease progression and prevent hospitalization. Current home-based management relies on monitoring weight gain, but weight alone does not accurately gauge the fluid accumulation that predicts worsening of heart failure. Atrial fibrillation, a heart rhythm abnormality associated with stroke, hospitalization, and a 2-fold higher risk of death, is also common in heart failure patients. Because atrial fibrillation symptoms can be vague, intermittent, and short lasting, atrial fibrillation detection requires continuous monitoring--a challenge in ambulatory patients. A device that can measure vital signs (including heart rate, respiration rate, and tidal volume), intrathoracic fluid status (using thoracic bioimpedance) and heart rhythm will both detect atrial fibrillation and promote quick and accurate identification of the early stages of acute decompensation in chronic heart failure patients. Thus, the goal of this work is to develop a novel device for in-home monitoring of heart failure patients who are at risk of developing acute decompensated heart failure. The system will consist of a bioimpedance monitor (a wearable vest with dry electrodes--no adhesives, gels or wetting required) to detect trending toward acute decompensated heart failure and of a smart watch to measure atrial fibrillation. These components will require the development of novel sensor and information technologies, wearable monitors, signal processing algorithms, and telecommunication systems. The long-term goal of this project is to enable in-home monitoring of patients at risk of catastrophic cardiovascular events in order to provide prompt, targeted intervention, reduce symptom progression and avert preventable hospitalization. A wireless cardiac monitoring system that will be comfortable to wear, easy to use, cost-effective, and feasible for home use with ambulatory patients will be developed. The system will provide patients and their healthcare providers with early warnings about cardiac decompensation and serious but difficult-to-detect arrhythmias, which can empower patients and caregivers, improve quality of life, and reduce health care costs.<br/><br/>This project aims to: 1) develop reusable carbon-black and polydimethylsiloxane (CB/PDMS) electrodes that capture bioimpedance and electrocardiogram data; 2) develop hardware and algorithms for acute decompensated heart failure detection, resulting in a wearable monitor with embedded CB/PDMS electrodes; 3) develop hardware and algorithms for atrial fibrillation detection using a smart watch; and 4) evaluate the performance and usability of both detection systems in a prospectively recruited cohort study. The clinical study will target populations, including patients with psychosocial or cognitive limitations, that are at high risk for acute decompensated heart failure and atrial fibrillation progression but that typically are omitted from clinical studies. 
The central hypothesis is that an innovative bioimpedance monitor with re-usable, non-wetted bioimpedance electrodes embedded in a wearable vest device used in conjunction with a smart watch will continuously collect and transmit key physiologic data. Devices running decision-support algorithms will analyze this data to identify patients with emergent acute decompensated heart failure symptoms that require prompt attention. A second hypothesis is that a wrist-based continuous monitoring device will provide unobtrusive and acceptable 24-hour heart rhythm monitoring and will augment data collected by the bioimpedance monitor. Heart rhythm, vital signs, and intrathoracic fluid accumulation measurements will be collected via 24-hour data recordings. This data will be used to develop a robust clinical decision support algorithm that accurately detects atrial fibrillation and early acute decompensated heart failure.",12,University of Connecticut
1020001000408272,SCH: INT: Collaborative Research: S.E.P.S.I.S.: Sepsis Early Prediction Support Implementation System,"Sepsis, infection plus systemic manifestations of infection, is the leading cause of in-hospital mortality. About 700,000 people die annually in US hospitals and 16% of them were diagnosed with sepsis (including a high prevalence of severe sepsis with major complications). In addition to being deadly, sepsis is the most expensive condition associated with in-hospital stay, resulting in a 75% longer stay than any other condition. The total burden of sepsis to the US healthcare system is estimated to be $20.3 billion, most of which is paid by Medicare and Medicaid. In fact, in June 2015 the Centers for Medicare & Medicaid Services (CMS) reported that sepsis accounted for over $7 billion in Medicare payments (second only to major joint replacement), a close to 10% increase from the previous year. This pervasive drain on health care resources is due, in part, to difficulties in diagnosis and delayed treatment. For example, every one-hour delay in treatment of severe sepsis/shock with antibiotics decreases a patient's survival probability by 10%. Many of these deaths could have been averted or postponed if a better system of care was in place. The goal of this research is to overcome these barriers by integrating electronic health records (EHR) and clinical expertise to provide an evidence-based framework to diagnose and accurately risk-stratify patients within the sepsis spectrum, and develop and validate intervention policies that inform sepsis treatment decisions. The project will bring together health care providers, researchers, educators, and students to add value to patient care by integrating machine learning, decision analytical models, human factors analysis, as well as system and process modeling to advance scientific knowledge, predict sepsis, and prevent sepsis-related health deterioration. In addition to the societal impact that clinical translation of these findings may bring, the project will provide engineering and computer science students and health services researchers with cross-disciplinary educational experience.<br/><br/>The proposed research will apply engineering and computer science methodologies to analyze patient-level EHR across two large-scale health care facilities, Mayo Clinic Rochester and Christiana Care Health System, and to inform clinical decision making for sepsis. The multi-institutional, interdisciplinary collaboration will enable the development of health care solutions for sepsis by describing and accurately risk-stratifying hospitalized patients, and developing decision analytical models to personalize and inform diagnostic and treatment decisions considering patient outcomes and response implications. The Sepsis Early Prediction Support Implementation System (S.E.P.S.I.S.) project aims are to: 1) Develop data-driven models to classify patients according to their clinical progression to diagnose sepsis and predict risk of deterioration, thus informing therapeutic actions. 2) Develop personalized intervention policies for patients within the sepsis spectrum. 3) Develop decision support systems (DSS) for personalized interventions focusing on resource implications and usability within a real hospital setting. 
The team will 1) identify important factors that uncover patient profiles based on Bayesian exponential family principal components analysis; 2) develop hidden Markov models (HMMs) and input-output HMMs to identify clusters of patients with similar progression patterns within the sepsis spectrum; 3) provide an analytical framework to support sepsis staging in clinical practice using bilevel optimization. They will 1) predict short- and long-term individual patient outcomes using multivariate statistical models and simulation; 2) develop semi-Markov decision process and partially observable semi-Markov decision process models to identify timing of therapeutic actions and diagnostic tests. Furthermore, the team will 1) predict demand for resources and develop and validate a hybrid mixed integer programming and queueing model to optimize system level allocations; 2) utilize human factors analysis and usability testing to assess the implementation of the DSS.",12,Mayo Clinic Rochester
1020001000408273,SCH: INT: Collaborative Research: S.E.P.S.I.S.: Sepsis Early Prediction Support Implementation System,"Sepsis, infection plus systemic manifestations of infection, is the leading cause of in-hospital mortality. About 700,000 people die annually in US hospitals and 16% of them were diagnosed with sepsis (including a high prevalence of severe sepsis with major complications). In addition to being deadly, sepsis is the most expensive condition associated with in-hospital stay, resulting in a 75% longer stay than any other condition. The total burden of sepsis to the US healthcare system is estimated to be $20.3 billion, most of which is paid by Medicare and Medicaid. In fact, in June 2015 the Centers for Medicare & Medicaid Services (CMS) reported that sepsis accounted for over $7 billion in Medicare payments (second only to major joint replacement), a close to 10% increase from the previous year. This pervasive drain on health care resources is due, in part, to difficulties in diagnosis and delayed treatment. For example, every one-hour delay in treatment of severe sepsis/shock with antibiotics decreases a patient's survival probability by 10%. Many of these deaths could have been averted or postponed if a better system of care was in place. The goal of this research is to overcome these barriers by integrating electronic health records (EHR) and clinical expertise to provide an evidence-based framework to diagnose and accurately risk-stratify patients within the sepsis spectrum, and develop and validate intervention policies that inform sepsis treatment decisions. The project will bring together health care providers, researchers, educators, and students to add value to patient care by integrating machine learning, decision analytical models, human factors analysis, as well as system and process modeling to advance scientific knowledge, predict sepsis, and prevent sepsis-related health deterioration. In addition to the societal impact that clinical translation of these findings may bring, the project will provide engineering and computer science students and health services researchers with cross-disciplinary educational experience.<br/><br/>The proposed research will apply engineering and computer science methodologies to analyze patient-level EHR across two large-scale health care facilities, Mayo Clinic Rochester and Christiana Care Health System, and to inform clinical decision making for sepsis. The multi-institutional, interdisciplinary collaboration will enable the development of health care solutions for sepsis by describing and accurately risk-stratifying hospitalized patients, and developing decision analytical models to personalize and inform diagnostic and treatment decisions considering patient outcomes and response implications. The Sepsis Early Prediction Support Implementation System (S.E.P.S.I.S.) project aims are to: 1) Develop data-driven models to classify patients according to their clinical progression to diagnose sepsis and predict risk of deterioration, thus informing therapeutic actions. 2) Develop personalized intervention policies for patients within the sepsis spectrum. 3) Develop decision support systems (DSS) for personalized interventions focusing on resource implications and usability within a real hospital setting. 
The team will 1) identify important factors that uncover patient profiles based on Bayesian exponential family principal components analysis; 2) develop hidden Markov models (HMMs) and input-output HMMs to identify clusters of patients with similar progression patterns within the sepsis spectrum; 3) provide an analytical framework to support sepsis staging in clinical practice using bilevel optimization. They will 1) predict short- and long-term individual patient outcomes using multivariate statistical models and simulation; 2) develop semi-Markov decision process and partially observable semi-Markov decision process models to identify timing of therapeutic actions and diagnostic tests. Furthermore, the team will 1) predict demand for resources and develop and validate a hybrid mixed integer programming and queueing model to optimize system level allocations; 2) utilize human factors analysis and usability testing to assess the implementation of the DSS.",12,North Carolina State University
1020001000408275,"SCH: INT: Collaborative Research: Replicating Clinic Physical Therapy at Home: Touch, Depth, and Epidermal Electronics in an Interactive Avatar System","Physical therapy is often hampered by lack of access to therapists, and lack of adherence to home therapy regimens. This research develops a physical therapy assistance system for home use, with emphasis on stroke rehabilitation. As a person exercises, inexpensive cameras observe color and depth, and unobtrusive tattoo sensors monitor detailed muscle activity. The 3D movement trajectory is derived and compared against the exercise done with an expert therapist. The patient watches a screen avatar where arrows and color coding guide the patient to move correctly. In addition to advancing fields such as movement tracking, skin sensors, and assistive systems, the project has the potential for broad impact by attracting women and under-represented minorities to engineering through health-related engineering coursework and projects, and because home physical therapy assistance can especially help rural and under-served populations. <br/><br/>This project uses bio-electronics, computer vision, computer gaming, high-dimensional machine learning, and human factors to develop a home physical therapy assistance system. During home exercises, patient kinematics and physiology are monitored with a Kinect color/depth camera and wireless epidermal electronics transferable to the skin with a temporary tattoo. The project involves optimization of electrode design and wireless signaling for epidermal electronics to monitor spatiotemporal aspects of muscle recruitment, hand and body pose estimation and tracking algorithms that are robust to rapid motion and occlusions, and development of machine learning and avatar rendering algorithms for multi-modal sensor fusion and expert-trained optimal control guidance logic, for both cloud and local usage. The system aims to provide real-time feedback to make home sessions as effective as office visits with an expert therapist, reducing the time and money required for full recovery.",12,University of California-San Diego
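One way to score how closely a patient's home exercise matches the movement recorded with an expert therapist is dynamic time warping, which tolerates differences in speed. A minimal sketch over an assumed one-dimensional joint-angle trace; the project's actual pipeline fuses Kinect pose and epidermal sensor data and is not shown here.

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping between scalar sequences."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

therapist = [0, 10, 25, 45, 60, 45, 25, 10, 0]        # reference elbow angle, degrees
patient = [0, 5, 12, 30, 44, 58, 44, 30, 12, 5, 0]    # slower but similar motion
print("DTW mismatch score:", dtw_distance(therapist, patient))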
1020001000408289,"NRI: RUI: Autonomous Vehicles: Ethics, Design, and Risk","This collaborative research project will examine ethical issues that arise in connection with autonomous vehicles (such as self-driving cars), from programming choices to guarding against abuse. Collaborators include engineers, philosophers, and legal experts. Autonomous vehicles are starting to emerge, and they are predicted to solve a lot of problems, especially traffic accidents and fatalities. But they will not be foolproof, meaning that (just like any other technology) they will be subject to error, misuse, and failure, and some designs may be better or more responsible than others. Even if autonomous vehicles can save thousands of lives each year, industry and researchers are not thereby released from the obligation to consider ethical, legal, and policy implications of the bringing about of unnecessary and wrongful deaths. The loss of lives in this way could have a devastating, high profile effect on the nascent industry. The results of this project will benefit both industry and the academia. Deliverables will include a public report, a book, university-level courses, conference talks, and academic as well as media publications. Beyond the autonomous driving industry, this project provides research and materials in robot ethics and technology ethics more broadly, as pressure mounts to integrate ethics into engineering and science programs.<br/><br/>The PI and his team will examine the different ways that an autonomous vehicle could be involved in an accident, as well as crash-avoidance and crash-optimization options to mitigate the harm that occurs in unavoidable crashes. They will also investigate the technical and policy feasibility of various ethical frameworks to guide actions by autonomous vehicles by drawing from classical dilemmas in ethics, ongoing work in technology ethics, and real engineering in academic and industry labs on the front lines. They will also attend to broader issues, such as how autonomous cars should handle medical emergencies, criminal conduct, and other common situations. The project is aligned with the NSF's mission, as well as the call of its NRI program for research to gain a better understanding of the long-term social, behavioral and economic implications of co-robots. Because autonomous vehicles are poised to be the first robots to be integrated with society on a large scale, they will set the tone for the entire social or co-robotics industry. It is crucially important to engage ethics ahead of these developments to consider the issues and scenarios in advance of potentially high-profile accidents.",17,California Polytechnic State University Foundation
1020001000408404,Planning the Future of Big Data R&D,"To date, there has been limited involvement by researchers and developers in the National Big Data R&D Strategic Plan. However, their input is vital to the success of the plan. This one-day workshop in Washington, D.C. will bring together academics and industry leaders across disciplines to inform the development of an effective National Big Data R&D Strategic Plan. The development of a government-wide strategic plan impacts all of the agencies involved. The workshop will engage a range of experts to hear about the plan's status and share concerns and ideas. Incorporating this feedback will improve the plan and develop an overall blueprint for Big Data R&D across much of the government. The growth in scale, diversity, and complexity of data has increased the demand for understanding large amounts of heterogeneous data. This presents new challenges to the way data and information are used and managed. There is a need to understand ways to design systems for big data analytics, to consider privacy issues that arise when using these massive data, and to determine ways to teach big data analytics across the sciences. The workshop aims to balance plenary presentations with smaller focused discussion groups for in-depth discussions. Coordinator reporters for each focus group will participate in writing the reports and outcomes of the workshop.",5,Georgetown University
1020001000408437,NRI-Large: Collaborative Research: Multilateral Manipulation by Human-Robot Collaborative Systems,"This project addresses a large space of manipulation problems that are repetitive, injury-causing, or dangerous for humans to perform, yet are currently impossible to reliably achieve with purely autonomous robots. These problems generally require dexterity, complex perception, and complex physical interaction. Yet, many such problems can be reliably addressed with human/robot collaborative (HRC) systems, where one or more humans provide needed perception and adaptability, working with one or more robot systems that provide speed, precision, accuracy, and dexterity at an appropriate scale, combining these complementary capabilities.<br/><br/>The project focuses on multilateral manipulation, which arises when a human controls one or more robot manipulators in partnership with one or more additional controllers (humans or autonomous agents). Complex operations in surgery and manufacturing can benefit from the extra degrees of freedom provided by more than two hands, and training often depends on hands-on interaction between expert and apprentice. Example applications include surgical operations, which typically involve several physicians and assistants, and other medical tasks such as turning a patient in bed and wrapping a cast to constrain a hand. Multilateral manipulation also applies in manufacturing, for example for threading wires or cables, aligning gaskets to obtain a tight seal, and in many household situations, such as folding tablecloths, wrapping packages, and zipping overfilled suitcases so they will fit inside diabolically-designed overhead airline compartments. Multilateral manipulation often arises with deformable materials or multi-jointed objects with more than six degrees of freedom (DOF). The extra DOFs in materials introduce challenges such as computational complexity, but they also can accommodate minor inconsistencies through redundancy and provide system damping. This project advances the fundamental science of multilateral manipulation guided by specific applications from surgery and manufacturing.<br/><br/>Broader Impacts: Multilateral manipulation systems have the potential to improve healthcare, improve American competitiveness and product quality in manufacturing, and open the door to new service robot applications in the home. The project will be guided by an Advisory Board of experts from industry and medical practice. Project results will be disseminated through yearly conference workshops, open-source software tools integrated into common robotics software environments such as Robot Operating System (ROS), and the investigators' research and course webpages, to encourage integration of our approach into research projects and courses at many institutions. Outreach programs, public lab tours, and mentoring of minority students will broaden participation of underrepresented groups in engineering. These activities will encourage participation in STEM activities and provide student and postdoctoral researchers with mentoring experience.",7,University of California-Los Angeles
1020001000408441,III: Student Travel Fellowships for Special Interest Group of Management of Data (SIGMOD) 2015,"The goal of this project is to provide a unique opportunity for database students to present their research results, learn cutting edge research, and interact with internationally recognized researchers from both academia and industry at the ACM SIGMOD 2015 Database Conference. As one of the most prestigious conferences in data management research, ACM SIGMOD/PODS has contributed significantly to the advance of all aspects of data management technologies and applications since 1975. Today, ACM SIGMOD is a dynamic and comprehensive program for publication, education, and interaction; and it is a leading international forum for database researchers, practitioners, developers, and users to explore cutting-edge ideas and results and to exchange techniques, tools, and experiences.<br/><br/>This project will provide partial support for students from U.S. institutions, especially female and minority students, to attend and present their research work at ACM SIGMOD 2015 in Melbourne, Australia. Besides meeting with researchers from academia and industry during regular program sessions, in SIGMOD 2015 students will be able to participate in the following interactive activities as partially supported by this project: a new researcher symposium, an undergraduate research poster competition, face-to-face meetings with leading researchers, and a student mentoring workshop. These opportunities will have a long lasting impact on the future career of the participants. The broader impact is to train the future generation of leaders and workforce in the critical field of data management.<br/><br/>Further information concerning this award may be found at the SIGMOD Travel Fellowship web site: http://www.sigmod2015.org/grants3.shtml",10,University of Minnesota-Twin Cities
1020001000408443,"NRI: Collaborative Research: RobotSLANG: Simultaneous Localization, Mapping, and Language Acquisition","Humans and robots alike have a critical need to navigate through new environments to carry out everyday tasks. A parent and child may be touring a college campus; a robot may be searching for survivors after a building has collapsed. In this collaboration by faculty at two institutions, the PIs envision human and robotic partners sharing common perceptual-linguistic experiences and cooperating in mundane tasks like janitorial work and home care as well as in critical tasks like emergency response or search-and-rescue. But while mapping and navigation are now commonplace for mobile robots, when considering human-robot collaboration for even simple tasks one is confronted by a critical barrier: robots and people do not share a common language. Human language is rich in linguistic elements for describing our spatial environment, the objects and places within it, and navigable paths through it (e.g., ""go down the hallway and enter the third door on the right.""). Robots, on the other hand, inhabit a metric world of occupied and unoccupied discretized grid cells, wherein most objects are devoid of meaning (semantics). The PIs' goal in this project is to overcome this limitation by conjoining the well understood problem of simultaneous localization and mapping (SLAM) with that of language acquisition, in order to enable robots to learn to communicate with people in English about navigation tasks. The PIs will spur interest in this novel research area within the scientific community by means of an Amazing Race challenge problem modeled after the reality television show of the same name, which will place robots and human-robot teams in unknown environments and charge them with completing a specific task as quickly as possible. Other outreach activities will include visits to K-12 schools with demonstrations. <br/><br/>This work will focus on simultaneous localization, mapping, and language acquisition, a field of inquiry that remains untouched. The crucial principles are that semantics are formulated as a cost function, which in turn specifies a joint distribution over many variables including those capturing sensory input, language, the environment map, and robot motor control. The cost function and joint distribution support standard inference of many forms, such as command following. More importantly, they support multidirectional inference over multiple variable sets jointly, such as simultaneous mapping and language interpretation. Within this innovative multivariate optimization-based framework, the PIs plan a thorough experimental regimen including both synthetic and real-world datasets of challenging environments, grounding the semantics of natural language in spatial maps of the realistic visual world and robot motor control, while navigating along particular paths or to arrive at particular destinations in (possibly novel) environments that are mapped not only in a geometric sense but also with linguistic underpinning to these particular paths and destinations. The language approach is compositional and uses spatially-grounded representations of nouns (objects/places) and prepositions (relations between them). These spatially-grounded representations will be modeled in the context of mapping. Furthermore, the PIs will consider realistic environments and adapt visual models thereof according to the joint model. 
The PIs are aware of no other work that jointly models mapping, vision, and language acquisition.",7,University of Michigan Ann Arbor
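The stated principle that a semantic cost function induces a joint distribution can be illustrated with a Gibbs form, p(hypothesis) proportional to exp(-cost(hypothesis)). The hypotheses and cost values below are invented for illustration and are not the project's model.

import math

def gibbs(costs):
    """Turn a mapping hypothesis -> cost into a normalized distribution."""
    weights = {h: math.exp(-c) for h, c in costs.items()}
    z = sum(weights.values())
    return {h: w / z for h, w in weights.items()}

# Joint hypotheses: (grounding of "the third door on the right", candidate map).
costs = {("door_1", "map_A"): 4.0,    # poor language fit
         ("door_3", "map_A"): 1.0,    # language fits and map is consistent with SLAM
         ("door_3", "map_B"): 2.5}    # language fits, map less consistent
for hyp, p in gibbs(costs).items():
    print(hyp, round(p, 3))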
1020001000408446,EXP: Advancing Early STEM Learning through Haptic Feedback Displays,"Improving Science, Technology, Engineering and Mathematics (STEM) education is a national priority. Children in the U.S. have continued to fall behind their international peers in math and science scores on international assessments. Not only are students behind in STEM concepts, but they also show less interest in STEM learning. While much of the focus on STEM learning has occurred in grades K-12, an important area of STEM education involves improving math and science literacies at the pre-kindergarten (pre-K) level. Young children have the natural capacity to explore and understand STEM concepts in everyday life, and learning these early skills affects later development. In parallel, another trend in early learning involves the growing use of tablet computers in both formal and informal educational settings. However, little is known about how tablets support learning, particularly in STEM. Understanding the conditions under which digital tools may promote early STEM learning is an important topic of study given the pervasiveness of tablet technology and that tablet-based content can potentially increase learner engagement. One argument against purely digital content on a tablet computer, however, is that tactile cues and sensory experiences important for STEM learning are now lost. This project examines whether the emerging technology of haptic -- or tactile feedback -- touch-screen displays can improve preschoolers' learning of science concepts. The results will contribute to theories of how young children learn with electronic media as well as offer design guidance for the creation of developmentally appropriate content using novel haptic feedback displays. This research will lay the foundation that will enable other researchers to examine learning outcomes associated with this new media as it grows in commercial availability.<br/><br/>Research on child development emphasizes the need for children to have multiple modes to engage in science learning, and haptic displays may be one way to add an additional mode of interaction to traditional touchscreens and create a more immersive, embodied learning experience. The PIs will conduct human-centered design of new haptic science learning applications, which will involve preschool STEM education experts in the creation of these novel materials and two large-scale laboratory experiments that will assess the effectiveness of haptic feedback in tablet-based learning. Results will contribute new knowledge of (1) how to design educational media leveraging surface haptic displays and (2) the conditions under which this technology effectively promotes learning and engagement among young children. The PIs will make the developed applications openly available for use by other researchers and disseminate our results through interdisciplinary venues in children's media and development, human-computer interaction, and engineering of haptic technology. Project web site (http://preschoolhaptics.soc.northwestern.edu/) will provide information on the project and access to privacy-protected experimental data and software artifacts.",9,Northwestern University
1020001000408453,EXP: Promoting Learning through Annotation of Embodiment (PLAE),"The Cyberlearning and Future Learning Technologies Program funds efforts that will help envision the next generation of learning technologies and advance what we know about how people learn in technology-rich environments. Cyberlearning Exploration (EXP) Projects explore the viability of new kinds of learning technologies by designing and building new kinds of learning technologies and studying their possibilities for fostering learning and challenges to using them effectively. The Promoting Learning through Annotation of Embodiment (PLAE) project will research how new motion-tracking technologies and augmented reality can be adapted to support young children's science learning in the classroom. Embodied resources - gesture, physical motion, and one's location in space - are increasingly recognized as important modalities for students to engage and better understand science and mathematics concepts. However, these embodied resources need to be connected to other intellectual resources more readily recognized in school: notation systems, mathematical equations, graphs, and scientific vocabulary. The project will investigate how the ability to label, identify, and view key elements of activity within augmented reality learning environments supports student reflection on scientific content in the physical sciences. Project research will help develop the foundations for the application of technology systems employing embodied resources combined with annotation to support children's learning of basic science concepts. Because of the popularity of motion-tracking interfaces for commercial computer gaming (e.g. the Xbox Kinect and Wii), the capabilities of less expensive motion-tracking systems are rapidly advancing to the point where practical classroom applications can be developed in the near future.<br/><br/>The project will research how annotations of embodied play simulations in an augmented reality environment can direct student attention towards key scientific concepts while providing them with opportunities to reflect upon and revise their understanding of those concepts. After development and pilot testing of the system, students will be assigned to one of two conditions that involve either: (1) all student-generated annotations or (2) all teacher- and researcher-created annotations. By contrasting these two models, the project will be able to explore in detail the role of annotation in supporting students' reflection within embodied modeling activities, and to further demonstrate the value that is added by allowing students to design and negotiate their own annotations. There will be two main sources of data for each of the experiments: (1) a pre-post content measure to assess overall growth in student understanding of the particulate nature of matter, and (2) project coding and analysis of videos of student activity to analyze the types of learning processes promoted by the technology and curriculum, with a focus on how the annotation features support reflection about the underlying rules of the system. The results of this research will inform the design of future educational technologies which rely upon embodied motion.",9,Indiana University
1020001000408454,"NRI: Collaborative Research: RobotSLANG: Simultaneous Localization, Mapping, and Language Acquisition","Humans and robots alike have a critical need to navigate through new environments to carry out everyday tasks. A parent and child may be touring a college campus; a robot may be searching for survivors after a building has collapsed. In this collaboration by faculty at two institutions, the PIs envision human and robotic partners sharing common perceptual-linguistic experiences and cooperating in mundane tasks like janitorial work and home care as well as in critical tasks like emergency response or search-and-rescue. But while mapping and navigation are now commonplace for mobile robots, when considering human-robot collaboration for even simple tasks one is confronted by a critical barrier: robots and people do not share a common language. Human language is rich in linguistic elements for describing our spatial environment, the objects and places within it, and navigable paths through it (e.g., ""go down the hallway and enter the third door on the right.""). Robots, on the other hand, inhabit a metric world of occupied and unoccupied discretized grid cells, wherein most objects are devoid of meaning (semantics). The PIs' goal in this project is to overcome this limitation by conjoining the well understood problem of simultaneous localization and mapping (SLAM) with that of language acquisition, in order to enable robots to learn to communicate with people in English about navigation tasks. The PIs will spur interest in this novel research area within the scientific community by means of an Amazing Race challenge problem modeled after the reality television show of the same name, which will place robots and human-robot teams in unknown environments and charge them with completing a specific task as quickly as possible. Other outreach activities will include visits to K-12 schools with demonstrations. <br/><br/>This work will focus on simultaneous localization, mapping, and language acquisition, a field of inquiry that remains untouched. The crucial principles are that semantics are formulated as a cost function, which in turn specifies a joint distribution over many variables including those capturing sensory input, language, the environment map, and robot motor control. The cost function and joint distribution support standard inference of many forms, such as command following. More importantly, they support multidirectional inference over multiple variable sets jointly, such as simultaneous mapping and language interpretation. Within this innovative multivariate optimization-based framework, the PIs plan a thorough experimental regimen including both synthetic and real-world datasets of challenging environments, grounding the semantics of natural language in spatial maps of the realistic visual world and robot motor control, while navigating along particular paths or to arrive at particular destinations in (possibly novel) environments that are mapped not only in a geometric sense but also with linguistic underpinning to these particular paths and destinations. The language approach is compositional and uses spatially-grounded representations of nouns (objects/places) and prepositions (relations between them). These spatially-grounded representations will be modeled in the context of mapping. Furthermore, the PIs will consider realistic environments and adapt visual models thereof according to the joint model. 
The PIs are aware of no other work that jointly models mapping, vision, and language acquisition.",7,Purdue University
1020001000408461,EXP: Understanding Computational Thinking Process and Practices in Open-Ended Programming Environments,"The Cyberlearning and Future Learning Technologies Program funds efforts that will help envision the next generation of learning technologies and advance what we know about how people learn in technology-rich environments. Cyberlearning Exploration (EXP) Projects explore the viability of new kinds of learning technologies by designing and building new kinds of learning technologies and studying their possibilities for fostering learning and challenges to using them effectively. This project brings together two approaches to help K-12 students learn programming and computer science: open-ended learning environments, and computer-based learning analytics, to help create a setting where youth can get help and scaffolding tailored to what they know about programming without having to take tests or participate in rigid textbook exercises for the system to know what they know. <br/><br/>The project proposes to use techniques from educational data mining and learning analytics to process student data in the Alice programming environment. Building on the assessment design model of Evidence-Centered Design, student log data will be used to construct a model of individual students' computational thinking practices, aligned with emerging standards including NGSS and research on assessment of computational thinking. Initially, the system will be developed based on an existing corpus of pair-programming log data from approximately 600 students, triangulating with manually-coded performance assessments of programming through game design exercises. In the second phase of the work, curricula and professional development will be created to allow the system to be tested with underrepresented girls at Stanford's CS summer workshops and with students from diverse high schools implementing the Exploring Computer Science curriculum. Direct observation and interviews will be used to improve the model. Research will address how learners enact computational thinking practices in building computational artifacts, what patterns of behavior serve as evidence of learning CT practices, and how to better design constructionist programming environments so that personalized learner scaffolding can be provided. By aligning with a popular programming environment (Alice) and a widely-used computer science curriculum (Exploring Computer Science), the project can have broad impact on computer science education; software developed will be released under a BSD-style license so others can build on it.",9,SRI International
1020001000408475,EXP: Collaborative Research: A Personalized Storyteller Companion to Promote Preschooler Language Skills,"Early language ability, including vocabulary skills and oral language knowledge during preschool, is an important predictor of children's academic success in subsequent school years. Social robots that can engage children as personalized learning companions hold great promise in augmenting the learning experience of children with parents and teachers. The ultimate goal is to foster the development, learning and promotion of academic achievements and the well-being of children. Such robots offer unique opportunities of guided, personalized and controlled social interaction during the delivery of a desired curriculum. They can play, learn and engage with children in the real world -- physically, socially and emotively. This research project is developing and evaluating an autonomous personalized social robot tutor-companion that can engage preschool children in storytelling activities during a long-term interaction. The research goal is to assess the impact that such an interaction may have on children's language skills development. The project advances the fields of autonomous storytelling generation and analysis, as well as the interaction between autonomous social robots and preschool children to promote learning. Its broader impact is to develop an effective technology that augments the preschool setting that promotes children's language development in a personalized and socially engaging way. <br/><br/>This research project develops and assesses the efficacy of an autonomous, personalized social robot that engages as a learning companion on the language development of pre-school children in the context of storytelling tasks. For this purpose, the research project develops a novel automatic story analysis tool, and a new personalized story generation algorithm, that pushes the envelope of current understanding of free-form storytelling and of how it fosters the development of early language skills in pre-school aged children. Additionally, the research develops a fully autonomous social robot that interacts with preschool children in the real-life environment of a preschool. A 9-month longitudinal study at multiple preschool sites is carried out to evaluate the impact of long-term interaction with the storytelling robot on children's engagement and language skill development. The research aims to increase the current understanding of the impact of longitudinal interaction with a social robot on children's language development. Ultimately, this can inspire new tools and practices for early pre-literacy and language education (as well as other domains such as STEM) in the home, classroom, and beyond.",9,Tufts University
1020001000408477,EXP: Teaching Bias Mitigation through Training Games with Application in Credibility Attribution,"The Cyberlearning and Future Learning Technologies Program funds efforts that will help envision the next generation of learning technologies and advance what we know about how people learn in technology-rich environments. Cyberlearning Exploration (EXP) Projects explore the viability of new kinds of learning technologies by designing and building new kinds of learning technologies and studying their possibilities for fostering learning and challenges to using them effectively. This project will develop and study an interactive game entitled VERITAS for making players aware of their cognitive bias in decision making and attempting to mitigate its effects. The game focuses on detecting deception and many of the research participants are from law enforcement. <br/><br/>Heuristics are cognitive shortcuts acquired from existing beliefs and past experiences. Heuristics and cognitive biases affect virtually every judgment being made in daily life. Humans often perform no better than chance when attempting to distinguish truths from deception and tend to be over-confident in their ability to detect deception. They are rarely aware of their own biases and are resistant to traditional training efforts aimed at changing decision-making processes. There are few studies verifying our ability to mitigate cognitive biases. This project will explore using game-based learning to make people aware of cognitive biases and reduce their reliance on simple heuristics. The project asserts that the experiential environment afforded by game-based learning should be particularly effective at facilitating the introspection necessary for learners to actively experiment with more systematic decision-making techniques. It will experimentally test the effectiveness of a game-based training program targeting law enforcement officials. This research integrates a theory-driven design using multiple research methods, including observation of behavior during game play, surveys, interviewing, and experimentation. This project will contribute to the understanding of how cognitive biases function within the context of deception detection and will advance understanding of how a game may be better suited than traditional learning methods at mitigating cognitive biases. Results will be disseminated through convention exhibitions and journal publications and the team plans to showcase this game at professional conferences with game developers, law enforcement officials, and the general public.",9,University of California-Santa Barbara
1020001000408479,EXP: Attention-Aware Cyberlearning to Detect and Combat Inattentiveness During Learning,"The ability to concentrate on tasks is critical to learning. This project will develop attention-aware cyberlearning as a new genre of learning technologies that automatically detect and respond to students' attentional states. In particular, this project will implement technology that will detect mind wandering (MW), which occurs when attention shifts from task-related thoughts to task-unrelated thoughts. MW has been studied in the context of complex comprehension tasks, where a high degree of MW has been found to lead to inferior performance. However, MW has not been studied in the context of learning with technology, and technology solutions have not been proposed to reduce it. This project addresses MW in the context of learning with technology. MW will be detected using inexpensive eye-tracking devices. The devices will be integrated with software to detect MW while students are engaged in learning high school biology through an interactive system called Guru. Once MW is detected, software strategies will be used to help students return to the learning task. The primary research will be in the development and testing of MW detection algorithms and in the development and testing of strategies to reduce MW.<br/><br/>In more detail, the attention-aware Guru will include an integrated eye tracker, an automated gaze-based MW detector, and intervention strategies to improve learning by mitigating the costs of MW. The research will be conducted in 9th grade biology classrooms in Northern Indiana, where the core technological components will be formatively studied, iteratively refined, and summatively evaluated. Generalizable insights will be identified at every stage of the project in order to promote transferability of the findings to future attention-aware technologies, thereby helping students learn to their fullest potential. In summary, the proposed attention-aware Guru technology will be used to advance fundamental research focused on uncovering: (1) the incidence of MW during learning with technology, (2) relationships between MW and learning, (3) patterns of eye-gaze that are diagnostic of MW, (4) automated intervention strategies to reorient attention and reduce the detrimental effects of MW, and (5) generalizable insights to catalyze future implementations of attention-aware cyberlearning.",9,University of Notre Dame
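In its simplest supervised form, a gaze-based MW detector of the kind described is a classifier over window-level gaze features labeled by thought probes. A toy sketch with invented features and data, not the project's detector:

from sklearn.linear_model import LogisticRegression

# Invented features per reading window: [mean fixation duration in ms,
# number of off-screen gaze samples]; labels come from thought probes.
X = [[220, 1], [240, 0], [210, 2], [230, 1],     # attentive windows
     [480, 9], [520, 12], [450, 7], [500, 10]]   # mind-wandering windows
y = [0, 0, 0, 0, 1, 1, 1, 1]

clf = LogisticRegression().fit(X, y)
print(clf.predict([[250, 1], [470, 8]]))      # expected: [0 1]
print(clf.predict_proba([[470, 8]]))          # columns are P(attentive), P(MW)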
1020001000408487,EXP: Learning Parallel Programming Concepts Through an Adaptive Game,"The Cyberlearning and Future Learning Technologies Program funds efforts that will help envision the next generation of learning technologies and advance what we know about how people learn in technology-rich environments. Cyberlearning Exploration (EXP) Projects explore the viability of new kinds of learning technologies by designing and building new kinds of learning technologies and studying their possibilities for fostering learning and challenges to using them effectively. This particular project will develop educational games that adapt to the skill level of the user and will conduct research on using them to teach concepts from computer programming. <br/><br/>Modern computing is increasingly handled in a parallel fashion and despite the growing body of work on how to teach parallel programming, little is understood about the learning of this subject. This project will shed light on the challenge of learning parallel programming and gather initial data on ways to scaffold it in college-level courses. We propose to develop a genre of adaptive learning games in which we will gather data on how experts and novices address parallel programming problems and study ways to scaffold learning. Our research will advance understandings of how people learn concepts associated with parallel programming, as well as investigating which activities enhance the learning process in this domain. We will generate content tailored to specific students through a method entitled procedural content generation. This work will transform the transition from sequential programming to parallel programming in undergraduate computer science curricula and advance personalized learning. We will disseminate our prototype and results via the CSinParallel network, an NSF-funded national organization that works to introduce concurrent, parallel, and distributed computing concepts into a greater percentage of computer science curricula. The research will further our understanding of how students learn parallel programming concepts and contribute to training a competitive workforce that is better prepared for today's parallel computing world.",9,Drexel University
1020001000408488,EXP: Collaborative Research: A Personalized Storyteller Companion to Promote Preschooler Language Skills,"Early language ability, including vocabulary skills and oral language knowledge during preschool, is an important predictor of children's academic success in subsequent school years. Social robots that can engage children as personalized learning companions hold great promise in augmenting the learning experience of children with parents and teachers. The ultimate goal is to foster the development, learning and promotion of academic achievements and the well-being of children. Such robots offer unique opportunities of guided, personalized and controlled social interaction during the delivery of a desired curriculum. They can play, learn and engage with children in the real world -- physically, socially and emotively. This research project is developing and evaluating an autonomous personalized social robot tutor-companion that can engage preschool children in storytelling activities during a long-term interaction. The research goal is to assess the impact that such an interaction may have on children's language skills development. The project advances the fields of autonomous storytelling generation and analysis, as well as the interaction between autonomous social robots and preschool children to promote learning. Its broader impact is to develop an effective technology that augments the preschool setting that promotes children's language development in a personalized and socially engaging way. <br/><br/>This research project develops and assesses the efficacy of an autonomous, personalized social robot that engages as a learning companion on the language development of pre-school children in the context of storytelling tasks. For this purpose, the research project develops a novel automatic story analysis tool, and a new personalized story generation algorithm, that pushes the envelope of current understanding of free-form storytelling and of how it fosters the development of early language skills in pre-school aged children. Additionally, the research develops a fully autonomous social robot that interacts with preschool children in the real-life environment of a preschool. A 9-month longitudinal study at multiple preschool sites is carried out to evaluate the impact of long-term interaction with the storytelling robot on children's engagement and language skill development. The research aims to increase the current understanding of the impact of longitudinal interaction with a social robot on children's language development. Ultimately, this can inspire new tools and practices for early pre-literacy and language education (as well as other domains such as STEM) in the home, classroom, and beyond.",9,Massachusetts Institute of Technology
1020001000408495,EXP: Partners in Learning: Building Rapport with a Virtual Peer Tutor,"The Cyberlearning and Future Learning Technologies Program funds efforts that will help envision the next generation of learning technologies and advance what we know about how people learn in technology-rich environments. Cyberlearning Exploration (EXP) Projects explore the viability of new kinds of learning technologies by designing and building new kinds of learning technologies and studying their possibilities for fostering learning and challenges to using them effectively. This project seeks to understand, and capitalize on, how teachers or tutors build rapport with learners by building technologies that support rapport. Research will study what rapport with learners looks like, when students deploy rapport techniques, when and how deploying rapport techniques (whether by people or automated agents) increases learning, and how rapport evolves over time. The project will build software that can help measure rapport between learners and with computers. <br/><br/>The project will begin by building a multimodal sensing rapport-detection system, based on recent advances in computer vision, signal processing, and machine learning, which will automatically recognize audio and visual behaviors during learner interaction with an intelligent tutoring system. Human-human tutoring interactions will be used to guide development of the rapport detection system. Both short term and longitudinal analyses will be conducted using students working with an AI-based math tutor, focused on their visual behaviors (head gaze estimation will be used to measure facial action units and gestures like head nods or shakes, and mutual gaze between humans), verbal behaviors (using CoreNLP and other software to detect verbal utterances that represent rapport-related social constructs such as politeness, friendship, etc.), and entrainment behaviors (synchrony or asynchrony, divergence and convergence). The project will then design RAPT, the Rapport-Aligned Peer Tutor, which encompasses both the rapport detection system and an intelligent pedagogical agent that accounts for the persistent social states of rapport and non-rapport. Mockups/simulations of the interface will be used to test the designs before the full pedagogical agent is built. Trials will be conducted in 9-11 grade classrooms working with an intelligent geometry tutor using a two-iteration design-based research study.",9,Carnegie-Mellon University
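One of the entrainment behaviors listed above, pitch convergence, can be approximated crudely by correlating the tutor's and learner's per-turn pitch. A minimal sketch with invented values; the project's detectors are considerably richer.

from statistics import mean

def pearson(x, y):
    """Pearson correlation between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

tutor_f0 = [185, 192, 200, 188, 195, 205, 199, 190]     # median F0 per turn, Hz
learner_f0 = [210, 214, 226, 209, 220, 233, 228, 215]
print("pitch entrainment:", round(pearson(tutor_f0, learner_f0), 2))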
1020001000408502,CAREER: Cross-Document Cross-Lingual Event Extraction and Tracking,"The goal of this research project is to advance the Information Extraction (IE) paradigm beyond ""slot filling"", and achieve more accurate, salient, complete, concise and coherent extraction results by exploiting dynamic background knowledge and cross-document cross-lingual event ranking and tracking. The approach consists of cross-document inference, unknown implicit event time prediction and reasoning, cross-document entity coreference resolution with global contexts, centroid entity detection, event attribute extraction and graph-based clustering algorithms for redundancy and contradiction detection, automatic new event clustering and active learning, abstractive summary generation based on extraction results, name translations with comparable corpora and cross-lingual co-training.<br/><br/>The experimental research is integrated with educational activities, including project-related curriculum development. The project involves PhD students as well as undergraduate students, engages non-Computer Science undergraduate students in utility evaluation and corpus annotation, and attracts elementary school and high school students through tutorials, regular research seminars and an extensive summer workshop. The results of this project will also have a benefit in E-Science and E-Learning by extracting and tracking the related knowledge from scientific literature and learning materials used in elementary schools and high schools.<br/><br/>Project results, including open source software, task definition guidelines, annotated corpora, and scoring metrics, will be disseminated via the project Web site (http://nlp.cs.rpi.edu/blendeet.html).",0,Rensselaer Polytechnic Institute
1020001000408574,SBP 2015 Outreach Efforts to Increase Diversity and Participation of Minorities,"The 2015 Social Computing, Behavioral-Cultural Modeling, and Prediction (SBP) conference will be held in Washington DC from March 31- April 3, 2015. The primary objectives are to provide an international platform for collaborative and exploratory research for a synergistic advancement of the emerging field of social computing. Social computing and behavioral modeling lie at the intersection of various disciplines, from computing and linguistics to the social sciences, economics and medicine. SBP is a highly interdisciplinary conference offering a rare and exciting opportunity for behavioral and social science researchers to come together with computational and computer scientists and other related disciplines. Through reaching out to students from diverse backgrounds, the travel grants supported by this award will broaden the base of interdisciplinary research involving computer science; students from many of these disciplines are less likely to have other travel support options than computer scientists. Past travel awards to SBP have been effective in attracting applicants from groups underrepresented in Computer Science. Through paper and poster presentations, interdisciplinary tutorials, keynote presentations, technical challenge problems and cross-fertilization round table sessions involving researchers and program staff from federal agencies, the conference aims to offer training, professional development, and community building opportunities.",10,West Virginia University Research Corporation
1020001000408584,HCC: Small: Modeling Human Communication Dynamics,"Face-to-face communication is a highly dynamic process where participants mutually exchange and interpret linguistic and gestural signals. Even when only one person speaks at a time, other participants exchange information continuously amongst themselves and with the speaker through gesture, gaze, posture and facial expressions. To correctly interpret the high-level communicative signal, an observer needs to jointly integrate all spoken words, subtle prosodic changes and simultaneous gestures from all participants.<br/><br/>The proposed effort endeavors to create a new generation of computational models for modeling the interdependence between linguistic symbols and nonverbal signals during social interactions. This computational framework has wide applicability, including the recognition of human social behaviors, the synthesis of natural animations for robots and virtual humans, improved multimedia content analysis, and the diagnosis of social and behavioral disorders (e.g., autism spectrum disorder). This research effort is an important milestone, complementary to recent research efforts focusing on only two components (e.g., social signal processing, which focuses on nonverbal and social signals). The proposed unified approach to Social-Symbols-Signals will pave the way for new robust and efficient computational perception algorithms able to recognize high-level communicative behaviors (e.g., intent and sentiments) and will enable new computational tools for researchers in behavioral sciences.<br/><br/>The proposed research will advance this endeavor through the development of new probabilistic models for jointly capturing the interdependence between language, gestures and social signals, and novel computational representations, which integrate data-driven processing and a logic rule-based approach (so that prior knowledge from social sciences can be easily included). Four fundamental research goals will be directly addressed: symbol-signal representation (joint representation of language and nonverbal), modeling social interdependence (joint modeling of communicative signals between multiple participants), variability in signal interpretations (variability with annotations of high-level communicative signals), and generalization and validation (generalization over different communicative signals and domains).<br/><br/>The proposed research will enable more natural interaction between users and embodied conversational dialogue systems, impacting the way in which computers are used, for example, in tutoring and in cultural and language training. The potential uses of such software and data go far beyond the scope of this project, making it possible, for example, to perform large scale corpus-based studies about social aspects of human face-to-face (multimodal) communication, or cognitive aspects of human multimodal processing. Following the investigators' past experience with sharing research software open-source, code and corpus annotations will be made available to the research community. These shared research results will be valuable for new researchers as well as important educational material for course development.",11,Carnegie-Mellon University
1020001000408595,CHS: Small: Collaborative Research: Examining the Design and Use of Internet of Things Technology in Public Life,"The goal of this research is to advance an empirical and systematic understanding of the design and use of Internet of Things (IoT) technologies for enabling, organizing, and monitoring collective social activities. The IoT is the ubiquitous system of objects imbued with computational capacity, connected to and communicating with one another. A key result of this work will be a comprehensive set of design guidelines for creating IoT technologies that can be used by anyone, that will be disseminated through a series of products and public engagement strategies targeting community stakeholders and more general audiences in media venues. Such guidelines are commonly understood as high-level heuristics that people can take up and apply to their work to generate effective designs. By embodying the guidelines in the design of novel prototypes that enable public life, this project will help people more easily participate in public life at varying scales. Lastly, by evaluating the strengths and limitations of these prototypes in collective design reviews, it will generate methodological insights that may be adopted and extended by other researchers in the fields of human-centered computing, science and technology studies, and design.<br/><br/>This empirical design research will provide insight into how the social-technical configurations of IoT products and services might contribute to more understanding of, and participation in, the networks of interaction that comprise our public lives. In particular, leveraging ""tracing"" as both a design tactic and a novel capacity of IoT, the research will demonstrate and explain how the technical connectedness that characterizes IoT might be communicated in ways that foster a greater sense of social connectedness. Traces are marks that indicate the passing of some other phenomenon, for example data generated as a by-product of people's online activity. Charting traces in the engineering and use of information technology may produce an understanding of the complex networks of actors, structures, and processes within our public life that leads to new possibilities for action, insight and engagement. As a design tactic, traces can serve to foster a greater sense of the connectedness of socio-technical systems. Thus, this research will discover and codify new design guidelines for successful IoT design for public life. Additionally, it will develop an innovative fusion of research methods, combining the prototyping of IoT technologies with external design reviews from both design experts and community members to validate those prototypes.",4,Georgia Tech Research Corporation
1020001000408599,CHS: Small: Collaborative Research: Examining the Design and Use of Internet of Things Technology in Public Life,"The goal of this research is to advance an empirical and systematic understanding of the design and use of Internet of Things (IoT) technologies for enabling, organizing, and monitoring collective social activities. The IoT is the ubiquitous system of objects imbued with computational capacity, connected to and communicating with one another. A key result of this work will be a comprehensive set of design guidelines for creating IoT technologies that can be used by anyone, that will be disseminated through a series of products and public engagement strategies targeting community stakeholders and more general audiences in media venues. Such guidelines are commonly understood as high-level heuristics that people can take up and apply to their work to generate effective designs. By embodying the guidelines in the design of novel prototypes that enable public life, this project will help people more easily participate in public life at varying scales. Lastly, by evaluating the strengths and limitations of these prototypes in collective design reviews, it will generate methodological insights that may be adopted and extended by other researchers in the fields of human-centered computing, science and technology studies, and design.<br/><br/>This empirical design research will provide insight into how the social-technical configurations of IoT products and services might contribute to more understanding of, and participation in, the networks of interaction that comprise our public lives. In particular, leveraging ""tracing"" as both a design tactic and a novel capacity of IoT, the research will demonstrate and explain how the technical connectedness that characterizes IoT might be communicated in ways that foster a greater sense of social connectedness. Traces are marks that indicate the passing of some other phenomenon, for example data generated as a by-product of people's online activity. Charting traces in the engineering and use of information technology may produce an understanding of the complex networks of actors, structures, and processes within our public life that leads to new possibilities for action, insight and engagement. As a design tactic, traces can serve to foster a greater sense of the connectedness of socio-technical systems. Thus, this research will discover and codify new design guidelines for successful IoT design for public life. Additionally, it will develop an innovative fusion of research methods, combining the prototyping of IoT technologies with external design reviews from both design experts and community members to validate those prototypes.",4,University of Washington
1020001000408616,RI: Small: Deep Natural Language Understanding with Probabilistic Logic and Distributional Similarity,"The web offers huge amounts of information, but that also makes it harder to find and extract relevant information. Natural language processing has made huge strides in developing tools that extract information and automatically answer questions, often with relatively simple methods aimed at relatively superficial analysis. This project explores methods for a deeper analysis and detailed natural language understanding. Contemporary intelligent systems have long used logic to describe precisely what a sentence means and how its pieces connect. But this precision has a downside: Logic needs the data to exactly match its expectations, or it breaks down. This is problematic for applications like question answering because language is hugely variable. There are often many different ways to say the same thing, or to say things that are not exactly the same but similar enough to be relevant. This project combines logic with a technology that identifies words and passages that are similar but not exact matches. Also, language often only implies things rather than stating them outright. The project handles this through a mechanism that draws conclusions that are likely but not 100% certain, and that states its level of confidence in a conclusion. <br/><br/>Being highly interdisciplinary, the project gives students insights into logic and inferences, as well as methods that determine word similarity based on occurrences in large amounts of text. This project also forges new links between computational and theoretical linguistics by transferring ideas in both directions. Through its combination of precision and approximation, this project paves the way for language technology that understands language more deeply and thus will enhance societally important applications such as information extraction and automatic question answering. Tasks in natural language semantics are requiring increasingly complex and fine-grained inferences. This project pursues the dual hypotheses that (a) logical form is best suited for supporting such inferences, and that (b) it is necessary to reason explicitly about uncertain, probabilistic information at the lexical level. This project combines logical form representations of sentence meaning with weighted inference rules derived from distributional similarity. It uses Markov Logic Networks for probabilistic inference over logical form with weighted rules, testing on the task of Recognizing Textual Entailment. It also develops new methods for describing word meaning in context distributionally in a way that is amenable to determining lexical entailment.",0,University of Texas at Austin
1020001000408646,NRI: Learning to Plan for New Robot Manipulation Tasks,"Robots have great potential societal benefits, especially working with humans in tasks such as manufacturing, disaster relief and elder care. Robots are, however, very difficult to program to perform new tasks: non-programmers can teach relatively stereotyped action sequences and expert programmers can generate more elaborate action strategies through long programming and debugging processes. Part of the difficulty stems from trying to teach the robot at the level of actions, since the actions to achieve a desired effect depend strongly on details of the environment. Instead, this project focuses on teaching the robot models of the environment. The robot can then use these models to plan its actions automatically. This approach leads to more adaptable behavior. Models are also easier to extend and re-use than action sequences, thereby reducing the burden for teaching subsequent tasks. The project involves a thorough integration of research and education. Graduate and undergraduate students are involved in all aspects of the research. Furthermore, the research in this project will become part of an undergraduate subject on robot algorithms at MIT.<br/><br/>This project will develop techniques to teach a robot to perform long-horizon tasks in complex, uncertain domains, in a way that equips the robot with knowledge it can re-use and re-combine with previous knowledge to solve not just the task it was taught, but a broad array of additional tasks. Furthermore, the robot will be aware of its own knowledge and lack of knowledge, and will be able to plan to take actions, including performing experiments and asking humans for further information, to improve its own knowledge about how to behave in its environment. The project will develop a set of machine learning tools that will allow humans to, relatively quickly and straightforwardly, teach the basic ideas of a new domain to the robot, and then enable the robot to continue to improve its knowledge as it gains experience in the domain. This project will build on a new hierarchical framework for integrating robot motion planning, symbolic planning, purposive perception and decision-theoretic reasoning. The framework, as it stands, supports planning and execution to achieve pick-and-place tasks in complex domains that may require moving objects out of the way, using real, noisy, robot perception and actuation. However, it requires a specification of the domain it is to operate in. In our existing implementation, the domain description was written by hand, by experts, through a long period of trial-and-error. The concrete objective of the project is to develop methods enabling a robot to learn to perform high-level tasks in new domains by acquiring new domain models through human-provided examples and advice. These methods will be evaluated in three domains using a Willow Garage PR2 mobile manipulation robot. The overriding objective will be to develop methods that apply broadly and can be used to instruct robots to perform a wide variety of tasks.",7,Massachusetts Institute of Technology
1020001000411177,"CGV: Large: Collaborative Research: Modeling, Display, and Understanding Uncertainty in Simulations for Policy Decision Making","The goal of this collaborative project (1212806, Ross T. Whitaker, University of Utah; 1212501, Donald H. House, Clemson University; 1212577, Mary Hegarty, University of California-Santa Barbara; 1212790/1540469, Michael K. Lindell, Texas A&M University/University of Washington) is to establish the computational and cognitive foundations for capturing and conveying the uncertainty associated with predictive simulations, so that software tools for visualizing these forecasts can accurately and effectively present this information about to a wide range of users. Three demonstration applications are closely integrated into the research plan: one in air quality management, a second in wildfire hazard management, and a third in hurricane evacuation management. This project is the first large-scale effort to consider the visualization of uncertainty in a systematic, end-to-end manner, with the goal of developing a general set of principles as well as a set of tools for accurately and effectively conveying the appropriate level of uncertainties for a range of decision-making processes of national importance.<br/><br/>The primary impact of this work will be methods and tools for conveying the results of predictive simulations and their associated uncertainties, resulting in better informed public policy decisions in situations that rely on such forecasts. Scientific contributions are expected in the areas of simulation and uncertainty quantification, visualization, perception and cognition, and decision making in the presence of uncertainty. Results will be broadly disseminated in a variety of ways across a wide range of academic disciplines and application areas, and will be available at the project Web site (http://visunc.sci.utah.edu). The multidisciplinary nature of the research and the close integration of the participating research groups will provide a unique educational environment for graduate students and other trainees, while also broadening the participation in computer science beyond traditional boundaries.",14,University of Washington
1020001000411228,"Smart and Connected Health (SCH) PI and Aspiring PI Meeting, 2015","This proposal supports the NSF Smart and Connected Health (SCH) Principal Investigators Meeting and Aspiring Investigator Workshop in Arlington, VA in July 2015. The meeting brings together principal investigators of projects awarded by the Smart and Connected Health program from 2011 to 2014. The SCH program was set up in order to bridge gaps in fundamental science and engineering necessary to transform health and healthcare in accordance with the with recent PCAST reports (2010), as well as to be focused on prevention and person-centered care. In particular, the workshop addresses a wider range of challenges in advancing health with foundational research from the computer science, engineering and the social, behavioral and economic disciplines. An objective of the workshop is to provide an opportunity for the principal investigators to explore research progress in a diverse set of funded projects and discuss important current issues, as well as future challenges associated with research in transdisciplinary scientific areas. The discussions will be supported by projects-related posters and further stimulated by a number of invited speakers in relevant fields that will explore the scientific frameworks in moving findings from discovery to dissemination and implementation research. It is anticipated that the meeting will stimulate discussions, connections, and collaborations that will facilitate progress in the area of Smart and Connected Health. Additionally, there is a one-day Aspiring Investigator workshop attached to the Principal Investigator meeting, that should grow the field by providing mentorship and support for a new cadre of investigators looking to do work in the Smart and Connected Health area. <br/><br/>The workshop should have a significant impact on the participants and the future of the Smart and Connected Health program (SCH). The focus of the workshop is on shared resources and assessing health impact in foundational research. The workshop results are likely to include new research directions that will be publicized to the research community through publications and a workshop website. The results dissemination should inform and the developing Smart and Connected Health research community in their transdisciplinary collaboration, educational activities, and spur other innovative research directions. The broader impacts of the workshop are to facilitate progress in the field and stimulate conversations, connections, and collaborations that will lead toward better informed and effective smart and connected health research and resulting technologies for the broadest possible user populations.",12,Arizona State University
1020001000411347,CAREER: High Performance Spatial Queries and Analytics for Spatial Big Data,"The rise of big data is changing our way of thinking about the world by providing new insights and creating new forms of value. The challenges for big data come not only from the volume but also the complexity, such as the multi-dimensional nature of spatial data. In this CAREER project, we will deliver a scalable and efficient spatial big data processing system that can take advantage of the fast-increasing processing power of computers and their latest supporting programming environments. This software can be used for a variety of applications in medical image processing and in GIS (Geographical Information Systems), e.g., for city planning, transportation planning, disaster response, military planning.<br/><br/>The fundamental goal of this CAREER project is to address the research challenges for delivering a high performance software system for spatial queries and analytics of spatial big data on MapReduce and CPU-GPU hybrid platforms, promote the use of the created open source software to support problem solving in multiple disciplines, and educate the next generation workforce in big data. Specifically, the following research aims will be pursued in this project: 1) Create new spatial data processing methods and pipelines with spatial partition level parallelism through MapReduce and propose multi-level indexing methods to accelerate spatial data processing; 2) Research two critical components to enable data parallelism: effective and scalable spatial partitioning in MapReduce, and query normalization methods for partition effect; 3) Research efficient GPU-based spatial operations to support object level and intra-object level parallelism, and integrate them into MapReduce pipelines; 4) Investigate optimization methods for data processing pipelines, data skew mitigation, and CPU/GPU resource coordination in MapReduce; and 5) Provide declarative spatial queries and create a query translator to automatically translate the queries into MapReduce applications.<br/><br/>The project will provide a high performance scalable spatial computing infrastructure to be deployed by researchers and application users world-wide from various disciplines, and the source code will be made open source and fully available. The project will provide a strong foundation to solve spatial big data problems such as location based services, remote sensing based applications, and map based applications. It will also enable the fast solving of scientific problems such as pathology imaging at large scale. The education activities include a revised undergraduate course with a new spatial big data theme, a revised graduate course with a focus on big data management, involvement of undergraduate, graduate and underrepresented students in research, symposia and science projects for K-12 students, and a software infrastructure to support the education. <br/><br/>For further information see the project web site: http://fushengwang.net/hadoop-gis<br/><br/>Keywords: spatial big data, MapReduce, CPU-GPU, spatial queries, spatial analytics",5,SUNY at Stony Brook
1020001000411398,EAGER: Refinement and Evaluation of a Robotic Wheelchair System,"The interfaces used by full-time wheelchair users with ALS, quadriplegia, and similar conditions are typically single-tasking, direct control interfaces. For example, a person with ALS with whom the PI works drives his wheelchair through a combination of head- and shoulder-activated switches, and speaks with the assistance of a computer he controls with head movements. He cannot drive the chair and communicate at the same time, in the way that normally-abled people take for granted. For most of us, walking is an unconscious activity; we decide where to go in the room, and then forget about it. Adding autonomy to a powered wheelchair would allow many wheelchair users to recover some of this ability, by essentially turning the wheelchair into a robot that can take advantage of the vast body of existing software and techniques for navigating about the world. The user of such a wheelchair would select a location on a map displayed on his/her computer, and then forget about it as the wheelchair drives itself to that location, reducing the dependence on caregivers and increasing independence. The PI's goal in this exploratory project is to develop a low-cost, open-source electronics package that will provide this capability for a (somewhat arbitrary) total hardware cost of $500; since medical insurance will not pay for such a system, potential users must pay for it out-of-pocket, and the PI is anxious to ensure that project outcomes will find their way into the lives of real wheelchair users. Instrumenting a wheelchair in this way will have additional benefits (e.g., knowing where the wheelchair is in the home could allow a simpler and faster interface to control home automation). And because the system will have an open software API, it will provide a common platform for researchers and developers working on assistive systems for full-time wheelchair users.<br/><br/>To these ends, the PI has designed and implemented a prototype electronics package for Permobil powered wheelchairs, and has tested this on a Permobil M300 powered wheelchair on loan from the ALS Association of Oregon and Southwest Washington. The electronics package is mounted underneath the seat at the front of the chair on a custom-fabricated metal plate, and includes two Hokuyo laser range-finders (one on each side), a small computer mounted on the front of the chair body behind the footrest, and custom electronics to supply power from the wheelchair batteries. An Arduino microcontroller connected to the computer allows movement commands to be sent to the wheelchair through a Permobil I/O Module. Integration with ROS allows the system to build maps of the environment, to use these to localize the wheelchair, and to take advantage of extensive autonomous navigation abilities. Once the wheelchair is localized within a map, the user can provide it with a goal point either via an on-screen map-based interface or by means of a Google Glass; the chair can then autonomously navigate to that point avoiding obstacles as it goes, using the standard ROS navigation system. Preliminary trials in a cluttered office environment have been encouraging, although additional refinement of the URDF and kinematic models of the system is needed, and the localization needs to be improved by adding an IMU to the electronics package.
The intellectual merit of the current work lies in three areas: understanding the design pressures behind the envisaged electronics package, if it is to be deployed on a range of powered wheelchairs in the real world; the redesign and implementation of mapping, localization, and path-planning algorithms for this setting; and designing a system that is fault-tolerant and capable of running for months and years at a time, without intervention by trained roboticists. Specific tasks will include: to improve and generalize the hardware and electronic design; to improve the localization algorithm; to improve the quality of wheelchair movement; to investigate replacements for the current laser range-finders; to document how to integrate everything onto a chair; and to build a number of additional kits.",7,Oregon State University
1020001000411398,CAREER: Data-driven Bottom-Up Humanoid Articulations,"This project investigates in much more detail the human anatomy and dynamics to further progress in replicating human articulation capabilities. To overcome long-standing imaging limitations, the project follows a data-driven approach, in which sampled dynamic motion data is used to infer unknown parameters such as soft-tissue geometry and behavior. The project is cross-disciplinary and driven by specific problems in orthopaedics and human character animation, although its focus is fundamentally on computational and automated analysis tools. The research plan develops computational tools for capturing robustly and accurately dynamic skeletal motion from medical images, for inferring biological shape and behavior from dynamic motion information, and for representing and calculating with these data. The education effort is naturally integrated with the research work, and includes recruiting and training graduate, undergraduate, and high-school students into multi-disciplinary work, through a modeling and simulation approach to teaching computer graphics. The outcome of the project is a set of human-anatomy based (i.e., humanoid) models of articulations that impacts orthopaedists' understanding of articulation injury and disease, leads to improved diagnosis and medical treatment, and improves the realism of digital character animation. The broad impact of the project includes applications in biology, bioengineering, ergonomics, evolutionary biology and robotics. Results of this project, including software, data, and publications, are publicly available through the project web site, http://vis.cs.pitt.edu/marai_nsf_career",14,University of Illinois at Chicago
1020001000411500,RESNA Student Design Competition 2015,"This is funding to support the undergraduate Student Design Competition (SDC), which will be part of the 2015 annual RESNA conference to be held June 10-14 in Denver. Today, over 56 million people in the United States report some degree of disability, a number which will likely grow in coming years as the so-called ""baby boomer"" generation ages. RESNA, the Rehabilitation Engineering and Assistive Technology Society of North America, is the one organization with an international focus that is devoted solely to technology for individuals with disabilities. RESNA's membership consists of individuals and institutions covering a range of disciplines (including researchers, clinicians, suppliers, manufacturers, consumers and educators who work in both non-profit and for-profit settings), all of whom are dedicated to promoting the exchange of ideas and information for the advancement of assistive technology. The annual RESNA Conference and the Student Design Competition combine to create a forum for interaction between working and experienced rehabilitation engineers and students who are about to enter the field. Unique in its primary focus on undergraduates, the event provides participants with experience and skills that help them succeed in their engineering and design careers. It encourages and mentors students in various disciplines to become involved in the assistive technology and rehabilitation engineering fields, and provides an opportunity for the students to network with their peers as well as participants in previous SDCs, many of whom are now leaders in service, research, and education related to technology for people with disabilities. Furthermore, continuing a practice started a few years ago, all participants (not just the finalists) will create blog posts that capture and archive their submissions, and these will remain accessible indefinitely through the RESNA website and the AT Wiki. Thus, those who teach will be able to share this information with their students as examples of projects that others have done, while service providers and people with disabilities will be able to search for unique solutions to barriers they may face. More information about RESNA and its annual conference is available online at http://www.resna.org while information specific to the SDC may be found at http://aac-rerc.psu.edu/wordpressmu/RESNA-SDC and student team entry abstracts are included on the AT Wiki http://atwiki.assistivetech.net/index.php/ATWiki_Home.<br/><br/>The RESNA Student Design Competition fosters innovation and creativity with the ultimate goal of producing technology that can contribute to the independence of individuals with disabilities. The first SDC was held in 1980 as part of the inaugural RESNA conference; since then over 260 designs have been identified as winning entries from among over 800 submitted by students from over 125 different universities and institutes of higher learning. Both undergraduate and master's level graduate candidates are eligible to submit entries, which must represent the work of students only; no faculty or professionals may be included as co-authors, although faculty may be mentioned as advisors and/or mentors. NSF has been a supporter since 2005. This year's funding will enable the SDC to be further expanded and enhanced, so as to include more entries and support for more design teams, especially from minorities, women, and individuals with disabilities. 
A call for participation has been posted on the conference website, and also distributed electronically to a large number of colleges and universities with engineering and design schools. A team of 5 judges will pre-select up to 6 semi-finalist teams, from which two members each will be invited to attend the conference fully supported with travel and hotel funds as well as complimentary registration. On Thursday, June 11, during the pre-conference activities these teams will make presentations and show off their working prototypes to the judges before a public audience, and 5 finalist teams will be selected. Judges will have an opportunity to ask questions and make suggestions and recommendations to the design teams. The conference schedule includes a platform session on Sunday, June 14 in which the finalists will present their designs to the general conference audience; in addition, finalists will have an opportunity to present their projects during the Annual Developers' Forum on Friday evening, June 12, which highlights new products under development (some past SDC student team designs have been patented and are now available commercially).",10,Rehabilitation Engineering and Assistive Tech Society of NA
1020001000411504,"CAP: Data Science, Learning and Youth: Connecting Research and Creating Frameworks","The Cyberlearning and Future Learning Technologies Program funds efforts that will help envision the next generation of learning technologies and advance what we know about how people learn in technology-rich environments. Cyberlearning Capacity (CAP) Projects focus on expanding and strengthening the cyberlearning community and often include conferences, workshops, or short courses. This project focuses on a workshop exploring the application of data science to K-12 education. It is motivated by the importance that reasoning with data has in today's world.<br/><br/>The workshop is entitled Data Science, Learning and Youth: Connecting Research and Creating Frameworks. Its objective is to move the educational implications of Data Science to the forefront of conversations among the cyberlearning research community. A large number of undergraduate and post-graduate programs are presently focusing on imparting data skills and computational reasoning. This workshop will extend this focus to K-12 education. It will bring together established and emerging scholars interested in Data Science Education from fields including Learning Sciences, Human-Computer Interaction and Computer Science, Mathematics and Statistics Education, Science Education, and Community Engagement and Citizen Science, and practitioners from K-12 settings. This workshop will foster new interdisciplinary collaborations and expose researchers interested in Data Science Education to relevant communities, literatures, and projects. The short term goal is to enable these communities to synthesize emerging findings, frameworks, and theories and better understand what tools, activities, and environments can support Data Science literacy. Our long term goal is to foster the development of a unified research community interested in Data Science Education. Direct outcomes of the workshop will include concrete plans to produce articles and synthesis documents focused on Data Science Education during the year immediately following the workshop. These documents will speak to three broad and complementary audiences: researchers, through the proposal of a special issue of a scholarly journal; practitioners, through two practitioner-oriented articles focusing on mathematics and science education; and the broader Cyberlearning community, through an online Synthesis Statement to be hosted by the Center for Innovative Research in Cyberlearning (CIRCL) resource website.",10,Tufts University
1020001000404989,Supporting Student Travel in Bioinformatics,"Travel support for trainees to attend the Pacific Symposium on Biocomputing (PSB) is provided. PSB is one of the leading Bioinformatics conferences, especially for students. PSB has a program of cutting-edge topics, many of which are emerging in their respective fields. This leads to an exciting and innovative series of presentations. PSB also includes several interactive sessions including poster sessions, discussion sessions, and most meals taken together. The size of this meeting (between 200 and 300 participants) makes it a perfect venue for young scientists to network and really interact with the field leaders. PSB brings together the outstanding science in many subdisciplines of Biocomputing. This results in the conference being an excellent learning event for all participants. Students, in particular, are able to learn about these rich, diverse areas from the experts, ask questions, and potentially discuss the downstream projects that may result and in which they can participate. Previous student attendees indicate that it is one of the best conferences for learning a broad range of important and complementary topics in bioinformatics.",10,Pennsylvania State Univ University Park
1020001000405119,HCC: Small: Modeling and Supporting Creativity During Collaborative STEM Activities,"This research will advance a novel technological approach that relies on machine learning techniques in general and Natural Language Processing (NLP) in particular to develop models and support for creativity during collaborative science, technology, engineering, and mathematics (STEM) educational activities. We will extend existing educational software with NLP capabilities to automatically assess and subsequently support creativity during collaborative tasks. The research questions are: (1) Which factors influence moment-by-moment creativity during collaborative problem solving activities? (2) How can NLP be used to build student models that detect those factors? (3) How can an ITS use this information to create personalized interventions to support creativity?<br/><br/>The first phase in this research will collect data from students solving problems in pairs with an educational application to identify factors that are relevant to creativity processes and outcomes. These data will be used to derive computational student models for automatically assessing student creativity in terms of both moment-to-moment processes and outcomes through machine learning methodologies focusing on an NLP approach. In addition to providing automatic assessment, the models will also inform factors that influence creativity during collaboration through educational data mining techniques. The final phase of the work will design and test a set of interventions to foster creativity during collaborative activities.<br/><br/>Using data corresponding to pairs of students solving open-ended STEM-based problems, this research will develop a rich and nuanced understanding of creativity processes and outcomes in collaborative contexts, and how these relate to knowledge, affect and creative thinking styles. Relying on that understanding, it will develop and evaluate novel student models that recognize salient, creativity-related events through NLP techniques, as well as personalized support for creativity during collaborative activities and evaluating that support through an experiment with university students. This project will pave the way for a new class of collaborative cyberlearning technologies to both assess and foster creativity, through just-in-time personalized support based on easily deployed NLP-based student models.",4,New York University
1020001000526362,CAREER: Using Imperfect Predictions to Make Good Decisions,"As humans and other animals navigate the world they demonstrate remarkable flexibility in encountering unfamiliar systems, spaces and phenomena, learning to make predictions about how they will behave, and making good decisions based on those predictions. Crucial to this ability is the fact that one does not need to make perfectly accurate or fully detailed predictions to make good decisions. Though, due to our natural limitations, our predictions about the future are necessarily flawed, they are nevertheless sufficiently useful to make reasonable decisions. For artificial agents, in contrast, imperfect predictions often lead to catastrophic failures in decision making. Many existing approaches fundamentally assume that the agent will eventually learn to make perfect predictions and make perfect decisions, which is unreasonable in sufficiently rich, complex environments. This work considers the problem of developing artificial agents that are more aware of and more robust to their own limitations. Agents that can more robustly and flexibly learn from experience in truly complex environments have the potential to impact nearly any application in which decisions are made over time, for instance autonomous robots/vehicles, personal assistants, and medical/legal decision support. Furthermore, as the project will be undertaken at an undergraduate-only liberal arts college, undergraduate researchers will play an integral role in the work. The PI will also build on the strength of the liberal arts setting to enhance instruction of key discipline-specific research and writing skills throughout the Computer Science curriculum. Explicit development of these skills will not only improve students' preparation for a wide variety of career paths (including basic research) but is also aligned with best practices for broadening participation in the discipline. <br/><br/>This project studies model-based reinforcement learning (MBRL) under the assumption that the agent has fundamental limitations that prevent it from learning a perfect model or from producing optimal plans. The central hypothesis is that in this context the MBRL problem cannot be decomposed into separate model-learning and planning problems, each treating the other as an idealized black box. Rather the optimization process for each component must be aware of its role in the overall architecture and of the limitations of its partner. One key aim of the work is to derive novel measures of model quality that are more tightly related to the true objective of control performance than standard measures of one-step prediction accuracy adapted from supervised learning settings. Another is to investigate how model learning objectives/algorithms can be adapted to account for the limitations of the specific planner that will use the model. Further, control algorithms will be investigated that can make effective use of models of non-homogeneous quality by mediating between model-based and model-free knowledge. The ultimate goal is to integrate these principles into novel MBRL agents that are significantly more robust to limitations in the model class and/or planner and are able to succeed in environments that are too complex and high-dimensional to be modeled or solved exactly.",2,Franklin and Marshall College
1020001000526368,CAREER: Collaborative Image Manipulation and Annotation in Surgical Telemedicine,"The objective of this project is to investigate the benefit of collaborative image interaction in conveying expert knowledge during distributed work. Although there have been great advances in the compression and transfer of audio and video signals for telecommunication, there is still a significant challenge in providing the appropriate tools for conveying expert knowledge in distributed work settings. Surgical telemedicine is one domain where collaborative interaction with highly specialized images is vital for the efficient and effective conveyance of expert information, and the collaborating individuals may not have the same level of expertise. In this research the PI will systematically ascertain the challenges of expert distributed communication of technical images and the benefits that collaborative image interaction has in conveying expert knowledge in distributed work. Because the findings from this research will be translatable to other expert distributed environments that utilize imagery in order to have a positive effect on distributed collaborative work practices, project outcomes will have broad implications aside from the immediate benefit of providing manipulation and annotation ability on images for distributed collaborative work to support healthcare. The project will provide educational experiences at multiple levels by including high school, undergraduate, and graduate students in this interdisciplinary research. Because the PI will actively engage and mentor under-represented groups throughout the project, the work may also increase the number of underrepresented minorities and women in science and technology. Finally, the PI will disseminate results broadly in both the medical and HCI literature, and coordinate workshops at the respective yearly conferences, in order to engage a wider audience of researchers and system designers.<br/><br/>This work is part of a long term effort by the PI in investigating how images play a part in scientific and medical collaborative practices, particularly with regards to how both professionals and lay people perceive the information through images and, in turn, the effects of new mechanisms for sensing, presenting, and interacting with those images. The research aims are: to determine the verbal and non-verbal mechanisms for conveying expert knowledge in co-located and distributed collaboration with images; to develop a prototype for distributed collaborative image interaction; to determine the effects of collaborative image interaction on expert communication processes and performance outcomes; and to determine the effects of and the reactions to collaborative image interaction on distributed work practices. The educational objective is to produce scholars with a deep understanding of collaborative technology design and social science evaluation on health information system success. This is part of a long-term effort by the PI to engage under-represented groups in computer and information science by exposing them to the process of developing technology solutions to real world problems. 
Specific educational activities will include development of an internship program for high school students to participate in research and systems design on the real-world problems of the operating room, and development of a cross-listed graduate course on telemedicine that utilizes a project-based curriculum to bring together the expertise of graduate students from three different UMBC graduate programs: Information Systems, Human Centered Computing, and Health IT.",1,University of Maryland Baltimore County
1020001000526375,CAREER: Learning and Using Models of Geo-Temporal Appearance,"Billions of geotagged and time-stamped images are publicly available via the Internet, providing a rich record of the appearance of people, places, and things across the globe. These images are a largely untapped resource that could be used to improve our understanding of the world and how it changes over time. This project develops automated methods of extracting useful information from this imagery and fusing it into high-resolution global models that capture geo-temporal trends. Once the trends have been captured, these models are used to improve performance on computer vision tasks and make geotagged imagery a usable and navigable resource for education and research in other disciplines. The project includes an education and outreach component that brings real-world problems to computer science (CS) students, mentors students across the educational spectrum, and makes the research accessible to the public.<br/><br/>This project develops computer vision technologies to capture spatial and temporal appearance trends and is organized into four main research thrusts: (1) investigating novel methods for extracting information from Internet imagery using weakly supervised learning, (2) developing techniques that integrate ground-level imagery with aerial and satellite data to model the expected image appearance anywhere in the world at any time, (3) evaluating methods for using such models to improve the performance of computer vision algorithms, and (4) automatically creating visual representations that make it possible for novice users to explore the learned geo-temporal trends via the Internet.",2,University of Kentucky Research Foundation
1020001000526380,CAREER: Machine Learning-Based Approaches Toward Combatting Abusive Behavior in Online Communities,"This research aims to computationally model abusive online behavior to build tools that help counter it, with the goal of making the Internet a more welcoming place. Since its earliest days, flaming, trolling, harassment and abuse have plagued the Internet. This project will lay bare the structure of online abuse over many types of online conversations, a major step forward for the study of computer-mediated communication. This will result from modeling abuse with statistical machine learning algorithms as a function of theoretically inspired, sociolinguistic variables, and will entail new technical and methodological advances. This work will enable a transformative new class of automated and semi-automated applications that depend on computationally generated abuse predictions. The education and outreach plan is deeply tied to the research activities, and focuses on scaling-up the research's broader impacts. A public application programming interface (API) will enable developers and online community managers around the world to integrate into their own sites the defenses against abuse developed by this research.<br/><br/>The work will consist of two major phases. In the first, the research will develop a deep understanding of abusive online behavior via statistical machine learning techniques. Specifically, the work will appropriate theories from social science and linguistics to inform the creation of features for robust statistical machine learning algorithms to predict abuse. These proposed abuse models will enable a brand new, transformative class of mixed-initiative artifacts capable of intervening in social media and online communities. In the second phase, this project will explore this newly enabled class of artifacts by building, deploying and evaluating sociotechnical tools for combatting abuse. Specifically, it will explore two classes of tools that use the abuse predictions: shields and moderator tools. The first, shields, will proactively block inbound abuse from reaching people. The second class of tools, moderator tools, will flag and triage abuse for community moderators.",2,Georgia Tech Research Corporation
1020001000369929,US-German Collaboration: Unravel CNS regeneration - From Fact Extraction to Experiment Design,"This research is motivated by the problem of spinal cord injuries, with the goal of producing data that can lead to pharmacological or other interventions to make recovery possible. It is hypothesized that major changes after a spinal cord injury occur on the protein level, and that a model of protein changes and interactions between proteins will make it possible to elucidate previously unnoticed relationships between the participating proteins and protein pathways. To do this, a computational neuroscience approach is a necessary addition to laboratory experiments. This analysis requires a considerable amount of background knowledge; to acquire this knowledge, natural language processing (text mining) is necessary; specifically, new techniques for natural language processing need to be developed in the neuroscience domain.<br/><br/>The research plan involves a series of proteomics experiments to be carried out on rats with induced spinal cord injuries and the interpretation of these results using a computational systems biology approach. The quality of knowledge-based computational analysis depends critically on the breadth of formally represented knowledge in the program. A major challenge is that much of the requisite knowledge is ""buried"" in scientific publications, rather than being available in computable form in databases. Therefore, novel text mining techniques will be developed, tailored for the neuroscience domain, to extract such knowledge from scientific publications and convert it into a computable form. A generic computational analytical tool, known as Hanalyzer, is already available but needs to be adapted to the specific problem; the challenge is to develop the natural language processing technology. A neuroscience-specific aspect of this challenge is that there is a high diversity in the surface forms of words that are used to refer to spinal cord regeneration, making machine-learning-based approaches susceptible to data sparsity and rule-based approaches vulnerable to an intractable number of keywords that must be accounted for. A distributional approach will be used to approach this challenge, with novel techniques that make use of semantic role labeling, recognition of ontological concepts, and dependency parsing to learn the surface forms that correspond to abstract or implicit concepts like spinal cord regeneration.<br/><br/>This project is a collaboration involving investigators in Denver, Colorado and in Duesseldorf, Germany. A companion project is being funded by the German Ministry of Education and Research (BMBF).",3,University of Colorado at Denver
1020001000369995,CRCNS: Collaborative Research: Neural Correlates of Hierarchical Reinforcement Learning,"Research on human behavior has long emphasized its hierarchical structure: Simple actions group together into subtask sequences, and these in turn cohere to bring about higher-level goals. This hierarchical structure is critical to humans' unique ability to tackle complex, large-scale tasks, since it allows such tasks to be decomposed or broken down into more manageable parts. While some progress has been made toward understanding the origins and mechanisms of hierarchical behavior, key questions remain: How are task-subtask-action hierarchies initially assembled through learning? How does learning operate within such hierarchies, allowing adaptive hierarchical behavior to take shape? How do the relevant learning and action-selection processes play out in neural hardware? <br/><br/>To pursue these questions, the present proposal will leverage ideas emerging from the computational framework of Hierarchical Reinforcement Learning (HRL). HRL builds on a highly successful machine-learning paradigm known as reinforcement learning (RL), extending it to include task-subtask-action hierarchies. Recent neuroscience and behavioral research has suggested that standard RL mechanisms may be directly relevant to reward-based learning in humans and animals. The present proposal hypothesizes that the mechanisms introduced in computational HRL may be similarly relevant, providing insight into the cognitive and neural underpinnings of hierarchical behavior. <br/><br/>The project brings together two computational cognitive neuroscientists and a computer scientist with expertise in machine learning. The proposed research, which includes both computational modeling and human functional neuroimaging and behavioral studies, pursues a set of hypotheses drawn directly from HRL research. A first set of hypotheses relates to the question of how complex tasks are decomposed into manageable subtasks. Here, fMRI and computational work will leverage the idea, drawn from HRL research, that useful decompositions ""carve"" tasks at points identifiable through graph-theoretic measures of centrality. A second set of hypotheses relates to the question of how learning occurs within hierarchies. Here, fMRI and modeling work will pursue the idea that hierarchical learning may be driven by reward prediction errors akin to those arising within the HRL framework. The project as a whole aims to construct a biologically constrained neural network model, translating computational HRL into an account of how the brain supports hierarchically structured behavior.",2,Princeton University
1020001000370017,CRCNS: Collaborative Research: Responses of the Rodent-Vibrissal-Trigeminal System to Air Currents,"The rodent vibrissal-trigeminal system is one of the most important models in neuroscience for the study of sensorimotor integration. To date, however, research has focused exclusively on direct tactile sensation. Recent results from the Hartmann and Gopal laboratories have demonstrated that rat vibrissae have a robust and repeatable mechanical response to airflow. In addition, neurons in the vibrissal-trigeminal system are known to respond to air puffs. These results suggest that the rat may use its vibrissae to detect air currents and determine wind direction. The Northwestern-Elmhurst team will perform mechanical, behavioral, and computational studies to characterize the role of vibrissae in wind-following behaviors, and the vibrissal-related neural response to air currents. These will constitute some of the first investigations of the underlying mechanisms that permit terrestrial mammals to sense and follow the wind. The team will specifically identify the morphological features of vibrissae and their orientation on the mystacial pad that enable flow sensing behaviors. They will investigate the broad hypothesis that differential mechanical deformations of the vibrissae across the mystacial pad can encode a variety of flow parameters. Finally, behavioral experiments will be performed to determine the extent to which the rat uses its vibrissae to sense airflow, and to quantify the movement strategies used during anemotaxis in the behaving animal. The partnership between Northwestern and Elmhurst will provide significant research opportunities for undergraduates; in addition, videos will be developed to teach the fundamental principles of fluid dynamics and biological sensing that underlie this research. The proposed work has potentially large implications for olfactory localization and the structure of the olfactory system, and is likely to lead to the development of novel flow-sensing technologies.",12,Elmhurst College
1020001000370021,NRI-Small: Multifunctional Electroactive Polymers for Muscle-Like Actuation,"This project aims to develop a new, bistable electroactive polymer that combines large actuation strain and energy density with variable stiffness and bistable deformation. The technical approach of the project involves: (1) synthesizing new polymers comprising interpenetrating polymer networks to achieve stable, high-strain actuation; (2) investigating ultrathin carbon nanotube coatings for fault tolerance and enhanced operation reliability; (3) reducing the driving voltages of electroactive polymers to around 200 V by synthesizing new polymers with high dielectric permittivity as well as by developing processing techniques to produce high-quality polymer thin films; and (4) fabricating compact modular actuators that can be readily integrated into robotic systems. The potential transformative technical impact of this project is a radically new actuator material that can reproduce the structural, actuation, and sensing functions of muscles, and can be inserted into a broad range of robotic systems for locomotion and manipulation. <br/><br/>This project will develop a new actuator material based on a bistable electroactive polymer that behaves like an artificial muscle, and offers a combination of attributes that future robotic systems demand, including power output that outperforms human skeletal muscle, flexibility, quietness, and biocompatibility. Actuators based on the new polymer material enable the design of robotic systems that interact with people, such as assistive prostheses or assistive devices for people with disabilities, humanoid robots for elderly in-home care, and surgical robots to save lives. The material can also be used for industrial automation for increased production efficiency. This project includes significant outreach and educational activities. It will provide summer research intern opportunities for under-represented minority high school students each year. Undergraduate and graduate students will participate in the proposed research to gain hands-on research experience, as well as analytical, communication, and interpersonal skills.",7,University of California-Los Angeles
1020001000370033,CRCNS: Collaborative Research: Neural Correlates of Hierarchical Reinforcement Learning,"Research on human behavior has long emphasized its hierarchical structure: Simple actions group together into subtask sequences, and these in turn cohere to bring about higher-level goals. This hierarchical structure is critical to humans' unique ability to tackle complex, large-scale tasks, since it allows such tasks to be decomposed or broken down into more manageable parts. While some progress has been made toward understanding the origins and mechanisms of hierarchical behavior, key questions remain: How are task-subtask-action hierarchies initially assembled through learning? How does learning operate within such hierarchies, allowing adaptive hierarchical behavior to take shape? How do the relevant learning and action-selection processes play out in neural hardware? <br/><br/>To pursue these questions, the present proposal will leverage ideas emerging from the computational framework of Hierarchical Reinforcement Learning (HRL). HRL builds on a highly successful machine-learning paradigm known as reinforcement learning (RL), extending it to include task-subtask-action hierarchies. Recent neuroscience and behavioral research has suggested that standard RL mechanisms may be directly relevant to reward-based learning in humans and animals. The present proposal hypothesizes that the mechanisms introduced in computational HRL may be similarly relevant, providing insight into the cognitive and neural underpinnings of hierarchical behavior. <br/><br/>The project brings together two computational cognitive neuroscientists and a computer scientist with expertise in machine learning. The proposed research, which includes both computational modeling and human functional neuroimaging and behavioral studies, pursues a set of hypotheses drawn directly from HRL research. A first set of hypotheses relates to the question of how complex tasks are decomposed into manageable subtasks. Here, fMRI and computational work will leverage the idea, drawn from HRL research, that useful decompositions ""carve"" tasks at points identifiable through graph-theoretic measures of centrality. A second set of hypotheses relates to the question of how learning occurs within hierarchies. Here, fMRI and modeling work will pursue the idea that hierarchical learning may be driven by reward prediction errors akin to those arising within the HRL framework. The project as a whole aims to construct a biologically constrained neural network model, translating computational HRL into an account of how the brain supports hierarchically structured behavior.",2,University of Massachusetts Amherst
1020001000370036,CRCNS: Collaborative Research: The Role of Dendritic Processing in Persistent Neural Activity,"Memories on the time scale of seconds to tens of seconds are stored as patterns of neural activity that persist long after the offset of a stimulus. This persistent neural activity is believed to be critical for processing new information and forming cognitive perceptions. Recent studies suggest that purely circuit-based mechanisms are insufficient to explain the robustness of persistent activity to biological noise and perturbation. This proposal will test the hypothesis that persistent activity is maintained by a hybrid cellular/circuit mechanism in which circuit level feedback mediates the activation of memory processes in a neuron's dendrites known as plateau potentials. To quantitatively understand how active dendritic properties contribute to persistent activity, a new modeling framework will be developed to directly and simultaneously fit a memory network to data from a diverse set of experiments characterizing intrinsic excitability, anatomical connectivity, neural coding, and response to perturbations. These models will be used to predict the patterns of dendritic activity that can be seen with fluorescence calcium imaging. To test these predictions, the zebrafish preparation will be used to directly measure dendritic activity during eye movement behavior from cells storing a memory of desired eye position. Two-photon imaging of calcium indicators will be used to measure spatiotemporal patterns of activity in the dendritic neuropil, and separately in individual dendritic branchlets, to determine the presence of plateau potentials. Together, these computational and experimental results will help determine how cellular and circuit properties work in concert to generate one of the most important brain dynamics, persistent neural activity.",20,Joan and Sanford I. Weill Medical College of Cornell University
1020001000370039,CRCNS: Collaborative Research: Responses of the Rodent-Vibrissal-Trigeminal System to Air Currents,"The rodent vibrissal-trigeminal system is one of the most important models in neuroscience for the study of sensorimotor integration. To date, however, research has focused exclusively on direct tactile sensation. Recent results from the Hartmann and Gopal laboratories have demonstrated that rat vibrissae have a robust and repeatable mechanical response to airflow. In addition, neurons in the vibrissal-trigeminal system are known to respond to air puffs. These results suggest that the rat may use its vibrissae to detect air currents and determine wind direction. The Northwestern-Elmhurst team will perform mechanical, behavioral, and computational studies to characterize the role of vibrissae in wind-following behaviors, and the vibrissal-related neural response to air currents. These will constitute some of the first investigations of the underlying mechanisms that permit terrestrial mammals to sense and follow the wind. The team will specifically identify the morphological features of vibrissae and their orientation on the mystacial pad that enable flow sensing behaviors. They will investigate the broad hypothesis that differential mechanical deformations of the vibrissae across the mystacial pad can encode a variety of flow parameters. Finally, behavioral experiments will be performed to determine the extent to which the rat uses its vibrissae to sense airflow, and to quantify the movement strategies used during anemotaxis in the behaving animal. The partnership between Northwestern and Elmhurst will provide significant research opportunities for undergraduates; in addition, videos will be developed to teach the fundamental principles of fluid dynamics and biological sensing that underlie this research. The proposed work has potentially large implications for olfactory localization and the structure of the olfactory system, and is likely to lead to the development of novel flow-sensing technologies.",12,Northwestern University
1020001000370046,NRI-Small: Collaborative Research: Assistive Robotics for Grasping and Manipulation using Novel Brain Computer Interfaces,"This is a collaborative proposal (with UC Davis) which is aimed at making concrete some of the major goals of Assistive Robotics. A team of experts has been brought together from the fields of signal processing and control, robotic grasping, and rehabilitative medicine to create a field-deployable assistive robotic system that will allow severely disabled patients to control a robot arm/hand system to perform complex grasping and manipulation tasks using novel Brain Muscle Computer Interfaces (BMCI). Further, the intent of this effort is not just technology-driven, but is also driven by clear and necessary clinical needs, and will be evaluated on how well it meets these clinical requirements. Validation will be performed at the Department of Regenerative and Rehabilitation Medicine at Columbia University on a diverse set of disabled users who will provide important feedback on the technology being developed, and this feedback will be used to iterate on the system design and implementation.<br/><br/>Intellectual Merit: The intellectual merit of this proposal includes:<br/>o Novel research in Human Machine Interfaces that has the potential to be transformative in eliciting rich, multi-degree-of-freedom signal content from simple and non-invasive surface electromyographic (sEMG) sensors.<br/>o Development of smart adaptive software that employs machine learning algorithms that can continually monitor user performance, and then automatically calibrate and tune system parameters based on system performance.<br/>o Data-driven methods for real-time grasp planning algorithms that can be used with both known and unknown objects.<br/>o Methods for finding pose-robust grasps that are tolerant of errors in sensing.<br/>o Evaluation of an underactuated hand as a grasping device for certain application tasks.<br/>o Integration of 3D vision with real-time grasp planning.<br/>o Scientific evaluation at the clinical level of the impact of these new technologies on the disabled population.<br/><br/>Broader Impacts: The broader impacts of this proposal include:<br/>o Development of a complete system to aid the severely disabled population with tetraplegia.<br/>o Extensions of this technology to others lacking motor control function, including multiple sclerosis, stroke, amyotrophic lateral sclerosis (ALS or Lou Gehrig's disease), cerebral palsy, and muscular dystrophy.<br/>o New technology that can extend the reach and impact of the field of Assistive Robotics.<br/>o Major extensions to the open-source GraspIt! software system that will allow many other researchers to leverage the results of this project.<br/>o Educational thrusts that will bring together engineering students, clinicians and the disabled population to extend the reach and scope of Assistive Robotics.<br/>o New directions in Human Machine Interfaces that can extend beyond the disabled population and into a variety of other applications.",7,Columbia University
1020001000370051,NRI-Small: Collaborative Research: Assistive Robotics for Grasping and Manipulation using Novel Brain Computer Interfaces,"This is a collaborative proposal (with UC Davis) which is aimed at making concrete some of the major goals of Assistive Robotics. A team of experts has been brought together from the fields of signal processing and control, robotic grasping, and rehabilitative medicine to create a field-deployable assistive robotic system that will allow severely disabled patients to control a robot arm/hand system to perform complex grasping and manipulation tasks using novel Brain Muscle Computer Interfaces (BMCI). Further, the intent of this effort is not just technology-driven, but is also driven by clear and necessary clinical needs, and will be evaluated on how well it meets these clinical requirements. Validation will be performed at the Department of Regenerative and Rehabilitation Medicine at Columbia University on a diverse set of disabled users who will provide important feedback on the technology being developed, and this feedback will be used to iterate on the system design and implementation.<br/><br/>Intellectual Merit: The intellectual merit of this proposal includes:<br/>o Novel research in Human Machine Interfaces that has the potential to be transformative in eliciting rich, multi-degree-of-freedom signal content from simple and non-invasive surface electromyographic (sEMG) sensors.<br/>o Development of smart adaptive software that employs machine learning algorithms that can continually monitor user performance, and then automatically calibrate and tune system parameters based on system performance.<br/>o Data-driven methods for real-time grasp planning algorithms that can be used with both known and unknown objects.<br/>o Methods for finding pose-robust grasps that are tolerant of errors in sensing.<br/>o Evaluation of an underactuated hand as a grasping device for certain application tasks.<br/>o Integration of 3D vision with real-time grasp planning.<br/>o Scientific evaluation at the clinical level of the impact of these new technologies on the disabled population.<br/><br/>Broader Impacts: The broader impacts of this proposal include:<br/>o Development of a complete system to aid the severely disabled population with tetraplegia.<br/>o Extensions of this technology to others lacking motor control function, including multiple sclerosis, stroke, amyotrophic lateral sclerosis (ALS or Lou Gehrig's disease), cerebral palsy, and muscular dystrophy.<br/>o New technology that can extend the reach and impact of the field of Assistive Robotics.<br/>o Major extensions to the open-source GraspIt! software system that will allow many other researchers to leverage the results of this project.<br/>o Educational thrusts that will bring together engineering students, clinicians and the disabled population to extend the reach and scope of Assistive Robotics.<br/>o New directions in Human Machine Interfaces that can extend beyond the disabled population and into a variety of other applications.",7,University of California-Davis
1020001000370054,"CRCNS: Cortical representation of phonetic, syntactic and semantic information during speech perception and language comprehension","The overarching goal of this project is to discover how language-related information is represented and processed in the human brain. To address this issue we propose to use a novel computational modeling approach, voxel-wise modeling. Voxel-wise modeling draws from the principles of nonlinear system identification, and it provides an efficient method for using complex data sets collected under naturalistic conditions to test multiple hypotheses about language representation. The specific research plan is divided into three aims, each targeted at a different form of language-related information. Aim 1 will reveal how low-level features of speech, such as spectral power, spectral modulation and phonemic structure, are represented across human cortex. Subjects will passively listen to human speech while hemodynamic brain activity is recorded by functional MRI. Voxel-wise modeling will then be used to determine how each point in the brain (i.e., each voxel, or volumetric pixel) is tuned for these various features. Using analogous methods, Aim 2 will reveal how syntactic and semantic features are represented across cortex. Finally, Aim 3 will reveal how language-related information is represented when it is delivered by auditory versus visual modalities. In this case speech and video stimuli will be used. Separate models will be estimated for data recorded during auditory and visual stimulation, and voxel-wise tuning will be compared across modalities. The voxel-wise computational models developed under this proposal will reveal how these various types of language-related information are represented across the cortical surface. These models will also provide clear predictions about how the brain will respond to novel speech stimuli. The results of the proposed research will have broad impacts on clinical problems related to speech perception and production, and they could form the basis of powerful brain decoding device that would enable neurological patients to communicate by thought alone.",16,University of California-Berkeley
1020001000370057,CRCNS: Collaborative Research: The Role of Dendritic Processing in Persistent Neural Activity,"Memories on the time scale of seconds to tens of seconds are stored as patterns of neural activity that persist long after the offset of a stimulus. This persistent neural activity is believed to be critical for processing new information and forming cognitive perceptions. Recent studies suggest that purely circuit-based mechanisms are insufficient to explain the robustness of persistent activity to biological noise and perturbation. This proposal will test the hypothesis that persistent activity is maintained by a hybrid cellular/circuit mechanism in which circuit level feedback mediates the activation of memory processes in a neuron's dendrites known as plateau potentials. To quantitatively understand how active dendritic properties contribute to persistent activity, a new modeling framework will be developed to directly and simultaneously fit a memory network to data from a diverse set of experiments characterizing intrinsic excitability, anatomical connectivity, neural coding, and response to perturbations. These models will be used to predict the patterns of dendritic activity that can be seen with fluorescence calcium imaging. To test these predictions, the zebrafish preparation will be used to directly measure dendritic activity during eye movement behavior from cells storing a memory of desired eye position. Two-photon imaging of calcium indicators will be used to measure spatiotemporal patterns of activity in the dendritic neuropil, and separately in individual dendritic branchlets, to determine the presence of plateau potentials. Together, these computational and experimental results will help determine how cellular and circuit properties work in concert to generate one of the most important brain dynamics, persistent neural activity.",20,University of California-Davis
1020001000370059,"NRI-Small: Robust, highly-mobile MEMS micro-robots based on integration of piezoelectric and polymer materials","The research objective of this award is to integrate polymer and piezoelectric micro-structures to create robust, sub-centimeter terrestrial micro-robots, and to use a combination of modeling and experimentation to evaluate leg dynamics of such robots. Specifically, high-aspect ratio parylene flexural mechanisms will be integrated with thin-film lead-zirconate-titanate (PZT) actuators in complex, multi-degree-of-freedom micro-robotic leg joints. Experimental measurements of parylene structure response to integrated thin-film PZT actuation or external bulk PZT ceramic or load cell actuation will be used to characterize parylene stiffness and damping characteristics at varying strain rates, relative to high-strain rate piezoelectric actuation. Adhesion between bulk-micromachined silicon trench surfaces and PZT/metal stack layers will also be evaluated. Measured parylene properties will then be incorporated into existing micro-robotic foot-terrain models developed by the PI and students to produce simulation models of PZT-polymer robots that can be validated against experimental robot prototypes.<br/><br/>Successful completion of this work would dramatically improve the ability of walking millimeter-scale micro-robots to move over uneven terrain, thus increasing the range of possible interactions between human operators and engineered or natural systems. The target user community for millimeter-scale autonomous robots includes disaster response teams, infrastructure maintenance and monitoring workers, and national security organizations. The framework to be deployed would be a technique to embed piezoelectric microactuators in resilient micro-robotic appendages, producing sample walking micro-robot platforms. Results from this research would be coupled into both undergraduate and graduate curriculum and secondary school education. The latter effort will consist of interactive hands-on and web-based projects developed by the PI for use in science education for the local Ypsilanti, Michigan school district and the broader community of interested citizens.",7,University of Michigan Ann Arbor
1020001000370064,NRI-Small: Expert-Apprentice Collaboration,"Recent advances in robot platforms have outpaced our ability to effectively program robots to accomplish useful tasks, often in complex environments that they share with humans. In order for flexible, general-purpose robots to become widespread, e.g., in teaching skills to children or assisting the elderly, there must be a way of interacting with them beyond programming. Teaching by demonstration offers a potentially powerful and practical approach to realizing the promise of large-scale personal robotics in a wide range of applications. In teaching by demonstration, the expert (human) demonstrates the task on different hardware than the apprentice or student (robot) uses. <br/><br/>The project aims to develop visual feature-based methods that allow robots to teach humans and learn from them by unifying apprenticeship learning, learning by demonstration (or by imitation) and teaching humans, taking into account the differences between experts and apprentices. The resulting system will be evaluated on a PR2 robot (mostly on grasping and manipulation tasks). The scientific advances resulting from the project in learning from demonstration and imitation learning, both general techniques with broad applicability, will greatly simplify the programming of robots, which would make it easier for non-expert users to perform this important task, which currently requires considerable expertise in robotics as well as computer science. <br/><br/>Broader impacts of the research include development of new robotics curricula, enhanced opportunities for research-based interdisciplinary training at the intersection of computer vision, machine learning, and robotics, outreach activities (including participation in a public school robotics instruction program). All of the results of the research, including publications, open-source software and datasets, will be made freely available to the larger scientific and academic community.",7,Duke University
1020001000370071,NRI-Small: Robot Movement for Patient Improvement - Therapeutic Rehabilitation for Children with Disabilities,"In the United States, the Individuals with Disabilities Education Act (IDEA) states that children with a physical disability are entitled to a ""free appropriate public education that emphasizes special education and related services designed to meet their unique needs and prepare them for further education, employment, and independent living."" Unfortunately, access to necessary assistive robotic technology remains unequal and children with physical disabilities and movement disorders are largely overlooked. However, recent successes in commercial robots appear to foreshadow an explosion of promising robotic applications for individuals with disabilities. Not only can robots be tasked to assist with Activities of Daily Living (ADL) but they can also help individuals through rehabilitation exercises such that therapists and family members can provide assistance in other arenas. The major barrier is that, to date, most assistive robotic devices are not designed for children. And although robotic systems for rehabilitation can generally be used to record information about motor performance during active movements, these systems are not ideal for rehabilitation with respect to children. This poses a unique challenge for deploying such robotic devices for this target demographic.<br/><br/>To overcome this barrier, state-of-the-art techniques must be created to facilitate the interaction necessary for robots to be useful for therapeutic rehabilitation with respect to children. Because animate playthings naturally engage children, the goal of this project is to fuse play and rehabilitation techniques using a robotic design to induce child-robot interaction that will be entertaining as well as effective for pediatric rehabilitation. Of importance within this proposed work are approaches that allow therapists to provide instruction to robots on rehabilitation tasks that can be remapped to play behaviors specific to the individual child. In addition, robots must have internal perception and inference algorithms that allow them to learn new play behaviors and incorporate them to evoke corresponding behaviors in the child.<br/><br/>Major research questions are (1) How can child play behavior most effectively be assessed and shared with an assistive robot? (2) How can this knowledge be captured and generalized into behaviors useful for rehabilitation? and (3) What are the most effective robot interfaces for communicating these behaviors both to the therapist for evaluation and the child for directing movement?<br/><br/>Intellectual Merit: The objective of this research effort is to further rehabilitation techniques for children by developing and validating the core technologies needed to integrate therapy instruction with child-robot play interaction in order to improve pediatric rehabilitation. A principal goal of the research endeavor is to examine how evaluation of upper and/or lower limb body movement can be achieved through robot observation and how different rehabilitation exercises can be recoded to allow the mapping of therapist instruction to play behaviors. The focus is on using human-centered capabilities to enable robots to assist in rehabilitation exercises, an ability that is increasingly needed, especially given the desire to have daily therapeutic activities performed safely and consistently in almost any home environment.<br/><br/>Broader Impact: The successful development of a framework that builds upon proven human-centered observation techniques has the ability to tremendously increase the capabilities of robots that interact with children in a safe and effective manner. The significance of the approach will be emphasized in terms of providing assistance for children with cerebral palsy, but the results of this effort could lay the basis for similar efforts for children with varying disabilities. The PIs plan to incorporate these approaches into courses designed for robotics and software engineering. In addition, the education plan will incorporate many of these ideas into a ""boot-camp"" workshop for underrepresented students. Undergraduate research projects and demonstrations to middle-school students are anticipated to inspire and encourage the next generation of engineers and scientists and acclimate them into a new robot-integrated world.",7,Georgia Tech Research Corporation
1020001000370077,NRI-Small: A Novel Light-weight Cable-driven Active Leg Exoskeleton (C-ALEX) for Training of Human Gait,"Motorized exoskeletal orthoses are being actively researched today for gait training of stroke patients. These machines are typically designed to apply assistive/resistive forces on the impaired leg to help human subjects to improve walking, similar to what therapists do during training. While a number of such machines have been developed and used for gait training, these studies have only yielded ""mixed"" results in benefiting stroke patients clinically. The reasons for these disappointing results are the high inertia of the mechanisms, a mis-match in constraints between human and machine, and misalignment of the mechanism joints with the human joints. The proposed work investigates a novel and ground-breaking design of a cable driven exoskeleton to address these shortcomings. Based on extensive study of mechanisms and therapeutic control methods, cables will actuate the moving limbs and will also serve as structural members in tension. The design will consist of an inertial fixed cuff attached to the pelvis and three lightweight cuffs on the thigh, shank, and foot of each leg. This results in an order-of-magnitude reduction in the inertia of the links and eliminates rigid joints which, in turn, eliminates the mis-match and misalignment. Yet, the fact that cables can only pull and not push raises many scientific and design challenges that will be addressed theoretically and experimentally.<br/><br/>Broader Impact: Each year, about 700,000 people in the U.S. have an incidence of a stroke and currently there are 4.5 million people in the U.S. living with the after-effects of stroke. This research can directly impact the quality of life of these individuals with potentially better rehabilitative equipment and better rehabilitative results for retraining of their gait. This project will broaden the application of cable-driven robots to the emerging field of ""neuro-rehabilitation"" and ""functional learning."" This project will also involve close co-operation with Professor Clement Gosselin's research group at Laval University, who along with the PI, is credited with fundamental developments to the field of ""cable robots."" The project will also encourage undergraduate involvement in research as well as provide training and examples for a high school teacher/student to incorporate into the local curriculum. The PI has active links with high schools through a college-wide NSF-funded RET program. Several high school teachers and students have worked in the PI's laboratory to identify technologies to improve quality of life of neural impaired subjects.",20,University of Delaware
1020001000370089,SoCS: Collaborative Research: Strategies for Crowdsourcing Complex Design Work,"This project investigates how education and training methods from traditional work settings can be applied to paid online crowdsourcing. The focus is on how methods such as scaffolding, examples, critique, and apprenticeship affect worker performance, learning, task perseverance, and satisfaction. The project will produce guidelines for a more sustainable crowdsourcing infrastructure where employers can embed relevant domain knowledge into online tasks, and workers can learn key principles and then train less experienced members. The research focuses on worker-centered training strategies in the domain of visual design, which will yield knowledge about effective design principles and instructional methods for visual design. <br/><br/>Broader impacts: The project will contribute to increasing the availability of online work. It will expand the capabilities and skills of crowd workers, thereby allowing online work to become a more viable part of the American economy. The project will also lead to novel methods for organizations to achieve complex visual design work. More generally, the project will lead to new knowledge about how to train crowds to perform a complex activity and produce practical guidelines to help requesters write tasks and manage the crowd. Finally, the project will provide interdisciplinary training for graduate and undergraduate students in socio-computational system design, HCI concepts, educational theory, and evaluation methodologies. All course materials will be available online for reuse and adaptation. Undergraduate researcher training will focus on supporting underrepresented student groups.",4,Carnegie-Mellon University
1020001000370090,NRI-Small: Collaborative Research: A Design Methodology for Multi-fingered Robotic Hands with Second-order Kinematic Constraints,"This project, developing a systematic methodology for the design of multi-fingered robotic hands and grasping devices for a desired kinematic task, represents a novel formalization of the kinematic synthesis of articulated systems as a tree structure. The kinematic task is to be defined as positions and higher motion derivatives of the fingers, with accelerations related to the contact geometry at the fingertips for grasping actions. This research team aims to develop multi-fingered grasping devices for human-robot and anthropomorphic tasks; however, the method will be a general tool for the design of any kind of multiple-finger grasping device.<br/><br/>This research has a number of broader impacts affecting both the academic community and society at large. First, the project will directly result in a design tool for multi-fingered robotic hands to enable the automatic transformation from task specifications to design alternatives -- an important development in its own right. This design tool will increase the ability of industry to design high performance, cost-effective multi-fingered robotic hands and other end effectors. This directly impacts manufacturing by speeding the development of end-of-arm tooling, with secondary benefits to the cost and quality of the final product. This will assist the U.S. to maintain its leadership and encourage the creation of high-quality jobs. The proposed curriculum additions resulting from this project will produce competent engineers for industry with a greater ability to approach and solve design problems.",7,Idaho State University
1020001000370091,NRI-Small: Collaborative Research: Addressing Clutter and Uncertainty for Robotic Manipulation in Human Environments,"The long-term goal of this project is to develop personal robots that share a workspace with humans. To achieve the goal of personal robots in homes, the robots must adapt to the humans' living space, not vice-versa. Unfortunately, most human living spaces appear cluttered and unstructured to a robot. Much of this ""clutter"" is in fact structure, but structure for humans, not robots. This revised project addresses preliminary work in robot manipulation in the presence of clutter and uncertainty. The demonstrator task is a canonical example of human-robot coexistence: sharing a refrigerator. The robot must be able to extract specified items from a refrigerator that may also be accessed and altered by humans. We will develop the beginnings of a solution based on the following principles: such manipulation tasks can be solved by a hierarchical two-level planning strategy, consisting of a high-level metaplanner making use of low-level primitives; the low-level primitives should include push-grasping, sweeping, and other nonprehensile actions that take advantage of mechanics to manipulate cluttered environments when simple grasp-and-carry is impeded; and uncertainty in the state of the environment and its physical properties should be accounted for at both the metaplanner and primitive levels.<br/><br/>Broader Impacts: Although not all outreach goals can be completed within the revised scope, cluttered tasks are critically important for an aging population of about 35 million people (one in eight) in the United States. Furthermore, graduate students involved in this project will benefit from an ongoing collaboration with TU Munich, a world leader in robot control and personal robotics. TUM, CMU, and Northwestern have a history of graduate student exchange and have agreed to host exchange students under this project. Undergraduates will participate in the research as REU students or in other capacities. Several recent undergraduates working in the labs at CMU and Northwestern have gone on to PhD study in robotics, some with NSF graduate fellowships. Graduate students on this project will participate in internships at the Museum of Science and Industry during its upcoming Robot Revolution exhibit. They will interact with the public and help develop a robot manipulation demonstration for the exhibit main stage. These students will provide technical expertise to the exhibit while benefitting from a valuable outreach experience. Other planned outreach activities include lab tours and talks at local high schools. Both PIs serve as mentors in research programs for underrepresented undergraduate students. These students would have an opportunity to work on state-of-the-art manipulation hardware as part of this project.",7,Carnegie-Mellon University
1020001000370092,NRI-Small: Contextually Grounded Collaborative Discourse for Mediating Shared Basis in Situated Human Robot Dialogue,"In human robot dialogue, although human partners and robots are co-present in a shared environment, they have completely mismatched capabilities in perceiving and reasoning about the environment. Their knowledge and representations of the shared world are drastically different. In addition, the shared environment is full of uncertainties and unexpected events. Humans and robots may have different capabilities in attending and responding to these uncertainties. All of these contribute to a misaligned perceptual basis between a human and a robot, which jeopardizes their collaborative activities and task performance. To enable situated human robot dialogue, a critical component is to develop techniques that will support mediating the shared perceptual basis for effective conversation and task completion. <br/><br/>The objective of this National Robotics Initiative project is to develop a novel framework that tightly integrates high level language and dialogue processing with low level sensing and control systems and contextually grounds the collaborative discourse to mediate shared perceptual basis. By capturing grounded symbolic representations as well as continuous representations of the internal configuration of a robotic system and continuous information sensed from the changing environment, the framework allows the robot to promptly modify its execution without interrupting the on-going tasks. It further enables collaborations between humans and robots to mediate a shared perceptual basis and support efficient interaction in a highly dynamic environment.<br/><br/>This project will provide insight as to how the misaligned perceptual basis between a human and a robot should be mediated through a collaborative process and how such a process should be integrated to produce intelligent and collaborative robot behaviors. The expected results will benefit many applications such as manufacturing, service, assistive technology, and search and rescue.",7,Michigan State University
1020001000370094,NRI-Small: Collaborative Research: A Design Methodology for Multi-fingered Robotic Hands with Second-order Kinematic Constraints,"This project, developing a systematic methodology for the design of multi-fingered robotic hands and grasping devices for a desired kinematic task, represents a novel formalization of the kinematic synthesis of articulated systems as a tree structure. The kinematic task is to be defined as positions and higher motion derivatives of the fingers, with accelerations related to the contact geometry at the fingertips for grasping actions. This research team aims to develop multi-fingered grasping devices for human-robot and anthropomorphic tasks; however, the method will be a general tool for the design of any kind of multiple-finger grasping device. <br/><br/>This research has a number of broader impacts affecting both the academic community and society at large. First, the project will directly result in a design tool for multi-fingered robotic hands to enable the automatic transformation from task specifications to design alternatives -- an important development in its own right. This design tool will increase the ability of industry to design high performance, cost-effective multi-fingered robotic hands and other end effectors. This directly impacts manufacturing by speeding the development of end-of-arm tooling, with secondary benefits to the cost and quality of the final product. This will assist the U.S. to maintain its leadership and encourage the creation of high-quality jobs. The proposed curriculum additions resulting from this project will produce competent engineers for industry with a greater ability to approach and solve design problems.",7,University of California-Irvine
1020001000370095,NRI-Small: Mixed Human-Robot Teams for Search and Rescue,"The project aims at increasing the ability to respond to large-scale disasters and manage emergencies by including robots and agents as teammates of humans in search and rescue teams. The project focuses on large teams of humans and robots that have only incomplete knowledge of the disaster situation while they accomplish the mission to rescue people and prevent fires.<br/><br/>The methodology to achieve cooperation within the teams will be based on the development of mental models shared by team members. The shared mental models will facilitate the interactions among robots and humans by providing a suitable level of abstraction enabling them to share beliefs, desires, and intentions as they work to accomplish their tasks.<br/><br/>The performance of teamwork models will be measured by comparing various task performance metrics (such as time to save people), system level metrics (such as computation time or message traffic), and amount of sharedness of the mental models. The experimental work will be conducted using the open source RoboCup Search and Rescue Simulator.<br/><br/>Broader impacts include integration of research results in undergraduate courses, availability of the software produced as open source, outreach activities to expose K-12 students to research issues and to excite them about using computing methods for real-world problems. The long-term objective is to improve preparedness for emergency situations, which will help save lives and minimize property loss.",6,University of Minnesota-Twin Cities
1020001000370096,NRI-Small: Virtualized Welding: A New Paradigm for Intelligent Welding Robots in Unstructured Environment,"This project is to develop a new robotic platform with novel 3D modeling and visualization algorithms. An existing ""dumb"" welding robot will be augmented with sensors to observe the work piece, as well as its surroundings. New algorithms will be developed to record and reconstruct the welding process in 3D. The reconstructed data are transmitted to a control room and visualized with augmented reality techniques: A skilled welder can look at the welding process from different angles, as if he/she were right next to the actual welding system. Welding parameters can be adjusted by the human (with intelligence) and executed by the robot (with precision). More importantly, the adjustment, together with the reconstructed welding process, will be recorded and analyzed. System modeling techniques will be developed to correlate the human adjustment with the 3D reconstruction of the welding process. In this way, a welding robot can ""learn by examples"" the knowledge and experiences of a human welder and make similar intelligent adjustments by itself in the future. <br/><br/>The primary use for this new technology is in manufacturing. Successful completion of the proposed project lays the foundation for intelligent welding robots with closed-loop intelligent control. Such a robotic system can perform high-speed and high-precision welding while allowing more variations in the work pieces and environments. In addition, virtualized welding can be integrated with a mobile platform to allow welding in places that are hazardous or unsuitable for human welders. The proposed welding extension platform will significantly expand the use of welding robots as well as reduce manufacturing costs. Under-represented students will be recruited to participate in the research through existing institutional programs. Additional funding and industrial collaboration to transfer technology from research labs to industry will also be pursued.",7,University of Kentucky Research Foundation
1020001000370100,NRI-Small: Collaborative Research: A Dynamic Bayesian Approach to Real-Time Estimation and Filtering in Grasp Acquisition and Other Contact Tasks,"Robots cannot currently grasp objects or perform other contact tasks in unstructured environments with speed or reliability. This project is developing techniques for accurate real-time perception in support of contact tasks. In the proposed method, sensor data tracks the continuous motions of manipulated objects, while models of the objects are simultaneously updated. Particle filtering, a kind of Monte-Carlo simulation, ensures consistency of this tracking and updating.<br/><br/>The strongest impact of this work will be in robotic grasping and manipulation. Because of the synthesis of modeling and probabilistic inference, further impacts can be expected, for example in real-time haptics for telepresence.",7,SUNY at Albany
1020001000370101,NRI-Small: Collaborative Research: A Dynamic Bayesian Approach to Real-Time Estimation and Filtering in Grasp Acquisition and Other Contact Tasks,"Robots cannot currently grasp objects or perform other contact tasks in unstructured environments with speed or reliability. This project is developing techniques for accurate real-time perception in support of contact tasks. In the proposed method, sensor data tracks the continuous motions of manipulated objects, while models of the objects are simultaneously updated. Particle filtering, a kind of Monte-Carlo simulation, ensures consistency of this tracking and updating.<br/><br/>The strongest impact of this work will be in robotic grasping and manipulation. Because of the synthesis of modeling and probabilistic inference, further impacts can be expected, for example in real-time haptics for telepresence.",7,Rensselaer Polytechnic Institute
1020001000370103,NRI-Small: Collaborative Research: Addressing Clutter and Uncertainty for Robotic Manipulation in Human Environments,"The long-term goal of this project is to develop personal robots that share a workspace with humans. To achieve the goal of personal robots in homes, the robots must adapt to the humans' living space, not vice-versa. Unfortunately, most human living spaces appear cluttered and unstructured to a robot. Much of this ""clutter"" is in fact structure, but structure for humans, not robots. This revised project addresses preliminary work in robot manipulation in the presence of clutter and uncertainty. The demonstrator task is a canonical example of human-robot coexistence: sharing a refrigerator. The robot must be able to extract specified items from a refrigerator that may also be accessed and altered by humans. We will develop the beginnings of a solution based on the following principles: such manipulation tasks can be solved by a hierarchical two-level planning strategy, consisting of a high-level metaplanner making use of low-level primitives; the low-level primitives should include push-grasping, sweeping, and other nonprehensile actions that take advantage of mechanics to manipulate cluttered environments when simple grasp-and-carry is impeded; and uncertainty in the state of the environment and its physical properties should be accounted for at both the metaplanner and primitive levels.<br/><br/>Broader Impacts: Although not all outreach goals can be completed within the revised scope, cluttered tasks are critically important for an aging population of about 35 million people (one in eight) in the United States. Furthermore, graduate students involved in this project will benefit from an ongoing collaboration with TU Munich, a world leader in robot control and personal robotics. TUM, CMU, and Northwestern have a history of graduate student exchange and have agreed to host exchange students under this project. Undergraduates will participate in the research as REU students or in other capacities. Several recent undergraduates working in the labs at CMU and Northwestern have gone on to PhD study in robotics, some with NSF graduate fellowships. Graduate students on this project will participate in internships at the Museum of Science and Industry during its upcoming Robot Revolution exhibit. They will interact with the public and help develop a robot manipulation demonstration for the exhibit main stage. These students will provide technical expertise to the exhibit while benefitting from a valuable outreach experience. Other planned outreach activities include lab tours and talks at local high schools. Both PIs serve as mentors in research programs for underrepresented undergraduate students. These students would have an opportunity to work on state-of-the-art manipulation hardware as part of this project.",7,Northwestern University
1020001000370105,NRI-Small: Collaborative Research: Multiple Task Learning from Unstructured Demonstrations,"This project develops techniques for the efficient, incremental learning of complex robotic tasks by breaking unstructured demonstrations into reusable component skills. A Bayesian model segments task demonstrations into simpler components and recognizes instances of repeated skills across demonstrations. Established methods from control engineering and reinforcement learning are leveraged and extended to allow for skill improvement from practice, in addition to learning from demonstration. The project aims to unify existing research on each of these ideas into a principled, integrated approach that addresses all of these problems jointly, with the goal of creating a deployment-ready, open-source system that transforms the way experts and novices alike interact with robots.<br/><br/>A simple interface that allows end-users to intuitively program robots is a key step to getting robots out of the laboratory and into human-cooperative settings in the home and workplace. Although it is often possible for an expert to program a robot to perform complex tasks, this programming is often very time-consuming and requires a great deal of knowledge. In response to this, much recent research is focusing on robot learning-from-demonstration, where non-expert users can teach a robot how to perform a task by example. Unfortunately, much of this work is limited to the artificially-structured demonstration of a single task with a well-defined beginning and end. By contrast, human-cooperative robots will be required to efficiently and incrementally learn many different, but often related, tasks from complex, unstructured demonstrations that are easy for non-experts to produce.",7,University of Massachusetts Amherst
1020001000370106,NRI-Small: Cooperative Underwater Robotic Networks for Discovery & Rescue,"The goal of this project is to develop a cooperative underwater robotic network for underwater discovery and rescue. With technological advancements, human involvement in underwater activities has increased. Vulnerable oceanic engineering systems, however, can have drastically negative environmental and economic consequences. The oil spill in the Gulf of Mexico in 2010 was a case in point for the impact that underwater infrastructure can have on the marine ecosystem and the local economy. Compared to manned systems, underwater robots, also called autonomous underwater vehicles (AUVs), have inherent advantages by eliminating the need for life support systems and the potential risk to human life. Further, an AUV network (with a swarm of AUVs) can offer more benefits in efficiency and cost in underwater exploration, discovery and rescue. In this project, innovative algorithms, methods and techniques in autonomous underwater vehicle (AUV) design, cooperative control and underwater acoustic communication networks are proposed to ensure the AUV network's performance in highly uncertain environments. Deliverables for this project include key enabling technologies for the AUV networks and the demonstration of a network including several prototype AUVs.<br/><br/>A high performance, energy efficient and autonomous AUV network is significant to science, the economy, and society. It will have a significant impact on underwater infrastructure inspection, wildlife and habitat monitoring, and search and rescue missions. It can also be leveraged for oceanography data collection and water pathway monitoring. Beyond the research significance, this project has important impact on education and outreach by supporting undergraduates, women and other under-represented groups, as well as promoting multi-disciplinary collaboration across departments, campuses, and institutions.",7,University of Connecticut
1020001000370107,NRI-Small: Spacial Primitives for Enabling Situated Human-Robot Interaction,"To enable natural and productive human-robot interaction (HRI), a co-robot must both understand and control ""proxemics"" -- the social use of space -- in order to communicate in ways commonly used and understood by humans. This project focuses on answering the question: How do social (speech and gesture), environmental (loud noises and low lighting), and personal (hearing and visual impairments) factors influence positioning and communication between humans and co-robots, and how should a co-robot adjust its social behaviors to maximize human perception of its social signals?<br/><br/>The project will develop principled computational models for the recognition and control of proxemic co-robot behavior in HRI using both telepresence and autonomous co-robots. The research will establish a foundational component of HRI for co-robotics, with specific impact on special needs users in socially assistive contexts -- particularly the elderly, both aging in place and in institutions -- with the goal of mitigating isolation and depression, and encouraging exercise and socialization. <br/><br/>Broader impacts: The work will inform robot design and control, and provide software and a corpus of public HRI data for use by researchers worldwide. Beyond robotics, the project promises to inform, validate, and extend longstanding research in the social sciences. This project also includes a strong public and K-12 outreach component consisting of weaving the HRI themes being developed into annual regional and international outreach events. The events feature large-scale open houses and educational workshops with interactive demonstrations and hands-on activities that highlight human factors in computational systems as an effective means of increasing interest in STEM-related activities.",11,University of Southern California
1020001000370108,NRI-Small: Core Technologies for MRI-powered Robots,"The goal of this project is to create a transformative robotic technology that utilizes Magnetic Resonance Imaging (MRI) systems to power, control and image robots under the guidance and control of a clinician. Specifically, the research effort will be organized around three tasks: (1) creation of design principles for MRI-powered actuators, (2) development of motion planning and control algorithms for MRI-powered robots, and (3) design of MRI pulse sequences for closed-loop motor control. This tether-less robot technology addresses the needs for small, low cost medical robots identified in the Roadmap for US Robotics and can be exploited for robots ranging in size from centimeters down to fractions of a millimeter. At the centimeter scale, it could be used for robots designed to crawl inside body cavities to perform interventions and also for robotic prosthetic implants. At the millimeter and sub-millimeter scale, groups of MRI-powered robots can swim inside fluid-filled regions of the body to perform targeted therapies, such as drug and cell delivery, or to assemble as a sensor network. Two testbeds at these different scales will be used to evaluate and demonstrate the technology.<br/><br/>This research addresses a largely unexplored frontier in medical robotics that could revolutionize the standard of care for many serious medical conditions currently associated with both high mortality rates and high societal costs. The location of the PI's lab inside a teaching hospital provides a unique environment to integrate the research and education of the engineering and medical disciplines. To promote an understanding of engineering and medicine along with the value of learning and research to low income and minority school students, the project team will partner with local educational organizations. Furthermore, the project technology will consist of algorithms and software that can be utilized by researchers and educators throughout the country to provide fundamentally new capabilities to existing multi-million dollar equipment. The ultra-minimally invasive medical robots developed using this technology can potentially provide substantial societal benefits in terms of reduced trauma, precise image-based control and lower cost.",7,Children's Hospital Corporation
1020001000370112,NRI-Small: A Biologically Plausible Architecture for Robotic Vision,"The objective of this research is to develop a generic robotic vision architecture that is both biologically plausible and jointly optimal, in a decision theoretic sense, for attention, object tracking, object recognition, and action recognition, in both static and dynamic environments. The research is motivated by the observation that all these problems are solved by biological vision with very homogeneous neural computations. The approach is to exploit a mapping of accepted computational models of visual cortex into the elementary computations of statistical learning and inference in order to derive unified algorithms for all tasks. <br/><br/>Intellectual merit: the proposed unification of vision tasks is novel and of paramount importance for robotics, since it is computationally infeasible for a robot to implement a large set of disjoint vision algorithms. It will also exploit task synergies, producing algorithms that leverage the solution of one task to improve performance on another. This will likely enable overall better performance of vision systems. Finally, the project will produce novel insights on the structure of the visual world, and how it can be leveraged by robotic vision, by introducing new models for natural image statistics. <br/><br/>Broader impacts: The research has applicability in manufacturing, intelligent systems, health care, homeland security, etc. The expected theoretical insights are likely to be of wide application in statistics (models of feature dependence), neuroscience (models of neural computation), and computer vision (synergistic models). Educationally, the project provides an exciting opportunity for the involvement of undergraduates in research.",15,University of California-San Diego
1020001000370115,NRI-Small: Managing Uncertainty in Human-Robot Cooperative Systems,"This project attempts to combine human strengths in reasoning with machine capabilities in information fusion, task planning, and simulation to manage uncertainty and achieve successful human-robot partnerships to perform complex tasks in uncertain environments that were previously considered impractical or infeasible. The approach consists of three objectives: the use of sensing and control to reduce model registration uncertainty; the definition, simulation and implementation of virtual fixtures to allow humans to intuitively constrain the task; and the development of bi-directional task planning and execution with uncertainty to allow humans and robots to request help from one another.<br/><br/>This research has a number of broader impacts affecting both the academic community and society at large. The work is expected to have significant appeal to those in the manufacturing and medical robotics sectors, as testbeds will impact these areas. The PIs will mentor hands-on research by undergraduate, graduate, and post-doc students and guide them in the dissemination of their research to the scientific community. The team will also provide engineering experiences for middle school girls in the Baltimore area through weekend programs on the Johns Hopkins University campus, including the ""Ready, Set, Design!"" program. Finally, the development will be made freely available as all software will be fully integrated with open-source ROS.",6,Johns Hopkins University
1020001000370116,NRI-Small: Human-Robot Collectives as a Curriculum-Wide CS Learning Platform,"The following project is an effort to re-conceptualize what it means to study computer science at the undergraduate level. Based on years of research and curriculum development, the project team at the Rochester Institute of Technology (RIT) is collectively building on prior work to design a sequence of computer science courses that integrate the use of a network of robots to facilitate student learning. In this project, the co-robot teams share space and tasks with humans and are used as a teaching platform in an introductory context, and as a laboratory platform for projects in intermediate and upper-level courses in which students can develop and even invent new services. This approach enhances a traditional approach to teaching computer science and provides ample opportunities for students to design, test, and evaluate using co-robot systems. <br/><br/>The project focuses on three main goals: (1) Develop a co-robot platform that can be used by undergraduates of all levels within the existing curriculum and is adoptable by other universities; (2) Improve student engagement and learning of core CS concepts through modules that use the co-robot platform; and (3) Develop undergraduate students with a thorough understanding of co-robot systems and their potential applications. The project team is developing modules aimed at an introductory sequence, data management, and networking. Coupled with module development, there is heavy emphasis on building suitable hardware that will serve the needs of students, giving them the flexibility to develop new applications (code) to operate and manipulate the co-robotics systems. <br/><br/>The project measures the impact of the new approach on student outcomes through both qualitative and quantitative measures, with the intent of disseminating the model and technology beyond the context of RIT. As such, the project will enhance the knowledge base about the effective use of co-robotics systems in traditional undergraduate computer science courses.",9,Rochester Institute of Tech
1020001000370119,NRI-Small: The Intelligent Workcell - Enabling Robots and People to Work Together Safely in Manufacturing Environments,"The research objective of this award is to investigate methods to enable people and industrial robots to work safely within the same workspace. Current robotic manufacturing practice requires the physical separation of people and robots, which ensures safety, but is inefficient in terms of time and resources, and limits the tasks suitable for robotic manufacturing. This research will develop an ""Intelligent Workcell,"" which augments the traditional robotic workcell with perception systems that observe workers within the workspace. Methods to explicitly track workers and estimate their body pose will enable dynamically adaptive safety zones surrounding the robot, thereby preventing the robot from injuring workers. Algorithms will be developed to recognize the activities that workers are performing. These algorithms will learn a task-independent vocabulary of fundamental action components, which will form the building blocks for a hierarchical activity recognition framework. Finally, mechanisms for providing feedback to workers about the robot's intended actions will be studied.<br/><br/>This research is expected to provide new capabilities in robotic workcell safety and monitoring, allowing people and industrial robots to work safely and effectively in the same environment. Such capabilities would improve the efficiency of existing robotic workcells, since the robot would not be required to stop whenever a person enters the workspace (as is current practice). Furthermore, new manufacturing processes that involve robots and people working together on a single task would be enabled. Students at the graduate and undergraduate level will benefit from using the prototype Intelligent Workcell in project courses, and grade-school students will participate in short courses and workshops designed to ignite interest in STEM activities related to industrial robotics and computer vision.",7,Carnegie-Mellon University
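As a hedged illustration of the adaptive safety-zone idea described above (the radii, geometry, and speed policy are invented, not the project's design), a minimal Python sketch:

# Shrink the robot's allowed speed as a tracked worker approaches, using
# hypothetical distance-based safety zones around the robot's base.
import math

STOP_RADIUS = 1.0   # metres: halt inside this zone (assumed value)
SLOW_RADIUS = 2.5   # metres: scale speed down inside this zone (assumed value)
FULL_SPEED = 1.0    # normalized commanded speed

def speed_limit(robot_xy, worker_xy):
    """Return the allowed speed fraction given the tracked worker position."""
    d = math.dist(robot_xy, worker_xy)
    if d <= STOP_RADIUS:
        return 0.0
    if d >= SLOW_RADIUS:
        return FULL_SPEED
    # linear ramp between the stop and slow radii
    return FULL_SPEED * (d - STOP_RADIUS) / (SLOW_RADIUS - STOP_RADIUS)

for d in (0.5, 1.5, 3.0):
    print(d, speed_limit((0.0, 0.0), (d, 0.0)))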
1020001000370121,NRI-Small: Multi-modal sensor skin and garments for healthcare and home robots,"The objective of this research is to answer fundamental design questions for multi-functional robotic skin sensors, optimize their placement onto assistive robotic devices, have the robot and human ""learn"" how to use the skin sensors efficiently, and quantitatively assess the impact of this assistive technology to humans. The approach is to design and fabricate integrated micro-scale sensors in conjunction with iterative simulation and experimental studies of the performance of physical human-robot interaction enabled by this technology. <br/>Intellectual Merit<br/>This project will contribute efficient algorithms for optimal placement and data networking of distributed skin sensors on robots; new learning and control algorithms to sense human intent and improve interactivity; practical robotic skin and garment hardware with distributed sensors to include tactile, thermal imaging, and acceleration sensing in flexible materials that can be easily attached on and peeled off robots; and new metrics to evaluate the impact of this skin to humans including level of assistance, safety, ease of use, aesthetics, and therapeutic benefits.<br/>Broader Impacts<br/>Co-robots of the future will share their living spaces with humans, and, like people, will wear sensor skins and clothing that must be interconnected, fitted, cleaned, repaired, and replaced. In addition to aesthetic purposes that increase societal acceptance, these sensorized garments will also enhance robot perception of the environment, and enable extraordinary levels of safety, cooperation, and therapy for humans. The research proposed here will unlock near-term and also unforeseen applications of robotic skin with broad applicability, and especially to home assistance, medical rehabilitation, and prosthetics.",7,University of Texas at Arlington
1020001000370123,NRI-Small: Measuring Unconstrained Grasp Forces Using Fingernail Imaging,"This project develops the technology for unconstrained measurement of human grasp forces. Measurement of multi-fingered grasp forces typically requires a human to grasp an object at predefined sensor locations or to wear instrumented gloves that impede haptic sensations. The objective of this project is to characterize the ability to estimate three-dimensional grasp forces at the fingertips by measuring the color change of the fingernail. This fingernail imaging technique allows the human subject to freely choose where to place the fingers on the object, allowing for completely unconstrained multi-finger grasping. A magnetic levitation device is used to apply a range of 3-D forces to the human fingertip while collecting images of the fingernail. Various image processing techniques are being explored to register the fingernail images to a standard template, and various mathematical models relating pixel intensity to force are being investigated to determine an optimal method. A robotic motion-tracking technique is being implemented to keep the fingers in view of the camera as the hand moves during grasping experiments. The fingernail imaging technique is first validated using constrained grasping experiments, and then applied to unconstrained grasping experiments.<br/><br/>This research enables a co-robot to detect the individual finger forces of a human partner using a technique that does not interfere with the human's haptic sense. A co-robot trained with the appropriate calibration data could recognize and emulate or adapt to a human partner's grasp forces, measured using only vision. Research efforts are being integrated into the Robotics education and outreach at the University of Utah.",1,University of Utah
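For illustration only, and assuming synthetic data rather than real fingernail images, the following Python sketch shows the kind of linear calibration step the abstract alludes to: fitting a mapping from pixel intensities to a 3-D force by least squares.

# Synthetic stand-in for calibration: force ~ W @ pixel_intensities.
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_pixels = 200, 50
X = rng.uniform(0.0, 1.0, size=(n_samples, n_pixels))        # registered pixel intensities (synthetic)
W_true = rng.normal(size=(n_pixels, 3))                       # unknown mapping to (Fx, Fy, Fz)
F = X @ W_true + 0.01 * rng.normal(size=(n_samples, 3))       # "measured" 3-D forces with noise

# calibrate: solve X @ W ~= F in the least-squares sense
W_hat, *_ = np.linalg.lstsq(X, F, rcond=None)

# predict force for a new fingernail image
x_new = rng.uniform(0.0, 1.0, size=n_pixels)
print("estimated force (N):", x_new @ W_hat)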
1020001000370124,NRI-Small: Perceptually Inspired Dynamics for Robot Arm Motion,"In order for robots to collaborate efficiently and effectively with humans, the human perception of their movement must be considered in motion creation. Because a human collaborator will interpret the movements of a robot (even subconsciously), robot motion synthesis algorithms that do not consider the human observer may create motions that are perceived incorrectly, interpreted negatively (e.g. as being angry or threatening), or at least miss out on the opportunity to use this subtle communication channel effectively. The key idea of this project is to develop an understanding of human perception of movement that can be applied to the development of robot trajectory planning and control algorithms. The team will use human subjects experiments to understand and evaluate the interpretation of movements and apply these findings in robotics and motion synthesis. The research plan interleaves empirical studies of how people interpret motions, algorithm development to create methods that generate robot motions in a controllable manner, and contextualized deployments that allow the PIs to evaluate the success of the methods. The success of the project will provide a deeper understanding of how people interpret movements, new algorithms for synthesizing robot movements, and demonstrations of the potential applications of collaborative robots.<br/><br/>Broader Impact: Perceptually inspired robot motion synthesis algorithms will enable robots to collaborate more effectively with people. It will enable more communicative robots that can serve as teachers and guides; more approachable and acceptable robots that can work in domestic situations such as elder care; more cooperative robots that can work as assistants to workers; and easier to instruct robots that can be trained by non-experts. This project will enhance the education and outreach efforts of the PIs by connecting empirical human studies to the technical challenges of robot trajectory planning.",7,University of Wisconsin-Madison
1020001000370126,NRI-Small: Robotic Treadmill Therapy for Lower Spinal Cord Injuries,"New rehabilitation therapies for patients with incomplete lower spinal cord injuries (SCI) will be developed using the Treadport, a robotic treadmill that provides a realistic walking experience in a safe and flexible virtual environment. The Treadport overcomes limitations of current rehabilitation treadmills, which are too dissimilar from everyday walking and therefore limit a patient's recovery. We will seek to improve a patient's walking speed and effort, resistance to falling by strengthening and training a patient to unexpected perturbations, and arm swing coordination which is critical for normal walking. The intellectual merit is the body-weight assisted robotic treadmill training, and arm swing assistance using a light-weight exoskeleton. The robotic treadmill will provide monitored assistance and virtual reality training scenarios not currently possible.<br/><br/>The broader impact is the development of new technology and scientific understanding that will improve the lives of SCI patients so that they have the most mobility possible given their injuries. The proposed research combines the disciplines of robotics, biomechanics, and physical therapy, and requires interdisciplinary training. The socially positive nature of this project is expected to be especially attractive to underrepresented populations in engineering and computer science, particularly women and people with disabilities.<br/><br/>This proposal addresses the co-worker theme of the National Robotics Initiative, particularly rehabilitation, orthotics and prosthetics.",7,University of Utah
1020001000370127,NRI-Small: Robot Assistants for Promoting Crawling and Walking in Children at Risk of Cerebral Palsy,"Effective robotic assistance of infants with or at risk of developing Cerebral Palsy (CP) has the potential to reduce the significant functional limitations as well as the potential deficits in cognitive development. This project focuses on the development and testing of a sequence of robotic assistants that promote early crawling, creeping, and walking, along with a model of infant-robot interaction that encourages the continued practice of movement patterns that will ultimately lead to unassisted locomotion. Typically developing infants initially learn to crawl through the generation of spontaneous limb and trunk movements. Early in the process, these spontaneous movements transport the infant across the floor. The rewarding locomotory experience drives the infant to refine the movements to intentional and exploratory skills. Ultimately, the infant intentionally engages these skills to solve larger problems, such as obtaining an interesting toy or exploring the environment. Infants with conditions such as CP lack the muscle strength, postural control, and motor coordination necessary for these early exploratory limb and trunk movements to result in locomotion. Without this positive feedback, the development of the neural pathways for productive limb use is diminished, which results in delayed or lack of development of crawling and walking. These limitations in mobility negatively affect other domains of development such as perception and cognition, with effects being visible even into adulthood.<br/><br/>The robotic assistants to be developed in this project will aid the infant in developing locomotory skills by selectively supporting a portion of his/her weight and providing artificial, rewarding locomotory experiences. The PI's approach to infant-robot interaction is to first instrument the infant with a set of sensors, allowing for reconstruction of the trunk and limb positions in real time. A semi-supervised clustering process will then identify a menu of canonical spatio-temporal limb and trunk movement patterns given observations of behavior that is exhibited by children who are either typically developing or at risk of developing CP. The robot will respond to the recognition of a canonical movement by assisting in the corresponding postural support and transport of the child. The PI's hypothesis is that this positive feedback will encourage the continued practice of the canonical movements, as well as their use in solving larger problems. The infant-robot interaction model will selectively reward specific canonical movements as different levels of capabilities are exhibited. As the child becomes proficient at using a simple movement to trigger robotic assistance, the robot will reduce (and ultimately eliminate) its response to that particular canonical movement. Other canonical movements that encode related, but more complex and/or coordinated limb movements, will continue to be available. As the limb movements are mastered the vertical support will be reduced to encourage the infant to bear more of his/her own weight. The hypothesis is that this early intervention approach will help to guide the child along a progressive developmental trajectory that will end with locomotory skills and muscle strength that require little or no assistance. EEG-based neuroimaging will be used to monitor the progression of the infant's development. 
The hypothesis is that the degree of proficiency of certain skills will be identifiable using the EEG index related to motor output. This information will be used to guide the semi-supervised clustering process, as well as the decision process for selectively rewarding certain canonical movements.<br/><br/>Broader Impacts: Equipping children with CP at an early age with locomotory skills will not only bring them more in line with typically developing children, but will also reduce their reliance on long-term care while increasing their success in self-help, in education, and in the workplace. The techniques will be applicable to a range of other childhood disorders (including Down Syndrome), to retraining patients following stroke, and to the creation of tunable gestural interfaces for intelligent prostheses.",7,University of Oklahoma Norman Campus
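As a hedged aside, the sketch below uses plain k-means on synthetic movement feature vectors as a stand-in for the semi-supervised clustering step that would yield a menu of canonical movement patterns; it is not the project's method.

# Cluster synthetic limb-movement feature vectors into "canonical" patterns.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
# three hypothetical canonical movement patterns, each observed with noise
centers = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
features = np.vstack([c + 0.05 * rng.normal(size=(30, 3)) for c in centers])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
print("canonical pattern prototypes:")
print(km.cluster_centers_.round(2))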
1020001000370130,NRI-Small: Improved safety and reliability of robotic systems by faults/anomalies detection from uninterpreted signals of computation graphs,"One of the main challenges to designing robots that can operate around humans is to create systems that can guarantee safety and effectiveness, while being robust to the nuisances of unstructured environments, from hardware faults to software issues, erroneous calibration, and less predictable anomalies, such as tampering and sabotage. However, the fact that the streams of observations and commands possess coherence properties suggests that many of these disturbances could be detected and automatically mitigated with general methods that imply very low design efforts. Currently, robotic systems are developed as a set of components realizing a directed ""computation graph"". This project focuses on theoretical methods, applicable designs, and reference implementation of a faults/anomalies detection mechanism for low-level robotic sensorimotor signals. The system, without any prior information about the robot configuration, should learn a model of the robot and the environment by passive observations of the signals exposed in the computation graph, and, based on this model, instantiate faults/anomalies detection components in an augmented computation graph.<br/><br/>The project engages undergraduate and graduate students in advanced robotics design and development. It is expected the research results will have a significant impact on future robotic systems and machine learning.",7,California Institute of Technology
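For illustration, a minimal Python sketch of the general idea of learning nominal signal statistics by passive observation and then flagging deviations; this generic z-score detector is only a stand-in, not the project's approach.

# Flag anomalies in an uninterpreted sensor stream when a sample deviates
# strongly from statistics learned during a nominal observation window.
def learn_model(observations):
    """Learn mean and standard deviation from passively observed nominal data."""
    n = len(observations)
    mean = sum(observations) / n
    var = sum((x - mean) ** 2 for x in observations) / n
    return mean, max(var ** 0.5, 1e-9)

def detect_anomalies(stream, mean, std, threshold=3.0):
    """Return indices of samples more than `threshold` std devs from the mean."""
    return [i for i, x in enumerate(stream) if abs(x - mean) / std > threshold]

nominal = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0, 1.02, 0.98]   # invented nominal signal
mean, std = learn_model(nominal)
print(detect_anomalies([1.0, 1.1, 5.0, 0.9, -3.0], mean, std))   # -> [2, 4]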
1020001000370375,SoCS: Collaborative Research: A Human Computational Approach for Improving Data Quality in Citizen Science Projects,"A unique interdisciplinary team of computer scientists, information scientists, ornithologists, project managers, and programmers will develop a novel network between machine learning methods and human observational capacity to explore the synergies between mechanical computation and human computation. This is called a Human/Computer Learning Network, and while the focus is to improve data quality in broad-scale citizen-science projects, the network has the potential for wide applicability in a variety of complex problem domains. The core of this network is an active learning feedback loop between machines and humans that dramatically improves the quality of both, and thereby continually improves the effectiveness of the network as a whole. The Human/Computer Learning Network will leverage the contributions of broad recruitment of human observers and process their contributed data with artificial intelligence algorithms leading to a total computational power far exceeding the sum of their individual parts. This work will use the highly successful eBird citizen-science project as a testbed to develop the Human/Computer Learning Network. eBird engages a global network of volunteers who submit tens of millions of bird observations annually to a central database.<br/>This research addresses three fundamental data quality challenges in citizen-science. These are: 1) reducing errors in identification or classification of objects; 2) identifying and quantifying the differences between individual observers; 3) reducing the spatial bias prevalent in many citizen-science projects. To address these challenges, the project will build on advances in artificial intelligence that now provide the opportunity to study systems through the generation of models that can account for enormous complexity. Preliminary work on observer classification will be extended by developing new multi-label machine learning classification algorithms that provide better ecological interpretations and more accurate predictions. In addition, the research will develop new active learning algorithms by constructing sampling paths that will optimize volunteer survey efforts to maximize overall spatial coverage, and incentivize participation via crowdsourcing techniques. Finally, it will study how participants can improve the quality of their observations based on the feedback and information provided by the artificial intelligence. <br/><br/>Broad-scale citizen-science projects can recruit extensive networks of volunteers, who act as intelligent and trainable sensors in the environment to gather observations. Artificial intelligence processes can dramatically improve the quality of the observational data that volunteers can provide by filtering inputs based on observers' expertise, a judgment that is based on aggregated historical data. By guiding the observers with immediate feedback on observation accuracy and customization of observation worksheets, the artificial intelligence processes contribute to advancing expertise of the observers, while simultaneously improving the quality of the training data on which the artificial intelligence processes make their decisions. 
The results of the project will have significant benefit for all citizen science and broader impact in an emerging world of ubiquitous computing in which human-machine partnerships will become increasingly common.",2,Cornell University
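As a hedged illustration of the active-learning feedback loop idea (with synthetic features, labels, and a simulated human reviewer; this is not eBird's pipeline), a minimal Python sketch of uncertainty sampling:

# Iteratively ask a simulated human to label the record the current model is
# least sure about, then retrain.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                      # features of submitted records (synthetic)
y_true = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int) # hidden "accept/flag" label (synthetic)

labeled = list(range(20))                          # a small expert-labeled seed set
unlabeled = [i for i in range(len(X)) if i not in labeled]

for _ in range(20):                                # 20 rounds of simulated human review
    model = LogisticRegression().fit(X[labeled], y_true[labeled])
    proba = model.predict_proba(X[unlabeled])[:, 1]
    most_uncertain = unlabeled[int(np.argmin(np.abs(proba - 0.5)))]
    labeled.append(most_uncertain)                 # the "human" supplies this label
    unlabeled.remove(most_uncertain)

print("accuracy on all records:", model.score(X, y_true))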
1020001000357341,III-CXT-Small: Collaborative Research: Automatic Geomorphic Mapping and Analysis of Land Surfaces Using Pattern Recognition,"Description<br/><br/>Advances in remote sensing techniques have made available large datasets of topographic measurements pertaining to terrestrial and planetary land surfaces. However, the scientific utilization of these datasets is hampered by a lack of tools for effective automated analysis. This project seeks to develop a system for fast, objective and transparent conversion of topographic data into knowledge about land surfaces. The project has two complementary goals: 1) to develop a tool that autonomously produces geomorphic maps mimicking traditional, manually derived maps in their appearance and content, and 2) to develop a tool that classifies entire topographic scenes into characteristic landscape categories. The mapping tool is based on the object-oriented supervised classification principle. A number of novel solutions, including semi-supervised learning, meta-learning, and a wrapping technique coupling classification and segmentation, are proposed to address challenges posed by the specificity of topographic data. The scene classification tool is based on information-theoretic metrics and incorporates novel solutions to problems posed by the raster character of topographic datasets.<br/><br/>Intellectual Merit<br/><br/>The project employs a novel fusion of machine learning and computer vision techniques to open new possibilities. In the process of constructing the mapping and classifying tools, novel machine learning methodologies will be developed and tested. The products of this research will enable a qualitatively new type of analysis of land surface topography: the large scale statistical comparison of spatial distribution of landforms.<br/><br/>Broad Impact<br/><br/>Successful mapping and classifying tools will have an impact beyond the analysis of natural landscapes; they can also be applied to the study of surface metrology (the numerical characterization of industrial surfaces). The nature of this project will attract interest and collaboration with specialists from diverse disciplines, such as computer science, remote sensing, geomorphology and hydrology. Such links will broaden the base of expertise for each discipline, as well as enrich participants from contributing domains.",5,University of Cincinnati Main Campus
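Purely as an illustration of attribute-based landform labeling (the thresholds and classes are invented and far simpler than the proposed object-oriented supervised classifier), a minimal Python sketch:

# Label terrain cells as coarse landform types from slope and curvature
# attributes derived from topographic data.
def classify_cell(slope_deg, curvature):
    """Very coarse landform labeling from two terrain attributes (assumed thresholds)."""
    if slope_deg < 2:
        return "plain"
    if slope_deg > 25:
        return "scarp"
    return "ridge" if curvature > 0 else "valley"

cells = [(1.0, 0.0), (10.0, 0.3), (12.0, -0.2), (35.0, 0.1)]   # (slope, curvature) samples
print([classify_cell(s, c) for s, c in cells])
# -> ['plain', 'ridge', 'valley', 'scarp']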
1020001000366488,EAGER: Centralized Control of Large-Scale Distributed Sensor/Actuator Networks: Self-organizing Amorphous Facades,"This project, investigating active building facades that proactively contribute to energy conservation by changing their opacity and air permeability as a function of environmental and user parameters, promises to contribute strongly to both the cyber and physical sciences. Often energy is wasted when parts of a building are heated or cooled, but are not actually used, or when they are actively cooled if simply opening a window would suffice. The proposed ""Self-Organizing Amorphous Facades"" (SOAF) consist of a large number of identical cells that can each change their opacity and air permeability, sense light, temperature, and occupancy, and communicate with each other in a distributed collective. For complex cyber physical systems, this promises to provide a novel design methodology that is potentially applicable to a large class of systems and, therefore, will result in foundational knowledge of use to the community at large. This high-risk, high-reward project integrates ideas from computer science and engineering, with a little human physiology and environmental science thrown in, to develop new theoretical foundations for the design, validation, and improvement of coordination strategies for multi-agent robotic systems.<br/><br/>The project's intellectual merit lies in novel algorithms that allow one to take advantage of distributed computation to drastically reduce the dimensionality of the data coming from the system, and novel algorithms that turn low-dimensional control data to the system into high-dimensional control signals. In particular, this research focuses on distributed algorithms that can identify regions that share similar spatio-temporal data, distributed algorithms that recognize patterns and gestures in spatio-temporal data sets, and distributed algorithms that automatically derive distributed policies for global control signals on temperature and light.<br/><br/>Broader Impacts: The direct impact of this project will be huge potential reduction in the energy footprint of modern buildings by active lighting and ventilation control. A related impact is the introduction of novel ways of using space using truly reconfigurable walls. Due to its interdisciplinary nature spanning computer science and civil engineering together with its positive environment implications, this project is likely to be attractive to students with a broad range of backgrounds and interests. It will lead to educational modules that let students explore energy, heat transfer and solar gains in a building using sensors, wireless technologies, and algorithms, and introduce students to the challenges of complex cyber-physical systems. The PI proposes outreach to women and minorities and suggests a novel mechanism of comic distribution via HowToons.com that will make technical results and environmental impact of CPS accessible to a wide audience.",19,University of Colorado at Boulder
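As a hedged, centralized stand-in for the distributed region-identification idea (a hypothetical 1-D strip of facade cells and an invented tolerance), a minimal Python sketch:

# Group adjacent cells whose light readings are within a tolerance.
def group_cells(readings, tol=0.1):
    """Return lists of cell indices forming contiguous similar-reading regions."""
    regions, current = [], [0]
    for i in range(1, len(readings)):
        if abs(readings[i] - readings[i - 1]) <= tol:
            current.append(i)
        else:
            regions.append(current)
            current = [i]
    regions.append(current)
    return regions

print(group_cells([0.30, 0.32, 0.35, 0.80, 0.82, 0.10, 0.12]))
# -> [[0, 1, 2], [3, 4], [5, 6]]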
1020001000366501,EAGER: Presence and Navigation in Virtual Reality Rehabilitation Games for Mobility Impaired Persons,"The term presence refers to a user's level of involvement in, or feeling of actually being part of, an immersive virtual environment (VE), or virtual reality (VR) as it is commonly called. Although researchers have been empirically studying presence for over 15 years, they have typically only focused on persons without disabilities. So whether the findings from these studies hold true for persons with mobility impairments is unknown. Based in part on his personal experiences, the PI hypothesizes that in fact many of these prior results may not be relevant to persons with mobility impairments deriving, for example, from stroke, Multiple Sclerosis, or Parkinson's disease. Many of these individuals have sensory deficits (e.g., numbness in the legs and feet), and use assistive devices (e.g., canes, walkers, or wheelchairs), which impact the way they navigate through a virtual space. This, in turn, could affect their experience of presence.<br/><br/>VR games are intended to enable users to perform rehabilitation exercises (e.g., to practice walking in good form) as part of an immersive game. They aim to engage the user's senses with graphics, audio, and 3D user interfaces, and when properly designed have been shown to enhance motivation, which is a key factor in successful rehabilitation. However, these games are not yet in widespread use for physical rehabilitation, most likely due to the many unanswered basic questions about how persons with mobility impairments navigate within a VE and how this affects their experience of presence. In this exploratory research the PI seeks to gain a better understanding of such issues, as well as their potential impact upon the user's motivation for rehabilitation. To these ends the PI will conduct a series of empirical studies in collaboration with the Neurology Institute of San Antonio (NISA). As preliminary work, he is currently studying how alternative navigation methods such as real walking, virtual walking, and flying impact presence for people who walk with canes.<br/><br/>Although the preliminary study is still underway, initial results suggest that people who walk with canes experience lower presence than persons without mobility impairments. The PI plans to focus next on a number of fundamental aspects of VR that may affect navigation and presence, especially avatars (virtual representation of the body as well as of assistive devices such as a virtual cane) and field of view (the typical human field of view in the real world is about 120 degrees but it is much lower in a typical VE, which may complicate navigation for some mobility impaired persons). Through these studies, the PI will develop a new presence questionnaire that is tailored to mobility impaired persons and which can be integrated into existing presence questionnaires. The outcomes of this research will be potentially transformative, in that the findings will challenge and potentially disrupt accepted theories and perspectives of presence in the fields of VR and rehabilitation games.<br/><br/>Broader Impacts: This exploratory research will lay the foundations for a better understanding of presence in VR for the mobility impaired that may enable more effective immersive experiences for this underrepresented population, thereby resulting in higher motivation and more effective VR games for rehabilitation. 
This, in turn, could ultimately improve rehabilitation adherence, thereby leading to an improved quality of life for mobility impaired persons. Moreover, The University of Texas at San Antonio (UTSA) is a minority serving institution. The PI has initiated a UTSA Game Development Club, which he plans to expand as a gateway for minority student involvement in VR games research.",13,University of Texas at San Antonio
1020001000366545,RI: SMALL: Statistical Linguistic Typology,"This project considers the unification of two views of language: that from natural language processing and that from linguistic typology. Our view is that typological information is both useful for solving real-world natural language processing tasks and automatically derivable from language data. This research first explores how to use typological knowledge to improve performance on problems such as dependency parsing and machine translation for low-density languages. Intuitively, our statistical models waste time exploring a hypothesis space that is too big: the space of realistic grammars is much smaller than the space of all grammars. The second part of this research considers the automatic acquisition and bootstrapping of typological knowledge from raw text. The outcomes of this research are: (a) improved statistical models for hard natural language processing problems; and (b) a larger library of typological universals that have been derived automatically from data. Our outcomes are empirically evaluated on the raw language processing tasks and in terms of the quality of the universal implications mined from data, by comparing them with known repositories of universals. <br/><br/>Our results will impact the fields of natural language processing and linguistics. From the research side, this research will find applications in a wider variety of problems than the ones we intend to study; in particular, the use of linguistic universals in natural language processing technology will fundamentally change the way multilinguality is addressed in this field. From a linguistics perspective, the goal of this project is to shed new light on linguistic universals. This should impact not only the area of typology, but also the study and preservation of endangered languages. By automatically identifying typological features and implications from data, the process of documenting endangered languages could be made more efficient, leading to a smaller loss of knowledge of these languages.",0,University of Maryland College Park
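As an illustrative aside (a toy feature table, not WALS or the project's method), the sketch below mines candidate implicational universals of the form "feature A implies feature B" from binary typological features in Python, keeping only implications that hold with few exceptions.

# Mine near-exceptionless implications from a hypothetical language/feature table.
LANGS = {
    "lang1": {"OV": 1, "postpositions": 1, "prefixing": 0},
    "lang2": {"OV": 1, "postpositions": 1, "prefixing": 1},
    "lang3": {"OV": 0, "postpositions": 0, "prefixing": 1},
    "lang4": {"OV": 0, "postpositions": 0, "prefixing": 0},
    "lang5": {"OV": 1, "postpositions": 0, "prefixing": 0},
}

def mine_implications(langs, min_support=2, max_exceptions=1):
    feats = sorted(next(iter(langs.values())))
    found = []
    for a in feats:
        for b in feats:
            if a == b:
                continue
            support = sum(1 for v in langs.values() if v[a] and v[b])
            exceptions = sum(1 for v in langs.values() if v[a] and not v[b])
            if support >= min_support and exceptions <= max_exceptions:
                found.append((a, b, support, exceptions))
    return found

for a, b, s, e in mine_implications(LANGS):
    print(f"{a} -> {b}  (support={s}, exceptions={e})")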
1020001000366579,EAGER: An Exploration in Enabling Community-Driven Collaboration,"Contemporary information technology is ever more central to science and society in the midst of the deluge of complex data. The impact on bioscience is notable, where the pace of production and the data complexity mean that a large amount of data is often not adequately analyzed by the data producers, yet researchers expect rapid dissemination of such types of data. To ensure effective impact, a solution promising to be transformational is to open ""big data"" analysis to the broader community. An avenue is provided by modern IT and the explosive growth and democratizing impact of the Internet, which, following the digitization of information and communication, has changed the pace of information exchange and opens up new channels for disseminating data and for engaging disparate disciplines in extended, productive collaborations. The result of this will be a platform with a customized pre-built interface that will significantly reduce the downside of the form-based data input approach. The input interface will be small, easy to use and readily accepted by users but still relevant to what a user might want to input. The interface will provide strong search capability over the controlled vocabulary and provide users with this information through ""input hints"", dropdown lists or auto-completion, according to what is most efficient for the specific extension, and provides an effective, readily followed and precise process. Sustaining the free text input section will provide users with maximum freedom of data input. By enabling community collaboration via Web access and implementing a database resource linked with the knowledge collection interface together with free text entry format, this system will provide a venue for researchers among many communities, including those located at non-research intensive universities, community colleges and minority-serving institutions, in this Nation and worldwide, to contribute their insight to experimental research observations that currently require expensive specialized equipment only available in a few centers around the world.",5,University of California-San Diego
1020001000366605,Where the Ocean Meets the Cloud: Ad Hoc Longitudinal Analysis of Massive Mesh Data,"Science is becoming a data management problem. Advancements in sensing and computational modeling have dramatically increased data acquisition rates, establishing queries -- ""in ferro"" experiments -- as an essential method of scientific discovery alongside in situ, in vitro, and in silico experiments. Unfortunately, the infrastructure to design and conduct in ferro experiments over massive datasets has not kept pace with our collective ability to create these datasets. Computational modelers, who have long enjoyed the benefits of a research focus on creating larger and faster CPU farms, now face terabytes of simulation results in which deep insights into the health of the planet remain locked. The key is seamlessness: to interactively analyze these data, quantitatively and qualitatively, without regard to boundaries manifesting from time and space domains, physical location, hardware architecture, storage medium, or file organization.<br/><br/>This project is building a new infrastructure that uses the CluE platform to allow ad hoc, longitudinal query and visualization of massive ocean simulation results at interactive speeds. This infrastructure leverages and extends two existing systems: GridFields, a library for general and efficient manipulation of simulation results; and VisTrails, a comprehensive platform for scientific workflow, collaboration, visualization, and provenance. By cloud-enabling these systems, the proposed infrastructure provides: (1) seamless access to the 10 year history of simulation results at interactive speeds; (2) an architecture and execution strategy that exploit both remote cloud and local desktop resources; and (3) a provenance capture and manipulation platform that enables repeatability, code reuse, forensics, and collaboration.",5,Polytechnic University of New York
1020001000366619,Learning Reading Strategies for Science Texts in a Gaming Environment: iSTART vs iTG,"It is well known that many students find science text challenging to comprehend. Students' reading ability is partly to blame. Reading problems become most apparent when the student is faced with a challenging text for which they have knowledge deficits. Science text, in particular, has many technical terms that are difficult to ground in everyday experience. Thus, there is a need for reading comprehension interventions to improve science comprehension. This project compares the effectiveness of two types of tutoring environments in improving high-school students' ability to understand challenging science text. Both environments contain the same pedagogical content, but present it differently: as a lesson or as a game. The first environment, developed and tested over the past 5 years, is an automated reading strategies tutor called iSTART (Interactive Strategy Training for Active Reading and Thinking) that uses animated pedagogical agents to deliver interactive instruction on self-explanation and reading strategies (comprehension monitoring, paraphrasing, generating inferences). Instruction occurs in three stages with each stage requiring increased interaction on the part of the learner. Results across a wide range of studies indicate that iSTART is highly effective in improving students' ability to understand challenging science text. While effective, iSTART can be somewhat unappealing to an average high-school student in extended practice situations. While students need extended practice to master the strategies, iSTART becomes monotonous over time. To increase students' engagement, an alternative version of iSTART practice will be developed that allows students to practice iSTART strategies in a game environment. iSTART-The Game (iTG) will present the same reading strategy practice to students, but will incorporate game-based principles to enhance engagement. This project examines whether a gaming environment for learning strategies for science text comprehension more effectively sustains students' attention and engagement during training, and thereby results in improved acquisition and mastery of these strategies. In the first year of funding, an automated reading strategy tutoring system that is framed in a gaming environment will be developed. In Year 2, pilot studies will be conducted to refine the system. In Year 3, practice using the strategies will be compared in three conditions: iTG, iSTART, and a control condition. Students will practice the strategies over a period of five additional sessions after the initial training. This experiment will provide information on the potential value of iTG over time and specifically, whether iTG engages students over repeated practice sessions. The potential interactive effects of individual differences such as prior science knowledge, reading skill, and motivational levels will also be examined.<br/><br/>This research will contribute to better understanding of engagement as a factor in learning gains. It is predicted by the investigators that many students who do not find the standard tutoring environment sufficiently engaging will significantly benefit from the alternative approach. Most importantly, this instructional intervention should be especially valuable for those students most at risk due to having lower ability and interest in science. 
This research will accomplish the goal of creating a test-bed learning environment intended to improve reader engagement and advance our understanding of the potential instructional gains from such environments. The project will contribute to our understanding of the relationship between game features and engagement. It will provide students with tools that help them more effectively meet the challenges of learning from difficult science texts. In addition, this research will contribute to our understanding of the roles that cognition and emotion play in fostering learning, the specific processes involved with learning from science texts, and the complex interplay of factors such as reading strategies, knowledge, reading skill, interest, and motivation.",9,Arizona State University
1020001000363410,2011 RESNA Student Design Competition,"This is funding to support the Student Design Competition (SDC), which will be part of the 2011 annual RESNA conference to be held June 5-8 in Toronto, Canada. Today, between 40 and 56 million people in the United States report some type of disability, and this number will likely grow in the coming years as the baby boom generation enters late life. RESNA (formerly the Rehabilitation Engineering Society of North America) is the one organization with an international focus that is devoted solely to technology for individuals with disabilities. RESNA's membership consists of individuals and institutions covering a range of disciplines (including researchers, clinicians, suppliers, manufacturers, consumers and educators who work in non-profit and for-profit settings), all of whom are dedicated to promoting the exchange of ideas and information for the advancement of assistive technology. In addition to its annual conference, RESNA conducts a credentialing program for assistive technology practitioners, rehabilitation engineering technologists and suppliers, as well as sponsored projects. RESNA's Technical Standards Board is the U.S. Technical Advisory Group to ANSI, the official U.S. representative to the International Organization for Standardization (ISO), for the development of ISO standards pertaining to assistive technology and other products for persons with disabilities.<br/><br/>The RESNA Student Design Competition fosters innovation and creativity with the ultimate goal of producing technology that can contribute to the independence of individuals with disabilities. The first SDC was held in 1980 as part of the inaugural RESNA conference. Since then, over 200 designs have been identified as winning entries, chosen from more than 600 submissions by students from over 115 different universities. SDC entries are required to represent the work of students ONLY, including the design documentation; both undergraduates and graduates are eligible to take part. Many past participants in the event are now leaders in service, research, and educational areas related to technology for people with disabilities. Some past student designs have been patented and are now available commercially. NSF has been a supporter since 2005. This year's funding will enable the SDC to be further expanded and enhanced, so as to include more entries and support for more design teams, especially from minorities, women, and individuals with disabilities. A call for participation has been posted on the conference website, and also distributed electronically to a large number of colleges and universities with engineering and design schools. A team of 5 judges will pre-select entries from up to 10 teams, from which two members each will be invited to attend the conference supported with travel and hotel funds as well as complimentary registration. During a half-day session in which the teams will make presentations before the judges and public audience at the conference, 5 teams will ultimately be selected as the final winners. Judges will have an opportunity to ask questions and make suggestions and recommendations to the design teams. A platform session will be held in which the 5 finalists will make presentations to the general conference attendees. All SDC teams invited to the conference will have an opportunity to present their projects in a poster session during the general conference time. 
More information is available online at http://www.resna.org/conference.<br/><br/>Broader Impacts: The annual RESNA Conference and the Student Design Competition combine to create a forum for interaction between working and experienced rehabilitation engineers and students who are about to enter the field. Unique in its primary focus on undergraduates, the event provides participants with experience and skills that help them succeed in their engineering and design careers. It further encourages and mentors students in various disciplines to become involved in the assistive technology and rehabilitation engineering fields, and also provides an opportunity for the students to network with their peers and rehabilitation engineers, many of whom participated in previous SDC competitions. In an effort to increase the diversity of submissions to the SDC in future years, this year's participants will build web pages to illustrate their design and design process. Not only will young scientists benefit from this medium of dissemination but professors and those who teach will also be able to share this information with their students as examples of projects that others have done. The content of all submissions, not just the finalists, will be captured and accessed through the RESNA website. The information will be searchable through the National Public Internet Site on assistivetech.net and the ATWiki. Use of an existing national search engine will allow professors to use design information in their course work, while allowing service providers and people with disabilities to search for unique solutions to barriers they may face.",10,Rehabilitation Engineering and Assistive Tech Society of NA
1020001000363542,III: Small: Women in Bioinformatics Initiative at ACM BCB 2011,"The Association for Computing Machinery (ACM) International Conference on Bioinformatics and Computational Biology (ACM-BCB) is aimed at providing a common environment to bridge important interdisciplinary research areas in computer science, mathematics, statistics, biology, bioinformatics, and biomedicine and an interactive forum which brings together superb researchers, practitioners, and students from around the world to promote scientific understanding and findings in computational bioscience. The meeting provides an opportunity for researchers from very different disciplines to form new collaborations and research programs. The proposed ""Women in Bioinformatics"" initiative at ACM-BCB 2011 promotes broad participation from underrepresented groups in this important event and helps develop the 21st century workforce in the computer science, biological science, biomedical science and other interdisciplinary areas.",10,University of North Carolina at Chapel Hill
1020001000363685,EAGER: Interactive Visualization and Modeling of Latent Communities,"In many areas of professional and social life, people form communities based on some obvious feature, so that members tend to be similar in some aspect of interest or behavior. However, as the number of groups expands, new dimensions of similarity are introduced. This project will focus on ways to understand many different communities that differ along many dimensions and to present that understanding in ways that make the community structure clear. The modern web provides a number of alternative data sources for discovering communities, with activities ranging from blog comments to tagging of resources. This data can be used to discover and model communities reliably and present the complex data in clear and efficient ways that decision makers and other stakeholders can readily understand. The visualization of the complex data will also be interactive, permitting exploration of the data. The knowledge of the existence of these communities can be used to support recommendation approaches, such as identification of mentors or development of wide scientific collaborations.",15,University of Pittsburgh
1020001000363726,"HCC: Coordinating Communication: Visual, Social & Biological Factors in Grounding for Humans and Agents","With the advent of increasing numbers of increasingly smart machines, there is a growing need to develop technologies that are not only smart, but sensitive to the people and the other machines around them, and sensitive to the context in which they are used. Such an understanding will permit the development of technologies that can coordinate their interactions with humans in a more natural, seamless and fluid fashion. To meet these goals, this research program focuses on three critical yet under-studied contexts of interaction, each of which represents a different constraint upon interpersonal communication: (1) the physical context of shared visual access, (2) the social context of rapport, and (3) the biological context of aging. While some research has been conducted on each of these contextual factors, none has addressed their interaction, nor gathered them into one broader conception of the role of context in interpersonal coordination. This research applies a theory-driven design approach that includes experimental studies, theory development, computational modeling, system implementation and evaluation. In particular, the research program proposes: a) A rigorous study of human-to-human communication using elicitation experiments to develop a more detailed understanding of interpersonal communication across a range of contexts; b) A formalization of the findings into computationally explicit forms that provide predictions of behavior and capture the observed behavioral patterns; c) Integration of the models into a dialogue manager that is implemented within a larger computational architecture; and, d) Evaluation of the implemented system by having untrained humans interact with the system in such a way as to evaluate its effectiveness and reveal gaps in the underlying models as well as in our theoretical understanding.<br/><br/>The outcome of this research will advance our theoretical understanding of the role various contextual factors play during interpersonal communication. The results will be useful to a variety of scientific communities including those that study basic human communication (e.g., psychologists, linguists and communication researchers) and those that study interactive computational systems (e.g., computer scientists, computational linguists, and interaction designers). The research will also provide practical design guidelines and a general computational model that describes how machines can make intelligent choices on the basis of these contextual factors during everyday interactions. At a practical level, the general computational model can be applied by technologists developing many different technologies, such as embodied agents, large-scale displays, ubiquitous computing, in-car navigation, and assistive technologies for the elderly and those with cognitive impairments.",11,Carnegie-Mellon University
1020001000363763,EAGER: Collecting Training Videos for Location Estimation with Mechanical Turk,"Location-based services are rapidly gaining traction in the online world as they allow highly personalized services and easier retrieval and organization of multimedia. However, such services require accurate geolocation information (geo-tags) to be associated with the multimedia data, e.g., videos. Because only a small fraction of available video data is geo-tagged, there is a growing interest in systems that automatically estimate the geolocation of a video that does not include geo-location metadata. While machine learning offers a potential approach to training automatic location estimators, it requires a standardized training corpus of geo-tagged videos. Automatic collection of videos introduces a bias toward videos that are easily processible by machines and toward geographical locations that are over-represented in current corpora. Hence there is a need for carefully curated standard data sets. <br/><br/>This EArly-concept Grants for Exploratory Research (EAGER) project explores a novel, somewhat high-risk approach to collecting such an annotated training corpus of geo-tagged videos using Mechanical Turk (http://www.mturk.com), a ""marketplace for work"" for engaging workers with the desired expertise from around the world to work on a specific task, in this case, participating in a game that involves annotating videos with geolocation metadata, e.g., GPS coordinates. The user interface for the game will allow participants to estimate the location of videos by clicking on a map. The knowledge gained from this EAGER would set the stage for more comprehensive geotagged multimedia data collection efforts. The resulting data sets and benchmarks will be made available to the research community to enable detailed and systematic comparative analysis of alternative methods (e.g., machine learning algorithms for predicting geolocation information from videos). <br/><br/>The availability of standardized geo-tagged multimedia data sets will help drive advances in machine learning techniques for geo-location prediction. The resulting advances in geo-tagging multimedia data would enable intelligent location-based services in a variety of domains, including law enforcement and personalized, location-aware media retrieval, for applications such as journalistic and criminal investigations.",1,International Computer Science Institute
1020001000363772,"RAPID: Robots Designed to Assist During Nuclear Catastrophes - Autonomously Creating 3-D Maps, Collecting Radiation/Other Data at Japan's Fukushima Nuclear Plants","The project advances the state-of-the-art of autonomous Simultaneous Localization and Mapping (SLAM) algorithms, and maximizes the ability to explore under extreme conditions with minimal time using real-time 3D methods in nuclear power plants. The integrated system consists of a hardware system with three LIDAR's, a CPU, a GPU, a battery and a wireless unit.<br/><br/>The state-of-the-art in nuclear power plant disaster mitigation is a direct consequence, as well as the development of more advanced and robust SLAM techniques that are applicable in all domains.",7,Analytical Software Inc.
1020001000363773,RAPID: Low Cost Personal Remote Sensing for Cognitive Disaster Assessment with Enhanced Human-Machine Interface,"The project studies the use of advanced Human-Machine Interface kits, such as helmet displays and Virtual Reality goggles, with low-cost autonomous Unmanned Aerial Vehicles (UAVs), assesses their impact on the ground operator's cognitive load, and explores endowing the UAV with a degree of cognitive ability so that it can suggest places in the scene at which to make observations.<br/><br/>This project impacts search and rescue, disaster mitigation, mapping and assessment, and broadens the perspectives of faculty and students through an international collaborative effort.",4,Utah State University
1020001000363776,RAPID: Mobile Augmented Reality to Improve Rapid Assessments in Disasters,"Following an earthquake or similar natural disaster, a key problem is rapid and accurate on-site damage assessment to support local first responders; however, trained experts are typically remote from the disaster and it can be time-consuming and expensive to bring them onsite. Accessing remote experts to improve the accuracy of rapid assessments is a promising method to streamline provisioning of emergency shelters and other resources. This project focuses on new methods for improved rapid assessment of earthquake-damaged building structures in Christchurch, New Zealand. The methods are based on collaboration using augmented-reality (AR) imagery, mobile phone based sensor technologies and crowdsourcing techniques for guided remote data collection. A key element of the system is intuitive remote collaboration. Our mobile AR system can be used to connect a user in the disaster zone to a remote expert via audio and shared still images and/or video, helping them to rapidly collect data on building structural integrity. A user evaluation will be performed to compare the performance of the prototype against more traditional approaches (e.g., waiting for an expert to arrive on the ground) and against assessment based on imagery recorded by an untrained and unguided user. Two hypotheses will be tested: 1) a collaborative mobile AR system can improve the quality and type of data collected for structural assessment; and 2) the time to provide data from non-experts assisted by experts to decision makers in a digestible format is dramatically reduced as compared to traditional methods.<br/><br/>The approach will enable rapid post-event damage assessment, streamline emergency provisioning of shelters by allowing people to stay in safe dwellings, and speed up emergency response and reconstruction. The resulting valuable dataset will assist development of rapid assessment forms, contribute to earthquake structural damage case studies, provide a key baseline for testing several computer science research projects on improved disaster response, and provide key data for development of life-saving tools. The international collaboration also provides engagement of underrepresented groups in this computing research.",12,Carnegie-Mellon University
1020001000363778,RAPID: Earthquake Damage Assessment from Social Media,"In the minutes and hours following the recent earthquakes in New Zealand and Japan, and the storm in Haiti, thousands of locals posted pictures to social media sites like Facebook and Twitter. These pictures, when coupled with extremely granular spatio-temporal information (e.g., timestamps and GPS-style geocodes), provide a minute-by-minute and region-by-region pictorial account of the emergency as it unfolded. The goal of this project is to assess, characterize, and model the quality of these images posted to social media in the minutes and hours post-emergency for guiding policy-based stakeholders and assets. Carefully framed images can convey a wealth of structural information to recovery experts: revealing damage levels, guiding resource allocation, and directing other policy-based assets. First, a sample of several thousand social media images from New Zealand will be assessed by domain experts, specifically structural earthquake engineers. Second, with RAPID funding, this project will link images posted during the emergency to actual damage assessments made in Christchurch to validate the quality of the images. The results of this project will have broad impacts, particularly in the development and deployment of a new rapid assessment tool for earthquake damage assessment based on social media. An additional broader impact is the ancillary development of training modules for increasing the effectiveness and image quality of future socially-generated image capture, which would greatly improve social computing for disasters. The methods and data generated by this project will be archived and made available for future studies.",11,Texas A&M Engineering Experiment Station
1020001000363780,RAPID: Minimizing the spread of false rumors in social media during a disaster,"This project focuses on understanding the spread of false information during responses to natural disasters and on the development of new techniques to prevent the spread of false information in social media. For example, after the March 11, 2011 major earthquake in Japan, social media such as Twitter played an important role in sharing information and coordinating disaster response. However, social media were also used by some people to spread false information about radiation and supplies, potentially creating widespread panic. The goals of this project are to better understand how false information is spread via Twitter after an emergency and to develop and evaluate new techniques to prevent the spread of false information. To achieve these goals, the investigators will build a visualization tool to measure the effectiveness of counteracting tweets that question the accuracy of false tweets and conduct experiments with university students in Japan and the USA in which subjects' familiarity with and likelihood of spreading different types of false and counteracting tweets are measured. <br/><br/>Intellectual Merit: The project will provide new insights into the factors that determine the spread of false information, as well as a set of recommendations for reducing this spread. The project will also contribute new methods for analyzing the spread of information in social media. <br/><br/>Broader Impacts: The insights and tools provided by the project will benefit future disaster response efforts by allowing emergency personnel to detect when false information is being spread and intervene to counteract the effects of false information before negative societal effects such as panic occur.",11,Stevens Institute of Technology
1020001000373165,CAREER: Similarity-based Representation of Large-scale Image Collections,"This proposal is to develop a general representation framework that uses similarity to capture relationships in large-scale image collections. The representation is not restricted to any specific distance function, feature, or learning model. It includes new methods to combine multiple kernels based on different cues, learn low-rank kernels, and improve indexing efficiency. In addition, new methods for nearest neighbor search and semi-supervised learning are proposed. It has relevance to machine learning and computer vision research agendas. Two major research problems addressed are: (1) defining and computing similarities between images in vast, expanding repositories, and representing those similarities in an efficient manner so the right pairs can be retrieved on demand; and (2) developing a system that can learn and predict similarities with sparse supervisory information and constantly evolving data. The approach is notable in its embrace of the scale of web archives and its use of verbal and visual means of analysis.",2,University of Illinois at Urbana-Champaign
1020001000373361,EAGER: Building Intelligent Mobile Manipulators for Assistive Care,"This project initiates a collaboration between two labs at Carnegie Mellon University (the Manipulation Lab and the Personal Robotics Lab), and the Intelligent Autonomous Systems Laboratory at the Technische Universitat Munchen. (The German participant costs are supported by their own funds.) By sharing ideas and technology, and by exchange of visits between PIs and students, the project is accelerating progress in autonomous mobile manipulation in human environments. The labs have complementary strengths and the project is expected to benefit both sides.<br/><br/>This project aims to expand the applications of robotics to include autonomous manipulation in human environments. As this technology is refined and ultimately commercialized, it will expand the impact of robots, as well as transform the robotics industry. Likely applications include eldercare, assisting the disabled, and rehabilitation, which are of great significance both economically and in the quality of our lives.<br/><br/>The project directly supports the education of graduate students, and scientific communication and collaboration across the Atlantic. Recruiting and outreach associated with the project will continue the PIs' tradition of inclusion of underrepresented groups. When appropriate, results will be incorporated into course curricula and in the PIs' ongoing outreach efforts. Code will be distributed open-source.",7,Carnegie-Mellon University
1020001000353941,CAREER: Learning- and Incentives-Based Techniques for Aggregating Community-Generated Data,"The Internet has led to the availability of novel sources of data on the preferences, behaviors, and beliefs of massive communities of users. Both researchers and engineers are eager to aggregate and interpret this data. However, websites sometimes fail to incentivize high-quality contributions, leading to variable quality data. Furthermore, assumptions made by traditional theories of learning break down in these settings.<br/><br/>This project seeks to create foundational machine learning models and algorithms to address and explain the issues that arise when aggregating local beliefs across large communities, and to advance the state-of-the-art understanding of how to motivate high quality contributions. The research can be split into three directions:<br/><br/>1. Developing mathematical foundations and algorithms for learning from community-labeled data. This direction involves developing learning models for data from disparate (potentially self-interested or malicious) sources and using insight from these models to design efficient learning algorithms.<br/><br/>2. Understanding and designing better incentives for crowdsourcing. This direction involves modeling crowdsourcing contributions to determine which features to include in systems to encourage the highest quality contributions.<br/><br/>3. Introducing novel economically-motivated mechanisms for opinion aggregation. This involves formalizing the properties a prediction market should satisfy and making use of ideas from machine learning and optimization to derive tractable market mechanisms satisfying these properties.<br/><br/>This research will have clear impact on industry, especially for web-based crowdsourcing. The PI will pursue her long-term goal of attracting and retaining women in computer science via her involvement in workshops and mentoring programs. Results will be disseminated at http://www.cs.ucla.edu/~jenn/projects/CAREER.html.",2,University of California-Los Angeles
1020001000353942,CAREER: Building and Searching a Structured Web Database,"This project investigates techniques for extracting and searching Web-embedded structured datasets. For example, a manufacturer's site may contain technical product data, and a governmental site may contain economic statistics. Unfortunately, such data can be hard to isolate from surrounding text, and difficult to find using existing search engines that focus exclusively on documents. The approach for the extraction step is to use current incomplete datasets to induce a large ""portfolio"" of possible extractors, apply all of them to crawled Web content, then test which are most successful. The approach for the search step is to examine user query logs to find common patterns that describe the relationship between topic words and words that describe the dataset's structure; e.g., ""endangered species near the Mississippi River"" is a prototype for a many-to-many geographic relationship. The central goal of this work is to eventually construct a working search engine for the structured-data component of the Web.<br/><br/>The success of this project is likely to increase access to structured datasets for a very broad population of users. The project will also yield a large amount of novel extracted data relevant for scientific research, plus useful tools and query logs. To accompany the research program, this project involves an educational plan that includes revised undergraduate course material, development of online educational material surrounding the datasets and tools, and a course on Web topics taught to a local rural high school. All project results will be distributed at the project's Web site (http://www.eecs.umich.edu/~michjc/structuredweb/index.html).",8,University of Michigan Ann Arbor
1020001000353950,CAREER: An Integrated Framework for Multimodal Music Search and Discovery,"A revolution in music production and distribution has made millions of songs instantly available on the Internet to virtually anyone. However, a listener looking for ""dark electronica with cello"" or ""music like U2's"", without knowing a relevant artist or song name, or a musicologist wanting to search through large amounts of unknown ethnic music, would face serious challenges. Novel music search and discovery technologies are required to help users find the desired content.<br/><br/>The non-text-based, multimodal character of Internet-wide information about music (audio clips, lyrics, web documents, images, band networks, etc.) poses a new and difficult challenge to existing database technology that depends on unimodal, text-based data-structures. This project addresses two fundamental research questions at the core of this challenge: (1) The automated annotation of (non-text-based) audio content with descriptive keywords; and (2) the automated integration of the heterogeneous content of multimodal databases, to improve music search and discovery on the Internet or in a personal database. The resulting architecture leverages the automation and scalability of machine learning with the effectiveness of human computation, engaging music professionals or enthusiasts around the world.<br/><br/>The research addresses questions at the core of multimedia information retrieval in general, enabling the design of a new generation of expressive and flexible retrieval systems for multimodal databases, with applications to music discovery, video retrieval, indexing multimedia content on the home PC, etc.<br/><br/>The results of this project, including a software library and annotated music data sets, will be incorporated in ongoing education and outreach activities and disseminated via the project website (http://cosmal.ucsd.edu/~gert/CAREER.html) to enhance research and education in music information retrieval.",8,University of California-San Diego
1020001000353958,CAREER: Document Layout and Formatting Helper for Blind Authors,"Most work on computer interaction for blind people focuses on these users as consumers, rather than creators, of information. Work on document creation specifically intended for blind people mainly concentrates on supporting general tasks associated with writing documents rather than on document formatting and layout, even though there is evidence that blind authors' documents are treated dismissively when they do not live up to ""expected"" standards of document presentation. Many blind authors therefore rely on sighted people to check their documents; consequently, they cannot be as productive as their sighted peers. The PI's goal in this project is to facilitate independence for blind authors in producing documents that meet the presentation standards expected by sighted readers. To these ends, the PI will develop an impact-weighted taxonomy of common document presentation errors of blind authors. She will explore blind authors' mental models and strategies for learning and coping, and how these models and strategies contribute to the success of independent document formatting and layout activities. And she will investigate the content-sharing and voting dynamics of blind authors. These findings will enable her to implement an integrated solution to document preparation for blind authors in the form of a Microsoft Word compatible formatting and layout checker, as well as a fix to the compatibility problems between Word's grammar and spell checkers and commonly used screen readers. And she will develop an accessible content-sharing environment for blind authors with an interface for sighted help from Amazon Mechanical Turk. To achieve her goals the PI will employ, among other techniques, participatory design with blind users and controlled experiments for investigating real-time non-visual presentation of grammar, spelling, formatting and layout statuses, errors and corrections (including through the use of haptic feedback and spatial sound).<br/><br/>Broader Impacts: This research will naturally involve blind persons at almost all stages. Project outcomes will reduce the necessity for blind authors to rely on sighted help, thus affording increased independence and productivity to members of this community. The work will likely prove of value to the general population as well, for whom non-visual document presentation information can be of benefit in eyes-busy or phone-based interactions. The PI will conduct numerous educational and outreach activities in conjunction with this project, including distribution of the software tools developed to centers and schools for blind and visually impaired persons, and development of a multidisciplinary and multilevel (undergraduate and graduate) course aimed at both psychology and computer engineering students on ""Designing Socio-technical Systems for People with Special Needs.""",0,University of California-Santa Cruz
1020001000353963,CAREER: Interactive Physically Based Animation and Optimal Control Using Model Reduction,"Jernej Barbic, Computer Science Department, University of Southern California<br/><br/>Many dynamical systems in nature are reasonably well-understood in terms of their underlying equations, but are very slow to simulate with detailed models. For example, the Finite Element Method (FEM) has been used extensively to simulate human tissue, automobile and airplane mechanical components, architectural structures, or characters in a computer game. Real-world systems are, however, very complex, which normally requires large computer processing power for their simulation, control and optimization. This research investigates systematic approaches to approximate complex physics with simple, yet mathematically principled models. The resulting fast simulation and control can make medical training more immersive, computer games more entertaining, and CAD/CAM faster and more reliable. More generally, this work applies to any system governed by differential equations, with broader applications in robotics, aeronautics, and defense systems. In addition to developing new publicly available coursework material, educational activities include releasing a large C++ computer graphics/animation codebase to the world under an open source license, and visits to high schools in underserved areas of Los Angeles where students are exposed to the benefits of careers in science and engineering.<br/><br/>The investigators tackle material and geometric complexity using model reduction, an approach where full equations of motion are approximated by a projection to a properly selected low-dimensional space. While model reduction of nonlinear systems has been previously employed in computer graphics and other disciplines, the existing algorithms often lack flexibility, are not adaptive in space or time, and can only provide smooth, low-dimensional output, even if the unreduced output is known to be complex. The investigators study how to overcome these limitations, using techniques from Lagrange mechanics, multi-resolution analysis and nonlinear optimization. The ultimate goal is to accelerate physics to the point where fusion of physics and design becomes possible for large, complex systems of computer graphics and engineering practice.",14,University of Southern California
1020001000353965,CAREER: Theory and Practice of Bayesian Motion Synthesis,"This project develops the theory and algorithms for the next generation of statistical motion models and their applications in Bayesian motion synthesis. Thus far, one of the most effective ways to model human movement is to construct statistical motion models from prerecorded motion data. While the promise of learning from motion data is unlimited, current statistical motion modeling techniques suffer from four major limitations. Firstly, they lack scalability and ability to model large and heterogeneous datasets. Secondly, they do not capture environmental contact information embedded in prerecorded motion data. Thirdly, they are mainly focused on modeling spatial-temporal patterns within a small temporal window rather than the global motion structures of human actions and thus face great risk of destroying global motion structures in motion generalization. Lastly and most importantly, they do not consider dynamics that cause the motion. This project investigates a new generation of statistical motion models that address these four challenges. The project also develops new Bayesian motion synthesis algorithms that leverage the proposed generative models in graphics and vision applications. In addition, the research produces new animation modeling systems for novices, new performance interfaces for full-body motion control, and new technologies for video-based motion capture. In the project, the PI makes special efforts to recruit students from under-represented groups, to integrate the research into existing and new courses, and to use a high school competition as a channel for attracting more young students to pursue careers in computer science.",20,Texas A&M Engineering Experiment Station
1020001000353968,CAREER: Large Vocabulary Gesture Recognition for Everyone: Gesture Modeling and Recognition Tools for System Builders and Users,"The PI's goal in this project is to develop new methods for automatically annotating, recognizing, and indexing large vocabularies of gestures, and to use these methods to create an integrated set of tools for sign language recognition. Current state-of-the-art methods for recognizing large vocabularies of gestures have significant limitations that impact both system design and the user experience. Many methods assume the existence of a near-perfect hand detector/tracker; that is a limiting assumption, which prevents deployment of these methods in complex real-world settings where such accuracy is unachievable. In the absence of perfect hand detectors, system design may involve a large investment in manual annotation of training videos (e.g., specifying hand locations), so as to provide sufficiently clean information to training modules. The user experience is affected by the limited accuracy and robustness of existing applications. In this research the PI will address these issues by explicitly designing recognition and indexing methods that require neither perfect hand detectors nor extensive manual annotations, thus making it substantially easier to deploy accurate and efficient gesture recognition systems in real-world settings. The PI will achieve these objectives through theoretical advances in the current state of the art in computer vision, pattern recognition, and database indexing. The unifying theme in the project is the integration of low-level tracking modules that produce imperfect output, with recognition and indexing methods that are designed to take as input this imperfect output from the tracking modules. Novel articulated tracking methods will be developed that utilize probabilistic graph models to provide fully automatic long-term tracking, while improving upon the excessive time complexity that probabilistic graph models currently incur. New methods will be designed for extracting and exploiting information from hand appearance. As these novel modeling and recognition methods will violate standard assumptions made by existing indexing methods, new indexing methods will be formulated which will improve the efficiency of search in large databases of dynamic gestures and static hand shapes within the proposed framework.<br/><br/>Broader Impacts: Project outcomes will significantly improve the ability of sign language users around the world to search databases of sign language videos and to perform tasks such as looking up the meaning of an unknown sign or retrieving occurrences of a sign of interest in videos of continuous signing. These search tools will have an impact in educational settings, facilitating both learning a sign language and accessing arbitrary information available in a sign language. To these ends, the PI will make his software freely available to the public online. He will also work with experts in American Sign Language to implement key applications using his tools, which will be made available to Deaf students. The PI will furthermore develop a publicly available package of gesture recognition source code, applications, and datasets that will help student researchers at all levels engage in gesture recognition research. As an additional outreach activity intended to attract young people to careers in science, the PI will co-organize summer camps that educate junior high and high school students in computer science.",1,University of Texas at Arlington
1020001000353980,CAREER: Scrapple: Fast Analytical Query Evaluation via Advanced Query Recycling Techniques,"The complex analytical queries characterizing decision support applications can be very expensive to compute, and the value of such applications is directly correlated to the speed at which answers can be returned to the user. Typically, once queries have been answered, database systems simply discard the results. However, a huge optimization opportunity is missed by doing this: there is tremendous latent energy in the discarded query results, if we only knew how to recycle them to help answer subsequent related queries. The goal of the project is to develop Scrapple, a principled database management system that aggressively reuses old query results to speed up the answering of new queries, resulting in potentially dramatic performance gains for a large class of decision support applications.<br/><br/>Scrapple's basic strategy is to view cached query results (and their intermediate subresults) as materialized views, and then employ advanced techniques for optimizing queries using materialized views to answer subsequent queries. To execute this strategy, the project develops: (1) a novel and comprehensive theory of differential reformulation strategies; (2) a set of unifying principles connecting incremental view maintenance and optimization of queries using materialized views; (3) a novel and comprehensive theory of data provenance for aggregate queries; and (4) practical implementation techniques for recycling cached results via cost-based search strategies. By using fully automated techniques, Scrapple will dramatically reduce the total cost of ownership of a typical data warehouse. Moreover, the techniques at the heart of our approach have wide application in areas such as data integration, data exchange, view maintenance, and data provenance. The research will also be used to develop lecture and project materials for new course modules. These educational materials, along with the Scrapple source code and publications, will be made freely available at the project Web site, http://www.cs.ucdavis.edu/~green/scrapple.",8,University of California-Davis
1020001000353982,CAREER: Active Learning for Exploratory Clustering,"Data clustering is a widely used tool for organizing data into coherent groups that correspond to the underlying structure in data. In many applications, incorporating domain knowledge into clustering can help enhance both the quality and the utility of the results of clustering. Unfortunately, users who are not data mining experts currently lack effective means of providing such input to guide clustering. Against this background, Dr. Xiaoli Fern of Oregon State University seeks to develop a novel class of algorithms that take advantage of active learning strategies to interactively elicit information from users to drive clustering. <br/><br/>An important aim of this work is the identification of types of input, e.g., in the form of must-link and cannot-link constraints, that are both informative and easy to interactively elicit from users to improve the quality and utility of the results of clustering. The study is driven by and evaluated using exploratory data analysis tasks that arise in several application domains: (1) ecosystem informatics, e.g., exploratory analysis of in-field bird recordings; (2) human-computer interaction (HCI), e.g., analysis of HCI data to understand user behavior; and (3) plant genomics, in collaboration with scientists with expertise in each of these domains. <br/><br/>Improved tools for interactive exploratory data analysis benefit a broad range of applications including most areas of science in which such analysis is beginning to play an increasingly important role in extracting knowledge from data. For example, in ecological informatics, such tools can help scientists to better understand the impact of environmental changes on bird species, which in turn can help develop better methods for managing ecosystems. Research-based education and training opportunities offered by this project help prepare a new generation of researchers and practitioners in exploratory data analysis as well as the emerging area of Ecosystem Informatics at Oregon State University. Dr. Fern's outreach efforts are aimed at helping draw female undergraduates and K-12 students from under-represented groups to careers in computer science and engineering. Further information on this project can be found at http://web.engr.oregonstate.edu/~xfern/CAREER",5,Oregon State University
1020001000304231,Collaborative Research III-COR: From a Pile of Documents to a Collection of Information: A Framework for Multi-Dimensional Text Analysis,"Many information workers are swamped with unfamiliar collections of text. One challenge is to obtain an accurate overview of a large text collection, such as the public comments collected in ""notice and comment"" rulemaking. No single tool currently provides a sufficiently diversified picture of such a corpus, and no adequate theory exists to help people explore and form a deep and nuanced understanding of such a text collection. This research seeks to develop a computational framework that allows further exploration of this problem from multiple, integrated perspectives. All the assembled perspectives will be brought together into a single overall supra-document structure that is dynamically constructed under user guidance. In this structure, hierarchical topic clusters will be cross-linked by opinion and argumentation links, using two classes of text analysis engines: one for topics and subtopics, and the other for argument structures. The research team will design, develop, build, and systematically test an overall text exploration framework, an application to support federal regulation writers called the Rule-Writers Workbench. There is a strong collaboration with Federal government officials who will provide data and participate in user testing. The three PIs have successfully collaborated on a related project under previous NSF funding. <br/><br/>Intellectual Merit: This is a sustainable collaboration between computer science and political/social science research, rooted in a challenging and important real world application and informed by years of end user research. Dynamic, user-driven subtopic definition and clustering algorithms coupled with language modeling are an innovative yet reachable set of goals. The framework to be developed will be grounded in the humanities disciplines' expertise in rhetoric, discourse structure, and subjectivity.<br/><br/>Broader Impacts: The Rule-Writers Workbench will allow federal government regulation writers to employ a suite of technical tools that perform independent analyses of public responses to proposed regulations, including near-duplicate detection and clustering, user-based topic selection from dynamically extracted keywords, opinion identification, and subtopic clustering. These capabilities will open new avenues for federal comment analysis.",0,Carnegie-Mellon University
1020001000304251,III-COR - ChunkyStore: Physical Database Design for Next-Generation Databases,"ChunkyStore: Physical Database Design for Next-Generation Databases<br/><br/>Traditional databases lay out tables on disk in a row-major format, but recent research has shown that other physical layouts (e.g., using a column-major format) can yield substantial (order-of-magnitude) performance gains in many applications. The broad goal of the ChunkyStore project is to explore this relationship between physical storage layouts and database performance. The primary mechanism for doing this is to partition database tables into ""chunks"" of varying numbers of co-located rows and columns and adaptively adjusting these chunks over time to optimize performance.<br/><br/>Specific techniques used to achieve this goal include: (1) investigation of the performance of different ""chunking"" algorithms under different workloads and applications, including scientific data management and data warehousing; (2) building an automatic storage designer that chooses the best chunking of tables based on models of historical workload patterns; (3) studying chunk-based representations and layout of non-tabular data, such as arrays of imagery (which are particularly important in scientific applications); and (4) using the storage designer, array management techniques, and various replication and distribution strategies to deploy a software system with high performance on large data sets.<br/><br/>ChunkyStore will substantially improve the performance of database systems and demonstrate that they can be used to manage a variety of scientific data. Education and dissemination efforts will be conducted via research papers, an open source release, several workshops for users, and integration with course projects in the graduate database systems class at MIT. For more information on ChunkyStore, including publications and source code, see http://db.csail.mit.edu/chunkystore/ .",8,Massachusetts Institute of Technology
1020001000304253,HRI: Perceptually Situated Human-Robot Dialog Models,"Humans naturally use dialog and gestures to discuss complex phenomena and plans, especially when they refer to physical aspects of the environment while they communicate with each other. Existing robot vision systems can sense people and the environment, but are limited in their ability to detect the detailed conversational cues people often rely upon (such as head pose, eye gaze, and body gestures), and to exploit those cues in multimodal conversational dialog. Recent advances in computer vision have made it possible to track such detailed cues. Robots can use passive measures to sense the presence of people, estimate their focus of attention and body pose, and to recognize human gestures and identify physical references. But they have had limited means of integrating such information into models of natural language; heretofore, they have used dialog models for specific domains and/or were limited to one-on-one interaction. Separately, recent advances in natural language processing have led to dialog models that can track relatively free-form conversation among multiple participants, and extract meaningful semantics about people's intentions and actions. These multi-party dialog models have been used in meeting environments and other domains. In this project, the PI and his team will fuse these two lines of research to achieve a perceptually situated, natural conversation model that robots can use to interact multimodally with people. They will develop a reasonably generic dialog model that allows a situated agent to track the dialog around it, know when it is being addressed, and take direction from a human operator regarding where it should find or place various objects, what it should look for in the environment, and which individuals it should attend to, follow, or obey. Project outcomes will extend existing dialog management techniques to a more general theory of interaction management, and will also extend current state-of-the-art vision research to be able to recognize the subtleties of nonverbal conversational cues, as well as methods for integrating those cues with ongoing dialog interpretation and interaction with the world.<br/><br/>Broader Impacts: There are clearly many positive societal impacts that will derive from this research. Ultimately, development of effective human-robot interfaces will allow greater deployment of robots to perform dangerous tasks that humans would otherwise have to perform, and will also enable greater use of robots for service tasks in domestic environments. As part of the project, the PI will conduct outreach efforts to engage secondary-school students in the hope that exposure to HRI research may increase their interest in science and engineering studies.",0,Massachusetts Institute of Technology
1020001000304261,"III-COR: Collaborative Research: User-centric, Adaptive and Collaborative Information Filtering","This is a collaborative research project (0704689: Yiming Yang, Carnegie-Mellon University; 0704628: Daqing He, University of Pittsburgh). Adaptive filtering (AF) is an open challenge in information retrieval, defined as the problem of incrementally learning about the topics of interest from user feedback (relevance judgments of the retrieved documents) over a chronologically processed stream of documents. The goal of this research project is to significantly improve adaptive filtering technologies. The approach consists of: (1) a new framework named the Enriched Vector Space Model (EVSM) that represents multi-type objects (including users, queries, topics, documents, Named Entities and sources of data), records the interactions among objects during the adaptive filtering process, and enables the comparison among objects based on both content similarity and relationship similarity; and (2) a system that bridges adaptive filtering, collaborative filtering, personalized active learning and Generalized Hubs and Authorities for effective learning about evolving interests of users. The experimental research is linked to educational benefits for graduate students via participation in the system implementation, data annotation, empirical evaluations and user studies in this project, as well as through course materials the Principal Investigators teach on the related topics and techniques. The results of this project will provide a significant contribution to the field of information search and to our understanding of how to effectively learn from multiple users, and how to combine multi-aspect user information in a new unified framework, with broad applications in information retrieval (web-based and enterprise search engines, for example) by giving them a major adaptive and personalization dimension.<br/><br/>The project Web sites (http://nyc.lti.cs.cmu.edu/UserCentricAFCF/ and http://amber.sis.pitt.edu/UserCentricAFCF ) will be used to disseminate resulting publications, open-source code and annotated test data sets.",8,University of Pittsburgh
1020001000304262,Collaborative Research: Interorganizational Information System Integration Through Industry-wide Standardization: Technical Design Choices and Collective Action Dilemmas,"One way industries harmonize the actions of individual organizations is via formal agreements on process and specification standards. Little is known about the mechanisms that drive this critical coordination so foundational to our national economic competitiveness. New information architecture technologies (e.g., XML) are rapidly replacing decades-old, stable formats (e.g., EDI), transforming the standards process in unforeseen ways. <br/><br/>This proposal empirically examines the development, adoption, implementation, and diffusion of industry-wide vertical information systems standards in three diverse industries: automotive, retailing, and mortgage finance. As standards diffusion is a complex interaction between independent organization-level action and collective industry-level action, the three case studies are grounded by data collection with a sample of 60 companies from each industry, which enables comparisons of relative success of a standard's adoption and use as well as its consequences. This is a unique multi-level approach, investigating the impact of collective action dynamics on specific IT design features. <br/><br/>Broader impacts: Improved electronic interorganizational collaboration has enormous potential to reduce transaction costs and develop more competitive industries. Such benefits can lead to lower prices for consumers. Moreover, the greater use of open Internet standards promises new opportunities for smaller organizations that were not able to enjoy the full benefits of earlier EDI approaches. The research will also produce materials for the education of practitioners and will advance the state of knowledge in several fields, including information systems and technology, management, industrial engineering, economics, sociology, and social informatics.",11,Bentley College
1020001000304264,"III-COR: Collaborative Research: User-centric, Adaptive and Collaborative Information Filtering","This is a collaborative research project (0704689: Yiming Yang, Carnegie-Mellon University; 0704628: Daqing He, University of Pittsburgh). Adaptive filtering (AF) is an open challenge in information retrieval, defined as the problem of incrementally learning about the topics of interest from user feedback (relevance judgments of the retrieved documents) over a chronologically processed stream of documents. The goal of this research project is to significantly improve adaptive filtering technologies. The approach consists of: (1) a new framework named the Enriched Vector Space Model (EVSM) that represents multi-type objects (including users, queries, topics, documents, Named Entities and sources of data), records the interactions among objects during the adaptive filtering process, and enables the comparison among objects based on both content similarity and relationship similarity; and (2) a system that bridges adaptive filtering, collaborative filtering, personalized active learning and Generalized Hubs and Authorities for effective learning about evolving interests of users. The experimental research is linked to educational benefits for graduate students via participation in the system implementation, data annotation, empirical evaluations and user studies in this project, as well as through course materials the Principal Investigators teach on the related topics and techniques. The results of this project will provide a significant contribution to the field of information search and to our understanding of how to effectively learn from multiple users, and how to combine multi-aspect user information in a new unified framework, with broad applications in information retrieval (web-based and enterprise search engines, for example) by giving them a major adaptive and personalization dimension.<br/><br/>The project Web sites (http://nyc.lti.cs.cmu.edu/UserCentricAFCF/ and http://amber.sis.pitt.edu/UserCentricAFCF ) will be used to disseminate resulting publications, open-source code and annotated test data sets.",8,Carnegie-Mellon University
1020001000304272,Collaborative Research: Interorganizational Information System Integration Through Industry-wide Standardization: Technical Design Choices and Collective Action Dilemmas,"One way industries harmonize the actions of individual organizations is via formal agreements on process and specification standards. Little is known about the mechanisms that drive this critical coordination so foundational to our national economic competitiveness. New information architecture technologies (e.g., XML) are rapidly replacing decades-old, stable formats (e.g., EDI), transforming the standards process in unforeseen ways. <br/><br/>This proposal empirically examines the development, adoption, implementation, and diffusion of industry-wide vertical information systems standards in three diverse industries: automotive, retailing, and mortgage finance. As standards diffusion is a complex interaction between independent organization-level action and collective industry-level action, the three case studies are grounded by data collection with a sample of 60 companies from each industry, which enables comparisons of relative success of a standard's adoption and use as well as its consequences. This is a unique multi-level approach, investigating the impact of collective action dynamics on specific IT design features. <br/><br/>Broader impacts: Improved electronic interorganizational collaboration has enormous potential to reduce transaction costs and develop more competitive industries. Such benefits can lead to lower prices for consumers. Moreover, the greater use of open Internet standards promises new opportunities for smaller organizations that were not able to enjoy the full benefits of earlier EDI approaches. The research will also produce materials for the education of practitioners and will advance the state of knowledge in several fields, including information systems and technology, management, industrial engineering, economics, sociology, and social informatics.",11,University of Arkansas Little Rock
1020001000304307,Collaborative Research III-COR: From a Pile of Documents to a Collection of Information: A Framework for Multi-Dimensional Text Analysis,"Many information workers are swamped with unfamiliar collections of text. One challenge is to obtain an accurate overview of a large text collection, such as the public comments collected in ""notice and comment"" rulemaking. No single tool currently provides a sufficiently diversified picture of such a corpus, and no adequate theory exists to help people explore and form a deep and nuanced understanding of such a text collection. This research seeks to develop a computational framework that allows further exploration of this problem from multiple, integrated perspectives. All the assembled perspectives will be brought together into a single overall supra-document structure that is dynamically constructed under user guidance. In this structure, hierarchical topic clusters will be cross-linked by opinion and argumentation links, using two classes of text analysis engines: one for topics and subtopics, and the other for argument structures. The research team will design, develop, build, and systematically test an overall text exploration framework, an application to support federal regulation writers called the Rule-Writers Workbench. There is a strong collaboration with Federal government officials who will provide data and participate in user testing. The three PIs have successfully collaborated on a related project under previous NSF funding. <br/><br/>Intellectual Merit: This is a sustainable collaboration between computer science and political/social science research, rooted in a challenging and important real world application and informed by years of end user research. Dynamic, user-driven subtopic definition and clustering algorithms coupled with language modeling are an innovative yet reachable set of goals. The framework to be developed will be grounded in the humanities disciplines' expertise in rhetoric, discourse structure, and subjectivity.<br/><br/>Broader Impacts: The Rule-Writers Workbench will allow federal government regulation writers to employ a suite of technical tools that perform independent analyses of public responses to proposed regulations, including near-duplicate detection and clustering, user-based topic selection from dynamically extracted keywords, opinion identification, and subtopic clustering. These capabilities will open new avenues for federal comment analysis.",0,University of Southern California
1020001000304310,Contextual Investigation of Constraint-Based Dynamic Scheduling,"Proposal 0705103; PI: Martha Pollack, University of Michigan.<br/><br/>This project aims to develop techniques for a variety of important scheduling problems that occur frequently, yet are inadequately addressed by current techniques. The research will be done in the context of a particular application--patient scheduling for medical clinics--and it will involve a Michigan clinic that works with patients with traumatic brain injury. This application context has three characteristics that make it challenging. First, it is dynamic, in that events, such as patient appointments, as well as constraints on the times of the events change over time. Second, it involves both hard constraints (e.g., that no appointments can be scheduled earlier than a given time), as well as so-called ""soft"" constraints that represent preferences over alternative schedules (e.g., that a particular patient prefers afternoon appointments, or that it is better not to have large gaps between the appointments a patient has on a given day). Third, it is interactive: a human being is responsible for specifying events, constraints, and preferences.<br/><br/>To create an effective scheduler, we will extend a well-studied class of constraint-satisfaction systems: Satisfiability Modulo Theory (SMT) solvers. A key goal of this project is to enable SMT solvers to perform optimization efficiently and to develop algorithms for solving sequences of problems in a way that minimizes change across solutions while still producing near-optimal results. The project will also develop interfaces that make it possible for lay users to describe richly expressive constraints and preferences on schedules.<br/><br/>The broader impact of the work includes the potential usefulness of the techniques to key applications including clinic scheduling; the exposure of graduate students to contextual research; and the development of real-world problem sets for undergraduate courses.",12,University of Michigan Ann Arbor
1020001000304313,HRI: ROBOT LEARNING FROM TELEOPERATIVE-BASED INSTRUCTION AND MULTIMODAL INTERACTION,"Teleoperated assistive robots in home environments have the potential to dramatically improve quality of life for older adults and/or people who experience disabling circumstances due to chronic or acute health conditions. They could similarly aid clinicians and healthcare professionals providing treatment. The success of these applications, though, will critically depend on the ease with which a robot can be commanded to perform common manipulation tasks within a home environment. Thus, the focus of the proposed research is to address this key challenge in two significant ways. First, by learning from teleoperated manipulation (i.e., teleoperative-based instruction), robots can acquire the ability to perform elements of common tasks with greater autonomy and reliability. Second, by automatically mapping new modalities (e.g., voice and gesture commands) to the robot's user interface, a wider variety of people will be able to use the robot more easily. The resulting multimodal interfaces may be especially important for people who have difficulty using a single modality, such as vision. These two fundamental research components form the basis of our approach to enabling manipulation of everyday objects in an unstructured human environment.",7,Georgia Tech Research Corporation
1020001000304321,RI: Collaborative Proposal: Complementary Lexical Resources: Towards an Alignment of WordNet and FrameNet,"Machine-readable lexical resources are essential to Natural Language Processing applications such as information extraction and machine translation. The largest lexicon is WordNet, with semantic information about more than 150,000 lexical units (LUs). A smaller, independently developed resource is FrameNet, which provides detailed information about the syntactic patterns for LUs. The project investigates the ways in which these complementary resources can be combined, using the semantic-syntactic information from FrameNet (FN) where available and falling back on less detailed entries from WordNet (WN) in other cases. <br/><br/>WN and FN exhibit fundamentally different design principles. WN groups (near) synonymous LUs into ""synsets,"" which are interconnected via conceptual and lexical relations to form a semantic network. FN groups LUs according to the ""semantic frame"" they evoke, which is a type of event, relation or state along with the participants involved in the event. Thus, while antonyms such as _praise_ and _blame_ may be in the same FN frame, they are in different, though interlinked, WN synsets. Moreover, FN frames cover semantically related nouns, verbs and adjectives; WN synsets do not mix parts of speech. Crucially for NLP applications, the resources differ with respect to sense distinctions. Alignment will be investigated for the following differences: lexical coverage, sense distinctions, taxonomic and other semantic relations, and scalar frames for adjectives. Some 1,000 word senses are examined in detail so as to provide an idea of the distribution of each of these phenomena over the entire lexicon. <br/><br/>This theoretical work lays the foundation for constructing a unique, invaluable resource for the NLP community.",0,International Computer Science Institute
1020001000304330,Collaborative Research: Interorganizational Information Systems Integration Through Industry-wide IS Standardization: Technical Design Choices and Collective Action Dilemmas,"One way industries harmonize the actions of individual organizations is via formal agreements on process and specification standards. Little is known about the mechanisms that drive this critical coordination so foundational to our national economic competitiveness. New information architecture technologies (e.g., XML) are rapidly replacing decades-old, stable formats (e.g., EDI), transforming the standards process in unforeseen ways. <br/><br/>This proposal empirically examines the development, adoption, implementation, and diffusion of industry-wide vertical information systems standards in three diverse industries: automotive, retailing, and mortgage finance. As standards diffusion is a complex interaction between independent organization-level action and collective industry-level action, the three case studies are grounded by data collection with a sample of 60 companies from each industry, which enables comparisons of relative success of a standard's adoption and use as well as its consequences. This is a unique multi-level approach, investigating the impact of collective action dynamics on specific IT design features. <br/><br/>Broader impacts: Improved electronic interorganizational collaboration has enormous potential to reduce transaction costs and develop more competitive industries. Such benefits can lead to lower prices for consumers. Moreover, the greater use of open Internet standards promises new opportunities for smaller organizations that were not able to enjoy the full benefits of earlier EDI approaches. The research will also produce materials for the education of practitioners and will advance the state of knowledge in several fields, including information systems and technology, management, industrial engineering, economics, sociology, and social informatics.",11,Michigan State University
1020001000304334,Integrated modeling of biological nanomachines,"Biological nanomachines are the assemblies that carry out all the basic biological processes in a living organism. Electron cryo-microscopy (cryoEM) is the most appropriate structural tool to determine molecular structures of biological nanomachines that generally consist of multiple protein subunits and/or nucleic acids with a total mass greater than 0.5 million Daltons. The goal is to develop information discovery and integration methodologies for deriving atomic models of nanomachines. Such models will be derived from 3-dimensional (3-D) cryoEM mass density function (i.e. a volumetric density map) in conjunction with physics of protein folding and informatics data. This project is made possible by an integration of the expertise of five investigators in computer graphics, computational biophysics, structural informatics and cryoEM. The intellectual merit of this research is highlighted by the computational approaches of extracting structural information from low-resolution, complex cryoEM volume densities and integrating this information into classical protein structure modeling paradigms, such as comparative modeling and ab initio modeling, for understanding biological nanomachines. The three research goals involve information discovery, information integration and validation of the proposed algorithms. The proposed research will have significant impacts in three disparate disciplines: computer science, molecular modeling, and cryoEM. Furthermore, the team will disseminate their resulting tools freely to the academic community and will host a workshop towards the end of the project. To enhance the impact of their research, the investigators will integrate research with education at each member institution with an eye towards diversity. In particular, these investigators will develop a virtual didactic course in modeling of biological nanomachines for graduate and senior undergraduate students at the five participating institutions.",3,University of California-San Francisco
1020001000304335,RI: Collaborative Proposal: Complementary Lexical Resources: Towards an Alignment of WordNet and FrameNet,"Machine-readable lexical resources are essential to Natural Language<br/>Processing applications such as information extraction and machine translation. <br/>The largest lexicon is WordNet, with <br/>semantic information about more than 150,000 words, or lexical units (LUs). <br/>A smaller, independently developed resource is FrameNet, <br/>which provides detailed information about the syntactic patterns for LUs. <br/>The project investigates the ways in which these complementary resources <br/>can be combined using the semantic-syntactic information from<br/>FrameNet (FN) where available and falling back on less detailed<br/>entries from WordNet (WN) in other cases. <br/><br/>WN and FN exhibit fundamentally different design principles. <br/>WN groups (near) synonymous LUs into <br/>""synsets,"" which are interconnected via conceptual and lexical relations <br/>to form a semantic network. FN groups LUs according to the<br/>""semantic frame"" they evoke, which is a type of event, relation or<br/>state along with the participants involved in the event. Thus,<br/>while antonyms such as _praise_ and _blame_ may be in the same FN frame, <br/>they are in different, though interlinked, WN synsets. Moreover, FN frames <br/>cover semantically related nouns, verbs and adjectives; WN synsets<br/>do not mix part of speech. Crucially for NLP applications, the resources <br/>differ with respect to sense distinctions.<br/>Alignment will be investigated for the following differences: <br/>lexical coverage, sense distinctions, taxonomic and<br/>other semantic relations, and scalar frames for adjectives.<br/>Some 1,000 word senses are examined in detail so as to provide an idea of<br/>the distribution of each of these phenomena over the entire lexicon. <br/><br/>This theoretical work lays the foundation for constructing a unique, <br/>invaluable resource for the NLP community.",0,Princeton University
1020001000304342,III-COR: Collaborative Research: Mining Biomedical and Network Data Using Tensors,"IIS 0705359, IIS 0705215<br/><br/>III-COR: Collaborative Research: Mining Biomedical <br/>and Network Data Using Tensors Christos Faloutsos (christos@cs.cmu.edu) CMU <br/>Vasileios Megalooikonomou (vasilis@cis.temple.edu) Temple Univ.<br/><br/>Given a large collection of functional Magnetic Resonance (fMR) images over time,<br/>how can one find patterns and correlations? Similarly, given a never-ending stream <br/>of network traffic information, how can one monitor for anomalies, intrusions, <br/>and potential failures? The main idea behind this proposal is to treat both <br/>problems using the theory of tensors. Despite the seemingly wide differences in <br/>the two settings, they both boil down to finding patterns in multidimensional <br/>arrays, sparse or dense. Tensors are exactly generalizations of matrices, <br/>and correspond roughly to ``DataCubes'' of data mining. Matrix analysis <br/>and decompositions are part of the standard toolbox for data mining, <br/>providing methods for dimensionality reduction, pattern discovery and<br/>``hidden variable'' discovery. Extending these tools to higher dimensionalities <br/>is valuable and tensors provide the tools to do this generalization. <br/>However, these tools have not yet been put to use in large volume data mining. <br/>This is the main contribution of this proposal. The investigators propose <br/>(a) to design tensor decomposition algorithms that scale for large datasets,<br/>with special attention to sparse datasets, and to never-ending streams of data <br/>and (b) to apply them to two driving applications, fMRI data analysis and network<br/>data analysis.<br/><br/>The investigators propose to analyze large volumes of fMRI data by performing<br/>the following sub-tasks: cluster voxels with similar behavior over time for<br/>a given subject and/or task or across subjects and/or tasks, <br/>classify patterns of brain activity, and detect lag correlations<br/>and spatio-temporal patterns among fMRI time sequences. <br/>The investigators also propose to perform the following inter-related <br/>tasks on multiple GigaBytes of network flow data: anomaly detection, <br/>pattern discovery, and compression.<br/><br/>Both of these applications are important for medicine, health management,<br/>and for computer and national security. Analysis of fMRI data can help in understanding<br/>how the brain functions, which parts of the brain collaborate with what other parts, <br/>and whether there are variations across subjects and across task-related activities. <br/>For the network traffic monitoring setting, fast detection of anomalies is important,<br/>to spot malware, port-scanning attempts, and just plain non-malicious failures.<br/><br/>The educational goals include incorporating the research findings in <br/>advanced graduate courses at CMU (15-826) and at Temple (9664, 9665)<br/>and proposing tutorials at leading conferences for database, <br/>data mining and bio-informatics audiences.<br/><br/>For further information see the web page: <br/>http://knight.cis.temple.edu/~vasilis/research/tensors.html",5,Temple University
1020001000304370,III-CXT: Collaborative Research: Computational Methods for Understanding Social Interactions in Animal Populations,"The goal of the proposed research is to create analytical and computational tools that explicitly address the time and order of social interactions between individuals. The proposed approach combines ideas from social network analysis, Internet computing, distributed computing, and machine learning to solve problems in population biology. The diverse computational tasks of this project include design of algorithmic techniques to identify social entities such as communities, leaders, and followers, and to use these structures to predict social response patterns to danger or disturbances. Nowhere is the impact of social structure likely to be greater than when species come in<br/>contact with predators. Thus, the accuracy and predictive power of the proposed computational tools will be tested by characterizing the social structure of horses and zebras (equids) both before and after human- or predator-induced perturbations to the social network. The proposed interdisciplinary research will have broader impacts on a wide range of research communities. New methods for analysis of social interactions in animal populations will be useful for behavioral biologists in such diverse fields as behavioral ecology, animal husbandry, conservation biology, and disease ecology. The machine learning algorithms that will be developed are relevant to many studies in which researchers need to classify temporal interaction data. The proposed network methods have broader relevance to human societies: disease transmission, dissemination<br/>of ideas, and social response to crises are all dynamic processes occurring via social networks. Further, through teaching and participation in outreach, students and school teachers will gain access to opportunities for hands-on, interdisciplinary experiences in a new area of computational biology. The research and software resulting from the proposed project will be disseminated in both computational and biological communities and enhanced by cross-disciplinary training activities and will serve to train a new generation of interdisciplinary scientists.",11,Princeton University
1020001000304375,III-COR: Collaborative Research: Graceful Evolution and Historical Queries in Information Systems--a Unified Approach,"III-COR: Collaborative Research: Graceful Evolution and Historical <br/>Queries in Information Systems-- a Unified Approach <br/><br/>Database schema evolution represents a constant in the life cycle <br/>of Information Systems, and is the source of major costs for <br/>maintenance, upgrading, and service down time. The traditional <br/>schema revision process depends on the installation of a new <br/>schema along with the revised database, and a converted set of <br/>applications (laboriously rewritten to work with this schema). <br/>Instead, this project develops the novel enabling technology <br/>whereby the schema evolution problem is reduced to coordinating <br/>mappings between multiple concurrent versions of the schema, <br/>applications, and the database. This is realized by the <br/>Meta-Manager system which provides integrated management of <br/>evolving (i) data, and (ii) metadata, and efficiently supports the <br/>(iii) mappings, and (iv) software artifacts needed for graceful <br/>schema evolution. Further, the Meta-Manager allows for <br/>preservation and querying of database history while it assists the <br/>user in planning how to evolve the current schema version with <br/>``what-if'' evolution scenarios. The functionality and performance <br/>of the system is validated using various testbeds, such as, the <br/>San Diego Supercomputing Center's Storage Request Broker, which <br/>hosts scientific data for various research groups ranging from <br/>astrophysicists to biologists. <br/><br/>This novel and timely approach provides a unified solution to both <br/>the evolution and preservation of information systems. Because of <br/>the key role played by information systems, a broad range of <br/>scientific, educational, and economic activities will benefit from <br/>these advances. <br/><br/>Results are disseminated via publications, reports and <br/>demos available from the project web sites: <br/><br/>http://www.cs.ucr.edu/~tsotras/meta-manager <br/>http://wis.cs.ucla.edu/projects/meta-manager <br/>http://db.ucsd.edu/people/alin/meta-manager",8,University of California-Los Angeles
1020001000304379,III-COR: Collaborative Research: Mining Biomedical and Network Data Using Tensors,"IIS 0705359, IIS 0705215<br/><br/>III-COR: Collaborative Research: Mining Biomedical <br/>and Network Data Using Tensors Christos Faloutsos (christos@cs.cmu.edu) CMU <br/>Vasileios Megalooikonomou (vasilis@cis.temple.edu) Temple Univ.<br/><br/>Given a large collection of functional Magnetic Resonance (fMR) images over time,<br/>how can one find patterns and correlations? Similarly, given a never-ending stream <br/>of network traffic information, how can one monitor for anomalies, intrusions, <br/>and potential failures? The main idea behind this proposal is to treat both <br/>problems using the theory of tensors. Despite the seemingly wide differences in <br/>the two settings, they both boil down to finding patterns in multidimensional <br/>arrays, sparse or dense. Tensors are exactly generalizations of matrices, <br/>and correspond roughly to ``DataCubes'' of data mining. Matrix analysis <br/>and decompositions are part of the standard toolbox for data mining, <br/>providing methods for dimensionality reduction, pattern discovery and<br/>``hidden variable'' discovery. Extending these tools to higher dimensionalities <br/>is valuable and tensors provide the tools to do this generalization. <br/>However, these tools have not yet been put to use in large volume data mining. <br/>This is the main contribution of this proposal. The investigators propose <br/>(a) to design tensor decomposition algorithms that scale for large datasets,<br/>with special attention to sparse datasets, and to never-ending streams of data <br/>and (b) to apply them to two driving applications, fMRI data analysis and network<br/>data analysis.<br/><br/>The investigators propose to analyze large volumes of fMRI data by performing<br/>the following sub-tasks: cluster voxels with similar behavior over time for<br/>a given subject and/or task or across subjects and/or tasks, <br/>classify patterns of brain activity, and detect lag correlations<br/>and spatio-temporal patterns among fMRI time sequences. <br/>The investigators also propose to perform the following inter-related <br/>tasks on multiple GigaBytes of network flow data: anomaly detection, <br/>pattern discovery, and compression.<br/><br/>Both of these applications are important for medicine, health management,<br/>and for computer and national security. Analysis of fMRI data can help in understanding<br/>how the brain functions, which parts of the brain collaborate with what other parts, <br/>and whether there are variations across subjects and across task-related activities. <br/>For the network traffic monitoring setting, fast detection of anomalies is important,<br/>to spot malware, port-scanning attempts, and just plain non-malicious failures.<br/><br/>The educational goals include incorporating the research findings in <br/>advanced graduate courses at CMU (15-826) and at Temple (9664, 9665)<br/>and proposing tutorials at leading conferences for database, <br/>data mining and bio-informatics audiences.<br/><br/>For further information see the web page: <br/>http://knight.cis.temple.edu/~vasilis/research/tensors.html",5,Carnegie-Mellon University
1020001000304387,RI: Robust implementation of foot placement for balance of 3D bipedal walking,"Andy Ruina's group at Cornell will master the use of foot<br/>placement as a means for balance in robotic walking. Ruina's<br/>previous research shows that, in principle, a robot could walk,<br/>say, 10-20 km on a single battery charge much like a person can<br/>walk a similar distance on energy from a good meal. In practice,<br/>however, typical robots today only walk tens of meters. The<br/>efficient machines fall down too often. The stable machines run<br/>out of energy. Today's robots are either efficient or stable,<br/>but not both. This shortcoming represents a lack of understanding<br/>of the nature of balance using foot placement.<br/><br/>Ruina's approach will build on his Cornell lab's success in<br/>making bipedal robots whose energy use, per unit distance and<br/>mass, is comparable to that of humans. The new robots will<br/>improve on the robustness in the previous efficient machines by<br/>using controlled foot placement for balance.<br/><br/>Successful design and construction of the proposed new robots<br/>will demonstrate the utility of holding energetic efficiency,<br/>control simplicity and control robustness paramount in the design<br/>of humanoid mechanisms. Further, general understanding of<br/>machine efficiency and stability will be enhanced.",7,Cornell University
1020001000304390,HCC: Collaborative Research: Information Exchange and Social Factors in Human-Computer-Teamwork Decision Making,"This project addresses problems central to the design of decision-making strategies that enable computer agents to work effectively with people in heterogeneous groups that interact in carrying out complex activities. These mixed networks of people and systems arise in a wide variety of real-world applications as well as in virtual reality and simulation systems used for training. They occur in settings in which computer systems support people who are working together, those in which they act as proxies for individual people, and those in which groups of agents act autonomously (but alongside people) to carry out constituent tasks for which they are responsible. Despite mixed networks being widespread, the design of agents that can operate in such settings has received less attention than the design of agents for multi-agent systems comprising only computer agents.<br/><br/>The inclusion of people in mixed networks presents novel problems for the design of autonomous agent decision-making mechanisms. This proposal focuses on the following three challenges, which have not been investigated sufficiently in prior work and which agent designers must address to construct systems able to work well with their human partners in mixed networks: (1) information exchange policies for agent competence and past behavior; (2) design of interruption management mechanisms for collaborative interactions; and (3) learning and incorporation of models of social factors and organizational structures into decision-making mechanisms.",6,Harvard University
1020001000304392,RI:Collaborative Research: Robust Ornithopter Flight - from Engineering Models to Cooperative Indoor Maneuvers,"Flapping flight provides high maneuverability for indoor environments.<br/>To achieve robust intelligence for tasks such as search and indoor<br/>navigation, the maneuverability of an ornithopter will be combined<br/>with a learning approach which makes minimal assumptions about the<br/>nature of disturbances and obstacles. We propose to develop<br/>algorithms for ornithopters to cooperate in sensing and navigation in<br/>typical indoor environments without prior maps. Our research will be<br/>verified with full three-dimensional dynamic simulation, a<br/>multi-tethered laboratory test-bed, as well as with actual indoor<br/>flying ornithopters.<br/><br/>The key research issues to be addressed in this work are:<br/>1) improved ornithopter mechanics and aerodynamics<br/>2) robust ornithopter flight control strategies<br/>3) learning algorithms for cooperative navigation<br/>of ornithopters using only simple sensor information<br/><br/>This research will advance understanding of high maneuverability<br/>flapping wing vehicles for indoor flight. By combining research from<br/>the levels of mechanics to learned behavior in a real indoor<br/>environment, we will test how performance at each level can be<br/>integrated to achieve robust intelligence.<br/><br/>Our project will provide interdisciplinary education for students in<br/>achieving robust intelligence through the combination of mechanics,<br/>sensing, control, and learning. This research can lead to flying<br/>robots which can robustly enter unknown and hazardous indoor<br/>environments, potentially keeping rescue workers out of harm's way.",19,University of Delaware
1020001000304394,RI: Collaborative Research: Hierarchical models of time varying natural images,"Abstract<br/><br/>Title: Collaborative Research: Hierarchical Models of Time-Varying Natural Images<br/>PIs: Bruno Olshausen, University of California-Berkeley and David Warland, University of California-Davis<br/><br/>The goal of this project is to advance the state of the art in image analysis and computer vision by building models that capture the robust intelligence exhibited by the mammalian visual system. The proposed approach is based on modeling the structure of time-varying natural images, and developing model neural systems capable of efficiently representing this structure. This approach will shed light on the underlying neural mechanisms involved in visual perception and will apply these mechanisms to practical problems in image analysis and computer vision.<br/><br/>The models that are to be developed will allow the invariant structure in images (form, shape) to be described independently of its variations (position, size, rotation). The models are composed of multiple layers that capture progressively more complex forms of <br/>scene structure in addition to modeling their transformations. <br/>Mathematically, these multi-layer models have a bilinear form in which the variables representing shape and form interact multiplicatively with the variables representing position, size or other variations. The parameters of the model are learned from the statistics of time-varying natural images using the principles of sparse and efficient coding.<br/><br/>The early measurements and models of natural image structure have had a profound impact on a wide variety of disciplines including visual neuroscience (e.g. predictions of receptive field properties of retinal ganglion cells and cortical simple cells in visual cortex) and image processing (e.g. wavelets, multi-scale representations, image denoising). The approach outlined in this proposal extends this interdisciplinary work by learning higher-order scene structure <br/>from sequences of time-varying natural images. Given the <br/>evolutionary pressures on the visual cortex to process time-varying images efficiently, it is plausible that the computations performed by the cortex can be understood in part from the constraints imposed by efficient processing. Modeling the higher order structure will also advance the development of practical image processing algorithms by finding good representations for image-processing tasks such as video search and indexing. Completion of the specific goals described in this proposal will provide (1) mathematical models that can help elucidate the underlying neural mechanisms involved in visual perception and (2) new generative models of time-varying images that better describe their structure.<br/><br/>The explosion of digital images and video has created a national priority of providing better tools for tasks such as object recognition and search, navigation, surveillance, and image analysis. The models developed as part of this proposal are broadly applicable to these tasks. Results from this research program will be integrated into a new neural computation course at UC Berkeley, presented at national multi-disciplinary conferences, and published in a timely manner in leading peer-reviewed journals. Participation in the proposed research is available at both the graduate and undergraduate levels, and the PI will advise Ph.D. students in both neuroscience and engineering as part of this project.<br/><br/>URL: http://redwood.berkeley.edu/wiki/NSF_Funded_Research",1,University of California-Davis
1020001000304396,RI: Collaborative Research:Robust Ornithopter Flight - from Engineering Models to Cooperative Indoor Maneuvers,"Flapping flight provides high maneuverability for indoor environments.<br/>To achieve robust intelligence for tasks such as search and indoor<br/>navigation, the maneuverability of an ornithopter will be combined<br/>with a learning approach which makes minimal assumptions about the<br/>nature of disturbances and obstacles. We propose to develop<br/>algorithms for ornithopters to cooperate in sensing and navigation in<br/>typical indoor environments without prior maps. Our research will be<br/>verified with full three-dimensional dynamic simulation, a<br/>multi-tethered laboratory test-bed, as well as with actual indoor<br/>flying ornithopters.<br/><br/>The key research issues to be addressed in this work are:<br/>1) improved ornithopter mechanics and aerodynamics<br/>2) robust ornithopter flight control strategies<br/>3) learning algorithms for cooperative navigation<br/>of ornithopters using only simple sensor information<br/><br/>This research will advance understanding of high maneuverability<br/>flapping wing vehicles for indoor flight. By combining research from<br/>the levels of mechanics to learned behavior in a real indoor<br/>environment, we will test how performance at each level can be<br/>integrated to achieve robust intelligence.<br/><br/>Our project will provide interdisciplinary education for students in<br/>achieving robust intelligence through the combination of mechanics,<br/>sensing, control, and learning. This research can lead to flying<br/>robots which can robustly enter unknown and hazardous indoor<br/>environments, potentially keeping rescue workers out of harm's way.",19,University of California-Berkeley
1020001000304409,III-CXT: Collaborative Research: Integrated Modeling of Biological Nanomachines,"Biological nanomachines are the assemblies that carry out all the basic biological processes in a living organism. Electron cryo-microscopy (cryoEM) is the most appropriate structural tool to determine molecular structures of biological nanomachines that generally consist of multiple protein subunits and/or nucleic acids with a total mass greater than 0.5 million Daltons. The goal is to develop information discovery and integration methodologies for deriving atomic models of nanomachines. Such models will be derived from 3-dimensional (3-D) cryoEM mass density function (i.e. a volumetric density map) in conjunction with physics of protein folding and informatics data. This project is made possible by an integration of the expertise of five investigators in computer graphics, computational biophysics, structural informatics and cryoEM. The intellectual merit of this research is highlighted by the computational approaches of extracting structural information from low-resolution, complex cryoEM volume densities and integrating this information into classical protein structure modeling paradigms, such as comparative modeling and ab initio modeling, for understanding biological nanomachines. The three research goals involve information discovery, information integration and validation of the proposed algorithms. The proposed research will have significant impacts in three disparate disciplines: computer science, molecular modeling, and cryoEM. Furthermore, the team will disseminate their resulting tools freely to the academic community and will host a workshop towards the end of the project. To enhance the impact of their research, the investigators will integrate research with education at each member institution with an eye towards diversity. In particular, these investigators will develop a virtual didactic course in modeling of biological nanomachines for graduate and senior undergraduate students at the five participating institutions.",3,Baylor College of Medicine
1020001000304411,III-CXT: Collaborative Research: Computational Methods for Understanding Social Interactions in Animal Populations,"The goal of the proposed research is to create analytical and computational tools that explicitly address the time and order of social interactions between individuals. The proposed approach combines ideas from social network analysis, Internet computing, distributed computing, and machine learning to solve problems in population biology. The diverse computational tasks of this project include design of algorithmic techniques to identify social entities such as communities, leaders, and followers, and to use these structures to predict social response patterns to danger or disturbances. Nowhere is the impact of social structure likely to be greater than when species come in<br/>contact with predators. Thus, the accuracy and predictive power of the proposed computational tools will be tested by characterizing the social structure of horses and zebras (equids) both before and after human- or predator-induced perturbations to the social network. The proposed interdisciplinary research will have broader impacts on a wide range of research communities. New methods for analysis of social interactions in animal populations will be useful for behavioral biologists in such diverse fields as behavioral ecology, animal husbandry, conservation biology, and disease ecology. The machine learning algorithms that will be developed are relevant to many studies in which researchers need to classify temporal interaction data. The proposed network methods have broader relevance to human societies: disease transmission, dissemination<br/>of ideas, and social response to crises are all dynamic processes occurring via social networks. Further, through teaching and participation in outreach, students and school teachers will gain access to opportunities for hands-on, interdisciplinary experiences in a new area of computational biology. The research and software resulting from the proposed project will be disseminated in both computational and biological communities and enhanced by cross-disciplinary training activities and will serve to train a new generation of interdisciplinary scientists.",11,University of New Mexico
1020001000304412,"III-CXT: Enhancing Digital Video Libraries through the Evaluation and Transition of Automated Techniques for Visual Processing, Indexing and Access","IIS - 0705491<br/>Christel, Michael G.<br/>Carnegie Mellon University<br/>III-CXT: Enhancing Digital Video Libraries through the Evaluation and Transition<br/>of Automated Techniques for Visual Processing, Indexing, and Access<br/><br/><br/>The proposed work will deliver Informedia audio and video technologies (developed under the Digital Libraries Initiatives) and other non-text access methods to existing digital video libraries. Digital video is a critical and growing multimedia information source, one that is seen as an important to means for universal access to audio and video resources. The web is experiencing exponential video traffic as YouTube, Yahoo, Time Warner, Viacom, and others are presenting an opportunity for migrating advanced research digital video library capabilities into practice. This trend will likely to continue as network bandwidth increases and becomes more widely available. This project will evaluate fielded systems through human computer interaction (HCI) techniques. A primary impact of the project will be in the development and deployment of digital video library toolkits, curriculum support materials, and automated processing and access techniques. These resources will support building other digital video libraries and contribute to the education of those that create and manage these. Automated assistance for curators of audiovisual collections helps to increase the number and types of collections available, while reducing associated costs. The projects collaborators bring important collections to the project - broadcast news archives and oral history archives of African American leaders.",1,Carnegie-Mellon University
1020001000304413,HCC: Collaborative Research: Social-Emotional Technologies for Autism Spectrum Disorders,"This project is a collaborative effort between the Massachusetts Institute of Technology Media Laboratory and the Groden Center to develop and evaluate wearable social-emotional technology that helps individuals with high-functioning autism or Asperger syndrome acquire an affinity for the social domain and improve their overall social abilities. The project will develop the first wearable camera system capable of perceiving and visualizing social-emotional information in real-time human interaction. Using a small wearable camera and video-pattern analysis algorithms, the system analyzes video of the wearer or interaction partner and tags it at multiple granularities (facial actions, communicative facial or head gestures, and emotions). <br/><br/>The wearable system aims to: (1) facilitate learning and systemizing of social-emotional cues; (2) promote self-reflection and perspective-taking; (3) allow wearers to study subtle nonverbal cues and share experiences with peers, family members, and caregivers; and (4) contribute new computational models and theories of social-emotional intelligence in machines. A clinical study will compare the efficacy of the wearable system to current gold standard interventions for autism spectrum disorders (ASD). A participatory approach to the co-design and use of technology draws on the experiences of individuals with ASD and their solutions to systematizing social interactions, thereby empowering them to enhance their relationships, while participating in the development of next-generation social-emotional intelligent technologies.<br/><br/>The project will make significant contributions to the difficult challenge of developing machine intelligence that is robust at handling human social interaction. When people or machines fail to perceive, understand, and act on social-emotional cues they are hindered in their ability to interact with and learn from others. The results of this interdisciplinary work can be leveraged in human-computer interaction, robotics, and technologies with social-emotional intelligence. The research will also provide investigators with a new tool to study nonverbal communication outside of laboratory settings. <br/><br/>This project brings together the overlapping and converging goals and challenges of autism research and affective computing, both already interdisciplinary in nature, and demonstrates how a collaboration could lead to several mutually beneficial outcomes, from developing new tools to assist people with ASD in understanding and functioning in the social-emotional world, to developing new computational models and theories that enable technology to provide an overall better experience to those who use it. This work also promotes the training and education of students and people with ASD by involving them in cutting-edge scientific research.",11,"The Groden Center, Inc."
1020001000304419,III-CXT: Collaborative Research: Integrated Modeling of Biological Nanomachines,"Biological nanomachines are the assemblies that carry out all the basic biological processes in a living organism. Electron cryo-microscopy (cryoEM) is the most appropriate structural tool to determine molecular structures of biological nanomachines that generally consist of multiple protein subunits and/or nucleic acids with a total mass greater than 0.5 million Daltons. The goal is to develop information discovery and integration methodologies for deriving atomic models of nanomachines. Such models will be derived from 3-dimensional (3-D) cryoEM mass density function (i.e. a volumetric density map) in conjunction with physics of protein folding and informatics data. This project is made possible by an integration of the expertise of five investigators in computer graphics, computational biophysics, structural informatics and cryoEM. The intellectual merit of this research is highlighted by the computational approaches of extracting structural information from low-resolution, complex cryoEM volume densities and integrating this information into classical protein structure modeling paradigms, such as comparative modeling and ab initio modeling, for understanding biological nanomachines. The three research goals involve information discovery, information integration and validation of the proposed algorithms. The proposed research will have significant impacts in three disparate disciplines: computer science, molecular modeling, and cryoEM. Furthermore, the team will disseminate their resulting tools freely to the academic community and will host a workshop towards the end of the project. To enhance the impact of their research, the investigators will integrate research with education at each member institution with an eye towards diversity. In particular, these investigators will develop a virtual didactic course in modeling of biological nanomachines for graduate and senior undergraduate students at the five participating institutions.",3,Washington University
1020001000304420,HCC: Collaborative Research: Affective Learning Companions: Modeling and supporting emotion during learning,"Emotion and motivation are fundamental to learning; students with high intrinsic motivation often outperform students with low motivation. Yet affect and emotion are often ignored or marginalized with respect to classroom practice. This project will help redress the emotion versus cognition imbalance. The researchers will develop Affective Learning Companions, real-time computational agents that infer emotions and leverage this knowledge to increase student performance. The goal is to determine the affective state of a student, at any point in time, and to provide appropriate support to improve student learning in the long term. Emotion recognition methods include using hardware sensors and machine learning software to identify a student's state. Five independent affective variables are targeted (frustration, motivation, self-confidence, boredom and fatigue) within a research platform consisting of four sensors (skin conductance glove, pressure mouse, face recognition camera and posture sensing devices). Emotion feedback methods include using a variety of interventions (encouraging comments, graphics of past performance) varied according to type (explanation, hints, worked examples) and timing (immediately following an answer, after some elapsed time). The interventions will be evaluated as to which best increase performance and in which contexts. Machine learning optimization algorithms search for policies that further engage individual students who are involved in different affective and cognitive states. Animated agents are enhanced with appropriate gestures and empathetic feedback in relation to student achievement level and task complexity. Approximately 500 ethnically and economically diverse students in Massachusetts and Arizona will participate.<br/><br/>The broader impact of this research is its potential for developing computer-based tutors that better address student diversity, including underrepresented minorities and disabled students. The solution proposed here provides alternative representations of scientific content, alternative paths through material and alternative means of interaction; thus, potentially leading to highly individualized science learning. Further, the project has the potential to advance our understanding of emotion as a predictor of individual differences in learning, unveiling the extent to which emotion, cognitive ability and gender impact different forms of learning.",2,University of Massachusetts Amherst
1020001000304423,Collaborative Research III-COR: From a Pile of Documents to a Collection of Information: A Framework for Multi-Dimensional Text Analysis,"Many information workers are swamped with unfamiliar collections of text. One challenge is to obtain an accurate overview of a large text collection, such as the public comments collected in ""notice and comment"" rulemaking. No single tool currently provides a sufficiently diversified picture of such a corpus, and no adequate theory exists to help people explore and form a deep and nuanced understanding of such a text collection. This research seeks to develop a computational framework that allows further exploration of this problem from multiple, integrated perspectives. All the assembled perspectives will be brought together into a single overall supra-document structure that is dynamically constructed under user guidance. In this structure, hierarchical topic clusters will be cross-linked by opinion and argumentation links, using two classes of text analysis engines: one for topics and subtopics, and the other for argument structures. The research team will design, develop, build, and systematically test an overall text exploration framework, an application to support federal regulation writers, one called the Rule-Writers Workbench. There is a strong collaboration with Federal government officials who will provide data and participate in user testing. The three PIs have successfully collaborated on a related project under previous NSF funding. <br/><br/>Intellectual Merit: This is a sustainable collaboration between computer science and political/social science research, rooted in a challenging and important real world application and informed by years of end user research. Dynamic, user-driven subtopic definition and clustering algorithms coupled with<br/>language modeling are an innovative yet reachable set of goals. The framework to be developed will be grounded in the humanities disciplines' expertise in rhetoric, discourse structure, and subjectivity.<br/><br/>Broader Impacts: The Rule-Writers Workbench will allow federal government regulation writers to employ a suite of technical tools that perform independent analyses of public responses to proposed regulations, including near-duplicate detection and clustering, user-based topic selection from dynamically extracted keywords, opinion identification, and subtopic clustering. These capabilities will open new avenues for federal comment analysis.",0,University of Pittsburgh
1020001000304424,HCC: Physical and Digital Design for Fluid Collaboration,"One promise of computing technology is to deliver information anytime, anywhere. Yet, even in the most computerized settings such as offices, this vision has yet to be fully realized. Instead, many workplaces suffer from an unfortunate irony: the very systems designed to solve workplace problems have created new difficulties. People are expert at seamlessly managing social and information flows; however, the technological substrate intended to support such collaboration is often brittle and slow to adapt. Interactions with this technological infrastructure become foregrounded, forcing users to attend to the technology itself rather than to the work at hand. This disconnect becomes all the more apparent in the face of highly dynamic, fluid forms of collaboration in which technological inertia slows the pace of interchange. Reconfiguring our technical infrastructure is rarely as easy or as natural as moving chairs together in a meeting room.<br/><br/>This proposed research aims to create, deploy, and evaluate a system of technical infrastructure that will help rather than hinder fluid collaboration. It will attempt to couple the digital infrastructure (the services, applications, protocols, and devices in a space) to the physical infrastructure of that space. This will make the digital infrastructure responsive to actions taken in the physical realm; likewise, it will make the physical infrastructure a medium for conveying affordances and feedback of the digital capabilities in a space to its users. This coupling will enable the fluid movement of information and collaborative artifacts from the physical domain to the digital, and vice versa. The results will be not only a set of digital services designed to enhance collaboration, but a deep integration of those services into the physical environment, along with a set of design guidelines, principles, and evaluation methodology for how to successfully blend the physical and the digital to better support fluid collaboration.<br/><br/>Broader Impact: This research has tremendous practical import, with the potential to improve the productivity of knowledge workers by decreasing the frustration and inefficiencies associated with on-the-fly use of collaborative technologies. The strong partnership with Steelcase and a commitment to public release of prototypes should facilitate rapid transfer of key findings to industry. The prototype design and evaluation activities will serve to engage students at all levels in Georgia Tech's new Human-Centered Computing program.",4,Georgia Tech Research Corporation
1020001000304425,III-CXT: Learning from graph-structured data: new algorithms for modeling physical interactions in cellular networks,"III-CXT: Learning from graph-structured data: new algorithms for <br/>modeling physical interactions in cellular networks<br/><br/>The complex behavior of the cell derives from an intricate network of <br/>molecular interactions of thousands of genes and their products. <br/>Understanding how this network operates and predicting its behavior <br/>are primary goals of biology and have broad implications for life <br/>science, medicine and biotechnology.<br/><br/>The genomic information revolution of the last ten years has enabled <br/>new systems-level and data-driven approaches for studying cellular <br/>networks. In particular, using machine learning to model gene <br/>regulatory networks---the switching on and off of genes by regulatory <br/>proteins that bind to non-coding DNA---has emerged as a central <br/>problem in systems biology. Now, an explosion of new high-throughput <br/>technologies for measuring physical interactions between proteins and <br/>between protein and DNA provides a new data integration challenge for <br/>computational modeling of gene regulation. These new data can all be <br/>viewed as graph-structured data, or physical interaction networks.<br/><br/>The central computational goal of this project is to develop new <br/>machine learning algorithms for exploiting graph-structured <br/>data, including: (1) boosting with efficient graph mining; (2) graph <br/>kernels based on subgraph histogramming; and (3) information-based <br/>graph partitioning. These new algorithms will be used to integrate <br/>physical interaction network data into models of gene regulation in <br/>order to better represent underlying biological mechanisms. The <br/>focus will be on two fundamental modeling problems: inferring signal <br/>transduction pathways and modeling cis regulatory modules at the <br/>level of DNA sequence and interacting regulatory proteins. The <br/>algorithms will be applied both to publicly available data and to <br/>primary gene expression data provided by one of the investigators to <br/>study hypoxia in yeast and the response to environmental toxins <br/>in mammalian neural cells.<br/><br/>This project will learn systems-level models that lead to new insight <br/>into the underlying mechanisms of gene regulation and open the way to <br/>broader biological discoveries. All data, results and source code <br/>will be publicly available via the Web (http://www.cs.columbia.edu/compbio/cellular-networks) and disseminated through courses and <br/>bioinformatics software packages. The project will also create <br/>undergraduate research opportunities for joint dry and wet lab <br/>projects and outreach activities to introduce New York City public <br/>high school students to new interdisciplinary areas of science.",3,Columbia University
1020001000304428,HCC: Collaborative Research: Information Exchange and Social Factors in Human-Computer-Teamwork Decision Making,"This project addresses problems central to the design of decision-making strategies that enable computer agents to work effectively with people in heterogeneous groups that interact in carrying out complex activities. These mixed networks of people and systems arise in a wide variety of real-world applications as well as in virtual reality and simulation systems used for training. They occur in settings in which computer systems support people who are working together, those in which they act as proxies for individual people, and those in which groups of agents act autonomously (but alongside people) to carry out constituent tasks for which they are responsible. Despite mixed networks being widespread, the design of agents that can operate in such settings has received less attention than the design of agents for multi-agent systems comprising only computer agents.<br/><br/>The inclusion of people in mixed networks presents novel problems for the design of autonomous agent decision-making mechanisms. This proposal focuses on the following three challenges, which have not been investigated sufficiently in prior work and which agent designers must address to construct systems able to work well with their human partners in mixed networks: (1) information exchange policies for agent competence and past behavior; (2) design of interruption management mechanisms for collaborative interactions; and (3) learning and incorporation of models of social factors and organizational structures into decision-making mechanisms.",6,University of Maryland College Park
1020001000304429,III-COR: Collaborative Research: Graceful Evolution and Historical Queries in Information Systems-- a Unified Approach,"III-COR: Collaborative Research: Graceful Evolution and Historical <br/>Queries in Information Systems-- a Unified Approach <br/><br/>Database schema evolution represents a constant in the life cycle <br/>of Information Systems, and is the source of major costs for <br/>maintenance, upgrading, and service down time. The traditional <br/>schema revision process depends on the installation of a new <br/>schema along with the revised database, and a converted set of <br/>applications (laboriously rewritten to work with this schema). <br/>Instead, this project develops the novel enabling technology <br/>whereby the schema evolution problem is reduced to coordinating <br/>mappings between multiple concurrent versions of the schema, <br/>applications, and the database. This is realized by the <br/>Meta-Manager system which provides integrated management of <br/>evolving (i) data, and (ii) metadata, and efficiently supports the <br/>(iii) mappings, and (iv) software artifacts needed for graceful <br/>schema evolution. Further, the Meta-Manager allows for <br/>preservation and querying of database history while it assists the <br/>user in planning how to evolve the current schema version with <br/>``what-if'' evolution scenarios. The functionality and performance <br/>of the system is validated using various testbeds, such as, the <br/>San Diego Supercomputing Center's Storage Request Broker, which <br/>hosts scientific data for various research groups ranging from <br/>astrophysicists to biologists. <br/><br/>This novel and timely approach provides a unified solution to both <br/>the evolution and preservation of information systems. Because of <br/>the key role played by information systems, a broad range of <br/>scientific, educational, and economic activities will benefit from <br/>these advances. <br/><br/>Results are disseminated via publications, reports and <br/>demos available from the project web sites: <br/><br/>http://www.cs.ucr.edu/~tsotras/meta-manager <br/>http://wis.cs.ucla.edu/projects/meta-manager <br/>http://db.ucsd.edu/people/alin/meta-manager",8,University of California-San Diego
1020001000304433,HCC: Improving the Performance of Global Software Development Learning Teams,"This research proposes to help students work more effectively in global software teams. The project will involve researchers from Turkey, Panama, and England along with industrial advisors from Travelocity and Lockheed and will focus on issues related to teaching undergraduate computer science students how to use computer-supported collaborative tools to work together to develop large software applications. Researchers will enhance a Web service infrastructure that can support collaborative software tools and use it in advanced programming courses offered at each of the four Universities. The courses will allow researchers to examine how 'distance' factors such as time, geography and culture affect globally distributed student learners. Data gathered from student interactions will then be used to create strategies that will improve collaboration among the culturally, spatially, and temporally dispersed learning teams. The result of this research will be a model and technology that will focus on the problems related to teaching global software development. These problems include teaching students how to use collaborative software, be members of a culturally diverse work team, manage time, organize ideas, and chat (communicate) with one another. <br/><br/>This project will examine ways to use technology to help students learn how to overcome barriers of time, space and culture. The proposed research represents an important contribution by teaching students how to work in culturally mixed dispersed teams. It will also contribute to practical knowledge about how to support distributed learning teams by determining which specific individual, spatial-temporal, and cultural factors are important and how they interact in the context of a computer supported collaborative environment. It will also test whether specific problems can be remediated through direct or indirect intervention, and whether these remediation strategies actually improve group performance. Although this study will occur in the context of a programming course, the results will have implications for geographically distributed collaborative learning teams in general. It should also have an effect on broadening the experiences of all students who participate in the study by exposing them to people from different cultures and nationalities",9,University of North Texas
1020001000304434,III-CXT:Collaborative Research: Integrated Modeling of Biological Nanomachines,"Biological nanomachines are the assemblies that carry out all the basic biological processes in a living organism. Electron cryo-microscopy (cryoEM) is the most appropriate structural tool to determine molecular structures of biological nanomachines that generally consist of multiple protein subunits and/or nucleic acids with a total mass greater than 0.5 million Daltons. The goal is to develop information discovery and integration methodologies for deriving atomic models of nanomachines. Such models will be derived from 3-dimensional (3-D) cryoEM mass density function (i.e. a volumetric density map) in conjunction with physics of protein folding and informatics data. This project is made possible by an integration of the expertise of five investigators in computer graphics, computational biophysics, structural informatics and cryoEM. The intellectual merit of this research is highlighted by the computational approaches of extracting structural information from low-resolution, complex cryoEM volume densities and integrating this information into classical protein structure modeling paradigms, such as comparative modeling and ab initio modeling, for understanding biological nanomachines. The three research goals involve information discovery, information integration and validation of the proposed algorithms. The proposed research will have significant impacts in three disparate disciplines: computer science, molecular modeling, and cryoEM. Furthermore, the team will disseminate their resulting tools freely to the academic community and will host a workshop towards the end of the project. To enhance the impact of their research, the investigators will integrate research with education at each member institution with an eye towards diversity. In particular, these investigators will develop a virtual didactic course in modeling of biological nanomachines for graduate and senior undergraduate students at the five participating institutions.",3,Baylor College of Medicine
1020001000304435,HCC: Collaborative Research: Social-Emotional Technologies for Autism Spectrum Disorders,"This project is a collaborative effort between the Massachusetts Institute of Technology Media Laboratory and the Groden Center to develop and evaluate wearable social-emotional technology that helps individuals with high-functioning autism or Asperger syndrome acquire an affinity for the social domain and improve their overall social abilities. The project will develop the first wearable camera system capable of perceiving and visualizing social-emotional information in real-time human interaction. Using a small wearable camera and video-pattern analysis algorithms, the system analyzes video of the wearer or interaction partner and tags it at multiple granularities (facial actions, communicative facial or head gestures, and emotions). <br/><br/>The wearable system aims to: (1) facilitate learning and systemizing of social-emotional cues; (2) promote self-reflection and perspective-taking; (3) allow wearers to study subtle nonverbal cues and share experiences with peers, family members, and caregivers; and (4) contribute new computational models and theories of social-emotional intelligence in machines. A clinical study will compare the efficacy of the wearable system to current gold standard interventions for autism spectrum disorders (ASD). A participatory approach to the co-design and use of technology draws on the experiences of individuals with ASD and their solutions to systematizing social interactions, thereby empowering them to enhance their relationships, while participating in the development of next-generation social-emotional intelligent technologies.<br/><br/>The project will make significant contributions to the difficult challenge of developing machine intelligence that is robust at handling human social interaction. When people or machines fail to perceive, understand, and act on social-emotional cues they are hindered in their ability to interact with and learn from others. The results of this interdisciplinary work can be leveraged in human-computer interaction, robotics, and technologies with social-emotional intelligence. The research will also provide investigators with a new tool to study nonverbal communication outside of laboratory settings. <br/><br/>This project brings together the overlapping and converging goals and challenges of autism research and affective computing, both already interdisciplinary in nature, and demonstrates how a collaboration could lead to several mutually beneficial outcomes, from developing new tools to assist people with ASD in understanding and functioning in the social-emotional world, to developing new computational models and theories that enable technology to provide an overall better experience to those who use it. This work also promotes the training and education of students and people with ASD by involving them in cutting-edge scientific research.",11,Massachusetts Institute of Technology
1020001000304438,RI: Exploiting and Exploring Discourse Connectivity: Deriving New Technology and Knowledge from the Penn Discourse Treebank,"Large scale corpora annotated at the sentence level have played a critical role in natural language research. They have enabled large scale integration of statistical knowledge (derived from the corpora) with linguistic knowledge, leading to both technological and scientific applications, such as information extraction, question answering, summarization, and machine translation, among others. This approach is now being extended to the discourse level, thus going beyond the sentence level. Using a resource called the Penn Discourse Treebank (PDTB), a large scale corpus annotated with discourse structure along with the associated semantics, major new experimental work on discourse processing is being carried out, leading to the generation of more coherent summaries and texts, extraction of complex relations in texts, among others, as well as foundational research relevant to language technology. This work is also providing a deeper understanding of the relationship between sentence level and discourse level structures. While pursuing these goals, a variety of tools for making productive use of the PDTB resource are also being developed. This research program is also coupled with a strong educational program involving training researchers in the PDTB methodology so that similar resources can be developed in other languages substantially divergent from English. This part of the research program has international components including collaboration with research groups in the Czech Republic, India, and Finland. The international collaboration is funded by the NSF Office of International Science and Engineering.",0,University of Pennsylvania
1020001000304439,"Rethinking Privacy Policies in Location-Aware Systems: Bridging the Gaps Getween Users Applications, and Policy Systems","This project will seek ways to greatly simplify privacy policy creation for users. It is based on the premise that the use of contextual information from applications that help users to coordinate or communicate with others - such as their calendar, messaging contacts, and address books - can help in creating privacy policies for location-aware systems. In contrast to earlier work on privacy that focused on such location-independent applications as Web services, the focus in this project is primarily on privacy in location-aware systems where users carry devices that help determine their location. These include GPS-enabled cell phones, wireless PDAs, and laptops. This work will leverage a location-aware infrastructure called Whereabouts, which is deployed in the new Computer Science and Engineering building of the University of Michigan.<br/><br/>This research integrates three key concepts to simplify privacy policy specifications: (1) Privacy meta-policies: These are high-level rules that help map a user's context information. For example, a rule may make the user's location available to meeting participants near the start time of a scheduled meeting. (2) Privacy circles: These structures help users share privacy meta-policies with other users. The research will evaluate the hypothesis that privacy meta-policies will be much easier to share than low-level policy preferences that are typically available in existing privacy systems for ubiquitous computing. (3) Privacy mirrors: These are tools to help users understand the impact of their privacy meta-policies. In particular, a user can use a privacy mirror to see how a set of meta-policies would have revealed their location to other users for past events or scheduled future events.<br/><br/>The success of this work will help users coordinate and collaborate better with each other using location information, while safeguarding privacy. This is important, because location-aware computing infrastructures are starting to become widely available, but they have still not penetrated mainstream use. Better calendaring and coordination tools will also result from this research, and it will help push the frontiers on understanding how people can better manage their privacy in the context of location-aware systems.",18,University of Michigan Ann Arbor
1020001000304441,HCC: Privacy in Home-Based Ubicomp,"This project addresses the acute privacy challenge of home-based health care based on ubiquitous computing, or ubicomp, where vulnerable populations risk enforced technological intimacy. It will employ the well-defined ""design for values"" method to create an innovative toolkit that can be used by our aging population, their caregivers, and designers to ensure privacy and autonomy in home-based ubicomp.<br/><br/>Ubiquitous computing integrates technology into our everyday environments, fundamentally altering privacy by creating continuous, detailed data flows. Ubicomp will result in an environment that is aware, active and responsive. It creates an aware environment through the pervasive distribution of sensors. It is active because sensor data are processed and examined. It is responsive in that the technology acts on the environment based on processed data. As ubicomp is networked, the data and decisions have the potential to be observed from any connected locale on the planet.<br/><br/>Design for privacy is complicated by the fact that privacy is a socially constructed value that differs significantly across environments and individuals. Currently, design for privacy requires a user who understands the social implications of ubicomp technology, demands a design that respects privacy, and articulates specific technical design requirements. Design for privacy also requires a ubicomp designer with mastery of privacy enhancing technologies, security mechanisms, and a profound understanding of privacy. Neither of these is a reasonable burden. This research will decrease the burdens for both parties.<br/><br/>This project will create a system for designing highly customized privacy-enhancing ubicomp. The privacy framework consists of three integrated, complementary components. The first component is a participant tool for eliciting individual elder privacy concerns, making it easy for non-technical people to express privacy concerns. The second is a designer tool that translates elder concerns into technical choices or suggestions. The third is a privacy-enhancing code library for ubicomp sensors that vastly simplifies privacy-sensitive design, including data filtering, access control list creation, and integration of cryptographic privacy enhancing technologies.<br/><br/>The broader impacts of the project include: (1) development of a multidisciplinary curriculum that will engage over 40 students in the research project; (2) a living laboratory to enable research and curricular activities in business, nursing, health and other disciplines; (3) expansion of the potential for privacy-enhanced home-based healthcare; (4) the development of tools to ensure that older people make their own choices about home monitoring and protection of their privacy and autonomy, and (5) a design tool and code library that enable ubicomp designers to easily embed appropriate privacy-enhancing and strong security-protecting mechanisms in home-based ubicomp without requiring expertise in privacy or security.",18,Indiana University
1020001000304442,Integrating the Local and Global Structure of Natural Scenes,"Abstract<br/><br/>Title: Integrating the Local and Global Structure of Natural Scenes<br/>PI: Michael Lewicki, CMU<br/>Co-PI: Aude Oliva, MIT<br/><br/><br/>One of the fundamental problems in modeling human vision is understanding the visual cues and computations that underlie the perception of natural visual scenes. Recent studies have suggested that there exist important aspects of scene perception which do not depend on the recognition of objects in the scene and are more global or holistic in nature. The objective of this proposal is to use integrated theoretical and experimental approaches to gain insight into the information processing that underlies the representation of natural scenes and the computation of their global and spatial layout properties. The research will be driven by the theoretical hypothesis that visual system representations at both a local and global level are adapted to the statistical structure of the natural images and scenes. This project will investigate local structure of natural images by developing hierarchical statistical models of local textures and testing to what extent human observers are sensitive to the same statistical features. The spatial structure of natural images will be investigated by developing statistical models that identify scene regions over which there are smooth changes in the local texture distribution and comparing the resulting segmentation to that of human observers. The global structure of natural scenes will be investigated by developing a statistical model that learns holistic, statistical representations, with the aim to evaluate scene depth and spatial layout information as human observers do.<br/>The broader impact of this work is that it will develop theoretical models that can be directly tested at a perceptual level and are also sufficiently detailed that they could lead to testable models of the <br/>underlying neural mechanisms. Furthermore, it will be essential to <br/>understand the computational principles underlying human perception in order to emulate their behavior in machines and also to better understand our own visual experience.<br/><br/>URL: http://www.cnbc.cmu.edu/nsf-natural-scenes/",1,Carnegie-Mellon University
1020001000304443,HCC: Collaborative Research: Continuous Control Brain-Computer Interfaces for Creative Expression,"Severe motor disabilities, including locked-in syndrome and paralysis, impact the quality of life for millions of people worldwide. The PIs' prior work in brain-computer interfaces based on functional near-infrared (fNIR) imaging has shown great promise for restoring communication and environmental control to people with such disabilities. Currently, typical control interfaces for these systems are simple discrete selection paradigms, which have proven to be effective but limited in information throughput rates. Innovative control interfaces based on continuous control paradigms, which dynamically map brain signal levels to control signals, have not been adequately studied for fNIR imaging. Depending upon the extent to which brain signals can be effectively mapped to continuous control, adding this feature to existing discrete control could significantly increase the range of tasks that can be performed by users of an fNIR-based direct brain interface (e.g., positional selection or 2-D drawing). In this work, the PIs will explore innovative direct brain-computer interfaces for continuous control and use them to develop applications for creative expression. For people with severe motor disabilities, creative expression can provide an emotional outlet as well as mental exercise to improve quality of life. The tasks inherent in creating visual art, such as drawing, coloring, and texturing, cannot be accomplished with discrete controls. Therefore, visual art provides an ideal experimental platform to study fNIR-based continuous control interfaces. It also provides an engaging and motivating platform for training that will improve users' abilities to control a direct brain interface. To these ends, the PIs will study non-traditional control interfaces for continuous and discrete selection such as wheels, dials, and gauges, to determine to what extent fNIR signals can be mapped to continuous control. The PIs will explore continuous methods for selection and control of art media such as brushes, colors, textures, and shapes, and investigate to what extent continuous brain signals can be translated into visual art gestures (drawing, shading, coloring). The advice of a professional, internationally-known artist who has ALS will guide the user requirements of the control interfaces. Quantitative and qualitative user performance data will be collected, and will among other things be used to compare learning effects with a visual art paradigm against traditional, discrete selection exercises to determine if training time and performance can be improved. Project outcomes will add to the body of knowledge for assistive technology and human-computer interfaces.<br/><br/>Broader Impacts: Methods for translating cortical oxygenation signals into continuous control signals for user interfaces will have mainstream applications for assistive technologies by essentially ""smoothing"" noisy input signals. Such developments could be applied for use by those with reduced motor coordination, including the elderly, young children, and those with motor diseases such as Parkinson's disease. Mainstream users may benefit from a hands-free interface, and neural control could provide added dimensions to the creative process.",20,Georgia Tech Research Corporation
1020001000304444,RI: Synergistic Machine Learning: Collaboration and Topology Exploitation in Dynamic Environments,"Proposal 0705681<br/>""RI: Synergistic Machine Learning: Collaboration and Topology Exploitation in Dynamic Environments.""<br/>PI: Terran Lane<br/>University of New Mexico<br/><br/>ABSTRACT<br/><br/>The goal of this project is to investigate a new generation of machine learning methods for networks comprised of a large number of inexpensive, lightweight, powerful nodes that integrate computation, sensing, and communication. Embedded in the physical world, these nodes will self-assemble into environmentally-aware networks to assist with environmental monitoring, safety, workflows, education, and entertainment. To achieve these goals, such ubiquitous computing networks need to be able to integrate diverse streams of sensor information into coherent views of the environment. This project seeks to create a new generation of machine learning methods to address the resulting data fusion and environmental awareness challenges. The project will develop topology-aware machine learning methods that (a) learn and exploit the topological structure of the environment, and (b) enable collaborative learning among distributed learning agents. This project will test these new machine learning methods on live sensor networks currently installed at two active volcanos: Kilauea (Hawaii) and Mt. Erebus (Antarctica). This project will involve undergraduate and graduate students via research assistantships. Further, this project will involve pre-college students as investigators through middle-school student workshops at the Sally Ride Festival. This work seeks to transform both machine learning and ubiquitous computing by opening up a vast space of novel machine learning problems that are beyond current techniques, and by inspiring development of new capabilities for ubiquitous computing systems.",2,University of New Mexico
1020001000304453,"HCC: Large Lexicon Gesture Representation, Recognition, and Retrieval","It is estimated that American Sign Language (ASL) is used by up to 2 million people in the United States. Yet many resources that are taken for granted by users of spoken languages are not available to users of ASL, given its visual nature and its lack of a standard written form. For instance, when an ASL user encounters an unknown sign, looking it up in a dictionary is not an option. With existing ASL dictionaries one can easily find what sign corresponds to an English word, but not what English word (or, more generally, what meaning) corresponds to a given sign. Another example is searching for computer files or web pages using keywords, which is now a frequent activity for computer users. At present, no equivalent for keyword search exists for ASL. ASL is not a written language, and the closest equivalent of a text document is a video sequence of ASL narration or communication. No tools are currently available for finding video segments in which specific signs occur. The lack of such tools severely restricts content-based access to video libraries of ASL literature, lore, poems, performances, or courses. The core goal of this research is to push towards making such resources available, by advancing the state-of-the-art in vision-based gesture recognition and retrieval. This poses challenging research problems in the areas of computer vision, machine learning, and database indexing. The effort will focus on the following: developing methods for learning models of sign classes, given only a few training examples per sign, by using a decomposition of signs into phonological elements; designing scalable indexing methods for video lexicons of gestural languages that achieve sign recognition at interactive speeds, in the presence of thousands of classes; creating indexing methods for spotting signs appearing in context in an ASL video database; incorporating linguistic constraints to improve performance of both lower-level vision modules, such as hand pose estimation and upper body tracking, and higher-level learning and indexing modules; and explicitly designing methods that can work with error-prone vision modules that often provide inaccurate or ambiguous outputs. The PIs will create two demonstration systems: an ASL lexicon containing a comprehensive database of ASL signs; and a ""Sign Language Google"" that can search for specific signs in large databases of ASL video content. The systems will be trained and evaluated using thousands of video sequences of signs performed in isolation and in context by native ASL signers. This usage data will be valuable for studying co-articulation effects and context-dependent sign variations. The signs collected will include the full list of ASL signs appearing in the first three years of standard college ASL curricula.<br/><br/>Broader Impacts: The methods developed in this project will enable sign-based search of ASL literature, lore, poems, performances, courses, from digital video libraries and DVDs, a capability which will have far-reaching implications for improving education, opportunities, and access for the deaf. These algorithms also aim to enable video-based queries of ASL lexicons, and eventually full-fledged dictionaries with metalinguistic information about signs and examples of usage. 
By enabling those learning ASL to ""look up"" a sign they do not know, this technology promises to transform the way students of ASL (both deaf and hearing), parents of deaf children, sign language interpreters, and linguists learn about signs they encounter. The algorithms developed in this effort may well lead to more robust ASL recognition systems, which can handle natural signing with a large lexicon of signs and the technology will also advance the state of the art in gesture recognition and synthesis systems. The large linguistically annotated corpus of native ASL produced as part of this effort will itself be an important resource.",1,Trustees of Boston University
1020001000304454,RI: Machine Learning for Robust Recognition of Invertebrate Specimens in Ecological Science,"Proposal 0705765<br/>PIs: Thomas Dietterich, David Lytle, Andrew Moldenke, Robert Paasch, Eric Mortensen, Linda Shapiro<br/><br/>Institution: Oregon State University<br/><br/>Title: RI: Machine Learning for Robust Recognition of Invertebrate Specimens in Ecological Science<br/><br/>Abstract<br/><br/>An interdisciplinary team of computer scientists, mechanical engineers, and entomologists from Oregon State University and the University of Washington is developing computer vision, machine learning, and robotic methods for high-precision generic object recognition and applying these methods to the imaging and classification of invertebrate specimens of soil mesofauna and freshwater zooplankton. Current manual methods for recognizing and counting these organisms are extremely tedious and time-consuming, and require a high degree of expertise. Automated, rapid-throughput population counting will provide a revolutionary new tool for ecologists to understand and monitor soil and freshwater ecosystems. Soil arthropods form a central component of ecological processes in soils, so accurate soil arthropod population counting is critical to improving our understanding of ecosystem functions and community ecology. Freshwater zooplankton species are a fundamental component of many ecosystems, because they transfer energy from primary producers to consumers such as fish and birds. Zooplankton also serve as a model system for understanding basic ecosystem processes, predator-prey dynamics, and disease ecology.<br/><br/>Automated recognition of these organisms poses difficult classification problems because it requires much more precise discrimination than generic object recognition tasks of the type commonly studied in computer vision. Current approaches to generic object recognition employ a bag-of-keypoints methodology in which hand-crafted region detectors, hand-crafted region descriptors, and unsupervised feature dictionaries are applied to convert an image into a fixed-length feature vector. Machine learning is only employed at the final step to classify this feature vector into a generic object class. This project seeks to integrate machine learning into all aspects of the vision pipeline. It will develop and test discriminative learning algorithms for the automated discovery of region detectors, region descriptors, feature dictionaries, and classifiers. To reduce the risk of overfitting, sub-part correspondences and spatial constraints will be imposed to constrain the learning algorithms. In addition to discriminative methods, the investigators will also learn generative models to help reject debris and unknown species that appear in the images. Model adaptation methods will be developed to take advantage of the fact that in any given biological sample, organisms of the same taxon tend to be more similar to each other than they are when samples from multiple sites are pooled.<br/><br/>Progress on this project will be regularly reported at http://web.engr.oregonstate.edu/~tgd/bugid/",2,Oregon State University
1020001000304456,III-CXT: Process Families and Their Application to Online Dispute Resolution,"This research will evaluate a generative approach to creating families of processes, by generating families of Online Dispute Resolution (ODR) processes, and evaluating them through use at the National Mediation Board (NMB). The generative approach will be guided by NMB specifications of process goals, which will vary in emphasis according to different weightings of both tangible goals, such as resolving a specific conflict, and such nontangible goals as empowerment and relationship-building. The process families will be generated by binding different combinations of process concerns, such as coordination, agent behaviors, and artifact flows, into a high-level metaprocess framework. The generated process instances will also include specified instrumentation and measurement vehicles. This will facilitate the evaluation of the processes by NMB and project researchers and will form the foundation for evaluation of the overall generative approach. This approach will require a process definition language that features clear separation of concerns. An example is the Little-JIL process definition language, developed at UMass, which will be used as the basis for this research. The project will add to understanding of process generation and process technology in general, while also creating useful processes for the NMB, and a superior framework for social science experimentation with dispute resolution processes as well as processes in general. The project team includes computer science researchers, an ODR expert, dispute resolution researchers, and representatives from the NMB. The team has conducted successful NSF-funded research on a previous project, whose results indicate the need for the research proposed here. <br/><br/>Intellectual Merit: This project continues the exploration of the value of using software engineering perspectives and technology to deal with processes as rigorously definable objects. The main issue addressed here is the management of families of processes. Previous research indicates that organizations like NMB require families of processes, rather than a single process, and that such processes may not always be aimed at producing a single product nor one that is tangible. Use of a process definition language featuring clean separation of concerns seems to be a promising way to address these needs, and that approach will be pursued and evaluated in this research, thus making an important contribution to understanding the formal nature of processes. A process generation framework will be built and used to generate real ODR processes that will be used and evaluated by the NMB. The rigor and precision of these processes, and their incorporation of vehicles for evaluation, will facilitate the comparison of processes that differ in precisely documented ways. This will be an important contribution to social science research, supporting the ability of social scientists to perform precise experimentation with processes, with ODR processes being used as a first example. <br/><br/>Broader Impact: The ODR processes provided will be of considerable value to the NMB in a number of ways. The processes will improve NMB's effectiveness in dealing with disputes in the airline and railroad industries. They will also serve as an aid that NMB can use to train new personnel. 
The clarity and precision of the processes will render them suitable subjects for ongoing discussion and evaluation, leading to improvements in conflict resolution effectiveness. Moreover, success at the NMB will demonstrate the applicability of these ideas and approaches to the dozens of other government agencies responsible for dispute resolution.",10,University of Massachusetts Amherst
1020001000304457,III-CXT: Computer Science Research Using the Cornell Web Lab to Study Social and Informational Processes on the Web,"IIS - 0705774<br/>Arms, William Y.<br/>Cornell University<br/>III-CXT: Computer Science Research Using the Cornell Web Lab to Study Social and Information Processes on the Web<br/><br/>The project is a collaborative effort between computer scientists and social scientists to advance understanding of the dynamics of the World Wide Web and its impact on science and society. The Web Lab enables social science scholars and others to study issues related to the diffusion of innovation - how new ideas arise and spread across large populations. The proposal addresses fundamental new computing and information science research questions and extends the Web Lab datasets to new levels of depth and complexity so that critical issues of scale can be explored. The project has four goals: (1) to develop large-scale datasets that support wide-ranging research about the nature and evolution of the Web, (2) to study human interaction on the Web by combining methods of hypothesis-driven research from the social sciences with large-scale modeling of information structures from computer and information sciences, (3) to understand better how the Web evolves over time by developing models of the creation, modification, and destruction of pages and link structures, and by testing those models on Web-scale snapshots, and (4) to make the datasets available over the TeraGrid.",11,Cornell University
1020001000304460,HCC: Collaborative Research: Continuous Control Brain-Computer Interfaces for Creative Expression,"Severe motor disabilities, including locked-in syndrome and paralysis, impact the quality of life for millions of people worldwide. The PIs' prior work in brain-computer interfaces based on functional near-infrared (fNIR) imaging has shown great promise for restoring communication and environmental control to people with such disabilities. Currently, typical control interfaces for these systems are simple discrete selection paradigms, which have proven to be effective but limited in information throughput rates. Innovative control interfaces based on continuous control paradigms, which dynamically map brain signal levels to control signals, have not been adequately studied for fNIR imaging. Depending upon the extent to which brain signals can be effectively mapped to continuous control, adding this feature to existing discrete control could significantly increase the range of tasks that can be performed by users of an fNIR-based direct brain interface (e.g., positional selection or 2-D drawing). In this work, the PIs will explore innovative direct brain-computer interfaces for continuous control and use them to develop applications for creative expression. For people with severe motor disabilities, creative expression can provide an emotional outlet as well as mental exercise to improve quality of life. The tasks inherent in creating visual art, such as drawing, coloring, and texturing, cannot be accomplished with discrete controls. Therefore, visual art provides an ideal experimental platform to study fNIR-based continuous control interfaces. It also provides an engaging and motivating platform for training that will improve users' abilities to control a direct brain interface. To these ends, the PIs will study non-traditional control interfaces for continuous and discrete selection such as wheels, dials, and gauges, to determine to what extent fNIR signals can be mapped to continuous control. The PIs will explore continuous methods for selection and control of art media such as brushes, colors, textures, and shapes, and investigate to what extent continuous brain signals can be translated into visual art gestures (drawing, shading, coloring). The advice of a professional, internationally-known artist who has ALS will guide the user requirements of the control interfaces. Quantitative and qualitative user performance data will be collected, and will among other things be used to compare learning effects with a visual art paradigm against traditional, discrete selection exercises to determine if training time and performance can be improved. Project outcomes will add to the body of knowledge for assistive technology and human-computer interfaces.<br/><br/>Broader Impacts: Methods for translating cortical oxygenation signals into continuous control signals for user interfaces will have mainstream applications for assistive technologies by essentially ""smoothing"" noisy input signals. Such developments could be applied for use by those with reduced motor coordination, including the elderly, young children, and those with motor diseases such as Parkinson's disease. Mainstream users may benefit from a hands-free interface, and neural control could provide added dimensions to the creative process.",20,University of Virginia Main Campus
1020001000304462,III-CXT: Collaborative Research: Advanced learning and integrative knowledge transfer approaches to remote sensing and forecast modeling for understanding land use change,"Intellectual Merits. <br/>The characterization of land cover and usage over large geographical regions, as well as the near/long-term forecasting of changes in land use, is a key problem in geo-informatics that is particularly important for regions that are subject to rapid ecological changes or urbanization. At present, the data and knowledge required for detailed and accurate characterization is scattered across both traditional (GIS) spatial data sources and across remotely sensed data, and their associated models, none of which inter-operate well. This research will develop a comprehensive framework for efficient and accurate mapping, monitoring and modeling of land cover and changes in usage over large regions. This endeavor involves three complementary activities: (i) large scale classification of remote sensing imagery using advanced learning methods, including transfer learning, active learning and manifold based data descriptors; (ii) next-generation spatial modeling using ensembles for forecasting land transformations; and (iii) integration of GIS and remote sensing data by distributed, privacy aware learning, integrating taxonomies obtained from different data sources and portal building. A plan of interaction with various stakeholders is proposed to ensure that the results are meaningful and actionable. This project will result in substantial advances in analysis of remotely sensed data over extended regions and lead to a substantial reduction in the uncertainty of long-term forecasts of change. Concurrently, the chosen application domain will also provide a concrete setting that motivates several new data mining problems, leading to new algorithmic formulations and solutions that benefit the broader data mining community. <br/><br/>Broader Impacts. <br/>This project is designed to have many, diverse broader impacts. First is the involvement of application scientists in the remote sensing and modeling communities who will benefit from advanced methods in machine learning. The research results will be brought into the classroom through new graduate courses. Popular science lectures for middle and high school are also planned since the subject matter and results can be conveyed meaningfully to this audience in a visual way that emphasizes issues of broader concern, such as the impact of ecological changes and urban sprawl. Two project-wide workshops are proposed that will also involve stakeholders (e.g., planners) who would directly benefit from the results and provide valuable feedback. A portal will be created in year 3 to provide access to data, code and toolkits produced by the project. Results will be disseminated in each of the three main disciplines represented within the project through scholarly publications. Finally, tools will be developed so that they may eventually be incorporated into Commercial Off The Shelf software, such as GIS and remote sensing software.",5,University of Texas at Austin
1020001000304463,III-CXT: Collaborative Research: Computational Methods for Understanding Social Interactions in Animal Populations,"The goal of the proposed research is to create analytical and computational tools that explicitly address the time and order of social interactions between individuals. The proposed approach combines ideas from social network analysis, Internet computing, distributed computing, and machine learning to solve problems in population biology. The diverse computational tasks of this project include design of algorithmic techniques to identify social entities such as communities, leaders, and followers, and to use these structures to predict social response patterns to danger or disturbances. Nowhere is the impact of social structure likely to be greater than when species come in contact with predators. Thus, the accuracy and predictive power of the proposed computational tools will be tested by characterizing the social structure of horses and zebras (equids) both before and after human- or predator-induced perturbations to the social network. The proposed interdisciplinary research will have broader impacts on a wide range of research communities. New methods for analysis of social interactions in animal populations will be useful for behavioral biologists in such diverse fields as behavioral ecology, animal husbandry, conservation biology, and disease ecology. The machine learning algorithms that will be developed are relevant to many studies in which researchers need to classify temporal interaction data. The proposed network methods have broader relevance to human societies: disease transmission, dissemination of ideas, and social response to crises are all dynamic processes occurring via social networks. Further, through teaching and participation in outreach, students and school teachers will gain access to opportunities for hands-on, interdisciplinary experiences in a new area of computational biology. The research and software resulting from the proposed project will be disseminated in both computational and biological communities and enhanced by cross-disciplinary training activities and will serve to train a new generation of interdisciplinary scientists.",11,University of Illinois at Chicago
1020001000304464,III-COR: iOPENER - A Flexible Framework to Support Rapid Learning in Unfamiliar Research Domains,"In today's rapidly expanding disciplines, scientists and scholars are constantly faced with the daunting task of keeping up with knowledge in their field. In addition, the increasingly interconnected nature of real-world tasks often requires experts in one discipline to rapidly learn about other areas in a short amount of time. Cross-disciplinary research requires scientists in such areas as linguistics, biology, and sociology to learn about computational approaches and applications. Both students and educators must have access to accurate surveys of previous work, ranging from short summaries to in-depth historical notes. Government decision-makers must learn about different scientific fields to determine funding priorities.<br/><br/>The goal of iOPENER (Information Organization for PENning Expositions on Research) is to generate readily-consumable surveys of different scientific domains and topics, targeted to different audiences and levels, e.g., expert specialists, scientists from related disciplines, educators, students, government decision makers, and citizens including minorities and underrepresented groups. Surveyed material is presented in different modalities, e.g., an enumerated list of articles, a bulleted list of key facts, a textual summary, or a visual presentation with zoom and filter capabilities. The original contributions of this research are in the creation of an infrastructure for automatically summarizing entire areas of scientific endeavor by linking three available technologies: (1) bibliometric lexical link mining; (2) summarization techniques; and (3) visualization tools for displaying both structure and content.<br/><br/>The iOPENER software and resulting surveys will be made publicly available via the project Web site (http://tangra.si.umich.edu/clair/iopener/) and research results will be presented at conferences such as the ACL, SIGIR, and ASIST, as well as to broader audiences, e.g., expert specialists, students, educators, and government decision makers. Application areas include digital government, emergency response, and public health issues.",10,University of Maryland College Park
1020001000304465,III-CXT: Collaborative Research: Advanced learning and integrative knowledge transfer approaches to remote sensing and forecast modeling for understanding land use change,"Intellectual Merits. <br/>The characterization of land cover and usage over large geographical regions, as well as the near/long-term forecasting of changes in land use, is a key problem in geo-informatics that is particularly important for regions that are subject to rapid ecological changes or urbanization. At present, the data and knowledge required for detailed and accurate characterization is scattered across both traditional (GIS) spatial data sources and across remotely sensed data, and their associated models, none of which inter-operate well. This research will develop a comprehensive framework for efficient and accurate mapping, monitoring and modeling of land cover and changes in usage over large regions. This endeavor involves three complementary activities: (i) large scale classification of remote sensing imagery using advanced learning methods, including transfer learning, active learning and manifold based data descriptors; (ii) next-generation spatial modeling using ensembles for forecasting land transformations; and (iii) integration of GIS and remote sensing data by distributed, privacy aware learning, integrating taxonomies obtained from different data sources and portal building. A plan of interaction with various stakeholders is proposed to ensure that the results are meaningful and actionable. This project will result in substantial advances in analysis of remotely sensed data over extended regions and lead to a substantial reduction in the uncertainty of long-term forecasts of change. Concurrently, the chosen application domain will also provide a concrete setting that motivates several new data mining problems, leading to new algorithmic formulations and solutions that benefit the broader data mining community. <br/><br/>Broader Impacts. <br/>This project is designed to have many, diverse broader impacts. First is the involvement of application scientists in the remote sensing and modeling communities who will benefit from advanced methods in machine learning. The research results will be brought into the classroom through new graduate courses. Popular science lectures for middle and high school are also planned since the subject matter and results can be conveyed meaningfully to this audience in a visual way that emphasizes issues of broader concern, such as the impact of ecological changes and urban sprawl. Two project-wide workshops are proposed that will also involve stakeholders (e.g., planners) who would directly benefit from the results and provide valuable feedback. A portal will be created in year 3 to provide access to data, code and toolkits produced by the project. Results will be disseminated in each of the three main disciplines represented within the project through scholarly publications. Finally, tools will be developed so that they may eventually be incorporated into Commercial Off The Shelf software, such as GIS and remote sensing software.",5,Purdue University
1020001000304469,HCC: Design and Evaluation of Spatially Compelling Virtual Environments,"This interdisciplinary project investigates human cognition of spaces to improve virtual environments, both from a user's and an author's perspective. The objectives are to (1) improve virtual environments so that better learning can occur in them, and (2) develop authoring methods for virtual environments informed by the cognitive demands that people have when learning spaces. This research project should advance the design and authoring of virtual environments by leveraging human cognitive capabilities. The program seeks to develop a system to increase the user's sense of presence and sensitivity to the environmental scale of virtual environments. It further seeks to develop locomotion interfaces to assist exploring large virtual environments from within small physical ones. A goal is to employ human-centered representations for locomotion in virtual environments and to develop methods for skill acquisition in virtual environments. This research proposal advances the scientific understanding of human cognition and learning as well. The research proposes studies that will be informative about the broad role that environmental geometry and self-representation play in perception, orientation, and navigation, while controlling factors that are extremely difficult, if not impossible, to control in the real world. A rigorous evaluation program for all components of the project is planned.<br/><br/>The importance of this proposal is that virtual environments provide people with opportunities to experience places and situations remote from their actual physical surroundings. Virtual environments allow the simulation of real-world events in a controllable and re-usable environment. They potentially allow people to learn about an environment which, for reasons of time, distance, expense, and safety, would not otherwise be available. Virtual environments could have a huge impact in education, entertainment, medicine, architecture, and training, but they are not widely used because of their expense and delicacy. The research program in this proposal should significantly improve the quality of learning in virtual environments, reduce the time and cost of authoring virtual environments, and overcome likely impediments to their widespread use. Moreover, this proposal builds a scientific program to develop a better understanding of the cognitive capabilities of humans in immersive virtual environments, and does so in a way that will inform the design process for such environments and our understanding of how humans reason about space.",13,Vanderbilt University
1020001000304470,HCC: Collaborative Research: Affective Learning Companions: Modeling and Supporting Emotion During Learning,"Emotion and motivation are fundamental to learning; students with high intrinsic motivation often outperform students with low motivation. Yet affect and emotion are often ignored or marginalized with respect to classroom practice. This project will help redress the emotion versus cognition imbalance. The researchers will develop Affective Learning Companions, real-time computational agents that infer emotions and leverage this knowledge to increase student performance. The goal is to determine the affective state of a student, at any point in time, and to provide appropriate support to improve student learning in the long term. Emotion recognition methods include using hardware sensors and machine learning software to identify a student's state. Five independent affective variables are targeted (frustration, motivation, self-confidence, boredom and fatigue) within a research platform consisting of four sensors (skin conductance glove, pressure mouse, face recognition camera and posture sensing devices). Emotion feedback methods include using a variety of interventions (encouraging comments, graphics of past performance) varied according to type (explanation, hints, worked examples) and timing (immediately following an answer, after some elapsed time). The interventions will be evaluated as to which best increase performance and in which contexts. Machine learning optimization algorithms search for policies that further engage individual students who are involved in different affective and cognitive states. Animated agents are enhanced with appropriate gestures and empathetic feedback in relation to student achievement level and task complexity. Approximately 500 ethnically and economically diverse students in Massachusetts and Arizona will participate.<br/><br/>The broader impact of this research is its potential for developing computer-based tutors that better address student diversity, including underrepresented minorities and disabled students. The solution proposed here provides alternative representations of scientific content, alternative paths through material and alternative means of interaction; thus, potentially leading to highly individualized science learning. Further, the project has the potential to advance our understanding of emotion as a predictor of individual differences in learning, unveiling the extent to which emotion, cognitive ability and gender impact different forms of learning.",2,Arizona State University
1020001000304471,RI: Dynamic Discrete Choice Networks -- An Artificial Intelligence Approach to Modeling Dynamic Travel Behavior,"Project Summary<br/>The goals of the proposed research are twofold: first, to advance the state of the art in artificial intelligence and cognitive sciences by developing novel probabilistic reasoning techniques; and second, to use these techniques in building better transportation models, which can then be used to help inform public deliberation regarding major infrastructure decisions. Problems of maintaining or replacing aging infrastructure, or adding new infrastructure to meet the needs of population growth and urban expansion of metropolitan areas, are becoming increasingly difficult to solve, in part because the cost is extremely large, and in part because the political discourse over alternative solutions is contentious and reflects divergent assumptions and values. Often, a major source of disagreement is cost; but another is rooted in differing assumptions about how people would adjust their travel in response to changed circumstances in both the short and long term, and how much congestion would result. Current transportation models used in operational analysis and planning are too behaviorally simple to be very useful in addressing these questions. Recent research advances have provided improvements in behavioral representation in these kinds of choice situations, but to date these innovations are not integrated and are not computationally feasible for large-scale application. During the last decade, the artificial intelligence community has developed a set of techniques that enable fine-grained activity recognition from sensor data; among the most advanced and successful are approaches based on Dynamic Bayesian networks and statistical relational learning. The research team will build on this foundation, integrating these AI techniques with the Discrete Choice Models used in econometric approaches, to yield a new, hybrid reasoning system: Dynamic Discrete Choice Networks. This technique will be applied to the challenging domain of modeling dynamic travel choices of individuals, such as the number of trips, scheduled time of departure, destinations, modes, and routes, and to predict how these choices change under dynamically updated travel conditions. <br/><br/>Intellectual Merit<br/><br/>The merit of this proposal is grounded in the research challenges in the artificial intelligence and urban modeling areas. This project advances the state of the art in artificial intelligence and cognitive sciences by developing novel probabilistic reasoning techniques that are well suited for modeling the complex combinations of factors involved in human decision making in the commonsense domain of daily travel. By integrating this modeling power into probabilistic temporal models, Dynamic Discrete Choice Networks will provide an extremely general and flexible framework for learning and recognizing human activities from sensor data and for understanding how everyday human decision making adapts to a constantly changing environment.<br/><br/>Broader Impacts<br/><br/>UrbanSim has the potential to significantly aid in public deliberation over major decisions regarding the replacement or expansion of transportation infrastructure, managing urban development, planning for response to mitigate the effects of events such as Hurricane Katrina or a major earthquake, and other issues. 
UrbanSim is Open Source and freely available, and has already attracted considerable interest and use. Because of their improved ability to recognize and analyze human activities from raw sensor data, Dynamic Discrete Choice Networks will have applications to other significant domains as well, such as eldercare and long term health monitoring.",12,University of Washington
1020001000304472,"HCC: Coordinating Communication: Visual, Social & Biological Factors in Grounding for Humans and Agents","With the advent of increasing numbers of increasingly smart machines, there is a growing need to develop technologies that are not only smart, but sensitive to the people and the other machines around them, and sensitive to the context in which they are used. Such an understanding will permit the development of technologies that can coordinate their interactions with humans in a more natural, seamless and fluid fashion. To meet these goals, this research program focuses on three critical yet under-studied contexts of interaction, each of which represents a different constraint upon interpersonal communication: (1) the physical context of shared visual access, (2) the social context of rapport, and (3) the biological context of aging. While some research has been conducted on each of these contextual factors, none has addressed their interaction, nor gathered them into one broader conception of the role of context in interpersonal coordination. This research applies a theory-driven design approach that includes experimental studies, theory development, computational modeling, system implementation and evaluation. In particular, the research program proposes: a) A rigorous study of human-to-human communication using elicitation experiments to develop a more detailed understanding of interpersonal communication across a range of contexts; b) A formalization of the findings into computationally explicit forms that provide predictions of behavior and capture the observed behavioral patterns; c) Integration of the models into a dialogue manager that is implemented within a larger computational architecture; and, d) Evaluation of the implemented system by having untrained humans interact with the system in such a way as to evaluate its effectiveness and reveal gaps in the underlying models as well as in our theoretical understanding.<br/><br/>The outcome of this research will advance our theoretical understanding of the role various contextual factors play during interpersonal communication. The results will be useful to a variety of scientific communities including those that study basic human communication (e.g., psychologists, linguists and communication researchers) and those that study interactive computational systems (e.g., computer scientists, computational linguists, and interaction designers). The research will also provide practical design guidelines and a general computational model that describes how machines can make intelligent choices on the basis of these contextual factors during everyday interactions. At a practical level, the general computational model can be applied by technologists developing many different technologies, such as embodied agents, large-scale displays, ubiquitous computing, in-car navigation, and assistive technologies for the elderly and those with cognitive impairments.",11,Northwestern University
1020001000304477,III-COR: Collaborative Research: Graceful Evolution and Historical Queries in Information Systems -- a Unified Approach,"III-COR: Collaborative Research: Graceful Evolution and Historical Queries in Information Systems -- a Unified Approach<br/><br/>Database schema evolution is a constant in the life-cycle of Information Systems (IS), and a source of major costs for maintenance, upgrading, and service down time. The traditional process of installing a new schema, converting the database, and rewriting applications is slow and laborious. Instead, this project develops the novel technology whereby the schema evolution problem is reduced to coordinating mappings between multiple concurrent versions of schema, applications, and database. The project's approach consists of developing: (i) XML-based architectures for unifying the management of evolving data and metadata, (ii) methods for capturing evolution via schema mappings, (iii) efficient mapping techniques for queries and applications. These advances will enable the development of the MetaManager, a system that supports: (a) preserving and querying database histories, and (b) better planning on how to evolve current schema versions, via evaluation and testing of ""what-if"" scenarios. Its functionality and performance are validated using various testbeds, including the SDSC Storage Request Broker, which hosts scientific data for research groups ranging from astrophysicists to biologists.<br/><br/>This novel and timely approach provides a solution to both the evolution and preservation of IS. Because of the key role played by IS, a broad range of scientific, educational, and economic activities will benefit. Project funds support training of PhD students while undergraduate research interns are trained on schema-evolution scenarios using the MetaManager. The technology is a part of database design courses. Results are disseminated via publications, reports and demos which are available from:<br/><br/>http://www.cs.ucr.edu/~tsotras/meta-manager<br/>http://wis.cs.ucla.edu/projects/meta-manager<br/>http://db.ucsd.edu/people/alin/meta-manager",8,University of California-Riverside
1020001000304479,RI: Collaborative Research: Hierarchical Models of Time-Varying Natural Images,"Abstract<br/><br/>Title: Collaborative Research: Hierarchical Models of Time-Varying Natural Images<br/>PIs: Bruno Olshausen, University of California-Berkeley and David Warland, University of California-Davis<br/><br/>The goal of this project is to advance the state of the art in image analysis and computer vision by building models that capture the robust intelligence exhibited by the mammalian visual system. The proposed approach is based on modeling the structure of time-varying natural images, and developing model neural systems capable of efficiently representing this structure. This approach will shed light on the underlying neural mechanisms involved in visual perception and will apply these mechanisms to practical problems in image analysis and computer vision.<br/><br/>The models that are to be developed will allow the invariant structure in images (form, shape) to be described independently of its variations (position, size, rotation). The models are composed of multiple layers that capture progressively more complex forms of scene structure in addition to modeling their transformations. Mathematically, these multi-layer models have a bilinear form in which the variables representing shape and form interact multiplicatively with the variables representing position, size or other variations. The parameters of the model are learned from the statistics of time-varying natural images using the principles of sparse and efficient coding.<br/><br/>The early measurements and models of natural image structure have had a profound impact on a wide variety of disciplines including visual neuroscience (e.g. predictions of receptive field properties of retinal ganglion cells and cortical simple cells in visual cortex) and image processing (e.g. wavelets, multi-scale representations, image denoising). The approach outlined in this proposal extends this interdisciplinary work by learning higher-order scene structure from sequences of time-varying natural images. Given the evolutionary pressures on the visual cortex to process time-varying images efficiently, it is plausible that the computations performed by the cortex can be understood in part from the constraints imposed by efficient processing. Modeling the higher order structure will also advance the development of practical image processing algorithms by finding good representations for image-processing tasks such as video search and indexing. Completion of the specific goals described in this proposal will provide (1) mathematical models that can help elucidate the underlying neural mechanisms involved in visual perception and (2) new generative models of time-varying images that better describe their structure.<br/><br/>The explosion of digital images and video has created a national priority of providing better tools for tasks such as object recognition and search, navigation, surveillance, and image analysis. The models developed as part of this proposal are broadly applicable to these tasks. Results from this research program will be integrated into a new neural computation course at UC Berkeley, presented at national multi-disciplinary conferences, and published in a timely manner in leading peer-reviewed journals. Participation in the proposed research is open to students at both the graduate and undergraduate levels, and the PI will advise Ph.D. 
students in both neuroscience and engineering as part of this project.<br/><br/>URL: http://redwood.berkeley.edu/wiki/NSF_Funded_Research",1,University of California-Berkeley
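The bilinear form described in the abstract above, in which form variables and transformation variables interact multiplicatively, can be illustrated with a toy generative sketch; the basis tensor below is random, the dimensions are arbitrary assumptions, and the sparse-coding learning procedure the project proposes is not shown:

# Toy bilinear image model: a patch is a sum over (form i, transform j) pairs of x_i * y_j * B_ij.
import numpy as np

rng = np.random.default_rng(0)
patch_size = 8 * 8      # flattened 8x8 image patch
n_form = 10             # "what" variables: shape/form coefficients
n_transform = 4         # "where" variables: position/size/etc. coefficients

# Basis tensor: one patch-sized basis function per (form, transform) pair (random here).
B = rng.standard_normal((n_form, n_transform, patch_size))

def render(form_coeffs, transform_coeffs):
    """Synthesize a patch; form and transform enter multiplicatively (bilinear)."""
    # einsum contracts the two coefficient vectors against the basis tensor.
    return np.einsum("i,j,ijp->p", form_coeffs, transform_coeffs, B)

x = np.zeros(n_form); x[[1, 4]] = [1.0, 0.5]   # sparse "form" code
y = np.zeros(n_transform); y[2] = 1.0          # one active "transform" state
patch = render(x, y).reshape(8, 8)
print(patch.shape)  # (8, 8)

Learning in the proposed models would fit B and infer sparse codes x and y from image sequences; only the synthesis direction is sketched here.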
1020001000310738,Ethics Fellows in Engineering,"The Ethics Fellows Pilot Program at the University of New Mexico will be offered in three or four sections of the Engineering curriculum. The one-year program will provide data on the expansion to other institutions, including CNM and SIPI, and may also provide insight into issues that are cross-cultural, since UNM is an MSI. The Fellows' co-teaching experience will be supported by a teaching mentor and a colleague in another department, Philosophy. A seminar will help prepare Fellows, and regular meetings will provide follow-up. Faculty mentors are interdisciplinary; evaluation is a critical component.",17,University of New Mexico
1020001000310777,"Institute for Science, Technology, Ethics, and Policy","This proposal will support the initiation of a concept for courses at the interface of engineering and science and ethics. Educating students and faculty to regard attentively ethical determinations entails organizing hands-on coursework that demonstrate how the choices involved in the many activities associated with the research and professional practices of scientists and engineers should be framed in ethical parameters. These parameters are further determined by cultural and socio-political conditions that differentiate both needs and expectations. The plan will engage philosophy faculty, along with engineering and industry and policy professional. The focus for the first course will be issues related to computer technology development and use. Faculty will be engaged from around the El Paso region, enable participation by a number of Universities. In the interdisciplinary, international, and multi-cultural formal planned, an essential task will be to come to terms with the ethical nature of how each of the several other participants plays roles in defining identity and tasks. Additional relevant issues will be identified with an Advisory team. A summer seminar will begin the cross-disciplinary exploration.",17,University of Texas at El Paso
1020001000310813,"SGER, year II: A Web Sociologist's Workbench","Abstract<br/><br/><br/>Proposal ID: IIS-0735129<br/>PI: Garcia-Molina, Hector<br/>Institution: Stanford University<br/>Title: SGER Year 2: A Web Sociologist's Workbench<br/><br/><br/>This SGER project is constructing integrated tools that will help social scientists analyze large time series snapshots of World Wide Web content. Such data offer social scientists new means for conducting research into new topics. The exploratory work brings together Stanford's Computer Science department and Political Science faculty at the university. The computing faculty have developed an operational Web archive gathering and storage system. This facility specializes on repeatedly collecting materials from the same Web sites and storing them for subsequent time series analysis. This is considered an important new resource for social sciences research. The central problem for computer science is the large scale of the collected material and the heterogeneity of the content.",10,Stanford University
1020001000310820,"Collaborative Research in Computational Neuroscience (CRCNS) PI Meeting held on June 3-5, 2007 in College Park, Maryland","The PIs and Co-PIs of grants supported through the NSF-NIH Collaborative Research in Computational Neuroscience (CRCNS) program meet annually. This will be the third meeting of CRCNS investigators. The meeting brings together a broad spectrum of computational neuroscience researchers supported by the program, and includes poster presentations, talks and plenary lectures. The meeting is scheduled for June 3-5, 2007 and will be held in College Park, MD.",10,University of Maryland College Park
1020001000310873,Bridging the Achievement Gap with Authorable Virtual Peers,"This project will implement and evaluate technological supports for African-American Vernacular English (AAVE)-speaking children to learn Standard American English while engaging in problem-based scientific inquiry. The technology consists of virtual peers that collaborate with children to solve a bridge-building problem, while scaffolding the notion that different kinds of language are appropriate for different conversational contexts. The work relies on the recognition that primary school education is based on a set of mainstream oral practices and literacy-preparation skills, and yet not all children share the same cultural experiences typical of mainstream culture, nor come to school speaking the same dialect of English. Similarly, while traditional science classrooms have emphasized a particular style of scientific discourse, not all children come to school with the mastery of these discourse styles. Scientific inquiry is at the heart of the contemporary science classroom but it is usually defined according to a specific cultural tradition that privileges individual opinion, 'talking back' to the teacher, and criticism of others; a tradition that may not be shared by all students, and which may have ramifications for science achievement among diverse populations. A unique approach for integrating cultural authenticity into learning technology will be pursued: (1) carrying out an in-depth investigation into AAVE peer-oriented language and nonverbal communicative behaviors. The corpus of data obtained from this study will be shared with all interested researchers via the Penn Linguistic Data Consortium; (2) Two technological innovations will extend prior work on virtual peers so as to make possible the current work: (a) PIPER, a new platform for rapid prototyping and implementation of virtual peers so that each of the virtual peers does not require extensive re-implementation as it did in our Flash days; (b) AVP, an authoring system for virtual peers so that children themselves can program the virtual peer as a way of actively engaging with the technology, with code-switching and with collaborative science inquiry and then description of that inquiry to a teacher; (3) evaluating the technologies with respect to their role in improving children's use of SAE, their educational self-efficacy, and their learning gains in second grade standardized science measures.<br/><br/>The broader significance and importance of the work lie in: (1) the potential to substantially increase access to reading, writing, and science literacy for under-served, at-risk children, and to thereby decrease the Black-White achievement gap; (2) technological innovation that will allow other researchers to quickly prototype and implement virtual peers and pedagogical agents that speak different dialects and languages, and that can be programmed by their designers, and by their users; (3) an innovative program of dissemination of results and research practices that involves publication and presentation of results, sharing the corpus of data via the Penn Linguistic Data Consortium, but also the involvement of local schools with high populations of African-Americans, local churches and community centers, and informal education institutions such as science centers and children's museums.",13,Northwestern University
1020001000310875,Speech Technology Enhanced Assessment of Modeling (STEAM),"This project seeks to achieve more efficient and accurate assessment of students' mathematical reasoning by developing improved speech recognition technology, calibrated to recognize children's speech, and integrating it with a computerized mathematics education environment (SimCalc) involving interactive representations and simulations. Specific questions of interest include:<br/>Q1. What are the most promising task-relevant mechanisms for constraining students' spoken responses in such a way that enables valid and reliable speech-based assessments despite the imperfect accuracy of automatic spoken language systems?<br/>Q2. What are the most efficient modifications to existing spoken language technologies that achieve acceptable performance with middle-school students engaged in spontaneous speech acts about mathematical models, representations, and simulations?<br/>Q3. Which aspects of the computer-based dynamic representation system must be integrated with student speech to provide a more complete representation of student knowledge of the mathematics underlying the model?<br/>Q4. Which features of the combined output of the computer-based environment and student speech allow the spoken language understanding engine to reliably assign rubric-based scores to student work?<br/><br/>Assessments will address five key aspects of fluency with mathematical models: comprehending, predicting, explaining, improving, and reflecting. A unique focus of this work will be on combining inputs from the speech recognition engine with time-stamped information from a mathematics education environment to reliably score student responses according to a rubric. In addition to its application to education, this research on adolescents' spontaneous mathematical speech will drive advances in spoken language technology.",16,SRI International
1020001000310876,Learning Reading Strategies for Science Texts in a Gaming Environment: iSTART vs iTG,"It is well known that many students find science text challenging to comprehend. Students' reading ability is partly to blame. Reading problems become most apparent when the student is faced with a challenging text for which they have knowledge deficits. Science text, in particular, has many technical terms that are difficult to ground in everyday experience. Thus, there is a need for reading comprehension interventions to improve science comprehension. This project compares the effectiveness of two types of tutoring environments in improving high-school students' ability to understand challenging science text. Both environments contain the same pedagogical content, but present it differently: as a lesson or as a game. The first environment, developed and tested over the past 5 years, is an automated reading strategies tutor called iSTART (Interactive Strategy Training for Active Reading and Thinking) that uses animated pedagogical agents to deliver interactive instruction on self-explanation and reading strategies (comprehension monitoring, paraphrasing, generating inferences). Instruction occurs in three stages with each stage requiring increased interaction on the part of the learner. Results across a wide range of studies indicate that iSTART is highly effective in improving students' ability to understand challenging science text. While effective, iSTART can be somewhat unappealing to an average high-school student in extended practice situations. While students need extended practice to master the strategies, iSTART becomes monotonous over time. To increase students' engagement, an alternative version of iSTART practice will be developed that allows students to practice iSTART strategies in a game environment. iSTART-The Game (iTG) will present the same reading strategy practice to students, but will incorporate game-based principles to enhance engagement. This project examines whether a gaming environment for learning strategies for science text comprehension more effectively sustains students' attention and engagement during training, and thereby results in improved acquisition and mastery of these strategies. In the first year of funding, an automated reading strategy tutoring system that is framed in a gaming environment will be developed. In Year 2, pilot studies will be conducted to refine the system. In Year 3, practice using the strategies will be compared in three conditions: iTG, iSTART, and a control condition. Students will practice the strategies over a period of five additional sessions after the initial training. This experiment will provide information on the potential value of iTG over time and specifically, whether iTG engages students over repeated practice sessions. The potential interactive effects of individual differences such as prior science knowledge, reading skill, and motivational levels will also be examined.<br/><br/>This research will contribute to better understanding of engagement as a factor in learning gains. It is predicted by the investigators that many students who do not find the standard tutoring environment sufficiently engaging will significantly benefit from the alternative approach. Most importantly, this instructional intervention should be especially valuable for those students most at risk due to having lower ability and interest in science. 
This research will accomplish the goal of creating a test-bed learning environment intended to improve reader engagement and advance our understanding of the potential instructional gains from such environments. The project will contribute to our understanding of the relationship between game features and engagement. It will provide students with tools that help them more effectively meet the challenges of learning from difficult science texts. In addition, this research will contribute to our understanding of the roles that cognition and emotion play in fostering learning, the specific processes involved with learning from science texts, and the complex interplay of factors such as reading strategies, knowledge, reading skill, interest, and motivation.",9,University of Memphis
1020001000310877,Design and Evaluation of a Pen-Based Tutoring System for Statics Instruction,"Intelligent tutoring systems have been widely studied and applied to a variety of subjects. Most systems are based on traditional computer interfaces involving windows, icons, mouse, and pointer. Research shows that transfer from training to testing is greater when testing and training environments are similar, suggesting benefits from creating and evaluating tutoring systems with interfaces that match real-world problem-solving environments. This project aims to create pen-based tutoring systems that scaffold students in the same way they would ordinarily solve problems with paper and pencil. This goal is consistent with recent research comparing student performance across different user interfaces, demonstrating that the interfaces matching familiar work practice enhanced student performance.<br/><br/>From a technology perspective, the project will produce basic algorithms and user interfaces for creating pen-based tutoring software. From an education perspective, the project will develop and evaluate innovative pedagogical principles for individualizing statics instruction in pen-based environments. Statics is a critical field required for engineering and science majors.<br/><br/>The project will create computational techniques and tutoring systems for both tablet and pentop computers, and will assess the strengths and limitations of the two platforms. Comparisons will also be made with paper-and-pencil and traditional computer interfaces. A tablet computer has an LCD display with an integrated digitizer; the user writes directly on the display using a stylus. A pentop is a pen with an integrated digitizer and embedded processor. A pentop uses paper preprinted with a grid that allows the system to locate the pen tip and digitize pen strokes.<br/><br/>The tutors build upon two prototype pen-based tutoring systems recently developed by the research team. The first, Newton's Pen, is a statics tutor implemented on the LeapFrog FLY pentop computer. The second, Kirchhoff's Pen, uses a tablet computer to help students apply Kirchhoff's current and voltage laws. Preliminary studies with Newton's Pen show it is an effective teaching tool. The prototypes uncovered several important research issues that will be addressed by the proposal. The project will: (1) develop new techniques for interpreting hand-written input to enable more fluid interaction with pen-based devices, (2) develop improved, pedagogically-sound user interface design principles for pen-based tutoring systems, (3) expand the class of statics problems beyond those investigated with Newton's Pen, (4) assess the educational value (both learning and transfer) of the tutors, and (5) identify the relative strengths and weaknesses of tablet and pentop platforms for instruction.",4,University of California-Riverside
1020001000310916,A Computational Theory of Motion Perception Modeling the Statistics of the Environment,"This project develops a novel extension to a computational theory of visual motion perception. The overall goal of the theory is to understand how humans perceive motion in their natural environment; in other words, to understand what goes on inside a person's brain when he or she sees birds flying, snowflakes falling, or other complex patterns of motion that occur in the natural visual world. Building on recent work modeling the appearance of a limited set of motion flow patterns, the present project explores a probabilistic approach, based on Bayesian Ideal Observers, to the representation, learning, and modeling of natural visual motion, and the use of learned probabilistic models in turn to synthesize pseudo-realistic stimuli. Pseudo-realistic stimuli are a novel class of visual stimuli, which have the appearance of natural visual stimuli but can be quantified and varied in a precisely controlled manner. Stimuli of this type have never been used before and offer the exciting prospect of experimentally understanding the behavior of visual systems when exposed to realistic but controlled stimuli. It is anticipated that understanding how the human visual system processes motion will enable development of more robust and powerful computer vision algorithms which will have many technological applications.",15,University of California-Los Angeles
1020001000310926,SGER: DL-VT416: A Digital Library Testbed for Research Related to 4/16/2007 at Virginia Tech,"A digital library (http://recovery.dlib.vt.edu) is designed to support a wide range of research studies, as well as inquiries from the general public, related to the tragedy that occurred during the morning of April 16, 2007 on the Virginia Tech campus in Blacksburg, VA when 33 members of the university community were killed by a student turned gunman. There is a catalog of related content, along with selected data and multimedia information, submitted from a variety of sources. These will be incorporated in the ""Virginia Tech 4-16-07 Library Archive"", permanently supported by University Libraries.<br/><br/>The target audience includes those interested in how technology aids detection, prevention, and responding to disasters in highly connected settings (e.g., in Virginia's largest university community, at the heart of the Blacksburg Electronic Village). A key question is how digital libraries can work in rapid-response settings, as well as for studying the aftermath of tragedies. <br/><br/>The thrust of this project is development of rapidly prototyped tailored systems for diverse user communities using the 5S theory (i.e., with formally defined underlying concepts: Streams, Structures, Spaces, Scenarios, and Societies), which provides a foundation for integration of distributed content and services. This project will lead to development of the theory and software support for large scale digital libraries that also allow researchers to apply closely-coupled data mining and visualization services, e.g., so that archived content can be efficiently and conveniently analyzed, and so that trends and outliers can be spotted. Computer and information scientists, following legal, policy, and human-subject guidelines, can study portions, or the whole complex, of the resulting testbed - of content, services, usage logs, etc.<br/><br/>Wide use of the digital library by scholars and the public, as well as dissemination of results, through the WWW, publications, and presentations, will ensure the broadest impact.",0,Virginia Polytechnic Institute and State University
1020001000310935,"Collaborative Research: Social Organization, Learning Technologies & Discourse: System Features for Facilitating Mathematical Reasoning in PreK-3 Students","This research project will investigate the effects social organization, learning technologies, and their interrelationships, have on the geometric reasoning of PreK-3 students. Social organization is operationalized here as the strategic grouping of students into triads to engage in collaborative problem solving. Learning technologies are operationalized here as physical and virtual manipulative types (tangrams, pentominoes, geoboards) together with supporting software designed to facilitate students in accessing their mathematical knowledge. Based on findings from preliminary work in this area, it is proposed that learning technologies allow for fluid exchanges between peers, and transitions between physical and virtual manipulatives, significantly enhancing geometric reasoning. The current project presents a robust research and design framework that combines iterative cycles of field studies in the classroom with development and experimentation in the lab. The goal is to design, develop and prototype a tabletop computer interface that allows for unprecedented exchanges among student peers and transitions between manipulative types. The tabletop interface will serve first as the apparatus that facilitates experimental manipulations, later being used, through iterative refinement, as the learning technology platform by which students gain access to mathematical knowledge. The advancement proposed by the current work is that it places all interaction on the horizontal tabletop. This is critical in that it removes confounding effects of different seating and screen orientations from pilot study findings. Manipulations of geometric artifacts reside along two dimensions of research questions: the effects of social interaction and of learning technology on geometric reasoning. Hence, the proposed studies will allow investigation of different transitions between fully functional and fully operational thinking, functional and qualitative operational thinking, and qualitative and quantitative parametric thinking with the same interaction morphologies.<br/><br/>The proposed research has the potential to make significant contributions to the following intellectual domains: 1) child development -understanding children's development of mathematical reasoning in terms of learning trajectories 2) mathematics education - understanding teachers' use of advanced learning technologies and role in classroom instruction, and children's learning of mathematics using advanced learning technologies; 3) learning sciences - refining techniques for the analysis, design, and development of advanced learning technologies in use; 4) human-computer interaction -design of interfaces that aid learning to include gestural interaction, tangible manipulation, and horizontal displays; and 5) psycholinguistics - grounding of concepts developed in other domains in the classroom environment. Results from the research will also have much to say about the participation of girls in the mathematics classroom, a goal being to encourage rich discourse of all students. In addition, developed technologies may have implications for the education of special needs children as instructional interfaces can be modified and reconfigured to meet the diverse needs of this underrepresented group. 
A significant outcome of collaboration among these disciplines will be cutting-edge research opportunities for doctoral students and groundwork for the establishment of new curricula and, possibly, degree programs.",9,University of Chicago
1020001000310942,"Social Organization, Learning Technologies & Discourse: System Features for Facilitating Mathematical Reasoning in PreK-3 Students","This research project will investigate the effects social organization, learning technologies, and their interrelationships, have on the geometric reasoning of PreK-3 students. Social organization is operationalized here as the strategic grouping of students into triads to engage in collaborative problem solving. Learning technologies are operationalized here as physical and virtual manipulative types (tangrams, pentominoes, geoboards) together with supporting software designed to facilitate students in accessing their mathematical knowledge. Based on findings from preliminary work in this area, it is proposed that learning technologies allow for fluid exchanges between peers, and transitions between physical and virtual manipulatives, significantly enhancing geometric reasoning. The current project presents a robust research and design framework that combines iterative cycles of field studies in the classroom with development and experimentation in the lab. The goal is to design, develop and prototype a tabletop computer interface that allows for unprecedented exchanges among student peers and transitions between manipulative types. The tabletop interface will serve first as the apparatus that facilitates experimental manipulations, later being used, through iterative refinement, as the learning technology platform by which students gain access to mathematical knowledge. The advancement proposed by the current work is that it places all interaction on the horizontal tabletop. This is critical in that it removes confounding effects of different seating and screen orientations from pilot study findings. Manipulations of geometric artifacts reside along two dimensions of research questions: the effects of social interaction and of learning technology on geometric reasoning. Hence, the proposed studies will allow investigation of different transitions between fully functional and fully operational thinking, functional and qualitative operational thinking, and qualitative and quantitative parametric thinking with the same interaction morphologies.<br/><br/>The proposed research has the potential to make significant contributions to the following intellectual domains: 1) child development -understanding children's development of mathematical reasoning in terms of learning trajectories 2) mathematics education - understanding teachers' use of advanced learning technologies and role in classroom instruction, and children's learning of mathematics using advanced learning technologies; 3) learning sciences - refining techniques for the analysis, design, and development of advanced learning technologies in use; 4) human-computer interaction -design of interfaces that aid learning to include gestural interaction, tangible manipulation, and horizontal displays; and 5) psycholinguistics - grounding of concepts developed in other domains in the classroom environment. Results from the research will also have much to say about the participation of girls in the mathematics classroom, a goal being to encourage rich discourse of all students. In addition, developed technologies may have implications for the education of special needs children as instructional interfaces can be modified and reconfigured to meet the diverse needs of this underrepresented group. 
A significant outcome of collaboration among these disciplines will be cutting-edge research opportunities for doctoral students and groundwork for the establishment of new curricula and, possibly, degree programs.",9,Virginia Polytechnic Institute and State University
1020001000310976,Changing the Center of Gravity: Transforming Classical Studies Through Cyberinfrastructure,"Abstract<br/><br/><br/>IIS - 0736476<br/>Seales, William B.<br/>University of Kentucky<br/>Changing the Center of Gravity: Transforming Classical Studies through Cyberinfrastructure<br/><br/>This project requests funds to support a two-day workshop to be held at the University of Kentucky focused on computational humanities. The purpose of the workshop is to understand how new computational tools and resources have been taken up by humanists and applied to problems long considered out of reach, and at the same time revealed new challenges and desired capabilities for information technologies research. This meeting will bring together approximately 25 participants representing a diverse and experienced group of domain and IT scholars and practitioners. These scholars from diverse fields have all contributed to shaping through their work a vision for how computation, tools and underlying technical infrastructure can shape the future of research and scholarship in the Classics and humanities more generally. Artifacts and manuscripts essentially inaccessible for broad scholarly use in their physical form have been accurately rendered in digital form and distributed widely for examination and analysis by students and researchers. A specific expected outcome is an articulated description of the emerging state of cyberinfrastructure for the Classics and how it has been shifting the center of gravity away from the traditional structures and practices in the field, which have been grounded in printed materials. The current and evolving set of tools based on technology (computational algorithms; digitization; databases; network access; tagged, indexed and cross-referenced archives) is substantial and collectively forms an impressive shift away from the traditional forms defined by print. A second expected outcome is identifying the global production and refinement of computing/humanities collaborative work and elucidating state-of-the-art progress in a number of areas.",1,University of Kentucky Research Foundation
1020001000310978,SGER: A Music Search Engine Based on Aesthetic Similarity,"Abstract<br/><br/>IIS - 0736480<br/>Manaris, Bill Z.<br/>College of Charleston<br/>A Music Search Engine based on Aesthetic Similarity<br/><br/><br/>This SGER project aims to develop a prototype music search engine based on identifying aesthetic similarities. This engine will utilize power-law metrics to extract statistical proportions of music-theoretic and other attributes of music pieces (e.g., Pitch, Duration, Pitch Distance, Duration Distance, Melodic Intervals, Harmonic Intervals, Melodic Bigrams, etc.).<br/>The engine searches for pieces that are aesthetically similar to the input piece using a mean squared error (MSE) approach. Preliminary testing has been done using the Classical Music Archives corpus (14,695 MIDI pieces), combined with 500+ MIDI pieces from other styles (e.g. Jazz, Rock, Country, etc.). Similar metrics have already been validated on aesthetic attributes of textual materials. Text results (author attribution, style identification, and pleasantness prediction) indicated a high level of accuracy. Assessment and validation experiments will be conducted to compare listener responses to the computational findings indicating aesthetic similarity of retrieved pieces. These experiments will be conducted by Prof. Dwight Krehbiel (subaward, Bethel College), a specialist in cognitive neuroscience and psychology of music, who has extensive experience in measuring emotional and physiological responses to music.",8,College of Charleston
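The retrieval step described in the abstract above, comparing per-piece vectors of power-law metrics with mean squared error, can be sketched as follows; the metric names and numbers below are placeholders, not the engine's actual feature set or values:

# Sketch of MSE-based similarity over per-piece metric vectors (placeholder features).
def mse(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Hypothetical power-law slopes per piece for a few attributes
# (pitch, duration, melodic intervals, harmonic intervals).
corpus = {
    "piece_A": [-1.10, -0.95, -1.30, -1.05],
    "piece_B": [-1.80, -1.60, -1.75, -1.50],
    "piece_C": [-1.12, -1.00, -1.25, -1.08],
}
query = [-1.08, -0.97, -1.28, -1.02]

# Rank corpus pieces by ascending MSE: smaller error = more similar proportions.
ranked = sorted(corpus.items(), key=lambda item: mse(query, item[1]))
print([name for name, _ in ranked])  # ['piece_A', 'piece_C', 'piece_B']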
1020001000526585,CHS: Small: ETouch - Amplifying the Sense of Touch,"Despite advances in haptic (touch) technologies, current interactive systems make limited use of the sensorimotor capabilities of the human body. Not surprisingly, then, the enhancement of intrinsic touch sensation has received little prior attention. The PI's overarching goal in this research is to create knowledge and technology for tactile sensing and feedback, in order to realize a glove that enhances touch perception by electronically amplifying tactile sensations, thereby greatly extending the limits of normal human haptic perception. This will enable its wearer to feel objects with heightened sensitivity, and to discern fine levels of detail in surfaces, objects, or tissues. Project outcomes will include enabling technologies for hand-worn instruments that can amplify touch much like a hearing aid is able to amplify sound, or a digital microscope can magnify images. The wearer of such a device will be able to feel defects in surfaces that would not normally be detected, to detect lumps that would otherwise go unnoticed, and to perform fine manipulation tasks that would otherwise be impossible. Thus, the project holds the potential to impact a wide range of human activities and industries, many of which depend upon manual interaction and inspection. As part of the current project, in collaboration with the University of Pennsylvania School of Medicine's Department of Neurology, the PI will develop new methods for sensory assessment and guidelines for tactile enhancement with broad implications for clinical applications to the rehabilitation of touch deficits caused by disease or injury affecting the somatosensory system. Further applications in sensory feedback for upper-limb prosthetics and rehabilitation are envisioned to be directly facilitated by knowledge generated in this way. <br/><br/>This project will pioneer new methods for electronic sensing of the mechanical stimuli underlying touch sensations in the hand, for computational enhancement of perceptually relevant haptic features, and for reproducing touch sensations via distributed actuators of a wearable, glove-like electronic interface, by integrating work in tactile sensing, computational analysis, distributed actuation and control, and real-time touch amplification. The PI will design and fabricate new soft computing technologies for capturing, processing, and reproducing touch sensations felt with the hand, in the form of strain and vibration signatures of skin-object contact. Challenges to realizing such a vision include: the multiple time and length scales involved; the complexity of the mechanical stimuli, which consist of complex, movement-dependent distributed strain in the skin; limitations in current knowledge about the fundamental computations underlying sensory processing, which obscure the relevant signal features; and the difficulty of electronically reproducing distributed touch sensations during whole-hand haptic interaction. The work will contribute to advances in each of these areas, first by designing and fabricating new sensor and actuator technologies, and then by using these to reveal how touch sensation is elicited through contact generated mechanical signals, and how these sensations are processed by the perceptual system.",4,University of California-Santa Barbara
1020001000526629,EAGER: Cybermanufacturing: Design Tools for Nanofactories with Robust Millimetric Assemblers,"In this EArly-concept Grant for Exploratory Research (EAGER) project, the team conducts research leading to software design tools for automated and reconfigurable robotic systems that manufacture at the nano scale. They define these systems as Wafer-Scale Nanofactories.<br/><br/>The key ingredient in realizing this vision is full automation of the production of millimeter scale robots, and the ability to install and reliably operate a few hundred such robots on the nanofactory wafer. Before physically building the nanofactory, users will design and optimize its layout using assemblable building blocks through a novel software design tool - NanoLRM. This tool utilizes Lean Robotic Micromanufacturing (LRM) principles, through which tradeoffs related to resource allocation, precision requirements, production yield, rate and cost can be judiciously studied. Specifically, the following work is part of this project: modelling assembly, motion, and process uncertainties using interval analysis in order to select and simulate precision-based planners and controllers in the nanofactory, and employing parametric optimization tools for cost, yield, and production rate trade-off studies.",7,University of Louisville Research Foundation Inc
1020001000386803,CAREER: Computation and Approximation in Structured Learning,"Machine learning is transforming the way many fields make sense of data, from engineering and science to medicine and business. Machine learning has vastly improved speech recognition, machine translation, robotic navigation and many other prediction tasks. A crucial goal of machine learning is automating intelligent processing of information: this project will focus on automatically describing videos by detecting objects, people, actions and interactions between them, and parsing documents by extracting entities, events and relationships between them. All these prediction tasks require more than just true-false or multiple-choice answers, but have an exponential number of possible answers to consider. Breaking these joint predictions up into independent decisions (for example, translating each word on its own, recognizing a phoneme at a time, detecting each object separately) ignores critical correlations and leads to poor accuracy.<br/><br/>Structured models, such as grammars and graphical models, can capture strong dependencies but at considerable computational costs. The barrier to improving accuracy in such structured prediction problems is the prohibitive cost of inference. Structured prediction problems present a fundamental trade-off between approximation error and inference error due to computational constraints as we consider models of increasing complexity. This trade-off is poorly understood but is constantly encountered in machine learning applications.<br/><br/>The primary outcome of this project will be a framework for addressing very large scale structured prediction using a novel coarse-to-fine architecture. This architecture will enable explicit, data-driven control of the approximation/computation trade-off. It promises to drastically advance state-of-the-art accuracy in computer vision and natural language applications and greatly enhance search and organization of documents, images, and video. The PI's plan includes an active role in the machine learning community, disseminating results through tutorials, code and data and organizing workshops.",2,University of Washington
1020001000386838,EESE: The Ethics of Algorithms,"This project will investigate the ethics and values of the computer scientists, information scientists, and software engineers who create algorithms. The research will contribute to the study of ethics and values in science and engineering in four key ways. First, it will bridge silos between philosophical and social scientific approaches to ethics to develop an integrated theoretical approach which simultaneously identifies the analytical, moral reasoning that is happening during the conceptualization and design phase as well as critically analyzes the interplay between an individual's personal ethics and values and the ethics and values created by aspects of policies, institutional, economic, and cultural contexts. Second, the work will further the literature on information ethics by taking an upstream approach that focuses on the design process. The literature concerned with the ethical use of information technology and computer science often focuses on the outcomes of these endeavors; switching the analytical lens, the PI will join a small but growing literature that addresses ethical questions at the design stage. Third, the research will provide an empirical contribution to information science given its investigation of the intersection of people, technology design and knowledge. Finally, by focusing on algorithms, the project will contribute to broader discussions about ethics, values, and big data; algorithms are the driving technique behind the creation of big data sets, yet there is little talk about the decisions and values that shape algorithm design and thus impact big data content.<br/><br/>Broader Impacts: The interrelationship between algorithms and big data is a timely and important topic for investigation. This research will open the black box of algorithmic design in order to provide more transparency for public discussion and debate on the implications of this new way of organizing knowledge and social life. Project outcomes will include a series of provocative images, concepts, and scenarios (to be presented in a well-designed series of cards), to impact graduate students in computer science, information science, and engineering. The PI plans to work with professional organizations such as the Association of Computing Machinery [ACM] and Computer Professionals for Social Responsibility [CPSR] to address a wider range of issues in the professional code of ethics.",17,Drexel University
1020001000386935,HCC: EAGER: Trust and Collaboration in 3D Virtual Communities,"This potentially transformative study will investigate how players in a Massively Multi-player Online (MMO) game - specializing in social challenges, crafting, and trading - use trust in order to accomplish tasks and collaborate with other players. This is the first time a game of this kind has been evaluated. Most studies of MMOs, such as World of Warcraft and EverQuest, evaluate trust and collaboration of players engaged in battles or battle planning, rather than peaceful and constructive purposes. This research employs a well-established but rather unique game that primarily focuses on a wide array of social challenges and tests. It is called ""A Tale in the Desert"" (ATITD), in which each player controls the avatar of an ancient Egyptian. ATITD is the ideal choice for this research because of the strong social aspect inherent in the game. <br/><br/>Both because the game creators have agreed to cooperate, and because ATITD incorporates databases such as a constantly updated census and extensive information about all players and their overlapping group memberships, the research will have access to fine grained data of players' actions, chats, and memberships in guilds, marriages and bureaucracies. Starting with an initial set of hypotheses of what data will be analyzed, the research team will then be able to iteratively choose more data on the basis of early discoveries. To address the issues involved in human-agent interaction, this study of trust and collaboration is very important. Information obtained in this study can be used in future work to create agents to interact with human players in ATITD and other virtual environments. <br/><br/>In addition to log data from the game, this research will also collect standard psychological measures of individual players, using a voluntary questionnaire, including the Extroversion scale from the Big Five Inventory, the Trust-Suspicion Scale, and the Empathy Quotient. Of particular interest will be to learn if the psychological factors predict how well individuals interact with other players in the game. Game play data will be analyzed to identify specific behavioral patterns related to the central themes of trust and collaboration, and, conversely, betrayal and competition. The research will also gather data from ATITD about how the players trust and collaborate with each other, using data mining techniques, interviews of players, and examination of wikis created by players. <br/><br/>Studying the nature of trust and collaboration in 3D virtual communities will be of great interest to game designers, as well as to the research community of computer-mediated environments. The information gained from this study will be the basis for detailed suggestions on how to improve trust and collaboration in an MMO. Methods also will be developed to detect untrustworthy behavior happening in the game and flag it. Data gathered from ATITD will be useful for creating agents that interact with human players, and for designing educational activities that exploit the unique opportunities for student collaboration offered by virtual worlds like A Tale in the Desert.",13,University of Southern California
1020001000387033,SIGGRAPH Pioneers Mentoring,"This funding supports student involvement and attendance at the 2013 ACM SIGGRAPH Conference. SIGGRAPH is the premier computer graphics conference in the world, attracting over 15,000 attendees and filling over 100,000 square feet of exhibition space. Although many attendees come for the exhibition and advanced technology demonstrations, the SIGGRAPH Conference has long been the premier place to publish technical papers in computer graphics. The papers are published as a special issue of the ACM Transactions on Graphics. The SIGGRAPH Pioneers (20+ years of work in the field) started a mentoring program in 2003 to add more students to the technology pipeline. While small in number, the impact has been high as the students do not merely attend the conference, but rather are mentored by longstanding members of the community. Because each Pioneer is assigned no more than two students, the Pioneer guides the students to talks, courses, etc. that were not immediately on the students' radar. The Pioneer mentors perform this service without compensation.<br/><br/>The intellectual merit and broader impact of the proposed activity lie in the educational opportunities provided to the students by the mentoring process. The program also focuses on underrepresented groups (e.g., women, Hispanics, and African-Americans) who would have no opportunity to hear about SIGGRAPH, let alone attend, without the mentoring program. The students are exposed to the latest research and hardware covering the use of computer graphics for numerous topics that benefit society ranging from engineering design to simulation and entertainment. Written student feedback has, in the past, been extremely favorable.",10,Association Computing Machinery
1020001000387132,III: Medium: Collaborative Research: Developing a 3D Browser to Explore Genomes,"New genome technologies have enabled us to analyze the spatial conformation and interaction of chromatin together with their functional implications in important cellular activities such as gene regulation and cell state determination. With the influx of new details about the higher-level structure and dynamics of the genome, novel techniques will be required to visualize and model the full extent of genomic interactions to gain insight about genome functions. Current genome browsers are specifically aimed at viewing primary sequence information. Although supplemental information can be annotated via new tracks, representing structural hierarchies and interactions is quite difficult in these browsers, particularly across non-contiguous genomic segments. In addition, in spite of many recent efforts to measure and model the genome structure at various resolutions and detail, little work has focused on combining these models or taken advantage of the large amount of genomic and epigenomic data generated from new high-throughput approaches. To address these issues, the team has created a proof-of-concept interactive 3D viewer, Genome3D, to enable integration and visualization of genomic and epigenomic data in three dimensions. Substantial development is needed to take advantage of the newest genomic technologies and to enable its integration with analysis pipelines. While enormous amounts of spatial information for eukaryotic chromosomes have been generated, the size and complexity of these data require the design and development of new algorithms and methods in data integration and model construction. The goal is to develop a full-fledged, platform independent system that enables biologists to build and refine their own 3D genome models to analyze their data. <br/><br/>The intellectual merits of the research include: 1) Implementing a novel strategy to employ new engines with strong interactive design elements to transform the prototype into a cloud-based 3D genome browser that can be used on various platforms including web browsers and tablets, making 3D structural genome information available to a broader research community. 2) Adding integrated tools that can analyze 3D features of genomes and support model building and validation. 3) Designing and providing a robust set of APIs and scripting for customized data analysis. 4) Collaborating with other researchers to explore and visualize new three dimensional genome models.<br/><br/>There are a number of broader impacts in this research. A multi-scale three dimensional genome browser is crucial to achieve fuller understanding of genome functions and will provide a new way to teach genomics. Exploring genomes through 3D visualization will significantly advance genome research and will have a profound impact on comparative genomics and genetics. The use of new user interaction-intensive engines in scientific research tools will encourage researchers in every area to use interactive visualization to analyze data. New algorithms to analyze models and visualize genomic information can be extended to problems of similar size in other fields and form the basis for new computational approaches. This project provides valuable interdisciplinary training experiences to undergraduate and graduate students and will attract more students to computational biology research. 
Results and the new browser will be disseminated through publications, workshops and tutorials and will enable customized development by providing detailed APIs and tutorials.",3,University of Texas Health Science Center Houston
1020001000387168,NRI-Small: A Novel Light-weight Cable-driven Active Leg Exoskeleton (C-ALEX) for Training of Human Gait,"Motorized exoskeletal orthoses are being actively researched today for gait training of stroke patients. These machines are typically designed to apply assistive/resistive forces on the impaired leg to help human subjects to improve walking, similar to what therapists do during training. While a number of such machines have been developed and used for gait training, these studies have only yielded ""mixed"" results in benefiting stroke patients clinically. The reasons for these disappointing results are the high inertia of the mechanisms, a mis-match in constraints between human and machine, and misalignment of the mechanism joints with the human joints. The proposed work investigates a novel and ground-breaking design of a cable driven exoskeleton to address these shortcomings. Based on extensive study of mechanisms and therapeutic control methods, cables will actuate the moving limbs and will also serve as structural members in tension. The design will consist of an inertial fixed cuff attached to the pelvis and three lightweight cuffs on the thigh, shank, and foot of each leg. This results in an order-of-magnitude reduction in the inertia of the links and eliminates rigid joints which, in turn, eliminates the mis-match and misalignment. Yet, the fact that cables can only pull and not push raises many scientific and design challenges that will be addressed theoretically and experimentally.<br/><br/>Broader Impact: Each year, about 700,000 people in the U.S. have an incidence of a stroke and currently there are 4.5 million people in the U.S. living with the after-effects of stroke. This research can directly impact the quality of life of these individuals with potentially better rehabilitative equipment and better rehabilitative results for retraining of their gait. This project will broaden the application of cable-driven robots to the emerging field of ""neuro-rehabilitation"" and ""functional learning."" This project will also involve close co-operation with Professor Clement Gosselin's research group at Laval University, who along with the PI, is credited with fundamental developments to the field of ""cable robots."" The project will also encourage undergraduate involvement in research as well as provide training and examples for a high school teacher/student to incorporate into the local curriculum. The PI has active links with high schools through a college-wide NSF-funded RET program. Several high school teachers and students have worked in the PI's laboratory to identify technologies to improve quality of life of neural impaired subjects.",20,Columbia University
1020001000387217,EAGER: NSF-JUST Program on Robotics for Rehabilitation and Medicine,"This joint program reviews the state of the art of the field of model-based assistive robotic technologies for medicine and rehabilitation in the USA and Japan, outlines the fundamental science and technologies in this area, and proposes models for future collaborative research between Japan and the USA by building on their mutual strengths. In particular, gait training for stroke patients using exoskeletons is studied.<br/><br/>The field of medical and rehabilitation robotics offers much in terms of broad societal impact. In addition, the project exposes graduate students and faculty to the culture of research in Japan and, more broadly, Asia. Today, Japan leads in many areas of robotics and automation. The students in the NSF-JST program interact with researchers in Japan and visit prominent universities and research laboratories. This exposure provides excellent training for the students and prepares them for careers in industry and academia globally.",7,Columbia University
1020001000387319,NSF Smart Health and Wellbeing PI Meeting,"It is widely acknowledged that improving health and quality of life in the United States will require contributions and advances in many technical and non-technical areas, including computer science, engineering, and economics, as well as the social and behavioral sciences. The NSF Smart Health and Wellbeing Program has focused on fundamental research in these areas and has supported a diverse set of multidisciplinary projects and investigators that received their awards in 2011 and 2012. The workshop for the principal investigators in the Smart Health and Wellbeing Program was organized to bring together these researchers with diverse projects and varied backgrounds and areas of expertise to exchange ideas, network, learn from each other, form new collaborations, and help to shape the future of this dynamic and rapidly growing set of fields. In addition to the presentation of posters associated with the funded projects, the participants had the opportunity to attend several invited presentations by leaders in relevant fields focused on healthcare challenges and potential solutions. To capture their creativity and innovative ideas, the workshop attendees were invited to participate in several breakout sessions. A summary of the workshop outcomes will be accessible at http://nsfsmarthealthwellness2013.usc.edu/",10,University of Southern California
1020001000387371,EAGER: Information and complexity in the analysis of biological data sets and networks,"A living system is distinguished from most of its non-living counterparts by the way it stores and transmits information. It is just this biological information that is the key to biological function. It is also at the heart of the conceptual basis of what we call systems biology. Much of the conceptual structure of systems biology can be built around the fundamental ideas concerning the storage, transmission, and use of biological information. Biological information resides, of course, in digital sequences in molecules like DNA and RNA, but also in 3-dimensional structures, chemical modifications, chemical activities, both of small molecules and enzymes, and in other components and properties of biological systems at many levels. The information depends critically on how each unit interacts with, and is related to, other components of the system. Biological information is therefore inherently context-dependent, which raises significant issues concerning its quantitative measure and representation. An important and immediate issue for the effective theoretical treatment of biological systems then is: how can context-dependent information be usefully represented and measured? This is important both to the understanding of the storage and flow of information that occurs in the functioning of biological systems and in evolution. This work involves both new ideas and the integration of new ideas. It represents new mathematical methods as well as a novel integration of approaches that are focused on the very real and practical problems of biological data analysis. The PI has developed a conceptual approach that is novel and mathematically well-defined, exploring the relationships between graph properties and set complexity and considering new approaches to network analysis. New interaction distance measures are considered, along with a new way of dealing with especially large data sets, notably the maximal information coefficient, for which a general approach may be possible, certainly for a small number of variables, and possibly in the general case. The ideas will be tested on a number of diverse biological data sets, especially gene expression data and other variants. Current methods often fail in the face of truly complex dependencies in large data sets, and powerful new methods would be of high value.",3,Pacific Northwest Research Institute
1020001000387390,RESNA Student Design Competition: Technology Solutions for People with Disabilities,"This is funding to support the Student Design Competition (SDC), which will be part of the 2013 annual RESNA conference to be held June 22-24 in Bellevue, Washington. Today, more than 54 million people in the United States report some type of disability, and this number will likely grow in the coming years as the baby boom generation enters late life. RESNA, the Rehabilitation Engineering and Assistive Technology Society of North America, is the one organization with an international focus that is devoted solely to technology for individuals with disabilities. RESNA's membership consists of individuals and institutions covering a range of disciplines (including researchers, clinicians, suppliers, manufacturers, consumers and educators who work in non-profit and for-profit settings), all of whom are dedicated to promoting the exchange of ideas and information for the advancement of assistive technology. In addition to its annual conference, RESNA conducts a credentialing program for assistive technology practitioners, rehabilitation engineering technologists and suppliers, as well as sponsored projects. RESNA's Technical Standards Board is the U.S. Technical Advisory Group to ANSI, the official United States representative to the International Organization for Standardization (ISO), for the development of ISO standards pertaining to assistive technology and other products for persons with disabilities. More information about RESNA and its annual conference is available online at http://www.resna.org. <br/><br/>The RESNA Student Design Competition fosters innovation and creativity with the ultimate goal of producing technology that can contribute to the independence of individuals with disabilities. The first SDC was held in 1980 as part of the inaugural RESNA conference; since then over 250 designs have been identified as winning entries from among over 780 submitted by students from over 124 different universities and institutes of higher learning. Both undergraduate and master's level graduate candidates are eligible to submit entries, which must represent the work of students ONLY; no faculty or professionals may be included as co-authors, although faculty may be mentioned as advisors and/or mentors. NSF has been a supporter since 2005. This year's funding will enable the SDC to be further expanded and enhanced, so as to include more entries and support for more design teams, especially from minorities, women, and individuals with disabilities. A call for participation has been posted on the conference website, and also distributed electronically to a large number of colleges and universities with engineering and design schools. A team of 5 judges will pre-select entries from up to 10 teams, from which two members each will be invited to attend the conference supported with travel and hotel funds as well as complimentary registration. During a half-day session in which the teams will make presentations before the judges and public audience at the conference, 5 teams will ultimately be selected as the final winners. Judges will have an opportunity to ask questions and make suggestions and recommendations to the design teams. 
The conference schedule includes a platform session in which finalists will present their designs to the general conference audience; in addition, finalists will have an opportunity to present their projects during the Annual Developers' Forum which highlights new products under development (some past SDC student team designs have been patented and are now available commercially). New this year and incorporated into the conference is a Student Track with specifically designed sessions, including: Rehabilitation Engineering and Technology Careers; Presentation Skills; How to Write for Peer Review Journals; and Engineering Concepts in Assistive Technology: A course for Non-Engineers.<br/><br/>Broader Impacts: The annual RESNA Conference and the Student Design Competition combine to create a forum for interaction between working and experienced rehabilitation engineers and students who are about to enter the field. Unique in its primary focus on undergraduates, the event provides participants with experience and skills that help them succeed in their engineering and design careers. It encourages and mentors students in various disciplines to become involved in the assistive technology and rehabilitation engineering fields, and provides an opportunity for the students to network with their peers as well as participants in previous SDC competitions, many of whom are now leaders in service, research, and educational areas related to technology for people with disabilities. Furthermore, continuing a practice started a few years ago, all participants (not just the finalists) will create blog posts that capture and archive their submissions, and these will remain accessible indefinitely through the RESNA and the ATWiki. Thus, those who teach will also be able to share this information with their students as examples of projects that others have done, while service providers and people with disabilities will be able to search for unique solutions to barriers they may face.",10,Rehabilitation Engineering and Assistive Tech Society of NA
1020001000387593,RI: Small: Debugging Machine Visual Recognition via Humans in the Loop,"The problem of visual recognition is fundamental to the goal of automatic image understanding. While a large number of efforts have been made in the computer vision community, machine performance at these tasks remains significantly inferior to human ability. <br/>The overarching goal of this project is to leverage the best known visual recognition system - the human visual recognition system. This project employs a ""Human Debugging"" paradigm to replace various components of a machine vision pipeline with human subjects, and examines the resultant effect on recognition performance. Meaningful comparisons provide valuable insights and pinpoint aspects of the machine vision pipeline that are performance bottlenecks and require future research efforts. Specifically, the project considers the problems of image classification and object detection, and explores the roles of local and global information, as well as part detection, spatial modeling and contextual reasoning (including non-maximal suppression), for these problems respectively. <br/>This project touches on a wide range of problems in visual recognition including object recognition, scene recognition and object detection. This novel paradigm of identifying weak links in computational models via humans in the loop is also applicable to other vision problems, as well as other sub-fields in AI. By sharing all collected data and results, and through organized conferences and workshops, this project will initiate and fuel a dialogue with the research community about leveraging humans to advance computer vision. More broadly, this work encourages the involvement of young women and undergraduate students in computer science research.",1,Virginia Polytechnic Institute and State University
1020001000387614,EAGER: Building Learning in Urban Extended Spaces,"In this Cyberlearning: Transforming Education-funded EAGER Project, the PIs are carrying out first steps in understanding how to use the archives of cities to draw inner-city youth into inquiry and expressive activities in the context of documenting the interesting history, geography, civics, and so on, of the places where they live. The work is being carried out in Nashville, Tennessee. It is a collaborative effort of Vanderbilt University and the city of Nashville, represented by The Country Music Hall of Fame and The Nashville Library System. The project uses what is known about how people learn to inform the design of technology and pedagogy in support of ""place-based education"" -- education that takes advantage of the place where learners live to promote learning and sustain their engagement. In this approach to place-based education, learners are taking on the curatorial practices of museum and library curators and using those practices to learn both the history of their city and also how to synthesize across information sources and express themselves in ways that are engaging and educational for others. New knowledge is being created about youth authoring and its potential for promoting learning, and also about bringing the wealth of interesting city and cultural archives to the people who live in a city and its visitors.<br/><br/>Social media and ""curation"" applications have drawn the interest of many youth. The PIs in this project recognize potential in these technologies and in the interests of young people in curating collections for promoting learning of history, civics, geography, and other content, and at the same time learning to synthesize across information sources and express themselves well. They envision a technical and socio-technical infrastructure for a new kind of informal after-school learning environment in which technology provides structure and aid as learners explore the archives of their city and compose compelling narratives to teach others, and social and interaction structures (the pedagogy) empower learners to engage in telling the stories of their city. The work is timely, as the tools that bring location-aware media together with mobile, personal information devices are becoming more capable and sophisticated at a rapid rate. The tool designs and pedagogy that are created through this project and follow-on projects will be applicable to supporting youth initiatives, both formal and informal, throughout our country and internationally.",9,Vanderbilt University
1020001000387635,Travel Support for Mobile Micro-Robotics Challenge at IEEE ICRA 2013,"This project proposes to provide competitive travel support for student teams wishing to compete in the Mobile Micro-Robotics Challenge. The challenge involves advances in the design and fabrication of microelectromechanical systems (MEMS) enabling the development of mobile micro-robots that can autonomously navigate and manipulate in controlled environments. It is expected that this technology will be critical in applications as varied as intelligent sensor networks, in vivo medical diagnosis and treatment, and adaptive microelectronics. However, many challenges remain, particularly with respect to locomotion, power storage, embedded intelligence, and motion measurement; it is therefore valuable to help enable participation by defraying travel costs. <br/><br/>The broader impacts of this grant include the ability of research teams to showcase working demonstrations that support the challenge themes of micro-scale actuation, manipulation and locomotion. Travel assistance will increase the exposure of US student teams to international competitors, likely increasing the knowledge and competitiveness of the US-based teams since this event has traditionally been dominated by teams from Europe. The multi-day competitions will be held during the IEEE ICRA conference, providing an excellent opportunity to engage a broad technical audience. The competitions are open to the general public to raise awareness of the state of the art. The competitions provide an opportunity for students and leaders to learn how micro-robotics can play an important role in society.",19,University of Texas at Arlington
1020001000387696,RAPID: Social Media: Learning from the Boston Marathon Bombing,"The April 15, 2013 Boston Marathon Bombing (BMB) is a fresh reminder that societies can be shocked not only by natural disasters such as earthquakes and hurricanes but also by acts of terrorism. The bombing engaged the public in multiple ways, and social media platforms (Twitter, YouTube, Facebook, etc.) enabled the public to become both informed and, to some extent, involved. Digital traces that can be collected from these sites present a brief window of opportunity for research on how, and to what extent, this involvement emerged. This RAPID project will collect data from social media such as Twitter and other linked sources to address questions about the flow of information about the event across traditional and social media, the propagation and amplification of unsubstantiated information and misinformation, differences between official and popular social media use, self-organization of efforts for assistance or suspect tracking, and changes in public sentiment over time. Data from this event may reveal differences in the dynamics of social media use in the wake of terrorist events vs. natural disasters. <br/><br/>The intellectual merit of the proposed project is that it combines emerging methods and techniques for social media research with recent research on disaster response coordination and planning to develop a conceptual model of the BMB information flows to guide data collection and analysis. Because the Boston Marathon attracted participants from around the world, this bombing has a global dimension that may affect the nature and reach of the social media communications. <br/><br/>Broader impacts of the project include educational benefits from students involved in the project and in courses that will be informed by the project findings. The collected data will be made available to other researchers, and the principal investigators plan to coordinate their work with others examining this event, thus contributing to the infrastructure for science. The proposal includes funding for dissemination of the results of preliminary analysis of the collected data. Results from analysis may offer an improved guide for research on communication and information flows in crises and disasters (whether natural or human-initiated), thus benefiting society.",11,University of Washington
1020001000387762,2013 mHealth Training institute @UCLA,"Recent advances in mobile technologies (e.g., smart phones, sensors) have created opportunities for applications that were not previously possible in data gathering (activity monitoring; assessment of physiological and psychological states) and intervention (just-in-time and context-aware alerts and reminders). As such, mobile technologies are poised to transform various facets of our lives, especially social and health-related aspects. Taking full advantage of mobile and mobile health (mHealth) technologies will require considerable multidisciplinary research at the intersection of the behavioral, social, and clinical research fields with computer science, networking, and engineering. The mHealth Summer Institute addresses these multidisciplinary challenges by bringing together scientists from diverse fields to enhance the quality of mHealth research.<br/>Intellectual Merit: <br/>The mHealth Summer Training Institute is the premier venue for cross-fertilization of research in the mHealth area. By providing a structured environment for researchers from multiple disciplines to collaboratively work on mobile health issues of common interest, the training institute provides a platform for idea generation and development that can directly lead to transformative innovations, and it provides invaluable experience of working in multidisciplinary teams towards a common goal. It facilitates research, development, and especially innovation in mobile health, as well as the career development of young scientists in a transdisciplinary environment with a rapidly changing landscape.<br/>Broader Impacts:<br/>The institute has several long-term impacts on society. First, the ideas generated during the institute are frequently developed fully into promising grant proposals, several of which (from prior-year institutes) have been funded. These ideas can lead to scientific advancement and impact society by improving health. Second, the participants who are selected for the institute directly benefit by developing skills and experience in working in multidisciplinary teams and by acquiring the basic knowledge and skills of the mobile health domain via lectures from thought leaders in mobile health across all the relevant disciplines. Third, the teams formed at the institute frequently lead to long-term collaborative relationships, which continue and flourish over time. The institute helps seed new connections among young scientists who would not otherwise work together. Fourth, the participants often become champions of mobile health at their respective home institutions and in their respective scientific communities, and they educate and train others, including their students.",10,University of California-Los Angeles
1020001000387814,CAREER: A Framework for Sparse Signal Reconstruction for Computer Graphics,"CAREER: A Framework for Sparse Signal Reconstruction for Computer Graphics<br/>Pradeep Sen, Dept. of Electrical & Computer Engr., University of New Mexico<br/><br/>Recent progress in computer graphics has benefited our society in many ways: from entertainment (e.g. movies and games) to product manufacturing (e.g. virtual prototyping) and medicine (e.g. interactive medical visualization). However, despite these improvements we are still far from true interactive photorealism. In this research, the investigators develop a novel framework for computer graphics that improves the speed and quality of existing algorithms by leveraging ideas from the emerging field of compressed sensing. By taking advantage of the compressibility of real-world signals, the researchers explore new algorithms for image synthesis and acquisition. The broader impact of this work is that the core ideas developed will not only benefit important applications in computer graphics, but could also impact areas such as Magnetic Resonance Imaging (MRI) used for medical applications. On the educational side, the PI integrates Hispanic students into the research by fostering relationships with Latin America.<br/><br/>This research is developing a fundamentally new paradigm for a core area of computer graphics: sampling and reconstruction. Most graphics algorithms (e.g. rendering systems) expend their effort sampling the entire signal, despite the fact that the signal will be compressed afterwards (e.g. with a transform-coding compression algorithm such as JPEG). The investigators apply the ideas of compressed sensing in order to take advantage of the sparsity in the transform domain and sample the signal in an efficient manner. This results in a framework that can be used to accelerate rendering algorithms by reconstructing the final image from a sparse set of samples using greedy optimization algorithms. The same framework can also be used to accelerate the acquisition of light transport, which is useful for relighting applications. The fundamental science explored through this work will spur new areas of research within the graphics community and in related fields.",11,University of California-Santa Barbara
1020001000387987,EAGER: Scalable Big Data Analytics,"Big Data analytics requires bridging the gap between data-intensive computing and data-driven computing to obtain actionable insights. The former has primarily focused on optimizing data movement, reuse, organization and storage, while the latter has focused on hypothesis-driven, bottom-up data-to-discovery, and the two fields have evolved somewhat independently. This exploratory project aims to investigate a holistic Ecosystem that optimizes data generation from simulations, sensors, or business processes (Transaction Step); organizes this data (possibly combining with other data) to enable reduction and pre-processing for downstream data analysis (Organization Step); performs knowledge discovery, learning and mining models from this data (Prediction Step); and leads to actions (e.g., refining models, new experiments, recommendations) (Feedback Step). <br/><br/>Intellectual Merit: As opposed to the current practice of considering optimizations in each step in isolation, the project considers scalability and optimizations of the entire Ecosystem for big data analytics as part of the design strategy. The project aims to consider big data challenges in designing algorithms, software, analytics, and data management. This strategy contrasts with traditional approaches that first design algorithms for small data sizes and then scale them up. The project aims to treat data complexity, computational requirements, and data access patterns as a whole when designing and implementing algorithms, software and applications. <br/><br/>Broader Impacts: The project could advance the state of the art in big data analytics across a number of key applications such as Climate Informatics and Social Media Analytics. The software resulting from the project is being made available to the broader scientific community under an open-source license. The project offers enhanced opportunities for education and training of graduate students and postdoctoral researchers at Northwestern University.",5,Northwestern University
1020001000388030,SCH: EXP: Integrated Real-Time Clinical Deterioration Prediction for Hospitalized Patients and Outpatients,"Unexpected deaths of hospitalized patients continue to be common despite evidence that patients who are at risk often show signs of clinical deterioration hours in advance. Existing early warning systems have significant shortcomings because of their poor reliability and the need for monitoring by overburdened clinical staff. Almost 1 out of 5 patients is readmitted within 30 days of hospital discharge, with an annual cost to taxpayers of $15-17 billion. Hence, there is an urgent need for automated early warning systems that can provide timely and accurate information. <br/><br/>The project seeks to integrate and mine patient data from multiple sources, including routine clinical processes, bedside monitoring, at-home sensing, and existing electronic data sources to facilitate optimized patient-centered decision making. Specifically, the project aims to develop techniques and systems to provide early warning of clinical deterioration and hospital readmission of discharged patients using a novel two-tier system. Tier 1 uses data mining algorithms on existing hospital data records to identify patients who are most at risk of clinical deterioration and readmission. Tier 2 combines clinical data with sensor data to improve the accuracy of predictions on patients who are identified as being at risk by Tier 1. <br/><br/>Key innovative aspects of the project include: (1) new data mining algorithms for predicting clinical deterioration and readmission from heterogeneous, multi-scale, and high-dimensional data streams; (2) an alert explanation system that identifies the most relevant prognostic factors and suggests possible interventions based on novel feature-ranking algorithms; (3) a novel scheme based on cost-sensitive learning to dynamically reconfigure the sensors for achieving a good tradeoff between monitoring cost and effectiveness. The resulting advances in healthcare practices that are currently employed in general wards offer several key benefits, including (1) reduced workload on clinical staff; (2) capability for continuous monitoring of ward patients that can be used to triage nursing efforts in order to optimize the desired clinical outcomes; (3) capability to extend hospital monitoring to patients at high risk for hospital readmission, with the attendant benefits of reducing readmissions by targeting early preemptive therapeutic interventions.<br/><br/>Plans for transitioning the technology to clinical practice include rigorous evaluation of the technology in real-world settings and broad dissemination of the algorithms and their open-source implementations. Some potential broader impacts of the project include improved clinical outcomes, reduced patient mortality rates and healthcare costs, and enhanced opportunities for research-based interdisciplinary training of graduate students in health informatics. Additional information about the project can be found at: http://www.cse.wustl.edu/~wenlinchen/project/clinical/",12,Washington University
1020001000388041,SCH: EXP: Intelligent Clinical Decision Support with Probabilistic and Temporal EHR Modeling,"Clinical decision support has the potential to reduce healthcare costs and improve patient outcomes, while shedding light into policy questions surrounding healthcare costs and practices in the US. This project aims to develop intelligent clinical decision support techniques for recommending optimal action plans - including both diagnostic tests and medical interventions - for treating chronic disease, performing multi-step and adaptive treatments, and modifying long-term health habits. In an effort to integrate evidence-driven decision-making with established clinical practices, the research will develop disease-agnostic artificial intelligence techniques that combine data from large electronic health records (EHRs) with recommendations from human experts. A prototype decision support system will be tested on three clinical settings - cardiology, clinical depression, and emergency room readmission - using existing EHR datasets and consultation with domain experts from clinical partners. Outcomes-driven and cost-driven optimized decisions will be compared to current clinical practice. This exploratory research will provide the groundwork for follow-up projects in decision support information presentation, integration with clinical workflow and IT systems, and making the transition from retrospective studies to clinical trials. Other broader impacts include workshops for healthcare applications of AI, and women and minority students will be recruited and mentored in graduate and undergraduate computer science research.<br/><br/>The technical approach of this research builds on state-of-the-art machine learning and artificial intelligence methods to automatically learn, simulate, and reason about patient-specific treatment plans. Such methods must be simultaneously probabilistic and temporal. Probabilistic techniques are needed to handle significant uncertainties in clinical diagnoses and outcomes, much like a human clinician would. Temporal techniques are needed to consider sequences of future decisions over the course of treatment, rather than decisions at single time points. More specifically, this project will consider the use of statistical relational learning (SRL) techniques to mine for probabilistic, temporal patterns in large electronic health records, and these patterns will be used in partially-observable Markov decision processes (POMDPs) that exhaustively search for optimal treatment sequences. Recent results indicate that SRL achieves superior performance to other machine learning methods in predicting cardiac arrest from demographic and lifestyle observations, and POMDP treatment plans outperform existing fee-for-service practices by reducing costs by 50% and improving outcomes by 40% on a clinical depression dataset. By combining SRL and POMDPs, specifically, using SRL to learn a disease progression model used by the POMDP, this project aims to achieve further improvements in recommendation quality and computational scalability for complex treatments. Furthermore, because EHRs may suffer from limited or missing data, clinical decision support tools should follow established practices and expert knowledge when necessary. To do so, new workflows for integrating expert knowledge into SRL and POMDPs will be explored. 
Evaluation will be performed on a variety of disease scenarios in conjunction with clinical partners at Marshfield Clinic, Centerstone, Wake Forest School of Medicine, and South Bend Memorial Hospital.",12,Indiana University
1020001000388048,SCH: EXP: Smart integration of community crowdsourced data for real-time individualized disease risk assessment,"This project addresses a major inefficiency in healthcare today: an under-informed, homogeneous approach to patient care that doesn't consider community disease prevalence and its effect on an individual's disease risk in an actionable manner. The approach is to combine real-time diagnostic information from select community members via an easy-to-use, gargle-based immunoassay detection test with more abundant contemporaneous community symptom reports via an online informatics platform. The hypothesis is that the likelihood of a virological diagnosis of influenza (flu) can be accurately predicted for the individual using real-time flu self-diagnostics and symptom information from their community, and can influence a user to take appropriate measures to prevent disease spread. This approach enables testing of several concepts regarding (1) the specificity and sensitivity of a rapid flu diagnostic test for Influenza A and B detection compared to a laboratory polymerase chain reaction test, (2) generation of valid user-contributed diagnostic information at scale, (3) the potential of real-time contextual data to be used to calculate influenza risk, and (4) the use and acceptance of this information by individuals. The team brings together experience in building and studying novel crowdsourced data sources for disease surveillance and expertise in software, graphic design, epidemiology, medicine, and engineering, to develop user-friendly systems that the community will employ at scale.<br/><br/>The proposed system has substantial potential for beneficial societal impact through ascertaining and changing the way people approach healthcare. Empowering individuals to generate and act on health information impresses on them the importance of surveillance and educates them to become more involved and accountable for their health. In order to maximize the impact and sustainability of the system, this system also functions as an educational tool. Clear, open communication of anonymized and privacy-protected data to the general public in an easy-to-interpret manner will assist individuals, policy-makers and other researchers in using contextual disease risk information. Undergraduate and graduate students from a variety of backgrounds will have the opportunity to participate in the research. Collaborators at the American Public Health Association and their associated groups concerned with the health of minority populations, such as The Association of Minority Health Professions Schools and The Latina Health Project, will assist in engaging a diverse group of participants nationwide.",12,Children's Hospital Corporation
1020001000388049,"SCH: EXP: Collaborative Research: Smart Asthma Management: Statistical modeling, prognostics, and intervention decision making","Asthma is a common lung disease with acute and chronic manifestations that impacts more than 22.2 million Americans or 7.9% of the population, including over 6.7 million children younger than 18 years of age. The cost of asthma is significant both for individuals and for society as a whole. It is highly desirable to establish transformative technologies to improve patient quality of life and reduce the cost of asthma management. Recent developments in sensor and mobile computing technology provide great opportunities to establish Smart Asthma Management (SAM) systems and achieve a quantum leap in asthma management. Leveraging the fast development of information infrastructure, patients can create a detailed temporal log recording their symptoms, medicine usage, and possibly vital physiological signals through easy access to a website or their smart phones in SAM systems. This unprecedented continuous stream of patient-generated data in SAM systems provides us with significant opportunities to better estimate patient condition and make clinical intervention decisions. However, since the information infrastructure of SAM has become available only recently, very limited work is available for SAM systems. Against this background, this collaborative project aims to develop a suite of statistical modeling, monitoring, prognosis, and clinical intervention decision making methodologies based on a flexible yet rigorous multistate model to describe the evolution of patient conditions. The true underlying state of the patient is assumed unknown; however, there is reason to expect that it could be inferred from patient-generated data such as the frequency of rescue inhaler usage (the time and frequency of rescue inhaler use is an important indicator of asthma control). <br/><br/>Some anticipated advances include: (i) Multistate model with event intensity function as observations. The proposed methodology brings the mixed effect model and the multistate model into a unified framework to integrate the population information embedded in the historical records of multiple patients and the individual information collected in real time in a quantitative way. (ii) Stochastic filtering approach for individual patient condition modeling and updating. The novel state space formulation enables efficient stochastic filtering algorithms to estimate and update the states and parameters in the multistate model. (iii) Clinical intervention decision support for patients and clinicians. The salient features of the proposed policy are that it is condition-based and incorporates uncertainties in the patient condition model through a Partially Observable Markov Decision Process (POMDP) framework, which has been widely used and proven to be very effective in the management of industrial systems. Plans are in place to evaluate the effectiveness of the resulting technologies in collaboration with clinical experts. <br/><br/><br/>The project is likely to contribute predictive technologies that could help reduce the cost and improve the quality of healthcare in the US, especially as it relates to effective management of chronic illnesses. 
Additional broader impacts of the project include enhanced research-based training opportunities for graduate and undergraduate students (including members of under-represented minorities) in healthcare engineering, statistics, and operations research, and enrichment of the curricula in health systems in industrial engineering and operations research at the University of Wisconsin-Madison and the University of Iowa.",12,University of Wisconsin-Madison
1020001000388050,SCH: EXP: From Critique to Collaboration: A Fundamental Rethinking of Computerized Clinical Alerts,"To prescribe safe medications, physicians use computerized physician order entry (CPOE) that routinely relies on drug-drug interaction (DDI) alerts. Despite the current attempts to reduce the frequency and complexity of alerts, the effectiveness of such alerts remains extremely low, with up to 96% of warnings being ignored by physicians on a daily basis. The primary cause for this is alert fatigue, a state in which physicians, bombarded by numerous warnings, become desensitized. The goal of this project is to transform drug safety alerts from oft-ignored warnings to trusted tools that advise physicians in daily decision making. To accomplish this, the investigators are aiming beyond incremental improvement, establishing novel principles for alert design that are based on what physicians consider important when taking advice from peers in their daily clinical work. The project is addressing this problem with a three-pronged approach: (1) determining principles that accompany trusted physician-to-physician advice regarding appropriate medication prescribing; formative studies will be conducted in a variety of clinical settings to identify key factors in sharing trusted advice among doctors and residents; (2) generating novel designs for drug safety guidance that elicit physician trust and maximize compliance; and (3) evaluating the impact of novel designs on physician compliance with DDI warnings.<br/><br/>Intellectual Merit:<br/>By investigating the question of why physicians trust each other, the project is leveraging fundamental behavioral dynamics to reconsider the role of clinical alerts, and thus to generate new principles for designing effective computerized guidance that elicits similar notions of trust. A very important contribution of this research is that a more complete understanding of what makes physician-to-physician advice trustworthy in the ecosystem of daily clinical activities provides a solid, long-term intellectual basis for the creation of substantially better alerts for a broad variety of CPOE systems. A key innovative aspect of this work is the departure from the mere optimization of alerts to focus instead on the complex yet crucial dynamics by which trusted advice is shared among physicians. There are potentially transformative aspects of the proposed work, which, if successful, could unleash a new generation of drug-drug interaction warnings that will substantially advance the level of adherence in daily medication prescribing.<br/><br/>Broader Impacts:<br/>The outcomes from this project will potentially affect the over 600,000 physicians in the United States who spend the majority of their time in direct patient care, and who are exposed to dozens of DDI alerts on a daily basis. By reframing current alerting strategies, the project results are expected to substantially increase safe drug prescription in day-to-day clinical settings. The investigators are disseminating the project results through large-scale real-world CPOE deployments and live experimental evaluations available to hundreds of physicians. The team is also reaching out to CPOE vendors to promote the incorporation of the ideas generated in the project in the current and next generation of products. 
Ultimately, this basic research in physician communication and human-computer interaction is providing the research basis to catalyze a change in industry practice, also facilitated by the PI's leadership of the EMR Innovations Summit, an annual meeting of national EMR vendors hosted by the Regenstrief Institute.",4,Indiana University
1020001000388051,"SCH: EXP: Collaborative Research: Smart Asthma Management: Statistical Modeling, Prognostics, and Intervention Decision Making","Asthma is a common lung disease with acute and chronic manifestations that impacts more than 22.2 million Americans or 7.9% of the population, including over 6.7 million children younger than 18 years of age. The cost of asthma is significant both for individuals and for society as a whole. It is highly desirable to establish transformative technologies to improve patient quality of life and reduce the cost of asthma management. Recent developments in sensor and mobile computing technology provide great opportunities to establish Smart Asthma Management (SAM) systems and achieve a quantum leap in asthma management. Leveraging the fast development of information infrastructure, patients can create a detailed temporal log recording their symptoms, medicine usage, and possibly vital physiological signals through easy access to a website or their smart phones in SAM systems. This unprecedented continuous stream of patient-generated data in SAM systems provides us with significant opportunities to better estimate patient condition and make clinical intervention decisions. However, since the information infrastructure of SAM has become available only recently, very limited work is available for SAM systems. Against this background, this collaborative project aims to develop a suite of statistical modeling, monitoring, prognosis, and clinical intervention decision making methodologies based on a flexible yet rigorous multistate model to describe the evolution of patient conditions. The true underlying state of the patient is assumed unknown; however, there is reason to expect that it could be inferred from patient-generated data such as the frequency of rescue inhaler usage (the time and frequency of rescue inhaler use is an important indicator of asthma control). <br/><br/>Some anticipated advances include: (i) Multistate model with event intensity function as observations. The proposed methodology brings the mixed effect model and the multistate model into a unified framework to integrate the population information embedded in the historical records of multiple patients and the individual information collected in real time in a quantitative way. (ii) Stochastic filtering approach for individual patient condition modeling and updating. The novel state space formulation enables efficient stochastic filtering algorithms to estimate and update the states and parameters in the multistate model. (iii) Clinical intervention decision support for patients and clinicians. The salient features of the proposed policy are that it is condition-based and incorporates uncertainties in the patient condition model through a Partially Observable Markov Decision Process (POMDP) framework, which has been widely used and proven to be very effective in the management of industrial systems. Plans are in place to evaluate the effectiveness of the resulting technologies in collaboration with clinical experts. <br/><br/><br/>The project is likely to contribute predictive technologies that could help reduce the cost and improve the quality of healthcare in the US, especially as it relates to effective management of chronic illnesses. 
Additional broader impacts of the project include enhanced research-based training opportunities for graduate and undergraduate students (including members of under-represented minorities) in healthcare engineering, statistics, and operations research, and enrichment of the curricula in health systems in industrial engineering and operations research at the University of Wisconsin-Madison and the University of Iowa.",12,University of Iowa
1020001000388052,SCH: EXP: Collaborative Research: Privacy-Preserving Framework for Publishing Electronic Healthcare Records,"This project builds a novel privacy-preserving framework with both new algorithms and software tools to: 1) evaluate the effectiveness of current identifier-suppression techniques for Electronic Healthcare Record (EHR) data; 2) de-identify and anonymize EHR data to protect personal information without significantly reducing the utility of data for secondary data analysis. The proposed techniques eliminate the violation of privacy through re-identification, and facilitate the secondary usage, sharing, publishing and exchange of healthcare data without the risk of breaching protected health information (PHI). This new privacy-preserving framework injects the ICD-9-CM-aware constraint-based privacy-preserving techniques into EHRs to eliminate the threat of identifying an individual in the secondary use of research data. The proposed technique and development can be readily adapted to other types of healthcare databases in order to ensure privacy and prevent re-identification of published data. The project produces groundbreaking algorithms and tools for identifying privacy leakages and protecting personal privacy information in EHRs to improve healthcare data publishing. New privacy-preserving techniques developed in this project lead towards a new type of healthcare science for EHRs. The project also delivers fundamental advancements to engineering by showing how to integrate biomedical domain knowledge with a computationally advanced quantitative framework for preserving the privacy of published EHRs. HIPAA has established protocols and industry standards to protect the confidentiality of PHI. However, our results demonstrate that, even with regard to health data that meets HIPAA requirements, the risk of re-identification is not completely eliminated. By identifying the security vulnerabilities inherent in the HIPAA standards, our research develops a more rigorous security standard that greatly improves privacy protections by applying state-of-the-art algorithms.<br/><br/>The developed data privacy-preserving framework has significant implications for the future of US healthcare data publishing and related applications. Specifically, the transition from paper records to EHRs has accelerated significantly since the passage of the HITECH Act of 2009. The Act provides monetary incentives for the ""meaningful use"" of EHRs. As a result, the quality and quantity of healthcare databases has risen sharply, which has renewed the public's fear of a breach of privacy of their medical information. This research work is innovative and crucial not only for facilitating EHR data publishing, but also for enhancing the development and promotion of EHRs. At the educational front, this project facilitates the development of novel educational tools to construct entirely new courses and laboratory classes for healthcare, data privacy, data mining, and a wide range of applications. As a result, it enhances the current instructional methods for teaching data privacy and data mining, and has compelling biomedical and healthcare applications that can facilitate learning of computational algorithms. This project involves both undergraduate and graduate students in the three participating institutions. 
The PIs make a strong effort to engage minority graduate and undergraduate students in research activities in order to increase their exposure to cutting-edge research.",18,George Washington University
1020001000388063,SCH: EXP: Collaborative Research: Privacy-Preserving Framework for Publishing Electronic Healthcare Records,"This project builds a novel privacy-preserving framework with both new algorithms and software tools to: 1) evaluate the effectiveness of current identifier-suppression techniques for Electronic Healthcare Record (EHR) data; 2) de-identify and anonymize EHR data to protect personal information without significantly reducing the utility of data for secondary data analysis. The proposed techniques eliminate the violation of privacy through re-identification, and facilitate the secondary usage, sharing, publishing and exchange of healthcare data without the risk of breaching protected health information (PHI). This new privacy-preserving framework injects the ICD-9-CM-aware constraint-based privacy-preserving techniques into EHRs to eliminate the threat of identifying an individual in the secondary use of research data. The proposed technique and development can be readily adapted to other types of healthcare databases in order to ensure privacy and prevent re-identification of published data. The project produces groundbreaking algorithms and tools for identifying privacy leakages and protecting personal privacy information in EHRs to improve healthcare data publishing. New privacy-preserving techniques developed in this project lead towards a new type of healthcare science for EHRs. The project also delivers fundamental advancements to engineering by showing how to integrate biomedical domain knowledge with a computationally advanced quantitative framework for preserving the privacy of published EHRs. HIPAA has established protocols and industry standards to protect the confidentiality of PHI. However, our results demonstrate that, even with regard to health data that meets HIPAA requirements, the risk of re-identification is not completely eliminated. By identifying the security vulnerabilities inherent in the HIPAA standards, our research develops a more rigorous security standard that greatly improves privacy protections by applying state-of-the-art algorithms.<br/><br/>The developed data privacy-preserving framework has significant implications for the future of US healthcare data publishing and related applications. Specifically, the transition from paper records to EHRs has accelerated significantly since the passage of the HITECH Act of 2009. The Act provides monetary incentives for the ""meaningful use"" of EHRs. As a result, the quality and quantity of healthcare databases has risen sharply, which has renewed the public's fear of a breach of privacy of their medical information. This research work is innovative and crucial not only for facilitating EHR data publishing, but also for enhancing the development and promotion of EHRs. At the educational front, this project facilitates the development of novel educational tools to construct entirely new courses and laboratory classes for healthcare, data privacy, data mining, and a wide range of applications. As a result, it enhances the current instructional methods for teaching data privacy and data mining, and has compelling biomedical and healthcare applications that can facilitate learning of computational algorithms. This project involves both undergraduate and graduate students in the three participating institutions. 
The PIs make a strong effort to engage minority graduate and undergraduate students in research activities in order to increase their exposure to cutting-edge research.",18,University of North Texas Health Science Center at Fort Worth
1020001000388066,SCH: EXP: Discovering Patterns to Improve Health to Overcome Health Disparities,"The use of multidisciplinary scientific evidence based practice (EBP) guidelines during hospitalization can assist low income and minority populations to regain and maintain health, thus reducing rehospitalization. However, EBP guidelines may not be equally effective across all populations. The national mandate for all health professionals to implement interoperable electronic health records (EHRs) by 2015 provides an opportunity for reuse of EHR data to address new research questions that explore patterns of patient characteristics and resources, EBP interventions (actions of health professionals in treatment of the patient), and improvement in health, e.g., the effectiveness of multi-disciplinary EBP during hospitalization and follow-up. This exploratory project is interested in groups of patients that (1) share a particular condition, e.g., severe sepsis and septic shock or patients with diabetes or diabetic complications, (2) were hospitalized for this condition or related complications, (3) were treated as an outpatient in a clinic during a succeeding period of time, and (4) have an identifiable outcome, e.g., rehospitalization, emergency room (ER) visits, death related to the condition, or condition under control without rehospitalization. These patients will be analyzed to understand the differences between patients with the same condition but different outcomes, with the goals of (1) evaluating whether EBP guidelines made a difference and (2) discovering interventions which lead to improvement in outcomes that may need to be added to EBP guidelines. Achievement of these goals requires the development of new analysis techniques for deriving insights into health outcomes from EHR data. <br/><br/>The algorithms and approaches developed in this project will advance health informatics by enabling researchers to extract, from the relatively raw and unorganized mass of data in an EHR, a higher level view of the evolution of the patient's health and treatment over time and use that information to analyze the differences between patients with favorable and unfavorable health outcomes. More specifically, new techniques and tools will be developed to (1) create patient and intervention profiles that summarize important characteristics of the patient, their environment, and their treatment, (2) find groups (clusters) and patterns in these profiles, and (3) use the profiles, clusters, and patterns to analyze the differences in outcomes between patients with a common health condition. <br/><br/>Achievement of these goals poses significant challenges. For instance, EHR data in its original form is, for research and analysis purposes, mostly in a relatively unorganized and low-level format, e.g., flowsheets, which contain primarily nursing documentation, have numerous rows of data representing patient assessments and results as well as laboratory and other diagnostic tests. This necessitates the extraction and summarization of information relevant for the task. Because time plays such an important role in this data, extracting useful features from the data across time is critical. However, the time series involved are often irregular. More generally, not all patients have the same set of information and information is not available at regular intervals. 
Furthermore, data may need to be viewed at multiple temporal resolutions, e.g., a sudden increase in blood pressure versus a gradual but noisy increase over several years. Additional complexities arise from population substructure, differences in the types of features, incorporating knowledge of prior dependencies among features, and incomplete and missing data. This project will address these challenges. Success in these efforts will advance data mining in the areas of classification, clustering and pattern mining, as well as various types of temporal data analysis, including trend, change point, and anomaly detection. <br/><br/>The novel pattern mining approaches proposed for this project will help generate the insights biomedical researchers need to make progress in understanding a number of serious health problems and avoiding poor outcomes. Such progress is likely to advance personalized health care and thus has the potential to improve human health and reduce health care costs. Beyond health applications, this work has broad and immediate applications to any complex system for which creating a comprehensive predictive model for complex entities is often unrealistic, at least in the near future, and the best that can be hoped for is to identify specific patterns that provide insight into the current or future state of an entity or system with respect to certain specific conditions of interest. Examples include transportation and energy systems, business and government organizations, ecosystems, sophisticated machinery, and computer/network systems. The creation of the proposed frameworks and algorithms will also directly train a number of graduate and undergraduate students in the areas of data mining and its use in analyzing health data. The results of this project will be presented at various conferences and in journals in computer science, as well as those in domains related to the target applications.",12,University of Minnesota-Twin Cities
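As an illustration of the profile-and-cluster analysis described in the preceding record, the following minimal Python sketch (hypothetical data, feature choices, and cluster count; not the project's code) summarizes each patient's irregularly sampled measurements into a fixed-length profile and then clusters the profiles:

# Illustrative sketch: per-patient profiles from irregular time series, then clustering.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Hypothetical long-format extract: one row per blood-pressure measurement.
records = pd.DataFrame({
    "patient_id":       [1,   1,   1,   2,   2,   3,   3,   3,   3],
    "days_since_admit": [0,   2,   7,   0,   5,   0,   1,   3,   9],
    "systolic_bp":      [150, 145, 160, 120, 118, 170, 172, 168, 180],
})

# Summarize each patient's irregular series as mean, spread, and linear trend,
# which tolerates different numbers and timings of observations per patient.
patient_ids, profiles = [], []
for pid, g in records.groupby("patient_id"):
    slope = np.polyfit(g["days_since_admit"], g["systolic_bp"], 1)[0]
    patient_ids.append(pid)
    profiles.append([g["systolic_bp"].mean(), g["systolic_bp"].std(ddof=0), slope])

# Cluster the standardized profiles to look for patient subgroups.
X = StandardScaler().fit_transform(profiles)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(dict(zip(patient_ids, labels.tolist())))

The same pattern extends to richer profiles (trend, change-point, and anomaly features) and to comparing the resulting clusters against outcomes such as rehospitalization.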
1020001000388068,SCH: EXP: Collaborative Research: Privacy-Preserving Framework for Publishing Electronic Healthcare Records,"This project builds a novel privacy-preserving framework with both new algorithms and software tools to: 1) evaluate the effectiveness of current identifier-suppression techniques for Electronic Healthcare Record (EHR) data; 2) de-identify and anonymize EHR data to protect personal information without significantly reducing the utility of data for secondary data analysis. The proposed techniques eliminate the violation of privacy through re-identification, and facilitate the secondary usage, sharing, publishing and exchange of healthcare data without the risk of breaching protected health information (PHI). This new privacy-preserving framework injects ICD-9-CM-aware constraint-based privacy-preserving techniques into EHRs to eliminate the threat of identifying an individual in the secondary use of research data. The proposed technique and development can be readily adapted to other types of healthcare databases in order to ensure privacy and prevent re-identification of published data. The project produces groundbreaking algorithms and tools for identifying privacy leakages and protecting personal privacy information in EHRs to improve healthcare data publishing. New privacy-preserving techniques developed in this project lead towards a new type of healthcare science for EHRs. The project also delivers fundamental advancements to engineering by showing how to integrate biomedical domain knowledge with a computationally advanced quantitative framework for preserving the privacy of published EHRs. HIPAA has established protocols and industry standards to protect the confidentiality of PHI. However, our results demonstrate that, even with regard to health data that meets HIPAA requirements, the risk of re-identification is not completely eliminated. By identifying the security vulnerabilities inherent in the HIPAA standards, our research develops a more rigorous security standard that greatly improves privacy protections by applying state-of-the-art algorithms.<br/><br/>The developed data privacy-preserving framework has significant implications for the future of US healthcare data publishing and related applications. Specifically, the transition from paper records to EHRs has accelerated significantly since the passage of the HITECH Act of 2009. The Act provides monetary incentives for the ""meaningful use"" of EHRs. As a result, the quality and quantity of healthcare databases have risen sharply, which has renewed the public's fear of a breach of the privacy of their medical information. This research work is innovative and crucial not only for facilitating EHR data publishing, but also for enhancing the development and promotion of EHRs. On the educational front, this project facilitates the development of novel educational tools to construct entirely new courses and laboratory classes for healthcare, data privacy, data mining, and a wide range of applications. As a result, it enhances the current instructional methods for teaching data privacy and data mining, and has compelling biomedical and healthcare applications that can facilitate learning of computational algorithms. This project involves both undergraduate and graduate students at the three participating institutions. 
The PIs make a strong effort to engage minority graduate and undergraduate students in research activities in order to increase their exposure to cutting-edge research.",18,University of Texas at Arlington
1020001000388069,SCH: EXP: SenseHealth: A Platform to Enable Personalized Healthcare through Context-aware Sensing and Predictive Modeling Using Sensor Streams and Electronic Medical Record Data,"Current healthcare diagnostics and assessment systems are limited by health data, which is sporadic, periodic, and incomplete. Wireless devices and health sensor technologies are increasing in use for continuous monitoring and assessment of key physiologic, psychological, and environmental variables, and they can reduce the current gaps in health data. Uptake of such data by current health systems has been slow because of the reliance upon the physician/healthcare team to interpret and manage incoming data. Nevertheless, the large streams of data generated by these devices in conjunction with traditional clinical data (Electronic Medical Records) have the potential to provide real and important insights into patient health and behavior. To address this gap, this proposal will develop SenseHealth -- a novel software platform that will automatically process and incorporate volumes of real-time data from sensors tailored to the individual in the context of personal electronic medical records and available environmental data. Such data will be integrated into the clinical care workflow to enable system usability, feasibility, and ultimately utility. A core component of the cyberinfrastructure is a collection of quantitative, predictive models that are sensitive to concerns across age, diseases, and health, to a variety of patient situations (ranging from low priority with no consequence for patient management to high priority requiring emergency evaluation), and to sensor failures. The models will be integrated with a distributed real-time stream data processing system and a complex event stream processing engine to process sensor data in a scalable and fault-tolerant manner. Research at Rady Children's Hospital of San Diego, an affiliate of UCSD, will be leveraged to develop these models. In each of the following studies, clinically relevant events (i.e., events that require clinical intervention) will be identified and disease-specific models will be developed that will predict clinical relevance or the need for intervention. Incoming data and resulting clinical management activity from studies using various types of health sensors will be evaluated in two different patient populations: (1) the MyGlucoHealth application for evaluating the use of a Bluetooth-enabled glucometer (for blood sugar measurements) in 40 youths with Type 1 diabetes, and (2) the Asthma Tracking application for evaluating the ability of a metered dose inhaler (MDI) tracking device to track asthma medication use in 50 mild-to-moderate asthma subjects over a period of 6 months. The models will then be evaluated using multiple sensor streams in youth with diabetes (The Diabetes Management Integrated Technology Research Initiative (DMITRI) study) and in a prospective study in youth with asthma to determine their validity, efficacy, and utility in identifying patient scenarios of concern.<br/><br/>The SenseHealth system architecture will consist of four major components: (1) health and environmental sensors linked with (2) smartphone applications that communicate with (3) a back-end Data Center comprising data storage and clusters performing real-time analytics and data visualization, which will then provide a comprehensive health picture to users/clients via (4) tailored, programmed user/client applications. 
For these continuous sensing applications, managing the sensors and smartphone in an energy-efficient manner is critical. SenseHealth will include a novel context-aware power management framework that uses both the application-level context (e.g., sensor data) and the dynamic environmental or system-level context (e.g., battery level, next phone charging opportunity prediction, or bandwidth availability) to adaptively control the state of hardware components and deliver consistent performance (e.g., data accuracy, latency). In particular, data sampling protocols will be energy-aware and will be designed to sample data accurately but only as necessary to provide relevant clinical information. SenseHealth will use Storm, an open-source distributed real-time computation system, to process the data in a scalable and fault-tolerant manner. The aforementioned predictive models will be implemented in ESPER, an open-source complex event processing (CEP) engine. The models will use ESPER's rich Event Processing Language (EPL) to express filtering, aggregation, and joins, possibly over sliding windows of multiple event streams, and pattern semantics to express complex temporal causality among events and to trigger custom actions when event conditions occur among event streams. Finally, SenseHealth will fuse sensor and clinical data in a visual format that will increase interpretability and comprehension independent of literacy levels, and will provide feedback and, ultimately, intervention support that is timely and relevant to the user (patient and clinician) based on comprehensive knowledge of the data. Open-source software visualization tools developed at Calit2 that leverage advances in scaled display wall technology will serve as the foundation for the data visualization component. The NSF-funded DELPHI project will provide the data center component to store health sensor data and provide access to SenseHealth algorithm-processed data and visualization protocols.<br/><br/>The research itself will have a direct impact on two patient communities, but the broader impacts of the proposed research will extend well beyond them. The proposed open software platform will be built with flexibility to allow for alternative programming with plug-and-play data processing algorithms as required for various sensors/data sources/clinical scenarios. The results from the proposed development activities and prototyping experiments will be of tremendous value to medical professionals, scientists and engineers who are engaged in planning and developing sensor-based systems for continuous health monitoring. The developed software products will be publicly available as open-source products under the Apache license. The tools developed from this proposal will be designed to be extensible so that other sensors as well as models can easily be integrated and impact a broader range of healthcare applications. SenseHealth is an essential step toward providing a real-time 360-degree snapshot of health to optimize patient-centered, evidence-based decisions and to empower patients to participate in their own healthcare. The project team will contribute to training a diverse next generation of scientists by involving undergraduate students in the development process, both for computer science techniques and medical science research. The exciting aspect of this proposed work is that wellness is a very tangible and important factor even at a young age. 
The education program will be structured to excite students, particularly those from traditionally underrepresented groups such as minorities and females, about multi-disciplinary research. Through UCSD's COSMOS program, simple, fun, and hands-on experiences will be designed to allow these students to understand the importance of self-health assessment and disease management at an early age. The team is heavily involved in Graduate Medical Education at UCSD and will promote the use of SenseHealth to integrate health data into current health systems in fellowship training activities. This proposal also funds one graduate student.",12,University of California-San Diego
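The sliding-window event processing that the preceding abstract assigns to Storm and ESPER can be pictured with a plain-Python stand-in; the window size, threshold, and event name below are invented for illustration only and are not part of the SenseHealth design:

# Illustrative sketch: a sliding-window alert over a simulated glucometer stream.
from collections import deque

class SlidingWindowAlert:
    """Keep the last `size` sensor readings and fire when their mean
    crosses a clinical threshold (both values here are hypothetical)."""
    def __init__(self, size=5, threshold=250.0):
        self.window = deque(maxlen=size)
        self.threshold = threshold

    def push(self, reading):
        self.window.append(reading)
        mean = sum(self.window) / len(self.window)
        if len(self.window) == self.window.maxlen and mean > self.threshold:
            return {"event": "HIGH_GLUCOSE", "window_mean": round(mean, 1)}
        return None

stream = [180, 210, 240, 265, 270, 290, 300]   # simulated glucometer readings
monitor = SlidingWindowAlert()
for value in stream:
    event = monitor.push(value)
    if event:
        print(event)   # downstream consumers (e.g., a clinician dashboard)

A production CEP engine adds joins across streams, temporal pattern matching, and fault tolerance on top of this basic windowing idea.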
1020001000388081,"INSPIRE Track 1: UDiscoverIt: Integrating Expert Knowledge, Constraint-Based Reasoning and Learning to Accelerate Materials Discovery","This INSPIRE award is partially funded by the Information Integration and Informatics Program in the Division of Information and Intelligent Systems in the Directorate for Computer and Information Science and Engineering and the Solid State and Materials Chemistry Program in the Division of Materials Research and the Office of Multidisciplinary Activities in the Directorate for Mathematical and Physical Sciences.<br/><br/>The past two decades have seen a rapid development in experimental high-throughput experimentation (HTE) methodologies that would be extremely valuable for (i) the discovery of new applied materials with high complexity and (ii) the generation of deep understanding of structure/function, structure/activity and structure/performance relationships. Especially high photon flux X-ray techniques have enormous transformative potential in materials discovery. The research team leverages the data being collected by the Cornell High Energy Synchrotron Source (CHESS) and at Caltechs Joint Center for Artificial Photosynthesis (JCAP). While high-throughput inorganic library synthesis is relatively well-established, high-throughput structure determination, which is at the heart of the proposed research, is in its infancy. X-ray diffraction is well-suited for rapidly collecting information on the atomic arrangements in an inorganic sample, but the data do not immediately reveal a crystal structure. The development of data analysis, data mining and interpretation methodologies has not kept pace with the development of experimental capability. Consequently, data acquired in a week can take many months of traditional analysis by researchers. Automation and machine-intelligent processing of the data are absolutely necessary to maximise the impact of complex multidimensional datasets. <br/><br/>This project addresses this state of affairs head-on; It investigates computational techniques that allow dealing with the multiparameter space associated with HTE structure determination of materials libraries, through constraint guided search adn optimization, statistical machine learning, and inference techniques in combination with direct human input into the process. Anticipated advances include new probabilistic methods and computational discovery tools that integrate soft and hard constraints that capture the complex background knowledge from the underlying physics and chemistry of materials with insights gained from high throughput data analytics and machine learning. If the project succeeds in achieving the anticipated enormous efficiency gains in complex structure determination, it could have have a transformative impact on materials discovery and complex solid state chemistry and physics. <br/><br/>The ability to reduce complex materials dicovery and optimization from timeframes of months or years to hours or days could lead to a paradigm shift in the development of products benefiting society, with technological advances as well as commercial impact on energy, sustainability, health and quality of life. The planned free dissemination of data sets and computational tools to the larger scientific community is likely to enhance the broader impacts of the project. 
The project facilitates increased interdisciplinary interactions between computer scientists and materials scientists at Cornell University and offers enhanced opportunities for training a new generation of researchers at the interface between the two disciplines.",5,Cornell University
1020001000388102,INSPIRE Track 1: Human reasoning and learning in a complex but tractable decision-making paradigm,"This INSPIRE award is partially funded by the Robust Intelligence Program in the Division of Information and Intelligent Systems in the Directorate for Computer and Information Science and Engineering and the Perception, Action, and Cognition Program in the Division of Behavioral and Cognitive Sciences in the Directorate for Social, Behavioral, and Economic Sciences.<br/><br/>This project studies a hallmark of human intelligence, namely the ability to think ahead. Anticipating the consequences of one's own actions and those of others is of crucial importance in areas as diverse as business negotiations, military strategy, and teaching. In each of these domains, the quality of one's decisions depends on the quality of one's mental simulations of event sequences, which might be constrained by cognitive capacity limitations, one's grasp of the complexities of the decision space, or both. The project's goal is to identify the factors that affect people's performance in thinking ahead, and to investigate to what extent this performance can be improved through training. The project ties into the study of heuristics (general rules used by decision-makers) in psychology and behavioral economics.<br/><br/>Thinking ahead is difficult to measure and model in real-world problems. Therefore, the investigator has developed a two-person strategic decision-making task as a controllable experimental environment. Participants take turns putting tokens on a 4x11 board and try to get four of their own tokens in a row. The rules are unfamiliar to subjects, yet easy to learn. The size of the state space for this task is of the order of 10^20, much smaller than that of chess (~10^47), yet of appreciable complexity and much too large for humans to easily grasp. The investigators have ""weakly solved"" this task using an improved version of alpha-beta pruning. It can most likely also be solved strongly, which means that one can determine, in any given position, whether any given decision is an error. Human data will be collected in three task modes: one in which the subject is given a position and has to win in a set number of moves; human versus computer; and human versus human. The investigators will track subjects' eye movements, which could reveal aspects of planning and perhaps even serve to visualize the process of mental simulation. <br/><br/>An important component of the project will be computational modeling of the data. Humans cannot think ahead to the end of the task, so the investigators hypothesize that people use simple features of positions (heuristics) to value certain moves over others. Examples of features could be the presence of a three-in-a-row, or of an adjacent, open-ended two-in-a-row. Preliminary human data suggest ""strategic blind spots"" created by the application of incorrect heuristics. The investigators aim to predict the probability that, in a given position, a subject will make a particular move, based on the features of the position that the move would create, as well as the subject's limited depth of reasoning. The resulting model will make it possible to quantitatively address the question of whether learning mostly serves to increase one's depth of reasoning or to refine one's palette of heuristics. 
The behavioral and eye movement data will lay the foundation for studies of the neural substrates of reasoning in complex decision-making contexts.<br/><br/>The project is positioned at the intersection of computer science, cognitive psychology, management and decision science, and education, and has the potential to contribute to each of these fields. In the long run, the project might be able to contribute to understanding and perhaps avoiding failures to think ""out of the box"" in real-life problem-solving. Moreover, strategic tasks like the one used in this project could serve as a mini-environment for testing hypotheses about teaching methods.",6,New York University
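One way to picture the heuristic move-valuation model described in the preceding record is as a weighted feature score per candidate move, turned into move probabilities with a softmax; the feature names echo the abstract, while the weights, moves, and counts below are invented for illustration and are not the authors' fitted model:

# Illustrative sketch: feature-weighted move values -> softmax move probabilities.
import math

def softmax(values):
    m = max(values)
    exps = [math.exp(v - m) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

# Each candidate move is described by counts of features it would create,
# e.g., a three-in-a-row or an adjacent, open-ended two-in-a-row.
candidate_moves = {
    "e4": {"three_in_a_row": 1, "open_two": 0},
    "b7": {"three_in_a_row": 0, "open_two": 1},
    "k2": {"three_in_a_row": 0, "open_two": 0},
}
weights = {"three_in_a_row": 2.5, "open_two": 1.0}   # assumed; would be fit to human data

values = [sum(weights[f] * n for f, n in feats.items())
          for feats in candidate_moves.values()]
probs = softmax(values)
for move, p in zip(candidate_moves, probs):
    print(f"P(move={move}) = {p:.2f}")

Fitting the weights (and a depth-of-reasoning parameter) to observed choices is what would let such a model separate refined heuristics from deeper look-ahead.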
1020001000388103,INSPIRE Track 1: Language-Based Computational Methods for Analyzing Worldviews,"This INSPIRE award is partially funded by the Cyber-Human Systems Program in the Directorate for Computer and Information Science and Engineering, the Robust Intelligence Program in the Directorate for Computer and Information Science and Engineering, and the Social Psychology Program in the Directorate for Social, Behavioral and Economic Sciences. The goal of this project is to gather new insights into the ways people organize and understand their worlds within and across different cultures by means of innovative methodologies and tools from the fields of psychology and computational linguistics. The findings from this project will provide a better understanding of people on the individual psychological level as well as of the cultures themselves, while developing and demonstrating new research techniques that can be used in the future by many disciplines to exploit the vast troves of scientifically valuable textual data currently available online. Specifically, the project targets the following three main research objectives: 1) Construct a very large multicultural database of writings from English-speaking cultures, covering several styles and genres, including: social media (e.g., blogs, tweets); news articles; literary works; student writings. 2) Build computational linguistic models that can automatically identify differences in concept usage for different cultures, and apply these models on a large scale. 3) Validate the findings of these computational models through psychological qualitative and quantitative methods in laboratory studies. <br/><br/>The ways people use words can provide insights into the ways they see and understand their worlds. Everyday language can also tell us about people's social, emotional, and psychological states and even the ways they think about themselves and others. Particularly interesting is that many of the social and psychological insights we find in the language of individuals can be extrapolated to groups, communities, and entire cultures. This project seeks to analyze the written language of people across several cultures in a way that will allow us to better understand the ways groups of people understand their worlds. In short, it will use advances in computational linguistics and social psychology to track the underlying values, beliefs, and concerns of very large groups of people by analyzing the ways they use words. Unlike previous studies, which have been limited to relatively small self-report surveys targeting a handful of concepts across cultures, this project will help us understand the differences in perception for thousands of concepts across several cultures representing hundreds of thousands of people.<br/><br/>This project promises to shed new light on cultural differences by analyzing the ways people understand their worlds through their everyday language use. The approach will inform applications in communication, threat control, tracking of cultural values, and others. The project will also provide educational opportunities, in the form of training for students in both computer science and psychology, who will be directly exposed to interdisciplinary research, cultural diversity, and international experiences. 
Finally, the large multicultural dataset that will be created as part of this project, along with the tools to process it, will be made publicly available, thus enabling future research, as well as educational projects concerned with the analysis and understanding of cultural diversity and worldview.",11,University of Michigan Ann Arbor
1020001000388112,INSPIRE Track 1: The Age of Water and Carbon in Hydroecological Systems: A New Paradigm for Science Innovation and Collaboration through Organic Team Science,"This INSPIRE award is partially funded by the Geobiology & Low Temperature Geochemistry Program in the Division of Earth Sciences in the Directorate for Geoscience; the Human Centered Computing Program in the Division of Information & Intelligent Systems in the Directorate for Computer & Information Science & Engineering; and the Virtual Organizations as Socio-technical Systems Program in the Division of Advanced Cyber-Infrastructure in the Directorate for Computer & Information Science & Engineering.<br/><br/>This project will develop new scientific work practices and cyberinfrastructure tools to advance the fields of hydrology and limnology (lake ecology). The project will develop a socio-technical model of ""organic team science"" in which scientists are motivated to collaborate across diverse scientific communities and to share and normalize data to solve scientific problems through an open framework, potentially creating new cross-disciplinary collaborations around the modeling problems. The project will advance hydrology by making already-collected geospatial data more usable for analysis and simulations. It will advance limnology by developing an integrated hydrodynamic model of lakes as connected to the broader hydrologic network to quantify water, material, nutrient and energy fluxes, which is potentially transformative for limnology. The project will be carried out with collaborators including the NSF Susquehanna/Shale Hills Critical Zone Observatory and the GLEON projects.<br/><br/>The project will provide benefits by developing cyberinfrastructure that gives limnology access to climate and geospatial data and models, as well as novel practices for supporting organic team science. The latter is potentially a significant and transformative contribution to the infrastructure for science. The hydrodynamic model could be useful for those managing lakes. The proposal includes plans for outreach to the scientific community to share these findings.",10,Pennsylvania State Univ University Park
1020001000388140,EAGER: Exploratory Research on Harnessing Human Manipulation,"This project is exploring the use of crowd-sourcing, or citizen science, to produce a large database of analyzed human manipulation video. Every human is an expert on human manipulation. Harnessing this expertise has the potential to radically transform our knowledge of manipulation. A large analyzed video database directly serves several important goals of robotics, including autonomous robotic manipulation and recognition of human behavior. This new approach is in its earliest stages. The project goals are to explore the new approach, identify problems, assess the value, refine the vision, and formulate plans. Activities include development of pilot interfaces, testing different approaches to user training, and developing approaches to the filtering and aggregation of results. The ultimate impact will be a broader scientific understanding of human manipulation, and a large dataset to support research in robotic manipulation and human behavior recognition. Results will be disseminated through publication of discoveries related to human manipulation, open access to the database, and open access to the crowd-sourcing interface and related software.",7,Carnegie-Mellon University
1020001000388173,SCH: INT: Novel Techniques for Patient-centric Disease Management using Automatically Inferred Behavioral Biomarkers and Sensor-Supported Contextual Self-Report,"The vision of patient-centric, personalized, precision medicine and wellness will be fully realized only when an individual's self-care and clinical decision making are informed by a rich, predictive model of that individual's health status. The evolution and dissemination of mobile technology has created unprecedented opportunities for highly detailed and personalized data collection in a far more granular, unobtrusive, and even affordable way; these data include activity levels, location patterns, sleep, consumption, and communication and social interaction. However, turning this potential into practice requires that we develop the algorithms and methodologies to transform these raw data into actionable information. The research will develop novel and generalizable techniques to derive robust measures relevant to individual health and clinical decision making. The team will develop and evaluate tools that convert raw human-activity data into clinically actionable behavioral biomarkers. This demands creative uses of the underlying technical capabilities (i.e., passive data capture, data analysis and machine learning, data visualization, user experience), as well as rigorous understanding of the underlying health condition and management (i.e. functional health measures, achievable and optimal health outcomes, patient challenges in adherence, risks and benefits associated with medication and other aspects of treatment, and clinical decision making). The approach has broad applicability across disease management (e.g., auto-immune, gastrointestinal, depression, cognitive decline, and neurologic disorders), but also calls for tailoring to specific conditions and individuals. Therefore, we will conduct this initial work in a specific context, that of chronic pain management for three prominent conditions: rheumatoid arthritis, osteoarthritis, and lower back pain. The behavioral biomarkers associated with our initial target domain, pain management, center around: (i) decline in activity levels; (ii) increase in stress; (iii) decrease in sleep quality; (iv) drop in function, e.g., reduction in travel distance or inability to go to work. The effectiveness of passive sensing capabilities of the mobile phone to track sleep, changes in activity level, stress, social isolation, geographic location and several other indicators that are likely antecedents or symptoms of pain interference has been demonstrated previously. <br/><br/>While behavioral biomarkers rely extensively on passively captured data streams (such as activity, location, communication, application usage and audio), there remain important cases in which self-report data is required to augment or clarify passively collected data. However, the standardized patient survey instruments that assess relevant symptoms and behavior are not suitable for use on a daily basis because of length, question design, or both. Further, traditional forms of self report are often intrusive, burdensome, and suffer high rates of attrition. A new approach, contextual recall, aims to mitigate the issues related to self-report through three key mechanisms: optimizing the delivery of prompts, providing the user with key contextual cues to improve recall, and employing visual input techniques as an alternative to long-form measures that do not scale well to frequent mobile self-reports. 
The approach to personalizing disease management is intentionally scalable in terms of affordability and accessibility. Passive data collection requires no user attention, and contextual recall is a form of self-report designed for busy individuals with a range of demands and constraints on their time, as well as potential literacy and numeracy constraints. The clinician-facing components of this approach are also designed to work in resource-constrained clinical settings where clinicians are under particular time pressure. The team will recruit patients and clinicians from typically underserved communities to engage in the participatory design process. The overall contributions of this work will include development and evaluation of: (1) software techniques to combine and transform passively monitored and self-reported data streams into clinically meaningful, actionable, and personalized indicators, which we call behavioral biomarkers; (2) contextual recall, which allows the collection of highly granular and contextually specific self-report data to enhance passively captured data with information from the patient perspective, while balancing the tension between recall bias and usability; and (3) a methodology that systematizes the collaboration with clinical domain experts to develop and integrate behavioral biomarkers into clinical decision making for specific diseases. We will create and evaluate a modular and extensible suite of analytics and user interaction techniques designed to facilitate iterative implementation and evaluation. These modules will themselves be a contribution, but equally important will be the evaluation of the overall approach of behavioral biomarkers as a driver of precision medicine.",12,Cornell University
1020001000388178,SCH: INT: Supporting Healthy Sleep Behaviors through Ubiquitous Computing,"Sleep is one of the key aspects of good health, along with a healthy diet and regular exercise. Computing researchers have recently worked to understand how systems can support nutrition and exercise, but sleep has been relatively under-studied despite its significant health benefits. The right amount of quality sleep can improve both physical and mental health and is associated with a lower risk for heart disease, diabetes, depression, and obesity. However, sleep disorders are often undiagnosed, and many people are unaware of how their activities or environments affect sleep. Ubiquitous computing has the opportunity to help through self-monitoring, awareness, and identification of strategies to promote healthy sleep behaviors.<br/><br/>This interdisciplinary research agenda will involve the design, development, and evaluation of novel ubiquitous computing approaches to support good sleep health and behaviors. This research will combine expertise in human-centered design, computer science, sleep medicine, and nursing. The researchers' previous formative work with target users and sleep experts has informed design requirements for technologies in this field. The work will focus on building on those results through three main activities. First, they will apply machine learning to model sleep patterns based on a person's smartphone usage to unobtrusively sense and predict sleep duration and timing. Then they will employ a human-centered design process to develop and study the feasibility and initial efficacy of two novel software tools to assist individuals in sensing, recording, and visualizing the behavioral (e.g., caffeine use, food intake) and environmental factors (e.g., noisy environment, light levels) that can disrupt their sleep. And then, they will develop and assess the feasibility and initial efficacy of a new technique and tool for assessing, modeling, and visualizing the impact of sleep deprivation on users' reaction time, cognitive functioning, and mood to help them prioritize sleep.<br/><br/>This research will bring into focus the domain of sleep as a new area for human-centered computing research. The design and evaluation of new applications for sleep will further knowledge of how technology can be designed for long-term health tracking and behavior change, and the designs and evaluations move beyond what is currently being addressed in industry. The technical contributions are novel approaches to monitoring sleep and an expansion of knowledge about how technologies can adapt to meet the unique health needs of different users. Finally, the research seeks to unite the fields of sleep research and computing research to develop solutions for better understanding and treating sleep disorders.<br/><br/>This work has the potential to significantly affect the lives of the estimated 40.6 million individuals in the U.S. with sleep disorders or sleep deprivation, which helps address a major public health issue. In addition, the economic cost of sleep deprivation has been estimated to be $63.8 billion per year. The research will have immediate impact by allowing free access to new behavior change technologies developed through this project. 
In addition, the research will impact education by using sleep research as a means of attracting women and minorities to computing research and engaging students in interdisciplinary design teams through student projects and directed research groups.",12,University of Washington
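A minimal sketch of the first activity in the preceding record, predicting sleep duration from smartphone usage, might look like the following; the features and numbers are synthetic stand-ins, not the project's model:

# Illustrative sketch: regress nightly sleep duration on simple phone-usage features.
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical features per night: hour of last screen-off, number of
# night-time unlocks, and minutes of screen time after 10pm.
X = np.array([
    [23.0, 2,  30],
    [ 1.5, 8, 120],
    [22.5, 1,  15],
    [ 0.5, 5,  90],
    [23.5, 3,  45],
])
y = np.array([7.8, 5.2, 8.1, 6.0, 7.3])   # hours slept (self-reported)

model = LinearRegression().fit(X, y)
tonight = np.array([[0.0, 6, 100]])
print(f"Predicted sleep duration: {model.predict(tonight)[0]:.1f} h")

A deployed system would of course need richer features, circular encodings of clock time, and validation against ground-truth sleep measures.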
1020001000388190,SCH: INT: Large-Scale Probabilistic Phenotyping Applied to Patient Record Summarization,"This project creates novel methods and tools for the analysis of large-scale Electronic Health Record (EHR) data. Models of disease, or phenotypes, are derived from a large collection of patient characteristics, as recorded in the EHR. To assess their value and robustness in a clinical application, the phenotypes are incorporated into a longitudinal patient record summarization system for clinicians at the point of patient care.<br/><br/>The research for this project contributes to two inter-related outcomes: (i) a probabilistic graphical model of a patient record, specifically a Latent Dirichlet Allocation (LDA) model of the patient phenotypes. Models that can handle the heterogeneous data types in the EHR, along with their challenges, such as sparseness and artificial redundancy, are investigated. For the models to be useful in the clinical world, they must be interpretable by humans, easily adaptable for EHR-driven applications, and clinically relevant. This is achieved by encoding prior clinical knowledge in the models and automatically learning from clinicians' feedback; and (ii) a patient record summarizer for clinicians at the point of patient care. The summarizer leverages the probabilistic patient model and learns new models of salience through the clinicians' interactions with the deployed summarizer, in essence learning the relevance of different patient phenotypes. For the evaluation of the phenome model and the summarizer, particular care is given to assessing their value in a real-world clinical setting, at the point of care.<br/><br/>The research builds on, and is translated into, deliverables that are robust and interoperable with the EHR of a large hospital in New York City. If successful, the availability of interpretable and actionable patient models can drastically impact both EHR-driven research activities and patient care, through better tools for clinicians. Finally, the project introduces students in the field of medicine to STEM activities, while presenting real-world, exciting applications to STEM students.<br/><br/>For further information see the project website at: http://people.dbmi.columbia.edu/noemie/phenosum",12,Columbia University
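As a rough picture of the LDA phenotyping idea in the record above, the sketch below (toy counts and hypothetical codes, not the project's model) fits a topic model to a patient-by-clinical-code count matrix and reads each topic as a candidate phenotype:

# Illustrative sketch: LDA over clinical-code counts, topics as candidate phenotypes.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

codes = ["E11.9", "I10", "N18.3", "J45.909", "R05", "E78.5"]   # hypothetical codes
# Rows = patients, columns = how often each code appears in the record.
counts = np.array([
    [5, 3, 2, 0, 0, 4],
    [4, 2, 3, 0, 1, 3],
    [0, 1, 0, 6, 5, 0],
    [1, 0, 0, 5, 4, 1],
])

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
for k, topic in enumerate(lda.components_):
    top = [codes[i] for i in topic.argsort()[::-1][:3]]
    print(f"Candidate phenotype {k}: {top}")

# Per-patient phenotype mixture (rows sum to 1), usable by a downstream summarizer.
print(lda.transform(counts).round(2))

The interpretability and clinical-prior constraints described in the abstract go beyond what plain LDA provides, but the input/output shape is the same.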
1020001000388191,SCH: INT: Collaborative Research: A Self-Adaptive Personalized Behavior Change System for Adolescent Preventive Healthcare,"The majority of morbidity and mortality during adolescence is preventable and related to behaviors such as substance use and vehicle-related injuries. Most adolescents visit a healthcare provider once a year, providing an ideal opportunity to integrate behavioral health screening into clinical care. Although the majority of adolescent health problems are amenable to behavioral intervention, few health information technology interventions have been integrated into adolescent care. With complementary theoretical advances (social-cognitive theories of behavior change) and technology advances (intelligent narrative-centered learning environments, user modeling, and machine learning), the field is now well positioned to design health behavior change systems that can realize significant impacts on behavior change for adolescent preventive health. <br/><br/>Computationally-enabled models of behavior change hold significant promise for adolescent healthcare. The objective of the proposed research is to design, implement, and investigate INSPIRE, a self-adaptive personalized behavior change system for adolescent preventive health. INSPIRE will utilize a social-cognitive theory of behavior change built around a tight feedback loop in which a narrative-centered behavior change environment will produce improved behaviors in patients, and the resulting patient outcome data will be used by a reinforcement learning optimization system to learn refined computational behavior change models. With a focus on risky behaviors and an emphasis on substance use, adolescents will interact with INSPIRE to develop an experiential understanding of the dynamics and consequences of their substance use decisions. A unique feature of INSPIRE afforded by recent advances in machine learning will be its ability to optimize health behavior change at both the individual and population levels. At the individual level, INSPIRE will utilize a patient behavior model to personalize its behavior change narratives for individual adolescents. It will customize interactions based on an adolescent's goals and affective models. At the population level, INSPIRE will utilize reinforcement learning to adapt its narrative generation system to systematically increase its ability to improve two types of outcomes: behavior change and self-efficacy. The project will culminate with an experiment conducted with a fully implemented version of INSPIRE at outpatient clinics within the UC San Francisco Department of Pediatrics, Benioff Children's Hospital. <br/><br/>It is anticipated that INSPIRE interventions will yield two types of outcomes: 1) improved health behavior through significant reductions in adolescent risky behavior, relative to standard of care; and 2) increased self-efficacy with respect to adolescents' ability to make good decisions about their health behaviors, relative to standard of care. Designed for natural integration into clinic workflow, interoperability with EHR and patient portal systems, and security and privacy requirements, INSPIRE will report patient behavior change summaries to healthcare providers. Through multi-platform deployments supporting laptop, desktop, tablet, and mobile computing devices, INSPIRE will serve as an empowering tool for adolescents, making them full participants in their own wellbeing. 
It will also enable researchers to run behavior analytics to investigate which properties of alternate interventions contribute most effectively to behavior change outcomes. Going forward, it is anticipated that INSPIRE will provide a testbed for a broad range of behavior change research and serve as the foundation for next-generation personalized preventive healthcare through computationally-enabled behavior change.",12,University of California-San Francisco
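The population-level reinforcement-learning loop described in the record above can be caricatured as a simple epsilon-greedy bandit over intervention variants; everything below (variant names, outcome probabilities) is a toy simulation, not the INSPIRE system:

# Illustrative sketch: epsilon-greedy selection among narrative intervention variants.
import random

random.seed(0)
variants = ["narrative_A", "narrative_B", "narrative_C"]
value = {v: 0.0 for v in variants}     # running outcome estimate per variant
count = {v: 0 for v in variants}
true_success = {"narrative_A": 0.3, "narrative_B": 0.5, "narrative_C": 0.4}  # hidden

def simulate_outcome(variant):
    # Stand-in for an observed behavior-change outcome (1 = improved).
    return 1.0 if random.random() < true_success[variant] else 0.0

epsilon = 0.1
for _ in range(2000):
    if random.random() < epsilon:
        chosen = random.choice(variants)              # explore
    else:
        chosen = max(variants, key=value.get)         # exploit current estimate
    reward = simulate_outcome(chosen)
    count[chosen] += 1
    value[chosen] += (reward - value[chosen]) / count[chosen]   # incremental mean

print({v: round(value[v], 2) for v in variants})

A full system would condition the choice on an individual patient model rather than optimizing a single population-wide estimate.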
1020001000388199,SCH: INT: Collaborative Research: FITTLE+: Theory and Models for Smartphone Ecological Momentary Intervention,"Many health conditions are caused by unhealthy lifestyles and can be improved by behavior change. Traditional behavior-change methods (e.g., weight-loss clinics; personal trainers) have bottlenecks in providing expert personalized day-to-day support to large populations for long periods. There is a pressing need to extend the reach and intensity of existing successful health behavior change approaches in areas such as diet and fitness. Smartphone platforms provide an excellent opportunity for projecting maximally effective interventions for behavior change into everyday life at great economies of scale. Smartphones also provide an excellent opportunity for collecting rich, fine-grained data necessary for understanding and predicting behavior-change dynamics in people going about their everyday lives. The challenge posed by these opportunities for detailed measurement and intervention is that current theory is not equally fine-grained and predictive. <br/><br/>This interdisciplinary project investigates theory and methods to support fine-grained behavior-change modeling and intervention integrated via smartphone into the daily lives of individuals and groups. Fittle+ develops a new and transformative form of smartphone-delivered Ecological Momentary Intervention (EMI) for improving diet and physical activity. This approach will provide social support and autonomously planned and personalized coaching that builds on methods from mobile sensing, cognitive tutoring, and evidence-based social design. The foundation for this new approach will require new predictive computational theories of health behavior change. Current coarse-grained conceptual theories of individual health behavior change will be refined into fine-grained predictive computational models. These computational models will be capable of tracking moment-by-moment human context, activity, and social patterns based on mobile sensing and interaction data. Using these monitoring capabilities, Fittle+'s computational models will support assessment of, and predictions about, individual users and groups based on underlying motivational, cognitive, and social mechanisms. These predictive models will also be used to plan and optimize coaching actions, including detailed diagnostics, individualized goals, and contextually and personally adapted interventions. <br/><br/>The collaborative team of researchers works with weight-loss interventionists at a Hawaii facility of one of the nation's largest health organizations. The team includes expertise in mobile sensing, artificial intelligence, computational cognition, social psychology, human-computer interaction, computer tutoring, and measurement theory.",12,Carnegie-Mellon University
1020001000388200,SCH: INT: Mapping the Cardiac Acousteome: Biosensing and Computational Modeling Applied to Smart Diagnosis and Monitoring of Heart Conditions,"The goal of the project is to develop fundamental science, knowledge, tools, and technologies for smart diagnosis and monitoring of heart conditions based on automated cardiac auscultation. An innovative wearable multimodal acoustic array (the StethoVest) is proposed. This sensory array localizes and separates acoustic broadband sources in space by measuring spatial and temporal derivatives of the acoustic field. Using this StethoVest, first-of-their-kind maps of the cardiac acousteome are generated. These maps not only include 4D (3D space and time) measurements of heart sounds; they are also accompanied by high-fidelity hemoacoustic simulations that delineate cause and effect, as well as by simulation-guided source-identification algorithms that provide unprecedented diagnostic sensitivity and specificity. The simulations take cardiac imaging data as input, and simulate cardiac blood flow as well as the associated heart sounds. The latter part of this four-year project focuses on investigating the physics of aortic valve murmurs as well as StethoVest-based screening for hypertrophic obstructive cardiomyopathy.<br/><br/>Annual national expenditure on heart disease exceeds half a trillion dollars, with over half a million deaths attributed to this disease each year. The proposed research leverages emerging capabilities in biosensing, computational modeling, imaging, and signal processing to produce a diagnostic technology that moves us away from management of heart disease that is mostly reactive, expensive and hospital-centric, towards an approach that is smart, proactive, patient-centric and cost-effective. The sound inventory generated from continuous, automated monitoring and interpretation of heart sounds has the potential to generate unprecedented understanding of human physiology. The project promotes interdisciplinary education and workforce development through involvement of undergraduates, graduate students, and postdocs in the research, development of courses and clinical training tools, and local and international outreach activities.",12,Johns Hopkins University
1020001000388204,SCH: INT: Collaborative Research: A Self-Adaptive Personalized Behavior Change System for Adolescent Preventive Healthcare,"The majority of morbidity and mortality during adolescence is preventable and related to behaviors such as substance use and vehicle-related injuries. Most adolescents visit a healthcare provider once a year, providing an ideal opportunity to integrate behavioral health screening into clinical care. Although the majority of adolescent health problems are amenable to behavioral intervention, few health information technology interventions have been integrated into adolescent care. With complementary theoretical advances (social-cognitive theories of behavior change) and technology advances (intelligent narrative-centered learning environments, user modeling, and machine learning), the field is now well positioned to design health behavior change systems that can realize significant impacts on behavior change for adolescent preventive health. <br/><br/>Computationally-enabled models of behavior change hold significant promise for adolescent healthcare. The objective of the proposed research is to design, implement, and investigate INSPIRE, a self-adaptive personalized behavior change system for adolescent preventive health. INSPIRE will utilize a social-cognitive theory of behavior change built around a tight feedback loop in which a narrative-centered behavior change environment will produce improved behaviors in patients, and the resulting patient outcome data will be used by a reinforcement learning optimization system to learn refined computational behavior change models. With a focus on risky behaviors and an emphasis on substance use, adolescents will interact with INSPIRE to develop an experiential understanding of the dynamics and consequences of their substance use decisions. A unique feature of INSPIRE afforded by recent advances in machine learning will be its ability to optimize health behavior change at both the individual and population levels. At the individual level, INSPIRE will utilize a patient behavior model to personalize its behavior change narratives for individual adolescents. It will customize interactions based on an adolescent's goals and affective models. At the population level, INSPIRE will utilize reinforcement learning to adapt its narrative generation system to systematically increase its ability to improve two types of outcomes: behavior change and self-efficacy. The project will culminate with an experiment conducted with a fully implemented version of INSPIRE at outpatient clinics within the UC San Francisco Department of Pediatrics, Benioff Children's Hospital. <br/><br/>It is anticipated that INSPIRE interventions will yield two types of outcomes: 1) improved health behavior through significant reductions in adolescent risky behavior, relative to standard of care; and 2) increased self-efficacy with respect to adolescents' ability to make good decisions about their health behaviors, relative to standard of care. Designed for natural integration into clinic workflow, interoperability with EHR and patient portal systems, and security and privacy requirements, INSPIRE will report patient behavior change summaries to healthcare providers. Through multi-platform deployments supporting laptop, desktop, tablet, and mobile computing devices, INSPIRE will serve as an empowering tool for adolescents, making them full participants in their own wellbeing. 
It will also enable researchers to run behavior analytics to investigate which properties of alternate interventions contribute most effectively to behavior change outcomes. Going forward, it is anticipated that INSPIRE will provide a testbed for a broad range of behavior change research and serve as the foundation for next-generation personalized preventive healthcare through computationally-enabled behavior change.",12,North Carolina State University
1020001000388205,SCH: INT: Collaborative Research: Physiological Studies of Brain Signals using a Wireless Neuro-Sensing-Diagnostic System,"The goal of this project is to develop a fully passive intracranial sensing system. This novel electronic brain-machine interface utilizes an implantable miniaturized neurosensor based on microwave back-scattering, a miniature textile antenna, and RF circuits on a flexible polymer substrate. It has the unique properties of producing only minor heating (thus minimizing injury and trauma to the brain), eliminating the need for wires through the cranium, and preserving natural lifestyle and comfort. The acquired neurosignals are collected by a self-powered wearable body area network for continuous real-time monitoring and subsequent physiological interpretation. The project includes physiological studies of the acquired brain signals. <br/><br/>While conceived initially as a tool for long-term monitoring of brain signals, this sensing technology has a very wide range of potential applications in health, including control of prosthetics and early detection of seizures. Overcoming the challenges in safety and long-term reliability presented by conventional neurosensor technology could transform healthcare for people suffering from severe chronic neurological disorders. Results of the physiological studies are disseminated through a data repository for understanding brain disorders. Research is integrated with education through several activities, including hands-on experience for students in this new area, summer camps, and a variety of outreach activities to attract women and minorities to engineering.",12,Ohio State University
1020001000388212,SCH: INT: Collaborative Research: Physiological Studies of Brain Signals using a Wireless Neuro-Sensing-Diagnostic System,"The goal of this project is to develop a fully passive intracranial sensing system. This novel electronic brain-machine interface utilizes an implantable miniaturized neurosensor based on microwave back-scattering, a miniature textile antenna, and RF circuits on a flexible polymer substrate. It has the unique properties of producing only minor heating (thus minimizing injury and trauma to the brain), eliminating the need for wires through the cranium, and preserving natural lifestyle and comfort. The acquired neurosignals are collected by a self-powered wearable body area network for continuous real-time monitoring and subsequent physiological interpretation. The project includes physiological studies of the acquired brain signals. <br/><br/>While conceived initially as a tool for long-term monitoring of brain signals, this sensing technology has a very wide range of potential applications in health, including control of prosthetics and early detection of seizures. Overcoming the challenges in safety and long-term reliability presented by conventional neurosensor technology could transform healthcare for people suffering from severe chronic neurological disorders. Results of the physiological studies are disseminated through a data repository for understanding brain disorders. Research is integrated with education through several activities, including hands-on experience for students in this new area, summer camps, and a variety of outreach activities to attract women and minorities to engineering.",12,Arizona State University
1020001000388217,SCH: INT: Optimal Prosthesis Design with Energy Regeneration,"The goals of this research are three-fold: (1) To develop new approaches for the simulation of human limb control; (2) To develop new approaches for optimizing prosthetic limb control, capturing energy during walking, and storing that energy to lengthen useful prosthesis life; and (3) To develop a prosthesis prototype. In order to accomplish these high-level goals, the following specific objectives will be pursued: (1) Study both able-bodied gait and amputee gait in our human motion lab; (2) Develop mathematical models for human motion control to provide a foundation for artificial limb control; (3) Develop electronic prosthesis controls; (4) Develop new approaches for optimizing prosthesis design parameters based on computer intelligence; (5) Fabricate a prosthesis prototype and test the prototype in a robotic system; (6) Conduct human trials of the prosthesis prototype.<br/><br/>If successful, the results of this research will lead to new methods for optimizing complex, inter-related subsystems, such as those that are characteristic of prostheses. The human leg operates by transferring energy between the knee, which absorbs energy, and the ankle, which produces energy. The new prosthesis design that results from this research will mimic the energy transfer of the human leg. Inter-related engineering systems are typically designed and optimized independently of each other. This results in designs that are overly conservative, that do not interact optimally, and that limit performance, adaptability, and robustness. This research will develop integrated optimization algorithms that consider the inter-dependence of related subsystems (for example, hardware, software, and control). Current knee prostheses do not restore normal gait, and this contributes to degenerative joint disease among amputees. This research will develop new design and optimization approaches that will allow prostheses to perform more robustly, to perform closer to natural human gait, and to last longer between battery charges than current prostheses.",20,Cleveland State University
1020001000388229,EAGER: Sub-second human-robot synchronization,"The Principal Investigator (PI) will explore and test a number of hardware platforms and software algorithms whose goal is to facilitate sub-second human-robot synchronization. To this end the PI will utilize the medium of music, one of the most time-demanding media where accuracy in milliseconds is critical because asynchronous operations of more than 10 ms are noticeable to listeners. Specifically, the PI will develop up to three different kinds of robotic devices intended to allow a drummer, whose arm was recently amputated from the elbow down, to play and synchronize between his organic functioning arm and the newly developed robotic devices. In addition, he will develop and investigate the efficiency of novel anticipation algorithms designed to foresee human actions before they take place and trigger robotic actions with low latency and in a timely manner. This research will advance our understanding in a variety of areas, including the biomechanics of limb operations, machine-learning techniques for the anticipation and prediction of human gestures, and highly accurate myoelectric robotic devices.<br/><br/>Broader Impacts: This project will ultimately benefit a large population of amputees whose quality of life could improve through the use of low-latency robotic limbs with sub-second synchronization. Facilitating such accurate sub-second human-robot synchronization could also improve efficiency and flow in other human-robot interaction scenarios where humans and robots must collaborate to achieve time-demanding common goals. The novel solenoid-based robotic device(s) created in this research should also benefit musicians in general (that is to say, those who are not disabled), who will be able to explore novel drumming techniques and create novel musical results.",7,Georgia Tech Research Corporation
1020001000388333,III: Travel Fellowships for Students from U.S. Universities to Attend ISWC 2013,"Semantic technologies are beginning to play increasingly important roles across a broad range of applications. There is an urgent need for advanced training of graduate students to conduct research in this area and to prepare for academic or industrial careers.<br/><br/>Participation in premier research conferences in the area is an essential element of such training. This project provides funds to subsidize the travel expenses of 10-15 students at U.S. universities to attend the 2013 International Semantic Web Conference (ISWC), which will be held October 21-25, 2013, in Sydney, Australia. <br/><br/>ISWC is a premier international conference which offers a venue for presentation of rigorously peer-reviewed research results in Semantic Web and allied areas. The conference includes two events specially targeted to graduate students: The ISWC doctoral consortium offers an opportunity for doctoral students to present their work and receive feedback and mentoring. The ISWC Career Mentoring lunch provides an informal setting for students to discuss all issues pertaining to research careers with senior researchers in the community, and to establish long-term mentoring ties. <br/><br/>Broader impacts of the project include: Enhanced opportunities for training and mentoring of US-based graduate students in Semantic Web and related areas, broadening the participation of students from groups (women and minorities) that are currently under-represented in Computer Science in general, and Semantic Web in particular.",10,University of California-Santa Barbara
1020001000388338,U.S.-Based Students Travel Support for the ACM SIGSPATIAL 2013,"This award provides travel support for 25-30 U.S.-based graduate and undergraduate students to participate in the 21st ACM SIGSPATIAL GIS 2013 Conference, held in Orlando, FL, USA, November 5-8, 2013 (http://sigspatial2013.sigspatial.org). The ACM SIGSPATIAL GIS conference has established itself as the world's premier conference to foster research in the areas of Spatial Data and Analysis and Geographic Information Systems (GIS). The conference provides a forum for original research contributions covering all conceptual, design, and implementation aspects of GIS ranging from applications, user interfaces, and visualization to storage management and indexing issues. It brings together researchers, developers, users, and practitioners carrying out research and development in novel systems based on geospatial data and knowledge, and fostering interdisciplinary discussions and research in all aspects of GIS. It is the premier annual event of the ACM Special Interest Group on Spatial Information (ACM SIGSPATIAL). The conference seeks to continuously advance the state of the art in spatial data management and spatial data analysis and broaden its impact.<br/><br/> The participation of U.S. graduate and undergraduate students results in the intellectual stimulation of young minds to pursue advanced research and development activities in an area that has huge technical and societal impact. The students greatly benefit from attending this conference, as they are able to partake in the current state-of-the-art in the area of geospatial systems and applications, present their work, and potentially make connections for research collaborations and research mentoring. The total number of ACM SIGSPATIAL GIS participants in the past has been in excess of 300, with a majority of the participants from the U.S., followed by Europe and Asia. A strong representation of U.S.-based graduate students at ACM SIGSPATIAL GIS is useful in maintaining U.S. competitiveness in the important research areas crucial for U.S. infrastructures and applications that critically depend on geo-referenced information. Applications of spatial and geographical information contribute to a wide array of societal contexts: improving the lives of ordinary citizens through advanced transportation applications, increasing the security of the nation through better intelligence gathering using geospatial knowledge, and developing applications that will have a positive impact on the global environment. Educating and training the next generation of scientists, engineers, and practitioners in the area of spatial and geographical information systems will have a significant impact and will enhance the continuation of the competitive edge of the United States in this important area.",10,University of Southern California
1020001000388431,SCH: INT: Collaborative Research: FITTLE+: Theory and Models for Smartphone Ecological Momentary Intervention,"Many health conditions are caused by unhealthy lifestyles and can be improved by behavior change. Traditional behavior-change methods (e.g., weight-loss clinics; personal trainers) have bottlenecks in providing expert personalized day-to-day support to large populations for long periods. There is a pressing need to extend the reach and intensity of existing successful health behavior change approaches in areas such as diet and fitness. Smartphone platforms provide an excellent opportunity for projecting maximally effective interventions for behavior change into everyday life at great economies of scale. Smartphones also provide an excellent opportunity for collecting rich, fine-grained data necessary for understanding and predicting behavior-change dynamics in people going about their everyday lives. The challenge posed by these opportunities for detailed measurement and intervention is that current theory is not equally fine-grained and predictive. <br/><br/>This interdisciplinary project investigates theory and methods to support fine-grained behavior-change modeling and intervention integrated via smartphone into the daily lives of individuals and groups. Fittle+ develops a new and transformative form of smartphone-delivered Ecological Momentary Intervention (EMI) for improving diet and physical activity. This approach will provide social support and autonomously planned and personalized coaching that builds on methods from mobile sensing, cognitive tutoring, and evidence-based social design. The foundation for this new approach will require new predictive computational theories of health behavior change. Current coarse-grained conceptual theories of individual health behavior change will be refined into fine-grained predictive computational models. These computational models will be capable of tracking moment-by-moment human context, activity, and social patterns based on mobile sensing and interaction data. Using these monitoring capabilities, Fittle+'s computational models will support assessment of, and predictions about, individual users and groups based on underlying motivational, cognitive, and social mechanisms. These predictive models will also be used to plan and optimize coaching actions including detailed diagnostics, individualized goals, and contextually and personally adapted interventions. <br/><br/>The collaborative team of researchers works with weight-loss interventionists at one of the nation's largest health organizations' facilities in Hawaii. The team includes expertise in mobile sensing, artificial intelligence, computational cognition, social psychology, human computer interaction, computer tutoring, and measurement theory.",12,Palo Alto Research Center Incorporated
1020001000388616,EAGER: Decision-Theoretic and Scalable Algorithms for Computing Finite State Equilibrium,"This project is exploring algorithms for computing multiagent strategies that are in exact and approximate equilibrium. The context involves economic games that are played repeatedly by agents each of whom privately observes noisy signals about other players' actions. A complete characterization of equilibria for such games, missing until recently, introduces the concept of a finite state equilibrium in which each player's strategy is represented as a finite state automaton. Players' strategies are verified to be in equilibrium by solving a partially observable Markov decision process. The research is building on this surprising and deep application of decision theory toward equilibrium analysis in a pragmatic class of games, which provides a bold and innovative bridge between decision and game theories. It is designing novel algorithms that utilize approximate and error-bounded solutions of partially observable Markov decision processes for computing approximate finite state equilibrium in games with increasing dimensions.<br/><br/>This research is contributing insights for broader classes of games such as stochastic games with noisy signals. The interdisciplinary outcomes of this research are being integrated into courses and conference tutorials on multiagent decision making for dissemination. New international research collaborations with eminent multiagent researchers in Japan are being established.<br/><br/>This research is bringing together the disciplines of decision and game theories with mutual benefit. Key outcomes include scalable algorithms for solving highly complex games thereby contributing to the understanding of sophisticated interactions under uncertainty. Applications include analyzing auctions without release of public information, covert price wars between firms, and managing resource congestion.",13,University of Georgia Research Foundation Inc
1020001000388638,EAGER: The REAL Challenge,"The speech and spoken dialog communities watched with interest as the world discovered Apple's SIRI. One of the first applications of an advanced spoken dialog system to a real world problem, it captured the imagination of potential users; the idea of speaking to an object to obtain information or direct an action has entered into the public mind, and the possibilities seem endless. Yet despite the potential advantages, academia has so far created few spoken dialog systems that serve real users and real applications. In the first three years of being open to the research community, the Let's Go system developed by the PI and her team was used for over 150 publications, including 18 theses outside of its host institution. The speech and spoken dialog communities clearly need more real world systems that furnish data, free architectures and research platforms. The PI's goal in this project is to foster such new real world systems, which will give the speech and spoken dialog communities steady streams of data as well as research platforms that they can use to run studies. The project will engage seasoned researchers (who know what will work and what will not), along with high school and undergraduate students (whose younger minds are free to imagine what speech applications can become, uninfluenced by research results or funding concerns), in an effort to find the next great speech applications. <br/><br/>Broader Impacts: The REAL Challenge will be the spark that ignites the creation of novel real speech applications. The project will address a broad range of students and invigorate research in a way that can be used in other areas of natural language research. It will inspire a new generation of researchers and provide a unique opportunity for young students to work with seasoned researchers. The novel applications that will be found will ultimately be of help to the general public, whether it be for better access to information, for interaction with a robotic helpmate, or perhaps for a new way of communicating with others using social networks.",0,Carnegie-Mellon University
1020001000388653,CAREER: Learning from Observational Data with Knowledge,"Large observational datasets from social networks, climatology, finance, and other areas have made it possible for researchers to test complex hypotheses that previous studies would have been under-powered to tackle. This is especially true in biology and health, with the proliferation of new methods for gathering long-term population data, such as from electronic medical records, and real-world health data from body-worn sensors. However, the number of complex hypotheses that can be tested in datasets with hundreds or thousands of variables far surpasses what humans can propose and reason about. Exhaustively testing all possible relationships is not computationally feasible, and after this testing a researcher must still examine a non-trivial number of seemingly significant findings to determine which still need to be validated experimentally. This project aims specifically to infer causal relationships, as these provide insight into not only how a system behaves, but also why it behaves as it does, enabling the development of successful interventions. Results from this work will be incorporated into education at three levels (high school, undergraduate, and graduate) through university courses and summer programs for high school students. In addition to communicating the core concepts of causal inference, the summer programs will also introduce potential computer scientists to key areas of computer science research. Applications of the methods developed to data from stroke and diabetes may lead to new knowledge about the physiologic processes underlying recovery in stroke, and the complex interaction of factors affecting glucose in people with diabetes.<br/><br/>This work will lead to more robust and efficient inference of causal relationships from large-scale datasets, through a feedback loop between experiments and prior knowledge. Current approaches require users to specify the set of variables and hypotheses to be tested, but these limit findings to the set a user chose to explore. Instead, this work will develop methods that can use prior knowledge in the form of causal relationships as well as prior experimental results to constrain what will be tested and generate new hypotheses. Causes provide information about their effect that is not contained in other variables, so this work will develop measures of how explanatory a cause is and how much information it yields, and use changes in this measure to guide generation of complex relationships in the constrained hypothesis space. The proposed approach differs from stochastic heuristics in that the new method will be deterministic, and will evaluate relationships individually, thus addressing the computational challenge and reducing the impact of incorrect inference. Second, the work will lead to algorithms that can automatically evaluate how findings relate to prior knowledge, whether they are, for example, consistent, novel, or contradictory. This will allow researchers to focus more in depth on findings likely to be significant or interesting, rather than those that simply confirm prior knowledge. It also provides a feedback loop between knowledge and inference.",5,Stevens Institute of Technology
1020001000388674,EAGER: Viewpoint Tracking via Acceleration Stabilized with Computer Vision,"This project addresses a problem which has kept virtual reality from widespread use. Some 15 years ago, high-capability graphics cards in PCs reduced the cost of computing for virtual environments from hundreds of thousands of dollars to (today) hundreds of dollars. Low-cost head-mounted displays have just appeared. A similar advance in viewpoint tracking has not occurred; accurate, low-latency wide-area viewpoint tracking remains very costly. Virtual reality demands stereo rendering at 60 frames per second per eye and system latencies below 50 ms. This research is developing a novel system to provide accurate, low-latency viewpoint tracking to meet these requirements with consumer-cost components. The research is based upon a recently demonstrated proof-of-concept system. A standard RGB-Depth camera sits on the user's head. Pose is calculated by matching images against an environment model. A Kalman filter integrates rotational velocity and linear acceleration from a cheap high-speed inertial measurement unit (IMU) to update the pose estimate many times between frames. This not only gives low-latency pose readings, it also improves initial values for the next camera calculation. The depth images and reconstruction software are concurrently used to incrementally build/update the depth model of the environment for the camera matching. <br/><br/>The current research is demonstrating the system's potential. To work completely successfully, both conceptual and algorithmic advances are in process. IMU calibrations are being improved. Temperature and dynamic bias must be compensated in the calibration to improve estimation and reduce jitter. The use of multiple cameras to reduce overall noise and handle difficult cases (such as blank walls) is being addressed with new algorithms and evaluated. The merging of new and modeled data is computationally expensive. The feasibility demo uses two GPUs, one for rendering and one for tracking. Ways are being invented to do both on one. Additional future research includes tracking dynamic objects and incorporating object recognition (e.g., a desk or chairs) to improve estimates. Widespread access to virtual reality may well open new, unexpected creative uses of the technology. The research is advancing the proof-of-concept system toward one that can make this exciting leap.",13,University of North Carolina at Chapel Hill
1020001000388678,EAGER: Formal models of intention,"The project conducts formal, basic research into the modeling of ""intention"". This under-researched and ill-understood concept underlies many applications, including online search, calendars, intelligent dialog systems, security applications, self-driving cars, military applications, and many more. This particular project is concerned with domain-independent, formal models underlying all such applications. 'Informational attitudes' such as knowledge and belief, which capture the information available to the agent, and 'motivational attitudes' such as goals, preference and utility, which capture what the agent cares about, have been studied extensively. In contrast, and despite their importance, ""action attitudes"", which capture the agent's attitude towards different actions she might take in light of her motivations and the information available to her, have been poorly studied. This is true in particular of intention, perhaps the most basic action attitude, and the focus of this project.<br/><br/> The project is grounded in the Principal Investigator's prior work, in which he laid out a computational point of view, dubbed the ""database perspective"". That work, which offered an axiomatic theory of the joint revision of belief and simple ('atomic') action intention, is being extended along multiple dimensions:<br/><br/>-- Modeling complex intentions, using Dynamic Epistemic Logic (DEL).<br/>-- Modeling achievement intentions.<br/>-- Modeling teleology.<br/>-- Developing a quantitative (""probabilistic"") theory of intention.",6,Stanford University
1020001000388790,CAREER: Machine Learning Approaches for Genome-wide Biological Network Inference,"NSF-0644366<br/>Chen, Xue-Wen<br/><br/>The objectives of this research program are (1) to develop and apply novel computational approaches for uncovering genome-wide networks of interactions between genes and proteins, and (2) to conduct related educational activities in a newly established bioinformatics program in the Department of Electrical Engineering and Computer Science at the University of Kansas. Specifically, building upon reconstructing biological networks of moderate size, the new research will computationally uncover genome-wide biological networks and map interactions of genes and proteins across a variety of organisms. The research directions include: Simultaneously integrating multiple sources of biological knowledge into dynamic Bayesian networks for learning networks of gene interactions; learning networks of protein interactions from heterogeneous data; learning integrated networks of gene and protein interactions; learning genome-wide networks of gene and protein interactions; and cross-species network learning. It will advance the state of the art by developing machine learning methods for effectively integrating prior knowledge from multiple sources of data, including learning from highly heterogeneous data and large-scale networks. The research will also produce new methods and user-friendly software that can be applied by molecular biologists to gain insight into diverse biological problems, such as how biological processes are regulated on a genome scale and how individual bio-molecules interact with one another in the cell.<br/><br/>Learning with prior knowledge and highly heterogeneous data sources is fundamental to computational biology, information theory, machine learning, data mining, and other areas. Thus, the proposed research will benefit a variety of application domains including research in biology and medicine. The biological discovery derived from this project will also contribute to a variety of fields that include agriculture development, rational drug design, and health care. The research program will foster and facilitate collaborations between biologists and the PI. The educational components are closely tied to the research activities, which include (1) developing and improving bioinformatics courses that are closely related to the research outlined here and integrating them into the core bioinformatics curriculum, and (2) providing special training opportunities in the interdisciplinary area of bioinformatics for a wide range of students, from high school through graduate school, including groups typically underrepresented in the field of science and technology.",3,Wayne State University
1020001000388890,CAREER: A Multiagent Teacher/Student Framework for Sequential Decision Making Tasks,"Physical (robotic) agents and virtual (software) agents are becoming increasingly common in industry, education, and domestic environments. Although recent research advances have enabled agents to learn how to complete tasks without human intervention, little is known about how best to have humans teach agents or agents teach other agents or even how agents might teach humans. Considering the full matrix of agent/human learning, in which either an agent or a human can play the role of teacher or student, would increase the potential benefits of leveraging human and agent expertise and knowledge. <br/><br/>This project aims to study agent/human learning in the context of sequential decision-making problems, a class of central importance for real-world agent systems. This project aims to develop a novel teacher/student framework that integrates autonomous learning with teaching by another agent or a human. The project plans to develop and evaluate a set of core algorithms to allow: (1) agents to teach agents, thus enabling robust knowledge sharing among agents; (2) humans to teach agents, thus allowing humans to share or transfer common sense or domain-specific knowledge with agents; and (3) agents to teach humans, thus helping humans better understand how to perform or recast sequential decision-making tasks already understood or performed by autonomous agents. In all cases, the goal is to develop methods that significantly improve learning performance relative to learning without guidance from a teacher. Issues to be explored include mismatch between teacher/student abilities, learning from multiple teachers, and shared knowledge representation between teacher/student. The PI plans to focus on several scenarios, each with different sets of assumptions about the knowledge or skill of the student or teacher and the kind of interaction possible between them (e.g., whether the teacher can tell the student what action to take). The techniques developed in the project will be evaluated in a variety of test domains and will involve simulations as well as actual robots.<br/><br/>The teacher/student framework will enable agents to teach other agents and humans, as well as integrate autonomous learning with agent and human teaching. Understanding how to best teach agents is of key importance in developing deployable agent systems. The platform- and domain-independent approach incorporates ideas from multiagent systems, machine learning, human-computer interaction, and human-robot interaction communities, and has the potential to impact each of these areas. This work takes a step towards transitioning agents from specialized systems usable only by experts into useful tools and teammates for people without programming expertise. <br/><br/>This project has a strong educational component. The PI teaches at an undergraduate college, and undergraduate students will play a crucial role throughout the project. Furthermore, the research produced by this project will be incorporated into five of the PI's courses, providing exciting new material to attract and retain computer science majors. The PI will also continue outreach to secondary school students as well as to underrepresented groups via Lafayette College's S-STEM and Higher Achievement programs.",6,Washington State University
1020001000388903,EAGER: Hierarchical Topic Modeling by Nonnegative Matrix Factorization for Interactive Multi-scale Analysis of Text Data,"EAGER: Hierarchical Topic Modeling by Nonnegative Matrix Factorization for Interactive Multi-scale Analysis of Text Data<br/><br/>Nonnegative matrix factorization (NMF) has proven to be an important tool of choice for numerous data analytic problems in text, imaging, and computer vision. It provides advanced mathematical methods for improvements in dimensionality reduction, clustering, etc. A distinguishing feature of the NMF is the requirement of non-negativity in the factors that represent the matrix in a lower rank. This property greatly enhances the interpretability and modeling capability for many applications, where preserving non-negativity is important. This project is studying foundational properties of the NMF, producing new algorithmic methods using the framework of NMF for efficient and effective hierarchical clustering and topic modeling of large scale text data for multi-scale analysis, generating labels for the topics, and interactive analysis. In addition, an interactive visual analytic system for the proposed methods is being developed to make these theoretical and algorithmic discoveries readily available to the research and applications communities. New multi-scale hierarchical methods for generating clusters and discovering topics in the documents and detection of topic changes over time are being explored to enable computationally efficient and perceptually effective ways of exploring text data and discovering latent group structure. Visual analytic systems are also being developed based on this foundational work to enable more effective and informed discovery of topics in a large-scale document collection.<br/><br/>This project will have a significant impact on the analysis and development of NMF algorithms and new modeling of problems for applications utilizing the NMF (e.g., 'Big Data'). The project is yielding effective computational methods with solid analysis that will enhance the analysis of high-dimensional data in broad areas of science, engineering, medicine, and business disciplines beyond the application areas being considered within this project.",5,Georgia Tech Research Corporation
1020001000388968,CAREER: Advancing Social Computing with Tailored Motivators,"This project will enable the development of personally-tailored motivators for the use of social technologies, e.g., various incentives to encourage users to contribute knowledge, provide emotional support or curate information on an online system. The project includes (1) study of motivator-to-motivation matching using a series of controlled studies, (2) development of models to infer users' motivation profiles through analyzing how underlying motivations predict usages of social technologies, and (3) application and evaluation of tailored motivators to encourage user-participation in naturalistic settings. <br/><br/>The results of this research will advance scientific understanding of how a person's motivations influence his or her usage of social technologies. It will also deepen our understanding of human motivations more generally by elucidating the intricate relationship between the types of rewards and how they are moderated by different underlying motivations to affect behaviors. In addition, the research will also result in models for identifying users' motivation profiles, and a general design framework for researchers and designers to tailor motivators, bridging multiple disciplines.<br/><br/>The developed motivation profiles and design framework may be useful to tackle a variety of everyday challenges of motivation in social technologies to improve knowledge sharing, social support and information representation. Through collaborative projects, this research will also aid health behavior change efforts to encourage blood donation and combat infant obesity. Finally, through an integrated educational plan, this research program will also contribute to improving general classroom learning, the creation of a graduate course and high school summer camp, and the training of scholars in interdisciplinary research methods.",11,University of Washington
1020001000389013,Outstanding Student Research at GL2013,"This award provides funding for the travel and accommodation expenses for graduate students in computational linguistics whose papers have been selected for presentation at the Sixth International Conference on Generative Approaches to the Lexicon (GL2013) conference in Pisa, Italy, in September 2013. The overall goal of the GL conferences is to bring together researchers in theoretical and computational linguistics, computer science, cognitive science, and lexicography to explore the problem of semantic compositionality -- how the meaning of expressions in natural languages derives from the structure of the lexicon, or dictionary, of semantic formatives (words or idiomatic multi-word expressions) in a natural language. GL2013 specifically aims at exploring the relation and potential synergies between generative approaches, which assume that semantic formatives are structured objects, and distributional semantics, whose proponents typically assume that they are internally unstructured and analyze their semantic contribution by means of their distribution in linguistic contexts. <br/><br/>This award enables talented students working on computational semantics to interact with professionals in a variety of fields and perspectives related to that domain, and to present their work in a major international venue. This in turn will help to nurture a lasting interest in the upcoming generation of computational researchers in what is likely to remain a major area of interdisciplinary study for a very long time.",0,Brandeis University
1020001000389017,INDP: Collaborative Research: Coding for All: Interest-Driven Trajectories to Computational Fluency,"This Cyberlearning Integration and Deployment (INDP) project brings together an interdisciplinary research team from the MIT Media Lab, the Digital Media and Learning Hub at University of California Irvine, and Harvard University's Berkman Center for Internet and Society to explore development and use of new types of online tools, activities, and gatherings to engage more young people in developing computational fluency, particularly youth from groups currently underrepresented in computing. The project builds on the success of the NSF-funded Scratch programming language and online community (http://scratch.mit.edu), where more than 1.5 million young people have created interactive stories, games, animations, and simulations based on their interests. The Coding for All project aims to provide new pathways into Scratch for youth from populations that are not currently drawn in easily to technological and scientific discourse and activities. The PIs are designing and refining a variety of interest-based microworlds -- introductory programming environments that are customized to particular interests of youngsters in those populations -- to provide easier and more inviting entry points for getting started with coding, and they aim to develop guidelines for designing microworlds that are simple enough not to be overwhelming, engaging enough to draw youngsters in, rich enough to allow creative expression, and tuned well enough to the interests and prior knowledge of new participants to foster curiosity and learning. In addition, the team is exploring how to use personnel in libraries and other spaces where low-income youth congregate to support initial introduction to and engagement with these microworlds and developing and refining tools to support interest-based on-line hangouts and unconferences, where young people who become engaged through these microworlds can meet peers and mentors to share ideas, form collaborations, and increase their programming and expressive capabilities. The PIs are collecting much data about the engagement and participation of youngsters, the development of their skills and understanding, and the development of their interests, and their analysis will contribute to deeper understanding of needed supports, pathways, and outcomes related to computational fluency.<br/><br/>This project addresses the need to draw in and promote learning among those in populations not served well by current educational practices and important national priorities in workforce development, equity, and the need for a technologically fluent public. The project's tools and activities will provide alternative pathways into coding, increasing opportunities for young people in non-dominant communities to develop computational fluency. The focus on public libraries explores how to use public educational institutions most geared towards serving the technology needs and diverse interests of non-dominant communities in taking advantage of new online learning opportunities. The findings from this research will inform researchers and practitioners concerned with STEM-related learning, online educational resources, equity in education, and cyberlaw.",9,University of California-Irvine
1020001000389020,INDP: Collaborative Research: Coding for All: Interest-Driven Trajectories to Computational Fluency,"This Cyberlearning Integration and Deployment (INDP) project brings together an interdisciplinary research team from the MIT Media Lab, the Digital Media and Learning Hub at University of California Irvine, and Harvard University's Berkman Center for Internet and Society to explore development and use of new types of online tools, activities, and gatherings to engage more young people in developing computational fluency, particularly youth from groups currently underrepresented in computing. The project builds on the success of the NSF-funded Scratch programming language and online community (http://scratch.mit.edu), where more than 1.5 million young people have created interactive stories, games, animations, and simulations based on their interests. The Coding for All project aims to provide new pathways into Scratch for youth from populations that are not currently drawn in easily to technological and scientific discourse and activities. The PIs are designing and refining a variety of interest-based microworlds -- introductory programming environments that are customized to particular interests of youngsters in those populations -- to provide easier and more inviting entry points for getting started with coding, and they aim to develop guidelines for designing microworlds that are simple enough not to be overwhelming, engaging enough to draw youngsters in, rich enough to allow creative expression, and tuned well enough to the interests and prior knowledge of new participants to foster curiosity and learning. In addition, the team is exploring how to use personnel in libraries and other spaces where low-income youth congregate to support initial introduction to and engagement with these microworlds and developing and refining tools to support interest-based on-line hangouts and unconferences, where young people who become engaged through these microworlds can meet peers and mentors to share ideas, form collaborations, and increase their programming and expressive capabilities. The PIs are collecting much data about the engagement and participation of youngsters, the development of their skills and understanding, and the development of their interests, and their analysis will contribute to deeper understanding of needed supports, pathways, and outcomes related to computational fluency.<br/><br/>This project addresses the need to draw in and promote learning among those in populations not served well by current educational practices and important national priorities in workforce development, equity, and the need for a technologically fluent public. The project's tools and activities will provide alternative pathways into coding, increasing opportunities for young people in non-dominant communities to develop computational fluency. The focus on public libraries explores how to use public educational institutions most geared towards serving the technology needs and diverse interests of non-dominant communities in taking advantage of new online learning opportunities. The findings from this research will inform researchers and practitioners concerned with STEM-related learning, online educational resources, equity in education, and cyberlaw.",9,Harvard University
1020001000389023,INDP: Collaborative Research: Coding for All: Interest-Driven Trajectories to Computational Fluency,"This Cyberlearning Integration and Deployment (INDP) project brings together an interdisciplinary research team from the MIT Media Lab, the Digital Media and Learning Hub at University of California Irvine, and Harvard University's Berkman Center for Internet and Society to explore development and use of new types of online tools, activities, and gatherings to engage more young people in developing computational fluency, particularly youth from groups currently underrepresented in computing. The project builds on the success of the NSF-funded Scratch programming language and online community (http://scratch.mit.edu), where more than 1.5 million young people have created interactive stories, games, animations, and simulations based on their interests. The Coding for All project aims to provide new pathways into Scratch for youth from populations that are not currently drawn in easily to technological and scientific discourse and activities. The PIs are designing and refining a variety of interest-based microworlds -- introductory programming environments that are customized to particular interests of youngsters in those populations -- to provide easier and more inviting entry points for getting started with coding, and they aim to develop guidelines for designing microworlds that are simple enough not to be overwhelming, engaging enough to draw youngsters in, rich enough to allow creative expression, and tuned well enough to the interests and prior knowledge of new participants to foster curiosity and learning. In addition, the team is exploring how to use personnel in libraries and other spaces where low-income youth congregate to support initial introduction to and engagement with these microworlds and developing and refining tools to support interest-based on-line hangouts and unconferences, where young people who become engaged through these microworlds can meet peers and mentors to share ideas, form collaborations, and increase their programming and expressive capabilities. The PIs are collecting much data about the engagement and participation of youngsters, the development of their skills and understanding, and the development of their interests, and their analysis will contribute to deeper understanding of needed supports, pathways, and outcomes related to computational fluency.<br/><br/>This project addresses the need to draw in and promote learning among those in populations not served well by current educational practices and important national priorities in workforce development, equity, and the need for a technologically fluent public. The project's tools and activities will provide alternative pathways into coding, increasing opportunities for young people in non-dominant communities to develop computational fluency. The focus on public libraries explores how to use public educational institutions most geared towards serving the technology needs and diverse interests of non-dominant communities in taking advantage of new online learning opportunities. The findings from this research will inform researchers and practitioners concerned with STEM-related learning, online educational resources, equity in education, and cyberlaw.",9,Massachusetts Institute of Technology
1020001000389055,EAGER: Leveraging 3D structure estimates for photo collection based geo-localization and semantic indexing,"This project advances the state of the art by utilizing geometric consistency as a mid-level visual similarity cue to develop a visual index of a geo-located image dataset and to use the attained data associations as a means to infer semantic relationships among dataset elements. The characterization of the image content in terms of the geometric and semantic elements observed in a scene provides a general framework for both identifying and managing data association in large-scale photo collections. The project develops such complementary data abstractions into a single framework by focusing on two main research topics: (1) Determining the geographic location where an image was taken by comparing it against a large database of geo-located urban imagery - accordingly, the challenge of balancing both search completeness and computational tractability is brought to the forefront of research efforts; and<br/>(2) Incorporating geometric structure estimates attained from large photo-collections or ground reconnaissance video/photos as a means to identify and recognize semantically meaningful elements within the reconstructed 3D-environment.<br/><br/>This project leverages the use of geometric consistency as a visual data association primitive in order to introduce the concept of structural and semantic indexing within the development of internet-scale photo collection analysis systems. Moreover, by combining the complementary data abstraction levels of geometrical structure and semantic context, the research team develops a more efficient and robust data organization framework with applicability well beyond the studied test application of urban geo-localization.",5,University of North Carolina at Chapel Hill
1020001000389060,EAGER: TAEMILE: Towards Automating Experience Management in Interactive Learning Environments,"A key challenge for interactive learning environments is how to automatically co-regulate, balancing learners' autonomy with the pedagogical processes intended by educators. In this Cyberlearning: Transforming Education EAGER project, the PIs are exploring the use of experience management (EM) to address this issue. They are collecting preliminary data about (1) the relationship between a learner's goal orientations and play style and (2) the impact of dynamically adjusting the learning environment using a variety of EM strategies and their impacts on learners' autonomy and learning outcomes. These issues are being addressed in the context of an interactive learning environment called Solving the Incognitum. Using this environment, learners learn about geological time and the fossil record. The setting is the historical Charles Willson Peale Museum of Art and Science, the largest US natural history museum of its day (1801-1827). Included in this virtual museum are all the dinosaur and ancient animal bones that Peale and his group brought back from his expeditions. Learners are challenged to find the bones that are missing from a skeleton, and clues are scattered around the museum. Learners with a goal-achievement orientation may not explore enough and may need to be encouraged to do that, while those who are more natural explorers may need to be guided to move towards the planning needed to achieve their goal. Reflection in action and reflection on action are supported.<br/><br/>As ailing governments cut funding for schools, there is a push towards using technology for providing the kinds of help that aides and specialized teachers might have provided. For such an effort to be successful, we need to learn more about how to design engaging learning environments that can help struggling learners. Learning environments where learners get to explore, design, build, and solve problems are engaging, but using them to promote learning requires understanding how to give learners the autonomy they need to remain engaged and enthusiastic along with the guidance they need to successfully learn. This project represents an early attempt at addressing that need.",9,Drexel University
1020001000389137,"CAREER: Effective Analysis, Exploration and Visualization of Big Flow Data to Understand Dynamic Flows","The ever-growing size and complexity of flow data produced from many scientific, engineering and medical simulations pose significant challenges which are not thoroughly addressed by existing visualization techniques. These challenges include computation, interaction, visualization and user challenges. Addressing the computation challenge is a central research focus and remains a prominent direction in the field, while the other challenges are often overlooked. The goal of this CAREER project is to address these less investigated challenges by pioneering a comprehensive framework toward effective visual understanding of flow fields. It contributes to the state of the art in flow visualization by promoting an innovative database approach to shape-based field line modeling and classification, investigating new string-, sketch- and graph-based interfaces and interactions for flow field exploration, and exploring occlusion and clutter reduction through unconventional streamline repositioning and automatic tour generation. The general approach developed in this research is expected to substantially improve our ability to visually understand a wide spectrum of flow fields, ranging from the traditional application of fluid flows to new applications such as traffic flows, cash flows and message flows. This project will provide training for graduate and undergraduate students in the area of data visualization and scientific computing via capstone class projects. A pedagogical toolbox will be designed along with web-based resources to support teaching visualization classes through expressive demos, potentially benefiting universities nationwide with a similar teaching need. The PI will continue to attract underrepresented students through university and department outreach programs and engage local middle and high school students through summer youth programs. <br/><br/>This research tackles the fundamental challenges in visualizing large, complex three-dimensional steady and unsteady flow fields. Underlying the proposed work is a novel database approach to field line shape encoding, classification and interrogation. The PI will integrate and unify a variety of concepts from geometric modeling, computer vision and data mining to create robust visual characters and words from field lines for shape analysis and organization. Novel interfaces and interactions will be introduced to enable intuitive retrieval of partial field lines via textual and visual forms, and examination of hierarchical field lines and their spatiotemporal relationships in the transformed graph space. Innovative streamline repositioning for focus+context viewing and automatic tour for examining hidden or occluded flow features will be devised to move from clutter to clarity in the visualization. The success of this research will benefit a wide variety of applications within and beyond graphics and visualization, such as shape analysis, visual perception, database organization, game development, and visualization in education.<br/><br/>The PI will collaborate with scientists and researchers at university, industry and national labs, applying the proposed solutions to solve real-world problems. Research results will be evaluated through both domain expert reviews and formal user studies. Selected research outcomes will be integrated into user-engaging educational applications that will be run on tablet devices and delivered to the general public for wide dissemination. This CAREER project will build a solid foundation for addressing key challenges in flow visualization, and lead to multidisciplinary collaborations spanning atmospheric cloud, combustion chemistry and cardiovascular research. It will also produce fruitful deliverables, featuring the first-ever benchmark field line shape database, tutorials and workshops at premier visualization conferences, and pedagogical tools and game apps.",15,Michigan Technological University
1020001000389194,CAREER: Bayesian Nonparametric Learning for Large-Scale Structure Discovery,"CAREER: Bayesian Nonparametric Learning for Large-Scale Structure Discovery<br/><br/>This CAREER project will advance the state-of-the-art for automated discovery of structure within data as diverse as images and video, natural language, audio sequences, and social and biological networks. Contemporary applications of statistical machine learning are dominated by parametric models. This approach constructs models of pre-determined size (with a finite-dimensional vector of parameters) which are tuned using training data. To be effective, the underlying structure of such models must be manually specified by experts with application-specific knowledge. This presumed structure imposes limits on what can possibly be learned even from very big datasets.<br/><br/>Bayesian nonparametric models instead define distributions on models of arbitrary size with infinite-dimensional spaces of functions, partitions, or other combinatorial structures. They lead to flexible, data-driven unsupervised learning algorithms, and models whose internal structure continually grows and adapts to new observations. Bayesian nonparametric models, while promising, are an incompletely-developed technology posing significant challenges to practice. This CAREER project will increase the practical feasibility and impact of Bayesian nonparametric approaches by pursuing three interrelated themes:<br/><br/>1) Nonparametric Model Design and Evaluation. New families of models for data with hierarchical, spatial, temporal, or relational structure are investigated. Quantitative validation of the statistical assumptions and biases inherent in these models will be emphasized, evaluating whether these align with the empirical statistics of significant application areas.<br/><br/>2) Reliable Structure Discovery. Statistical inference algorithms which move beyond the local moves of standard (and widely used) Monte Carlo and variational methods will be developed. Compelling examples indicate that local optima are a significant issue for contemporary methods, so a family of novel algorithms is proposed, which dynamically adjust model complexity as learning proceeds.<br/><br/>3) Scalable and Extensible Nonparametric Learning. Common patterns across a wide range of popular nonparametric models are identified, which suggest a corresponding family of scalable and parallelizable online learning algorithms. The ""memoized"" online variational inference algorithm avoids some practical instabilities and sensitivities of conventional methods, while allowing provably correct optimization of the nonparametric model structure and complexity.<br/><br/>An extensible ""BNPy: Bayesian Nonparametric Learning in Python"" software package is under development to allow easy application of the novel learning algorithms to a wide range of current and future BNP models. The education and outreach plan of this CAREER project leverages this software to create interdisciplinary undergraduate research teams exploring applications in the natural and social sciences, and a week-long summer school on Bayesian nonparametrics to be held twice at Brown University's Institute for Computational and Experimental Research in Mathematics (ICERM).",2,Brown University
1020001000389207,EAGER: Exploring Adapting Language Technology Across a Network of Domains,"Much of the most successful software for processing and understanding natural language is based on learning from labeled examples. However, applications to diverse genres such as social media and historical documents have demonstrated the limitations of this approach since the application data differs dramatically from the training examples. Labeling training datasets for each new genre is prohibitively expensive. Methods that adapt the software between the original source domain and the target --- for example, from 20th century newspapers to Shakespearean drama --- are an attractive alternative and an active research area. However, language does not naturally fall into a few source and target domains; rather, documents exist in a multidimensional field of similarity and difference, based on metadata attributes such as the date of publication. In addition, binary source/target adaptation ignores vast amounts of unlabeled data that may bridge the gap between, say, the 20th and 17th centuries, or between text from the Wall Street Journal and text entered on Twitter.<br/><br/>This EAGER award explores a new approach to adapting language technology to new application domains. Using explicit document metadata such as date of authorship (for historical documents) or product type (for online reviews), documents are situated in a network of fine-grained domains. Micro-adaptation is then performed between adjacent nodes in the network, which are expected to be more similar to each other than the (distant) source and target domains. These micro-adaptations can then be propagated across the domain graph, yielding an adaptation path from source to target. Empirical evaluations will compare this approach to the current state-of-the-art practices: adapting directly from source to target, and adapting from the source to a broader set of non-source documents. In addition, a theoretical analysis will identify conditions under which this approach is likely to succeed.<br/><br/>Language technology already impacts society by facilitating the retrieval, organization, and summarization of information, but its inability to transcend a small set of training domains is one of the most critical obstacles to more widespread adoption. Key application domains such as social media, patient medical records, and legal documents differ substantially from available training corpora, and the development of effective technology for these areas depends on bridging the domain gap. In addition, the sociocultural variation found in online language dramatically reduces the performance of state-of-the-art systems, creating a ""language gap"" between standard and minority dialects. This research is not tied to any specific language processing task; rather, it promises to build a more robust foundation that can apply across many tasks, bringing the benefits of language technology to new users and settings.",0,Georgia Tech Research Corporation
1020001000389218,EAGER: Formal and Empirical Foundations of Semantics-Preserving Machine Translation,"Statistical machine translation has been enormously successful over the last two decades, resulting in what is today a thriving industry highlighted by offerings such as Google Translate. Yet translation systems still often fail to preserve the semantics of sentences -- the ""who did what to whom"" relationships that they express. This is because they model translation as simple substitution and permutation of words, or at best as the reordering of syntactic units, such as nouns and adjectives. To preserve semantics, they must model semantics. At the same time, computational linguists have developed rigorous, expressive mathematical models of language that exhibit high empirical coverage of semantically annotated linguistic data, correctly predict a variety of important linguistic phenomena in many languages, and can be processed with highly efficient algorithms. However, these models are untested as the basis of statistical translation models. <br/><br/>This EArly-concept Grant for Exploratory Research aims to close the gap, building the foundations of empirical semantics-preserving transduction models based on modern, linguistically-informed mathematical models of language. The project derives new mathematical functions that map linguistically expressive representations from one language to another, and implements them to align translated documents and translate new documents. Though high-risk, this exploratory project has the potential to unify and transform the disparate fields of empirical machine translation and theoretical computational linguistics.",0,Johns Hopkins University
1020001000389219,CAREER: BCSP: Methods for analyzing sequencing data from repetitive genomes,"Our understanding of how biological systems work is increasingly fueled by data from DNA sequencers. Sequencing has improved dramatically over the past several years, but the datasets produced by sequencers are unwieldy and difficult to interpret. This is especially true when the genome being studied contains many repeated stretches of DNA, as is the case for most mammals and plants. The goal of this project is to develop improved computational and statistical methods for analyzing DNA sequencing data, providing faster, more accurate, and more interpretable results to scientists studying organisms with repetitive genomes. These methods will be implemented as open source software tools made freely available to the research community. A successful project will result in these tools being widely adopted in the biological research community. Repetitive sequences are implicated in cellular regulation processes and associated with human disease. The integrated education plan also seeks to improve software for analyzing sequencing data by teaching computer science students the complete set of skills needed to make usable genomics software in the era of big data genomics. The PI will develop an undergraduate minor in computational biology, a graduate class covering methods for analyzing large sequencing datasets, and a competitive class project. A successful effort will result in more trained computer scientists joining and contributing to the study of computational biology and genomics.<br/><br/>The genomes of plants, mammals and other higher eukaryotes contain many repeated DNA sequences. 80% of the maize genome, for example, is covered by repetitive stretches of DNA. At the same time, computational tools typically model DNA as a string. This has advantages; it allows these tools to borrow methods developed for other strings, such as books and web pages, and apply them to DNA. But for repetitive genomes, the string abstraction fails to capture the prevalence of repeated DNA sequences related to each other through evolution. The PI proposes a broad research agenda based on the idea that analyzing sequencing data derived from repetitive genomes requires special, repeat-aware computational methods. The first project explores accurate and efficient methods for aligning sequence reads to repeat families. The PI proposes methods that exploit similarities between alignment problems to yield solutions that are more accurate than current approaches. The second project explores novel methods for predicting the probability that an alignment reported by a read aligner is correct, i.e., that the aligner correctly identified the read's point of origin. Downstream analysis tools use this quantity to weigh their confidence in evidence derived from the alignment. But estimating this quantity accurately is difficult, and there are no widely applicable approaches available now. The PI proposes a tandem simulation approach, whereby a simulator mimicking properties of a real dataset can provide training examples that in turn allow us to accurately predict these probabilities for real data. These methods address major deficiencies in common, everyday genomics analyses, which are made slower and less accurate by repetitive DNA.<br/><br/>The PI will also conduct an integrated set of curriculum building and outreach efforts. 
These have the goal of bringing computational biology to the attention of more students earlier in their training, and of providing graduate and upper-level undergraduate students with a strong computational biology curriculum. First, the PI will develop and implement an undergraduate minor in computational biology at Johns Hopkins University. Second, the PI will design a new graduate-level course covering contemporary methods for analyzing very large collections of sequence data. Finally, the PI will develop a competitive project called the Big Sequence Data Pentathlon that tests students' ability to design scalable, usable genomics analysis tools on a parallel computer system.",3,Johns Hopkins University
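One concrete way to picture the tandem-simulation idea (a hypothetical sketch, not the PI's method): simulated reads with known points of origin supply labeled examples for predicting whether an alignment is correct, and the predicted probability is Phred-scaled into a mapping quality. The features and data below are invented:

# Hypothetical sketch of the tandem-simulation idea: simulated reads (with known
# ground truth) supply labeled examples for predicting whether a real alignment
# is correct. Features and data here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
# Toy alignment features: alignment score of the best hit, score gap to the second-best hit.
best_score = rng.normal(60, 10, n)
score_gap = np.abs(rng.normal(8, 6, n))
# Toy ground truth: alignments with larger score gaps are more often correct.
correct = (score_gap + rng.normal(0, 3, n)) > 5

model = LogisticRegression(max_iter=1000).fit(np.column_stack([best_score, score_gap]), correct)

# Probability that a new (real) alignment is correct, and a Phred-scaled mapping quality.
p = model.predict_proba([[62.0, 2.0]])[0, 1]
mapq = -10 * np.log10(max(1.0 - p, 1e-10))
print(f"P(correct) = {p:.3f}, MAPQ ~ {mapq:.1f}")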
1020001000389229,EAGER: SupraPed: Biped Robot Support Expansions for Rough Terrains,"Biped humanoid robots hold the potential to perform human-like locomotion and manipulation tasks in a variety of situations where either skilled humans are unavailable or where environmental conditions prevent human intervention. However, while bipedal motion maximizes maneuverability in tight workspaces, it simultaneously reduces locomotion stability. To initiate a new direction of ongoing research in legged robot motion control, this work will enable humanoid robots to transform into tripeds or quadrupeds or, more generally, ""SupraPeds"". To control the potentially numerous contact points on SupraPeds, we will also develop a software system that implements generic multi-contact control for arbitrary humanoids, which will enable autonomous balancing while satisfying contact force constraints. <br/><br/>This concept is intuitive to humans as everyone has experienced the need for an extra hand or two to maintain balance or navigate difficult terrain. The PIs will capitalize on this aspect to communicate to high school students, undergraduates, and graduate students, as well as educators nationwide, through live and online courses and seminars. To broaden the participation of under-represented groups, we will participate in a Research Experience for Undergraduates (REU) program especially for community colleges and minority-serving institutions. The results of the research will be published in open-access conferences and journals whenever possible, and the sensing and control software will be documented and published on public websites as open-source code. The applications of the work will impact fields such as search and rescue and scientific exploration.",7,Stanford University
1020001000389254,CAREER: Supervised Learning for Incomplete and Uncertain Data,"This CAREER project will advance the state of the art in supervised machine learning to allow for incomplete, uncertain and unspecific label information. Supervised machine learning algorithms produce desired outputs for given input data by learning from example training data. The methods generally rely on completely and accurately labeled training data to drive the learning algorithm. However, many applications are plagued with labels that are incomplete, uncertain, and unspecific (lack precision). Current techniques do not adequately handle such data.<br/><br/>For example, analysis of satellite imagery to identify the content of each pixel is often conducted by coupling unsupervised learning methods (that do not rely on labeled training data) with manual exploration. This is time-consuming, error-prone, and expensive. Imagine, instead, easy-to-use tools that could understand the content of each pixel in satellite imagery. Extremely large amounts of road map data (for example from Google Maps or OpenStreetMap) and social media information (for example geo-tagged photographs, video clips, and social networking posts) are continually collected and stored. These data could be used as sparsely-labeled training data (with varying degrees of specificity and uncertainty) to guide understanding of satellite imagery.<br/><br/>Although the data is available, algorithms have yet to be developed to combine these data sources and identify the content of pixels in satellite images. This work will advance this and other potential applications of machine learning where incomplete, uncertain and unspecific labels in training data challenge the development of effective machine learning algorithms. <br/><br/>This CAREER project will achieve these advances through the following research objectives:<br/>(1) Investigate and develop a mathematical framework and associated algorithms for Multiple Instance Function Learning that addresses linear and non-linear classification and regression problems with varying levels and types of sparsity, uncertainty, and specificity in training labels.<br/><br/>(2) Study and apply the proposed framework and algorithms towards the fusion of satellite imagery, road map data and social media for global scene understanding. <br/><br/>This research will be conducted in conjunction with integrated education and outreach activities. In particular, an interactive web application will be developed to provide an avenue for introducing concepts from machine learning and remote sensing to the public for dissemination and outreach. This interactive web application will also be used, along with additional hands-on activities, to introduce high school students to machine learning and remote sensing concepts during an annual summer engineering camp held at the University of Missouri in Columbia, MO. Paired with the web application will be a research website in which data, code, publications and presentations will be shared with the research community. Furthermore, undergraduate and graduate research assistants will be trained in the areas of machine learning and remote sensing. Finally, relevant research topics will be introduced in the PI's undergraduate and graduate courses.",2,University of Missouri-Columbia
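The proposed Multiple Instance Function Learning framework is the subject of the research itself; purely as a picture of what learning from bag-level ("unspecific") labels looks like, here is a small synthetic sketch of a standard multiple-instance baseline (an mi-SVM-style alternation, with invented data and a logistic classifier):

# Minimal multiple-instance learning baseline for illustration (not the proposed
# framework): labels are given only per bag (e.g., per labeled map region), and an
# instance-level classifier is trained by alternating between fitting and re-assigning
# which instance in each positive bag is "responsible". All data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def make_bag(positive, d=5, size=8):
    bag = rng.normal(size=(size, d))
    if positive:
        bag[0, 0] += 5.0          # one planted positive instance per positive bag
    return bag

bags = [make_bag(i % 2 == 0) for i in range(80)]
bag_labels = np.array([i % 2 == 0 for i in range(80)])

# Start by giving every instance its bag's label, then iterate (mi-SVM-style heuristic).
inst_labels = [np.full(len(b), lab) for b, lab in zip(bags, bag_labels)]
clf = LogisticRegression(max_iter=1000)
for _ in range(5):
    clf.fit(np.vstack(bags), np.concatenate(inst_labels))
    for b, lab, il in zip(bags, bag_labels, inst_labels):
        if lab:                   # keep only the top-scoring instance positive in each positive bag
            il[:] = False
            il[np.argmax(clf.decision_function(b))] = True

bag_pred = np.array([clf.decision_function(b).max() > 0 for b in bags])
print("training bag accuracy:", np.mean(bag_pred == bag_labels))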
1020001000389262,CAREER: Scaling up Modeling and Statistical Inference for Massive Collections of Time Series,"Consider the task of predicting influenza rates at a very large set of spatial locations. Modeling each region independently does not leverage the information from related regions and can lead to poor predictions, especially in the presence of missing observations. Likewise, imagine estimating the value of every house in the United States. Capturing trends within a neighborhood is key; however, each neighborhood only has a few recent house sales. The challenges presented by these increasingly prevalent massive time series are endemic to a wide range of applications, from crime modeling for police resource allocation to forecasting consumer trends and social networks: the individual data streams often include only infrequent observations such that each alone does not provide sufficient data for accurate inferences. However, the structured relationships between them offer an opportunity to share information. A key question is how to discover these relationships. <br/><br/>This project takes a computationally-driven Bayesian nonparametric approach, trading off flexibility and scalability, to address the challenges of massive collections of infrequently observed time series. Our approaches exploit correlation among the data streams, e.g., among related regions, while enabling data-driven discovery of sparse dependencies. The multi-resolution and modular forms also allow incorporation of heterogeneous side information. Key to the success of the proposed methods is scalable Bayesian posterior inference. We focus on (i) parallel computations exploiting sparse graph dependencies, (ii) multi-resolution inference, and (iii) online algorithms for dependent data.<br/><br/>This project represents an ambitious cross-disciplinary effort, integrating ideas from machine learning, systems, engineering, and statistics. The work addresses a largely ignored question in the discussion on big data: How to cope with modeling and computational issues when the data has crucial structure across time, especially arising from individually sparse and disparate measurement sources. The tools developed will significantly broaden the scope of scientific questions that can be addressed. Results from this work will be publicly disseminated, including through open source software, and our industry partners aim to transition the technology into real-world systems. This project also involves developing (i) exciting and intensive programs harnessing existing infrastructure, UW DawgBytes, to increase the exposure of K-12 students, and especially girls, to machine learning; and (ii) curriculum training students in both statistical and computational thinking.<br/><br/>For further information, see the project website at http://www.stat.washington.edu/~ebfox/CAREER.html.",5,University of Washington
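As a small illustration of the kind of data-driven discovery of sparse dependencies mentioned above (not the project's Bayesian nonparametric models), the following sketch applies a graphical lasso to a few synthetic, correlated streams; the data and regularization strength are invented for the demo:

# Illustrative sketch of recovering sparse dependencies among related series;
# synthetic data, not the project's models or inference algorithms.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(5)
base = rng.normal(size=(500, 1))
# Streams 0-2 share a common factor; streams 3-4 are independent of it.
X = np.hstack([base + 0.5 * rng.normal(size=(500, 3)), rng.normal(size=(500, 2))])

model = GraphicalLasso(alpha=0.2).fit(X)
print(np.round(model.precision_, 2))   # zero entries indicate absent dependencies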
1020001000389269,CAREER: Human-Aware Autonomy for Team-Oriented Environments,"Robots are an increasingly common presence in human environments, working alongside people in factories, hospitals, and military field operations. However, today people must change how they work to accommodate robots in their workspace. This poses a significant barrier to adoption of robot technology by creating inefficiencies. This project provides an integrated research and educational approach to develop intelligent robotic technologies that more seamlessly integrate with human work environments. <br/><br/>The technical approach translates qualitative and quantitative insights from human studies into explicit computational models, and exploits these models to redesign robot algorithms for learning, decision-making, and control. The research effort specifically investigates three types of modifications to robot behavior: (1) modifying robot motion planning using anticipatory signals of human motion, (2) customizing robot task plans using statistical models of human task execution, and (3) inferring and applying human domain expertise to expedite automated planning for mixed human-robot teams. Human subject experimentation is planned to assess ease-of-interaction, worker trust, and task performance, and the approach is validated using metrics to quantitatively assess the degree to which a robot's behavior preserves natural human workflow. By designing robot autonomy that minimizes disruption to human workflow, the approach supports graceful transitions from robotic work back to human work and vice versa.",7,Massachusetts Institute of Technology
1020001000389286,CAREER: Mental Models and Critical Mass: Shaping the Success of Online Communities,"As new computing and communication technologies become mainstream, online communities are transforming journalism and other industries to support more interactive, community-driven work. This project will study how mental models of online communities are formed, how they shape expectations about the future of the community, how they co-evolve with the community over time, and how they aggregate to form a critical mass that is essential for successful work and community survival. In many such online communities, participation is voluntary: each person must look at the community, develop a mental model of what that community is, how it works, and what it means for them to join, and then make a decision about whether they want to participate. And then, as the community evolves and changes, they need to continually decide whether and how to participate.<br/><br/>This research will triangulate three research methods to understand how individuals form mental models of online communities, and how mental models of multiple individuals interact and aggregate to form a larger community. First, it will use qualitative interviews to understand how individuals make sense of a new online community and how they make continuing participation decisions in communities they are already part of. Second, it will use a series of human subjects lab experiments to characterize how people form mental models of both the content available in an online community, and the other users of an online community. These experiments will also help to understand how a user's mental model helps her interpret the community's reactions to her participation. Finally, this project will use statistical analysis and formal mathematical modeling of existing online communities to understand how online communities grow over time and when people choose to leave online communities.<br/><br/>Results of this research will be of general value in designing, managing, and participating in many kinds of online communities, as well as contributing to education in the information, cognitive, and social sciences. A unique, cross-disciplinary education program will be created that trains students to use this research to build special-purpose online communities. This program will include a joint class linking a School of Journalism with a department of Telecommunication, Information Studies and Media. This class will form cross-disciplinary teams who spend the semester creating and growing an online community. This partnership will teach students to apply social science and computer science research for real-world applications, and work on collaborative, cross-disciplinary teams that include technical people, creative people, and topic experts. It represents a new type of education in journalism that will bring students into new, community-driven methods of doing journalism, based more on curating content and facilitating discussion than on original, unidirectional reporting.",11,Michigan State University
1020001000389291,CAREER: DataSketch: Exploring Computational Data Visualization in the Middle Grades,"Increasingly, the data visualizations used in contemporary science and media move beyond conventional graphs or diagrams to use problem-specific imagery and computational techniques to reveal patterns of interest. K-12 students, however, do not have many opportunities to explore such data visualizations, or to author their own. The goal of this project is to design software tools and classroom materials that make exploring and authoring data visualizations accessible in middle grades classrooms. The underlying hypothesis of the project is that by exploring and constructing their own data visualizations, students can develop data literacy, learn core math and science content, and engage in key STEM practices in powerful new ways. <br/><br/>The project involves two interrelated strands: (1) research on grade 6-8 students' knowledge related to data visualization, and (2) the development and study of DataSketch, a tablet-based tool for students to create sketches and program them to respond to archival or live data input. Research will include interviews, studio workshops, and classroom-based studies to explore: (1) what knowledge and skills middle school students have, and need to develop, to interpret and construct data visualizations; (2) how data visualization can be integrated into the grade 6-8 STEM curriculum; and (3) what role data visualization can play in middle school students' learning of core STEM practices and content. <br/><br/>Partners include two schools with historically underserved student populations in the Greater Boston metro area, and teachers across the greater New England region. There are opportunities for undergraduate and graduate students studying Computer Science and Education at Tufts University to conduct research and outreach at participating middle school sites. There is potential to disseminate resulting classroom activities and software through LEGO, the Tufts Center for Engineering Education and Outreach, and iCreate to Educate.",15,Tufts University
1020001000389295,CAREER: Large-scale Appearance Modeling,"The visual appearance of the world around us is the result of complex light interactions between different surfaces and material properties that comprise a scene. Despite staggering advances in data-driven appearance modeling, the creation of accurate models of large environments remains an open problem. The reliance of most current appearance modeling methods on active lighting to probe different slices of a scene's appearance precludes their use in environments where there is limited or no control over the incident ambient lighting. Furthermore, to facilitate calibration, many appearance modeling techniques estimate the appearance of a scene from a fixed vantage point, excluding scenes too large to fit in a single view with sufficient detail. In this research, the PI will investigate two novel appearance modeling paradigms designed expressly for large-scale environments under uncontrolled ambient lighting: appearance-from-motion and appearance-by-similarity. The former exploits relations between observations from different viewpoints to infer the full reflectance behavior, while the latter seeks to identify the best match from a library of pre-existing appearance instances to a possibly under-constrained set of observations. To support these two paradigms, a novel appearance model will be developed that builds upon our intuitions regarding scene appearance. The work will focus on two common types of input: community photo-collections and targeted video sequences.<br/><br/>Broader Impacts: This research will pave the way towards practical techniques for in-situ appearance modeling of large-scale environments, while stimulating new research in computer vision and in data-driven appearance modeling in computer graphics by answering fundamental questions as to whether we can model appearance from motion and/or by exploiting similarity. The project will have far-reaching impact not only on computer science but also on diverse fields ranging from metropolitan planning to cultural heritage to entertainment. The ability to model existing environments will be beneficial to various security and safety training programs (for example, virtual fire drill simulations of existing buildings and sites could help train and prepare firefighters and first responders). The emerging field of virtual reality therapy will also benefit from this research, by making it easier to create digital models of large-scale environments (so that, for example, patients who have suffered a stroke can practice motor rehabilitation skills in virtual reproductions of environments they encounter in their daily lives, while autistic children can train to improve their social interactions in virtual reproductions of places such as classrooms which they encounter in their daily lives).",14,College of William and Mary
1020001000389297,CAREER: Geometric Shape Deformation with Applications in Medicine,"In spite of significant recent advances, 3D computer graphics is still humbled when confronted with medical-grade requirements, so medical illustrators often continue to rely on 2D hand drawing. A fundamental challenge is that detailed geometric models and advanced nonlinear materials increase computational complexity, making them difficult to apply in real-time interactive applications. In this research, the PI will investigate an alternative approach based on geometric shape deformations rather than the processes which created them. He argues that intuitive shape deformation can be facilitated by guarantees of basic geometric properties such as smoothness and injectivity (no self-intersection). The key is to design algorithms that can do this quickly while providing the user with a small yet expressive set of adjustable controls to ensure an efficient interactive experience; the task of shape deformation techniques is to extrapolate this parsimonious, human-manageable set of input controls into a full-scale 3D deformation field in a natural and predictable way. The PI's hypothesis is that this requirement can be formally expressed in terms of basic geometric properties. To this end, the PI will explore both direct (closed-form) and variational methods, because while direct methods excel in speed, variational methods offer stronger guarantees and advanced geometric properties. In terms of direct methods, the PI will develop new ways to quickly blend certain groups of 3D transformations (e.g., with the help of new geometric algebraic structures). Transformation blending will be complemented by advanced influence weights that allow the user to explicitly control the resulting sparseness. In terms of variational methods, the PI will study deformation energies satisfying traditional properties such as rotation invariance but augmented with higher-order continuity and injectivity; here, the main challenge will be to find efficient numerical solutions for the underlying optimization problems. The PI believes it will prove possible to mitigate the inherent computational complexity of variational methods by suitably combining them with direct methods so as to cast some of the variational problems as convex optimizations, thereby opening the door to highly efficient convex solvers.<br/><br/>Broader Impacts: Shape deformation is relevant to architecture, computer aided design (CAD), and many areas of science and engineering, as well as to the entertainment industry. But this project has primarily been motivated by medical applications, inspired by requests from the PI's collaborators at The Children's Hospital of Philadelphia. Given the right tools, the classical field of hand drawn medical illustration will evolve into 3D animated medical atlases, setting new standards in medical education. Shape deformation techniques could ultimately contribute to clinical praxis, by facilitating diagnosis and pre-operative planning when treating conditions such as pathological skull deformities (craniosynostosis). And shape modeling tools in expert hands could help lower the radiation dose required in CT scanning, by applying new reconstruction methods that combine user input with template models and accurate surface scans (obtained with radiation-free methods such as laser scanning). 
The PI also will organize seminars and courses that bring together medical and engineering students, including members of underrepresented groups, thereby promoting interdisciplinary collaboration in both research and education.",1,University of Pennsylvania
1020001000389299,CAREER: Active Learning through Rich and Transparent Interactions,"Machine learning models are trained on data that are annotated (labeled) by humans. The accuracy of the trained models generally improves with the number of annotated data examples. Yet, annotating takes time, money, and effort. Active learning aims to minimize the costs by determining which examples are most informative and directing the human labeler to them. Improvements in active learning will lower the costs associated with data annotation and lead to faster implementations of intelligent systems for a range of applications including robotics, speech technology, error and anomaly detection (for example in medicine, financial fraud, and condition-based maintenance of infrastructure), targeted advertising, human-computer interfaces, and bioinformatics.<br/><br/>In traditional active learning approaches, algorithms are limited in the types of information they can acquire, and they often do not provide any rationale to the user as to why a particular exemplar is chosen for annotation. This CAREER project develops a new paradigm dubbed ""rich and transparent active learning."" This new paradigm opens a communication channel between algorithms and users whereby they can exchange a rich set of queries, answers, and explanations. By using rich feedback from users, the algorithms will be able to learn the target concept more economically, reducing the resources required to build an accurate predictive model. By explaining their reasoning, these algorithms will achieve transparency, build trust, and open themselves to scrutiny. <br/><br/>Towards that end, the project develops methods that allow algorithms to use a rich set of queries for resource-efficient model training, and generate explanations that are informative but not overwhelming for the users. The methods developed build on expected loss minimization, information theory, and principles from human-computer interaction. Approaches are evaluated using publicly available datasets and user studies carried out as part of the project. The project develops case studies on two high-impact real-world problems: detecting fraudulent health-care claims, and identifying patients at risk of disease.<br/><br/>The rich and transparent active learning paradigm provides unique educational opportunities. In contrast to standard machine learning algorithms, operated as black boxes, interactive and transparent machine learning is expected to raise students' interest and motivation for data science. Two PhD and several undergraduate and high school students are being trained under this award. A new graduate course on interactive machine learning is being developed. Finally, the PI ensures effective outreach to under-represented groups by partnering with a Chicago public high school whose student population includes 90% minorities.",2,Illinois Institute of Technology
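For context, the classical pool-based loop that this richer paradigm extends can be sketched in a few lines; the dataset, model, and uncertainty-sampling criterion below are illustrative choices, not the project's methods:

# Minimal pool-based active learning with uncertainty sampling, shown only as the
# classical baseline that the rich-and-transparent paradigm above extends.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
# Seed the labeled set with a few examples from each class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(len(y)) if i not in labeled]

for _ in range(20):                            # 20 label queries
    clf = LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled])
    probs = clf.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(probs - 0.5)))]   # most uncertain pool example
    labeled.append(query)                      # "ask the human" for its label
    pool.remove(query)

print(f"accuracy after {len(labeled)} labels:",
      LogisticRegression(max_iter=1000).fit(X[labeled], y[labeled]).score(X, y))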
1020001000389305,CAREER: Authorship Analysis in Cross-Domain Settings,"Authorship Analysis (AA) is the task of extracting characteristics from written documents that can help to determine authorship of a document, generate a profile of the author, or identify cases of plagiarism. AA can be used for historical purposes, to settle disputes over the original creators of a given document, and to build a prosecution case against an online abuser.<br/><br/>Most previous work in AA assumes the availability of samples with known authorship that closely match the domain of the documents of interest. A strong assumption like this one limits the applications of AA approaches. This program addresses this key outstanding challenge by designing robust frameworks for scenarios with different cross-domain degrees: cross-topic, cross-genre and cross-modality (text vs. transcribed speech). The project leverages the large amounts of free text available representing each cross-domain setting to learn general lexical and syntactic distributional correspondences. These correspondences are used to map the out-of-domain texts to a representation that is closer to the target domain. <br/><br/>Direct contributions of this research include new approaches to extract and embed cross-domain prior knowledge into AA models in the form of distributional trajectories; and a solid understanding of the influence of topic, genre, and modality in the feature engineering process for AA that will also be helpful in other text processing tasks. This research will make direct contributions to the field of forensic linguistics, which is of major relevance for national security.<br/><br/>The PI will design an advanced seminar in computational approaches for forensic linguistics and will expand her ongoing educational and outreach activities for underrepresented groups in the STEM disciplines. The PI will integrate opportunities for international visits to key research labs for the graduate students involved in the program that will enrich their training and provide great networking opportunities.",0,University of Alabama at Birmingham
1020001000389306,EAGER: TEACHER: A Pilot Study on Mining the Web for Customized Curriculum Planning,"With massive quantities of educational materials freely available on the web, the vision of personalized and readily accessible education appears within our grasp. General-purpose search engines are insufficient as they do not focus on educational materials, objectives, pre-requisite relations, etc., nor do they stitch together multiple sources to create customized curricula for students' goals and current knowledge. This exploratory project focuses on establishing fundamental results in: (1) extracting educational units from diverse web sites and representing them in a large directed graph, whose nodes are content descriptors and whose edges encode pre-requisite and other relations; (2) conducting multi-field topic inference via a new family of graphical models to infer relations among educational units; and (3) automated curricular planning, focusing on providing sequences of lessons, courses, exercises and other education units for a student to achieve his or her educational goals, conditioned on current skills. The objective is to develop a data-driven course/curriculum planner on demand, based on a graph traversal that is enriched with alternate paths, reinforcement options, and conditional branches to match the learner's needs.<br/><br/>The broader impact of this research is two-fold: (1) developing methods for mining and traversing web-based educational materials in general, later generalizing to multi-media lessons and courses; and (2) individualized curricular planning, so any student anywhere can be provided with guidance on how to navigate and exploit the vast ocean of massive open online course (MOOC) materials and other educational texts, exercises, etc. in a manner customized to the student's learning objective, capabilities and skills. The resulting system, named TEACHER, can be applied to learning specific job skills, to reinforce classroom instructions, or as stand-alone academic support to address, for instance, the huge percentage of students who attempt taking MOOCs but never complete them due to lack of requisite skills and lack of guidance on how to acquire them. Project web site (http://nyc.lti.cs.cmu.edu/teacher/) will be used to disseminate results.",11,Carnegie-Mellon University
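A toy sketch of the graph-traversal view of curriculum planning described above: educational units are nodes, prerequisite relations are directed edges, and a plan toward a goal unit is a topological ordering of its still-unknown prerequisites. The units and edges here are invented, and this is only a cartoon of the proposed planner:

# Toy sketch of prerequisite-graph curriculum planning; units and edges are invented.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("arithmetic", "algebra"),
    ("algebra", "calculus"),
    ("algebra", "linear_algebra"),
    ("calculus", "machine_learning"),
    ("linear_algebra", "machine_learning"),
    ("programming", "machine_learning"),
])

def plan(goal, known):
    """Order the still-unknown prerequisites of `goal` consistently with the graph."""
    needed = nx.ancestors(G, goal) | {goal}
    sub = G.subgraph(needed - set(known))
    return list(nx.topological_sort(sub))

print(plan("machine_learning", known={"arithmetic", "programming"}))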
1020001000389316,CAREER: Scaling Up Mobile Accessibility Through Touchscreen Personalization,"Touchscreen interfaces are becoming increasingly prevalent as the interface with which people interact with computers, and yet, for people with motor impairments, many touchscreen commands are difficult or impossible to execute. With the increased deployment of touchscreen interfaces, it becomes critically important for hardware and software developers to ensure that such devices are accessible to a broad range of users. While these challenges can be partially offset by multimodal (speech) input, touch and gesture remain necessary for fully functional, efficient, and socially acceptable use of many touchscreen devices. This is a serious concern for the almost 20 million people in the U.S. who have motor impairments that affect their upper body, a number that will only rise with the unprecedented increase in America's senior population.<br/><br/>This project pursues a research program to advance a fundamental understanding of how decreased motor ability impacts touchscreen interaction and, in turn, how touchscreen interactions can be personalized to support each user's abilities. While substantial user-interface-development effort has focused on personalizing content, personalized-interaction interfaces, which alter the means of issuing the taps, swipes, and clicks that underlie the use of touchscreen devices, have received much less attention. The increased use of touchscreens presents a tremendous opportunity for software-based modifications because the entire interactive surface is software-controlled, an advantage that this researcher has already leveraged to adapt touchscreen keyboards to how the motor abilities of people without disabilities change in some situations, such as while walking. This project goes far beyond this preliminary work to provide great benefits to people with permanent motor impairments.<br/><br/>The project consists of two complementary major activities. The first major activity employs large-scale studies to reliably assess and predict the impact of motor abilities on touchscreen interaction. By developing new methods to leverage user-generated content (e.g., videos, tweets) and by employing large-scale online experimentation, the large-scale studies will provide a more in-depth and ecologically valid characterization of how motor ability impacts touchscreen use than has been previously possible. The second major activity of the project builds on findings from the first major activity to design and evaluate new approaches for personalizing touchscreen interaction. The second activity will contribute new techniques for personalizing mobile interactions, generate new algorithms and predictive models of touchscreen performance, and identify design guidelines for personalizing mobile interaction.<br/><br/>Broader Impacts: This work will transform mobile accessibility for people with motor impairments. Enabling mobile access can lead to greater empowerment and independence for people with disabilities. Many of the proposed techniques will also likely benefit users more broadly. This work also has implications for the accessible design of the next generation of mobile devices, including wearables and 3D-gesture interfaces. Many of the techniques should be applicable in a commercial context, which is important to ensure that new commercial interfaces are accessible to all users. 
Education plans include two courses related to accessibility, one of which establishes a partnership with the DC Public Library on touchscreen training for people with disabilities.",4,University of Maryland College Park
1020001000389328,"CAREER: High-order Tensor Analysis for Groupwise Correspondence: Theory, Algorithms, and Applications","Visual matching is a fundamental problem in computer vision (CV) and intensive research efforts have been devoted to building correspondence between a pair of visual objects. By contrast, finding correspondence among an ensemble of objects remains challenging. This project develops a unified framework for this problem and applies the framework to different applications. The research establishes a close correlation between the classical multi-dimensional assignment (MDA) problem and low-rank tensor approximation. Such correlation paves the way for using high-order tensor analysis for groupwise visual matching that assumes an MDA formulation. Along the way, a series of algorithms are developed to address challenging issues such as computational efficiency and context modeling. These algorithms are then deployed to different tasks including simultaneous tracking of multiple targets, tracking of deformable structures, and batch alignment of visual ensembles. <br/><br/>This project can generate broad impact on areas of computer vision, computer graphics, combinatorial optimization, oral and maxillofacial radiology, image-guided intervention, physical therapy, security and defense, education research, etc. On the one hand, the fundamental importance of visual matching makes the project transformative to many other CV problems. On the other hand, the project benefits a wide range of fields outside the CV community through the use of interdisciplinary applications as test beds. This project also tightly integrates research and education with highlights on supervising students from underrepresented groups, combining computer vision and education research, and involving undergraduates in research.",15,Temple University
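To make the tensor machinery concrete (this is generic background, not the project's algorithms), the sketch below computes a rank-1 approximation of a synthetic 3-way tensor by higher-order power iteration:

# Rank-1 approximation of a synthetic 3-way tensor via higher-order power iteration,
# shown only to illustrate the low-rank tensor machinery mentioned above.
import numpy as np

rng = np.random.default_rng(3)
u0, v0, w0 = (rng.random(6) for _ in range(3))
T = np.einsum("i,j,k->ijk", u0, v0, w0) + 0.01 * rng.normal(size=(6, 6, 6))

u, v, w = (np.ones(6) / np.sqrt(6) for _ in range(3))
for _ in range(50):
    u = np.einsum("ijk,j,k->i", T, v, w); u /= np.linalg.norm(u)
    v = np.einsum("ijk,i,k->j", T, u, w); v /= np.linalg.norm(v)
    w = np.einsum("ijk,i,j->k", T, u, v); w /= np.linalg.norm(w)

lam = np.einsum("ijk,i,j,k->", T, u, v, w)
approx = lam * np.einsum("i,j,k->ijk", u, v, w)
print("relative error:", np.linalg.norm(T - approx) / np.linalg.norm(T))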
1020001000389329,CAREER: Machine Learning for Complex Health Data Analytics,"The fields of health and behavioral science are currently undergoing a data revolution. The Health Information Technology for Economic and Clinical Health act of 2009 has resulted in the wide adoption of electronic health records and the emergence of increasingly vast stores of heterogeneous clinical data. Simultaneously, emerging mobile health (mHealth) technologies are enabling the collection of ever-larger volumes of continuous physiological measurements and behavioral self-report data in non-clinical settings. Such data sources have the potential to yield transformative advances in the fundamental understanding of human behavior and health. They also have the potential to significantly enhance numerous applications including data-driven clinical decision support and continuous health monitoring, which will lead to increased efficiency within the healthcare system and facilitate a transition to patient-centered, personalized care. The proposed work will address several fundamental sources of complexity in the analysis of both clinical and mHealth data, enabling researchers in health and behavioral science to extract more useful knowledge from these data sources. The software toolboxes that will be developed will have immediate applications in research conducted by a network of research partners, and will also be broadly disseminated. The integrated education plan includes the development of an innovative applied machine learning course that will provide training in topics like cloud-scale computing that are of direct relevance to massive health data analytics. The outreach plan involves developing and running a health data-themed outreach workshop for underrepresented groups to foster computational thinking and broaden participation in computing. <br/><br/>The ability to learn models from complex data and apply those models to extract useful knowledge is at the core of machine learning research. This proposal seeks to significantly expand the frontiers of machine learning by developing new models and algorithms designed to meet the challenges posed by complex health data analysis. Key sources of complexity in clinical and mHealth data include sparse and irregular sampling, incompleteness, noise, non-stationary temporal dynamics, between-subjects variability, high volume, high velocity and heterogeneity. The presence of one or more of these factors in a given data source is often sufficient to render current machine learning methods ineffective or completely inapplicable. The long-term goal of this research is the development and validation of customized machine learning models and algorithms that can respond to all of these challenges. The objective of this proposal is to develop models and algorithms that address the following specific problems: (1) How can we extract useful knowledge from sparse and irregularly sampled clinical time series data? (2) How can we automate feature discovery from wearable physiological sensor data in the presence of high levels of noise, significant between subjects variability, and heterogeneous sensing modalities? (3) How can we make the learning of physiological time series event detection algorithms robust to event labels that are obtained through self-report mechanisms with limited reliability and temporal fidelity?",12,University of Massachusetts Amherst
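As one small illustration of the first challenge listed above, sparse and irregularly sampled time series (and not of the models the project will develop), the sketch below interpolates a handful of irregular, noisy measurements with a Gaussian process; the signal, noise level, and kernel are invented for the demo:

# Illustrative Gaussian-process interpolation of a sparse, irregularly sampled series;
# synthetic vital-sign-like data, not the project's models.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(4)
t_obs = np.sort(rng.uniform(0, 48, size=15))              # 15 irregular times over 48 hours
y_obs = np.sin(t_obs / 6.0) + 0.1 * rng.normal(size=15)   # noisy measurements

gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0) + WhiteKernel(0.01),
                              normalize_y=True).fit(t_obs[:, None], y_obs)

t_grid = np.linspace(0, 48, 5)
mean, std = gp.predict(t_grid[:, None], return_std=True)
for t, m, s in zip(t_grid, mean, std):
    print(f"t={t:5.1f}h  prediction={m:+.2f} +/- {s:.2f}")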
1020001000389332,CAREER: More than Words: Advancing Prosodic Analysis,"Prosody is an essential component of human speech. Whereas the words are ""what is said"", prosody is ""how it is said"". A wealth of information is communicated via prosody including information about a speaker's intent and state (speaking-style and emotion). To advance the capabilities of machines to understand human speech, this CAREER project develops new representations of prosody and applies them to a variety of spoken language processing tasks: word recognition, speaking-style recognition, dialog-act classification and speaker identification. This project employs and advances semi-supervised and unsupervised representation learning techniques to characterize prosody. This project also investigates prosody across multiple languages. Speakers of multiple languages contribute speech and annotate some basic prosodic phenomena (phrasing and prominence). The overarching goal is to identify a compact and universal representation of prosody that will be employed effectively in spoken language processing tasks across languages. Scientific results, representations and tools for extraction will be made open-source as will the collected, annotated multi-lingual data.<br/><br/>Speech recognition is being integrated into our lives through mobile devices and spoken dialog systems. The next great hurdle in the ability to communicate with machines via speech is understanding prosody. Taking prosody into account will result in machines understanding humans better; conversely, automatically generating adequate prosody to convey intent will allow machines to sound more human. Both types of improvement are sorely needed as automated conversation agents and robots are starting to become a part of our everyday lives. Finally, this project implements an innovative and challenging education plan that is well-integrated with its research. It includes curricula modules on prosodic analysis and representation learning to be widely disseminated. Moreover, undergraduate students who provide and annotate speech samples for the project will get a hands-on introduction to computer science research, and will be compensated in part with tuition waivers for introductory courses in computer science.",16,CUNY Queens College
1020001000389333,CAREER: Holistic Scene Understanding with Multiple Hypotheses from Vision Modules,"This project develops algorithms and techniques for holistic scene understanding from images. The key barrier to building the next generation of vision systems is ambiguity. For example, a patch from an image may look like a face but may simply be an incidental arrangement of tree branches and shadows. Thus, a vision module operating in isolation often produces nonsensical results, such as hallucinating faces floating in thin air. This project develops a visual system that jointly reasons about multiple plausible hypotheses from different vision modules such as 3D scene layout, object layout, and pose estimation. The developed technologies have the potential to improve vision systems and make fundamental impact - from self-driving cars bringing mobility to the physically impaired, to unmanned aircrafts helping law enforcement with search and rescue in disasters. The project involves research tightly integrated with education and outreach to train the next generation of young scientists and researchers. <br/><br/>This research addresses the fundamental challenge in joint reasoning by extracting and leveraging a small set of diverse plausible hypotheses or guesses from computer vision modules (e.g. a patch may be a {sky or a vertical surface} x {face or tree branches}). This project generates new knowledge and techniques for (1) generating a small set of diverse plausible hypotheses from different vision modules, (2) joint reasoning over all modules to pick a single hypothesis from each module, and (3) reducing human annotation effort by actively soliciting user feedback only on the small set of plausible hypotheses. <br/><br/>Project Webpage: http://computing.ece.vt.edu/~dbatra",15,Virginia Polytechnic Institute and State University
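A tiny, brute-force cartoon of the joint-reasoning idea (invented scores, not the project's inference machinery): each module contributes a few plausible hypotheses, and the selected combination maximizes a joint compatibility score:

# Brute-force illustration of picking one hypothesis per module to maximize a joint
# compatibility score. Hypotheses and scores are invented for this sketch.
from itertools import product

layout_hyps = {"indoor": 0.6, "outdoor": 0.4}
object_hyps = {"face": 0.55, "tree_branches": 0.45}
# Pairwise compatibility: e.g., a face floating in thin air outdoors is implausible.
compat = {("outdoor", "face"): 0.1, ("outdoor", "tree_branches"): 0.9,
          ("indoor", "face"): 0.8, ("indoor", "tree_branches"): 0.3}

best = max(product(layout_hyps, object_hyps),
           key=lambda pair: layout_hyps[pair[0]] * object_hyps[pair[1]] * compat[pair])
print(best)   # ('indoor', 'face')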
1020001000389342,CAREER: A Visual Analysis Approach to Space-Time Data Exploration,"From smart phones to fitness trackers to sensor-enabled buildings, data is currently being collected at an unprecedented rate. Now, more than ever, data exists that can be used to gain insight into how policy decisions can impact our daily lives. For example, one can imagine using data to help predict where crime may occur next, to inform decisions on police resource allocation, or to provide recommendations, based on diet and activity patterns, for improving an individual's overall health and well-being. Underlying all of this data are measurements with respect to space and time. However, finding relationships within datasets and accurately representing these relationships to inform policy changes is a challenging problem. This research addresses fundamental questions of how we can effectively explore such space-time data in order to enhance knowledge discovery and dissemination. This research both extends traditional visual representations and develops novel views for showing how correlations, clusters and other various spatial dynamics change over time. Broader impacts of the research program include: (1) enhanced infrastructure for research and education in the form of new visual analytics algorithms and open source software; (2) broad dissemination of visual analysis methods across various domains including geography, urban planning, and public health; and (3) impacts on society including the dissemination of novel tools and methods for improved public health and safety. The primary educational goals of this CAREER project are to increase students' access to crucial but largely inaccessible visual analytic technologies and to broaden participation in data science and engineering. Toward those ends, the Visual Analytics Education program will engage broad student populations (undergraduate and graduate) through innovative curricula focusing on visual data analysis and the core technologies that drive the research program (visual analytics tools). By focusing on those technologies and their synergy in the research program, the education program directly integrates the proposed research with education. The programs will benefit multiple groups (researchers, patients, students, underrepresented groups) and institutions (academia, industry, healthcare, education) both locally and globally.<br/><br/>For spatial data, the translation of such data into a visual form allows users to quickly see patterns, explore summaries and relate domain knowledge about underlying geographical phenomena that would not be apparent in tabular form. However, several critical challenges arise when visualizing and exploring these large spatiotemporal datasets. While the underlying geographical component of the data lends itself well to univariate visualization in the form of traditional cartographic representations (e.g., choropleth, isopleth, dasymetric maps), as the data becomes multivariate, cartographic representations become more complex. Multivariate color maps, textures, small multiples and 3D views have been employed as means of increasing the amount of information that can be conveyed when plotting spatial data to a map. However, each of these methods has its own limitations. Multivariate color maps and textures result in cognitive overload where much time is spent trying to separate data elements in the visual channel. In 3D, occlusion and clutter remain fundamental challenges for effective visual data understanding. 
Utilizing small multiples can help in side-by-side comparison, but their scalability is limited by the available screen space and the cognitive overhead associated with pairwise comparisons. Instead of being confined to the original spatiotemporal domain, this proposal seeks to both extend traditional visual representations and develop novel views for showing how correlations, clusters and other various spatial dynamics change over time. Underlying these novel views is also the need for visual representations in which the manipulation of the representation is directly tied to the underlying computational analytics. Specifically, this research focuses on datasets from urban planning, geography, public health and crime to address: (1) the extraction of semi-supervised templates for spatial and temporal aggregation; (2) the development of interaction techniques for visual steering and classification of spatiotemporal data; (3) the integration of multiple families of anomaly detection algorithms and information theoretic methods for semi-supervised anomaly detection, and; (4) novel algorithms for the extraction of flow fields from spatiotemporal data. Additional information can be found at the project website (http://vader.lab.asu.edu/Space-TimeVA) including open source software, course learning modules and podcasts.",15,Arizona State University
1020001000389348,CAREER: A Broad Synthesis of Artificial Intelligence and Social Choice,"Social choice theory is the field that studies the aggregation of individual preferences toward a collective choice. While the artificial intelligence (AI) community has so far played a dominant role in the study of the computational aspects of social choice, the interaction between core AI paradigms and social choice theory has been surprisingly limited. <br/><br/>This project is enhancing the interaction between the two fields through a synthesis of social choice with the following AI areas: (i) decision making under uncertainty, by building on models studied in AI to create new ways to model, analyze, and make decisions in environments where preferences are dynamically changing; (ii) multiagent systems, by studying settings where agents randomly vote over multiple states, and investigating the connection between normative properties and system performance; and finally (iii) machine learning, by employing insights about strategic behavior under structured preferences, developed in the social choice literature, in order to design regression learning algorithms that discourage strategic manipulation.<br/><br/>An overarching goal of this project is to demonstrate the potential of social choice theory to AI researchers, and ultimately to establish social choice theory as a standard paradigm in AI. Equally importantly, this project is expected to increase the scope of social choice theory. Broader impacts include a new web-based voting system, which has the potential to serve and educate hundreds of thousands of users; dissemination through a new book on computational social choice; and a workshop on computational social choice, which will help set a new agenda for the field.",11,Carnegie-Mellon University
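For readers unfamiliar with social choice, preference aggregation can be illustrated with the textbook Borda rule (chosen only as a familiar example, not as a method proposed by this project):

# Tiny illustration of preference aggregation with the Borda rule; inputs are invented.
from collections import defaultdict

def borda(rankings):
    """Each ranking lists alternatives from most to least preferred."""
    scores = defaultdict(int)
    m = len(rankings[0])
    for ranking in rankings:
        for position, alt in enumerate(ranking):
            scores[alt] += m - 1 - position   # top choice gets m-1 points, last gets 0
    return max(scores, key=scores.get), dict(scores)

rankings = [["a", "b", "c"], ["a", "c", "b"], ["b", "c", "a"]]
print(borda(rankings))   # ('a', {'a': 4, 'b': 3, 'c': 2})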
1020001000389363,"CAREER: Problem Solving in Dynamic, Distributed Environments","Computers are increasingly used to monitor and manage many aspects of our daily lives. These systems are often required to work together to solve complex problems that are rapidly changing. Current approaches to addressing these situations develop tailored distributed protocols that are verified through empirical testing. This project increases the practical applicability of distributed problem solving techniques by developing a theoretical model of these problems based on thermodynamic theory. Using this model, a protocol's performance can, for the first time ever, be predicted under previously untested conditions.<br/><br/>This theoretical model is validated through extensive empirical evaluation, and this project develops a new protocol that alters its problem-solving strategy to best balance deliberate and reactive decision making based on environmental dynamics. This protocol is applied to address a pressing practical problem: allocating telescopes for tracking objects in Low Earth Orbit (LEO). With nearly all of our manned space missions and satellites in LEO, effectively monitoring space debris has broad implications for society at large and scientific progress along numerous directions.<br/><br/>This transformative research combines cross-disciplinary ideas from artificial intelligence, distributed systems, and statistical physics. The educational initiatives in this project directly address the recruitment and retention of students, especially focusing on women and minorities, into Computer Science by generating excitement through the Heartland Gaming Expo and by utilizing a new peer outreach program, called Engineering Ambassadors.",14,University of Tulsa
1020001000389373,CAREER: Legged Locomotion Across Scales: Closing the Loop Between Task Planning and Motion Control,"This award seeks to introduce tightly integrated locomotion control and motion planning strategies for agile, highly-mobile legged robots with radically different sizes and morphologies. To realize the potential of these machines in real-world applications, basic movements must be composed to synthesize more complex locomotion behaviors that achieve desirable high-level planning objectives. As dynamic legged robots are becoming increasingly more capable, the need for hierarchically consistent locomotion planning strategies that translate descending task-level commands to suitable low-level control actions that harness the platform's locomotion capabilities becomes pressing. To address this need, this effort pursues new directions (i) in dynamic legged locomotion, by offering a portable library of locomotion primitives; (ii) in robot control, by providing constructive feedback reduction strategies and stochastic data-driven methods that map complex legged robot platforms to behavior-encoding target models; and (iii) in hybrid systems, by introducing a framework for complexity reduction through multiple layers of information processing and control action. This research effort seeks to enable legged machines to perform real-world tasks reliably and efficiently. This way, it promotes many different applications, including industrial, agricultural, and emergency response applications that require highly mobile and versatile robots. This effort also includes substantial educational and community outreach components, aiming at attracting underrepresented groups to science, technology, engineering, and mathematics. In addition, by pairing high-school teachers with graduate students, this effort addresses the critical need for K-12 teachers to stay current and articulate their teaching with the demands of college courses.",20,University of Delaware
1020001000389382,EAGER: Towards Human Centered Visual Understanding: Exploring the Intended and Interpreted Meaning of Images in Social Multimedia,"This project explores a new direction in computer vision, which is to model the context dependent visual semantics associated with images in social multimedia. The context dependent visual semantics, e.g., the intended and perceived sentiment of an image in social multimedia, are dynamically formed based on the various contextual information associated with it. This is different from the static visual semantics that conventional computer vision research focused on studying, such as the object category presented in the image.<br/><br/>The project develops a set of new networked and context aware probabilistic latent semantic models, which integrate situated contextual information into visual content analysis for modeling context dependent visual semantics. The research team is verifying two hypotheses: 1) the context dependent semantics needs to be holistically modeled and jointly inferred from a collection of related images; and 2) related context dependent visual semantics, such as intended and perceived meaning of an image, also needs to be jointly modeled for more robust recognition.<br/><br/>The project is integrated with education through training graduate and undergraduate students. The outcome of the research can be applied to many domains, such as targeted online advertisements; open source information analysis and social event prediction; and social multimedia security.",1,Stevens Institute of Technology
1020001000389390,CAREER: Measuring Search Engines' Ability to Help Users Complete Tasks,"The purpose of this project is to improve search systems' ability to help users complete tasks. The usefulness of any search engine ultimately depends on how good it is at aiding its users. The systems and the tasks they are used for can be very complicated; small changes in a system's implementation or a task's execution can have major effects on the usefulness of the system, especially over a long lifespan of use by a large base of people. The traditional approach to understanding utility involves the use of test collections, which consist of a collection of documents to be searched, unchanging information needs, and human judgments of the relevance of documents to needs; these components are put into a simple batch process that measures search effectiveness and tests simple statistical hypotheses. While this approach is useful, it often fails to capture variability present in users and tasks: different users often interact with the same system in very different ways, meaning a system that is useful for one user or one task may not be useful for another user or task. Therefore, this project focuses on developing new methods for understanding, estimating, and improving the usefulness of information retrieval (IR) systems that take variability into consideration. <br/><br/>The methods investigated in this project are designed to model user interactions with a system to complete a task, including how users determine relevance in context, how they modify their interaction with a system over time, and how different approaches by different users affect the overall system usefulness. The project will produce new types of test collections, evaluation measures, and statistical methods for batch-style systems-based information retrieval evaluation for use by researchers and practitioners in academia and industry. The work will demonstrate how to use these both to improve system utility to a population of users as well as to pose deeper hypotheses about causality in IR system development, thus leading to improvements in IR technology in all domains. Research will be integrated with educational activities for students as well as researchers and practitioners to learn advanced experimental design and analysis. Educational efforts will include tutorials and teaching courses on empirical methods in IR and computer science, methods in use in the wider scientific community, and how the newly developed methods relate to those. Results produced from this project can be found on the project web site (http://ir.cis.udel.edu/IIS-1350799).",8,University of Delaware
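A minimal illustration of the batch, test-collection evaluation style described in the abstract above: the Python sketch below computes average precision for one query from a ranked result list and a set of relevance judgments. Average precision is only one standard effectiveness measure, and the document identifiers, function name, and judgments here are invented for illustration; they are not the project's data or code.

def average_precision(ranked_docs, relevant_docs):
    # ranked_docs: document ids in the order the system returned them
    # relevant_docs: set of ids judged relevant for this information need
    hits = 0
    precision_sum = 0.0
    for rank, doc in enumerate(ranked_docs, start=1):
        if doc in relevant_docs:
            hits += 1
            precision_sum += hits / rank  # precision at each relevant document
    return precision_sum / len(relevant_docs) if relevant_docs else 0.0

# Hypothetical run and judgments for a single query
print(average_precision(["d3", "d7", "d1", "d9"], {"d3", "d1", "d5"}))  # ~0.56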
1020001000389403,CAREER: Gait Transition Principles in Quadruped Robots,"Quadrupedalism, pervasive in nature, is a promising locomotion mode for numerous future robotic applications. Utilizing its versatility can play a crucial role in managing unexpected and varying terrains in an efficient and stable manner. Understanding why, how, and when to use a certain gait is central to successfully building stable, adaptable robots. Gait transition criteria in animals involve an intricate interplay among such biological characteristics as metabolic cost, bone stress, muscle physiology, and social stimuli. Obtaining general principles that are useful in the design of robots by studying animals is very challenging. This project investigates the intrinsic nature of dynamic characteristics of quadrupedal gaits and the transitions among them by utilizing appropriate computational models. These models are selected to represent only important dynamic characteristics of quadrupedal gaits and filter out biological aspects that are not essential to the realization of robots. These models help to develop gait selection criteria from the energetics and stability analyses of each gait. The gait selection criteria constitute the basis of the development process of stable gait-transitioning controllers. This project aims to enhance our understanding of quadrupedal locomotion, contributing to future applications such as disaster response robots and new transportation systems. In addition, the project plans to integrate research results with educational activities. The new class on bio-inspired robots provides opportunities for students to learn how to investigate scientific questions using computational methods and physical robots. The student training includes several outreach activities such as participation in science festivals and developing science exhibitions for K-12 education.",7,Massachusetts Institute of Technology
1020001000389405,CAREER: High Performance Spatial Queries and Analytics for Spatial Big Data,"The rise of big data is changing our way of thinking about the world by providing new insights and creating new forms of value. The challenges for big data come not only from the volume but also the complexity, such as the multi-dimensional nature of spatial data. In this CAREER project, we will deliver a scalable and efficient spatial big data processing system that can take advantage of fast increasing processing power of computers and their latest supporting programming environments. This software can be used for a variety of applications in medical image processing and in GIS (Geographical Information Systems), e.g., for city planning, transportation planning, disaster response, military planning.<br/><br/>The fundamental goal of this CAREER project is to address the research challenges for delivering a high performance software system for spatial queries and analytics of spatial big data on MapReduce and CPU-GPU hybrid platforms, promote the use of the created open source software to support problem solving in multiple disciplines, and educate the next generation workforce in big data. Specifically, the following research aims will be pursued in this project: 1) Create new spatial data processing methods and pipelines with spatial partition level parallelism through MapReduce and propose multi-level indexing methods to accelerate spatial data processing; 2) Research two critical components to enable data parallelism: effective and scalable spatial partitioning in MapReduce, and query normalization methods for partition effect; 3) Research efficient GPU-based spatial operations to support object level and intra-object level parallelism, and integrate them into MapReduce pipelines; 4) Investigate optimization methods for data processing pipelines, data skew mitigation, and CPU/GPU resource coordination in MapReduce; and 5) Provide declarative spatial queries and create a query translator to automatically translate the queries into MapReduce applications.<br/><br/>The project will provide a high performance scalable spatial computing infrastructure to be deployed by researchers and application users world-wide from various disciplines, and the source codes will be made open source and fully available. The project will provide a strong foundation to solve spatial big data problems such as location based services, remote sensing based applications, and map based applications. It will also enable the fast solving of scientific problems such as pathology imaging at large scale. The education activities include a revised undergraduate course with a new spatial big data theme, a revised graduate course with a focus on big data management, involvement of undergraduate, graduate and underrepresented students in research, symposia and science projects for K-12 students, and a software infrastructure to support the education. <br/><br/>For further information see the project web site: http://fushengwang.net/hadoop-gis<br/><br/>Keywords: spatial big data, MapReduce, CPU-GPU, spatial queries, spatial analytics",5,Emory University
1020001000389408,CAREER: Design Decision Patterns for Visualizing Multivariate Graphs,"Multivariate graphs, or datasets that link together entities that are associated with multiple different variables, occur in a broad range of problems. For example, the dataset could be geospatial locations that include socio-economic statistics, linked together through a public transportation system. These multivariate graphs are notoriously difficult to visualize because the number of data variables exceeds the number of available visual cues - these cues include color, size, position, etc. The goal of this project is to establish a set of validated and generalizable techniques for visualizing and interacting with multivariate graphs. Three target application areas will drive the investigations: one in cancer biology, a second in urban transportation, and a third in particle physics. These areas were chosen to represent a wide spectrum of possible applications in which multivariate graphs play a central role, thus fostering generalizable results. The multidisciplinary nature of the research and the close collaboration with domain experts in our target application areas will provide a unique educational environment for undergraduate and graduate students, while also broadening the participation in computer science beyond traditional boundaries.<br/><br/>This is the first systematic, problem-driven effort to consider the visualization of multivariate graphs using a diverse set of application areas, with the goal of developing a generalizable set of techniques and principles for supporting a broad range of visualization and data analysis tasks. The research will be conducted with domain experts using a design study methodology, which is a deeply collaborative and user-centered approach to visualization research. The primary impact of this work will be validated visualization design decision patterns for effective visual representation and user-driven exploration of complex multivariate graphs, resulting in a more comprehensive foundation of techniques for visualizing this increasingly important data type. The resulting design decision patterns will support ongoing research and discovery in our target application areas, as well generalize to a broad class of real-world problems. Furthermore, these patterns will form the foundation of software tools for visualizing multivariate graphs that effectively support exploration and sense-making of these complex data types by taking into account the varied relationships embedded within. Results and software will be disseminated to both the research communities of our target application areas, but also more broadly through the project website at http://mvgraphs.sci.utah.edu.",15,University of Utah
1020001000389411,CAREER: Controlling Ecologically Destructive Processes with a Network of Intelligent Robotic Agents,"This project aims to control destructive environmental processes such as forest fires, oil spills, and agricultural pest infestations through the intelligent, coordinated intervention of a group of robots. This requires the development of fundamentally new control theoretic and algorithmic tools to drive the robots to take control actions to regulate the environmental process. The robots' control actions close a large-scale feedback loop around the robots and the environment, giving rise to complex dynamical phenomena. The project proposes control strategies for this coupled robot-environment system in three different timescale regimes: (i) the environment changes slowly compared to the robots' dynamics, (ii) the environment and robot dynamics are on the same timescale and immediate control effect is sought, and (iii) the environment and robot dynamics are on the same timescale and long-term control effect is sought. Three different optimization based techniques are proposed to generate decentralized control strategies for each regime. Stability, convergence, and optimality properties of the robot-environment system under these strategies are studied. Furthermore, experiments with a network of quadrotor aerial robots, both in the lab and outdoors, demonstrate the practicality of the control strategies. The project also incorporates a comprehensive education and outreach program using quadrotor robots as teaching tools to reach students from diverse backgrounds at all grade levels. Ultimately, the project seeks to alleviate the economic, social, and ecological damage caused by oil and other chemical spills, forest fires, pest infestations, and other ecologically destructive phenomena by laying the foundations of a new robotic technology.",7,Trustees of Boston University
1020001000389424,CAREER: Exact Algorithms for Learning Latent Structure,"One of the fundamental tasks in science is to infer the causal relationships between variables from data, and to discover hidden phenomena that may affect their outcome. We can attempt to automate this scientific process by searching over probabilistic models of how the observed data might be influenced by unobserved (latent) factors or variables. Machine learning of such models provides insight into the underlying domain and a means of predicting the latent factors. However, it is challenging to search over the exponentially many models, and existing algorithms are unable to scale to large amounts of data.<br/><br/>The goal of this CAREER award is to provide novel algorithms that circumvent this computational intractability. Based on a classical idea in statistics called the method-of-moments, the new algorithms will be applied in bioinformatics to discover regulatory modules from disease expression profiles, and in health care to predict a patient's clinical state using data from their electronic medical record. A key component of the project is to involve high school students from disadvantaged backgrounds in the research to inspire them to pursue STEM careers.<br/><br/>The project advances machine learning by introducing several new techniques for unsupervised and semi-supervised learning of Bayesian networks. The project overcomes the computational challenges associated with maximum-likelihood estimation by developing new method-of-moment based algorithms for learning latent variable models, focusing on settings where inference itself may be intractable. This includes Bayesian networks of discrete variables where a top layer consists of latent factors and a bottom layer consists of the observed data, a form of discrete factor analysis. The proposed algorithms run in polynomial time and are guaranteed to learn a close approximation to the true model.<br/><br/>The techniques developed as part of this project have the potential to be transformative in the social and natural sciences by enabling the efficient and accurate discovery of latent variables from discrete data. Furthermore, in collaboration with emergency department clinicians, the new algorithms will be applied to learn models relating diseases to symptoms from noisy and incomplete data that is routinely collected as part of electronic medical records. This will advance the field of machine learning in health care by providing algorithms that generalize between institutions without the need for a large amount of labeled training data.<br/><br/>The insights about exploratory data analysis developed as part of this project will be integrated into innovative curriculum in data science, both as part of an undergraduate class and new Master's classes. The project will bring students from nearby high schools to NYU throughout the academic year and during the summer to learn about machine learning through participation in the proposed research, having them use the unsupervised learning algorithms to discover new medical insights. The PI will also develop and deliver tutorials on machine learning to clinicians and the health care industry.",2,New York University
1020001000389426,CAREER: New Representations of Probability Distributions to Improve Machine Learning --- A Unified Kernel Embedding Framework for Distributions,"Computational intelligence touches our lives daily. Web searches, weather prediction, detecting financial fraud, medicine and education benefit from this ubiquitous technology. Problems in computational intelligence such as image classification and predicting properties of new materials produce copious amounts of high-dimensional, complex data. Many algorithms in computational intelligence rely on probability distributions, and such data can carry unusual distributions that challenge traditional methods of modeling. (For example, they are typically not textbook distributions such as the Gaussian.) In some applications, the data input to the algorithms are themselves probability distributions. Existing techniques cannot both capture unusual distributions and scale to millions of data points without stalling the computation. There is a pressing need for a flexible, efficient framework for representing, learning, and reasoning about datasets arising from these problems.<br/><br/>This project will address these challenges by developing a novel and unified framework to represent and model, learn, and use probability distributions in computational intelligence. To evaluate the utility of the new techniques, the project will test them on difficult real-world problems in computer image analysis, materials science, and flow cytometry (a biotechnology technique used for cell counting, cell sorting, and protein engineering).<br/><br/>The project, an NSF CAREER award, will integrate the research results with several education initiatives. New curricula will be designed for both undergraduate and graduate students, with emphasis on students from under-represented groups. A new online course will be created to make the results accessible to students in massive online master's programs. Finally, advanced high school math teachers will be engaged to design problems related to the research for use in a math competition for advanced high school students.<br/><br/>This project will (1) create a novel and unified nonparametric kernel framework for distributional data and distributions with fine-grained statistical properties, and (2) develop principled and scalable algorithms for nonparametric analysis of big data. The unified kernel embedding framework will advance large scale nonparametric data analysis significantly, and play an important synergistic role in bridging together traditionally separate research areas in data analysis, including kernel methods, graphical models, optimization, nonparametric Bayesian methods, functional analysis and tensor data analysis. In addition to advances in algorithmic methods, the applications to large-scale image classification, flow cytometry, and materials property prediction have the potential for transformative impact on society.",5,Georgia Tech Research Corporation
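A minimal sketch of the kernel mean embedding idea underlying a unified kernel framework for distributions: each sample is mapped to the mean of its kernel features, and the distance between two such embeddings (the maximum mean discrepancy, MMD) compares the distributions directly. The RBF kernel, bandwidth, and sample sizes below are assumptions for illustration, not the project's algorithms.

import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise RBF kernel matrix k(x, y) = exp(-gamma * ||x - y||^2)
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * sq)

def mmd2(X, Y, gamma=1.0):
    # Squared distance between the empirical kernel mean embeddings
    # of two samples: ||mu_X - mu_Y||^2 in the RKHS (biased estimator).
    return (rbf_kernel(X, X, gamma).mean()
            + rbf_kernel(Y, Y, gamma).mean()
            - 2 * rbf_kernel(X, Y, gamma).mean())

# Example: two samples drawn from slightly different distributions
rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(500, 2))
Y = rng.normal(0.5, 1.0, size=(500, 2))
print(mmd2(X, Y))  # larger values indicate more dissimilar distributions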
1020001000389427,CAREER: Efficient Learning of Personalized Strategies,"Online retailers frequently provide tailored product or movie recommendations. But the power of automated personalization, driven by data and statistics, could be far greater: imagine the impact on poverty reduction if all children had a personalized, self-improving tutoring system as part of their education. To realize this vision requires personalization systems that reason about both the immediate impact of a recommended item (e.g. will a learner immediately learn from a video lecture) as well as its longer term impact. For example, a recommended item or intervention may cause a user to change his/her preferences, state of knowledge, or reveal information about the user that was previously unknown. This requires methods for creating personalized strategies: adaptive rules about what decisions to make (whether or which ad to show, which pedagogical activity to provide) in which circumstances to maximize long-term outcomes. <br/><br/>This research involves developing new data-driven, machine learning approaches to construct such personalized strategies for related individuals, and using them towards improving the effectiveness of online mathematics educational systems. The project frames personalized strategy creation as a problem of sequential decision making under uncertainty. Though there have been many advances in sequential decision making under uncertainty, existing approaches have focused primarily on other application areas, like robotics, and fail to account for or leverage some of the special features that arise when interacting with people. These include that accurate simulation of people is difficult but prior data is often available, and that individuals are often related. This project contributes algorithms for mining existing datasets to create and precisely bound the expected performance of new high-quality strategies and for online policy learning across a series of similar sequential decision making tasks.",9,Carnegie-Mellon University
1020001000389428,CAREER: Network modulation of cortical neuron computation,"The function of sensory neurons is typically defined by the relationship between sensory stimuli and their responses; however, in the cortex of awake animals, sensory responses account for only a fraction of neural activity. While activity not driven by the stimulus is often considered ""noise"" and neglected in experiments, such ongoing cortical activity has been linked to a number of processes related to cognition, and can be influenced by attention, tasks, and perception itself. It remains unclear how to relate such ongoing cortical activity to the processing of sensory stimuli, and more generally why it appears to play such a prominent role in sensory neuron function. <br/><br/>The goal of this project is to establish a new framework for understanding stimulus processing in the context of ongoing cortical activity, and thereby derive a much richer understanding of sensory neuron function. This work will leverage the wealth of information about activity within the cortical network that is now typically available from multi-electrode recordings, using experiments performed by collaborating laboratories in the awake visual cortex using tailored visual stimuli. The first aim is to develop new statistical approaches for identifying relevant modulatory signals detectable from these multi-electrode recordings, and perform detailed characterizations of stimulus processing in the context of these signals. The second aim is to study specific contexts where cortical activity is shaped by known network inputs, such as during saccadic eye movements, in order to directly link the modulation of stimulus processing to larger descriptions of sensory neuron function.<br/><br/>This work will provide potentially transformative insights into the relationship between sensory processing and cognitive function. The educational component of this proposal will integrate computational and quantitative approaches into general neuroscience coursework, and involve students at the graduate, undergraduate, and high school levels, in computational analyses of complex neurophysiological data.",20,University of Maryland College Park
1020001000389429,CAREER: Measuring and Reducing Cybersickness in Virtual Reality Physical Rehabilitation,"The effect of cybersickness (i.e., motion sickness caused by immersive simulation, such as virtual reality) on healthy users has been one of the fundamental research areas in virtual reality (VR) for many years, but its impact on persons with physical disabilities is still unknown, even though it could have a significant impact on VR-based physical rehabilitation for this population. This project investigates cybersickness for persons with disabilities, specifically for persons with multiple sclerosis with a moderate level of mobility impairment and no cognitive impairment. The expected outcome of this research is to minimize the negative effects of cybersickness for people with disabilities, and to thereby significantly improve the effectiveness of VR-based physical rehabilitation and the quality of life for people with motor impairments.<br/><br/>The first objective of this project is to determine the best way to measure cybersickness in people with disabilities, and the second is to then figure out the main factors that contribute to cybersickness in persons with disabilities, specifically for people with proprioceptive and balance deficits (e.g., due to neurological, vestibular, or balance issues). Based on preliminary data, the central hypothesis is that VR-induced cybersickness will be magnified for persons with disabilities as compared to persons without disabilities because of differences in balance and proprioception abilities. To test this hypothesis, the project will (a) determine how disability correlates with VR-induced cybersickness, (b) determine the most effective objective measures of VR-induced cybersickness for people with disabilities, (c) determine the main contributing aspects of VR design that affect cybersickness in people with disabilities, and (d) create, disseminate, and maintain an open database of (anonymized) cybersickness data from people with disabilities.<br/><br/>Broader Impacts: The project will take a critical step towards the challenge of universal usability in VR and offer a deeper understanding of the effectiveness of VR as a medium for rehabilitation. The project will impact the way that people with disabilities are able to engage in VR-based rehabilitation and exercises, which could potentially have long term impact on the effectiveness and efficacy of the rehabilitation and ultimately the quality of life for these people. The project will actively engage graduate and undergraduate students in research, integrate results from research into a novel course on accessible user interfaces and universal usability, and collaborate with national advocacy societies for persons with disabilities to promote education and motivation for VR-based rehabilitation.",13,University of Texas at San Antonio
1020001000389432,CAREER: Apprenticeship Learning for Robotic Manipulation of Deformable Objects,"This project considers the problem of apprenticeship learning, in which a robot first gets access to demonstrations of a task and ought to learn from these demonstrations how to perform that task in new, yet similar, situations. This line of work has already shown significant promise, including in helicopter control where it enabled autonomous helicopter aerobatics at the level of the best human pilots. However, fundamental limitations remain, and robotic capabilities to manipulate deformable objects are currently still well below human level. The approach followed builds on, and extends, non-rigid registration algorithms, which can capture how scenes with deformable objects relate to each other. Such registration is extrapolated to morph a demonstrated manipulation trajectory into a good trajectory for a new scene. New machine learning algorithms are developed to enable choosing the optimal training demonstration and the optimal morphing objective while accounting for external constraints, such as avoiding collisions and satisfying joint limits. Infrastructure is being built for large-scale data collection of demonstrations and theoretical and empirical characterizations are developed for how much data is needed for a given task. Concrete challenge tasks considered are knot tying, cloth and fabric manipulation, surgical suturing, and small surgical procedures. Results will be incorporated into the PI's graduate robotics course and the source code will be shared with the robotics community.",7,University of California-Berkeley
1020001000389433,CAREER: Web Information Extraction: Integration and Scaling,"This project studies Web Information Extraction (WIE), the task of automatically extracting computer-understandable knowledge bases (KBs) from the World Wide Web. The project addresses two key challenges in WIE. First, many different teams in academia and industry are pursuing WIE, but they lack methods for combining their KBs into a more powerful whole. This project explores how to integrate knowledge automatically across WIE systems and approaches. Secondly, a long-standing goal for WIE is to construct systems that can scale to billions of facts, by continually improving themselves over time. This project is investigating new methods that continually optimize a WIE system with limited human intervention. The project's goal of scaling and integrating WIE systems promises to address needs in the research community, the computing industry, and the public. Methods that allow different WIE systems to seamlessly exchange knowledge could dramatically hasten the progress of Web extraction efforts currently underway in academia and industry. For the public, advances in Web extraction promise to enable improved search engines that can assist users with tasks and answer complex questions. Further, through application prototypes, the project will provide public-facing information retrieval tools that promise to help users retrieve, understand, and analyze the Web's knowledge more rapidly. The project's research is also integrated with an education plan that includes outreach to underrepresented groups.<br/><br/>The technical solutions pursued in the project utilize probability distributions over natural language. For the integration challenge, the project is developing new Application Programming Interfaces (APIs) that leverage the expressiveness of natural language to automatically integrate current and future WIE systems, even when the systems extract from different types of corpora and represent knowledge in different ways. For the scaling challenge, the project is developing ways to continually optimize new Statistical Language Models (SLMs) over text on the Web. The project investigates the SLM approach for WIE theoretically, asking what types of knowledge different SLMs can encode, and how much text is required to obtain the knowledge. Further, the project introduces new SLM capabilities, including methods for scaling to larger corpora and more semantic classes, and novel models that incorporate collocations, quantitative attributes, sense disambiguation, and actively-selected human input. The project web site (http://websail.eecs.northwestern.edu/wie/) provides additional information and access to results, including software, corpora, and evaluation data sets.",0,Northwestern University
1020001000389434,EAGER: ATAROS: Automatic Tagging and Recognition of Stance,"From activities as simple as scheduling a meeting to those as complex as balancing a national budget, people take stances in negotiations and decision making. While the related areas of subjectivity and sentiment analysis have received significant attention, work has focused almost exclusively on text, whereas much stance-taking activity is carried out verbally. Early experiments suggest that people alter their speaking style when engaged in stance-taking, and listeners can much more readily detect negative attitudes by listening to the original speech than by reading transcripts. However, due to the diversity of factors that influence speech production, from individual differences to social context, isolating the signals of stance-taking in speech for automatic recognition presents substantial challenges.<br/><br/>This Early Grant for Exploratory Research project represents a focused exploration of spoken interactions to provide a characterization of linguistic factors associated with stance-taking and develop computational methods that exploit these features to automatically detect stance-taking behavior. Robust linguistic markers of stance-taking are identified through analysis of both controlled elicitations and archived recordings of Congressional hearings on the financial crisis. The former allow experimental comparisons to highlight sometimes subtle contrasts, while the latter enable validation and extension of those findings in real-world, high-stakes discussions. The analysis includes novel acoustic-phonetic measures of dynamic patterns in speech, such as vowel space scaling and pitch/energy velocity, with sophisticated visualization techniques developed to support feature exploration. Findings are validated via stance recognition experiments combining acoustic and lexical cues, which lay the foundation for automatic tracking of trends and shifts in attitudes.",16,University of Washington
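One of the dynamic acoustic measures mentioned above, energy velocity, can be sketched as the frame-to-frame change in short-time log energy of the speech signal. The frame length, hop size, and sampling rate below are assumed values for illustration and are not taken from the project.

import numpy as np

def short_time_energy(signal, frame_len=400, hop=160):
    # Frame the waveform (assumed 25 ms windows with a 10 ms hop at 16 kHz)
    # and compute the energy of each frame.
    n_frames = 1 + (len(signal) - frame_len) // hop
    return np.array([np.sum(signal[i * hop : i * hop + frame_len] ** 2)
                     for i in range(n_frames)])

def energy_velocity(signal, frame_len=400, hop=160, sample_rate=16000):
    # First difference of log energy per second: a simple dynamic feature
    # of the kind related in the abstract to stance-taking speech.
    energy = short_time_energy(signal, frame_len, hop)
    log_e = np.log(energy + 1e-10)
    return np.diff(log_e) * (sample_rate / hop)

# Hypothetical one-second waveform standing in for a recorded utterance
rng = np.random.default_rng(0)
x = rng.normal(size=16000)
print(energy_velocity(x)[:5])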
1020001000389437,"CAREER: Microscopy Image Analysis to Aid Biological Discovery: Optics, Algorithms, and Community","This project develops image analysis algorithms and systems to process microscopy images that record the proliferation history of biological specimens and evaluate their behaviors that respond to different culturing conditions, therefore, deciphering complex biological processes and accelerating the advance of biological discovery. The research combines techniques of physical optics, computer vision and crowdsourcing to bring a breakthrough to microscopy imaging and microscopy image analysis. The developments of such technologies transform the image-based biology research from subjective to a rigorous, quantitative, and efficient manner. The research team also seeks to promote interdisciplinary collaboration between biological imaging and computer vision, integrate the research outcomes into education activities, and disseminate the project to a wide audience via web, K-12 group, conferences and industry collaborations.<br/><br/>Previous microscopy image analysis methods do not consider the particular image formation process and treat them in the same manner as general natural images, causing many difficulties or failures in the image analysis. This project addresses the challenges in a principally different way by investigating the theoretical foundation of microscopy optics. The computational imaging models of microscopes are derived and used to restore artifact-free images and extract optics-oriented image features, which makes the automated image analysis fundamentally correct and easy. The models are further used to enhance the microscope's functionalities including calibration and virtual microscopy. A cyber-enabled research community is being established within which active learning and crowd-computing are leveraged to improve the algorithm performance and biological discovery.<br/><br/>Updates are available from http://web.mst.edu/~yinz/.",1,Missouri University of Science and Technology
1020001000389439,CAREER: Interactive Gesture-Based Data Manipulation and Visualization for Exploratory Learning and Research,"Visual exploration and analysis of data is increasingly important for advancement in virtually every area of human endeavor. Whether recorded directly by people or indirectly using machines, data captures our observations and interpretations of the world. When people interact with data, it is almost always in a visual form like graphics or text. The goal of this project is to vastly expand the usefulness of interactive visualizations by providing a general way to create and edit data inside the visualizations themselves. The key new idea of the project is that visualization users can perform sequences of gestures with common input devices to express their observations and interpretations directly in visual form. The visualizations not only show data, but also serve as meaningful graphical spaces in which to edit that data. By extending the data processing workflows and display techniques that are currently used in popular visualization tools and software libraries, we can flexibly and expressively translate the details of interactions into precise data changes with simultaneous visual feedback.<br/><br/>The innovative contributions of the project will include a general method to support interactive data editing in visualizations, a diverse collection of data editing gestures, a set of patterns to guide the process of designing visualization tools with data editing features, a declarative programming language for quickly building those tools, and a variety of built tools that show off real applications of data editing in visualizations. The project focuses on developing, evaluating, and distributing tools for scholarly research in the digital humanities. It tightly integrates education to bring together students and researchers from computer science, information science, and the humanities, and provide them with concrete opportunities to engage in authentic interdisciplinary collaboration. Scholarly research and education in the humanities involves open-ended exploration, analysis, and interpretation of complex data sets in diverse areas of study. This makes it an exemplary first target to demonstrate how gesture-based visual editing can be broadly applied to data analysis in virtually every segment of society. The broader impacts of the project will spring from the availability of a new, foundational, general-purpose methodology to support data entry, organization, annotation, and correction. Project products will include publications, tutorials, videos, the visualization gesture system as open source software, a compendium of data editing gestures, and a gallery of demonstration visualization tools for public download. Information on the project and resulting resources can be accessed on the project web site (http://www.cs.ou.edu/~weaver/nsf-career/).",15,University of Oklahoma Norman Campus
1020001000389462,"CAREER: Implementing and Assessing Inexpensive, Effective Methods of Exploring Virtual Environments","Virtual environments (VEs) are computer-generated depictions of three-dimensional worlds in which humans can navigate to explore. VEs have been shown to be effective in a wide variety of applications and disciplines such as to train miners on safety procedures, educate doctors and nurses, provide therapy for post-traumatic stress disorder, and treat children with autism. Despite the abundance of research that shows their usefulness, however, VEs are not widely used. This is largely due to the fact that current VE systems remain expensive and complex to operate. However, with recent improvements in the fidelity and accuracy of relatively low-cost consumer-grade sensors and head-mounted displays (HMDs), it is now increasingly possible to create high-fidelity and yet low-cost immersive virtual reality systems. It should now be possible to make VEs useful to the general public to an extent that was not previously attainable. However, significant challenges remain, such as how to give a person a means of moving around and navigating in a VE, especially when the person needs to move in all possible directions including up and down, such as when exploring a virtual model of a molecule or the solar system. Another challenge is to better understand the capabilities and limitations of VEs as general learning environments.<br/><br/>This project will make it more feasible to create VE systems that are both high-fidelity and cost-effective, which will make it easier for educators, researchers, workers in many fields, and the general public, to use VEs to improve knowledge and livelihood. The project will make it more practical to use VEs for applications such as to assess the evacuation plans of a building before it is built, provide therapy for post-traumatic stress disorder, or teach children about topics such as molecular biology or planetary phenomena. It has been shown that skills or knowledge acquired in a VE transfers to the real world if the experience closely mimics the real world situation; this project creates a general purpose inexpensive VE in which the perceived experience mimics a similar real world experience as closely as possible, but using cost-effective computer systems. This project takes place at a small liberal arts undergraduate college and will offer a number of opportunities for undergraduate student involvement in the research.<br/><br/>The research outlined in this proposal enables both undergraduates and researchers to contribute to the body of knowledge in computer science, human-computer interaction, virtual reality, cognitive science, perceptual psychology, and education. The project will expand and extend the frontiers of the foundational science needed to conduct science and solve practical problems using VEs. Specific project activities include: (a) Explore how human spatial orientation in VEs is both similar and different relative to the real world, by conducting a series of user experiments, in different environmental contexts, where the primary experimental condition is VE versus real world. These experiments will fill in a missing gap in the existing knowledge of VE spatial orientation. (b) Develop and evaluate techniques for navigation and exploration within a VE. The project will systematically build and evaluate different navigation methods in both human-scaled virtual environments as well as multi-scale environments. 
Multi-scale virtual environments encompass virtual models that have no natural human scale, such as a model of a molecule or the entire solar system. The project will systematically evaluate navigation techniques for this type of VE. (c) Evaluate the human perceptual implications of the specific type of VE hardware (motion trackers and visual displays) that is used, specifically by comparing inexpensive commodity hardware to expensive specialized high-end hardware. The goal is to allow the navigation techniques developed earlier in the project to be implemented on commodity devices. (d) Apply the VE navigation system to an intelligent tutoring system. The project will build a virtual front-end to an existing intelligent tutoring system that covers an entire first-year college biology sequence, and evaluate the extent to which the resulting system permits students to learn biology concepts such as by interacting with 3D DNA molecules in a VE.",13,Rhodes College
1020001000389564,CAREER: Generalizations in Obstacle Avoidance Theory,"This project develops a theoretical framework that enables an analytical characterization of guidance laws for obstacle avoidance, accompanied by an experimental validation of these laws. This has significant implications since the obstacle avoidance problem is an important component of the path planning problem, which appears in several diverse fields including robotics, autonomous air, ground and underwater vehicles, computer animation, molecular motion, autonomous wheelchairs, spacecraft avoiding space debris, robotic surgery, assistance aids for the blind, etc. The guidance laws designed are particularly applicable for real-time implementation of precise path planning in cluttered dynamic environments such as those containing robot manipulators, humanoid robots, vehicles flying in formation and other high-dimensional spaces wherein the agents have no a priori information about their environment. A robustness analysis of the designed guidance laws to various uncertainties such as sensor noise, data delays and data dropouts is performed, followed by an experimental validation wherein the guidance laws are coded on microcontroller platforms in a resource-efficient manner and implemented on small-scale robotic ground and air vehicles. The expected results include guidance laws suitable for collision avoidance of obstacles of various, possibly time-varying, shapes moving in high-dimensional stochastic environments, along with a postulation of the safety guarantees of these guidance laws. This project also performs multiple outreach activities and introduces new curriculum that promote the education and applications of robotics, and these activities are conducted in levels starting from K-12 all the way through undergraduate and graduate level engineering education.",7,Wichita State University
1020001000377043,EAGER: Collaborative Research: Some Assembly Required: Understanding the Emergence of Teams and Ecosystems of Teams,"This research project will develop a theoretical and computational framework to understand and enable the socio-technical dynamics shaping the assembly of teams in distributed global contexts. The main barrier to understanding and explaining the role of human centered computing in team assembly is finding a suitable research environment where (1) geographically distributed individuals from potentially different cultures are assembling in teams of varying sizes to accomplish a variety of tasks over varying durations; (2) their actions, interactions and transactions are captured with precise time-stamps; and (3) their outcomes would be recorded with well-defined metrics. Massively multiplayer online role-playing games offer a research environment that meets all of these requirements. EVE Online, a massively multiplayer online role-playing game, offers a potentially suitable research opportunity to study the assembly of teams and ecosystems of teams. It is notable for allowing as many as tens of thousands of people to interact simultaneously on a single server cluster, from around the world, through a well-developed economic system and serious long-term coalitions, in a more flexible action framework than many other popular games possess.<br/><br/>This high-risk high-payoff project will explore the feasibility of using data from EVE Online to identify the socio-technical and cultural mechanisms that explain the assembly of teams more generally. If successful, the study will serve as a model for larger scale studies that, in addition to identifying the assembly mechanisms also assess the impact of these mechanisms on the performance of global teams. The most important and complex decisions in society are made in teams. And yet, assembling effective teams is a daunting task. While there is an awareness of how team collaborations can spearhead socio-economic change, we still have sparse sociotechnical knowledge of how globally distributed cross-cultural teams and systems of teams are assembled. This project seeks to address this limitation. First, the proposed research offers the promise to launch a new generation of theorizing and research on the assembly mechanisms of teams and ecosystem of teams. The empirical data that will be used to develop and test these theories will be a high risk effort but with potential for unprecedented scale, size, and completeness. Second, the research will arguably be the first effort in the field of social networks to develop hypergraph techniques to study assembly of teams and ecosystems of teams.<br/><br/>The knowledge and tools developed in this research will allow practitioners to cultivate more effectively the emergence and performance of ad hoc teams in business, science and gaming. It will also provide other scientific disciplines with new computational statistical modeling methodologies and tools to model hypergraphs.",6,Northwestern University
1020001000377072,EAGER: The Effect of Stress and the Role of Computer Mediation on Exam Performance,"In this project the PI will explore a novel experimental and methodological framework for measuring and understanding the role of stress in student performance on exams. Stress is defined here as sympathetic arousal, an ever-present mechanism that helps humans cope with perceived threats or challenges. The planned experiments, in which criticality and computer-mediation define the two axes of the experimental space, will cover a rich set of combinations that include bi-weekly exams each of which counts for a small percentage of the grade vs. midterm and final exams that each count for a large percentage of the grade, alternately offered in paper and iPad forms. Subjects will be randomly divided into control and interventional groups; in the interventional group the time will be relaxed, to determine the effect of stress reduction on the evolution of competency. Unobtrusive measurements (to ensure non-interference with the observed behaviors) will span the genetic, physiological and psychological levels, while micro to macro measurements will afford a deep understanding of the issues. While most HCI-related projects aim to change the design of the interface, this research instead investigates the possibility of ""changing"" the human (student) by revolutionizing relevant aspects of the educational system. Improving the human-computer interface is a secondary aim where, depending upon the success of the intervention, stress-reduction designs can be conceived to counter-balance the time stressor.<br/><br/>Broader Impacts: This research tries to identify and suggest ways to correct problems in paper- vs. computer-mediated exams, precipitated by strong sympathetic responses. To this end the PI explores the role of the time stressor, and in so doing questions certain aspects of the prevailing educational philosophy.",12,University of Houston
1020001000377080,EAGER: skWiki - A Sketch-based Wiki,"The fundamental research question posed by this project is how to provide an integrated, flexible, and scalable storage repository for early design sketches. Early design is characterized by informal, unstructured, and heavily collaborative work processes involving multiple participants, multiple forms of representational media, and a plethora of interactions between these. However, there currently exist no storage mechanisms capable of supporting the full scope of sketching for creative design. In practice, designers either resort to low-tech physical artifacts such as pen, paper, and whiteboards that lack the persistence of digital storage, or general purpose software such as e-mail or shared file systems where the storage mechanisms are not well-integrated with the creative process and also lack provenance, history, and versioning information. <br/><br/>Intellectual Merit: The contribution of this project is to apply the concept of a Wiki to sketches. Wikis are collections of freely editable web documents and have become popular due to their capacity for drawing upon the crowd to create content. Wikis have also been shown to be particularly useful in early design, but existing Wiki software makes integrating visual representations burdensome. This research effort will be focused on designing, building, and evaluating a sketch-based Wiki (skWiki) that allows for combining vectorized sketches with text. <br/><br/>Broader Impacts: Unlike prototyping, manufacturing, or even engineering, innovation cannot easily be outsourced and is now recognized as the single most important ingredient in our economy. Consequently, there has never been a clearer imperative to both create designs which will fundamentally change the competitive landscape, and also to improve the innovative capacity of designers, particularly those in engineering. To facilitate such synergistic effects, the skWiki framework designed in this project will be made available as Open Source. We will also use the tool for teaching engineering design.",4,Purdue University
1020001000377109,RAPID: Enabling Collaborative Science Learning Experiences on Mobile Devices,"This RAPID project focuses on porting existing collaborative software in support of project-based science -- WeMap and WeKWL -- to the iPad. Initial software design and development has been done for Android machines as part of a complementary research project, but iPads are available in so many schools that it would be a shame to miss the opportunity to make available for the 2012-2013 academic year software that could be immediately used in science classes. The work consists of moving the core collaboration support, concept mapping, and KWL facilities to the iPad during summer 2012, along with development of sample lessons that model how the software can be integrated into classroom activities. The apps will be available in fall 2012. Additional capabilities (file management and message board) will be ported in fall and winter and made available to schools as they are available. Together the apps will form what is being called the WeLearn Collaboration Platform. The first apps to be made available form a core of apps that teachers are used to using and ask for when they have a new device available; more general-purpose apps (e.g., drawing and animating, collaborative reading and writing) and science-specific apps (e.g., data collection and analysis tools) will be added to the suite as more is learned from complementary research projects. The purpose of this RAPID is to establish the platform into which those apps can be added and attract significant teacher buy-in with those apps. That infrastructure and the credibility gained through its use will form the basis for a more complete app set and a reason for teachers and school systems to see that tools developed through a research endeavor can have value. <br/><br/>This project is addressing a real, imminent need: Throughout the country, schools are buying iPads this summer for use in the fall. Teachers need to be able to use the iPads productively in science class immediately when school begins, yet there is little in the way of educational software available in support of science learning. Given that publicly supported bonds are often used to support large-scale technology purchases, it will be devastating (long term) to K-12 school budgets if, like their desktop and laptop cousins before them, iPads are not having the kind of positive impact on student achievement that is expected. Yet, without available apps and models of how to use them effectively for science education, there is little possibility of the technology being used well. This project addresses real needs -- making easy-to-use software for promoting science learning available to teachers and doing it in such a way that they have that software in time to use it on their new devices and have available models of its use to spur their imaginations.",9,University of Michigan Ann Arbor
1020001000377112,EAGER: Preliminary Study of Hashing Algorithms for Large-Scale Learning,"Many emerging applications of data mining call for techniques that can deal with data instances with millions, if not billions of dimensions. Hence, there is a need for effective approaches to dealing with extremely high dimensional data sets. <br/><br/>This project focuses on a class of novel theoretically well-founded hashing algorithms that allow high dimensional data to be encoded in a form that can be efficiently processed by standard machine learning algorithms. Specifically, it explores: One-permutation hashing, to dramatically reduce the computational and energy cost of hashing; Sparsity-preserving hashing, to take advantage of data sparsity for efficient data storage and improved generalization; Application of the new hashing techniques with standard algorithms for learning ""linear"" separators in high dimensional spaces. The success of this EAGER project could lay the foundations of a longer-term research agenda by the PI and other investigators focused on developing effective methods for building predictive models from extremely high dimensional data using ""standard"" machine learning algorithms. <br/><br/>Broader Impacts: Effective approaches to building predictive models from extremely high dimensional data can impact many areas of science that rely on machine learning as the primary methodology for knowledge acquisition from data. The PI's education and outreach efforts aim to broaden the participation of women and underrepresented groups. The publications, software, and datasets resulting from the project will be freely disseminated to the larger scientific community.",2,Cornell University
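A minimal sketch of one-permutation hashing as commonly described in the literature, assuming a sparse binary feature vector: a single random permutation of the feature space is divided into equal-width bins, and each bin records the smallest permuted index of a nonzero feature (an empty marker otherwise). The dimensions, bin counts, and example vectors below are illustrative, not the project's implementation.

import numpy as np

def one_permutation_hash(nonzero_indices, dim, num_bins, seed=0):
    # One shared random permutation of all feature positions
    rng = np.random.default_rng(seed)
    perm = rng.permutation(dim)
    permuted = np.sort(perm[list(nonzero_indices)])
    bin_width = dim // num_bins
    sketch = np.full(num_bins, -1, dtype=int)  # -1 marks an empty bin
    for p in permuted:
        b = min(p // bin_width, num_bins - 1)
        if sketch[b] == -1:                    # keep the minimum offset per bin
            sketch[b] = p % bin_width
    return sketch

# Two similar sparse vectors yield sketches that agree in many bins; such
# compact sketches can then be fed to standard linear learners.
print(one_permutation_hash({3, 17, 250, 511}, dim=1024, num_bins=8))
print(one_permutation_hash({3, 17, 250, 600}, dim=1024, num_bins=8))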
1020001000377135,EAGER: A Teachable Robot for Mathematics Learning in Middle School Classrooms,"In robotics activities, students are theorized to benefit from ""learning-by-doing"" activities where they set their own goals, but in practice, these activities have failed to produce the expected effects on STEM outcomes. To improve learning from robotics, this project will leverage teachable agent technologies, where students learn about a domain by teaching a computer agent. The agent interacts with students to expose misconceptions and encourage them to persist in the face of failure. By integrating the structure of teachable agents with the exploratory and engaging features of learning from robotics, the project will enhance the benefits of both approaches. The investigators will implement a robot that students can teach about the concepts they are learning in their middle school mathematics class. They will engage in design exercises with teachers and students to identify features of the robot that might be particularly important for improving student motivation and learning. Finally, they will conduct two studies, one in the laboratory and one in the classroom, to explore how students react to the robot in a realistic setting. <br/><br/>Intellectual merit: The project will improve understanding of what features students respond to in a teachable robot for mathematics, and of the potential educational benefits such a robot might have. The project will make a technological contribution by inventing methods for integrating teachable agents and robotic learning environments.<br/><br/>Broader impacts: The project will impact middle school students from underrepresented groups who will help design and test the teachable robot, exposing them to new technologies and engaging aspects of STEM careers. The project will also lead to a better understanding of the impact of teachable robots in the classroom and pave the way for these technologies to be adopted more widely in education.",7,Arizona State University
1020001000377136,EAGER: Prototype Dense Motion Capture for Large-Scale Deformable-Scene Tracking,"This EAGER proposal, developing a dense deformable object tracking method for testing and ground-truthing highly dynamic maneuvers of robotic vehicles, offers to prototype a radically new approach to instrumentation for highly agile machines of many types. Using heterogeneous fusion of numerous sensing modalities, the PI proposes to create a high resolution, high bandwidth map of deformable and articulated bodies such as cars with suspensions, humans, legged robots, and such for the performance evaluation and algorithm development for these highly dynamic research artifacts.<br/><br/>Broader Impacts: This work will facilitate a wide variety of research, including studies of agile robotics in dynamic environments. The impacts to robotic science, especially agile and fast moving robots, are clear. The system will also have impact on any other science, such as human motion analysis, ergonomics and medical rehabilitation, where accurate, dense tracking and mapping of deformable scenes is required.",7,George Washington University
1020001000377149,EAGER: Therabot - A Robotic Therapy Support System,"An issue of growing concern in the United States is the prevalence of sexual violence for women on college campuses, where up to one in four women report experiencing an attempted or completed rape while a student. It is important for these women to obtain therapeutic counseling to help them cope with the aftermath of these traumatic events. Research indicates that traditional therapy is beneficial in some instances, and additional benefits have been discovered with the use of animal-assisted therapy (AAT); however, there are limitations to AAT (e.g., training costs, availability, phobias, and allergies). <br/><br/>In this project the PI will explore an alternative called the Therabot, a lightweight socially assistive robotic support system. The goal is to design, implement and evaluate the efficacy of a prototype, which will improve upon the benefits discovered with AAT. To this end, the Therabot will have the appearance of a stuffed animal/toy and will exhibit affective behaviors (including head and arm movements, and animal-like or rhythmic sounds), to provide comfort and support during therapeutic interventions for the trauma associated with sexual violence. It will be convenient and easy for patients to use, not only in the clinic or lab but also in the home to provide support and encourage home therapy practices, something which is not offered by current forms of supplemental therapy support such as therapy animals.<br/><br/>This preliminary research will have three phases: design and construction of the Therabot system; validation that the system works as intended with human participants; and field testing with patients at the Department of Outreach and Sexual Assault Services at the PI's institution. Three hypotheses will be evaluated as part of this exploratory research: that patients who use the Therabot in counseling will be more likely to communicate with the therapist, and will feel more supported and comfortable during counseling, compared to patients who do not use the Therabot or who use an identical-looking but non-robotic Therabot in counseling; that patients will be more likely to perform home therapy practice using the Therabot than patients who do not use the Therabot or who use an identical non-robotic Therabot in home therapy practice; and that patients using the Therabot will experience improved therapeutic outcomes compared to those who are not using the Therabot or who use an identical non-robotic Therabot.<br/><br/>Broader Impacts: If the PI's hypotheses are supported, this research will not only impact survivors of sexual violence but will also be applicable to other therapeutic situations such as soldiers returning from the battlefield with PTSD and survivors of natural and man-made disasters. The PI expects that a lightweight socially assistive technology system like the Therabot will provide beneficial support and companionship to patients in hospitals, hospice facilities, and long-term care environments. It will be easy to incorporate the Therabot into the curricula of both undergraduate and graduate computer science, mechanical engineering, and clinical psychology courses, and also to use it for class projects. The many expected practical applications and potential societal impacts of the Therabot will make it a tremendous resource for K-12 outreach activities whose goal is to encourage interest in science and technology careers.",7,Mississippi State University
1020001000377160,EAGER: Combining natural language inference and data-driven paraphrasing,"Natural language inference (NLI) and data-driven paraphrasing share the related goals of being able to detect the semantic relationship between two natural language expressions, and being able to re-word an input text so that the resulting text is meaning-equivalent but worded differently. On the one hand, work in recognizing textual entailment (RTE) within NLI has attempted to formalize the process of determining whether a natural language hypothesis is entailed by a natural language premise, sometimes called ""natural logic"". Research in data-driven paraphrasing, on the other hand, attempts to extract paraphrases at a variety of levels of granularity including lexical paraphrases (simple synonyms), phrasal paraphrases, phrasal templates (or ""inference rules""), and sentential paraphrases, for various downstream applications such as question answering, information extraction, text generation, and summarization.<br/><br/>This EAGER award explores bridging the gap, through analysis of sentential paraphrasing via synchronous context free grammars (SCFGs), and how they may be coupled to formal constraints akin to recent work in phrase-based formulations of natural logic for RTE. Data-driven paraphrasing has largely neglected semantic formalisms, and NLI has relied heavily on hand-crafted resources like WordNet. If this project is successful it will potentially lead towards NLI systems that are more robust, and paraphrasing systems that are better formalized. Taken together, these improvements will allow better RTE systems to be developed. Moreover, this project has the potential to impact widely used human language technologies such as web search and natural language interfaces to mobile devices, and to further the connection between computational semantics and formal linguistics.",0,Johns Hopkins University
1020001000377194,EAGER: Scaling the Preprocessor and Making it More Intelligent in Deterministic Database Systems,"This research aims to bridge the gap between the current reality and the potential for database system deployments on large clusters of servers in a data center or large numbers of virtual machines in the cloud. There does not exist a scalable, elastic, ACID-compliant database system implementation today. In general, applications that require elastic scalability are forced to program around the lack of ACID guarantees of the database system, and many applications are too complicated to be rewritten to work around these issues. The goal of this project is to overcome these issues using the following approaches: (1) Implementing a database system using an innovative deterministic architecture that guarantees that nondeterministic processing events will not affect database state, (2) Leveraging this new architecture to avoid ""commit protocols"" for distributed transactions in a cluster, (3) Designing a scalable preprocessor for the deterministic database that collects, analyzes, and dispatches transactions to the database cluster in order to further improve scalability, and (4) Developing a new lazy transaction evaluation approach in order to spread out load and avoid damaging effects of database load spikes. Overall, this research enables thousands of applications written for many different use-cases (such as e-commerce, telecommunications, and online auctions) to achieve scalability ""for free"" without having to rewrite the application code. This research involves both Ph.D. students and undergraduates, with significant outreach efforts to encourage undergraduates to get involved in research. Open source code, publications, and technical reports from this research will be disseminated via the project web site http://db.cs.yale.edu/determinism/.",8,Yale University
1020001000377197,EAGER: Learning Upsampling Operators for Animation of Cloth and Fluids,"The PI's goal in this exploratory research is to tackle the fundamental obstacle preventing high quality, interactive animation of natural phenomena, namely the enormous number of degrees of freedom involved. Because hand animating a typical mesh is a tedious and time consuming task, computer graphics has turned to physics and simulation to animate most natural phenomena. In this context, the promise of simulation is generality; an infinite space of material properties and initial conditions can be explored. This generality is also simulation's greatest limitation; the space of possible animations is vast, while the space of desirable animations is a great deal smaller. The PI's approach is to use simulation's strength (its ability to create rich animation data under a variety of conditions) to combat its greatest limitations (high dimensionality and computational expense).<br/><br/>To this end, he will develop machine learning tools for finding new and more expressive low-dimensional representations, which do not describe all possible animations but rather succinctly describe the space of desirable animations. Previous attempts to apply machine learning to the animation of natural phenomena have shown promise, but also significant limitations. These approaches have suffered from over-fitting, have sacrificed locality, and have not allowed artistic control over the space of possible animations. Furthermore, these approaches have been too data-driven, failing to allow for the input of valuable human knowledge and intuition or mathematical and physical models. Until these limitations are addressed, the promise of high-quality interactive computer animation of natural phenomena will remain out of reach. For concreteness the PI will focus on cloth and fluids as test bed domains (initially assuming an algorithmic paradigm of coarse simulation enhanced by data-driven upsampling operators), for which he will explore questions of sparseness, expanded feature sets, combining operators, and artistic control. <br/><br/>Broader Impacts: Simulation is a powerful technique whose usefulness is not limited to computer animation. So while the test bed domains fall within the realm of traditional computer graphics, project outcomes will allow for high-quality, interactive computer animation of natural phenomena across all of science and engineering, with particular applicability to film, video games, virtual reality, medical training, etc. Moreover, the unique context of computer animation will necessarily require new machine learning algorithms that will feed back into that community as well. The PI plans to develop and release the majority of his source code under free BSD licenses.",14,University of Utah
1020001000377210,EAGER: Compositional Data Fusion,"The proposed activity will address two problems: (1) transportability, and (2) data fusion. In the first topic, the project focuses on the problem of utilizing conclusions obtained in one environment in another by permitting reasoning agents to focus their reasoning on only the differences, while taking for granted that which is common to both environments. In the second topic, this project will formalize and reduce to algorithmic procedures the general problem of fusing data coherently from multiple heterogeneous sources. The proposed activities will develop effective procedures for determining whether unbiased estimates of causal relationships in a target environment can be synthesized from information obtained from a set of heterogeneous studies. These activities will lead to a theoretical understanding of the conditions under which a learning system can rely on previously learned information, transferred from a different environment. <br/><br/>Results from this research project have the potential to impact all data-related sciences where the transportability and data-fusion problems are ubiquitous. These two problems demand understanding of causal relationships in the domains being considered. Such causal relationships need to be addressed by causal calculi so as to extract the invariant features from each information source. The approach pursued in this project builds on previous work of the PI, for instance, reasoning with structural causal models and counterfactuals. The problems of transportability and data fusion are critical in the health and social sciences, where data is scarce and experiments are costly; they are of particular interest in the ""Big Data"" enterprise, which is driven by the premise that data availability will automatically result in data interpretability and where there are nuances among the contexts of data collection.",5,University of California-Los Angeles
1020001000377288,EAGER: Quick Draw Semantics,"This project is developing methods and tools for ordinary web content creators to indicate the structure inherent in their content by drawing a few simple lines or figures over their web site. Their motivation for doing so is to use enhanced browsing and searching capabilities in their local site based on these mappings. These new browsing and searching capabilities are based on the definition of canonical structures (i.e., data model fragments) and domain patterns (a set of instantiated canonical structures) with associated navigation and access paths where domain patterns are easily recognizable by content creators and easily articulated using a simple drawing approach. This work is complementary to yet distinct from much existing effort to transition to a more semantic Web; the focus here is on (simple) local specification (drawing) of mappings with local benefit. This project uses a form of crowd-sourcing to create ontology mappings and (indirectly) create ontologies. This project applies decades of work on data modeling and schema mapping but with structural fragments rather than complete schemas/ontologies, without view update problems (because of the focus on browsing/searching), and with immediate local benefit. The research is motivated by and will be showcased in a series of websites that support public access to instructional materials. In general, this work is expected to contribute (nearly effortlessly yet in a high-quality way) to achieving the vision for the semantic web.",0,Portland State University
1020001000377289,"EAGER: Nonparametric Machine Learning on Sets, Functions, and Distributions","Most machine learning algorithms operate on fixed dimensional feature vector representations. In many applications, however, the natural representation of the data consists of more complex objects, for example functions, distributions, and sets, rather than finite-dimensional vectors. This project aims to develop a new family of machine learning algorithms that can operate directly on these complex objects. The key innovation is efficient estimation of certain information theoretic quantities for learning predictive models from complex data. The research is organized around three specific aims: (a) Development and analysis of nonparametric estimators for certain important functionals of densities, such as entropy, mutual information, conditional mutual information, and divergence; and study of the theoretical properties of these estimators including consistency, convergence rates of the bias and variance, and asymptotic normality. (b) Use of the preceding estimators to design new learning algorithms for clustering, classification, regression, and anomaly detection that work directly on sets, functions, and distributions without any additional, hand-made feature extraction, histogram creation, or density estimation steps that could lead to loss of information. (c) Study of the theoretical properties of these new machine learning algorithms (computation time, sample complexity, generalization error) and empirical evaluation of the algorithms on a variety of important real-world problems, including nuclear detection, astronomical data analysis, and computer vision, in collaboration with researchers at Lawrence Livermore, University of Washington and Johns Hopkins University, and Carnegie Mellon University respectively.<br/><br/>Broader Impact. The project, if successful, could substantially advance the current state-of-the-art in building predictive models from complex data. The results of research, including publications and open source software, will be freely disseminated to the larger scientific community. The project provides enhanced research-based training opportunities for graduate and undergraduate students at Carnegie Mellon University as well as the collaborating institutions.",2,Carnegie-Mellon University
1020001000377295,EAGER: Collaborative Research: Technology to Support Mathematical Argumentation,"The big question being addressed in this project is, ""How can technology support the teaching and learning of mathematical argumentation?"" This EAGER proposal focuses on the first steps in moving towards the vision of helping youngsters learn mathematical/algebraic argumentation. The project has two complementary goals: to better define the technological tools needed to support elementary students learning about proof and argumentation and to understand better how elementary school students learn to make and defend mathematical claims when such tools are available. The PIs envision and are developing an animation tool to be implemented on tablet computers (so each child has his/her own electronic notebook) that provides infrastructure for sharing and refining arguments in small groups and across the class and for promoting concrete math discussions. A big challenge is providing the right tools for expression; for sharing to happen, learners need to first be able to express their understanding, and much research shows that a concrete ""written"" (sharable) expression of understanding provides better foundations for promoting concrete discussion than do expressions of understanding that are simply verbal. The particular advance in expression that is being made in this project is providing tools for dynamic representations; that is, they are able to animate what happens when mathematical operations are carried out. These concrete expressions of understanding can then be played back, paused, and so on. Students create animations by drawing, erasing, duplicating, moving, and grouping objects. They can edit each other's animations, and the teacher will also be able to create animations for students to view and edit. They record and save their oral explanations along with the animations and play them back together, thus making their verbal descriptions concrete, examinable, and sharable. <br/><br/>The ability to prepare and present a mathematical argument is a key component of the mathematical competence students need to achieve in upper elementary school (grades 3 through 5). Many students struggle with the subject matter and therefore struggle as well with making mathematical arguments. The PIs envision a computational tool set that young students will use to construct and share mathematical arguments, in the service of learning to be competent algebraic reasoners. In this EAGER project, they focus on first steps in developing that tool set and on investigating how young learners make and defend mathematical arguments when they have such tools available. This project represents work in its early stages on an untested but potentially transformative idea and is likely to catalyze rapid and innovative advances in helping young learners become mathematical reasoners.",9,TERC Inc
1020001000377305,"EAGER: Toward a User-centered, Inclusive, and Personalized Approach to Mobile Web Adaptation","The PI's goal in this exploratory research is to seek ways to fundamentally improve the user experience when browsing the Web on mobile handheld devices. Existing approaches to mobile Web adaptation have a number of major problems or limitations. Although no single interface design can address the different needs of individual users, existing adaptation methods typically offer one-size-fits-all solutions. While many contextual factors affect Web navigation on handheld devices, relatively little context data is systematically captured today in real-time and exploited in adaptation. Current techniques do not provide users with cues to help them find information of interest. And almost all existing adaptation solutions do not consider the problem of mobile Web accessibility for disabled users; in particular, there is a paucity of knowledge on how to design and deploy the ability to invoke, in a personalized, device-aware manner, assistive technologies or special features on a mobile device that may be needed by an individual user. In this project the PI will take the first steps toward development and evaluation of a transformative user-centered, inclusive, and personalized approach to mobile Web adaptation that dynamically adapts the content and display of Web pages based on users' information needs, device characteristics, and accessibility requirements. To these ends, he will focus on users with visual impairments. Specific research questions to be addressed include the following: Can a user-centered approach to mobile Web adaptation better meet the preferences and needs of both typical users and those with visual impairments? Can device-aware mobile Web adaptation better support accessibility? Can we design and build a cloud-based integrative, personalized mobile Web adaptation system that provides ubiquitous access to not only typical users but to users with visual impairments as well? Project outcomes will include design, implementation and evaluation of a prototype cloud-based service that integrates a novel user-centered approach to mobile Web adaptation that enables users to specify and adjust adaptation preferences and strategies, a novel device-aware adaptation that dynamically adapts Web pages based on characteristics of individual mobile devices, and novel personalized adaptation techniques for restructuring Web content so as to provide useful information cues and accessibility.<br/><br/>Broader Impacts: This research will benefit not only researchers in the related fields of computer science and information systems, but also manufacturers and designers of handheld devices as well as individual users. The project will provide unique insights to manufacturers of mobile devices and to mobile application designers on how to achieve user-centered, context-aware adaptive interfaces for handheld devices that improve the user's navigation performance and overall Web browsing experience. It will lay the foundations for a novel personalized adaptation solution that improves mobile Web accessibility for different user communities. And it will provide technical guidance and empirical evidence on how to reduce the digital divide as it relates to the mobile Web. Project findings will be disseminated through interdisciplinary conferences, workshops and journals, and they will be incorporated into existing graduate and undergraduate courses at UMBC as well as a variety of K-12 outreach programs.",4,University of Maryland Baltimore County
1020001000377355,CAP: Mobile Pathways for 21st Century Learning,"Open Air Boston (OAB) is a nonprofit organization whose mission is to bridge the digital divide so that low-income, under-served populations in the City of Boston have access to digital communications technology and the Internet. The OAB Technology Goes Home (TGH) Program is an 11-year-old, award-winning City initiative that gives under-served residents the opportunity to access the tools and education required for 21st century skills development. The next step in this initiative has been conceptualized as a collaboration with local informal education organizations to design game-like community learning experiences that use mobile technologies to engage students and their parents in scientific reasoning and exploration that will help them learn STEM content and practices and perhaps become interested in STEM careers. This Cyberlearning Capacity-Building Project (CAP) is for the purpose of building the research capacity of this already-strong team so that they have the capacity to develop an approach that is based on the best that the literature has to tell us about how people learn and how people learn with technology and so that they can use their efforts as an infrastructure for carrying out research that will advance what is known about engaging low-income, under-served populations of teens in STEM endeavors in beneficial ways. The team is undertaking three capacity-building sub-goals to help them incorporate state-of-the-art knowledge and findings into their approach: (1) further developing the project team, partnerships, and collaboration activities of the team so that they can successfully achieve those goals; (2) synthesizing the research literature on the use of a social-mobile learning applications to engage students in informal STEM learning; and (3) synthesizing the research literature on engaging under-represented, under-served middle school students and their parents in technology-based collaborative educational activities. Experts on how people learn and how people learn with technology are collaborating with the already-existing team to conceptualize an approach to using mobile devices to engage middle schoolers and their families in STEM exploration and investigation in their community that has foundations in what is known about how people learn and engaging adolescents in science and that has good potential to lead to STEM learning and interest in STEM-related careers.",9,OpenAirBoston.net
1020001000377359,EAGER: T2K: From Tables to Knowledge,"The Web has made humans smarter, providing ready access to vast amounts of knowledge and facts. The Semantic Web has the capacity to similarly enhance computer programs and devices by giving them access to enormous volumes of data, facts and knowledge. This project is exploring the feasibility of automatically extracting new knowledge directly from data found in spreadsheets, database relations, and document tables and representing it as highly interoperable linked open data (LOD) in the Semantic Web language RDF. The extraction is guided by probabilistic graphical models that use statistical information mined from current LOD knowledge resources. To demonstrate the potential payoff of the research, the system is used to extract knowledge from tables collected from medical journals and tables from web sites like data.gov. <br/><br/>While the W3C semantic web languages RDF and OWL are used to represent the knowledge, the results are applicable to other semantic data frameworks such as Microdata (Search Consortium), Freebase (Google), Probase (Microsoft) and the Open Graph (Facebook). The open sourced prototype software allows other researchers to experiment with automatically producing semantically enriched data from tables for their domains.<br/><br/>If successful, such software extraction systems are expected to become part of a new online knowledge ecology -- both consuming existing LOD knowledge to understand the intended meaning implicit in a table and producing new facts and knowledge that will become part of Web. This represents a dramatic increase in the breadth and depth of public semantic data that can make ""big data"" analytics more effective.",0,University of Maryland Baltimore County
1020001000377372,EAGER: Automatic Classification of Programming Difficulties by Mining Programming Events,"Today, when a student or industrial programmer faces difficulty in some task assigned to him/her, this event often goes unrecorded and unobserved by others. As a result, it is not possible to use mechanisms to ameliorate the effect of the difficulty. In this project, the researchers will address this problem by automatically detecting and classifying programming difficulties by mining programmers' interaction with the computer. Specifically, they will investigate (a) whether it is possible to automatically identify the barrier causing a difficulty and (b) whether it is possible to determine the severity of the difficulty. The project will start a new area of research exploring how difficulty-detection mechanisms should be designed, implemented, evaluated, and applied.<br/><br/>Broader impacts: If successful this research will lead to future work on a variety of difficulty amelioration mechanisms, including (a) allowing industrial workers and teachers to synchronously push help to developers facing difficulties; (b) informing developers facing difficulties about actions taken by others who overcame similar difficulties, so that they can take similar actions; (c) allowing assignment doers to anticipate the kind of difficulties they will encounter and thus be better prepared for the assignment; and (d) giving assignment definers an understanding of the inherent difficulty level of the assignment, which can lead to redefinition or better explanation of the assignment. These amelioration mechanisms can substantially reduce the high costs associated with software development and quality teaching, and transform collaborative software engineering and education. Such mechanisms can lead to significant productivity gains in industry, especially in distributed software development. An educational setting provides an even more compelling motivation because shyness of students and/or lack of instructor time prevents student difficulties from being addressed in a timely manner. In computer science this is particularly a problem as a small mistake can prove to be very costly. The difficulty amelioration mechanisms will reduce this problem and thus attract a larger variety of students to computer science and empower those who are already committed to it.",10,University of North Carolina at Chapel Hill
1020001000377374,BIGDATA: Small: DCM: DA: Advancing real-time data processing and reduction in radio astronomical detectors,"This award supports the development and testing of accelerated, efficient reconstruction and beam-forming algorithms for radio astronomical detectors. When possible, computations will be ported to Graphics Processing Unit (GPU) clusters that provide massively parallel floating-point capabilities. This approach is ideally suited to the data generated by digital radio astronomy arrays, and especially well suited when the respective hardware is deployed at remote observation sites, because the proposed algorithms show excellent performance-to-power efficiency when compared to general-purpose supercomputers. Additionally, real-time Field Programmable Gate Array (FPGA)-based algorithms will be developed for radio transient data reduction. They will help develop accelerated radio simulation methods to support and validate the proposed methodology. These algorithms and methodology will be tested at the Askaryan Radio Array (ARA), an array of in-ice antennae deployed at the U.S. Antarctic South Pole Station to detect the highest-energy neutrinos coming from deep-Universe sources. The award's broader impacts are in the use of high-performance parallel computing and efficient real-time data reduction in the fields of radio astronomy, cosmology, and particle astrophysics. Expansion of the use of GPUs will provide the necessary computing power for next-generation experiments in a sustainable, efficient way. The post-doctoral researcher and graduate student will be part of the overall development and testing. In addition, undergraduate students will also be involved in the research. An on-line public research component will be developed to engage ""citizen scientists"" in the analysis of radio astronomy data on home computers.",5,University of Wisconsin-Madison
1020001000377377,"BIGDATA: Small: DA: Data Summarization, Analysis, and Triage for Very Large Scale Flow Fields","Two major challenges are faced by computational scientists who routinely generate big data sets. The first is deciding what data are the most essential for analysis, given that only a small fraction of them can be retained. The second is transforming these data into information that conveys the most insight. As the size of simulation output continues to grow, the ""save the data first, analyze them later"" approach needs to be completely replaced with more aggressive data prioritization and reduction before any analysis can be done. In this project, core data analytics technologies are developed to facilitate effective data summarization, indexing, and triage for large-scale flow data. Fluid flow plays an important role in explaining many phenomena across a wide range of disciplines. To provide the scientists with a succinct view of the data content, and also organize the data and features based on their similarity and complexity, a graph-based model is developed to simultaneously reveal the major structure of the flow field, and to facilitate high performance and out-of-core flow line computation. We develop statistical and geometrical complexity measures for the flow lines to efficiently group and prioritize sub-regions in the vector field to allow efficient data access. To characterize the temporal complexity of flow fields, we develop time-varying analysis algorithms that allow for more detailed analysis of the data, and provide the user with a flexible interface to quickly identify salient features.<br/><br/>The development of the proposed integrated flow analysis and visualization framework initially targeted two applications, simulations of turbo machinery in aerodynamics, and study of Madden Julian Oscillation in climate modeling. As typical flow in turbo machinery is full of evolving shocks and vortical structures, visualization allows the designers to identify loss regions and complex flow features in a relatively short amount of time if these features can be identified automatically. To understand the phenomenon of Madden Julian Oscillation, as this phenomenon is strongly related to the convection of air, the flow analysis techniques developed under this project can be used to identify and track its locations and durations. Because the size of data generated by time-varying simulations can be prohibitively large, the proposed time-varying data reduction techniques allow scientists to focus on the most salient portion of the data. The key impact of this project is to make available a working and attractive solution to assist scientists to comprehend the vast amount of data generated by large-scale simulations. Through close collaboration with application scientists, the research ideas developed in this project will be transformed into an open source software framework.",5,Ohio State University
1020001000377379,EAGER: A Legal and Structural Investigation of Online User-Generated Content Systems,"This project examines user-generated content (UGC) in interactive media, in order to improve the functioning of copyright law with respect to important areas of computer software and online communications. UGC is currently generating interest among businesses, researchers, scholars, and game software designers. In recent years, increasingly powerful, simple, and cheap digital authoring tools have extended the power of amateur creativity. This project will provide: (1) a new structural theoretical framework for the analysis of UGC in games, (2) new empirical data on UGC in computer games, and (3) a legal analysis of the applicability of copyright's ""fair use"" doctrine to UGC in games.<br/><br/>Videogames are a new form of interactive media that often combines traditional media forms with software interfaces that enable forms of player authorship. Content authored by players has the potential to infringe copyright law. While copyright infringement lawsuits have been filed based on UGC technologies, there have been few empirical studies of interactive media technologies that depend on UGC. Additionally, very few legal or media theorists have provided analytical structures that can be used to explore the complex relationship between player authorship and traditional authorship in the video game industry. <br/><br/>This project endeavors to map the intersection between copyright law and UGC authorship in video games. The payoff of this research will be to provide additional insight to the market, to help ensure the development of important emerging technologies, and to contribute to the contemporary debate over appropriate legal rules for the interactive media industry. The data collected and the theoretical framework developed will aid those researching interactive media in speaking about and evaluating tools for user-generated content in games and other interactive media, and clarify the legal constraints on game software design.<br/><br/>This research will provide data on how interactive media technologies facilitate creativity, thereby helping policy makers and technologists understand how user-generated content intersects with new media forms. Additionally, by analyzing the legality of user-generated content, this research will aid policy makers and technologists in efforts to reform copyright law to take into account new forms of authorship.",13,Rutgers University Camden
1020001000377381,BIGDATA: Small: DA: Mining large graphs through subgraph sampling,"The size and complexity of ""Big Data"" graphs have always posed significant challenges, limiting the scope of their analysis and thus also limiting the implications that one can draw from them. Mining data from large real-world graphs typically poses two challenges: one of computational resources and another of incomplete information. A comprehensive analysis of these graphs has usually required access to large distributed computing platforms and sophisticated software. This project aims to address a portion of these challenges by investigating a new method, based in statistics and spectral graph theory, to infer essential properties of the full graph through extracting a representative sample of small subgraphs from the full graph. The goal is to reduce the computational burden on researchers interested in large graphs and thus broaden participation in ""Big Data"" activities. As is now well-understood, the analysis of large graphs has many applications in a variety of fields including business, economics, public policy development, law enforcement, public health, sociology and, of course, computer science. This breadth of applicability and the proposed curriculum development activities have the potential to draw and retain a greater diversity of students into computer science and engineering and to increase participation by under-represented groups.<br/><br/>Many of the principal properties of a graph can be inferred from the graph spectrum (eigenvalues of its adjacency or the normalized Laplacian matrix). In particular, a rich set of interlacing results in spectral graph theory allows one to bound the eigenvalues of the full graph using the eigenvalues of its subgraphs. This project will develop new algorithms for generating subgraph samples, and then use basic estimation theory from statistics and the interlacing results from spectral graph theory to discern properties of a large graph. The new method based on subgraph sampling (as opposed to node or edge sampling) uses results from spectral graph theory and statistics to estimate the spectrum (eigenvalues) of the graph based on the spectrum of the sampled subgraphs. The goal is to allow a meaningful analysis of extremely large graphs without the use of anything beyond a typical desktop computer. The data collected and the algorithms developed as part of this project will be made available to the larger research community through a data repository hosted by Drexel University. The project will also make contributions to open-source software.",11,Drexel University
1020001000377383,BIGDATA: Small: DA: DCM: Labeling the World,"The project aims to leverage the massive corpus of online photos, text, and maps to create a semantic 3D labeled model of the world, e.g., detailed representations of the world's top cultural and historical sites. While breakthroughs in computer vision enable creating detailed 3D models from millions of online 2D images, the resulting models capture only geometry. Consequently, they lack semantics; they don't provide information about the contents of the scene. The vast treasure trove of online text such as Wikipedia meticulously catalogs the scenes that are captured in photos and models. Modern Natural Language Processing (NLP) techniques can now process such data, opening up the opportunity to extract knowledge from the online text corpus and use it to label 3D geometry. This project seeks to jointly analyze the massive corpus of online text, maps, and photos to create labeled 3D models of the world's sites. Achieving this goal will require fundamental research advances at the interface of natural language processing and computer vision that impact both the scientific research community and the world at large. <br/><br/>The project addresses two key technical challenges: (1) automatic scene labeling: mapping semantics onto geometry, and (2) solving the 3D jigsaw puzzle: mapping pieces of geometry into the world. Many clues to these mapping problems lie in the text and other online data sources such as floorplans. Other clues lie in the content of the photos. Decoding this mapping therefore involves an interplay between NLP and computer vision. The key research advances center around new ways to jointly leverage computer vision and NLP to solve challenging problems in both fields, specifically, 1) recognizing objects through joint NLP and 3D visual analysis, 2) placing objects in the world by correlating geometry with spatial text in maps and webpages, and 3) using semantics to improve geometry by augmenting visual cues with textual spatial relations.<br/><br/>Broader Impacts: The primary research outcomes are: (1) technology for creating labeled 3D models at a massive scale, and (2) labeled models for many top tourist sites. Both the algorithms and models will be made freely available for the research community. These algorithms and models will provide the foundation for a range of exciting applications of major practical impact on the world at large. The resulting tools could make it possible for resources such as Wikipedia to link the text directly to 3D models and vice-versa, with attendant benefits to online learning and education. The same technology could enable automated labeling of 2D photographs. In the context of real-time applications (e.g., augmented reality), the technology could provide visual overlays and instant feedback on what you are currently looking at, and enable augmented reality-style guided tours. Other applications include using labeled geometry for navigation (walking directions), and converting images to text for the visually impaired. The research is tightly integrated into education and training of students at the University of Washington. Additional information about the project can be found at: http://grail.cs.washington.edu/projects/label3d/",0,University of Washington
1020001000377385,EAGER: Collaborative Research: Technology to Support Mathematical Argumentation,"The big question being addressed in this project is, ""How can technology support the teaching and learning of mathematical argumentation?"" This EAGER proposal focuses on the first steps in moving towards the vision of helping youngsters learn mathematical/algebraic argumentation. The project has two complementary goals: to better define the technological tools needed to support elementary students learning about proof and argumentation and to understand better how elementary school students learn to make and defend mathematical claims when such tools are available. The PIs envision and are developing an animation tool to be implemented on tablet computers (so each child has his/her own electronic notebook) that provides infrastructure for sharing and refining arguments in small groups and across the class and for promoting concrete math discussions. A big challenge is providing the right tools for expression; for sharing to happen, learners need to first be able to express their understanding, and much research shows that a concrete ""written"" (sharable) expression of understanding provides better foundations for promoting concrete discussion than do expressions of understanding that are simply verbal. The particular advance in expression that is being made in this project is providing tools for dynamic representations; that is, they are able to animate what happens when mathematical operations are carried out. These concrete expressions of understanding can then be played back, paused, and so on. Students create animations by drawing, erasing, duplicating, moving, and grouping objects. They can edit each other's animations, and the teacher will also be able to create animations for students to view and edit. They record and save their oral explanations along with the animations and play them back together, thus making their verbal descriptions concrete, examinable, and sharable. <br/><br/>The ability to prepare and present a mathematical argument is a key component of the mathematical competence students need to achieve in upper elementary school (grades 3 through 5). Many students struggle with the subject matter and therefore struggle as well with making mathematical arguments. The PIs envision a computational tool set that young students will use to construct and share mathematical arguments, in the service of learning to be competent algebraic reasoners. In this EAGER project, they focus on first steps in developing that tool set and on investigating how young learners make and defend mathematical arguments when they have such tools available. This project represents work in its early stages on an untested but potentially transformative idea and is likely to catalyze rapid and innovative advances in helping young learners become mathematical reasoners.",9,Massachusetts Institute of Technology
1020001000377391,BIGDATA: Small: DA: Choosing a Needle in a Big Data Haystack,"This big data project develops tools and algorithms to support users in the task of choosing one (or a few) object(s) from a very large set, particularly when there is a great deal of complex data on which to base this choice.<br/><br/>Consider a traveler looking at hotel options on a travel site, a scientist trying to identify proteins to investigate further based upon the results of a high throughput experiment, or an intelligence analyst trying to identify suspected terrorists. In all of these cases we have a big data challenge in that there are likely to be hundreds, perhaps thousands or even millions, of options to choose from. While there are some criteria that can be expressed as simple functions of attribute values, e.g. price for a hotel room, these criteria capture only a part of the objective function. Other considerations, such as stylishness of a hotel, can be much harder to determine as a function of known attributes. The user may be compelled to examine candidate options individually. The computer's task is to help minimize the number of candidates examined, and to optimize the order of examination. This project examines how best to accomplish this task.<br/><br/>Techniques explored include supporting human specification of information need against a variety of big data sources and machine presentation of relevant results given the volume of big data. The broader impact of this project is in effectively harnessing the power of big data in a variety of applications, including business, science, and national defense.",5,University of Michigan Ann Arbor
1020001000377393,BIGDATA: Small: DCM: Data Management for Analytics Applications on Modern Architecture,"We are now in the midst of the big data revolution where enterprise services are increasingly being driven by operational and business models that are powered by data analysis. A key part in making big data successful is ensuring that basic data processing primitives can execute efficiently on large and ever increasing volumes of data. However, data processing kernels today largely employ techniques that were designed about three decades ago, and are now out of touch with modern hardware that has made a fundamental technological shift. First, driven by power consumption characteristics, modern processors now have multiple processing units (called cores) fabricated in a single chip. In contrast, processors just a few years ago were single core. Second, traditionally the storage media for data has been the magnetic hard disk. Now, data has started to move nearly permanently to higher levels of the memory hierarchy, and more specifically to main memory. The goal of this project is to rethink key aspects of data processing techniques for the modern many-core and main memory hardware environment. The research approach is to design, implement and evaluate various methods for data kernels that can be used to store and process data efficiently. In other words, the key focus is on producing data kernels that ""run at the speed of modern hardware."" Thus, this project aims to have a broad impact on the big data ecosystem by developing faster, cheaper and more energy-efficient data kernels.",5,University of Wisconsin-Madison
1020001000377397,BIGDATA: Small: DA: A Random Projection Approach,"With the advent of the Internet, numerous applications in the context of network traffic, search, and databases are faced with very large, inherently high-dimensional, or naturally streaming datasets. To effectively tackle these extremely large-scale practical problems (e.g., building statistical models from massive data, real-time network traffic monitoring and anomaly detection), methods based on statistics and probability have become increasingly popular. This proposal aims at developing theoretical, well-grounded statistical methods for massive data based on random projections, including data stream algorithms, quantized projection algorithms, and sparse projection algorithms.<br/><br/>Massive data are often generated as high-rate streams. Network traffic is a typical example. Effective measurements (and updates) of network traffic in real-time using small storage space are crucial for detecting anomaly events, for example DDoS (Distributed Denial of Service) attacks. For many applications such as databases and machine learning, appropriate quantization of random projections will substantially improve the accuracies (in terms of variance per bit) and provide efficient indexing and dimension reductions to facilitate efficient search and learning. The proposed research will tackle a series of mathematically challenging problems in the development of random projections. A wide range of statistical learning and numerical linear algebra algorithms will be re-engineered to take advantage of the state-of-the-art projection methods.<br/><br/>These days, many industries such as search are in urgent demand for statistical algorithms which can effectively handle massive data. It is expected that algorithms to be developed in this proposal will be integrated with parallel platforms, to solve truly large-scale