Wittmeyer's pseudoinverse iterative algorithm is formulated
Multiple-Phased Systems, whose operational life can be partitioned into a set of disjoint periods called "phases", include several classes of systems such as Phased Mission Systems and Scheduled Maintenance Systems.
Because of their deployment in critical applications, the dependability modeling and analysis of Multiple-Phased Systems is a task of primary relevance.
However, the phased behavior makes the analysis of Multiple-Phased Systems extremely complex.
This paper is centered on the description and application of DEEM, a dependability modeling and evaluation tool for Multiple-Phased Systems.
DEEM supports a powerful and efficient methodology for the analytical dependability modeling and evaluation of Multiple-Phased Systems, based on Deterministic and Stochastic Petri Nets and on Markov Regenerative Processes.
As a first step toward realizing a dynamical system that evolves while spontaneously determining its own rule for time evolution, function dynamics (FD) is analyzed.
FD consists of a functional equation with a self-referential term, given as a dynamical system of a one-dimensional map.
Through the time evolution of this system, a dynamical graph (a network) emerges.
This graph has three interesting properties: (i) vertices appear as stable elements, (ii) the terminals of directed edges change in time, and (iii) some vertices determine the dynamics of edges, and edges determine the stability of the vertices, complementarily.
Two aspects of FD are studied, the generation of a graph (network) structure and the dynamics of this graph (network) in the system.
A simulation model is successful if it leads to policy action, i.e., if it is implemented.
Studies show that for a model to be implemented, it must have good correspondence with the mental model of the system held by the user of the model.
The user must feel confident that the simulation model corresponds to this mental model.
An understanding of how the model works is required.
Simulation models for implementation must be developed step by step, starting with a simple model, the simulation prototype.
After this has been explained to the user, a more detailed model can be developed on the basis of feedback from the user.
Software for simulation prototyping is discussed, e.g., with regard to the ease with which models and output can be explained and the speed with which small models can be written.
Hedging of fixed income securities remains one of the most challenging problems faced by financial institutions.
The predominantly used measures of duration and convexity do not completely capture the interest rate risks borne by the holder of these securities.
Using historical data for the entire yield curve, we perform a principal components analysis and find that the first four factors capture over 99.99% of the yield curve variation.
Incorporating these factors into the pricing of arbitrary fixed income securities via Monte Carlo simulation, we derive perturbation analysis (PA) estimators for the price sensitivities with respect to the factors.
Computational results for mortgage-backed securities (MBS) indicate that using these sensitivity measures in hedging provides far more protection against interest rate risk exposure than the conventional measures of duration and convexity.
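The principal-components step above can be sketched in a few lines of numpy; the synthetic yield data, maturities and variable names below are assumptions for illustration (random-walk data will not reproduce the 99.99% figure), and the perturbation analysis estimators themselves are not shown.

```python
import numpy as np

# Hypothetical input: daily yields for 10 maturities over 1000 days.
# (Synthetic random-walk data; it will not reproduce the paper's 99.99%.)
rng = np.random.default_rng(0)
yields = np.cumsum(rng.normal(size=(1000, 10)) * 0.01, axis=0)

# Principal components analysis of daily yield-curve changes.
changes = np.diff(yields, axis=0)
changes -= changes.mean(axis=0)
cov = np.cov(changes, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)            # ascending order
eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]

explained = eigvals / eigvals.sum()
print("variance captured by first four factors:", explained[:4].sum())

# The leading factor loadings (columns) are what a Monte Carlo pricer would
# perturb to estimate price sensitivities with respect to each factor.
loadings = eigvecs[:, :4]
```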
As ubiquitous computing emerges in our lives and cities, new opportunities for artistic and otherwise cultural interventions in urban space follow, but so far little work has been done to articulate the socio-cultural significance of these new opportunities.
This paper is part of a general attempt to develop a coherent understanding of the implications and potentials of ubiquitous computing in the context of everyday city life.
On a more specific level the paper examines how the notion of social friction can be helpful in the development and analysis of ubiquitous computing in relation to art and design.
Social friction is articulated as a critical position, which could be applied as a strategy for design.
Our approach consists of a theoretical analysis and precedes concrete development and real-life experiments.
As such the paper aims to establish a stepping stone from which to launch actual digital designs.
We argue that by designing for social friction, which is an intrinsic characteristic of everyday life, new forms of social and cultural potential can be released.
By means of discussing CityNova, a vision for a possible use of ubiquitous computing in urban space, we explore how this approach might lead to systems that create new ways of experiencing the city.
This paper addresses the optimization of FIR filters for low power.
We propose a search algorithm to find the combination of the number of taps and coefficient bit-width that leads to the minimum number of total partial sums, and hence to the least power consumption.
We show that the minimum number of taps does not necessarily lead to the least power consumption in fully parallel FIR filter architectures.
This is particularly true if the reduction of the bit-width of the coefficients is taken into account.
We show that power is directly related to the total number of partial sums in the FIR filter, which in turn is determined by the number of bits set to 1 in the coefficients.
We have developed a search algorithm that achieves up to 36% less power consumption when compared to an implementation using the minimum number of taps.
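A minimal sketch of the cost metric and search, assuming quantized coefficients whose 1-bits each contribute one partial sum; the filter specification, the `firwin` design and the search ranges are illustrative only, and a real search would also verify that each quantized filter still meets its frequency-response specification.

```python
import numpy as np
from scipy.signal import firwin

def partial_sum_cost(coeffs, bits):
    """Total number of 1-bits across the quantized coefficients.

    In a fully parallel, multiplier-less FIR implementation each 1-bit
    contributes one partial sum (one addition), so this count is used
    here as a proxy for power.
    """
    scale = 2 ** (bits - 1) - 1
    q = np.round(np.abs(coeffs) / np.max(np.abs(coeffs)) * scale).astype(int)
    return sum(bin(c).count("1") for c in q)

# Hypothetical search space: low-pass filters with varying tap count
# and coefficient bit-width.
best = None
for taps in range(11, 41, 2):
    coeffs = firwin(taps, cutoff=0.3)
    for bits in range(8, 17):
        cost = partial_sum_cost(coeffs, bits)
        if best is None or cost < best[0]:
            best = (cost, taps, bits)

print("minimum partial sums: %d (%d taps, %d-bit coefficients)" % best)
```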
Educational research has highlighted the importance of maintaining an orderly classroom environment and providing both clear and well-organized instruction tailored to the needs of individual students.
Time spent on direct instruction and particularly the direct instruction of basic skills is associated with school learning (Wang, Haertel & Walberg, 1993).
With the increased interest in constructivistic conceptions of learning and teaching today, educators with constructivistic orientations contend that various forms of knowledge and skills are applied more generally when constructed by the learners themselves as opposed to explicitly taught: "knowledge is made, not acquired" (Phillips, 2000, p. 7).
Such a view nevertheless often leads to an inclination to reject direct instruction by the teacher (see, for example, Brooks & Brooks, 1993).
It should be noted, however, that many of the discussions of constructivistic orientations to learning and instruction are at the level of slogan and cliché (Duffy & Cunningham, 1996; Finn & Ravitch, 1996; Kozloff, 1998).
In addition, the term constructivism has come to serve as an umbrella term for a diversity of views (Phillips, 1995; 2000).
We believe that a broad class of future applications will span both the Internet and the telephone network because such multiplanar applications have several economic and architectural advantages over conventional ones.
We also envision the close interlinking of the telephone network and the Internet to form a multimodal network.
In this paper, we describe these applications and networks, outline their architecture, and present our experiences in constructing a prototype multiplanar application.
We devise a simple model to study the phenomenon of free-riding and the effect of free identities on user behavior in peer-to-peer systems.
At the heart of our model is a strategic user of a certain type, an intrinsic and private parameter that reflects the user's generosity.
The user decides whether to contribute or free-ride based on how the current burden of contributing in the system compares to her type.
We derive the emerging cooperation level in equilibrium and quantify the effect of providing free-riders with degraded service on the emerging cooperation.
We find that this penalty mechanism is beneficial mostly when the "generosity level" of the society (i.e., the average type) is low.
To quantify the social cost of free identities, we extend the model to account for dynamic scenarios with turnover (users joining and leaving) and with whitewashers: users who strategically leave the system and re-join with a new identity.
We find that the imposition of penalty on all legitimate newcomers incurs a significant social loss only under high turnover rates in conjunction with intermediate societal generosity levels.
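A toy numerical instantiation of the threshold rule (contribute when the current burden, net of any penalty on free-riders, does not exceed the user's type); the uniform type distributions, the cost model and the damped fixed-point iteration are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def equilibrium_contribution(types, cost=0.5, penalty=0.0, iters=500):
    """Damped fixed-point iteration for the fraction of contributors.

    A user contributes when the current burden of contributing, modelled
    here as cost / (fraction of contributors), minus any penalty imposed
    on free-riders, does not exceed her type.
    """
    x = 1.0                                    # start from full contribution
    for _ in range(iters):
        burden = cost / max(x, 1e-9)
        x_new = float(np.mean(types >= burden - penalty))
        if abs(x_new - x) < 1e-6:
            break
        x = 0.5 * x + 0.5 * x_new              # damping avoids oscillation
    return x

rng = np.random.default_rng(1)
low_generosity = rng.uniform(0.0, 2.0, 100_000)   # low average type
high_generosity = rng.uniform(0.0, 6.0, 100_000)  # high average type

for name, types in [("low", low_generosity), ("high", high_generosity)]:
    x0 = equilibrium_contribution(types)
    x1 = equilibrium_contribution(types, penalty=0.25)
    print(f"{name} generosity: {x0:.2f} without penalty, {x1:.2f} with penalty")
```

With these placeholder numbers the penalty raises the cooperation level far more in the low-generosity population than in the high-generosity one, which mirrors the qualitative effect described above.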
We present an algorithm for complete path planning for translating polyhedral robots in 3D.
Instead of exactly computing an explicit representation of the free space, we compute a roadmap that captures its connectivity.
This representation encodes the complete connectivity of free space and allows us to perform exact path planning.
We construct the roadmap by computing deterministic samples in free space that lie on an adaptive volumetric grid.
Our algorithm is simple to implement and uses two tests: a complex cell test and a star-shaped test.
These tests can be efficiently performed on polyhedral objects using max-norm distance computation and linear programming.
The complexity of our algorithm varies as a function of the size of narrow passages in the configuration space.
We demonstrate the performance of our algorithm on environments with very small narrow passages or no collision-free paths.
The model used in this report focuses on the analysis of ship waiting statistics and stock fluctuations under different arrival processes.
However, the basic outline is the same: central to both models are a jetty and accompanying tankfarm facilities belonging to a new chemical plant in the Port of Rotterdam.
Both the supply of raw materials and the export of finished products occur through ships loading and unloading at the jetty.
Since disruptions in the plant's production process are very expensive, buffer stock is needed to allow for variations in ship arrivals and overseas exports through large ships.
Ports provide jetty facilities for ships to load and unload their cargo.
Since ship delays are costly, terminal operators attempt to minimize their number and duration.
Here, simulation has proved to be a very suitable tool.
However, in port simulation models, the impact of the arrival process of ships on the model outcomes tends to be underestimated.
This article considers three arrival processes: stock-controlled, equidistant per ship type, and Poisson.
We assess how their deployment in a port simulation model, based on data from a real case study, affects the efficiency of the loading and unloading process.
Poisson, which is the chosen arrival process in many client-oriented simulations, actually performs worst in terms of both ship delays and required storage capacity.
Stock-controlled arrivals perform best with regard to ship delays and required storage capacity.
In the case study two types of arrival processes were considered.
The first type is the so-called stock-controlled arrivals, i.e., ship arrivals are scheduled in such a way that a base stock level is maintained in the tanks.
Given a base stock level of a raw material or ...
The effective use of humanoid robots in space will depend upon the efficacy of interaction between humans and robots.
The key to achieving this interaction is to provide the robot with sufficient skills for natural communication with humans so that humans can interact with the robot almost as though it were another human.
This requires that a number of basic capabilities be incorporated into the robot, including voice recognition, natural language, and cognitive tools on-board the robot to facilitate interaction between humans and robots through use of common representations and shared humanlike behaviors.
Couper (2002) outlines the "challenges and opportunities" of recent and still-emerging technological developments on the conduct of survey research.
This paper focuses on one such development -- the use of computer-assisted survey instruments in place of paper-and-pencil questionnaires -- and it focuses on one particular opportunity which this development presents: the ability to improve efficiency, "flow," and naturalness, and in general make the interview experience a more pleasant one for all participants, while still controlling question wording and sequencing.
Moral arguments can be raised in defense of such efforts; the potential for important practical benefits, including improved survey cooperation, lends more mundane but perhaps more potent support.
Although the research literature is surprisingly scant, there is some evidence that improved instrument design can reduce nonresponse.
A recent effort by the U.S. Census Bureau to redesign the core instrument for the Survey of Income and Program Participation (SIPP) offers additional support.
Motivated in large measure by evidence of increasing unit nonresponse and attrition, the primary goal of the SIPP redesign effort was to improve the interview process, and in particular to seek ways to avoid violations of conversational norms (e.g., Grice, 1975).
A great many of the SIPP interview process improvements would not have been feasible without the computerization of the survey instrument.
This paper briefly summarizes many of the technology-based changes implemented in the SIPP instrument, and briefly describes a set of field experiments used to develop and refine the new procedures and to evaluate their success in achieving SIPP's redesign goals.
Keywords: burden, conversational norms, efficiency, flow, nonresponse/...
RAID-II is a high-bandwidth, network-attached storage server designed and implemented at the University of California at Berkeley.
In this paper, we measure the performance of RAID-II and evaluate various architectural decisions made during the design process.
We first measure the end-to-end performance of the system to be approximately 20 MB/s for both disk array reads and writes.
We then perform a bottleneck analysis by examining the performance of each individual subsystem and conclude that the disk subsystem limits performance.
By adding a custom interconnect board with a high-speed memory and bus system and parity engine, we are able to achieve a performance speedup of 8 to 15 over a comparative system using only off-the-shelf hardware.
Introduction. The force of induction $\mathbf{F}$ on a charge $q$ is given by
$$\mathbf{F} = -\frac{q}{c}\frac{\partial \mathbf{A}}{\partial t}, \qquad (1)$$
where $\mathbf{A}$ is the usual magnetic vector potential defined by
$$\mathbf{A}(\mathbf{r}) = \frac{1}{c}\int \frac{\mathbf{J}(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\, d^{3}r', \qquad (2)$$
where $\mathbf{J}$ is the current density.
Slowly varying effects are assumed here, where the basic theory may be given as a true relativity theory, involving the separation distance between two charges and its time derivatives.
This force of induction, Eq. (1), yields Faraday's law of electromagnetic induction for the special case of an electromotive force (emf) around a fixed closed loop.
In particular,
$$\mathrm{emf} = \oint \frac{\mathbf{F}}{q}\cdot d\mathbf{s} = -\frac{1}{c}\oint \frac{\partial \mathbf{A}}{\partial t}\cdot d\mathbf{s} = -\frac{1}{c}\frac{d}{dt}\int \mathbf{B}\cdot \hat{\mathbf{n}}\, da = -\frac{1}{c}\frac{d\Phi}{dt}, \qquad (3)$$
where $\Phi$ is the magnetic flux through the loop.
It is observed in the laboratory that an emf is also induced when $\partial \mathbf{A}/\partial t = 0$ and the magnetic flux through the loop is changed by moving the loop, so Faraday's law becomes
$$\mathrm{emf} = -\frac{1}{c}\frac{d\Phi}{dt}. \qquad (4)$$
Francisco Müller's (1987) experiments show that induction occurs locally and that the force
This paper describes the design, implementation and evaluation of a user-verification system for a smart gun, which is based on grip-pattern recognition.
An existing pressure sensor consisting of an array of 44 × 44 piezoresistive elements is used to measure the grip pattern.
An interface has been developed to acquire pressure images from the sensor.
The values of the pixels in the pressure-pattern images are used as inputs for a verification algorithm, which is currently implemented in software on a PC.
The verification algorithm is based on a likelihood-ratio classifier for Gaussian probability densities.
First results indicate that it is feasible to use grip-pattern recognition for biometric verification.
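A minimal sketch of such a likelihood-ratio test, assuming the grip-pattern images have already been reduced to low-dimensional feature vectors and that a Gaussian user model is compared against a Gaussian background model; the data, dimensions and threshold are synthetic placeholders.

```python
import numpy as np
from scipy.stats import multivariate_normal

def train_gaussian(samples, reg=1e-3):
    """Mean and (regularised) covariance of a set of feature vectors."""
    mean = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False) + reg * np.eye(samples.shape[1])
    return mean, cov

def log_likelihood_ratio(x, user_model, background_model):
    """Verification score; accept if it exceeds a threshold tuned on held-out data."""
    (mu_u, cov_u), (mu_b, cov_b) = user_model, background_model
    return (multivariate_normal.logpdf(x, mu_u, cov_u)
            - multivariate_normal.logpdf(x, mu_b, cov_b))

# Placeholder data: grip patterns reduced to 5 features per image
# (in practice the 44 x 44 pressure image would first be projected, e.g. by PCA).
rng = np.random.default_rng(0)
user_train = rng.normal(0.0, 1.0, size=(50, 5))
background_train = rng.normal(0.5, 2.0, size=(500, 5))

user_model = train_gaussian(user_train)
background_model = train_gaussian(background_train)

probe = rng.normal(0.0, 1.0, size=5)
score = log_likelihood_ratio(probe, user_model, background_model)
print("accept" if score > 0.0 else "reject", round(float(score), 2))
```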
The frequency shifts predicted by the 'relativistic' Doppler effect are derived in the photon picture of light.
It turns out that, in general, the results do not depend exclusively on the relative velocity between observer and light source.
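For reference, the standard special-relativistic result that such derivations are compared against can be stated as follows (the point above being that the photon-picture result does not in general depend only on the relative velocity):

```latex
% Source moving with speed v = beta*c; theta is measured in the observer's
% frame between the source velocity and the photon's direction of
% propagation from source to observer.
\nu_{\mathrm{obs}} = \frac{\nu_{\mathrm{src}}}{\gamma\,(1 - \beta\cos\theta)},
\qquad
\gamma = \frac{1}{\sqrt{1-\beta^{2}}},
\qquad
\nu_{\mathrm{obs}} = \nu_{\mathrm{src}}\sqrt{\frac{1-\beta}{1+\beta}}
\ \ \text{for a directly receding source } (\theta = \pi).
```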
Protein-protein interactions are of great interest to biologists.
A variety of high-throughput techniques have been devised, each of which leads to a separate definition of an interaction network.
The concept of differential association rule mining is introduced to study the annotations of proteins in the context of one or more interaction networks.
Differences among items across edges of a network are explicitly targeted.
As a second step we identify differences between networks that are separately defined on the same set of nodes.
The technique of differential association rule mining is applied to the comparison of protein annotations within an interaction network and between different interaction networks.
In both cases we were able to find rules that explain known properties of protein interaction networks as well as rules that show promise for advanced study.
In order to manage the use of roles for the purpose of access control, it is important to look at attributes beyond the consideration of capability assignment.
Fundamentally, a generic attribute description using a constraint-based approach will allow many of the important aspects of role, such as scope, activation and deactivation, to be included.
Furthermore, the commonly accepted concept of role hierarchy is challenged from the point of view of subsidiarity in real organisations, with the suggestion that role hierarchy has limited usefulness and does not seem to apply widely.
In this paper, we consider the problem of assigning sensors to track targets so as to minimize the expected error in the resulting estimation for target locations.
Specifically, we are interested in how disjoint pairs of bearing or range sensors can be best assigned to targets in order to minimize the expected error in the estimates.
We refer to this as the focus of attention (FOA) problem.
In its
In the field of Computer-Aided anything, acronyms abound.
They are, after all, useful tools.
However, there is a risk that we become constrained by them and, as a result, fail to see beyond them.
Our approach to extracting information from the web analyzes the structural content of web pages through exploiting the latent information given by HTML tags.
For each specific extraction task, an object model is created consisting of the salient fields to be extracted and the corresponding extraction rules based on a library of HTML parsing functions.
We derive extraction rules for both single-slot and multiple-slot extraction tasks which we illustrate through two sample domains.
In kernel methods, an interesting recent development seeks to learn a good kernel from empirical data automatically.
In this paper, by regarding the transductive learning of the kernel matrix as a missing data problem, we propose a Bayesian hierarchical model for the problem and devise the Tanner-Wong data augmentation algorithm for making inference on the model.
The Tanner-Wong algorithm is closely related to Gibbs sampling, and it also bears a strong resemblance to the expectation-maximization (EM) algorithm.
For an efficient implementation, we propose a simplified Bayesian hierarchical model and the corresponding Tanner-Wong algorithm.
We express the relationship between the kernel on the input space and the kernel on the output space as a symmetric-definite generalized eigenproblem.
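Such a symmetric-definite generalized eigenproblem can be solved directly with standard routines; the kernels below are synthetic stand-ins, and the small ridge terms are added only to keep the matrices positive definite.

```python
import numpy as np
from scipy.linalg import eigh

# Synthetic stand-ins: K_in on the input space, K_out on the output space,
# both symmetric and (after a small ridge) positive definite.
rng = np.random.default_rng(0)
X = rng.normal(size=(30, 5))
Y = rng.normal(size=(30, 3))

sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K_in = np.exp(-0.5 * sq_dists) + 1e-6 * np.eye(len(X))    # RBF kernel
K_out = Y @ Y.T + 1e-6 * np.eye(len(Y))                   # linear kernel

# Symmetric-definite generalized eigenproblem: K_out v = lambda K_in v.
# scipy.linalg.eigh(A, B) handles A v = lambda B v with B positive definite.
eigvals, eigvecs = eigh(K_out, K_in)
print("leading generalized eigenvalues:", eigvals[-3:])   # ascending order
```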
This paper presents the current state in an ongoing development of the Genetic Improvisation Model (GIM): a framework for the design of real-time improvisational systems.
The aesthetic rationale for the model is presented, followed by a discussion of its general principles.
A discussion of the Emonic Environment, a networked system for audiovisual creation built on GIM's principles, follows.
In this paper we analyse the mean-variance hedging approach in an incomplete market under the assumption of additional market information, which is represented by a given, finite set of observed prices of non-attainable contingent claims.
Due to no-arbitrage arguments, our set of investment opportunities increases and the set of possible equivalent martingale measures shrinks.
Therefore, we obtain a modified mean-variance hedging problem, which takes into account the observed additional market information.
Solving this by means of the techniques developed by Gourieroux, Laurent and Pham (1998), we obtain an explicit description of the optimal hedging strategy and an admissible, constrained variance-optimal signed martingale measure that generates both the approximation price and the observed option prices.
SIS PRUEBA is a software tool to integrate usability and user-centred design principles in the development process of services within Telefónica Móviles España (TME), the largest mobile telecommunications operator in Spain.
Predicting the native conformation using computational protein models requires a large number of energy evaluations even with simplified models such as hydrophobic-hydrophilic (HP) models.
Clearly, energy evaluations constitute a significant portion of computational time.
We hypothesize that given the structured nature of algorithms that search for candidate conformations such as stochastic methods, energy evaluation computations can be cached and reused, thus saving computational time and effort.
In this paper, we present a caching approach and apply it to 2D triangular HP lattice model.
We provide theoretical analysis and prediction of the expected savings from caching as applied to this model.
We conduct experiments using a sophisticated evolutionary algorithm that contains elements of local search, memetic algorithms, diversity replacement, etc.
in order to verify our hypothesis and demonstrate a significant level of savings in computational effort and time that caching can provide.
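The caching idea can be illustrated with a memoised energy function keyed on the conformation encoding, so that candidate structures generated repeatedly by the search are evaluated only once; this sketch uses a square lattice and a made-up HP sequence rather than the paper's 2D triangular lattice model.

```python
from functools import lru_cache

# The paper uses a 2D *triangular* HP lattice; this sketch uses a square
# lattice and a made-up sequence, but the caching mechanism is the same.
HP_SEQ = "HPHPPHHPHPPHPHHPPHPH"
MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

@lru_cache(maxsize=None)
def energy(conformation):
    """Negative count of non-bonded H-H contacts, memoised by conformation.

    `conformation` is a tuple of absolute moves, so identical candidate
    structures produced repeatedly by the search are evaluated only once.
    """
    pos, coords = (0, 0), [(0, 0)]
    for m in conformation:
        dx, dy = MOVES[m]
        pos = (pos[0] + dx, pos[1] + dy)
        if pos in coords:                       # self-intersection: invalid
            return float("inf")
        coords.append(pos)
    occupied = {p: HP_SEQ[i] for i, p in enumerate(coords)}
    contacts = 0
    for i, p in enumerate(coords):
        if HP_SEQ[i] != "H":
            continue
        for dx, dy in MOVES.values():
            q = (p[0] + dx, p[1] + dy)
            if occupied.get(q) == "H" and abs(coords.index(q) - i) > 1:
                contacts += 1
    return -(contacts // 2)                     # each contact counted twice

conf = tuple("RRRRDLLLLDRRRRDLLLL")             # a serpentine fold, 19 moves
print(energy(conf), energy(conf))               # second call is a cache hit
print(energy.cache_info())
```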
This paper considers the modes of interaction between one or several human operators and an active sensor network -- a fully decentralized network of sensors some or all of which have actuators and are in that sense active.
The primary goal of this study is to investigate the conditions under which the human involvement will not jeopardize scalability of the overall system.
Two aspects of human-robot interaction are considered: the ways in which the global view of the system may be conveyed to the operators, and how the operators may influence the behavior of the system during the course of its operation.
The results of analysis favor peer-to-peer information-based interactions between the operators and the network whereby the humans act as extended sensors and communication nodes of the network itself.
Experiments on an indoor active sensor network are described.
Recently, an approach has been presented to minimize Disjoint Sum-of-Products (DSOPs) based on Binary Decision Diagrams (BDDs).
Due to the symbolic representation of cubes for large problem instances, the method is orders of magnitude faster than previous enumerative techniques.
NASA has embarked on a long-term program to develop human-robot systems for sustained, affordable space exploration.
Packet classification is an enabling function for a variety of Internet applications including Quality of Service, security, monitoring, and multimedia communications.
In order to classify a packet as belonging to a particular flow or set of flows, network nodes must perform a search over a set of filters using multiple fields of the packet as the search key.
In general, there have been two major threads of research addressing packet classification: algorithmic and architectural.
A few pioneering groups of researchers posed the problem, provided complexity bounds, and offered a collection of algorithmic solutions.
Subsequently, the design space has been vigorously explored by many offering new algorithms and improvements upon existing algorithms.
Given the inability of early algorithms to meet performance constraints imposed by high speed links, researchers in industry and academia devised architectural solutions to the problem.
This thread of research produced the most widely-used packet classification device technology, Ternary Content Addressable Memory (TCAM).
New architectural research combines intelligent algorithms and novel architectures to eliminate many of the unfavorable characteristics of current TCAMs.
We observe that the community appears to be converging on a combined algorithmic and architectural approach to the problem.
Using a taxonomy based on the high-level approach to the problem and a minimal set of running examples, we provide a survey of the seminal and recent solutions to the problem.
It is our hope to foster a deeper understanding of the various packet classification techniques while providing a useful framework for discerning relationships and distinctions.
this report, Paul Lindgreen as secretary and as editor of the interim report [Lin90a]
In this paper we compare the average performance of one class of low-discrepancy quasi-Monte Carlo sequences for global optimization.
Wiener measure is assumed as the probability prior on all optimized functions.
We show how to construct van der Corput sequences and we prove their consistency.
Numerical experimentation shows that the van der Corput sequence in base 2 has a better average performance.
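A small sketch of the van der Corput construction (the radical inverse of the index in a given base); the resulting points would then serve as evaluation sites for the objective function in the global-optimization experiments.

```python
def van_der_corput(n, base=2):
    """First n terms of the van der Corput sequence in the given base.

    The k-th term reflects the base-b digits of k about the radix point,
    producing a low-discrepancy sequence in [0, 1).
    """
    seq = []
    for k in range(1, n + 1):
        x, denom, q = 0.0, 1.0, k
        while q > 0:
            q, digit = divmod(q, base)
            denom *= base
            x += digit / denom
        seq.append(x)
    return seq

print(van_der_corput(8))           # base 2: 0.5, 0.25, 0.75, 0.125, ...
print(van_der_corput(8, base=3))   # base 3 for comparison
```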
This paper addresses the opportunity to put into place a virtual consortium for modeling and simulation.
While periodic conferences such as the Winter Simulation Conference are tremendously vital to the continued growth of modeling and simulation research, they do not offer the day-to-day technical exchange that can now be made possible with matured collaborative technologies.
ue, the Netherlands, Norway, the Philippines, the Rockefeller Foundation, the Rural Industries Research and Development Corporation (Australia), South Africa, the Southern African Development Bank, Spain, Sweden, Switzerland, the United Kingdom, the United Nations Children's Fund, the United States, and the World Bank.
CLASSIFICATION AND REGRESSION TREES, CART™: A USER MANUAL FOR IDENTIFYING INDICATORS OF VULNERABILITY TO FAMINE AND CHRONIC FOOD INSECURITY. YISEHAC YOHANNES, PATRICK WEBB. MICROCOMPUTERS IN POLICY RESEARCH, INTERNATIONAL FOOD POLICY RESEARCH INSTITUTE. CART is a registered trademark of California Statistical Software, Inc.
Copyright 1999 by the International Food Policy Research Institute 2033 K Street, N.W.
Washington, D.C. 20006-1002 U.S.A. Library of Congress Cataloging-in-Publication Data available. Yohannes, Yisehac. Classification and Regression Trees, CART™: A User Manual for Identifying Indicators of Vulnerability to Famine and Chronic Food Insecurity / Yise
An approach for segmentation of handwritten touching numeral strings is presented in this paper.
A neural network has been designed to deal with various types of touching observed frequently in numeral strings.
A numeral string image is split into a number of line segments while stroke extraction is being performed and the segments are represented with straight lines.
Four types of primitive are defined based on the lines and used for representing the numeral string in a more abstract way and extracting clues on touching information from the string.
Potential segmentation points are located using the neural network by active interpretation of the features collected from the primitives.
Also, the run-length coding scheme is employed for efficient representation and manipulation of images.
On a test set collected from real mail pieces, a segmentation accuracy of 89.1% was achieved at the image level in a preliminary experiment.
The Java Modeling Language (JML) can be used to specify the detailed design of Java classes and interfaces by adding annotations to Java source files.
The aim of JML is to provide a specification language that is easy to use for Java programmers and that is supported by a wide range of tools for specification type-checking, runtime debugging, static analysis, and verification.
This paper
Ensuring performance isolation and differentiation among workloads that share a storage infrastructure is a basic requirement in consolidated data centers.
Existing management tools rely on resource provisioning to meet performance goals; they require detailed knowledge of the system characteristics and the workloads.
Provisioning is inherently slow to react to system and workload dynamics, and in the general case, it is impossible to provision for the worst case.
This paper describes an on-line handwritten Japanese text recognition method that is liberated from constraints on writing direction (line direction) and character orientation.
This method estimates the line direction and character orientation using the time sequence information of pen-tip coordinates and employs writing-box-free recognition combined with context processing.
The method can cope with a mixture of vertical, horizontal and skewed lines with arbitrary character orientations.
It is expected to be useful for tablet PCs, interactive electronic whiteboards and so on.
In order to analyze market trends and make reasonable business plans, a company's local data is not sufficient.
Decision making must also be based on information from suppliers, partners and competitors.
This external data can be obtained from the Web in many cases, but must be integrated with the company's own data, for example, in a data warehouse.
To this end, Web data has to be mapped to the star schema of the warehouse.
In this paper we propose a semi-automatic approach to support this transformation process.
Our approach is based on the use of a rooted labeled tree representation of Web data and the existing warehouse schema.
Based on this common view we can compare source and target schemata to identify correspondences.
We show how the correspondences guide the transformation to be accomplished automatically.
We also explain the meaning of recursion and restructuring in mapping rules, which are the core of the transformation algorithm.
In this paper we introduce a new embedding technique to linearly project labeled data samples into a new space where the performance of a Nearest Neighbor classifier is improved.
The approach is based on considering a large set of simple discriminant projections and finding the subset with higher classification performance.
In order to implement the feature selection process we propose the use of the AdaBoost algorithm.
The performance of this technique is tested in a multiclass classification problem related to the production of cork stoppers for wine bottles.
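A rough sketch of the selection idea: each candidate 1D projection paired with a threshold acts as a decision stump, and an AdaBoost-style loop picks projections round by round while reweighting the samples. The random projections, the binary-class restriction and the exhaustive stump search are simplifications of the paper's multiclass setting, not its exact procedure.

```python
import numpy as np

def best_stump(z, y, w):
    """Best threshold/polarity stump for one projected feature under weights w."""
    best = (np.inf, 0.0, 1)
    for thr in np.unique(z):
        for pol in (1, -1):
            pred = np.where(pol * (z - thr) >= 0, 1, -1)
            err = np.sum(w[pred != y])
            if err < best[0]:
                best = (err, thr, pol)
    return best

def adaboost_select(X, y, projections, rounds=10):
    """Select a subset of 1D projections with an AdaBoost-style loop (y in {-1, +1})."""
    Z = X @ projections.T                       # samples x candidate projections
    w = np.full(len(y), 1.0 / len(y))
    selected = []
    for _ in range(rounds):
        stumps = [best_stump(Z[:, j], y, w) for j in range(Z.shape[1])]
        j = int(np.argmin([s[0] for s in stumps]))
        err, thr, pol = stumps[j]
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1.0 - err) / err)
        pred = np.where(pol * (Z[:, j] - thr) >= 0, 1, -1)
        w *= np.exp(-alpha * y * pred)          # emphasise misclassified samples
        w /= w.sum()
        selected.append(j)
    return sorted(set(selected))

# Hypothetical two-class data and a pool of random candidate projections.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (100, 8)), rng.normal(1, 1, (100, 8))])
y = np.array([-1] * 100 + [1] * 100)
projections = rng.normal(size=(50, 8))
print("selected projection indices:", adaboost_select(X, y, projections))
```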
inchoative, "up"
p#ed 16 48 "before, in front of"
roz 80 295 inch., "disperse/break into pieces"
nad 5 33 "over"
pod 26 74 "under"
od 41 253 distantiational movement
sum 195 762
TOTAL 957
(6) the secret must be found in the different status of stem-initial CC-clusters.
(7) stem-initial CCs observed with a. prefixal-V only (+e), b. prefixal - only (-e), c. both (mix).
+e only: 17 CCs: ct, dn, d#, jm, lstn, mk, pn, ps, rv, #v, sch, sr, v, tn, v#, z#, #r
-e only: 38 CCs: bl, b#, cl, cv, #l, f#, fr, hl, hm, hv, chl, chrchl, km, kr, k#, kv, m#, mr, pl, pt, sh, sv, k, n, p, r, tl, tr, tv, vd, vr, zbr, zp, zt, #h, #m, ##, #v
mix: 35 CCs: br, #t, dm, dr, dv, hn, hr, h#, chv, jd, kd, kl, ml, mn, pj, pr, p#, sk, sl, sm, sn, sp, st, l, t, t#, v#, vl, v#, v, vz, zd, zl, zn, zv
TOTAL nb CC: 90
(8) A given root belongs to one and only one of these three groups.
(9) CC mix represented by how many it
In this paper we present a comprehensive approach to conceptual structuring and intelligent navigation of text databases.
Given any collection of texts, we first automatically extract a set of index terms describing each text.
Next, we use a particular lattice conceptual clustering method to build a network of clustered texts whose nodes are described using the index terms.
We argue that the resulting network supports a hybrid navigational approach to text retrieval - implemented into an actual user interface - that combines browsing potentials with good retrieval performance.
We present the results of an experiment on subject searching where this approach outperformed a conventional Boolean retrieval system.
Energy management has become one of the great challenges in portable computing.
This is the result of the increasing energy requirements of modern portable devices without a corresponding increase in battery technology.
Sleep is a new energy reduction technique for handheld devices that is most effective when the handheld's processor is lightly loaded, such as when the user is reading a document or looking at a web page.
When possible, rather than using the processor's idle mode, Sleep tries to put the processor in sleep mode for short periods (less than one second) without affecting the user's experience.
To enhance the perception that the system is on, an image is maintained on the display and activity is resumed as a result of external events such as touch-screen and button activity.
We have implemented Sleep on a prototype pocket computer, where it has reduced energy consumption by up to 60%.
In this tutorial we provide answers to the top ten input-modeling questions that new simulation users ask, point out common mistakes that occur and give relevant references.
We assume that commercial input-modeling software will be used when possible, and only suggest non-commercial options when there is little else available.
Detailed examples will be provided in the tutorial presentation.
This paper discusses the initial efforts to implement simulation modeling as a visual management and analysis tool at an automotive foundry plant manufacturing engine blocks.
The foundry process was modeled using Pro Model to identify bottlenecks and evaluate machine performance, cycle times and production data (total parts, rejects, throughput, products/hr) essential for efficient production control.
Results from the current system identified the assembly machine work area as the bottleneck (although utilization was greater than 95% for two assembly machines), resulting in a high work-in-process (WIP) inventory level and low resource and machine utilization.
Based on these results, optimum numbers were identified through use of scenarios by varying the number of assembly machines and processing time of each machine.
In addition to these scenarios, strategies for production control involving buffer sizes were also made.
We present an algorithm for conjunctive and disjunctive Boolean equation systems (BESs), which arise frequently in the verification and analysis of finite state concurrent systems.
In contrast to the previously best known O(e²) time solutions, our algorithm computes the solution of such a fixpoint equation system with size e and alternation depth d in O(e log d) time.
This article is meant to provide the reader with details regarding the present state of the project, describing the current architecture of the system, its latest innovations and other systems that make use of the NetSolve infrastructure.
Copyright © 2002 John Wiley & Sons, Ltd.
This report presents the InfoVis Toolkit, designed to support the creation, extension and integration of advanced 2D Information Visualization components into interactive Java Swing applications.
The InfoVis Toolkit provides specific data structures to achieve a fast action/feedback loop required by dynamic queries.
It comes with a large set of components such as range sliders and tailored control panels required to control and configure the visualizations.
These components are integrated into a coherent framework that simplifies the management of rich data structures and the design and extension of visualizations.
Supported data structures currently include tables, trees and graphs.
Supported visualizations include scatter plots, time series, Treemaps, node-link diagrams for trees and graphs and adjacency matrix for graphs.
All visualizations can use fisheye lenses and dynamic labeling.
The InfoVis Toolkit supports hardware acceleration when available through Agile2D, an implementation of the Java Graphics API based on OpenGL, achieving speedups of 10 to 60 times.
The report
This paper addresses the simulation of the dynamics of complex systems by using hierarchical graph and multi-agent system.
A complex system is composed of numerous interacting parts that can be described recursively.
First we summarize the hierarchical aspect of the complex system.
We then present a description of hierarchical graph as a data structure for structural modeling in parallel with dynamics simulation by agents.
This method can be used by physiological modelers, ecological modelers, etc., as well as in other domains that are considered complex systems.
An example issued from physiology will illustrate this approach.
uses to deliver value to its customers.
In today's competitive environment, the globalization of markets has rapidly substituted the traditional integrated business.
The competitive success of an organization no longer depends only on its own efforts, but relies on the efficiency of the entire supply chain.
Therefore, building an effective supply chain is fast becoming paramount in today's marketplace.
Distributed Supply Chain (DSC) Simulation has been identified as one of the best means to test and analyze the performance of supply chains.
The Generic Runtime Infrastructure for Distributed Simulation (GRIDS) is a middleware that supports the reuse and interoperation of DSC simulations.
This paper reports our experience in employing GRIDS to support the distributed collaboration of an automobile manufacturing supply chain simulation.
Several advantages of GRIDS which make it an ideal middleware for DSC simulations are also discussed.
this paper) and (2) develop a visual method for each characterization.
The mariner community needs enhanced characterizations of environmental uncertainty now, but the accuracy of the characterizations is not yet sufficient, and therefore formal user evaluations cannot take place at this point in development.
We received feedback on the applicability of our techniques from domain experts.
We used this in conjunction with previous results to compile a set of development guidelines (some obvious, others not).
This paper proposes the InstantGrid framework for on-demand construction of grid points.
In contrast to traditional approaches, InstantGrid is designed to substantially simplify software management in grid systems, and is able to instantly turn any computer into a grid-ready platform with the desired execution environment.
Experimental results demonstrate that a 256-node grid point with commodity grid middleware can be constructed in five minutes from scratch.
We introduce a generic framework for proof carrying code, developed and mechanically verified in Isabelle/HOL.
The framework defines and proves sound a verification condition generator with minimal assumptions on the underlying programming language, safety policy, and safety logic.
We demonstrate its usability for prototyping proof carrying code systems by instantiating it to a simple assembly language with procedures and a safety policy for arithmetic overflow.
this paper.
Ref [15] addresses the knowledge consensus problem when teams of agents only have local communication between nearest neighbors.
Since the set of nearest neighbors is constantly changing, the overall system becomes a hybrid system.
The paper shows that if the union over all bidirectional communication graphs is connected for finite periods of time, then consensus is achieved.
While the results in this paper are not as strong, only unidirectional communication links are assumed.
In any multi-hop routing scheme, cooperation by the intermediate nodes is essential for the successful delivery of traffic.
However, the effort exerted by the intermediate nodes is often unobservable by the source and/or destination nodes.
We show it is possible to overcome this problem of hidden action by designing contracts, in the form of payments, to induce cooperation from the intermediate nodes.
Interestingly, the ability to monitor per-hop or per-path outcomes, even if costless to implement, may not improve the welfare of the participants or the performance of the network.
This paper develops a framework to measure the impact of agricultural research on urban poverty.
Increased investments in agricultural R&D can lower food prices by increasing food production, and lower food prices benefit the urban poor because they often spend more than 60% of their income on food.
Application of the framework to China shows that these food price effects are large and that the benefits for the urban poor have been about as large as the benefits for the rural poor.
KEYWORDS: developing countries, China, agricultural research, urban, poverty. ACKNOWLEDGMENTS: The authors are grateful for helpful comments received from Peter Hazell, Robert Evanson and participants in a session at the American Agricultural Economics Association annual meeting in Chicago, August 5-8, 2001.
To enable efficient access to multimedia content, the media data has to be augmented by semantic metadata and functionality.
The semantic representation has to be integrated with domain ontologies to fully exploit domain-specific knowledge.
This knowledge can be used for refining ambiguous user queries by closing the conceptual gap between the user and the information to be retrieved.
In our previous research, we have introduced Enhanced Multimedia Meta Objects (EMMOs) as a new approach for semantic multimedia meta modeling, as well as the query algebra EMMA, which is adequate and complete with regard to the EMMO model.
This paper focuses on the refinement of EMMA queries by incorporating ontological knowledge.
This paper summarises the achievements of a multidisciplinary Bioinformatics project which has the objective of providing a general mechanism for efficient computerisation of typewritten/hand-annotated archive card indexes, of the type found in most museums, archives and libraries.
In addition to efficiently scanning, recognising and databasing the content of the cards, the original card images must be maintained as the ultimate source record, and a flexible database structure is required to allow taxonomists to reorganise and update the resulting online archive.
Implementation mechanisms for each part of the overall system are described, and conversion performance for a demonstrator database of 27,578 Pyralid moth archive cards is reported.
The system is currently being used to convert the full NHM archive of Lepidoptera totalling 290,886 cards.
this paper we present a detailed examination of the technical problems we have encountered in undertaking high-throughput analyses of alternative splicing over the last four years, and the specific solutions we have developed for these problems, in seeking to minimize both false positive and false negative errors
this paper we describe NIMS, a new distributed, robotic sensor methodology developed for applications including characterization of environmental structure and phenomena.
NIMS exploits deployed infrastructure that provides the benefits of precise motion, aerial suspension, and low energy sustainable operations in complex environments.
NIMS nodes may explore a three-dimensional environment and enable the deployment of sensor nodes at diverse locations and viewing perspectives.
NIMS characterization of phenomena in a three dimensional space must now consider the selection of sensor sampling points in both time and space.
Thus, we introduce a new approach of mobile node adaptive sampling with the objective of minimizing error between the actual and reconstructed spatiotemporal behavior of environmental variables while minimizing required motion.
In this approach, the NIMS node first explores as an agent, gathering a statistical description of phenomena.
By iteratively increasing sampling resolution, guided adaptively by the measurement results themselves, this NIMS sampling enables reconstruction of phenomena with a systematic method for balancing accuracy with sampling resource cost in time and motion.
This adaptive sampling method is described analytically and also tested with simulated environmental data.
Experimental evaluations of adaptive sampling algorithms have also been completed.
Specifically, NIMS experimental systems have been developed for monitoring of spatiotemporal variation of atmospheric climate phenomena.
A NIMS system has been deployed at a field biology station to map phenomena in a 50m width and 50m span transect in a forest environme...
this paper) is scripting.
Here, the user provides a simple ASCII file containing commands that steer the visualization.
Typically, the commands are held in plain English to make using the underlying scripting language easier.
Typical examples for scripting-driven AV systems include JAWAA (Akingbade et al., 2003), JSamba (Stasko, 1998), JHAVÉ (Naps et al., 2000) and ANIMAL (Rößling and Freisleben, 2002)
In speaker identification, we match a given (unknown) speaker to the set of known speakers in a database.
The database is constructed from the speech samples of each known speaker.
Feature vectors are extracted from the samples by short-term spectral analysis, and processed further by vector quantization for locating the clusters in the feature space.
We study the role of the vector quantization in the speaker identification system.
We compare the performance of different clustering algorithms, and the influence of the codebook size.
We want to find out which method provides the best clustering result, and whether differences in clustering quality contribute to an improvement in the recognition accuracy of the system.
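A compact sketch of VQ-based identification: each enrolled speaker gets a codebook from k-means clustering of training feature vectors, and an unknown utterance is assigned to the speaker whose codebook yields the lowest average quantization distortion. The feature dimensionality, codebook size and synthetic data are placeholders, not the paper's experimental setup.

```python
import numpy as np
from scipy.cluster.vq import kmeans2, vq

def train_codebook(features, size=64):
    """Cluster one speaker's feature vectors (e.g. MFCCs) into a codebook."""
    codebook, _ = kmeans2(features, size, minit="points")
    return codebook

def quantization_distortion(features, codebook):
    """Average distance of the test vectors to their nearest code vectors."""
    _, dist = vq(features, codebook)
    return float(dist.mean())

def identify(test_features, codebooks):
    """Pick the enrolled speaker whose codebook gives the lowest distortion."""
    scores = {spk: quantization_distortion(test_features, cb)
              for spk, cb in codebooks.items()}
    return min(scores, key=scores.get), scores

# Placeholder enrollment data: 12-dimensional feature vectors per speaker.
rng = np.random.default_rng(0)
enroll = {f"speaker_{i}": rng.normal(i, 1.0, size=(500, 12)) for i in range(3)}
codebooks = {spk: train_codebook(f) for spk, f in enroll.items()}

test = rng.normal(1, 1.0, size=(200, 12))      # features of an unknown utterance
print(identify(test, codebooks)[0])            # expected: speaker_1
```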
We present here a framework for developing a generic talking head capable of reproducing the anatomy and the facial deformations induced by speech movements with a set of a few parameters.
We will show that the speaker-specific articulatory movements can be straightforwardly encoded into the normalized MPEG-4 Facial Animation Parameters and Facial Definition Parameters.
Multiple representation occurs when information about the same geographic entity is represented electronically more than once.
This occurs frequently in practice, and it invariably results in the occurrence of inconsistencies among the different representations.
We propose to resolve this situation by introducing a multiple representation management system (MRMS), the schema of which includes rules that specify how to identify representations of the same entity, rules that specify consistency requirements, and rules used to restore consistency when necessary.
In this paper, we demonstrate by means of a prototype and a real-world case study that it is possible to implement a multiple representation schema language on top of an object-relational database management system.
Specifically, it is demonstrated how it is possible to map the constructs of the language used for specifying the multiple representation schema to functionality available in Oracle.
Though some limitations exist, Oracle has proven to be a suitable platform for implementing an MRMS.
In this paper, we present a compressed pattern matching method for searching user queried words in the CCITT Group 4 compressed document images, without decompressing.
The feature pixels composed of black changing elements and white changing elements are extracted directly from the CCITT Group 4 compressed document images.
The connected components are labeled based on a line-by-line strategy according to the relative positions between the changing elements of the current coding line and the changing elements of the reference line.
Word boxes are bounded by merging the connected components.
A two-stage matching strategy is constructed to measure the dissimilarity between the template image of the user's query word and the words extracted from document images.
Experimental results confirmed the validity of the proposed approach.
The components of a key frame selection algorithm for a paper-based multimedia browsing interface called Video Paper are described.
Analysis of video image frames is combined with the results of processing the closed caption to select key frames that are printed on a paper document together with the closed caption.
Bar codes positioned near the key frames allow a user to play the video from the corresponding times.
This paper describes several component techniques that are being investigated for key frame selection in the Video Paper system, including face detection and text recognition.
The Video Paper system implementation is also discussed.
This paper proposes a method of using ontology hierarchy in automatic topic identification.
The fundamental idea behind this work is to exploit an ontology hierarchical structure in order to find a topic of a text.
The keywords which are extracted from a given text will be mapped onto their corresponding concepts in the ontology.
By optimizing the corresponding concepts, we pick a single node among the concept nodes which we believe to be the topic of the target text.
However, a limited vocabulary problem is encountered while mapping the keywords onto their corresponding concepts.
This situation forces us to extend the ontology by enriching each of its concepts with new concepts using the external linguistics knowledge-base (WordNet).
Our intuition is that the more keywords are mapped onto ontology concepts, the better our topic identification technique can perform.
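As a toy illustration of picking a topic node from a concept hierarchy (not the paper's actual optimization), one can score every node by how many mapped keywords fall in its subtree and bias the score toward deeper, more specific concepts; the hierarchy, mapping and scoring rule below are invented for the example.

```python
# The hierarchy, keyword-to-concept mapping and scoring rule below are
# invented for illustration only.
PARENT = {                                     # child -> parent concept
    "animal": None, "mammal": "animal", "bird": "animal",
    "dog": "mammal", "cat": "mammal", "eagle": "bird",
}

def depth(node):
    d = 0
    while PARENT[node] is not None:
        node, d = PARENT[node], d + 1
    return d

def ancestors_and_self(node):
    while node is not None:
        yield node
        node = PARENT[node]

def identify_topic(mapped_concepts, alpha=0.5):
    """Score = keyword coverage of the subtree + alpha * depth (specificity)."""
    coverage = {}
    for concept in mapped_concepts:
        for node in ancestors_and_self(concept):
            coverage[node] = coverage.get(node, 0) + 1
    return max(coverage, key=lambda n: coverage[n] + alpha * depth(n))

# Keywords already mapped onto concepts (possibly after WordNet enrichment).
print(identify_topic(["dog", "cat", "mammal"]))    # -> "mammal"
```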
This module provides the information about the CAFCR course: "Multi-Objective Embedded Systems Design, based on CAFCR".
Distribution: This article or presentation is written as part of the Gaudí project.
The Gaudí project philosophy is to improve by obtaining frequent feedback.
Frequent feedback is pursued by an open creation process.
This document is published as intermediate or nearly mature version to get feedback.
Further distribution is allowed as long as the document remains complete and unchanged.
All Gaudí documents are available at: http://www.extra.research.philips.com/natlab/sysarch/ (version: 0, status: draft, 5th July 2004).
Annotating a video-database requires an intensive human effort that is time consuming and error prone.
However this task is mandatory to bridge the gap between low-level video features and the semantic content.
We propose a partition sampling active learning method to minimize human effort in labeling.
Formally, active learning is a process where new unlabeled samples are iteratively selected and presented to teachers.
The major problem is then to find the best selection function that maximizes the knowledge gain acquired from new samples.
In contrast with existing active learning approaches, we focus on the selection of multiple samples.
We propose to select samples such that their contribution to the knowledge gain is complementary and optimal.
Hence, at each iteration we ensure that the knowledge gain is maximized.
Our method offers many advantages, among them the possibility of sharing the annotation effort among several teachers.
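A generic sketch of selecting a complementary batch of unlabeled samples, greedily trading off uncertainty against similarity to samples already chosen; this is a standard uncertainty-plus-diversity heuristic used here only to illustrate complementary selection, not the paper's exact selection function.

```python
import numpy as np

def select_batch(candidates, uncertainty, batch_size, diversity=1.0):
    """Greedily pick samples with high uncertainty and low mutual similarity."""
    chosen = []
    for _ in range(batch_size):
        best, best_score = None, -np.inf
        for i in range(len(candidates)):
            if i in chosen:
                continue
            if chosen:
                sim = max(
                    float(np.dot(candidates[i], candidates[j]))
                    / (np.linalg.norm(candidates[i]) * np.linalg.norm(candidates[j]) + 1e-12)
                    for j in chosen)
            else:
                sim = 0.0
            score = uncertainty[i] - diversity * sim
            if score > best_score:
                best, best_score = i, score
        chosen.append(best)
    return chosen

rng = np.random.default_rng(0)
feats = rng.normal(size=(200, 16))              # unlabeled video samples (features)
unc = rng.uniform(size=200)                     # e.g. classifier uncertainty/entropy
print(select_batch(feats, unc, batch_size=5))   # indices to present to the teachers
```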
A regional transportation system and the movement of large traffic volumes through it are characteristic of stochastic systems.
The standard traffic management or transportation planning approach uses a slice in time view of the system.
Static, mean values of system variables are used for the basis of incident-caused, congestion management decisions.
By reason of the highly variable nature of transportation systems, discrete event simulation is used in the planning process.
The simulation model is highly dependent on the spatial accuracy of real world coordinates of nodes and the lengths of the roadway network links.
Link travel times, queue spill back and turn lane queue size are directly related to the magnitude of incident-caused congestion, and the roadway system's ability to recover from it.
The incorporation of accurate Geographic Information System (GIS) data with a powerful transportation simulation software package and properly designed data collection and analysis techniques is invaluable in support of transportation incident management decisions.
Nowadays, XML is becoming the standard for electronic information representation and exchange in our lives.
Access to information presented in hyperlinked XML documents and other formats has always been in demand by users.
In this paper, we describe the architecture, implementation, and evaluation of the P-RANK system built to address the requirement for efficient ranked keyword search over hyperlinked XML documents.
Our contributions include presenting a new efficient keyword search system using a genuine data structure called the P-tree, a novel ranking method based on dimension rank voting, and a fast rank sorting method using the EIN-ring.
In this paper we present tractable algorithms for learning a logical model of actions' effects and preconditions in deterministic partially observable domains.
These algorithms update a representation of the set of possible action models after every observation and action execution.
We show that when actions are known to have no conditional effects, then the set of possible action models can be represented compactly indefinitely.
We also show that certain desirable properties hold for actions that have conditional effects, and that sometimes those can be learned efficiently as well.
Our approach takes time and space that are polynomial in the number of domain features, and it is the first exact solution that is tractable for a wide class of problems.
It does so by representing the set of possible action models using propositional logic, while avoiding general-purpose logical inference.
Learning in partially observable domains is difficult and intractable in general, but our results show that it can be solved exactly in large domains in which one can assume some structure for actions' effects and preconditions.
These results are relevant for more general settings, such as learning HMMs, reinforcement learning, and learning in partially observable stochastic domains.
We show that temporal logic and combinations of temporal logics and modal logics of knowledge can be effectively represented in artificial neural networks.
We present a Translation Algorithm from temporal rules to neural networks, and show that the networks compute a fixed-point semantics of the rules.
We also apply the translation to the muddy children puzzle, which has been used as a testbed for distributed multi-agent systems.
We provide a complete solution to the puzzle with the use of simple neural networks, capable of reasoning about time and of knowledge acquisition through inductive learning.
F1 and F2 frequencies of the vowels /i/, /a/ and /u/ were measured in speech directed to an infant and to adults.
The vowels were taken from content words as well as function words.
The results showed that the vowel triangles in speech to the infant were expanded compared to those in speech to adults, but only in the content words.
For function words, the opposite pattern was found: adults produced more expanded vowels in adult-directed speech than in infant-directed speech.
So far, boosting has been used to improve the quality of moderately accurate learning algorithms, by weighting and combining many of their weak hypotheses into a final classifier with theoretically high accuracy.
In a recent work (Sebban, Nock and Lallich, 2001), we have attempted to adapt boosting properties to data reduction techniques.
In this particular context, the objective was not only to improve the success rate, but also to reduce the time and space complexities due to the storage requirements of some costly learning algorithms, such as nearest-neighbor classifiers.
In that framework, each weak hypothesis, which is usually built and weighted from the learning set, is replaced by a single learning instance.
The weight given by boosting defines in that case the relevance of the instance, and a statistical test allows one to decide whether it can be discarded without damaging further classification tasks.
In Sebban, Nock and Lallich (2001), we addressed problems with two classes.
It is the aim of the present paper to relax the class constraint, and extend our contribution to multiclass problems.
Beyond data reduction, experimental results are also provided on twenty-three datasets, showing the benefits that our boosting-derived weighting rule brings to weighted nearest neighbor classifiers.
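For concreteness, a minimal weighted nearest-neighbor classifier of the kind such weights would feed into might look as follows; the relevance weights here are simply supplied by the caller (uniform in the toy usage), whereas in the paper they come from the boosting-derived rule.

```python
# A minimal weighted k-NN: each stored prototype carries a relevance weight that
# scales its vote. Data and weights below are illustrative placeholders.
import numpy as np

def weighted_knn_predict(X_train, y_train, w_train, X_test, k=3):
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)      # distances to all prototypes
        nn = np.argsort(d)[:k]                       # k nearest prototypes
        votes = {}
        for i in nn:
            votes[y_train[i]] = votes.get(y_train[i], 0.0) + w_train[i]
        preds.append(max(votes, key=votes.get))      # class with largest weighted vote
    return np.array(preds)

# toy usage: two Gaussian blobs, uniform relevance weights
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
w = np.ones(len(y))                                   # boosting-derived weights would go here
print(weighted_knn_predict(X, y, w, np.array([[0.0, 0.0], [3.0, 3.0]])))
```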
that are stored in a fragile state, on a volatile medium.
They require conservation and restoration.
Automated tools for video restoration will be crucial in preserving our cultural heritage, since manual image restoration is a tedious and time-consuming process.
The need for new approaches to the consistent simulation of related phenomena at multiple levels of resolution is great.
While many fields of application would benefit from a complete and approachable solution to this problem, such solutions have proven extremely difficult.
We present a multi-resolution simulation methodology which uses numerical optimization as a tool for maintaining external consistency between models of the same phenomena operating at different levels of temporal and/or spatial resolution.
Our approach follows from previous work in the disparate fields of inverse modeling and spacetime constraint-based animation.
As a case study, our methodology is applied to two environmental models of forest canopy processes that make overlapping predictions under unique sets of operating assumptions, and which execute at different temporal resolutions.
Experimental results are presented and future directions are addressed.
We present an iterative algorithm for robustly estimating the egomotion and refining and updating a coarse, noisy and partial depth map using a depth based parallax model and brightness derivatives extracted from an image pair.
Given a coarse, noisy and partial depth map acquired by a range-finder or obtained from a Digital Elevation Map (DEM), we first estimate the ego-motion by combining a global ego-motion constraint and a local brightness constancy constraint.
Using the estimated camera motion and the available depth map estimate, motion of the 3D points is compensated.
We utilize the fact that the resulting surface parallax field is an epipolar field; knowing its direction from the previous motion estimates, we estimate its magnitude and use it to refine the depth map estimate.
Instead of assuming a smooth parallax field or locally smooth depth models, we locally model the parallax magnitude using the depth map, formulate the problem as a generalized eigen-value analysis and obtain better results.
In addition, confidence measures for depth estimates are provided which can be used to remove regions with potentially incorrect (and outliers in) depth estimates for robustly estimating ego-motion in the next iteration.
Results on both synthetic and real examples are presented.
ion.
In 5 patients, additional pancreatic tumors or distant metastases only suspected during PET scanning were confirmed.
Image fusion improved the sensitivity of malignancy detection from 76.6% (CT) and 84.4% (PET) to 89.1% (image fusion).
Compared with CT alone, image fusion increased the sensitivity of detecting tissue infiltration to 68.2%, but at the cost of decreased specificity.
Conclusion: The most important supplementary finding supplied by image fusion is a more precise correlation with focal tracer hot spots in PET.
Image fusion improved the sensitivity of differentiating between benign and malignant pancreatic lesions with no significant change in specificity.
All image modalities failed to stage lymph node involvement.
Key Words: PET; CT, spiral; image manipulation or reconstruction; pancreas; computer applications, detection. J Nucl Med 2004;45:1279-1286.
With an incidence rate of 10 cases per 100,000 people per year, cancer of the pancreas is the third most common ma
Arabic rule in Middle Europe) contrastive vowel length in these languages.
Czech vowel length has been extensively studied since the 19th century.
However, no generalisation of any kind could be uncovered.
Diachronically, it does not relate to either Indo-European or Common Slavic vowel length, nor does it show any kinship with Baltic tones and East/ South Slavic accent.
Synchronically, closed syllable shortening (kráva vs. krav, kravka "cow NOMsg, GENpl, dim") appears to coexist with closed syllable lengthening (nůž vs. nože, nůžky "knife NOMsg, GENsg, scissors").
In sum, any attempt to propose a regularity underlying this system seems desperate.
Vowel length in Czech is therefore reputed to be anarchic and unpredictable.
This situation is mirrored in grammars by pages of amorphous lists of grammatical categories that exhibit length or shortness.
This paper argues that Czech vowel length is in fact driven by a simple mechanism that is known from other languages: templates.
That is, a certain amount of vocalic space is associated to a given morphological and/ or semantic category.
If concatenation of underlying long and short vowels produces more morae than the specific category allows for, shortening is observed.
If it produces less vocalic weight than the category at stake demands, lengthening ensues.
This kind of templatic structure is a typical feature of Afro-Asiatic languages, and I believe that the templatic regularities I present have not been discovered before because nobody has ever looked at the relevant data through the prism of templates: these are commonly held to be a typological pecularity of Afro-Asiatic, absent from Indo-European.
In order to illustrate the preceding claim, only a few of the instances of templatic activity that I have identified may be quoted in t...
==> trigger below the skeleton. b. UP = processes driven by syllable structure: consequences of syntagmatic relations between syllabic constituents (e.g. lenition) ==> trigger above the skeleton. c. [syllable-structure diagrams of DOWN and UP omitted]
(4) Vulgar Latin [VL]: consonification of short (non-low) vowels in hiatus: a. {i, e} -> j / __ V; b. {u, o} -> w / __ V; e.g. fiilia > filja (fille), vidua > wedwa (veuve), viinea > winja (vigne), coagulaare > kwaglare (cailler).
(5) a. Lat. filia = 3 syllables, VL filja = 2 syllables; b. Cw/j clusters; c. no original Cj/w preserved: Cj = palatalizations > modern French [j], [z], [s], etc. (+ j metathesis / fusion with the preceding vowel, e.g. ratjoone > raison).
(6) Evolution of Cj: a. classical view: all processes depend on segmental characteristics of C; b. our claim: there is just one (fundamentally) syllabic process; segmental properties are secondary and never the cause.
The focus of this paper is on developing and evaluating a practical methodology for determining if and when different types of traffic can be safely multiplexed within the same service class.
The use of class rather than individual service guarantees offers many advantages in terms of scalability, but raises the concern that not all users within a class see the same performance.
Understanding when and why a user will experience performance that differs significantly from that of other users in its class is, therefore, of importance.
Our approach relies on an analytical model developed under a number of simplifying assumptions, which we test using several real traffic traces corresponding to different types of users.
This testing is carried out primarily by means of simulation, to allow a comprehensive coverage of different configurations.
Our findings establish that although the simplistic model does not accurately predict the absolute performance that individual users experience, it is quite successful and robust when it comes to identifying situations that can give rise to substantial performance deviations within a service class.
As a result, it provides a simple and practical tool for rapidly characterizing real traffic profiles that can be safely multiplexed.
Over the last decade, importance sampling has been a popular technique for the efficient estimation of rare event probabilities.
This paper presents an approach for applying balanced likelihood ratio importance sampling to the problem of quantifying the probability that the content of the second buffer in a two node tandem Jackson network reaches some high level before it becomes empty.
Heuristic importance sampling distributions are derived that can be used to estimate this overflow probability in cases where the first buffer capacity is finite or infinite.
The proposed importance sampling distributions differ from previous balanced likelihood ratio methods in that they are specified as functions of the contents of the buffers.
Empirical results indicate that the relative errors of these importance sampling estimators are bounded independently of the buffer size when the second server is the bottleneck, and are bounded linearly in the buffer size otherwise.
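As a simplified stand-in for this setting (a single M/M/1 buffer rather than the two-node tandem network, and the classical rate-swapping change of measure rather than balanced likelihood ratios), an importance sampling estimate of an overflow probability can be sketched as follows; all parameter values are illustrative.

```python
# Importance sampling for the probability that a single M/M/1 queue, starting with one
# customer, reaches level N before emptying. The IS measure swaps arrival and service
# rates (the standard heuristic), and the estimator averages likelihood-ratio-weighted
# indicators of overflow.
import random

def overflow_prob_is(lam=0.3, mu=1.0, N=20, runs=20000, seed=0):
    rng = random.Random(seed)
    p = lam / (lam + mu)          # up-step probability under the original measure
    q = mu / (lam + mu)           # up-step probability under the IS measure (rates swapped)
    est = 0.0
    for _ in range(runs):
        level, L = 1, 1.0         # L accumulates the likelihood ratio of the sampled path
        while 0 < level < N:
            if rng.random() < q:  # up-step sampled with probability q
                level += 1
                L *= p / q
            else:                 # down-step sampled with probability p
                level -= 1
                L *= q / p
        if level == N:
            est += L
    return est / runs

print(overflow_prob_is())
```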
Agent-based Modeling and Simulation (ABMS) is a relatively new development that has found extensive use in areas such as social sciences, economics, biology, ecology etc.
Can ABMS be used effectively to find answers to questions about complex construction systems?
The focus of this paper is to provide some answers to this question.
Initial experimentation is conducted to understand the advantages of using ABMS either in isolation or in combination with traditional simulation methodologies.
The paper provides a summary of this experimentation, conclusions and sets the agenda for future research in this area.
In this paper, we want to argue that image pro-
This material is based upon work supported by the U. S. Department of Defense and by the National Science Foundation under Grant No. 9734102.
Additional support was provided by Sun Microsystems.
We introduce a novel approach to cerebral white matter connectivity mapping from diffusion tensor MRI.
DT-MRI is the unique non-invasive technique capable of probing and quantifying the anisotropic diffusion of water molecules in biological tissues.
We address the problem of consistent neural fiber reconstruction in areas of complex diffusion profiles with potentially multiple fiber orientations.
Our method relies on a global modeling of the acquired MRI volume as a Riemannian manifold M and proceeds in 4 major steps: First, we establish the link between Brownian motion and diffusion MRI by using the Laplace-Beltrami operator on M.
We then expose how the sole knowledge of the diffusion properties of water molecules on M is sufficient to infer its geometry.
There exists a direct mapping between the diffusion tensor and the metric of M.
Next, having access to that metric, we propose a novel level set formulation scheme to approximate the distance function related to a radial Brownian motion on M.
Finally, a rigorous numerical scheme using the exponential map is derived to estimate the geodesics of M, seen as the diffusion paths of water molecules.
Numerical experiments conducted on synthetic and real diffusion MRI datasets illustrate the potential of this global approach.
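A crude numerical illustration of the "diffusion tensor as metric" idea, using a graph shortest-path approximation in place of the level set and exponential-map schemes described above, might look like this; the tensor field and grid are toy assumptions.

```python
# Treat each voxel of a 2D tensor field as a graph node, weight each edge by the
# Riemannian length sqrt(dx^T D^{-1} dx) induced by the inverse diffusion tensor,
# and compute geodesic distances from a seed voxel with Dijkstra's algorithm.
import heapq
import numpy as np

def geodesic_distances(D, seed):
    """D: array of shape (H, W, 2, 2) of symmetric positive-definite tensors."""
    H, W = D.shape[:2]
    dist = np.full((H, W), np.inf)
    dist[seed] = 0.0
    pq = [(0.0, seed)]
    steps = [(-1, 0), (1, 0), (0, -1), (0, 1), (-1, -1), (-1, 1), (1, -1), (1, 1)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i, j]:
            continue
        for di, dj in steps:
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W:
                dx = np.array([di, dj], dtype=float)
                # average the metric of the two endpoints for the edge cost
                G = 0.5 * (np.linalg.inv(D[i, j]) + np.linalg.inv(D[ni, nj]))
                cost = np.sqrt(dx @ G @ dx)
                if d + cost < dist[ni, nj]:
                    dist[ni, nj] = d + cost
                    heapq.heappush(pq, (d + cost, (ni, nj)))
    return dist

# toy isotropic tensor field: geodesics reduce to Euclidean grid distances
D = np.tile(np.eye(2), (16, 16, 1, 1))
print(geodesic_distances(D, (0, 0))[15, 15])
```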
We show how a tableaux algorithm can be extended to deal with role boxes that include range and domain axioms, prove that the extended algorithm is still a decision procedure for concepts w.r.t. such a role box, and show how support for range and domain axioms can be exploited in order to add a new form of absorption optimisation called role absorption.
We illustrate the effectiveness of the optimised algorithm by analysing the performance of our FaCT++ implementation when classifying terminologies derived from realistic ontologies.
Accuracy of oversampled analog-to-digital (A/D) conversion, and the dependence of accuracy on the sampling interval and on the bit rate, are characteristics fundamental to A/D conversion but not completely understood.
These characteristics are studied in this paper for oversampled A/D conversion of band-limited signals in L^2(R).
We show that the digital sequence obtained in the process of oversampled A/D conversion describes the corresponding analog signal with an error that tends to zero in energy as the sampling interval tends to zero, provided that the quantization threshold crossings of the signal constitute a sequence of stable sampling in the respective space of band-limited functions.
Further, we show that the sequence of quantized samples can be represented in a manner that requires only a logarithmic increase in the bit rate with the sampling frequency, and hence that the error of oversampled A/D conversion actually exhibits an exponential decay in the bit rate as the sampling interval tends to zero.
Temporal databases assume a single line of time evolution.
In other words, they support time-evolving data.
However there are applications which require the support of temporal data with branched time evolution.
With new branches created as time proceeds, branched and temporal data tends to increase in size rapidly, making the need for efficient indexing crucial.
We propose a new (paginated) access method for branched and temporal data: the BT-tree.
The BT-tree is both storage efficient and access efficient.
We have implemented the BT-tree and performance results confirm these properties.
There are high expectations in all sectors of society for immediate access to biological knowledge of all kinds.
To fully exploit and manage the value of biological resources, society must have the intellectual tools to store, retrieve, collate, analyze, and synthesize organism-level and ecological scale information.
However, it currently is difficult to discover, access, and use biodiversity data because of the long history of "bottom-up" evolution of scientific biodiversity information, the mismatch between the distribution of biodiversity itself and the distribution of the data about it, and, most importantly, the inherent complexity of biodiversity and ecological data.
This stems from, among many factors, numerous data types, the nonexistence of a common underlying (binary) language, and the multiple perceptions of different researchers/data recorders across spatial or temporal distance or both.
The challenge presented to the computer science and information technology community by the biodiversity and ecological information domain is worthy of all the time and talent that can be brought to bear, because the continued existence of the species Homo sapiens depends upon gaining an understanding of this spaceship Earth and our fellow passengers upon it.
Figure 2: architecture diagram. UML repository: the repository contains the model, represented at the metamodel level (i.e. a class is represented by an object, an instance of the M2 concept named Class).
Bridge: the OCL interpreter itself should not know anything of UML, but rather manipulates it through a bridge pattern.
This bridge maps manipulated MOF-compliant concepts to OCL types and properties.
Proper education of a modeling and simulation professional meeting the extensive criteria imposed by the community poses significant challenges.
In this paper, we explore the formation of a university-based education in modeling and simulation to meet the challenges.
We examine the factors affecting the composition of a modeling and simulation course.
Based on the anticipated consequences, we propose potential solutions.
The problem of evaluating machine translation (MT) systems is more challenging than it may first appear, as diverse translations can often be considered equally correct.
The task is even more difficult when practical circumstances require that evaluation be done automatically over short texts, for instance, during incremental system development and error analysis.
While several
This paper proposes a novel technique for building layer animation models of real articulated objects from 3D surface measurement data.
Objects are scanned using a hand-held 3D sensor to acquire 3D surface measurements.
A novel geometric fusion algorithm is presented which enables reconstruction of a single surface model from the captured data.
This algorithm overcomes the limitations of previous approaches which cannot be used for hand-held sensor data as they assume that measurements are on a structured planar grid.
The geometric fusion introduces the normal-volume representation of a triangle to convert individual triangles to a volumetric implicit surface.
This paper develops statistical algorithms and performance limits for resolving sinusoids with nearby frequencies, in the presence of noise.
We address the problem of distinguishing whether the received signal is a single-frequency sinusoid or a double-frequency sinusoid, with possibly unequal, and unknown, amplitudes and phases.
We derive a locally optimal detection strategy that can be applied in a stand-alone fashion or as a refinement step for existing spectral estimation methods, to yield improved performance.
We further derive explicit relationships for the minimum detectable difference between the frequencies of two tones, for any particular false alarm and detection rate and at a given SNR.
This work was supported in part by NSF CAREER Award CCR-9984246 and AFOSR grant F49620-03-1-0387.
The author of this article, Professor Ajay Agrawal of Queen's University's School of Business, can be reached at aagrawal@business.queensu.ca.
This paper represents the views of the author and does not necessarily reflect the opinions of Statistics Canada
Given a Morse function f over a 2-manifold with or without boundary, the Reeb graph is obtained by contracting the connected components of the level sets to points.
We prove tight upper and lower bounds on the number of loops in the Reeb graph that depend on the genus, the number of boundary components, and whether or not the 2-manifold is orientable.
We also give an algorithm that constructs the Reeb graph in time O(n log n), where n is the number of edges in the triangulation used to represent the 2-manifold and the Morse function.
This paper aims at finding fundamental design principles for hierarchical web caching.
An analytical modeling technique is developed to characterize an uncooperative twolevel hierarchical caching system where the least recently used (LRU) algorithm is locally run at each cache.
With this modeling technique, we are able to identify a characteristic time for each cache, which plays a fundamental role in understanding the caching processes.
In particular, a cache can be viewed roughly as a low-pass filter with its cutoff frequency equal to the inverse of the characteristic time.
Documents with access frequencies lower than this cutoff frequency have good chances to pass through the cache without cache hits.
This viewpoint enables us to take any branch of the cache tree as a tandem of low-pass filters at different cutoff frequencies, which further results in the finding of two fundamental design principles.
Finally, to demonstrate how to use the principles to guide the caching algorithm design, we propose a cooperative hierarchical web caching architecture based on these principles.
Both model-based and real trace simulation studies show that the proposed cooperative architecture results in more than 50% memory saving and substantial central processing unit (CPU) power saving for the management and update of cache entries compared with the traditional uncooperative hierarchical caching architecture.
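Under the standard independent-reference model, the characteristic-time idea can be sketched with the familiar Che-style approximation: solve for the time T at which the expected number of distinct documents requested equals the cache size, then read off per-document hit probabilities. The popularity law and cache size below are illustrative, and this is the textbook approximation rather than necessarily the exact model of the paper.

```python
# Che-style approximation for an LRU cache of size C with per-document request rates
# lam_i: the characteristic time T solves sum_i (1 - exp(-lam_i * T)) = C, and a
# document's hit probability is then roughly 1 - exp(-lam_i * T).
import numpy as np

def characteristic_time(rates, C, lo=1e-9, hi=1e9, iters=200):
    for _ in range(iters):                         # bisection on T
        mid = 0.5 * (lo + hi)
        occupied = np.sum(1.0 - np.exp(-rates * mid))
        if occupied < C:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

rates = 1.0 / np.arange(1, 1001) ** 0.8            # Zipf-like popularity (assumed)
T = characteristic_time(rates, C=100)
hit_prob = 1.0 - np.exp(-rates * T)
print(T, hit_prob[:3])                             # the most popular documents hit with probability near 1
```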
The techniques proposed in this paper require very little additional hardware and no extra bus lines, achieving, nonetheless, a significant reduction of the activity level on the bus.
In this paper, we present a maximum entropy (maxent) approach to the problem of fusing experts' opinions, or classifiers' outputs.
The maxent approach is quite versatile and allows us to express in a clear, rigorous, way the a priori knowledge that is available on the problem.
For instance, our knowledge about the reliability of the experts and the correlations between these experts can be easily integrated: Each piece of knowledge is expressed in the form of a linear constraint.
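A minimal sketch of maximum entropy under linear constraints (here a finite outcome space and two expectation constraints, both assumed purely for illustration) solves the convex dual and recovers the exponential-family distribution:

```python
# Maximum entropy with linear constraints E_p[f_k] = b_k over a finite outcome set.
# The solution has the form p(x) proportional to exp(sum_k lambda_k f_k(x)); the
# multipliers are found by minimizing the dual log Z(lambda) - lambda . b.
import numpy as np
from scipy.optimize import minimize

F = np.array([[1.0, 0.0],      # feature values f_k(x) for 3 outcomes and 2 constraints
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([0.6, 0.5])        # required expectations (assumed prior knowledge)

def dual(lmbda):                # convex dual objective
    return np.log(np.sum(np.exp(F @ lmbda))) - lmbda @ b

res = minimize(dual, np.zeros(2))
p = np.exp(F @ res.x)
p /= p.sum()
print(p, p @ F)                 # the expectations match b up to optimization accuracy
```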
Since the late 1970s dramatic economic changes have taken place in the agricultural sector in the highlands of Guatemala.
The introduction of new export crops, such as snow peas, broccoli, and miniature vegetables, has led to yet another agro-export boom.
Unlike earlier booms, however, this one has included all but the smallest farmers.
The high rate of smallholder participation in the boom, and the initial high profitability of nontraditional exports (NTXs), fueled initial optimism that NTX production could increase smallholders' ability to accumulate land and so decrease the highly skewed distribution of land in Guatemala, a country with one of the most unequal landholding patterns in all of Latin America.
The picture that emerges from the analysis in this paper raises serious questions about the sustainability and equity effects of NTX crop adoption among smallholders in the long run.
Two main findings illustrate the problems besetting NTX crop production.
First, the land accumulation rates of adopters have dropped dramatically in the 1990s.
NTX crop adopters accumulated close to three times more land than non-adopters in the 1980s.
Although adopters are still accumulating more land than non-adopters in the 1990s, the gap between the two groups has narrowed substantially.
Second, smaller adopters are no longer accumulating land at higher rates than their larger counterparts.
In the 1980s the landholdings of smaller adopters grew significantly faster than those of the larger adopters, but this trend reversed itself in the 1990s.
The advantages smallholders initially had in accumulating land may have been lost as a result of deteriorating agronomic conditions and volatile export markets.
However, given adequate policy support, smallholders could indeed improve thei...
This paper will describe the REDTOP-2 tool and its capabilities.
Sample results obtained from exercising the tool for a number of different existing engine designs will be presented.
Results from a multi-variable sensitivity study on a LOX/LH2 fuel-rich, single preburner staged-combustion engine will be highlighted.
Two sample applications involving vehicle designs will be discussed.
The first involves probabilistic/uncertainty analysis for an all-rocket vehicle design and the second the rocket main propulsion system analysis of an airbreathing, two-stage RLV concept with first stage tail-rockets and all-rocket second stage propulsion.
Finally, future directions in the development of REDTOP-2 will be discussed
Introduction: Autism is a neurodevelopmental disorder with abnormal corpus callosum (CC) size [1].
Most previous studies used the area of the predefined Witelson partition [5] as a morphometric measure, but other shape metrics have not been considered.
We present a new computational technique for curvature estimation via piecewise quintic splines and use it both in a CC nonlinear dynamic time warping algorithm [4] and in detecting the regions of curvature difference.
Figure 1: left, level set segmentation showing the partial volume effect; right, spline smoothing.
A similar approach has been taken in [6].
Methods: A group of 2D mid-sagittal cross-section images of the corpus callosum was taken from males of similar age: 15 autistic and 12 normal controls.
The level set method was used to extract the boundary of the corpus callosum automatically by solving ∂φ/∂t + F|∇φ| = 0, where F is the given boundary propagation velocity [2].
Then the pixelated CC contour was reconstructed into a rough
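A rough sketch of the curvature-estimation step, assuming a noisy parametric contour and scipy's degree-5 smoothing splines in place of the exact piecewise quintic construction, evaluates kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2):

```python
# Fit smoothing splines of degree 5 to the x(t), y(t) coordinates of a contour and
# evaluate the plane-curve curvature formula. The contour here is a noisy unit circle,
# standing in for a pixelated corpus callosum boundary.
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200)
x = np.cos(t) + 0.01 * rng.standard_normal(t.size)
y = np.sin(t) + 0.01 * rng.standard_normal(t.size)

sx = UnivariateSpline(t, x, k=5, s=0.05)          # quintic smoothing splines
sy = UnivariateSpline(t, y, k=5, s=0.05)
xp, xpp = sx.derivative(1)(t), sx.derivative(2)(t)
yp, ypp = sy.derivative(1)(t), sy.derivative(2)(t)
kappa = np.abs(xp * ypp - yp * xpp) / (xp**2 + yp**2) ** 1.5
print(kappa.mean())                                # close to 1 for the unit circle
```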
We describe soft versions of the global cardinality constraint and the regular constraint, with efficient filtering algorithms maintaining domain consistency.
For both constraints, the softening is achieved by augmenting the underlying graph.
The softened constraints can be used to extend the meta-constraint framework for over-constrained problems proposed by Petit, Régin and Bessière.
Graphs are a popular data structure, and graph-manipulation programs are common.
Graph manipulations can be cleanly, compactly, and explicitly described using graph-rewriting notation.
However, when a software developer is persuaded to try graph rewriting, several problems commonly arise.
Primarily, it is difficult for a newcomer to develop a feel for how computations are expressed via graph rewriting.
Also, graph-rewriting is not convenient for solving all aspects of a problem: better mechanisms are needed for interfacing graph rewriting with other styles of computation.
Despite the growing interest for component-based systems, few works tackle the question of the trust we can bring into a component.
Differentiated Services (DiffServ) is scalable for deployment in today's Internet, and Multiprotocol Label Switching (MPLS) provides fast packet switching and the opportunity for traffic engineering.
Thus, the combination of DiffServ and MPLS presents a very attractive strategy to backbone network providers.
This paper attempts to explain the concepts of DiffServ + MPLS and illustrate its effectiveness by performing a simulation using Network Simulator (ns-2).
The results show the fast rerouting feature of MPLS and how it alleviates the problem of link failures in DiffServ networks.
Several studies have shown that the performance advantages of adaptive routing over deterministic routing are reduced when the traffic contains a strong degree of communication locality.
This paper proposes a new analytical model of an adaptive routing algorithm proposed by Duato in [Dua94].
The main feature of this algorithm is the use of a time-out selection function for assigning virtual channels.
This has the advantage of reducing virtual channels multiplexing to improve the network performance, especially, in the presence of communication locality.
In this paper, we provide a sub-channel partitioning based unequal error protection (UEP) scheme for a space-time block coded orthogonal frequency division multiplexing (STBC-OFDM) system.
In such a scheme, video data is partitioned into high-priority (HP) and low-priority (LP) layers according to the importance of the data.
At the receiver side, OFDM subchannels are partitioned into high-quality (HQ) and low-quality (LQ) groups according to the estimated channel qualities.
Based on the feedback of sub-channel partitioning results, the transmitter assigns HP and LP video data to the corresponding HQ and LQ sub-channels.
Through theoretical analysis, we show there is indeed a significant BER difference between the HQ and LQ sub-channels, which can be exploited by UEP.
Based on the analysis, we provide a criterion for determining the appropriate transmission power.
Through computer simulations, we show that the proposed scheme offers significant performance gain compared to conventional methods.
We also demonstrate that the scheme is the least sensitive to channel estimation errors among all compared schemes, and is hardly influenced by the Doppler spread.
The feedback overhead can also be reduced with almost no performance penalty by bundling several neighboring sub-channels together and assigning them to the same group.
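The sub-channel partitioning and data mapping can be illustrated with a minimal sketch; the channel gains, group sizes and HP/LP payloads below are assumptions, and no coding or modulation is modelled.

```python
# Rank OFDM sub-channels by their estimated gains, label the better half high-quality
# (HQ) and the rest low-quality (LQ), then map high-priority (HP) data to HQ
# sub-channels and low-priority (LP) data to LQ sub-channels.
import numpy as np

rng = np.random.default_rng(0)
gains = np.abs(rng.standard_normal(64) + 1j * rng.standard_normal(64))  # Rayleigh-like

order = np.argsort(gains)[::-1]          # best sub-channels first
hq, lq = order[:32], order[32:]          # HQ / LQ groups (only group membership is fed back)

hp_symbols = rng.integers(0, 4, 32)      # high-priority video layer (toy symbols)
lp_symbols = rng.integers(0, 4, 32)      # low-priority video layer
assignment = {int(c): ("HP", int(s)) for c, s in zip(hq, hp_symbols)}
assignment.update({int(c): ("LP", int(s)) for c, s in zip(lq, lp_symbols)})
print(assignment[int(hq[0])])            # the strongest sub-channel carries HP data
```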
Supporting fast restoration for general mesh topologies with minimal network overbuild is a technically challenging problem.
Traditionally, ring-based SONET networks have offered 50 ms restoration at the cost of requiring 100% overbuild.
Recently, fast (local) reroute has gained momentum in the context of MPLS networks.
Fast reroute, when combined with preprovisioning of protection capacities and bypass tunnels, comes close to providing fast restoration for mesh networks.
Preprovisioning has the additional advantage of greatly simplifying network routing and signaling.
Thus even for protected connections, online routing can now be oblivious to the offered protection, and may only involve single shortest path computations.
In this paper, we address the complex task of initializing an on-line simulation to a current system state collected from an operating physical system.
The paper begins by discussing the complications that arise when the system model employed by the controller and the planner are not the same.
The benefits of using the same model for control and planning are then outlined.
The paper then discusses a new simulation paradigm that models controller interactions and provides a single model that is capable of supporting planning and control functions.
Next, issues arising from performing a distributed simulation of the distributed control architecture that is being employed to manage the system are addressed.
The definition of the state for the distributed system is then discussed and the collection of the real-time state information from the elements of this distributed system is outlined.
Finally, the procedure for initializing the distributed on-line simulation from the collected real-time state information is given.
Although the Ethernet promises high probability packet delivery between all pairs of hosts, defects in the installation can sometimes prevent some pairs of hosts from communicating very well.
This paper describes a method that was developed and used at Carnegie Mellon University during 1984--1985 to monitor the connectivity between all pairs of hosts on the Ethernet.
The method is based on sending test packets from a central station through various cyclic routes and relating the results to the likelihood of various defects via a system probability model.
The method proved to be quite effective in practice and greatly assisted our support staff in maintaining our Ethernet.
Distributed weighted fair scheduling schemes for QoS support in wireless networks have not yet become standard.
In this paper we propose an Admission Control and Dynamic Bandwidth Management scheme that provides fairness in the absence of distributed link level weighted fair scheduling.
In case weighted fair scheduling becomes available, our system assists it by supplying the scheduler with weights and adjusting them dynamically as network and traffic characteristics vary.
To obtain these weights, we convert the bandwidth requirement of the application into a channel time requirement.
Our Bandwidth Manager then allots each flow a share of the channel time depending on its requirement relative to the requirements of other flows in the network.
It uses a max-min fairness algorithm with minimum guarantees.
The flow controls its packet transmission rate so it only occupies the channel for the fraction of time allotted to it by the Bandwidth Manager.
As available bandwidth in the network and the traffic characteristics of various flows change, the channel time proportion allotted also dynamically varies.
Our experiments show that, at the cost of a very low overhead, there is a high probability that every flow in the network will receive at least its minimum requested share of the network bandwidth.
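A small sketch of max-min fair sharing of channel time with minimum guarantees, using water-filling between each flow's guaranteed minimum and its request, is given below; the flow names and values are illustrative.

```python
# Max-min fairness with minimum guarantees: each flow receives clamp(level, minimum,
# request), where the common water level is chosen so the allocations sum to capacity.
def max_min_with_minimums(requests, minimums, capacity, iters=100):
    flows = list(requests)
    assert sum(minimums[f] for f in flows) <= capacity, "admission control should prevent this"

    def total(level):
        return sum(min(max(level, minimums[f]), requests[f]) for f in flows)

    lo, hi = 0.0, capacity
    for _ in range(iters):                 # bisection on the water level
        mid = 0.5 * (lo + hi)
        if total(mid) < capacity:
            lo = mid
        else:
            hi = mid
    level = 0.5 * (lo + hi)
    return {f: min(max(level, minimums[f]), requests[f]) for f in flows}

requests = {"voice": 0.10, "video": 0.50, "ftp": 0.60}   # fractions of channel time
minimums = {"voice": 0.05, "video": 0.20, "ftp": 0.10}
print(max_min_with_minimums(requests, minimums, capacity=1.0))
```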
This paper describes the emergence of improved traditional planting pits (za) in Burkina Faso in the early 1980s as well as their advantages, disadvantages and impact.
The za emerged in a context of recurrent droughts and frequent harvest failures, which triggered farmers to start improving this local practice.
Despair triggered experimentation and innovation by farmers.
These processes were supported and complemented by external intervention.
Between 1985 and 2000 substantial public investment has taken place in soil and water conservation (SWC).
The socio-economic and environmental situation on the northern part of the Central Plateau is still precarious for many farming families, but the predicted environmental collapse has not occurred and in many villages indications can be found of both environmental recovery and poverty reduction.
Keywords: soil fertility, soil conservation, water conservation.
Table of contents: 1. The Context in Which Za Emerged in the Yatenga Region; 2. Development and Dissemination of the Za Technology; 3. Impact on Farm Households and on Farmland; 4. Final Remarks; References.
TRADITIONAL SOIL AND WATER CONSERVATION PRACTICE IN BURKINA FASO, by Daniel Kaboré and Chris Reij.
1. THE CONTEXT IN WHICH ZA EMERGED IN THE YATENGA REGION: In the 1970s the densely populated northern part of the Central Plateau faced an acute environmental crisis.
Recurrent droughts led to frequent harvest failure.
Between 1975 and 1985 this region witnessed substantial out-migration to less densely populated regions with better soils and higher rainfall .
Women had to walk longer distances to collect firewood.
Vegetation was destroyed not only for firewood, but even more to expand cultivated land.
Groundwater ...
Intelligent Agent and on its goal-oriented point of view.
STARS Program Manager. Task: PV03; CDRL: A025; 14 June 1996. Data Reference: STARS-VC-A025/001/00, Version 2.0 (signatures on file). Principal authors: Mark Simos (Organon Motives, Inc.), Dick Creps, Teri F. Payton, Carol Klingler (Lockheed Martin Tactical Defense Systems), Larry Levine, Dean Allemang (Organon Motives, Inc.).
In this paper, the case of 2 p is considered and alternative cases will be applied to power engineering estimation problems.
The specific cases of parameter estimation and measurement noise are discussed
Fluid models of IP networks have been recently proposed as a way to break the scalability barrier of traditional discrete state-space models, both simulative (e.g., ns-2) and analytical (e.g., queues and Markov chains).
Fluid models adopt...
In this paper, we use wrapper methods based on a nonlinear classification algorithm in order to extract discriminative genes that are difficult to extract with conventional filter methods.
The RFE method based on nonlinear Support Vector Machines (SVMs) [2] is employed to this end because it has been successfully applied to the classification of gene expression data.
We investigate the genes extracted by the RFE method based on SVMs with a Gaussian kernel function and show that it can extract discriminative genes that are not chosen by conventional filter methods.
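A hedged sketch of recursive feature elimination for gene selection is shown below; for brevity it uses scikit-learn's RFE with a linear-kernel SVM on synthetic data, whereas the criterion discussed above is based on Gaussian-kernel SVMs.

```python
# Recursive feature elimination (RFE): repeatedly fit an SVM, rank features, and drop
# the least important ones until the requested number of "genes" remains.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# toy microarray-like data: 60 samples, 500 features, 10 of them informative
X, y = make_classification(n_samples=60, n_features=500, n_informative=10,
                           random_state=0)
selector = RFE(SVC(kernel="linear", C=1.0), n_features_to_select=20, step=0.1)
selector.fit(X, y)
selected_genes = np.where(selector.support_)[0]   # indices of the retained features
print(selected_genes)
```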
Per-flow traffic measurement is critical for usage accounting, traffic engineering, and anomaly detection.
Previous methodologies are either based on random sampling (e.g., Cisco's NetFlow), which is inaccurate, or only account for the "elephants".
We introduce a novel technique for measuring per-flow traffic approximately, for all flows regardless of their sizes, at very high speed (say, OC768).
The core of this technique is a novel data structure called Space Code Bloom Filter (SCBF).
A SCBF is an approximate representation of a multiset; each element in this multiset...
There is general agreement that metamodeling will play a pivotal role in the realization of the MDA, but less consensus on what the precise role of metamodeling should be and what form it should take.
In this paper we first analyze the underlying motivation for metamodeling within the context of the MDA and derive a concrete set of requirements that an MDA supporting infrastructure should satisfy.
We then present a number of concepts, which we believe are best suited to providing technical solutions to the identified requirements.
In particular, we discuss why the traditional "language definition" view is insufficient for an optimal MDA foundation.
OntoLearn is a system for word sense disambiguation, used to automatically enrich WordNet with domain concepts and to disambiguate WordNet glosses.
Peer-to-peer networks have attracted a significant amount of interest as a popular and successful alternative to traditional client-server networks for resource sharing and content distribution.
However, the existence of high degrees of free riding may be an important threat against P2P networks.
In this paper, we propose a distributed and measurement-based method to reduce the degree of free riding in P2P networks.
We primarily focus on developing schemes to locate free riders and on determining policies that can be used to take actions against them.
We propose a model in which each peer monitors its neighboring peers, makes decisions if they exhibit any kind of free-riding, and takes appropriate actions if required.
We specify three types of free riding and their symptoms observable from the activities of the neighboring peers.
We employ simple formulas to determine if a peer exhibits any kind of free riding.
The counter actions to be applied to the free riders are defined.
We combine the mechanisms proposed to detect free riders and to take appropriate actions in an ECA rule and a state diagram.
In position-based routing protocols, each node periodically transmits a short hello message (called beacon) to announce its presence and position.
Receiving nodes list all known neighbor nodes with their position in the neighbor table and remove entries after they have failed to receive a beacon for a certain time from the corresponding node.
Especially in highly dynamic networks, the information stored in the neighbor table is often out-dated and does not reflect the actual topology of the network anymore such that retransmissions and rerouting are required which consume bandwidth and increase latency.
Despite a considerable number of proposed position-based protocols, almost no analysis has been performed on the impact of beacons and the out-dated and inaccurate neighbor tables.
We show by analytical and simulation results that performance suffers especially in highly mobile ad-hoc networks and propose several mechanisms to improve the accuracy of neighborhood information.
Extensive simulations show the effectiveness of the proposed schemes to improve the network performance.
on and one in the prone position.
The prone images were registered to the respective supine images by use of an intensity-based registration algorithm, once using only the frame and once using only the head.
The difference between the transformations produced by these two registrations describes the movement of the patient's head with respect to the frame.
RESULTS: The maximum frame-based registration error between the supine and prone positions was 2.8 mm; it was more than 2 mm in two patients and more than 1.5 mm in six patients.
Anteroposterior translation is the dominant component of the difference transformation for most patients.
In general, the magnitude of the movement increased with brain volume, which is an index of head weight.
CONCLUSION: To minimize frame-based registration error caused by a change in the mechanical load on the frame, stereotactic procedures should be performed with the patient in the identical position during imaging and intervention.
KEY WOR
It is known that a data network may not be stable at the connection level under some unfair bandwidth allocation policies, even when the normal offered load condition is satisfied, i.e., the average traffic load at each link is less than its capacity.
In this paper, we show that, under the normal offered load condition, a data network is stable when the bandwidth of the network is allocated so as to maximize a class of general utility functions.
Using the microscopic model proposed by Kelly [9] for a TCP congestion control algorithm, we argue that the bandwidth allocation in the network dominated by this algorithm can be modelled as our bandwidth allocation model, and hence that the network is stable under the normal offered load condition.
This result may shed light on the stability issue of the Internet, since the majority of its data traffic is dominated by TCP.
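As a small numerical illustration of bandwidth allocation by utility maximization, the sketch below computes the proportionally fair rates (logarithmic utilities) for three flows sharing two unit-capacity links; the topology is an assumption chosen only for illustration.

```python
# Utility-maximizing bandwidth allocation: maximize sum_i log(x_i) subject to the
# link capacity constraints A x <= c. Flow 0 crosses both links; flows 1 and 2 cross
# one link each.
import numpy as np
from scipy.optimize import minimize

A = np.array([[1.0, 1.0, 0.0],      # link-by-flow routing matrix
              [1.0, 0.0, 1.0]])
c = np.array([1.0, 1.0])             # link capacities

objective = lambda x: -np.sum(np.log(x))
cons = [{"type": "ineq", "fun": lambda x: c - A @ x}]
res = minimize(objective, x0=np.full(3, 0.3), bounds=[(1e-6, None)] * 3,
               constraints=cons, method="SLSQP")
print(res.x)                         # roughly (1/3, 2/3, 2/3): the proportionally fair point
```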
In this paper we experimentally compare the classification uncertainty of the randomised Decision Tree (DT) ensemble technique and the Bayesian DT technique with a restarting strategy on a synthetic dataset as well as on some datasets commonly used in the machine learning community.
For quantitative evaluation of classification uncertainty, we use an Uncertainty Envelope dealing with the class posterior distribution and a given confidence probability.
Counting the classifier outcomes, this technique produces feasible evaluations of the classification uncertainty.
Using this technique in our experiments, we found that the Bayesian DT technique is superior to the randomised DT ensemble technique.
This paper describes the present results of introducing Computer Supported Collaborative Learning in a Dutch secondary school.
The inquiry focuses on three main questions: 1. Does CSCL affect students' metacognition and motivation? 2. Does the role of the teacher affect the number and/or quality of the contributions in the database? 3. What opinions do both students and teachers have on CSCL?
Method: The study is carried out in a secondary school in the Netherlands, the Raayland College in Venray.
The Raayland is a school that includes all types of secondary education: Gymnasium, pre-university education (VWO), senior secondary education (HAVO), junior secondary education (MAVO) and preparatory vocational education (VBO).
It is a school with 2,300 students and 158 teachers.
The Raayland College is a so-called pioneering school.
About 120 of 700 high schools in the Netherlands receive extra money from the Department of Education of the government to introduce computers in their curriculum.
The aim of this initiative is that partner schools share their experiences with other schools, which do not receive extra funding.
The Raayland College is present on the World Wide Web.
Participating in this project fits well in this context of a computer supported collaborative learning environment.
Collaborative Learning, with or without computers, is not a common experience of students at Raayland College.
Six classes of the Raayland College have applied collaborative learning, supported by WebKnowledge Forum, in one or two courses.
Each course comprises six lessons.
WebKnowledge Forum is a software program that has been developed by Dr. Marlene Scardamalia and Dr. Carl Bereiter of the Ontario Institute for Studies in Education at the University of Toronto, in succession to the Compu...
Data is produced and consumed everyday by information systems, and its inherent quality is a fundamental aspect to operational and support business activities.
However, inadequate data quality often causes severe economic and social losses in the organizational context.
The problem addressed in this paper is how to assure data quality, both syntactically and semantically, at information entity level.
An information entity is a model representation of a real world business entity.
To address this problem, we have taken an organizational engineering approach, which consists of using a business process-modeling pattern for describing, at a high level of abstraction, how to ensure and validate business object data.
The pattern defines a conceptual data quality model with specific quality attributes.
We use object-oriented concepts to take advantage of concepts such as inheritance and traceability.
The concepts and notation we use are an extension to the Unified Modeling Language.
A case study is detailed exemplifying the use of the proposed concepts.
While the tone of this paper is informal and tongue-in-cheek, we believe we raise two important issues in robotics and multi-modal interface research; namely, how crucial the integration of multiple modes of communication is for adjustable autonomy, which in turn is crucial for having dinner with R2D2.
Furthermore, we discuss how our multimodal interface to autonomous robots addresses these issues by tracking goals, allowing for both natural and mechanical modes of input, and how our robotic system adjusts itself to ensure that goals are achieved, despite interruptions.
In this paper we are concerned with the principles underlying the utility of modelling concepts, in particular in the context of architecture-modelling.
Firstly, some basic concepts are discussed, in particular the relation between information, language, and modelling.
Our primary area of application is the modelling of enterprise architectures and information system architectures, where the selection of concepts used to model different aspects very much depends on the specific concerns that need to be addressed.
The approach is illustrated by a brief review of the relevant aspects of two existing frameworks for modelling of (software intensive) information systems and their architectures.
This paper contributes to this methodology by presenting an improvement over previous algorithms.
Sections II and III give a short outline of previous Boltzmann annealing (BA) and fast Cauchy annealing (FA) algorithms.
Section IV presents the new very fast algorithm.
Section V enhances this algorithm with a re-annealing modification found to be extremely useful for multi-dimensional parameter-spaces.
This method will be referred to here as very fast reannealing (VFR)
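A minimal sketch of a "very fast" annealing loop in this spirit is given below; the per-dimension temperature schedule T_i(k) = T_i(0) exp(-c_i k^(1/D)) and the generating distribution follow the standard very-fast-simulated-reannealing formulation rather than necessarily the exact algorithm described here, and the cost function, constants and acceptance temperature are illustrative.

```python
# Very-fast-annealing-style optimizer: temperature decays as exp(-c * k**(1/D)) per
# dimension, and candidate steps are drawn from the heavy-tailed generating
# distribution y = sgn(u - 0.5) * T * ((1 + 1/T)**|2u-1| - 1), scaled to the bounds.
import math, random

def generate_point(x, T, bounds, rng):
    y = []
    for xi, Ti, (lo, hi) in zip(x, T, bounds):
        while True:
            u = rng.random()
            step = math.copysign(Ti * ((1.0 + 1.0 / Ti) ** abs(2.0 * u - 1.0) - 1.0),
                                 u - 0.5)
            cand = xi + step * (hi - lo)
            if lo <= cand <= hi:            # resample until the candidate is in bounds
                y.append(cand)
                break
    return y

def vfsr(cost, bounds, T0=1.0, c=1.0, iters=2000, seed=1):
    rng = random.Random(seed)
    D = len(bounds)
    x = [rng.uniform(lo, hi) for lo, hi in bounds]
    fx = cost(x)
    best = (fx, x[:])
    for k in range(1, iters + 1):
        T = [T0 * math.exp(-c * k ** (1.0 / D))] * D
        y = generate_point(x, T, bounds, rng)
        fy = cost(y)
        # Metropolis acceptance (acceptance temperature taken equal to T[0] here)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / T[0]):
            x, fx = y, fy
            if fx < best[0]:
                best = (fx, x[:])
    return best

print(vfsr(lambda v: sum(z * z for z in v), [(-5.0, 5.0)] * 3))
```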
Manually querying search engines in order to accumulate a large body of factual information is a tedious, error-prone process of piecemeal search.
Search engines retrieve and rank potentially relevant documents for human perusal, but do not extract facts, assess confidence, or fuse information from multiple documents.
This paper introduces KNOWITALL, a system that aims to automate the tedious process of extracting large collections of facts from the web in an autonomous, domain-independent, and scalable manner.
High dimensional American options have no analytic solution and are difficult to price numerically.
Progress has been made in using Monte Carlo simulation to give both lower and upper bounds on the price.
Building on an idea of Glasserman and Yu we investigate the utility of martingale basis functions in regression based approximation methods.
Regression methods are known to give lower bounds easily; however, upper bounds are usually computationally expensive.
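For reference, a compact regression-based lower-bound estimator (plain Longstaff-Schwartz with a polynomial basis, not the martingale-basis refinement discussed here) for an American put under Black-Scholes might look as follows; all parameters are illustrative.

```python
# Longstaff-Schwartz lower bound for a Bermudan/American put: simulate GBM paths,
# regress discounted future cashflows on a cubic polynomial in the spot at each
# exercise date, and exercise where intrinsic value exceeds estimated continuation.
import numpy as np

def lsm_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     steps=50, paths=50000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / steps
    disc = np.exp(-r * dt)
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
    S = np.hstack([np.full((paths, 1), S0), S])
    cash = np.maximum(K - S[:, -1], 0.0)              # payoff at maturity
    for t in range(steps - 1, 0, -1):
        cash *= disc                                  # discount one step back
        itm = K - S[:, t] > 0.0                       # regress on in-the-money paths only
        if not np.any(itm):
            continue
        x = S[itm, t]
        A = np.vander(x, 4)                           # cubic polynomial basis
        coef, *_ = np.linalg.lstsq(A, cash[itm], rcond=None)
        continuation = A @ coef
        exercise = K - x
        ex_now = exercise > continuation
        idx = np.where(itm)[0][ex_now]
        cash[idx] = exercise[ex_now]                  # exercise: replace future cashflow
    return disc * np.mean(cash)

print(lsm_american_put())   # roughly 6 for these parameters
```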
orting life.
Do not extinguish that element, do not corrupt it, learn that it is divine; and do not substitute wretched scholastic feuds for the voice of nature." ---Essay on Tolerance, Voltaire (1763).
We do not live very long.
By the time a man has lived 20 years, he may have begun to learn some of what was done in previous centuries.
By the time a man has lived 30 years, he may have begun to do something himself, which was not done before.
Yet by the time a man has lived 40 years, he begins to doubt whether any of what was done before, including his own work, is of any real value!
If we lived much longer, say for 100 or 200 years in good health, some people would continue to learn and think and do original things all through their lifetimes.
Those long-lived men and women would then go far beyond us in their thinking and knowledge.
Indeed, the simple-minded ideas which we follow in our philosophy and science and religion today, might look meager and infantile to such superior bein
This paper addresses our experience in using and developing the VEG (Visual Event Grammars) toolkit for the formal specification, verification, design and implementation of graphical user interfaces
K-d trees have been widely studied, yet their complete advantages are often not realized due to ineffective search implementations and degrading performance in high dimensional spaces.
We outline an effective search algorithm for k-d trees that combines an optimal depth-first branch and bound (DFBB) strategy with a unique method for path ordering and pruning.
This technique was developed for improving nearest neighbor (NN) search, but has also proven effective for k-NN and approximate k-NN queries.
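A self-contained sketch of depth-first branch-and-bound nearest-neighbor search on a k-d tree is given below: the search visits the subtree containing the query first and prunes the other subtree whenever the splitting plane lies farther away than the current best distance. The ordering and pruning rules here are the generic ones, not necessarily the exact method described above.

```python
# k-d tree construction plus DFBB nearest-neighbour search with plane-distance pruning.
import numpy as np

class Node:
    __slots__ = ("point", "axis", "left", "right")
    def __init__(self, point, axis, left, right):
        self.point, self.axis, self.left, self.right = point, axis, left, right

def build(points, depth=0):
    if len(points) == 0:
        return None
    axis = depth % points.shape[1]
    points = points[points[:, axis].argsort()]
    mid = len(points) // 2
    return Node(points[mid], axis,
                build(points[:mid], depth + 1),
                build(points[mid + 1:], depth + 1))

def nearest(node, q, best=None):
    if node is None:
        return best
    d = float(np.linalg.norm(node.point - q))
    if best is None or d < best[0]:
        best = (d, node.point)
    diff = q[node.axis] - node.point[node.axis]
    near, far = (node.left, node.right) if diff < 0 else (node.right, node.left)
    best = nearest(near, q, best)            # descend toward the query first (path ordering)
    if abs(diff) < best[0]:                  # prune 'far' unless the plane is within the best radius
        best = nearest(far, q, best)
    return best

pts = np.random.default_rng(0).random((1000, 3))
tree = build(pts)
q = np.array([0.5, 0.5, 0.5])
d, p = nearest(tree, q)
assert np.isclose(d, np.linalg.norm(pts - q, axis=1).min())   # matches brute force
print(d, p)
```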
Software agents have been recognized as one of the main building blocks of the emerging infrastructure for the Semantic Web, but their relationship with more standard components, such as Web servers and clients, is still not clear.
At the server side, a possible role for agents is to enhance the capabilities of servers using their intelligence to provide more complex services and behaviors.
In this paper we explore the role of agents at the server side presenting an Open Service Architecture (OSA) which extends the centralized Internet Reasoning System (IRS-II) to a distributed scenario.
The architecture uses a distributed facilitation protocol which integrates Web Services with agent communication languages.
Finally we present an implementation which extends Tomcat with these features.
India's semi-arid tropical (SAT) region is characterized by seasonally concentrated rainfall, low agricultural productivity, degraded natural resources, and substantial human poverty.
The green revolution that transformed agriculture elsewhere in India had little impact on rainfed agriculture in the SAT.
In the 1980s and 1990s, agricultural scientists and planners aimed to promote rainfed agricultural development through watershed development.
A watershed is an area from which all water drains to a common point, making it an attractive unit for technical efforts to manage water and soil resources for production and conservation.
Watershed projects are complicated, however, by the fact that watershed boundaries rarely correspond to human-defined boundaries.
Also, watershed projects often distribute costs and benefits unevenly, with costs incurred disproportionately upstream, typically among poorer residents, and benefits realized disproportionately downstream, where irrigation is concentrated and the wealthiest farmers own most of the land.
Watershed projects take a wide variety of strategies, ranging from those that are more technocratic to those that pay more attention to the social organization of watersheds.
By the mid-1990s annual expenditure on watershed development in India approached $500 million, but there was relatively little information available on the success of different project approaches.
This study addresses three main research questions: 1) What projects are most successful in promoting the objectives of raising agricultural productivity, improving natural resource management and reducing poverty?
2) What approaches enable them to succeed?
3) What nonproject factors also contribute to achieving these objectives?
The major hypotheses are that participat...
Controlled Fabrication System of Fabry-Perot Optical Fiber Sensors. Wei Huo, The Bradley Department of Electrical and Computer Engineering, Virginia Tech. (Abstract) The use of optical fiber sensors is increasing widely in industry, civil engineering, medicine, defense and research.
Among different categories of these sensors is the Extrinsic Fabry-Perot interferometer (EFPI) sensor which is inherently simple and requires only modest amount of interface electronics.
These advantages make it suitable for many practical applications.
Investigating a cost-effective, reliable and repeatable method for optical fiber sensor fabrication is challenging work.
In this thesis, a system for controlled fabrication of FabryPerot optical fiber sensors is developed and presented as the first attempt for the long-term goal of automated EFPI sensor fabrication.
The sensor fabrication control system presented here implements a real-time control of a carbon dioxide (CO 2 ) laser as sensor bonding power, an optical fiber white light interferometric subsystem for real-time monitoring and measurement of the air gap separation in the Fabry-Perot sensor probe, and real-time control of a piezoelectric (PZT) motion subsystem for sensor alignment.
The design of optoelectronic hardware and computer software is included.
A large number of sensors are fabricated using this system and are tested under high temperature and high pressure.
This system as a prototype system shows the potential in automated sensor fabrication.
Acknowledgements: First and foremost, I would like to thank Dr. Anbo Wang, my advisor and graduate committee chair, for his thoughtful guidance and constant encouragement and for giving me the opportunity to work at the Photonics Lab as his graduate student.
I am extremely gr...
Hydrogen Wishes, presented at MIT's Center for Advanced Visual Studies, explores the themes of wishes and peace.
It dramatizes the intimacy and power of transforming one's breath and vocalized wishes into a floating sphere, a bubble charged with hydrogen.
The floating bubble represents transitory anticipation as a wish is sent on its trajectory toward fulfillment.
Light, heat sensors, microphones, projected imagery, hydrogen and ordinary soap bubbles come together in this exploration of human aspiration.
As in our lives, many wishes escape, but many others are catalyzed by the heat of the candle and become ethereal.
The fulfilled wishes then become living artifacts within projected photographs of Earth cities as seen from outer space.
With the rise of international bond markets in the 1990s, the role of sovereign credit ratings has become increasingly important.
In the aftermath of the Asian crises, a series of empirical studies on the effects of sovereign ratings appeared.
The theoretical literature on the topic, however, remains rather scarce.
We propose a model of rating agencies that is an application of global game theory in which heterogeneous investors act strategically.
The model is consistent with the main findings by the empirical literature.
In particular, it is able to explain the independent effect of sovereign ratings on the cost of debt and the failure of rating agencies to predict crises.
An efficient methodology for simulating paths of fractional stable motion is presented.
The proposed approach is based on invariance principles for linear processes.
A detailed analysis of the error terms involved is given and the performance of the method is assessed through an extensive simulation study.
In economics and game theory agents are assumed to follow a model of perfect rationality.
This model of rationality assumes that the rational agent knows all and will take the action that maximizes her utility.
We can find evidence in the psychology and economics literature as well as in our own lives that shows human beings do not satisfy this definition of rationality.
Thus there are many who look to study some notion of bounded rationality.
Unfortunately, models of bounded rationality suffer from the exact phenomena that they attempt to explain.
Specifically, models of bounded rationality are bounded.
Understanding the limits of various rationality models will make clearer their contribution and place in the overall picture of rationality.
The primary contribution of this paper is the introduction of a new method to reduce significantly the computation time necessary to solve the multidimensional assignment (MDA) problem.
In the first part of the track-oriented method, clusters are formed to reduce the amount of computation time necessary for correlation.
For each formed target tree a mean track is formed.
The different mean tracks are used to determine independent components.
Each independent component corresponds with a cluster.
In the second part of the method the original MDA problem is decomposed in smaller, independent MDA problems, using a root track label for each target tree.
Energy-efficiency and reliability are two major design constraints influencing next generation system designs.
In this work, we focus on the interaction between power consumption and reliability considering the on-chip data caches.
First, we investigate the impact of two commonly used architecturallevel leakage reduction approaches on the data reliability.
Our results indicate that the leakage optimization techniques can have very different reliability behavior as compared to an original cache with no leakage optimizations.
Next, we investigate providing data reliability in an energy-efficient fashion in the presence of soft errors.
In contrast to current commercial caches that treat and protect all data using the same error detection/correction mechanism, we present an adaptive error coding scheme that treats dirty and clean data cache blocks differently.
Furthermore, we present an early-write-back scheme that enhances the ability to use a less powerful error protection scheme for a longer time without sacrificing reliability.
Experimental results show that proposed schemes, when used in conjunction, can reduce dynamic energy of error protection components in L1 data cache by 11% on average without impacting the performance or reliability.
We investigate the use of dominating-set neighbor elimination as an integral part of the distribution of route requests using the Ad hoc On-demand Distance Vector (AODV) protocol as an example of on-demand routing protocols.
We use detailed simulations to show that simply applying dominant pruning (DP) to the distribution of route requests in AODV results in pruning too many route requests in the presence of mobility and cross-traffic.
Accordingly, we introduce several heuristics to compensate for the effects of DP and show that the resulting AODV with Dominating Set heuristics (AODV-DS) has comparable or better delivery ratio, network load, and packet latency than conventional AODV.
AODV-DS achieves over 70% savings in RREQ traffic compared with conventional AODV, and in some situations AODV-DS may have a lower control overhead using Hello packets than conventional AODV without Hellos.
this paper.
On January 2, 1985, Zaman Akil sent the Academy of Sciences a short summary of a longer work.
At his request, the Perpetual Secretary of the Academy, Prof. Paul Germain, sent the letter to several members of the Academy, including myself.
I was the only one who agreed to discuss it with the author.
His strange result was dismissed a priori by my colleagues as being a purely spurious relation without justification, and which could not be understood, since Akil equated a dimensionless quantity to a physical quantity of dimensions L .
A long correspondence then ensued between Mr. Akil and myself, notwithstanding the difficulties created by the fact that Mr. Akil divides his time between London and Kuwait.
This correspondence resulted in the paper published below (which was submitted to the Academy in 1988-1989) together with my "note to the reader" in defence of Akil's peculiar results
Prosody is one of the challenges that experts in speech synthesis and recognition will face in the coming years.
Today's speech synthesis systems achieve better performance than older systems, but the speech they produce does not sound as natural as a human voice.
Those systems are already able to synthesize speech that is almost perfect from the segmental point of view: most of the artefacts of former synthesizers are no longer present.
Further improvements can only come from a better implementation of prosody in future systems.
New systems have to be able to control intonation, tempo, and loudness of voice in order to obtain the most natural speech.
Automatic speech recognition systems also make poor use of prosodic features.
Speaker independence and noise robustness are not the only challenges for future recognition systems.
Further improvement can derive from a better processing of prosody: today’s systems require too much effort from the user to keep tempo and loudness constant.
Those systems are thus not able to deal with spontaneous speech.
In this work some prosodic processing tools will be shown.
One application of these tools will be the extraction of prosodic features to be used as input to automatic recognition, and the automatic prosodic labelling of corpora for speech synthesis purposes.
In the first chapter of this thesis a brief introduction to prosody will be given, the most important similar systems described in the literature will be presented, and the motivations for this work will be discussed.
The second chapter will show original algorithms for prosodic analysis developed by the author.
Most of the routines used in this work crucially rely on the proper choice of a number of parameters.
While in many other similar works they are set empirically, in the third chapter some tools are shown to effectively tune these parameters.
The problem is essentially reduced to a minimization of an n-variable function.
Processing time of these tools will be reduced by using distributed computing.
The fourth chapter will show the architecture of the proposed system; it is essentially a library of classes able to model various prosodic entities.
In the fifth chapter comparison between automatic and human analysis will be shown.
In the same chapter, parameter tuning tools will be compared and benefits from distributed computing will be also shown.
Recently, a time-domain equalizer (TEQ) design for multicarrier-based systems has been proposed which claims to minimize the delay spread of the overall channel impulse response.
We show that this is true only in an approximate sense; depending on the channel considered, the loss in delay spread with respect to the true minimum can be significant.
An iterative algorithm to find this minimum is presented, whose computational complexity is similar to that of standard TEQ designs like the MSSNR approach of Melsa et al.
It is observed that the method iteratively and automatically seeks the time reference yielding best performance, an advantage with respect to the MSSNR design which must compute several TEQs over a range of time references in order to select the optimum.
We present novel intelligent tools for mining 3D medical images.
We focus on detecting discriminative Regions of Interest (ROIs) and mining associations between their spatial distribution and other clinical assessment.
To identify these highly informative regions, we propose utilizing statistical tests to selectively partition the 3D space into a number of hyper-rectangles.
We apply quantitative characterization techniques to extract k-dimensional signatures from the highly discriminative ROIs.
Finally, we use neural networks for classification.
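As a rough illustration of the partitioning idea, the sketch below (Python, on synthetic data) keeps the hyper-rectangles whose mean intensity separates two subject groups according to a t-test; the block size, significance level, and helper name are illustrative assumptions, not the paper's recursive procedure.

```python
import numpy as np
from scipy.stats import ttest_ind

def discriminative_blocks(group_a, group_b, block=8, alpha=0.01):
    """Split the volume into block^3 hyper-rectangles and keep those whose
    mean intensity differs significantly between the two groups.
    group_a, group_b: arrays of shape (n_subjects, nx, ny, nz)."""
    kept = []
    _, nx, ny, nz = group_a.shape
    for x0 in range(0, nx, block):
        for y0 in range(0, ny, block):
            for z0 in range(0, nz, block):
                a = group_a[:, x0:x0 + block, y0:y0 + block, z0:z0 + block].mean(axis=(1, 2, 3))
                b = group_b[:, x0:x0 + block, y0:y0 + block, z0:z0 + block].mean(axis=(1, 2, 3))
                _, p = ttest_ind(a, b)
                if p < alpha:
                    kept.append((x0, y0, z0))  # corner of a candidate discriminative ROI
    return kept

rng = np.random.default_rng(0)
healthy = rng.normal(size=(10, 32, 32, 32))
patients = rng.normal(size=(10, 32, 32, 32))
patients[:, 8:16, 8:16, 8:16] += 1.0  # synthetic group difference in one region
rois = discriminative_blocks(healthy, patients)
```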
The present paper is concerned with the statistical analysis of the resolution limit in a so-called "diffraction-limited" imaging system.
The canonical case study is that of incoherent imaging of two closely-spaced sources of possibly unequal brightness.
The objective is to study how far beyond the classical Rayleigh limit of resolution one can reach at a given signal to noise ratio.
The analysis uses tools from statistical detection and estimation theory.
Specifically, we will derive explicit relationships between the minimum detectable distance between two closely-spaced point sources imaged incoherently and the available SNR.
For completeness, asymptotic performance analysis for the estimation of the unknown parameters is carried out using the Cramér-Rao bound.
To gain maximum intuition, the analysis is carried out in one dimension, but it can be extended to the two-dimensional case and to more practical models.
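For reference, the Cramér-Rao bound used in such asymptotic analyses takes the usual scalar form; here d denotes the source separation and p(x; d) the likelihood of the observed data (a generic statement of the bound, not the paper's specific expression):

\[
\operatorname{var}(\hat{d}) \;\ge\; I(d)^{-1},
\qquad
I(d) \;=\; -\,\mathbb{E}\!\left[\frac{\partial^{2} \ln p(\mathbf{x};\, d)}{\partial d^{2}}\right].
\]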
We are currently developing a system for visualizing Usenet newsgroups at a variety of scales.
A macro/landscape view depicts many newsgroups and the relationships among them; a medium view depicts the interactions within a single group; a close-up view depicts the individual in the context of the conversational situation.
This paper describes the development of a three dimensional geometrically constrained target tracker.
This tracker combines the predictions of a circular prediction algorithm and a constant velocity filter by utilizing the Covariance Intersection.
This combined prediction can be updated with the consequent measurement using the linear estimator.
The proposed technique is illustrated on a benchmark trajectory including circular and straight line maneuvers.
We introduce an automated and accurate system for registering pre-operative 3D MR and CT images with intraoperative 3D ultrasound images based on the vessels visible in both.
The clinical goal is to guide the radio-frequency ablation (RFA) of liver lesions using percutaneous ultrasound even when the lesions are not directly visible using ultrasound.
The lesions' locations and desired RFA sites are indicated on pre-operative images, and those markings are made to appear within the intra-operative 3D ultrasound images.
This paper deals with the dependability evaluation of software programs of iterative nature.
In this work we define a model that is able to account for both dependencies between input values of successive iterations and the effects of sequences of consecutive software failures on the reliability of the controlled system.
Unlike previously proposed models, some effort is devoted to addressing the problem of how to obtain accurate estimates of the basic parameters.
A model is thus proposed that, by requiring the designers or users to provide information usually obtainable by experimental techniques (e.g., testing), is more useful and more generally applicable.
Then a
We describe an architecture for next generation, distributed data mining systems which integrates data services to facilitate remote data analysis and distributed data mining, network protocol services for high performance data transport, and path services for optical paths.
We also present experimental evidence using geoscience data that this architecture scales the remote analysis of Gigabyte size data sets over long haul, high performance networks.
Pervasive computing allows a user to access an application on heterogeneous devices continuously and consistently.
However, it is challenging to deliver complex applications on resource-constrained mobile devices, such as cell phones and PDAs.
Different approaches, such as application-based or system-based adaptations, have been proposed to address the problem.
However, existing solutions often require degrading application fidelity.
We believe that this problem can be overcome by dynamically partitioning the application and offloading part of the application execution to a powerful nearby surrogate.
This will enable pervasive application delivery to be realized without significant fidelity degradation or expensive application rewriting.
Because pervasive computing environments are highly dynamic, the runtime offloading system needs to adapt to both application execution patterns and resource fluctuations.
Using the Fuzzy Control model, we have developed an offloading inference engine to adaptively solve two key decision-making problems during runtime offloading: (1) timely triggering of adaptive offloading, and (2) intelligent selection of an application partitioning policy.
Extensive trace-driven evaluations show the effectiveness of the offloading inference engine.
In this paper, we discuss acoustic parameters and a classifier we developed to distinguish between nasals (/m/, /n/, /ng/) and semivowels (/r/, /l/, /w/, /y/).
Based on the literature and our own acoustic studies, we use an onset/offset measure to capture the consonantal nature of nasals, and an energy ratio, a low spectral peak measure and a formant density measure to capture the nasal murmur
Current Quality of Service models such as those embodied in the Differentiated Services proposal, rely on data path aggregation to achieve scalability.
Data path aggregation bundles into a single aggregate multiple flows with the same quality requirements, hence decreasing the amount of state to be kept.
A similar scalability concern exists on the control path, where the state required to account for individual reservations needs to be minimized.
There have been several proposals aimed at control path aggregation, and the goal of this paper is to expand on these works in an attempt to gain a better understanding of the various parameters that influence the efficiency of different approaches.
In particular, we focus on inter-domain control aggregation, and compare an Autonomous System (AS) sink-tree based approach with two examples of a shared AS segment based approach, in terms of the amount of state kept, both per AS and per edge router.
Our main contributions are in providing a greater understanding into the design of efficient control path aggregation methods.
This paper aims to help empirical researchers benefit from recent advances in causal inference.
The paper stresses the paradigmatic shifts that must be undertaken in moving from traditional statistical analysis to causal analysis of multivariate data.
Special emphasis is placed on the assumptions that underlie all causal inferences, the languages used in formulating those assumptions, and the conditional nature of causal claims inferred from nonexperimental studies.
These emphases are illustrated through a brief survey of recent results, including the control of confounding, the assessment of causal effects, the interpretation of counterfactuals, and a symbiosis between counterfactual and graphical methods of analysis.
We consider the problem of identifying the orders and the model parameters of PWARX hybrid models from noiseless input/output data.
We cast the identification problem in an algebraic geometric framework in which the number of discrete states corresponds to the degree of a multivariate polynomial p and the orders and the model parameters are encoded on the factors of p. We derive a rank constraint on the input/output data from which one can estimate the coefficients of p. Given p, we show that one can estimate the orders and the parameters of each ARX model from the derivatives of p at a collection of regressors that minimize a certain objective function.
Our solution does not require previous knowledge about the orders of the ARX models (only an upper bound is needed), nor does it constrain the orders to be equal.
Also the switching mechanism can be arbitrary, hence the switches need not be separated by a minimum dwell time.
We illustrate our approach with an algebraic example of a switching circuit and with simulation results in the presence of noisy data.
this paper is applicable to binary data inputs only; investigation of the non-binary ART
[Figure 2: vigilance curves, panels "Pseudo-Random Vs Random" and "Random Vs Non-Random Data".]
Fig. 2a. Baseline pseudo-random data reaches maximal vigilance faster than non-random data. This indicates that the clustering tendency is not caused by chance clustering.
Fig. 2b. True random data reach maximal vigilance faster than the baseline, which is an indication that their clustering tendency is caused by mere chance.
In multiprogrammed systems, synchronization often turns out to be a performance bottleneck and the source of poor fault-tolerance.
Wait-free and lock-free algorithms can do without locking mechanisms, and therefore do not suffer from these problems.
We present an efficient almost wait-free algorithm for parallel accessible hashtables, which promises more robust performance and reliability than conventional lock-based implementations.
Our solution is as efficient as sequential hashtables.
It can easily be implemented using C-like languages and requires on average only constant time for insertion, deletion or accessing of elements.
Apart from that, our new algorithm allows the hashtables to grow and shrink dynamically when needed.
A true problem of lock-free algorithms is that they are hard to design correctly, even when apparently straightforward.
Ensuring the correctness of the design at the earliest possible stage is a major challenge in any responsible system development.
Our algorithm contains 81 atomic statements.
In view of the complexity of the algorithm and its correctness properties, we turned to the interactive theorem prover PVS for mechanical support.
We employ standard deductive verification techniques to prove around 200 invariance properties of our almost wait-free algorithm, and describe how this is achieved using the theorem prover PVS.
This paper sketches the design of PAST, a large-scale, Internet-based, global storage utility that provides scalability, high availability, persistence and security.
PAST is a peer-to-peer Internet application and is entirely selforganizing.
PAST nodes serve as access points for clients, participate in the routing of client requests, and contribute storage to the system.
Nodes are not trusted; they may join the system at any time and may silently leave the system without warning.
Yet, the system is able to provide strong assurances, efficient storage access, load balancing and scalability.
We propose selective bitplane encryption to provide secure image transmission in low power mobile environments.
Two types of ciphertext only attacks against this scheme are discussed and we use the corresponding results to derive conditions for a secure use of this technique.
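A minimal sketch of the selective-bitplane idea is given below (Python/NumPy). The keystream is supplied by the caller; in a real deployment it would come from a stream cipher such as AES in counter mode. The function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def encrypt_selected_bitplanes(img, keystream, planes=(7, 6)):
    """Encrypt only the selected bitplanes of an 8-bit grayscale image.

    img: 2-D uint8 array; keystream: uint8 array of the same shape
    (stand-in for a stream-cipher output); planes: bit indices, 7 = MSB.
    """
    mask = np.uint8(sum(1 << p for p in planes))   # bits to encrypt
    encrypted_bits = (img ^ keystream) & mask      # cipher only the selected planes
    untouched_bits = img & np.uint8(~mask & 0xFF)  # remaining planes stay in the clear
    return encrypted_bits | untouched_bits

# toy usage with a random "image" and keystream
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
ks = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cipher_img = encrypt_selected_bitplanes(img, ks)
```

Decryption applies the same function with the same keystream, since XOR is its own inverse on the selected planes.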
For the provision of Quality of Service (QoS) in the Internet, the Differentiated Services (DiffServ) model has been proposed as a cost-effective solution.
Traffic is classified into several service classes with different priorities.
The premium class traffic has the highest priority.
The routing algorithm used by the premium class service has significant effects not only on its own traffic, but on all other classes of traffic as well.
The shortest hopcount routing scheme used in current Internet turns out to be no longer sufficient in DiffServ networks.
Based on
Good quality terrain models are becoming more and more important, as applications such as runoff modelling are being developed that demand better surface orientation information than is available from traditional interpolation techniques.
A consequence is that poor-quality elevation grids must be massaged before they provide useable runoff models.
This paper describes improved methods for extracting good quality terrain models from topographic contour maps, which despite modern techniques are still the most available form of elevation information.
Recent work on the automatic reconstruction of curves from point samples, and the generation of medial axis transforms (skeletons) has greatly helped in the visualization of the relationships between sets of boundaries, and families of curves.
The insertion of skeleton points guarantees the elimination of all flat triangles.
Additional assumptions about the local uniformity of slopes give us enough information to assign elevation values to these skeleton points.
Various interpolation techniques were compared using visualization of the enriched contour data.
Examination of the quality and consistency of the resulting maps indicates the required properties of the interpolation method in order to produce terrain models with valid slopes.
The result provides us with a surprisingly realistic model of the surface - that is, one that conforms well to our subjective interpretation of what a real landscape should look like.
Given uncertainty in the input model and parameters of a simulation study, the goal of the simulation study often becomes the estimation of a conditional expectation.
The conditional expectation is expected performance conditional on the selected model and parameters.
The distribution of this conditional expectation describes precisely, and concisely, the impact of input uncertainty on performance prediction.
In this paper we estimate the density of a conditional expectation using ideas from the field of kernel density estimation.
We present a result on asymptotically optimal rates of convergence and examine a number of numerical examples.
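The sketch below shows one way such an estimate could be computed (Python, with a toy stand-in simulation; the model, sample sizes, and parameter distribution are placeholders, not the paper's setup): outer draws of the uncertain input parameter, inner replications to estimate each conditional expectation, and a kernel density estimate over those estimates.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)

def simulate_output(theta, rng):
    """Stand-in simulation model: output = theta plus noise."""
    return theta + rng.normal(scale=2.0)

n_outer, n_inner = 200, 50
cond_expectations = np.empty(n_outer)
for i in range(n_outer):
    theta = rng.normal(loc=10.0, scale=1.0)              # draw an uncertain input parameter
    runs = [simulate_output(theta, rng) for _ in range(n_inner)]
    cond_expectations[i] = np.mean(runs)                 # estimate E[output | theta]

# kernel density estimate of the distribution of the conditional expectation
kde = gaussian_kde(cond_expectations)
grid = np.linspace(cond_expectations.min(), cond_expectations.max(), 200)
density = kde(grid)
```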
Solving equations in equational theories is a relevant programming paradigm which integrates logic and equational programming into one unified framework.
Efficient methods based on narrowing strategies to solve systems of equations have been devised.
In this paper, we formulate a narrowing-based equation solving calculus which makes use of a top-down abstract interpretation strategy to control the branching of the search tree.
We define
This paper describes the MUMIS project, which applies ontology based Information Extraction to improve the results of Information Retrieval in multimedia archives.
It makes use of a domain specific ontology, multilingual lexicons and reasoning algorithms to automatically create a semantic annotation of sources.
The innovative aspect is the use of a cross document merging algorithm that combines the information extracted from separate textual sources to produce an integrated, more complete, annotation of the material.
This merging and unification process uses ontology based reasoning and scenarios which are extracted automatically from annotated sources.
Middleware provides simplicity and uniformity for the development of distributed applications.
However, the modularity of middleware architectures is starting to disintegrate and become complicated due to the interaction of too many orthogonal concerns imposed by a wide range of application requirements.
This is not due to bad design but rather due to the limitations of the conventional architectural decomposition methodologies.
We introduce the principles of horizontal decomposition (HD) to address this problem with a mixed-paradigm middleware architecture.
HD provides guidance for the use of conventional decomposition methods to implement the core functionalities of middleware and the use of aspect orientation to address its orthogonal properties.
Our evaluation of the horizontal decomposition principles focuses on refactoring major middleware functionalities into aspects in order to modularize and isolate them from the core architecture.
New versions of the middleware platform can be created through combining the core and the flexible selection of middleware aspects such as IDL data types, the oneway invocation style, the dynamic messaging style, and additional character encoding schemes.
As a result, the primary functionality of the middleware is supported with a much simpler architecture and enhanced performance.
Moreover, customization and configuration of the middleware for a wide-range of requirements becomes possible.
In this paper we investigate a problem arising in decentralized registration of sensors.
The application we consider involves a heterogeneous collection of sensors - some sensors have on-board Global Positioning System (GPS) capabilities while others do not.
All sensors have wireless communications capability but the wireless communication has limited effective range.
Sensors can communicate only with other sensors that are within a fixed distance of each other.
Sensors with GPS capability are self-registering.
Sensors without GPS capability are less expensive and smaller but they must compute estimates of their location using estimates of the distances between themselves and other sensors within their radio range.
GPS-less sensors may be several radio hops away from GPS-capable sensors so registration must be inferred transitively.
Our approach to solving this registration problem involves minimizing a global potential or penalty function by using only local information, determined by the radio range, available to each sensor.
The algorithm we derive is a special case of a more general methodology we have developed called "Emergence Engineering".
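A minimal sketch of a local update of this kind is given below (Python/NumPy): each GPS-less sensor takes a gradient step that reduces the penalty summed over its in-range neighbors only. The step size and penalty form are illustrative assumptions, not the paper's "Emergence Engineering" formulation.

```python
import numpy as np

def local_update(pos, neighbor_positions, measured_ranges, step=0.1):
    """One local step for a GPS-less sensor: descend the penalty
    sum_j (||pos - p_j|| - d_j)^2 using only in-range neighbors."""
    grad = np.zeros_like(pos)
    for p_j, d_j in zip(neighbor_positions, measured_ranges):
        diff = pos - p_j
        r = np.linalg.norm(diff) + 1e-9          # avoid division by zero
        grad += 2.0 * (r - d_j) * diff / r
    return pos - step * grad

# toy usage: one sensor refining its estimate against three neighbors
pos = np.array([0.0, 0.0])
neighbors = [np.array([3.0, 0.0]), np.array([0.0, 4.0]), np.array([-2.0, -2.0])]
ranges = [2.0, 3.0, 2.5]
for _ in range(100):
    pos = local_update(pos, neighbors, ranges)
```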
In this paper I will discuss the central role of licensing constraints (henceforth "LC's") in phonological systems and how they may be viewed as one of the principal engines of phonological events.
LC's were originally designed to explain restrictions on the combinatorial properties of elements.
Given a theory of phonological expressions (to be given below), the underlying assumption is that any syntactically well-formed combination of elements should be present in a phonological system unless explicitly excluded.
Since, as far as we know, no language expresses the full range of theoretically possible combinations of elements, LC's were proposed as language-specific constraints on such possibilities.
A subset of a small set of possible LC's is sufficient to define the lexical set of, say, nuclear expressions of a given linguistic system.
Recent work has shown that the usefulness of LC's extends far beyond their original raison d'être.
In particular it is a pleasure to recognise the two seminal articles of Monik Charette & Asli Göksel (Charette and Göksel 1996; Charette and Göksel 1998), which have provided the leadership in this field and the inspiration for this present work.
I will briefly review part of their work in a later section.
In the following section, I give a succinct summary of the element theory of phonological representations
Introduction Identifying and recruiting prospective technology education teachers has been an ongoing concern for more than two decades.
Considerable research was conducted during the late 1970s and early 1980s relative to teacher recruitment (Craft, 1979; Devier, 1982).
These studies were prompted by declining enrollments in university programs and reported shortages of industrial arts teachers in forty-one states (Miller, 1980; Tomlinson, 1982; Wright, 1985).
In some cases, this shortage of teachers led to high school programs being closed or cut back, the utilization of under-qualified personnel, and the abandonment of planned expansion.
Simultaneously, university programs experienced significant drops in industrial arts teaching majors as students increasingly selected industrial technology or management options over teaching (Devier & Wright, 1988).
This trend of declining enrollments has continued and has now reached critical proportions (Volk, 1997).
Current data suggest that a
Security is a good example for Aspect-Oriented Programming, but there are few reusable software components at the level of aspects.
We introduce an elementary implementation of a reusable and generic aspect library providing security functions.
This aspect library is based on AspectJ and common Java security packages, and includes typical security mechanisms.
We describe the principle, architecture and usage of the security aspect library, and give a practical example of application in which security has been implemented using the aspects in the library.
We also discuss the advantages and disadvantages of aspect library in reusability and generality, and the future efforts we will focus on.
This paper uses a "natural experiment" in Canadian divorce law reform to discriminate empirically between unitary and Nash-bargained models of the household.
Using time-series data from three Canadian provinces, it demonstrates that following landmark divorce law reforms in the 1970s---reforms that led to improvements in women's expected settlement upon divorce in Ontario and British Columbia, suicide rates for older, married women in these provinces registered a sharp decline.
Similar declines were not registered for younger, unmarried women or men in Ontario and British Columbia, nor for older, married women in Quebec, where the legal basis for divorce did not change.
These results are consistent with Nash-bargained models of the household but not with the unitary model.
The typical processing paradigm in natural language processing is the "pipeline" approach, where learners are used at one level, their outcomes are used as features for a second level of predictions, and so on.
In addition to accumulating errors, it is clear that sequential processing is a crude approximation to a process in which interactions occur across levels and downstream decisions often interact with previous decisions.
This work develops a general...
Prior to the deployment of any new or replacement component within a transportation system, it should be demonstrated that the modified system meets or exceeds the safety requirements of the original system.
Since the occurrence of a mishap in such a system is a rare event, it is neither cost nor time effective to build and to test a prototype in an actual system prior to deployment.
The Axiomatic Safety-Critical Assessment Process (ASCAP) is a simulation methodology that models the complete system and analyzes the effects of equipment changes.
By carefully constraining the amount of the overall system state space required for analyses, it probabilistically determines the sequence of events that lead to mishaps.
ASCAP is applicable to any transportation system that is governed by a well-defined operational environment.
In this paper, we exhibit security flaws in the MICROCAST pay-as-you-watch system.
From the sole knowledge of public parameters, we show how any intruder is able to forge a coin and so to freely get access to the service.
In 2002-2003, the American College of Medical Informatics (ACMI) undertook a study of the future of informatics training.
This project capitalized on the rapidly expanding interest in the role of computation in basic biological research, well characterized in the NIH BISTI report.
The defining activity of the project was the three-day 2002 Annual Symposium of the College.
A committee, comprised of the authors of this report, subsequently carried out activities, including interviews with a broader informatics and biological sciences constituency, collation and categorization of observations, and generation of recommendations.
The committee viewed biomedical informatics as an interdisciplinary field, combining basic informational and computational sciences with application domains including health care, biological research, and education.
Consequently, effective training in informatics, viewed from a national perspective, should encompass four key elements: 1) curricula that integrate experiences in the computational sciences and application domains, rather than just concatenating them; 2) diversity among trainees, with individualized, interdisciplinary cross-training allowing each trainee to develop key competencies that he/she does not initially possess, 3) direct immersion in research and development activities, and 4) exposure across the wide range of basic informational and computational sciences.
Informatics training programs that implement these features, irrespective of their funding sources, will meet and exceed the challenges raised by the BISTI report, and optimally prepare their trainees for careers in a field that continues to evolve.
We present a review of methods for the construction and deformation of character models.
We consider both state of the art research and common practice.
In particular we review applications, data capture methods, manual model construction, polygonal, parametric and implicit surface representations, basic geometric deformations, free form deformations, subdivision surfaces, displacement map schemes and physical deformation.
Field Programmable Gate Arrays (FPGAs) hold the possibility of dynamic reconfiguration.
The key advantages of dynamic reconfiguration are the ability to rapidly adapt to dynamic changes and better utilization of the programmable hardware resources for multiple applications.
However, with the advent of multi-million gate equivalent FPGAs, configuration time is increasingly becoming a concern.
High reconfiguration cost can potentially wipe out any gains from dynamic reconfiguration.
One solution to alleviate this problem is to exploit the high levels of redundancy in the configuration bitstream by compression.
In this paper, we propose a novel configuration compression technique that exploits redundancies both within a configuration's bitstream as well as between bitstreams of multiple configurations.
By maximizing reuse, our results show that the proposed technique performs 26.5--75.8% better than the previously proposed techniques.
To the best of our knowledge, ours is the first work that performs inter-configuration compression.
Current interest in ad hoc and peer-to-peer networking technologies prompts a re-examination of models for configuration management, within these frameworks.
In the future, network management methods may have to scale to millions of nodes within a single organization, with complex social constraints.
In this paper, we discuss whether it is possible to manage the configuration of large numbers of network devices using well-known and not-so-well-known configuration models, and we discuss how the special characteristics of ad hoc and peer-to-peer networks are reflected in this problem.
In this paper we describe a finite-capacity algorithm that can be used for production scheduling in a semiconductor wafer fabrication facility (wafer fab).
The algorithm is a beam-search-type algorithm.
We describe the basic features of the algorithm.
The implementation of the algorithm is based on the ILOG-Solver libraries.
We describe the simulation environment, which is used to evaluate the performance of the proposed algorithm.
We show some results from computational experiments with the algorithm and the simulation test-bed described.
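A generic beam-search skeleton of the kind such an algorithm builds on is sketched below (Python); the expand and score callbacks are placeholders for the scheduler's dispatching decisions and objective, which the abstract does not detail.

```python
def beam_search(initial, expand, score, beam_width=5, max_steps=100):
    """Generic beam-search skeleton: at each step expand all partial
    solutions kept so far and retain only the beam_width best (lowest
    score).  expand(state) -> iterable of child states."""
    beam = [initial]
    for _ in range(max_steps):
        children = [c for state in beam for c in expand(state)]
        if not children:
            break                       # all partial schedules are complete
        beam = sorted(children, key=score)[:beam_width]
    return min(beam, key=score)
```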
This paper presents an overview of the use of simulation algorithms in the field of financial engineering, assuming on the part of the reader no familiarity with finance and a modest familiarity with simulation methodology, but not its specialist research literature.
The focus is on the challenges specific to financial simulations and the approaches that researchers have developed to handle them, although the paper does not constitute a comprehensive survey of the research literature.
It offers to simulation researchers, professionals, and students an introduction to an application of increasing significance both within the simulation research community and among financial engineering practitioners.
This paper proposes a method of creating a web document representation using web ontology concepts instead of a 'bag-of-words'.
However, since the web domain has a very small vocabulary, we are unable to transform all or most of the keywords of the web document into web ontology concepts.
This particular problem is solved by creating an extended part of the web ontology with words obtained from an external linguistics knowledgebase.
The promising outcome as the result of Natural Language Processing (NLP) and Information Retrieval (IR) fields being merged together convinces us to create the extended ontology using NLP technique.
The process of selecting requirements for a release of a software product is challenging as the decision-making is based on uncertain predictions of issues such as market value and development cost.
This paper presents a method aimed at supporting software product development organisations in the identification of process improvement proposals to increase requirements selection quality.
The method is based on an in-depth analysis of requirements selection decision outcomes after the release has been launched to the market and is in use by customers.
The method is validated in a case study involving real requirements and industrial requirements engineering experts.
In this paper we will describe an approach to evaluating learning technology which we have developed over the last twenty-five years, outline its theoretical background and compare it with other evaluation frameworks.
This has given us a set of working principles from evaluations we have conducted at the Open University and from the literature, which we apply to the conduct of evaluations.
These working practices are summarised in the Context, Interactions and Outcomes (CIAO!) model.
We describe here how we applied these principles, working practices and models to an evaluation project conducted in Further Education.
We conclude by discussing the implications of these experiences for the future conduct of evaluations.
One way to relieve resources when executing a program on constrained devices is to migrate parts of it to other machines in a distributed system.
Ideally, a system can automatically decide where to place parts of a program to satisfy resource constrains (CPU, memory bandwidth, battery power, etc.).
We describe a compiler and virtual machine infrastructure as the context for research in automatic program partitioning and optimization for distributed execution.
We define program partitioning as the process of decomposing a program into multiple tasks.
The main motivation for our design is to enable experimenting with optimizing program execution on resource-constrained devices with respect to memory consumption, CPU time, battery lifetime and communication.
Bluetooth and IEEE 802.11 (Wi-Fi) are two communication protocol standards which define a physical layer and a MAC layer for wireless communications within a short range (from a few meters up to 100 meters) with low power consumption (from less than 1 mW up to 100 mW).
not found in spite of differences in phytoplankton species composition: at the Antarctic Polar Front, biomass was dominated by a diatom population of Fragilariopsis kerguelensis, whereas smaller cells, including chrysophytes, were relatively more abundant in the Antarctic Circumpolar Current beyond the influence of frontal systems.
Because mixing was often in excess of 100 m in the latter region, diatom cells may have been unable to fulfil their characteristically high Fe demand at low average light conditions, and thus became co-limited by both resources.
Using a model that describes the C incorporation, the consistency was shown between the dynamics in the glucan pool in the field experiments and in laboratory experiments with an Antarctic diatom, Chaetoceros brevis.
The glucan respiration rate was almost twice as high during the dark phase as during the light phase, which is consistent with the role of glucan as a reserve supplying energy and carbon skeletons for conti
The focus of this paper is on IRs.
We believe that a more widespread acceptance and utilization of this approach has been hindered so far by a shortage of theoretical and experimental evidence suggesting its utility and overall feasibility for practical data mining.
The goal of this research is to contribute to fill this gap
This paper outlines a new autofocus procedure for improving SAS imagery, based on wavefront sensing techniques from the astronomical imaging field.
In this paper, we discuss acoustic parameters and a classifier we developed to distinguish between nasals and semivowels.
Based on the literature and our own acoustic studies, we use an onset/offset measure to capture the consonantal nature of nasals, and an energy ratio, a low spectral peak measure and a formant density measure to capture the nasal murmur.
These acoustic parameters are combined using Support Vector Machine based classifiers.
Classification accuracies of 88.6%, 94.9% and 85.0% were obtained for prevocalic, postvocalic and intervocalic sonorant consonants, respectively.
The overall classification rate was 92.4% for nasals and 88.1% for semivowels.
These results have been obtained for the TIMIT database, which was collected from a large number of speakers and contains substantial coarticulatory effects.
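A minimal sketch of the classification step is shown below (Python/scikit-learn), with synthetic feature vectors standing in for the measured onset/offset, energy-ratio, low-spectral-peak, and formant-density parameters; the kernel and settings are illustrative, not the paper's.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Each row: [onset/offset measure, energy ratio, low spectral peak, formant density]
# (synthetic placeholders, not data from the paper)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = rng.integers(0, 2, size=200)          # 1 = nasal, 0 = semivowel

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```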
This paper describes an algorithm to extract adaptive and quality quadrilateral/hexahedral meshes directly from volumetric imaging data.
First, a bottom-up surface topology preserving octree-based algorithm is applied to select a starting octree level.
Then the dual contouring method is used to extract a preliminary uniform quad/hex mesh, which is decomposed into finer quads/hexes adaptively without introducing any hanging nodes.
The positions of all boundary vertices are recalculated to approximate the boundary surface more accurately.
Mesh adaptivity can be controlled by a feature sensitive error function, the regions that users are interested in, or finite element calculation results.
Finally, the relaxation based technique is deployed to improve mesh quality.
Several demonstration examples are provided from a wide variety of application domains.
Some extracted meshes have been extensively used in finite element simulations.
On-chip cache sizes are likely to continue to grow over the next decade as working sets, available chip capacity, and memory latencies all increase.
Traditional cache architectures, with fixed sizes and discrete latencies, lock one organization down at design time, which will provide inferior performance across a range of workloads.
In addition, expected increases in on-chip communication delays will make the time to retrieve data in a cache a function of the data's physical location.
Consequently, cache access times will become a continuum of latencies rather than a single one.
This non-uniformity will make static organizations particularly limited for single-chip servers, in which multiple processors will be different distances from the cache controller.
In this paper, we propose a set of adaptive, high-performance cache designs, called Non-Uniform Cache Architectures (NUCAs).
We extend these physical designs with logical policies that allow important data to migrate closer to the processor within the same cache.
We show that these adaptive level-two NUCA designs provide 1.6 times the performance of a Uniform Cache Architecture of any size, and that the adaptive NUCA scheme outperforms static NUCA schemes by 9% for multi-megabyte, on-chip server caches with large numbers of banks.
Design Patterns are now widely accepted as a useful concept for guiding and documenting the design of object-oriented software systems.
Still the UML is ill-equipped for precisely representing design patterns.
It is true that some graphical annotations related to parameterized collaborations can be drawn on a UML model, but even the most classical GoF patterns, such as Observer, Composite or Visitor cannot be modeled precisely this way.
We thus propose a minimal set of modifications to the UML 1.3 meta-model to make it possible to model design patterns and represent their occurrences in UML, opening the way for some automatic processing of pattern applications within CASE tools.
Category Theory is used to describe a category of fusors.
The category is formed from a model of a process beginning with an event and leading to the final labeling of the event.
Although many techniques of fusing information have been developed the inherent relationships among different types of fusion techniques (fusors) have not yet been fully explored.
In this paper, a foundation of fusion is presented, definitions developed, and a method of measuring the performance of fusors is given.
Functionals on receiver operating characteristic (ROC) curves are developed to form a partial ordering of a set of classifier families.
The functional also induces a category of fusion rules.
The treatment includes a proof of how to find the Bayes optimal classifier (or Bayes Optimal fusor, if available) from a ROC curve.
Gridded volumetric data sets representing simulation or tomography output are commonly visualized by displaying a triangulated isosurface for a particular isovalue.
When the grid is stored in a standard format, the entire volume must be loaded from disk, even though only a fraction of the grid cells may intersect the isosurface.
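For context, the standard isosurfacing step that such a tool performs once the relevant cells are available looks roughly like the following (Python with scikit-image's marching cubes on a synthetic volume); this illustrates the baseline operation only, not the paper's selective loading of intersecting cells.

```python
import numpy as np
from skimage import measure

# synthetic volume: a sphere-like scalar field on a 64^3 grid
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = x**2 + y**2 + z**2

# triangulated isosurface for a particular isovalue
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
print(len(verts), "vertices,", len(faces), "triangles")
```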
In conventional applications it is easy to find detailed and structured practices that make use of models in order to describe almost every aspect of the user interface.
On the other hand, new user interfaces, such as those used in virtual and augmented applications, are usually designed and developed in an ad hoc fashion or following a rather simple systematic approach.
There are, however, a few notable efforts to develop model-based design methods for these new interfaces.
Current practices, research efforts and holes that are still to be addressed in the development of new user interfaces are reviewed in this paper.
There exists nowadays consensus on the importance of teachers' professional development.
Also, most authors agree that the school's workplace conditions can exert great influence on this development.
In this paper the impact of two workplace conditions, autonomy and collegiality, on elementary school teachers' professional development is analysed.
The qualitative research reported makes clear that this influence should be thought of in a balanced way.
Certain forms of autonomy and collegiality -- and more specifically certain combinations of both workplace conditions -- have a far more positive influence on teachers' professional development than others.
We present an approach for registering an aerial Digital Elevation Model (DEM) with a color intensity image obtained using a camera mounted on a mobile robot.
An approximate measurement of the camera pose is obtained using auxiliary sensors on-board the robot.
The DEM is transformed into a depth map in the camera's coordinate system using this initial pose.
The problem is now simplified to the alignment of two images, one containing intensity information, and the other, depth.
Region boundaries in the intensity image are matched with discontinuities in the depth map using a robust directed Hausdorff distance.
This cost function is minimized with respect to the six parameters defining the camera pose.
Due to the highly non-linear nature of cost function with multiple local minima, a stochastic algorithm based on the downhill simplex principle is employed for minimization.
Results on real data are presented.
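A toy version of the pose search is sketched below (Python/SciPy). The projection function is a hypothetical stand-in reduced to a planar shift, so only two of the six pose parameters matter here; the point is the structure: a directed Hausdorff cost minimized with the Nelder-Mead (downhill simplex) method, whereas the paper uses a stochastic simplex variant.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import directed_hausdorff

def project_depth_edges(pose, dem_points):
    """Hypothetical stand-in: project DEM discontinuity points into the
    image plane for a 6-DOF pose (here only the two translations act)."""
    tx, ty = pose[3], pose[4]
    return dem_points[:, :2] + np.array([tx, ty])

def pose_cost(pose, image_edges, dem_points):
    projected = project_depth_edges(pose, dem_points)
    d, _, _ = directed_hausdorff(projected, image_edges)
    return d

rng = np.random.default_rng(0)
image_edges = rng.uniform(0, 100, size=(300, 2))                      # edge pixels from the intensity image
dem_points = np.hstack([image_edges + [3.0, -2.0], np.zeros((300, 1))])

pose0 = np.zeros(6)                                                    # initial pose from on-board sensors
res = minimize(pose_cost, pose0, args=(image_edges, dem_points),
               method="Nelder-Mead")                                   # downhill simplex
print(res.x[3:5])  # offset estimate; the true offset in this toy setup is (-3, 2)
```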
A calibrated classifier provides reliable estimates of the true probability that each test sample is a member of the class of interest.
A two transponder long baseline positioning system to measure the sway of a free towed Synthetic Aperture Sonar (SAS) is proposed.
A Matlab simulation predicts a worst case sway accuracy of cm over a 150 m long tow path with an update rate of 14 Hz.
The sway is measured with respect to freely deployed transponders which remain stationary on the seabed connected via cables to floating buoys housing GPS timing receivers.
Sway information is completely independent for each sonar ping and allows the deblurring of the SAS images by post processing.
We present a real-time algorithm for skin rendering which was used in the real-time animation Ruby: The DoubleCross, appearing in this year's SIGGRAPH animation festival.
Our approach approximates the appearance of subsurface scattering by blurring the diffuse illumination in texture space using graphics hardware.
This approach, based on the offline skin rendering technique proposed by Borshukov and Lewis, gives a realistic look and is both efficient and easy to implement.
We describe algorithms to efficiently implement this technique in real-time using graphics hardware, as well as several enhancements to improve quality.
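A CPU approximation of the texture-space blur is sketched below (Python/SciPy); in the real-time version this would be a blur pass over the diffuse light map on the GPU, but the operation is the same. The blur width is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def blur_diffuse_texture(diffuse_lightmap, sigma_texels=4.0):
    """Approximate subsurface scattering by blurring the diffuse
    illumination in texture (UV) space, one blur per RGB channel."""
    return np.stack(
        [gaussian_filter(diffuse_lightmap[..., c], sigma=sigma_texels)
         for c in range(diffuse_lightmap.shape[-1])],
        axis=-1,
    )

lightmap = np.random.default_rng(0).random((256, 256, 3)).astype(np.float32)
soft = blur_diffuse_texture(lightmap)
```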
The up-coming Gbps high-speed networks are expected to support a wide range of communication-intensive, real-time multimedia applications.
The requirement for timely delivery of digitized audio-visual information raises new challenges for the next generation integrated-service broadband networks.
One of the key issues is the Quality-of-Service (QoS) routing.
It selects network routes with sufficient resources for the requested QoS parameters.
The goal of routing solutions is two-fold: (1) satisfying the QoS requirements for every admitted connection and (2) achieving the global efficiency in resource utilization.
Many unicast/multicast QoS routing algorithms were published recently, and they work with a variety of QoS requirements and resource constraints.
Overall, they can be partitioned into three broad classes: (1) source routing, (2) distributed routing and (3) hierarchical routing algorithms.
In this paper we give an overview of the QoS routing problem as well as the existing solutions.
We present the strengths and the weaknesses of different routing strategies and outline the challenges.
We also discuss the basic algorithms in each class, classify and compare them, and point out possible future directions in the QoS routing area.
Introduction. The pertinent facts, leading up to this latest experiment, about acoustic and perceptual properties of Danish stød have been reported and discussed in Grønnum & Basbøll (2001a, 2001b, 2002a, 2002b).
Here is the briefest possible summary: Consonants with stød are not systematically longer than consonants without stød, and they may be shorter as well.
Vowels with stød are as long as long vowels without stød, and both are 50-70% longer than short vowels.
Stød vowels are also found to equal long vowels perceptually, though this similarity may be overshadowed by the similarity between syllables with stød, irrespective of vowel length.
2. Stød onset timing and cognitive reality. Variability in the onset (when it can be determined at all) of the laryngealization which is the stød, measured from vowel onset, is very considerable, with time lags ranging between 10 and 130 ms.
It averages around 60 ms, cf. Grønnum & Basbøll (2001a).
We need now to know whether and how this onset is perceived.
Wh
In this paper we propose and experimentally investigate a vision-based technique for autonomously landing a robotic helicopter.
We model the solution to the landing problem discretely using a finite state machine, responsible for detecting the landing site, navigating toward it, and landing on it.
Data from a single on-board camera are combined with attitude and position measurements from an on-board inertial navigation unit.
These are the inputs to the on-board control system: a set of controllers running in parallel which are responsible for controlling the individual degrees of freedom of the helicopter.
The resulting hybrid control system is simple, yet effective as shown experimentally by trials in nominal and perturbed conditions
Summary indicators for measuring and assessing infant and child feeding practices are needed for research, communication and advocacy, and program evaluation.
This paper reports on progress in developing a summary measure of infant and child feeding practices that addresses the following two challenges: infant and child feeding is multidimensional, and appropriate practices vary by age of the child.
Much previous research in the area of infant and child feeding has focused on single practices over a narrow age range and so has not addressed the determinants and impact of adequate or optimal infant and child feeding.
Using data from the Ethiopia Demographic and Health Survey, an infant and child feeding index is constructed, summarizing a range of key practices, including breastfeeding, bottle use, feeding frequency, and diet diversity.
Because it provides agespecific scoring and incorporates various practices, the index is a useful analytic tool.
The index is associated with an indicator of child growth (height-for-age) in bivariate and multivariate analyses.
Examination of individual indicators shows that this association is driven by a strong positive association between one component, diet diversity, and height-for-age.
Further work is required to establish the nature of the relationship between infant and child feeding indicators, nutrient adequacy, growth, and other outcomes.
But because it can be used to illustrate the association between a set of recommended practices and growth, the index may serve as a communication tool with policymakers.
Simulations show that the index accurately reflects an averaging of changes in individual component practices, and so it may also be of use to program managers who seek a summary measure for assessing p...
Distributed synchronization for parallel simulation is generally classified as being either optimistic or conservative.
While considerable investigations have been conducted to analyze and optimize each of these synchronization strategies, very little study on the definition and strictness of causality has been conducted.
Do we really need to preserve causality in all types of simulations?
This paper attempts to answer this question.
We argue that significant performance gains can be made by reconsidering this definition to decide if the parallel simulation needs to preserve causality.
We investigate the feasibility of unsynchronized parallel simulation through the use of several queuing model simulations and present a comparative analysis between unsynchronized and Time Warp simulation.
A mathematical method called subordination broadens the applicability of the classical advection-dispersion equation for contaminant transport.
In this method the time variable is randomized to represent the operational time experienced by different particles.
In a highly heterogeneous aquifer the operational time captures the fractal properties of the medium.
This leads to a simple, parsimonious model of contaminant transport that exhibits many of the features (heavy tails, skewness, and non-Fickian growth rate) typically seen in real aquifers.
We employ a stable subordinator that derives from physical models of anomalous diffusion involving fractional derivatives.
Applied to a one-dimensional approximation of the MADE-2 data set, the model shows excellent agreement.
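In the usual statement of the subordination idea (written here generically; the notation is not taken from this paper), the subordinated concentration is the classical ADE solution averaged over the random operational time u available at clock time t:

\[
c(x,t) \;=\; \int_0^{\infty} c_{\mathrm{ADE}}(x,u)\, h(u,t)\, \mathrm{d}u,
\]

where c_ADE solves the classical advection-dispersion equation in operational time and h(u, t) is the density of the operational-time process.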
This paper develops a framework based on convex optimization and economic ideas to formulate and solve approximately a rich class of dynamic and stochastic resource allocation problems, fitting in a generic discrete-state multi-project restless bandit problem (RBP).
It draws on the single-project framework in the authors' companion paper "Restless bandit marginal productivity indices I: Single-project case and optimal control of a make-to-stock M/G/1 queue", based on characterization of a project's marginal productivity index (MPI).
Our framework significantly expands the scope of Whittle's (1988) seminal approach to the RBP.
Contributions include: (i) formulation of a generic multi-project RBP, and algorithmic solution via single-project MPIs of a relaxed problem, giving a lower bound on optimal cost performance; (ii) a heuristic MPI-based hedging point and index policy; (iii) application of the MPI policy and bound to the problem of dynamic scheduling for a multiclass combined MTO/MTS M/G/1 queue with convex backorder and stock holding cost rates, under the LRA criterion; and (iv) results of a computational study on the MPI bound and policy, showing the latter's near-optimality across the cases investigated.
This paper describes the design and implementation of a novel reliable multicast protocol, totally reliable and scalable to large number of receivers.
MAF relies on Active Networks technology: active routers in the multicast tree store sender's transmissions in order to be able to later retransmit them to repair downstream losses.
To address scalability, MAF organizes those active routers into a hierarchical structure obtained by dividing the multicast tree into subtrees.
Since a sender initiated approach is used within each of those subtrees, MAF has the particularity of operating correctly with finite buffers.
This paper also describes the implementation of MAF over the active network platform deployed by the RNRT project AMARRAGE.
Index Terms: totally reliable multicast, active networks, hierarchical structure, aggregated ACK, finite buffers.
Key management is an essential cryptographic primitive upon which other security primitives are built.
However, none of the existing key management schemes are suitable for ad hoc networks.
They are either too inefficient, not functional on an arbitrary or unknown network topology, or not tolerant to a changing network topology or link failures.
Recent research on distributed sensor networks suggests that key pre-distribution schemes (KPS) are the only practical option for scenarios where the network topology is not known prior to deployment.
However, all of the existing KPS schemes rely on trusted third parties (TTP) rendering them inapplicable in many ad hoc networking scenarios and thus restricting them from wide-spread use in ad hoc networks.
To eliminate this reliance on TTP, we introduce distributed key pre-distribution scheme (DKPS) and construct the first DKPS prototype to realize fully distributed and selforganized key pre-distribution without relying on any infrastructure support.
DKPS overcomes the main limitations of the previous schemes, namely the needs of TTP and an established routing infrastructure.
It minimizes the requirements posed on the underlying networks and can be easily applied to the ad hoc networking scenarios where key pre-distribution schemes were previously inapplicable.
Finally, DKPS is robust to changing topology and broken links and can work before any routing infrastructure has been established, thus facilitating the widespread deployment of secure ad hoc networks.
As technology shrinks and working frequency reaches multi gigahertz range, designing and testing interconnects are no longer trivial issues.
In this paper we propose an enhanced boundary scan architecture to test high-speed interconnects for signal integrity.
This architecture includes: a) a modified driving cell that generates patterns according to multiple transitions fault model; and b) an observation cell that monitors signal integrity violations.
To fully comply with conventional JTAG, two new instructions are used to control cells and scan activities in the integrity test mode.
Detecting a transient signal of unknown arrival time in noise is actually a binary hypothesis test problem, where the null hypothesis (noise only) is a simple one, while the alternative hypothesis is composite.
The generalized likelihood ratio test (GLRT) is a common tool to solve such problems.
In this paper we show how order statistics (OS) approach can be used to solve the same problem.
We show that both hypotheses become simple under the OS approach, so a likelihood ratio test (LRT) can be applied, and we discuss the trade-offs between the two solutions.
In particular, we point out cases where the OS detector outperforms the GLRT.
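For orientation, the generic GLRT for such a composite alternative maximizes the likelihood ratio over the unknown arrival time $\theta$ and compares it with a threshold (a standard textbook form, not this paper's specific detector):

$$T_{\mathrm{GLRT}}(x) = \frac{\max_{\theta}\, p(x \mid H_1, \theta)}{p(x \mid H_0)} \ \underset{H_0}{\overset{H_1}{\gtrless}} \ \gamma,$$

whereas the OS approach described above reduces both hypotheses to simple ones, so that a plain LRT applies.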
Facial expression interpretation, recognition and analysis is a key issue in visual communication and man to machine interaction.
In this paper, we present a factorization technique which decomposes the appearance parameters coding a natural image.
This technique is then used to perform facial expression synthesis on unseen faces showing any undetermined facial expression, as well as facial expression recognition.
Fingerprints are widely used in automatic identity verification systems.
The core of such systems is the verification algorithm to match two fingerprints.
So far, various methods for fingerprint matching have been proposed, but few works have investigated the fusion of two or more matching algorithms.
In this paper, various methods for fusing such algorithms have been investigated.
Experimental results showed that such fusion can outperform the best individual verification algorithm and increase the discrimination between genuine and impostor classes.
In this paper, I describe an alternative approach to building a semantic web that addresses some known challenges to existing attempts.
In particular, powerful information extraction techniques are used to identify concepts of interest in Web pages.
Identified concepts are then used to semi-automatically construct assertions in a computer-readable markup, reducing manual annotation requirements.
It is also envisioned that these semantic assertions will be constructed specifically by communities of users with common interests.
The structured knowledge bases created will then contain content that reflects the uses they were designed for, thereby facilitating effective automated reasoning and inference for real-world problems.
Congruence closure algorithms are nowadays central in many modern applications in automated deduction and verification, where it is frequently required to recover the set of merge operations that caused the equivalence of a given pair of terms.
For this purpose we study, from the algorithmic point of view, the problem of extracting such small proofs.
In this paper, we consider Dijkstra's algorithm for the single source single target shortest path problem in large sparse graphs.
The goal is to reduce the response time for on-line queries by using precomputed information.
Due to the size of the graph, preprocessing space requirements can be only linear in the number of nodes.
We assume that a layout of the graph is given.
In the preprocessing, we determine from this layout a geometric object for each edge containing all nodes that can be reached by a shortest path starting with that edge.
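As a rough illustration of the pruning idea, the sketch below assumes the precomputed geometric object for each edge is an axis-aligned bounding box over the coordinates of all nodes reachable by a shortest path starting with that edge (the paper's actual containers may have a different shape); edges whose box does not contain the target are simply skipped during the Dijkstra search.

import heapq

def dijkstra_with_containers(graph, coords, containers, source, target):
    """graph: {u: [(v, w), ...]}; coords: {v: (x, y)};
    containers[(u, v)] = (xmin, ymin, xmax, ymax), a box covering all nodes
    reachable on a shortest path that starts with edge (u, v)."""
    dist = {source: 0.0}
    pq = [(0.0, source)]
    tx, ty = coords[target]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:
            return d
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            xmin, ymin, xmax, ymax = containers[(u, v)]
            if not (xmin <= tx <= xmax and ymin <= ty <= ymax):
                continue                  # target cannot lie on a shortest path via this edge
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")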
Over the past two years students taking two biology modules at the University of Derby have been assessed using computer assessments with TRIADs (Tripartite Interactive Assessment Delivery System) in both their formal end of module examinations and for scored formative assessments.
We were keen to establish the students' views of the use of computer assessment, and thus over this period, in addition to the overall evaluation of the modules, the students were also given the opportunity to evaluate these assessments.
In the first instance an open ended approach was taken, and students were given the opportunity to anonymously write comments on the computer examinations.
The results of this were encouraging in that only a minority of students (~5%) made non-positive comments on CAA with the majority of students being very positive on their CAA experiences.
In addition, students provided a range of useful comments on the application of CAA, pertaining to comparability with traditional examinations and to student learning strategy; these are also discussed.
It is widely understood that protein functions can be exhaustively described in terms of no single parameter, whether this be amino acid sequence or the three-dimensional structure of the underlying protein molecule.
This means that a number of different attributes must be used to create an ontology of protein functions.
Certainly much of the required information is already stored in databases such as Swiss-Prot, Protein Data Bank, SCOP and MIPS.
But the latter have been developed for different purposes and the separate data-structures which they employ are not conducive to the needed data integration.
When we attempt to classify the entities in the domain of proteins, we find ourselves faced with a number of cross-cutting principles of classification.
Our question here is: how can we bring together these separate taxonomies in order to describe protein functions?
Our proposed answer is: via a careful top-level ontological analysis of the relevant principles of classification, combined with a new framework for the simultaneous manipulation of classifications constructed for different purposes.
A common traffic engineering design principle is to select a small set of flows, that account for a large fraction of the overall traffic, to be differentially treated inside the network so as to achieve a specific performance objective.
In this paper we illustrate that one needs to be careful in implementing such an approach because there are tradeoffs to be addressed that arise due to traffic dynamics.
We demonstrate that Internet flows are very volatile in terms of volume, and may substantially change the volume of traffic they transmit as time evolves.
Currently proposed schemes for flow classification, although attractive due to their simplicity, face challenges due to this property of flows.
Bandwidth volatility impacts the amount of load captured in a set of flows, which usually drops both significantly and quickly after flow classification is performed.
Thus if the goal is to capture a large fraction of traffic consistently over time, flows will need to be reselected often.
Our first contribution is in understanding the impact of flow volatility on the classification schemes employed in a traffic engineering context.
Our second contribution is to propose a classification scheme that is capable of addressing the issues identified above by incorporating historical flow information.
Using actual Internet data we demonstrate that our scheme outperforms previously proposed schemes, and reduces both the impact of flow volatility on the load captured by the selected set of flows and the required frequency for its reselection.
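As a hedged illustration of how historical information can enter the selection step (the paper's actual classification scheme is not reproduced here), one simple option is to rank flows by an exponentially weighted moving average of their per-interval volumes instead of by the latest interval alone:

def update_ewma(ewma, latest_volumes, alpha=0.3):
    """ewma: {flow_id: smoothed volume}; latest_volumes: {flow_id: bytes seen in the last interval}."""
    for flow, vol in latest_volumes.items():
        prev = ewma.get(flow, vol)
        ewma[flow] = alpha * vol + (1 - alpha) * prev
    return ewma

def select_flows(ewma, target_fraction=0.8):
    """Pick the smallest prefix of flows (by smoothed volume) covering target_fraction of traffic."""
    total = sum(ewma.values())
    selected, covered = [], 0.0
    for flow, vol in sorted(ewma.items(), key=lambda kv: kv[1], reverse=True):
        selected.append(flow)
        covered += vol
        if covered >= target_fraction * total:
            break
    return selected

The smoothing damps the volatility described above, so the selected set decays more slowly and needs reselection less often.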
The Winter Simulation Conference (WSC) is traditionally known as the most important annual conference serving the discrete event simulation community.
The purpose of this panel session is to generate discussion about the nature of WSC in the future and about its future role in the overall simulation community.
There are many reasons to do this.
It is important to the communities currently served by WSC, critical to the conference itself, and in a broad sense significant to the future of simulation itself.
In keeping with the track theme of discussing the future of simulation, it makes sense to discuss the future of the most important discrete-event simulation event.
We study the temporal connectivity structure of single-channel Internet-based chat participation streams.
Somewhat similar to bibliometric analysis, and complementary to topic-analysis, we base our study solely on context information provided by the temporal order of participants' contributions.
Experimental results obtained by employing both network-analysis indicators and an aggregate Markov modelling approach indicate the existence of distinguishable communities in the roughly one day's worth of real-world chat dynamics analysed.
GAMBL is a word expert approach to WSD in which each word expert is trained using memory-based learning.
Joint feature selection and algorithm parameter optimization are achieved with a genetic algorithm (GA).
We use a cascaded classifier approach in which the GA optimizes local context features and the output of a separate keyword classifier (rather than also optimizing the keyword features together with the local context features).
A further innovation on earlier versions of memory-based WSD is the use of grammatical relation and chunk features.
This paper presents the architecture of the system briefly, and discusses its performance on the English lexical sample and all words tasks in SENSEVAL-3.
Anthropomorphic visualization is a new approach to presenting information about participants in online spaces using the human form as the basis for the visualization.
Various data about an individual's online behavior are mapped to different parts of a humanoid yet abstract form.
I hypothesized that using a humanoid form to visualize data about people in online social spaces could serve two purposes simultaneously: communicate statistics about the individuals and evoke a social response.
Using the
Service composition is the act of taking several component products or services, and bundling them together to meet the needs of a given customer.
In the future, service composition will play an increasingly important role in e-commerce, and automation will be desirable to improve speed and efficiency of customer response.
In this paper, we consider a service composition agent that both buys components and sells services through auctions.
It buys component services by participating in many English auctions.
It sells composite services by participating in Request-for-Quotes reverse auctions.
Because it does not hold a long-term inventory of component services, it must take risks; it must make offers in reverse auctions prior to purchasing all the components needed, and must bid in English auctions prior to having a guaranteed customer for the composite good.
We present algorithms that are able to manage this risk by appropriately bidding/offering in many auctions and reverse auctions simultaneously.
The algorithms will withdraw from one set of possible auctions and move to another set if this will produce a better-expected outcome, but will effectively manage the risk of accidentally winning outstanding bids/offers during the withdrawal process.
We illustrate the behavior of these algorithms through a set of worked examples.
In this paper we present and evaluate the 4+4 architecture.
4+4 extends the IPv4 address space without requiring changes to existing routers.
It builds on the existence of NATs and multiple address realms, but it does not use address translation and provides end-to-end address transparency.
Existing address translation is used only as a transition tool.
The paper also presents an implementation of 4+4 and related experimental results.
We conclude that 4+4 is simple to introduce and may represent a medium-term solution if IPv6 transition does not take off quickly enough.
The source code of our implementation can be downloaded from http://ipv44.comet.columbia.edu.
We present a model for selection and segregation distortion in an infinitely large randomly mating population with one sex.
To fix ideas, we first consider the competition between the wildtype and two distorter alleles.
We then show how the analysis can be extended to the competition between a large number of distorters.
We start by assuming that the amount of complementation is the same for all combinations of distorter alleles.
In this case, each parameter configuration results in a unique stable polymorphism.
We give an analytical characterization of this equilibrium, and show that it typically involves many alleles.
Subsequently, we show by means of a simple example that the outcome of competition may be contingent on the initial conditions if the degree of complementation differs between distorters.
Finally, we study the competition between segregation distorters in case that there is a negative trade-off between distorting efficiency and complementing ability.
The Virtual Home Environment (VHE) encompasses the deployment and management of adaptable services that retain any personalized service aspects, irrespective of terminal, network and geographic location.
We assert that the dynamic nature of the VHE requires management capabilities that can be suitably provided through the use of mobile agent technology.
In this direction, we examine four different engineering solutions for the realization of a VHE performance management component that allows service adaptation in relation to the available network Quality of Service (QoS).
The mobile agent approach is compared with competing technologies in order to identify the benefits of this novel application of mobile agents, discuss its drawbacks and finally focus on the lessons learned from our prototype system.
Although mobile agents are typically associated with increased performance costs, it is through agent migration that we were able to address the VHE requirements of universality, dynamic programmability and network technology independence.
The recent reduction in telecommunications spending has increased the importance of network planning to improve the return on investment on the existing network infrastructures.
Therefore, tools that help in maximizing the bandwidth efficiency of the network at a minimum cost are essential.
Previous work in this area focused on increasing bandwidth efficiency and reliability.
In this work, in addition to increasing the bandwidth efficiency, we address the complexity of network management and operations.
This issue is explicitly addressed by our novel framework, a simple polynomial time algorithm (SimPol) that achieves optimum network performance (in terms of congestion or bandwidth consumption) using only a small number of paths.
The problem formulation is based on splittable multicommodity flows.
Using SimPol we show that the total number of paths is at most k + m, where k and m are the numbers of demands and edges in the network, respectively.
We extend the basic framework into an integer programming formulation to address the tradeoff between network congestion and the total number of paths.
We also use SimPol to address the problem of implementing path/link policies such as bandwidth-limited paths.
The performance of SimPol is evaluated through extensive simulations.
We find that for a large number of demands the LP-based framework provides a near-optimal solution of almost one path per demand.
Using the integer programming approach, we can get exactly one path while losing about 10% to 50% in congestion depending on the number of demands.
This congestion is, however, far better than the traditional shortest path routing.
The framework is general and can be used in capacity planning for transport networks such as MPLS and ATM.
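For context, a generic edge-based splittable multicommodity-flow LP for minimizing congestion can be written as follows (a schematic formulation only; SimPol's exact formulation and its path-count argument are in the paper). Here $x^k_e$ is the flow of demand $k$ on edge $e$, $d_k$ the volume of demand $k$ between $s_k$ and $t_k$, and $c_e$ the capacity of edge $e$:

$$\min\ \lambda \quad \text{s.t.}\quad \sum_{e \in \delta^+(v)} x^k_e - \sum_{e \in \delta^-(v)} x^k_e = \begin{cases} d_k & v = s_k \\ -d_k & v = t_k \\ 0 & \text{otherwise} \end{cases} \quad \forall k, v, \qquad \sum_k x^k_e \le \lambda\, c_e \quad \forall e, \qquad x^k_e \ge 0.$$

Bounds of the form "at most k + m paths" typically follow from the fact that a basic optimal solution of such an LP has at most as many positive variables as constraints, one reason LP-based routing can remain operationally simple.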
This paper reviews the state of the art in off-line Roman cursive handwriting recognition.
The input provided to an off-line handwriting recognition system is an image of a digit, a word, or - more generally - some text, and the system produces, as output, an ASCII transcription of the input.
This task involves a number of processing steps, some of which are quite difficult.
Typically, preprocessing, normalization, feature extraction, classification, and postprocessing operations are required.
We'll survey the state of the art, analyze recent trends, and try to identify challenges for future research in this field.
this paper I wish to present the model proposed some years ago by Louis de Broglie [2] for the tired photon.
The problem with most alternative explanations for the cosmological redshift arises from the fact that they result from ad hoc assumptions.
The model of de Broglie has none of the ad hoc character of which most tired-light mechanisms are accused: it follows from considerations at the fundamental level of his quantum theory [3].
It is only a corollary of his causal double solution theory, which stands almost side by side with the orthodox non causal theory for explanation and prediction of quantum phenomena.
Another major advantage of this model is that it can be tested on a laboratory scale.
Recent advances in wireless communication and microelectronics have enabled the development of low-cost sensor devices leading to interest in large-scale sensor networks for military applications.
Sensor networks consist of large numbers of networked sensors that can be dynamically deployed and used for tactical situational awareness.
One critical challenge is how to dynamically integrate these sensor networks with information fusion processes to support real-time sensing, exploitation and decision-making in a rich tactical environment.
In this paper, we describe our work on an extensible prototype to address the challenge.
The prototype and its constituent technologies provide a proof-of-concept that demonstrates several fundamental new approaches for implementing next generation battlefield information systems.
Many cutting-edge technologies are used to implement this system, including semantic web, web services, peer-to-peer network and content-based routing.
This prototype system is able to dynamically integrate various distributed sensors and multi-level information fusion services into new applications and run them across a distributed network to support different mission goals.
Agent technology plays a role in two fundamental ways: resources are described, located and tasked using semantic descriptions based on ontologies and semantic services; tracking, fusion and decision-making logic is implemented using agent objects and semantic descriptions as well.
Previous work showed how moving particles that rest along their trajectory lead to time-nonlocal advection-dispersion equations.
If the waiting times have infinite mean, the model equation contains a fractional time derivative of order between 0 and 1.
In this article, we develop a new advection-dispersion equation with an additional fractional time derivative of order between 1 and 2.
Solutions to the equation are obtained by subordination.
The form of the time derivative is related to the probability distribution of particle waiting times and the subordinator is given as the first passage time density of the waiting time process which is computed explicitly.
Key words: Anomalous Diffusion, Continuous Time Random Walks, First Passage Time, Fractional Calculus, Subdiffusion, Power laws
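For readers new to this setting, the time-fractional advection-dispersion equation that arises when waiting times have infinite mean (fractional order $0 < \gamma < 1$) is commonly written in the schematic form

$$\partial_t^{\gamma} c(x,t) = -v\, \partial_x c(x,t) + D\, \partial_x^2 c(x,t), \qquad 0 < \gamma < 1,$$

and the equation developed in this article additionally carries a fractional time derivative of order between 1 and 2; its precise form and coefficients are derived in the paper and not reproduced here.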
The telecommunications industry in Israel has changed significantly in recent years.
This paper examines key issues that will arise in Israel as a result of these major changes and argues that the major changes in the telecommunications industry require significant changes in the regulatory structure.
The paper first provides important background material on the current structure in the various sectors of the telecommunications industry in Israel.
The paper then discusses the current regulatory environment and makes recommendations regarding the future regulatory structure in Israel and the scope for regulation.
Identity-based public key encryption facilitates easy introduction of public key cryptography by allowing an entity's public key to be derived from an arbitrary identification value, such as name or email address.
The main practical benefit of identity-based cryptography is in greatly reducing the need for, and reliance on, public key certificates.
Although some interesting identity-based techniques have been developed in the past, none are compatible with popular public key encryption algorithms (such as El Gamal and RSA).
This limits the utility of identity-based cryptography as a transitional step to full-blown public key cryptography.
Furthermore, it is fundamentally difficult to reconcile finegrained revocation with identity-based cryptography.
Fast restoration is an important feature of both MPLS and optical networks.
The main mechanism for achieving fast restoration is by locally routing around failures using pre-setup detour paths.
Signaling and routing protocol extensions to implement this local bypass ability are currently being standardized.
To make use of this ability, dynamic schemes that jointly route primary paths and all link detours for links used by the primary paths have been previously proposed.
These schemes also permit sharing of reserved restoration capacity for achieving efficiency.
However, this joint computation places a significantly larger computational load on the network elements than that imposed by the shortest path computation variants typically used for unprotected network connection routing.
In this paper, we propose a new scheme that is operationally much simpler, shares capacity used for restoration, and permits the network to route the primary paths in a manner that is oblivious to restoration needs.
Restoration of all carried traffic is guaranteed by a new link capacity partitioning scheme that maximizes the working capacity of the network without requiring any knowledge of the traffic that will be imposed on the network.
Being traffic independent for a priori link capacity partitioning and being oblivious to restoration needs for on-line network routing makes this scheme operationally simple and desirable in the sense of placing no additional routing load on the constrained computing resources at the network nodes.
To compute the link capacity partitions, we develop a fast combinatorial algorithm that uses only iterative shortest path computations, and is a fully polynomial time approximation scheme (FPTAS), i.e., it achieves a (1 + ε)-factor approximation for any ε > 0 and ru...
Does teaching experience differentially shape the thinking of teachers of different academic disciplines regarding schooling issues incidentally related to subject matter instruction?
This question was addressed by examining the broad schooling goals established for students by novice and veteran teachers of humanistica and scientifica subjects.
Participants were 44 Israeli female teachers of grades 7-9.
Frequency and intensity of goal preferences were assessed in a semi-structured interview.
Results demonstrated that: (1) novices and veterans expressed different goal preferences, as did humanities versus science teachers; (2) experienced humanities teachers preferred academic goals less than other teachers; and (3) the overall order of goal preference was academic > social > personal.
The significance of the interaction between teacher experience and discipline taught is discussed.
© 1999 Elsevier Science Ltd. All rights reserved.
The abstract-interpretation technique [2] allows conservative automatic verification of partial correctness to be conducted by identifying sound over-approximations to loop invariants.
An iterative computation is carried out to determine an appropriate abstract value for each program point.
The result at each program point is an abstract value that summarizes the sets of reachable concrete states at that point.
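A minimal sketch of the iteration, assuming an interval abstract domain and a single-variable counting loop (a toy illustration of the general technique, not the cited framework's implementation); abstract values at the loop head are joined and widened until a fixpoint is reached:

def join(a, b):
    """Least upper bound of two intervals; None stands for 'unreachable'."""
    if a is None:
        return b
    if b is None:
        return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(old, new):
    """Crude widening: any unstable bound jumps to +/- infinity to force termination."""
    lo = old[0] if new[0] >= old[0] else float("-inf")
    hi = old[1] if new[1] <= old[1] else float("inf")
    return (lo, hi)

def analyze_loop(init=(0, 0), increment=1, guard_upper=10):
    """Abstractly interpret: x = 0; while x < guard_upper: x = x + increment."""
    at_head = init                                                 # abstract value at the loop head
    while True:
        in_body = (at_head[0], min(at_head[1], guard_upper - 1))   # restrict by the loop guard
        after_body = (in_body[0] + increment, in_body[1] + increment)
        new_head = widen(at_head, join(at_head, after_body))
        if new_head == at_head:                                    # fixpoint: sound loop invariant found
            return at_head
        at_head = new_head

print(analyze_loop())   # with this crude widening: (0, inf), a sound (if loose) invariant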
This work describes a computer model of the immune system's response to infection, specifically the cytotoxic T lymphocyte (CTL) response.
CTLs play an important role in the control of infectious agents, and they are essential components of our defense against HIV, cancer, and other diseases of great public interest.
Immunologists are interested in manipulating and enhancing the CTL response to these diseases, whether by vaccination or drug therapy, but the process can be difficult and ad hoc.
A combination of animal experimentation, limited human testing, and simple mathematical models have been the primary sources of guidance in the efforts to address these diseases.
Computer models provide an alternative strategy for exploring immune system therapies.
Recently developed laboratory techniques that have revealed and quantified many aspects of CTL behavior provide an unprecedented opportunity to develop detailed models.
The model used in this work integrates many of these new findings into a coherent system that simulates an immune response to viral infection.
This model reproduces many of the phenomena seen in CTL responses but not captured by other mathematical or computer models and can be used to explore vaccination strategies.
The value of modeling goes beyond simply making predictions.
It allows one to perform experiments difficult, or even impossible, to perform in the laboratory.
For example, in a computer model one can replicate experiments exactly or choose to allow stochastic fluctuations to influence the outcome.
In biological systems, achieving this level of control is impossible.
Model-building can also be used as a vehicle for hypothesis testing by formulating one's assumptions about a system's behavior as a model.
If the model's behavior does not match real-world experimental results, the initial assumptions can be changed and a new model built.
The model presented here is the result of a series of such choices.
A critical issue in wireless sensor networks is represented by the limited availability of energy within network nodes; therefore making good use of energy is a must.
A widely employed energy-saving technique is to place nodes in sleep mode, corresponding to a low-power consumption as well as to reduced operational capabilities.
In this work, we develop a Markov model of a sensor network whose nodes may enter a sleep mode, and we use this model to investigate the system performance in terms of energy consumption, network capacity, and data delivery delay.
Furthermore, the proposed model enables us to investigate the trade-offs existing between these performance metrics and the sensor dynamics in sleep/active mode.
Analytical results present an excellent matching with simulation results for a large variety of system scenarios showing the accuracy of our approach.
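As a toy illustration only (the model in the paper is far richer), even a two-state sleep/active Markov chain exposes the trade-off between energy consumption and delivery delay; the rates and power figures below are hypothetical:

def two_state_sleep_model(rate_wake=0.5, rate_sleep=2.0,
                          power_active=20.0, power_sleep=0.1):
    """Continuous-time two-state chain: sleep -> active at rate_wake,
    active -> sleep at rate_sleep (rates in 1/s, power in mW)."""
    # Stationary probabilities from the balance equation pi_sleep * rate_wake = pi_active * rate_sleep.
    pi_active = rate_wake / (rate_wake + rate_sleep)
    pi_sleep = 1.0 - pi_active
    avg_power = pi_active * power_active + pi_sleep * power_sleep
    # A packet arriving while the node sleeps waits, on average, 1/rate_wake seconds extra.
    avg_extra_delay = pi_sleep * (1.0 / rate_wake)
    return avg_power, avg_extra_delay

print(two_state_sleep_model())   # lowering rate_wake saves energy but inflates delay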
There are many approaches for measuring semantic similarity between words.
This paper proposes a new method based on the analysis of a monolingual dictionary.
We can view the word definitions of a dictionary as a network: its nodes are the headwords found in the dictionary and its edges represent the relations between a headword and the words present in its definition.
In this view, the meaning of a word is defined by the total quantity of information to which each element of its definition contributes.
The similarity between two words is defined by the maximal quantity of information exchanged between them through the network.
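As a much simpler stand-in for the information-flow measure described above (a toy proxy, not the paper's definition), one can already compute a crude similarity from the dictionary network by comparing the definition neighbourhoods of two headwords:

def build_network(dictionary):
    """dictionary: {headword: definition string}; each headword is linked to the words in its definition."""
    return {head: set(defn.lower().split()) for head, defn in dictionary.items()}

def toy_similarity(network, w1, w2):
    """Jaccard overlap of the two definition neighbourhoods (a crude proxy measure)."""
    n1, n2 = network.get(w1, set()), network.get(w2, set())
    if not n1 or not n2:
        return 0.0
    return len(n1 & n2) / len(n1 | n2)

toy_dict = {"cat": "small domesticated feline animal",
            "dog": "domesticated canine animal kept as a pet"}
net = build_network(toy_dict)
print(toy_similarity(net, "cat", "dog"))   # about 0.22 for this toy data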
Using discrete-event simulation models, a study was conducted to evaluate the current production practices of a high-volume semiconductor back-end operation.
The overall goal was to find potential areas for productivity improvement that would collectively yield a 60% reduction in manufacturing cycle time.
This paper presents the simulation methodology and findings pertaining to analysis of the Assembly, Burn-In, and Test operations.
Many of the recommendations identified can be implemented at no additional cost to the factory.
The most significant opportunities for improvement are in the Test area, the system constraint.
Additionally, the model is extremely sensitive to changes in operator staffing levels, an accurate reflection of many back-end operations.
The model shows that the cumulative impact of these recommendations is a 41% reduction in average cycle time, a significant contribution to the overall goal.
There is considerable interest in Peer-to-peer (P2P) traffic because of its remarkable increase over the last few years.
By analyzing flow measurements at the regional aggregation points of several cable operators, we are able to study its properties.
It has become a large part of broadband traffic and its characteristics are different from older applications, such as the Web.
It is a stable balanced traffic: the peak to valley ratio during a day is around 2 and the Inbound/Outbound traffic balance is close to one.
Although P2P protocols are based on a distributed architecture, they don't show strong signs of geographical locality.
A cable subscriber is not much more likely to download a file from a close region than from a far region.
This paper develops a polyhedral approach to the design, analysis, and computation of dynamic allocation indices for scheduling binary-action (engage/rest) Markovian stochastic projects which can change state when rested (restless bandits (RBs)), based on partial conservation laws (PCLs).
This extends previous work by the author [J. Nino-Mora (2001): Restless bandits, partial conservation laws and indexability. Adv. Appl. Probab. 33, 76-98], where PCLs were shown to imply the optimality of index policies with a postulated structure in stochastic scheduling problems, under admissible linear objectives, and they were deployed to obtain simple sufficient conditions for the existence of Whittle's (1988) RB index (indexability), along with an adaptive-greedy index algorithm.
The new contributions include: (i) we develop the polyhedral foundation of the PCL framework, based on the structural and algorithmic properties of a new polytope associated with an accessible set system (J, F) (F-extended polymatroid); (ii) we present new dynamic allocation indices for RBs, motivated by an admission control model, which extend Whittle's and have a significantly increased scope; (iii) we deploy PCLs to obtain both sufficient conditions for the existence of the new indices (PCL-indexability), and a new adaptive-greedy index algorithm; (iv) we interpret PCL-indexability as a form of the classic economics law of diminishing marginal returns, and characterize the index as an optimal marginal cost rate; we further solve a related optimal constrained control problem; (v) we carry out a PCL-indexability analysis of the motivating admission control model, under time-discounted and long-run average criteria; this gives, under mild conditions, a new index characterization of optimal threshold...
Software systems generally suffer from a certain fragility in the face of "disturbances" such as bugs, unforeseen user input, unmodeled interactions with other software components, and so on.
A single such disturbance can make the machine on which the software is executing hang or crash.
We postulate that what is required to address this fragility is a general means of using feedback to stabilize these systems.
In this paper we develop a preliminary dynamical systems model of an arbitrary iterative software process along with the conceptual framework for "stabilizing" it in the presence of disturbances.
To keep the computational requirements of the controllers low, randomization and approximation are used.
We describe our initial attempts to apply the model to a faulty list sorter, using feedback to improve its performance.
Methods by which software robustness can be enhanced by distributing a task between nodes each of which are capable of selecting the "best" input to process are also examined, and the particular case of a sorting system consisting of a network of partial sorters, some of which may be buggy or even malicious, is examined.
Recent studies in the rotation of the plane of polarization of electromagnetic waves over...
The integration and coordination of different emergency service personnel is crucial to Crisis Management.
In this paper, we introduce a parametric semantics for timed controllers called the Almost ASAP semantics.
This semantics is a relaxation of the usual ASAP semantics (also called the maximal progress semantics) which is a mathematical idealization that can not be implemented by any physical device no matter how fast it is.
On the contrary, any correct Almost ASAP controller can be implemented by a program on a hardware if this hardware is fast enough.
We study the properties of this semantics, show how it can be analyzed using the tool HyTech, and illustrate its practical use on examples.
We examine the formation of networks among a set of players whose payoffs depend on the structure of the network.
We focus on games where players may bargain by promising or demanding transfer payments when forming links.
We examine several variations of the transfer/bargaining aspect of link formation.
One aspect is whether players can only make and receive transfers to other players to whom they are directly linked, or whether they can also subsidize links that they are not directly involved in.
Another aspect is whether or not transfers related to a given link can be made contingent on the full resulting network or only on the link itself.
A final aspect is whether or not players can pay other players to refrain from forming links.
We characterize the networks that are supported under these variations and show how each of the above aspects is related either to accounting for a specific type of externality, or to dealing with the combinatorial nature of network payoffs.
In previous studies we could show that linguistic word structures correlate closely with the time course of written word production.
In the present study we investigate whether there are also correlations between the syntactic structures of phrases and the time course of their production.
We introduce ASAP2, an improved variant of the batch-means algorithm ASAP for steady-state simulation output analysis.
ASAP2 operates as follows: the batch size is progressively increased until the batch means pass the Shapiro-Wilk test for multivariate normality, and then ASAP2 delivers a correlation-adjusted confidence interval.
The latter adjustment is based on an inverted Cornish-Fisher expansion for the classical batch means t-ratio, where the terms of the expansion are estimated via a first-order autoregressive time series model of the batch means.
ASAP2 is a sequential procedure designed to deliver a confidence interval that satisfies a prespecified absolute or relative precision requirement.
When used in this way, ASAP2 compares favorably to ASAP and the well-known procedures ABATCH and LBATCH with respect to close conformance to the precision requirement as well as coverage probability and mean and variance of the half-length of the final confidence interval.
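For readers unfamiliar with batch means, the classical unadjusted batch-means confidence interval that ASAP2 refines looks roughly like the sketch below; ASAP2's multivariate normality testing and Cornish-Fisher correlation adjustment are not reproduced here.

import math
import statistics

def batch_means_ci(observations, num_batches=20, t_quantile=2.093):
    """Classical batch-means CI for the steady-state mean.
    t_quantile is the Student-t critical value for num_batches - 1 degrees of freedom
    (2.093 is approximately the 97.5% point for 19 d.o.f.)."""
    batch_size = len(observations) // num_batches
    batches = [observations[i * batch_size:(i + 1) * batch_size]
               for i in range(num_batches)]
    means = [sum(b) / len(b) for b in batches]
    grand_mean = sum(means) / num_batches
    half_width = t_quantile * statistics.stdev(means) / math.sqrt(num_batches)
    return grand_mean, half_width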
Path planning is not a trivial problem in artificial intelligence.
An agent has to find a path from one state (or position) to another whilst avoiding contact with obstacles.
The configuration space used for representation of all agent states is usually continuous, which makes the problem even more complex.
Skeletonisation is one of the approaches; it "discretises" the continuous space and reduces the problem to a graph search.
Previous approaches to implementing temporal DBMSs have assumed that a temporal DBMS must be built from scratch, employing an integrated architecture and using new temporal implementation techniques such as temporal indexes and join algorithms.
However, this is a very large and time-consuming task.
This paper explores approaches to implementing a temporal DBMS as a stratum on top of an existing non-temporal DBMS, rendering implementation more feasible by reusing much of the functionality of the underlying conventional DBMS.
More specifically, the paper introduces three stratum meta-architectures, each with several specific architectures.
Based on a new set of evaluation criteria, advantages and disadvantages of the specific architectures are identified.
The paper also classifies all existing temporal DBMS implementations according to the specific architectures they employ.
It is concluded that a stratum architecture is the best short-, medium-, and perhaps even long-term approach to implementing a temporal DBMS.
Description logics are valuable for modeling the conceptual structures of scientific and engineering research because the underlying ontologies generally have a taxonomic core.
Such structures have natural representations through semantic networks that mirror the underlying description logic graph-theoretic structures and are more comprehensible than logical notations to those developing and studying the models.
This article reports experience in the development of visual language tools for description logics with the objective of making research issues, past and present, more understandable.
The ridgelet transform [6] was introduced as a sparse expansion for functions on continuous spaces that are smooth away from discontinuities along lines.
In this paper, we propose an orthonormal version of the ridgelet transform for discrete and finite-size images.
Our construction uses the finite Radon transform (FRAT) [11], [20] as a building block.
To overcome the periodization effect of a finite transform, we introduce a novel ordering of the FRAT coefficients.
We also analyze the FRAT as a frame operator and derive the exact frame bounds.
The resulting finite ridgelet transform (FRIT) is invertible, nonredundant and computed via fast algorithms.
Furthermore, this construction leads to a family of directional and orthonormal bases for images.
Numerical results show that the FRIT is more effective than the wavelet transform in approximating and denoising images with straight edges.
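To make the building block concrete, a minimal finite Radon transform for a p x p image with p prime can be computed by summing pixel values along the "lines" of Z_p^2; normalization and the coefficient ordering introduced in the paper are omitted here.

import numpy as np

def frat(image):
    """Finite Radon transform of a p x p image, p prime.
    Returns shape (p + 1, p): p 'sloped' directions plus one for fixed rows."""
    p = image.shape[0]
    out = np.zeros((p + 1, p))
    for k in range(p):                 # slope index
        for l in range(p):             # translation index
            # Line L_{k,l} = {(i, j) : i = (k*j + l) mod p, j = 0..p-1}
            out[k, l] = sum(image[(k * j + l) % p, j] for j in range(p))
    out[p, :] = image.sum(axis=1)      # last direction: lines with a fixed first coordinate
    return out

Each pixel contributes to exactly one line per direction, so every row of the output sums to the same total, the sum of all pixel values.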
Most real-world database applications contain a substantial portion of time-referenced, or temporal, data.
Recent advances in temporal query languages show that such database applications could benefit substantially from builtin temporal support in the DBMS.
To achieve this, temporal query representation, optimization, and processing mechanisms must be provided.
This paper presents a general, algebraic foundation for query optimization that integrates conventional and temporal query optimization and is suitable for providing temporal support both via a stand-alone temporal DBMS and via a layer on top of a conventional DBMS.
By capturing duplicate removal and retention and order preservation for all queries, as well as coalescing for temporal queries, this foundation formalizes and generalizes existing approaches.
We discuss the diverse types and roles of ontologies in web information extraction and illustrate them on a small study from the product offer domain.
Attention is mainly paid to the impact of domain ontologies, presentation ontologies and terminological taxonomies.
We propose a new methodology for "soft" docking unbound protein molecules (reported at the isolated state).
The methodology is characterized by its simplicity and the ease with which it can be embedded in any rigid-body docking process based on point complementarity.
It is oriented to allow limited free but not unrealistic interpenetration of the side chains of protein surface amino acid residues.
The central step to the technique is a filtering process similar to those in image processing.
The methodology assists in deletion of atomic-scale details on the surface of the interacting monomers, leading to the extraction of the most characteristic flattened shape for the molecule as well as the definition of a soft layer of atoms to allow smooth interpenetration of the interacting molecules during the docking process.
Although the methodology does not perform structural or conformational rearrangements in the interacting monomers, results output by the algorithm are in fair agreement with the relative position of the monomer in experimentally reported complexes.
The algorithm performs especially well in cases where the complexity of the protein surfaces is high, that is, in heterodimer complex prediction.
The algorithm is oriented to play the role of a fast screening engine for proteins known to interact but for which no information other than that of the structures at the isolated state is available.
Consequently, the importance of the methodology will increase in structure-function studies of thousands of proteins derived from large-scale genome sequencing projects being executed all around the globe. Keywords: protein-protein interaction, docking, soft docking, filtering
In recent years, the "Internet Multicast Backbone," or MBone, has risen from a small, research curiosity to a largescale and widely used communications infrastructure.
A driving force behind this growth was the development of multipoint audio, video, and shared whiteboard conferencing applications.
Because these real-time media are transmitted at a uniform rate to all of the receivers in the network, a source must either run at the bottleneck rate or overload portions of its multicast distribution tree.
We overcome this limitation by moving the burden of rate adaptation from the source to the receivers with a scheme we call receiver-driven layered multicast, or RLM.
In RLM, a source distributes a hierarchical signal by striping the different layers across multiple multicast groups, and receivers adjust their reception rate by simply joining and leaving multicast groups.
In this paper, we describe a layered video compression algorithm which, when combined with RLM, provides a comprehensive solution for scalable multicast video transmission in heterogeneous networks.
In addition to a layered representation, our coder has low complexity (admitting an efficient software implementation) and high loss resilience (admitting robust operation in loosely controlled environments like the Internet).
Even with these constraints, our hybrid DCT/wavelet-based coder exhibits good compression performance.
It outperforms all publicly available Internet video codecs while maintaining comparable run-time performance.
We have implemented our coder in a "real" application---the UCB/LBL videoconferencing tool vic.
Unlike previous work on layered video compression and transmission, we have built a fully operational system that is currently being deployed on a very large scale over the MBone.
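A schematic of the receiver-side adaptation loop in the spirit of RLM (join-experiment timers, shared learning, and other details are simplified away; the thresholds and probabilities below are illustrative):

import random

class RlmReceiver:
    """Toy receiver: drop a layer on sustained loss, occasionally probe the next layer."""
    def __init__(self, num_layers):
        self.num_layers = num_layers
        self.subscribed = 1                      # always keep at least the base layer

    def on_measurement(self, loss_rate, loss_threshold=0.05, join_probability=0.1):
        if loss_rate > loss_threshold and self.subscribed > 1:
            self.subscribed -= 1                 # leave the highest group: drop an enhancement layer
        elif loss_rate == 0 and self.subscribed < self.num_layers:
            if random.random() < join_probability:
                self.subscribed += 1             # join-experiment: subscribe to the next layer
        return self.subscribed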
We consider the congestion-control problem in a communication network with multiple traffic sources, each modeled as a fullycontrollable stream of fluid traffic.
The controlled traffic shares a common bottleneck node with high-priority cross traffic described by a Markov-modulated fluid (MMF).
Each controlled source is assumed to have a unique round-trip delay.
We wish to maximize a linear combination of the throughput, delay, traffic loss rate, and a fairness metric at the bottleneck node.
We introduce an online sampling-based burst-level congestion-control scheme capable of performing effectively under rapidly-varying cross traffic by making explicit use of the provided MMF model of that variation.
The control problem is posed as a finite-horizon Markov decision process and is solved heuristically using a technique called Hindsight Optimization.
We provide a detailed derivation of our congestion-control algorithm based on this technique.
The distinguishing feature of our scheme relative to conventional congestion-control schemes is that we exploit a stochastic model of the cross traffic.
Our empirical study shows that our control scheme significantly outperforms the conventional proportional-derivative (PD) controller, achieving higher utilization, lower delay, and lower loss under reasonable fairness.
The performance advantage of our scheme over the PD scheme grows as the rate variance of cross traffic increases, underscoring the effectiveness of our control scheme under variable cross traffic.
This paper presents a new approach to the problem of building a global map from laser range data, utilizing shape based object recognition techniques originally developed for tasks in computer vision.
In contrast to classical approaches, the perceived environment is represented by polygonal curves (polylines), possibly containing rich shape information yet consisting of a relatively small number of vertices.
The main task, besides segmentation of the raw scan point data into polylines and denoising, is to find corresponding environmental features in consecutive scans to merge the polyline data into a global map.
The correspondence problem is solved using shape similarity between the polylines.
The approach does not require any odometry data and is robust to discontinuities in robot position, e.g., when the robot slips.
Since higher order objects in the form of polylines and their shape similarity are present in our approach, it provides a link between the necessary low-level and the desired high-level information in robot navigation.
The presented integration of spatial arrangement information illustrates the fact that high-level spatial information can be easily integrated into our framework.
In this paper we present a storage method for sets of first-order logic terms in a relational database, using the function-symbol-based indexing method of discrimination trees.
This is an alternative method to a published one, based on attribute indexing.
This storage enables effective implementation of several retrieval operations: unification, generalization, instantiation and variation of a given query term in the language of first order predicate calculus.
In our solution each term has unique occurrence in the database.
This is very useful when we need to store a large set of terms that have many identical subterms.
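A minimal discrimination-tree sketch (terms are flattened to their preorder symbol strings, with every variable collapsed to '*'); this shows the textbook indexing idea rather than the exact relational encoding used in the paper.

def flatten(term):
    """term: a string for constants, ('VAR', name) for variables,
    or ('f', arg1, arg2, ...) for compound terms."""
    if isinstance(term, str):
        return [term]
    if term[0] == 'VAR':
        return ['*']                      # all variables share one tree edge
    symbols = [term[0]]
    for arg in term[1:]:
        symbols.extend(flatten(arg))
    return symbols

def insert(tree, term):
    node = tree
    for sym in flatten(term):
        node = node.setdefault(sym, {})
    node['$end'] = term                   # store the term at the leaf

def lookup_variant(tree, query):
    """Retrieve a stored term with the same symbol skeleton as the query (a variant)."""
    node = tree
    for sym in flatten(query):
        if sym not in node:
            return None
        node = node[sym]
    return node.get('$end')

index = {}
insert(index, ('f', ('g', 'a'), ('VAR', 'X')))
print(lookup_variant(index, ('f', ('g', 'a'), ('VAR', 'Y'))))   # finds the stored variant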
In this paper, we outline three examples of ongoing work of this type.
For an introduction to factor graphs, we refer to [1] and [2].
We will use the notation of [2]
The purpose of this study was to investigate experienced secondary school teachers' (N=80) current and prior perceptions of their professional identity.
A questionnaire was used to explore the way teachers see (and saw) themselves as subject matter experts, didactical experts, and pedagogical experts.
The teachers currently see their professional identity as consisting of a combination of the distinct aspects of expertise.
Most teachers' current perceptions of their professional identity reportedly differ significantly from their prior perceptions of this identity during their period as beginning teachers.
On the basis of their current perceptions of their professional identity, five groups of teachers could be distinguished.
These groups had different learning experiences throughout their careers for each aspect of expertise.
Also, teachers from different subject areas did not undergo the same changes in their perceptions of their professional identity.
The differences among the groups in teachers' current perceptions of professional identity were not related to contextual, experiential, and biographical factors that might influence these perceptions.
In this report, a novel methodology for the efficient multiplexing and transmission of MPEG-4-coded video signals over wireless networks will be presented and discussed.
The proposed approach relies on the joint exploitation of variable-bit-rate (VBR) multicarrier code-division multiplexing (MC-CDM), together with MPEG-4 coding with Fine-Grain Scalability (FGS), in order to provide unequal error protection to the transmitted video stream.
The innovative scheme proposed employs a shared bandwidth partitioned into orthogonal sub-channels in order to multiplex different layers of MPEG-4-coded signals.
The highest number of subchannels (and hence an increased frequency diversity) is assigned to the lowest-bit-rate base layer and the lowest number of sub-channels is assigned to the highest bit-rate enhancement layer.
In such a way, base layer information contents are more protected against channel degradations than information contained in FGS enhancement layers, which can only yield a refinement of the quality of the decoded streams.
A 2GHz LEO multicast satellite transmission to mobile users has been regarded as the application testbed for the proposed method.
Results achieved in terms of PSNR point out that the VBR MC-CDM technique can provide better results than a conventional MPEG-4 single-layer MC-SS transmission.
In the framework of a full-digital implementation of reconfigurable multimedia transceivers, the proposed VBR MC-CDM technique may be regarded as an interesting solution for reliable multimedia transmissions in mobile environments.
This paper is a practical guide to building higher-order filters with single-amplifier biquadratic MOSFET--C sections.
Theory, design guidelines, and measurement electronics are discussed by example of a 7th-order current-mode filter built to the specifications of a 1 DVD read channel filter.
The 7th-order filter was fabricated with the double-poly 0.6-micron CMOS process by AMS.
It is continuously tunable from 4.5 MHz up to 10 MHz, covers a chip area of only 0.24 mm², and consumes 49 mW from a 3.3-V supply.
The SNR at of harmonic distortion is between 48 dB and 50 dB over the whole tuning range.
The comparatively low power consumption and chip area could be achieved by using single-amplifier biquadratic building blocks implemented as MOSFET--C filters and generating the control voltage of the MOSFET resistors with an on-chip charge pump.
The technique is, with a small loss of SNR, also applicable on fabrication processes where only gate-oxide capacitors are available.
This paper reviews a number of recent books related to current developments in machine learning.
Some (anticipated) trends will be sketched.
These include: a trend towards combining approaches that were hitherto regarded as distinct and were studied by separate research communities; a trend towards a more prominent role of representation; and a tighter integration of machine learning techniques with techniques from areas of application such as bioinformatics.
The intended readership has some knowledge of what machine learning is about, but brief tutorial introductions to some of the more specialist research areas will also be given.
In this work, we propose a novel scheme to minimize drift in scalable wavelet based video coding, which gives a balanced performance between compression efficiency and quality.
Our drift control mechanism maintains two frame buffers in the encoder and decoder; one for the base layer and the other for the enhancement layer.
Drift control is achieved by switching between these two buffers for motion compensation and prediction.
In the encoder, the residues are coded using the embedded zerotree wavelet (EZW) algorithm.
Our prediction is based on the enhancement layer, which inherently introduces drift in the system, if part of the enhancement layer is not available at the receiver.
A measure of drift is computed based on channel information, and a threshold is set.
When the measure exceeds the threshold, i.e., when drift becomes significant, we switch the prediction to be based on the base layer, which is always available to the receiver.
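The switching logic just described can be sketched as follows (the drift measure and threshold here are placeholders; the actual measure in this work is computed from channel information):

def choose_prediction_buffer(drift_measure, threshold,
                             base_layer_buffer, enhancement_buffer):
    """Predict from the enhancement-layer frame buffer while estimated drift stays small,
    otherwise fall back to the base-layer buffer, which the decoder always has."""
    if drift_measure > threshold:
        return base_layer_buffer          # resynchronize: stop drift from accumulating further
    return enhancement_buffer             # better prediction quality, but drift may build up

def update_drift_measure(previous_drift, enhancement_loss_rate, decay=0.9):
    """Toy drift accumulator driven by reported enhancement-layer losses (illustrative only)."""
    return decay * previous_drift + enhancement_loss_rate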
A continuous time random walk is a simple random walk subordinated to a renewal process, used in physics to model anomalous diffusion.
In this paper we show that, when the time between renewals has infinite mean, the scaling limit is an operator Levy motion subordinated to the hitting time process of a classical stable subordinator.
Density functions for the limit process solve a fractional Cauchy problem, the generalization of a fractional partial differential equation for Hamiltonian chaos.
We also establish a functional limit theorem for random walks with jumps in the strict generalized domain of attraction of a full operator stable law, which is of some independent interest.
A wide range of database applications manage information that varies over time.
Many of the underlying database schemas of these were designed using one of the several versions, with varying syntax and semantics, of the Entity-Relationship (ER) model.
In the research community as well as in industry, it is common knowledge that the temporal aspects of the mini-world are pervasive and important, but are also difficult to capture using the ER model.
Not surprisingly, several enhancements to the ER model have been proposed in an attempt to more naturally and elegantly support the modeling of temporal aspects of information.
Common to the existing temporally extended ER models is that few or no specific requirements for the models were given by their designers.
With the
In this paper, we examine the sources of random numbers used in signal processing.
We also hope to present some interesting solutions to modern application problems using random numbers and to provide methods in which to test the integrity of random number sequences for use in a variety of applications.
The purpose of this paper is to characterize the problem of multiple levels of abstraction in simulation modeling and to develop an approach that addresses the problem.
In this paper, we describe the notion of abstraction and the technical problems associated with multiple levels of abstraction, how abstractions affect different activities during the simulation modeling process, a preliminary approach for addressing the problems associated with multiple levels of abstraction, the conceptual architecture of a simulation modeling environment that implements the proposed approach, and a summary of the research on questions of abstraction in simulation.
Unlike snapshot queries in traditional databases, the processing of continuous queries in Data Stream Management Systems (DSMSs) needs to satisfy user-specified QoS requirements.
In this paper, we focus on three major QoS parameters in a DSMS environment: processing delay, querying frequency and loss tolerance.
To minimize processing delays, the Earliest Deadline First (EDF) CPU scheduling policy is recommended.
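A minimal EDF dispatch sketch, under the assumption that each arriving tuple carries a deadline equal to its arrival time plus the query's maximum tolerable delay (names and structure are illustrative, not the paper's scheduler):

import heapq
import itertools

class EdfScheduler:
    """Always run the query whose pending tuple has the earliest deadline."""
    def __init__(self):
        self._queue = []
        self._tie_breaker = itertools.count()   # keeps heap entries comparable on ties

    def enqueue(self, arrival_time, max_delay, query_id, payload):
        deadline = arrival_time + max_delay
        heapq.heappush(self._queue, (deadline, next(self._tie_breaker), query_id, payload))

    def next_to_run(self):
        return heapq.heappop(self._queue) if self._queue else None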
Globalisation and competitive pressure urge many organisations to radically change business processes.
Although this approach can provide significant benefits such as reducing costs or improving efficiency, there are substantial risks associated with it.
Using simulation for modelling and analysis of business processes can reduce that risk and increase the chance for success of Business Process Re-engineering projects.
This paper investigates the potential of simulation modelling to be used for modelling business processes and supports the case for a wider use of simulation techniques by the business community.
Following a discussion on business process modelling methods and tools, the usability of simulation modelling for evaluating alternative business process strategies is investigated.
Examples of simulation models representing business processes are presented and discussed.
The analysis of handwritten documents from the viewpoint of determining their writership has great bearing on the criminal justice system.
In many cases, only a limited amount of handwriting is available and sometimes it consists of only numerals.
Using a large number of handwritten numeral images extracted from about 3000 samples written by 1000 writers, a study of the individuality of numerals for identification/verification purposes was conducted.
The individuality of numerals was studied using cluster analysis.
Numerals discriminability was measured for writer verification.
The study shows that some numerals present a higher discriminatory power and that their performances for the verification/identification tasks are very different.
This paper presents an application of corpus-based terminology extraction in interactive information retrieval.
In this approach, the terminology obtained in an automatic extraction procedure is used, without any manual revision, to provide retrieval indexes and a "browsing by phrases" facility for document accessing in an interactive retrieval search interface.
We argue that the combination of automatic terminology extraction and interactive search provides an optimal balance between controlled-vocabulary document retrieval (where thesauri are costly to acquire and maintain) and free text retrieval (where complex terms associated with domain-specific concepts are largely overlooked).
Although the amount of earth science data is growing rapidly, as is the availability of high performance networks, our ability to access large remote earth science data sets is still very limited.
This is particularly true of networks with high bandwidth delay products (BDP), such as those between the US and Europe.
Recently, several network protocols have emerged that improve the situation and hold the promise of being much more effective than striped TCP.
(In striped TCP, data is striped across multiple TCP streams).
In this paper, we report on experimental studies using one of these new protocols called UDT, and compare UDT to other approaches.
In addition, we consider the effectiveness of these new protocols when reading and writing data from disk over high BDP networks.
We also consider the problem of accessing remote data by attribute over these same networks.
We show that with the appropriate protocol, accessing data across the Atlantic can be improved significantly.
We note that the UDT protocol used here can be deployed as an application library for earth science applications and requires upgrades neither to existing network infrastructure, such as routers, nor to the Linux kernels on the servers involved.
The BootCaT toolkit (Baroni and Bernardini, 2004) is a suite of perl programs implementing a procedure to bootstrap specialized corpora and terms from the web using minimal knowledge sources.
In this paper, we report ongoing work in which we apply the BootCaT procedure to a Japanese corpus and term extraction task in the hotel terminology domain.
The results of our experiments are very encouraging, indicating that the BootCaT procedure can be successfully applied, with relatively small modifications, to a language very different from English and the other Indo-European languages on which we tested the procedure originally.
We propose an energy-balanced allocation of a real-time application onto a single-hop cluster of homogeneous sensor nodes connected with multiple wireless channels.
An epoch-based application consisting of a set of communicating tasks is considered.
Each sensor node is equipped with discrete dynamic voltage scaling (DVS).
The time and energy costs of both computation and communication activities are considered.
We propose both an Integer Linear Programming (ILP) formulation and a polynomial time 3-phase heuristic.
Our simulation results show that for small-scale problems, up to 5x lifetime improvement is achieved by the ILP-based approach, compared with the baseline where no DVS is used.
Also, the 3-phase heuristic achieves up to 63% of the system lifetime obtained by the ILP-based approach.
For large scale problems (with 60 - 100 tasks), up to 3.5x lifetime improvement can be achieved by the 3-phase heuristic.
We also incorporate techniques for exploring the energy-latency tradeoffs of communication activities (such as modulation scaling), which leads to 10x lifetime improvement in our simulations.
Model-driven development (MDD) processes are increasingly being used to develop component middleware and applications for distributed real-time and embedded (DRE) systems in various domains.
DRE applications are often missioncritical and have stringent quality of service (QoS) requirements, such as timeliness, predictability and scalability.
MDD software techniques are well suited for validating the operation of DRE applications since they offer a higher-level of abstraction than conventional third-generation programming languages.
The state-of-the-art in model-driven DRE application development is still maturing, however.
For example, conventional MDD development environments for DRE applications do not yet provide seamless integration of development capabilities and model checking capabilities.
A fundamental problem in multi-view 3D face modeling is the determination of the set of optimal views required for accurate 3D shape estimation for a generic face.
There is no analytical solution to this problem, instead (partial) solutions require (near) exhaustive combinatorial search, hence the inherent computational difficulty.
We build on our previous modeling framework which uses an efficient contour-based silhouette method and extend it by aggressive pruning of the view-sphere with view clustering and various imaging constraints.
A multi-view optimization search is performed using both model-based (eigenheads) and data-driven (visual hull) methods, yielding comparable best views.
These constitute the first reported set of optimal views for silhouette-based 3D face shape capture and provide useful empirical guidelines for the design of 3D face recognition systems.
Analyzing systems by means of simulation is necessarily a time consuming process.
This becomes even more pronounced when models of multiple systems must be compared.
In general, and even more so in today's fast-paced environment, competitive pressure does not allow for waiting on the results of a lengthy analysis.
That competitive pressure also makes it more imperative that the processing performance of systems be seriously considered in the system design.
Having a generic model allows one model to be applied to multiple systems in a given domain and provides a feedback mechanism to systems designers as to the operational impact of design decisions.
The concept of web services represents the next generation of architectures for interoperability between software applications based on software industry standards.
Presented here is an overview of web services, a discussion of the use of web services in the context of simulation and a demonstration of the use of web services for simulation as implemented in the Microsoft .Net software development and execution framework.
The paper focuses on the vital role of industry standards in the definition and implementation of web services and relates this to the opportunities and challenges for similar standards and benefits for interoperability in simulation software.
We investigate finite-time blow-up and stability of semilinear partial differential equations of the form ∂w_t/∂t = Gw_t + γ t^σ w_t^{1+β}, w_0(x) = φ(x) ≥ 0, x ∈ ℝ_+, where G is the generator of the standard gamma process and β > 0, σ ∈ ℝ, γ > 0 are constants.
We show that any initial value satisfying c_1 x^{-a_1} ≤ φ(x), x > x_0, for some positive constants x_0, c_1, a_1, yields a non-global solution if a_1 < 1 + σ, or if a_1 = 1 + σ and γ > 1.
If φ(x) ≤ c_2 x^{-a_2}, x > x_0, where x_0, c_2, a_2 > 0 and a_2 > 1 + σ, then the solution w_t is global and satisfies 0 ≤ w_t(x) ≤ C t^{-a}, x ≥ 0, for some positive constants C and a.
This extends the results previously obtained in the case of α-stable generators.
Systems of semilinear PDE's with gamma generators are also considered.
We consider the problem of clustering data lying on multiple subspaces of unknown and possibly different dimensions.
We show that one can represent the subspaces with a set of polynomials whose derivatives at a data point give normal vectors to the subspace associated with the data point.
Since the polynomials can be estimated linearly from data, subspace clustering is reduced to classifying one point per subspace.
We do so by choosing points in the data set that minimize a distance function.
A basis for the complement of each subspace is then recovered by applying standard PCA to the set of derivatives (normal vectors) at those points.
The final result is a new GPCA algorithm for subspace clustering based on simple linear and polynomial algebra.
Our experiments show that our method outperforms existing algebraic algorithms based on polynomial factorization and provides a good initialization to iterative techniques such as K-subspace and EM.
We also present applications of GPCA on computer vision problems such as vanishing point detection, face clustering, and news video segmentation.
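The core algebraic step can be illustrated on a toy case of two lines through the origin in the plane. The sketch below is our own simplification, not the full GPCA algorithm, and its point-selection rule is cruder than the distance-based choice described above: it fits the vanishing polynomial, takes its derivatives as normal vectors, and clusters the points accordingly.

import numpy as np

rng = np.random.default_rng(0)
# Two lines through the origin in R^2, with directions d1 and d2.
d1 = np.array([1.0, 0.5]); d2 = np.array([-0.3, 1.0])
pts = np.vstack([np.outer(rng.uniform(-1, 1, 50), d1),
                 np.outer(rng.uniform(-1, 1, 50), d2)])

# Degree-2 Veronese embedding: each point (x1, x2) -> (x1^2, x1*x2, x2^2).
V = np.column_stack([pts[:, 0]**2, pts[:, 0] * pts[:, 1], pts[:, 1]**2])

# The vanishing polynomial's coefficients span the (approximate) null space of V.
_, _, Vt = np.linalg.svd(V)
c = Vt[-1]                      # q(x) = c0*x1^2 + c1*x1*x2 + c2*x2^2

def grad_q(x):
    # The gradient of q at a data point is normal to the line containing it.
    return np.array([2 * c[0] * x[0] + c[1] * x[1], c[1] * x[0] + 2 * c[2] * x[1]])

normals = np.array([grad_q(p) for p in pts])
normals /= np.linalg.norm(normals, axis=1, keepdims=True)

# Pick two representative normals and assign each point to the closer one.
ref1 = normals[0]
ref2 = normals[np.argmin(np.abs(normals @ ref1))]   # most different normal
labels = (np.abs(normals @ ref1) < np.abs(normals @ ref2)).astype(int)
print("recovered normals:", ref1, ref2)
print("cluster sizes:", np.bincount(labels))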
The standard design of on-line auction systems places most of the computational load on the server and its adjacent links, resulting in a bottleneck in the system.
In this paper, we investigate the impact, in terms of the performance of the server and its adjacent links, of introducing active nodes into the network.
The performance study of the system is done using the stochastic process algebra formalism PEPA.
IP-networked storage protocols such as NFS and iSCSI have become increasingly common in today's LAN environments.
In this paper, we experimentally compare NFS and iSCSI performance for environments with no data sharing across machines.
This paper examines two multiple ground target tracking methods.
Their distinguishing feature is that they use the road network as additional prior geographical information to further refine the targets' state estimation.
The first method is based on belief functions theory for associating measurements to predictions as well as for determining the road segment relative to an existing target.
The second method uses a Variable Structure Interacting Multiple Model method integrated in a Multiple Hypothesis Tracking framework (MHT VS-IMM).
Finally, both approaches are compared, suggesting the possibility of using the advantages of the evidential approach inside the well-established MHT framework.
A fundamental aspect in the design of overlay networks is the path length/node degree trade-off.
Previous research has shown that it is possible to achieve logarithmic path lengths for logarithmic or even constant node degree.
While nearby contacts, with nodes that have close identifiers, ensure a connected lattice of nodes, short path lengths demand the use of long-range contacts.
In this respect, previous work exhibits limitations in scenarios where the node distribution is unbalanced: either the short path length properties do not hold, or node degree and/or signaling may need to grow with the size of the virtual identifier space instead of the number of nodes (which is usually several orders of magnitude smaller).
This paper contains some transaction related patterns from my forthcoming book, Patterns in Java, Volume 3: Design Patterns for Enterprise and Distributed Applications.
A transaction is a sequence of operations that change the state of an object or collection of objects in a well defined way.
Transactions are useful because they satisfy constraints about what the state of an object must be before, after or during a transaction.
For example, a particular type of transaction may satisfy a constraint that an attribute of an object must be greater after the transaction than it was before the transaction.
Sometimes, the constraints are unrelated to the objects that the transactions operate on.
For example, a transaction may be required to take place in less than a certain amount of time.
The patterns in this chapter provide guidance in selecting and combining constraints for common types of transactions.
Figure 1 shows how the patterns in this chapter build on each other.
(Figure 1: Pattern Map.) The first and most fundamental pattern to read is the ACID Transaction pattern.
It describes how to design transactions that never have inconsistent or unexpected outcomes.
The Composite Transaction pattern describes how to compose a complex transaction from simpler transactions.
The Two Phase Commit pattern describes how to ensure that a composite transaction is atomic.
The Audit Trail pattern describes how to maintain a historical record of ACID transactions.
You may notice the lack of code examples in this paper.
It is the author's opinion that the patterns in this paper are too high level for concrete code examples to be useful.
The application of these transaction related patterns can be readily understood at the design level.
How...
There are different approaches to mobile robot navigation.
Landmark-based localization has been shown to be an alternative to simple dead-reckoning, but landmarks are often environment-specific, and recognition algorithms are computationally very expensive.
This paper presents an approach to landmark-based navigation using emergency exit panels and corridors as cues, without odometric information.
Experiments are carried out to verify each landmark identification subsystem separately, and both behaviors are then combined in a complete path through the environment.
In this article we focus on evolving information systems.
First a delimitation of the concept of evolution is provided, resulting in a first attempt at a general theory for such evolutions.
The theory
Recent work in Bayesian classifiers has shown that a better and more flexible representation of domain knowledge results in better classification accuracy.
In previous work [1], we have introduced a new type of Bayesian classifier called Case-Based Bayesian Network (CBBN) classifiers.
We have shown that CBBNs can capture finer levels of semantics than possible in traditional Bayesian Networks (BNs).
Consequently, our empirical comparisons showed that CBBN classifiers have considerably improved classification accuracy over traditional BN classifiers.
The basic idea behind our CBBN classifiers is to intelligently partition the training data into semantically sound clusters.
A local BN classifier can then be learned from each cluster separately.
Bayesian Multi-net (BMN) classifiers also try to improve classification accuracy through a simple partitioning of the data by classes.
In this paper, we compare our CBBN classifiers to BMN classifiers.
Our experimental results show that CBBN classifiers considerably outperform BMN classifiers.
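A rough sketch of the partition-then-learn idea follows, using k-means clustering and a naive Bayes model as stand-ins for the clustering scheme and the local Bayesian-network classifiers actually used; the dataset, cluster count and model choice here are illustrative assumptions, not the paper's setup.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.cluster import KMeans
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, n_features=8, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# 1) Partition the training data into clusters.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_tr)

# 2) Learn one local classifier per cluster (naive Bayes as a simple stand-in
#    for the local Bayesian-network classifiers described above).
local = {c: GaussianNB().fit(X_tr[km.labels_ == c], y_tr[km.labels_ == c])
         for c in range(km.n_clusters)}

# 3) At prediction time, route each test point to its cluster's classifier.
pred = np.array([local[c].predict(x.reshape(1, -1))[0]
                 for c, x in zip(km.predict(X_te), X_te)])
print("accuracy:", (pred == y_te).mean())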
Anomalies are unusual and significant changes in a network's traffic levels, which can often span multiple links.
Diagnosing anomalies is critical for both network operators and end users.
It is a difficult problem because one must extract and interpret anomalous patterns from large amounts of high-dimensional, noisy data.
The average signal-to-noise ratio (SNR) of a generalized selection combining scheme, in which the m diversity branches (m ≤ L, where L is the total number of diversity branches available) with the largest instantaneous SNRs are selected and coherently combined, is derived.
A Rayleigh fading channel is assumed, and a simple closed-form expression for the SNR is found, which is upper bounded by the average SNR of maximal ratio combining and lower bounded by the average SNR of conventional selection combining.
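As a plausibility check of such a closed form (the paper's exact expression is not reproduced here), the sketch below compares a Monte Carlo estimate against the standard order-statistics formula mean_snr * m * (1 + sum_{i=m+1}^{L} 1/i) for i.i.d. Rayleigh branches, together with the MRC and SC bounds mentioned above; all parameter values are our own.

import numpy as np

rng = np.random.default_rng(1)
L, m, mean_snr = 6, 3, 2.0               # L branches, combine the m strongest

# In Rayleigh fading, the per-branch instantaneous SNR is exponentially distributed.
snr = rng.exponential(mean_snr, size=(200_000, L))
gsc = np.sort(snr, axis=1)[:, -m:].sum(axis=1)        # coherently combine the m best

closed_form = mean_snr * m * (1 + sum(1.0 / i for i in range(m + 1, L + 1)))
print("simulated  :", gsc.mean())
print("closed form:", closed_form)
print("MRC bound  :", mean_snr * L,
      " SC bound:", mean_snr * sum(1.0 / i for i in range(1, L + 1)))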
In this paper, we evaluate and suggest methods to improve the performance of IEEE 802.11 based ad hoc networks from the perspective of spatial reuse.
Since 802.11 employs virtual carrier sensing to reserve the medium prior to a packet transmission, the relative size of the spatial region it reserves for the impending traffic significantly affects the overall network performance.
We show that the space reserved by 802.11 for a successful transmission is far from optimal and depending on the one hop distances between the sender and the receiver, we can have three scenarios with very different spatial reuse characteristics.
We also introduce a new quantitative measure, the spatial reuse index, to evaluate the efficiency of the medium reservation accomplished by 802.11 virtual carrier sensing.
We also propose an improved virtual carrier sensing mechanism for wireless LAN scenarios and, using analysis and simulation results, show that it can significantly increase the spatial reuse and network throughput.
This paper studies the performance implications of using cryptographic controls in performance-critical systems.
Full cryptographic controls beyond basic authentication are considered and experimentally validated in the context of network file systems.
This paper demonstrates that processor speeds have recently become fast enough to support cryptographic controls in many performance-critical systems.
Integrity and authentication using keyed-hash and RSA as well as confidentiality using RC5 are tested.
This analysis demonstrates that full cryptographic controls are feasible in a distributed network file system, by showing the performance overhead for including signature, hash and encryption algorithms on various embedded and workstation computers.
The results from these experiments are used to predict the performance impact using three proposed network disk security schemes.
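The measurements below are not the paper's harness; they are a minimal sketch of the kind of keyed-hash throughput test such a study relies on, using only the Python standard library (RSA signatures and RC5 encryption would require third-party libraries and are omitted).

import hmac, hashlib, os, time

key = os.urandom(32)
block = os.urandom(64 * 1024)             # one 64 KiB "file block"
n_iter = 500

start = time.perf_counter()
for _ in range(n_iter):
    hmac.new(key, block, hashlib.sha256).digest()     # keyed-hash integrity tag
elapsed = time.perf_counter() - start

mbytes = n_iter * len(block) / 1e6
print(f"HMAC-SHA256 over {mbytes:.0f} MB: {mbytes / elapsed:.0f} MB/s on this machine")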
Our Digital Human Memory project (Lin & Hauptmann, 2002) aims to collect and index every aspect of human daily experiences in digital form.
By wearing a spy camera, microphones, and a BodyMedia armband, the wearer can collect rich records in an unobtrusive fashion, and many applications can be built on top of such multimodal collections.
For example, digital human memory can serve as a memory prosthesis to help the wearer recall past events; the habits or anomalies of the wearer can be analyzed from digital human memory.
The physiological signals recorded by a BodyMedia armband provide complementary dimensions of the wearer's experiences and play an important role in identifying the wearer's context and activities.
In this year's Physiological Data Modeling Contest, we build a baseline system that models the gender and context tasks as simple binary classification problems using only unambiguous annotations.
In addition, we explore two issues.
First, instead of ignoring ambiguo
Team Dynamo-Pavlov of Uppsala is an effort at the Department of Information Technology at Uppsala University in Sweden to establish a soccer team in the four-legged league of RoboCup.
The core development team of the project is a group of 4th-year computer science students taking a project course in the fall of 2002.
In 2003 a smaller group of students has been working with the code to compete in the German Open and RoboCup 2003.
For the past two decades, fractals (e.g., the Hilbert and Peano space-filling curves) have been considered the natural method for providing a locality-preserving mapping.
The idea behind a locality-preserving mapping is to map points that are nearby in the multi-dimensional space into points that are nearby in the one-dimensional space.
In this paper, we argue against the use of fractals in locality-preserving mapping algorithms, and present examples with experimental evidence to show why fractals produce poor locality-preserving mappings.
In addition, we propose an optimal locality-preserving mapping algorithm, termed the Spectral Locality-Preserving Mapping algorithm (Spectral LPM, for short), that makes use of the spectrum of the multi-dimensional space.
We give a mathematical proof for the optimality of Spectral LPM, and also demonstrate its practical use.
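The paper's Spectral LPM algorithm is not reproduced here; the sketch below only illustrates the underlying spectral idea on a tiny grid, ordering cells by the Fiedler vector of the grid graph's Laplacian. The grid size and the use of a dense eigensolver are our simplifications.

import numpy as np

rows, cols = 3, 5                       # a small 3 x 5 grid of cells
N = rows * cols
A = np.zeros((N, N))
for r in range(rows):
    for c in range(cols):
        u = r * cols + c
        if r + 1 < rows: A[u, u + cols] = A[u + cols, u] = 1   # vertical neighbor
        if c + 1 < cols: A[u, u + 1] = A[u + 1, u] = 1         # horizontal neighbor

L = np.diag(A.sum(axis=1)) - A          # Laplacian of the grid graph
_, eigvecs = np.linalg.eigh(L)
fiedler = eigvecs[:, 1]                 # eigenvector of the 2nd-smallest eigenvalue

order = np.argsort(fiedler)             # 1-D ordering of the 2-D cells
print([(int(u) // cols, int(u) % cols) for u in order])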
This paper describes an evaluation of the Kea automatic keyphrase extraction algorithm.
Tools that automatically identify keyphrases are desirable because document keyphrases have numerous applications in digital library systems, but are costly and time consuming to manually assign.
Keyphrase extraction algorithms are usually evaluated by comparison to author-specified keywords, but this methodology has several well-known shortcomings.
The results presented in this paper are based on subjective evaluations of the quality and appropriateness of keyphrases by human assessors, and make a number of contributions.
First, they validate previous evaluations of Kea that rely on author keywords.
Second, they show Kea's performance is comparable to that of similar systems that have been evaluated by human assessors.
Finally, they justify the use of author keyphrases as a performance metric by showing that authors generally choose good keywords.
directories, etc---exist in both a physical (paper) and virtual (Web) form.
Few approaches to knowledge management and digital libraries fully exploit the opportunities afforded by this fact.
Motivated by the goal of seamless integration of physical artifacts and their Web counterparts, we describe a large-scale case study of one aspect of this relationship.
Based on a corpus of hundreds of real-world product catalogs, we measure the effectiveness of hand-held scanner/OCR devices for the task of automatically retrieving a catalog's authoritative Web counterpart (the vendor's home page).
We find that, despite OCR errors, text fragments scanned from product catalogs can serve as reasonably effective queries for retrieving the Web counterparts.
Furthermore, the effectiveness of the technique increases with multiple scanned text fragments.
Our main technical contribution is a novel machine learning approach to adaptively merging the retrieved documents from multiple scans.
We present an application of multiple-objectives evolutionary optimization to the problem of engineering the distribution of the interdomain traffic in the Internet.
We show that this practical problem requires such a heuristic due to the potential conflicting nature of the traffic engineering objectives.
Furthermore, having to work in the parameter space of the real problem makes techniques such as evolutionary optimization very easy to use.
We show the successful application of our algorithm to two important problems in interdomain traffic engineering.
Recently, there has been a wide interest in using ontologies on the Web.
As a basis for this, RDF Schema (RDFS) provides means to define vocabulary, structure and constraints for expressing metadata about Web resources.
However, formal semantics are not provided, and its expressivity is not sufficient for full-fledged ontological modeling and reasoning.
In this paper, we will show how RDFS can be extended in such a way that a full knowledge representation (KR) language can be expressed in it, thus enriching it with the required additional expressivity and the semantics of this language.
We do this by describing the ontology language OIL as an extension of RDFS.
An important benefit of our approach is that it ensures maximal sharing of meta-data on the Web: even partial interpretation of an OIL ontology by less semantically aware processors will yield a correct partial interpretation of the meta-data.
We conclude that our method of extending is equally applicable to other KR formalisms.
In this paper we show how the resilience approach can give a generic solution to the problems of looping and high-bandwidth output in autonomous agents.
A resilient approach to looping is for the agent to delay responding again to a source that has recently triggered a task.
A resilient approach to high-bandwidth output is for the agent to delay output when the overall "noise" level in the environment is high.
The conditions under which the delays are triggered may be determined by data on past system behaviour.
Our generic approach allows agents to limit themselves, without requiring them to perform semantic analyses.
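A minimal sketch of the two delay rules as we read them (the class and parameter names are our own, not the authors' agent architecture): a per-source cooldown against looping, and an output hold when the ambient noise level is high.

import time
from collections import defaultdict

class ResilientAgent:
    def __init__(self, cooldown=5.0, noise_threshold=10.0):
        self.cooldown = cooldown                  # seconds before re-serving a source
        self.noise_threshold = noise_threshold    # messages/sec considered "noisy"
        self.last_served = defaultdict(lambda: -1e9)

    def should_respond(self, source, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_served[source] < self.cooldown:
            return False                          # this source triggered us too recently
        self.last_served[source] = now
        return True

    def should_emit(self, ambient_rate):
        return ambient_rate < self.noise_threshold   # hold output when the environment is noisy

agent = ResilientAgent()
print(agent.should_respond("peer-A", now=0.0))   # True
print(agent.should_respond("peer-A", now=2.0))   # False: within the cooldown window
print(agent.should_emit(ambient_rate=25.0))      # False: environment too noisy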
With the widespread and increasing use of data warehousing in industry, the design of effective data warehouses and their maintenance has become a focus of attention.
Independently of this, the area of temporal databases has been an active area of research for well beyond a decade.
This
Welch bound equality (WBE) signature sequences maximize the uplink sum capacity in direct-spread synchronous code division multiple access (CDMA) systems.
WBE sequences have a nice interference invariance property that typically holds only when the system is fully loaded; the signature set must be redesigned and reassigned as the number of active users changes to maintain this property.
An additional equiangular constraint on the signature set, however, maintains interference invariance.
Finding such signatures requires imposing equiangular side constraints on an inverse eigenvalue problem.
This paper presents an alternating projection algorithm that can design WBE sequences that satisfy equiangular side constraints.
The proposed algorithm can be used to find Grassmannian frames as well as equiangular tight frames.
Though one projection is onto a closed but non-convex set, it is shown that this algorithm converges to a fixed point, and these fixed points are partially characterized.
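In the spirit of the algorithm described (but not the authors' exact formulation), the sketch below alternates between a structural projection (unit diagonal, off-diagonal Gram entries clipped to the Welch bound) and a spectral projection (rank d with equal nonzero eigenvalues). The dimensions and iteration count are illustrative, and convergence to an exactly equiangular set is not guaranteed.

import numpy as np

def alternating_projection_etf(d, N, iters=2000, seed=0):
    # Try to find N unit-norm vectors in R^d whose off-diagonal Gram entries
    # all have magnitude close to the Welch bound (an equiangular tight frame).
    rng = np.random.default_rng(seed)
    alpha = np.sqrt((N - d) / (d * (N - 1)))      # Welch bound on coherence
    X = rng.standard_normal((d, N))
    X /= np.linalg.norm(X, axis=0)
    G = X.T @ X
    for _ in range(iters):
        # Projection 1: structural constraints (unit diagonal, bounded off-diagonal).
        H = np.clip(G, -alpha, alpha)
        np.fill_diagonal(H, 1.0)
        # Projection 2: spectral constraint (rank d, nonzero eigenvalues N/d).
        w, V = np.linalg.eigh(H)
        w_new = np.zeros_like(w)
        w_new[-d:] = N / d
        G = (V * w_new) @ V.T
    off = G - np.diag(np.diag(G))
    return alpha, np.abs(off).max()

alpha, coherence = alternating_projection_etf(d=3, N=6)
print("Welch bound:", round(alpha, 4), " max |off-diagonal Gram entry|:", round(coherence, 4))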
It has been revealed that the functions of transmembrane (TM) proteins (20-30% in most genomes [1]) can be classified and identified using information about their TM topology, i.e., the number of TM segments (TMSs), the positions of the TMSs and the orientation of the TMSs with respect to the membrane lipid bilayer [6].
Therefore, developing a TM topology prediction method with high reliability is a critical task for the elucidation of TM protein functions.
Although many TM topology prediction methods have been proposed, the prediction accuracies of these methods are still not high enough, i.e., at most 50-60% for the whole TM topology [3].
In this study, we propose a new consensus approach (ConPred elite) with reliabilities of 0.98 and 0.95 for prokaryotic and eukaryotic TM protein sequences, respectively, by combining the results from five currently used TM topology prediction methods.
We applied this method to TM proteins extracted from 87 prokaryotic and 12 eukaryotic proteomes.
In this paper, we introduce a system called Papyrus for distributed data mining over commodity and high performance networks and give some preliminary experimental results about its performance.
We are particularly interested in data mining over clusters of workstations, distributed clusters connected by high performance networks (super-clusters), and distributed clusters and super-clusters connected by commodity networks (meta-clusters).
This paper addresses the problem of automated design of a computer system for an embedded application.
The computer system to be designed consists of a VLIW processor and/or a customized systolic array, along with a cache subsystem comprising a data cache, instruction cache and second-level unified cache.
Several algorithms for "walking" the design space are described, and experimental results of custom-designed systems for two applications are presented.
A novel blind initialization procedure for iterative decision feedback equalizers in block-based transmission systems is proposed and investigated.
It relies on an initial stage using Regalia's block-based Constant Modulus iterative algorithm for the blind computation of a linear equalizer; then a switch to decision feedback mode is performed.
It is shown how the building blocks of the decision feedback equalizer (feedforward and feedback filters, automatic gain control and phase rotation) can be blindly estimated.
Due to the unknown lag introduced by the blind linear equalizer, delay synchronization of the feedforward and feedback filters is also required.
These filters are then refined over successive decision feedback iterations.
This approach can also be used as a blind channel identifier for other block receiver designs, such as soft ISI cancelers and decoder-aided (i.e., turbo) equalizers.
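For reference, a per-sample stochastic-gradient Constant Modulus update is sketched below; this is a simplification, not Regalia's block-based variant, and it omits the decision feedback switch, automatic gain control, phase rotation and delay synchronization discussed above. The channel, constellation and step size are arbitrary choices of ours.

import numpy as np

rng = np.random.default_rng(0)
n_sym = 5000
# Unit-modulus QPSK symbols through a short FIR channel, plus a little noise.
bits = rng.integers(0, 2, (2, n_sym)) * 2 - 1
symbols = (bits[0] + 1j * bits[1]) / np.sqrt(2)
channel = np.array([1.0, 0.4 + 0.3j, 0.2])
received = np.convolve(symbols, channel, mode="full")[:n_sym]
received = received + 0.01 * (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym))

L_eq, mu = 11, 1e-3
w = np.zeros(L_eq, dtype=complex)
w[L_eq // 2] = 1.0                                    # center-spike initialization
R2 = np.mean(np.abs(symbols) ** 4) / np.mean(np.abs(symbols) ** 2)  # CM dispersion constant

for n in range(L_eq, n_sym):
    x = received[n - L_eq:n][::-1]                    # equalizer regressor
    y = np.vdot(w, x)                                 # equalizer output y = w^H x
    w -= mu * (np.abs(y) ** 2 - R2) * np.conj(y) * x  # stochastic-gradient CMA step

out = np.array([np.vdot(w, received[n - L_eq:n][::-1]) for n in range(L_eq, n_sym)])
print("mean | |y|^2 - R2 | after adaptation:", np.mean(np.abs(np.abs(out) ** 2 - R2)).round(3))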
AweSim is a general-purpose simulation system which takes advantage of Windows technology to integrate programs and provide componentware.
AweSim includes the Visual SLAM simulation language to build network, subnetwork, discrete event, and continuous models.
Network models require no programming yet allow user-coded inserts in Visual Basic or C. Discrete event and continuous models can be created using the object-oriented technology of Visual Basic, C or Visual C++ and can be combined with network models.
This tutorial will demonstrate the process of using AweSim's componentware, describe examples of user interfaces that allow integration with other applications, and present a sample model.
Configuration management is an essential... This article addresses security in configuration management systems and proposes strategies for increasing security by randomized scheduling of actions constrained by a set of precedence relations...
This paper provides a novel approach for optimal route planning making efficient use of the underlying geometrical structure.
It combines classical AI exploration with computational geometry.
Given a set
We study the provision of deterministic rate guarantees over single crossbar switches.
Birkhoff decomposition yields a general approach for this problem, but the required complexity can be very high and the quality of service can be unsatisfactory for practical traffic sources.
developments such as the increasingly widespread acceptance of video surveillance in public places.
However, the decade's most striking developments (with respect to ubiquitous computing) have undoubtedly been the emergence of the Web as a global information and service resource and the widespread adoption of digital mobile telephony, letting users experience nearly ubiquitous wireless communications.
The Web's emergence has fundamentally changed the way many people interact with computers.
It has also created a culture that is substantially more amenable to the deployment of ubiquitous computing environments than that which existed when Weiser first articulated his vision.
Most obviously, the Web has created a nearly ubiquitous information and communications infrastructure.
We can now access a huge wealth of knowledge and services from almost any computer, including low-power mobile devices such as smart phones and PDAs.
However, the Web has had other, more subtl
Nowadays there are a lot of vector drawings available for inclusion in documents, which tend to be archived and accessed by category.
However, to find a drawing among hundreds of thousands is not easy.
While text-driven attempts at classifying image data have been recently supplemented with query-by-image content, these have been developed for bitmap-type data and cannot handle vectorial information.
In this paper we present an approach to index and retrieve ClipArt images by content, using topological and geometric information automatically extracted from drawings.
Additionally, we introduce a set of simplification heuristics to eliminate redundant information and useless elements.
We leverage the buffering capabilities of end-systems to achieve scalable, asynchronous delivery of streams in a peer-to-peer environment.
Unlike existing cache-and-relay schemes, we propose a distributed prefetching protocol where peers prefetch and store portions of the streaming media ahead of their playout time, thus not only turning themselves into possible sources for other peers but also allowing them to overcome the departure of their source-peer.
This stands in sharp contrast to existing cache-and-relay schemes, where the departure of the source-peer forces its peer children to go to the original server, thus disrupting their service and increasing server and network load.
Through mathematical analysis and simulations, we show the effectiveness of maintaining such asynchronous multicasts from several source-peers to other children peers, and the efficacy of prefetching in the face of peer departures.
We confirm the scalability of our dPAM protocol as it is shown to significantly reduce server load.
This paper presents an overview of techniques for improving the efficiency of option pricing simulations, including quasi-Monte Carlo methods, variance reduction, and methods for dealing with discretization error.
This paper presents a machine learning method to predict polyadenylation signals (PASes) in human DNA and mRNA sequences by analysing features around them.
This method consists of three sequential steps of feature manipulation: generation, selection and integration of features.
In the first step, new features are generated using k-gram nucleotide or amino acid patterns.
In the second step, a number of important features are selected by an entropy-based algorithm.
In the third step, support vector machines are employed to recognize true PASes from a large number of candidates.
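The three steps map naturally onto a standard toolchain; the sketch below is a toy stand-in (synthetic sequences, mutual information as the entropy-based criterion, and a linear SVM), not the datasets or exact feature definitions used in the paper.

import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
def random_seq(n): return "".join(rng.choice(list("ACGT"), n))

# Toy data: "positive" sequences carry the canonical AATAAA hexamer, negatives do not.
pos = [random_seq(20) + "AATAAA" + random_seq(20) for _ in range(200)]
neg = [random_seq(46) for _ in range(200)]
X, y = pos + neg, np.array([1] * 200 + [0] * 200)

model = make_pipeline(
    CountVectorizer(analyzer="char", ngram_range=(3, 3)),   # step 1: k-gram generation (k=3)
    SelectKBest(mutual_info_classif, k=50),                  # step 2: entropy-based selection
    LinearSVC(),                                             # step 3: SVM recognition
)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))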
Our study shows that true PASes in DNA and mRNA sequences can be characterized by different features, and also shows that both upstream and downstream sequence elements are important for recognizing PASes from DNA sequences.
We tested our method on several public data sets as well as our own extracted data sets.
In most cases, we achieved better validation results than those reported previously on the same data sets.
The important motifs observed are highly consistent with those reported in literature.
We use a formal tool to extract Finite State Machines (FSM) based representations (lists of states and transitions) of sequential circuits described by flip-flops and gates.
These complete and optimized representations helps the designer to understand the accurate behaviour of the circuit.
This deep understanding is a prerequisite for any verification or test process.
An example is fully presented to illustrate our method.
This simple pipelined processor comes from our experience in computer architecture and digital design education.
The effects of the windowing process, widely investigated in the scientific literature for narrow-band components embedded in white noise, are not sufficiently detailed when signals are corrupted by colored noise.
Such a phenomenon can heavily affect the estimation of the spectral parameters of the noisy signal.
In this paper, the effects of windowing on the output of analog-to-digital converters with ΣΔ topology, which present a spectrally shaped quantization noise, are analyzed.
In particular, the spectral leakage of both narrow-band and wide-band components is investigated, and a criterion for choosing the most appropriate window for any given modulator resolution is given.
The proposed analysis validates the use of the Hanning sequence as the optimum two-term cosine window to be employed for characterizing low-order ΣΔ modulators.
We develop a probability forecasting model through a synthesis of Bayesian belief-network models and classical time-series analysis.
By casting Bayesian time-series analyses as temporal belief-network problems, we introduce dependency models that capture richer and more realistic models of dynamic dependencies.
With richer models and associated computational methods, we can move beyond the rigid classical assumptions of linearity in the relationships among variables and of normality of their probability distributions.
We proposed a method for recognizing matrices which contain abbreviation symbols, and a format for representing the structure of matrices, and reported experimental results in our paper [1].
The method consisted of four processes: detection of matrices, segmentation of elements, construction of networks and analysis of the matrix structure.
In the paper, our work was described with a focus on the construction of networks and the analysis of the matrix structure.
However, we concluded that improvements in the other two processes were very important for obtaining a high accuracy rate for recognition.
In this paper, we describe the two improved processes, the detection of matrices and the segmentation of elements, and we report the experimental results.
The need for a sharable resource that can provide deep anatomical knowledge and support inference for biomedical applications has recently been the driving force in the creation of biomedical ontologies.
Previous attempts at the symbolic representation of anatomical relationships necessary for such ontologies have been largely limited to general partonomy and class subsumption.
We propose an ontology of anatomical relationships beyond class assignments and generic part-whole relations and illustrate the inheritance of structural attributes in the Digital Anatomist Foundational Model of Anatomy.
Our purpose is to generate a symbolic model that accommodates all structural relationships and physical properties required to comprehensively and explicitly describe the physical organization of the human body.
Mayday is an architecture that combines overlay networks with lightweight packet filtering to defend against denial of service attacks.
The overlay nodes perform client authentication and protocol verification, and then relay the requests to a protected server.
The server is protected from outside attack by simple packet filtering rules that can be efficiently deployed even in backbone routers.
Mayday generalizes
Applications that require good network performance often use parallel TCP streams and TCP modifications to improve the effectiveness of TCP.
If the network bottleneck is fully utilized, this approach boosts throughput by unfairly stealing bandwidth from competing TCP streams.
Improving the effectiveness of TCP is easy, but improving effectiveness while maintaining fairness is difficult.
In this paper, we describe an approach we implemented that uses a long virtual round trip time in combination with parallel TCP streams to improve effectiveness on underutilized networks.
Our approach prioritizes fairness at the expense of effectiveness when the network is fully utilized.
We compared our approach with standard parallel TCP over a wide-area network, and found that our approach preserves effectiveness and is fairer to competing traffic than standard parallel TCP.
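A back-of-the-envelope view of why a long virtual round trip time restores fairness on a congested bottleneck, using the standard Mathis et al. throughput approximation rather than the authors' implementation; all numbers below are illustrative only.

# Mathis et al. approximation: BW ~ MSS * C / (RTT * sqrt(p)).
from math import sqrt

MSS, C, rtt, p = 1460 * 8, sqrt(3 / 2), 0.1, 1e-3      # bits, constant, 100 ms RTT, loss rate
single = MSS * C / (rtt * sqrt(p))                     # one standard TCP flow

k = 8                                                  # parallel streams
aggressive = k * single                                # plain parallel TCP: ~k times the fair share
virtual_rtt = k * rtt                                  # lengthen each stream's virtual RTT
restrained = k * (MSS * C / (virtual_rtt * sqrt(p)))   # aggregate matches one standard flow

print(f"one standard flow  : {single / 1e6:.2f} Mbit/s")
print(f"{k} plain parallel   : {aggressive / 1e6:.2f} Mbit/s")
print(f"{k} with virtual RTT : {restrained / 1e6:.2f} Mbit/s")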
Seven Tones ([13]) is a search engine specialized in linguistics and languages.
Its current database, which is stored on a single machine, contains approximately 240,000 indexed web pages about linguistics and languages.
The aim of today's software development is to build applications by the reuse of binary components.
This requires the composition of components and as special cases component enhancement as well as adaption.
We demonstrate how to deal with these cases by furnishing components with a type consisting of two protocols --- a call and a use protocol.
We model these protocols by finite automata and show how those reflect component enhancement and adaption.
This mechanism allows for automatic adaption of components in changing environments.
In order to
The South Asian countries are gradually diversifying with some inter-country variation in favor of high value commodities, namely fruits, vegetables, livestock and fisheries.
Agricultural diversification is strongly influenced by price policy, infrastructure development (especially markets and roads), urbanization and technological improvements.
Rainfed areas have benefited more as a result of agricultural diversification in favor of high value crops by substituting inferior coarse cereals.
Agricultural diversification is also contributing to employment opportunities in agriculture and increasing exports.
The need is to suitably integrate production and marketing of high value commodities through appropriate institutions.
Market reforms in developing and strengthening desired institutions through required legal changes would go a long way in boosting agricultural growth, augmenting income of small farm holders and promoting exports.
An adequate natural language description of developments in a real-world scene may be taken as a proof of `understanding what is going on'.
An algorithmic system which generates natural language descriptions from video recordings of road traffic scenes may be said to `understand' its input to the extent the algorithmically generated text is acceptable to humans judging it.
A Fuzzy Metric-Temporal Horn Logic (FMTHL) provides a formalism to represent both schematic and instantiated conceptual knowledge about the depicted scene and its temporal development.
The resulting conceptual representation mediates in a systematic manner between the spatio-temporal geometric descriptions extracted from video input and a module which generates natural language text.
This contribution outlines a thirty-year effort to create such a `cognitive vision' system, indicates its current status, summarizes lessons learned along the way, and discusses open problems against this background.
This report mostly focuses on the information extraction task (task II).
Hidden Markov fields (HMF), which are widely applied in various problems arising in image processing, have recently been generalized to Pairwise Markov Fields (PMF).
Although the hidden process is no longer necessarily a Markov one in PMF models, they still allow one to recover it from observed data.
We propose in this paper two original methods of parameter estimation in PMF, based on general Stochastic Gradient (SG) and Iterative Conditional Estimation (ICE) principles, respectively.
Some experiments concerning unsupervised image segmentation based on Bayesian Maximum Posterior Mode (MPM) are also presented.
This article discusses the problems and proposes a top-down approach to overcome some of the problems.
A combined yo-yo approach aims to exploit both strategies' benefits.
Providing service differentiation in wireless networks has attracted much attention in recent research.
Existing studies so far have focused on the design of differentiated media access algorithms.
Some QoS metrics, such as queueing delay, cannot be completely addressed by these approaches.
Moreover, without a formalized service differentiation goal that quantifies the outcome of differentiation, the performance of most approaches fluctuates, especially in short time-scales.
This paper addresses the above problems by introducing the concept of proportional service differentiation to the domain of wireless networks and focuses on providing proportional delay differentiation in wireless LANs.
Due to the unique characteristic of distributed medium sharing, the scheduling algorithms employed in wireline networks cannot be applied directly to the context of wireless LANs.
We argue that delay differentiation in wireless LANs can only be achieved through joint packet scheduling at the network layer and distributed coordination at the MAC layer.
Therefore, we present a cross-layer waiting time priority scheduling (CWTP) algorithm.
CWTP consists of two tiers: an intra-node WTP scheduler at the network layer and an inter-node distributed coordination function at the MAC layer.
These two tiers coordinate via a mapping function which maps the normalized waiting time at the network layer to the backoff time at the MAC layer.
Two mapping schemes, namely linear mapping and piecewise linear mapping, are presented and evaluated in this paper.
Extensive simulation results show that the CWTP algorithm can effectively achieve proportional delay differentiation in wireless LANs.
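A schematic of the mapping tier only (the parameter values, window sizes and breakpoints are our own assumptions, not the evaluated configuration): the normalized waiting time from the network-layer WTP scheduler is translated into a MAC-layer backoff, either linearly or piecewise-linearly.

def normalized_wait(wait_time, class_weight):
    # WTP: packets of higher-weight classes "age" faster.
    return wait_time * class_weight

def linear_map(w_norm, cw_max=1023, w_ref=50e-3):
    # Larger normalized waiting time -> smaller contention window.
    return max(1, int(cw_max * max(0.0, 1 - w_norm / w_ref)))

def piecewise_linear_map(w_norm, breakpoints=((10e-3, 512), (30e-3, 128), (50e-3, 16))):
    # Step down the contention window as the normalized waiting time grows.
    for threshold, cw in breakpoints:
        if w_norm < threshold:
            return cw
    return 8

for wait, weight in [(5e-3, 1.0), (5e-3, 2.0), (40e-3, 1.0)]:
    w = normalized_wait(wait, weight)
    print(wait, weight, "->", linear_map(w), piecewise_linear_map(w))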
Although many real-world stochastic planning problems are more naturally formulated by hybrid models with both discrete and continuous variables, current state-of-the-art methods cannot adequately address these problems.
We present the first framework that can exploit problem structure for modeling and solving hybrid problems efficiently.
We formulate these problems as hybrid Markov decision processes (MDPs with continuous and discrete state and action variables), which we assume can be represented in a factored way using a hybrid dynamic Bayesian network (hybrid DBN).
This formulation also allows us to apply our methods to collaborative multiagent settings.
We present a new linear program approximation method that exploits the structure of the hybrid MDP and lets us compute approximate value functions more efficiently.
In particular, we describe a new factored discretization of continuous variables that avoids the exponential blow-up of traditional approaches.
We provide theoretical bounds on the quality of such an approximation and on its scale-up potential.
We support our theoretical arguments with experiments on a set of control problems with up to 28-dimensional continuous state space and 22-dimensional action space.
this paper for enabling the healthcare system to improve its capability, is to unbind the large scale and complex tasks, so efficient and effective organizations can be formed around these distinct tasks.
Specifically we argue for two very different systems: an efficient system to deal with health issues that affect entire populations (and that can be made efficient on a large scale) and a system to address the complexities of individual medical care in an effective and error-free way.
By separating simple, large scale "health care" from complex, individualized "medical care", we relieve physicians of tasks that can be addressed with a much higher efficiency, enabling them to focus their attention on the complex tasks for which they are uniquely trained.
Not only does this create a more cost-effective health care system, but it also allows for a more effective and error-free medical system.
Secure systems are best built on top of a small trusted operating system: The smaller the operating system, the easier it can be assured or verified for correctness.
In this
A fractional advection-dispersion equation (ADE) is a generalization of the classical ADE in which the second-order derivative is replaced with a fractional-order derivative.
In contrast to the classical ADE, the fractional ADE has solutions that resemble the highly skewed and heavy-tailed breakthrough curves observed in field and laboratory studies.
These solutions, known as α-stable distributions, are the result of a generalized central limit theorem which describes the behavior of sums of finite- or infinite-variance random variables.
We use this limit theorem in a model which sums the length of particle jumps during their random walk through a heterogeneous porous medium.
If the length of solute particle jumps is not constrained to a representative elementary volume (REV), dispersive flux is proportional to a fractional derivative.
The nature of fractional derivatives is readily visualized and their parameters are based on physical properties that are measurable.
When a fractional Fick's law replaces the classical Fick's law in an Eulerian evaluation of solute transport in a porous medium, the result is a fractional ADE.
Fractional ADEs are ergodic equations since they occur when a generalized central limit theorem is employed.
We introduce channel sequence types to study finitary polymorphism in the context of mobile processes modelled in the π-calculus.
We associate to each channel a set of exchange types, and we require that output processes send values of one of those types, and input processes accept values of all the types in the set.
Our type assignment system enjoys subject reduction and guarantees the absence of communication errors.
We give several examples of polymorphism, and we encode the λ-calculus with the strict intersection type discipline.
This paper describes the Visual Simulation Environment (VSE) software product.
VSE has been developed under $1.3 million research funding, primarily from the U.S. Navy, for over a decade.
It enables discrete-event, generalpurpose, object-oriented, picture-based, component-based, visual simulation model development and execution.
This advanced environment can be used for solving complex problems in areas such as air traffic control and space systems, business process reengineering and workflows, complex system design evaluation, computer and communication networks, computer performance evaluation, education and training, health care systems, manufacturing systems, military/combat systems, satellite and wireless communications systems, service systems, supply chain management, and transportation systems.
A description of optimal sequences for direct-spread code division multiple access is a byproduct of recent characterizations of the sum capacity.
This paper restates the sequence design problem as an inverse singular value problem and shows that it can be solved with finite-step algorithms from matrix analysis.
Relevant algorithms are reviewed and a new one-sided construction is proposed that obtains the sequences directly instead of computing the Gram matrix of the optimal signatures.
As we approach 100nm technology, interconnect issues are becoming one of the main concerns in the testing of gigahertz system-on-chips.
Voltage distortion (noise) and delay violations (skew) contribute to the signal integrity loss and ultimately functional error, performance degradation and reliability problems.
In this paper, we first define a model for integrity faults on the high-speed interconnects.
Then, we present a BIST-based test methodology that includes two special cells to detect and measure noise and skew occurring on the interconnects of the gigahertz system-on-chips.
Using an inexpensive test architecture the integrity information accumulated by these special cells can be scanned out for final test and reliability analysis.
The Internet has fostered an unconventional and powerful style of collaboration: "wiki" web sites, where every visitor has the power to become an editor.
In this paper we investigate the dynamics of Wikipedia, a prominent, thriving wiki.
We make three contributions.
First, we introduce a new exploratory data analysis tool, the history flow visualization, which is effective in revealing patterns within the wiki context and which we believe will be useful in other collaborative situations as well.
Second, we discuss several collaboration patterns highlighted by this visualization tool and corroborate them with statistical analysis.
Third, we discuss the implications of these patterns for the design and governance of online collaborative social spaces.
We focus on the relevance of authorship, the value of community surveillance in ameliorating antisocial behavior, and how authors with competing perspectives negotiate their differences.
The overall number of nearest neighbors in bounded distance decoding (BDD) algorithms is given by N_{o,eff} = N_o + N_{BDD}, where N_{BDD} denotes the number of additional, noncodeword, neighbors that are generated during the (suboptimal) decoding process.
We identify and enumerate the nearest neighbors associated with the original Generalized Minimum Distance (GMD) and Chase decoding algorithms.
After careful examination of the decision regions of these algorithms, we derive an approximated probability ratio between the error contribution of a noncodeword neighbor (one of NBDD points) and a codeword nearest neighbor.
For Chase Algorithm 1 it is shown that the contribution to error probability of a noncodeword nearest neighbor is a factor of 2^(d-1) less than the contribution of a codeword, while for Chase Algorithm 2 the factor is 2^(⌈d/2⌉-1), d being the minimum Hamming distance of the code.
For Chase Algorithm 3 and GMD, a recursive procedure for calculating this ratio, which turns out to be nonexponential in d, is presented.
This procedure can also be used for specifically identifying the error patterns associated with Chase Algorithm 3 and GMD.
Utilizing the probability ratio, we propose an improved approximated upper bound on the probability of error based on the union bound approach.
Simulation results are given to demonstrate and support the analytical derivations.
igin, these anastomoses coil extensively over 200-300 µm, before re-anastomosing with neighbouring vessels to form progressively larger secondary arteries (Olson, 1996).
In the skipjack tuna Katsuwonis pelamis and the Atlantic cod Gadus morhua, the SCS forms capillary beds (Dewar et al., 1994; Burne, 1929), which are assumed to be typical of water breathing teleosts (Vogel, 1985a), before draining into the primary venous system.
However, in Salaria pavo (prev. Blennius) and Zosterisessor ophiocephalus, it fails to do so (Lahnsteiner et al., 1990).
The distribution and volume of the SCS has been widely discussed.
To date it has been shown that secondary vessels supply secondary capillary beds in the body surface, the fins, the buccal cavity, the pharynx and the peritoneum.
Keywords: data mining, predictive modeling, data interchange formats, XML, SGML, ensemble learning, partitioned learning, distributed learning.
We introduce a markup language based upon XML for working with the predictive models produced by data mining systems.
The language is called the Predictive Model Markup Language (PMML) and can be used to define predictive models and ensembles of predictive models.
It provides a flexible mechanism for defining schema for predictive models and supports model selection and model averaging involving multiple predictive models.
It has proved useful for applications requiring ensemble learning, partitioned learning, and distributed learning.
In addition, it facilitates moving predictive models across applications and systems.
This paper presents a stochastic model of the lymphocyte recruitment in inflamed brain microvessels.
The framework used is based on stochastic process algebras for mobile systems.
The automatic tool used in the simulation is BioSpi.
We compare our approach with classical hydrodynamical specifications.
In this paper, we address the problem of dynamic allocation of storage bandwidth to application classes so as to meet their response time requirements.
A key problem in Optical Burst Switching (OBS) is to schedule as many bursts as possible on wavelength channels so that the throughput is maximized and the burst loss is minimized.
Currently, most of the research on OBS (e.g., burst scheduling and assembly algorithms) has been concentrated on reducing burst loss in an "average-case" sense.
Little effort has been devoted to understanding the worst-case performance.
Since OBS itself is an open-loop control system, it may often exhibit worst-case behavior when adversely synchronized; thus a poor worst-case performance can lead to unacceptable system-wide performance.
In this paper, we use competitive analysis to analyze the worstcase performance of a large set of scheduling algorithms, called best-effort online scheduling algorithms, for OBS networks, and establish a number of interesting upper and lower bounds on the performance of such algorithms.
Our analysis shows that the performance of any best-effort online algorithm is closely related to a few factors, such as the range of offset time, burst length ratio, scheduling algorithm, and number of data channels.
A surprising discovery is that the worst-case performance of any best-effort online scheduling algorithm is primarily determined by the maximum to minimum burst length ratio, followed by the range of offset time.
Furthermore, if all bursts have the same burst length and offset time, all best-effort online scheduling algorithms generate the same optimal solution, regardless of how different they may look.
Our analysis can also be extended to some non-best-effort online scheduling algorithms, such as the well-known Horizon algorithm, to establish similar bounds.
Based on the analytic results, we give guidelines for several widely discussed OBS problems, inclu...
One of the fundamental problems in distributed computing is how to efficiently perform routing in a faulty network in which each link fails with some probability.
This paper investigates how big the failure probability can be, before the capability to efficiently find a path in the network is lost.
Our main results show tight upper and lower bounds for the failure probability which permits routing, both for the hypercube and for the d-dimensional mesh.
We use tools from percolation theory to show that in the d-dimensional mesh, once a giant component appears --- efficient routing is possible.
A different behavior is observed when the hypercube is considered.
In the hypercube there is a range of failure probabilities in which short paths exist with high probability, yet finding them must involve querying essentially the entire network.
Thus the routing complexity of the hypercube shows an asymptotic phase transition.
The critical probability with respect to routing complexity lies in a different location than that of the critical probability with respect to connectivity.
Finally we show that an oracle access to links (as opposed to local routing) may reduce significantly the complexity of the routing problem.
We demonstrate this fact by providing tight upper and lower bounds for the complexity of routing in the random graph G n,p .
Many applications, including web transfers, software distribution, video-on-demand, and peer-to-peer data downloads, require the retrieval of structured documents consisting of multiple components like images, video, and text.
Large systems using these applications may be made more scalable by using efficient data distribution techniques like multicast, and by enabling clients to retrieve data from multiple servers in parallel.
With the proliferation of Internet services, many solutions have emerged to provide Quality-of-Service (QoS) guarantees when the demands for the hosted services exceed the server's capacity.
In this paper, we take an analytical approach to answering key questions in the design and performance of application-level QoS techniques, especially those that are based on the multi-threading or multi-processing abstraction.
Key to our analysis is the integration of the effects of concurrency into the interactions between multi-threaded services.
To this end, we extend traditional time-sharing models to develop the multi-threaded round-robin (MTRR) servers, a more accurate model of operation of typical multi-threaded Internet services.
For this model, we first develop powerful, yet computationally efficient, mathematical relationships that describe the performance (in terms of throughput and response time) of multi-threaded services.
We then apply optimization techniques to derive the optimal allocation of threads given specific QoS objective functions.
Using realistic workloads on a typical web server, we show the efficacy and accuracy of the proposed new methodology.
We propose to modify a conventional single-chip multicore so that a sequential program can migrate from one core to another automatically during execution.
The goal of execution migration is to take advantage of the overall onchip cache capacity.
We introduce the affinity algorithm, a method for distributing cache lines automatically on several caches.
We show that on working-sets exhibiting a property called "splittability", it is possible to trade cache misses for migrations.
Our experimental results indicate that the proposed method has a potential for improving the performance of certain sequential programs, without degrading significantly the performance of others.
Griffin, Jaggard, and Ramachandran introduced in [4] a framework for understanding the design principles of path-vector protocols such as the Border Gateway Protocol (BGP), which is used for inter-domain routing on the Internet.
They described, as an application of their framework, a study of Hierarchical-BGP-like systems where routing at a node is determined by the relationship with the next-hop node on a path (e.g., an ISP-peering relationship) and some additional scoping rules (e.g., the use of backup routes).
These systems are called class-based path-vector systems.
The robustness of these systems depends on the presence of a global constraint on the system, but an adequate constraint has not yet been given.
In this paper, we give the best known sufficient constraint that guarantees robust convergence.
We show how to generate this constraint from the design specification of the path-vector system.
We also give centralized and distributed algorithms to enforce this constraint, discuss applications of these algorithms, and compare them to algorithms given in previous work on path-vector protocol design.
CODEX (COrnell Data EXchange) stores secrets for subsequent access by authorized clients.
It also is a vehicle for exploring the generality of a relatively new approach to building distributed services that are both fault-tolerant and attack-tolerant.
Elements of that approach include: embracing the asynchronous (rather than synchronous) model of computation, use of Byzantine quorum systems for storing state, and employing proactive secret sharing with threshold cryptography for implementing confidentiality and authentication of service responses.
Besides explaining the CODEX protocols, experiments to measure their performance are discussed.
Predicting the secondary structure of RNA molecules from the knowledge of the primary structure (the sequence of bases) is still a challenging task.
There are algorithms that provide good results e.g.
based on the search for an energetic optimal configuration.
However the output of such algorithms does not always give the real folding of the molecule and therefore a feature to judge the reliability of the prediction would be appreciated.
In this paper we present results on the expected structural behavior of LSU rRNA derived using a stochastic context-free grammar and generating functions.
We show how these results can be used to judge the predictions made for LSU rRNA by any algorithm.
In this way it will be possible to identify those predictions which are close to the natural folding of the molecule, with a success probability of 97%.
We propose a model and an algorithm to perform exact power estimation taking into account all temporal and spatial correlations of the input signals.
The proposed methodology is able to accurately model temporal and spatial correlations at the logic level, with the input signal correlations being specified at the word level using a simple but effective formulation.
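To make the role of signal correlations concrete, here is a minimal sketch of trace-driven switching-activity power estimation: toggle counts are taken from an actual multi-cycle input trace, so temporal and spatial correlations are captured implicitly. The node capacitances, supply voltage, clock frequency, and the standard dynamic-power formula are illustrative assumptions, not the exact model proposed above.

    # Sketch: trace-driven estimate of average dynamic power.
    # Correlations are captured implicitly because toggle counts come from an
    # actual per-cycle node trace rather than independent signal probabilities.

    def dynamic_power(trace, cap, vdd=1.2, freq=500e6):
        """trace: list of bit-vectors (one per cycle, one bit per node);
           cap: per-node switched capacitance in farads."""
        toggles = [0] * len(cap)
        for prev, cur in zip(trace, trace[1:]):
            for i, (a, b) in enumerate(zip(prev, cur)):
                toggles[i] += a != b
        cycles = len(trace) - 1
        # energy of one toggle is 0.5 * C * Vdd^2; toggles/cycles is the activity
        return 0.5 * vdd**2 * freq * sum(t / cycles * c for t, c in zip(toggles, cap))

    trace = [(0, 0, 1), (1, 0, 1), (1, 1, 0), (0, 1, 0)]   # hypothetical 3-node trace
    print(dynamic_power(trace, cap=[5e-15, 8e-15, 3e-15]), "W")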
We study the nature of the relationship between performance measures and privacy guarantees in the case study of an adaptive protocol for the secure transmission of real-time audio over the Internet.
The analysis is conducted on a...
A number of network simulators are now capable of simulating systems with millions of devices, at the IP packet level.
With this ability comes a need for realistic network descriptions of commensurate size.
This paper describes our effort to build a detailed model of the U.S. Internet backbone based on measurements taken from a variety of mapping sources and tools.
We identify key attributes of a network design that are needed to use the model in a simulation, describe which components are available and which must be modeled, and discuss the pros and cons of this approach as compared to synthetic generation.
As for attributes that we have to model, we also briefly discuss some measurement efforts that can potentially provide the missing pieces, and thus improve the fidelity of the model.
Finally, we describe the resulting network model of the U.S. Internet backbone, which is being made publicly available.
We investigate two important, common fluid flow patterns from computational fluid dynamics (CFD) simulations, namely, swirl and tumble motion typical of automotive engines.
We study and visualize swirl and tumble flow using three different flow visualization techniques: direct, geometric, and texture-based.
When illustrating these methods side-by-side, we describe the relative strengths and weaknesses of each approach within a specific spatial dimension and across multiple spatial dimensions typical of an engineer's analysis.
Our study is focused on steady-state flow.
Based on this investigation we offer perspectives on where and when these techniques are best applied in order to visualize the behavior of swirl and tumble motion.
This paper presents a new model of agency, called the KGP (Knowledge, Goals and Plan) model.
This draws from the classic BDI model and proposes a hierarchical agent architecture with a highly modular structure that synthesises various reasoning and sensing capabilities of the agent in an open and dynamic environment.
The novel features of the model include: its innovative use of Computational Logic (CL) in a way that facilitates both the formal analysis of the model and its computational realisability directly from the high-level specification of the agents (a first prototype for the development of KGP agents exists, based upon a correct computational counterpart of the model); the modular separation of concerns and flexibility afforded by the model in designing heterogeneous agents and in developing independently the various components of an agent; and the declarative agent control provided through a context-sensitive cycle CL theory component that regulates the agent's operational behaviour according to the current circumstances of operation, thus breaking away from the conventional one-size-fits-all control of operation.
Recently, Milner and Moller have presented several decomposition results for processes.
Inspired by these, we investigate decomposition techniques for the verification of parallel systems.
In particular, we consider equivalences of the form p_1 | ... | p_n ~ q_1 | ... | q_m (I), where the p_i and q_j are (finite) state systems.
We provide a decomposition procedure for the p_i and q_j and give criteria that must be checked on the decomposed processes to see whether (I) does or does not hold.
We analyse the complexity of our procedure and show that it is polynomial in n, m and the sizes of the p_i and q_j if there is no communication.
We also show that with communication the verification of (I) is co-NP hard, which makes it very unlikely that a polynomial complexity bound exists.
But by applying our decomposition technique to Milner's cyclic scheduler we show that verification can become polynomial in space and time for practical examples, where standard techniques are exponential.
Note: The authors are supported by the European Communities under ESPRIT Basic Research Action 3006 (CONCUR).
This paper describes a semantic portal on the domain of International Affairs.
This application is an integration of several technologies in the field of the Semantic Web in a complex project.
We describe an approach, tools and techniques that allow building a semantic portal, where access is based on the meaning of concepts and relations of the International Affairs domain.
The approach comprises an automatic ontology-based annotator, a semantic search engine with a natural language interface, a web publication tool allowing semantic navigation, and a 3D visualization component.
The portal is being deployed in the Royal Institute Elcano (Real Instituto Elcano) in Spain, which is a prestigious independent political institute whose mission is to comment on the political situation in the world focusing on its relation to Spain.
As part of its dissemination strategy it operates a public website.
The online content can be accessed by navigating through categories or by a keyword-based, full text search engine.
The work described in this paper aims at improving access to the content.
The semantic portal is currently being tested by the Institute.
Although the syntax and semantics of mainstream agent content languages are based on those of predicate logic, the popularity of the Java programming language, the availability of various free Java-based agent development toolkits and the use of frame-based ontology modelling languages have meant that many developers of multi-agent systems are accustomed to conceptualising their problem domain in terms of classes and objects.
Techniques for automatic query expansion from top retrieved documents have recently shown promise for improving retrieval effectiveness on large collections but there is still a lack of systematic evaluation and comparative studies.
In this paper we focus on term-scoring methods based on the differences between the distribution of terms in (pseudo-)relevant documents and the distribution of terms in all documents, seen as a complement or an alternative to more conventional techniques.
We show that when such distributional methods are used to select expansion terms within Rocchio's classical reweighting scheme, the overall performance is not likely to improve.
However, we also show that when the same distributional methods are used to both select and weight expansion terms the retrieval effectiveness may considerably improve.
We then argue, based on their variation in performance on individual queries, that the set of ranked terms suggested by individual distributional methods can be combined to further improve mean performance, by analogy with ensembling classifiers, and present experimental evidence supporting this view.
Taken together, our experiments show that with automatic query expansion it is possible to achieve performance gains as high as 21.34% over the non-expanded query (for non-interpolated average precision).
We also discuss the effect that the main parameters involved in automatic query expansion, such as query difficulty, number of selected documents, and number of selected terms, have on retrieval effectiveness.
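The following is a minimal sketch, under stated assumptions, of the general recipe discussed above: terms are scored by a divergence between their distribution in the pseudo-relevant documents and in the whole collection, and the top-scoring terms are folded into a Rocchio-style reweighted query. The KL-style score, the weights alpha and beta, and the toy documents are illustrative choices, not the specific scoring functions evaluated in the paper.

    import math
    from collections import Counter

    def distributional_scores(rel_docs, all_docs):
        """Score terms by a KL-style divergence between their relative frequency
        in the pseudo-relevant set and in the whole collection (illustrative)."""
        rel = Counter(t for d in rel_docs for t in d)
        coll = Counter(t for d in all_docs for t in d)
        n_rel, n_coll = sum(rel.values()), sum(coll.values())
        return {t: (rel[t] / n_rel) * math.log((rel[t] / n_rel) / (coll[t] / n_coll))
                for t in rel}

    def expand_query(query, rel_docs, all_docs, k=10, alpha=1.0, beta=0.5):
        """Rocchio-style reweighting: original terms plus the top-k scored terms."""
        scores = distributional_scores(rel_docs, all_docs)
        expanded = Counter({t: alpha for t in query})
        for t, s in sorted(scores.items(), key=lambda x: -x[1])[:k]:
            expanded[t] += beta * s
        return dict(expanded)

    docs = [["fast", "query", "expansion"], ["query", "terms", "ranking"],
            ["unrelated", "text", "about", "cooking"]]
    print(expand_query(["query"], rel_docs=docs[:2], all_docs=docs))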
Despite their benefits, programmers rarely use formal specifications, because they are difficult to write and they require an up front investment in time.
To address these issues, we present a tool that helps programmers write and debug algebraic specifications.
Given an algebraic specification, our tool instantiates a prototype that can be used just like any regular Java class.
The tool can also modify an existing application to use the prototype generated by the interpreter instead of a hand-coded implementation.
The tool improves the usability of algebraic specifications in the following ways: (i) A programmer can "run" an algebraic specification to study its behavior.
The tool reports in which way a specification is incomplete for a client application.
(ii) The tool can check whether a specification and a hand-coded implementation behave the same for a particular run of a client application.
(iii) A prototype can be used when a hand-coded implementation is not yet available.
Two case studies demonstrate how to use the tool.
The Joint Warfare System (JWARS) is being equipped with a Commander Model (CM) to perform situation assessment and Course of Action (COA) selection, and a Commander Behavior Model (CBM) to bias decisions with a commander's leadership style.
The CM is a hybrid artificial intelligence system that models doctrine through the use of fuzzy rule sets, together with a tree-based lookahead algorithm for the strategy.
The CBM employs behavior-based fuzzy rule sets to augment the CM in assessing the situation, and in biasing the COA selection criteria.
Extending from Myers-Briggs personality traits, the CBM links personality traits to military attitudes, consequences and values.
Employing the fuzzy rule sets, the resulting sets of values are combined to select a specific COA with an auditable trail.
Users will have the ability to modify both the input parameters and the underlying rules.
The CM/CBM is applicable to decisions at multiple echelons.
Multitrajectory Simulation allows random events in a simulation to generate multiple trajectories.
Management techniques have been developed to manage the choices of trajectories to be continued, as combinatorial explosion and limited resources prevent continuing all of them.
One of the seemingly most promising methods used trajectory probability as a criterion, so that higher-probability trajectories were preferentially continued, resulting in a more even distribution of (surviving) trajectory probabilities and a better approximation to a reference outcome than stochastic sampling.
It was also found that this management technique rests on an ergodicity assumption that fails.
The higher and lower probability trajectories behave differently to a significant extent.
The effect is to limit the number of trajectories which can usefully be applied to the problem, such that additional runs would fail to converge further toward the definitive reference outcome set.
This may be a useful model for understanding other simulation modeling limitations.
Some recent works have shown that under an autoregressive constraint on the input signal, least-squares equation-error methods provide stable models of the estimated transfer function.
Here we present an alternative proof of this fact which allows the order of the autoregressive input to be increased by one, for both the monic and unit-norm approaches.
This paper presents an analysis of increased diversity in genetic programming.
A selection strategy based on genetic lineages is used to increase genetic diversity.
A genetic lineage is defined as the path from an individual to individuals which were created from its genetic material.
The method is applied to three problem domains: Artificial Ant, Even-5-Parity and symbolic regression of the Binomial-3 function.
We examine how increased diversity affects problems differently and draw conclusions about the types of diversity which are more important for each problem.
Results indicate that diversity in the Ant problem helps to overcome deception, while elitism in combination with diversity is likely to benefit the Parity and regression problems.
This paper explores a statistical basis for a process often described in computer vision: image segmentation by region merging following a particular order in the choice of regions.
We exhibit a particular blend of algorithmics and statistics whose segmentation error is, as we show, limited from both the qualitative and quantitative standpoints.
This approach can be efficiently approximated in linear time/space, leading to a fast segmentation algorithm tailored to processing images described using most common numerical pixel attribute spaces.
The conceptual simplicity of the approach makes it simple to modify and cope with hard noise corruption, handle occlusion, authorize the control of the segmentation scale, and process unconventional data such as spherical images.
Experiments on gray-level and color images, obtained with a short readily available C-code, display the quality of the segmentations obtained.
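A simplified sketch in the spirit of this statistically driven region merging follows: 4-neighbour edges are sorted by gray-level difference and regions are merged with a union-find structure whenever their mean difference falls below a size-dependent bound. The bound b(n) used here, and the parameters g and q, are illustrative simplifications, not the exact merging predicate derived in the paper.

    import math

    def segment(img, g=256.0, q=32.0):
        """img: 2D list of gray levels. Returns a label map (illustrative sketch)."""
        h, w = len(img), len(img[0])
        parent = list(range(h * w))
        size = [1] * (h * w)
        mean = [float(img[y][x]) for y in range(h) for x in range(w)]

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a

        def bound(n):                       # size-dependent merging tolerance
            return g * math.sqrt(math.log(2.0 + n) / (2.0 * q * n))

        # 4-neighbour edges sorted by absolute gray-level difference
        edges = []
        for y in range(h):
            for x in range(w):
                if x + 1 < w:
                    edges.append((abs(img[y][x] - img[y][x + 1]), y * w + x, y * w + x + 1))
                if y + 1 < h:
                    edges.append((abs(img[y][x] - img[y + 1][x]), y * w + x, (y + 1) * w + x))
        for _, a, b in sorted(edges):
            ra, rb = find(a), find(b)
            if ra != rb and abs(mean[ra] - mean[rb]) <= bound(size[ra]) + bound(size[rb]):
                mean[ra] = (mean[ra] * size[ra] + mean[rb] * size[rb]) / (size[ra] + size[rb])
                size[ra] += size[rb]
                parent[rb] = ra
        return [[find(y * w + x) for x in range(w)] for y in range(h)]

    print(segment([[10, 12, 200], [11, 13, 205]]))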
It has been known for some time that proportional output feedback will stabilize certain classes of linear time-invariant systems under an adaptation mechanism that drives the feedback gain su#ciently high.
More recently, it was demonstrated that discrete implementations of the high-gain adaptive controller also require adaptation of the sampling rate.
In this paper, we use recent advances in the mathematical field of dynamic equations on time scales to unify the discrete and continuous versions of the high-gain adaptive controller.
A novel proof method is presented based on time scales, as is a brief tutorial on the subject of time scales.
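For concreteness, here is a minimal forward-Euler simulation of the classical continuous-time high-gain adaptive law u = -k y with gain adaptation k' = y^2 on a scalar unstable plant; the plant, initial condition, and step size are illustrative assumptions, and the sketch omits the sampled-data and time-scale aspects that are the subject of the paper.

    # Sketch: classical high-gain adaptive output feedback on a scalar plant
    #   x' = a*x + u,  y = x,  u = -k*y,  k' = y^2
    # The gain k grows until -k dominates the unstable pole a, after which
    # y (and hence k') decays; simulated here with forward Euler.

    def simulate(a=1.0, x0=1.0, dt=1e-3, t_end=10.0):
        x, k = x0, 0.0
        for _ in range(int(t_end / dt)):
            y = x
            u = -k * y
            x += dt * (a * x + u)
            k += dt * y * y
        return x, k

    x_final, k_final = simulate()
    print("final output %.3e, final gain %.3f" % (x_final, k_final))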
We present a method for feature construction and selection that finds a minimal set of conjunctive features that are appropriate to perform the classification task.
For problems where this bias is appropriate, the method outperforms other constructive induction algorithms and is able to achieve higher classification accuracy.
The application of the method in the search for minimal multi-level boolean expressions is presented and analyzed with the help of some examples.
The complexities and costs associated with preserving the nation's bridge infrastructure demand innovative approaches to analysis of data and prediction of future bridge conditions.
Several Bridge Management systems (BMS) have come into existence following the ISTEA act of 1991.
The policy analysis modules of the BMS systems developed so far are restricted to analytical methods.
With the availability of modern infrastructure, realistic simulation models are being developed in several fields.
This leads to the question of whether reasonably realistic and practical discrete event simulation (DES) based policy analysis tools can be developed.
A DES model was developed for the Salem district of Virginia using a simulation language, STROBOSCOPE.
This simulation model can be used to simulate the bridge network's behavior under different policies and to observe the impact on the health of the network, making it a useful tool for decision-making.
The tool enables the formulation and testing of different bridge maintenance policies.
In this paper, we address the problem of protecting the underlying attribute values when sharing data for clustering.
The challenge is how to meet privacy requirements and guarantee valid clustering results as well.
A Bayesian blackboard is just a conventional, knowledge-based blackboard system in which knowledge sources modify Bayesian networks on the blackboard.
As an architecture for intelligence analysis and data fusion, this has many advantages: the blackboard is a shared workspace or "corporate memory" for collaborating analysts; analyses can be developed over long periods of time with information that arrives in dribs and drabs; the computer's contribution to analysis can range from data-driven statistical algorithms up to domain-specific, knowledge-based inference; and, perhaps most important, the control of intelligence-gathering in the world and of inference on the blackboard can be rational, that is, grounded in probability and utility theory.
Our Bayesian blackboard architecture, called AIID, serves both as a prototype system for intelligence analysis and as a laboratory for testing mathematical models of the economics of intelligence analysis.
Small scale software developments need specific low-cost and low-overhead methods and tools to deliver quality products within tight time and budget constraints.
This is particularly true of testing, because of its cost and impact on final product reliability.
We propose a lightweight approach to embed tests into components, making them self-testable.
We also propose a method to evaluate testing efficiency, based on mutation techniques, which ultimately provides an estimation of a component's quality.
This allows the software developer to consciously trade reliability for resources.
Our methodology has been implemented in the Eiffel, Java, C++ and Perl languages.
The Java implementation, built on top of iContract, is outlined here.
Animals and robots perceiving and acting in a world require an ontology that accommodates entities, processes, states of affairs, etc., in their environment.
If the perceived environment includes information-processing systems, the ontology should reflect that.
Scientists studying such systems need an ontology that includes the first-order ontology characterising physical phenomena, the second-order ontology characterising perceivers of physical phenomena, and a (recursive) third order ontology characterising perceivers of perceivers, including introspectors.
We argue that second- and third-order ontologies refer to contents of virtual machines and examine requirements for scientific investigation of combined virtual and physical machines, such as animals and robots.
We show how the CogAff architecture schema, combining reactive, deliberative, and meta-management categories, provides a first draft schematic third-order ontology for describing a wide range of natural and artificial agents.
Many previously proposed architectures use only a subset of CogAff, including subsumption architectures, contention-scheduling systems, architectures with `executive functions' and a variety of types of `Omega' architectures.
Preprocessing is an often used approach for solving hard instances of propositional satisfiability (SAT).
Preprocessing can be used for reducing the number of variables and for drastically modifying the set of clauses, either by eliminating irrelevant clauses or by inferring new clauses.
Over the years, a large number of formula manipulation techniques have been proposed, which in some situations have allowed solving instances not otherwise solvable with state-of-the-art SAT solvers.
This paper proposes probing-based preprocessing, an integrated approach for preprocessing propositional formulas, that for the first time integrates in a single algorithm most of the existing formula manipulation techniques.
Moreover, the new unified framework can be used to develop new techniques.
Preliminary experimental results illustrate that probing-based preprocessing can be effectively used as a preprocessing tool in state-of-the-art SAT solvers.
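One ingredient of such probing can be sketched as follows: each literal is temporarily assumed, unit propagation is run, and if a conflict is reached the opposite literal is asserted as a necessary assignment (failed-literal probing). The clause representation and the tiny formula are illustrative, and the sketch integrates none of the other manipulation techniques mentioned above.

    # Sketch: failed-literal probing on a CNF formula given as a list of
    # integer clauses (DIMACS-style literals, e.g. [1, -2] means x1 or not x2).

    def unit_propagate(clauses, assignment):
        """Return (consistent?, extended assignment dict literal -> True)."""
        assign = dict(assignment)
        changed = True
        while changed:
            changed = False
            for clause in clauses:
                if any(lit in assign for lit in clause):      # clause satisfied
                    continue
                unassigned = [lit for lit in clause if -lit not in assign]
                if not unassigned:                            # all literals false
                    return False, assign
                if len(unassigned) == 1:                      # unit clause
                    assign[unassigned[0]] = True
                    changed = True
        return True, assign

    def probe(clauses, num_vars):
        """Assume each literal in turn; a conflict forces the opposite literal."""
        forced = {}
        for var in range(1, num_vars + 1):
            for lit in (var, -var):
                ok, _ = unit_propagate(clauses, {lit: True, **forced})
                if not ok:
                    forced[-lit] = True       # lit failed, so -lit is necessary
        return forced

    cnf = [[1, 2], [1, -2], [-1, 3]]          # probing x1=False fails -> x1 forced
    print(probe(cnf, num_vars=3))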
In this paper we propose a process and service oriented framework, which offers a structural and conceptual orientation in the field of electronic payment.
It renders possible an integral view on electronic payment that goes beyond the frame of an individual system.
To do this, we have generalized the systems-oriented approaches to a phase-oriented payment model.
Using this model, requirements and supporting services for electronic payment can be sorted systematically and described from both the customers' and the merchants' viewpoint.
Providing integrated payment processes and services is proving to be a difficult task.
With this paper we would like to outline the necessity for a Payment Service Provider to act as a mediator for suppliers and users of electronic payment systems.
We consider the problem of finding the optimal pair of string patterns for discriminating between two sets of strings, i.e.
finding the pair of patterns that is best with respect to some appropriate scoring function that gives higher scores to pattern pairs which occur more in the strings of one set, but less in the other.
We present an O(N²) time algorithm for finding the optimal pair of substring patterns, where N is the total length of the strings.
The algorithm considers all possible Boolean combinations of the patterns, e.g.
patterns of the form p ∧ ¬q, which indicates that the pattern pair is considered to match a given string s if p occurs in s AND q does NOT occur in s. The same algorithm can be applied to a variant of the problem where we are given a single set of sequences along with a numeric attribute assigned to each sequence, and the problem is to find the optimal pattern pair whose occurrence in the sequences is correlated with this numeric attribute.
An efficient implementation based on suffix arrays is presented, and the algorithm is applied to several nucleotide sequence datasets of moderate size, combined with microarray gene expression data, aiming to find regulatory elements that cooperate, complement, or compete with each other in enhancing and/or silencing certain genomic functions.
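A brute-force sketch of the underlying search problem, not the O(N²) suffix-array algorithm described above, may help fix ideas: candidate substrings up to a small length are enumerated, and the four Boolean combinations of each pattern pair are scored by how well they separate a positive set of strings from a negative set. The length cap and the difference-of-counts score are illustrative assumptions.

    from itertools import product

    def substrings(strings, max_len=3):
        subs = set()
        for s in strings:
            for i in range(len(s)):
                for j in range(i + 1, min(i + 1 + max_len, len(s) + 1)):
                    subs.add(s[i:j])
        return subs

    def best_pattern_pair(pos, neg, max_len=3):
        """Exhaustively score combinations such as (p in s) and (q not in s)."""
        cands = substrings(pos + neg, max_len)
        combos = [("p and q",         lambda a, b: a and b),
                  ("p and not q",     lambda a, b: a and not b),
                  ("not p and q",     lambda a, b: (not a) and b),
                  ("not p and not q", lambda a, b: (not a) and (not b))]
        best = None
        for p, q in product(cands, repeat=2):
            for name, f in combos:
                score = (sum(f(p in s, q in s) for s in pos)
                         - sum(f(p in s, q in s) for s in neg))
                if best is None or score > best[0]:
                    best = (score, p, q, name)
        return best

    pos = ["acgtac", "ccgtaa", "tcgtag"]          # toy "positive" sequences
    neg = ["aaattt", "tttggg"]
    print(best_pattern_pair(pos, neg))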
Motivation: Is protein secondary structure primarily determined by local interactions between residues closely spaced along the amino acid backbone or by non-local tertiary interactions?
To answer this question, we measure the entropy densities of primary and secondary structure sequences, and the local inter-sequence mutual information density.
A three-dimensional model of diffusion-limited coral growth is introduced.
As opposed to previous models, in this model we take a "polyp oriented" approach.
Here, coral morphogenesis is the result of the collective behaviour of the individual coral polyps.
In the polyp oriented model, branching occurs spontaneously, as opposed to previous models in which an explicit rule was responsible for branching.
We discuss the mechanism of branching in our model.
Also, the effects of polyp spacing on the coral morphology are studied.
This paper presents our work on supporting flexible query evaluation over large distributed, heterogeneous, and autonomous sources.
Flexibility means that the query evaluation process can be configured according to application context-specific resource constraints and can also interact with its execution environment.
There are many useful observable characteristics of the state of a tracked object.
These characteristics could include normalized size, normalized speed, normalized direction, object color, position, and object shape among other characteristics.
Although these characteristics are by no means completely independent of each other, it is desirable to determine a separate, compact description of each of these aspects.
Using this compact factored description, different aspects of individual sequences can be estimated and described without overwhelming computational or storage costs.
In this work, we describe Factored Latent Analysis (FLA) and its application to deriving factored models for segmenting sequences in each of K separate characteristics.
This method exploits temporally local statistics within each of the latent aspects and their interdependencies to derive a model that allows segmentation of each of the observed characteristics.
This method is data driven and unsupervised.
Activity classification results for multiple challenging environments are shown.
Migrating applications from conventional to temporal database management technology has received scant mention in the research literature.
This paper formally defines three increasingly restrictive notions of upward compatibility which capture properties of a temporal SQL with respect to conventional SQL that, when satisfied, provide for a smooth migration of legacy applications to a temporal system.
The notions of upward compatibility dictate the semantics of conventional SQL statements and constrain the semantics of extensions to these statements.
The paper evaluates the seven extant temporal extensions to SQL, all of which are shown to complicate migration through design decisions that violate one or more of these notions.
We then outline how SQL--92 can be systematically extended to become a temporal query language that satisfies all three notions.
Self-organizing large amounts of textual data in accordance with some topic structure is an increasingly important application of clustering.
Adaptive Resonance Theory (ART) neural networks possess several interesting properties that make them appealing in this area.
Although ART has been used in several research works as a text clustering tool, the level of quality of the resulting document clusters has not been clearly established yet.
In this paper, we present experimental results with binary ART that address this issue by determining how close clustering quality is to an upper bound on clustering quality.
We describe two modifications to the FreeBSD 4.6 NFS server to increase read throughput by improving the read-ahead heuristic to deal with reordered requests and stride access patterns.
We show that for some stride access patterns, our new heuristics improve end-to-end NFS throughput by nearly a factor of two.
We also show that benchmarking and experimenting with changes to an NFS server can be a subtle and challenging task, and that it is often difficult to distinguish the impact of a new algorithm or heuristic from the quirks of the underlying software and hardware with which they interact.
We discuss these quirks and their potential effects.
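A minimal sketch of a stride-detecting read-ahead heuristic of the general kind discussed above (not the FreeBSD code itself): the last few request offsets for a file are kept, a constant stride is inferred even when requests arrive out of order, and the next blocks along that stride are predicted for prefetching. The window size and prefetch depth are illustrative parameters.

    # Sketch: per-file stride detection for read-ahead.  The last few request
    # offsets are kept; if sorting them reveals a constant stride, the next
    # `depth` offsets along that stride are predicted for prefetching.

    class StrideReadAhead:
        def __init__(self, window=4, depth=2):
            self.window, self.depth = window, depth
            self.recent = []

        def record(self, offset):
            """Record a request offset; return the list of offsets to prefetch."""
            self.recent.append(offset)
            self.recent = self.recent[-self.window:]
            if len(self.recent) < self.window:
                return []
            ordered = sorted(self.recent)            # tolerate reordered requests
            gaps = [b - a for a, b in zip(ordered, ordered[1:])]
            if len(set(gaps)) == 1 and gaps[0] > 0:  # constant positive stride
                stride = gaps[0]
                return [ordered[-1] + stride * i for i in range(1, self.depth + 1)]
            return []

    ra = StrideReadAhead()
    for off in [0, 128, 64, 192]:                    # reordered stride-64 pattern
        print(off, "->", ra.record(off))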
For many years the Brightness Constancy Constraint Equation (BCCE) has been used for optical flow and related computer vision computations.
However, almost all cameras have some kind of automatic exposure feature such as Automatic Gain Control (AGC), so that the overall exposure level of the image varies as the camera is aimed at brighter or darker portions of a scene.
Moreover, because most cameras have some kind of unknown nonlinear response function, the change due to AGC cannot be captured by merely applying a multiplicative constant to the pixels of each image.
We propose, therefore, a Lightspace Change Constraint Equation (LCCE) that accounts for exposure change (AGC) together with the nonlinear response function of the camera.
The response function can be automatically "learned" by an intelligent image processing system presented with differently exposed captures of the same subject matter in overlapping regions of registered images.
Most importantly, a Logarithmic Lightspace Change Constraint Equation (LLCCE) is shown to have a very simple mathematical formulation.
The LCCE (and Log LCCE) is applied to the estimation of the projective coordinate transformation between pairs of images in a sequence, and is compared with examples where the BCCE fails.
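For reference, the classical brightness constraint and a hedged sketch of the log-domain lightspace constraint can be written as follows; the symbol L for log-lightspace and the single additive gain term are illustrative notation rather than the paper's exact derivation:

    BCCE:            I_x u + I_y v + I_t = 0
    LLCCE (sketch):  L_x u + L_y v + L_t = \Delta \log k,
                     where  L = \log f^{-1}(I) = \log k + \log q

Here f is the camera response function, k the AGC gain, and q the photoquantity; under this reading, an exposure change enters the constraint only as a single additive constant per frame pair.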
Technology trends present new challenges for processor architectures and their instruction schedulers.
Growing transistor density will increase the number of execution units on a single chip, and decreasing wire transmission speeds will cause long and variable on-chip latencies.
These trends will severely limit the two dominant conventional architectures: dynamic issue superscalars, and static placement and issue VLIWs.
We present a new execution model, called Static Placement Dynamic Issue (SPDI), in which the hardware and the static scheduler instead work cooperatively.
This paper focuses on the static instruction scheduler for SPDI.
We identify and explore three issues SPDI schedulers must consider---locality, contention, and depth of speculation.
We evaluate a range of SPDI scheduling algorithms executing on an Explicit Data Graph Execution (EDGE) architecture.
We find that a surprisingly simple one achieves an average of 5.6 instructions per cycle (IPC) for SPEC2000 on a 64-wide issue machine, and comes within 80% of the performance obtained without on-chip latencies.
These results suggest that the compiler is effective at balancing on-chip latency and parallelism, and that the division of responsibilities between the compiler and the architecture is well suited to future systems.
This paper describes the problems and an adaptive solution for process control in the rubber industry.
We show that the human and economic benefits of an adaptive solution for the approximation of process parameters are very attractive.
The
Under traditional IP multicast, application-level FEC can only be implemented on an end-to-end basis between the sender and the clients.
Emerging overlay and peer-to-peer (p2p) networks open the door for new paradigms of network FEC.
The deployment of FEC within these emerging networks has received very little attention (if any).
In this paper, we analyze and optimize the impact of Network-Embedded FEC (NEF) in overlay and p2p multimedia multicast networks.
Under NEF, we place FEC codecs in selected intermediate nodes of a multicast tree.
The NEF codecs detect and recover lost packets within FEC blocks at earlier stages before these blocks arrive at deeper intermediate nodes or at the final leaf nodes.
This approach significantly reduces the probability of receiving undecodable FEC blocks.
In essence, the proposed NEF codecs work as signal regenerators in a communication system and can reconstruct most of the lost data packets without requiring retransmission.
We develop an optimization algorithm for the placement of NEF codecs within random multicast trees.
Our theoretical analysis and simulation results show that a relatively small number of NEF codecs placed in (sub-)optimally selected intermediate nodes of a network can improve the throughput and overall reliability dramatically.
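A simplified numerical sketch of the effect being optimized: for an (n, k) FEC block sent over H hops with independent per-hop packet loss, it compares the probability that the block is undecodable end-to-end against the case where a single NEF codec at an intermediate hop decodes and regenerates the block. The independence assumption and the particular (n, k, H, p) values are illustrative, and forwarding of an undecodable block past the codec is ignored for simplicity.

    from math import comb

    def block_decodable(n, k, p_loss):
        """P(at least k of n packets survive) with i.i.d. per-packet loss p_loss."""
        p = 1.0 - p_loss
        return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

    def end_to_end(n, k, p_hop, hops):
        """No intermediate recovery: a packet must survive all hops."""
        return block_decodable(n, k, 1.0 - (1.0 - p_hop)**hops)

    def with_nef(n, k, p_hop, hops, codec_at):
        """One NEF codec decodes and regenerates the full block after `codec_at` hops."""
        first = block_decodable(n, k, 1.0 - (1.0 - p_hop)**codec_at)
        second = block_decodable(n, k, 1.0 - (1.0 - p_hop)**(hops - codec_at))
        return first * second

    n, k, p_hop, hops = 32, 24, 0.02, 8           # illustrative parameters
    print("P(undecodable), end-to-end FEC :", 1 - end_to_end(n, k, p_hop, hops))
    print("P(undecodable), NEF at mid-path:", 1 - with_nef(n, k, p_hop, hops, 4))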
We show that network motifs found in natural regulatory networks may also be found in an artificial regulatory network model created through a duplication / divergence process.
It is shown that these network motifs exist more frequently in a genome created through the aforementioned process than in randomly generated genomes.
These results are then compared with a network motif analysis of the gene expression networks of Escherichia coli and Saccharomyces cerevisiae.
In addition, it is shown that certain individual network motifs may arise directly from the duplication / divergence mechanism.
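As a small illustration of the kind of count involved, the following sketch counts three-node feed-forward loops (edges A→B, A→C, B→C) in a directed graph given as an edge list; the toy network and the choice of motif are illustrative assumptions.

    def count_feed_forward_loops(edges):
        """Count ordered triples (a, b, c) with a->b, b->c and a->c, all distinct."""
        succ = {}
        for a, b in edges:
            succ.setdefault(a, set()).add(b)
        count = 0
        for a, outs in succ.items():
            for b in outs:
                if b == a:
                    continue
                for c in succ.get(b, ()):
                    if c != a and c != b and c in outs:
                        count += 1
        return count

    # toy regulatory network: gene 1 regulates 2 and 3, gene 2 regulates 3
    print(count_feed_forward_loops([(1, 2), (1, 3), (2, 3), (3, 4)]))   # -> 1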
Statistical density estimation techniques are used in many computer vision applications such as object tracking, background subtraction, motion estimation and segmentation.
The particle filter (Condensation) algorithm provides a general framework for estimating the probability density functions (pdf) of general non-linear and non-Gaussian systems.
However, since this algorithm is based on a Monte Carlo approach, where the density is represented by a set of random samples, the number of samples required becomes problematic, especially for high-dimensional problems.
In this paper, we propose an alternative to the classical particle filter in which the underlying pdf is represented with a semi-parametric method based on a mode finding algorithm using mean-shift.
A mode propagation technique is designed for this new representation for tracking applications.
A quasi-random sampling method [14] in the measurement stage is used to improve performance, and sequential density approximation for the measurements distribution is performed for efficient computation.
We apply our algorithm to a high-dimensional color-based tracking problem, and demonstrate its performance by showing competitive results with other trackers.
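A minimal one-dimensional sketch of the mean-shift mode finding on which the proposed density representation relies: each starting point is iteratively moved to the Gaussian-kernel weighted mean of the samples until it converges, and nearby fixed points are merged into a list of modes. The bandwidth, merge threshold, and sample data are illustrative assumptions.

    import math

    def mean_shift_modes(samples, bandwidth=1.0, tol=1e-5, max_iter=200):
        """Return the (deduplicated) modes found by running mean shift from each sample."""
        def shift(x):
            w = [math.exp(-0.5 * ((x - s) / bandwidth) ** 2) for s in samples]
            return sum(wi * si for wi, si in zip(w, samples)) / sum(w)

        modes = []
        for x in samples:
            for _ in range(max_iter):
                nx = shift(x)
                if abs(nx - x) < tol:
                    break
                x = nx
            if all(abs(x - m) > bandwidth / 10 for m in modes):   # merge duplicates
                modes.append(x)
        return modes

    data = [1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.9]             # two clusters
    print(mean_shift_modes(data, bandwidth=0.5))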
This paper proposes a research methodology for attacking the problem of providing fluent and natural discourse about space and spatially situated tasks between naive users and robots.
We suggest flexible and adaptive ontology mediation, parameterized according to empirically determined discourse and contextual factors, as a suitable architecture with clear applications for the treatment of natural human-human dialog also.
In this paper we present an overview of classical results about the variance reduction technique of control variates.
We emphasize aspects of the theory that are of importance to the practitioner, as well as presenting relevant applications.
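A minimal numerical sketch of the classical control-variate estimator reviewed here: to estimate E[exp(U)] with U uniform on (0, 1), the variate X = U with known mean 1/2 serves as the control, and the coefficient is estimated as c* = Cov(Y, X)/Var(X); the integrand, sample size, and seed are illustrative choices.

    import math
    import random

    def control_variate_estimate(n=100_000, seed=1):
        random.seed(seed)
        u = [random.random() for _ in range(n)]
        y = [math.exp(ui) for ui in u]              # Y = exp(U), target E[Y] = e - 1
        my = sum(y) / n
        mx = sum(u) / n
        cov = sum((yi - my) * (xi - mx) for yi, xi in zip(y, u)) / (n - 1)
        var = sum((xi - mx) ** 2 for xi in u) / (n - 1)
        c = cov / var                               # c* = Cov(Y, X) / Var(X)
        controlled = [yi - c * (xi - 0.5) for yi, xi in zip(y, u)]   # E[X] = 1/2 known
        return my, sum(controlled) / n

    naive, cv = control_variate_estimate()
    print("plain Monte Carlo: %.5f   with control variate: %.5f   exact: %.5f"
          % (naive, cv, math.e - 1))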
BGP is the de-facto inter-domain routing protocol and it is essential to understand how well BGP performs in the Internet.
As a step toward this understanding, this paper studies the routing performance of a sample set of prefixes owned by the U.S. Department of Defense (DoD).
We examine how reliably the sample set is connected to the Internet and how it affects the rest of the Internet.
We show that our sample set receives reliable connectivity, with the exception of a few prefixes.
We also show that, on average, the sample set has minimal impact on global routing, but certain BGP features used by DoD routers result in periods of excessive routing overhead.
During some stressful periods, our sample set, only 0.2% of all prefixes, contributed over 80% of a particular BGP update class.
We explain how the BGP design allows certain local changes to propagate globally and amplifies the impact of our sample prefixes.
We study indexing techniques for main memory, including hash indexes, binary search trees, T-trees, B+-trees, interpolation search, and binary search on arrays.
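As a small example of one of the array-based techniques listed above, the following sketch implements interpolation search on a sorted array of integers: the probe position is chosen where the key would lie if the values were uniformly distributed, with the usual bound updates otherwise; the sample data are illustrative.

    def interpolation_search(a, key):
        """Return an index of key in sorted list a, or -1; assumes numeric keys."""
        lo, hi = 0, len(a) - 1
        while lo <= hi and a[lo] <= key <= a[hi]:
            if a[hi] == a[lo]:                       # avoid division by zero
                return lo if a[lo] == key else -1
            # probe proportionally to where key falls between a[lo] and a[hi]
            pos = lo + (key - a[lo]) * (hi - lo) // (a[hi] - a[lo])
            if a[pos] == key:
                return pos
            if a[pos] < key:
                lo = pos + 1
            else:
                hi = pos - 1
        return -1

    data = [2, 7, 11, 19, 23, 31, 47, 53]
    print(interpolation_search(data, 23), interpolation_search(data, 24))   # 4 -1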
An approach for tight coupling of process models and software development tools --- with the metaphor of component-based software development environments --- supporting "eXecutable Process Models" (XPM) is presented.
In this paper, we focus on the direction from the components of a software development environment towards the process models, in order to automatically acquire process model information and thereby avoid various drawbacks of a manual acquisition of this information.
Requirements on a process modeling language for using this information are discussed.
While many community-driven development (CDD) initiatives may be successful, their impact is often limited by their small scale.
Building on past and ongoing work on CDD, this study addresses the fundamental question: how can CDD initiatives motivate and empower the greatest number of communities to take control of their own development?
What are the key contextual factors, institutional arrangements, capacity elements, and processes related to successful scaling-up of CDD, and, conversely, what are the main constraints or limiting factors, in different contexts?
Drawing upon recent literature and the findings from five case studies, key lessons on how best to stimulate, facilitate, and support the scaling-up of CDD in different situations, along with some major challenges, are highlighted.
Lessons include the need for donors and supporters of CDD, including governments, to think of the process beyond the project, and of transformation or transition rather than exit.
Donor push and community pull factors need to be balanced to prevent supply-driven, demand-driven development.
Overall, capacity is pivotal to successful CDD and its successful scaling-up over time.
Capacity is more than simply resources, however; it also includes motivation and commitment, which, in turn, require appropriate incentives at all levels.
Capacity development takes time and resources, but it is an essential upfront and ongoing investment, with the capacity and commitment of facilitators and local leaders being particularly important.
A learning-by-doing culture, one that values adaptation, flexibility, and openness to change, needs to be fostered at all levels, with time horizons adjusted accordingly.
The building of a library of well-documented, context-specific experiences th...
Though there may be millions of professionals worldwide acting as a designer, architect, or engineer in the design, realisation, and implementation of information systems, there is not yet a well established and clearly identified body of knowledge that can be said to define the profession.
In recent times, the way people access information from the web has undergone a transformation.
The demand for information to be accessible from anywhere, anytime, has resulted in the introduction of Personal Digital Assistants (PDAs) and cellular phones that are able to browse the web and can be used to find information using wireless connections.
However, the small display form factor of these portable devices greatly diminishes the rate at which these sites can be browsed.
This points to the need for efficient algorithms that extract the content of web pages and build a faithful reproduction of the original pages with the important content intact.
proof.
Let us examine why.
Consider the case of linear application M₁ M₂ : A₁, where M₁ : A₂ ⊸ A₁ and M₂ : A₂ (rule ⊸E).
We can make the following inferences: V₁ = λx:A₂. M′₁, by type preservation and inversion.
At this point we cannot proceed: we need a derivation of [V₂/x]M′₁ ⇓ V for some V to complete the derivation of M₁ M₂ ⇓ V.
Unfortunately, the induction hypothesis does not tell us anything about [V₂/x]M′₁.
Basically, we need to extend it so that it makes a statement about the result of evaluation (λx:A₂. M′₁, in this case).
Sticking to the case of linear application for the moment, we call a term M "good" if it evaluates to a "good" value V. A value V is "good" if it is a function λx:A₂. M′₁ and if substituting a "good" value V₂ for x in M′₁ results in a "good" term.
Note that this is not a proper definition, since to see if V is "good" we may need to substitute any "good" value V₂ into it, possibly including V itself.
We can make this definition inductive if we observe that the value
to 100 km/sec per Mpc.
The most likely value for the deceleration parameter q₀ is 1/2, which corresponds to a flat universe.
But all measurements are in fact quite dispersed around this value, and seem now to indicate an open universe.
Two other cosmological facts are related to the Big Bang theory.
The second cosmological fact often quoted and extensively discussed by Rees is that on Earth we are immersed in a cosmic microwave background radiation, like being in a furnace kept at a temperature of T₀ = 2.7 Kelvin.
This accidental discovery seemed to confirm the so-called Big Bang model for theorists.
The third cosmological fact depends on the measurement of elementary abundances in the universe, and it seems that the proportions of hydrogen, deuterium, helium, lithium, etc., have changed little since their formation at the time of the Big Bang.
Needless to say, when one discovers several facts which fit within the same theory, one is tempted not to go beyond and to relegate all o
Remaining elusive while navigating to a goal in a dynamic environment containing an observer requires taking advantage of opportunistic cover as it occurs.
A reactive navigation approach is needed that recognizes the utility of environment features in offering protective cover.
We present an approach that allows stealthy traverses in unknown environments containing dynamic objects.
It is a frontier-based method that allows a robot to follow in the obscuring shadow of objects despite their dynamics, and take advantage of more opportunistic cover if it becomes available.
An analysis of our approach in off-line modeling and experiments conducted in simulation and outdoor environments demonstrate its effectiveness in achieving high quality solutions for stealthy navigation.
We consider multi-robot systems that include sensor nodes and aerial or ground robots networked together.
Such networks are suitable for tasks such as large-scale environmental monitoring or for command and control in emergency situations.
We present a sensor network deployment method using autonomous aerial vehicles and describe in detail the algorithms used for deployment and for measuring network connectivity and provide experimental data collected from field trials.
A particular focus is on determining gaps in connectivity of the deployed network and generating a plan for repair, to complete the connectivity.
This project is the result of a collaboration between three robotics labs (CSIRO, USC, and Dartmouth).
Airports are an ideal application area for simulation.
The processes are in a continuous state of change, are complex and stochastic, involve many moving objects, and require a good performance that can be measured in several different performance indicators.
Within airports, but also between airports, the same kind of questions are answered over and over again.
Often, however, new simulation models are built for each question, if possible copying some parts of previous models.
Structured reuse of simulation components is rarely seen.
This paper shows an approach for airport terminal modeling that departs from the assumption that reusable simulation building blocks can form the core of a powerful airport modeling tool, which is able to answer different questions at airports better and faster than traditional models.
The building blocks have been implemented in the commercially available simulation language eM-Plant.
Several studies carried out with this library were very successful.
dist dist flag, described in section 3.1, can also be used in place of -d. The number of goods and bids must also be specified for each run.
If every instance is to have the same number of goods and bids, the -goods and -bids flags are used to provide these numbers.
It is also possible to choose the numbers of goods and bids separately for each instance from a uniform distribution over a specified range.
This is done with the -random goods and -random bids flags.
Both are followed by two integers specifying the minimum and maximum values of the range.
If the -default hard flag is used (see section 3.5), all of the above parameters take on default values and are not required.
A specific distribution can still be chosen, but the number of bids and goods, if they are (redundantly) entered, must be 1000 and 256, respectively, since our hardness models are based on this problem size.
The 2.1 release of CATS may allow variable problem sizes.
L1 and L5 are excluded because it is impossib
Countermeasures against node misbehavior and selfishness are mandatory requirements in mobile ad hoc networks.
Selfishness that causes lack of node activity cannot be solved by classical security means that aim at verifying the correctness and integrity of an operation.
In this paper we outline an original security mechanism (CORE) based on reputation that is used to enforce cooperation among the nodes of a MANET.
We then investigate its robustness using an original approach: we use game theory to model the interactions between the nodes of the ad hoc network and we focus on the strategy that a node can adopt during the network operation.
As a first result, we obtained the guidelines that should be adopted when designing a cooperative security mechanism that enforces mobile node cooperation.
Furthermore, we were able to show that when no countermeasures are taken against misbehaving nodes, network operation can be heavily jeopardized.
We then showed that the CORE mechanism is compliant with guidelines provided by the game theoretic model and that, under certain conditions, it assures the cooperation of at least half of the nodes of a MANET.
The goal of information extraction from the Web is to provide an integrated view on heterogeneous information sources.
A main problem with current wrapper/mediator approaches is that they rely on very different formalisms and tools for wrappers and mediators, thus leading to an "impedance mismatch" between the wrapper and mediator level.
Additionally, most approaches currently are tailored to access information from a fixed set of sources.
The purpose of this paper is to give a very simple method for nonlinearly estimating the fundamental matrix using the minimum number of seven parameters.
Instead of minimally parameterizing it, we rather update what we call its orthonormal representation, which is based on its singular value decomposition.
We show how this method can be used for efficient bundle adjustment of point features seen in two views.
Experiments on simulated and real data show that this implementation performs better than others in terms of computational cost, i.e., convergence is faster, although methods based on minimal parameters are more likely to fall into local minima than methods based on redundant parameters.
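A small numpy sketch of the representation itself (not the paper's update or bundle-adjustment machinery): a 3x3 estimate is decomposed by SVD, the singular values are normalized to (1, sigma, 0), and the triple (U, V, sigma), two orthonormal 3x3 factors plus one scalar, accounts for the 3 + 3 + 1 = 7 parameters mentioned above. The example matrix and the rank-2 projection step are illustrative assumptions.

    import numpy as np

    def to_orthonormal_representation(F):
        """Return (U, V, sigma) with F ~ U @ diag(1, sigma, 0) @ V.T (up to scale)."""
        U, s, Vt = np.linalg.svd(F)
        return U, Vt.T, s[1] / s[0]          # normalized singular values (1, sigma, 0)

    def from_orthonormal_representation(U, V, sigma):
        return U @ np.diag([1.0, sigma, 0.0]) @ V.T

    F_noisy = np.array([[0.0, -1.0, 0.2],
                        [1.1,  0.0, -0.5],
                        [-0.2, 0.6,  0.01]])          # hypothetical estimate
    U, V, sigma = to_orthonormal_representation(F_noisy)
    F = from_orthonormal_representation(U, V, sigma)
    print("rank:", np.linalg.matrix_rank(F), " sigma:", round(sigma, 4))
    # U and V are orthonormal 3x3 matrices (3 parameters each); sigma is a scalar.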
A model of coalition government formation is presented in which inefficient, nonminimal winning coalitions can form in Nash equilibrium.
Predictions for five games are presented and tested experimentally.
The experimental data support potential maximization as a refinement of Nash equilibrium.
In particular, the data support the prediction that non-minimal winning coalitions occur when the distance between policy positions of the parties is small relative to the value of forming the government.
These conditions hold in games 1, 3, 4 and 5, where subjects played their unique potential-maximizing strategies 91, 52, 82 and 84 percent of the time, respectively.
In the remaining game (Game 2) experimental data support the prediction of a minimal winning coalition.
Players A and B played their unique potential-maximizing strategies 84 and 86 percent of the time, respectively, and the predicted minimal-winning government formed 92 percent of the time (all strategy choices for player C conform with potential maximization in Game 2).
In Games 1, 2, 4 and 5 over 98 percent of the observed Nash equilibrium outcomes were those predicted by potential maximization.
Other solution concepts including iterated elimination of dominated strategies and strong/coalition proof Nash equilibrium are also tested.
In this paper, we propose a methodology developed in the framework of the VISPO project for engineering a three-layer ontology, based on the conceptualization, integration, synthesis and categorization of XML data descriptions provided by a number of sources in a virtual district, where different enterprises cooperate for business purposes.
Ontologies are proposed as a unifying framework for different viewpoints by providing a shared understanding in a subject domain.
Our methodology generates an ontology organized into concepts and concept relationships at different levels of detail, to provide multiple, unified views of the data sources containing heterogeneous information about the domain of interest.
Processes affecting a segment because of its position in a string, classified as typical or highly improbable in Coda position versus V__V (intervocalic) position:
- devoicing: Coda typical, V__V highly improbable
- deaspiration (Cʰ → C): Coda typical, V__V highly improbable
- velarisation (l, n → ɫ, ŋ): Coda typical, V__V highly improbable
- s-debuccalisation (s → h): Coda typical, V__V highly improbable
- liquid gliding (r, l → j): Coda typical, V__V highly improbable
- depalatalisation (ɲ → n): Coda typical, V__V highly improbable
- l-vocalisation (ɫ → w/o): Coda typical, V__V highly improbable
- r-vocalisation/loss ([kaad] "card"): Coda typical, V__V highly improbable
- [NC]hom, homorganisation of nasals: Coda typical, V__V highly improbable
- spirantisation (b, d, g → β, ð, ɣ): Coda highly improbable, V__V typical
- voicing (t → d): Coda highly improbable, V__V typical
- rhotacism (z → r): Coda highly improbable, V__V typical
c. The only solution when using the familiar model of syllabic structure: a criterion based on {__#, __.C, V__V} = postvocalic (pure adjacence); V__V = flanked by vowels (pure adjacence); {__#, __.C} = Coda (pure position).
d. Contradiction: the superset is defined in pure terms of adjacence.
Hence, one of its subsets
The Fast Factorised Back-Projection (FFBP) algorithm has received considerable attention recently for SAS image reconstruction.
The FFBP algorithm provides a means of trading image quality and/or resolution for a reduction in computational cost over standard Back-Projection.
In this paper we describe FFBP for SAS image reconstruction and compare it to the Wavenumber algorithm in terms of computational cost and image quality.
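For context, here is a minimal sketch of standard (non-factorised) delay-and-sum back-projection, the baseline whose cost FFBP reduces: for every image pixel, the echo recorded at each aperture position is sampled at the two-way travel time and accumulated. The geometry, sound speed, sampling rate, and nearest-sample interpolation are illustrative assumptions.

    import math

    def backproject(echoes, positions, pixels, c=1500.0, fs=100e3):
        """echoes[i]: sample list recorded at aperture position positions[i] (x, y);
           pixels: list of (x, y) image points; returns one intensity per pixel."""
        image = []
        for px, py in pixels:
            acc = 0.0
            for (ax, ay), trace in zip(positions, echoes):
                r = math.hypot(px - ax, py - ay)       # one-way range
                idx = int(round(2.0 * r / c * fs))     # two-way delay in samples
                if 0 <= idx < len(trace):
                    acc += trace[idx]
            image.append(acc)
        return image

    # toy scene: a single reflector at (0, 30) m seen from 3 aperture positions
    positions = [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)]
    echoes = []
    for ax, ay in positions:
        trace = [0.0] * 5000
        r = math.hypot(0.0 - ax, 30.0 - ay)
        trace[int(round(2.0 * r / 1500.0 * 100e3))] = 1.0
        echoes.append(trace)
    print(backproject(echoes, positions, [(0.0, 30.0), (5.0, 30.0)]))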
opportunity for equipping private households with inexpensive smart devices for controlling and automating various tasks in our daily lives.
Networking technology and standards have an important role in driving this development.
The omnipresence of the Internet via phone lines, TV cable, power lines, and wireless channels facilitates ubiquitous networks of smart devices that will significantly change the way we interact with home appliances.
Home networking is considered to become one of the fastest growing markets in the area of information technology.
However, interoperability and flexibility of embedded devices are key challenges for making "Smart Home" technology accessible for a broad audience.
In particular, the software programs that determine the behavior of the smart home must facilitate customizability and extensibility.
Unlike industrial applications that are typically engineered by highly skilled programmers, control and automation programs for the smart home should be understandable to laypeople.
In this article, we discuss how recent technological progress in the areas of visual programming languages, component software, and connection-based programming can be applied to programming the smart home.
Our research is carried out in tight collaboration with a corporate partner in the area of embedded systems.
this paper is to survey the current semantic Web services languages and modeling frameworks by outlining their features and capabilities.
We will then compare the approaches and identify the deficient features which need to be overcome to meet the requirements of the industry and the SWSL in developing a formal language/technology for supporting semantic Web services
...
In this paper, we consider the case when the process is assumed to be fractional ARIMA and show that the new method still possesses the aforementioned qualities
There is experimental evidence that the performance of standard subspace algorithms from the literature (e.g.
the N4SID method) may be surprisingly poor in certain experimental conditions.
This happens typically when the past signals (past inputs and outputs) and future input spaces are nearly parallel.
In this paper we argue that the poor behavior may be attributed to a form of ill-conditioning of the underlying multiple regression problem, which may occur for nearly parallel regressors.
An elementary error analysis of the subspace identification problem, shows that there are two main possible causes of ill-conditioning.
The first has to do with near collinearity of the state and future input subspaces.
The second has to do with the dynamical structure of the input signal and may roughly be attributed to "lack of excitation".
Stochastic realization theory constitutes a natural setting for analyzing subspace identification methods.
In this setting, we undertake a comparative study of three widely used subspace methods (N4SID, Robust N4SID and PO-MOESP). The last two methods are proven to be essentially equivalent, and the relative accuracy regarding the estimation of the (A, C) parameters is shown to be the same.
This paper presents and discusses the LOTOS specification of a real-time parallel kernel.
The purpose of this specification exercise has been to evaluate LOTOS with respect to its capabilities to model real-time features with a realistic industrial product.
LOTOS was used to produce the formal specification of TRANS-RTXC, which is a real-time parallel kernel developed by Intelligent Systems International.
This paper shows that although timing constraints cannot be explicitly represented in LOTOS, the language is suitable for the specification of co-ordination of real-time tasks, which is the main functionality of the real-time kernel.
This paper also discusses the validation process of the kernel specification and the role of tools in this validation process.
We believe that our experience (use of structuring techniques, use of validation methods and tools, etc) is valuable for designers who want to apply formal models in their design or analysis tasks.
Among the various proposals answering the shortcomings of Document Type Definitions (DTDs), XML Schema is the most widely used.
Although DTDs and XML Schema Definitions (XSDs) differ syntactically, they are still quite related on an abstract level.
Indeed, freed from all syntactic sugar, XML Schemas can be seen as an extension of DTDs with a restricted form of specialization.
In the present paper, we inspect a number of DTDs and XSDs harvested from the web and try to answer the following questions: (1) which of the extra features/expressiveness of XML Schema not allowed by DTDs are effectively used in practice; and, (2) how sophisticated are the structural properties (i.e.
the nature of regular expressions) of the two formalisms.
It turns out that at present real-world XSDs only sparingly use the new features introduced by XML Schema: on a structural level the vast majority of them can already be defined by DTDs.
Further, we introduce a class of simple regular expressions and obtain that a surprisingly high fraction of the content models belong to this class.
The latter result sheds light on the justification of simplifying assumptions that sometimes have to be made in XML research.
Introduction Database systems based on SQL are well suited for homogeneous databases -- either centralized or distributed.
Most traditional database architectures, however, seem inadequate to handle different types of heterogeneity. Interoperability at the system level can be achieved to some degree by interposing an additional interface layer between a database system and the application, as in the ODBC solution [Mic94] and, more recently, in the analogous, Java-based JDBC proposal [HC96].
Other vendor-specific solutions provide network and protocol transparency by standardizing their SQL interface.
The problem of data, or semantic, heterogeneity, however, still remains.
Different systems that own different pieces of data may come into conflict when they need to agree, at least in part, on the meaning of each other's data.
This situation is common in loosely coupled database federations, where private data from a common domain of discourse is shared, and yet each local system insists on ma
This paper presents HapticFlow, a haptics-based direct mesh editing system founded upon the concept of PDE-based geometric surface flow.
The proposed flow-based approach for direct geometric manipulation offers a unified design paradigm that can seamlessly integrate implicit, distance-field based shape modeling with dynamic, physics-based shape design.
HapticFlow provides an intuitive haptic interface and allows users to directly manipulate 3D polygonal objects with ease.
To demonstrate the effectiveness of our new approach, we developed a variety of haptics-based mesh editing operations such as embossing, engraving, sketching as well as force-based shape manipulation operations
Recent results indicate that the same turbo principle which delivers near to optimal strategies for channel coding, can be used to obtain very efficient source coding schemes.
We investigate this issue applying ten Brink's EXIT chart analysis and show how this technique can be used to select the most efficient match of component codes and puncturing matrices to compress discrete memoryless sources.
Aiming at perfect reconstruction at the decoder, i.e.
lossless source coding, we present an encoding algorithm, which gradually removes the redundancy while checking the decodability of the compressed bit stream.
This concept of decremental redundancy is dual to the principle of incremental redundancy that characterizes hybrid ARQ (Type II) communication protocols.
Both principles can be combined when the channel is noisy.
The £1 billion government drive to integrate information and communications technology (ICT) into UK schools and colleges has been firmly focused on the technological transformation of the teaching profession.
In particular, the establishment of a National Grid for Learning (NGfL) remains dependent on the successful 'selling' of ICT to teachers, many of whom have previously proved unwilling to use computers.
In practice much of this task has been left to IT firms, eager to promote their products to a potentially lucrative educational marketplace.
From this basis the present paper takes a detailed examination of educational computing advertising material currently being produced by IT firms in the UK.
In particular it concentrates on how advertisements construct both the process of education and the teacher as a potential user of ICT.
Four dominant themes emerge from this analysis: ICT as problematic for teachers; ICT as a problem solver for teachers; ICT as a futuristic form of education; and ICT as a traditional form of education.
Despite the conflicting, and often contra-factual, nature of these four discourses, the paper argues that educational computing advertising is consistent in its disempowering portrayal of the teacher at the expense of both the computer and IT firm.
This 'demotion' of the teacher is likely to have negative effects on the way that teachers approach ICT as part of their professional routine, running contrary to the underlying aims of the National Grid for Learning initiative.
Motivated by experience gained during the validation of a recent Approximate Mean Value Analysis (AMVA) model of modern shared memory architectures, this paper re-examines the "standard" AMVA approximation for non-exponential FCFS queues.
We find that this approximation is often inaccurate for FCFS queues with high service time variability.
For such queues, we propose and evaluate: (1) AMVA estimates of the mean residual service time at an arrival instant that are much more accurate than the standard AMVA estimate, (2) a new AMVA technique that provides a much more accurate estimate of mean center residence time than the standard AMVA estimate, and (3) a new AMVA technique for computing the mean residence time at a "downstream" queue which has a more bursty arrival process than is assumed in the standard AMVA equations.
Together, these new techniques increase the range of applications to which AMVA may be fruitfully applied, so that for example, the memory system architecture of shared memory systems with complex modern processors can be analyzed with these computationally efficient methods.
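For orientation, the quantities that the FCFS approximations revolve around can be recalled as textbook background (this is not the paper's improved estimators, and it assumes single-server FCFS queues with unit visit ratios and no think time):

```latex
% Background (not the paper's new estimators): exact MVA recursion for a
% closed product-form network with N customers and single-server queues
% k = 1..K, assuming unit visit ratios and no think time.
\begin{align*}
  R_k(N) &= s_k \bigl( 1 + Q_k(N-1) \bigr), &
  X(N)   &= \frac{N}{\sum_{k} R_k(N)}, &
  Q_k(N) &= X(N)\, R_k(N).
\end{align*}
% The FCFS corrections discussed in the paper revolve around the mean residual
% service time seen at an arrival instant; the renewal-theory value for a
% service time S with mean s_k and squared coefficient of variation C_k^2 is
\[
  \mathbb{E}[S_{\mathrm{res}}]
    = \frac{\mathbb{E}[S^2]}{2\,\mathbb{E}[S]}
    = \frac{s_k}{2}\bigl(1 + C_k^2\bigr),
  \qquad C_k^2 = \frac{\operatorname{Var}(S)}{s_k^{2}} .
\]
% The point of the abstract above is that standard AMVA use of such estimates
% becomes inaccurate when C_k^2 is large, and more accurate replacements are
% proposed there.
```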
Bar and line graphs are a good medium when trying to understand overall trends and general relationships between data items.
Sometimes it is, however, desirable to make more detailed comparisons between data items.
In this case good tools are valuable, especially when examining a dense graph.
This paper introduces two techniques that can be used in such tools.
Spatial grouping and visual landmarks can be applied in a way that takes full advantage of the attributes of the human attention mechanism to facilitate visual comparisons.
We present in this paper a new complete method for distributed constraint optimization.
This is a utility-propagation method, inspired by the sum-product algorithm.
The original algorithm requires fixed message sizes, linear memory, and is time-linear in the size of the problem.
However, it is correct only for tree-shaped constraint networks.
In this paper, we show how to extend the algorithm to arbitrary topologies using cycle cutsets, while preserving the linear message size and memory requirements.
We present some preliminary experimental results on randomly generated problems.
The algorithm is formulated for optimization problems, but can be easily applied to satisfaction problems as well.
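To make the utility-propagation idea concrete for the tree-shaped case where the basic algorithm is exact, here is a minimal two-pass dynamic-programming sketch; the node names, domains and utility tables are made up for illustration, and the distributed execution, message format and cycle-cutset extension mentioned above are not reproduced.

```python
# Minimal sketch of utility propagation on a tree-shaped constraint network.
# Bottom-up pass: each node tells its parent, for every parent value, the best
# utility achievable in its subtree.  Top-down pass: values are fixed greedily.

def solve_tree(tree, domains, utility):
    """tree: {node: parent}, root mapped to None.
    utility[(node, parent)][(x, y)] = utility of node=x together with parent=y."""
    children = {n: [c for c, p in tree.items() if p == n] for n in tree}
    root = next(n for n, p in tree.items() if p is None)
    messages = {}                    # node -> {parent_value: best subtree utility}

    def util_pass(node):
        for c in children[node]:
            util_pass(c)
        parent = tree[node]
        if parent is None:
            return
        messages[node] = {
            y: max(utility[(node, parent)][(x, y)]
                   + sum(messages[c][x] for c in children[node])
                   for x in domains[node])
            for y in domains[parent]}

    def value_pass(node, assignment):
        parent = tree[node]
        if parent is None:
            best = max(domains[node],
                       key=lambda x: sum(messages[c][x] for c in children[node]))
        else:
            y = assignment[parent]
            best = max(domains[node],
                       key=lambda x: utility[(node, parent)][(x, y)]
                       + sum(messages[c][x] for c in children[node]))
        assignment[node] = best
        for c in children[node]:
            value_pass(c, assignment)

    util_pass(root)
    assignment = {}
    value_pass(root, assignment)
    return assignment

if __name__ == "__main__":
    # Invented example: two children prefer to differ from / agree with the root.
    tree = {"a": None, "b": "a", "c": "a"}
    domains = {n: [0, 1] for n in tree}
    utility = {("b", "a"): {(x, y): int(x != y) for x in (0, 1) for y in (0, 1)},
               ("c", "a"): {(x, y): 2 * int(x == y) for x in (0, 1) for y in (0, 1)}}
    print(solve_tree(tree, domains, utility))   # {'a': 0, 'b': 1, 'c': 0}, utility 3
```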
We investigate a number of issues related to the use of multiple trust authorities and multiple identities in the type of identifier based cryptography enabled by the Weil and Tate pairings.
An example of such a system is the Boneh and Franklin encryption scheme.
We present various applications of multiple trust authorities.
In particular we focus on how one can equate a trust authority with a way to add contextual information to an identity.
There is a longstanding interest in how decisions about resource allocations are made within households and how those decisions affect the welfare of household members.
Much empirical work has approached the problem from the perspective that if preferences differ, welfare outcomes will depend on the power of individuals within the household to exert their own preferences.
Measures of power are therefore a central component of quantitative empirical approaches to understanding how differences in preferences translate into different welfare outcomes.
Following most of the empirical studies in this genre, this paper focuses on dynamics within couples, although we recognize that dynamics among extended family members and across generations are of substantial interest.
A number of different measures of power have been used in the literature.
Because control over economic resources is seen as an important source of power, individual labor income, which one earns and so presumably controls to some degree, is one potential measure of power.
However, whether and how much one works is a choice that is not likely to be independent of one's power in the household.
Non-labor income has also been used as a measure of power, but even if non-labor income does not reflect contemporaneous choices, it likely does reflect past choices, particularly labor supply choices, and so is also a function of power.
Levels of resources brought to the marriage by each spouse, over which they may individually retain control, are even less proximate to the current choices of household members, but nevertheless reflect one's taste in partners and therefore may not be exogenous to power.
(In some instances, resources brought to the marriage may reflect decisionmaking by the couple's parents, dependi...
We present new insights and algorithms for converting reasoning problems in monadic First-Order Logic (includes only 1place predicates) into equivalent problems in propositional logic.
Our algorithms improve over earlier approaches in two ways.
First, they are applicable even without the unique-names and domain-closure assumptions, and for possibly infinite domains.
Therefore, they apply for many problems that are outside the scope of previous techniques.
Secondly, our algorithms produce propositional representations that are significantly more compact than earlier approaches, provided that some structure is available in the problem.
We examined our approach on an example application and discovered that the number of propositional symbols that we produced is smaller by a factor of f ≈ 50 than traditional techniques, when those techniques can be applied.
This translates to a factor of about 2^f increase in the speed of reasoning for such structured problems.
This paper describes a simulation model of a large beverage distribution center.
The brewery distribution center has a volume of 71,600 cubic meters and contains about 8,000 pallets.
Every day 1,800 pallets are handled in or out of the system, and the object of this study was to verify the functionality of the automated storage and retrieval system and integrated conveyor system -- including elevators connecting five levels of the distribution center.
The complex system is modeled with the powerful simulation software Arena.
A brief discussion of the results is also given.
Poverty profiles are a useful way of summarizing information on the levels of poverty and the characteristics of the poor in a society.
They also provide us with important clues to the underlying determinants of poverty.
However, important as they are, poverty profiles are limited by the bivariate nature of their informational content.
The bivariate associations typical in a poverty profile can sometimes be misleading; they beg the obvious question of the effect of a particular variable conditional on the other potential determinants.
While there may be certain contexts where unconditional poverty profiles are relevant to a policy decision (see Ravallion 1996), often one would be interested in the "conditional" poverty effects of proposed policy interventions.
It is not surprising therefore that empirical poverty assessments in recent years have seen a number of attempts at going beyond the poverty profile tabulations to engage in a multivariate analysis of living standards and poverty.
This study for Egypt has a similar motivation.
For Egypt, while there has been some work on a descriptive analysis of the characteristics of the poor, to our knowledge, there is no precursor to an empirical modeling of the determinants of poverty using nationally representative data.
To a large extent, this has been due to the nonavailability of unit-record data from the Household Income, Expenditure and Consumption Survey (HIECS), the primary source of data on ... 1997 Egypt Integrated Household Survey (EIHS).
Using the EIHS data, it is now possible to conduct a household-level multivariate analysis of living standards.
The EIHS, being an integrated, multimodule survey, also offers the potential of a richer analysis of this issue than may have been possible from other data sources.
In t...
INTRODUCTION Dipolar coupling data are potentially of great use to NMR spectroscopists since they contain long range information (as opposed to NOE and scalar couplings).
Since, however, the dipolar coupling of an isotropically tumbling molecule averages to zero, useful dipolar coupling data was, until recently, only available for the small number of paramagnetic proteins (1), and protein-- DNA complexes (2) that align spontaneously in strong magnetic fields.
The recent introduction of liquid crystal media that induce tunable levels of physical alignment, such as phospholipid mixtures (3), filamentous phage (4, 5), and purple membranes (6), should allow dipolar coupling data to be collected from essentially all nucleic acids and proteins.
The dipolar coupling between two nuclei is given by $D_{PQ}(\theta,\phi) = D_a\,[(3\cos^2\theta - 1) + 1.5\,R\,\sin^2\theta\,\cos 2\phi]$ [1], where $D_a$ subsumes the gyromagnetic ratios of the two nuclei.
An appropriate choice of the computing devices employed in digital signal processing applications requires to characterize and to compare various technologies, so that the best component in terms of cost and performance can be used in a given system design.
In this paper, a benchmark strategy is presented to measure the performances of various types of digital signal processing devices.
Although different metrics can be used as performance indexes, Fast Fourier Transform (FFT) computation time and Real-Time Bandwidth (RTBW) have proved to be excellent and complete performance parameters.
Moreover, a new index, measuring the architectural efficiency in computing FFT, is introduced and explained.
Both parameters can be used to compare several digital signal processing technologies, thus guiding designers in optimal component selection.
TCP-AQM protocols can be interpreted as distributed primal-dual algorithms over the Internet to maximize aggregate utility over source rates.
In this paper we study whether TCP-AQM together with shortest-path routing can maximize utility over both rates and routes.
We show that this is generally impossible because the addition of route maximization makes the problem NP-hard.
We exhibit
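As background, the aggregate utility maximization that TCP-AQM is interpreted as solving can be written as the standard network utility maximization problem below; this is the usual formulation from the literature, recalled here for orientation rather than quoted from the paper.

```latex
% Background: the network utility maximization problem that TCP-AQM is
% interpreted as solving in a distributed primal-dual fashion.
\begin{align*}
  \max_{x_s \ge 0} \quad & \sum_{s} U_s(x_s) \\
  \text{subject to} \quad & \sum_{s \,:\, l \in r(s)} x_s \;\le\; c_l
      \qquad \text{for every link } l,
\end{align*}
% where x_s is the rate of source s, r(s) its (here fixed) route, c_l the
% capacity of link l, and U_s a concave utility.  TCP plays the role of the
% primal update on the rates x_s, while AQM generates the dual variables
% (link prices).  The abstract above asks what happens when the routes r(s)
% themselves become decision variables chosen by shortest-path routing.
```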
Most of the real-time scheduling algorithms are based on "open-loop" strategies that do not take application demands into account.
This precludes the scheduler from dynamically adjusting task executions in order to optimize performance.
To overcome this limitation, we have focused our work on scheduling techniques that are able to take scheduling decisions based on continuous feedback information of the performance delivered by each task.
Focusing on control applications, we present an early specification of a novel scheduling technique: Large Error First (LEF).
It uses feedback information from each controlled plant in order to assign priorities to each control task.
For a given simulation set-up, comparing the performance of LEF versus open loop classical scheduling techniques, encouraging simulation results have been obtained.
This paper introduces communication systems (CS) as a unified model for socially intelligent systems.
This model, derived from sociological systems theory, combines the empirical analysis of communication in a social system with logical processing of social information to provide a general framework for computational components that exploit communication processes in multiagent systems.
One of the most significant findings...
This paper reviews what is currently known about network traffic self-similarity and its significance.
We then consider a matter of current research, namely, the manner in which network dynamics (specifically, the dynamics of transmission control protocol (TCP), the predominant transport protocol used in today's Internet) can affect the observed self-similarity.
To this end, we first discuss some of the pitfalls associated with applying traditional performance evaluation techniques to highly-interacting, large-scale networks such as the Internet.
We then present one promising approach based on chaotic maps to capture and model the dynamics of TCP-type feedback control in such networks.
Not only can appropriately chosen chaotic map models capture a range of realistic source characteristics, but by coupling these to network state equations, one can study the effects of network dynamics on the observed scaling behavior.
We consider several aspects of TCP feedback, and illustrate by examples that while TCP-type feedback can modify the self-similar scaling behavior of network traffic, it neither generates it nor eliminates it.
We investigate the use of XML as an open, cross-platform, and extendable file format for the description of hierarchical simulation models, including their graphical representations, initial model conditions, and model execution algorithms.
We present HiMASS-x, an XML-centered suite of software applications that allows for cross-platform, distributed modeling and execution of hierarchical, componentized, and reusable simulation models.
It has become a matter of survival that many companies improve their supply chain efficiency.
This presents an opportunity for simulation.
However, there are many challenges that must be overcome for simulation to play an effective role.
Four contributors discuss the opportunities that they see for simulation to play a meaningful role in the area of supply chain management.
The capacity of transmission lines (Ethernet in particular) is much higher than that imposed by MIDI today.
It is therefore possible, thanks to the OSC protocol, to use high-speed, high-resolution capturing interfaces for musical synthesis (either in real time or not).
These new interfaces offer many advantages, not only in the area of musical composition with use of sensors but also in live and interactive performances.
In this manner, calibration and signal processing are offloaded to a personal computer, which broadens the processing possibilities.
In this demo, we present two hardware interfaces developed in La kitchen with corresponding processing to achieve a high-resolution, high-speed sensor processing for musical applications.
The Egyptian labor market is moving from a period of high overall unemployment to one where unemployment is increasingly concentrated among specific groups whose access to the private-sector labor market is limited.
Educated young women are more adversely affected than their male counterparts by the transition to a private-sector-led economy.
There is no systematic link between youth unemployment among new entrants and poverty unless it is the head of the household who is unemployed.
An economic policy environment that is favorable for labor-intensive, export-oriented industries would help absorb the new entrants into the labor market, and the prospect is particularly good for young female workers.
Policymakers should consider a reduction in the femalespecific employer mandates (such as the existing provision for a generous maternity leave) that raise the cost of hiring women.
Activity monitoring deals with monitoring data (usually streaming data) for interesting events.
It has several applications such as building an alarm or an alert system that triggers when outliers or change points are detected.
We discuss
The implementation of a portfolio assessment strategy in the education and training environment is a time consuming process that should be performed within a specific framework, structure or model to accommodate diverse learners.
We present a new model for the distributed implementation of pi-like calculi.
This model is a close match to a variety of calculi, and so permits strong correctness results that are easy to prove.
In particular, we describe a distributed abstract machine called the fusion machine.
In it, only channels exist at runtime. It uses a form of concurrent constraints called fusions (equations on channel names), which it stores as trees of forwarders between channels.
We implement in the fusion machine a solos calculus with explicit fusions.
There are encodings into this calculus from the pi calculus and the explicit fusion calculus.
We quantify the efficiency of the latter by means of (co-)locations.
This paper discusses the nature and significance of artificial illumination in computer games.
It examines different game genres and finds their essence, in order to be able to locate a specific game typical to each genre.
Games that existed prior to the computer are shown to use light in a much more functional way, as opposed to digital games, which use illumination primarily for creating reality or aesthetic pleasure.
KEYWORDS: Lighting design, Illumination, Shadows, Computer games, Game genres.
The high diversity in the capabilities of various mobile devices, such as display capabilities and computation power, makes the design of mobile information systems more challenging.
A transcoding proxy is placed between a client and an information server to coordinate the mismatch between what the server provides and what the client prefers.
However, most research on transcoding proxies in mobile computing environments assumes the traditional client-server architecture and does not employ the data broadcast technique, which has been deemed a promising way to achieve power conservation, high scalability and high bandwidth utilization.
In addition, the issue of QoS provision is also not addressed.
In view of this, we design in this paper a QoS-aware transcoding proxy by utilizing the on-demand broadcasting technique.
We first propose a QoS-aware transcoding proxy architecture, abbreviated as QTP, and model it as a queueing network.
By analyzing the queueing network, several theoretical results are derived.
We then propose a version decision policy and a service admission control scheme to provide QoS in QTP.
The derived results are used to guide the execution of the proposed version decision policy and service admission control scheme to achieve the given QoS requirement.
To measure the performance of QTP, several experiments are conducted.
Experimental results show that the proposed scheme is more scalable than traditional client-server systems.
In addition, the proposed scheme is able to effectively control the system load to attain the desired QoS.
This paper describes a study investigating the potential for two user modelling systems: a location-aware user modelling system providing easy access to applications, files and course materials commonly used by an individual student in different locations
This paper outlines the concept of a Global GIS, and defines various aspects of its development, as well as various options and decisions that must be made.
The emphasis is on the advantages and disadvantages of maintaining a global topological structure, and whether topology should be generated on the fly in response to a specific query.
We first define what we mean by space in this context, followed by a description of topological structures and how we may use them in the context of graph traversal problems.
We then describe some appropriate data structures.
After mentioning some of the real-world problems associated with polygon construction problems, we touch on how graphs may represent change over time.
A global
Simulation models and business application software as they are used for decision support in enterprise management are both representations of an enterprise's actual operations.
This paper describes a unified simulation and application framework where it is possible to represent the entire performance process along a supply chain in a unified business model, improve its performance with discrete event simulation technology, and then generate and implement the corresponding business application software from the same unified model, based on a so-called framework-based application technology which allows implementation of changes derived from simulation analysis with minimal effort and time.
This enables a company to optimise not only operational processes such as shopfloor or warehouse operations but also business processes such as planning, order management and scheduling through simulation.
Estimation of camera motion and structure of rigid objects in the 3D world from multiple camera images by bundle adjustment is often performed by iterative minimization methods due to their low computational effort.
These methods need a robust initialization in order to converge to the global minimum.
In this paper a new criterion for keyframe selection is presented.
While state of the art criteria just avoid degenerated camera motion configurations, the proposed criterion selects the keyframe pairing with the lowest expected estimation error of initial camera motion and object structure.
The presented results show that the convergence probability of bundle adjustment is significantly improved with the new criterion compared to the state-of-the-art approaches.
Using a newly constructed data set, we compare sources of funds and investment activities of venture capital (VC) funds in Germany, Israel, Japan and the UK.
Sources of VC funds differ significantly across countries, e.g.
banks are particularly important in Germany, corporations in Israel, insurance companies in Japan, and pension funds in the UK.
VC investment patterns also differ across countries in terms of the stage, sector of financed companies and geographical focus of investments.
We find that these differences in investment patterns are related to the variations in funding sources - for example, bank and pension fund backed VC firms invest in later stage activities than individual and corporate backed funds -- and we examine various theories concerning the relation between finance and activities.
We also report that the relations differ across countries; for example, bank backed VC firms in Germany and Japan are as involved in early stage finance as other funds in these countries, whereas they tend to invest in relatively late stage finance in Israel and the UK.
We consider the implication of this for the influence of financial systems on relations between finance and activities.
German orthography has the somewhat unique property of systematically marking nouns by capitalizing their first letter.
This gives the reader additional information with respect to the syntactic structure of a sentence but also burdens the writer with the task of making this structure explicit.
In some older studies, the benefits of this information have been demonstrated for the reading process; it still remains unclear, though, how the writer accomplishes this task.
Two different processes are conceivable: The information is either delivered by the Orthographic Output Lexicon or is syntactically generated whilst the sentence to be written is constructed.
In a series of experiments, evidence is provided for an interactive exchange between lexical and syntactic processing dealing with the question of when capitalization should occur.
Auditory functional magnetic resonance imaging tasks are challenging since the MR scanner noise can interfere with the auditory stimulation.
To avoid this interference a sparse temporal sampling method with a long repetition time (TR = 17 s) was used to explore the functional anatomy of pitch memory.
Eighteen right-handed subjects listened to a sequence of sine-wave tones (4.6 s total duration) and were asked to make a decision (depending on a visual prompt) whether the last or second to last tone was the same or different as the first tone.
An alternating button press condition served as a control.
Sets of 24 axial slices were acquired with a variable delay time (between 0 and 6 s) between the end of the auditory stimulation and the MR acquisition.
Individual imaging time points were combined into three clusters (0-2, 3-4, and 5-6 s after the end of the auditory stimulation) for the analysis.
The analysis showed a dynamic activation pattern over time which involved the superior temporal gyrus, supramarginal gyrus, posterior dorsolateral frontal regions, superior parietal regions, and dorsolateral cerebellar regions bilaterally as well as the left inferior frontal gyrus.
By regressing the performance score in the pitch memory task with task-related MR signal changes, the supramarginal gyrus (left > right) and the dorsolateral cerebellum (lobules V and VI, left > right) were significantly correlated with good task performance.
The SMG and the dorsolateral cerebellum may play a critical role in short-term storage of pitch information and the continuous pitch discrimination necessary for performing this pitch memory task.
We consider the problem of converting a decimal number to a base b number.
We present a conversion function that relates each digit in the base b system to the decimal value that is equal to the base b number in question.
Thus, each base b digit of the related base b number can be obtained directly from the corresponding decimal number without the requirement of knowing any other base b digit.
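The property claimed above, namely that each base-b digit can be read off directly from the decimal value without knowing any other digit, can be illustrated with elementary integer arithmetic. This is a minimal sketch of that property, not necessarily the paper's exact conversion function.

```python
def digit(n: int, b: int, i: int) -> int:
    """Return the i-th base-b digit of the non-negative integer n
    (i = 0 is the least significant digit), computed directly from n
    without knowing any of the other base-b digits."""
    return (n // b**i) % b


def to_base(n: int, b: int) -> list[int]:
    """All base-b digits of n, most significant first, obtained by
    independent calls to digit(); each call uses only n, b and i."""
    if n == 0:
        return [0]
    num_digits = 1
    while b**num_digits <= n:
        num_digits += 1
    return [digit(n, b, i) for i in range(num_digits - 1, -1, -1)]


if __name__ == "__main__":
    assert to_base(2024, 16) == [7, 14, 8]   # 0x7E8
    assert digit(2024, 16, 1) == 14          # the hex digit 'E', found directly
    print(to_base(2024, 2))                  # [1, 1, 1, 1, 1, 1, 0, 1, 0, 0, 0]
```

The only point of the sketch is that digit() depends on n, b and the position i, and on nothing else.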
A system is introduced that allows a string player to control a synthesis engine with the gestural skills he is used to.
The implemented system is based on an electric viola and a synthesis engine that is directly controlled by the unanalysed audio signal of the instrument and indirectly by control parameters mapped to the synthesis engine.
This method offers a highly string-specific playability, as it is sensitive to the kinds of musical articulation produced by traditional playing techniques.
Nuances of sound variation applied by the player will be present in the output signal even if those nuances are beyond traditionally measurable parameters like pitch, amplitude or brightness.
The relatively minimal hardware requirements make the instrument accessible with little expenditure.
This paper describes a method for enumerating the ways in which combinations of vehicles can be observed at different survey points.
The framework described is quite general and can be applied to a variety of problems where matches are to be found in data surveyed at a number of locations (or at a single location over a number of days).
As an example, the framework is applied to the problem of false matches in licence plate survey data.
In this paper we discuss the various XML schemas [15,16], the port types defined, and some of our experiences building RFT
multimedia content service delivery.
Its efficiency is maximized when all the service recipients have identical needs.
In reality however, the end users may have a heterogeneous set of requirements for different service levels as well as different service components, depending on their system and network capabilities.
We propose the notion of Service Adaptive Multicast (SAM) that balances the tradeoffs between providing individualized service to each client and maintaining an efficient overlay multicast tree structure.
The novel aspects of our approach are (a) the ability to augment and transform existing paths into service paths with the desired attributes; and (b) integration of two tree maintenance processes: a receiver-initiated just-in-time adaptation of the multicast service tree driven by application/user perceived QoS, and a demand-driven tree maintenance process geared towards long-term tree quality.
We demonstrate the performance of our approach using simulations of large client population.
butes, security levels, and the page size, were varied for a Selection and Join query.
We were particularly interested in the relationship between performance degradation and changes in the quantity of these properties.
The performance of each scheme was measured in terms of its response time.
The response times for the element level fragmentation scheme increased as the numbers of tuples, attributes, security levels, and the page size were increased, more significantly so than when the number of tuples and attributes were increased.
The response times for the attribute level fragmentation scheme were the fastest, suggesting that the performance of the attribute level scheme is superior to the tuple and element level fragmentation schemes.
In the context of assurance, this research has also shown that the distribution of fragments based on security level is a more natural approach to implementing security in MLS/DBMS systems, because a multilevel database is analogous to a
We propose how Genetic Programming (GP) can be used for developing, in real time, problem-specific heuristics for Branch and Bound (B&B) search.
A GP run, embedded into the B&B process, exploits the characteristics of the particular problem being solved, evolving a problem-specific heuristic expression.
The evolved heuristic replaces the default one for the rest of the B&B search.
The application of our method to node selection for B&B based Mixed Integer Programming is illustrated by incorporating the GP node selection heuristic generator into a B&B MIP solver.
The hybrid system compares well with the unmodified solver utilizing DFS, BFS, or even the advanced Best Projection heuristic when confronted with hard MIP problems from the MIPLIB3 benchmarking suite.
In today's Internet, only best-effort service is provided.
With up-coming Quality of Service (QoS) requirements raised by a wide range of communication-intensive, real-time multimedia applications, the best-effort service is no longer sufficient.
As a result, Differentiated Service Model (DiffServ) has been proposed as a cost-effective way to provision QoS in the Internet.
This paper has benefited from the detailed and perceptive comments of our reviewers, especially our shepherd Hank Levy.
We thank Randy Katz and Eric Anderson for their detailed readings of early drafts of this paper, and David Culler for his ideas on TACC's potential as a model for cluster programming.
Ken Lutz and Eric Fraser configured and administered the test network on which the TranSend scaling experiments were performed.
Cliff Frost of the UC Berkeley Data Communications and Networks Services group allowed us to collect traces on the Berkeley dialup IP network and has worked with us to deploy and promote TranSend within Berkeley.
Undergraduate researchers Anthony Polito, Benjamin Ling, and Andrew Huang implemented various parts of TranSend's user profile database and user interface.
Ian Goldberg and David Wagner helped us debug TranSend, especially through their implementation of the rewebber
This paper examines genetic programming as a machine learning technique in the context of object detection.
This paper introduces a distributed localized algorithm where sensor nodes determine if they are located along the perimeter of a wireless sensor network.
The algorithm works correctly in sufficiently dense wireless sensor networks with a minimal requisite degree of connectivity.
Using 1-hop and 2-hop neighbour information, nodes determine if they are surrounded by neighbouring nodes, and consequently, if they are located within the interior of the wireless sensor network.
The algorithm requires minimal communication between nodes - a desirable property since energy reserves are generally limited and non-renewable.
Sonoelastography is the visualisation of elastic properties using ultrasound.
It can enable tumours to be detected and localised based on their elasticity when they are less elastic than the surrounding soft tissue.
In vibration sonoelastography the target tissues are vibrated while simultaneously recording ultrasound images.
A technique for imaging relative elastic properties is proposed that uses a standard ultrasound machine.
It combines B-scan and power Doppler signals to produce images of relative vibration amplitude.
Preliminary results using simulations and liver phantoms are presented and the potential of the method to highlight areas of differing elasticity within an organ such as the breast is mentioned.
The possibility of combining such a method with freehand 3D scanning, enabling B-scan and power Doppler signals to simultaneously populate a voxel array for subsequent visualisation is discussed.
It has been known for some time that Learning Classifier Systems (Holland, 1986) have potential for application as Data Mining tools.
Parodi and Bonelli (1990) applied the Boole LCS (Wilson, 1985) to a Lymphography data set and reported 82% classification rates.
More recent work, such as GA-Miner (Flockhart, 1995) has sought to extend the application of LCS to larger commercial data sets, introducing more complex attribute encoding techniques, static niching, and hybrid genetic operators in order to address the problems presented by large search spaces.
Despite these results, the traditional LCS formulation has shown itself to be unreliable in the formation of accurate optimal generalisations, which are vital for the reduction of results to a human readable form.
XCS (Wilson, 1995, 1998) has been shown to be capable of generating a complete and optimally accurate mapping of a test environment (Kovacs, 1996) and therefore presents a new opportunity for the application of Learning Classifier Systems to Data Mining.
As part of a continuing research effort this paper presents some first results in the application of XCS to a Data Mining task.
It demonstrates that XCS is able to produce a classification performance and rule set which exceeds the performance of most current Machine Learning techniques when applied to the Monk's problems (Thrun, 1991).
This paper shows that it is possible to dramatically reduce the memory consumption of classes loaded in an embedded Java virtual machine without reducing its functionalities.
We describe how to pack the constant pool by deleting entries which are only used during the class loading process.
We present some benchmarks which demonstrate the efficiency of this mechanism.
We finally suggest some additional optimizations which can be applied if some restrictions to the functionalities of the virtual machine can be tolerated.
In this paper we introduce a conceptual model for information supply which abstracts from enabling technologies such as file types, transport protocols, RDF and DAML+OIL.
Rather than focusing on technologies that may be used to actually implement information supply, we focus on the question: what is information supply and how does it relate to the data (resources) found on the Web today.
By taking a high level of abstraction we can gain more insight in the information market, compare different views on it and even present the architecture of a prototype retrieval system (Vimes) which uses transformations to deal with the heterogeneity of information supply.
The authors proposed a content management approach to develop a Web-based learning platform that was implemented and used to support both presential and distance education.
The project, named EFTWeb, focuses on the need to support both content and context.
It provided the basis for the current research concerning the impact of new learning approaches.
This paper presents current research extending EFTWeb to provide a broader environment that takes advantage of e-learning concepts by augmenting the framework to include beyond content and context, the experience dimension.
The augmented framework relates education activities with the individual, the group, and the community, addressing how e-learning can be used to support learning experiences.
Few methods use molecular dynamics simulations based on atomically detailed force fields to study the protein-ligand docking process because they are considered too time demanding despite their accuracy.
In this paper we present a docking algorithm based on molecular dynamics simulations which has a highly flexible computational granularity.
We compare the accuracy and the time required with well-known, commonly used docking methods like AutoDock, DOCK, FlexX, ICM, and GOLD.
We show that our algorithm is accurate, fast and, because of its flexibility, applicable even to loosely coupled distributed systems like desktop grids for docking.
Human-centered systems must focus attention on roles, users and tasks, aiming to make the full potential of computing ubiquitous.
The paper proposes a generic design pattern for such systems, incorporating digital assistants and human representatives.
These agents collaborate with people and deliberate socially for helping them to (a) participate in numerous physical and social contexts consistently and coherently, (b) build explicit social structures governed by social laws (i.e.
agents' values, permissions, preferences, contextual constraints), (c) deal with the dynamics of the activities and environment and (d) manage the distributivity of the activities and environment.
This paper deals with multiwavelets and the different properties of approximation and smoothness associated with them.
In particular, we focus on the important issue of the preservation of discrete-time polynomial signals by multifilterbanks.
We introduce and detail the property of balancing for higher degree discrete-time polynomial signals and link it to a very natural factorization of the refinement mask of the lowpass synthesis multifilter.
This factorization turns out to be the counterpart for multiwavelets of the well-known 'zeros at π' condition in the usual (scalar) wavelet framework.
The property of balancing also proves to be central to the different issues of the preservation of smooth signals by multifilterbanks, the approximation power of finitely generated multiresolution analyses, and the smoothness of the multiscaling functions and multiwavelets.
Using these new results, we describe the construction of a family of orthogonal multiwavelets with symmetries and compact support that is indexed by increasing order of balancing.
In addition, we also detail, for any given balancing order, the orthogonal multiwavelets with minimum-length multifilters.
In this paper, we consider the compression of high-definition video sequences for bandwidth sensitive applications.
We show that down-sampling the image sequence prior to encoding and then up-sampling the decoded frames increases compression efficiency.
This is particularly true at lower bit-rates, as direct encoding of the high-definition sequence requires a large number of blocks to be signaled.
We survey previous work that combines a resolution change and compression mechanism.
We then illustrate the success of our proposed approach through simulations.
Both MPEG-2 and H.264 scenarios are considered.
Given the benefits of the approach, we also interpret the results within the context of traditional spatial scalability.
Java class files can be transmitted more efficiently over a network if they are compressed.
After an...
This paper models the assimilation process of migrants and shows evidence of the complementarity between their destination experience and upon-arrival human capital.
Bayesian learning and dynamics of matching are modeled and empirically assessed, using panel data of wages from the Bangkok labor market in Thailand.
The analysis incorporates (1) the heterogeneity of technologies and products, characteristic of urban labor markets, (2) imperfect information on migrants' types and skill demanded in the markets, and (3) migrants' optimal learning over time.
Returns to destination experience emerge endogenously.
Estimation results, which control for migrants' selectivity by first-differencing procedures, show that (1) schooling returns are lower for migrants than for natives, (2) the accumulation of destination experience raises wages for migrants, (3) the experience effect is greater for more-educated agents, i.e., education and experience are complementary, and (4) the complementarity increases as destination experience accumulates.
The results imply that more-educated migrants have higher learning efficiency and can perform tasks of greater complexity, ultimately yielding higher wage growth in the destination market.
Simulations show that, due to the complementarity, wages for different levels of upon-arrival human capital diverge in the migrants' assimilation process.
for the autoassociative neural networks and evaluates the performance by handwritten numeral recognition test.
Each of the autoassociative networks is first trained independently for each class using the feature vector of the class.
Then the mirror image learning algorithm is applied to enlarge the learning sample of each class by mirror image patterns of the confusing classes to achieve higher recognition accuracy.
This paper presents a model for more interactive interface agents.
This more interactive style of agents aims to increase the trust and understanding between user and agent, by allowing the agent, under certain conditions, to solicit further input from the user about his preferences and desires.
With the user and agent engaging in specific clarification dialogues, the user's input is employed to adjust the agent's model of the user.
Moreover, the user is provided with an ability to view this user model, under certain well defined circumstances.
Since both the agent and user can take the initiative to interact, basic issues regarding mixed-initiative systems arise.
These issues are addressed in our model, which also takes care to restrict the agent's interaction with the user, to avoid bothering the user unduly.
We illustrate our design for more interactive interface agents by including some examples in the domain of electronic mail.
This paper deals with efficient algorithms for simulating performance measures of Gaussian random vectors.
Recently, we developed a simulation algorithm which consists of doing importance sampling by shifting the mean of the Gaussian random vector.
Further variance reduction is obtained by stratification along a key direction.
A central ingredient of this method is to compute the optimal shift of the mean for the importance sampling.
The optimal shift is also a convenient, and in many cases, an effective direction for the stratification.
In this paper, after giving a brief overview of the basic simulation algorithms, we focus on issues regarding the computation of the optimal change of measure.
A primary application of this methodology occurs in computational finance for pricing path dependent options.
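A minimal sketch of the basic mechanism follows: importance sampling for a functional of a standard Gaussian vector by shifting the mean to some μ and reweighting with the likelihood ratio. The computation of the optimal shift and the stratification step, which are the subject of the paper, are not reproduced, and the payoff function and shift in the example are purely illustrative.

```python
import numpy as np

def shifted_mean_is(g, mu, n_samples=100_000, rng=None):
    """Estimate E[g(Z)] for Z ~ N(0, I_d) by sampling Y ~ N(mu, I_d) and
    reweighting with the likelihood ratio
        phi_0(y) / phi_mu(y) = exp(-mu.y + 0.5*||mu||^2).
    Returns the estimate and its standard error."""
    rng = np.random.default_rng(rng)
    d = len(mu)
    y = rng.standard_normal((n_samples, d)) + mu      # samples from N(mu, I)
    log_w = -y @ mu + 0.5 * mu @ mu                   # log likelihood ratio
    vals = g(y) * np.exp(log_w)
    return vals.mean(), vals.std(ddof=1) / np.sqrt(n_samples)

if __name__ == "__main__":
    # Toy example: a payoff that is large only when the component sum is large,
    # so a positive shift concentrates samples where the integrand matters.
    d = 10
    payoff = lambda y: np.maximum(y.sum(axis=1) - 8.0, 0.0)
    mu = np.full(d, 8.0 / d)      # heuristic shift toward the important region
    est, se = shifted_mean_is(payoff, mu, rng=0)
    print(f"estimate = {est:.5f} +/- {se:.5f}")
```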
Introduction The concept of knowledge building (Scardamalia & Bereiter, 1994) is closely related to the notion of constructivism (Papert, 1991).
Both assume that learners construct knowledge by interpreting their perceptual experiences in terms of prior knowledge, current mental structures and existing beliefs (Jonassen & McAleese, 1993).
Constructivism implies that learning is the personal interpretation of the world situated in a rich context.
Learners' involvement in the process of knowledge construction, development and evaluation aims at the development of their reflective awareness.
Collaboration is used to encourage the construction of an understanding from multiple viewpoints.
In this project, it is assumed that learning occurs by building a virtual world both under the physical point of view, building virtual houses and objects, and under the cultural point of view, by exchanging, discussing and generating ideas, knowledge and specific content for each virtual house.
The stu
Global positioning systems (GPS) and mobile phone networks are making it possible to track individual users with an increasing accuracy.
It is natural to ask whether one can use this information to maintain social networks.
Here each user wishes to be informed whenever one of a list of other users, called the user's friends, appears in the user's vicinity.
In contrast to more traditional positioning based algorithms, the computation here depends not only on the user's own position on a static map, but also on the dynamic position of the user's friends.
Hence it requires both communication and computation resources.
The computation can be carried out either between the individual users in a peer-to-peer fashion or by centralized servers where computation and data can be collected at one central location.
In the peer-to-peer model, a novel algorithm for minimizing the number of location update messages between pairs of friends is presented.
We also present an efficient algorithm for the centralized model, based on region hierarchy and quadtrees.
The paper provides an analysis of the two algorithms, compares them with a naive approach, and evaluates them using the IBM City Simulator system.
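To illustrate the kind of trade-off involved in the peer-to-peer model, here is one simple dead-reckoning style scheme in which a pair of friends exchange positions only when the previously exchanged positions can no longer certify whether they are inside or outside the alert radius. This is an illustrative sketch, not the algorithm analyzed in the paper, and the alert radius and class interface are invented for the example.

```python
import math

ALERT_RADIUS = 100.0   # friends want an alert when closer than this (metres)

class FriendLink:
    """Peer-to-peer proximity tracking between one pair of friends.

    Each side remembers the position it last reported and the position it last
    received.  It stays silent as long as its own drift, combined with the
    worst-case drift the friend may have made, cannot change the
    in/out-of-radius answer.  Illustrative only."""

    def __init__(self, my_pos, friend_pos):
        self.reported = my_pos            # what the friend currently believes
        self.friend_reported = friend_pos
        self.slack = self._slack()

    def _slack(self):
        d = math.dist(self.reported, self.friend_reported)
        return abs(d - ALERT_RADIUS) / 2.0   # each side may drift this far safely

    def on_friend_update(self, friend_pos):
        self.friend_reported = friend_pos
        self.slack = self._slack()

    def on_move(self, my_pos, send):
        """Call whenever this user's position changes; send(pos) transmits."""
        if math.dist(my_pos, self.reported) > self.slack:
            self.reported = my_pos
            send(my_pos)                  # one message; friend recomputes slack
            self.slack = self._slack()
```

The point of the sketch is that no message is needed while both sides stay within their slack, so update traffic depends on movement relative to the friend rather than on raw position changes.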
The operational benefits of having a learning organization include at the very minimum increased organizational competitiveness and responsiveness in a given realm of competition.
Military simulation worlds have served and continue to serve as practice fields for organizational learning.
Organizational learning mechanisms like the simulation debriefing session have been linked to organizational learning through a taxonomy for rare events.
This research provides both descriptive and prescriptive findings for military interactive simulation and debriefing systems.
Some suggestions for simulation system design are made based on the research.
This paper examines trends and technologies leading towards simulation-based enterprise applications.
Component, internet and distributed computing technologies are presented as enablers of simulation-based enterprise applications.
Examples are given of typical applications that can take advantage of distributed simulation components.
The goal of this paper is to present a high level component architecture that will work in current enterprise information technology (IT) environments
This paper presents an application of information visualization techniques in the resource re-allocation domain and in particular flight rescheduling.
In collaboration with Swissair, our work concentrates on human-computer problem solving and how visualization techniques can help users perceive the entire solution space in four abstraction models in order to make the "right" decision.
We present a technique called coordinated visualization
This paper analyzes the reasons why behavioral synthesis was never widely accepted by designers, and then we propose a practical solution to this problem.
The main breakthrough of this new approach is the redefinition of the synthesis flow at the behavioral level to better profit from the power of RTL and FSM synthesis tools.
The effectiveness of this new methodology is illustrated with two large design examples: a 2-million-transistor ATM shaper design and a motion estimator for a video codec (H261 standard).
Most discrete-event simulation models have stochastic elements that mimic the probabilistic nature of the system under consideration.
A close match between the input model and the true underlying probabilistic mechanism associated with the system is required for successful input modeling.
The general question considered here is how to model an element (e.g., arrival process, service times) in a discrete-event simulation given a data set collected on the element of interest.
For brevity, it is assumed that data is available on the aspect of the simulation of interest.
It is also assumed that raw data is available, as opposed to censored data, grouped data, or summary statistics.
This example-driven tutorial examines introductory techniques for input modeling.
Most simulation texts (e.g., Law and Kelton 2000) have a broader treatment of input modeling than presented here.
Nelson and Yamnitsky (1998) survey advanced techniques.
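A minimal example of the kind of workflow such a tutorial walks through, fitting a hypothesized distribution to raw data by maximum likelihood and checking the fit, is sketched below using synthetic "service time" data and the scipy.stats API; it is illustrative only and not taken from the tutorial.

```python
import numpy as np
from scipy import stats

# Illustrative input-modeling workflow on made-up "service time" data:
# hypothesize a family, fit it by MLE, and check the goodness of fit.
rng = np.random.default_rng(1)
data = rng.gamma(shape=2.0, scale=1.5, size=500)   # stand-in for collected data

# Fit a gamma distribution by maximum likelihood (location pinned at zero,
# since a service time cannot be negative).
shape, loc, scale = stats.gamma.fit(data, floc=0.0)

# Kolmogorov-Smirnov test against the fitted distribution.  Note that the
# p-value is optimistic when parameters are estimated from the same data; a
# full analysis would also inspect histograms and Q-Q plots.
ks_stat, p_value = stats.kstest(data, "gamma", args=(shape, loc, scale))
print(f"fitted gamma: shape={shape:.2f}, scale={scale:.2f}, "
      f"KS statistic={ks_stat:.3f}, p={p_value:.3f}")
```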
We show that if a knot has a minimal spanning surface that admits certain Gabai disks, then this knot has Property P. As one of the applications we extend and simplify a recent result of Menasco and Zhang that closed 3-braid knots have Property P. Other applications are given.
To enable fast and accurate evaluation of HW/SW implementation choices of on-chip communication, we present a method to automatically generate timed OS simulation models.
The method generates the OS simulation models with the simulation environment as a virtual processor.
Since the generated OS simulation models use real OS code, the presented method can mitigate the OS code equivalence problem.
The generated model also simulates different types of processor exceptions.
This approach provides two orders of magnitude higher simulation speedup compared to the simulation using instruction set simulators for SW simulation.
Co-clustering is a powerful data mining technique with varied applications such as text clustering, microarray analysis and recommender systems.
Recently, an information-theoretic co-clustering approach applicable to empirical joint probability distributions was proposed.
In many situations, co-clustering of more general matrices is desired.
In this paper, we present a substantially generalized co-clustering framework wherein any Bregman divergence can be used in the objective function, and various conditional expectation based constraints can be considered based on the statistics that need to be preserved.
Analysis of the co-clustering problem leads to the minimum Bregman information principle, which generalizes the maximum entropy principle, and yields an elegant meta-algorithm that is guaranteed to achieve local optimality.
Our methodology yields new algorithms and also encompasses several previously known clustering and co-clustering algorithms based on alternate minimization.
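For concreteness, the sketch below shows the squared-Euclidean special case of such a framework in its simplest form: every entry is approximated by the mean of its co-cluster, and rows and columns are alternately reassigned to reduce the squared reconstruction error. The general framework with arbitrary Bregman divergences and richer sets of preserved conditional expectations goes well beyond this; the function is only an illustration.

```python
import numpy as np

def coclust_squared_loss(Z, k, l, n_iter=50, rng=0):
    """Toy co-clustering by alternate minimization with squared loss:
    approximate Z[i, j] by the mean of its co-cluster and alternately
    reassign row clusters and column clusters."""
    rng = np.random.default_rng(rng)
    m, n = Z.shape
    rho = rng.integers(k, size=m)      # row cluster assignments
    gamma = rng.integers(l, size=n)    # column cluster assignments

    def block_means():
        M = np.zeros((k, l))
        for g in range(k):
            for h in range(l):
                block = Z[np.ix_(rho == g, gamma == h)]
                M[g, h] = block.mean() if block.size else Z.mean()
        return M

    for _ in range(n_iter):
        M = block_means()
        # Reassign each row to the row cluster that best reconstructs it.
        rho = np.array([
            np.argmin([((Z[i] - M[g, gamma]) ** 2).sum() for g in range(k)])
            for i in range(m)])
        M = block_means()
        # Reassign each column symmetrically.
        gamma = np.array([
            np.argmin([((Z[:, j] - M[rho, h]) ** 2).sum() for h in range(l)])
            for j in range(n)])
    return rho, gamma, block_means()
```

Each reassignment step cannot increase the squared error for the current block means, so the objective decreases monotonically to a local optimum, which mirrors the local-optimality guarantee mentioned above.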
The paper introduces several issues that have one common target - secure cooperation of autonomous information systems.
We show that the Active authorization model may be an abstract layer that allows simple, efficient and secure management of a heterogeneous system's security properties.
Data Cleaning methods are used for finding duplicates within a file or across sets of files.
This overview provides background on the Fellegi-Sunter model of record linkage.
The Fellegi-Sunter model provides an optimal theoretical classification rule.
Fellegi and Sunter introduced methods for automatically estimating optimal parameters without training data that we extend to many real world situations.
Keywords: EM Algorithm, string comparator, unsupervised learning.
This paper proposes a practical and scalable technique for point-to-point routing in wireless sensornets.
This method, called Beacon Vector Routing (BVR), assigns coordinates to nodes based on the vector of distances (hop count) to a small set of beacons, and then defines a distance metric on these coordinates.
Packets are routed greedily, being forwarded to the next hop that is the closest (according to this beacon vector distance metric) to the destination.
This approach is evaluated through both simulation and a prototype implementation on motes.
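The greedy forwarding step can be sketched as follows, with one simplification flagged in the comments: the distance metric here is a plain sum of coordinate differences over the destination's closest beacons, whereas BVR's actual metric treats beacons the packet should move toward and beacons it should move away from asymmetrically. The fallback modes are only indicated, not implemented.

```python
NUM_CLOSEST_BEACONS = 3   # route using only the destination's closest beacons

def beacon_distance(p, d):
    """Distance between a node's beacon vector p and the destination's beacon
    vector d (both dicts {beacon_id: hop_count}).  Simplified metric: sum of
    coordinate differences over the destination's closest beacons; BVR's real
    metric weights the two directions of mismatch differently."""
    closest = sorted(d, key=d.get)[:NUM_CLOSEST_BEACONS]
    return sum(abs(p[b] - d[b]) for b in closest)

def next_hop(my_vector, neighbor_vectors, dest_vector):
    """Greedy BVR-style forwarding: hand the packet to the neighbor whose
    beacon vector is closest to the destination's, provided that improves on
    our own distance.  Returning None means greedy forwarding is stuck, at
    which point BVR falls back to routing toward the beacon nearest the
    destination (fallback not shown here)."""
    best, best_dist = None, beacon_distance(my_vector, dest_vector)
    for nbr, vec in neighbor_vectors.items():
        dist = beacon_distance(vec, dest_vector)
        if dist < best_dist:
            best, best_dist = nbr, dist
    return best
```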
How do we communicate with robots?...
This paper will outline some basic aspects of what characterizes human-robot interaction in contrast to other kinds of interaction, such as communication with children or foreigners.
Here, the robot's looks -- humanoid or not -- will not play a major role.
The question at hand is rather whether or to what degree humans expect a robot to behave linguistically like a human being...
Overview visualizations for small-screen web browsers were designed to provide users with visual context and to allow them to rapidly zoom in on tiles of relevant content.
Given that content in the overview is reduced, however, users are often unable to tell which tiles hold the relevant material, which can force them to adopt a time-consuming hunt-and-peck strategy.
Collapse-to-zoom addresses this issue by offering an alternative exploration strategy.
In addition to allowing users to zoom into relevant areas, collapse-to-zoom allows users to collapse areas deemed irrelevant, such as columns containing menus, archive material, or advertising.
Collapsing content causes all remaining content to expand in size, causing it to reveal more detail, which increases the user's chance of identifying relevant content.
Collapse-to-zoom navigation is based on a hybrid between a marquee selection tool and a marking menu, called marquee menu.
It offers four commands for collapsing content areas at different granularities and to switch to a full-size reading view of what is left of the page.
This paper provides simulation practitioners and consumers with a grounding in how discrete-event simulation software works.
Topics include discrete-event systems; entities, resources, control elements and operations; simulation runs; entity states; entity lists; and entity-list management.
The implementation of these generic ideas in AutoMod, SLX, and Extend is described.
The paper concludes with several examples of "why it matters" for modelers to know how their simulation software works, including coverage of SIMAN (Arena), ProModel, and GPSS/H as well as the other three tools.
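For readers unfamiliar with these internals, the toy event loop below illustrates the core mechanism: a simulation clock and a future-events list ordered by event time. It is a generic sketch, not the implementation used by AutoMod, SLX, Extend, or the other packages discussed.

```python
# Minimal future-events-list machinery for a discrete-event simulation.
import heapq
import itertools

class Simulator:
    def __init__(self):
        self.clock = 0.0
        self.future_events = []           # future-events list: (time, seq, action)
        self._seq = itertools.count()     # tie-breaker for events at the same time

    def schedule(self, delay, action):
        heapq.heappush(self.future_events, (self.clock + delay, next(self._seq), action))

    def run(self, until):
        while self.future_events and self.future_events[0][0] <= until:
            self.clock, _, action = heapq.heappop(self.future_events)
            action(self)                  # executing an event may schedule new ones

def arrival(sim):
    print(f"arrival at t={sim.clock:.2f}")
    sim.schedule(1.5, arrival)            # the arrival event books its successor

sim = Simulator()
sim.schedule(0.0, arrival)
sim.run(until=5.0)
```

Real packages add entity states, multiple entity lists, and tie-breaking rules on top of this skeleton, which is exactly why the details covered in the paper matter.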
This paper presents a methodology of decision-making for embedded I/O buffer sizes in a single-bus shared-memory system.
The decision is made with the aid of a queuing model, simulation, and the proposed algorithm.
The generalized queueing model is simulated to cover two cases: independent processing units and pipelined processing units in a shared-memory environment.
The objective is to obtain the best performance with the optimized embedded buffers in the system.
Therefore, an algorithm is developed to find the optimal solution efficiently by exploring the correlation between buffers and system performance.
The local optimum is guaranteed.
The method can be widely applied to many applications.
This paper proposes a routing scheme for content-based networking.
A content-based network is a communication network that features a new advanced communication model where messages are not given explicit destination addresses, and where the destinations of a message are determined by matching the content of the message against selection predicates declared by nodes.
Routing in a content-based network amounts to propagating predicates and the necessary topological information in order to maintain loop-free and possibly minimal forwarding paths for messages.
The routing scheme we propose uses a combination of a traditional broadcast protocol and a content-based routing protocol.
We present the combined scheme and its requirements over the broadcast protocol.
We then detail the content-based routing protocol, highlighting a set of optimization heuristics.
We also present the results of our evaluation, showing that this routing scheme is effective and scalable.
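To make the forwarding model concrete, here is a hedged sketch in which each outgoing link carries a selection predicate and a message is forwarded on every matching link; the predicate format and table layout are invented for illustration.

```python
# Illustrative content-based forwarding: messages carry no destination
# address; they are forwarded on the links whose advertised predicates
# match the message content.

def matches(predicate, message):
    """A predicate is a list of (attribute, operator, value) constraints."""
    ops = {"=": lambda a, b: a == b, ">": lambda a, b: a > b, "<": lambda a, b: a < b}
    return all(attr in message and ops[op](message[attr], val)
               for attr, op, val in predicate)

def forward(message, routing_table):
    """routing_table maps link -> predicate; return the links to forward on."""
    return [link for link, pred in routing_table.items() if matches(pred, message)]

table = {"link1": [("type", "=", "quote"), ("price", ">", 100)],
         "link2": [("type", "=", "news")]}
print(forward({"type": "quote", "price": 120}, table))   # -> ['link1']
```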
In the main line of research on internet-computing-enabled knowledge management, we use some of the most advanced research scenarios, arguing that we critically need a systems approach to question where knowledge comes from.
In particular, within a given engineering domain, we synthesize the problems and reveal that the knowledge is embraced by interactions among systems, system observers, observables, engineering objects and instruments; that the complex system interactions must be dispatched into infrastructural layers based on physics ontologies; and that the ontologies must be dedicated to human and data communications.
Such a synthesis would impact knowledge technologies for solving engineering problems at scale, as well as the collective vocabularies that must accompany the communication crossing the layers in the problem-solving environment.
We present a modified photon mapping algorithm capable of running entirely on GPUs.
Our implementation uses breadth-first photon tracing to distribute photons using the GPU.
The photons are stored in a grid-based photon map that is constructed directly on the graphics hardware using one of two methods: the first method is a multipass technique that uses fragment programs to directly sort the photons into a compact grid.
The second method uses a single rendering pass combining a vertex program and the stencil buffer to route photons to their respective grid cells, producing an approximate photon map.
We also present an efficient method for locating the nearest photons in the grid, which makes it possible to compute an estimate of the radiance at any surface location in the scene.
Finally, we describe a breadth-first stochastic ray tracer that uses the photon map to simulate full global illumination directly on the graphics hardware.
Our implementation demonstrates that current graphics hardware is capable of fully simulating global illumination with progressive, interactive feedback to the user.
Abstract of a Ph.D. dissertation in Computer Science, Dartmouth College, Hanover, NH, August 2004 (Professor David F. Kotz, Chair). The complexity of developing context-aware pervasive-computing applications calls for distributed software infrastructures that assist applications to collect, aggregate, and disseminate contextual data.
In this dissertation, we present a Context Fusion Network (CFN), called Solar, which is built with a scalable and self-organized service overlay.
Solar is flexible and allows applications to select distributed data sources and compose them with customized data-fusion operators into a directed acyclic information flow graph.
Such a graph represents how an application computes high-level understandings of its execution context from low-level sensory data.
To manage application-specified operators on a set of overlay nodes called Planets, Solar provides several unique services such as application-level multicast with policy-driven data reduction to handle buffer overflow, context-sensitive resource discovery to handle environment dynamics, and proactive monitoring and recovery to handle common failures.
Experimental results show that these services perform well on a typical DHT-based peer-to-peer routing substrate.
In this dissertation, we also discuss experience, insights, and lessons learned from our quantitative analysis of the input sensors, a detailed case study of a Solar application, and development of other applications in different domains.
In this article, we show that partial observability hinders full reconstructibility of the state space in SLAM, making the final map estimate dependent on the initial observations, and not guaranteeing convergence to a positive semi-definite covariance matrix.
By characterizing the form of the total Fisher information we are able to determine the unobservable state space directions.
To overcome this problem, we formulate new fully observable measurement models that make SLAM stable.
In this work, we study and analyze the contourlet transform for low bit-rate image coding.
This image-based geometrical transform has been recently introduced to efficiently represent images with a sparse set of coefficients.
In order to explore the potentiality of this new transform as a tool for image coding, we developed a direct coding scheme that is based on using non-linear approximation of images.
We code the quantized transform coefficients as well as the significance map of an image in the contourlet transform domain.
Based on the proposed approach, we analyzed the rate-distortion curves for a set of images and concluded that this coding approach, despite its redundancy, is visually competitive with a direct wavelet transform coder, and in particular, it is visually superior to wavelet coding for images with textures and oscillatory patterns.
Introduction It is shown in the present paper that the standard model of cosmology, owing to its mathematical structure, is actually based on the concept of a variation of light velocity with time, due to which the redshift is observed.
First, we recall the fundamental statements of the standard cosmology.
The generally accepted Big Bang cosmology is based on the space-time metric ds^2 = c^2 dt^2 - a^2(t)[dr^2 + f^2(r)(dθ^2 + sin^2 θ dφ^2)] (1). Here a(t) is the radius of the Universe or, otherwise, the scale dimension of space defined by the solution of the Einstein gravity equation; φ, θ are the angular coordinates of galaxies, and r is the relative constant radial coordinate.
To simplify the expression we shall consider the case of flat space, for which f(r) = r. In this case, the metric distance up to galaxies, detected by the stationary length standard, will be, according to (1), R(t) = a(t)r. This signifies that the Universe is a sphere of radius a(t), filled with galaxies.
Discrete Event Simulation of manufacturing systems has become widely accepted as an important tool to aid the design of such systems.
Often, however, it is applied by practitioners in a manner which largely ignores an important element of industry; namely, the workforce.
Workers are usually represented as simple resources, often with deterministic performance values.
This approach ignores the potentially large effect that human performance variation can have on a system.
A long-term data collection exercise is described with the aim of quantifying the performance variation of workers in a typical automotive assembly plant.
The data are presented in a histogram form which is immediately usable in simulations to improve the accuracy of design assessment.
The results show levels of skewness and range which are far larger than anticipated by current researchers and practitioners in the field.
We investigate the behavior of TCP(α,β) protocols in the presence of wireless networks.
We seek an answer to strategic issues of maximizing energy and bandwidth exploitation, without damaging the dynamics of multiple-flow equilibrium.
Our perspective is novel indeed: What is the return of the effort that a protocol expends?
Can we achieve more gains with less effort?
We study first the design assumptions of TCP(α,β) protocols and discuss the impact of equation-based modulation of α and β on protocol efficiency.
We introduce two new metrics to capture protocol behavior: The "Extra Energy Expenditure" and the "Unexploited Available Resource Index".
We confirm experimentally that, in general, smoothness and responsiveness constitute a tradeoff
In this paper a novel content-based musical genre classification approach that uses a combination of classifiers is proposed.
First, musical surface features and beat-related features are extracted from different segments of digital music in MP3 format.
Three 15-dimensional feature vectors are extracted from three different parts of a music clip and three different classifiers are trained with such feature vectors.
At the classification mode, the outputs provided by the individual classifiers are combined using a majority vote rule.
Experimental results show that the proposed approach that combines the output of the classifiers achieves higher correct musical genre classification rate than using single feature vectors and single classifiers.
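The combination step itself is simple; a minimal sketch (assuming the three trained classifiers already return genre labels) is:

```python
# Majority-vote combination of per-segment classifier outputs. Feature
# extraction and classifier training are outside the scope of this sketch.
from collections import Counter

def majority_vote(predictions):
    """Return the most frequent genre label among the individual classifiers.

    Ties are broken arbitrarily by Counter.most_common.
    """
    return Counter(predictions).most_common(1)[0][0]

print(majority_vote(["rock", "rock", "jazz"]))   # -> 'rock'
```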
We consider radio networks modeled as directed graphs.
In ad hoc radio networks, every node knows only its own label and a linear bound on the size of the network but is unaware of the topology of the network, or even of its own neighborhood.
The fastest currently known deterministic broadcasting algorithm working for arbitrary n-node ad hoc radio networks has running time O(n log n).
Our main result is a broadcasting algorithm working in time O(n log n log D) for arbitrary n-node ad hoc radio networks of eccentricity D. Compared with the best currently known lower bound on broadcasting time in ad hoc radio networks, our algorithm is the first to shrink the gap between bounds on broadcasting time in radio networks of arbitrary eccentricity to a logarithmic factor.
We also show a broadcasting algorithm working in time O(n log D) for complete layered n-node ad hoc radio networks of eccentricity D. The latter complexity is optimal.
The major objectives of this paper are to shed some light on the mechanism that generates interregional economic imbalances among communities in rural China.
Central to this issue is the development of township and village enterprises (TVEs) because the presence of secondary industry is closely associated with the economic welfare of the people residing in rural communities.
In rural Jiangsu, for example, spatial disparities have become more pronounced over the past two decades.
This fact suggests that the influence of initial conditions---historical and geographical advantages of industrial frontrunners---has not been erased but rather continues to persist.
This is attributed to a variety of factors, including the less efficient use of TVE resources in poor areas, the decentralized fiscal system, and agglomeration economies.
In short, the socialist regime of self-reliance that still lingers in China's rural society traps less advanced areas in poverty.
KEYWORDS: economic imbalance, rural China, "past-dependency", institution, allocation efficiency, agglomeration economies. ACKNOWLEDGMENTS: The author acknowledges the general assistance of staff from the Chinese Academy of Agricultural Sciences, Jiangsu Academy of Social Sciences, Nanjing Agricultural University, and Policy Research Institute (MAFF, Japan), and financial support from the Government of Japan.
The author is grateful for helpful comments from Katsuji Nakagane, Zongshun Bao, Hao Hu, Funing Zhong, Peter Hazell, and other participants at various seminars.
INTRODUCTION At the present time, the most commonly accepted definition of a complex system is that of a system containing many interdependent constituents which interact nonlinearly.
Therefore, when we want to model a complex system, the first issue has to do with the connectivity properties of its network, the architecture of the wirings between the constituents.
In fact, we have recently learned that the network structure can be as important as the nonlinear interactions between elements, and an accurate description of the coupling architecture and a characterization of the structural properties of the network can be of fundamental importance also to understand the dynamics of the system.
The definition may seem somewhat fuzzy and generic: this is an indication that the notion of a complex system is still not precisely delineated and differs from author to author.
On the other side, there is complete agreement that the "ideal" complex systems are the biological ones, especially
Contents: 1. Notation and definitions; 2. Heuristics for a-counts; 3. Interactive consistency; 3.1 Consolidation of a-counts; 3.2 Consolidation of binary accusations; 4. Diagnosis; 5. Acknowledgements; 6. References.
This paper combines household survey and census data to construct a provincial poverty map of Vietnam and evaluate the accuracy of geographically targeted anti-poverty programs.
First, the paper estimates per capita expenditure as a function of selected household and geographic characteristics using the 1998 Vietnam Living Standards Survey.
Next, these results are combined with data on the same household characteristics from the 1999 Census to estimate the incidence of poverty in each province.
The results indicate that rural poverty is concentrated in ten provinces in the Northern Uplands, two provinces of the central Highlands, and two provinces in the Central Coast.
Finally, Receiver Operating Characteristics curves are used to evaluate the effectiveness of geographic targeting.
The results show that the existing poor communes system excludes large numbers of poor people, but there is potential to sharpen poverty targeting using a small number of easy-to-measure household characteristics.
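A hedged sketch of this survey-to-census imputation step is given below; the column names, functional form, and poverty line are placeholders rather than those used in the study.

```python
# Two-step poverty mapping sketch: fit a per-capita expenditure model on
# survey households, then apply it to census households and aggregate the
# predicted poverty status by province.
import numpy as np
import pandas as pd

def fit_expenditure_model(survey, features):
    """OLS of log per-capita expenditure on household/geographic characteristics."""
    X = np.column_stack([np.ones(len(survey))] + [survey[f] for f in features])
    y = np.log(survey["per_capita_expenditure"])       # placeholder column name
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def poverty_incidence(census, features, beta, poverty_line):
    """Predict expenditure for census households and compute headcount ratios."""
    X = np.column_stack([np.ones(len(census))] + [census[f] for f in features])
    predicted = np.exp(X @ beta)
    poor = predicted < poverty_line
    return census.assign(poor=poor).groupby("province")["poor"].mean()
```

The sketch only conveys the two-step structure; the study itself also has to deal with prediction error and the standard errors of the province-level estimates.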
This paper describes the New Millennium Remote Agent #NMRA# architecture for autonomous spacecraft control systems.
This architecture integrates traditional real-time monitoring and control with constraintbased planning and scheduling, robust multi-threaded execution, and model-based diagnosis and recon#guration.
Large, sparse binary matrices arise in numerous data mining applications, such as the analysis of market baskets, web graphs, social networks, co-citations, as well as information retrieval, collaborative filtering, sparse matrix reordering, etc.
Virtually all popular methods for the analysis of such matrices---e.g., k-means clustering, METIS graph partitioning, SVD/PCA and frequent itemset mining---require the user to specify various parameters, such as the number of clusters, number of principal components, number of partitions, and "support." Choosing suitable values for such parameters is a challenging problem.
The goal of the present paper is to report on ongoing research on applying psycholinguistic resources to building a WordNet-like lexicon of the Russian language.
We survey different kinds of linguistic data that can be extracted from a Word Association Thesaurus, a resource representing the results of a large-scale free association test.
In addition, we will give a comparison of the Word Association Thesaurus and other language resources applied to wordnet construction (e.g.
text corpora, explanatory dictionaries) from the viewpoint of the quality and quantity of information they supply the researcher with.
The power of ubiquitous computing lies not just in constant access, but also in tailoring of information based upon location.
In this paper we describe an architecture that supports tailoring of information and applications for their environment.
This environment includes the mobile client device, its location, the available bandwidth, and any soft real-time constraints.
e-Learning is expected to support organizations in being more adaptable and competitive, and individuals in becoming or staying more employable.
In order to ensure that the information and knowledge needed for the management of healthcare is appropriately shared, human behavior within health care organizations (HCOs) needs to be carefully analyzed.
Hence, guidelines, protocols, and messaging standards must be combined with models of resources and processes of patient care that are based on a sound ontology of organizations.
This requires a general theory of the ontology of social institutions.
Among the many groups attempting to develop efficient ways of sharing information across healthcare systems and organizations is Health Level 7 (HL7).
Here I address the question whether HL7 reflects a sound analysis of behavior within HCOs on the basis of a sound ontology of organizations.
I then apply ontological principles designed to show how the Reference Information Model (RIM) might be modified in such a way as to support efficient communication of medical information within and between healthcare organizations.
Query answering over commonsense knowledge bases typically employs a first-order logic theorem prover.
While first-order inference is intractable in general, provers can often be hand-tuned to answer queries with reasonable performance in practice.
This paper discusses the evaluation of adaptive traffic signal control using TSIS/CORSIM.
The paper reviews three adaptive control strategies that have been developed through contracts awarded by the FHWA's TurnerFairbank IST (Intelligent Systems and Technology) Division.
The paper discusses the framework and evaluation procedures for testing and assessing these advanced control algorithms, before they are deployed in the field.
The paper also discusses sophisticated hardware in the loop experiments that permit the benefits of other ITS concepts and technologies to be assessed and quantified.
In ECAI 1998 Smith & Grant performed a study [1] of the fail-first principle of Haralick & Elliott [2].
The fail-first principle states that "To succeed, try first where you are most likely to fail." For constraint satisfaction problems (CSPs), Haralick & Elliott realized this principle by minimizing branch depth.
This paper presents a qualitative and formative study of the uses of a starfield-based visualization interface for analysis of library collections.
The evaluation process has produced feedback that suggests ways to significantly improve starfield interfaces and the interaction process to improve their learnability and usability.
The study also gave us clear indication of additional potential uses of starfield visualizations that can be exploited by further functionality and interface development.
We report on resulting implications for the design and use of starfield visualizations that will impact their graphical interface features, their use for managing data quality and their potential for various forms of visual data mining.
Although the current implementation and analysis focuses on the collection of a physical library, the most important contributions of our work will be in digital libraries, in which volume, complexity and dynamism of collections are increasing dramatically and tools are needed for visualization and analysis.
Different planning techniques have been proposed so far which address the problem of automated composition of web services.
However, in realistic cases, the planning problem is far from trivial: the planner needs to deal with the nondeterministic behaviour of web services, the partial observability of their internal status, and with complex goals, e.g., expressing temporal conditions and preference requirements.
We propose...
Wireless distributed sensor networks (DSNs) are important for a number of strategic applications such as coordinated target detection, surveillance, and localization.
Energy is a critical resource in wireless sensor networks and system lifetime needs to be prolonged through the use of energyconscious sensing strategies during system operation.
We propose an energy-aware target detection and localization strategy for cluster-based wireless sensor networks.
The proposed method is based on an a posteriori algorithm with a two-step communication protocol between the cluster head and the sensors within the cluster.
Based on a limited amount of data received from the sensor nodes, the cluster head executes a localization procedure to determine the subset of sensors that must be queried for detailed target information.
This approach reduces both energy consumption and communication bandwidth requirements, and prolongs the lifetime of the wireless sensor network.
Simulation results show that a large amount of energy is saved during target localization using this strategy.
Introduction Gene expression of multi-cellular organisms is regulated by transcription factors (TFs) that interact with regulatory cis-elements on DNA sequences.
To find the functional regulatory elements, computer searching can predict TF binding sites (TFBS) using position weight matrices (PWMs) that represent positional base frequencies of collected experimentally determined TFBS.
However, it is still difficult to tell authentic sites from false positives.
Reports have shown that particular TFBS are concentrated in promoters, though a general tendency is uncertain.
Computational approaches to reveal structure of promoter as combination of TFBS are required.
Here we have examined the correlation between predicted TFBS and promoters, and identified two PWM groups, 1) PWMs whose TFBS are clustered in promoters mainly by the existence of CpG islands (CGI), 2) PWMs whose TFBS are clustered in promoter independent of CGI.
As an application of the groups, we show that tissue specific genes
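For readers unfamiliar with PWM-based site prediction, the following toy sketch scores candidate sites as log-odds against a uniform background; the matrix and threshold are made up for illustration.

```python
# Scanning a sequence with a position weight matrix (PWM). Real PWMs (e.g.
# from curated TFBS collections) are typically longer and compared against
# a non-uniform background composition.
import math

background = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}
pwm = [  # positional base frequencies for a hypothetical 4-bp site
    {"A": 0.7, "C": 0.1, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.7, "T": 0.1},
    {"A": 0.1, "C": 0.7, "G": 0.1, "T": 0.1},
    {"A": 0.1, "C": 0.1, "G": 0.1, "T": 0.7},
]

def score(site):
    """Log-odds score of a candidate site against the background model."""
    return sum(math.log2(col[base] / background[base])
               for col, base in zip(pwm, site))

def scan(sequence, threshold=4.0):
    """Report all windows whose PWM score exceeds the (arbitrary) threshold."""
    w = len(pwm)
    return [(i, sequence[i:i + w], score(sequence[i:i + w]))
            for i in range(len(sequence) - w + 1)
            if score(sequence[i:i + w]) >= threshold]

print(scan("TTAGCTAGCTT"))   # reports the two AGCT windows
```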
Current design automation methodologies are becoming incapable of achieving design closure especially in the presence of deep submicron effects.
This paper addresses the issue of design closure from a high level point of view.
A new metric called delay relaxation parameter (DRP) for RTL (Register Transfer Level) designs is proposed.
DRP essentially captures the degree of delay relaxation that the design can tolerate without violating the clock constraint.
This metric when optimized results in quicker design flow.
Algorithms to optimize DRP are formulated and their optimality are investigated.
Experiments are conducted using a state-of-the-art design flow with Synopsys Design Compiler followed by Cadence Place and Route.
Our approach of optimizing DRP resulted in lesser design iterations and faster design closure as compared to designs generated through Synopsys Behavioral Compiler and a representative academic design flow.
The World Wide Web is evolving into a medium that will soon make it possible for conceiving and implementing situation-aware services.
A situation-aware or situated web application is one that renders the user an experience (content, interaction and presentation) that is tailored to his/her current situation.
This requires the facts and opinions regarding the context to be communicated to the server by means of a profile, which is then applied against the description of the application objects at the server in order to generate the required experience.
This paper discusses a profiles view of the situated web architecture and analyzes the key technologies and capabilities that enable them.
We conclude that trusted frameworks wherein rich vocabularies describing users and their context, applications and documents, along with rules for processing them, are critical elements of such architectures.
Hierarchical structures and catalogs is a way to organize and enrich semantically the available information in the Web.
From simple tree-like structures with syntactic constraints and type information, like DTDs and XML schemas, to hierarchies on a category/subcategory basis, like thematic hierarchies and RDF(s) models, such structures group data under certain properties.
Paths in these structures are the knowledge artifacts to represent such groups.
Considering paths in hierarchies as patterns which provide a conceptual clustering of data in groups sharing common properties, we present a language to manipulate patterns and data in hierarchical catalogs.
We present and analyze an iterative control algorithm that enables us to find a control input that generates the desired output asymptotically with parameter uncertainties.
The proposed algorithm keeps the clearness and compactness of the control input update form of iterative learning control while being applicable to nonrepeatable tracking problems, specifically visual tracking.
Uncertainties, such as misalignment of the optical axis of the camera with the motion platform, miscalibration, and other unknown parametric inaccuracies can be eliminated.
Sufficient conditions for the convergence of the trajectories to the desired one are given.
The performance and effectiveness are demonstrated through simulation.
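As a minimal illustration of the iterative update form referred to above, the sketch below runs a P-type learning update on a toy scalar plant; the gain, plant, and convergence behavior are illustrative assumptions, not the paper's visual-tracking formulation.

```python
# P-type iterative-learning-control style update: reuse the previous
# iteration's tracking error to correct the control input.
import numpy as np

def ilc_update(u, error, gain=0.5):
    """u_{k+1}(t) = u_k(t) + gain * e_k(t)."""
    return u + gain * error

def plant(u):
    """Toy repetitive plant: scaled input plus a constant disturbance."""
    return 0.8 * u + 0.1

y_desired = np.ones(50)
u = np.zeros(50)
for k in range(20):
    y = plant(u)
    e = y_desired - y
    u = ilc_update(u, e)

print(float(np.max(np.abs(y_desired - plant(u)))))   # tracking error shrinks per iteration
```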
This paper argues that to achieve success, a simulation project must not only describe the future state of a business process, but also indicate the best way to reach that state.
The paper also suggests how simulation may be used to guide such a change program.
Prototyping to select the best change approach is critical for success, given that organizations can move toward various future states along many different paths.
By not analyzing implementation options, the traditional simulation project leaves management without a roadmap for the proposed change.
The roadmap must be plotted by a dynamic management tool, a simulator that can analyze future contextual factors and determine how the chosen path must adapt to respond to new environments.
Advances in hardware-related technologies promise to enable new data management applications that monitor continuous processes.
In these applications, enormous amounts of state samples are obtained via sensors and are streamed to a database.
Further, updates are very frequent and may exhibit locality.
While the R-tree is the index of choice for multi-dimensional data with low dimensionality, and is thus relevant to these applications, R-tree updates are also relatively inefficient.
Acquiring models of the environment belongs to the fundamental tasks of mobile robots.
In the last few years several researchers have focused on the problem of simultaneous localization and mapping (SLAM).
Classic SLAM approaches are passive in the sense that they only process the perceived sensor data and do not influence the motion of the mobile robot.
In this paper we present a novel and integrated approach that combines autonomous exploration with simultaneous localization and mapping.
Our method uses a grid-based version of the FastSLAM algorithm and at each point in time considers actions to actively close loops during exploration.
By re-entering already visited areas the robot reduces its localization error and this way learns more accurate maps.
Experimental results presented in this paper illustrate the advantage of our method over previous approaches lacking the ability to actively close loops.
In this paper artificial regulatory networks (ARN) are evolved to match the dynamics of test functions.
The ARNs are based on a genome representation generated by a duplication / divergence process.
Current general-purpose memory allocators do not provide sufficient speed or flexibility for modern high-performance applications.
Highly-tuned general purpose allocators have per-operation costs around one hundred cycles, while the cost of an operation in a custom memory allocator can be just a handful of cycles.
To achieve high performance, programmers often write custom memory allocators from scratch -- a difficult and error-prone process.
In this
Screening experiments are performed to eliminate unimportant factors so that the remaining important factors can be more thoroughly studied in later experiments.
Sequential bifurcation (SB) is a screening method that is well suited for simulation experiments; the challenge is to prove the "correctness" of the results.
This paper proposes Controlled Sequential Bifurcation (CSB), a procedure that incorporates a two-stage hypothesis-testing approach into SB to control error and power.
A detailed algorithm is given, performance is proved and an empirical evaluation is presented.
Two experiments which investigate the impact of spatialised presentation on the identification of concurrently presented earcons are described.
The first experiment compared the identification of concurrently presented earcons based on the guidelines for individual earcon design and presentation of Brewster, Wright and Edwards [1] which were presented in spatially distinct locations, to the identification of non-spatially presented earcons which incorporated guidelines for concurrent presentation from McGookin and Brewster [2].
It was found that a significant increase in earcon identification occurred, as well as an increase in earcon register identification when earcons were spatially presented.
The second experiment compared the identification of concurrently presented earcons based on the guidelines of Brewster, Wright and Edwards [1] which were presented in spatially distinct locations, to the identification of spatially presented earcons which incorporated guidelines for the presentation of concurrent earcons from McGookin and Brewster [2].
The incorporation of the concurrent earcon guidelines was found to significantly increase identification of the timbre attribute but did not significantly affect the overall identification of earcons.
We present FaceCerts, a simple, inexpensive, and cryptographically secure identity certification system.
A FaceCert is a printout of a person's portrait photo, an arbitrary textual message, and a 2-D color barcode which encodes an RSA signature of the message hash and the compressed representation of the face encompassed by the photo.
The signature is created using the private key of the party issuing the ID.
ID verification is performed by a simple off-line scanning device that contains the public key of the issuer. The system does not require smart cards; it can be expanded to encompass other biometric features, and, more interestingly, the ID does not need to be printed by a trusted or high-quality printer; it can be printed anywhere, anytime, and potentially by anyone.
The ID verification uses a simple scan process.
We detail the system's components and present a preliminary performance evaluation using an in-field experiment.
This paper introduces the Service Object Request Management Architecture ("SORMA"), its design issues, and its concepts.
It is a software framework for rapid development of object-oriented software modules and their integration into stand-alone and distributed applications.
SORMA provides an intelligent "object-bus" for inter-operating and sharing distributed computing and robotics hardware.
We
this paper.
The work reported here was partly carried out in the course of the KACTUS project.
This project is partially funded by the ESPRIT Programme of the Commission of the European Communities as project number 8145.
The partners in the KACTUS project are ISL (UK), LABEIN (Spain), Lloyd's Register (United Kingdom), STATOIL (Norway), Cap Programmator (Sweden), University of Amsterdam (The Netherlands), University of Karlsruhe (Germany), IBERDROLA (Spain), DELOS (Italy), FINCANTIERI (Italy) and SINTEF (Norway).
This paper reflects the opinions of the authors and not necessarily those of the consortium
This paper presents three simple techniques for improving network service using relatively unknown features of many existing networks.
The resulting system provides greater reliability, enhanced security, and ease of management.
First, it addresses the application of IP anycast to provide reliable recursive DNS service.
Next, it explains the use of unicast reverse path forwarding and its usefulness in preventing local nodes from originating packets with spoofed source addresses.
Finally, it explains how unicast reverse path forwarding can be used to quickly and easily apply source address filters on your network.
As an added benefit, some of these features provide mechanisms to conform a network to Best Common Practices (BCP) of network operators.
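The reverse-path check described above can be pictured with the toy lookup below: a packet is accepted only if the route back to its source address points out the interface on which it arrived. On real routers this is a built-in feature, not something scripted; the forwarding table here is mocked purely for illustration.

```python
# Conceptual sketch of a strict unicast reverse-path-forwarding check.
import ipaddress

# Hypothetical FIB: prefix -> interface the router would use to reach it.
fib = {ipaddress.ip_network("192.0.2.0/24"): "eth0",
       ipaddress.ip_network("198.51.100.0/24"): "eth1"}

def best_interface(src):
    """Longest-prefix match of the source address against the mock FIB."""
    candidates = [(net, ifc) for net, ifc in fib.items() if src in net]
    return max(candidates, key=lambda c: c[0].prefixlen)[1] if candidates else None

def urpf_accept(src_addr, arrival_interface):
    return best_interface(ipaddress.ip_address(src_addr)) == arrival_interface

print(urpf_accept("192.0.2.7", "eth0"))   # True: source is reachable via eth0
print(urpf_accept("192.0.2.7", "eth1"))   # False: likely spoofed, drop it
```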
Anycast DNS Service Anycast [1] is an IP addressing technique where unicast IP addresses are assigned to multiple hosts and routes configured accordingly. Routers receiving packets destined for anycast addresses select one of potentially several valid paths to hosts configured with the address.
This technique can be used wherever unicast IP routing exists, as anycast IP addresses are simply unicast addresses designated by network operators.
Here, we use anycast addressing to improve the reliability of DNS service, load balance DNS requests across a number of servers, minimize service downtime due to maintenance, and automatically direct requests to the topologically nearest server.
The paper proposes a new tool for supporting educational and professional skill development in an HLA environment; the application proposed by the authors is devoted to providing a realistic case and an easy-to-understand/modify example in which to extend technical knowledge of HLA.
Handwriting plays an important role in forensic document examination.
However, a comprehensive and quantitative study of the individuality of handwritten characters has so far been lacking.
Based on a large number of handwritten characters extracted from handwriting samples of 1000 individuals in US, the individuality of handwritten characters has been quantitatively measured through identification and verification models.
Our study shows that in general alphabetic characters bear more individuality than numerals and use of a certain number of characters will significantly outperform the global features of handwriting samples in handwriting identification and verification.
Moreover, the quantitative measurement of discriminative powers of characters offers a general guidance for selecting most-informative characters in examining forensic documents.
These early ribosome models were not rigorous three-dimensional reconstructions, but, at about the same time, the quantitative analysis of ribosome images began and these studies have recently produced extremely important results.
The easiest ribosome images to obtain are those of fields of randomly oriented particles.
For the image analyst, however, it is easy to reconstruct the three-dimensional form of objects from projection images when their relative orientations are known in advance, but much harder when orientations must be deduced after the fact, as is the case here.
It took years to develop the technology required, and, as it evolved, increasingly accurate ribosome reconstructions appeared, but the improvements were so gradual that they attracted little notice.
In 1995, there was a major breakthrough in the study of ribosomal structure.
Two, independently derived, 20--25 resolution reconstructions of the 70S ribosome from Escherichia coli were published, one by Joachim Frank
QoS represents one of the most crucial issues as it involves many different aspects and directly impacts user satisfaction.
In this paper we will tackle the very complex and challenging issue of developing a comprehensive architecture that allows mobile wireless users to access MPEG-4 flows while moving and at a given level of QoS.
We will assume that network resources are managed according to the Grid paradigm, and that mobile agents are the underlying technology to implement coordination and communication mechanisms.
An adaptive computation maintains the relationship between its input and output as the input changes.
Although various techniques for adaptive computing have been proposed, they remain limited in their scope of applicability.
We propose a general mechanism for adaptive computing that enables one to make any purely-functional program adaptive.
We show
The paper presents the initial model proposed for the Gaia Digital project.
This three-year project is currently starting off within the Portugal Digital Initiative, within the Information Society operation program framework.
It provides a digital city counterpart for Vila Nova de Gaia, with around 288,000 inhabitants (2001 figures).
Introduction Alternative splicing of gene transcripts 1,2 is believed to be a major mechanism by which eukaryotes can amplify the number of distinct proteins produced from a limited number of genes.
Estimates of the fraction of alternatively spliced genes in the human genome range from 20% to nearly 60%.
In several cases, different splice variants of a gene have been shown to play distinct or tissue-specific functional roles 5,6,7.
These facts have driven the development of assays to discover and quantify alternative splicing.
Quantitative detection of alternative splicing aims to measure, for one or more genes, the amounts of each splice variant of that gene present in a pool of RNA.
In this work, we focus on splicing events that result in insertion or deletion of one or more complete exons from a transcript.
A gene is treated as an ordered list of exons G = {E_1, ..., E_n}, with each splice variant containing a subset of these exons.
We seek to determine which subsets
We address the problem of estimating the average switching activity of combinational circuits under random input sequences.
Switching activity is strongly affected by gate delays, and for this reason we use a variable delay model in estimating switching activity.
Unlike most probabilistic methods that estimate switching activity, our method takes into account correlation caused at internal gates in the circuit due to reconvergence of input signals.
Active Appearance Models (AAMs) are very powerful for extracting objects, e.g.
faces, from images.
It is composed of two parts: the AAM subspace model and the AAM search.
While these two parts are closely correlated, existing efforts treated them separately and had not considered how to optimize them overall.
In this paper, an approach is proposed to optimize the subspace model while considering the search procedure.
We first perform a subspace error analysis, and then to minimize the AAM error we propose an approach which optimizes the subspace model according to the search procedure.
For the subspace error analysis, we decomposed the subspace error into two parts, which are introduced by the subspace model and the search procedure respectively.
This decomposition shows that the optimal results of AAM can be achieved only by optimizing both of them jointly rather than separately.
Furthermore, based on this error decomposition, we develop a method to find the optimal subspace model according to the search procedure by considering both the two decomposed errors.
Experimental results demonstrate that our method can find the optimal AAM subspace model rapidly and improve the performance of AAM significantly.
Soft constraints are recognised as being important for many constraints applications.
These include (a) over-constrained problems, where we cannot satisfy all the constraints, (b) situations where a constraint can be partially satisfied, so that there are degrees of satisfaction, and (c) where the identity of a constraint is uncertain, so that it can be uncertain whether a constraint is satisfied or not by a tuple.
This paper describes, from the viewpoint of device fabrication, single-electron and quantum devices using silicon-oninsulators (SOIs).
We point out that control of the oxidation of Si is quite important and could be the key to their fabrication.
We also introduce our technique for making single-electron transistors (SETs), which uses special phenomena that occur during the oxidation of SOIs, and show that the technique enables us to realize primary single-electron circuits as a result of its high controllability and high reproducibility.
This paper surveys recent research on using Monte Carlo techniques to improve quasi-Monte Carlo techniques.
Randomized quasi-Monte Carlo methods provide a basis for error estimation.
They have, in the special case of scrambled nets, also been observed to improve accuracy.
Finally through Latin supercube sampling it is possible to use Monte Carlo methods to extend quasi-Monte Carlo methods to higher dimensional problems.
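A small sketch of one such randomization, a Cranley-Patterson random shift of a Halton point set, is shown below; scrambled nets and Latin supercube sampling are more sophisticated, but the error-estimation idea (independent randomized replicates) is the same.

```python
# Randomized quasi-Monte Carlo: replicate randomly shifted low-discrepancy
# point sets and use the spread of the replicate means as an error estimate.
import numpy as np

def van_der_corput(n, base):
    """First n points of the van der Corput sequence in the given base."""
    seq = np.zeros(n)
    for i in range(n):
        f, k, x = 1.0, i + 1, 0.0
        while k > 0:
            f /= base
            x += f * (k % base)
            k //= base
        seq[i] = x
    return seq

def shifted_halton(n, dim, rng):
    """Halton points (up to 4 dimensions here) with an independent random shift."""
    points = np.column_stack([van_der_corput(n, b) for b in [2, 3, 5, 7][:dim]])
    return (points + rng.random(dim)) % 1.0

def estimate(f, dim=2, n=1024, replicates=10, seed=0):
    rng = np.random.default_rng(seed)
    vals = [f(shifted_halton(n, dim, rng)).mean() for _ in range(replicates)]
    return np.mean(vals), np.std(vals, ddof=1) / np.sqrt(replicates)  # estimate, std. error

mean, se = estimate(lambda x: np.prod(x, axis=1))   # integral of x*y over [0,1]^2 is 0.25
print(mean, se)
```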
This thesis describes a small number of problems arising from the applied study of networks in various contexts.
The work can be split into two main areas: telecommunications networks (particularly the Internet) and road networks.
In this paper we describe a method for the expansion of training sets made by XY trees representing page layout.
This approach is appropriate when dealing with page classification based on MXY tree page representations.
The basic idea is the use of tree grammars to model the variations in the tree which are caused by segmentation algorithms.
A set of general grammatical rules are defined and used to expand the training set.
Pages are classified with a k-NN approach where the distance between pages is computed by means of tree-edit distance.
In this article we focus on evolving information systems.
First a delimitation of the concept of evolution is discussed.
The main result is a first attempt to a general theory for such evolution.
In this theory, the underlying data model is a parameter, making the theory applicable for a wide range of modelling techniques.
The barrier free internet is one of the greatest challenges for computer science in the future.
While in the last years the growth of the internet was exponential, still many potential user communities can not use internet technology for their communication needs because of inappropriate tools and narrowly designed communication processes.
These problems become obvious when transferring applications to communities of people with special needs.
Many people suffering from aphasia are not able to interact with current chat tools, while the need to pay for therapists could be eased by such virtual self-help groups in a geographically distributed setting.
This is because massive word-finding problems can stretch the typing of a simple sentence to several minutes.
We have designed, implemented, and preliminarily evaluated a new chat tool for such groups.
By using the tool, aphasics can constantly monitor their communication behavior and, in case of difficulties, switch to a synchronous talk mode where up to four people can monitor typing letter by letter.
Proposals for phrases can be generated by the community to help their members.
Therapists and linguistic researchers can also monitor online and offline conversations from automatically generated transcripts.
An evolving information system supports the information needs of an evolving organisation.
These systems are able to adapt themselves instantaneously to the changes of the supported organisation, such that there is no need to interrupt the activities of the organisation.
Furthermore, evolving information systems support changes of all time- and application-dependent aspects, such as the database and the schema of the application.
The main focus
In this paper, to enhance the accuracy of gene prediction, we propose a scheme that merges the ab-initio method with the homology-based one.
While the latter identifies each gene by taking advantage of the known information for previously identified genes, the former makes use of predefined gene features.
Also, the proposed scheme adopts parallel processing to guarantee optimum system performance, in the face of the crucial drawback of the homology-based method, i.e.
the bottleneck that inevitably occurs due to the large amount of sequence information that has to be processed
Introduction Gene expression data have been accumulating rapidly, and methods for analyzing these data are required.
Statistical methods are used in these data analysis.
However, the biological interpretation of the data and of the results of the statistical analysis is difficult.
Thus we are developing a method of analysis to interpret the data easily.
We propose a PCA based analysis method and developed tools based on our proposal.
2 Method and Results. Figure 1 shows the analysis process we propose; hatched boxes indicate the developed tools.
The process of human gene expression data analysis is as follows: 1) pre-filtering, 2) statistical analysis, 3) interpretation of the data.
First we reduce and eliminate noise in the data in the pre-filtering step, that is, we select data, such as genes or samples, according to their reliability.
Second, we analyze the data statistically.
Generally, many researchers use hierarchical clustering analysis [1] and principal component analysis (PCA) [3] as statistical methods.
They chec
Although many algorithms for power estimation have been proposed to date, no comprehensive results have been presented on the actual complexity of power estimation problems.
This paper considers the problem of information consensus among multiple agents in the presence of limited and unreliable information exchange with dynamically changing interaction topologies.
Both discrete and continuous update schemes are proposed for consensus of information.
That the union of a collection of interaction graphs across some time intervals has a spanning tree frequently enough as the system evolves is shown to be a necessary and sufficient condition for information consensus under dynamically changing interaction topologies.
Simulation results show the effectiveness of our results.
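A minimal sketch of the discrete update scheme under a switching topology is given below; the random graphs merely stand in for the dynamically changing interaction topologies of the analysis.

```python
# Discrete-time consensus: each agent averages its own information state
# with those of its current in-neighbors, while the interaction graph
# changes from step to step.
import numpy as np

def consensus_step(x, adjacency):
    """x_i(k+1) is the average of agent i's state and its in-neighbors' states."""
    n = len(x)
    x_next = np.empty(n)
    for i in range(n):
        neighbors = np.flatnonzero(adjacency[i])        # j such that j -> i
        group = np.concatenate(([i], neighbors))
        x_next[i] = x[group].mean()
    return x_next

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=5)                          # initial information states
for k in range(50):
    adjacency = rng.random((5, 5)) < 0.3                # a new random digraph each step
    np.fill_diagonal(adjacency, False)
    x = consensus_step(x, adjacency)
print(x)                                                # states cluster toward a common value
```

When the union of the graphs over successive intervals keeps containing a spanning tree, as in the condition stated above, the states are driven toward agreement.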
In this article we examine how, through discourse processes, a third-grade teacher and her students come to situationally define science in their classroom. The classroom's use of particular discursive strategies promoted student talk, thus providing opportunities for students to learn about science through the exploration of a set of anomalous results in a life science investigation. Drawing from social studies of science, we use a discourse analytical approach to examine the classroom's logic of experimentation, its emerging science definitions, and the participants' accounts of these events. These analyses allowed us to identify how particular discursive strategies afforded students opportunities to learn science concepts and about scientific processes. © 2000 John Wiley & Sons, Inc. Sci Ed 84:624-657, 2000.
This paper deals with wordnet development tools.
It presents a designed and developed system for lexical database editing, which is currently employed in many national wordnet building projects.
We discuss basic features of the tool as well as more elaborate functions that facilitate linguistic work in multilingual environment.
This paper develops and validates an efficient analytical model for evaluating the performance of shared memory architectures with ILP processors.
First, we instrument the SimOS simulator to measure the parameters for such a model and we find a surprisingly high degree of processor memory request heterogeneity in the workloads.
Examining the model parameters provides insight into application behaviors and how they interact with the system.
Second, we create a model that captures such heterogeneous processor behavior, which is important for analyzing memory system design tradeoffs.
Highly bursty memory request traffic and lock contention are also modeled in a significantly more robust way than in previous work.
With these features, the model is applicable to a wide range of architectures and applications.
Although the features increase the model complexity, it is a useful design tool because the size of the model input parameter set remains manageable, and the model is still several orders of magnitude quicker to solve than detailed simulation.
Validation results show that the model is highly accurate, producing heterogeneous per processor throughputs that are generally within 5 percent and, for the workloads validated, always within 13 percent of the values measured by detailed simulation with SimOS.
Several examples illustrate applications of the model to studying architectural design issues and the interactions between the architecture and the application workloads.
Component integration creates value by automating the costly and error-prone task of imposing desired behavioral relationships on components manually.
Requirements for component integration, however, complicate software design and evolution in several ways: first, they lead to coupling among components; second, the code that implements various integration concerns in a system is often scattered over and tangled with the code implementing the component behaviors.
Straightforward software design techniques map integration requirements to scattered and tangled code, compromising modularity in ways that dramatically increase development and maintenance costs.
This paper presents the METAXPath data model and query language.
METAXPath extends XPath with support for XML metadata.
XPath is a specification language for locations in an XML document.
It serves as the basis for XML query languages like XSLT and the XML Query Algebra.
In this paper we show how the combination of the ant-based approach with fuzzy rules leads to an algorithm which is conceptually simpler, more efficient and more robust than previous approaches.
The improvement in classification accuracy that is obtained by the proposed method is assessed statistically using Kappa analysis.
Keywords and phrases: misclassification correction, image recognition, training-based optimization, genetic algorithms, musical key finding, remote sensing.
1. INTRODUCTION Automatic classification of data is a standard problem in signal and image processing.
In this context, the overall objective of classification is to categorize all data samples into different classes as accurately as possible.
The selection of classes depends naturally on the particular application.
Powerful supervised classification methods based on neural networks, genetic algorithms, Bayesian methods, and Markov random fields have been developed (see, e.g., [1, 2, 3]).
However, even the most advanced methods of automatic classification are typically unable to provide a classification without misclassifications.
The main reason for this is the inherent presence of noise in data as well as
results in very small average accumulation (~1 Tg) of O3 in the east Asian region and very little net export averaged over the period (0.03 Tg d⁻¹).
The low ozone export from east Asia predicted by RAQMS during TRACE-P is a consequence of relatively high dry deposition rates, which are 37% of the gross ozone formation (1.469 Tg d⁻¹) within the TRACE-P regional domain.
INDEX TERMS: 0345 Atmospheric Composition and Structure: Pollution---urban and regional (0305); 0365 Atmospheric Composition and Structure: Troposphere---composition and chemistry; 0368 Atmospheric Composition and Structure: Troposphere---constituent transport and chemistry; 3362 Meteorology and Atmospheric Dynamics: Stratosphere/troposphere interactions; 3367 Meteorology and Atmospheric Dynamics: Theoretical modeling; KEYWORDS: tropospheric ozone, Asian emissions, stratosphere-troposphere exchange, ozone budget, regional photochemical modeling, global photochemical modeling Citation: Pierce, R. B., et al.,
This paper is also based on interviews with 11 families in the Boston area in 1997, conducted by the first author.
They were designed specifically to examine the location and use of the home PC by different members of the family.
All families owned a multimedia PC and had children living at home, but represented a spread of income levels (between $20-100+k per year), housing types (private house, condominium, apartment) and locations (urban, suburban, rural).
Eight of the 11 families had an Internet connection.
Transcripts of both sets of interviews were coded to indicate discussion of topics relevant to the dynamics of computer and Internet use.
The resulting topic collections were surprisingly large for both studies, indicating that families had a lot to say about constituent issues such as the location of the computer, and the way it is shared and managed within the family.
In the following sections of the chapter we step through the major findings in this collection as they relate to the groups of questions raised in the previous section.
Where necessary, we cite relevant quantitative findings to back-up the qualitative analysis.
We preserve the same ordering of issues and questions as before, addressing the timing, location and shared use of the home computer in turn
Over the past 30 years Artificial Intelligence has fragmented from one broad subject into a cluster of narrow but deep individual disciplines.
During this time we have also seen the development of increasingly complex software systems for application domains such as robot control, mobile computing, and expert system interfaces.
Many of these designs use elements from the branches of AI, but pay little attention to the integration of these elements in an intelligent way.
This paper presents an approach to this intelligent integration problem, based on a community of Intentional Agents.
Each of the agents within the community uses a Social Minded Commitment Manager (SMCM) to allow it to reason and cooperate in order to achieve goals when individual execution has failed.
An implementation of the SMCM that has been developed for AgentFactory is presented, and its use then motivated through the description of a robust, redundancy tolerant robot control architecture named MARC.
In concept, a model's state is a (long) vector, that is, a list of values that are sufficient to define the state of the system at any point in time.
In practice, a model's state is defined implicitly by the internal status of all the entities used in the simulation software package.
Prior to this study, the Combined Forces Command (CFC) in Korea used a planning factor of 1.3, or 30% more assets than calculated as the minimum required.
This rigid number represents an "expectation" on the part of the planners, with no assumption as to the level of risk accepted.
This paper presents the applicability of a cosimulation methodology based on an object-oriented simulation environment, to multi-domain and multi-language systems design.
This methodology starts with a system model given as a netlist of heterogeneous components and enables the systematic generation of simulation models for multi-domain and multi-language heterogeneous systems.
For the experiments, we used a complex multi-domain application: an optical MEM switch.
A format-driven word recognition system is proposed for recognition of handwritten words.
Unlike most traditional handwritten word recognizers, which are given a set of target words as a lexicon, our system is assumed to be given a set of format descriptions rather than lexicon words.
Applications of the proposed system include recognition of relatively more important keywords such as postal codes, titles or trademarks.
The format descriptions are in terms of the lengths of the keywords, the types of the characters in the keywords and positional information.
Due to the important role of the keywords in these applications, the recognition expectations in terms of recognition rate and accuracy are usually higher than for lexicon-driven word recognizers.
Web services is one of the emerging approaches in network management.
This paper describes the design and implementation of four Web services based network monitoring prototypes.
Each prototype follows a specific approach to retrieve management data, ranging from retrieving a single management data object, to retrieving an entire table of such objects at once.
We have focused on the interfaces table (ifTable), as described in the IF-MIB.
Identifying growth poles in the SSA region, strengthening linkages and generating mutual benefits across African countries is an important part of the strategy to promote agriculture-led growth at the Africa-wide scale.
Using agricultural trade data, this study focuses on identifying major countries that play important roles in regional agricultural trade and commodities in which African countries have a comparative advantage and where there is potential for more trade within the region.
The ten largest traders in the region, either major agricultural exporters or importers, seemingly have the potential to become growth poles in Africa-wide growth led by agricultural trade.
However, at present intra-SSA trade plays only a marginal role, and official trade data often significantly underestimate the actual trade flows between countries.
In order to avoid historical bias, we focus on potential trade opportunities by investigating whether the group of commodities in which some countries have a comparative advantage matches the group of commodities imported by other African countries.
We find that foodstuffs are among the most dynamic products in regional agricultural trade: the correlation between staple good exports and imports is high and doubles over the two observation periods, up from 0.34 in the first period (1990-1995).
Poor infrastructure and institutional barriers are among the major factors constraining African countries from exploiting their comparative advantage and strengthening their economic linkages.
The model simulations show that opening the EU market is strongly in the common interest of African countries.
Reducing African countries' own trade barriers, both in agriculture and non-agricul...
This study analyzes how market imperfections affect land productivity in a degraded low-potential cereal- livestock economy in the Ethiopian highlands.
A wide array of variables is used to control for land quality in the analysis.
Results of three different selection models were compared with least squares models using the HC3 heteroskedasticity-consistent covariance matrix estimator.
Market imperfections in labor and land markets were found to affect land productivity.
Land productivity was positively correlated with household male and female labor force per unit of land.
Female-headed households achieved much lower land productivity than male-headed households.
Old age of household heads was also correlated with lower land productivity.
Imperfections in the rental market for oxen appeared to cause overstocking of oxen by some households.
Conservation technologies had no significant positive short-run effect on land productivity.
The main results were consistent across the different econometric models.
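As a brief illustration of the HC3 heteroskedasticity-consistent covariance estimator used above, the following is a minimal sketch with statsmodels on synthetic data; the design matrix, coefficients, and noise model are hypothetical stand-ins, not the study's plot-level data.

```python
# Minimal sketch: OLS with HC3 heteroskedasticity-consistent covariance,
# analogous to the least squares models referred to in the abstract.
# X and y are hypothetical stand-ins, not the data used in the study.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(200, 3)))        # intercept + 3 regressors
beta = np.array([1.0, 0.5, -0.3, 0.2])
y = X @ beta + rng.normal(scale=1 + np.abs(X[:, 1]))  # heteroskedastic noise

ols_hc3 = sm.OLS(y, X).fit(cov_type="HC3")            # HC3 covariance estimator
print(ols_hc3.summary())
```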
KEYWORDS: Market imperfections, land productivity, Ethiopian highlands
ACKNOWLEDGMENTS: We are thankful for valuable comments from Peter Hazell and two anonymous reviewers on an earlier draft of this paper.
Funds for this research have been received from the Research Council of Norway and the Norwegian Ministry of Foreign Affairs.
Logistical support has been received from the International Food Policy Research Institute and the International Livestock Research Institute.
We investigate adaptive buffer management techniques for approximate evaluation of sliding window joins over multiple data streams.
In many applications, data stream processing systems have limited memory or have to deal with very high speed data streams.
In both cases, computing the exact results of joins between these streams may not be feasible, mainly because the buffers used to compute the joins contain a much smaller number of tuples than the sliding windows themselves.
Therefore, a stream buffer management policy is needed in that case.
We show that the buffer replacement policy is an important determinant of the quality of the produced results.
To that end we propose GDJ, an adaptive and locality-aware buffering technique for managing these buffers.
GDJ exploits the temporal correlations (at both long and short time scales), which we found to be prevalent in many real data streams.
We note that our algorithm is readily applicable to multiple data streams and multiple joins and requires almost no additional system resources.
We report results of an experimental study using both synthetic and real-world data sets.
Our results demonstrate the superiority and flexibility of our approach when contrasted to other recently proposed techniques.
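To make the notion of a locality-aware buffer replacement policy concrete, here is a toy sketch of a bounded-buffer sliding-window join that evicts the tuple whose key was least recently matched; this is an illustrative policy in the spirit of the abstract, not the authors' GDJ algorithm, and all names below are hypothetical.

```python
# Toy sliding-window stream join with bounded buffers and a
# locality-aware replacement policy (a sketch, not the authors' GDJ).
from collections import OrderedDict

class BoundedWindow:
    def __init__(self, capacity):
        self.capacity = capacity
        self.tuples = OrderedDict()          # key -> value, ordered by last match/insert

    def insert(self, key, value):
        if len(self.tuples) >= self.capacity:
            self.tuples.popitem(last=False)  # evict the least recently matched key
        self.tuples[key] = value

    def probe(self, key):
        if key in self.tuples:
            self.tuples.move_to_end(key)     # recently matched keys are kept longer
            return self.tuples[key]
        return None

def approximate_join(stream_a, stream_b, capacity=4):
    """Join two (key, value) streams; each buffer holds at most `capacity` tuples."""
    win_a, win_b = BoundedWindow(capacity), BoundedWindow(capacity)
    results = []
    for (ka, va), (kb, vb) in zip(stream_a, stream_b):
        hit_b = win_b.probe(ka)
        if hit_b is not None:
            results.append((ka, va, hit_b))
        hit_a = win_a.probe(kb)
        if hit_a is not None:
            results.append((kb, hit_a, vb))
        win_a.insert(ka, va)
        win_b.insert(kb, vb)
    return results

# Example: bursty keys exhibit the short-term temporal locality the policy exploits.
a = [(k, f"a{i}") for i, k in enumerate([1, 1, 2, 2, 3, 1, 1, 4])]
b = [(k, f"b{i}") for i, k in enumerate([1, 2, 1, 3, 1, 1, 5, 1])]
print(approximate_join(a, b))
```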
PRESERVING TRUST ACROSS MULTIPLE SESSIONS IN OPEN SYSTEMS (Fuk Wing Thomas Chan, Department of Computer Science, Master of Science)
Trust negotiation, a new authentication paradigm, enables strangers on the Internet to establish trust through the gradual disclosure of digital credentials and access control policies.
Previous research in trust negotiation does not address issues in preserving trust across multiple sessions.
This thesis discusses issues in preserving trust between parties who were previously considered strangers.
It also describes the design and implementation of trust preservation in TrustBuilder, a prototype trust negotiation system.
Preserving trust information can reduce the frequency and cost of renegotiation.
A scenario is presented that demonstrates that a server supporting trust preservation can recoup the cost of the trust preservation facility when approximately 25% of its requests are from repeat customers.
The throughput and response time improve up to approximately 33% as the percentage of repeat customers grows to 100%.
Context-free grammars are not able to cover all linguistic phenomena.
Thus we define
Our aim is to find syntactic and semantic relationships of words based on the analysis of corpora.
We propose the application of independent component analysis, which seems to have clear advantages over two classic methods: latent semantic analysis and self-organizing maps.
Latent semantic analysis is a simple method for automatic generation of concepts that are useful, e.g., in encoding documents for information retrieval purposes.
However, these concepts cannot easily be interpreted by humans.
Self-organizing maps can be used to generate an explicit diagram which characterizes the relationships between words.
The resulting map reflects syntactic categories in the overall organization and semantic categories in the local level.
The self-organizing map does not, however, provide any explicit distinct categories for the words.
Independent component analysis applied on word context data gives distinct features which reflect syntactic and semantic categories.
Thus, independent component analysis gives features or categories that are both explicit and can easily be interpreted by humans.
This result can be obtained without any human supervision or tagged corpora that would have some predetermined morphological, syntactic or semantic information.
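The following minimal sketch shows the general idea of applying independent component analysis to word-context data with scikit-learn's FastICA; the toy corpus, context window, and number of components are arbitrary assumptions, not the paper's setup.

```python
# Sketch: independent component analysis of word-context co-occurrence data
# (a minimal illustration of the idea; corpus and parameters are arbitrary).
import numpy as np
from sklearn.decomposition import FastICA

corpus = "the cat sat on the mat the dog sat on the rug a cat and a dog ran".split()
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}

# Word-context co-occurrence counts within a +/-1 word window.
C = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in (i - 1, i + 1):
        if 0 <= j < len(corpus):
            C[index[w], index[corpus[j]]] += 1

C = np.log1p(C)                         # damp raw counts
ica = FastICA(n_components=4, random_state=0, max_iter=2000)
features = ica.fit_transform(C)         # each column: one emergent feature

for k in range(features.shape[1]):
    top = np.argsort(-np.abs(features[:, k]))[:3]
    print(f"component {k}:", [vocab[t] for t in top])
```

On a realistic corpus, the words loading most strongly on a component would be inspected for a shared syntactic or semantic category, which is the kind of interpretability the abstract argues for.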
A comparison between the evolution of cancer cell populations and RNA viruses reveals a number of remarkable similarities.
Both display high levels of plasticity and adaptability as a consequence of high degrees of genetic variation.
It has been suggested that, as occurs with RNA viruses, there is a threshold in the levels of genetic instability affordable by cancer cells in order to be able to overcome selection barriers (Trends Genet. 15 (1999) M57).
Here we explore this concept by means of a simple mathematical model.
It is shown that an error threshold exists in this model, which investigates both competition between cancer cell populations and its impact on overall tumor growth dynamics.
In intensive care units physicians are aware of a high lethality rate of septic shock patients.
In this contribution we present typical problems and results of a retrospective, data driven analysis based on two neural network methods applied on the data of two clinical studies.
In this paper a new technique is introduced for automatically building recognisable moving 3D models of individual people.
A set of multi-view colour images of a person are captured from the front, side and back using one or more cameras.
Model-based reconstruction of shape from silhouettes is used to transform a standard 3D generic humanoid model to approximate the person's shape and anatomical structure.
Realistic appearance is achieved by colour texture mapping from the multi-view images.
Results demonstrate the reconstruction of a realistic 3D facsimile of the person suitable for animation in a virtual world.
The system is low-cost and is reliable for large variations in shape, size and clothing.
This is the first approach to achieve realistic model capture for clothed people and automatic reconstruction of animated models.
A commercial system based on this approach has recently been used to capture thousands of models of the general public.
The support vector machine (SVM) constitutes one of the most successful current learning algorithms with excellent classification accuracy in large real-life problems and strong theoretical background.
However, an SVM solution is given by a non-intuitive classification in terms of extreme values of the training set, and the size of an SVM classifier scales with the number of training data.
Generalized
Chunk parsing has focused on the recognition of partial constituent structures at the level of individual chunks.
Little attention has been paid to the question of how such partial analyses can be combined into larger structures for complete utterances.
The TuSBL
We consider the problem of how to enable the streaming of live video content from a single server to a large number of clients.
One recently proposed approach relies on the cooperation of the video clients in forming an application layer multicast tree over which the video is propagated.
Video continuity is maintained as client departures disrupt the multicast tree, using multiple description coded (MDC) streams multicast over several application layer trees.
While this maintains continuity, it can cause video quality fluctuation as clients depart and trees are reconstructed around them.
In this paper we develop a scheme using the transmission of a single-description coded video over an application layer multicast tree formed by cooperative clients.
Video continuity is maintained in spite of tree disruption caused by departing clients using a combination of two techniques: 1) providing time-shifted streams at the server and allowing clients that suffer service disconnection to join a video channel of the time-shifted stream, and 2) using video patching to allow a client to catch up with the progress of a video program.
Simulation experiments demonstrate that our design can achieve uninterrupted service, while not compromising the video quality, at moderate cost.
In this paper, we consider the traffic grooming, routing, and wavelength assignment (GRWA) problem for optical mesh networks.
In most previous studies on optical mesh networks, traffic demands are usually assumed to be wavelength demands, in which case no traffic grooming is needed.
In practice, optical networks are typically required to carry a large number of lower rate (sub-wavelength) traffic demands.
Hence, the issue of traffic grooming becomes very important since it can significantly impact the overall network cost.
In our study, we consider traffic grooming in combination with traffic routing and wavelength assignment.
Our objective is to minimize the total number of transponders required in the network.
We first formulate the GRWA problem as an integer linear programming (ILP) problem.
Unfortunately, for large networks it is computationally infeasible to solve the ILP problem.
Therefore, we propose a decomposition method that divides the GRWA problem into two smaller problems: the traffic grooming and routing problem and the wavelength assignment problem, which can then be solved much more efficiently.
In general, the decomposition method only produces an approximate solution for the GRWA problem.
However, we also provide a sufficient condition under which the decomposition method gives an optimal solution.
Finally, some numerical results are provided to demonstrate the efficiency of our method.
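To illustrate the flavour of such an ILP, here is a hedged toy grooming model on a single link using PuLP: sub-wavelength demands are packed into wavelengths of fixed capacity and the number of transponders (two per lit wavelength) is minimised; the demands, capacity, and objective are illustrative assumptions and this is not the paper's full GRWA formulation.

```python
# Toy grooming ILP on a single link (a sketch, not the paper's GRWA model):
# pack sub-wavelength demands into wavelengths of capacity C and minimise
# the number of transponders (2 per lit wavelength).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, PULP_CBC_CMD

demands = {"d1": 2.5, "d2": 1.0, "d3": 0.5, "d4": 2.0, "d5": 1.5}   # Gb/s (hypothetical)
C = 2.5                                                             # wavelength capacity
W = range(len(demands))                                             # candidate wavelengths

prob = LpProblem("grooming", LpMinimize)
use = {w: LpVariable(f"use_{w}", cat=LpBinary) for w in W}
assign = {(d, w): LpVariable(f"x_{d}_{w}", cat=LpBinary) for d in demands for w in W}

prob += lpSum(2 * use[w] for w in W)                                # transponder count
for d in demands:                                                   # each demand groomed once
    prob += lpSum(assign[d, w] for w in W) == 1
for w in W:                                                         # wavelength capacity
    prob += lpSum(demands[d] * assign[d, w] for d in demands) <= C * use[w]

prob.solve(PULP_CBC_CMD(msg=False))
for w in W:
    if use[w].value() == 1:
        print(f"wavelength {w}:", [d for d in demands if assign[d, w].value() == 1])
```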
This paper describes a method for converting existing thesauri and related resources from their native format to RDF(S) and OWL.
The method identifies four steps in the conversion process.
In each step decisions have to be taken with respect to the syntax or semantics of the resulting representation.
Each step is supported through a number of guidelines.
The method is illustrated through conversions of two large thesauri, namely MeSH and WordNet.
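One possible flavour of such a conversion step is sketched below with rdflib: thesaurus terms and broader-term links are turned into RDFS/OWL triples. The tiny term list and the modelling choice (terms as owl:Class, broader-term as rdfs:subClassOf) are illustrative assumptions only, not the paper's guidelines.

```python
# Sketch: converting a tiny thesaurus-like term hierarchy into RDFS/OWL triples.
from rdflib import Graph, Namespace, Literal
from rdflib.namespace import OWL, RDF, RDFS

EX = Namespace("http://example.org/thesaurus#")
terms = {"Neoplasms": None, "Breast Neoplasms": "Neoplasms"}   # term -> broader term

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)
for term, broader in terms.items():
    uri = EX[term.replace(" ", "_")]
    g.add((uri, RDF.type, OWL.Class))                 # one modelling choice among several
    g.add((uri, RDFS.label, Literal(term, lang="en")))
    if broader is not None:
        g.add((uri, RDFS.subClassOf, EX[broader.replace(" ", "_")]))

print(g.serialize(format="turtle"))
```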
Subdivision surfaces solve numerous problems related to the geometry of character and animation models.
However, unlike on parametrised surfaces there is no natural choice of texture coordinates on subdivision surfaces.
Existing algorithms for generating texture coordinates on non-parametrised surfaces often find solutions that are locally acceptable but globally are unsuitable for use by artists wishing to paint textures.
In addition, for topological reasons there is not necessarily any choice of assignment of texture coordinates to control points that can satisfactorily be interpolated over the entire surface.
We introduce a technique, pelting, for finding both optimal and intuitive texture mapping over almost all of an entire subdivision surface and then show how to combine multiple texture mappings together to produce a seamless result.
The Dempster-Shafer combination rule can be of great utility in multisensor image segmentation.
In addition, the approach based on theory of evidence can be seen as generalizations of the classical Bayesian approach, which is often used in the Hidden Markov Field Model context.
Finally, some recent works allow one to use the DempsterShafer combination rule in the Markovian context, and different methods so obtained can greatly improve the effectiveness of Markovian methods working alone.
The aim of this paper is to make these methods unsupervised by proposing some parameter estimation algorithms.
In order to do so, we use some recent methods of generalized mixture estimation, which allows one to estimate mixtures in which the exact nature of components is not known.
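For reference, Dempster's rule of combination itself is compact enough to show directly; the sketch below combines two basic belief assignments over a small frame of discernment (the class labels are hypothetical, only the rule is standard).

```python
# Dempster's rule of combination for two basic belief assignments
# over a small frame of discernment (class labels are hypothetical).
from itertools import product

def combine(m1, m2):
    """Combine two mass functions given as {frozenset: mass} dicts."""
    combined, conflict = {}, 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b                  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Two "sensors" expressing belief over classes {road, field}.
m_sensor1 = {frozenset({"road"}): 0.6, frozenset({"road", "field"}): 0.4}
m_sensor2 = {frozenset({"road"}): 0.3, frozenset({"field"}): 0.3,
             frozenset({"road", "field"}): 0.4}
print(combine(m_sensor1, m_sensor2))
```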
In general, automated assessment is based on collecting evidence of a candidate's performance in answering one or more questions, relating the evidence to the correct answer or answers to determine any errors and determining the assessment by relating any errors to the given assessment criteria.
In IT full-skills tests the candidate undertakes a typical exercise using a particular IT tool and the evidence collected is analysed to assess what individual skills the candidate has exhibited during the test.
One of the major difficulties of automated IT skills assessment arises from the difficulty in knowing how to associate errors made by the candidate with particular skills.
The difficulties can be reduced by suitable design of the test, by reducing the complexity of the assessment criteria and by the judicious use of human examiners.
This paper illustrates the connection between evidence, assessment criteria and the difficulty of assessment with examples from word processing and the use of spreadsheets.
This paper discusses some of the issues pertaining to the design of digital musical instruments that are to effectively fill the role of traditional instruments (i.e. those based on physical sound production mechanisms).
The design and implementation of a musical instrument that addresses some of these issues, using scanned synthesis coupled to a "smart" physical system, is described.
We performed Bayesian model comparison on mass spectra from CH 4 rf process plasmas to detect radicals produced in the plasma.
The key ingredient for its implementation is the high-dimensional evidence integral.
We apply Gauss approximation to evaluate the evidence.
The results were compared with those calculated by the thermodynamic integration method using Markov Chain Monte Carlo technique.
In spite of the very large difference in computation time between the two methods, very good agreement was obtained.
Alternatively, a Monte Carlo integration method based on the approximated Gaussian posterior density is presented.
Its applicability to the problem of mass spectrometry is discussed.
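The Gauss (Laplace) approximation of the evidence can be sketched on a toy problem as follows: find the posterior mode, compute the Hessian of the negative log-posterior there, and approximate log Z as -f(t*) + (d/2) log(2*pi) - 0.5 log det H. The two-parameter log-posterior below is an arbitrary stand-in, not the mass-spectrometry model of the paper.

```python
# Gauss (Laplace) approximation of the evidence Z = \int p(D|t) p(t) dt
# for a toy two-parameter problem (an arbitrary stand-in model).
import numpy as np
from scipy.optimize import minimize

def neg_log_posterior(theta):
    # unnormalised: Gaussian-like likelihood times a broad Gaussian prior
    return 0.5 * np.sum((theta - np.array([1.0, -0.5])) ** 2 / 0.2) \
        + 0.5 * np.sum(theta ** 2 / 10.0)

def numerical_hessian(f, x, eps=1e-4):
    d = len(x)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            e_i, e_j = np.eye(d)[i] * eps, np.eye(d)[j] * eps
            H[i, j] = (f(x + e_i + e_j) - f(x + e_i - e_j)
                       - f(x - e_i + e_j) + f(x - e_i - e_j)) / (4 * eps ** 2)
    return H

res = minimize(neg_log_posterior, x0=np.zeros(2))       # posterior mode
H = numerical_hessian(neg_log_posterior, res.x)         # curvature at the mode
d = len(res.x)
log_evidence = -res.fun + 0.5 * d * np.log(2 * np.pi) - 0.5 * np.linalg.slogdet(H)[1]
print("Laplace approximation of log evidence:", log_evidence)
```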
Peer-to-Peer file sharing networks have gained tremendous popularity in recent years.
However, traversing Network Address and Port Translators (NAPT) may still fail in certain topologies.
In this paper, we present Programmable Port Forwarding, a lightweight approach for allowing private hosts to fully participate in a Peer-to-Peer network.
By extending the NAPT that a private host uses to connect to hosts outside its private realm, we enlarge the applicability of Peer-to-Peer systems in today's networks.
Additionally, we show that our proposed solution is able to deal with terminal mobility within the domain of the NAPT server as well.
A virtual fence is created by applying an aversive stimulus to an animal when it approaches a predefined boundary.
It is implemented by a small animal-borne computer system with a GPS receiver.
This approach allows the implementation of virtual paddocks inside a normal physically fenced paddock.
Since the fence lines are virtual they can be moved by programming to meet the needs of animal or land management.
This approach enables us to consider animals as agents with natural mobility that are controllable and to apply a vast body of theory in motion planning.
In this paper we describe a herd-animal simulator and physical experiments conducted on a small herd of 10 animals using a Smart Collar.
The Smart Collar consists of a GPS, PDA, wireless networking and a sound amplifier.
In particular we describe a motion planning algorithm that can move a virtual paddock which is suitable for mustering cows.
We present simulation results and data from experiments with 8 cows equipped with Smart Collars.
The standard development of a dialogue system today involves the following steps: corpus collection and analysis, system development guided by corpus analysis, and finally, rigorous evaluation.
Often, evaluation may involve more than one version of the system, for example when it is desirable to show the effect of system parameters that differ from one version to another.
Many algorithms have been proposed in literature to deal with the tracking and data association problem.
A common assumption made in the proposed algorithms is that the targets are independent.
There are, however, many interesting applications in which targets exhibit some sort of coordination, i.e., they satisfy shape constraints.
In the current work, a general and well-formalized method that allows such constraints to be embedded into data association filters is proposed.
The resulting algorithm performs robustly in challenging scenarios.
By introducing a mean-field version of the tile automaton model introduced in earlier works, growth of molecules through chemical reaction networks is studied with explicit consideration for molecule shape as a "tile".
Tiles are picked up randomly to collide, and with a certain rule they react to form new tiles.
A non-trivial growth pattern, called joint growth, is found, in which tiles grow by successively combining with other tiles.
This joint growth leads to a power-law distribution of tile sizes, by forming a positive feedback process for reproduction of tiles through cooperative relationship among large tiles.
This effective growth is achieved by spontaneous differentiation of time scales: quick process for an autocatalytic network and a slower process with joint growth.
We also discuss the relevance of the present results to the origin of life as a loose set of reproducing chemicals.
This paper presents a method to exploit rank statistics to improve fully automatic tracing of neurons from noisy digital confocal microscope images.
Previously proposed exploratory tracing (vectorization) algorithms work by recursively following the neuronal topology, guided by responses of multiple directional correlation kernels.
These algorithms were found to fail when the data was of lower quality (noisier, less contrast, weak signal, or more discontinuous structures).
This type of data is commonly encountered in the study of neuronal growth on microfabricated surfaces.
We show that partitioning the correlation kernels in the tracing algorithm into multiple subkernels, and using the median of their responses as the guiding criterion, improves the tracing precision from 41% to 89% for low-quality data, with a 5% improvement in recall.
Improved handling was observed for artifacts such as discontinuities and/or hollowness of structures.
The new algorithms require slightly higher amounts of computation, but are still acceptably fast, typically consuming less than 2 seconds on a personal computer (Pentium III, 500 MHz, 128 MB).
They produce labeling for all somas present in the field, and a graph-theoretic representation of all dendritic/axonal structures that can be edited.
Topological and size measurements such as area, length, and tortuosity are derived readily.
The efficiency, accuracy, and fully-automated nature of the proposed method makes it attractive for large-scale applications such as high-throughput assays in the pharmaceutical industry, and study of neuron growth on nano/micro-fabricated structures.
A careful quantitative validation of the proposed algorithms is provided against manually derived tracing, using a performance measure that combines the precis...
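The core robustness idea (median of sub-kernel responses instead of a single full-kernel response) can be sketched in a few lines of numpy; the synthetic image, horizontal kernel, and parameters below are stand-ins, not the confocal data or directional kernels of the paper.

```python
# Sketch: partition a directional correlation kernel into sub-kernels and
# use the median of their responses as a robust guiding criterion.
import numpy as np

rng = np.random.default_rng(1)
image = rng.normal(0.0, 1.0, size=(64, 64))
image[31:34, :] += 3.0                     # a horizontal "dendrite" plus noise
image[31:34, 24:27] = 0.0                  # a local discontinuity (signal drop-out)

def subkernel_responses(img, row, col, length=12, n_sub=4):
    """Median vs. mean of the responses of n_sub segments of a horizontal kernel."""
    segment = img[row - 1:row + 2, col:col + length]   # 3 x length template support
    parts = np.array_split(segment, n_sub, axis=1)
    responses = [p.mean() for p in parts]              # one response per sub-kernel
    return np.median(responses), np.mean(responses)

median_r, mean_r = subkernel_responses(image, row=32, col=18)
print(f"median response: {median_r:.2f}  full-kernel (mean) response: {mean_r:.2f}")
# The median is much less affected by the local discontinuity than the mean,
# which is the behaviour the abstract exploits for low-quality data.
```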
Introduction Computational gene prediction has recently become essential to identify all the genes from enormous genome sequences and to define their functions.
However, gene prediction methods still show low specificity (45% at the exon level) [7].
Although computationally predicted data produced by automatic annotation systems are growing rapidly, experimentally verified data provide the cornerstone for improvements in gene prediction, because there is no guarantee that the predicted events occur in vivo.
It has become important to discriminate experimentally verified data from computationally predicted data [5].
Aligning expressed sequence tags (ESTs)/mRNAs to the genomic sequences has been a practical approach to detect gene regions and to identify alternative splicing on a genomic scale.
In previous studies EST-genome alignments were made using 90-93% sequence identity as threshold [2, 3, 4].
However, their low thresholds allow ESTs to incorrectly align with paralogous genes or pseudogene
We describe a pointerless representation of hierarchical regular simplicial meshes, based on a bisection approach proposed by Maubach.
We introduce a new labeling scheme, called an LPT code, that uniquely encodes each simplex of the hierarchy.
We present rules to efficiently compute the neighbors of a given simplex through the use of these codes.
In addition, we show how to traverse the associated tree and how to answer point location and interpolation queries. Our system works in arbitrary dimensions.
this paper by context-sensitivity).
The first and most obvious is indexicality.
A sentence is indexical (as I use the term here) if it expresses different propositions at different contexts of use.
But not all context-sensitivity can be traced to indexicality (including "hidden" indexicality not attributable to an expressed component of the sentence).
The sentence "The number of AIDS babies born in the United States in 2003 is greater than ten thousand" is indexical-free, yet it is context-sensitive, because its truth varies with the world of utterance
This paper surveys nonparametric methods for modeling such data sets.
These models are based on a generalized central limit theorem.
The limit laws in the generalized central limit theorem are operator stable, a class that contains the multivariate Gaussian as well as marginally stable random vectors with different tail behavior in each coordinate.
Modeling this kind of data requires choosing the right coordinates, estimating the tail index for those coordinates, and characterizing dependence between the coordinates.
We illustrate the practical application of these methods with several example data sets from finance and hydrology
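A standard way to estimate the tail index per coordinate of such data is the Hill estimator; the sketch below applies it to synthetic Pareto samples with a different tail in each coordinate (the data are stand-ins for the finance and hydrology examples).

```python
# Sketch: Hill estimator of the tail index, applied per coordinate of a
# heavy-tailed data set (synthetic Pareto samples, not the paper's data).
import numpy as np

def hill_tail_index(x, k):
    """Hill estimate of the tail index alpha from the k largest observations."""
    x = np.sort(np.asarray(x, dtype=float))
    tail = x[-k:]                        # k upper order statistics
    threshold = x[-k - 1]
    return 1.0 / np.mean(np.log(tail) - np.log(threshold))

rng = np.random.default_rng(0)
n, k = 5000, 200
data = np.column_stack([
    rng.pareto(1.5, size=n) + 1.0,       # coordinate with tail index ~1.5
    rng.pareto(3.0, size=n) + 1.0,       # coordinate with tail index ~3.0
])
for j in range(data.shape[1]):
    print(f"coordinate {j}: Hill estimate = {hill_tail_index(data[:, j], k):.2f}")
```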
The Semantic Web consists of many RDF graphs nameable by URIs.
This paper
is a challenging problem for many application domains.
It should be possible to set and manage one organization-wide access control policy that must then be enforced reliably in a multitude of applications running within the organization.
This is severely complicated by the fact that an access control policy can be fine-grained and dependent on application state, and hence its enforcement can crosscut an application in an intricate way.
Autonomous vehicles are increasingly being used in mission-critical applications, and robust methods are needed for controlling these inherently unreliable and complex systems.
This thesis advocates the use of model-based programming, which allows mission designers to program autonomous missions at the level of a coach or wing commander.
To support such a system, this thesis presents the Spock generative planner.
To generate plans, Spock must be able to piece together vehicle commands and team tactics that have a complex behavior represented by concurrent processes.
This is in contrast to traditional planners, whose operators represent simple atomic or durative actions.
Spock represents operators using the RMPL language, which describes behaviors using parallel and sequential compositions of state and activity episodes.
RMPL is useful for controlling mobile autonomous missions because it allows mission designers to quickly encode expressive activity models using object-oriented design methods and an intuitive set of activity combinators.
Spock also is significant in that it uniformly represents operators and plan-space processes in terms of Temporal Plan Networks, which support temporal flexibility for robust plan execution.
Finally, Spock is implemented as a forward progression optimal planner that walks monotonically forward through plan processes, closing any open conditions and resolving any conflicts.
This thesis describes the Spock algorithm in detail, along with example problems and test results.
Mobile agents and mobile computing have grown in importance recently.
The SupervisorWorker pattern is an architectural pattern that helps architects solve the problem of protecting the mobile agent from leakage and tampering.
The fundamental Master-Slave pattern is widespread and heavily used in traditional applications.
The SupervisorWorker pattern inherits many of the Master-Slave pattern's benefits.
It also solves several of the security issues of mobile agents.
In this study, the first year implementation effects of a staff development program on cooperative learning for Dutch elementary school teachers were studied.
A pretest-posttest control group design was used to investigate program effects on teachers' instructional behaviors.
Based on observations of teacher behavior during a cooperative lesson, a statistically significant treatment effect was found for the following instructional cooperative behaviors: structuring positive interdependence, individual accountability, social skills, and evaluation of the group processing.
Training effects were also found for the use of cooperative activities in the direct instruction model and for activating pupils' prior knowledge of social skills.
Promotion of cooperative learning has been high on the educational reform and restructuring agendas for the last few decades.
Cooperative learning (CL) involves pupils working together to accomplish shared learning goals.
Facilitating active learning, involving teaching for understanding, the use of teaching methods to develop critical thinking and problem solving, and the development of learning communities at school are central principles for the educational reforms in the OECD countries (Stern & Huber, 1997; Adviesraad Onderwijs, 1994).
The emphasis on active learning is supported by current cognitive conceptions of both learning and instruction (Shuell, 1996).
CL structures and approaches are seen as valuable instructional strategies for strengthening pupils' capacity for active learning at school and for the promotion of pupils' cognitive and social development.
According to Johnson, Johnson, and Stanne (2000) and Slavin (1995), there are many reasons for CL to enter the mainstream of educational practice.
Fi...
Whilst XCS (Wilson, 1998) has been shown to be more robust and reliable than previous LCS implementations (Kovacs, 1996, 1997), Lanzi (1997) identified a potential problem in the application of XCS to certain simple multi-step non Markovian environments.
The 'Aliasing Problem' occurs when the environment provides the same message for two states in environmental positions that generate different constant payoffs.
This prevents classifiers forming a correct payoff prediction for that message.
This paper introduces a sub-class of the aliasing problem termed the 'Consecutive State Problem' and uses the subclass to identify the effects of consecutive state aliasing on the learning of the State Action Payoff mapping within XCS.
It is shown that aliasing states can prevent the formation of classifiers covering preceding states due to the trade-off of accuracy for match set occupancy made by the classifiers covering the aliasing states.
This can be prevented by identifying a condition encoding which makes such match set 'piracy' improbable.
However,
Introduction The research reported on is undertaken within the Swedish GROG project "Boundaries and groupings -- the structuring of speech in different communicative situations" (Carlson et al., 2002) .
An extended version of the present paper will appear in Strangert (2003).
Focal accent is the highest level of prominence in the Swedish intonation model (Bruce, 1977).
It is signalled primarily by a rise in f0, although recent studies point to strong effects also of other parameters, in particular duration (Heldner, 2001).
A focused word, further, may be more or less emphasized; within the phonologic category of focus a continuous variation of emphasis can be assumed (Bruce, 1998).
The means used to increase emphasis are tonal as well as temporal.
Carlson et al. (1975) observed temporal and tonal adjustments to successively higher levels of emphasis, and Ericson & Lehiste (1995) reported on longer word durations in emphasized words.
To emphasize, and for contrastive purposes,
This paper provides an introduction to the technique of simulation-based production planning and scheduling, a fast growing and popular area in the simulation industry.
SIMUL8 and Visual8 Corporations have collaborated to develop a new software application called SIMUL8Planner that assists in the development of this type of system.
The following document outlines some of the requirements, advantages, and features within this exciting new product.
We have developed a compact new course on digital logic design and computer fundamentals, integrated with laboratory assignments using state-of-the-art design tools and a custom designed lab system board.
The labs give the students hands-on experience with FPGA design using Xilinx Foundation and 8051 assembler programming.
Through five assignments the students design, verify and demonstrate a simple audio processing system.
Feedback from students tells that the lab makes the course fun and helps the understanding of the theory.
In this paper we outline the objectives and contents of the course, give a brief description of the labs and summarise the experience from running this course for about 2000 students during the last four years.
Requirements Engineering's theoretical and practical developments typically look forward to the future (i.e. a system to be built).
Under certain conditions, however, they can also be used for the analysis of problems related to actual systems in operation.
Building on the Jackson/Zave reference model [2] for requirements and specifications, this paper presents a framework useful for the prevention, analysis and communication of designer and operator errors and, importantly, their subtle interactions, so typical in complex socio-technical systems.
We consider the problem of architecting a reliable content delivery system across an overlay network using TCP connections as the transport primitive.
We first argue that natural designs based on store-and-forward principles that tightly couple TCP connections at intermediate end-systems impose fundamental performance limitations, such as dragging down all transfer rates in the system to the rate of the slowest receiver.
In contrast, the ROMA architecture we propose incorporates the use of loosely coupled TCP connections together with fast forward error correction techniques to deliver a scalable solution that better accommodates a set of heterogeneous receivers.
The methods we develop establish chains of TCP connections, whose expected performance we analyze through equation-based methods.
We validate our analytical findings and evaluate the performance of our ROMA architecture using a prototype implementation via extensive Internet experimentation across the PlanetLab distributed testbed.
Telemarketers, direct marketing agencies, collection agencies and others whose primary means of customer contact is via the telephone invest considerable sums of money to make the calling operation efficient and productive.
Investments are required in human resources, infrastructure and technology.
Having invested the dollars, businesses want to ensure that value is maximized.
Call scheduling algorithms provide an efficient method to maximize customer contact.
However, management at a large, national credit-card bank was not convinced that the software used to schedule calls was providing an adequate level of service.
Simulation studies showed that management was justified in this assumption.
The study also revealed that process improvement opportunities exist, which if implemented would likely produce the desired performance improvements.
To help a mobile user navigate and find his or her way in a foreign environment, there are nowadays more possibilities than just using a paper map, namely small mobile devices.
Nevertheless, research and development are needed before these new technical possibilities, in the form of location-based services on small mobile devices, can be used in an ideal manner to replace the traditional paper map.
The optimum would be to deliver the most up-to-date data to the user within a few seconds, representing the data in an understandable, uncomplicated and clear way, and meeting the user's needs by personalising the visualisation and filtering out unimportant information.
To satisfy all these claims different steps of research are necessary.
The aim of this paper is to present a communication synthesis approach stated as an allocation problem.
In the proposed approach, communication synthesis transforms a system composed of processes that communicate via high-level primitives through abstract channels into a set of processes executed by interconnected processors that communicate via signals and share communication control.
The proposed communication synthesis approach deals with both protocol selection and interface generation and is based on binding/allocation of communication units.
This approach allows a wide design space exploration through automatic selection of communication protocols.
We present a new algorithm that performs binding/allocation of communication units.
This algorithm makes use of a cost function to evaluate different allocation alternatives.
We illustrate through an example the usefulness of the algorithm for allocating automatically different protocols within the same application system.
Until the late 80's, the only constraints when designing integrated circuits (IC's) were the area and speed.
The field of low power design was confined to applications such as digital wrist watches or cardiac pacemakers.
In the beginning of the early 90's, this changed rapidly with the growing demand for portable electronic equipment such as cellular phones and notebook computers.
However, decreasing feature sizes and the demand for real-time processing systems have resulted in a level of miniaturisation where the heat dissipation is now the main problem.
Here the trade-off between special packaging, capable of cooling the chip and expensive fan solutions has to be balanced against the consumer's demand for low cost applications.
Furthermore, the recent demand for environmentally friendly consumer goods have pushed companies to design non-portable systems using low power techniques.
This has resulted in increased market share for companies producing such 'green machines'.
The technical problem addressed in this paper is, given two rule systems for consequence relations X and Y, how to construct Y-approximations of a given X-relation.
While an upper Y-approximation can be easily constructed if all Y-rules are Horn, the construction of lower Y-approximations is less straightforward.
We address the problem by defining the notion of co-closure under co-Horn rules, which can be used to remedy violations of certain rules by removing arguments.
In particular, we show how the co-closure under Monotonicity can be used to construct the monotonic restriction of a preferential relation.
Unlike the more usual closure under the rules of M, this co-closure operator supports the intuition that preferential reasoning is more liberal than monotonic reasoning.
The approach is embedded in a general framework for comparing rule systems for consequence relations.
A salient feature of this framework is that it is also possible to compare rule systems that are not related by metalevel entailment.
The new portfolio optimization engine, OptFolio^TM, simultaneously addresses financial return goals, catastrophic loss avoidance, and performance probability.
The innovations embedded in OptFolio enable users to confidently design effective plans for achieving financial goals, employing accurate analysis based on real data.
Traditional analysis and prediction methods are based on mean variance analysis -- an approach known to be faulty.
OptFolio takes a much more sophisticated and strategic direction.
State-of-the-art technology integrates optimization and simulation techniques and a new surface methodology based on linear programming into a global system that guides a series of evaluations to reveal truly optimal investment scenarios.
OptFolio is currently being used to optimize project portfolio performance in oil and gas applications and in capital allocation and budgeting for investments in technology.
According to economic theory, supported by empirical and laboratory evidence, the equilibrium price of a financial security reflects all of the information regarding the security's value.
We investigate the dynamics of the computational process on the path toward equilibrium, where information distributed among traders is revealed step-by-step over time and incorporated into the market price.
We develop a simplified model of an information market, along with trading strategies, in order to formalize the computational properties of the process.
We show that securities whose payoffs cannot be expressed as a weighted threshold function of distributed input bits are not guaranteed to converge to the proper equilibrium predicted by economic theory.
On the other hand, securities whose payoffs are threshold functions are guaranteed to converge, for all prior probability distributions.
Moreover, these threshold securities converge in at most n rounds, where n is the number of bits of distributed information.
We also prove a lower bound, showing a type of threshold security that requires at least n/2 rounds to converge in the worst case.
In this article, we explore the structure of the web as an indicator of popular culture.
In a series of art and technology installations, the software agency needs to stay 'grounded' in what people can readily understand.
We administered a survey to understand how people perceived word and phrase obscurity related with frequency information gathered from a popular Web search engine.
We found the frequency data gathered from the engine closely matched judgments gathered from people.
The results of this study point to a promising new area of research venturing out from a view of the Web as a tool for corpus linguistics, to its use in applications of art and science that provide compelling explorations of popular culture.
We propose a total variation based model for simultaneous image inpainting and blind deconvolution.
Embedding new visible objects such as video or images into MPEG video has many applications in newscasting, pay-per-view, Interactive TV and other distributed video applications.
Because the embedded foreground content interferes with the original motion compensation process of the background stream, we need to decode macroblocks in I and P frames via motion compensation and re-encode all macroblocks with broken reference links via motion reestimation.
Although previous work has explored DCT-compressed domain algorithms and provided a heuristic approach for motion re-estimation, the computation-intensive motion compensation step is not much optimized and so still prevents efficient real-time embedding.
In this work, we optimize previous work to enable realtime embedding processing that can be applied to Interactive Internet TV applications.
We study the motion compensation process and show that on average up to 90% of the macroblocks decoded are not used at all.
To explore this phenomenon, we propose to buffer a GOP (Group-Of-Picture) of frames and apply a backtracking process that identifies the minimum set of macroblocks which need to go through the decoding operation.
At the price of a delay of one GOP time, this approach greatly speeds up the whole embedding process and enables on-line software embedding, even for HDTV streams.
Further optimizations are discussed and a real-world application scenario is presented.
Experimental results have confirmed that this approach is much more efficient than previous solutions and results in equally good video quality.
The experiences of four years teaching systems architecting are described.
The duration of the course systems architecting is 5 days.
The target audience consists of (potential) architects and stakeholders that cooperate intensely with the architect, such as project leaders, product managers, and group leaders.
The course has been given 23 times in the period November 1999 to January 2004.
The maximum number of participants is 16.
Creating Web processes using Web service technology gives us the opportunity for selecting new services which best suit our need at the moment.
Doing this automatically would require us to quantify our criteria for selection.
In addition, there are challenging issues of correctness and optimality.
We present a Constraint Driven Web Service Composition tool in METEOR-S, which allows the process designers to bind Web Services to an abstract process, based on business and process constraints and generate an executable process.
Our approach is to reduce much of the service composition problem to a constraint satisfaction problem.
It uses a multi-phase approach for constraint analysis.
This work was done as part of the METEOR-S framework, which aims to support the complete lifecycle of semantic Web processes.
Glue has evolved significantly during the past decade.
Although the recent move to type-theoretic notation was a step in the right direction, basing the current Glue system on system F (second-order lambda-calculus) was an unfortunate choice.
An extension to two sorts and ad hoc restrictions had to be improvised to avoid inappropriate composition of meanings.
As a result, the current system is unnecessarily complicated.
A first-order Glue system is proposed.
This new system is not only simpler and more elegant as it captures the exact requirements for Gluestyle compositionality without ad-hoc improvisations, but it also turns out to be more powerful than the current two-sorted (pseudo-) second-order system.
Firstorder Glue supports all existing Glue analyses as well as new, more elegant and/or more demanding analyses.
this paper.
The possibility of collapsing to within the Schwarzschild radius depends on m being greater than about 1.4, and this is in keeping with the result obtained above in section III.
So far there is no conflict between the two theories
We present an abstract machine that encodes both type safety and control safety in an efficient manner and that is suitable as a mobile-code format.
At the code consumer, a single linear-complexity algorithm performs not only verification, but simultaneously also transforms the stack-based wire format into a register-based internal format.
The latter is beneficial for interpretation and native code generation.
Our dual-representation approach overcomes some of the disadvantages of existing mobile-code representations, such as the JVM and CLR wire formats.
Source separation is becoming increasingly important in acoustical applications for spatial filtering.
In the absence of any known source signals (blind case), a blind update equation similar to the natural gradient method [1] is presented, a derivative of which can be used in the case of known references (non-blind case).
If some, but not all, source signals are known, blind-only algorithms are suboptimal, since some available information is not exploited.
To overcome this problem, non-blind separation techniques can be incorporated.
For the instantaneous mixing case (no time delays, no convolution), two different ways of combining blind and non-blind source separation methods are shown, namely an echo cancellertype and an equalizer-like approach.
Simulations allow a comparison of the convergence time of both structures versus the convergence time of the blind-only case and clearly demonstrate the benefit of using combined blind/non-blind separation techniques.
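The blind natural-gradient update referenced above has the well-known form W <- W + mu (I - g(y) y^T) W with y = W x; the sketch below applies a batch version of it to a synthetic instantaneous two-source mixture. The nonlinearity, step size, and sources are illustrative choices, and the paper's combined blind/non-blind structures are not reproduced here.

```python
# Sketch of the blind natural-gradient separation update
#   W <- W + mu * (I - g(y) y^T) W,   y = W x,
# on an instantaneous (no-delay) mixture of two synthetic sources.
import numpy as np

rng = np.random.default_rng(0)
n = 20000
s = np.vstack([np.sign(rng.normal(size=n)) * rng.exponential(size=n),   # super-Gaussian source
               rng.laplace(size=n)])                                    # Laplacian source
A = np.array([[1.0, 0.6], [0.4, 1.0]])          # unknown mixing matrix (assumed here)
x = A @ s                                       # observed mixtures

W = np.eye(2)
mu = 0.01
for _ in range(200):                            # batch natural-gradient iterations
    y = W @ x
    g = np.tanh(y)                              # score-function nonlinearity (a common choice)
    W += mu * (np.eye(2) - (g @ y.T) / n) @ W

print("W @ A (should be close to a scaled permutation):")
print(W @ A)
```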
We study here lifts and random lifts of graphs, as defined in [1].
We consider the Hadwiger number and the Hajós number of lifts of K_n, and analyze their extremal as well as their typical values (that is, for random lifts).
For 2-lifts, we show bounds in terms of n, and that random lifts achieve the lower bound (as n tends to infinity).
In this paper we review the most peculiar and interesting information-theoretic and communications features of fading channels.
We first describe the statistical models of fading channels which are frequently used in the analysis and design of communication systems.
Next, we focus on the information theory of fading channels, by emphasizing capacity as the most important performance measure.
Both single-user and multiuser transmission are examined.
Further, we describe how the structure of fading channels impacts code design, and finally overview equalization of fading multipath channels.
Introduction Several models were built recently in the metacognitive level of the students' interaction with Cognitive Tutors, an intelligent tutoring system based on ACT-R theory.
After finding suboptimal help-seeking behavior, we built a metacognitive model of desired help-seeking behavior (Aleven et al., in press).
In a different Cognitive Tutor, Baker et al. (2004) built a model that identifies misuse of the tutor.
Here we take another step and describe a model of students' goals and strategies, which lie at the basis of their metacognitive actions.
By comparing the model's predictions to students' log-files we find the correlation between having the goals and learning gains.
Goals and actions According to the model, each student has tendencies towards four different local-goals.
Each goal is related to a certain strategy, which leads to specific action/s with the tutor (table 1).
The unique personal pattern of tendencies categorizes the individual learning process with the tut
We describe the design, implementation, and performance of a new system for access control on the web.
To achieve greater flexibility in forming access-control policies -- in particular, to allow better interoperability across administrative boundaries -- we base our system on the ideas of proof-carrying authorization (PCA).
We extend PCA with the notion of goals and sessions, and add a module system to the proof language.
Our access-control system makes it possible to locate and use pieces of the security policy that have been distributed across arbitrary hosts.
We provide a mechanism which allows pieces of the security policy to be hidden from unauthorized clients.
Our system is implemented as modules that extend a standard web server and web browser to use proof-carrying authorization to control access to web pages.
The web browser generates proofs mechanically by iteratively fetching proof components until a proof can be constructed.
We provide for iterative authorization, by which a server can require a browser to prove a series of challenges.
Our implementation includes a series of optimizations, such as speculative proving, and modularizing and caching proofs, and demonstrates that the goals of generality, flexibility, and interoperability are compatible with reasonable performance.
We present a methodology and design flow for signal processing application specific integrated circuit macro-cells.
The key features of the methodology are the mastering the complexity of design, the increasing of reuse factor and the early error detection.
It takes advantage of derivative designs, signal processing modularity and generic modeling, and combines both levels of abstraction in order to produce an efficient architecture.
The flow includes a fast verification platform that drives both algorithm and architecture validation in an efficient way.
We illustrate the effectiveness of the proposed methodology by a significant industrial application.
Experimental design results indicate strong advantages of the proposed schemes.
We present a means to represent utility, the measure of goodness of a possible deal.
This representation includes a number of features necessary to represent complex requirements, such as time dependence, explicit combinations of terms, and cross dependences.
The formulation is closely tied to the form used to represent contracts, which makes it useful for automated negotiation software.
In order to better support current and new applications, the major DBMS vendors are stepping beyond uninterpreted binary large objects, termed BLOBs, and are beginning to offer extensibility features that allow external developers to extend the DBMS with, e.g., their own data types and accompanying access methods.
Existing solutions include DB2 extenders, Informix DataBlades, and Oracle cartridges.
Extensible systems offer new and exciting opportunities for researchers and third-party developers alike.
This paper reports on an implementation of an Informix DataBlade for the GR-tree, a new R-tree based index.
This effort represents a stress test of the perhaps currently most extensible DBMS, in that the new DataBlade aims to achieve better performance, not just to add functionality.
The paper provides guidelines for how to create an access method DataBlade, describes the sometimes surprising challenges that must be negotiated during DataBlade development, and evaluates the extensibility of the Informix Dynamic Server.
This paper explores the patterns and determinants of empowerment, income generation, and environmental sustainability under varying degrees of woodlot management in Tigray, Ethiopia.
Our analysis is based upon a survey of 120 collectively managed woodlots, devolved to varying degrees, and 66 households that have recently received small plots of community land for tree planting.
We find that more devolved woodlot
this paper remarked that Einstein's thinking was more along the lines of Section 7.
For the benefit of the present author and the audience that first saw some of the plots presented here, J.P. Vigier recounted his own knowledge of the history of privately expressed doubts about the Liénard-Wiechert results.
The history traces through Louis de Broglie and indeed back to Einstein.
But nobody articulated the doubts in print.
The fact that Einstein used the Liénard-Wiechert results conferred on them unwarranted authority.
With the end results assumed not subject to question, modern authors have generally just retro-fitted modern mathematical methods onto them, without seizing the opportunity to delve into questions that the modern methods might have exposed.
For example, the modern concept of invariant scalar inner product underlies the formulation (3).
But the fact that gkR is equivalent to the inner product means only that it is an invariant; it does not mean that it is the invariant that corresponds to the correct time argument; i.e., the correct proper time of the correct entity in the problem.
The slipperiness of the construct V R has been demonstrated.
For example, Whitney (1989) shows that the operations of retardation and Lorentz transformation can lead to ambiguity by failing to commute.
Another of the modern approaches uses generalized functions: the Dirac delta and the Heaviside step.
[See, for example, Jackson (1975), Sections 12.11 and 14.1.] The problem with the generalized functions is that they lack the mathematical property of uniform convergence, and as a result they can produce apparently pathological behaviors.
Worst among these is failure in operator commutation: as the generalized functions are used in field theory, the operations of diffe...
We study the distribution of the statistics `number of fixed points' and `number of excedances' in permutations avoiding subsets of patterns of length 3.
We solve all the cases of simultaneous avoidance of more than one pattern, giving generating functions enumerating these two statistics.
Some cases are generalized to patterns of arbitrary length.
For avoidance of one single pattern we give partial results.
We also describe the distribution of these statistics in involutions avoiding any subset of patterns of length 3.
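For small n, the joint distribution of fixed points and excedances over pattern-avoiding permutations can be checked by brute force, which is a useful sanity check on generating-function results of this kind; the sketch below uses the single pattern 321 as an example, whereas the paper treats many pattern subsets.

```python
# Brute-force sketch: joint distribution of (fixed points, excedances)
# over permutations of [n] avoiding a set of length-3 patterns.
from itertools import permutations, combinations
from collections import Counter

def contains(perm, pattern):
    """True if perm contains an occurrence of the given pattern (as relative order)."""
    k = len(pattern)
    order = sorted(range(k), key=lambda i: pattern[i])
    for idx in combinations(range(len(perm)), k):
        vals = [perm[i] for i in idx]
        if sorted(range(k), key=lambda i: vals[i]) == order:
            return True
    return False

def joint_distribution(n, patterns):
    dist = Counter()
    for perm in permutations(range(1, n + 1)):
        if any(contains(perm, p) for p in patterns):
            continue
        fixed = sum(1 for i, v in enumerate(perm, start=1) if v == i)
        exced = sum(1 for i, v in enumerate(perm, start=1) if v > i)
        dist[(fixed, exced)] += 1
    return dist

print(joint_distribution(5, patterns=[(3, 2, 1)]))   # (#fixed points, #excedances) -> count
```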
Golay complementary sequences, often referred to as Golay pairs, are characterised by the property that the sum of their aperiodic autocorrelation functions equals to zero, except for the zero shift.
Because of this property, Golay complementary sequences can be utilised to construct Hadamard matrices defining sets of orthogonal spreading sequences for DS CDMA systems, with lengths not necessarily being a power of 2.
In the paper, we present an evaluation, from the viewpoint of asynchronous DS CDMA applications, of some sets of spreading sequences derived from Golay complementary sequences.
We then modify those sets of sequences to enhance their correlation properties for asynchronous operation and simulate a multiuser DS CDMA system utilising the modified sequences.
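To make the defining property above concrete, the short Python sketch below checks that the aperiodic autocorrelations of a Golay pair cancel at every non-zero shift; the length-4 pair and the helper name are illustrative choices, not taken from the paper.

    def aperiodic_acf(seq, shift):
        # Aperiodic autocorrelation of a +/-1 sequence at a given shift.
        return sum(seq[i] * seq[i + shift] for i in range(len(seq) - shift))

    a = [1, 1, 1, -1]   # a classical length-4 Golay complementary pair
    b = [1, 1, -1, 1]
    for k in range(len(a)):
        print(k, aperiodic_acf(a, k) + aperiodic_acf(b, k))   # 8 at k = 0, 0 otherwise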
Splines with free knots have been extensively studied in regard to calculating the optimal knot positions.
The dependence of the accuracy of approximation on the knot distribution is highly nonlinear, and optimisation techniques face a difficult problem of multiple local minima.
The domain of the problem is a simplex, which adds to the complexity.
We have applied a recently developed cutting angle method of deterministic global optimisation, which allows one to solve a wide class of optimisation problems on a simplex.
The results of the cutting angle method are subsequently improved by local discrete gradient method.
The resulting algorithm is sufficiently fast and guarantees that the global minimum has been reached.
The results of numerical experiments are presented.
In this paper, graphical games, a compact graphical representation for multi-player game theory, are introduced.
In this paper we introduce Functional Difference Predictors (FDPs), a new class of perceptually-based image difference metrics that predict how image errors affect the ability to perform visual tasks using the images.
To define the properties of FDPs, we conduct a psychophysical experiment that focuses on two visual tasks: spatial layout and material estimation.
In the experiment we introduce errors in the positions and contrasts of objects reflected in glossy surfaces and ask subjects to make layout and material judgments.
The results indicate that layout estimation depends only on positional errors in the reflections and material estimation depends only on contrast errors.
These results suggest that in many task contexts, large visible image errors may be tolerated without loss in task performance, and that FDPs may be better predictors of the relationship between errors and performance than current Visible Difference Predictors (VDPs).
In this paper we study the impact of incorporating handover prediction information into the call admission control process in mobile cellular networks.
The comparison is done between the performance of optimal policies obtained with and without the predictive information.
The prediction agent classifies mobile users in the neighborhood of a cell into two classes, those that will probably be handed over into the cell and those that probably will not.
We consider the classification error by modeling the false-positive and non-detection probabilities.
Two different approaches to compute the optimal admission policy were studied: dynamic programming and reinforcement learning.
Preliminary results show significant performance gains when the predictive information is used in the admission process.
In this paper we present a new postal envelope segmentation method based on 2-D histogram clustering and watershed transform.
The segmentation task consists of detecting the modes associated with homogeneous regions in envelope images such as the handwritten address block, postmarks, stamps and background.
The homogeneous modes in 2-D histogram are segmented through the morphological watershed transform.
Our approach is applied to complex Brazilian postal envelopes.
Very little a priori knowledge of the envelope images is required.
The advantages of this approach will be described and illustrated with tests carried out on 300 different images where there is no fixed position for the handwritten address block, postmarks and stamps.
The complexity of stochastic models of real-world systems is usually managed by abstracting details and structuring models in a hierarchical manner.
Systems are often built by replicating and joining subsystems, making possible the creation of a model structure that yields lumpable state spaces.
This fact has been exploited to facilitate model-based numerical analysis.
Likewise, recent results on model construction suggest that decision diagrams can be used to compactly represent large Continuous Time Markov Chains (CTMCs).
In this paper, we present an approach that combines and extends these two approaches.
In particular, we propose methods that apply to hierarchically structured models with hierarchies based on sharing state variables.
The hierarchy is constructed in a way that exposes structural symmetries in the constructed model, thus facilitating lumping.
In addition, the methods allow one to derive a symbolic representation of the associated CTMC directly from the given model without the need to compute and store the overall state space or CTMC explicitly.
The resulting representation of a generator matrix allows the analysis of large CTMCs in lumped form.
The efficiency of the approach is demonstrated with the help of two example models.
Problems in game theory can be used for benchmark DNA computations.
Large numbers of game strategies and chance events can be assembled into finite state machines.
These many machines perform, in parallel, distinct plays of a game.
Strategies can be exposed to selection and breeding.
The computational capabilities of DNA are matched with aspects of game theory, but the most interesting problems are yet to be treated.
This paper reports simulation methods and results for analyzing a self-adjusting Quality of Service (QoS) control scheme for multimedia/telecommunication systems based on resource reservation.
We study the case in which high priority clients' QoS requirement is not changed throughout the service period, while low priority clients' QoS may be adjusted by the system between the maximum and minimum QoS levels specified in order to adapt to the load of the system.
The goal of the system design is to optimize the system reward as a result of servicing clients with different QoS and reward/penalty requirements.
A QoS manager in these systems can do a table lookup operation using the simulation result reported here to optimize the system total reward dynamically in response to changing workloads during the runtime.
The simulation result is particularly applicable to multimedia and telecommunication systems in which dynamic QoS negotiation/renegotiation is used as a mechanism to optimize the overall system performance.
It is by now motherhood-and-apple-pie that complex distributed Internet services form the basis not only of ecommerce but increasingly of mission-critical networkbased applications.
What is new is that the workload and internal architecture of three-tier enterprise applications presents the opportunity for a new approach to keeping them running in the face of both "natural" failures and adversarial attacks.
The core of the approach is anomaly detection and localization based on statistical machine learning techniques.
Unlike previous approaches, we propose anomaly detection and pattern mining not only for operational statistics such as mean response time, but also for structural behaviors of the system---what parts of the system, in what combinations, are being exercised in response to different kinds of external stimuli.
In addition, rather than building baseline models a priori, we extract them by observing the behavior of the system over a short period of time during normal operation.
We explain the necessary underlying assumptions and why they can be realized by systems research, report on some early successes using the approach, describe benefits of the approach that make it competitive as a path toward selfmanaging systems, and outline some research challenges.
Our hope is that this approach will enable "new science" in the design of self-managing systems by allowing the rapid and widespread application of statistical learning theory techniques (SLT) to problems of system dependability.
1 Recovery as Rapid Adaptation A "five nines" availability service (99.999% uptime) can be down only five minutes a year.
Putting a human in the critical path to recovery would expend that entire budget on a single incident, hence the increasing interest in self-managing or so-cal...
Since the presentation of the backpropagation algorithm, several adaptive learning algorithms for training a multilayer perceptron (MLP) have been proposed.
In a recent article, we have introduced an efficient training algorithm based on a nonmonotone spectral conjugate gradient.
In particular, a scaled version of the conjugate gradient method suggested by Perry, which employs the spectral steplength of Barzilai and Borwein, was presented.
The learning rate was automatically adapted at each epoch according to Shanno's technique which exploits the information of conjugate directions as well as the previous learning rate.
In addition, a new acceptability criterion for the learning rate was utilized based on nonmonotone Wolfe conditions.
A crucial issue of these training algorithms is the learning rate adaptation.
Various variable learning rate adaptations have been introduced in the literature to improve the convergence speed and avoid convergence to local minima.
In this contribution, we incorporate in the previous training algorithm a new effective variable learning rate adaptation, which increases its efficiency.
Experimental results in a set of standard benchmarks of MLP networks show that the proposed training algorithm improves the convergence speed and success percentage over a set of well known training algorithms.
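For readers unfamiliar with the spectral steplength mentioned above, the sketch below shows the Barzilai-Borwein rule computed from successive iterates and gradients; the function and variable names are illustrative, and this is not the authors' training code.

    import numpy as np

    def bb_steplength(x_prev, x_curr, g_prev, g_curr, fallback=1.0):
        # Barzilai-Borwein (spectral) steplength: alpha = (s's) / (s'y),
        # with s = x_k - x_{k-1} and y = g_k - g_{k-1}.
        s = x_curr - x_prev
        y = g_curr - g_prev
        denom = float(s @ y)
        return float(s @ s) / denom if abs(denom) > 1e-12 else fallback

    # Toy quadratic f(x) = 0.5 * x'x has gradient g(x) = x, so the step equals 1.
    x0, x1 = np.array([2.0, -1.0]), np.array([1.5, -0.5])
    print(bb_steplength(x0, x1, x0, x1))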
Storage device performance prediction is a key element of self-managed storage systems and application planning tasks, such as data assignment.
This work explores the application of a machine learning tool, CART models, to storage device modeling.
Our approach predicts a device's performance as a function of input workloads, requiring no knowledge of the device internals.
We propose two uses of CART models: one that predicts per-request response times (and then derives aggregate values) and one that predicts aggregate values directly from workload characteristics.
After being trained on our experimental platforms, both provide accurate black-box models across a range of test traces from real environments.
Experiments show that these models predict the average and 90th percentile response time with a relative error as low as 16% when the training workloads are similar to the testing workloads, and interpolate well across different workloads.
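As a rough illustration of the black-box idea, a regression tree can be fitted to workload features and its per-request predictions aggregated into mean and 90th-percentile values; the synthetic data and feature choices below are assumptions for the sketch, not the paper's experimental setup.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    # Synthetic per-request features: inter-arrival time, request size (KB), read flag.
    X = np.column_stack([rng.exponential(5.0, 1000),
                         rng.choice([4, 16, 64, 256], 1000),
                         rng.integers(0, 2, 1000)])
    # Toy response times loosely tied to request size (illustration only).
    y = 0.1 + 0.002 * X[:, 1] + rng.exponential(1.0, 1000)

    model = DecisionTreeRegressor(min_samples_leaf=20).fit(X[:800], y[:800])
    pred = model.predict(X[800:])                 # per-request predictions
    print(pred.mean(), np.percentile(pred, 90))   # aggregates derived from them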
We present a decentralized, behavior-based approach to assembling and maintaining robot formations.
Our approach dynamically grows formations from single robots into line segments and ultimately larger and more complex formations.
Formation growth
In this paper, we examine migration techniques of mobile agents in Java.
We identify the problems in Java technology, classify different migration styles and present possible solutions and related work.
The proposed classification distinguishes between code migration, execution migration and data migration.
The classification defines a partial order to compare different migration approaches.
For realizing strong migration in Java, two solutions are proposed.
On the one hand, a pre-processor adds all the necessary information for migration to the source code before compilation time.
On the other hand, a JNI-based plugin for any virtual machine provides mechanisms to capture the agent's execution state.
The restoration of the execution state is done by the plugin in combination with a byte code modifier which slightly changes the byte code of the agent.
In previous work we presented a foundational calculus for spatially distributed computing based on intuitionistic modal logic.
Through the modalities □ and ◇ we were able to capture two key invariants: the mobility of portable code and the locality of fixed resources.
This work
As a great many new devices with diverse capabilities proliferate, their limited display sizes become a major obstacle that undermines the usefulness of these devices for information access.
In this paper, we introduce our recent research on adapting multimedia content including images, videos and web pages for browsing on small-formfactor devices.
A theoretical framework as well as a set of novel methods for presenting and rendering multimedia under limited screen sizes is proposed to improve the user experience.
The content modeling and processing are provided as subscriptionbased web services on the Internet.
Experiments show that our approach is extensible and able to achieve satisfactory results with high efficiency.
The human eye can accommodate luminance in a single view over a range of about 10,000:1 and is capable of distinguishing about 10,000 colors at a given brightness.
By comparison, typical CRT displays have a luminance range less than 100:1 and cover about half of the visible color gamut.
Despite this difference, most digital image formats are geared to the capabilities of conventional displays, rather than the characteristics of human vision.
In this paper, we propose two compact encodings suitable for the transfer, manipulation, and storage of full range color images.
The first format is a replacement for conventional RGB images, and encodes color pixels as log luminance values and CIE (u',v') chromaticity coordinates.
We have implemented and distributed this encoding as part of the standard TIFF I/O library on the net.
The second format is proposed as an adjunct to conventional RGB data, and encodes out-of-gamut (and out-of-range) pixels in a supplemental image, suitable as a layer extension to the Flashpix standard.
This data can then be recombined with the original RGB layer to obtain a high dynamic range image covering the full gamut of perceivable colors.
Finally, we demonstrate the power and utility of full gamut imagery with example images and applications.
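The first encoding can be pictured with the sketch below; the scale factors (256, 64, 410) follow the commonly published 32-bit LogLuv layout and are stated here as assumptions rather than taken from the paper.

    import math

    def encode_logluv(X, Y, Z):
        # Log2 luminance gives roughly uniform relative precision over a very
        # large dynamic range; (u', v') are the CIE 1976 chromaticity coordinates.
        Le = int(256.0 * (math.log2(Y) + 64.0)) if Y > 0 else 0
        denom = X + 15.0 * Y + 3.0 * Z
        u_prime, v_prime = 4.0 * X / denom, 9.0 * Y / denom
        return Le, int(410.0 * u_prime), int(410.0 * v_prime)

    print(encode_logluv(0.9505, 1.0, 1.089))   # approximately the D65 white point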
Predictability of financial time series (FTS) is a well-known dilemma.
A typical approach to this problem is to apply a regression model, built on the historical data and then further extend it into the future.
If however the goal is to support or even make investment decisions, regression-based FTS predictions are inappropriate as on top of being uncertain and unnecessarily complex, they require further analysis to make an investment decision.
Rather than precise FTS prediction, a busy investor may prefer a simple decision on the current day transaction: buy, wait, sell, that would maximise his return on investment.
Based on such assumptions a classification model is proposed that learns the transaction patterns from optimally labelled historical data and accordingly gives the profit-driven decision for the current day transaction.
Exploiting the stochastic nature of the investment cycle, the model is locally reduced to a 2-class classification problem and is built on many features extracted from the share price and transaction volume time series.
Simulation of the model over 20 years of NYSE:CSC share price history showed substantial improvement of the profit compared to a passive long-term investment.
We show that the current TCP Vegas algorithm can become unstable in the presence of network delay and propose a modification that stabilizes it.
The stabilized Vegas remains completely source-based and can be implemented without any network support.
We suggest an incremental deployment strategy for stabilized Vegas when the network contains a mix of links, some with active queue management and some without.
This paper describes a simulation-based analysis of a printed circuit board (PCB) testing process.
The PCBs are used in a defense application and the testing process is fairly complex.
Boards are mounted on a test unit in batches and go through three thermal test cycles.
As boards fail testing during the thermal cycling, operators can either replace the failed boards at fixed points during the cycling or can allow the test unit to complete the testing cycle before removing failed boards.
The primary objective of the simulation study is to select an operating strategy for a given set of operating parameters.
A secondary objective is to identify the operating factors to which the strategy selection is sensitive.
Initial testing indicated that failed boards should be replaced as soon as possible under the current operating configuration of the sponsor's facility.
Secondary testing is also described.
This paper presents a simulation optimization of a real scheduling problem in industry; simulated annealing is introduced for this purpose.
Investigation is performed into the practicality of using simulated annealing to produce high quality schedules.
Results on the solution quality and computational effort show the inherent properties of the simulated annealing.
It is shown that when using this method, high quality schedules can be produced within reasonable time constraints.
This paper presents the current state of research on the supervision and control of discrete event systems.
The
The performance of a massively parallel computing system is often limited by the speed of its interconnection network.
One strategy that has been proposed for improving network efficiency is the use of adaptive routing, in which network state information can be used in determining message paths.
The design of an adaptive routing system involves several parameters, and in order to build high speed scalable computing systems, it is important to understand the costs and performance benefits of these parameters.
In this paper, we investigate the effect of buffer design on communication latency.
Four message storage models and their related route selection algorithms are analyzed.
A comparison of their performance is presented, and the features of buffer design which are found to significantly impact network efficiency are discussed.
traints.
In other words, constraints select representations, but not the reverse.
d. the arbitral award of representations is not sovereign anymore.
A representation is not ill- or well-formed as it was before; it is, as everything else in OT, more or less ill- or well-formed: there is nothing that cannot be violated.
In OT, computation does not operate ON representations as before, but WITH representations.
Hence, OT has abolished the red line between structure and process: there is no structure left, computation (= process) decides alone.
e. point out the consequences of the demotion of representations: 1. they had a function, i.e.
fighting back overgeneration.
Giving up on them sets phonology back to where it stood in post-SPE times.
2. another job of representations was to offer explanations for the facts we observe.
Their absence has triggered a run on extra-phonological explanations for phonological events: "grounded" constraints.
- is a "phon
Large string datasets are common in a number of emerging text and biological database applications.
State-of-the-art speech recognition systems are trained using human transcriptions of speech utterances.
In this paper, we describe a method to combine active and unsupervised learning for automatic speech recognition (ASR).
The goal is to minimize the human supervision for training acoustic and language models and to maximize the performance given the transcribed and untranscribed data.
Active learning aims at reducing the number of training examples to be labeled by automatically processing the unlabeled examples, and then selecting the most informative ones with respect to a given cost function.
For unsupervised learning, we utilize the remaining untranscribed data by using their ASR output and word confidence scores.
Our experiments show that the amount of labeled data needed for a given word accuracy can be reduced by 75% by combining active and unsupervised learning.
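The selection step can be pictured with the small sketch below; the data layout (utterance id, ASR hypothesis, confidence score) and the fixed labeling budget are illustrative assumptions, not the authors' exact procedure.

    def split_by_confidence(utterances, budget):
        # utterances: list of (utt_id, asr_hypothesis, confidence) triples.
        ranked = sorted(utterances, key=lambda u: u[2])          # least confident first
        to_transcribe = [u[0] for u in ranked[:budget]]          # active learning: human labels
        auto_labelled = [(u[0], u[1]) for u in ranked[budget:]]  # unsupervised: keep ASR output
        return to_transcribe, auto_labelled

    utts = [("u1", "call me back", 0.42), ("u2", "thank you", 0.93), ("u3", "uh hm", 0.17)]
    print(split_by_confidence(utts, budget=1))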
Learning classifier systems, their parameterisation, and their rule discovery systems have often been evaluated by measuring classification accuracy on small Boolean functions.
We demonstrate that by restricting the rule set to the initial random population high classification accuracy can still be achieved, and that relatively small functions require few rules.
We argue this demonstrates that high classification accuracy on small functions is not evidence of effective rule discovery.
However, we argue that small functions can nonetheless be used to evaluate rule discovery when a certain more powerful type of metric is used.
Ching-Chien Chen, Craig A. Knoblock, Cyrus Shahabi, and Snehal Thakkar, University of Southern California, Department of Computer Science and Information Sciences Institute, 4676 Admiralty Way, Marina del Rey, CA 90292, USA. {chingchc, knoblock, shahabi, snehalth}@usc.edu
this paper we focus on finding serial episodes from data streams.
To the best of our knowledge the problem of mining serial episodes from data streams has been studied in depth only for length-1 episodes [2]
Introduction DNA topology is of fundamental importance for a wide range of biological processes [1].
Since the topological state of genomic DNA is of importance for its replication, recombination and transcription, there is an immediate interest to obtain information about the supercoiled state from sequence periodicities [2,3].
Identification of dominant periodicities in DNA sequence will help understand the important role of coherent structures in genome sequence organization [4,5].
Li [6] has discussed meaningful applications of spectral analyses in DNA sequence studies.
Recent studies indicate that the DNA sequence of letters A, C, G and T exhibits the inverse power-law form 1/f^α frequency spectrum, where f is the frequency and α the exponent.
It is possible, therefore, that the sequences have longrange order [7-14].
Inverse power-law form for power spectra of fractal space-time fluctuations is generic to dynamical systems in nature and is identified as self-organized criticality
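The exponent of such a 1/f^α spectrum is commonly read off as the negative slope of a log-log fit to the periodogram; the sketch below shows the idea, with an arbitrary purine/pyrimidine ±1 mapping chosen only for illustration (the cited studies may use different mappings).

    import numpy as np

    def spectral_exponent(x):
        # Slope of log power versus log frequency; a 1/f^alpha spectrum
        # appears as a straight line with slope -alpha.
        x = np.asarray(x, dtype=float) - np.mean(x)
        power = np.abs(np.fft.rfft(x)) ** 2
        freqs = np.fft.rfftfreq(len(x))
        mask = freqs > 0
        slope, _ = np.polyfit(np.log(freqs[mask]), np.log(power[mask] + 1e-12), 1)
        return -slope

    rng = np.random.default_rng(0)
    seq = rng.choice(list("ACGT"), 1024)
    x = [1 if base in "AG" else -1 for base in seq]   # purine/pyrimidine mapping
    print(spectral_exponent(x))                       # near 0 for an uncorrelated sequence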
We present new expressions for generating functions of irreducible characters of the symmetric group S n .
These generating functions are of the form where the sum is over partitions λ of restricted shapes such as hooks and double hooks.
We use the λ-ring theory for symmetric functions to demonstrate our statements.
Large sparse matrices play an important role in many modern information retrieval methods.
These methods, such as clustering and latent semantic indexing, perform a huge number of computations with such matrices, so their implementation should be very carefully designed.
In this paper we discuss three implementations of sparse matrices.
The first one is classical, based on lists.
The second is a previously published approach based on quadrant trees.
The multi-dimensional approach is extended, and the use of a general multi-dimensional structure for sparse matrix storage is introduced in this paper.
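For comparison with the representations discussed above, a dictionary-of-keys store is perhaps the simplest possible sparse-matrix implementation; the class below is only a baseline sketch, not one of the three implementations evaluated in the paper.

    class DOKMatrix:
        # Minimal dictionary-of-keys sparse matrix: only non-zeros are stored.
        def __init__(self, rows, cols):
            self.shape = (rows, cols)
            self.data = {}                      # (row, col) -> value

        def __setitem__(self, key, value):
            if value != 0:
                self.data[key] = value
            else:
                self.data.pop(key, None)        # assigning zero removes the entry

        def __getitem__(self, key):
            return self.data.get(key, 0)

        def row(self, i):
            return {c: v for (r, c), v in self.data.items() if r == i}

    m = DOKMatrix(100000, 50000)
    m[3, 7] = 2.5
    print(m[3, 7], m[0, 0], m.row(3))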
This white paper provides a technical overview of the Red Hat Cluster Suite layered product.
The paper describes several of the software technologies used to provide high availability and provides outline hardware configurations.
The paper is suitable for people who have a general understanding of clustering technologies, such as those found in Microsoft Windows 2000 Advanced Server and Sun Cluster products.
this paper shows (Section 2) that Fibonacci series underlies fractal fluctuations on all space-time scales
This paper investigates the application of cotraining and self-training to word sense disambiguation.
Introduction Integral membrane proteins play a vital role in a number of essential biological functions.
Although membrane proteins are abundant -- about 30% of genes are known to code for them -- the number of solved structures in the PDB is less than 1%.
Thus, structure prediction of membrane proteins is an essential tool for understanding their functions.
A fundamental characteristic of the predicted structure is the topology -- identification of trans-membrane segments and the overall orientation with respect to the membrane (intra- or extra-cellular).
Several prediction methods have been developed for this purpose, both knowledge-based and residue hydrophobicity-based.
Although the performances of almost all of these methods are rather high, short loops and long helices are predicted less accurately [1].
One of the problems in estimating the accuracy of different prediction methods is the absence of experimentally reliable trans-membrane annotations to compare with.
Thus, one is forced to compare
this paper, we assume a global decision function is used in the belief change operations, and it will favor retaining the most preferred beliefs as determined by a preference ordering (≺) that is irreflexive, anti-symmetric and transitive (referred to here as an IAT-preference ordering)
This paper has been rewritten half a dozen times, and each time it has looked completely different.
The reader will have to bear with the present report as a fallible one of tentative progress to date.
The nearest-neighbor based document skew detection methods do not require the presence of a predominant text area, and are not subject to skew angle limitation.
However, the accuracy of these methods is not perfect in general.
In this paper, we present an improved nearest-neighbor based approach to perform accurate document skew estimation.
Size restriction is introduced to the detection of nearest-neighbor pairs.
Then the chains with the largest possible number of nearest-neighbor pairs are selected, and their slopes are computed to give the skew angle of the document image.
Experimental results on various types of documents containing different linguistic scripts and diverse layouts show that the proposed approach has achieved an improved accuracy for estimating document image skew angle and has an advantage of being language independent.
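A rough sketch of the nearest-neighbour idea follows: pair each connected component with its nearest neighbour, fold the connecting angles into (-90, 90], and take the most frequent one-degree bin as the skew estimate. The size restriction and chain-selection refinements described above are deliberately omitted, and the distance threshold is an arbitrary assumption.

    import math

    def estimate_skew(centroids, max_dist=50.0):
        # centroids: (x, y) centres of connected components (e.g. characters).
        angles = []
        for i, (x1, y1) in enumerate(centroids):
            best, best_d = None, float("inf")
            for j, (x2, y2) in enumerate(centroids):
                if i != j:
                    d = math.hypot(x2 - x1, y2 - y1)
                    if d < best_d:
                        best, best_d = (x2, y2), d
            if best is not None and best_d <= max_dist:
                a = math.degrees(math.atan2(best[1] - y1, best[0] - x1))
                if a > 90:
                    a -= 180          # fold direction into (-90, 90]
                elif a <= -90:
                    a += 180
                angles.append(round(a))
        counts = {}
        for a in angles:
            counts[a] = counts.get(a, 0) + 1
        return max(counts, key=counts.get) if counts else 0.0

    print(estimate_skew([(0, 0), (10, 1), (20, 2), (30, 3)]))   # about 6 degrees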
erform routing table lookups in parallel manner, control agents (CA) to handle routing table computation and QoS control tasks, and high-speed switch fabric.
By separating IP header analysis from the routing table lookups, the line cards become lightweight.
Furthermore, this separation allowed us to achieve better load-balancing and therefore higher performance.
The major contribution in this work is the IP Packet Distribution Approach, performed by LCs and RAs.
This approach satisfies two requirements: RAs are working in a load-balanced manner, and packets from the same flow are not reordered.
The algorithm distributes packets, when arriving at LCs, using the Enhanced Hash-based Distribution Algorithm (EHDA).
When a packet arrives, EHDA generates a hash value for each IP packet, based on its destination address and then EHDA uses indirect hashing (hash values are used as indices to a hash table) to obtain the ID number of each RA.
The content of the hash table is dynamically changed a
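The indirect-hashing idea can be pictured with the sketch below: packets with the same destination address always hash to the same bucket, and the bucket-to-RA table can be rewritten at run time to rebalance load without reordering a flow. The CRC32 hash and the 256-entry table are illustrative choices, not the EHDA specification.

    import zlib

    NUM_RAS = 4
    # Indirection table mapping hash buckets to routing-agent IDs.
    hash_table = [bucket % NUM_RAS for bucket in range(256)]

    def route_to_ra(dst_ip):
        # Same destination address -> same bucket -> same RA, so no reordering.
        bucket = zlib.crc32(dst_ip.encode()) & 0xFF
        return hash_table[bucket]

    print(route_to_ra("192.168.1.7"), route_to_ra("192.168.1.7"), route_to_ra("10.0.0.1"))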
This position paper presents an approach for predicting functional and extra-functional properties of layered software component architectures.
Our approach is based on parameterised contracts, a generalisation of design-by-contract.
The main contributions of the paper are twofold.
Firstly, it attempts to clarify the meaning of "contractual use of components", a term sometimes used loosely -- or even inconsistently -- in the current literature.
Secondly, we demonstrate how to deploy parameterised contracts to predict properties of component architectures with non-cyclic dependencies.
New models and strategies have been on trial to take advantage of emerging information and communication technologies over the last decades.
Among these, a particular group of technologies impacts the way time and space constraints are now considered.
Additionally, the information and knowledge society requires new skills from both the professional and the learner.
In particular, considering a higher education context, the need to deal with change, innovation and evolving models of competition and collaboration brings new challenges.
Although higher education keeps a traditional background of sharing ideas, experimentation and reflection about the impact and future applications of available knowledge, it lacks the ability to embed within its own practices both its work and ideas as well as efforts from its community.
Presential teaching, organisational structures, administrative processes, curricula organisation and knowledge-sharing strategies are now put under pressure by an increasing number of higher education newcomers who fail to adhere to the current status and to learn the skills that the so-called information and knowledge society may require.
A huge challenge is in place, based on a transition from processes to information-based activities, from an individual approach to a collaborative one, and from knowledge-oriented learning to skill-oriented learning.
It seems that the network, both for individuals and organisations and also for organising learning in higher education, is a central concept: connecting people and sharing knowledge, not efforts. The use of virtuality, considered here as the dematerialisation of learning settings and experiences, provides the opportunity to cope with time and space constraints and to innovate both on practices and on what individuals need to know-h...
This thesis tries to answer the question how to predict the reaction of the stock market to news articles using the latest suitable developments in Natural Language Processing.
This is done using text classification where a new article is matched to a category of articles which have a certain influence on the stock price.
The thesis first discusses why analysis of news articles is a feasible approach to predicting the stock market and why analysis of past prices should not be built upon.
From related work in this domain two main design choices are extracted; what to take as features for news articles and how to couple them with the changes in stock price.
This thesis then suggests which different features it is possible to extract from articles, resulting in a template for features which can deal with negation, favorability, and abstracts from companies, and uses domain knowledge and synonyms for generalization.
To couple the features to changes in stock price, a survey is given of several text classification techniques, from which it is concluded that Support Vector Machines are very suitable for the domain of stock prices and extensive features.
The system has been tested with a unique data set of news articles for which results are reported that are significantly better than random.
The results improve even more when only headlines of news articles are taken into account.
Because the system is only tested with closing prices it cannot be concluded that it will work in practice, but this can easily be tested if stock prices during the day are available.
The main suggestions for future work are to test the system with this data and to improve the filling of the template so it can also be used in other areas of favorability analysis or maybe even to extract interesting information o...
...
This paper presents a survey of programming languages and systems for prototyping concurrent applications to review the state of the art in this area.
The surveyed approaches are classified with respect to the prototyping process
CACTI 3.0 is an integrated cache access time, cycle time, area, aspect ratio, and power model.
By integrating all these models together users can have confidence that tradeoffs between time, power, and area are all based on the same assumptions and hence are mutually consistent.
CACTI is intended for use by computer architects so they can better understand the performance tradeoffs inherent in different cache sizes and organizations.
INTRODUCTION The current ATC system is centred on airspace.
The airspace is divided into several sectors, whose size depends on the number of aircraft in the region and the geometry of the air routes.
There are usually two air traffic controllers in each sector handling the traffic: a planning controller and an executive controller.
The planner works at a strategic level and tries to minimise the number of conflicts or their complexity.
The executive controller works at a tactical level and ensures... (Manuscript received 20 September 2002.)
Minh NGUYEN-DUC, Laboratoire d'Informatique de Paris 6 (LIP6), Université de Paris 6, Paris, France, minh.nguyen-duc@lip6.fr; Institut de la Francophonie pour l'Informatique (IFI), Hanoi, Vietnam, ndminh@ifi.edu.vn.
Vu DUONG, EUROCONTROL Experimental Centre, Brétigny, France, vu.duong@eurocontrol.int.
Jean-Pierre BRIOT, Laboratoire d'Informatique de Paris 6 (LIP6), Université de Paris 6, Paris, France, jean-p
We present the first complete, exact and efficient C++ implementation of a method for parameterizing the intersection of two implicit quadrics with integer coefficients of arbitrary size.
It is based on the near-optimal algorithm recently introduced by Dupont et al.
[4].
Unlike
We present the agent programming language GTGolog, which integrates explicit agent programming in Golog with game-theoretic multi-agent planning in Markov games.
It is a generalization of DTGolog to a multi-agent setting, where we have two competing single agents or two competing teams of agents.
The language allows for specifying a control program for a single agent or a team of agents in a high-level logical language.
The control program is then completed by an interpreter in an optimal way against another single agent or another team of agents, by viewing it as a generalization of a Markov game, and computing a Nash strategy.
We illustrate the usefulness of this approach along a robotic soccer example.
We also report on a first prototype implementation of a simple GTGolog interpreter.
The purpose of the present study was to investigate teachers' declarative metacognitive knowledge of higher order thinking skills.
This was a qualitative study conducted within the educational setting of in-service science teachers' courses.
The main finding is that teachers' intuitive (i.e., pre-instructional) knowledge of metacognition of thinking skills is unsatisfactory for the purpose of teaching higher order thinking in science classrooms.
A general practical implication of this study is that courses which prepare teachers for instruction of higher order thinking should address extensively the issue of metacognition of thinking skills.
© 1999 Elsevier Science Ltd. All rights reserved.
This report describes the interface and implementation of the Virtuoso system.
It is also a user manual for those who wish to try Virtuoso
The availability of large, inexpensive memory has made it possible to realize numerical functions, such as the reciprocal, square root, and trigonometric functions, using a lookup table.
This is much faster than by software.
However, a naive look-up method requires unreasonably large memory.
In this paper, we show the use of a look-up table (LUT) cascade to realize a piecewise linear approximation to the given function.
Our approach yields memory of reasonable size and significant accuracy.
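To make the piecewise-linear idea concrete, the sketch below approximates 1/x on [1, 2) with a small table of (slope, intercept) pairs, so evaluation is one table lookup plus one multiply-add. The segment count and target function are arbitrary illustrative choices, and the hardware LUT-cascade realisation is of course not modelled here.

    SEGMENTS = 64                      # number of table entries (an assumption)
    LO, HI = 1.0, 2.0                  # approximate 1/x on [1, 2)
    STEP = (HI - LO) / SEGMENTS

    # Precompute one (slope, intercept) pair per segment.
    table = []
    for i in range(SEGMENTS):
        x0, x1 = LO + i * STEP, LO + (i + 1) * STEP
        slope = (1.0 / x1 - 1.0 / x0) / (x1 - x0)
        table.append((slope, 1.0 / x0 - slope * x0))

    def reciprocal(x):
        slope, intercept = table[min(int((x - LO) / STEP), SEGMENTS - 1)]
        return slope * x + intercept

    xs = [LO + k * 1e-4 for k in range(int((HI - LO) / 1e-4))]
    print(max(abs(reciprocal(x) - 1.0 / x) for x in xs))   # error shrinks as SEGMENTS grows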
Parameters of statistical distributions that are input to simulations are typically not known with certainty.
For existing systems, or variations on existing systems, they are often estimated from field data.
Even if the mean of simulation output were estimable exactly as a function of input parameters, there may still be uncertainty about the output mean because inputs are not known precisely.
This paper considers the problem of deciding how to allocate resources for additional data collection so that input uncertainty is reduced in a way that effectively reduces uncertainty about the output mean.
The optimal solution to the problem in full generality appears to be quite challenging.
Here, we simplify the problem with asymptotic approximations in order to provide closed-form sampling plans for additional data collection activities.
The ideas are illustrated with a simulation of a critical care facility.
The dawning of the 21st century has seen unprecedented growth in the number of wireless users, applications, and network access technologies.
This trend is enabling the vision of pervasive, ubiquitous computing where users have network access anytime, anywhere, and applications are location-sensitive and contextaware.
To realize this vision, we need to extend network connectivity beyond private networks, such as corporate and university networks, into public spaces like airports, malls, hotels, parks, arenas, etc.
-- those places where individuals spend a considerable amount of their time outside of private networks.
In this paper we study communication networks that employ drop-tail queueing and Additive-Increase Multiplicative-Decrease (AIMD) congestion control algorithms.
We show that the theory of nonnegative matrices may be employed to model such networks.
In particular, we show that important network properties such as: (i) fairness; (ii) rate of convergence; and (iii) throughput; can be characterised by certain non-negative matrices that arise in the study of AIMD networks.
We demonstrate that these results can be used to develop tools for analysing the behaviour of AIMD communication networks.
The accuracy of the models is demonstrated by means of several NS-studies.
Semantic file systems enable users to search for files based on attributes rather than just pre-assigned names.
This paper develops and evaluates several new approaches to automatically generating file attributes based on context, complementing existing approaches based on content analysis.
Context captures broader system state that can be used to provide new attributes for files, and to propagate attributes among related files; context is also how humans often remember previous items [2], and so should fit the primary role of semantic file systems well.
Based on our study of ten systems over four months, the addition of context-based mechanisms, on average, reduces the number of files with zero attributes by 73%.
This increases the total number of classifiable files by over 25% in most cases, as is shown in Figure 1.
Also, on average, 71% of the content-analyzable files also gain additional valuable attributes.
In this paper we introduce a pattern classification system to recognize words of minimal length in their automorphic orbits in free groups.
This system is based on Support Vector Machines and does not use any particular results from group theory.
The main advantage of the system is its stable performance in recognizing minimal elements in free groups with large ranks.
M/G/1 queues, where G is a heavy-tailed distribution, have applications in Internet modeling and modeling for insurance claim risk.
The Pareto distribution is a special heavy-tailed distribution called a power-tailed distribution, and has been found to serve as an adequate model for many of these situations.
However, to get the waiting time distribution, one must resort to numerical methods, e.g., simulation.
Many difficulties arise in simulating queues with Pareto service and we investigate why this may be so.
Even if we are willing to consider truncated Pareto service, there still can be problems in simulating if the truncation point (maximum service time possible) is too large.
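To make the setting concrete, waiting times of an M/G/1 queue can be generated with the Lindley recursion; the sketch below illustrates the kind of simulation whose convergence the heavy Pareto tail makes painfully slow. The parameter values are chosen only so that the queue is stable and are not taken from the paper.

    import random

    def pareto_service(alpha, xm):
        # Inverse-CDF sampling of a Pareto(alpha, xm) service time.
        return xm * (1.0 - random.random()) ** (-1.0 / alpha)

    def simulate_waits(lam, alpha, xm, n, seed=1):
        # Lindley recursion: W_{k+1} = max(0, W_k + S_k - A_{k+1}).
        random.seed(seed)
        w, waits = 0.0, []
        for _ in range(n):
            waits.append(w)
            w = max(0.0, w + pareto_service(alpha, xm) - random.expovariate(lam))
        return waits

    waits = simulate_waits(lam=0.5, alpha=2.5, xm=1.0, n=100000)
    print(sum(waits) / len(waits))   # the sample mean converges slowly under heavy tails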
The importance of semiconductor wafer fabrication has been increasing steadily over the past decade.
Wafer fabrication is the most technologically complex and capital intensive phase in semiconductor manufacturing.
It involves the processing of wafers of silicon in order to build up layers and patterns of metal and wafer material.
Many operations have to be performed in a clean room environment to prevent particulate contamination of wafers.
Also, since the machines on which the wafers are processed are expensive, service contention is an important concern.
All these factors underline the importance of seeking policies to design and operate them efficiently.
We describe a simulation model of a planned 300mm wafer fabrication line that we are using to make strategic decisions related to the factory.
this paper is to show how a security specification written in a generic and flexible security language (SPL) can be checked for inconsistencies with a workflow specification written in an off-the-shelf workflow process definition language, namely WPDL (Workflow Process Definition language) [8]
Dynamic power control and scheduling strategies provide efficient mechanisms for improving performance of wireless communications networks.
A common objective is to maximize throughput performance of a network or to minimize the total transmission power while satisfying quality-of-service (QoS) requirements of the users.
The achievement of these objectives requires the development of medium access control (MAC) strategies that optimally utilize scarce resources in wireless networks.
When developing such strategies, a good understanding of the structure of the feasibility region is essential.
The feasibility region is defined as a set of all QoS requirements that can be supported by a network with all users active concurrently.
Thus, the structure of this set shows when (if at all) scheduling strategies can improve network performance.
In particular, if the feasibility region is a convex set, then concurrent transmission strategies are optimal and the optimal power allocation can be obtained efficiently via a convex optimization.
Other important problems are how the total transmission power depends on QoS requirements and what the optimal QoS tradeoff is.
In this paper, we address all these problems and solve them completely in some important cases.
The purpose of this paper is to explore the interrelationship between QoS requirements and physical quantities such as transmission power.
Although the results are obtained in the context of a power-controlled CDMA system, they also apply to some other communications systems.
A key assumption is that there is a monotone relationship between a QoS parameter of interest (such as data rate) and the signal-to-interference ratio at the output of a linear receiver.
This paper presents a high-level language for expressing image processing algorithms, and an optimizing compiler that targets FPGAs.
The language is called SA-C, and this paper focuses on the language features that 1) support image processing, and 2) enable efficient compilation to FPGAs.
It then describes the compilation process, in which SA-C algorithms are translated into non-recursive data flow graphs, which in turn are translated into VHDL.
Finally, it presents performance numbers for some wellknown image processing routines, written in SAC and automatically compiled to an Annapolis Microsystems WildForce board with Xilinx 4036XL FPGAs.
In this paper, we investigate the problem of distributively allocating transmission rates to users on the Internet.
We allow users to have concave as well as sigmoidal utility functions that are natural in the context of various applications.
In the literature, for simplicity, most works have dealt only with the concave case.
However, we show that when applying rate control algorithms developed for concave utility functions in a more realistic setting (with both concave and sigmoidal types of utility functions), they could lead to instability and high network congestion.
We show that a pricing based mechanism that solves the dual formulation can be developed based on the theory of subdifferentials with the property that the prices "self-regulate" the users to access the resource based on the net utility.
We discuss convergence issues and show that an algorithm can be developed that is efficient in the sense of achieving the global optimum when there are many users.
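A toy version of such a pricing mechanism is sketched below: users respond to a common price by maximising net utility over a rate grid, and the price follows a sub-gradient step on the capacity constraint. The utility shapes, capacity and step size are illustrative assumptions, not the algorithm analysed in the paper.

    import math

    RATES = [i / 50.0 for i in range(1, 501)]          # candidate rates in (0, 10]

    def demand(price, kind):
        # Each user picks the rate maximising U(x) - price * x on the grid.
        if kind == "concave":
            utility = lambda x: math.log(1.0 + x)
        else:                                          # sigmoidal utility
            utility = lambda x: 1.0 / (1.0 + math.exp(-(x - 5.0)))
        return max(RATES, key=lambda x: utility(x) - price * x)

    users = ["concave"] * 3 + ["sigmoid"] * 2
    capacity, price, step = 12.0, 0.5, 0.01
    for _ in range(300):                               # dual (price) iteration
        total = sum(demand(price, kind) for kind in users)
        price = max(0.0, price + step * (total - capacity))   # sub-gradient update
    print(round(price, 3), [demand(price, kind) for kind in users])

Note that with sigmoidal users the demand can jump discontinuously as the price changes, which is essentially the instability issue the paper raises for algorithms designed under purely concave assumptions.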
In this paper, a spread-spectrum-like discrete cosine transform domain (DCT domain) watermarking technique for copyright protection of still digital images is analyzed.
The DCT is applied in blocks of 8 × 8 pixels as in the JPEG algorithm.
The watermark can encode information to track illegal misuses.
For flexibility purposes, the original image is not necessary during the ownership verification process, so it must be modeled by noise.
Two tests are involved in the ownership verification stage: watermark decoding, in which the message carried by the watermark is extracted, and watermark detection, which decides whether a given image contains a watermark generated with a certain key.
We apply generalized Gaussian distributions to statistically model the DCT coefficients of the original image and show how the resulting detector structures lead to considerable improvements in performance with respect to the correlation receiver, which has been widely considered in the literature and makes use of the Gaussian noise assumption.
As a result of our work, analytical expressions for performance measures such as the probability of error in watermark decoding and probabilities of false alarm and detection in watermark detection are derived and contrasted with experimental results.
The paper considers various relaying strategies for wireless networks.
We comparatively discuss and analyse direct transmission, conventional "multihop" relaying, and the novel concepts of cooperative relaying from the viewpoint of system level performance.
While conventional relaying exploits pathloss savings, cooperative relaying additionally takes two inherent advantages of relay-based systems into account: the ability to exploit the broadcast nature of the wireless medium, and the diversity offered by the relay channel.
Following a description of these concepts, we analyse the performance of such systems in an exemplary manner for power-controlled cellular and ad hoc CDMA systems.
The resulting power savings and capacity improvements suggest that cooperative relaying may constitute an interesting candidate for future cellular and ad hoc network architectures.
We describe a simple, computationally light, real-time system for tracking the lower face and extracting information about the shape of the open mouth from a video sequence.
The system allows unencumbered control of audio synthesis modules by action of the mouth.
We report work in progress to use the mouth controller to interact with a physical model of sound production by the avian syrinx.
Benoît Garbinato, Fernando Pedone+, Rodrigo Schmidt+. Université de Lausanne, CH-1015 Lausanne, Switzerland. Phone: +41 21 692 3409, Fax: +41 21 692 3405, E-mail: benoit.garbinato@unil.ch. +Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland. Phone: +41 21 693 4797, Fax: +41 21 693 6600, E-mail: {fernando.pedone, rodrigo.schmidt}@epfl.ch.
1. Dynamic Distributed Systems. With the emergence of a mobile and large-scale Internet, highly dynamic distributed systems are becoming increasingly important.
Examples of this growing importance can be found in recent researches in largescale peer-to-peer protocols [1, 3], as well as in ad hoc network technologies [4].
We study the fundamental limitations of relational algebra (RA) and SQL in supporting sequence and stream queries, and present effective query language and data model enrichments to deal with them.
We begin by observing the well-known limitations of SQL in application domains which are important for data streams, such as sequence queries and data mining.
Then we present a formal proof that, for continuous queries on data streams, SQL suffers from additional expressive power problems.
We begin by focusing on the notion of nonblocking (NB) queries that are the only continuous queries that can be supported on data streams.
We characterize the notion of nonblocking queries by showing that they are equivalent to monotonic queries.
Therefore the notion of NB-completeness for RA can be formalized as its ability to express all monotonic queries expressible in RA using only the monotonic operators of RA.
We show that RA is not NB-complete, and SQL is not more powerful than RA for monotonic queries.
Sensor Positioning is a fundamental and crucial issue for sensor network operation and management.
In the paper, we first study some situations where most existing sensor positioning methods tend to fail to perform well, an example being when the topology of a sensor network is anisotropic.
Then, we explore the idea of using dimensionality reduction techniques to estimate sensors coordinates in two (or three) dimensional space, and we propose a distributed sensor positioning method based on multidimensional scaling technique to deal with these challenging conditions.
Multidimensional scaling and coordinate alignment techniques are applied to recover positions of adjacent sensors.
The estimated positions of the anchors are compared with their true physical positions and corrected; the positions of other sensors are corrected accordingly.
With iterative adjustment, our method can overcome adverse network and terrain conditions and generate accurate sensor positions.
We also propose an on demand sensor positioning method based on the above method.
The performance of global Internet communication is significantly influenced by the reliability and the stability of Internet routing systems, especially the Border Gateway Protocol (BGP), the de facto standard for inter-domain routing.
In this paper, we investigate the reliability of BGP sessions and the Internal BGP (IBGP) networks in the environment of unreliable physical and routing layers.
A new, exemplar-based, probabilistic paradigm for visual tracking is presented.
Probabilistic mechanisms are attractive because they handle fusion of information, especially temporal fusion, in a principled manner.
Exemplars are selected representatives of raw training data, used here to represent probabilistic mixture distributions of object configurations.
Their use avoids tedious hand-construction of object models, and problems with changes of topology.
Using exemplars
In this paper, we introduce an analysis of the requirements and design choices for hands-free documentation.
Hands-busy tasks such as cooking or car repair may require substantial interruption of the task: moving the pan off the burner and wiping hands, or crawling out from underneath the car.
We review the need for hands-free documentation and explore the role of task in the use of documentation.
Our central analysis examines the roles and characteristics of input and output modalities of hands-free documentation.
In particular, we review the use of speech as an input modality, and then visual means and speech as possible output modalities.
Finally, we discuss the implications of our analysis for the design of hands-free documentation and suggest future work.
The design implications include issues of navigating through the documentation, determining the user's task and taskstep, establishing mutual understanding of the state of the task, and determining when to start conveying information to the user.
This paper proposes an energy-efficient hardware acceleration architecture for the variable N-point 1D Discrete Cosine Transform (DCT) that can be leveraged if implementing MPEG-4's Shape Adaptive DCT (SA-DCT) tool.
The SA-DCT algorithm was originally formulated in response to the MPEG-4 requirement for object based texture coding, and is one of the most computationally demanding blocks in an MPEG-4 video codec.
Therefore energy-efficient implementations are important - especially on battery powered wireless platforms.
This N-point 1D DCT architecture employs a re-configurable distributed arithmetic data path and clock gating to reduce power consumption.
Shortest path algorithms have been used in a number of applications such as crack detection, road or linear feature extraction in images.
There are applications where the starting and ending positions of the shortest path need to be constrained.
In this paper, we present several new algorithms for the extraction of a circular shortest path in an image such that the starting and ending positions coincide.
The new algorithms we developed include multiple search algorithm, image patching algorithm, multiple backtracking algorithm, the combination of image patching and multiple back-tracking algorithm, and approximate algorithm.
The typical running time of our circular shortest path extraction algorithm on a 256 × 256 image is about 0.3 seconds on a rather slow 85 MHz Sun SPARC computer.
A variety of real images for crack detection in borehole data, object boundary extraction, and panoramic stereo matching have been tested and good results have been obtained.
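The brute-force "multiple search" baseline can be sketched as follows: run a column-to-column dynamic program once for each candidate start row and keep the cheapest path whose start and end rows coincide. The cost grid, 8-connectivity between columns and small example are assumptions for illustration; the faster variants listed above are not reproduced here.

    def circular_shortest_path(cost):
        # cost[row][col]: per-pixel cost; paths run left to right.
        rows, cols = len(cost), len(cost[0])
        best_total, best_row = float("inf"), -1
        for start in range(rows):
            INF = float("inf")
            d = [INF] * rows
            d[start] = cost[start][0]
            for c in range(1, cols):
                nd = [INF] * rows
                for r in range(rows):
                    for dr in (-1, 0, 1):                    # 8-connected column transitions
                        pr = r + dr
                        if 0 <= pr < rows and d[pr] + cost[r][c] < nd[r]:
                            nd[r] = d[pr] + cost[r][c]
                d = nd
            if d[start] < best_total:                        # circular constraint: end row = start row
                best_total, best_row = d[start], start
        return best_total, best_row

    grid = [[3, 1, 4, 1], [1, 5, 9, 2], [2, 6, 5, 3]]
    print(circular_shortest_path(grid))                      # (8, 1) for this toy grid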
We consider the problem of segmenting multiple rigid motions from point correspondences in multiple affine views.
We cast this problem as a subspace clustering problem in which the motion of each object lives in a subspace of dimension two, three or four.
Unlike previous work, we do not restrict the motion subspaces to be four-dimensional or linearly independent.
Instead, our approach deals gracefully with all the spectrum of possible affine motions: from twodimensional and partially dependent to four-dimensional and fully independent.
In addition, our method handles the case of missing data, meaning that point tracks do not have to be visible in all images.
Our approach involves projecting the point trajectories of all the points into a 5-dimensional space, using the PowerFactorization method to fill in missing data.
Then multiple linear subspaces representing independent motions are fitted to the points using GPCA.
We test our algorithm on various real sequences with degenerate and nondegenerate motions, missing data, perspective effects, transparent motions, etc.
Our algorithm achieves a misclassification error of less than 5% for sequences with up to 30% of missing data points.
Abduction is usually defined in terms of classical logical consequence.
In this paper we substitute this 'inferential parameter' by the notion of strict implication.
By doing so we hope to put more of the intended meaning of the abductive explanative relation into the background theory.
With ever shrinking geometries, growing metal density and increasing clock rate on chips, delay testing is becoming a necessity in industry to maintain test quality for speed-related failures.
The purpose of delay testing is to verify that the circuit operates correctly at the rated speed.
However, functional tests for delay defects are usually unacceptable for large scale designs due to the prohibitive cost of functional test patterns and the difficulty in achieving very high fault coverage.
Scanbased delay testing, which could ensure a high delay fault coverage at reasonable development cost, provides a good alternative to the at-speed functional test.
In this paper, we describe an experience of dependability assessment of a typical industrial Programmable Logic Controller (PLC).
The PLC is based on a two out of three voting policy and it is intended to be used for safety functions.
Safety assessment of computer based systems performing safety functions is regulated by standards and guidelines.
In all of them there is a common agreement that no single method can be considered sufficient to achieve and assess safety.
The paper addresses the PLC assessment by probabilistic methods to determine its dependability attributes related to Safety Integrity Levels as defined by IEC61508 standard.
The assessment has been carried out by independent teams, starting from the same basic assumptions and data.
Diverse combinatorial and state space probabilistic modelling techniques, implemented by public tools, have been used.
Even though the isolation of the teams was not formally guaranteed, the experience has revealed several topics worth describing.
First of all, the usage of different modelling techniques has led to diverse models.
Moreover, the models focus on different system details, also due to the teams' diverse skills.
Slight differences in understanding the PLC assumptions have also occurred.
In spite of all this, the numerical results of the diverse models are comparable.
The experience has also allowed a comparison of the different modelling techniques as implemented by the considered public tools.
The paper analyzes SCEP, the Simple Certificate Enrollment Procedure, a two-way communication protocol to manage the secure emission of digital certificates to network devices.
The protocol provides a consistent method of requesting and receiving certificates from different Certification Authorities by offering an open and scalable solution for deploying certificates which can be beneficial to all network devices and IPSEC software solutions.
We formally analyze SCEP through a software tool for the automatic analysis of cryptographic protocols able to discover, at a conceptual level, attacks against security procedures.
es, enforcing a politeness policy as described in Section ??
: a Web crawler should not download more than one page from a single Web site at a time, and it should wait several seconds between requests.
[Figure 6.1: Two unrealistic scenarios for Web crawling: (a) parallelizing all page downloads and (b) serializing all page downloads. The areas represent page sizes, as size = speed × time.]
Instead of downloading all pages in parallel, we could also serialize all the requests, downloading only one page at a time at the maximum speed, as depicted in Figure 6.1b.
However, the bandwidth available for Web sites B_i is usually lower than the crawler bandwidth B, so this scenario is not realistic either.
The presented observations suggest that actual download time lines are similar to the one shown in Figure 6.2.
In the Figure, the optimal time T is not achieved, because some bandwidth is wasted due to limitations in the speed of Web sites
Household food security is an important measure of well-being.
Although it may not encapsulate all dimensions of poverty, the inability of households to obtain access to enough food for an active, healthy life is surely an important component of their poverty.
Accordingly, devising an appropriate measure of food security outcomes is useful in order to identify the food insecure, assess the severity of their food shortfall, characterize the nature of their insecurity (for example, seasonal versus chronic), predict who is most at risk of future hunger, monitor changes in circumstances, and assess the impact of interventions.
However, obtaining detailed data on food security status---such as 24-hour recall data on caloric intakes---can be time consuming and expensive and require a high level of technical skill both in data collection and analysis.
This paper examines whether an alternative indicator, dietary diversity, defined as the number of unique foods consumed over a given period of time, provides information on household food security.
It draws on data from 10 countries (India, the Philippines, Mozambique, Mexico, Bangladesh, Egypt, Mali, Malawi, Ghana, and Kenya) that encompass both poor and middle-income countries, rural and urban sectors, data collected in different seasons, and data on calories acquisition obtained using two different methods.
The paper uses linear regression techniques to investigate the magnitude of the association between dietary diversity and food security.
An appendix compiles the results of using methods such as correlation coefficients, contingency tables, and receiver operator curves.
We find that a 1 percent increase in dietary diversity is associated with a 1 percent increase in per capita consumption, a 0.7 percent increase in tota...
We study how students hedge and express affect when interacting with both humans and computer systems, during keyboard-mediated natural language tutoring sessions in medicine.
We found significant differences in such student behavior linked to whether the tutor was human or a computer.
Students hedge and apologize often to human tutors, but very rarely to computer tutors.
The type of expressions also differed---overt hostility was not encountered in human tutoring sessions, but was a major component in computer-tutored sessions.
Little gender-linking of hedging behavior was found, contrary to expectations based on prior studies.
A weak gender-linked effect was found for affect in human tutored sessions.
Parallel discrete event simulation (PDES) decreases a simulation's runtime by splitting the simulation's work between multiple processors.
Many users avoid PDES because it is difficult to specify a large and complicated model using existing PDES tools.
In this paper we describe how the ParaSol PDES system uses migrating user level threads to support the process interaction world view.
The process interaction world view is popular in sequential simulation languages and is a major departure form the logical process view supported by most PDES systems.
Kress, W. 2003. High Order Finite Difference Methods in Space and Time. Acta Universitatis Upsaliensis. Comprehensive Summaries of Uppsala Dissertations from the Faculty of Science and Technology 880. 28 pp. Uppsala. ISBN 91-554-5721-5.
In this thesis, high order accurate discretization schemes for partial differential equations are investigated.
A critical issue in advanced technology product development is assessing economic feasibility based on the potential for commercial success.
This is particularly difficult for an environmental product that has intangible benefits such as reduced air emissions.
Corporate confidentiality compounds this problem since many of the target customers of the new product do not allow product developers to access important process, cost, and environmental operating information.
With the proliferation of wireless communications and geo-positioning, e-services are envisioned that exploit the positions of a set of continuously moving users to provide context-aware functionality to each individual user.
Because advances in disk capacities continue to outperform Moore's Law, it becomes increasingly feasible to store on-line all the position information obtained from the moving e-service users.
With the much slower advances in I/O speeds and many concurrent users, indexing techniques are of essence in this scenario.
Past
Previous approaches to timestamping temporal data have implicitly assumed that transactions have no duration.
In this paper we identify several situations where a sequence of operations over time within a single transaction can violate ACID properties.
It has been
Cellular automata (CA) are considered an abstract model of fine-grain parallelism, in that the elementary operations executed at each node are rather simple and hence comparable to the most elementary operations in the computer hardware.
In a classical cellular automaton, all the nodes execute their operations in a truly concurrent manner: the state of node x_i at time step t+1 is some simple function of the states of node x_i and a set of its pre-specified neighbors at time t. We consider herewith the sequential version of CA, or SCA, and compare it with the classical, parallel (meaning, truly concurrent) CA.
In particular, we show that there are 1-D CA with very simple node state update rules that cannot be simulated by any comparable SCA, irrespective of the node update ordering.
We argue that, while the CA and SCA we consider are very simple, the difference in dynamic behaviors (or, equivalently, the computation properties) is rather fundamental.
Hence, perhaps the granularity of basic CA operations, insofar as the ability to interpret their concurrent computation via an appropriate nondeterministic sequential interleaving semantics, is not fine enough - namely, we prove that no such sequential interleaving semantics can capture even rather simplistic concurrent CA computations.
We also share some thoughts on how to extend our early results, and, in particular, motivate introduction and study of asynchronous cellular automata.
We present algebraic laws for a language similar to a subset of sequential Java that includes inheritance, recursive classes, dynamic binding, access control, type tests and casts, assignment, but no sharing.
These laws are proved sound with respect to a weakest precondition semantics.
We also show that they are complete in the sense that they are sufficient to reduce an arbitrary program to a normal form substantially close to an imperative program; the remaining object-oriented constructs could be further eliminated if our language had recursive records.
This suggests that our laws are expressive enough to formally derive behaviour preserving program transformations; we illustrate that through the derivation of provably-correct refactorings.
In this note, we show how certain properties of Goldbeter's 1995 model for circadian oscillations can be proved mathematically, using techniques from the recently developed theory of monotone systems with inputs and outputs.
The theory establishes global asymptotic stability, and in particular no oscillations, if the rate of transcription is somewhat smaller than that assumed by Goldbeter.
This stability persists even under arbitrary delays in the feedback loop.
Assessing the life-cycle impacts of operations and maintenance decisions made for new or aging systems requires an accurate ability to measure and respond to uncertainty.
Maintenance and parts requirements forecasts for fielded military systems are traditionally performed through historical repair and supply demand models.
These models work well once several years of steady state weapon system operation has been accomplished, but tend to depend on a stable and somewhat regular operations and support structure.
Predictions based on data that capture the cyclic trends that tend to occur as the fleet endures standard operations, scheduled maintenance, and average component failure rates work best when components are relatively new.
Aging systems comprised of component populations of varying ages can be adversely affected by change or the failure to change the traditional maintenance and support concepts.
The right action for a new system may result in adverse impacts when considering older systems.
A course titled "Process Design and Improvement -- Computer Based Tools" was developed and offered by the authors in Fall 2000 and 2001 for part-time graduate students in Manufacturing Systems Engineering and Technology Management programs at the University of St. Thomas, Minnesota, <www.stthomas.edu/technology/2001Fall/MMSE 850-13-F01.htm>.
The objective of the course is to introduce students to the current software and methods used to organize data and model manufacturing and industrial systems through virtual representation of business operations choosing problems from their workplaces.
The course was created to make the complex processes and tools of computer modeling more accessible to non-specialists for a better understanding of how their operations work.
It is not unusual that people know only a small part of their overall system.
The course gives them a way to see the big picture.
A case study illustrates the application of these tools.
The dampening of low-frequency inter-area oscillations using Power System Stabilizers (PSS) may require remote stabilizing signals.
In this case, delays are associated with the signal transmission.
In this paper, several telecommunication schemes are investigated and critical communication delays are determined for a two-area four-generator (2A4G) power system, widely used in the literature.
OPNET Modeler, a discrete event simulator, is used to characterize those delays and also the number of packets dropped between each node.
This information about the network delays is then used to study the performance of the 2A4G system in the presence of non-ideal communications.
In neuro-fuzzy approaches different membership functions are used for modeling the system's rule set.
Two wellknown membership function types are triangle functions and trapezoid functions.
In our contribution we demonstrate that trapezoid functions with larger core regions are the more appropriate functions for calculating the membership degrees within neuro-fuzzy systems.
If regions of the data of different classes are highly overlapping or if the data is noisy, the values of the membership degrees could be misleading with respect to rule confidence if the core region is modeled too small.
In fact, we show that data regions with a high membership degree need not be the regions with a high rule confidence.
This effect, which we call membership unrobustness, is discussed.
We give preliminary benchmark examples and show how this effect influenced our recent work of analysing septic shock patient data.
KEYWORDS: neuro-fuzzy system, membership function, core region, rule confidence, robustness, medical data
this paper.
The list would start with my three predecessors as director general, but then it becomes too vast to include even a part of it here
A computer vision software library is a key component of vision-based applications.
While there are several existing libraries, most are large and complex or limited to a particular hardware/platform combination.
These factors tend to impede the development of research applications, especially for non-computer vision experts.
In order to get more people to use simulation, improved teaching of simulation is important.
In this context, textbooks and, more generally, teachware play a critical role.
The panel looks at some of the older and successful textbooks as well as textbooks and teachware that are quite new and in some cases are still under development.
Monitoring and information services form a key component of a distributed system, or Grid.
A quantitative study of such services can aid in understanding the performance limitations, advise in the deployment of the monitoring system, and help evaluate future development work.
To this end, we examined the performance of the Globus Toolkit Monitoring and Discovery Service (MDS2) by instrumenting its main services using NetLogger.
Our study shows a strong advantage to caching or prefetching the data, as well as the need to have primary components at well-connected sites.
A skew angle estimation approach based on the application of a fuzzy directional runlength is proposed for complex address images.
The proposed technique was tested on a variety of USPS parcel images including both machine print and handwritten addresses.
The testing results showed a success rate of more than 90% on the test set.
Slow convergence in the Internet can be directly attributed to the path exploration phenomenon, inherent in all path vector protocols.
The root cause for path exploration is the dependency among paths propagated through the network.
Addressing this problem in BGP is particularly difficult as the AS paths exchanged between BGP routers are highly summarized.
In this paper, we describe why path exploration cannot be countered effectively within the existing BGP framework, and propose a simple, novel mechanism---forward edge sequence numbers--- to annotate the AS paths with additional "path dependency" information.
Then, we develop an enhanced path vector algorithm, EPIC, which can be shown to limit path exploration and lead to faster convergence.
In contrast to other solutions, ours is shown to be correct on a very general model of Internet topology and BGP operation.
Using theoretical analysis and simulations, we demonstrate that EPIC can achieve a dramatic improvement in routing convergence, compared to BGP and other existing solutions.
One way to combat P2P file sharing of copyrighted content is to deposit into the file sharing systems large volumes of polluted files.
Without taking sides in the file sharing debate, in this paper we undertake a measurement study of the nature and magnitude of pollution in KaZaA, currently the most popular P2P file sharing system.
We develop a crawling platform which crawls the majority of the KaZaA 20,000+ supernodes in less than 60 minutes.
From the raw data gathered by the crawler for popular audio content, we obtain statistics on the number of unique versions and copies available in a 24-hour period.
We develop an automated procedure to detect whether a given version is polluted or not, and we show that the probabilities of false positives and negatives of the detection procedure are very small.
We use the data from the crawler and our pollution detection algorithm to determine the fraction of versions and fraction of copies that are polluted for several recent and old songs.
We observe that pollution is pervasive for recent popular songs.
We also identify and describe a number of anti-pollution mechanisms.
Policies of devolving management of resources from the state to user groups are premised upon the assumption that users will organize and take on the necessary management tasks.
While experience has shown that in many places users do so and are very capable, expansion of co-management programs beyond initial pilot sites often shows that this does not happen everywhere.
Yet, much is at stake in this, with more widespread adoption of irrigation management transfers and other forms of community-based resource management.
It is therefore important to move beyond isolated case studies to comparative analysis of the conditions for collective action.
K-Means clustering is a well-known tool in unsupervised learning.
The performance of K-Means clustering, measured by the F-ratio validity index, highly depends on selection of its initial partition.
This problematic dependency always leads to a locally optimal solution for k-center clustering.
To overcome this difficulty, we present an intuitive approach that iteratively incorporates Fisher discriminant analysis into the conventional K-Means clustering algorithm.
In other words, at each time, a suboptimal initial partition for K-Means clustering is estimated by using dynamic programming in the discriminant subspace of input data.
Experimental results show that the proposed algorithm outperforms the two comparative clustering algorithms, the PCA-based suboptimal K-Means clustering algorithm and the kd-tree based K-Means clustering algorithm.
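Assuming the standard definition of the F-ratio index as within-cluster scatter divided by between-cluster scatter (the paper may use a normalized variant), a minimal sketch of scoring a K-Means partition with it, using scikit-learn and toy data, could look like this; the discriminant-based initialization itself is not shown.

    import numpy as np
    from sklearn.cluster import KMeans

    def f_ratio(X, labels, centers):
        # Within-cluster scatter divided by between-cluster scatter (lower is better).
        overall_mean = X.mean(axis=0)
        within = sum(((X[labels == k] - centers[k]) ** 2).sum() for k in range(len(centers)))
        between = sum((labels == k).sum() * ((centers[k] - overall_mean) ** 2).sum()
                      for k in range(len(centers)))
        return within / between

    X = np.random.rand(500, 4)                    # toy data, an assumption
    km = KMeans(n_clusters=3, n_init=10).fit(X)   # conventional K-Means
    print(f_ratio(X, km.labels_, km.cluster_centers_))

Different initial partitions change km.labels_ and hence this score, which is exactly the dependency the proposed discriminant-based initialization is meant to reduce.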
Document image retrieval is a task to retrieve document images relevant to a user's query.
Most existing methods based on word-level indexing rely on the representation called "bag of words", which originated in the field of information retrieval.
This paper presents a new representation of documents that utilizes additional information about the location of words in pages so as to improve the retrieval performance.
We consider that pages are relevant to a query if they contain its terms densely.
This notion is embodied as density distributions of terms calculated in the proposed method.
Its performance is improved with the help of "pseudo relevance feedback", i.e., a method of expanding a query by analyzing pages.
Experimental results on English document images show that the proposed method is superior to conventional methods of electronic document retrieval at recall levels 0.0--0.6.
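As a rough illustration of the density idea (not the paper's exact formulation), the sketch below spreads each occurrence of a query term over neighbouring word positions with a Gaussian window and scores a page by the peak of the resulting density; the window width, the positions, and the peak-based score are all assumptions.

    import numpy as np

    def term_density(term_positions, page_length, sigma=25.0):
        # 1-D density over word positions: one Gaussian kernel per query-term occurrence.
        xs = np.arange(page_length)
        density = np.zeros(page_length)
        for p in term_positions:
            density += np.exp(-0.5 * ((xs - p) / sigma) ** 2)
        return density

    positions = [12, 15, 16, 140]                           # hypothetical term locations
    score = term_density(positions, page_length=300).max()  # dense clusters give high peaks
    print(score)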
Quality of service (QoS) architectures have been required in recent years to support a wide range of distributed applications, particularly in wide-area systems.
In the context of QoS information management associated with QoS activities, QoS information modeling and mapping enable QoS architectures to be developed independently of the underlying environment and/or application.
This paper provides a review of existing QoS architectures on these aspects and presents our current framework.
This paper studies stability of network models that capture macroscopic features of data communication networks including the Internet.
The network model consists of a set of links and a set of possible routes which are fixed subsets of links.
A connection is dynamically established along one of the routes to transmit data as requested, and terminated after the transmission is over.
The transmission bandwidth of a link is dynamically allocated, according to specific bandwidth allocation policy, to ongoing connections that traverse the link.
A network model is said to be stable under a given bandwidth allocation policy if, roughly, the number of ongoing connections in the network will not blow up over time.
The importance of multimedia processing on general-purpose computing platforms has prompted processor designers to add multimedia instructions to microprocessor instruction set architectures (ISAs).
These include MAX-2 for the PA-RISC architecture [1], MMX, SSE and SSE-2 for the Intel IA-32 architecture [2], and a superset of these to the Itanium IA-64 architecture [3].
Although these multimedia instructions may be very effective, they still incur the overhead of their base microprocessor ISA.
PLX [4] is a new ISA designed from scratch for fast and efficient multimedia processing.
Prior work has demonstrated its effectiveness for integer media applications [4].
This paper describes the new floating-point ISA for PLX version 1.3, designed to enable support for very fast 3D graphics.
With the proliferation of 3D games, it is highly desirable to support fast 3D graphics with the same media processor used for integer media types like images, video and audio.
In this paper an iterative detection technique for DPSK and its extension to higher level DAPSK modulation schemes is presented.
We consider the well-known OFDM transmission technique that requires, in combination with noncoherent detection, no channel state information.
By simulation it is shown that the proposed algorithm leads to a significant performance gain in terms of bit error rate.
This paper illustrates an approach to design and validation of heterogeneous systems.
The emphasis is placed on devices which incorporate MEMS parts in either a single mixed-technology (CMOS + micromachining) SOC device, or alternatively as a hybrid system with the MEMS part in a separate chip.
The design flow is general, and it is illustrated for the case of applications embedding CMOS sensors.
In particular, applications based on fingerprint recognition are considered since a rich variety of sensors and data processing algorithms can be considered.
A high level multilanguage /multi-engine approach is used for system specification and co-simulation.
This also allows for an initial high-level architecture exploration, according to performance and cost requirements imposed by the target application.
Thermal simulation of the overall device, including packaging, is also considered since this can have a significant impact in sensor performance.
From the selected system specification, the actual architecture is finally generated via a multi-language co-design approach which can result in both hardware and software parts.
The hardware parts are composed of available IP cores.
For the case of a single chip implementation, the most important issue of embedded-core-based testing is briefly considered, and current techniques are adapted for testing the embedded cores in the SOC devices discussed.
In this paper, we describe an ongoing effort to define common APIs for structured peer-to-peer overlays and the key abstractions that can be built on them.
In doing so, we hope to facilitate independent innovation in overlay protocols, services, and applications, to allow direct experimental comparisons, and to encourage application development by third parties.
We provide a snapshot of our efforts and discuss open problems in an effort to solicit feedback from the research community.
The technical details of a dual-transponder, long-baseline positioning system to measure the sway of a free towed Synthetic Aperture Sonar (SAS) are presented.
The sway is measured with respect to freely deployed, battery powered, transponders which sit stationary on the seabed connected via cables to floating buoys housing high-accuracy GPS timing receivers.
A T/R switch allows a single hydrophone on each transponder to alternately receive and transmit linear FM chirp signals.
The time of flight of the signals is determined by matched-filtering using a DSP and transmitted to the towboat for storage in real time using RF modems.
The sway information is completely independent for each sonar ping and allows the deblurring of the SAS images by post processing.
A Matlab simulation predicts a worst case sway accuracy of cm.
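A minimal numpy sketch of the matched-filtering step: correlate the received signal with the known linear FM chirp and take the correlation peak as the time of flight. The sample rate, chirp parameters, delay, and noise level below are illustrative assumptions, not the system's actual values.

    import numpy as np

    fs = 100_000.0                                   # sample rate (Hz), assumed
    t = np.arange(0, 0.01, 1 / fs)                   # 10 ms linear FM chirp, 20-40 kHz
    chirp = np.sin(2 * np.pi * (20_000 * t + 0.5 * 2e6 * t ** 2))

    delay_samples = 1234                             # unknown in practice
    rx = np.zeros(8192)
    rx[delay_samples:delay_samples + len(chirp)] += chirp
    rx += 0.5 * np.random.randn(len(rx))             # additive noise

    mf = np.convolve(rx, chirp[::-1], mode='valid')  # matched filter = cross-correlation
    tof = np.argmax(np.abs(mf)) / fs                 # estimated time of flight (s)
    print(tof, delay_samples / fs)                   # estimate vs. true delay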
In this report we examine several multimedia applications with and without MMX/SSE enhancements and examine the impact on execution time and cache performance of these enhancements.
We implement several versions of the programs to isolate their memory and processing requirements. One criticism of SIMD technology is that it may be doomed to obsolescence as processors gain speed with respect to memory.
We discover that the multimedia applications we looked at are not memory bound.
Enhancing applications with MMX does make them more memory bound, but not so much as to nullify the gain given by the enhancement.
We show that prefetching instructions can be used to hide memory latency, and that MMX style enhancement will still be useful as long as the latency is predictable, the memory bandwidth scales sufficiently, and the total runtime of the program is large compared to the latency of memory.
Typically data integration systems have significant gaps of coverage over the global (or mediated) schema they purport to cover.
Given this reality, users are interested in knowing exactly which part of their query is supported by the available data sources.
This report introduces a set of assumptions which enable users to obtain intensional descriptions of the certain, uncertain and missing answers to their queries given the available data sources.
The general assumption is that query and source descriptions are written as tuple relational queries which return only whole schema tuples as answers.
More specifically, queries and source descriptions must be within an identified sub-class of these `schema tuple queries' which is closed over syntactic query difference.
Because this identified query class is decidable for satisfiability, query containment and equivalence are also decidable.
Sidestepping the schema tuple query assumption, the identified query class is more expressive than conjunctive queries with negated subgoals.
The ability to directly express members of the query class in standard SQL makes this work immediately applicable in a wide variety of contexts.
Recent scholarship on teacher education has drawn a sharp contrast between "top-down" and "teacher-directed" approaches to instructional reform.
However, this article suggests that all forms of teacher education share a common ground: they are all inescapably rhetorical in nature, aimed at the persuasion of teachers.
While reformers may attempt to deny such intentions, they cannot help but employ rhetoric in practice.
By way of illustration, the author provides a case study of a reform project that seeks to "support" teachers rather than trying to exert power over them.
Analysis reveals this to be an impossible ideal, one whose appearance can be maintained only by refusing to admit to contradictory motives.
this paper, we address the problem of misconfiguration troubleshooting.
There are two essential goals in designing such a troubleshooting system: 1.
Troubleshooting effectiveness: the system should effectively identify a small set of sick configuration candidates with a short response time; 2.
Automation: the system should minimize the number of manual steps and the number of users involved
When people switch between two tasks, their performance on each is worse than when they perform that task in isolation.
This "switch cost" has been extensively studied, and many theories have been proposed to explain it.
One influential theory is the "failure to engage" (FTE) theory, which posits that observed responses are a mixture of prepared and unprepared response strategies.
The probability that participants use prepared processes can be manipulated experimentally, by changing preparation time, for example.
The FTE theory is a binary mixture model, and therefore makes a strong prediction about the existence of fixed points in response time distributions.
We found evidence contradicting this prediction, using data from 54 participants in a standard task-switching paradigm.
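The fixed-point prediction can be made concrete with a small numerical sketch: if response-time densities are binary mixtures p*f_prepared + (1-p)*f_unprepared, then the mixture densities for every p pass through the point where the two component densities cross. The component distributions below are illustrative assumptions, not fits to the experimental data.

    import numpy as np
    from scipy import stats

    x = np.linspace(0.2, 2.0, 1000)
    f_prepared = stats.lognorm(s=0.3, scale=0.5).pdf(x)    # fast, prepared responses
    f_unprepared = stats.lognorm(s=0.3, scale=0.9).pdf(x)  # slow, unprepared responses

    # Mixture densities for several mixing probabilities p.
    mixtures = [p * f_prepared + (1 - p) * f_unprepared for p in (0.2, 0.5, 0.8)]

    # Every mixture coincides where the components cross: the predicted fixed point.
    crossing = x[np.argmin(np.abs(f_prepared - f_unprepared))]
    print("fixed point near t =", float(crossing))
    print([float(np.interp(crossing, x, m)) for m in mixtures])  # nearly identical values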
A time encoding machine is a real-time asynchronous
this article has arisen: "The Neutrino, what is it?" It should be noted that the predictions of the theory of Refs.
[13,14] associated with the notion of neutrino differ markedly from
This paper reports on a project in the area of simulation based decision support (SBDS) at the operational level of the manufacturing system.
The purpose of the project was to explore and describe the possibilities to use a standard discrete event simulation package for capacity planning purpose in a situation where labor was a primary and scarce production resource.
This has been done through a case study at a Radio Base Station (RBS) assembly line at Ericsson Radio System, Gävle.
Results from the study are a conceptual structure for a SBDS system and a prototype simulation system tailored for the RBS-2000 assembly line.
The system has been tested in a simulated environment and results indicate a delivery precision improvement of eleven percent.
Conclusions from the study are that this kind of tool for operational decision support offers a flexible decision support environment and that high quality information and information collecting systems are crucial for the success of such tools.
In the Outsourced Database (ODB) model, organizations outsource their data management needs to an external service provider.
The service provider hosts clients' databases and offers seamless mechanisms to create, store, update and access (query) their databases.
This model introduces several research issues related to data security.
One of the core security requirements is providing efficient mechanisms to ensure data integrity and authenticity while incurring minimal computation and bandwidth overhead.
In this work, we investigate the problem of ensuring data integrity and suggest secure and practical schemes that help facilitate authentication of query replies.
We explore the applicability of popular digital signature schemes (RSA and DSA) as well as a recently proposed scheme due to Boneh et al.
[1] and present their performance measurements.
For successful deployment of robust quality of service (QoS) framework, the need for a QoS policy system looks inevitable.
In this report we explore the various elements, which together form a QoS policy framework and also try to gain insight into the implementation issues of such a framework and provide directives for future research.
This paper discusses the role of domain-specific standards for managing semantic heterogeneity among dissimilar information sources.
The process of integrating such heterogeneous information systems is also discussed in this context, whereby standards play a central role for `initiating' top-down processes by means of defining common data models for the involved information sources
This paper describes our research on both the detection and subsequent resolution of recognition errors in spoken dialogue systems.
The paper consists of two major components.
The first half concerns the design of the error detection mechanism for resolving city names in our MERCURY flight reservation system, and an investigation of the behavioral patterns of users in subsequent subdialogues involving keypad entry for disambiguation.
An important observation is that, upon a request for keypad entry, users are frequently unresponsive to the extent of waiting for a time-out or hanging up the phone.
The second half concerns a pilot experiment investigating the feasibility of replacing the solicitation of a keypad entry with that of a "speak-and-spell" entry.
A novelty of our work is the introduction of a speech synthesizer to simulate the user, which facilitates development and evaluation of our proposed strategy.
We have
Some speech variations due to involuntary and voluntary speech productions have been investigated.
In this very preliminary report duration variations for two speakers are discussed.
The nested partitions method is a flexible and effective framework of optimizing large-scale problems with combinatorial structure.
In this paper we consider the nested partitions method for simulation optimization and propose a new variant that uses inheritance to speed convergence.
The new nested partitions method with inheritance algorithm performs well when applied to test problems, but it also calls for new analysis of convergence.
In support of the order-to-delivery (OTD) business initiative, a simulation framework has been developed at GM R&D. The OTD simulation program is aimed at simulating the behavior of the OTD supply chain using detailed inputs associated with demand, supply, and production processes.
Customer demand variation is a key source of uncertainty in GM's supply chain.
Early capture of customer demand fluctuation enables GM to effectively reduce aggregate mismatch between production and sales and appropriate time series models have been suggested to capture demand patterns based on actual data.
The vehicle model and option mix with a given demand variation influences the performance of the OTD supply chain and provides a means to establish certain principles determining the extent of product offering and the scope of production leveling.
Analyzing the impact of the model and option mix on primary supply chain performance measures, such as customer wait time, condition mismatch, and parts usage, capacitates reduction of the mismatch between demand and production and stabilizes supply chain operations.
This paper describes the implementation of the OKE, which allows users other than root to load native and fully optimised code in the Linux kernel.
Safety is guaranteed by trust management, language customisation and a trusted compiler.
By coupling trust management with the compiler, the OKE is able to vary the level of restrictions on the code running in the kernel, depending on the programmer's privileges.
Static sandboxing is used as much as possible to check adherence to the security policies at compile time.
Through a dynamically reconfigurable optical layer, we propose a simple approach to handle traffic surges in IP networks.
Its effectiveness is established by analysis of data traffic in a large ISP.
Localized QoS routing techniques were proposed to achieve acceptable performance without exchanging global state information over the network.
Physiological data is characterized by large amounts of data, sequential data, issues of sensor fusion, and a rich domain complete with noise, hidden variables, and significant effects of context.
There is a continuous flow of data, and the problem is to build a predictive model from the data stream.
Several authors refer to desirable properties for mining data streams: incremental algorithms, able to process examples in constant time and memory, performing a single scan over the training data, maintaining classifiers at any time, and dealing with concept drift.
Our recent work focuses on induction of decision trees from data streams.
The possibility to evaluate the system in a real problem is the main motivation to participate in the Workshop.
In this paper we present a new methodology for modelling the development of the prices of defaultable zero coupon bonds that is inspired by the Heath-Jarrow-Morton (HJM) [19] approach to risk-free interest rate modelling.
Instead of precisely specifying the mechanism that triggers the default we concentrate on modelling the development of the term structure of the defaultable bonds and give conditions under which these dynamics are arbitrage-free.
These conditions are a drift restriction that is closely related to the HJM drift restriction for risk-free bonds, and the restriction that the defaultable short rate must always be not below the risk-free short rate.
We introduce the generalized semi-Markov decision process (GSMDP) as an extension of continuous-time MDPs and semi-Markov decision processes (SMDPs) for modeling stochastic decision processes with asynchronous events and actions.
Using phase-type distributions and uniformization, we show how an arbitrary GSMDP can be approximated by a discrete-time MDP, which can then be solved using existing MDP techniques.
The techniques we present can also be seen as an alternative approach for solving SMDPs, and we demonstrate that the introduction of phases allows us to generate higher quality policies than those obtained by standard SMDP solution techniques.
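As a rough illustration of the uniformization step for a plain continuous-time MDP (the paper additionally uses phase-type distributions to handle the non-exponential events of a GSMDP, which is not shown here), exponential rates are turned into discrete-time probabilities via P = I + Q/Lambda with Lambda at least as large as every exit rate; the rate matrix below is an assumption.

    import numpy as np

    # Generator (rate) matrix Q for one action of a small continuous-time MDP.
    Q = np.array([[-3.0,  2.0,  1.0],
                  [ 1.0, -4.0,  3.0],
                  [ 0.5,  0.5, -1.0]])

    lam = 1.1 * np.max(-np.diag(Q))        # uniformization constant >= every exit rate
    P = np.eye(len(Q)) + Q / lam           # discrete-time transition probabilities

    assert np.allclose(P.sum(axis=1), 1.0) and (P >= 0).all()
    # After rescaling rewards consistently (e.g. by 1/lam), P can be fed to any standard
    # discrete-time MDP solver such as value iteration.
    print(P)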
MDA (Model-Driven Architecture) has been coined by OMG (Object Management Group) as the next step in application integration.
Being based on standards already embraced by a large segment of the software engineering industry, MDA promises fully automatic model transformation.
MDA enables application developers to use formalisms such as UML to specify their applications in a totally platform-independent way.
Later transformations to platform-dependent software are automated.
Parlay, a middleware specification developed for the telecommunication domain, is on the other hand promising network independent development and deployment of telecommunication services and applications.
In this position paper we report on our experience from a Eurescom project where we try to couple MDA and Parlay in order to increase reuse in the telecommunication domain.
Telecommunication service development is hampered by long development cycles and low level of reuse.
We describe how the MDA approach can be applied to the telecommunication domain through the use of Parlay.
We believe this approach has substantial potential for reducing development costs for many telecommunication operators.
In addition, developed models and applications can be deployed on a wide variety of platforms without much change.
In this paper we present the 2D parametric freehand sketch component of an experimental prototype called GEGROSS (GEsture & Geometric ReconstructiOn based Sketch System).
The module implements a gesture alphabet and a calligraphic interface to manage geometric constraints found in 2D sections, that are later used to perform modeling operations.
We use different elements to implement this module.
The geometric kernel stores model data.
The constraint manager 2D DCM handles restrictions.
Finally, we use the CALI library to define gestural interfaces.
In this paper we present a strategy for integrating these tools, and a calligraphic interface we developed to provide dimensional controls over freehand sketches.
Our system allows users to build simple sketches composed of line segments and arcs, which are automatically tidied and beautified.
Proportional and dimensional information over sketched parts is provided by handwriting their corresponding sizes.
Corpora have proved their value both in linguistics and language technology.
Information obtained from corpora has challenged intuitive language study, since intuitive observations are found inadequate when compared with findings from corpora.
However, the value of corpora is not yet acknowledged in India, although in recent times some sporadic attempts have been made to design corpora in Indian languages.
We argue here for initiating large-scale projects to develop corpora of various types in Indian languages not only to contribute in research of language technology, but also to provide reliable language resources for the benefit of people of the country.
We plead for the generation of specific types of corpus required for designing tools and systems for language technology, linguistics research, and education.
This paper presents a brief introduction to the use of duality theory and simulation in financial engineering.
It focuses on American option pricing and portfolio optimization problems when the underlying state space is high-dimensional.
In general, it is not possible to solve these problems exactly due to the so-called "curse of dimensionality" and as a result, approximate solution techniques are required.
Approximate dynamic programming (ADP) and dual based methods have recently been proposed for constructing and evaluating good approximate solutions to these problems.
In this paper we describe these ADP and dual-based methods, and the role simulation plays in each of them.
Some directions for future research are also outlined.
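One widely used regression-based ADP method for American option pricing by simulation is least-squares Monte Carlo in the style of Longstaff and Schwartz; the sketch below (a Bermudan put under geometric Brownian motion, with illustrative parameters and a quadratic regression basis, all assumptions) only shows where simulation and regression enter, not the duality-based upper bounds the paper also discusses.

    import numpy as np

    rng = np.random.default_rng(0)
    S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0   # illustrative parameters
    steps, paths = 50, 20000
    dt = T / steps
    disc = np.exp(-r * dt)

    # Simulate geometric Brownian motion paths.
    z = rng.standard_normal((paths, steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * z, axis=1))
    S = np.hstack([np.full((paths, 1), S0), S])

    payoff = lambda s: np.maximum(K - s, 0.0)
    value = payoff(S[:, -1])                     # exercise value at maturity

    # Backward induction: regress the continuation value on basis functions of the price.
    for t in range(steps - 1, 0, -1):
        value *= disc
        itm = payoff(S[:, t]) > 0
        if itm.any():
            X = np.vander(S[itm, t], 3)          # quadratic basis, an assumption
            coef, *_ = np.linalg.lstsq(X, value[itm], rcond=None)
            exercise = payoff(S[itm, t]) > X @ coef
            value[itm] = np.where(exercise, payoff(S[itm, t]), value[itm])

    print("estimated option value:", disc * value.mean())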
This paper introduces a new technique for estimating cycle time quantiles from discrete event simulation models run at a single traffic intensity.
The Cornish-Fisher expansion is used as a vehicle for this approximation, and it is shown that for an M/M/1 system and a full factory simulation model, the technique provides accurate results with low variability for the most commonly estimated quantiles without requiring unreasonable sample sizes.
Additionally, the technique provides the advantages of being easy to implement and providing multiple cycle time quantiles from a single set of simulation runs.
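A minimal sketch of the Cornish-Fisher step: given moments of the simulated cycle times, a normal quantile is adjusted for skewness and excess kurtosis. The gamma-distributed sample below is synthetic, and only the standard third- and fourth-order correction terms are used, which may differ in detail from the paper's implementation.

    import numpy as np
    from scipy import stats

    def cornish_fisher_quantile(sample, q):
        mu, sd = np.mean(sample), np.std(sample, ddof=1)
        skew = stats.skew(sample)
        exkurt = stats.kurtosis(sample)          # excess kurtosis
        z = stats.norm.ppf(q)
        zcf = (z + (z ** 2 - 1) * skew / 6
                 + (z ** 3 - 3 * z) * exkurt / 24
                 - (2 * z ** 3 - 5 * z) * skew ** 2 / 36)
        return mu + sd * zcf

    cycle_times = np.random.default_rng(1).gamma(shape=3.0, scale=2.0, size=10_000)
    print(cornish_fisher_quantile(cycle_times, 0.95))   # Cornish-Fisher estimate
    print(np.quantile(cycle_times, 0.95))               # empirical quantile for comparison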
Extending the single optimized spaced seed of PatternHunter [20] to multiple ones, PatternHunter II simultaneously remedies the lack of sensitivity of Blastn and the lack of speed of SmithWaterman, for homology search.
At Blastn speed, PatternHunter II approaches Smith-Waterman sensitivity, bringing homology search technology back to a full circle.
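To make the spaced-seed idea concrete, the toy sketch below reports hits of a single spaced seed between two DNA strings: a hit occurs wherever the two windows agree on every '1' position of the seed, while '0' positions may mismatch. The seed shown is the weight-11 seed usually quoted for the original PatternHunter (an assumption here), and PatternHunter II of course uses several optimized seeds simultaneously.

    SEED = "111010010100110111"   # '1' = position must match, '0' = don't care

    def seed_hits(a, b, seed=SEED):
        # Return (i, j) offsets where the spaced seed matches a[i:] against b[j:].
        care = [k for k, c in enumerate(seed) if c == "1"]
        return [(i, j)
                for i in range(len(a) - len(seed) + 1)
                for j in range(len(b) - len(seed) + 1)
                if all(a[i + k] == b[j + k] for k in care)]

    a = "ACGTACGTACGTACGTACGT"
    b = "ACGGACGTTCGTACATACGT"    # differs from a only at don't-care positions 3, 8, 14
    print(seed_hits(a, b))        # (0, 0) is reported as a hit despite the mismatches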
this paper constitutes a suitable basis for building an effective solution to extracting information from semi-structured documents for two principal reasons.
First, it provides an extensible architecture basis for: extracting structured information from semistructured documents; providing fast and accurate selective access to this information; performing selective dissemination of relevant documents depending on filtering criteria.
Second, it is simple in terms of: the complexity of the algorithms used for structure recognition and document filtering; the number and size of data structures required to perform the three functions mentioned above; the amount and complexity of the metadata required to handle a given collection of documents.
The work described here is part of the Dyade Médiation project, which aims to provide integrated software components for accessing heterogeneous data sources in Internet/Intranet environments
this paper.
The results of the PATT-USA study indicated that: (a) students are interested in technology; (b) boys are more interested in technology than girls; (c) students in the U. S. think that technology is a field for both girls and boys; (d) girls are more convinced that technology is a field for both genders; (e) there is a positive influence of a parents' technological profession on the students' attitude, (f) U. S. students' concept of technology became more accurate with increasing age, (g) U. S. students are strongly aware of the importance of technology, (h) the U. S. has a rather low score on items measuring the concepts of technology compared to other industrialized countries, (i) students who had taken industrial arts/technology education classes had more positive attitudes on all sub-scales, and (j) the existence of technical toys in the home had a significantly positive impact on all attitude scales.
Although research on student attitudes in technology education has been used to assess student attitudes prior to curriculum development, a standardized attitude measure such as the PATT-USA has not been used to assess changes in attitude as the result of a treatment such as participation in a technology education program.
It is logical that students who have a positive experience in a technology education program will develop a positive attitude toward technology and the pursuit of technological careers, and would therefore be more interested in studying about technology.
As a result, students should become more technologically literate.
This premise is grounded in research from the affective domain that indicates that students who exhibit a positive attitude toward a subject are more likely to actively engage in learning during and after instruction (Po...
Smaller feature sizes, reduced voltage levels, higher transistor counts, and reduced noise margins make future generations of microprocessors increasingly prone to transient hardware faults.
Most commercial fault-tolerant computers use fully replicated hardware components to detect microprocessor faults.
The components are lockstepped (cycle-by-cycle synchronized) to ensure that, in each cycle, they perform the same operation on the same inputs, producing the same outputs in the absence of faults.
Unfortunately, for a given hardware budget, full replication reduces performance by statically partitioning resources among redundant operations.
We demonstrate
Computer systems that serve as personal assistants, advisors, or sales assistants frequently need to argue evaluations of domain entities.
Argumentation theory shows that to argue an evaluation convincingly requires to base the evaluation on the hearer's values and preferences.
In this paper we propose a framework for tailoring an evaluative argument about an entity when the user's preferences are modeled by an additive multiattribute value function.
Since we adopt and extend previous work on explaining decision-theoretic advice as well as previous work in computational linguistics on generating natural language arguments, our framework is both formally and linguistically sound.
An XML publish/subscribe system needs to match many XPath queries (subscriptions) over published XML documents.
The performance and scalability of the matching algorithm is essential for the system when the number of XPath subscriptions is large.
Earlier solutions to this problem usually built large finite state automata for all the XPath subscriptions in memory.
The scalability of this approach is limited by the amount of available physical memory.
In this paper, we propose an implementation that uses a relational database as the matching engine.
The heavy lifting part of evaluating a large number of subscriptions is done inside a relational database using indices and joins.
We describe several different implementation strategies and present a performance evaluation.
The system shows very good performance and scalability in our experiments, handling millions of subscriptions with a moderate amount of physical memory.
We present a package for algorithms on planar networks.
This package comes with a graphical user interface, which may be used for demonstrating and animating algorithms.
Our focus so far has been on disjoint path problems.
However, the package is intended to serve as a general framework, wherein algorithms for various problems on planar networks may be integrated and visualized.
For this aim, the structure of the package is designed so that integration of new algorithms and even new algorithmic problems amounts to applying a short "recipe." The package has been used to develop new variations of well-known disjoint path algorithms, which heuristically optimize additional NP-hard objectives such as the total length of all paths.
We will prove that the problem of finding edge-disjoint paths of minimum total length in a planar graph is NP-hard, even if all terminals lie on the outer face, the Eulerian condition is fulfilled, and the maximum degree is four.
Finally, as a demonstration of how PlaNet can be used as a tool for developing new heuristics for NP-hard problems, we will report on results of experimental studies on efficient heuristics for this problem.
Many commercial processors now offer the possibility of extending their instruction set for a specific application---that is, to introduce customised functional units.
There is a need to develop algorithms that decide automatically, from highlevel application code, which operations are to be carried out in the customised extensions.
A few algorithms exist but are severely limited in the type of operation clusters they can choose and hence reduce significantly the effectiveness of specialisation.
In this paper we introduce a more general algorithm which selects maximal-speedup convex subgraphs of the application dataflow graph under fundamental microarchitectural constraints, and which improves significantly on the state of the art.
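The convexity constraint can be illustrated separately from the selection algorithm itself: a candidate cluster S of dataflow nodes is convex when no path leaves S and re-enters it, since such a path would force an intermediate value to travel out to the processor and back. A small check under that standard definition (not the authors' selection procedure), using networkx, might look like this:

    import networkx as nx

    def is_convex(g, subset):
        # Convex iff no node outside `subset` lies on a directed path between two subset nodes.
        subset = set(subset)
        reachable_outside = set()
        for s in subset:
            reachable_outside |= nx.descendants(g, s) - subset
        return not any(nx.descendants(g, w) & subset for w in reachable_outside)

    # Toy dataflow graph: 0 -> 1 -> 2 -> 4, plus 1 -> 3 -> 2.
    g = nx.DiGraph([(0, 1), (1, 2), (1, 3), (3, 2), (2, 4)])
    print(is_convex(g, {1, 2}))     # False: path 1 -> 3 -> 2 leaves and re-enters the cluster
    print(is_convex(g, {1, 3, 2}))  # True: a legal candidate for a custom functional unit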
Current trends in computing include increases in both distribution and wireless connectivity, leading to highly dynamic, complex environments on top of which applications must be built.
The task of designing and ensuring the correctness of applications in these environments is similarly becoming more complex.
The unified goal of much of the research in distributed wireless systems is to provide higher-level abstractions of complex low-level concepts to application programmers, easing the design and implementation of applications.
A new and growing class of applications for wireless sensor networks require similar complexity encapsulation.
However, sensor networks have some unique characteristics, including dynamic availability of data sources and application quality of service requirements, that are not common to other types of applications.
These unique features, combined with the inherent distribution of sensors, and limited energy and bandwidth resources, dictate the need for network functionality and the individual sensors to be controlled to best serve the application requirements.
In this article, we describe different types of sensor network applications and discuss existing techniques for managing these types of networks.
We also overview a variety of related middleware and argue that no existing approach provides all the management tools required by sensor network applications.
To meet this need, we have developed a new middleware called MiLAN.
MiLAN allows applications to specify a policy for managing the network and sensors, but the actual implementation of this policy is effected within MiLAN.
We describe MiLAN and show its effectiveness through the design of a sensor-based personal health monitor.
A combinatorial bijection between k-edge colored trees and colored Prüfer codes for labelled trees is established.
This bijection gives a simple combinatorial proof for the number k(nk-n)^(n-2) of k-edge colored trees with n vertices.
tes into the collection of the elementary asymptotic helices.
An asymptotic helix obeys the linear Schroedinger equation with no dependence on mass.
The mass of the particle appears explicitly when we describe the motion of the whole ensemble of the elementary splinters.
Keywords: quantum physics, ideal fluid, line vortex, soliton.
1. Introduction. Below, the earlier suggested [1] mechanical analogy for a quantum particle is further developed.
A helical wave on a vortex filament in the ideal fluid is considered.
It is shown to obey the linear Schroedinger equation.
Other properties of a vortex filament also reproduce the specific features of a quantum object.
This work is a constituent of the whole project aimed at constructing a regular mechanical analogy of physical fields and particles.
The approach is based on the concept of a substratum for physics.
The substratum is a universal medium serving to model the waves and action-at-a-distance in vacuum.
This medium is viewed meso
Capacity improvement is one of the principal challenges in wireless networking.
We present a link-layer protocol called Slotted Seeded Channel Hopping, or SSCH, that increases the capacity of an IEEE 802.11 network by utilizing frequency diversity.
SSCH can be implemented in software over an IEEE 802.11-compliant wireless card.
Each node using SSCH switches across channels in such a manner that nodes desiring to communicate overlap, while disjoint communications mostly do not overlap, and hence do not interfere with each other.
To achieve this, SSCH uses a novel scheme for distributed rendezvous and synchronization.
Simulation results show that SSCH significantly increases network capacity in several multi-hop and single-hop wireless networking scenarios.
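A toy sketch of a seeded channel-hopping schedule in the spirit of SSCH (the actual protocol maintains several (channel, seed) pairs plus a parity slot, which is omitted): each node advances its channel by its seed modulo the number of channels every slot, so nodes that adopt the same (channel, seed) pair meet in every slot while other pairs rarely collide. The channel count and parameter values are illustrative assumptions.

    NUM_CHANNELS = 13   # illustrative channel count

    def schedule(start_channel, seed, num_slots):
        # Hop rule: channel <- (channel + seed) mod NUM_CHANNELS, seed in 1..NUM_CHANNELS-1.
        ch, slots = start_channel, []
        for _ in range(num_slots):
            slots.append(ch)
            ch = (ch + seed) % NUM_CHANNELS
        return slots

    a = schedule(start_channel=3, seed=5, num_slots=10)
    b = schedule(start_channel=3, seed=5, num_slots=10)   # synced to a: overlaps every slot
    c = schedule(start_channel=7, seed=2, num_slots=10)   # different pair: few overlaps
    print(sum(x == y for x, y in zip(a, b)), "overlapping slots with the synced node")
    print(sum(x == y for x, y in zip(a, c)), "overlapping slots with the other node")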
This paper investigates convergence properties of basic REM flow control algorithm via Lyapunov functions.
The decentralized algorithm REM consists of a link algorithm that updates a congestion measure, also called "price", based on the excess capacity and backlog at that link, and a source algorithm that adapts the source rate to congestion in its path.
At the equilibrium of the algorithm, links are fully utilized, and all buffers are cleared.
Convergence of the algorithm is established for single and two-link cases using a Lyapunov argument.
Extension to the general multi-link model is discussed as well.
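For concreteness, a single-link, single-source discrete-time sketch of a REM-style iteration: the link raises its price with backlog and excess demand, and the source throttles its rate as the price grows, so at equilibrium the link is fully utilized and the buffer drains. The gains, the capped log-utility source rule, and all parameter values are illustrative assumptions rather than the exact model analyzed in the paper.

    # Single link of capacity c shared by one source; REM-style price/rate iteration.
    c = 10.0        # link capacity
    gamma = 0.05    # price step size
    alpha = 0.1     # weight on backlog in the price update
    w = 20.0        # source utility weight; rate chosen as w / price (log utility), capped

    price, backlog, rate = 0.1, 0.0, 5.0
    for _ in range(300):
        backlog = max(backlog + rate - c, 0.0)                          # queue dynamics
        price = max(price + gamma * (alpha * backlog + rate - c), 0.0)  # link price update
        rate = min(w / price, 2 * c) if price > 0 else 2 * c            # source rate update
    print(round(rate, 3), round(backlog, 3), round(price, 3))  # rate near c, backlog near 0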
This paper proposes a new DiffServ routing architecture (PAP) that integrates the admission control signaling and the QoS routing.
It differs from traditional routing in its ability to route most Expedite Forwarding (EF) traffic along the shortest paths while making use of alternative paths to absorb transient overload.
Once an EF data flow is admitted, its performance is assured.
The overhead for storing alterative path information is minimal since only one routing entry at a branching point is needed for each alternative path.
The route map of Cisco IOS provides a mechanism for implementing PAP.
this report may be reproduced without the express permission of but with acknowledgment to the International Food Policy Research Institute
In this paper we discuss an approach for simulating the behaviour of interactive software systems, before starting on any of the actual implementation, based on a model of the system at the architectural level.
By providing a mock-up of the final user interface for controlling the simulation, it is possible to carry out usability assessments of the system much earlier in the design process than is usually the case.
This means that design changes informed by this usability assessment can be made at this early stage.
This is much less expensive than having to wait until an implementation of the system is completed before discovering flaws and having to make major changes to already implemented components.
The approach is supported by a suite of cooperating tools for specification, formal modelling and animation of the system.
and a full rank matrix D with N < L, we define the signal's overcomplete representations as all α satisfying S = Dα.
Among all the possible solutions, we have special interest in the sparsest one -- the one minimizing ||α||_0.
Previous work has established that a representation is unique if it is sparse enough, requiring ||α||_0 < Spark(D)/2.
empty Nucleus may exist only if it is governed (there is more to it, but that's enough for now).
c. Instead of being translated into the familiar arborescence, syllabic generalisations are described by two lateral relations: 1. Government (destructive), 2. Licensing (supporting).
Example: a consonant occurs in a Coda iff it is followed by a governed empty Nucleus (R = any sonorant, T = any obstruent).
[Syllable-structure diagrams of the internal Coda and the final Coda omitted.]
d. The Coda Mirror (Ségéral & Scheer 2001): there is a reason why consonants are weak in Codas (and strong in the Coda Mirror = {#,C}__): governed Nuclei are laterally disabled, i.e. they can neither govern nor license.
Therefore, Coda-consonants are neither supported (by Licensing) nor damaged (by Government).
[Syllable-structure diagrams of the internal Coda and the final Coda omitted.]
e. OK, that's it, you will be relieved of empty
Encryption algorithms commonly use table lookups to perform substitution, which is a confusion primitive.
The use of table lookups in this way is especially common in the more recent encryption algorithms, such as the AES finalists like MARS and Twofish, and the AES winner, Rijndael.
Workload characterization studies indicate that these algorithms spend a significant fraction of their execution cycles on performing these table lookups, more specifically on effective address calculations.
This study ...
Modeling and simulation provide objective analysis tools for many fields including manufacturing.
This paper presents the requirements and describes the usefulness of a web-based interface to discrete-event simulation.
A description of related efforts is first presented and an approach is then described.
The approach develops a web-based interface to use commercial discrete-event simulation tools.
Incorporation into construction engineering and management curricula of tasks that improve the abilities of students to manage the complex dynamics, pressures, and demands of construction sites is becoming critical to meet the demands of the construction industry.
These goals are, however, difficult to incorporate using traditional educational tools.
This paper reviews the role, in construction engineering and management education, of computing and information technology in general and of simulation in particular.
The paper provides an overview of a Simulation Based Interactive Construction Management Learning System currently being developed at Western Michigan University (WMU) as part of a three-year project funded by the National Science Foundation and Western Michigan University.
In many fields of application shortest path finding problems in very large graphs arise.
Scenarios where large numbers of on-line queries for shortest paths have to be processed in real-time appear for example in traffic information systems. In such systems, the techniques considered to speed up the shortest path computation are usually based on precomputed information. One approach often proposed in this context is a space reduction, where precomputed shortest paths are replaced by single edges with weight equal to the length of the corresponding shortest path. In this paper, we give a first systematic experimental study of such a space reduction approach.
We introduce the concept of multi-level graph decomposition. For one specific application scenario from the field of timetable information in public transport, we perform a detailed analysis and experimental evaluation of shortest path computation based on multi-level graph decomposition.
ng persistence once and for all in a reusable (system) library providing the class of persistent predicates.
The main effect of declaring a predicate persistent (a process for which we propose a suitable syntax, compatible with the Ciao system's assertion language) is that any changes made to such predicates persist from one execution to the next one, and are transactional, and, optionally, externally visible.
The model allows associating an external, persistent storage medium (a file, a database table, etc.) to each such predicate, which will "reside" in that medium.
Notably, persistent predicates appear to a program as ordinary (dynamic) predicates: calls to them do not need to be coded or marked specially, and the builtins to update them are (suitably modified versions of) the same used with the internal database (e.g., asserta/1, assertz/1, retract/1, etc.).
Thus, only minor modifications to the program code (often independent of its internal logic) are needed to achieve persistence
In this paper, we will discuss the relationship between the experimental ΔCp and the ΔASA upon binding calculated based on the structural information, and the prediction of ΔCp based on ΔASA
Association Rule Mining, originally proposed for market basket data, has potential applications in many areas.
Remote Sensed Imagery (RSI) data is one of the promising application areas.
Extracting interesting patterns and rules from datasets composed of images and associated ground data, can be of importance in precision agriculture, community planning, resource discovery and other areas.
However, in most cases the image data sizes are too large to be mined in a reasonable amount of time using existing algorithms.
In this paper, we propose an approach to derive association rules on RSI data using Peano Count Tree (P-tree) structure.
P-tree structure, proposed in our previous work [21], provides a lossless and compressed representation of image data.
Based on P-trees, an efficient association rule mining algorithm, P-ARM, with fast support calculation and significant pruning techniques is introduced to improve the efficiency of the rule mining process.
P-ARM algorithm is implemented and compared with FP-growth and Apriori algorithms.
Experimental results showed that our algorithm is superior for association rule mining on RSI spatial data.
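As a hedged illustration of the fast support calculation idea (not the authors' P-ARM or P-tree code): if each item, e.g. a thresholded band value per pixel, is stored as a bit vector over all transactions, the support of an itemset is the population count of the AND of its bit vectors; P-trees compress such bit vectors and deliver these counts directly, while plain Python integers stand in for them below.

```python
def support(bit_vectors, itemset):
    """Popcount of the AND of the items' bit vectors = number of supporting transactions."""
    items = list(itemset)
    if not items:
        return 0
    acc = bit_vectors[items[0]]
    for item in items[1:]:
        acc &= bit_vectors[item]          # intersect the transaction sets
    return bin(acc).count("1")

if __name__ == "__main__":
    # 8 pixels; bit i is set when the (invented) item holds for pixel i.
    bv = {"NIR>128": 0b10110110, "yield=high": 0b10100100}
    print(support(bv, ["NIR>128"]))                # 5
    print(support(bv, ["NIR>128", "yield=high"]))  # 3
```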
Texture has been recognized as an important visual primitive in image analysis.
A widely used texture descriptor, which is part of the MPEG-7 standard, is that computed using multiscale Gabor filters.
The high dimensionality and computational complexity of this descriptor adversely affect the efficiency of content-based retrieval systems.
We propose a modified texture descriptor that has comparable performance, but with nearly half the dimensionality and less computational expense.
This gain is based on a claim that the distribution of (absolute values of) filter outputs has a strong tendency to be Rayleigh.
Experimental results show that the dimensionality can be reduced by almost 50%, with a tradeoff of less than 3% on the error rate.
Furthermore, it is easy to compute the new feature using the old one, without having to repeat the computationally expensive filtering step.
We also propose a new normalization method that improves similarity retrieval and indexing e#ciency.
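The dimensionality claim above rests on the Rayleigh assumption, which can be made concrete as follows (our notation, not the paper's): if a subband magnitude |y| is Rayleigh with parameter s, its mean and standard deviation are

$$\mu = s\sqrt{\pi/2}, \qquad \sigma = s\sqrt{(4-\pi)/2},$$

so the ratio σ/μ ≈ 0.52 is fixed and s = μ√(2/π) is recoverable from the stored mean alone; under this assumption one parameter per subband carries the same information as the usual (μ, σ) pair, which is where the near-halving of the descriptor and the ability to compute the new feature from the old one come from.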
We present a statistical view of the texture retrieval problem by combining the two related tasks, namely feature extraction (FE) and similarity measurement (SM), into a joint modeling and classification scheme.
We show that using a consistent estimator of texture model parameters for the FE step, followed by computing the Kullback-Leibler distance (KLD) between estimated models for the SM step, is asymptotically optimal in terms of retrieval error probability.
The statistical scheme leads to a new wavelet-based texture retrieval method that is based on the accurate modeling of the marginal distribution of wavelet coefficients using a generalized Gaussian density (GGD) and on the existence of a closed form for the KLD between GGDs.
The proposed method provides greater accuracy and flexibility in capturing texture information, while its simplified form has a close resemblance to the existing methods which use energy distribution in the frequency domain to identify textures.
Experimental results on a database of 640 texture images indicate that the new method significantly improves retrieval rates, e.g., from 65% to 77%, compared with traditional approaches, while it retains comparable levels of computational complexity.
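For reference, the GGD density and the closed-form KLD usually quoted in this setting are, with scale α and shape β (our notation; if the authors parameterize differently the constants shift accordingly):

$$p(x;\alpha,\beta)=\frac{\beta}{2\alpha\,\Gamma(1/\beta)}\,e^{-(|x|/\alpha)^{\beta}},$$

$$D\bigl(p_1\,\|\,p_2\bigr)=\log\frac{\beta_1\alpha_2\Gamma(1/\beta_2)}{\beta_2\alpha_1\Gamma(1/\beta_1)}+\Bigl(\frac{\alpha_1}{\alpha_2}\Bigr)^{\beta_2}\frac{\Gamma((\beta_2+1)/\beta_1)}{\Gamma(1/\beta_1)}-\frac{1}{\beta_1}.$$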
Current unit test frameworks present broken unit tests in an arbitrary order, but developers want to focus on the most specific ones first.
We have therefore inferred a partial order of unit tests corresponding to a coverage hierarchy of their sets of covered method signatures: When several unit tests in this coverage hierarchy break, we can guide the developer to the test calling the smallest number of methods.
Our experiments with four case studies indicate that this partial order is semantically meaningful, since faults that cause a unit test to break generally cause less specific unit tests to break as well.
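A hedged sketch of the ordering idea (illustrative test names and coverage sets, not the authors' tool): tests are partially ordered by set inclusion of their covered method signatures, and among the broken tests the most specific ones, i.e. those whose covered set strictly contains no other broken test's set, are reported first.

```python
def most_specific(broken, coverage):
    """Broken tests whose covered-signature set strictly contains no other broken test's set."""
    return [t for t in broken
            if not any(coverage[u] < coverage[t] for u in broken if u != t)]

if __name__ == "__main__":
    coverage = {                                  # covered method signatures per test
        "test_push":      {"Stack.push"},
        "test_pop":       {"Stack.push", "Stack.pop"},
        "test_iteration": {"Stack.push", "Stack.pop", "Stack.iter"},
    }
    broken = ["test_push", "test_pop", "test_iteration"]
    print(most_specific(broken, coverage))        # ['test_push'] -> guide the developer here first
```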
We describe an identity based signature scheme that uses biometric information to construct the public key.
Such a scheme would be beneficial in a legal dispute over whether a contract had been signed or not by a user.
A biometric reading provided by the alleged signer would be enough to verify the signature.
We make use of Fuzzy extractors [7] to generate a key string from a biometric measurement.
In this paper we will do a heuristic evaluation on a homepage that belongs to an illustrator who wants to show his work.
Before we present the result of the evaluation, we will shortly explain the context.
The result of the heuristic evaluation falls in two parts.
First, we look at some of the usability problems we discovered overall at the website and secondly we look at one particular problem with the website, where the illustrator tries to sell his book.
After each part, we present some recommendations to the problems.
Finally, we try to answer some of the questions that arose when we carried out the evaluation.
Questions like -- is it possible to do a traditional heuristic evaluation on a website whose aim is aesthetic rather than commercial?
Is a website aesthetic if it has a commercial touch?
And finally, what are the problems with heuristic evaluation in this context?
KEYWORDS: Heuristic evaluation, website, aesthetic, commercial, user involvement
is to evolve interoperability standards to develop flexible and scalable controlling and simulation services.
In order to overcome the limitations of proprietary solutions, efforts have been made to support interoperability among simulation models and geo information systems (GIS).
Existing standards in the domain of spatial information and spatial services define geoinformation (GI) in a more or less static way.
Though time can be handled as an additional attribute, its representation is not explicitly specified.
In contrast, as the standard for distributed heterogeneous simulation, the High Level Architecture (HLA) provides a framework for distributed time-variant simulation processes but HLA is lacking in supporting spatial information.
A web-based Distributed spAtio-temporaL Interoperability architecture, DALI, integrating these initiatives will be presented here.
The long-term goal of this DALI Architecture is making standardized off-the-shelf GI and simulation services usable for highly specialized simulation and controlling applications.
We have developed ZENTURIO, which is an experiment management system for performance and parameter studies as well as software testing for cluster and Grid architectures.
In this paper we describe our experience with developing ZENTURIO as a collection of Web services.
A directive-based language called ZEN is used to annotate arbitrary files and specify arbitrary application parameters.
An Experiment Generator Web service parses annotated application files and generates appropriate codes for experiments.
An Experiment Executor Web service compiles, executes, and monitors experiments on a single or a set of local machines on the Grid.
Factory and Registry services are employed to create and register Web services, respectively.
An event infrastructure has been customised to support high-level events under ZENTURIO in order to avoid expensive polling and to detect important system and application status information.
A graphical user portal allows the user to generate, control, and monitor experiments.
We compare our design with the Open Grid Service Architecture (OGSA) and highlight similarities and differences.
We report results of using ZENTURIO to conduct performance analysis of a material science code that executes on the Grid under the Globus Grid infrastructure.
One of the input images for an inscription on a column of the Parthenon.
The light-capturing frame (Fig.
1) consists of fiducials and MacBeth color checker chart samples, from which the camera's position and the incident radiant intensity can be estimated, as well as two glossy black spheres used to indicate the position of the light source.
To reconstruct each object, we re-adjusted the size of the frame and placed it around the geometry.
We took approximately ten images, each lit with a remotely mounted camera flash at a different position pointed towards the center of the frame.
For each set of images, we also took an additional image without the flash, which was subtracted from the other images to remove any ambient lighting.
The full process including setup took approximately 20 minutes for each inscription.
To compute a surface normal map, we first determine the light vector l by shooting rays from the camera's center toward the o
The computers we use are not secure, and they are even less so when connected to the Internet.
A lot of blame has been put on lazy sysadmins for not applying patches promptly, but the fault is not entirely theirs.
We believe that distributed systems should be designed to make attacks harder and to limit the damage done when attacks succeed.
We propose three components of the system architecture that address these goals and make distributed systems easier to monitor and manage, while simplifying the task of writing secure applications.
Following these guidelines won't make the system secure, but doing so will make it easier to build systems that are.
Information visualization exploits the phenomenal abilities of human perception to identify structures by presenting abstract data visually, allowing an intuitive exploration of data to get insight, to draw conclusions and to interact directly with the data.
The specification, analysis and evaluation of complex models and simulated model data can benefit from information visualization techniques by obtaining visual support for different tasks.
This paper presents an approach that combines modelling and visualization functionality to support the modelling process.
Based on this general approach, we have developed and implemented a framework that allows to combine a variety of models with statistical and analytical operators as well as with visualization methods.
We present several examples in the context of climate modelling.
Typically, large-scale optimistic parallel simulations will spend 90% or more of the total execution time forward processing events and very little time executing rollbacks.
In fact, it was recently shown that a large-scale TCP model consisting of over 1 million nodes will execute without generating any rollbacks (i.e., perfect optimistic execution is achieved).
The major cost involved in forward execution is the preparation for a rollback in the form of state-saving.
Using a technique called reverse computation, state-saving overheads can be greatly reduced.
Here, the rollback operation is realized by executing previously processed events in reverse.
However, events are retained until GVT sweeps past.
In this paper, we define a new algorithm for realizing a continuum of reverse computation-based parallel simulation systems, which enables us to relax the computing of GVT and potentially further reduces the amount of memory required to execute an optimistic simulation.
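An illustrative sketch of the reverse-computation idea in the abstract above (a toy logical process, not the authors' simulator): each event handler has an inverse, a rollback replays processed events in reverse order through those inverses, and processed events are retained only until GVT passes them.

```python
class Counter:
    """Toy logical process whose state is a single counter."""
    def __init__(self):
        self.value = 0
        self.processed = []            # retained until GVT sweeps past

    def forward(self, amount):
        self.value += amount           # constructive update: invertible, so no state saving
        self.processed.append(amount)

    def rollback_to(self, n_events):
        while len(self.processed) > n_events:
            amount = self.processed.pop()
            self.value -= amount       # reverse handler undoes the forward handler

if __name__ == "__main__":
    lp = Counter()
    for e in (3, 5, 2):
        lp.forward(e)
    print(lp.value)                    # 10
    lp.rollback_to(1)                  # a straggler forces rollback of the last two events
    print(lp.value)                    # 3
```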
Large corporations can achieve significant cost savings by developing and employing a sophisticated and continuously updated, billing and credit policy.
Days of sale outstanding (DSO) is a major cost driver for corporations with large revenues, as this leads to an increased risk of default, increased dunning and collection costs, a non-optimal billing procedure with attendant costs and perhaps most importantly, an increase in the order-to-cash cycle time and the significant increase in hidden costs this implies.
Segmentation of the customer base according to behavior and risk combined with the design of bespoke billing and credit policies suited to the behavior and risk associated with each segment, can lead to a significant decrease in the costs mentioned above.
This paper illustrates the work done at Norway's largest telecommunication operator, Telenor, to address these issues using the continuous simulation methodology as well as other econometric tools.
The ETSI has recently published a front-end processing standard for distributed speech recognition systems.
The key idea of the standard is to extract the spectral features of speech signals at the front-end terminals so that acoustic distortion caused by communication channels can be avoided.
This paper investigates the effect of extracting spectral features from different stages of the front-end processing on the performance of distributed speaker verification systems.
A technique that combines handset selectors with stochastic feature transformation is also employed in a back-end speaker verification system to reduce the acoustic mismatch between different handsets.
Because the feature vectors obtained from the back-end server are vector quantized, the paper proposes two approaches to adding Gaussian noise to the quantized feature vectors for training the Gaussian mixture speaker models.
In one approach, the variances of the Gaussian noise are made dependent on the codeword distance.
In another approach, the variances are a function of the distance between some unquantized training vectors and their closest code vector.
The HTIMIT corpus was ...
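A hedged numpy sketch of the first noise-injection approach described above (the distance measure and scaling constant are our stand-ins, not the paper's exact recipe): each vector-quantized training vector is perturbed by zero-mean Gaussian noise whose standard deviation grows with the distance from its codeword to the nearest other codeword.

```python
import numpy as np

def perturb(quantized, codebook, scale=0.5, rng=None):
    """Add Gaussian noise whose std depends on a codeword distance (illustrative)."""
    rng = rng or np.random.default_rng(0)
    out = []
    for x in quantized:
        d = np.linalg.norm(codebook - x, axis=1)
        d_nearest_other = np.sort(d)[1]            # distance to the closest *other* codeword
        out.append(x + rng.normal(0.0, scale * d_nearest_other, size=x.shape))
    return np.array(out)

if __name__ == "__main__":
    codebook = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 0.0]])
    quantized = codebook[[0, 1, 1, 2]]             # training vectors after vector quantization
    print(perturb(quantized, codebook).shape)      # (4, 2)
```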
Current implementation of real-time service quality within converged IP networks is mainly accomplished by over-provisioning of bandwidth with limited definition of traffic classes on a network-wide basis, possibly enhanced by quasi-static provisioning of network elements using traffic engineering.
Per-flow on-demand resource reservation is mostly unavailable in large packet networks.
Examples are establishments of switched VC using PNNI signaling in ATM and establishment of LSPs using RSVP signaling in MPLS supported networks.
Management complexities and limited scalability slow down this trend.
While per-flow reservation is a conceptually straightforward QoS solution, it is usually looked at as an impractical, non-scalable and even higher-cost solution.
Today high-end routers and switches can handle traffic volumes of many hundreds of gigabits and even terabits per second, which can be translated to millions of simultaneous voice and video connections.
However, current signaling technologies will enable handling of only several hundreds of connections, which can be translated to only a few thousand simultaneous short-lived connections.
Clearly, today's call establishment mechanisms cannot scale to support per-flow reservation, which is conceptually a simple QoS solution.
In order to solve this limitation, there is an on-going effort to reduce the required reservation rate by developing complex hierarchical aggregation schemes and multiplexing concepts.
This limits the use of connection establishment signaling to aggregate traffic engineering, which is hard to define, understand and manage and may fail to provide a required QoS solution.
A new statistical model for DNA considers a sequence to be a mixture of regions with little structure and regions that are approximate repeats of other subsequences, i.e.
instances of repeats do not need to match each other exactly.
Both forward- and reverse-complementary repeats are allowed.
The model has a small number of parameters which are fitted to the data.
In general there are many explanations for a given sequence, and it is shown how to compute the total probability of the data given the model.
Computer algorithms are described for these tasks.
The model can be used to compute the information content of a sequence, either in total or base by base.
This amounts to looking at sequences from a data-compression point of view and it is argued that this is a good way to tackle intelligent sequence analysis in general.
Complex networks are characterized by highly heterogeneous distributions of links, often pervading the presence of key properties such as robustness under node removal.
Several correlation measures have been defined in order to characterize the structure of these nets.
Here we show that mutual information, noise and joint entropies can be properly defined on a static graph.
These measures are computed for a number of real networks and analytically estimated for some simple standard models.
It is shown that real networks are clustered in a well-defined domain of the entropy/noise space.
By using simulated annealing optimization, it is shown that optimally heterogeneous nets actually cluster around the same narrow domain, suggesting that strong constraints actually operate on the possible universe of complex networks.
The evolutionary implications are discussed.
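A hedged sketch of measures of this kind (empirical estimates over the edges of a networkx graph; the paper's exact definitions may differ in normalization): the entropy of the remaining-degree distribution, the joint entropy over the two ends of a random edge, the mutual information between them, and the conditional entropy playing the role of noise.

```python
import math
from collections import Counter
import networkx as nx

def degree_correlation_entropies(G):
    # Remaining degree of an edge endpoint = its degree minus one.
    ends = []
    for u, v in G.edges():
        ends.append((G.degree(u) - 1, G.degree(v) - 1))
        ends.append((G.degree(v) - 1, G.degree(u) - 1))   # count both orientations
    joint = Counter(ends)
    marg = Counter(k for k, _ in ends)
    n = len(ends)
    H_joint = -sum(c / n * math.log2(c / n) for c in joint.values())
    H_marg = -sum(c / n * math.log2(c / n) for c in marg.values())
    mutual_info = 2 * H_marg - H_joint        # I = H(q) + H(q') - H(q, q')
    noise = H_joint - H_marg                  # conditional entropy H(k | k')
    return H_marg, noise, mutual_info

if __name__ == "__main__":
    print(degree_correlation_entropies(nx.barabasi_albert_graph(2000, 3, seed=1)))
```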
This paper is concerned with a multiscale finite element method for numerically solving second order scalar elliptic boundary value problems with highly oscillating coefficients.
In the spirit of previous other works, our method is based on the coupling of a coarse global mesh and of a fine local mesh, the latter one being used for computing independently an adapted finite element basis for the coarse mesh.
The main new idea is the introduction of a composition rule, or change of variables, for the construction of this finite element basis.
In particular, this allows for a simple treatment of high order finite element methods.
We provide
We analyze the space of security policies that can be enforced by monitoring programs at runtime.
Our program monitors are automata that examine the sequence of program actions and transform the sequence when it deviates from the specified policy.
The simplest such automaton truncates the action sequence by terminating a program.
Such automata are commonly known as security automata, and they enforce Schneider's EM class of security policies.
We define automata with more powerful transformational abilities, including the ability to insert a sequence of actions into the event stream and to suppress actions in the event stream without terminating the program.
We give a set-theoretic characterization of the policies these new automata are able to enforce and show that they are a superset of the EM policies.
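A toy, hedged rendering of such a transforming monitor (the policy and action names are invented): the monitor passes actions through, suppresses or replaces some, inserts others, and truncates the run by halting the program.

```python
def monitor(actions):
    """Transform an action stream: pass, suppress, insert, or truncate."""
    seen_secret = False
    for a in actions:
        if a == "read_secret":
            seen_secret = True
            yield a                       # pass through
        elif a == "send" and seen_secret:
            yield "log_blocked_send"      # suppress the send, insert an audit action
        elif a == "delete_all":
            yield "halt"                  # truncation: terminate the program
            return
        else:
            yield a

if __name__ == "__main__":
    trace = ["open", "read_secret", "send", "write", "delete_all", "send"]
    print(list(monitor(trace)))
    # ['open', 'read_secret', 'log_blocked_send', 'write', 'halt']
```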
This work describes an approach for the inference of reduced ordered decision graphs from training sets.
Reduced ordered decision graphs (RODGs) are graphs where the variables can only be tested in accordance with a pre-specified order and no redundant nodes exist.
RODGs have several interesting properties that have made them the representation of choice for the manipulation of Boolean functions in the logic synthesis community.
We derive a RODG representation of the function implemented by a decision tree.
This decision tree can be obtained from a training set using any one of the different algorithms proposed to date.
This RODG is then used as the starting point for an algorithm that derives another RODG of minimal description length.
The reduction in complexity is obtained by performing incremental changes in the RODG.
By using ordered decision diagrams, the task of identifying common subgraphs is made much simpler than the identification of common sub-trees in a decision tree.
Ordered decision graphs require that a variable ordering be specified in advance.
The algorithm that derives such an ordering is based on a commonly used reordering algorithm that finds a locally optimal ordering by swapping the order of two adjacent variables.
These algorithms are tested in a set of examples that are known to be hard to solve using decision trees.
The results show that when an effective reduction of the description length is obtained, significant gains in generalization accuracy can be achieved.
In all cases the generalization accuracy of the final RODG was better than the generalization accuracy of the decision tree that was used as the starting point.
We consider the problem of positioning data collecting base stations in a sensor network.
We show that in general, the choice of positions has a marked influence on the data rate, or equivalently, the power efficiency, of the network.
In our model, which is partly motivated by an experimental environmental monitoring system, the optimum data rate for a fixed layout of base stations can be found by a maximum flow algorithm.
Finding the optimum layout of base stations, however, turns out to be an NP-complete problem, even in the special case of homogeneous networks.
Our analysis of the optimum layout for the special case of the regular grid shows that all layouts that meet certain constraints are equally good.
We also consider two classes of random graphs, chosen to model networks that might be realistically encountered, and empirically evaluate the performance of several base station positioning algorithms on instances of these classes.
In comparison to manually choosing positions along the periphery of the network or randomly choosing them within the network, the algorithms tested find positions which significantly improve the data rate and power efficiency of the network.
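A hedged sketch of the fixed-layout step mentioned above: with base station positions fixed, the achievable data rate is a maximum flow from a super-source feeding every sensor to a super-sink fed by every base station. Node names, generation rates and link capacities below are invented; networkx treats the uncapacitated collection edges as having infinite capacity.

```python
import networkx as nx

def optimum_rate(sensors, links, base_stations, gen_rate=1.0, link_cap=5.0):
    """Max-flow data rate for a fixed layout of base stations."""
    G = nx.DiGraph()
    for s in sensors:
        G.add_edge("SRC", s, capacity=gen_rate)   # each sensor generates data
    for u, v in links:
        G.add_edge(u, v, capacity=link_cap)       # radio links, both directions
        G.add_edge(v, u, capacity=link_cap)
    for b in base_stations:
        G.add_edge(b, "SINK")                     # uncapacitated collection edge
    value, _ = nx.maximum_flow(G, "SRC", "SINK")
    return value

if __name__ == "__main__":
    sensors = ["s1", "s2", "s3", "s4"]
    links = [("s1", "s2"), ("s2", "s3"), ("s3", "s4")]
    print(optimum_rate(sensors, links, base_stations=["s2"]))   # 4.0 for this toy layout
```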
Most verification approaches assume a mathematical formalism in which functions are total, even though partial functions occur naturally in many applications.
Furthermore, although there have been various proposals for logics of partial functions, there is no consensus on which is "the right" logic to use for verification applications.
In this paper, we propose using a three-valued Kleene logic, where partial functions return the "undefined" value when applied outside of their domains.
The particular semantics are chosen according to the principle of least surprise to the user; if there is disagreement among the various approaches on what the value of the formula should be, its evaluation is undefined.
We show that the problem of checking validity in the three-valued logic can be reduced to checking validity in a standard two-valued logic, and describe how this approach has been successfully implemented in our tool, CVC Lite.
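A minimal sketch of Kleene's strong three-valued connectives (with None standing for "undefined"; this is illustrative, not the CVC Lite implementation):

```python
def k_not(a):
    return None if a is None else (not a)

def k_and(a, b):
    if a is False or b is False:
        return False
    if a is True and b is True:
        return True
    return None

def k_or(a, b):
    if a is True or b is True:
        return True
    if a is False and b is False:
        return False
    return None

if __name__ == "__main__":
    # An undefined conjunct is irrelevant once another conjunct is already False,
    # matching the "least surprise" reading described above.
    print(k_and(False, None))   # False
    print(k_or(True, None))     # True
    print(k_and(True, None))    # None (undefined)
```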
Although the need for formalisation of modelling techniques is generally recognised, not much literature is devoted to the actual process involved.
This is comparable to the situation in mathematics where focus is on proofs but not on the process of proving.
This paper tries to fill this lacuna and provides essential principles for the process of formalisation in the context of modelling techniques, as well as a number of small but realistic formalisation case studies.
iCAP is a system that assists users in prototyping contextaware applications.
iCAP supports sketching for creating input and output devices, and using these devices to design interaction rules, which can be prototyped in a simulated or real context-aware environment.
We were motivated to build our system by the lack of tools currently available for developing rich sensor-based applications.
We iterated on the design of our system using paper prototypes and obtained feedback from fellow researchers, to develop a robust system for prototyping context-aware applications.
In this paper we show that other models of a Universe in dynamical equilibrium without expansion had predicted this temperature prior to Gamow.
Moreover, we show that Gamow's own predictions were worse than these previous ones.
Before beginning, let us list briefly some important historical information which helps to understand the findings.
Stefan found experimentally in 1879 that the total bolometric flux of radiation F emitted by a black body at a temperature T is given by F = σT^4, where σ is now called the Stefan-Boltzmann constant (5.67 × 10^-8 W m^-2 K^-4).
The theoretical derivation of this expression was obtained by Boltzmann in 1884.
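As a worked illustration of the law (our numbers, rounded): a black body at T = 3 K radiates

$$F = \sigma T^4 = 5.67\times10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}} \times (3\ \mathrm{K})^4 \approx 4.6\times10^{-6}\ \mathrm{W\,m^{-2}}.$$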
In 1924 Hubble established that the nebulae are stellar systems outside the Milky Way.
In 1929 he obtained the famous redshift-distance law
This paper describes ThumbTEC, a novel general purpose input device for the thumb or finger that is useful in a wide variety of applications from music to text entry.
The device is made up of three switches in a row and one miniature joystick on top of the middle switch.
The combination of joystick direction and switch(es) controls what note or alphanumeric character is selected by the finger.
Several applications are detailed.
Given a metric space (X, d), a natural distance measure on probability distributions over X is the earthmover metric.
We use randomized rounding of earthmover metrics to devise new approximation algorithms for two well-known classification problems, namely, metric labeling and 0-extension.
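For intuition, the one-dimensional special case of the earthmover metric is directly available in scipy (the paper's algorithms work over general metric spaces (X, d); the distributions below are invented):

```python
from scipy.stats import wasserstein_distance

# Two discrete distributions on the points 0, 1, 2 with different weights.
d = wasserstein_distance([0, 1, 2], [0, 1, 2],
                         u_weights=[0.5, 0.5, 0.0],
                         v_weights=[0.0, 0.5, 0.5])
print(d)  # 1.0: each half-unit of mass moves one step to the right
```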
In the design environment, system properties, such as fault tolerance and safe operation, need to be demonstrated in new product development of safety-critical systems.
The onus of the proof is by no means trivial, and the associated computational costs can be overwhelming.
In this paper, a novel quality metric is introduced, Property Coverage (PC), which allows, with affordable computational effort, obtaining a measure of the degree of confidence within which the property under evaluation holds.
The proposed method uses fault sampling, and enables PC evaluation with limited fault list sizes.
The methodology and associated metrics are ascertained through a case study, an ASIC for a safety-critical gas burner control system, recently certified to be compliant to EN 298 safety standard.
Recent findings in the domain of combining classifiers provide a surprising revision of the usefulness of diversity for modelling combined performance.
jects that use these technologies for archiving and retrieval.
The main goal of the MAESTRO project is to discover, implement, and evaluate various combinations of these technologies to achieve analysis performance that surpasses the sum of the parts.
For example, British Prime Minister Tony Blair can be identified in the news by his voice, his appearance, captions, and other cues.
A combination of these cues should provide more reliable identification of the Prime Minister than using any of the cues on their own.
MAESTRO is a highly multidisciplinary effort, involving contributions from three laboratories across two divisions at SRI.
Each of these SRI technologies is described in more detail here.
The integrating architecture makes it easy to combine these in different ways, and to incorporate new analysis technologies developed by our team or by others.
Multimedia Analysis Technologies on the MAESTRO Score: on the MAESTRO Score (see Figure 1), on each line, similar
Virtual Machines (VMs) and Proof-Carrying Code (PCC) are two techniques that have been used independently to provide safety for (mobile) code.
Existing virtual machines, such as the Java VM, have several drawbacks: First, the effort required for safety verification is considerable.
Second and more subtly, the need to provide such verification by the code consumer inhibits the amount of optimization that can be performed by the code producer.
This in turn makes just-in-time compilation surprisingly expensive.
Proof-Carrying Code, on the other hand, has its own set of limitations, among which are the sizes of the proofs and the fact that the certified code is no longer machine-independent.
In this paper, we describe work in progress on combining these approaches.
Our hybrid safe-code solution uses a virtual machine that has been designed specifically to support proof-carrying code, while simultaneously providing efficient just-in-time compilation and target-machine independence.
In particular, our approach reduces the complexity of the required proofs, resulting in fewer proof obligations that need to be discharged at the target machine.
In this paper, we show that the multiple input/output-queued (MIOQ) switch proposed in our previous paper [22] can emulate an output-queued switch with only two parallel switches.
The MIOQ switch requires no speedup and provides an exact emulation of an output-queued switch with a broad class of service scheduling algorithms including FIFO, weighted fair queueing (WFQ) and strict priority queueing regardless of incoming traffic pattern and switch size.
First, we show that an N × N MIOQ switch with a (2, 2)-dimensional crossbar fabric can exactly emulate an N × N output-queued switch.
For this purpose, we propose the stable strategic alliance (SSA) algorithm that can produce a stable many-to-many assignment, and then apply it to the scheduling of an MIOQ switch.
Next, we prove that a (2, 2)-dimensional crossbar fabric can be implemented by two N × N crossbar switches in parallel for an N × N MIOQ switch.
For a proper operation of two crossbar switches in parallel, each input-output pair matched by the SSA algorithm must be mapped to one of two crossbar switches.
For this mapping, we propose a simple algorithm that requires at most 2N steps for all matched input-output pairs.
In addition, to relieve the implementation burden of N input buffers being accessed simultaneously, we propose a buffering scheme called redundant buffering which requires two memory devices instead of N physically-separate memories.
We consider a new simulation-based optimization method called the Nested Partitions (NP) method.
This method generates a Markov chain and solving the optimization problem is equivalent to maximizing the stationary distribution of this Markov chain over certain states.
The method may therefore be considered a Monte Carlo sampler that samples from the stationary distribution.
We show that the Markov chain converges geometrically fast to the true stationary distribution, and use these results to derive a stopping criterion for the method.
Since the Lucas-Kanade algorithm was proposed in 1981 image alignment has become one of the most widely used techniques in computer vision.
Applications range from optical flow, tracking, and layered motion, to mosaic construction, medical image registration, and face coding.
Numerous algorithms have been proposed and a variety of extensions have been made to the original formulation.
We present an overview of image alignment, describing most of the algorithms in a consistent framework.
We concentrate on the inverse compositional algorithm, an efficient algorithm that we recently proposed.
We examine which of the extensions to the Lucas-Kanade algorithm can be used with the inverse compositional algorithm without any significant loss of efficiency, and which cannot.
In this paper, Part 3 in a series of papers, we cover the extension of image alignment to allow linear appearance variation.
We first consider linear appearance variation when the error function is the Euclidean L2 norm.
We describe three different algorithms, the simultaneous, project out, and normalization inverse compositional algorithms, and empirically compare them.
Afterwards we consider the combination of linear appearance variation with the robust error functions described in Part 2 of this series.
We first derive robust versions of the simultaneous and normalization algorithms.
Since both of these algorithms are very inefficient, as in Part 2 we derive efficient approximations based on spatial coherence.
We end with an empirical evaluation of the robust algorithms.
The convergence of communication and computation over the past two decades has given us the Internet.
We believe that the next phase of the information technology revolution will be the convergence of control, communication, and computation.
This will provide the ability for large numbers of sensors, actuators, and computational units, all interconnected wirelessly or over wires, to interact with the physical environment.
We present a logical foundation for object-oriented specifications which supports a rigorous formal development of object-oriented systems.
In this setting, we study two different views on a system, the implementor's view (glass-box view) and the user's view (black-box view) which both are founded on a model-theoretic semantics.
We also discuss the hierarchical construction of specifications and realisations.
Our approach is abstract in the sense that it can be instantiated by various concrete specification formalisms like OCL or JML.
To reason effectively about programs, it is important to have some version of a transitive-closure operator so that we can describe such notions as the set of nodes reachable from a program's variables.
On the other hand, with a few notable exceptions, adding transitive closure to even very tame logics makes them undecidable.
In this paper, we explore...
While the needs of many individuals with disabilities can be satisfied with power wheelchairs, some members of the disabled community find it difficult or impossible to operate a standard power wheelchair.
To accommodate this population, several researchers have used technologies originally developed for mobile robots to create "smart wheelchairs" that reduce the physical, perceptual, and cognitive skills necessary to operate a power wheelchair.
We are developing a Smart Wheelchair Component System (SWCS) that can be added to a variety of commercial power wheelchairs with minimal modification.
This paper describes the design of a prototype of the SWCS, which has been evaluated on wheelchairs from four different manufacturers.
In this report, we discuss Tree Music, an interactive computer music installation created using GAIA (Graphical Audio Interface Application), a new open-source interface for controlling the RTcmix synthesis and effects processing engine.
Tree Music, commissioned by the University of Virginia Art Museum, used a wireless camera with a wide-angle lens to capture motion and occlusion data from exhibit visitors.
We show how GAIA was used to structure and navigate the compositional space, and how this program supports both graphical and text-based programming in the same application.
GAIA provides a GUI which combines two open-source applications: RTcmix and Perl.
New challenging service scenarios are integrating wireless portable devices with limited and heterogeneous capabilities.
They are expected to access both traditional and novel (context-dependent) Internet services.
Okapi BM25 scoring of anchor-text surrogate documents has been shown to facilitate effective ranking in navigational search tasks over web data.
We hypothesize that even better ranking can be achieved in certain important cases, particularly when anchor scores must be fused with content scores, by avoiding length normalisation and by reducing the attenuation of scores associated with high tf.
Preliminary results are presented.
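A hedged sketch of the scoring variant hypothesized above: standard Okapi BM25 over anchor-text surrogate documents, where setting b = 0 switches off length normalisation and a larger k1 weakens the saturation (attenuation) of high tf. Collection statistics and parameter values are purely illustrative.

```python
import math

def bm25_term(tf, df, N, doc_len, avg_len, k1=2.0, b=0.0):
    idf = math.log((N - df + 0.5) / (df + 0.5) + 1.0)
    norm = 1.0 - b + b * (doc_len / avg_len)      # b = 0 -> no length normalisation
    return idf * tf * (k1 + 1.0) / (k1 * norm + tf)

def score(query_terms, anchor_tf, df, N, doc_len, avg_len, **params):
    return sum(bm25_term(anchor_tf.get(t, 0), df[t], N, doc_len, avg_len, **params)
               for t in query_terms if t in df)

if __name__ == "__main__":
    df = {"acme": 120, "homepage": 4000}          # document frequencies (invented)
    anchor_tf = {"acme": 57, "homepage": 9}       # term counts in one anchor-text surrogate
    print(score(["acme", "homepage"], anchor_tf, df, N=100000,
                doc_len=300, avg_len=40, k1=2.0, b=0.0))
```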
Consider a large collection of objects, each of which has a large number of attributes of several different sorts.
We assume that there are data attributes representing data, attributes which are to be statistically estimated from these, and attributes which can be controlled or set.
A motivating example is to assign a credit score to a credit card prospect indicating the likelihood that the prospect will make credit card payments and then to set a credit limit for each prospect in such a way as to maximize the over-all expected revenue from the entire collection of prospects.
In the terminology above, the credit score is called a statistical attribute and the credit limit a control attribute.
The methodology we describe in the paper uses data mining to provide more accurate estimates of the statistical attributes and to provide more optimal settings of the control attributes.
We briefly describe how to parallelize these computations.
We also briefly comment on some of the data management issues which arise for these types of problems in practice.
We propose using object ...
Computer game character design and robotics share many of the same goals and computational constraints.
Both attempt to create intelligent artifacts that respond realistically to their environments, in real time, using limited computation resources.
Unfortunately, none of the current AI architectures is entirely satisfactory for either field.
We discuss some of the issues in believability and computational complexity that are common to both fields, and the types of architectures that have been used in the robotics world to cope with these problems.
Then we present a new class of architectures, called role passing architectures which combine the ability to perform high level inference with real-time performance.
The continuing drive to improve operating efficiency of information technology is motivating the development of knowledge planes or frameworks for coordinated monitoring and control of large data computing infrastructures.
In this paper, we propose the notion of a location- and environment-aware extended knowledge plane.
As an illustration of such an extended knowledge plane, we architect the Splice framework, which extends the knowledge plane to include data from environmental sensors and the notions of physical location and spatial and topological relationships with respect to facilities-level support systems.
Our proposed architecture is designed to support easy extensibility, scalability, and support the notion of higher-level object views and events in the data center.
Using the above architecture, we demonstrate the richness of queries facilitated by Splice and discuss their potential for automating several categories of data center maintenance and control.
We also discuss our experience with deploying Splice on real-world data centers and discuss the value from Splice in the context of one specific optimization that would otherwise not have been possible without the extended knowledge plane.
Finally, we also provide evidence of the scalability of this deployment with the number of readings, both in terms of database storage and query performance.
We explore the utility of second-order statistics for blind identification/equalization of nonlinear channels.
Under standard assumptions it is shown that the channel cannot be identified to within a scaling factor from the output second order statistics, but that the ambiguity is at a level that permits equalization.
We show that these results cover cases that the prior literature does not address.
We explore the extent to which we can exploit interest point detectors for representing and recognising classes of objects.
Detectors propose sparse sets of candidate regions based on local salience and stability criteria.
However, local selection does not take into account discrimination reliability across instances in the same object class, so we realise selection by learning from weakly supervised data in the form of images paired with their captions.
Through experiments on a wide variety of object classes and detectors, we show that modeling object recognition as a constrained data association problem and learning the Bayesian way by integrating over multiple hypotheses leads to sparse classifiers that outperform contemporary methods.
Moreover, our learned representations based on local features leave little room for improvement on standard image databases, so we propose new data sets to corroborate models for general object recognition.
The development and maintenance of domain-specific application ontologies require knowledge input from domain experts who are usually without any formal ontology or AI background.
When dealing with large-scale ontologies, for example of the kind with which we are currently familiar in the biomedical spheres, quality assurance becomes important in minimizing modelling mistakes and the application errors which they bring in their wake.
In this paper we describe how the upper-level framework BFO (for: Basic Formal Ontology), developed by the Institute for Formal Ontology and Medical Information Science, is being used to provide automatic error detection and run-time modelling support to the development of LinKBase, a large-scale medical domain ontology developed by Language and Computing NV to serve a range of natural language processing applications.
We describe UbiWise, a simulator for ubiquitous computing.
The simulator concentrates on computation and communications devices situated within their physical environments.
It presents two views, each in a separate window on the desktop of the users' PC.
One of the views provides a three dimensional world, built on the Quake III Arena graphics engine and serves to simulate a first-person view of the physical environment of a user.
The other view, built using Java, shows a close-up view of devices and objects the user may manipulate.
These views act as one unified whole by maintaining a client-server model with a central server.
Multiple users can attach to the same server to create interactive ubiquitous computing scenarios.
We describe how UbiWise looks to researchers and examples of its use as tool for ubiquitous computing research.
Simulation modeling and analysis requires an investment in human resources and software.
And the rewards from using simulation are significant.
Many companies fine tune their operations and reduce waste using simulation.
But in the end, every time modeling and analysis are performed, a decision has to be made whether the simulation is "worth doing" (Waite 1999).
In this paper we will enumerate how AutoMod has been used to improve return on investment (ROI) from simulation.
Let A and B be two n × n matrices over a ring R (e.g., the reals or the integers), each containing at most m non-zero elements.
We present a new algorithm that multiplies A and B using O(m^0.7 n^1.2 + n^(2+o(1))) algebraic operations (i.e., multiplications, additions and subtractions) over R. The naive matrix multiplication algorithm, on the other hand, may need to perform Θ(mn) operations to accomplish the same task.
For m ≤ n^1.14, the new algorithm performs an almost optimal number of only n^(2+o(1)) operations.
For m ≤ n^1.68, the new algorithm is also faster than the best known matrix multiplication algorithm for dense matrices, which uses O(n^2.38) algebraic operations.
The new algorithm is obtained using a surprisingly straightforward combination of a simple combinatorial idea and existing fast rectangular matrix multiplication algorithms.
We also obtain improved algorithms for the multiplication of more than two sparse matrices.
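For context, here is a hedged sketch of the naive baseline referred to above, multiplying two sparse matrices stored as (row, col) -> value dictionaries; in the worst case it performs on the order of m·n scalar multiplications, the bound the new algorithm improves on by combining a combinatorial partition with fast rectangular matrix multiplication.

```python
from collections import defaultdict

def sparse_matmul(A, B):
    """Naive sparse product of two matrices given as {(row, col): value} dicts."""
    B_by_row = defaultdict(list)
    for (k, j), v in B.items():
        B_by_row[k].append((j, v))      # index B's nonzeros by row
    C = defaultdict(float)
    for (i, k), a in A.items():
        for j, b in B_by_row[k]:
            C[(i, j)] += a * b
    return dict(C)

if __name__ == "__main__":
    A = {(0, 0): 1.0, (0, 1): 2.0, (1, 1): 3.0}
    B = {(0, 0): 4.0, (1, 0): 5.0, (1, 1): 6.0}
    print(sparse_matmul(A, B))  # {(0, 0): 14.0, (0, 1): 12.0, (1, 0): 15.0, (1, 1): 18.0}
```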
This paper describes the central component of a system to assist intelligence analysts detect deception.
We describe how deceptions exploit cognitive limits and biases and review prior work on processes that can help people recognize organized deceptions.
Our process is based on Heuer's Analysis of Competing Hypotheses, which we automate by generating state-based plans and converting them to Bayesian belief networks.
Our decision aid uses a concept from Bayesian classification to identify distinguishing evidence that a deceiver must hide and a counter-deceiver must uncover.
We illustrate the process with one of the most Century.
The objective of this paper is to review trends in government expenditures in the developing world, to analyze the causes of change, and to develop an analytical framework for determining the differential impacts of various government expenditures on economic growth.
Contrary to common belief, it is found that structural adjustment programs increased the size of government spending, but not all sectors received equal treatment.
As a share of total government spending, expenditures on agriculture, education, and infrastructure in Africa; on agriculture and health in Asia; and on education and infrastructure in Latin America all declined as a result of the structural adjustment programs.
The impact of various types of government spending on economic growth is mixed.
In Africa, government spending on agriculture and health was particularly strong in promoting economic growth.
Asia's investments in agriculture, education, and defense had positive growth-promoting effects.
However, all types of government spending except health were statistically insignificant in Latin America.
Structural adjustment programs promoted growth in Asia and Latin America, but not in Africa.
Growth in agricultural production is most crucial for poverty alleviation in rural areas.
Agricultural spending, irrigation, education, and roads all contributed strongly to this growth.
Disaggregating total agricultural expenditures into research and non-research spending reveals that research had a much larger impact on productivity than non-research spending.
Silk and SML are software libraries of Java, C++, C# and VB.Net classes that support object-oriented, discrete-event simulation.
SML^TM is a new open-source or "free" software library of simulation classes that enable multi-language development of complex, yet manageable simulations through the construction of usable and reusable simulation objects.
These objects are usable because they express the behavior of individual entity-threads from the system object perspective using familiar process-oriented modeling within an object-oriented design supported by a general purpose programming language.
These objects are reusable because they can be easily archived, edited and assembled using professional development environments that support multilanguage, cross-platform execution and a common component architecture.
This introduction supports the tutorial session that describes the fundamentals of designing and creating an SML or Silk model.
This paper analyzes the access of emerging market borrowers to international debt markets and specifically their decision of whether to borrow from banks or on the bond market (a decision that does not appear to have been analyzed in the literature before).
This choice is modeled using a framework that focuses on the implications of asymmetric information.
In this model, monitoring by banks can attenuate moral hazard.
But monitoring has costs, which cause the bank loan market to dry up faster than the bond market as risk and interest rates rise (reflecting the presence of adverse selection).
These are the factors that drive the borrower's decision between bank loans or bonds and that determine whether high risk borrowers can access international markets at all.
The model predicts that borrowers from countries where economic and political risks are highest will not have market access.
More substantively, it predicts that borrowers from countries where economic and political risks are somewhat lower will issue junk bonds, while those from countries where risks are still lower will borrow from banks, and that borrowers from the lowest risk countries will issue high-quality ("investment grade") bonds.
A censored regression model with random effects, estimated using simulated maximum likelihood, supports these predictions and reveals the variables that affect the choice of debt instrument at each end of the risk spectrum.
The distinction between reference ontologies and application ontologies crept rather unobtrusively into the recent literature on knowledge engineering.
A lot of the discourse surrounding this distinction - notably, the one framing the workshop generating this collection of papers - suggests the two types of ontologies are in some sort of opposition to one another.
Thus, Borge et al. [3] characterize reference ontologies (more recently, foundational ontologies) as rich, axiomatic theories whose focus is to clarify the intended meanings of terms used in specific domains.
Application ontologies, by contrast, provide a minimal terminological structure to fit the needs of a specific community.

The stochastic theory of non-stationary processes with stationary increments began to be applied to detailed hydraulic conductivity (K) measurements during the early 1990's [Molz and Bowman, 1993; Painter, 1996].
Numerous additional applications followed [Molz et al., 2003].
Initial studies assumed that ln(K) increments or fluctuations (the stationary process) would follow Gaussian probability density functions (PDFs).
However, careful analysis of a variety of measurements soon showed that the increment PDFs were strongly non-Gaussian with a distinct resemblance to the Levy-stable PDF [Painter and Paterson, 1994].
This PDF was attractive, because like the Gaussian PDF it served as the natural mathematical basis for a stochastic fractal.
Still further analysis of measurements, and simulations, led researchers to realize that the tails of the empirical PDFs do not have a power-law decay [Painter, 1996; Lu and Molz, 2001].
This led to the proposal of stochastic models that c
Open source information on the Internet can contribute significantly to such assessments as competitive intelligence, business trends, or evolving social attitudes.
However, because the accuracy of this open source information varies widely, the correctness of the information needs to be assessed before it can be used reliably.
Current methods for estimating correctness rely on the subjective opinions of knowledgeable people in the field and can vary among evaluators.
Today, new data collection and information management tools enable objective reviewer-independent assessment of open source information correctness.
These tools support four objective methods for estimating reliability: (1) objective assessment of the historical accuracy of a particular source, by subject matter and viewpoint; (2) self-assessment of reliability from the source itself; (3) consistency of report with prior incidents and with established facts; and (4) consistency of information with other independent reports.
This paper describes how these techniques are employed in Evidence Based Research's war rooms to help clients understand the diversity and credibility of viewpoints on clientselected topics.
This paper introduces the concept of an electronic trade scenario as an aid to the management of (global) supply chains, and other forms of international, business-to-business electronic commerce.
The problem addressed is the following.
Competition demands that trade transactions be handled efficiently and securely.
However, the same competitive environment also demands flexibility, and the ability to re-design the supply chain as conditions change.
Current advances in electronic document technologies, notably XML schemas and ebXML, offer new possibilities for generic, reusable component software at the level of document specifications.
Here we address the additional challenge of supporting the rapid re-engineering of the process specifications for the supply chain.
For this, we focus on the component technologies for electronic trade scenarios: generic, reusable models of the entire trade transaction.
They are stored in an on-line repository, where each member of the supply chain can download the transaction component for their role in the transaction.
We describe a CASE tool, called InterProcs, which provides a graphical modeling interface for supply chain process specifications at several levels of abstraction, including the Unified Modeling Language (UML) and Documentary Petri Nets (DPN).
Once specified, InterProcs automatically produces an operating prototype of the supply chain transaction model.
In this paper we address the additional step to production implementation of the prototype supply chain model using distributed component technology.
Because we are concerned with open standards, we focus specially on the Java 2 Enterprise Edition (J2EE) platform.
On-Line Analytical Processing (OLAP) based on a dimensional view of data is being used increasingly for the purpose of analyzing very large amounts of data.
To improve query performance, modern OLAP systems use a technique known as practical pre-aggregation, where select combinations of aggregate queries are materialized and re-used to compute other aggregates; full pre-aggregation, where all combinations of aggregates are materialized, is infeasible.
However, this reuse of aggregates is contingent on the dimension hierarchies and the relationships between facts and dimensions satisfying stringent constraints, which severely limits the scope of practical pre-aggregation.
This paper
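A small, hedged illustration of the aggregate reuse behind practical pre-aggregation (not the paper's algorithm; column names and data are invented): when the day -> month containment is strict, a pre-materialized daily SUM can be rolled up to answer the monthly SUM without touching the base fact table.

```python
import pandas as pd

daily = pd.DataFrame({
    "day":   ["2024-01-01", "2024-01-02", "2024-02-01"],
    "sales": [100, 150, 80],                      # pre-aggregated per day
})
daily["month"] = pd.to_datetime(daily["day"]).dt.to_period("M")

# Roll the daily aggregate up to months instead of re-scanning the fact table.
monthly = daily.groupby("month", as_index=False)["sales"].sum()
print(monthly)
#      month  sales
# 0  2024-01    250
# 1  2024-02     80
```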
The Entity-Relationship (ER) model, using varying notations and with some semantic variations, is enjoying a remarkable, and increasing, popularity both in the research community, including the computer science curriculum, and in industry.
In step with the increasing diffusion of relational platforms, ER modeling is growing in popularity.
It has been widely recognized that temporal aspects of database schemas are prevalent and difficult to model using the ER model.
As a result, how to enable the ER model to properly capture time-varying information has, for a decade and a half, been an active area in the database-research community.
This has led to the proposal of close to a dozen temporally enhanced ER models.
This paper surveys all temporally enhanced ER models known to the authors.
It is the first paper to provide a comprehensive overview of temporal ER modeling and it, thus, meets a need for consolidating and providing easy access to the research in temporal ER modeling.
In the presentation of each model, the paper examines how the time-varying information is captured in the model and presents the new concepts and modeling constructs of the model.
A total of 19 different design properties for temporally enhanced ER models are defined, and each model is characterized according to these properties.
Unsolicited and undesirable e-mail (spam) is a growing problem for Internet users and service providers.
We present the Secure Internet Content Selection (SICS) protocol, an efficient cryptographic mechanism for spam-control, based on allocation of responsibility (liability).
With SICS, e-mail is sent with a content label, and a cryptographic protocol ensures labels are authentic and penalizes falsely labeled e-mail (spam).
The protocol supports trusted senders (penalized by loss of trust) and unknown senders (penalized financially).
The recipient can determine the compensation amount for falsely labeled e-mail (spam).
SICS is practical, with negligible overhead, gradual adoption path, and use of existing relationships; it is also flexible and appropriate for most scenarios, including deployment by end users and/or ISPs and support for privacy and legitimate, properly labeled commercial e-mail.
SICS improves on other crypto-based proposals for spam controls, and complements non-cryptographic spam controls.
The paper examines the efficiency of soft computing techniques in structural optimization, in particular algorithms based on evolution strategies combined with neural networks, for solving large-scale, continuous or discrete structural optimization problems.
The proposed combined algorithms are implemented both in deterministic and reliability based structural optimization problems, in an effort to increase the computational efficiency as well as the robustness of the optimization procedure.
The use of neural networks was motivated by the time-consuming repeated finite element analyses required during the optimization process.
A trained neural network is used to perform either the deterministic constraints check or, in the case of reliability based optimization, both the deterministic and the probabilistic constraints checks.
The suitability of the neural network predictions is investigated in a number of structural optimization problems in order to demonstrate the computational advantages of the proposed methodologies.
Streaming multimedia content to heterogeneous handheld devices is a significant research challenge, due to the diverse computation capabilities and battery lifetimes of these devices.
A unified framework that integrates low-level architectural optimizations (CPU, memory), OS power-saving mechanisms (Dynamic Voltage Scaling) and adaptive middleware techniques (admission control, transcoding, network traffic regulation) can provide significant improvements in both the system performance and user experience.
In this paper, we present such an integrated framework and investigate the trade-offs involved in serving distributed clients simultaneously, while maintaining acceptable QoS levels for each client.
We show that the power savings attained at both CPU/memory and network levels can be aggregated for increased overall performance.
Based on this, we demonstrate how an integrated framework, that supports tight coupling of inter-level parameters can enhance user experience on handheld devices.
The young field of ubiquitous computing is steadily making progress and gaining attention in both academia and industry.
While new gadgets and smart home appliances cannot appear fast enough for many technologists, such rapid introductions of new technologies often come with unexpected side-effects.
Due to the
The Internet is known to display a highly heterogeneous structure and complex fluctuations in its traffic dynamics.
Congestion seems to be an inevitable result of users' behavior coupled to the network dynamics, and its effects should be minimized by choosing appropriate routing strategies.
But what are the requirements of routing depth in order to optimize the traffic flow?
In this paper we analyse the behavior of Internet traffic with a topologically realistic spatial structure, as described in a previous study [S.-H. Yook et al., Proc. Natl Acad. Sci. USA 99, 13382 (2002)].
The model involves self-regulation of packet generation and different levels of routing depth.
It is shown that it reproduces the relevant key statistical features of the Internet's traffic.
Moreover, we also report the existence of a critical path horizon defining a transition from low-efficient traffic to highly efficient flow.
This transition is actually a direct consequence of the web's small world architecture exploited by the routing algorithm.
Once routing tables reach the network diameter, the traffic experiences a sudden transition from a low-efficient to a highly-efficient behavior.
It is conjectured that routing policies might have spontaneously reached such a compromise in a distributed manner.
Internet would thus be operating close to such critical path horizon.
this paper were written in Perl and SQL.
The database used was IBM DB2 for workgroups, version 7.
Additionally, the Norm component of the UMLS Lexical Tools was obtained from the National Library of Medicine in 2003.
Applications were run on a dual-processor Sun UltraSPARC III V880 under the SunOS 5.8 operating system.
Introduction: fadR is a transcription factor which has a helix-turn-helix motif [5], one of the most common motifs among prokaryotic transcription factors.
It regulates metabolic pathways such as the fatty acid biosynthesis and degradation pathways and the glyoxylate pathway, and may play a direct or indirect role in the regulation of amino acid biosynthesis [1, 2, 3, 4].
By use of cDNA microarray one can search for regulation of transcription factors and its impact on metabolism using microarray data analysis and sequence information.
By comparing the wild-type W3110 strain and a fadR null mutant strain under various conditions (i.e., different carbon sources such as oleic acid), it is possible to identify the functional role of fadR [1, 3].
Also candidate genes that could possibly be regulated by fadR can be predicted using sequence information, which assist microarray data analysis.
Expression profile of wild-type W3110 strain and fadR null mutant strain (WFR) was compared in LB media (peptone 10g/L, yeas
The ability to locate useful on-line Web Services is becoming critical for today's service-oriented business applications.
A number of efforts have been put to enhance the service discovery process by using conceptualised knowledge, called ontology, of particular service domains to describe service characteristics.
This paper presents an ontology-based approach to enhance descriptions of Web Services that are expressed in WSDL with ontology-based behavioural information, i.e.
input, conditional/unconditional output, precondition, and conditional/unconditional effect of the services.
Having a service ontology associated with each Web Service description, queries for services based on behavioural constraints can benefit from inferring semantics of the service from the service ontology.
The service discovery process becomes closer to discovery by service semantics or behaviour, in contrast with discovery by matching of service attributes values -- the mechanism that is supported currently by Web Services.
Recent advances in on-line data capturing technologies and its widespread deployment in devices like PDAs and notebook PCs is creating large amounts of handwritten data that need to be archived and retrieved efficiently.
Word-spotting, which is based on a direct comparison of a handwritten keyword to words in the document, is commonly used for indexing and retrieval.
We propose a string matching-based method for word-spotting in on-line documents.
The retrieval algorithm achieves a precision of 92.3% at a recall rate of 90% on a database of 6,672 words written by 10 different writers.
Indexing experiments show an accuracy of 87.5% using a database of 3,872 on-line words.
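As a toy illustration of what a string matching-based word spotter can look like (the features, function names, and threshold below are illustrative assumptions, not the paper's method), the following Python sketch quantizes pen-stroke directions into symbol strings and ranks document words by normalized edit distance to the keyword:

# Toy sketch of string matching-based word spotting: each on-line word is
# reduced to a string of quantized stroke-direction codes and compared to the
# keyword by Levenshtein distance.  The paper's features and matcher are richer.
import math

def direction_codes(points, levels=8):
    """Quantize the writing direction between successive pen points into `levels` symbols."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        angle = math.atan2(y1 - y0, x1 - x0) % (2 * math.pi)
        codes.append(int(angle / (2 * math.pi) * levels) % levels)
    return codes

def edit_distance(a, b):
    """Classic Levenshtein distance between two code sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def spot(keyword_points, word_points_list, threshold=0.4):
    """Return indices of words whose normalized distance to the keyword is small."""
    key = direction_codes(keyword_points)
    hits = []
    for i, points in enumerate(word_points_list):
        codes = direction_codes(points)
        dist = edit_distance(key, codes) / max(len(key), len(codes), 1)
        if dist <= threshold:
            hits.append(i)
    return hits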
We present TOTA ("Tuples On The Air"), a novel middleware for supporting adaptive context-aware application in dynamic network scenarios.
The key idea in TOTA is to rely on spatially distributed tuples for both representing contextual information and supporting uncoupled and adaptive interactions between application components.
The middleware propagates tuples across a network on the basis of application-specific patterns and adaptively re-shapes the resulting distributed structures accordingly to changes in the network scenario.
Application components can locally "sense" these structures and exploit them to acquire contextual information and carry on complex coordination activities in an adaptive way.
Several examples show the effectiveness of the TOTA approach.
The process of documenting and describing the world's languages is undergoing radical transformation with the rapid uptake of new digital technologies for capture, storage, annotation, and dissemination.
While these technologies greatly enhance our ability to create digital data, their uncritical adoption has compromised our ability to preserve this data.
The new digital language resources of all kinds -- lexicons, interlinear texts, grammars, language maps, field notes, recordings -- are difficult to reuse and less portable than the conventional printed resources they replace.
This article is concerned with the portability of digital language resources, specifically with their ability to transcend computer environments, scholarly communities, domains of application, and the passage of time.
We begin by reviewing current uses of software tools and digital technologies for language documentation and description.
This sheds light on how digital language resources are created and managed, and leads to an analysis of portability problems in the seven areas of content, format, discovery, access, citation, preservation, and rights.
After characterizing each problem we articulate a set of values which underlie our intuitions about good and bad practices, and which serve as requirements for new practices supporting the creation of portable language resources.
Next we lay out an extensive set of recommendations to serve as a starting point for the community-based effort that we envision.
We conclude with a discussion of OLAC, the Open Language Archives Community, which provides a process that may be used to identify community-agreed best practices over the long term.
In this paper, we describe a new kind of image representation in terms of local multi--modal Primitives.
Our local Primitives can be characterized by three properties: (1) They represent different aspects of the image in terms of multiple visual modalities.
(2) They are adaptable according to context.
(3) They provide a condensed representation of local image structure.
This paper describes the conceptual framework, methodology, and some results from a project on the Emotions of Teaching and Educational Change.
It introduces the concepts of emotional intelligence, emotional labor, emotional understanding and emotional geographies.
Drawing on interviews with 53 teachers in 15 schools, the paper then describes key differences in the emotional geographies of elementary and secondary teaching.
Elementary teaching is characterized by physical and professional closeness which creates greater emotional intensity; but in ambivalent conditions of classroom power, where intensity is sometimes negative.
Secondary teaching is characterized by greater professional and physical distance leading teachers to treat emotions as intrusions in the classroom.
This distance, the paper argues, threatens the basic forms of emotional understanding on which high-quality teaching and learning depend.
te; oxygen uptake.
INTRODUCTION There has been much recent interest in the potential for enhancement of fish growth in aquaculture through genetic manipulation (Du et al., 1992; Devlin et al., 1994; Martinez et al., 1996, 1999).
A strain of growth hormone transgenic (GHT) tilapia Oreochromis sp.
hybrids has been created that expresses homologous tilapia growth hormone (GH) at low levels in its tissues and exhibits increased growth relative to wild-type (W) conspecifics (de la Fuente et al., 1995; Martinez et al., 1996, 1999).
Tilapia are important in warm-water aquaculture throug
be the true solution, but this is ---as will be shown---a flawed reasoning.
Keywords: Radioactivity, Neutrinos, Solar activity, Determinism, Quantum mechanics, Causality
1 Analysing the data
Since Figure 2 in [1] is not sufficient for a mathematical analysis of the data, I asked E.D.
Falkenberg for the data in form of a numerical list.
Falkenberg was so kind as to send me his data of 73 measurements [t, M(t)] (attached in the Appendix) adding the comment that he wondered what an analysis of the data by a mathematician could bring out in addition.
Now, here is the result: First of all I plotted the graph of the measurements [t, M(t)] (see Fig. 1).
A careful examination of the graph of M(t) seems to show a slight corner at t_1 = 223.4.
In order to consider the graph in more detail we make use of a kind of "microscope." We know from experience that radioactive decay is (at least roughly) governed by exponential decrease.
Hence we remove the exponential function by taking the lo
Today's increasing information supply raises the need for more effective and automated information processing where individual information adaptation (personalization) is one possible solution.
Earlier computer systems for personalization lacked the ability to easily define and measure the effectiveness of personalization efforts.
Numerous projects failed to live up to their expectations, and the demand for evaluation increased.
This thesis presents some underlying concepts and methods for implementing personalization in order to increase stated business objectives.
A personalization system was developed that utilizes descriptions of information characteristics (metadata) to perform content based filtering in a non-intrusive way.
Most of the described measurement methods for personalization in the literature are focused on improving the utility for the customer.
The evaluation function of the personalization system described in this thesis takes the business operator’s standpoint and pragmatically focuses on one or a few measurable business objectives.
In order to verify operation of the personalization system, a function called bifurcation was created.
The bifurcation function divides the customers stochastically into two or more controlled groups with different personalization configurations.
By giving one of the controlled groups a personalization configuration that deactivates the personalization, a reference group is created.
The reference group is used to quantitatively measure objectives by comparison with the groups with active personalization.
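A minimal sketch of the kind of stochastic assignment such a bifurcation function performs is given below; the group names, weights, and hashing scheme are illustrative assumptions, not taken from the thesis:

# Hedged sketch of stochastic "bifurcation": each customer is deterministically
# hashed into a personalization group or into a deactivated reference group.
import hashlib

GROUPS = [("personalized", 0.45), ("personalized_alt", 0.45), ("reference", 0.10)]

def assign_group(customer_id):
    """Pseudo-random but stable assignment, so a customer stays in one group."""
    h = int(hashlib.sha256(str(customer_id).encode()).hexdigest(), 16) % 10_000
    u, cumulative = h / 10_000, 0.0
    for name, weight in GROUPS:
        cumulative += weight
        if u < cumulative:
            return name
    return GROUPS[-1][0]

def personalization_active(customer_id):
    return assign_group(customer_id) != "reference"

Business objectives measured on the reference group can then be compared against the personalized groups to quantify the effect of personalization.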
Two different companies had their websites personalized and evaluated: one of Sweden’s largest recruitment services and the second largest Swedish daily newspaper.
The purpose with the implementations was to define, measure, and increase the business objectives.
The results of the two case studies show that under propitious conditions, personalization can be made to increase stated business objectives.
Keywords: metadata, semantic web, personalization, information adaptation, one-to-one marketing, evaluation, optimization, personification, customization, individualization, internet, content filtering, automation.
A hybrid automaton is one of the most popular formal models for hybrid system specification.
The Chi language is a hybrid formalism for modeling, simulation and verification.
It consists of a number of operators that operate on all process terms, including differential algebraic equations.
This paper relates the two formalisms by means of a formal translation from a hybrid automaton model to a Chi model, and a comparison of the semantics of the two models in terms of their respective transition systems.
The comparison is illustrated by means of three examples: a thermostat, a railroad gate controller, and dry friction.
...
This paper describes the design and implementation of JVTOS on different platforms.
Since the transition to democracy, South African public works programs are to involve community participation, and be targeted to the poor and women.
This paper examines the targeting performance of seven programs in Western Cape Province, and analyzes the role of government, community-based organizations, trade unions, and the private sector in explaining targeting outcomes.
These programs were not well-targeted geographically in terms of poverty, unemployment, or infrastructure.
Within localities, jobs went to the poor and unemployed, though not always the poorest.
They did well in reaching women, despite local gender bias.
Targeting guidelines of the state are mediated by diverse priorities that emerge in programs with multiple objectives, local perceptions of need and entitlement, and competing voices within civil society.
We identify a wide range of human memory phenomena as potential certificates of identity.
These "imprinting" behaviors are characterized by vast capacity for complex experiences, which can be recognized without apparent effort and yet cannot be transferred to others.
They are suitable for use in near zero-knowledge protocols, which minimize the amount of secret information exposed to prying eyes while identifying an individual.
We sketch several examples of such phenomena[1-3], and apply them in secure certification protocols.
This provides a novel approach to human-computer interfaces, and raises new questions in several classic areas of psychology.
For the past few years several research teams have been developing intelligent learning environments (ILE) based on multi-agent architectures.
For such architectures to be possible, the agents must have specific roles in the architecture and must be able to communicate with one another.
To handle such needs, we have established a generic multi-agent architecture - the Pedagogical Agents Communication Framework (PACF).
In PACF a set of agents were defined, their roles established, and their communication infrastructure built.
Such communication infrastructure is based on a subset of the KQML language.
There are two main general agents in PACF: the Server that acts both as a facilitator in the KQML sense and as an application-independent name server; and a Learner Modelling Server (LMS).
The LMS can be used by several applications (composed by several agents) and each one can adapt the modelling strategy to its needs through the parameterisation of three configuration files: one that provides the application domain structure and the others the learner modelling strategies.
Through this parameterisation the applications define how the LMS will model their learners.
The LMS keeps one single database for all the learners being modelled by all the agents, allowing different applications to access to the same learner model simultaneously.
These different applications can share parts of the learner models provided that they use the same domain ontology in the modelling process.
This architecture has been used in a Web based distance learning scenario with two different ILEs.
Petri nets are widely used for modeling and analyzing workflows.
This paper discusses similarities and differences in autonomous helicopters developed at USC and CSIRO.
The most significant differences are in the accuracy and sample rate of the sensor systems used for control.
The USC vehicle, like a number of others, makes use of a sensor suite that costs an order of magnitude more than the vehicle.
The CSIRO system, by contrast, utilizes low-cost inertial, magnetic, vision and GPS to achieve the same ends.
It is well known that the standard equation-error (EE) method for the identification of linear systems yields biased estimates if the data are noise corrupted.
Due to this bias, the resulting estimate can be unstable in some cases, depending on the spectral characteristics of the input and the noise, the signal-tonoise ratio (SNR), and the unknown system.
In this work, the set of all-pole linear systems whose equation-error estimate is stable for all wide sense stationary inputs and white measurement noise is investigated.
Some results concerning the structure of this set are given.
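For orientation, a compact statement of the standard equation-error setting described above, assuming an all-pole model driven by $u(n)$ and observed in white noise $v(n)$ (the notation is ours, not necessarily the paper's):

% Standard equation-error (EE) setting for all-pole identification (sketch).
\[
  x(n) = \sum_{k=1}^{p} a_k\, x(n-k) + u(n), \qquad y(n) = x(n) + v(n),
\]
\[
  \hat{\mathbf a}_{\mathrm{EE}}
    = \arg\min_{\mathbf a}\; \sum_{n}\Bigl( y(n) - \sum_{k=1}^{p} a_k\, y(n-k) \Bigr)^{2}.
\]

Because the regressors $y(n-k)$ contain the noise $v(n-k)$, they are correlated with the equation error, which produces the bias discussed above; the paper then characterizes when the biased all-pole estimate nonetheless remains stable.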
Information Extraction systems offer a way of automating the discovery of information from text documents.
Research and commercial systems use considerable training data to learn dictionaries and patterns to use for extraction.
Learning to extract useful information from text data using only minutes of user time means that we need to leverage unlabeled data to accompany the small amount of labeled data.
This report will then be transmitted to the European Parliament, the Council and the European Economic and Social Committee.
It will be accompanied, if necessary, by proposals for amendments to the Directive in order to bring it into line with the developments observed in the Internal Market.
Paragraph 2 lays down that the Member States must provide the Commission with all the aid and assistance it may need when drawing up that report.
We present a method for 3D face acquisition using a set or sequence of 2D binary silhouettes.
Since silhouette images depend only on the shape and pose of an object, they are immune to lighting and/or texture variations (unlike feature or texture-based shape-from-correspondence).
Our prior 3D face model is a linear combination of "eigenheads" obtained by applying PCA to a training set of laser-scanned 3D faces.
These shape coefficients are the parameters for a near-automatic system for capturing the 3D shape as well as the 2D texture-map of a novel input face.
Specifically, we use back-projection and a boundary-weighted XOR-based cost function for binary silhouette matching, coupled with a probabilistic "downhill-simplex" optimization for shape estimation and refinement.
Experiments with a multi-camera rig as well as monocular video sequences demonstrate the advantages of our 3D modeling framework and ultimately, its utility for robust face recognition with built-in invariance to pose and illumination.
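A heavily hedged sketch of what an XOR-based, boundary-weighted silhouette cost could look like is given below; weighting each mismatched pixel by its distance to the observed silhouette contour is only one plausible reading, and the paper's exact weighting may differ:

# Hedged sketch of a boundary-weighted XOR cost between binary silhouettes.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def silhouette_cost(rendered, observed):
    """rendered, observed: boolean 2-D arrays (True = inside the silhouette)."""
    contour = observed ^ binary_erosion(observed)   # 1-pixel silhouette boundary
    weights = distance_transform_edt(~contour)      # distance of each pixel to the contour
    mismatch = rendered ^ observed                  # XOR of the two silhouettes
    return float((weights * mismatch).sum())

In the shape-estimation loop described above, such a cost would be summed over camera views and minimized over the eigenhead coefficients, e.g. with the downhill-simplex optimizer mentioned in the abstract.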
Hybrid probabilistic programs framework [5] is a variation of probabilistic annotated logic programming approach, which allows the user to explicitly encode the available knowledge about the dependency among the events in the program.
In this paper, we extend the language of hybrid probabilistic programs by allowing disjunctive composition functions to be associated with heads of clauses and change its semantics to be more suitable for real-life applications.
We show on a probabilistic AI planning example that the new semantics allows us to obtain more intuitive and accurate probabilities.
The new semantics of hybrid probabilistic programs subsumes Lakshmanan and Sadri [17] framework of probabilistic logic programming.
The fixpoint operator for the new semantics is guaranteed to be always continuous.
This is not the case in the probabilistic annotated logic programming in general and the hybrid probabilistic programs framework in particular.
Continuity as a usability property has been used in mixed reality systems and in multiplatform systems.
This paper compares the definitions that have been given to the concept in both fields.
A consolidated definition of continuity is then given.
The title poses the essential question addressed herein: Is it possible to construct simulations that permit use in application domains with widely ranging objectives?
The question is raised, and a tentative explanation of what is entailed in an answer is offered.
Beginning with a taxonomy based on simulation objectives, we identify differences among the categories with respect to what is attendant in realizing different objectives and in using associated methodologies and tools.
The closing summary highlights the importance of producing an answer or eliminating the question.
Historically, simulation tools have only been used and understood by the academic community.
Special Purpose Simulation (SPS) techniques have introduced computer modeling to the industry, resulting in reduced model development time and a user-friendly environment.
This paper describes the special purpose simulation template, which is based on the tower crane operations performed by PCL Constructors Inc.
On-site management of the tower crane resource is based on prioritized work tasks that need to be performed within a set period of time.
Traditional SPS modeling techniques use `relationship logic links' to represent the logic contained in the modeled system.
As the number of work tasks increases for the tower crane resource, the model complexity using traditional simulation techniques becomes unmanageable, resulting in limited acceptance by industry practitioners.
The tower crane template uses `priority rating logic' to replace the `relationship logic links'.
Evaluation of the tower crane operations at the Electrical and Computer Engineering Research Facility (ECERF), being constructed in Edmonton, is used to illustrate the advantages of using the `priority rating logic' modeling approach for tower crane operations.
The simulation model analyzes the ECERF tower crane production cycle yielding outputs for total duration, crane utilization, and lift activity hook-time analysis.
Software product family variability facilitates the constructive and pro-active reuse of assets during the development of software applications.
The variability is typically represented by variation points, the variants and their interdependencies.
Those variation points and their variants have to be considered when defining the requirements for the applications of the software product family.
To facilitate the communication of the variability to the customer, an extension to UML use case diagrams has been proposed in [9].
In this
INTRODUCTION: The literature offers a host of approaches for dealing with collision avoidance in a dynamic environment.
Some of these approaches require prior knowledge of the positions and trajectories of the objects [1, 2].
Other approaches compute the robot's path in a static environment using global strategies [3-5].
The computed path is replanned when dynamic objects are introduced that crisscross the planned path.
Approaches that involve fuzzy inferencing or neuro-fuzzy control have also been reported [6, 7].
Recent strategies have extended this to situations that involve cooperative collision avoidance by multiple robots that plan and execute a task [8].
In all these approaches, either the exact position or velocity information of the dynamic objects is assumed to be available, either before the path is planned or during real time.
However, it is difficult to provide such i
y experienced.
Adolescents' location in their social world is associated with sexual behavior among the close friends of a household sample of urban African American youth.
INTRODUCTION: The biological, emotional, and psychological vulnerability of adolescence accentuates potential negative outcomes of sexual activity, such as sexually transmitted diseases and unwanted pregnancy.
Efforts intended to delay onset of sexual behavior would benefit from additional information about determinants of this behavior.
One important area of investigation in this area is the role peers play in the onset of sexual behavior (oral, anal, or vaginal intercourse).
Studies show that adolescents' perceptions and attitudes about their peers' behaviors as well as the peers' self-reported behaviors are influential.
Adolescents who perceive that their peers have actually engaged in sexual intercourse and those who perceive that their peers are accepting of sexual intercourse are more likely to be sexually experienced
In the context of on-going market reform in developing countries, there is a need for an improvement in the existing methods of spatial market efficiency analysis in order to better inform the debate toward designing and implementing new grain marketing policies, institutions, and infrastructure that facilitate the emergence of a well developed and competitive grain marketing system.
The standard parity bounds model (PBM), while it overcomes many weaknesses of the conventional methods of spatial market efficiency analysis, does not allow for testing structural changes in spatial market efficiency as a result of policy changes.
In this paper, building on the standard PBM, we develop an extended parity bounds model (EPBM).
The EPBM is a stochastic gradual switching model with three trade regimes.
The EPBM is estimated by maximum likelihood procedure and allows for tracing the time path and structural change in spatial market efficiency conditions due to the policy changes.
We applied the EPBM to analyze the effect of grain marketing policy changes on spatial efficiency of maize and wheat markets in Ethiopia.
The results show that the effect of policy changes on spatial market efficiency is not significant statistically in many cases; there is high probability of spatial inefficiency in maize and wheat markets before and after the policy changes.
The implication of these results is that maize and wheat markets are characterized by periodic gluts and shortages, which can undermine the welfare of producers, grain traders and consumers.
It is also observed that the nature of spatial inefficiency for maize and wheat markets is different implying that the two commodities might require different policy responses in order to improve spatial market efficie...
Tracking of ground targets presents a number of challenges.
Target trajectories meet various motion constraints.
Substantial non-homogenous clutter is usually present.
In multitarget situations measurement assignment may be computationally challenging as the number of operations increases exponentially with number of tracks and number of measurements.
LMIPDA-IMM aims to provide a solution to these issues.
Use of the IMM approach allows tracking ground targets with motion constraints and/or maneuvers.
LMIPDA calculates the probability of target existence for false track discrimination to enable automatic track initiation and termination.
The robust data association properties of LMIPDA are further enhanced by the use of a clutter map.
LMIPDA provides multi-target data association with number of operations linear in the number of tracks and the number of measurements.
Simulation studies illustrate the effectiveness of this approach in an environment of heavy non-homogenous clutter.
MIX-net" systems protect the privacy of participants by clouding together their transactions through cascades of third parties.
Reliability and trust are therefore open issues in this literature and limit the applicability of these systems.
This paper discusses how the MIX approach can be adapted to put the user at the center of the protocol and in control of it, so that each participant can take active steps to protect his or her privacy.
The paper also highlights various possible uses of the protocol.
Being "in control" comes at a cost, however, and the paper discusses the trade-o#s arising from the proposed approach.
Longitudinal household data can have considerable advantages over much more widely used cross-sectional data.
The collection of longitudinal data, however, may be difficult and expensive.
One problem that has concerned many analysts is that sample attrition may make the interpretation of estimates problematic.
Such attrition may be particularly severe in areas where there is considerable mobility because of migration between rural and urban areas.
Many analysts share the intuition that attrition is likely to be selective on characteristics such as schooling and that high attrition is likely to bias estimates made from longitudinal data.
This paper considers the extent of and implications of attrition for three longitudinal household surveys from Bolivia, Kenya, and South Africa that report very high per-year attrition rates between survey rounds.
Our estimates indicate that (1) the means for a number of critical outcome and family background variables differ significantly between attritors and nonattritors; (2) a number of family background variables are significant predictors of attrition; but (3) nevertheless, the coefficient estimates for "standard" family background variables in regressions and probit equations for the majority of the outcome variables considered in all three data sets are not affected significantly by attrition.
Therefore, attrition apparently is not a general problem for obtaining consistent estimates of the coefficients of interest for most of these outcomes.
These results, which are very similar to results for developed economies, suggest that for these outcome variables---despite suggestions of systematic attrition from univariate comparisons between attritors and nonattritors, multivariate estimates of behavioral relations of interest may n...
The multidimensional assignment problem (MAP) is a combinatorial problem where elements of a variable number of sets must be matched, in order to find a minimum cost solution.
The MAP has applications in a large number of areas, and is known to be NP-hard.
We survey some of the recent work being done in the determination of the asymptotic value of optimal solutions to the MAP, when costs are drawn from a known distribution (e.g., exponential, uniform, or normal).
Novel results, concerning the average number of local minima for random instances of the MAP for random distributions are discussed.
We also present computational experiments with deterministic local and global search algorithms that illustrate the validity of our results.
Construct validity is about the question: how do we know that we're measuring the attribute that we think we're measuring?
This is discussed in formal, theoretical ways in the computing literature (in terms of the representational theory of measurement) but rarely in simpler ways that foster application by practitioners.
Construct validity starts with a thorough analysis of the construct, the attribute we are attempting to measure.
In the IEEE Standard 1061, direct measures need not be validated.
"Direct" measurement of an attribute involves a metric that depends only on the value of the attribute, but few or no software engineering attributes or tasks are so simple that measures of them can be direct.
Thus, all metrics should be validated.
The paper continues with a framework for evaluating proposed metrics, and applies it to two uses of bug counts.
Bug counts capture only a small part of the meaning of the attributes they are being used to measure.
Multidimensional analyses of attributes appear promising as a means of capturing the quality of the attribute in question.
Analysis fragments run throughout the paper, illustrating the breakdown of an attribute or task of interest into sub-attributes for grouped study.
Service-based approaches (such as Web Services and the Open Grid Services Architecture) have gained considerable attention recently for supporting distributed application development in e-business and e-science.
The emergence of a service-oriented view of hardware and software resources raises the question as to how database management systems and technologies can best be deployed or adapted for use in such an environment.
This paper explores one aspect of service-based computing and data management, viz., how to integrate query processing technology with a service-based Grid.
The paper describes in detail the design and implementation of a service-based distributed query processor for the Grid.
The query processor is service-based in two orthogonal senses: firstly, it supports querying over data storage and analysis resources that are made available as services, and, secondly, its internal architecture factors out as services the functionalities related to the construction of distributed query plans on the one hand, and to their execution over the Grid on the other.
The resulting system both provides a declarative approach to service orchestration in the Grid, and demonstrates how query processing can benefit from dynamic access to computational resources on the Grid.
This article discusses a conceptual framework for architecture-driven information system development.
Rather than defining a completely new framework, the conceptual framework is synthesized out of relevant pre-existing frameworks for system development and architecture.
before
The paper focuses on business domain modeling as part of requirements engineering in software development projects.
Domain modeling concerns obtaining and modeling the language (concepts, terminologies; ontologies) used by stakeholders to talk about a domain.
Achieving conceptual clarity and consensus among stakeholders is an important yet often neglected part of requirements engineering.
Domain modeling can play a key role in supporting it.
This does, however, require a nuanced approach to language aspects of domain modeling as well as ambition management concerning its goals, and the procedure followed.
The game of Mastermind is a constraint optimisation problem.
There are two aspects which seem interesting to minimise.
The first is the number of guesses needed to discover the secret combination and the second is how many combinations (potential guesses) we evaluate but do not use as guesses.
This paper presents a new search algorithm for mastermind which combines hill climbing and heuristics.
It makes a similar number of guesses to the two known genetic algorithmbased methods, but is more efficient in terms of the number of combinations evaluated.
It may be applicable to related constraint optimisation problems.
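As a minimal sketch of the general idea, the code below hill-climbs toward a guess that is consistent with all previous feedback while counting how many combinations are evaluated; the paper's specific heuristics and scoring are not reproduced:

# Hedged sketch: hill climbing toward a consistent Mastermind guess.
import random

COLOURS, PEGS = 6, 4

def feedback(secret, guess):
    """Return (black, white) pegs for `guess` against `secret`."""
    black = sum(s == g for s, g in zip(secret, guess))
    common = sum(min(secret.count(c), guess.count(c)) for c in range(COLOURS))
    return black, common - black

def score(candidate, history):
    """How many past (guess, feedback) pairs the candidate is consistent with."""
    return sum(feedback(candidate, g) == fb for g, fb in history)

def next_guess(history, max_steps=500):
    """Hill-climb from a random combination until fully consistent (or stuck)."""
    current = [random.randrange(COLOURS) for _ in range(PEGS)]
    best, evaluated = score(current, history), 1
    for _ in range(max_steps):
        if best == len(history):
            break
        neighbour = current[:]
        neighbour[random.randrange(PEGS)] = random.randrange(COLOURS)
        s = score(neighbour, history)
        evaluated += 1
        if s >= best:
            current, best = neighbour, s
    return tuple(current), evaluated

if __name__ == "__main__":
    secret = tuple(random.randrange(COLOURS) for _ in range(PEGS))
    history, total_evaluated = [], 0
    for turn in range(1, 15):
        guess, evaluated = next_guess(history)
        total_evaluated += evaluated
        fb = feedback(secret, guess)
        history.append((guess, fb))
        if fb == (PEGS, 0):
            print(f"found {guess} in {turn} guesses, {total_evaluated} evaluations")
            break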
The code that provides solutions to key software requirements, such as security and fault-tolerance, tends to be spread throughout (or cross-cut) the program modules that implement the "primary functionality" of a software system.
Aspect-oriented programming is an emerging programming paradigm that supports implementing such cross-cutting requirements into named program units called "aspects".
To construct a system as an aspect-oriented program (AOP), one develops code for primary functionality in traditional modules and code for cross-cutting functionality in aspect modules.
Compiling and running an AOP requires that the aspect code be "woven" into the code.
Although aspect-oriented programming supports the separation of concerns into named program units, explicit and implicit dependencies of both aspects and traditional modules will result in systems with new testing challenges, which include new sources for program faults.
This paper introduces a candidate fault model, along with associated testing criteria, for AOPs based on interactions that are unique to AOPs.
The paper also identifies key issues relevant to the systematic testing of AOPs.
It is the common goal of today's knowledge management systems to bring the right piece of knowledge to the right person at the right time.
As soon as documents are involved in this process of information supply, intelligent techniques for information supply from text sources have to be employed.
To this end, we propose a profile-based approach.
Profiles describe the generic information need of individual persons according to their tasks and interests.
Attached to these information needs, declarative analysis knowledge exhibits the textual properties of information satisfying these profiles.
Such patterns are used by intelligent information assistants and allow them a very efficient and goal-directed analysis.
Whenever the current context of a user is available, it can be used as a dynamic extension of a profile.
In this case, information assistants can act more specifically, thus achieving better result quality.
Our approach currently distinguishes three information assistants: one for text categorization, one for information extraction, and one for process identification.
To make profile construction as easy as possible, distinct acquisition mechanisms have been developed for each assistant.
The computational and robotic synthesis of language evolution...
Despite the efforts to reduce the so-called semantic gap between the user's perception of image similarity and the feature-based representation of images, interaction with the user remains fundamental to improve the performance of content-based image retrieval systems.
To this end, relevance feedback mechanisms are adopted to refine image-based queries by asking users to mark the set of images retrieved in a neighbourhood of the query as being relevant or not.
In this paper, Bayesian decision theory is used to compute a new query whose neighbourhood is more likely to fall in a region of the feature space containing relevant images.
The proposed query shifting method outperforms two relevance feedback mechanisms described in the literature.
Reported experiments also show that retrieval performance is less sensitive to the choice of a particular similarity metric when relevance feedback is used.
Roles have been used both as an intuitive concept in order to analyze multi-agent systems and model inter-agent social activity as well as a formal structure in order to implement coherent and robust teams.
The extensive use of roles in implemented systems evidences their importance in multi-agent systems design and implementation.
In this paper we emphasize the importance of roles for multi-agent systems to act in complex domains, identify some of their properties and we review work done concerning specification and exploitation of roles in agent-oriented system engineering methodologies, in formal models about agent social activity, and in multi-agent systems that are deployed in dynamic and unpredictable domains.
The Iterative Closest Point (ICP) algorithm is a widely used method for aligning three-dimensional point sets.
The quality of alignment obtained by this algorithm depends heavily on choosing good pairs of corresponding points in the two datasets.
If too many points are chosen from featureless regions of the data, the algorithm converges slowly, finds the wrong pose, or even diverges, especially in the presence of noise or miscalibration in the input data.
In this paper, we describe a method for detecting uncertainty in pose, and we propose a point selection strategy for ICP that minimizes this uncertainty by choosing samples that constrain potentially unstable transformations.
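For context, a bare-bones point-to-point ICP iteration is sketched below; it uses uniform random point selection, whereas the paper's contribution is precisely to replace that step with a selection that constrains the potentially unstable directions of the transformation. Helper names and parameters are assumptions for illustration.

# Minimal point-to-point ICP sketch (uniform random sampling, SVD-based alignment).
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping points P onto Q (Kabsch/SVD)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

def icp(source, target, iters=30, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    tree = cKDTree(target)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        idx = rng.choice(len(source), size=min(n_samples, len(source)), replace=False)
        moved = source[idx] @ R.T + t
        _, nn = tree.query(moved)                    # closest-point correspondences
        R, t = best_rigid_transform(source[idx], target[nn])
    return R, t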
Integer division remains expensive on today's processors as the cost of integer multiplication declines.
We present code sequences for division by arbitrary nonzero integer constants and run--time invariants using integer multiplication.
The algorithms assume a two's complement architecture.
Most also require that the upper half of an integer product be quickly accessible.
We treat unsigned division, signed division where the quotient rounds towards zero, signed division where the quotient rounds towards minus infinity, and division where the result is known a priori to be exact.
We give some implementation results using the C compiler GCC.
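A minimal sketch of the multiply-and-shift idea behind such code sequences is given below, restricted to unsigned division (signed, round-towards-minus-infinity, and exact division need the further adjustments treated in the paper, as does the case where the multiplier needs W+1 bits):

# Hedged sketch: unsigned division by a constant d via multiply-and-shift.
# Find m and s so that  n // d == (n * m) >> (W + s)  for all 0 <= n < 2**W.

def magic_unsigned(d, W=32):
    assert d > 0
    s = 0
    while True:
        m = ((1 << (W + s)) + d - 1) // d          # m = ceil(2**(W+s) / d)
        if m * d - (1 << (W + s)) <= (1 << s):     # error bound that guarantees exactness
            return m, s
        s += 1

def div_by_const(n, m, s, W=32):
    return (n * m) >> (W + s)                      # one multiply plus one shift

if __name__ == "__main__":
    W = 16                                         # small word size: exhaustive check is cheap
    for d in (3, 7, 10, 641):
        m, s = magic_unsigned(d, W)
        assert all(div_by_const(n, m, s, W) == n // d for n in range(1 << W))
        print(f"d={d}: multiply by {m}, shift right by {W + s}")

For W = 16, for example, the search reports that n // 10 equals (n * 52429) >> 19 for every 16-bit n.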
this paper we focus on semiring valuations: they are functions from the product set domain of this set of variables, which assign each such tuple an element in a semiring.
A semiring is a set with two operations, one labelled `addition', the other `multiplication', which are both commutative and associative and are such that the multiplication distributes over the addition.
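To make the definition concrete, the following sketch represents a semiring by its two operations and a valuation by a table over variable assignments; the combination and marginalization operators shown are the standard valuation-algebra operations and go slightly beyond what the fragment above defines:

# Hedged sketch of semiring valuations over finite variable domains.
from itertools import product

class Semiring:
    """Two commutative, associative operations; multiplication distributes over addition."""
    def __init__(self, add, mul, zero, one):
        self.add, self.mul, self.zero, self.one = add, mul, zero, one

SUM_PRODUCT = Semiring(lambda a, b: a + b, lambda a, b: a * b, 0.0, 1.0)    # probabilities
MIN_PLUS    = Semiring(min, lambda a, b: a + b, float("inf"), 0.0)          # tropical costs

def combine(sr, doms, f, g):
    """Pointwise semiring 'multiplication' of two valuations (vars, table)."""
    fv, ft = f
    gv, gt = g
    uv = fv + tuple(v for v in gv if v not in fv)
    table = {}
    for vals in product(*(doms[v] for v in uv)):
        a = dict(zip(uv, vals))
        table[vals] = sr.mul(ft[tuple(a[v] for v in fv)], gt[tuple(a[v] for v in gv)])
    return uv, table

def marginalize(sr, f, keep):
    """Eliminate the variables not in `keep` with the semiring 'addition'."""
    fv, ft = f
    kv = tuple(v for v in fv if v in keep)
    table = {}
    for vals, x in ft.items():
        key = tuple(val for v, val in zip(fv, vals) if v in keep)
        table[key] = sr.add(table.get(key, sr.zero), x)
    return kv, table

if __name__ == "__main__":
    doms = {"A": [0, 1], "B": [0, 1]}
    prior      = (("A",), {(0,): 0.6, (1,): 0.4})
    likelihood = (("A", "B"), {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.8})
    joint = combine(SUM_PRODUCT, doms, prior, likelihood)
    print(marginalize(SUM_PRODUCT, joint, {"B"}))   # marginal over B (approximately 0.62 / 0.38)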
ls provide 3 consonants for 3 consonantal positions, while biliterals such as assimilated verbs supply only 2 consonants in the perfective.
Assimilated verbs lack the Glide in imperfective forms (but 15 verbs out of 251, i.e. 6%), e.g. pf wazan-a, ipf ya-zin-u (pf: wazan, ipf: ya-zin). [Apophony diagram relating the skeletons w a z n and y a z i n omitted; garbled in source.]
(5) generalisation: a. violation of the Template Satisfaction Principle is but a particular case illustrating an "illegal" situation created by morphology; b. the following sequences do not occur in Classical Arabic: any hiatus, uwC, iyC, uy, iw, wu, yi; c. we do not know why this is the case, but it is consistent to assume that the language reacts when these sequences are produced by morphology.
(6) V2 of all imperfective forms is the apophonic output of the corresponding pf V2. [Table of vocalic patterns for Pf. Active, Ipf. Active, Pf. Passive, and Ipf. Passive omitted; garbled in source.]
This work addresses the problem of coordinating a team of mobile robots such that they form a connected ad-hoc wireless network while addressing task objectives.
Many tasks, such as exploration or foraging, can be performed more efficiently when robots are able to communicate with each other.
All or parts of these tasks can be performed in parallel, thus multiple robots can complete the task more quickly than a single robot.
Communication and coordination among the robots can prevent robots from duplicating the effort of other robots, allowing the team to address the task more efficiently.
In non-trivial environments, maintaining communication can be difficult due to the unpredictable nature of wireless signal propagation.
We propose a multi-robot coordination method based on perceived wireless signal strength between cooperating robots for exploration in maze-like environments.
This new method is tested and compared to an existing method that relies on preserving a clear line of sight between robots to maintain communication.
This paper presents a method and a tool for designing and automatically creating an HTML web site for publishing Semantic Web content represented in RDF(S).
The idea is to specify the needed RDF to HTML transformation on two separate levels.
On the HTML level, the layout of the pages can be described by an HTML layout designer by using templates and tags.
On the RDF level, the semantics of the tags are specified by a system programmer in terms of logical rules based on the RDF(S) repository.
The idea is to apply logic for defining the semantic linkage structure and the indices of the page repository.
The method has been implemented as a tool called SWeHG for generating a static, semantically linked site of HTML pages from an RDF repository.
As real life case applications, web exhibitions generated from museum collection metadata are presented.
The lending boom of the 1990s witnessed considerable variation over time and across countries in the ratio of international bonds to foreign bank loans used as debt instrument by emerging market borrowers.
Why some issuers float international bonds while others borrow from international banks has received little if any systematic attention.
This paper tests how macroeconomic fundamentals affect the choice of international debt instrument available to emerging market borrowers.
As a stepping stone for empirical analysis, a model with asymmetric information is presented.
Empirical results show that macroeconomic fundamentals explain a significant share of variation in the ratio of bonds to loans for private borrowers, but not for the sovereigns.
and compelling sounds that correspond to the motions of rigid objects.
By numerically precomputing the shape and frequencies of an object's deformation modes, audio can be synthesized interactively directly from the force data generated by a standard rigid-body simulation.
Using sparse-matrix eigen-decomposition methods, the deformation modes can be computed efficiently even for large meshes.
This approach allows us to accurately model the sounds generated by arbitrarily shaped objects based only on a geometric description of the objects and a handful of material parameters.
We validate our method by comparing results from a simulated set of wind chimes to audio measurements taken from a real set.
this article will be published in Presence, Vol
this paper we describe how this standardization has already led to an improvement in the LinKBase structure that allows for a greater degree of internal coherence than ever before possible.
We then show the use of this philosophical standardization for the purpose of mapping external databases to one another, using LinKBase as translation hub, with a greater degree of success than possible hitherto.
We demonstrate how this offers a genuine advance over other application ontologies that have not submitted themselves to the demands of philosophical scrutiny
We present our initial response to the OAS '03 Challenge Problem.
The Challenge Problem proposes an agent-assisted travel scenario, and asks what the role of ontologies would be to support the agent's activity.
We discuss a belief-desire-intention (BDI) approach to the problem using our Nuin agent platform, and illustrate various ways in which ontology reasoning supports BDI-oriented problem solving and communications by the agents in the system.
Kavcic proposed in [1] an algorithm that optimizes the parameters of a Markov source at the input to a finite-state machine channel in order to maximize the mutual information rate.
Numerical results for several channels indicated that his algorithm gives capacity-achieving input distributions.
In this paper we prove that the stationary points of this algorithm indeed correspond one-to-one to the critical points of the information rate curve.
We wish to extract the topology from scanned maps.
In previous work [GNY96] this was done by extracting a skeleton from the Voronoi diagram, but this required vertex labelling and was only useable for polygon maps.
We wished to take the crust algorithm of Amenta, Bern and Eppstein [ABE98] and modify it to extract the skeleton from unlabelled vertices.
We find that by reducing the algorithm to a local test on the original Voronoi diagram we may extract both a crust and a skeleton simultaneously, using a variant of the Quad-Edge structure of [GS85].
We show that this crust has the properties of the original, and that the resulting skeleton has many practical uses.
We illustrate the usefulness of the combined diagram with various applications.
A common criticism of antipoverty programs is that the high share of administrative (nontransfer) costs substantially reduces their effectiveness in alleviating poverty.
Yet there is surprisingly little hard empirical evidence on such programs' costs.
A recent international review of targeted poverty alleviation programs in less developed countries found cost information, which was rarely comparable between studies, for fewer than one-third of the programs examined.
Improved information and a better understanding of the costs of such programs are crucial for effective policymaking.
This study proposes and implements a methodology for a comparative analysis of the level and structure of costs of three similar poverty alleviation programs in Latin America, in order to assess their cost-efficiency.
The findings underscore that any credible assessment of cost-efficiency requires a detailed analysis of program cost structures that goes well beyond simply providing aggregate cost information.
integration of information [10].
Furthermore, points flashed around the time of a saccade are systematically mislocalized in a way that suggests slow build-up of compensation for retinal shifts [11--17].
The dynamic properties of neurons that remap their receptive fields in the anticipation of saccades may be closely connected with these distortions and contribute to spatial constancy [6].
Such neurons have been found in posterior parietal cortex [18--20], in superior colliculus [21], and in the frontal eye field [22] of monkeys; recently, neuroimaging has demonstrated similar spatial updating in human parietal cortex [23].
An aspect of the spatial constancy problem that has received little or no attention is the threedimensional stability of the world during eye movements.
Vision serves not only for detecting the directions of points, but also---and at least as importantly---for extracting the three-dimensional layout of the environment, and in particular the orientations of surface
The current structure of the High Level Architecture (HLA) puts a tremendous burden on network load and CPU utilization for large distributed simulations due to its limited controls for publishing and subscribing object updates and interactions.
Several solutions to this problem have been proposed, but all require cooperation between federate developers and FOM extensions for both the publishing and subscribing Federates.
This paper explores an alternative that has the potential to dramatically reduce communications load by allowing subscribing federates to extend and control the publish/subscribe mechanisms that are local to the publishing federate's process without requiring changes to the publishing federate or the Federation Object Model (FOM).
this paper, we describe our first steps towards adapting the graph probing paradigm to allow pre-computation of a compact, efficient probe set for databases of graphstructured documents in general, and Web pages coded in HTML in particular.
This new model is shown in Figure 1, where the portion of the computation bounded by dashed lines is performed off-line.
We consider both comparing two graphs in their entirety, as well as determining whether one graph contains a subgraph that closely matches the other.
We present an overview of work in progress, as well as some preliminary experimental results
To provide a compact generative representation of the sequential activity of a number of individuals within a group there is a tradeoff between the definition of individual specific and global models.
This paper proposes a linear-time distributed model for finite state symbolic sequences representing traces of individual user activity by making the assumption that heterogeneous user behavior may be `explained' by a relatively small number of common structurally simple behavioral patterns which may interleave randomly in a user-specific proportion.
The results of an empirical study on three different sources of user traces indicates that this modelling approach provides an efficient representation scheme, reflected by improved prediction performance as well as providing lowcomplexity and intuitively interpretable representations.
this article is on supporting the clinical process---the fundamental one in health care---most of the comments are applicable to other non-clinical (such as administrative and financial) processes.
When considering patient records, the starting point is the paper record since it has many advantages.
It is familiar, portable, and can be easily browsed or scanned.
However, in the climate of modern health care delivery, it has a number of major shortcomings
This paper describes an on-line course on constraint programming.
This tutorial discusses some statistical procedures for selecting the best of a number of competing systems.
The term "best" may refer to that simulated system having, say, the largest expected value or the greatest likelihood of yielding a large observation.
We describe six procedures for finding the best, three of which assume that the underlying observations arise from competing normal distributions, and three of which are essentially nonparametric in nature.
In each case, we comment on how to apply the above procedures for use in simulations.
Active contour models are an efficient, accurate, and robust tool for the segmentation of 2D and 3D image data.
Introduction The Aharonov-Bohm (1959) effect is a shift in the electron interference pattern produced by an electron beam split to pass on opposite sides of a long thin solenoid, when current flows in the solenoid.
It is clearly an electrodynamic effect, because no effect is produced when no current flows in the solenoid.
According to the MaxwellLorentz theory, an electron moving outside the solenoid, where there is no magnetic field, should experience no electrodynamic force.
In particular, the magnetic vector potential $\mathbf{A}$ outside a long solenoid of radius $a$ carrying a current per unit length $h$ is given by $\mathbf{A} = \frac{2\pi h a^{2}}{c\,r}\,\hat{e}_{\phi}$ (1), where $\hat{e}_{\phi}$ is a unit vector in the direction of the current and $r$ is the radial distance from the center of the solenoid to the point of observation.
From Eq. (1) the curl and divergence of $\mathbf{A}$ are seen to vanish; thus, $\nabla \times \mathbf{A} = 0$ and $\nabla \cdot \mathbf{A} = 0$ (2).
Since $\mathbf{A}$ does not change with time and no static charge sources are present, the Lorentz force on an electron of charge
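For reference, a short derivation consistent with the reconstructed Eq. (1), assuming Gaussian units and an ideal infinitely long solenoid (the derivation itself is an addition and not part of the original fragment):

% Flux through the solenoid and Stokes' theorem give the exterior vector potential.
\[
  B_{\text{inside}} = \frac{4\pi h}{c}, \qquad
  \Phi = \pi a^{2} B_{\text{inside}} = \frac{4\pi^{2} h a^{2}}{c},
\]
\[
  \oint \mathbf{A}\cdot d\boldsymbol{\ell} = 2\pi r\, A_{\phi} = \Phi
  \quad\Longrightarrow\quad
  \mathbf{A} = \frac{2\pi h a^{2}}{c\, r}\,\hat{e}_{\phi}, \qquad r > a .
\]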
Time is essential in the study of System Dynamics.
When we are trying to represent and analyze the dynamics of complex quantitative systems, the problem of expressing the "effect" of time flow is naturally solved since the mathematical equations that describe the relations between the entities of the system are a function of time.
However, if we are dealing with real world qualitative systems that are impossible or difficult to model using mathematical equations, then the use of natural language becomes the best tool to represent the system and expressing time influence becomes a real issue that has not been addressed before.
This paper introduces a coherent procedure to implicitly represent time in Rule Based Fuzzy Cognitive Maps which are a previously introduced methodology and tool to represent and simulate the dynamics of qualitative systems.
Decision making in industry has become more complicated in recent years.
Customers are more demanding, competition is more fierce, and costs for labor and raw materials continue to rise.
Managers need state-of-the-art tools to help in planning, design, and operations of their facilities.
Simulation provides a virtual factory where ideas can be tested and performance improved.
The AutoMod product suite from Brooks-PRI Automation has been used on thousands of projects to help engineers and managers make the best decisions possible.
With the release of AutoMod 11.0 in 2002, AutoMod now supports hierarchical model construction.
This new architecture allows users to reuse model objects in other models, decreasing the time required to build a model.
Composite models are just one of the latest advances that make AutoMod one of the most widely used simulation software packages.
This paper presents some of my experience in applying a commercial optimum-seeking simulation tool to manufacturing system design and control problems.
After a brief introduction to both the general approach and to the specific tool being used, namely OptQuest for Arena, the main body of the paper reports on the use of the tool in tackling two manufacturing system design and control problems, one very simple and one significantly more complex.
The paper concludes with some material highlighting how easy the tool is to apply to this kind of problem and also presents some thoughts on how the tool might be enhanced to improve its value.
Two simple theorems are proved about the auto-correlation function (ACF) of weakly stationary time series which can only take two values.
The first theorem shows how the ACF can be expressed in terms of the conditional probability of occurrence of a single value of the time-series.
The second theorem shows how the ACF can be expressed in terms of the variance of the number of occurrences of one of the values.
Keywords: Autocorrelation, binary series
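As a hedged illustration of the first theorem for the 0/1 case (the paper's exact statement and notation may differ), with p = P(x_t = 1) the lag-k autocorrelation satisfies rho(k) = (P(x_{t+k} = 1 | x_t = 1) - p) / (1 - p); the check below verifies this numerically on a simulated two-state Markov chain:

# Numerical check of rho(k) = (P(x_{t+k}=1 | x_t=1) - p) / (1 - p)
# for a stationary binary series, using a two-state Markov chain as an example.
import numpy as np

rng = np.random.default_rng(0)
a, b = 0.2, 0.1            # P(0 -> 1) = a, P(1 -> 0) = b
p = a / (a + b)            # stationary probability of state 1
n = 200_000
x = np.empty(n, dtype=int)
x[0] = rng.random() < p
for t in range(1, n):
    x[t] = (rng.random() < a) if x[t - 1] == 0 else (rng.random() >= b)

def acf(series, k):
    s = series - series.mean()
    return np.dot(s[:-k], s[k:]) / (len(series) * series.var())

for k in (1, 2, 5):
    cond = x[k:][x[:-k] == 1].mean()           # empirical P(x_{t+k}=1 | x_t=1)
    print(k, acf(x, k), (cond - p) / (1 - p))  # the two columns should agree closely
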
This memory can be seen as a generalization of the working memory for novel foods introduced in chapter 5.
In that simple working memory, only the sickness sensor could recall the stored memory.
For a more general working memory, any sufficiently specific subschema from any modality should recall the entire sensory schema with which it was stored.
For example, the activation of a place representation will read out the expectations of the stimulus that will be perceived at that location.
In the same way, the activation of some other sensory property will read out the location at which that property can be found (figure 9.4.1)
The unprecedented growth in the electronic and semiconductor industries, process controlled industries like automobile, textile and paper, in addition to the growing domestic load over the past three decades has imposed severe operational, economic and maintenance constraints on the power utility companies.
Service reliability and power quality are the key contributing factors imposing these constraints.
Distributed technologies are a potential solution for the current problem but may not be the optimum solution when specific characteristics like the nature of load, desired level of performance, geographical location and the available energy resources at the time instance of operation are considered.
This paper describes the feasibility of distributed resources in terms of the `worth-factor,' a criterion that incorporates intangible benefits and translates them in terms of cost.
We propose a case study where a familiar but very complex and intrinsically woven biocomputing system - the blood clotting cascade - is specified using methods from software design known as object-oriented design (OOD).
The specifications involve definition and inheritance of classes and methods and use design techniques from the most widely used OOD-language: the Unified Modeling Language (UML), as well as its Real-Time-UML extension.
In this paper, we consider the table understanding task and present a catalogue of particular issues that arise when the tables are those found on the web.
In addition, we consider what happens when processes commonly associated with web pages are applied to those bearing tables.
this paper, one being that "information gain," as it is commonly used in decision tree induction, should really be seen as an approximation to an exact expression.
This exact expression follows both from the analogy with physical entropy that Shannon [1] noted as well as from a probability argument that is given in Appendix B.
The practical success could, unfortunately, not be demonstrated, which may either be due to the requirements of our specific algorithm or may indicate a problem with the concept of using entropy as a criterion.
A second idea within the rule-based approach was to generate multiple rules by starting the rule-generation process multiple times with a different starting attribute each time and combining results.
This strategy led to a further increase in prediction accuracy
Overlay multicast constructs a multicast delivery tree among end hosts.
Unlike traditional IP multicast, the nonleaf nodes in the tree are normal end hosts, which are potentially more susceptible to failures than routers and may leave the multicast group voluntarily.
In these cases, all downstream nodes will be affected.
Thus an important problem in overlay multicast is how to recover from node departures in order to minimize the disruption of service to those affected nodes.
In this paper, we propose a proactive approach to restore overlay multicast trees.
Rather than letting downstream nodes try to find a new parent after a node departure, each non-leaf node precalculates a parent-to-be for each of its children.
When this nonleaf node is gone, all its children can find their respective new parents immediately.
The salient feature of the approach is that each non-leaf node can compute a rescue plan for its children independently, and in most cases, rescue plans from multiple non-leaf nodes can work together for their children when they fail or leave at the same time.
We develop a protocol for nodes to communicate with new parents so that the delivery tree can be quickly restored.
Extensive simulations demonstrate that our proactive approach can recover from node departures 5 times faster than reactive methods in some cases, and 2 times faster on average.
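The sketch below illustrates only the data-structure side of this idea; the rule used to pick each child's parent-to-be (promote one child and hang its siblings under it) is a placeholder, not the paper's algorithm:

# Illustrative sketch of the pre-computed "rescue plan" idea from the abstract.
class Node:
    def __init__(self, name, parent=None):
        self.name, self.parent, self.children = name, parent, []
        self.rescue = {}                      # child -> pre-computed new parent
        if parent:
            parent.children.append(self)

    def precompute_rescue_plan(self):
        if not self.children:
            return
        heir = self.children[0]               # placeholder choice (could use bandwidth, degree, ...)
        self.rescue[heir] = self.parent
        for child in self.children[1:]:
            self.rescue[child] = heir

    def leave(self):
        # on departure, each child re-attaches immediately to its planned parent
        for child in list(self.children):
            new_parent = self.rescue.get(child, self.parent)
            child.parent = new_parent
            if new_parent:
                new_parent.children.append(child)
        if self.parent:
            self.parent.children.remove(self)

root = Node("root"); a = Node("a", root); b = Node("b", a); c = Node("c", a)
a.precompute_rescue_plan()
a.leave()
print(b.parent.name, c.parent.name)           # -> root b
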
Interoperability among systems using different term vocabularies requires some mapping between terms in the vocabularies.
Matching applications generate such mappings.
When the matching process utilizes term meaning (instead of simply relying on syntax), we refer to the process as semantic matching.
If users are to use the results of matching applications, they need information about the mappings.
They need access to the sources that were used to determine relations between terms and potentially they need to understand any deductions performed on the information.
In this paper, we present our approach to explaining semantic matching.
Our initial work uses a satisfiability-based approach to determine subsumption and semantic matches and uses the Inference Web and its OWL encoding of the proof markup language to explain the mappings.
The Inference Web solution also includes a registration of the OWL reasoning component of JTP, as well as other reasoner registrations, and thus provides a foundation for explaining semantic matching systems.
Threat models in computer security often consider a very powerful adversary.
A more useful model may be to consider conflict in which both sides have economic considerations that limit the resources they are willing to devote to the conflict.
This paper examines censorship resistance in a peer-to-peer network.
A simple game theoretic model is examined and then elaborated to include multiple publishers, non-linear cost functions, and non-trivial search heuristics.
In each elaboration, we examine the equilibrium behaviour of the censor and the publisher.
A major question that has surfaced in the changing context of world agriculture...
We study the downlink scheduling problem in a cellular wireless network.
The base stations are equipped with antenna arrays and can transmit to more than one mobile user at any time instant, provided the users are spatially separable.
In previous work, an infinite traffic demand model is used to study the physical layer beamforming and power control algorithms that maximize the system throughput.
In this paper we consider finite user traffic demands.
A scheduling policy makes a decision based on both the queue lengths and the spatial separability of the users.
The objective of the scheduling algorithm is to maintain the stability of the system.
We derive an optimal scheduling policy that maintains the stability of the system if it is stable under any scheduling policy.
However, this optimal scheduling policy is exponentially complex in the number of users which renders it impractical.
We propose four heuristic scheduling algorithms that have polynomial complexity.
The first two algorithms are for the special case of single cell systems, while the other two algorithms deal with multiple cell systems.
Using a realistic multi-path wireless channel model, we evaluate the performance of the proposed algorithms through computer simulations.
The results demonstrate the benefits of joint consideration of queue length and dynamic base station assignment.
A detailed understanding of the many facets of the Internet's topological structure is critical for evaluating the performance of networking protocols, for assessing the effectiveness of proposed techniques to protect the network from nefarious intrusions and attacks, or for developing improved designs for resource provisioning.
Previous studies of topology have focused on interpreting measurements or on phenomenological descriptions and evaluation of graph-theoretic properties of topology generators.
We propose a complementary approach of combining a more subtle use of statistics and graph theory with a first-principles theory of router-level topology that reflects practical constraints and tradeoffs.
While there is an inevitable tradeoff between model complexity and fidelity, a challenge is to distill from the seemingly endless list of potentially relevant technological and economic issues the features that are most essential to a solid understanding of the intrinsic fundamentals of network topology.
We claim that very simple models that incorporate hard technological constraints on router and link bandwidth and connectivity, together with abstract models of user demand and network performance, can successfully address this challenge and further resolve much of the confusion and controversy that has surrounded topology generation and evaluation.
This paper presents probabilistic modeling methods to solve the problem of discriminating between five facial orientations with very little labeled data.
Three models are explored.
The first model maintains no inter-pixel dependencies, the second model is capable of modeling a set of arbitrary pair-wise dependencies, and the last model allows dependencies only between neighboring pixels.
We show that for all three of these models, the accuracy of the learned models can be greatly improved by augmenting a small number of labeled training images with a large set of unlabeled images using Expectation-Maximization.
This is important because it is often difficult to obtain image labels, while many unlabeled images are readily available.
Through a large set of empirical tests, we examine the benefits of unlabeled data for each of the models.
By using only two randomly selected labeled examples per class, we can discriminate between the five facial orientations with an accuracy of 94%; with six labeled examples, we achieve an accuracy of 98%.
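A compact sketch of the general labeled-plus-unlabeled EM recipe is given below under a Gaussian naive Bayes generative model; the paper's three dependency structures, image features, and data are not reproduced, and the toy data here are synthetic:

# Sketch of augmenting a few labeled examples with many unlabeled ones via EM,
# using a Gaussian naive Bayes model (illustrative only, not the paper's models).
import numpy as np

def em_gaussian_nb(Xl, yl, Xu, n_classes, n_iter=20, eps=1e-6):
    X = np.vstack([Xl, Xu])
    R = np.full((len(X), n_classes), 1.0 / n_classes)   # responsibilities
    R[:len(Xl)] = np.eye(n_classes)[yl]                 # labeled rows stay fixed
    for _ in range(n_iter):
        # M-step: class priors, per-class feature means and variances
        Nk = R.sum(axis=0)
        priors = Nk / Nk.sum()
        means = (R.T @ X) / Nk[:, None]
        var = (R.T @ X ** 2) / Nk[:, None] - means ** 2 + eps
        # E-step: class posteriors under the current model
        logp = -0.5 * (((X[:, None, :] - means) ** 2 / var) + np.log(2 * np.pi * var)).sum(-1)
        logp += np.log(priors)
        post = np.exp(logp - logp.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        R[len(Xl):] = post[len(Xl):]                    # only unlabeled rows are updated
    return priors, means, var

rng = np.random.default_rng(0)
Xl = np.vstack([rng.normal(0.0, 1.0, (2, 5)), rng.normal(3.0, 1.0, (2, 5))])
yl = np.array([0, 0, 1, 1])
Xu = np.vstack([rng.normal(0.0, 1.0, (200, 5)), rng.normal(3.0, 1.0, (200, 5))])
priors, means, var = em_gaussian_nb(Xl, yl, Xu, n_classes=2)
print(np.round(means.mean(axis=1), 2))                  # class means recovered near 0 and 3
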
The emergence of sensor networks as one of the dominant technology trends in the coming decades [1] has posed numerous unique challenges to researchers.
These networks are likely to be composed of hundreds, and potentially thousands of tiny sensor nodes, functioning autonomously, and in many cases, without access to renewable energy resources.
Introduction We are in search of a mechanical medium capable of reproducing or imitating the world of particles and physical fields.
Earlier, a mechanical model of particles and fields has been proposed, which is based on the approximation of an incompressible substratum.
Average turbulence in an ideal fluid was considered.
In the ground state, the turbulence was taken to be homogeneous and isotropic.
Perturbations of the background turbulence model the physical fields [1].
Voids in the turbulent fluid give rise to the structures, which can be taken as the model of the particles [2].
The condition of the substratum incompressibility manifests itself in the classical electromagnetism as the Coulomb gauge.
Now, we give some refinement of the above model.
It is suggested that the substratum is represented by a volume distribution of the empty space in the ideal fluid.
Microscopically, this is conveniently viewed as the vortex sponge -- the plenum of hollow vortex tubes, which pierce the ide
INTRODUCTION An absent-minded driver starts driving at START in Figure 1.
At X he can either EXIT and get to A (for a payoff of 0) or CONTINUE to Y.
At Y he can either EXIT and get to B (payoff 4), or CONTINUE to C (payoff 1).
The essential assumption is that he cannot distinguish between intersections X and Y, and cannot remember whether he has already gone through one of them.
Piccione and Rubinstein (1997; henceforth P&R), who introduced this example, claim that a "paradox" or "inconsistency" arises when the decision reached at the planning stage (at START) is compared with that at the action stage (when the driver is at an intersection).
Though the example is provocative and worth having, P & R's analysis seems flawed.
A careful analysis reveals that while the considerations at the planning and action stages do differ, there is no paradox or inconsistency.
This is an outgrowth of notes and correspondence originating in May and June of 1994.
We thank Ehud Kalai, Roger Myerson
In this work, we study and quantify the effects of hotspots in wireless cellular networks.
Hotspots are caused when the bandwidth resources available at some location in the network are not enough to sustain the needs of the users, which are then blocked or dropped.
A deeper understanding of hotspots can help in conducting more realistic simulations and enable improved network design.
We identify
This article offers a unique perspective on one such collaborative network that might serve to assist others in establishing similar inclusive environments.
This network, the West Alabama Learning Coalition (WALC), is a multi-institutional partnership that seeks to improve schools, teacher education, and the community.
In keeping with the theme of this special issue of International Journal of Leadership in Education, we explored this coalition from the practitioner's view.
We opted to take this approach rather than examine how theory is reflected in practice from a researcher's view.
This piece creates theory from practice by presenting an analysis of the motivation of members, benefits, and experience of ...
This paper exposes the Rapid Dialogue Prototyping Methodology [1, 2, 3], a methodology allowing the easy and automatic derivation of an ad hoc dialogue management system from a specific task description.
The goal of the produced manager is to provide the user with a dialogue based interface to easily perform the target task.
In addition, reset patterns, an extension of the prototyping methodology allowing a more flexible interaction with the user, are proposed in order to improve the efficiency of the dialogue.
Reset patterns are justified and theoretically validated by the definition of an average gain function to optimize.
Two approaches to such an optimization are presented, focusing on a different aspect of the gain function.
Eventually, experimental results are presented and a conclusion is drawn on the usefulness of the new feature.
In this paper we present data from a preliminary investigation on f0 timing in the 'chanted call' contour.
The chanted call is an intonation pattern that exists in functionally and formally similar form in several languages.
Ladd (1996) provides the following description: "In many European languages, in certain situations, people who are some distance away from a speaker can be called or hailed using a chanted tune on two sustained notes, stepping down from a fairly high level to a somewhat lower level." (p. 136) We are mainly interested in how syllable borders and the segmental structure take influence on the timing of the f0 contour.
The chanted call is an exemplary pattern that we hope will be useful for the investigation of text-tune associations in general, and for crosslinguistic comparisons in particular.
It is reasonably stable in its form and function across languages, easy to elicit and even elicitable with different levels of emphasis.
Though we aim at cross-linguistic comparisons and a more general picture of text-tune associations in the long run, we will here only present German data
In this paper we describe a new network service called Speccast.
Speccast offers a generalized addressing and routing abstraction on which a rich variety of semantic services can be built, and, as such, provides a vehicle for studying the relationships among routing, addressing and topology.
Unlike overlay-based systems, we study a more basic problem, in which the topology of the network is given, and there is not necessarily any pre-existing underlying network service.
In the speccast model, each packet carries a destination predicate and the network's job is to deliver the packet to all nodes satisfying that predicate.
After
Peer-to-Peer Database Management Systems (PDBMS) are still at the beginning of their evolution.
They build on p2p technology to exploit the power of available distributed database management technologies.
The proposed PDBMS will be completely autonomous, and any notion of centralization, such as a central server or a cost-based global schema, will be absent.
In this paper a number of potential research issues in the overlap between database and p2p systems is identified, and a vision for building a PDBMS is presented.
The PDBMS is envisioned as a system that will be able to manage effectively at runtime semantic interdependences among databases in a decentralized, distributed and collaborative manner.
The main focus of the proposed PhD work is the research of adaptive techniques for development of a database management system compatible with the p2p paradigm.
A major premise of this paper is that the failure---or limited achievements---of many large-scale nutrition programs is very often a function of insufficient sustainable capacities within communities and organizations responsible for implementing them.
Following a brief review of the various rationales for an intensified focus on capacity and capacity development, the paper examines the linkages between nutrition programming and capacity development processes before proposing a new approach to assessing, analyzing, and developing capacity.
The ensuing sections then focus in more detail on the ingredients and influences of capacity at the levels of the community, program management, supporting institutions, and the government.
Finally, the implications of a more proactive focus on strengthening nutrition capacity for donor modes of operation and support priorities are discussed.
A fundamental premise, as enshrined in major international conventions and declarations, is that adequate nutrition is a human right.
In order to operationalize a truly human-rights-based approach to nutrition action---whether policy or programs---a fundamental first step is to assess capacity.
The rights approach demands an active involvement of "beneficiaries" in processes to improve nutrition.
Nutrition-vulnerable individuals, households, and communities are no longer objects of welfare transfers, but rather subjects whose capabilities are ultimately the foundations of sustainable progress.
There are several key recommendations for donor policy and practice that emerge.
First, donors need to provide more support for capacity assessment and development, operational research, and the building of policy-research-training-program networks.
A concrete, rights-based programming process demands a focu...
We propose a novel theoretical framework for understanding learning and generalization which we call the bin model.
Using the bin model, a closed form is derived for the generalization error that estimates the out-of-sample performance in terms of the in-sample performance.
We address the problem of overfitting, and show that using a simple exhaustive learning algorithm it does not arise.
This is independent of the target function, input distribution and learning model, and remains true even with noisy data sets.
We apply our analysis to both classification and regression problems and give an example of how it may be used efficiently in practice.
Thanks to recent technological advances, wireless networks are beginning to represent an interesting opportunity for factory communication systems.
Among the off-the-shelf solutions for radio communications, the IEEE802.11 technology is one of the most promising.
However, industrial applications typically impose severe requirements in terms of both real-time performance and dependability.
In this paper we consider one of the most popular models of fieldbus protocols, namely the Producer--Consumer, and study the possibility of implementing it on top of IEEE802.11 protocol suite.
After a description of how the Producer--Consumer services could be mapped onto IEEE802.11, we introduce an analytical model, which enables us to evaluate two important performance indexes: the update time jitter and the mean alarm latency.
The analysis is validated by means of numerical simulations.
In this paper we analyzed the HL7 RIM from the perspective of speech act theory and submitted its various classes to ontological analysis.
We discovered that the RIM is marked by a number of problems, above all when it comes to taking dependent continuants properly into account.
Although the RIM intends to represent registrations of relevant medical acts in a way which acknowledges the distinction between what we have called primary and secondary acts, this very distinction is not adequately maintained in all parts of the RIM.
It is true that this shortfall may be of little practical importance for the purposes of messaging; but if the RIM has the goal of being used as a reliable and effective ontology of medical acts in the future, then the situation would be quite different.
For delivering adequate and cost-effective care, it is mandatory that one can trust an ordered test not just to be carried out, but also to be registered both as having been ordered (to avoid it being ordered a second time for no other reason than that it was not known to have been ordered before) and as having been carried out.
The way this and related types of information are coded in HL7 messages may be sufficient for interpretation by HL7-trained humans, but it does not allow algorithms to grasp the underlying differences.
We propose that to make this possible in the future correction of the RIM should be undertaken on the basis of a more fine-grained ontological analysis along the lines set forth above
GPSS/H is a well-known, traditional simulation tool whose user base continues to grow despite the presence of many "new" trends in simulation technology.
In GPSS/H, the process-interaction world view has been combined with many advanced features to make one of the most powerful and flexible tools available, capable of handling large and complicated models with ease, yet still providing exceptionally high performance.
This paper presents the application of real-time simulation to assign due dates on logistic-manufacturing networks.
Information from the manufacturing, transportation, and supplier elements was integrated into a simulation model of the system to help the assignment of reliable delivery dates.
In addition, the system was used to generate multiple due date options so customers could pick the delivery speed and cost option that satisfied their specific needs.
Early childhood nutrition is thought to be an important input into subsequent academic achievement.
This paper investigates the nutrition-learning nexus using a unique longitudinal data set, which follows a large sample of Philippine children from birth until the end of their primary education.
We find that malnourished children perform more poorly in school, even after correcting for the effects of unobserved heterogeneity both across and within households.
Part of the advantage that wellnourished children enjoy arises from the fact that they enter school earlier and thus have more time to learn.
The rest of their advantage appears to stem from greater learning productivity per year of schooling rather than from greater learning effort in the form of homework time, school attendance, and so forth.
Despite these findings, our analysis suggests that the relationship between nutrition and learning is not likely to be of overriding importance either for nutrition policy or in accounting for economic growth.
In this paper, we adopt α = 2, which implies a quadratic effect of enzymes.
This specific choice of a quadratic effect is not essential in our model of cell differentiation.
Furthermore, we take into account the change in the volume of a cell, which varies as a result of the transportation of chemicals between the cell and the environment.
For simplicity, we assume that the total concentration of chemicals in a cell is constant, Σ_m x_m = const.
It follows that the volume of a cell is proportional to the sum of the quantities of all chemicals in the cell.
The volume change is calculated from the transport, as discussed below
We study the problem of information flow in real-time systems described by Timed Automata.
We distinguish between secret and observable actions of automata, we introduce a logic to express information flow properties as properties of the languages accepted by automata, and we give an algorithm to check these properties.
Performances of ΣΔ (sigma-delta) modulators are evaluated by applying a coherently sampled tone and by estimating the powers of the in-band tones and noise.
In particular, the power of the shaped noise is usually estimated by subtracting the evaluated input tone from the output data and by integrating the power spectral density estimated by means of the periodogram.
Despite the coherent sampling, the finite number of processed samples induces spectral leakage of the wide-band noise, thus affecting the noise power estimate.
To cope with such an issue, usually data are weighted by the Hanning sequence.
In this paper, the noise power estimation error induced by the use of such window is investigated, and a criterion for choosing the minimum number of samples N which bounds the relative leakage error within a specified maximum value is explicitly given.
Moreover, it is shown that, for any N , such an error is negligible for modulator orders lower than 3.
Higher order modulators require the use of a large number of samples to bound the relative error of the noise power estimate when high oversampling ratios are employed.
Computer vision, with appropriate simplifying constraints, provides a powerful sensory tool for robot control and for other important applications.
Computer vision supplemented as required by force and torque sensing, can greatly enhance the performance of first generation robots presently limited to operations based on fixed, predetermined actions.
The new capabilities include the identification of workpieces, the determination of their position and orientation, and the provision of real-time visual feedback for effecting adaptive corrections of the robot's trajectories.
Typical applications selected from real problems in industry are described and analyzed.
Further, some approaches to possible solutions are indicated.
Computer games are viewed by academics as un-grounded hack-and-patch experiments.
Academic artificial intelligence is often viewed as unimplementable and narrow-minded by the majority of non-AI programmers.
By
In this paper, a complete solution of video to photo is presented.
The intent of the user is first derived by analyzing video motions.
Then, photos are produced accordingly from the video.
They can be keyframes at video highlights, panorama of the scene, or high-resolution frames.
Methods and results of camera motion mining, intelligent keyframe extraction, video frame stitching and super-resolution enhancement are described
In this paper, we present a variance minimization (VM) procedure for rare event simulation in tandem queueing networks.
We prove that the VM method can produce a zero variance.
The VM method is suitable to compute optimal importance sampling (IS) parameters for small scale tandem networks.
For large scale tandem networks we propose a sub-optimal IS (SOIS) method, which projects the optimal biased transition probabilities of the corresponding small scale system into those of a large scale system.
In other words, we establish an efficient IS method for a large scale system by zooming into a small scale system and then projecting our findings into the large scale system.
The numerical results show that our SOIS method can produce accurate results with very short CPU time, while many other methods often require much longer.
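The sketch below shows only the underlying importance-sampling ingredient on a single M/M/1 queue (swap arrival and service probabilities and weight each run by its likelihood ratio), not the paper's VM or SOIS procedures:

# Sketch of importance sampling for a rare event in a single M/M/1 queue:
# estimate P(queue hits N before 0 | start at 1) under swapped probabilities.
import random

def is_estimate(lam=1.0, mu=2.0, N=20, runs=10_000, seed=0):
    rng = random.Random(seed)
    p = lam / (lam + mu)          # original up-step probability of the embedded walk
    p_is = 1.0 - p                # biased (swapped) up-step probability
    total = 0.0
    for _ in range(runs):
        level, weight = 1, 1.0
        while 0 < level < N:
            if rng.random() < p_is:             # biased step up
                level += 1
                weight *= p / p_is
            else:                               # biased step down
                level -= 1
                weight *= (1.0 - p) / (1.0 - p_is)
        if level == N:
            total += weight                     # unbiased thanks to the likelihood ratio
    return total / runs

# exact gambler's-ruin value for these parameters is 1/(2**20 - 1), about 9.5e-7
print(is_estimate())
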
Exercising real options often requires an implementation time, whereas financial options can be exercised instantly.
Neglecting the implementation time needed to exercise a real option causes overvaluing that option.
We develop lattice and Monte Carlo simulation techniques to value real option problems, where exercising the option requires an implementation time.
We present the application of the proposed techniques on a global supply chain network problem with exchange rate uncertainty and value the flexibility to switch between manufacturing options for a firm that has operations in different countries.
A system for recognition and morphological classification of unknown words for German is described and evaluated.
It takes raw text as input and outputs a list of the unknown nouns together with a hypothesis about their possible morphological class and stem.
MorphoClass exploits global information (ending-guessing rules, maximum likelihood estimations, word frequency statistics), morphological properties (compounding, inflection, affixes) and external knowledge (lexicons, German grammar information, etc.).
This paper describes the design, implementation, and evaluation of a Federated Array of Bricks (FAB), a distributed disk array that provides the reliability of traditional enterprise arrays with lower cost and better scalability.
FAB is built from a collection of bricks, small storage appliances containing commodity disks, CPU, NVRAM, and network interface cards.
FAB deploys a new majority-votingbased algorithm to replicate or erasure-code logical blocks across bricks and a reconfiguration algorithm to move data in the background when bricks are added or decommissioned.
We argue that voting is practical and necessary for reliable, high-throughput storage systems such as FAB.
We have implemented a FAB prototype on a 22-node Linux cluster.
This prototype sustains 85MB/second of throughput for a database workload, and 270MB/second for a bulk-read workload.
In addition, it can outperform traditional master-slave replication through performance decoupling and can handle brick failures and recoveries smoothly without disturbing client requests.
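A minimal sketch of the majority-quorum idea for one replicated logical block follows; FAB's actual voting, erasure-coding, and reconfiguration protocols are considerably richer:

# Minimal sketch of majority-quorum replication for a single logical block.
import random

class Brick:
    def __init__(self):
        self.value, self.ts = None, -1
    def write(self, value, ts):
        if ts > self.ts:
            self.value, self.ts = value, ts
    def read(self):
        return self.value, self.ts

class ReplicatedBlock:
    def __init__(self, bricks):
        self.bricks = bricks
        self.quorum = len(bricks) // 2 + 1
        self.clock = 0
    def write(self, value):
        self.clock += 1
        for b in random.sample(self.bricks, self.quorum):   # any majority suffices
            b.write(value, self.clock)
    def read(self):
        replies = [b.read() for b in random.sample(self.bricks, self.quorum)]
        return max(replies, key=lambda vt: vt[1])[0]        # newest timestamp wins

bricks = [Brick() for _ in range(5)]
blk = ReplicatedBlock(bricks)
blk.write("v1"); blk.write("v2")
print(blk.read())   # -> v2: any read majority intersects the latest write majority
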
In this paper we define a requirements-level execution semantics for object-oriented statecharts and show how properties of a system specified by these statecharts can be model checked using tool support for model checkers.
Our execution semantics is requirements-level because it uses the perfect technology assumption, which abstracts from limitations imposed by an implementation.
Statecharts describe object life cycles.
Our semantics includes synchronous and asynchronous communication between objects and creation and deletion of objects.
Our tool support presents a graphical front-end to model checkers, making these tools usable to people who are not specialists in model checking.
The model-checking approach presented in this paper is embedded in an informal but precise method for software requirements and design.
We discuss some of our experiences with model checking
Sensor networks have become an important source of data with numerous applications in monitoring various real-life phenomena as well as industrial applications and traffic control.
Unfortunately, sensor data is subject to several sources of errors such as noise from external sources, hardware noise, inaccuracies and imprecision, and various environmental effects.
Such errors may seriously impact the answer to any query posed to the sensors.
In particular, they may yield imprecise or even incorrect and misleading answers which can be very significant if they result in immediate critical decisions or activation of actuators.
In this paper, we present a framework for cleaning and querying noisy sensors.
Specifically, we present a Bayesian approach for reducing the uncertainty associated with the data that arises due to random noise, in an online fashion.
Our approach combines prior knowledge of the true sensor reading, the noise characteristics of this sensor, and the observed noisy reading in order to obtain a more accurate estimate of the reading.
This cleaning step can be performed either at the sensor level or at the base-station.
Based on our proposed uncertainty models and using a statistical approach, we introduce several algorithms for answering traditional database queries over uncertain sensor readings.
Finally, we present a preliminary evaluation of our proposed approach using synthetic data and highlight some exciting research directions in this area.
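For the Gaussian special case, the cleaning step reduces to a precision-weighted combination of the prior and the observation; the sketch below illustrates that special case only, with made-up numbers:

# Gaussian special case of the cleaning step: prior N(mu0, s0_sq) on the true
# reading, observed value z with noise variance sn_sq; the posterior is again
# Gaussian with a precision-weighted mean.
def clean_reading(z, mu0, s0_sq, sn_sq):
    k = s0_sq / (s0_sq + sn_sq)                 # gain: how much to trust the observation
    post_mean = mu0 + k * (z - mu0)
    post_var = (1.0 - k) * s0_sq
    return post_mean, post_var

# e.g. prior belief 20.0 with variance 4.0, noisy observation 26.0 with noise variance 9.0
print(clean_reading(26.0, 20.0, 4.0, 9.0))      # posterior pulled part-way toward 26.0
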
this paper.
However, in order to understand these facts, some linear algebra and multivariate analysis are needed, which are not always covered sufficiently in undergraduate texts.
This paper attempts to pool these facts together, in the hope that it will be useful for new researchers entering this area.
In this paper we confine ourselves to examining the possibility of considering elementary particles as micro universes (see e.g.
Recami 1983a, 1983b, 1979; Cf.
also Ammiraju, Recami & Rodrigues 1983): that is to say, the possibility that they be similar---in a sense to be specified---to our cosmos.
More precisely, we shall refer to the thread followed by P. Caldirola, P. Castorina, A. Italiano, G.D. Maccarrone, M. Pavsic, V.T.
Zanchin and ourselves (for an extended summary of that theory, see e.g.
Recami 1982, and refs.
therein; Recami, Martínez & Zanchin 1986; and Recami & Zanchin, 1992; see also Recami & Zanchin 1986).
Let us recall that Riemann, as well as Clifford and later on Einstein (see e.g.
Einstein 1919) believed that the fundamental particles of matter were the perceptible evidence of a strong local space curvature.
A theory which stresses the role of space (or, rather, space-time) curvature already does exist for our whole cosmos: General Relativity, based on Einstein's gravitational field equations, which are probably the most important equations of classical physical theories together with Maxwell's electromagnetic field equations.
While much effort has already been made to generalize Maxwell's equations, passing for example from the electromagnetic field to Yang-Mills fields (so that almost all modern gauge theories are modeled on Maxwell's equations) , on the contrary, the Einstein equations have never been applied to domains other than gravitation.
Nevertheless, like any differential equations, they do not contain any in-built fundamental length, so they can be used a priori to describe cosmoses of any size.
Our first purpose is to explore whether it is possible to apply successfully the methods of general relativity (GR) to the domain of the so-called nucle...
The recent file storage applications built on top of peer-to-peer distributed hash tables lack search capabilities.
We believe that search is an important part of any document publication system.
To that end, we have designed and analyzed a distributed search engine based on a distributed hash table.
Our simulation results predict that our search engine can answer an average query in under one second, using under one kilobyte of bandwidth.
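One plausible reading of such a design is a keyword-partitioned inverted index: the node responsible for hash(term) stores that term's posting list, and a query intersects the lists it fetches. The toy sketch below illustrates that idea only and omits the engine's bandwidth optimizations:

# Toy sketch of a keyword-partitioned inverted index over a DHT.
import hashlib

NODES = [f"node{i}" for i in range(8)]
store = {n: {} for n in NODES}                      # per node: term -> set of document ids

def owner(term):
    h = int(hashlib.sha1(term.encode()).hexdigest(), 16)
    return NODES[h % len(NODES)]

def publish(doc_id, text):
    for term in set(text.lower().split()):
        store[owner(term)].setdefault(term, set()).add(doc_id)

def search(query):
    postings = [store[owner(t)].get(t, set()) for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()

publish("d1", "peer to peer storage")
publish("d2", "distributed search over peer to peer hash tables")
print(search("peer search"))                        # -> {'d2'}
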
We present a side-by-side analysis of two recent image space approaches for the visualization of vector fields on surfaces.
The two methods, Image Space Advection (ISA) and Image Based Flow Visualization for Curved Surfaces (IBFVS) generate dense representations of time-dependent vector fields with high spatio-temporal correlation.
While the 3D vector fields are associated with arbitrary surfaces represented by triangular meshes, the generation and advection of texture properties is confined to image space.
Fast frame rates are achieved by exploiting frame-to-frame coherency and graphics hardware.
In our comparison of ISA and IBFVS we point out the strengths and weaknesses of each approach and give recommendations as to when and where they are best applied.
GAMBL is a word expert approach to WSD in which each word expert is trained using memorybased learning.
Joint feature selection and algorithm parameter optimization are achieved with a genetic algorithm (GA).
We use a cascaded classifier approach in which the GA optimizes local context features and the output of a separate keyword classifier (rather than also optimizing the keyword features together with the local context features).
A further innovation on earlier versions of memorybased WSD is the use of grammatical relation and chunk features.
This paper presents the architecture of the system briefly, and discusses its performance on the English lexical sample and all words tasks in SENSEVAL-3.
The GEOS-CHEM global 3-D model of tropospheric chemistry predicts a summertime O3 maximum over the Middle East, with mean mixing ratios in the middle and upper troposphere in excess of 80 ppbv.
This model feature is consistent with the few observations from commercial aircraft in the region.
Its origin in the model reflects a complex interplay of dynamical and chemical factors, and of anthropogenic and natural influences.
The anticyclonic circulation in the middle and upper troposphere over the Middle East funnels northern midlatitude pollution transported in the westerly subtropical jet as well as lightning outflow from the Indian monsoon and pollution from eastern Asia transported in an easterly tropical jet.
Large-scale subsidence over the region takes place with continued net production of O3 and little mid-level outflow.
Transport from the stratosphere does not contribute significantly to the O3 maximum.
Sensitivity simulations with anthropogenic or lightning emissions shut off indicate decreases of 20-30% and 10-15%, respectively, in the tropospheric O3 column over the Middle East.
More observations in this region are needed to confirm the presence of the O3 maximum.
In this paper we apply a Hidden Markov Model to model the structure of a collection of known proteins.
This Markov classification is able to take advantage of information implicit in the order of a sequence of observations and hence is better suited to modelling protein data than a classification model that assumes independence between observations.
We use a Minimum Message Length (MML) information measure to evaluate our protein structure model, which enables us to find the model best supported by the known evidence.
Introduction Out of the need to simplify linguistic description for pedagogic purposes, a basic prosodic model for Swedish has developed over the last two decades.
This model often manifests itself in textbooks in Swedish as a second language in the following way: Potatis är gott.
In each prominent word, there is an underlining representing "the long sound in the stressed syllable".
This marking points out the prominent words in a sentence, the stressed syllables in the prominent words and finally the phonologically long sound in the stressed syllable, thus also promoting the quantity distinction by increasing or reducing the V/C-ratio.
The model has been influenced by Bruce's (1977) model, where the quantity distinction -- which requires a prolonged syllable -- is the common denominator of all degrees of word prominence in Swedish.
It also owes much to Bannert's (e.g.
1986, 1995) many studies that have improved our understanding of what phonetic properties are crucial for speaking i
Layered multicast is probably the most elegant solution to tackle the heterogeneity problem in multicast delivery of real-time multimedia streams.
However, the multiple join experiments carried out by different receivers in order to detect the available bandwidth make it hard to achieve fairness.
In the present paper, we present a simple protocol, inspired by TCP-Vegas, that considerably reduces the number of unnecessary join experiments while achieving intra-session and inter-session fairness as well as being TCP-Friendly.
In Gross and Juttijudata (1997) a single node, G/G/1 queue was investigated as to the sensitivity of output performance measures, such as the mean queue wait, to the shape of the interarrival and service distributions selected.
Gamma, Weibull, lognormal and Pearson type 5 distributions with identical first and second moments were investigated.
Significant differences in output measures were noted for low to moderate traffic intensities (offered load, r), in some cases, even as high as 0.8.
We continue this type of investigation for two types of queueing networks, namely two versions of a two-node call center, to see if network mixing might reduce the sensitivity effect.
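As a hedged single-queue illustration of the underlying sensitivity question (not the paper's call-center networks), the sketch below simulates mean waits with the Lindley recursion under gamma and lognormal service times matched on their first two moments:

# Sketch: mean waiting time in a G/G/1 queue via the Lindley recursion under two
# service-time distributions sharing the same mean and squared coefficient of variation.
import math
import numpy as np

def lindley_mean_wait(service_sampler, lam=0.8, n=200_000, seed=1):
    rng = np.random.default_rng(seed)
    inter = rng.exponential(1.0 / lam, n)        # Poisson arrivals with rate lam
    services = service_sampler(rng, n)
    w, total = 0.0, 0.0
    for k in range(n):
        total += w                               # wait of customer k (first wait is 0)
        w = max(0.0, w + services[k] - inter[k]) # Lindley recursion for the next customer
    return total / n

mean_s, scv = 1.0, 2.0                           # service mean and squared coefficient of variation
var_s = scv * mean_s ** 2

def gamma_service(rng, n):
    shape = 1.0 / scv                            # gamma matched on mean and variance
    return rng.gamma(shape, mean_s / shape, n)

def lognormal_service(rng, n):
    sigma2 = math.log(1.0 + var_s / mean_s ** 2) # lognormal matched on mean and variance
    return rng.lognormal(math.log(mean_s) - sigma2 / 2.0, math.sqrt(sigma2), n)

print(lindley_mean_wait(gamma_service), lindley_mean_wait(lognormal_service))
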
We exploit the isolation of TCP flows based on their lifetime (classified as short- or long-lived flows) to eliminate the impact of long-lived flows on the short-lived, achieving improved response time for short-lived flows.
An additional classification scheme provides large-grain separation of flows with drastically different end-to-end round-trip-times (RTTs).
The scheme provides long-term fairness among the long-lived flows.
Hence, the combined lifetime and RTT classification appears to be able to provide both fair bandwidth sharing and better response time for the long-lived and short-lived flows respectively.
The presented evaluation results suggest that the combined classification performs, under all tested cases, comparably or better than RED.
The results indicate that a classification scheme together with some not-perfectly tuned RED, or even with DropTail dropping policy, can avoid the complexity of having to properly parameterize RED while achieving equal or better performance.
Up until now, the main foci of development in mobile communication equipment have been to decrease its size and to extend its battery operation times.
However, further reductions in the size of devices are physically limited by the user interface requirements and therefore, alternative aspects of these devices must be targeted for enhancement by designers.
A feature of mobile communications equipment is the variety of environments within which they are used, so algorithms that can improve the quality of a transmission are highly desirable.
In this paper, mobile telephony devices are being specifically considered and a CMOS implementation of the filter block of an adaptive noise canceller will be presented.
Results will then be given to demonstrate how this circuit can significantly increase speech quality by suppressing interfering noise without requiring any prior assumptions on its properties.
This paper presents our approach to inferring communities on the Web.
It delineates the sub-culture hierarchies based on how individuals get involved in the dispersion of online objects.
To this end, a relatedness function is chosen to satisfy a set of carefully defined mathematical conditions.
The conditions are deduced from how people may share common interests through placing common objects on their homepages.
Our relatedness function can infer much more detailed degree of relatedness between homepages than other contemporary methods.
Privacy amplification is the art of shrinking a partially secret string Z to a highly secret key S. We introduce a universally composable security definition for secret keys in a context where an adversary holds quantum information and show that privacy amplification by two-universal hashing is secure with respect to this definition.
Additionally, we give an asymptotically optimal lower bound on the length of the extractable key S in terms of the adversary's (quantum) knowledge about Z.
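A toy sketch of the hashing step follows, using a random binary Toeplitz matrix (a standard two-universal family); choosing the output length m from the adversary's knowledge is the quantitative content of the paper and is not shown:

# Toy sketch of privacy amplification by two-universal hashing: compress an n-bit
# partially secret string Z to an m-bit key S with a random binary Toeplitz matrix.
import numpy as np

def toeplitz_hash(z_bits, seed_bits, m):
    n = len(z_bits)
    assert len(seed_bits) == n + m - 1              # first column plus first row of the matrix
    T = np.empty((m, n), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            T[i, j] = seed_bits[i - j + n - 1]      # constant along each diagonal
    return (T.astype(int) @ z_bits) % 2             # matrix-vector product over GF(2)

rng = np.random.default_rng(0)
n, m = 32, 8
z = rng.integers(0, 2, n, dtype=np.uint8)            # partially secret string Z
seed = rng.integers(0, 2, n + m - 1, dtype=np.uint8)  # public random seed for the hash
print(toeplitz_hash(z, seed, m))                      # the extracted key S
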
In this tutorial we present an introduction to simulationbased optimization, which is, perhaps, the most important new simulation technology in the last five years.
We give a precise statement of the problem being addressed and also experimental results for two commercial optimization packages applied to a manufacturing example with seven decision variables.
There are thousands of jobs performed on the final assembly line of each Boeing 747, the Queen of the Sky.
When the decision was made to implement a moving line for the final assembly of the 747 it was absolutely necessary to evaluate many aspects of these jobs.
Discrete event simulation models were constructed to analyze numerous 747 final assembly moving line scenarios throughout several phases.
These models not only presented a visual understanding of different concepts, but also provided quantitative analysis of suggested scenarios to the moving line team.
The results presented highly optimized production flows and processes, reducing cost and flow time from the traditional 24 days to the targeted 18 days.
This work outlined some of the moving line concepts, modeling objectives, and simulation analysis.
Utilizations of different assembly positions were obtained as a result of discrete simulation modeling of many bundled jobs and stands of the 747 final assembly operation.
The Internet is migrating from a simple data network to an environment where more demanding multimedia content like audio, video, and IP telephony is being delivered.
The Internet Protocol (IP) was originally designed to interconnect heterogeneous networks.
It scales well by keeping the core network as simple and dumb as possible and provides a best effort delivery service.
However, multimedia applications require something better than a simple best effort delivery.
Many solutions have been proposed to implement quality of service (QoS) on IP networks.
Typically, these methods do not take into account the inherent characteristics of multimedia data, and leave most of the work to the end hosts.
In this paper, we propose a framework for a new protocol set to integrate network and application level QoS to reach the best possible quality for multimedia delivery over the Internet.
The suggested method has a modular, distributed and scalable architecture, enabling it to easily grow as the network size and/or QoS requirements change.
SUMTIME-MOUSAM is a Natural Language Generation (NLG) system that produces textual weather forecasts for offshore oilrigs from Numerical Weather Prediction (NWP) data.
It has been used for the past year by Weathernews (UK) Ltd for producing 150 draft forecasts per day, which are then post-edited by forecasters before being released to end-users.
In this paper, we describe how the system works, how it is used at Weathernews and finally some lessons we learnt from building, installing and maintaining SUMTIME-MOUSAM.
One important lesson has been that using NLG technology improves maintainability although the biggest maintenance work actually involved changing data formats at the I/O interfaces.
We also found our system being used by forecasters in unexpected ways for understanding and editing data.
We conclude that the success of a technology owes as much to its functional superiority as to its suitability to the various stakeholders such as developers and users.
We define and study capacity regions for wireless ad hoc networks with an arbitrary number of nodes and topology.
These regions describe the set of achievable rate combinations between all source-destination pairs in the network under various transmission strategies, such as variable-rate transmission, single-hop or multihop routing, power control, and successive interference cancellation (SIC).
Multihop cellular networks and networks with energy constraints are studied as special cases.
With slight modifications, the developed formulation can handle node mobility and time-varying flat-fading channels.
Numerical results indicate that multihop routing, the ability for concurrent transmissions, and SIC significantly increase the capacity of ad hoc and multihop cellular networks.
On the other hand, gains from power control are significant only when variable-rate transmission is not used.
Also, time-varying flat-fading and node mobility actually improve the capacity.
Finally, multihop routing greatly improves the performance of energy-constrained networks.
This article presents an automatic information extraction method from poor quality specific-domain corpora.
This method is based on building a semi-formal ontology in order to model information present in the corpus and its relation.
This approach takes place in four steps: corpus normalization by a correcting process, ontology building from texts and external knowledge, model formalization into a grammar, and the information extraction itself, which is performed by a tagging process using grammar rules.
After a description of the different stages of our method, experimentation on a French bank corpus is presented.
We study the fundamental limitations of relational algebra (RA) and SQL in supporting sequence and stream queries, and present effective query language and data model enrichments to deal with them.
We begin by observing the well-known limitations of SQL in application domains which are important for data streams, such as sequence queries and data mining.
Then we present a formal proof that, for continuous queries on data streams, SQL suffers from additional expressive power problems.
We then focus on the notion of nonblocking (NB) queries, which are the only continuous queries that can be supported on data streams.
We characterize the notion of nonblocking queries by showing that they are equivalent to monotonic queries.
Therefore the notion of NB-completeness for RA can be formalized as its ability to express all monotonic queries expressible in RA using only the monotonic operators of RA.
We show that RA is not NB-complete, and SQL is not more powerful than RA for monotonic queries.
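A small illustration of the blocking/nonblocking distinction (not taken from the paper) is sketched below: a monotonic running aggregate can emit answers while the stream flows, whereas a query involving negation must wait for an end of input that never comes:

# Illustration of nonblocking (monotonic) vs blocking (non-monotonic) stream queries.
def running_count(stream):
    # nonblocking / monotonic: the answer set only grows as tuples arrive
    counts = {}
    for key, value in stream:
        counts[key] = counts.get(key, 0) + 1
        yield key, counts[key]

def keys_never_above(stream, threshold):
    # blocking / non-monotonic: a key can be disqualified by a later tuple,
    # so nothing can be emitted safely before the whole stream has been seen
    seen, bad = set(), set()
    for key, value in stream:
        seen.add(key)
        if value > threshold:
            bad.add(key)
    return seen - bad

data = [("a", 3), ("b", 12), ("a", 7), ("b", 2)]
print(list(running_count(data)))          # emitted incrementally: [('a', 1), ('b', 1), ...]
print(keys_never_above(data, 10))         # only available after the stream ends: {'a'}
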
Volumetric light transport effects are significant for many materials like skin, smoke, clouds, snow or water.
In particular, one must consider the multiple scattering of light within the volume.
While it is possible to simulate such media using volumetric Monte Carlo or finite element techniques, those methods are very computationally expensive.
On the other hand, simple analytic models have so far been limited to homogeneous and/or optically dense media and cannot be easily extended to include strongly directional effects and visibility in spatially varying volumes.
We present a practical method for rendering volumetric effects that include multiple scattering.
We show an expression for the point spread function that captures blurring of radiance due to multiple scattering.
This paper reports on the development of the Electric Power and Communication Synchronizing Simulator (EPOCHS), a distributed simulation environment.
Existing electric power simulation tools accurately model power systems of the past, which were controlled as large regional power pools without significant communication elements.
However, as power systems increasingly turn to protection and control systems that make use of computer networks, these simulators are less and less capable of predicting the likely behavior of the resulting power grids.
Similarly, the tools used to evaluate new communication protocols and systems have been developed without attention to the roles they might play in power scenarios.
EPOCHS utilizes multiple research and commercial off-the-shelf (COTS) systems to bridge the gap.
EPOCHS is also notable for allowing users to transparently encapsulate complex system behavior that bridges multiple domains through the use of a simple agent-based framework.
Paradoxes, particularly Tarski's liar paradox, represent an ongoing challenge that has long attracted special interest.
There have been numerous attempts to give either a formal or a more realistic resolution to this area based on natural logical intuition or common sense.
Introduction: To expand our ability to test current concepts about system and organ function within organisms in normal and disease states we need a new class of discrete, event-driven simulation models that achieve a higher level of biological realism across multiple scales, while being sufficiently flexible to represent different aspects of the biology.
Here we provide the first description of such models, one that is focused on the rat liver.
We use a middle-out design strategy that begins with primary parenchymal units.
The models are sufficiently flexible to represent different aspects of hepatic biology, including heterogeneous microenvironments.
Model components are designed to be easily joined and disconnected, and to be replaceable and reusable.
The models function within a multitier, in silico apparatus designed to support iterative experimentation on models that will have extended life cycles.
In this paper, we report on our approach to adding Natural Language Generation (NLG) capabilities to ITSs.
Our choice has been to apply simple NLG techniques to improve the feedback provided by an existing ITS, specifically, one built within the DIAG framework (Towne 1997a).
We evaluated the original version of the system and the enhanced one with a between subjects experiment.
On the whole, the enhanced system is better than the original one, other than in helping subjects remember the actions they took.
Current work includes exploiting more sophisticated NLG techniques but still without delving into full fledged text planning.
This paper focuses on the potential role of the Object-Role Modeling (ORM) approach to information modeling for the task of domain modeling.
Domain modeling concerns obtaining and modeling the language (concepts, terminologies, ontologies) used by stakeholders to talk about a domain.
Achieving conceptual clarity and consensus among stakeholders is an important yet often neglected part of system development, and requirements engineering in particular.
There have been competing arguments about the effect of public infrastructure on productivity in the literature.
Level-based regressions generally show a much higher return to public capital than private capital, while difference-based regressions tend to find insignificant or even negative effects.
To help reconcile this debate, this paper proposes that researchers should first test for causality in their data to check for length of lagged relationships and the existence of reverse causality, as a critical step before specifying a final model and estimating procedure on the relationship between the stock of capital and productivity growth.
A newly developed system GMM method of estimation is proposed for this purpose.
Second, a new method of estimating the relationship between the capital stock and productivity in level form is proposed that controls for possible endogeneity problems arising from reverse causation.
These methods are illustrated using a unique set of pooled time-series, cross-section data for India.
It is shown that infrastructure development in India is productive with an estimated impact lying between those obtained from level-based and difference-based estimates.
Contrast optimisation is a method that can be used to correct phase errors in coherent images such as SAS images.
However, the contrast measure of a given coherent image is a random variable due to the speckle present in coherent images.
The variance of this measure puts a limit on the ability of contrast optimisation to focus an image.
In this article, we examine this issue by classifying and discussing a wide-ranging set of Web metrics.
We present the origins, measurement functions, formulations and comparisons of well-known Web metrics for quantifying Web graph properties, Web page significance, Web page similarity, search and retrieval, usage characterization and information theoretic properties.
We also discuss how these metrics can be applied for improving Web information access and use.
Recently, the software industry has published several proposals for transactional processing in the Web service world.
Even though most proposals support arbitrary transaction models, there exists no standardized way to describe such models.
This paper describes potential impacts and use cases of utilizing advanced transaction meta-models in the Web service world and introduces a suitable meta-model for defining arbitrary advanced transaction models.
In order to make this meta-model more usable in Web service environments, it had to be enhanced and an XML representation of the enhanced model had to be developed.
We consider the problem of blindly equalizing a single input single output communication channel, assuming that the tap input vector to the equalizer has mutually uncorrelated components.
This can be achieved if the received signal is preprocessed by an adaptive all-pole filter (which has been suggested as a means for both MSE improvement and DFE initialization), or by a standard lattice predictor.
In the former case, it has been observed recently that an eigenvector of the (prewhitened) input quadricovariance matrix may provide a good initial estimate for the equalizer.
We provide analytical justification for this fact and show that, if the MSE of a Wiener equalizer for a given delay is small, then this Wiener equalizer is close to an eigenvector of this quadricovariance matrix.
The corresponding eigenvalue decreases with the MSE, and thus picking the smallest eigenvalue automatically provides blind delay optimization.
A major problem in web database applications and on the Internet in general is the scalable delivery of data.
One proposed solution for this problem is a hybrid system that uses multicast push to scalably deliver the most popular data, and reserves traditional unicast pull for delivery of less popular data.
However, such a hybrid scheme introduces a variety of data management problems at the server.
In this paper we examine three of these problems: the push popularity problem, the document classification problem, and the bandwidth division problem.
The push popularity problem is to estimate the popularity of the documents in the web site.
The document classification problem is to determine which documents should be pushed and which documents must be pulled.
The bandwidth division problem is to determine how much of the server bandwidth to devote to pushed documents and how much of the server bandwidth should be reserved for pulled documents.
We propose simple and elegant solutions for these problems.
We report on experiments with our system that validate our algorithms.
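As a rough illustration of the bookkeeping these three problems involve (not the paper's actual algorithms), the sketch below estimates popularity from a request log, classifies documents into push and pull sets, and divides bandwidth in proportion to the push workload; the function name and the 80% cutoff are assumptions made for this example.
```python
from collections import Counter

def plan_hybrid_delivery(request_log, documents, total_bw, push_share=0.8):
    """request_log: iterable of requested doc ids; documents: all doc ids."""
    popularity = Counter(request_log)                      # push-popularity estimate
    ranked = sorted(documents, key=lambda d: popularity[d], reverse=True)
    total_requests = sum(popularity.values()) or 1
    pushed, covered = [], 0
    for doc in ranked:                                     # document classification
        if covered / total_requests >= push_share:
            break
        pushed.append(doc)
        covered += popularity[doc]
    pulled = [d for d in documents if d not in pushed]
    push_bw = total_bw * covered / total_requests          # bandwidth division
    return pushed, pulled, push_bw, total_bw - push_bw

log = ["a", "a", "b", "a", "c", "b", "d"]
print(plan_hybrid_delivery(log, ["a", "b", "c", "d"], total_bw=100.0))
```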
Web caches, content distribution networks, peer-to-peer file sharing networks, distributed file systems, and data grids all have in common that they involve a community of users who generate requests for shared data.
In each case, overall system performance can be improved significantly if we can first identify and then exploit interesting structure within a community's access patterns.
To this end, we propose a novel perspective on file sharing based on the study of the relationships that form among users based on the files in which they are interested.
We propose a new structure that captures common user interests in data---the data-sharing graph--- and justify its utility with studies on three data-distribution systems: a high-energy physics collaboration, the Web, and the Kazaa peer-to-peer network.
We find small-world patterns in the data-sharing graphs of all three communities.
We analyze these graphs and propose some probable causes for these emergent small-world patterns.
The significance of small-world patterns is twofold: it provides rigorous support to intuition and, perhaps most importantly, it suggests ways to design mechanisms that exploit these naturally emerging patterns.
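A minimal sketch of how such a data-sharing graph can be built and probed for small-world structure (illustrative only, not the paper's pipeline; networkx is assumed to be available):
```python
import itertools
import networkx as nx

def data_sharing_graph(user_requests):
    """user_requests: {user: set of items requested}. Users become vertices;
    an edge joins two users whose request sets overlap."""
    g = nx.Graph()
    g.add_nodes_from(user_requests)
    for u, v in itertools.combinations(user_requests, 2):
        if user_requests[u] & user_requests[v]:
            g.add_edge(u, v)
    return g

requests = {"u1": {"f1", "f2"}, "u2": {"f2", "f3"}, "u3": {"f3"}, "u4": {"f1", "f4"}}
g = data_sharing_graph(requests)
# small-world signature: clustering well above, and path lengths comparable to,
# a random graph of the same size and density
print(nx.average_clustering(g), nx.average_shortest_path_length(g))
```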
We address the problem dealing with a large collection of data, and investigate the use of automatically constructing category hierarchy from a given set of categories to improve classification of large corpora.
We use two well-known techniques to create the category hierarchy: partitioning clustering with k-means and a loss function.
k-means is used to cluster the given categories into a hierarchy.
To select the proper number of k, we use a loss function which measures the degree of our disappointment in any differences between the true distribution over inputs and the learner's prediction.
Once the optimal number k is selected, the procedure is repeated for each cluster.
Our evaluation using the 1996 Reuters corpus which consists of 806,791 documents shows that automatically constructing hierarchy improves classification accuracy.
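The following sketch conveys the general recipe under stated assumptions: categories are represented by centroid vectors and clustered recursively with k-means, and a silhouette score stands in for the paper's loss function comparing the true input distribution with the learner's prediction.
```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def build_hierarchy(categories, vectors, max_k=4, min_size=2):
    """categories: list of names; vectors: array (n_categories, n_features)."""
    if len(categories) <= min_size:
        return categories                       # leaf: a small cluster of categories
    best = None
    for k in range(2, min(max_k, len(categories) - 1) + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(vectors)
        score = silhouette_score(vectors, labels)   # proxy for the loss function
        if best is None or score > best[0]:
            best = (score, k, labels)
    _, k, labels = best
    return [build_hierarchy([c for c, l in zip(categories, labels) if l == i],
                            vectors[labels == i], max_k, min_size)
            for i in range(k)]

rng = np.random.default_rng(0)
vecs = np.vstack([rng.normal(m, 0.1, size=(3, 5)) for m in (0.0, 1.0, 2.0)])
print(build_hierarchy([f"cat{i}" for i in range(9)], vecs))
```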
Naive Bayesian classifiers have been very successful in attribute-value representations.
This paper describes WordNet design and development, discussing its origins, the objectives it initially intended to reach and the subsequent use to which it has been put, the factor that has determined its structure and success.
The emphasis in this description of the product is on its main applications, given the instrumental nature of WordNet, and on the improvements and upgrades of the tool itself, along with its use in natural language processing systems.
The purpose of the paper is to identify the most significant recent trends with respect to this product, to provide a full and useful overview of WordNet for researchers working in the field of information retrieval.
The existing literature is reviewed and present applications are classified to concur with the areas discussed at the First International WordNet Congress.
In this article, we present a model for transformation of resources in information supply.
These transformations allow us to reason more flexibly about information supply, and in particular its heterogeneous nature.
They allow us to change the form (e.g. report, abstract, summary) and format (e.g. PDF, DOC, HTML) of data resources found on the Web.
In a retrieval context these transformations may be used to ensure that data resources are presented to the user in a form and format that is apt at that time.
In order to build a new breed of software that can deeply understand people and our problems, so that they can help us to solve them, we are developing at the Media Lab a suite of computational tools to give machines the capacity to learn and reason about everyday life---in other words, to give machines `common sense'.
We are building several large-scale commonsense knowledge bases that model broad aspects of the ordinary human world, including descriptions of the kinds of goals people have, the actions we can take and their effects, the kinds of objects that we encounter every day, and so forth, as well as the relationships between such entities.
In this article we describe three systems we have built---ConceptNet, LifeNet, and StoryNet---that take unconventional approaches to representing, acquiring, and reasoning with large quantities of commonsense knowledge.
Each adopts a different approach: ConceptNet is a large-scale semantic network, LifeNet is a probabilistic graphical model, and StoryNet is a database of story-scripts.
We describe the evolution of these three systems, the techniques that underlie their construction and their operation, and conclude with a discussion of how we might combine them into an integrated commonsense reasoning system that uses multiple representations and reasoning methods.
Evans and others (1997) have suggested that the description of the classical electromagnetic field may be incomplete and that the usual Maxwellian transverse wave components may be accompanied by a phase free longitudinal component, the so-called B field.
The original suggestion was that the conjugate product of the transverse field components yields a phase free, longitudinal, real magnetic field.
The predicted properties of this field have been the subject of many publications (see vols 1-3 Enigmatic Photon and references therein and this Special Issue).
This inference has been criticised by Barron (1993) as violating CPT symmetry and also more recently by Comay (1996) as actually violating Maxwell's equations themselves.
Comay (1996) claims that the inclusion of a B component to the field of a rotating dipole leads to a violation of one of Maxwell's equations.
Evans and Jeffers (1996) have shown that this argument is incorrect since the curl of the
The complexity and dynamics of the Internet are driving the demand for scalable and efficient network simulation.
Parallel and distributed network simulation techniques make it possible to utilize the power of multiple processors or clusters of workstations to simulate large-scale networks efficiently.
However, synchronization overheads inherent in parallel and distributed simulations limit the efficiency and scalability of these simulations.
We developed a novel distributed network simulation framework and synchronization approach which achieved better efficiency than conventional approaches.
In this framework, BGP networks are partitioned into domains of Autonomous Systems (ASes), and simulation time is divided into intervals.
Each domain is simulated independently of and concurrently with the others over the same time interval.
At the end of each interval, packet delays and drop rates for each interdomain flow are exchanged between domain simulators.
The simulators iterate over the same time interval until the exchanged information converges to within a prescribed precision, before progressing to the next time interval.
This approach allows the parallelization with infrequent synchronization, and achieves significant simulation speedups.
In this paper, we focus on the design of distributed BGP network simulation in Genesis in which many BGP ASes can be assigned to a single Genesis domain.
We also report our experimental results that measure Genesis distributed efficiency in large-scale BGP network simulations.
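The iterative interval scheme can be pictured with the schematic sketch below (not Genesis code; the StubDomain class, the interface and the tolerances are invented for illustration): each domain re-simulates the same interval until the exchanged inter-domain flow metrics stop changing.
```python
def simulate_interval(domains, boundary, t_start, t_end, tol=1e-3, max_iter=50):
    """domains: objects with .run(t_start, t_end, boundary) returning
    {flow_id: (delay, drop_rate)} for the inter-domain flows they source."""
    for _ in range(max_iter):
        new_boundary = {}
        for dom in domains:                      # conceptually run in parallel
            new_boundary.update(dom.run(t_start, t_end, boundary))
        converged = all(
            abs(v[0] - boundary.get(f, (0.0, 0.0))[0]) < tol and
            abs(v[1] - boundary.get(f, (0.0, 0.0))[1]) < tol
            for f, v in new_boundary.items())
        boundary = new_boundary
        if converged:
            break
    return boundary                              # carried into the next interval

class StubDomain:                                # toy stand-in for a domain simulator
    def __init__(self, flow):
        self.flow = flow
    def run(self, t0, t1, boundary):
        d, p = boundary.get(self.flow, (0.0, 0.0))
        return {self.flow: (0.5 * d + 5.0, 0.5 * p + 0.005)}   # relaxes to (10, 0.01)

print(simulate_interval([StubDomain("A->B"), StubDomain("B->A")], {}, 0.0, 1.0))
```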
The aim of this article is to operationalize this across production markets.
Our goal is to characterize the mobility and access patterns in an IEEE 802.11 infrastructure.
This can be beneficial in many domains, including coverage planning, resource reservation, supporting location-dependent applications and applications with real-time constraints, and producing models for simulations.
We conducted an extensive measurement study of wireless users and their association patterns on a major university campus using the IEEE802.11 wireless infrastructure.
We propose a new methodology to characterize and analyze the wireless access pattern based on several parameters such as mobility, session and visit durations.
This methodology can allow us to study how user association and mobility patterns evolve in the temporal and spatial dimension.
On-demand broadcast is an effective data dissemination technique to enhance system scalability and deal with dynamic user access patterns.
With the rapid growth of time-critical information services and emerging applications such as mobile location-based services, there is an increasing need for the system to support timely data dissemination.
This paper studies online scheduling algorithms for time-critical on-demand broadcast.
We propose a novel scheduling algorithm called SIN-# that takes into account the urgency and productivity of serving pending requests.
An efficient implementation of SIN-# is presented.
Moreover, we analyze the optimal broadcast schedule in terms of request drop rate when the request arrival rate rises towards infinity.
Trace-driven experiments demonstrate that SIN-# significantly outperforms existing algorithms over a wide range of workloads.
The results also show that the performance of SIN-# approaches the analytical optimum at high request rates.
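A hedged sketch of an urgency/productivity scheduler in this spirit is shown below; the priority value used here (slack time divided by the number of pending requests raised to a tunable exponent) is an assumption for illustration, not necessarily the paper's exact SIN-# definition.
```python
import time

def pick_next_item(pending, now, alpha=1.0):
    """pending: {item: [request deadlines]}; returns the item to broadcast next."""
    def priority(item):
        live = [d for d in pending[item] if d > now]   # ignore already-expired requests
        if not live:
            return float("inf")
        slack = min(live) - now                        # urgency: tightest deadline
        return slack / (len(live) ** alpha)            # productivity: pending requests
    return min(pending, key=priority)

now = time.time()
pending = {"stock_quotes": [now + 2, now + 5, now + 6], "weather": [now + 1]}
print(pick_next_item(pending, now))                    # "stock_quotes" wins here
```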
This paper describes a generalisation of the Phase Gradient Autofocus (PGA) algorithm that allows strip-map operation.
A standard autofocus technique, PGA, uses prominent points within the target scene to estimate the point spread function of the system.
PGA was developed for tomographic mode spotlight synthetic aperture radar (SAR) but has limited applicability for side-scan synthetic aperture operation.
We show how it can be generalised to work with stripmap geometries and relate our new method to the previous PGA extension to strip-map systems.
This paper discusses verification and validation of simulation models.
The different approaches to deciding model validity are presented; how model verification and validation relate to the model development process are discussed; various validation techniques are defined; conceptual model validity, model verification, operational validity, and data validity are described; ways to document results are given; and a recommended procedure is presented.
Interoperability and reusability are features supported by the new High Level Architecture for Modeling and Simulation (HLA).
While the traditional approach of monolithic traffic simulation modeling has proven to be successful, distributed traffic simulations gain more attention.
The first part of the paper describes our work with distributed traffic simulation based on the High Level Architecture and the lessons learned from enhancing classic simulation and animation tools for HLA and our first HLA prototypes.
The second part elaborates on the additional flexibility that architectures for distributed simulation offer, focussing on the dynamic integration of information relevant to the overall simulation into the dynamic event space.
A promising outlook concludes the paper.
This paper explores the implications of the incompressibility of complex systems for the analysis and modelling of such systems.
In particular, a provisional epistemology (theory of knowledge) will be developed that attempts to remain faithful to the limitations derived from this aspect of complexity science.
We will argue that such an investigation of complex systems highlights the relevance of paradigmatic pluralism or eclecticism, analytical creativity and boundary critique, and therefore has some affinity to the writings on affirmative postmodernism.
Complexity thinking (i.e. thinking based on insights derived in complexity science), like postmodernism, provides a clear warning as to the dangers of uncritically adopting any `black and white' theoretical position.
It encourages the deferral of paradigm selection and a healthy scepticism.
In this `middle way' there is equal attention paid to qualitative as well as quantitative approaches to analysis.
This paper presents different ways to use the Doppler Tissue Imaging (DTI) in order to determine deformation of the cardiac wall.
As an extra information added to the ultrasound images, the DTI gives the velocity in the direction of the probe.
We first show a way to track points along the cardiac wall in a M-Mode image (1D+t).
This is based on energy minimization similar to a deformable grid.
We then extend the ideas to finding the deformation field in a sequence of 2D images (2D+t).
This is based on energy minimization including spatio-temporal regularization.
Use-cases and scenarios have been identified as good inputs to generate test cases and oracles at requirement level.
Yet to automate this generation, information is missing from use cases and sequence diagrams, such as the exact inputs of the system and the ordering constraints between use cases.
The contribution of this paper is then twofold.
First we propose a contract language for functional requirements expressed as parameterized use cases.
Then we provide a method, a formal model and a prototype tool to automatically derive both functional and robustness test cases from the requirements enhanced with contracts.
We study the efficiency of the generated test cases on a case study.
While the aftbody engine exhaust flowfield of hypersonic launch vehicles can be analyzed with computer-intensive computational fluid dynamic codes, this approach is not suitable for use in conceptual vehicle studies.
In order to accomplish any complete vehicle-level optimization in the conceptual design phase, performance changes due to the nozzle design must be available quickly.
To make this task even more challenging, performance changes need to be assessed over a broad range of flight conditions, instead of just at a single point.
ly small, nodes have tiny or irreplaceable power reserves, communicate wirelessly, and may not possess unique identifiers.
Further, they must form ad hoc relationships in a dense network with little or no preexisting infrastructure.
Protocols and algorithms operating in the network must support large-scale distribution, often with only localized interactions among nodes.
The network must continue operating even after significant node failure, and it must meet real-time requirements.
In addition to the limitations imposed by application-dependent deadlines, because it reflects a changing environment, the data the network gathers may intrinsically be valid for only a short time.
Sensor networks may be deployed in a host of different environments, and they often figure into military scenarios.
These networks may gather intelligence in battlefield conditions, track enemy troop movements, monitor a secured zone for activity, or measure damage and casualties.
An airplane o
Generative Programming (GP) is a new paradigm that allows automatic creation of entire software families, using the configuration of elementary and reusable components.
GP can be projected on different technologies, e.g. C++ templates, JavaBeans, Aspect-Oriented Programming (AOP), or Frame technology.
This paper focuses on Frame Technology, which aids the possible implementation and completion of software components.
The purpose of this paper is to introduce the GP paradigm in the area of GUI application generation.
It also demonstrates how automatically customized executable applications with GUI parts can be generated from an abstract specification.
This article suggests a preliminary version of a Cantorian superfluid vortex hypothesis as a plausible model of nonlinear cosmology.
Though some parts of the proposed theory resemble several elements of what has been proposed by Consoli (2000, 2002), Gibson (1999), Nottale (1996, 1997, 2001, 2002a), and Winterberg (2002b), it seems that such a Cantorian superfluid vortex model, as opposed to a superfluid or vortex theory alone, has never been proposed before.
Distributed real-time simulation is the focus of intense development, with complex systems being represented by individual component simulations interacting as a coherent model.
The real-time architecture may be composed of physically separated simulation centres.
Commercial off-the-shelf (COTS) and freeware real-time software exists to provide data communication channels between the components, subject to adequate system bandwidth.
However if the individual models are too computationally intensive to run in real time, then the performance of the real-time simulation architecture is compromised.
In this paper, model representations are developed from dynamic simulation by the response surface methodology (RSM), allowing complex systems to be included in a real-time environment.
A Permanent Magnet AC (PMAC) motor drive simulation with model reference control for a more electric aircraft application is examined as a candidate for inclusion in a real-time simulation environment.
Translucent WDM optical networks use sparse placement of regenerators to overcome the impairments and wavelength contention introduced by fully transparent networks, and achieve a performance close to fully opaque networks with much less cost.
Our previous study proved the feasibility of translucent networks using sparse regeneration technique.
We addressed the placement of regenerators based on static schemes allowing only fixed number of regenerators at fixed locations.
This paper furthers the study by proposing a suite of dynamical routing schemes.
Dynamic allocation, advertisement and discovery of regeneration resources are proposed to support sharing transmitters and receivers between regeneration and access functions.
This study follows the current trend in the optical networking industry by utilizing extensions of IP control protocols.
Dynamic routing algorithms, aware of current regeneration resources and link states, are designed to smartly route the connection requests under quality constraints.
A hierarchical network model, supported by the MPLS-based control plane, is also proposed to provide scalability.
Experiments show that network performance is improved without placement of extra regenerators.
Simulation can provide insight to the behavior of a complex queueing system by identifying the response surface of several performance measures such as delays and backlogs.
However, simulations of large systems are expensive both in terms of CPU time and use of available resources (e.g. processors).
Thus, it is of paramount importance to carefully select the inputs of simulation in order to adequately capture the underlying response surface of interest and at the same time minimize the required number of simulation runs.
In this study, we present a methodological framework for designing efficient simulations for complex networks.
Our approach works in sequential and combines the methods of CART (Classification And Regression Trees) and the design of experiments.
A generalized switch model is used to illustrate the proposed methodology and some useful applications are described.
Online simulation of pedestrian flow in public buildings is a new tool which can be especially useful for improving the aspects of safety and short-term planning in the phase of organizing and operating large public buildings.
These might be places such as a train station, an airport or a shopping center.
This paper provides an insight into the different concepts of pedestrian flow simulation.
Special emphasis is placed on explaining the mesoscopic approach as applied to the area of traffic simulation.
This approach is transferred to the context of analyzing and predicting the pedestrian flow.
A first prototypical implementation of a simulation-supported control center is also briefly presented.
As the technology is shrinking toward 50 nm and the working frequency is going into the multi-gigahertz range, the effect of interconnects on functionality and performance of system-on-chips is becoming dominant.
More specifically, distortion (integrity loss) of signals traveling on high-speed interconnects can no longer be ignored.
In this paper, we propose a new fault model, called multiple transition, and its corresponding test pattern generation mechanism.
We also extend the conventional boundary-scan architecture to allow testing signal integrity in SoC interconnects.
Our extended JTAG architecture collects and outputs the integrity loss information using the enhanced observation cells.
The architecture fully complies with the JTAG standard and can be adopted by any SoC that is IEEE 1149.1 compliant.
We apply coset codes to adaptive modulation in fading channels.
Adaptive modulation is a powerful technique to improve the energy efficiency and increase the data rate over a fading channel.
Coset codes are a natural choice to use with adaptive modulation since the channel coding and modulation designs are separable.
Therefore, trellis and lattice codes designed for additive white Gaussian noise (AWGN) channels can be superimposed on adaptive modulation for fading channels, with the same approximate coding gains.
We first describe the methodology for combining coset codes with a general class of adaptive modulation techniques.
We then apply this methodology to a spectrally efficient adaptive M-ary quadrature amplitude modulation (MQAM) to obtain trellis-coded adaptive MQAM.
We present analytical and simulation results for this design which show an effective coding gain of 3 dB relative to uncoded adaptive MQAM for a simple four-state trellis code, and an effective 3.6-dB coding gain for an eight-state trellis code.
More complex trellis codes are shown to achieve higher gains.
We also compare the performance of trellis-coded adaptive MQAM to that of coded modulation with built-in time diversity and fixed-rate modulation.
The adaptive method exhibits a power savings of up to 20 dB.
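For intuition, the sketch below picks a constellation size per fading state from the instantaneous SNR using the common MQAM approximation BER ~ 0.2*exp(-1.5*SNR/(M-1)), and models the trellis code's effective coding gain, as an assumption made for this example, as a simple dB shift of the usable SNR.
```python
import math

def pick_mqam(snr_db, target_ber=1e-3, coding_gain_db=3.0,
              constellations=(4, 16, 64, 256)):
    snr = 10 ** ((snr_db + coding_gain_db) / 10)          # effective linear SNR
    max_m = 1 + 1.5 * snr / (-math.log(5 * target_ber))   # largest M meeting the BER
    feasible = [m for m in constellations if m <= max_m]
    return max(feasible) if feasible else None            # None: defer transmission

for snr_db in (5, 10, 15, 20, 25):
    print(snr_db, "dB ->", pick_mqam(snr_db))
```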
Effective information disclosure in the context of databases with a large conceptual schema is known to be a non-trivial problem.
In particular the formulation of ad-hoc queries is a major problem in such contexts.
Existing approaches for tackling this problem include graphical query interfaces, query by navigation, query by construction, and point to point queries.
In this report we propose an adoption of the query by navigation mechanism that is especially geared towards the InfoAssistant product.
Query by navigation is based on ideas from the information retrieval world, in particular on the stratified hypermedia architecture.
This paper presents an extensive analysis of the client workloads for educational media servers at two major U.S. universities.
The goals of the analysis include providing data for generating synthetic workloads, gaining insight into the design of streaming content distribution networks, and quantifying how much server bandwidth can be saved in interactive educational environments by using recently developed multicast streaming methods for stored content.
We present a general method for automatic meta-analyses in neuroscience and apply it on text data from published functional imaging studies to extract main functions associated with a brain area --- the posterior cingulate cortex.
Abstracts from PubMed are downloaded, words extracted and converted to a bag-of-words matrix representation.
The combined data is analyzed with hierarchical non-negative matrix factorization.
We find that the prominent themes in the PCC corpus are episodic memory retrieval and pain.
We further characterize the distribution in PCC of the Talairach coordinates available in some of the articles.
This shows a tendency to functional segregation between memory and pain components where memory activations are predominantly in the caudal part and pain in the rostral part of PCC.
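A compact sketch of the text-analysis step, using plain NMF rather than the hierarchical variant and a toy corpus invented for illustration (scikit-learn assumed available):
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

abstracts = [
    "episodic memory retrieval activated posterior cingulate cortex",
    "memory retrieval task with posterior cingulate activation",
    "pain stimulation produced rostral cingulate response",
    "noxious pain stimuli and cingulate activity",
]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(abstracts)                  # documents x words (bag of words)
nmf = NMF(n_components=2, random_state=0).fit(X)  # two latent themes
terms = vec.get_feature_names_out()
for k, comp in enumerate(nmf.components_):
    top = comp.argsort()[-4:][::-1]               # strongest words of each theme
    print(f"theme {k}:", [terms[i] for i in top])
```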
The need for new theoretical and experimental approaches to understand dynamic and heterogeneous behavior in complex economic and social systems is increasing recently.
An approach using the agent-based simulation and the artificial market on the computer system is considered to be an effective approach.
The computational simulation with dynamically interacting heterogeneous agents is expected to reproduce complex phenomena in economics, and helps us to experiment with various control methods, to evaluate systematic designs, and to extract the fundamental elements which produce the interesting phenomena for future analytical work.
In the previous works, we investigated the stability of a virtual commodities market and the aggregated behavior of the dynamic online auctions with heterogeneous agents.
In this paper, we will introduce a simple framework to develop agent-based simulations systematically and consider an application of the agent-based simulation for a dynamical model of the international greenhouse gas emissions trading.
In this paper, Newtonian cosmology is extended in a manner which yields the same effects as are obtained for the de Sitter cosmology in Einstein's theory.
In particular, the emergence of the Hubble redshift in de Sitter cosmology as a Doppler-cum-gravitational effect for de Sitter coordinates and as a "tired light" effect for Robertson coordinates can be matched in extended Newtonian cosmology.
Most ad hoc networks do not implement any network access control, leaving these networks vulnerable to resource consumption attacks where a malicious node injects packets into the network with the goal of depleting the resources of the nodes relaying the packets.
To thwart or prevent such attacks, it is necessary to employ authentication mechanisms that ensure that only authorized nodes can inject traffic into the network.
In this paper, we present LHAP, a scalable and light-weight authentication protocol for ad hoc networks.
LHAP is based on two techniques: (i) hop-by-hop authentication for verifying the authenticity of all the packets transmitted in the network and (ii) one-way key chain and TESLA for packet authentication and for reducing the overhead for establishing trust among nodes.
We analyze the security of LHAP, and show LHAP is a lightweight security protocol through detailed performance analysis.
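The one-way key chain primitive that LHAP builds on can be sketched as follows (generic hash-chain code, not LHAP's packet format or TESLA scheduling): keys are disclosed in reverse order of generation, and each one is verified by hashing it back to the published anchor.
```python
import hashlib, os

def make_chain(length, seed=None):
    k = seed or os.urandom(32)
    chain = [k]
    for _ in range(length):
        k = hashlib.sha256(k).digest()
        chain.append(k)
    return chain                    # chain[-1] is the public anchor (commitment)

def verify(key, anchor, max_steps):
    for _ in range(max_steps):
        if key == anchor:
            return True
        key = hashlib.sha256(key).digest()
    return key == anchor

chain = make_chain(100)
anchor = chain[-1]
# keys are released from the end of the chain back toward the secret seed
print(verify(chain[97], anchor, max_steps=5))   # True: three hashes reach the anchor
```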
We use the move of Israeli stocks from call auction trading to continuous trading to show that investors have a preference for stocks that trade continuously.
When large stocks move from call auction to continuous trading, the small stocks that still trade by call auction experience a significant loss in volume relative to the overall market volume.
As small stocks move to continuous trading, they experience an increase in volume and positive abnormal returns because of the associated increase in liquidity.
Overall, though, a move to continuous trading increases the volume of large stocks relative to small stocks.
Choosing among alternative trading mechanisms is an issue of growing concern to financial economists.
A wealth of information is available on the Web.
But often, such data are hidden behind form interfaces which allow only a restrictive set of queries over the underlying databases, greatly hindering data exploration.
The ability to materialize these databases has endless applications, from allowing the data to be effectively mined to providing better response times in Web information integration systems.
However, reconstructing database images through restricted interfaces can be a daunting task, and sometimes infeasible due to network traffic and high latencies from Web servers.
In this paper we introduce the problem of generating efficient query covers, i.e., given a restricted query interface, how to efficiently reconstruct a complete image of the underlying database.
We propose a solution to the problem of finding covers for spatial queries over databases accessible through nearestneighbor interfaces.
Our algorithm guarantees complete coverage and leads to speedups of over 50 when compared against the naive solution.
We use our case-study to illustrate useful guidelines to attack the general coverage problem, and we also discuss practical issues related to materializing Web databases, such as automation of data retrieval and techniques which make it possible to circumvent unfriendly sites, while keeping the anonymity of the person performing the queries.
Recent action in California in the U.S.A. vividly illustrates that lack of appreciation by civil actors of the economics of energy companies.
This study seeks to act as future roadmap for legal actors to obtain clarity on economic issues affecting a potential future energy source, namely Space Solar Power (SSP).
Currently envisioned SSP systems would deliver gigawatts of power to terrestrial power grids from space, lasting over 20 years and having orbital masses on the order of 40 International Space Stations.
The interaction of legal challenges and economic justifications is examined for any group of public and private entities (fully domestic commercial ventures, international conglomerates, international civil organizations, etc) that seek to build and/or operate an SSP system.
In April of 2000, the Ministry of Economics and Industry (MITI) of Japan and the National Aeronautics and Space Administration (NASA) of the United States started a joint feasibility study on Space Solar Power.
Due to the current climate of limited public funding for such large-scale space projects, governments would prefer more industry involvement (technically and, more importantly, financially) in SSP.
Conceptual case studies are developed of innovative future government and private sector partnerships for SSP.
Sensitivities are performed on proposed legal and economic architectures.
Visualization systems that support multiple-display viewing can greatly enhance user perception.
In this paper we apply a deflection-optimal linear-quadratic detector to the detection of buried mines in images formed by a forward-looking, ground-penetrating, synthetic aperture radar.
The detector is a linear-quadratic form that maximizes the output signal to noise ratio (deflection), and its parameters are estimated from a set of training data.
We show that this detector is useful when the signal to be detected is expected to be stochastic, with an unknown distribution, and when only a small set of training data is available to estimate its statistics.
The detector structure can be understood in terms of the singular value decomposition; the statistical variations of the target signature are modelled using a compact set of orthogonal "eigenmodes" (or principal components) of the training data set.
Because only the largest eigenvalues...
The aim of this paper is to formulate a general geometric evolution problem based on the notion of action-measure, introduced here.
For particular choices of the action-measure we obtain formulations of the mean curvature flow or the brittle fracture propagation problems.
We investigate an aspect of the relationship between parsing and corpus-based methods in NLP that has received relatively little attention: coverage augmentation in rule-based parsers.
In the specific task of determining grammatical relations (such as subjects and objects) in transcribed spoken language, we show that a combination of rule-based and corpus-based approaches, where a rule-based system is used as the teacher (or an automatic data annotator) to a corpus-based system, outperforms either system in isolation.
Life in urban areas presents special challenges for maternal child care practices.
Data from a representative survey of households with children less than 3 years of age in Accra were used to test a number of hypothesized constraints to child care, including various maternal (education, employment, marital status, age, health, ethnic group, migration status) and household-level factors (income, calorie availability, quality of housing and asset ownership, availability of services, household size, and crowding).
An age-specific child care index was created using recall data on maternal child feeding practices and use of preventive health services.
A hygiene index was created from spot check observations of proxies of hygiene behaviors.
Multivariate analyses showed that maternal schooling was the most consistent constraint to both the care and the hygiene index.
None of the household-level characteristics were associated with the care index, but better housing quality and access to garbage collection services were associated with better hygiene.
Female head of household and larger family size were associated with poorer hygiene.
The programmatic implications of these findings for nutrition education and behavior change interventions in Accra are discussed.
The focus is on using the information to target the right practices to be modified as well as the main constraints to their adoption.
The small-world phenomenon has already been the subject of a huge variety of papers, showing its appearance in a variety of systems.
However, some big holes still remain to be filled, as the commonly adopted mathematical formulation is valid only for topological networks.
In this paper we propose a generalization of the theory of small worlds based on two leading concepts, efficiency and cost, and valid also for weighted networks.
Efficiency measures how well information propagates over the network, and cost measures how expensive it is to build a network.
The combination of these factors leads us to introduce the concept of economic small worlds, that formalizes the idea of networks that are "cheap" to build, and nevertheless efficient in propagating information, both at global and local scale.
In this way we provide an adequate tool to quantitatively analyze the behaviour of complex networks in the real world.
Various complex systems are studied, ranging from the realm of neural networks, to social sciences, to communication and transportation networks.
In each case, economic small worlds are found.
Moreover, using the economic small-world framework, the construction principles of these networks can be quantitatively analyzed and compared, giving good insights on how efficiency and economy principles combine to shape all these systems.
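The sketch below computes an unnormalized global efficiency and a simple cost measure for a small weighted network; the normalizations against the ideal fully connected network used in the efficiency/cost framework are omitted for brevity, and networkx is assumed to be available.
```python
import itertools
import networkx as nx

def global_efficiency(g, weight="weight"):
    n = g.number_of_nodes()
    dist = dict(nx.all_pairs_dijkstra_path_length(g, weight=weight))
    total = sum(1.0 / dist[u][v]
                for u, v in itertools.permutations(g, 2)
                if v in dist[u] and dist[u][v] > 0)    # unreachable pairs contribute 0
    return total / (n * (n - 1))

def cost(g, weight="weight"):
    built = sum(d[weight] for _, _, d in g.edges(data=True))
    possible = sum(1.0 for _ in itertools.combinations(g, 2))   # unit max cost per pair
    return built / possible

g = nx.Graph()
g.add_weighted_edges_from([("a", "b", 0.2), ("b", "c", 0.3), ("a", "c", 0.9)])
print(global_efficiency(g), cost(g))
```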
A calibrated classifier provides reliable estimates of the true probability that each test sample is a member of the class of interest.
This paper describes a proposed framework for the use of language technology to provide computer-based help for patients with limited or no English.
Aimed at users of the Health Services who are disadvantaged by their (lack of) linguistic skills, the system will assist the patient in different ways at different stages of their interactions with health-care providers.
In its full conception it will embrace a wide range of NLP technologies.
Although the research is based on the UK model of health-care provision, there are clear messages for anyone interested in language technology and under-resourced languages, whatever the application.
Focusing on
The rapid and unprecedented increase in the heterogeneity of multimedia networks and devices emphasizes the need for scalable and adaptive video solutions both for coding and transmission purposes.
However, in general, there is an inherent tradeoff between the level of scalability and the quality of scalable video streams.
In other words, the higher the bandwidth variation, the lower the overall video quality of the scalable stream that is needed to support the desired bandwidth range.
In this paper, we introduce the notion of wireless video TranScaling (TS), which is a generalization of (non-scalable) transcoding.
With transcaling, a scalable video stream, that covers a given bandwidth range, is mapped into one or more scalable video streams covering different bandwidth ranges.
The most common approach to checking correctness of a hardware or software design is to verify that a description of the design has the proper behavior as elicited by a series of input stimuli.
In the case of software, the program is simply run with the appropriate inputs, and in the case of hardware, its description written in a hardware description language (hdl) is simulated with the appropriate input vectors.
Complete software
In this paper we show how to extend point-based surface rendering to illustrate object motion.
We do this by first extruding the circular points into ellipsoids, which fill the space traced out by the points in motion.
Using ellipsoids instead of cylinders achieves a low-passing effect of the motion trail.
We then find the screen-space projection of each ellipsoid, which is an ellipse.
These can be rendered conveniently using hardware acceleration.
This paper presents the use of mutation analysis as the main qualification technique for: (i) estimating and automatically enhancing a test set (using genetic algorithms), (ii) qualifying and improving a component's contracts (that is, the specification facet), and (iii) measuring the impact of contractable robust components on global system robustness and reliability.
The
We present a new class of on-demand routing protocols called Split Label Routing (SLR).
The protocols guarantee loop-freedom at every instant by ensuring that node labels are always in topological order, and thus induce a directed acyclic graph (DAG).
The novel feature of SLR is that it uses a dense ordinal set with a strict partial order to label nodes.
For any two labels there is always some label in between them.
This allows SLR to "insert" a node into an existing DAG, without the need to relabel predecessors.
SLR inherently provides multiple paths to destinations.
We present a practical, finitely dense implementation that uses a destination-controlled sequence number.
The sequence number functions as a reset to node ordering when no more label splits are possible.
The sequence number is changed only by the destination.
Simulations show that our proposed protocol outperforms existing state-of-the-art on-demand routing protocols.
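A toy sketch of the dense-label idea follows (illustrative only; SLR's real labels also carry a destination-controlled sequence number): because the rationals are dense, a joining node can always receive a label strictly between its neighbors' labels without anyone else being relabeled.
```python
from fractions import Fraction

def label_between(lower, upper):
    """Return a label strictly between two existing labels."""
    return (lower + upper) / 2

dest, neighbor = Fraction(0), Fraction(1)        # destination holds the lowest label
new_node = label_between(dest, neighbor)         # joins the DAG between them
another = label_between(dest, new_node)          # and again, no relabeling needed
print(new_node, another, dest < another < new_node < neighbor)
```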
Hong and Ladner [6] used context-based group testing to implement bitplane coding for image compression.
We extend this technique to video coding, by replacing the quantization and entropy-coding stages, of an H.263 standard video coder, with bit-plane coding.
We experiment with ways to improve the baseline coder, including different classification schemes and cross-frame adaptive coding.
Our results indicate that our new coder, GTV (Group Testing for Video), significantly outperforms H.263 at medium to high bit-rates (300+ kbps) on most sequences, while allowing very precise rate scalability.
Kelb is a new real-time programming environment developed at Uppsala University for the Sony AIBO ERS-210.
It aims to provide efficiency by introducing a notion of lightweight tasks executing according to well-known real-time scheduling algorithms and resource protocols, while still allowing applications to be developed in a high-level abstract programming language.
In this paper we give an overview of the design of Kelb and describe the status of the environment, currently including: a real-time programming language and compiler extending gcc for MIPS with support for time- and event-triggered tasks, a runtime library with support for static and dynamic preemptive scheduling algorithms (e.g. fixed priority...
Motion estimation and compensation is the key to high quality video coding.
Block matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L.
Motion estimation is also a key component in the digital restoration of archived video and for post-production and special effects in the movie industry.
Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding.
However sub-pixel accuracy requires interpolation of the image data.
Image interpolation is a key requirement of many image processing algorithms.
Often interpolation can be a bottleneck in these applications, especially in motion estimation due to the large number of pixels involved.
In this paper we propose using commodity computer graphics hardware for fast image interpolation.
We use the full search block matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
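As a CPU reference for the computation being offloaded to graphics hardware, here is a plain NumPy full-search block matcher using the sum of absolute differences; sub-pixel interpolation is omitted and all names and parameters are illustrative.
```python
import numpy as np

def full_search(ref, cur, by, bx, block=16, radius=7):
    """Full-search block matching: motion vector for the current block at (by, bx)."""
    target = cur[by:by + block, bx:bx + block].astype(np.int32)
    best_sad, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                continue                              # candidate falls off the frame
            cand = ref[y:y + block, x:x + block].astype(np.int32)
            sad = np.abs(target - cand).sum()         # sum of absolute differences
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
cur = np.roll(ref, shift=(2, -3), axis=(0, 1))        # apply a known global shift
print(full_search(ref, cur, 24, 24))                  # expect mv (-2, 3) with SAD 0
```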
This paper addresses the issue of reliable transport of emerging data services in Ethernet over SONET (EoS) networks that require protection guarantees beyond standard best effort delivery.
We argue that the current consensus of using Ethernet spanning tree and SONET 1+1 protection, while providing reliability, is an inefficient use of resources.
Instead, we claim that EoS opens novel opportunities for protection heretofore unavailable in other environments.
In particular, the deployment of Virtual Concatenation and LCAS protocols enables "route splitting", creating a fundamentally new routing paradigm for circuit-switched environments.
We propose a scheme called PESO, appropriate for EoS, with innovative routing, failure notification and switching components.
More importantly, it is competitive with SONET protection without its 100% bandwidth overhead.
We also suggest an enhancement to LCAS that can further improve PESO's switching time.
PESO leverages the underlying protocols, making it extremely attractive to implement and use in practice.
The development of a Distributed Information System (DIS) can lead to critical bottlenecks because of the underlying architecture, which is becoming more and more complex.
Today's applications are both object-oriented and based on a new type of three-tiered client/server architecture.
In this context, the capabilities of a DIS can be drastically reduced if the performances of the system are not sufficient.
Recognizing these trends, industry and research are defining standards and technologies for communicating between components of a DIS and for database access mechanisms.
The emerging candidates for these middleware technologies include the OMG's CORBA specification and Microsoft's proprietary solution known as DCOM.
A key problem with such complex architectures is the performance issue.
This paper presents a simulation-based workbench for predicting the performance of applications relying on these architectures.
The proposed tool is based on providing end users with mechanisms to specify the essential characteristics of the application he/she is conceiving and the ability to match the software components with the operational environment (hardware and operating system).
This case study explores the development, dissemination, adoption, and impact of improved tree fallows in rural western Kenya.
The processes of technology development and dissemination throughout the region are described and analyzed.
To analyze adoption and impact, the paper applies a variety of different data collection methods as well as samples from both pilot areas where researchers maintained a significant presence and non-pilot areas where farmers learned of the technologies through other channels.
Sample sizes for the quantitative analysis ranged from almost 2,000 households for measuring the adoption process to just over 100 households for measuring impact indicators.
Qualitative methods included long-term case studies for 40 households and focus group discussions involving 16 different groups.
The paper describes the ways in which farmers used and modified improved fallow practices.
Discussion also examines the types of households using fallows and benefiting from their use.
Empirical results suggest that improved fallows almost always double on-farm maize yields.
In addition, the data indicates that poor households use improved fallows at a much greater rate (about 30 percent) than they do fertilizer (8 percent), though, on average, the size of fallow plots remains small, at 440 m².
As a result, despite these promising signs, the improved fallow systems were not found to be linked to improved household-level food security or poverty indicators, primarily because the size of the fields under the agroforestry systems was, on average, quite small.
To conclude, improved fallows represent a technically effective and financially profitable technology that is attractive to poor households with little cash available for ...
Neural Networks are widely used in pattern recognition, security applications and data manipulation.
We propose a novel hardware architecture for a generic neural network, using Network on Chip (NoC) interconnect.
The proposed architecture allows for expandability, mapping of more than one logical unit onto a single physical unit, and dynamic reconfiguration based on application-specific demands.
Simulation results show that this architecture has significant performance benefits over existing architectures.
xpressed, yet still be automatically verified.
Through careful, logically motivated design we hope to combine the best ideas from abstract interpretation, automated program analysis, type theory, and verification.
In the remainder of this section we explain and justify our approach in somewhat more detail, before giving a research plan in the next section.
Types and Complete Specifications.
Complete specifications of a program's behavior are generally not feasible for complex software systems.
For some smaller programs or components where specification may be possible, the effort required to formally prove adherence to the specification can be tremendous.
Finally, even if both specification and proof are undertaken for a given module, it is exceedingly burdensome to maintain such a proof as the program evolves in response to changing requirements.
A combination of these factors means that complete specification and verification are rarely undertaken in practice.
Type systems as the
This paper analyzes the performance of a large population of long lived TCP flows experiencing random packet losses due to both random transmission errors and congestion created by the sharing of a common tail drop bottleneck router.
We propose a natural and simple model for the joint throughput evolution of the set of TCP sessions under such a mix of losses.
For the case of Poisson transmission errors, we show that the asymptotic model where the population tends to infinity leads to a well defined and tractable dynamical system.
In particular, we get the mean value of the throughput of each session as a function of the transmission error rate and the synchronization rate in the bottleneck router.
The large population asymptotic model has two interesting and non-intuitive properties: 1) there exists a positive threshold (given in closed form) on the transmission error rate above which there are no congestion losses at all in steady state; 2) below this threshold, the mean throughput of each flow is an increasing function of the transmission error rate, so that the maximum mean value is in fact achieved when the transmission error rate is equal to this threshold.
We present a technique for measuring the security of a system which relies on a probabilistic process algebraic formalisation of noninterference.
We define a mathematical model for this technique which consists of a linear space of processes and linear transformations on them.
In this model the measured quantity corresponds to the norm of a suitably defined linear operator associated to the system.
The probabilistic model we adopt is reactive in the sense that processes can react to the environment with a probabilistic choice on a set of inputs; it is also generative in the sense that outputs autonomously chosen by the system are governed by a probability distribution.
In this setting, noninterference is formulated in terms of a probabilistic notion of weak bisimulation.
We show how the probabilistic information in this notion can be used to estimate the maximal information leakage, i.e.
the security degree of a system against a most powerful attacker.
This is an expository paper explaining how trees can be used to compute effectively the vector field expressions which arise in nonlinear control theory.
It also describes the mathematical structure that sets of trees carry.
Estimating end-to-end packet loss on Internet paths is important not only to monitor network performance, but also to assist adaptive applications make the best possible use of available network resources.
There has been significant prior work on measuring and modeling packet loss in the Internet, but most of those techniques do not focus on providing realtime information and on assessing path performance from an application standpoint.
In this paper, we present an on-line probing-based approach to estimate the loss performance of a network path, and extend this estimate to infer the performance that an application using the path would see.
The approach relies on a hidden Markov model constructed from performance estimates generated from probes, which is then used to predict path performance as an application would experience it.
The accuracy of the model is evaluated using a number of different metrics, including loss rate and loss burstiness.
The sensitivity of the results to measurement and computational overhead is also investigated, and an extension of the base approach using a layered model is explored as a possible solution to capturing time-varying channel behavior while keeping computational complexity reasonably low.
The results we present show that the approach is capable of generating accurate, real-time estimates of path performance, and of predicting the performance that applications would experience if routed on the path.
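A simplified sketch in this spirit fits a two-state Gilbert loss model, rather than the paper's full hidden Markov model, from binary probe outcomes, and derives the loss rate and mean burst length an application would see; the probe trace here is made up for illustration.
```python
def fit_gilbert(probes):
    """probes: list of 0 (received) / 1 (lost) per probe packet."""
    counts = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 0}
    for prev, cur in zip(probes, probes[1:]):
        counts[(prev, cur)] += 1
    p = counts[(0, 1)] / max(counts[(0, 0)] + counts[(0, 1)], 1)  # good -> bad
    q = counts[(1, 0)] / max(counts[(1, 0)] + counts[(1, 1)], 1)  # bad -> good
    loss_rate = p / (p + q) if p + q else 0.0      # stationary share of "bad" state
    mean_burst = 1.0 / q if q else float("inf")    # expected consecutive losses
    return p, q, loss_rate, mean_burst

probes = [0] * 40 + [1, 1, 1] + [0] * 30 + [1, 1] + [0] * 25
print(fit_gilbert(probes))
```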
Web search engines index text represented in symbolic form.
However, it is well known that a fraction of the text on the web is present in the form of images, and the textual content of these images is not indexed by the search engines.
This fact immediately raises a few questions: i) What fraction of the images on the web contain text?
ii) What fraction of the text content of these images does not appear in the web page in symbolic form?
Answers to these questions will give the web users an idea about the amount of information being missed by the search engines, and, justify whether or not Optical Character Recognition should be a standard part of search engine indexing.
To answer these questions we statistically sample the images referenced in the web pages retrieved by a search engine for specific queries and then find the fraction of sampled images that contain text.
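The sampling arithmetic behind such an estimate is straightforward; the sketch below computes the sample proportion with a normal-approximation confidence interval (the counts used are made up for illustration).
```python
import math

def proportion_ci(successes, n, z=1.96):
    """Estimate a fraction from a random sample with an approximate 95% interval."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, (max(0.0, p - half), min(1.0, p + half))

# e.g. suppose 130 of 500 sampled images were found to contain text
print(proportion_ci(130, 500))
```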
This paper concerns the design of temporal relational database schemas.
Normal forms
Using a simple model of saturated, synchronized and homogeneous sources of TCP Reno with drop-tail queue management and a discrete-time framework, we derive formulae for stationary as well as transient queueing behavior that shed light on the relationship between large buffers and work conservation (queue never empties).
Using simulations, the relevance of the results for the case of non-synchronized sources is demonstrated.
In particular, we demonstrate that a certain simple lower bound for the stationary queue length applies also to the case where the sources are non-stationary.
Petri net variants are widely used as a workflow modelling technique.
Recently, UML activity diagrams have been used for the same purpose, even though the syntax and semantics of activity diagrams have not yet been fully worked out.
Nevertheless, activity diagrams seem very similar to Petri nets and on the surface, one may think that they are variants of each other.
To substantiate or deny this claim, we need to formalise the intended semantics of activity diagrams and then compare this with various Petri net semantics.
In previous papers we have defined two formal semantics for UML activity diagrams that are intended for workflow modelling.
In this paper, we discuss the design choices that underlie these two semantics and investigate whether these design choices can be met in low-level and high-level Petri net semantics.
We argue that the main difference between the Petri net semantics and our semantics of UML activity diagrams is that the Petri net semantics models resource usage of closed, active systems that are non-reactive, whereas our semantics of UML activity diagrams models open, reactive systems.
This article presents a database of images of handwritten city names.
The aim is to provide a standard database for Sinhala handwriting recognition research.
This database contains about 15,000 images of about 500 city names of Sri Lanka.
These images are obtained from the addresses of live mail so that the writers had no idea that they would be used for this purpose.
Also, these are unconstrained handwriting images unlike the images collected using prescribed forms in laboratory environment.
The images are divided into two groups, training set and testing set.
This enables the comparison of results of different researches and serves the purpose of being a standard database.
This paper deals with the combination of classification models that have been derived from running different (heterogeneous) learning algorithms on the same data set.
We focus on the Classifier Evaluation and Selection (ES) method, that evaluates each of the models (typically using 10-fold cross-validation) and selects the best one.
We examine the performance of this method in comparison with the Oracle selecting the best classifier for the test set and show that 10-fold cross-validation has problems in detecting the best classifier.
We then extend ES by applying a statistical test to the 10-fold accuracies of the models and combining through voting the most significant ones.
Experimental results show that the proposed method, Effective Voting, performs comparably with the state-of-the-art method of Stacking with Multi-Response Model Trees without the additional computational cost of meta-training.
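A minimal sketch of the evaluate-and-select idea extended with a significance test over the 10-fold accuracies, assuming scikit-learn-style models; the paired t-test, the 0.05 threshold and the particular base learners are illustrative assumptions, not necessarily the exact Effective Voting procedure.

    # Sketch: keep every model whose 10-fold accuracies are not significantly
    # worse than the best model's, then combine the survivors by voting.
    # The paired t-test and the 0.05 threshold are illustrative assumptions.
    from scipy.stats import ttest_rel
    from sklearn.model_selection import cross_val_score
    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.ensemble import VotingClassifier

    X, y = load_iris(return_X_y=True)
    models = {"tree": DecisionTreeClassifier(random_state=0),
              "nb": GaussianNB(),
              "knn": KNeighborsClassifier()}
    scores = {name: cross_val_score(m, X, y, cv=10) for name, m in models.items()}
    best = max(scores, key=lambda n: scores[n].mean())

    selected = [best]
    for name, s in scores.items():
        if name != best and ttest_rel(scores[best], s).pvalue > 0.05:
            selected.append(name)          # not significantly worse: let it vote

    ensemble = VotingClassifier([(n, models[n]) for n in selected], voting="hard")
    print("voting over:", selected,
          "cv accuracy:", cross_val_score(ensemble, X, y, cv=10).mean())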
It has been noted (Lanzi, 1997, Butz et al, 2000) that XCS (Wilson, 1998) is unable to identify an adequate solution to the Maze14 problem (Cliff and Ross, 1994) without the introduction of alternative exploration strategies.
The simple expedient of allowing exploration to start at any position in the Maze will allow XCS to learn in such `difficult' environments (Barry, 2000b), and Lanzi (1997) has demonstrated that his `teletransportation' mechanism achieves similar results.
However, these approaches are in truth a re-formulation of the problem.
In many `real' robotic learning tasks there are no opportunities available to `leapfrog' to a new state.
This paper describes an initial investigation of the use of a pre-specified hierarchical XCS architecture.
It is shown that the use of internal rewards allows XCS to learn optimal local routes to each internal reward, and that a higher-level XCS can select over internal sub-goal states to find the optimum route across sub-goals to a global reward.
It is hypothesised that the method can be expanded to operate within larger environments, and that an emergent approach using similar techniques is also possible.
An alternative structure for adaptive linear prediction is proposed in which the adaptive filter is replaced by a cascade of independently adapting, low-order stages, and the prediction is generated by means of successive refinements.
When the adaptation algorithm for the stages is LMS, the associated short filters are less affected by eigenvalue spread and mode coupling problems and display a faster convergence to their steady-state value.
Experimental results show that a cascade of second-order LMS filters is capable of successfully modeling most input signals, with a much smaller MSE than LMS or lattice LMS predictors in the early phase of the adaptation.
Other adaptation algorithms can be used for the single stages, whereas the overall computational cost remains linear in the number of stages, and very fast tracking is achieved.
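A minimal NumPy sketch of the cascade idea, assuming second-order stages that each run LMS on the residual left by the previous stage; for simplicity the stages here are adapted one after another over the whole record rather than sample by sample, and the step size and number of stages are illustrative choices.

    # Sketch: prediction by successive refinement with a cascade of
    # independently adapting second-order LMS stages.  Each stage predicts
    # whatever the previous stages left unexplained; the final prediction is
    # the sum of the stage outputs.
    import numpy as np

    def cascade_lms(x, n_stages=4, order=2, mu=0.01):
        x = np.asarray(x, dtype=float)
        n = len(x)
        prediction = np.zeros(n)
        residual = x.copy()                      # stage 0 sees the raw signal
        for _ in range(n_stages):
            w = np.zeros(order)
            stage_out = np.zeros(n)
            for t in range(order, n):
                u = residual[t - order:t][::-1]  # most recent samples first
                stage_out[t] = w @ u
                e = residual[t] - stage_out[t]
                w += 2 * mu * e * u              # LMS update of this short stage
            prediction += stage_out
            residual = residual - stage_out      # next stage refines the rest
        return prediction

    t = np.arange(4000)
    x = np.sin(0.05 * t) + 0.05 * np.random.default_rng(0).standard_normal(len(t))
    p = cascade_lms(x)
    print("MSE over the second half:", np.mean((x[2000:] - p[2000:]) ** 2))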
The k-nearest neighbour (k-nn) model is a simple, popular classifier.
Probabilistic k-nn is a more powerful variant in which the model is cast in a Bayesian framework using (reversible jump) Markov chain Monte Carlo methods to average out the uncertainty over the model parameters.
The lack of focus that is a characteristic of unsupervised pattern mining in sequential data represents one of the major limitations of this approach.
This lack of focus is due to the inherently large number of rules that is likely to be discovered in any but the more trivial sets of sequences.
Several authors have promoted the use of constraints to reduce that number, but those constraints approximate the mining task to a hypothesis test task.
In this paper
earlier versions.
But don't blame them for my indiscretions.
It can be purchased in Peru at Javier Prado 200; Magdalena, Lima.
Sales to outside of Peru are handled by Seta Soledad Esteban of the "Liberia Interregna" bookstore, Fax 426-2742.
I use "lexicon" (and "lexicons", not "lexica") to refer to speakers' knowledge of the lexical resources of their language, "lexical database" to refer to an information structure that represents lexical information (reflecting characteristics of a lexicon), and "dictionary" to refer to a rendering of a lexical database, whether printed on pages or displayed in some electronic form.
Tools like Shoebox [1] and the Making Dictionaries package [4] were developed to serve this approach.
This is not true for database programs that print information according to user-defined templates.
However, such programs present their own challenges, most notably their proprietary data formats.
The Huallaga Quechua lexical database was stored as such a re
We are developing interactive simulations of the National Institute of Standards and Technology (NIST) Reference Test Facility for Autonomous Mobile Robots (Urban Search and Rescue).
The NIST USAR Test Facility is a standardized disaster environment consisting of three scenarios of progressive difficulty: Yellow, Orange, and Red arenas.
The USAR task focuses on robot behaviors, and physical interaction with standardized but disorderly rubble filled environments.
The simulation will be used to test and evaluate designs for teleoperation interfaces and robot sensing and cooperation that will subsequently be incorporated into experimental robots.
This paper describes our novel simulation approach using an inexpensive game engine to rapidly construct a visually and dynamically accurate simulation for both individual robots and robot teams.
In the theory of relativity (TR) an interval (pseudo-distance) takes the place of the previous "prerelativistic" invariant distance (length).
Therefore, for example, one should more correctly speak of the (space-like) interval of a rod; in essence, in the non-relativistic limit, their values coincide, which ensures the succession of the corresponding theories and the necessary uniqueness of the interval.
Taking into account interval Lorentz invariance, in a moving reference frame this leads only to the "radar definition" of the moving rod length (see, e.g., [2]).
It should be emphasized that we deal with one of the fundamental problems of physics here.
Space dimensions or, in general, space correlations, parallel with time ones, serve as the basis for the description of all natural phenomena (by means of physical theories, in particular).
At the same time, we come across a highly strange phenomenon just in TR.
The thing is that two statements, namely the demand of interval i
This paper was written around the same time as Bell's landmark paper which addressed the problem of nonlocality [4].
This question of nonlocality had first been raised by Einstein-Podolsky-Rosen (EPR), who claimed that if quantum mechanics were a complete model of reality, then nonlocal interactions between particles had to exist [5].
Since they felt that nonlocality was impossible, quantum mechanics either had to be wrong or at least incomplete.
An experiment was later performed which showed that nonlocal influences do exist once these particles interact, and that one can test the explicit quantum nature of systems.
Multi-homed, mobile wireless computing and communication devices can spontaneously form communities to logically combine and share the bandwidth of each other's wide-area communication links using inverse multiplexing.
But membership in such a community can be highly dynamic, as devices and their associated WAN links randomly join and leave the community.
We identify the issues and tradeoffs faced in designing a decentralized inverse multiplexing system in this challenging setting, and determine precisely how heterogeneous WAN links should be characterized, and when they should be added to, or deleted from, the shared pool.
We then propose methods of choosing the appropriate channels on which to assign newly-arriving application flows.
Using video traffic as a motivating example, we demonstrate how significant performance gains can be realized by adapting allocation of the shared WAN channels to specific application requirements.
Our simulation and experimentation results show that collaborative bandwidth aggregation systems are, indeed, a practical and compelling means of achieving high-speed Internet access for groups of wireless computing devices beyond the reach of public or private access points.
In this paper it has been shown that the potential energy of a particle of mass m in an infinite, homogeneous Euclidean universe is not only finite but is exactly equal to -mc^2.
This implies that the total energy is zero, a fact which may have many interesting implications.
In this paper, we discuss how the focus in document analysis, generally speaking, and in graphics recognition more specifically, has moved from re-engineering problems to indexing and information retrieval.
After a review of ongoing work on these topics, we propose some challenges for the years to come.
We are designing and implementing a multi-modal interface to an autonomous robot.
For this interface, we have elected to use natural language and gesture.
Gestures can be either natural gestures perceived by a vision system installed on the robot, or they can be made by using a stylus on a Personal Digital Assistant.
In this paper we describe how we are attempting to provide a seamless integration of the various modes of input to provide a multi-modal interface that humans can manipulate as they desire.
The interface will allow the user to choose whatever mode or combination of modes seems appropriate for interactions with the robot.
The human user, therefore, does not have to be limited to any one mode of interaction, but can freely choose whatever mode is most comfortable or natural.
Motivation: Many experimental and algorithmic approaches in biology generate groups of genes that need to be examined for related functional properties.
For example, gene expression profiles are frequently organized into clusters of genes that may share functional properties.
We evaluate a method, neighbor divergence per gene (NDPG), that uses scientific literature to assess whether a group of genes are functionally related.
The method requires only a corpus of documents and an index connecting the documents to genes.
This paper presents a novel scheduling strategy, Anchored Opportunity Queueing (AOQ), which preserves the throughput and fairness characteristics of FBRR while significantly reducing the average delay experienced by packets.
The AOQ scheduler achieves lower average latencies by trying, as far as possible, to complete the transmission of a complete packet before beginning the transmission of flits from another packet.
The AOQ scheduler achieves provable fairness in the number of opportunities it offers to each of the virtual channels for transmissions of flits over the physical channel.
We prove this by showing that the relative fairness bound, a popular measure of fairness, is a small finite constant in the case of the AOQ scheduler.
Finally, we present simulation results comparing the delay characteristics of AOQ with other schedulers for virtual channels.
The AOQ scheduler is simple to implement in hardware, and also offers a practical solution in other contexts such as in scheduling ATM cells in Internet backbone switches
This paper develops a model of differentiated consumers to examine the consumption effects of genetic modification (GM) under alternative labeling regimes and segregation enforcement scenarios.
Analytical results show that if consumers perceive GM products as being different than their traditional counterparts, genetic modification affects consumer welfare and, thus, consumption decisions.
When the existence of market imperfections in one or more stages of the supply chain prevents the transmission of cost savings associated with the new technology to consumers, genetic modification results in welfare losses for consumers.
The analysis shows that the relative welfare ranking of the "no labeling" and "mandatory labeling" regimes depends on: (i) the level of consumer aversion to genetic modification, (ii) the size of marketing and segregation costs under mandatory labeling; (iii) the share of the GM product to total production; and (iv) the extent to which GM products are incorrectly labeled as non-GM products.
In this paper we present how the throughput in an ad hoc network is affected by using variable data rate.
The study is based on four different systems with different routing and MAC protocols.
We also study the impact of different numbers of available data rates.
The data rates range from 100 kbit/s to 20 Mbit/s.
Query processing in sensor networks is critical for several sensor based monitoring applications and poses several challenging research problems.
The in-network aggregation paradigm in sensor networks provides a versatile approach for evaluating simple aggregate queries, in which an aggregation tree is imposed on the sensor network that is rooted at the base station and the data gets aggregated as it gets forwarded up the tree.
In this paper we consider two kinds of aggregate queries: value range queries that compute the number of sensors that report values in the given range, and location range queries that compute the sum of values reported by sensors in a given location range.
Such queries can be answered by using the in--network aggregation approach where only sensors that fall within the range contribute to the aggregate being maintained.
However, it requires a separate aggregate to be computed and communicated for each query and hence does not scale well with the number of queries.
Many
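A minimal sketch of in-network aggregation for a value-range COUNT query, assuming a hypothetical aggregation tree rooted at the base station; the tree layout, readings and query range are made up for illustration.

    # Sketch: a value-range COUNT aggregated up an aggregation tree; every node
    # forwards a single partial count combining its own reading with its
    # children's partials, so communication stays one value per node per query
    # (the scaling problem above appears when many queries each need their own
    # aggregate).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class SensorNode:
        reading: float
        children: List["SensorNode"] = field(default_factory=list)

        def range_count(self, lo: float, hi: float) -> int:
            own = 1 if lo <= self.reading <= hi else 0        # local predicate
            return own + sum(c.range_count(lo, hi) for c in self.children)

    # a tiny illustrative tree, with the base station reading the root's result
    leaf1, leaf2, leaf3 = SensorNode(18.0), SensorNode(23.5), SensorNode(30.1)
    relay = SensorNode(21.0, [leaf2, leaf3])
    root = SensorNode(25.0, [leaf1, relay])
    print(root.range_count(20.0, 26.0))    # -> 3 sensors report values in range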
This paper describes a new methodology to enable large scale high resolution environmental simulation.
Unlike the vast majority of environmental modeling techniques that split the space into cells, the use of a vector space is proposed here.
A phenomenon is then described by its shape, decomposed into several points that can move along displacement vectors.
The shape also has a dynamic structure, as each point can instantiate new points in response to a change in the space properties or to obtain a better-resolution model.
Such vector models generate less overhead because the phenomenon is recomputed only when part of it enters a space entity with different attributes, whereas with a cellular space the model would be recomputed for each of the neighboring identical cells.
This technique uses the DSDEVS formalism to describe discrete event models with dynamic structure, and will be implemented in the JDEVS toolkit also presented.
XML is an emerging standard for data representation and exchange on the World-Wide Web.
Due to the nature of information on the Web and the inherent flexibility of XML, we expect that much of the data encoded in XML will be semistructured: the data may be irregular or incomplete, and its structure may change rapidly or unpredictably.
This paper describes the query processor of Lore, a DBMS for XML-based data supporting an expressive query language.
We focus primarily on Lore's cost-based query optimizer.
While all of the usual problems associated with cost-based query optimization apply to XML-based query languages, a number of additional problems arise, such as new kinds of indexing, more complicated notions of database statistics, and vastly different query execution strategies for different databases.
We define appropriate logical and physical query plans, database statistics, and a cost model, and we describe plan enumeration including heuristics for reducing the large search space.
Our optimizer is fully implemented in Lore and preliminary performance results are reported.
As policy research on natural resource management (NRM) evolves, new priorities are emerging related to the strategy, design and implementation of policies to support local organizations (LOs) as managers of natural resources.
However, research on policies affecting LOs is at a very early stage, with no accepted body of indicators, methodologies and conceptual approaches, and little documentation or critique of the research methods that have been used.
To address this gap, and to lay the basis for a future program of comparative research, IFPRI, CIFOR and ODI co-sponsored an international workshop in October 1994, with experts from different disciplines and different resource domains.
The recognition of script in historical documents requires suitable techniques in order to identify single words.
Segmentation of lines and words is a challenging task because lines are not straight and words may intersect within and between lines.
For correct word segmentation, the conventional analysis of distances between text objects needs to be supplemented by a second component predicting possible word boundaries based on semantical information.
For date entries, hypotheses about potential boundaries are generated based on knowledge about the different variations as to how dates are written in the documents.
It is modeled by distribution curves for potential boundary locations.
Word boundaries are detected by classification of local features, such as distances between adjacent text objects, together with location-based boundary distribution curves as a-priori knowledge.
We applied the technique to date entries in historical church registers.
Documents from the 18th and 19th century were used for training and testing.
The data set consisted of 674 word boundaries in 298 date entries.
Our algorithm found the correct separation under the best four hypotheses for a word sequence in 97% of all cases in the test data set.
The XWand is a wireless UI device that enables styles of natural interaction with intelligent environments.
The XWand system exploits human intuition, allowing control of everyday objects through pointing and gesturing.
We describe the hardware device and then examine several approaches to gesture recognition.
We discuss results from experiments using a linear time warping method, a dynamic time warping (DTW) method, and a hidden Markov model-based method (HMM).
Dynamic web sites commonly return information in the form of lists and tables.
Hand crafting an extraction program for a specific template is straightforward but time-consuming, so it is desirable to automatically generate template extraction programs from examples of lists and tables in HTML documents.
Supervised approaches have been shown to achieve high accuracy, but they require manual labeling of training examples, which is also time consuming.
Fully unsupervised approaches, which extract rows and columns by detecting regularities in the data, cannot provide sufficient accuracy for practical domains.
We describe a novel technique, Post-supervised Learning, which exploits unsupervised learning to avoid the need for training examples, while minimally involving the user to achieve high accuracy.
We have developed unsupervised algorithms to extract the number of rows and adopted a dynamic programming algorithm for extracting columns.
Our method achieves high performance with minimal user input compared to fully supervised techniques.
In a variety of PAC learning models, a tradeoff between time and information seems to exist: with unlimited time, a small amount of information suffices, but with time restrictions, more information sometimes seems to be required.
In addition, it has long been known that there are concept classes that can be learned in the absence of computational restrictions, but (under standard cryptographic assumptions) cannot be learned in polynomial time regardless of sample size.
Yet, these results do not answer the question of whether there are classes for which learning from a small set of examples is infeasible, but becomes feasible when the learner has access to (polynomially) more examples.
To address this question, we introduce a new measure of learning complexity called computational sample complexity which represents the number of examples sufficient for polynomial time learning with respect to a fixed distribution.
We then show concept classes that (under similar cryptographic assumpti...
Use of multiple choice question based computer aided assessment to assess level-one (first year) mineralogy produced a reliable assessment, though with rather poor scores.
The use of negative marking contributed to this, and also drew negative comment from the student cohort.
Reflection on these outcomes led to the use of multiple response questions, which performed better and did not encourage negative student feedback.
CAA performance does not equate very well with practical coursework assessment.
However, these two assessments are addressing different learning outcomes and so this disparity is not surprising.
Statistical analysis suggests that these two forms of assessment give a truer indication of a student's ability when they are combined.
It reinforces the conclusion that appropriate assessment tools should be used for stated learning outcomes and that multimodal assessment is best.
We argue that data webs employing specialized path services, network protocols, and data protocols can be an effective platform to analyze and access millions of distributed Gigabyte (and larger) size data sets.
We have built a prototype of such a data web today and demonstrated that it can effectively access, analyze and mine distributed Gigabyte size data sets even over thousands of miles by using specialized network and data protocols.
The prototype uses a server which employs the DataSpace Transfer Protocol or DSTP.
Our assumption is that WSDL/SOAP/UDDI-based discovery and description services will enable this same infrastructure to scale to millions of such DSTPServers.
The decreasing cost of computing technology is speeding the deployment of abundant ubiquitous computation and communication.
This paper translates these observations into constraints which are enforced to hold in a solution, and guide the recognition strategy.
A limitation of the system is that it makes no attempt to recognize arguments which are split in many phrases
In recent years, some cryptographic algorithms have gained popularity due to properties that make them suitable for use in constrained environments like mobile information appliances, where computing resources and power availability are limited.
In this paper, we select a set of public-key, symmetric-key and hash algorithms suitable for such environments and study their workload characteristics.
In particular, we study elliptic-curve versions of public-key cryptography algorithms, which allow fast software implementations while reducing the key size needed for a desired level of security compared to previous integer-based public-key algorithms.
We characterize the operations needed by elliptic-curve analogs of Diffie-Hellman key exchange, ElGamal and the Digital Signature Algorithm for public-key cryptography, for different key sizes and different levels of software optimization.
We also include characterizations for the Advanced Encryption Standard (AES) for symmetric-key cryptography, and SHA as a hash algorithm.
We show that all these algorithms can be implemented efficiently with a very simple processor.
Focus games have been shown to yield game-theoretical characterisations for the satisfiability and the model checking problem for various temporal logics.
One of the players is given a tool -- the focus -- that enables him to show the regeneration of temporal operators characterised as least or greatest fixpoints.
His strategy usually is built upon a priority list of formulas and, thus, is not positional.
This paper defines foci games for satisfiability of LTL formulas.
Strategies in these games are trivially positional since they parallelise all of the focus player's choices, thus resulting in a 1-player game in effect.
The games are shown to be correct and to yield smaller (counter-)models than the focus games.
Finally, foci games for model checking LTL are defined as well.
Motivation: Supertree methods have been often identified as a possible approach to the reconstruction of the `Tree of Life'.
However, a limitation of such methods is that, typically, they use just leaf-labelled phylogenetic trees to infer the resulting supertree.
...
In this paper, a few of the central concepts of InfoVis are introduced: (1) visualization with multiple views, which often are (but not necessarily need to be) of different visualization types and which are visually linked to each other, especially when used in conjunction with interactive brushing (linking and brushing, L&B); (2) focus-plus-context visualization (F+C visualization) as a means to jointly support zooming into the visual depiction of the data while at the same time maintaining the visual orientation of the visualization user to support navigation in the visualization; and (3) the potential combination of visualization methods and such from statistics as an interesting perspective for future work.
...
In this paper we propose a novel method to approximate a given mesh with a normal mesh.
Instead of building an associated parameterization on the fly we assume a globally smooth parameterization at the beginning and cast the problem as one of perturbing this parameterization.
Controlling the magnitude of this perturbation gives us explicit control over the range between fully constrained (only scalar coefficients) and unconstrained (3-vector coefficients) approximations.
With the unconstrained problem giving the lowest approximation error we can thus characterize the error cost of normal meshes as a function of the number of non-normal offsets---we find a significant gain for little (error) cost.
Because the normal mesh construction creates a geometry driven approximation we can replace the difficult geometric distance minimization problem with a much simpler least squares problem.
This variational approach reduces magnitude and structure (aliasing) of the error further.
Our method separates the parameterization construction into an initial setup followed only by subsequent perturbations, giving us an algorithm which is far simpler to implement, more robust, and significantly faster
Since the state space of most games is a directed graph, many game-playing systems detect repeated positions with a transposition table.
This approach can reduce search effort by a large margin.
However, it suffers from the so-called Graph History Interaction (GHI) problem, which causes errors in games containing repeated positions.
This paper presents a practical solution to the GHI problem that combines and extends previous techniques.
Because our scheme is general, it is applicable to different game tree search algorithms and to different domains.
As demonstrated with the two algorithms αβ and df-pn in the two games checkers and Go, our scheme incurs only a very small overhead, while guaranteeing the correctness of solutions.
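A minimal sketch of the transposition-table idea that the GHI problem complicates, using a toy subtraction game and plain negamax; this is not the paper's GHI-safe scheme (which additionally accounts for the path by which a repeated position is reached), only the baseline cache it builds on.

    # Sketch: a transposition table in plain negamax for a toy subtraction game
    # (take 1, 2 or 3 from a pile; taking the last object wins).  It shows the
    # position cache whose naive reuse causes GHI errors in games with
    # repetition rules.
    def negamax(n, table):
        if n == 0:
            return -1                       # the side to move has already lost
        if n in table:                      # transposition hit: reuse the value
            return table[n]
        value = max(-negamax(n - take, table) for take in (1, 2, 3) if take <= n)
        table[n] = value
        return value

    table = {}
    print(negamax(32, table))               # -> -1: multiples of 4 are losses
    print(len(table), "positions cached instead of re-searched across branches")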
We present a new logic, Linc, which is designed to be used as a framework for specifying and reasoning about operational semantics.
Linc is an extension of first-order intuitionistic logic with a proof theoretic notion of definitions, induction and coinduction, and a new quantifier ∇ (nabla).
In this paper, control Lyapunov functions are used to define static and dynamic safe regions for a system.
Based on a control Lyapunov function, a "feasible input set" is defined as all feasible controls for this CLF according to attractive behaviors and repulsive behaviors.
If the system is within a specified safe region, the human input will be used as the control input to the system.
If the system is outside the specified safe region, the human input will be snapped to the closest control element in the feasible input set.
Behavior based strategies are applied to achieve smooth transition from human input to snapped control input so as to guarantee maximum flexibility for humans as well as system stability and minimum base-line performance.
An illustrative example for a mobile robot shows the effectiveness of the approach.
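A minimal sketch of snapping a human command onto a CLF-induced feasible input set, for a planar single integrator with V(x) = 0.5*||x||^2; the decrease rate alpha, the input bound, the safe-region radius and the example system are illustrative assumptions rather than the paper's setup.

    # Sketch: inside the safe region the human input passes through; outside it,
    # the command is snapped to the closest input in the CLF's feasible set
    # {u : x.u <= -alpha*V(x), |u|_inf <= u_max}, here for x' = u.
    import numpy as np

    def snap_to_feasible(x, u_human, alpha=1.0, u_max=1.0, safe_radius=0.5):
        V = 0.5 * float(np.dot(x, x))
        if np.linalg.norm(x) <= safe_radius:            # inside the safe region
            return np.clip(u_human, -u_max, u_max)      # keep the human command
        if np.dot(x, u_human) <= -alpha * V:            # already CLF-decreasing
            return np.clip(u_human, -u_max, u_max)
        # project onto the half-space x.u = -alpha*V (closest feasible input);
        # the final clip is a simplification and may slightly relax the bound
        lam = (np.dot(x, u_human) + alpha * V) / np.dot(x, x)
        return np.clip(u_human - lam * x, -u_max, u_max)

    x = np.array([1.0, 0.5])                # state outside the safe region
    u_h = np.array([0.4, 0.2])              # human pushes away from the goal
    print(snap_to_feasible(x, u_h))         # -> [-0.5 -0.25], a decreasing input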
This paper describes the collection and analysis of supercomputer I/O traces and their use in a collection of buffering and caching simulations.
This serves two purposes.
First, it gives a model of how individual applications running on supercomputers request file system I/O, allowing system designers to optimize I/O hardware and file system algorithms to that model.
Second, the buffering simulations show what resources are needed to maximize the CPU utilization of a supercomputer given a very bursty I/O request rate.
By using read-ahead and write-behind in a large solid-state disk, one or two applications were sufficient to fully utilize a Cray Y-MP CPU.
This expository article considers non-circular ellipses in the Riemann
Today on the Internet there is a wide variety of text based search engines, however the same is not true for searching visual information placed in Internet Web pages.
There is increased activity and research for querying such databases especially for content based visual querying.
The heterogeneous, distributed and transient nature of visual information, lack of interoperable retrieval systems and the limited bandwidth of Web environment presents bottlenecks for such efforts.
In this study the difficulties of visual information retrieval on the Web are highlighted and a visual information retrieval system in such an environment is presented.
Healthcare management operates in an environment of aggressive pricing, tough competition, and rapidly changing guidelines.
Computer simulation models are increasingly used by large healthcare institutions to meet these challenges.
However, small healthcare facilities serving the poor are equally in need of meeting these challenges but lack the finances and personnel required to develop and implement their own simulation solutions.
An academic medical center, healthcare facilities that serve the poor, and the local public health department formed a unique partnership to create low-cost tools to meet these challenges.
This article describes the creation of a low-cost, generic, discrete-event simulation model populated by a workflow observation Excel spreadsheet that can be completed by clinic staff themselves, thus "customizing" the simulation model for their own purposes.
This initial model focuses on childhood immunization delivery services; the intent is to develop a tool flexible enough to serve other health services delivery needs as well.
In this paper, we presented a systematic analysis of the conservation of known regulatory elements in S. cerevisiae by comparison with other species.
It would be desirable to repeat these comparisons for worms (C. elegans) and diptera (Drosophila melanogaster and Anopheles gambiae, the malaria mosquito), but information on their regulatory elements was too limited to allow for any systematic analysis. Human-mouse comparisons were helpful in differentiating regulatory elements from background sequences, but S. cerevisiae-S. pombe pairwise comparisons were not.
Comparison of sequences from multiple species showed considerable promise in differentiating S. cerevisiae known regulatory elements from background sequences.
However, the difference between the two distributions is not as significant as the one seen in human-mouse comparisons.
For future work, two strategies can be used to improve the separation.
One is to use more sophisticated statistics, such as those used by Elnitski et al.
(2003), to maximize the separation between known regulatory elements and background sequences.
The other strategy is to include more species (Cliften et al.
2003; Kellis et al.
2003).
Once known regulatory elements can be sufficiently separated from background, we can extend CompareProspector.
[Table 2: Motifs discovered by CompareProspector on the C. elegans PHA-4 data set, listing the number of sites reported and the number of sites in genes with C. briggsae orthologs.]
From the upstream sequences of the 211 pharyngeally expressed genes, CompareProspector correctly identified the PHA-4 motif with the consensus TGTTTGC.
It also identified another motif with the consensus AGA...
Extraction of relevant data from the raw source of HTML pages poses specific requirements on their subsequent RDF storage and retrieval.
We describe an application of statistical information extraction technique (Hidden Markov Models) on product catalogues, followed with conversion of extracted data to RDF format and their structured retrieval.
The domain-specific query interface, built on top of the Sesame repository, offers a simple form of navigational retrieval.
Integration of further web-analysis methods, within the Rainbow architecture, is forthcoming.
Effective information disclosure in the context of databases with a large conceptual schema is known to be a non-trivial problem.
In particular the formulation of ad-hoc queries is a major problem in such contexts.
Existing approaches for tackling this problem include graphical query interfaces, query by navigation, query by construction, and point to point queries.
In this article we propose the spider query mechanism as a final cornerstone for an easy-to-use, computer-supported query formulation mechanism for InfoAssistant.
Photon mapping is one of the most important algorithms for computing global illumination.
Especially for efficiently producing convincing caustics, there are no real alternatives to photon mapping.
On the other hand, photon mapping is also quite costly: Each radiance lookup requires to find the k nearest neighbors in a kd-tree, which can be more costly than shooting several rays.
Therefore, the nearest-neighbor queries often dominate the rendering time of a photon map based renderer.
In this paper, we present a method that reorganizes - i.e.
unbalances - the kd-tree for storing the photons in a way that allows for finding the k-nearest neighbors much more efficiently, thereby accelerating the radiance estimates by a factor of 1.2-3.4.
Most importantly, our method still finds exactly the same k-nearest-neighbors as the original method, without introducing any approximations or loss of accuracy.
The impact of our method is demonstrated with several practical examples.
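A minimal sketch of the k-nearest-neighbour lookup that dominates the radiance estimate, over a conventional median-split kd-tree; this is the baseline query, not the unbalanced tree layout the paper proposes, and the random photon positions are placeholders.

    # Sketch: k-nearest-neighbour search in a balanced, median-split kd-tree,
    # keeping the current k best candidates in a max-heap keyed on distance.
    import heapq, random

    def build(points, depth=0):
        if not points:
            return None
        axis = depth % 3
        points = sorted(points, key=lambda p: p[axis])
        mid = len(points) // 2
        return (points[mid], axis,
                build(points[:mid], depth + 1), build(points[mid + 1:], depth + 1))

    def knn(node, q, k, heap=None):
        # heap holds (-squared_distance, point) so the current worst is on top
        if heap is None:
            heap = []
        if node is None:
            return heap
        p, axis, left, right = node
        d2 = sum((a - b) ** 2 for a, b in zip(p, q))
        if len(heap) < k:
            heapq.heappush(heap, (-d2, p))
        elif d2 < -heap[0][0]:
            heapq.heapreplace(heap, (-d2, p))
        near, far = (left, right) if q[axis] < p[axis] else (right, left)
        knn(near, q, k, heap)
        # descend into the far side only if it can still hold a closer photon
        if len(heap) < k or (q[axis] - p[axis]) ** 2 < -heap[0][0]:
            knn(far, q, k, heap)
        return heap

    photons = [tuple(random.uniform(0, 1) for _ in range(3)) for _ in range(5000)]
    tree = build(photons)
    dists = sorted(-negd2 for negd2, _ in knn(tree, (0.5, 0.5, 0.5), 20))
    print(len(dists), "nearest photons; largest squared distance:", dists[-1])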
This paper addresses the use of different coding methods for the arithmetic operators.
Signal encoding is widely used to reduce the switching activity in buses.
However, the signals need to be encoded and decoded since signal processing is executed in binary.
To avoid this step, we investigate the viability of processing operators that use the same signal encoding as that used in the bus.
Gray and a
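A minimal sketch of the Gray encoding mentioned above: converting between binary and Gray code, where consecutive values differ in a single coded bit, which is what reduces switching activity on a bus. This illustrates the encoding itself, not the encoded arithmetic operators the paper investigates.

    # Sketch: binary <-> Gray conversion; adjacent integers differ in exactly
    # one Gray bit, which cuts the switching activity seen on a bus.
    def bin_to_gray(n: int) -> int:
        return n ^ (n >> 1)

    def gray_to_bin(g: int) -> int:
        n = 0
        while g:
            n ^= g
            g >>= 1
        return n

    for i in range(8):
        print(f"{i:3d}  binary {i:03b}  gray {bin_to_gray(i):03b}")
    assert all(gray_to_bin(bin_to_gray(i)) == i for i in range(1 << 12))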
In this paper we describe the functional requirements for research information systems and problems which arise in the development of such a system.
We show which problems can be solved by using knowledge markup technologies.
In this article, a DAML+OIL ontology for a Research Information System is offered.
Already developed research ontologies are analyzed and compared.
The architecture based on knowledge markup for collecting research data and providing access to it is described.
It is shown how RDF Query Facilities can be used for information retrieval about research data.
Simulation-based wafer fabrication optimization models require extensive computational time to obtain accurate estimates of output parameters.
This research seeks to develop goal-driven optimization methodologies for a variety of semiconductor manufacturing problems using appropriate combinations of "resource-driven" (R-D), "job-driven" (J-D), and Mixed (combination of R-D and J-D) models to reduce simulation run times.
The initial phase of this research investigates two issues: a) the use of the R-D simulation control variates for the J-D simulation and b) development of metrics that calibrate the output from the R-D and J-D modeling paradigms.
The use of the R-D model as a control variate is proposed to reduce the variance of J-D model output.
Second, in order to use the R-D model output to predict the J-D model output, calibration metrics for the R-D and J-D modeling approaches were developed.
Initial developments were tested using an M/M/1 queuing system and an M/D/1 queuing system.
We consider importance sampling (IS) to increase the efficiency of Monte Carlo integration, especially for pricing exotic options where the random input is multivariate Normal.
When the importance function (the product of integrand and original density) is multimodal, determining a good IS density is a difficult task.
We propose an Automated Importance Sampling DEnsity selection procedure (AISDE).
AISDE selects an IS density as a mixture of multivariate Normal densities with modes at certain local maxima of the importance function.
When the simulation input is multivariate Normal, we use principal component analysis to obtain a reduced-dimension, approximate importance function, which allows efficient identification of a good IS density via AISDE in original problem dimensions over 100.
We present Monte Carlo experimental results on randomly generated option-pricing problems (including path-dependent options), demonstrating large and consistent efficiency improvement.
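A minimal sketch of importance sampling with a mixture-of-normals proposal whose components sit at the modes of the importance function, in the spirit of the approach above; the toy integrand P(|Z| > 3) for standard normal Z stands in for an option payoff, and the component locations and weights are illustrative assumptions.

    # Sketch: the importance function phi(z)*1{|z|>3} peaks near z = +/-3, so
    # the proposal is the mixture 0.5*N(3,1) + 0.5*N(-3,1); samples are
    # reweighted by the likelihood ratio phi(z)/q(z).
    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    n = 20_000

    def payoff(z):
        return (np.abs(z) > 3).astype(float)

    # plain Monte Carlo
    z = rng.standard_normal(n)
    plain = payoff(z)

    # importance sampling from the two-component mixture
    comp = rng.integers(0, 2, n)                              # pick a component
    zi = rng.normal(np.where(comp == 0, 3.0, -3.0), 1.0)
    q = 0.5 * norm.pdf(zi, 3, 1) + 0.5 * norm.pdf(zi, -3, 1)  # proposal density
    w = norm.pdf(zi) / q                                      # likelihood ratio
    is_est = payoff(zi) * w

    print("true        ", 2 * norm.sf(3))
    print("plain MC    ", plain.mean(), "+/-", plain.std(ddof=1) / np.sqrt(n))
    print("importance  ", is_est.mean(), "+/-", is_est.std(ddof=1) / np.sqrt(n))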
This paper outlines how a future ground surveillance system can be designed and how such a system could work within the framework of the Swedish Network Based Defense, (NBD), concept.
The material presented in this paper is in part an executive summary of the results from the project Fusion node 2 within phase 1 of the Swedish LedsystT study.
The paper discusses the general demands that the NBD concept will put on a ground surveillance system, how such a network based multisensor system should be designed and how the creation of vital parts of a Recognized Ground Picture can be achieved through a system for distributed sensor data fusion.
Furthermore, an NBD ground picture simulator developed for testing and demonstrating key issues in this field is presented.
Space, and COSE (Creation of Study Environments), as part of its commitment to distributed learning.
A wide-reaching evaluation model has been designed, aimed at appraising the quality of students' learning experiences using these VLEs.
The evaluation can be considered to be a hybrid system with formative, summative and illuminative elements.
The backbone of the model is a number of measuring instruments that were fitted around the educational process beginning in Jan 1999.
Optimizing compilers, including those in virtual machines, commonly utilize Static Single Assignment Form as their intermediate representation, but interpreters typically implement stack-oriented virtual machines.
This paper introduces an easily interpreted variant of Static Single Assignment Form.
Each instruction of this Interpretable Static Single Assignment Form, including the Phi Instruction, has self-contained operational semantics facilitating efficient interpretation.
Even the array manipulation instructions possess directly-executable single-assignment semantics.
In addition, this paper describes the construction of a prototype virtual machine realizing Interpretable Static Single Assignment Form and reports on its performance.
Activities such as Web Services and the Semantic Web are working to create a web of distributed machine understandable data.
In this paper we present an application called Semantic Search which is built on these supporting technologies and is designed to improve traditional web searching.
We provide an overview of TAP, the application framework upon which the Semantic Search is built.
We describe two implemented Semantic Search systems which, based on the denotation of the search query, augment traditional search results with relevant data aggregated from distributed sources.
We also discuss some general issues related to searching and the Semantic Web and outline how an understanding of the semantics of the search terms can be used to provide better results.
We have already presented a system that can track the 3D speech movements of a speaker's face in a monocular video sequence.
For that purpose, speaker-specific models of the face have been built, including a 3D shape model and several appearance models.
In this paper, speech movements estimated using this system are perceptually evaluated.
These movements are re-synthesised using a Point-Light (PL) rendering.
They are paired with original audio signals degraded with white noise at several SNR.
We study how much such PL movements enhance the identification of logatoms, and also to what extent they influence the perception of incongruent audio-visual logatoms.
In a first experiment, the PL rendering is evaluated per se.
Results seem to confirm other previous studies: though less efficient than actual video, PL speech enhances intelligibility and can reproduce the McGurk effect.
In the second experiment, the movements have been estimated with our tracking framework with various appearance models.
No salient differences are revealed between the performances of the appearance models.
A novel modulation scheme suitable for noncoherent demodulation based on quaternary quasi-orthogonal sequences is proposed.
Compared to orthogonal modulation, the controlled quasi-orthogonality between the sequences allows significantly increased bandwidth efficiency with little or no degradation in power efficiency.
A hardware efficient demodulator structure using fast Walsh transforms is also presented.
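A minimal sketch of a fast Walsh transform of the kind such a demodulator builds on: correlating a length-2^m input against all Walsh sequences takes N log N additions instead of N^2. The length-8 example and the sign-preserving (coherent-style) decision are illustrative simplifications.

    # Sketch: an in-place fast Walsh-Hadamard transform and its use to pick the
    # transmitted Walsh sequence as the index of the largest correlation.
    def fwht(a):
        """In-place Walsh-Hadamard transform; len(a) must be a power of two."""
        h, n = 1, len(a)
        while h < n:
            for i in range(0, n, 2 * h):
                for j in range(i, i + h):
                    x, y = a[j], a[j + h]
                    a[j], a[j + h] = x + y, x - y
            h *= 2
        return a

    k = 5                                                     # sent sequence index
    codeword = fwht([1 if i == k else 0 for i in range(8)])   # row k of H_8
    scores = fwht(list(codeword))                             # receiver correlates
    print(scores.index(max(scores)))                          # -> 5, recovered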
This paper presents a formalized ontological framework for the analysis of multiscale classifications of geographic objects.
We propose a set of logical principles that guide such geographic classifications.
Then we demonstrate application of these principles on a practical example of the "National Hierarchical Framework of Ecological Units".
The framework has the potential to be used to facilitate interoperability between such geographic classifications.
To represent the individual states of software systems we propose to use edge-labelled graphs: nodes will stand for dynamically allocated entities (e.g., objects or method frames) and edges for relations between those entities (e.g., arising from associations or variables).
Obviously, as these graphs may in principle grow unboundedly, the state space is generally infinite.
In this paper we present a technique to automatically obtain finite approximations of arbitrary state spaces, by recording only the local structure of the individual graphs: essentially, for each node we only store the approximate number of its neighbours according to each edge label.
This gives rise to a variant of shape graphs described elsewhere.
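A minimal sketch of the per-node, per-label neighbour counting described above, with counts above a cut-off collapsed to "many"; the cut-off value, the triple representation of the graph and the example heap are illustrative assumptions.

    # Sketch: abstract an edge-labelled graph by recording, for every node,
    # only an approximate count of its outgoing neighbours per edge label.
    from collections import defaultdict

    CUTOFF = 2          # counts 0, 1, 2 are kept exactly; larger becomes "many"

    def abstract(edges):
        """edges: iterable of (source, label, target) triples."""
        counts = defaultdict(lambda: defaultdict(int))
        for src, label, _tgt in edges:
            counts[src][label] += 1
        return {node: {lab: (c if c <= CUTOFF else "many")
                       for lab, c in labs.items()}
                for node, labs in counts.items()}

    heap = [("root", "ref", "list"), ("list", "next", "cell1"),
            ("cell1", "next", "cell2"), ("cell1", "val", "o1"),
            ("cell2", "next", "cell3"), ("cell2", "val", "o1"),
            ("o1", "f", "a"), ("o1", "f", "b"), ("o1", "f", "c")]
    print(abstract(heap))
    # heap states differing only in how many "f" neighbours o1 has beyond the
    # cut-off collapse to the same finite abstract summary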
OWL have, at their heart, the RDF graph.
Jena2, a secondgeneration RDF toolkit, is similarly centered on the RDF graph.
RDFS and OWL reasoning are seen as graph-to-graph transforms, producing graphs of virtual triples.
Rich APIs are provided.
The Model API includes support for other aspects of the RDF recommendations, such as containers and reification.
The Ontology API includes support for RDFS and OWL, including advanced OWL Full support.
Jena includes the de facto reference RDF/XML parser, and provides RDF/XML output using the full range of the rich RDF/XML grammar.
N3 I/O is supported.
RDF graphs can be stored in-memory or in databases.
Jena's query language, RDQL, and the Web API are both offered for the next round of standardization.
A promising approach to graph clustering is based on the intuitive notion of intra-cluster density vs. inter-cluster sparsity.
While both formalizations and algorithms focusing on particular aspects of this rather vague concept have been proposed, no conclusive argument on their appropriateness has been given.
Enhanced Multimedia Meta Objects (EMMOs) are a novel approach to multimedia content modeling, combining media, semantic relationships between those media, as well as functionality on the media, such as rendering, into tradeable knowledge-enriched units of multimedia content.
For the processing of EMMOs and the knowledge they contain, suitable querying facilities are required.
In this paper, we present EMMA, an expressive query algebra that is adequate and complete with regard to the EMMO model.
EMMA offers a rich set of formally-defined, orthogonal query operators that give access to all aspects of EMMOs, enable query optimization, and allow the representation of elementary ontology knowledge within queries.
Thereby, EMMA provides a sound and adequate foundation for the realization of powerful EMMO querying facilities.
We recently proposed a definition of a language for nonmonotonic reasoning based on intuitionistic logic.
Our main idea is a generalization of the notion of answer sets for arbitrary propositional theories.
We call this extended framework safe beliefs.
We present an algorithm, based on the Davis-Putnam (DP) method, to compute safe beliefs for arbitrary propositional theories.
We briefly discuss some ideas on how to extend this paradigm to incorporate preferences.
Russian words are rendered in this paper using the transliteration system known as the Library of Congress system (without diacritics).
At the first occurrence of a particular Russian word, it is given both in Cyrillic characters and in a transliterated form.
Subsequent references to the word are always written in transliterated form.
Handwriting recognition and OCR systems need to cope with a wide variety of writing styles and fonts, many of them possibly not previously encountered during training.
This paper describes a notion of Bayesian statistical similarity and demonstrates how it can be applied to rapid adaptation to new styles.
The ability to generalize across different problem instances is illustrated in the Gaussian case, and the use of statistical similarity in the Gaussian case is shown to be related to adaptive metric classification methods.
The relationship to prior approaches to multitask learning, as well as variable or adaptive metric classification, and hierarchical Bayesian methods, are discussed.
Experimental results on character recognition from the NIST3 database are presented.
There are 339 combinatorial types of generic metrics on six points.
They correspond to the 339 regular triangulations of the second hypersimplex Δ(6, 2), which also has 14 non-regular triangulations.
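For reference, the second hypersimplex mentioned above is the convex hull of the 0/1 vectors with exactly two ones, so its vertices index the pairs of points of the metric space; this standard definition is stated here for convenience.

    % the second hypersimplex: vertices e_i + e_j index the pairs {i, j}
    \[
      \Delta(n,2) \;=\; \operatorname{conv}\bigl\{\, e_i + e_j \;:\; 1 \le i < j \le n \,\bigr\}
      \subset \mathbb{R}^{n}, \qquad n = 6 \text{ in the case above}.
    \]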
This paper discusses the uses of SDL for the co-design of an ATM Network Interface Card (NIC).
In this study, the initial specification is given in SDL.
The architecture generation is made using Cosmos, a co-design tool for multiprocessor architecture.
Several architectures are produced starting from the same initial SDL specification.
The performance evaluation of these solutions was made using hardware/software cosimulation.
This paper describes the experiment and the lessons learned about the capabilities and the restrictions of SDL and Cosmos for hardware/software co-design of distributed systems.
The use of SDL allows for drastic reduction of the model size when compared to hardware/software model given in C/VHDL.
SDL simulation may be 30 times faster than C/VHDL simulation.
This paper is a detailed case study of building Code Tutor, a Web-based intelligent tutoring system (ITS) in the domain of radio communications.
It is ontologically founded and was built using CLIPS and Java-based expert system tools, latest integrated graphical CASE tools for software analysis and design, and Java servlets.
In Code Tutor, Apache HTTP Server stores and serves static HTML pages, and Apache JServ Java package enables dynamic interpretation of user defined servlet classes and generation of active HTML pages.
XML technology is used to generate files that Code Tutor uses to provide recommendations to the learners.
Such a rich palette of integrated advanced technologies has greatly alleviated the system design and implementation, and has also led to interesting solutions of a number of problems common to many ITSs.
The paper describes these solutions and useful design decisions, and discusses several practical issues related to architectures of intelligent Web-based applications.
Alternative forms of representation were employed to generate new insights into the knowledge teachers use to inform practice.
Conversation, drawing, metaphor, and story writing encouraged a group of teachers to make multiple probes into their ways of knowing how to manage the complexities of many everyday teaching situations.
'Sandy's Story', and comments from other teachers, illustrate how these methods can enhance efforts to understand the ways that personal images enter into teaching decisions.
Why teachers and researchers ought to inquire into this aspect of knowing how to teach is examined.
We construct various isometry groups of Urysohn space (the unique complete separable metric space which is universal and homogeneous), including abelian groups which act transitively, and free groups which are dense in the full isometry group.
Clustering is a core problem in data-mining with innumerable applications spanning many fields.
A key difficulty of effective clustering is that for unlabelled data a `good' solution is a somewhat ill-defined concept, and hence a plethora of valid measures of cluster quality have been devised.
Most clustering algorithms optimize just one such objective (often implicitly) and are thus limited in their scope of application.
In this paper, we investigate whether an EA optimizing a number of different clustering quality measures simultaneously can find better solutions.
Using problems where the correct classes are known, our results show a clear advantage to the multiobjective approach: it exhibits a far more robust level of performance than the classic k-means and average-link agglomerative clustering algorithms over a diverse suite of 15 real and synthetic data sets, sometimes outperforming them substantially.
Normal forms play a central role in the design of relational databases.
Several normal forms for temporal relational databases have been proposed.
These definitions are particular to specific temporal data models, which are numerous and incompatible.
Programs are, nowadays, considered to be complex systems.
Entropy and Correlation are the most widely used metrics available for the analysis of complex systems.
This paper compares the application of these sorts of metrics, in the evaluation of the program organization.
We verify that the metrics based on correlation are the most valuable for the identification of program organization.
Contents: 1 Wireless Networks; 1.1 Introduction; 1.1.1 History of Wireless Networks; 1.1.2 Wireless Data Vision; 1.1.3 Technical Challenges; 1.2 The Wireless Channel; 1.2.1 Path Loss; 1.2.2 Shadow Fading; 1.2.3 Multipath Flat-fading and Intersymbol Interference; 1.2.4 Doppler Frequency Shift; 1.2.5 Interference; 1.2.6 Infrared versus Radio; 1.2.7 Capacity Limits of Wireless Channels; 1.3 Link Level Design; 1.3.1 Modulation Techniques; 1.3.2 Channel Coding and Link Layer Retransmission; 1.3.3 Flat-Fading Countermeasures ..
The publication of atomic resolution crystal structures for the large ribosomal subunit from Haloarcula marismortui and the small ribosomal subunit from Thermus thermophilus has permanently altered the way protein synthesis is conceptualized and experiments designed to address its unresolved issues.
The impact of these structures on RNA biochemistry is certain to be no less profound.
The background and substance of these developments are reviewed here.
This paper presents a simple Evolution Strategy and three simple selection criteria to solve engineering optimization problems.
This approach avoids the use of a penalty function to deal with constraints.
Its main advantage is that it does not require the definition of extra parameters, other than those used by the evolution strategy.
A self-adaptation mechanism allows the algorithm to maintain diversity during the process in order to reach competitive solutions at a low computational cost.
The approach was tested in four well-known engineering design problems and compared against several penalty-function-based approaches and other state-of-the-art technique.
The results obtained indicate that the proposed technique is highly competitive in terms of quality, robustness and computational cost.
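A minimal sketch of a (mu+lambda) evolution strategy with self-adaptive step sizes and penalty-free constraint handling through simple comparison rules (feasible beats infeasible; among feasible, lower objective wins; among infeasible, lower violation wins); these particular rules, the toy problem and the parameter values are assumptions made for illustration, not necessarily the paper's exact criteria.

    # Sketch: self-adaptive (mu+lambda)-ES; selection encodes the comparison
    # rules in the sort key, so no penalty function is needed.
    import numpy as np
    rng = np.random.default_rng(1)

    def objective(x):                       # minimise a shifted sphere ...
        return float(np.sum((x - 2.0) ** 2))

    def violation(x):                       # ... subject to x0 + x1 <= 3
        return max(0.0, float(x[0] + x[1] - 3.0))

    dim, mu, lam = 2, 10, 70
    tau = 1.0 / np.sqrt(2.0 * dim)
    pop = [(rng.uniform(-5, 5, dim), 0.5) for _ in range(mu)]   # (solution, sigma)

    for _ in range(200):
        offspring = []
        for _ in range(lam):
            x, s = pop[rng.integers(mu)]
            s_new = s * np.exp(tau * rng.standard_normal())     # self-adaptation
            offspring.append((x + s_new * rng.standard_normal(dim), s_new))
        pop = sorted(pop + offspring,
                     key=lambda ind: (violation(ind[0]) > 0,    # feasible first
                                      violation(ind[0]),        # then less violation
                                      objective(ind[0])))[:mu]  # then better objective

    best, _ = pop[0]
    print("best:", best, "objective:", objective(best), "violation:", violation(best))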
The problem of coverage without a priori global information about the environment is a key element of the general exploration problem.
Applications vary from exploration of the Mars surface to the urban search and rescue (USAR) domain, where neither a map, nor a Global Positioning System (GPS) are available.
We propose two algorithms for solving the 2D coverage problem using multiple mobile robots.
The basic premise of both algorithms is that local dispersion is a natural way to achieve global coverage.
Thus, both algorithms are based on local, mutually dispersive interaction between robots when they are within sensing range of each other.
Simulations show that the proposed algorithms solve the problem to within 5-7% of the (manually generated) optimal solutions.
We show that the nature of the interaction needed between robots is very simple; indeed anonymous interaction slightly outperforms a more complicated local technique based on ephemeral identification.
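A minimal sketch of the local-dispersion rule that both algorithms build on: each robot simply steps away from the centroid of the neighbours it currently senses. The sensing range, step size, arena size and the crude grid-cell coverage measure are illustrative assumptions.

    # Sketch: anonymous local dispersion as a coverage primitive.
    import numpy as np
    rng = np.random.default_rng(3)

    SENSE, STEP, ARENA = 2.0, 0.1, 10.0
    robots = rng.uniform(4.0, 6.0, size=(20, 2))        # start in a tight cluster

    for _ in range(500):
        for i in range(len(robots)):
            d = np.linalg.norm(robots - robots[i], axis=1)
            neigh = robots[(d > 0) & (d < SENSE)]
            if len(neigh):                               # move away from neighbours
                away = robots[i] - neigh.mean(axis=0)
                norm = np.linalg.norm(away)
                if norm > 1e-9:
                    robots[i] = robots[i] + STEP * away / norm
            robots[i] = np.clip(robots[i], 0.0, ARENA - 1e-6)   # stay in the arena

    covered = {tuple(np.floor(r).astype(int)) for r in robots}
    print(f"{len(covered)} of {int(ARENA) ** 2} unit cells contain a robot")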
Brachytherapy is the treatment method of choice for patients with a tumor relapse after a radiation therapy with external beams or tumors in regions with sensitive surrounding organs-at-risk, e.g., prostate tumors.
The standard needle implantation procedure in brachytherapy uses pre-operatively acquired image data displayed as slices on a monitor beneath the operation table.
Since this information allows only a rough orientation for the surgeon, the position of the needles has to be verified repeatedly during the intervention.
This paper addresses the contested issue of the efficacy of targeting interventions in developing countries using a newly constructed comprehensive database of 111 targeted antipoverty interventions in 47 countries.
While the median program transfers 25 percent more to the target group than would be the case with a universal allocation, more than a quarter of targeted programs are regressive.
Countries with higher income or governance measures, and countries with better measures for voice do better at directing benefits toward poorer members of the population.
Interventions that use means testing, geographic targeting, and self-selection based on a work requirement are all associated with an increased share of benefits going to the bottom two quintiles.
Self-selection based on consumption, demographic targeting to the elderly, and community bidding show limited potential for good targeting.
Proxy means testing, community-based selection of individuals, and demographic targeting to children show good results on average, but with considerable variation.
Overall, there is considerable variation in targeting performance when we examine experiences with specific program types and specific targeting methods.
Indeed a Theil decomposition of the variation in outcome shows that differences between targeting methods account for only 20 p