Software architecture-based analysis and testing: a look into achievements and future challenges

1. The role of software architecture in testing and analysis

The term software architecture (SA) has been introduced to denote the high-level structure of a software system. SA has been proposed as a means for managing complexity and improving reuse, by supporting the decomposition of a system into its high-level components and their interconnections. SA became prevalent at the beginning of the ’90s, when its main role was to describe the system structure (by identifying architectural components and connectors) and the required system behavior. Over the years, the SA scope has evolved, and today it also captures the architecturally-relevant decisions behind design [50], taken by a variety of stakeholders to satisfy their own specific concerns, and codified into different views and viewpoints [46]. Nowadays, the relevance of SA in both the academic and industrial worlds is unquestionable, and SAs are used for documenting and communicating design decisions and architectural solutions [24], for driving analysis techniques [57,62,63,79], for code generation purposes in model-driven engineering [3,37], for product line engineering [15], for risk and cost estimation [36,67], and so on.

SAs have also been advocated since the 1990s as a means for improving the dependability of complex software systems. In this light, different methods have been proposed, on the one side for assessing the correctness of architectural decisions with respect to system goals and requirements, and on the other as a reference model to drive a more effective system design and implementation. Among these methods, testing and analysis play a central role. In this editorial paper, while introducing the two papers selected for this special issue, we report on those that we consider the most relevant advances in the field of architecture-based testing and analysis over the years. We do this based on our own journey in this topic, which we ourselves contributed to steer and shape.

Many dedicated events have been held to discuss the role played by SA in the analysis and testing of software systems. Starting from the first ROSATEA workshop run in 1998 (and followed by the ROSATEA 2006 and ROSATEA 2007 editions), the role played by SA in analysis and testing has also been discussed in dependability venues (such as the Architecting Dependable Systems workshops and books), in testing venues (such as the Charette session of the “6th Workshop on Automation of Software Test” at ICSE 2011), and in software architecture venues (such as the “Workshop on Architecture-Based Testing and System Validation”, part of the WICSA 2011 programme). Very recently, three events (the “Working meeting on Architecture-Based Testing: Moving Research into Practice” held in Pisa in 2011, the industrial and practitioner-oriented meeting “Workshop on Architecture-Based Testing: Best Practices, Best Potential” held at the CMU-SEI in February 2011, and the “Architecture-based Testing: Industrial Needs and Research Opportunities” BOF session at WICSA 2011) showed a renewed interest in the topic.

Summarizing, we can classify the role of SA in testing and analysis in three main areas:

Architecture evaluation: evaluates the goodness of the architecture with respect to quality requirements.

Architecture-based analysis: analyzes an architecture or architectural model with respect to functional and non-functional qualities. Usually the analysis serves the purpose of comparing alternative architectures to make informed early design decisions.
Architecture-based testing: uses an architecture to produce artifacts useful in testing the implementation of a system (e.g., test cases, test plans, coverage measures) and executes code-level test cases to check the implementation.

When dealing with SA-based analysis and testing, two perspectives can be taken: (i) the one where the architect wants to analyze or test the software architecture itself (e.g., with respect to given requirements or goals), or (ii) the one where the developer wants to analyze or test the developed system against the decisions made within its SA. This implies conformance between the architecture and the implemented system. This paper covers both perspectives.

2. Architecture evaluation

Architecture evaluation (AE) consists in evaluating the compliance of a software architecture to quality requirements. Different methods have been proposed. Dobrica and Niemela in [31] analyze and compare eight well-known AE methods, including the SAAM (scenario-based architecture analysis method) with its three extensions SAAMCS, ESAAMI, and SAAMER, the ATAM (architecture trade-off analysis method), the SBAR (scenario-based architecture reengineering), the ALPSM (architecture level prediction of software maintenance), and the SAEM (software architecture evaluation model). Most of these AE techniques are based on scenarios, and help to assess the SA quality with respect to a multitude of quality attributes. Different techniques specialize on different evaluation goals: ATAM locates and analyzes tradeoffs, SBAR and SAAMER evaluate the potential of the designed SA to achieve the software quality requirements, SAAMCS focuses on risk assessment, while SAAM and ESAAMI help to identify potential trouble spots [31]. Among them, SAAM and ATAM can be considered the most mature and widely applied methods.

More recently, other AE methods have been introduced. The System Architecture Tradeoff Analysis Method (see: http://www.sei.cmu.edu/architecture/tools/evaluate/systematam.cfm) is a variant of the ATAM, designed to evaluate the system and software architectures of a software-reliant system. It is used to assess the consequences of architectural decisions in light of business goals and quality-attribute requirement goals. The Holistic Product Line Architecture Assessment (HoPLAA) method [60] extends the ATAM architecture evaluation to product line architectures: in a first stage, HoPLAA focuses on the core architecture evaluation, while during a second stage it evaluates individual product architectures. The strength of HoPLAA is that it takes less time than performing the ATAM analysis separately on each individual product architecture. The Continuous Performance Assessment of Software Architecture (CPASA) [66] provides a method for assessing the performance requirements using the system architecture as a reference. CPASA, being designed with agile and incremental software development processes in mind, extends the PASA [75] framework, which instead requires a full and finalized requirements specification. The Tiny Architectural Review Approach (TARA) [76] is a quick and inexpensive, non-scenario-based approach, recently introduced to make architecture evaluation more easily applicable in industry. Starting from the consideration that scenario-based methods are perceived to be expensive when applied in industrial contexts, and that there is no clear confidence in the benefits of such assessments, TARA aims to be quicker and less expensive than most of the scenario-based methods.
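To make the flavor of scenario-based evaluation concrete, the sketch below (which does not reproduce any specific method such as SAAM or ATAM; all scenario and decision names are invented) maps quality-attribute scenarios onto the architectural decisions they depend on and flags decisions that several attributes are sensitive to as candidate trade-off points.

```python
# A minimal sketch in the spirit of scenario-based evaluation (not a faithful
# reproduction of SAAM/ATAM): quality-attribute scenarios are mapped onto the
# architectural decisions they depend on, and decisions that several attributes
# are sensitive to are flagged as candidate trade-off points. All names invented.

scenarios = [
    {"attribute": "performance",   "text": "handle 500 req/s at < 200 ms",          "decisions": ["message-bus", "cache"]},
    {"attribute": "modifiability", "text": "add a new payment provider in 2 days",  "decisions": ["plugin-api"]},
    {"attribute": "availability",  "text": "survive a node crash with < 5 s outage","decisions": ["message-bus", "replication"]},
    {"attribute": "security",      "text": "tenant data never crosses tenants",     "decisions": ["replication", "plugin-api"]},
]

def candidate_tradeoff_points(scenarios):
    """Return decisions that more than one quality attribute is sensitive to."""
    sensitivity = {}
    for s in scenarios:
        for d in s["decisions"]:
            sensitivity.setdefault(d, set()).add(s["attribute"])
    return {d: attrs for d, attrs in sensitivity.items() if len(attrs) > 1}

for decision, attrs in candidate_tradeoff_points(scenarios).items():
    print(f"trade-off candidate: {decision} (affects {', '.join(sorted(attrs))})")
```

The output lists the decisions that evaluators would then probe in more depth, which is the step where the scenario-based methods above differ most.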

3. Architecture-based analysis

While AE focuses on evaluating the SA itself, architecture-based analysis (ABA) analyzes the expected qualities of the system, based on the decisions made at the architecture level. In other words, assuming the SA model correctly implements the desired requirements, ABA aims at using the produced architectural artifacts to select the architecture that best satisfies the expected system qualities and stakeholder concerns.

3.1 Model checking software architectures

Model checking [23] is a widely used formal verification technique whose aim is to analyze, through an exhaustive and automatic approach, system behavior with respect to selected properties. In the context of software architectures, model checking consists in assessing whether an SA specification satisfies desired architectural properties. The input to this process is an SA behavioral specification and a set of SA properties. For each property, the model checker processes the SA inputs and returns “true” (in case the SA satisfies the property) or “false” (in which case it also returns a counter-example). Sixteen model checking SA techniques have been classified and compared in [79]. Starting from Wright [4], considered the first seminal work on model checking SA, many other approaches have been proposed, including approaches for checking real-time system properties (e.g., Tsai et al.’s approach [72], and the Fujaba [20] approach and tool for model checking distributed, embedded, and real-time systems), approaches for checking dynamically evolving and reconfigurable architectures (e.g., CHAM [25], PoliS [22], and ArchWARE [61]), approaches for model checking SA based on UML models (e.g., Bose [16], Charmy [62], and AutoFOCUS [2]), and approaches for checking concurrent and distributed architectures (e.g., Darwin/FSP [53], and SAM [43]).

By looking at how the model checking SA approaches evolved over time, we can notice that: (i) while earlier model checking SA approaches relied on formal textual notations, more recent ones make use of model-based notations, or a combination of model-based and formal textual notations; (ii) SA properties are typically specified using existing languages, even if some new formal languages or graphical abstractions have been proposed; (iii) while most of the model checking approaches make use of existing (general-purpose) model-checking engines, some approaches (e.g., LTSA, see http://www.doc.ic.ac.uk/ltsa/, and the PoliS model checker [22]) introduced their own SA-specific model checking engine; (iv) the most recent approaches provide a more comprehensive support for architectural element modeling and checking, enabling the specification and checking of properties related to components, connectors, interfaces, types and instances, semantics, and configuration.

The first paper in this special issue, titled “An Architectural Approach to the Analysis, Verification and Validation of Software Intensive Embedded Systems”, deals with model checking based on an SA behavioral specification (see Sect. 6 for further information).
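As a minimal illustration of the process described above (and not of any of the cited tools), the following sketch composes two invented component behaviors, Client and Server, given as labelled transition systems that synchronize on shared actions, exhaustively explores the resulting state space, and either reports that a safety property holds or returns a counter-example trace.

```python
from collections import deque

# A minimal sketch: each component is a labelled transition system (LTS)
# given as {state: [(action, next_state), ...]}. Components synchronize on
# shared action names, roughly in the spirit of Darwin/FSP-style models.
client = {"idle": [("request", "waiting")], "waiting": [("reply", "idle")]}
server = {"ready": [("request", "busy")],  "busy":    [("reply", "ready")]}
components = [("Client", client, "idle"), ("Server", server, "ready")]

def alphabet(lts):
    return {a for trans in lts.values() for a, _ in trans}

def compose_step(state, components):
    """Yield (action, next_state) of the parallel composition at `state`."""
    alphas = [alphabet(lts) for _, lts, _ in components]
    for action in set().union(*alphas):
        nxt, ok = list(state), True
        for i, (_, lts, _) in enumerate(components):
            if action in alphas[i]:
                succs = [s2 for a, s2 in lts[state[i]] if a == action]
                if not succs:
                    ok = False          # this component blocks the shared action
                    break
                nxt[i] = succs[0]       # deterministic LTSs assumed for brevity
        if ok:
            yield action, tuple(nxt)

def check_safety(components, violates):
    """Exhaustive BFS over the composed state space; counter-example or None."""
    init = tuple(s0 for _, _, s0 in components)
    frontier, seen = deque([(init, [])]), {init}
    while frontier:
        state, trace = frontier.popleft()
        if violates(state, trace):
            return trace                # property violated: return the trace
        for action, nxt in compose_step(state, components):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, trace + [action]))
    return None                         # property holds on every reachable state

# Example property: a reply must never be observed before any request.
violates = lambda state, trace: bool(trace) and trace[-1] == "reply" and "request" not in trace[:-1]
print(check_safety(components, violates) or "property satisfied")
```

The cited tools work on far richer notations and properties (e.g., temporal logics), but the underlying loop of exhaustive exploration plus counter-example reporting is the same.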

3.2 Performance and reliability analysis at the software architecture level

According to the above-cited survey [55], the interviewed practitioners identified the assessment of extra-functional properties as the most important reason for conducting architecture-based analysis. In the following, we focus on the two extra-functional analyses that received the most attention from the SA community, namely architecture-based performance and reliability analyses.

Software performance engineering is a systematic, quantitative approach to constructing software systems that meet performance objectives [74]. When applying software performance engineering at the software architecture level, architectural models are enriched with performance annotations, with the purpose of either selecting the architectural solution that potentially delivers the best performance, or measuring the performance of an actually developed system in terms of its components (in line with the two perspectives defined in Sect. 1). Initial approaches for software performance engineering at the software architecture level were proposed in the late ’90s. Williams and Smith [74] describe how to use software performance engineering techniques to perform early assessment of an SA: more specifically, given the ICAD application, they consider three different applications and informally demonstrate the importance of inter-component communication in determining performance. In [7] the authors describe how to derive a queuing network model from a Chemical Abstract Machine formal architectural specification, for providing a quantitative performance comparison among different architectural alternatives. Petriu and others [63–65] propose a formal approach for building layered queueing network performance models from the architectural patterns used in the system by using the PROGRES graph rewriting system. In [63] the proposed approach is applied to a telecommunication product.

Other approaches have been presented in subsequent years. Balsamo et al. [8] survey model-based performance prediction approaches. They discuss software performance engineering approaches to address early software performance analysis and conclude that most of the approaches apply performance analysis at the architecture level. Another mainstream work recognizing the importance of SA for performance engineering is the Future of Software Engineering paper [77], where the authors acknowledge the need for automatic performance optimization of architecture, design, and run-time configuration. More recently, Koziolek [48] surveys performance prediction and measurement approaches for component-based systems. Based on this work, state-of-the-art performance evaluation approaches for component-based systems are classified into five main categories: prediction approaches based on UML, prediction approaches based on proprietary meta-models, prediction approaches focusing on middleware, formal performance specification approaches, and monitoring approaches for system implementations.
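To give a concrete, if simplistic, flavor of architecture-level performance prediction, the sketch below (not tied to any of the surveyed approaches; all nodes, demands, and rates are invented) maps each architectural alternative to per-node service demands and predicts response time with a simple open queueing network based on operational laws.

```python
# A minimal sketch, not any of the surveyed tools: predict response time for
# architectural alternatives by mapping each component to a service demand
# (seconds per request) on its deployment node and solving a simple open
# queueing network with operational laws. All numbers below are invented.

def open_qn_response_time(demands_per_node, arrival_rate):
    """demands_per_node: {node: total service demand per request, in seconds}."""
    response = 0.0
    for node, demand in demands_per_node.items():
        utilization = arrival_rate * demand           # U = X * D
        if utilization >= 1.0:
            raise ValueError(f"{node} saturated (U = {utilization:.2f})")
        response += demand / (1.0 - utilization)      # M/M/1-style residence time
    return response

# Two hypothetical architectural alternatives for the same functionality:
alternatives = {
    "broker-mediated": {"client": 0.002, "broker": 0.006, "db": 0.010},
    "direct-calls":    {"client": 0.002, "service": 0.004, "db": 0.012},
}
for name, demands in alternatives.items():
    r = open_qn_response_time(demands, arrival_rate=40.0)   # 40 requests/s
    print(f"{name}: predicted response time = {r * 1000:.1f} ms")
```

Real approaches (layered queueing networks, simulation, measurement-based calibration) are far richer, but the workflow is the same: annotate the architectural model, solve a performance model, and compare alternatives before committing to one.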
Reliability can be seen as the ability of a system or component to perform its required functions under stated conditions for a specified period of time [1]. As discussed in [45], the reliability of a software architecture depends on three main factors: the reliability of individual components (e.g., implementation technology, size, and complexity), the component interactions (e.g., component dependencies and interactions), and the execution environment. Goseva-Popstojanova and Trivedi [40] provide a survey of architecture-based approaches, categorizing them into state-based, path-based, and additive models. State-based models use the control flow to represent the system architecture and the probabilities of the transfer of control between components to estimate the system reliability. In path-based models, the system reliability is computed considering the possible execution paths of the program. Additive models, instead, focus on estimating the overall application reliability using the components’ failure data. This study classifies state-of-the-art approaches and discusses the different models’ assumptions, limitations, and applicability.

A survey conducted by Immonen and Niemela in 2008 [45] compares state-of-the-art reliability and availability prediction methods from the viewpoint of SA. The approaches are classified into state-based and path-based. State-based approaches are divided into composite (where architecture and failure behaviors are combined into a single model) and hierarchical (making use of approximate or accurate hierarchical models). In path-based approaches the reliability of the software is a weighted average of the reliabilities of all the paths. The study identified as the main shortcomings at that time the lack of tool support, the lack of methods that consider variability in an explicit way, the weak validation of the methods and their results, and a limited ability to bridge the gap between reliability requirements and analysis.

Other relevant papers on the topic have been published since the surveys presented above. Among them, the work by Roshandel et al. [71] leverages standard SA models to predict the reliability of software systems by means of Bayesian models, the paper by Cheung et al. [21] presents a framework for predicting the reliability of individual software components during architectural design, and the article by Brosch et al. [19] presents an approach for reliability modeling and prediction of component-based software architectures that explicitly models the system usage profile and execution environment.
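The following sketch illustrates, under strong simplifying assumptions, the state-based reliability models discussed above: component reliabilities and transfer-of-control probabilities are combined into a Markov-style model whose solution yields a system-level reliability estimate. The architecture, probabilities, and reliability values are invented and the solution method (fixed-point iteration) is only one of several possible.

```python
# A minimal sketch in the spirit of the state-based models surveyed in [40]:
# per-visit component reliabilities plus transfer-of-control probabilities
# give a system-level reliability estimate. All numbers below are invented.

def system_reliability(reliab, transfer, exit_prob, entry, iters=10_000):
    """
    reliab[c]      : probability that component c executes correctly per visit
    transfer[c][d] : probability that control moves from c to d
    exit_prob[c]   : probability that the execution terminates after c
    Solves x_c = R_c * (exit_c + sum_d P_cd * x_d) by fixed-point iteration,
    where x_c is the probability of correct completion starting from c.
    """
    x = {c: 0.0 for c in reliab}
    for _ in range(iters):
        x = {c: reliab[c] * (exit_prob.get(c, 0.0)
                             + sum(p * x[d] for d, p in transfer.get(c, {}).items()))
             for c in reliab}
    return x[entry]

reliab    = {"ui": 0.999, "logic": 0.995, "storage": 0.990}
transfer  = {"ui": {"logic": 1.0},
             "logic": {"storage": 0.7, "ui": 0.1},
             "storage": {"logic": 1.0}}
exit_prob = {"logic": 0.2}    # execution ends after the logic component
print(f"predicted system reliability: {system_reliability(reliab, transfer, exit_prob, 'ui'):.4f}")
```

Such a toy model already makes visible why the surveys above stress component interactions and usage profiles: the same component reliabilities yield very different system estimates once the transfer probabilities change.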

3.3 Architecting fault tolerant systems

Fault tolerance (together with fault prevention, removal, and forecasting) is one of the four means to attain dependability [6]. The essence of fault tolerance is in detecting errors and carrying out the subsequent system recovery actions to avoid service failures in the presence of faults. Advances made since the 1970s have included a plethora of fault tolerance mechanisms, a good understanding of the basic principles of building fault tolerant software, and the dedication of a considerable fraction of requirements analysis, run-time resources, development effort and code to ensuring fault tolerance. The introduction of fault tolerance at the architecting phase has the clear benefit of allowing developers to make good decisions, very early in the process, about what redundant resources to use, how to use them efficiently, and how to establish whether the chosen recovery strategy will be successful. A number of approaches, methods and styles/patterns for architecting fault tolerant systems have been surveyed and compared in [59].

In the last decade there has been a steady stream of research papers, mostly in dedicated and specific workshops and conferences. Between 2001 and 2003 the main focus was on architectural styles and patterns for fault tolerant architectures. Existing approaches can be classified according to different criteria. Some reuse a library of styles, like in [32] where a library of fault tolerance patterns is used in order to generate a fault-tolerant architecture, or in [52] where FT systems are architected by using a library of existing styles. Others introduce domain-specific fault-tolerant styles, like in [54] where the SOA style is modified in order to add fault tolerance mechanisms to SOA systems, or in [30] where a web service FT style is used in which a fault local to a web service is managed internally, but if the web service is unable to do so the failure is immediately propagated to a Global Fault Manager. Other approaches propose styles that support the idealised fault-tolerant component model, like in [9] where the Idealised Fault-Tolerant Component style (combined with role-based collaboration styles) is used in order to produce a dependable software architecture, or in [18] where the Idealised FT architectural component (iFTComponent), the Idealised FT architectural connector (iFTConnector), and, in general, the Idealised FT architectural element (iFTE) are introduced; the iFTE has its own style that prescribes the way components and connectors inside it are integrated. Others combine existing styles, like in [28] where the C2 and the Idealised Fault-Tolerant Component styles are combined to create an Idealised C2 Component style, or in [78] where the pipe-and-filter, repository and object-oriented styles are combined to create a fault-tolerant architecture style.

Since 2004, the area has grown further, with new contributors joining in, a renewed interest in the topic indicated by a larger number of published papers, and a new maturity of the research community, as demonstrated by the many journal and book publications produced. New formal and diagrammatic modeling languages for describing fault tolerant architectures have been introduced. For example, in [27] a new Architecture Description Language called MAMA-dl is defined that combines a language for describing MAMA fault management architectures with a Fault-Tolerant Layered Queueing Network specification.
In [17] the B-Method (used to specify architectural elements, interfaces and exception types) is combined with the CSP algebra (used to specify architectural scenarios). In [33] an algebra of actors is extended with mechanisms to model and detect crash failures (of actors). In [18] (stereotyped) Component and Sequence diagrams are used to describe the four main idealised fault-tolerant architectural elements, enforcing the principles associated with the idealised fault-tolerant component model. A number of analysis techniques have been proposed for analyzing fault tolerant architectures, ranging from model and consistency checking of FT SAs (e.g., in [29]), to unit, integration, conformance, and scenario-based testing (like in [26]), and performance analysis (like in [27]).
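To illustrate the idealised fault-tolerant component idea in code, the sketch below separates normal and abnormal (exception-handling) activity, signals interface exceptions back to the caller, and propagates failure exceptions when local recovery does not succeed. Class and method names are illustrative and not taken from any of the cited works.

```python
# A minimal sketch of the idealised fault-tolerant component idea discussed
# above: normal activity and abnormal (exception-handling) activity are
# separated, interface exceptions are signalled back to the caller, and
# failure exceptions are propagated when local recovery does not succeed.

class InterfaceException(Exception):
    """Request rejected: the caller violated the component's interface."""

class FailureException(Exception):
    """The component could not deliver its service, even after local recovery."""

class IdealisedFTComponent:
    def __init__(self, service, recoverer):
        self._service = service        # normal activity
        self._recoverer = recoverer    # abnormal (exceptional) activity

    def request(self, payload):
        if payload is None:                     # invalid service request
            raise InterfaceException("payload must not be None")
        try:
            return self._service(payload)       # normal activity
        except Exception as local_error:
            try:
                return self._recoverer(payload, local_error)   # local recovery
            except Exception as unrecovered:
                # Local handling failed: signal a failure exception upstream,
                # where an enclosing component or connector must handle it.
                raise FailureException(str(unrecovered)) from unrecovered

# Usage: a flaky service wrapped with a simple sanitizing recoverer.
def flaky_parser(text):
    if "corrupt" in text:
        raise ValueError("corrupted input stream")
    return text.upper()

component = IdealisedFTComponent(
    flaky_parser,
    recoverer=lambda text, err: flaky_parser(text.replace("corrupt", "")))
print(component.request("clean data"))     # normal activity succeeds
print(component.request("corrupt data"))   # recovered by the abnormal activity
```

The architectural styles cited above essentially prescribe where such normal and exceptional flows live and how failure exceptions travel along connectors; the sketch only shows the intra-component separation of concerns.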

4. Architecture-based testing

Architecture-based testing (ABT) consists in using software architecture artifacts (e.g., software architecture specification models, architectural design decisions, architectural documentation) to select abstract (architecture-level) test specifications to be used for testing systems implementing the selected architecture. ABT is an instance of specification-based testing: an SA model describing the system’s expected behavior and properties is used to generate test specifications (according to certain testing goals and coverage and adequacy criteria) that are subsequently refined into concrete test cases and executed on the running implementation. More specifically, the high-level tests are passed to the software tester, who has to (i) derive code-level tests corresponding to the specified architectural test sequences, and (ii) actually run the tests and observe the behavior of the current implementation. Following the two perspectives reported in Sect. 1, ABT can be used to test the architecture itself, to test the system’s implementation for integration, or to test its conformance to the expected behavior and design decisions expressed by the system’s software architecture. In general, deriving a functional test plan means identifying, on some reference model, those classes of behavior that are relevant for testing purposes. Since an architectural model points out architecturally relevant information, such as component and connector behaviors, deployment and configuration of architectural elements, and architectural styles and constraints, architecture-based test cases will typically reveal (integration) failures related to those aspects.

By looking at the state of the art on ABT, we may recognize the following as the most relevant contributions to the field. In [12] the authors analyze the advantages of using SA-level unit and integration testing for the reuse of test results and to test extra-functional properties. In [69] the authors propose a family of architecture-based (integration) test criteria based on the SA specification, adapting specification-based approaches. These two papers represent the seminal attempts in ABT. In [14] Bertolino et al. outline an automatable method for the integration testing of software systems based on a formal description of their software architecture. In [68] Richardson et al. present an architecture-based integration testing approach that takes into consideration architecture testability, simulation, and slicing. In [41] Harrold presents an approach for effective software architecture regression testing, and in [42] she also discusses the use of software architecture for testing. In [70] Rosenblum adapts his strategy for component-based systems testing to SAs; the author shows how formal models of test adequacy can be used in conjunction with architectural models to guide testing. In [11] the authors present an approach for deriving test plans for the conformance testing of a system implementation with respect to an architecture description, while in [13] the authors establish a relation between SA tests and concrete, executable tests. In [47] the authors propose a technique to test data flow and control flow properties at the architectural level: six architecture relations among architectural units are defined and then used to define architecture testing paths, and five architecture-level testing criteria are proposed.
In [58] a framework for SA-based regression testing is presented, coping with two main types of evolution: architectural evolution and code evolution. The approach proposed by us in [57] still remains, to the best of our knowledge, the most comprehensive attempt to tackle the whole cycle of SA-based testing, spanning the whole spectrum from test derivation down to test execution, and relying on empirical hands-on experience with real-world case studies.

By looking at the state of the art on ABT from a temporal perspective, we can notice that: (i) while the preliminary papers mostly introduced the ABT topic and focused on selecting test cases, later approaches focus more on assessment criteria and test execution; (ii) architecture-related coverage criteria have been (almost) constantly proposed over time; (iii) regression testing has recently gained a certain attention from the community [58]; (iv) the testability topic, while initially analyzed in [41,68,69], has remained largely dormant in subsequent years. More recently, the Software Engineering Institute has been elaborating new testability profiles.
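As a small illustration of one ingredient of ABT, the sketch below derives abstract, architecture-level test sequences from an invented component-and-connector view so that every connector is exercised at least once; the cited approaches rely on richer behavioral models and coverage criteria, and the resulting abstract tests would still have to be refined into code-level tests as described above.

```python
from collections import deque

# A minimal sketch of one ingredient of architecture-based testing: deriving
# abstract (architecture-level) test sequences from a component-and-connector
# view so that every connector is exercised at least once. The architecture
# below and the "all-connectors" criterion are illustrative choices only.

connectors = [          # directed interactions between components
    ("Client", "Dispatcher"),
    ("Dispatcher", "OrderService"),
    ("Dispatcher", "PaymentService"),
    ("OrderService", "Database"),
    ("PaymentService", "Database"),
]
entry = "Client"

def paths_covering_all_connectors(connectors, entry):
    """Enumerate simple paths from the entry component, then keep a greedy
    subset of paths that together cover every connector."""
    graph = {}
    for src, dst in connectors:
        graph.setdefault(src, []).append(dst)

    all_paths, frontier = [], deque([[entry]])
    while frontier:
        path = frontier.popleft()
        successors = [d for d in graph.get(path[-1], []) if d not in path]
        if not successors:
            all_paths.append(path)        # path dead-ends: record it
        for d in successors:
            frontier.append(path + [d])

    uncovered, plan = set(connectors), []
    for path in sorted(all_paths, key=len, reverse=True):
        edges = set(zip(path, path[1:]))
        if edges & uncovered:
            plan.append(path)
            uncovered -= edges
    return plan, uncovered

plan, uncovered = paths_covering_all_connectors(connectors, entry)
for i, path in enumerate(plan, 1):
    print(f"abstract test {i}: " + " -> ".join(path))
print("uncovered connectors:", uncovered or "none")
```

Each printed sequence is an abstract test specification; the tester then maps it onto concrete component interfaces and stimuli, runs it against the implementation, and checks the observed interactions against the architectural model.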

5. Challenges

In this paper we have briefly classified and summarized the relevant research proposals that have been put forward over the years on topics related to architecture-based analysis and testing. Still, much research is needed to transfer academic research into industrial practice, and to keep pace with the new requirements that modern systems expose.

Concerning the role played by software architecture in analysis and testing in practice, an initiative named Architecture Support for Testing (AST), conducted by the University of L’Aquila (Italy), ISTI-CNR Pisa (Italy), and the Software Engineering Institute (SEI), Pittsburgh (US), was started in 2010. The main objective of AST is to collect industrial needs related to the use of system architecture descriptions for supporting the system testing process, and to provide solutions on the topic. The AST main research question is: “How can we use, or better use, architecture to help test our systems?”. Within AST, an initial workshop was held by the SEI in Pittsburgh (in February 2011) with practitioners from over 15 companies. The output has been a set of 29 important model problems in architecture-based testing, where a model problem is a problem that, if solved, would result in a significant improvement over current testing practice. These model problems were grouped into four main areas: (i) AST and requirements, (ii) AST and product lines, (iii) the scope of AST, and (iv) AST and integration testing. The model problems are available at http://www.sei.cmu.edu/architecture/research/archpractices/More-Architecture-Support-for-Testin. Research solutions to these model problems have been discussed in subsequent meetings (as already listed in Sect. 1).

The already discussed study conducted in [55], although not directly related to the role of software architecture in testing and analysis (but rather to the needs and challenges perceived by practitioners when using existing architectural languages), has nevertheless provided useful information. 63 % of the respondents reported the need to analyze architecture descriptions semantically. The analyses named in the answers are: (i) data flow analysis, (ii) run-time dependencies analysis, (iii) performance, (iv) scalability, (v) security, (vi) requirements analysis (from either informal specifications or UML models), (vii) simulation, (viii) finding design flaws, (ix) change impact analysis, and (x) cost/value analysis. Analysis also scores third among the perceived needs, following design and communication. However, analysis is not only a need, but is also practiced by about 74 % of the interviewed practitioners. Further, the top-most reason for analysis is to check extra-functional properties (about 48 %), followed by behavioral concerns (about 24 %). The respondents that do not carry out any analysis (about 26 %) in most cases use the architectural language-based model for documentation purposes only. Motivations for not carrying out analysis are: no value perceived (about 44 %), architectural languages (ALs) too limited/imprecise (about 44 %), and lack of skills, competencies or available resources (about 11 %).
By shifting our focus from current software systems and practice to the new emerging paradigms in systems development, the most relevant challenges concern: (i) analysis and testing of dynamic and adaptive systems, (ii) analysis and testing of systems under uncertainty, (iii) analysis and testing of systems in the cloud, (iv) analysis and testing of mobile-enabled systems, and (v) analysis and testing of secure, trustworthy systems. In these domains it is interesting to see whether SA can still play a role. Indeed, emerging work in the considered areas shows that the use of architectural concerns can be even more evident and profitable.

Analysis and testing of dynamic and adaptive systems: In the last few years a lot of research has been focusing on dynamically evolving and adaptive systems, and specifically on the analysis and design of their architectures. Indeed, the architectural abstraction is the favorite one to model system changes and adaptations [39,49]. Challenges in this domain pertain to the possibility of using the architecture to constrain the scope of change within well-defined boundaries (e.g., not allowing a run-time change that has the potential to undermine system quality), thus making system analysis easier. The second paper in this special issue, titled “Architecture-Based Resilience Evaluation for Self-Adaptive Systems”, deals with the evaluation of alternative adaptation solutions for a given self-adaptive system (see Sect. 6 for further information).

Analysis and testing of systems under uncertainty: Uncertainty consists in partial ignorance and incomplete or conflicting information [73]. While for a long time uncertainty has been (sort of) ignored, assuming that everything in our systems was fully specifiable and that systems could be engineered to be trouble-free [38], today’s research points out the need to formally model the uncertainty modern systems bring with them due to their ubiquitous nature. SA is an important source of uncertainty, and the scope of uncertainty deals with not knowing the exact impact of architectural alternatives on properties of interest [34]. On the other hand, SA can also be a way to control the uncertainty brought by the components or by the execution environment, thus permitting the guarantee of properties of interest [5]. The questions to be answered in future work are: (i) how to use an architectural specification embracing uncertainty for testing the system to be? (ii) how to verify the architecture’s conformance to properties/requirements in the presence of uncertainty? (iii) how to predict software quality from an architectural specification in the presence of uncertainties? (iv) how to make architectural design decisions taking uncertainties into account?

Analysis and testing of systems in the cloud: Cloud computing enables on-demand access to distributed resources. When architecting cloud-based applications, new requirements such as elasticity, multi-tenancy, and application independence introduce new challenges for architecture description, analysis and testing. Initial work on how to model the architecture of cloud-based applications has been presented in [35], but analysis and testing of such architectures is still to be developed.
The questions to be answered in future work are: (i) how to reflect the peculiarities of cloud-based applications in architectural languages? (ii) what are the new challenges when analyzing and testing the architecture of cloud-based applications? (iii) how to test cloud-based applications in the presence of elasticity?

Analysis and testing of mobile-enabled systems: Modern applications make more and more use of things, that is, static or mobile computational units with variable computational power and resources, always connected, and typically able to sense data from the surrounding environment. New scenarios arise when architecting what we will refer to as mobile-enabled systems: (i) resources can appear and disappear depending on the computational environment, (ii) computation can be off-loaded to other nodes, (iii) context-aware information can be collected and aggregated, (iv) the communication strength varies in unpredictable ways, and so on. All those scenarios impose new challenges when architecting mobile-enabled systems.

Analysis and testing of secure, trustworthy systems: In current distributed pervasive systems, security and trust emerge among the most crucial qualities that a software architecture should guarantee. Security is a multi-faceted concern, involving all layers of a system, and cannot be dealt with as an afterthought. System designers should take security principles into account from the architectural stages onwards: as plainly stated by McGraw, “testing that doesn’t take the software architecture into account probably won’t cover anything interesting about software risk” [56]. Concerning architectural analysis, research is active on approaches for performing risk-based analysis (e.g., [51]) and formal verification [44] of security-related properties. On the other hand, trust is not a static property that is gained once and for all, but should be continuously reinforced by moving the testing phase on-line: in the recently concluded European project “Trusted Architecture for Securely Shared Services” (TAS3, see http://vds1628.sivit.org/tas3/), a trusted architecture preserving personal privacy and confidentiality in dynamic environments has been developed, which embedded mechanisms for on-line functional testing [10]. Challenging open questions remain, such as how to assess security properties in a compositional way, and how to mitigate side-effects in on-line testing for trustworthy architectures.

6. Introduction to the selected papers

This special issue comprises two papers that resulted from the selection of six submissions. Those submissions responded to an open call that was widely advertised in the software engineering community. All the papers underwent a rigorous and selective review process. We want to take here the opportunity to thank all the authors and the reviewers for their hard work. The two selected papers provide a rather comprehensive picture of the many dimensions we mentioned earlier in this editorial, including the more futuristic ones.

The first paper is “An Architectural Approach to the Analysis, Verification and Validation of Software Intensive Embedded Systems”, by DeJiu Chen, Lei Feng, Tahir Naseer Qureshi, Henrik Lönn, and Frank Hagl. It describes a holistic development approach for embedded systems centered around the architecture and its description language. The approach described in the paper makes effective use of the architectural abstractions to analyze functional and non-functional properties of the system, by exploiting the architectural description and ensuring its conformance to the system through systematic and rigorous development steps down to the code.

The second paper is “Architecture-Based Resilience Evaluation for Self-Adaptive Systems”, by Javier Camara, Rogério de Lemos, Marco Vieira, Raquel Almeida, and Rafael Ventura. It addresses the new emerging domain of self-adaptive systems and proposes an architecture-based approach to compare different adaptation mechanisms of a self-adaptive software system. Besides proposing a new technique, the paper shows that changes at the architectural level are indeed relevant for the dependability of the system.

References

  1. IEEE Standard Computer Dictionary (1991) A compilation of IEEE standard computer glossaries. IEEE Std 610
  2. AutoFOCUS 3: AutoFOCUS project. http://autofocus.in.tum.de/index.php/Main_Page, last visited 2013
  3. Abi-Antoun M, Aldrich J, Garlan D, Schmerl B, Nahas N, Tseng T (2005) Improving system dependability by enforcing architectural intent. In: Proceedings of the 2005 workshop on architecting dependable systems, WADS ’05. ACM , New York, pp 1–7
  4. Allen R, Garlan D (1997) A formal basis for architectural connection. ACM Trans Softw Eng Methodol 6(3):213–249
  5. Autili M, Cortellessa V, Di Ruscio D, Inverardi P, Pelliccione P, Tivoli M (2012) Integration architecture synthesis for taming uncertainty in the digital space. In: Calinescu R, Garlan D (eds) Monterey workshop, volume 7539 of Lecture Notes in Computer Science. Springer, Berlin, pp 118–131
  6. Avizienis A, Laprie J-C, Randell B, Landwehr CE (2004) Basic concepts and taxonomy of dependable and secure computing. IEEE Trans Dependable Sec Comput 1(1):11–33
  7. Balsamo S, Inverardi P, Mangano C (1998) An approach to performance evaluation of software architectures. In: Proceedings of the 1st international workshop on software and performance, WOSP ’98. ACM, New York, pp 178–190
  8. Balsamo S, Di Marco A, Inverardi P, Simeoni M (2004) Model-based performance prediction in software development: a survey. IEEE Trans Softw Eng 30(5):295–310
  9. Beder DM, Romanovsky A, Randell B, Rubira CMF (2001) On applying coordinated atomic actions and dependable software architectures in developing complex systems. 4th IEEE international symposium on object-oriented real-time distributed computing (ISORC’01), Magdeburg
  10. Bertolino A, De Angelis G, Kellomaki S, Polini A (2012) Enhancing service federation trustworthiness through online testing. IEEE Computer 45(1):66–72
  11. Bertolino A, Corradini F, Inverardi P, Muccini H (2000) Deriving test plans from architectural descriptions. In: Ghezzi C, Jazayeri M, Wolf AL (eds) ICS. ACM, New York, pp 220–229
  12. Bertolino A, Inverardi P (1996) Architecture-based software testing. In: Joint proceedings of the second international software architecture workshop (ISAW-2) and international workshop on multiple perspectives in software development (Viewpoints ’96) on SIGSOFT ’96 workshops, ISAW ’96. ACM, New York, pp 62–64
  13. Bertolino A, Inverardi P, Muccini H (2001) An explorative journey from architectural tests definition downto code tests execution. In: Müller HA, Harrold MJ, Schäfer W (eds) ICSE. IEEE Computer Society, pp 211–220
  14. Bertolino A, Inverardi P, Muccini H, Rosetti A (1997) An approach to integration testing based on architectural descriptions. ICECCS. IEEE Computer Society, p 77
  15. Bosch J (2000) Design and use of software architectures: adopting and evolving a product-line approach, 1st edn. Addison-Wesley Professional, Boston
  16. Bose P (1999) Automated translation of UML models of architectures for verification and simulation using spin. In: Proceedings of 14th IEEE international conference on automated software engineering, Cocoa Beach, pp 102–109
  17. Brito PH, Lemos R, Rubira CM (2008) Development of fault-tolerant software systems based on architectural abstractions. In: Proceedings of the 2nd European conference on software Architecture, ECSA ’08. Springer, Berlin, pp 131–147
  18. Brito PHS, de Lemos R, Rubira CMF, Martins E (2009) Architecting fault tolerance with exception handling: verification and validation. J Comput Sci Technol 24(2):212–237
  19. Brosch F, Koziolek H, Buhnova B, Reussner R (2012) Architecture-based reliability prediction with the palladio component model. IEEE Trans Softw Eng 38(6):1319–1339
  20. Burmester S, Giese H, Hirsch M, Schilling D, Tichy M (2005) The Fujaba real-time tool suite: model-driven development of safety-critical, real-time systems. In: International conference on software engineering, ICSE, pp 670–671
  21. Cheung L, Roshandel R, Medvidovic N, Golubchik L (2008) Early prediction of software component reliability. In: Proceedings of the 30th international conference on software engineering, ICSE ’08. ACM, New York, pp 111–120
  22. Ciancarini P, Franzé F, Mascolo C (2000) Using a coordination language to specify and analyze systems containing mobile components. ACM Trans Softw Eng Methodol 9(2):167–198
  23. Clarke EM, Grumberg O, Peled DA (1999) Model checking. MIT press, Cambridge
  24. Clements P, Bachmann F, Bass L, Garlan D, Ivers J, Little R, Merson P, Nord R, Stafford J (2010) Documenting software architectures: views and beyond, 2nd edn. Addison-Wesley Professional, Boston
  25. Corradini F, Inverardi P, Wolf AL (2006) On relating functional specifications to architectural specifications: a case study. Sci Comput Program 59(3):171–208
  26. da Brito PHS, Rocha CR, Filho FC, Martins E, Rubira CMF (2005) A method for modeling and testing exceptions in component-based software development. In: Proceedings of the second Latin-American conference on dependable computing, LADC’05. Springer, Berlin, pp 61–79
  27. Das O, Woodside CM (2004) Analyzing the effectiveness of fault-management architectures in layered distributed systems. Perform Eval 56(1–4):93–120
  28. de Guerra PAC, Rubira CMF, Romanovsky A, de Lemos R (2004) A dependable architecture for COTS-based software systems using protective wrappers. In: de Lemos R, Gacek C, Romanovsky A (eds) Architecting dependable systems II, volume 3069 of Lecture Notes in Computer Science. Springer, Berlin, pp 144–166
  29. de Lemos R (2004) Analysing failure behaviours in component interaction. J Syst Softw 71(1–2):97–115
  30. Dialani V, Miles S, Moreau L, De Roure D, Luck M (2002) Transparent fault tolerance for web services based architectures. In: Proceedings of the 8th international Euro-Par conference on parallel processing, Euro-Par ’02. Springer, London, pp 889–898
  31. Dobrica L, Niemela E (2002) A survey on software architecture analysis methods. IEEE Trans Softw Eng 28(7):638–653
  32. Domokos P, Majzik I (2005) Design and analysis of fault tolerant architectures by model weaving. In: Proceedings of the ninth IEEE international symposium on high-assurance systems engineering, HASE ’05. IEEE Computer Society, Washington, DC, pp 15–24
  33. Dragoni N, Gaspari M (2005) An object based algebra for specifying a fault tolerant software architecture. J Logic Algebr Program 63(2):271–297 (Special Issue on Process Algebra and System Architecture)
  34. Esfahani N, Malek S, Razavi K (2013) GuideArch: guiding the exploration of architectural solution space under uncertainty. In: Proceedings of the 2013 international conference on software engineering, ICSE ’13. IEEE Press, Piscataway, pp 43–52
  35. Everton Cavalcante ALM, Batista T (2013) Cloud-ADL: an architecture description language for modeling cloud computing applications. In: Proceedings of the 7th European conference on software architecture, ECSA ’13. Springer, Berlin, pp 320–323
  36. Fairbanks GH (2010) Just enough software architecture: a risk-driven approach, 1st edn. Marshall and Brainerd
  37. Fujaba Project (2005) http://www.cs.uni-paderborn.de/cs/fujaba/publications/index.html. University of Paderborn, Software Engineering Group
  38. Garlan D (2010) Software engineering in an uncertain world. In: Proceedings of the FSE/SDP workshop on future of software engineering research, FoSER ’10. ACM, New York, pp 125–128
  39. Garlan D, Cheng S-W, Huang A-C, Schmerl BR, Steenkiste P (2004) Rainbow: architecture-based self-adaptation with reusable infrastructure. IEEE Comput 37(10):46–54
  40. Goseva-Popstojanova K, Trivedi KS (2001) Architecture-based approach to reliability assessment of software systems. Perform Eval 45(2–3):179–204
  41. Harrold MJ (1998) Architecture-based regression testing of evolving systems. In: Proceedings of the international workshop on the role of software architecture in testing and analysis, ROSATEA 98, pp 73–77
  42. Harrold MJ (2000) Testing: a roadmap. In: Finkelstein A (ed) ACM ICSE 2000, the future of software engineering, pp 61–72
  43. He X (2005) A framework for ensuring system dependability from design to implementation. In: Proceedings of the 3rd international workshop on modelling, simulation, verification and validation of enterprise information systems, MSVVEIS 2005. In conjunction with ICEIS 2005
  44. Heyman T, Scandariato R, Joosen W (2012) Reusable formal models for secure software architectures. In: 2012 Joint working IEEE/IFIP conference on software architecture (WICSA) and European conference on software architecture (ECSA), pp 41–50
  45. Immonen A, Niemela E (2008) Survey of reliability and availability prediction methods from the viewpoint of software architecture. Softw Syst Model 7(1):49–65
  46. ISO/IEC/IEEE (2011) ISO/IEC/IEEE 42010:2011 Systems and software engineering: architecture description
  47. Jin Z, Offutt J (2001) Deriving tests from software architectures. In: Proceedings of the 12th international symposium on software reliability engineering, ISSRE 2001, pp 308–313
  48. Koziolek H (2010) Performance evaluation of component-based software systems: a survey. Perform Eval 67(8):634–658
  49. Kramer J, Magee J (2007) Self-managed systems: an architectural challenge. In: Briand LC, Wolf AL (eds) FOSE, pp 259–268
  50. Kruchten P (2004) An ontology of architectural design decisions in software intensive systems. In: 2nd Groningen workshop on software variability, pp 54–61
  51. Kuz I, Zhu L, Bass L, Staples M, Xu X (2012) An architectural approach for cost effective trustworthy systems. In: 2012 Joint working IEEE/IFIP conference on software architecture (WICSA) and European conference on software architecture (ECSA), pp 325–328
  52. Li J, Chen X, Huang G, Mei H, Chauvel F (2009) Selecting fault tolerant styles for third-party components with model checking support. In: Proceedings of the 12th international symposium on component-based software engineering, CBSE ’09. Springer, Berlin, pp 69–86
  53. Magee J, Kramer J, Giannakopoulou D (1999) Behavior analysis of software architectures. In: Proceedings of the 1st working IFIP conference on software architecture, WICSA, San Antonio
  54. Mahdian F, Rafe V, Rafeh R, Rahmani AT (2009) Modeling fault tolerant services in service-oriented architecture. In: Proceedings of the 2009 third IEEE international symposium on theoretical aspects of software engineering, TASE ’09. IEEE Computer Society, Washington, DC, pp 319–320
  55. Malavolta I, Lago P, Muccini H, Pelliccione P, Tang A (2013) What industry needs from architectural languages: a survey. IEEE Trans Softw Eng 39(6):869–891
  56. McGraw G (2006) Software security: building security in. Pearson Education, Inc
  57. Muccini H, Bertolino A, Inverardi P (2003) Using software architecture for code testing. IEEE Trans Softw Eng 30(3):160–171
  58. Muccini H, Dias MS, Richardson DJ (2006) Software architecture-based regression testing. J Syst Softw 79(10):1379–1396
  59. Muccini H, Romanovsky A (2007) Architecting fault tolerant systems. Technical Report CS-TR-1051, Newcastle University
  60. Olumofin FG, Misic VB (2005) Extending the ATAM architecture evaluation to product line architectures. In: Fifth working IEEE/IFIP conference on software architecture (WICSA 2005). IEEE Computer Society, Pittsburgh, pp 45–56
  61. Oquendo F, Warboys B, Morrison R, Dindeleux R, Gallo F, Garavel H, Occhipinti C (2004) ArchWare: architecting evolvable software. In: Oquendo F, Warboys B, Morrison R (eds) Proceedings of the 1st European workshop on software architecture, EWSA 2004 (St Andrews, Scotland, UK), volume 3047 of Lecture Notes in Computer Science. Springer, Berlin, pp 257–271
  62. Pelliccione P, Inverardi P, Muccini H (2009) Charmy: a framework for designing and verifying architectural specifications. IEEE Trans Softw Eng 35:325–346
  63. Petriu D, Shousha C, Jalnapurkar A (2000) Architecture-based performance analysis applied to a telecommunication system. IEEE Trans Softw Eng 26:1049–1065
  64. Petriu DC, Wang X (1998) Deriving software performance models from architectural patterns by graph transformations. In: Ehrig H, Engels G, Kreowski H, Rozenberg G (eds) TAGT, volume 1764 of Lecture Notes in Computer Science. Springer, Berlin, pp 475–488
  65. Petriu DC, Wang X (1999) From UML descriptions of high-level software architectures to LQN performance models. In: Nagl M, Schürr A, Münch M (eds) AGTIVE, volume 1779 of Lecture Notes in Computer Science. Springer, Berlin, pp 47–62
  66. Pooley RJ, Abdullatif AAL (2010) CPASA: continuous performance assessment of software architecture. In: Proceedings of the 2010 17th IEEE international conference and workshops on the engineering of computer-based systems, ECBS ’10. IEEE Computer Society, Washington, DC, pp 79–87
  67. Poort ER, van Vliet H (2011) Architecting as a risk- and cost management discipline. In: Proceedings of the 2011 ninth working IEEE/IFIP conference on software architecture, WICSA ’11. IEEE Computer Society, Washington, DC, pp 2–11
  68. Richardson DJ, Stafford J, Wolf AL (1998) A formal approach to architecture-based software testing. Technical report, University of California, Irvine
  69. Richardson DJ, Wolf AL (1996) Software testing at the architectural level. In: ISAW-2, in joint proceedings of the ACM SIGSOFT ’96 workshops, pp 68–71
  70. Rosenblum D (1998) Challenges in exploiting architectural models for software testing. In: Proceedings of the international workshop on the role of software architecture in testing and analysis, ROSATEA
  71. Roshandel R, Medvidovic N, Golubchik L (2007) A Bayesian model for predicting reliability of software systems at the architectural level. In: Overhage S, Szyperski CA, Reussner R, Stafford JA (eds) Software architectures, components, and applications, volume 4880 of Lecture Notes in Computer Science. Springer, Berlin, pp 108–126
  72. Tsai JJP, Prasad Sistla A, Sahay A, Paul R (1997) Incremental verification of architecture specification language for real-time systems. In: Proceedings of the 3rd workshop on object-oriented real-time dependable systems, WORDS ’97. IEEE Computer Society, Washington, DC, p 215
  73. Walley P (1996) Measures of uncertainty in expert systems. Artif Intell 83(1):1–58
  74. Williams LG, Smith CU (1998) Performance evaluation of software architectures. In: Proceedings of the 1st international workshop on software and performance, WOSP ’98. ACM, New York, pp 164–177
  75. Williams LG, Smith CU (2002) PASA: a method for the performance assessment of software architectures. In: Proceedings of the 3rd international workshop on software and performance, WOSP ’02. ACM, New York, pp 179–189
  76. Woods E (2012) Industrial architectural assessment using TARA. J Syst Softw 85(9):2034–2047 (Selected papers from the 2011 Joint Working IEEE/IFIP Conference on Software Architecture (WICSA 2011))
  77. Woodside M, Franks G, Petriu DC (2007) The future of software performance engineering. In: 2007 Future of software engineering, FOSE ’07. IEEE Computer Society, Washington, DC, pp 171–187
  78. Yuan L, Dong JS, Sun J, Basit HA (2006) Generic fault tolerant software architecture reasoning and customization. IEEE Trans Reliab 55(3):421–435
  79. Zhang P, Muccini H, Li B (2010) A classification and comparison of model checking software architecture techniques. J Syst Softw 83(5):723–744