Prof. Dr. Ralf Reussner
His work focuses on establishing software engineering as an engineering discipline. This includes the systematic design of software architectures, their quality evaluation, and their systematic implementation in code with classical and generative techniques, as well as the automated analysis of existing code.
The complete publication list is available at: http://sdq.ipd.kit.edu/people/ralf_reussner/publications/
- Handbuch der Software-Architektur
Reussner, Ralf H. and Hasselbring, Wilhelm, dPunkt.verlag Heidelberg, 2008
- Dependability Metrics
Springer-Verlag Berlin Heidelberg, 2008
- The Common Component Modeling Example: Comparing Software Component Models
Springer-Verlag Berlin Heidelberg, 2008
- Dependability Metrics
Ralf Reussner and Viktoria Firus, Springer-Verlag Berlin Heidelberg, 2008
- The Common Component Modeling Example
Sebastian Herold and Holger Klus and Yannick Welsch and Constanze Deiters and Andreas Rausch and Ralf Reussner and Klaus Krogmann and Heiko Koziolek and Raffaela Mirandola and Benjamin Hummel and Michael Meisinger and Christi, Springer-Verlag Berlin Heidelberg, 2008
The use case chosen as the Common Component Modeling Example (CoCoME), to which the various methods presented in this book are to be applied, was designed according to the example described by Larman in . The description of this example and its use cases in the current chapter should be read under the assumption that this information was delivered by a business company, as it could be in reality. The specified requirements are therefore potentially incomplete or imprecise.
- Handbuch der Software-Architektur
Koziolek, Heiko and Firus, Viktoria and Becker, Steffen and Reussner, Ralf H., dPunkt.verlag Heidelberg, 2006
- Parametrisierte Verträge zur Protokolladaption bei Software-Komponenten
Reussner, Ralf H., Logos Verlag, Berlin, 2001
Journal and magazine articles (20)
- Stateful component-based performance models
Happe, Lucia and Buhnova, Barbora and Reussner, Ralf, Springer-Verlag, 2013
- Deriving performance-relevant infrastructure properties through model-based experiments with Ginpex
Michael Hauck and Michael Kuperberg and Nikolaus Huber and Ralf Reussner, Springer-Verlag, 2013
To predict the performance of an application, it is crucial to consider the performance of the underlying infrastructure. Thus, to yield accurate prediction results, performance-relevant properties and behaviour of the infrastructure have to be integrated into performance models. However, capturing these properties is a cumbersome and error-prone task, as it requires carefully engineered measurements and experiments. Existing approaches for creating infrastructure performance models require manual coding of these experiments, or ignore the detailed properties in the models. The contribution of this paper is the Ginpex approach, which introduces goal-oriented and model-based specification and generation of executable performance experiments for automatically detecting and quantifying performance-relevant infrastructure properties. Ginpex provides a metamodel for experiment specification and comes with predefined experiment templates that provide automated experiment execution on the target platform and also automate the evaluation of the experiment results. We evaluate Ginpex using three case studies, where experiments are executed to quantify various infrastructure properties.
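The experiment workflow described above can be sketched in a few lines (a hypothetical, simplified illustration; Ginpex's actual metamodel and experiment templates are not shown here): an experiment specification is executed repeatedly on the target platform, and the raw measurements are reduced automatically to a single property value.

```python
from dataclasses import dataclass
from statistics import median
from typing import Callable

@dataclass
class ExperimentSpec:
    """Hypothetical stand-in for an instantiated experiment template."""
    property_name: str
    repetitions: int
    measure: Callable[[int], float]  # one measurement per run

def run_experiment(spec: ExperimentSpec) -> dict:
    # execute the experiment and reduce raw samples to a robust property value
    samples = [spec.measure(run) for run in range(spec.repetitions)]
    return {"property": spec.property_name, "value": median(samples)}

# deterministic stub standing in for a real timing measurement on the target platform
def fake_read_latency_ms(run: int) -> float:
    return 2.0 + (run % 3) * 0.1

result = run_experiment(ExperimentSpec("disk.read_latency_ms", 9, fake_read_latency_ms))
print(result)
```

In the real approach the measurement function runs generated load on the target platform; the median here merely illustrates the automated reduction of raw samples to one quantified property.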
- Architecture-based Reliability Prediction with the Palladio Component Model
Franz Brosch and Heiko Koziolek and Barbora Buhnova and Ralf Reussner, IEEE Computer Society, 2011
With the increasing importance of reliability in business and industrial software systems, new techniques of architecture-based reliability engineering are becoming an integral part of the development process. These techniques can assist system architects in evaluating the reliability impact of their design decisions. Architecture-based reliability engineering is only effective if the involved reliability models reflect the interaction and usage of software components and their deployment to potentially unreliable hardware. However, existing approaches either neglect individual impact factors on reliability or hard-code them into formal models, which limits their applicability in component-based development processes. This paper introduces a reliability modelling and prediction technique that considers the relevant architectural factors of software systems by explicitly modelling the system usage profile and execution environment and automatically deriving component usage profiles. The technique offers a UML-like modelling notation, whose models are automatically transformed into a formal analytical model. Our work builds upon the Palladio Component Model, employing novel techniques of information propagation and reliability assessment. We validate our technique with sensitivity analyses and simulation in two case studies. The case studies demonstrate effective support of usage profile analysis and architectural configuration ranking, together with the employment of reliability-improving architecture tactics.
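The underlying arithmetic can be illustrated with a hedged toy calculation (invented numbers; the paper's actual technique uses a formal analytical model with propagated usage profiles, not this product form): a scenario's success probability is approximated as the product of each component's reliability raised to its expected number of invocations, times the availability of each hardware node used.

```python
def predict_reliability(expected_calls, component_rel, deployment, host_avail):
    """Product-form approximation: independent failures, no fault tolerance."""
    r = 1.0
    for comp, calls in expected_calls.items():
        r *= component_rel[comp] ** calls          # software failures per invocation
    for host in {deployment[c] for c in expected_calls}:
        r *= host_avail[host]                      # hardware must be up for the scenario
    return r

r = predict_reliability(
    expected_calls={"Parser": 2, "Store": 1},      # derived from the usage profile
    component_rel={"Parser": 0.999, "Store": 0.995},
    deployment={"Parser": "node1", "Store": "node1"},
    host_avail={"node1": 0.99},
)
print(round(r, 6))
```

Even this toy version shows why the usage profile matters: doubling the expected calls to a component squares its contribution to the failure probability.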
- Facilitating Performance Predictions Using Software Components
Happe, Jens and Koziolek, Heiko and Reussner, Ralf, 2011
- From monolithic to component-based performance evaluation of software architectures
Anne Martens and Heiko Koziolek and Lutz Prechelt and Ralf Reussner, Springer Netherlands, 2011
Background: Model-based performance evaluation methods for software architectures can help architects to assess design alternatives and save costs for late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the needed effort for modelling are heavily influenced by human factors, which are so far hardly understood empirically. Objective: Do component-based methods allow making performance predictions with comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. Methods: We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected based on the resulting artefacts, questionnaires and screen recording. They were analysed using hypothesis testing, linear models, and analysis of variance. Results: For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE and that participants can create accurate models with both techniques.
Finally, we found that reusing PCM models can save time, because effort to reuse can be explained by a model that is independent of the inner complexity of a component. Limitations: The tasks performed in our experiments reflect only a subset of the actual activities when applying model-based performance evaluation methods in a software development process. Conclusions: Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort for component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.
- Architectural Concepts in Programming Languages
Manfred Broy and Ralf Reussner, IEEE Computer Society, 2010
- Parametric Performance Completions for Model-Driven Performance Prediction
Jens Happe and Steffen Becker and Christoph Rathfelder and Holger Friedrich and Ralf H. Reussner, Elsevier, 2010
Performance prediction methods can help software architects to identify potential performance problems, such as bottlenecks, in their software systems during the design phase. In such early stages of the software life-cycle, only a little information is available about the system's implementation and execution environment. However, these details are crucial for accurate performance predictions. Performance completions close the gap between available high-level models and required low-level details. Using model-driven technologies, transformations can include details of the implementation and execution environment into abstract performance models. However, existing approaches do not consider the relation of actual implementations and performance models used for prediction. Furthermore, they neglect the broad variety of possible implementations and middleware platforms, possible configurations, and possible usage scenarios. In this paper, we (i) establish a formal relation between generated performance models and generated code, (ii) introduce a design and application process for parametric performance completions, and (iii) develop a parametric performance completion for Message-oriented Middleware according to our method. Parametric performance completions are independent of a specific platform, reflect performance-relevant software configurations, and capture the influence of different usage scenarios. To evaluate the prediction accuracy of the completion for Message-oriented Middleware, we conducted a real-world case study with the SPECjms2007 Benchmark [http://www.spec.org/jms2007/]. The observed deviation of measurements and predictions was below 10% to 15%.
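The completion idea can be sketched as a tiny model-to-model transformation (all names and numbers here are invented for illustration, not taken from the paper): an abstract action in the performance model is replaced by platform-specific sub-actions whose resource demands were calibrated once per middleware configuration.

```python
# measured sub-action demands (ms) for one hypothetical MOM configuration;
# the size-dependent term stands in for a regression over calibration measurements
def mom_send_completion(cfg):
    return [("marshal", 0.2),
            ("transmit", 0.05 * cfg["msg_kb"]),
            ("await_ack", 0.1)]

COMPLETIONS = {"mom.send": mom_send_completion}

def apply_completions(model, cfg):
    """Expand abstract actions into detailed, measurement-backed ones."""
    detailed = []
    for name, demand in model:
        if name in COMPLETIONS:
            detailed.extend(COMPLETIONS[name](cfg))
        else:
            detailed.append((name, demand))
    return detailed

abstract_model = [("compute", 1.0), ("mom.send", None)]
detailed = apply_completions(abstract_model, {"msg_kb": 10})
print(detailed)
```

The parametric aspect shows up in `cfg`: the same completion is reused across usage scenarios by plugging in different message sizes and configuration values.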
- Using Genetic Search for Reverse Engineering of Parametric Behaviour Models for Performance Prediction
Klaus Krogmann and Michael Kuperberg and Ralf Reussner, IEEE, 2010
In component-based software engineering, existing components are often re-used in new applications. Correspondingly, the response time of an entire component-based application can be predicted from the execution durations of individual component services. These execution durations depend on the runtime behaviour of a component, which itself is influenced by three factors: the execution platform, the usage profile, and the component wiring. To cover all relevant combinations of these influencing factors, conventional prediction of response times requires repeated deployment and measurements of component services for all such combinations, incurring a substantial effort. This paper presents a novel comprehensive approach for reverse engineering and performance prediction of components. In it, genetic programming is utilised for reconstructing a behaviour model from monitoring data, runtime bytecode counts and static bytecode analysis. The resulting behaviour model is parametrised over all three performance-influencing factors, which are specified separately. This results in significantly fewer measurements: the behaviour model is reconstructed only once per component service, and one application-independent bytecode benchmark run is sufficient to characterise an execution platform. To predict the execution durations for a concrete platform, our approach combines the behaviour model with platform-specific benchmarking results. We validate our approach by predicting the performance of a file sharing application.
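The genetic-search core can be illustrated with a deliberately tiny, seeded toy (the paper's actual approach evolves expression trees over bytecode counts and monitoring data, not integer pairs): candidate parametric models are scored against observations, and the fittest survive and mutate.

```python
import random

random.seed(0)

# fabricated monitoring data: (input size, observed loop iterations); true model is 3*x + 2
data = [(1, 5), (2, 8), (3, 11), (4, 14)]

def fitness(ind):
    a, b = ind
    return sum((a * x + b - y) ** 2 for x, y in data)  # squared prediction error

population = [(random.randint(0, 10), random.randint(0, 10)) for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness)
    survivors = population[:10]                        # elitist selection
    children = [(a + random.choice((-1, 0, 1)),        # +/-1 mutation per parameter
                 b + random.choice((-1, 0, 1))) for a, b in survivors]
    population = survivors + children

best = min(population, key=fitness)
print(best, fitness(best))
```

The reconstructed pair plays the role of a parametrised behaviour model: once learned, it predicts loop counts for unseen input sizes without further measurements.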
- The Palladio component model for model-driven performance prediction
Steffen Becker and Heiko Koziolek and Ralf Reussner, Elsevier Science Inc., 2009
One aim of component-based software engineering (CBSE) is to enable the prediction of extra-functional properties, such as performance and reliability, utilising a well-defined composition theory. Nowadays, such theories and their accompanying prediction methods are still in a maturation stage. Several factors influencing extra-functional properties need additional research to be understood. A special problem in CBSE stems from its specific development process: Software components should be specified and implemented independently from their later context to enable reuse. Thus, extra-functional properties of components need to be specified in a parametric way to take different influencing factors like the hardware platform or the usage profile into account. Our approach uses the Palladio component model (PCM) to specify component-based software architectures in a parametric way. This model offers direct support of the CBSE development process by dividing the model creation among the developer roles. This paper presents our model and a simulation tool based on it, which is capable of making performance predictions. Within a case study, we show that the resulting prediction accuracy is sufficient to support the evaluation of architectural design decisions.
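The parametric idea can be illustrated with a contention-free toy calculation (invented numbers; the real PCM uses stochastic expressions and a queueing simulation rather than this sum): component demands are specified as functions of usage parameters by the component developer and are only combined with platform speeds at prediction time.

```python
# behaviour of one service: (resource, demand as a function of the usage parameter n)
BEHAVIOUR = [
    ("cpu",  lambda n: 150.0 * n),   # per-item processing
    ("disk", lambda n: 20.0 * n),    # per-item I/O
    ("cpu",  lambda n: 50.0),        # fixed overhead
]

# platform model: work units each resource processes per second
PLATFORM = {"cpu": 1000.0, "disk": 100.0}

def predicted_response_time(n):
    """Contention-free estimate: sum of demand/speed over all actions."""
    return sum(demand(n) / PLATFORM[resource] for resource, demand in BEHAVIOUR)

print(predicted_response_time(10))
```

The division of labour mirrors the CBSE roles: `BEHAVIOUR` is authored independently of `PLATFORM`, so the same component specification can be re-evaluated against a different deployment by swapping only the platform model.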
- Design for Future - Legacy-Probleme von morgen vermeidbar?
Engels, Gregor and Goedicke, Michael and Goltz, Ursula and Rausch, Andreas and Reussner, Ralf, 2009
- Reverse Engineering von Software-Komponentenverhalten mittels Genetischer Programmierung
Klaus Krogmann and Ralf Reussner, 2009
The use of components is an established principle in software development. Software components are usually treated as black boxes whose internals are hidden from a component user. Architecture analysis methods for predicting non-functional properties make it possible, at the architecture level, to answer sizing questions for hardware/software environments and to carry out scalability analyses and what-if scenarios for extending legacy systems. To do so, however, they require information about component internals (e.g., the number of executed loops or calls to external services). To obtain such information, existing software components must be analysed. The required information about the inside of the components must be reconstructed in such a way that it can be used by subsequent analysis methods for non-functional properties. A manual reconstruction of such models often fails due to the size of the systems and is highly error-prone, since consistent abstractions must be found over potentially thousands of lines of code. Existing approaches do not deliver the control-flow and data-flow abstractions required for analyses and simulations. The contribution of this paper is a reverse-engineering approach for component behaviour. The resulting models (Palladio Component Model) are suitable for predicting performance properties (response time, throughput) and thus for the questions raised above. The models reconstructed from source code comprise parameterised control and data flow for software components and represent an abstraction of real relationships in the source code. The reverse-engineering approach combines static and dynamic analysis techniques via genetic programming (a form of machine learning).
- The Impact of Software Component Adaptation on Quality of Service Properties
Steffen Becker and Ralf Reussner, RSTI, 2006
Component adapters are used to bridge interoperability problems between the required interface of one component and the provided interface of another component. As bridging functional mismatches is frequently required, the use of adapters is unavoidable. In these cases an impact on the Quality of Service resulting from the adaptation is often undesired. Nevertheless, some adapters are deployed to change the Quality of Service on purpose when the interoperability problem results from mismatching Quality of Service. This emphasises the need for adequate prediction models for the impact of component adaptation on Quality of Service characteristics. We present research on the impact of adaptation on the Quality of Service and focus on unresolved issues that currently hinder effective predictions.
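A hedged sketch of how an adapter's QoS impact might be composed (invented figures, contention ignored; the paper studies the open prediction issues rather than prescribing this formula): the adapted service's response time and reliability follow from the adapter's own overhead and the number of inner-service calls it issues per request.

```python
def adapted_qos(inner_rt_ms, inner_rel, overhead_ms, calls_per_request):
    """Compose adapter and inner-service QoS under an independence assumption."""
    response_time = overhead_ms + calls_per_request * inner_rt_ms
    reliability = inner_rel ** calls_per_request   # every inner call must succeed
    return response_time, reliability

# e.g. a protocol adapter that issues two inner calls per adapted request
rt, rel = adapted_qos(inner_rt_ms=10.0, inner_rel=0.999,
                      overhead_ms=2.0, calls_per_request=2)
print(rt, rel)
```

Even this toy composition makes the paper's point visible: an adapter that multiplies inner calls degrades both response time and reliability, so its effect cannot be ignored in predictions.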
- Toward Trustworthy Software Systems
Hasselbring, Wilhelm and Reussner, Ralf H., 2006
- Empirical Research on Similarity Metrics for Software Component Interfaces
Kratz, Benedikt and Reussner, Ralf and van den Heuvel, Willem-Jan, IOS Press, 2004
The notions of design and process cut across many disciplines. Applications of abstract notions of design and process to engineering problem solving would certainly redefine and expand the notion of engineering itself in the 21st century. This Journal of SDPS strives to be the repository of human knowledge covering interdisciplinary notions of design and process in a rigorous fashion. We expect and encourage papers crossing the boundaries back and forth in mathematical landscape as well as among mathematics, physics, economics, management science, and engineering. Journal of Integrated Design and Process Science is an archival, peer-reviewed technical journal publishing the following types of papers: a) Research papers, b) Reports on case studies, c) Reports on major design and process projects, d) Design and process standards and proposals, and e) Insightful tutorials on design and process. It has been observed that most of the work related to design and process is interdisciplinary and until recently has been scattered in journals of many diverse disciplines. The objective of this journal is to publish state-of-the-art papers in this expanding field, providing an international and interdisciplinary forum for the best work in design and process related areas. The audience of this journal will have a single source to stay current on new and quality work, in the form of academic research papers and syntheses of best practices. Consistent with SDPS philosophy, the Journal strives to maintain an international and interdisciplinary balance by relying on experts from various corners of the world. Authors whose work is in the domain of interdisciplinary no-man's land with a flavor of design and process are encouraged to submit their papers to this Journal. The readership of this journal includes participants from academia and industry.
- Automatic Component Protocol Adaptation with the CoCoNut Tool Suite
Reussner, Ralf H., 2003
The purpose of this tutorial is to provide concepts and historical background of the "network integration testing" (NIT) methodology. NIT is a "grey box" testing technique that is aimed at verifying the correct behaviour of interconnected networks (operated by different operators) in provisioning services to end users, or the behaviour of a complex network operated by a unique operator. The main technical concepts behind this technique are presented along with the history of some international projects that have contributed to its early definition and application. The European Institute for Research and Strategic Studies in Telecommunication (EURESCOM) has been very active, with many projects, in defining the basic NIT methodology and providing actual NIT specifications (for narrow-band and broad-band services, covering both voice and data). EURESCOM has also been acting as a focal point in the area, e.g., encouraging the industry in developing commercial tools supporting NIT. In particular, the EURESCOM P412 project (1994–1996) first explicitly defined the NIT methodology (the methodological aspects include test notation, test implementation, test processes, distributed testing and related co-ordination aspects). P412 applied the methodology to ISDN whilst another project, P410, applied NIT to data services. The P613 project (1997–1999) extended the basic NIT methodology to the broad band and GSM. In more detail, the areas currently covered by NIT test specifications developed by EURESCOM projects include N-ISDN, N-ISUP, POTS, B-ISDN, B-ISUP, IP over ATM, ATM/FR, GSM, focusing also on their "inter-working" cases (e.g., ISDN/ISDN, ISDN/GSM, etc.). ETSI, the European Telecommunication Standards Institute, also contributed to NIT development (e.g., the definition of the TSP1+ protocol, used for the functional co-ordination and timing synchronisation of all tools involved in a distributed testing session).
The paper also discusses NIT in relation to the recent major changes (processes) within the telecommunication (TLC) community. Beyond the new needs coming from the pure technical aspects (integration of voice and data, fixed mobile convergence, etc.) the full deregulation of the TLC sector has already generated new processes and new testing needs (e.g., Interconnection Testing) that had a significant influence on the methodology. NIT is likely to continue to develop further in the future according to the needs of telecom operators, authorities, users' associations and suppliers.
- Using SKaMPI for Developing High-Performance MPI Programs with Performance Portability
Reussner, Ralf H., 2003
- Reliability Prediction for Component-Based Software Architectures
Reussner, Ralf H. and Schmidt, Heinz W. and Poernomo, Iman, 2003
Due to the increasing size and complexity of software systems, software architectures have become a crucial part in development projects. A lot of effort has been put into defining formal ways for describing architecture specifications using Architecture Description Languages (ADLs). Since no common ADL today offers tools for evaluating the performance, an attempt to develop such a tool based on an event-based simulation engine has been made. Common ADLs were investigated and the work was based on the fundamentals within the field of software architectures. The tool was evaluated both in terms of correctness in predictions as well as usability to show that it actually is possible to evaluate the performance using high-level architectures as models.
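The core of an event-based simulation engine like the one the abstract describes can be sketched in a few lines (a minimal toy, not the tool's actual implementation; all names are invented): a time-ordered event queue is drained, and each handler may schedule follow-up events.

```python
import heapq

def simulate(initial_events, handlers):
    """Drain a time-ordered event heap; handlers return follow-up events."""
    heap = list(initial_events)
    heapq.heapify(heap)
    trace = []
    while heap:
        time, kind, payload = heapq.heappop(heap)
        trace.append((time, kind))
        # a handler maps (time, payload) to a list of new (time, kind, payload) events
        for event in handlers.get(kind, lambda t, p: [])(time, payload):
            heapq.heappush(heap, event)
    return trace

SERVICE_TIME = 2.0  # invented: fixed service demand per request
handlers = {"arrival": lambda t, p: [(t + SERVICE_TIME, "completion", p)]}
trace = simulate([(0.0, "arrival", "req1"), (1.0, "arrival", "req2")], handlers)
print(trace)
```

Performance metrics such as response times then fall out of the trace by pairing each request's arrival and completion timestamps.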
- SKaMPI: a comprehensive benchmark for public benchmarking of MPI
Reussner, Ralf H. and Sanders, Peter and Träff, Jesper Larsson, 2001
- Trust-By-Contract: Modelling, Analysing and Predicting Behaviour in Software Architectures
Schmidt, Heinz W. and Poernomo, Iman and Reussner, Ralf H., 2001
The increasing pressure for enterprises to join into agile business networks is changing the requirements on the enterprise computing systems. The supporting infrastructure is increasingly required to provide common facilities and societal infrastructure services to support the lifecycle of loosely-coupled, eContract-governed business networks. The required facilities include selection of those autonomously administered business services that the enterprises are prepared to provide and use, contract negotiations, and furthermore, monitoring of the contracted behaviour with potential for breach management. The essential change is in the requirement of a clear mapping between business-level concepts and the automation support for them. Our work has focused on developing B2B middleware to address the above challenges; however, the architecture is not feasible without management facilities for trust-aware decisions for entering business networks and interacting within them. This paper discusses how trust-based decisions are supported and positioned in the B2B middleware.
- Une étude comparative de méthodes industrielles d'ingénierie des exigences
von Knethen, Antje and Kamsties, Erik and Reussner, Ralf H. and Bunse, Christian and Shen, Bin, 1999
Phone: +49 721 9654-900
Fax: +49 721 9654-909