Quality in Model-Driven SOBA development

In order to engineer software artefacts with high quality and low costs, adequate testing tools and methods are necessary. Model-Driven Engineering (MDE) methodologies enable the testing of application models before they are transformed or executed. Service-Oriented Business Applications (SOBAs), with their specific architecture requirements, also need a tailored testing approach due to the nature of the underlying programming model. Testing should therefore be an integral part of an MDE methodology for SOBAs, adapted to the chosen modelling languages, model transformations and the architecture of the resulting artefact.

In this article I’d like to explore the testing requirements for an MDE methodology for SOBAs. I first look at the meaning of quality in MDE. After that I describe the validation of models at design time, before looking at model checking techniques and the runtime behaviour of the resulting software artefact. I conclude with an overview of the requirements for the testing part of an MDE methodology.

Quality in MDE

Quality is often defined as "fitness for purpose" (Mohagheghi & Dehlen, 2007). A more formal definition is given by the IEEE: software quality as an attribute is (1) the degree to which a system, component, or process meets specified requirements, and (2) the degree to which a system, component, or process meets customer or user needs or expectations (IEEE, 1990). The building or engineering of quality into software, based on model-driven approaches is referred to as Model-Driven Quality Engineering (MDQE) (Mohagheghi & Dehlen, 2007). MDE lends itself to quality engineering because:

  • Models are primary software artefacts, so the quality of a model dictates the quality of the artefacts generated from it (Mohagheghi & Dehlen, 2007).
  • Tools can analyze and monitor models for various characteristics, e.g. consistency checking, model checking/simulation, etc. (Mohagheghi & Dehlen, 2007).
  • Models can be used as input for model-based testing. Model-based testing involves test generation, execution and evaluation using models (Binder, 2000).

MDE, however, also introduces some specific quality needs due to its multi-view (multiple modelling dimensions) and multi-notational (multiple models describing a software artefact, defined in different DSLs) approach. According to Mellor and Balcer (Mellor & Balcer, 2002) the following challenges arise:

  • Consistency: the models of various views need to be syntactically and semantically compatible with each other (i.e. horizontal consistency).
  • Transformation and evolution: a model must be semantically consistent with its refinements (i.e. vertical consistency).
  • Traceability: a change in the model of a particular view leads to corresponding consistent changes in the models of other views.
  • Integration: models of different views may need to be seamlessly integrated before software production.

These elements are all about multiple models and their relations. The quality of the models themselves, however, is equally important. The two main quality criteria for models to be used in MDE are transformability and maintainability (Solheim & Neple, 2006). Solheim and Neple decompose these criteria into several sub-criteria. The quality criteria for transformability are shown in Table 1, while the maintainability criteria are shown in Table 2.

Quality criterion | Type of quality | Explanation
Completeness | Semantic | The model contains all statements that are correct and relevant about the domain. This can be checked against the ontological metamodel.
Well-formedness | Syntactic | The model complies with its language definition. This can be checked using the linguistic metamodel.
Precision | Technical pragmatic | The model is sufficiently accurate and detailed for a particular automatic transformation.
Relevance | Technical pragmatic | The model contains only the statements necessary for a particular transformation.

Table 1 – Transformability quality criteria for models (Solheim & Neple, 2006).

Quality criterion | Type of quality | Explanation
Traceability | Technical pragmatic | The model’s elements can be traced backward to their origin (requirements), and forward to their result (another model or program code).
Well-designedness | Syntactic | The model has a tidy design, making it understandable by humans and transformable to an understandable and tidy result.

Table 2 – Maintainability quality criteria for models (Solheim & Neple, 2006).

Figure 1 shows how the quality criteria presented in Table 1 and Table 2 relate the model to its environment. This framework is based on the framework presented by Krogstie (Krogstie, 2003) and specialized by Solheim and Neple (Solheim & Neple, 2006) for MDE. The building blocks are explained as follows:

  • G, the (normally organizationally motivated) goals of the modeling task.
  • L, the language extension, i.e. the set of all statements that are possible to make according to the graphemes, vocabulary, and syntax of the modeling languages used.
  • M, the externalized model, i.e., the set of all statements in someone’s model of part of the perceived reality written in a language.
  • D, the domain, i.e., the set of all statements which can be stated about the situation at hand. For example, the enterprise domain.
  • T, the technical actor interpretation, i.e., the statements in the model as ‘interpreted’ by different model activators (e.g., modeling tools, transformation tools).


Figure 1 – A specialized framework for model quality in MDE (Solheim & Neple, 2006).

Lange and Chaudron present an even more detailed quality model (Lange & Chaudron, 2005). They start by identifying two primary uses of models, development and maintenance, which they further divide into more specific modelling purposes. These modelling purposes are linked to quality characteristics, which are in turn related to metrics.

In the next parts of this article I describe some techniques for evaluating and testing the models and artefacts in MDE, thereby showing how the presented quality criteria can be measured or achieved.

Model validation

Model validation, or consistency checking, is used for evaluating models with respect to the semantic and syntactic quality criteria. Consistency checking evaluates a model against its metamodels (Binder, 2000). Two different dimensions of metamodels can be identified for each model: an ontological and a linguistic one. Checking a model with respect to its linguistic metamodel ensures syntactic quality, or well-formedness. Checking a model with respect to its ontological metamodel ensures semantic quality, or completeness.
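
To make this concrete, here is a minimal Python sketch of a well-formedness check: a model is validated against a strongly simplified, hypothetical linguistic metamodel that declares which element types exist and which attributes they must carry. The names and structures are illustrative only, not taken from an existing MDE tool.

    # Minimal sketch of a well-formedness (linguistic) check.
    # The "metamodel" below is a deliberately simplified, hypothetical
    # description of which element types exist and which attributes
    # they must carry; real MDE tools derive this from e.g. MOF/Ecore.

    METAMODEL = {
        "Service":   {"required": {"name", "operations"}},
        "Operation": {"required": {"name", "input", "output"}},
    }

    def check_well_formedness(model):
        """Return a list of violations of the linguistic metamodel."""
        violations = []
        for element in model:
            etype = element.get("type")
            if etype not in METAMODEL:
                violations.append(f"unknown element type: {etype!r}")
                continue
            missing = METAMODEL[etype]["required"] - set(element)
            if missing:
                violations.append(f"{etype} is missing attributes: {sorted(missing)}")
        return violations

    # Example: one well-formed Service and one Operation without an output.
    model = [
        {"type": "Service", "name": "OrderService", "operations": ["placeOrder"]},
        {"type": "Operation", "name": "placeOrder", "input": "OrderRequest"},
    ]
    print(check_well_formedness(model))
    # -> ["Operation is missing attributes: ['output']"]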

In addition to metamodels, constraints (usually defined on the metamodel) can be used for validating a model. The most widely used constraint language is the Object Constraint Language (OCL) (OMG, 2006), a formal language for describing expressions on UML models. These expressions typically specify invariant conditions that must hold for the system being modelled, or queries over objects described in a model. OCL constraints can be translated into graph rules and transformation units, thus providing a precise semantics for such constraints (Bottoni, Koch, Parisi-Presicce, & Taentzer, 2000). Using these rules a model can be checked automatically for well-formedness with respect to the OCL constraints.
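
As an illustration, a constraint like "all operations of a service must have unique names" could be written as an OCL invariant. The sketch below shows the idea by evaluating the equivalent check on a simple, hypothetical in-memory model element; it is not tied to a particular OCL engine.

    # Illustrative OCL invariant (hedged example, not from a specific model):
    #   context Service
    #   inv uniqueOperationNames: self.operations->isUnique(name)
    #
    # The function below evaluates the equivalent check on a simple,
    # hypothetical in-memory representation of a Service element.

    def unique_operation_names(service):
        """True if all operation names of the service are distinct."""
        names = [op["name"] for op in service["operations"]]
        return len(names) == len(set(names))

    service = {
        "name": "OrderService",
        "operations": [{"name": "placeOrder"}, {"name": "cancelOrder"},
                       {"name": "placeOrder"}],  # duplicate -> violation
    }
    print(unique_operation_names(service))  # -> False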

Besides the quality of individual models, the consistency between models is important. As explained above, two kinds of consistency exist: horizontal (between models of various views) and vertical (between different versions or refinements). Vertical consistency is also referred to as evolution consistency (Straeten, Mens, Simmonds, & Jonckers, 2003). Straeten et al. present a formal way to keep multiple models consistent using description logic. In principle, any change-propagating model transformation language can be used for keeping multiple models consistent.
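
A small illustration of a horizontal consistency check: the sketch below verifies that every service referenced in a hypothetical process view is actually declared in the service view. The view structures are assumptions made for this example only.

    # Minimal sketch of a horizontal consistency check between two views.
    # Both view structures are hypothetical; real MDE tools would work on
    # the actual models, e.g. a process model and a service model.

    def check_horizontal_consistency(process_view, service_view):
        """Report services used in the process view but not declared."""
        declared = {s["name"] for s in service_view["services"]}
        used = {step["service"] for step in process_view["steps"]}
        return sorted(used - declared)

    service_view = {"services": [{"name": "OrderService"},
                                 {"name": "BillingService"}]}
    process_view = {"steps": [{"service": "OrderService"},
                              {"service": "ShippingService"}]}  # undeclared

    print(check_horizontal_consistency(process_view, service_view))
    # -> ['ShippingService']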

Model checking

In a previous article the architecture requirements for a Service-Oriented Business Application (SOBA) have been described. I concluded that a SOBA should be based on a service-oriented, process-centric programming model with a strong focus on messaging and assemblies (reuse). The components implementing the services have to be thoroughly tested, all the more so when third-party components are used. When assembling systems from distinct components, the integration testing phase is indispensable (Rehman, Jabeen, Bertolino, & Polini, 2007). However, due to the model-driven, process-centric nature of the MDE methodologies we are researching, this integration testing process can be automated with a technique called model checking.

Model checking is a formal verification technique based on state exploration. Given a state transition system and a property, model checking algorithms exhaustively explore the state space to determine whether the system satisfies the property (Chan, et al., 1998). In other words: the autonomous services of a service-oriented solution communicate using messages, and each message receipt can trigger a state transition (in stateful services) and/or result in a new message. Because most of the application services are implemented using a process definition, the whole system can be modelled in a formal way as a set of processes and interactions. A model checker takes this definition as input, along with a set of correctness claims, to check the correctness of the system.
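
The core idea can be sketched in a few lines of Python: a breadth-first exploration of a hand-written, hypothetical state transition system, checking a simple safety property in every reachable state. Real model checkers such as SPIN add much more (temporal logic, state reduction, etc.); this only illustrates exhaustive state exploration.

    # Minimal sketch of explicit-state model checking: exhaustively explore
    # the reachable states of a transition system and check a safety
    # property in each state. The system below is a hypothetical two-service
    # request/response interaction, not a real SOBA model.

    from collections import deque

    def successors(state):
        """Hand-written transition relation over (client, server) phases."""
        client, server = state
        if client == "idle":
            yield ("waiting", server)            # client sends a request
        if client == "waiting" and server == "idle":
            yield (client, "processing")         # server receives it
        if server == "processing":
            yield ("idle", "idle")               # server replies, both reset

    def check_safety(initial, safe):
        """Return a counterexample path to an unsafe state, or None."""
        queue = deque([(initial, [initial])])
        visited = {initial}
        while queue:
            state, path = queue.popleft()
            if not safe(state):
                return path                      # property violated
            for nxt in successors(state):
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append((nxt, path + [nxt]))
        return None                              # property holds everywhere

    # Safety property: the server never processes while the client is idle.
    result = check_safety(("idle", "idle"),
                          lambda s: not (s[0] == "idle" and s[1] == "processing"))
    print(result)  # -> None, i.e. the property holds for this tiny system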

An example of such a model checker is SPIN. "SPIN is a generic verification system that supports the design and verification of asynchronous process systems. SPIN verification models are focused on proving the correctness of process interactions, and they attempt to abstract as much as possible from internal sequential computations" (Holzmann, 1997). SPIN accepts a high-level model of a concurrent system or distributed algorithm specified in the verification language PROMELA (Holzmann, Design and Validation of Computer Protocols, 1991). The correctness claims are specified using the syntax of standard Linear Temporal Logic (LTL) (Pnueli, 1977).
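
As an illustration (not taken from a specific SPIN model), a typical correctness claim for a request/response interaction could be the LTL formula

    \Box\,(\mathit{requestSent} \rightarrow \Diamond\,\mathit{responseReceived})

stating that every sent request is eventually answered. In SPIN's ASCII syntax the "always" box and "eventually" diamond are written as [] and <>.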

Although model checking is a very powerful approach, some problems have to be solved before it can be applied on a large scale. For embedded software, however, it is used in practice more and more. One of the problems of model checking is state explosion: when multiple state machines are composed and checked, the number of combined states can become very large. Holzmann (Holzmann, Design and Validation of Computer Protocols, 1991) describes several techniques for mitigating this problem. Another problem is that most model checking approaches only apply to finite state systems, whereas software is often specified with infinite state spaces. Research addresses this problem by building model checkers for infinite state systems or by abstracting them as finite state systems, which is often possible (Chan, et al., 1998).

An interesting initiative in the field of model checking is the Bogor project (Robby, Dwyer, Hatcliff, & Hoosier, 2008) (Robby, Dwyer, & Hatcliff, Bogor: an extensible and highly-modular software model checking framework, 2003). This model checker is highly modular and makes it possible, for example, to combine several modules implementing state reduction algorithms. The end user can choose their own preferred algorithms. Bogor is available as an Eclipse plugin and its model checking language includes support for features found in concurrent object-oriented languages, such as dynamic creation of threads and objects, object inheritance, virtual methods, exceptions, garbage collection, etc.

Runtime behaviour

Besides testing, validating and checking models, we also want to test the resulting software artefact. As stated before, software quality is not only the degree to which a system, component, or process meets specified requirements, but also the degree to which it meets customer or user needs or expectations. Only testing the models against the requirements and technical constraints is therefore not enough. We also have to test the behaviour of the runtime system using test cases.

A nice overview of requirements for model-based testing (MBT) of information systems is given by Santos-Neto, Resende and Pádua (Santos-Neto, Resende, & Pádua, 2007). They base their work on several existing works in this field (Abdurazik & Offutt, 2000) (Andrews, France, Ghosh, & Craig, 2003) (Briand & Labiche, 2001) (Hartman & Nagin, 2004) (Offutt & Abdurazik, 1999) (Santos-Neto, Resende, & Padua, 2005), but because these works together do not cover all the needs of modern software development projects, they have extended them using requirements elicitation techniques such as brainstorming and JAD (Joint Application Development) sessions. The requirements are listed below.

An MBT method should prescribe (Santos-Neto, Resende, & Pádua, 2007):

  • The testing of the relationship between the software and its data storage.
  • Test generation using test criteria. A test criterion is a means of deciding what a suitable set of test cases is. Test criteria should be based on user needs and expectations (see the sketch below this list).
  • The testing of the system architecture.
  • Test generation for non-functional requirements.
  • Feasible mechanisms for automatic oracle generation. A test oracle determines the expected results for a test case.
  • The automatic generation of test artefacts (test plan, test case specification, test procedure specification and a test incident report).
  • Mechanisms for test evaluation (evaluating the test results and considering the planning, designing and implementation of new tests).
  • Mechanisms to facilitate test planning.
  • Mechanisms for helping maintenance and system change. If a system changes, the MBT method should help identify the affected parts of the system and generate new test cases.
  • Mechanisms allowing interoperability among methods, tools and languages. This interoperability can, for example, be achieved by basing the tests on a Platform Independent Model (PIM).
  • Mechanisms to support the incorporation of new tests. Automatic test generation based on models cannot always assure the ideal coverage. Humans should be able to add tests, for example by using recording functionality.
  • Automatic test execution.
  • Support for bug registration and tracking.

The last requirement mentioned is that an MBT method should decrease development costs and improve test quality. MBT is a necessary part of an MDE methodology and should fit into the chosen modelling approach.
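
To give a flavour of the test generation and oracle requirements above, the sketch below derives test cases from a simple, hypothetical state machine model, using transition coverage as the test criterion and the expected target state as a trivial oracle. It only illustrates the idea and is not part of any of the cited methods.

    # Minimal sketch of model-based test generation: derive one test case
    # per transition of a state machine model (transition coverage).
    # The "order" state machine below is a hypothetical example model.

    STATE_MACHINE = {
        "created":   {"pay": "paid", "cancel": "cancelled"},
        "paid":      {"ship": "shipped", "cancel": "cancelled"},
        "shipped":   {},
        "cancelled": {},
    }

    def generate_tests(machine, initial):
        """One test case per transition: events to run plus the expected state."""
        # Breadth-first search for the shortest event sequence to each state.
        paths, frontier = {initial: []}, [initial]
        while frontier:
            state = frontier.pop(0)
            for event, target in machine[state].items():
                if target not in paths:
                    paths[target] = paths[state] + [event]
                    frontier.append(target)
        tests = []
        for state, transitions in machine.items():
            for event, target in transitions.items():
                tests.append({"events": paths[state] + [event],
                              "expected_state": target})   # trivial oracle
        return tests

    for test in generate_tests(STATE_MACHINE, "created"):
        print(test)
    # e.g. {'events': ['pay', 'ship'], 'expected_state': 'shipped'}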

Conclusion

Testing should be an integral part of an MDE methodology for SOBAs, adapted to the chosen modelling languages, model transformations and the architecture of the resulting artefact.

MDE lends itself to quality engineering because:

  • Models are primary software artefacts, so the quality of the models dictates the quality of the artefacts generated from them.
  • Tools can analyze and monitor models for various characteristics.
  • Models can be used as input for model-based testing.

MDE also asks for specific testing methods. An MDE methodology should support model validation by using metamodels to describe the modelling languages. It should also be possible to define constraints on these metamodels.

Because of the service-oriented, and thereby distributed and concurrent, nature of SOBAs, they require specific testing effort. An MDE methodology for SOBAs should support model checking by describing transformations from the models used to a formal notation that can be fed into a model checker.

Because quality is partly defined by user needs and expectations, an MDE methodology should include MBT methods complying with the presented requirements.

References
Abdurazik, A., & Offutt, J. (2000). Using UML collaboration diagrams for static checking and test generation. Proceedings of the 3rd International Conference on the Unified Modeling Language (UML’00), (pp. 383-395). York, UK.

Andrews, A., France, R., Ghosh, S., & Craig, G. (2003). Test adequacy criteria for UML design models. Journal of Software Testing, Verification, and Reliability , 13 (2), 95-127.

Binder, R. V. (2000). Testing Object-Oriented Systems: Models, Patterns, and Tools. Addison-Wesley.

Bottoni, P., Koch, M., Parisi-Presicce, F., & Taentzer, G. (2000). Consistency Checking and Visualization of OCL Constraints. In A. Evans, S. Kent, & B. Selic, «UML» 2000 – The Unified Modeling Language (pp. 294-308). Berlin Heidelberg: Springer-Verlag.

Briand, L., & Labiche, Y. (2001). A UML-based approach to system testing. Proceedings of the 4th Unified Modeling Language Conference (UML’01), (pp. 194-208). Toronto, Canada.

Chan, W., Anderson, R. J., Beame, P., Burns, S., Modugno, F., Notkin, D., et al. (1998). Model Checking Large Software Specifications. IEEE Transactions on Software Engineering , 24 (7).

Hartman, A., & Nagin, K. (2004). The AGEDIS tools for model based testing. Proceedings of the International Symposium on Software Testing and Analysis (ISSTA 2004). Boston, Massachusetts, USA.

Holzmann, G. J. (1991). Design and Validation of Computer Protocols. Prentice Hall.

Holzmann, G. J. (1997). The Model Checker SPIN. IEEE Transactions on Software Engineering , 23 (5).

IEEE. (1990). 610.12 IEEE Standard Glossary of Software Engineering Terminology.

Krogstie, J. (2003). Evaluating UML Using a Generic Quality Framework. In L. Favre, UML and the Unified Process (pp. 1-22). Idea Group Publishing.

Lange, C. F., & Chaudron, M. R. (2005). Managing Model Quality in UML-based Software Development. Proceedings of the 13th IEEE International Workshop on Software Technology and Engineering Practice (pp. 7-16). Washington, DC, USA: IEEE Computer Society.

Mellor, S. J., & Balcer, M. J. (2002). Executable UML: a Foundation for Model-Driven Architecture. Addison-Wesley.

Mohagheghi, P., & Dehlen, V. (2007). An Overview of Quality Frameworks in Model-Driven Engineering and Observations on Transformation Quality. In L. Kuzniarz, J. L. Sourrouille, & M. Staron (Ed.), Proceedings of the 2nd Workshop on Quality in Modeling, MoDELS 2007, (pp. 3-17). Nashville, TN, USA.

Offutt, J., & Abdurazik, A. (1999). Generating tests from UML specifications. Proceedings of the 2nd Unified Modeling Language Conference (UML’99). Fort Collins, Colorado, USA.

OMG. (2006). Object Constraint Language version 2.0. OMG Available Specification, formal/06-05-01.

Pnueli, A. (1977). The Temporal Logic of Programs. Proc. 18th IEEE Symp. Foundations of Computer Science, (pp. 46-57). Providence.

Rehman, M. J.-u., Jabeen, F., Bertolino, A., & Polini, A. (2007). Testing software components for integration: a survey of issues and techniques. Software Testing, Verification and Reliability (17), 95-133.

Robby, Dwyer, M. B., & Hatcliff, J. (2003). Bogor: an extensible and highly-modular software model checking framework. Proceedings of the 9th European software engineering conference held jointly with 11th ACM SIGSOFT international symposium on Foundations of software engineering (pp. 267-276). Helsinki, Finland: ACM.

Robby, Dwyer, M. B., Hatcliff, J., & Hoosier, M. (2008). Bogor, software model checking framework. Retrieved June 04, 2008, from Bogor: http://bogor.projects.cis.ksu.edu

Santos-Neto, P., Resende, R., & Padua, C. (2005). A method for information system testing automation. Proceedings of the 17th Conference on Advanced Information Systems Engineering (CAiSE’05). Porto, Portugal.

Santos-Neto, P., Resende, R., & Pádua, C. (2007). Requirements for Information Systems Model-Based Testing. Proceedings of the 2007 ACM symposium on Applied computing (pp. 1409-1415). Seoul, Korea: ACM.

Solheim, I., & Neple, T. (2006). Model Quality in the Context of Model-Driven Development. In L. F. Pires, & S. Hammoudi (Ed.), Proceedings of the 2nd International Workshop on Model-Driven Enterprise Information Systems, MDEIS 2006 (pp. 27-35). Paphos, Cyprus: INSTICC Press 2006.

Straeten, R. V., Mens, T., Simmonds, J., & Jonckers, V. (2003). Using Description Logic to Maintain Consistency between UML Models. In P. Stevens, "UML" 2003 – The Unified Modeling Language (pp. 326-340). Berlin Heidelberg: Springer-Verlag.
