Model Driven Engineering tools compared on user activities
In my previous post I gave you a quick overview of the roles involved in Model Driven Engineering. The question for today is: how do we support these roles with appropriate tooling?
Tools for Model Driven Engineering are often seen as a single category of tools. I’ll argue in this article that they can be separated into two categories: DSL (Domain Specific Language) tools and Model Driven Software Factories. The two types of tools involve very different user activities. However, before going into details, let’s first explain the basics: what is a language workbench, and which workbenches do we actually need in MDE?
What is a language workbench?
Martin Fowler introduced the term Language Workbench a couple of years ago to refer to tooling that supports language-oriented programming. I see language-oriented programming as an important aspect of Model Driven Engineering, i.e. the meta-level part of MDE. Fowler defines a language workbench by the following essential characteristics:
- Users can freely define new languages which are fully integrated with each other.
- The primary source of information is a persistent abstract representation.
- Language designers define a Domain-Specific Language (DSL) in three main parts: schema, editor(s), and generator(s).
- Language users manipulate a DSL through a projectional editor.
- A language workbench can persist incomplete or contradictory information in its abstract representation.
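To make the "schema" part of this definition more tangible, here is a minimal, purely illustrative sketch of what a persisted abstract representation could look like. The Entity and Attribute concepts, and all names used, are my own assumptions for illustration; they are not taken from any particular workbench.

```java
// A minimal sketch of a DSL "schema": the abstract representation a language
// workbench persists, independent of any concrete (projectional) editor.
// Concept and attribute names are illustrative assumptions only.
import java.util.ArrayList;
import java.util.List;

class Attribute {
    final String name;
    final String type;

    Attribute(String name, String type) {
        this.name = name;
        this.type = type;
    }
}

class Entity {
    final String name;
    final List<Attribute> attributes = new ArrayList<>();

    Entity(String name) {
        this.name = name;
    }
}
```

Because the abstract representation is the primary source of information, an editor can persist an Entity even while its attributes are still missing or inconsistent, which matches the last characteristic above.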
If we compare these characteristics with the roles involved in MDE, we see that each role needs to use such a language workbench for its own part of the job.
Needed workbenches in Model-Driven Engineering
Let’s take a step back. A language workbench as defined by Martin Fowler actually consists of several workbenches. So, which workbenches are needed in MDE? Let’s start with the MDE overview picture from my previous post (see Figure 1).
Figure 1 – Overview of Model Driven Engineering
In a model driven development process the dark grey artifacts shown in Figure 1 need to be specified somehow. Hence, we need at least three different workbenches. A domain expert and a language engineer together define a DSL in a DSL workbench, using a meta language. A transformation specialist and an implementation expert together define the transformation rules in a transformation workbench, using a transformation language; these rules define how models expressed in the DSL are executed. The last workbench is the solution workbench, in which the business engineer and solution architect model an application using the defined DSL. An overview of the different workbenches and their users is given in Figure 2.
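As a rough illustration of what gets defined in the transformation workbench, here is a hedged model-to-text sketch that reuses the hypothetical Entity/Attribute schema from the earlier example. Real transformation languages (template-based, graph rewriting, and so on) differ per tool; this only shows the idea of a rule mapping a model element onto code.

```java
// Illustrative model-to-text transformation rule: maps the hypothetical Entity
// schema onto a plain Java class. Not the syntax of any real transformation language.
class EntityToJavaRule {
    String transform(Entity entity) {
        StringBuilder out = new StringBuilder();
        out.append("public class ").append(entity.name).append(" {\n");
        for (Attribute attribute : entity.attributes) {
            out.append("    private ").append(attribute.type)
               .append(" ").append(attribute.name).append(";\n");
        }
        out.append("}\n");
        return out.toString();
    }
}
```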
In reality, a model driven software development process isn’t that straightforward. DSLs are meant to be domain-specific, i.e. each one models just a single system aspect. Hence, we need multiple DSLs to describe a software solution. That’s why we also need the role of software factory architect (or method engineer): someone needs to define which models and DSLs are needed in a development process and how they are connected. In principle this role defines which workbenches are combined into a single MDE tool.
Figure 2 – Overview of workbenches in MDE
Model Driven Engineering Tools
Looking at the current market, we can distinguish between two main approaches in MDE tooling: DSL tools and Model Driven Software Factories.
Let’s compare them on the following points:
- Workbenches: what workbenches are included in the tool.
- Input: what is the input of the tool.
- Output: what is the output of the tool.
- Tool vendor activities: what does a tool vendor need to specify/build in order to create the tool.
- Tool user activities: what does the user of the tool need to specify in order to produce the output. This is of course related to the input, but it also includes the activities not guided by the tool.
- Examples: existing tools in this category (just a few examples, not an exhaustive list).
DSL tool:
- Workbenches: DSL workbench and transformation workbench.
- Input: DSL definitions and transformation rules.
- Output: solution workbench and generator.
- Tool vendor activities: meta language definition, transformation language definition, workbench implementations.
- Tool user activities: DSL definitions, transformation rules, architecture definition, (domain) framework implementation.
- Examples: openArchitectureWare, MetaEdit+, Microsoft DSL Tools, JetBrains Meta Programming System.
Model Driven Software Factory:
- Workbenches: solution workbench.
- Input: functional specification.
- Output: working application.
- Tool vendor activities: DSL definitions, transformation rules, architecture definition, domain framework implementation, solution workbench implementation.
- Tool user activities: application modeling.
- Examples: Mendix (domain: Service Oriented Business Applications).
Depending on how well a DSL tool supports the definition of multiple DSLs (cross-references, change propagation, etc.) and transformations, you could say that a Model Driven Software Factory can be the output of a DSL tool.
DSL tool or Model Driven Software Factory?
Which tool you’ll need for your project depends on your specific wishes. A while ago Steven Kelly wrote an article on this subject from a financial perspective (in reaction to a post of mine). He concludes: "Building a DSL for your own use is a lot easier and cheaper, and gives greater benefits". I don’t agree with this conclusion.
Building your own DSLs, i.e. using DSL tools, gives you all the flexibility you’ll ever need. However, it also comes at a cost: DSL design isn’t that easy. Designing a full set of DSLs for modeling all application aspects takes a lot of effort (the DSLs will evolve along with the applications you build with them). And don’t forget the training, language support, standardization, and maintenance of your languages.
A Model Driven Software Factory, on the other hand, is only useful if it precisely targets the domain you’re seeking a solution for. You also need to commit to the vendor of the factory. However, the domain of a Model Driven Software Factory can be quite broad, and with current business process engines, workflow engines, and business rules engines, the languages used can be applicable to multiple problem domains and still easy for business engineers to understand. With a Model Driven Software Factory you can start modeling your application with the DSLs right away, and you don’t need in-house expertise to define the languages, transformations, and generators yourself.
Which type of tool you need depends on your situation. My advice: read and understand the characteristics of the different types of tools and make the choice for yourself.
What is your preferred choice?
Comments
I prefer the direct Model Driven Software Factory. I still don’t see why people need to create an intermediate language (DSL) to be more productive.
I am also a proponent of the MDSF, since I was able to create a generic one (that uses a single meta metamodel), so I now know that an MDSF can avoid the use of DSLs. However, it still requires a transformation language. In my case, it’s Lua.
Hi Rui,
In my experience you need DSLs for a Model Driven Software Factory and you can skip the transformation language (exactly the opposite of your approach ;).
With multiple DSLs for different system aspects you can keep the languages small and easy to learn. Each DSL can automatically be translated into source code or interpreted on a runtime / virtual machine.
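To sketch what "interpreted on a runtime" could mean, here is a minimal, assumption-laden example that reuses the illustrative Entity schema from the article above: instead of generating a class, the runtime keeps the model and creates instances dynamically. This is just one way to do it, not how any particular factory works.

```java
// Illustrative interpretation at runtime: no code is generated; the runtime reads
// the hypothetical Entity model and creates instances as maps with default values.
import java.util.HashMap;
import java.util.Map;

class EntityRuntime {
    Map<String, Object> newInstance(Entity entity) {
        Map<String, Object> instance = new HashMap<>();
        for (Attribute attribute : entity.attributes) {
            instance.put(attribute.name, defaultValueFor(attribute.type));
        }
        return instance;
    }

    private Object defaultValueFor(String type) {
        switch (type) {
            case "int":    return 0;
            case "String": return "";
            default:       return null;
        }
    }
}
```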
But of course no holy grail exists, so I’m curious to learn more about your approach.
Johan,
“Each DSL can automatically be translated into source code”
In this case you’re not skipping the transformation. 🙂
Anyway, I think multiple DSLs look attractive, but don’t you have to mix them in the end, in a single project? If you do, then each of these DSLs will add up, so in the end, instead of having one big DSL, you have a big mashup of smaller DSLs. If the approach is better, it has to be proven in practice.
If you’re following the Model Driven Software Network, you already know my approach. I’ve called it ABSE (Atom-Based Software Engineering). The main advantages I see in this approach are:
– A simple, single, universal meta metamodel (the “Atom”)
– Abstraction level independent, so you can work on the solution domain or on the problem domain, or both
– Uses a tree instead of a graph
– Focuses on reuse
– You can apply composition and refactor models
– Easily supports multiple paradigms (AOP, CBD, DSM)
– Domain-independent. You can use ABSE from games to avionics
– The list could go on…
Of course, since I am the ABSE creator, you should take this with a grain of salt… everyone loves their babies.
It is named “ABSE” (E for Engineering) and not “ABSD” (D for Development) since it can also support ALM. The entire application lifecycle can be modeled and managed. This is a work in progress, so I make no claims at this point.
I submitted a session proposal to present ABSE for the first time at CG2009. If it is accepted, you’ll learn more about it at that time.
Rui,
I didn’t mean skipping the transformation, but skipping the transformation language, i.e. not exposing the transformation language to the tool users. Moreover, I prefer interpretation over code generation.
"Anyway, I think multiple DSLs look attractive but, don’t you have to mix them at the end, on a single project. If you do, then each of these DSLs will add-up, so at the end instead of having a big DSL you have a big mashup of smaller DSLs. If the approach is better it has to be proven in practice."
I think there are lots of examples showing that multiple DSLs can efficiently abstract complex systems. If you use soft references you don’t have to merge the models at design time; you can resolve the connections between the DSLs at runtime. You will of course need a tailored modeling environment that ensures consistency among the DSLs.
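For what it’s worth, here is a minimal sketch of the soft-reference idea (all names are assumptions for illustration): a model element refers to an element of another DSL by name only, and the link is looked up at runtime rather than being merged at design time.

```java
// Illustrative soft reference between models of different DSLs: the link is kept
// as a name and resolved against a registry at runtime. Names are assumptions.
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

class ModelRegistry {
    private final Map<String, Object> elementsByName = new HashMap<>();

    void register(String qualifiedName, Object element) {
        elementsByName.put(qualifiedName, element);
    }

    Optional<Object> resolve(String qualifiedName) {
        return Optional.ofNullable(elementsByName.get(qualifiedName));
    }
}

class SoftReference {
    final String targetName; // e.g. a process model pointing at "Customer" in a data model

    SoftReference(String targetName) {
        this.targetName = targetName;
    }

    Object resolveAgainst(ModelRegistry registry) {
        return registry.resolve(targetName)
                .orElseThrow(() -> new IllegalStateException("Unresolved reference: " + targetName));
    }
}
```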
I don’t think my schedule will let me visit CG2009, but maybe we can talk about your approach afterwards…
First, congratulations to Johan for this great synthesis.
I think DSLs could become a dead end, because the industry needs industrial standards and UML plays that role. But one may ask whether UML with extensions (required for practical MDD) is still an industry standard, even if you can export the whole thing through XMI. Maybe UML is the right trade-off, with DSLs as a nice tool to model things not foreseen in UML, which is also large enough.
But MDD also falls short in modeling behavior, especially when limited to code generation through transformation rules only. The other major issue is that adding features at various places in skeleton code requires those features to be pairwise orthogonal, and this is not sustainable for real-life application generation.
This is the focus of my own development, which is based on but extends MDD.
Hi Claude,
See one of my previous posts for some comments on UML vs. DSL: http://www.theenterprisearchitect.eu/archive/2008/08/20/dsl-in-the-context-of-uml-and-gpl
I agree with you that there’s a need for industry standards, but I don’t see the UML as the ultimate candidate. UML was designed with another goal in mind. I think it is important to collect real-world examples of DSLs used in practice. In that way we can build a body of knowledge which can lead to an industry standard.
Hi Johan,
Maybe you misunderstood my sentence “Building a DSL for your own use is a lot easier and cheaper” to mean “…than buying an existing tool with a ready-built DSL for your domain”? That’s not how I intended it: as the rest of the article hopefully makes clear, the end of the sentence would be “…than building one that you intend to sell to lots of people in similar domains”.
My whole point is that people in your position, who have tried to build a DSL to sell to many companies, generally overestimate the cost of building a DSL just for one’s own company. Being able to narrow down the problem domain and solution domain to a single company (or department or project) collapses a lot of the complexity that a language for a broader set of situations would have to cover.
This of course is the whole idea of DSLs: by narrowing things down to a single domain, we can achieve significant benefits compared to a language that has to cope with every domain. It’s a nice symmetry that the same thing that makes the language easier to use, also makes it easier to build.
Hi Steven,
I agree with you that more specific DSLs are easier to build. However, this also means that you cover less variability, meaning that there are more commonalities in the resulting software system. More commonalities means that you need to put more effort into building a domain framework supporting your generated code. In principle, you generate just a part of the code; the biggest part is considered common and can’t be changed by using the DSL.
As long as you stay within a very specific domain this isn’t a problem, but it means that the activities you have to perform to build a software application differ from those when using a Model Driven Software Factory.
You have to design a DSL, which is, as you said, a lot easier for a specific domain (as also pointed out by your colleague – http://bit.ly/mKNkr). You also have to specify transformation rules / code generation, you have to think about the architecture of your application, and you have to implement a (domain) framework.
So, my point is that companies have to choose between:
- A very specific, tailored DSL which can be used by every domain expert, but at the cost of more initial (technical) effort.
- Less specific DSLs (i.e. not tailored to a specific problem domain), but still usable by domain experts after some training, out-of-the-box.
I think our daily practices show there’s a need for both approaches.
Code generation is an intermediate step towards model-driven development. The evolution of manufacturing is analogous: it developed through a series of phases towards CAD/CAM design, automatic machine setup, and continuous production. There is no need to fiddle with code when you can design an application that represents a real-world need and then create it via a ‘software machine’. If you have debugged the machine, you have debugged the applications.
Model-driven development is in turn a step towards generating a complete application as a ‘container’: changes to it are a change to the design followed by a re-generation over a couple of hours, allowing very rapid change in response to changing business needs once the application has been built.
I accept that people have SAP, Oracle, etc., but as we move to a real-time interconnected world, the architectural limitations of these technologies will inhibit business innovation.
Thanks for the nice info.
Again well done, Johan
I still claim that what you do with DSLs can be done with UML. While one might agree on that, there are operational pros and cons that may be discussed, but it comes down to this: a huge, dependable development platform based on an actual standard versus a dedicated, specialized development platform. Behind this also lies the scope ambition of your work environment.
I repeat that I don’t buy Microsoft’s idea that every business application constitutes a specific engineering domain; applications are functionally different, but that has to be transparent to the desired work environment.
I have the same point of view on architecture (that I call application software architecture).
The application implementation must be independent of these two (also independent) kinds of requirements.
I suppose the concept of transformation rules must not be taken stricto sensu (as, for example, with a BNF).