Why Model Driven Software Development isn’t fast enough and how to fix it

Have you heard all the rumors about Model Driven Development (MDD) lately? Start using MDD because it improves your productivity; use business engineering / MDD for more business agility; if you want your SOA fast, you need Model-Driven SOA; and so on…

If you’re new to Model Driven Software Development and its related acronyms like MDA, MDD, MDSD, MDE, and it all sounds a bit abstract to you, please read this article explaining MDE with a simple metaphor.

All mentioned statements link MDD to the fast delivery of business results. MDD as the solution to slow software development cycles. Guess what? That’s just not true.

Why Model Driven Software Development isn’t fast enough

Yes, MDD can be much faster than traditional software development. However, the result, working software for end-users, isn’t delivered fast enough. If we look at a typical software development project using MDD, we see something like this:

Agile is key…
Short iterations…
Easy modeling…
Early results…
Showing prototypes to business / end-users…
Easy to involve business…
After a bunch of iterations the application is finished.
(see also 15 reasons why you should start using MDD)

And now… deployment!

The application should move to production. That’s where it all starts.

We need to build / package everything…
We need a server to deploy the application on…
IT guys will start talking about corporate policies, security, reference architectures, etc…
We need to configure / bind all kind of things (e.g. addresses of integration points)…
And what about testing for acceptance?
Lots of people need to be involved…

It’s needed, don’t get me wrong! However, where the first part was fast, the second part, deployment, is just slow. Most MDD tools and projects focus on the development part, and they do it well. Believe me, I’ve seen some incredible results! But to really unleash the power of MDD for the business, we need more…

And how to fix it…

Luckily there is a way to fix this problem. We can make the whole process faster. Let’s look at three alternatives.

Cloud deployment

A possible way to make the deployment of applications easier and faster is to deploy them in the cloud. Don’t think about hardware, platforms, architecture, etc. Send your model to a cloud and just use your application. Model-Execution-as-a-Service!

Advantages:

  • No discussions about hardware, platforms, architecture, etc. The important thing is: make it work.
  • If the cloud is selected beforehand, deployment can become very fast.
  • Probably more cost-effective and scalable.

Challenges:

  • Make it easy! Abstract away all kinds of deployment details.
  • Corporate policies will soon cover cloud infrastructures.
  • Take care of your security requirements.
  • Integration with corporate systems within the firewall.
  • Performance, watch your connection speed!
  • You still need to arrange a sufficient test process.

Change at runtime – engine

Another way to speed up the deployment cycle is to define two levels of variability for applications in a certain domain. Let me explain.

If we select a domain (e.g. insurance, healthcare, web applications), we can build a Model Driven Software Factory (MDSF) to build applications in that domain using high-level models (i.e. with the use of Domain-Specific Languages). An MDSF is built on the idea that if you compare applications in a certain domain with each other, there’s a static part and a variable part. The static part is the same for each application in the domain; the variable part differs between applications. The static part is implemented in libraries. The variable part is defined in DSLs and can be modeled by whoever builds the application. The code generated from the model defined with these DSLs (or the engine executing the model) uses the libraries containing the static part of the implementation.
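To make the static/variable split concrete, here is a minimal sketch in Python. All names (FormEngine, claim_form_model) are hypothetical; in a real MDSF the variable part would be generated from or interpreted as a DSL model, not written by hand.

```python
# --- Static part: a generic library, identical for every application
# in the domain. It is shipped as-is and never modeled.
class FormEngine:
    """Renders forms; part of the reusable domain library."""
    def render(self, form_spec):
        # Produce one input line per field declared in the model.
        return [f"{field}: ____" for field in form_spec["fields"]]

# --- Variable part: what the DSL model supplies for one application.
# Here it is a hand-written dict standing in for a generated model.
claim_form_model = {"fields": ["policy_number", "incident_date", "amount"]}

engine = FormEngine()
print(engine.render(claim_form_model))
```

The point of the split: `FormEngine` is written once for the whole domain, while each application only contributes a model like `claim_form_model`.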

The idea of two levels of variability is to select a part of the variability which needs to change a lot during the lifetime of an application. This part, the second level of variability, should be adaptable at runtime (i.e. during the use of the application, without re-deploying it).

So, in principle we can define the following levels of variability for an application:

  • Level 0: the static part of applications in a certain domain.
  • Level 1: the variable part which is defined during the development and design of the application using DSLs.
  • Level 2: the part of the application which can be configured or adapted at runtime.

Examples of level 2 elements are the targets/goals of KPIs, authorizations, GUI personalization, task-role relations (in workflows), business rules, etc.

Business rules engines often support changes at runtime without re-deployment or stopping currently running transactions. MDD can learn from them: use an engine (a virtual machine or model executor) and allow part of the model to be changed at runtime.
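The engine idea above can be sketched as follows. This is a toy illustration with invented names (ModelEngine, auto_approve_limit): the level-2 slice of the model (here, a business rule threshold) can be swapped while the engine runs, and in-flight transactions keep the model version they started with.

```python
class ModelEngine:
    """Toy model executor with a runtime-changeable level-2 rule set."""
    def __init__(self, level2_rules):
        self._rules = level2_rules     # level-2: changeable at runtime
        self._version = 1

    def update_rules(self, new_rules):
        """Swap the level-2 rules without redeploying the application."""
        self._rules = new_rules
        self._version += 1

    def start_transaction(self):
        # Pin the rules and version at transaction start, so work that is
        # already running keeps executing against the old model.
        rules, version = self._rules, self._version
        def evaluate(claim_amount):
            decision = ("approve"
                        if claim_amount <= rules["auto_approve_limit"]
                        else "review")
            return (decision, version)
        return evaluate

engine = ModelEngine({"auto_approve_limit": 1000})
tx_old = engine.start_transaction()
engine.update_rules({"auto_approve_limit": 500})   # runtime change
tx_new = engine.start_transaction()

print(tx_old(800))   # old transaction: still uses the old limit
print(tx_new(800))   # new transaction: picks up the new rules
```

Note how this also addresses the versioning challenge mentioned below: each transaction closes over the rules it started with.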

Advantages:

  • Fast and easy to apply changes.
  • Runtime changes, without the need for a maintenance time frame.
  • Only controlled changes, i.e. only changes at level 2 are possible at runtime.
  • If changes can be constrained / controlled, less testing effort.

Challenges:

  • Changing a running system means that all currently running processes / transactions should keep running on the old model, while new triggers start executing the new version of the model.
  • It’s difficult to define the appropriate variability for each level.
  • What about testing, errors, risks, rollback of model versions, etc.

Change at runtime – adaptive modeling

The third alternative is largely the same as the previous one; it also builds on the idea of two levels of variability. But instead of creating the second level of variability within the tooling by allowing model changes at runtime, this alternative is based on adaptive modeling.

In ‘normal’ Model Driven Development the metalevel is part of the model editor (see this article on DSLs for a deep dive into meta-modelling). In the case of adaptive modeling, a metalevel is introduced in the model itself: the instances of one group of objects affect the behaviour of instances of other objects. In Domain-Driven Design (as described by Eric Evans) this is known as the Knowledge Level pattern, which splits a model into two levels:

  • Operations level: the place where we do our daily business. For example Shipment, Account.
  • Knowledge level: objects that describe / constrain the objects in the operations level. For example EmployeeType, RebatePaymentMethod, ShipmentMethod.

Adaptive modeling allows you to alter the application by creating knowledge level objects and wiring them together. The knowledge level represents what I called variability level 2 before.
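A minimal sketch of the Knowledge Level pattern, assuming a shipping domain: the class names follow Evans’ examples, but the constraint logic (a maximum weight per shipment method) is invented for illustration.

```python
class ShipmentMethod:
    """Knowledge level: describes/constrains Shipment instances.
    New methods can be created and wired together at runtime."""
    def __init__(self, name, max_weight_kg):
        self.name = name
        self.max_weight_kg = max_weight_kg

class Shipment:
    """Operations level: the day-to-day business objects."""
    def __init__(self, weight_kg, method):
        # The knowledge-level object constrains the operations-level one.
        if weight_kg > method.max_weight_kg:
            raise ValueError(f"{method.name} allows at most "
                             f"{method.max_weight_kg} kg")
        self.weight_kg = weight_kg
        self.method = method

# Adapting the application = creating knowledge-level objects at runtime,
# no redeployment needed:
parcel = ShipmentMethod("parcel", max_weight_kg=30)
freight = ShipmentMethod("freight", max_weight_kg=10_000)

s = Shipment(25, parcel)          # fine
# Shipment(250, parcel) would raise: parcel only allows up to 30 kg
```

Unlike the engine alternative, nothing here is a ‘model change’ from the tooling’s point of view; the adaptation is just new data in the knowledge level.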

Advantages:

  • Fast and easy to apply changes.
  • Runtime changes, without the need for a maintenance time frame.
  • Only controlled changes, i.e. only changes at level 2 are possible at runtime.
  • If changes can be constrained / controlled, less testing effort.

Challenges:

  • The advantage of MDD is using a language that is as specific as possible. Adaptive modeling means that the model, and thus the tool support, is more abstract and less specific.
  • Where to stop? I.e. what should be level 1 and what level 2 variability?
  • Editing isn’t the hard part of adaptive modeling. How to use test and debug tools? How to ensure quality? The previous alternative is stronger on this point.

Conclusion

Model Driven Development (or Model Driven Engineering) can’t deliver software as fast as today’s dynamic business environment needs it. The main slowdown is the phase between development and production. This phase can be made faster by using cloud deployment, runtime engines, or adaptive modeling.

We should go beyond MDD! Not only Model Driven Development, but also Model Driven Deployment or runtime adaptation.

What’s your preferred alternative? Or do you have a fourth one?


Photo by Andrew Morrell Photography

7 Comments Added

  1. Andriy Levytskyy February 11, 2010 | Reply

    I agree absolutely that MDE needs to go beyond the development phase. IMO MDE can be applied throughout the entire development lifecycle. The motivation for this is the following: once MDE removes a bottleneck (traditionally in the development phase), it is natural to move to the next biggest bottleneck (be it testing or deployment, etc..)
    I do not have another way to fix the problem you describe. However I prefer to look at the problem somewhat differently: instead of focusing on the phases, I focus on concrete processes and their costs. It is the cost that drives what will be automated with MDE next.
    As for new terms, there are already so many associated with MD* that I am not so sure that new ones are really needed. After all, Model Driven Deployment does not bring any new techniques or principles and is just MDE at the deployment phase, isn’t it?

  2. Rui Curado February 12, 2010 | Reply

    Models are usually associated with abstraction, but models can also be associated with automation, as long as that model can be processed by some application.
    Therefore, if we want to remove a bottleneck, we have to:
    – Make it concise (abstraction + constraints)
    – Make it faster (automation)
    There you have it! Model-Driven Deployment is the key. Now, if we could just put that MDD thing to work…
    ABSE does support MDD (as D for Deployment) but I haven’t worked much on that front so far.

  3. Johan den Haan February 12, 2010 | Reply

    Hi Andriy,
    Thanks for your excellent addition!
    >Instead of focusing on the phases, I focus on concrete processes and their costs. It is the cost that drives what will be automated with MDE next.
    My focus in this article was on the deployment phase, but I agree: we should see it as a process with MDE as a technique to optimize it.
    >As for new terms, there are already so many associated with MD* that I am not so sure that new ones are really needed. After all, Model Driven Deployment does not bring any new techniques or principles and is just MDE at the deployment phase, isn’t?
    Like you I prefer the more generic term Model Driven Engineering. While a lot of people use Model Driven Development I often use this term too. Model Driven Deployment was just a little joke because it’s also MDD 😉

  4. Yeu Wen February 17, 2010 | Reply

    Hi Johan,
    I am having some difficulty trying to differentiate the last 2 alternatives. In alternative 2, isn’t the part of the application that can be configured or adapted at runtime meta-level classes like shipmentMethod you have cited in alternative 3 above? In other words, the classes having attributes that are configurable are part of Level 2 variability, right? And these classes would have to be generic across all possible applications in the same domain, won’t they?

  5. Johan den Haan February 18, 2010 | Reply

    Hi Yeu,
    The difference between alternative 2 and 3 is that in 2 the model elements are separated from the runtime elements. In alternative 3 a meta level is introduced in the runtime itself.
    This means that in alternative 2 the modeling tool is connected to the runtime and model changes can be directly transferred to the running system. There is, however, an explicit ‘deployment’ step. The power of this approach is that you can use the modeling tool itself to do the changes and to check and debug the model.
    In alternative three you just adapt some ‘data’ in the database and the changes directly take place.

  6. Pingback:

    […] 6: Why Model Driven Software Development Isn’t Fast Enough […]
