Model Driven Development: Code Generation or Model Interpretation?

During the Code Generation 2010 conference I was part of a Birds of a Feather (BoF) session about Code Generation versus Model Interpretation. It was an interesting, informal discussion with Walter Almeida, Peter Bell, Angelo Hulshout and Pedro Molina. We discussed the advantages of Code Generation over Model Interpretation and the other way around.

BoF during Code Generation 2010 about Code Generation vs. Model Interpretation

I want to give you a short overview of the points made during the discussion. We didn’t come up with a final comparison or overview of all issues. Hence, you should see this article as a starting point for a discussion. Join the discussion by adding your own views in the comments!

Introducing Code Generation and Model Interpretation

In Model-Driven Development code generation is used to generate code from a higher-level model to create a working application. Let’s consider the following example domain model, specified in a Domain-Specific Language:

Customer {
    Name: String;
    Address: String;
}

If we want to generate Java code for this little model we can use a template engine. A template contains the Java code with tokens that will be ‘filled in’ based on the model. We can for example use a template for each entity in the domain model. The template represents a Java class named after the entity (e.g. Customer). For each attribute a private field is generated. For a detailed overview and tutorial on creating a DSL and its associated code generator, see "Getting started with Code Generation with Xpand".
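
To make this concrete, below is a minimal, self-contained sketch of token-based template expansion in plain Java. It only illustrates the idea and is not any particular template engine; the in-memory model is a simple map standing in for a real model API:

import java.util.LinkedHashMap;
import java.util.Map;

public class EntityGenerator {
    public static void main(String[] args) {
        // Hypothetical in-memory representation of the Customer entity.
        Map<String, String> attributes = new LinkedHashMap<String, String>();
        attributes.put("Name", "String");
        attributes.put("Address", "String");

        // The template: Java code with tokens to be 'filled' from the model.
        String template = "public class ${entityName} {\n${fields}}\n";

        // Generate a private field for each attribute in the model.
        StringBuilder fields = new StringBuilder();
        for (Map.Entry<String, String> attribute : attributes.entrySet()) {
            fields.append("    private ").append(attribute.getValue())
                  .append(" ").append(attribute.getKey().toLowerCase())
                  .append(";\n");
        }

        System.out.print(template
                .replace("${entityName}", "Customer")
                .replace("${fields}", fields.toString()));
    }
}

Running this prints a Customer class with two private String fields – the same kind of output a real template engine such as Xpand produces from the model above.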

In the case of model interpretation we do not generate any code to create a working software application from a model. Instead, a generic engine is implemented in (for example) Java and the model is interpreted directly by this engine. This generic Java program contains, for example, a class Entity with a property "name" and a hashmap containing the attributes of that entity (name-value pairs). A Customer entity is in this case not represented by a Java class, but by an Entity object whose name property contains the value "Customer". These Entity objects are created based on the information in the model.
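
As a sketch, such a generic engine class could look like the following in Java. This is a simplified illustration of the idea, not the implementation of any particular product:

import java.util.HashMap;
import java.util.Map;

// One generic Entity class serves every entity type in the model;
// no Customer class exists anywhere in the running application.
public class Entity {
    private final String name;                       // e.g. "Customer"
    private final Map<String, Object> attributes = new HashMap<String, Object>();

    public Entity(String name) {
        this.name = name;
    }

    public String getName() {
        return name;
    }

    public void setAttribute(String attribute, Object value) {
        attributes.put(attribute, value);
    }

    public Object getAttribute(String attribute) {
        return attributes.get(attribute);
    }
}

At startup the interpreter reads the model and creates, for instance, new Entity("Customer"), after which its Name and Address attributes are stored as name-value pairs.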

Both code generation and model interpretation are used in practice. Let’s look at the advantages of these approaches compared to each other. Read 15 reasons to start using Model-Driven Development for an overview of the advantages of a Model-Driven approach in general.

Advantages of Code Generation

Code generation has the following advantages in comparison to model interpretation:

  • It protects your intellectual property: with code generation you can generate an application for a specific client. When using model interpretation you have to give your client the runtime engine which allows him to implement a whole class of applications. For example, if you can generate websites, you can give a client the code needed to run their website. With an interpreter, you have to give them the entire system for interpreting any kind of web application that can be described using your DSLs. This point is especially relevant when you have a software product line for a number of different clients.
  • It can target your customer’s architecture: when using model interpretation you have to implement the interpreter following your own architecture of choice. In case of code generation you can generate code that precisely follows the guidelines of your client(s).
  • The generated implementation is easier to understand: you can look at the generated code and directly understand the behavior of an application. In case of model interpretation you have to understand both the generic implementation of the interpreter and the semantics of the model.
  • Easier to start with: if you have already built an application by hand you can start using code generation by turning the existing code into templates, replacing parts of the code with tokens that will be filled in with model information. If you have built multiple applications for the same domain (e.g. for different customers) you can start by analyzing these applications. Static code (i.e. code that is the same for all applications) can be put in a domain framework; variable code needs to be generated (i.e. you need to create a Domain-Specific Language to model the variability, see the sketch after this list).
  • It is more iterative: as explained in the previous point you can start using code generation by turning existing code into code generation templates. You can of course do this in an iterative way: first you only generate parts of the code and implement the rest manually; later on you can extend your code generator to generate more parts of the code. The same holds for your DSL: at first it can be low level, closely reflecting the code to be generated; later on you can tailor it more and more to domain experts by raising the level of abstraction.
  • It provides an additional check by the compiler: when you generate code, that code needs to be compiled. This compilation step is an additional check, as the compiler will check the generated code for errors. In case of an interpreter you need to do these checks yourself during the interpretation of the model, or you need to create a tight coupling between the modeling environment and its interpreter.
  • Debugging the generator itself is easier than debugging an interpreter: because interpreter code is generic and runs for every model element, you constantly need conditional breakpoints to isolate the case you are interested in.
  • Changes in templates are easier to track: code generation templates are just text files, hence changes are easy to track (e.g. by using a version control system). The same holds for changes in the code of the interpreter; however, this code is generic, so it is less clear what exactly has changed.
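
As an illustration of the domain framework mentioned in the ‘easier to start with’ point, the generated part can be reduced to wiring model-specific details into hand-written framework code. This is only a sketch; all names below are hypothetical:

// Hand-written domain framework: static code shared by all applications.
abstract class RepositoryBase {
    protected RepositoryBase(String entityName) { /* ... */ }
    protected final void registerAttribute(String name, Class<?> type) { /* ... */ }
}

// Generated code: only the variability captured in the model is emitted.
class CustomerRepository extends RepositoryBase {
    CustomerRepository() {
        super("Customer");                       // entity name from the model
        registerAttribute("Name", String.class);
        registerAttribute("Address", String.class);
    }
}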

Advantages of Model Interpretation

Model interpretation has the following advantages in comparison to code generation:

  • It enables faster changes: changes in the model don’t require explicit regeneration, rebuild, retest, and redeploy steps. This significantly shortens the turnaround time.
  • It enables changes at runtime: because the model is available at runtime it is even possible to change the model without stopping the running application (see the sketch after this list). See Why Model Driven Software Development isn’t fast enough and how to fix it to read more about this interesting subject.
  • Easier to change for portability: an interpreter in principle provides a platform-independent target for executing the model. It’s easy to create an interpreter which runs on multiple platforms (e.g. multiple operating systems, multiple cloud platforms). In case of code generation you need to make sure you generate code compliant with each target platform. In case of model interpretation the interpreter is a black box: it doesn’t matter how it is implemented as long as it can run on the target platform.
  • Easier to deploy: when code generation is used you often need to open the generated code in Eclipse or Visual Studio and build it to create the final application. In case of model interpretation you just start the interpreter and feed it the model; code isn’t necessary anymore. Hence, it is much easier for domain experts to deploy and run an application instead of only modeling it.
  • Easier to update and scale: it is easier to change the interpreter and restart it with the same model; you do not have to regenerate the code using an updated generator. The same holds for scaling: scaling an application means starting more instances of the interpreter, all executing the same model. Especially in cloud environments this can give you advantages.
  • It’s more secure: for example on a cloud platform you only need to upload your model; there is no need to access the file system or other system resources. Only the code in the interpreter can access system libraries. The interpreter provides an additional layer on top of the infrastructure; everything underneath is abstracted away. This is essentially the idea of a Platform-as-a-Service (PaaS).
  • It’s more flexible than code generation: there are limits to template-based code generation. In those cases you end up needing helper code to extend the possibilities of template-based code generation. An interpreter can be less complex in such cases, and often needs less code to accomplish the same result.
  • Debug models at runtime: because the model is available at runtime, it is possible to debug your models by stepping through them at runtime (e.g. you can add breakpoints at the model level). This only holds for action languages, not for declarative languages (there you’ll need static analysis). When debugging at the model level is possible, domain experts can debug their own models and adapt the functional behavior of an application based on this debugging. This can be very helpful when, for example, complex process or state models are used.
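
To illustrate the ‘changes at runtime’ point above, here is a minimal sketch of how an interpreter might let a new model version be swapped in without stopping the application. Model and Request are hypothetical placeholder types, not the API of any real product:

// Hypothetical placeholder types for the sketch.
interface Model { }
interface Request { }

public class Interpreter {
    // The model is plain data held by the running engine.
    private volatile Model model;

    public Interpreter(Model initialModel) {
        this.model = initialModel;
    }

    // Called when a modeler deploys a changed model; no rebuild,
    // recompile, or redeploy of the application is needed.
    public void reload(Model newModel) {
        this.model = newModel;
    }

    public void handle(Request request) {
        Model snapshot = model;   // each request sees one consistent model
        // ... interpret 'snapshot' to serve the request ...
    }
}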

Conclusion

If we look at the advantages of both approaches we can conclude that in the end it all depends on the domain, the use case, and the skill set (or comfort level) of the people building and/or using the Model-Driven Software Factory.

When we discussed the differences between code generation and model interpretation, the question soon came up: what exactly is the boundary between these two approaches? What if we have an in-memory file system where we generate the code? What if we optimize our interpreter by compiling parts of the model? What if an interpreter generates a database structure and web content for browsers?

Do you think there’s a relevant distinction between code generation and model interpretation?

What do you see as the main advantages of both approaches?

40 Comments

  1. Pedro J. Molina June 28, 2010 | Reply

    Great summary of our informal meeting! Thxs Johan.

  2. Jeppe Cramon June 28, 2010 | Reply

    Good and balanced list 🙂
    Regarding code generation:
    I don’t understand why people still prefer template based code generation for anything but the simplest solutions.
    I know XPand can be quite flexible – but it also quickly gets somewhat incomprehensible.
    When I started out doing MDD in 2001 we used template based code generation but soon began to realize that it was a dead end street – you way too quickly end up with code in your templates. XPand has pushed the limits, but still…
    What I’ve been using ever since is something comparable to Code DOMs. We build an in-memory representation of the code to be generated (e.g. Java code, XML Schemas, WSDL, DDL). This takes longer at first, but when you start to add more and more orthogonal features (e.g. in a Domain Model: adding getters/setters, adding JPA persistence annotations, adding code to handle bidirectionality in memory, handling versioning of objects, etc.) you simply need a flexible way of working with and transforming your code on the fly. This is very hard, but not impossible, to do with template based systems.
    From my experience working with a Code DOM is best when combined with either an Event based or "PointCut" based (which really just is a short hand for writing Event Handlers which are very specific about what they want to act upon) solution.
    For those interested – I have some slides about this in http://www.slideshare.net/jeppec/short-introduction-to-domain-driven-design-model-driven-development (see slide 75 to 108)

  3. Angelo Hulshout June 29, 2010 | Reply

    Nice summary of what we discussed, Johan. Thanks for that. But why did you use the picture where I bring the drinks? Not because that was my best contribution to the discussion I hope? 😉

  4. Johan den Haan June 29, 2010 | Reply

    Hi Angelo,
    >why did you use the picture where I bring the drinks? Not because that was my best contribution to the discussion I hope?
    The only pictures I could find from our BoF were pictures where you are bringing drinks… As far as I see this can be interpreted in two ways… 😉

  5. Johan den Haan June 29, 2010 | Reply

    Hi Jeppe,
    Thanks for sharing your experiences!
    It’s indeed possible to do code generation in different ways. As you explain it is possible to create a metamodel of the target language (e.g. Java). From this metamodel it is straightforward to define the model-to-text transformation to generate the actual code. To generate the target model from the source model, model-to-model transformations can be used.
    Another example of such an approach is WebDSL, defined using Stratego. They use rewrite rules to ‘rewrite’ a model specified with a DSL into a Java model. Afterwards they simply generate the Java files from this model.

  6. Steven Kelly June 29, 2010 | Reply

    It’s interesting to see how many of the listed (dis)advantages are from the point of view of the tool vendor / generator builder, rather than the point of view of the tool user. Not surprising, given the people in the group!
    One extra point in favour of code generation from the point of view of the tool user: you end up with full source code for the working end application. With an interpreted solution, there’s generally more of a long term dependence on the tool / interpreter / vendor.
    The best insulation is to own the code generator / interpreter yourself. Indeed, another session at Code Generation 2010 showed that this is a strong factor in the overall success of MDD: http://www.metacase.com/blogs/jpt/blogView?showComments=true&entry=3454761261
    If you own the metaware, you also have the freedom to choose generation or interpretation. I actually find that most real cases have elements of both: you can choose the most appropriate solution for the various kinds of information in your models, and the runtime platform you use. Good hand-programming also uses data-driven metaprogramming where appropriate, so this is really nothing new.
    I wouldn’t agree that interpretation is more flexible than code generation; in fact the opposite can be shown to be true. Generated code can always work by metaprogramming, and can thus do everything interpretation can. Maybe the difference in opinion is because you didn’t consider generated code using framework code?
    Debugging at the model level is definitely possible with generated code – e.g. see the Digital Watch example in MetaEdit+, http://www.metacase.com/webcasts/DSM_AnimationAPI.html
    Of course, at the end of the day this is the old question of compiling vs. interpreting, or on an even more basic level, code vs. data. Just remember that even hand-written machine code is just data: an array of values 0-255, which is actually interpreted by the CPU!

  7. Nicolas June 30, 2010 | Reply

    Very interesting discussion, thanks for sharing that !
    I agree with the last comment; many advantages are from the point of view of the software vendor rather than the end user, especially in the case of code generation.
    IMHO, I see many more advantages for the end user in Model Interpretation (I would prefer Model Execution), and I agree with all the points you mentioned. On the contrary, I don’t see real advantages in the code generation strategy.
    Understanding the generated code should not be an advantage, because it should not be a goal at all. And if it is, I think Code Generation has totally missed the goal it was used for.
    I don’t see why Code Generation should be more flexible or easier to start with; for the end user (i.e. the developer), the task to do is quite similar, and it could be more difficult if he has to deal with integrating generated code into his solution.
    I agree debugging a code interpreter is much more difficult than debugging generated code, but the developer should never have to do it; it’s the tool vendor’s job.
    You can compile your model just as you compile code; your interpreter should have the feature to check the model and assure it won’t throw errors at runtime.
    And I totally agree with your first point; the necessity of a runtime with the model interpreter is a huge handicap in a business strategy; people don’t feel safe when their application depends on a 3rd-party runtime.

  8. Angelo Hulshout June 30, 2010 | Reply

    @Steven: sure, the (dis)advantages are worded in a way that relates them directly to the point of view of the generator builder. However, some points can be linked directly to the tool user (and even the end user), at least for the model interpretation part. For instance, the first two have a direct positive impact on the speed with which new features or different configurations can be deployed in production, without having to go back to the product vendor or the R&D department for a new version of the software.
    @Nicolas: I get the feeling you are talking about the end user of a software product, rather than the end user of the MDD tool, in your first remark. If so, this end user should not be bothered by whether the product is developed either way. Otherwise, we would be working on something similar to building houses the way the inhabitant wants: I couldn’t care less whether the carpenter nailed our window frames with a hammer or with a nail gun. The point with model interpretation is that the model can be debugged and changed in the run-time environment. That may benefit the end user, even in cases where he is not able to do it completely by himself, as well as the development team, who have to do less work to make a change. The latter also applies to code generation, which has nothing to do with the end user at all – unless with the developer in his role of end user of the code generator.
    Something interesting that was brought up in the discussion, but not included in the summary because we started focusing on interpreters, is this: having a model at run-time doesn’t necessarily mean that it is a model of ‘the code’. A feature model, which dictates which pieces of an optionally precompiled system should be used and how they should behave, is also a ‘model@run-time’ – in fact, even more so than the models discussed here so far. Of course, to some extent, these models get interpreted as well, but only at initialisation time or after a ‘reconfiguration trigger’. I’ll try to publish something a little more elaborate about that on my blog over the weekend, because we seem to be falling into an old trap here: working on the wrong level of abstraction.

  9. Nicolas June 30, 2010 | Reply

    @Angelo: no, I was talking about the developer point of view (who is, in a certain way, the end user of an MDD tool).

  10. Andriy Levytskyy June 30, 2010 | Reply

    Steven makes very good points. I also feel that it would help readers relate to these advantages if the roles (end user, tool vendor, etc.) were made clear.
    To give an example, consider the code generation advantage "It provides an additional check by the compiler". Obviously this concern does not apply to end users of a model interpreter. It may apply to builders of model interpreters, but then such an interpreter is probably compiled anyway…
    In my opinion as well, the code generation approach is more flexible than the interpretive approach. This is simply because the former by definition allows other inputs/choices than the models provided by end users. Examples of such inputs are the choice of target platform (which may be portable, secure, scalable, etc.), platform information to enable e.g. performance optimization, selection of features/options, and so on.
    That said, as an end user, I really like and prefer model interpretation as it allows the fastest possible application development time. I also have to admit that a proper code generation DSM solution can be nearly indistinguishable, i.e. just as fast, model-debuggable, scalable, secure, deployable, and portable.
    On the other hand, I found that it is not always possible to apply interpretation if your product (the end user model) is not complete without technical inputs/choices/creative decisions (see above for examples).
    Such inputs may be required for different reasons. For example, embedded system design may need platform information to optimize a design for the run-time performance (time budgets, memory budgets) of embedded systems. I recall a case of using an interpreter in embedded system design at ASML. As far as I remember, the input models described processes, mapped processes to resources and specified inter-process communication. The models proved to be very useful for analysis and simulation of run-time behavior. However, when deployed with an interpreter, the presence of the latter changed the system behavior so that it was no longer in line with the behavior specified by the model.
    In the end, the choice of interpretation vs. generation depends much on the context. With that said, I think that the strength of code generation is its flexibility, whereas that of model interpretation is the fastest development by end users.

  11. Andriy Levytskyy June 30, 2010 | Reply

    BTW, the word “obviously” in the second paragraph of my previous post reflects my assumption that there is a tight coupling between the modeling environment and its interpreter.

  12. Rafael Chaves June 30, 2010 | Reply

    the code generation approach is more flexible than the interpretive approach. This is simply because the former by definition allows other inputs/choices than the models provided by end users. Examples of such inputs are the choice of target platform (which may be portable, secure, scalable, etc.), platform information to enable e.g. performance optimization, selection of features/options, and so on.

    Andriy, I fail to see why it would be any harder to support the same kinds of options with model execution (except for “portable”). A runtime might provide an array of execution strategies, and choose between them based on some choices made by the developer.
    Now to the general discussion: one thing in favor of model execution is that you can test and debug your models at the level you write them, not by running whatever code is generated (which one should not have to care about).

  13. Johan den Haan June 30, 2010 | Reply

    Steven said: I wouldn’t agree that interpretation is more flexible than code generation; in fact the opposite can be shown to be true. Generated code can always work by metaprogramming, and can thus do everything interpretation can. Maybe the difference in opinion is because you didn’t consider generated code using framework code?
    Angelo said: Something interesting that was brought up in the discussion, but not included in the summary because we started focusing on interpreters, is this: having a model at run-time doesn’t necessarily mean that it is a model of ‘the code’. A feature model, which dictates which pieces of an optionally precompiled system should be used and how they should behave, is also a ‘model@run-time’ – in fact, even more so than the models discussed here so far.
    Both Angelo and Steven talk about the abstraction level of the model(s) and what part of the end result is variable and thus actually generated or interpreted. It isn’t explicitly stated in this article; however, when we talk about code generation and model interpretation I assume we target a domain framework.
    In case of code generation this means that we generate code which calls the API of a domain framework containing the ‘static’ code (i.e. the code which cannot be changed / configured by changing the model).
    In case of model interpretation this means you execute the model on an interpreter / engine which contains the domain framework. The model configures the runtime behavior of the engine, meaning that the ‘static’ code in the engine is used as dictated by the model.
    So, we do not talk about models of ‘the code’. We talk about models at a higher abstraction level (otherwise there is no reason to use MDD 😉 ).
    I have to agree with Steven: model interpretation isn’t more flexible. But it’s also not the other way around. In the end there is no big difference. Steven formulated it in a nice way:
    Of course, at the end of the day this is the old question of compiling vs. interpreting, or on an even more basic level, code vs. data. Just remember that even hand-written machine code is just data: an array of values 0-255, which is actually interpreted by the CPU!

  14. Angelo Hulshout June 30, 2010 | Reply

    Good last comment, Johan. I used the phrase ‘models of the code’ for want of a better expression in English. What I was trying to say is that there are other models that can be used at run-time than those interpreted by a domain framework as you describe here (which was indeed what we discussed at CG2010). As said, I’ll get back to this over the weekend.

  15. Walter Almeida July 1, 2010 | Reply

    It is very interesting to see how deep such a discussion can go once several experts of the MDD approach join. @Steven: indeed, we did not start this BoF session by setting the most elementary scope: what point of view are we considering?
    Let me make some points here, from the perspective of most concern to me: let’s consider MDD from the tool vendor side. For instance, Johan at Mendix and myself at Fenomen are working, to summarize, on products and services that allow end users to quickly model and implement business applications, without code. To reach such a goal, we internally apply a model driven approach to make this magic happen. In such a scenario, the end user does not care at all whether we internally use code generation or model interpretation; he does not even have to know about the whole model driven philosophy. The end user, without knowing it, simply contributes to this wonderful approach by defining the model for us.
    In such a scenario we end up with a functional model of the final application, which can be exactly the same whatever option (generation or interpretation) we then decide to apply to bring the final application to life. So is code generation or model interpretation better?
    IMHO there’s no definite answer. The objective is not to be purists or to believe in one unique God and stick to the one technology we deeply believe in. The objective is to build business applications that work and provide the service expected by our clients. So let’s be pragmatic: I think some scenarios are best suited for code generation. For instance, IMHO it is much easier and more straightforward to use code generation to convert the model of a user interface into a working product than to interpret this UI model at runtime and build a user interface on the fly. On the other hand, you probably want to use interpretation at runtime for a security model: this way you can provide the end users with an admin console to add or remove users from security application groups at runtime, which is a common use case that would be harder to achieve with code generation. Another example: a workflow model is very well suited to be interpreted at runtime, and the runtime can then compose the workflow activities following the flow defined in the WF model, whereas the workflow activities themselves may be best suited for code generation.
    Hope it helps!

  16. Walter Almeida July 1, 2010 | Reply

    @Jeppe: I am personally a big fan of template-based code generation. I agree though that the templates can quickly become cumbersome when complexity grows. However, it is such a great and straightforward approach to mix output code and generation control code! My personal experience is that you can then apply advanced techniques to make complex templates manageable; for instance, you can build an underlying framework that you then call from your generation templates. And it would probably be possible to mix the template-based approach with a DOM-like approach, and get the best of both worlds, by making calls in your templates to trigger the DOM-based generation? That could potentially be a powerful approach…

  17. Bas Geertsema July 1, 2010 | Reply

    Nice overview, Johan. One point in favor of code generation that I would like to mention is that it can provide more flexibility, in the sense that additional code can be manually added to cover all requirements. One can argue whether this is required, but in my experience it is very hard to achieve 100% of your requirements in a pure MDD software environment. For example, if your code generator outputs Java code (with extension points), you end up with an application that is 95% generated; the remaining 5%, which is very specific to the application, can easily be covered by manually developed Java code. Java as a development platform is very mature and should therefore be able to cover this, albeit at the cost of developing at lower abstraction levels.

  18. Johan den Haan July 1, 2010 | Reply

    Hi Bas,
    I agree with you that you need the flexibility to add custom code. However, when using model interpretation you can achieve the same result, for example by pointing to Java files from the model. The interpreter will interpret the model and call the Java code at the points defined in the model.
    When using code generation you also need to think about the extension points and how to mix generated and manual code. So I wouldn’t say it is easier to use manual code in combination with code generation.

  19. Steven Kelly July 2, 2010 | Reply

    Actually, that raises an interesting question. For cases where you get the generator or interpreter from a third party, do you get to view and edit it? There are obvious questions of the third party’s desire to protect their intellectual property, and your desire to adjust parts that weren’t set up as extension points.
    My impression is that in such cases, generators are left open more commonly than interpreters. Particularly in cases where the generator is written in a proprietary language, “jailbreaking” the IP from an open generator would seem a little harder than from an open interpreter. For both, reading models via a proprietary API rather than e.g. directly from XML files would also impede jailbreaking somewhat.
    Just to pick on one more point above: I wouldn’t agree that interpreters make portability easier. If the generator output or interpreter is in a platform-independent language, you obviously get that for free. If not, the work involved looks the same to me.
    One difference along those lines, however, is that if you supply an interpreter, people don’t generally care what language it’s written in. If you supply a generator, people may well care what language it outputs, so you may need to provide a generator per desired output language. This of course is mostly an issue for vendors; if you’re building a generator for your own use, you probably need just the one language.

  20. Walter Almeida July 2, 2010 | Reply

    @Steven: I love your comments! They really make sense and raise interesting points in a very clear and understandable way. I am starting to become a fan of yours 🙂
    You said: “For cases where you get the generator or interpreter from a third party, do you get to view and edit it? There are obvious questions of the third party’s desire to protect their intellectual property, and your desire to adjust parts that weren’t set up as extension points.”
    Well, it depends on what you want to sell: if you sell an end user tool, targeted at business users, you probably don’t want them to view/edit the generator/interpreter or even know there is a generator/interpreter – not to protect your IP, but because it is not the aim of your product. Of course it can make your product less extendable and more constrained within certain limits, but that’s the idea. If you sell a development tool, targeted at development teams, then yes: you want to give them access to the generator/interpreter (or more precisely: THEY will want access to your generator/interpreter plus source code, or otherwise they won’t buy the product…)
    You said:” My impression is that in such cases, generators are left open more commonly than interpreters. “
    And a generator can easily leave open only part of itself: you can leave open a given number of templates only – the ones that really represent the interesting variation points of your architecture and that could be useful for your users to extend the system – while keeping the core architecture templates and internal framework closed, thus protecting your IP. And to be more precise: in this case it is not exactly the generator itself that you leave open, but the templates that feed it.
    And you said: “One difference along those lines, however, is that if you supply an interpreter, people don’t generally care what language it’s written in. If you supply a generator, people may well care what language it outputs, so you may need to provide a generator per desired output language”
    Well, that’s absolutely true. If you use a generative approach and open your templates then you end up with a limited target user base: those developing with the same technologies as you… And providing multiple generators for multiple output languages is a pain…

  21. Andriy Levytskyy July 2, 2010 | Reply

    Steven said: “There are obvious questions of the third party’s desire to protect their intellectual property, and your desire to adjust parts that weren’t set up as extension points.”
    This reminds me that adaptation to the specific domain of the end user is very important (e.g. it is recognized in DSM and MDA, and being domain specific is a hot topic today). Such adaptation is definitely possible and feasible for end users in the case of code generation. What is the case with model interpreters? Do vendors of such interpreters provide end users with efficient means for such adaptation?

  22. Andriy Levytskyy July 2, 2010 | Reply

    Rafael said about flexibility: “Andriy, I fail to see why it would be any harder to support the same kinds of options with model execution (except for “portable”). A runtime might provide an array of execution strategies, and choose between them based on some choices made by the developer.”
    I am not sure what you mean by runtime execution strategies. It is interesting that you say the end user has a choice based on some choices made by the developer. This raises a question: can a vendor foresee all choices that a tool customer may have?
    Actually, I realize that for some customers the business model barely changes (e.g. the lithography process in ASML machines is very stable). What changes are the runtime performance parameters of executing their business model. ASML is successful because the performance of their products is better than that of their competitors. If ASML chose the interpreter approach they would be constantly evolving their interpreters – this would in fact make them a developer of very specific interpreters. Such development involves very specific choices (platform independence, optimizations for very specialized HW/SW platforms, reuse of their own component libraries, dealing with effects of organization structure on development, etc.). No external vendor of interpreters can foresee such choices in their tools. What clients like ASML need is MDD applied to their development processes and their choices. The generative approach and its tools have been shown to be *flexible* enough (Rafael, does it answer your question?) to work in such a context. The question is whether the interpretive approach can do the same.

  23. Richard Kennard July 6, 2010 | Reply

    Hi guys,
    I wish I could have made it to Code Generation 2010!
    I think you are positing an invalid choice between either Code Generation OR Model Interpretation. You say ‘In case of model interpretation a generic engine is implemented… a Customer entity is in this case not represented by a Java class’. But there are many ways a model can be interpreted. You could, for example, use reflection to inspect an actual Java class and interpret the model at runtime. I have been exploring this ‘inspect anything’ approach for a few years now with my Open Source project Metawidget (http://metawidget.org).
    I note people are often concerned about ‘flexibility’ of model interpretation approaches. There is this idea that code generation is better because you can edit the generated code (of course this ignores the problem that when the model changes, regeneration is seldom an option without losing those edits). I have done some work to try to identify what use cases people need to support and why they need to edit the code, and have tried to satisfy those scenarios without resorting to code generation. You can see a journal article on my research here: http://scholar.google.com.au/scholar?q=%22Towards+a+General+Purpose+Architecture+for+UI+Generation%22
    Your feedback would be most welcome.
    Regards,
    Richard.

  24. Enrico Oliva July 6, 2010 | Reply

    Code generation is itself a result of model interpretation. Maybe your point is more about code generation vs. metaprogramming.
    Enrico

  25. Richard Kennard July 6, 2010 | Reply

    Enrico,
    Thanks for your response. I agree that it is not an either/or choice, but the blog entry says several times ‘Code Generation or Model Interpretation’ and ‘Code Generation versus Model Interpretation’ and ‘In case of model interpretation we do not generate code’ and ‘these approaches compared to each other’.
    It certainly sounds as if it is positing one versus the other. So am I reading that wrong?

  26. Johan den Haan July 6, 2010 | Reply

    Hi Richard,
    >It certainly sounds as if it is positing one versus the other. So am I reading that wrong?
    No, you are reading it correctly 🙂 The goal of my article was to compare these approaches. In most cases it’s indeed more a combination of the two, either more code generation oriented or more model interpretation oriented. I think the points made in the article can help you decide which direction to go.
    >You could, for example, use reflection to inspect an actual Java class and interpret the model at runtime. I have been exploring this ‘inspect anything’ approach for a few years now with my Open Source project Metawidget
    I don’t fully understand your approach. Can you explain it a bit more? How do you go from model to a working application? What’s the place of code generation or model interpretation in your approach and why?

  27. Richard Kennard July 7, 2010 | Reply

    Johan,
    In a nutshell: I am suggesting that the model *is* the working application. If you take a working application, with all its database schemas, validation constraints, business rules, OO classes, XML configuration files, then you have quite a lot of raw data with which to construct a UI model.
    There is a challenge in inspecting (or ‘software mining’ if you will) all that disparate data and bringing it together in one place, but if you can do that you have a model that is an order of magnitude richer than something like the Naked Objects approach, and much more accurate than a hand-made model (which is meant to reflect the underlying system, but where it is easy to make mistakes).
    For more details, please see my paper “Separation anxiety: Stresses of developing a modern day separable User Interface”, my journal article “Towards a General Purpose Architecture for UI Generation”, and my Open Source implementation http://metawidget.org.
    I’d be most grateful for your feedback.
    Regards,
    Richard.

  28. Enrico Oliva July 7, 2010 | Reply

    Hi,
    I try to explain myself 🙂 I post some ideas in response to the starting questions.
    When you use model interpretation for an application, it is similar to the work done by the JVM with a class file, where the JVM could be considered a meta-interpreter or meta-program.
    When you use code generation for an application, you create the application source code as the result of a model interpretation process (it is more like the compiler’s job of producing the class file).
    Enrico

  29. Erik Engbrecht July 11, 2010 | Reply

    I hate to be pedantic here, but if you can perform full code generation from a “model” or “model” interpretation, then you don’t really have a model, because a model is supposed to be missing details required for the real thing. What you have is a domain-specific programming language. In other words, you basically have a 4GL masquerading under a hipper pseudonym.
    The only really interesting twist is that traditional 4GLs tend to be sealed black boxes, while most template-based code generators are white (or at least light gray) boxes that are open for extension.
    “Model” interpretation sounds like a wholesale return to the 4GL, because even if the box were theoretically open, the skills required to tinker with it would likely be prohibitive.

  30. Angelo Hulshout July 11, 2010 | Reply

    Hello Erik,
    an interesting point of view, that allows one to agree or disagree. I tend to disagree, in relation to a few points in your argument.
    First, you mix the notions of model and language – a language is used to express things, which could be called models, but the model is not the language and vice versa.
    Second, 4GLs and DSLs are overlapping domains, but the definition that we work with (which is an implicit assumption in Johan’s summary of the BoF) states that DSLs are _created_ for a specific domain, by people working in that domain. This could hold true for 4GLs as well, but looking at examples such as those listed on the infamous Wikipedia, 4GLs are created by tool vendors and cover broad, generic and rather technology oriented domains. What you state as being the ‘only really interesting twist’ is, to me, key to the whole topic of DSLs, model interpretation and code generation.
    Third, you state that a model is supposed to be missing details required for the real thing. That could be true, but I’d rather say that in the context of DSLs and models expressed using DSLs, the model omits details that are not relevant to describing the problem to be solved. These details are filled in by the generator or interpreter, and we can use any interpretation we like to realise ‘the real thing’. In fact, omitting certain details from a model allows us to create multiple generators and/or interpreters that allow implementation of the same model on different platforms. The key point is that we can do this without changing the model. If we were to follow your reasoning, wouldn’t we end up concluding that the only real model is the working application? That would be a bit limiting to what we want to achieve in software engineering, I think…
    Finally, your last point about model interpretation being a wholesale return to 4GL makes no sense at all, from my point of view. In an environment where we define our own, specific DSLs, and interpreters or generators, we have all the knowledge we need to make changes. Plus, model interpretation is about interpreting models in a running system – allowing for run-time flexibility, while most 4GLs focus(ed) on generating something that needs to be compiled into a running system with more or less fixed features.

  31. Erik Engbrecht July 11, 2010 | Reply

    @Angelo
    (1) My intent was not to conflate them, perhaps I chose the wrong wording. Yes, traditionally you have a model expressed in a modeling language, just like you have a program expressed in a programming language. I personally believe this dichotomy is unfortunate, but that’s another topic entirely…
    (2) You’re basically creating a fuzzy distinction between a 4GL and a DSL based on use case. 4GLs traditionally have a strong separation between “language vendor” and “language user”, while in DSLs it’s somewhat fuzzier. Some of the comments here point towards wanting to acquire DSLs, some towards developers creating them, and some towards a hybrid model. But I’d argue that who created it doesn’t change what it is.
    (3) C++ allows me to leave out all sorts of details that are required for assembly. Java allows me to leave out all sorts of details that are required for C++. Various other programming languages allow me to leave out details required by Java. These all allow me to work at higher levels of abstraction with fewer dependencies on specific concrete components. DSLs are programming languages that target specific, and potentially very narrow, domains. They may not even be Turing complete. But they still have compilers (code generators) or interpreters. A compiler may compile the same program for multiple platforms. A compiler may compile to an intermediate representation (e.g. bytecode) that’s then compiled or interpreted or both by something else. Various dependencies may not be resolved until runtime.
    I commend the goals but there is nothing new about them, and I see very little that’s new (other than the words) in DSL or MDD approaches.
    The real model is not the working application: the working application has all of the details necessary to be a working application, hence it is not a model at all. I’m arguing that if something describes an application in sufficient detail that it can be translated into a working program or interpreted, then the language used to describe it is a programming language, not a modeling language. A DSL in this case is a special purpose programming language, much like a 4GL but more democratized in its creation.
    (4) You’re talking about people creating their own 4GLs, as opposed to buying them, and evolving them quickly as opposed to waiting for new releases. If people follow through with that use case then the differences are probably significant enough to warrant a new term. But if people buy their DSL from a vendor, and just use it off the shelf, then it is just a 4GL by another name. The difference between a 4GL and a DSL is not in what they are, but rather in how they are intended to be developed and the intended breadth of the audience.
    Regarding compiling versus interpreting 4GLs: when 4GLs were hip we had a lot less computing power and thus could afford a lot less indirection at runtime. If they were created today, they’d be much more likely to be interpreted, just like so many modern programming languages are either directly interpreted or compiled to a high-level bytecode which is then interpreted.
    The bottom line here is that I read the discussion here, in other places, in books, magazines, and sales brochures, and I have a difficult time separating out what is new from what is renamed. This is critical, because there are things in the past that worked and things that didn’t. Alan Kay likes to say our industry has a habit of “reinventing the flat tire.” Based on what I read it’s hard to tell. Based on the tools I’ve used, it seems just like a reinvented flat tire. I’m trying to sort out the substance from the rebranding.

  32. Andriy Levytskyy July 12, 2010 | Reply

    @Erik, unfortunately I can relate very well to your last paragraph. I had the “reinventing the wheel” feeling when I came to MDD/MDA software industry from a modeling and simulation community. Fortunately, not everything is reinvented.
    MDD and DSLs are the next step in the evolution we have seen in programming languages (including 4GLs). The goals are pretty much the same: higher productivity by providing developers with higher and more human friendly abstractions and by automating the tasks required to make more abstract programs executable. Language abstractions were raised from machines to platforms to technologies (the last two milestones are the scope of MDD). The main abstraction trend – being specific to the human user’s domain – is shared by 4GLs and DSLs. Furthermore, on the “mechanical level” of means, 4GL interpreters/compilers and DSL model interpreters/transformations are conceptually similar.
    It also does not help that the model concept is heavily overloaded and weakly defined in the software industry. What everyone knows is that a model is an abstraction of a system. What is missing (or forgotten) in this definition is that it is a *proper abstraction for the goals* (be it system representation or specification) of a *modeler*. For example: C code is a model of a system in the eyes of a C programmer; however, it is not a model in the eyes of a business analyst, simply because C code does not provide the abstractions needed for the analyst’s goals. And the abstraction space is large and there are many modelers… (I think that both you and Angelo are talking about the same thing and you both are right.) It is easy to see how the difference between DSLs and 4GLs can be a matter of perspective. To further complicate things, MDD does not proscribe creating DSLs that are very much 4GLs. (In fact I do that routinely with MDD tools if that solves the problem.)
    IMO the difference between DSLs and 4GLs is of a more incidental, implementation nature. Perhaps nowadays good DSL tools can be more efficiently customized to new domains than was the case with the development of 4GLs? And due to the size of the domains, end users are more involved in DSL definition? Hence the important twist Erik mentioned and Angelo’s stress on specific domains as opposed to broad, generic and rather technology oriented domains. Indeed, DSL/MDD solutions tend to be white box (although there are plenty of black box MDD solutions whose existence is possible due to the lack of accepted tool quality benchmarks).
    And BTW, model interpreters can be white boxes as well – it was mentioned in this thread that MDD is applied to develop interpreters (but that is a subject for another discussion).

  33. Angelo Hulshout July 12, 2010 | Reply

    That sums it up pretty well, Andriy. Thanks for bridging the gap.
    @Erik: thanks for clarifying. I think we agree on a number of points, and less on others (as I more or less indicated already in my previous response).
    However, regarding your last reply:
    Re (3): true, but now you are talking about creating a ‘model’ or ‘program’ in a specific language and leaving out the details required by another language. What I tried to express is that we can create a model in a DSL, leaving out the details of _all_ programming languages and platforms, and generate code for any of them without changing the model (just replacing the generator or the interpreter). I have to add that for me, and many others, an MDD solution typically consists of three parts: a platform that contains the basic features of a set of derived products, a modeling language that is used to model the intended purpose of the individual products in the set, and a generator or interpreter that bridges the gap between the platform and the models.
    Re (4) we agree completely – that use case is exactly what I appreciate in MDD and DSLs compared to 4GL and other stuff we tried before. As Andriy put it, it’s the next step and very likely we’ll have to take more steps to achieve the goal we’re after: better software, faster and more efficiently.

  34. Gonçalo Borrêga August 4, 2010 | Reply

    Great summary!
    I would add to the list of pros of code generation the fact that you do not get vendor-locked. Whether we want it or not (at least until a standard on MDD is reached), anyone who ventures into MDD gets locked to the vendor of the engine/tool/models of choice (or has to build their own). If the transformation engine is able to generate “good” code you can, at any time, go back to the usual way of development.
    On the other hand… although most of our customers asked to have it… no one ever dropped out 🙂

  35. Spencer Zhu August 5, 2010 | Reply

    Can you give titles of “Model Interpretation” products? Which companies?

  36. Johan den Haan August 7, 2010 | Reply

    Hi Spencer,
    An example of a company / product using the Model Interpretation approach is Mendix (the company I work for). See http://www.mendix.com

  37. Rafael Chaves August 7, 2010 | Reply

    I ended up writing a post that is a segue to this discussion:
    http://abstratt.com/blog/2010/08/07/model-interpretation-vs-code-generation-both/
    The gist is that even if you decide to go with code generation for building an application, model interpretation is a superior approach for developing the models themselves.
    Feedback appreciated.

  38. Stefano Butti May 16, 2011 | Reply

    Very interesting discussion, thank you for sharing!
    After reading it I collected my thoughts and I came to the conclusion that Code Generation is better than Model Interpretation. You can find the reasons here:
    http://blog.webratio.com/2011/05/16/why-code-generation-is-better-than-model-interpretation-from-our-customers%e2%80%99-point-of-view/

  39. Flohack August 7, 2012 | Reply

    After a long Google search I was pointed here… I wonder why more people are not dealing with such thoughts these days, as it seems quite logical to me to come to these conclusions one day hehe… Back to my search: I am looking for practical implementations of model-interpreting environments, because I deal with software where any downtime is a critical factor – 24/7 industrial production logistics. But I only found theoretical studies and thoughts. Are there toolkits out there on the market which actually allow online changes to code? Possibly something which can be programmed in .NET? 🙂
    regards Florian

  40. Pedro J. Molina August 8, 2012 | Reply

    Hi Florian: if you are looking for model interpretation on the .NET platform, take a look at Essential: http://pjmolina.com/essential
