The concept of modularity gained strong momentum with the widespread adoption of the object-oriented (OO) approach in software development. Engineers and computer designers realized some time ago that it is cheaper and more efficient to build devices made of replaceable units. So if you run out of space on your hard disk, you can easily take it out and plug in a new, bigger one. Similarly, you can swap your CD reader for a DVD reader. The same kind of functionality came with OO software, where pieces of your code became self-contained and self-sufficient and could be easily plugged into other programs or replaced by other components providing the same or improved functionality.
The next logical step was to apply the same concepts to modeling. But this required specific design criteria and rules for building and maintaining models. The features of 'decomposability' and 'composability' are the most important ones. The decomposability criterion requires that a module should be an independent, stand-alone submodel that can be analyzed separately. On the other hand, the composability criterion requires that modules can be put together to represent more complex systems.
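The two criteria can be illustrated with a minimal sketch in Python. The `Module` and `CoupledModel` classes below, along with their simple decay-and-inflow dynamics, are hypothetical names invented for illustration, not part of any particular modeling framework: decomposability means a module can be run and analyzed on its own; composability means modules can be wired together into a larger system.

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    """A stand-alone submodel: owns its own state and update rule."""
    name: str
    state: float

    def step(self, dt: float, inflow: float = 0.0) -> float:
        # Decomposability: this update needs nothing outside the module,
        # so the module can be tested and analyzed in isolation.
        self.state += (inflow - 0.1 * self.state) * dt
        return self.state

@dataclass
class CoupledModel:
    """Composability: independent modules wired into a larger system."""
    modules: list = field(default_factory=list)

    def step(self, dt: float) -> None:
        # Hypothetical coupling scheme: each module's output feeds the next.
        flow = 0.0
        for m in self.modules:
            flow = m.step(dt, inflow=flow)

# Analyze one module on its own ...
pool = Module("pool", state=1.0)
pool.step(0.1)

# ... then compose the same kind of module into a larger model.
system = CoupledModel([Module("a", 1.0), Module("b", 0.5)])
system.step(0.1)
```

The point of the sketch is only the shape of the interfaces: nothing in `Module` refers to the whole system, and `CoupledModel` knows modules only through their `step` method.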
Decomposability is mostly attained at the conceptual level, when modules are identified among the variety of processes and variables that describe the system. There is a lot of arbitrariness in choosing the modules. The choice may be driven either by purely logical, physical, or ecological considerations about how the system operates, or by quantitative analysis of the whole system, when certain variables and processes are identified as relatively independent of the others.
The composability of modules is usually treated as a software problem. One way to resolve it is to use wrappers that enable modules to publish their functions and services through a common high-level interface specification language (the federation approach). The alternative is to design a model specification formalism that draws on the OO methodology and embeds modules within the context of a specific modeling environment providing all the software tools essential for simulation development and execution (the specification approach). In both cases, as models move into the realm of software developers, the gap between the engineering and the research views on models and their performance starts to grow.
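The federation approach can be sketched as follows. The class names (`LegacyGrowthModel`, `ModuleWrapper`), the `advance_biomass` service name, and the growth rule are all hypothetical, invented for illustration: the wrapper's job is to let a module with its own native interface publish its services through one shared calling convention that every federate understands.

```python
class LegacyGrowthModel:
    """An existing module with its own native interface."""
    def advance(self, biomass: float, days: float) -> float:
        return biomass * (1.0 + 0.02 * days)

class ModuleWrapper:
    """Publishes a module's services through a common high-level
    interface, so a federation can call any wrapped module uniformly."""
    def __init__(self, model):
        self.model = model

    def services(self) -> list:
        # Advertise what this module can do, by name.
        return ["advance_biomass"]

    def call(self, service: str, **kwargs):
        # Translate the common interface into the module's native calls.
        if service == "advance_biomass":
            return self.model.advance(kwargs["biomass"], kwargs["days"])
        raise ValueError(f"unknown service: {service}")

wrapped = ModuleWrapper(LegacyGrowthModel())
result = wrapped.call("advance_biomass", biomass=10.0, days=5)
```

A caller in the federation never sees `advance` directly; it discovers `advance_biomass` through `services()` and invokes it through `call()`, which is what makes modules interchangeable behind the wrapper.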
From the software engineering viewpoint the exponential growth of computer performance offers unlimited resources for the development of new modeling systems.
With the advent of the Internet, the vision was to assemble models from building blocks connected over the Web and distributed over a network of computers. New languages and development tools started to appear to facilitate this process, in many cases even faster than user communities could form around them.
On the other hand, from the research viewpoint, if a model is to be a useful simplification of reality, it should enable a more profound understanding of the system of interest. It is more important as a tool for understanding processes and systems than for merely simulating them. In this context there is a more limited demand for the overwhelming complexity of modeling systems. The existing software may remain on the shelf if it does not really help understand the systems. This is probably especially pertinent to models in biology and ecology, where, in contrast to physical science or engineering, the models are much looser and tend to 'black-box' much of the underlying complexity, because it is difficult to parametrize and simulate all the mechanisms from a first-principles basis. They may require a good deal of analysis, calibration, and modification before they can actually be used. In this case the focus is on model and module transparency and openness. For research purposes it is much more important to know all the nuts and bolts of a module in order to use it appropriately. The 'plug-and-play' feature that is so much advocated by some software developers becomes of lower priority. In a way it may even be misleading, creating the illusion of simplicity of model construction from prefabricated components, with no real understanding of process, scale, and interaction.
Major requirements for a modular model are as follows:
• Expandability. The modules should be designed in such a way that new modules could be easily added and existing modules modified.
• Scalability. There should be some clear idea of scale attached to each module. Either a module is designed only for a specific scale, and this scale is clearly identified, or the scale is encapsulated in the module so that it can adjust depending on the scale used in other modules.
• Transparency. Modules should be easy to explore and understand. This is a prerequisite of them being reused. Documentation is crucial.
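The three requirements above can be reflected directly in how a module base class is designed. The sketch below is a hypothetical illustration (the class names, the `time_step_days` attribute, and the `describe` method are invented, not drawn from any real framework): expandability comes from adding new modules by subclassing without touching existing ones, scalability from declaring the module's scale as an explicit attribute rather than burying it in the equations, and transparency from each module documenting and describing itself.

```python
class Module:
    """Base class illustrating the three requirements of a modular model.
    Scalability: the module's temporal scale is an explicit, inspectable
    attribute. Transparency: every module carries a docstring and can
    report its own description."""
    time_step_days = 1.0  # scale is declared, not hidden in the code

    def describe(self) -> dict:
        # Transparency: expose what this module is and at what scale it runs.
        return {
            "class": type(self).__name__,
            "scale_days": self.time_step_days,
            "doc": type(self).__doc__,
        }

    def step(self):
        raise NotImplementedError

# Expandability: a new module is added by subclassing the base class,
# without modifying any existing module.
class DailyRainfall(Module):
    """Feeds observed daily rainfall to the rest of the model."""
    def step(self):
        return 0.0  # placeholder dynamics for the sketch

info = DailyRainfall().describe()
```

A caller can inspect `info["scale_days"]` before coupling this module to others, which is exactly the kind of check the scalability requirement asks for.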