One of the major problems in incorporating industrial ecology concepts in design is that of data inconsistency and dispersion. Even for a narrowly defined production process, the necessary information is highly dispersed and held in various forms. These inconsistencies can be attributed to one or more of the following factors: (a) non-comparable units of measurement; (b) uncertainties in the assumptions; (c) confidential and non-verifiable data and data from unreliable sources; (d) measurement uncertainties; and (e) data violating laws of physics.
The laws of conservation of mass and energy apply to every process network. They are, therefore, applicable to every firm and every industry that is in a steady state. This means that, for every process or process chain, the mass of inputs must equal the mass of outputs, including wastes. Moreover, in many processes, non-reactive chemical components, such as process water and atmospheric nitrogen, can also be independently balanced. Thus several independent material balance constraints may have to be satisfied for each process. In short, systematic use of material balance conditions can increase the accuracy of empirical data by reducing error bounds (Ayres 1995a, 1995b). Alternatively, the material balance conditions can be used to 'fill in' missing data. Furthermore, material balance conditions are not the only basis for data augmentation: energy conservation, constitutive relationships or statistical methods can also be used.
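The use of a material balance to 'fill in' a missing datum can be sketched as follows. The process, stream names and figures are made-up placeholders, not data from any real inventory: an unmeasured waste stream is recovered from the steady-state requirement that total mass in equals total mass out.

```python
# Hypothetical steady-state process with measured stream masses (tonnes/yr).
# The waste stream was not measured; the overall mass balance
#   sum(inputs) == sum(outputs)
# lets us infer it from the measured streams.
inputs = {"ore": 120.0, "coke": 30.0, "air": 410.0}
outputs = {"product": 95.0, "flue_gas": 440.0}  # waste stream missing

waste = sum(inputs.values()) - sum(outputs.values())
print(waste)  # 25.0 tonnes/yr inferred for the unmeasured waste stream
```

The same idea extends to independent balances on non-reactive components (for example, balancing nitrogen separately), each of which adds one more equation constraining the unknowns.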
Process simulators are based on mass and energy balance principles. They utilize thermodynamic models and data, and hence are ideally suited for imposing these constraints on the available data. However, the constraints and data involved are not restricted to mass and energy balance principles, and are available in various forms. For example, it is common practice to report undetectable quantities of emissions in terms of the detection limit (or least count) of the measuring instrument (specifying that the data may be less than or equal to the detection limit). Sometimes the data are reported in order-of-magnitude terms (for example, refer to Case 3 in Ayres 1995a, where the Benzo(a)pyrene content is reported to be much smaller than 0.0001). Furthermore, discrete, categorical information about the occurrence or non-occurrence of particular reactions, or the presence or absence of reaction by-products, may be available.
Given that knowledge is available in various forms (for example, quantitative models for material and energy balances, order-of-magnitude information, qualitative information and logical information), a unified framework that incorporates information of each type in its inference is desirable. Optimization methods combined with artificial intelligence techniques, as proposed in Kalagnanam and Diwekar (1994), provide such a framework, in which information can be represented as inequality constraints. Unlike numerical methods for solving equations (equality constraints), optimization methods can handle both equality and inequality conditions and hence can be used to make inferences from data in various forms.
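A minimal illustration of inference from mixed equality and inequality information, using invented numbers: a mass balance (equality constraint) is combined with a detection-limit report (inequality constraint) to derive upper and lower bounds on an unmeasured stream, rather than a single value.

```python
# Hypothetical streams (kg/h). The emission was reported only as
# "<= detection limit", i.e. an inequality, not a point value.
total_in = 100.0      # total mass entering the process
product = 92.0        # measured product stream
emission_max = 0.5    # detection limit of the measuring instrument

# Mass balance (equality): product + emission + residue == total_in
# Emission bounds (inequality): 0 <= emission <= emission_max
residue_lo = total_in - product - emission_max  # emission at its maximum
residue_hi = total_in - product                 # emission at zero
print(residue_lo, residue_hi)  # 7.5 8.0
```

For larger problems with many such constraints, the same reasoning is carried out systematically by posing bound estimation as an optimization problem, as in the framework of Kalagnanam and Diwekar (1994).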
As stated earlier, methods for assessing economic impacts and profitability have been available for a number of years. However, methods and measures for characterizing environmental impacts and sustainability are as yet in their infancy. Recent attempts at defining ecological impacts for use in life cycle assessment and similar industrial ecology applications include the environmental burden system by ICI (Wright et al. 1998), sustainability indicators by Tyteca (1999), ecological risk indicators described by Koenig and Cantlon (2000), exergy as a unifying indicator for material and energy transformation (Ayres 1995b), environmental damage indices (DeCicco and Thomas 1999) and the generalized waste reduction (WAR) algorithm (Cabezas et al. 1997; Cabezas et al. 1999; Young and Cabezas 1999). The WAR algorithm uses a series of indices characterizing different environmental, social and economic impacts. With WAR, the potential environmental impact is defined in terms of the pollution index, calculated by multiplying the mass of each pollutant emitted by a measure of its potential impact, then summing over all pollutants. This index is a carefully constructed function encompassing a comprehensive list of human health and environmental impacts for each chemical (see Table 11.3). However, like the other methods described above, the WAR index provides a highly simplified representation of environmental impacts. For example, effects of pollutants emitted to different media are not differentiated in the WAR algorithm. Chemical exergy content likewise provides only a partial insight into environmental impact, since it cannot be directly linked to toxicity to humans or other organisms. Nonetheless, these impact assessment methods provide a first-order qualitative indication of the environmental damage and hence a useful starting point for analysis.
Table 11.3 The potential environmental impact categories used within the WAR algorithm

Local toxicological impacts
  Human:
    Human toxicity potential by ingestion (HTPI)
    Human toxicity potential by exposure, dermal and inhalation (HTPE)
  Ecological:
    Aquatic toxicity potential (ATP)
    Terrestrial toxicity potential (TTP)
Global atmospheric impacts
    Global warming potential (GWP)
    Ozone depletion potential (ODP)
    Acidification, or acid rain potential (ARP)
    Photochemical oxidation potential or smog formation potential (PCOP)
Recently, the WAR algorithm was added to the ASPEN simulator to allow consideration of the eight environmental impacts shown in Table 11.3. This was easily done, since chemical process simulators keep track of the mass balance and emissions information required for calculation of these indices. Similarly, the unified indicator based on exergy proposed by Ayres (1995b) is readily computed using process simulation technology, since most commercial simulators include a unit operation block based on Gibbs free energy minimization.
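The core index calculation is straightforward once the simulator supplies the emission mass flows. The sketch below uses made-up chemicals, emission rates and impact scores purely for illustration; the real WAR algorithm aggregates normalized scores over all eight categories in Table 11.3.

```python
# WAR-style pollution index: multiply the mass emission rate of each
# pollutant by its specific potential impact score, then sum.
# All names and numbers here are invented placeholders.
impact_score = {"benzene": 0.19, "toluene": 0.08, "SO2": 0.55}  # per kg
emissions = {"benzene": 10.0, "toluene": 25.0, "SO2": 4.0}      # kg/h

index = sum(emissions[c] * impact_score[c] for c in emissions)
print(round(index, 2))  # 6.1 (impact units/h)
```

Because the emission terms come directly from the simulator's converged mass balance, the index updates automatically as the flowsheet design changes.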
Once different environmental impacts are calculated, they must be weighted and balanced against each other, as well as against other concerns, such as cost and long-term sustainability. These multiple, often conflicting, goals pose significant challenges to process optimization and design. How can designs be identified that best satisfy multiple objectives? Multi-objective optimization algorithms provide a particularly useful approach, aimed at determining the set of non-dominated ('Pareto') designs, where a further improvement in one objective can only be made at the expense of another. This determines the set of potentially 'best' designs and explicitly identifies the trade-offs between them. This is in contrast to cost-benefit analysis, which deals with multiple objectives by identifying a single fundamental objective and then converting all the other objectives into this single currency. The multi-objective approach is particularly valuable in situations where there are a large number of desirable and important production, safety and environmental objectives which are not easily translated into dollars. Formulation of a process simulation and optimization model with multiple objectives is illustrated in the following section, with particular application to the HDA benzene synthesis problem.
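The notion of a non-dominated (Pareto) design can be made concrete with a small sketch. The candidate designs and their scores are hypothetical; each design is evaluated on two objectives to be minimized, and a design stays on the Pareto set unless some other design is at least as good in every objective and strictly better in one.

```python
# Hypothetical candidate designs scored on (cost, environmental impact),
# both to be minimized.
designs = {
    "A": (10.0, 5.0),
    "B": (12.0, 3.0),
    "C": (11.0, 6.0),  # worse than A in both objectives -> dominated
    "D": (9.0, 7.0),
}

def dominates(p, q):
    """p dominates q: no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

pareto = [name for name, p in designs.items()
          if not any(dominates(q, p)
                     for other, q in designs.items() if other != name)]
print(sorted(pareto))  # ['A', 'B', 'D']
```

Among the survivors, moving from A to B buys lower environmental impact at higher cost, and moving from A to D buys lower cost at higher impact: exactly the explicit trade-offs the Pareto set is meant to expose, with no conversion to a single currency required.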