Data

To demonstrate the ecological application of the MLP, we used the brown trout redd data set reported by Lek et al. in 1996. Sampling was carried out at 29 stations distributed over six rivers and subdivided into 205 morphodynamic units. Each unit corresponds to an area where depth, current, and gradient are homogeneous. The physical characteristics of the 205 morphodynamic units were measured in January, immediately after the brown trout reproduction period; they therefore most faithfully reflect the conditions met by the trout during reproduction. The ten physical habitat variables are described in Table 1.

Preprocessing

The variables have different ranges of values and different units. If a variable takes relatively large values, it can dominate the training or paralyze the model, so data transformation is recommended. In this example, the input (i.e., environmental) variables were transformed by variance normalization (standardization), which makes them dimensionless, and the output variable (trout redd density) was transformed by min-max normalization into the range 0-1. The data set of 205 samples was divided into three subsets for training (103), validation (51), and testing (51).
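As a hedged illustration of these transformations and of the data split, the following Python sketch (using NumPy; the array names and the random placeholder data are hypothetical stand-ins for the measured habitat variables) applies standardization to the inputs, min-max scaling to the output, and a 103/51/51 split:

```python
import numpy as np

# Hypothetical arrays: X holds the ten habitat variables (205 x 10),
# y holds trout redd density R/M (205,). Random placeholders only.
rng = np.random.default_rng(0)
X = rng.random((205, 10))
y = rng.random(205)

# Variance normalization (standardization) of the inputs: zero mean, unit variance.
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Min-max normalization of the output into the range 0-1.
y_scaled = (y - y.min()) / (y.max() - y.min())

# Split the 205 samples into training (103), validation (51), and testing (51) subsets.
idx = rng.permutation(len(X_std))
train_idx, valid_idx, test_idx = idx[:103], idx[103:154], idx[154:]
X_train, y_train = X_std[train_idx], y_scaled[train_idx]
X_valid, y_valid = X_std[valid_idx], y_scaled[valid_idx]
X_test,  y_test  = X_std[test_idx],  y_scaled[test_idx]
```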

MLP model training

The model stabilized after 280 training iterations. The sums of squared errors (SSEs; i.e., the squared differences between the desired target values and the estimated model outputs) for training, validation, and testing are given in Figure 6.
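A minimal sketch of such a training loop, assuming scikit-learn's MLPRegressor with one hidden layer of five logistic neurons (matching the 10-5-1 architecture cited with Figure 10) and the data splits from the preprocessing sketch above, records the three SSE curves at each iteration; the solver and learning-rate settings are assumptions, not those of the original study:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Assumes X_train, y_train, X_valid, y_valid, X_test, y_test from the
# preprocessing sketch above. 280 iterations follow the text; the solver
# settings are illustrative assumptions.
mlp = MLPRegressor(hidden_layer_sizes=(5,), activation="logistic",
                   solver="sgd", learning_rate_init=0.1, random_state=0)

sse = {"train": [], "valid": [], "test": []}
for epoch in range(280):
    mlp.partial_fit(X_train, y_train)  # one pass over the training data
    for name, (X_, y_) in {"train": (X_train, y_train),
                           "valid": (X_valid, y_valid),
                           "test":  (X_test,  y_test)}.items():
        sse[name].append(np.sum((y_ - mlp.predict(X_)) ** 2))
```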

Table 1 Variables used in the model to predict the density of brown trout reproduction

Variable   Input/output   Description
Width      Input          River width (m)
ASSG       Input          Area with suitable spawning gravel for trout per linear meter of river (m2/linear m)
SV         Input          Surface velocity (m s-1)
GRA        Input          Water gradient (%)
Fwi        Input          Flow/width (m3/s per m)
Depth      Input          Mean depth (m)
SDD        Input          Standard deviation of depth (m)
BV         Input          Bottom velocity (m s-1)
SDBV       Input          Standard deviation of bottom velocity (m s-1)
VD         Input          Mean velocity/mean depth (m/s per m)
R/M        Output         Density of trout redds per linear meter of streambed (redds/m)

"o

0.04

0.02

Training SSE Validation SSE Testing SSE

101 151 Iteration

Figure 6 Changes of SSEs during the training process of the MLP model.

Results of example MLP model

Figure 7 shows the relations between observed output values and the values calculated by the trained MLP model; the coefficients of determination (R2) are 0.54, 0.67, and 0.49 for training, validation, and testing, respectively. The residuals, i.e., the differences between observed and estimated values, are also plotted against the estimated values (Figure 7). In all three cases the residuals are scattered around the zero line.
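A short sketch of this evaluation, assuming the mlp model and data splits from the sketches above, computes R2 with scikit-learn and plots observed versus estimated values together with the residuals:

```python
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score

# Assumes mlp, X_train, y_train, ... from the sketches above.
fig, axes = plt.subplots(2, 3, figsize=(10, 6))
for col, (name, X_, y_) in enumerate([("training", X_train, y_train),
                                      ("validation", X_valid, y_valid),
                                      ("testing", X_test, y_test)]):
    y_hat = mlp.predict(X_)
    # Top row: observed vs. estimated values, with R2 in the panel title.
    axes[0, col].scatter(y_hat, y_)
    axes[0, col].set_title(f"{name}: R2 = {r2_score(y_, y_hat):.2f}")
    # Bottom row: residuals (observed - estimated) against estimated values.
    axes[1, col].scatter(y_hat, y_ - y_hat)
    axes[1, col].axhline(0.0)
plt.show()
```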

Multiple linear regression and MLP

The MLP can be compared with multiple linear regression (MLR), the method most frequently used in ecology, in terms of predictive capacity. A stepwise multiple regression was computed with the same data set used for the MLP. The model gave R2 = 0.4695 (Figure 8), showing lower predictability than the MLP model (Figure 7). Table 2 shows the variable coefficients obtained by the MLR model and their statistical significance. The coefficients can be used to identify the significant variables and rank their contributions. The standardized regression coefficients, estimated after standardization of the data set, can be used to compare the relative importance (or influence) of the variables on the output; the nonstandardized coefficients cannot be used for this comparison, but they are the ones used in the prediction model.
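As a rough, hedged equivalent (ordinary least squares via statsmodels, omitting the stepwise selection used in the original analysis, and assuming the raw X and y arrays from the preprocessing sketch), the coefficients, standard errors, t values, and p values of Table 2 can be obtained as follows, with standardized coefficients from a refit on z-scored variables:

```python
import statsmodels.api as sm

# Assumes X (raw habitat variables, 205 x 10) and y (redd density) as in the
# preprocessing sketch; stepwise variable selection is omitted for brevity.
X_const = sm.add_constant(X)          # adds the intercept column
ols = sm.OLS(y, X_const).fit()
print(ols.summary())                  # coefficients, std errors, t, Pr(>|t|)

# Standardized coefficients: refit after z-scoring every variable, so the
# coefficients become comparable measures of relative influence.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
yz = (y - y.mean()) / y.std()
ols_std = sm.OLS(yz, sm.add_constant(Xz)).fit()
print(ols_std.params)
```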

In the MLP model, the numbers of hidden layers and hidden neurons strongly influence the performance of the model, and they should be chosen carefully to avoid overfitting and to keep the model efficient. An overtrained or overfitted network loses its capacity to generalize. Three parameters are responsible for this phenomenon: the number of epochs, the number of hidden layers, and the number of neurons in each hidden layer. Determining appropriate values for these elements is therefore a key step in MLP modeling.
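One common, hedged way to choose the architecture is to train candidate networks and select the one with the lowest validation SSE (never the test SSE); the candidate sizes below are illustrative only and assume the data splits from the preprocessing sketch:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Candidate hidden-layer sizes are illustrative; the model is selected by
# validation SSE, keeping the test set untouched for the final evaluation.
candidates = [(3,), (5,), (10,), (5, 5)]
best = None
for size in candidates:
    m = MLPRegressor(hidden_layer_sizes=size, activation="logistic",
                     solver="adam", max_iter=2000,
                     random_state=0).fit(X_train, y_train)
    val_sse = np.sum((y_valid - m.predict(X_valid)) ** 2)
    if best is None or val_sse < best[0]:
        best = (val_sse, size, m)
print("selected architecture:", best[1])
```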


Figure 7 Relations between observed values and values calculated by the MLP model: (a) training data set, (b) validation data set, and (c) testing data set. (d)-(f) Residuals of the output values for the training, validation, and testing data sets, respectively.

Figure 8 Relations between observed values and estimated values by the MLR model. The data set used in the MLP model was also used in the MLR model.

Table 2 Coefficients of variables defined by the MLR model and their statistical significance

Variable      Coefficient   Standard error   t         Pr(>|t|)
(Intercept)    1.3374        0.2419           5.5282    0.0000
Width         -0.0192        0.0163          -1.1739    0.2419
ASSG           0.4791        0.0524           9.1384    0.0000
SV            -0.5698        0.2638          -2.1596    0.0320
GRA           -0.0469        0.0218          -2.1472    0.0330
Fwi            1.3092        0.8075           1.6213    0.1066
Depth         -0.0128        0.0075          -1.7189    0.0872
SDD           -0.0076        0.0076          -1.0074    0.3150
BV             0.0096        0.0073           1.3274    0.1859
SDBV          -0.0143        0.0078          -1.8332    0.0683
VD            -0.0184        0.0923          -0.1991    0.8424

Contribution of variables in MLP models

To illustrate the contribution of the input variables and the sensitivity analysis in the MLP model, we present here the results of the PaD (partial derivatives) algorithm. Figure 9 presents the derivative plot for each input variable (a computational sketch of these derivatives is given after the list below).

• The partial derivative values of R/M with respect to Wi (width) are all negative: an increase of Wi leads to a decrease of R/M. For high values of Wi, the partial derivative values approach zero; thus R/M tends to become constant.

• The partial derivative values of R/M with respect to ASSG are all positive and very high for low values of ASSG: R/M increases with ASSG, and this increase progressively drops to zero at the highest values of ASSG.

• The partial derivative values of R/M with respect to SV are negative for low values of SV and near zero for higher values: R/M decreases as SV increases until it becomes constant at high values of SV.

• The partial derivative values of R/M with respect to GRA are negative for low values of GRA and near zero for higher values. R/M decreases with the increase of GRA and progressively becomes constant.

• For low values of Fwi, the partial derivatives of R/M with respect to Fwi are positive; they rapidly become negative and then rise back toward zero for high Fwi: an increase of Fwi leads to a short increase of R/M, then a decrease that attenuates and finally levels off at high values of Fwi.

Figure 9 Partial derivatives of the MLP model response (R/M) with respect to each independent variable (PaD algorithm, derivative profiles): (a) width; (b) ASSG; (c) SV; (d) GRA; (e) Fwi; (f) D; (g) SDD; (h) BV; (i) SDBV; (j) VD.

• All the partial derivative values of R/M with respect to D are negative: an increase of D leads to a decrease of R/M.

• The partial derivative of R/M with respect to SDD is positive or negative without a clear direction; it is therefore not possible to draw a firm conclusion about the action of SDD on R/M. This could, for instance, be due to an interaction between SDD and another variable.

• The partial derivative values of R/M with respect to BV are all positive: an increase of BV leads to an increase of R/M but to a lesser extent for the high values of BV.

• The partial derivative values of R/M with respect to SDBV are all negative: an increase of this variable leads to a decrease of R/M.

• The partial derivative values of R/M with respect to VD are almost all positive and near zero for high values of VD: an increase in this variable leads to an increase in R/M, and R/M becomes constant for high values of VD.
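For a single-hidden-layer MLP with logistic hidden units and a linear output, the PaD partial derivatives can be computed analytically from the trained weights. The sketch below assumes the scikit-learn mlp model and the standardized input matrix X_std from the earlier sketches; it illustrates the principle rather than reproducing the original implementation:

```python
import numpy as np

# PaD-style partial derivatives for a one-hidden-layer MLP with logistic
# hidden units and a linear output (as fitted with MLPRegressor above).
# For each sample x and input i:  dy/dx_i = sum_h W_out[h] * a_h * (1 - a_h) * W_in[i, h]
def pad_derivatives(mlp, X):
    W_in, W_out = mlp.coefs_              # (n_inputs, n_hidden), (n_hidden, 1)
    b_in = mlp.intercepts_[0]
    a = 1.0 / (1.0 + np.exp(-(X @ W_in + b_in)))   # hidden activations
    # One derivative per sample and per input variable.
    return (a * (1 - a) * W_out.ravel()) @ W_in.T

derivs = pad_derivatives(mlp, X_std)      # shape (n_samples, n_inputs)
```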

Figure 10 presents the relative contributions resulting from the application of the PaD method. The method is very stable, whatever the model, and has a narrow confidence interval.

Figure 10 Contribution of the ten independent variables (width, ASSG, SV, GRA, Fwi, D, SDD, BV, SDBV, VD) used in the 10-5-1 ANN model for R/M (PaD algorithm, relative contributions).

ASSG is the variable with the highest contribution (>65%), followed by GRA. The contributions of the other variables are much lower: the difference between SV, BV, and SDBV is not significant; then come VD and Wi, and finally D, Fwi, and SDD (among which the difference is again nonsignificant).
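The relative contributions shown in Figure 10 are typically obtained in the PaD method by summing the squared partial derivatives per variable and normalizing to percentages; a sketch building on the derivative function above (variable abbreviations as in Figures 9 and 10):

```python
import numpy as np

# Assumes derivs from the PaD sketch above: one row per sample, one column
# per input variable. Sum of squared derivatives per variable, as percentages.
ssd = np.sum(derivs ** 2, axis=0)
contribution = 100.0 * ssd / ssd.sum()
names = ["Wi", "ASSG", "SV", "GRA", "Fwi", "D", "SDD", "BV", "SDBV", "VD"]
for name, c in sorted(zip(names, contribution), key=lambda t: -t[1]):
    print(f"{name:5s} {c:5.1f} %")
```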
