
The neuron has a branching input structure (the dendrites), a cell body and a branching output structure (the axon).

The axons of one cell connect to the dendrites of another cell via a synapse. When a neuron is activated, it fires an electrochemical signal along the axon. This signal crosses the synapses to other neurons, which may in turn fire themselves. A neuron only fires if the total signal received by the cell body exceeds a certain level called the firing threshold. The strength of the signal received by a neuron depends critically on the nature of the synapse.

Each synapse consists of a gap across which neurotransmitter chemicals are poised to transmit the signal. Learning consists essentially of altering the strength of the synaptic connections. From a system consisting of a large number of very simple processing units, the brain appears able to carry out extremely complex tasks. An ANN is a greatly simplified model of this perception of the human brain. Accordingly, an ANN consists of a large number of processing elements called neurons. Each neuron has an internal state, called its activation or activity level, which is a function of all the inputs it has received.

It then sends a signal, whose value depends on its activation, to several other neurons. Typically, an ANN developed for modelling the connectivity between a time series input and a corresponding time series output will consist of three layers of neurons: an input layer with a number of specific inputs, a hidden layer containing a further set of neurons, and an output layer with one or more neurons. Note that -a0j is the threshold value, such that the function f is zero for arguments less than zero.
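The forward pass of such a three-layer network can be sketched as follows (a minimal illustration in Python with a logistic sigmoid activation; all names and sizes are invented for the example):

```python
import numpy as np

def sigmoid(x):
    # Logistic activation: maps the weighted input to the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def mlp_forward(x, w_hidden, b_hidden, w_out, b_out):
    """One forward pass of a three-layer perceptron.

    x        : input vector, shape (n_inputs,)
    w_hidden : input-to-hidden weights, shape (n_hidden, n_inputs)
    b_hidden : hidden-layer biases (the threshold terms)
    w_out    : hidden-to-output weights, shape (n_outputs, n_hidden)
    b_out    : output-layer biases
    """
    hidden = sigmoid(w_hidden @ x + b_hidden)   # hidden-layer activations
    return sigmoid(w_out @ hidden + b_out)      # output-layer activations

rng = np.random.default_rng(0)
y = mlp_forward(rng.normal(size=3),
                rng.normal(size=(5, 3)), rng.normal(size=5),
                rng.normal(size=(1, 5)), rng.normal(size=1))
```

With random weights the output is of course meaningless; training adjusts the weights so that the outputs reproduce the prescribed data.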

Other activation functions can also be chosen, such as the threshold function (a) and the linear or saturation function (b). This is the training process, also called the learning process, during which the weights aij and bj are optimised. Such methods are usually variants of gradient-based techniques such as Levenberg-Marquardt. This type of ANN is called a multi-layer perceptron. It is a feed-forward network, in that information is fed through from the input layer to the output layer.

It learns through back-propagation of the errors in reproducing the prescribed output data. This is called supervised learning. A successful implementation of an ANN depends on a number of unknowns. For example, what input data should be used for a given output? How many hidden layers should there be? How many neurons (nodes) should be used in a hidden layer? Can the number of nodes be reduced to limit the time taken in training the network? Generally, one hidden layer is sufficient to reproduce any non-linear function. Similarly, the number of nodes in the hidden layer is typically selected to be no more than twice the number of input nodes.

The validation set is used during the training to check whether the error on this set starts to increase even when the error on the training set is decreasing. This is a very brief introduction to ANNs for time series analysis on input and corresponding output data sets. ANNs can also be used very effectively for classification.
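The early-stopping use of the validation set can be sketched as follows (a minimal example that substitutes a linear model trained by gradient descent for the ANN; all data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.normal(size=200)

# Split into a training set and a validation set used only for monitoring.
X_tr, y_tr, X_va, y_va = X[:150], y[:150], X[150:], y[150:]

w = np.zeros(3)
best_w, best_val, bad_epochs, patience = w.copy(), np.inf, 0, 10
for _ in range(1000):
    grad = 2.0 / len(y_tr) * X_tr.T @ (X_tr @ w - y_tr)  # training-set gradient
    w -= 0.05 * grad
    val_err = np.mean((X_va @ w - y_va) ** 2)            # monitor, never train on
    if val_err < best_val:
        best_val, best_w, bad_epochs = val_err, w.copy(), 0
    else:
        bad_epochs += 1
        if bad_epochs >= patience:
            break   # validation error has stopped improving: stop training
```

The weights returned are those at the minimum of the validation error, not the final ones.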

These facilities can be very important in complementing the time series analysis facility in modelling. Choice of relevant variables The choice of variables is an important subject, and some studies suffer from a lack of relevant analysis. Apart from expert judgement and visual inspection, there are formal methods that help to justify this choice, and the reader is directed to the paper by Bowden et al.

Note that the input data may require pre-processing. Where there is a high number of inputs, methods such as principal component analysis (PCA) may help. Several main approaches to input selection can be distinguished. Of course, the initial set of candidate inputs is selected on the basis of expert judgement and a priori knowledge of the system being modelled. Further, stepwise selection of inputs can be employed: in forward selection we begin by finding the best single input, and in each subsequent step we add the input that improves the model performance most.
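The forward-selection loop can be sketched as follows (an illustrative example using a linear least-squares model as a stand-in learner and a hold-out set to score candidates; the data are synthetic):

```python
import numpy as np

def val_error(X_tr, y_tr, X_va, y_va):
    # Least-squares fit on the training part, error on the hold-out part.
    w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
    return np.mean((X_va @ w - y_va) ** 2)

def forward_selection(X, y, n_train):
    selected, remaining, best_err = [], list(range(X.shape[1])), np.inf
    while remaining:
        # Score each candidate input added to the current selection.
        scores = {j: val_error(X[:n_train][:, selected + [j]], y[:n_train],
                               X[n_train:][:, selected + [j]], y[n_train:])
                  for j in remaining}
        j_best = min(scores, key=scores.get)
        if scores[j_best] >= best_err:   # no candidate improves the model
            break
        best_err = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
    return selected, best_err

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 6))
y = 2.0 * X[:, 1] - 1.0 * X[:, 4] + 0.05 * rng.normal(size=300)
inputs, err = forward_selection(X, y, n_train=200)
```

On this synthetic example the two genuinely informative inputs (columns 1 and 4) are picked first.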

Backward elimination starts with the set of all inputs and sequentially removes the input whose removal reduces performance the least. An optimal way would be to train many models on various sets of inputs and select the model with the lowest error; we may run an exhaustive automated search across all possible combinations, or use a limited set of combinations. These methods require model runs for input selection. The so-called model-free approach, by contrast, is based either on statistical methods such as cross-correlation or on information-theory-based methods.

The information-theory-based approach determines the information content shared between, say, the time series input data (e.g. rainfall) and the corresponding output time series (e.g. discharge). Our own experience using the Average Mutual Information, AMI (Abebe and Price; Solomatine and Dulal), shows that this simple and reliable method can help in the selection of relevant input variables. The AMI is a measure of the information, in bits, that can be learned about one data set from comparison with another known data set.

If the measurement of a value from A (resulting in ai) is completely independent of the measurement of a value from B (resulting in bi), then the mutual information I(A;B) is zero. Compared with techniques such as linear correlation, the advantage of AMI is that it can detect non-linear relationships as well as linear ones, since it employs set-theoretic principles and is not bound to any specific functional form.
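A histogram-based estimate of the mutual information can be sketched as follows (an illustrative implementation; the estimate is sensitive to the number of class intervals, i.e. bins):

```python
import numpy as np

def average_mutual_information(a, b, bins=16):
    """Histogram estimate of the mutual information I(A;B) in bits.

    a, b : equal-length 1-D arrays (e.g. rainfall and lagged discharge).
    """
    p_ab, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = p_ab / p_ab.sum()                 # joint probabilities
    p_a = p_ab.sum(axis=1, keepdims=True)    # marginal of A
    p_b = p_ab.sum(axis=0, keepdims=True)    # marginal of B
    mask = p_ab > 0
    return float(np.sum(p_ab[mask] * np.log2(p_ab[mask] / (p_a @ p_b)[mask])))

rng = np.random.default_rng(3)
x = rng.normal(size=5000)
noise = rng.normal(size=5000)

# A quadratic dependence is detected even though the linear correlation
# between x and x**2 is near zero; independent data give near-zero AMI.
ami_dependent = average_mutual_information(x, x**2 + 0.1 * noise)
ami_independent = average_mutual_information(x, noise)
```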

However, for discrete measurements the actual values depend on technicalities such as the number of class intervals used to calculate the probability densities. The value of AMI in the case of rainfall-runoff modelling, say, is that it clearly determines the lag time between the rainfall and the runoff, thus enabling a proper choice of input data for the input layer of the ANN. Other machine learning modelling techniques ANNs are one technique that can be used for data-driven modelling. A range of other techniques has become popular in recent years, including:

- Nearest neighbour: based on the assumption that nearby points are more likely to be given the same classification than distant ones. The class label k of the nearest neighbour vnearest is forwarded as the result of the classification.
- Fuzzy rule-based systems: consist of input-output membership functions, fuzzy rules and an inference engine. Crisp inputs are fuzzified, the fuzzy rules are applied and the inference engine is used to recover a crisp output; see Bardossy and Duckstein.
- Genetic programming: a functional form is allowed to evolve according to prescribed evolutionary rules such that the resulting function most closely generates the output set given the input set.
- Decision trees: each node in a tree specifies a test of some attribute of the instance, and each branch descending from a node corresponds to one of the possible values of this attribute. An instance is classified by starting at the root node of the tree, testing the attribute specified by this node, then moving down the branch corresponding to the value of the attribute. This process is repeated for the sub-tree rooted at the new node (Witten and Frank).
- Support vector machines: the approximating function is chosen on how well it fits the verification set (minimising the structural risk) as well as the training set (minimising the empirical risk), using statistical learning theory; see Vapnik.
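The first of these, the nearest-neighbour classifier, can be sketched in a few lines (an illustrative example with invented data):

```python
import numpy as np

def nearest_neighbour_classify(v, examples, labels):
    """Return the class label of the training example closest to v."""
    distances = np.linalg.norm(examples - v, axis=1)
    return labels[int(np.argmin(distances))]

examples = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.8]])
labels = ["low", "low", "high", "high"]
print(nearest_neighbour_classify(np.array([4.9, 5.1]), examples, labels))  # high
```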

Chaos Besides the connectionist techniques above, there are other techniques that consider the underlying structure of a time series. This is done using nearest-neighbour techniques and by progressively increasing the embedding dimension d. Finally, the stability of the system is determined by the Lyapunov exponents. These are determined by studying the separation of two points, a0 and b0, on two trajectories after some number n of iterations.
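The idea can be illustrated on the logistic map, a standard chaotic system (not taken from the text): the largest Lyapunov exponent is the long-run average logarithmic rate at which neighbouring trajectories separate, computed here from the map's derivative along one trajectory:

```python
import numpy as np

def largest_lyapunov_logistic(r, x0=0.3, n=100_000):
    """Largest Lyapunov exponent of the logistic map x -> r*x*(1-x).

    The exponent is the long-run average of log|f'(x)| along a trajectory,
    i.e. the mean exponential rate of separation of neighbouring points.
    """
    x, total = x0, 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        total += np.log(abs(r * (1.0 - 2.0 * x)))  # log of the local stretching
    return total / n

lam = largest_lyapunov_logistic(4.0)  # theory gives ln 2: a positive exponent
```

A positive result, as here, signals exponential divergence of nearby trajectories, i.e. chaos.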

A zero exponent means that the neighbouring points remain at the same distance from each other, and the system can be modelled by a set of differential equations. Negative Lyapunov exponents indicate that the neighbouring points are converging and the system is dissipative. For a good introduction to chaos theory, see Abarbanel. See also Solomatine et al. for an example of the application of chaos theory to the prediction of surge water levels in the North Sea close to Hoek van Holland. Applications of data driven modelling An obvious use of ANNs is in modelling the rainfall-runoff process, or in routing flows from one point to another along a river.

Other uses include prediction of currents in the sea from meteorological conditions, the interpretation of cone penetration tests, the estimation of sedimentation in dredged channels, forecasts of demand in water distribution networks, prediction of intermittent overflows in drainage networks, and so on.

Such modelling can be done without the support of any physically based modelling. However, there is increasing use of ANNs and other data driven modelling techniques to complement physically based models. This is done by arranging for an ANN to model the error of a physically based model, that is, the difference or the ratio between the observed and predicted values of the output.

This is a particularly simple form of data assimilation.
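The error-modelling idea can be sketched as follows (a hypothetical "physical" model with a polynomial corrector standing in for the ANN; all relations are invented for the illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
rainfall = rng.gamma(2.0, 2.0, size=500)

def physical_model(p):
    # A deliberately biased, hypothetical conceptual rainfall-runoff relation.
    return 0.4 * p

# "Observed" runoff: the physical model misses a non-linear component.
observed = 0.4 * rainfall + 0.1 * rainfall**1.3 + 0.05 * rng.normal(size=500)

# Train a data-driven corrector on the residuals of the physical model
# (a least-squares cubic here, standing in for an ANN, to keep the sketch short).
residuals = observed - physical_model(rainfall)
coeffs = np.polyfit(rainfall, residuals, deg=3)

def corrected_model(p):
    # Physically based prediction plus the data-driven error estimate.
    return physical_model(p) + np.polyval(coeffs, p)

rmse_raw = np.sqrt(np.mean((observed - physical_model(rainfall)) ** 2))
rmse_corrected = np.sqrt(np.mean((observed - corrected_model(rainfall)) ** 2))
```

The corrected model keeps the physical model intact and adds a learned estimate of its error.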


One of the applications of data-driven models is to replicate physically based models. In one such study an alternative to back-propagation training was used: a direct-search evolutionary algorithm that reportedly allowed local minima to be avoided during training. Such models are normally considerably easier to set up than physically based models. They are particularly powerful in situations where it is difficult to determine the physical processes, or where accurate forecasts are needed based on what is known of the system up to the present time.

There are, however, obvious limitations, in that these modelling techniques rely on there being no change in the physical domain that would alter the assumed functional relationship between the input and output data sets. Such changes could include modifications to catchment land use, river training works, or alterations to control structures. Furthermore, ANNs, for example, are well known to have difficulty extrapolating outside the range of the training data. There are ways of reducing this difficulty, but users should be aware of the problem.

Uncertainty in modelling A huge issue in modelling is the confidence that the decision-maker can put in the results from an instantiated model. Every model is, by definition, an approximation to reality.


The decision-maker therefore needs to know how safe and reliable the results from a model are in affecting the decision made. The fact is, however, that decision-makers still have to live with uncertainty, and this means that they have to take such uncertainties into account when making decisions. The whole area of decision-making in civil engineering (risk analysis, etc.) therefore needs ongoing attention; see Maskey. An excellent paper by Pappenberger and Beven presents some reasons why uncertainty estimation is still rarely used. In relation to water-related issues, there have been many studies in which the model uncertainty was estimated.


Several main approaches can be identified. The first approach is to forecast the model outputs probabilistically; it is often used in hydrological modelling, as in the Bayesian Forecasting System of Krzysztofowicz. The second approach is to estimate uncertainty by analysing the statistical properties of the model errors that occurred in reproducing the observed historical data. This approach has been widely used in the statistical (Wonnacott and Wonnacott) and machine learning (Nix and Weigend) communities. For time series forecasting, uncertainty is estimated in terms of a confidence interval or a prediction interval.
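The second approach, estimating a prediction interval from the statistics of historical model errors, can be sketched as follows (synthetic data; an empirical-quantile method, one of several possibilities):

```python
import numpy as np

rng = np.random.default_rng(5)
observed = rng.normal(10.0, 1.0, size=1000)             # historical observations
predicted = observed + rng.normal(0.3, 0.5, size=1000)  # a biased, noisy model

# Statistical properties of the historical model errors.
errors = observed - predicted

# A 90% prediction interval from the empirical quantiles of past errors.
lo, hi = np.quantile(errors, [0.05, 0.95])

# Attach the interval to a new model prediction.
new_prediction = 12.0
interval = (new_prediction + lo, new_prediction + hi)
```

Note that the interval also absorbs the systematic bias of the model, since it is built from the actual errors rather than from an assumed error distribution.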

A method that can also be attributed to this group, the meta-Gaussian model, was developed by Montanari and Brath. The third approach is to propagate the distributions of the uncertain inputs or parameters through the model by sampling; such sampling is used, for example, in the generalized likelihood uncertainty estimator, GLUE (Beven and Binley), which is popular in hydrologic modelling. The fourth approach is based on fuzzy set theory (Maskey et al.); this provides a non-probabilistic approach for modelling the kind of uncertainty associated with vagueness and imprecision. The first and the third approaches mentioned above require the prior distributions of the uncertain input parameters or data to be propagated through the model to the outputs.

In contrast, the second approach requires certain assumptions about the data and the errors, and obviously the relevance and accuracy of such an approach depend on the validity of these assumptions. The last approach requires knowledge of the membership function of the quantity subject to uncertainty. A further approach is based on the idea of building local data-driven models that predict the properties of the error distribution for particular hydrometeorological situations.

Further, it uses a scheme based on fuzzy clustering to aggregate the outputs of these models and to train an overall uncertainty prediction model. This is a distribution-free, non-parametric method of modelling the propagation of integral uncertainty through the models, and it was tested in forecasting river flows in a flood context. This discussion leads us to reflect on decision support environments. Integrated water modelling However, why stop at the integration of models in the physical sphere alone?

In fact, a hierarchy of modelling is done in most water engineering organisations. For example, an urban water supply or wastewater disposal organisation has to model not only its water networks but also, in comparatively simple terms, the economics of choosing between one scenario or option and another. This points to a hierarchy of models, in a similar manner to the integrated modelling of a river basin referred to above. He postulates a three-dimensional knowledge space of systems (geo-, bio- and socio-spheres), processes (plan, design, construct, manage) and tools (data sets, technologies, models, courseware, people, etc.).

This knowledge space is intended to be all-encompassing. As such, it runs the danger of becoming so unwieldy in its complexity that we cannot grasp its implications. Nevertheless, the hierarchy of models needs to work with this knowledge space in a way that retains some form of control over its complexity. One way of doing this is to address the hierarchy of models in terms of model complexity. For example, if we were considering a river basin, then at the top level the systems model would integrate the inter-related knowledge domains on the systems axis, as well as specific processes and associated tools.

It would probably be based on cybernetics principles. In order to make the model tractable it would also consist of a number of sub-models.


This is a way of saying that at the next level down there will be models defined separately for each system domain. They will necessarily introduce an order-of-magnitude increase in complexity, but they should still be tractable. There will, in addition, be models below these intermediate models that address yet finer details of the problem.

In the river basin, for example, the second layer would include models of the economics of particular branches of industry or city development, integrated water resources, or recreation development. The models at each layer would be visualised in, and make use of, a suitable GIS. What is going to be extremely important is that there is consistency between the models at each layer, for example between the MIKE-SHE and InfoWorks models and the integrated water resources model above them. There exists the possibility of training a cruder, higher-level model on a more detailed, lower-level model to achieve consistency.

This is not the usual way in which computational hydraulics has developed.

In general there has been a striving for the models to become more detailed with the assumption that as detail is achieved so the latest model will include everything that a less detailed model will encompass and more besides. Whereas this is undoubtedly the case, in going to higher levels there is much less interest in the details: attention is focussed more on global or boundary conditions. Apart from conceptual integration, the models need to be integrated in terms of software and hardware, and the development of tools that make such integration easier and more effective is very much needed.

Recently, various research and development groups have been reporting interesting approaches to such integration using flexible, straightforward protocols such as file exchange and XML descriptors, as used in the Delft-FEWS system (Werner), and web services (Donchyts; Horak et al.). The latter approach is the result of a joint effort between several major competing suppliers of hydraulic and hydrologic modelling software (DHI, Delft Hydraulics and Wallingford Software); see www. Optimization Optimization can be defined as the process of finding those values of the variables characterizing some system that bring a particular function to a minimum or maximum.
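As a minimal concrete illustration (the objective function is invented, not from the text), gradient descent on a smooth function of two decision variables:

```python
import numpy as np

def objective(x):
    # A hypothetical calibration criterion: squared deviations of two
    # modelled quantities from their target values, plus a constant.
    return (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2 + 3.0

def numerical_gradient(f, x, h=1e-6):
    # Central-difference estimate of the gradient of f at x.
    grad = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = h
        grad[i] = (f(x + step) - f(x - step)) / (2.0 * h)
    return grad

# The decision variables x are adjusted until the objective is minimised.
x = np.zeros(2)
for _ in range(200):
    x -= 0.1 * numerical_gradient(objective, x)
```

For this quadratic objective the iteration converges to the minimiser (2, -1), where the objective takes its minimum value of 3.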

The variables are called the decision variables, and the function the objective function. Where several such objective functions must be optimized simultaneously, we have a multi-objective optimization problem. Optimization techniques and tools complement the arsenal of modelling tools, and play an important role in Hydroinformatics. Decision support environments for local and distributed decision making Hydroinformatics incorporates computational hydraulics but, as has been stressed by many authors following Abbott, it is more than simply modelling.

This is because a modelling software product is primarily a tool.


There are safe and reliable ways of applying a software tool, just as there are unsafe and unreliable ways. The engineering user is therefore a critical component in the application of the software. The view of the user as a mere operator can no longer be sustained: he or she has to make many decisions involving personal judgement based on experience.

In other words, what the user does in implementing the modelling software product is as important as the final instantiated model. And the user does not work alone. He is dependent on the situation in which he is working, and on his relationship to clients, stakeholders and personnel with different contributing functions within the organisation in which he works. The flow of the right information at the right time and in the right place becomes important for the success of the project involving the software. This is illustrated in the case of sewerage rehabilitation projects. There are, for example, many thousands of sewerage systems in Europe that at some stage will need rehabilitation for one reason or another.