Water Systems Monologue

Wednesday, April 29, 2009

Assignment-10 b

This is a review of the paper "Use of Optimization Models in Public Sector Planning" by E. Downey Brill, Jr., published in Management Science (INFORMS) in May 1979.

The author discusses the use of multi-objective optimization methods in real-world public sector planning. He is positive about optimization methods, but only within a limited scope: for the sake of simplicity, most models tend to leave out important decision elements that would otherwise make the model complex. He also notes that the usefulness of trade-off curves and Pareto fronts is limited, since excluding important factors, or oversimplifying, may cause the true best solution to lie in the apparently sub-optimal or inferior region rather than on the frontier of the trade-off curves so obtained. He proposes instead that optimization schemes be used as tools within the planning process, which is much more comprehensive and takes into account many factors that are not easily modelled.

Thoughts?
I completely agree with the author about the limited scope of optimization as a tool within a comprehensive planning process. I also recognize that, with advanced methods being developed, optimization may come to occupy a fair chunk of a rational planning process; still, the recent failures involving complex social models used in optimization should make us think before we act.

Assignment-10 a

This is a review of the paper "GA-QP Model to Optimize Sewer Systems Design" by T. C. Pan et al., published in the Journal of Environmental Engineering (ASCE), January 2009.

In this paper the authors discuss a hybrid optimization scheme for arriving at the lowest-cost sewer system design. A genetic algorithm (GA) is proposed to search for low-cost sewer system schemes, where the cost of each candidate scheme is calculated through a quadratic programming (QP) approach that accounts for pipe geometry, topology, and other deterministic factors. The candidate schemes are represented as chromosomes, with the fitness function being the reciprocal of the total cost of the represented scheme. The genetic algorithm uses the three operations of selection, crossover, and mutation, with the selection probability proportional to the fitness of the individual chromosome.
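As a rough illustration of the GA loop described above, here is a minimal sketch in Python. The `scheme_cost` function is a toy stand-in for the paper's QP cost model, and every parameter value here is made up for illustration:

```python
import random

# Minimal GA sketch: roulette-wheel selection, one-point crossover, and
# mutation. scheme_cost is a toy stand-in for the paper's QP cost model
# (assumption: each gene is a pipe-diameter index).
def scheme_cost(chromosome):
    return sum((d + 1) ** 2 for d in chromosome)

def fitness(chromosome):
    # Fitness is the reciprocal of the total cost of the scheme.
    return 1.0 / scheme_cost(chromosome)

def evolve(pop_size=20, genes=5, choices=4, generations=50, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(choices) for _ in range(genes)]
           for _ in range(pop_size)]
    best = min(pop, key=scheme_cost)
    for _ in range(generations):
        # Selection probability proportional to fitness.
        weights = [fitness(c) for c in pop]
        parents = rng.choices(pop, weights=weights, k=pop_size)
        nxt = []
        for i in range(0, pop_size, 2):
            a, b = parents[i][:], parents[i + 1][:]
            cut = rng.randrange(1, genes)          # one-point crossover
            a[cut:], b[cut:] = b[cut:], a[cut:]
            for child in (a, b):                   # occasional mutation
                if rng.random() < 0.1:
                    child[rng.randrange(genes)] = rng.randrange(choices)
            nxt.extend([a, b])
        pop = nxt
        cand = min(pop, key=scheme_cost)
        if scheme_cost(cand) < scheme_cost(best):
            best = cand
    return best

best_scheme = evolve()
print(best_scheme, scheme_cost(best_scheme))
```

In the real model each fitness evaluation would mean solving a QP for the candidate layout, which is why the GA only needs to explore layouts, not hydraulics.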

The authors propose that the QP-GA approach can yield better results than the LP-GA approach used previously by Berry et al. and others to simplify non-linear problems, since QP conserves some non-linearity while exhibiting a uni-modal behavior that facilitates finding a global optimum. Even with the GA, though, a global optimum is still not guaranteed.

The authors also raise concerns over factors such as human intervention, which are not easily modelled and are not included in this approach. They therefore use the idea of modeling to generate alternatives (MGA) as a way to come up with new and different near-optimal solutions. Finally, they present comparative studies of the new QP-GA solutions against classical methods such as DDDP (discrete differential dynamic programming).

Thoughts:

I like the idea of hybrid optimization and the alternative solutions, but I would really like to see a way to establish the hydraulic feasibility of such schemes, for example by coupling this system to hydraulic modeling software such as SewerCAD and checking for hydraulic instabilities and for feasibility in terms of O&M.

Wednesday, April 15, 2009

Assignment-9

This is a review of the paper "Compromise Programming Methodology for Determining Instream Flow under Multi-objective Water Allocation Criteria" by Jeng-Tzong Shiau and Fu-Chun Wu, published in the Journal of the American Water Resources Association in October 2006.

This paper implements a quantitative assessment framework for determining instream flow under multi-objective water allocation criteria, using the Range of Variability Approach (RVA) to evaluate hydrologic alterations caused by flow diversions. The resulting degrees of alteration for the 32 Indicators of Hydrologic Alteration (IHAs) are integrated into one overall degree of hydrologic alteration. The authors suggest including this index in the objective function and, via compromise programming, optimizing a water allocation scheme that minimizes both hydrologic alterations and water supply shortages.

The proposed methodology is applied to a case study of the Kaoping diversion weir in Taiwan, which is designed to simultaneously assure water supply reliability and sustain natural flow variability. The Kaoping Creek in southwestern Taiwan is 171 km long and has the largest drainage area (3,257 km2) on the island, with an average annual runoff of about 8.5 billion m3. The creek supplies a major proportion of the total water demand in this region. The Kaoping diversion weir, with a design diversion capacity of 35 m3/s, was completed in 1999 to supply the increasing municipal demands. The Kaoping Creek provides habitats for some endemic species, and it is believed that agricultural withdrawals and municipal diversions can both considerably affect the aquatic biota downstream of the Kaoping weir. Currently a minimum instream flow of 9.5 m3/s is released at the weir, but a fixed minimum flow cannot provide sufficient flow variation, which is recognized as a primary driving force for sustaining the integrity of aquatic ecosystems.

In the RVA analysis, a range of variation for each IHA is determined from the prediversion flows. In this study the target range for each IHA was bracketed between the 25th and 75th percentile values, as suggested by Richter et al. (1998). The weir operations aim to make post-diversion flow conditions fall within the established RVA ranges at the same frequency as the prediversion flows.

This is a multi-objective decision-making problem in which minimizing hydrologic impacts and water supply shortages are the operational goals for the Kaoping weir. The objective function can be expressed in terms of the shortage ratio of the registered agricultural withdrawals, the shortage ratio of the projected diversion for municipal supplies, and the combined index of hydrologic alteration, whose definitions are provided in the article. All of these can be expressed as functions of the instream flow value, which serves as the decision variable.

The compromise programming algorithm is adopted because the authors believe it suits this discrete problem while remaining flexible with respect to the decision makers' preferences concerning the relative importance of each goal. Compromise programming identifies the optimal solution as the one with the shortest distance to an ideal point where the multiple objectives simultaneously reach their minimum values. The ideal point is practically unachievable but can be used as a base point. The objective function involves both this base point and the worst point with respect to each goal, and it uses weights to express which goals are preferred.
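A minimal sketch of the compromise-programming distance, with illustrative numbers rather than the paper's data:

```python
# Compromise-programming distance to the ideal point. Each objective is
# normalized between its ideal (best) and worst values, weighted, and
# folded into an L_p distance; the smallest distance wins.
def cp_distance(values, ideal, worst, weights, p=2):
    total = 0.0
    for v, f_best, f_worst, w in zip(values, ideal, worst, weights):
        total += (w * (v - f_best) / (f_worst - f_best)) ** p
    return total ** (1.0 / p)

# Goals: agricultural shortage ratio, municipal shortage ratio, and
# overall degree of hydrologic alteration (all minimized, all in [0, 1]).
ideal, worst, equal_w = [0, 0, 0], [1, 1, 1], [1, 1, 1]
scheme_a = [0.10, 0.05, 0.60]   # small shortages, large alteration
scheme_b = [0.30, 0.25, 0.20]   # a more balanced compromise
print(cp_distance(scheme_a, ideal, worst, equal_w))
print(cp_distance(scheme_b, ideal, worst, equal_w))
```

With equal weights the balanced scheme comes out closer to the ideal point, which is essentially how the authors arrive at an intermediate instream flow rather than either extreme.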

The results indicate that the current release of 9.5 m3/s as a minimum instream flow does not effectively restore the natural flow variations. Increasing the instream flow release would reduce the overall degree of hydrologic alteration; however, this is achieved at the cost of increasing the water supply shortage ratios. Equal weighting of water supply reliability and natural flow variability suggests a minimum flow of 26 m3/s, which is the value the authors support. The authors also suggest, as an improvement, including a biological component in the current model for a better representation of ecological effects.

My views:

I like the approach and fully understand the use of a single index of hydrologic alteration to reduce the complexity of the model. Still, I would suggest some alternative approaches, such as developing a Pareto front and evaluating scenarios for the different objectives via trade-off analysis. I would also like to see weights introduced among the IHAs, so as to recognize which IHAs have more effect on the downstream ecology.

Saturday, April 4, 2009

Assignment #6

This is a review of the article "Optimization of Regional Storm-Water Management Systems" by Behera et al., published in the ASCE Journal of Water Resources Planning and Management in April 1999.

In this paper Behera et al. discuss an optimization-based methodology for determining the lowest-cost stormwater management (SWM) scheme to abate stormwater quantity and quality issues. The optimization methodology is described for obtaining design parameters such as storage volume, release rate, and pond depth for a single SWM pond, and a dynamic programming approach is discussed for extending it to a system of multiple parallel catchments (each with a single detention pond).

The authors say, "The natural storage capacity of urban catchments lost through the development process is offset by engineered storage facilities in the form of different types of detention ponds, which are often considered as one of various best management practices (BMPs) for storm-water control. They are implemented for peak flow attenuation, runoff volume control, and runoff quality control. Common practice is to design these facilities such that the release from the facility maintains the pre-development runoff conditions or satisfies other local environmental regulations for runoff quantity and quality control." For land developers, the construction of SWM ponds means the loss of expensive developable land, in addition to the costs of construction and O&M for the ponds. The main objective is therefore to optimize the costs of construction and O&M without violating the relevant environmental regulations on discharges and stormwater quality. A computational example of the proposed procedure is provided through the optimization of a system of three parallel catchments.
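If I understand the dynamic-programming extension correctly, it amounts to allocating a shared resource (for instance, the total permissible release rate) among the parallel catchments so that the summed pond costs are minimized. A sketch under that assumption, with entirely made-up cost tables:

```python
# Resource-allocation DP: costs[i][r] is the (made-up) cost of the pond
# in catchment i when it is allotted r units of release rate.
def allocate(costs, total_units):
    best = {0: 0.0}  # best[u] = min total cost using u units so far
    for table in costs:
        nxt = {}
        for used, cost_so_far in best.items():
            for r, pond_cost in enumerate(table):
                u = used + r
                c = cost_so_far + pond_cost
                if u <= total_units and (u not in nxt or c < nxt[u]):
                    nxt[u] = c
        best = nxt
    return min(best.values())

# Three catchments; a larger release allowance means a smaller, cheaper pond.
costs = [
    [9.0, 6.0, 4.5, 4.0],
    [8.0, 5.0, 3.5, 3.0],
    [7.0, 4.5, 3.0, 2.5],
]
print(allocate(costs, 5))  # -> 12.5
```

The appeal of the DP framing is that each catchment's pond can be optimized independently once its share of the release allowance is fixed, so the stage-by-stage recursion above sidesteps enumerating all combinations.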

My thoughts:

I like the concept of using a stochastic measure for runoff and quality control, but I think this method would be more effective if we varied the release rates and looked at the temporal variation of the control measure.

Sunday, February 22, 2009

Assignment-5

This is a review of the article "Sensor Placement in Municipal Water Networks" by Jonathan W. Berry et al., published in the ASCE Journal of Water Resources Planning and Management in May/June 2005.

The authors say that public water distribution systems are inherently vulnerable to accidental or intentional contamination events because of their distributed geography. Though major events are rare, their effects on human health can be severe, both short and long term. Since the 9/11 attacks, the US EPA has been working with community water systems to take a more comprehensive view of water safety and security, and it promotes the development of real-time early warning systems (EWS). The general goal of an EWS is to identify a low-probability/high-impact contamination incident while allowing sufficient time for an appropriate response that mitigates any adverse impacts. An effective EWS deployment may require an optimal number of online sensors, so that adequate coverage of the network is ensured, deployment costs are minimized, and security is maximized. In this paper, the objective is to minimize the expected fraction of the population exposed to contamination by properly deploying sensors in the network. The likelihood of contamination is modeled as a fixed probability distribution across junctions in the network, which can represent the likelihood of either accidental or intentional attacks.

The authors mention several technical approaches to the sensor placement problem, including integer programming, combinatorial heuristics, and general-purpose metaheuristics; the latter can be applied to complex situations that model sensor performance and health effects in detail. The authors nevertheless use an integer programming approach because of its simplicity and its ability to guarantee that the best solution is found. The assumptions made so that integer programming is applicable were:

  1. The attack occurs at a single point in the network.
  2. The total population is considered exposed, without reference to specific health impacts. Without sensors in the network, a population at a node is exposed if contaminant can reach that point in a given flow period.
  3. Sensors protect downstream populations. A population is considered exposed if it could be reached by a flow path from the attack point without passing a sensor.
  4. Transitions between time periods are ignored. Each time period is treated independently.
These assumptions allow the model to ignore temporal effects, concentration effects, and health impacts. Algorithms have been proposed for optimizing sensor placements with respect to detection coverage and probability, volume of contaminated water consumed, extent of contamination, and time of detection.

The objective of the model is to minimize the expected fraction of the population that is at risk for some attack. An attack is modeled as the release of a harmful contaminant at a single point in the network with a single injection. For any particular attack, it is assumed that all points downstream of the release point, connected by a set of directed flows, can be contaminated. No a priori information is available on the attack site, and hence a compromise solution considering all possible attack scenarios is desirable. Attack scenarios are defined by a probability distribution over all pairs of population-weighted flows and attack points. This distribution may come from expert opinion, knowledge of network defenses, the location of assets within the network, the degree of damage, and the attacker's psychology. In this paper, these distributions are generated synthetically.

The problem is solved using a mixed-integer program (MIP). The network is represented as a graph G = (V, E), where E is the set of edges representing pipes and V is the set of nodes where the pipes meet. Nodes can be sources, such as reservoirs or tanks, or sinks where water is consumed. Each node is associated with a region of the city and hence with some population or demand. A set of constraints and an objective function are defined, which are elaborated in the paper. The sensor placement is evaluated experimentally using two networks from the EPANET test set and one real network. Owing to the non-availability of information on population density and risk, plausible synthetic data were used. The flow pattern was determined with EPANET for 4 periods of 6-hour duration each.
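A toy version of the objective makes the model concrete. The brute-force sketch below uses a hand-made six-node tree network and uniform attack probabilities, all of which are my own assumptions; the paper solves the same objective as a MIP at realistic scale:

```python
from itertools import combinations

# Hand-made tree-shaped network: edges[n] lists the nodes directly
# downstream of n, and population[n] is the demand/population at n.
edges = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}
population = {0: 100, 1: 200, 2: 150, 3: 300, 4: 250, 5: 100}
attack_prob = {n: 1 / 6 for n in edges}  # uniform attack distribution

def exposed_population(attack, sensors):
    # Walk downstream from the attack node; a sensor cuts off everything
    # at and below it ("sensors protect downstream populations").
    if attack in sensors:
        return 0
    return population[attack] + sum(
        exposed_population(child, sensors) for child in edges[attack]
    )

def expected_exposure(sensors):
    return sum(p * exposed_population(a, sensors)
               for a, p in attack_prob.items())

# Brute-force the best pair of sensor locations.
best_pair = min(combinations(edges, 2), key=expected_exposure)
print(best_pair, expected_exposure(best_pair))
```

Even at this size the structure of the MIP's trade-off is visible: the best pair sits just below the source, covering both branches, rather than at the source itself.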

The datasets were run for different scenarios with varying populations and varying maximum numbers of available sensors. To account for inaccurate data, different noise levels in population density and risk probabilities were considered: 5, 10, and 25%. An experiment is defined as a set of trials of the MIP model for a fixed dataset, attack scenario, noise level, and number of sensors. The experiments with sets 1 and 2 consisted of 30 trials each, whereas the larger set was run for about 5 trials. A detailed description of the process is given in the paper.

The results show a sensitivity to noise that first rises and then declines as the number of sensors grows. The authors think a plausible explanation is that the number of sensors falls into 3 regimes with respect to the network and the datasets. With very few sensors, the best strategy is to protect the most valuable assets. With more sensors, there are more choices of secondary assets to protect, and these choices may be quite sensitive to variations in attack probabilities and population densities. Finally, when there are enough sensors to easily protect everything, sensors are always placed in the important locations.

In conclusion, the authors state that the MIP-based model described in the paper can be used to solve large-scale sensor placement problems effectively. They also note that the model is very simplified and can be generalized by incorporating strategies to address temporal effects, placement locations (placing on nodes or using a mixed strategy), sensor costs, and performance objectives.


Wednesday, February 18, 2009

Assignment-4

This is a review of the article "Optimal Locations of Monitoring Stations in Water Distribution System" by Byoung Ho Lee et al., published in the Journal of Environmental Engineering (ASCE) in February 1992.

The Safe Drinking Water Act requires that water quality in distribution systems be monitored through sampling at various locations that are representative of the system. The authors' main objective is to locate the optimal number and locations of monitoring stations that will be representative of the entire system. This is achieved by defining the coverage of all the nodes and then finding the dependence/influence of neighboring stations/nodes that enhance the coverage. The authors propose a water fraction W(i,j), equal to the ratio of the flow from node j to the flow coming into node i. The water fractions are calculated for all node contributions and stored in a water-fraction matrix, from which other knowledge-carrying matrices can be derived to enhance the decision process. A coverage matrix is proposed which shows the coverage of the network obtained by choosing a set of monitoring nodes and some knowledge-carrying criteria.

A small network of 7 nodes was presented as an example. An integer programming optimization model was formulated with 2 sets of variables: xi (i = 1, 2, ..., n) denotes whether the ith node has a monitoring station, and yi (i = 1, 2, ..., n) denotes whether the demand at node i is covered. Both sets are 0-1 (binary) variables. Careful inspection shows that placing monitoring stations at nodes 5 and 6 covers all other demands through y3 to y7. For larger problems it was proposed to use packages such as LINDO to solve the integer program.
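The 0-1 formulation is essentially a covering problem, which at toy size can simply be brute-forced. The coverage sets below are hypothetical (my own numbers, not the paper's), chosen so that stations at nodes 5 and 6 cover everything, echoing the example:

```python
from itertools import combinations

# Hypothetical coverage sets for a 7-node network: covers[i] is the set
# of node demands that a monitoring station at node i would cover.
covers = {
    1: {1}, 2: {2}, 3: {3}, 4: {4},
    5: {1, 2, 3, 5},
    6: {3, 4, 6, 7},
    7: {7},
}
all_nodes = set(range(1, 8))

def smallest_station_set():
    # Brute-force the 0-1 covering program at toy size; the paper hands
    # the same formulation to an IP solver such as LINDO.
    for k in range(1, len(covers) + 1):
        for combo in combinations(covers, k):
            if set().union(*(covers[i] for i in combo)) >= all_nodes:
                return combo

print(smallest_station_set())  # -> (5, 6)
```

In the paper the coverage sets themselves come from the water-fraction matrix rather than being written down by hand, but the optimization step is the same.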

Two larger problems were also presented. The first concerned the city of Flint, Michigan. The distribution system, fed by water from Lake Huron, consists of a network of 337 pipes and 211 nodes. Before the optimization was applied, the existing set of 14 monitoring stations covered about 18.3% of the demand at maximum daily flow. The formulation had about 211 constraints, 422 variables, and 6000 non-zero entries in the table for the 14 stations. The solution comprised running two software programs, COVER and COVTOIP, sequentially: the first generates the coverage matrix, and the second converts the matrix into a standard IP-readable format. The final scenario run improved the coverage to 54% of the demand, a significant change from 18.3%. The program was later run for a single monitoring station on LINDO, and it was found that a station at node 6 alone would have covered about 50%.


The second problem concerned the city of Cheshire, Connecticut. The city gets its supplies from two well fields, the North Well field and the South Well field. The demand is about 2.5 mgd and is equalized using 2 storage tanks with a combined capacity of 4.5 million gallons. A skeletonized water network was designed as the basis for the analysis. To take into account the significant variations in demand and flow over time, the analysis was carried out for 4 different flow and demand patterns, called scenarios. The problem, with 245 variables and 197 constraints, was solved using the LINDO package, and the optimal solutions for varying numbers of stations were presented.

In conclusion, the authors emphasize the importance of such rational mechanisms for arriving at the optimal number of monitoring stations. They state that the combination of pathway analysis, coverage matrices, and IP provides a first step toward a rational algorithm for locating monitoring stations, and the results of the two examples show that improvements are possible.

The Tragedy of the Commons

This article was written by Garrett Hardin in the journal Science in 1968. It describes a dilemma in which multiple individuals, acting independently in their own self-interest, can ultimately destroy a shared limited resource even when it is clear that it is in no one's long-term interest for this to happen.

Wiesner and York had concluded, in a thoughtful article on the future of nuclear war, that "Both sides in the arms race are confronted by the dilemma of steadily increasing military power and steadily decreasing national security. It is our considered professional judgment that this dilemma has no technical solution. If the great powers continue to look for solutions in the area of science and technology only, the result will be to worsen the situation." A technical solution, as defined by the author, is one that requires a change only in the techniques of the natural sciences, demanding little or nothing in the way of change in human values or ideas of morality. The author wants the reader to accept that a class of problems exists for which no technical solution is possible. He explains that the game of tick-tack-toe can pose such a problem: given an opponent who understands the game perfectly, game theory predicts there is no way to beat him. The author states that the population problem belongs to this class. He says that people trying to solve overpopulation look for technical fixes rather than alternatives that would mean relinquishing privileges the population now enjoys but which are not sustainable.

The author mentions Bentham's famous goal of "the greatest good for the greatest number". The fact that two variables cannot be maximized simultaneously, as shown by von Neumann and Morgenstern and as is implicit in the theory of partial differentiation, sounds the death knell for Bentham's goal. He also mentions that the goal cannot be realized because of biological and energy limitations. He even says that, given an infinite source of energy, the problem would merely reverse sign, not cease to exist.

From this point, the author switches to non-technical or resource management solutions to population and resource problems. As a means of illustrating these, he introduces a hypothetical example of a pasture shared by local herders. The herders are assumed to wish to maximize their yield, and so will increase their herd size whenever possible. The utility of each additional animal has both a positive and negative component:

  • Positive: the herder receives all of the proceeds from each additional animal.
  • Negative: the pasture is slightly degraded by each additional animal.

Crucially, the division of these costs and benefits is unequal: the individual herder gains all of the advantage, but the disadvantage is shared among all herders using the pasture. Consequently, for an individual herder the rational course of action is to continue to add additional animals to his or her herd. However, since all herders reach the same rational conclusion, overgrazing and degradation of the pasture is its long-term fate. Nonetheless, the rational response for an individual remains the same at every stage, since the gain is always greater to each herder than the individual share of the distributed cost. The overgrazing cost here is an example of an externality.
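Hardin's arithmetic can be put in one line. With made-up numbers (a gain of 1 per animal and a total grazing cost of 1.2 shared equally among all herders), adding an animal is a net loss for the group yet remains individually profitable whenever the cost is shared:

```python
# Illustrative version of the herder's calculation: each added animal
# earns its owner `gain`, while the grazing damage `shared_cost` is
# split equally among all n_herders (numbers are made up, not Hardin's).
def marginal_utility_to_owner(n_herders, gain=1.0, shared_cost=1.2):
    return gain - shared_cost / n_herders

# A sole owner (n = 1) bears the full cost and would not add the animal;
# once the cost is distributed, adding it is always individually rational,
# even though each animal makes the group as a whole worse off.
for n in (1, 2, 10, 100):
    print(n, marginal_utility_to_owner(n))
```

The sign flip between n = 1 and n > 1 is the whole tragedy: enclosure works precisely because it forces the decision maker back into the n = 1 case.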

Because this sequence of events follows predictably from the behaviour of the individuals concerned, Hardin describes it as a "tragedy".

In the course of his essay, Hardin develops the theme, drawing in examples of latter-day "commons", such as the atmosphere, oceans, rivers, fish stocks, national parks, advertising, and even parking meters. The example of fish stocks has led some to call this the "tragedy of the fishers". A major theme running throughout the essay is the growth of human populations, with the Earth's resources being a general commons.

The essay also addresses potential management solutions to commons problems, including privatization, polluter-pays schemes, and regulation. Keeping with his original pasture analogy, Hardin categorises these as effectively the "enclosure" of commons, and notes an historical progression from the use of all resources as commons (unregulated access for all) to systems in which commons are "enclosed" and subject to differing methods of regulated use, in which access is prohibited or controlled. Hardin argues against relying on conscience as a means of policing the commons, suggesting that this favours selfish individuals (often known as free riders) over those who are more altruistic. By recognizing resources as commons in the first place, and by recognizing that, as such, they require management, Hardin believes that humans "can preserve and nurture other and more precious freedoms."