Sunday, February 22, 2009

Assignment-5

This is a review of the article "Sensor Placement in Municipal Water Networks" by Jonathan W. Berry et al., published in the ASCE Journal of Water Resources Planning and Management in May/June 2005.

The author says that public water distribution systems are inherently vulnerable to accidental or intentional contamination events because of their distributed geography. Though major events are rare, their effects on human health can be severe in both the short and long term. After the 9/11 attacks, the US-EPA has been working with community water systems to take a more comprehensive view of water safety and security, and it promotes the development of real-time early warning systems (EWS). The general goal of an EWS is to identify a low-probability/high-impact contamination incident while allowing sufficient time for an appropriate response that mitigates any adverse impacts. An effective EWS deployment requires an optimum number of online sensors that ensures adequate coverage of the network while minimising deployment costs and maximising security. In this paper, the objective is to minimise the expected fraction of the population exposed to contamination by properly deploying sensors in the network. The likelihood of a contamination event is modeled as a fixed probability distribution across junctions in the network, which can be used to model the likelihood of either accidental or intentional attacks.

The author mentions several technical approaches to the sensor placement problem, including integer programming, combinatorial heuristics, and general-purpose metaheuristics, the latter of which can be applied to complex situations that model sensor performance and health effects in detail. The author, though, opts for an integer programming approach because of its simplicity and its ability to guarantee that the best solution is found. The assumptions made to render the problem amenable to integer programming were:

  1. An attack occurs at a single point in the network.
  2. The total population is considered exposed, without reference to specific health impacts. Without sensors in the network, a population at a node is exposed if contaminant can reach the point in a given flow period.
  3. Sensors protect downstream populations. A population is considered exposed if it could be reached by a flow path from the attack point without passing a sensor.
  4. Transitions between time periods are ignored. Each time period is treated independently.
These assumptions allow the model to ignore any temporal effects, concentration effects, and health impacts. Algorithms have been proposed for optimizing sensor placements with respect to detection coverage and probability, volume of contaminated water consumed, extent of contamination, and time of detection.

The objective of the model is to minimize the expected fraction of the population that is at risk for some attack. An attack is modeled as the release of a harmful contaminant at a single point in the network with a single injection. For any particular attack, it is assumed that all points downstream of the release point connected by a set of directed flows can be contaminated. No a priori information is available on the attack site, and hence a compromise solution considering all possible attack scenarios is desirable. Attack scenarios are defined by a probability distribution over all pairs of population-weighted flows and attack points. This distribution may come from expert opinion, knowledge of network defenses, location of assets within the network, degree of damage, and the attacker's psychology. In this paper, these distributions are generated synthetically, along the lines sketched below.
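
As an illustration, the following sketch builds a synthetic attack distribution by weighting each junction by the population it serves. The population-proportional weighting, seed, and network size are my own illustrative assumptions, not the authors' exact generator.

```python
# Sketch of a synthetic attack-scenario distribution over network junctions.
import numpy as np

rng = np.random.default_rng(seed=1)

n_junctions = 97                                       # illustrative network size
population = rng.integers(50, 500, size=n_junctions)   # synthetic node populations

# Assume an attack is more likely at junctions serving more people.
attack_prob = population / population.sum()
assert np.isclose(attack_prob.sum(), 1.0)
```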

The problem is solved using a mixed-integer program (MIP). The network is represented as a graph G=(V,E), where E is the set of edges representing pipes and V is the set of nodes where the pipes meet. Nodes can be sources, like reservoirs or tanks, and sinks, where water is consumed. Each node is associated with a region of the city and hence with some population or demand. A set of constraints and an objective function are defined, which are elaborately described in the paper. The sensor-placement model is evaluated experimentally using two networks from the EPANET test set and one real network. Because information on population density and risk was unavailable, plausible synthetic data were used. The flow pattern was determined with EPANET for four periods of six hours each.
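
A minimal sketch of a formulation in this spirit is shown below, using the open-source PuLP library (my choice of tool, not the paper's software). The toy network, flow-path sets, and data are illustrative assumptions: a sensor anywhere on the path from the attack point to a node is assumed to protect that node's population.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

nodes = [0, 1, 2, 3]
pop = {0: 100, 1: 300, 2: 200, 3: 400}           # synthetic populations
alpha = {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}     # attack probability per node
# path[a][j]: nodes on the directed flow path from attack node a to node j;
# a sensor anywhere on this path detects the contaminant before it reaches j.
path = {0: {1: [0], 2: [0, 1], 3: [0, 1, 2]},
        1: {2: [1], 3: [1, 2]},
        2: {3: [2]},
        3: {}}
budget = 1
total_pop = sum(pop.values())

prob = LpProblem("sensor_placement", LpMinimize)
s = {v: LpVariable(f"s_{v}", cat=LpBinary) for v in nodes}      # sensor at v?
x = {(a, j): LpVariable(f"x_{a}_{j}", lowBound=0, upBound=1)
     for a in nodes for j in path[a]}                           # j exposed under a?

# Objective: expected fraction of the population exposed over all scenarios.
prob += lpSum(alpha[a] * (pop[j] / total_pop) * x[a, j] for (a, j) in x)
prob += lpSum(s.values()) <= budget                  # sensor budget
for (a, j), var in x.items():
    # Node j is exposed under attack a unless some sensor lies on the path.
    prob += var >= 1 - lpSum(s[v] for v in path[a][j])

prob.solve()
print({v: int(s[v].value()) for v in nodes})
```

With a budget of one sensor, the model places it on the path segment that intercepts the largest expected exposed population.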

The datasets were run for different scenarios with varying populations and a varying maximum number of available sensors. To account for inaccurate data, different noise levels in population density and risk probabilities were considered: 5, 10, and 25% noise. An experiment is defined as a set of trials of the MIP model for a fixed dataset, attack scenario, noise level, and number of sensors. The experiments with sets 1 and 2 consisted of 30 trials each, whereas the larger set used about 5 trials. A detailed description of the process is given in the paper.
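
A sketch of the kind of perturbation such a trial might use is shown below; the multiplicative-noise form is my assumption, since the paper states only the noise levels tested.

```python
# Perturb synthetic population data by 5, 10 and 25% and re-solve per trial.
import numpy as np

rng = np.random.default_rng(seed=2)

def perturb(values, noise_level):
    """Apply uniform multiplicative noise of +/- noise_level to each value."""
    factors = 1.0 + rng.uniform(-noise_level, noise_level, size=len(values))
    return np.maximum(values * factors, 0.0)

population = np.array([100.0, 300.0, 200.0, 400.0])
for level in (0.05, 0.10, 0.25):
    noisy = perturb(population, level)
    print(level, noisy.round(1))   # re-solve the MIP with `noisy` in each trial
```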

The results show that sensitivity to noise first rises and then declines as the number of sensors grows. The author's plausible explanation is that the number of sensors falls into three regimes with respect to the network and the datasets. With very few sensors, the best strategy is to protect the most valuable assets. Eventually, with more sensors, there are more choices of secondary assets to protect, and these choices may be quite sensitive to variations in attack probabilities and population densities. Finally, when there are enough sensors to easily protect everything, sensors are always placed in important locations.

In conclusion, the author states that the MIP-based model described in the paper can be used to effectively solve large-scale sensor placement problems. He also states that the model is very simplified and can be generalised by incorporating strategies to address temporal effects, placement locations (placing on nodes or using a mixed strategy), sensor costs, and performance objectives.


Wednesday, February 18, 2009

Assignment-4

This is a review of the article "Optimal Locations of Monitoring Stations in Water Distribution System" by Byoung Ho Lee et al., published in the Journal of Environmental Engineering, ASCE, in February 1992.

The Safe Drinking Water Act requires that water quality in distribution systems be monitored through sampling at various locations that are representative of the system. The authors' main objective is to locate the optimum number and locations of monitoring stations that will be representative of the entire system. This is achieved by defining the coverage of all the nodes and then finding the dependence/influence of neighboring stations/nodes, which enhances the coverage. The authors also propose a water fraction W(i,j), equal to the ratio of the flow from node j to the flow coming into node i. The water fractions are calculated for all node contributions and stored in a water-fraction matrix, which can be used to derive other knowledge-carrying matrices that may enhance the decision process. A coverage matrix is proposed which shows the coverage of the network obtained by choosing a set of monitoring nodes and some knowledge-carrying criteria.
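
The water-fraction idea can be sketched as follows: W[i][j] is the fraction of the flow arriving at node i that came from node j, and thresholding it gives a simple coverage matrix. The direct-inflow data and the 50% coverage criterion below are illustrative assumptions consistent with the text, not the paper's values.

```python
import numpy as np

# inflow[i][j]: flow delivered from node j into node i (toy 3-node data).
inflow = np.array([[0.0, 0.0, 0.0],
                   [4.0, 0.0, 0.0],
                   [1.0, 3.0, 0.0]])

totals = inflow.sum(axis=1)
W = np.divide(inflow, totals[:, None],
              out=np.zeros_like(inflow), where=totals[:, None] > 0)

threshold = 0.5   # a station at node i covers node j if W[i][j] >= threshold
coverage = W >= threshold
print(W)
print(coverage)
```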

A small network of 7 nodes was presented as an example. An integer programming optimisation model was formulated for the problem with two sets of variables: xi (i = 1, 2, ..., n) denotes whether the ith node has a monitoring station, and yi (i = 1, 2, ..., n) denotes whether the demand at node i is covered. Both sets of variables are binary (0-1) variables. It was deduced through careful inspection that placing monitoring stations at nodes 5 and 6 can cover all other demands through y3-y7. For larger problems it was proposed to use packages like LINDO to solve the integer program; a small sketch of such a formulation follows.
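
Here is a PuLP sketch of the two-variable integer program described above: x_i = 1 if node i hosts a monitoring station and y_i = 1 if the demand at node i is covered. The 7-node coverage sets below are invented for illustration; the paper derives them from its coverage matrix.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

n = 7
# covers[i]: nodes at which a station would cover demand node i (toy data).
covers = {1: {1, 5}, 2: {2, 5}, 3: {3, 5, 6}, 4: {4, 6},
          5: {5}, 6: {6}, 7: {5, 6, 7}}

prob = LpProblem("monitoring_stations", LpMinimize)
x = {i: LpVariable(f"x_{i}", cat=LpBinary) for i in range(1, n + 1)}
y = {i: LpVariable(f"y_{i}", cat=LpBinary) for i in range(1, n + 1)}

prob += lpSum(x.values())                     # minimise the number of stations
for i in range(1, n + 1):
    prob += y[i] <= lpSum(x[j] for j in covers[i])   # coverage needs a station
    prob += y[i] == 1                                # every demand must be met

prob.solve()
print(sorted(i for i in x if x[i].value() == 1))
```

With these toy coverage sets the solver selects nodes 5 and 6, mirroring the inspection result reported in the paper.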

Two larger problems were also presented. The first was for the city of Flint, Michigan. The distribution system, fed by water from Lake Huron, consists of a network of 337 pipes and 211 nodes. Before the optimization was applied, the existing set of 14 monitoring stations covered about 18.3% of the demand for a maximum daily flow. The formulation had about 211 constraints, 422 variables, and 6000 non-zero entries in the table for the 14 stations. The solution involved running two software programs, COVER and COVTOIP, in sequence: the first generates the coverage matrix, whereas the second simply converts the matrix into a standard IP-readable format. The final scenario run improved the coverage to 54% of the demand, a significant change over the original 18.3%. Later the program was run for a single monitoring station on LINDO, and it was found that a station at node 6 alone would have covered about 50%.


The second problem concerned the city of Cheshire, Connecticut. The city gets its supply from two well fields, the North Well field and the South Well field. The demand is about 2.5 mgd and is equalized using two storage tanks with a combined capacity of 4.5 million gallons. A skeleton water network was designed as the basis for the analysis. To take into account the significant variations in demand and flow over time, the analysis was carried out for four different flow and demand patterns, called scenarios. The problem, with 245 variables and 197 constraints, was solved using the LINDO package. The optimal solutions for varying numbers of stations were presented.

In conclusion, the author emphasizes the importance of such rational mechanisms for arriving at the optimum number of monitoring stations. He says the combination of pathway analysis, coverage matrices, and IP provides a first step towards a rational algorithm for locating monitoring stations, and the results of the two examples show that improvements are possible.

The Tragedy of the Commons

This article was written by Garrett Hardin in the journal Science in 1968. The article describes a dilemma in which multiple individuals, acting independently in their own self-interest, can ultimately destroy a shared limited resource even when it is clear that it is not in anyone's long-term interest for this to happen.

Wiesner and York had concluded in a thoughtful article on nuclear war that "Both sides in the arms race are confronted by the dilemma of steadily increasing military power and steadily decreasing national security. It is our considered professional judgment that this dilemma has no technical solution. If the great powers continue to look for solutions in the area of science and technology only, the result will be to worsen the situation." A technical solution, as defined by the author, is one that requires a change only in the techniques of the natural sciences, demanding little or nothing in the way of change in human values or ideas of morality. The author wants the reader to recognize the existence of this "no technical solution" class of problems. He explains that the game of tic-tac-toe can pose such a problem: given an opponent who understands the game perfectly, game theory predicts there is no way to beat him. The author states that the population problem belongs to this class. He says that those seeking a solution to overpopulation must be prepared to relinquish privileges the population now enjoys that are not sustainable.

The author mentions Bentham's famous goal of "the greatest good for the greatest number". The fact that two variables cannot be maximized at the same time, as shown by von Neumann and Morgenstern and as appears to be implicit in the theory of partial differentiation, sounds the death knell for Bentham's goal. He also mentions that the goal cannot be realised because of biological/energy limitations. He even says that if provided with an infinite source of energy, the problem merely reverses sign and does not cease to exist.

From this point, the author switches to non-technical or resource management solutions to population and resource problems. As a means of illustrating these, he introduces a hypothetical example of a pasture shared by local herders. The herders are assumed to wish to maximize their yield, and so will increase their herd size whenever possible. The utility of each additional animal has both a positive and negative component:

  • Positive: the herder receives all of the proceeds from each additional animal.
  • Negative: the pasture is slightly degraded by each additional animal.

Crucially, the division of these costs and benefits is unequal: the individual herder gains all of the advantage, but the disadvantage is shared among all herders using the pasture. Consequently, for an individual herder the rational course of action is to continue to add additional animals to his or her herd. However, since all herders reach the same rational conclusion, overgrazing and degradation of the pasture is its long-term fate. Nonetheless, the rational response for an individual remains the same at every stage, since the gain is always greater to each herder than the individual share of the distributed cost. The overgrazing cost here is an example of an externality.
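
The arithmetic behind this can be made concrete with a tiny sketch; the unit payoffs below are my illustrative assumptions, not figures from the essay.

```python
# Each extra animal earns its owner +1 but degrades the pasture by 1.5 units,
# a cost shared equally among all herders.
n_herders = 10
gain_to_owner = 1.0
total_grazing_cost = -1.5
owner_share_of_cost = total_grazing_cost / n_herders   # -0.15

net_to_owner = gain_to_owner + owner_share_of_cost     # +0.85: rational to add
net_to_commons = gain_to_owner + total_grazing_cost    # -0.50: the pasture loses
print(net_to_owner, net_to_commons)
```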

Because this sequence of events follows predictably from the behaviour of the individuals concerned, Hardin describes it as a "tragedy".

In the course of his essay, Hardin develops the theme, drawing in examples of latter-day "commons", such as the atmosphere, oceans, rivers, fish stocks, national parks, advertising, and even parking meters. The example of fish stocks has led some to call this the "tragedy of the fishers". A major theme running throughout the essay is the growth of human populations, with the Earth's resources being a general commons.

The essay also addresses potential management solutions to commons problems, including privatization, polluter-pays schemes, and regulation. Keeping with his original pasture analogy, Hardin categorises these as effectively the "enclosure" of commons, and notes a historical progression from the use of all resources as commons (unregulated access for all) to systems in which commons are "enclosed" and subject to differing methods of regulated use in which access is prohibited or controlled. Hardin argues against relying on conscience as a means of policing the commons, suggesting that this favours selfish individuals – often known as free riders – over those who are more altruistic. By recognizing resources as commons in the first place, and by recognizing that, as such, they require management, Hardin believes that humans "can preserve and nurture other and more precious freedoms."



Sunday, February 1, 2009

Assignment # 2

The research paper reviewed here is "Hydraulic Gradient Control for Groundwater Contaminant Removal" by Dorothy F. Atwood and Steven M. Gorelick, published in the Journal of Hydrology, 1984.

In this study, an aquifer recovery methodology is proposed, using linear programming for the placement of gradient control wells and a finite-difference solute transport model to evaluate the contaminant plume characteristics and the performance of the LP-based well scheme.

The model area is the Rocky Mountain Arsenal near Denver, Colorado, which used to be a site for toxic chemical processing and manufacture and has a history of poor disposal practices and groundwater contamination. The geology and geophysics of the area are also well known. The specific area considered here is the north boundary of the Rocky Mountain Arsenal, where preventing contaminant migration has been crucial. Flow in the unconfined aquifer is generally from south to north. The goal of the restoration procedure is to prevent further contaminant migration during aquifer cleanup, especially beyond the northern boundary of the arsenal. The area is approximated and modeled with 702 finite-difference nodes, each 76.2 m on a side.

The hydraulic gradient control procedure requires the use of models of groundwater flow and solute transport in groundwater. The process has two stages:

Stage 1: Simulate the contaminant distribution

In stage one, contaminant transport is simulated through time using an assumed velocity field (corresponding to successful hydraulic screening) to determine the location of the plume boundary relative to the potential gradient control wells. Three steps are necessary here (a minimal sketch of such a transport step follows the list):
1) an approximate velocity field is assumed,
2) the central contaminant removal system is designed, and
3) the contaminant distribution is simulated.
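
The sketch below advects a contaminant field through an assumed uniform velocity field with a first-order upwind finite-difference scheme. The grid (702 cells of 76.2 m, matching the paper's node count), velocities, and time step are illustrative assumptions, not the study's model.

```python
import numpy as np

nx, ny, dx, dt = 27, 26, 76.2, 3600.0     # 702 cells of 76.2 m; 1-hour steps
c = np.zeros((ny, nx)); c[5, 5] = 100.0   # initial contaminant slug
vx, vy = 1e-3, 2e-3                       # assumed south-to-north drift (m/s)

def upwind_step(c, vx, vy, dx, dt):
    """One explicit upwind advection step (assumes vx, vy >= 0)."""
    cn = c.copy()
    cn[:, 1:] -= vx * dt / dx * (c[:, 1:] - c[:, :-1])
    cn[1:, :] -= vy * dt / dx * (c[1:, :] - c[:-1, :])
    return cn

for _ in range(24):                       # advance the plume by one day
    c = upwind_step(c, vx, vy, dx, dt)
print(c.max(), np.unravel_index(c.argmax(), c.shape))
```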

Stage 2: Select the best wells and pumping rates

Linear programming combined with groundwater flow simulation is used to select the wells and determine the pumping/recharge schedules that most effectively stop the migration of the plume by controlling the hydraulic gradient during cleanup. This is accomplished with a single global optimisation which minimises the total pumping/recharge over all time, subject to a series of hydraulic constraints that force the gradient to point towards the central contaminant removal well.
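
A PuLP sketch of this stage is shown below: choose pumping rates q_j that minimise total pumping while head-difference constraints force the hydraulic gradient toward the central removal well. The response matrix R (head change at point i per unit pumping at well j, obtained by linear superposition from a flow model) and all numbers are illustrative assumptions; recharge wells are omitted for brevity.

```python
from pulp import LpProblem, LpMinimize, LpVariable, lpSum

wells = [0, 1, 2]
pairs = [(0, 1), (1, 2)]                  # (inner, outer) head-control pairs
h0 = {0: 10.5, 1: 10.2, 2: 10.0}          # unmanaged heads (m), falling outward
R = {0: {0: -0.30, 1: -0.10, 2: -0.05},   # head response at point i per unit
     1: {0: -0.10, 1: -0.30, 2: -0.10},   # pumping at well j (drawdown < 0)
     2: {0: -0.05, 1: -0.10, 2: -0.30}}
eps = 0.05                                # required inward head drop (m)

prob = LpProblem("gradient_control", LpMinimize)
q = {j: LpVariable(f"q_{j}", lowBound=0) for j in wells}   # pumping rates

prob += lpSum(q.values())                 # minimise total pumping
for inner, outer in pairs:
    h_in = h0[inner] + lpSum(R[inner][j] * q[j] for j in wells)
    h_out = h0[outer] + lpSum(R[outer][j] * q[j] for j in wells)
    prob += h_out - h_in >= eps           # head falls toward the removal well

prob.solve()
print({j: round(q[j].value(), 2) for j in wells})
```

Because the unmanaged heads fall outward, the solver must pump hardest at the innermost well to reverse the gradient and pull the plume back toward the removal system.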

In conclusion, the study states that this LP approach to optimizing pumping rates not only saves cost but also reduces the time for effective removal from 22 years, if no such scheme is applied, to a little over 12 years. The authors do agree, though, that a lot more can be done, such as using a mixed-integer programming technique to optimize the number of wells for gradient control.

My comments:

I think the idea is really good and effective. As the results suggest, there is scope for tremendous improvement in contaminant removal: it becomes quicker and incurs lower costs. I would like to suggest the following:

1) The gradient control scheme should be revisited in order to determine the optimum contaminant concentrations that can be removed given the movement of the groundwater. This can be done with advanced groundwater modeling software like MODFLOW, which I think can make the gradient control more effective.

2) The non-linearities in the management options can be modelled using other optimisation approaches to give a more accurate model. Mixed-integer programming should be included to find the optimum number of wells and their placements.