Publications

Combining water quality and operational data for improved event detection

Water Distribution Systems Analysis 2010 - Proceedings of the 12th International Conference, WDSA 2010

Hart, David B.; Mckenna, Sean A.; Murray, Regan; Haxton, Terra

Water quality signals from sensors provide a snapshot of the water quality at the monitoring station at discrete sample times. These data are typically processed by event detection systems to determine the probability of a water quality event occurring at each sample time. Inherent noise in sensor data and rapid changes in water quality due to operational actions can cause false alarms in event detection systems. While the event determination can be made solely on the data from each signal at the current time step, combining data across signals and backwards in time can provide a richer set of data for event detection. Here we examine the ability of algebraic combinations and other transformations of the raw signals to further decrease false alarms. As an example, using operational events such as one or more pumps turning on or off to define a period of decreased detection sensitivity is one approach to limiting false alarms. This method is effective when lag times are known or when the sensors are co-located with the equipment causing the change. The CANARY software was used to test and demonstrate these combinatorial techniques for improving sensitivity and decreasing false alarms on both background data and data with simulated events. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000. © 2012 ASCE.
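
As a rough illustration of the kind of signal combination described above, the sketch below flags anomalies by pooling standardized residuals across multiple signals over a trailing window and temporarily raising the alarm threshold after an operational event. It is a minimal stand-in, not the CANARY algorithm; the window length, threshold, and mute period are arbitrary illustrative parameters.

```python
import numpy as np

def detect_events(signals, op_events, window=60, threshold=4.0, mute=30):
    """Flag anomalies in multivariate water quality data.

    signals   : (n_times, n_signals) array of sensor readings
    op_events : (n_times,) boolean array marking operational changes
                (e.g., a pump turning on or off)
    window    : length of the trailing background window, in samples
    threshold : alarm threshold on the combined residual
    mute      : samples of reduced sensitivity after an operational event
    """
    n_times, _ = signals.shape
    alarms = np.zeros(n_times, dtype=bool)
    muted_until = -1
    for t in range(window, n_times):
        if op_events[t]:
            muted_until = t + mute            # desensitize after pump switching
        hist = signals[t - window:t]          # trailing background window
        mu, sd = hist.mean(axis=0), hist.std(axis=0) + 1e-9
        z = (signals[t] - mu) / sd            # standardized residual per signal
        combined = np.sqrt(np.mean(z ** 2))   # combine information across signals
        limit = threshold * (2.0 if t <= muted_until else 1.0)
        alarms[t] = combined > limit
    return alarms

# Toy usage: three noisy signals with a simulated event near the end.
rng = np.random.default_rng(0)
sig = rng.normal(size=(500, 3))
sig[450:470] += 6.0                           # simulated water quality event
ops = np.zeros(500, dtype=bool)
ops[100] = True                               # a pump start at sample 100
print(np.flatnonzero(detect_events(sig, ops)))
```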

Optimal determination of grab sample locations and source inversion in large-scale water distribution systems

Water Distribution Systems Analysis 2010 - Proceedings of the 12th International Conference, WDSA 2010

Wong, Angelica; Young, James; Laird, Carl D.; Hart, William E.; Mckenna, Sean A.

We present a mixed-integer linear programming formulation to determine optimal locations for manual grab sampling after the detection of contaminants in a water distribution system. The formulation selects optimal manual grab sample locations that maximize the total pair-wise distinguishability of candidate contamination events. Given an initial contaminant detection location, a source inversion is performed that eliminates unlikely events, resulting in a much smaller set of candidate contamination events. We then propose a cyclical process in which optimal grab sample locations are determined and manual grab samples taken. Relying only on YES/NO indicators of the presence of the contaminant, source inversion is performed to reduce the set of candidate contamination events. The process is repeated until the number of candidate events is sufficiently small. Case studies testing this process are presented using water network models ranging from 4 to approximately 13000 nodes. The results demonstrate that the contamination event can be identified within a remarkably small number of sampling cycles using very few sampling teams. Furthermore, solution times were reasonable, making this formulation suitable for real-time settings. © 2012 ASCE.
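
The paper's formulation is a mixed-integer linear program; as a lightweight stand-in, the sketch below uses a greedy heuristic to pick grab-sample locations that separate the most still-confused pairs of candidate events from YES/NO presence indicators. The `presence` matrix, the function name, and the toy data are hypothetical.

```python
import numpy as np
from itertools import combinations

def greedy_grab_samples(presence, n_samples):
    """Pick grab-sample locations that distinguish candidate events.

    presence : (n_locations, n_events) boolean array; presence[l, e] is True
               if candidate contamination event e would show contaminant at
               location l (a YES/NO indicator).
    Greedily adds the location that separates the most still-confused event
    pairs; returns the chosen location indices.
    """
    n_loc, n_ev = presence.shape
    undistinguished = set(combinations(range(n_ev), 2))
    chosen = []
    for _ in range(min(n_samples, n_loc)):
        best_loc, best_gain = None, -1
        for l in range(n_loc):
            if l in chosen:
                continue
            gain = sum(presence[l, i] != presence[l, j]
                       for i, j in undistinguished)
            if gain > best_gain:
                best_loc, best_gain = l, gain
        chosen.append(best_loc)
        undistinguished = {(i, j) for i, j in undistinguished
                           if presence[best_loc, i] == presence[best_loc, j]}
    return chosen

# Toy usage: 6 candidate locations, 4 candidate events.
rng = np.random.default_rng(1)
print(greedy_grab_samples(rng.random((6, 4)) > 0.5, n_samples=2))
```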

Bayesian data assimilation for stochastic multiscale models of transport in porous media

Lefantzi, Sophia L.; Klise, Katherine A.; Salazar, Luke S.; Mckenna, Sean A.; van Bloemen Waanders, Bart G.; Ray, Jaideep R.

We investigate Bayesian techniques that can be used to reconstruct field variables from partial observations. In particular, we target fields that exhibit spatial structures with a large spectrum of lengthscales. Contemporary methods typically describe the field on a grid and estimate structures which can be resolved by it. In contrast, we address the reconstruction of grid-resolved structures as well as estimation of statistical summaries of subgrid structures, which are smaller than the grid resolution. We perform this in two different ways (a) via a physical (phenomenological), parameterized subgrid model that summarizes the impact of the unresolved scales at the coarse level and (b) via multiscale finite elements, where specially designed prolongation and restriction operators establish the interscale link between the same problem defined on a coarse and fine mesh. The estimation problem is posed as a Bayesian inverse problem. Dimensionality reduction is performed by projecting the field to be inferred on a suitable orthogonal basis set, viz. the Karhunen-Loeve expansion of a multiGaussian. We first demonstrate our techniques on the reconstruction of a binary medium consisting of a matrix with embedded inclusions, which are too small to be grid-resolved. The reconstruction is performed using an adaptive Markov chain Monte Carlo method. We find that the posterior distributions of the inferred parameters are approximately Gaussian. We exploit this finding to reconstruct a permeability field with long, but narrow embedded fractures (which are too fine to be grid-resolved) using scalable ensemble Kalman filters; this also allows us to address larger grids. Ensemble Kalman filtering is then used to estimate the values of hydraulic conductivity and specific yield in a model of the High Plains Aquifer in Kansas. Strong conditioning of the spatial structure of the parameters and the non-linear aspects of the water table aquifer create difficulty for the ensemble Kalman filter. We conclude with a demonstration of the use of multiscale stochastic finite elements to reconstruct permeability fields. This method, though computationally intensive, is general and can be used for multiscale inference in cases where a subgrid model cannot be constructed.
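
A small sketch of the dimensionality-reduction step mentioned above: a truncated Karhunen-Loeve expansion of a Gaussian field built from the eigendecomposition of an assumed exponential covariance on a 1-D grid. In the inversions described in the abstract the KL weights would be inferred from data; here they are simply drawn at random, and the covariance model, grid, and parameter values are illustrative assumptions.

```python
import numpy as np

def kl_realization(x, corr_length, n_modes, rng):
    """Draw a zero-mean Gaussian field on the 1-D grid x from a truncated
    Karhunen-Loeve expansion of an exponential covariance model."""
    cov = np.exp(-np.abs(x[:, None] - x[None, :]) / corr_length)
    vals, vecs = np.linalg.eigh(cov)                     # ascending eigenvalues
    vals, vecs = vals[::-1][:n_modes], vecs[:, ::-1][:, :n_modes]
    weights = rng.standard_normal(n_modes)               # KL weights ~ N(0, 1)
    return vecs @ (np.sqrt(np.clip(vals, 0.0, None)) * weights)

x = np.linspace(0.0, 1.0, 200)
field = kl_realization(x, corr_length=0.1, n_modes=25,
                       rng=np.random.default_rng(2))
print(field.shape)
```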

Spatial-temporal event detection in climate parameter imagery

Mckenna, Sean A.; Flores, Karen A.

Previously developed techniques that comprise statistical parametric mapping, with applications focused on human brain imaging, are examined and tested here for new applications in anomaly detection within remotely-sensed imagery. Two approaches to analysis are developed: online, regression-based anomaly detection and conditional differences. These approaches are applied to two example spatial-temporal data sets: data simulated with a Gaussian field deformation approach and weekly NDVI images derived from global satellite coverage. Results indicate that anomalies can be identified in spatial-temporal data with the regression-based approach. Additionally, La Niña and El Niño climatic conditions are used as different stimuli applied to the Earth, and this comparison shows that El Niño conditions lead to significant decreases in NDVI in both the Amazon Basin and in Southern India.
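
A toy version of the regression-based anomaly detection idea, assuming a per-pixel linear trend as the background model: each pixel's history is fit by ordinary least squares and the final image is flagged wherever its residual exceeds a z-score threshold. The trend model and threshold are illustrative simplifications, not the statistical parametric mapping machinery used in the paper.

```python
import numpy as np

def regression_anomalies(stack, threshold=3.0):
    """Per-pixel regression-based anomaly detection on an image time series.

    stack : (n_times, ny, nx) array of images (e.g., weekly NDVI)
    Flags pixels in the final image whose residual from a per-pixel linear
    trend, fit to the earlier images, exceeds `threshold` standard deviations.
    """
    n_t, ny, nx = stack.shape
    t = np.arange(n_t, dtype=float)
    y = stack.reshape(n_t, -1)                       # (n_times, n_pixels)
    A = np.column_stack([np.ones(n_t - 1), t[:-1]])  # fit on history only
    coef, *_ = np.linalg.lstsq(A, y[:-1], rcond=None)
    pred = np.array([1.0, t[-1]]) @ coef             # prediction for last image
    resid_hist = y[:-1] - A @ coef
    sigma = resid_hist.std(axis=0) + 1e-9
    z = (y[-1] - pred) / sigma
    return (np.abs(z) > threshold).reshape(ny, nx)

# Toy usage: a stack of 52 weekly images with an anomaly in the last one.
rng = np.random.default_rng(3)
stack = rng.normal(size=(52, 20, 20))
stack[-1, 5:8, 5:8] += 6.0
print(regression_anomalies(stack).sum())      # roughly the 9 anomalous pixels
```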

Truncated multiGaussian fields and effective conductance of binary media

Advances in Water Resources

Mckenna, Sean A.; Ray, Jaideep R.; Marzouk, Youssef; van Bloemen Waanders, Bart G.

Truncated Gaussian fields provide a flexible model for defining binary media with dispersed (as opposed to layered) inclusions. General properties of excursion sets on these truncated fields are coupled with a distance-based upscaling algorithm and approximations of point process theory to develop an estimation approach for effective conductivity in two dimensions. Estimation of effective conductivity is derived directly from knowledge of the kernel size used to create the multiGaussian field, defined as the full-width at half maximum (FWHM), the truncation threshold and conductance values of the two modes. Therefore, instantiation of the multiGaussian field is not necessary for estimation of the effective conductance. The critical component of the effective medium approximation developed here is the mean distance between high conductivity inclusions. This mean distance is characterized as a function of the FWHM, the truncation threshold and the ratio of the two modal conductivities. Sensitivity of the resulting effective conductivity to this mean distance is examined for two levels of contrast in the modal conductances and different FWHM sizes. Results demonstrate that the FWHM is a robust measure of mean travel distance in the background medium. The resulting effective conductivities are accurate when compared to results obtained from effective media theory, distance-based upscaling and numerical simulation. © 2011 Elsevier Ltd.
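
The field construction described above can be sketched as follows: smooth white noise with a Gaussian kernel whose standard deviation follows from the FWHM (sigma = FWHM / (2*sqrt(2*ln 2))), then truncate at a quantile to obtain the target proportion of high-conductivity inclusions. The grid size, FWHM, proportion, and modal conductivities below are arbitrary example values; the sketch generates a binary field but does not perform the effective-conductivity estimation developed in the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def truncated_field(shape, fwhm, proportion, k_low, k_high, rng):
    """Binary conductivity field from a truncated multiGaussian.

    A white-noise field is smoothed with a Gaussian kernel whose full-width
    at half maximum is `fwhm` (in grid cells), then thresholded so that a
    fraction `proportion` of cells becomes the high-conductivity phase.
    """
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))    # FWHM -> std. dev.
    field = gaussian_filter(rng.standard_normal(shape), sigma)
    threshold = np.quantile(field, 1.0 - proportion)      # truncation threshold
    return np.where(field > threshold, k_high, k_low)

rng = np.random.default_rng(4)
k = truncated_field((200, 200), fwhm=10.0, proportion=0.2,
                    k_low=1e-6, k_high=1e-3, rng=rng)
print(k.shape, (k == 1e-3).mean())            # proportion of inclusions ~0.2
```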

Risk assessment as a framework for decisions

Mckenna, Sean A.; Borns, David J.

The risk assessment approach has been applied to support numerous radioactive waste management activities over the last 30 years. A risk assessment methodology provides a solid and readily adaptable framework for evaluating the risks of CO2 sequestration in geologic formations to prioritize research, data collection, and monitoring schemes. This paper reviews the tasks of a risk assessment and provides a few examples related to each task. The paper then describes an application of sensitivity analysis to identify parameters important for reducing uncertainty in the performance of a geologic repository for radioactive waste, which, because of the importance of the geologic barrier, is similar to CO2 sequestration. The paper ends with a simple stochastic analysis of an idealized CO2 sequestration site with a leaking abandoned well and a set of monitoring wells in an aquifer above the sequestration unit, in order to evaluate the efficacy of the monitoring wells in detecting adverse leakage.

Geologic controls influencing CO2 loss from a leaking well

Martinez, Mario J.; Hopkins, Polly L.; Mckenna, Sean A.

Injection of CO2 into formations containing brine is proposed as a long-term sequestration solution. A significant obstacle to sequestration performance is the presence of existing wells providing a transport pathway out of the sequestration formation. To understand how heterogeneity impacts the leakage rate, we employ two-dimensional models of the CO2 injection process into a sandstone aquifer with shale inclusions to examine the parameters controlling release through an existing well. This scenario is modeled as a constant-rate injection of super-critical CO2 into the existing formation, with buoyancy effects, relative permeabilities, and capillary pressures included. Three geologic controls are considered: stratigraphic dip angle, shale inclusion size and shale fraction. In this study, we examine the impact of heterogeneity on the amount and timing of CO2 released through a leaky well. Sensitivity analysis is performed to classify how various geologic controls influence CO2 loss. A 'Design of Experiments' approach is used to identify the most important parameters and combinations of parameters to control CO2 migration while making efficient use of a limited number of computations. Results are used to construct a low-dimensional description of the transport scenario. The goal of this exploration is to develop a small set of parametric descriptors that can be generalized to similar scenarios. Results of this work will allow for estimation of the amount of CO2 that will be lost for a given scenario prior to commencing injection. Additionally, two-dimensional and three-dimensional simulations are compared to quantify the influence that the surrounding geologic media has on the CO2 leakage rate.

The effect of error models in the multiscale inversion of binary permeability fields

Ray, Jaideep R.; van Bloemen Waanders, Bart G.; Mckenna, Sean A.

We present results from a recently developed multiscale inversion technique for binary media, with emphasis on the effect of subgrid model errors on the inversion. Binary media are a useful fine-scale representation of heterogeneous porous media. Averaged properties of the binary field representations can be used to characterize flow through the porous medium at the macroscale. Both direct measurements of the averaged properties and upscaling are complicated and may not provide accurate results. However, it may be possible to infer upscaled properties of the binary medium from indirect measurements at the coarse scale. Multiscale inversion, performed with a subgrid model to connect disparate scales together, can also yield information on the fine-scale properties. We model the binary medium using truncated Gaussian fields, and develop a subgrid model for the upscaled permeability based on excursion sets of those fields. The subgrid model requires an estimate of the proportion of inclusions at the block scale as well as some geometrical parameters of the inclusions as inputs, and predicts the effective permeability. The inclusion proportion is assumed to be spatially varying, modeled using Gaussian processes and represented using a truncated Karhunen-Loève (KL) expansion. This expansion is used, along with the subgrid model, to pose a Bayesian inverse problem for the KL weights and the geometrical parameters of the inclusions. The model error is represented in two different ways: (1) as a homoscedastic error and (2) as a heteroscedastic error, dependent on inclusion proportion and geometry. The error models impact the form of the likelihood function in the expression for the posterior density of the objects of inference. The problem is solved using an adaptive Markov Chain Monte Carlo method, and joint posterior distributions are developed for the KL weights and inclusion geometry. Effective permeabilities and tracer breakthrough times at a few 'sensor' locations (obtained by simulating a pump test) form the observables used in the inversion. The inferred quantities can be used to generate an ensemble of permeability fields, both upscaled and fine-scale, which are consistent with the observations. We compare the inferences developed using the two error models, in terms of the KL weights and fine-scale realizations that could be supported by the coarse-scale inferences. Permeability differences are observed mainly in regions where the inclusion proportion is near the percolation threshold and the subgrid model incurs its largest approximation error. These differences are also reflected in the tracer breakthrough times and the geometry of flow streamlines, as obtained from a permeameter simulation. The uncertainty due to subgrid model error is also compared to the uncertainty in the inversion due to incomplete data.

Sensor placement for municipal water networks

Phillips, Cynthia A.; Boman, Erik G.; Carr, Robert D.; Hart, William E.; Berry, Jonathan W.; Watson, Jean-Paul W.; Hart, David B.; Mckenna, Sean A.; Riesen, Lee A.

We consider the problem of placing a limited number of sensors in a municipal water distribution network to minimize the impact over a given suite of contamination incidents. In its simplest form, the sensor placement problem is a p-median problem that has structure extremely amenable to exact and heuristic solution methods. We describe the solution of real-world instances using integer programming, local search, or a Lagrangian method. The Lagrangian method is necessary for solution of large problems on small PCs. We summarize a number of other heuristic methods for effectively addressing issues such as sensor failures, tuning sensors based on local water quality variability, and problem size/approximation quality tradeoffs. These algorithms are incorporated into the TEVA-SPOT toolkit, a software suite that the US Environmental Protection Agency has used and is using to design contamination warning systems for US municipal water systems.
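
In its simplest form the placement problem is a p-median problem; the sketch below illustrates that structure with a greedy heuristic, assuming an impact matrix giving the consequence of each contamination incident when first detected at each candidate location. It is an illustration of the problem structure only, not the TEVA-SPOT solvers (integer programming, local search, Lagrangian) described above, and the matrix layout and penalty value are assumptions.

```python
import numpy as np

def greedy_placement(impact, n_sensors, undetected_penalty):
    """Greedy heuristic for a p-median style sensor placement problem.

    impact : (n_locations, n_incidents) array; impact[s, i] is the impact of
             incident i if first detected by a sensor at location s
             (np.inf if that location never sees the incident).
    Returns the chosen locations and the resulting mean impact over incidents.
    """
    n_loc, n_inc = impact.shape
    best_so_far = np.full(n_inc, undetected_penalty, dtype=float)
    chosen = []
    for _ in range(n_sensors):
        means = np.full(n_loc, np.inf)
        for s in range(n_loc):
            if s in chosen:
                continue
            coverage = np.where(np.isfinite(impact[s]), impact[s], undetected_penalty)
            means[s] = np.minimum(best_so_far, coverage).mean()
        s_best = int(np.argmin(means))          # candidate that helps most
        chosen.append(s_best)
        coverage = np.where(np.isfinite(impact[s_best]),
                            impact[s_best], undetected_penalty)
        best_so_far = np.minimum(best_so_far, coverage)
    return chosen, best_so_far.mean()

# Toy usage: 10 candidate locations, 50 incidents, place 3 sensors.
rng = np.random.default_rng(5)
impact = rng.uniform(10.0, 100.0, size=(10, 50))
print(greedy_placement(impact, n_sensors=3, undetected_penalty=200.0))
```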

Estimating parameters and uncertainty for three-dimensional flow and transport in a highly heterogeneous sand box experiment

Yoon, Hongkyu Y.; Mckenna, Sean A.; Hart, David B.

Heterogeneity plays an important role in groundwater flow and contaminant transport in natural systems. Since it is impossible to directly measure spatial variability of hydraulic conductivity, predictions of solute transport based on mathematical models are always uncertain. While in most cases groundwater flow and tracer transport problems are investigated in two-dimensional (2D) systems, it is important to study more realistic and well-controlled 3D systems to fully evaluate inverse parameter estimation techniques and evaluate uncertainty in the resulting estimates. We used tracer concentration breakthrough curves (BTCs) obtained from a magnetic resonance imaging (MRI) technique in a small flow cell (14 x 8 x 8 cm) that was packed with a known pattern of five different sands (i.e., zones) having cm-scale variability. In contrast to typical inversion systems with head, conductivity and concentration measurements at limited points, the MRI data included BTCs measured at a voxel scale (approximately 0.2 cm in each dimension) over 13 x 8 x 8 cm with a well-controlled boundary condition, but did not have direct measurements of head and conductivity. Hydraulic conductivity and porosity were conceptualized as spatial random fields and estimated using pilot points along layers of the 3D medium. The steady state water flow and solute transport were solved using MODFLOW and MODPATH. The inversion problem was solved with a nonlinear parameter estimation package - PEST. Two approaches to parameterization of the spatial fields are evaluated: (1) the detailed zone information was used as prior information to constrain the spatial impact of the pilot points and reduce the number of parameters; and (2) highly parameterized inversion at cm scale (e.g., 1664 parameters) using singular value decomposition (SVD) methodology to significantly reduce the run-time demands. Both results will be compared to measured BTCs. With MRI, it is easy to change the averaging scale of the observed concentration from point to cross-section. This comparison allows us to evaluate which method best matches experimental results at different scales. To evaluate the uncertainty in parameter estimation, the null space Monte Carlo method will be used to reduce the computational burden of the development of calibration-constrained Monte Carlo based parameter fields. This study will illustrate how accurately a well-calibrated model can predict contaminant transport.

Posterior predictive modeling using multi-scale stochastic inverse parameter estimates

Mckenna, Sean A.; Ray, Jaideep R.; van Bloemen Waanders, Bart G.

Multi-scale binary permeability field estimation from static and dynamic data is completed using Markov Chain Monte Carlo (MCMC) sampling. The binary permeability field is defined as high permeability inclusions within a lower permeability matrix. Static data are obtained as measurements of permeability with support consistent to the coarse scale discretization. Dynamic data are advective travel times along streamlines calculated through a fine-scale field and averaged for each observation point at the coarse scale. Parameters estimated at the coarse scale (30 x 20 grid) are the spatially varying proportion of the high permeability phase and the inclusion length and aspect ratio of the high permeability inclusions. From the non-parametric posterior distributions estimated for these parameters, a recently developed sub-grid algorithm is employed to create an ensemble of realizations representing the fine-scale (3000 x 2000) binary permeability field. Each fine-scale ensemble member is instantiated by convolution of an uncorrelated multiGaussian random field with a Gaussian kernel defined by the estimated inclusion length and aspect ratio. Since the multiGaussian random field is itself a realization of a stochastic process, the procedure for generating fine-scale binary permeability field realizations is also stochastic. Two different methods are hypothesized to perform posterior predictive tests. Different mechanisms for combining multiGaussian random fields with kernels defined from the MCMC sampling are examined. Posterior predictive accuracy of the estimated parameters is assessed against a simulated ground truth for predictions at both the coarse scale (effective permeabilities) and at the fine scale (advective travel time distributions). The two techniques for conducting posterior predictive tests are compared by their ability to recover the static and dynamic data. The skill of the inference and the method for generating fine-scale binary permeability fields are evaluated through flow calculations on the resulting fields using fine-scale realizations and comparing them against results obtained with the ground truth fine-scale and coarse-scale permeability fields.

Integrating event detection system operation characteristics into sensor placement optimization

Hart, David B.; Hart, William E.; Mckenna, Sean A.; Phillips, Cynthia A.

We consider the problem of placing sensors in a municipal water network when we can choose both the location of sensors and the sensitivity and specificity of the contamination warning system. Sensor stations in a municipal water distribution network continuously send sensor output information to a centralized computing facility, and event detection systems at the control center determine when to signal an anomaly worthy of response. Although most sensor placement research has assumed perfect anomaly detection, signal analysis software has parameters that control the tradeoff between false alarms and false negatives. We describe a nonlinear sensor placement formulation, which we heuristically optimize with a linear approximation that can be solved as a mixed-integer linear program. We report the results of initial experiments on a real network and discuss tradeoffs between early detection of contamination incidents and control of false alarms.

Trajectory clustering approach for reducing water quality event false alarms

Proceedings of World Environmental and Water Resources Congress 2009 - World Environmental and Water Resources Congress 2009: Great Rivers

Vugrin, Eric D.; Mckenna, Sean A.; Hart, David B.

Event detection system (EDS) performance is hindered by false alarms that cause unnecessary resource expenditure by the utility and undermine confidence in the EDS operation. Changes in water quality due to operational changes in the utility hydraulics can cause a significant number of false alarms. These changes may occur daily, and each instance produces similar changes in the multivariate water quality pattern. Recognizing that patterns of water quality change must be identified, we adapt trajectory clustering as a means of classifying these multivariate patterns. We develop a general approach for dealing with changes in utility operations that impact water quality. This approach uses historical water quality data from the utility to identify recurring patterns and retains those patterns in a library that can be accessed during online operation. We have implemented this pattern matching capability within CANARY and describe several example applications that demonstrate a decrease in false alarms. ©2009 ASCE.

Detailed investigation of solute mixing in pipe joints through high speed photography

Proceedings of the 10th Annual Water Distribution Systems Analysis Conference, WDSA 2008

Mckenna, Sean A.; O'Hern, Timothy J.; Hartenberger, Joel D.

Investigation of turbulent mixing in pipe joints has been a topic of recent research interest. These investigations have relied on experimental results with downstream sensors to determine the bulk characteristics of mixing in pipe joints. High fidelity computational fluid dynamics models have also been employed to examine the fine scale physics of the mixing within the joint geometry. To date, high resolution imaging of experimental conditions within the pipe joint has not been reported. Here, we introduce high speed photography as a tool to accomplish this goal. Cross joints with four pipes coming together in a single junction are the focus of this investigation. All pipes entering the junction are the same diameter and made of clear PVC. The cross joint was milled from clear acrylic material to allow for high resolution imaging of the mixing processes within the joint. Two pipes carry water into the joint, one with clear water and the other with water containing dye and a salt tracer. Two outlet pipes carry water away from the joint. A high-speed digital camera was used to image mixing within the joint at an imaging rate of 30 Hz. Each grey-scale (8-bit) image is 1280 x 1024 pixels and covers a roughly 17.8 x 14.5 cm area containing the cross joint. The pixel size is approximately 0.13 x 0.14 mm. Four experiments using the clear cross-joint have been visualized. The Reynolds number (Re) for the tracer inlet pipe is held constant at 1500, while a different Re in the clear inlet pipe is used for each experiment. The Re values in the outlets are held equal to each other at the average Re of the inlets. Re values in the clear inlet pipe are 500, 1000, 2000 and 5000. Visual examination of the images provides information on the mixing behavior, including tracer transport along the walls of the pipe, transient variation in the amount of tracer entering each outlet, the sharpness of the clear-tracer interface and variation in the concentration of the tracer throughout the joint geometry. A sharp tracer-clear interface is visible for the clear inlet Re values of 500, 1000 and 2000, but decays to a broad, gradual transition zone at a clear inlet Re of 5000. There are no visible instabilities in the clear-tracer interface at the lowest clear water Re (500); regular periodic instabilities occur in the Re = 1000 experiment, become irregular but still periodic at clear inlet Re = 2000, and lose all regular structure in the Re = 5000 experiment. High speed photography applied to clear pipe joints, with the necessary image processing, can provide qualitative and quantitative insights into mixing processes. A limitation of this approach is that it provides two-dimensional images of a three-dimensional process. ©ASCE 2009.

Joint physical and numerical modeling of water distribution networks

Mckenna, Sean A.; Ho, Clifford K.; Cappelle, Malynda A.; Webb, Stephen W.; O'Hern, Timothy J.

This report summarizes the experimental and modeling effort, conducted during the last year of a 3-year project, undertaken to understand solute mixing in a water distribution network. The experimental effort involves measurement of the extent of mixing within different configurations of pipe networks, measurement of dynamic mixing in a single mixing tank, and measurement of dynamic solute mixing in a combined network-tank configuration. High-resolution analysis of turbulent mixing is carried out via high-speed photography as well as 3D finite-volume based large eddy simulation turbulence models. Macroscopic mixing rules based on flow momentum balance are also explored and, in some cases, implemented in EPANET. A new version of the EPANET code was developed to yield better mixing predictions. The impact of a storage tank on pipe mixing in a combined pipe-tank network during diurnal fill-and-drain cycles is assessed. A preliminary comparison between dynamic pilot data and EPANET-BAM is also reported.

Causal factors of non-Fickian dispersion explored through measures of aquifer connectivity

IAMG 2009 - Computational Methods for the Earth, Energy and Environmental Sciences

Klise, Katherine A.; Mckenna, Sean A.; Tidwell, Vincent C.; Lane, Jonathan W.; Weissmann, Gary S.; Wawrzyniec, Tim F.; Nichols, Elizabeth M.

While connectivity is an important aspect of heterogeneous media, methods to measure and simulate connectivity are limited. For this study, we use natural aquifer analogs developed through lidar imagery to examine the influence of connectivity on dispersion characteristics. A 221.8 cm by 50 cm section of a braided sand and gravel deposit of the Ceja Formation in Bernalillo County, New Mexico is selected for the study. Two-point (SISIM) and multipoint (Snesim and Filtersim) stochastic simulation methods are then compared based on their ability to replicate the dispersion characteristics of the aquifer analog. Detailed particle tracking simulations are used to explore the streamline-based connectivity that is preserved using each method. Connectivity analysis suggests a strong relationship between the length distribution of sand and gravel facies along streamlines and dispersion characteristics.

Strip transect sampling to estimate object abundance in homogeneous and non-homogeneous Poisson fields: A simulation study of the effects of changing transect width and number

Progress in Geomathematics

Coburn, Timothy C.; Mckenna, Sean A.; Saito, Hirotaka

This paper investigates the use of strip transect sampling to estimate object abundance when the underlying spatial distribution is assumed to be Poisson. A design-based, rather than model-based, approach to estimation is investigated through computer simulation, considering both homogeneous and non-homogeneous fields representing individual realizations of spatial point processes. Of particular interest are the effects of changing the number of transects and transect width (or alternatively, coverage percent or fraction) on the quality of the estimate. A specific application to the characterization of unexploded ordnance (UXO) in the subsurface at former military firing ranges is discussed. The results may be extended to the investigation of outcrop characteristics as well as subsurface geological features. © 2008 Springer-Verlag Berlin Heidelberg.
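
A minimal simulation in the spirit of the study: objects are drawn from a homogeneous Poisson process on the unit square, vertical strip transects of a chosen width are placed at random, and abundance is estimated by scaling the in-strip count by the sampled fraction. The intensity, transect number, and width are illustrative choices, and overlap between strips is ignored for simplicity.

```python
import numpy as np

def transect_estimate(xy, n_transects, width, rng):
    """Estimate object abundance from randomly placed vertical strip transects.

    xy : (n_objects, 2) object coordinates in the unit square.
    The estimator scales the count inside the strips by the sampled fraction.
    """
    starts = rng.uniform(0.0, 1.0 - width, size=n_transects)
    in_strip = np.zeros(len(xy), dtype=bool)
    for s in starts:
        in_strip |= (xy[:, 0] >= s) & (xy[:, 0] < s + width)
    covered = min(1.0, n_transects * width)   # ignores strip overlap
    return in_strip.sum() / covered

rng = np.random.default_rng(6)
n_true = rng.poisson(500)                     # homogeneous Poisson realization
xy = rng.uniform(size=(n_true, 2))
estimates = [transect_estimate(xy, n_transects=5, width=0.02, rng=rng)
             for _ in range(200)]
print(n_true, np.mean(estimates), np.std(estimates))
```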

Distributed network fusion for water quality

World Environmental and Water Resources Congress 2008: Ahupua'a - Proceedings of the World Environmental and Water Resources Congress 2008

Koch, Mark W.; Mckenna, Sean A.

To protect drinking water systems, a contamination warning system can use in-line sensors to detect accidental and deliberate contamination. Currently, detection of an incident occurs when data from a single station detects an anomaly. This paper considers the possibility of combining data from multiple locations to reduce false alarms and help determine the contaminant's injection source and time. If we consider the location and time of individual detections as points resulting from a random space-time point process, we can use Kulldorff's scan test to find statistically significant clusters of detections. Using EPANET, we simulate a contaminant moving through a water network and detect significant clusters of events. We show these significant clusters can distinguish true events from random false alarms and the clusters help identify the time and source of the contaminant. Fusion results show reduced errors with only 25% more sensors needed over a nonfusion approach. © 2008 ASCE.
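
The sketch below gives a much simplified space-time scan in the spirit of Kulldorff's test: cylinders (a spatial radius crossed with a time window) centred on each detection are scored with the Poisson log-likelihood ratio against a uniform baseline. The real test evaluates significance by Monte Carlo replication, which is omitted here; the candidate radii, time windows, and uniform-baseline assumption are all simplifications.

```python
import numpy as np

def scan_llr(c, e, n_total):
    """Poisson log-likelihood ratio used by spatial scan statistics."""
    if c <= e or c == 0:
        return 0.0
    llr = c * np.log(c / e)
    if n_total > c:
        llr += (n_total - c) * np.log((n_total - c) / (n_total - e))
    return llr

def best_space_time_cluster(points, radii, time_windows):
    """Scan cylinders (circle in space x interval in time) centred on each
    detection and return the highest-scoring one.

    points : (n, 3) array of detection coordinates and times (x, y, t).
    Expected counts assume detections are spread uniformly over the scanned
    domain, a strong simplification of the Kulldorff formulation.
    """
    n = len(points)
    xmin, ymin, tmin = points.min(axis=0)
    xmax, ymax, tmax = points.max(axis=0)
    area = max((xmax - xmin) * (ymax - ymin), 1e-12)
    t_span = max(tmax - tmin, 1e-12)
    best_score, best_cluster = 0.0, None
    for centre in points:
        d = np.hypot(points[:, 0] - centre[0], points[:, 1] - centre[1])
        for r in radii:
            for w in time_windows:
                inside = (d <= r) & (np.abs(points[:, 2] - centre[2]) <= w / 2.0)
                frac = min(1.0, (np.pi * r ** 2 / area) * (w / t_span))
                score = scan_llr(int(inside.sum()), n * frac, n)
                if score > best_score:
                    best_score, best_cluster = score, (tuple(centre), r, w)
    return best_score, best_cluster
```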

CANARY: A water quality event detection algorithm development tool

Restoring Our Natural Habitat - Proceedings of the 2007 World Environmental and Water Resources Congress

Hart, David; Mckenna, Sean A.; Klise, Katherine A.; Cruz, Victoria; Wilson, Mark

The detection of anomalous water quality events has become an increased priority for distribution systems, both for quality of service and security reasons. Because of the high cost associated with detection errors, both missed events and false alarms, algorithms that aim to provide event detection need to be evaluated and configured properly. CANARY has been developed to provide both real-time and off-line analysis tools to aid in the development of these algorithms, allowing algorithm developers to focus on the algorithms themselves, rather than on how to read in data and drive the algorithms. Among the features to be discussed and demonstrated are: 1) use of a standard data exchange format for input and output of water quality and operations data streams; 2) the ability to "plug in" various water quality change detection algorithms, both in MATLAB® and compiled library formats, for testing and evaluation using a well-defined interface; 3) an "operations mode" to simulate what a utility operator will receive; 4) side-by-side comparison tools for different evaluation metrics, including ROC curves, time to detect, and false alarm rates. Results will be shown using three algorithms previously developed (Klise and McKenna, 2006; McKenna, et al., 2006) using test and real-life data sets. © 2007 ASCE.

Evaluation of complete and incomplete mixing models in water distribution pipe network simulations

Restoring Our Natural Habitat - Proceedings of the 2007 World Environmental and Water Resources Congress

Ho, Clifford K.; Choi, Christopher Y.; Mckenna, Sean A.

A small-scale 3×3 pipe network was simulated to evaluate the validity of complete-mixing and incomplete-mixing models for water distribution systems under different flow rates and boundary conditions. CFD simulations showed that accurate predictions of spatially variable tracer concentrations throughout the network could be attained when compared to experimental data. In contrast, an EPANET model that assumed complete mixing within the junctions yielded uniform concentrations throughout the network, significantly different from the spatially variable concentrations observed in the experimental network. The EPANET model was also modified to include mixing correlations derived from previous single-joint experiments. The results from the modified model correctly reflected the incomplete mixing at the pipe junctions and matched the trend in the experimental data. Additional CFD simulations showed that networks composed of T-junctions separated by at least several pipe diameters could be adequately modeled with complete-mixing models. © 2007 ASCE.
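
For reference, the complete-mixing junction rule that standard EPANET applies, and that the experiments above show can overpredict mixing, is simply a flow-weighted average of the inlet concentrations; the incomplete-mixing correlations added to the modified model are not reproduced here.

```python
def complete_mixing(in_flows, in_concs):
    """Flow-weighted outlet concentration under the complete-mixing assumption:
    every outlet pipe leaving the junction receives the same concentration."""
    total_q = sum(in_flows)
    return sum(q * c for q, c in zip(in_flows, in_concs)) / total_q

# Cross junction: tracer enters one inlet at 100 mg/L, clean water the other,
# with equal flows; complete mixing predicts 50 mg/L in both outlets.
print(complete_mixing([1.0, 1.0], [100.0, 0.0]))
```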

Contaminant mixing at pipe joints: Comparison between laboratory flow experiments and computational fluid dynamics models

8th Annual Water Distribution Systems Analysis Symposium 2006

Ho, Clifford K.; Orear, Leslie; Wright, Jerome L.; Mckenna, Sean A.

This paper presents computational simulations and experiments of water flow and contaminant transport through pipes with incomplete mixing at pipe joints. The hydraulics and contaminant transport were modeled using computational fluid dynamics software that solves the continuity, momentum, energy, and species equations (laminar and turbulent) using finite-element methods. Simulations were performed of experiments consisting of individual and multiple pipe joints where tracer and clean water were separately introduced into the pipe junction. Results showed that the incoming flow streams generally remained separated within the junction, leading to incomplete mixing of the tracer. Simulations of the mixing matched the experimental results when appropriate scaling of the tracer diffusivity (via the turbulent Schmidt number) was calibrated based on results of single-joint experiments using cross and double-T configurations. Results showed that a turbulent Schmidt number between ∼0.001-0.01 was able to account for enhanced mixing caused by instabilities along the interface of impinging flows. Unequal flow rates within the network were also shown to affect the outlet concentration at each pipe junction, with "enhanced" or "reduced" mixing possible depending on the relative flow rates entering the junction. Copyright ASCE 2006.

Dispersion analysis using particle tracking simulations through heterogeneity based on outcrop lidar imagery

Tidwell, Vincent C.; Mckenna, Sean A.

Solute plumes are believed to disperse in a non-Fickian manner due to small-scale heterogeneity and variable velocities that create preferential pathways. In order to accurately predict dispersion in naturally complex geologic media, the connection between heterogeneity and dispersion must be better understood. Since aquifer properties can not be measured at every location, it is common to simulate small-scale heterogeneity with random field generators based on a two-point covariance (e.g., through use of sequential simulation algorithms). While these random fields can produce preferential flow pathways, it is unknown how well the results simulate solute dispersion through natural heterogeneous media. To evaluate the influence that complex heterogeneity has on dispersion, we utilize high-resolution terrestrial lidar to identify and model lithofacies from outcrop for application in particle tracking solute transport simulations using RWHet. The lidar scan data are used to produce a lab (meter) scale two-dimensional model that captures 2-8 mm scale natural heterogeneity. Numerical simulations utilize various methods to populate the outcrop structure captured by the lidar-based image with reasonable hydraulic conductivity values. The particle tracking simulations result in residence time distributions used to evaluate the nature of dispersion through complex media. Particle tracking simulations through conductivity fields produced from the lidar images are then compared to particle tracking simulations through hydraulic conductivity fields produced from sequential simulation algorithms. Based on this comparison, the study aims to quantify the difference in dispersion when using realistic and simplified representations of aquifer heterogeneity.

Markov models and the ensemble Kalman filter for estimation of sorption rates

Mckenna, Sean A.; Vugrin, Kay E.; Vugrin, Eric D.

Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed. These models are a Markov model that utilizes conditional probabilities to determine the rates and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates, and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations is zero, the unbiased case. For the cases containing some bias, addition of the ensemble Kalman filter is shown to improve the accuracy of the rate estimation by as much as an order of magnitude.
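
A generic stochastic ensemble Kalman filter analysis step of the kind referred to above, assuming a user-supplied forward operator that maps a parameter vector (for example, sorption and desorption rates along a streamline) to predicted concentrations. The function and variable names are illustrative; the Markov transition-probability forward model from the paper is not reproduced, and any callable with the same interface can be substituted.

```python
import numpy as np

def enkf_update(ensemble, obs, obs_operator, obs_err_std, rng):
    """One perturbed-observation ensemble Kalman filter analysis step.

    ensemble     : (n_members, n_params) ensemble of parameter vectors
    obs          : (n_obs,) observed concentrations at the current time
    obs_operator : callable mapping a parameter vector to predicted observations
    obs_err_std  : observation error standard deviation
    """
    n_members, _ = ensemble.shape
    predicted = np.array([obs_operator(m) for m in ensemble])  # (n_members, n_obs)
    x_anom = ensemble - ensemble.mean(axis=0)
    y_anom = predicted - predicted.mean(axis=0)
    cov_xy = x_anom.T @ y_anom / (n_members - 1)
    cov_yy = y_anom.T @ y_anom / (n_members - 1) + obs_err_std ** 2 * np.eye(len(obs))
    gain = cov_xy @ np.linalg.inv(cov_yy)                      # Kalman gain
    perturbed = obs + obs_err_std * rng.standard_normal((n_members, len(obs)))
    return ensemble + (perturbed - predicted) @ gain.T
```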

Markov Models and the Ensemble Kalman Filter for Estimation of Sorption Rates

Sandia journal manuscript; Not yet accepted for publication

Vugrin, Eric D.; Mckenna, Sean A.

Non-equilibrium sorption of contaminants in ground water systems is examined from the perspective of sorption rate estimation. A previously developed Markov transition probability model for solute transport is used in conjunction with a new conditional probability-based model of the sorption and desorption rates based on breakthrough curve data. Two models for prediction of spatially varying sorption and desorption rates along a one-dimensional streamline are developed. These models are a Markov model that utilizes conditional probabilities to determine the rates and an ensemble Kalman filter (EnKF) applied to the conditional probability method. Both approaches rely on a previously developed Markov model of mass transfer, and both models assimilate the observed concentration data into the rate estimation at each observation time. Initial values of the rates are perturbed from the true values to form ensembles of rates, and the ability of both estimation approaches to recover the true rates is examined over three different sets of perturbations. The models accurately estimate the rates when the mean of the perturbations is zero, the unbiased case. Finally, for the cases containing some bias, addition of the ensemble Kalman filter is shown to improve the accuracy of the rate estimation by as much as an order of magnitude.

Hierarchical probabilistic regionalization of volcanism for Sengan region, Japan

Geotechnical and Geological Engineering

Kulatilake, Pinnaduwa H.S.W.; Park, Jinyong; Balasingam, Pirahas; Mckenna, Sean A.

A 1 km square regular grid system created on the Universal Transverse Mercator zone 54 projected coordinate system is used to work with volcanism related data for the Sengan region. The following geologic variables were determined as the most important for identifying volcanism: geothermal gradient, groundwater temperature, heat discharge, groundwater pH value, presence of volcanic rocks and presence of hydrothermal alteration. Data available for each of these important geologic variables were used to perform directional variogram modeling and kriging to estimate geologic variable vectors at each of the 23,949 centers of the chosen 1 km cell grid system. Cluster analysis was performed on the 23,949 complete variable vectors to classify each 1 km cell center into one of five statistically homogeneous groups with respect to potential volcanism, spanning from lowest possible volcanism to highest possible volcanism with increasing group number. A discriminant analysis incorporating Bayes' theorem was performed to construct maps showing the probability of group membership for each of the volcanism groups. These maps compared well with the recorded locations of volcanism within the Sengan region. No volcanic data were found to exist in the group 1 region. The high probability areas within group 1 have the chance of being the no-volcanism region. Entropy of classification is calculated to assess the uncertainty of the allocation process at each 1 km cell center location based on the calculated probabilities. The recorded volcanism data are also plotted on the entropy map to examine the uncertainty level of the estimations at the locations where volcanism exists. The volcanic data cell locations that are in the high volcanism regions (groups 4 and 5) showed relatively low mapping estimation uncertainty. On the other hand, the volcanic data cell locations that are in the low volcanism region (group 2) showed relatively high mapping estimation uncertainty. The volcanic data cell locations that are in the medium volcanism region (group 3) showed relatively moderate mapping estimation uncertainty. Areas of high uncertainty provide locations where additional site characterization resources can be spent most effectively. The new data collected can be added to the existing database to perform future regionalized mapping and reduce the uncertainty level of the existing estimations. © Springer Science+Business Media B.V. 2006.
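
The entropy-of-classification measure mentioned above can be computed directly from the group-membership probabilities; a small sketch follows, normalized so that values range from 0 (certain allocation) to 1 (probabilities spread evenly over the groups). The array layout is an assumption for illustration.

```python
import numpy as np

def classification_entropy(probabilities):
    """Entropy of the group-membership probabilities at each grid cell,
    normalized to [0, 1]; higher values indicate a more uncertain allocation.

    probabilities : (n_cells, n_groups) array of membership probabilities
                    (rows sum to one).
    """
    p = np.clip(probabilities, 1e-12, 1.0)
    h = -(p * np.log(p)).sum(axis=1)
    return h / np.log(probabilities.shape[1])

# A cell split evenly over 5 groups has entropy 1; a certain allocation has ~0.
print(classification_entropy(np.array([[0.2] * 5,
                                       [1.0, 0.0, 0.0, 0.0, 0.0]])))
```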

Biological restoration of major transportation facilities domestic demonstration and application project (DDAP): technology development at Sandia National Laboratories

Griffith, Richard O.; Brown, Gary S.; Betty, Rita B.; Tucker, Mark D.; Ramsey, James L.; Brockmann, John E.; Lucero, Daniel A.; Mckenna, Sean A.; Peyton, Chad E.; Einfeld, Wayne E.; Ho, Pauline H.

The Bio-Restoration of Major Transportation Facilities Domestic Demonstration and Application Program (DDAP) is designed to accelerate the restoration of transportation nodes following an attack with a biological warfare agent. This report documents the technology development work done at SNL for this DDAP, which includes development of the BROOM tool, an investigation of surface sample collection efficiency, and a flow cytometry study of chlorine dioxide effects on Bacillus anthracis spore viability.

Joint Sandia/NIOSH exercise on aerosol contamination using the BROOM tool

Griffith, Richard O.; Brown, Gary S.; Tucker, Mark D.; Ramsey, James L.; Brockmann, John E.; Lucero, Daniel A.; Mckenna, Sean A.; Peyton, Chad E.; Einfeld, Wayne E.; Ho, Pauline H.

In February of 2005, a joint exercise involving Sandia National Laboratories (SNL) and the National Institute for Occupational Safety and Health (NIOSH) was conducted in Albuquerque, NM. The SNL participants included the team developing the Building Restoration Operations and Optimization Model (BROOM), a software product developed to expedite sampling and data management activities applicable to facility restoration following a biological contamination event. Integrated data-collection, data-management, and visualization software improve the efficiency of cleanup, minimize facility downtime, and provide a transparent basis for reopening. The exercise was held at an SNL facility, the Coronado Club, a now-closed social club for Sandia employees located on Kirtland Air Force Base. Both NIOSH and SNL had specific objectives for the exercise, and all objectives were met.

Accounting for geophysical information in geostatistical characterization of unexploded ordnance (UXO) sites

Environmental and Ecological Statistics

Saito, Hirotaka; Mckenna, Sean A.; Goovaerts, Pierre

Efficient and reliable unexploded ordnance (UXO) site characterization is needed for decisions regarding future land use. There are several types of data available at UXO sites, and geophysical signal maps are one of the most valuable sources of information. Incorporation of such information into site characterization requires a flexible and reliable methodology. Geostatistics allows one to account for exhaustive secondary information (i.e., known at every location within the field) in many different ways. Kriging and logistic regression were combined to map the probability of occurrence of at least one geophysical anomaly of interest, such as UXO, from a limited number of indicator data. Logistic regression is used to derive the trend from a geophysical signal map, and kriged residuals are added to the trend to estimate the probabilities of the presence of UXO at unsampled locations (simple kriging with varying local means, or SKlm). Each location is identified for further remedial action if the estimated probability is greater than a given threshold. The technique is illustrated using a hypothetical UXO site generated by a UXO simulator and a corresponding geophysical signal map. Indicator data are collected along two transects located within the site. Classification performance is then assessed by computing proportions of correct classification, false positives, and false negatives, and kappa statistics. Two common approaches, one of which does not take any secondary information into account (ordinary indicator kriging) and a variant of common cokriging (collocated cokriging), were used for comparison purposes. Results indicate that accounting for exhaustive secondary information improves the overall characterization of UXO sites if an appropriate methodology, SKlm in this case, is used. © Springer Science+Business Media, Inc. 2005.

Impact of sensor performance on protecting water distribution systems from contamination events

Mckenna, Sean A.; Yarrington, Lane Y.

Real-time water quality and chemical-specific sensors are becoming more commonplace in water distribution systems. The overall objective of the sensor network is to protect consumers from accidental and malevolent contamination events occurring within the distribution network. This objective can be quantified in several different ways, including minimizing the amount of contaminated water consumed, minimizing the extent of the contamination within the network, and minimizing the time to detection. We examine the ability of a sensor network to meet these objectives as a function of both the detection limit of the sensors and the number of sensors in the network. A moderately-sized network is used as an example and sensors are placed randomly. The source term is a passive injection into a node, and the resulting concentration in the node is a function of the volumetric flow through that node. The concentration of the contaminant at the source node is averaged for all time steps during the injection period. For each combination of a certain number of sensors and a detection limit, the mean values of the different objectives across multiple random sensor placements are evaluated. Results of this analysis allow the tradeoff between the necessary detection limit in a sensor and the number of sensors to be evaluated. Results show that for the example problem examined here, a sensor detection limit of 0.01 of the average source concentration is adequate for maximum protection.

Hierarchical probabilistic regionalization of volcanism for Sengan region in Japan using multivariate statistical techniques and geostatistical interpolation techniques

Mckenna, Sean A.

Sandia National Laboratories, under contract to Nuclear Waste Management Organization of Japan (NUMO), is performing research on regional classification of given sites in Japan with respect to potential volcanic disruption using multivariate statistics and geo-statistical interpolation techniques. This report provides results obtained for hierarchical probabilistic regionalization of volcanism for the Sengan region in Japan by applying multivariate statistical techniques and geostatistical interpolation techniques on the geologic data provided by NUMO. A workshop report produced in September 2003 by Sandia National Laboratories (Arnold et al., 2003) on volcanism lists a set of most important geologic variables as well as some secondary information related to volcanism. Geologic data extracted for the Sengan region in Japan from the data provided by NUMO revealed that data are not available at the same locations for all the important geologic variables. In other words, the geologic variable vectors were found to be incomplete spatially. However, it is necessary to have complete geologic variable vectors to perform multivariate statistical analyses. As a first step towards constructing complete geologic variable vectors, the Universal Transverse Mercator (UTM) zone 54 projected coordinate system and a 1 km square regular grid system were selected. The data available for each geologic variable on a geographic coordinate system were transferred to the aforementioned grid system. Also the recorded data on volcanic activity for Sengan region were produced on the same grid system. Each geologic variable map was compared with the recorded volcanic activity map to determine the geologic variables that are most important for volcanism. In the regionalized classification procedure, this step is known as the variable selection step. The following variables were determined as most important for volcanism: geothermal gradient, groundwater temperature, heat discharge, groundwater pH value, presence of volcanic rocks and presence of hydrothermal alteration. Data available for each of these important geologic variables were used to perform directional variogram modeling and kriging to estimate values for each variable at 23949 centers of the chosen 1 km cell grid system that represents the Sengan region. These values formed complete geologic variable vectors at each of the 23,949 one km cell centers.

Evaluating techniques for multivariate classification of non-collocated spatial data

Mckenna, Sean A.

Multivariate spatial classification schemes such as regionalized classification or principal components analysis combined with kriging rely on all variables being collocated at the sample locations. In these approaches, classification of the multivariate data into a finite number of groups is done prior to the spatial estimation. However, in some cases, the variables may be sampled at different locations, with the extreme case being complete heterotopy of the data set. In these situations, it is necessary to adapt existing techniques to work with non-collocated data. Two approaches are considered: (1) kriging of existing data onto a series of 'collection points' where the classification into groups is completed and a measure of the degree of group membership is kriged to all other locations; and (2) independent kriging of all attributes to all locations, after which the classification is done at each location. Calculations are conducted using an existing groundwater chemistry data set from the upper Dakota aquifer in Kansas (USA) that was previously examined using regionalized classification (Bohling, 1997). This data set has all variables measured at all locations. To test the ability of the first approach for dealing with non-collocated data, each variable is reestimated at each sample location through a cross-validation process and the reestimated values are then used in the regionalized classification. The second approach for non-collocated data requires independent kriging of each attribute across the entire domain prior to classification. Hierarchical and non-hierarchical classification of all vectors is completed, and a computationally less burdensome classification approach, 'sequential discrimination', is developed that constrains the classified vectors to be chosen from those with a minimal multivariate kriging variance. Resulting classification and uncertainty maps are compared between all non-collocated approaches as well as to the original collocated approach. The non-collocated approaches lead to significantly different group definitions compared to the collocated case. To some extent, these differences can be explained by the kriging variance of the estimated variables. Sequential discrimination of locations with a minimum multivariate kriging variance constraint produces slightly improved results relative to the collection point approach and the non-hierarchical classification of the estimated vectors.

Spatial analysis of hypocenter to fault relationships for determining fault process zone width in Japan

Roberts, Barry L.; Arnold, Bill W.; Mckenna, Sean A.

Preliminary investigation areas (PIA) for a potential repository of high-level radioactive waste must be evaluated by NUMO with regard to a number of qualifying factors. One of these factors is related to earthquakes and fault activity. This study develops a spatial statistical assessment method that can be applied to the active faults in Japan to perform such screening evaluations. This analysis uses the distribution of seismicity near faults to define the width of the associated process zone. This concept is based on previous observations of aftershock earthquakes clustered near active faults and on the assumption that such seismic activity is indicative of fracturing and associated impacts on bedrock integrity. Preliminary analyses of aggregate data for all of Japan confirmed that the frequency of earthquakes is higher near active faults. Data used in the analysis were obtained from NUMO and consist of three primary sources: (1) active fault attributes compiled in a spreadsheet, (2) earthquake hypocenter data, and (3) active fault locations. Examination of these data revealed several limitations with regard to the ability to associate fault attributes from the spreadsheet to locations of individual fault trace segments. In particular, there was no direct link between attributes of the active faults in the spreadsheet and the active fault locations in the GIS database. In addition, the hypocenter location resolution in the pre-1983 data was less accurate than for later data. These pre-1983 hypocenters were eliminated from further analysis.

Evolution of neural networks for the prediction of hydraulic conductivity as a function of borehole geophysical logs: Shobasama site, Japan

Mckenna, Sean A.; Reeves, Paul C.

This report describes the methodology and results of a project to develop a neural network for the prediction of the measured hydraulic conductivity or transmissivity in a series of boreholes at the Tono, Japan study site. Geophysical measurements were used as the input to a feed-forward neural network. A simple genetic algorithm was used to evolve the architecture and parameters of the neural network in conjunction with an optimal subset of geophysical measurements for the prediction of hydraulic conductivity. The first attempt was focused on the estimation of the class of the hydraulic conductivity (high, medium or low) from the geophysical logs. This estimation was done while using the genetic algorithm to simultaneously determine which geophysical logs were the most important and optimizing the architecture of the neural network. Initial results showed that certain geophysical logs provided more information than others - most notably, the 'short-normal', micro-resistivity, porosity and sonic logs provided the most information on hydraulic conductivity. The neural network produced excellent training results with accuracy of 90 percent or greater, but was unable to produce accurate predictions of the hydraulic conductivity class. The second attempt at prediction was done using a new methodology and a modified data set. The new methodology builds on the results of the first attempts at prediction by limiting the choices of geophysical logs to only those that provide significant information. Additionally, this second attempt uses a modified data set and predicts transmissivity instead of hydraulic conductivity. Results of these simulations indicate that the most informative geophysical measurements for the prediction of transmissivity are depth and the sonic log. The long normal resistivity and self potential borehole logs are moderately informative. In addition, it was found that porosity and crack counts (clear, open, or hairline) do not inform predictions of hydraulic transmissivity.

More Details

Long-Term Pumping Test at MIU Site, Toki, Japan: Hydrogeological Modeling and Groundwater Flow Simulation

Mckenna, Sean A.; Eliassi, Mehdi E.

A conceptual model of the MIU site in central Japan was developed to predict the groundwater system response to pumping. The study area consisted of a fairly large three-dimensional domain, 4.24 x 6 x 3 km^3, with three geological units: upper and lower fractured zones and a single fault unit. The resulting computational model comprises 702,204 finite-difference cells with variable grid spacing. Both steady-state and transient simulations were completed to evaluate the influence of two different surface boundary conditions: fixed head and no flow. Steady-state results were used for particle tracking and also served as the initial conditions (i.e., starting heads) for the transient simulations. Results of the steady-state simulations indicate the significance of the choice of surface (i.e., upper) boundary condition and its effect on the groundwater flow patterns along the base of the upper fractured zone. Steady-state particle tracking results illustrate that all particles exit the top of the model in areas where groundwater discharges to the Hiyoshi and Toki rivers. Particle travel times range from 3.6 x 10^7 s (~1.1 years) to 4.4 x 10^10 s (~1,394 years). For the transient simulations, two pumping zones, one above and one below the fault, are considered. For both cases, the pumping period extends for 14 days, followed by an additional 36 days of recovery. For the pumping rates used, the maximum drawdown is quite small (ranging from a few centimeters to a few meters), and thus pumping does not severely impact the groundwater flow system. The drawdowns produced by pumping below the fault are generally much less sensitive to the choice of boundary condition than are those produced by pumping above the fault.

More Details

Determining Optimal Location and Numbers of Sample Transects for Characterization of UXO Sites

Bilisoly, Roger L.; Mckenna, Sean A.

Previous work on sample design has focused on constructing designs for samples taken at point locations. Significantly less work has been done on sample design for data collected along transects. A review of approaches to point and transect sampling design shows that transects can be considered as a sequential set of point samples. Any two sampling designs can be compared by using each one to predict the value of the quantity being measured on a fixed reference grid. The quality of a design is quantified in two ways: computing either the sum or the product of the eigenvalues of the variance matrix of the prediction error. An important aspect of this analysis is that the reduction of the mean prediction error variance (MPEV) can be calculated for any proposed sample design, including one with straight and/or meandering transects, prior to taking those samples. This reduction in variance can be used as a "stopping rule" to determine when enough transect sampling has been completed on the site. Two approaches for the optimization of the transect locations are presented. The first minimizes the sum of the eigenvalues of the predictive error, and the second minimizes the product of these eigenvalues. Simulated annealing is used to identify transect locations that meet either of these objectives. This algorithm is applied to a hypothetical site to determine the optimal locations of two iterations of meandering transects given a previously existing straight transect. The MPEV calculation is also used on both a hypothetical site and on data collected at the Isleta Pueblo to evaluate its potential as a stopping rule. Results show that three or four rounds of systematic sampling with straight, parallel transects covering 30 percent or less of the site can reduce the initial MPEV by as much as 90 percent. The amount of reduction in MPEV can be used as a stopping rule, but the relationship between MPEV and the results of excavation versus no-further-action decisions is site specific and cannot be calculated prior to sampling. It may be advantageous to use the reduction in MPEV as a stopping rule for systematic sampling across the site, which can then be followed by focused sampling in areas identified as having UXO during the systematic sampling. The techniques presented here answer the questions of "Where to sample?" and "When to stop?" and are capable of running in near real time to support iterative site characterization campaigns.
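
The following is a minimal sketch, under assumed covariance and site geometry, of how simulated annealing can place a straight transect so as to minimize the sum of the eigenvalues (i.e., the trace) of the prediction-error covariance on a reference grid; it is not the algorithm used in the report, and all model parameters are hypothetical.

```python
# Minimal sketch: simulated annealing over transect placement, minimizing the trace
# of the simple-kriging prediction-error covariance on a fixed reference grid.
import numpy as np

rng = np.random.default_rng(1)
gx, gy = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
grid = np.column_stack([gx.ravel(), gy.ravel()])   # reference grid for MPEV

def cov(a, b, corr_len=0.25):
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return np.exp(-(d / corr_len) ** 2)            # Gaussian covariance, unit sill

def transect_points(x0, y0, angle, n=15, length=0.8):
    t = np.linspace(0, length, n)
    return np.column_stack([x0 + t * np.cos(angle), y0 + t * np.sin(angle)])

def criterion(samples):
    Css = cov(samples, samples) + 1e-8 * np.eye(len(samples))
    Cgs = cov(grid, samples)
    post = cov(grid, grid) - Cgs @ np.linalg.solve(Css, Cgs.T)
    return np.trace(post)                          # sum of eigenvalues of error covariance

state = np.array([0.1, 0.1, 0.5])                  # x0, y0, angle of the transect
cur = criterion(transect_points(*state))
temp = 1.0
for it in range(300):
    cand = state + rng.normal(scale=[0.05, 0.05, 0.2])       # perturb the transect
    val = criterion(transect_points(*cand))
    if val < cur or rng.random() < np.exp(-(val - cur) / temp):
        state, cur = cand, val                     # Metropolis acceptance
    temp *= 0.99                                   # geometric cooling
print("best transect (x0, y0, angle):", np.round(state, 3), "trace:", round(cur, 2))
```

Swapping the trace for the product of eigenvalues (a determinant-type criterion) requires only changing the return value of `criterion`, which is the distinction between the two objectives described above.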

More Details

Examining the effects of variability in short time scale demands on solute transport

Mckenna, Sean A.; Tidwell, Vincent C.

Variations in water use at short time scales, seconds to minutes, produce variation in the transport of solutes through a water supply network. However, the degree to which short-term variations in demand influence solute concentrations at different locations in the network is poorly understood. Here we examine the effect of demand variability on advective transport of a conservative solute (e.g., chloride) through a water supply network by defining the demand at each node in the model as a stochastic process. The stochastic demands are generated using a Poisson rectangular pulse (PRP) model for the case of a dead-end water line serving 20 homes represented as a single node. The simple dead-end network model is used to examine the variation in Reynolds number, the proportion of time that there is no flow (i.e., stagnant conditions in the pipe), and the travel time, defined as the time for cumulative demand to equal the volume of water in 1000 feet of pipe. Changes in these performance measures are examined as the fine-scale demand functions are aggregated over larger and larger time scales. Results are compared to previously developed analytical expressions for the first and second moments of these three performance measures. A new approach to predict the reduction in variance of the performance measures, based on perturbation theory, is presented and compared to the results of the numerical simulations. The distribution of travel time is relatively consistent across time scales until the time step approaches the travel time itself. However, the proportion of stagnant flow periods decreases rapidly as the simulation time step increases. Both sets of analytical expressions provide adequate, first-order predictions of the simulation results.
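
A minimal sketch of a Poisson rectangular pulse demand generator is given below; the pulse rates, durations, intensities, and pipe dimensions are hypothetical and serve only to illustrate how the stagnation fraction and travel time performance measures could be computed.

```python
# Minimal sketch (hypothetical parameters): superpose Poisson rectangular pulses of
# demand from 20 homes at a dead-end node, then estimate the fraction of stagnant
# (zero-flow) time and the time for cumulative demand to equal the volume of
# 1000 ft of pipe.
import numpy as np

rng = np.random.default_rng(2)
dt = 1.0                      # time step, seconds
horizon = 24 * 3600           # one day of demand
n_homes = 20
rate = 30 / 3600              # mean pulse arrivals per home per second (hypothetical)
mean_dur = 60.0               # mean pulse duration, s (exponential)
mean_q = 0.1                  # mean pulse intensity, L/s (exponential)

t = np.arange(0, horizon, dt)
demand = np.zeros_like(t)
for _ in range(n_homes):
    n_pulses = rng.poisson(rate * horizon)
    starts = rng.uniform(0, horizon, n_pulses)
    durs = rng.exponential(mean_dur, n_pulses)
    qs = rng.exponential(mean_q, n_pulses)
    for s, d, q in zip(starts, durs, qs):
        demand[(t >= s) & (t < s + d)] += q        # superpose rectangular pulses

stagnant_fraction = np.mean(demand == 0.0)
pipe_volume = np.pi * 0.05 ** 2 * 304.8 * 1000     # ~4-in pipe, 1000 ft, in litres
cum = np.cumsum(demand) * dt                       # cumulative volume drawn, litres
travel_idx = np.searchsorted(cum, pipe_volume)
print("stagnant fraction:", round(stagnant_fraction, 3))
print("travel time (h):", round(t[travel_idx] / 3600, 2) if travel_idx < len(t) else "not reached")
```

Aggregating `demand` into coarser time steps (e.g., averaging over 1-minute or 1-hour bins) and recomputing the two measures reproduces the kind of time-scale comparison described in the abstract.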

More Details

Syndrome Surveillance Using Parametric Space-Time Clustering

Koch, Mark W.; Mckenna, Sean A.; Bilisoly, Roger L.

As demonstrated by the anthrax attack through the United States mail, people infected by the biological agent itself will give the first indication of a bioterror attack. Thus, a distributed information system that can rapidly and efficiently gather and analyze public health data would aid epidemiologists in detecting and characterizing emerging diseases, including bioterror attacks. We propose using clusters of adverse health events in space and time to detect possible bioterror attacks. Space-time clusters can indicate exposure to infectious diseases or localized exposure to toxins. Most space-time clustering approaches require individual patient data. To protect patient privacy, we have extended these approaches to aggregated data and have embedded this extension in a sequential probability ratio test (SPRT) framework. The real-time and sequential nature of health data makes the SPRT an ideal candidate. The result of space-time clustering gives the statistical significance of a cluster at every location in the surveillance area and can be thought of as a "health index" of the people living in that area. As a surrogate for bioterrorism data, we have experimented with two flu data sets. For both databases, we show that space-time clustering can detect a flu epidemic as much as 21 to 28 days earlier than a conventional periodic regression technique. We have also tested the method using simulated anthrax attack data superimposed on a respiratory-illness diagnostic category. Results show that an attack can be detected as early as the second or third day after infected people start becoming severely symptomatic.
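
The sketch below illustrates the flavor of an SPRT applied to aggregated daily counts, testing a baseline Poisson rate against an elevated rate; the rates and error probabilities are hypothetical placeholders, and the space-time clustering step itself is omitted.

```python
# Minimal sketch (illustrative only, not the authors' implementation): a sequential
# probability ratio test on daily aggregated case counts, baseline rate lam0 vs.
# elevated "outbreak" rate lam1.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(3)
lam0, lam1 = 5.0, 10.0            # baseline vs. outbreak daily case rates (hypothetical)
alpha, beta = 0.01, 0.05          # false-alarm and missed-detection probabilities
upper = np.log((1 - beta) / alpha)
lower = np.log(beta / (1 - alpha))

counts = np.concatenate([rng.poisson(lam0, 10), rng.poisson(lam1, 20)])  # outbreak starts day 11
llr = 0.0
for day, c in enumerate(counts, start=1):
    llr += poisson.logpmf(c, lam1) - poisson.logpmf(c, lam0)   # accumulate evidence
    if llr >= upper:
        print(f"alarm raised on day {day}")
        break
    if llr <= lower:
        llr = 0.0                 # accept baseline for now and restart the test
```

Because the test statistic is updated one observation at a time and compared to fixed thresholds, it naturally fits the real-time, sequential character of aggregated surveillance data noted in the abstract.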

More Details

Probabilistic Approach to Site Characterization: MIU site, Tono Region, Japan

Mckenna, Sean A.

Geostatistical simulation is used to extrapolate data derived from site characterization activities at the MIU site into information describing the three-dimensional distribution of hydraulic conductivity at the site and the uncertainty in the estimates of hydraulic conductivity. This process is demonstrated for six data sets representing incrementally increasing amounts of characterization data. Short horizontal ranges characterize the spatial variability of both the rock types (facies) and the hydraulic conductivity measurements. For each of the six data sets, 50 geostatistical realizations of the facies and 50 realizations of the hydraulic conductivity are combined to produce 50 final realizations of the hydraulic conductivity distribution. Analysis of these final realizations indicates that the mean hydraulic conductivity value increases with the addition of site characterization data. The average hydraulic conductivity as a function of elevation changes from a uniform profile to a profile showing relatively high hydraulic conductivity values near the top and bottom of the simulation domain. Three-dimensional uncertainty maps show the greatest uncertainty in the hydraulic conductivity distribution near the top and bottom of the model. These upper and lower areas of high uncertainty are interpreted to be due to the unconformity at the top of the granitic rocks and the Tsukyoshi fault, respectively.
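
A brief, hypothetical sketch of the ensemble post-processing step (combining facies and conductivity realizations and summarizing the ensemble) is given below; the array sizes and per-facies statistics are placeholders, not site values.

```python
# Minimal sketch (hypothetical arrays): combine facies realizations with per-facies
# hydraulic-conductivity draws, then summarize the ensemble as a mean-log10(K)
# profile versus elevation and a cell-wise uncertainty map.
import numpy as np

rng = np.random.default_rng(4)
n_real, nz, ny, nx = 50, 30, 20, 20
facies = rng.integers(0, 2, size=(n_real, nz, ny, nx))      # 0 = granite, 1 = fractured
logk_mean = np.array([-9.0, -6.5])                           # log10 K by facies (hypothetical)
logk = logk_mean[facies] + rng.normal(0.0, 0.5, size=facies.shape)

mean_profile = logk.mean(axis=(0, 2, 3))      # average log10 K at each elevation layer
uncertainty = logk.std(axis=0)                # cell-wise spread across the 50 realizations
print("log10 K profile (top layers):", np.round(mean_profile[:5], 2), "...")
print("max cell-wise std dev:", round(float(uncertainty.max()), 2))
```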

More Details

Predictive Modeling of MIU3-MIU2 Interference Tests

Mckenna, Sean A.; Roberts, Randall M.

The goal of this project is to predict the drawdown that will be observed in specific piezometers placed in the MIU-2 borehole due to pumping at a single location in the MIU-3 borehole. These predictions take the form of distributions obtained through multiple forward runs of a well-test model. Specifically, two distributions are created for each pumping location-piezometer location pair: (1) the distribution of the times to 1.0 meter of drawdown and (2) the distribution of the drawdown predicted after 12 days of pumping at discharge rates of 25, 50, 75, and 100 L/hr. Each step in the pumping rate lasts for 3 days (259,200 seconds). This report is based on results that were presented at the Tono Geoscience Center on January 27th, 2000, approximately one week prior to the beginning of the interference tests. Hydraulic conductivity (K), specific storage (S_s), and the length of the pathway (L_p) are the input parameters to the well-test analysis model. Specific values of these input parameters are uncertain. This parameter uncertainty is accounted for in the modeling by drawing individual parameter values from distributions defined for each input parameter. For the initial set of runs, the fracture system is assumed to behave as an infinite, homogeneous, isotropic aquifer. These assumptions correspond to conceptualizing the aquifer as having Theis behavior and producing radial flow to the pumping well. A second conceptual model is also used in the drawdown calculations. This conceptual model considers that the fracture system may cause groundwater to move to the pumping well in a more linear (non-radial) manner. The effects of this conceptual model on the drawdown values are examined by casting the flow dimension (F_d) of the fracture pathways as an uncertain variable between 1.0 (purely linear flow) and 2.0 (completely radial flow).
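
As a hedged illustration of the forward Monte Carlo step, simplified to a single constant pumping rate rather than the stepped schedule described above, the sketch below propagates uncertain K and S_s through the Theis solution; all parameter distributions, distances, and thicknesses are hypothetical assumptions.

```python
# Minimal sketch (hypothetical parameters): Monte Carlo forward runs of the Theis
# solution to build distributions of (1) drawdown after 12 days and (2) the time to
# reach 1.0 m of drawdown at an observation point, with uncertain K and S_s.
import numpy as np
from scipy.special import exp1          # Theis well function W(u) = E1(u)

rng = np.random.default_rng(5)
n = 2000
b = 50.0                                # pathway thickness, m (hypothetical)
r = 30.0                                # pumping-to-piezometer distance, m (hypothetical)
Q = 100.0 / 1000.0 / 3600.0             # 100 L/hr pumping rate, in m^3/s
K = 10 ** rng.normal(-7.5, 0.5, n)      # hydraulic conductivity, m/s (lognormal)
Ss = 10 ** rng.normal(-6.0, 0.3, n)     # specific storage, 1/m (lognormal)
T, S = K * b, Ss * b                    # transmissivity and storativity

def theis_drawdown(t):
    u = r ** 2 * S / (4.0 * T * t)
    return Q / (4.0 * np.pi * T) * exp1(u)

s_12d = theis_drawdown(12 * 86400.0)                       # drawdown after 12 days, m
times = np.logspace(2, np.log10(12 * 86400.0), 400)        # 100 s to 12 days
dd = np.array([theis_drawdown(t) for t in times])          # shape (n_times, n)
first = np.argmax(dd >= 1.0, axis=0)                       # first time index exceeding 1 m
reached = dd.max(axis=0) >= 1.0
print("median drawdown after 12 d (m):", round(float(np.median(s_12d)), 3))
print("fraction of runs reaching 1 m:", round(float(reached.mean()), 2))
print("median time to 1 m among those (h):",
      round(float(np.median(times[first[reached]])) / 3600.0, 1))
```

The non-radial (flow-dimension) conceptual model would replace the Theis kernel with a generalized-flow-dimension solution and add F_d as another sampled input; that substitution is not shown here.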

More Details

On the late-time behavior of tracer test breakthrough curves

Water Resources Research

Haggerty, Roy; Mckenna, Sean A.; Meigs, Lucy C.

We investigated the late-time (asymptotic) behavior of tracer test breakthrough curves (BTCs) with rate-limited mass transfer (e.g., in dual-porosity or multiporosity systems) and found that the late-time concentration c is given by the simple expression c = t_ad[c_0 g - m_0(∂g/∂t)], for t ≫ t_ad and t_α ≫ t_ad, where t_ad is the advection time, c_0 is the initial concentration in the medium, m_0 is the zeroth moment of the injection pulse, and t_α is the mean residence time in the immobile domain (i.e., the characteristic mass transfer time). The function g is proportional to the residence time distribution in the immobile domain; we tabulate g for many geometries, including several distributed (multirate) models of mass transfer. Using this expression, we examine the behavior of late-time concentration for a number of mass transfer models. One key result is that if rate-limited mass transfer causes the BTC to behave as a power law at late time (i.e., c ~ t^(-k)), then the underlying density function of rate coefficients must also be a power law, with the form α^(k-3) as α → 0. This is true for both density functions of first-order and diffusion rate coefficients. BTCs with k < 3 persisting to the end of the experiment indicate a mean residence time longer than the experiment, and possibly an infinite residence time, and also suggest an effective rate coefficient that is either undefined or changes as a function of observation time. We apply our analysis to breakthrough curves from single-well injection-withdrawal tests at the Waste Isolation Pilot Plant, New Mexico.
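
For readability, the late-time expression quoted in the abstract can be typeset as follows; this is a transcription only, with p(α) introduced here purely as notation for the density function of rate coefficients.

```latex
% Late-time breakthrough concentration with rate-limited mass transfer,
% valid for t >> t_ad and t_alpha >> t_ad (transcribed from the abstract):
\[
  c \;=\; t_{\mathrm{ad}}\!\left[\, c_0\, g \;-\; m_0 \,\frac{\partial g}{\partial t} \right],
  \qquad t \gg t_{\mathrm{ad}}, \quad t_\alpha \gg t_{\mathrm{ad}} .
\]
% Power-law consequence: if c ~ t^{-k} at late time, the density of rate
% coefficients (denoted p(alpha) here) must satisfy
\[
  p(\alpha) \;\propto\; \alpha^{\,k-3} \quad \text{as } \alpha \to 0 .
\]
```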

More Details

Development of a Discrete Spatial-Temporal SEIR Simulator for Modeling Infectious Diseases

Mckenna, Sean A.

Multiple techniques have been developed to model the temporal evolution of infectious diseases. Some of these techniques have also been adapted to model the spatial evolution of the disease. This report examines the application of one such technique, the SEIR model, to the spatial and temporal evolution of disease. Applications of the SEIR model are reviewed briefly and an adaptation to the traditional SEIR model is presented. This adaptation allows for modeling the spatial evolution of the disease stages at the individual level. The transmission of the disease between individuals is modeled explicitly through the use of exposure likelihood functions rather than the global transmission rate applied to populations in the traditional implementation of the SEIR model. These adaptations allow for the consideration of spatially variable (heterogeneous) susceptibility and immunity within the population. The adaptations also allow for modeling both contagious and non-contagious diseases. The results of a number of numerical experiments to explore the effect of model parameters on the spread of an example disease are presented.
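
A minimal, hypothetical sketch of an individual-level spatial SEIR step with a distance-based exposure likelihood (in place of a global transmission rate) is shown below; the parameters and exposure function are illustrative assumptions, not those of the report.

```python
# Minimal sketch (hypothetical parameters): an individual-level, spatially explicit
# SEIR time step in which each susceptible's exposure probability depends on its
# distance to currently infectious individuals.
import numpy as np

rng = np.random.default_rng(6)
S, E, I, R = 0, 1, 2, 3
n = 400
pos = rng.uniform(0, 10, size=(n, 2))            # individual locations, km
state = np.full(n, S)
state[rng.choice(n, 3, replace=False)] = I       # seed infectious individuals
incubation, infectious_period = 4, 7             # days in E and I stages
clock = np.zeros(n, dtype=int)                   # days since exposure
clock[state == I] = incubation                   # seeds start already infectious
beta, d0 = 0.3, 0.5                              # exposure scale and distance decay (km)

for day in range(60):
    inf_pos = pos[state == I]
    if inf_pos.size:
        d = np.linalg.norm(pos[:, None, :] - inf_pos[None, :, :], axis=2)
        # exposure likelihood: 1 - prod(1 - beta * exp(-d/d0)) over infectious neighbors
        p_exposure = 1.0 - np.prod(1.0 - beta * np.exp(-d / d0), axis=1)
        newly_exposed = (state == S) & (rng.random(n) < p_exposure)
        state[newly_exposed], clock[newly_exposed] = E, 0
    clock += 1
    state[(state == E) & (clock > incubation)] = I
    state[(state == I) & (clock > incubation + infectious_period)] = R

print("final counts S/E/I/R:", [int((state == s).sum()) for s in (S, E, I, R)])
```

Making susceptibility or the decay length d0 vary by individual or location would reproduce the spatially heterogeneous susceptibility and immunity discussed in the abstract.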

More Details

Threshold Assessment: Definition of Acceptable Sites as Part of Site Selection for the Japanese HLW Program

Mckenna, Sean A.; Webb, Erik K.

For the last ten years, the Japanese High-Level Nuclear Waste (HLW) repository program has focused on assessing the feasibility of a basic repository concept, an effort that resulted in the recently published H12 Report. As Japan enters the implementation phase, a new organization must identify, screen, and choose potential repository sites. Thus, a rapid mechanism for determining the likelihood of site suitability is critical. The threshold approach, described here, is a simple mechanism for defining the likelihood that a site is suitable given estimates of several critical parameters. We rely on the results of a companion paper, which described a probabilistic performance assessment simulation of the HLW reference case in the H12 report. The two or three most critical input parameters are plotted against each other and treated as spatial variables. Geostatistics is used to interpret the spatial correlation, which in turn is used to simulate multiple realizations of the parameter value maps. By combining an array of realizations, we can estimate the probability that a given site, as represented by estimates of this combination of parameters, would be a good host for a repository.

More Details