Publications

Results 201–225 of 239

NetCAP status report for the end of fiscal year 2010

Hamlet, Benjamin R.; Young, Christopher J.

Fiscal year 2010 (FY10) is the second full year of NetCAP development and the first full year devoted largely to new feature development rather than the reimplementation of existing capabilities found in NetSim (Sereno et al., 1990). Major tasks completed this year include: (1) addition of hydroacoustic simulation; (2) addition of event identification simulation; and (3) initial design and preparation for infrasound simulation. The Network Capability Assessment Program (NetCAP) is a software tool under development at Sandia National Laboratories for studying the capabilities of nuclear explosion monitoring networks. This report discusses the motivation and objectives for the NetCAP project, lists work performed prior to FY10, and describes FY10 accomplishments in detail.

More Details

A global 3D P-velocity model of the Earth's crust and mantle for improved event location : SALSA3D

Ballard, Sanford B.; Young, Christopher J.; Hipp, James R.; Chang, Marcus C.; Encarnacao, Andre V.

To test the hypothesis that high quality 3D Earth models will produce seismic event locations which are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D version 1.5, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. Reduction in the total number of ray paths is approximately 50%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two-layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one we limit the number of iterations to prevent convergence, thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with approximately 400 processors. Resolution of our model is assessed using a variation of the standard checkerboard method. We compare the travel-time prediction and location capabilities of SALSA3D to standard 1D models via location tests on a global event set with GT of 5 km or better. These events generally possess hundreds of Pn and P picks, from which we generate different realizations of station distributions, yielding a range of azimuthal coverage and ratios of teleseismic to regional arrivals, with which we test the robustness and quality of relocation. The SALSA3D model reduces mislocation over the standard 1D ak135 model regardless of the Pn to P ratio, with the improvement being most pronounced at higher azimuthal gaps.
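
The location tests above summarize improvement as a function of azimuthal gap. As a hedged illustration only (not code from the paper), the azimuthal gap of a station realization can be computed as the largest angular separation between consecutive event-to-station azimuths; the coordinates in the demo are invented.

```python
import numpy as np

def azimuthal_gap(event_lat, event_lon, station_lats, station_lons):
    """Largest angular gap (degrees) between consecutive event-to-station
    azimuths, including the wrap-around from the last azimuth back to the
    first. Uses the standard spherical great-circle bearing formula."""
    ev_lat, ev_lon = np.radians(event_lat), np.radians(event_lon)
    st_lat, st_lon = np.radians(station_lats), np.radians(station_lons)
    dlon = st_lon - ev_lon
    y = np.sin(dlon) * np.cos(st_lat)
    x = np.cos(ev_lat) * np.sin(st_lat) - np.sin(ev_lat) * np.cos(st_lat) * np.cos(dlon)
    az = np.sort(np.degrees(np.arctan2(y, x)) % 360.0)
    gaps = np.diff(np.append(az, az[0] + 360.0))
    return float(gaps.max())

# Hypothetical event at (0, 0) observed by three stations (illustrative only).
print(azimuthal_gap(0.0, 0.0, [10.0, -5.0, 2.0], [20.0, -15.0, 90.0]))
```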

More Details

SALSA3D : a global 3D p-velocity model of the Earth's crust and mantle for improved event location

Young, Christopher J.; Hipp, James R.; Chang, Marcus C.; Encarnacao, Andre V.

To test the hypothesis that high quality 3D Earth models will produce seismic event locations which are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D version 1.5, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. Reduction in the total number of ray paths is approximately 50%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two-layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one we limit the number of iterations to prevent convergence, thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with approximately 400 processors. Resolution of our model is assessed using a variation of the standard checkerboard method. We compare the travel-time prediction and location capabilities of SALSA3D to standard 1D models via location tests on a global event set with GT of 5 km or better. These events generally possess hundreds of Pn and P picks, from which we generate different realizations of station distributions, yielding a range of azimuthal coverage and ratios of teleseismic to regional arrivals, with which we test the robustness and quality of relocation. The SALSA3D model reduces mislocation over the standard 1D ak135 model regardless of the Pn to P ratio, with the improvement being most pronounced at higher azimuthal gaps.
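
The abstract notes that redundant ray paths are clustered into representative rays before inversion, but the clustering criterion is not given here. The sketch below shows one plausible stand-in: grouping picks by station and a coarse source-region cell and keeping a single weighted representative per group (the function names, cell sizes, and data are assumptions, not the authors' method).

```python
import numpy as np
from collections import defaultdict

def cluster_rays(station_ids, src_lats, src_lons, src_depths,
                 cell_deg=1.0, cell_km=50.0):
    """Group rays sharing a station and a coarse source cell; return one
    representative per group with a weight equal to the group size, so
    redundant coverage is down-weighted. Illustrative only."""
    groups = defaultdict(list)
    for i, sta in enumerate(station_ids):
        key = (sta,
               int(np.floor(src_lats[i] / cell_deg)),
               int(np.floor(src_lons[i] / cell_deg)),
               int(np.floor(src_depths[i] / cell_km)))
        groups[key].append(i)

    reps = []
    for key, idx in groups.items():
        idx = np.asarray(idx)
        reps.append({
            "station": key[0],
            "lat": float(np.mean(np.asarray(src_lats)[idx])),
            "lon": float(np.mean(np.asarray(src_lons)[idx])),
            "depth_km": float(np.mean(np.asarray(src_depths)[idx])),
            "weight": int(len(idx)),
        })
    return reps

# Two nearly co-located events seen at station "ABC" collapse to one
# representative ray with weight 2; the third ray is unaffected.
reps = cluster_rays(["ABC", "ABC", "XYZ"], [10.01, 10.02, -3.0],
                    [20.01, 20.00, 40.0], [12.0, 14.0, 8.0])
print(len(reps), reps[0]["weight"])
```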

More Details

Exploring the limits of waveform correlation event detection as applied to three earthquake aftershock sequences

Carr, Dorthe B.; Slinkard, Megan E.; Young, Christopher J.

Swarms of earthquakes and/or aftershock sequences can dramatically increase the level of seismicity in a region for a period of time lasting from days to months, depending on the swarm or sequence. Such occurrences can provide a large amount of useful information to seismologists. For those who monitor seismic events for possible nuclear explosions, however, these swarms/sequences are a nuisance. In an explosion monitoring system, each event must be treated as a possible nuclear test until it can be proven, to a high degree of confidence, not to be. Seismic events recorded by the same station with highly correlated waveforms almost certainly have a similar location and source type, so clusters of events within a swarm can quickly be identified as earthquakes. We have developed a number of tools that can be used to exploit the high degree of waveform similarity expected to be associated with swarms/sequences. Dendro Tool measures correlations between known events. The Waveform Correlation Detector is intended to act as a detector, finding events in raw data which correlate with known events. The Self Scanner is used to find all correlated segments within a raw data stream and does not require an event library. All three techniques together provide an opportunity to study the similarities of events in an aftershock sequence in different ways. To comprehensively characterize the benefits and limits of waveform correlation techniques, we studied 3 aftershock sequences, using our 3 tools, at multiple stations. We explored the effects of station distance and event magnitudes on correlation results. Lastly, we show the reduction in detection threshold and analyst workload offered by waveform correlation techniques compared to STA/LTA based detection. We analyzed 4 days of data from each aftershock sequence using all three methods. Most known events clustered in a similar manner across the toolsets. Up to 25% of catalogued events were found to be members of a cluster. In addition, the Waveform Correlation Detector and Self Scanner identified significant numbers of new events that were not in either the EDR or regional catalogs, showing a lowering of the detection threshold. We extended our analysis to study the effect of distance on correlation results by applying the analysis tools to multiple stations along a transect of nearly constant azimuth when possible. We expected that the number of events found via correlation would drop off as roughly 1/r², where r is the distance from mainshock to station. However, we found that regional geological conditions influenced the performance of a given station more than distance. For example, for one sequence we clustered 25% of events at the nearest station to the mainshock (34 km), while our performance dropped to 2% at a station 550 km distant; yet we matched our best performance (25% clustering) at a station 198 km distant.
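
As a hedged sketch of the kind of normalized waveform correlation underlying these tools (it does not reproduce the actual Dendro Tool, Waveform Correlation Detector, or Self Scanner implementations), the snippet below slides a template event over continuous data and flags windows whose normalized cross-correlation exceeds a threshold; the threshold and synthetic data are illustrative assumptions.

```python
import numpy as np

def correlation_detections(data, template, threshold=0.7):
    """Return (sample_index, cc) pairs where the normalized cross-correlation
    between the template and the data window starting at that index exceeds
    the threshold. Brute-force sliding window for clarity, not speed."""
    n = len(template)
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    hits = []
    for i in range(len(data) - n + 1):
        w = data[i:i + n]
        w = w - w.mean()
        denom = np.linalg.norm(w) * t_norm
        if denom == 0.0:
            continue
        cc = float(np.dot(w, t) / denom)
        if cc >= threshold:
            hits.append((i, cc))
    return hits

# Synthetic demo: embed a scaled copy of the template in noise and detect it.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 6 * np.pi, 200))
data = 0.1 * rng.standard_normal(2000)
data[750:950] += 0.5 * template
print(correlation_detections(data, template, threshold=0.8)[:3])
```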

More Details

A global 3D P-velocity model of the Earth's crust and mantle for improved event location

Ballard, Sanford B.; Young, Christopher J.; Hipp, James R.; Chang, Marcus C.; Encarnacao, Andre V.; Lewis, Jennifer E.

To test the hypothesis that high quality 3D Earth models will produce seismic event locations which are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D (SAndia LoS Alamos) version 1.4, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. Reduction in the total number of ray paths is > 55%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two-layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one we limit the number of iterations to prevent convergence, thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with approximately 400 processors. Resolution of our model is assessed using a variation of the standard checkerboard method, as well as by directly estimating the diagonal of the model resolution matrix based on the technique developed by Bekas et al. We compare the travel-time prediction and location capabilities of this model to those of standard 1D models. We perform location tests on a global, geographically-distributed event set with ground truth levels of 5 km or better. These events generally possess hundreds of Pn and P phases, from which we can generate different realizations of station distributions, yielding a range of azimuthal coverage and proportions of teleseismic to regional arrivals, with which we test the robustness and quality of relocation. The SALSA3D model reduces mislocation over the standard 1D ak135 model, especially with increasing azimuthal gap. The 3D model appears to perform better for locations based solely or dominantly on regional arrivals, which is not unexpected given that ak135 represents a global average and therefore cannot capture local and regional variations.
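
The resolution assessment above mentions directly estimating the diagonal of the model resolution matrix using the technique developed by Bekas et al. As a hedged sketch of that stochastic estimator, applied here to a small explicit matrix rather than the implicit tomographic resolution operator, the diagonal is approximated from matrix-vector products with random probe vectors.

```python
import numpy as np

def estimate_diagonal(matvec, n, n_probes=200, rng=None):
    """Stochastic diagonal estimator (after Bekas, Kokiopoulou & Saad, 2007):
    diag(A)_i ~= sum_k v_k[i] * (A v_k)[i] / sum_k v_k[i]**2,
    using random probe vectors v_k. Only matrix-vector products are needed,
    which is what makes it practical when A is never formed explicitly."""
    rng = rng if rng is not None else np.random.default_rng(0)
    num = np.zeros(n)
    den = np.zeros(n)
    for _ in range(n_probes):
        v = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        num += v * matvec(v)
        den += v * v
    return num / den

# Demo on a small explicit matrix; for the tomography problem, 'matvec' would
# apply the resolution operator implicitly.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 0.5],
              [0.0, 0.5, 2.0]])
print(estimate_diagonal(lambda v: A @ v, n=3))   # close to [4, 3, 2]
print(np.diag(A))
```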

More Details

Station set residual : event classification using historical distribution of observing stations

Lewis, Jennifer E.; Young, Christopher J.

Analysts working at the International Data Centre in support of treaty monitoring through the Comprehensive Nuclear-Test-Ban Treaty Organization spend a significant amount of time reviewing hypothesized seismic events produced by an automatic processing system. When reviewing these events to determine their legitimacy, analysts take a variety of approaches that rely heavily on training and past experience. One method used by analysts to gauge the validity of an event involves examining the set of stations involved in its detection. In particular, leveraging past experience, an analyst can say that an event located in a certain part of the world is expected to be detected by Stations A, B, and C. Implicit in this statement is that such an event would usually not be detected by Stations X, Y, or Z. For some well understood parts of the world, the absence of one or more 'expected' stations, or the presence of one or more 'unexpected' stations, is correlated with a hypothesized event's legitimacy and with its survival to the event bulletin. The primary objective of this research is to formalize and quantify the difference between the observed set of stations detecting some hypothesized event and the expected set of stations historically associated with detecting similar nearby events close in magnitude. This Station Set Residual can be quantified in many ways, some of which are correlated with the analyst's determination of whether or not the event is valid. We propose that this Station Set Residual score can be used to screen out certain classes of 'false' events produced by automatic processing with a high degree of confidence, reducing the analyst burden. Moreover, we propose that visualization of the historically expected distribution of detecting stations can be immediately useful as an analyst aid during the review process.
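
The abstract states that the Station Set Residual can be quantified in many ways; one simple hedged possibility (not necessarily the authors' formulation) is to score the observed set of detecting stations against historical detection rates for similar nearby events, penalizing both missing 'expected' stations and present 'unexpected' ones. The station names and rates below are hypothetical.

```python
def station_set_residual(observed, historical_rates):
    """Score an observed set of detecting stations against historical
    detection rates (station -> fraction of similar nearby events that the
    station detected). Missing high-rate stations and present low-rate
    stations both increase the residual. Illustrative formulation only."""
    residual = 0.0
    for station, rate in historical_rates.items():
        if station in observed:
            residual += 1.0 - rate   # penalty for an 'unexpected' station
        else:
            residual += rate         # penalty for a missing 'expected' station
    return residual / max(len(historical_rates), 1)

# Hypothetical rates: A, B, C usually detect events in this region; Z rarely does.
rates = {"A": 0.95, "B": 0.90, "C": 0.80, "Z": 0.05}
print(station_set_residual({"A", "B", "C"}, rates))  # small residual: plausible event
print(station_set_residual({"Z"}, rates))            # large residual: suspicious event
```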

More Details

A global 3D P-velocity model of the Earth's crust and mantle for improved event location

Young, Christopher J.; Ballard, Sanford B.; Hipp, James R.; Chang, Marcus C.; Encarnacao, Andre V.; Lewis, Jennifer E.

To test the hypothesis that high quality 3D Earth models will produce seismic event locations which are more accurate and more precise, we are developing a global 3D P wave velocity model of the Earth's crust and mantle using seismic tomography. In this paper, we present the most recent version of our model, SALSA3D (SAndia LoS Alamos) version 1.4, and demonstrate its ability to reduce mislocations for a large set of realizations derived from a carefully chosen set of globally-distributed ground truth events. Our model is derived from the latest version of the Ground Truth (GT) catalog of P and Pn travel time picks assembled by Los Alamos National Laboratory. To prevent over-weighting due to ray path redundancy and to reduce the computational burden, we cluster rays to produce representative rays. Reduction in the total number of ray paths is > 55%. The model is represented using the triangular tessellation system described by Ballard et al. (2009), which incorporates variable resolution in both the geographic and radial dimensions. For our starting model, we use a simplified two-layer crustal model derived from the Crust 2.0 model over a uniform AK135 mantle. Sufficient damping is used to reduce velocity adjustments so that ray path changes between iterations are small. We obtain proper model smoothness by using progressive grid refinement, refining the grid only around areas with significant velocity changes from the starting model. At each grid refinement level except the last one we limit the number of iterations to prevent convergence, thereby preserving aspects of broad features resolved at coarser resolutions. Our approach produces a smooth, multi-resolution model with node density appropriate to both ray coverage and the velocity gradients required by the data. This scheme is computationally expensive, so we use a distributed computing framework based on the Java Parallel Processing Framework, providing us with approximately 400 processors. Resolution of our model is assessed using a variation of the standard checkerboard method, as well as by directly estimating the diagonal of the model resolution matrix based on the technique developed by Bekas et al. We compare the travel-time prediction and location capabilities of this model to those of standard 1D models. We perform location tests on a global, geographically-distributed event set with ground truth levels of 5 km or better. These events generally possess hundreds of Pn and P phases, from which we can generate different realizations of station distributions, yielding a range of azimuthal coverage and proportions of teleseismic to regional arrivals, with which we test the robustness and quality of relocation. The SALSA3D model reduces mislocation over the standard 1D ak135 model, especially with increasing azimuthal gap. The 3D model appears to perform better for locations based solely or dominantly on regional arrivals, which is not unexpected given that ak135 represents a global average and therefore cannot capture local and regional variations.
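
Model resolution is assessed above with a variation of the standard checkerboard method. As a generic hedged illustration of that test (the cell sizes and amplitude used for SALSA3D are not reproduced here), a synthetic input model is built by adding an alternating-sign velocity perturbation to the background; the pattern recovered by re-inverting synthetic travel times indicates where the data constrain the model.

```python
import numpy as np

def checkerboard_perturbation(lats, lons, depths_km, cell_deg=5.0,
                              cell_km=200.0, amplitude=0.03):
    """Alternating +/- fractional velocity perturbation on a 3D grid, the
    usual input model for a checkerboard resolution test. Cell sizes and
    amplitude here are illustrative, not the paper's values."""
    la, lo, dp = np.meshgrid(lats, lons, depths_km, indexing="ij")
    sign = np.sign(np.sin(np.pi * la / cell_deg) *
                   np.sin(np.pi * lo / cell_deg) *
                   np.sin(np.pi * dp / cell_km))
    return amplitude * sign

# 3% perturbation pattern on a coarse illustrative grid.
dv = checkerboard_perturbation(np.arange(-10, 11, 1.0),
                               np.arange(0, 21, 1.0),
                               np.arange(0, 701, 50.0))
print(dv.shape, dv.min(), dv.max())
```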

More Details

The process for integrating the NNSA knowledge base

Martinez, Elaine M.; Young, Christopher J.; Wilkening, Lisa K.

From 2002 through 2006, the Ground Based Nuclear Explosion Monitoring Research & Engineering (GNEMRE) program at Sandia National Laboratories defined and modified a process for merging different types of integrated research products (IRPs) from various researchers into a cohesive, well-organized collection known as the NNSA Knowledge Base, to support operational treaty monitoring. This process includes defining the Knowledge Base structure, systematically and logically aggregating IRPs into a complete set, and verifying and validating that the integrated Knowledge Base works as expected.

More Details