Publications

Exact results and field-theoretic bounds for randomly advected propagating fronts, and implications for turbulent combustion

Mayo, Jackson M.; Kerstein, Alan R.

One of the authors previously conjectured that the wrinkling of propagating fronts by weak random advection increases the bulk propagation rate (turbulent burning velocity) in proportion to the 4/3 power of the advection strength. An exact derivation of this scaling is reported. The analysis shows that the coefficient of this scaling is equal to the energy density of a lower-dimensional Burgers fluid with a white-in-time forcing whose spatial structure is expressed in terms of the spatial autocorrelation of the flow that advects the front. The replica method of field theory has been used to derive an upper bound on the coefficient as a function of the spatial autocorrelation. High-precision numerics show that the bound is usefully sharp. Implications for strongly advected fronts (e.g., turbulent flames) are noted.
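
For readers unfamiliar with the result, the scaling can be summarized in a single relation; the symbols below are illustrative notation, not quoted from the paper:

    u_T - u_0 \;\sim\; C \, (u')^{4/3}

Here u_T is the bulk (turbulent) propagation rate, u_0 is the unperturbed front speed, u' is the advection strength, and C is the coefficient shown to equal the energy density of the associated lower-dimensional Burgers fluid.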

More Details

Methodologies for advance warning of compute cluster problems via statistical analysis: A case study

Proceedings of the 2009 Workshop on Resiliency in High Performance Computing (Resilience'09), co-located with the 2009 International Symposium on High Performance Distributed Computing (HPDC'09)

Brandt, James M.; Gentile, Ann C.; Mayo, Jackson M.; Pébay, Philippe; Roe, Diana C.; Thompson, David; Wong, Matthew H.

The ability to predict impending failures (hardware or software) on large-scale high-performance computing (HPC) platforms, augmented by checkpoint mechanisms, could drastically increase the scalability of applications and the efficiency of platforms. In this paper we present the findings and methodologies employed to date in our search for reliable advance indicators of failures on a 288-node, 4608-core, Opteron-based cluster in production use at Sandia National Laboratories. In support of this effort we have deployed OVIS, a Sandia-developed scalable HPC monitoring, analysis, and visualization tool designed for this purpose. We demonstrate that, for a particular error case, statistical analysis using OVIS would enable advance warning of cluster problems on timescales that allow application and system-administrator response before errors, subsequent system error-log reporting, and job failures occur. This is significant because the utility of such indicators depends on how far in advance of failure they can be recognized and how reliable they are. Copyright 2009 ACM.
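
As a reader's aid, the following is a minimal sketch of the kind of statistical advance-warning analysis described above, written in plain Python; it is not OVIS code, and the function and variable names are hypothetical:

    # Minimal sketch (not the OVIS implementation): flag nodes whose
    # monitored metric drifts away from the fleet-wide distribution,
    # as a candidate advance indicator of failure.
    import statistics

    def advance_warning(samples, threshold=3.0):
        """samples: dict mapping node name -> latest metric value
        (e.g., a temperature or an error counter). Returns nodes whose
        value lies more than `threshold` standard deviations from the
        fleet mean."""
        values = list(samples.values())
        mean = statistics.fmean(values)
        stdev = statistics.stdev(values)
        if stdev == 0:
            return []
        return [node for node, v in samples.items()
                if abs(v - mean) / stdev > threshold]

    # Example: node17 runs hot relative to its peers.
    readings = {f"node{i:02d}": 45.0 + 0.1 * i for i in range(32)}
    readings["node17"] = 78.0
    print(advance_warning(readings))  # -> ['node17']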

More Details

Resource monitoring and management with OVIS to enable HPC in cloud computing environments

IPDPS 2009 - Proceedings of the 2009 IEEE International Parallel and Distributed Processing Symposium

Brandt, James M.; Gentile, Ann C.; Mayo, Jackson M.; Pébay, Philippe; Roe, Diana C.; Thompson, David; Wong, Matthew H.

Using the cloud computing paradigm, a host of companies promise to make huge compute resources available to users on a pay-as-you-go basis. These resources can be configured on the fly to provide the hardware and operating system of choice to the customer on a large scale. While the current target market for these resources in the commercial space is web development/hosting, this model offers the lure of savings in ownership, operation, and maintenance costs, and thus sounds like an attractive solution for people who currently invest millions to hundreds of millions of dollars annually on High Performance Computing (HPC) platforms in order to support large-scale scientific simulation codes. Given the current interconnect bandwidth and topologies utilized in these commercial offerings, however, the only currently viable market in HPC would be small-memory-footprint, embarrassingly parallel or loosely coupled applications, which inherently require little to no inter-processor communication. While providing the appropriate resources (bandwidth, latency, memory, etc.) for the HPC community would increase the potential to enable HPC in cloud environments, this would not address the need for scalability and reliability, which are crucial to HPC applications. Providing for these needs is particularly difficult in commercial cloud offerings, where the number of virtual resources can far outstrip the number of physical resources, the resources are shared among many users, and the resources may be heterogeneous. Advanced resource monitoring, analysis, and configuration tools can help address these issues, since they bring the ability to dynamically provide and respond to information about the platform and application state, enabling the more appropriate, efficient, and flexible use of resources that is key to enabling HPC. Additionally, such tools could benefit non-HPC cloud providers, users, and applications by providing more efficient resource utilization in general. © 2009 IEEE.

More Details

Approaches for scalable modeling and emulation of cyber systems: LDRD final report

Mayo, Jackson M.; Minnich, Ronald G.; Rudish, Don W.; Armstrong, Robert C.

The goal of this research was to combine theoretical and computational approaches to better understand the potential emergent behaviors of large-scale cyber systems, such as networks of ~10^6 computers. The scale and sophistication of modern computer software, hardware, and deployed networked systems have significantly exceeded the computational research community's ability to understand, model, and predict current and future behaviors. This predictive understanding, however, is critical to the development of new approaches for proactively designing new systems or enhancing existing systems with robustness to current and future cyber threats, including distributed malware such as botnets. We have developed preliminary theoretical and modeling capabilities that can ultimately answer questions such as: How would we reboot the Internet if it were taken down? Can we change network protocols to make them more secure without disrupting existing Internet connectivity and traffic flow? We have begun to address these issues by developing new capabilities for understanding and modeling Internet systems at scale. Specifically, we have addressed the need for scalable network simulation by carrying out emulations of a network with ~10^6 virtualized operating system instances on a high-performance computing cluster, a 'virtual Internet'. We have also explored mappings between previously studied emergent behaviors of complex systems and their potential cyber counterparts. Our results provide foundational capabilities for further research toward understanding the effects of complexity in cyber systems, to allow anticipating and thwarting hackers.
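
To give a sense of the scale involved, here is a back-of-envelope sketch in Python (the cluster size below is a hypothetical number, not taken from the report) of how many virtualized OS instances each physical node must host to reach the ~10^6 target:

    # Back-of-envelope sketch; cluster_nodes is an assumed value.
    target_instances = 10**6
    cluster_nodes = 4_000                              # hypothetical cluster size
    per_node = -(-target_instances // cluster_nodes)   # ceiling division
    print(f"{per_node} virtualized OS instances per physical node")  # 250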

More Details

OVIS 2.0 user's guide

Brandt, James M.; Gentile, Ann C.; Mayo, Jackson M.; Pebay, Philippe P.; Roe, Diana C.; Wong, Matthew H.

This document describes how to obtain, install, use, and enjoy a better life with OVIS version 2.0. The OVIS project targets scalable, real-time analysis of very large data sets. We characterize the behaviors of elements and aggregations of elements (e.g., across space and time) in data sets in order to detect anomalous behaviors. We are particularly interested in anomalous behaviors that can serve as advance indicators of significant events, of which notification can be made or upon which action can be taken. The OVIS open-source tool (BSD license) is available for download at ovis.ca.sandia.gov. While we intend for it to support a variety of application domains, the OVIS tool was initially developed for, and continues to be primarily tuned for, the investigation of High Performance Computing (HPC) cluster system health. In this application it is intended to be both a system-administrator tool for monitoring and a system-engineer tool for exploring the system state in depth. OVIS 2.0 provides a variety of statistical tools for examining the behavior of elements in a cluster (e.g., nodes, racks) and associated resources (e.g., storage appliances and network switches). It calculates and reports model values and outliers relative to those models. Additionally, it provides an interactive 3D physical view in which the cluster elements can be colored by raw element values (e.g., temperatures, memory errors) or by the comparison of those values to a given model. The analysis tools and the visual display allow the user to easily determine abnormal or outlier behaviors. The OVIS project envisions that the OVIS tool, when applied to compute-cluster monitoring, will be used in conjunction with the scheduler or resource manager to enable intelligent resource utilization. For example, nodes that are deemed less healthy (that is, nodes exhibiting outlier behavior in some variable, or set of variables, that has been shown to correlate with future failure) can be discovered and assigned to shorter-duration or less important jobs. Further, applications with fault-tolerant capabilities can invoke those mechanisms on demand, based upon notification that a node exhibits impending-failure conditions, rather than performing such mechanisms (e.g., checkpointing) unnecessarily at regular intervals.
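
The model-versus-outlier idea described above can be illustrated with a small sketch (hypothetical Python, not part of the OVIS tool itself): fit a simple per-group model of expected behavior, then report elements whose raw values deviate from it:

    # Illustrative sketch of per-group models and outlier reporting;
    # not OVIS code, and the names here are hypothetical.
    from collections import defaultdict

    def fit_models(elements):
        """elements: list of (rack, node, value). Returns the per-rack
        mean as a simple 'model' of expected behavior within each rack."""
        groups = defaultdict(list)
        for rack, _node, value in elements:
            groups[rack].append(value)
        return {rack: sum(vals) / len(vals) for rack, vals in groups.items()}

    def outliers(elements, models, tolerance):
        """Report elements whose raw value deviates from the rack model
        by more than `tolerance` (these would drive coloring in a 3D view)."""
        return [(rack, node, value) for rack, node, value in elements
                if abs(value - models[rack]) > tolerance]

    data = [("rack1", "n1", 48.0), ("rack1", "n2", 49.0), ("rack1", "n3", 70.0)]
    print(outliers(data, fit_models(data), tolerance=10.0))
    # -> [('rack1', 'n3', 70.0)]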

More Details

Notes on "Modeling, simulation and analysis of complex networked systems"

Mayo, Jackson M.

This document is a place for commentary on the whitepaper and is intended to be ad hoc. Because the whitepaper describes a potential program in DOE ASCR, and because it concerns many researchers in the field, these notes are meant to be extendable by anyone willing to put in the effort. Criticism of the contents of the notes themselves is, of course, also welcome.

More Details

Fronts in randomly advected and heterogeneous media and nonuniversality of Burgers turbulence: Theory and numerics

Physical Review E - Statistical, Nonlinear, and Soft Matter Physics

Mayo, Jackson M.; Kerstein, Alan R.

A recently established mathematical equivalence between weakly perturbed Huygens fronts (e.g., flames in weak turbulence or geometrical-optics wave fronts in slightly nonuniform media) and the inviscid limit of white-noise-driven Burgers turbulence motivates theoretical and numerical estimates of Burgers-turbulence properties for specific types of white-in-time forcing. Existing mathematical relations between Burgers turbulence and the statistical mechanics of directed polymers, allowing use of the replica method, are exploited to obtain systematic upper bounds on the Burgers energy density, corresponding to the ground-state binding energy of the directed polymer and the speedup of the Huygens front. The results are complementary to previous studies of both Burgers turbulence and directed polymers, which have focused on universal scaling properties instead of forcing-dependent parameters. The upper-bound formula can be heuristically understood in terms of renormalization of a different kind from that previously used in combustion models, and also shows that the burning velocity of an idealized turbulent flame does not diverge with increasing Reynolds number at fixed turbulence intensity, a conclusion that applies even to strong turbulence. Numerical simulations of the one-dimensional inviscid Burgers equation using a Lagrangian finite-element method confirm that the theoretical upper bounds are sharp within about 15% for various forcing spectra (corresponding to various two-dimensional random media). These computations provide a quantitative test of the replica method. The inferred nonuniversality (spectrum dependence) of the front speedup is of direct importance for combustion modeling. © 2008 The American Physical Society.
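
For reference, the model equation in question is the one-dimensional inviscid Burgers equation with white-in-time forcing; the notation below is standard, but the specific symbols (f, F) are our labels, not quoted from the article:

    \partial_t u + u \, \partial_x u = f(x, t), \qquad
    \langle f(x, t) \, f(x', t') \rangle = F(x - x') \, \delta(t - t')

The stationary energy density (1/2) \langle u^2 \rangle of this system is the quantity bounded from above by the replica method, and it corresponds to the speedup of the Huygens front.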

More Details

Scalar filtered mass density functions in nonpremixed turbulent jet flames

Combustion and Flame

Drozda, Tomasz D.; Wang, Guanghua H.; Sankaran, Vaidyanathan S.; Mayo, Jackson M.; Oefelein, Joseph C.; Barlow, R.S.

Filtered mass density functions (FMDFs) of mixture fraction and temperature are studied by analyzing experimental data obtained from one-dimensional Raman/Rayleigh/LIF measurements of nonpremixed CH4/H2/N2 turbulent jet flames at Reynolds numbers of 15,200 and 22,800 (DLR-A and -B). The experimentally determined FMDFs are conditioned on the Favre-filtered values of the mixture fraction and its variance. Filter widths are selected as fixed multiples of the experimentally determined dissipation length scale at each measurement location. One-dimensional filtering using a top-hat filter is performed to obtain the filtered variables used for conditioning. The FMDFs are obtained by binning the mass- and filter-kernel-weighted samples. Emphasis is placed on the shapes of the FMDFs in the fuel-rich, fuel-lean, and stoichiometric intervals for the Favre-filtered mixture fraction, and low, medium, and high values for the Favre-filtered mixture fraction variance. It is found that the FMDFs of mixture fraction are unimodal in samples with low mixture fraction variance and bimodal in samples with high variance. However, the FMDFs of mixture fraction at the smallest filter size studied are unimodal for all values of the variance. The FMDFs of temperature are unimodal in samples with low mixture fraction variance, and either unimodal or bimodal, depending on the mixture fraction mean, in samples with high variance. The influence of the filter size and the jet Reynolds number on the FMDFs is also considered. © 2008 The Combustion Institute.
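
A minimal sketch of the filtering-and-binning procedure described above (hypothetical Python with NumPy assumed; the density weighting of the Favre filter is folded into the sample weights for brevity):

    # Sketch of 1-D top-hat filtering and weighted binning to estimate
    # an FMDF; illustrative only, not the authors' analysis code.
    import numpy as np

    def tophat_filter(z, width):
        """Filter a 1-D line of samples `z` with a top-hat kernel;
        `width` is the filter width in sample points."""
        kernel = np.ones(width) / width
        return np.convolve(z, kernel, mode="same")

    def fmdf(z, weights, nbins=50):
        """Histogram of samples weighted by mass * filter-kernel weights,
        normalized to unit area -- an estimate of the FMDF of `z`."""
        return np.histogram(z, bins=nbins, range=(0.0, 1.0),
                            weights=weights, density=True)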

More Details

Using probabilistic characterization to reduce runtime faults in HPC systems

Proceedings CCGRID 2008 - 8th IEEE International Symposium on Cluster Computing and the Grid

Brandt, James M.; Debusschere, Bert D.; Gentile, Ann C.; Mayo, Jackson M.; Pébay, Philippe; Thompson, David; Wong, Matthew H.

The current trend in high performance computing is to aggregate ever larger numbers of processing and interconnection elements in order to achieve desired levels of computational power. This, however, comes with a decrease in the Mean Time To Interrupt, because the elements comprising these systems are not becoming significantly more robust. There is substantial evidence that the relationship between Mean Time To Interrupt and the number of processor elements involved is quite similar across a large number of platforms. In this paper we present a system that uses hardware-level monitoring coupled with statistical analysis and modeling to select processing-system elements based on where they lie in the statistical distribution of similar elements. These characterizations can be used by the scheduler/resource manager to deliver a close-to-optimal set of processing elements, given the available pool and the reliability requirements of the application. © 2008 IEEE.
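
A small sketch of the selection idea (plain Python; hypothetical names, not the paper's code): rank elements by where they fall in the distribution of similar elements and hand the scheduler those nearest the population median, on the premise that outliers are more failure-prone:

    import statistics

    def select_nodes(metrics, k):
        """metrics: dict of node -> health metric. Returns the k nodes
        whose metric lies closest to the median of all similar elements."""
        med = statistics.median(metrics.values())
        ranked = sorted(metrics, key=lambda n: abs(metrics[n] - med))
        return ranked[:k]

    pool = {"n1": 1.0, "n2": 1.1, "n3": 9.0, "n4": 0.9}
    print(select_nodes(pool, k=2))  # -> ['n1', 'n2']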

More Details

Mathematical approaches for complexity/predictivity trade-offs in complex system models: LDRD final report

Mayo, Jackson M.; Armstrong, Robert C.; Vanderveen, Keith V.

The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.
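
One step of the computational renormalization mentioned above can be illustrated on a one-dimensional binary cellular automaton (this construction is ours, for illustration; it is not code from the report):

    # Coarse-grain a 1-D binary CA state by majority rule over blocks
    # of 3 cells, yielding a reduced model of the same system.
    def renormalize(state, block=3):
        assert len(state) % block == 0
        return [int(sum(state[i:i + block]) > block // 2)
                for i in range(0, len(state), block)]

    print(renormalize([1, 1, 0, 0, 0, 1, 1, 0, 1]))  # -> [1, 0, 1]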

More Details
