Wave Data Assimilation In Support Of Wave Energy Converter Power Prediction: Yakutat Alaska Case Study
Abstract not provided.
Proceedings of the Annual Offshore Technology Conference
Integration of renewable power sources into grids remains an active research and development area, particularly for less developed renewable energy technologies such as wave energy converters (WECs). WECs are projected to have strong early market penetration for remote communities, which serve as natural microgrids. Hence, accurate wave predictions to manage the interactions of a WEC array with microgrids are especially important. Recently developed, low-cost wave measurement buoys allow for operational assimilation of wave data at remote locations where real-time data have previously been unavailable. This work includes the development and assessment of a wave modeling framework with real-time data assimilation capabilities for WEC power prediction. The availability of real-time wave spectral components from low-cost wave measurement buoys allows for operational data assimilation with the ensemble Kalman filter technique, whereby measured wave conditions within the numerical wave forecast model domain are assimilated onto the combined set of internal and boundary grid points while taking into account model and observation error covariances. The updated model state and boundary conditions allow for more accurate wave characteristic predictions at the locations of interest. Initial deployment data indicated that measured wave data from one buoy that were assimilated into the wave modeling framework resulted in improved forecast skill for a case where a traditional numerical forecast model (e.g., Simulating WAves Nearshore; SWAN) did not represent the measured conditions well. On average, the wave power forecast error was reduced from 73% to 43% using data assimilation modeling with real-time wave observations.
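The ensemble-based update described above can be sketched as a stochastic (perturbed-observation) ensemble Kalman filter analysis step. The state dimension, observation operator, and error levels below are illustrative placeholders, not the configuration used in this study:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, obs_op, obs_var):
    """One perturbed-observation ensemble Kalman filter analysis step.

    ensemble: (n_state, n_members) array of forecast states
    obs:      (n_obs,) measured values (e.g., significant wave heights)
    obs_op:   (n_obs, n_state) linear observation operator H
    obs_var:  scalar observation-error variance
    """
    n_members = ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    hx = obs_op @ ensemble
    h_anom = hx - hx.mean(axis=1, keepdims=True)
    # Sample error covariances estimated from the ensemble
    cross_cov = anomalies @ h_anom.T / (n_members - 1)
    obs_cov = h_anom @ h_anom.T / (n_members - 1)
    gain = cross_cov @ np.linalg.inv(obs_cov + obs_var * np.eye(len(obs)))
    # Each member is nudged toward its own perturbed copy of the observations
    obs_pert = obs[:, None] + rng.normal(0.0, np.sqrt(obs_var), (len(obs), n_members))
    return ensemble + gain @ (obs_pert - hx)
```

In the framework described here, the combined set of internal and boundary grid points would play the role of the state vector, so a single update adjusts the boundary conditions as well as the interior wave field.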
This project developed models of performance variability to enable robust design and certification. Material variability originating from heterogeneous microstructural features, such as grain and pore morphologies, has significant effects on component behavior and creates uncertainty in material response. Current engineering material models typically do not incorporate microstructural variability explicitly; rather, functional forms are chosen based on intuition and parameters are selected to reflect mean behavior. Conversely, mesoscale models that capture the microstructural physics, and its inherent variability, are impractical to use at the engineering scale. Current efforts therefore ignore physical characteristics of systems that may be the predominant factors in quantifying system reliability. To address this gap, we developed explicit connections between models of microstructural variability and component/system performance. The outcomes of this project are uncertainty quantification (UQ) enabled analysis of material variability effects on performance, and general methods to evaluate the consequences of microstructural variability on material response. Our focus on variability of mechanical response due to grain and pore distributions enabled us to fully probe these influences on performance and to develop a methodology for propagating input variability to output performance. This project is at the forefront of data science and material modeling: we adapted and extended progressive techniques from machine learning and uncertainty quantification to develop a new, physically based methodology addressing the core issues of the Engineering Materials Reliability (EMR) research challenge in modeling the constitutive response of materials with significant inherent variability and length scales.
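As a toy illustration of propagating microstructural input variability to output performance, the sketch below pushes a lognormal grain-size distribution through a Hall-Petch-type strength relation. The distribution parameters and the coefficients `sigma_0` and `k` are invented for illustration and are not calibrated to any material from this project:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical microstructural input: lognormal grain-size distribution (microns)
grain_size = rng.lognormal(mean=np.log(20.0), sigma=0.3, size=10_000)

# Illustrative Hall-Petch relation mapping grain size to yield strength (MPa);
# sigma_0 and k are made-up values, not fitted to any experiment.
sigma_0, k = 100.0, 500.0
yield_strength = sigma_0 + k / np.sqrt(grain_size)

# Output statistics quantify how input variability propagates to performance
print(f"mean = {yield_strength.mean():.1f} MPa, "
      f"std = {yield_strength.std():.1f} MPa, "
      f"5th pct = {np.percentile(yield_strength, 5):.1f} MPa")
```

The same forward-sampling pattern, applied to full constitutive models instead of a scalar relation, is what connects distributions of microstructural features to distributions of component response.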
Computer Methods in Applied Mechanics and Engineering
The advent of fabrication techniques such as additive manufacturing has focused attention on the considerable variability of material response due to defects and other microstructural aspects. This variability motivates the development of an enhanced design methodology that incorporates inherent material variability to provide robust predictions of performance. In this work, we develop plasticity models capable of representing the distribution of mechanical responses observed in experiments using traditional plasticity models of the mean response and recently developed uncertainty quantification (UQ) techniques. To account for material response variability through variations in physical parameters, we adapt a recent Bayesian embedded modeling error calibration technique. We use Bayesian model selection to determine the most plausible of a variety of plasticity models and the optimal embedding of parameter variability. To expedite model selection, we develop an adaptive importance-sampling-based numerical integration scheme to compute the Bayesian model evidence. We demonstrate that the new framework provides predictive realizations superior to those of more traditional approaches, and we show how these UQ techniques can be used in model selection and in assessing the quality of calibrated physical parameters.
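Evidence estimation with an importance-sampling proposal adapted toward the posterior can be sketched on a conjugate Gaussian toy problem where the evidence is known in closed form. All numbers here are invented, and the proposal adaptation is done analytically rather than from samples as an adaptive scheme would:

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2.0 * np.pi * var)

# Toy setup: Gaussian prior N(0,1), one Gaussian observation with known noise,
# so the model evidence has a closed form to check against.
prior_var = 1.0
y, like_var = 1.5, 0.25

# Proposal adapted toward the posterior (analytic here; in practice it would be
# fitted to posterior samples), with inflated variance for heavier tails.
post_var = 1.0 / (1.0 / prior_var + 1.0 / like_var)
post_mean = post_var * y / like_var
prop_var = 2.0 * post_var
theta = rng.normal(post_mean, np.sqrt(prop_var), size=200_000)

# Importance-sampling estimate of the evidence Z = E_q[ likelihood * prior / q ]
w = (norm_pdf(y, theta, like_var) * norm_pdf(theta, 0.0, prior_var)
     / norm_pdf(theta, post_mean, prop_var))
evidence = w.mean()

exact = norm_pdf(y, 0.0, prior_var + like_var)  # N(y; 0, prior_var + like_var)
```

A proposal concentrated near the posterior keeps the weight variance small, which is the practical motivation for adapting it rather than sampling the prior directly.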
Integration of renewable power sources into electrical grids remains an active research and development area, particularly for less developed renewable energy technologies, such as wave energy converters (WECs). High spatio-temporal resolution and accurate wave forecasts at a potential WEC (or WEC array) lease area are needed to improve WEC power prediction and to facilitate grid integration, particularly for microgrid locations. The availability of high quality measurement data from recently developed low-cost buoys allows for operational assimilation of wave data into forecast models at remote locations where real-time data have previously been unavailable. This work includes the development and assessment of a wave modeling framework with real-time data assimilation capabilities for WEC power prediction. Spoondrift wave measurement buoys were deployed off the coast of Yakutat, Alaska, a microgrid site with high wave energy resource potential. A wave modeling framework with data assimilation was developed and assessed, which was most effective when the incoming forecasted boundary conditions did not represent the observations well. For that case, assimilation of the wave height data using the ensemble Kalman filter resulted in a reduction of wave height forecast normalized root mean square error from 27% to an average of 16% over a 12-hour period. This results in reduction of wave power forecast error from 73% to 43%. In summary, the use of the low-cost wave buoy data assimilated into the wave modeling framework improved the forecast skill and will provide a useful development tool for the integration of WECs into electrical grids.
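The sensitivity of power forecasts to wave-height errors follows from the standard deep-water estimate of wave energy flux per unit crest length, P = rho g^2 Hs^2 Te / (64 pi): because P scales with Hs^2, relative wave-height errors are roughly doubled in the power estimate. A small helper with standard nominal constants (not the exact formulation used in this study):

```python
import math

def wave_power_kw_per_m(hs, te, rho=1025.0, g=9.81):
    """Deep-water wave energy flux per metre of wave crest, in kW/m:
    P = rho * g**2 * hs**2 * te / (64 * pi).
    hs: significant wave height (m); te: energy period (s)."""
    return rho * g**2 * hs**2 * te / (64.0 * math.pi) / 1000.0

# Hs = 2 m, Te = 8 s gives roughly 15.7 kW per metre of crest
print(f"{wave_power_kw_per_m(2.0, 8.0):.1f} kW/m")
```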
Combustion Theory and Modelling
This investigation tackles the probabilistic parameter estimation problem involving the Arrhenius parameters for the rate coefficient of the chain branching reaction H + O2 → OH + O. This is achieved in a Bayesian inference framework that uses indirect data from the literature in the form of summary statistics by approximating the maximum entropy solution with the aid of approximate Bayesian computation. The summary statistics include nominal values and uncertainty factors of the rate coefficient, obtained from shock-tube experiments performed at various initial temperatures. The Bayesian framework allows for the incorporation of uncertainty in the rate coefficient of a secondary reaction, namely OH + H2 → H2O + H, resulting in a consistent joint probability density on Arrhenius parameters for the two rate coefficients. It also allows for uncertainty quantification in numerical ignition predictions while conforming with the published summary statistics. The method relies on probabilistic reconstruction of the unreported data, OH concentration profiles from shock-tube experiments, along with the unknown Arrhenius parameters. The data inference is performed using a Markov chain Monte Carlo sampling procedure that relies on an efficient adaptive quadrature in estimating relevant integrals needed for data likelihood evaluations. For further efficiency gains, local Padé–Legendre approximants are used as surrogates for the time histories of OH concentration, alleviating the need for 0-D auto-ignition simulations. The reconstructed realisations of the missing data are used to provide a consensus joint posterior probability density on the unknown Arrhenius parameters via probabilistic pooling. Uncertainty quantification analysis is performed for stoichiometric hydrogen–air auto-ignition computations to explore the impact of uncertain parameter correlations on a range of quantities of interest.
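The summary-statistic matching at the heart of approximate Bayesian computation can be sketched with a rejection sampler on Arrhenius parameters. The "reported" nominal rate coefficients, prior ranges, and tolerance below are all invented for illustration and do not correspond to the H + O2 data used in this work:

```python
import numpy as np

rng = np.random.default_rng(0)
R = 8.314  # gas constant, J/(mol K)

# Hypothetical summary statistics: nominal k(T) at three shock-tube temperatures,
# generated here from made-up "true" parameters (A in arbitrary units, Ea in J/mol).
A_true, Ea_true = 3.5e16, 70_000.0
temps = np.array([1000.0, 1500.0, 2000.0])
k_nominal = A_true * np.exp(-Ea_true / (R * temps))

# ABC rejection: draw parameters from broad uniform priors, keep those whose
# predicted rate coefficients match the summary statistics within a tolerance.
n = 200_000
logA = rng.uniform(15.0, 18.0, n)        # prior on log10(A)
Ea = rng.uniform(50_000.0, 90_000.0, n)  # prior on activation energy
k_sim = 10.0**logA[:, None] * np.exp(-Ea[:, None] / (R * temps))
dist = np.abs(np.log(k_sim) - np.log(k_nominal)).max(axis=1)
accepted = dist < 0.05                   # tolerance in log-rate space

post_logA = logA[accepted]
post_Ea = Ea[accepted]
```

The accepted draws approximate the joint posterior on (log A, Ea) and exhibit the strong correlation between the two parameters that makes joint, rather than marginal, uncertainty treatment important.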
Computer Methods in Applied Mechanics and Engineering
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. Although these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. Parallel sparse matrix–vector operations are used to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
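As a small illustration of the polynomial chaos representation underlying such solvers, the sketch below expands a lognormal (non-Gaussian) coefficient a(xi) = exp(mu + sigma*xi), xi ~ N(0,1), in probabilists' Hermite polynomials, whose PC coefficients are known in closed form. The parameters are arbitrary; the actual solvers operate on full stochastic Galerkin systems, not a scalar:

```python
import numpy as np
from math import exp, factorial

# Lognormal coefficient a(xi) = exp(mu + sigma*xi); its Hermite PC coefficients
# are a_k = exp(mu + sigma^2/2) * sigma^k / k!.
mu, sigma, order = 0.0, 0.5, 8
coeffs = [exp(mu + sigma**2 / 2) * sigma**k / factorial(k) for k in range(order + 1)]

def hermite(k, x):
    """Probabilists' Hermite polynomial He_k(x) via the three-term recurrence
    He_{n+1}(x) = x He_n(x) - n He_{n-1}(x)."""
    h_prev, h = np.ones_like(x), np.asarray(x, dtype=float)
    if k == 0:
        return h_prev
    for n in range(1, k):
        h_prev, h = h, x * h - n * h_prev
    return h

# Compare the truncated PC expansion against exact samples of the coefficient
xi = np.random.default_rng(0).standard_normal(100_000)
a_pce = sum(c * hermite(k, xi) for k, c in enumerate(coeffs))
a_exact = np.exp(mu + sigma * xi)
```

An eighth-order truncation already reproduces the lognormal samples closely; in the intrusive setting, expansions like this for the diffusion coefficient are what couple the deterministic subdomain problems into the large global stochastic system.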