Publications

3 Results
Retaining Systems Engineering Model Meaning Through Transformation: Demo 2

Carroll, Edward R.; Jarosz, Jason P.; Tafoya, Carlos J.; Compton, Jonathan E.; Akinli, Cengiz B.

Digital engineering strategies typically assume that digital engineering models interoperate seamlessly across the multiple different engineering modeling software applications involved, such as model-based systems engineering (MBSE), mechanical computer-aided design (MCAD), electrical computer-aided design (ECAD), and other engineering modeling applications. The presumption is that the data schemas in these modeling software applications are structured in the familiar flat-tabular form like any other software application. Engineering domain-specific applications (e.g., systems, mechanical, electrical, simulation) are typically designed to solve domain-specific problems, necessarily excluding explicit representations of non-domain information to help the engineer focus on the domain problems (system definition, design, simulation). Such exclusions become problematic in inter-domain information exchange: the obvious assumptions of one domain might not be so obvious to experts in another. Ambiguity in domain-specific language can erode the ability of different domain modeling applications to interoperate unless the underlying language is understood and used as the basis for translation from one application to another. The engineering modeling software industry has struggled for decades to enable these applications to interoperate. Industry standards have been developed, but they have not unified the industry. Why is this? The authors assert that the industry has relied on traditional database integration methods. The basic issue prohibiting successful application integration, then, is that traditional database-driven integration does not consider the distinct languages of each domain. An engineering model's meaning is expressed through the underlying language of that engineering domain. In essence, traditional integration methods do not retain the semantic context (meaning) of the model.
The basis of this research stems from the widely held assumption that systems engineering models are (or can be) structured according to the underlying semantic ontology of the model. This assumption rests on two observations. 1) Digital systems engineering models are often represented using graph theory (the graph of a complex system's model can contain millions of nodes and edges). Examining the nodes one at a time and following the outbound edges of each node one by one yields rudimentary statements about the model (i.e., node A relates to node B), as in a semantic graph. 2) Likewise, from the study of natural languages, a sentence can be structured into unambiguous subject-predicate-object triples within formal and highly expressive semantic ontologies. The rudimentary statements about a systems model discerned with graph theory closely mimic the triples used in the ontologies that structure natural languages. In other words, a system model's semantic graph can be (or is) structured into an ontology. Additionally, it is well established in industry that, through natural language processing (NLP), which provides the means to create language structures, computers can interpret ontological graphs. Therefore, the authors hypothesized that if the integrity of the underlying semantic structure of a systems model is retained, the contextual meaning of the model is retained. By structuring system models into the triples of the underlying ontology during the transformation from one MBSE application to another, the authors have provided a proof of concept that the meaning of a system model can be retained during transformation. The authors assert that this is the missing ingredient in effective systems model-to-model interoperability.
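The graph-to-triples transformation described above can be sketched in a few lines of Python. This is a minimal illustration only: the node and relation names are hypothetical, and the abstract's actual GENESYS adapter and ontology tooling are not shown.

```python
# Sketch: flattening a system-model graph into subject-predicate-object
# triples by walking each node's outbound edges, as the abstract describes.
# All model content here is a made-up toy example, not real GENESYS data.

def graph_to_triples(nodes, edges):
    """Emit one (subject, predicate, object) triple per outbound edge."""
    triples = []
    for source_id, predicate, target_id in edges:
        if source_id in nodes and target_id in nodes:
            triples.append((nodes[source_id], predicate, nodes[target_id]))
    return triples

# A toy fragment of a systems model: components and their relationships.
nodes = {
    "c1": "PowerSubsystem",
    "c2": "Battery",
    "r1": "SupplyPower",
}
edges = [
    ("c1", "contains", "c2"),   # PowerSubsystem contains Battery
    ("c2", "performs", "r1"),   # Battery performs SupplyPower
]

for s, p, o in graph_to_triples(nodes, edges):
    print(f"{s} --{p}--> {o}")
```

Each emitted triple is a rudimentary statement of the form "node A relates to node B," which is exactly the structure a semantic ontology can capture and a downstream MBSE application can reinterpret.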
ACKNOWLEDGEMENTS: The authors would like to thank the FY19 Model Interoperability team members who provided a solid foundation for the FY20 team to leverage: John McCloud, for the work he did to guide us toward the right use of technology to appropriately discover and manipulate ontologies; Carlos Tafoya, for the work he did to develop an application programming interface (API)/adapter to export ontology-based data from GENESYS; and Peter Chandler, for the work he did to architect our overall integration solution, with an eye toward a future large-scale, federated, production-level systems engineering digital model ecosystem.


Soft-core processor study for node-based architectures

Gallegos, Daniel E.; Welch, Benjamin J.; Jarosz, Jason P.; Van Houten, Jonathan R.; Learn, Mark W.

Node-based architecture (NBA) designs for future satellite projects hold the promise of decreasing system development time and costs; reducing size, weight, and power; and positioning the laboratory to address other emerging mission opportunities quickly. Reconfigurable Field Programmable Gate Array (FPGA)-based modules will comprise the core of several of the NBA nodes. Microprocessing capabilities will be necessary on these nodes, with varying degrees of mission-specific performance requirements. To enable the flexibility of these reconfigurable nodes, it is advantageous to incorporate the microprocessor into the FPGA itself, either as a hard-core processor built into the FPGA or as a soft-core processor built out of FPGA elements. This document describes the evaluation of three reconfigurable FPGA-based processors for use in future NBA systems: two soft cores (MicroBlaze and non-fault-tolerant LEON) and one hard core (PowerPC 405). Two standard performance benchmark applications were developed for each processor. The first, Dhrystone, is a fixed-point operation metric; the second, Whetstone, is a floating-point operation metric. Several trials were run at varying code locations, loop counts, processor speeds, and cache configurations, and FPGA resource utilization was recorded for each configuration. Cache configurations impacted the results greatly; for optimal processor efficiency it is necessary to enable caches on the processors. However, processor caches carry a penalty: cache error mitigation is necessary when operating in a radiation environment.
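Dhrystone trials like those described above are conventionally reduced to a single comparable score, DMIPS, by dividing iterations per second by the 1757 Dhrystones/s of the VAX 11/780 reference machine. The sketch below shows that reduction; the loop count and timing are made-up placeholders, not measurements from the processors evaluated in this study.

```python
# Sketch: reducing a Dhrystone trial (loop count + elapsed time) to DMIPS.
# The trial numbers below are hypothetical, not results from this study.

VAX_11_780_DHRYSTONES_PER_SEC = 1757  # conventional 1-MIPS reference machine

def dmips(loop_count, elapsed_seconds):
    """Dhrystone iterations per second, normalized to the VAX reference."""
    dhrystones_per_sec = loop_count / elapsed_seconds
    return dhrystones_per_sec / VAX_11_780_DHRYSTONES_PER_SEC

# Hypothetical trial: 500,000 benchmark loops completed in 8.2 s.
score = dmips(500_000, 8.2)
print(f"{score:.1f} DMIPS")
```

Repeating this calculation across code locations, clock speeds, and cache settings is what makes the cache effect noted above visible: disabling caches inflates `elapsed_seconds` and the score drops accordingly.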
