Commands Introduction

Overview

In the DAKOTA system, a strategy creates and manages iterators and models. A model, generally speaking, contains a set of variables, an interface, and a set of responses, and the iterator operates on the model to map the variables into responses using the interface. Each of these six pieces (strategy, method, model, variables, interface, and responses) is a separate specification in the user's input file, and together they determine the study to be performed during an execution of the DAKOTA software. Only one strategy can be invoked during a DAKOTA execution; this strategy, however, may invoke multiple methods. Furthermore, each method may have its own model, consisting (generally speaking) of its own set of variables, its own interface, and its own set of responses. Thus, there may be multiple specifications of the method, model, variables, interface, and responses sections.
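
To make this layout concrete, the outline below shows how the six blocks appear together in a single input file. It is adapted from Sample 4: Parameter Study later in this chapter; only the explanatory comments have been added here.

# strategy: coordinates one or more methods (here, a single method)
strategy,
        single_method

# method: selects the iterator and its controls
method,
        vector_parameter_study
          final_point = 1.1  1.3
          num_steps = 10

# model: ties a set of variables, an interface, and a set of responses together
model,
        single

# variables: the parameters the iterator operates on
variables,
        continuous_design = 2
          initial_point   -0.3      0.2
          descriptors      'x1'     'x2'

# interface: maps variables into responses via an analysis driver
interface,
        direct
          analysis_driver = 'rosenbrock'

# responses: the set of data returned to the iterator
responses,
        num_objective_functions = 1
        no_gradients
        no_hessians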

The syntax of the DAKOTA input specification is governed by the New Input Deck Reader (NIDR) parsing system [Gay, 2008], which uses the dakota.input.nspec file to describe the allowable inputs to the system. A shortened form of this input specification file, dakota.input.summary, provides a quick reference to the allowable system inputs from which a particular input file (e.g., dakota.in) can be derived. This automatically derived shortened form omits implementation details not needed in a quick reference.

This Reference Manual focuses on providing complete details for the allowable specifications in an input file to the DAKOTA program. Related details on the name and location of the DAKOTA program, command line inputs, and execution syntax are provided in the Users Manual [Adams et al., 2010].

NIDR Input Specification File

DAKOTA input is governed by the NIDR input specification file. This file (dakota.input.nspec) is used by a code generator to create parsing system components that are compiled into the DAKOTA executable (refer to Instructions for Modifying DAKOTA's Input Specification for additional information). Therefore, dakota.input.nspec and its summary, dakota.input.summary, are the definitive sources for input syntax, capability options, and optional and required capability sub-parameters. Beginning users may find dakota.input.summary more confusing than helpful; in that case, adapting the example input files to a particular problem may be a more effective approach. However, advanced users can master all of the various input specification possibilities once the structure of the input specification file is understood.

Refer to the dakota.input.summary file for the current input specification. From this file listing, it can be seen that the main structure of the strategy specification is that of several required group specifications separated by logical OR's: either hybrid OR multi-start OR pareto set OR single method. The method keyword is the lengthiest specification; however, its structure is again relatively simple: a set of optional method-independent settings followed by a long list of possible methods appearing as required group specifications (each containing a variety of method-dependent settings) separated by OR's.

The model keyword reflects a structure of three required group specifications separated by OR's. Within the surrogate model type, the type of approximation must be specified with either a global OR multipoint OR local OR hierarchical required group specification. The structure of the variables keyword is that of optional group specifications for continuous and discrete design variables, a number of different uncertain variable distribution types, and continuous and discrete state variables; each of these specifications can either appear or not appear as a group.

Next, the interface keyword allows the specification of either algebraic mappings, simulation-based analysis driver mappings, or both. Within the analysis drivers specification, a system OR fork OR direct OR grid group specification must be selected. Finally, within the responses keyword, the primary structure is the required specification of the function set (either optimization functions OR least squares functions OR generic response functions), followed by the required specification of the gradients (either none OR numerical OR analytic OR mixed) and the required specification of the Hessians (either none OR numerical OR quasi OR analytic OR mixed).

Refer to Strategy Commands, Method Commands, Model Commands, Variables Commands, Interface Commands, and Responses Commands for detailed information on the keywords and their various optional and required specifications. For additional details on NIDR specification logic and rules, refer to [Gay, 2008].
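
As a concrete illustration of this nesting of required groups, the responses block from Sample 1: Optimization below selects one alternative from each of its three required groups. The comments are annotations added here and are not part of the sample.

responses,
        # function set: optimization functions (vs. least squares or generic response functions)
        num_objective_functions = 1
        num_nonlinear_inequality_constraints = 2
        # gradients: numerical (vs. none, analytic, or mixed)
        numerical_gradients
          method_source dakota
          interval_type central
          fd_gradient_step_size = 1.e-4
        # Hessians: none (vs. numerical, quasi, analytic, or mixed)
        no_hessians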

Some keywords, such as those providing bounds on variables, have an associated list of values. When the same value should be repeated several times in a row, you can use a notation of the form n*value instead of repeating the value n times. For example, in Sample 2: Least Squares below,

          lower_bounds    -2.0   -2.0
          upper_bounds     2.0    2.0
could also be written
          lower_bounds    2*-2.0
          upper_bounds    2 * 2.0
(with optional spaces around the * ). Another possible abbreviation is for sequences: L:S:U (with optional spaces around the : ) is expanded to L L+S L+2*S ... U, and L:U (with no second colon) is treated as L:1:U. For example, in one of the test examples distributed with DAKOTA (test case 2 of test/dakota_uq_textbook_sop_lhs.in),
        histogram_point = 2                             
          abscissas     = 50. 60. 70. 80. 90.           
                          30. 40. 50. 60. 70.           
          counts        = 10  20  30  20  10            
                          10  20  30  20  10            
could also be written
        histogram_point = 2                             
          abscissas     = 50 : 10 : 90
                          30 : 10 : 70                  
          counts        = 10:10:30  20  10              
                          10:10:30  20  10              

Common Specification Mistakes

Spelling mistakes and omission of required parameters are the most common errors; some causes of errors are more obscure. In most cases, the NIDR system provides error messages that help the user isolate errors in DAKOTA input files.

Sample dakota.in Files

A DAKOTA input file is a collection of fields from the dakota.input.summary file that describe the problem to be solved by the DAKOTA system. Several examples follow.

Sample 1: Optimization

The following sample input file shows single-method optimization of the Textbook Example using DOT's modified method of feasible directions. A similar file is available in the DAKOTA distribution as Dakota/examples/tutorial/dakota_textbook.in.

strategy,
        single_method

method,
#       DOT performs better, but may not be available
        dot_mmfd,
#       conmin_mfd,
          max_iterations = 50,
          convergence_tolerance = 1e-4

variables,
        continuous_design = 2
          initial_point    0.9    1.1
          upper_bounds     5.8    2.9
          lower_bounds     0.5   -2.9
          descriptors      'x1'   'x2'

interface,
        direct
          analysis_driver =       'text_book'

responses,
        num_objective_functions = 1
        num_nonlinear_inequality_constraints = 2
        numerical_gradients
          method_source dakota
          interval_type central
          fd_gradient_step_size = 1.e-4
        no_hessians

Sample 2: Least Squares

The following sample input file shows a nonlinear least squares solution of the Rosenbrock Example using the NL2SOL method. A similar file is available in the DAKOTA distribution as Dakota/examples/tutorial/dakota_rosenbrock_ls.in.

strategy,
        single_method

method,
        nl2sol
          max_iterations = 50
          convergence_tolerance = 1e-4

model,
        single

variables,
        continuous_design = 2
          initial_point   -1.2    1.0
          lower_bounds    -2.0   -2.0
          upper_bounds     2.0    2.0
          descriptors      'x1'   'x2'

interface,
        system
          analysis_driver = 'rosenbrock'

responses,
        num_least_squares_terms = 2
        analytic_gradients
        no_hessians

Sample 3: Nondeterministic Analysis

The following sample input file shows Latin Hypercube Monte Carlo sampling using the Textbook Example. A similar file is available in the test directory as Dakota/test/dakota_uq_textbook_lhs.in.

strategy,
        single_method

method,
        nond_sampling,
          samples = 100 seed = 1
          complementary distribution
          response_levels = 3.6e+11 4.0e+11 4.4e+11
                            6.0e+04 6.5e+04 7.0e+04
                            3.5e+05 4.0e+05 4.5e+05
          sample_type lhs

variables,
        normal_uncertain = 2
          means             =  248.89, 593.33
          std_deviations    =   12.4,   29.7
          descriptors       =  'TF1n'  'TF2n'
        uniform_uncertain = 2
          lower_bounds      =  199.3,  474.63
          upper_bounds      =  298.5,  712.
          descriptors       =  'TF1u'  'TF2u'
        weibull_uncertain = 2
          alphas            =   12.,    30.
          betas             =  250.,   590.
          descriptors       =  'TF1w'  'TF2w'
        histogram_bin_uncertain = 2
          num_pairs   =  3         4
          abscissas   =  5  8 10  .1  .2  .3  .4
          counts      = 17 21  0  12  24  12   0
          descriptors = 'TF1h'  'TF2h'
        histogram_point_uncertain = 1
          num_pairs   = 2
          abscissas   = 3 4
          counts      = 1 1
          descriptors = 'TF3h'

interface,
        system asynch evaluation_concurrency = 5
          analysis_driver = 'text_book'

responses,
        num_response_functions = 3
        no_gradients
        no_hessians

Sample 4: Parameter Study

The following sample input file shows a 1-D vector parameter study using the Rosenbrock Example. It makes use of the default strategy and model specifications (single_method and single, respectively). A similar file is available in the DAKOTA distribution as Dakota/examples/tutorial/dakota_rosenbrock_vector.in.

strategy,
        single_method

method,
        vector_parameter_study
          final_point = 1.1  1.3
          num_steps = 10

model,
        single

variables,
        continuous_design = 2
          initial_point   -0.3      0.2
          descriptors      'x1'     'x2'

interface,
        direct
          analysis_driver = 'rosenbrock'

responses,
        num_objective_functions = 1
        no_gradients
        no_hessians

Sample 5: Hybrid Strategy

The following sample input file shows a hybrid strategy using three methods. It employs a genetic algorithm, pattern search, and full Newton gradient-based optimization in succession to solve the Textbook Example. A similar file is available in the test directory as Dakota/test/dakota_hybrid.in.

strategy,
        graphics
        hybrid sequential
          method_list = 'GA' 'PS' 'NLP'

method,
        id_method = 'GA'
        model_pointer = 'M1'
        coliny_ea
          seed = 1234
          population_size = 10
          verbose output

method,
        id_method = 'PS'
        model_pointer = 'M1'
        coliny_pattern_search stochastic
          seed = 1234
          initial_delta = 0.1
          threshold_delta = 1.e-4
          solution_accuracy = 1.e-10
          exploratory_moves basic_pattern
          verbose output

method,
        id_method = 'PS2'
        model_pointer = 'M1'
        max_function_evaluations = 10
        coliny_pattern_search stochastic
          seed = 1234
          initial_delta = 0.1
          threshold_delta = 1.e-4
          solution_accuracy = 1.e-10
          exploratory_moves basic_pattern
          verbose output

method,
        id_method = 'NLP'
        model_pointer = 'M2'
        optpp_newton
          gradient_tolerance = 1.e-12
          convergence_tolerance = 1.e-15
          verbose output

model,
        id_model = 'M1'
        single
          variables_pointer = 'V1'
          interface_pointer = 'I1'
          responses_pointer = 'R1'

model,
        id_model = 'M2'
        single
          variables_pointer = 'V1'
          interface_pointer = 'I1'
          responses_pointer = 'R2'

variables,
        id_variables = 'V1'
        continuous_design = 2
          initial_point    0.6    0.7
          upper_bounds     5.8    2.9
          lower_bounds     0.5   -2.9
          descriptors      'x1'   'x2'

interface,
        id_interface = 'I1'
        direct
          analysis_driver=  'text_book'

responses,
        id_responses = 'R1'
        num_objective_functions = 1
        no_gradients
        no_hessians

responses,
        id_responses = 'R2'
        num_objective_functions = 1
        analytic_gradients
        analytic_hessians
 

Additional example input files, as well as the corresponding output and graphics, are provided in the Getting Started chapter of the Users Manual [Adams et al., 2010].

Tabular descriptions

In the following discussions of keyword specifications, tabular formats (Tables 4.1 through 9.10) are used to present a short description of the specification, the keyword used in the specification, the type of data associated with the keyword, the status of the specification (required, optional, required group, or optional group), and the default for an optional specification.

It can be difficult to capture in a simple tabular format the complex relationships that arise when specifications are nested within multiple groupings. For example, in the model keyword, the actual_model_pointer specification is a required specification within the multipoint and local required group specifications, which are separated from each other and from other required group specifications (global and hierarchical) by logical OR's. The selection among the global, multipoint, local, or hierarchical required groups is contained within another required group specification (surrogate), which is separated from the single and nested required group specifications by logical OR's. Rather than proliferate tables in an attempt to capture all of these inter-relationships, a balance is sought, since some inter-relationships are more easily discussed in the associated text. The general structure of the following sections is to present the outermost specification groups first (e.g., single, surrogate, or nested in Table 6.1), followed by lower levels of group specifications (e.g., global, multipoint, local, or hierarchical surrogates in Table 6.3), followed by the components of each group (e.g., Tables 6.4 through 6.8) in succession.


