The strategies provided by DAKOTA are: hybrid, multi_start, pareto_set, and single_method. These algorithms are implemented within the Strategy class hierarchy in the CollaborativeHybridStrategy, EmbeddedHybridStrategy, SequentialHybridStrategy, ConcurrentStrategy, and SingleMethodStrategy classes. For each of these strategies, a brief algorithm description is given below. Additional information on the algorithm logic is available in the Users Manual [Adams et al., 2010].
In a hybrid minimization strategy (hybrid), a set of methods is specified which will be used synergistically in seeking an optimal design. The relationships among the methods are categorized as collaborative, embedded, or sequential. The goal in each case is to exploit the strengths of different optimization and nonlinear least squares algorithms through different stages of the minimization process. Global/local hybrids (e.g., genetic algorithms combined with nonlinear programming) are a common example in which the desire for identification of a global optimum is balanced with the need for efficient navigation to a local optimum.
In the multi-start iteration strategy (multi_start), a series of iterator runs is performed for different values of parameters in the model. A common use is multi-start optimization (i.e., different local optimization runs from different starting points for the design variables), but the concept and the code are more general. An important feature is that these iterator runs may be performed concurrently.
In the Pareto set optimization strategy (pareto_set), a series of optimization or least squares calibration runs is performed for different weightings applied to multiple objective functions. The resulting set of optimal solutions defines a "Pareto set," which is useful for investigating design trade-offs between competing objectives. Again, these optimizations can be performed concurrently, similar to the multi-start strategy discussed above. The code is similar enough to the multi_start technique that both strategies are implemented in the same ConcurrentStrategy class.
Lastly, the single_method strategy is a "fall-through" strategy in that it does not provide control over multiple iterators or multiple models. Rather, it provides the means for simple execution of a single iterator on a single model.
Each of the strategy specifications identifies one or more method pointers (e.g., method_list, method_pointer) to identify the iterators that will be used in the strategy. These method pointers are strings that correspond to the id_method identifier strings from the method specifications (see Method Independent Controls). These string identifiers (e.g., 'NLP1') should not be confused with method selections (e.g., dot_mmfd). Each of the method specifications identified in this manner has the responsibility for identifying corresponding model specifications (using model_pointer from Method Independent Controls), which in turn identify the variables, interface, and responses specifications (using variables_pointer, interface_pointer, and responses_pointer from Model Commands) that are used to build the model used by the iterator. If one of these specifications does not provide an optional pointer, then that component will be constructed using the last specification parsed. In addition to method pointers, a variety of graphics options (e.g., tabular_graphics_data), iterator concurrency controls (e.g., iterator_servers), and strategy data (e.g., starting_points) can be specified.
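To illustrate this chain of pointers, the following abbreviated sketch (not a complete, verified input file) links a strategy to a method, which in turn points to a model and its component specifications. The identifier strings 'NLP1', 'M1', 'V1', 'I1', and 'R1' are arbitrary placeholders, and the companion identifier keywords id_model, id_variables, id_interface, and id_responses are assumed from the Model, Variables, Interface, and Responses command sections:
    strategy, single_method method_pointer = 'NLP1'
    method, id_method = 'NLP1' model_pointer = 'M1' dot_mmfd
    model, id_model = 'M1' variables_pointer = 'V1' responses_pointer = 'R1' single interface_pointer = 'I1'
    # variables, interface, and responses blocks carrying id_variables = 'V1',
    # id_interface = 'I1', and id_responses = 'R1' complete the chain (omitted here for brevity)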
Specification of a strategy block in an input file is optional, with single_method being the default strategy. If no strategy is specified, or if single_method is specified without its optional method_pointer specification, then the default behavior is to employ the last method, variables, interface, and responses specifications parsed. This default behavior is most appropriate if only one specification is present for method, variables, interface, and responses, since there is no ambiguity in that case.
Example specifications for each of the strategies follow. A hybrid example is:
    strategy, hybrid sequential method_list = 'GA', 'PS', 'NLP'
A multi_start example specification is:
    strategy, multi_start method_pointer = 'NLP1' random_starts = 10
A pareto_set example specification is:
    strategy, pareto_set method_pointer = 'NLP1' random_weight_sets = 10
And finally, a single_method example specification is:
    strategy, single_method method_pointer = 'NLP1'
The strategy specification has the following structure:
    strategy,
        <strategy independent controls>
        <strategy selection>
            <strategy dependent controls>
where <strategy selection> is one of the following: hybrid, multi_start, pareto_set, or single_method.
The <strategy independent controls> are those controls which are valid for a variety of strategies. Unlike the Method Independent Controls, which can be abstractions with slightly different implementations from one method to the next, the implementations of each of the strategy independent controls are consistent for all strategies that use them. The <strategy dependent controls> are those controls which are only meaningful for a specific strategy. Referring to dakota.input.summary, the strategy independent controls are those controls defined externally from and prior to the strategy selection blocks. They are all optional. The strategy selection blocks are all required group specifications separated by logical OR's (hybrid OR multi_start OR pareto_set OR single_method). Thus, one and only one strategy selection must be provided. The strategy dependent controls are those controls defined within the strategy selection blocks. Defaults for strategy independent and strategy dependent controls are defined in DataStrategy. The following sections provide additional detail on the strategy independent controls followed by the strategy selections and their corresponding strategy dependent controls.
The strategy independent controls include graphics, tabular_graphics_data, tabular_graphics_file, iterator_servers, iterator_self_scheduling, and iterator_static_scheduling. The graphics flag activates a 2D graphics window containing history plots for the variables and response functions in the study. This window is updated in an event loop with approximately a 2 second cycle time. For applications utilizing approximations over 2 variables, a 3D graphics window containing a surface plot of the approximation will also be activated. The tabular_graphics_data flag activates file tabulation of the same variables and response function history data that gets passed to the graphics windows with use of the graphics flag. The tabular_graphics_file specification optionally specifies a name to use for this file (dakota_tabular.dat is the default). Within the file, the variables and response functions appear as columns and each function evaluation provides a new table row. This capability is most useful for post-processing of DAKOTA results with third-party graphics tools such as MATLAB, Tecplot, etc. There is no dependence between the graphics flag and the tabular_graphics_data flag; they may be used independently or concurrently. The iterator_servers, iterator_self_scheduling, and iterator_static_scheduling specifications provide manual overrides for the number of concurrent iterator partitions and the scheduling policy for concurrent iterator jobs. These settings are normally determined automatically in the parallel configuration routines (see ParallelLibrary) but can be overridden with user inputs if desired. The graphics, tabular_graphics_data, and tabular_graphics_file specifications are valid for all strategies. However, the iterator_servers, iterator_self_scheduling, and iterator_static_scheduling overrides are only useful inputs for those strategies supporting concurrency in iterators, i.e., multi_start and pareto_set. Table 4.1 summarizes the strategy independent controls.
Description | Keyword | Associated Data | Status | Default |
Graphics flag | graphics | none | Optional | no graphics |
Tabulation of graphics data | tabular_graphics_data | none | Optional group | no data tabulation |
File name for tabular graphics data | tabular_graphics_file | string | Optional | dakota_tabular.dat |
Number of iterator servers | iterator_servers | integer | Optional | no override of auto configure |
Self-scheduling of iterator jobs | iterator_self_scheduling | none | Optional | no override of auto configure |
Static scheduling of iterator jobs | iterator_static_scheduling | none | Optional | no override of auto configure |
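As an illustrative sketch (the file name, server count, and method identifier are placeholders, and the referenced method specification must exist elsewhere in the input file), a strategy block combining several of these controls with a multi_start selection might look like:
    strategy,
        graphics
        tabular_graphics_data tabular_graphics_file = 'my_study.dat'
        iterator_servers = 4
        multi_start method_pointer = 'NLP1' random_starts = 8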
The hybrid minimization strategy provides sequential, sequential adaptive, embedded, and collaborative approaches (see the Users Manual [Adams et al., 2010] for more information on the algorithms employed). In the sequential approaches, best solutions are transferred from one method to the next through a specified sequence. In the embedded approach, a tightly-coupled hybrid is employed in which a subordinate local method provides periodic refinements to a top-level global method. And in the collaborative approach, multiple methods work together and share solutions while executing concurrently.
In the two sequential approaches, a list of method strings supplied with the method_list specification identifies the iterators to be used and the sequence in which they execute. Any number of iterators may be specified. The sequential adaptive approach is selected by turning on the adaptive flag. If this flag is specified, then progress_threshold must also be specified, since it is a required part of the adaptive specification. In the nonadaptive case, method switching is managed through the separate convergence controls of each method. In the adaptive case, however, method switching occurs when the internal progress metric (normalized between 0.0 and 1.0) falls below the user-specified progress_threshold. The number of solutions transferred between methods is specified by num_solutions_transferred. For example, consider a two-level strategy with a first method that generates multiple solutions, such as a genetic algorithm, followed by a second method that is initialized only at a single point, such as a gradient-based algorithm: it is possible to take the multiple solutions generated by the first method and create several instances of the second method, each with a different starting point. The logic governing the transfer of multiple solutions between methods is as follows: if one solution is returned from method A, then one solution is transferred to method B. If multiple solutions are returned from method A, and method B can accept multiple solutions as input (for example, as a genetic algorithm population), then one instance of method B is initialized with multiple solutions. If multiple solutions are returned from method A but method B can only accept one initial starting point, then method B is run num_solutions_transferred times, each with a separate starting point. The default number of solutions transferred is one. Table 4.2 summarizes the sequential hybrid strategy inputs.
Description | Keyword | Associated Data | Status | Default |
Hybrid strategy | hybrid | none | Required group (1 of 4 selections) | N/A |
Sequential hybrid | sequential | none | Required group (1 of 3 selections) | N/A |
Adaptive flag | adaptive | none | Optional group | nonadaptive hybrid |
Adaptive progress threshold | progress_threshold | real | Required | N/A |
Number of Solutions Transferred | num_solutions_transferred | integer | Optional | 1 |
List of methods | method_list | list of strings | Required | N/A |
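As a sketch of these controls (the method identifiers, threshold, and transfer count are illustrative only, and the listed id_method strings must be defined in method specifications elsewhere in the input file), an adaptive sequential hybrid might be specified as:
    strategy,
        hybrid sequential
            adaptive progress_threshold = 0.5
            num_solutions_transferred = 5
            method_list = 'GA', 'NLP'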
In the embedded approach, global and local method strings supplied with the global_method_pointer and local_method_pointer specifications identify the two methods to be used. The local_search_probability setting is an optional specification for supplying the probability (between 0.0 and 1.0) of employing local search to improve estimates within the global search. Table 4.3 summarizes the embedded hybrid strategy inputs.
Description | Keyword | Associated Data | Status | Default |
Hybrid strategy | hybrid | none | Required group (1 of 4 selections) | N/A |
Embedded hybrid | embedded | none | Required group (1 of 3 selections) | N/A |
Pointer to the global method specification | global_method_pointer | string | Required | N/A |
Pointer to the local method specification | local_method_pointer | string | Required | N/A |
Probability of executing local searches | local_search_probability | real | Optional | 0.1 |
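For example, an embedded hybrid that pairs a global and a local method and requests local searches 20% of the time might be sketched as follows, where 'GA1' and 'NLP1' are placeholders for id_method strings defined elsewhere in the input file:
    strategy,
        hybrid embedded
            global_method_pointer = 'GA1'
            local_method_pointer = 'NLP1'
            local_search_probability = 0.2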
In the collaborative approach, a list of method strings supplied with the method_list specification identifies the pool of iterators to be used. Any number of iterators may be specified. The method collaboration logic follows that of either the Agent-Based Optimization or HOPSPACK codes and is currently under development. Table 4.4 summarizes the collaborative hybrid strategy inputs.
Description | Keyword | Associated Data | Status | Default |
Hybrid strategy | hybrid | none | Required group (1 of 4 selections) | N/A |
Collaborative hybrid | collaborative | none | Required group (1 of 3 selections) | N/A |
List of methods | method_list | list of strings | Required | N/A |
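A corresponding sketch of the specification syntax (the method identifiers are placeholders for id_method strings defined elsewhere in the input file; recall that the collaboration logic itself is still under development):
    strategy,
        hybrid collaborative
            method_list = 'GA1', 'PS1', 'NLP1'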
The multi_start strategy must specify an iterator using method_pointer. This iterator is responsible for completing a series of iterative analyses from a set of different starting points. These starting points can be specified as follows: (1) using random_starts, for which the specified number of starting points are selected randomly within the variable bounds; (2) using starting_points, in which the starting values are provided in a list; or (3) using both random_starts and starting_points, for which the combined set of points will be used. In aggregate, at least one starting point must be specified. The most common example of a multi-start strategy is multi-start optimization, in which a series of optimizations is performed from different starting values for the design variables. This can be an effective approach for problems with multiple minima. Table 4.5 summarizes the multi-start strategy inputs.
Description | Keyword | Associated Data | Status | Default |
Multi-start iteration strategy | multi_start | none | Required group (1 of 4 selections) | N/A |
Method pointer | method_pointer | string | Required | N/A |
Number of random starting points | random_starts | integer | Optional group | no random starting points |
Seed for random starting points | seed | integer | Optional | system-generated seed |
List of user-specified starting points | starting_points | list of reals | Optional | no user-specified starting points |
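For instance, random and user-specified starting points can be combined as in the following sketch for a hypothetical problem with two design variables, so that each group of two reals in starting_points is interpreted as one starting point (all values, the seed, and the method identifier are illustrative):
    strategy,
        multi_start
            method_pointer = 'NLP1'
            random_starts = 5 seed = 123
            starting_points = -1.0,  1.0,
                               0.0,  0.0,
                               1.0, -1.0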
The pareto_set strategy must specify an optimization or least squares calibration method using method_pointer. This minimizer is responsible for computing a set of optimal solutions from a set of response weightings (multi-objective weights or least squares term weights). These weightings can be specified as follows: (1) using random_weight_sets, in which case weightings are selected randomly within [0,1] bounds; (2) using weight_sets, in which the weighting sets are specified in a list; or (3) using both random_weight_sets and weight_sets, for which the combined set of weights will be used. In aggregate, at least one set of weights must be specified. The set of optimal solutions is called the "Pareto set," which can provide valuable design trade-off information when there are competing objectives. Table 4.6 summarizes the Pareto set strategy inputs.
Description | Keyword | Associated Data | Status | Default |
Pareto set optimization strategy | pareto_set | none | Required group (1 of 4 selections) | N/A |
Optimization method pointer | method_pointer | string | Required | N/A |
Number of random weighting sets | random_weight_sets | integer | Optional | no random weighting sets |
Seed for random weighting sets | seed | integer | Optional | system-generated seed |
List of user-specified weighting sets | weight_sets | list of reals | Optional | no user-specified weighting sets |
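For example, for a hypothetical problem with two objective functions, so that each group of two reals in weight_sets is interpreted as one weighting set (all values, the seed, and the method identifier are illustrative), random and user-specified weightings can be combined as:
    strategy,
        pareto_set
            method_pointer = 'NLP1'
            random_weight_sets = 4 seed = 123
            weight_sets = 1.0, 0.0,
                          0.5, 0.5,
                          0.0, 1.0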
The single method strategy is invoked using the single_method keyword within a strategy specification. An optional method_pointer specification may be used to point to a particular method specification. If method_pointer is not used, then the last method specification parsed will be used as the iterator. Table 4.7 summarizes the single method strategy inputs.
Description | Keyword | Associated Data | Status | Default |
Single method strategy | single_method | none | Required group (1 of 4 selections) | N/A |
Method pointer | method_pointer | string | Optional | use of last method parsed |