GPyOpt.methods package

Submodules

GPyOpt.methods.bayesian_optimization module

class GPyOpt.methods.bayesian_optimization.BayesianOptimization(f, domain=None, constraints=None, cost_withGradients=None, model_type='GP', X=None, Y=None, initial_design_numdata=5, initial_design_type='random', acquisition_type='EI', normalize_Y=True, exact_feval=False, acquisition_optimizer_type='lbfgs', model_update_interval=1, evaluator_type='sequential', batch_size=1, num_cores=1, verbosity=False, verbosity_model=False, maximize=False, de_duplication=False, **kwargs)

Bases: GPyOpt.core.bo.BO

Main class to initialize a Bayesian Optimization method.

Parameters:
  • f – function to optimize. It should take 2-dimensional numpy arrays as input and return 2-dimensional outputs (one evaluation per row).
  • domain – list of dictionaries containing the description of the input variables (see the GPyOpt.core.space.Design_space class for details).
  • constraints – list of dictionaries containing the description of the problem constraints (see the GPyOpt.core.space.Design_space class for details).
  • cost_withGradients – cost function of the objective. It can be:
    - a function that returns the cost and its derivatives at any set of points in the domain.
    - ‘evaluation_time’: a Gaussian process (mean) is used to model the evaluation cost.
  • model_type – type of model to use as surrogate:
    - ‘GP’, standard Gaussian process.
    - ‘GP_MCMC’, Gaussian process with a prior on the hyper-parameters.
    - ‘sparseGP’, sparse Gaussian process.
    - ‘warpedGP’, warped Gaussian process.
    - ‘InputWarpedGP’, input warped Gaussian process.
    - ‘RF’, random forest (scikit-learn).
  • X – 2d numpy array containing the initial inputs (one per row) of the model.
  • Y – 2d numpy array containing the initial outputs (one per row) of the model.
  • normalize_Y – whether to normalize the outputs before performing any optimization (default, True).
  • model_update_interval – interval of collected observations after which the model is updated (default, 1).
  • evaluator_type – determines the way the objective is evaluated (all methods are equivalent if the batch size is one):
    - ‘sequential’, sequential evaluations.
    - ‘random’, synchronous batch that selects the first element as in a sequential policy and the rest at random.
    - ‘local_penalization’, batch method proposed in (Gonzalez et al. 2016).
    - ‘thompson_sampling’, batch method using Thompson sampling.
  • batch_size – size of the batch in which the objective is evaluated (default, 1).
  • num_cores – number of cores used to evaluate the objective (default, 1).
  • verbosity – prints the models and other options during the optimization (default, False).
  • maximize – when True, maximization of f is done by minimizing -f (default, False).
  • **kwargs – extra parameters. Can be used to tune the current optimization setup or to use deprecated options in this package release.

  • initial_design_numdata – number of initial points collected jointly before starting to run the optimization.
  • initial_design_type – type of initial design:
    - ‘random’, to collect points in random locations.
    - ‘latin’, to collect points in a Latin hypercube (discrete variables are sampled randomly).
  • acquisition_type – type of acquisition function to use:
    - ‘EI’, expected improvement.
    - ‘EI_MCMC’, integrated expected improvement (requires GP_MCMC model).
    - ‘MPI’, maximum probability of improvement.
    - ‘MPI_MCMC’, integrated maximum probability of improvement (requires GP_MCMC model).
    - ‘LCB’, GP lower confidence bound.
    - ‘LCB_MCMC’, integrated GP lower confidence bound (requires GP_MCMC model).
  • exact_feval – whether the outputs are exact (default, False).
  • acquisition_optimizer_type – type of acquisition optimizer to use:
    - ‘lbfgs’: L-BFGS.
    - ‘DIRECT’: Dividing Rectangles.
    - ‘CMA’: covariance matrix adaptation.

Note

The parameters bounds, kernel, numdata_initial_design, type_initial_design, model_optimize_interval, acquisition, acquisition_par, model_optimize_restarts, sparseGP, num_inducing and normalize can still be used but will be deprecated in the next version.

GPyOpt.methods.modular_bayesian_optimization module

class GPyOpt.methods.modular_bayesian_optimization.ModularBayesianOptimization(model, space, objective, acquisition, evaluator, X_init, Y_init=None, cost=None, normalize_Y=True, model_update_interval=1, de_duplication=False)

Bases: GPyOpt.core.bo.BO

Modular Bayesian optimization. This class wraps the optimization loop around the different handlers.

Parameters:
  • model – GPyOpt model class.
  • space – GPyOpt space class.
  • objective – GPyOpt objective class.
  • acquisition – GPyOpt acquisition class.
  • evaluator – GPyOpt evaluator class.
  • X_init – 2d numpy array containing the initial inputs (one per row) of the model.
  • Y_init – 2d numpy array containing the initial outputs (one per row) of the model.
  • cost – GPyOpt cost class (default, None).
  • normalize_Y – whether to normalize the outputs before performing any optimization (default, True).
  • model_update_interval – interval of collected observations after which the model is updated (default, 1).
  • de_duplication – instantiated de_duplication GPyOpt class.

Module contents