Unconstrained Optimization
Showing 1-12 of 25 articles, theses, and dissertations.
-
1. STOCHASTIC GRADIENT METHODS FOR UNCONSTRAINED OPTIMIZATION
This paper presents an overview of gradient-based methods for the minimization of noisy functions. It is assumed that the objective function is either given with error terms of a stochastic nature or given as a mathematical expectation. Such problems arise in the context of simulation-based optimization. The focus of this presentation is on the gradient based
Pesqui. Oper. Published: 2014-12
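The diminishing-step-size idea central to this class of methods can be illustrated with a minimal sketch. This is not code from the paper: the noisy quadratic objective and the function names (`noisy_grad`, `sgd`) are hypothetical, and the step sizes a_k = 1/(k+1) are just one classical choice satisfying the usual summability conditions.

```python
import random

# Hypothetical noisy objective: the true function is f(x) = (x - 3)^2,
# but only gradients corrupted by zero-mean Gaussian noise are observable.
def noisy_grad(x, sigma=0.5):
    return 2.0 * (x - 3.0) + random.gauss(0.0, sigma)

def sgd(x0, steps=5000, seed=0):
    """Stochastic gradient descent with diminishing steps a_k = 1/(k+1),
    a classical choice ensuring convergence under unbiased gradient noise."""
    random.seed(seed)
    x = x0
    for k in range(steps):
        x -= (1.0 / (k + 1)) * noisy_grad(x)
    return x

x_star = sgd(0.0)   # should land near the true minimizer x = 3
```

Because the noise has zero mean and the steps shrink, the iterates average the noise out while still making progress on the underlying quadratic.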
-
2. BUNDLE METHODS IN THE XXIst CENTURY: A BIRD'S-EYE VIEW
Bundle methods are often the algorithms of choice for nonsmooth convex optimization, especially if accuracy in the solution and reliability are a concern. We review several algorithms based on the bundle methodology that have been developed recently and that, unlike their forerunner variants, have the ability to provide exact solutions even if most of the ti
Pesqui. Oper. Published: 2014-12
-
3. TRUST-REGION-BASED METHODS FOR NONLINEAR PROGRAMMING: RECENT ADVANCES AND PERSPECTIVES
The aim of this text is to highlight recent advances of trust-region-based methods for nonlinear programming and to put them into perspective. An algorithmic framework provides a ground with the main ideas of these methods and the related notation. Specific approaches concerned with handling the trust-region subproblem are recalled, particularly for the larg
Pesqui. Oper. Published: 2014-12
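The algorithmic framework the abstract refers to can be sketched in its simplest form. This is a generic illustration, not the paper's method: it uses the elementary Cauchy-point step for the subproblem (the paper surveys far more sophisticated subproblem solvers), and all names here (`trust_region`, `cauchy_point`) are my own.

```python
import numpy as np

def cauchy_point(g, B, delta):
    """Minimizer of the quadratic model g.p + 0.5 p.B.p along -g, clipped
    to the trust-region ball ||p|| <= delta."""
    gnorm = np.linalg.norm(g)
    gBg = g @ B @ g
    tau = 1.0 if gBg <= 0 else min(gnorm**3 / (delta * gBg), 1.0)
    return -tau * (delta / gnorm) * g

def trust_region(f, grad, hess, x0, delta=1.0, delta_max=10.0,
                 eta=0.1, tol=1e-8, max_iter=500):
    """Basic trust-region loop: take a model step, then accept/reject and
    resize the region via the ratio of actual to predicted reduction."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        B = hess(x)
        p = cauchy_point(g, B, delta)
        pred = -(g @ p + 0.5 * p @ B @ p)     # predicted model reduction
        rho = (f(x) - f(x + p)) / pred        # actual / predicted reduction
        if rho < 0.25:
            delta *= 0.25                     # shrink: model untrustworthy here
        elif rho > 0.75 and np.isclose(np.linalg.norm(p), delta):
            delta = min(2 * delta, delta_max) # expand: good step at the boundary
        if rho > eta:
            x = x + p                         # accept the step
    return x
```

On a convex quadratic with its exact Hessian, `rho` equals 1 and every step is accepted, so the loop reduces to steepest descent with exact line search once the region is large enough.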
-
4. State estimation of chemical engineering systems tending to multiple solutions
A well-evaluated state covariance matrix avoids error propagation due to divergence issues and, thereby, it is crucial for a successful state estimator design. In this paper we investigate the performance of the state covariance matrices used in three unconstrained Extended Kalman Filter (EKF) formulations and one constrained EKF formulation (CEKF). As bench
Braz. J. Chem. Eng. Published: 2014-09
-
5. Active-set strategy in Powell's method for optimization without derivatives
In this article we present an algorithm for solving bound constrained optimization problems without derivatives based on Powell's method [38] for derivative-free optimization. First we consider the unconstrained optimization problem. At each iteration a quadratic interpolation model of the objective function is constructed around the current iterate and this
Computational & Applied Mathematics. Published: 2011
-
6. Using truncated conjugate gradient method in trust-region method with two subproblems and backtracking line search
A trust-region method with two subproblems and backtracking line search for solving unconstrained optimization is proposed. At every iteration, we use the truncated conjugate gradient method or its variation to solve one of the two subproblems approximately. Backtracking line search is carried out when the trust-region trial step fails. We show that this met
Computational & Applied Mathematics. Published: 2010-06
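The truncated conjugate gradient method mentioned here (often attributed to Steihaug and Toint) approximately solves the trust-region subproblem, stopping at the boundary on negative curvature or when an iterate would leave the region. The sketch below is a textbook version under that assumption, not the paper's specific variation; the names `steihaug_cg` and `to_boundary` are my own.

```python
import numpy as np

def to_boundary(p, d, delta):
    """Positive root tau of ||p + tau*d|| = delta (a scalar quadratic)."""
    a = d @ d
    b = 2.0 * (p @ d)
    c = p @ p - delta**2
    return (-b + np.sqrt(b * b - 4 * a * c)) / (2 * a)

def steihaug_cg(B, g, delta, tol=1e-8, max_iter=100):
    """Truncated CG for: min g.p + 0.5 p.B.p subject to ||p|| <= delta.
    Terminates at the trust-region boundary on negative curvature, or when
    the CG iterate would leave the region, or on a small residual."""
    p = np.zeros(g.size)
    r = g.copy()                 # residual = gradient of the model at p
    if np.linalg.norm(r) < tol:
        return p
    d = -r
    for _ in range(max_iter):
        Bd = B @ d
        curv = d @ Bd
        if curv <= 0:                          # negative curvature direction
            return p + to_boundary(p, d, delta) * d
        alpha = (r @ r) / curv
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= delta:    # step would leave the region
            return p + to_boundary(p, d, delta) * d
        r_next = r + alpha * Bd
        if np.linalg.norm(r_next) < tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        p, r = p_next, r_next
    return p
```

When the unconstrained model minimizer lies inside the region, this returns it (here CG converges exactly for quadratic models); otherwise it returns a boundary point, which is what triggers the backtracking line search in the method above when that trial step fails.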
-
7. Estudo de alguns métodos determinísticos de otimização irrestrita / Study of some deterministic methods for unconstrained optimization
In this work some classical line-search methods for unconstrained optimization are studied. The main mathematical formulations for the optimization problem are presented. Two strategies, line search and trust region, for the algorithm to move from one iteration to another are discussed. Furthermore, the main considerations about the choice of step len
Published: 2010
-
8. DIRECT, analise intervalar e otimização global irrestrita / DIRECT, interval analysis and unconstrained global optimization
In this work we analyze two unconstrained global optimization methods: DIRECT, a branch-and-select method, based on Lipschitzian optimization, with a special selection criterion that balances the emphasis between local and global search; and a branch-and-bound method incorporating state-of-the-art interval analysis techniques, with back-boxing and local sear
Published: 2009
-
9. An adaptive mesh strategy for high compressible flows based on nodal re-allocation
An adaptive mesh strategy based on nodal re-allocation is presented in this work. This technique is applied to problems involving compressible flows with strong shock waves, improving the accuracy and efficiency of the numerical solution. The initial mesh is continuously adapted during the solution process keeping, as much as possible, mesh smoothness and l
Journal of the Brazilian Society of Mechanical Sciences and Engineering. Published: 2008-09
-
10. Lagrange multipliers : geometrical and algebraic aspects and an application in chemical engineering in the methanol distillation / Multiplicadores de Lagrange : aspectos geometricos e algebricos e uma aplicação em engenharia quimica na destilação do metanol
This work begins with a brief historical overview of Fermat's method to find maxima and minima without derivatives. In theoretical terms, the elements concerning maxima and minima of functions of n variables are discussed, together with a detailed study of unconstrained optimization, focusing on Fermat's rule and the classification of critical points.
Published: 2008
-
11. Otimização de colunas de destilação : uma abordagem aplicada dos multiplicadores de Lagrange / Optimization of distillation columns : an applied approach of the Lagrange multipliers
This work tackles the optimization of a distillation process of a binary mixture in a column with plates, arising from methanol distillation in the biodiesel production process. More specifically, it considers the minimization of a cost objective function that encompasses the heat rate supplied to the reboiler and the feed temperature, subject to
Published: 2008
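The Lagrange-multiplier stationarity conditions used in constrained cost minimization of this kind can be checked on a toy analogue. This is not the distillation problem itself: the objective and constraint below are hypothetical, chosen so the solution is known in closed form.

```python
import math

# Toy constrained problem: minimize f(x, y) = x + y
# subject to g(x, y) = x^2 + y^2 - 1 = 0.
# Stationarity of L = f - lam * g requires
#   1 - 2*lam*x = 0,  1 - 2*lam*y = 0,  x^2 + y^2 = 1,
# so x = y = ±1/sqrt(2); the minimizer is the negative branch.
def lagrangian_grad(x, y, lam):
    """Gradient of L(x, y, lam) = (x + y) - lam * (x^2 + y^2 - 1)."""
    return (1 - 2 * lam * x, 1 - 2 * lam * y, -(x * x + y * y - 1))

x = y = -1 / math.sqrt(2)   # candidate minimizer on the constraint circle
lam = 1 / (2 * x)           # multiplier solving 1 = 2*lam*x
gx, gy, gc = lagrangian_grad(x, y, lam)   # all three components vanish here
```

At this point the full gradient of the Lagrangian vanishes, confirming a constrained stationary point with objective value x + y = -sqrt(2).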
-
12. Aplicações de computação paralela em otimização contínua / Applications of parallel computing in continuous optimization
In this work, we study some concepts related to the development of parallel programs, some ways of applying parallel computing to continuous-optimization methods, and two methods that involve the use of optimization. The first method we present, called PUMA (Pointwise Unconstrained Minimization Approach), recovers constant
Published: 2008