The Exact Penalty Function Approach to the Constrained Optimization Problem

1. Introduction

In this paper, we consider the constrained optimization problem

(P)   min  f(x)
      s.t. g_i(x) ≤ 0,  i = 1, 2, . . . , m,
           x ∈ X,

where X ⊂ R^n and f, g_i : X → R, i ∈ I = {1, 2, . . . , m}, are continuously differentiable functions. Let X_0 = {x ∈ X | g_i(x) ≤ 0, i = 1, 2, . . . , m} denote the feasible set, which we assume to be nonempty.

The penalty function method provides an important approach to solving (P), and it has attracted many researchers in both theoretical and practical work (see e.g. [1,8,9,11,12,18,25]). In 1967, Zangwill [25] first proposed the classic l1 exact penalty function

F(x, σ) = f(x) + σ ∑_{i=1}^{m} max{0, g_i(x)},   (1)

where σ > 0 is a penalty parameter. It is known from the theory of ordinary constrained optimization that the l1 exact penalty function is a good candidate for penalization. The obvious difficulty with exact penalty functions is that they are not smooth, which prevents the use of efficient minimization algorithms and causes numerical instability in their implementation when the value of the penalty parameter becomes large. Hence, in order to use efficient algorithms such as Newton's method, it is necessary and important to smooth the exact penalty function when solving constrained optimization problems. In fact, almost all penalty function algorithms need to change the value of the penalty parameter during the computation, and exact penalty function methods are no exception.
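The exactness property behind (1) can be illustrated on a toy one-dimensional instance. The problem below (minimize f(x) = (x − 2)² subject to x − 1 ≤ 0) and the function names are illustrative assumptions, not taken from the paper; the sketch shows that once σ is large enough, the unconstrained minimizer of F(·, σ) coincides with the constrained solution x* = 1.

```python
def f(x):
    """Toy objective: f(x) = (x - 2)^2, unconstrained minimum at x = 2."""
    return (x - 2.0) ** 2

def g(x):
    """Single constraint g(x) = x - 1 <= 0, i.e. the feasible set is x <= 1."""
    return x - 1.0

def penalty(x, sigma):
    """l1 exact penalty F(x, sigma) = f(x) + sigma * max(0, g(x)), as in (1)."""
    return f(x) + sigma * max(0.0, g(x))

# Minimize F(., sigma) over a fine grid on [0, 3] for increasing sigma.
# For sigma below the exactness threshold the minimizer is infeasible;
# for sigma large enough it lands exactly on the constrained solution x* = 1.
grid = [i / 1000.0 for i in range(0, 3001)]
results = {}
for sigma in (0.5, 1.0, 4.0):
    results[sigma] = min(grid, key=lambda x: penalty(x, sigma))
    print(sigma, results[sigma])
```

For x > 1 the stationary point of F is x = 2 − σ/2, so the minimizer is 1.75 at σ = 0.5 and 1.5 at σ = 1.0 (both infeasible), while at σ = 4.0 the minimizer is the constrained solution x = 1.0. Note also that F is not differentiable at x = 1, the kink that motivates the smoothing discussed above.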