licenses (sequence, length 1–3) | version (string, 677 classes) | tree_hash (string, length 40) | path (string, 1 class) | type (string, 2 classes) | size (string, length 2–8) | text (string, length 25–67.1M) | package_name (string, length 2–41) | repo (string, length 33–86) |
---|---|---|---|---|---|---|---|---|
[
"MIT"
] | 0.4.11 | b62f2b2d76cee0d61a2ef2b3118cd2a3215d3134 | docs | 1633 |
[Build Status](https://travis-ci.com/JuliaGeometry/GeometryBasics.jl)
[Coverage](https://codecov.io/gh/JuliaGeometry/GeometryBasics.jl)
[Documentation (stable)](http://juliageometry.github.io/GeometryBasics.jl/stable/)
[Documentation (dev)](http://juliageometry.github.io/GeometryBasics.jl/dev/)
# GeometryBasics.jl
Basic geometry types.
This package aims to offer a standard set of geometry types that easily work
with metadata, with query frameworks on geometries, and with different memory
layouts. The aim is to create a solid basis for graphics/plotting, finite
element analysis, geo applications, and general geometry manipulation - while
offering a Julian API that still allows performant C-interop.
This package is a replacement for the discontinued [GeometryTypes](https://github.com/JuliaGeometry/GeometryTypes.jl/).
**Documentation:** http://juliageometry.github.io/GeometryBasics.jl/stable/
## Contributing
Make sure your changes don't break the documentation.
To build the documentation locally, you first need to instantiate the `docs/` project (press `]` at the `julia>` prompt to enter the Pkg REPL mode):
```
julia --project=docs/
pkg> instantiate
pkg> dev .
```
Then use `julia --project=docs/ docs/make.jl` to build the documentation. This
will also run the doctests defined in Markdown files. The doctests should be
written for the Julia version configured in [ci.yml](.github/workflows/ci.yml)
(`:docs` section).
| GeometryBasics | https://github.com/JuliaGeometry/GeometryBasics.jl.git |
|
[
"MIT"
] | 0.4.11 | b62f2b2d76cee0d61a2ef2b3118cd2a3215d3134 | docs | 306 | # API Reference
## Exports
```@autodocs
Modules = [GeometryBasics]
Order = [:module, :constant, :type, :function, :macro]
Public = true
Private = false
```
## Private
```@autodocs
Modules = [GeometryBasics]
Order = [:module, :constant, :type, :function, :macro]
Public = false
Private = true
```
| GeometryBasics | https://github.com/JuliaGeometry/GeometryBasics.jl.git |
|
[
"MIT"
] | 0.4.11 | b62f2b2d76cee0d61a2ef2b3118cd2a3215d3134 | docs | 3248 | # Decomposition
## GeometryBasics Mesh interface
GeometryBasics defines an interface to decompose abstract geometries into
points and triangle meshes.
This can be done for any arbitrary primitive, by overloading the following interface:
```julia
function GeometryBasics.coordinates(rect::Rect2, nvertices=(2,2))
mini, maxi = extrema(rect)
xrange, yrange = LinRange.(mini, maxi, nvertices)
return ivec(((x,y) for x in xrange, y in yrange))
end
function GeometryBasics.faces(rect::Rect2, nvertices=(2, 2))
w, h = nvertices
idx = LinearIndices(nvertices)
quad(i, j) = QuadFace{Int}(idx[i, j], idx[i+1, j], idx[i+1, j+1], idx[i, j+1])
return ivec((quad(i, j) for i=1:(w-1), j=1:(h-1)))
end
```
For performance reasons, these methods are expected to return iterators, so that
materializing them with different element types can be allocation free. Of course,
they may also return any `AbstractArray`.
With these methods defined, this constructor will magically work:
```julia
rect = Rect2(0.0, 0.0, 1.0, 1.0)
m = GeometryBasics.mesh(rect)
```
If you want to set the `nvertices` argument, you need to wrap your primitive in a `Tesselation`
object:
```julia
m = GeometryBasics.mesh(Tesselation(rect, (50, 50)))
length(coordinates(m)) == 50^2
```
As you can see, `coordinates` and `faces` are also defined on a mesh
```julia
coordinates(m)
faces(m)
```
On a mesh, however, they no longer return iterators. Instead, the mesh constructor uses
the `decompose` function, which collects the result of `coordinates` and
converts it to a concrete element type:
```julia
decompose(Point2f, rect) == convert(Vector{Point2f}, collect(coordinates(rect)))
```
The element conversion is handled by `simplex_convert`, which also handles conversion
between different face types:
```julia
decompose(QuadFace{Int}, rect) == convert(Vector{QuadFace{Int}}, collect(faces(rect)))
length(decompose(QuadFace{Int}, rect)) == 1
fs = decompose(GLTriangleFace, rect)
fs isa Vector{GLTriangleFace}
length(fs) == 2 # 2 triangles make up one quad ;)
```
`mesh` uses the most natural element type by default, which you can get with the unqualified Point type:
```julia
decompose(Point, rect) isa Vector{Point{2, Float64}}
```
You can also pass the element type to `mesh`:
```julia
m = GeometryBasics.mesh(rect, pointtype=Point2f, facetype=QuadFace{Int})
```
You can also set the uv and normal type for the mesh constructor, which will then
calculate them for you, with the requested element type:
```julia
m = GeometryBasics.mesh(rect, uv=Vec2f, normaltype=Vec3f)
```
As you can see, the normals are automatically calculated,
and the same is true for texture coordinates. You can customize this behavior by overloading
`normals` or `texturecoordinates`, in the same way as `coordinates`.
`decompose` works a bit differently for normals/texturecoordinates, since they don't have their own element type.
Instead, you can use `decompose` like this:
```julia
decompose(UV(Vec2f), rect)
decompose(Normal(Vec3f), rect)
# the short form for the above:
decompose_uv(rect)
decompose_normals(rect)
```
You can also use `triangle_mesh`, `normal_mesh` and `uv_normal_mesh` to call the
`mesh` constructor with predefined element types (Point2/3f, Vec2/3f), and the requested attributes.
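For illustration, here is a small sketch of these convenience constructors (hedged: it assumes the 0.4 API described above, with the predefined Float32 element types):
```julia
rect = Rect2(0.0, 0.0, 1.0, 1.0)
tm = triangle_mesh(rect)    # triangle faces with Float32 points
nm = normal_mesh(rect)      # additionally computes vertex normals
unm = uv_normal_mesh(rect)  # additionally computes uvs and normals
```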
| GeometryBasics | https://github.com/JuliaGeometry/GeometryBasics.jl.git |
|
[
"MIT"
] | 0.4.11 | b62f2b2d76cee0d61a2ef2b3118cd2a3215d3134 | docs | 127 | # Implementation
In the backend, GeometryBasics relies on fixed-size arrays, specifically the static vectors provided by [StaticArrays.jl](https://github.com/JuliaArrays/StaticArrays.jl).
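As a small illustrative sketch (assuming the StaticArrays-based layout described above):
```julia
using GeometryBasics, StaticArrays
p = Point2f(1, 2)
p isa StaticVector   # Point is a fixed-size, stack-allocated vector
p + Vec2f(0.5, 0.5)  # mixed Point/Vec arithmetic stays fixed-size
```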
TODO add more here.
| GeometryBasics | https://github.com/JuliaGeometry/GeometryBasics.jl.git |
|
[
"MIT"
] | 0.4.11 | b62f2b2d76cee0d61a2ef2b3118cd2a3215d3134 | docs | 3712 | # GeometryBasics.jl
Basic geometry types.
This package aims to offer a standard set of geometry types that easily work
with metadata, with query frameworks on geometries, and with different memory
layouts. The aim is to create a solid basis for graphics/plotting, finite
element analysis, geo applications, and general geometry manipulation - while
offering a Julian API that still allows performant C-interop.
This package is a replacement for the discontinued [GeometryTypes](https://github.com/JuliaGeometry/GeometryTypes.jl/).
## Quick start
Create some points:
```@repl quickstart
using GeometryBasics
p1 = Point(3, 1)
p2 = Point(1, 3);
p3 = Point(4, 4);
```
Geometries can carry metadata:
```@repl quickstart
poi = meta(p1, city="Abuja", rainfall=1221.2)
```
Metadata is stored in a NamedTuple and can be retrieved as such:
```@repl quickstart
meta(poi)
```
Specific metadata attributes can be directly retrieved:
```@repl quickstart
poi.rainfall
```
To remove the metadata and keep only the geometry, use `metafree`:
```@repl quickstart
metafree(poi)
```
Geometries have predefined metatypes:
```@repl quickstart
multipoi = MultiPointMeta([p1], city="Abuja", rainfall=1221.2)
```
Connect the points with lines:
```@repl quickstart
l1 = Line(p1, p2)
l2 = Line(p2, p3);
```
Connect the lines in a linestring:
```@repl quickstart
LineString([l1, l2])
```
Linestrings can also be constructed directly from points:
```@repl quickstart
LineString([p1, p2, p3])
```
The same goes for polygons:
```@repl quickstart
Polygon(Point{2, Int}[(3, 1), (4, 4), (2, 4), (1, 2), (3, 1)])
```
Create a rectangle placed at the origin with unit width and height:
```@repl quickstart
rect = Rect(Vec(0.0, 0.0), Vec(1.0, 1.0))
```
Decompose the rectangle into two triangular faces:
```@repl quickstart
rect_faces = decompose(TriangleFace{Int}, rect)
```
Decompose the rectangle into four vertices:
```@repl quickstart
rect_vertices = decompose(Point{2, Float64}, rect)
```
Combine the vertices and faces into a triangle mesh:
```@repl quickstart
mesh = Mesh(rect_vertices, rect_faces)
```
Use `GeometryBasics.mesh` to get a mesh directly from a geometry:
```@repl quickstart
mesh = GeometryBasics.mesh(rect)
```
## Aliases
GeometryBasics exports common aliases for Point, Vec, Mat and Rect:
### Vec
| |`T`(eltype) |`Float64` |`Float32` |`Int` |`UInt` |
|--------|------------|----------|----------|----------|----------|
|`N`(dim)|`Vec{N,T}` |`Vecd{N}` |`Vecf{N}` |`Veci{N}` |`Vecui{N}`|
|`2` |`Vec2{T}` |`Vec2d` |`Vec2f` |`Vec2i` |`Vec2ui` |
|`3` |`Vec3{T}` |`Vec3d` |`Vec3f` |`Vec3i` |`Vec3ui` |
### Point
| |`T`(eltype) |`Float64` |`Float32` |`Int` |`UInt` |
|--------|------------|----------|----------|----------|----------|
|`N`(dim)|`Point{N,T}`|`Pointd{N}`|`Pointf{N}`|`Pointi{N}`|`Pointui{N}`|
|`2` |`Point2{T}` |`Point2d` |`Point2f` |`Point2i` |`Point2ui`|
|`3` |`Point3{T}` |`Point3d` |`Point3f` |`Point3i` |`Point3ui`|
### Mat
| |`T`(eltype) |`Float64` |`Float32` |`Int` |`UInt` |
|--------|------------|----------|----------|----------|----------|
|`N`(dim)|`Mat{N,T}` |`Matd{N}` |`Matf{N}` |`Mati{N}` |`Matui{N}`|
|`2` |`Mat2{T}` |`Mat2d` |`Mat2f` |`Mat2i` |`Mat2ui` |
|`3` |`Mat3{T}` |`Mat3d` |`Mat3f` |`Mat3i` |`Mat3ui` |
### Rect
| |`T`(eltype) |`Float64` |`Float32` |`Int` |`UInt` |
|--------|------------|----------|----------|----------|----------|
|`N`(dim)|`Rect{N,T}` |`Rectd{N}`|`Rectf{N}`|`Recti{N}`|`Rectui{N}`|
|`2` |`Rect2{T}` |`Rect2d` |`Rect2f` |`Rect2i` |`Rect2ui` |
|`3` |`Rect3{T}` |`Rect3d` |`Rect3f` |`Rect3i` |`Rect3ui` |
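For example, the aliases are plain abbreviations of the parametric types (a small sketch of the tables above):
```julia
using GeometryBasics
Point2f === Point{2, Float32}          # true
Vec3d === Vec{3, Float64}              # true
Rect2i(0, 0, 10, 10) isa Rect{2, Int}  # true
```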
| GeometryBasics | https://github.com/JuliaGeometry/GeometryBasics.jl.git |
|
[
"MIT"
] | 0.4.11 | b62f2b2d76cee0d61a2ef2b3118cd2a3215d3134 | docs | 464 | # Meshes
## Types
* [`AbstractMesh`](@ref)
* [`Mesh`](@ref)
## How to create a mesh
### Meshing.jl
### MeshIO.jl
The [`MeshIO.jl`](https://github.com/JuliaIO/MeshIO.jl) package provides load/save support for several file formats which store meshes.
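A minimal loading sketch (assuming `FileIO.jl` and `MeshIO.jl` are installed; the file name is a placeholder):
```julia
using FileIO, MeshIO
m = load("bunny.obj")  # FileIO dispatches to MeshIO and returns a mesh
```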
## How to access data
The following functions can be called on an [`AbstractMesh`](@ref) to access its underlying data.
* [`faces`](@ref)
* [`coordinates`](@ref)
* `texturecoordinates`
* [`normals`](@ref)
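For example (a small sketch using the unit rectangle from the quick start):
```julia
using GeometryBasics
m = GeometryBasics.mesh(Rect2(0.0, 0.0, 1.0, 1.0))
coordinates(m)  # vertex positions
faces(m)        # face index lists
```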
| GeometryBasics | https://github.com/JuliaGeometry/GeometryBasics.jl.git |
|
[
"MIT"
] | 0.4.11 | b62f2b2d76cee0d61a2ef2b3118cd2a3215d3134 | docs | 3803 | # Metadata
## Meta
The `Meta` method provides metadata handling capabilities in GeometryBasics.
To remove the metadata and keep only the geometry, use `metafree`; conversely,
to extract just the metadata from a meta-geometry, use `meta`.
### Syntax
```julia
meta(geometry, meta::NamedTuple)
meta(geometry; meta...)
metafree(meta-geometry)
meta(meta-geometry)
```
### Examples
```@repl meta
using GeometryBasics
p1 = Point(2.2, 3.6)
poi = meta(p1, city="Abuja", rainfall=1221.2)
```
Metadata is stored in a NamedTuple and can be retrieved as such:
```@repl meta
meta(poi)
```
Specific metadata attributes can be directly retrieved:
```@repl meta
poi.rainfall
metafree(poi)
```
Metatypes are predefined for geometries:
```@repl meta
multipoi = MultiPointMeta([p1], city="Abuja", rainfall=1221.2)
```
(In the above example we have also used a geometry-specific meta method.)
```@repl meta
GeometryBasics.MetaType(Polygon)
GeometryBasics.MetaType(Mesh)
```
The meta-geometry objects are in fact composed of the original geometry types.
```@repl meta
GeometryBasics.MetaFree(PolygonMeta)
GeometryBasics.MetaFree(MeshMeta)
```
## MetaT
In GeometryBasics we can lay out a collection of meta-geometries in tabular form
by putting them into a StructArray that extends the [Tables.jl](https://github.com/JuliaData/Tables.jl) API.
In practice the geometry and metadata types need not be consistent.
For example, the GeoJSON format can contain heterogeneous geometries, and such cases require
automatic widening of the geometry data types to the most appropriate type.
The `MetaT` method exploits the fact that a collection of geometries and metadata
of different types can be represented tabularly whilst widening to the appropriate type.
### Syntax
```julia
MetaT(geometry, meta::NamedTuple)
MetaT(geometry; meta...)
```
Returns a `MetaT` that holds a geometry and its metadata. `MetaT` acts the same as the `meta` method.
The difference lies in the fact that it is designed to handle geometries and metadata of different/heterogeneous types.
For example, while a Point MetaGeometry is a `PointMeta`, the MetaT representation is `MetaT{Point}`.
### Examples
```@repl meta
MetaT(Point(1, 2), city = "Mumbai")
```
For a tabular representation, an iterable of `MetaT` types can be passed on to a `meta_table` method.
### Syntax
```julia
meta_table(iter)
```
### Examples
Create an array of 2 linestrings:
```@repl meta
ls = [LineString([Point(i, i+1), Point(i-1,i+5)]) for i in 1:2];
coordinates.(ls)
```
Create a multi-linestring:
```@repl meta
mls = MultiLineString(ls);
coordinates.(mls)
```
Create a polygon:
```@repl meta
poly = Polygon(Point{2, Int}[(40, 40), (20, 45), (45, 30), (40, 40)]);
coordinates(poly)
```
Put all geometries into an array:
```@repl meta
geom = [ls..., mls, poly];
```
Generate some random metadata:
```@repl meta
prop = [(country_states = "India$(i)", rainfall = (i*9)/2) for i in 1:4]
feat = [MetaT(i, j) for (i,j) = zip(geom, prop)]; # create an array of MetaT
```
We can now generate a `StructArray` / `Table` with `meta_table`:
```@repl meta
sa = meta_table(feat);
```
The data can be accessed through `sa.main` and the metadata through
`sa.country_states` and `sa.rainfall`. Here we print only the type names of the
data items for brevity:
```@repl meta
[nameof.(typeof.(sa.main)) sa.country_states sa.rainfall]
```
### Disadvantages
* `MetaT` is generic in terms of geometry types, but it is not a subtype of the
geometries it wraps. E.g. a `MetaT{Point, NamedTuple{Names, Types}}` is not a
subtype of `AbstractPoint` the way a `PointMeta` is, as sketched below.
* This might cause problems when using `MetaT` with constructors/methods,
inside or even outside GeometryBasics, that were designed to work with the main `Meta` types.
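A small sketch of the subtyping difference (the metadata values are placeholders):
```julia
pm = meta(Point(1, 2), city = "Abuja")
pm isa AbstractPoint   # true: a PointMeta is itself a point
mt = MetaT(Point(1, 2), city = "Abuja")
mt isa AbstractPoint   # false: MetaT is a plain wrapper
```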
| GeometryBasics | https://github.com/JuliaGeometry/GeometryBasics.jl.git |
|
[
"MIT"
] | 0.4.11 | b62f2b2d76cee0d61a2ef2b3118cd2a3215d3134 | docs | 208 | # Primitives
## Points and Vectors
## Simplices
## Shapes
* [`Circle`](@ref)
* [`Sphere`](@ref)
* [`Cylinder`](@ref)
## Abstract types
* `GeometryPrimitive`
* `AbstractSimplex`
* [`AbstractMesh`](@ref)
| GeometryBasics | https://github.com/JuliaGeometry/GeometryBasics.jl.git |
|
[
"MIT"
] | 0.1.0 | 7432af52e9a7cbdf61ce1847240d6f72b6f15763 | code | 204 | using Documenter
using DiscoDiff
makedocs(
sitename = "DiscoDiff.jl Documentation",
modules = [DiscoDiff],
format = Documenter.HTML(prettyurls = false),
pages = ["Home" => "index.md"],
)
| DiscoDiff | https://github.com/Devetak/DiscoDiff.jl.git |
|
[
"MIT"
] | 0.1.0 | 7432af52e9a7cbdf61ce1847240d6f72b6f15763 | code | 86 | module DiscoDiff
include("./ignore_gradient.jl")
include("./diff_examples.jl")
end
| DiscoDiff | https://github.com/Devetak/DiscoDiff.jl.git |
|
[
"MIT"
] | 0.1.0 | 7432af52e9a7cbdf61ce1847240d6f72b6f15763 | code | 745 | export heaviside, sign_diff
using NNlib: sigmoid
"""
heaviside(Number, [steepness]) -> Number
Implements the Heaviside function in the forward pass.
It is 1 if x > 0 and 0 otherwise.
The derivative is taken to be that of the sigmoid. An extra parameter k controls
the steepness of the sigmoid; the default is 1.
"""
heaviside = construct_diff_version(
x -> x > zero(typeof(x)) ? one(typeof(x)) : zero(typeof(x)),
sigmoid,
)
"""
sign_diff(Number, [steepness]) -> Number
Implements the sign function in the forward pass.
It is 1 if x > 0, -1 if x < 0 and 0 if x = 0.
The derivative is taken to be that of tanh. An extra parameter k controls
the steepness of the tanh; the default is 1.
"""
sign_diff = construct_diff_version(sign, tanh)
| DiscoDiff | https://github.com/Devetak/DiscoDiff.jl.git |
|
[
"MIT"
] | 0.1.0 | 7432af52e9a7cbdf61ce1847240d6f72b6f15763 | code | 1411 | export ignore_gradient, construct_diff_version
using ForwardDiff, ChainRulesCore
"""
ignore_gradient(x) -> x
Drops the gradient in any computation. In reverse mode it uses the ChainRulesCore
API. In forward mode it simply returns a new number with a zero dual part.
"""
ignore_gradient(x) = ChainRulesCore.@ignore_derivatives x
ignore_gradient(x::ForwardDiff.Dual) = typeof(x)(x.value)
function ignore_gradient(arr::AbstractArray{<:ForwardDiff.Dual})
return typeof(arr)(ignore_gradient.(arr))
end
"""
construct_diff_version(f, g) -> pass_trough_function
Constructs a pass-through function for the given function `f` and gradient
function `g`. The pass-through function returns the same value as `f`, but its
gradient is taken from `g`. The optional parameter k controls the steepness of
the gradient; the default is 1. Supports both scalars and arrays.
"""
function construct_diff_version(f, g)
@inline function pass_trough_function(x::T; k = nothing) where {T}
if isnothing(k)
if T <: Number
k = one(T)
elseif T <: AbstractArray
k = one(eltype(T))
else
error("Type not supported only supports Number and AbstractArray.")
end
end
# zero-valued surrogate that carries the gradient of g(k*x)
surrogate = g(x .* k) .- ignore_gradient(g(x .* k))
return ignore_gradient(f(x)) .+ surrogate
end
return pass_trough_function
end
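# Hedged usage sketch (not part of the original file): a straight-through
# version of `round`, taking its gradient from the identity:
#
#     round_st = construct_diff_version(round, identity)
#     round_st(0.7)                         # == 1.0
#     ForwardDiff.derivative(round_st, 0.7) # == 1.0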
| DiscoDiff | https://github.com/Devetak/DiscoDiff.jl.git |
|
[
"MIT"
] | 0.1.0 | 7432af52e9a7cbdf61ce1847240d6f72b6f15763 | code | 731 | using DiscoDiff, Test, ForwardDiff, NNlib
@testset "heaviside function" begin
@test heaviside(2.0) == 1.0
@test heaviside(-2.0) == 0.0
@test heaviside(-2.0, k = 2.0) == 0.0
end
@testset "heaviside function gradient" begin
x = 2.0
value = NNlib.sigmoid(x)
f(x) = heaviside(x)
@test isapprox(ForwardDiff.derivative(f, x), value * (1.0 - value), atol = 1e-8)
end
using DiscoDiff, Test, ForwardDiff
@testset "sign_diff" begin
@test sign_diff(2.0) == 1.0
@test sign_diff(-2.0) == -1.0
@test sign_diff(0.0) == 0.0
end
@testset "sign_diff gradient" begin
x = 2.0
f(x) = sign_diff(x)
value = 1.0 - tanh(x)^2
@test isapprox(ForwardDiff.derivative(f, x), value, atol = 1e-8)
end
| DiscoDiff | https://github.com/Devetak/DiscoDiff.jl.git |
|
[
"MIT"
] | 0.1.0 | 7432af52e9a7cbdf61ce1847240d6f72b6f15763 | code | 317 | using DiscoDiff
using Test
using ForwardDiff, Zygote
@testset "ignore gradient" begin
function f(x)
return 3x + x^2 - ignore_gradient(x^2)
end
x = 2.0
@test isapprox(ForwardDiff.derivative(f, x), 3 + 2x, atol = 1e-8)
@test isapprox(Zygote.gradient(f, x)[1], 3 + 2x, atol = 1e-8)
end
| DiscoDiff | https://github.com/Devetak/DiscoDiff.jl.git |
|
[
"MIT"
] | 0.1.0 | 7432af52e9a7cbdf61ce1847240d6f72b6f15763 | code | 85 | using DiscoDiff, Test
include("./ignore_gradient.jl")
include("./diff_examples.jl")
| DiscoDiff | https://github.com/Devetak/DiscoDiff.jl.git |
|
[
"MIT"
] | 0.1.0 | 7432af52e9a7cbdf61ce1847240d6f72b6f15763 | docs | 3078 | # DiscoDiff.jl
A small package for differentiable discontinuities in Julia. Implements a simple API to generate differentiable discontinuous functions using the pass-through trick. Works both in forward and reverse mode with scalars and arrays.
## Main API
To generate a differentiable function version of a discontinuous function `f` such that the gradient of `f` is the gradient of `g`, simply use:
````julia
new_f = construct_diff_version(f,g)
````
This is used when
$$
\frac{df}{dx}
$$
is either not defined or does not have the desired properties, for example when $f$ is the sign function. Sometimes we still want to be able to propagate gradients through it. In this case we impose
$$
\frac{df}{dx} = \frac{dg}{dx}
$$
Use it as:
```julia
new_f(x)
# control gradient steepness
new_f(2.0, k = 100.0)
```
In the second case, by the chain rule, we have
$$
\frac{df}{dx}(2.0) = 100.0 \cdot \frac{dg}{dx}(100.0 \cdot 2.0)
$$
Note: to avoid type instabilities, ensure $x$ and $k$ are of the same type. The package works with both forward- and reverse-mode automatic differentiation.
````julia
using Zygote, ForwardDiff
using DiscoDiff
using LinearAlgebra
f(x) = 1.0
g(x) = x
new_f = construct_diff_version(f,g)
new_f(1.0) == 1.0
Zygote.gradient(new_f, 1.0)[1] == 1.0
ForwardDiff.derivative(new_f, 1.0) == 1.0
````
It also supports non-scalar functions:
````julia
using Zygote, ForwardDiff
using DiscoDiff
using LinearAlgebra
f = construct_diff_version(x -> x, x -> x.^2)
x = rand(10)
f(x) == x
Zygote.jacobian(f, x)[1] == diagm(2 * x)
ForwardDiff.jacobian(f, x) == diagm(2 * x)
````
# Other
We also export two ready-made functions.
## Overview
The Heaviside function, also known as the step function, is a discontinuous function named after the British physicist Oliver Heaviside. It is often used in control theory, signal processing, and probability theory.
The Heaviside function is defined as:
$$
H(x) = \begin{cases}
0 & \text{for } x \leq 0, \\
1 & \text{for } x > 0.
\end{cases}
$$
We also implement a differentiable version of the sign function defined as:
$$
sign(x) = \begin{cases}
1 & \text{for } x > 0, \\
0 & \text{for } x = 0, \\
-1 & \text{for } x < 0.
\end{cases}
$$
## Differentiable Discontinuous functions
We implement a differentiable version of the Heaviside function, where the derivative is the derivative of the sigmoid. This function has a "steepness" parameter that controls the transition smoothness. The function is `heaviside(x; k)`.
We implement a differentiable version of the sign function, where the derivative is the derivative of tanh. This function has a "steepness" parameter that controls the transition smoothness. The function is `sign_diff(x; k)` to avoid overriding the Base `sign` function.
- `x`: The input to the function.
- `k`: The steepness parameter. Higher values of `k` make the sigmoid steeper and, hence, closer to the discontinuous function. The default is 1 in all cases.
#### Usage
For the Heaviside function:
```julia
heaviside(1.0)
heaviside(1.0, k = 2.0)
```
For the sign function
```julia
sign_diff(2.0)
sign_diff(2.0, k = 2.0)
```
| DiscoDiff | https://github.com/Devetak/DiscoDiff.jl.git |
|
[
"MIT"
] | 0.1.0 | 7432af52e9a7cbdf61ce1847240d6f72b6f15763 | docs | 128 | # Main Function Documentation
```@docs
ignore_gradient
construct_diff_version
```
# Examples
```@docs
heaviside
sign_diff
```
| DiscoDiff | https://github.com/Devetak/DiscoDiff.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 789 | # generate examples
import Literate
# TODO: Remove items from `ONLYSTATIC` as soon as they run on the latest
# stable `Optim` (or other dependency)
#ONLYSTATIC = ["ipnewton_basics.jl",]
ONLYSTATIC = []
EXAMPLEDIR = joinpath(@__DIR__, "src", "examples")
GENERATEDDIR = joinpath(@__DIR__, "src", "examples", "generated")
for example in filter!(x -> endswith(x, ".jl"), readdir(EXAMPLEDIR))
input = abspath(joinpath(EXAMPLEDIR, example))
script = Literate.script(input, GENERATEDDIR)
code = strip(read(script, String))
mdpost(str) = replace(str, "@__CODE__" => code)
Literate.markdown(input, GENERATEDDIR, postprocess = mdpost,
documenter = !(example in ONLYSTATIC))
Literate.notebook(input, GENERATEDDIR, execute = !(example in ONLYSTATIC))
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2308 | if Base.HOME_PROJECT[] !== nothing
# JuliaLang/julia/pull/28625
Base.HOME_PROJECT[] = abspath(Base.HOME_PROJECT[])
end
using Documenter, Optim
# use include("Rosenbrock.jl") etc
# Generate examples
include("generate.jl")
cp(joinpath(@__DIR__, "..", "LICENSE.md"),
joinpath(@__DIR__, "src", "LICENSE.md"); force = true)
#run('mv ../CONTRIBUTING.md ./dev/CONTRIBUTING.md') # TODO: Should we use the $odir/CONTRIBUTING.md file instead?
makedocs(
doctest = false,
sitename = "Optim",
pages = [
"Home" => "index.md",
"Tutorials" => [
"Minimizing a function" => "user/minimization.md",
"Gradients and Hessians" => "user/gradientsandhessians.md",
"Configurable Options" => "user/config.md",
"Linesearch" => "algo/linesearch.md",
"Algorithm choice" => "user/algochoice.md",
"Preconditioners" => "algo/precondition.md",
"Complex optimization" => "algo/complex.md",
"Manifolds" => "algo/manifolds.md",
"Tips and tricks" => "user/tipsandtricks.md",
"Interior point Newton" => "examples/generated/ipnewton_basics.md",
"Maximum likelihood estimation" => "examples/generated/maxlikenlm.md",
"Conditional maximum likelihood estimation" => "examples/generated/rasch.md",
],
"Algorithms" => [
"Gradient Free" => [
"Nelder Mead" => "algo/nelder_mead.md",
"Simulated Annealing" => "algo/simulated_annealing.md",
"Simulated Annealing w/ bounds" => "algo/samin.md",
"Particle Swarm" => "algo/particle_swarm.md",
],
"Gradient Required" => [
"Adam and AdaMax" => "algo/adam_adamax.md",
"Conjugate Gradient" => "algo/cg.md",
"Gradient Descent" => "algo/gradientdescent.md",
"(L-)BFGS" => "algo/lbfgs.md",
"Acceleration" => "algo/ngmres.md",
],
"Hessian Required" => [
"Newton" => "algo/newton.md",
"Newton with Trust Region" => "algo/newton_trust_region.md",
"Interior point Newton" => "algo/ipnewton.md",
]
],
"Contributing" => "dev/contributing.md",
"License" => "LICENSE.md",
]
)
deploydocs(
repo = "github.com/JuliaNLSolvers/Optim.jl.git",
)
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 7482 | # # Nonlinear constrained optimization
#
#-
#md # !!! tip
#md # This example is also available as a Jupyter notebook:
#md # [`ipnewton_basics.ipynb`](@__NBVIEWER_ROOT_URL__examples/generated/ipnewton_basics.ipynb)
#-
#
# The nonlinear constrained optimization interface in
# `Optim` assumes that the user can write the optimization
# problem in the following way.
# ```math
# \min_{x\in\mathbb{R}^n} f(x) \quad \text{such that}\\
# l_x \leq \phantom{c(}x\phantom{)} \leq u_x \\
# l_c \leq c(x) \leq u_c.
# ```
# For equality constraints on ``x_j`` or ``c(x)_j`` you set those
# particular entries of bounds to be equal, ``l_j=u_j``.
# Likewise, setting ``l_j=-\infty`` or ``u_j=\infty`` means that the
# constraint is unbounded from below or above respectively.
using Optim, NLSolversBase #hide
import NLSolversBase: clear! #hide
# # Constrained optimization with `IPNewton`
# We will go through examples on how to use the constraints interface
# with the interior-point Newton optimization algorithm [IPNewton](../../algo/ipnewton.md).
# Throughout these examples we work with the standard Rosenbrock function.
# The objective and its derivatives are given by
fun(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
function fun_grad!(g, x)
g[1] = -2.0 * (1.0 - x[1]) - 400.0 * (x[2] - x[1]^2) * x[1]
g[2] = 200.0 * (x[2] - x[1]^2)
end
function fun_hess!(h, x)
h[1, 1] = 2.0 - 400.0 * x[2] + 1200.0 * x[1]^2
h[1, 2] = -400.0 * x[1]
h[2, 1] = -400.0 * x[1]
h[2, 2] = 200.0
end;
# ## Optimization interface
# To solve a constrained optimization problem we call the `optimize`
# method
# ``` julia
# optimize(d::AbstractObjective, constraints::AbstractConstraints, initial_x::Tx, method::ConstrainedOptimizer, options::Options)
# ```
# We can create instances of `AbstractObjective` and
# `AbstractConstraints` using the types `TwiceDifferentiable` and
# `TwiceDifferentiableConstraints` from the package `NLSolversBase.jl`.
# ## Box minimization
# We want to optimize the Rosenbrock function in the box
# ``-0.5 \leq x \leq 0.5``, starting from the point ``x_0=(0,0)``.
# Box constraints are defined using, for example,
# `TwiceDifferentiableConstraints(lx, ux)`.
x0 = [0.0, 0.0]
df = TwiceDifferentiable(fun, fun_grad!, fun_hess!, x0)
lx = [-0.5, -0.5]; ux = [0.5, 0.5]
dfc = TwiceDifferentiableConstraints(lx, ux)
res = optimize(df, dfc, x0, IPNewton())
## Test the results #src
using Test #src
@test Optim.converged(res) #src
@test Optim.minimum(res) ≈ 0.25 #src
# Like the rest of Optim, you can also use `autodiff=:forward` and just pass in
# `fun`.
# If we only want to set lower bounds, use `ux = fill(Inf, 2)`
ux = fill(Inf, 2)
dfc = TwiceDifferentiableConstraints(lx, ux)
clear!(df)
res = optimize(df, dfc, x0, IPNewton())
@test Optim.converged(res) #src
@test Optim.minimum(res) < 0.0 + sqrt(eps()) #src
# ## Defining "unconstrained" problems
# An unconstrained problem can be defined either by passing
# `Inf` bounds or empty arrays.
# **Note that we must pass the correct type information to the empty `lx` and `ux`**
lx = fill(-Inf, 2); ux = fill(Inf, 2)
dfc = TwiceDifferentiableConstraints(lx, ux)
clear!(df)
res = optimize(df, dfc, x0, IPNewton())
@test Optim.converged(res) #src
@test Optim.minimum(res) < 0.0 + sqrt(eps()) #src
lx = Float64[]; ux = Float64[]
dfc = TwiceDifferentiableConstraints(lx, ux)
clear!(df)
res = optimize(df, dfc, x0, IPNewton())
@test Optim.converged(res) #src
@test Optim.minimum(res) < 0.0 + sqrt(eps()) #src
# ## Generic nonlinear constraints
# We now consider the Rosenbrock problem with a constraint on
# ```math
# c(x)_1 = x_1^2 + x_2^2.
# ```
# We pass the information about the constraints to `optimize`
# by defining a vector function `c(x)` and its Jacobian `J(x)`.
# The Hessian information is treated differently, by considering the
# Lagrangian of the corresponding slack-variable transformed
# optimization problem. This is similar to how the [CUTEst
# library](https://github.com/JuliaSmoothOptimizers/CUTEst.jl) works.
# Let ``H_j(x)`` represent the Hessian of the ``j``th component
# ``c(x)_j`` of the generic constraints, and ``\lambda_j`` the corresponding
# dual variable in the
# Lagrangian. Then we want the `constraint` object to
# add the values of ``H_j(x)`` to the Hessian of the objective,
# weighted by ``\lambda_j``.
# The Julian form for the supplied function ``c(x)`` and the derivative
# information is then added in the following way.
con_c!(c, x) = (c[1] = x[1]^2 + x[2]^2; c)
function con_jacobian!(J, x)
J[1,1] = 2*x[1]
J[1,2] = 2*x[2]
J
end
function con_h!(h, x, λ)
    h[1,1] += λ[1]*2
    h[2,2] += λ[1]*2
end;
# **Note that `con_h!` adds the `λ`-weighted Hessian value of each
# element of `c(x)` to the Hessian of `fun`.**
# We can then optimize the Rosenbrock function inside the ball of radius
# ``0.5``.
lx = Float64[]; ux = Float64[]
lc = [-Inf]; uc = [0.5^2]
dfc = TwiceDifferentiableConstraints(con_c!, con_jacobian!, con_h!,
lx, ux, lc, uc)
res = optimize(df, dfc, x0, IPNewton())
@test Optim.converged(res) #src
@test Optim.minimum(res) ≈ 0.2966215688829263 #src
# We can add a lower bound on the constraint, and thus
# optimize the objective on the annulus with
# inner and outer radii ``0.1`` and ``0.5`` respectively.
lc = [0.1^2]
dfc = TwiceDifferentiableConstraints(con_c!, con_jacobian!, con_h!,
lx, ux, lc, uc)
res = optimize(df, dfc, x0, IPNewton())
@test Optim.converged(res) #src
@test Optim.minimum(res) ≈ 0.2966215688829255 #src
# **Note that the algorithm warns that the Initial guess is not an
# interior point.** `IPNewton` can often handle this, however, if the
# initial guess is such that `c(x) = u_c`, then the algorithm currently
# fails. We may fix this in the future.
# ## Multiple constraints
# The following example illustrates how to add an additional constraint.
# In particular, we add a constraint function
# ```math
# c(x)_2 = x_2\sin(x_1)-x_1
# ```
function con2_c!(c, x)
c[1] = x[1]^2 + x[2]^2 ## First constraint
c[2] = x[2]*sin(x[1])-x[1] ## Second constraint
c
end
function con2_jacobian!(J, x)
## First constraint
J[1,1] = 2*x[1]
J[1,2] = 2*x[2]
## Second constraint
J[2,1] = x[2]*cos(x[1])-1.0
J[2,2] = sin(x[1])
J
end
function con2_h!(h, x, λ)
    ## First constraint
    h[1,1] += λ[1]*2
    h[2,2] += λ[1]*2
    ## Second constraint
    h[1,1] += λ[2]*x[2]*-sin(x[1])
    h[1,2] += λ[2]*cos(x[1])
## Symmetrize h
h[2,1] = h[1,2]
h
end;
# We generate the constraint objects and call `IPNewton` with
# initial guess ``x_0 = (0.25,0.25)``.
x0 = [0.25, 0.25]
lc = [-Inf, 0.0]; uc = [0.5^2, 0.0]
dfc = TwiceDifferentiableConstraints(con2_c!, con2_jacobian!, con2_h!,
lx, ux, lc, uc)
res = optimize(df, dfc, x0, IPNewton())
@test Optim.converged(res) #src
@test Optim.minimum(res) ≈ 1.0 #src
@test isapprox(Optim.minimizer(res), zeros(2), atol=sqrt(eps())) #src
#md # ## [Plain Program](@id ipnewton_basics-plain-program)
#md #
#md # Below follows a version of the program without any comments.
#md # The file is also available here: [ipnewton_basics.jl](ipnewton_basics.jl)
#md #
#md # ```julia
#md # @__CODE__
#md # ```
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 7281 | # # Maximum Likelihood Estimation: The Normal Linear Model
#
#-
#md # !!! tip
#md # This example is also available as a Jupyter notebook:
#md # [`maxlikenlm.ipynb`](@__NBVIEWER_ROOT_URL__examples/generated/maxlikenlm.ipynb)
#-
#
# The following tutorial will introduce maximum likelihood estimation
# in Julia for the normal linear model.
#
# The normal linear model (sometimes referred to as the OLS model) is
# the workhorse of regression modeling and is utilized across a number
# of diverse fields. In this tutorial, we will utilize simulated data
# to demonstrate how Julia can be used to recover the parameters of
# interest.
#
# The first order of business is to use the `Optim` package
# and also include the `NLSolversBase` routine:
#
using Optim, NLSolversBase
using LinearAlgebra: diag
using ForwardDiff
#md # !!! tip
#md # Add Optim with the following command at the Julia command prompt:
#md # `Pkg.add("Optim")`
#
# The first item that needs to be addressed is the data generating process or DGP.
# The following code will produce data from a normal linear model:
n = 40 # Number of observations
nvar = 2 # Number of variables
β = ones(nvar) * 3.0 # True coefficients
x = [ 1.0 0.156651 # X matrix of explanatory variables plus constant
1.0 -1.34218
1.0 0.238262
1.0 -0.496572
1.0 1.19352
1.0 0.300229
1.0 0.409127
1.0 -0.88967
1.0 -0.326052
1.0 -1.74367
1.0 -0.528113
1.0 1.42612
1.0 -1.08846
1.0 -0.00972169
1.0 -0.85543
1.0 1.0301
1.0 1.67595
1.0 -0.152156
1.0 0.26666
1.0 -0.668618
1.0 -0.36883
1.0 -0.301392
1.0 0.0667779
1.0 -0.508801
1.0 -0.352346
1.0 0.288688
1.0 -0.240577
1.0 -0.997697
1.0 -0.362264
1.0 0.999308
1.0 -1.28574
1.0 -1.91253
1.0 0.825156
1.0 -0.136191
1.0 1.79925
1.0 -1.10438
1.0 0.108481
1.0 0.847916
1.0 0.594971
1.0 0.427909]
ε = [0.5539830489065279 # Errors
-0.7981494315544392
0.12994853889935182
0.23315434715658184
-0.1959788033050691
-0.644463980478783
-0.04055657880388486
-0.33313251280917094
-0.315407370840677
0.32273952815870866
0.56790436131181
0.4189982390480762
-0.0399623088796998
-0.2900421677961449
-0.21938513655749814
-0.2521429229103657
0.0006247891825243118
-0.694977951759846
-0.24108791530910414
0.1919989647431539
0.15632862280544485
-0.16928298502504732
0.08912288359190582
0.0037707641031662006
-0.016111044809837466
0.01852191562589722
-0.762541135294584
-0.7204431774719634
-0.04394527523005201
-0.11956323865320413
-0.6713329013627437
-0.2339928433338628
-0.6200532213195297
-0.6192380993792371
0.08834918731846135
-0.5099307915921438
0.41527207925609494
-0.7130133329859893
-0.531213372742777
-0.09029672309221337]
y = x * β + ε; # Generate Data
# In the above example, we have 40 observations, one explanatory
# variable plus an intercept, fixed error draws,
# coefficients equal to 3.0, and all of these are subject to change by
# the user. Since we know the true value of these parameters, we
# should obtain these values when we maximize the likelihood function.
#
# The next step in our tutorial is to define a Julia function for the
# likelihood function. The following function defines the likelihood
# function for the normal linear model:
function Log_Likelihood(X, Y, β, log_σ)
    σ = exp(log_σ)
    llike = -n/2*log(2π) - n/2*log(σ^2) - (sum((Y - X * β).^2) / (2σ^2))
    llike = -llike
end
# The log likelihood function accepts 4 inputs: the matrix of
# explanatory variables (X), the dependent variable (Y), the β's, and
# the error variance (passed as log σ). Note that we exponentiate log σ
# in the second line of the code because the error variance cannot be
# negative and we want to avoid this situation when maximizing the
# likelihood.
#
# The next step in our tutorial is to optimize our function. We first
# use the `TwiceDifferentiable` command in order to obtain the Hessian
# matrix later on, which will be used to help form t-statistics:
func = TwiceDifferentiable(vars -> Log_Likelihood(x, y, vars[1:nvar], vars[nvar + 1]),
ones(nvar+1); autodiff=:forward);
# The above statement accepts 4 inputs: the x matrix, the dependent
# variable y, and a vector of β's and the error variance. The
# `vars[1:nvar]` is how we pass the vector of β's and the `vars[nvar +
# 1]` is how we pass the error variance. You can think of this as a
# vector of parameters with the first 2 being β's and the last one is
# the error variance.
#
# The `ones(nvar+1)` are the starting values for the parameters and
# the `autodiff=:forward` command performs forward mode automatic
# differentiation.
#
# The actual optimization of the likelihood function is accomplished
# with the following command:
opt = optimize(func, ones(nvar+1))
## Test the results #src
using Test #src
@test Optim.converged(opt) #src
@test Optim.g_residual(opt) < 1e-8 #src
# The first input to the command is the function we wish to optimize
# and the second input are the starting values.
#
# After a brief period of time, you should see output of the
# optimization routine, with the parameter estimates being very close
# to our simulated values.
#
# The optimization routine stores several quantities and we can obtain
# the maximim likelihood estimates with the following command:
parameters = Optim.minimizer(opt)
@test parameters ≈ [2.83664, 3.05345, -0.98837] atol=1e-5 #src
# !!! Note
# Fieldnames for all of the quantities can be obtained with the following command:
# fieldnames(typeof(opt))
#
# In order to obtain the correct Hessian matrix, we have to "push" the
# actual parameter values that maximize the likelihood function, since
# the `TwiceDifferentiable` command uses the next-to-last values to
# calculate the Hessian:
numerical_hessian = hessian!(func,parameters)
# Let's find the estimated value of σ, rather than log σ, and its standard error.
# To do this, we will use the Delta Method: https://en.wikipedia.org/wiki/Delta_method
# This function exponentiates log σ:
function transform(parameters)
parameters[end] = exp(parameters[end])
parameters
end
# get the Jacobian of the transformation
J = ForwardDiff.jacobian(transform, parameters)'
parameters = transform(parameters)
# We can now invert our Hessian matrix and use the Delta Method,
# to obtain the variance-covariance matrix:
var_cov_matrix = J*inv(numerical_hessian)*J'
# test the estimated parameters and t-stats for correctness
@test parameters ≈ [2.83664, 3.05345, 0.37218] atol=1e-5 #src
t_stats = parameters./sqrt.(diag(var_cov_matrix))
@test t_stats ≈ [48.02655, 45.51568, 8.94427] atol=1e-4 #src
# see the results
println("parameter estimates:", parameters)
println("t-statsitics: ", t_stats)
# From here, one may examine other statistics of interest using the
# output from the optimization routine.
#md # ## [Plain Program](@id maxlikenlm-plain-program)
#md #
#md # Below follows a version of the program without any comments.
#md # The file is also available here: [maxlikenlm.jl](maxlikenlm.jl)
#md #
#md # ```julia
#md # @__CODE__
#md # ```
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 6646 | # # Conditional Maximum Likelihood for the Rasch Model
#
#-
#md # !!! tip
#md # This example is also available as a Jupyter notebook:
#md # [``rasch.ipynb``](@__NBVIEWER_ROOT_URL__examples/generated/rasch.ipynb)
#-
using Optim, Random #hide
#
# The Rasch model is used in psychometrics as a model for
# assessment data such as student responses to a standardized
# test. Let $X_{pi}$ be the response accuracy of student $p$
# to item $i$ where $X_{pi}=1$ if the item was answered correctly
# and $X_{pi}=0$ otherwise for $p=1,\ldots,n$ and $i=1,\ldots,m$.
# The model for this accuracy is
# ```math
# P(\mathbf{X}_{p}=\mathbf{x}_{p}|\xi_p, \mathbf\epsilon) = \prod_{i=1}^m \dfrac{(\xi_p \epsilon_i)^{x_{pi}}}{1 + \xi_p\epsilon_i}
# ```
# where $\xi_p > 0$ the latent ability of person $p$ and $\epsilon_i > 0$
# is the difficulty of item $i$.
# We simulate data from this model:
Random.seed!(123)
n = 1000
m = 5
theta = randn(n)
delta = randn(m)
r = zeros(n)
s = zeros(m)
for i in 1:n
p = exp.(theta[i] .- delta) ./ (1.0 .+ exp.(theta[i] .- delta))
for j in 1:m
if rand() < p[j] ##correct
r[i] += 1
s[j] += 1
end
end
end
f = [sum(r.==j) for j in 1:m];
# Since the number of parameters increases
# with the sample size, standard maximum likelihood will not provide us
# consistent estimates. Instead we consider the conditional likelihood.
# It can be shown that the Rasch model is an exponential family model and
# that the sum score $r_p = \sum_{i} x_{pi}$ is the sufficient statistic for
# $\xi_p$. If we condition on the sum score we should be able to eliminate
# $\xi_p$. Indeed, with a bit of algebra we can show
# ```math
# P(\mathbf{X}_p = \mathbf{x}_p | r_p, \mathbf\epsilon) = \dfrac{\prod_{i=1}^m \epsilon_i^{x_{pi}}}{\gamma_{r_p}(\mathbf\epsilon)}
# ```
# where $\gamma_r(\mathbf\epsilon)$ is the elementary symmetric function of order $r$
# ```math
# \gamma_r(\mathbf\epsilon) = \sum_{\mathbf{y} : \mathbf{1}^\intercal \mathbf{y} = r} \prod_{j=1}^m \epsilon_j^{y_j}
# ```
# where the sum is over all possible answer configurations that give a sum
# score of $r$. Algorithms to efficiently compute $\gamma$ and its
# derivatives are available in the literature (see eg Baker (1996) for a review
# and Biscarri (2018) for a more modern approach)
function esf_sum!(S::AbstractArray{T,1}, x::AbstractArray{T,1}) where T <: Real
n = length(x)
fill!(S,zero(T))
S[1] = one(T)
@inbounds for col in 1:n
for r in 1:col
row = col - r + 1
S[row+1] = S[row+1] + x[col] * S[row]
end
end
end
function esf_ext!(S::AbstractArray{T,1}, H::AbstractArray{T,3}, x::AbstractArray{T,1}) where T <: Real
n = length(x)
esf_sum!(S, x)
H[:,:,1] .= zero(T)
H[:,:,2] .= one(T)
@inbounds for i in 3:n+1
for j in 1:n
H[j,j,i] = S[i-1] - x[j] * H[j,j,i-1]
for k in j+1:n
H[k,j,i] = S[i-1] - ((x[j]+x[k])*H[k,j,i-1] + x[j]*x[k]*H[k,j,i-2])
H[j,k,i] = H[k,j,i]
end
end
end
end
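# As a quick sanity check (not part of the original tutorial), the elementary
# symmetric functions of ``(1, 2, 3)`` are ``\gamma_0,\ldots,\gamma_3 = 1, 6, 11, 6``:
S_check = zeros(4)
esf_sum!(S_check, [1.0, 2.0, 3.0])
S_check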
# The objective function we want to minimize is the negative log conditional
# likelihood
# ```math
# \begin{aligned}
# \log{L_C(\mathbf\epsilon|\mathbf{r})} &= \sum_{p=1}^n \sum_{i=1}^m x_{pi} \log{\epsilon_i} - \log{\gamma_{r_p}(\mathbf\epsilon)}\\
# &= \sum_{i=1}^m s_i \log{\epsilon_i} - \sum_{r=1}^m f_r \log{\gamma_r(\mathbf\epsilon)}
# \end{aligned}
# ```
ϵ = ones(Float64, m)
β0 = zeros(Float64, m)
last_β = fill(NaN, m)
S = zeros(Float64, m+1)
H = zeros(Float64, m, m, m+1)
function calculate_common!(x, last_x)
    if x != last_x
        copyto!(last_x, x)
        ϵ .= exp.(-x)
        esf_ext!(S, H, ϵ)
    end
end
function neglogLC(β)
    calculate_common!(β, last_β)
    return -s'log.(ϵ) + f'log.(S[2:end])
end
# Parameter estimation is usually performed with respect to the unconstrained parameter
# $\beta_i = -\log{\epsilon_i}$. Taking the derivative with respect to $\beta_i$
# (and applying the chain rule) one obtains
# ```math
# \dfrac{\partial\log L_C(\mathbf\epsilon|\mathbf{r})}{\partial \beta_i} = -s_i + \epsilon_i\sum_{r=1}^m \dfrac{f_r \gamma_{r-1}^{(i)}}{\gamma_r}
# ```
# where $\gamma_{r-1}^{(i)} = \partial \gamma_{r}(\mathbf\epsilon)/\partial\epsilon_i$.
function g!(storage, β)
    calculate_common!(β, last_β)
    for j in 1:m
        storage[j] = s[j]
        for l in 1:m
            storage[j] -= ϵ[j] * f[l] * (H[j,j,l+1] / S[l+1])
        end
    end
end
# Similarly the Hessian matrix can be computed
# ```math
# \dfrac{\partial^2 \log L_C(\mathbf\epsilon|\mathbf{r})}{\partial \beta_i\partial\beta_j} = \begin{cases} \displaystyle -\epsilon_i \sum_{r=1}^m \dfrac{f_r\gamma_{r-1}^{(i)}}{\gamma_r}\left(1 - \dfrac{\gamma_{r-1}^{(i)}}{\gamma_r}\right) & \text{if $i=j$}\\
# \displaystyle -\epsilon_i\epsilon_j\sum_{r=1}^m \left(\dfrac{f_r \gamma_{r-2}^{(i,j)}}{\gamma_r} - \dfrac{f_r\gamma_{r-1}^{(i)}\gamma_{r-1}^{(j)}}{\gamma_r^2}\right) &\text{if $i\neq j$}
# \end{cases}
# ```
# where $\gamma_{r-2}^{(i,j)} = \partial^2 \gamma_{r}(\mathbf\epsilon)/\partial\epsilon_i\partial\epsilon_j$.
function h!(storage, β)
    calculate_common!(β, last_β)
    for j in 1:m
        for k in 1:m
            storage[k,j] = 0.0
            for l in 1:m
                if j == k
                    storage[j,j] += f[l] * (ϵ[j]*H[j,j,l+1] / S[l+1]) *
                        (1 - ϵ[j]*H[j,j,l+1] / S[l+1])
                elseif k > j
                    storage[k,j] += ϵ[j] * ϵ[k] * f[l] *
                        ((H[k,j,l] / S[l+1]) - (H[j,j,l+1] * H[k,k,l+1]) / S[l+1] ^ 2)
                else #k < j
                    storage[k,j] += ϵ[j] * ϵ[k] * f[l] *
                        ((H[j,k,l] / S[l+1]) - (H[j,j,l+1] * H[k,k,l+1]) / S[l+1] ^ 2)
                end
            end
        end
    end
end
# The estimates of the item parameters are then obtained via standard optimization
# algorithms (either Newton-Raphson or L-BFGS). One last issue is that the model is
# not identifiable (multiplying the $\xi_p$ by a constant and dividing the $\epsilon_i$
# by the same constant results in the same likelihood). Therefore some kind of constraint
# must be imposed when estimating the parameters. Typically either $\epsilon_1 = 0$ or
# $\prod_{i=1}^m \epsilon_i = 1$ (which is equivalent to $\sum_{i=1}^m \beta_i = 0$).
con_c!(c, x) = (c[1] = sum(x); c)
function con_jacobian!(J, x)
J[1,:] .= ones(length(x))
end
function con_h!(h, x, λ)
    for i in 1:size(h)[1]
        for j in 1:size(h)[2]
            h[i,j] += (i == j) ? λ[1] : 0.0
        end
    end
end
lx = Float64[]; ux = Float64[]
lc = [0.0]; uc = [0.0]
df = TwiceDifferentiable(neglogLC, g!, h!, β0)
dfc = TwiceDifferentiableConstraints(con_c!, con_jacobian!, con_h!, lx, ux, lc, uc)
res = optimize(df, dfc, β0, IPNewton())
# Compare the estimate to the truth
delta_hat = res.minimizer
[delta delta_hat]
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 13038 | module OptimMOIExt
using Optim
using Optim.LinearAlgebra
import MathOptInterface as MOI
function __init__()
@static if isdefined(Base, :get_extension)
setglobal!(Optim, :Optimizer, Optimizer)
end
end
mutable struct Optimizer{T} <: MOI.AbstractOptimizer
# Problem data.
variables::MOI.Utilities.VariablesContainer{T}
starting_values::Vector{Union{Nothing,T}}
nlp_model::Union{MOI.Nonlinear.Model,Nothing}
sense::MOI.OptimizationSense
# Parameters.
method::Union{Optim.AbstractOptimizer,Nothing}
silent::Bool
options::Dict{Symbol,Any}
# Solution attributes.
results::Union{Nothing,Optim.MultivariateOptimizationResults}
end
function Optimizer{T}() where {T}
return Optimizer{T}(
MOI.Utilities.VariablesContainer{T}(),
Union{Nothing,T}[],
nothing,
MOI.FEASIBILITY_SENSE,
nothing,
false,
Dict{Symbol,Any}(),
nothing,
)
end
Optimizer() = Optimizer{Float64}()
MOI.supports(::Optimizer, ::MOI.NLPBlock) = true
function MOI.supports(::Optimizer, ::Union{MOI.ObjectiveSense,MOI.ObjectiveFunction})
return true
end
MOI.supports(::Optimizer, ::MOI.Silent) = true
function MOI.supports(::Optimizer, p::MOI.RawOptimizerAttribute)
return p.name == "method" || hasfield(Optim.Options, Symbol(p.name))
end
function MOI.supports(::Optimizer, ::MOI.VariablePrimalStart, ::Type{MOI.VariableIndex})
return true
end
const BOUNDS{T} = Union{MOI.LessThan{T},MOI.GreaterThan{T},MOI.EqualTo{T},MOI.Interval{T}}
const _SETS{T} = Union{MOI.GreaterThan{T},MOI.LessThan{T},MOI.EqualTo{T}}
function MOI.supports_constraint(
::Optimizer{T},
::Type{MOI.VariableIndex},
::Type{<:BOUNDS{T}},
) where {T}
return true
end
function MOI.supports_constraint(
::Optimizer{T},
::Type{MOI.ScalarNonlinearFunction},
::Type{<:_SETS{T}},
) where {T}
return true
end
MOI.supports_incremental_interface(::Optimizer) = true
function MOI.copy_to(model::Optimizer, src::MOI.ModelLike)
return MOI.Utilities.default_copy_to(model, src)
end
MOI.get(::Optimizer, ::MOI.SolverName) = "Optim"
function MOI.set(model::Optimizer, ::MOI.ObjectiveSense, sense::MOI.OptimizationSense)
model.sense = sense
return
end
function MOI.set(model::Optimizer, ::MOI.ObjectiveFunction{F}, func::F) where {F}
nl = convert(MOI.ScalarNonlinearFunction, func)
if isnothing(model.nlp_model)
model.nlp_model = MOI.Nonlinear.Model()
end
MOI.Nonlinear.set_objective(model.nlp_model, nl)
return nothing
end
function MOI.set(model::Optimizer, ::MOI.Silent, value::Bool)
model.silent = value
return
end
MOI.get(model::Optimizer, ::MOI.Silent) = model.silent
const TIME_LIMIT = "time_limit"
MOI.supports(::Optimizer, ::MOI.TimeLimitSec) = true
function MOI.set(model::Optimizer, ::MOI.TimeLimitSec, value::Real)
MOI.set(model, MOI.RawOptimizerAttribute(TIME_LIMIT), Float64(value))
end
function MOI.set(model::Optimizer, attr::MOI.TimeLimitSec, ::Nothing)
delete!(model.options, Symbol(TIME_LIMIT))
end
function MOI.get(model::Optimizer, ::MOI.TimeLimitSec)
return get(model.options, Symbol(TIME_LIMIT), nothing)
end
MOI.Utilities.map_indices(::Function, opt::Optim.AbstractOptimizer) = opt
function MOI.set(model::Optimizer, p::MOI.RawOptimizerAttribute, value)
if p.name == "method"
model.method = value
else
model.options[Symbol(p.name)] = value
end
return
end
function MOI.get(model::Optimizer, p::MOI.RawOptimizerAttribute)
if p.name == "method"
return model.method
end
key = Symbol(p.name)
if haskey(model.options, key)
return model.options[key]
end
error("RawOptimizerAttribute with name $(p.name) is not set.")
end
MOI.get(model::Optimizer, ::MOI.SolveTimeSec) = time_run(model.results)
function MOI.empty!(model::Optimizer)
MOI.empty!(model.variables)
empty!(model.starting_values)
model.nlp_model = nothing
model.sense = MOI.FEASIBILITY_SENSE
model.results = nothing
return
end
function MOI.is_empty(model::Optimizer)
return MOI.is_empty(model.variables) &&
isempty(model.starting_values) &&
isnothing(model.nlp_model) &&
model.sense == MOI.FEASIBILITY_SENSE
end
function MOI.add_variable(model::Optimizer{T}) where {T}
push!(model.starting_values, nothing)
return MOI.add_variable(model.variables)
end
function MOI.is_valid(model::Optimizer, index::Union{MOI.VariableIndex,MOI.ConstraintIndex})
return MOI.is_valid(model.variables, index)
end
function MOI.add_constraint(
model::Optimizer{T},
vi::MOI.VariableIndex,
set::BOUNDS{T},
) where {T}
return MOI.add_constraint(model.variables, vi, set)
end
function MOI.add_constraint(
model::Optimizer{T},
f::MOI.ScalarNonlinearFunction,
s::_SETS{T},
) where {T}
if model.nlp_model === nothing
model.nlp_model = MOI.Nonlinear.Model()
end
index = MOI.Nonlinear.add_constraint(model.nlp_model, f, s)
return MOI.ConstraintIndex{typeof(f),typeof(s)}(index.value)
end
function starting_value(optimizer::Optimizer{T}, i) where {T}
if optimizer.starting_values[i] !== nothing
return optimizer.starting_values[i]
else
v = optimizer.variables
return min(max(zero(T), v.lower[i]), v.upper[i])
end
end
function MOI.set(
model::Optimizer,
::MOI.VariablePrimalStart,
vi::MOI.VariableIndex,
value::Union{Real,Nothing},
)
MOI.throw_if_not_valid(model, vi)
model.starting_values[vi.value] = value
return
end
function requested_features(::Optim.ZerothOrderOptimizer, has_constraints)
return Symbol[]
end
function requested_features(::Optim.FirstOrderOptimizer, has_constraints)
features = [:Grad]
if has_constraints
push!(features, :Jac)
end
return features
end
function requested_features(::Union{IPNewton,Optim.SecondOrderOptimizer}, has_constraints)
features = [:Grad, :Hess]
if has_constraints
push!(features, :Jac)
end
return features
end
function sparse_to_dense!(A, I::Vector, nzval)
for k in eachindex(I)
i, j = I[k]
A[i, j] += nzval[k]
end
return A
end
function sym_sparse_to_dense!(A, I::Vector, nzval)
for k in eachindex(I)
i, j = I[k]
A[i, j] += nzval[k]
A[j, i] = A[i, j]
end
return A
end
function MOI.optimize!(model::Optimizer{T}) where {T}
backend = MOI.Nonlinear.SparseReverseMode()
vars = MOI.get(model.variables, MOI.ListOfVariableIndices())
evaluator = MOI.Nonlinear.Evaluator(model.nlp_model, backend, vars)
nlp_data = MOI.NLPBlockData(evaluator)
# load parameters
if isnothing(model.nlp_model)
error("An objective should be provided to Optim with `@objective`.")
end
objective_scale = model.sense == MOI.MAX_SENSE ? -one(T) : one(T)
zero_μ = zeros(T, length(nlp_data.constraint_bounds))
function f(x)
return objective_scale * MOI.eval_objective(evaluator, x)
end
function g!(G, x)
fill!(G, zero(T))
MOI.eval_objective_gradient(evaluator, G, x)
if model.sense == MOI.MAX_SENSE
rmul!(G, objective_scale)
end
return G
end
function h!(H, x)
fill!(H, zero(T))
MOI.eval_hessian_lagrangian(evaluator, H_nzval, x, objective_scale, zero_μ)
sym_sparse_to_dense!(H, hessian_structure, H_nzval)
return H
end
method = model.method
nl_constrained = !isempty(nlp_data.constraint_bounds)
features = MOI.features_available(evaluator)
has_bounds = any(
vi ->
isfinite(model.variables.lower[vi.value]) ||
isfinite(model.variables.upper[vi.value]),
vars,
)
if method === nothing
if nl_constrained
method = IPNewton()
elseif :Grad in features
# FIXME `fallback_method(f, g!, h!)` returns `Newton` but if there
# are variable bounds, `Newton` is not supported. On the other hand,
# `fallback_method(f, g!)` returns `LBFGS` which is supported if `has_bounds`.
if :Hess in features && !has_bounds
method = Optim.fallback_method(f, g!, h!)
else
method = Optim.fallback_method(f, g!)
end
else
method = Optim.fallback_method(f)
end
end
used_features = requested_features(method, nl_constrained)
MOI.initialize(evaluator, used_features)
if :Hess in used_features
hessian_structure = MOI.hessian_lagrangian_structure(evaluator)
H_nzval = zeros(T, length(hessian_structure))
end
initial_x = starting_value.(model, eachindex(model.starting_values))
options = copy(model.options)
if !nl_constrained && has_bounds && !(method isa IPNewton)
options = Optim.Options(; options...)
model.results = optimize(
f,
g!,
model.variables.lower,
model.variables.upper,
initial_x,
Fminbox(method),
options;
inplace = true,
)
else
d = Optim.promote_objtype(method, initial_x, :finite, true, f, g!, h!)
Optim.add_default_opts!(options, method)
options = Optim.Options(; options...)
if nl_constrained || has_bounds
if nl_constrained
lc = [b.lower for b in nlp_data.constraint_bounds]
uc = [b.upper for b in nlp_data.constraint_bounds]
c!(c, x) = MOI.eval_constraint(evaluator, c, x)
if !(:Jac in features)
error(
"Nonlinear constraints should be differentiable to be used with Optim.",
)
end
if !(:Hess in features)
error(
"Nonlinear constraints should be twice differentiable to be used with Optim.",
)
end
jacobian_structure = MOI.jacobian_structure(evaluator)
J_nzval = zeros(T, length(jacobian_structure))
function jacobian!(J, x)
fill!(J, zero(T))
MOI.eval_constraint_jacobian(evaluator, J_nzval, x)
sparse_to_dense!(J, jacobian_structure, J_nzval)
return J
end
function con_hessian!(H, x, λ)
    fill!(H, zero(T))
    MOI.eval_hessian_lagrangian(evaluator, H_nzval, x, zero(T), λ)
sym_sparse_to_dense!(H, hessian_structure, H_nzval)
return H
end
c = TwiceDifferentiableConstraints(
c!,
jacobian!,
con_hessian!,
model.variables.lower,
model.variables.upper,
lc,
uc,
)
else
@assert has_bounds
c = TwiceDifferentiableConstraints(
model.variables.lower,
model.variables.upper,
)
end
model.results = optimize(d, c, initial_x, method, options)
else
model.results = optimize(d, initial_x, method, options)
end
end
return
end
function MOI.get(model::Optimizer, ::MOI.TerminationStatus)
if model.results === nothing
return MOI.OPTIMIZE_NOT_CALLED
elseif Optim.converged(model.results)
return MOI.LOCALLY_SOLVED
else
return MOI.OTHER_ERROR
end
end
function MOI.get(model::Optimizer, ::MOI.RawStatusString)
return summary(model.results)
end
# A result is available whenever `optimize!` has been called.
function MOI.get(model::Optimizer, ::MOI.ResultCount)
return model.results === nothing ? 0 : 1
end
function MOI.get(model::Optimizer, attr::MOI.PrimalStatus)
if !(1 <= attr.result_index <= MOI.get(model, MOI.ResultCount()))
return MOI.NO_SOLUTION
end
if Optim.converged(model.results)
return MOI.FEASIBLE_POINT
else
return MOI.UNKNOWN_RESULT_STATUS
end
end
MOI.get(::Optimizer, ::MOI.DualStatus) = MOI.NO_SOLUTION
function MOI.get(model::Optimizer, attr::MOI.ObjectiveValue)
MOI.check_result_index_bounds(model, attr)
val = minimum(model.results)
if model.sense == MOI.MAX_SENSE
val = -val
end
return val
end
function MOI.get(model::Optimizer, attr::MOI.VariablePrimal, vi::MOI.VariableIndex)
MOI.check_result_index_bounds(model, attr)
MOI.throw_if_not_valid(model, vi)
return Optim.minimizer(model.results)[vi.value]
end
function MOI.get(
model::Optimizer{T},
attr::MOI.ConstraintPrimal,
ci::MOI.ConstraintIndex{MOI.VariableIndex,<:BOUNDS{T}},
) where {T}
MOI.check_result_index_bounds(model, attr)
MOI.throw_if_not_valid(model, ci)
return Optim.minimizer(model.results)[ci.value]
end
end # module
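# Hedged usage sketch (not part of the original file): this wrapper is meant to
# be driven through MathOptInterface, e.g. via JuMP:
#
#     using JuMP, Optim
#     model = Model(Optim.Optimizer)
#     set_optimizer_attribute(model, "method", Optim.LBFGS())
#     @variable(model, x)
#     @objective(model, Min, (1 - x)^2)
#     optimize!(model)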
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 5768 | # Manifold interface: every manifold (subtype of Manifold) defines the functions
# project_tangent!(m, g, x): project g on the tangent space to m at x
# retract!(m, x): map x back to a point on the manifold m
# For mathematical references, see e.g.
# The Geometry of Algorithms with Orthogonality Constraints
# Alan Edelman, Tomás A. Arias, and Steven T. Smith
# SIAM. J. Matrix Anal. & Appl., 20(2), 303–353. (51 pages)
# Optimization Algorithms on Matrix Manifolds
# P.-A. Absil, R. Mahony, R. Sepulchre
# Princeton University Press, 2008
abstract type Manifold
end
# fallback for out-of-place ops
project_tangent(M::Manifold, g, x) = project_tangent!(M, copy(g), x)
retract(M::Manifold,x) = retract!(M, copy(x))
# Fake objective function implementing a retraction
mutable struct ManifoldObjective{T<:NLSolversBase.AbstractObjective} <: NLSolversBase.AbstractObjective
manifold::Manifold
inner_obj::T
end
# TODO: is it safe here to call retract! and change x?
function NLSolversBase.value!(obj::ManifoldObjective, x)
xin = retract(obj.manifold, x)
value!(obj.inner_obj, xin)
end
function NLSolversBase.value(obj::ManifoldObjective)
value(obj.inner_obj)
end
function NLSolversBase.gradient(obj::ManifoldObjective)
gradient(obj.inner_obj)
end
function NLSolversBase.gradient(obj::ManifoldObjective,i::Int)
gradient(obj.inner_obj,i)
end
function NLSolversBase.gradient!(obj::ManifoldObjective,x)
xin = retract(obj.manifold, x)
gradient!(obj.inner_obj,xin)
project_tangent!(obj.manifold,gradient(obj.inner_obj),xin)
return gradient(obj.inner_obj)
end
function NLSolversBase.value_gradient!(obj::ManifoldObjective,x)
xin = retract(obj.manifold, x)
value_gradient!(obj.inner_obj,xin)
project_tangent!(obj.manifold,gradient(obj.inner_obj),xin)
return value(obj.inner_obj)
end
"""Flat Euclidean space {R,C}^N, with projections equal to the identity."""
struct Flat <: Manifold
end
# all the functions below are no-ops, and therefore the generated code
# for the flat manifold should be exactly the same as the one with all
# the manifold stuff removed
retract(M::Flat, x) = x
retract!(M::Flat,x) = x
project_tangent(M::Flat, g, x) = g
project_tangent!(M::Flat, g, x) = g
"""Spherical manifold {|x| = 1}."""
struct Sphere <: Manifold
end
retract!(S::Sphere, x) = (x ./= norm(x))
# dot accepts any iterables
project_tangent!(S::Sphere,g,x) = (g .-= real(dot(x,g)).*x)
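# Usage sketch (hedged): first-order optimizers take one of these manifolds
# through their `manifold` keyword, e.g. `ConjugateGradient(manifold = Sphere())`;
# `retract!` keeps iterates feasible while `project_tangent!` keeps search
# directions tangential. `_sphere_demo` below is an illustrative helper, not
# part of the package API:
function _sphere_demo()
x = retract!(Sphere(), [3.0, 4.0]) # maps onto the unit sphere: [0.6, 0.8]
g = project_tangent!(Sphere(), [1.0, 1.0], x) # g is now (numerically) orthogonal to x
return x, g
end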
"""
N x n matrices with orthonormal columns, i.e. such that X'X = I.
Special cases: N x 1 = sphere, N x N = orthogonal/unitary group.
Stiefel() uses an SVD algorithm to compute the retraction. To use a Cholesky-based orthogonalization (faster but less stable), use Stiefel(:CholQR).
When the function to be optimized depends only on the subspace X*X' spanned by a point X in the Stiefel manifold, first-order optimization algorithms are equivalent for the Stiefel and Grassmann manifold, so there is no separate Grassmann manifold.
"""
abstract type Stiefel <: Manifold end
struct Stiefel_CholQR <: Stiefel end
struct Stiefel_SVD <: Stiefel end
function Stiefel(retraction=:SVD)
if retraction == :CholQR
Stiefel_CholQR()
elseif retraction == :SVD
Stiefel_SVD()
else
throw(ArgumentError("unsupported Stiefel retraction $retraction; use :SVD or :CholQR"))
end
end
function retract!(S::Stiefel_SVD, X)
U,S,V = svd(X)
X .= U*V'
end
function retract!(S::Stiefel_CholQR, X)
overlap = X'X
X .= X/cholesky(overlap).U
end
#For functions depending only on the subspace spanned by X, we always have G = A*X for some A, and so X'G = G'X, and Stiefel == Grassmann
#Edelman et al. have G .-= X*G'X (2.53), corresponding to a different metric ("canonical metric"). We follow Absil et al. here and use the metric inherited from Nxn matrices.
project_tangent!(S::Stiefel, G, X) = (XG = X'G; G .-= X*((XG .+ XG')./2))
"""
Multiple copies of the same manifold. Points are stored as inner_dims x outer_dims,
e.g. the product of 2x2 Stiefel manifolds of dimension N x n would be a N x n x 2 x 2 matrix.
"""
struct PowerManifold<:Manifold
"Type of embedded manifold"
inner_manifold::Manifold
"Dimension of the embedded manifolds"
inner_dims::Tuple
"Number of embedded manifolds"
outer_dims::Tuple
end
function retract!(m::PowerManifold, x)
for i=1:prod(m.outer_dims) # TODO: use for i in LinearIndices(m.outer_dims)?
retract!(m.inner_manifold,get_inner(m, x, i))
end
x
end
function project_tangent!(m::PowerManifold, g, x)
for i=1:prod(m.outer_dims)
project_tangent!(m.inner_manifold,get_inner(m, g, i),get_inner(m, x, i))
end
g
end
@inline function get_inner(m::PowerManifold, x, i::Int)
size_inner = prod(m.inner_dims)
size_outer = prod(m.outer_dims)
@assert 1 <= i <= size_outer
return reshape(view(x, (i-1)*size_inner+1:i*size_inner), m.inner_dims)
end
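# Illustrative sketch (not exercised by the package itself): ten independent
# unit vectors in R^3, stored as a 3x10 array and retracted columnwise through
# the machinery above.
function _power_demo()
m = PowerManifold(Sphere(), (3,), (10,))
X = retract!(m, rand(3, 10)) # each column now has unit norm
return m, X
end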
"""
Product of two manifolds {P = (x1,x2), x1 ∈ m1, x2 ∈ m2}.
P is stored as a flat 1D array, and x1 is before x2 in memory.
Use get_inner(m, x, {1,2}) to access x1 or x2 in their original format.
"""
struct ProductManifold<:Manifold
m1::Manifold
m2::Manifold
dims1::Tuple
dims2::Tuple
end
function retract!(m::ProductManifold, x)
retract!(m.m1, get_inner(m,x,1))
retract!(m.m2, get_inner(m,x,2))
x
end
function project_tangent!(m::ProductManifold, g, x)
project_tangent!(m.m1, get_inner(m, g, 1), get_inner(m, x, 1))
project_tangent!(m.m2, get_inner(m, g, 2), get_inner(m, x, 2))
g
end
function get_inner(m::ProductManifold, x, i::Integer)
N1 = prod(m.dims1)
N2 = prod(m.dims2)
@assert length(x) == N1+N2
if i == 1
return reshape(view(x, 1:N1),m.dims1)
elseif i == 2
return reshape(view(x, N1+1:N1+N2), m.dims2)
else
error("Only two components in a product manifold")
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 6970 | """
# Optim.jl
Welcome to Optim.jl!
Optim.jl is a package used to solve continuous optimization problems. It is
written in Julia for Julians to help take advantage of arbitrary number types,
fast computation, and excellent automatic differentiation tools.
## REPL help
`?` followed by an algorithm name (`?BFGS`), constructors (`?Optim.Options`)
prints help to the terminal.
## Documentation
Besides the help provided at the REPL, it is possible to find help and general
documentation online at http://julianlsolvers.github.io/Optim.jl/stable/ .
"""
module Optim
using Compat
using NLSolversBase # for shared infrastructure in JuliaNLSolvers
using PositiveFactorizations # for globalization strategy in Newton
import PositiveFactorizations: cholesky!, cholesky
using LineSearches # for globalization strategy in Quasi-Newton algs
import NaNMath # for functions that ignore NaNs (no poisoning)
import Parameters: @with_kw, # for types where constructors are simply defined
@unpack # by their default values, and simple unpacking
# of fields
using Printf # For printing, maybe look into other options
using FillArrays # For handling scalar bounds in Fminbox
#using Compat # for compatibility across multiple julia versions
# for extensions of functions defined in Base.
import Base: length, push!, show, getindex, setindex!, maximum, minimum
# objective and constraints types and functions relevant to them.
import NLSolversBase: NonDifferentiable, OnceDifferentiable, TwiceDifferentiable,
nconstraints, nconstraints_x, NotInplaceObjective, InplaceObjective
# var for NelderMead
import StatsBase: var
import LinearAlgebra
import LinearAlgebra: Diagonal, diag, Hermitian, Symmetric,
rmul!, mul!,
norm, normalize!,
diagind,
eigen, BLAS,
cholesky, Cholesky, # factorizations
I,
svd,
opnorm, # for safeguards in newton trust regions
issuccess
import SparseArrays: AbstractSparseMatrix
# exported functions and types
export optimize, maximize, # main function
# Re-export objective types from NLSolversBase
NonDifferentiable,
OnceDifferentiable,
TwiceDifferentiable,
# Re-export constraint types from NLSolversBase
TwiceDifferentiableConstraints,
# I don't think these should be here [pkofod]
OptimizationState,
OptimizationTrace,
# Optimization algorithms
## Zeroth order methods (heuristics)
NelderMead,
ParticleSwarm,
SimulatedAnnealing,
## First order
### Quasi-Newton
GradientDescent,
BFGS,
LBFGS,
### Conjugate gradient
ConjugateGradient,
### Acceleration methods
AcceleratedGradientDescent,
MomentumGradientDescent,
Adam,
AdaMax,
### Nonlinear GMRES
NGMRES,
OACCEL,
## Second order
### (Quasi-)Newton
Newton,
### Trust region
NewtonTrustRegion,
# Constrained
## Box constraints, x_i in [lb_i, ub_i]
### Specifically Univariate, R -> R
GoldenSection,
Brent,
### Multivariate, R^N -> R
Fminbox,
SAMIN,
## Manifold constraints
Manifold,
Flat,
Sphere,
Stiefel,
## Non-linear constraints
IPNewton
include("types.jl") # types used throughout
include("Manifolds.jl") # code to handle manifold constraints
include("multivariate/precon.jl") # preconditioning functionality
# utilities
include("utilities/generic.jl") # generic utilities
include("utilities/maxdiff.jl") # find largest difference
include("utilities/update.jl") # trace code
# Unconstrained optimization
## Grid Search
include("multivariate/solvers/zeroth_order/grid_search.jl")
## Zeroth order (Heuristic) Optimization Methods
include("multivariate/solvers/zeroth_order/nelder_mead.jl")
include("multivariate/solvers/zeroth_order/simulated_annealing.jl")
include("multivariate/solvers/zeroth_order/particle_swarm.jl")
## Quasi-Newton
include("multivariate/solvers/first_order/gradient_descent.jl")
include("multivariate/solvers/first_order/bfgs.jl")
include("multivariate/solvers/first_order/l_bfgs.jl")
## Acceleration methods
include("multivariate/solvers/first_order/adamax.jl")
include("multivariate/solvers/first_order/adam.jl")
include("multivariate/solvers/first_order/accelerated_gradient_descent.jl")
include("multivariate/solvers/first_order/momentum_gradient_descent.jl")
## Conjugate gradient
include("multivariate/solvers/first_order/cg.jl")
## Newton
### Line search
include("multivariate/solvers/second_order/newton.jl")
include("multivariate/solvers/second_order/krylov_trust_region.jl")
### Trust region
include("multivariate/solvers/second_order/newton_trust_region.jl")
## Nonlinear GMRES
include("multivariate/solvers/first_order/ngmres.jl")
# Constrained optimization
## Box constraints
include("multivariate/solvers/constrained/fminbox.jl")
include("multivariate/solvers/constrained/samin.jl")
# Univariate methods
include("univariate/solvers/golden_section.jl")
include("univariate/solvers/brent.jl")
include("univariate/types.jl")
include("univariate/printing.jl")
# Line search generic code
include("utilities/perform_linesearch.jl")
# Backward compatibility
include("deprecate.jl")
# convenient user facing optimize methods
include("univariate/optimize/interface.jl")
include("multivariate/optimize/interface.jl")
# actual optimize methods
include("univariate/optimize/optimize.jl")
include("multivariate/optimize/optimize.jl")
# Convergence
include("utilities/assess_convergence.jl")
include("multivariate/solvers/zeroth_order/zeroth_utils.jl")
# Traces
include("utilities/trace.jl")
# API
include("api.jl")
## Interior point includes
include("multivariate/solvers/constrained/ipnewton/types.jl")
# Tracing
include("multivariate/solvers/constrained/ipnewton/utilities/update.jl")
# Constrained optimization
include("multivariate/solvers/constrained/ipnewton/iplinesearch.jl")
include("multivariate/solvers/constrained/ipnewton/interior.jl")
include("multivariate/solvers/constrained/ipnewton/ipnewton.jl")
# Convergence
include("multivariate/solvers/constrained/ipnewton/utilities/assess_convergence.jl")
# Traces
include("multivariate/solvers/constrained/ipnewton/utilities/trace.jl")
# Maximization convenience wrapper
include("maximize.jl")
@static if !isdefined(Base, :get_extension)
include("../ext/OptimMOIExt.jl")
using .OptimMOIExt
const Optimizer = OptimMOIExt.Optimizer
else
# declare this upfront so that the MathOptInterface extension can assign it
# without creating a new global
global Optimizer
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 7935 | Base.summary(r::OptimizationResults) = summary(r.method) # might want to do more here than just return summary of the method used
minimizer(r::OptimizationResults) = r.minimizer
minimum(r::OptimizationResults) = r.minimum
iterations(r::OptimizationResults) = r.iterations
iteration_limit_reached(r::OptimizationResults) = r.iteration_converged
trace(r::OptimizationResults) = length(r.trace) > 0 ? r.trace : error("No trace in optimization results. To get a trace, run optimize() with store_trace = true.")
function x_trace(r::UnivariateOptimizationResults)
tr = trace(r)
!haskey(tr[1].metadata, "minimizer") && error("Trace does not contain x. To get a trace of x, run optimize() with extended_trace = true")
[ state.metadata["minimizer"] for state in tr ]
end
function x_lower_trace(r::UnivariateOptimizationResults)
tr = trace(r)
!haskey(tr[1].metadata, "x_lower") && error("Trace does not contain x. To get a trace of x, run optimize() with extended_trace = true")
[ state.metadata["x_lower"] for state in tr ]
end
x_lower_trace(r::MultivariateOptimizationResults) = error("x_lower_trace is not implemented for $(summary(r)).")
function x_upper_trace(r::UnivariateOptimizationResults)
tr = trace(r)
!haskey(tr[1].metadata, "x_upper") && error("Trace does not contain x. To get a trace of x, run optimize() with extended_trace = true")
[ state.metadata["x_upper"] for state in tr ]
end
x_upper_trace(r::MultivariateOptimizationResults) = error("x_upper_trace is not implemented for $(summary(r)).")
function x_trace(r::MultivariateOptimizationResults)
tr = trace(r)
if isa(r.method, NelderMead)
throw(ArgumentError("Nelder Mead does not operate with a single x. Please use either centroid_trace(...) or simplex_trace(...) to extract the relevant points from the trace."))
end
!haskey(tr[1].metadata, "x") && error("Trace does not contain x. To get a trace of x, run optimize() with extended_trace = true")
[ state.metadata["x"] for state in tr ]
end
function centroid_trace(r::MultivariateOptimizationResults)
tr = trace(r)
if !isa(r.method, NelderMead)
throw(ArgumentError("There is no centroid involved in optimization using $(r.method). Please use x_trace(...) to grab the points from the trace."))
end
!haskey(tr[1].metadata, "centroid") && error("Trace does not contain centroid. To get a trace of the centroid, run optimize() with extended_trace = true")
[ state.metadata["centroid"] for state in tr ]
end
function simplex_trace(r::MultivariateOptimizationResults)
tr = trace(r)
if !isa(r.method, NelderMead)
throw(ArgumentError("There is no simplex involved in optimization using $(r.method). Please use x_trace(...) to grab the points from the trace."))
end
!haskey(tr[1].metadata, "simplex") && error("Trace does not contain simplex. To get a trace of the simplex, run optimize() with trace_simplex = true")
[ state.metadata["simplex"] for state in tr ]
end
function simplex_value_trace(r::MultivariateOptimizationResults)
tr = trace(r)
if !isa(r.method, NelderMead)
throw(ArgumentError("There are no simplex values involved in optimization using $(r.method). Please use f_trace(...) to grab the objective values from the trace."))
end
!haskey(tr[1].metadata, "simplex_values") && error("Trace does not contain objective values at the simplex. To get a trace of the simplex values, run optimize() with trace_simplex = true")
[ state.metadata["simplex_values"] for state in tr ]
end
f_trace(r::OptimizationResults) = [ state.value for state in trace(r) ]
g_norm_trace(r::OptimizationResults) = error("g_norm_trace is not implemented for $(summary(r)).")
g_norm_trace(r::MultivariateOptimizationResults) = [ state.g_norm for state in trace(r) ]
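# Usage sketch (hedged): the trace accessors above assume a run that recorded
# a trace; `_trace_demo` is an illustrative helper, not part of the API.
function _trace_demo(f, x0)
res = optimize(f, x0, BFGS(), Options(store_trace = true, extended_trace = true))
return f_trace(res), g_norm_trace(res), x_trace(res)
end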
f_calls(r::OptimizationResults) = r.f_calls
f_calls(d) = first(d.f_calls)
g_calls(r::OptimizationResults) = error("g_calls is not implemented for $(summary(r)).")
g_calls(r::MultivariateOptimizationResults) = r.g_calls
g_calls(d::NonDifferentiable) = 0
g_calls(d) = first(d.df_calls)
h_calls(r::OptimizationResults) = error("h_calls is not implemented for $(summary(r)).")
h_calls(r::MultivariateOptimizationResults) = r.h_calls
h_calls(d::Union{NonDifferentiable, OnceDifferentiable}) = 0
h_calls(d) = first(d.h_calls)
h_calls(d::TwiceDifferentiableHV) = first(d.hv_calls)
converged(r::UnivariateOptimizationResults) = r.converged
function converged(r::MultivariateOptimizationResults)
conv_flags = r.x_converged || r.f_converged || r.g_converged
x_isfinite = isfinite(x_abschange(r)) || isnan(x_relchange(r))
f_isfinite = if r.iterations > 0
isfinite(f_abschange(r)) || isnan(f_relchange(r))
else
true
end
g_isfinite = isfinite(g_residual(r))
return conv_flags && all((x_isfinite, f_isfinite, g_isfinite))
end
x_converged(r::OptimizationResults) = error("x_converged is not implemented for $(summary(r)).")
x_converged(r::MultivariateOptimizationResults) = r.x_converged
f_converged(r::OptimizationResults) = error("f_converged is not implemented for $(summary(r)).")
f_converged(r::MultivariateOptimizationResults) = r.f_converged
f_increased(r::OptimizationResults) = error("f_increased is not implemented for $(summary(r)).")
f_increased(r::MultivariateOptimizationResults) = r.f_increased
g_converged(r::OptimizationResults) = error("g_converged is not implemented for $(summary(r)).")
g_converged(r::MultivariateOptimizationResults) = r.g_converged
x_abstol(r::OptimizationResults) = error("x_abstol is not implemented for $(summary(r)).")
x_reltol(r::OptimizationResults) = error("x_reltol is not implemented for $(summary(r)).")
x_tol(r::OptimizationResults) = error("x_tol is not implemented for $(summary(r)).")
x_abstol(r::MultivariateOptimizationResults) = r.x_abstol
x_reltol(r::MultivariateOptimizationResults) = r.x_reltol
x_tol(r::MultivariateOptimizationResults) = r.x_abstol
x_abschange(r::MultivariateOptimizationResults) = r.x_abschange
x_relchange(r::MultivariateOptimizationResults) = r.x_relchange
f_abstol(r::OptimizationResults) = error("f_abstol is not implemented for $(summary(r)).")
f_reltol(r::OptimizationResults) = error("f_reltol is not implemented for $(summary(r)).")
f_tol(r::OptimizationResults) = error("f_tol is not implemented for $(summary(r)).")
f_tol(r::MultivariateOptimizationResults) = r.f_reltol
f_abstol(r::MultivariateOptimizationResults) = r.f_abstol
f_reltol(r::MultivariateOptimizationResults) = r.f_reltol
f_abschange(r::MultivariateOptimizationResults) = r.f_abschange
f_relchange(r::MultivariateOptimizationResults) = r.f_relchange
g_tol(r::OptimizationResults) = error("g_tol is not implemented for $(summary(r)).")
g_tol(r::MultivariateOptimizationResults) = r.g_abstol
g_residual(r::MultivariateOptimizationResults) = r.g_residual
initial_state(r::OptimizationResults) = error("initial_state is not implemented for $(summary(r)).")
initial_state(r::MultivariateOptimizationResults) = r.initial_x
lower_bound(r::OptimizationResults) = error("lower_bound is not implemented for $(summary(r)).")
lower_bound(r::UnivariateOptimizationResults) = r.initial_lower
upper_bound(r::OptimizationResults) = error("upper_bound is not implemented for $(summary(r)).")
upper_bound(r::UnivariateOptimizationResults) = r.initial_upper
rel_tol(r::OptimizationResults) = error("rel_tol is not implemented for $(summary(r)).")
rel_tol(r::UnivariateOptimizationResults) = r.rel_tol
abs_tol(r::OptimizationResults) = error("abs_tol is not implemented for $(summary(r)).")
abs_tol(r::UnivariateOptimizationResults) = r.abs_tol
time_limit(r::MultivariateOptimizationResults) = r.time_limit
time_run( r::MultivariateOptimizationResults) = r.time_run
time_limit(r::OptimizationResults) = error("time_limit is not implemented for $(summary(r)).")
time_run( r::OptimizationResults) = error("time_run is not implemented for $(summary(r)).")
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 3470 | Base.@deprecate method(x) summary(x)
const has_deprecated_fminbox = Ref(false)
function optimize(
df::OnceDifferentiable,
initial_x::Array{T},
l::Array{T},
u::Array{T},
::Type{Fminbox};
x_tol::T = eps(T),
f_tol::T = sqrt(eps(T)),
g_tol::T = sqrt(eps(T)),
allow_f_increases::Bool = true,
iterations::Integer = 1_000,
store_trace::Bool = false,
show_trace::Bool = false,
extended_trace::Bool = false,
show_warnings::Bool = true,
callback = nothing,
show_every::Integer = 1,
linesearch = LineSearches.HagerZhang{T}(),
eta::Real = convert(T,0.4),
mu0::T = convert(T, NaN),
mufactor::T = convert(T, 0.001),
precondprep = (P, x, l, u, mu) -> precondprepbox!(P, x, l, u, mu),
optimizer = ConjugateGradient,
optimizer_o = Options(store_trace = store_trace,
show_trace = show_trace,
extended_trace = extended_trace,
show_warnings = show_warnings),
nargs...) where T<:AbstractFloat
if !has_deprecated_fminbox[]
@warn("Fminbox with the optimizer keyword is deprecated, construct Fminbox{optimizer}() and pass it to optimize(...) instead.")
has_deprecated_fminbox[] = true
end
optimize(df, initial_x, l, u, Fminbox{optimizer}();
allow_f_increases=allow_f_increases,
iterations=iterations,
store_trace=store_trace,
show_trace=show_trace,
extended_trace=extended_trace,
show_warnings=show_warnings,
show_every=show_every,
callback=callback,
linesearch=linesearch,
eta=eta,
mu0=mu0,
mufactor=mufactor,
precondprep=precondprep,
optimizer_o=optimizer_o)
end
function optimize(::AbstractObjective)
throw(ErrorException("Optimizing an objective `obj` without providing an initial `x` has been deprecated without backwards compatibility. Please explicitly provide an `x`: `optimize(obj, x)`"))
end
function optimize(::AbstractObjective, ::Method)
throw(ErrorException("Optimizing an objective `obj` without providing an initial `x` has been deprecated without backwards compatibility. Please explicitly provide an `x`: `optimize(obj, x, method)`"))
end
function optimize(::AbstractObjective, ::Method, ::Options)
throw(ErrorException("Optimizing an objective `obj` without providing an initial `x` has been deprecated without backwards compatibility. Please explicitly provide an `x`: `optimize(obj, x, method, options)`"))
end
function optimize(::AbstractObjective, ::Options)
throw(ErrorException("Optimizing an objective `obj` without providing an initial `x` has been deprecated without backwards compatibility. Please explicitly provide an `x`: `optimize(obj, x, options)`"))
end
function optimize(df::OnceDifferentiable,
l::Array{T},
u::Array{T},
F::Fminbox{O}; kwargs...) where {T<:AbstractFloat,O<:AbstractOptimizer}
throw(ErrorException("Optimizing an objective `obj` without providing an initial `x` has been deprecated without backwards compatibility. Please explicitly provide an `x`: `optimize(obj, x, l, u, method, options)`"))
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 5086 | # In v1.0 we should possibly just overload getproperty
# Base.getproperty(w::WrapVar, s::Symbol) = getfield(w.v, s)
struct MaximizationWrapper{T}
res::T
end
res(r::MaximizationWrapper) = r.res
# ==============================================================================
# Univariate wrappers
# ==============================================================================
function maximize(f, lb::Real, ub::Real, method::AbstractOptimizer; kwargs...)
fmax = x->-f(x)
MaximizationWrapper(optimize(fmax, lb, ub, method; kwargs...))
end
function maximize(f, lb::Real, ub::Real; kwargs...)
fmax = x->-f(x)
MaximizationWrapper(optimize(fmax, lb, ub; kwargs...))
end
# ==============================================================================
# Multivariate wrappers
# ==============================================================================
function maximize(f, x0::AbstractArray; kwargs...)
fmax = x->-f(x)
MaximizationWrapper(optimize(fmax, x0; kwargs...))
end
function maximize(f, x0::AbstractArray, method::AbstractOptimizer, options = Optim.Options(); kwargs...)
fmax = x->-f(x)
MaximizationWrapper(optimize(fmax, x0, method, options; kwargs...))
end
function maximize(f, g, x0::AbstractArray, method::AbstractOptimizer, options = Optim.Options(); kwargs...)
fmax = x->-f(x)
gmax = (G,x)->(g(G,x); G.=-G)
MaximizationWrapper(optimize(fmax, gmax, x0, method, options; kwargs...))
end
function maximize(f, g, h, x0::AbstractArray, method::AbstractOptimizer, options = Optim.Options(); kwargs...)
fmax = x->-f(x)
gmax = (G,x)->(g(G,x); G.=-G)
hmax = (H,x)->(h(H,x); H.=-H)
MaximizationWrapper(optimize(fmax, gmax, hmax, x0, method, options; kwargs...))
end
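# Usage sketch (hedged): `maximize` flips the sign of the objective (and of
# any derivatives) before delegating to `optimize`, so results must be read
# back through the wrapper accessors defined below. `_maximize_demo` is an
# illustrative helper only.
function _maximize_demo()
r = maximize(x -> -(x[1] - 1.0)^2, [0.0], LBFGS())
return maximizer(r), maximum(r) # ≈ ([1.0], 0.0)
end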
minimum(r::MaximizationWrapper) = error("minimum is not defined for maximization results; use maximum(r)")
maximizer(r::Union{UnivariateOptimizationResults,MultivariateOptimizationResults}) = error("maximizer is not defined for minimization results; use minimizer(r)")
maximizer(r::MaximizationWrapper) = minimizer(res(r))
maximum(r::Union{UnivariateOptimizationResults,MultivariateOptimizationResults}) = error("maximum is not defined for minimization results; use minimum(r)")
maximum(r::MaximizationWrapper) = -minimum(res(r))
Base.summary(r::MaximizationWrapper) = summary(res(r))
for api_method in (:lower_bound, :upper_bound, :rel_tol, :abs_tol, :iterations, :initial_state, :converged, :x_tol, :x_converged,
:x_abschange, :g_tol, :g_converged, :g_residual, :f_tol, :f_converged,
:f_increased, :f_relchange, :iteration_limit_reached, :f_calls,
:g_calls, :h_calls)
@eval $api_method(r::MaximizationWrapper) = $api_method(res(r))
end
function Base.show(io::IO, r::MaximizationWrapper{<:UnivariateOptimizationResults})
@printf io "Results of Maximization Algorithm\n"
@printf io " * Algorithm: %s\n" summary(r)
@printf io " * Search Interval: [%f, %f]\n" lower_bound(r) upper_bound(r)
@printf io " * Maximizer: %e\n" maximizer(r)
@printf io " * Maximum: %e\n" maximum(r)
@printf io " * Iterations: %d\n" iterations(r)
@printf io " * Convergence: max(|x - x_upper|, |x - x_lower|) <= 2*(%.1e*|x|+%.1e): %s\n" rel_tol(r) abs_tol(r) converged(r)
@printf io " * Objective Function Calls: %d" f_calls(r)
return
end
function Base.show(io::IO, r::MaximizationWrapper{<:MultivariateOptimizationResults})
take = Iterators.take
@printf io "Results of Optimization Algorithm\n"
@printf io " * Algorithm: %s\n" summary(r.res)
if length(join(initial_state(r), ",")) < 40
@printf io " * Starting Point: [%s]\n" join(initial_state(r), ",")
else
@printf io " * Starting Point: [%s, ...]\n" join(take(initial_state(r),
2), ",")
end
if length(join(maximizer(r), ",")) < 40
@printf io " * Maximizer: [%s]\n" join(maximizer(r), ",")
else
@printf io " * Maximizer: [%s, ...]\n" join(take(maximizer(r), 2), ",")
end
@printf io " * Maximum: %e\n" maximum(r)
@printf io " * Iterations: %d\n" iterations(r)
@printf io " * Convergence: %s\n" converged(r)
if isa(r.res.method, NelderMead)
@printf io " * √(Σ(yᵢ-ȳ)²)/n < %.1e: %s\n" g_tol(r) g_converged(r)
else
@printf io " * |x - x'| ≤ %.1e: %s \n" x_tol(r) x_converged(r)
@printf io " |x - x'| = %.2e \n" x_abschange(r)
@printf io " * |f(x) - f(x')| ≤ %.1e |f(x)|: %s\n" f_tol(r) f_converged(r)
@printf io " |f(x) - f(x')| = %.2e |f(x)|\n" f_relchange(r)
@printf io " * |g(x)| ≤ %.1e: %s \n" g_tol(r) g_converged(r)
@printf io " |g(x)| = %.2e \n" g_residual(r)
@printf io " * Stopped by a decreasing objective: %s\n" (f_increased(r) && !iteration_limit_reached(r))
end
@printf io " * Reached Maximum Number of Iterations: %s\n" iteration_limit_reached(r)
@printf io " * Objective Calls: %d" f_calls(r)
if !(isa(r.res.method, NelderMead) || isa(r.res.method, SimulatedAnnealing))
@printf io "\n * Gradient Calls: %d" g_calls(r)
end
if isa(r.res.method, Newton) || isa(r.res.method, NewtonTrustRegion)
@printf io "\n * Hessian Calls: %d" h_calls(r)
end
return
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 9416 | abstract type AbstractOptimizer end
abstract type AbstractConstrainedOptimizer <: AbstractOptimizer end
abstract type ZerothOrderOptimizer <: AbstractOptimizer end
abstract type FirstOrderOptimizer <: AbstractOptimizer end
abstract type SecondOrderOptimizer <: AbstractOptimizer end
abstract type UnivariateOptimizer <: AbstractOptimizer end
abstract type AbstractOptimizerState end
abstract type ZerothOrderState <: AbstractOptimizerState end
"""
Configurable options with defaults (values 0 and NaN indicate unlimited):
```
x_abstol::Real = 0.0,
x_reltol::Real = 0.0,
f_abstol::Real = 0.0,
f_reltol::Real = 0.0,
g_abstol::Real = 1e-8,
g_reltol::Real = 1e-8,
outer_x_abstol::Real = 0.0,
outer_x_reltol::Real = 0.0,
outer_f_abstol::Real = 0.0,
outer_f_reltol::Real = 0.0,
outer_g_abstol::Real = 1e-8,
outer_g_reltol::Real = 1e-8,
f_calls_limit::Int = 0,
g_calls_limit::Int = 0,
h_calls_limit::Int = 0,
allow_f_increases::Bool = true,
allow_outer_f_increases::Bool = true,
successive_f_tol::Int = 1,
iterations::Int = 1_000,
outer_iterations::Int = 1000,
store_trace::Bool = false,
show_trace::Bool = false,
extended_trace::Bool = false,
show_warnings::Bool = true,
show_every::Int = 1,
callback = nothing,
time_limit = NaN
```
See http://julianlsolvers.github.io/Optim.jl/stable/#user/config/
"""
struct Options{T, TCallback}
x_abstol::T
x_reltol::T
f_abstol::T
f_reltol::T
g_abstol::T
g_reltol::T
outer_x_abstol::T
outer_x_reltol::T
outer_f_abstol::T
outer_f_reltol::T
outer_g_abstol::T
outer_g_reltol::T
f_calls_limit::Int
g_calls_limit::Int
h_calls_limit::Int
allow_f_increases::Bool
allow_outer_f_increases::Bool
successive_f_tol::Int
iterations::Int
outer_iterations::Int
store_trace::Bool
trace_simplex::Bool
show_trace::Bool
extended_trace::Bool
show_warnings::Bool
show_every::Int
callback::TCallback
time_limit::Float64
end
function Options(;
x_tol = nothing,
f_tol = nothing,
g_tol = nothing,
x_abstol::Real = 0.0,
x_reltol::Real = 0.0,
f_abstol::Real = 0.0,
f_reltol::Real = 0.0,
g_abstol::Real = 1e-8,
g_reltol::Real = 1e-8,
outer_x_tol = nothing,
outer_f_tol = nothing,
outer_g_tol = nothing,
outer_x_abstol::Real = 0.0,
outer_x_reltol::Real = 0.0,
outer_f_abstol::Real = 0.0,
outer_f_reltol::Real = 0.0,
outer_g_abstol::Real = 1e-8,
outer_g_reltol::Real = 1e-8,
f_calls_limit::Int = 0,
g_calls_limit::Int = 0,
h_calls_limit::Int = 0,
allow_f_increases::Bool = true,
allow_outer_f_increases::Bool = true,
successive_f_tol::Int = 1,
iterations::Int = 1_000,
outer_iterations::Int = 1000,
store_trace::Bool = false,
trace_simplex::Bool = false,
show_trace::Bool = false,
extended_trace::Bool = false,
show_warnings::Bool = true,
show_every::Int = 1,
callback = nothing,
time_limit = NaN)
show_every = show_every > 0 ? show_every : 1
#if extended_trace && callback === nothing
# show_trace = true
#end
if !(x_tol === nothing)
x_abstol = x_tol
end
if !(g_tol === nothing)
g_abstol = g_tol
end
if !(f_tol === nothing)
f_reltol = f_tol
end
if !(outer_x_tol === nothing)
outer_x_abstol = outer_x_tol
end
if !(outer_g_tol === nothing)
outer_g_abstol = outer_g_tol
end
if !(outer_f_tol === nothing)
outer_f_reltol = outer_f_tol
end
Options(promote(x_abstol, x_reltol, f_abstol, f_reltol, g_abstol, g_reltol, outer_x_abstol, outer_x_reltol, outer_f_abstol, outer_f_reltol, outer_g_abstol, outer_g_reltol)..., f_calls_limit, g_calls_limit, h_calls_limit,
allow_f_increases, allow_outer_f_increases, successive_f_tol, Int(iterations), Int(outer_iterations), store_trace, trace_simplex, show_trace, extended_trace, show_warnings,
Int(show_every), callback, Float64(time_limit))
end
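# Illustrative check (not part of the package): the tolerance aliases above
# funnel into the absolute/relative fields, so these two are equivalent.
function _options_demo()
a = Options(g_tol = 1e-10, iterations = 500)
b = Options(g_abstol = 1e-10, iterations = 500)
return a.g_abstol == b.g_abstol # true
end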
_show_helper(output, k, v) = output * "$k = $v, "
_show_helper(output, k, ::Nothing) = output
function Base.show(io::IO, o::Optim.Options)
content = foldl(fieldnames(typeof(o)), init = "Optim.Options(") do output, k
v = getfield(o, k)
return _show_helper(output, k, v)
end
print(io, content)
println(io, ")")
end
function Base.show(io::IO, ::MIME"text/plain", o::Optim.Options)
for k in fieldnames(typeof(o))
v = getfield(o, k)
if v isa Nothing
@printf io "%24s = %s\n" k "nothing"
else
@printf io "%24s = %s\n" k v
end
end
end
function print_header(options::Options)
if options.show_trace
@printf "Iter Function value Gradient norm \n"
end
end
function print_header(method::AbstractOptimizer)
@printf "Iter Function value Gradient norm \n"
end
struct OptimizationState{Tf<:Real, T <: AbstractOptimizer}
iteration::Int
value::Tf
g_norm::Tf
metadata::Dict
end
const OptimizationTrace{Tf, T} = Vector{OptimizationState{Tf, T}}
abstract type OptimizationResults end
mutable struct MultivariateOptimizationResults{O, Tx, Tc, Tf, M, Tls, Tsb} <: OptimizationResults
method::O
initial_x::Tx
minimizer::Tx
minimum::Tf
iterations::Int
iteration_converged::Bool
x_converged::Bool
x_abstol::Tf
x_reltol::Tf
x_abschange::Tc
x_relchange::Tc
f_converged::Bool
f_abstol::Tf
f_reltol::Tf
f_abschange::Tc
f_relchange::Tc
g_converged::Bool
g_abstol::Tf
g_residual::Tc
f_increased::Bool
trace::M
f_calls::Int
g_calls::Int
h_calls::Int
ls_success::Tls
time_limit::Float64
time_run::Float64
stopped_by::Tsb
end
# pick_best_x and pick_best_f are used to pick the minimizer if we stopped because
# f increased and we didn't allow it
pick_best_x(f_increased, state) = f_increased ? state.x_previous : state.x
pick_best_f(f_increased, state, d) = f_increased ? state.f_x_previous : value(d)
function Base.show(io::IO, t::OptimizationState)
@printf io "%6d %14e %14e\n" t.iteration t.value t.g_norm
if !isempty(t.metadata)
for (key, value) in t.metadata
@printf io " * %s: %s\n" key value
end
end
return
end
function Base.show(io::IO, tr::OptimizationTrace)
@printf io "Iter Function value Gradient norm \n"
@printf io "------ -------------- --------------\n"
for state in tr
show(io, state)
end
return
end
function Base.show(io::IO, r::MultivariateOptimizationResults)
take = Iterators.take
if converged(r)
status_string = "success"
else
status_string = "failure"
end
if iteration_limit_reached(r)
status_string *= " (reached maximum number of iterations)"
end
if f_increased(r) && !iteration_limit_reached(r)
status_string *= " (objective increased between iterations)"
end
if isa(r.ls_success, Bool) && !r.ls_success
status_string *= " (line search failed)"
end
if time_run(r) > time_limit(r)
status_string *= " (exceeded time limit of $(time_limit(r)))"
end
@printf io " * Status: %s\n\n" status_string
@printf io " * Candidate solution\n"
@printf io " Final objective value: %e\n" minimum(r)
@printf io "\n"
@printf io " * Found with\n"
@printf io " Algorithm: %s\n" summary(r)
@printf io "\n"
@printf io " * Convergence measures\n"
if isa(r.method, NelderMead)
@printf io " √(Σ(yᵢ-ȳ)²)/n %s %.1e\n" (g_converged(r) ? "≤" : "≰") g_tol(r)
else
@printf io " |x - x'| = %.2e %s %.1e\n" x_abschange(r) (x_abschange(r) <= x_abstol(r) ? "≤" : "≰") x_abstol(r)
@printf io " |x - x'|/|x'| = %.2e %s %.1e\n" x_relchange(r) (x_relchange(r) <= x_reltol(r) ? "≤" : "≰") x_reltol(r)
@printf io " |f(x) - f(x')| = %.2e %s %.1e\n" f_abschange(r) (f_abschange(r) <= f_abstol(r) ? "≤" : "≰") f_abstol(r)
@printf io " |f(x) - f(x')|/|f(x')| = %.2e %s %.1e\n" f_relchange(r) (f_relchange(r) <= f_reltol(r) ? "≤" : "≰") f_reltol(r)
@printf io " |g(x)| = %.2e %s %.1e\n" g_residual(r) (g_residual(r) <= g_tol(r) ? "≤" : "≰") g_tol(r)
end
@printf io "\n"
@printf io " * Work counters\n"
@printf io " Seconds run: %d (vs limit %d)\n" time_run(r) isnan(time_limit(r)) ? Inf : time_limit(r)
@printf io " Iterations: %d\n" iterations(r)
@printf io " f(x) calls: %d\n" f_calls(r)
if !(isa(r.method, NelderMead) || isa(r.method, SimulatedAnnealing))
@printf io " ∇f(x) calls: %d\n" g_calls(r)
end
if isa(r.method, Newton) || isa(r.method, NewtonTrustRegion)
@printf io " ∇²f(x) calls: %d\n" h_calls(r)
end
return
end
function Base.append!(a::MultivariateOptimizationResults, b::MultivariateOptimizationResults)
a.iterations += iterations(b)
a.minimizer = minimizer(b)
a.minimum = minimum(b)
a.iteration_converged = iteration_limit_reached(b)
a.x_converged = x_converged(b)
a.f_converged = f_converged(b)
a.g_converged = g_converged(b)
append!(a.trace, b.trace)
a.f_calls += f_calls(b)
a.g_calls += g_calls(b)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2000 | # Some Boiler-plate code for preconditioning
#
# Meaning of P:
# P ≈ ∇²E, so the preconditioned gradient is P^{-1} ∇E
# P can be an arbitrary type but at a minimum it MUST provide:
# ldiv!(x, P, b) -> x = P \ b
# dot(x, P, b) -> x' P b
#
# If `dot` is not provided, then Optim.jl will try to define it via
# dot(x, P, b) = dot(x, mul!(similar(x), P, b))
#
# finally the preconditioner can be updated after each x-update using
# precondprep!
# but this is passed as an argument at the moment!
#
# Fallback
ldiv!(out, M, A) = LinearAlgebra.ldiv!(out, M, A)
dot(a, M, b) = LinearAlgebra.dot(a, M, b)
dot(a, b) = LinearAlgebra.dot(a, b)
#####################################################
# [0] Defaults and aliases for easier reading of the code
# these can also be over-written if necessary.
# default preconditioner update
precondprep!(P, x) = nothing
#####################################################
# [1] Empty preconditioner = Identity
#
# out = P^{-1} * A
ldiv!(out, ::Nothing, A) = copyto!(out, A)
# A' * P B
dot(A, ::Nothing, B) = dot(A, B)
#####################################################
# [2] Diagonal preconditioner
# P = Diag(d)
# Covered by base
#####################################################
# [3] Inverse Diagonal preconditioner
# here, P is stored by the entries of its inverse
# TODO: maybe implement this in Base?
mutable struct InverseDiagonal
diag
end
ldiv!(out::AbstractArray, P::InverseDiagonal, A::AbstractArray) = copyto!(out, A .* P.diag)
dot(A::AbstractArray, P::InverseDiagonal, B::Vector) = dot(A, B ./ P.diag)
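# Usage sketch (hedged): an InverseDiagonal is typically handed to a
# first-order method through its `P` keyword (optionally refreshed each
# iteration via `precondprep`). `_invdiag_demo` is illustrative only.
function _invdiag_demo(n)
P = InverseDiagonal(fill(2.0, n))
out = ldiv!(zeros(n), P, ones(n)) # applies P⁻¹: every entry becomes 2.0
return out
end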
#####################################################
# [4] Matrix Preconditioner
# the assumption here is that P is given by its inverse, which is typical
# > ldiv! is about to be moved to Base, so we need a temporary hack
# > mul! is already in Base, which defines `dot`
# nothing to do!
ldiv!(x, P::AbstractMatrix, b) = copyto!(x, P \ b)
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 8269 | # Multivariate optimization
function check_kwargs(kwargs, fallback_method)
kws = Dict{Symbol, Any}()
method = nothing
for kwarg in kwargs
if kwarg[1] != :method
kws[kwarg[1]] = kwarg[2]
else
method = kwarg[2]
end
end
if method === nothing
method = fallback_method
end
kws, method
end
default_options(method::AbstractOptimizer) = NamedTuple()
function add_default_opts!(opts::Dict{Symbol, Any}, method::AbstractOptimizer)
for newopt in pairs(default_options(method))
if !haskey(opts, newopt[1])
opts[newopt[1]] = newopt[2]
end
end
end
fallback_method(f) = NelderMead()
fallback_method(f, g!) = LBFGS()
fallback_method(f, g!, h!) = Newton()
function fallback_method(f::InplaceObjective)
if !(f.fdf isa Nothing)
if !(f.hv isa Nothing)
return KrylovTrustRegion()
end
return LBFGS()
elseif !(f.fgh isa Nothing)
return Newton()
elseif !(f.fghv isa Nothing)
return KrylovTrustRegion()
end
end
function fallback_method(f::NotInplaceObjective)
if !(f.fdf isa Nothing)
return LBFGS()
elseif !(f.fgh isa Nothing)
return LBFGS()
else
throw(ArgumentError("optimize does not support $(typeof(f)) as the first positional argument"))
end
end
fallback_method(f::NotInplaceObjective{<:Nothing, <:Nothing, <:Any}) = Newton()
fallback_method(d::OnceDifferentiable) = LBFGS()
fallback_method(d::TwiceDifferentiable) = Newton()
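# Illustrative summary of the fallbacks above:
# optimize(f, x0) -> NelderMead()
# optimize(f, g!, x0) -> LBFGS()
# optimize(f, g!, h!, x0) -> Newton()
# while prebuilt objectives pick the method matching the derivatives they carry.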
# promote the objective (tuple of callables or an AbstractObjective) according to method requirement
promote_objtype(method, initial_x, autodiff::Symbol, inplace::Bool, args...) = error("No default objective type for $method and $args.")
# actual promotions, notice that (args...) captures FirstOrderOptimizer and NonDifferentiable, etc
promote_objtype(method::ZerothOrderOptimizer, x, autodiff::Symbol, inplace::Bool, args...) = NonDifferentiable(args..., x, real(zero(eltype(x))))
promote_objtype(method::FirstOrderOptimizer, x, autodiff::Symbol, inplace::Bool, f) = OnceDifferentiable(f, x, real(zero(eltype(x))); autodiff = autodiff)
promote_objtype(method::FirstOrderOptimizer, x, autodiff::Symbol, inplace::Bool, args...) = OnceDifferentiable(args..., x, real(zero(eltype(x))); inplace = inplace)
promote_objtype(method::FirstOrderOptimizer, x, autodiff::Symbol, inplace::Bool, f, g, h) = OnceDifferentiable(f, g, x, real(zero(eltype(x))); inplace = inplace)
promote_objtype(method::SecondOrderOptimizer, x, autodiff::Symbol, inplace::Bool, f) = TwiceDifferentiable(f, x, real(zero(eltype(x))); autodiff = autodiff)
promote_objtype(method::SecondOrderOptimizer, x, autodiff::Symbol, inplace::Bool, f::NotInplaceObjective) = TwiceDifferentiable(f, x, real(zero(eltype(x))))
promote_objtype(method::SecondOrderOptimizer, x, autodiff::Symbol, inplace::Bool, f::InplaceObjective) = TwiceDifferentiable(f, x, real(zero(eltype(x))))
promote_objtype(method::SecondOrderOptimizer, x, autodiff::Symbol, inplace::Bool, f::NLSolversBase.InPlaceObjectiveFGHv) = TwiceDifferentiableHV(f, x)
promote_objtype(method::SecondOrderOptimizer, x, autodiff::Symbol, inplace::Bool, f::NLSolversBase.InPlaceObjectiveFG_Hv) = TwiceDifferentiableHV(f, x)
promote_objtype(method::SecondOrderOptimizer, x, autodiff::Symbol, inplace::Bool, f, g) = TwiceDifferentiable(f, g, x, real(zero(eltype(x))); inplace = inplace, autodiff = autodiff)
promote_objtype(method::SecondOrderOptimizer, x, autodiff::Symbol, inplace::Bool, f, g, h) = TwiceDifferentiable(f, g, h, x, real(zero(eltype(x))); inplace = inplace)
# no-op
promote_objtype(method::ZerothOrderOptimizer, x, autodiff::Symbol, inplace::Bool, nd::NonDifferentiable) = nd
promote_objtype(method::ZerothOrderOptimizer, x, autodiff::Symbol, inplace::Bool, od::OnceDifferentiable) = od
promote_objtype(method::FirstOrderOptimizer, x, autodiff::Symbol, inplace::Bool, od::OnceDifferentiable) = od
promote_objtype(method::ZerothOrderOptimizer, x, autodiff::Symbol, inplace::Bool, td::TwiceDifferentiable) = td
promote_objtype(method::FirstOrderOptimizer, x, autodiff::Symbol, inplace::Bool, td::TwiceDifferentiable) = td
promote_objtype(method::SecondOrderOptimizer, x, autodiff::Symbol, inplace::Bool, td::TwiceDifferentiable) = td
# if no method or options are present
function optimize(f, initial_x::AbstractArray; inplace = true, autodiff = :finite, kwargs...)
method = fallback_method(f)
checked_kwargs, method = check_kwargs(kwargs, method)
d = promote_objtype(method, initial_x, autodiff, inplace, f)
add_default_opts!(checked_kwargs, method)
options = Options(; checked_kwargs...)
optimize(d, initial_x, method, options)
end
function optimize(f, g, initial_x::AbstractArray; inplace = true, autodiff = :finite, kwargs...)
method = fallback_method(f, g)
checked_kwargs, method = check_kwargs(kwargs, method)
d = promote_objtype(method, initial_x, autodiff, inplace, f, g)
add_default_opts!(checked_kwargs, method)
options = Options(; checked_kwargs...)
optimize(d, initial_x, method, options)
end
function optimize(f, g, h, initial_x::AbstractArray; inplace = true, autodiff = :finite, kwargs...)
method = fallback_method(f, g, h)
checked_kwargs, method = check_kwargs(kwargs, method)
d = promote_objtype(method, initial_x, autodiff, inplace, f, g, h)
add_default_opts!(checked_kwargs, method)
options = Options(; checked_kwargs...)
optimize(d, initial_x, method, options)
end
# no method supplied with objective
function optimize(d::T, initial_x::AbstractArray, options::Options) where T<:AbstractObjective
optimize(d, initial_x, fallback_method(d), options)
end
# no method supplied with inplace and autodiff keywords becauase objective is not supplied
function optimize(f, initial_x::AbstractArray, options::Options; inplace = true, autodiff = :finite)
method = fallback_method(f)
d = promote_objtype(method, initial_x, autodiff, inplace, f)
optimize(d, initial_x, method, options)
end
function optimize(f, g, initial_x::AbstractArray, options::Options; inplace = true, autodiff = :finite)
method = fallback_method(f, g)
d = promote_objtype(method, initial_x, autodiff, inplace, f, g)
optimize(d, initial_x, method, options)
end
function optimize(f, g, h, initial_x::AbstractArray{T}, options::Options; inplace = true, autodiff = :finite) where {T}
method = fallback_method(f, g, h)
d = promote_objtype(method, initial_x, autodiff, inplace, f, g, h)
optimize(d, initial_x, method, options)
end
# potentially everything is supplied (besides caches)
function optimize(f, initial_x::AbstractArray, method::AbstractOptimizer,
options::Options = Options(;default_options(method)...); inplace = true, autodiff = :finite)
d = promote_objtype(method, initial_x, autodiff, inplace, f)
optimize(d, initial_x, method, options)
end
function optimize(f, c::AbstractConstraints, initial_x::AbstractArray, method::AbstractOptimizer,
options::Options = Options(;default_options(method)...); inplace = true, autodiff = :finite)
d = promote_objtype(method, initial_x, autodiff, inplace, f)
optimize(d, c, initial_x, method, options)
end
function optimize(f, g, initial_x::AbstractArray, method::AbstractOptimizer,
options::Options = Options(;default_options(method)...); inplace = true, autodiff = :finite)
d = promote_objtype(method, initial_x, autodiff, inplace, f, g)
optimize(d, initial_x, method, options)
end
function optimize(f, g, h, initial_x::AbstractArray{T}, method::AbstractOptimizer,
options::Options = Options(;default_options(method)...); inplace = true, autodiff = :finite) where T
d = promote_objtype(method, initial_x, autodiff, inplace, f, g, h)
optimize(d, initial_x, method, options)
end
function optimize(d::D, initial_x::AbstractArray, method::SecondOrderOptimizer,
options::Options = Options(;default_options(method)...); autodiff = :finite, inplace = true) where {D <: Union{NonDifferentiable, OnceDifferentiable}}
d = promote_objtype(method, initial_x, autodiff, inplace, d)
optimize(d, initial_x, method, options)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 7455 | update_g!(d, state, method) = nothing
function update_g!(d, state, method::M) where M<:Union{FirstOrderOptimizer, Newton}
# Update the function value and gradient
value_gradient!(d, state.x)
if M <: FirstOrderOptimizer #only for methods that support manifold optimization
project_tangent!(method.manifold, gradient(d), state.x)
end
end
update_fg!(d, state, method) = nothing
update_fg!(d, state, method::ZerothOrderOptimizer) = value!(d, state.x)
function update_fg!(d, state, method::M) where M<:Union{FirstOrderOptimizer, Newton}
value_gradient!(d, state.x)
if M <: FirstOrderOptimizer #only for methods that support manifold optimization
project_tangent!(method.manifold, gradient(d), state.x)
end
end
# Update the Hessian
update_h!(d, state, method) = nothing
update_h!(d, state, method::SecondOrderOptimizer) = hessian!(d, state.x)
after_while!(d, state, method, options) = nothing
function initial_convergence(d, state, method::AbstractOptimizer, initial_x, options)
gradient!(d, initial_x)
stopped = !isfinite(value(d)) || any(!isfinite, gradient(d))
maximum(abs, gradient(d)) <= options.g_abstol, stopped
end
function initial_convergence(d, state, method::ZerothOrderOptimizer, initial_x, options)
false, false
end
function optimize(d::D, initial_x::Tx, method::M,
options::Options{T, TCallback} = Options(;default_options(method)...),
state = initial_state(method, options, d, initial_x)) where {D<:AbstractObjective, M<:AbstractOptimizer, Tx <: AbstractArray, T, TCallback}
t0 = time() # Initial time stamp used to control early stopping by options.time_limit
tr = OptimizationTrace{typeof(value(d)), typeof(method)}()
tracing = options.store_trace || options.show_trace || options.extended_trace || options.callback !== nothing
stopped, stopped_by_callback, stopped_by_time_limit = false, false, false
f_limit_reached, g_limit_reached, h_limit_reached = false, false, false
x_converged, f_converged, f_increased, counter_f_tol = false, false, false, 0
f_converged, g_converged = initial_convergence(d, state, method, initial_x, options)
converged = f_converged || g_converged
# prepare iteration counter (used to make "initial state" trace entry)
iteration = 0
options.show_trace && print_header(method)
_time = time()
trace!(tr, d, state, iteration, method, options, _time-t0)
ls_success::Bool = true
while !converged && !stopped && iteration < options.iterations
iteration += 1
ls_success = !update_state!(d, state, method)
if !ls_success
break # it returns true if it's forced by something in update! to stop (eg dx_dg == 0.0 in BFGS, or linesearch errors)
end
if !(method isa NewtonTrustRegion)
update_g!(d, state, method) # TODO: Should this be `update_fg!`?
end
x_converged, f_converged,
g_converged, f_increased = assess_convergence(state, d, options)
# For some problems it may be useful to require `f_converged` to be hit multiple times
# TODO: Do the same for x_tol?
counter_f_tol = f_converged ? counter_f_tol+1 : 0
converged = x_converged || g_converged || (counter_f_tol > options.successive_f_tol)
if !(converged && method isa Newton) && !(method isa NewtonTrustRegion)
update_h!(d, state, method) # only relevant if not converged
end
if tracing
# update trace; callbacks can stop routine early by returning true
stopped_by_callback = trace!(tr, d, state, iteration, method, options, time()-t0)
end
# Check time_limit; if none is provided it is NaN and the comparison
# will always return false.
_time = time()
stopped_by_time_limit = _time-t0 > options.time_limit
f_limit_reached = options.f_calls_limit > 0 && f_calls(d) >= options.f_calls_limit ? true : false
g_limit_reached = options.g_calls_limit > 0 && g_calls(d) >= options.g_calls_limit ? true : false
h_limit_reached = options.h_calls_limit > 0 && h_calls(d) >= options.h_calls_limit ? true : false
if (f_increased && !options.allow_f_increases) || stopped_by_callback ||
stopped_by_time_limit || f_limit_reached || g_limit_reached || h_limit_reached
stopped = true
end
if method isa NewtonTrustRegion
# If the trust region radius keeps on reducing we need to stop
# because something is wrong. Wrong gradients or a non-differentiability
# at the solution could be explanations.
if state.delta <= method.delta_min
stopped = true
end
end
if g_calls(d) > 0 && !all(isfinite, gradient(d))
options.show_warnings && @warn "Terminated early due to NaN in gradient."
break
end
if h_calls(d) > 0 && !(d isa TwiceDifferentiableHV) && !all(isfinite, hessian(d))
options.show_warnings && @warn "Terminated early due to NaN in Hessian."
break
end
end # while
after_while!(d, state, method, options)
# we can just check minimum, as we've earlier enforced same types/eltypes
# in variables besides the option settings
Tf = typeof(value(d))
f_incr_pick = f_increased && !options.allow_f_increases
stopped_by =(f_limit_reached=f_limit_reached,
g_limit_reached=g_limit_reached,
h_limit_reached=h_limit_reached,
time_limit=stopped_by_time_limit,
callback=stopped_by_callback,
f_increased=f_incr_pick)
return MultivariateOptimizationResults{typeof(method),Tx,typeof(x_abschange(state)),Tf,typeof(tr), Bool, typeof(stopped_by)}(method,
initial_x,
pick_best_x(f_incr_pick, state),
pick_best_f(f_incr_pick, state, d),
iteration,
iteration == options.iterations,
x_converged,
Tf(options.x_abstol),
Tf(options.x_reltol),
x_abschange(state),
x_relchange(state),
f_converged,
Tf(options.f_abstol),
Tf(options.f_reltol),
f_abschange(d, state),
f_relchange(d, state),
g_converged,
Tf(options.g_abstol),
g_residual(d, state),
f_increased,
tr,
f_calls(d),
g_calls(d),
h_calls(d),
ls_success,
options.time_limit,
_time-t0,
stopped_by,
)
end
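# Usage sketch (hedged): the `callback` option hooks into the tracing branch
# above; with store_trace = false it receives each OptimizationState, and
# returning `true` sets `stopped_by_callback`. `_callback_demo` is illustrative.
function _callback_demo(f, x0)
cb = os -> os.iteration >= 10 # halt after ten iterations
return optimize(f, x0, LBFGS(), Options(callback = cb))
end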
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 17514 | import NLSolversBase: value, value!, value!!, gradient, gradient!, value_gradient!, value_gradient!!
####### FIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIIX THE MIDDLE OF BOX CASE THAT WAS THERE
mutable struct BarrierWrapper{TO, TB, Tm, TF, TDF} <: AbstractObjective
obj::TO
b::TB # barrier
mu::Tm # barrier multiplier
Fb::TF
Ftotal::TF
DFb::TDF
DFtotal::TDF
end
f_calls(obj::BarrierWrapper) = f_calls(obj.obj)
g_calls(obj::BarrierWrapper) = g_calls(obj.obj)
h_calls(obj::BarrierWrapper) = h_calls(obj.obj)
function BarrierWrapper(obj::NonDifferentiable, mu, lower, upper)
barrier_term = BoxBarrier(lower, upper)
BarrierWrapper(obj, barrier_term, mu, copy(obj.F), copy(obj.F), nothing, nothing)
end
function BarrierWrapper(obj::OnceDifferentiable, mu, lower, upper)
barrier_term = BoxBarrier(lower, upper)
BarrierWrapper(obj, barrier_term, mu, copy(obj.F), copy(obj.F), copy(obj.DF), copy(obj.DF))
end
struct BoxBarrier{L, U}
lower::L
upper::U
end
function in_box(bb::BoxBarrier, x)
all(x->x[1]>=x[2] && x[1]<=x[3], zip(x, bb.lower, bb.upper))
end
in_box(bw::BarrierWrapper, x) = in_box(bw.b, x)
# evaluates the value and gradient components coming from the log barrier
function _barrier_term_value(x::T, l, u) where T
dxl = x - l
dxu = u - x
if dxl <= 0 || dxu <= 0
return T(Inf)
end
vl = ifelse(isfinite(dxl), -log(dxl), T(0))
vu = ifelse(isfinite(dxu), -log(dxu), T(0))
return vl + vu
end
function _barrier_term_gradient(x::T, l, u) where T
dxl = x - l
dxu = u - x
g = zero(T)
if isfinite(l)
g += -one(T)/dxl
end
if isfinite(u)
g += one(T)/dxu
end
return g
end
function value_gradient!(bb::BoxBarrier, g, x)
g .= _barrier_term_gradient.(x, bb.lower, bb.upper)
value(bb, x)
end
function gradient(bb::BoxBarrier, g, x)
g = copy(g)
g .= _barrier_term_gradient.(x, bb.lower, bb.upper)
end
# Wrappers
function value!!(bw::BarrierWrapper, x)
bw.Fb = value(bw.b, x)
bw.Ftotal = bw.mu*bw.Fb
if in_box(bw, x)
value!!(bw.obj, x)
bw.Ftotal += value(bw.obj)
end
end
function value_gradient!!(bw::BarrierWrapper, x)
bw.Fb = value(bw.b, x)
bw.Ftotal = bw.mu*bw.Fb
bw.DFb .= _barrier_term_gradient.(x, bw.b.lower, bw.b.upper)
bw.DFtotal .= bw.mu .* bw.DFb
if in_box(bw, x)
value_gradient!!(bw.obj, x)
bw.Ftotal += value(bw.obj)
bw.DFtotal .+= gradient(bw.obj)
end
end
function value_gradient!(bb::BarrierWrapper, x)
bb.DFb .= _barrier_term_gradient.(x, bb.b.lower, bb.b.upper)
bb.Fb = value(bb.b, x)
bb.DFtotal .= bb.mu .* bb.DFb
bb.Ftotal = bb.mu*bb.Fb
if in_box(bb, x)
value_gradient!(bb.obj, x)
bb.DFtotal .+= gradient(bb.obj)
bb.Ftotal += value(bb.obj)
end
end
value(bb::BoxBarrier, x) = mapreduce(x->_barrier_term_value(x...), +, zip(x, bb.lower, bb.upper))
function value!(obj::BarrierWrapper, x)
obj.Fb = value(obj.b, x)
obj.Ftotal = obj.mu*obj.Fb
if in_box(obj, x)
value!(obj.obj, x)
obj.Ftotal += value(obj.obj)
end
obj.Ftotal
end
value(obj::BarrierWrapper) = obj.Ftotal
function value(obj::BarrierWrapper, x)
F = obj.mu*value(obj.b, x)
if in_box(obj, x)
F += value(obj.obj, x)
end
F
end
function gradient!(obj::BarrierWrapper, x)
gradient!(obj.obj, x)
obj.DFb .= gradient(obj.b, obj.DFb, x) # this should just be inplace?
obj.DFtotal .= gradient(obj.obj) .+ obj.mu .* obj.DFb
end
gradient(obj::BarrierWrapper) = obj.DFtotal
# this mutates mu but not the gradients
# Super unsafe in that it depends on x_df being correct!
function initial_mu(obj::BarrierWrapper, F)
T = typeof(obj.Fb) # this will not work if F is real, G is complex
gbarrier = map(x->(isfinite.(x[2]) ? one(T)/(x[1]-x[2]) : zero(T)) + (isfinite(x[3]) ? one(T)/(x[3]-x[1]) : zero(T)), zip(obj.obj.x_f, obj.b.lower, obj.b.upper))
# obj.mu = initial_mu(gradient(obj.obj), gradient(obj.b, obj.DFb, obj.obj.x_df), T(F.mufactor), T(F.mu0))
obj.mu = initial_mu(gradient(obj.obj), gbarrier, T(F.mufactor), T(F.mu0))
end
# Attempt to compute a reasonable default mu: at the starting
# position, the gradient of the input function should dominate the
# gradient of the barrier.
function initial_mu(gfunc::AbstractArray{T}, gbarrier::AbstractArray{T}, mu0factor::T = T(1)/1000, mu0::T = convert(T, NaN)) where T
if isnan(mu0)
gbarriernorm = sum(abs, gbarrier)
if gbarriernorm > 0
mu = mu0factor*sum(abs, gfunc)/gbarriernorm
else
# Presumably, there is no barrier function
mu = zero(T)
end
else
mu = mu0
end
return mu
end
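# Illustrative arithmetic for the heuristic above: with sum(abs, gfunc) == 10
# and sum(abs, gbarrier) == 2, the default mu0factor of 1/1000 yields
# mu = 1e-3 * 10 / 2 == 5.0e-3, so the barrier gradient starts out dominated
# by the objective gradient.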
function limits_box(x::AbstractArray{T}, d::AbstractArray{T},
l::AbstractArray{T}, u::AbstractArray{T}) where T
alphamax = convert(T, Inf)
@inbounds for i in eachindex(x)
if d[i] < 0
alphamax = min(alphamax, ((l[i]-x[i])+eps(l[i]))/d[i])
elseif d[i] > 0
alphamax = min(alphamax, ((u[i]-x[i])-eps(u[i]))/d[i])
end
end
epsilon = eps(max(alphamax, one(T)))
if !isinf(alphamax) && alphamax > epsilon
alphamax -= epsilon
end
return alphamax
end
# Default preconditioner for box-constrained optimization
# This creates the inverse Hessian of the barrier penalty
function precondprepbox!(P, x, l, u, dfbox)
@. P.diag = 1/(dfbox.mu*(1/(x-l)^2 + 1/(u-x)^2) + 1)
end
struct Fminbox{O<:AbstractOptimizer, T, P} <: AbstractConstrainedOptimizer
method::O
mu0::T
mufactor::T
precondprep::P
end
"""
# Fminbox
## Constructor
```julia
Fminbox(method;
mu0=NaN,
mufactor=0.001,
precondprep(P, x, l, u, mu) -> precondprepbox!(P, x, l, u, mu))
```
## Description
Fminbox implements a primal barrier method for optimization with simple
bounds (or box constraints). A description of an approach very close to
the one implemented here can be found in section 19.6 of Nocedal and Wright
(2006).
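## Example
A minimal usage sketch (the objective `f` below is illustrative and assumes
`Optim` and its `LBFGS` optimizer are in scope):
```julia
f(x) = sum(abs2, x .- 2)                  # unconstrained minimizer at [2, 2]
lower, upper = fill(0.0, 2), fill(1.0, 2)
res = optimize(f, lower, upper, fill(0.5, 2), Fminbox(LBFGS()))
# minimizer(res) should be close to [1.0, 1.0], on the boundary of the box
```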
## References
- Nocedal, J. and S. J. Wright (2006), Numerical Optimization, second edition. Springer.
"""
function Fminbox(method::AbstractOptimizer = LBFGS();
mu0::Real = NaN, mufactor::Real = 0.001,
precondprep = (P, x, l, u, mu) -> precondprepbox!(P, x, l, u, mu))
if method isa Newton || method isa NewtonTrustRegion
throw(ArgumentError("Newton and NewtonTrustRegion are not supported as the inner optimizer for Fminbox."))
end
Fminbox(method, promote(mu0, mufactor)..., precondprep) # default optimizer
end
Base.summary(F::Fminbox) = "Fminbox with $(summary(F.method))"
# barrier_method() constructs an optimizer to solve the barrier problem using m = Fminbox.method as the reference.
# Essentially it only updates the P and precondprep fields of `m`.
# fallback
barrier_method(m::AbstractOptimizer, P, precondprep) =
error("You need to specify a valid inner optimizer for Fminbox, $m is not supported. Please consult the documentation.")
barrier_method(m::ConjugateGradient, P, precondprep) =
ConjugateGradient(eta = m.eta, alphaguess = m.alphaguess!,
linesearch = m.linesearch!, P = P,
precondprep = precondprep)
barrier_method(m::LBFGS, P, precondprep) =
LBFGS(alphaguess = m.alphaguess!, linesearch = m.linesearch!, P = P,
precondprep = precondprep)
barrier_method(m::GradientDescent, P, precondprep) =
GradientDescent(alphaguess = m.alphaguess!, linesearch = m.linesearch!, P = P,
precondprep = precondprep)
barrier_method(m::Union{NelderMead, SimulatedAnnealing, ParticleSwarm, BFGS, AbstractNGMRES},
P, precondprep) = m # use `m` as is
function optimize(f,
g,
l::AbstractArray{T},
u::AbstractArray{T},
initial_x::AbstractArray{T},
F::Fminbox = Fminbox(),
options = Options(); inplace = true, autodiff = :finite) where T<:AbstractFloat
g! = inplace ? g : (G, x) -> copyto!(G, g(x))
od = OnceDifferentiable(f, g!, initial_x, zero(T))
optimize(od, l, u, initial_x, F, options)
end
optimize(f, l::Number, u::Number, initial_x::AbstractArray{T}; kwargs...) where T = optimize(f, Fill(T(l), size(initial_x)...), Fill(T(u), size(initial_x)...), initial_x; kwargs...)
optimize(f, l::AbstractArray, u::Number, initial_x::AbstractArray{T}; kwargs...) where T = optimize(f, l, Fill(T(u), size(initial_x)...), initial_x; kwargs...)
optimize(f, l::Number, u::AbstractArray, initial_x::AbstractArray{T}; kwargs...) where T = optimize(f, Fill(T(l), size(initial_x)...), u, initial_x; kwargs...)
optimize(f, l::Number, u::Number, initial_x::AbstractArray{T}, mo::AbstractConstrainedOptimizer, opt::Options=Options(); kwargs...) where T = optimize(f, Fill(T(l), size(initial_x)...), Fill(T(u), size(initial_x)...), initial_x, mo, opt; kwargs...)
optimize(f, l::AbstractArray, u::Number, initial_x::AbstractArray{T}, mo::AbstractConstrainedOptimizer, opt::Options=Options(); kwargs...) where T = optimize(f, l, Fill(T(u), size(initial_x)...), initial_x, mo, opt; kwargs...)
optimize(f, l::Number, u::AbstractArray, initial_x::AbstractArray{T}, mo::AbstractConstrainedOptimizer, opt::Options=Options(); kwargs...) where T = optimize(f, Fill(T(l), size(initial_x)...), u, initial_x, mo, opt; kwargs...)
optimize(f, g, l::Number, u::Number, initial_x::AbstractArray{T}, opt::Options; kwargs...) where T = optimize(f, g, Fill(T(l), size(initial_x)...), Fill(T(u), size(initial_x)...), initial_x, opt; kwargs...)
optimize(f, g, l::AbstractArray, u::Number, initial_x::AbstractArray{T}, opt::Options; kwargs...) where T = optimize(f, g, l, Fill(T(u), size(initial_x)...), initial_x, opt; kwargs...)
optimize(f, g, l::Number, u::AbstractArray, initial_x::AbstractArray{T}, opt::Options; kwargs...) where T = optimize(f, g, Fill(T(l), size(initial_x)...), u, initial_x, opt; kwargs...)
function optimize(f,
l::AbstractArray,
u::AbstractArray,
initial_x::AbstractArray,
F::Fminbox = Fminbox(),
options::Options = Options(); inplace = true, autodiff = :finite)
if f isa NonDifferentiable
f = f.f
end
od = OnceDifferentiable(f, initial_x, zero(eltype(initial_x)); autodiff = autodiff)
optimize(od, l, u, initial_x, F, options)
end
function optimize(
df::OnceDifferentiable,
l::AbstractArray,
u::AbstractArray,
initial_x::AbstractArray,
F::Fminbox = Fminbox(),
options::Options = Options())
T = eltype(initial_x)
t0 = time()
outer_iterations = options.outer_iterations
allow_outer_f_increases = options.allow_outer_f_increases
show_trace, store_trace, extended_trace = options.show_trace, options.store_trace, options.extended_trace
x = copy(initial_x)
P = InverseDiagonal(copy(initial_x))
# We need to be careful about one special case that might occur commonly
# in practice: the initial guess x is exactly in the center of the
# box. In that case, gbarrier is zero. But since the
# initialization only makes use of the magnitude, we can fix this
# by using the sum of the absolute values of the contributions
# from each edge.
boundaryidx = Vector{Int}()
for i in eachindex(l)
thisx = x[i]
thisl = l[i]
thisu = u[i]
if thisx == thisl
thisx = T(99)/100*thisl + T(1)/100*thisu
x[i] = thisx
push!(boundaryidx,i)
elseif thisx == thisu
thisx = T(1)/100*thisl + T(99)/100*thisu
x[i] = thisx
push!(boundaryidx,i)
elseif thisx < thisl || thisx > thisu
throw(ArgumentError("Initial x[$(Tuple(CartesianIndices(x)[i]))]=$thisx is outside of [$thisl, $thisu]"))
end
end
if length(boundaryidx) > 0
@warn("Initial position cannot be on the boundary of the box. Moving elements to the interior.\nElement indices affected: $boundaryidx")
end
dfbox = BarrierWrapper(df, zero(T), l, u)
# Use the barrier-aware preconditioner to define a barrier-aware
# optimization method instance (only the preconditioner setup differs from F.method)
_optimizer = barrier_method(F.method, P, (P, x) -> F.precondprep(P, x, l, u, dfbox))
state = initial_state(_optimizer, options, dfbox, x)
# we wait until state has been initialized to set the initial mu because
# we need the gradient of the objective, and initial_state calls value_gradient!!
# on the objective, which forces an evaluation
if F.method isa NelderMead
gradient!(dfbox, x)
end
dfbox.mu = initial_mu(dfbox, F)
if F.method isa NelderMead
for i = 1:length(state.f_simplex)
x = state.simplex[i]
boxval = value(dfbox.b, x)
state.f_simplex[i] += boxval
end
state.i_order = sortperm(state.f_simplex)
end
if show_trace > 0
println("Fminbox")
println("-------")
print("Initial mu = ")
show(IOContext(stdout, :compact=>true), "text/plain", dfbox.mu)
println("\n")
end
g = copy(x)
fval_all = Vector{Vector{T}}()
# Count the total number of outer iterations
iteration = 0
# define the function (dfbox) to optimize by the inner optimizer
xold = copy(x)
converged = false
local results
first = true
f_increased, stopped_by_time_limit, stopped_by_callback = false, false, false
stopped = false
_time = time()
while !converged && !stopped && iteration < outer_iterations
fval0 = dfbox.obj.F
# Increment the number of steps we've had to perform
iteration += 1
copyto!(xold, x)
# Optimize with current setting of mu
if show_trace > 0
header_string = "Fminbox iteration $iteration"
println(header_string)
println("-"^length(header_string))
print("Calling inner optimizer with mu = ")
show(IOContext(stdout, :compact=>true), "text/plain", dfbox.mu)
println("\n")
println("(numbers below include barrier contribution)")
end
# we need to update the +mu*barrier_grad part. Since we're using the
# value_gradient! not !! as in initial_state, we won't make a superfluous
# evaluation
if !(F.method isa NelderMead)
value_gradient!(dfbox, x)
else
value!(dfbox, x)
end
if !(F.method isa NelderMead && iteration == 1)
reset!(_optimizer, state, dfbox, x)
end
resultsnew = optimize(dfbox, x, _optimizer, options, state)
stopped_by_callback = resultsnew.stopped_by.callback
if first
results = resultsnew
first = false
else
append!(results, resultsnew)
end
dfbox.obj.f_calls[1] = 0
if hasfield(typeof(dfbox.obj), :df_calls)
dfbox.obj.df_calls[1] = 0
end
if hasfield(typeof(dfbox.obj), :h_calls)
dfbox.obj.h_calls[1] = 0
end
copyto!(x, minimizer(results))
boxdist = min(minimum(x-l), minimum(u-x))
if show_trace > 0
println()
println("Exiting inner optimizer with x = ", x)
print("Current distance to box: ")
show(IOContext(stdout, :compact=>true), "text/plain", boxdist)
println()
println("Decreasing barrier term ΞΌ.\n")
end
# Decrease mu
dfbox.mu *= T(F.mufactor)
# Test for convergence
g = x .- min.(max.(x .- gradient(dfbox.obj), l), u)
results.x_converged, results.f_converged,
results.g_converged, f_increased = assess_convergence(x, xold, minimum(results), fval0, g,
options.outer_x_abstol, options.outer_x_reltol, options.outer_f_abstol, options.outer_f_reltol, options.outer_g_abstol)
converged = results.x_converged || results.f_converged || results.g_converged || stopped_by_callback
if f_increased && !allow_outer_f_increases
@warn("f(x) increased: stopping optimization")
break
end
_time = time()
stopped_by_time_limit = _time-t0 > options.time_limit
stopped = stopped_by_time_limit
end
stopped_by = (#f_limit_reached=f_limit_reached,
#g_limit_reached=g_limit_reached,
#h_limit_reached=h_limit_reached,
time_limit=stopped_by_time_limit,
callback=stopped_by_callback,
f_increased=f_increased && !options.allow_f_increases)
return MultivariateOptimizationResults(F, initial_x, minimizer(results), df.f(minimizer(results)),
iteration, results.iteration_converged,
results.x_converged, results.x_abstol, results.x_reltol, norm(x - xold), norm(x - xold)/norm(x),
results.f_converged, results.f_abstol, results.f_reltol, f_abschange(minimum(results), value(dfbox)), f_relchange(minimum(results), value(dfbox)),
results.g_converged, results.g_abstol, norm(g, Inf),
results.f_increased, results.trace, results.f_calls,
results.g_calls, results.h_calls, nothing,
options.time_limit,
_time-t0, stopped_by)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 18178 | # """
# History: Based on Octave code samin.cc, by Michael Creel,
# which was originally based on Gauss code by E.G. Tsionas. A source
# for the Gauss code is http://web.stanford.edu/~doubleh/otherpapers/sa.txt
# The original Fortran code by W. Goffe is at
# http://www.degruyter.com/view/j/snde.1996.1.3/snde.1996.1.3.1020/snde.1996.1.3.1020.xml?format=INT
# Tsionas and Goffe agreed to MIT licensing of samin.jl in email
# messages to Creel.
#
# This Julia code uses the same names for control variables,
# for the most part. A notable difference is that the initial
# temperature can be found automatically to ensure that the active
# bounds when the temperature begins to reduce cover the entire
# parameter space (defined as a n-dimensional rectangle that is the
# Cartesian product of the(lb_i, ub_i), i = 1,2,..n. The code also
# allows for parameters to be restricted, by setting lb_i = ub_i,
# for the appropriate i.
"""
# SAMIN
## Constructor
```julia
SAMIN(; nt::Int = 5,          # reduce temperature every nt*ns*dim(x_init) evaluations
ns::Int = 5,           # adjust bounds every ns*dim(x_init) evaluations
rt::T = 0.9,           # geometric temperature reduction factor: when temp changes, new temp is t=rt*t
neps::Int = 5,         # number of previous best values the final result is compared to
f_tol::T = 1e-12,      # the required tolerance level for function value comparisons
x_tol::T = 1e-6,       # the required tolerance level for x
coverage_ok::Bool = false, # if false, increase temperature until initial parameter space is covered
verbosity::Int = 1)    # scalar: 0, 1, 2 or 3 (default = 1)
```
## Description
The `SAMIN` method implements the Simulated Annealing algorithm for problems with
bounds constraints as described in Goffe et al. (1994) and Goffe (1996). The
algorithm adaptively widens or narrows the per-parameter search bounds so that
roughly half of all trial points are accepted, and reduces the temperature
geometrically once the trial points cover the initial parameter space.
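## Example
A hedged usage sketch (the objective is illustrative; note that for `SAMIN`
the `iterations` option bounds the number of function evaluations):
```julia
f(x) = sum(abs2, x .- 2)
res = optimize(f, fill(-5.0, 2), fill(5.0, 2), zeros(2), SAMIN(), Optim.Options(iterations = 10^5))
```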
## References
- Goffe et al. (1994), "Global Optimization of Statistical Functions with Simulated Annealing", Journal of Econometrics, V. 60, N. 1/2.
- Goffe, William L. (1996), "SIMANN: A Global Optimization Algorithm using Simulated Annealing", Studies in Nonlinear Dynamics & Econometrics, Oct. 1996, Vol. 1, Issue 3.
"""
@with_kw struct SAMIN{T}<:AbstractConstrainedOptimizer
nt::Int = 5 # reduce temperature every nt*ns*dim(x_init) evaluations
ns::Int = 5 # adjust bounds every ns*dim(x_init) evaluations
rt::T = 0.9 # geometric temperature reduction factor: when temp changes, new temp is t=rt*t
neps::Int = 5 # number of previous best values the final result is compared to
f_tol::T = 1e-12 # the required tolerance level for function value comparisons
x_tol::T = 1e-6 # the required tolerance level for x
coverage_ok::Bool = false # if false, increase temperature until initial parameter space is covered
verbosity::Int = 1 # scalar: 0, 1, 2 or 3 (default = 1: see final results).
end
# * verbosity: scalar: 0, 1, 2 or 3 (default = 1).
# * 0 = no screen output
# * 1 = only final results to screen
# * 2 = summary every temperature change, without param values
# * 3 = summary every temperature change, with param values
# * coverage_ok: false: increase temperature until the initial parameter space is covered by the trial values; true: start decreasing temperature immediately
Base.summary(::SAMIN) = "SAMIN"
function optimize(obj_fn, lb::AbstractArray, ub::AbstractArray, x::AbstractArray{Tx}, method::SAMIN, options::Options = Options()) where Tx
t0 = time() # Initial time stamp used to control early stopping by options.time_limit
hline = "="^80
d = NonDifferentiable(obj_fn, x)
tr = OptimizationTrace{typeof(value(d)), typeof(method)}()
tracing = options.store_trace || options.show_trace || options.extended_trace || options.callback !== nothing
@unpack nt, ns, rt, neps, f_tol, x_tol, coverage_ok, verbosity = method
verbose = verbosity > 0
x0 = copy(x)
n = size(x,1) # dimension of parameter
# Set initial values
nacc = 0 # total accepted trials
t = 2.0 # temperature - will initially rise or fall to cover parameter space. Then it will fall
converge = 0 # convergence indicator 0 (failure), 1 (normal success), or 2 (convergence but near bounds)
x_converged = false
f_converged = false
x_absΞ = Inf
f_absΞ = Inf
# the most recent values, to compare to when checking convergence
fstar = typemax(Float64)*ones(neps)
# Initial obj_value
xopt = copy(x)
f_old = value!(d, x)
fopt = copy(f_old) # give it something to compare to
details = [f_calls(d) t fopt xopt']
bounds = ub - lb
# check for out-of-bounds starting values
for i = 1:n
if(( x[i] > ub[i]) || (x[i] < lb[i]))
error("samin: initial parameter $(i) out of bounds")
end
end
options.show_trace && print_header(method)
iteration = 0
_time = time()
trace!(tr, d, (x=xopt, iteration=iteration), iteration, method, options, _time-t0)
stopped_by_callback = false
# main loop, first increase temperature until parameter space covered, then reduce until convergence
while converge==0
# statistics to report at each temp change, set back to zero
nup = 0
nrej = 0
nnew = 0
ndown = 0
lnobds = 0
# repeat nt times then adjust temperature
for m = 1:nt
# repeat ns times, then adjust bounds
nacp = zeros(n)
for j = 1:ns
# generate new point by taking last and adding a random value
# to each of elements, in turn
for h = 1:n
iteration += 1
# new Sept 2011, if bounds are same, skip the search for that vbl.
# Allows restrictions without complicated programming
if (lb[h] != ub[h])
xp = copy(x)
xp[h] += (Tx(2.0) * rand(Tx) - Tx(1.0)) * bounds[h]
if (xp[h] < lb[h]) || (xp[h] > ub[h])
xp[h] = lb[h] + (ub[h] - lb[h]) * rand(Tx)
lnobds += 1
end
# Evaluate function at new point
f_proposal = value(d, xp)
# Accept the new point if the function value decreases
if (f_proposal <= f_old)
x = copy(xp)
f_old = f_proposal
nacc += 1 # total number of acceptances
nacp[h] += 1 # acceptances for this parameter
nup += 1
# If lower than any other point, record as new optimum
if f_proposal < fopt
xopt = copy(xp)
fopt = f_proposal
d.F = f_proposal
nnew +=1
details = [details; [f_calls(d) t f_proposal xp']]
end
# If the point is higher, use the Metropolis criteria to decide on
# acceptance or rejection.
else
p = exp(-(f_proposal - f_old) / t)
if rand(Tx) < p
x = copy(xp)
f_old = copy(f_proposal)
d.F = f_proposal
nacc += 1
nacp[h] += 1
ndown += 1
else
nrej += 1
end
end
end
if tracing
# update trace; callbacks can stop routine early by returning true
stopped_by_callback = trace!(tr, d, (x=xopt,iteration=iteration), iteration, method, options, time()-t0)
end
# If options.iterations exceeded, terminate the algorithm
_time = time()
if f_calls(d) >= options.iterations || _time-t0 > options.time_limit || stopped_by_callback
if verbose
println(hline)
println("SAMIN results")
println("NO CONVERGENCE: MAXEVALS exceeded")
@printf("\n Obj. value: %16.5f\n\n", fopt)
println(" parameter search width")
for i=1:n
@printf("%16.5f %16.5f \n", xopt[i], bounds[i])
end
println(hline)
end
converge = 0
return MultivariateOptimizationResults(method,
x0,# initial_x,
xopt, #pick_best_x(f_incr_pick, state),
fopt, # pick_best_f(f_incr_pick, state, d),
f_calls(d), #iteration,
f_calls(d) >= options.iterations, #iteration == options.iterations,
false, # x_converged,
x_tol,#T(options.x_tol),
0.0,#T(options.x_tol),
x_absΞ,# x_abschange(state),
NaN,# x_abschange(state),
false,# f_converged,
f_tol,#T(options.f_tol),
0.0,#T(options.f_tol),
f_absΞ,#f_abschange(d, state),
NaN,#f_abschange(d, state),
false,#g_converged,
0.0,#T(options.g_tol),
NaN,#g_residual(d),
false, #f_increased,
tr,
f_calls(d),
g_calls(d),
h_calls(d),
true,
options.time_limit,
_time-t0,NamedTuple())
end
end
end
# Adjust bounds so that approximately half of all evaluations are accepted
test = 0
for i = 1:n
if (lb[i] != ub[i])
ratio = nacp[i] / ns
if(ratio > 0.6) bounds[i] = bounds[i] * (1.0 + 2.0 * (ratio - 0.6) / 0.4) end
if(ratio < .4) bounds[i] = bounds[i] / (1.0 + 2.0 * ((0.4 - ratio) / 0.4)) end
# keep within initial bounds
if(bounds[i] > (ub[i] - lb[i]))
bounds[i] = ub[i] - lb[i]
test += 1
end
else
test += 1 # make sure coverage check passes for the fixed parameters
end
end
nacp = 0 # set back to zero
# check if we cover parameter space, if we have yet to do so
if !coverage_ok
coverage_ok = (test == n)
end
end
# intermediate output, if desired
if verbosity > 1
println(hline)
println("samin: intermediate results before next temperature change")
println("temperature: ", round(t, digits=5))
println("current best function value: ", round(fopt, digits=5))
println("total evaluations so far: ", f_calls(d))
println("total moves since last temperature reduction: ", nup + ndown + nrej)
println("downhill: ", nup)
println("accepted uphill: ", ndown)
println("rejected uphill: ", nrej)
println("out of bounds trials: ", lnobds)
println("new minima this temperature: ", nnew)
println()
println(" parameter search width")
for i=1:n
@printf("%16.5f %16.5f \n", xopt[i], bounds[i])
end
println(hline*"\n")
end
# Check for convergence, if we have covered the parameter space
if coverage_ok
# last value close enough to last neps values?
fstar[1] = f_old
f_absΞ = abs.(fopt - f_old) # close enough to best so far?
if all((abs.(fopt .- fstar)) .< f_tol) # within tol for the last neps trials?
f_converged = true
# check for bound narrow enough for parameter convergence
if any(bounds .> x_tol)
x_converged = false
converge = 0 # no conv. if bounds too wide
break
else
converge = 1
x_converged = true
x_absΞ = maximum(bounds)
end
else
f_converged = false
end
# check if optimal point is near boundary of parameter space, and change message if so
if (converge == 1) && (lnobds > 0)
converge = 2
end
# Like to see the final results?
if (converge > 0)
if verbose
println(hline)
println("SAMIN results")
if (converge == 1)
println("==> Normal convergence <==")
end
if (converge == 2)
printstyled("==> WARNING <==\n", color=:red)
println("Last point satisfies convergence criteria, but is near")
println("boundary of parameter space.")
println(lnobds, " out of ", (nup+ndown+nrej), " evaluations were out of bounds in the last round.")
println("Expand bounds and re-run, unless this is a constrained minimization.")
end
println("total number of objective function evaluations: ", f_calls(d))
@printf("\n Obj. value: %16.10f\n\n", fopt)
println(" parameter search width")
for i=1:n
@printf("%16.5f %16.5f \n", xopt[i], bounds[i])
end
println(hline*"\n")
end
end
# Reduce temperature, record current function value in the
# list of last "neps" values, and loop again
t *= rt
pushfirst!(fstar, f_old)
fstar = fstar[1:end-1]
f_old = copy(fopt)
x = copy(xopt)
else # coverage not ok - increase temperature quickly to expand search area
t *= 10.0
for i = neps:-1:2
fstar[i] = fstar[i-1]
end
f_old = fopt
x = xopt
end
end
return MultivariateOptimizationResults(method,
x0,# initial_x,
xopt, #pick_best_x(f_incr_pick, state),
fopt, # pick_best_f(f_incr_pick, state, d),
f_calls(d), #iteration,
f_calls(d) >= options.iterations, #iteration == options.iterations,
x_converged, # x_converged,
x_tol,#T(options.x_tol),
0.0,#T(options.x_tol),
x_absΞ ,# x_abschange(state),
NaN,# x_relchange(state),
f_converged, # f_converged,
f_tol,#T(options.f_tol),
0.0,#T(options.f_tol),
f_absΞ ,#f_abschange(d, state),
NaN,#f_relchange(d, state),
false,#g_converged,
0.0,#T(options.g_tol),
NaN,#g_residual(d),
false, #f_increased,
tr,
f_calls(d),
g_calls(d),
h_calls(d),
true,
options.time_limit,
_time-t0,
NamedTuple())
end
# TODO
# Handle traces
# * details: a px3 matrix. p is the number of times improvements were found.
# The columns record information at the time an improvement was found
# * first: cumulative number of function evaluations
# * second: temperature
# * third: function value
#
# Add doc entry
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 36219 | # TODO: when Optim supports sparse arrays, make a SparseMatrixCSC version of jacobianx
abstract type AbstractBarrierState end
# These are used not only for the current state, but also for the step and the gradient
struct BarrierStateVars{T}
slack_x::Vector{T} # values of slack variables for x
slack_c::Vector{T} # values of slack variables for c
Ξ»x::Vector{T} # Ξ» for equality constraints on slack_x
Ξ»c::Vector{T} # Ξ» for equality constraints on slack_c
Ξ»xE::Vector{T} # Ξ» for equality constraints on x
Ξ»cE::Vector{T} # Ξ» for linear/nonlinear equality constraints
end
# Note on Ξ»xE:
# We could just set equality-constrained variables to their
# constraint values at the beginning of optimization, but this
# might make the initial guess infeasible in terms of its
# inequality constraints. This would be a much bigger problem than
# not matching the equality constraints. So we allow them to
# differ, and require that the algorithm can cope with it.
function BarrierStateVars{T}(bounds::ConstraintBounds) where T
slack_x = Array{T}(undef, length(bounds.ineqx))
slack_c = Array{T}(undef, length(bounds.ineqc))
Ξ»x = similar(slack_x)
Ξ»c = similar(slack_c)
Ξ»xE = Array{T}(undef, length(bounds.eqx))
Ξ»cE = Array{T}(undef, length(bounds.eqc))
sv = BarrierStateVars{T}(slack_x, slack_c, Ξ»x, Ξ»c, Ξ»xE, Ξ»cE)
end
BarrierStateVars(bounds::ConstraintBounds{T}) where T = BarrierStateVars{T}(bounds)
function BarrierStateVars(bounds::ConstraintBounds{T}, x) where T
sv = BarrierStateVars(bounds)
setslack!(sv.slack_x, x, bounds.ineqx, bounds.Οx, bounds.bx)
sv
end
function BarrierStateVars(bounds::ConstraintBounds{T}, x, c) where T
sv = BarrierStateVars(bounds)
setslack!(sv.slack_x, x, bounds.ineqx, bounds.Οx, bounds.bx)
setslack!(sv.slack_c, c, bounds.ineqc, bounds.Οc, bounds.bc)
sv
end
function setslack!(slack, v, ineq, Ο, b)
for i = 1:length(ineq)
dv = v[ineq[i]]-b[i]
slack[i] = abs(Ο[i]*dv)
end
slack
end
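# Worked example (illustrative): for a lower bound x[j] >= b with Ο = +1 and
# x[j] = 1.5, b = 1.0, the slack is initialized to abs(1*(1.5 - 1.0)) = 0.5,
# the distance of the variable from its bound.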
slack(bstate::BarrierStateVars) = [bstate.slack_x; bstate.slack_c]
lambdaI(bstate::BarrierStateVars) = [bstate.Ξ»x; bstate.Ξ»c]
lambdaE(bstate::BarrierStateVars) = [bstate.Ξ»xE; bstate.Ξ»cE] # TODO: Not used by IPNewton?
lambdaI(state::AbstractBarrierState) = lambdaI(state.bstate)
lambdaE(state::AbstractBarrierState) = lambdaE(state.bstate) # TODO: Not used by IPNewton?
Base.similar(bstate::BarrierStateVars) =
BarrierStateVars(similar(bstate.slack_x),
similar(bstate.slack_c),
similar(bstate.Ξ»x),
similar(bstate.Ξ»c),
similar(bstate.Ξ»xE),
similar(bstate.Ξ»cE))
Base.copy(bstate::BarrierStateVars) =
BarrierStateVars(copy(bstate.slack_x),
copy(bstate.slack_c),
copy(bstate.Ξ»x),
copy(bstate.Ξ»c),
copy(bstate.Ξ»xE),
copy(bstate.Ξ»cE))
function Base.fill!(b::BarrierStateVars, val)
fill!(b.slack_x, val)
fill!(b.slack_c, val)
fill!(b.Ξ»x, val)
fill!(b.Ξ»c, val)
fill!(b.Ξ»xE, val)
fill!(b.Ξ»cE, val)
b
end
Base.convert(::Type{BarrierStateVars{T}}, bstate::BarrierStateVars) where T =
BarrierStateVars(convert(Array{T}, bstate.slack_x),
convert(Array{T}, bstate.slack_c),
convert(Array{T}, bstate.Ξ»x),
convert(Array{T}, bstate.Ξ»c),
convert(Array{T}, bstate.Ξ»xE),
convert(Array{T}, bstate.Ξ»cE))
Base.isempty(bstate::BarrierStateVars) = isempty(bstate.slack_x) &
isempty(bstate.slack_c) & isempty(bstate.Ξ»xE) & isempty(bstate.Ξ»cE)
Base.eltype(::Type{BarrierStateVars{T}}) where T = T
Base.eltype(sv::BarrierStateVars) = eltype(typeof(sv))
function Base.show(io::IO, b::BarrierStateVars)
print(io, "BarrierStateVars{$(eltype(b))}:")
for fn in (:slack_x, :slack_c, :Ξ»x, :Ξ»c, :Ξ»xE, :Ξ»cE)
print(io, "\n $fn: ")
show(io, getfield(b, fn))
end
end
Base.:(==)(v::BarrierStateVars, w::BarrierStateVars) =
v.slack_x == w.slack_x &&
v.slack_c == w.slack_c &&
v.Ξ»x == w.Ξ»x &&
v.Ξ»c == w.Ξ»c &&
v.Ξ»xE == w.Ξ»xE &&
v.Ξ»cE == w.Ξ»cE
const bsv_seed = sizeof(UInt) == 8 ? 0x145b788192d1cde3 : 0x766a2810 # sizeof(UInt) is 8 (bytes) on 64-bit platforms
Base.hash(b::BarrierStateVars, u::UInt) =
hash(b.Ξ»cE, hash(b.Ξ»xE, hash(b.Ξ»c, hash(b.Ξ»x, hash(b.slack_c, hash(b.slack_x, u+bsv_seed))))))
function dot(v::BarrierStateVars, w::BarrierStateVars)
dot(v.slack_x,w.slack_x) +
dot(v.slack_c, w.slack_c) +
dot(v.Ξ»x, w.Ξ»x) +
dot(v.Ξ»c, w.Ξ»c) +
dot(v.Ξ»xE, w.Ξ»xE) +
dot(v.Ξ»cE, w.Ξ»cE)
end
function norm(b::BarrierStateVars, p::Real)
norm(b.slack_x, p) + norm(b.slack_c, p) +
norm(b.Ξ»x, p) + norm(b.Ξ»c, p) +
norm(b.Ξ»xE, p) + norm(b.Ξ»cE, p)
end
"""
BarrierLineSearch{T}
Parameters for interior-point line search methods that use only the function value (no slope information).
"""
struct BarrierLineSearch{T}
c::Vector{T} # value of constraints-functions at trial point
bstate::BarrierStateVars{T} # trial point for slack and Ξ» variables
end
Base.convert(::Type{BarrierLineSearch{T}}, bsl::BarrierLineSearch) where T =
BarrierLineSearch(convert(Vector{T}, bsl.c),
convert(BarrierStateVars{T}, bsl.bstate))
"""
BarrierLineSearchGrad{T}
Parameters for interior-point line search methods that exploit the slope.
"""
struct BarrierLineSearchGrad{T}
c::Vector{T} # value of constraints-functions at trial point
J::Matrix{T} # constraints-Jacobian at trial point
bstate::BarrierStateVars{T} # trial point for slack and Ξ» variables
bgrad::BarrierStateVars{T} # trial point's gradient
end
Base.convert(::Type{BarrierLineSearchGrad{T}}, bsl::BarrierLineSearchGrad) where T =
BarrierLineSearchGrad(convert(Vector{T}, bsl.c),
convert(Matrix{T}, bsl.J),
convert(BarrierStateVars{T}, bsl.bstate),
convert(BarrierStateVars{T}, bsl.bgrad))
function ls_update!(out::BarrierStateVars, base::BarrierStateVars, step::BarrierStateVars, Ξ±s::NTuple{4,Number})
ls_update!(out.slack_x, base.slack_x, step.slack_x, Ξ±s[2])
ls_update!(out.slack_c, base.slack_c, step.slack_c, Ξ±s[2])
ls_update!(out.Ξ»x, base.Ξ»x, step.Ξ»x, Ξ±s[3])
ls_update!(out.Ξ»c, base.Ξ»c, step.Ξ»c, Ξ±s[3])
ls_update!(out.Ξ»xE, base.Ξ»xE, step.Ξ»xE, Ξ±s[4])
ls_update!(out.Ξ»cE, base.Ξ»cE, step.Ξ»cE, Ξ±s[4])
out
end
ls_update!(out::BarrierStateVars, base::BarrierStateVars, step::BarrierStateVars, Ξ±s::Tuple{Number,Number}) =
ls_update!(out, base, step, (Ξ±s[1],Ξ±s[1],Ξ±s[2],Ξ±s[1]))
ls_update!(out::BarrierStateVars, base::BarrierStateVars, step::BarrierStateVars, Ξ±::Number) =
ls_update!(out, base, step, (Ξ±,Ξ±,Ξ±,Ξ±))
ls_update!(out::BarrierStateVars, base::BarrierStateVars, step::BarrierStateVars, Ξ±s::AbstractVector) =
ls_update!(out, base, step, Ξ±s[1]) # (Ξ±s...,))
function initial_convergence(d, state, method::ConstrainedOptimizer, initial_x, options)
# TODO: Make sure state.bgrad has been evaluated at initial_x
# state.bgrad normally comes from constraints.c!(..., initial_x) in initial_state
gradient!(d, initial_x)
stopped = !isfinite(value(d)) || any(!isfinite, gradient(d))
norm(gradient(d), Inf) + norm(state.bgrad, Inf) < options.g_abstol, stopped
end
function optimize(f, g, lower::AbstractArray, upper::AbstractArray, initial_x::AbstractArray, method::ConstrainedOptimizer=IPNewton(),
options::Options = Options(;default_options(method)...))
d = TwiceDifferentiable(f, g, initial_x)
optimize(d, lower, upper, initial_x, method, options)
end
function optimize(f, g, h, lower::AbstractArray, upper::AbstractArray, initial_x::AbstractArray, method::ConstrainedOptimizer=IPNewton(),
options::Options = Options(;default_options(method)...))
d = TwiceDifferentiable(f, g, h, initial_x)
optimize(d, lower, upper, initial_x, method, options)
end
function optimize(d::TwiceDifferentiable, lower::AbstractArray, upper::AbstractArray, initial_x::AbstractArray,
options::Options = Options(;default_options(IPNewton())...))
optimize(d, lower, upper, initial_x, IPNewton(), options)
end
function optimize(d, lower::AbstractArray, upper::AbstractArray, initial_x::AbstractArray, method::ConstrainedOptimizer,
options::Options = Options(;default_options(method)...))
twicediffed = d isa TwiceDifferentiable ? d : TwiceDifferentiable(d, initial_x)
bounds = ConstraintBounds(lower, upper, [], [])
constraints = TwiceDifferentiableConstraints(
(c,x)->nothing, (J,x)->nothing, (H,x,Ξ»)->nothing, bounds)
state = initial_state(method, options, twicediffed, constraints, initial_x)
optimize(twicediffed,
constraints,
initial_x,
method,
options,
state)
end
function optimize(d::AbstractObjective, constraints::AbstractConstraints, initial_x::AbstractArray, method::ConstrainedOptimizer,
options::Options = Options(;default_options(method)...),
state = initial_state(method, options, d, constraints, initial_x))
#== TODO:
Let's try to unify this with the unconstrained `optimize` in Optim
The only thing we'd have to deal with is to dispatch
the univariate `optimize` to one with empty constraints::AbstractConstraints
==#
t0 = time() # Initial time stamp used to control early stopping by options.time_limit
_time = t0
tr = OptimizationTrace{typeof(value(d)), typeof(method)}()
tracing = options.store_trace || options.show_trace || options.extended_trace || options.callback !== nothing
stopped, stopped_by_callback, stopped_by_time_limit = false, false, false
f_limit_reached, g_limit_reached, h_limit_reached = false, false, false
x_converged, f_converged, f_increased, counter_f_tol = false, false, false, 0
g_converged, stopped = initial_convergence(d, state, method, initial_x, options)
converged = g_converged
# prepare iteration counter (used to make "initial state" trace entry)
iteration = 0
options.show_trace && print_header(method)
trace!(tr, d, state, iteration, method, options, t0)
while !converged && !stopped && iteration < options.iterations
iteration += 1
update_state!(d, constraints, state, method, options) && break # it returns true if it's forced by something in update! to stop (eg dx_dg == 0.0 in BFGS or linesearch errors)
update_fg!(d, constraints, state, method)
# TODO: Do we need to rethink f_increased for `ConstrainedOptimizer`s?
x_converged, f_converged,
g_converged, f_increased = assess_convergence(state, d, options)
# With equality constraints, optimization is not necessarily
# monotonic in the value of the function. If the function
# change is approximately canceled by a change in the equality
# violation, it's possible to spuriously satisfy the f_tol
# criterion. Consequently, we require that the f_tol condition
# be satisfied a certain number of times in a row before
# declaring convergence.
counter_f_tol = f_converged ? counter_f_tol+1 : 0
converged = x_converged || g_converged || (counter_f_tol > options.successive_f_tol)
# We don't use the Hessian for anything if we have declared convergence,
# so we might as well not make the (expensive) update if converged == true
!converged && update_h!(d, constraints, state, method)
if tracing
# update trace; callbacks can stop routine early by returning true
stopped_by_callback = trace!(tr, d, state, iteration, method, options)
end
# Check time_limit; if none is provided it is NaN and the comparison
# will always return false.
_time = time()
stopped_by_time_limit = _time-t0 > options.time_limit
f_limit_reached = options.f_calls_limit > 0 && f_calls(d) >= options.f_calls_limit
g_limit_reached = options.g_calls_limit > 0 && g_calls(d) >= options.g_calls_limit
h_limit_reached = options.h_calls_limit > 0 && h_calls(d) >= options.h_calls_limit
if (f_increased && !options.allow_f_increases) || stopped_by_callback ||
stopped_by_time_limit || f_limit_reached || g_limit_reached || h_limit_reached
stopped = true
end
end # while
after_while!(d, constraints, state, method, options)
# we can just check minimum, as we've earlier enforced same types/eltypes
# in variables besides the option settings
T = typeof(options.f_reltol)
Tf = typeof(value(d))
f_incr_pick = f_increased && !options.allow_f_increases
return MultivariateOptimizationResults(method,
initial_x,
pick_best_x(f_incr_pick, state),
pick_best_f(f_incr_pick, state, d),
iteration,
iteration == options.iterations,
x_converged,
T(options.x_abstol),
T(options.x_reltol),
x_abschange(state),
x_relchange(state),
f_converged,
T(options.f_abstol),
T(options.f_reltol),
f_abschange(d, state),
f_relchange(d, state),
g_converged,
T(options.g_abstol),
g_residual(d),
f_increased,
tr,
f_calls(d),
g_calls(d),
h_calls(d),
nothing,
options.time_limit,
_time-t0,
NamedTuple())
end
# Fallbacks (for methods that don't need these)
after_while!(d, constraints::AbstractConstraints, state, method, options) = nothing
update_h!(d, constraints::AbstractConstraints, state, method) = nothing
"""
initialize_ΞΌ_Ξ»!(state, bounds, ΞΌ0=:auto, Ξ²=0.01)
initialize_ΞΌ_Ξ»!(state, bounds, (Hobj,HcI), ΞΌ0=:auto, Ξ²=0.01)
Pick ΞΌ and Ξ» to ensure that the equality constraints are satisfied
locally (at the current `state.x`), and that the initial gradient
including the barrier would be a descent direction for the problem
without the barrier (ΞΌ = 0). This ensures that the search isn't pushed
out of the basin of the user-supplied initial guess.
Upon entry, the objective function gradient, constraint values, and
constraint jacobian must be set in `state.g`, `state.c`, and `state.J`
respectively. If you also wish to ensure that the projection of
Hessian is minimally-perturbed along the initial gradient, supply the
hessian of the objective (`Hobj`) and
HcI = sum_i (sigma_i/s_i) * (Hessian of the i-th inequality constraint c_{I,i})
for the constraints. This can be obtained as
HcI = hessianI(state.x, constraints, 1 ./ state.slack_c)
You can manually specify `ΞΌ` by supplying a numerical value for
`ΞΌ0`. Whether calculated algorithmically or specified manually, the
values of `Ξ»` are set using the chosen `ΞΌ`.
"""
function initialize_ΞΌ_Ξ»!(state, bounds::ConstraintBounds, Hinfo, ΞΌ0::Union{Symbol,Number}, Ξ²::Number=1//100)
if nconstraints(bounds) == 0 && nconstraints_x(bounds) == 0
state.ΞΌ = 0
fill!(state.bstate, 0)
return state
end
gf = state.g # must be pre-set to βf
# Calculate projection of βf into the subspace spanned by the
# equality constraint Jacobian
JE = jacobianE(state, bounds)
# QRF = qrfact(JE)
# Q = QRF[:Q]
# PEg = Q'*(Q*gf) # in the subspace of JE
C = JE*JE'
Cc = cholesky(Positive, C)
Pperpg = gf-JE'*(Cc \ (JE*gf)) # in the nullspace of JE
# Set ΞΌ
JI = jacobianI(state, bounds)
if ΞΌ0 == :auto
# Calculate projections of the Lagrangian's gradient, and
# possibly hessian, along (βf)_β
Dperp = dot(Pperpg, Pperpg)
Ο, s = sigma(bounds), slack(state)
Οdivs = Ο./s
Ξg = JI'*Οdivs
PperpΞg = Ξg - JE'*(Cc \ (JE*Ξg))
DI = dot(PperpΞg, PperpΞg)
ΞΊperp, ΞΊI = hessian_projections(Hinfo, Pperpg, (JI*Pperpg)./s)
# Calculate ΞΌ and Ξ»I
ΞΌ = Ξ² * (ΞΊperp == 0 ? sqrt(Dperp/DI) : min(sqrt(Dperp/DI), abs(ΞΊperp/ΞΊI)))
if !isfinite(ΞΌ)
Ξgtilde = JI'*(1 ./ s)
PperpΞgtilde = Ξgtilde - JE'*(Cc \ (JE*Ξgtilde))
DItilde = dot(PperpΞgtilde, PperpΞgtilde)
ΞΌ = Ξ²*sqrt(Dperp/DItilde)
end
if !isfinite(ΞΌ) || ΞΌ == 0
ΞΌ = one(ΞΌ)
end
else
ΞΌ = convert(eltype(state.x), ΞΌ0)
end
state.ΞΌ = ΞΌ
# Set Ξ»I
@. state.bstate.Ξ»x = ΞΌ / state.bstate.slack_x
@. state.bstate.Ξ»c = ΞΌ / state.bstate.slack_c
# Calculate Ξ»E
Ξ»I = lambdaI(state)
βbI = gf - JI'*Ξ»I
# qrregularize!(QRF) # in case of any 0 eigenvalues
Ξ»E = Cc \ (JE*βbI) + (cbar(bounds) - cE(state, bounds))/ΞΌ
k = unpack_vec!(state.bstate.Ξ»xE, Ξ»E, 0)
k = unpack_vec!(state.bstate.Ξ»cE, Ξ»E, k)
k == length(Ξ»E) || error("Something is wrong when initializing ΞΌ and Ξ».")
state
end
function initialize_ΞΌ_Ξ»!(state, bounds::ConstraintBounds, ΞΌ0::Union{Number,Symbol}, Ξ²::Number=1//100)
initialize_ΞΌ_Ξ»!(state, bounds, nothing, ΞΌ0, Ξ²)
end
function hessian_projections(Hinfo::Tuple{AbstractMatrix,AbstractMatrix}, Pperpg, y)
ΞΊperp = dot(Hinfo[1]*Pperpg, Pperpg)
ΞΊI = dot(Hinfo[2]*Pperpg, Pperpg) + dot(y,y)
ΞΊperp, ΞΊI
end
hessian_projections(Hinfo::Nothing, Pperpg::AbstractVector{T}, y) where T = convert(T, Inf), zero(T) # accept (and ignore) y to match the arity of the Tuple method above
function jacobianE(state, bounds::ConstraintBounds)
J, x = state.constr_J, state.x
JEx = jacobianx(J, bounds.eqx)
JEc = view(J, bounds.eqc, :)
JE = vcat(JEx, JEc)
end
jacobianE(state, constraints) = jacobianE(state, constraints.bounds)
function jacobianI(state, bounds::ConstraintBounds)
J, x = state.constr_J, state.x
JIx = jacobianx(J, bounds.ineqx)
JIc = view(J, bounds.ineqc, :)
JI = vcat(JIx, JIc)
end
jacobianI(state, constraints) = jacobianI(state, constraints.bounds)
# TODO: when Optim supports sparse arrays, make a SparseMatrixCSC version
function jacobianx(J::AbstractArray, indx)
Jx = zeros(eltype(J), length(indx), size(J, 2))
for (i,j) in enumerate(indx)
Jx[i,j] = 1
end
Jx
end
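# Worked example (illustrative): with 5 variables and indx = [2, 4], jacobianx
# returns the 2x5 selection matrix with ones at (1,2) and (2,4), i.e. the
# Jacobian of the box-constrained coordinates x[2] and x[4].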
function sigma(bounds::ConstraintBounds)
[bounds.Οx; bounds.Οc] # don't include Οz
end
sigma(constraints) = sigma(constraints.bounds)
slack(state) = slack(state.bstate)
cbar(bounds::ConstraintBounds) = [bounds.valx; bounds.valc]
cbar(constraints) = cbar(constraints.bounds)
cE(state, bounds::ConstraintBounds) = [state.x[bounds.eqx]; state.constr_c[bounds.eqc]]
function hessianI!(h, x, constraints, Ξ»cI, ΞΌ)
Ξ» = userΞ»(Ξ»cI, constraints)
constraints.h!(h, x, Ξ»)
h
end
"""
hessianI(x, constraints, Ξ»cI, ΞΌ) -> h
Compute the hessian at `x` of the `Ξ»cI`-weighted sum of user-supplied
constraint functions for just the inequalities. This also includes
contributions from any variables with bounds at 0, since those do not
cause introduction of a slack variable. Other (nonzero) box
constraints do not contribute to `h`, because the hessian of `x_i` is
zero. (They contribute indirectly via their slack variables.)
"""
hessianI(x, constraints, Ξ»cI, ΞΌ) =
hessianI!(zeros(eltype(x), length(x), length(x)), x, constraints, Ξ»cI, ΞΌ)
"""
userΞ»(Ξ»cI, bounds) -> Ξ»
Accumulates `Ξ»cI` into a vector `Ξ»` ordered as the user-supplied
constraint functions `c`. Upper and lower bounds are summed, weighted
by `Ο`. The resulting Ξ» includes an overall negative sign so that this
becomes the coefficient for the user-supplied hessian.
This is relevant only for the inequalities. If you want the Ξ» for just
the equalities, you can use `Ξ»[bounds.ceq] = Ξ»cE` for a zero-filled `Ξ»`.
"""
function userΞ»(Ξ»cI, bounds::ConstraintBounds)
ineqc, Οc = bounds.ineqc, bounds.Οc
Ξ» = zeros(eltype(bounds), nconstraints(bounds))
for i = 1:length(ineqc)
Ξ»[ineqc[i]] -= Ξ»cI[i]*Οc[i]
end
Ξ»
end
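# Worked example (illustrative): with ineqc = [3], Οc = [-1] (an upper bound)
# and Ξ»cI = [0.4], userΞ» returns a vector of zeros except Ξ»[3] = 0.4, following
# the sign convention described in the docstring above.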
userΞ»(Ξ»cI, constraints) = userΞ»(Ξ»cI, constraints.bounds)
## Computation of the Lagrangian and its gradient
# This is in a parametrization that is also useful during linesearch
# TODO: `lagrangian` does not seem to be used (IPNewton)?
function lagrangian(d, bounds::ConstraintBounds, x, c, bstate::BarrierStateVars, ΞΌ)
f_x = NLSolversBase.value!(d, x)
ev = equality_violation(bounds, x, c, bstate)
L_xsΞ» = f_x + barrier_value(bounds, x, bstate, ΞΌ) + ev
f_x, L_xsΞ», ev
end
function lagrangian_fg!(gx, bgrad, d, bounds::ConstraintBounds, x, c, J, bstate::BarrierStateVars, ΞΌ)
fill!(bgrad, 0)
f_x, g_x = NLSolversBase.value_gradient!(d,x)
gx .= g_x
ev = equality_violation(bounds, x, c, bstate)
L_xsΞ» = f_x + barrier_value(bounds, x, bstate, ΞΌ) + ev
barrier_grad!(bgrad, bounds, x, bstate, ΞΌ)
equality_grad!(gx, bgrad, bounds, x, c, J, bstate)
f_x, L_xsΞ», ev
end
# TODO: do we need lagrangian_vec? Maybe for automatic differentiation?
## Computation of Lagrangian and derivatives when passing all parameters as a single vector
function lagrangian_vec(p, d, bounds::ConstraintBounds, x, c::AbstractArray, bstate::BarrierStateVars, ΞΌ)
unpack_vec!(x, bstate, p)
f_x, L_xsΞ», ev = lagrangian(d, bounds, x, c, bstate, ΞΌ)
L_xsΞ»
end
function lagrangian_vec(p, d, bounds::ConstraintBounds, x, c::Function, bstate::BarrierStateVars, ΞΌ)
# Use this version when using automatic differentiation
unpack_vec!(x, bstate, p)
f_x, L_xsΞ», ev = lagrangian(d, bounds, x, c(x), bstate, ΞΌ)
L_xsΞ»
end
function lagrangian_fgvec!(p, storage, gx, bgrad, d, bounds::ConstraintBounds, x, c, J, bstate::BarrierStateVars, ΞΌ)
unpack_vec!(x, bstate, p)
f_x, L_xsΞ», ev = lagrangian_fg!(gx, bgrad, d, bounds, x, c, J, bstate, ΞΌ)
pack_vec!(storage, gx, bgrad)
L_xsΞ»
end
## for line searches that don't use the gradient along the line
function lagrangian_linefunc(Ξ±s, d, constraints, state)
_lagrangian_linefunc(Ξ±s, d, constraints, state)[2]
end
function _lagrangian_linefunc(Ξ±s, d, constraints, state)
b_ls, bounds = state.b_ls, constraints.bounds
ls_update!(state.x_ls, state.x, state.s, alphax(Ξ±s))
ls_update!(b_ls.bstate, state.bstate, state.bstep, Ξ±s)
constraints.c!(b_ls.c, state.x_ls)
lagrangian(d, constraints.bounds, state.x_ls, b_ls.c, b_ls.bstate, state.ΞΌ)
end
alphax(Ξ±::Number) = Ξ±
alphax(Ξ±s::Union{Tuple,AbstractVector}) = Ξ±s[1]
function lagrangian_linefunc!(Ξ±, d, constraints, state, method::IPOptimizer{typeof(backtrack_constrained)})
# For backtrack_constrained, the last evaluation is the one we
# keep, so it's safe to store the results in state
state.f_x, state.L, state.ev = _lagrangian_linefunc(Ξ±, d, constraints, state)
state.L
end
lagrangian_linefunc!(Ξ±, d, constraints, state, method) = lagrangian_linefunc(Ξ±, d, constraints, state)
## for line searches that do use the gradient along the line
function lagrangian_lineslope(Ξ±s, d, constraints, state)
f_x, L, ev, slope = _lagrangian_lineslope(Ξ±s, d, constraints, state)
L, slope
end
function _lagrangian_lineslope(Ξ±s, d, constraints, state)
b_ls, bounds = state.b_ls, constraints.bounds
bstep, bgrad = state.bstep, b_ls.bgrad
ls_update!(state.x_ls, state.x, state.s, alphax(Ξ±s))
ls_update!(b_ls.bstate, state.bstate, bstep, Ξ±s)
constraints.c!(b_ls.c, state.x_ls)
constraints.jacobian!(b_ls.J, state.x_ls)
f_x, L, ev = lagrangian_fg!(state.g, bgrad, d, bounds, state.x_ls, b_ls.c, b_ls.J, b_ls.bstate, state.ΞΌ)
slopeΞ± = slopealpha(state.s, state.g, bstep, bgrad)
f_x, L, ev, slopeΞ±
end
function lagrangian_lineslope!(Ξ±s, d, constraints, state, method::IPOptimizer{typeof(backtrack_constrained_grad)})
# For backtrack_constrained, the last evaluation is the one we
# keep, so it's safe to store the results in state
state.f_x, state.L, state.ev, slope = _lagrangian_lineslope(Ξ±s, d, constraints, state)
state.L, slope
end
lagrangian_lineslope!(Ξ±s, d, constraints, state, method) = lagrangian_lineslope(Ξ±s, d, constraints, state)
slopealpha(sx, gx, bstep, bgrad) = dot(sx, gx) +
dot(bstep.slack_x, bgrad.slack_x) + dot(bstep.slack_c, bgrad.slack_c) +
dot(bstep.Ξ»x, bgrad.Ξ»x) + dot(bstep.Ξ»c, bgrad.Ξ»c) +
dot(bstep.Ξ»xE, bgrad.Ξ»xE) + dot(bstep.Ξ»cE, bgrad.Ξ»cE)
function linesearch_anon(d, constraints, state, method::IPOptimizer{typeof(backtrack_constrained_grad)})
Ξ±s->lagrangian_lineslope!(Ξ±s, d, constraints, state, method)
end
function linesearch_anon(d, constraints, state, method::IPOptimizer{typeof(backtrack_constrained)})
Ξ±s->lagrangian_linefunc!(Ξ±s, d, constraints, state, method)
end
## Computation of Lagrangian terms: barrier penalty
"""
barrier_value(constraints, state) -> val
barrier_value(bounds, x, sx, sc, ΞΌ) -> val
Compute the value of the barrier penalty at the current `state`, or at
a position (`x`,`sx`,`sc`), where `x` is the current position, `sx`
are the coordinate slack variables, and `sc` are the linear/nonlinear
slack variables. `bounds` holds the parsed bounds.
"""
function barrier_value(bounds::ConstraintBounds, x, sx, sc, ΞΌ)
# bΞΌ is the coefficient of ΞΌ in the barrier penalty
bΞΌ = _bv(sx) + # coords with other bounds
_bv(sc) # linear/nonlinear constr.
ΞΌ*bΞΌ
end
barrier_value(bounds::ConstraintBounds, x, bstate::BarrierStateVars, ΞΌ) =
barrier_value(bounds, x, bstate.slack_x, bstate.slack_c, ΞΌ)
barrier_value(bounds::ConstraintBounds, state) =
barrier_value(bounds, state.x, state.bstate.slack_x, state.bstate.slack_c, state.ΞΌ)
barrier_value(constraints::AbstractConstraints, state) =
barrier_value(constraints.bounds, state)
# don't call this barrier_value because it lacks ΞΌ
function _bv(v, idx, Ο) # TODO: Not used, delete? (IPNewton)
ret = loginf(one(eltype(Ο))*one(eltype(v)))
for (i,iv) in enumerate(idx)
ret += loginf(Ο[i]*v[iv])
end
-ret
end
_bv(v) = isempty(v) ? loginf(one(eltype(v))) : -sum(loginf, v)
loginf(Ξ΄) = Ξ΄ > 0 ? log(Ξ΄) : -oftype(Ξ΄, Inf)
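# Worked example (illustrative): for slacks s = [0.5, 0.25] and ΞΌ = 0.1 the
# barrier penalty is ΞΌ*(-(log(0.5) + log(0.25))) = 0.1*2.0794..., about 0.208;
# it grows without bound as any slack approaches zero.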
"""
barrier_grad!(bgrad, bounds, x, bstate, ΞΌ)
barrier_grad!(gsx, gsc, bounds, x, sx, sc, ΞΌ)
Compute the gradient of the barrier penalty at (`x`,`sx`,`sc`), where
`x` is the current position, `sx` are the coordinate slack variables,
and `sc` are the linear/nonlinear slack
variables. `bounds::ConstraintBounds` holds the parsed bounds.
The result is *added* to `gsx`, and `gsc`, so these vectors
need to be initialized appropriately.
"""
function barrier_grad!(gsx, gsc, bounds::ConstraintBounds, x, sx, sc, ΞΌ)
barrier_grad!(gsx, sx, ΞΌ)
barrier_grad!(gsc, sc, ΞΌ)
nothing
end
barrier_grad!(bgrad, bounds::ConstraintBounds, x, bstate, ΞΌ) =
barrier_grad!(bgrad.slack_x, bgrad.slack_c, bounds, x, bstate.slack_x, bstate.slack_c, ΞΌ)
function barrier_grad!(out, v, ΞΌ)
for i = 1:length(out)
out[i] -= ΞΌ/v[i]
end
nothing
end
## Computation of Lagrangian terms: equality constraints penalty
"""
equality_violation([f=identity], bounds, x, c, bstate) -> val
equality_violation([f=identity], bounds, x, c, sx, sc, Ξ»x, Ξ»c, Ξ»xE, Ξ»cE) -> val
Compute the sum of `f(v_i)`, where `v_i = Ξ»_i*(target - observed)`
measures the difference between the current state and the
equality-constrained state. `bounds::ConstraintBounds` holds the
parsed bounds. `x` is the current position, `sx` are the coordinate
slack variables, and `sc` are the linear/nonlinear slack
variables. `c` holds the values of the linear-nonlinear constraints,
and the Ξ» arguments hold the Lagrange multipliers for `x`, `sx`, `sc`, and
`c` respectively.
"""
function equality_violation(f, bounds::ConstraintBounds, x, c, sx, sc, Ξ»x, Ξ»c, Ξ»xE, Ξ»cE)
ev = equality_violation(f, sx, x, bounds.ineqx, bounds.Οx, bounds.bx, Ξ»x) +
equality_violation(f, sc, c, bounds.ineqc, bounds.Οc, bounds.bc, Ξ»c) +
equality_violation(f, x, bounds.valx, bounds.eqx, Ξ»xE) +
equality_violation(f, c, bounds.valc, bounds.eqc, Ξ»cE)
end
equality_violation(bounds::ConstraintBounds, x, c, sx, sc, Ξ»x, Ξ»c, Ξ»xE, Ξ»cE) =
equality_violation(identity, bounds, x, c, sx, sc, Ξ»x, Ξ»c, Ξ»xE, Ξ»cE)
function equality_violation(f, bounds::ConstraintBounds, x, c, bstate::BarrierStateVars)
equality_violation(f, bounds, x, c, bstate.slack_x, bstate.slack_c,
bstate.Ξ»x, bstate.Ξ»c, bstate.Ξ»xE, bstate.Ξ»cE)
end
equality_violation(bounds::ConstraintBounds, x, c, bstate::BarrierStateVars) =
equality_violation(identity, bounds, x, c, bstate)
equality_violation(f, bounds::ConstraintBounds, state::AbstractBarrierState) =
equality_violation(f, bounds, state.x, state.constr_c, state.bstate)
equality_violation(bounds::ConstraintBounds, state::AbstractBarrierState) =
equality_violation(identity, bounds, state)
equality_violation(f, constraints::AbstractConstraints, state::AbstractBarrierState) =
equality_violation(f, constraints.bounds, state)
equality_violation(constraints::AbstractConstraints, state::AbstractBarrierState) =
equality_violation(constraints.bounds, state)
# violations of s = Ο*(v-b)
function equality_violation(f, s, v, ineq, Ο, b, Ξ»)
ret = f(zero(eltype(Ξ»))*(zero(eltype(s))-zero(eltype(Ο))*(zero(eltype(v))-zero(eltype(b)))))
for (i,iv) in enumerate(ineq)
ret += f(Ξ»[i]*(s[i] - Ο[i]*(v[iv]-b[i])))
end
ret
end
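# Worked example (illustrative): a single inequality with slack s = 1.2,
# Ο = 1, value v = 2.0, bound b = 1.0 and multiplier Ξ» = 0.5 contributes
# Ξ»*(s - Ο*(v - b)) = 0.5*(1.2 - 1.0) = 0.1 to the (identity-f) violation.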
# violations of v = target
function equality_violation(f, v, target, idx, Ξ»)
ret = f(zero(eltype(Ξ»))*(zero(eltype(v))-zero(eltype(target))))
for (i,iv) in enumerate(idx)
ret += f(Ξ»[i]*(target[i] - v[iv]))
end
ret
end
"""
equality_grad!(gx, gbstate, bounds, x, c, J, bstate)
Compute the gradient of `equality_violation`, storing the result in `gx` (an array) and `gbstate::BarrierStateVars`.
"""
function equality_grad!(gx, gsx, gsc, gΞ»x, gΞ»c, gΞ»xE, gΞ»cE, bounds::ConstraintBounds, x, c, J, sx, sc, Ξ»x, Ξ»c, Ξ»xE, Ξ»cE)
equality_grad_var!(gsx, gx, bounds.ineqx, bounds.Οx, Ξ»x)
equality_grad_var!(gsc, gx, bounds.ineqc, bounds.Οc, Ξ»c, J)
gx[bounds.eqx] .= gx[bounds.eqx] .- Ξ»xE
equality_grad_var!(gx, bounds.eqc, Ξ»cE, J)
equality_grad_Ξ»!(gΞ»x, sx, x, bounds.ineqx, bounds.Οx, bounds.bx)
equality_grad_Ξ»!(gΞ»c, sc, c, bounds.ineqc, bounds.Οc, bounds.bc)
equality_grad_Ξ»!(gΞ»xE, x, bounds.valx, bounds.eqx)
equality_grad_Ξ»!(gΞ»cE, c, bounds.valc, bounds.eqc)
end
equality_grad!(gx, gb::BarrierStateVars, bounds::ConstraintBounds, x, c, J, b::BarrierStateVars) =
equality_grad!(gx, gb.slack_x, gb.slack_c, gb.Ξ»x, gb.Ξ»c, gb.Ξ»xE, gb.Ξ»cE,
bounds, x, c, J,
b.slack_x, b.slack_c, b.Ξ»x, b.Ξ»c, b.Ξ»xE, b.Ξ»cE)
# violations of s = Ο*(x-b)
function equality_grad_var!(gs, gx, ineq, Ο, Ξ»)
for (i,ix) in enumerate(ineq)
Ξ»i = Ξ»[i]
gs[i] += Ξ»i
gx[ix] -= Ξ»i*Ο[i]
end
nothing
end
function equality_grad_var!(gs, gx, ineq, Ο, Ξ», J)
@. gs = gs + Ξ»
if !isempty(ineq)
gx .= gx .- view(J, ineq, :)'*(Ξ».*Ο)
end
nothing
end
function equality_grad_Ξ»!(gΞ», s, v, ineq, Ο, b)
for (i,iv) in enumerate(ineq)
gΞ»[i] += s[i] - Ο[i]*(v[iv]-b[i])
end
nothing
end
# violations of v = target
function equality_grad_var!(gx, idx, Ξ», J)
if !isempty(idx)
gx .= gx .- view(J, idx, :)'*Ξ»
end
nothing
end
function equality_grad_Ξ»!(gΞ», v, target, idx)
for (i,iv) in enumerate(idx)
gΞ»[i] += target[i] - v[iv]
end
nothing
end
"""
isfeasible(constraints, state) -> Bool
isfeasible(constraints, x, c) -> Bool
isfeasible(constraints, x) -> Bool
isfeasible(bounds, x, c) -> Bool
Return `true` if point `x` is feasible, given the `constraints` which
specify bounds `lx`, `ux`, `lc`, and `uc`. `x` is feasible if
lx[i] <= x[i] <= ux[i]
lc[i] <= c[i] <= uc[i]
for all possible `i`.
"""
function isfeasible(bounds::ConstraintBounds, x, c)
isf = true
for (i,j) in enumerate(bounds.eqx)
isf &= x[j] == bounds.valx[i]
end
for (i,j) in enumerate(bounds.ineqx)
isf &= bounds.Οx[i]*(x[j] - bounds.bx[i]) >= 0
end
for (i,j) in enumerate(bounds.eqc)
isf &= c[j] == bounds.valc[i]
end
for (i,j) in enumerate(bounds.ineqc)
isf &= bounds.Οc[i]*(c[j] - bounds.bc[i]) >= 0
end
isf
end
isfeasible(constraints, state::AbstractBarrierState) = isfeasible(constraints, state.x, state.constr_c)
function isfeasible(constraints, x)
# don't assume c! returns c (which means this is a little more awkward)
c = Array{eltype(x)}(undef, constraints.bounds.nc)
constraints.c!(c, x)
isfeasible(constraints, x, c)
end
isfeasible(constraints::AbstractConstraints, x, c) = isfeasible(constraints.bounds, x, c)
isfeasible(constraints::Nothing, state::AbstractBarrierState) = true
isfeasible(constraints::Nothing, x) = true
"""
isinterior(constraints, state) -> Bool
isinterior(constraints, x, c) -> Bool
isinterior(constraints, x) -> Bool
isinterior(bounds, x, c) -> Bool
Return `true` if point `x` is on the interior of the allowed region,
given the `constraints` which specify bounds `lx`, `ux`, `lc`, and
`uc`. `x` is in the interior if
lx[i] < x[i] < ux[i]
lc[i] < c[i] < uc[i]
for all possible `i`.
"""
function isinterior(bounds::ConstraintBounds, x, c)
isi = true
for (i,j) in enumerate(bounds.ineqx)
isi &= bounds.Οx[i]*(x[j] - bounds.bx[i]) > 0
end
for (i,j) in enumerate(bounds.ineqc)
isi &= bounds.Οc[i]*(c[j] - bounds.bc[i]) > 0
end
isi
end
isinterior(constraints, state::AbstractBarrierState) = isinterior(constraints, state.x, state.constr_c)
function isinterior(constraints, x)
c = Array{eltype(x)}(undef, constraints.bounds.nc)
constraints.c!(c, x)
isinterior(constraints, x, c)
end
isinterior(constraints::AbstractConstraints, x, c) = isinterior(constraints.bounds, x, c)
isinterior(constraints::Nothing, state::AbstractBarrierState) = true
isinterior(constraints::Nothing, x) = true
## Utilities for representing total state as single vector
# TODO: Most of these seem to be unused (IPNewton)?
function pack_vec(x, b::BarrierStateVars)
n = length(x)
for fn in fieldnames(typeof(b))
n += length(getfield(b, fn))
end
vec = Array{eltype(x)}(undef, n)
pack_vec!(vec, x, b)
end
function pack_vec!(vec, x, b::BarrierStateVars)
k = pack_vec!(vec, x, 0)
for fn in fieldnames(typeof(b))
k = pack_vec!(vec, getfield(b, fn), k)
end
k == length(vec) || throw(DimensionMismatch("vec should have length $k, got $(length(vec))"))
vec
end
function pack_vec!(vec, x, k::Int)
for i = 1:length(x)
vec[k+=1] = x[i]
end
k
end
function unpack_vec!(x, b::BarrierStateVars, vec::Vector)
k = unpack_vec!(x, vec, 0)
for fn in fieldnames(typeof(b))
k = unpack_vec!(getfield(b, fn), vec, k)
end
k == length(vec) || throw(DimensionMismatch("vec should have length $k, got $(length(vec))"))
x, b
end
function unpack_vec!(x, vec::Vector, k::Int)
for i = 1:length(x)
x[i] = vec[k+=1]
end
k
end
## More utilities
function estimate_maxstep(Ξ±max, x, s)
for i = 1:length(s)
si = s[i]
if si < 0
Ξ±max = min(Ξ±max, -x[i]/si)
end
end
Ξ±max
end
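# Worked example (illustrative): with x = [1.0, 2.0] and step s = [-0.5, 1.0],
# only the first component decreases, so Ξ±max is capped at -1.0/(-0.5) = 2.0,
# the largest step for which x + Ξ±*s stays nonnegative.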
# TODO: This is not used anymore??
function qrregularize!(QRF)
R = QRF.R
for i = 1:size(R, 1)
if R[i,i] == 0
R[i,i] = 1
end
end
QRF
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2634 | function backtrack_constrained(Ο, Ξ±::Real, Ξ±max::Real, Ξ±Imax::Real,
LcoefsΞ±::Tuple{<:Real,<:Real,<:Real}, c1::Real = 0.5,
Ο::Real=oftype(Ξ±, 0.5),
Ξ±minfrac::Real = sqrt(eps(one(Ξ±)));
show_linesearch::Bool=false)
# TODO: Specify that all elements should be of the same type T <: Real?
# TODO: What does Ξ±I do??
Ξ±, Ξ±I = min(Ξ±, 0.999*Ξ±max), min(Ξ±, 0.999*Ξ±Imax)
Ξ±min = Ξ±minfrac * Ξ±
L0, L1, L2 = LcoefsΞ±
if show_linesearch
println("L0 = $L0, L1 = $L1, L2 = $L2")
end
while Ξ± >= Ξ±min
val = Ο((Ξ±, Ξ±I))
Ξ΄ = evalgrad(L1, Ξ±, Ξ±I)
if show_linesearch
println("Ξ± = $Ξ±, Ξ±I = $Ξ±I, value: ($L0, $val, $(L0+Ξ΄))")
end
if isfinite(val) && val - (L0 + Ξ΄) <= c1*abs(val-L0)
return Ξ±, Ξ±I
end
Ξ± *= Ο
Ξ±I *= Ο
end
Ο((zero(Ξ±), zero(Ξ±I))) # to ensure that state gets set appropriately
return zero(Ξ±), zero(Ξ±I)
end
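# Illustrative reading of the acceptance test above (a sketch of the
# arithmetic, not a spec): with L0 = 1.0, predicted change Ξ΄ = -0.1 and
# c1 = 0.5, a trial value val = 0.95 gives val - (L0 + Ξ΄) = 0.05, which
# exceeds c1*abs(val - L0) = 0.025, so Ξ± is shrunk by Ο and retried;
# val = 0.92 would pass, since 0.02 <= 0.04.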
function backtrack_constrained_grad(Ο, Ξ±::Real, Ξ±max::Real, LcoefsΞ±::Tuple{<:Real,<:Real,<:Real},
c1::Real = 0.9, c2::Real = 0.9, Ο::Real=oftype(Ξ±, 0.5),
Ξ±minfrac::Real = sqrt(eps(one(Ξ±))); show_linesearch::Bool=false)
# TODO: Specify that all elements should be of the same type T <: Real?
# TODO: Should c1 be 0.9 or 0.5 default?
# TODO: Should Ο be 0.9 or 0.5 default?
Ξ± = min(Ξ±, 0.999*Ξ±max)
Ξ±min = Ξ±minfrac * Ξ±
L0, L1, L2 = LcoefsΞ±
if show_linesearch
println("L0 = $L0, L1 = $L1, L2 = $L2")
end
while Ξ± >= Ξ±min
val, slopeΞ± = Ο(Ξ±)
Ξ΄val = L1*Ξ±
Ξ΄slope = L2*Ξ±
if show_linesearch
println("Ξ± = $Ξ±, value: ($L0, $val, $(L0+Ξ΄val)), slope: ($L1, $slopeΞ±, $(L1+Ξ΄slope))")
end
if isfinite(val) && val - (L0 + Ξ΄val) <= c1*abs(val-L0) &&
(slopeΞ± < c2*abs(L1) ||
slopeΞ± - (L1 + Ξ΄slope) <= c2*abs(slopeΞ±-L1))
return Ξ±
end
Ξ± *= Ο
end
Ο(zero(Ξ±)) # to ensure that state gets set appropriately
return zero(Ξ±)
end
# Evaluate for a step parametrized as [Ξ±, Ξ±, Ξ±I, Ξ±]
function evalgrad(slopeΞ±, Ξ±, Ξ±I)
Ξ±*(slopeΞ±[1] + slopeΞ±[2] + slopeΞ±[4]) + Ξ±I*slopeΞ±[3]
end
# TODO: Never used anywhere? Intended for a linesearch that depends on Ο''?
function mulhess(HΞ±, Ξ±, Ξ±I)
Ξ±v = [Ξ±, Ξ±, Ξ±I, Ξ±]
HΞ±*Ξ±v
end
# TODO: Never used anywhere? Intended for a linesearch that depends on Ο''?
function evalhess(HΞ±, Ξ±, Ξ±I)
Ξ±v = [Ξ±, Ξ±, Ξ±I, Ξ±]
dot(Ξ±v, HΞ±*Ξ±v)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 17606 | struct IPNewton{F,TΞΌ<:Union{Symbol,Number}} <: IPOptimizer{F}
linesearch!::F
ΞΌ0::TΞΌ # Initial value for the barrier penalty coefficient ΞΌ
show_linesearch::Bool
# TODO: ΞΌ0, and show_linesearch were originally in options
end
Base.summary(::IPNewton) = "Interior Point Newton"
promote_objtype(method::IPNewton, x, autodiff::Symbol, inplace::Bool, f::TwiceDifferentiable) = f
promote_objtype(method::IPNewton, x, autodiff::Symbol, inplace::Bool, f) = TwiceDifferentiable(f, x, real(zero(eltype(x))); autodiff = autodiff)
promote_objtype(method::IPNewton, x, autodiff::Symbol, inplace::Bool, f, g) = TwiceDifferentiable(f, g, x, real(zero(eltype(x))); inplace = inplace, autodiff = autodiff)
promote_objtype(method::IPNewton, x, autodiff::Symbol, inplace::Bool, f, g, h) = TwiceDifferentiable(f, g, h, x, real(zero(eltype(x))); inplace = inplace)
# TODO: Add support for InitialGuess from LineSearches
"""
# Interior-point Newton
## Constructor
```jl
IPNewton(; linesearch::Function = Optim.backtrack_constrained_grad,
ΞΌ0::Union{Symbol,Number} = :auto,
show_linesearch::Bool = false)
```
The initial barrier penalty coefficient `ΞΌ0` can be chosen as a number, or set
to `:auto` to let the algorithm decide its value, see `initialize_ΞΌ_Ξ»!`.
*Note*: For constrained optimization problems, we recommend
always enabling `allow_f_increases` and `successive_f_tol` in the options passed to `optimize`.
The default is set to `Optim.Options(allow_f_increases = true, successive_f_tol = 2)`.
As of February 2018, the line search algorithm is specialised for constrained
interior-point methods. In the future we hope to support more algorithms from
`LineSearches.jl`.
## Description
The `IPNewton` method implements an interior-point primal-dual Newton algorithm for solving
nonlinear, constrained optimization problems. See Nocedal and Wright (Ch. 19, 2006) for a discussion of
interior-point methods for constrained optimization.
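## Example
A minimal sketch of a box-constrained solve; the objective, bounds, and starting
point below are purely illustrative (the starting point must be strictly interior):
```jl
using Optim
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
x0 = [0.0, 0.0]
df = TwiceDifferentiable(f, x0; autodiff = :forward)
dfc = TwiceDifferentiableConstraints([-0.5, -0.5], [1.0, 1.0]) # box: lx .<= x .<= ux
res = optimize(df, dfc, x0, IPNewton())
```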
## References
The algorithm was [originally written by Tim Holy](https://github.com/JuliaNLSolvers/Optim.jl/pull/303) (@timholy, [email protected]).
- J Nocedal, SJ Wright (2006), Numerical optimization, second edition. Springer.
- A WΓ€chter, LT Biegler (2006), On the implementation of an interior-point filter line-search algorithm for large-scale nonlinear programming. Mathematical Programming 106 (1), 25-57.
"""
IPNewton(; linesearch::Function = backtrack_constrained_grad,
ΞΌ0::Union{Symbol,Number} = :auto,
show_linesearch::Bool = false) =
IPNewton(linesearch, ΞΌ0, show_linesearch)
mutable struct IPNewtonState{T,Tx} <: AbstractBarrierState
x::Tx
f_x::T
x_previous::Tx
g::Tx
f_x_previous::T
H::Matrix{T} # Hessian of the Lagrangian?
HP # TODO: remove HP? It's not used
Hd::Vector{Int8} # TODO: remove Hd? It's not used
s::Tx # step for x
# Barrier penalty fields
ΞΌ::T # coefficient of the barrier penalty
ΞΌnext::T # ΞΌ for the next iteration
L::T # value of the Lagrangian (objective + barrier + equality)
L_previous::T
bstate::BarrierStateVars{T} # value of slack and Ξ» variables (current "position")
bgrad::BarrierStateVars{T} # gradient of slack and Ξ» variables at current "position"
bstep::BarrierStateVars{T} # search direction for slack and Ξ»
constr_c::Vector{T} # value of the user-supplied constraints at x
constr_J::Matrix{T} # value of the user-supplied Jacobian at x
ev::T # equality violation, β_i Ξ»_Ei (c*_i - c_i)
Optim.@add_linesearch_fields() # x_ls and alpha
b_ls::BarrierLineSearchGrad{T}
gtilde::Tx
Htilde # Positive Cholesky factorization of H from PositiveFactorizations.jl
end
# TODO: Do we need this convert thing? (It seems to be used with `show(IPNewtonState)`)
function Base.convert(::Type{IPNewtonState{T,Tx}}, state::IPNewtonState{S, Sx}) where {T,Tx,S,Sx}
IPNewtonState(convert(Tx, state.x),
T(state.f_x),
convert(Tx, state.x_previous),
convert(Tx, state.g),
T(state.f_x_previous),
convert(Matrix{T}, state.H),
state.HP,
state.Hd,
convert(Tx, state.s),
T(state.ΞΌ),
T(state.ΞΌnext),
T(state.L),
T(state.L_previous),
convert(BarrierStateVars{T}, state.bstate),
convert(BarrierStateVars{T}, state.bgrad),
convert(BarrierStateVars{T}, state.bstep),
convert(Vector{T}, state.constr_c),
convert(Matrix{T}, state.constr_J),
T(state.ev),
convert(Tx, state.x_ls),
T(state.alpha),
convert(BarrierLineSearchGrad{T}, state.b_ls),
convert(Tx, state.gtilde),
state.Htilde,
)
end
function initial_state(method::IPNewton, options, d::TwiceDifferentiable, constraints::TwiceDifferentiableConstraints, initial_x::AbstractArray{T}) where T
# Check feasibility of the initial state
mc = nconstraints(constraints)
constr_c = fill(T(NaN), mc)
# TODO: When we change to `value!` from NLSolversBase instead of c!
# we can also update `initial_convergence` for ConstrainedOptimizer in interior.jl
constraints.c!(constr_c, initial_x)
if !isinterior(constraints, initial_x, constr_c)
@warn("Initial guess is not an interior point")
Base.show_backtrace(stderr, backtrace())
println(stderr)
end
# Allocate fields for the objective function
n = length(initial_x)
g = similar(initial_x)
s = similar(initial_x)
f_x_previous = NaN
f_x, g_x = value_gradient!(d, initial_x)
g .= g_x # needs to be a separate copy of g_x
hessian!(d, initial_x)
H = collect(T, hessian(d))
Hd = zeros(Int8, n)
# More constraints
constr_J = fill(T(NaN), mc, n)
gtilde = copy(g)
constraints.jacobian!(constr_J, initial_x)
ΞΌ = T(1)
bstate = BarrierStateVars(constraints.bounds, initial_x, constr_c)
bgrad = copy(bstate)
bstep = copy(bstate)
# b_ls = BarrierLineSearch(similar(constr_c), similar(bstate))
b_ls = BarrierLineSearchGrad(copy(constr_c), copy(constr_J), copy(bstate), copy(bstate))
state = IPNewtonState(
copy(initial_x), # Maintain current state in state.x
f_x, # Store current f in state.f_x
copy(initial_x), # Maintain previous state in state.x_previous
g, # Store current gradient in state.g (TODO: includes Lagrangian calculation?)
T(NaN), # Store previous f in state.f_x_previous
H,
0, # will be replaced
Hd,
similar(initial_x), # Maintain current x-search direction in state.s
ΞΌ,
ΞΌ,
T(NaN),
T(NaN),
bstate,
bgrad,
bstep,
constr_c,
constr_J,
T(NaN),
        Optim.@initial_linesearch()..., # Maintain a cache for line search results in state.x_ls and state.alpha
b_ls,
gtilde,
0)
Hinfo = (state.H, hessianI(initial_x, constraints, 1 ./ bstate.slack_c, 1))
initialize_ΞΌ_Ξ»!(state, constraints.bounds, Hinfo, method.ΞΌ0)
update_fg!(d, constraints, state, method)
update_h!(d, constraints, state, method)
state
end
function update_fg!(d, constraints::TwiceDifferentiableConstraints, state, method::IPNewton)
state.f_x, state.L, state.ev = lagrangian_fg!(state.g, state.bgrad, d, constraints.bounds, state.x, state.constr_c, state.constr_J, state.bstate, state.ΞΌ)
update_gtilde!(d, constraints, state, method)
end
function update_gtilde!(d, constraints::TwiceDifferentiableConstraints, state, method::IPNewton)
# Calculate the modified x-gradient for the block-eliminated problem
# gtilde is the gradient for the affine-scaling problem, i.e.,
# with ΞΌ=0, used in the adaptive setting of ΞΌ. Once we calculate ΞΌ we'll correct it
gtilde, bstate, bgrad = state.gtilde, state.bstate, state.bgrad
bounds = constraints.bounds
copyto!(gtilde, state.g)
JIc = view(state.constr_J, bounds.ineqc, :)
if !isempty(JIc)
Hssc = Diagonal(bstate.Ξ»c./bstate.slack_c)
# TODO: Can we use broadcasting / dot-notation here and eliminate gc?
gc = JIc'*(Diagonal(bounds.Οc) * (bstate.Ξ»c - Hssc*bgrad.Ξ»c)) # NOT bgrad.slack_c
gtilde .+= gc
end
for (i,j) in enumerate(bounds.ineqx)
gxi = bounds.Οx[i]*(bstate.Ξ»x[i] - bgrad.Ξ»x[i]*bstate.Ξ»x[i]/bstate.slack_x[i])
gtilde[j] += gxi
end
state
end
function update_h!(d, constraints::TwiceDifferentiableConstraints, state, method::IPNewton)
x, ΞΌ, Hxx, J = state.x, state.ΞΌ, state.H, state.constr_J
bstate, bgrad, bounds = state.bstate, state.bgrad, constraints.bounds
m, n = size(J, 1), size(J, 2)
hessian!(d, state.x) # objective's Hessian
copyto!(Hxx, hessian(d)) # objective's Hessian
# accumulate the constraint second derivatives
Ξ» = userΞ»(bstate.Ξ»c, constraints)
Ξ»[bounds.eqc] = -bstate.Ξ»cE # the negative sign is from the Hessian
# Important! We are assuming that constraints.h! adds the hessian of the
# non-objective Lagrangian terms to the existing objective Hessian Hxx.
# This follows the approach by the CUTEst interface
constraints.h!(Hxx, x, Ξ»)
# Add the Jacobian terms (JI'*Hss*JI)
JIc = view(J, bounds.ineqc, :)
Hssc = Diagonal(bstate.Ξ»c./bstate.slack_c)
HJ = JIc'*Hssc*JIc
for j = 1:n, i = 1:n
Hxx[i,j] += HJ[i,j]
end
# Add the variable inequalities portions of J'*Hssx*J
for (i,j) in enumerate(bounds.ineqx)
Hxx[j,j] += bstate.Ξ»x[i]/bstate.slack_x[i]
end
state.Htilde = cholesky(Positive, Hxx, Val{true})
state
end
function update_state!(d, constraints::TwiceDifferentiableConstraints, state::IPNewtonState{T}, method::IPNewton, options) where T
state.f_x_previous, state.L_previous = state.f_x, state.L
bstate, bstep, bounds = state.bstate, state.bstep, constraints.bounds
qp = solve_step!(state, constraints, options, method.show_linesearch)
# If a step Ξ±=1 will not change any of the parameters, we can quit now.
# This prevents a futile linesearch.
if is_smaller_eps(state.x, state.s) &&
is_smaller_eps(bstate.slack_x, bstep.slack_x) &&
is_smaller_eps(bstate.slack_c, bstep.slack_c) &&
is_smaller_eps(bstate.Ξ»x, bstep.Ξ»x) &&
is_smaller_eps(bstate.Ξ»c, bstep.Ξ»c)
return false
end
# Estimate Ξ±max, the upper bound on distance of movement along the search line
Ξ±max = convert(eltype(bstate), Inf)
Ξ±max = estimate_maxstep(Ξ±max, bstate.slack_x, bstep.slack_x)
Ξ±max = estimate_maxstep(Ξ±max, bstate.slack_c, bstep.slack_c)
Ξ±max = estimate_maxstep(Ξ±max, bstate.Ξ»x, bstep.Ξ»x)
Ξ±max = estimate_maxstep(Ξ±max, bstate.Ξ»c, bstep.Ξ»c)
# Determine the actual distance of movement along the search line
Ο = linesearch_anon(d, constraints, state, method)
    # TODO: This only works for method.linesearch = backtrack_constrained_grad
    # TODO: How are we meant to implement backtrack_constrained?
# It requires both an alpha and an alphaI (Ξ±max and Ξ±Imax) ...
state.alpha =
method.linesearch!(Ο, T(1), Ξ±max, qp; show_linesearch=method.show_linesearch)
# Maintain a record of previous position
copyto!(state.x_previous, state.x)
# Update current position # x = x + alpha * s
ls_update!(state.x, state.x, state.s, state.alpha)
ls_update!(bstate, bstate, bstep, state.alpha)
# Ensure that the primal-dual approach does not deviate too much from primal
# (See Waechter & Biegler 2006, eq. 16)
ΞΌ = state.ΞΌ
for i = 1:length(bstate.slack_x)
p = ΞΌ / bstate.slack_x[i]
bstate.Ξ»x[i] = max(min(bstate.Ξ»x[i], 1e10*p), 1e-10*p)
end
for i = 1:length(bstate.slack_c)
p = ΞΌ / bstate.slack_c[i]
bstate.Ξ»c[i] = max(min(bstate.Ξ»c[i], 1e10*p), 1e-10*p)
end
state.ΞΌ = state.ΞΌnext
# Evaluate the constraints at the new position
constraints.c!(state.constr_c, state.x)
constraints.jacobian!(state.constr_J, state.x)
    state.ev = equality_violation(constraints, state)
false
end
function solve_step!(state::IPNewtonState, constraints, options, show_linesearch::Bool = false)
x, s, ΞΌ, bounds = state.x, state.s, state.ΞΌ, constraints.bounds
bstate, bstep, bgrad = state.bstate, state.bstep, state.bgrad
J, Htilde = state.constr_J, state.Htilde
# Solve the Newton step
JE = jacobianE(state, bounds)
gE = [bgrad.Ξ»xE;
bgrad.Ξ»cE]
M = JE*(Htilde \ JE')
MF = cholesky(Positive, M, Val{true})
# These are a solution to the affine-scaling problem (with ΞΌ=0)
ΞΞ»E0 = MF \ (gE + JE * (Htilde \ state.gtilde))
Ξx0 = Htilde \ (JE'*ΞΞ»E0 - state.gtilde)
# Check that the solution to the linear equations represents an improvement
    Hpstepx, HstepΞ»E = Matrix(Htilde)*Ξx0 - JE'*ΞΞ»E0, -JE*Ξx0 # TODO: avoid materializing Matrix(Htilde) here
# TODO: How to handle show_linesearch?
# This was originally in options.show_linesearch, but I removed it as none of the other Optim algorithms have it there.
# We should move show_linesearch back to options when we refactor
# LineSearches to work on the function Ο(Ξ±)
if show_linesearch
println("|gx| = $(norm(state.gtilde)), |Hstepx + gx| = $(norm(Hpstepx+state.gtilde))")
println("|gE| = $(norm(gE)), |HstepΞ»E + gE| = $(norm(HstepΞ»E+gE))")
end
if norm(gE) + norm(state.gtilde) < max(norm(HstepΞ»E + gE),
norm(Hpstepx + state.gtilde))
# Precision problems gave us a worse solution than the one we started with, abort
fill!(s, 0)
fill!(bstep, 0)
return state
end
    # Set ΞΌ (see the predictor strategy in Nocedal & Wright, 2nd ed., section 19.3)
solve_slack!(bstep, Ξx0, bounds, bstate, bgrad, J, zero(state.ΞΌ)) # store temporarily in bstep
Ξ±s = convert(eltype(bstate), 1.0)
Ξ±s = estimate_maxstep(Ξ±s, bstate.slack_x, bstep.slack_x)
Ξ±s = estimate_maxstep(Ξ±s, bstate.slack_c, bstep.slack_c)
Ξ±Ξ» = convert(eltype(bstate), 1.0)
Ξ±Ξ» = estimate_maxstep(Ξ±Ξ», bstate.Ξ»x, bstep.Ξ»x)
Ξ±Ξ» = estimate_maxstep(Ξ±Ξ», bstate.Ξ»c, bstep.Ξ»c)
m = max(1, length(bstate.slack_x) + length(bstate.slack_c))
ΞΌaff = (dot(bstate.slack_x + Ξ±s*bstep.slack_x, bstate.Ξ»x + Ξ±Ξ»*bstep.Ξ»x) +
dot(bstate.slack_c + Ξ±s*bstep.slack_c, bstate.Ξ»c + Ξ±Ξ»*bstep.Ξ»c))/m
ΞΌmean = (dot(bstate.slack_x, bstate.Ξ»x) + dot(bstate.slack_c, bstate.Ξ»c))/m
# When there's only one constraint, ΞΌaff can be exactly zero. So limit the decrease.
state.ΞΌnext = NaNMath.max((ΞΌaff/ΞΌmean)^3 * ΞΌmean, ΞΌmean/10)
ΞΌ = state.ΞΌ
# Solve for the *real* step (including ΞΌ)
ΞΌsinv = ΞΌ * [bounds.Οx./bstate.slack_x; bounds.Οc./bstate.slack_c]
gtildeΞΌ = state.gtilde - jacobianI(state, bounds)' * ΞΌsinv
ΞΞ»E = MF \ (gE + JE * (Htilde \ gtildeΞΌ))
Ξx = Htilde \ (JE'*ΞΞ»E - gtildeΞΌ)
copyto!(s, Ξx)
k = unpack_vec!(bstep.Ξ»xE, ΞΞ»E, 0)
k = unpack_vec!(bstep.Ξ»cE, ΞΞ»E, k)
k == length(ΞΞ»E) || error("exhausted targets before ΞΞ»E")
solve_slack!(bstep, Ξx, bounds, bstate, bgrad, J, ΞΌ)
# Solve for the quadratic parameters (use the real H, not the posdef H)
Hstepx, HstepΞ»E = state.H*Ξx - JE'*ΞΞ»E, -JE*Ξx
qp = state.L, slopealpha(state.s, state.g, bstep, bgrad), dot(Ξx, Hstepx) + dot(ΞΞ»E, HstepΞ»E)
qp
end
function solve_slack!(bstep, s, bounds, bstate, bgrad, J, ΞΌ)
# Solve for the slack variable and Ξ»I updates
for (i, j) in enumerate(bounds.ineqx)
bstep.slack_x[i] = -bgrad.Ξ»x[i] + bounds.Οx[i]*s[j]
# bstep.Ξ»x[i] = -bgrad.slack_x[i] - ΞΌ*bstep.slack_x[i]/bstate.slack_x[i]^2
# bstep.Ξ»x[i] = -bgrad.slack_x[i] - bstate.Ξ»x[i]*bstep.slack_x[i]/bstate.slack_x[i]
bstep.Ξ»x[i] = -(-ΞΌ/bstate.slack_x[i] + bstate.Ξ»x[i]) - bstate.Ξ»x[i]*bstep.slack_x[i]/bstate.slack_x[i]
end
JIc = view(J, bounds.ineqc, :)
SigmaJIΞx = Diagonal(bounds.Οc)*(JIc*s)
for i = 1:length(bstep.Ξ»c)
bstep.slack_c[i] = -bgrad.Ξ»c[i] + SigmaJIΞx[i]
# bstep.Ξ»c[i] = -bgrad.slack_c[i] - ΞΌ*bstep.slack_c[i]/bstate.slack_c[i]^2
# bstep.Ξ»c[i] = -bgrad.slack_c[i] - bstate.Ξ»c[i]*bstep.slack_c[i]/bstate.slack_c[i]
bstep.Ξ»c[i] = -(-ΞΌ/bstate.slack_c[i] + bstate.Ξ»c[i]) - bstate.Ξ»c[i]*bstep.slack_c[i]/bstate.slack_c[i]
end
bstep
end
function is_smaller_eps(ref, step)
ise = true
for (r, s) in zip(ref, step)
ise &= (s == 0) | (abs(s) < eps(r))
end
ise
end
function default_options(method::ConstrainedOptimizer)
(; allow_f_increases = true, successive_f_tol = 2)
end
# Utility functions that assist in testing: they return the "full
# Hessian" and "full gradient" for the equation with the slack and Ξ»I
# eliminated.
# TODO: should we put these elsewhere?
function Hf(bounds::ConstraintBounds, state)
JE = jacobianE(state, bounds)
Hf = [Matrix(state.Htilde) -JE';
-JE zeros(eltype(JE), size(JE, 1), size(JE, 1))]
end
Hf(constraints, state) = Hf(constraints.bounds, state)
function gf(bounds::ConstraintBounds, state)
bstate, ΞΌ = state.bstate, state.ΞΌ
ΞΌsinv = ΞΌ * [bounds.Οx./bstate.slack_x; bounds.Οc./bstate.slack_c]
gtildeΞΌ = state.gtilde - jacobianI(state, bounds)' * ΞΌsinv
[gtildeΞΌ; state.bgrad.Ξ»xE; state.bgrad.Ξ»cE]
end
gf(constraints, state) = gf(constraints.bounds, state)
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 160 | abstract type ConstrainedOptimizer{T} <: AbstractConstrainedOptimizer end
abstract type IPOptimizer{T} <: ConstrainedOptimizer{T} end # interior point methods
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 527 | function assess_convergence(state::IPNewtonState, d, options::Options)
# We use the whole bstate-gradient `bgrad`
bgrad = state.bgrad
Optim.assess_convergence(state.x,
state.x_previous,
state.L,
state.L_previous,
[state.g; bgrad.slack_x; bgrad.slack_c; bgrad.Ξ»x; bgrad.Ξ»c; bgrad.Ξ»xE; bgrad.Ξ»cE],
options.x_abstol,
options.f_reltol,
options.g_abstol)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 1807 | function print_header(method::IPOptimizer)
@printf "Iter Lagrangian value Function value Gradient norm |==constr.| ΞΌ\n"
end
function Base.show(io::IO, t::OptimizationState{<:Real, <:IPOptimizer})
md = t.metadata
@printf io "%6d %-14e %-14e %-14e %-14e %-6.2e\n" t.iteration md["Lagrangian"] t.value t.g_norm md["ev"] md["ΞΌ"]
if !isempty(t.metadata)
for (key, value) in md
key β ("Lagrangian", "ΞΌ", "ev") && continue
@printf io " * %s: %s\n" key value
end
end
return
end
function Base.show(io::IO, tr::OptimizationTrace{<:Real, <:IPOptimizer})
@printf io "Iter Lagrangian value Function value Gradient norm |==constr.| ΞΌ\n"
@printf io "------ ---------------- -------------- -------------- -------------- --------\n"
for state in tr
show(io, state)
end
return
end
function trace!(tr, d, state, iteration, method::IPOptimizer, options, curr_time=time())
dt = Dict()
dt["Lagrangian"] = state.L
dt["ΞΌ"] = state.ΞΌ
dt["ev"] = abs(state.ev)
dt["time"] = curr_time
if options.extended_trace
dt["Ξ±"] = state.alpha
dt["x"] = copy(state.x)
dt["g(x)"] = copy(state.g)
dt["h(x)"] = copy(state.H)
if !isempty(state.bstate)
dt["gtilde(x)"] = copy(state.gtilde)
dt["bstate"] = copy(state.bstate)
dt["bgrad"] = copy(state.bgrad)
dt["c"] = copy(state.constr_c)
end
end
g_norm = norm(state.g, Inf) + norm(state.bgrad, Inf)
Optim.update!(tr,
iteration,
value(d),
g_norm,
dt,
options.store_trace,
options.show_trace,
options.show_every,
options.callback)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 287 | function ls_update!(out::AbstractArray, base::AbstractArray, step::AbstractArray, Ξ±)
length(out) == length(base) == length(step) || throw(DimensionMismatch("all arrays must have the same length, got $(length(out)), $(length(base)), $(length(step))"))
@. out = base + Ξ±*step
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 3254 | # http://stronglyconvex.com/blog/accelerated-gradient-descent.html
# TODO: Need to specify alphamax on each iteration
# Flip notation relative to Duckworth
# Start with x_{0}
# y_{t} = x_{t - 1} - alpha g(x_{t - 1})
# If converged, return y_{t}
# x_{t} = y_{t} + (t - 1.0) / (t + 2.0) * (y_{t} - y_{t - 1})
struct AcceleratedGradientDescent{IL, L} <: FirstOrderOptimizer
alphaguess!::IL
linesearch!::L
manifold::Manifold
end
Base.summary(::AcceleratedGradientDescent) = "Accelerated Gradient Descent"
function AcceleratedGradientDescent(;
alphaguess = LineSearches.InitialPrevious(), # TODO: investigate good defaults
linesearch = LineSearches.HagerZhang(), # TODO: investigate good defaults
manifold::Manifold=Flat())
AcceleratedGradientDescent(_alphaguess(alphaguess), linesearch, manifold)
end
mutable struct AcceleratedGradientDescentState{T, Tx} <: AbstractOptimizerState
x::Tx
x_previous::Tx
f_x_previous::T
iteration::Int
y::Tx
y_previous::Tx
s::Tx
@add_linesearch_fields()
end
function initial_state(method::AcceleratedGradientDescent, options, d, initial_x::AbstractArray{T}) where T
initial_x = copy(initial_x)
retract!(method.manifold, initial_x)
value_gradient!!(d, initial_x)
project_tangent!(method.manifold, gradient(d), initial_x)
AcceleratedGradientDescentState(copy(initial_x), # Maintain current state in state.x
copy(initial_x), # Maintain previous state in state.x_previous
real(T)(NaN), # Store previous f in state.f_x_previous
0, # Iteration
copy(initial_x), # Maintain intermediary current state in state.y
similar(initial_x), # Maintain intermediary state in state.y_previous
similar(initial_x), # Maintain current search direction in state.s
@initial_linesearch()...)
end
function update_state!(d, state::AcceleratedGradientDescentState, method::AcceleratedGradientDescent)
value_gradient!(d, state.x)
state.iteration += 1
project_tangent!(method.manifold, gradient(d), state.x)
# Search direction is always the negative gradient
state.s .= .-gradient(d)
# Determine the distance of movement along the search line
lssuccess = perform_linesearch!(state, method, ManifoldObjective(method.manifold, d))
# Make one move in the direction of the gradient
copyto!(state.y_previous, state.y)
state.y .= state.x .+ state.alpha.*state.s
retract!(method.manifold, state.y)
# Update current position with Nesterov correction
scaling = (state.iteration - 1) / (state.iteration + 2)
state.x .= state.y .+ scaling.*(state.y .- state.y_previous)
retract!(method.manifold, state.x)
lssuccess == false # break on linesearch error
end
function trace!(tr, d, state, iteration, method::AcceleratedGradientDescent, options, curr_time=time())
common_trace!(tr, d, state, iteration, method, options, curr_time)
end
function default_options(method::AcceleratedGradientDescent)
(; allow_f_increases = true)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2695 | """
# Adam
## Constructor
```julia
Adam(; alpha=0.0001, beta_mean=0.9, beta_var=0.999, epsilon=1e-8)
```
## Description
Adam is a gradient-based optimizer that chooses its search direction by building up estimates of the first two moments of the gradient vector. This makes it suitable for problems with a stochastic objective, and thus a stochastic gradient. The method is introduced in [1], where the related AdaMax method is also introduced; see `?AdaMax` for more information on that method.
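## Example
A minimal sketch on a smooth deterministic objective; the settings are illustrative,
and with no gradient supplied the objective is differentiated by finite differences:
```julia
using Optim
f(x) = sum(abs2, x)
res = optimize(f, randn(5), Adam(alpha = 0.01), Optim.Options(iterations = 2_000))
```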
## References
[1] https://arxiv.org/abs/1412.6980
"""
struct Adam{T, Tm} <: FirstOrderOptimizer
Ξ±::T
Ξ²β::T
Ξ²β::T
Ο΅::T
manifold::Tm
end
# could use epsilon = T->sqrt(eps(T)) and input the promoted type
Adam(; alpha = 0.0001, beta_mean = 0.9, beta_var = 0.999, epsilon = 1e-8) =
Adam(alpha, beta_mean, beta_var, epsilon, Flat())
Base.summary(::Adam) = "Adam"
function default_options(method::Adam)
(; allow_f_increases = true, iterations=10_000)
end
mutable struct AdamState{Tx, T, Tm, Tu, Ti} <: AbstractOptimizerState
x::Tx
x_previous::Tx
f_x_previous::T
s::Tx
m::Tm
u::Tu
iter::Ti
end
function reset!(method, state::AdamState, obj, x)
value_gradient!!(obj, x)
end
function initial_state(method::Adam, options, d, initial_x::AbstractArray{T}) where T
initial_x = copy(initial_x)
value_gradient!!(d, initial_x)
Ξ±, Ξ²β, Ξ²β = method.Ξ±, method.Ξ²β, method.Ξ²β
m = copy(gradient(d))
u = zero(m)
a = 1 - Ξ²β
iter = 0
AdamState(initial_x, # Maintain current state in state.x
copy(initial_x), # Maintain previous state in state.x_previous
real(T(NaN)), # Store previous f in state.f_x_previous
similar(initial_x), # Maintain current search direction in state.s
m,
u,
iter)
end
function update_state!(d, state::AdamState{T}, method::Adam) where T
state.iter = state.iter+1
value_gradient!(d, state.x)
Ξ±, Ξ²β, Ξ²β, Ο΅ = method.Ξ±, method.Ξ²β, method.Ξ²β, method.Ο΅
a = 1 - Ξ²β
b = 1 - Ξ²β
    m, u = state.m, state.u
    v = u # the second-moment estimate lives in state.u; alias it as v for readability
m .= Ξ²β .* m .+ a .* gradient(d)
v .= Ξ²β .* v .+ b .* gradient(d) .^ 2
    # Instead of forming the bias-corrected mΜ = m./(1-Ξ²β^t) and vΜ = v./(1-Ξ²β^t)
    # and stepping x .-= Ξ±.*mΜ./(sqrt.(vΜ) .+ Ο΅), fold both bias corrections into
    # the step size (Kingma & Ba 2015, sec. 2):
    Ξ±β = Ξ± * sqrt(1 - Ξ²β^state.iter) / (1 - Ξ²β^state.iter)
@. state.x = state.x - Ξ±β * m / (sqrt(v) + Ο΅)
# Update current position # x = x + alpha * s
false # break on linesearch error
end
function trace!(tr, d, state, iteration, method::Adam, options, curr_time=time())
common_trace!(tr, d, state, iteration, method, options, curr_time)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2594 | """
# AdaMax
## Constructor
```julia
AdaMax(; alpha=0.002, beta_mean=0.9, beta_var=0.999, epsilon=1e-8)
```
## Description
AdaMax is a gradient-based optimizer that chooses its search direction by building up estimates of the first two moments of the gradient vector. This makes it suitable for problems with a stochastic objective, and thus a stochastic gradient. The method is introduced in [1], where the related Adam method is also introduced; see `?Adam` for more information on that method.
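## Example
A minimal sketch with an explicit in-place gradient; the objective and settings
are illustrative:
```julia
using Optim
f(x) = sum(abs2, x)
g!(G, x) = (G .= 2 .* x)
res = optimize(f, g!, randn(5), AdaMax(), Optim.Options(iterations = 2_000))
```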
## References
[1] https://arxiv.org/abs/1412.6980
"""
struct AdaMax{T,Tm} <: FirstOrderOptimizer
Ξ±::T
Ξ²β::T
Ξ²β::T
Ο΅::T
manifold::Tm
end
AdaMax(; alpha = 0.002, beta_mean = 0.9, beta_var = 0.999, epsilon = sqrt(eps(Float64))) =
AdaMax(alpha, beta_mean, beta_var, epsilon, Flat())
Base.summary(::AdaMax) = "AdaMax"
function default_options(method::AdaMax)
(; allow_f_increases = true, iterations=10_000)
end
mutable struct AdaMaxState{Tx, T, Tm, Tu, Ti} <: AbstractOptimizerState
x::Tx
x_previous::Tx
f_x_previous::T
s::Tx
m::Tm
u::Tu
iter::Ti
end
function reset!(method, state::AdaMaxState, obj, x)
value_gradient!!(obj, x)
end
function initial_state(method::AdaMax, options, d, initial_x::AbstractArray{T}) where T
initial_x = copy(initial_x)
value_gradient!!(d, initial_x)
Ξ±, Ξ²β, Ξ²β = method.Ξ±, method.Ξ²β, method.Ξ²β
m = copy(gradient(d))
u = zero(m)
a = 1 - Ξ²β
iter = 0
AdaMaxState(initial_x, # Maintain current state in state.x
copy(initial_x), # Maintain previous state in state.x_previous
real(T(NaN)), # Store previous f in state.f_x_previous
similar(initial_x), # Maintain current search direction in state.s
m,
u,
iter)
end
function update_state!(d, state::AdaMaxState{T}, method::AdaMax) where T
state.iter = state.iter+1
value_gradient!(d, state.x)
Ξ±, Ξ²β, Ξ²β, Ο΅ = method.Ξ±, method.Ξ²β, method.Ξ²β, method.Ο΅
a = 1 - Ξ²β
m, u = state.m, state.u
m .= Ξ²β .* m .+ a .* gradient(d)
    u .= max.(Ο΅, max.(Ξ²β .* u, abs.(gradient(d)))) # the Ο΅ floor is not in the paper, but if m and u start at 0 for some element, the m/u update below would produce 0/0 = NaN
@. state.x = state.x - (Ξ± / (1 - Ξ²β^state.iter)) * m / u
# Update current position # x = x + alpha * s
false # break on linesearch error
end
function trace!(tr, d, state, iteration, method::AdaMax, options, curr_time=time())
common_trace!(tr, d, state, iteration, method, options, curr_time)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 7156 | # Translation from our variables to Nocedal and Wright's
# JMW's dx <=> NW's s
# JMW's dg <=> NW' y
struct BFGS{IL, L, H, T, TM} <: FirstOrderOptimizer
alphaguess!::IL
linesearch!::L
initial_invH::H
initial_stepnorm::T
manifold::TM
end
Base.summary(::BFGS) = "BFGS"
"""
# BFGS
## Constructor
```julia
BFGS(; alphaguess = LineSearches.InitialStatic(),
linesearch = LineSearches.HagerZhang(),
initial_invH = x -> Matrix{eltype(x)}(I, length(x), length(x)),
manifold = Flat())
```
## Description
The `BFGS` method implements the Broyden-Fletcher-Goldfarb-Shanno algorithm as
described in Nocedal and Wright (sec. 8.1, 1999) and the four individual papers
Broyden (1970), Fletcher (1970), Goldfarb (1970), and Shanno (1970). It is a
quasi-Newton method that updates an approximation to the Hessian using past
approximations as well as the gradient. See also the limited memory variant
`LBFGS` for an algorithm that is more suitable for high dimensional problems.
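## Example
A minimal sketch on the Rosenbrock function (the problem and starting point are
illustrative):
```julia
using Optim
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
res = optimize(f, [-1.2, 1.0], BFGS())
```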
## References
- Wright, S. J. and J. Nocedal (1999), Numerical optimization. Springer Science 35.67-68: 7.
- Broyden, C. G. (1970), The convergence of a class of double-rank minimization algorithms, Journal of the Institute of Mathematics and Its Applications, 6: 76β90.
- Fletcher, R. (1970), A New Approach to Variable Metric Algorithms, Computer Journal, 13 (3): 317β322,
- Goldfarb, D. (1970), A Family of Variable Metric Updates Derived by Variational Means, Mathematics of Computation, 24 (109): 23β26,
- Shanno, D. F. (1970), Conditioning of quasi-Newton methods for function minimization, Mathematics of Computation, 24 (111): 647β656.
"""
function BFGS(; alphaguess = LineSearches.InitialStatic(), # TODO: benchmark defaults
linesearch = LineSearches.HagerZhang(), # TODO: benchmark defaults
initial_invH = nothing,
initial_stepnorm = nothing,
manifold::Manifold=Flat())
BFGS(_alphaguess(alphaguess), linesearch, initial_invH, initial_stepnorm, manifold)
end
mutable struct BFGSState{Tx, Tm, T,G} <: AbstractOptimizerState
x::Tx
x_previous::Tx
g_previous::G
f_x_previous::T
dx::Tx
dg::Tx
u::Tx
invH::Tm
s::Tx
@add_linesearch_fields()
end
function _init_identity_matrix(x::AbstractArray{T}, scale::T = T(1)) where {T}
    x_ = reshape(x, :)
    Id = x_ .* x_' .* false # zero matrix with the same storage type and eltype as x
    idxs = diagind(Id)
    @. @view(Id[idxs]) = scale * true # set the diagonal to `scale`
    return Id
end
function reset!(method, state::BFGSState, obj, x)
n = length(x)
T = eltype(x)
retract!(method.manifold, x)
value_gradient!(obj, x)
project_tangent!(method.manifold, gradient(obj), x)
if method.initial_invH === nothing
if method.initial_stepnorm === nothing
# Identity matrix of size n x n
state.invH = _init_identity_matrix(x)
else
initial_scale = T(method.initial_stepnorm) * inv(norm(gradient(obj), Inf))
state.invH = _init_identity_matrix(x, initial_scale)
end
else
state.invH .= method.initial_invH(x)
end
end
function initial_state(method::BFGS, options, d, initial_x::AbstractArray{T}) where T
n = length(initial_x)
initial_x = copy(initial_x)
retract!(method.manifold, initial_x)
value_gradient!!(d, initial_x)
project_tangent!(method.manifold, gradient(d), initial_x)
if method.initial_invH === nothing
if method.initial_stepnorm === nothing
# Identity matrix of size n x n
invH0 = _init_identity_matrix(initial_x)
else
initial_scale = T(method.initial_stepnorm) * inv(norm(gradient(d), Inf))
invH0 = _init_identity_matrix(initial_x, initial_scale)
end
else
invH0 = method.initial_invH(initial_x)
end
# Maintain a cache for line search results
# Trace the history of states visited
BFGSState(initial_x, # Maintain current state in state.x
copy(initial_x), # Maintain previous state in state.x_previous
copy(gradient(d)), # Store previous gradient in state.g_previous
real(T)(NaN), # Store previous f in state.f_x_previous
similar(initial_x), # Store changes in position in state.dx
similar(initial_x), # Store changes in gradient in state.dg
similar(initial_x), # Buffer stored in state.u
invH0, # Store current invH in state.invH
similar(initial_x), # Store current search direction in state.s
@initial_linesearch()...)
end
function update_state!(d, state::BFGSState, method::BFGS)
n = length(state.x)
T = eltype(state.s)
# Set the search direction
# Search direction is the negative gradient divided by the approximate Hessian
mul!(vec(state.s), state.invH, vec(gradient(d)))
rmul!(state.s, T(-1))
project_tangent!(method.manifold, state.s, state.x)
# Maintain a record of the previous gradient
copyto!(state.g_previous, gradient(d))
# Determine the distance of movement along the search line
    # This call resets invH to initial_invH if the former is not positive
    # semi-definite
lssuccess = perform_linesearch!(state, method, ManifoldObjective(method.manifold, d))
# Update current position
state.dx .= state.alpha.*state.s
state.x .= state.x .+ state.dx
retract!(method.manifold, state.x)
lssuccess == false # break on linesearch error
end
function update_h!(d, state, method::BFGS)
n = length(state.x)
# Measure the change in the gradient
state.dg .= gradient(d) .- state.g_previous
# Update the inverse Hessian approximation using Sherman-Morrison
dx_dg = real(dot(state.dx, state.dg))
if dx_dg > 0
mul!(vec(state.u), state.invH, vec(state.dg))
c1 = (dx_dg + real(dot(state.dg, state.u))) / (dx_dg' * dx_dg)
c2 = 1 / dx_dg
# invH = invH + c1 * (s * s') - c2 * (u * s' + s * u')
if(state.invH isa Array) # i.e. not a CuArray
invH = state.invH; dx = state.dx; u = state.u;
@inbounds for j in 1:n
c1dxj = c1 * dx[j]'
c2dxj = c2 * dx[j]'
c2uj = c2 * u[j]'
for i in 1:n
invH[i, j] = muladd(dx[i], c1dxj, muladd(-u[i], c2dxj, muladd(c2uj, -dx[i], invH[i, j])))
end
end
else
mul!(state.invH,vec(state.dx),vec(state.dx)', c1,1)
mul!(state.invH,vec(state.u ),vec(state.dx)',-c2,1)
mul!(state.invH,vec(state.dx),vec(state.u )',-c2,1)
end
end
end
function trace!(tr, d, state, iteration, method::BFGS, options, curr_time=time())
dt = Dict()
dt["time"] = curr_time
if options.extended_trace
dt["x"] = copy(state.x)
dt["g(x)"] = copy(gradient(d))
dt["~inv(H)"] = copy(state.invH)
dt["Current step size"] = state.alpha
end
g_norm = norm(gradient(d), Inf)
update!(tr,
iteration,
value(d),
g_norm,
dt,
options.store_trace,
options.show_trace,
options.show_every,
options.callback)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 8506 | #
# Conjugate gradient
#
# This is an independent implementation of:
# W. W. Hager and H. Zhang (2006) Algorithm 851: CG_DESCENT, a
# conjugate gradient method with guaranteed descent. ACM
# Transactions on Mathematical Software 32: 113β137.
#
# Code comments such as "HZ, stage X" or "HZ, eqs Y" are with
# reference to a particular point in this paper.
#
# Several aspects of the following have also been incorporated:
# W. W. Hager and H. Zhang (2013) The limited memory conjugate
# gradient method.
#
# This paper will be denoted HZ2013 below.
#
#
# There are some modifications and/or extensions from what's in the
# paper (these may or may not be extensions of the cg_descent code
# that can be downloaded from Hager's site; his code has undergone
# numerous revisions since publication of the paper):
#
# cgdescent: the termination condition employs a "unit-correct"
# expression rather than a condition on gradient
# components---whether this is a good or bad idea will require
# additional experience, but preliminary evidence seems to suggest
# that it makes "reasonable" choices over a wider range of problem
# types.
#
# both: checks for Inf/NaN function values
#
# both: support maximum value of alpha (equivalently, c). This
# facilitates using these routines for constrained minimization
# when you can calculate the distance along the path to the
# disallowed region. (When you can't easily calculate that
# distance, it can still be handled by returning Inf/NaN for
# exterior points. It's just more efficient if you know the
# maximum, because you don't have to test values that won't
# work.) The maximum should be specified as the largest value for
# which a finite value will be returned. See, e.g., limits_box
# below. The default value for alphamax is Inf. See alphamaxfunc
# for cgdescent and alphamax for linesearch_hz.
struct ConjugateGradient{Tf, T, Tprep, IL, L} <: FirstOrderOptimizer
eta::Tf
P::T
precondprep!::Tprep
alphaguess!::IL
linesearch!::L
manifold::Manifold
end
Base.summary(::ConjugateGradient) = "Conjugate Gradient"
"""
# Conjugate Gradient Descent
## Constructor
```julia
ConjugateGradient(; alphaguess = LineSearches.InitialHagerZhang(),
linesearch = LineSearches.HagerZhang(),
eta = 0.4,
P = nothing,
precondprep = (P, x) -> nothing,
manifold = Flat())
```
The strictly positive constant ``eta`` is used in determining
the next step direction, and the default here deviates from the one used in the
original paper (where it was ``0.01``). See more details in the original papers
referenced below.
## Description
The `ConjugateGradient` method implements Hager and Zhang (2006) and elements
from Hager and Zhang (2013). Notice, the default `linesearch` is `HagerZhang`
from LineSearches.jl. This line search is exactly the one proposed in Hager and
Zhang (2006).
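## Example
A minimal sketch (the problem is illustrative; `eta` is shown at its default):
```julia
using Optim
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
res = optimize(f, [-1.2, 1.0], ConjugateGradient(eta = 0.4))
```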
## References
- W. W. Hager and H. Zhang (2006) Algorithm 851: CG_DESCENT, a conjugate gradient method with guaranteed descent. ACM Transactions on Mathematical Software 32: 113-137.
- W. W. Hager and H. Zhang (2013), The Limited Memory Conjugate Gradient Method. SIAM Journal on Optimization, 23, pp. 2150-2168.
"""
function ConjugateGradient(; alphaguess = LineSearches.InitialHagerZhang(),
linesearch = LineSearches.HagerZhang(),
eta::Real = 0.4,
P::Any = nothing,
precondprep = (P, x) -> nothing,
manifold::Manifold=Flat())
ConjugateGradient(eta,
P, precondprep,
_alphaguess(alphaguess), linesearch,
manifold)
end
mutable struct ConjugateGradientState{Tx,T,G} <: AbstractOptimizerState
x::Tx
x_previous::Tx
g_previous::G
f_x_previous::T
y::Tx
py::Tx
pg::Tx
s::Tx
@add_linesearch_fields()
end
function reset!(cg, cgs::ConjugateGradientState, obj, x)
cgs.x .= x
cg.precondprep!(cg.P, x)
ldiv!(cgs.pg, cg.P, gradient(obj))
if cg.P !== nothing
project_tangent!(cg.manifold, cgs.pg, x)
end
cgs.s .= -cgs.pg
cgs.f_x_previous = typeof(cgs.f_x_previous)(NaN)
end
function initial_state(method::ConjugateGradient, options, d, initial_x)
T = eltype(initial_x)
initial_x = copy(initial_x)
retract!(method.manifold, initial_x)
value_gradient!!(d, initial_x)
project_tangent!(method.manifold, gradient(d), initial_x)
pg = copy(gradient(d))
# Could move this out? as a general check?
#=
# Output messages
isfinite(value(d)) || error("Initial f(x) is not finite ($(value(d)))")
if !all(isfinite, gradient(d))
@show gradient(d)
@show find(.!isfinite.(gradient(d)))
error("Gradient must have all finite values at starting point")
end
=#
# Determine the intial search direction
# if we don't precondition, then this is an extra superfluous copy
# TODO: consider allowing a reference for pg instead of a copy
method.precondprep!(method.P, initial_x)
ldiv!(pg, method.P, gradient(d))
if method.P !== nothing
project_tangent!(method.manifold, pg, initial_x)
end
ConjugateGradientState(initial_x, # Maintain current state in state.x
0 .*(initial_x), # Maintain previous state in state.x_previous
0 .*(gradient(d)), # Store previous gradient in state.g_previous
real(T)(NaN), # Store previous f in state.f_x_previous
0 .*(initial_x), # Intermediate value in CG calculation
0 .*(initial_x), # Preconditioned intermediate value in CG calculation
pg, # Maintain the preconditioned gradient in pg
-pg, # Maintain current search direction in state.s
@initial_linesearch()...)
end
function update_state!(d, state::ConjugateGradientState, method::ConjugateGradient)
# Search direction is predetermined
# Maintain a record of the previous gradient
copyto!(state.g_previous, gradient(d))
# Determine the distance of movement along the search line
lssuccess = perform_linesearch!(state, method, ManifoldObjective(method.manifold, d))
# Update current position # x = x + alpha * s
@. state.x = state.x + state.alpha * state.s
retract!(method.manifold, state.x)
# Update the function value and gradient
value_gradient!(d, state.x)
project_tangent!(method.manifold, gradient(d), state.x)
# Check sanity of function and gradient
isfinite(value(d)) || error("Non-finite f(x) while optimizing ($(value(d)))")
# Determine the next search direction using HZ's CG rule
# Calculate the beta factor (HZ2013)
# -----------------
# Comment on py: one could replace the computation of py with
# ydotpgprev = dot(y, pg)
# dot(y, py) >>> dot(y, pg) - ydotpgprev
# but I am worried about round-off here, so instead we make an
# extra copy, which is probably minimal overhead.
# -----------------
method.precondprep!(method.P, state.x)
@compat dPd = real(dot(state.s, method.P, state.s))
etak = method.eta * real(dot(state.s, state.g_previous)) / dPd # New in HZ2013
state.y .= gradient(d) .- state.g_previous
ydots = real(dot(state.y, state.s))
copyto!(state.py, state.pg) # below, store pg - pg_previous in py
ldiv!(state.pg, method.P, gradient(d))
state.py .= state.pg .- state.py
# ydots may be zero if f is not strongly convex or the line search does not satisfy Wolfe
betak = (real(dot(state.y, state.pg)) - real(dot(state.y, state.py)) * real(dot(gradient(d), state.s)) / ydots) / ydots
# betak may be undefined if ydots is zero (may due to f not strongly convex or non-Wolfe linesearch)
beta = NaNMath.max(betak, etak) # TODO: Set to zero if betak is NaN?
state.s .= beta.*state.s .- state.pg
project_tangent!(method.manifold, state.s, state.x)
lssuccess == false # break on linesearch error
end
update_g!(d, state, method::ConjugateGradient) = nothing
function trace!(tr, d, state, iteration, method::ConjugateGradient, options, curr_time=time())
common_trace!(tr, d, state, iteration, method, options, curr_time)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 3362 | struct GradientDescent{IL, L, T, Tprep} <: FirstOrderOptimizer
alphaguess!::IL
linesearch!::L
P::T
precondprep!::Tprep
manifold::Manifold
end
Base.summary(::GradientDescent) = "Gradient Descent"
"""
# Gradient Descent
## Constructor
```julia
GradientDescent(; alphaguess = LineSearches.InitialHagerZhang(),
linesearch = LineSearches.HagerZhang(),
P = nothing,
precondprep = (P, x) -> nothing)
```
Keywords are used to control choice of line search, and preconditioning.
## Description
The `GradientDescent` method is a simple gradient descent algorithm, that is the
search direction is simply the negative gradient at the current iterate, and
then a line search step is used to compute the final step. See Nocedal and
Wright (ch. 2.2, 1999) for an explanation of the approach.
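## Example
A minimal sketch with an explicit in-place gradient (the problem is illustrative):
```julia
using Optim
f(x) = sum(abs2, x)
g!(G, x) = (G .= 2 .* x)
res = optimize(f, g!, randn(3), GradientDescent())
```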
## References
- Nocedal, J. and Wright, S. J. (1999), Numerical optimization. Springer Science 35.67-68: 7.
"""
function GradientDescent(; alphaguess = LineSearches.InitialPrevious(), # TODO: Investigate good defaults.
linesearch = LineSearches.HagerZhang(), # TODO: Investigate good defaults
P = nothing,
precondprep = (P, x) -> nothing,
manifold::Manifold=Flat())
GradientDescent(_alphaguess(alphaguess), linesearch, P, precondprep, manifold)
end
mutable struct GradientDescentState{Tx, T} <: AbstractOptimizerState
x::Tx
x_previous::Tx
f_x_previous::T
s::Tx
@add_linesearch_fields()
end
function reset!(method, state::GradientDescentState, obj, x)
retract!(method.manifold, x)
value_gradient!!(obj, x)
project_tangent!(method.manifold, gradient(obj), x)
end
function initial_state(method::GradientDescent, options, d, initial_x::AbstractArray{T}) where T
initial_x = copy(initial_x)
retract!(method.manifold, initial_x)
value_gradient!!(d, initial_x)
project_tangent!(method.manifold, gradient(d), initial_x)
GradientDescentState(initial_x, # Maintain current state in state.x
copy(initial_x), # Maintain previous state in state.x_previous
real(T(NaN)), # Store previous f in state.f_x_previous
similar(initial_x), # Maintain current search direction in state.s
@initial_linesearch()...)
end
function update_state!(d, state::GradientDescentState{T}, method::GradientDescent) where T
value_gradient!(d, state.x)
# Search direction is always the negative preconditioned gradient
project_tangent!(method.manifold, gradient(d), state.x)
method.precondprep!(method.P, state.x)
ldiv!(state.s, method.P, gradient(d))
rmul!(state.s, eltype(state.s)(-1))
if method.P !== nothing
project_tangent!(method.manifold, state.s, state.x)
end
# Determine the distance of movement along the search line
lssuccess = perform_linesearch!(state, method, ManifoldObjective(method.manifold, d))
# Update current position # x = x + alpha * s
@. state.x = state.x + state.alpha * state.s
retract!(method.manifold, state.x)
lssuccess == false # break on linesearch error
end
function trace!(tr, d, state, iteration, method::GradientDescent, options, curr_time=time())
common_trace!(tr, d, state, iteration, method, options, curr_time)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 8031 | # Notational note
# JMW's dx_history <=> NW's S
# JMW's dg_history <=> NW's Y
# Here alpha is a cache that parallels betas
# It is not the step-size
# q is also a cache
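# The recursion computes s = -Hk*gr, where Hk is the inverse-Hessian
# approximation implied by the m most recent (dx, dg) pairs with
# rho_i = 1/dot(dg_i, dx_i); see Nocedal & Wright (2nd ed), Algorithm 7.4.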
function twoloop!(s,
gr,
rho,
dx_history,
dg_history,
m::Integer,
pseudo_iteration::Integer,
alpha,
q,
scaleinvH0::Bool,
precon)
# Count number of parameters
n = length(s)
# Determine lower and upper bounds for loops
lower = pseudo_iteration - m
upper = pseudo_iteration - 1
# Copy gr into q for backward pass
copyto!(q, gr)
# Backward pass
for index in upper:-1:lower
if index < 1
continue
end
i = mod1(index, m)
dgi = dg_history[i]
dxi = dx_history[i]
@inbounds alpha[i] = rho[i] * real(dot(dxi, q))
@inbounds q .-= alpha[i] .* dgi
end
# Copy q into s for forward pass
    if scaleinvH0 && pseudo_iteration > 1
# Use the initial scaling guess from
# Nocedal & Wright (2nd ed), Equation (7.20)
#=
pseudo_iteration > 1 prevents this scaling from happening
at the first iteration, but also at the first step after
a reset due to invH being non-positive definite (pseudo_iteration = 1).
TODO: Maybe we can still use the scaling as long as iteration > 1?
=#
i = mod1(upper, m)
dxi = dx_history[i]
dgi = dg_history[i]
scaling = real(dot(dxi, dgi)) / sum(abs2, dgi)
@. s = scaling*q
else
# apply preconditioner if scaleinvH0 is false as the true setting
# is essentially its own kind of preconditioning
# (Note: preconditioner update was done outside of this function)
ldiv!(s, precon, q)
end
# Forward pass
for index in lower:1:upper
if index < 1
continue
end
i = mod1(index, m)
dgi = dg_history[i]
dxi = dx_history[i]
@inbounds beta = rho[i] * real(dot(dgi, s))
@inbounds s .+= dxi .* (alpha[i] - beta)
end
# Negate search direction
rmul!(s, eltype(s)(-1))
return
end
struct LBFGS{T, IL, L, Tprep} <: FirstOrderOptimizer
m::Int
alphaguess!::IL
linesearch!::L
P::T
precondprep!::Tprep
manifold::Manifold
scaleinvH0::Bool
end
"""
# LBFGS
## Constructor
```julia
LBFGS(; m::Integer = 10,
alphaguess = LineSearches.InitialStatic(),
linesearch = LineSearches.HagerZhang(),
P=nothing,
precondprep = (P, x) -> nothing,
manifold = Flat(),
scaleinvH0::Bool = true && (typeof(P) <: Nothing))
```
`LBFGS` has two special keywords; the memory length `m`,
and the `scaleinvH0` flag.
The memory length determines how many previous Hessian
approximations to store.
When `scaleinvH0 == true`,
then the initial guess in the two-loop recursion to approximate the
inverse Hessian is the scaled identity, as can be found in Nocedal and Wright (2nd edition) (sec. 7.2).
In addition, LBFGS supports preconditioning via the `P` and `precondprep`
keywords.
## Description
The `LBFGS` method implements the limited-memory BFGS algorithm as described in
Nocedal and Wright (sec. 7.2, 2006) and original paper by Liu & Nocedal (1989).
It is a quasi-Newton method that updates an approximation to the Hessian using
past approximations as well as the gradient.
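## Example
A minimal sketch on an extended Rosenbrock problem, with a longer memory than the
default `m = 10` (the problem and settings are illustrative):
```julia
using Optim
f(x) = sum(100.0 * (x[2i] - x[2i-1]^2)^2 + (1.0 - x[2i-1])^2 for i in 1:50)
res = optimize(f, zeros(100), LBFGS(m = 20))
```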
## References
- Wright, S. J. and J. Nocedal (2006), Numerical optimization, 2nd edition. Springer
- Liu, D. C. and Nocedal, J. (1989). "On the Limited Memory Method for Large Scale Optimization". Mathematical Programming B. 45 (3): 503β528
"""
function LBFGS(; m::Integer = 10,
alphaguess = LineSearches.InitialStatic(), # TODO: benchmark defaults
linesearch = LineSearches.HagerZhang(), # TODO: benchmark defaults
P=nothing,
precondprep = (P, x) -> nothing,
manifold::Manifold=Flat(),
scaleinvH0::Bool = true && (typeof(P) <: Nothing) )
LBFGS(Int(m), _alphaguess(alphaguess), linesearch, P, precondprep, manifold, scaleinvH0)
end
Base.summary(::LBFGS) = "L-BFGS"
mutable struct LBFGSState{Tx, Tdx, Tdg, T, G} <: AbstractOptimizerState
x::Tx
x_previous::Tx
g_previous::G
rho::Vector{T}
dx_history::Tdx
dg_history::Tdg
dx::Tx
dg::Tx
u::Tx
f_x_previous::T
twoloop_q
twoloop_alpha
pseudo_iteration::Int
s::Tx
@add_linesearch_fields()
end
function reset!(method, state::LBFGSState, obj, x)
retract!(method.manifold, x)
value_gradient!(obj, x)
project_tangent!(method.manifold, gradient(obj), x)
state.pseudo_iteration = 0
end
function initial_state(method::LBFGS, options, d, initial_x)
T = real(eltype(initial_x))
n = length(initial_x)
initial_x = copy(initial_x)
retract!(method.manifold, initial_x)
value_gradient!!(d, initial_x)
project_tangent!(method.manifold, gradient(d), initial_x)
LBFGSState(initial_x, # Maintain current state in state.x
copy(initial_x), # Maintain previous state in state.x_previous
copy(gradient(d)), # Store previous gradient in state.g_previous
fill(T(NaN), method.m), # state.rho
[similar(initial_x) for i = 1:method.m], # Store changes in position in state.dx_history
               [eltype(gradient(d))(NaN).*gradient(d) for i = 1:method.m], # Store changes in gradient in state.dg_history
T(NaN)*initial_x, # Buffer for new entry in state.dx_history
T(NaN)*initial_x, # Buffer for new entry in state.dg_history
T(NaN)*initial_x, # Buffer stored in state.u
real(T)(NaN), # Store previous f in state.f_x_previous
               similar(initial_x), # Buffer for use by twoloop
               Vector{T}(undef, method.m), # Buffer for use by twoloop
0,
eltype(gradient(d))(NaN).*gradient(d), # Store current search direction in state.s
@initial_linesearch()...)
end
function update_state!(d, state::LBFGSState, method::LBFGS)
n = length(state.x)
# Increment the number of steps we've had to perform
state.pseudo_iteration += 1
project_tangent!(method.manifold, gradient(d), state.x)
# update the preconditioner
method.precondprep!(method.P, state.x)
# Determine the L-BFGS search direction # FIXME just pass state and method?
twoloop!(state.s, gradient(d), state.rho, state.dx_history, state.dg_history,
method.m, state.pseudo_iteration,
state.twoloop_alpha, state.twoloop_q, method.scaleinvH0, method.P)
project_tangent!(method.manifold, state.s, state.x)
# Save g value to prepare for update_g! call
copyto!(state.g_previous, gradient(d))
# Determine the distance of movement along the search line
lssuccess = perform_linesearch!(state, method, ManifoldObjective(method.manifold, d))
# Update current position
state.dx .= state.alpha .* state.s
state.x .= state.x .+ state.dx
retract!(method.manifold, state.x)
lssuccess == false # break on linesearch error
end
function update_h!(d, state, method::LBFGS)
n = length(state.x)
# Measure the change in the gradient
state.dg .= gradient(d) .- state.g_previous
# Update the L-BFGS history of positions and gradients
rho_iteration = one(eltype(state.dx)) / real(dot(state.dx, state.dg))
if isinf(rho_iteration)
# TODO: Introduce a formal error? There was a warning here previously
state.pseudo_iteration=0
return true
end
idx = mod1(state.pseudo_iteration, method.m)
state.dx_history[idx] .= state.dx
state.dg_history[idx] .= state.dg
state.rho[idx] = rho_iteration
false
end
function trace!(tr, d, state, iteration, method::LBFGS, options, curr_time=time())
common_trace!(tr, d, state, iteration, method, options, curr_time)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2662 | # See p. 280 of Murphy's Machine Learning
# x_k1 = x_k - alpha * gr + mu * (x - x_previous)
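# where alpha comes from a line search along the negative gradient and mu
# weights the momentum term; see update_state! below.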
struct MomentumGradientDescent{Tf, IL,L} <: FirstOrderOptimizer
mu::Tf
alphaguess!::IL
linesearch!::L
manifold::Manifold
end
Base.summary(::MomentumGradientDescent) = "Momentum Gradient Descent"
function MomentumGradientDescent(; mu::Real = 0.01,
alphaguess = LineSearches.InitialPrevious(), # TODO: investigate good defaults
linesearch = LineSearches.HagerZhang(), # TODO: investigate good defaults
manifold::Manifold=Flat())
MomentumGradientDescent(mu, _alphaguess(alphaguess), linesearch, manifold)
end
mutable struct MomentumGradientDescentState{Tx, T} <: AbstractOptimizerState
x::Tx
x_previous::Tx
x_momentum::Tx
f_x_previous::T
s::Tx
@add_linesearch_fields()
end
function initial_state(method::MomentumGradientDescent, options, d, initial_x)
T = eltype(initial_x)
initial_x = copy(initial_x)
retract!(method.manifold, initial_x)
value_gradient!!(d, initial_x)
project_tangent!(method.manifold, gradient(d), initial_x)
MomentumGradientDescentState(initial_x, # Maintain current state in state.x
copy(initial_x), # Maintain previous state in state.x_previous
similar(initial_x), # Record momentum correction direction in state.x_momentum
real(T)(NaN), # Store previous f in state.f_x_previous
similar(initial_x), # Maintain current search direction in state.s
@initial_linesearch()...)
end
function update_state!(d, state::MomentumGradientDescentState, method::MomentumGradientDescent)
project_tangent!(method.manifold, gradient(d), state.x)
# Search direction is always the negative gradient
state.s .= .-gradient(d)
# Update position, and backup current one
state.x_momentum .= state.x .- state.x_previous
# Determine the distance of movement along the search line
lssuccess = perform_linesearch!(state, method, ManifoldObjective(method.manifold, d))
state.x .+= state.alpha.*state.s .+ method.mu.*state.x_momentum
retract!(method.manifold, state.x)
lssuccess == false # break on linesearch error
end
function trace!(tr, d, state, iteration, method::MomentumGradientDescent, options, curr_time=time())
common_trace!(tr, d, state, iteration, method, options, curr_time)
end
function default_options(method::MomentumGradientDescent)
(; allow_f_increases = true)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 18290 | #= TODO:
- How to deal with f_increased (as state and preconstate share the same x and x_previous vectors)
- Check whether this makes sense for other preconditioners than GradientDescent and L-BFGS
* There might be some issue of dealing with x_current and x_previous in MomentumGradientDescent
* Trust region based methods may not work because we assume the preconditioner calls perform_linesearch!
=#
abstract type AbstractNGMRES <: FirstOrderOptimizer end
# TODO: Enforce TPrec <: Union{FirstOrderoptimizer,SecondOrderOptimizer}?
struct NGMRES{IL, Tp,TPrec <: AbstractOptimizer,L} <: AbstractNGMRES
alphaguess!::IL # Initial step length guess for linesearch along direction xP->xA
linesearch!::L # Preconditioner moving from xP to xA (precondition x to accelerated x)
manifold::Manifold
nlprecon::TPrec # Nonlinear preconditioner
nlpreconopts::Options # Preconditioner options
Ο΅0::Tp # Ensure A-matrix is positive definite
wmax::Int # Maximum window size
end
struct OACCEL{IL, Tp,TPrec <: AbstractOptimizer,L} <: AbstractNGMRES
alphaguess!::IL # Initial step length guess for linesearch along direction xP->xA
linesearch!::L # Linesearch between xP and xA (precondition x to accelerated x)
manifold::Manifold
nlprecon::TPrec # Nonlinear preconditioner
nlpreconopts::Options # Preconditioner options
Ο΅0::Tp # Ensure A-matrix is positive definite
wmax::Int # Maximum window size
end
Base.summary(s::NGMRES) = "Nonlinear GMRES preconditioned with $(summary(s.nlprecon))"
Base.summary(s::OACCEL) = "O-ACCEL preconditioned with $(summary(s.nlprecon))"
"""
# N-GMRES
## Constructor
```julia
NGMRES(;
alphaguess = LineSearches.InitialStatic(),
linesearch = LineSearches.HagerZhang(),
manifold = Flat(),
wmax::Int = 10,
Ο΅0 = 1e-12,
nlprecon = GradientDescent(
alphaguess = LineSearches.InitialStatic(alpha=1e-4,scaled=true),
linesearch = LineSearches.Static(),
manifold = manifold),
nlpreconopts = Options(iterations = 1, allow_f_increases = true),
)
```
## Description
This algorithm takes a step given by the nonlinear preconditioner `nlprecon`
and proposes an accelerated step by minimizing an approximation of
the β_2 residual of the gradient on a subspace spanned by the previous
`wmax` iterates.
N-GMRES was originally developed for solving nonlinear systems [1], and reduces to
GMRES for linear problems.
Application of the algorithm to optimization is covered, for example, in [2].
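## Example
A minimal sketch using the default gradient-descent preconditioner (the problem
is illustrative):
```julia
using Optim
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
res = optimize(f, [-1.2, 1.0], NGMRES())
```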
## References
[1] De Sterck. Steepest descent preconditioning for nonlinear GMRES optimization. NLAA, 2013.
[2] Washio and Oosterlee. Krylov subspace acceleration for nonlinear multigrid schemes. ETNA, 1997.
"""
function NGMRES(;manifold::Manifold = Flat(),
alphaguess = LineSearches.InitialStatic(),
linesearch = LineSearches.HagerZhang(),
nlprecon = GradientDescent(
alphaguess = LineSearches.InitialStatic(alpha=1e-4,scaled=true), # Step length arbitrary,
linesearch = LineSearches.Static(),
manifold = manifold),
nlpreconopts = Options(iterations = 1, allow_f_increases = true),
                Ο΅0 = 1e-12, # Ο΅0 = 1e-12 -- number was an arbitrary choice
wmax::Int = 10) # wmax = 10 -- number was an arbitrary choice to match L-BFGS field `m`
@assert manifold == nlprecon.manifold
NGMRES(_alphaguess(alphaguess), linesearch, manifold, nlprecon, nlpreconopts, Ο΅0, wmax)
end
"""
# O-ACCEL
## Constructor
```julia
OACCEL(;manifold::Manifold = Flat(),
alphaguess = LineSearches.InitialStatic(),
linesearch = LineSearches.HagerZhang(),
nlprecon = GradientDescent(
alphaguess = LineSearches.InitialStatic(alpha=1e-4,scaled=true),
linesearch = LineSearches.Static(),
manifold = manifold),
nlpreconopts = Options(iterations = 1, allow_f_increases = true),
Ο΅0 = 1e-12,
wmax::Int = 10)
```
## Description
This algorithm takes a step given by the nonlinear preconditioner `nlprecon`
and proposes an accelerated step by minimizing an approximation of
the objective on a subspace spanned by the previous
`wmax` iterates.
O-ACCEL is a slight tweak of N-GMRES, first presented in [1].
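## Example
A minimal sketch with a smaller acceleration window than the default `wmax = 10`
(the problem and setting are illustrative):
```julia
using Optim
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
res = optimize(f, [-1.2, 1.0], OACCEL(wmax = 5))
```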
## References
[1] Riseth. Objective acceleration for unconstrained optimization. 2018.
"""
function OACCEL(;manifold::Manifold = Flat(),
alphaguess = LineSearches.InitialStatic(),
linesearch = LineSearches.HagerZhang(),
nlprecon = GradientDescent(
alphaguess = LineSearches.InitialStatic(alpha=1e-4,scaled=true), # Step length arbitrary
linesearch = LineSearches.Static(),
manifold = manifold),
nlpreconopts = Options(iterations = 1, allow_f_increases = true),
Ο΅0 = 1e-12, # Ο΅0 = 1e-12 -- number was an arbitrary choice
wmax::Int = 10) # wmax = 10 -- number was an arbitrary choice to match L-BFGS field `m`
@assert manifold == nlprecon.manifold
OACCEL(_alphaguess(alphaguess), linesearch, manifold, nlprecon, nlpreconopts, Ο΅0, wmax)
end
mutable struct NGMRESState{P,Tx,Te,T,eTx} <: AbstractOptimizerState where P <: AbstractOptimizerState
# eTx is the eltype of Tx
x::Tx # Reference to nlpreconstate.x
x_previous::Tx # Reference to nlpreconstate.x_previous
x_previous_0::Tx # Used to deal with assess_convergence of NGMRES
f_x_previous::T
f_x_previous_0::T # Used to deal with assess_convergence of NGMRES
f_xP::T # For tracing purposes
grnorm_xP::T # For tracing purposes
s::Tx # Search direction for linesearch between xP and xA
nlpreconstate::P # Nonlinear preconditioner state
X::Array{eTx,2} # Solution vectors in the window
R::Array{eTx,2} # Gradient vectors in the window
Q::Array{T,2} # Storage to create linear system (TODO: make Symmetric?)
ΞΎ::Te # Storage to create linear system
curw::Int # Counter for current window size
A::Array{T,2} # Container for AΞ± = b
b::Vector{T} # Container for AΞ± = b
xA::Vector{eTx} # Container for accelerated step
k::Int # Used for indexing where to put values in the Storage containers
restart::Bool # Restart flag
g_abstol::T # Exit tolerance to be checked after nonlinear preconditioner apply
subspacealpha::Vector{T} # Storage for coefficients in the subspace for the acceleration step
@add_linesearch_fields()
end
"Update storage Q[i,j] and Q[j,i] for `NGMRES`"
@inline function _updateQ!(Q, i::Int, j::Int, X, R, ::NGMRES)
Q[j,i] = real(dot(R[:, j], R[:,i]))
if i != j
Q[i,j] = Q[j, i] # TODO: Use Symmetric?
end
end
"Update storage A[i,j] for `NGMRES`"
@inline function _updateA!(A, i::Int, j::Int, Q, ΞΎ, Ξ·, ::NGMRES)
A[i,j] = Q[i,j]-ΞΎ[i]-ΞΎ[j]+Ξ·
end
"Update storage ΞΎ[i,:] for `NGMRES`"
@inline function _updateΞΎ!(ΞΎ, i::Int, X, x, R, r, ::NGMRES)
ΞΎ[i] = real(dot(vec(r), R[:,i]))
end
"Update storage b[i] for `NGMRES`"
@inline function _updateb!(b, i::Int, ΞΎ, Ξ·, ::NGMRES)
b[i] = Ξ· - ΞΎ[i]
end
"Update value Ξ· for `NGMRES`"
@inline function _updateΞ·(x, r, ::NGMRES)
real(dot(r, r))
end
"Update storage Q[i,j] and Q[j,i] for `OACCEL`"
@inline function _updateQ!(Q, i::Int, j::Int, X, R, ::OACCEL)
Q[i,j] = real(dot(X[:,i], R[:,j]))
if i != j
Q[j,i] = real(dot(X[:,j], R[:,i]))
end
end
"Update storage A[i,j] for `OACCEL`"
@inline function _updateA!(A, i::Int, j::Int, Q, ΞΎ, Ξ·, ::OACCEL)
A[i,j] = Q[i,j]-ΞΎ[i,1]-ΞΎ[j,2]+Ξ·
end
"Update storage ΞΎ[i,:] for `OACCEL`"
@inline function _updateΞΎ!(ΞΎ, i::Int, X, x, R, r, ::OACCEL)
ΞΎ[i,1] = real(dot(X[:,i], r))
ΞΎ[i,2] = real(dot(x, R[:,i]))
end
"Update storage b[i] for `OACCEL`"
@inline function _updateb!(b, i::Int, ΞΎ, Ξ·, ::OACCEL)
b[i] = Ξ· - ΞΎ[i,1]
end
"Update value Ξ· for `OACCEL`"
@inline function _updateΞ·(x, r, ::OACCEL)
real(dot(x, r))
end
const ngmres_oaccel_warned = Ref{Bool}(false)
function initial_state(method::AbstractNGMRES, options, d, initial_x::AbstractArray{eTx}) where eTx
if !(typeof(method.nlprecon) <: Union{GradientDescent,LBFGS})
if !ngmres_oaccel_warned[]
@warn "Use caution. N-GMRES/O-ACCEL has only been tested with Gradient Descent and L-BFGS preconditioning."
ngmres_oaccel_warned[] = true
end
end
nlpreconstate = initial_state(method.nlprecon, method.nlpreconopts, d, initial_x)
# Manifold comment:
# We assume nlprecon calls retract! and project_tangent! on
# nlpreconstate.x and gradient(d)
T = real(eTx)
n = length(nlpreconstate.x)
wmax = method.wmax
X = Array{eTx}(undef, n, wmax)
R = Array{eTx}(undef, n, wmax)
Q = Array{T}(undef, wmax, wmax)
ΞΎ = if typeof(method) <: OACCEL
Array{T}(undef, wmax, 2)
else
Array{T}(undef, wmax)
end
copyto!(view(X,:,1), nlpreconstate.x)
copyto!(view(R,:,1), gradient(d))
_updateQ!(Q, 1, 1, X, R, method)
NGMRESState(nlpreconstate.x, # Maintain current state in state.x. Use same vector as preconditioner.
nlpreconstate.x_previous, # Maintain in state.x_previous. Use same vector as preconditioner.
copy(nlpreconstate.x), # Maintain state at the beginning of an iteration in state.x_previous_0. Used for convergence assessment.
T(NaN), # Store previous f in state.f_x_previous
T(NaN), # Store f value from the beginning of an iteration in state.f_x_previous_0. Used for convergence assessment.
T(NaN), # Store value f_xP of f(x^P) for tracing purposes
T(NaN), # Store value grnorm_xP of |g(x^P)| for tracing purposes
similar(initial_x), # Maintain current search direction in state.s
nlpreconstate, # State storage for preconditioner
X,
R,
Q,
ΞΎ,
1, # curw
Array{T}(undef, wmax, wmax), # A
Array{T}(undef, wmax), # b
vec(similar(initial_x)), # xA
0, # iteration counter
false, # Restart flag
options.g_abstol, # Exit tolerance check after nonlinear preconditioner apply
Array{T}(undef, wmax), # subspacealpha
@initial_linesearch()...)
end
nlprecon_post_optimize!(d, state, method) = update_h!(d, state.nlpreconstate, method)
nlprecon_post_accelerate!(d, state, method) = update_h!(d, state.nlpreconstate, method)
function nlprecon_post_accelerate!(d, state::NGMRESState{X,T},
method::LBFGS) where X where T
state.nlpreconstate.pseudo_iteration += 1
update_h!(d, state.nlpreconstate, method)
end
function update_state!(d, state::NGMRESState{X,T}, method::AbstractNGMRES) where X where T
# Maintain a record of previous position, for convergence assessment
copyto!(state.x_previous_0, state.x)
state.f_x_previous_0 = value(d)
state.k += 1
curw = state.curw
# Step 1: Call preconditioner to get x^P
res = optimize(d, state.x, method.nlprecon, method.nlpreconopts, state.nlpreconstate)
# TODO: Is project_tangent! necessary, or is it called by nlprecon before exit?
project_tangent!(method.manifold, gradient(d), state.x)
if any(.!isfinite.(state.x)) || any(.!isfinite.(gradient(d))) || !isfinite(value(d))
@warn("Non-finite values attained from preconditioner $(summary(method.nlprecon)).")
return true
end
# Calling value_gradient! is normally done on state.x in optimize or update_g! above,
# but there are corner cases where we need this.
state.f_xP, _g = value_gradient!(d, state.x)
# Manifold start
project_tangent!(method.manifold, gradient(d), state.x)
# Manifold stop
gP = gradient(d)
state.grnorm_xP = g_residual(gP)
if g_residual(gP) <= state.g_abstol
return false # Exit on gradient norm convergence
end
# Deals with update_h! etc for preconditioner, if needed
nlprecon_post_optimize!(d, state, method.nlprecon)
# Step 2: Do acceleration calculation
Ξ· = _updateΞ·(state.x, gP, method)
for i = 1:curw
# Update storage vectors according to method {NGMRES, OACCEL}
_updateΞΎ!(state.ΞΎ, i, state.X, state.x, state.R, gP, method)
_updateb!(state.b, i, state.ΞΎ, Ξ·, method)
end
for i = 1:curw
for j = 1:curw
# Update system matrix according to method {NGMRES, OACCEL}
_updateA!(state.A, i, j, state.Q, state.ΞΎ, Ξ·, method)
end
end
Ξ± = view(state.subspacealpha, 1:curw)
Aview = view(state.A, 1:curw, 1:curw)
bview = view(state.b, 1:curw)
# The outer max is to avoid Ξ΄=0, which may occur if A=0, e.g. at numerical convergence
Ξ΄ = method.Ο΅0*max(maximum(diag(Aview)), method.Ο΅0)
try
Ξ± .= (Aview + Ξ΄*I) \ bview
catch e
@warn("Calculating Ξ± failed in $(summary(method)).")
@warn("Exception info:\n $e")
Ξ± .= NaN
end
if any(isnan, Ξ±)
@warn("Calculated Ξ± is NaN in $(summary(method)). Restarting ...")
state.s .= zero(eltype(state.s))
state.restart = true
else
# xA = xP + \sum_{j=1}^{curw} Ξ±[j] * (X[j] - xP)
state.xA .= (1.0-sum(Ξ±)).*vec(state.x) .+
sum(state.X[:,k]*Ξ±[k] for k = 1:curw)
state.s .= reshape(state.xA, size(state.x)) .- state.x
end
# 3: Perform condition checks
if real(dot(state.s, gP)) β₯ 0 || !isfinite(real(dot(state.s, gP)))
# Moving from xP to xA is *not* a descent direction
# Discard xA
state.restart = true # TODO: expand restart heuristics
lssuccess = true
state.alpha = 0.0
else
state.restart = false
# Update f_x_previous and dphi_0_previous according to preconditioner step
# This may be used in perform_linesearch!/alphaguess! when moving from x^P to x^A
# TODO: make this a function?
state.f_x_previous = state.nlpreconstate.f_x_previous
if typeof(method.alphaguess!) <: LineSearches.InitialConstantChange
nlprec = method.nlprecon
if isdefined(nlprec, :alphaguess!) &&
typeof(nlprec.alphaguess!) <: LineSearches.InitialConstantChange
method.alphaguess!.dΟ_0_previous[] = nlprec.alphaguess!.dΟ_0_previous[]
end
end
# state.x_previous and state.x are dealt with by reference
lssuccess = perform_linesearch!(state, method, ManifoldObjective(method.manifold, d))
@. state.x = state.x + state.alpha * state.s
# Manifold start
retract!(method.manifold, state.x)
# Manifold stop
# TODO: Move these into `nlprecon_post_accelerate!` ?
state.nlpreconstate.f_x_previous = state.f_x_previous
if typeof(method.alphaguess!) <: LineSearches.InitialConstantChange
nlprec = method.nlprecon
if isdefined(nlprec, :alphaguess!) &&
typeof(nlprec.alphaguess!) <: LineSearches.InitialConstantChange
nlprec.alphaguess!.dΟ_0_previous[] = method.alphaguess!.dΟ_0_previous[]
end
end
# Deals with update_h! etc. for preconditioner, if needed
nlprecon_post_accelerate!(d, state, method.nlprecon)
end
#=
Update x_previous and f_x_previous to be the values at the beginning
of the N-GMRES iteration. For convergence assessment purposes.
=#
copyto!(state.x_previous, state.x_previous_0)
state.f_x_previous = state.f_x_previous_0
lssuccess == false # Break on linesearch error
end
function update_g!(d, state, method::AbstractNGMRES)
# Update the function value and gradient
# TODO: do we need a retract! on state.x here?
value_gradient!(d, state.x)
project_tangent!(method.manifold, gradient(d), state.x)
if state.restart == false
state.curw = min(state.curw + 1, method.wmax)
else
state.k = 0
state.curw = 1
end
j = mod(state.k, method.wmax) + 1
copyto!(view(state.X,:,j), vec(state.x))
copyto!(view(state.R,:,j), vec(gradient(d)))
for i = 1:state.curw
_updateQ!(state.Q, i, j, state.X, state.R, method)
end
end
function trace!(tr, d, state, iteration, method::AbstractNGMRES, options, curr_time=time())
dt = Dict()
dt["time"] = curr_time
if options.extended_trace
dt["x"] = copy(state.x)
dt["g(x)"] = copy(gradient(d))
dt["subspace-Ξ±"] = state.subspacealpha[1:state.curw-1]
if state.restart == true
dt["Current step size"] = NaN
else
dt["Current step size"] = state.alpha
# This is a wasteful hack to get the previous values for debugging purposes only.
xP = state.x .- state.alpha .* state.s
dt["x^P"] = copy(xP)
# TODO: What's a good way to include g(x^P) here without messing up gradient counts?
end
end
dt["Restart"] = state.restart
if state.restart == false
dt["f(x^P)"] = state.f_xP
dt["|g(x^P)|"] = state.grnorm_xP
end
g_norm = g_residual(d)
update!(tr,
iteration,
value(d),
g_norm,
dt,
options.store_trace,
options.show_trace,
options.show_every,
options.callback)
end
#
# function assess_convergence(state::NGMRESState, d, options::Options)
# default_convergence_assessment(state, d, options)
# end
function default_options(method::AbstractNGMRES)
(;allow_f_increases = true)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 6332 | struct KrylovTrustRegion{T <: Real} <: SecondOrderOptimizer
initial_radius::T
max_radius::T
eta::T
rho_lower::T
rho_upper::T
cg_tol::T
end
KrylovTrustRegion(; initial_radius::Real = 1.0,
max_radius::Real = 100.0,
eta::Real = 0.1,
rho_lower::Real = 0.25,
rho_upper::Real = 0.75,
cg_tol::Real = 0.01) =
KrylovTrustRegion(initial_radius, max_radius, eta,
rho_lower, rho_upper, cg_tol)
update_h!(d, state, method::KrylovTrustRegion) = nothing
# TODO: support x::Array{T,N} et al.?
mutable struct KrylovTrustRegionState{T} <: AbstractOptimizerState
x::Vector{T}
x_previous::Vector{T}
f_x_previous::T
s::Vector{T}
interior::Bool
accept_step::Bool
radius::T
m_diff::T
f_diff::T
rho::T
r::Vector{T} # residual vector
d::Vector{T} # direction to consider
cg_iters::Int
end
function initial_state(method::KrylovTrustRegion, options, d, initial_x::Array{T}) where T
n = length(initial_x)
# Maintain current gradient in gr
@assert(method.max_radius > 0)
@assert(0 < method.initial_radius < method.max_radius)
@assert(0 <= method.eta < method.rho_lower)
@assert(method.rho_lower < method.rho_upper)
@assert(method.rho_lower >= 0)
value_gradient!!(d, initial_x)
KrylovTrustRegionState(copy(initial_x), # Maintain current state in state.x
copy(initial_x), # x_previous
zero(T), # f_x_previous
similar(initial_x), # Maintain current search direction in state.s
true, # interior
true, # accept step
convert(T,method.initial_radius),
zero(T), # model change
zero(T), # observed f change
zero(T), # state.rho
Vector{T}(undef, n), # residual vector
Vector{T}(undef, n), # direction to consider
0) # cg_iters
end
function trace!(tr, d, state, iteration, method::KrylovTrustRegion, options, curr_time=time())
dt = Dict()
dt["time"] = curr_time
if options.extended_trace
dt["x"] = copy(state.x)
dt["radius"] = copy(state.radius)
dt["interior"] = state.interior
dt["accept_step"] = state.accept_step
dt["norm(s)"] = norm(state.s)
dt["rho"] = state.rho
dt["m_diff"] = state.m_diff
dt["f_diff"] = state.f_diff
dt["cg_iters"] = state.cg_iters
end
g_norm = norm(gradient(d), Inf)
update!(tr,
iteration,
value(d),
g_norm,
dt,
options.store_trace,
options.show_trace,
options.show_every,
options.callback)
end
function cg_steihaug!(objective::TwiceDifferentiableHV,
state::KrylovTrustRegionState{T},
method::KrylovTrustRegion) where T
n = length(state.x)
x, g, d, r, z, Hd = state.x, gradient(objective), state.d, state.r, state.s, hv_product(objective)
fill!(z, 0.0) # the search direction is initialized to the 0 vector,
r .= g # so at first the whole gradient is the residual.
d .= -r # the first direction is the direction of steepest descent.
rho0 = 1e100 # just a big number
state.cg_iters = 0
for i in 1:n
state.cg_iters += 1
hv_product!(objective, x, d)
dHd = dot(d, Hd)
if -1e-15 < dHd < 1e-15
break
end
alpha = dot(r, r) / dHd
if dHd < 0. || norm(z .+ alpha .* d) >= state.radius
a_ = dot(d, d)
b_ = 2 * dot(z, d)
c_ = dot(z, z) - state.radius^2
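# Step to the trust-region boundary: choose tau >= 0 with
# ||z + tau*d||^2 == radius^2, i.e. the positive root of
# a_*tau^2 + b_*tau + c_ = 0 with the coefficients above.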
tau = (-b_ + sqrt(b_ * b_ - 4 * a_ * c_)) / (2 * a_)
z .+= tau .* d
break
end
z .+= alpha .* d
rho_prev = dot(r, r)
if i == 1
rho0 = rho_prev
end
r .+= alpha * Hd
rho_next = dot(r, r)
r_sqnorm_ratio = rho_next / rho_prev
d[:] = -r + r_sqnorm_ratio * d
if (rho_next / rho0) < method.cg_tol^2
break
end
end
hv_product!(objective, x, z)
return dot(g, z) + 0.5 * dot(z, Hd)
end
function update_state!(objective::TwiceDifferentiableHV,
state::KrylovTrustRegionState,
method::KrylovTrustRegion)
state.m_diff = cg_steihaug!(objective, state, method)
@assert state.m_diff <= 0
state.f_diff = value(objective, state.x .+ state.s) - value(objective)
state.rho = state.f_diff / state.m_diff
state.interior = norm(state.s) < 0.9 * state.radius
if state.rho < method.rho_lower
state.radius *= 0.25
elseif (state.rho > method.rho_upper) && (!state.interior)
state.radius = min(2 * state.radius, method.max_radius)
end
state.accept_step = state.rho > method.eta
if state.accept_step
state.x .+= state.s
end
return false
end
function update_g!(objective, state::KrylovTrustRegionState, method::KrylovTrustRegion)
if state.accept_step
# Update the function value and gradient
state.f_x_previous = value(objective)
value_gradient!(objective, state.x)
end
end
function assess_convergence(state::KrylovTrustRegionState, d, options::Options)
if !state.accept_step
return state.radius < options.x_abstol, false, false, false
end
x_converged, f_converged, f_increased, g_converged = false, false, false, false
if norm(state.s, Inf) < options.x_abstol
x_converged = true
end
# Absolute Tolerance
# if abs(f_x - f_x_previous) < f_tol
# Relative Tolerance
if abs(state.f_diff) < max(options.f_reltol * (abs(value(d)) + options.f_reltol), eps(abs(value(d))+abs(state.f_x_previous)))
f_converged = true
end
if norm(gradient(d), Inf) < options.g_abstol
g_converged = true
end
return x_converged, f_converged, g_converged, false
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 3614 | struct Newton{IL, L} <: SecondOrderOptimizer
alphaguess!::IL
linesearch!::L
end
"""
# Newton
## Constructor
```julia
Newton(; alphaguess = LineSearches.InitialStatic(),
linesearch = LineSearches.HagerZhang())
```
## Description
The `Newton` method implements Newton's method for optimizing a function. We use
a special factorization from the package `PositiveFactorizations.jl` to ensure
that each search direction is a direction of descent. See Nocedal and
Wright (ch. 6, 1999) for a discussion of Newton's method in practice.
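## Example
A minimal usage sketch (illustrative only; `autodiff = :forward` asks `optimize`
to supply the required gradient and Hessian via forward-mode automatic
differentiation):
```julia
using Optim
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
res = optimize(rosenbrock, zeros(2), Newton(); autodiff = :forward)
```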
## References
- Nocedal, J. and Wright, S. J. (1999). Numerical Optimization. Springer.
"""
function Newton(; alphaguess = LineSearches.InitialStatic(), # Good default for Newton
linesearch = LineSearches.HagerZhang()) # Good default for Newton
Newton(_alphaguess(alphaguess), linesearch)
end
Base.summary(::Newton) = "Newton's Method"
mutable struct NewtonState{Tx, T, F<:Cholesky} <: AbstractOptimizerState
x::Tx
x_previous::Tx
f_x_previous::T
F::F
s::Tx
@add_linesearch_fields()
end
function initial_state(method::Newton, options, d, initial_x)
T = eltype(initial_x)
n = length(initial_x)
# Maintain current gradient in gr
s = similar(initial_x)
value_gradient!!(d, initial_x)
hessian!!(d, initial_x)
NewtonState(copy(initial_x), # Maintain current state in state.x
copy(initial_x), # Maintain previous state in state.x_previous
T(NaN), # Store previous f in state.f_x_previous
Cholesky(similar(d.H, T, 0, 0), :U, BLAS.BlasInt(0)),
similar(initial_x), # Maintain current search direction in state.s
@initial_linesearch()...)
end
function update_state!(d, state::NewtonState, method::Newton)
# Search direction is always the negative gradient divided by
# a matrix encoding the absolute values of the curvatures
# represented by H. It deviates from the usual "add a scaled
# identity matrix" version of the modified Newton method. More
# information can be found in the discussion at issue #153.
T = eltype(state.x)
if typeof(NLSolversBase.hessian(d)) <: AbstractSparseMatrix
state.s .= .-(NLSolversBase.hessian(d)\convert(Vector{T}, gradient(d)))
else
state.F = cholesky!(Positive, NLSolversBase.hessian(d))
if typeof(gradient(d)) <: Array
# is this actually StridedArray?
ldiv!(state.s, state.F, -gradient(d))
else
# not Array, we can't do inplace ldiv
gv = Vector{T}(undef, length(gradient(d)))
copyto!(gv, -gradient(d))
copyto!(state.s, state.F\gv)
end
end
# Determine the distance of movement along the search line
lssuccess = perform_linesearch!(state, method, d)
# Update current position # x = x + alpha * s
@. state.x = state.x + state.alpha * state.s
lssuccess == false # break on linesearch error
end
function trace!(tr, d, state, iteration, method::Newton, options, curr_time=time())
dt = Dict()
dt["time"] = curr_time
if options.extended_trace
dt["x"] = copy(state.x)
dt["g(x)"] = copy(gradient(d))
dt["h(x)"] = copy(NLSolversBase.hessian(d))
dt["Current step size"] = state.alpha
end
g_norm = norm(gradient(d), Inf)
update!(tr,
iteration,
value(d),
g_norm,
dt,
options.store_trace,
options.show_trace,
options.show_every,
options.callback)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 15660 | #
# Check whether we are in the "hard case".
#
# Args:
# H_eigv: The eigenvalues of H, low to high
# qg: The inner products of the eigenvectors and the gradient, in the same order
#
# Returns:
# hard_case: Whether it is a candidate for the hard case
# lambda_index: The index of the first lambda not equal to the smallest
# eigenvalue, which is only correct if hard_case is true.
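# Illustrative sketch (not part of the original code): with
# H_eigv = [-2.0, -2.0, 1.0] and qg = [0.0, 0.0, 3.0] the gradient is
# orthogonal to every eigenvector of the smallest eigenvalue, so this
# returns (true, 3); with qg = [0.5, 0.0, 3.0] it returns hard_case = false.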
function check_hard_case_candidate(H_eigv, qg)
@assert length(H_eigv) == length(qg)
if H_eigv[1] >= 0
# The hard case is only when the smallest eigenvalue is negative.
return false, 1
end
hard_case = true
lambda_index = 1
hard_case_check_done = false
while !hard_case_check_done
if lambda_index > length(H_eigv)
hard_case_check_done = true
elseif abs(H_eigv[1] - H_eigv[lambda_index]) > 1e-10
# The eigenvalues are reported in order.
hard_case_check_done = true
else
if abs(qg[lambda_index]) > 1e-10
hard_case_check_done = true
hard_case = false
end
lambda_index += 1
end
end
hard_case, lambda_index
end
# Equation 4.38 in N&W (2006)
function calc_p!(lambda::T, min_i, n, qg, H_eig, p) where T
fill!( p, zero(T) )
for i = min_i:n
p[:] -= qg[i] / (H_eig.values[i] + lambda) * H_eig.vectors[:, i]
end
return nothing
end
#==
Returns a tuple of initial safeguarding values for Ξ». Newton's method might not
work well without these safeguards when the Hessian is not positive definite.
==#
function initial_safeguards(H, gr, delta, lambda)
# equations are on p. 560 of [MORESORENSEN]
T = eltype(gr)
Ξ»S = maximum(-diag(H))
# they state on the first page that ‖⋅‖ is the Euclidean norm
gr_norm = norm(gr)
Hnorm = opnorm(H, 1)
Ξ»L = max(T(0), Ξ»S, gr_norm/delta - Hnorm)
Ξ»U = gr_norm/delta + Hnorm
# p. 558
lambda = min(max(lambda, Ξ»L), Ξ»U)
if lambda <= Ξ»S
lambda = max(T(1)/1000*Ξ»U, sqrt(Ξ»L*Ξ»U))
end
lambda
end
# Choose a point in the trust region for the next step using
# the iterative (nearly exact) method of section 4.3 of N&W (2006).
# This is appropriate for Hessians that you factorize quickly.
#
# Args:
# gr: The gradient
# H: The Hessian
# delta: The trust region size, ||s|| <= delta
# s: Memory allocated for the step, updated in place
# tolerance: The convergence tolerance for root finding
# max_iters: The maximum number of root finding iterations
#
# Returns:
# m - The numeric value of the quadratic minimization.
# interior - A boolean indicating whether the solution was interior
# lambda - The chosen regularizing quantity
# hard_case - Whether or not it was a "hard case" as described by N&W (2006)
# reached_solution - Whether or not a solution was reached (as opposed to
# terminating early due to max_iters)
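# Illustrative call (not part of the original code): for a positive definite
# Hessian and a large enough radius, the unconstrained Newton step is returned:
# s = zeros(2)
# m, interior, lambda, hard_case, reached =
# solve_tr_subproblem!([1.0, 0.0], [2.0 0.0; 0.0 3.0], 1.0, s)
# # yields s ≈ [-0.5, 0.0], interior == true, lambda == 0, m == -0.25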
function solve_tr_subproblem!(gr,
H,
delta,
s;
tolerance=1e-10,
max_iters=5)
T = eltype(gr)
n = length(gr)
delta_sq = delta^2
@assert n == length(s)
@assert (n, n) == size(H)
@assert max_iters >= 1
# Note that currently the eigenvalues are only sorted if H is perfectly
# symmetric. (Julia issue #17093)
Hsym = Symmetric(H)
if any(!isfinite, Hsym)
return T(Inf), false, zero(T), false, false
end
H_eig = eigen(Hsym)
if !isempty(H_eig.values)
min_H_ev, max_H_ev = H_eig.values[1], H_eig.values[n]
else
return T(Inf), false, zero(T), false, false
end
H_ridged = copy(H)
# Cache the inner products between the eigenvectors and the gradient.
qg = H_eig.vectors' * gr
# These values describe the outcome of the subproblem. They will be
# set below and returned at the end.
interior = true
hard_case = false
reached_solution = true
# Unconstrained solution
if min_H_ev >= 1e-8
calc_p!(zero(T), 1, n, qg, H_eig, s)
end
if min_H_ev >= 1e-8 && sum(abs2, s) <= delta_sq
# No shrinkage is necessary: -(H \ gr) is the minimizer
interior = true
reached_solution = true
lambda = zero(T)
else
interior = false
# The hard case is when the gradient is orthogonal to all
# eigenvectors associated with the lowest eigenvalue.
hard_case_candidate, min_i =
check_hard_case_candidate(H_eig.values, qg)
# Solutions smaller than this lower bound on lambda are not allowed:
# they don't ridge H enough to make H_ridge PSD.
lambda_lb = nextfloat(-min_H_ev)
lambda = lambda_lb
hard_case = false
if hard_case_candidate
# The "hard case". lambda is taken to be -min_H_ev and we only need
# to find a multiple of an orthogonal eigenvector that lands the
# iterate on the boundary.
# Formula 4.45 in N&W (2006)
calc_p!(lambda, min_i, n, qg, H_eig, s)
p_lambda2 = sum(abs2, s)
if p_lambda2 > delta_sq
# Then we can simply solve using root finding.
else
hard_case = true
reached_solution = true
tau = sqrt(delta_sq - p_lambda2)
# I don't think it matters which eigenvector we pick so take
# the first.
calc_p!(lambda, min_i, n, qg, H_eig, s)
s[:] = -s + tau * H_eig.vectors[:, 1]
end
end
lambda = initial_safeguards(H, gr, delta, lambda)
if !hard_case
# Algorithm 4.3 of N&W (2006), with s instead of p_l for consistency
# with Optim.jl
reached_solution = false
for iter in 1:max_iters
lambda_previous = lambda
for i=1:n
H_ridged[i, i] = H[i, i] + lambda
end
F = cholesky(Hermitian(H_ridged), check=false)
# Sometimes, lambda is not sufficiently large for the Cholesky factorization
# to succeed. In that case, we double lambda and continue to the next iteration.
if !issuccess(F)
lambda *= 2
continue
end
R = F.U
s[:] = -R \ (R' \ gr)
q_l = R' \ s
norm2_s = dot(s, s)
lambda_update = norm2_s * (sqrt(norm2_s) - delta) / (delta * dot(q_l, q_l))
lambda += lambda_update
# Check that lambda is not less than lambda_lb, and if so, go
# half the way to lambda_lb.
if lambda < lambda_lb
lambda = 0.5 * (lambda_previous - lambda_lb) + lambda_lb
end
if abs(lambda - lambda_previous) < tolerance
reached_solution = true
break
end
end
end
end
m = dot(gr, s) + 0.5 * dot(s, H * s)
return m, interior, lambda, hard_case, reached_solution
end
struct NewtonTrustRegion{T <: Real} <: SecondOrderOptimizer
initial_delta::T
delta_hat::T
delta_min::T
eta::T
rho_lower::T
rho_upper::T
use_fg::Bool
end
"""
# NewtonTrustRegion
## Constructor
```julia
NewtonTrustRegion(; initial_delta = 1.0,
delta_hat = 100.0,
delta_min = sqrt(eps(Float64)),
eta = 0.1,
rho_lower = 0.25,
rho_upper = 0.75,
use_fg = true)
```
The constructor has 7 keywords:
* `initial_delta`, the starting trust region radius. Defaults to `1.0`.
* `delta_hat`, the largest allowable trust region radius. Defaults to `100.0`.
* `delta_min`, the smallest allowable trust region radius. Optimization halts if the updated radius is smaller than this value. Defaults to `sqrt(eps(Float64))`.
* `eta`, when `rho` is at least `eta`, accept the step. Defaults to `0.1`.
* `rho_lower`, when `rho` is less than `rho_lower`, shrink the trust region. Defaults to `0.25`.
* `rho_upper`, when `rho` is greater than `rho_upper`, grow the trust region. Defaults to `0.75`.
* `use_fg`, when true always evaluate the gradient with the value after solving the subproblem. This is more efficient if f and g share expensive computations. Defaults to `true`.
## Description
The `NewtonTrustRegion` method implements Newton's method with a trust region
for optimizing a function. The method is designed to take advantage of the
second-order information in a function's Hessian, but with more stability than
Newton's method when functions are not globally well-approximated by a quadratic.
This is achieved by repeatedly minimizing quadratic approximations within a
dynamically-sized trust region in which the function is assumed to be locally
quadratic. See Nocedal and Wright (ch. 4, 2006) for a discussion of
trust-region methods in practice.
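## Example
A minimal usage sketch (illustrative only; as with `Newton`, `autodiff = :forward`
supplies the required gradient and Hessian):
```julia
using Optim
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
res = optimize(rosenbrock, zeros(2), NewtonTrustRegion(); autodiff = :forward)
```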
## References
- Nocedal, J., & Wright, S. (2006). Numerical optimization. Springer Science & Business Media.
"""
NewtonTrustRegion(; initial_delta::Real = 1.0,
delta_hat::Real = 100.0,
delta_min::Real = sqrt(eps(Float64)),
eta::Real = 0.1,
rho_lower::Real = 0.25,
rho_upper::Real = 0.75,
use_fg=true) =
NewtonTrustRegion(initial_delta, delta_hat, delta_min, eta, rho_lower, rho_upper, use_fg)
Base.summary(::NewtonTrustRegion) = "Newton's Method (Trust Region)"
mutable struct NewtonTrustRegionState{Tx, T, G} <: AbstractOptimizerState
x::Tx
x_previous::Tx
g_previous::G
f_x_previous::T
s::Tx
hard_case::Bool
reached_subproblem_solution::Bool
interior::Bool
delta::T
lambda::T
eta::T
rho::T
end
function initial_state(method::NewtonTrustRegion, options, d, initial_x)
T = eltype(initial_x)
n = length(initial_x)
# Maintain current gradient in gr
@assert(method.delta_hat > 0, "delta_hat must be strictly positive")
@assert(0 < method.initial_delta < method.delta_hat, "delta must be in (0, delta_hat)")
@assert(0 <= method.eta < method.rho_lower, "eta must be in [0, rho_lower)")
@assert(method.rho_lower < method.rho_upper, "must have rho_lower < rho_upper")
@assert(method.rho_lower >= 0.)
# Keep track of trust region sizes
delta = copy(method.initial_delta)
# Record attributes of the subproblem in the trace.
hard_case = false
reached_subproblem_solution = true
interior = true
lambda = NaN
NLSolversBase.value_gradient_hessian!!(d, initial_x)
NewtonTrustRegionState(copy(initial_x), # Maintain current state in state.x
copy(initial_x), # Maintain previous state in state.x_previous
copy(gradient(d)), # Store previous gradient in state.g_previous
T(NaN), # Store previous f in state.f_x_previous
similar(initial_x), # Maintain current search direction in state.s
hard_case,
reached_subproblem_solution,
interior,
T(delta),
T(lambda),
T(method.eta), # eta
zero(T)) # rho
end
function update_state!(d, state::NewtonTrustRegionState, method::NewtonTrustRegion)
T = eltype(state.x)
# Find the next step direction.
m, state.interior, state.lambda, state.hard_case, state.reached_subproblem_solution =
solve_tr_subproblem!(gradient(d), NLSolversBase.hessian(d), state.delta, state.s)
# Maintain a record of previous position
copyto!(state.x_previous, state.x)
state.f_x_previous = value(d)
# Update current position
state.x .+= state.s
# Update the function value and gradient
if method.use_fg
state.g_previous .= gradient(d)
value_gradient!(d, state.x)
else
value!(d, state.x)
end
# Update the trust region size based on the discrepancy between
# the predicted and actual function values. (Algorithm 4.1 in N&W (2006))
f_x_diff = state.f_x_previous - value(d)
if abs(m) <= eps(T)
# This should only happen when the step is very small, in which case
# we should accept the step and assess_convergence().
state.rho = 1.0
elseif m > 0
# This can happen if the trust region radius is too large and the
# Hessian is not positive definite. We should shrink the trust
# region.
state.rho = -1.0
else
state.rho = f_x_diff / (0 - m)
end
if state.rho < method.rho_lower
state.delta *= 0.25
elseif (state.rho > method.rho_upper) && (!state.interior)
state.delta = min(2 * state.delta, method.delta_hat)
else
# else leave delta unchanged.
end
if state.rho <= state.eta
# The improvement is too small and we won't take it.
# If you reject an interior solution, make sure that the next
# delta is smaller than the current step. Otherwise you waste
# steps reducing delta by constant factors while each solution
# will be the same. If this keeps on happening it could be a sign of
# errors in the gradient or a non-differentiability at the optimum.
x_diff = state.x - state.x_previous
state.delta = 0.25 * norm(x_diff)
d.F = state.f_x_previous
copyto!(state.x, state.x_previous)
if method.use_fg
copyto!(d.DF, state.g_previous)
copyto!(d.x_df, state.x_previous)
end
else
if method.use_fg
hessian!(d, state.x)
else
NLSolversBase.gradient_hessian!!(d, state.x)
end
end
false
end
function assess_convergence(state::NewtonTrustRegionState, d, options::Options)
x_converged, f_converged, g_converged, converged, f_increased = false, false, false, false, false
if state.rho > state.eta
# Accept the point and check convergence
x_converged,
f_converged,
g_converged,
f_increased = assess_convergence(state.x,
state.x_previous,
value(d),
state.f_x_previous,
gradient(d),
options.x_abstol,
options.f_reltol,
options.g_abstol)
end
x_converged, f_converged, g_converged, f_increased
end
function trace!(tr, d, state, iteration, method::NewtonTrustRegion, options, curr_time=time())
dt = Dict()
dt["time"] = curr_time
if options.extended_trace
dt["x"] = copy(state.x)
dt["g(x)"] = copy(gradient(d))
dt["h(x)"] = copy(NLSolversBase.hessian(d))
dt["delta"] = copy(state.delta)
dt["interior"] = state.interior
dt["hard case"] = state.hard_case
dt["reached_subproblem_solution"] = state.reached_subproblem_solution
dt["lambda"] = state.lambda
end
g_norm = norm(gradient(d), Inf)
update!(tr,
iteration,
value(d),
g_norm,
dt,
options.store_trace,
options.show_trace,
options.show_every,
options.callback)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 215 | function grid_search(f, grid)
min_value = f(grid[1])
arg_min_value = grid[1]
for el in grid
if f(el) < min_value
arg_min_value = el
end
end
return arg_min_value
end
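# Illustrative usage (hypothetical objective and grid, not part of the original
# code): grid_search(x -> (x - 0.3)^2, range(-1, 1, length = 201)) returns the
# grid point closest to 0.3.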
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 12387 | abstract type Simplexer end
struct AffineSimplexer <: Simplexer
a::Float64
b::Float64
end
AffineSimplexer(;a = 0.025, b = 0.5) = AffineSimplexer(a, b)
function simplexer(S::AffineSimplexer, initial_x::Tx) where Tx
n = length(initial_x)
initial_simplex = Tx[copy(initial_x) for i = 1:n+1]
for j in eachindex(initial_x)
initial_simplex[j+1][j] = (1+S.b) * initial_simplex[j+1][j] + S.a
end
initial_simplex
end
abstract type NMParameters end
struct AdaptiveParameters <: NMParameters
Ξ±::Float64
Ξ²::Float64
Ξ³::Float64
Ξ΄::Float64
end
AdaptiveParameters(; Ξ± = 1.0, Ξ² = 1.0, Ξ³ = 0.75 , Ξ΄ = 1.0) = AdaptiveParameters(Ξ±, Ξ², Ξ³, Ξ΄)
parameters(P::AdaptiveParameters, n::Integer) = (P.Ξ±, P.Ξ² + 2/n, P.Ξ³ - 1/2n, P.Ξ΄ - 1/n)
struct FixedParameters <: NMParameters
Ξ±::Float64
Ξ²::Float64
Ξ³::Float64
Ξ΄::Float64
end
FixedParameters(; Ξ± = 1.0, Ξ² = 2.0, Ξ³ = 0.5, Ξ΄ = 0.5) = FixedParameters(Ξ±, Ξ², Ξ³, Ξ΄)
parameters(P::FixedParameters, n::Integer) = (P.Ξ±, P.Ξ², P.Ξ³, P.Ξ΄)
struct NelderMead{Ts <: Simplexer, Tp <: NMParameters} <: ZerothOrderOptimizer
initial_simplex::Ts
parameters::Tp
end
"""
# NelderMead
## Constructor
```julia
NelderMead(; parameters = AdaptiveParameters(),
initial_simplex = AffineSimplexer())
```
The constructor takes 2 keywords:
* `parameters`, an instance of either `AdaptiveParameters` or `FixedParameters`,
and is used to generate parameters for the Nelder-Mead Algorithm
* `initial_simplex`, an instance of `AffineSimplexer`
## Description
Our current implementation of the Nelder-Mead algorithm is based on [1] and [3].
Gradient-free methods can be a bit sensitive to starting values and tuning parameters,
so it is a good idea to be careful with the defaults provided in Optim.jl.
Instead of using gradient information, Nelder-Mead is a direct search method. It keeps
track of the function value at a number of points in the search space. Together, the
points form a simplex. Given a simplex, we can perform one of four actions: reflect,
expand, contract, or shrink. Basically, the goal is to iteratively replace the worst
point with a better point. More information can be found in [1], [2] or [3].
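## Example
A minimal usage sketch (illustrative only; no derivatives are required):
```julia
using Optim
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
res = optimize(rosenbrock, [0.0, 0.0], NelderMead())
```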
## References
- [1] Nelder, John A. and R. Mead (1965). "A simplex method for function minimization". Computer Journal 7: 308β313. doi:10.1093/comjnl/7.4.308
- [2] Lagarias, Jeffrey C., et al. "Convergence properties of the Nelder-Mead simplex method in low dimensions." SIAM Journal on Optimization 9.1 (1998): 112-147
- [3] Gao, Fuchang and Lixing Han (2010). "Implementing the Nelder-Mead simplex algorithm with adaptive parameters". Computational Optimization and Applications. doi:10.1007/s10589-010-9329-3
"""
function NelderMead(; kwargs...)
KW = Dict(kwargs)
if haskey(KW, :initial_simplex) || haskey(KW, :parameters)
initial_simplex, parameters = AffineSimplexer(), AdaptiveParameters()
haskey(KW, :initial_simplex) && (initial_simplex = KW[:initial_simplex])
haskey(KW, :parameters) && (parameters = KW[:parameters])
return NelderMead(initial_simplex, parameters)
else
return NelderMead(AffineSimplexer(), AdaptiveParameters())
end
end
Base.summary(::NelderMead) = "Nelder-Mead"
# centroid except h-th vertex
function centroid!(c::AbstractArray{T}, simplex, h=0) where T
n = length(c)
fill!(c, zero(T))
for i in eachindex(simplex)
if i != h
xi = simplex[i]
c .+= xi
end
end
rmul!(c, T(1)/n)
end
centroid(simplex, h) = centroid!(similar(simplex[1]), simplex, h)
nmobjective(y::Vector, m::Integer, n::Integer) = sqrt(var(y) * (m / n))
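# nmobjective is the Nelder-Mead convergence measure: the square root of the
# sample variance of the objective values at the simplex vertices, scaled by
# m/n; it tends to zero as the simplex collapses onto a single function value.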
function print_header(method::NelderMead)
@printf "Iter Function value √(Σ(yᵢ-ȳ)²)/n \n"
@printf "------ -------------- --------------\n"
end
function Base.show(io::IO, trace::OptimizationTrace{<:Real, NelderMead})
@printf io "Iter Function value √(Σ(yᵢ-ȳ)²)/n \n"
@printf io "------ -------------- --------------\n"
for state in trace.states
show(io, state)
end
return
end
function Base.show(io::IO, t::OptimizationState{<:Real, NelderMead})
@printf io "%6d %14e %14e\n" t.iteration t.value t.g_norm
if !isempty(t.metadata)
for (key, value) in t.metadata
@printf io " * %s: %s\n" key value
end
end
return
end
mutable struct NelderMeadState{Tx, T, Tfs} <: ZerothOrderState
x::Tx
m::Int
simplex::Vector{Tx}
x_centroid::Tx
x_lowest::Tx
x_second_highest::Tx
x_highest::Tx
x_reflect::Tx
x_cache::Tx
f_simplex::Tfs
nm_x::T
f_lowest::T
i_order::Vector{Int}
Ξ±::T
Ξ²::T
Ξ³::T
Ξ΄::T
step_type::String
end
function reset!(method::NelderMead, state::NelderMeadState, obj, x)
state.simplex = simplexer(method.initial_simplex, x)
value!(obj, first(state.simplex))
state.f_simplex[1] = value(obj)
for i in 2:length(state.simplex)
state.f_simplex[i] = value(obj, state.simplex[i])
end
# Get the indices that correspond to the ordering of the f values
# at the vertices. i_order[1] is the index in the simplex of the vertex
# with the lowest function value, and i_order[end] is the index in the
# simplex of the vertex with the highest function value
state.i_order = sortperm(state.f_simplex)
end
function initial_state(method::NelderMead, options, d, initial_x)
T = eltype(initial_x)
n = length(initial_x)
m = n + 1
simplex = simplexer(method.initial_simplex, initial_x)
f_simplex = zeros(T, m)
value!!(d, first(simplex))
f_simplex[1] = value(d)
for i in 2:length(simplex)
f_simplex[i] = value(d, simplex[i])
end
# Get the indices that correspond to the ordering of the f values
# at the vertices. i_order[1] is the index in the simplex of the vertex
# with the lowest function value, and i_order[end] is the index in the
# simplex of the vertex with the highest function value
i_order = sortperm(f_simplex)
Ξ±, Ξ², Ξ³, Ξ΄ = parameters(method.parameters, n)
NelderMeadState(copy(initial_x), # Variable to hold final minimizer value for MultivariateOptimizationResults
m, # Number of vertices in the simplex
simplex, # Maintain simplex in state.simplex
centroid(simplex, i_order[m]), # Maintain centroid in state.x_centroid
copy(initial_x), # Store cache in state.x_lowest
copy(initial_x), # Store cache in state.x_second_highest
copy(initial_x), # Store cache in state.x_highest
copy(initial_x), # Store cache in state.x_reflect
copy(initial_x), # Store cache in state.x_cache
f_simplex, # Store objective values at the vertices in state.f_simplex
T(nmobjective(f_simplex, n, m)), # Store nmobjective in state.nm_x
f_simplex[i_order[1]], # Store lowest f in state.f_lowest
i_order, # Store a vector of rankings of objective values
T(Ξ±),
T(Ξ²),
T(Ξ³),
T(Ξ΄),
"initial")
end
function update_state!(f::F, state::NelderMeadState{T}, method::NelderMead) where {F, T}
# Track whether this iteration ends in a shrink step
shrink = false
n, m = length(state.x), state.m
centroid!(state.x_centroid, state.simplex, state.i_order[m])
copyto!(state.x_lowest, state.simplex[state.i_order[1]])
copyto!(state.x_second_highest, state.simplex[state.i_order[n]])
copyto!(state.x_highest, state.simplex[state.i_order[m]])
state.f_lowest = state.f_simplex[state.i_order[1]]
f_second_highest = state.f_simplex[state.i_order[n]]
f_highest = state.f_simplex[state.i_order[m]]
# Compute a reflection
@. state.x_reflect = state.x_centroid + state.Ξ± * (state.x_centroid - state.x_highest)
f_reflect = value(f, state.x_reflect)
if f_reflect < state.f_lowest
# Compute an expansion
@. state.x_cache = state.x_centroid + state.Ξ² *(state.x_reflect - state.x_centroid)
f_expand = value(f, state.x_cache)
if f_expand < f_reflect
copyto!(state.simplex[state.i_order[m]], state.x_cache)
@inbounds state.f_simplex[state.i_order[m]] = f_expand
state.step_type = "expansion"
else
copyto!(state.simplex[state.i_order[m]], state.x_reflect)
@inbounds state.f_simplex[state.i_order[m]] = f_reflect
state.step_type = "reflection"
end
# shift all order indices, and wrap the last one around to the first
i_highest = state.i_order[m]
@inbounds for i = m:-1:2
state.i_order[i] = state.i_order[i-1]
end
state.i_order[1] = i_highest
elseif f_reflect < f_second_highest
copyto!(state.simplex[state.i_order[m]], state.x_reflect)
@inbounds state.f_simplex[state.i_order[m]] = f_reflect
state.step_type = "reflection"
sortperm!(state.i_order, state.f_simplex)
else
if f_reflect < f_highest
# Outside contraction
@. state.x_cache = state.x_centroid + state.Ξ³ * (state.x_reflect - state.x_centroid)
f_outside_contraction = value(f, state.x_cache)
if f_outside_contraction < f_reflect
copyto!(state.simplex[state.i_order[m]], state.x_cache)
@inbounds state.f_simplex[state.i_order[m]] = f_outside_contraction
state.step_type = "outside contraction"
sortperm!(state.i_order, state.f_simplex)
else
shrink = true
end
else # f_reflect > f_highest
# Inside contraction
@. state.x_cache = state.x_centroid - state.Ξ³ *(state.x_reflect - state.x_centroid)
f_inside_contraction = value(f, state.x_cache)
if f_inside_contraction < f_highest
copyto!(state.simplex[state.i_order[m]], state.x_cache)
@inbounds state.f_simplex[state.i_order[m]] = f_inside_contraction
state.step_type = "inside contraction"
sortperm!(state.i_order, state.f_simplex)
else
shrink = true
end
end
end
if shrink
for i = 2:m
ord = state.i_order[i]
copyto!(state.simplex[ord], state.x_lowest + state.Ξ΄*(state.simplex[ord]-state.x_lowest))
state.f_simplex[ord] = value(f, state.simplex[ord])
end
state.step_type = "shrink"
sortperm!(state.i_order, state.f_simplex)
end
state.nm_x = nmobjective(state.f_simplex, n, m)
false
end
function after_while!(f, state, method::NelderMead, options)
sortperm!(state.i_order, state.f_simplex)
x_centroid_min = centroid(state.simplex, state.i_order[state.m])
f_centroid_min = value(f, x_centroid_min)
f_min, i_f_min = findmin(state.f_simplex)
x_min = state.simplex[i_f_min]
if f_centroid_min < f_min
x_min = x_centroid_min
f_min = f_centroid_min
end
if f isa BarrierWrapper
f.Fb = f_min
else
f.F = f_min
end
state.x .= x_min
end
# We don't have an f_x_previous in NelderMeadState, so we need to special case these
pick_best_x(f_increased, state::NelderMeadState) = state.x
pick_best_f(f_increased, state::NelderMeadState, d) = value(d)
function assess_convergence(state::NelderMeadState, d, options::Options)
g_converged = state.nm_x <= options.g_abstol # Hijack g_converged for the NM stopping criterion
return false, false, g_converged, false
end
function initial_convergence(d, state::NelderMeadState, method::NelderMead, initial_x, options)
nmo = nmobjective(state.f_simplex, state.m, length(initial_x))
!isfinite(value(d)), nmo <= options.g_abstol, !isfinite(nmo)
end
function trace!(tr, d, state, iteration, method::NelderMead, options::Options, curr_time=time())
dt = Dict()
dt["time"] = curr_time
if options.extended_trace
dt["centroid"] = copy(state.x_centroid)
dt["step_type"] = state.step_type
end
if options.trace_simplex
dt["simplex"] = state.simplex
dt["simplex_values"] = state.f_simplex
end
update!(tr,
iteration,
state.f_lowest,
state.nm_x,
dt,
options.store_trace,
options.show_trace,
options.show_every,
options.callback,
options.trace_simplex)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 13963 | struct ParticleSwarm{Tl, Tu} <: ZerothOrderOptimizer
lower::Tl
upper::Tu
n_particles::Int
end
"""
# Particle Swarm
## Constructor
```julia
ParticleSwarm(; lower = [],
upper = [],
n_particles = 0)
```
The constructor takes 3 keywords:
* `lower = []`, a vector of lower bounds, unbounded below if empty or `-Inf`'s
* `upper = []`, a vector of upper bounds, unbounded above if empty or `Inf`'s
* `n_particles = 0`, the number of particles in the swarm; defaults to at least three
## Description
The Particle Swarm implementation in Optim.jl is the so-called Adaptive Particle
Swarm algorithm in [1]. It attempts to improve global coverage and convergence by
switching between four evolutionary states: exploration, exploitation, convergence,
and jumping out. In the jumping out state it intentionally tries to take the best
particle and move it away from its (potentially and probably) local optimum, to
improve the ability to find a global optimum. Of course, this comes at the cost
of slower convergence, but hopefully converges to the global optimum as a result.
Note that convergence is never assessed for ParticleSwarm. It will run until it
reaches the maximum number of iterations set in `Optim.Options(iterations=x)`.
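## Example
A minimal usage sketch (illustrative only; the bounds, particle count, and
iteration budget are arbitrary demonstration values):
```julia
using Optim
f(x) = sum(abs2, x)
res = optimize(f, [0.3, -0.4],
               ParticleSwarm(lower = [-1.0, -1.0], upper = [1.0, 1.0], n_particles = 10),
               Optim.Options(iterations = 100))
```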
## References
- [1] Zhan, Zhang, and Chung. Adaptive particle swarm optimization, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Volume 39, Issue 6 (2009): 1362-1381
"""
ParticleSwarm(; lower = [], upper = [], n_particles = 0) = ParticleSwarm(lower, upper, n_particles)
Base.summary(::ParticleSwarm) = "Particle Swarm"
mutable struct ParticleSwarmState{Tx,T} <: ZerothOrderState
x::Tx
iteration::Int
lower::Tx
upper::Tx
c1::T # Weight variable; currently not exposed to users
c2::T # Weight variable; currently not exposed to users
w::T # Weight variable; currently not exposed to users
limit_search_space::Bool
n_particles::Int
X
V
X_best
score::Vector{T}
best_score::Vector{T}
x_learn
current_state
iterations::Int
end
function initial_state(method::ParticleSwarm, options, d, initial_x::AbstractArray{T}) where T
#=
Variable X represents the whole swarm of solutions with
the columns being the individual particles (= solutions to
the optimization problem.)
In each iteration the cost function is evaluated for all
particles. For the next iteration all particles "move"
towards their own historically best and the global historically
best solution. The weighing coefficients c1 and c2 define how much
towards the global or individual best solution they are pulled.
In each iteration there is a check for an additional special
solution which consists of the historically global best solution
where one randomly chosen parameter is modified. This helps
the swarm jumping out of local minima.
=#
n = length(initial_x)
if isempty(method.lower)
limit_search_space = false
lower = copy(initial_x)
lower .= -Inf
else
lower = method.lower
limit_search_space = true
end
if isempty(method.upper)
upper = copy(initial_x)
upper .= Inf
# limit_search_space is whatever it was for lower
else
upper = method.upper
limit_search_space = true
end
@assert length(lower) == length(initial_x) "limits must be of same length as initial_x."
@assert all(upper .> lower) "upper must be greater than lower"
if method.n_particles > 0
if method.n_particles < 3
@warn("Number of particles is set to 3 (minimum required)")
n_particles = 3
else
n_particles = method.n_particles
end
else
# user did not define number of particles
n_particles = maximum([3, length(initial_x)])
end
c1 = T(2)
c2 = T(2)
w = T(1)
X = Array{T,2}(undef, n, n_particles)
V = Array{T,2}(undef, n, n_particles)
X_best = Array{T,2}(undef, n, n_particles)
dx = zeros(T, n)
score = zeros(T, n_particles)
x = copy(initial_x)
best_score = zeros(T, n_particles)
x_learn = copy(initial_x)
current_state = 0
value!!(d, initial_x)
score[1] = value(d)
# if search space is limited, spread the initial population
# uniformly over the whole search space
if limit_search_space
for i in 1:n_particles
for j in 1:n
ww = upper[j] - lower[j]
X[j, i] = lower[j] + ww * rand(T)
X_best[j, i] = X[j, i]
V[j, i] = ww * (rand(T) * T(2) - T(1)) / 10
end
end
else
for i in 1:n_particles
for j in 1:n
if i == 1
if abs(initial_x[j]) > T(0)
dx[j] = abs(initial_x[j])
else
dx[j] = T(1)
end
end
X[j, i] = initial_x[j] + dx[j] * rand(T)
X_best[j, i] = X[j, i]
V[j, i] = abs(X[j, i]) * (rand(T) * T(2) - T(1))
end
end
end
for j in 1:n
X[j, 1] = initial_x[j]
X_best[j, 1] = initial_x[j]
end
for i in 2:n_particles
score[i] = value(d, X[:, i])
end
ParticleSwarmState(
x,
0,
lower,
upper,
c1,
c2,
w,
limit_search_space,
n_particles,
X,
V,
X_best,
score,
best_score,
x_learn,
0,
options.iterations)
end
function update_state!(f, state::ParticleSwarmState{T}, method::ParticleSwarm) where T
n = length(state.x)
if state.iteration == 0
copyto!(state.best_score, state.score)
f.F = Base.minimum(state.score)
end
f.F = housekeeping!(state.score,
state.best_score,
state.X,
state.X_best,
state.x,
value(f),
state.n_particles)
# Elitist Learning:
# find a new solution named 'x_learn' which is the current best
# solution with one randomly picked variable being modified.
# Replace the current worst solution in X with x_learn
# if x_learn presents the new best solution.
# In all other cases discard x_learn.
# This helps jumping out of local minima.
worst_score, i_worst = findmax(state.score)
for k in 1:n
state.x_learn[k] = state.x[k]
end
random_index = rand(1:n)
random_value = randn()
sigma_learn = 1 - (1 - 0.1) * state.iteration / state.iterations
r3 = randn() * sigma_learn
if state.limit_search_space
state.x_learn[random_index] = state.x_learn[random_index] + (state.upper[random_index] - state.lower[random_index]) / 3.0 * r3
else
state.x_learn[random_index] = state.x_learn[random_index] + state.x_learn[random_index] * r3
end
if state.limit_search_space
if state.x_learn[random_index] < state.lower[random_index]
state.x_learn[random_index] = state.lower[random_index]
elseif state.x_learn[random_index] > state.upper[random_index]
state.x_learn[random_index] = state.upper[random_index]
end
end
score_learn = value(f, state.x_learn)
if score_learn < f.F
f.F = score_learn * 1.0
for j in 1:n
state.X_best[j, i_worst] = state.x_learn[j]
state.X[j, i_worst] = state.x_learn[j]
state.x[j] = state.x_learn[j]
end
state.score[i_worst] = score_learn
state.best_score[i_worst] = score_learn
end
# TODO: find a better name for _f (look in the paper, it might be called f there)
state.current_state, _f = get_swarm_state(state.X, state.score, state.x, state.current_state)
state.w, state.c1, state.c2 = update_swarm_params!(state.c1, state.c2, state.w, state.current_state, _f)
update_swarm!(state.X, state.X_best, state.x, n, state.n_particles, state.V, state.w, state.c1, state.c2)
if state.limit_search_space
limit_X!(state.X, state.lower, state.upper, state.n_particles, n)
end
compute_cost!(f, state.n_particles, state.X, state.score)
state.iteration += 1
false
end
function update_swarm!(X::AbstractArray{Tx}, X_best, best_point, n, n_particles, V,
w, c1, c2) where Tx
# compute new positions for the swarm particles
for i in 1:n_particles
for j in 1:n
r1 = rand(Tx)
r2 = rand(Tx)
vx = X_best[j, i] - X[j, i]
vg = best_point[j] - X[j, i]
V[j, i] = V[j, i]*w + c1*r1*vx + c2*r2*vg
X[j, i] = X[j, i] + V[j, i]
end
end
end
function get_mu_1(f::Tx) where Tx
if Tx(0) <= f <= Tx(4)/10
return Tx(0)
elseif Tx(4)/10 < f <= Tx(6)/10
return Tx(5) * f - Tx(2)
elseif Tx(6)/10 < f <= Tx(7)/10
return Tx(1)
elseif Tx(7)/10 < f <= Tx(8)/10
return -Tx(10) * f + Tx(8)
else
return Tx(0)
end
end
function get_mu_2(f::Tx) where Tx
if Tx(0) <= f <= Tx(2)/10
return Tx(0)
elseif Tx(2)/10 < f <= Tx(3)/10
return Tx(10) * f - Tx(2)
elseif Tx(3)/10 < f <= Tx(4)/10
return Tx(1)
elseif Tx(4)/10 < f <= Tx(6)/10
return -Tx(5) * f + Tx(3)
else
return Tx(0)
end
end
function get_mu_3(f::Tx) where Tx
if Tx(0) <= f <= Tx(1)/10
return Tx(1)
elseif Tx(1)/10 < f <= Tx(3)/10
return -Tx(5) * f + Tx(3)/2
else
return Tx(0)
end
end
function get_mu_4(f::Tx) where Tx
if Tx(0) <= f <= Tx(7)/10
return Tx(0)
elseif Tx(7)/10 < f <= Tx(9)/10
return Tx(5) * f - Tx(7)/2
else
return Tx(1)
end
end
function get_swarm_state(X::AbstractArray{Tx}, score, best_point, previous_state) where Tx
# swarm can be in 4 different states, depending on which
# the weighing factors c1 and c2 are adapted.
# New state is not only depending on the current swarm state,
# but also from the previous state.
n, n_particles = size(X)
f_best, i_best = findmin(score)
XtX = X'X
#@assert size(XtX) == (n_particles, n_particles)
XtX_tr = LinearAlgebra.tr(XtX)
d = sum(XtX, dims=1)
@inbounds for i in eachindex(d)
d[i] = sqrt(max(n_particles * XtX[i, i] + XtX_tr - 2 * d[i], Tx(0.0)))
end
dg = d[i_best]
dmin, dmax = extrema(d)
f = (dg - dmin) / max(dmax - dmin, sqrt(eps(Tx)))
mu = zeros(Tx, 4)
mu[1] = get_mu_1(f)
mu[2] = get_mu_2(f)
mu[3] = get_mu_3(f)
mu[4] = get_mu_4(f)
best_mu, i_best_mu = findmax(mu)
current_state = 0
if previous_state == 0
current_state = i_best_mu
elseif previous_state == 1
if mu[1] > 0
current_state = 1
else
if mu[2] > 0
current_state = 2
elseif mu[4] > 0
current_state = 4
else
current_state = 3
end
end
elseif previous_state == 2
if mu[2] > 0
current_state = 2
else
if mu[3] > 0
current_state = 3
elseif mu[1] > 0
current_state = 1
else
current_state = 4
end
end
elseif previous_state == 3
if mu[3] > 0
current_state = 3
else
if mu[4] > 0
current_state = 4
elseif mu[2] > 0
current_state = 2
else
current_state = 1
end
end
elseif previous_state == 4
if mu[4] > 0
current_state = 4
else
if mu[1] > 0
current_state = 1
elseif mu[2] > 0
current_state = 2
else
current_state = 3
end
end
end
return current_state, f
end
function update_swarm_params!(c1, c2, w, current_state, f::T) where T
delta_c1 = T(5)/100 + rand(T) / T(20)
delta_c2 = T(5)/100 + rand(T) / T(20)
if current_state == 1
c1 += delta_c1
c2 -= delta_c2
elseif current_state == 2
c1 += delta_c1 / 2
c2 -= delta_c2 / 2
elseif current_state == 3
c1 += delta_c1 / 2
c2 += delta_c2 / 2
elseif current_state == 4
c1 -= delta_c1
c2 -= delta_c2
end
if c1 < T(3)/2
c1 = T(3)/2
elseif c1 > T(5)/2
c1 = T(5)/2
end
if c2 < T(3)/2
c2 = T(3)/2 # clamp c2 to the same [3/2, 5/2] range as c1
elseif c2 > T(5)/2
c2 = T(5)/2
end
if c1 + c2 > T(4)
c_total = c1 + c2
c1 = c1 / c_total * 4
c2 = c2 / c_total * 4
end
w = 1 / (1 + T(3)/2 * exp(-T(26)/10 * f))
return w, c1, c2
end
function housekeeping!(score, best_score, X, X_best, best_point,
F, n_particles)
n = size(X, 1)
for i in 1:n_particles
if score[i] <= best_score[i]
best_score[i] = score[i]
for k in 1:n
X_best[k, i] = X[k, i]
end
if score[i] <= F
for k in 1:n
best_point[k] = X[k, i]
end
F = score[i]
end
end
end
return F
end
function limit_X!(X, lower, upper, n_particles, n)
# limit X values to boundaries
for i in 1:n_particles
for j in 1:n
if X[j, i] < lower[j]
X[j, i] = lower[j]
elseif X[j, i] > upper[j]
X[j, i] = upper[j]
end
end
end
nothing
end
function compute_cost!(f,
n_particles::Int,
X::Matrix,
score::Vector)
for i in 1:n_particles
score[i] = value(f, X[:, i])
end
nothing
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 3710 | log_temperature(t) = 1 / log(t)
constant_temperature(t) = 1.0
function default_neighbor!(x::AbstractArray{T}, x_proposal::AbstractArray) where T
@assert size(x) == size(x_proposal)
for i in 1:length(x)
@inbounds x_proposal[i] = x[i] + T(randn()) # workaround because all types might not have randn
end
return
end
struct SimulatedAnnealing{Tn, Ttemp} <: ZerothOrderOptimizer
neighbor!::Tn
temperature::Ttemp
keep_best::Bool # not used!?
end
"""
# SimulatedAnnealing
## Constructor
```julia
SimulatedAnnealing(; neighbor = default_neighbor!,
temperature = log_temperature,
keep_best::Bool = true)
```
The constructor takes 3 keywords:
* `neighbor = a!(x_current, x_proposal)`, a mutating function that fills `x_proposal`
with a proposed neighbor of the current state `x_current`
* `temperature = b(iteration)`, a function of the current iteration that returns a temperature
* `keep_best::Bool = true`, currently unused (see the comment on the `keep_best` field)
## Description
Simulated Annealing is a derivative free method for optimization. It is based on the
Metropolis-Hastings algorithm that was originally used to generate samples from a
thermodynamics system, and is often used to generate draws from a posterior when doing
Bayesian inference. As such, it is a probabilistic method for finding the minimum of a
function, often over quite large domains. For the historical reasons given above, the
algorithm uses terms such as cooling, temperature, and acceptance probabilities.
"""
SimulatedAnnealing(;neighbor = default_neighbor!,
temperature = log_temperature,
keep_best::Bool = true) =
SimulatedAnnealing(neighbor, temperature, keep_best)
Base.summary(::SimulatedAnnealing) = "Simulated Annealing"
mutable struct SimulatedAnnealingState{Tx,T} <: ZerothOrderState
x::Tx
iteration::Int
x_current::Tx
x_proposal::Tx
f_x_current::T
f_proposal::T
end
# We don't have an f_x_previous in SimulatedAnnealing, so we need to special case these
pick_best_x(f_increased, state::SimulatedAnnealingState) = state.x
pick_best_f(f_increased, state::SimulatedAnnealingState, d) = value(d)
function initial_state(method::SimulatedAnnealing, options, d, initial_x::AbstractArray{T}) where T
value!!(d, initial_x)
# Store the best state ever visited
best_x = copy(initial_x)
SimulatedAnnealingState(copy(best_x), 1, best_x, copy(initial_x), value(d), value(d))
end
function update_state!(nd, state::SimulatedAnnealingState{Tx, T}, method::SimulatedAnnealing) where {Tx, T}
# Determine the temperature for current iteration
t = method.temperature(state.iteration)
# Randomly generate a neighbor of our current state
method.neighbor!(state.x_current, state.x_proposal)
# Evaluate the cost function at the proposed state
state.f_proposal = value(nd, state.x_proposal)
if state.f_proposal <= state.f_x_current
# If proposal is superior, we always move to it
copyto!(state.x_current, state.x_proposal)
state.f_x_current = state.f_proposal
# If the new state is the best state yet, keep a record of it
if state.f_proposal < value(nd)
nd.F = state.f_proposal
copyto!(state.x, state.x_proposal)
end
else
# If proposal is inferior, we move to it with probability p
p = exp(-(state.f_proposal - state.f_x_current) / t)
if rand() <= p
copyto!(state.x_current, state.x_proposal)
state.f_x_current = state.f_proposal
end
end
state.iteration += 1
false
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 905 | function trace!(tr, d, state, iteration, method::Union{ZerothOrderOptimizer, SAMIN}, options::Options, curr_time=time())
dt = Dict()
dt["time"] = curr_time
if options.extended_trace
dt["x"] = copy(state.x)
end
update!(tr,
state.iteration,
d.F,
NaN,
dt,
options.store_trace,
options.show_trace,
options.show_every,
options.callback)
end
function assess_convergence(state::ZerothOrderState, d, options::Options)
false, false, false, false
end
f_abschange(d::AbstractObjective, state::ZerothOrderState) = convert(typeof(value(d)), NaN)
f_relchange(d::AbstractObjective, state::ZerothOrderState) = convert(typeof(value(d)), NaN)
x_abschange(state::ZerothOrderState) = convert(real(eltype(state.x)), NaN)
x_relchange(state::ZerothOrderState) = convert(real(eltype(state.x)), NaN)
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 1914 |
function print_header(method::Brent)
@printf "Iter Function value Lower bound Upper bound Best bound\n"
end
function Base.show(io::IO, trace::OptimizationTrace{<:Real, Brent})
@printf io "Iter Function value Lower bound Upper bound Best bound\n"
@printf io "------ -------------- ----------- ----------- ----------\n"
for state in trace.states
show(io, state)
end
return
end
function Base.show(io::IO, t::OptimizationState{<:Real, Brent})
@printf io "%6d %14e %14e %14e %s\n" t.iteration t.value t.metadata["x_lower"] t.metadata["x_upper"] t.metadata["best bound"]
return
end
function print_header(method::GoldenSection)
@printf "Iter Function value Lower bound Upper bound\n"
end
function Base.show(io::IO, trace::OptimizationTrace{<:Real, GoldenSection})
@printf io "Iter Function value Lower bound Upper bound"
@printf io "------ -------------- ----------- -----------"
for state in trace.states
show(io, state)
end
return
end
function Base.show(io::IO, t::OptimizationState{<:Real, GoldenSection})
@printf io "%6d %14e %14e %14e\n" t.iteration t.value t.metadata["x_lower"] t.metadata["x_upper"]
return
end
function Base.show(io::IO, r::UnivariateOptimizationResults)
@printf io "Results of Optimization Algorithm\n"
@printf io " * Algorithm: %s\n" summary(r)
@printf io " * Search Interval: [%f, %f]\n" lower_bound(r) upper_bound(r)
@printf io " * Minimizer: %e\n" minimizer(r)
@printf io " * Minimum: %e\n" minimum(r)
@printf io " * Iterations: %d\n" iterations(r)
@printf io " * Convergence: max(|x - x_upper|, |x - x_lower|) <= 2*(%.1e*|x|+%.1e): %s\n" rel_tol(r) abs_tol(r) converged(r)
@printf io " * Objective Function Calls: %d" f_calls(r)
return
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 354 | mutable struct UnivariateOptimizationResults{Tb,Tt,Tf, Tx,M,O<:UnivariateOptimizer} <: OptimizationResults
method::O
initial_lower::Tb
initial_upper::Tb
minimizer::Tx
minimum::Tf
iterations::Int
iteration_converged::Bool
converged::Bool
rel_tol::Tt
abs_tol::Tt
trace::OptimizationTrace{M}
f_calls::Int
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 1557 | # Univariate Options
function optimize(f,
lower::T,
upper::T;
method = Brent(),
rel_tol::Real = sqrt(eps(float(T))),
abs_tol::Real = eps(float(T)),
iterations::Integer = 1_000,
store_trace::Bool = false,
show_trace::Bool = false,
show_warnings::Bool = true,
callback = nothing,
show_every = 1,
extended_trace::Bool = false) where T <: Real
show_every = show_every > 0 ? show_every : 1
if extended_trace && callback === nothing
show_trace = true
end
show_trace && print_header(method)
Tf = float(T)
optimize(f, Tf(lower), Tf(upper), method;
rel_tol = Tf(rel_tol),
abs_tol = Tf(abs_tol),
iterations = iterations,
store_trace = store_trace,
show_trace = show_trace,
show_warnings = show_warnings,
show_every = show_every,
callback = callback,
extended_trace = extended_trace)
end
function optimize(f,
lower::Union{Integer, Real},
upper::Union{Integer, Real};
kwargs...)
T = promote_type(typeof(lower/1), typeof(upper/1))
optimize(f,
T(lower),
T(upper);
kwargs...)
end
function optimize(f,
lower::Union{Integer, Real},
upper::Union{Integer, Real},
method::Union{Brent, GoldenSection};
kwargs...)
T = promote_type(typeof(lower/1), typeof(upper/1))
optimize(f,
T(lower),
T(upper),
method;
kwargs...)
end
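# Interface sketch: the methods above promote integer bounds to floats and
# forward the keyword options. An illustrative call (assumes Optim is loaded;
# the objective is made up for this example):
#
#     res = optimize(x -> (x - 2)^2, 0, 5; rel_tol = 1e-3)
#     Optim.minimizer(res)   # ≈ 2.0 within the requested tolerance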
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 6979 | """
# Brent
## Constructor
```julia
Brent(;)
```
## Description
Also known as the Brent-Dekker algorithm, `Brent` is a univariate optimization
algorithm for minimizing functions on some interval `[a,b]`. The method uses bisection
to find a zero of the gradient. If the original interval contains a minimum,
bisection will reliably find the solution, but can be slow. To this end `Brent`
combines bisection with the secant method and inverse quadratic interpolation to
accelerate convergence.
## References
R. P. Brent (2002) Algorithms for Minimization Without Derivatives. Dover edition.
"""
struct Brent <: UnivariateOptimizer end
Base.summary(::Brent) = "Brent's Method"
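# Usage sketch (illustrative; assumes Optim is loaded):
#
#     res = optimize(cos, 0.0, 2pi, Brent())
#     Optim.minimizer(res)   # ≈ pi, the minimizer of cos on [0, 2pi]
#     Optim.converged(res)   # true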
function optimize(
f, x_lower::T, x_upper::T,
mo::Brent;
rel_tol::T = sqrt(eps(T)),
abs_tol::T = eps(T),
iterations::Integer = 1_000,
store_trace::Bool = false,
show_trace::Bool = false,
show_warnings::Bool = true,
callback = nothing,
show_every = 1,
extended_trace::Bool = false) where T <: AbstractFloat
t0 = time()
options = (store_trace=store_trace, show_trace=show_trace, show_warnings=show_warnings, show_every=show_every, callback=callback)
if x_lower > x_upper
error("x_lower must be less than x_upper")
end
# Save for later
initial_lower = x_lower
initial_upper = x_upper
golden_ratio::T = T(1)/2 * (3 - sqrt(T(5.0)))
new_minimizer = x_lower + golden_ratio*(x_upper-x_lower)
new_minimum = f(new_minimizer)
best_bound = "initial"
f_calls = 1 # Number of calls to f
step = zero(T)
old_step = zero(T)
old_minimizer = new_minimizer
old_old_minimizer = new_minimizer
old_minimum = new_minimum
old_old_minimum = new_minimum
iteration = 0
converged = false
# Trace the history of states visited
tr = OptimizationTrace{T, typeof(mo)}()
tracing = store_trace || show_trace || extended_trace || callback !== nothing
stopped_by_callback = false
if tracing
# update trace; callbacks can stop routine early by returning true
state = (new_minimizer=new_minimizer,
x_lower=x_lower,
x_upper=x_upper,
best_bound=best_bound,
new_minimum=new_minimum)
stopped_by_callback = trace!(tr, nothing, state, iteration, mo, options, time()-t0)
end
while iteration < iterations && !stopped_by_callback
p = zero(T)
q = zero(T)
x_tol = rel_tol * abs(new_minimizer) + abs_tol
x_midpoint = (x_upper+x_lower)/2
if abs(new_minimizer - x_midpoint) <= 2*x_tol - (x_upper-x_lower)/2
converged = true
break
end
iteration += 1
if abs(old_step) > x_tol
# Compute parabola interpolation
# new_minimizer + p/q is the optimum of the parabola
# Also, q is guaranteed to be positive
r = (new_minimizer - old_minimizer) * (new_minimum - old_old_minimum)
q = (new_minimizer - old_old_minimizer) * (new_minimum - old_minimum)
p = (new_minimizer - old_old_minimizer) * q - (new_minimizer - old_minimizer) * r
q = 2(q - r)
if q > 0
p = -p
else
q = -q
end
end
if abs(p) < abs(q*old_step/2) && p < q*(x_upper-new_minimizer) && p < q*(new_minimizer-x_lower)
old_step = step
step = p/q
# The function must not be evaluated too close to x_upper or x_lower
x_temp = new_minimizer + step
if ((x_temp - x_lower) < 2*x_tol || (x_upper - x_temp) < 2*x_tol)
step = (new_minimizer < x_midpoint) ? x_tol : -x_tol
end
else
old_step = (new_minimizer < x_midpoint) ? x_upper - new_minimizer : x_lower - new_minimizer
step = golden_ratio * old_step
end
# The function must not be evaluated too close to new_minimizer
if abs(step) >= x_tol
new_x = new_minimizer + step
else
new_x = new_minimizer + ((step > 0) ? x_tol : -x_tol)
end
new_f = f(new_x)
f_calls += 1
if new_f < new_minimum
if new_x < new_minimizer
x_upper = new_minimizer
best_bound = "upper"
else
x_lower = new_minimizer
best_bound = "lower"
end
old_old_minimizer = old_minimizer
old_old_minimum = old_minimum
old_minimizer = new_minimizer
old_minimum = new_minimum
new_minimizer = new_x
new_minimum = new_f
else
if new_x < new_minimizer
x_lower = new_x
else
x_upper = new_x
end
if new_f <= old_minimum || old_minimizer == new_minimizer
old_old_minimizer = old_minimizer
old_old_minimum = old_minimum
old_minimizer = new_x
old_minimum = new_f
elseif new_f <= old_old_minimum || old_old_minimizer == new_minimizer || old_old_minimizer == old_minimizer
old_old_minimizer = new_x
old_old_minimum = new_f
end
end
if tracing
# update trace; callbacks can stop routine early by returning true
state = (new_minimizer=new_minimizer,
x_lower=x_lower,
x_upper=x_upper,
best_bound=best_bound,
new_minimum=new_minimum)
stopped_by_callback = trace!(tr, nothing, state, iteration, mo, options, time()-t0)
end
end
return UnivariateOptimizationResults(mo,
initial_lower,
initial_upper,
new_minimizer,
new_minimum,
iteration,
iteration == iterations,
converged,
rel_tol,
abs_tol,
tr,
f_calls)
end
function trace!(tr, d, state, iteration, method::Brent, options, curr_time=time())
dt = Dict()
dt["time"] = curr_time
dt["minimizer"] = state.new_minimizer
dt["x_lower"] = state.x_lower
dt["x_upper"] = state.x_upper
dt["best bound"] = state.best_bound
T = eltype(state.new_minimum)
update!(tr,
iteration,
state.new_minimum,
T(NaN),
dt,
options.store_trace,
options.show_trace,
options.show_every,
options.callback)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 4888 | """
# GoldenSection
## Constructor
```julia
GoldenSection(;)
```
## Description
The `GoldenSection` method seeks to minimize a univariate function on an interval
`[a, b]`. At all times the algorithm maintains a tuple of three minimizer candidates
`(c, d, e)` where ``c<d<e`` such that the ratio of the largest to the smallest interval
is the Golden Ratio.
## References
https://en.wikipedia.org/wiki/Golden-section_search
"""
struct GoldenSection <: UnivariateOptimizer end
Base.summary(::GoldenSection) = "Golden Section Search"
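# Usage sketch (illustrative; assumes Optim is loaded):
#
#     f(x) = 2x^2 + 3x + 1
#     res = optimize(f, -2.0, 1.0, GoldenSection())
#     Optim.minimizer(res)   # ≈ -0.75
#     Optim.minimum(res)     # ≈ -0.125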
function optimize(f, x_lower::T, x_upper::T,
mo::GoldenSection;
rel_tol::T = sqrt(eps(T)),
abs_tol::T = eps(T),
iterations::Integer = 1_000,
store_trace::Bool = false,
show_trace::Bool = false,
show_warnings::Bool = true,
callback = nothing,
show_every = 1,
extended_trace::Bool = false,
nargs...) where T <: AbstractFloat
if x_lower > x_upper
error("x_lower must be less than x_upper")
end
t0 = time()
options = (store_trace=store_trace, show_trace=show_trace, show_warnings=show_warnings, show_every=show_every, callback=callback)
# Save for later
initial_lower = x_lower
initial_upper = x_upper
golden_ratio::T = T(1)/2 * (3 - sqrt(T(5)))
new_minimizer = x_lower + golden_ratio*(x_upper-x_lower)
new_minimum = f(new_minimizer)
best_bound = "initial"
f_calls = 1 # Number of calls to f
iteration = 0
converged = false
# Trace the history of states visited
tr = OptimizationTrace{T, typeof(mo)}()
tracing = store_trace || show_trace || extended_trace || callback !== nothing
stopped_by_callback = false
if tracing
# update trace; callbacks can stop routine early by returning true
state = (new_minimizer=new_minimizer,
x_lower=x_lower,
x_upper=x_upper,
best_bound=best_bound,
new_minimum=new_minimum)
stopped_by_callback = trace!(tr, nothing, state, iteration, mo, options, time()-t0)
end
while iteration < iterations && !stopped_by_callback
x_tol = rel_tol * abs(new_minimizer) + abs_tol
x_midpoint = (x_upper+x_lower)/2
if abs(new_minimizer - x_midpoint) <= 2*x_tol - (x_upper-x_lower)/2
converged = true
break
end
iteration += 1
if x_upper - new_minimizer > new_minimizer - x_lower
new_x = new_minimizer + golden_ratio*(x_upper - new_minimizer)
new_f = f(new_x)
f_calls += 1
if new_f < new_minimum
x_lower = new_minimizer
best_bound = "lower"
new_minimizer = new_x
new_minimum = new_f
else
x_upper = new_x
best_bound = "upper"
end
else
new_x = new_minimizer - golden_ratio*(new_minimizer - x_lower)
new_f = f(new_x)
f_calls += 1
if new_f < new_minimum
x_upper = new_minimizer
best_bound = "upper"
new_minimizer = new_x
new_minimum = new_f
else
x_lower = new_x
best_bound = "lower"
end
end
if tracing
# update trace; callbacks can stop routine early by returning true
state = (new_minimizer=new_minimizer,
x_lower=x_lower,
x_upper=x_upper,
best_bound=best_bound,
new_minimum=new_minimum)
stopped_by_callback = trace!(tr, nothing, state, iteration, mo, options, time()-t0)
end
end
return UnivariateOptimizationResults(mo,
initial_lower,
initial_upper,
new_minimizer,
new_minimum,
iteration,
iteration == iterations,
converged,
rel_tol,
abs_tol,
tr,
f_calls)
end
function trace!(tr, d, state, iteration, method::GoldenSection, options, curr_time=time())
dt = Dict()
dt["time"] = curr_time
dt["minimizer"] = state.new_minimizer
dt["x_lower"] = state.x_lower
dt["x_upper"] = state.x_upper
T = eltype(state.new_minimum)
update!(tr,
iteration,
state.new_minimum,
T(NaN),
dt,
options.store_trace,
options.show_trace,
options.show_every,
options.callback)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 3403 | f_abschange(d::AbstractObjective, state) = f_abschange(value(d), state.f_x_previous)
f_abschange(f_x::T, f_x_previous) where T = abs(f_x - f_x_previous)
f_relchange(d::AbstractObjective, state) = f_relchange(value(d), state.f_x_previous)
f_relchange(f_x::T, f_x_previous) where T = abs(f_x - f_x_previous)/abs(f_x)
x_abschange(state) = x_abschange(state.x, state.x_previous)
x_abschange(x, x_previous) = maxdiff(x, x_previous)
x_relchange(state) = x_relchange(state.x, state.x_previous)
x_relchange(x, x_previous) = maxdiff(x, x_previous)/maximum(abs, x)
g_residual(d, state) = g_residual(d)
g_residual(d, state::NelderMeadState) = state.nm_x
g_residual(d::AbstractObjective) = g_residual(gradient(d))
g_residual(d::NonDifferentiable) = convert(typeof(value(d)), NaN)
g_residual(g) = maximum(abs, g)
gradient_convergence_assessment(state::AbstractOptimizerState, d, options) = g_residual(gradient(d)) ≤ options.g_abstol
gradient_convergence_assessment(state::ZerothOrderState, d, options) = false
# Default function for convergence assessment used by
# AcceleratedGradientDescentState, BFGSState, ConjugateGradientState,
# GradientDescentState, LBFGSState, MomentumGradientDescentState and NewtonState
function assess_convergence(state::AbstractOptimizerState, d, options::Options)
assess_convergence(state.x,
state.x_previous,
value(d),
state.f_x_previous,
gradient(d),
options.x_abstol,
options.x_reltol,
options.f_abstol,
options.f_reltol,
options.g_abstol)
end
function assess_convergence(x, x_previous, f_x, f_x_previous, gx, x_abstol, x_reltol, f_abstol, f_reltol, g_abstol)
x_converged, f_converged, f_increased, g_converged = false, false, false, false
# TODO: Create function for x_convergence_assessment
if x_abschange(x, x_previous) ≤ x_abstol
x_converged = true
end
if x_abschange(x, x_previous) ≤ x_reltol * maximum(abs, x)
x_converged = true
end
# Relative Tolerance
# TODO: Create function for f_convergence_assessment
if f_abschange(f_x, f_x_previous) ≤ f_abstol
f_converged = true
end
if f_abschange(f_x, f_x_previous) ≤ f_reltol*abs(f_x)
f_converged = true
end
if f_x > f_x_previous
f_increased = true
end
g_converged = g_residual(gx) ≤ g_abstol
return x_converged, f_converged, g_converged, f_increased
end
# Used by Fminbox and IPNewton
function assess_convergence(x,
x_previous,
f_x,
f_x_previous,
g,
x_tol,
f_tol,
g_tol)
x_converged, f_converged, f_increased, g_converged = false, false, false, false
if x_abschange(x, x_previous) ≤ x_tol
x_converged = true
end
# Absolute Tolerance
# if abs(f_x - f_x_previous) < f_tol
# Relative Tolerance
if f_abschange(f_x, f_x_previous) ≤ f_tol*abs(f_x)
f_converged = true
end
if f_x > f_x_previous
f_increased = true
end
if g_residual(g) ≤ g_tol
g_converged = true
end
return x_converged, f_converged, g_converged, f_increased
end
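# Behavior sketch for the three-tolerance variant above (illustrative values;
# assumes Optim is loaded): a small step, a small relative change in f, and a
# small gradient flip all three convergence flags without an f increase.
#
#     Optim.assess_convergence([1.0 - 1e-7], [1.0],       # x, x_previous
#                              1.0 - 1e-7, 1.0, [1e-7],   # f_x, f_x_previous, g
#                              1e-6, 1e-6, 1e-6)          # x_tol, f_tol, g_tol
#     # -> (true, true, true, false)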
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 355 | macro def(name, definition)
esc(quote
macro $name()
esc($(Expr(:quote, definition)))
end
end)
end
@def add_linesearch_fields begin
x_ls::Tx
alpha::T
end
@def initial_linesearch begin
(similar(initial_x), # Buffer of x for line search in state.x_ls
real(one(T))) # Keep track of step size in state.alpha
end
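# Expansion sketch: `@def name body` defines a macro `@name` that splices
# `body` verbatim wherever it is invoked. Inside a state struct definition,
# e.g. (hypothetical struct, for illustration only):
#
#     mutable struct ExampleState{Tx, T}
#         x::Tx
#         @add_linesearch_fields          # splices in: x_ls::Tx ; alpha::T
#     end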
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 415 | # generic version for gpu support
function maxdiff(x::AbstractArray, y::AbstractArray)
return mapreduce((a, b) -> abs(a - b), max, x, y)
end
# allocation free version for normal arrays
function maxdiff(x::Array, y::Array)
res = real(zero(x[1] - y[1]))
@inbounds for i in 1:length(x)
delta = abs(x[i] - y[i])
if delta > res
res = delta
end
end
return res
end
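# Behavior sketch: `maxdiff` is the infinity-norm distance used by the
# x-convergence checks. Illustrative values:
#
#     maxdiff([1.0, 2.0], [1.5, 0.0])   # == 2.0, the largest |x[i] - y[i]|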
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2620 | # Reset the search direction if it becomes corrupted
# return true if the direction was changed
reset_search_direction!(state, d, method) = false # no-op
_alphaguess(a) = a
_alphaguess(a::Number) = LineSearches.InitialStatic(alpha=a)
# Note that for these resets we're using `gradient(d)` but we don't need to use
# project_tangent! here, because we already did that inplace on gradient(d) after
# the last evaluation (we basically just always do it)
function reset_search_direction!(state, d, method::BFGS)
if method.initial_invH === nothing
n = length(state.x)
T = typeof(state.invH)
if method.initial_stepnorm === nothing
state.invH .= _init_identity_matrix(state.x)
else
initial_scale = method.initial_stepnorm * inv(norm(gradient(d), Inf))
state.invH .= _init_identity_matrix(state.x, initial_scale)
end
else
state.invH .= method.initial_invH(state.x)
end
# copyto!(state.invH, method.initial_invH(state.x))
state.s .= .-gradient(d)
return true
end
function reset_search_direction!(state, d, method::LBFGS)
state.pseudo_iteration = 1
state.s .= .-gradient(d)
return true
end
function reset_search_direction!(state, d, method::ConjugateGradient)
state.s .= .-state.pg
return true
end
function perform_linesearch!(state, method, d)
# Calculate search direction dphi0
dphi_0 = real(dot(gradient(d), state.s))
# reset the direction if it becomes corrupted
if dphi_0 >= zero(dphi_0) && reset_search_direction!(state, d, method)
dphi_0 = real(dot(gradient(d), state.s)) # update after direction reset
end
phi_0 = value(d)
# Guess an alpha
method.alphaguess!(method.linesearch!, state, phi_0, dphi_0, d)
# Store current x and f(x) for next iteration
state.f_x_previous = phi_0
copyto!(state.x_previous, state.x)
# Perform line search; catch LineSearchException to allow graceful exit
try
state.alpha, ϕalpha =
method.linesearch!(d, state.x, state.s, state.alpha,
state.x_ls, phi_0, dphi_0)
return true # lssuccess = true
catch ex
if isa(ex, LineSearches.LineSearchException)
state.alpha = ex.alpha
# We shouldn't warn here, we should just carry it to the output
# @warn("Linesearch failed, using alpha = $(state.alpha) and
# exiting optimization.\nThe linesearch exited with message:\n$(ex.message)")
return false # lssuccess = false
else
rethrow(ex)
end
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 706 | # First order methods trace, used by AcceleratedGradientDescent,
# ConjugateGradient, GradientDescent, LBFGS and MomentumGradientDescent
function common_trace!(tr, d, state, iteration, method::FirstOrderOptimizer, options, curr_time=time())
dt = Dict()
dt["time"] = curr_time
if options.extended_trace
dt["x"] = copy(state.x)
dt["g(x)"] = copy(gradient(d))
dt["Current step size"] = state.alpha
end
g_norm = maximum(abs, gradient(d))
update!(tr,
iteration,
value(d),
g_norm,
dt,
options.store_trace,
options.show_trace,
options.show_every,
options.callback)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 812 | function update!(tr::OptimizationTrace{Tf, T},
iteration::Integer,
f_x::Tf,
grnorm::Real,
dt::Dict,
store_trace::Bool,
show_trace::Bool,
show_every::Int = 1,
callback = nothing,
trace_simplex = false) where {Tf, T}
os = OptimizationState{Tf, T}(iteration, f_x, grnorm, dt)
if store_trace
push!(tr, os)
end
if show_trace
if iteration % show_every == 0
show(os)
flush(stdout)
end
end
if callback !== nothing && (iteration % show_every == 0)
if store_trace
stopped = callback(tr)
else
stopped = callback(os)
end
else
stopped = false
end
stopped
end
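# Callback sketch: `update!` above is where user callbacks fire. A callback
# passed through `Optim.Options` receives the latest `OptimizationState` (or
# the whole trace when `store_trace = true`) and halts the run by returning
# `true`. Illustrative (assumes Optim is loaded; objective is made up):
#
#     opts = Optim.Options(callback = os -> os.iteration == 5)
#     optimize(x -> sum(abs2, x), zeros(2), NelderMead(), opts)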
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2432 | module TestOptim
using Test
import Optim
import MathOptInterface
const MOI = MathOptInterface
function runtests()
for name in names(@__MODULE__; all = true)
if startswith("$(name)", "test_")
@testset "$(name)" begin
getfield(@__MODULE__, name)()
end
end
end
return
end
function test_SolverName()
@test MOI.get(Optim.Optimizer(), MOI.SolverName()) == "Optim"
end
function test_supports_incremental_interface()
@test MOI.supports_incremental_interface(Optim.Optimizer())
end
function test_MOI_Test()
model = MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
Optim.Optimizer(),
)
MOI.set(model, MOI.Silent(), true)
MOI.Test.runtests(
model,
MOI.Test.Config(
atol = 1e-6,
rtol = 1e-6,
optimal_status = MOI.LOCALLY_SOLVED,
exclude = Any[
MOI.ConstraintBasisStatus,
MOI.VariableBasisStatus,
MOI.ConstraintName,
MOI.VariableName,
MOI.ObjectiveBound,
MOI.DualObjectiveValue,
MOI.SolverVersion,
MOI.ConstraintDual,
],
),
exclude = String[
# No objective
"test_attribute_SolveTimeSec",
"test_attribute_RawStatusString",
# FIXME The hessian callback for constraints is called with
# `λ = [-Inf, 0.0]` and then we get `NaN`, ...
"expression_hs071",
# Terminates with `OTHER_ERROR`
"test_objective_ObjectiveFunction_duplicate_terms",
"test_objective_ObjectiveFunction_constant",
"test_objective_ObjectiveFunction_VariableIndex",
"test_objective_FEASIBILITY_SENSE_clears_objective",
"test_nonlinear_expression_hs109",
"test_objective_qp_ObjectiveFunction_zero_ofdiag",
"test_objective_qp_ObjectiveFunction_edge_cases",
"test_solve_TerminationStatus_DUAL_INFEASIBLE",
"test_solve_result_index",
"test_modification_transform_singlevariable_lessthan",
"test_modification_delete_variables_in_a_batch",
"test_modification_delete_variable_with_single_variable_obj",
],
)
return
end
end # module TestOptim
TestOptim.runtests()
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 441 | @testset "Literate examples" begin
SKIPFILE = []
EXAMPLEDIR = joinpath(@__DIR__, "../docs/src/examples")
myfilter(str) = occursin(r"\.jl$", str) && !(str in SKIPFILE)
for file in filter!(myfilter, readdir(EXAMPLEDIR))
@testset "$file" begin
mktempdir() do dir
cd(dir) do
include(joinpath(EXAMPLEDIR, file))
end
end
end
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 10422 | using Test
using Optim
using OptimTestProblems
using OptimTestProblems.MultivariateProblems
const MVP = MultivariateProblems
import PositiveFactorizations: Positive, cholesky # for the IPNewton tests
using Random
import LineSearches
import NLSolversBase
import NLSolversBase: clear!
import LinearAlgebra: norm, diag, I, Diagonal, dot, eigen, issymmetric, mul!
import SparseArrays: normalize!, spdiagm
debug_printing = false
special_tests = [
"bigfloat/initial_convergence",
]
special_tests = map(s->"./special/"*s*".jl", special_tests)
general_tests = [
"api",
"callables",
"callbacks",
"convergence",
"default_solvers",
"deprecate",
"initial_convergence",
"objective_types",
"Optim",
"optimize",
"type_stability",
"types",
"counter",
"maximize",
]
general_tests = map(s->"./general/"*s*".jl", general_tests)
univariate_tests = [
# optimize
"optimize/interface",
"optimize/optimize",
# solvers
"solvers/golden_section",
"solvers/brent",
# "initial_convergence",
"dual",
]
univariate_tests = map(s->"./univariate/"*s*".jl", univariate_tests)
multivariate_tests = [
## optimize
"optimize/interface",
"optimize/optimize",
"optimize/inplace",
## solvers
## constrained
"solvers/constrained/fminbox",
"solvers/constrained/ipnewton/interface",
"solvers/constrained/ipnewton/constraints",
"solvers/constrained/ipnewton/counter",
"solvers/constrained/ipnewton/ipnewton_unconstrained",
"solvers/constrained/samin",
## first order
"solvers/first_order/accelerated_gradient_descent",
"solvers/first_order/adam_adamax",
"solvers/first_order/bfgs",
"solvers/first_order/cg",
"solvers/first_order/gradient_descent",
"solvers/first_order/l_bfgs",
"solvers/first_order/momentum_gradient_descent",
"solvers/first_order/ngmres",
## second order
"solvers/second_order/newton",
"solvers/second_order/newton_trust_region",
"solvers/second_order/krylov_trust_region",
## zeroth order
"solvers/zeroth_order/grid_search",
"solvers/zeroth_order/nelder_mead",
"solvers/zeroth_order/particle_swarm",
"solvers/zeroth_order/simulated_annealing",
## other
"array",
"extrapolate",
"lsthrow",
"precon",
"manifolds",
"complex",
"fdtime",
"arbitrary_precision",
"successive_f_tol",
"f_increase",
"measurements",
]
multivariate_tests = map(s->"./multivariate/"*s*".jl", multivariate_tests)
input_tuple(method, prob) = ((MVP.objective(prob),),)
input_tuple(method::Optim.FirstOrderOptimizer, prob) = ((MVP.objective(prob),), (MVP.objective(prob), MVP.gradient(prob)))
input_tuple(method::Optim.SecondOrderOptimizer, prob) = ((MVP.objective(prob),), (MVP.objective(prob), MVP.gradient(prob)), (MVP.objective(prob), MVP.gradient(prob), MVP.hessian(prob)))
function run_optim_tests(method; convergence_exceptions = (),
minimizer_exceptions = (),
minimum_exceptions = (),
f_increase_exceptions = (),
iteration_exceptions = (),
skip = (),
show_name = false,
show_trace = false,
show_res = false,
show_itcalls = false)
# Loop over unconstrained problems
for (name, prob) in MultivariateProblems.UnconstrainedProblems.examples
if !isfinite(prob.minimum) || !any(isfinite, prob.solutions)
debug_printing && println("$name has no registered minimum/minimizer. Skipping ...")
continue
end
show_name && printstyled("Problem: ", name, "\n", color=:green)
# Look for name in the first elements of the iteration_exceptions tuples
iter_id = findall(n->n[1] == name, iteration_exceptions)
# If name wasn't found, use default 1000 iterations, else use provided number
iters = length(iter_id) == 0 ? 1000 : iteration_exceptions[iter_id[1]][2]
# Construct options
allow_f_increases = (name in f_increase_exceptions)
dopts = Optim.default_options(method)
if haskey(dopts, :allow_f_increases)
allow_f_increases = allow_f_increases || dopts[:allow_f_increases]
dopts = (;dopts..., allow_f_increases = allow_f_increases)
end
options = Optim.Options(allow_f_increases = allow_f_increases,
iterations = iters, show_trace = show_trace;
dopts...)
# Use finite difference if it is not differentiable enough
if !(name in skip)
for (i, input) in enumerate(input_tuple(method, prob))
if (!prob.isdifferentiable && i > 1) || (!prob.istwicedifferentiable && i > 2)
continue
end
# Loop over appropriate input combinations of f, g!, and h!
results = Optim.optimize(input..., prob.initial_x, method, options)
@test isa(summary(results), String)
show_res && println(results)
show_itcalls && printstyled("Iterations: $(Optim.iterations(results))\n", color=:red)
show_itcalls && printstyled("f-calls: $(Optim.f_calls(results))\n", color=:red)
show_itcalls && printstyled("g-calls: $(Optim.g_calls(results))\n", color=:red)
show_itcalls && printstyled("h-calls: $(Optim.h_calls(results))\n", color=:red)
if !((name, i) in convergence_exceptions)
@test Optim.converged(results)
# Print on error, easier to debug CI
if !(Optim.converged(results))
printstyled(name, " did not converge with i = ", i, "\n", color=:red)
printstyled(results, "\n", color=:red)
end
end
if !((name, i) in minimum_exceptions)
@test Optim.minimum(results) < prob.minimum + sqrt(eps(typeof(prob.minimum)))
end
if !((name, i) in minimizer_exceptions)
@test norm(Optim.minimizer(results) - prob.solutions) < 1e-2
end
end
else
debug_printing && printstyled("Skipping $name\n", color=:blue)
end
end
end
function run_optim_tests_constrained(method; convergence_exceptions = (),
minimizer_exceptions = (),
minimum_exceptions = (),
f_increase_exceptions = (),
iteration_exceptions = (),
skip = (),
show_name = false,
show_trace = false,
show_res = false,
show_itcalls = false)
# TODO: Update with constraint problems too?
# Loop over unconstrained problems
for (name, prob) in MVP.UnconstrainedProblems.examples
if !isfinite(prob.minimum) || !any(isfinite, prob.solutions)
debug_printing && println("$name has no registered minimum/minimizer. Skipping ...")
continue
end
show_name && printstyled("Problem: ", name, "\n", color=:green)
# Look for name in the first elements of the iteration_exceptions tuples
iter_id = findall(n->n[1] == name, iteration_exceptions)
# If name wasn't found, use default 1000 iterations, else use provided number
iters = length(iter_id) == 0 ? 1000 : iteration_exceptions[iter_id[1]][2]
# Construct options
allow_f_increases = (name in f_increase_exceptions)
options = Optim.Options(iterations = iters, show_trace = show_trace; Optim.default_options(method)...)
# Use finite difference if it is not differentiable enough
if !(name in skip) && prob.istwicedifferentiable
# Loop over appropriate input combinations of f, g!, and h!
df = TwiceDifferentiable(MVP.objective(prob), MVP.gradient(prob),
MVP.objective_gradient(prob), MVP.hessian(prob), prob.initial_x)
infvec = fill(Inf, size(prob.initial_x))
constraints = TwiceDifferentiableConstraints(-infvec, infvec)
results = optimize(df,constraints,prob.initial_x, method, options)
@test isa(Optim.summary(results), String)
show_res && println(results)
show_itcalls && printstyled("Iterations: $(Optim.iterations(results))\n", color=:red)
show_itcalls && printstyled("f-calls: $(Optim.f_calls(results))\n", color=:red)
show_itcalls && printstyled("g-calls: $(Optim.g_calls(results))\n", color=:red)
show_itcalls && printstyled("h-calls: $(Optim.h_calls(results))\n", color=:red)
if !(name in convergence_exceptions)
@test Optim.converged(results)
# Print on error
if !(Optim.converged(results))
printstyled(name, "did not converge\n", color=:red)
printstyled(results, "\n", color=:red)
end
end
if !(name in minimum_exceptions)
@test Optim.minimum(results) < prob.minimum + sqrt(eps(typeof(prob.minimum)))
end
if !(name in minimizer_exceptions)
@test norm(Optim.minimizer(results) - prob.solutions) < 1e-2
end
else
debug_printing && printstyled("Skipping $name\n", color=:blue)
end
end
end
@testset "special" begin
for my_test in special_tests
println(my_test)
@time include(my_test)
end
end
@testset "general" begin
for my_test in general_tests
println(my_test)
@time include(my_test)
end
end
@testset "univariate" begin
for my_test in univariate_tests
println(my_test)
@time include(my_test)
end
end
@testset "multivariate" begin
for my_test in multivariate_tests
println(my_test)
@time include(my_test)
end
end
println("Literate examples")
@time include("examples.jl")
@testset "show method for options" begin
o = Optim.Options()
@test occursin(" = ", sprint(show, o))
end
@testset "MOI wrapper" begin
include("MOI_wrapper.jl")
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 81 | @testset "package issues" begin
@test isempty(detect_ambiguities(Optim))
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 7465 | # Test multivariate optimization
@testset "Multivariate API" begin
rosenbrock = MultivariateProblems.UnconstrainedProblems.examples["Rosenbrock"]
f = MVP.objective(rosenbrock)
g! = MVP.gradient(rosenbrock)
h! = MVP.hessian(rosenbrock)
initial_x = rosenbrock.initial_x
T = eltype(initial_x)
d1 = OnceDifferentiable(f, initial_x)
d2 = OnceDifferentiable(f, g!, initial_x)
d3 = TwiceDifferentiable(f, g!, h!, initial_x)
Optim.optimize(f, initial_x, BFGS())
Optim.optimize(f, g!, initial_x, BFGS())
Optim.optimize(f, g!, h!, initial_x, BFGS())
Optim.optimize(d2, initial_x, BFGS())
Optim.optimize(d3, initial_x, BFGS())
Optim.optimize(f, initial_x, BFGS(), Optim.Options())
Optim.optimize(f, g!, initial_x, BFGS(), Optim.Options())
Optim.optimize(f, g!, h!, initial_x, BFGS(), Optim.Options())
Optim.optimize(d2, initial_x, BFGS(), Optim.Options())
Optim.optimize(d3, initial_x, BFGS(), Optim.Options())
Optim.optimize(d1, initial_x, BFGS())
Optim.optimize(d2, initial_x, BFGS())
Optim.optimize(d1, initial_x, GradientDescent())
Optim.optimize(d2, initial_x, GradientDescent())
Optim.optimize(d1, initial_x, LBFGS())
Optim.optimize(d2, initial_x, LBFGS())
Optim.optimize(f, initial_x, NelderMead())
Optim.optimize(d3, initial_x, Newton())
Optim.optimize(f, initial_x, SimulatedAnnealing())
optimize(f, initial_x, BFGS())
optimize(f, g!, initial_x, BFGS())
optimize(f, g!, h!, initial_x, BFGS())
optimize(f, initial_x, GradientDescent())
optimize(f, g!, initial_x, GradientDescent())
optimize(f, g!, h!, initial_x, GradientDescent())
optimize(f, initial_x, LBFGS())
optimize(f, g!, initial_x, LBFGS())
optimize(f, g!, h!, initial_x, LBFGS())
optimize(f, initial_x, NelderMead())
optimize(f, g!, initial_x, NelderMead())
optimize(f, g!, h!, initial_x, NelderMead())
optimize(f, g!, h!, initial_x, Newton())
optimize(f, initial_x, SimulatedAnnealing())
optimize(f, g!, initial_x, SimulatedAnnealing())
optimize(f, g!, h!, initial_x, SimulatedAnnealing())
options = Optim.Options(g_tol = 1e-12, iterations = 10,
store_trace = true, show_trace = false)
res = optimize(f, g!, h!,
initial_x,
BFGS(),
options)
options_g = Optim.Options(g_tol = 1e-12, iterations = 10,
store_trace = true, show_trace = false)
options_f = Optim.Options(g_tol = 1e-12, iterations = 10,
store_trace = true, show_trace = false)
res = optimize(f, g!, h!,
initial_x,
GradientDescent(),
options_g)
res = optimize(f, g!, h!,
initial_x,
LBFGS(),
options_g)
res = optimize(f, g!, h!,
initial_x,
NelderMead(),
options_f)
res = optimize(f, g!, h!,
initial_x,
Newton(),
options_g)
options_sa = Optim.Options(iterations = 10, store_trace = true,
show_trace = false)
res = optimize(f, g!, h!,
initial_x,
SimulatedAnnealing(),
options_sa)
res = optimize(f, g!, h!,
initial_x,
BFGS(),
options_g)
options_ext = Optim.Options(g_tol = 1e-12, iterations = 10,
store_trace = true, show_trace = false,
extended_trace = true)
res_ext = optimize(f, g!, h!,
initial_x,
BFGS(),
options_ext)
@test summary(res) == "BFGS"
@test Optim.minimum(res) ≈ 1.2580194638225255
@test Optim.minimizer(res) ≈ [-0.116688, 0.0031153] rtol=0.001
@test Optim.iterations(res) == 10
@test Optim.f_calls(res) == 38
@test Optim.g_calls(res) == 38
@test Optim.converged(res) == false
@test Optim.x_converged(res) == false
@test Optim.f_converged(res) == false
@test Optim.g_converged(res) == false
@test Optim.g_tol(res) == 1e-12
@test Optim.iteration_limit_reached(res) == true
@test Optim.initial_state(res) == [-1.2, 1.0]
@test haskey(Optim.trace(res_ext)[1].metadata,"x")
# just testing if it runs
Optim.trace(res)
Optim.f_trace(res)
Optim.g_norm_trace(res)
@test_throws ErrorException Optim.x_trace(res)
@test_throws ErrorException Optim.x_lower_trace(res)
@test_throws ErrorException Optim.x_upper_trace(res)
@test_throws ErrorException Optim.lower_bound(res)
@test_throws ErrorException Optim.upper_bound(res)
@test_throws ErrorException Optim.rel_tol(res)
@test_throws ErrorException Optim.abs_tol(res)
options_extended = Optim.Options(store_trace = true, extended_trace = true)
res_extended = Optim.optimize(f, g!, initial_x, BFGS(), options_extended)
@test haskey(Optim.trace(res_extended)[1].metadata,"~inv(H)")
@test haskey(Optim.trace(res_extended)[1].metadata,"g(x)")
@test haskey(Optim.trace(res_extended)[1].metadata,"x")
options_extended_nm = Optim.Options(store_trace = true, extended_trace = true)
res_extended_nm = Optim.optimize(f, g!, initial_x, NelderMead(), options_extended_nm)
@test haskey(Optim.trace(res_extended_nm)[1].metadata,"centroid")
@test haskey(Optim.trace(res_extended_nm)[1].metadata,"step_type")
end
# Test univariate API
@testset "Univariate API" begin
f(x) = 2x^2+3x+1
res = optimize(f, -2.0, 1.0, GoldenSection())
@test summary(res) == "Golden Section Search"
@test Optim.minimum(res) ≈ -0.125
@test Optim.minimizer(res) ≈ -0.749999994377939
@test Optim.iterations(res) == 38
@test Optim.iteration_limit_reached(res) == false
@test_throws ErrorException Optim.trace(res)
@test_throws ErrorException Optim.x_trace(res)
@test_throws ErrorException Optim.x_lower_trace(res)
@test_throws ErrorException Optim.x_upper_trace(res)
@test_throws ErrorException Optim.f_trace(res)
@test Optim.lower_bound(res) == -2.0
@test Optim.upper_bound(res) == 1.0
@test Optim.rel_tol(res) ≈ 1.4901161193847656e-8
@test Optim.abs_tol(res) ≈ 2.220446049250313e-16
@test_throws ErrorException Optim.initial_state(res)
@test_throws ErrorException Optim.g_norm_trace(res)
@test_throws ErrorException Optim.g_calls(res)
@test_throws ErrorException Optim.x_converged(res)
@test_throws ErrorException Optim.f_converged(res)
@test_throws ErrorException Optim.g_converged(res)
@test_throws ErrorException Optim.x_tol(res)
@test_throws ErrorException Optim.f_tol(res)
@test_throws ErrorException Optim.g_tol(res)
res = optimize(f, -2.0, 1.0, GoldenSection(), store_trace = true, extended_trace = true)
# Right now, these just "test" if they run
Optim.x_trace(res)
Optim.x_lower_trace(res)
Optim.x_upper_trace(res)
end
@testset "#948" begin
res1 = optimize(OnceDifferentiable(t -> t[1]^2, (t, g) -> fill!(g, NaN), [0.0]), [0.0])
res2 = optimize(OnceDifferentiable(t -> t[1]^2, (t, g) -> fill!(g, Inf), [0.0]), [0.0])
res3 = optimize(OnceDifferentiable(t -> Inf, [0.0]), [0.0])
@test !Optim.converged(res1)
@test !Optim.converged(res2)
@test !Optim.converged(res3)
end | Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 384 | mutable struct MyCallable
f
end
(a::MyCallable)(x) = a.f(x)
@testset "callables" begin
@testset "univariate" begin
# FIXME
end
@testset "multivariate" begin
function rosenbrock(x::Vector)
return (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
end
a = MyCallable(rosenbrock)
optimize(a, rand(2), Optim.Options())
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2627 | @testset "Callbacks" begin
problem = MultivariateProblems.UnconstrainedProblems.examples["Rosenbrock"]
f = MVP.objective(problem)
g! = MVP.gradient(problem)
h! = MVP.hessian(problem)
initial_x = problem.initial_x
d2 = OnceDifferentiable(f, g!, initial_x)
d3 = TwiceDifferentiable(f, g!, h!, initial_x)
for method in (NelderMead(), SimulatedAnnealing())
ot_run = false
cb = tr -> begin
@test tr[end].iteration % 3 == 0
ot_run = true
false
end
options = Optim.Options(callback = cb, show_every=3, store_trace=true)
optimize(f, initial_x, method, options)
@test ot_run == true
os_run = false
cb = os -> begin
@test os.iteration % 3 == 0
os_run = true
false
end
options = Optim.Options(callback = cb, show_every=3)
optimize(f, initial_x, method, options)
@test os_run == true
# Test early stopping by callbacks
options = Optim.Options(callback = x -> x.iteration == 5 ? true : false)
optimize(f, zeros(2), NelderMead(), options)
end
for method in (BFGS(),
ConjugateGradient(),
GradientDescent(),
MomentumGradientDescent())
ot_run = false
cb = tr -> begin
@test tr[end].iteration % 3 == 0
ot_run = true
false
end
options = Optim.Options(callback = cb, show_every=3, store_trace=true)
optimize(d2, initial_x, method, options)
@test ot_run == true
os_run = false
cb = os -> begin
@test os.iteration % 3 == 0
os_run = true
false
end
options = Optim.Options(callback = cb, show_every=3)
optimize(d2, initial_x, method, options)
@test os_run == true
end
for method in (Newton(),)
ot_run = false
cb = tr -> begin
@test tr[end].iteration % 3 == 0
ot_run = true
false
end
options = Optim.Options(callback = cb, show_every=3, store_trace=true)
optimize(d3, initial_x, method, options)
@test ot_run == true
os_run = false
cb = os -> begin
@test os.iteration % 3 == 0
os_run = true
false
end
options = Optim.Options(callback = cb, show_every=3)
optimize(d3, initial_x, method, options)
@test os_run == true
end
res = optimize(x->x^2, -5, 5, callback=_->true)
@test res.iterations == 0
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2542 | mutable struct DummyState <: Optim.AbstractOptimizerState
x
x_previous
f_x
f_x_previous
g
end
mutable struct DummyStateZeroth <: Optim.ZerothOrderState
x
x_previous
f_x
f_x_previous
g
end
mutable struct DummyOptions
x_tol
f_tol
g_tol
g_abstol
end
mutable struct DummyMethod <: Optim.AbstractOptimizer end
mutable struct DummyMethodZeroth <: Optim.ZerothOrderOptimizer end
@testset "Convergence assessment" begin
## assess_convergence
# should converge
x0, x1 = [1.], [1.0 - 1e-7]
f0, f1 = 1.0, 1.0 - 1e-7
g = [1e-7]
x_tol = 1e-6
f_tol = 1e-6 # rel tol
g_tol = 1e-6
g_abstol = 1e-6
@test Optim.assess_convergence(x1, x0, f1, f0, g, x_tol, f_tol, g_tol) == (true, true, true, false)
# f_increase
f0, f1 = 1.0, 1.0 + 1e-7
@test Optim.assess_convergence(x1, x0, f1, f0, g, x_tol, f_tol, g_tol) == (true, true, true, true)
# f_increase without convergence
f_tol = 1e-12
@test Optim.assess_convergence(x1, x0, f1, f0, g, x_tol, f_tol, g_tol) == (true, false, true, true)
@test Optim.assess_convergence(x1, x0, f1, f0, g, x_tol, f_tol, g_tol) == (true, false, true, true)
f_tol = 1e-6 # rel tol
dOpt = DummyOptions(x_tol, f_tol, g_tol, g_abstol)
@test Optim.assess_convergence(x1, x0, f1, f0, g, x_tol, f_tol, g_tol) == (true, true, true, true)
f0, f1 = 1.0, 1.0 - 1e-7
dOpt = DummyOptions(x_tol, f_tol, g_tol, g_abstol)
@test Optim.assess_convergence(x1, x0, f1, f0, g, x_tol, f_tol, g_tol) == (true, true, true, false)
## initial_convergence and gradient_convergence_assessment
ds = DummyState(x1, x0, f1, f0, g)
dOpt = DummyOptions(x_tol, f_tol, g_tol, g_abstol)
dm = DummyMethod()
# >= First Order
d = Optim.OnceDifferentiable(x->sum(abs2.(x)),zeros(2))
Optim.gradient!(d,ones(2))
@test Optim.gradient_convergence_assessment(ds,d,dOpt) == false
Optim.gradient!(d,zeros(2))
@test Optim.gradient_convergence_assessment(ds,d,dOpt) == true
@test Optim.initial_convergence(d, ds, dm, ones(2), dOpt) == (false, false)
@test Optim.initial_convergence(d, ds, dm, zeros(2), dOpt) == (true, false)
# Zeroth order methods have no gradient -> returns false by default
ds = DummyStateZeroth(x1, x0, f1, f0, g)
dm = DummyMethodZeroth()
@test Optim.gradient_convergence_assessment(ds,d,dOpt) == false
@test Optim.initial_convergence(d, ds, dm, ones(2), dOpt) == (false, false)
# should check all other methods as well
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2515 | @testset "function counter" begin
prob = MultivariateProblems.UnconstrainedProblems.examples["Rosenbrock"]
let
global fcount = 0
global fcounter
function fcounter(reset::Bool = false)
if reset
fcount = 0
else
fcount += 1
end
fcount
end
global gcount = 0
global gcounter
function gcounter(reset::Bool = false)
if reset
gcount = 0
else
gcount += 1
end
gcount
end
global hcount = 0
global hcounter
function hcounter(reset::Bool = false)
if reset
hcount = 0
else
hcount += 1
end
hcount
end
end
f(x) = begin
fcounter()
MVP.objective(prob)(x)
end
g!(out, x) = begin
gcounter()
MVP.gradient(prob)(out, x)
end
h!(out, x) = begin
hcounter()
MVP.hessian(prob)(out, x)
end
ls = LineSearches.HagerZhang()
for solver in (AcceleratedGradientDescent, BFGS, ConjugateGradient,
GradientDescent, LBFGS, MomentumGradientDescent,
NGMRES, OACCEL)
fcounter(true); gcounter(true)
res = Optim.optimize(f, g!, prob.initial_x,
solver(linesearch = ls))
@test fcount == Optim.f_calls(res)
@test gcount == Optim.g_calls(res)
end
for solver in (Newton(linesearch = ls), NewtonTrustRegion())
fcounter(true); gcounter(true); hcounter(true)
res = Optim.optimize(f,g!, h!, prob.initial_x,
solver)
@test fcount == Optim.f_calls(res)
@test gcount == Optim.g_calls(res)
@test hcount == Optim.h_calls(res)
end
# Need to define fg! and hv! for KrylovTrustRegion
fg!(out,x) = begin
g!(out,x)
f(x)
end
hv!(out, x, v) = begin
n = length(x)
H = Matrix{Float64}(undef, n, n)
h!(H, x)
out .= H * v
end
begin
solver = Optim.KrylovTrustRegion()
fcounter(true); gcounter(true); hcounter(true)
df = Optim.TwiceDifferentiableHV(f, fg!, hv!, prob.initial_x)
res = Optim.optimize(df, prob.initial_x, solver)
@test fcount == Optim.f_calls(res)
@test gcount == Optim.g_calls(res)
@test hcount == Optim.h_calls(res)
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 383 | @testset "default solvers" begin
prob = MVP.UnconstrainedProblems.examples["Powell"]
f = objective(prob)
g! = gradient(prob)
h! = hessian(prob)
@test summary(optimize(f, prob.initial_x)) == summary(NelderMead())
@test summary(optimize(f, g!, prob.initial_x)) == summary(LBFGS())
@test summary(optimize(f, g!, h!, prob.initial_x)) == summary(Newton())
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 606 | @testset "Initial Convergence Handling" begin
f(x) = x[1]^2
function g!(out, x)
out[1] = 2.0 * x[1]
end
function h!(out, x)
out[1,1] = 2.0
end
for Optimizer in (AcceleratedGradientDescent,
GradientDescent, ConjugateGradient, LBFGS, BFGS,
MomentumGradientDescent)
res = optimize(f, g!, [0.], Optimizer())
@test Optim.minimizer(res)[1] ≈ 0.
end
for Optimizer in (Newton, NewtonTrustRegion)
res = optimize(f, g!, h!, [0.], Optimizer())
@test Optim.minimizer(res)[1] ≈ 0.
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 1207 | @testset "maximization wrapper" begin
@testset "univariate" begin
resmax = maximize(x->x^3, -1, 9)
resmin = optimize(x->-x^3, -1, 9)
@test Optim.maximum(resmax) == -Optim.minimum(resmin)
@test resmax.res.minimum == resmin.minimum
for meth in (Brent(), GoldenSection())
resmax = maximize(x->x^3, -1, 9, meth)
resmin = optimize(x->-x^3, -1, 9, meth)
@test Optim.maximum(resmax) == -Optim.minimum(resmin)
@test resmax.res.minimum == resmin.minimum
end
end
@testset "multivariate" begin
resmax = maximize(x->x[1]^3+x[2]^2, [3.0, 0.0])
resmin = optimize(x->-x[1]^3-x[2]^2, [3.0, 0.0])
@test Optim.maximum(resmax) == -Optim.minimum(resmin)
@test resmax.res.minimum == resmin.minimum
for meth in (NelderMead(), BFGS(), LBFGS(), GradientDescent(), Newton(), NewtonTrustRegion(), SimulatedAnnealing())
resmax = maximize(x->x[1]^3+x[2]^2, [3.0, 0.0])
resmin = optimize(x->-x[1]^3-x[2]^2, [3.0, 0.0])
@test Optim.maximum(resmax) == -Optim.minimum(resmin)
@test resmax.res.minimum == resmin.minimum
end
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2931 | @testset "objective types" begin
@testset "autodiff" begin
# Should throw, as :wah is not a proper autodiff choice
@test_throws ErrorException OnceDifferentiable(x->x, rand(10); autodiff=:wah)
for T in (OnceDifferentiable, TwiceDifferentiable)
odad1 = T(x->5.0, rand(1); autodiff = :finite)
odad2 = T(x->5.0, rand(1); autodiff = :forward)
Optim.gradient!(odad1, rand(1))
Optim.gradient!(odad2, rand(1))
# odad3 = T(x->5., rand(1); autodiff = :reverse)
@test Optim.gradient(odad1) == [0.0]
@test Optim.gradient(odad2) == [0.0]
# @test odad3.g == [0.0]
end
for a in (1.0, 5.0)
xa = rand(1)
odad1 = OnceDifferentiable(x->a*x[1], xa; autodiff = :finite)
odad2 = OnceDifferentiable(x->a*x[1], xa; autodiff = :forward)
# odad3 = OnceDifferentiable(x->a*x[1], xa; autodiff = :reverse)
Optim.gradient!(odad1, xa)
Optim.gradient!(odad2, xa)
@test Optim.gradient(odad1) ≈ [a]
@test Optim.gradient(odad2) == [a]
# @test odad3.g == [a]
end
for a in (1.0, 5.0)
xa = rand(1)
odad1 = OnceDifferentiable(x->a*x[1]^2, xa; autodiff = :finite)
odad2 = OnceDifferentiable(x->a*x[1]^2, xa; autodiff = :forward)
# odad3 = OnceDifferentiable(x->a*x[1]^2, xa; autodiff = :reverse)
Optim.gradient!(odad1, xa)
Optim.gradient!(odad2, xa)
@test Optim.gradient(odad1) ≈ 2.0*a*xa
@test Optim.gradient(odad2) == 2.0*a*xa
# @test odad3.g == 2.0*a*xa
end
for dtype in (OnceDifferentiable, TwiceDifferentiable)
for autodiff in (:finite, :forward)
differentiable = dtype(x->sum(x), rand(2); autodiff = autodiff)
Optim.value(differentiable)
Optim.value!(differentiable, rand(2))
Optim.value_gradient!(differentiable, rand(2))
Optim.gradient!(differentiable, rand(2))
dtype == TwiceDifferentiable && Optim.hessian!(differentiable, rand(2))
end
end
end
@testset "value/grad" begin
a = 3.0
x_seed = rand(1)
odad1 = OnceDifferentiable(x->a*x[1]^2, x_seed)
Optim.value_gradient!(odad1, x_seed)
@test Optim.gradient(odad1) ≈ 2 .* a .* (x_seed)
@testset "call counters" begin
@test Optim.f_calls(odad1) == 1
@test Optim.g_calls(odad1) == 1
@test Optim.h_calls(odad1) == 0
Optim.value_gradient!(odad1, x_seed .+ 1.0)
@test Optim.f_calls(odad1) == 2
@test Optim.g_calls(odad1) == 2
@test Optim.h_calls(odad1) == 0
end
@test Optim.gradient(odad1) ≈ 2 .* a .* (x_seed .+ 1.0)
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 1925 | @testset "optimize" begin
eta = 0.9
function f1(x)
(1.0 / 2.0) * (x[1]^2 + eta * x[2]^2)
end
function g1(storage, x)
storage[1] = x[1]
storage[2] = eta * x[2]
end
function h1(storage, x)
storage[1, 1] = 1.0
storage[1, 2] = 0.0
storage[2, 1] = 0.0
storage[2, 2] = eta
end
results = optimize(f1, g1, h1, [127.0, 921.0])
@test Optim.g_converged(results)
@test norm(Optim.minimizer(results) - [0.0, 0.0]) < 0.01
results = optimize(f1, g1, [127.0, 921.0])
@test Optim.g_converged(results)
@test norm(Optim.minimizer(results) - [0.0, 0.0]) < 0.01
results = optimize(f1, [127.0, 921.0])
@test Optim.g_converged(results)
@test norm(Optim.minimizer(results) - [0.0, 0.0]) < 0.01
results = optimize(f1, [127.0, 921.0])
@test Optim.g_converged(results)
@test norm(Optim.minimizer(results) - [0.0, 0.0]) < 0.01
# tests for bfgs_initial_invH
initial_invH = zeros(2,2)
h1(initial_invH, [127.0, 921.0])
initial_invH = Matrix(Diagonal(diag(initial_invH)))
results = optimize(f1, g1, [127.0, 921.0], BFGS(initial_invH = x -> initial_invH), Optim.Options())
@test Optim.g_converged(results)
@test norm(Optim.minimizer(results) - [0.0, 0.0]) < 0.01
# Tests for PR #302
results = optimize(cos, 0, 2pi);
@test norm(Optim.minimizer(results) - pi) < 0.01
results = optimize(cos, 0.0, 2pi);
@test norm(Optim.minimizer(results) - pi) < 0.01
results = optimize(cos, 0, 2pi, Brent());
@test norm(Optim.minimizer(results) - pi) < 0.01
results = optimize(cos, 0.0, 2pi, Brent());
@test norm(Optim.minimizer(results) - pi) < 0.01
results = optimize(cos, 0, 2pi, method = Brent())
@test norm(Optim.minimizer(results) - pi) < 0.01
results = optimize(cos, 0.0, 2pi, method = Brent())
@test norm(Optim.minimizer(results) - pi) < 0.01
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 1292 | @testset "Type Stability" begin
function rosenbrock(x::Vector{T}) where T
o = one(T)
c = convert(T,100)
return (o - x[1])^2 + c * (x[2] - x[1]^2)^2
end
function rosenbrock_gradient!(storage::Vector{T}, x::Vector{T}) where T
o = one(T)
c = convert(T,100)
storage[1] = (-2*o) * (o - x[1]) - (4*c) * (x[2] - x[1]^2) * x[1]
storage[2] = (2*c) * (x[2] - x[1]^2)
end
function rosenbrock_hessian!(storage::Matrix{T}, x::Vector{T}) where T
o = one(T)
c = convert(T,100)
f = 4*c
storage[1, 1] = (2*o) - f * x[2] + 3 * f * x[1]^2
storage[1, 2] = -f * x[1]
storage[2, 1] = -f * x[1]
storage[2, 2] = 2*c
end
for method in (NelderMead(),
SimulatedAnnealing(),
BFGS(),
ConjugateGradient(),
GradientDescent(),
MomentumGradientDescent(),
AcceleratedGradientDescent(),
LBFGS(),
Newton())
for T in (Float32, Float64)
result = optimize(rosenbrock, rosenbrock_gradient!, rosenbrock_hessian!, fill(zero(T), 2), method)
@test eltype(Optim.minimizer(result)) == T
end
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2042 | using Compat
import Compat.String
@testset "Types" begin
solver = NelderMead()
T = typeof(solver)
trace = OptimizationTrace{Float64, T}()
push!(trace,OptimizationState{Float64, T}(1,1.0,1.0,Dict()))
push!(trace,OptimizationState{Float64, T}(2,1.0,1.0,Dict()))
@test length(trace) == 2
@test trace[end].iteration == 2
prob = MultivariateProblems.UnconstrainedProblems.examples["Rosenbrock"]
f_prob = MVP.objective(prob)
for res in (
Optim.optimize(f_prob, prob.initial_x, NelderMead()),
Optim.optimize(f_prob, prob.initial_x, SimulatedAnnealing()),
Optim.optimize(
MVP.objective(prob),
MVP.gradient(prob),
prob.initial_x,
LBFGS()
)
)
@test typeof(f_prob(prob.initial_x)) == typeof(Optim.minimum(res))
@test eltype(prob.initial_x) == eltype(Optim.minimizer(res))
io = IOBuffer()
show(io, res)
s = String(take!(io))
line_shift = res.method isa Union{SimulatedAnnealing,LBFGS} ? 5 : 1
lines = split(s, '\n')
@test lines[4] |> contains("Final objective value")
@test lines[7] |> contains("Algorithm")
@test lines[9] |> contains("Convergence measures")
@test lines[13 + line_shift] |> contains("Iterations")
@test lines[14 + line_shift] |> contains("f(x) calls")
if res.method isa NelderMead
@test lines[10] |> contains("β(Ξ£(yα΅’-yΜ)Β²)/n β€ 1.0e-08")
elseif res.method isa Union{SimulatedAnnealing,LBFGS}
@test lines[10] |> contains("|x - x'|")
@test lines[11] |> contains("|x - x'|/|x'|")
@test lines[12] |> contains("|f(x) - f(x')|")
@test lines[13] |> contains("|f(x) - f(x')|/|f(x')|")
@test lines[14] |> contains("|g(x)|")
end
end
io = IOBuffer()
res = show(io, MIME"text/plain"(), Optim.Options(x_abstol = 10.0))
@test String(take!(io)) |> contains("x_abstol = 10.0")
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2397 | # TODO: Add HigherPrecision.jl tests here?
# TODO: Test Interior Point algorithm as well
@testset "Arbitrary Precision" begin
prob = MVP.UnconstrainedProblems.examples["Rosenbrock"]
f = MVP.objective(prob)
g! = MVP.gradient(prob)
h! = MVP.hessian(prob)
@testset "BigFloat" begin
x0 = big.(prob.initial_x)
res = optimize(f, x0)
debug_printing && @show res
@test Optim.converged(res) == true
@test Optim.minimum(res) < 1e-8
@test Optim.minimizer(res) ≈ [1.0, 1.0] atol=1e-4 rtol=0
res = optimize(f, g!, x0)
debug_printing && @show res
@test Optim.converged(res) == true
@test Optim.minimum(res) < 1e-16
@test Optim.minimizer(res) ≈ [1.0, 1.0] atol=1e-10 rtol=0
res = optimize(f, g!, h!, x0)
debug_printing && @show res
@test Optim.converged(res) == true
@test Optim.minimum(res) < 1e-16
@test Optim.minimizer(res) ≈ [1.0, 1.0] atol=1e-10 rtol=0
lower = big.([-Inf, -Inf])
upper = big.([0.5, 1.5])
res = optimize(f, g!, lower, upper, x0, Fminbox(), Optim.Options(outer_g_abstol=sqrt(eps(big(1.0))), g_abstol=sqrt(eps(big(1.0)))))
debug_printing && @show res
@test Optim.converged(res) == true
@test Optim.minimum(res) ≈ 0.25 atol=1e-10 rtol=0
@test Optim.minimizer(res) ≈ [0.5, 0.25] atol=1e-10 rtol=0
res = optimize(f, lower, upper, x0, Fminbox(), Optim.Options(outer_g_abstol=sqrt(eps(big(1.0))), g_abstol=sqrt(eps(big(1.0)))))
debug_printing && @show res
@test Optim.converged(res) == true
@test Optim.minimum(res) ≈ 0.25 atol=1e-10 rtol=0
@test Optim.minimizer(res) ≈ [0.5, 0.25] atol=1e-10 rtol=0
end
end
@testset "Float32 doesn't work in Fminbox" begin
f(x::AbstractVector{T}) where {T} = (T(1.0) - x[1])^2 + T(100.0) * (x[2] - x[1]^2)^2
function g!(storage::AbstractVector{T}, x::AbstractVector{T}) where {T}
storage[1] = T(-2.0) * (T(1.0) - x[1]) - T(400.0) * (x[2] - x[1]^2) * x[1]
storage[2] = T(200.0) * (x[2] - x[1]^2)
end
for T = (Float32, Float64, BigFloat)
lower = T[1.25, -2.1]
upper = T[Inf, Inf]
initial_x = T[2.0, 2.0]
od = OnceDifferentiable(f, g!, initial_x)
results = optimize(od, lower, upper, initial_x, Fminbox(GradientDescent()))
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 4193 | using StableRNGs
@testset "normalized array" begin
rng = StableRNG(1323)
grdt!(buf, _) = (buf .= 0; buf[1] = 1; buf)
result = optimize(x->x[1], grdt!, randn(rng,2,2), ConjugateGradient(manifold=Sphere()))
@test result.minimizer ≈ [-1 0; 0 0]
@test result.minimum ≈ -1
end
@testset "input types" begin
f(X) = (10 - X[1])^2 + (0 - X[2])^2 + (0 - X[3])^2 + (5 - X[4])^2
function g!(storage, x)
storage[1] = -20 + 2 * x[1]
storage[2] = 2 * x[2]
storage[3] = 2 * x[3]
storage[4] = -10 + 2 * x[4]
return
end
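# f is a separable quadratic with unique minimizer (10, 0, 0, 5); the
# accuracy check below is skipped for the derivative-free/stochastic solvers.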
@testset "vector" begin
for m in (AcceleratedGradientDescent, ConjugateGradient, BFGS, LBFGS, NelderMead, GradientDescent, MomentumGradientDescent, NelderMead, ParticleSwarm, SimulatedAnnealing, NGMRES, OACCEL)
debug_printing && printstyled("Solver: "*string(m); color = :green)
res = optimize(f, g!, [1., 0., 1., 0.], m())
@test typeof(Optim.minimizer(res)) <: Vector
if !(m in (NelderMead, SimulatedAnnealing, ParticleSwarm))
@test norm(Optim.minimizer(res) - [10.0, 0.0, 0.0, 5.0]) < 10e-8
end
end
end
@testset "matrix" begin
for m in (AcceleratedGradientDescent, ConjugateGradient, BFGS, LBFGS, ConjugateGradient, GradientDescent, MomentumGradientDescent, ParticleSwarm, SimulatedAnnealing, NGMRES, OACCEL)
res = optimize(f, g!, Matrix{Float64}(I, 2, 2), m())
@test typeof(Optim.minimizer(res)) <: Matrix
if !(m in (SimulatedAnnealing, ParticleSwarm))
@test norm(Optim.minimizer(res) - [10.0 0.0; 0.0 5.0]) < 10e-8
end
end
end
@testset "tensor" begin
eye3 = zeros(2,2,1)
eye3[:,:,1] = Matrix{Float64}(I, 2, 2)
for m in (AcceleratedGradientDescent, ConjugateGradient, BFGS, LBFGS, ConjugateGradient, GradientDescent, MomentumGradientDescent, ParticleSwarm, SimulatedAnnealing, NGMRES, OACCEL)
res = optimize(f, g!, eye3, m())
_minimizer = Optim.minimizer(res)
@test typeof(_minimizer) <: Array{Float64, 3}
@test size(_minimizer) == (2,2,1)
if !(m in (SimulatedAnnealing, ParticleSwarm))
@test norm(_minimizer - [10.0 0.0; 0.0 5.0]) < 10e-8
end
end
end
end
using RecursiveArrayTools
@testset "arraypartition input" begin
rng = StableRNG(133)
function polynomial(x)
return (10.0 - x[1])^2 + (7.0 - x[2])^4 + (108.0 - x[3])^4
end
function polynomial_gradient!(storage, x)
storage[1] = -2.0 * (10.0 - x[1])
storage[2] = -4.0 * (7.0 - x[2])^3
storage[3] = -4.0 * (108.0 - x[3])^3
end
function polynomial_hessian!(storage, x)
storage[1, 1] = 2.0
storage[1, 2] = 0.0
storage[1, 3] = 0.0
storage[2, 1] = 0.0
storage[2, 2] = 12.0 * (7.0 - x[2])^2
storage[2, 3] = 0.0
storage[3, 1] = 0.0
storage[3, 2] = 0.0
storage[3, 3] = 12.0 * (108.0 - x[3])^2
end
ap = ArrayPartition(rand(rng, 1), rand(rng, 2))
optimize(polynomial, polynomial_gradient!, polynomial_hessian!, ap, NelderMead())
optimize(polynomial, polynomial_gradient!, polynomial_hessian!, ap, ParticleSwarm())
optimize(polynomial, polynomial_gradient!, polynomial_hessian!, ap, SimulatedAnnealing())
optimize(polynomial, polynomial_gradient!, polynomial_hessian!, ap, GradientDescent())
optimize(polynomial, polynomial_gradient!, polynomial_hessian!, ap, AcceleratedGradientDescent())
optimize(polynomial, polynomial_gradient!, polynomial_hessian!, ap, MomentumGradientDescent())
optimize(polynomial, polynomial_gradient!, polynomial_hessian!, ap, ConjugateGradient())
optimize(polynomial, polynomial_gradient!, polynomial_hessian!, ap, BFGS())
optimize(polynomial, polynomial_gradient!, polynomial_hessian!, ap, LBFGS())
optimize(polynomial, polynomial_gradient!, polynomial_hessian!, ap, Newton())
# optimize(polynomial, polynomial_gradient!, polynomial_hessian!, ap, NewtonTrustRegion())
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 3232 | @testset "Complex numbers" begin
Random.seed!(0)
# Test case: minimize quadratic plus quartic
# μ is the strength of the quartic. μ = 0 is just a quadratic problem
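# The (Wirtinger) gradient of the smooth part is A*x - b, and the quartic
# penalty μ*Σ|xᵢ|⁴ contributes 4μ|xᵢ|²xᵢ, matching gcomplex below.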
n = 4
A = randn(n,n) + im*randn(n,n)
A = A'A + I
b = randn(n) + im*randn(n)
μ = 1.0
fcomplex(x) = real(dot(x,A*x)/2 - dot(b,x)) + μ*sum(abs.(x).^4)
gcomplex(x) = A*x-b + 4μ*(abs.(x).^2).*x
gcomplex!(stor,x) = copyto!(stor,gcomplex(x))
x0 = randn(n)+im*randn(n)
xref = Optim.minimizer(Optim.optimize(fcomplex, gcomplex!, x0, Optim.LBFGS()))
@testset "Finite difference setup" begin
oda1 = OnceDifferentiable(fcomplex, x0)
fx, gx = NLSolversBase.value_gradient!(oda1, x0)
@test fx == fcomplex(x0)
@test gcomplex(x0) ≈ NLSolversBase.gradient(oda1)
end
@testset "Zeroth and second order methods" begin
for method in (Optim.NelderMead,Optim.ParticleSwarm,Optim.Newton)
@test_throws Any Optim.optimize(fcomplex, gcomplex!, x0, method())
end
# not supposed to converge, but it should at least go through without errors
res = Optim.optimize(fcomplex, gcomplex!, x0, Optim.SimulatedAnnealing())
end
@testset "First order methods" begin
options = Optim.Options(allow_f_increases=true)
# TODO: AcceleratedGradientDescent fails to converge?
for method in (Optim.GradientDescent(), Optim.ConjugateGradient(), Optim.LBFGS(), Optim.BFGS(),
Optim.NGMRES(), Optim.OACCEL(),Optim.MomentumGradientDescent(mu=0.1))
debug_printing && printstyled("Solver: $(summary(method))\n", color=:green)
res = Optim.optimize(fcomplex, gcomplex!, x0, method, options)
debug_printing && printstyled("Iter\tf-calls\tg-calls\n", color=:green)
debug_printing && printstyled("$(Optim.iterations(res))\t$(Optim.f_calls(res))\t$(Optim.g_calls(res))\n", color=:red)
if !Optim.converged(res)
@warn("$(summary(method)) failed.")
display(res)
println("########################")
end
ressum = summary(res) # Just check that no errors arise when doing display(res)
@test typeof(fcomplex(x0)) == typeof(Optim.minimum(res))
@test eltype(x0) == eltype(Optim.minimizer(res))
@test Optim.converged(res)
@test Optim.minimizer(res) ≈ xref rtol=1e-4
res = Optim.optimize(fcomplex, x0, method, options)
@test Optim.converged(res)
@test Optim.minimizer(res) ≈ xref rtol=1e-4
# # To compare with the equivalent real solvers
# to_cplx(x) = x[1:n] + im*x[n+1:2n]
# from_cplx(x) = [real(x);imag(x)]
# freal(x) = fcomplex(to_cplx(x))
# greal!(stor,x) = copyto!(stor, from_cplx(gcomplex(to_cplx(x))))
# opt = Optim.Options(allow_f_increases=true,show_trace=true)
# println("$(summary(method)) cplx")
# res_cplx = Optim.optimize(fcomplex,gcomplex!,x0,method,opt)
# println("$(summary(method)) real")
# res_real = Optim.optimize(freal,greal!,from_cplx(x0),method,opt)
end
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2011 | import LineSearches
@testset "Extrapolation" begin
methods = [LBFGS(),
ConjugateGradient(),
LBFGS(alphaguess = LineSearches.InitialQuadratic(),
linesearch = LineSearches.BackTracking(order=2))]
msgs = ["LBFGS Default Options: ",
"CG Default Options: ",
"LBFGS + Backtracking + Extrapolation: "]
if debug_printing
println("--------------------")
println("Rosenbrock Example: ")
println("--------------------")
end
rosenbrock(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
for (method, msg) in zip(methods, msgs)
results = Optim.optimize(rosenbrock, zeros(2), method)
debug_printing && println(msg, "g_calls = ", results.g_calls, ", f_calls = ", results.f_calls)
end
if debug_printing
println("--------------------------------------")
println("p-Laplacian Example (preconditioned): ")
println("--------------------------------------")
end
plap(U; n=length(U)) = (n-1) * sum((0.1 .+ diff(U).^2).^2) - sum(U) / (n-1)
plap1(U; n=length(U), dU = diff(U), dW = 4 .* (0.1 .+ dU.^2) .* dU) =
(n - 1) .* ([0.0; dW] .- [dW; 0.0]) .- ones(n) / (n - 1)
precond(x::Vector) = precond(length(x))
precond(n::Number) = Optim.InverseDiagonal(diag(spdiagm(-1 => -ones(n-1), 0 => 2*ones(n), 1 => -ones(n-1)) * (n+1)))
f(X) = plap([0;X;0])
g!(g, X) = copyto!(g, (plap1([0;X;0]))[2:end-1])
N = 100
initial_x = zeros(N)
P = precond(initial_x)
methods = [LBFGS(P=P),
ConjugateGradient(P=P),
LBFGS(alphaguess = LineSearches.InitialQuadratic(),
linesearch = LineSearches.BackTracking(order=2), P=P)]
for (method, msg) in zip(methods, msgs)
results = Optim.optimize(f, g!, copy(initial_x), method)
debug_printing && println(msg, "g_calls = ", Optim.g_calls(results), ", f_calls = ", Optim.f_calls(results))
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 894 | @testset "f increase behaviour" begin
f(x) = 2*x[1]^2
g!(G, x) = copyto!(G, 4*x[1])
# Returned "minimizers" from one iteration of Gradient Descent
minimizers = [0.3, # allow_f_increases = true, alpha = 0.1
-1.5, # allow_f_increases = true, alpha = 1.0
0.3, # allow_f_increases = false, alpha = 0.1
0.5] # allow_f_increases = false, alpha = 1.0
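# With g(x) = 4x and x0 = 0.5, one Static step gives x1 = 0.5 - 2*alpha:
# alpha = 0.1 yields 0.3, while alpha = 1.0 overshoots to -1.5 (f increases),
# which is kept only when allow_f_increases = true.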
k = 0
for allow in [true, false]
for alpha in [0.1, 1.0]
k += 1
method = GradientDescent(
alphaguess = LineSearches.InitialStatic(alpha=alpha),
linesearch = LineSearches.Static(),
)
opts = Optim.Options(iterations=1,allow_f_increases=allow)
res = optimize(f, g!, [0.5], method, opts)
@test minimizers[k] == Optim.minimizer(res)[1]
end
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2096 | @testset "Finite difference timing" begin
fd_input_tuple(method::Optim.FirstOrderOptimizer, prob) = ((MVP.objective(prob),),)
fd_input_tuple(method::Optim.SecondOrderOptimizer, prob) = ((MVP.objective(prob),), (MVP.objective(prob), MVP.gradient(prob)))
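# First-order methods are run from f alone (finite-difference gradient);
# second-order methods are additionally run with (f, g!), so the Hessian is
# approximated by finite differences in both cases.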
function run_optim_fd_tests(method;
problems = ("Extended Rosenbrock", "Powell",
"Paraboloid Diagonal", "Penalty Function I",),
show_name = true, show_trace = false,
show_time = false, show_res = false)
# Loop over unconstrained problems
for name in problems
prob = MVP.UnconstrainedProblems.examples[name]
show_name && printstyled("Problem: ", name, "\n", color=:green)
options = Optim.Options(allow_f_increases=true, show_trace = show_trace)
for (i, input) in enumerate(fd_input_tuple(method, prob))
# Loop over appropriate input combinations of f, g!, and h!
results = Optim.optimize(input..., prob.initial_x, method, options)
debug_printing && printstyled("f-calls: $(Optim.f_calls(results))\n", color=:red)
show_res && display(results)
show_time && @time Optim.optimize(input..., prob.initial_x, method, options)
@test Optim.converged(results)
@test Optim.minimum(results) < prob.minimum + sqrt(eps(typeof(prob.minimum)))
@test norm(Optim.minimizer(results) - prob.solutions) < 1e-2
end
end
end
@testset "Timing with LBFGS" begin
debug_printing && printstyled("#####################\nSolver: L-BFGS\n", color=:blue)
run_optim_fd_tests(LBFGS(), show_name=debug_printing, show_time = debug_printing)
end
@testset "Timing with Newton" begin
debug_printing && printstyled("#####################\nSolver: Newton\n", color=:blue)
run_optim_fd_tests(Newton(), show_name=debug_printing, show_time = debug_printing)
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 640 | @testset "Line search errors" begin
hz = LineSearches.HagerZhang(; delta = 0.1, sigma = 0.9, alphamax = Inf,
rho = 5.0, epsilon = 1e-6, gamma = 0.66, linesearchmax = 2)
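# linesearchmax = 2 makes HagerZhang give up almost immediately, so every
# solver below should report a failed line search via ls_success == false.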
for optimizer in (ConjugateGradient, GradientDescent, LBFGS, BFGS, Newton, AcceleratedGradientDescent, MomentumGradientDescent)
debug_printing && println("Testing $(string(optimizer))")
prob = MultivariateProblems.UnconstrainedProblems.examples["Exponential"]
@test optimize(MVP.objective(prob), prob.initial_x, optimizer(alphaguess = LineSearches.InitialPrevious(), linesearch = hz)).ls_success == false
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2894 | using StableRNGs
@testset "Manifolds" begin
rng = StableRNG(213)
# Test case: find eigenbasis for first two eigenvalues of a symmetric matrix by minimizing the Rayleigh quotient under orthogonality constraints
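# Objective: f(x) = Re⟨x, A*x⟩/2 restricted to the Stiefel manifold St(n, m);
# since A is real and diagonal, its Euclidean gradient is simply A*x.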
n = 4
m = 2
A = Diagonal(range(1, stop=2, length=n))
fmanif(x) = real(dot(x,A*x)/2)
gmanif(x) = A*x
gmanif!(stor,x) = copyto!(stor,gmanif(x))
# A[2,2] /= 10 #optional: reduce the gap to make the problem artificially harder
x0 = randn(rng,n,m)+im*randn(rng,n,m)
@testset "Stiefel, retraction = $retraction" for retraction in [:SVD,:CholQR]
manif = Optim.Stiefel(retraction)
# AcceleratedGradientDescent should be compatible also, but I haven't been able to make it converge
for ls in (Optim.BackTracking,Optim.HagerZhang,Optim.StrongWolfe,Optim.MoreThuente)
for method in (Optim.GradientDescent, Optim.ConjugateGradient, Optim.LBFGS, Optim.BFGS,
Optim.NGMRES, Optim.OACCEL)
debug_printing && printstyled("Solver: $(summary(method())), linesearch: $(summary(ls()))\n", color=:green)
res = Optim.optimize(fmanif, gmanif!, x0, method(manifold=manif,linesearch=ls()), Optim.Options(allow_f_increases=true,g_tol=1e-6))
debug_printing && printstyled("Iter\tf-calls\tg-calls\n", color=:green)
debug_printing && printstyled("$(Optim.iterations(res))\t$(Optim.f_calls(res))\t$(Optim.g_calls(res))\n", color=:red)
@test Optim.converged(res)
end
end
res = Optim.optimize(fmanif, gmanif!, x0, Optim.MomentumGradientDescent(mu=0.0, manifold=manif))
@test Optim.converged(res)
end
@testset "Power and Product" begin
# Power
@views fprod(x) = fmanif(x[:,1]) + fmanif(x[:,2])
@views gprod!(stor,x) = (gmanif!(stor[:, 1],x[:, 1]);gmanif!(stor[:, 2],x[:, 2]);stor)
m1 = Optim.PowerManifold(Optim.Sphere(), (n,), (2,))
rng = StableRNG(0)
x0 = randn(rng, n, 2) + im*randn(rng, n, 2)
res = Optim.optimize(fprod, gprod!, x0, Optim.ConjugateGradient(manifold=m1))
@test Optim.converged(res)
minpow = Optim.minimizer(res)
# Product
@views fprod(x) = fmanif(x[1:n]) + fmanif(x[n+1:2n])
@views gprod!(stor,x) = (gmanif!(stor[1:n],x[1:n]);gmanif!(stor[n+1:2n],x[n+1:2n]);stor)
m2 = Optim.ProductManifold(Optim.Sphere(), Optim.Sphere(), (n,), (n,))
rng = StableRNG(0)
x0 = randn(rng,2n) + im*randn(rng,2n)
res = Optim.optimize(fprod, gprod!, x0, Optim.ConjugateGradient(manifold=m2))
@test Optim.converged(res)
minprod = Optim.minimizer(res)
# results should be exactly equal: same initial guess, same sequence of operations
@test minpow[:,1] == minprod[1:n]
@test minpow[:,2] == minprod[n+1:2n]
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 526 | import Measurements
@testset "Measurements+Optim" begin
# example problem in #823
f(x)=(1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
xmes=zeros(Measurements.Measurement{Float64},2)
xfloat = zeros(2)
resmes = optimize(f,xmes)
resfloat = optimize(f,xfloat)
#given an initial value, they should give the exact same answer
@test all(Optim.minimizer(resmes) .|> Measurements.value .== Optim.minimizer(resfloat))
@test Optim.minimum(resmes) .|> Measurements.value .== Optim.minimum(resfloat)
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2281 | import LinearAlgebra: qr, ldiv!
# this implements the 1D p-laplacian (p = 4)
# F(u) = ∑_{i=1}^{N} h W(u_i') - ∑_{i=1}^{N-1} h u_i
# where u_i' = (u_i - u_{i-1})/h
# plap: implements the functional without boundary condition
# preconditioner is a discrete laplacian, which defines a metric
# equivalent (in the limit h → 0) to that induced by the hessian, but
# does not approximate the hessian explicitly.
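# In plap below, W(t) = (0.1 + t^2)^2 is applied to the raw differences
# diff(U), with the factor (n-1) standing in for the mesh factor 1/h.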
@testset "Preconditioning" begin
plap(U; n=length(U)) = (n-1) * sum((0.1 .+ diff(U).^2).^2) - sum(U) / (n-1)
plap1(U; n=length(U), dU = diff(U), dW = 4 .* (0.1 .+ dU.^2) .* dU) =
(n - 1) .* ([0.0; dW] .- [dW; 0.0]) .- ones(n) / (n-1)
precond(x::Vector) = precond(length(x))
precond(n::Number) = Optim.InverseDiagonal(diag(spdiagm(-1 => -ones(n-1), 0 => 2*ones(n), 1 => -ones(n-1)) * (n+1)))
f(X) = plap([0;X;0])
g!(G, X) = copyto!(G, (plap1([0;X;0]))[2:end-1])
GRTOL = 1e-6
debug_printing && println("Test a basic preconditioning example")
for N in (10, 50, 250)
debug_printing && println("N = ", N)
initial_x = zeros(N)
Plap = precond(initial_x)
ID = nothing
for optimizer in (GradientDescent, ConjugateGradient, LBFGS)
for (P, wwo) in zip((ID, Plap), (" WITHOUT", " WITH"))
results = Optim.optimize(f, g!, copy(initial_x),
optimizer(P = P),
Optim.Options(g_tol = GRTOL, allow_f_increases = true, iterations=250000))
debug_printing && println(optimizer, wwo,
" preconditioning : g_calls = ", Optim.g_calls(results),
", f_calls = ", Optim.f_calls(results))
if (optimizer == GradientDescent) && (N > 15) && (P == ID)
debug_printing && println(" (gradient descent is not expected to converge)")
else
@test Optim.converged(results)
end
end
end
end
@testset "no β οΈ #900" begin
x, y, A = randn(10), randn(10), qr(randn(10,10)+4I)
ldiv!(x, A, y)
@test_throws MethodError ldiv!(x, nothing, y)
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 472 | @testset "successive_f_tol" begin
alg = GradientDescent(
alphaguess = LineSearches.InitialStatic(),
linesearch = LineSearches.Static(),
)
opt = Optim.Options(
iterations = 10,
successive_f_tol = 5,
f_tol = 3,
g_tol = -1,
)
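# f_tol = 3 is loose enough that the relative-f test fires on every
# iteration (g_tol = -1 can never trigger); termination then needs
# successive_f_tol + 1 consecutive hits, which fixes the iteration count.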
result = Optim.optimize(
sum,
(y, _) -> fill!(y, 1),
[0.0, 0.0],
alg,
opt,
)
@test result.iterations == opt.successive_f_tol + 1
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 785 | @testset "inplace keyword" begin
rosenbrock = MultivariateProblems.UnconstrainedProblems.examples["Rosenbrock"]
f = MVP.objective(rosenbrock)
g! = MVP.gradient(rosenbrock)
h! = MVP.hessian(rosenbrock)
function g(x)
G = similar(x)
g!(G, x)
G
end
function h(x)
n = length(x)
H = similar(x, n, n)
h!(H, x)
H
end
initial_x = rosenbrock.initial_x
inp_res = optimize(f, g, h, initial_x; inplace = false)
op_res = optimize(f, g!, h!, initial_x; inplace = true)
for op in (Optim.minimizer, Optim.minimum, Optim.f_calls,
Optim.g_calls, Optim.h_calls, Optim.iterations, Optim.converged)
@test all(op(inp_res) .=== op(op_res))
end
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 5197 | @testset "interface" begin
problem = MultivariateProblems.UnconstrainedProblems.examples["Exponential"]
f = MVP.objective(problem)
g! = MVP.gradient(problem)
h! = MVP.hessian(problem)
nd = NonDifferentiable(f, fill!(similar(problem.initial_x), 0))
od = OnceDifferentiable(f, g!, fill!(similar(problem.initial_x), 0))
td = TwiceDifferentiable(f, g!, h!, fill!(similar(problem.initial_x), 0))
tdref = TwiceDifferentiable(f, g!, h!, fill!(similar(problem.initial_x), 0))
ref = optimize(tdref, problem.initial_x, Newton(), Optim.Options())
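# ref is the Newton solution; every interface variant below must reach the
# same minimum to within 1e-6.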
# test AbstractObjective interface
for obj in (nd, od, td)
res = []
push!(res, optimize(obj, problem.initial_x))
push!(res, optimize(obj, problem.initial_x, Optim.Options()))
for r in res
@test norm(Optim.minimum(ref)-Optim.minimum(r)) < 1e-6
end
end
ad_res = optimize(od, problem.initial_x, Newton())
@test norm(Optim.minimum(ref)-Optim.minimum(ad_res)) < 1e-6
ad_res2 = optimize(od, problem.initial_x, Newton())
@test norm(Optim.minimum(ref)-Optim.minimum(ad_res2)) < 1e-6
# test f, g!, h! interface
for tup in ((f,), (f, g!), (f, g!, h!))
fgh_res = []
push!(fgh_res, optimize(tup..., problem.initial_x))
for m in (NelderMead(), LBFGS(), Newton())
push!(fgh_res, optimize(tup..., problem.initial_x; f_tol = 1e-8))
push!(fgh_res, optimize(tup..., problem.initial_x, m))
push!(fgh_res, optimize(tup..., problem.initial_x, m, Optim.Options()))
end
for r in fgh_res
@test norm(Optim.minimum(ref)-Optim.minimum(r)) < 1e-6
end
end
# simple tests for https://github.com/JuliaNLSolvers/Optim.jl/issues/805
@test AcceleratedGradientDescent(alphaguess=1.0).alphaguess! isa Optim.LineSearches.InitialStatic
@test BFGS(alphaguess=1.0).alphaguess! isa Optim.LineSearches.InitialStatic
@test ConjugateGradient(alphaguess=1.0).alphaguess! isa Optim.LineSearches.InitialStatic
@test GradientDescent(alphaguess=1.0).alphaguess! isa Optim.LineSearches.InitialStatic
@test MomentumGradientDescent(alphaguess=1.0).alphaguess! isa Optim.LineSearches.InitialStatic
@test Newton(alphaguess=1.0).alphaguess! isa Optim.LineSearches.InitialStatic
optimize(od, problem.initial_x, AcceleratedGradientDescent(alphaguess=1.0))
optimize(od, problem.initial_x, BFGS(alphaguess=1.0))
optimize(od, problem.initial_x, ConjugateGradient(alphaguess=1.0))
optimize(od, problem.initial_x, GradientDescent(alphaguess=1.0))
optimize(od, problem.initial_x, MomentumGradientDescent(alphaguess=1.0))
optimize(td, problem.initial_x, Newton(alphaguess=1.0))
end
@testset "only_fg!, only_fgh!" begin
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
function g!(G, x)
G[1] = -2.0 * (1.0 - x[1]) - 400.0 * (x[2] - x[1]^2) * x[1]
G[2] = 200.0 * (x[2] - x[1]^2)
G
end
function h!(H, x)
H[1, 1] = 2.0 - 400.0 * x[2] + 1200.0 * x[1]^2
H[1, 2] = -400.0 * x[1]
H[2, 1] = -400.0 * x[1]
H[2, 2] = 200.0
H
end
function fg!(F,G,x)
G === nothing || g!(G,x)
F === nothing || return f(x)
nothing
end
function fgh!(F,G,H,x)
G === nothing || g!(G,x)
H === nothing || h!(H,x)
F === nothing || return f(x)
nothing
end
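# only_fg!/only_fgh! fuse objective and derivative evaluations: the solver
# passes nothing for outputs it does not need, and the callback skips them.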
result_fg! = Optim.optimize(Optim.only_fg!(fg!), [0., 0.], Optim.LBFGS()) # works fine
@test result_fg!.minimizer ≈ [1,1]
result_fgh! = Optim.optimize(Optim.only_fgh!(fgh!), [0., 0.], Optim.Newton())
@test result_fgh!.minimizer ≈ [1,1]
end
@testset "#816" begin
w = rand(2)
f(x) = sum(x.^2)
g!(G, x) = @. G = 2x
g(x) = 2x
h!(H, x) = @. H = [2.0 0.0; 0.0 2.0]
hv!(Hv, x) = @. Hv = [2.0, 2.0] .* x
_hv!(Hv, x, v) = @. Hv = [2.0, 2.0] .* x
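# With no method argument, optimize picks a default based on which
# callbacks are supplied (NelderMead, LBFGS, Newton, or KrylovTrustRegion),
# as the res.method checks below verify.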
res = Optim.optimize(f, w)
@test res.method isa NelderMead
res = Optim.optimize(f, g!, w)
@test res.method isa LBFGS
function fg!(_, G, x)
isnothing(G) || g!(G, x)
return f(x)
end
function fg(x)
return f(x), g(x)
end
res = Optim.optimize(Optim.only_fg!(fg!), w)
@test res.method isa LBFGS
res = Optim.optimize(Optim.only_fg(fg), w)
@test res.method isa LBFGS
res = Optim.optimize(Optim.only_g_and_fg(g, fg), w)
@test res.method isa LBFGS
function fgh!(_, G, H, x)
isnothing(G) || g!(G, x)
isnothing(H) || h!(H, x)
return f(x)
end
res = Optim.optimize(Optim.only_fgh!(fgh!), w)
@test res.method isa Newton
res = Optim.optimize(Optim.only_fgh!(fgh!), w)
@test res.method isa Newton
function fghv!(_, G, Hv, x, v)
isnothing(G) || g!(G, x)
isnothing(Hv) || hv!(Hv, v)
return f(x)
end
res = Optim.optimize(Optim.only_fghv!(fghv!), w)
@test res.method isa Optim.KrylovTrustRegion
res = Optim.optimize(Optim.only_fg_and_hv!(fg!, _hv!), w)
@test res.method isa Optim.KrylovTrustRegion
end
@testset "issue 1041 and 1038" begin
g(x) = -exp(-(x-pi)^2)
@test_nowarn optimize(x->g(x[1]),[0.],method = AcceleratedGradientDescent())
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |
|
[
"MIT"
] | 1.9.4 | d9b79c4eed437421ac4285148fcadf42e0700e89 | code | 2793 | @testset "optimize" begin
eta = 0.9
function f1(x)
(1.0 / 2.0) * (x[1]^2 + eta * x[2]^2)
end
function g1(storage, x)
storage[1] = x[1]
storage[2] = eta * x[2]
end
function h1(storage, x)
storage[1, 1] = 1.0
storage[1, 2] = 0.0
storage[2, 1] = 0.0
storage[2, 2] = eta
end
results = optimize(f1, g1, h1, [127.0, 921.0])
@test Optim.g_converged(results)
@test norm(Optim.minimizer(results) - [0.0, 0.0]) < 0.01
results = optimize(f1, g1, [127.0, 921.0])
@test Optim.g_converged(results)
@test norm(Optim.minimizer(results) - [0.0, 0.0]) < 0.01
results = optimize(f1, [127.0, 921.0])
@test Optim.g_converged(results)
@test norm(Optim.minimizer(results) - [0.0, 0.0]) < 0.01
results = optimize(f1, [127.0, 921.0])
@test Optim.g_converged(results)
@test norm(Optim.minimizer(results) - [0.0, 0.0]) < 0.01
# tests for bfgs_initial_invH
initial_invH = zeros(2,2)
h1(initial_invH, [127.0, 921.0])
initial_invH = Matrix(Diagonal(diag(initial_invH)))
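# Seed BFGS with a fixed diagonal matrix via a closure; note the matrix is
# the diagonal of the Hessian at the starting point, not its inverse.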
results = optimize(f1, g1, [127.0, 921.0], BFGS(initial_invH = x -> initial_invH), Optim.Options())
@test Optim.g_converged(results)
@test norm(Optim.minimizer(results) - [0.0, 0.0]) < 0.01
# test timeout
function f2(x)
sleep(0.1)
(1.0 / 2.0) * (x[1]^2 + eta * x[2]^2)
end
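# time_limit = 0 forces termination during the first iteration (each f-call
# sleeps), so the run must stop unconverged with time_run > time_limit.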
results = optimize(f2, g1, [127.0, 921.0], BFGS(), Optim.Options(; time_limit=0.0))
@test !Optim.g_converged(results)
@test Optim.time_limit(results) < Optim.time_run(results)
end
@testset "#718" begin
f(x) = (1.0 - x[1])^2 + 100.0 * (x[2] - x[1]^2)^2
function g!(G, x)
G[1] = -2.0 * (1.0 - x[1]) - 400.0 * (x[2] - x[1]^2) * x[1]
G[2] = 200.0 * (x[2] - x[1]^2)
end
function h!(H, x)
H[1, 1] = 2.0 - 400.0 * x[2] + 1200.0 * x[1]^2
H[1, 2] = -400.0 * x[1]
H[2, 1] = -400.0 * x[1]
H[2, 2] = 200.0
end
function fg!(F,G,x)
G === nothing || g!(G,x)
F === nothing || return f(x)
nothing
end
function fgh!(F,G,H,x)
G === nothing || g!(G,x)
H === nothing || h!(H,x)
F === nothing || return f(x)
nothing
end
optimize(Optim.only_fg!(fg!), [0., 0.], NelderMead())
optimize(Optim.only_fgh!(fgh!), [0., 0.], NelderMead())
optimize(Optim.only_fgh!(fgh!), [0., 0.], ParticleSwarm())
optimize(Optim.only_fgh!(fgh!), [0., 0.], SimulatedAnnealing())
optimize(Optim.only_fgh!(fgh!), [0., 0.], GradientDescent())
optimize(Optim.only_fgh!(fgh!), [0., 0.], LBFGS())
optimize(Optim.only_fgh!(fgh!), [0., 0.], BFGS())
optimize(Optim.only_fgh!(fgh!), [0., 0.], NewtonTrustRegion())
optimize(Optim.only_fgh!(fgh!), [0., 0.], Newton())
end
| Optim | https://github.com/JuliaNLSolvers/Optim.jl.git |