licenses (sequencelengths 1-3) | version (stringclasses, 677 values) | tree_hash (stringlengths 40) | path (stringclasses, 1 value) | type (stringclasses, 2 values) | size (stringlengths 2-8) | text (stringlengths 25-67.1M) | package_name (stringlengths 2-41) | repo (stringlengths 33-86)
---|---|---|---|---|---|---|---|---
[
"MIT"
] | 0.1.0 | 2d6ab091a1f4af72f40a76fe35b2c01982aa0bbf | docs | 2425 | # PersonalNameOriginedOut
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**script** | **String** | | [optional] [default to nothing]
**id** | **String** | | [optional] [default to nothing]
**explanation** | **String** | | [optional] [default to nothing]
**name** | **String** | The input name. | [optional] [default to nothing]
**countryOrigin** | **String** | Most likely country of origin | [optional] [default to nothing]
**countryOriginAlt** | **String** | Second-best alternative country of origin | [optional] [default to nothing]
**countriesOriginTop** | **Vector{String}** | List of the top 10 most likely countries of origin | [optional] [default to nothing]
**score** | **Float64** | Score value compatible with the NamSor_v1 Origin score. Higher is better, but the score is not normalized; use calibratedProbability if available. | [optional] [default to nothing]
**regionOrigin** | **String** | Most likely region of origin (based on the countryOrigin ISO2 code) | [optional] [default to nothing]
**topRegionOrigin** | **String** | Most likely top region of origin (based on the countryOrigin ISO2 code) | [optional] [default to nothing]
**subRegionOrigin** | **String** | Most likely sub-region of origin (based on the countryOrigin ISO2 code) | [optional] [default to nothing]
**probabilityCalibrated** | **Float64** | The calibrated probability for countryOrigin to have been guessed correctly. -1 = still calibrating. | [optional] [default to nothing]
**probabilityAltCalibrated** | **Float64** | The calibrated probability for countryOrigin OR countryOriginAlt to have been guessed correctly. -1 = still calibrating. | [optional] [default to nothing]
**religionStats** | [**Vector{ReligionStatOut}**](ReligionStatOut.md) | Geographic religious statistics, assuming country of origin is correctly predicted. | [optional] [default to nothing]
**religionStatsAlt** | [**Vector{ReligionStatOut}**](ReligionStatOut.md) | Geographic religious statistics, for country best alternative. | [optional] [default to nothing]
**religionStatsSynthetic** | [**Vector{ReligionStatOut}**](ReligionStatOut.md) | Geographic religious statistics, assuming country of origin OR best alternative is correctly predicted. | [optional] [default to nothing]
[[Back to Model list]](../README.md#models) [[Back to API list]](../README.md#api-endpoints) [[Back to README]](../README.md)
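For orientation, here is a minimal, hedged sketch of consuming these fields from Julia. The variable `result` stands for a `PersonalNameOriginedOut` value returned by one of the origin endpoints; the accessor names simply mirror the property table above.

```julia
# `result` is assumed to be a PersonalNameOriginedOut returned by an origin call.
best = result.countryOrigin          # most likely country of origin
alt  = result.countryOriginAlt       # second-best alternative
p    = result.probabilityCalibrated  # -1.0 while the model is still calibrating
if p >= 0 && p < 0.5
    @info "Low-confidence origin prediction" best alt p
end
```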
| NamSor | https://github.com/NeroBlackstone/NamSor.jl.git |
|
[
"MIT"
] | 0.1.0 | 2d6ab091a1f4af72f40a76fe35b2c01982aa0bbf | docs | 1171 | # PersonalNameParsedOut
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**script** | **String** | | [optional] [default to nothing]
**id** | **String** | | [optional] [default to nothing]
**explanation** | **String** | | [optional] [default to nothing]
**name** | **String** | The input name. | [optional] [default to nothing]
**nameParserType** | **String** | Name parsing is addressed as a classification problem; for example, FN1LN1 means a first-then-last name order. | [optional] [default to nothing]
**nameParserTypeAlt** | **String** | Second-best alternative parsing. Name parsing is addressed as a classification problem; for example, FN1LN1 means a first-then-last name order. | [optional] [default to nothing]
**firstLastName** | [**FirstLastNameOut**](FirstLastNameOut.md) | | [optional] [default to nothing]
**score** | **Float64** | Higher score is better, but score is not normalized. Use calibratedProbability if available. | [optional] [default to nothing]
[[Back to Model list]](../README.md#models) [[Back to API list]](../README.md#api-endpoints) [[Back to README]](../README.md)
| NamSor | https://github.com/NeroBlackstone/NamSor.jl.git |
|
[
"MIT"
] | 0.1.0 | 2d6ab091a1f4af72f40a76fe35b2c01982aa0bbf | docs | 1334 | # PersonalNameReligionedOut
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**script** | **String** | | [optional] [default to nothing]
**id** | **String** | | [optional] [default to nothing]
**explanation** | **String** | | [optional] [default to nothing]
**name** | **String** | The input name. | [optional] [default to nothing]
**score** | **Float64** | Higher score is better, but score is not normalized. Use calibratedProbability if available. | [optional] [default to nothing]
**religion** | **String** | Most likely religion | [optional] [default to nothing]
**religionAlt** | **String** | Second-best alternative religion | [optional] [default to nothing]
**religionsTop** | **Vector{String}** | List of the top 10 most likely religions | [optional] [default to nothing]
**probabilityCalibrated** | **Float64** | The calibrated probability that religion was guessed correctly. -1 = still calibrating. | [optional] [default to nothing]
**probabilityAltCalibrated** | **Float64** | The calibrated probability that religion OR religionAlt was guessed correctly. -1 = still calibrating. | [optional] [default to nothing]
[[Back to Model list]](../README.md#models) [[Back to API list]](../README.md#api-endpoints) [[Back to README]](../README.md)
| NamSor | https://github.com/NeroBlackstone/NamSor.jl.git |
|
[
"MIT"
] | 0.1.0 | 2d6ab091a1f4af72f40a76fe35b2c01982aa0bbf | docs | 454 | # PersonalNameSubdivisionIn
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**id** | **String** | | [optional] [default to nothing]
**name** | **String** | | [optional] [default to nothing]
**subdivisionIso** | **String** | | [optional] [default to nothing]
[[Back to Model list]](../README.md#models) [[Back to API list]](../README.md#api-endpoints) [[Back to README]](../README.md)
| NamSor | https://github.com/NeroBlackstone/NamSor.jl.git |
|
[
"MIT"
] | 0.1.0 | 2d6ab091a1f4af72f40a76fe35b2c01982aa0bbf | docs | 1411 | # PersonalNameUSRaceEthnicityOut
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**script** | **String** | | [optional] [default to nothing]
**id** | **String** | | [optional] [default to nothing]
**explanation** | **String** | | [optional] [default to nothing]
**name** | **String** | The input name. | [optional] [default to nothing]
**raceEthnicityAlt** | **String** | Second most likely US 'race'/ethnicity | [optional] [default to nothing]
**raceEthnicity** | **String** | Most likely US 'race'/ethnicity | [optional] [default to nothing]
**score** | **Float64** | Higher score is better, but score is not normalized. Use calibratedProbability if available. | [optional] [default to nothing]
**raceEthnicitiesTop** | **Vector{String}** | List of the most likely 'races'/ethnicities | [optional] [default to nothing]
**probabilityCalibrated** | **Float64** | The calibrated probability for raceEthnicity to have been guessed correctly. -1 = still calibrating. | [optional] [default to nothing]
**probabilityAltCalibrated** | **Float64** | The calibrated probability for raceEthnicity OR raceEthnicityAlt to have been guessed correctly. -1 = still calibrating. | [optional] [default to nothing]
[[Back to Model list]](../README.md#models) [[Back to API list]](../README.md#api-endpoints) [[Back to README]](../README.md)
| NamSor | https://github.com/NeroBlackstone/NamSor.jl.git |
|
[
"MIT"
] | 0.1.0 | 2d6ab091a1f4af72f40a76fe35b2c01982aa0bbf | docs | 899 | # ProperNounCategorizedOut
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**script** | **String** | | [optional] [default to nothing]
**id** | **String** | | [optional] [default to nothing]
**explanation** | **String** | | [optional] [default to nothing]
**name** | **String** | The input name | [optional] [default to nothing]
**commonType** | **String** | The most likely common name type | [optional] [default to nothing]
**commonTypeAlt** | **String** | Best alternative for the most likely common name type | [optional] [default to nothing]
**score** | **Float64** | Higher score is better, but score is not normalized. Use calibratedProbability if available. | [optional] [default to nothing]
[[Back to Model list]](../README.md#models) [[Back to API list]](../README.md#api-endpoints) [[Back to README]](../README.md)
| NamSor | https://github.com/NeroBlackstone/NamSor.jl.git |
|
[
"MIT"
] | 0.1.0 | 2d6ab091a1f4af72f40a76fe35b2c01982aa0bbf | docs | 775 | # RegionISO
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**countryName** | **String** | | [optional] [default to nothing]
**countryNumCode** | **String** | | [optional] [default to nothing]
**countryISO2** | **String** | | [optional] [default to nothing]
**countryISO3** | **String** | | [optional] [default to nothing]
**countryFIPS** | **String** | | [optional] [default to nothing]
**subregion** | **String** | | [optional] [default to nothing]
**region** | **String** | | [optional] [default to nothing]
**topregion** | **String** | | [optional] [default to nothing]
[[Back to Model list]](../README.md#models) [[Back to API list]](../README.md#api-endpoints) [[Back to README]](../README.md)
| NamSor | https://github.com/NeroBlackstone/NamSor.jl.git |
|
[
"MIT"
] | 0.1.0 | 2d6ab091a1f4af72f40a76fe35b2c01982aa0bbf | docs | 383 | # RegionOut
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**countriesAndRegions** | [**Vector{RegionISO}**](RegionISO.md) | List of countries and regions | [optional] [default to nothing]
[[Back to Model list]](../README.md#models) [[Back to API list]](../README.md#api-endpoints) [[Back to README]](../README.md)
| NamSor | https://github.com/NeroBlackstone/NamSor.jl.git |
|
[
"MIT"
] | 0.1.0 | 2d6ab091a1f4af72f40a76fe35b2c01982aa0bbf | docs | 381 | # ReligionStatOut
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**religion** | **String** | | [optional] [default to nothing]
**pct** | **Float64** | | [optional] [default to nothing]
[[Back to Model list]](../README.md#models) [[Back to API list]](../README.md#api-endpoints) [[Back to README]](../README.md)
| NamSor | https://github.com/NeroBlackstone/NamSor.jl.git |
|
[
"MIT"
] | 0.1.0 | 2d6ab091a1f4af72f40a76fe35b2c01982aa0bbf | docs | 8618 | # SocialApi
All URIs are relative to *https://v2.namsor.com/NamSorAPIv2*
Method | HTTP request | Description
------------- | ------------- | -------------
[**phone_code**](SocialApi.md#phone_code) | **GET** /api2/json/phoneCode/{firstName}/{lastName}/{phoneNumber} | [USES 11 UNITS PER NAME] Infer the likely country and phone prefix, given a personal name and formatted / unformatted phone number.
[**phone_code_batch**](SocialApi.md#phone_code_batch) | **POST** /api2/json/phoneCodeBatch | [USES 11 UNITS PER NAME] Infer the likely country and phone prefix for up to 100 personal names, automatically detecting the local context from each name and formatted/unformatted phone number.
[**phone_code_geo**](SocialApi.md#phone_code_geo) | **GET** /api2/json/phoneCodeGeo/{firstName}/{lastName}/{phoneNumber}/{countryIso2} | [USES 11 UNITS PER NAME] Infer the likely phone prefix, given a personal name and formatted / unformatted phone number, with a local context (ISO2 country of residence).
[**phone_code_geo_batch**](SocialApi.md#phone_code_geo_batch) | **POST** /api2/json/phoneCodeGeoBatch | [USES 11 UNITS PER NAME] Infer the likely country and phone prefix for up to 100 personal names, with a local context (ISO2 country of residence).
[**phone_code_geo_feedback_loop**](SocialApi.md#phone_code_geo_feedback_loop) | **GET** /api2/json/phoneCodeGeoFeedbackLoop/{firstName}/{lastName}/{phoneNumber}/{phoneNumberE164}/{countryIso2} | [CREDITS 1 UNIT] Feedback loop to better infer the likely phone prefix, given a personal name and formatted / unformatted phone number, with a local context (ISO2 country of residence).
# **phone_code**
> phone_code(_api::SocialApi, first_name::String, last_name::String, phone_number::String; _mediaType=nothing) -> FirstLastNamePhoneCodedOut, OpenAPI.Clients.ApiResponse <br/>
> phone_code(_api::SocialApi, response_stream::Channel, first_name::String, last_name::String, phone_number::String; _mediaType=nothing) -> Channel{ FirstLastNamePhoneCodedOut }, OpenAPI.Clients.ApiResponse
[USES 11 UNITS PER NAME] Infer the likely country and phone prefix, given a personal name and formatted / unformatted phone number.
### Required Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**_api** | **SocialApi** | API context |
**first_name** | **String**| | [default to nothing]
**last_name** | **String**| | [default to nothing]
**phone_number** | **String**| | [default to nothing]
### Return type
[**FirstLastNamePhoneCodedOut**](FirstLastNamePhoneCodedOut.md)
### Authorization
[api_key](../README.md#api_key)
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#api-endpoints) [[Back to Model list]](../README.md#models) [[Back to README]](../README.md)
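A minimal usage sketch follows. The client construction (base URL, `X-API-KEY` header, and the `headers` keyword on `OpenAPI.Clients.Client`) is an assumption about the surrounding setup, not part of this endpoint's documentation:

```julia
using NamSor, OpenAPI

client = OpenAPI.Clients.Client("https://v2.namsor.com/NamSorAPIv2";
    headers=Dict("X-API-KEY" => ENV["NAMSOR_API_KEY"]))
api = SocialApi(client)

# Returns a FirstLastNamePhoneCodedOut and the raw HTTP response.
result, http_resp = phone_code(api, "John", "Smith", "+33 6 12 34 56 78")
```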
# **phone_code_batch**
> phone_code_batch(_api::SocialApi; batch_first_last_name_phone_number_in=nothing, _mediaType=nothing) -> BatchFirstLastNamePhoneCodedOut, OpenAPI.Clients.ApiResponse <br/>
> phone_code_batch(_api::SocialApi, response_stream::Channel; batch_first_last_name_phone_number_in=nothing, _mediaType=nothing) -> Channel{ BatchFirstLastNamePhoneCodedOut }, OpenAPI.Clients.ApiResponse
[USES 11 UNITS PER NAME] Infer the likely country and phone prefix for up to 100 personal names, automatically detecting the local context from each name and formatted/unformatted phone number.
### Required Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**_api** | **SocialApi** | API context |
### Optional Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**batch_first_last_name_phone_number_in** | [**BatchFirstLastNamePhoneNumberIn**](BatchFirstLastNamePhoneNumberIn.md)| A list of personal names |
### Return type
[**BatchFirstLastNamePhoneCodedOut**](BatchFirstLastNamePhoneCodedOut.md)
### Authorization
[api_key](../README.md#api_key)
### HTTP request headers
- **Content-Type**: application/json
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#api-endpoints) [[Back to Model list]](../README.md#models) [[Back to README]](../README.md)
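A batch-call sketch, assuming the generated models expose keyword constructors and that `BatchFirstLastNamePhoneNumberIn` holds its entries in a `personalNames` field (check the generated model docs before relying on these names):

```julia
batch = BatchFirstLastNamePhoneNumberIn(; personalNames=[
    FirstLastNamePhoneNumberIn(; id="1", firstName="Ada", lastName="Lovelace",
        phoneNumber="+44 20 7946 0958"),
])
out, _ = phone_code_batch(api; batch_first_last_name_phone_number_in=batch)
```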
# **phone_code_geo**
> phone_code_geo(_api::SocialApi, first_name::String, last_name::String, phone_number::String, country_iso2::String; _mediaType=nothing) -> FirstLastNamePhoneCodedOut, OpenAPI.Clients.ApiResponse <br/>
> phone_code_geo(_api::SocialApi, response_stream::Channel, first_name::String, last_name::String, phone_number::String, country_iso2::String; _mediaType=nothing) -> Channel{ FirstLastNamePhoneCodedOut }, OpenAPI.Clients.ApiResponse
[USES 11 UNITS PER NAME] Infer the likely phone prefix, given a personal name and formatted / unformatted phone number, with a local context (ISO2 country of residence).
### Required Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**_api** | **SocialApi** | API context |
**first_name** | **String**| | [default to nothing]
**last_name** | **String**| | [default to nothing]
**phone_number** | **String**| | [default to nothing]
**country_iso2** | **String**| | [default to nothing]
### Return type
[**FirstLastNamePhoneCodedOut**](FirstLastNamePhoneCodedOut.md)
### Authorization
[api_key](../README.md#api_key)
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#api-endpoints) [[Back to Model list]](../README.md#models) [[Back to README]](../README.md)
# **phone_code_geo_batch**
> phone_code_geo_batch(_api::SocialApi; batch_first_last_name_phone_number_geo_in=nothing, _mediaType=nothing) -> BatchFirstLastNamePhoneCodedOut, OpenAPI.Clients.ApiResponse <br/>
> phone_code_geo_batch(_api::SocialApi, response_stream::Channel; batch_first_last_name_phone_number_geo_in=nothing, _mediaType=nothing) -> Channel{ BatchFirstLastNamePhoneCodedOut }, OpenAPI.Clients.ApiResponse
[USES 11 UNITS PER NAME] Infer the likely country and phone prefix for up to 100 personal names, with a local context (ISO2 country of residence).
### Required Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**_api** | **SocialApi** | API context |
### Optional Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**batch_first_last_name_phone_number_geo_in** | [**BatchFirstLastNamePhoneNumberGeoIn**](BatchFirstLastNamePhoneNumberGeoIn.md)| A list of personal names |
### Return type
[**BatchFirstLastNamePhoneCodedOut**](BatchFirstLastNamePhoneCodedOut.md)
### Authorization
[api_key](../README.md#api_key)
### HTTP request headers
- **Content-Type**: application/json
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#api-endpoints) [[Back to Model list]](../README.md#models) [[Back to README]](../README.md)
# **phone_code_geo_feedback_loop**
> phone_code_geo_feedback_loop(_api::SocialApi, first_name::String, last_name::String, phone_number::String, phone_number_e164::String, country_iso2::String; _mediaType=nothing) -> FirstLastNamePhoneCodedOut, OpenAPI.Clients.ApiResponse <br/>
> phone_code_geo_feedback_loop(_api::SocialApi, response_stream::Channel, first_name::String, last_name::String, phone_number::String, phone_number_e164::String, country_iso2::String; _mediaType=nothing) -> Channel{ FirstLastNamePhoneCodedOut }, OpenAPI.Clients.ApiResponse
[CREDITS 1 UNIT] Feedback loop to better infer the likely phone prefix, given a personal name and formatted / unformatted phone number, with a local context (ISO2 country of residence).
### Required Parameters
Name | Type | Description | Notes
------------- | ------------- | ------------- | -------------
**_api** | **SocialApi** | API context |
**first_name** | **String**| | [default to nothing]
**last_name** | **String**| | [default to nothing]
**phone_number** | **String**| | [default to nothing]
**phone_number_e164** | **String**| | [default to nothing]
**country_iso2** | **String**| | [default to nothing]
### Return type
[**FirstLastNamePhoneCodedOut**](FirstLastNamePhoneCodedOut.md)
### Authorization
[api_key](../README.md#api_key)
### HTTP request headers
- **Content-Type**: Not defined
- **Accept**: application/json
[[Back to top]](#) [[Back to API list]](../README.md#api-endpoints) [[Back to Model list]](../README.md#models) [[Back to README]](../README.md)
| NamSor | https://github.com/NeroBlackstone/NamSor.jl.git |
|
[
"MIT"
] | 0.1.0 | 2d6ab091a1f4af72f40a76fe35b2c01982aa0bbf | docs | 474 | # SoftwareVersionOut
## Properties
Name | Type | Description | Notes
------------ | ------------- | ------------- | -------------
**softwareNameAndVersion** | **String** | The software name and version | [optional] [default to nothing]
**softwareVersion** | **Vector{Int64}** | The software version as [major, minor, build] | [optional] [default to nothing]
[[Back to Model list]](../README.md#models) [[Back to API list]](../README.md#api-endpoints) [[Back to README]](../README.md)
| NamSor | https://github.com/NeroBlackstone/NamSor.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 1467 | using Documenter, DocumenterVitepress, DocumenterCitations, Boltz
pages = [
"Boltz.jl" => "index.md",
"Tutorials" => [
"Getting Started" => "tutorials/1_GettingStarted.md",
"Symbolic Optimal Control" => "tutorials/2_SymbolicOptimalControl.md"
],
"API Reference" => [
"Index" => "api/index.md",
"Basis Functions" => "api/basis.md",
"Layers API" => "api/layers.md",
"Vision Models" => "api/vision.md",
"Private API" => "api/private.md"
]
]
bib = CitationBibliography(
joinpath(@__DIR__, "ref.bib");
style=:authoryear
)
doctestexpr = quote
using Boltz, Random, Lux
using DynamicExpressions, Zygote
end
DocMeta.setdocmeta!(Boltz, :DocTestSetup, doctestexpr; recursive=true)
deploy_config = Documenter.auto_detect_deploy_system()
deploy_decision = Documenter.deploy_folder(deploy_config; repo="github.com/LuxDL/Boltz.jl",
devbranch="main", devurl="dev", push_preview=true)
makedocs(; sitename="Boltz.jl Docs",
authors="Avik Pal et al.",
clean=true,
modules=[Boltz],
linkcheck=true,
repo="https://github.com/LuxDL/Boltz.jl/blob/{commit}{path}#{line}",
format=DocumenterVitepress.MarkdownVitepress(;
repo="github.com/LuxDL/Boltz.jl", devbranch="main", devurl="dev", deploy_decision),
draft=false,
plugins=[bib],
pages)
deploydocs(; repo="github.com/LuxDL/Boltz.jl.git",
push_preview=true, target="build", devbranch="main")
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 1398 | using Pkg
storage_dir = joinpath(@__DIR__, "..", "tutorial_deps")
!isdir(storage_dir) && mkpath(storage_dir)
# Run this as `run_single_tutorial.jl <tutorial_name> <output_dir> <path/to/script>`
name = ARGS[1]
pkg_log_path = joinpath(storage_dir, "$(name)_pkg.log")
output_directory = ARGS[2]
path = ARGS[3]
io = open(pkg_log_path, "w")
Pkg.develop(; path=joinpath(@__DIR__, ".."), io)
Pkg.instantiate(; io)
close(io)
using Literate
function preprocess(path, str)
new_str = replace(str, "__DIR = @__DIR__" => "__DIR = \"$(dirname(path))\"")
appendix_code = """
# ## Appendix
using InteractiveUtils
InteractiveUtils.versioninfo()
if @isdefined(MLDataDevices)
if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
println()
CUDA.versioninfo()
end
if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
println()
AMDGPU.versioninfo()
end
end
nothing #hide
"""
return new_str * appendix_code
end
# For displaying generated LaTeX
function postprocess(path, str)
return replace(
str, "````\n__REPLACEME__\$" => "\$\$", "\$__REPLACEME__\n````" => "\$\$")
end
Literate.markdown(
path, output_directory; execute=true, name, flavor=Literate.DocumenterFlavor(),
preprocess=Base.Fix1(preprocess, path), postprocess=Base.Fix1(postprocess, path))
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 1585 | const ALL_TUTORIALS = [
"GettingStarted/main.jl",
"SymbolicOptimalControl/main.jl"
]
const TUTORIALS = collect(enumerate(ALL_TUTORIALS))
const BUILDKITE_PARALLEL_JOB_COUNT = parse(
Int, get(ENV, "BUILDKITE_PARALLEL_JOB_COUNT", "-1"))
const TUTORIALS_BUILDING = if BUILDKITE_PARALLEL_JOB_COUNT > 0
id = parse(Int, ENV["BUILDKITE_PARALLEL_JOB"]) + 1 # Index starts from 0
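# Evenly partition the tutorial list across the parallel jobs; a job whose index
# falls past the last partition simply builds nothing.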
splits = collect(Iterators.partition(
TUTORIALS, cld(length(TUTORIALS), BUILDKITE_PARALLEL_JOB_COUNT)))
id > length(splits) ? [] : splits[id]
else
TUTORIALS
end
const NTASKS = min(
parse(Int, get(ENV, "LUXLIB_DOCUMENTATION_NTASKS", "1")), length(TUTORIALS_BUILDING))
@info "Building Tutorials:" TUTORIALS_BUILDING
@info "Starting Lux Tutorial Build with $(NTASKS) tasks."
asyncmap(TUTORIALS_BUILDING; ntasks=NTASKS) do (i, p)
@info "Running Tutorial $(i): $(p) on task $(current_task())"
path = joinpath(@__DIR__, "..", "examples", p)
name = "$(i)_$(first(rsplit(p, "/")))"
output_directory = joinpath(@__DIR__, "src", "tutorials")
tutorial_proj = dirname(path)
file = joinpath(dirname(@__FILE__), "run_single_tutorial.jl")
withenv("JULIA_NUM_THREADS" => "$(Threads.nthreads())",
"JULIA_CUDA_HARD_MEMORY_LIMIT" => "$(100 ÷ NTASKS)%",
"JULIA_PKG_PRECOMPILE_AUTO" => "0", "JULIA_DEBUG" => "Literate") do
cmd = `$(Base.julia_cmd()) --color=yes --code-coverage=user --threads=$(Threads.nthreads()) --project=$(tutorial_proj) "$(file)" "$(name)" "$(output_directory)" "$(path)"`
run(cmd)
end
return
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 1322 | # # Getting Started
#
# !!! tip "Prerequisites"
#
# Here we assume that you are familiar with [`Lux.jl`](https://lux.csail.mit.edu/stable/).
# If not please take a look at the
# [Lux.jl tutorials](https://lux.csail.mit.edu/stable/tutorials/).
#
# `Boltz.jl` is just like `Lux.jl` but comes with more "batteries included". Let's start by
# defining an MLP model.
using Lux, Boltz, Random
# ## Multi-Layer Perceptron
#
# If we were to do this in `Lux.jl` we would write the following:
model = Chain(
Dense(784, 256, relu),
Dense(256, 10)
)
# But in `Boltz.jl` we can do this:
model = Layers.MLP(784, (256, 10), relu)
# The `MLP` function is a convenience wrapper around `Lux.Chain` that constructs a
# multi-layer perceptron with the given layer sizes and activation function.
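# As a quick sanity check, we can materialize the parameters and states and run a
# forward pass on a random batch of 16 inputs:
ps, st = Lux.setup(Random.default_rng(), model)
y, _ = model(rand(Float32, 784, 16), ps, st)
size(y)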
# ## How about VGG?
#
# Let's take a look at the `Vision` module. We can construct a VGG model with the
# following code:
Vision.VGG(13)
# We can also load pretrained ImageNet weights by passing `pretrained=true`.
# !!! note "Load JLD2"
#
# You need to load `JLD2` before being able to load pretrained weights.
using JLD2
Vision.VGG(13; pretrained=true)
# ## Loading Models from Metalhead (Flux.jl)
# We can load models from Metalhead (Flux.jl); just remember to load `Metalhead` first.
using Metalhead
Vision.ResNet(18)
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 8964 | # # Solving Optimal Control Problems with Symbolic Universal Differential Equations
# This tutorial is based on the [SciMLSensitivity.jl optimal control tutorial](https://docs.sciml.ai/SciMLSensitivity/stable/examples/optimal_control/optimal_control/).
# Instead of using a classical NN architecture, here we will combine the NN with a symbolic
# expression from [DynamicExpressions.jl](https://symbolicml.org/DynamicExpressions.jl/) (the
# symbolic engine behind [SymbolicRegression.jl](https://astroautomata.com/SymbolicRegression.jl/)
# and [PySR](https://github.com/MilesCranmer/PySR/)).
# Here we will solve a classic optimal control problem with a universal differential
# equation. Let
# $$x^{\prime\prime} = u^3(t)$$
# where we want to optimize our controller $u(t)$ such that the following is minimized:
# $$\mathcal{L}(\theta) = \sum_i \left(\|4 - x(t_i)\|_2 + 2\|x^\prime(t_i)\|_2 + \|u(t_i)\|_2\right)$$
# where the $t_i$ are sampled on $(0, 8)$ at intervals of $0.01$. To do this, we rewrite the ODE in
# first order form:
# $$x^\prime = v$$
# $$v^\prime = u^3(t)$$
# and thus
# $$\mathcal{L}(\theta) = \sum_i \left(\|4 - x(t_i)\|_2 + 2\|v(t_i)\|_2 + \|u(t_i)\|_2\right)$$
# is our loss function on the first order system. We thus choose a neural network form for
# $u$ and optimize the equation with respect to this loss. Note that we will first reduce
# control cost (the last term) by 10x in order to bump the network out of a local minimum.
# This looks like:
# ## Package Imports
using Lux, Boltz, ComponentArrays, OrdinaryDiffEqVerner, Optimization, OptimizationOptimJL,
OptimizationOptimisers, SciMLSensitivity, Statistics, Printf, Random
using DynamicExpressions, SymbolicRegression, MLJ, SymbolicUtils, Latexify
using CairoMakie
# ## Helper Functions
function plot_dynamics(sol, us, ts)
fig = Figure()
ax = CairoMakie.Axis(fig[1, 1]; xlabel=L"t")
ylims!(ax, (-6, 6))
lines!(ax, ts, sol[1, :]; label=L"u_1(t)", linewidth=3)
lines!(ax, ts, sol[2, :]; label=L"u_2(t)", linewidth=3)
lines!(ax, ts, vec(us); label=L"u(t)", linewidth=3)
axislegend(ax; position=:rb)
return fig
end
# ## Training a Neural Network based UDE
# Let's set up the neural network. For the first part, we won't do any symbolic regression.
# We will simply train a plain neural network to solve the optimal control problem.
rng = Xoshiro(0)
tspan = (0.0, 8.0)
mlp = Chain(Dense(1 => 4, gelu), Dense(4 => 4, gelu), Dense(4 => 1))
function construct_ude(mlp, solver; kwargs...)
return @compact(; mlp, solver, kwargs...) do x_in, ps
x, ts, ret_sol = x_in
function dudt(du, u, p, t)
u₁, u₂ = u
du[1] = u₂
du[2] = mlp([t], p)[1]^3
return
end
prob = ODEProblem{true}(dudt, x, extrema(ts), ps.mlp)
sol = solve(prob, solver; saveat=ts,
sensealg=QuadratureAdjoint(; autojacvec=ReverseDiffVJP(true)), kwargs...)
us = mlp(reshape(ts, 1, :), ps.mlp)
ret_sol === Val(true) && @return sol, us
@return Array(sol), us
end
end
ude = construct_ude(mlp, Vern9(); abstol=1e-10, reltol=1e-10);
# Here we are going to use the same configuration for testing, but this shows that
# we could set it up with a different ODE solve configuration.
ude_test = construct_ude(mlp, Vern9(); abstol=1e-10, reltol=1e-10);
function train_model_1(ude, rng, ts_)
ps, st = Lux.setup(rng, ude)
ps = ComponentArray{Float64}(ps)
stateful_ude = StatefulLuxLayer{true}(ude, nothing, st)
ts = collect(ts_)
function loss_adjoint(θ)
x, us = stateful_ude(([-4.0, 0.0], ts, Val(false)), θ)
return mean(abs2, 4 .- x[1, :]) + 2 * mean(abs2, x[2, :]) + 0.1 * mean(abs2, us)
end
callback = function (state, l)
state.iter % 50 == 1 && @printf "Iteration: %5d\tLoss: %10g\n" state.iter l
return false
end
optf = OptimizationFunction((x, p) -> loss_adjoint(x), AutoZygote())
optprob = OptimizationProblem(optf, ps)
res1 = solve(optprob, Optimisers.Adam(0.001); callback, maxiters=500)
optprob = OptimizationProblem(optf, res1.u)
res2 = solve(optprob, LBFGS(); callback, maxiters=100)
return StatefulLuxLayer{true}(ude, res2.u, st)
end
trained_ude = train_model_1(ude, rng, 0.0:0.01:8.0)
nothing #hide
#-
sol, us = ude_test(([-4.0, 0.0], 0.0:0.01:8.0, Val(true)), trained_ude.ps, trained_ude.st)[1];
plot_dynamics(sol, us, 0.0:0.01:8.0)
# Now that the system is in a better behaved part of parameter space, we return to the
# original loss function to finish the optimization:
function train_model_2(stateful_ude::StatefulLuxLayer, ts_)
ts = collect(ts_)
function loss_adjoint(θ)
x, us = stateful_ude(([-4.0, 0.0], ts, Val(false)), θ)
return mean(abs2, 4 .- x[1, :]) .+ 2 * mean(abs2, x[2, :]) .+ mean(abs2, us)
end
callback = function (state, l)
state.iter % 10 == 1 && @printf "Iteration: %5d\tLoss: %10g\n" state.iter l
return false
end
optf = OptimizationFunction((x, p) -> loss_adjoint(x), AutoZygote())
optprob = OptimizationProblem(optf, stateful_ude.ps)
res2 = solve(optprob, LBFGS(); callback, maxiters=100)
return StatefulLuxLayer{true}(stateful_ude.model, res2.u, stateful_ude.st)
end
trained_ude = train_model_2(trained_ude, 0.0:0.01:8.0)
nothing #hide
#-
sol, us = ude_test(([-4.0, 0.0], 0.0:0.01:8.0, Val(true)), trained_ude.ps, trained_ude.st)[1];
plot_dynamics(sol, us, 0.0:0.01:8.0)
# ## Symbolic Regression
# Ok so now we have a trained neural network that solves the optimal control problem. But
# can we replace `Dense(4 => 4, gelu)` with a symbolic expression? Let's try!
# ### Data Generation for Symbolic Regression
# First, we need to generate data for the symbolic regression.
ts = reshape(collect(0.0:0.1:8.0), 1, :)
X_train = mlp[1](ts, trained_ude.ps.mlp.layer_1, trained_ude.st.mlp.layer_1)[1]
# This is the training input data. Now we generate the targets
Y_train = mlp[2](X_train, trained_ude.ps.mlp.layer_2, trained_ude.st.mlp.layer_2)[1]
# ### Fitting the Symbolic Expression
# We will follow the example from [SymbolicRegression.jl docs](https://astroautomata.com/SymbolicRegression.jl/dev/examples/)
# to fit the symbolic expression.
srmodel = MultitargetSRRegressor(;
binary_operators=[+, -, *, /], niterations=100, save_to_file=false);
# One important note here is to transpose the data, because that is how MLJ expects it
# to be structured (in contrast to how Lux or SymbolicRegression expect the data).
mach = machine(srmodel, X_train', Y_train')
fit!(mach; verbosity=0)
r = report(mach)
best_eq = [r.equations[1][r.best_idx[1]], r.equations[2][r.best_idx[2]],
r.equations[3][r.best_idx[3]], r.equations[4][r.best_idx[4]]]
# Let's see the expressions that SymbolicRegression.jl found. In case you were wondering,
# these expressions are not hardcoded; they are updated live from the output of the code
# above, using `Latexify.jl` and the integration of `SymbolicUtils.jl` with `DynamicExpressions.jl`.
eqn1 = latexify(string(node_to_symbolic(best_eq[1], srmodel)); fmt=FancyNumberFormatter(5)) #hide
print("__REPLACEME__$(eqn1.s)__REPLACEME__") #hide
nothing #hide
#-
eqn2 = latexify(string(node_to_symbolic(best_eq[2], srmodel)); fmt=FancyNumberFormatter(5)) #hide
print("__REPLACEME__$(eqn2.s)__REPLACEME__") #hide
nothing #hide
#-
eqn1 = latexify(string(node_to_symbolic(best_eq[3], srmodel)); fmt=FancyNumberFormatter(5)) #hide
print("__REPLACEME__$(eqn1.s)__REPLACEME__") #hide
nothing #hide
#-
eqn2 = latexify(string(node_to_symbolic(best_eq[4], srmodel)); fmt=FancyNumberFormatter(5)) #hide
print("__REPLACEME__$(eqn2.s)__REPLACEME__") #hide
nothing #hide
# ## Combining the Neural Network with the Symbolic Expression
# Now that we have the symbolic expression, we can combine it with the neural network to
# solve the optimal control problem, but we do need to perform some fine-tuning.
hybrid_mlp = Chain(Dense(1 => 4, gelu),
Layers.DynamicExpressionsLayer(OperatorEnum(; binary_operators=[+, -, *, /]), best_eq),
Dense(4 => 1))
# There you have it! It is that easy to take the fitted symbolic expression and combine it
# with a neural network. Let's see how it performs before fine-tuning.
hybrid_ude = construct_ude(hybrid_mlp, Vern9(); abstol=1e-10, reltol=1e-10);
# We want to reuse the trained neural network parameters, so we will copy them over to the
# new model
st = Lux.initialstates(rng, hybrid_ude)
ps = (;
mlp=(; layer_1=trained_ude.ps.mlp.layer_1,
layer_2=Lux.initialparameters(rng, hybrid_mlp[2]),
layer_3=trained_ude.ps.mlp.layer_3))
ps = ComponentArray(ps)
sol, us = hybrid_ude(([-4.0, 0.0], 0.0:0.01:8.0, Val(true)), ps, st)[1];
plot_dynamics(sol, us, 0.0:0.01:8.0)
# Now that does perform well! But we could finetune this model very easily. We will skip
# that part on CI, but you can do it by using the same training code as above.
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 1271 | module BoltzDataInterpolationsExt
using DataInterpolations: AbstractInterpolation
using Boltz: Boltz, Layers, Utils
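# Generate four specializations at once: the interpolation grid may be trainable
# (stored in `ps`) or fixed (stored in `st`), and the query `t` may be a scalar or
# a vector of time points.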
for train_grid in (true, false), tType in (AbstractVector, Number)
grid_expr = train_grid ? :(grid = ps.grid) : :(grid = st.grid)
sol_expr = tType === Number ? :(sol = interp(t)) : :(sol = interp.(t))
@eval function (spl::Layers.SplineLayer{$(train_grid), Basis})(
t::$(tType), ps, st) where {Basis <: AbstractInterpolation}
$(grid_expr)
interp = construct_basis(Basis, ps.saved_points, grid; extrapolate=true)
$(sol_expr)
spl.in_dims == () && return sol, st
return Utils.mapreduce_stack(sol), st
end
end
function construct_basis(
::Type{Basis}, saved_points::AbstractVector, grid; extrapolate=false) where {Basis}
return Basis(saved_points, grid; extrapolate)
end
function construct_basis(::Type{Basis}, saved_points::AbstractArray{T, N},
grid; extrapolate=false) where {Basis, T, N}
return construct_basis(
# Unfortunately DataInterpolations.jl is not very robust to different array types
# so we have to make a copy
Basis, [copy(selectdim(saved_points, N, i)) for i in 1:size(saved_points, N)],
grid; extrapolate)
end
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 6953 | module BoltzDynamicExpressionsExt
using ChainRulesCore: NoTangent
using DynamicExpressions: DynamicExpressions, Node, OperatorEnum, eval_grad_tree_array,
eval_tree_array
using ForwardDiff: ForwardDiff
using Lux: Lux, Chain, Parallel, WrappedFunction
using MLDataDevices: CPUDevice
using Boltz: Layers, Utils
@static if pkgversion(DynamicExpressions) ≥ v"0.19"
using DynamicExpressions: EvalOptions
const EvalOptionsTypes = Union{Missing, EvalOptions, NamedTuple}
else
const EvalOptionsTypes = Union{Missing, NamedTuple}
end
Utils.is_extension_loaded(::Val{:DynamicExpressions}) = true
# TODO: Remove this once the minimum version of DynamicExpressions is 0.19. We need to
# keep the older versions around for SymbolicRegression.jl compatibility which
# currently uses DynamicExpressions.jl 0.16.
construct_eval_options(::Missing, ::Missing) = (; turbo=Val(false), bumper=Val(false))
function construct_eval_options(turbo::Union{Bool, Val}, ::Missing)
return construct_eval_options(turbo, Val(false))
end
function construct_eval_options(::Missing, bumper::Union{Bool, Val})
return construct_eval_options(Val(false), bumper)
end
function construct_eval_options(turbo::Union{Bool, Val}, bumper::Union{Bool, Val})
Base.depwarn("`bumper` and `turbo` are deprecated. Use `eval_options` instead.",
:DynamicExpressionsLayer)
return (; turbo, bumper)
end
construct_eval_options(::Missing, eval_options::EvalOptionsTypes) = eval_options
construct_eval_options(eval_options::EvalOptionsTypes, ::Missing) = eval_options
function construct_eval_options(::EvalOptionsTypes, ::EvalOptionsTypes)
throw(ArgumentError("`eval_options`, `turbo` and `bumper` are mutually exclusive. \
Don't specify `eval_options` if you are using `turbo` or \
`bumper`."))
end
function Layers.DynamicExpressionsLayer(
operator_enum::OperatorEnum, expressions::AbstractVector{<:Node}; kwargs...)
return Layers.DynamicExpressionsLayer(operator_enum, expressions...; kwargs...)
end
function Layers.DynamicExpressionsLayer(operator_enum::OperatorEnum, expressions::Node...;
eval_options::EvalOptionsTypes=missing, turbo::Union{Bool, Val, Missing}=missing,
bumper::Union{Bool, Val, Missing}=missing)
eval_options = construct_eval_options(
eval_options, construct_eval_options(turbo, bumper))
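# A single expression maps to one wrapper layer; multiple expressions are
# evaluated in parallel and their outputs stacked into a single array.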
internal_layer = if length(expressions) == 1
Layers.InternalDynamicExpressionWrapper(
operator_enum, first(expressions), eval_options)
else
Chain(
Parallel(nothing,
ntuple(
i -> Layers.InternalDynamicExpressionWrapper(
operator_enum, expressions[i], eval_options),
length(expressions))...),
WrappedFunction(Lux.Utils.stack1))
end
return Layers.DynamicExpressionsLayer(internal_layer)
end
function Layers.apply_dynamic_expression_internal(
de::Layers.InternalDynamicExpressionWrapper, expr, operator_enum, x, ps)
Layers.update_de_expression_constants!(expr, ps)
@static if pkgversion(DynamicExpressions) ≥ v"0.19"
eval_options = EvalOptions(; de.eval_options.turbo, de.eval_options.bumper)
return first(eval_tree_array(expr, x, operator_enum; eval_options))
else
return first(eval_tree_array(
expr, x, operator_enum; de.eval_options.turbo, de.eval_options.bumper))
end
end
function Layers.∇apply_dynamic_expression(
de::Layers.InternalDynamicExpressionWrapper, expr, operator_enum, x, ps)
Layers.update_de_expression_constants!(expr, ps)
_, Jₓ, _ = eval_grad_tree_array(
expr, x, operator_enum; variable=Val(true), de.eval_options.turbo)
y, Jₚ, _ = eval_grad_tree_array(
expr, x, operator_enum; variable=Val(false), de.eval_options.turbo)
∇apply_dynamic_expression_internal = let Jₓ = Jₓ, Jₚ = Jₚ
Δ -> begin
∂x = Jₓ .* reshape(Δ, 1, :)
∂ps = Jₚ * Δ
return NoTangent(), NoTangent(), NoTangent(), NoTangent(), ∂x, ∂ps, NoTangent()
end
end
return y, ∇apply_dynamic_expression_internal
end
# Forward Diff rules
# TODO: https://github.com/SymbolicML/DynamicExpressions.jl/issues/74 fix this
function Layers.apply_dynamic_expression(
de::Layers.InternalDynamicExpressionWrapper, expr, operator_enum,
x::AbstractMatrix{<:ForwardDiff.Dual{Tag, T, N}}, ps, ::CPUDevice) where {T, N, Tag}
value_fn(x) = ForwardDiff.value(Tag, x)
partials_fn(x, i) = ForwardDiff.partials(Tag, x, i)
Layers.update_de_expression_constants!(expr, ps)
y, Jₓ, _ = eval_grad_tree_array(
expr, value_fn.(x), operator_enum; variable=Val(true), de.eval_options.turbo)
partials = ntuple(
i -> dropdims(sum(partials_fn.(x, i) .* Jₓ; dims=1); dims=1), N)
fT = promote_type(eltype(y), T, eltype(Jₓ))
partials_y = ForwardDiff.Partials{N, fT}.(tuple.(partials...))
return ForwardDiff.Dual{Tag, fT, N}.(y, partials_y)
end
function Layers.apply_dynamic_expression(
de::Layers.InternalDynamicExpressionWrapper, expr, operator_enum, x,
ps::AbstractVector{<:ForwardDiff.Dual{Tag, T, N}}, ::CPUDevice) where {T, N, Tag}
value_fn(x) = ForwardDiff.value(Tag, x)
partials_fn(x, i) = ForwardDiff.partials(Tag, x, i)
Layers.update_de_expression_constants!(expr, value_fn.(ps))
y, Jₚ, _ = eval_grad_tree_array(
expr, x, operator_enum; variable=Val(false), de.eval_options.turbo)
partials = ntuple(
i -> dropdims(sum(partials_fn.(ps, i) .* Jₚ; dims=1); dims=1), N)
fT = promote_type(eltype(y), T, eltype(Jₚ))
partials_y = ForwardDiff.Partials{N, fT}.(tuple.(partials...))
return ForwardDiff.Dual{Tag, fT, N}.(y, partials_y)
end
function Layers.apply_dynamic_expression(
de::Layers.InternalDynamicExpressionWrapper, expr, operator_enum,
x::AbstractMatrix{<:ForwardDiff.Dual{Tag, T1, N}},
ps::AbstractVector{<:ForwardDiff.Dual{Tag, T2, N}},
::CPUDevice) where {T1, T2, N, Tag}
value_fn(x) = ForwardDiff.value(Tag, x)
partials_fn(x, i) = ForwardDiff.partials(Tag, x, i)
ps_value = value_fn.(ps)
x_value = value_fn.(x)
Layers.update_de_expression_constants!(expr, ps_value)
_, Jₓ, _ = eval_grad_tree_array(
expr, x_value, operator_enum; variable=Val(true), de.eval_options.turbo)
y, Jₚ, _ = eval_grad_tree_array(
expr, x_value, operator_enum; variable=Val(false), de.eval_options.turbo)
partials = ntuple(
i -> dropdims(sum(partials_fn.(x, i) .* Jₓ; dims=1); dims=1) .+
dropdims(sum(partials_fn.(ps, i) .* Jₚ; dims=1); dims=1),
N)
fT = promote_type(eltype(y), T1, T2, eltype(Jₓ), eltype(Jₚ))
partials_y = ForwardDiff.Partials{N, fT}.(tuple.(partials...))
return ForwardDiff.Dual{Tag, fT, N}.(y, partials_y)
end
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 246 | module BoltzJLD2Ext
using JLD2: JLD2
using Boltz: InitializeModels, Utils
Utils.is_extension_loaded(::Val{:JLD2}) = true
function InitializeModels.load_using_jld2_internal(args...; kwargs...)
return JLD2.load(args...; kwargs...)
end
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 2526 | module BoltzMetalheadExt
using ArgCheck: @argcheck
using Metalhead: Metalhead
using Lux: Lux, FromFluxAdaptor
using Boltz: Boltz, Utils, Vision
Utils.is_extension_loaded(::Val{:Metalhead}) = true
function Vision.ResNetMetalhead(depth::Int; pretrained::Bool=false)
@argcheck depth in (18, 34, 50, 101, 152)
return FromFluxAdaptor(; preserve_ps_st=pretrained)(Metalhead.ResNet(
depth; pretrain=pretrained).layers)
end
function Vision.ResNeXtMetalhead(
depth::Int; cardinality=32, base_width=nothing, pretrained::Bool=false)
@argcheck depth in (50, 101, 152)
base_width = base_width === nothing ? (depth == 101 ? 8 : 4) : base_width
return FromFluxAdaptor(; preserve_ps_st=pretrained)(Metalhead.ResNeXt(
depth; pretrain=pretrained, cardinality, base_width).layers)
end
function Vision.GoogLeNetMetalhead(; pretrained::Bool=false)
return FromFluxAdaptor(; preserve_ps_st=pretrained)(Metalhead.GoogLeNet(;
pretrain=pretrained).layers)
end
function Vision.DenseNetMetalhead(depth::Int; pretrained::Bool=false)
@argcheck depth in (121, 161, 169, 201)
return FromFluxAdaptor(; preserve_ps_st=pretrained)(Metalhead.DenseNet(
depth; pretrain=pretrained).layers)
end
function Vision.MobileNetMetalhead(name::Symbol; pretrained::Bool=false)
@argcheck name in (:v1, :v2, :v3_small, :v3_large)
adaptor = FromFluxAdaptor(; preserve_ps_st=pretrained)
model = if name == :v1
adaptor(Metalhead.MobileNetv1(; pretrain=pretrained).layers)
elseif name == :v2
adaptor(Metalhead.MobileNetv2(; pretrain=pretrained).layers)
elseif name == :v3_small
adaptor(Metalhead.MobileNetv3(:small; pretrain=pretrained).layers)
elseif name == :v3_large
adaptor(Metalhead.MobileNetv3(:large; pretrain=pretrained).layers)
end
return model
end
function Vision.ConvMixerMetalhead(name::Symbol; pretrained::Bool=false)
@argcheck name in (:base, :large, :small)
return FromFluxAdaptor(; preserve_ps_st=pretrained)(Metalhead.ConvMixer(
name; pretrain=pretrained).layers)
end
function Vision.SqueezeNetMetalhead(; pretrained::Bool=false)
return FromFluxAdaptor(; preserve_ps_st=pretrained)(Metalhead.SqueezeNet(;
pretrain=pretrained).layers)
end
function Vision.WideResNetMetalhead(depth::Int; pretrained::Bool=false)
@argcheck depth in (18, 34, 50, 101, 152)
return FromFluxAdaptor(; preserve_ps_st=pretrained)(Metalhead.WideResNet(
depth; pretrain=pretrained).layers)
end
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 658 | module BoltzReverseDiffExt
using ReverseDiff: ReverseDiff, TrackedArray, @grad_from_chainrules
using MLDataDevices: CPUDevice
using Boltz: Layers
@grad_from_chainrules Layers.apply_dynamic_expression(
de::Layers.InternalDynamicExpressionWrapper, expr, operator_enum, x::TrackedArray,
ps, ::CPUDevice)
@grad_from_chainrules Layers.apply_dynamic_expression(
de::Layers.InternalDynamicExpressionWrapper, expr, operator_enum, x,
ps::TrackedArray, ::CPUDevice)
@grad_from_chainrules Layers.apply_dynamic_expression(
de::Layers.InternalDynamicExpressionWrapper, expr, operator_enum,
x::TrackedArray, ps::TrackedArray, ::CPUDevice)
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 646 | module BoltzTrackerExt
using Tracker: Tracker, TrackedArray, @grad_from_chainrules
using MLDataDevices: CPUDevice
using Boltz: Layers
@grad_from_chainrules Layers.apply_dynamic_expression(
de::Layers.InternalDynamicExpressionWrapper, expr, operator_enum, x::TrackedArray,
ps, ::CPUDevice)
@grad_from_chainrules Layers.apply_dynamic_expression(
de::Layers.InternalDynamicExpressionWrapper, expr, operator_enum, x,
ps::TrackedArray, ::CPUDevice)
@grad_from_chainrules Layers.apply_dynamic_expression(
de::Layers.InternalDynamicExpressionWrapper, expr, operator_enum,
x::TrackedArray, ps::TrackedArray, ::CPUDevice)
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 291 | module BoltzZygoteExt
using ADTypes: AutoZygote
using Zygote: Zygote
using Boltz: Boltz, Layers, Utils
Utils.is_extension_loaded(::Val{:Zygote}) = true
# Hamiltonian NN
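# `hamiltonian_forward` needs ∂H/∂x; summing the scalar model output over the batch
# lets one reverse-mode pass return the per-sample input gradients.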
function Layers.hamiltonian_forward(::AutoZygote, model, x)
return only(Zygote.gradient(sum ∘ model, x))
end
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 236 | module Boltz
# Utility Functions
include("utils.jl")
include("initialize.jl")
# Basis Functions
include("basis.jl")
# Layers
include("layers/Layers.jl")
# Vision Models
include("vision/Vision.jl")
export Basis, Layers, Vision
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 5069 | module Basis
using ArgCheck: @argcheck
using ChainRulesCore: ChainRulesCore
using Compat: @compat
using ConcreteStructs: @concrete
using Markdown: @doc_str
using MLDataDevices: get_device, CPUDevice
using ..Utils: unsqueeze1
const CRC = ChainRulesCore
const ∂∅ = CRC.NoTangent()
# The rrules in this file are hardcoded to be used exclusively with GeneralBasisFunction
@concrete struct GeneralBasisFunction{name}
f
n::Int
dim::Int
end
function Base.show(
io::IO, ::MIME"text/plain", basis::GeneralBasisFunction{name}) where {name}
print(io, "Basis.$(name)(order=$(basis.n))")
end
function (basis::GeneralBasisFunction{name, F})(x::AbstractArray,
grid::Union{AbstractRange, AbstractVector}=1:1:(basis.n)) where {name, F}
@argcheck length(grid) == basis.n
if basis.dim == 1 # Fast path where we don't need to materialize the range
return basis.f.(grid, unsqueeze1(x))
end
@argcheck ndims(x) + 1 ≥ basis.dim
new_x_size = ntuple(
i -> i == basis.dim ? 1 : (i < basis.dim ? size(x, i) : size(x, i - 1)),
ndims(x) + 1)
x_new = reshape(x, new_x_size)
if grid isa AbstractRange
dev = get_device(x)
grid = dev isa CPUDevice ? collect(grid) : dev(grid)
end
grid_shape = ntuple(i -> i == basis.dim ? basis.n : 1, ndims(x) + 1)
grid_new = reshape(grid, grid_shape)
return basis.f.(grid_new, x_new)
end
@doc doc"""
Chebyshev(n; dim::Int=1)
Constructs a Chebyshev basis of the form $[T_{0}(x), T_{1}(x), \dots, T_{n-1}(x)]$ where
$T_j(.)$ is the $j^{th}$ Chebyshev polynomial of the first kind.
## Arguments
- `n`: number of terms in the polynomial expansion.
## Keyword Arguments
- `dim::Int=1`: The dimension along which the basis functions are applied.
"""
Chebyshev(n; dim::Int=1) = GeneralBasisFunction{:Chebyshev}(chebyshev, n, dim)
chebyshev(i, x) = @fastmath cos(i * acos(x))
@doc doc"""
Sin(n; dim::Int=1)
Constructs a sine basis of the form $[\sin(x), \sin(2x), \dots, \sin(nx)]$.
## Arguments
- `n`: number of terms in the sine expansion.
## Keyword Arguments
- `dim::Int=1`: The dimension along which the basis functions are applied.
"""
Sin(n; dim::Int=1) = GeneralBasisFunction{:Sin}(@fastmath(sin∘*), n, dim)
@doc doc"""
Cos(n; dim::Int=1)
Constructs a cosine basis of the form $[\cos(x), \cos(2x), \dots, \cos(nx)]$.
## Arguments
- `n`: number of terms in the cosine expansion.
## Keyword Arguments
- `dim::Int=1`: The dimension along which the basis functions are applied.
"""
Cos(n; dim::Int=1) = GeneralBasisFunction{:Cos}(@fastmath(cos∘*), n, dim)
@doc doc"""
Fourier(n; dim=1)
Constructs a Fourier basis of the form
$$F_j(x) = \begin{cases}
\cos\left(\frac{j}{2}x\right) & \text{if } j \text{ is even} \\
\sin\left(\frac{j}{2}x\right) & \text{if } j \text{ is odd}
\end{cases}$$
## Arguments
- `n`: number of terms in the Fourier expansion.
## Keyword Arguments
- `dim::Int=1`: The dimension along which the basis functions are applied.
"""
Fourier(n; dim::Int=1) = GeneralBasisFunction{:Fourier}(fourier, n, dim)
function fourier(i, x)
s, c = sincos(i * x / 2)
return ifelse(iseven(i), c, s)
end
function CRC.rrule(::typeof(Broadcast.broadcasted), ::typeof(fourier), i, x)
ix_by_2 = @. i * x / 2
s = @. sin(ix_by_2)
c = @. cos(ix_by_2)
y = @. ifelse(iseven(i), c, s)
∇fourier = let s = s, c = c, i = i
Δ -> (∂∅, ∂∅, ∂∅,
dropdims(sum((i / 2) .* ifelse.(iseven.(i), -s, c) .* Δ; dims=1); dims=1))
end
return y, ∇fourier
end
@doc doc"""
Legendre(n; dim::Int=1)
Constructs a Legendre basis of the form $[P_{0}(x), P_{1}(x), \dots, P_{n-1}(x)]$ where
$P_j(.)$ is the $j^{th}$ Legendre polynomial.
## Arguments
- `n`: number of terms in the polynomial expansion.
## Keyword Arguments
- `dim::Int=1`: The dimension along which the basis functions are applied.
"""
Legendre(n; dim::Int=1) = GeneralBasisFunction{:Legendre}(legendre_poly, n, dim)
## Source: https://github.com/ranocha/PolynomialBases.jl/blob/master/src/legendre.jl
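# Three-term recurrence: j*P_j(x) = (2j - 1)*x*P_{j-1}(x) - (j - 1)*P_{j-2}(x),
# seeded with P_0(x) = 1 and P_1(x) = x.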
function legendre_poly(i, x)
p = i - 1
a = one(x)
b = x
p ≤ 0 && return a
p == 1 && return b
for j in 2:p
a, b = b, @fastmath(((2j - 1) * x * b - (j - 1) * a)/j)
end
return b
end
@doc doc"""
Polynomial(n; dim::Int=1)
Constructs a Polynomial basis of the form $[1, x, \dots, x^{(n-1)}]$.
## Arguments
- `n`: number of terms in the polynomial expansion.
## Keyword Arguments
- `dim::Int=1`: The dimension along which the basis functions are applied.
"""
Polynomial(n; dim::Int=1) = GeneralBasisFunction{:Polynomial}(polynomial, n, dim)
polynomial(i, x) = x^(i - 1)
function CRC.rrule(::typeof(Broadcast.broadcasted), ::typeof(polynomial), i, x)
y_m1 = x .^ (i .- 2)
y = y_m1 .* x
∇polynomial = let y_m1 = y_m1, i = i
Δ -> (∂∅, ∂∅, ∂∅, dropdims(sum((i .- 1) .* y_m1 .* Δ; dims=1); dims=1))
end
return y, ∇polynomial
end
@compat public Chebyshev, Sin, Cos, Fourier, Legendre, Polynomial
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 1228 | module InitializeModels
using ArgCheck: @argcheck
using Artifacts: Artifacts, @artifact_str
using Functors: fmap
using LazyArtifacts: LazyArtifacts
using Random: Random
using LuxCore: LuxCore
using ..Utils: is_extension_loaded
get_pretrained_weights_path(name::Symbol) = get_pretrained_weights_path(string(name))
function get_pretrained_weights_path(name::String)
try
return @artifact_str(name)
catch err
err isa ErrorException &&
throw(ArgumentError("no pretrained weights available for `$name`"))
rethrow(err)
end
end
function load_using_jld2(args...; kwargs...)
if !is_extension_loaded(Val(:JLD2))
error("`JLD2.jl` is not loaded. Please load it before trying to load pretrained \
weights.")
end
return load_using_jld2_internal(args...; kwargs...)
end
function load_using_jld2_internal end
struct SerializedRNG end
function remove_rng_from_structure(x)
return fmap(x) do xᵢ
xᵢ isa Random.AbstractRNG && return SerializedRNG()
return xᵢ
end
end
loadparameters(x) = x
function loadstates(x)
return fmap(x) do xᵢ
xᵢ isa SerializedRNG && return Random.default_rng()
return xᵢ
end
end
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 2888 | module Utils
using ForwardDiff: ForwardDiff
using GPUArraysCore: AnyGPUArray
using Statistics: mean
using MLDataDevices: get_device_type, get_device, CPUDevice, CUDADevice
is_extension_loaded(::Val) = false
"""
fast_chunk(x::AbstractArray, ::Val{n}, ::Val{dim})
Type-stable and faster version of `MLUtils.chunk`.
"""
fast_chunk(h::Int, n::Int) = (1:h) .+ h * (n - 1)
function fast_chunk(x::AbstractArray, h::Int, n::Int, ::Val{dim}) where {dim}
return selectdim(x, dim, fast_chunk(h, n))
end
function fast_chunk(x::AnyGPUArray, h::Int, n::Int, ::Val{dim}) where {dim}
return copy(selectdim(x, dim, fast_chunk(h, n)))
end
function fast_chunk(x::AbstractArray, ::Val{N}, d::Val{D}) where {N, D}
return fast_chunk.((x,), size(x, D) ÷ N, 1:N, d)
end
"""
flatten_spatial(x::AbstractArray{T, 4})
Flattens the first 2 dimensions of `x`, and permutes the remaining dimensions to (2, 1, 3).
"""
function flatten_spatial(x::AbstractArray{T, 4}) where {T}
# TODO: Should we do lazy permutedims for non-GPU arrays?
return permutedims(reshape(x, (:, size(x, 3), size(x, 4))), (2, 1, 3))
end
"""
second_dim_mean(x)
Computes the mean of `x` along dimension `2`.
"""
second_dim_mean(x) = dropdims(mean(x; dims=2); dims=2)
"""
should_type_assert(x)
In certain cases, to ensure type-stability we want to add type-asserts. But this won't work
for exotic types like `ForwardDiff.Dual`. We use this function to check if we should add a
type-assert for `x`.
"""
should_type_assert(x::AbstractArray{T}) where {T} = isbitstype(T) && parent(x) === x
should_type_assert(::AbstractArray{<:ForwardDiff.Dual}) = false
should_type_assert(::ForwardDiff.Dual) = false
should_type_assert(x) = true
unsqueeze1(x::AbstractArray) = reshape(x, 1, size(x)...)
unsqueezeN(x::AbstractArray) = reshape(x, size(x)..., 1)
catN(x::AbstractArray{T, N}, y::AbstractArray{T, N}) where {T, N} = cat(x, y; dims=Val(N))
mapreduce_stack(xs) = mapreduce(unsqueezeN, catN, xs)
unwrap_val(x) = x
unwrap_val(::Val{T}) where {T} = T
function safe_warning(msg::AbstractString)
@warn msg maxlog=1
return
end
safe_kron(a, b) = map(safe_kron_internal, a, b)
function safe_kron_internal(a::AbstractVector, b::AbstractVector)
return safe_kron_internal(get_device_type((a, b)), a, b)
end
safe_kron_internal(::Type{CPUDevice}, a::AbstractVector, b::AbstractVector) = kron(a, b)
function safe_kron_internal(::Type{CUDADevice}, a::AbstractVector, b::AbstractVector)
# `kron` on two GPU vectors is not supported directly; the row-vector times
# column-vector matrix form reproduces `kron(a, b)`'s element ordering once flattened.
return vec(kron(reshape(a, 1, :), reshape(b, :, 1)))
end
function safe_kron_internal(::Type{D}, a::AbstractVector, b::AbstractVector) where {D}
safe_warning("`kron` is not supported on $(D). Falling back to `kron` on CPU.")
a_cpu = a |> CPUDevice()
b_cpu = b |> CPUDevice()
return safe_kron_internal(CPUDevice, a_cpu, b_cpu) |> get_device((a, b))
end
struct DataTransferBarrier{V}
val::V
end
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 1791 | module Layers
using ArgCheck: @argcheck
using ADTypes: AutoForwardDiff, AutoZygote
using Compat: @compat
using ConcreteStructs: @concrete
using ChainRulesCore: ChainRulesCore, @non_differentiable, @ignore_derivatives
using Markdown: @doc_str
using Random: AbstractRNG
using Static: Static
using ForwardDiff: ForwardDiff
using Lux: Lux, LuxOps, StatefulLuxLayer
using LuxCore: LuxCore, AbstractLuxLayer, AbstractLuxContainerLayer, AbstractLuxWrapperLayer
using MLDataDevices: get_device, CPUDevice
using NNlib: NNlib
using WeightInitializers: zeros32, randn32
using ..Utils: DataTransferBarrier, fast_chunk, should_type_assert, mapreduce_stack,
unwrap_val, safe_kron, is_extension_loaded, flatten_spatial
const CRC = ChainRulesCore
const NORM_LAYER_DOC = "Function with signature `f(i::Integer, dims::Integer, act::F; \
kwargs...)`. `i` is the location of the layer in the model, \
`dims` is the channel dimension of the input, and `act` is the \
activation function. `kwargs` are forwarded from the `norm_kwargs` \
                             input. The function should return a normalization layer. Defaults \
                             to `nothing`, which means no normalization layer is used."
include("attention.jl")
include("conv_norm_act.jl")
include("dynamic_expressions.jl")
include("encoder.jl")
include("embeddings.jl")
include("hamiltonian.jl")
include("mlp.jl")
include("spline.jl")
include("tensor_product.jl")
@compat(public,
(ClassTokens, ConvBatchNormActivation, ConvNormActivation, DynamicExpressionsLayer,
HamiltonianNN, MultiHeadSelfAttention, MLP, PatchEmbedding, PeriodicEmbedding,
SplineLayer, TensorProductLayer, ViPosEmbedding, VisionTransformerEncoder))
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 1727 | """
MultiHeadSelfAttention(in_planes::Int, number_heads::Int; use_qkv_bias::Bool=false,
attention_dropout_rate::T=0.0f0, projection_dropout_rate::T=0.0f0)
Multi-head self-attention layer
## Arguments
- `in_planes`: number of input channels
- `number_heads`: number of attention heads
- `use_qkv_bias`: whether to use bias in the layer computing the query, key, and value
- `attention_dropout_rate`: dropout probability after the self-attention layer
- `projection_dropout_rate`: dropout probability after the projection layer
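## Example
A minimal usage sketch (a hedged illustration; assumes `Lux` and `Random` are loaded, and
the sizes are arbitrary):
```julia
mhsa = Layers.MultiHeadSelfAttention(64, 8)
ps, st = Lux.setup(Random.default_rng(), mhsa)
x = randn(Float32, 64, 16, 2)  # (features, sequence length, batch)
y, _ = mhsa(x, ps, st)         # y has size (64, 16, 2)
```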
"""
@concrete struct MultiHeadSelfAttention <:
AbstractLuxContainerLayer{(:qkv_layer, :dropout, :projection)}
qkv_layer
dropout
projection
nheads::Int
end
function MultiHeadSelfAttention(
in_planes::Int, number_heads::Int; use_qkv_bias::Bool=false,
attention_dropout_rate::T=0.0f0, projection_dropout_rate::T=0.0f0) where {T}
@argcheck in_planes % number_heads == 0
return MultiHeadSelfAttention(
Lux.Dense(in_planes, in_planes * 3; use_bias=use_qkv_bias),
Lux.Dropout(attention_dropout_rate),
Lux.Chain(Lux.Dense(in_planes => in_planes),
Lux.Dropout(projection_dropout_rate)),
number_heads
)
end
function (mhsa::MultiHeadSelfAttention)(x::AbstractArray{T, 3}, ps, st) where {T}
qkv, st_qkv = mhsa.qkv_layer(x, ps.qkv_layer, st.qkv_layer)
q, k, v = fast_chunk(qkv, Val(3), Val(1))
attn_dropout = StatefulLuxLayer{true}(mhsa.dropout, ps.dropout, st.dropout)
y, _ = NNlib.dot_product_attention(q, k, v; fdrop=attn_dropout, mhsa.nheads)
z, st_proj = mhsa.projection(y, ps.projection, st.projection)
return z, (; qkv_layer=st_qkv, dropout=attn_dropout.st, projection=st_proj)
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 3269 | """
ConvNormActivation(kernel_size::Dims, in_chs::Integer, hidden_chs::Dims{N},
activation; norm_layer=nothing, conv_kwargs=(;), norm_kwargs=(;),
last_layer_activation::Bool=false) where {N}
Construct a Chain of convolutional layers with normalization and activation functions.
## Arguments
- `kernel_size`: size of the convolutional kernel
- `in_chs`: number of input channels
- `hidden_chs`: dimensions of the hidden layers
- `activation`: activation function
## Keyword Arguments
- `norm_layer`: $(NORM_LAYER_DOC)
- `conv_kwargs`: keyword arguments for the convolutional layers
- `norm_kwargs`: keyword arguments for the normalization layers
- `last_layer_activation`: set to `true` to apply the activation function to the last
layer
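## Example
A hedged sketch of a two-layer convolutional stack (sizes and kwargs are illustrative):
```julia
model = Layers.ConvNormActivation((3, 3), 3, (16, 32), NNlib.relu; conv_kwargs=(; pad=1))
ps, st = Lux.setup(Random.default_rng(), model)
y, _ = model(randn(Float32, 28, 28, 3, 4), ps, st)  # y has size (28, 28, 32, 4)
```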
"""
@concrete struct ConvNormActivation <: AbstractLuxWrapperLayer{:model}
model <: Lux.Chain
end
function ConvNormActivation(
kernel_size::Dims, in_chs::Integer, hidden_chs::Dims{N}, activation::F=NNlib.relu;
norm_layer::NF=nothing, conv_kwargs=(;), norm_kwargs=(;),
last_layer_activation::Bool=false) where {N, F, NF}
layers = Vector{AbstractLuxLayer}(undef, N)
for (i, out_chs) in enumerate(hidden_chs)
act = i != N ? activation : (last_layer_activation ? activation : identity)
layers[i] = conv_norm_act(
i, kernel_size, in_chs => out_chs, act, norm_layer, conv_kwargs, norm_kwargs)
in_chs = out_chs
end
inner_blocks = NamedTuple{ntuple(i -> Symbol(:block, i), N)}(layers)
return ConvNormActivation(Lux.Chain(inner_blocks))
end
@concrete struct ConvNormActivationBlock <: AbstractLuxWrapperLayer{:block}
block <: Union{<:Lux.Conv, Lux.Chain}
end
function conv_norm_act(
i::Integer, kernel_size::Dims, (in_chs, out_chs)::Pair{<:Integer, <:Integer},
activation::F, norm_layer::NF, conv_kwargs, norm_kwargs) where {F, NF}
model = if norm_layer === nothing
Lux.Conv(kernel_size, in_chs => out_chs, activation; conv_kwargs...)
else
Lux.Chain(; conv=Lux.Conv(kernel_size, in_chs => out_chs; conv_kwargs...),
norm=norm_layer(i, out_chs, activation; norm_kwargs...))
end
return ConvNormActivationBlock(model)
end
"""
ConvBatchNormActivation(kernel_size::Dims, (in_filters, out_filters)::Pair{Int, Int},
depth::Int, act::F; use_norm::Bool=true, conv_kwargs=(;),
last_layer_activation::Bool=true, norm_kwargs=(;)) where {F}
This function is a convenience wrapper around [`ConvNormActivation`](@ref) that constructs a
chain with `norm_layer` set to `Lux.BatchNorm` if `use_norm` is `true` and `nothing`
otherwise. In most cases, users should use [`ConvNormActivation`](@ref) directly for a more
flexible interface.
"""
function ConvBatchNormActivation(
kernel_size::Dims, (in_filters, out_filters)::Pair{Int, Int},
depth::Int, act::F; use_norm::Bool=true, kwargs...) where {F}
return ConvNormActivation(kernel_size,
in_filters,
ntuple(Returns(out_filters), depth),
act;
norm_layer=use_norm ?
(i, chs, bn_act; kwargs...) -> Lux.BatchNorm(chs, bn_act; kwargs...) :
nothing,
last_layer_activation=true,
kwargs...)
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 5189 | """
DynamicExpressionsLayer(operator_enum::OperatorEnum, expressions::Node...;
eval_options::EvalOptions=EvalOptions())
DynamicExpressionsLayer(operator_enum::OperatorEnum,
expressions::AbstractVector{<:Node}; kwargs...)
Wraps a `DynamicExpressions.jl` `Node` into a Lux layer and allows the constant nodes to
be updated using any of the AD Backends.
For details about these expressions, refer to the
[`DynamicExpressions.jl` documentation](https://symbolicml.org/DynamicExpressions.jl/dev/types/).
## Arguments
- `operator_enum`: `OperatorEnum` from `DynamicExpressions.jl`
- `expressions`: `Node` from `DynamicExpressions.jl` or `AbstractVector{<:Node}`
## Keyword Arguments
- `turbo`: Use `LoopVectorization.jl` for faster evaluation **(Deprecated)**
- `bumper`: Use `Bumper.jl` for faster evaluation **(Deprecated)**
- `eval_options`: EvalOptions from `DynamicExpressions.jl`
These options are simply forwarded to `DynamicExpressions.jl`'s `eval_tree_array`
and `eval_grad_tree_array` functions.
# Extended Help
## Example
```jldoctest
julia> operators = OperatorEnum(; binary_operators=[+, -, *], unary_operators=[cos]);
julia> x1 = Node(; feature=1);
julia> x2 = Node(; feature=2);
julia> expr_1 = x1 * cos(x2 - 3.2)
x1 * cos(x2 - 3.2)
julia> expr_2 = x2 - x1 * x2 + 2.5 - 1.0 * x1
((x2 - (x1 * x2)) + 2.5) - (1.0 * x1)
julia> layer = Layers.DynamicExpressionsLayer(operators, expr_1, expr_2);
julia> ps, st = Lux.setup(Random.default_rng(), layer)
((layer_1 = (layer_1 = (params = Float32[3.2],), layer_2 = (params = Float32[2.5, 1.0],)), layer_2 = NamedTuple()), (layer_1 = (layer_1 = NamedTuple(), layer_2 = NamedTuple()), layer_2 = NamedTuple()))
julia> x = [1.0f0 2.0f0 3.0f0
4.0f0 5.0f0 6.0f0]
2×3 Matrix{Float32}:
1.0 2.0 3.0
4.0 5.0 6.0
julia> layer(x, ps, st)[1] ≈ Float32[0.6967068 -0.4544041 -2.8266668; 1.5 -4.5 -12.5]
true
julia> ∂x, ∂ps, _ = Zygote.gradient(Base.Fix1(sum, abs2) ∘ first ∘ layer, x, ps, st);
julia> ∂x ≈ Float32[-14.0292 54.206482 180.32669; -0.9995737 10.7700815 55.6814]
true
julia> ∂ps.layer_1.layer_1.params ≈ Float32[-6.451908]
true
julia> ∂ps.layer_1.layer_2.params ≈ Float32[-31.0, 90.0]
true
```
"""
@concrete struct DynamicExpressionsLayer <: AbstractLuxWrapperLayer{:chain}
chain
end
@concrete struct InternalDynamicExpressionWrapper <: AbstractLuxLayer
operator_enum
expression
eval_options
end
function Base.show(io::IO, l::InternalDynamicExpressionWrapper)
print(io,
"InternalDynamicExpressionWrapper($(l.operator_enum), $(l.expression); \
eval_options=$(l.eval_options))")
end
function LuxCore.initialparameters(::AbstractRNG, layer::InternalDynamicExpressionWrapper)
params = map(Base.Fix2(getproperty, :val),
filter(node -> node.degree == 0 && node.constant, layer.expression))
return (; params)
end
function update_de_expression_constants!(expression, ps)
# Don't use `set_constant_refs!` here, since it requires the types to match. In our
# case we just warn the user
params = filter(node -> node.degree == 0 && node.constant, expression)
foreach(enumerate(params)) do (i, node)
(node.val isa typeof(ps[i])) ||
@warn lazy"node.val::$(typeof(node.val)) != ps[$i]::$(typeof(ps[i])). Type of node.val takes precedence. Fix the input expression if this is unintended." maxlog=1
return node.val = ps[i]
end
return
end
function (de::InternalDynamicExpressionWrapper)(x::AbstractVector, ps, st)
y, stₙ = de(reshape(x, :, 1), ps, st)
return vec(y), stₙ
end
# NOTE: Can't use `get_device_type` since it causes problems with ReverseDiff
function (de::InternalDynamicExpressionWrapper)(x::AbstractMatrix, ps, st)
y = apply_dynamic_expression(de, de.expression, de.operator_enum,
Lux.match_eltype(de, ps, st, x), ps.params, get_device(x))
return y, st
end
function apply_dynamic_expression(
de::InternalDynamicExpressionWrapper, expr, operator_enum, x, ps, ::CPUDevice)
if !is_extension_loaded(Val(:DynamicExpressions))
error("`DynamicExpressions.jl` is not loaded. Please load it before using \
`DynamicExpressionsLayer`.")
end
return apply_dynamic_expression_internal(de, expr, operator_enum, x, ps)
end
function apply_dynamic_expression(de, expr, operator_enum, x, ps, dev)
throw(ArgumentError("`DynamicExpressions.jl` only supports CPU operations. Current \
device detected as $(dev). CUDA.jl will be supported after \
https://github.com/SymbolicML/DynamicExpressions.jl/pull/65 is \
merged upstream."))
end
function CRC.rrule(
::typeof(apply_dynamic_expression), de::InternalDynamicExpressionWrapper,
expr, operator_enum, x, ps, ::CPUDevice)
if !is_extension_loaded(Val(:DynamicExpressions))
error("`DynamicExpressions.jl` is not loaded. Please load it before using \
`DynamicExpressionsLayer`.")
end
return ∇apply_dynamic_expression(de, expr, operator_enum, x, ps)
end
function apply_dynamic_expression_internal end
function ∇apply_dynamic_expression end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 5509 | """
ClassTokens(dim; init=zeros32)
Appends class tokens to an input with embedding dimension `dim` for use in many vision
transformer models.
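## Example
A short sketch appending a class token to a batch of 16 patch embeddings (sizes are
illustrative):
```julia
ct = Layers.ClassTokens(8)
ps, st = Lux.setup(Random.default_rng(), ct)
y, _ = ct(randn(Float32, 8, 16, 2), ps, st)  # y has size (8, 17, 2)
```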
"""
@concrete struct ClassTokens <: AbstractLuxLayer
dim::Int
init
end
ClassTokens(dim::Int; init=zeros32) = ClassTokens(dim, init)
LuxCore.initialparameters(rng::AbstractRNG, c::ClassTokens) = (; token=c.init(rng, c.dim))
function (m::ClassTokens)(x::AbstractArray{T, N}, ps, st) where {T, N}
tokens = reshape(ps.token, :, ntuple(_ -> 1, N - 1)...) .* ones_batch_like(x)
return cat(x, tokens; dims=Val(N - 1)), st
end
function ones_batch_like(x::AbstractArray{T, N}) where {T, N}
return fill!(similar(x, ntuple(_ -> 1, N - 1)..., size(x, N)), one(T))
end
@non_differentiable ones_batch_like(x::AbstractArray)
"""
ViPosEmbedding(embedding_size, number_patches; init = randn32)
Positional embedding layer used by many vision transformer-like models.
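## Example
A short sketch (sizes are illustrative):
```julia
pe = Layers.ViPosEmbedding(8, 16)
ps, st = Lux.setup(Random.default_rng(), pe)
y, _ = pe(randn(Float32, 8, 16, 2), ps, st)  # same size as the input
```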
"""
@concrete struct ViPosEmbedding <: AbstractLuxLayer
embedding_size::Int
number_patches::Int
init
end
function ViPosEmbedding(embedding_size::Int, number_patches::Int; init=randn32)
return ViPosEmbedding(embedding_size, number_patches, init)
end
function LuxCore.initialparameters(rng::AbstractRNG, v::ViPosEmbedding)
return (; vectors=v.init(rng, v.embedding_size, v.number_patches))
end
(v::ViPosEmbedding)(x, ps, st) = x .+ ps.vectors, st
"""
PeriodicEmbedding(idxs, periods)
Create an embedding periodic in some inputs with specified periods. Input indices not in
`idxs` are passed through unchanged, but inputs in `idxs` are moved to the end of the
output and replaced with their sines, followed by their cosines (scaled appropriately to
have the specified periods). This smooth embedding preserves phase information and enforces
periodicity.
For example, `layer = PeriodicEmbedding([2, 3], [3.0, 1.0])` will create a layer periodic in
the second input with period 3.0 and periodic in the third input with period 1.0. In this
case, `layer([a, b, c, d], ps, st) == ([a, d, sinpi(2 / 3.0 * b), sinpi(2 / 1.0 * c), cospi(2 / 3.0 * b), cospi(2 / 1.0 * c)], st)`.
## Arguments
- `idxs`: Indices of the periodic inputs
- `periods`: Periods of the periodic inputs, in the same order as in `idxs`
## Inputs
- `x` must be an `AbstractArray` with `issubset(idxs, axes(x, 1))`
- `st` must be a `NamedTuple` where `st.k = 2 ./ periods`, but on the same device as `x`
## Returns
- `AbstractArray` of size `(size(x, 1) + length(idxs), ...)` where `...` are the other
dimensions of `x`.
- `st`, unchanged
## Example
```jldoctest
julia> layer = Layers.PeriodicEmbedding([2], [4.0])
PeriodicEmbedding([2], [4.0])
julia> ps, st = Lux.setup(Random.default_rng(), layer);
julia> all(layer([1.1, 2.2, 3.3], ps, st)[1] .==
[1.1, 3.3, sinpi(2 / 4.0 * 2.2), cospi(2 / 4.0 * 2.2)])
true
```
"""
@concrete struct PeriodicEmbedding <: AbstractLuxLayer
idxs
periods
end
function LuxCore.initialstates(::AbstractRNG, pe::PeriodicEmbedding)
return (; idxs=DataTransferBarrier(pe.idxs), k=2 ./ pe.periods)
end
function (pe::PeriodicEmbedding)(x::AbstractVector, ps, st::NamedTuple)
return vec(first(pe(reshape(x, :, 1), ps, st))), st
end
function (p::PeriodicEmbedding)(x::AbstractMatrix, _, st::NamedTuple)
idxs = st.idxs.val
other_idxs = @ignore_derivatives setdiff(axes(x, 1), idxs)
y = vcat(x[other_idxs, :], sinpi.(st.k .* x[idxs, :]), cospi.(st.k .* x[idxs, :]))
return y, st
end
function (p::PeriodicEmbedding)(x::AbstractArray, ps, st::NamedTuple)
return reshape(first(p(reshape(x, size(x, 1), :), ps, st)), :, size(x)[2:end]...), st
end
function Base.show(io::IO, ::MIME"text/plain", p::PeriodicEmbedding)
return print(io, "PeriodicEmbedding(", p.idxs, ", ", p.periods, ")")
end
"""
PatchEmbedding(image_size, patch_size, in_channels, embed_planes;
norm_layer=Returns(Lux.NoOpLayer()), flatten=true)
Constructs a patch embedding layer with the given image size, patch size, input channels,
and embedding planes. The patch size must be a divisor of the image size.
## Arguments
- `image_size`: image size as a tuple
- `patch_size`: patch size as a tuple
- `in_channels`: number of input channels
- `embed_planes`: number of embedding planes
## Keyword Arguments
- `norm_layer`: Takes the embedding planes as input and returns a layer that normalizes
the embedding planes. Defaults to no normalization.
- `flatten`: set to `true` to flatten the output of the convolutional layer
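## Example
A hedged sketch splitting a 32×32 RGB image into 8×8 patches (sizes are illustrative):
```julia
pe = Layers.PatchEmbedding((32, 32), (8, 8), 3, 64)
ps, st = Lux.setup(Random.default_rng(), pe)
y, _ = pe(randn(Float32, 32, 32, 3, 2), ps, st)  # y has size (64, 16, 2)
```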
"""
@concrete struct PatchEmbedding <: AbstractLuxContainerLayer{(:patch, :flatten, :norm)}
patch <: AbstractLuxLayer
flatten <: AbstractLuxLayer
norm <: AbstractLuxLayer
end
function (pe::PatchEmbedding)(x, ps, st)
y₁, st₁ = pe.patch(x, ps.patch, st.patch)
y₂, st₂ = pe.flatten(y₁, ps.flatten, st.flatten)
y₃, st₃ = pe.norm(y₂, ps.norm, st.norm)
return y₃, (; patch=st₁, flatten=st₂, norm=st₃)
end
function PatchEmbedding(image_size::Dims{N}, patch_size::Dims{N}, in_channels::Int,
embed_planes::Int; norm_layer=Returns(Lux.NoOpLayer()), flatten::Bool=true) where {N}
foreach(zip(image_size, patch_size)) do (i, p)
@argcheck i % p==0 "Image size ($i) must be divisible by patch size ($p)"
end
return PatchEmbedding(
Lux.Conv(patch_size, in_channels => embed_planes; stride=patch_size),
ifelse(flatten, Lux.WrappedFunction(flatten_spatial), Lux.NoOpLayer()),
norm_layer(embed_planes)
)
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 1619 | """
VisionTransformerEncoder(in_planes, depth, number_heads; mlp_ratio = 4.0f0,
dropout_rate = 0.0f0)
Transformer as used in the base ViT architecture [dosovitskiy2020image](@citep).
## Arguments
- `in_planes`: number of input channels
- `depth`: number of attention blocks
- `number_heads`: number of attention heads
## Keyword Arguments
- `mlp_ratio`: ratio of MLP layers to the number of input channels
- `dropout_rate`: dropout rate
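## Example
A hedged sketch (sizes are illustrative):
```julia
enc = Layers.VisionTransformerEncoder(64, 2, 8)
ps, st = Lux.setup(Random.default_rng(), enc)
y, _ = enc(randn(Float32, 64, 17, 2), ps, st)  # y has size (64, 17, 2)
```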
"""
@concrete struct VisionTransformerEncoder <: AbstractLuxWrapperLayer{:chain}
chain <: Lux.Chain
end
function VisionTransformerEncoder(
in_planes, depth, number_heads; mlp_ratio=4.0f0, dropout_rate=0.0f0)
hidden_planes = floor(Int, mlp_ratio * in_planes)
layers = [Lux.Chain(
Lux.SkipConnection(
Lux.Chain(Lux.LayerNorm((in_planes, 1); dims=1, affine=true),
MultiHeadSelfAttention(
in_planes, number_heads; attention_dropout_rate=dropout_rate,
projection_dropout_rate=dropout_rate)),
+),
Lux.SkipConnection(
Lux.Chain(Lux.LayerNorm((in_planes, 1); dims=1, affine=true),
Lux.Chain(Lux.Dense(in_planes => hidden_planes, NNlib.gelu),
Lux.Dropout(dropout_rate),
Lux.Dense(hidden_planes => in_planes),
Lux.Dropout(dropout_rate))),
+)) for _ in 1:depth]
return VisionTransformerEncoder(Lux.Chain(layers...))
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 3913 | """
HamiltonianNN{FST}(model; autodiff=nothing) where {FST}
Constructs a Hamiltonian Neural Network [greydanus2019hamiltonian](@citep). This neural
network is useful for learning symmetries and conservation laws by supervision on the
gradients of the trajectories. It takes as input a concatenated vector of length `2n`
containing the position (of size `n`) and momentum (of size `n`) of the particles. It then
returns the time derivatives for position and momentum.
## Arguments
- `FST`: If `true`, then the type of the state returned by the model must be same as the
type of the input state. See the documentation on `StatefulLuxLayer` for more
information.
- `model`: A `Lux.AbstractLuxLayer` neural network that returns the Hamiltonian of
the system. The `model` must return a "batched scalar", i.e. all the dimensions of the
output except the last one must be equal to 1. The last dimension must be equal to the
batchsize of the input.
## Keyword Arguments
- `autodiff`: The autodiff framework to be used for the internal Hamiltonian computation.
The default is `nothing`, which selects the best possible backend available. The
available options are `AutoForwardDiff` and `AutoZygote`.
## Autodiff Backends
| `autodiff` | Package Needed | Notes |
|:----------------- |:-------------- |:---------------------------------------------------------------------------- |
| `AutoZygote` | `Zygote.jl` | Preferred Backend. Chosen if `Zygote` is loaded and `autodiff` is `nothing`. |
| `AutoForwardDiff` | | Chosen if `Zygote` is not loaded and `autodiff` is `nothing`. |
!!! note
This layer uses nested autodiff. Please refer to the manual entry on
[Nested Autodiff](https://lux.csail.mit.edu/stable/manual/nested_autodiff) for more
information and known limitations.
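## Example
A hedged sketch for a system with one position and one momentum coordinate (the wrapped
network returns a batched scalar, hence the final output dimension of 1):
```julia
hnn = Layers.HamiltonianNN{true}(Layers.MLP(2, (16, 16, 1), NNlib.gelu))
ps, st = Lux.setup(Random.default_rng(), hnn)
x = randn(Float32, 2, 4)  # [position; momentum] for a batch of 4
dx, st = hnn(x, ps, st)   # time derivatives, same size as `x`
```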
"""
@concrete struct HamiltonianNN <: AbstractLuxWrapperLayer{:model}
fixed_state_type
model
autodiff
end
function HamiltonianNN{FST}(model; autodiff=nothing) where {FST}
@argcheck autodiff isa Union{Nothing, AutoForwardDiff, AutoZygote}
zygote_loaded = is_extension_loaded(Val(:Zygote))
if autodiff === nothing # Select best possible backend
autodiff = ifelse(zygote_loaded, AutoZygote(), AutoForwardDiff())
else
if autodiff isa AutoZygote && !zygote_loaded
throw(ArgumentError("`autodiff` cannot be `AutoZygote` when `Zygote.jl` is not \
loaded."))
end
end
return HamiltonianNN(Static.static(FST), model, autodiff)
end
function LuxCore.initialstates(rng::AbstractRNG, hnn::HamiltonianNN)
return (; model=LuxCore.initialstates(rng, hnn.model), first_call=true)
end
hamiltonian_forward(::AutoForwardDiff, model, x) = ForwardDiff.gradient(sum ∘ model, x)
function (hnn::HamiltonianNN)(x::AbstractVector, ps, st)
y, stₙ = hnn(reshape(x, :, 1), ps, st)
return vec(y), stₙ
end
function (hnn::HamiltonianNN)(x::AbstractArray{T, N}, ps, st) where {T, N}
model = StatefulLuxLayer{Static.known(hnn.fixed_state_type)}(hnn.model, ps, st.model)
st.first_call && check_hamiltonian_layer(hnn.model, x, ps, st.model)
if should_type_assert(x) && should_type_assert(ps)
H = hamiltonian_forward(hnn.autodiff, model, x)::typeof(x)
else
H = hamiltonian_forward(hnn.autodiff, model, x)
end
n = size(H, N - 1) ÷ 2
return (
cat(selectdim(H, N - 1, (n + 1):(2n)), selectdim(H, N - 1, 1:n); dims=Val(N - 1)),
(; model=model.st, first_call=false))
end
function check_hamiltonian_layer(model, x::AbstractArray{T, N}, ps, st) where {T, N}
y = first(model(x, ps, st))
@argcheck all(isone, size(y)[1:(end - 1)]) && size(y, ndims(y)) == size(x, N)
end
@non_differentiable check_hamiltonian_layer(::Any...)
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 2837 | """
MLP(in_dims::Integer, hidden_dims::Dims{N}, activation=NNlib.relu; norm_layer=nothing,
dropout_rate::Real=0.0f0, dense_kwargs=(;), norm_kwargs=(;),
last_layer_activation=false) where {N}
Construct a multi-layer perceptron (MLP) with dense layers, optional normalization layers,
and dropout.
## Arguments
- `in_dims`: number of input dimensions
- `hidden_dims`: dimensions of the hidden layers
- `activation`: activation function (applied after the normalization layer if present,
  otherwise after the dense layer)
## Keyword Arguments
- `norm_layer`: $(NORM_LAYER_DOC)
- `dropout_rate`: dropout rate (default: `0.0f0`)
- `dense_kwargs`: keyword arguments for the dense layers
- `norm_kwargs`: keyword arguments for the normalization layers
- `last_layer_activation`: set to `true` to apply the activation function to the last
layer
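## Example
A minimal sketch (sizes are illustrative):
```julia
mlp = Layers.MLP(2, (32, 32, 4), NNlib.gelu; dropout_rate=0.1f0)
ps, st = Lux.setup(Random.default_rng(), mlp)
y, _ = mlp(randn(Float32, 2, 16), ps, st)  # y has size (4, 16)
```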
"""
@concrete struct MLP <: AbstractLuxWrapperLayer{:chain}
chain <: Lux.Chain
end
function MLP(in_dims::Integer, hidden_dims::Dims{N}, activation::F=NNlib.relu;
norm_layer::NF=nothing, dropout_rate::Real=0.0f0, last_layer_activation::Bool=false,
dense_kwargs=(;), norm_kwargs=(;)) where {N, F, NF}
@argcheck N > 0
layers = Vector{AbstractLuxLayer}(undef, N)
for (i, out_dims) in enumerate(hidden_dims)
act = i != N ? activation : (last_layer_activation ? activation : identity)
layers[i] = dense_norm_act_dropout(i, in_dims => out_dims, act, norm_layer,
dropout_rate, dense_kwargs, norm_kwargs)
in_dims = out_dims
end
inner_blocks = NamedTuple{ntuple(i -> Symbol(:block, i), N)}(layers)
return MLP(Lux.Chain(inner_blocks))
end
@concrete struct DenseNormActDropoutBlock <: AbstractLuxWrapperLayer{:block}
block
end
function dense_norm_act_dropout(
i::Integer, (in_dims, out_dims)::Pair{<:Integer, <:Integer}, activation::F,
norm_layer::NF, dropout_rate::Real, dense_kwargs, norm_kwargs) where {F, NF}
if iszero(dropout_rate)
if norm_layer === nothing
return DenseNormActDropoutBlock(Lux.Chain(;
dense=Lux.Dense(in_dims => out_dims, activation; dense_kwargs...)))
end
return DenseNormActDropoutBlock(Lux.Chain(;
dense=Lux.Dense(in_dims => out_dims; dense_kwargs...),
norm=norm_layer(i, out_dims, activation; norm_kwargs...)))
end
if norm_layer === nothing
return DenseNormActDropoutBlock(Lux.Chain(;
dense=Lux.Dense(in_dims => out_dims, activation; dense_kwargs...),
dropout=Lux.Dropout(dropout_rate)))
end
return DenseNormActDropoutBlock(Lux.Chain(;
dense=Lux.Dense(in_dims => out_dims; dense_kwargs...),
norm=norm_layer(i, out_dims, activation; norm_kwargs...),
dropout=Lux.Dropout(dropout_rate)))
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 2549 | """
SplineLayer(in_dims, grid_min, grid_max, grid_step, basis::Type{Basis};
train_grid::Union{Val, Bool}=Val(false), init_saved_points=nothing)
Constructs a spline layer with the given basis function.
## Arguments
- `in_dims`: input dimensions of the layer. This must be a tuple of integers; to construct
  a flat vector of `saved_points`, pass in `()`.
- `grid_min`: minimum value of the grid.
- `grid_max`: maximum value of the grid.
- `grid_step`: step size of the grid.
- `basis`: basis function to use for the interpolation. Currently only the basis functions
from DataInterpolations.jl are supported:
1. `ConstantInterpolation`
2. `LinearInterpolation`
3. `QuadraticInterpolation`
4. `QuadraticSpline`
5. `CubicSpline`
## Keyword Arguments
- `train_grid`: whether to train the grid or not.
- `init_saved_points`: values of the function at multiples of the time step. Initialized
  by default to values drawn uniformly at random. Alternatively, can take a
function with the signature
`init_saved_points(rng, in_dims, grid_min, grid_max, grid_step)`.
!!! warning
Currently this layer is limited since it relies on DataInterpolations.jl which doesn't
work with GPU arrays. This will be fixed in the future by extending support to different
basis functions.
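## Example
A hedged sketch using `LinearInterpolation` (requires `DataInterpolations.jl` to be
loaded):
```julia
using DataInterpolations
spline = Layers.SplineLayer((), 0.0f0, 1.0f0, 0.1f0, LinearInterpolation)
ps, st = Lux.setup(Random.default_rng(), spline)
y, _ = spline(rand(Float32, 4), ps, st)  # y has size (4,)
```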
"""
@concrete struct SplineLayer{TG, B, T} <: AbstractLuxLayer
grid_min::T
grid_max::T
grid_step::T
basis
in_dims
init_saved_points
end
function SplineLayer(in_dims::Dims, grid_min, grid_max, grid_step, basis::Type{Basis};
train_grid::Union{Val, Bool}=Val(false), init_saved_points=nothing) where {Basis}
return SplineLayer{unwrap_val(train_grid), Basis}(
grid_min, grid_max, grid_step, basis, in_dims, init_saved_points)
end
function LuxCore.initialparameters(
rng::AbstractRNG, layer::SplineLayer{TG, B, T}) where {TG, B, T}
if layer.init_saved_points === nothing
saved_points = rand(rng, T, layer.in_dims...,
length((layer.grid_min):(layer.grid_step):(layer.grid_max)))
else
        saved_points = layer.init_saved_points(
            rng, layer.in_dims, layer.grid_min, layer.grid_max, layer.grid_step)
end
TG || return (; saved_points)
return (;
saved_points, grid=collect((layer.grid_min):(layer.grid_step):(layer.grid_max)))
end
function LuxCore.initialstates(::AbstractRNG, layer::SplineLayer{false})
return (; grid=collect((layer.grid_min):(layer.grid_step):(layer.grid_max)),)
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 1914 | @doc doc"""
TensorProductLayer(basis_fns, out_dim::Int; init_weight = randn32)
Constructs the Tensor Product Layer, which takes as input an array of n tensor product
basis, $[B_1, B_2, \dots, B_n]$ a data point x, computes
$$z_i = W_{i, :} \odot [B_1(x_1) \otimes B_2(x_2) \otimes \dots \otimes B_n(x_n)]$$
where $W$ is the layer's weight, and returns $[z_1, \dots, z_{out}]$.
## Arguments
- `basis_fns`: Array of TensorProductBasis $[B_1(n_1), \dots, B_k(n_k)]$, where $k$
corresponds to the dimension of the input.
- `out_dim`: Dimension of the output.
## Keyword Arguments
- `init_weight`: Initializer for the weight matrix. Defaults to `randn32`.
!!! warning "Limited Backend Support"
Support for backends apart from CPU and CUDA is limited and slow due to limited
support for `kron` in the backend.
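## Example
A hedged sketch using Chebyshev bases from Boltz's `Basis` module (sizes are
illustrative):
```julia
layer = Layers.TensorProductLayer([Basis.Chebyshev(n + 2) for n in 1:3], 4)
ps, st = Lux.setup(Random.default_rng(), layer)
x = tanh.(randn(Float32, 2, 3, 5))  # second-to-last dim must equal length(basis_fns)
y, _ = layer(x, ps, st)             # y has size (2, 4, 5)
```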
"""
@concrete struct TensorProductLayer <: AbstractLuxWrapperLayer{:dense}
basis_fns
dense
out_dim::Int
end
function TensorProductLayer(basis_fns, out_dim::Int; init_weight::F=randn32) where {F}
dense = Lux.Dense(
prod(Base.Fix2(getproperty, :n), basis_fns) => out_dim; use_bias=false, init_weight)
return TensorProductLayer(Tuple(basis_fns), dense, out_dim)
end
function (tp::TensorProductLayer)(x::AbstractVector, ps, st)
y, stₙ = tp(reshape(x, :, 1), ps, st)
return vec(y), stₙ
end
function (tp::TensorProductLayer)(x::AbstractArray{T, N}, ps, st) where {T, N}
x′ = LuxOps.eachslice(x, Val(N - 1)) # [I1, I2, ..., B] × T
@argcheck length(x′) == length(tp.basis_fns)
y = mapfoldl(safe_kron, zip(tp.basis_fns, x′)) do (fn, xᵢ)
eachcol(reshape(fn(xᵢ), :, prod(size(xᵢ))))
end # [[D₁, ..., Dₙ] × (I1, I2, ..., B)]
z, stₙ = tp.dense(mapreduce_stack(y), ps, st)
return reshape(z, size(x)[1:(end - 2)]..., tp.out_dim, size(x)[end]), stₙ
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 1407 | module Vision
using ArgCheck: @argcheck
using Compat: @compat
using ConcreteStructs: @concrete
using Random: Random, AbstractRNG
using Lux: Lux
using LuxCore: LuxCore, AbstractLuxLayer, AbstractLuxWrapperLayer
using NNlib: relu
using ..InitializeModels: InitializeModels
using ..Layers: Layers, ConvBatchNormActivation, ClassTokens, PatchEmbedding,
ViPosEmbedding, VisionTransformerEncoder
using ..Utils: second_dim_mean, is_extension_loaded
abstract type AbstractLuxVisionLayer <: AbstractLuxWrapperLayer{:layer} end
for op in (:states, :parameters)
fname = Symbol(:initial, op)
fname_load = Symbol(:load, op)
@eval function LuxCore.$(fname)(rng::AbstractRNG, model::AbstractLuxVisionLayer)
if hasfield(typeof(model), :pretrained) && model.pretrained
path = InitializeModels.get_pretrained_weights_path(model.pretrained_name)
jld2_loaded_obj = InitializeModels.load_using_jld2(
joinpath(path, "$(model.pretrained_name).jld2"), $(string(op)))
return InitializeModels.$(fname_load)(jld2_loaded_obj)
end
return LuxCore.$(fname)(rng, model.layer)
end
end
include("extensions.jl")
include("alexnet.jl")
include("vit.jl")
include("vgg.jl")
@compat(public,
(AlexNet, ConvMixer, DenseNet, MobileNet, ResNet, ResNeXt,
SqueezeNet, GoogLeNet, ViT, VisionTransformer, VGG, WideResNet))
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 1212 | """
AlexNet(; kwargs...)
Create an AlexNet model [krizhevsky2012imagenet](@citep).
## Keyword Arguments
- `pretrained::Bool=false`: If `true`, loads pretrained weights when `LuxCore.setup` is
called.
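## Example
A hedged sketch (assumes `Lux` and `Random` are loaded):
```julia
model = Vision.AlexNet()
ps, st = Lux.setup(Random.default_rng(), model)
y, _ = model(randn(Float32, 224, 224, 3, 1), ps, st)  # logits of size (1000, 1)
```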
"""
@concrete struct AlexNet <: AbstractLuxVisionLayer
layer
pretrained_name::Symbol
pretrained::Bool
end
function AlexNet(; pretrained=false)
alexnet = Lux.Chain(;
backbone=Lux.Chain(
Lux.Conv((11, 11), 3 => 64, relu; stride=4, pad=2),
Lux.MaxPool((3, 3); stride=2),
Lux.Conv((5, 5), 64 => 192, relu; pad=2),
Lux.MaxPool((3, 3); stride=2),
Lux.Conv((3, 3), 192 => 384, relu; pad=1),
Lux.Conv((3, 3), 384 => 256, relu; pad=1),
Lux.Conv((3, 3), 256 => 256, relu; pad=1),
Lux.MaxPool((3, 3); stride=2)
),
classifier=Lux.Chain(
Lux.AdaptiveMeanPool((6, 6)),
Lux.FlattenLayer(),
Lux.Dropout(0.5f0),
Lux.Dense(256 * 6 * 6 => 4096, relu),
Lux.Dropout(0.5f0),
Lux.Dense(4096 => 4096, relu),
Lux.Dense(4096 => 1000)
)
)
return AlexNet(alexnet, :alexnet, pretrained)
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 3776 | """
ResNet(depth::Int; pretrained::Bool=false)
Create a ResNet model [he2016deep](@citep).
## Arguments
- `depth::Int`: The depth of the ResNet model. Must be one of 18, 34, 50, 101, or 152.
## Keyword Arguments
- `pretrained::Bool=false`: If `true`, loads pretrained weights when `LuxCore.setup` is
called.
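## Example
A hedged sketch (requires `Metalhead.jl` to be loaded):
```julia
using Metalhead
model = Vision.ResNet(18)
ps, st = Lux.setup(Random.default_rng(), model)
y, _ = model(randn(Float32, 224, 224, 3, 1), ps, st)  # logits of size (1000, 1)
```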
"""
function ResNet end
"""
ResNeXt(depth::Int; cardinality=32, base_width=nothing, pretrained::Bool=false)
Create a ResNeXt model [xie2017aggregated](@citep).
## Arguments
- `depth::Int`: The depth of the ResNeXt model. Must be one of 50, 101, or 152.
## Keyword Arguments
- `pretrained::Bool=false`: If `true`, loads pretrained weights when `LuxCore.setup` is
called.
- `cardinality`: The cardinality of the ResNeXt model. Defaults to 32.
- `base_width`: The base width of the ResNeXt model. Defaults to 8 for depth 101 and 4
otherwise.
"""
function ResNeXt end
"""
GoogLeNet(; pretrained::Bool=false)
Create a GoogLeNet model [szegedy2015going](@citep).
## Keyword Arguments
- `pretrained::Bool=false`: If `true`, loads pretrained weights when `LuxCore.setup` is
called.
"""
function GoogLeNet end
"""
DenseNet(depth::Int; pretrained::Bool=false)
Create a DenseNet model [huang2017densely](@citep).
## Arguments
- `depth::Int`: The depth of the DenseNet model. Must be one of 121, 161, 169, or 201.
## Keyword Arguments
- `pretrained::Bool=false`: If `true`, loads pretrained weights when `LuxCore.setup` is
called.
"""
function DenseNet end
"""
MobileNet(name::Symbol; pretrained::Bool=false)
Create a MobileNet model
[howard2017mobilenets, sandler2018mobilenetv2, howard2019searching](@citep).
## Arguments
- `name::Symbol`: The name of the MobileNet model. Must be one of `:v1`, `:v2`,
`:v3_small`, or `:v3_large`.
## Keyword Arguments
- `pretrained::Bool=false`: If `true`, loads pretrained weights when `LuxCore.setup` is
called.
"""
function MobileNet end
"""
ConvMixer(name::Symbol; pretrained::Bool=false)
Create a ConvMixer model [trockman2022patches](@citep).
## Arguments
- `name::Symbol`: The name of the ConvMixer model. Must be one of `:base`, `:small`, or
`:large`.
## Keyword Arguments
- `pretrained::Bool=false`: If `true`, loads pretrained weights when `LuxCore.setup` is
called.
"""
function ConvMixer end
"""
SqueezeNet(; pretrained::Bool=false)
Create a SqueezeNet model [iandola2016squeezenetalexnetlevelaccuracy50x](@citep).
## Keyword Arguments
- `pretrained::Bool=false`: If `true`, loads pretrained weights when `LuxCore.setup` is
called.
"""
function SqueezeNet end
"""
WideResNet(depth::Int; pretrained::Bool=false)
Create a WideResNet model [zagoruyko2017wideresidualnetworks](@citep).
## Arguments
- `depth::Int`: The depth of the WideResNet model. Must be one of 18, 34, 50, 101, or 152.
## Keyword Arguments
- `pretrained::Bool=false`: If `true`, loads pretrained weights when `LuxCore.setup` is
called.
"""
function WideResNet end
@concrete struct MetalheadWrapperLayer <: AbstractLuxVisionLayer
layer
pretrained_name::Symbol
pretrained::Bool
end
for f in [:ResNet, :ResNeXt, :GoogLeNet, :DenseNet,
:MobileNet, :ConvMixer, :SqueezeNet, :WideResNet]
f_metalhead = Symbol(f, :Metalhead)
@eval begin
function $(f_metalhead) end
function $(f)(args...; pretrained::Bool=false, kwargs...)
if !is_extension_loaded(Val(:Metalhead))
error("`Metalhead.jl` is not loaded. Please load `Metalhead.jl` to use \
this function.")
end
model = $(f_metalhead)(args...; pretrained, kwargs...)
return MetalheadWrapperLayer(model, :metalhead, false)
end
end
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 3204 | @concrete struct VGGFeatureExtractor <: AbstractLuxWrapperLayer{:model}
model <: Lux.Chain
end
function VGGFeatureExtractor(config, batchnorm, inchannels)
layers = Vector{AbstractLuxLayer}(undef, length(config) * 2)
input_filters = inchannels
for (i, (chs, depth)) in enumerate(config)
layers[2i - 1] = ConvBatchNormActivation(
(3, 3), input_filters => chs, depth, relu; last_layer_activation=true,
conv_kwargs=(; pad=(1, 1)), use_norm=batchnorm)
layers[2i] = Lux.MaxPool((2, 2))
input_filters = chs
end
return VGGFeatureExtractor(Lux.Chain(layers...))
end
@concrete struct VGGClassifier <: AbstractLuxWrapperLayer{:model}
model <: Lux.Chain
end
function VGGClassifier(imsize, nclasses, fcsize, dropout)
return VGGClassifier(Lux.Chain(
Lux.FlattenLayer(), Lux.Dense(Int(prod(imsize)) => fcsize, relu),
Lux.Dropout(dropout), Lux.Dense(fcsize => fcsize, relu),
Lux.Dropout(dropout), Lux.Dense(fcsize => nclasses)))
end
@concrete struct VGG <: AbstractLuxVisionLayer
layer
pretrained_name::Symbol
pretrained::Bool
end
"""
VGG(imsize; config, inchannels, batchnorm = false, nclasses, fcsize, dropout)
Create a VGG model [simonyan2014very](@citep).
## Arguments
- `imsize`: input image width and height as a tuple
- `config`: the configuration for the convolution layers
- `inchannels`: number of input channels
- `batchnorm`: set to `true` to use batch normalization after each convolution
- `nclasses`: number of output classes
- `fcsize`: intermediate fully connected layer size
- `dropout`: dropout level between fully connected layers
"""
function VGG(imsize; config, inchannels, batchnorm=false, nclasses, fcsize, dropout)
feature_extractor = VGGFeatureExtractor(config, batchnorm, inchannels)
nilarray = Lux.NilSizePropagation.NilArray((imsize..., inchannels, 2))
outsize = LuxCore.outputsize(feature_extractor, nilarray, Random.default_rng())
classifier = VGGClassifier(outsize, nclasses, fcsize, dropout)
return Lux.Chain(; feature_extractor, classifier)
end
const VGG_CONFIG = Dict(
11 => [(64, 1), (128, 1), (256, 2), (512, 2), (512, 2)],
13 => [(64, 2), (128, 2), (256, 2), (512, 2), (512, 2)],
16 => [(64, 2), (128, 2), (256, 3), (512, 3), (512, 3)],
19 => [(64, 2), (128, 2), (256, 4), (512, 4), (512, 4)]
)
"""
VGG(depth::Int; batchnorm::Bool=false, pretrained::Bool=false)
Create a VGG model [simonyan2014very](@citep) with ImageNet Configuration.
## Arguments
- `depth::Int`: the depth of the VGG model. Choices: {`11`, `13`, `16`, `19`}.
## Keyword Arguments
- `batchnorm = false`: set to `true` to use batch normalization after each convolution.
- `pretrained::Bool=false`: If `true`, loads pretrained weights when `LuxCore.setup` is
called.
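## Example
A hedged sketch (assumes `Lux` and `Random` are loaded):
```julia
model = Vision.VGG(16; batchnorm=true)
ps, st = Lux.setup(Random.default_rng(), model)
y, _ = model(randn(Float32, 224, 224, 3, 1), ps, st)  # logits of size (1000, 1)
```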
"""
function VGG(depth::Int; batchnorm::Bool=false, pretrained::Bool=false)
name = Symbol(:vgg, depth, ifelse(batchnorm, "_bn", ""))
config, inchannels, nclasses, fcsize = VGG_CONFIG[depth], 3, 1000, 4096
model = VGG((224, 224); config, inchannels, batchnorm, nclasses, fcsize, dropout=0.5f0)
return VGG(model, name, pretrained)
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 2411 | @concrete struct VisionTransformer <: AbstractLuxVisionLayer
layer
pretrained_name::Symbol
pretrained::Bool
end
function VisionTransformer(;
imsize::Dims{2}=(256, 256), in_channels::Int=3, patch_size::Dims{2}=(16, 16),
embed_planes::Int=768, depth::Int=6, number_heads=16,
mlp_ratio=4.0f0, dropout_rate=0.1f0, embedding_dropout_rate=0.1f0,
pool::Symbol=:class, num_classes::Int=1000)
@argcheck pool in (:class, :mean)
return Lux.Chain(
Lux.Chain(PatchEmbedding(imsize, patch_size, in_channels, embed_planes),
ClassTokens(embed_planes),
ViPosEmbedding(embed_planes, prod(imsize .÷ patch_size) + 1),
Lux.Dropout(embedding_dropout_rate),
VisionTransformerEncoder(
embed_planes, depth, number_heads; mlp_ratio, dropout_rate),
Lux.WrappedFunction(ifelse(pool === :class, x -> x[:, 1, :], second_dim_mean))),
Lux.Chain(Lux.LayerNorm((embed_planes,); affine=true),
Lux.Dense(embed_planes, num_classes)))
end
#! format: off
const VIT_CONFIGS = Dict(
:tiny => (depth=12, embed_planes=0192, number_heads=3 ),
:small => (depth=12, embed_planes=0384, number_heads=6 ),
:base => (depth=12, embed_planes=0768, number_heads=12 ),
:large => (depth=24, embed_planes=1024, number_heads=16 ),
:huge => (depth=32, embed_planes=1280, number_heads=16 ),
:giant => (depth=40, embed_planes=1408, number_heads=16, mlp_ratio=48 / 11),
:gigantic => (depth=48, embed_planes=1664, number_heads=16, mlp_ratio=64 / 13)
)
#! format: on
"""
VisionTransformer(name::Symbol; pretrained=false)
Creates a Vision Transformer model with the specified configuration.
## Arguments
- `name::Symbol`: name of the Vision Transformer model to create. The following models are
available -- `:tiny`, `:small`, `:base`, `:large`, `:huge`, `:giant`, `:gigantic`.
## Keyword Arguments
- `pretrained::Bool=false`: If `true`, loads pretrained weights when `LuxCore.setup` is
called.
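## Example
A hedged sketch (the default image size is 256×256):
```julia
model = Vision.VisionTransformer(:tiny)
ps, st = Lux.setup(Random.default_rng(), model)
y, _ = model(randn(Float32, 256, 256, 3, 1), ps, st)  # logits of size (1000, 1)
```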
"""
function VisionTransformer(name::Symbol; pretrained=false, kwargs...)
@argcheck name in keys(VIT_CONFIGS)
return VisionTransformer(
VisionTransformer(; VIT_CONFIGS[name]..., kwargs...), name, pretrained)
end
const ViT = VisionTransformer
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 10328 | # Only tests that are not run via `vision` or other higher-level test suites are
# included in this snippet.
@testitem "MLP" setup=[SharedTestSetup] tags=[:layers] begin
using NNlib
@testset "$(mode)" for (mode, aType, dev, ongpu) in MODES
@testset "$(act)" for act in (tanh, NNlib.gelu)
@testset "$(nType)" for nType in (BatchNorm, GroupNorm, nothing)
norm = if nType === nothing
nType
elseif nType === BatchNorm
(i, ch, act; kwargs...) -> BatchNorm(ch, act; kwargs...)
elseif nType === GroupNorm
(i, ch, act; kwargs...) -> GroupNorm(ch, 2, act; kwargs...)
end
model = Layers.MLP(2, (4, 4, 2), act; norm_layer=norm)
ps, st = Lux.setup(StableRNG(0), model) |> dev
x = randn(Float32, 2, 2) |> aType
@jet model(x, ps, st)
__f = (x, ps) -> sum(abs2, first(model(x, ps, st)))
test_gradients(
__f, x, ps; atol=1e-3, rtol=1e-3, soft_fail=[AutoFiniteDiff()])
end
end
end
end
@testitem "Hamiltonian Neural Network" setup=[SharedTestSetup] tags=[:layers] begin
using ComponentArrays, ForwardDiff, Zygote, MLDataDevices, NNlib
_remove_nothing(xs) = map(x -> x === nothing ? 0 : x, xs)
@testset "$(mode): $(autodiff)" for (mode, aType, dev, ongpu) in MODES,
autodiff in (nothing, AutoZygote(), AutoForwardDiff())
ongpu && autodiff === AutoForwardDiff() && continue
hnn = Layers.HamiltonianNN{true}(Layers.MLP(2, (4, 4, 2), NNlib.gelu); autodiff)
ps, st = Lux.setup(StableRNG(0), hnn) |> dev
x = randn(Float32, 2, 4) |> aType
@test_throws ArgumentError hnn(x, ps, st)
hnn = Layers.HamiltonianNN{true}(Layers.MLP(2, (4, 4, 1), NNlib.gelu); autodiff)
ps, st = Lux.setup(StableRNG(0), hnn) |> dev
ps_ca = ComponentArray(ps |> cpu_device()) |> dev
@test st.first_call
y, st = hnn(x, ps, st)
@test !st.first_call
∂x_zyg, ∂ps_zyg = Zygote.gradient(
(x, ps) -> sum(abs2, first(hnn(x, ps, st))), x, ps)
@test ∂x_zyg !== nothing
@test ∂ps_zyg !== nothing
if !ongpu
∂ps_zyg = _remove_nothing(getdata(ComponentArray(∂ps_zyg |> cpu_device()) |>
dev))
∂x_fd = ForwardDiff.gradient(x -> sum(abs2, first(hnn(x, ps, st))), x)
∂ps_fd = getdata(ForwardDiff.gradient(
ps -> sum(abs2, first(hnn(x, ps, st))), ps_ca))
@test ∂x_zyg≈∂x_fd atol=1e-3 rtol=1e-3
@test ∂ps_zyg≈∂ps_fd atol=1e-3 rtol=1e-3
end
st = Lux.initialstates(StableRNG(0), hnn) |> dev
@test st.first_call
y, st = hnn(x, ps_ca, st)
@test !st.first_call
∂x_zyg, ∂ps_zyg = Zygote.gradient(
(x, ps) -> sum(abs2, first(hnn(x, ps, st))), x, ps_ca)
@test ∂x_zyg !== nothing
@test ∂ps_zyg !== nothing
if !ongpu
∂ps_zyg = _remove_nothing(getdata(ComponentArray(∂ps_zyg |> cpu_device()) |>
dev))
∂x_fd = ForwardDiff.gradient(x -> sum(abs2, first(hnn(x, ps_ca, st))), x)
∂ps_fd = getdata(ForwardDiff.gradient(
ps -> sum(abs2, first(hnn(x, ps, st))), ps_ca))
@test ∂x_zyg≈∂x_fd atol=1e-3 rtol=1e-3
@test ∂ps_zyg≈∂ps_fd atol=1e-3 rtol=1e-3
end
end
end
@testitem "Tensor Product Layer" setup=[SharedTestSetup] tags=[:layers] begin
@testset "$(mode)" for (mode, aType, dev, ongpu) in MODES
@testset "$(basis)" for basis in (Basis.Chebyshev, Basis.Sin, Basis.Cos,
Basis.Fourier, Basis.Legendre, Basis.Polynomial)
tensor_project = Layers.TensorProductLayer([basis(n + 2) for n in 1:3], 4)
ps, st = Lux.setup(StableRNG(0), tensor_project) |> dev
x = tanh.(randn(Float32, 2, 4, 5)) |> aType
@test_throws ArgumentError tensor_project(x, ps, st)
x = tanh.(randn(Float32, 2, 3, 5)) |> aType
y, st = tensor_project(x, ps, st)
@test size(y) == (2, 4, 5)
@jet tensor_project(x, ps, st)
__f = (x, ps) -> sum(abs2, first(tensor_project(x, ps, st)))
test_gradients(__f, x, ps; atol=1e-3, rtol=1e-3,
skip_backends=[AutoTracker(), AutoEnzyme()])
end
end
end
@testitem "Basis Functions" setup=[SharedTestSetup] tags=[:layers] begin
@testset "$(mode)" for (mode, aType, dev, ongpu) in MODES
@testset "$(basis)" for basis in (Basis.Chebyshev, Basis.Sin, Basis.Cos,
Basis.Fourier, Basis.Legendre, Basis.Polynomial)
x = tanh.(randn(Float32, 2, 4)) |> aType
grid = collect(1:3) |> aType
fn = basis(3)
@test size(fn(x)) == (3, 2, 4)
@jet fn(x)
@test size(fn(x, grid)) == (3, 2, 4)
@jet fn(x, grid)
fn = basis(3; dim=2)
@test size(fn(x)) == (2, 3, 4)
@jet fn(x)
@test size(fn(x, grid)) == (2, 3, 4)
@jet fn(x, grid)
fn = basis(3; dim=3)
@test size(fn(x)) == (2, 4, 3)
@jet fn(x)
@test size(fn(x, grid)) == (2, 4, 3)
@jet fn(x, grid)
fn = basis(3; dim=4)
@test_throws ArgumentError fn(x)
grid = 1:5 |> aType
@test_throws ArgumentError fn(x, grid)
end
end
end
@testitem "Spline Layer" setup=[SharedTestSetup] tags=[:layers] begin
using ComponentArrays, DataInterpolations, ForwardDiff, Zygote, MLDataDevices
@testset "$(mode)" for (mode, aType, dev, ongpu) in MODES
ongpu && continue
@testset "$(spl): train_grid $(train_grid), dims $(dims)" for spl in (
ConstantInterpolation, LinearInterpolation,
QuadraticInterpolation, QuadraticSpline, CubicSpline),
train_grid in (true, false),
dims in ((), (8,))
spline = Layers.SplineLayer(dims, 0.0f0, 1.0f0, 0.1f0, spl; train_grid)
ps, st = Lux.setup(StableRNG(0), spline) |> dev
ps_ca = ComponentArray(ps |> cpu_device()) |> dev
x = tanh.(randn(Float32, 4)) |> aType
y, st = spline(x, ps, st)
@test size(y) == (dims..., 4)
@jet spline(x, ps, st)
y, st = spline(x, ps_ca, st)
@test size(y) == (dims..., 4)
@jet spline(x, ps_ca, st)
∂x, ∂ps = Zygote.gradient((x, ps) -> sum(abs2, first(spline(x, ps, st))), x, ps)
spl !== ConstantInterpolation && @test ∂x !== nothing
@test ∂ps !== nothing
∂x_fd = ForwardDiff.gradient(x -> sum(abs2, first(spline(x, ps, st))), x)
∂ps_fd = ForwardDiff.gradient(ps -> sum(abs2, first(spline(x, ps, st))), ps_ca)
spl !== ConstantInterpolation && @test ∂x≈∂x_fd atol=1e-3 rtol=1e-3
@test ∂ps.saved_points≈∂ps_fd.saved_points atol=1e-3 rtol=1e-3
if train_grid
if ∂ps.grid === nothing
@test_softfail all(Base.Fix1(isapprox, 0), ∂ps_fd.grid)
else
@test ∂ps.grid≈∂ps_fd.grid atol=1e-3 rtol=1e-3
end
end
end
end
end
@testitem "Periodic Embedding" setup=[SharedTestSetup] tags=[:layers] begin
@testset "$(mode)" for (mode, aType, dev, ongpu) in MODES
layer = Layers.PeriodicEmbedding([2, 3], [4.0, π / 5])
ps, st = Lux.setup(StableRNG(0), layer) |> dev
x = randn(StableRNG(0), 6, 4, 3, 2) |> aType
Δx = [0.0, 12.0, -2π / 5, 0.0, 0.0, 0.0] |> aType
val = layer(x, ps, st)[1] |> Array
shifted_val = layer(x .+ Δx, ps, st)[1] |> Array
@test all(val[1:4, :, :, :] .== shifted_val[1:4, :, :, :]) && all(isapprox.(
val[5:8, :, :, :], shifted_val[5:8, :, :, :]; atol=5 * eps(Float32)))
@jet layer(x, ps, st)
__f = x -> sum(first(layer(x, ps, st)))
test_gradients(__f, x; atol=1.0f-3, rtol=1.0f-3)
end
end
@testitem "Dynamic Expressions Layer" setup=[SharedTestSetup] tags=[:layers] begin
using DynamicExpressions, ForwardDiff, ComponentArrays, Bumper
operators = OperatorEnum(; binary_operators=[+, -, *], unary_operators=[cos])
x1 = Node(; feature=1)
x2 = Node(; feature=2)
expr_1 = x1 * cos(x2 - 3.2)
expr_2 = x2 - x1 * x2 + 2.5 - 1.0 * x1
for exprs in ((expr_1,), (expr_1, expr_2), ([expr_1, expr_2],)),
turbo in (Val(false), Val(true)),
bumper in (Val(false), Val(true))
layer = Layers.DynamicExpressionsLayer(operators, exprs...; turbo, bumper)
ps, st = Lux.setup(StableRNG(0), layer)
x = [1.0f0 2.0f0 3.0f0
4.0f0 5.0f0 6.0f0]
y, st_ = layer(x, ps, st)
@test eltype(y) == Float32
__f = (x, p) -> sum(abs2, first(layer(x, p, st)))
test_gradients(__f, x, ps; atol=1.0f-3, rtol=1.0f-3, skip_backends=[AutoEnzyme()])
# Particular ForwardDiff dispatches
ps_ca = ComponentArray(ps)
dps_ca = ForwardDiff.gradient(ps_ca) do ps_
sum(abs2, first(layer(x, ps_, st)))
end
dx = ForwardDiff.gradient(x) do x_
sum(abs2, first(layer(x_, ps, st)))
end
dxps = ForwardDiff.gradient(ComponentArray(; x, ps)) do ca
sum(abs2, first(layer(ca.x, ca.ps, st)))
end
@test dx≈dxps.x atol=1.0f-3 rtol=1.0f-3
@test dps_ca≈dxps.ps atol=1.0f-3 rtol=1.0f-3
x = Float64.(x)
y, st_ = layer(x, ps, st)
@test eltype(y) == Float64
__f = (x, p) -> sum(abs2, first(layer(x, p, st)))
test_gradients(__f, x, ps; atol=1.0e-3, rtol=1.0e-3, skip_backends=[AutoEnzyme()])
end
@testset "$(mode)" for (mode, aType, dev, ongpu) in MODES
layer = Layers.DynamicExpressionsLayer(operators, expr_1)
ps, st = Lux.setup(StableRNG(0), layer) |> dev
x = [1.0f0 2.0f0 3.0f0
4.0f0 5.0f0 6.0f0] |> aType
if ongpu
@test_throws ArgumentError layer(x, ps, st)
end
end
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 869 | @testitem "Aqua: Quality Assurance" tags=[:others] begin
using Aqua
Aqua.test_all(Boltz; ambiguities=false)
Aqua.test_ambiguities(Boltz; recursive=false)
end
@testitem "Explicit Imports: Quality Assurance" tags=[:others] begin
import Lux, Metalhead, Zygote # Load all trigger packages
using ExplicitImports
@test check_no_implicit_imports(Boltz; skip=(Base, Core, Lux)) === nothing
@test check_no_stale_explicit_imports(Boltz) === nothing
@test check_no_self_qualified_accesses(Boltz) === nothing
@test check_all_explicit_imports_via_owners(Boltz) === nothing
@test check_all_qualified_accesses_via_owners(Boltz) === nothing
@test_broken check_all_explicit_imports_are_public(Boltz) === nothing # mostly upstream problems
@test_broken check_all_qualified_accesses_are_public(Boltz) === nothing # mostly upstream
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 1250 | using ReTestItems, Pkg, InteractiveUtils, Hwloc
@info sprint(versioninfo)
const BACKEND_GROUP = lowercase(get(ENV, "BACKEND_GROUP", "all"))
const EXTRA_PKGS = String[]
(BACKEND_GROUP == "all" || BACKEND_GROUP == "cuda") && push!(EXTRA_PKGS, "LuxCUDA")
(BACKEND_GROUP == "all" || BACKEND_GROUP == "amdgpu") && push!(EXTRA_PKGS, "AMDGPU")
if !isempty(EXTRA_PKGS)
@info "Installing Extra Packages for testing" EXTRA_PKGS=EXTRA_PKGS
Pkg.add(EXTRA_PKGS)
Pkg.update()
Base.retry_load_extensions()
Pkg.instantiate()
end
using Boltz
const BOLTZ_TEST_GROUP = get(ENV, "BOLTZ_TEST_GROUP", "all")
const RETESTITEMS_NWORKERS = parse(
Int, get(ENV, "RETESTITEMS_NWORKERS", string(min(Hwloc.num_physical_cores(), 16))))
const RETESTITEMS_NWORKER_THREADS = parse(Int,
get(ENV, "RETESTITEMS_NWORKER_THREADS",
string(max(Hwloc.num_virtual_cores() ÷ RETESTITEMS_NWORKERS, 1))))
@info "Running tests for group: $BOLTZ_TEST_GROUP with $RETESTITEMS_NWORKERS workers"
ReTestItems.runtests(
Boltz; tags=(BOLTZ_TEST_GROUP == "all" ? nothing : [Symbol(BOLTZ_TEST_GROUP)]),
nworkers=ifelse(BACKEND_GROUP ∈ ("cuda", "amdgpu"), 0, RETESTITEMS_NWORKERS),
nworker_threads=RETESTITEMS_NWORKER_THREADS, testitem_timeout=3600)
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 1218 | @testsetup module SharedTestSetup
using Enzyme
Enzyme.API.runtimeActivity!(true)
import Reexport: @reexport
@reexport using Boltz, Lux, GPUArraysCore, LuxLib, LuxTestUtils, Random, StableRNGs
using MLDataDevices, JLD2
import Metalhead
LuxTestUtils.jet_target_modules!(["Boltz", "Lux", "LuxLib"])
const BACKEND_GROUP = lowercase(get(ENV, "BACKEND_GROUP", "all"))
GPUArraysCore.allowscalar(false)
if BACKEND_GROUP == "all" || BACKEND_GROUP == "cuda"
using LuxCUDA
end
if BACKEND_GROUP == "all" || BACKEND_GROUP == "amdgpu"
using AMDGPU
end
cpu_testing() = BACKEND_GROUP == "all" || BACKEND_GROUP == "cpu"
function cuda_testing()
return (BACKEND_GROUP == "all" || BACKEND_GROUP == "cuda") &&
MLDataDevices.functional(CUDADevice)
end
function amdgpu_testing()
return (BACKEND_GROUP == "all" || BACKEND_GROUP == "amdgpu") &&
MLDataDevices.functional(AMDGPUDevice)
end
const MODES = begin
modes = []
cpu_testing() && push!(modes, ("cpu", Array, CPUDevice(), false))
cuda_testing() && push!(modes, ("cuda", CuArray, CUDADevice(), true))
amdgpu_testing() && push!(modes, ("amdgpu", ROCArray, AMDGPUDevice(), true))
modes
end
export MODES, BACKEND_GROUP
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | code | 7642 | @testsetup module PretrainedWeightsTestSetup
using Lux, Downloads, JLD2
function normalize_imagenet(data)
cmean = reshape(Float32[0.485, 0.456, 0.406], (1, 1, 3, 1))
cstd = reshape(Float32[0.229, 0.224, 0.225], (1, 1, 3, 1))
return (data .- cmean) ./ cstd
end
# The images are normalized and saved
@load joinpath(@__DIR__, "testimages", "monarch_color.jld2") monarch_color_224 monarch_color_256
const MONARCH_224 = monarch_color_224
const MONARCH_256 = monarch_color_256
const TEST_LBLS = readlines(Downloads.download(
"https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt"
))
function imagenet_acctest(model, ps, st, dev; size=224)
ps = ps |> dev
st = Lux.testmode(st) |> dev
TEST_X = size == 224 ? MONARCH_224 :
(size == 256 ? MONARCH_256 : error("size must be 224 or 256"))
x = TEST_X |> dev
ypred = first(model(x, ps, st)) |> collect |> vec
    top5 = TEST_LBLS[sortperm(ypred; rev=true)[1:5]]
return "monarch" in top5
end
export imagenet_acctest
end
@testitem "AlexNet" setup=[SharedTestSetup, PretrainedWeightsTestSetup] tags=[:vision] begin
for (mode, aType, dev, ongpu) in MODES
@testset "pretrained: $(pretrained)" for pretrained in [true, false]
model = Vision.AlexNet(; pretrained)
ps, st = Lux.setup(Random.default_rng(), model) |> dev
st = Lux.testmode(st)
img = randn(Float32, 224, 224, 3, 2) |> aType
@jet model(img, ps, st)
@test size(first(model(img, ps, st))) == (1000, 2)
if pretrained
@test imagenet_acctest(model, ps, st, dev)
end
GC.gc(true)
end
end
end
@testitem "ConvMixer" setup=[SharedTestSetup] tags=[:vision] begin
for (mode, aType, dev, ongpu) in MODES, name in [:small, :base, :large]
model = Vision.ConvMixer(name; pretrained=false)
ps, st = Lux.setup(Random.default_rng(), model) |> dev
st = Lux.testmode(st)
img = randn(Float32, 256, 256, 3, 2) |> aType
@jet model(img, ps, st)
@test size(first(model(img, ps, st))) == (1000, 2)
GC.gc(true)
end
end
@testitem "GoogLeNet" setup=[SharedTestSetup] tags=[:vision] begin
for (mode, aType, dev, ongpu) in MODES
model = Vision.GoogLeNet(; pretrained=false)
ps, st = Lux.setup(Random.default_rng(), model) |> dev
st = Lux.testmode(st)
img = randn(Float32, 224, 224, 3, 2) |> aType
@jet model(img, ps, st)
@test size(first(model(img, ps, st))) == (1000, 2)
GC.gc(true)
end
end
@testitem "MobileNet" setup=[SharedTestSetup] tags=[:vision] begin
for (mode, aType, dev, ongpu) in MODES, name in [:v1, :v2, :v3_small, :v3_large]
model = Vision.MobileNet(name; pretrained=false)
ps, st = Lux.setup(Random.default_rng(), model) |> dev
st = Lux.testmode(st)
img = randn(Float32, 224, 224, 3, 2) |> aType
@jet model(img, ps, st)
@test size(first(model(img, ps, st))) == (1000, 2)
GC.gc(true)
end
end
@testitem "ResNet" setup=[SharedTestSetup, PretrainedWeightsTestSetup] tags=[:vision] begin
for (mode, aType, dev, ongpu) in MODES, depth in [18, 34, 50, 101, 152]
@testset for pretrained in [false, true]
model = Vision.ResNet(depth; pretrained)
ps, st = Lux.setup(Random.default_rng(), model) |> dev
st = Lux.testmode(st)
img = randn(Float32, 224, 224, 3, 2) |> aType
@jet model(img, ps, st)
@test size(first(model(img, ps, st))) == (1000, 2)
if pretrained
@test imagenet_acctest(model, ps, st, dev)
end
GC.gc(true)
end
end
end
@testitem "ResNeXt" setup=[SharedTestSetup, PretrainedWeightsTestSetup] tags=[:vision] begin
for (mode, aType, dev, ongpu) in MODES
@testset for (depth, cardinality, base_width) in [
(50, 32, 4), (101, 32, 8), (101, 64, 4), (152, 64, 4)]
@testset for pretrained in [false, true]
depth == 152 && pretrained && continue
model = Vision.ResNeXt(depth; pretrained, cardinality, base_width)
ps, st = Lux.setup(Random.default_rng(), model) |> dev
st = Lux.testmode(st)
img = randn(Float32, 224, 224, 3, 2) |> aType
@jet model(img, ps, st)
@test size(first(model(img, ps, st))) == (1000, 2)
if pretrained
@test imagenet_acctest(model, ps, st, dev)
end
GC.gc(true)
end
end
end
end
@testitem "WideResNet" setup=[SharedTestSetup, PretrainedWeightsTestSetup] tags=[:vision] begin
for (mode, aType, dev, ongpu) in MODES, depth in [50, 101, 152]
@testset for pretrained in [false, true]
depth == 152 && pretrained && continue
model = Vision.WideResNet(depth; pretrained)
ps, st = Lux.setup(Random.default_rng(), model) |> dev
st = Lux.testmode(st)
img = randn(Float32, 224, 224, 3, 2) |> aType
@jet model(img, ps, st)
@test size(first(model(img, ps, st))) == (1000, 2)
if pretrained
@test imagenet_acctest(model, ps, st, dev)
end
GC.gc(true)
end
end
end
@testitem "SqueezeNet" setup=[SharedTestSetup, PretrainedWeightsTestSetup] tags=[:vision] begin
for (mode, aType, dev, ongpu) in MODES
@testset for pretrained in [false, true]
model = Vision.SqueezeNet(; pretrained)
ps, st = Lux.setup(Random.default_rng(), model) |> dev
st = Lux.testmode(st)
img = randn(Float32, 224, 224, 3, 2) |> aType
@jet model(img, ps, st)
@test size(first(model(img, ps, st))) == (1000, 2)
if pretrained
@test imagenet_acctest(model, ps, st, dev)
end
GC.gc(true)
end
end
end
@testitem "VGG" setup=[SharedTestSetup, PretrainedWeightsTestSetup] tags=[:vision] begin
for (mode, aType, dev, ongpu) in MODES, depth in [11, 13, 16, 19]
@testset for pretrained in [false, true], batchnorm in [false, true]
model = Vision.VGG(depth; batchnorm, pretrained)
ps, st = Lux.setup(Random.default_rng(), model) |> dev
st = Lux.testmode(st)
img = randn(Float32, 224, 224, 3, 2) |> aType
@jet model(img, ps, st)
@test size(first(model(img, ps, st))) == (1000, 2)
if pretrained
@test imagenet_acctest(model, ps, st, dev)
end
GC.gc(true)
end
end
end
@testitem "VisionTransformer" setup=[SharedTestSetup] tags=[:vision] begin
for (mode, aType, dev, ongpu) in MODES, name in [:tiny, :small, :base]
# :large, :huge, :giant, :gigantic --> too large for CI
model = Vision.VisionTransformer(name; pretrained=false)
ps, st = Lux.setup(Random.default_rng(), model) |> dev
st = Lux.testmode(st)
img = randn(Float32, 256, 256, 3, 2) |> aType
@jet model(img, ps, st)
@test size(first(model(img, ps, st))) == (1000, 2)
GC.gc(true)
end
end
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | docs | 2118 | # Boltz ⚡
[](https://github.com/LuxDL/Lux.jl/discussions)
[](https://luxdl.github.io/Boltz.jl/dev/)
[](https://luxdl.github.io/Boltz.jl/stable/)
[](https://github.com/LuxDL/Boltz.jl/actions/workflows/CI.yml)
[](https://buildkite.com/julialang/boltz-dot-jl)
[](https://codecov.io/gh/LuxDL/Boltz.jl)
[](https://juliapkgstats.com/pkg/Boltz)
[](https://juliapkgstats.com/pkg/Boltz)
[](https://github.com/aviatesk/JET.jl)
[](https://github.com/JuliaTesting/Aqua.jl)
[](https://github.com/SciML/ColPrac)
[](https://github.com/SciML/SciMLStyle)
Accelerate ⚡ your ML research using pre-built Deep Learning Models with Lux.
## Installation
```julia
using Pkg
Pkg.add("Boltz")
```
## Getting Started
```julia
using Boltz, Lux, Metalhead
model, ps, st = Vision.AlexNet(; pretrained=true)
```
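A minimal inference sketch, following the pattern used in this package's test suite (the WHCN input layout, the `Lux.testmode` call, and the output shape are taken from those tests; the random image is just a placeholder):

```julia
st = Lux.testmode(st)                 # switch the state to inference mode
img = randn(Float32, 224, 224, 3, 1)  # one 224×224 RGB image in WHCN layout
y, _ = model(img, ps, st)             # y has size (1000, 1): ImageNet class scores
```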
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | docs | 2092 | ```@raw html
---
# https://vitepress.dev/reference/default-theme-home-page
layout: home
hero:
name: Boltz.jl ⚡ Docs
text: Pre-built Deep Learning Models in Julia
tagline: Accelerate ⚡ your ML research using pre-built Deep Learning Models with Lux
actions:
- theme: brand
text: Lux.jl Docs
link: https://lux.csail.mit.edu/
- theme: alt
text: Tutorials 📚
link: /tutorials/1_GettingStarted
- theme: alt
text: Vision Models 👀
link: /api/vision
- theme: alt
text: Layers API 🧩
link: /api/layers
- theme: alt
text: View on GitHub
link: https://github.com/LuxDL/Boltz.jl
image:
src: /lux-logo.svg
alt: Lux.jl
features:
- icon: 🔥
title: Powered by Lux.jl
details: Boltz.jl is built on top of Lux.jl, a pure Julia Deep Learning Framework designed for Scientific Machine Learning.
link: https://lux.csail.mit.edu/
- icon: 🧩
title: Pre-built Models
details: Boltz.jl provides pre-built models for common deep learning tasks, such as image classification.
link: /api/vision
- icon: 🧑🔬
title: SciML Primitives
details: Common deep learning primitives needed for scientific machine learning.
link: https://sciml.ai/
---
```
## How to Install Boltz.jl?
It's easy to install Boltz.jl. Since Boltz.jl is registered in the Julia General registry,
you can simply run the following command in the Julia REPL:
```julia
julia> using Pkg
julia> Pkg.add("Boltz")
```
If you want to use the latest unreleased version of Boltz.jl, you can run the following
command (in most cases the released version will be the same as the version on GitHub):
```julia
julia> using Pkg
julia> Pkg.add(url="https://github.com/LuxDL/Boltz.jl")
```
## Want GPU Support?
Install the following package(s):
:::code-group
```julia [NVIDIA GPUs]
using Pkg
Pkg.add("LuxCUDA")
# or
Pkg.add(["CUDA", "cuDNN"])
```
```julia [AMD ROCm GPUs]
using Pkg
Pkg.add("AMDGPU")
```
```julia [Metal M-Series GPUs]
using Pkg
Pkg.add("Metal")
```
```julia [Intel GPUs]
using Pkg
Pkg.add("oneAPI")
```
:::
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | docs | 341 | # `Boltz.Basis` API Reference
!!! warning
The function calls for these basis functions should be considered experimental and are
subject to change without deprecation. However, the functions themselves are stable
and can be freely used in combination with the other Layers and Models.
```@autodocs
Modules = [Boltz.Basis]
```
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | docs | 166 | # API Reference
Accelerate ⚡ your ML research using pre-built Deep Learning Models with Lux.
## Index
```@index
Pages = ["basis.md", "layers.md", "vision.md"]
```
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | docs | 141 | # `Boltz.Layers` API Reference
---
```@autodocs
Modules = [Boltz.Layers]
```
```@bibliography
Pages = [@__FILE__]
Style = :authoryear
```
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | docs | 158 | # Private API
This is the private API reference for Boltz.jl. You know what this means. Don't use these
functions!
```@autodocs
Modules = [Boltz.Utils]
```
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 1.0.1 | e258d56273ce56896f4269e8f9d402a09ff2200d | docs | 3181 | # Computer Vision Models (`Vision` API)
## Native Lux Models
```@docs
Vision.AlexNet
Vision.VGG
Vision.VisionTransformer
```
## Imported from Metalhead.jl
!!! tip "Load Metalhead"
You need to load `Metalhead` before using these models.
```@docs
Vision.ConvMixer
Vision.DenseNet
Vision.GoogLeNet
Vision.MobileNet
Vision.ResNet
Vision.ResNeXt
Vision.SqueezeNet
Vision.WideResNet
```
## Pretrained Models
!!! note "Load JLD2"
You need to load `JLD2` before being able to load pretrained weights.
!!! tip "Load Pretrained Weights"
Pass `pretrained=true` to the model constructor to load the pretrained weights.
| MODEL | TOP 1 ACCURACY (%) | TOP 5 ACCURACY (%) |
| :------------------------------------------- | :----------------: | :----------------: |
| `AlexNet()` | 54.48 | 77.72 |
| `VGG(11)` | 67.35 | 87.91 |
| `VGG(13)` | 68.40 | 88.48 |
| `VGG(16)` | 70.24 | 89.80 |
| `VGG(19)` | 71.09 | 90.27 |
| `VGG(11; batchnorm=true)` | 69.09 | 88.94 |
| `VGG(13; batchnorm=true)` | 69.66 | 89.49 |
| `VGG(16; batchnorm=true)` | 72.11 | 91.02 |
| `VGG(19; batchnorm=true)` | 72.95 | 91.32 |
| `ResNet(18)` | - | - |
| `ResNet(34)` | - | - |
| `ResNet(50)` | - | - |
| `ResNet(101)` | - | - |
| `ResNet(152)` | - | - |
| `ResNeXt(50; cardinality=32, base_width=4)` | - | - |
| `ResNeXt(101; cardinality=32, base_width=8)` | - | - |
| `ResNeXt(101; cardinality=64, base_width=4)` | - | - |
| `SqueezeNet()` | - | - |
| `WideResNet(50)` | - | - |
| `WideResNet(101)` | - | - |
!!! note "Pretrained Models from Metalhead"
For Models imported from Metalhead, the pretrained weights can be loaded if they are
available in Metalhead. Refer to the [Metalhead.jl docs](https://fluxml.ai/Metalhead.jl/stable/#Image-Classification)
for a list of available pretrained models.
### Preprocessing
All the pretrained models require that the images be normalized with the parameters
`mean = [0.485f0, 0.456f0, 0.406f0]` and `std = [0.229f0, 0.224f0, 0.225f0]`.
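As a sketch (not part of the package API), assuming `img` is a `Float32` array in WHCN layout with values in `[0, 1]`, the normalization can be applied channel-wise:

```julia
μ = reshape([0.485f0, 0.456f0, 0.406f0], 1, 1, 3, 1)
σ = reshape([0.229f0, 0.224f0, 0.225f0], 1, 1, 3, 1)
img_normalized = (img .- μ) ./ σ
```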
```@bibliography
Pages = [@__FILE__]
Style = :authoryear
```
| Boltz | https://github.com/LuxDL/Boltz.jl.git |
|
[
"MIT"
] | 0.1.0 | 33d35ddce841c93a0fd6662980b97100b5fdaf68 | code | 1822 | module Shamir
using Polynomials
"""
    l(x, j, k)

Create the `j`-th Lagrange basis polynomial for the `k` evaluation points in `x`.
Reference: https://wikimedia.org/api/rest_v1/media/math/render/svg/6e2c3a2ab16a8723c0446de6a30da839198fb04b
"""
function l(x, j, k)
polys = []
for m in 1:k
if m != j
d = x[j] - x[m]
r = Poly([-1 * x[m], 1]) / d
push!(polys, r)
end
end
#println(polys)
return prod(polys)
end
"""
    L(x, y, k)

Create the interpolating polynomial as a linear combination of Lagrange basis polynomials.
Reference: https://wikimedia.org/api/rest_v1/media/math/render/svg/d07f3378ff7718c345e5d3d4a57d3053190226a0
"""
function L(x, y, k)
s = []
for j in 1:k
r = y[j] * l(x, j, k)
push!(s, r)
end
#println(sum(s))
return sum(s)
end
"""
    construct_shares(n, production_poly)

Create `n` shares of the secret by evaluating `production_poly` at the points `1:n`.

Parameters:
- `n`: total number of shares
- `production_poly`: the polynomial created with the secret, used to produce the secret shares
"""
function construct_shares(n, production_poly)
share = []
for i in 1:n
push!(share, [i, production_poly(i)])
end
return share
end
"""
    recover_secret(shares, n, k, p)

Recover the secret by finding the coefficient of `x^0`, i.e. the value of the
interpolated polynomial at `x = 0`.

Parameters:
- `shares`: an array of shares of the individuals
- `n`: total number of shares
- `k`: minimum number of shares required to unravel the secret
- `p`: field number (to restrict the computation space)
"""
function recover_secret(shares, n, k, p)
if length(shares) < k
throw("Need more parties!")
end
x = []
y = []
for i in 1:n
push!(x, shares[i][1])
push!(y, shares[i][2])
end
f = L(x, y, k)
f = mod(f(0), p)
return f
end
export construct_shares, recover_secret
end
| Shamir | https://github.com/r0cketr1kky/Shamir.jl.git |
|
[
"MIT"
] | 0.1.0 | 33d35ddce841c93a0fd6662980b97100b5fdaf68 | code | 576 | using Shamir, Polynomials, Test
@info "Testing creation of shares"
prod_coeffs = [1234, 166, 94]
n = 6
prod_poly = Poly(prod_coeffs)
shares = Shamir.construct_shares(n, prod_poly)
@test shares == [[1, 1494], [2, 1942], [3, 2578], [4, 3402], [5, 4414], [6, 5614]]
@info "Testing Recover secret"
prod_coeffs = [1234, 166, 94]
n = 6 #total number of parties
k = 3 #min num of shares
p = 1613 #field
prod_poly = Poly(prod_coeffs)
shares = Shamir.construct_shares(n, prod_poly)
secret = Shamir.recover_secret(shares, n, k, p)
@test secret == 1234.0
@info "All tests completed"
| Shamir | https://github.com/r0cketr1kky/Shamir.jl.git |
|
[
"MIT"
] | 0.1.0 | 33d35ddce841c93a0fd6662980b97100b5fdaf68 | docs | 673 | # Shamir.jl
An implementation of Shamir's Secret Sharing protocol in Julia
[](https://travis-ci.com/r0cketr1kky/Shamir.jl)
This project aims to aid users in distributing random shares of a secret without revealing the secret itself. <br/>
For more details, see [Shamir's Secret Sharing Scheme](https://en.wikipedia.org/wiki/Shamir's_Secret_Sharing#Shamir.27s_secret-sharing_scheme)<br/>
## Installation
```julia
Pkg.add("Shamir")
```
## Usage
In Julia
```julia
using Shamir, Polynomials
poly_production = Poly([1234, 166, 94])   # the secret is the constant term, 1234
shares = Shamir.construct_shares(6, poly_production)
secret = Shamir.recover_secret(shares, 6, 3, 1613)
```
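Here `construct_shares(n, poly)` evaluates the polynomial at the points `1:n` to produce `n` shares, and `recover_secret(shares, n, k, p)` recovers the constant coefficient (the secret, `1234` above) via Lagrange interpolation modulo the field size `p`, where `k` is the minimum number of shares required.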
| Shamir | https://github.com/r0cketr1kky/Shamir.jl.git |
|
[
"MIT"
] | 1.1.1 | f8652515b68572d3362ee38e32245249413fb2d7 | code | 239 | using Documenter, ReadStat
makedocs(
modules = [ReadStat],
sitename = "ReadStat.jl",
analytics="UA-132838790-1",
pages = [
"Introduction" => "index.md"
]
)
deploydocs(
repo = "github.com/queryverse/ReadStat.jl.git"
)
| ReadStat | https://github.com/queryverse/ReadStat.jl.git |
|
[
"MIT"
] | 1.1.1 | f8652515b68572d3362ee38e32245249413fb2d7 | code | 3665 | function readstat_get_file_label(metadata::Ptr{Nothing})
ptr = ccall((:readstat_get_file_label, libreadstat), Cstring, (Ptr{Nothing},), metadata)
return ptr == C_NULL ? "" : unsafe_string(ptr)
end
function readstat_get_modified_time(metadata::Ptr{Nothing})
return ccall((:readstat_get_modified_time, libreadstat), UInt, (Ptr{Nothing},), metadata)
end
function readstat_get_file_format_version(metadata::Ptr{Nothing})
return ccall((:readstat_get_file_format_version, libreadstat), Cint, (Ptr{Nothing},), metadata)
end
function readstat_get_row_count(metadata::Ptr{Nothing})
return ccall((:readstat_get_row_count, libreadstat), Cint, (Ptr{Nothing},), metadata)
end
function readstat_get_var_count(metadata::Ptr{Nothing})
return ccall((:readstat_get_var_count, libreadstat), Cint, (Ptr{Nothing},), metadata)
end
function readstat_value_is_missing(value::ReadStatValue, variable::Ptr{Nothing})
return Bool(ccall((:readstat_value_is_missing, libreadstat), Cint, (ReadStatValue,Ptr{Nothing}), value, variable))
end
function readstat_variable_get_index(variable::Ptr{Nothing})
return ccall((:readstat_variable_get_index, libreadstat), Cint, (Ptr{Nothing},), variable)
end
function readstat_variable_get_name(variable::Ptr{Nothing})
return unsafe_string(ccall((:readstat_variable_get_name, libreadstat), Cstring, (Ptr{Nothing},), variable))
end
function readstat_variable_get_type(variable::Ptr{Nothing})
return ccall((:readstat_variable_get_type, libreadstat), Cint, (Ptr{Nothing},), variable)
end
function readstat_variable_get_storage_width(variable::Ptr{Nothing})
return ccall((:readstat_variable_get_storage_width, libreadstat), Csize_t, (Ptr{Nothing},), variable)
end
function readstat_variable_get_measure(variable::Ptr{Nothing})
return ccall((:readstat_variable_get_measure, libreadstat), Cint, (Ptr{Nothing},), variable)
end
function readstat_variable_get_alignment(variable::Ptr{Nothing})
return ccall((:readstat_variable_get_alignment, libreadstat), Cint, (Ptr{Nothing},), variable)
end
function readstat_parser_free(parser::Ptr{Nothing})
return ccall((:readstat_parser_free, libreadstat), Nothing, (Ptr{Nothing},), parser)
end
function readstat_value_type(val::Value)
return ccall((:readstat_value_type, libreadstat), Cint, (Value,), val)
end
function readstat_parse(filename::String, type::Val{:dta}, parser::Ptr{Nothing}, ds::ReadStatDataFrame)
return ccall((:readstat_parse_dta, libreadstat), Cint, (Ptr{Nothing}, Cstring, Any), parser, string(filename), ds)
end
function readstat_parse(filename::String, type::Val{:sav}, parser::Ptr{Nothing}, ds::ReadStatDataFrame)
return ccall((:readstat_parse_sav, libreadstat), Cint, (Ptr{Nothing}, Cstring, Any), parser, string(filename), ds)
end
function readstat_parse(filename::String, type::Val{:por}, parser::Ptr{Nothing}, ds::ReadStatDataFrame)
return ccall((:readstat_parse_por, libreadstat), Cint, (Ptr{Nothing}, Cstring, Any), parser, string(filename), ds)
end
function readstat_parse(filename::String, type::Val{:sas7bdat}, parser::Ptr{Nothing}, ds::ReadStatDataFrame)
return ccall((:readstat_parse_sas7bdat, libreadstat), Cint, (Ptr{Nothing}, Cstring, Any), parser, string(filename), ds)
end
function readstat_parse(filename::String, type::Val{:xport}, parser::Ptr{Nothing}, ds::ReadStatDataFrame)
return ccall((:readstat_parse_xport, libreadstat), Cint, (Ptr{Nothing}, Cstring, Any), parser, string(filename), ds)
end
function readstat_variable_get_missing_ranges_count(variable::Ptr{Nothing})
return ccall((:readstat_variable_get_missing_ranges_count, libreadstat), Cint, (Ptr{Nothing},), variable)
end
| ReadStat | https://github.com/queryverse/ReadStat.jl.git |
|
[
"MIT"
] | 1.1.1 | f8652515b68572d3362ee38e32245249413fb2d7 | code | 10897 | module ReadStat
using ReadStat_jll
##############################################################################
##
## Import
##
##############################################################################
using DataValues: DataValueVector
import DataValues
using Dates
export ReadStatDataFrame, read_dta, read_sav, read_por, read_sas7bdat, read_xport
##############################################################################
##
## Julia types that mirror C types
##
##############################################################################
const READSTAT_TYPE_STRING = Cint(0)
const READSTAT_TYPE_CHAR = Cint(1)
const READSTAT_TYPE_INT16 = Cint(2)
const READSTAT_TYPE_INT32 = Cint(3)
const READSTAT_TYPE_FLOAT = Cint(4)
const READSTAT_TYPE_DOUBLE = Cint(5)
const READSTAT_TYPE_LONG_STRING = Cint(6)
const READSTAT_ERROR_OPEN = Cint(1)
const READSTAT_ERROR_READ = Cint(2)
const READSTAT_ERROR_MALLOC = Cint(3)
const READSTAT_ERROR_USER_ABORT = Cint(4)
const READSTAT_ERROR_PARSE = Cint(5)
##############################################################################
##
## Pure Julia types
##
##############################################################################
struct ReadStatValue
union::Int64
readstat_types_t::Cint
tag::Cchar
@static if Sys.iswindows()
bits::Cuint
else
bits::UInt8
end
end
const Value = ReadStatValue
mutable struct ReadStatDataFrame
data::Vector{Any}
headers::Vector{Symbol}
types::Vector{DataType}
labels::Vector{String}
formats::Vector{String}
storagewidths::Vector{Csize_t}
measures::Vector{Cint}
alignments::Vector{Cint}
val_label_keys::Vector{String}
val_label_dict::Dict{String, Dict{Any,String}}
rows::Int
columns::Int
filelabel::String
timestamp::DateTime
format::Clong
types_as_int::Vector{Cint}
hasmissings::Vector{Bool}
ReadStatDataFrame() =
new(Any[], Symbol[], DataType[], String[], String[], Csize_t[], Cint[], Cint[],
String[], Dict{String, Dict{Any,String}}(), 0, 0, "", Dates.unix2datetime(0), 0, Cint[], Bool[])
end
include("C_interface.jl")
##############################################################################
##
## Julia functions
##
##############################################################################
function handle_info!(obs_count::Cint, var_count::Cint, ds_ptr::Ptr{ReadStatDataFrame})
ds = unsafe_pointer_to_objref(ds_ptr)
ds.rows = obs_count
ds.columns = var_count
return Cint(0)
end
function handle_metadata!(metadata::Ptr{Nothing}, ds_ptr::Ptr{ReadStatDataFrame})
ds = unsafe_pointer_to_objref(ds_ptr)
ds.filelabel = readstat_get_file_label(metadata)
ds.timestamp = Dates.unix2datetime(readstat_get_modified_time(metadata))
ds.format = readstat_get_file_format_version(metadata)
ds.rows = readstat_get_row_count(metadata)
ds.columns = readstat_get_var_count(metadata)
return Cint(0)
end
get_name(variable::Ptr{Nothing}) = Symbol(readstat_variable_get_name(variable))
function get_label(var::Ptr{Nothing})
ptr = ccall((:readstat_variable_get_label, libreadstat), Cstring, (Ptr{Nothing},), var)
ptr == C_NULL ? "" : unsafe_string(ptr)
end
function get_format(var::Ptr{Nothing})
ptr = ccall((:readstat_variable_get_format, libreadstat), Cstring, (Ptr{Nothing},), var)
ptr == C_NULL ? "" : unsafe_string(ptr)
end
function get_type(data_type::Cint)
if data_type == READSTAT_TYPE_STRING
return String
elseif data_type == READSTAT_TYPE_CHAR
return Int8
elseif data_type == READSTAT_TYPE_INT16
return Int16
elseif data_type == READSTAT_TYPE_INT32
return Int32
elseif data_type == READSTAT_TYPE_FLOAT
return Float32
elseif data_type == READSTAT_TYPE_DOUBLE
return Float64
end
return Nothing
end
get_type(variable::Ptr{Nothing}) = get_type(readstat_variable_get_type(variable))
get_storagewidth(variable::Ptr{Nothing}) = readstat_variable_get_storage_width(variable)
get_measure(variable::Ptr{Nothing}) = readstat_variable_get_measure(variable)
get_alignment(variable::Ptr{Nothing}) = readstat_variable_get_alignment(variable)
function handle_variable!(var_index::Cint, variable::Ptr{Nothing},
val_label::Cstring, ds_ptr::Ptr{ReadStatDataFrame})
col = var_index + 1
ds = unsafe_pointer_to_objref(ds_ptr)::ReadStatDataFrame
missing_count = readstat_variable_get_missing_ranges_count(variable)
push!(ds.val_label_keys, (val_label == C_NULL ? "" : unsafe_string(val_label)))
push!(ds.headers, get_name(variable))
push!(ds.labels, get_label(variable))
push!(ds.formats, get_format(variable))
jtype = get_type(variable)
push!(ds.types, jtype)
push!(ds.types_as_int, readstat_variable_get_type(variable))
push!(ds.hasmissings, missing_count > 0)
# SAS XPORT sets ds.rows == -1
if ds.rows >= 0
push!(ds.data, DataValueVector{jtype}(Vector{jtype}(undef, ds.rows), fill(false, ds.rows)))
else
push!(ds.data, DataValueVector{jtype}(Vector{jtype}(undef, 0), fill(false, 0)))
end
push!(ds.storagewidths, get_storagewidth(variable))
push!(ds.measures, get_measure(variable))
push!(ds.alignments, get_alignment(variable))
return Cint(0)
end
function get_type(val::Value)
data_type = readstat_value_type(val)
return [String, Int8, Int16, Int32, Float32, Float64, String][data_type + 1]
end
Base.convert(::Type{Int8}, val::Value) = ccall((:readstat_int8_value, libreadstat), Int8, (Value,), val)
Base.convert(::Type{Int16}, val::Value) = ccall((:readstat_int16_value, libreadstat), Int16, (Value,), val)
Base.convert(::Type{Int32}, val::Value) = ccall((:readstat_int32_value, libreadstat), Int32, (Value,), val)
Base.convert(::Type{Float32}, val::Value) = ccall((:readstat_float_value, libreadstat), Float32, (Value,), val)
Base.convert(::Type{Float64}, val::Value) = ccall((:readstat_double_value, libreadstat), Float64, (Value,), val)
function Base.convert(::Type{String}, val::Value)
ptr = ccall((:readstat_string_value, libreadstat), Cstring, (Value,), val)
ptr ≠ C_NULL ? unsafe_string(ptr) : ""
end
as_native(val::Value) = convert(get_type(val), val)
function handle_value!(obs_index::Cint, variable::Ptr{Nothing},
value::ReadStatValue, ds_ptr::Ptr{ReadStatDataFrame})
ds = unsafe_pointer_to_objref(ds_ptr)::ReadStatDataFrame
var_index = readstat_variable_get_index(variable) + 1
data = ds.data
@inbounds type_as_int = ds.types_as_int[var_index]
ismissing = if @inbounds(ds.hasmissings[var_index])
readstat_value_is_missing(value, variable)
else
readstat_value_is_missing(value, C_NULL)
end
col = data[var_index]
@assert eltype(eltype(col)) == get_type(type_as_int)
if ismissing
if obs_index < length(col)
DataValues.unsafe_setindex_isna!(col, true, obs_index + 1)
else
push!(col, DataValues.NA)
end
else
readfield!(col, obs_index + 1, value)
end
return Cint(0)
end
function readfield!(dest::DataValueVector{String}, row, val::ReadStatValue)
ptr = ccall((:readstat_string_value, libreadstat), Cstring, (ReadStatValue,), val)
if row <= length(dest)
if ptr ≠ C_NULL
@inbounds DataValues.unsafe_setindex_value!(dest, unsafe_string(ptr), row)
end
elseif row == length(dest) + 1
_val = ptr ≠ C_NULL ? unsafe_string(ptr) : ""
DataValues.push!(dest, _val)
else
throw(ArgumentError("illegal row index: $row"))
end
end
for (j_type, rs_name) in (
(Int8, :readstat_int8_value),
(Int16, :readstat_int16_value),
(Int32, :readstat_int32_value),
(Float32, :readstat_float_value),
(Float64, :readstat_double_value))
@eval function readfield!(dest::DataValueVector{$j_type}, row, val::ReadStatValue)
_val = ccall(($(QuoteNode(rs_name)), libreadstat), $j_type, (ReadStatValue,), val)
if row <= length(dest)
@inbounds DataValues.unsafe_setindex_value!(dest, _val, row)
elseif row == length(dest) + 1
DataValues.push!(dest, _val)
else
throw(ArgumentError("illegal row index: $row"))
end
end
end
function handle_value_label!(val_labels::Cstring, value::Value, label::Cstring, ds_ptr::Ptr{ReadStatDataFrame})
val_labels ≠ C_NULL || return Cint(0)
ds = unsafe_pointer_to_objref(ds_ptr)
dict = get!(ds.val_label_dict, unsafe_string(val_labels), Dict{Any,String}())
dict[as_native(value)] = unsafe_string(label)
return Cint(0)
end
function read_data_file(filename::AbstractString, filetype::Val)
# initialize ds
ds = ReadStatDataFrame()
# initialize parser
parser = Parser()
# parse
parse_data_file!(ds, parser, filename, filetype)
# return dataframe instead of ReadStatDataFrame
return ds
end
function Parser()
parser = ccall((:readstat_parser_init, libreadstat), Ptr{Nothing}, ())
info_fxn = @cfunction(handle_info!, Cint, (Cint, Cint, Ptr{ReadStatDataFrame}))
meta_fxn = @cfunction(handle_metadata!, Cint, (Ptr{Nothing}, Ptr{ReadStatDataFrame}))
var_fxn = @cfunction(handle_variable!, Cint, (Cint, Ptr{Nothing}, Cstring, Ptr{ReadStatDataFrame}))
val_fxn = @cfunction(handle_value!, Cint, (Cint, Ptr{Nothing}, ReadStatValue, Ptr{ReadStatDataFrame}))
label_fxn = @cfunction(handle_value_label!, Cint, (Cstring, Value, Cstring, Ptr{ReadStatDataFrame}))
ccall((:readstat_set_metadata_handler, libreadstat), Int, (Ptr{Nothing}, Ptr{Nothing}), parser, meta_fxn)
ccall((:readstat_set_variable_handler, libreadstat), Int, (Ptr{Nothing}, Ptr{Nothing}), parser, var_fxn)
ccall((:readstat_set_value_handler, libreadstat), Int, (Ptr{Nothing}, Ptr{Nothing}), parser, val_fxn)
ccall((:readstat_set_value_label_handler, libreadstat), Int, (Ptr{Nothing}, Ptr{Nothing}), parser, label_fxn)
return parser
end
function error_message(retval::Integer)
unsafe_string(ccall((:readstat_error_message, libreadstat), Ptr{Cchar}, (Cint,), retval))
end
function parse_data_file!(ds::ReadStatDataFrame, parser::Ptr{Nothing}, filename::AbstractString, filetype::Val)
retval = readstat_parse(filename, filetype, parser, ds)
readstat_parser_free(parser)
retval == 0 || error("Error parsing $filename: $(error_message(retval))")
end
read_dta(filename::AbstractString) = read_data_file(filename, Val(:dta))
read_sav(filename::AbstractString) = read_data_file(filename, Val(:sav))
read_por(filename::AbstractString) = read_data_file(filename, Val(:por))
read_sas7bdat(filename::AbstractString) = read_data_file(filename, Val(:sas7bdat))
read_xport(filename::AbstractString) = read_data_file(filename, Val(:xport))
end #module ReadStat
| ReadStat | https://github.com/queryverse/ReadStat.jl.git |
|
[
"MIT"
] | 1.1.1 | f8652515b68572d3362ee38e32245249413fb2d7 | code | 770 | using ReadStat
using DataValues
using Test
@testset "ReadStat: $ext files" for (reader, ext) in
((read_dta, "dta"),
(read_sav, "sav"),
(read_sas7bdat, "sas7bdat"),
(read_xport, "xpt"))
dtafile = joinpath(dirname(@__FILE__), "types.$ext")
rsdf = reader(dtafile)
data = rsdf.data
@test length(data) == 6
@test rsdf.headers == [:vfloat, :vdouble, :vlong, :vint, :vbyte, :vstring]
@test data[1] == DataValueArray{Float32}([3.14, 7., NA])
@test data[2] == DataValueArray{Float64}([3.14, 7., NA])
@test data[3] == DataValueArray{Int32}([2, 7, NA])
@test data[4] == DataValueArray{Int16}([2, 7, NA])
@test data[5] == DataValueArray{Int8}([2, 7., NA])
@test data[6] == DataValueArray{String}(["2", "7", ""])
end
| ReadStat | https://github.com/queryverse/ReadStat.jl.git |
|
[
"MIT"
] | 1.1.1 | f8652515b68572d3362ee38e32245249413fb2d7 | docs | 762 | # ReadStat.jl v1.1.0 Release Notes
* Add support for SAS XPORT
# ReadStat.jl v1.0.2 Release Notes
* Fix a type instability
# ReadStat.jl v1.0.1 Release Notes
* Bugfix release
# ReadStat.jl v1.0.0 Release Notes
* Drop support for all Julia pre 1.3 versions
* Migrate to Project.toml
* Migrate to artifacts
# ReadStat.jl v0.4.1 Release Notes
* Fix remaining julia 0.7/1.0 issues
# ReadStat.jl v0.4.0 Release Notes
* Drop julia 0.6 support, add julia 0.7 support
* Use BinaryProvider.jl
# ReadStat.jl v0.3.0 Release Notes
* Change return type of API
* Return more info
# ReadStat.jl v0.2.0 Release Notes
* Remove dependency on DataFrames and DataTables
# ReadStat.jl v0.1.1 Release Notes
* Bug fix release
# ReadStat.jl v0.1.0 Release Notes
* First release | ReadStat | https://github.com/queryverse/ReadStat.jl.git |
|
[
"MIT"
] | 1.1.1 | f8652515b68572d3362ee38e32245249413fb2d7 | docs | 1467 | # ReadStat
[](http://www.repostatus.org/#active)
[](https://travis-ci.org/queryverse/ReadStat.jl)
[](https://ci.appveyor.com/project/queryverse/readstat-jl/branch/master)
[](https://codecov.io/gh/queryverse/ReadStat.jl)
## Overview
ReadStat.jl: Read files from Stata, SPSS, and SAS
--
The ReadStat.jl Julia package uses the [ReadStat](https://github.com/WizardMac/ReadStat) C library to parse binary and transport files from Stata, SPSS and SAS. All functions return a tuple, with the first element an array of columns and the second element a vector of column names.
For integration with packages like [DataFrames.jl](https://github.com/JuliaData/DataFrames.jl) you should use the [StatFiles.jl](https://github.com/queryverse/StatFiles.jl) package.
## Usage:
```julia
using ReadStat
read_dta("/path/to/something.dta")
read_por("/path/to/something.por")
read_sav("/path/to/something.sav")
read_sas7bdat("/path/to/something.sas7bdat")
```
## Installation
To install the package, run the following:
```julia
Pkg.add("ReadStat")
```
| ReadStat | https://github.com/queryverse/ReadStat.jl.git |
|
[
"MIT"
] | 1.1.0 | 75d468e2b341d925beb1475a8ab42fde7ba72364 | code | 7466 | module IPNets
using Sockets: IPAddr, IPv4, IPv6
export IPNet, IPv4Net, IPv6Net, is_private, is_global
abstract type IPNet end
IPNet(str::AbstractString) = parse(IPNet, str)
Base.parse(::Type{IPNet}, str::AbstractString) =
':' in str ? parse(IPv6Net, str) : parse(IPv4Net, str)
############################
## Types and constructors ##
############################
"""
IPv4Net(str::AbstractString)
IPv4Net(ip::IPv4, netmask::Int)
IPv4Net(ip::IPv4, netmask::IPv4)
Type representing a IPv4 network.
# Examples
```julia
julia> IPv4Net("192.168.0.0/24")
IPv4Net("192.168.0.0/24")
julia> IPv4Net(ip"192.168.0.0", 24)
IPv4Net("192.168.0.0/24")
julia> IPv4Net(ip"192.168.0.0", ip"255.255.255.0")
IPv4Net("192.168.0.0/24")
```
"""
struct IPv4Net <: IPNet
netaddr::UInt32
netmask::UInt32
function IPv4Net(netaddr::UInt32, netmask::UInt32)
netaddr′ = netaddr & netmask
if netaddr′ !== netaddr
throw(ArgumentError("input $(IPv4(netaddr))/$(count_ones(netmask)) has host bits set"))
end
new(netaddr′, netmask)
end
end
# "1.2.3.0/24"
IPv4Net(str::AbstractString) = parse(IPv4Net, str)
# ip"1.2.3.0", 24
IPv4Net(netaddr::IPv4, netmask::Integer=32) = IPv4Net(netaddr.host, to_mask(UInt32(netmask)))
# ip"1.2.3.0", ip"255.255.255.0"
function IPv4Net(netaddr::IPv4, netmask::IPv4)
netmask′ = to_mask(UInt32(count_ones(netmask.host)))
if netmask′ !== netmask.host
throw(ArgumentError("non-contiguous IPv4 subnets not supported, got $(netmask)"))
end
return IPv4Net(netaddr.host, netmask′)
end
"""
IPv6Net(str::AbstractString)
IPv6Net(ip::IPv6, netmask::Int)
Type representing a IPv6 network.
# Examples
```julia
julia> IPv6Net("1::2/64")
IPv6Net("1::/64")
julia> IPv6Net(ip"1::2", 64)
IPv6Net("1::/64")
```
"""
struct IPv6Net <: IPNet
netaddr::UInt128
netmask::UInt128
function IPv6Net(netaddr::UInt128, netmask::UInt128)
netaddr′ = netaddr & netmask
if netaddr′ !== netaddr
throw(ArgumentError("input $(IPv6(netaddr))/$(count_ones(netmask)) has host bits set"))
end
return new(netaddr′, netmask)
end
end
# "2001::1/64"
IPv6Net(str::AbstractString) = parse(IPv6Net, str)
# ip"2001::1", 64
IPv6Net(netaddr::IPv6, prefix::Integer=128) = IPv6Net(netaddr.host, to_mask(UInt128(prefix)))
#############
## Parsing ##
#############
Base.parse(::Type{T}, str::AbstractString) where T <:IPNet = parse(T, String(str))
Base.parse(::Type{IPv4Net}, str::String) = IPv4Net(_parsenet(str, UInt32, IPv4)...)
Base.parse(::Type{IPv6Net}, str::String) = IPv6Net(_parsenet(str, UInt128, IPv6)...)
function _parsenet(str::String, ::Type{IT}, ::Type{IPT}) where {IT, IPT}
nbits = IT(8 * sizeof(IT))
parts = split(str, '/')
if length(parts) == 1
netaddr, nmaskbits = parts[1], nbits
elseif length(parts) == 2
netaddr, maskbits = parts
nmaskbits = parse(IT, maskbits)
else
throw(ArgumentError("malformed IPNet input: $str"))
end
netaddr = parse(IPT, netaddr).host
netmask = to_mask(nmaskbits)
return netaddr, netmask
end
##############
## Printing ##
##############
Base.print(io::IO, net::IPNet) =
print(io, eltype(net)(net.netaddr), "/", count_ones(net.netmask))
Base.show(io::IO, net::T) where T <: IPNet = print(io, T, "(\"", net, "\")")
###########################
## IPNets as collections ##
###########################
Base.in(ip::IPv4, network::IPv4Net) = ip.host & network.netmask == network.netaddr
Base.in(ip::IPv6, network::IPv6Net) = ip.host & network.netmask == network.netaddr
# IP Networks are ordered first by starting network address
# and then by network mask. That is, smaller IP nets (with higher
# netmask values) are "less" than larger ones. This corresponds
# to secondary reordering by ending address.
Base.isless(a::T, b::T) where T <: IPNet = a.netaddr == b.netaddr ?
isless(count_ones(a.netmask), count_ones(b.netmask)) :
isless(a.netaddr, b.netaddr)
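# e.g. IPv4Net("1.2.3.4/31") < IPv4Net("1.2.3.4/32") (same start, the smaller net
# sorts last) and IPv4Net("1.2.3.0/24") < IPv4Net("1.2.4.0/24") (ordered by start)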
Base.iterate(net::IPNet, state = inttype(net)(0)) =
state >= length(net) ? nothing : (net[state], state + 0x1)
Base.eltype(::Type{IPv4Net}) = IPv4
Base.eltype(::Type{IPv6Net}) = IPv6
Base.firstindex(net::IPNet) = inttype(net)(0)
Base.lastindex(net::IPNet) = typemax(inttype(net)) >> (count_ones(net.netmask))
Base.length(net::IPv4Net)= Int64(lastindex(net) - firstindex(net)) + 1
Base.length(net::IPv6Net)= BigInt(lastindex(net) - firstindex(net)) + 1
function Base.getindex(net::IPNet, i::Integer)
fi, li = firstindex(net), lastindex(net)
fi <= i <= li || throw(BoundsError(net, i))
i = i % typeof(fi)
r = eltype(net)(net.netaddr + i)
return r
end
Base.getindex(net::IPNet, idxs::AbstractVector{<:Integer}) = [net[i] for i in idxs]
######################
## Internal utility ##
######################
function to_mask(nmaskbits::IT) where IT
nbits = IT(8 * sizeof(IT))
if !(0 <= nmaskbits <= nbits)
throw(ArgumentError("network mask bits must be between 0 and $(nbits), got $(nmaskbits)"))
end
return typemax(IT) << (nbits - IT(nmaskbits)) & typemax(IT)
end
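# e.g. to_mask(UInt32(24)) == 0xffffff00, i.e. the netmask 255.255.255.0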
inttype(::IPv4Net) = UInt32
inttype(::IPv6Net) = UInt128
###############################
## IP address classification ##
###############################
# See https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml
# and https://github.com/python/cpython/blob/67b3a9995368f89b7ce4a995920b2a83a81c599b/Lib/ipaddress.py#L1543-L1558
const _private_ipv4_nets = IPv4Net[
IPv4Net("0.0.0.0/8"),
IPv4Net("10.0.0.0/8"),
IPv4Net("127.0.0.0/8"),
IPv4Net("169.254.0.0/16"),
IPv4Net("172.16.0.0/12"),
IPv4Net("192.0.0.0/29"),
IPv4Net("192.0.0.170/31"),
IPv4Net("192.0.2.0/24"),
IPv4Net("192.168.0.0/16"),
IPv4Net("198.18.0.0/15"),
IPv4Net("198.51.100.0/24"),
IPv4Net("203.0.113.0/24"),
IPv4Net("240.0.0.0/4"),
IPv4Net("255.255.255.255/32"),
]
# See https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml
# and https://github.com/python/cpython/blob/67b3a9995368f89b7ce4a995920b2a83a81c599b/Lib/ipaddress.py#L2258-L2269
const _private_ipv6_nets = IPv6Net[
IPv6Net("::1/128"),
IPv6Net("::/128"),
IPv6Net("::ffff:0:0/96"),
IPv6Net("100::/64"),
IPv6Net("2001::/23"),
IPv6Net("2001:2::/48"),
IPv6Net("2001:db8::/32"),
IPv6Net("2001:10::/28"),
IPv6Net("fc00::/7"),
IPv6Net("fe80::/10"),
]
"""
is_private(ip::Union{IPv4,IPv6})
Return `true` if the IP address is allocated for private networks.
See [iana-ipv4-special-registry](https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml) (IPv4)
and [iana-ipv6-special-registry](https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml) (IPv6).
"""
is_private(::Union{IPv4,IPv6})
is_private(ip::IPv4) = any(ip in net for net in _private_ipv4_nets)
is_private(ip::IPv6) = any(ip in net for net in _private_ipv6_nets)
"""
is_global(ip::Union{IPv4,IPv6})
Return `true` if the IP address is allocated for public networks.
See [iana-ipv4-special-registry](https://www.iana.org/assignments/iana-ipv4-special-registry/iana-ipv4-special-registry.xhtml) (IPv4)
and [iana-ipv6-special-registry](https://www.iana.org/assignments/iana-ipv6-special-registry/iana-ipv6-special-registry.xhtml) (IPv6).
"""
is_global(ip::Union{IPv4,IPv6}) = !is_private(ip)
end # module
| IPNets | https://github.com/JuliaWeb/IPNets.jl.git |
|
[
"MIT"
] | 1.1.0 | 75d468e2b341d925beb1475a8ab42fde7ba72364 | code | 5873 | using IPNets, Test, Sockets
@testset "IPNets" begin
#############
## IPv4Net ##
#############
## Constructors
@test IPv4Net("1.2.3.4") == parse(IPv4Net, "1.2.3.4") ==
IPv4Net("1.2.3.4/32") == parse(IPv4Net, "1.2.3.4/32") ==
IPv4Net(ip"1.2.3.4") == IPv4Net(ip"1.2.3.4", 32) ==
IPv4Net(ip"1.2.3.4", ip"255.255.255.255") ==
IPNet("1.2.3.4") == IPNet("1.2.3.4/32") ==
parse(IPv4Net, SubString("1.2.3.4")) ==
parse(IPNet, "1.2.3.4") == parse(IPNet, "1.2.3.4/32") ==
IPv4Net(0x01020304, typemax(UInt32))
@test IPv4Net("1.2.3.0/24") == parse(IPv4Net, "1.2.3.0/24") ==
IPv4Net(ip"1.2.3.0", ip"255.255.255.0") ==
IPv4Net(ip"1.2.3.0", 24) == parse(IPNet, "1.2.3.0/24") ==
IPv4Net(0x01020300, typemax(UInt32) << 8)
err = ArgumentError("network mask bits must be between 0 and 32, got 33")
@test_throws err IPv4Net("1.2.3.4/33")
@test_throws err IPv4Net(ip"1.2.3.4", 33)
err = ArgumentError("non-contiguous IPv4 subnets not supported, got 255.240.255.0")
@test_throws err IPv4Net(ip"1.2.3.0", ip"255.240.255.0")
err = ArgumentError("input 1.2.3.4/24 has host bits set")
@test_throws err IPv4Net("1.2.3.4/24")
@test_throws err parse(IPv4Net, "1.2.3.4/24")
@test_throws err IPv4Net(ip"1.2.3.4", 24)
@test_throws err parse(IPNet, "1.2.3.4/24")
err = ArgumentError("malformed IPNet input: 1.2.3.4/32/32")
@test_throws err IPv4Net("1.2.3.4/32/32")
## Print
ipnet = IPv4Net("1.2.3.0/24")
@test sprint(print, ipnet) == "1.2.3.0/24"
@test sprint(show, ipnet) == "IPv4Net(\"1.2.3.0/24\")"
## IPNet as collection
ipnet = IPv4Net("1.2.3.0/24")
@test ip"1.2.3.4" in ipnet
@test ip"1.2.3.0" in ipnet
@test ip"1.2.3.255" in ipnet
@test !(ip"1.2.4.0" in ipnet)
@test IPv4Net("1.2.3.4/32") == IPv4Net("1.2.3.4/32")
@test IPv4Net("1.2.3.0/24") == IPv4Net("1.2.3.0/24")
@test IPv4Net("1.2.3.4/32") != IPv4Net("1.2.3.4/31")
@test IPv4Net("1.2.3.4/31") < IPv4Net("1.2.3.4/32")
@test IPv4Net("1.2.3.4/32") > IPv4Net("1.2.3.4/31")
@test IPv4Net("1.2.3.0/24") < IPv4Net("1.2.4.0/24")
@test IPv4Net("1.2.4.0/24") > IPv4Net("1.2.3.0/24")
nets = map(IPv4Net, ["1.2.3.0/24", "1.2.3.4/31", "1.2.3.4/32", "1.2.4.0/24"])
@test sort(nets) == nets
@test length(ipnet) == length(collect(ipnet)) == 256
@test collect(ipnet) == [x for x in ipnet]
@test length(IPv4Net("0.0.0.0/0"))::Int64 == Int64(1) << 32
@test ipnet[0] == #= ipnet[begin] == =# ip"1.2.3.0" # TODO: Requires Julia 1.4
@test ipnet[1] == ip"1.2.3.1"
@test ipnet[255] == ipnet[end] == ip"1.2.3.255"
@test ipnet[0:1] == ipnet[[0, 1]] == [ip"1.2.3.0", ip"1.2.3.1"]
@test_throws BoundsError ipnet[-1]
@test_throws BoundsError ipnet[256]
@test_throws BoundsError ipnet[-1:2]
@test is_private(ip"127.0.0.1")
@test !is_global(ip"127.0.0.1")
@test !is_private(ip"1.1.1.1")
@test is_global(ip"1.1.1.1")
#############
## IPv6Net ##
#############
## Constructors
@test IPv6Net("1:2::3:4") == parse(IPv6Net, "1:2::3:4") ==
IPv6Net("1:2::3:4/128") == parse(IPv6Net, "1:2::3:4/128") ==
IPv6Net(ip"1:2::3:4") == IPv6Net(ip"1:2::3:4", 128) ==
parse(IPv6Net, SubString("1:2::3:4")) ==
parse(IPNet, "1:2::3:4") == parse(IPNet, "1:2::3:4/128") ==
IPv6Net(0x00010002000000000000000000030004, typemax(UInt128))
@test IPv6Net("1:2::3:0/112") == parse(IPv6Net, "1:2::3:0/112") ==
IPv6Net(ip"1:2::3:0", 112) == parse(IPNet, "1:2::3:0/112") ==
IPv6Net(0x00010002000000000000000000030000, typemax(UInt128) << 16)
err = ArgumentError("network mask bits must be between 0 and 128, got 129")
@test_throws err IPv6Net("1:2::3:4/129")
@test_throws err IPv6Net(ip"1:2::3:4", 129)
err = ArgumentError("input 1:2::3:4/112 has host bits set")
@test_throws err IPv6Net("1:2::3:4/112")
@test_throws err parse(IPv6Net, "1:2::3:4/112")
@test_throws err IPv6Net(ip"1:2::3:4", 112)
@test_throws err parse(IPNet, "1:2::3:4/112")
err = ArgumentError("malformed IPNet input: 1:2::3:4/32/32")
@test_throws err IPv6Net("1:2::3:4/32/32")
## Print
ipnet = IPv6Net("1:2::3:0/112")
@test sprint(print, ipnet) == "1:2::3:0/112"
@test sprint(show, ipnet) == "IPv6Net(\"1:2::3:0/112\")"
## IPNet as collection
ipnet = IPv6Net("1:2::3:0/112")
@test ip"1:2::3:4" in ipnet
@test ip"1:2::3:0" in ipnet
@test ip"1:2::3:ffff" in ipnet
@test !(ip"1:2::4:0" in ipnet)
@test IPv6Net("1:2::3:4/128") == IPv6Net("1:2::3:4/128")
@test IPv6Net("1:2::3:0/112") == IPv6Net("1:2::3:0/112")
@test IPv6Net("1:2::3:4/128") != IPv6Net("1:2::3:4/127")
@test IPv6Net("1:2::3:4/127") < IPv6Net("1:2::3:4/128")
@test IPv6Net("1:2::3:4/128") > IPv6Net("1:2::3:4/127")
@test IPv6Net("1:2::3:0/112") < IPv6Net("1:2::4:0/112")
@test IPv6Net("1:2::4:0/112") > IPv6Net("1:2::3:0/112")
nets = map(IPv6Net, ["1:2::3:0/112", "1:2::3:4/127", "1:2::3:4/128", "1:2::4:0/112"])
@test sort(nets) == nets
@test length(ipnet) == length(collect(ipnet)) == 65536
@test collect(ipnet)::Vector{IPv6} == [x for x in ipnet]
@test length(IPv6Net("::/0"))::BigInt == BigInt(1) << 128
@test ipnet[0] == #= ipnet[begin] == =# ip"1:2::3:0" # TODO: Requires Julia 1.4
@test ipnet[1] == ip"1:2::3:1"
@test ipnet[65535] == ipnet[end] == ip"1:2::3:ffff"
@test ipnet[0:1] == ipnet[[0, 1]] == [ip"1:2::3:0", ip"1:2::3:1"]
@test_throws BoundsError ipnet[-1]
@test_throws BoundsError ipnet[65536]
@test_throws BoundsError ipnet[-1:2]
@test is_private(ip"::1")
@test !is_global(ip"::1")
@test !is_private(ip"2606:4700:4700::1111")
@test is_global(ip"2606:4700:4700::1111")
end
| IPNets | https://github.com/JuliaWeb/IPNets.jl.git |
|
[
"MIT"
] | 1.1.0 | 75d468e2b341d925beb1475a8ab42fde7ba72364 | docs | 2752 | ## IPNets.jl
[](https://github.com/JuliaWeb/IPNets.jl/actions/workflows/ci.yml)
[](http://codecov.io/github/JuliaWeb/IPNets.jl?branch=master)
*IPNets.jl* is a Julia package that provides IP network types. Both IPv4 and IPv6
networks can be described with *IPNets.jl* using standard, intuitive syntax.
### Main Features
An important aspect of *IPNets.jl* is the ability to treat IP networks as
collections while not actually allocating the memory required to store a full
range of addresses. Operations such as membership testing, indexing and iteration
are supported with `IPNet` types. The following examples should
help clarify.
Constructors:
```julia
julia> using IPNets, Sockets
julia> IPv4Net("1.2.3.0/24") # string in CIDR notation
IPv4Net("1.2.3.0/24")
julia> parse(IPv4Net, "1.2.3.0/24") # same as above
IPv4Net("1.2.3.0/24")
julia> IPv4Net(ip"1.2.3.0", 24) # IPv4 and mask as number of bits
IPv4Net("1.2.3.0/24")
julia> IPv4Net(ip"1.2.3.0", ip"255.255.255.0") # IPv4 and mask as another IPv4
IPv4Net("1.2.3.0/24")
julia> IPv4Net("1.2.3.4") # 32 bit mask default
IPv4Net("1.2.3.4/32")
```
Membership test:
```julia
julia> ip4net = IPv4Net("1.2.3.0/24");
julia> ip"1.2.3.4" in ip4net
true
julia> ip"1.2.4.1" in ip4net
false
```
Length, indexing, and iteration:
```julia
julia> ip4net = IPv4Net("1.2.3.0/24");
julia> length(ip4net)
256
julia> ip4net[0] # index from 0 (!)
ip"1.2.3.0"
julia> ip4net[0xff]
ip"1.2.3.255"
julia> ip4net[4:8]
5-element Vector{IPv4}:
ip"1.2.3.4"
ip"1.2.3.5"
ip"1.2.3.6"
ip"1.2.3.7"
ip"1.2.3.8"
julia> for ip in ip4net
@show ip
end
ip = ip"1.2.3.0"
ip = ip"1.2.3.1"
[...]
ip = ip"1.2.3.255"
```
Though these examples use the `IPv4Net` type, the `IPv6Net` type is also available with similar behavior:
```julia
julia> IPv6Net("1:2::/64") # string in CIDR notation
IPv6Net("1:2::/64")
julia> parse(IPv6Net, "1:2::/64") # same as above
IPv6Net("1:2::/64")
julia> IPv6Net(ip"1:2::", 64) # IPv6 and prefix
IPv6Net("1:2::/64")
julia> IPv6Net("1:2::3:4") # 128 bit mask default
IPv6Net("1:2::3:4/128")
```
For unknown (string) input use the `IPNet` supertype constructor (or `parse`):
```julia
julia> IPNet("1.2.3.0/24")
IPv4Net("1.2.3.0/24")
julia> parse(IPNet, "1.2.3.4")
IPv4Net("1.2.3.4/32")
julia> IPNet("1:2::3:4")
IPv6Net("1:2::3:4/128")
julia> parse(IPNet, "1:2::/64")
IPv6Net("1:2::/64")
```
### Limitations
- Non-contiguous subnetting for IPv4 addresses (e.g., a netmask of "255.240.255.0")
is not supported; subnets must be representable as a series of contiguous mask bits (see the example below).
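Attempting to construct such a network raises an error (output taken from this package's test suite):

```julia
julia> IPv4Net(ip"1.2.3.0", ip"255.240.255.0")
ERROR: ArgumentError: non-contiguous IPv4 subnets not supported, got 255.240.255.0
```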
| IPNets | https://github.com/JuliaWeb/IPNets.jl.git |
|
[
"MIT"
] | 0.3.5 | 7680a0e455ce44eb63301154a07412c9c75a0bd7 | code | 642 | using SymArrays
using Documenter
DocMeta.setdocmeta!(SymArrays, :DocTestSetup, :(using SymArrays); recursive=true)
makedocs(;
modules=[SymArrays],
authors="Johannes Feist <[email protected]> and contributors",
repo="https://github.com/jfeist/SymArrays.jl/blob/{commit}{path}#{line}",
sitename="SymArrays.jl",
format=Documenter.HTML(;
prettyurls=get(ENV, "CI", "false") == "true",
canonical="https://jfeist.github.io/SymArrays.jl",
assets=String[],
),
pages=[
"Home" => "index.md",
],
)
deploydocs(;
repo="github.com/jfeist/SymArrays.jl",
devbranch = "main",
)
| SymArrays | https://github.com/jfeist/SymArrays.jl.git |
|
[
"MIT"
] | 0.3.5 | 7680a0e455ce44eb63301154a07412c9c75a0bd7 | code | 237 | module SymArrays
export SymArray, contract, contract!, SymArr_ifsym, symgrp_size, symgrps, nsymgrps, storage_type
include("helpers.jl")
include("symarray.jl")
include("contractions.jl")
include("cuda_contractions.jl")
end # module
| SymArrays | https://github.com/jfeist/SymArrays.jl.git |
|
[
"MIT"
] | 0.3.5 | 7680a0e455ce44eb63301154a07412c9c75a0bd7 | code | 13126 | # Functions for contracting two arrays, both for general StridedArrays as well as
# for SymArrays. we only have a few specialized functions that we have needed up
# to now, but carefully optimize each of them
using TensorOperations
using LinearAlgebra
# Array[i]*SymArray[(i,j,k)]
# indices 1, 2, and 3 are exchangeable here
function contract(A::StridedVector{T},S::SymArray{(3,),U},n::Union{Val{1},Val{2},Val{3}}) where {T,U}
TU = promote_type(T,U)
@assert size(S,1) == length(A)
res = SymArray{(2,),TU}(size(S,1),size(S,2))
contract!(res,A,S,n)
end
# We know that $j\leq k$ (because $R$ is itself exchange symmetric)
# \begin{align}
# R_{jk} &= \sum_{i=1}^N g_i S_{ijk}
# \end{align}
# Matrix elements represented by $S_{ijk}$:
# \begin{equation}
# \begin{cases}
# S_{ijk}, S_{ikj}, S_{jik}, S_{jki}, S_{kij}, S_{kji} & i<j<k\\
# S_{ijk}, S_{jik}, S_{jki} & i<j=k\\
# S_{ijk}, S_{ikj}, S_{kij} & i=j<k\\
# S_{ijk} & i=j=k
# \end{cases}
# \end{equation}
# We only need to take the contributions that show up for each $R_{jk}$
# Array[i]*SymArray[(i,j,k)]
# indices 1, 2, and 3 are exchangeable here
function contract!(res::SymArray{(2,),TU}, A::StridedVector{T}, S::SymArray{(3,),U}, n::Union{Val{1},Val{2},Val{3}}) where {T,U,TU}
# only loop over S once, and put all the values where they should go
# R[j,k] = sum_i A[i] B[i,j,k]
# S[i,j,k] with i<=j<=k represents the 6 (not always distinct) terms: Bijk, Bikj, Bjik, Bjki, Bkij, Bkji
# since R[j,k] is also exchange symmetric, we only need to calculate j<=k
# this means we only have to check each adjacent pair of indices and include
# their permutation if not equal, but keeping the result indices ordered
@assert size(S,1) == length(A)
@assert size(S,1) == size(res,1)
res.data .= 0
@inbounds for (v,inds) in zip(S.data,CartesianIndices(S))
i,j,k = Tuple(inds)
res[j,k] += v*A[i]
# have to include i<->j
if i<j res[i,k] += v*A[j] end
# have to include i<->k, but keep indices of res sorted
if j<k res[i,j] += v*A[k] end
end
res
end
# Array[k]*SymArray[(i,j),k]
function contract(A::StridedVector{T},S::SymArray{(2,1),U},n::Val{3}) where {T,U}
TU = promote_type(T,U)
sumsize = length(A)
@assert sumsize == size(S,3)
res = SymArray{(2,),TU}(size(S,1),size(S,2))
contract!(res,A,S,n)
end
# Array[k]*SymArray[(i,j),k]
function contract!(res::SymArray{(2,),TU},A::StridedVector{T},S::SymArray{(2,1),U},::Val{3}) where {T,U,TU}
# use that S[(i,j),k] == S[I,k] (i.e., the two symmetric indices act like a "big" index)
mul!(res.data,reshape(S.data,:,length(A)),A)
res
end
# Array[i]*SymArray[(i,j),k)]
# since indices 1 and 2 are exchangeable here, use this
function contract(A::StridedVector{T},S::SymArray{(2,1),U},n::Union{Val{1},Val{2}}) where {T,U}
TU = promote_type(T,U)
@assert size(S,1) == length(A)
# the result is a normal 2D array
res = Array{TU,2}(undef,size(S,2),size(S,3))
contract!(res,A,S,n)
end
# Array[i]*SymArray[(i,j),k]
# since indices 1 and 2 are exchangeable here, use this
function contract!(res::StridedArray{TU,2},A::StridedVector{T},S::SymArray{(2,1),U},::Union{Val{1},Val{2}}) where {T,U,TU}
# only loop over S once, and put all the values where they should go
@assert size(A,1) == size(S,1)
@assert size(res,1) == size(S,1)
@assert size(res,2) == size(S,3)
res .= zero(TU)
@inbounds for (v,inds) in zip(S.data,CartesianIndices(S))
i1,i2,i3 = Tuple(inds)
res[i2,i3] += v*A[i1]
# if i1 != i2, we have to add the equal contribution from S[i2,i1,i3]
if i1 != i2
res[i1,i3] += v*A[i2]
end
end
res
end
# Array[i]*SymArray[(i,j)]
# this is symmetric in i1 and i2
function contract(A::StridedVector{T},S::SymArray{(2,),U},n::Union{Val{1},Val{2}}) where {T,U}
TU = promote_type(T,U)
@assert size(S,1) == length(A)
# the result is a normal 1D vector
res = Vector{TU}(undef,size(S,2))
contract!(res,A,S,n)
end
# Array[i]*SymArray[(i,j)]
# this is symmetric in i1 and i2
function contract!(res::StridedVector{TU},A::StridedVector{T},S::SymArray{(2,),U},::Union{Val{1},Val{2}}) where {T,U,TU}
@assert size(A,1) == size(S,1)
@assert size(res,1) == size(S,1)
res .= zero(TU)
# only loop over S once, and put all the values where they should go
for (v,inds) in zip(S.data,CartesianIndices(S))
i1,i2 = Tuple(inds)
res[i2] += v*A[i1]
# if i1 != i2, we have to add the contribution from S[i2,i1]
if i1 != i2
res[i1] += v*A[i2]
end
end
res
end
# Array[i_n]*Array[i1,i2,i3,...,iN]
function contract(A::StridedVector{T},B::StridedArray{U,N},::Val{n}) where {T,U,N,n}
TU = promote_type(T,U)
@assert 1 <= n <= N
resdims = size(B)[1:N .!= n]
res = similar(B,TU,resdims)
A = convert(AbstractArray{TU},A)
B = convert(AbstractArray{TU},B)
contract!(res,A,B,Val{n}())
end
mygemv!(args...) = BLAS.gemv!(args...)
function _contract_middle!(res,A,B)
@inbounds for k=1:size(B,3)
mul!(@view(res[:,k]), @view(B[:,:,k]), A)
end
end
# Array[i_n]*Array[i1,i2,i3,...,iN]
function contract!(res::StridedArray{TU},A::StridedVector{TU},B::StridedArray{TU,N},::Val{n}) where {TU,N,n}
nsum = length(A)
@assert size(B,n) == nsum
@assert ndims(res)+1 == ndims(B)
ii = 0
for jj = 1:ndims(B)
jj==n && continue
ii += 1
@assert size(B,jj) == size(res,ii)
end
if n==1 # A[i]*B[i,...]
mygemv!('T',one(TU),reshape(B,nsum,:),A,zero(TU),vec(res))
elseif n==N # B[...,i]*A[i]
mygemv!('N',one(TU),reshape(B,:,nsum),A,zero(TU),vec(res))
else
rightsize = prod(size(B,i) for i=n+1:N)
Br = reshape(B,:,nsum,rightsize)
resr = reshape(res,:,rightsize)
_contract_middle!(resr,A,Br)
end
res
end
# Array[i_n]*Array[i1,i2,i3,...,iN]
function contract!(res::StridedArray{TU,Nres},A::StridedArray{TU,NA},B::StridedArray{TU,NB},::Val{nA},::Val{nB}) where {TU,Nres,NA,NB,nA,nB}
# use specialized code for 1D-arrays if possible
if NA==1
@assert nA==1
return contract!(res,A,B,Val(nB))
elseif NB==1
@assert nB==1
return contract!(res,B,A,Val(nA))
end
nsum = size(A,nA)
@assert size(B,nB) == nsum
@assert Nres == NA + NB - 2
ii = 0
for jj = 1:NA
jj==nA && continue
ii += 1
@assert size(A,jj) == size(res,ii)
end
for jj = 1:NB
jj==nB && continue
ii += 1
@assert size(B,jj) == size(res,ii)
end
if nA==NA && nB==1 # A[...,i]*B[i,...]
resAsize = prod(ntuple(i->size(A,i),Val(NA-1)))
mul!(reshape(res,resAsize,:),reshape(A,resAsize,nsum),reshape(B,nsum,:))
else
packedsize(M,::Val{n}) where n = begin
nM1 = prod(ntuple(i->size(M,i),Val(n-1)))
nM2 = size(M,n)
nM3 = prod(ntuple(i->size(M,n+i),Val(ndims(M)-n)))
(nM1, nM2, nM3)
end
# just use @tensor after reshaping to pack together
NAps = packedsize(A,Val(nA))
NBps = packedsize(B,Val(nB))
Nresps = (NAps[1],NAps[3],NBps[1],NBps[3])
Ap = reshape(A,NAps...)
Bp = reshape(B,NBps...)
resp = reshape(res,Nresps...)
@tensor resp[iA1,iA3,iB1,iB3] = Ap[iA1,ii,iA3] * Bp[iB1,ii,iB3]
end
res
end
"""return the symmetry group index and the number of symmetric indices in the group"""
@inline which_symgrp(S::T,nS) where T<:SymArray = which_symgrp(T,nS)
@inline @generated function which_symgrp(::Type{<:SymArray{Nsyms}},nS) where Nsyms
grps = ()
for (ii,Nsym) in enumerate(Nsyms)
grps = (grps...,ntuple(_->ii,Nsym)...)
end
quote
ng = $grps[nS]
ng, $Nsyms[ng]
end
end
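# e.g. for S::SymArray{(2,1)}, dimension 2 belongs to symmetry group 1, which has two
# symmetric indices, so which_symgrp(S,2) == (1,2), while which_symgrp(S,3) == (2,1)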
"""Check if the arguments correspond to a valid contraction. Do all "static" checks at compile time."""
@generated function check_contraction_compatibility(res::SymArray{Nsymsres,TU}, A::StridedArray{T,NA}, S::SymArray{NsymsS,U}, ::Val{nA}, ::Val{nS}) where {T,U,TU,NsymsS,Nsymsres,NA,nA,nS}
promote_type(T,U) <: TU || error("element types not compatible: T = $T, U = $U, TU = $TU")
contracted_group, Nsym_ctrgrp = which_symgrp(S,nS)
NsymsA_contracted = ntuple(_->1,NA-1)
if Nsym_ctrgrp == 1
NsymsS_contracted = TupleTools.deleteat(NsymsS,contracted_group)
else
NsymsS_contracted = Base.setindex(NsymsS,Nsym_ctrgrp-1,contracted_group)
end
Nsymsres_check = (NsymsA_contracted...,NsymsS_contracted...)
# assure that symmetry structure is compatible
errmsg = "symmetry structure not compatible:"
errmsg *= "\nNA = $NA, NsymsS = $NsymsS, nA = $nA, nS = $nS"
errmsg *= "\nNsymsres = $Nsymsres, expected Nsymssres = $Nsymsres_check"
Nsymsres == Nsymsres_check || error(errmsg)
Sresinds = ((1:nS-1)...,(nS+1:sum(NsymsS))...)
Aresinds = ((1:nA-1)...,(nA+1:NA)...)
sizerescheck = ((i->:(sizeA[$i])).(Aresinds)...,
(i->:(sizeS[$i])).(Sresinds)...)
code = quote
sizeA = size(A)
sizeS = size(S)
@assert sizeA[$nA] == sizeS[$nS]
@assert size(res) == ($(sizerescheck...),)
end
#display(code)
code
end
"""
A[iAprev,icntrct,iApost]
S[iSprev,Icntrct,ISpost]
res[iAprev,iApost,iSprev,Icntrct-1,ISpost]
"""
@generated function contract_symindex!(res::Array{TU,5}, A::Array{T,3}, ::Val{sizeA13unit}, S::Array{U,3}, ::Val{sizeS13unit}, ::Val{Nsym}) where {T,U,TU,sizeA13unit,sizeS13unit,Nsym}
# Nsym-dimensional index tuple
iS2s = Tuple(Symbol.(:iS2_,1:Nsym))
# combined (Nsym-1)-dimensional index for the Nsym possible permutations
iSm2s = Tuple(Symbol.(:iSm2_,1:Nsym))
iAsetters = Expr(:block)
iAusers = Expr(:block)
for n = 1:Nsym
chk = n==1 ? true : :( $(iS2s[n-1])<$(iS2s[n]) )
syminds = TupleTools.deleteat(iS2s,n)
push!(iAsetters.args, :( $chk && ($(iSm2s[n]) = symgrp_sortedsub2ind($(syminds...))) ))
push!(iAusers.args, :( $chk && (res[iA1,iA3,iS1,$(iSm2s[n]),iS3] += v * A[iA1,$(iS2s[n]),iA3]) ))
end
iA1max = sizeA13unit[1] ? 1 : :(size(A,1))
iA3max = sizeA13unit[2] ? 1 : :(size(A,3))
iS1max = sizeS13unit[1] ? 1 : :(size(S,1))
iS3max = sizeS13unit[2] ? 1 : :(size(S,3))
code = quote
# size of iterated index is middle index of A
iterdimS = SymIndexIter($Nsym,size(A,2))
res .= zero(TU)
@inbounds for iS3 = 1:$iS3max
for (iS2,IS) = enumerate(iterdimS)
($(iS2s...),) = Tuple(IS)
$iAsetters
for iS1 = 1:$iS1max
v = S[iS1,iS2,iS3]
for iA3 = 1:$iA3max
for iA1 = 1:$iA1max
$iAusers
end
end
end
end
end
end
code
end
function contract!(res::StridedArray{TU,Nres}, A::StridedArray{T,NA}, S::SymArray{NsymsS,U}, ::Val{nA}, ::Val{nS}) where {T,U,TU,NsymsS,Nres,NA,nA,nS}
# StridedArray is equivalent to SymArray with all Nsyms equal to 1, provide this as overlay of that data
ressym = SymArray{ntuple(i->1,Val(Nres))}(res,size(res)...)
contract!(ressym,A,S,Val(nA),Val(nS))
res
end
@generated function contract!(res::SymArray{Nsymsres,TU}, A::StridedArray{T,NA}, S::SymArray{NsymsS,U}, ::Val{nA}, ::Val{nS}) where {T,U,TU,NsymsS,Nsymsres,NA,nA,nS}
contracted_group, Nsym_ctrgrp = which_symgrp(S,nS)
sizeA13unit = (nA==1,nA==NA)
sizeS13unit = (contracted_group==1,contracted_group==length(NsymsS))
newsize_centered_expr(sizeA::Symbol,nA::Int,NA::Int) = begin
sizeAs = [:( $sizeA[$ii] ) for ii=1:NA]
t1 = nA > 1 ? :( *($(sizeAs[1:nA-1]...)) ) : 1
t3 = nA < NA ? :( *($(sizeAs[nA+1:NA]...)) ) : 1
:( ($t1, $sizeA[$nA], $t3) )
end
code = quote
# first check that all the sizes are compatible etc
check_contraction_compatibility(res,A,S,Val($nA),Val($nS))
sizeA = size(A)
sizeAp = $(newsize_centered_expr(:sizeA,nA,NA))
Apacked = reshape(A,sizeAp)
grpsizeS = symgrp_size.(S.Nts,Val.(NsymsS))
sizeSp = $(newsize_centered_expr(:grpsizeS, contracted_group, length(NsymsS)))
Spacked = reshape(S.data,sizeSp)
if $Nsym_ctrgrp > 1
size_respS1 = symgrp_size(S.Nts[$contracted_group],Val($(Nsym_ctrgrp-1)))
respacked = reshape(res.data,sizeAp[1],sizeAp[3],sizeSp[1],size_respS1,sizeSp[3])
contract_symindex!(respacked,Apacked,Val($sizeA13unit),Spacked,Val($sizeS13unit),Val($Nsym_ctrgrp))
else
# iS2 is a single (not symmetric) dimension and it disappears after summing
# we have res[iS1,iS3,iA1,iA3] = S_iS1,k,iS3 * A_iA1,k,iA3
respacked = reshape(res.data,sizeAp[1],sizeAp[3],sizeSp[1],sizeSp[3])
@tensor respacked[iA1,iA3,iS1,iS3] = Apacked[iA1,ii,iA3] * Spacked[iS1,ii,iS3]
end
end
#display(code)
code
end
| SymArrays | https://github.com/jfeist/SymArrays.jl.git |
|
[
"MIT"
] | 0.3.5 | 7680a0e455ce44eb63301154a07412c9c75a0bd7 | code | 2491 | using CUDA
###########################################################################
## helper functions (originally from CuArrays, but since removed there) ##
###########################################################################
function cudims(n::Integer)
threads = min(n, 256)
cld(n,threads), threads
end
cudims(a::AbstractArray) = cudims(length(a))
cudims(a::SymArray) = cudims(a.data)
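# e.g. cudims(1000) == (4, 256): 4 blocks of 256 threads cover the 1000 elements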
# COV_EXCL_START
@inline ind2sub_(a::AbstractArray{T,0}, i) where T = ()
@inline ind2sub_(a, i) = Tuple(CartesianIndices(a)[i])
macro cuindex(A)
quote
A = $(esc(A))
i = (blockIdx().x-1) * blockDim().x + threadIdx().x
i > length(A) && return
ind2sub_(A, i)
end
end
# COV_EXCL_STOP
###########################################################################
mygemv!(tA,alpha,A::CuArray,args...) = CUDA.CUBLAS.gemv!(tA,alpha,A,args...)
_contract_middle!(res::CuArray,A,B) = (@tensor res[i,k] = B[i,j,k] * A[j])
# for SymArrays on the GPU, collect should convert to a SymArray on the CPU
# to convert to a "normal" Array, you then have to apply collect again
Base.collect(S::SymArray{Nsyms,T,N,M,datType}) where {Nsyms,T,N,M,datType<:CuArray} = SymArray{Nsyms}(collect(S.data),S.size...)
@generated function cuda_contraction_kernel(res, A, S, SI::SymIndexIter{Nsymres}) where {Nsymres}
# calculate res[iA1,iA3,iS1,iSm2,iS3] = ∑_iA2 A[iA1,iA2,iA3] * S[iS1,iS2,iS3]
# where iSm2 = (i1,i2,...,iNsymres) and iS2 = sorted(iA2,i1,i2,i3...,iNsymres)
code = quote
I = @cuindex(res)
iA1,iA3,iS1,iSm2,iS3 = I
ISm2 = ind2sub_symgrp(SI, iSm2)
res[I...] = zero(eltype(res))
end
for n = 0:Nsymres
iAstart = n==0 ? 1 : :(ISm2[$n]+1)
iAend = n<Nsymres ? :(ISm2[$(n+1)]) : :(size(A,2))
iprev = [:( ISm2[$i] ) for i=1:n]
ipost = [:( ISm2[$i] ) for i=n+1:Nsymres]
cc = :( for iA2 = $iAstart:$iAend
iS2 = symgrp_sortedsub2ind($(iprev...),iA2,$(ipost...))
res[I...] += A[iA1,iA2,iA3]*S[iS1,iS2,iS3]
end)
push!(code.args,cc)
end
push!(code.args,:(return))
#display(code)
:( @inbounds $code )
end
function contract_symindex!(res::CuArray{TU,5}, A::CuArray{T,3}, ::Val{sizeA13unit}, S::CuArray{U,3}, ::Val{sizeS13unit}, ::Val{Nsym}) where {T,U,TU,sizeA13unit,sizeS13unit,Nsym}
blk, thr = cudims(res)
SI = SymIndexIter(Nsym-1,size(A,2))
@cuda blocks=blk threads=thr cuda_contraction_kernel(res,A,S,SI)
end | SymArrays | https://github.com/jfeist/SymArrays.jl.git |
|
[
"MIT"
] | 0.3.5 | 7680a0e455ce44eb63301154a07412c9c75a0bd7 | code | 2188 | """based on Base.binomial, but without negative values for n and without overflow checks
(index calculations here should not overflow if the array does not have more elements than an Int64 can represent)"""
@inline function binomial_simple(n::T, k::T) where T<:Integer
(k < 0 || k > n) && return zero(T)
(k == 0 || k == n) && return one(T)
if k > (n>>1)
k = n - k
end
k == 1 && return n
x::T = nn = n - k + 1
nn += 1
rr = 2
while rr <= k
x = div(x*nn, rr)
rr += 1
nn += 1
end
x
end
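# e.g. binomial_simple(5, 2) == 10 and binomial_simple(5, 0) == 1
# (illustrative comment, not in the original source)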
@inline @generated function binomial_unrolled(n::T, ::Val{k}) where {k,T<:Integer}
terms = [:(n - $(k-j)) for j=1:k]
if k < 10
# for small k, just return the unrolled calculation directly
# (n k) = prod(n+i-k, i=1:k)/k!
        # typemax(Int)/factorial(9) ≈ 25*10^12,
        # so for k<10 this does not overflow for index calculations of arrays up to ~100TB,
        # and we do not need to worry about it here
# use the precomputed factorial
binom = :( *($(terms...)) ÷ $(factorial(k)) )
else
binom = terms[1] # j=1
for j=2:k
# careful about operation order:
# first multiply, the product is then always divisible by j
binom = :( ($binom * $(terms[j])) ÷ $j )
end
# (n k) == (n n-k) -> should be faster for n-k < k
# but when we do this replacement, we cannot unroll the loop explicitly,
# so heuristically use n-k < k/2 to ensure that it wins
:( n < $(3k÷2) ? binomial_simple(n,n-$k) : $binom )
end
end
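# e.g. binomial_unrolled(5, Val(2)) == binomial(5, 2) == 10, with the product
# (4*5) ÷ 2! unrolled at compile time (illustrative comment, not in the original source)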
"""calculate binomial(ii+n+offset,n)
This shows up in size and index calculations for arrays with symmetric indices."""
#@inline symind_binomial(ii,::Val{n},::Val{offset}) where {n,offset} = binomial_unrolled(ii+(n+offset),Val(n))
# binary search for finding an integer m such that func(m) <= ind < func(m+1), with low <= m <= high
function searchlast_func(x,func,low::T,high::T) where T<:Integer
high += one(T)
while low < high-one(T)
mid = (low+high) >> 1
if x < func(mid)
high = mid
else
low = mid
end
end
return low
end
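# e.g. searchlast_func(10, m -> m^2, 1, 10) == 3, since 3^2 <= 10 < 4^2
# (illustrative comment, not in the original source)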
| SymArrays | https://github.com/jfeist/SymArrays.jl.git |
|
[
"MIT"
] | 0.3.5 | 7680a0e455ce44eb63301154a07412c9c75a0bd7 | code | 10001 | import Base: size, length, ndims, eltype, first, last, ==, parent
import Base: getindex, setindex!, iterate, eachindex, IndexStyle, CartesianIndices, tail, copyto!, fill!
using LinearAlgebra
using TupleTools
using Adapt
"size of a single symmetric group with Nsym dimensions and size Nt per dimension"
symgrp_size(Nt,Nsym) = binomial_simple(Nt-1+Nsym, Nsym)
symgrp_size(Nt,::Val{Nsym}) where Nsym = binomial_unrolled(Nt+(Nsym-1),Val(Nsym))
# calculates the length of a SymArray
symarrlength(Nts,Nsyms) = prod(symgrp_size.(Nts,Nsyms))
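# e.g. symgrp_size(3, 2) == binomial(4, 2) == 6, the number of distinct pairs
# (i, j) with 1 <= i <= j <= 3; symarrlength((3, 6), (3, 2)) == 10 * 21 == 210
# (illustrative comment, not in the original source)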
@generated function _getNts(::Val{Nsyms},size::NTuple{N,Int}) where {Nsyms,N}
@assert sum(Nsyms)==N
symdims = cumsum(collect((1,Nsyms...)))
Nts = [ :( size[$ii] ) for ii in symdims[1:end-1]]
code = quote
Nts = ($(Nts...),)
end
err = :( error("SymArray: sizes $size not compatible with symmetry numbers $Nsyms") )
for ii = 1:length(Nsyms)
dd = [ :( size[$jj] ) for jj=symdims[ii]:symdims[ii+1]-1 ]
push!(code.args,:( all(($(dd...),) .== Nts[$ii]) || $err ))
end
push!(code.args, :( Nts ))
code
end
struct SymArray{Nsyms,T,N,M,datType<:AbstractArray} <: AbstractArray{T,N}
data::datType
size::NTuple{N,Int}
Nts::NTuple{M,Int}
function SymArray{Nsyms,T}(::Type{arrType},size::Vararg{Int,N}) where {Nsyms,T,N,arrType}
Nts = _getNts(Val(Nsyms),size)
M = length(Nsyms)
data = arrType{T,M}(undef,symgrp_size.(Nts,Nsyms)...)
new{Nsyms,T,N,M,typeof(data)}(data,size,Nts)
end
function SymArray{Nsyms,T}(size::Vararg{Int,N}) where {Nsyms,T,N}
SymArray{Nsyms,T}(Array,size...)
end
# this creates a SymArray that serves as a view on an existing array
function SymArray{Nsyms}(data::datType,size::Vararg{Int,N}) where {Nsyms,N,datType<:AbstractArray{T}} where T
Nts = _getNts(Val(Nsyms),size)
@assert Base.size(data) == symgrp_size.(Nts,Val.(Nsyms))
new{Nsyms,T,N,length(Nsyms),datType}(data,size,Nts)
end
end
parent(A::SymArray) = A.data
size(A::SymArray) = A.size
length(A::SymArray) = length(A.data)
# this is necessary for CUDAnative kernels, but also generally useful
# e.g., adapt(CuArray,S) will return a copy of the SymArray with storage in a CuArray
Adapt.adapt_structure(to, x::SymArray{Nsyms}) where Nsyms = SymArray{Nsyms}(adapt(to,x.data),x.size...)
symgrp_size(S::SymArray{Nsyms}) where Nsyms = symgrp_size.(S.Nts,Nsyms)
symgrp_size(S::SymArray{Nsyms},d::Integer) where Nsyms = symgrp_size(S.Nts[d],Nsyms[d])
symgrps(S) = symgrps(typeof(S))
symgrps(::Type{<:SymArray{Nsyms}}) where Nsyms = Nsyms
nsymgrps(S) = nsymgrps(typeof(S))
nsymgrps(::Type{<:SymArray{Nsyms,T,N,M}}) where {Nsyms,T,N,M} = M
"""
storage_type(A)
Return the type of the underlying storage array for array wrappers.
"""
storage_type(A) = storage_type(typeof(A))
storage_type(T::Type) = (P = parent_type(T); P===T ? T : storage_type(P))
parent_type(T::Type{<:AbstractArray}) = T
parent_type(::Type{<:PermutedDimsArray{T,N,perm,iperm,AA}}) where {T,N,perm,iperm,AA} = AA
parent_type(::Type{<:LinearAlgebra.Transpose{T,S}}) where {T,S} = S
parent_type(::Type{<:LinearAlgebra.Adjoint{T,S}}) where {T,S} = S
parent_type(::Type{<:SubArray{T,N,P}}) where {T,N,P} = P
parent_type(::Type{<:SymArray{Nsyms,T,N,M,datType}}) where {Nsyms,T,N,M,datType} = datType
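# e.g. storage_type(view(transpose(rand(3, 3)), :, 1:2)) == Matrix{Float64}
# (illustrative comment, not in the original source)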
copyto!(S::SymArray,A::AbstractArray) = begin
Ainds, Sinds = LinearIndices(A), LinearIndices(S)
isempty(Ainds) || (checkbounds(Bool, Sinds, first(Ainds)) && checkbounds(Bool, Sinds, last(Ainds))) || throw(BoundsError(S, Ainds))
@inbounds for (i,I) in zip(Ainds,eachindex(S))
S[i] = A[I]
end
S
end
copyto!(A::AbstractArray,S::SymArray) = begin
Ainds, Sinds = LinearIndices(A), LinearIndices(S)
isempty(Sinds) || (checkbounds(Bool, Ainds, first(Sinds)) && checkbounds(Bool, Ainds, last(Sinds))) || throw(BoundsError(A, Sinds))
@inbounds for (i,I) in zip(Ainds,CartesianIndices(A))
A[i] = S[I]
end
A
end
copyto!(S::SymArray,Ssrc::SymArray) = begin
@assert symgrps(S) == symgrps(Ssrc)
@assert size(S) == size(Ssrc)
copyto!(S.data,Ssrc.data)
S
end
Base.similar(src::SymArray{Nsyms}) where Nsyms = SymArray{Nsyms}(similar(parent(src)),size(src)...)
Base.copy(src::SymArray{Nsyms}) where Nsyms = SymArray{Nsyms}(copy(parent(src)),size(src)...)
fill!(S::SymArray,v) = fill!(S.data,v)
==(S1::SymArray,S2::SymArray) = (symgrps(S1),S1.data) == (symgrps(S2),S2.data)
SymArray{Nsyms}(A::AbstractArray{T}) where {Nsyms,T} = (S = SymArray{Nsyms,T}(size(A)...); copyto!(S,A))
# to avoid ambiguity with Vararg "view" constructor above
SymArray{(1,)}(A::AbstractVector{T}) where {T} = (S = SymArray{(1,),T}(size(A)...); copyto!(S,A))
"`SymArr_ifsym(A,Nsyms)` make a SymArray if there is some symmetry (i.e., any of the Nsyms are not 1)"
SymArr_ifsym(A,Nsyms) = all(Nsyms.==1) ? A : SymArray{(Nsyms...,)}(A)
"calculates the contribution of index idim in (i1,...,idim,...,iN) to the corresponding linear index for the group"
symind2ind(i,::Val{dim}) where dim = binomial_unrolled(i+(dim-2),Val(dim))
"calculates the linear index corresponding to the symmetric index group (i1,...,iNsym)"
@inline @generated function symgrp_sortedsub2ind(I::Vararg{T,Nsym})::T where {Nsym,T<:Integer}
terms2toN = [ :( symind2ind(I[$dim],Val($dim)) ) for dim=2:Nsym ]
:( +(I[1],$(terms2toN...)) )
end
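# e.g. for a group of two symmetric indices i <= j, the linear index is
# i + binomial(j, 2): (1,1) -> 1, (1,2) -> 2, (2,2) -> 3, (1,3) -> 4, ...
# (illustrative comment, not in the original source)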
@generated _sub2grp(A::SymArray{Nsyms,T,N}, I::Vararg{Int,N}) where {Nsyms,T,N} = begin
result = []
ii::Int = 0
for Nsym in Nsyms
Ilocs = ( :( I[$(ii+=1)] ) for _=1:Nsym)
push!(result, :( symgrp_sortedsub2ind(TupleTools.sort(($(Ilocs...),))...) ) )
end
code = :( ($(result...),) )
code
end
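# illustrative sketch (not in the original source): for S = SymArray{(2,),Float64}(5,5),
# _sub2grp(S, 5, 2) sorts the symmetric group to (2, 5) and returns
# (2 + binomial(5, 2),) == (12,)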
IndexStyle(::Type{<:SymArray}) = IndexCartesian()
getindex(A::SymArray, i::Int) = A.data[i]
getindex(A::SymArray{Nsyms,T,N}, I::Vararg{Int,N}) where {Nsyms,T,N} = (@boundscheck checkbounds(A,I...); @inbounds A.data[_sub2grp(A,I...)...])
setindex!(A::SymArray, v, i::Int) = A.data[i] = v
setindex!(A::SymArray{Nsyms,T,N}, v, I::Vararg{Int,N}) where {Nsyms,T,N} = (@boundscheck checkbounds(A,I...); @inbounds A.data[_sub2grp(A,I...)...] = v);
eachindex(S::SymArray) = CartesianIndices(S)
CartesianIndices(S::SymArray) = SymArrayIter(S)
@generated function lessnexts(::SymArray{Nsyms}) where Nsyms
lessnext = ones(Bool,sum(Nsyms))
istart = 1
for Nsym in Nsyms
istart += Nsym
lessnext[istart-1] = false
end
Tuple(lessnext)
end
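# e.g. for Nsyms == (2, 2) lessnexts returns (true, false, true, false): within a
# group the next index bounds the current one, across group boundaries it does not
# (illustrative comment, not in the original source)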
struct SymArrayIter{N}
lessnext::NTuple{N,Bool}
sizes::NTuple{N,Int}
"create an iterator that gives i1<=i2<=i3 etc for each index group"
SymArrayIter(A::SymArray{Nsyms,T,N,M}) where {Nsyms,T,N,M} = new{N}(lessnexts(A),A.size)
end
ndims(::SymArrayIter{N}) where N = N
eltype(::Type{SymArrayIter{N}}) where N = NTuple{N,Int}
first(iter::SymArrayIter) = CartesianIndex(map(one, iter.sizes))
last(iter::SymArrayIter) = CartesianIndex(iter.sizes...)
@inline function iterate(iter::SymArrayIter)
iterfirst = first(iter)
iterfirst, iterfirst
end
@inline function iterate(iter::SymArrayIter, state)
valid, I = __inc(state.I, iter.sizes, iter.lessnext)
valid || return nothing
return CartesianIndex(I...), CartesianIndex(I...)
end
# check validity before incrementing to avoid integer overflow
@inline __inc(::Tuple{}, ::Tuple{}, ::Tuple{}) = false, ()
@inline function __inc(state::Tuple{Int}, size::Tuple{Int}, lessnext::Tuple{Bool})
valid = state[1] < size[1]
return valid, (state[1]+1,)
end
@inline function __inc(state, sizes, lessnext)
smax = lessnext[1] ? state[2] : sizes[1]
if state[1] < smax
return true, (state[1]+1, tail(state)...)
end
valid, I = __inc(tail(state), tail(sizes), tail(lessnext))
return valid, (1, I...)
end
struct SymIndexIter{Nsym}
size::Int
"create an iterator that gives i1<=i2<=i3 etc for one index group"
SymIndexIter(Nsym,size) = new{Nsym}(size)
end
ndims(::SymIndexIter{Nsym}) where Nsym = Nsym
eltype(::Type{SymIndexIter{Nsym}}) where Nsym = NTuple{Nsym,Int}
length(iter::SymIndexIter{Nsym}) where Nsym = symgrp_size(iter.size,Nsym)
first(iter::SymIndexIter{Nsym}) where Nsym = ntuple(one,Val(Nsym))
last(iter::SymIndexIter{Nsym}) where Nsym = ntuple(i->iter.size,Val(Nsym))
@inline function iterate(iter::SymIndexIter)
I = first(iter)
I, I
end
@inline function iterate(iter::SymIndexIter, state)
valid, I = __inc(state, iter.size)
ifelse(valid, (I, I), nothing)
end
# check validity before incrementing to avoid integer overflow
@inline __inc(::Tuple{}, ::Tuple{}) = false, ()
@inline function __inc(state::Tuple{Int}, size::Int)
valid = state[1] < size
return valid, (state[1]+1,)
end
@inline function __inc(state::NTuple{N,Int}, size::Int) where N
if state[1] < state[2]
return true, (state[1]+1, tail(state)...)
end
valid, I = __inc(tail(state), size)
return valid, (1, I...)
end
function _find_symind(ind::T, ::Val{dim}, high::T) where {dim,T<:Integer}
dim==1 ? ind+one(T) : searchlast_func(ind, x->symind2ind(x,Val(dim)),one(T),high)
end
"""convert a linear index for a symmetric index group into a group of subindices"""
@generated function ind2sub_symgrp(SI::SymIndexIter{N},ind::T) where {N,T<:Integer}
code = quote
ind -= 1
end
kis = Symbol.(:k,1:N)
for dim=N:-1:2
push!(code.args,:( $(kis[dim]) = _find_symind(ind,Val($dim),T(SI.size)) ))
push!(code.args,:( ind -= symind2ind($(kis[dim]),Val($dim)) ))
end
push!(code.args, :( k1 = ind + 1 ))
push!(code.args, :( return ($(kis...),)) )
#display(code)
code
end
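# illustrative sketch (not in the original source): ind2sub_symgrp(SymIndexIter(2, 3), 4)
# returns (1, 3), inverting symgrp_sortedsub2ind(1, 3) == 1 + binomial(3, 2) == 4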
@generated function _grp2sub(A::SymArray{Nsyms,T,N,M}, I::Vararg{Int,M}) where {Nsyms,T,N,M}
exs = [:( ind2sub_symgrp(SymIndexIter($Nsym,A.Nts[$ii]),I[$ii]) ) for (ii,Nsym) in enumerate(Nsyms)]
code = :( TupleTools.flatten($(exs...)) )
#display(code)
code
end
@inline ind2sub(A, ii) = Tuple(CartesianIndices(A)[ii])
@inline ind2sub(A::SymArray,ii) = _grp2sub(A,ind2sub(A.data,ii)...)
| SymArrays | https://github.com/jfeist/SymArrays.jl.git |
|
[
"MIT"
] | 0.3.5 | 7680a0e455ce44eb63301154a07412c9c75a0bd7 | code | 10700 | using Test
using SymArrays
using SymArrays: symarrlength, _sub2grp, which_symgrp
using TensorOperations
using Random
using CUDA
using cuTENSOR
CUDA.allowscalar(false)
@testset "SymArrays.jl" begin
@testset "SymArray" begin
@test symarrlength((3,6,4,3),(3,2,1,3)) == 8400
S = SymArray{(2,),Float64}(5,5)
@test nsymgrps(S) == 1
@test symgrps(S) == (2,)
@test _sub2grp(S,2,5) == _sub2grp(S,5,2)
@test _sub2grp(S,3,4) != _sub2grp(S,5,3)
        # _sub2grp has to be called with as many index arguments as the array has dimensions (N)
@test_throws MethodError _sub2grp(S,3,5,1)
# indexing the array allows having additional 1s at the end
@test S[3,5,1] == S[3,5]
@test_throws BoundsError S[3,5,3]
# this calculation gives an index for data that is within bounds, but should not be legal
# make sure this is caught
@test_throws BoundsError S[0,6]
@test (S[1,5] = 2.; S[1,5] == 2.)
@test_throws BoundsError S[0,6] = 2.
S = SymArray{(3,1,2,2),Float64}(3,3,3,2,4,4,4,4)
@test nsymgrps(S) == 4
@test symgrps(S) == (3,1,2,2)
@test size(S) == (3,3,3,2,4,4,4,4)
@test length(S) == 2000
# iterating over all indices should give only the distinct indices,
# i.e., give the same number of terms as the array length
@test sum(1 for s in S) == length(S)
@test sum(1 for I in eachindex(S)) == length(S)
# calculating the linear index when iterating over Cartesian indices should give sequential access to the array
@test 1:length(S) == [(LinearIndices(S.data)[_sub2grp(S,Tuple(I)...)...] for I in eachindex(S))...]
@testset "_sub2grp" begin
# test that permuting exchangeable indices accesses the same array element
i1 = _sub2grp(S,1,2,3,1,4,3,2,1)
@test _sub2grp(S,2,3,1,1,4,3,2,1) == i1
@test _sub2grp(S,2,3,1,1,3,4,2,1) == i1
@test _sub2grp(S,2,3,1,1,3,4,1,2) == i1
@test _sub2grp(S,2,1,3,1,4,3,1,2) == i1
# make sure that swapping independent indices gives a different array element
@test _sub2grp(S,2,1,3,1,4,1,3,2) != i1
@test _sub2grp(S,1,2,3,2,4,3,2,1) != i1
@test _sub2grp(S,1,2,3,1,1,3,2,4) != i1
SI = eachindex(S)
@test first(SI) == CartesianIndex(1,1,1,1,1,1,1,1)
NN = 4
maxNdim = 40
for Ndim = 1:maxNdim
S = SymArray{(Ndim,),Float64}(ntuple(i->NN,Ndim)...)
Is = ntuple(i->NN,Ndim)
@test _sub2grp(S,Is...) == size(S.data)
Is = ntuple(i->1,Ndim)
@test _sub2grp(S,Is...) == (1,)
end
end
A = rand(5,5)
@test A' != A
@test SymArray{(1,1)}(A) == A
@test SymArray{(2,)}(A) != A
A = A + A'
        # the standard equality test uses iteration over the arrays, (a,b) in zip(A,B),
        # which only visits the distinct (stored) indices of a SymArray
@test SymArray{(2,)}(A) != A
# broadcasting goes over all indices
@test all(SymArray{(2,)}(A) .== A)
# views on existing vector-like types
x = rand(3)
S = SymArray{(2,)}(x,2,2)
@test S.data === x
fill!(S,8.)
@test all(S .== 8.)
S2 = SymArray{(2,)}(0*x,2,2)
copyto!(S2,S)
@test S == S2
x = 5:10
S = SymArray{(2,)}(x,3,3)
@test S.data === x
A = rand(10)
S = SymArray{(1,)}(A,10)
@test S.data === A
# but if called without sizes, the copy constructor should be used
S = SymArray{(1,)}(A)
@test S.data !== A
end
@testset "which_symgrp" begin
S = SymArray{(3,2),Float64}(5,5,5,3,3)
@test which_symgrp(S,1) == (1,3)
@test which_symgrp(S,3) == (1,3)
@test which_symgrp(S,4) == (2,2)
@test which_symgrp(S,5) == (2,2)
end
@testset "Contractions" begin
@testset "Manual" begin
N,M,O = 3, 5, 8
for T in (Float64,ComplexF64)
A = rand(T,N)
for U in (Float64,ComplexF64)
for (n,dims) in enumerate([(N,M,O),(O,N,M),(M,O,N)])
B = rand(U,dims...)
n==1 && (@tensor D1[j,k] := A[i]*B[i,j,k])
n==2 && (@tensor D1[j,k] := A[i]*B[j,i,k])
n==3 && (@tensor D1[j,k] := A[i]*B[j,k,i])
D2 = contract(A,B,Val(n))
@test D1 ≈ D2
if T==U
contract!(D2,A,B,Val(n))
@test D1 ≈ D2
else
@test_throws MethodError contract!(D2,A,B,Val(n))
end
end
S = SymArray{(3,),U}(N,N,N)
rand!(S.data)
B = collect(S)
@test contract(A,S,Val(1)) == contract(A,S,Val(2))
@test contract(A,S,Val(1)) == contract(A,S,Val(3))
for n = 1:3
@test collect(contract(A,S,Val(n))) ≈ contract(A,B,Val(n))
end
S = SymArray{(2,1),U}(N,N,N)
rand!(S.data)
B = collect(S)
@test contract(A,S,Val(1)) == contract(A,S,Val(2))
for n = 1:3
@test collect(contract(A,S,Val(n))) ≈ contract(A,B,Val(n))
end
@test !(collect(contract(A,S,Val(1))) ≈ collect(contract(A,S,Val(3))))
S = SymArray{(2,),U}(N,N)
rand!(S.data)
B = collect(S)
@test contract(A,S,Val(1)) ≈ contract(A,S,Val(2))
for n = 1:2
@test collect(contract(A,S,Val(n))) ≈ contract(A,B,Val(n))
end
end
end
end
@testset "Generated" begin
# use small dimension sizes here so the tests do not take too long
N,M,O = 4, 5, 6
arrTypes = has_cuda_gpu() ? (Array,CuArray) : (Array,)
for arrType in arrTypes
for T in (Float64,ComplexF64)
A = rand(T,N,M,O) |> arrType
for U in (Float64,ComplexF64)
S = SymArray{(2,3,1),U}(arrType,M,M,N,N,N,O)
@test storage_type(S) <: arrType
rand!(S.data)
# first collect GPU->CPU, then SymArray -> Array
B = collect(collect(S)) |> arrType
@test storage_type(B) <: arrType
TU = promote_type(U,T)
res21 = SymArray{(1,1,1,3,1),TU}(arrType,N,O,M,N,N,N,O)
contract!(res21,A,S,Val(2),Val(1))
@tensor res21_tst[i,k,l,m,n,o,p] := A[i,j,k] * B[j,l,m,n,o,p]
@test collect(collect(res21)) ≈ collect(res21_tst)
if T==U
contract!(res21_tst,A,B,Val(2),Val(1))
@test collect(collect(res21)) ≈ collect(res21_tst)
end
res13 = SymArray{(1,1,2,2,1),TU}(arrType,M,O,M,M,N,N,O)
contract!(res13,A,S,Val(1),Val(3))
@tensor res13_tst[j,k,l,m,n,o,p] := A[i,j,k] * B[l,m,i,n,o,p]
@test collect(collect(res13)) ≈ collect(res13_tst)
if T==U
contract!(res13_tst,A,B,Val(1),Val(3))
@test collect(collect(res13)) ≈ collect(res13_tst)
end
# dimension 3, 4, and 5 should be equivalent
contract!(res13,A,S,Val(1),Val(4))
@test collect(collect(res13)) ≈ collect(res13_tst)
contract!(res13,A,S,Val(1),Val(5))
@test collect(collect(res13)) ≈ collect(res13_tst)
end
A = rand(T,N) |> arrType
S = SymArray{(2,3,1),T}(arrType,N,N,N,N,N,N)
rand!(S.data)
# first collect GPU->CPU, then SymArray -> Array
B = collect(collect(S)) |> arrType
res11 = SymArray{(1,3,1),T}(arrType,N,N,N,N,N)
contract!(res11,A,S,Val(1),Val(1))
@tensor res11_tst[j,k,l,m,n] := A[i] * B[i,j,k,l,m,n]
@test collect(collect(res11)) ≈ collect(res11_tst)
contract!(res11_tst,A,B,Val(1),Val(1))
@test collect(collect(res11)) ≈ collect(res11_tst)
res13 = SymArray{(2,2,1),T}(arrType,N,N,N,N,N)
contract!(res13,A,S,Val(1),Val(3))
@tensor res13_tst[j,k,l,m,n] := A[i] * B[j,k,i,l,m,n]
@test collect(collect(res13)) ≈ collect(res13_tst)
contract!(res13_tst,A,B,Val(1),Val(3))
@test collect(collect(res13)) ≈ collect(res13_tst)
# dimension 3, 4, and 5 should be equivalent
contract!(res13_tst,A,B,Val(1),Val(4))
@test collect(collect(res13)) ≈ collect(res13_tst)
contract!(res13_tst,A,B,Val(1),Val(5))
@test collect(collect(res13)) ≈ collect(res13_tst)
res16 = SymArray{(2,3),T}(arrType,N,N,N,N,N)
contract!(res16,A,S,Val(1),Val(6))
@tensor res16_tst[j,k,l,m,n] := A[i] * B[j,k,l,m,n,i]
@test collect(collect(res16)) ≈ collect(res16_tst)
contract!(res16_tst,A,B,Val(1),Val(6))
@test collect(collect(res16)) ≈ collect(res16_tst)
# check that contraction from SymArray to Array works
A = rand(T,M,O) |> arrType
S = SymArray{(1,2,1),T}(arrType,N,M,M,O)
rand!(S.data)
B = collect(collect(S)) |> arrType
res12 = zeros(T,O,N,M,O) |> arrType
res12_tst = zeros(T,O,N,M,O) |> arrType
contract!(res12,A,S,Val(1),Val(2))
contract!(res12_tst,A,B,Val(1),Val(2))
@test collect(res12) ≈ collect(res12_tst)
end
end
end
end
end | SymArrays | https://github.com/jfeist/SymArrays.jl.git |
|
[
"MIT"
] | 0.3.5 | 7680a0e455ce44eb63301154a07412c9c75a0bd7 | docs | 1621 | # SymArrays.jl
[](https://jfeist.github.io/SymArrays.jl/stable)
[](https://jfeist.github.io/SymArrays.jl/dev)
[](https://github.com/jfeist/SymArrays.jl/actions/workflows/CI.yml)
[](https://codecov.io/gh/jfeist/SymArrays.jl)
This package provides some tools to efficiently store arrays with exchange symmetries, i.e., arrays where exchanging two indices leaves the value unchanged. It stores the underlying data in a flat vector and provides mappings that allow addressing it as a "normal" `AbstractArray{T,N}`. To generate a new array with undefined data, use
```julia
S = SymArray{Nsyms,T}(dims...)
```
where `Nsyms` is a tuple that indicates the size of each group of exchangeable indices (which have to be adjacent for simplicity), `T` is the element type (e.g., `Float64` or `ComplexF64`), and `dims` are the dimensions of the array (which have to satisfy `length(dims)==sum(Nsyms)`). As an example
```julia
S = SymArray{(3,1,2,1),Float64}(10,10,10,3,50,50,50)
```
declares an array `S[(i,j,k),l,(m,n),o]` where any permutation of `(i,j,k)` leaves the value unchanged, as does any permutation of `(m,n)`. Note that interchangeable indices obviously have to have the same size.
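To make the symmetry concrete, here is a minimal sketch (the values and sizes are illustrative, not part of the API):
```julia
using SymArrays

S = SymArray{(2,),Float64}(3, 3)
S[1, 2] = 5.0
S[2, 1] == 5.0          # true: the two indices are exchangeable
length(S.data) == 6     # only binomial(4, 2) == 6 distinct entries are stored
```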
## TODO:
- Allow specification and treatment of Hermitian indices, where any permutation conjugates the result (possibly only for 2 indices at a time?).
| SymArrays | https://github.com/jfeist/SymArrays.jl.git |
|
[
"MIT"
] | 0.3.5 | 7680a0e455ce44eb63301154a07412c9c75a0bd7 | docs | 70 | # SymArrays.jl
```@index
```
```@autodocs
Modules = [SymArrays]
```
| SymArrays | https://github.com/jfeist/SymArrays.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | code | 7623 | # Copyright (c) 2020: Tomás Gutierrez and contributors
#
# Use of this source code is governed by an MIT-style license that can be found
# in the LICENSE.md file or at https://opensource.org/licenses/MIT.
push!(LOAD_PATH, "./src")
using ParametricOptInterface
using MathOptInterface
using GLPK
import Random
#using SparseArrays
using TimerOutputs
const MOI = MathOptInterface
const POI = ParametricOptInterface
SOLVER = GLPK
if SOLVER == GLPK
MAX_ITER_PARAM = "it_lim"
elseif SOLVER == Gurobi
MAX_ITER_PARAM = "IterationLimit"
elseif SOLVER == Xpress
MAX_ITER_PARAM = "LPITERLIMIT"
end
struct PMedianData
num_facilities::Int
num_customers::Int
num_locations::Int
customer_locations::Vector{Float64}
end
# This is the LP relaxation.
function generate_poi_problem(model, data::PMedianData, add_parameters::Bool)
NL = data.num_locations
NC = data.num_customers
###
### 0 <= facility_variables <= 1
###
facility_variables = MOI.add_variables(model, NL)
for v in facility_variables
MOI.add_constraint(model, v, MOI.Interval(0.0, 1.0))
end
###
### assignment_variables >= 0
###
assignment_variables = reshape(MOI.add_variables(model, NC * NL), NC, NL)
for v in assignment_variables
MOI.add_constraint(model, v, MOI.GreaterThan(0.0))
# "Less than 1.0" constraint is redundant.
end
###
### Objective function
###
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
MOI.ScalarAffineFunction(
[
MOI.ScalarAffineTerm(
data.customer_locations[i] - j,
assignment_variables[i, j],
) for i in 1:NC for j in 1:NL
],
0.0,
),
)
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
###
### assignment_variables[i, j] <= facility_variables[j]
###
for i in 1:NC, j in 1:NL
MOI.add_constraint(
model,
MOI.ScalarAffineFunction(
[
MOI.ScalarAffineTerm(1.0, assignment_variables[i, j]),
MOI.ScalarAffineTerm(-1.0, facility_variables[j]),
],
0.0,
),
MOI.LessThan(0.0),
)
end
###
### sum_j assignment_variables[i, j] = 1
###
for i in 1:NC
MOI.add_constraint(
model,
MOI.ScalarAffineFunction(
[
MOI.ScalarAffineTerm(1.0, assignment_variables[i, j]) for
j in 1:NL
],
0.0,
),
MOI.EqualTo(1.0),
)
end
###
### sum_j facility_variables[j] == num_facilities
###
    if add_parameters
        d, cd = MOI.add_constrained_variable(
            model,
            MOI.Parameter(Float64(data.num_facilities)),
        )
        # the parameter d enters with coefficient -1.0 so that, after POI
        # substitutes its value, this reads
        # sum(facility_variables) == data.num_facilities, matching the
        # non-parametric branch below
        MOI.add_constraint(
            model,
            MOI.ScalarAffineFunction(
                vcat(
                    MOI.ScalarAffineTerm.(1.0, facility_variables),
                    MOI.ScalarAffineTerm(-1.0, d),
                ),
                0.0,
            ),
            MOI.EqualTo{Float64}(0),
        )
    else
        MOI.add_constraint(
            model,
            MOI.ScalarAffineFunction(
                MOI.ScalarAffineTerm.(1.0, facility_variables),
                0.0,
            ),
            MOI.EqualTo{Float64}(data.num_facilities),
        )
    end
return assignment_variables, facility_variables
end
function solve_moi(
data::PMedianData,
optimizer;
vector_version,
params,
add_parameters = false,
)
model = optimizer()
for (param, value) in params
MOI.set(model, param, value)
end
@timeit "generate" x, y = if vector_version
generate_poi_problem_vector(model, data, add_parameters)
else
generate_poi_problem(model, data, add_parameters)
end
@timeit "solve" MOI.optimize!(model)
return MOI.get(model, MOI.ObjectiveValue())
end
function POI_OPTIMIZER()
return POI.Optimizer(SOLVER.Optimizer())
end
function MOI_OPTIMIZER()
return SOLVER.Optimizer()
end
function solve_moi_loop(
data::PMedianData;
vector_version,
max_iters = Inf,
time_limit_sec = Inf,
loops,
)
params = []
if isfinite(time_limit_sec)
push!(params, (MOI.TimeLimitSec(), time_limit_sec))
end
if isfinite(max_iters)
push!(params, (MOI.RawOptimizerAttribute(MAX_ITER_PARAM), max_iters))
end
push!(params, (MOI.Silent(), true))
s_type = vector_version ? "vector" : "scalar"
@timeit(
"$(SOLVER) MOI $(s_type)",
for _ in 1:loops
solve_moi(
data,
MOI_OPTIMIZER;
vector_version = vector_version,
params = params,
)
end
)
end
function solve_poi_no_params_loop(
data::PMedianData;
vector_version,
max_iters = Inf,
time_limit_sec = Inf,
loops,
)
params = []
if isfinite(time_limit_sec)
push!(params, (MOI.TimeLimitSec(), time_limit_sec))
end
if isfinite(max_iters)
push!(params, (MOI.RawOptimizerAttribute(MAX_ITER_PARAM), max_iters))
end
push!(params, (MOI.Silent(), true))
s_type = vector_version ? "vector" : "scalar"
@timeit(
"$(SOLVER) POI NO PARAMS $(s_type)",
for _ in 1:loops
solve_moi(
data,
POI_OPTIMIZER;
vector_version = vector_version,
params = params,
)
end
)
end
function solve_poi_loop(
data::PMedianData;
vector_version,
max_iters = Inf,
time_limit_sec = Inf,
loops = 1,
)
params = []
if isfinite(time_limit_sec)
push!(params, (MOI.TimeLimitSec(), time_limit_sec))
end
if isfinite(max_iters)
push!(params, (MOI.RawOptimizerAttribute(MAX_ITER_PARAM), max_iters))
end
push!(params, (MOI.Silent(), true))
s_type = vector_version ? "vector" : "scalar"
@timeit(
"$(SOLVER) POI $(s_type)",
for _ in 1:loops
solve_moi(
data,
POI_OPTIMIZER;
vector_version = vector_version,
params = params,
add_parameters = true,
)
end
)
end
function run_benchmark(;
num_facilities,
num_customers,
num_locations,
time_limit_sec,
max_iters,
loops,
)
Random.seed!(10)
reset_timer!()
data = PMedianData(
num_facilities,
num_customers,
num_locations,
rand(num_customers) .* num_locations,
)
GC.gc()
solve_moi_loop(
data,
vector_version = false,
max_iters = max_iters,
time_limit_sec = time_limit_sec,
loops = loops,
)
GC.gc()
solve_poi_no_params_loop(
data,
vector_version = false,
max_iters = max_iters,
time_limit_sec = time_limit_sec,
loops = loops,
)
GC.gc()
solve_poi_loop(
data,
vector_version = false,
max_iters = max_iters,
time_limit_sec = time_limit_sec,
loops = loops,
)
GC.gc()
print_timer()
return println()
end
run_benchmark(
num_facilities = 100,
num_customers = 100,
num_locations = 100,
time_limit_sec = 0.0001,
max_iters = 1,
loops = 1,
)
run_benchmark(
num_facilities = 100,
num_customers = 100,
num_locations = 100,
time_limit_sec = 0.0001,
max_iters = 1,
loops = 100,
)
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | code | 16978 | # Copyright (c) 2020: Tomás Gutierrez and contributors
#
# Use of this source code is governed by an MIT-style license that can be found
# in the LICENSE.md file or at https://opensource.org/licenses/MIT.
using MathOptInterface
using ParametricOptInterface
using BenchmarkTools
const MOI = MathOptInterface
const POI = ParametricOptInterface
import Pkg
function moi_add_variables(N::Int)
model = MOI.Utilities.Model{Float64}()
MOI.add_variables(model, N)
return nothing
end
function poi_add_variables(N::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
MOI.add_variables(model, N)
return nothing
end
function poi_add_parameters(N::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
MOI.add_constrained_variable.(model, MOI.Parameter.(ones(N)))
return nothing
end
function poi_add_parameters_and_variables(N::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
MOI.add_variables(model, N / 2)
MOI.add_constrained_variable.(model, MOI.Parameter.(ones(Int(N / 2))))
return nothing
end
function poi_add_parameters_and_variables_alternating(N::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
for i in 1:Int(N / 2)
MOI.add_variable(model)
MOI.add_constrained_variable(model, MOI.Parameter(1.0))
end
return nothing
end
function moi_add_saf_ctr(N::Int, M::Int)
model = MOI.Utilities.Model{Float64}()
x = MOI.add_variables(model, N)
for _ in 1:M
MOI.add_constraint(
model,
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(1.0, x), 0.0),
MOI.GreaterThan(1.0),
)
end
return nothing
end
function poi_add_saf_ctr(N::Int, M::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N)
for _ in 1:M
MOI.add_constraint(
model,
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(1.0, x), 0.0),
MOI.GreaterThan(1.0),
)
end
return nothing
end
function poi_add_saf_variables_and_parameters_ctr(N::Int, M::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N / 2)
y =
first.(
MOI.add_constrained_variable.(
model,
MOI.Parameter.(ones(Int(N / 2))),
)
)
for _ in 1:M
MOI.add_constraint(
model,
MOI.ScalarAffineFunction(
[MOI.ScalarAffineTerm.(1.0, x); MOI.ScalarAffineTerm.(1.0, y)],
0.0,
),
MOI.GreaterThan(1.0),
)
end
return nothing
end
function poi_add_saf_variables_and_parameters_ctr_parameter_update(
N::Int,
M::Int,
)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N / 2)
y =
first.(
MOI.add_constrained_variable.(
model,
MOI.Parameter.(ones(Int(N / 2))),
)
)
for _ in 1:M
MOI.add_constraint(
model,
MOI.ScalarAffineFunction(
[MOI.ScalarAffineTerm.(1.0, x); MOI.ScalarAffineTerm.(1.0, y)],
0.0,
),
MOI.GreaterThan(1.0),
)
end
MOI.set.(model, POI.ParameterValue(), y, 0.5)
POI.update_parameters!(model)
return nothing
end
function moi_add_sqf_variables_ctr(N::Int, M::Int)
model = MOI.Utilities.Model{Float64}()
x = MOI.add_variables(model, N)
for _ in 1:M
MOI.add_constraint(
model,
MOI.ScalarQuadraticFunction(
MOI.ScalarQuadraticTerm.(1.0, x, x),
MOI.ScalarAffineTerm{Float64}[],
0.0,
),
MOI.GreaterThan(1.0),
)
end
return nothing
end
function poi_add_sqf_variables_ctr(N::Int, M::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N)
for _ in 1:M
MOI.add_constraint(
model,
MOI.ScalarQuadraticFunction(
MOI.ScalarQuadraticTerm.(1.0, x, x),
MOI.ScalarAffineTerm{Float64}[],
0.0,
),
MOI.GreaterThan(1.0),
)
end
return nothing
end
function poi_add_sqf_variables_parameters_ctr(N::Int, M::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N / 2)
y =
first.(
MOI.add_constrained_variable.(
model,
MOI.Parameter.(ones(Int(N / 2))),
)
)
for _ in 1:M
MOI.add_constraint(
model,
MOI.ScalarQuadraticFunction(
MOI.ScalarQuadraticTerm.(1.0, x, y),
MOI.ScalarAffineTerm{Float64}[],
0.0,
),
MOI.GreaterThan(1.0),
)
end
return nothing
end
function poi_add_sqf_variables_parameters_ctr_parameter_update(N::Int, M::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N / 2)
y =
first.(
MOI.add_constrained_variable.(
model,
MOI.Parameter.(ones(Int(N / 2))),
)
)
for _ in 1:M
MOI.add_constraint(
model,
MOI.ScalarQuadraticFunction(
MOI.ScalarQuadraticTerm.(1.0, x, y),
MOI.ScalarAffineTerm{Float64}[],
0.0,
),
MOI.GreaterThan(1.0),
)
end
MOI.set.(model, POI.ParameterValue(), y, 0.5)
POI.update_parameters!(model)
return nothing
end
function poi_add_sqf_parameters_parameters_ctr(N::Int, M::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N / 2)
y =
first.(
MOI.add_constrained_variable.(
model,
MOI.Parameter.(ones(Int(N / 2))),
)
)
for _ in 1:M
MOI.add_constraint(
model,
MOI.ScalarQuadraticFunction(
MOI.ScalarQuadraticTerm.(1.0, y, y),
MOI.ScalarAffineTerm{Float64}[],
0.0,
),
MOI.GreaterThan(1.0),
)
end
return nothing
end
function poi_add_sqf_parameters_parameters_ctr_parameter_update(N::Int, M::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N / 2)
y =
first.(
MOI.add_constrained_variable.(
model,
MOI.Parameter.(ones(Int(N / 2))),
)
)
for _ in 1:M
MOI.add_constraint(
model,
MOI.ScalarQuadraticFunction(
MOI.ScalarQuadraticTerm.(1.0, y, y),
MOI.ScalarAffineTerm{Float64}[],
0.0,
),
MOI.GreaterThan(1.0),
)
end
MOI.set.(model, POI.ParameterValue(), y, 0.5)
POI.update_parameters!(model)
return nothing
end
function moi_add_saf_obj(N::Int, M::Int)
model = MOI.Utilities.Model{Float64}()
x = MOI.add_variables(model, N)
for _ in 1:M
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(1.0, x), 0.0),
)
end
return nothing
end
function poi_add_saf_obj(N::Int, M::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N)
for _ in 1:M
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(1.0, x), 0.0),
)
end
return nothing
end
function poi_add_saf_variables_and_parameters_obj(N::Int, M::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N / 2)
y =
first.(
MOI.add_constrained_variable.(
model,
MOI.Parameter.(ones(Int(N / 2))),
)
)
for _ in 1:M
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
MOI.ScalarAffineFunction(
[MOI.ScalarAffineTerm.(1.0, x); MOI.ScalarAffineTerm.(1.0, y)],
0.0,
),
)
end
return nothing
end
function poi_add_saf_variables_and_parameters_obj_parameter_update(
N::Int,
M::Int,
)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N / 2)
y =
first.(
MOI.add_constrained_variable.(
model,
MOI.Parameter.(ones(Int(N / 2))),
)
)
for _ in 1:M
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
MOI.ScalarAffineFunction(
[MOI.ScalarAffineTerm.(1.0, x); MOI.ScalarAffineTerm.(1.0, y)],
0.0,
),
)
end
for _ in 1:M
MOI.set.(model, POI.ParameterValue(), y, 0.5)
POI.update_parameters!(model)
end
return nothing
end
function moi_add_sqf_variables_obj(N::Int, M::Int)
model = MOI.Utilities.Model{Float64}()
x = MOI.add_variables(model, N)
for _ in 1:M
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
MOI.ScalarQuadraticFunction(
MOI.ScalarQuadraticTerm.(1.0, x, x),
MOI.ScalarAffineTerm{Float64}[],
0.0,
),
)
end
return nothing
end
function poi_add_sqf_variables_obj(N::Int, M::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N)
for _ in 1:M
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
MOI.ScalarQuadraticFunction(
MOI.ScalarQuadraticTerm.(1.0, x, x),
MOI.ScalarAffineTerm{Float64}[],
0.0,
),
)
end
return nothing
end
function poi_add_sqf_variables_parameters_obj(N::Int, M::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N / 2)
y =
first.(
MOI.add_constrained_variable.(
model,
MOI.Parameter.(ones(Int(N / 2))),
)
)
for _ in 1:M
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
MOI.ScalarQuadraticFunction(
MOI.ScalarQuadraticTerm.(1.0, x, y),
MOI.ScalarAffineTerm{Float64}[],
0.0,
),
)
end
return nothing
end
function poi_add_sqf_variables_parameters_obj_parameter_update(N::Int, M::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N / 2)
y =
first.(
MOI.add_constrained_variable.(
model,
MOI.Parameter.(ones(Int(N / 2))),
)
)
for _ in 1:M
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
MOI.ScalarQuadraticFunction(
MOI.ScalarQuadraticTerm.(1.0, x, y),
MOI.ScalarAffineTerm{Float64}[],
0.0,
),
)
end
for _ in 1:M
MOI.set.(model, POI.ParameterValue(), y, 0.5)
POI.update_parameters!(model)
end
return nothing
end
function poi_add_sqf_parameters_parameters_obj(N::Int, M::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N / 2)
y =
first.(
MOI.add_constrained_variable.(
model,
MOI.Parameter.(ones(Int(N / 2))),
)
)
for _ in 1:M
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
MOI.ScalarQuadraticFunction(
MOI.ScalarQuadraticTerm.(1.0, y, y),
MOI.ScalarAffineTerm{Float64}[],
0.0,
),
)
end
return nothing
end
function poi_add_sqf_parameters_parameters_obj_parameter_update(N::Int, M::Int)
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, N / 2)
y =
first.(
MOI.add_constrained_variable.(
model,
MOI.Parameter.(ones(Int(N / 2))),
)
)
for _ in 1:M
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
MOI.ScalarQuadraticFunction(
MOI.ScalarQuadraticTerm.(1.0, y, y),
MOI.ScalarAffineTerm{Float64}[],
0.0,
),
)
end
for _ in 1:M
MOI.set.(model, POI.ParameterValue(), y, 0.5)
POI.update_parameters!(model)
end
return nothing
end
function run_benchmarks(N::Int, M::Int)
println("Pkg status:")
Pkg.status()
println("")
GC.gc()
println("variables on a MOIU.Model.")
@btime moi_add_variables($N)
GC.gc()
println("variables on a POI.Optimizer.")
@btime poi_add_variables($N)
GC.gc()
println("parameters on a POI.Optimizer.")
@btime poi_add_parameters($N)
GC.gc()
println("parameters and variables on a POI.Optimizer.")
@btime poi_add_parameters_and_variables($N)
GC.gc()
println("alternating parameters and variables on a POI.Optimizer.")
@btime poi_add_parameters_and_variables_alternating($N)
GC.gc()
println("SAF constraint with variables on a MOIU.Model.")
@btime moi_add_saf_ctr($N, $M)
GC.gc()
println("SAF constraint with variables on a POI.Optimizer.")
@btime poi_add_saf_ctr($N, $M)
GC.gc()
println("SAF constraint with variables and parameters on a POI.Optimizer.")
@btime poi_add_saf_variables_and_parameters_ctr($N, $M)
GC.gc()
println("SQF constraint with variables on a MOIU.Model{Float64}.")
@btime moi_add_sqf_variables_ctr($N, $M)
GC.gc()
println("SQF constraint with variables on a POI.Optimizer.")
@btime poi_add_sqf_variables_ctr($N, $M)
GC.gc()
println(
"SQF constraint with product of variables and parameters on a POI.Optimizer.",
)
@btime poi_add_sqf_variables_parameters_ctr($N, $M)
GC.gc()
println("SQF constraint with product of parameters on a POI.Optimizer.")
@btime poi_add_sqf_parameters_parameters_ctr($N, $M)
GC.gc()
println("SAF objective with variables on a MOIU.Model.")
@btime moi_add_saf_obj($N, $M)
GC.gc()
println("SAF objective with variables on a POI.Optimizer.")
@btime poi_add_saf_obj($N, $M)
GC.gc()
println("SAF objective with variables and parameters on a POI.Optimizer.")
@btime poi_add_saf_variables_and_parameters_obj($N, $M)
GC.gc()
println("SQF objective with variables on a MOIU.Model.")
@btime moi_add_sqf_variables_obj($N, $M)
GC.gc()
println("SQF objective with variables on a POI.Optimizer.")
@btime poi_add_sqf_variables_obj($N, $M)
GC.gc()
println(
"SQF objective with product of variables and parameters on a POI.Optimizer.",
)
@btime poi_add_sqf_variables_parameters_obj($N, $M)
GC.gc()
println("SQF objective with product of parameters on a POI.Optimizer.")
@btime poi_add_sqf_parameters_parameters_obj($N, $M)
GC.gc()
println(
"Update parameters in SAF constraint with variables and parameters on a POI.Optimizer.",
)
@btime poi_add_saf_variables_and_parameters_ctr_parameter_update($N, $M)
GC.gc()
println(
"Update parameters in SAF objective with variables and parameters on a POI.Optimizer.",
)
@btime poi_add_saf_variables_and_parameters_obj_parameter_update($N, $M)
GC.gc()
println(
"Update parameters in SQF constraint with product of variables and parameters on a POI.Optimizer.",
)
@btime poi_add_sqf_variables_parameters_ctr_parameter_update($N, $M)
GC.gc()
println(
"Update parameters in SQF constraint with product of parameters on a POI.Optimizer.",
)
@btime poi_add_sqf_parameters_parameters_ctr_parameter_update($N, $M)
GC.gc()
println(
"Update parameters in SQF objective with product of variables and parameters on a POI.Optimizer.",
)
@btime poi_add_sqf_variables_parameters_obj_parameter_update($N, $M)
GC.gc()
println(
"Update parameters in SQF objective with product of parameters on a POI.Optimizer.",
)
@btime poi_add_sqf_parameters_parameters_obj_parameter_update($N, $M)
return nothing
end
N = 10_000
M = 100
run_benchmarks(N, M)
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | code | 15781 | # Copyright (c) 2020: Tomás Gutierrez and contributors
#
# Use of this source code is governed by an MIT-style license that can be found
# in the LICENSE.md file or at https://opensource.org/licenses/MIT.
using ParametricOptInterface
using BenchmarkTools
using JuMP
using LinearAlgebra
const POI = ParametricOptInterface
import Pkg
function moi_add_variables(N::Int)
model = direct_model(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
)
@variable(model, x[i = 1:N])
return nothing
end
function poi_add_variables(N::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:N])
return nothing
end
function poi_add_parameters(N::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:N] in MOI.Parameter(1.0))
return nothing
end
function poi_add_parameters_and_variables(N::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:N] in MOI.Parameter(1.0))
return nothing
end
function poi_add_parameters_and_variables_alternating(N::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
for i in 1:Int(N / 2)
@variable(model)
@variable(model, set = MOI.Parameter(1.0))
end
return nothing
end
function moi_add_saf_ctr(N::Int, M::Int)
model = direct_model(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
)
@variable(model, x[i = 1:N])
@constraint(model, cons[i = 1:M], sum(x) >= 1)
return nothing
end
function poi_add_saf_ctr(N::Int, M::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:N])
@constraint(model, cons[i = 1:M], sum(x) >= 1)
return nothing
end
function poi_add_saf_variables_and_parameters_ctr(N::Int, M::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:Int(N / 2)])
@variable(model, p[i = 1:Int(N / 2)] in MOI.Parameter.(0))
@constraint(model, con[i = 1:M], sum(x) + sum(p) >= 1)
return nothing
end
function poi_add_saf_variables_and_parameters_ctr_parameter_update(
N::Int,
M::Int,
)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:Int(N / 2)])
@variable(model, p[i = 1:Int(N / 2)] in MOI.Parameter.(0))
@constraint(model, con[i = 1:M], sum(x) + sum(p) >= 1)
MOI.set.(model, POI.ParameterValue(), p, 0.5)
POI.update_parameters!(backend(model))
return nothing
end
function moi_add_sqf_variables_ctr(N::Int, M::Int)
model = direct_model(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
)
@variable(model, x[i = 1:N])
@constraint(model, con[i = 1:M], dot(x, x) >= 1)
return nothing
end
function poi_add_sqf_variables_ctr(N::Int, M::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:N])
@constraint(model, con[i = 1:M], dot(x, x) >= 1)
return nothing
end
function poi_add_sqf_variables_parameters_ctr(N::Int, M::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:Int(N / 2)])
@variable(model, p[i = 1:Int(N / 2)] in MOI.Parameter.(1))
@constraint(model, con[i = 1:M], dot(x, p) >= 1)
return nothing
end
function poi_add_sqf_variables_parameters_ctr_parameter_update(N::Int, M::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:Int(N / 2)])
@variable(model, p[i = 1:Int(N / 2)] in MOI.Parameter.(1))
@constraint(model, con[i = 1:M], dot(x, p) >= 1)
MOI.set.(model, POI.ParameterValue(), p, 0.5)
POI.update_parameters!(backend(model))
return nothing
end
function poi_add_sqf_parameters_parameters_ctr(N::Int, M::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:Int(N / 2)])
@variable(model, p[i = 1:Int(N / 2)] in MOI.Parameter.(1))
@constraint(model, con[i = 1:M], dot(p, p) >= 1)
return nothing
end
function poi_add_sqf_parameters_parameters_ctr_parameter_update(N::Int, M::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:Int(N / 2)])
@variable(model, p[i = 1:Int(N / 2)] in MOI.Parameter.(1))
@constraint(model, con[i = 1:M], dot(p, p) >= 1)
MOI.set.(model, POI.ParameterValue(), p, 0.5)
POI.update_parameters!(backend(model))
return nothing
end
function moi_add_saf_obj(N::Int, M::Int)
model = direct_model(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
)
@variable(model, x[i = 1:N])
    # rebuild the objective M times, matching the POI variant below
    for _ in 1:M
        @objective(model, Min, sum(x))
    end
return nothing
end
function poi_add_saf_obj(N::Int, M::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:N])
for _ in 1:M
@objective(model, Min, sum(x))
end
return nothing
end
function poi_add_saf_variables_and_parameters_obj(N::Int, M::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:Int(N / 2)])
@variable(model, p[i = 1:Int(N / 2)] in MOI.Parameter.(1))
for _ in 1:M
@objective(model, Min, sum(x) + sum(p))
end
return nothing
end
function poi_add_saf_variables_and_parameters_obj_parameter_update(
N::Int,
M::Int,
)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:Int(N / 2)])
@variable(model, p[i = 1:Int(N / 2)] in MOI.Parameter.(1))
for _ in 1:M
@objective(model, Min, sum(x) + sum(p))
end
for _ in 1:M
MOI.set.(model, POI.ParameterValue(), p, 0.5)
POI.update_parameters!(backend(model))
end
return nothing
end
function moi_add_sqf_variables_obj(N::Int, M::Int)
model = direct_model(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
)
@variable(model, x[i = 1:N])
for _ in 1:M
@objective(model, Min, dot(x, x))
end
return nothing
end
function poi_add_sqf_variables_obj(N::Int, M::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:N])
for _ in 1:M
@objective(model, Min, dot(x, x))
end
return nothing
end
function poi_add_sqf_variables_parameters_obj(N::Int, M::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:Int(N / 2)])
@variable(model, p[i = 1:Int(N / 2)] in MOI.Parameter.(1))
for _ in 1:M
@objective(model, Min, dot(x, p))
end
return nothing
end
function poi_add_sqf_variables_parameters_obj_parameter_update(N::Int, M::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:Int(N / 2)])
@variable(model, p[i = 1:Int(N / 2)] in MOI.Parameter.(1))
for _ in 1:M
@objective(model, Min, dot(x, p))
end
for _ in 1:M
MOI.set.(model, POI.ParameterValue(), p, 0.5)
POI.update_parameters!(backend(model))
end
return nothing
end
function poi_add_sqf_parameters_parameters_obj(N::Int, M::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:Int(N / 2)])
@variable(model, p[i = 1:Int(N / 2)] in MOI.Parameter.(1))
for _ in 1:M
@objective(model, Min, dot(p, p))
end
return nothing
end
function poi_add_sqf_parameters_parameters_obj_parameter_update(N::Int, M::Int)
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
@variable(model, x[i = 1:Int(N / 2)])
@variable(model, p[i = 1:Int(N / 2)] in MOI.Parameter.(1))
for _ in 1:M
@objective(model, Min, dot(p, p))
end
for _ in 1:M
MOI.set.(model, POI.ParameterValue(), p, 0.5)
POI.update_parameters!(backend(model))
end
return nothing
end
function run_benchmarks(N::Int, M::Int)
println("Pkg status:")
Pkg.status()
println("")
GC.gc()
println("variables on a MOIU.Model.")
@btime moi_add_variables($N)
GC.gc()
println("variables on a POI.Optimizer.")
@btime poi_add_variables($N)
GC.gc()
println("parameters on a POI.Optimizer.")
@btime poi_add_parameters($N)
GC.gc()
println("parameters and variables on a POI.Optimizer.")
@btime poi_add_parameters_and_variables($N)
GC.gc()
println("alternating parameters and variables on a POI.Optimizer.")
@btime poi_add_parameters_and_variables_alternating($N)
GC.gc()
println("SAF constraint with variables on a MOIU.Model.")
@btime moi_add_saf_ctr($N, $M)
GC.gc()
println("SAF constraint with variables on a POI.Optimizer.")
@btime poi_add_saf_ctr($N, $M)
GC.gc()
println("SAF constraint with variables and parameters on a POI.Optimizer.")
@btime poi_add_saf_variables_and_parameters_ctr($N, $M)
GC.gc()
println("SQF constraint with variables on a MOIU.Model{Float64}.")
@btime moi_add_sqf_variables_ctr($N, $M)
GC.gc()
println("SQF constraint with variables on a POI.Optimizer.")
@btime poi_add_sqf_variables_ctr($N, $M)
GC.gc()
println(
"SQF constraint with product of variables and parameters on a POI.Optimizer.",
)
@btime poi_add_sqf_variables_parameters_ctr($N, $M)
GC.gc()
println("SQF constraint with product of parameters on a POI.Optimizer.")
@btime poi_add_sqf_parameters_parameters_ctr($N, $M)
GC.gc()
println("SAF objective with variables on a MOIU.Model.")
@btime moi_add_saf_obj($N, $M)
GC.gc()
println("SAF objective with variables on a POI.Optimizer.")
@btime poi_add_saf_obj($N, $M)
GC.gc()
println("SAF objective with variables and parameters on a POI.Optimizer.")
@btime poi_add_saf_variables_and_parameters_obj($N, $M)
GC.gc()
println("SQF objective with variables on a MOIU.Model.")
@btime moi_add_sqf_variables_obj($N, $M)
GC.gc()
println("SQF objective with variables on a POI.Optimizer.")
@btime poi_add_sqf_variables_obj($N, $M)
GC.gc()
println(
"SQF objective with product of variables and parameters on a POI.Optimizer.",
)
@btime poi_add_sqf_variables_parameters_obj($N, $M)
GC.gc()
println("SQF objective with product of parameters on a POI.Optimizer.")
@btime poi_add_sqf_parameters_parameters_obj($N, $M)
GC.gc()
println(
"Update parameters in SAF constraint with variables and parameters on a POI.Optimizer.",
)
@btime poi_add_saf_variables_and_parameters_ctr_parameter_update($N, $M)
GC.gc()
println(
"Update parameters in SAF objective with variables and parameters on a POI.Optimizer.",
)
@btime poi_add_saf_variables_and_parameters_obj_parameter_update($N, $M)
GC.gc()
println(
"Update parameters in SQF constraint with product of variables and parameters on a POI.Optimizer.",
)
@btime poi_add_sqf_variables_parameters_ctr_parameter_update($N, $M)
GC.gc()
println(
"Update parameters in SQF constraint with product of parameters on a POI.Optimizer.",
)
@btime poi_add_sqf_parameters_parameters_ctr_parameter_update($N, $M)
GC.gc()
println(
"Update parameters in SQF objective with product of variables and parameters on a POI.Optimizer.",
)
@btime poi_add_sqf_variables_parameters_obj_parameter_update($N, $M)
GC.gc()
println(
"Update parameters in SQF objective with product of parameters on a POI.Optimizer.",
)
@btime poi_add_sqf_parameters_parameters_obj_parameter_update($N, $M)
return nothing
end
N = 10_000
M = 100
run_benchmarks(N, M)
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | code | 10192 | # Copyright (c) 2020: Tomás Gutierrez and contributors
#
# Use of this source code is governed by an MIT-style license that can be found
# in the LICENSE.md file or at https://opensource.org/licenses/MIT.
import Pkg
Pkg.activate(@__DIR__)
Pkg.instantiate()
using ParametricOptInterface
using MathOptInterface
const POI = ParametricOptInterface
const MOI = MathOptInterface
using ParameterJuMP
using JuMP
using GLPK
const OPTIMIZER = GLPK.Optimizer
using TimerOutputs
using LinearAlgebra
using Random
const N_Candidates = 200
const N_Observations = 2000
const N_Nodes = 200
const Observations = 1:N_Observations
const Candidates = 1:N_Candidates
const Nodes = 1:N_Nodes;
#' Initialize a random number generator to keep results deterministic
rng = Random.MersenneTwister(123);
#' Building regressors (explanatory) sinusoids
const X = zeros(N_Candidates, N_Observations)
const time = [obs / N_Observations * 1 for obs in Observations]
for obs in Observations, cand in Candidates
t = time[obs]
f = cand
X[cand, obs] = sin(2 * pi * f * t)
end
#' Define coefficients
β = zeros(N_Candidates)
for i in Candidates
if rand(rng) <= (1 - i / N_Candidates)^2 && i <= 100
β[i] = 4 * rand(rng) / i
end
end
# println("First coefs: $(β[1:min(10, N_Candidates)])")
const y = X' * β .+ 0.1 * randn(rng, N_Observations)
function full_model_regression()
time_build = @elapsed begin # measure time to create a model
# initialize a optimization model
full_model = direct_model(OPTIMIZER)
MOI.set(full_model, MOI.Silent(), true)
# create optimization variables of the problem
@variables(full_model, begin
ɛ_up[Observations] >= 0
ɛ_dw[Observations] >= 0
β[1:N_Candidates]
# 0 <= β[Candidates] <= 8
end)
# define constraints of the model
@constraints(
full_model,
begin
ɛ_up_ctr[i in Observations],
ɛ_up[i] >= +sum(X[j, i] * β[j] for j in Candidates) - y[i]
ɛ_dw_ctr[i in Observations],
ɛ_dw[i] >= -sum(X[j, i] * β[j] for j in Candidates) + y[i]
end
)
# construct the objective function to be minimized
@objective(
full_model,
Min,
sum(ɛ_up[i] + ɛ_dw[i] for i in Observations)
)
end
# solve the problem
time_solve = @elapsed optimize!(full_model)
println(
"First coefficients in solution: $(value.(β)[1:min(10, N_Candidates)])",
)
println("Objective value: $(objective_value(full_model))")
println("Time in solve: $time_solve")
println("Time in build: $time_build")
return nothing
end
function ObsSet(K)
obs_per_block = div(N_Observations, N_Nodes)
return (1+(K-1)*obs_per_block):(K*obs_per_block)
end
function slave_model(PARAM, K)
# initialize the JuMP model
slave = if PARAM == 0
        # ParameterJuMP: a regular JuMP model; the parameter functionality is
        # added through the Param() variable extension used below
Model(OPTIMIZER)
elseif PARAM == 1
# POI constructor
direct_model(POI.Optimizer(OPTIMIZER()))
# Model(() -> POI.ParametricOptimizer(OPTIMIZER()))
else
# regular JuMP constructor
direct_model(OPTIMIZER())
end
MOI.set(slave, MOI.Silent(), true)
# Define local optimization variables for norm-1 error
@variables(slave, begin
ɛ_up[ObsSet(K)] >= 0
ɛ_dw[ObsSet(K)] >= 0
end)
# create the regression coefficient representation
if PARAM == 0
# here is the main constructor of the Parameter JuMP packages
# It will create model *parameters* instead of variables
# Variables are added to the optimization model, while parameters
# are not. Parameters are merged with LP problem constants and do not
# increase the model dimensions.
@variable(slave, β[i = 1:N_Candidates] == 0, Param())
elseif PARAM == 1
# Create parameters
@variable(slave, β[i = 1:N_Candidates] in MOI.Parameter.(0.0))
else
# Create fixed variables
@variables(slave, begin
β[Candidates]
β_fixed[1:N_Candidates] == 0
end)
@constraint(slave, β_fix[i in Candidates], β[i] == β_fixed[i])
end
# create local constraints
# Note that *parameter* algebra is implemented just like variables
# algebra. We can multiply parameters by constants, add parameters,
# sum parameters and variables and so on.
@constraints(
slave,
begin
ɛ_up_ctr[i in ObsSet(K)],
ɛ_up[i] >= +sum(X[j, i] * β[j] for j in Candidates) - y[i]
ɛ_dw_ctr[i in ObsSet(K)],
ɛ_dw[i] >= -sum(X[j, i] * β[j] for j in Candidates) + y[i]
end
)
# create local objective function
@objective(slave, Min, sum(ɛ_up[i] + ɛ_dw[i] for i in ObsSet(K)))
# return the correct group of parameters
return (slave, β)
end
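# A minimal sketch of how a single slave is built and queried by hand
# (hypothetical REPL usage mirroring slave_solve below; not executed here):
#
#   model, β = slave_model(1, 1)  # POI-backed slave for node 1
#   MOI.set.(model, POI.ParameterValue(), β, zeros(N_Candidates))
#   optimize!(model)
#   π = MOI.get.(model, POI.ParameterDual(), β)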
function master_model(PARAM)
master = Model(OPTIMIZER)
@variables(master, begin
ɛ[Nodes] >= 0
β[1:N_Candidates]
end)
@objective(master, Min, sum(ɛ[i] for i in Nodes))
sol = zeros(N_Candidates)
return (master, ɛ, β, sol)
end
function master_solve(PARAM, master_model)
model = master_model[1]
β = master_model[3]
optimize!(model)
return (value.(β), objective_value(model))
end
function slave_solve(PARAM, model, master_solution)
β0 = master_solution[1]
slave = model[1]
# The first step is to fix the values given by the master problem
@timeit "fix" if PARAM == 0
# ParameterJuMP: *parameters* can be set to new values and the optimization
# model will be automatically updated
β = model[2]
set_value.(β, β0)
elseif PARAM == 1
        # POI: assign new values to *parameters* and the optimization
        # model will be automatically updated
β = model[2]
MOI.set.(slave, POI.ParameterValue(), β, β0)
else
# JuMP: it is also possible to fix variables to new values
β_fixed = slave[:β_fixed]
fix.(β_fixed, β0)
end
# here the slave problem is solved
@timeit "opt" optimize!(slave)
# query dual variables, which are sensitivities
# They represent the subgradient (almost a derivative)
# of the objective function for infinitesimal variations
# of the constants in the linear constraints
@timeit "dual" if PARAM == 0
# ParameterJuMP: we can query dual values of *parameters*
π = dual.(β)
elseif PARAM == 1
# POI: we can query dual values of *parameters*
π = MOI.get.(slave, POI.ParameterDual(), β)
else
        # or, in pure JuMP, we query the duals from
# constraints that fix the values of our regression
# coefficients
π = dual.(slave[:β_fix])
end
# π2 = shadow_price.(β_fix)
# @show sum(π .- π2)
obj = objective_value(slave)
rhs = obj - dot(π, β0)
return (rhs, π, obj)
end
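# The tuple returned above encodes a standard Benders optimality cut. With
# slave objective Q(β0) = obj and subgradient π, convexity gives
# Q(β) >= Q(β0) + π' * (β - β0) = π' * β + rhs, where rhs = obj - dot(π, β0),
# which is exactly the constraint that master_add_cut appends below.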
function master_add_cut(PARAM, master_model, cut_info, node)
master = master_model[1]
ɛ = master_model[2]
β = master_model[3]
rhs = cut_info[1]
π = cut_info[2]
@constraint(master, ɛ[node] >= sum(π[j] * β[j] for j in Candidates) + rhs)
end
function decomposed_model(PARAM; print_timer_outputs::Bool = true)
    reset_timer!() # reset timer for comparison
time_init = @elapsed @timeit "Init" begin
# println("Initialize decomposed model")
# Create the master problem with no cuts
# println("Build master problem")
@timeit "Master" master = master_model(PARAM)
# initialize solution for the regression coefficients in zero
# println("Build initial solution")
@timeit "Sol" solution = (zeros(N_Candidates), Inf)
best_sol = deepcopy(solution)
# Create the slave problems
# println("Build slave problems")
@timeit "Slaves" slaves =
[slave_model(PARAM, i) for i in Candidates]
# Save initial version of the slave problems and create
# the first set of cuts
# println("Build initial cuts")
@timeit "Cuts" cuts =
[slave_solve(PARAM, slaves[i], solution) for i in Candidates]
end
UB = +Inf
LB = -Inf
# println("Initialize Iterative step")
time_loop = @elapsed @timeit "Loop" for k in 1:80
# Add cuts generated from each slave problem to the master problem
@timeit "add cuts" for i in Candidates
master_add_cut(PARAM, master, cuts[i], i)
end
# Solve the master problem with the new set of cuts
# Obtain new solution candidate for the regression coefficients
@timeit "solve master" solution = master_solve(PARAM, master)
# Pass the new candidate solution to each of the slave problems
# Solve the slave problems and obtain cutting planes
# @show solution[2]
@timeit "solve nodes" for i in Candidates
cuts[i] = slave_solve(PARAM, slaves[i], solution)
end
LB = solution[2]
        new_UB = sum(cuts[i][3] for i in Nodes)
if new_UB <= UB
best_sol = deepcopy(solution)
end
UB = min(UB, new_UB)
# println("Iter = $k, LB = $LB, UB = $UB")
if abs(UB - LB) / (abs(UB) + abs(LB)) < 0.05
# println("Converged!")
break
end
end
# println(
# "First coefficients in solution: $(solution[1][1:min(10, N_Candidates)])",
# )
# println("Objective value: $(solution[2])")
# println("Time in loop: $time_loop")
# println("Time in init: $time_init")
print_timer_outputs && print_timer()
return best_sol[1]
end
println("ParameterJuMP")
GC.gc()
β1 = decomposed_model(0; print_timer_outputs = false);
GC.gc()
β1 = decomposed_model(0);
println("POI, direct mode")
GC.gc()
β2 = decomposed_model(1; print_timer_outputs = false);
GC.gc()
β2 = decomposed_model(1);
println("Pure JuMP, direct mode")
GC.gc()
β3 = decomposed_model(2; print_timer_outputs = false);
GC.gc()
β3 = decomposed_model(2);
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | code | 1022 | # Copyright (c) 2020: Tomás Gutierrez and contributors
#
# Use of this source code is governed by an MIT-style license that can be found
# in the LICENSE.md file or at https://opensource.org/licenses/MIT.
using Documenter
using ParametricOptInterface
makedocs(
modules = [ParametricOptInterface],
doctest = false,
clean = true,
# See https://github.com/JuliaDocs/Documenter.jl/issues/868
format = Documenter.HTML(
assets = ["assets/favicon.ico"],
mathengine = Documenter.MathJax(),
prettyurls = get(ENV, "CI", nothing) == "true",
),
sitename = "ParametricOptInterface.jl",
authors = "Tomás Gutierrez, and contributors",
pages = [
"Home" => "index.md",
"manual.md",
"Examples" => [
"Examples/example.md",
"Examples/benders.md",
"Examples/markowitz.md",
],
"reference.md",
],
)
deploydocs(
repo = "github.com/jump-dev/ParametricOptInterface.jl.git",
push_preview = true,
)
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | code | 39843 | #
# Helpers
#
function _is_variable(v::MOI.VariableIndex)
return v.value < PARAMETER_INDEX_THRESHOLD
end
function _is_parameter(v::MOI.VariableIndex)
return v.value > PARAMETER_INDEX_THRESHOLD
end
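# Variables keep the indices assigned by the inner optimizer (below the
# threshold) while parameters are offset above PARAMETER_INDEX_THRESHOLD, so a
# single integer comparison distinguishes them. For example, the first
# parameter added to a model has index PARAMETER_INDEX_THRESHOLD + 1.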
function _has_parameters(f::MOI.ScalarAffineFunction{T}) where {T}
for term in f.terms
if _is_parameter(term.variable)
return true
end
end
return false
end
function _has_parameters(f::MOI.VectorOfVariables)
for variable in f.variables
if _is_parameter(variable)
return true
end
end
return false
end
function _has_parameters(f::MOI.VectorAffineFunction{T}) where {T}
for term in f.terms
if _is_parameter(term.scalar_term.variable)
return true
end
end
return false
end
function _has_parameters(f::MOI.ScalarQuadraticFunction{T}) where {T}
for term_l in f.affine_terms
if _is_parameter(term_l.variable)
return true
end
end
for term in f.quadratic_terms
if _is_parameter(term.variable_1) || _is_parameter(term.variable_2)
return true
end
end
return false
end
function _cache_multiplicative_params!(
model::Optimizer{T},
f::ParametricQuadraticFunction{T},
) where {T}
for term in f.pv
push!(model.multiplicative_parameters, term.variable_1.value)
end
    # TODO: computing these duals might be feasible
for term in f.pp
push!(model.multiplicative_parameters, term.variable_1.value)
push!(model.multiplicative_parameters, term.variable_2.value)
end
return
end
#
# Empty
#
function MOI.is_empty(model::Optimizer)
return MOI.is_empty(model.optimizer) &&
isempty(model.parameters) &&
isempty(model.parameters_name) &&
isempty(model.updated_parameters) &&
isempty(model.variables) &&
model.last_variable_index_added == 0 &&
model.last_parameter_index_added == PARAMETER_INDEX_THRESHOLD &&
isempty(model.constraint_outer_to_inner) &&
# affine ctr
model.last_affine_added == 0 &&
isempty(model.affine_outer_to_inner) &&
isempty(model.affine_constraint_cache) &&
isempty(model.affine_constraint_cache_set) &&
# quad ctr
model.last_quad_add_added == 0 &&
isempty(model.quadratic_outer_to_inner) &&
isempty(model.quadratic_constraint_cache) &&
isempty(model.quadratic_constraint_cache_set) &&
# obj
model.affine_objective_cache === nothing &&
model.quadratic_objective_cache === nothing &&
MOI.is_empty(model.original_objective_cache) &&
isempty(model.quadratic_objective_cache_product) &&
#
isempty(model.vector_affine_constraint_cache) &&
#
isempty(model.multiplicative_parameters) &&
isempty(model.dual_value_of_parameters) &&
model.number_of_parameters_in_model == 0
end
function MOI.empty!(model::Optimizer{T}) where {T}
MOI.empty!(model.optimizer)
empty!(model.parameters)
empty!(model.parameters_name)
empty!(model.updated_parameters)
empty!(model.variables)
model.last_variable_index_added = 0
model.last_parameter_index_added = PARAMETER_INDEX_THRESHOLD
empty!(model.constraint_outer_to_inner)
# affine ctr
model.last_affine_added = 0
empty!(model.affine_outer_to_inner)
empty!(model.affine_constraint_cache)
empty!(model.affine_constraint_cache_set)
# quad ctr
model.last_quad_add_added = 0
empty!(model.quadratic_outer_to_inner)
empty!(model.quadratic_constraint_cache)
empty!(model.quadratic_constraint_cache_set)
# obj
model.affine_objective_cache = nothing
model.quadratic_objective_cache = nothing
MOI.empty!(model.original_objective_cache)
empty!(model.quadratic_objective_cache_product)
#
empty!(model.vector_affine_constraint_cache)
#
empty!(model.multiplicative_parameters)
empty!(model.dual_value_of_parameters)
#
model.number_of_parameters_in_model = 0
return
end
#
# Variables
#
function MOI.is_valid(model::Optimizer, vi::MOI.VariableIndex)
if haskey(model.variables, vi)
return true
elseif haskey(model.parameters, p_idx(vi))
return true
end
return false
end
function MOI.is_valid(
model::Optimizer,
ci::MOI.ConstraintIndex{MOI.VariableIndex,MOI.Parameter{T}},
) where {T}
vi = MOI.VariableIndex(ci.value)
if haskey(model.parameters, p_idx(vi))
return true
end
return false
end
function MOI.supports(
model::Optimizer,
attr::MOI.VariableName,
tp::Type{MOI.VariableIndex},
)
return MOI.supports(model.optimizer, attr, tp)
end
function MOI.set(
model::Optimizer,
attr::MOI.VariableName,
v::MOI.VariableIndex,
name::String,
)
if _parameter_in_model(model, v)
model.parameters_name[v] = name
else
MOI.set(model.optimizer, attr, v, name)
end
return
end
function MOI.get(model::Optimizer, attr::MOI.VariableName, v::MOI.VariableIndex)
if _parameter_in_model(model, v)
return get(model.parameters_name, v, "")
else
return MOI.get(model.optimizer, attr, v)
end
end
function MOI.get(model::Optimizer, tp::Type{MOI.VariableIndex}, attr::String)
return MOI.get(model.optimizer, tp, attr)
end
function MOI.add_variable(model::Optimizer)
_next_variable_index!(model)
return MOI.Utilities.CleverDicts.add_item(
model.variables,
MOI.add_variable(model.optimizer),
)
end
function MOI.supports_add_constrained_variable(
::Optimizer{T},
::Type{MOI.Parameter{T}},
) where {T}
return true
end
function MOI.supports_add_constrained_variables(
model::Optimizer,
::Type{MOI.Reals},
)
return MOI.supports_add_constrained_variables(model.optimizer, MOI.Reals)
end
function MOI.add_constrained_variable(
model::Optimizer{T},
set::MOI.Parameter{T},
) where {T}
_next_parameter_index!(model)
p = MOI.VariableIndex(model.last_parameter_index_added)
MOI.Utilities.CleverDicts.add_item(model.parameters, set.value)
cp = MOI.ConstraintIndex{MOI.VariableIndex,MOI.Parameter{T}}(
model.last_parameter_index_added,
)
_add_to_constraint_map!(model, cp)
MOI.Utilities.CleverDicts.add_item(model.updated_parameters, NaN)
_update_number_of_parameters!(model)
return p, cp
end
function _add_to_constraint_map!(model::Optimizer, ci)
model.constraint_outer_to_inner[ci] = ci
return
end
function _add_to_constraint_map!(
model::Optimizer,
ci::MOI.ConstraintIndex{F,S},
) where {F<:MOI.ScalarAffineFunction,S}
model.last_affine_added += 1
model.constraint_outer_to_inner[ci] = ci
return
end
function _add_to_constraint_map!(
model::Optimizer,
ci::MOI.ConstraintIndex{F,S},
) where {F<:MOI.ScalarQuadraticFunction,S}
model.last_quad_add_added += 1
model.constraint_outer_to_inner[ci] = ci
return
end
function MOI.supports(
model::Optimizer,
attr::MOI.AbstractVariableAttribute,
tp::Type{MOI.VariableIndex},
)
return MOI.supports(model.optimizer, attr, tp)
end
function MOI.set(
model::Optimizer,
attr::MOI.AbstractVariableAttribute,
v::MOI.VariableIndex,
val,
)
if _variable_in_model(model, v)
MOI.set(model.optimizer, attr, v, val)
else
error("$attr is not supported for parameters")
end
end
function MOI.get(
model::Optimizer,
attr::MOI.AbstractVariableAttribute,
v::MOI.VariableIndex,
)
if _variable_in_model(model, v)
return MOI.get(model.optimizer, attr, model.variables[v])
else
error("$attr is not supported for parameters")
end
end
function MOI.delete(model::Optimizer, v::MOI.VariableIndex)
delete!(model.variables, v)
MOI.delete(model.optimizer, v)
MOI.delete(model.original_objective_cache, v)
    # TODO - what happens if the variable was in a SAF that was converted to bounds?
    # solution: do not allow it in that case (requires going through the scalar affine cache)
# TODO - deleting a variable also deletes constraints
for (F, S) in MOI.Utilities.DoubleDicts.nonempty_outer_keys(
model.constraint_outer_to_inner,
)
_delete_variable_index_constraint(
model.constraint_outer_to_inner,
F,
S,
v.value,
)
end
return
end
function _delete_variable_index_constraint(d, F, S, v)
return
end
function _delete_variable_index_constraint(
d,
F::Type{MOI.VariableIndex},
S,
value,
)
inner = d[F, S]
key = MOI.ConstraintIndex{F,S}(value)
delete!(inner, key)
return
end
#
# Constraints
#
function MOI.is_valid(
model::Optimizer,
c::MOI.ConstraintIndex{F,S},
) where {
F<:Union{MOI.VariableIndex,MOI.VectorOfVariables,MOI.VectorAffineFunction},
S<:MOI.AbstractSet,
}
return MOI.is_valid(model.optimizer, c)
end
function MOI.is_valid(
model::Optimizer,
c::MOI.ConstraintIndex{F,S},
) where {F<:MOI.ScalarAffineFunction,S<:MOI.AbstractSet}
return MOI.is_valid(model.optimizer, c)
end
function MOI.supports_constraint(
model::Optimizer,
F::Union{
Type{MOI.VariableIndex},
Type{MOI.ScalarAffineFunction{T}},
Type{MOI.VectorOfVariables},
Type{MOI.VectorAffineFunction{T}},
},
S::Type{<:MOI.AbstractSet},
) where {T}
return MOI.supports_constraint(model.optimizer, F, S)
end
function MOI.supports_constraint(
model::Optimizer,
::Type{MOI.ScalarQuadraticFunction{T}},
S::Type{<:MOI.AbstractSet},
) where {T}
return MOI.supports_constraint(
model.optimizer,
MOI.ScalarAffineFunction{T},
S,
)
end
function MOI.supports_constraint(
model::Optimizer,
::Type{MOI.VectorQuadraticFunction{T}},
S::Type{<:MOI.AbstractSet},
) where {T}
return MOI.supports_constraint(
model.optimizer,
MOI.VectorAffineFunction{T},
S,
)
end
function MOI.supports(
model::Optimizer,
attr::MOI.ConstraintName,
tp::Type{<:MOI.ConstraintIndex},
)
return MOI.supports(model.optimizer, attr, tp)
end
function MOI.set(
model::Optimizer,
attr::MOI.ConstraintName,
c::MOI.ConstraintIndex{MOI.ScalarQuadraticFunction{T},S},
name::String,
) where {T,S<:MOI.AbstractSet}
if haskey(model.quadratic_outer_to_inner, c)
MOI.set(model.optimizer, attr, model.quadratic_outer_to_inner[c], name)
else
MOI.set(model.optimizer, attr, c, name)
end
return
end
function MOI.set(
model::Optimizer,
attr::MOI.ConstraintName,
c::MOI.ConstraintIndex{MOI.ScalarAffineFunction{T},S},
name::String,
) where {T,S<:MOI.AbstractSet}
if haskey(model.affine_outer_to_inner, c)
MOI.set(model.optimizer, attr, model.affine_outer_to_inner[c], name)
else
MOI.set(model.optimizer, attr, c, name)
end
return
end
function MOI.set(
model::Optimizer,
attr::MOI.ConstraintName,
c::MOI.ConstraintIndex,
name::String,
)
MOI.set(model.optimizer, attr, c, name)
return
end
function MOI.get(
model::Optimizer,
attr::MOI.ConstraintName,
c::MOI.ConstraintIndex{MOI.ScalarQuadraticFunction{T},S},
) where {T,S<:MOI.AbstractSet}
if haskey(model.quadratic_outer_to_inner, c)
return MOI.get(model.optimizer, attr, model.quadratic_outer_to_inner[c])
else
return MOI.get(model.optimizer, attr, c)
end
end
function MOI.get(
model::Optimizer,
attr::MOI.ConstraintName,
c::MOI.ConstraintIndex,
)
return MOI.get(model.optimizer, attr, c)
end
function MOI.get(
model::Optimizer,
attr::MOI.ConstraintName,
c::MOI.ConstraintIndex{MOI.ScalarAffineFunction{T},S},
) where {T,S}
if haskey(model.affine_outer_to_inner, c)
inner_ci = model.affine_outer_to_inner[c]
# This SAF constraint was transformed into variable bound
if typeof(inner_ci) === MOI.ConstraintIndex{MOI.VariableIndex,S}
v = MOI.get(model.optimizer, MOI.ConstraintFunction(), inner_ci)
variable_name = MOI.get(model.optimizer, MOI.VariableName(), v)
return "ParametricBound_$(S)_$(variable_name)"
end
return MOI.get(model.optimizer, attr, inner_ci)
else
return MOI.get(model.optimizer, attr, c)
end
end
function MOI.get(
model::Optimizer,
tp::Type{MOI.ConstraintIndex{F,S}},
name::String,
) where {F,S}
return MOI.get(model.optimizer, tp, name)
end
function MOI.get(model::Optimizer, tp::Type{MOI.ConstraintIndex}, name::String)
return MOI.get(model.optimizer, tp, name)
end
function MOI.set(
model::Optimizer,
::MOI.ConstraintFunction,
c::MOI.ConstraintIndex{F},
f::F,
) where {F}
MOI.set(model.optimizer, MOI.ConstraintFunction(), c, f)
return
end
function MOI.get(
model::Optimizer,
attr::MOI.ConstraintFunction,
ci::MOI.ConstraintIndex,
)
if haskey(model.quadratic_outer_to_inner, ci)
inner_ci = model.quadratic_outer_to_inner[ci]
return _original_function(model.quadratic_constraint_cache[inner_ci])
elseif haskey(model.affine_outer_to_inner, ci)
inner_ci = model.affine_outer_to_inner[ci]
return _original_function(model.affine_constraint_cache[inner_ci])
else
MOI.throw_if_not_valid(model, ci)
return MOI.get(model.optimizer, attr, ci)
end
end
function MOI.get(
model::Optimizer{T},
::MOI.ConstraintFunction,
cp::MOI.ConstraintIndex{MOI.VariableIndex,MOI.Parameter{T}},
) where {T}
p = MOI.VariableIndex(cp.value)
if !_parameter_in_model(model, p)
error("Parameter not in the model")
end
return p
end
function MOI.set(
model::Optimizer,
::MOI.ConstraintSet,
c::MOI.ConstraintIndex{F,S},
s::S,
) where {F,S}
MOI.set(model.optimizer, MOI.ConstraintSet(), c, s)
return
end
function MOI.get(
model::Optimizer,
attr::MOI.ConstraintSet,
ci::MOI.ConstraintIndex,
)
if haskey(model.quadratic_outer_to_inner, ci)
inner_ci = model.quadratic_outer_to_inner[ci]
return model.quadratic_constraint_cache_set[inner_ci]
elseif haskey(model.affine_outer_to_inner, ci)
inner_ci = model.affine_outer_to_inner[ci]
return model.affine_constraint_cache_set[inner_ci]
else
MOI.throw_if_not_valid(model, ci)
return MOI.get(model.optimizer, attr, ci)
end
end
function MOI.set(
model::Optimizer{T},
::MOI.ConstraintSet,
cp::MOI.ConstraintIndex{MOI.VariableIndex,MOI.Parameter{T}},
set::MOI.Parameter{T},
) where {T}
p = MOI.VariableIndex(cp.value)
if !_parameter_in_model(model, p)
error("Parameter not in the model")
end
return model.updated_parameters[p_idx(p)] = set.value
end
function MOI.get(
model::Optimizer{T},
::MOI.ConstraintSet,
cp::MOI.ConstraintIndex{MOI.VariableIndex,MOI.Parameter{T}},
) where {T}
p = MOI.VariableIndex(cp.value)
if !_parameter_in_model(model, p)
error("Parameter not in the model")
end
val = model.updated_parameters[p_idx(p)]
if isnan(val)
return MOI.Parameter{T}(model.parameters[p_idx(p)])
end
return MOI.Parameter{T}(val)
end
function MOI.modify(
model::Optimizer,
c::MOI.ConstraintIndex{F,S},
chg::MOI.ScalarCoefficientChange{T},
) where {F,S,T}
if haskey(model.quadratic_constraint_cache, c) ||
haskey(model.affine_constraint_cache, c)
error("Parametric constraint cannot be modified")
end
MOI.modify(model.optimizer, c, chg)
return
end
function _add_constraint_direct_and_cache_map!(model::Optimizer, f, set)
ci = MOI.add_constraint(model.optimizer, f, set)
_add_to_constraint_map!(model, ci)
return ci
end
function MOI.add_constraint(
model::Optimizer,
f::MOI.VariableIndex,
set::MOI.AbstractScalarSet,
)
if !_is_variable(f)
error("Cannot constrain a parameter")
elseif !_variable_in_model(model, f)
error("Variable not in the model")
end
return _add_constraint_direct_and_cache_map!(model, f, set)
end
function _add_constraint_with_parameters_on_function(
model::Optimizer,
f::MOI.ScalarAffineFunction{T},
set::S,
) where {T,S}
pf = ParametricAffineFunction(f)
if model.constraints_interpretation == ONLY_BOUNDS
if length(pf.v) == 1 && isone(MOI.coefficient(pf.v[]))
poi_ci = _add_vi_constraint(model, pf, set)
else
error(
"It was not possible to interpret this constraint as a variable bound.",
)
end
elseif model.constraints_interpretation == ONLY_CONSTRAINTS
poi_ci = MOI.add_constraint(model, pf, set)
elseif model.constraints_interpretation == BOUNDS_AND_CONSTRAINTS
if length(pf.v) == 1 && isone(MOI.coefficient(pf.v[]))
poi_ci = _add_vi_constraint(model, pf, set)
else
poi_ci = MOI.add_constraint(model, pf, set)
end
end
return poi_ci
end
function MOI.add_constraint(
model::Optimizer,
pf::ParametricAffineFunction{T},
set::S,
) where {T,S}
_cache_set_constant!(pf, set)
_update_cache!(pf, model)
inner_ci = MOI.add_constraint(
model.optimizer,
MOI.ScalarAffineFunction{T}(pf.v, 0.0),
_set_with_new_constant(set, pf.current_constant),
)
model.last_affine_added += 1
outer_ci = MOI.ConstraintIndex{MOI.ScalarAffineFunction{T},S}(
model.last_affine_added,
)
model.affine_outer_to_inner[outer_ci] = inner_ci
model.constraint_outer_to_inner[outer_ci] = inner_ci
model.affine_constraint_cache[inner_ci] = pf
model.affine_constraint_cache_set[inner_ci] = set
return outer_ci
end
function _add_vi_constraint(
model::Optimizer,
pf::ParametricAffineFunction{T},
set::S,
) where {T,S}
_cache_set_constant!(pf, set)
_update_cache!(pf, model)
inner_ci = MOI.add_constraint(
model.optimizer,
pf.v[].variable,
_set_with_new_constant(set, pf.current_constant),
)
model.last_affine_added += 1
outer_ci = MOI.ConstraintIndex{MOI.ScalarAffineFunction{T},S}(
model.last_affine_added,
)
model.affine_outer_to_inner[outer_ci] = inner_ci
model.constraint_outer_to_inner[outer_ci] = inner_ci
model.affine_constraint_cache[inner_ci] = pf
model.affine_constraint_cache_set[inner_ci] = set
return outer_ci
end
function MOI.add_constraint(
model::Optimizer,
f::MOI.ScalarAffineFunction{T},
set::MOI.AbstractScalarSet,
) where {T}
if !_has_parameters(f)
return _add_constraint_direct_and_cache_map!(model, f, set)
else
return _add_constraint_with_parameters_on_function(model, f, set)
end
end
function MOI.add_constraint(
model::Optimizer,
f::MOI.VectorOfVariables,
set::MOI.AbstractVectorSet,
)
if _has_parameters(f)
error("VectorOfVariables does not allow parameters")
end
return _add_constraint_direct_and_cache_map!(model, f, set)
end
function MOI.add_constraint(
model::Optimizer,
f::MOI.VectorAffineFunction{T},
set::MOI.AbstractVectorSet,
) where {T}
if !_has_parameters(f)
return _add_constraint_direct_and_cache_map!(model, f, set)
else
return _add_constraint_with_parameters_on_function(model, f, set)
end
end
function _add_constraint_with_parameters_on_function(
model::Optimizer,
f::MOI.VectorAffineFunction{T},
set::MOI.AbstractVectorSet,
) where {T}
pf = ParametricVectorAffineFunction(f)
    # _cache_set_constant!(pf, set) # there is no constant in vector sets
_update_cache!(pf, model)
inner_ci = MOI.add_constraint(model.optimizer, _current_function(pf), set)
model.vector_affine_constraint_cache[inner_ci] = pf
_add_to_constraint_map!(model, inner_ci)
return inner_ci
end
function _add_constraint_with_parameters_on_function(
model::Optimizer,
f::MOI.ScalarQuadraticFunction{T},
s::S,
) where {T,S<:MOI.AbstractScalarSet}
pf = ParametricQuadraticFunction(f)
_cache_multiplicative_params!(model, pf)
_cache_set_constant!(pf, s)
_update_cache!(pf, model)
func = _current_function(pf)
if !_is_affine(func)
fq = func
inner_ci =
MOI.Utilities.normalize_and_add_constraint(model.optimizer, fq, s)
model.last_quad_add_added += 1
outer_ci = MOI.ConstraintIndex{MOI.ScalarQuadraticFunction{T},S}(
model.last_quad_add_added,
)
model.quadratic_outer_to_inner[outer_ci] = inner_ci
model.constraint_outer_to_inner[outer_ci] = inner_ci
else
fa = MOI.ScalarAffineFunction(func.affine_terms, func.constant)
inner_ci =
MOI.Utilities.normalize_and_add_constraint(model.optimizer, fa, s)
model.last_quad_add_added += 1
outer_ci = MOI.ConstraintIndex{MOI.ScalarQuadraticFunction{T},S}(
model.last_quad_add_added,
)
# This part is used to remember that ci came from a quadratic function
# It is particularly useful because sometimes the constraint mutates
model.quadratic_outer_to_inner[outer_ci] = inner_ci
model.constraint_outer_to_inner[outer_ci] = inner_ci
end
model.quadratic_constraint_cache[inner_ci] = pf
model.quadratic_constraint_cache_set[inner_ci] = s
return outer_ci
end
function _is_affine(f::MOI.ScalarQuadraticFunction)
if isempty(f.quadratic_terms)
return true
end
return false
end
function MOI.add_constraint(
model::Optimizer,
f::MOI.ScalarQuadraticFunction{T},
set::MOI.AbstractScalarSet,
) where {T}
if !_has_parameters(f)
return _add_constraint_direct_and_cache_map!(model, f, set)
else
return _add_constraint_with_parameters_on_function(model, f, set)
end
end
function MOI.delete(
model::Optimizer,
c::MOI.ConstraintIndex{F,S},
) where {F<:MOI.ScalarQuadraticFunction,S<:MOI.AbstractSet}
    if haskey(model.quadratic_outer_to_inner, c)
        ci_inner = model.quadratic_outer_to_inner[c]
        delete!(model.quadratic_outer_to_inner, c)
        # the caches are keyed by the inner index (see add_constraint)
        delete!(model.quadratic_constraint_cache, ci_inner)
        delete!(model.quadratic_constraint_cache_set, ci_inner)
        MOI.delete(model.optimizer, ci_inner)
    else
        MOI.delete(model.optimizer, c)
    end
    delete!(model.constraint_outer_to_inner, c)
return
end
function MOI.delete(
model::Optimizer,
c::MOI.ConstraintIndex{F,S},
) where {F<:MOI.ScalarAffineFunction,S<:MOI.AbstractSet}
if haskey(model.affine_outer_to_inner, c)
ci_inner = model.affine_outer_to_inner[c]
delete!(model.affine_outer_to_inner, c)
        # the caches are keyed by the inner index (see add_constraint)
        delete!(model.affine_constraint_cache, ci_inner)
        delete!(model.affine_constraint_cache_set, ci_inner)
MOI.delete(model.optimizer, ci_inner)
else
MOI.delete(model.optimizer, c)
end
delete!(model.constraint_outer_to_inner, c)
return
end
function MOI.delete(
model::Optimizer,
c::MOI.ConstraintIndex{F,S},
) where {F<:Union{MOI.VariableIndex,MOI.VectorOfVariables},S<:MOI.AbstractSet}
MOI.delete(model.optimizer, c)
delete!(model.constraint_outer_to_inner, c)
return
end
function MOI.delete(
model::Optimizer,
c::MOI.ConstraintIndex{F,S},
) where {F<:MOI.VectorAffineFunction,S<:MOI.AbstractSet}
MOI.delete(model.optimizer, c)
delete!(model.constraint_outer_to_inner, c)
    delete!(model.vector_affine_constraint_cache, c)
return
end
#
# Objective
#
function MOI.supports(
model::Optimizer,
attr::Union{
MOI.ObjectiveSense,
MOI.ObjectiveFunction{MOI.VariableIndex},
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{T}},
},
) where {T}
return MOI.supports(model.optimizer, attr)
end
function MOI.supports(
model::Optimizer,
::MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{T}},
) where {T}
return MOI.supports(
model.optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{T}}(),
)
end
function MOI.modify(
model::Optimizer,
c::MOI.ObjectiveFunction{F},
chg::Union{MOI.ScalarConstantChange{T},MOI.ScalarCoefficientChange{T}},
) where {F<:MOI.AbstractScalarFunction,T}
if model.quadratic_objective_cache !== nothing ||
model.affine_objective_cache !== nothing ||
!isempty(model.quadratic_objective_cache_product)
error("Parametric objective cannot be modified")
end
MOI.modify(model.optimizer, c, chg)
MOI.modify(model.original_objective_cache, c, chg)
return
end
function MOI.get(model::Optimizer, attr::MOI.ObjectiveSense)
return MOI.get(model.optimizer, attr)
end
function MOI.get(model::Optimizer, attr::MOI.ObjectiveFunctionType)
return MOI.get(model.original_objective_cache, attr)
end
function MOI.get(model::Optimizer, attr::MOI.ObjectiveFunction)
return MOI.get(model.original_objective_cache, attr)
end
function _empty_objective_function_caches!(model::Optimizer{T}) where {T}
model.affine_objective_cache = nothing
model.quadratic_objective_cache = nothing
model.original_objective_cache = MOI.Utilities.ObjectiveContainer{T}()
return
end
function MOI.set(
model::Optimizer,
attr::MOI.ObjectiveFunction,
f::MOI.ScalarAffineFunction{T},
) where {T}
    # clear any previously defined objective function cache
_empty_objective_function_caches!(model)
if !_has_parameters(f)
MOI.set(model.optimizer, attr, f)
else
pf = ParametricAffineFunction(f)
_update_cache!(pf, model)
MOI.set(model.optimizer, attr, _current_function(pf))
model.affine_objective_cache = pf
end
MOI.set(model.original_objective_cache, attr, f)
return
end
function MOI.set(
model::Optimizer,
attr::MOI.ObjectiveFunction{F},
f::F,
) where {F<:MOI.ScalarQuadraticFunction{T}} where {T}
    # clear any previously defined objective function cache
_empty_objective_function_caches!(model)
if !_has_parameters(f)
MOI.set(model.optimizer, attr, f)
else
pf = ParametricQuadraticFunction(f)
_cache_multiplicative_params!(model, pf)
_update_cache!(pf, model)
func = _current_function(pf)
MOI.set(
model.optimizer,
MOI.ObjectiveFunction{(
_is_affine(func) ? MOI.ScalarAffineFunction{T} :
MOI.ScalarQuadraticFunction{T}
)}(),
# func,
(
_is_affine(func) ?
MOI.ScalarAffineFunction(func.affine_terms, func.constant) :
func
),
)
model.quadratic_objective_cache = pf
end
MOI.set(model.original_objective_cache, attr, f)
return
end
function MOI.set(
model::Optimizer,
attr::MOI.ObjectiveFunction,
v::MOI.VariableIndex,
)
if _is_parameter(v)
error("Cannot use a parameter as objective function alone")
elseif !_variable_in_model(model, v)
error("Variable not in the model")
end
MOI.set(model.optimizer, attr, model.variables[v])
MOI.set(model.original_objective_cache, attr, v)
return
end
function MOI.set(
model::Optimizer,
attr::MOI.ObjectiveSense,
sense::MOI.OptimizationSense,
)
MOI.set(model.optimizer, attr, sense)
return
end
#
# Other
#
function MOI.supports_incremental_interface(model::Optimizer)
return MOI.supports_incremental_interface(model.optimizer)
end
#
# Attributes
#
function MOI.supports(model::Optimizer, ::MOI.Name)
return MOI.supports(model.optimizer, MOI.Name())
end
MOI.get(model::Optimizer, ::MOI.Name) = MOI.get(model.optimizer, MOI.Name())
function MOI.set(model::Optimizer, ::MOI.Name, name::String)
return MOI.set(model.optimizer, MOI.Name(), name)
end
function MOI.get(model::Optimizer, ::MOI.ListOfModelAttributesSet)
return MOI.get(model.optimizer, MOI.ListOfModelAttributesSet())
end
function MOI.get(model::Optimizer, ::MOI.ListOfVariableAttributesSet)
return MOI.get(model.optimizer, MOI.ListOfVariableAttributesSet())
end
function MOI.get(
model::Optimizer,
::MOI.ListOfConstraintAttributesSet{F,S},
) where {F,S}
    if F <: MOI.ScalarQuadraticFunction
error(
"MOI.ListOfConstraintAttributesSet is not implemented for ScalarQuadraticFunction.",
)
end
return MOI.get(model.optimizer, MOI.ListOfConstraintAttributesSet{F,S}())
end
function MOI.get(model::Optimizer, ::MOI.NumberOfVariables)
return MOI.get(model, NumberOfPureVariables()) +
MOI.get(model, NumberOfParameters())
end
function MOI.get(model::Optimizer, ::MOI.NumberOfConstraints{F,S}) where {S,F}
return length(model.constraint_outer_to_inner[F, S])
end
function MOI.get(model::Optimizer, ::MOI.ListOfVariableIndices)
return vcat(
MOI.get(model, ListOfPureVariableIndices()),
v_idx.(MOI.get(model, ListOfParameterIndices())),
)
end
function MOI.get(model::Optimizer, ::MOI.ListOfConstraintTypesPresent)
constraint_types = MOI.Utilities.DoubleDicts.nonempty_outer_keys(
model.constraint_outer_to_inner,
)
return collect(constraint_types)
end
function MOI.get(
model::Optimizer,
::MOI.ListOfConstraintIndices{F,S},
) where {S,F}
list = collect(values(model.constraint_outer_to_inner[F, S]))
sort!(list, lt = (x, y) -> (x.value < y.value))
return list
end
function MOI.supports(model::Optimizer, attr::MOI.AbstractOptimizerAttribute)
return MOI.supports(model.optimizer, attr)
end
function MOI.get(model::Optimizer, attr::MOI.AbstractOptimizerAttribute)
return MOI.get(model.optimizer, attr)
end
function MOI.set(model::Optimizer, attr::MOI.AbstractOptimizerAttribute, value)
MOI.set(model.optimizer, attr, value)
return
end
function MOI.set(model::Optimizer, attr::MOI.RawOptimizerAttribute, val::Any)
MOI.set(model.optimizer, attr, val)
return
end
function MOI.get(model::Optimizer, ::MOI.SolverName)
name = MOI.get(model.optimizer, MOI.SolverName())
return "Parametric Optimizer with $(name) attached"
end
function MOI.get(model::Optimizer, ::MOI.SolverVersion)
return MOI.get(model.optimizer, MOI.SolverVersion())
end
#
# Solutions Attributes
#
function MOI.get(model::Optimizer, attr::MOI.AbstractModelAttribute)
return MOI.get(model.optimizer, attr)
end
function MOI.get(
model::Optimizer,
attr::MOI.VariablePrimal,
v::MOI.VariableIndex,
)
if _parameter_in_model(model, v)
return model.parameters[p_idx(v)]
elseif _variable_in_model(model, v)
return MOI.get(model.optimizer, attr, model.variables[v])
else
error("Variable not in the model")
end
end
function MOI.get(
model::Optimizer,
attr::T,
) where {
T<:Union{
MOI.TerminationStatus,
MOI.ObjectiveValue,
MOI.DualObjectiveValue,
MOI.PrimalStatus,
MOI.DualStatus,
},
}
return MOI.get(model.optimizer, attr)
end
function MOI.get(
model::Optimizer,
attr::MOI.AbstractConstraintAttribute,
c::MOI.ConstraintIndex,
)
optimizer_ci = get(model.constraint_outer_to_inner, c, c)
return MOI.get(model.optimizer, attr, optimizer_ci)
end
#
# Special Attributes
#
struct NumberOfPureVariables <: MOI.AbstractModelAttribute end
function MOI.get(model::Optimizer, ::NumberOfPureVariables)
return length(model.variables)
end
struct ListOfPureVariableIndices <: MOI.AbstractModelAttribute end
function MOI.get(model::Optimizer, ::ListOfPureVariableIndices)
return collect(keys(model.variables))::Vector{MOI.VariableIndex}
end
struct NumberOfParameters <: MOI.AbstractModelAttribute end
function MOI.get(model::Optimizer, ::NumberOfParameters)
return length(model.parameters)
end
struct ListOfParameterIndices <: MOI.AbstractModelAttribute end
function MOI.get(model::Optimizer, ::ListOfParameterIndices)
return collect(keys(model.parameters))::Vector{ParameterIndex}
end
"""
ParameterValue <: MOI.AbstractVariableAttribute
Attribute defined to set and get parameter values
# Example
```julia
MOI.set(model, POI.ParameterValue(), p, 2.0)
MOI.get(model, POI.ParameterValue(), p)
```
"""
struct ParameterValue <: MOI.AbstractVariableAttribute end
# We need a CachingOptimizer fallback to
# get ParameterValue working correctly on JuMP
# TODO: Think of a better solution for this
function MOI.set(
opt::MOI.Utilities.CachingOptimizer,
::ParameterValue,
var::MOI.VariableIndex,
val::Float64,
)
    ci =
        MOI.ConstraintIndex{MOI.VariableIndex,MOI.Parameter{Float64}}(var.value)
    MOI.set(opt, MOI.ConstraintSet(), ci, MOI.Parameter(val))
    return nothing
end
function MOI.set(
model::Optimizer,
::ParameterValue,
var::MOI.VariableIndex,
val::Float64,
)
    ci =
        MOI.ConstraintIndex{MOI.VariableIndex,MOI.Parameter{Float64}}(var.value)
    MOI.set(model, MOI.ConstraintSet(), ci, MOI.Parameter(val))
    return nothing
end
function MOI.set(
opt::MOI.Utilities.CachingOptimizer,
::ParameterValue,
vi::MOI.VariableIndex,
val::Real,
)
return MOI.set(opt, ParameterValue(), vi, convert(Float64, val))
end
function MOI.set(
model::Optimizer,
::ParameterValue,
vi::MOI.VariableIndex,
val::Real,
)
return MOI.set(model, ParameterValue(), vi, convert(Float64, val))
end
function MOI.get(
opt::MOI.Utilities.CachingOptimizer,
::ParameterValue,
var::MOI.VariableIndex,
)
ci =
MOI.ConstraintIndex{MOI.VariableIndex,MOI.Parameter{Float64}}(var.value)
set = MOI.get(opt, MOI.ConstraintSet(), ci)
return set.value
end
function MOI.get(model::Optimizer, ::ParameterValue, var::MOI.VariableIndex)
return model.parameters[p_idx(var)]
end
"""
ConstraintsInterpretation <: MOI.AbstractOptimizerAttribute
Attribute to define how [`POI.Optimizer`](@ref) should interpret constraints.
- `POI.ONLY_CONSTRAINTS`: Only interpret `ScalarAffineFunction` constraints as linear constraints
If an expression such as `x >= p1 + p2` appears, it will be treated as a new constraint.
**This is the default behaviour of [`POI.Optimizer`](@ref)**
- `POI.ONLY_BOUNDS`: Only interpret `ScalarAffineFunction` constraints as a variable bound.
This is valid for constraints such as `x >= p` or `x >= p1 + p2`. If a constraint `x1 + x2 >= p` appears,
which is not a valid variable bound, an error is thrown.
- `POI.BOUNDS_AND_CONSTRAINTS`: Interpret `ScalarAffineFunction` constraints as a variable bound if they
are a valid variable bound, i.e., `x >= p` or `x >= p1 + p2` and interpret them as linear constraints
otherwise.
# Example
```julia
MOI.set(model, POI.ConstraintsInterpretation(), POI.ONLY_BOUNDS)
MOI.set(model, POI.ConstraintsInterpretation(), POI.ONLY_CONSTRAINTS)
MOI.set(model, POI.ConstraintsInterpretation(), POI.BOUNDS_AND_CONSTRAINTS)
```
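Interpreting parametric rows as variable bounds typically keeps the inner model
smaller (bounds instead of affine rows), at the cost of erroring on parametric
constraints that are not actually valid bounds.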
"""
struct ConstraintsInterpretation <: MOI.AbstractOptimizerAttribute end
function MOI.set(
model::Optimizer,
::ConstraintsInterpretation,
value::ConstraintsInterpretationCode,
)
return model.constraints_interpretation = value
end
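# `QuadraticObjectiveCoef` attaches a parametric coefficient to a product of
# two variables in the objective. A sketch of the intended usage (assuming `x`
# and `y` are variables and `p` is a parameter of `model`):
#
#     MOI.set(model, POI.QuadraticObjectiveCoef(), (x, y), p)
#
# On every call to `MOI.optimize!` the expression is re-evaluated and a term
# with coefficient value(p) on x * y is injected into the inner objective by
# `_set_quadratic_product_in_obj!` below.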
struct QuadraticObjectiveCoef <: MOI.AbstractModelAttribute end
function _set_quadratic_product_in_obj!(model::Optimizer{T}) where {T}
n = length(model.quadratic_objective_cache_product)
f = if model.affine_objective_cache !== nothing
_current_function(model.affine_objective_cache)
elseif model.quadratic_objective_cache !== nothing
_current_function(model.quadratic_objective_cache)
else
F = MOI.get(model.original_objective_cache, MOI.ObjectiveFunctionType())
MOI.get(model.original_objective_cache, MOI.ObjectiveFunction{F}())
end
F = typeof(f)
quadratic_prods_vector = MOI.ScalarQuadraticTerm{T}[]
sizehint!(quadratic_prods_vector, n)
for ((x, y), fparam) in model.quadratic_objective_cache_product
# x, y = prod_var
evaluated_fparam = _evaluate_parametric_expression(model, fparam)
push!(
quadratic_prods_vector,
MOI.ScalarQuadraticTerm(evaluated_fparam, x, y),
)
end
f_new = if F <: MOI.VariableIndex
MOI.ScalarQuadraticFunction(
quadratic_prods_vector,
MOI.ScalarAffineTerm{T}[MOI.ScalarAffineTerm{T}(1.0, f)],
0.0,
)
elseif F <: MOI.ScalarAffineFunction{T}
MOI.ScalarQuadraticFunction(quadratic_prods_vector, f.terms, f.constant)
elseif F <: MOI.ScalarQuadraticFunction{T}
quadratic_terms = vcat(f.quadratic_terms, quadratic_prods_vector)
MOI.ScalarQuadraticFunction(quadratic_terms, f.affine_terms, f.constant)
end
MOI.set(
model.optimizer,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{T}}(),
f_new,
)
return
end
function _evaluate_parametric_expression(model::Optimizer, p::MOI.VariableIndex)
return model.parameters[p_idx(p)]
end
function _evaluate_parametric_expression(
model::Optimizer,
fparam::MOI.ScalarAffineFunction{T},
) where {T}
    # the constant must be added once, not once per term
    evaluated_parameter_expression = fparam.constant
    for term in fparam.terms
        coef = term.coefficient
        p = term.variable
        evaluated_parameter_expression += coef * model.parameters[p_idx(p)]
    end
    return evaluated_parameter_expression
end
function MOI.set(
model::Optimizer,
::QuadraticObjectiveCoef,
(x1, x2)::Tuple{MOI.VariableIndex,MOI.VariableIndex},
::Nothing,
)
if x1.value > x2.value
aux = x1
x1 = x2
x2 = aux
end
delete!(model.quadratic_objective_cache_product, (x1, x2))
model.quadratic_objective_cache_product_changed = true
return
end
function MOI.set(
model::Optimizer,
::QuadraticObjectiveCoef,
(x1, x2)::Tuple{MOI.VariableIndex,MOI.VariableIndex},
f_param::Union{MOI.VariableIndex,MOI.ScalarAffineFunction{T}},
) where {T}
if x1.value > x2.value
aux = x1
x1 = x2
x2 = aux
end
model.quadratic_objective_cache_product[(x1, x2)] = f_param
model.quadratic_objective_cache_product_changed = true
return
end
function MOI.get(
model::Optimizer,
::QuadraticObjectiveCoef,
(x1, x2)::Tuple{MOI.VariableIndex,MOI.VariableIndex},
)
if x1.value > x2.value
aux = x1
x1 = x2
x2 = aux
end
if haskey(model.quadratic_objective_cache_product, (x1, x2))
return model.quadratic_objective_cache_product[(x1, x2)]
else
throw(
ErrorException(
"Parameter not set in product of variables ($x1,$x2)",
),
)
end
end
#
# Optimize
#
function MOI.optimize!(model::Optimizer)
if !isempty(model.updated_parameters)
update_parameters!(model)
end
if (
!isempty(model.quadratic_objective_cache_product) ||
model.quadratic_objective_cache_product_changed
)
model.quadratic_objective_cache_product_changed = false
_set_quadratic_product_in_obj!(model)
end
MOI.optimize!(model.optimizer)
if MOI.get(model, MOI.DualStatus()) != MOI.NO_SOLUTION &&
model.evaluate_duals
_compute_dual_of_parameters!(model)
end
return
end
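# A typical parametric solve loop therefore looks like (sketch; `p` is a
# parameter added via MOI.add_constrained_variable and `cp` its constraint
# index):
#
#   MOI.set(model, POI.ParameterValue(), p, 2.0)  # recorded lazily
#   MOI.optimize!(model)                          # applies pending updates, then solves
#   MOI.get(model, MOI.ConstraintDual(), cp)      # dual of the parameter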
#
# compute_conflict!
#
function MOI.compute_conflict!(model::Optimizer)
return MOI.compute_conflict!(model.optimizer)
end
function MOI.get(
model::Optimizer,
attr::MOI.ConstraintConflictStatus,
ci::MOI.ConstraintIndex{MOI.VariableIndex,<:MOI.Parameter},
)
return MOI.MAYBE_IN_CONFLICT
end
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | code | 8511 | # Copyright (c) 2020: Tomás Gutierrez and contributors
#
# Use of this source code is governed by an MIT-style license that can be found
# in the LICENSE.md file or at https://opensource.org/licenses/MIT.
module ParametricOptInterface
using MathOptInterface
const MOI = MathOptInterface
@enum ConstraintsInterpretationCode ONLY_CONSTRAINTS ONLY_BOUNDS BOUNDS_AND_CONSTRAINTS
#
# Parameter Index
#
const SIMPLE_SCALAR_SETS{T} =
Union{MOI.LessThan{T},MOI.GreaterThan{T},MOI.EqualTo{T}}
const PARAMETER_INDEX_THRESHOLD = Int64(4_611_686_018_427_387_904) # div(typemax(Int64),2)+1
struct ParameterIndex
index::Int64
end
function p_idx(vi::MOI.VariableIndex)::ParameterIndex
return ParameterIndex(vi.value - PARAMETER_INDEX_THRESHOLD)
end
function v_idx(pi::ParameterIndex)::MOI.VariableIndex
return MOI.VariableIndex(pi.index + PARAMETER_INDEX_THRESHOLD)
end
function p_val(vi::MOI.VariableIndex)::Int64
return vi.value - PARAMETER_INDEX_THRESHOLD
end
function p_val(ci::MOI.ConstraintIndex)::Int64
return ci.value - PARAMETER_INDEX_THRESHOLD
end
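# These helpers translate between the two index spaces, e.g.
#   p_idx(MOI.VariableIndex(PARAMETER_INDEX_THRESHOLD + 3)) == ParameterIndex(3)
# and v_idx(ParameterIndex(3)) recovers the original MOI.VariableIndex.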
#
# MOI Special structure helpers
#
# Utilities for using a CleverDict in Parameters
function MOI.Utilities.CleverDicts.index_to_key(
::Type{ParameterIndex},
index::Int64,
)
return ParameterIndex(index)
end
function MOI.Utilities.CleverDicts.key_to_index(key::ParameterIndex)
return key.index
end
const ParamTo{T} = MOI.Utilities.CleverDicts.CleverDict{
ParameterIndex,
T,
typeof(MOI.Utilities.CleverDicts.key_to_index),
typeof(MOI.Utilities.CleverDicts.index_to_key),
}
const VariableMap = MOI.Utilities.CleverDicts.CleverDict{
MOI.VariableIndex,
MOI.VariableIndex,
typeof(MOI.Utilities.CleverDicts.key_to_index),
typeof(MOI.Utilities.CleverDicts.index_to_key),
}
const DoubleDict{T} = MOI.Utilities.DoubleDicts.DoubleDict{T}
const DoubleDictInner{F,S,T} = MOI.Utilities.DoubleDicts.DoubleDictInner{F,S,T}
#
# parametric functions
#
include("parametric_functions.jl")
"""
Optimizer{T, OT <: MOI.ModelLike} <: MOI.AbstractOptimizer
Declares an `Optimizer`, which allows the handling of parameters in an
optimization model.
## Keyword arguments
- `evaluate_duals::Bool`: If `true`, evaluates the dual of parameters. Users might want to set it to `false`
to increase performance when the duals of parameters are not necessary. Defaults to `true`.
- `save_original_objective_and_constraints`: If `true`, saves the original function and set of each constraint
as well as the original objective function inside [`POI.Optimizer`](@ref). This is useful for printing the model
but greatly increases the memory footprint. Users might want to set it to `false` to increase performance
in applications that do not need to query the original expressions provided to the model in constraints
or in the objective. Note that this might break printing or queries such as `MOI.get(model, MOI.ConstraintFunction(), c)`.
Defaults to `true`.
## Example
```julia-repl
julia> ParametricOptInterface.Optimizer(GLPK.Optimizer())
ParametricOptInterface.Optimizer{Float64,GLPK.Optimizer}
```
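Keyword arguments can be passed at construction time, for example:
```julia-repl
julia> ParametricOptInterface.Optimizer(GLPK.Optimizer(); evaluate_duals = false)
```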
"""
mutable struct Optimizer{T,OT<:MOI.ModelLike} <: MOI.AbstractOptimizer
optimizer::OT
parameters::ParamTo{T}
parameters_name::Dict{MOI.VariableIndex,String}
    # The updated_parameters dictionary has the same dimension as the
    # parameters dictionary; if the stored value is NaN, it means
    # that the parameter has not been updated.
updated_parameters::ParamTo{T}
variables::VariableMap
last_variable_index_added::Int64
last_parameter_index_added::Int64
# mapping of all constraints: necessary for getters
constraint_outer_to_inner::DoubleDict{MOI.ConstraintIndex}
# affine constraint data
last_affine_added::Int64
# Store the map for SAFs (some might be transformed into VI)
affine_outer_to_inner::DoubleDict{MOI.ConstraintIndex}
# Clever cache of data (inner key)
affine_constraint_cache::DoubleDict{ParametricAffineFunction{T}}
# Store original constraint set (inner key)
affine_constraint_cache_set::DoubleDict{MOI.AbstractScalarSet}
    # quadratic constraint data
last_quad_add_added::Int64
# Store the map for SQFs (some might be transformed into SAF)
# for instance p*p + var -> ScalarAffine(var)
quadratic_outer_to_inner::DoubleDict{MOI.ConstraintIndex}
# Clever cache of data (inner key)
quadratic_constraint_cache::DoubleDict{ParametricQuadraticFunction{T}}
# Store original constraint set (inner key)
quadratic_constraint_cache_set::DoubleDict{MOI.AbstractScalarSet}
# objective function data
# Clever cache of data (at most one can be !== nothing)
affine_objective_cache::Union{Nothing,ParametricAffineFunction{T}}
quadratic_objective_cache::Union{Nothing,ParametricQuadraticFunction{T}}
original_objective_cache::MOI.Utilities.ObjectiveContainer{T}
# Store parametric expressions for product of variables
quadratic_objective_cache_product::Dict{
Tuple{MOI.VariableIndex,MOI.VariableIndex},
MOI.AbstractFunction,
}
quadratic_objective_cache_product_changed::Bool
# vector affine function data
# vector_constraint_cache::DoubleDict{Vector{MOI.VectorAffineTerm{T}}}
# Clever cache of data (inner key)
vector_affine_constraint_cache::DoubleDict{
ParametricVectorAffineFunction{T},
}
#
multiplicative_parameters::Set{Int64}
dual_value_of_parameters::Vector{T}
# params
evaluate_duals::Bool
number_of_parameters_in_model::Int64
constraints_interpretation::ConstraintsInterpretationCode
save_original_objective_and_constraints::Bool
function Optimizer(
optimizer::OT;
evaluate_duals::Bool = true,
save_original_objective_and_constraints::Bool = true,
) where {OT}
T = Float64
return new{T,OT}(
optimizer,
MOI.Utilities.CleverDicts.CleverDict{ParameterIndex,T}(
MOI.Utilities.CleverDicts.key_to_index,
MOI.Utilities.CleverDicts.index_to_key,
),
Dict{MOI.VariableIndex,String}(),
MOI.Utilities.CleverDicts.CleverDict{ParameterIndex,T}(
MOI.Utilities.CleverDicts.key_to_index,
MOI.Utilities.CleverDicts.index_to_key,
),
MOI.Utilities.CleverDicts.CleverDict{
MOI.VariableIndex,
MOI.VariableIndex,
}(
MOI.Utilities.CleverDicts.key_to_index,
MOI.Utilities.CleverDicts.index_to_key,
),
0,
PARAMETER_INDEX_THRESHOLD,
DoubleDict{MOI.ConstraintIndex}(),
# affine constraint
0,
DoubleDict{MOI.ConstraintIndex}(),
DoubleDict{ParametricAffineFunction{T}}(),
DoubleDict{MOI.AbstractScalarSet}(),
# quadratic constraint
0,
DoubleDict{MOI.ConstraintIndex}(),
DoubleDict{ParametricQuadraticFunction{T}}(),
DoubleDict{MOI.AbstractScalarSet}(),
# objective
nothing,
nothing,
# nothing,
MOI.Utilities.ObjectiveContainer{T}(),
Dict{
Tuple{MOI.VariableIndex,MOI.VariableIndex},
MOI.AbstractFunction,
}(),
false,
# vec affine
# DoubleDict{Vector{MOI.VectorAffineTerm{T}}}(),
DoubleDict{ParametricVectorAffineFunction{T}}(),
# other
Set{Int64}(),
Vector{T}(),
evaluate_duals,
0,
ONLY_CONSTRAINTS,
save_original_objective_and_constraints,
)
end
end
function _next_variable_index!(model::Optimizer)
return model.last_variable_index_added += 1
end
function _next_parameter_index!(model::Optimizer)
return model.last_parameter_index_added += 1
end
function _update_number_of_parameters!(model::Optimizer)
return model.number_of_parameters_in_model += 1
end
function _parameter_in_model(model::Optimizer, v::MOI.VariableIndex)
return PARAMETER_INDEX_THRESHOLD <
v.value <=
model.last_parameter_index_added
end
function _variable_in_model(model::Optimizer, v::MOI.VariableIndex)
return 0 < v.value <= model.last_variable_index_added
end
include("duals.jl")
include("update_parameters.jl")
include("MOI_wrapper.jl")
end # module
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | code | 4815 | # Copyright (c) 2020: Tomás Gutierrez and contributors
#
# Use of this source code is governed by an MIT-style license that can be found
# in the LICENSE.md file or at https://opensource.org/licenses/MIT.
function _compute_dual_of_parameters!(model::Optimizer{T}) where {T}
model.dual_value_of_parameters =
zeros(T, model.number_of_parameters_in_model)
_update_duals_from_affine_constraints!(model)
_update_duals_from_vector_affine_constraints!(model)
_update_duals_from_quadratic_constraints!(model)
if model.affine_objective_cache !== nothing
_update_duals_from_objective!(model, model.affine_objective_cache)
end
if model.quadratic_objective_cache !== nothing
_update_duals_from_objective!(model, model.quadratic_objective_cache)
end
return
end
function _update_duals_from_affine_constraints!(model::Optimizer)
for (F, S) in keys(model.affine_constraint_cache.dict)
affine_constraint_cache_inner = model.affine_constraint_cache[F, S]
# barrier for type instability
_compute_parameters_in_ci!(model, affine_constraint_cache_inner)
end
return
end
function _update_duals_from_vector_affine_constraints!(model::Optimizer)
for (F, S) in keys(model.vector_affine_constraint_cache.dict)
vector_affine_constraint_cache_inner =
model.vector_affine_constraint_cache[F, S]
# barrier for type instability
_compute_parameters_in_ci!(model, vector_affine_constraint_cache_inner)
end
return
end
function _update_duals_from_quadratic_constraints!(model::Optimizer)
for (F, S) in keys(model.quadratic_constraint_cache.dict)
quadratic_constraint_cache_inner =
model.quadratic_constraint_cache[F, S]
# barrier for type instability
_compute_parameters_in_ci!(model, quadratic_constraint_cache_inner)
end
return
end
function _compute_parameters_in_ci!(
model::OT,
constraint_cache_inner::DoubleDictInner{F,S,V},
) where {OT,F,S,V}
for (inner_ci, pf) in constraint_cache_inner
_compute_parameters_in_ci!(model, pf, inner_ci)
end
return
end
function _compute_parameters_in_ci!(
model::Optimizer{T},
pf,
ci::MOI.ConstraintIndex{F,S},
) where {F,S} where {T}
cons_dual = MOI.get(model.optimizer, MOI.ConstraintDual(), ci)
for term in pf.p
model.dual_value_of_parameters[p_val(term.variable)] -=
cons_dual * term.coefficient
end
return
end
function _compute_parameters_in_ci!(
model::Optimizer{T},
pf::ParametricVectorAffineFunction{T},
ci::MOI.ConstraintIndex{F,S},
) where {F<:MOI.VectorAffineFunction{T},S} where {T}
cons_dual = MOI.get(model.optimizer, MOI.ConstraintDual(), ci)
for term in pf.p
model.dual_value_of_parameters[p_val(term.scalar_term.variable)] -=
cons_dual[term.output_index] * term.scalar_term.coefficient
end
return
end
function _update_duals_from_objective!(model::Optimizer{T}, pf) where {T}
is_min = MOI.get(model.optimizer, MOI.ObjectiveSense()) == MOI.MIN_SENSE
for param in pf.p
model.dual_value_of_parameters[p_val(param.variable)] +=
ifelse(is_min, 1, -1) * param.coefficient
end
return
end
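# Taken together, the updates above evaluate the sensitivity of the optimal
# objective with respect to each additive parameter:
#
#   dual(p) = (signed objective coefficient of p) - Σ_i dual_i * coef(p, ctr_i)
#
# Multiplicative parameters (those appearing in p * variable or p * parameter
# terms) are excluded via _is_additive below, since their contribution depends
# on the primal solution and this linear formula does not apply.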
"""
ParameterDual <: MOI.AbstractVariableAttribute
Attribute defined to get the dual values associated with parameters
# Example
```julia
MOI.get(model, POI.ParameterDual(), p)
```
"""
struct ParameterDual <: MOI.AbstractVariableAttribute end
MOI.is_set_by_optimize(::ParametricOptInterface.ParameterDual) = true
function MOI.get(
model::Optimizer{T},
::ParameterDual,
v::MOI.VariableIndex,
) where {T}
if !_is_additive(
model,
MOI.ConstraintIndex{MOI.VariableIndex,MOI.Parameter{T}}(v.value),
)
error("Cannot compute the dual of a multiplicative parameter")
end
return model.dual_value_of_parameters[p_val(v)]
end
function MOI.get(
model::Optimizer{T},
::MOI.ConstraintDual,
cp::MOI.ConstraintIndex{MOI.VariableIndex,MOI.Parameter{T}},
) where {T}
if !model.evaluate_duals
throw(
MOI.GetAttributeNotAllowed(
MOI.ConstraintDual(),
"$(MOI.ConstraintDual()) not available when " *
"evaluate_duals is set to false. " *
"Create an optimizer such as POI.Optimizer(HiGHS.Optimizer(); evaluate_duals = true) to enable this feature.",
),
)
end
if !_is_additive(model, cp)
error("Cannot compute the dual of a multiplicative parameter")
end
return model.dual_value_of_parameters[p_val(cp)]
end
function _is_additive(model::Optimizer, cp::MOI.ConstraintIndex)
if cp.value in model.multiplicative_parameters
return false
end
return true
end
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | code | 13584 | abstract type ParametricFunction{T} end
function _cache_set_constant!(
f::ParametricFunction{T},
s::Union{MOI.LessThan{T},MOI.GreaterThan{T},MOI.EqualTo{T}},
) where {T}
f.set_constant = MOI.constant(s)
return
end
function _cache_set_constant!(
::ParametricFunction{T},
::MOI.AbstractScalarSet,
) where {T}
return
end
mutable struct ParametricQuadraticFunction{T} <: ParametricFunction{T}
# helper to efficiently update affine terms
affine_data::Dict{MOI.VariableIndex,T}
affine_data_np::Dict{MOI.VariableIndex,T}
# constant * parameter * variable (in this order)
pv::Vector{MOI.ScalarQuadraticTerm{T}}
# constant * parameter * parameter
pp::Vector{MOI.ScalarQuadraticTerm{T}}
# constant * variable * variable
vv::Vector{MOI.ScalarQuadraticTerm{T}}
# constant * parameter
p::Vector{MOI.ScalarAffineTerm{T}}
# constant * variable
v::Vector{MOI.ScalarAffineTerm{T}}
# constant (does not include the set constant)
c::T
# to avoid unnecessary lookups in updates
set_constant::T
# cache data that is inside the solver to avoid slow getters
current_terms_with_p::Dict{MOI.VariableIndex,T}
current_constant::T
# computed on runtime
# updated_terms_with_p::Dict{MOI.VariableIndex,T}
# updated_constant::T
end
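# As an illustration, for variables x, y and parameters p, q the function
#   f = 2*p*x + 3*p*q + 4*x*y + 5*p + 6*x + 7
# is stored as pv = [2px], pp = [3pq], vv = [4xy], p = [5p], v = [6x], c = 7;
# affine_data records the coefficient of x (it appears in a pv term) while
# affine_data_np would hold affine variables that do not.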
function ParametricQuadraticFunction(
f::MOI.ScalarQuadraticFunction{T},
) where {T}
v, p = _split_affine_terms(f.affine_terms)
pv, pp, vv = _split_quadratic_terms(f.quadratic_terms)
# find variables related to parameters
    # so that we only cache the relevant part of v (the affine part)
v_in_pv = Set{MOI.VariableIndex}()
sizehint!(v_in_pv, length(pv))
for term in pv
push!(v_in_pv, term.variable_2)
end
affine_data = Dict{MOI.VariableIndex,T}()
sizehint!(affine_data, length(v_in_pv))
    affine_data_np = Dict{MOI.VariableIndex,T}()
    sizehint!(affine_data_np, length(v))
for term in v
if term.variable in v_in_pv
base = get(affine_data, term.variable, zero(T))
affine_data[term.variable] = term.coefficient + base
else
base = get(affine_data_np, term.variable, zero(T))
affine_data_np[term.variable] = term.coefficient + base
end
end
return ParametricQuadraticFunction{T}(
affine_data,
affine_data_np,
pv,
pp,
vv,
p,
v,
f.constant,
zero(T),
Dict{MOI.VariableIndex,T}(),
zero(T),
)
end
function _split_quadratic_terms(
terms::Vector{MOI.ScalarQuadraticTerm{T}},
) where {T}
num_vv, num_pp, num_pv = _count_scalar_quadratic_terms_types(terms)
pp = Vector{MOI.ScalarQuadraticTerm{T}}(undef, num_pp) # parameter x parameter
pv = Vector{MOI.ScalarQuadraticTerm{T}}(undef, num_pv) # parameter (as a variable) x variable
vv = Vector{MOI.ScalarQuadraticTerm{T}}(undef, num_vv) # variable x variable
i_vv = 1
i_pp = 1
i_pv = 1
for term in terms
if _is_variable(term.variable_1)
if _is_variable(term.variable_2)
vv[i_vv] = term
i_vv += 1
else
pv[i_pv] = MOI.ScalarQuadraticTerm(
term.coefficient,
term.variable_2,
term.variable_1,
)
i_pv += 1
end
else
if _is_variable(term.variable_2)
pv[i_pv] = term
i_pv += 1
else
pp[i_pp] = term
i_pp += 1
end
end
end
return pv, pp, vv
end
function _count_scalar_quadratic_terms_types(
terms::Vector{MOI.ScalarQuadraticTerm{T}},
) where {T}
num_vv = 0
num_pp = 0
num_pv = 0
for term in terms
if _is_variable(term.variable_1)
if _is_variable(term.variable_2)
num_vv += 1
else
num_pv += 1
end
else
if _is_variable(term.variable_2)
num_pv += 1
else
num_pp += 1
end
end
end
return num_vv, num_pp, num_pv
end
function _original_function(f::ParametricQuadraticFunction{T}) where {T}
return MOI.ScalarQuadraticFunction{T}(
vcat(f.pv, f.pp, f.vv),
vcat(f.p, f.v),
f.c,
)
end
function _current_function(f::ParametricQuadraticFunction{T}) where {T}
affine = MOI.ScalarAffineTerm{T}[]
sizehint!(affine, length(f.current_terms_with_p) + length(f.affine_data_np))
for (v, c) in f.current_terms_with_p
push!(affine, MOI.ScalarAffineTerm{T}(c, v))
end
for (v, c) in f.affine_data_np
push!(affine, MOI.ScalarAffineTerm{T}(c, v))
end
return MOI.ScalarQuadraticFunction{T}(f.vv, affine, f.current_constant)
end
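# Hedged illustration of _current_function: with parameter values substituted,
# the function handed to the inner solver keeps only the vv quadratic terms,
# the affine terms on decision variables (parameter-dependent coefficients
# come from `current_terms_with_p`, the rest from `affine_data_np`), and the
# scalar `current_constant`; all pp and pv structure is folded into these.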
function _parametric_constant(
model,
f::ParametricQuadraticFunction{T},
) where {T}
    # do not add the set constant here
param_constant = f.c
for term in f.p
param_constant +=
term.coefficient * model.parameters[p_idx(term.variable)]
end
for term in f.pp
param_constant +=
term.coefficient *
model.parameters[p_idx(term.variable_1)] *
model.parameters[p_idx(term.variable_2)]
end
return param_constant
end
function _delta_parametric_constant(
model,
f::ParametricQuadraticFunction{T},
) where {T}
delta_constant = zero(T)
for term in f.p
p = p_idx(term.variable)
if !isnan(model.updated_parameters[p])
delta_constant +=
term.coefficient *
(model.updated_parameters[p] - model.parameters[p])
end
end
for term in f.pp
p1 = p_idx(term.variable_1)
p2 = p_idx(term.variable_2)
isnan_1 = isnan(model.updated_parameters[p1])
isnan_2 = isnan(model.updated_parameters[p2])
if !isnan_1 || !isnan_2
new_1 = ifelse(
isnan_1,
model.parameters[p1],
model.updated_parameters[p1],
)
new_2 = ifelse(
isnan_2,
model.parameters[p2],
model.updated_parameters[p2],
)
delta_constant +=
term.coefficient *
(new_1 * new_2 - model.parameters[p1] * model.parameters[p2])
end
end
return delta_constant
end
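# Worked example with assumed values (for illustration only): for a term
# 2 * p1 * p2 with parameters currently at p1 = 3, p2 = 4, and an update
# p1 -> 5 while p2 is untouched (its updated value stays NaN), the delta is
#   2 * (5 * 4 - 3 * 4) = 16,
# that is, coefficient * (new_1 * new_2 - old_1 * old_2).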
function _parametric_affine_terms(
model,
f::ParametricQuadraticFunction{T},
) where {T}
param_terms_dict = Dict{MOI.VariableIndex,T}()
sizehint!(param_terms_dict, length(f.pv))
# remember a variable may appear more than once in pv
for term in f.pv
base = get(param_terms_dict, term.variable_2, zero(T))
param_terms_dict[term.variable_2] =
base + term.coefficient * model.parameters[p_idx(term.variable_1)]
end
# by definition affine data only contains variables that appear in pv
for (var, coef) in f.affine_data
param_terms_dict[var] += coef
end
return param_terms_dict
end
function _delta_parametric_affine_terms(
model,
f::ParametricQuadraticFunction{T},
) where {T}
delta_terms_dict = Dict{MOI.VariableIndex,T}()
sizehint!(delta_terms_dict, length(f.pv))
# remember a variable may appear more than once in pv
for term in f.pv
p = p_idx(term.variable_1)
if !isnan(model.updated_parameters[p])
base = get(delta_terms_dict, term.variable_2, zero(T))
delta_terms_dict[term.variable_2] =
base +
term.coefficient *
(model.updated_parameters[p] - model.parameters[p])
end
end
return delta_terms_dict
end
function _update_cache!(f::ParametricQuadraticFunction{T}, model) where {T}
f.current_constant = _parametric_constant(model, f)
f.current_terms_with_p = _parametric_affine_terms(model, f)
return nothing
end
mutable struct ParametricAffineFunction{T} <: ParametricFunction{T}
# constant * parameter
p::Vector{MOI.ScalarAffineTerm{T}}
# constant * variable
v::Vector{MOI.ScalarAffineTerm{T}}
# constant
c::T
# to avoid unnecessary lookups in updates
set_constant::T
# cache to avoid slow getters
current_constant::T
end
function ParametricAffineFunction(f::MOI.ScalarAffineFunction{T}) where {T}
v, p = _split_affine_terms(f.terms)
return ParametricAffineFunction(p, v, f.constant)
end
function ParametricAffineFunction(
terms_p::Vector{MOI.ScalarAffineTerm{T}},
terms_v::Vector{MOI.ScalarAffineTerm{T}},
constant::T,
) where {T}
return ParametricAffineFunction{T}(
terms_p,
terms_v,
constant,
zero(T),
zero(T),
)
end
function _split_affine_terms(terms::Vector{MOI.ScalarAffineTerm{T}}) where {T}
num_v, num_p = _count_scalar_affine_terms_types(terms)
v = Vector{MOI.ScalarAffineTerm{T}}(undef, num_v)
p = Vector{MOI.ScalarAffineTerm{T}}(undef, num_p)
i_v = 1
i_p = 1
for term in terms
if _is_variable(term.variable)
v[i_v] = term
i_v += 1
else
p[i_p] = term
i_p += 1
end
end
return v, p
end
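# Note: `_is_variable` and `p_idx` (defined elsewhere in the package)
# distinguish decision variables from parameters by index; parameters are
# created with indices above PARAMETER_INDEX_THRESHOLD, so for example
# MOI.VariableIndex(1) is a variable while
# MOI.VariableIndex(PARAMETER_INDEX_THRESHOLD + 1) is a parameter.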
function _count_scalar_affine_terms_types(
terms::Vector{MOI.ScalarAffineTerm{T}},
) where {T}
num_vars = 0
num_params = 0
for term in terms
if _is_variable(term.variable)
num_vars += 1
else
num_params += 1
end
end
return num_vars, num_params
end
function _original_function(f::ParametricAffineFunction{T}) where {T}
return MOI.ScalarAffineFunction{T}(vcat(f.p, f.v), f.c)
end
function _current_function(f::ParametricAffineFunction{T}) where {T}
return MOI.ScalarAffineFunction{T}(f.v, f.current_constant)
end
function _parametric_constant(model, f::ParametricAffineFunction{T}) where {T}
    # do not add the set constant here
param_constant = f.c
for term in f.p
param_constant +=
term.coefficient * model.parameters[p_idx(term.variable)]
end
return param_constant
end
function _delta_parametric_constant(
model,
f::ParametricAffineFunction{T},
) where {T}
delta_constant = zero(T)
for term in f.p
p = p_idx(term.variable)
if !isnan(model.updated_parameters[p])
delta_constant +=
term.coefficient *
(model.updated_parameters[p] - model.parameters[p])
end
end
return delta_constant
end
function _update_cache!(f::ParametricAffineFunction{T}, model) where {T}
f.current_constant = _parametric_constant(model, f)
return nothing
end
mutable struct ParametricVectorAffineFunction{T}
# constant * parameter
p::Vector{MOI.VectorAffineTerm{T}}
# constant * variable
v::Vector{MOI.VectorAffineTerm{T}}
# constant
c::Vector{T}
# to avoid unnecessary lookups in updates
set_constant::Vector{T}
# cache to avoid slow getters
current_constant::Vector{T}
end
function ParametricVectorAffineFunction(
f::MOI.VectorAffineFunction{T},
) where {T}
v, p = _split_vector_affine_terms(f.terms)
return ParametricVectorAffineFunction{T}(
p,
v,
copy(f.constants),
zeros(T, length(f.constants)),
zeros(T, length(f.constants)),
)
end
function _split_vector_affine_terms(
terms::Vector{MOI.VectorAffineTerm{T}},
) where {T}
num_v, num_p = _count_vector_affine_terms_types(terms)
v = Vector{MOI.VectorAffineTerm{T}}(undef, num_v)
p = Vector{MOI.VectorAffineTerm{T}}(undef, num_p)
i_v = 1
i_p = 1
for term in terms
if _is_variable(term.scalar_term.variable)
v[i_v] = term
i_v += 1
else
p[i_p] = term
i_p += 1
end
end
return v, p
end
function _count_vector_affine_terms_types(
terms::Vector{MOI.VectorAffineTerm{T}},
) where {T}
num_vars = 0
num_params = 0
for term in terms
if _is_variable(term.scalar_term.variable)
num_vars += 1
else
num_params += 1
end
end
return num_vars, num_params
end
function _original_function(f::ParametricVectorAffineFunction{T}) where {T}
return MOI.VectorAffineFunction{T}(vcat(f.p, f.v), f.c)
end
function _current_function(f::ParametricVectorAffineFunction{T}) where {T}
return MOI.VectorAffineFunction{T}(f.v, f.current_constant)
end
function _parametric_constant(
model,
f::ParametricVectorAffineFunction{T},
) where {T}
    # do not add the set constant here
param_constant = copy(f.c)
for term in f.p
param_constant[term.output_index] +=
term.scalar_term.coefficient *
model.parameters[p_idx(term.scalar_term.variable)]
end
return param_constant
end
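# Illustrative example (assumed data): with constants c = [1.0, 2.0] and a
# single parameter term 3.0 * p in output row 2, a parameter value p = 4.0
# yields param_constant = [1.0, 2.0 + 3.0 * 4.0] = [1.0, 14.0].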
function _delta_parametric_constant(
model,
f::ParametricVectorAffineFunction{T},
) where {T}
delta_constant = zeros(T, length(f.c))
for term in f.p
p = p_idx(term.scalar_term.variable)
if !isnan(model.updated_parameters[p])
delta_constant[term.output_index] +=
term.scalar_term.coefficient *
(model.updated_parameters[p] - model.parameters[p])
end
end
return delta_constant
end
function _update_cache!(f::ParametricVectorAffineFunction{T}, model) where {T}
f.current_constant = _parametric_constant(model, f)
return nothing
end
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | code | 10180 | # Copyright (c) 2020: Tomás Gutierrez and contributors
#
# Use of this source code is governed by an MIT-style license that can be found
# in the LICENSE.md file or at https://opensource.org/licenses/MIT.
function _set_with_new_constant(s::MOI.LessThan{T}, val::T) where {T}
return MOI.LessThan{T}(s.upper - val)
end
function _set_with_new_constant(s::MOI.GreaterThan{T}, val::T) where {T}
return MOI.GreaterThan{T}(s.lower - val)
end
function _set_with_new_constant(s::MOI.EqualTo{T}, val::T) where {T}
return MOI.EqualTo{T}(s.value - val)
end
function _set_with_new_constant(s::MOI.Interval{T}, val::T) where {T}
return MOI.Interval{T}(s.lower - val, s.upper - val)
end
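# Example of the convention above (illustrative values): moving a parametric
# constant of 3.0 into the set turns f(x) + 3.0 <= 5.0 into f(x) <= 2.0, that
# is, _set_with_new_constant(MOI.LessThan(5.0), 3.0) == MOI.LessThan(2.0).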
# Affine
# change to use only inner_ci all around so that updates are faster
# modifications should not be needed anyway; after all, we have parameters all around
function _update_affine_constraints!(model::Optimizer)
for (F, S) in keys(model.affine_constraint_cache.dict)
affine_constraint_cache_inner = model.affine_constraint_cache[F, S]
affine_constraint_cache_set_inner =
model.affine_constraint_cache_set[F, S]
if !isempty(affine_constraint_cache_inner)
# barrier to avoid type instability of inner dicts
_update_affine_constraints!(
model,
affine_constraint_cache_inner,
affine_constraint_cache_set_inner,
)
end
end
return
end
# TODO: cache changes and then batch them instead
function _update_affine_constraints!(
model::Optimizer,
affine_constraint_cache_inner::DoubleDictInner{F,S,V},
affine_constraint_cache_set_inner::DoubleDictInner{
F,
S,
MOI.AbstractScalarSet,
},
) where {F,S<:SIMPLE_SCALAR_SETS{T},V} where {T}
# cis = MOI.ConstraintIndex{F,S}[]
# sets = S[]
# sizehint!(cis, length(affine_constraint_cache_inner))
# sizehint!(sets, length(affine_constraint_cache_inner))
for (inner_ci, pf) in affine_constraint_cache_inner
delta_constant = _delta_parametric_constant(model, pf)
if !iszero(delta_constant)
pf.current_constant += delta_constant
new_set = S(pf.set_constant - pf.current_constant)
# new_set = _set_with_new_constant(set, param_constant)
MOI.set(model.optimizer, MOI.ConstraintSet(), inner_ci, new_set)
# push!(cis, inner_ci)
# push!(sets, new_set)
end
end
# if !isempty(cis)
# MOI.set(model.optimizer, MOI.ConstraintSet(), cis, sets)
# end
return
end
function _update_affine_constraints!(
model::Optimizer,
affine_constraint_cache_inner::DoubleDictInner{F,S,V},
affine_constraint_cache_set_inner::DoubleDictInner{
F,
S,
MOI.AbstractScalarSet,
},
) where {F,S<:MOI.Interval{T},V} where {T}
for (inner_ci, pf) in affine_constraint_cache_inner
delta_constant = _delta_parametric_constant(model, pf)
if !iszero(delta_constant)
pf.current_constant += delta_constant
# new_set = S(pf.set_constant - pf.current_constant)
set = affine_constraint_cache_set_inner[inner_ci]::S
new_set = _set_with_new_constant(set, pf.current_constant)::S
MOI.set(model.optimizer, MOI.ConstraintSet(), inner_ci, new_set)
end
end
return
end
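# Why Interval gets its own method: unlike LessThan/GreaterThan/EqualTo, an
# Interval carries two constants, so the original set is looked up in
# affine_constraint_cache_set and both bounds are shifted via
# _set_with_new_constant instead of rebuilding the set from set_constant.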
function _update_vector_affine_constraints!(model::Optimizer)
for (F, S) in keys(model.vector_affine_constraint_cache.dict)
vector_affine_constraint_cache_inner =
model.vector_affine_constraint_cache[F, S]
if !isempty(vector_affine_constraint_cache_inner)
# barrier to avoid type instability of inner dicts
_update_vector_affine_constraints!(
model,
vector_affine_constraint_cache_inner,
)
end
end
return
end
function _update_vector_affine_constraints!(
model::Optimizer,
vector_affine_constraint_cache_inner::DoubleDictInner{F,S,V},
) where {F<:MOI.VectorAffineFunction{T},S,V} where {T}
for (inner_ci, pf) in vector_affine_constraint_cache_inner
delta_constant = _delta_parametric_constant(model, pf)
if !iszero(delta_constant)
pf.current_constant .+= delta_constant
MOI.modify(
model.optimizer,
inner_ci,
MOI.VectorConstantChange(pf.current_constant),
)
end
end
return
end
function _update_quadratic_constraints!(model::Optimizer)
for (F, S) in keys(model.quadratic_constraint_cache.dict)
quadratic_constraint_cache_inner =
model.quadratic_constraint_cache[F, S]
quadratic_constraint_cache_set_inner =
model.quadratic_constraint_cache_set[F, S]
if !isempty(quadratic_constraint_cache_inner)
# barrier to avoid type instability of inner dicts
_update_quadratic_constraints!(
model,
quadratic_constraint_cache_inner,
quadratic_constraint_cache_set_inner,
)
end
end
return
end
function _affine_build_change_and_up_param_func(
pf::ParametricQuadraticFunction{T},
delta_terms,
) where {T}
changes = Vector{MOI.ScalarCoefficientChange}(undef, length(delta_terms))
i = 1
for (var, coef) in delta_terms
base_coef = pf.current_terms_with_p[var]
new_coef = base_coef + coef
pf.current_terms_with_p[var] = new_coef
changes[i] = MOI.ScalarCoefficientChange(var, new_coef)
i += 1
end
return changes
end
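# Sketch of the coefficient update above (hypothetical numbers): if variable x
# currently has cached coefficient 4.0 and the pending parameter changes
# contribute a delta of 1.5, the cache entry becomes 5.5 and a
# MOI.ScalarCoefficientChange(x, 5.5) is emitted for MOI.modify.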
function _update_quadratic_constraints!(
model::Optimizer,
quadratic_constraint_cache_inner::DoubleDictInner{F,S,V},
quadratic_constraint_cache_set_inner::DoubleDictInner{
F,
S,
MOI.AbstractScalarSet,
},
) where {F,S<:SIMPLE_SCALAR_SETS{T},V} where {T}
# cis = MOI.ConstraintIndex{F,S}[]
# sets = S[]
# sizehint!(cis, length(quadratic_constraint_cache_inner))
# sizehint!(sets, length(quadratic_constraint_cache_inner))
for (inner_ci, pf) in quadratic_constraint_cache_inner
delta_constant = _delta_parametric_constant(model, pf)
if !iszero(delta_constant)
pf.current_constant += delta_constant
new_set = S(pf.set_constant - pf.current_constant)
# new_set = _set_with_new_constant(set, param_constant)
MOI.set(model.optimizer, MOI.ConstraintSet(), inner_ci, new_set)
# push!(cis, inner_ci)
# push!(sets, new_set)
end
delta_terms = _delta_parametric_affine_terms(model, pf)
if !isempty(delta_terms)
changes = _affine_build_change_and_up_param_func(pf, delta_terms)
cis = fill(inner_ci, length(changes))
MOI.modify(model.optimizer, cis, changes)
end
end
# if !isempty(cis)
# MOI.set(model.optimizer, MOI.ConstraintSet(), cis, sets)
# end
return
end
function _update_quadratic_constraints!(
model::Optimizer,
quadratic_constraint_cache_inner::DoubleDictInner{F,S,V},
quadratic_constraint_cache_set_inner::DoubleDictInner{
F,
S,
MOI.AbstractScalarSet,
},
) where {F,S<:MOI.Interval{T},V} where {T}
for (inner_ci, pf) in quadratic_constraint_cache_inner
delta_constant = _delta_parametric_constant(model, pf)
if !iszero(delta_constant)
pf.current_constant += delta_constant
# new_set = S(pf.set_constant - pf.current_constant)
set = quadratic_constraint_cache_set_inner[inner_ci]::S
new_set = _set_with_new_constant(set, pf.current_constant)::S
MOI.set(model.optimizer, MOI.ConstraintSet(), inner_ci, new_set)
end
delta_terms = _delta_parametric_affine_terms(model, pf)
if !isempty(delta_terms)
changes = _affine_build_change_and_up_param_func(pf, delta_terms)
cis = fill(inner_ci, length(changes))
MOI.modify(model.optimizer, cis, changes)
end
end
return
end
function _update_affine_objective!(model::Optimizer{T}) where {T}
if model.affine_objective_cache === nothing
return
end
pf = model.affine_objective_cache
delta_constant = _delta_parametric_constant(model, pf)
if !iszero(delta_constant)
pf.current_constant += delta_constant
# F = MOI.get(model.optimizer, MOI.ObjectiveFunctionType())
MOI.modify(
model.optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{T}}(),
MOI.ScalarConstantChange(pf.current_constant),
)
end
return
end
function _update_quadratic_objective!(model::Optimizer{T}) where {T}
if model.quadratic_objective_cache === nothing
return
end
pf = model.quadratic_objective_cache
delta_constant = _delta_parametric_constant(model, pf)
if !iszero(delta_constant)
pf.current_constant += delta_constant
F = MOI.get(model.optimizer, MOI.ObjectiveFunctionType())
MOI.modify(
model.optimizer,
MOI.ObjectiveFunction{F}(),
MOI.ScalarConstantChange(pf.current_constant),
)
end
delta_terms = _delta_parametric_affine_terms(model, pf)
if !isempty(delta_terms)
F = MOI.get(model.optimizer, MOI.ObjectiveFunctionType())
changes = _affine_build_change_and_up_param_func(pf, delta_terms)
MOI.modify(model.optimizer, MOI.ObjectiveFunction{F}(), changes)
end
return
end
function update_parameters!(model::Optimizer)
_update_affine_constraints!(model)
_update_vector_affine_constraints!(model)
_update_quadratic_constraints!(model)
_update_affine_objective!(model)
_update_quadratic_objective!(model)
    # Apply the pending parameter updates, then reset the update markers to
    # NaN to indicate that each update has been applied
for (parameter_index, val) in model.updated_parameters
if !isnan(val)
model.parameters[parameter_index] = val
model.updated_parameters[parameter_index] = NaN
end
end
return
end
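# Typical flow, as a minimal sketch (`model` is assumed to be a POI.Optimizer
# wrapping an inner solver): setting a parameter, e.g. via
# MOI.set(model, MOI.ConstraintSet(), cp, MOI.Parameter(v)), records v in
# model.updated_parameters; a later call to update_parameters!(model) then
# pushes the induced set/coefficient/constant changes to the inner optimizer
# and resets the markers to NaN.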
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | code | 37821 | # Copyright (c) 2020: Tomás Gutierrez and contributors
#
# Use of this source code is governed by an MIT-style license that can be found
# in the LICENSE.md file or at https://opensource.org/licenses/MIT.
function test_jump_direct_affine_parameters()
optimizer = POI.Optimizer(GLPK.Optimizer())
model = direct_model(optimizer)
@variable(model, x[i = 1:2] >= 0)
@variable(model, y in MOI.Parameter(0.0))
@variable(model, w in MOI.Parameter(0.0))
@variable(model, z in MOI.Parameter(0.0))
@constraint(model, 2 * x[1] + x[2] + y <= 4)
@constraint(model, 1 * x[1] + 2 * x[2] + z <= 4)
@objective(model, Max, 4 * x[1] + 3 * x[2] + w)
optimize!(model)
@test isapprox.(value(x[1]), 4.0 / 3.0, atol = ATOL)
@test isapprox.(value(x[2]), 4.0 / 3.0, atol = ATOL)
@test isapprox.(value(y), 0, atol = ATOL)
# ===== Set parameter value =====
MOI.set(model, POI.ParameterValue(), y, 2.0)
optimize!(model)
@test isapprox.(value(x[1]), 0.0, atol = ATOL)
@test isapprox.(value(x[2]), 2.0, atol = ATOL)
@test isapprox.(value(y), 2.0, atol = ATOL)
return
end
function test_jump_direct_parameter_times_variable()
optimizer = POI.Optimizer(GLPK.Optimizer())
model = direct_model(optimizer)
@variable(model, x[i = 1:2] >= 0)
@variable(model, y in MOI.Parameter(0.0))
@variable(model, w in MOI.Parameter(0.0))
@variable(model, z in MOI.Parameter(0.0))
@constraint(model, 2 * x[1] + x[2] + y <= 4)
@constraint(model, (1 + y) * x[1] + 2 * x[2] + z <= 4)
@objective(model, Max, 4 * x[1] + 3 * x[2] + w)
optimize!(model)
@test isapprox.(value(x[1]), 4.0 / 3.0, atol = ATOL)
@test isapprox.(value(x[2]), 4.0 / 3.0, atol = ATOL)
@test isapprox.(value(y), 0, atol = ATOL)
# ===== Set parameter value =====
MOI.set(model, POI.ParameterValue(), y, 2.0)
optimize!(model)
@test isapprox.(value(x[1]), 0.0, atol = ATOL)
@test isapprox.(value(x[2]), 2.0, atol = ATOL)
@test isapprox.(value(y), 2.0, atol = ATOL)
return
end
function test_jump_affine_parameters()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, x[i = 1:2] >= 0)
@variable(model, y in MOI.Parameter(0.0))
@variable(model, w in MOI.Parameter(0.0))
@variable(model, z in MOI.Parameter(0.0))
@constraint(model, 2 * x[1] + x[2] + y <= 4)
@constraint(model, 1 * x[1] + 2 * x[2] + z <= 4)
@objective(model, Max, 4 * x[1] + 3 * x[2] + w)
optimize!(model)
@test isapprox.(value(x[1]), 4.0 / 3.0, atol = ATOL)
@test isapprox.(value(x[2]), 4.0 / 3.0, atol = ATOL)
@test isapprox.(value(y), 0, atol = ATOL)
# ===== Set parameter value =====
MOI.set(model, POI.ParameterValue(), y, 2.0)
optimize!(model)
@test isapprox.(value(x[1]), 0.0, atol = ATOL)
@test isapprox.(value(x[2]), 2.0, atol = ATOL)
@test isapprox.(value(y), 2.0, atol = ATOL)
return
end
function test_jump_parameter_times_variable()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, x[i = 1:2] >= 0)
@variable(model, y in MOI.Parameter(0.0))
@variable(model, w in MOI.Parameter(0.0))
@variable(model, z in MOI.Parameter(0.0))
@test MOI.get(model, POI.ParameterValue(), y) == 0
@constraint(model, 2 * x[1] + x[2] + y <= 4)
@constraint(model, (1 + y) * x[1] + 2 * x[2] + z <= 4)
@objective(model, Max, 4 * x[1] + 3 * x[2] + w)
optimize!(model)
@test isapprox.(value(x[1]), 4.0 / 3.0, atol = ATOL)
@test isapprox.(value(x[2]), 4.0 / 3.0, atol = ATOL)
@test isapprox.(value(y), 0, atol = ATOL)
# ===== Set parameter value =====
MOI.set(model, POI.ParameterValue(), y, 2.0)
optimize!(model)
@test isapprox.(value(x[1]), 0.0, atol = ATOL)
@test isapprox.(value(x[2]), 2.0, atol = ATOL)
@test isapprox.(value(y), 2.0, atol = ATOL)
return
end
function test_jump_constraintfunction_getter()
model = direct_model(
POI.Optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Utilities.AUTOMATIC,
),
),
)
vx = @variable(model, x[i = 1:2])
vp = @variable(model, p[i = 1:2] in MOI.Parameter.(-1.0))
c1 = @constraint(model, con, sum(x) + sum(p) >= 1)
c2 = @constraint(model, conq, sum(x .* p) >= 1)
c3 = @constraint(model, conqa, sum(x .* p) + x[1]^2 + x[1] + p[1] >= 1)
@test MOI.Utilities.canonical(
MOI.get(model, MOI.ConstraintFunction(), c1),
) ≈ MOI.Utilities.canonical(
MOI.ScalarAffineFunction{Float64}(
[
MOI.ScalarAffineTerm{Float64}(1.0, MOI.VariableIndex(1)),
MOI.ScalarAffineTerm{Float64}(1.0, MOI.VariableIndex(2)),
MOI.ScalarAffineTerm{Float64}(
1.0,
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 1),
),
MOI.ScalarAffineTerm{Float64}(
1.0,
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 2),
),
],
0.0,
),
)
@test canonical_compare(
MOI.get(model, MOI.ConstraintFunction(), c2),
MOI.ScalarQuadraticFunction{Float64}(
[
MOI.ScalarQuadraticTerm{Float64}(
1.0,
MOI.VariableIndex(1),
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 1),
),
MOI.ScalarQuadraticTerm{Float64}(
1.0,
MOI.VariableIndex(2),
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 2),
),
],
[],
0.0,
),
)
@test canonical_compare(
MOI.get(model, MOI.ConstraintFunction(), c3),
MOI.ScalarQuadraticFunction{Float64}(
[
MOI.ScalarQuadraticTerm{Float64}(
1.0,
MOI.VariableIndex(1),
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 1),
),
MOI.ScalarQuadraticTerm{Float64}(
1.0,
MOI.VariableIndex(2),
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 2),
),
MOI.ScalarQuadraticTerm{Float64}(
2.0,
MOI.VariableIndex(1),
MOI.VariableIndex(1),
),
],
[
MOI.ScalarAffineTerm{Float64}(1.0, MOI.VariableIndex(1)),
MOI.ScalarAffineTerm{Float64}(
1.0,
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 1),
),
],
0.0,
),
)
o1 = @objective(model, Min, sum(x) + sum(p))
F = MOI.get(model, MOI.ObjectiveFunctionType())
@test canonical_compare(
MOI.get(model, MOI.ObjectiveFunction{F}()),
MOI.ScalarAffineFunction{Float64}(
[
MOI.ScalarAffineTerm{Float64}(1.0, MOI.VariableIndex(1)),
MOI.ScalarAffineTerm{Float64}(1.0, MOI.VariableIndex(2)),
MOI.ScalarAffineTerm{Float64}(
1.0,
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 1),
),
MOI.ScalarAffineTerm{Float64}(
1.0,
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 2),
),
],
0.0,
),
)
o2 = @objective(model, Min, sum(x .* p) + 2)
F = MOI.get(model, MOI.ObjectiveFunctionType())
f = MOI.get(model, MOI.ObjectiveFunction{F}())
f_ref = MOI.ScalarQuadraticFunction{Float64}(
[
MOI.ScalarQuadraticTerm{Float64}(
1.0,
MOI.VariableIndex(1),
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 1),
),
MOI.ScalarQuadraticTerm{Float64}(
1.0,
MOI.VariableIndex(2),
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 2),
),
],
[],
2.0,
)
@test canonical_compare(f, f_ref)
o3 = @objective(model, Min, sum(x .* p) + x[1]^2 + x[1] + p[1])
F = MOI.get(model, MOI.ObjectiveFunctionType())
@test canonical_compare(
MOI.get(model, MOI.ObjectiveFunction{F}()),
MOI.ScalarQuadraticFunction{Float64}(
[
MOI.ScalarQuadraticTerm{Float64}(
1.0,
MOI.VariableIndex(1),
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 1),
),
MOI.ScalarQuadraticTerm{Float64}(
1.0,
MOI.VariableIndex(2),
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 2),
),
MOI.ScalarQuadraticTerm{Float64}(
2.0,
MOI.VariableIndex(1),
MOI.VariableIndex(1),
),
],
[
MOI.ScalarAffineTerm{Float64}(1.0, MOI.VariableIndex(1)),
MOI.ScalarAffineTerm{Float64}(
1.0,
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 1),
),
],
0.0,
),
)
return
end
function test_jump_interpret_parameteric_bounds()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
MOI.set(model, POI.ConstraintsInterpretation(), POI.ONLY_BOUNDS)
@variable(model, x[i = 1:2])
@variable(model, p[i = 1:2] in MOI.Parameter.(-1.0))
@constraint(model, [i in 1:2], x[i] >= p[i])
@objective(model, Min, sum(x))
optimize!(model)
expected = Tuple{Type,Type}[
(MOI.ScalarAffineFunction{Float64}, MOI.GreaterThan{Float64}),
(MOI.VariableIndex, MOI.Parameter{Float64}),
]
result = MOI.get(model, MOI.ListOfConstraintTypesPresent())
@test Set(result) == Set(expected)
@test length(result) == length(expected)
expected = Tuple{Type,Type}[(MOI.VariableIndex, MOI.GreaterThan{Float64})]
result = MOI.get(
backend(model).optimizer.model.optimizer,
MOI.ListOfConstraintTypesPresent(),
)
@test Set(result) == Set(expected)
@test length(result) == length(expected)
@test objective_value(model) == -2
MOI.set(model, POI.ParameterValue(), p[1], 4.0)
optimize!(model)
@test objective_value(model) == 3
return
end
function test_jump_interpret_parameteric_bounds_expression()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
MOI.set(model, POI.ConstraintsInterpretation(), POI.ONLY_BOUNDS)
@variable(model, x[i = 1:2])
@variable(model, p[i = 1:2] in MOI.Parameter.(-1.0))
@constraint(model, [i in 1:2], x[i] >= p[i] + p[1])
@objective(model, Min, sum(x))
optimize!(model)
expected = Tuple{Type,Type}[
(MOI.ScalarAffineFunction{Float64}, MOI.GreaterThan{Float64}),
(MOI.VariableIndex, MOI.Parameter{Float64}),
]
result = MOI.get(model, MOI.ListOfConstraintTypesPresent())
@test Set(result) == Set(expected)
@test length(result) == length(expected)
expected = Tuple{Type,Type}[(MOI.VariableIndex, MOI.GreaterThan{Float64})]
result = MOI.get(
backend(model).optimizer.model.optimizer,
MOI.ListOfConstraintTypesPresent(),
)
@test Set(result) == Set(expected)
@test length(result) == length(expected)
@test objective_value(model) == -4
MOI.set(model, POI.ParameterValue(), p[1], 4.0)
optimize!(model)
@test objective_value(model) == 11.0
return
end
function test_jump_direct_interpret_parameteric_bounds()
model = direct_model(POI.Optimizer(GLPK.Optimizer()))
MOI.set(model, POI.ConstraintsInterpretation(), POI.ONLY_BOUNDS)
@variable(model, x[i = 1:2])
@variable(model, p[i = 1:2] in MOI.Parameter.(-1.0))
@constraint(model, [i in 1:2], x[i] >= p[i])
@objective(model, Min, sum(x))
optimize!(model)
expected = Tuple{Type,Type}[
(MOI.ScalarAffineFunction{Float64}, MOI.GreaterThan{Float64}),
(MOI.VariableIndex, MOI.Parameter{Float64}),
]
result = MOI.get(model, MOI.ListOfConstraintTypesPresent())
@test Set(result) == Set(expected)
@test length(result) == length(expected)
expected = Tuple{Type,Type}[(MOI.VariableIndex, MOI.GreaterThan{Float64})]
result =
MOI.get(backend(model).optimizer, MOI.ListOfConstraintTypesPresent())
@test Set(result) == Set(expected)
@test length(result) == length(expected)
@test objective_value(model) == -2
MOI.set(model, POI.ParameterValue(), p[1], 4.0)
optimize!(model)
@test objective_value(model) == 3
return
end
function test_jump_direct_interpret_parameteric_bounds_no_interpretation()
model = direct_model(POI.Optimizer(GLPK.Optimizer()))
MOI.set(model, POI.ConstraintsInterpretation(), POI.ONLY_CONSTRAINTS)
@variable(model, x[i = 1:2])
@variable(model, p[i = 1:2] in MOI.Parameter.(-1.0))
@constraint(model, [i in 1:2], x[i] >= p[i])
@objective(model, Min, sum(x))
optimize!(model)
expected = Tuple{Type,Type}[
(MOI.ScalarAffineFunction{Float64}, MOI.GreaterThan{Float64}),
(MOI.VariableIndex, MOI.Parameter{Float64}),
]
result = MOI.get(model, MOI.ListOfConstraintTypesPresent())
@test Set(result) == Set(expected)
@test length(result) == length(expected)
expected = Tuple{Type,Type}[(
MOI.ScalarAffineFunction{Float64},
MOI.GreaterThan{Float64},
),]
result =
MOI.get(backend(model).optimizer, MOI.ListOfConstraintTypesPresent())
@test Set(result) == Set(expected)
@test length(result) == length(expected)
@test objective_value(model) == -2
MOI.set(model, POI.ParameterValue(), p[1], 4.0)
optimize!(model)
@test objective_value(model) == 3
return
end
function test_jump_direct_interpret_parameteric_bounds_change()
model = direct_model(POI.Optimizer(GLPK.Optimizer()))
MOI.set(model, POI.ConstraintsInterpretation(), POI.ONLY_BOUNDS)
@variable(model, x[i = 1:2])
@variable(model, p[i = 1:2] in MOI.Parameter.(-1.0))
@constraint(model, [i in 1:2], x[i] >= p[i])
@test_throws ErrorException @constraint(model, [i in 1:2], 2x[i] >= p[i])
MOI.set(model, POI.ConstraintsInterpretation(), POI.ONLY_CONSTRAINTS)
@constraint(model, [i in 1:2], 2x[i] >= p[i])
@objective(model, Min, sum(x))
optimize!(model)
@test objective_value(model) == -1
MOI.set(model, POI.ParameterValue(), p[1], 4.0)
optimize!(model)
@test objective_value(model) == 3.5
return
end
function test_jump_direct_interpret_parameteric_bounds_both()
model = direct_model(POI.Optimizer(GLPK.Optimizer()))
MOI.set(model, POI.ConstraintsInterpretation(), POI.BOUNDS_AND_CONSTRAINTS)
@variable(model, x[i = 1:2])
@variable(model, p[i = 1:2] in MOI.Parameter.(-1.0))
@constraint(model, [i in 1:2], x[i] >= p[i])
@constraint(model, [i in 1:2], 2x[i] >= p[i])
@objective(model, Min, sum(x))
optimize!(model)
@test objective_value(model) == -1
MOI.set(model, POI.ParameterValue(), p[1], 4.0)
optimize!(model)
@test objective_value(model) == 3.5
return
end
function test_jump_direct_interpret_parameteric_bounds_invalid()
model = direct_model(POI.Optimizer(GLPK.Optimizer()))
MOI.set(model, POI.ConstraintsInterpretation(), POI.ONLY_BOUNDS)
@variable(model, x[i = 1:2])
@variable(model, p[i = 1:2] in MOI.Parameter.(-1.0))
@test_throws ErrorException @constraint(
model,
[i in 1:2],
2x[i] >= p[i] + p[1]
)
return
end
function test_jump_set_variable_start_value()
cached = MOI.Bridges.full_bridge_optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
GLPK.Optimizer(),
),
Float64,
)
optimizer = POI.Optimizer(cached)
model = direct_model(optimizer)
@variable(model, x >= 0)
@variable(model, p in MOI.Parameter(0.0))
set_start_value(x, 1.0)
@test start_value(x) == 1
err = ErrorException(
"MathOptInterface.VariablePrimalStart() is not supported for parameters",
)
@test_throws err set_start_value(p, 1.0)
@test_throws err start_value(p)
return
end
function test_jump_direct_get_parameter_value()
model = direct_model(POI.Optimizer(GLPK.Optimizer()))
@variable(model, x, lower_bound = 0.0, upper_bound = 10.0)
@variable(model, y, binary = true)
@variable(model, z, set = MOI.Parameter(10.0))
c = @constraint(model, 19.0 * x - z + 22.0 * y <= 1.0)
@objective(model, Min, x + y)
@test MOI.get(model, POI.ParameterValue(), z) == 10
return
end
function test_jump_get_parameter_value()
model = Model(() -> ParametricOptInterface.Optimizer(GLPK.Optimizer()))
@variable(model, x, lower_bound = 0.0, upper_bound = 10.0)
@variable(model, y, binary = true)
@variable(model, z, set = MOI.Parameter(10))
c = @constraint(model, 19.0 * x - z + 22.0 * y <= 1.0)
@objective(model, Min, x + y)
@test MOI.get(model, POI.ParameterValue(), z) == 10
return
end
function test_jump_sdp_scalar_parameter()
cached = MOI.Bridges.full_bridge_optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
SCS.Optimizer(),
),
Float64,
)
optimizer = POI.Optimizer(cached)
m = direct_model(optimizer)
set_silent(m)
@variable(m, p in MOI.Parameter(0.0))
@variable(m, x[1:2, 1:2], Symmetric)
@objective(m, Min, x[1, 1] + x[2, 2])
@constraint(m, LinearAlgebra.Symmetric(x .- [1+p 0; 0 1+p]) in PSDCone())
optimize!(m)
@test all(isapprox.(value.(x), [1 0; 0 1], atol = ATOL))
MOI.set(m, POI.ParameterValue(), p, 1)
optimize!(m)
@test all(isapprox.(value.(x), [2 0; 0 2], atol = ATOL))
return
end
function test_jump_sdp_matrix_parameter()
cached = MOI.Bridges.full_bridge_optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
SCS.Optimizer(),
),
Float64,
)
optimizer = POI.Optimizer(cached)
m = direct_model(optimizer)
set_silent(m)
P1 = [1 2; 2 3]
@variable(m, p[1:2, 1:2] in MOI.Parameter.(P1))
@variable(m, x[1:2, 1:2], Symmetric)
@objective(m, Min, x[1, 1] + x[2, 2])
@constraint(m, LinearAlgebra.Symmetric(x - p) in PSDCone())
optimize!(m)
@test all(isapprox.(value.(x), P1, atol = ATOL))
P2 = [1 2; 2 1]
MOI.set.(m, POI.ParameterValue(), p, P2)
optimize!(m)
@test all(isapprox.(value.(x), P2, atol = ATOL))
return
end
function test_jump_dual_basic()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, x[1:2] in MOI.Parameter.(ones(2) .* 4.0))
@variable(model, y[1:6])
@constraint(model, ctr1, 3 * y[1] >= 2 - 7 * x[1])
@objective(model, Min, 5 * y[1])
JuMP.optimize!(model)
@test 5 / 3 ≈ JuMP.dual(ctr1) atol = 1e-3
@test [-35 / 3, 0.0] ≈ MOI.get.(model, POI.ParameterDual(), x) atol = 1e-3
@test [-26 / 3, 0.0, 0.0, 0.0, 0.0, 0.0] ≈ JuMP.value.(y) atol = 1e-3
@test -130 / 3 ≈ JuMP.objective_value(model) atol = 1e-3
return
end
function test_jump_dual_multiplicative_fail()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, x)
@variable(model, p in MOI.Parameter(1.0))
@constraint(model, cons, x * p >= 3)
@objective(model, Min, 2x)
optimize!(model)
@test_throws ErrorException(
"Cannot compute the dual of a multiplicative parameter",
) MOI.get(model, POI.ParameterDual(), p)
return
end
function test_jump_dual_objective_min()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, x)
@variable(model, p in MOI.Parameter(1.0))
@constraint(model, cons, x >= 3 * p)
@objective(model, Min, 2x + p)
optimize!(model)
@test MOI.get(model, POI.ParameterDual(), p) == 7
return
end
function test_jump_dual_objective_max()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, x)
@variable(model, p in MOI.Parameter(1.0))
@constraint(model, cons, x >= 3 * p)
@objective(model, Max, -2x + p)
optimize!(model)
@test MOI.get(model, POI.ParameterDual(), p) == 5
return
end
function test_jump_dual_multiple_parameters_1()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, x[1:6] in MOI.Parameter.(ones(6) .* 4.0))
@variable(model, y[1:6])
@constraint(model, ctr1, 3 * y[1] >= 2 - 7 * x[3])
@constraint(model, ctr2, 3 * y[1] >= 2 - 7 * x[3])
@constraint(model, ctr3, 3 * y[1] >= 2 - 7 * x[3])
@constraint(model, ctr4, 3 * y[1] >= 2 - 7 * x[3])
@constraint(model, ctr5, 3 * y[1] >= 2 - 7 * x[3])
@constraint(model, ctr6, 3 * y[1] >= 2 - 7 * x[3])
@constraint(model, ctr7, sum(3 * y[i] + x[i] for i in 2:4) >= 2 - 7 * x[3])
@constraint(
model,
ctr8,
sum(3 * y[i] + 7.0 * x[i] - x[i] for i in 2:4) >= 2 - 7 * x[3]
)
@objective(model, Min, 5 * y[1])
JuMP.optimize!(model)
@test 5 / 3 ≈
JuMP.dual(ctr1) +
JuMP.dual(ctr2) +
JuMP.dual(ctr3) +
JuMP.dual(ctr4) +
JuMP.dual(ctr5) +
JuMP.dual(ctr6) atol = 1e-3
@test 0.0 ≈ JuMP.dual(ctr7) atol = 1e-3
@test 0.0 ≈ JuMP.dual(ctr8) atol = 1e-3
@test [0.0, 0.0, -35 / 3, 0.0, 0.0, 0.0] ≈
MOI.get.(model, POI.ParameterDual(), x) atol = 1e-3
@test [-26 / 3, 0.0, 0.0, 0.0, 0.0, 0.0] ≈ JuMP.value.(y) atol = 1e-3
@test -130 / 3 ≈ JuMP.objective_value(model) atol = 1e-3
return
end
function test_jump_duals_LessThan()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, α in MOI.Parameter(-1.0))
@variable(model, x)
cref = @constraint(model, x ≤ α)
@objective(model, Max, x)
JuMP.optimize!(model)
@test JuMP.value(x) == -1.0
@test JuMP.dual(cref) == -1.0
@test MOI.get(model, POI.ParameterDual(), α) == -1.0
MOI.set(model, POI.ParameterValue(), α, 2.0)
JuMP.optimize!(model)
@test JuMP.value(x) == 2.0
@test JuMP.dual(cref) == -1.0
@test MOI.get(model, POI.ParameterDual(), α) == -1.0
return
end
function test_jump_duals_EqualTo()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, α in MOI.Parameter(-1.0))
@variable(model, x)
cref = @constraint(model, x == α)
@objective(model, Max, x)
JuMP.optimize!(model)
@test JuMP.value(x) == -1.0
@test JuMP.dual(cref) == -1.0
@test MOI.get(model, POI.ParameterDual(), α) == -1.0
MOI.set(model, POI.ParameterValue(), α, 2.0)
JuMP.optimize!(model)
@test JuMP.value(x) == 2.0
@test JuMP.dual(cref) == -1.0
@test MOI.get(model, POI.ParameterDual(), α) == -1.0
return
end
function test_jump_duals_GreaterThan()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, α in MOI.Parameter(1.0))
MOI.set(model, POI.ParameterValue(), α, -1.0)
@variable(model, x)
cref = @constraint(model, x >= α)
@objective(model, Min, x)
JuMP.optimize!(model)
@test JuMP.value(x) == -1.0
@test JuMP.dual(cref) == 1.0
@test MOI.get(model, POI.ParameterDual(), α) == 1.0
MOI.set(model, POI.ParameterValue(), α, 2.0)
JuMP.optimize!(model)
@test JuMP.value(x) == 2.0
@test JuMP.dual(cref) == 1.0
@test MOI.get(model, POI.ParameterDual(), α) == 1.0
return
end
function test_jump_dual_multiple_parameters_2()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, α[1:10] in MOI.Parameter.(ones(10)))
@variable(model, x)
cref = @constraint(model, x == sum(2 * α[i] for i in 1:10))
@objective(model, Min, x)
JuMP.optimize!(model)
@test JuMP.value(x) == 20.0
@test JuMP.dual(cref) == 1.0
@test MOI.get(model, POI.ParameterDual(), α[3]) == 2.0
return
end
function test_jump_dual_mixing_params_and_vars_1()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, α[1:5] in MOI.Parameter.(ones(5)))
@variable(model, x)
cref = @constraint(model, sum(x for i in 1:5) == sum(2 * α[i] for i in 1:5))
@objective(model, Min, x)
JuMP.optimize!(model)
@test JuMP.value(x) == 2.0
@test JuMP.dual(cref) == 1 / 5
@test MOI.get(model, POI.ParameterDual(), α[3]) == 2 / 5
return
end
function test_jump_dual_mixing_params_and_vars_2()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, α[1:5] in MOI.Parameter.(ones(5)))
@variable(model, x)
cref = @constraint(model, 0.0 == sum(-x + 2 * α[i] for i in 1:5))
@objective(model, Min, x)
JuMP.optimize!(model)
@test JuMP.value(x) == 2.0
@test JuMP.dual(cref) == 1 / 5
@test MOI.get(model, POI.ParameterDual(), α[3]) == 2 / 5
return
end
function test_jump_dual_mixing_params_and_vars_3()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, α[1:5] in MOI.Parameter.(ones(5)))
@variable(model, x)
cref = @constraint(model, 0.0 == sum(-x + 2.0 + 2 * α[i] for i in 1:5))
@objective(model, Min, x)
JuMP.optimize!(model)
@test JuMP.value(x) == 4.0
@test JuMP.dual(cref) == 1 / 5
@test MOI.get(model, POI.ParameterDual(), α[3]) == 2 / 5
return
end
function test_jump_dual_add_after_solve()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, α in MOI.Parameter(1.0))
MOI.set(model, POI.ParameterValue(), α, -1.0)
@variable(model, x)
cref = @constraint(model, x <= α)
@objective(model, Max, x)
JuMP.optimize!(model)
@test JuMP.value(x) == -1.0
@test JuMP.dual(cref) == -1.0
@test MOI.get(model, POI.ParameterDual(), α) == -1.0
@variable(model, b in MOI.Parameter(-2.0))
cref = @constraint(model, x <= b)
JuMP.optimize!(model)
@test JuMP.value(x) == -2.0
@test JuMP.dual(cref) == -1.0
@test MOI.get(model, POI.ParameterDual(), α) == 0.0
@test MOI.get(model, POI.ParameterDual(), b) == -1.0
return
end
function test_jump_dual_add_ctr_alaternative()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, α in MOI.Parameter(-1.0))
@variable(model, x)
exp = x - α
cref = @constraint(model, exp ≤ 0)
@objective(model, Max, x)
JuMP.optimize!(model)
@test JuMP.value(x) == -1.0
@test JuMP.dual(cref) == -1.0
@test MOI.get(model, POI.ParameterDual(), α) == -1.0
return
end
function test_jump_dual_delete_constraint()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, α in MOI.Parameter(-1.0))
@variable(model, x)
cref1 = @constraint(model, x ≤ α / 2)
cref2 = @constraint(model, x ≤ α)
cref3 = @constraint(model, x ≤ 2α)
@objective(model, Max, x)
JuMP.delete(model, cref3)
JuMP.optimize!(model)
@test JuMP.value(x) == -1.0
@test JuMP.dual(cref1) == 0.0
@test JuMP.dual(cref2) == -1.0
@test MOI.get(model, POI.ParameterDual(), α) == -1.0
JuMP.delete(model, cref2)
JuMP.optimize!(model)
@test JuMP.value(x) == -0.5
@test JuMP.dual(cref1) == -1.0
@test MOI.get(model, POI.ParameterDual(), α) == -0.5
return
end
function test_jump_nlp()
model = Model(() -> ParametricOptInterface.Optimizer(Ipopt.Optimizer()))
@variable(model, x)
@variable(model, z in MOI.Parameter(10.0))
@constraint(model, x >= z)
@NLobjective(model, Min, x^2)
@test_throws ErrorException optimize!(model)
return
end
function test_jump_direct_vector_parameter_affine_nonnegatives()
"""
min x + y
x - t + 1 >= 0
y - t + 2 >= 0
opt
x* = t-1
y* = t-2
obj = 2*t-3
"""
cached = MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
SCS.Optimizer(),
)
optimizer = POI.Optimizer(cached)
model = direct_model(optimizer)
set_silent(model)
@variable(model, x)
@variable(model, y)
@variable(model, t in MOI.Parameter(5.0))
@constraint(model, [(x - t + 1), (y - t + 2)...] in MOI.Nonnegatives(2))
@objective(model, Min, x + y)
optimize!(model)
@test isapprox.(value(x), 4.0, atol = ATOL)
@test isapprox.(value(y), 3.0, atol = ATOL)
MOI.set(model, POI.ParameterValue(), t, 6)
optimize!(model)
@test isapprox.(value(x), 5.0, atol = ATOL)
@test isapprox.(value(y), 4.0, atol = ATOL)
return
end
function test_jump_direct_vector_parameter_affine_nonpositives()
"""
min x + y
- x + t - 1 ≤ 0
- y + t - 2 ≤ 0
opt
x* = t-1
y* = t-2
obj = 2*t-3
"""
cached = MOI.Bridges.full_bridge_optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
SCS.Optimizer(),
),
Float64,
)
optimizer = POI.Optimizer(cached)
model = direct_model(optimizer)
set_silent(model)
@variable(model, x)
@variable(model, y)
@variable(model, t in MOI.Parameter(5.0))
@constraint(model, [(-x + t - 1), (-y + t - 2)...] in MOI.Nonpositives(2))
@objective(model, Min, x + y)
optimize!(model)
@test isapprox.(value(x), 4.0, atol = ATOL)
@test isapprox.(value(y), 3.0, atol = ATOL)
MOI.set(model, POI.ParameterValue(), t, 6)
optimize!(model)
@test isapprox.(value(x), 5.0, atol = ATOL)
@test isapprox.(value(y), 4.0, atol = ATOL)
return
end
function test_jump_direct_soc_parameters()
"""
Problem SOC2 from MOI
min x
s.t. y ≥ 1/√2
(x-p)² + y² ≤ 1
in conic form:
min x
s.t. -1/√2 + y ∈ R₊
1 - t ∈ {0}
(t, x-p ,y) ∈ SOC₃
opt
x* = p - 1/√2
y* = 1/√2
"""
cached = MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
SCS.Optimizer(),
)
optimizer = POI.Optimizer(cached)
model = direct_model(optimizer)
set_silent(model)
@variable(model, x)
@variable(model, y)
@variable(model, t)
@variable(model, p in MOI.Parameter(0.0))
@constraint(model, [y - 1 / √2] in MOI.Nonnegatives(1))
@constraint(model, [t - 1] in MOI.Zeros(1))
@constraint(model, [t, (x - p), y...] in SecondOrderCone())
@objective(model, Min, 1.0 * x)
optimize!(model)
@test objective_value(model) ≈ -1 / √2 atol = ATOL
@test value(x) ≈ -1 / √2 atol = ATOL
@test value(y) ≈ 1 / √2 atol = ATOL
@test value(t) ≈ 1 atol = ATOL
MOI.set(model, POI.ParameterValue(), p, 1)
optimize!(model)
@test objective_value(model) ≈ 1 - 1 / √2 atol = ATOL
@test value(x) ≈ 1 - 1 / √2 atol = ATOL
return
end
function test_jump_direct_qp_objective()
optimizer = POI.Optimizer(Ipopt.Optimizer())
model = direct_model(optimizer)
MOI.set(model, MOI.Silent(), true)
@variable(model, x >= 0)
@variable(model, y >= 0)
@variable(model, p in MOI.Parameter(1.0))
@constraint(model, 2x + y <= 4)
@constraint(model, x + 2y <= 4)
@objective(model, Max, (x^2 + y^2) / 2)
optimize!(model)
@test objective_value(model) ≈ 16 / 9 atol = ATOL
@test value(x) ≈ 4 / 3 atol = ATOL
@test value(y) ≈ 4 / 3 atol = ATOL
MOI.set(
backend(model),
POI.QuadraticObjectiveCoef(),
(index(x), index(y)),
2index(p) + 3,
)
optimize!(model)
@test canonical_compare(
MOI.get(
backend(model),
POI.QuadraticObjectiveCoef(),
(index(x), index(y)),
),
MOI.ScalarAffineFunction{Int64}(
MOI.ScalarAffineTerm{Int64}[MOI.ScalarAffineTerm{Int64}(
2,
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 1),
)],
3,
),
)
@test objective_value(model) ≈ 32 / 3 atol = ATOL
@test value(x) ≈ 4 / 3 atol = ATOL
@test value(y) ≈ 4 / 3 atol = ATOL
MOI.set(model, POI.ParameterValue(), p, 2.0)
optimize!(model)
@test objective_value(model) ≈ 128 / 9 atol = ATOL
@test value(x) ≈ 4 / 3 atol = ATOL
@test value(y) ≈ 4 / 3 atol = ATOL
MOI.set(
backend(model),
POI.QuadraticObjectiveCoef(),
(index(x), index(y)),
nothing,
)
optimize!(model)
@test objective_value(model) ≈ 16 / 9 atol = ATOL
@test value(x) ≈ 4 / 3 atol = ATOL
@test value(y) ≈ 4 / 3 atol = ATOL
# now in reverse order
MOI.set(
backend(model),
POI.QuadraticObjectiveCoef(),
(index(y), index(x)),
2index(p) + 3,
)
optimize!(model)
@test objective_value(model) ≈ 128 / 9 atol = ATOL
@test value(x) ≈ 4 / 3 atol = ATOL
@test value(y) ≈ 4 / 3 atol = ATOL
MOI.set(
backend(model),
POI.QuadraticObjectiveCoef(),
(index(y), index(x)),
nothing,
)
optimize!(model)
@test objective_value(model) ≈ 16 / 9 atol = ATOL
@test value(x) ≈ 4 / 3 atol = ATOL
@test value(y) ≈ 4 / 3 atol = ATOL
return
end
function test_jump_direct_rsoc_constraints()
"""
Problem RSOC
min x
s.t. y ≥ 1/√2
x² + (y-p)² ≤ 1
in conic form:
min x
s.t. -1/√2 + y ∈ R₊
1 - t ∈ {0}
(t, x ,y-p) ∈ RSOC
opt
x* = 1/2*(max{1/√2,p}-p)^2
y* = max{1/√2,p}
"""
cached = MOI.Bridges.full_bridge_optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
SCS.Optimizer(),
),
Float64,
)
optimizer = POI.Optimizer(cached)
model = direct_model(optimizer)
MOI.set(model, MOI.Silent(), true)
@variable(model, x)
@variable(model, y)
@variable(model, t)
@variable(model, p in MOI.Parameter(0.0))
@constraint(model, [y - 1 / √2] in MOI.Nonnegatives(1))
@constraint(model, [t - 1] in MOI.Zeros(1))
@constraint(model, [t, x, y - p] in RotatedSecondOrderCone())
@objective(model, Min, 1.0 * x)
optimize!(model)
@test objective_value(model) ≈ 1 / 4 atol = ATOL
@test value(x) ≈ 1 / 4 atol = ATOL
@test value(y) ≈ 1 / √2 atol = ATOL
@test value(t) ≈ 1 atol = ATOL
MOI.set(model, POI.ParameterValue(), p, 2)
optimize!(model)
@test objective_value(model) ≈ 0.0 atol = ATOL
@test value(x) ≈ 0.0 atol = ATOL
@test value(y) ≈ 2 atol = ATOL
return
end
function test_jump_quadratic_interval()
optimizer = POI.Optimizer(GLPK.Optimizer())
# model = direct_model(optimizer)
model = Model(() -> optimizer)
MOI.set(model, MOI.Silent(), true)
@variable(model, x >= 0)
@variable(model, y >= 0)
@variable(model, p in MOI.Parameter(10.0))
@variable(model, q in MOI.Parameter(4.0))
@constraint(model, 0 <= x - p * y + q <= 0)
@objective(model, Min, x + y)
optimize!(model)
@test value(x) ≈ 0 atol = ATOL
@test value(y) ≈ 0.4 atol = ATOL
MOI.set(model, POI.ParameterValue(), p, 20.0)
optimize!(model)
@test value(x) ≈ 0 atol = ATOL
@test value(y) ≈ 0.2 atol = ATOL
MOI.set(model, POI.ParameterValue(), q, 6.0)
optimize!(model)
@test value(x) ≈ 0 atol = ATOL
@test value(y) ≈ 0.3 atol = ATOL
return
end
function test_jump_quadratic_interval_cached()
cached = MOI.Bridges.full_bridge_optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
GLPK.Optimizer(),
),
Float64,
)
optimizer = POI.Optimizer(cached)
model = direct_model(optimizer)
# optimizer = POI.Optimizer(GLPK.Optimizer())
# model = direct_model(optimizer)
# model = Model(() -> optimizer)
# MOI.set(model, MOI.Silent(), true)
@variable(model, x >= 0)
@variable(model, y >= 0)
@variable(model, p in MOI.Parameter(10.0))
@variable(model, q in MOI.Parameter(4.0))
@constraint(model, 0 <= x - p * y + q <= 0)
@objective(model, Min, x + y)
optimize!(model)
@test value(x) ≈ 0 atol = ATOL
@test value(y) ≈ 0.4 atol = ATOL
MOI.set(model, POI.ParameterValue(), p, 20.0)
optimize!(model)
@test value(x) ≈ 0 atol = ATOL
@test value(y) ≈ 0.2 atol = ATOL
MOI.set(model, POI.ParameterValue(), q, 6.0)
optimize!(model)
@test value(x) ≈ 0 atol = ATOL
@test value(y) ≈ 0.3 atol = ATOL
return
end
function test_affine_parametric_objective()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, p in MOI.Parameter(1.0))
@variable(model, 0 <= x <= 1)
@objective(model, Max, (p + 0.5) * x)
optimize!(model)
@test value(x) ≈ 1.0
@test objective_value(model) ≈ 1.5
@test value(objective_function(model)) ≈ 1.5
end
function test_abstract_optimizer_attributes()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
set_attribute(model, "tm_lim", 60 * 1000)
attr = MOI.RawOptimizerAttribute("tm_lim")
@test MOI.supports(unsafe_backend(model), attr)
@test get_attribute(model, "tm_lim") ≈ 60 * 1000
return
end
function test_get_quadratic_constraint()
model = Model(() -> POI.Optimizer(GLPK.Optimizer()))
@variable(model, x)
@variable(model, p in Parameter(2.0))
@constraint(model, c, p * x <= 10)
optimize!(model)
@test value(c) ≈ 2.0 * value(x)
return
end
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | code | 64564 | # Copyright (c) 2020: Tomás Gutierrez and contributors
#
# Use of this source code is governed by an MIT-style license that can be found
# in the LICENSE.md file or at https://opensource.org/licenses/MIT.
function test_basic_tests()
"""
min x₁ + y
x₁ + y = 2
x₁,x₂ ≥ 0
opt
x* = {2-y,0}
obj = 2
"""
optimizer = POI.Optimizer(GLPK.Optimizer())
MOI.set(optimizer, MOI.Silent(), true)
x = MOI.add_variables(optimizer, 2)
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
@test MOI.is_valid(optimizer, x[1])
@test MOI.is_valid(optimizer, y)
@test MOI.is_valid(optimizer, cy)
@test MOI.get(optimizer, POI.ListOfPureVariableIndices()) == x
@test MOI.get(optimizer, MOI.ListOfVariableIndices()) == [x[1], x[2], y]
z = MOI.VariableIndex(4)
cz = MOI.ConstraintIndex{MOI.VariableIndex,MOI.Parameter{Float64}}(4)
@test !MOI.is_valid(optimizer, z)
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(0.0))
end
@test_throws ErrorException("Cannot constrain a parameter") MOI.add_constraint(
optimizer,
y,
MOI.EqualTo(0.0),
)
@test_throws ErrorException("Variable not in the model") MOI.add_constraint(
optimizer,
z,
MOI.GreaterThan(0.0),
)
cons1 = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.([1.0, 1.0], [x[1], y]),
0.0,
)
c1 = MOI.add_constraint(optimizer, cons1, MOI.EqualTo(2.0))
obj_func = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.([1.0, 1.0], [x[1], y]),
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
obj_func,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(optimizer)
@test MOI.get(optimizer, MOI.ObjectiveValue()) == 2
@test MOI.get(optimizer, MOI.VariablePrimal(), x[1]) == 2
@test_throws ErrorException("Variable not in the model") MOI.get(
optimizer,
MOI.VariablePrimal(),
z,
)
@test MOI.get(optimizer, POI.ListOfPureVariableIndices()) ==
MOI.VariableIndex[MOI.VariableIndex(1), MOI.VariableIndex(2)]
@test MOI.get(optimizer, POI.ListOfParameterIndices()) ==
POI.ParameterIndex[POI.ParameterIndex(1)]
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(1.0))
@test_throws ErrorException("Parameter not in the model") MOI.set(
optimizer,
MOI.ConstraintSet(),
cz,
MOI.Parameter(1.0),
)
MOI.optimize!(optimizer)
@test MOI.get(optimizer, MOI.ObjectiveValue()) == 2
@test MOI.get(optimizer, MOI.VariablePrimal(), x[1]) == 1
"""
min x₁ + x₂
x₁ + y = 2
x₁,x₂ ≥ 0
opt
x* = {2-y,0}
obj = 2-y
"""
new_obj_func = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.([1.0, 1.0], [x[1], x[2]]),
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
new_obj_func,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(optimizer)
@test MOI.get(optimizer, MOI.ObjectiveValue()) == 1
@test MOI.supports(optimizer, MOI.VariableName(), MOI.VariableIndex)
@test MOI.get(optimizer, MOI.ObjectiveSense()) == MOI.MIN_SENSE
@test MOI.get(optimizer, MOI.VariableName(), x[1]) == ""
@test MOI.get(optimizer, MOI.ConstraintName(), c1) == ""
MOI.set(optimizer, MOI.ConstraintName(), c1, "ctr123")
@test MOI.get(optimizer, MOI.ConstraintName(), c1) == "ctr123"
return
end
function test_basic_special_cases_of_getters()
ipopt = Ipopt.Optimizer()
MOI.set(ipopt, MOI.RawOptimizerAttribute("print_level"), 0)
opt_in =
MOI.Utilities.CachingOptimizer(MOI.Utilities.Model{Float64}(), ipopt)
optimizer = POI.Optimizer(opt_in)
A = [2.0 1.0; 1.0 2.0]
a = [1.0, 1.0]
c = [2.0, 1.0]
x = MOI.add_variables(optimizer, 2)
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(0.0))
end
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
quad_terms = MOI.ScalarQuadraticTerm{Float64}[]
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 1], x[1], y))
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 2], x[1], y))
push!(quad_terms, MOI.ScalarQuadraticTerm(A[2, 2], x[2], y))
constraint_function = MOI.ScalarQuadraticFunction(
quad_terms,
MOI.ScalarAffineTerm.(a, [x[1], y]),
0.0,
)
cons_index =
MOI.add_constraint(optimizer, constraint_function, MOI.LessThan(25.0))
obj_func = MOI.ScalarQuadraticFunction(
[MOI.ScalarQuadraticTerm(A[2, 2], x[2], y)],
MOI.ScalarAffineTerm.(c, [x[1], x[2]]),
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
obj_func,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
@test MOI.get(optimizer, MOI.ObjectiveFunctionType()) ==
MOI.ScalarQuadraticFunction{Float64}
@test MOI.get(optimizer, MOI.NumberOfVariables()) == 3
return
end
function test_modification_multiple()
model = POI.Optimizer(MOI.Utilities.Model{Float64}())
x = MOI.add_variables(model, 3)
saf = MOI.ScalarAffineFunction(
[
MOI.ScalarAffineTerm(1.0, x[1]),
MOI.ScalarAffineTerm(1.0, x[2]),
MOI.ScalarAffineTerm(1.0, x[3]),
],
0.0,
)
ci1 = MOI.add_constraint(model, saf, MOI.LessThan(1.0))
ci2 = MOI.add_constraint(model, saf, MOI.LessThan(2.0))
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
saf,
)
fc1 = MOI.get(model, MOI.ConstraintFunction(), ci1)
@test MOI.coefficient.(fc1.terms) == [1.0, 1.0, 1.0]
fc2 = MOI.get(model, MOI.ConstraintFunction(), ci2)
@test MOI.coefficient.(fc2.terms) == [1.0, 1.0, 1.0]
obj = MOI.get(
model,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
)
@test MOI.coefficient.(obj.terms) == [1.0, 1.0, 1.0]
changes_cis = [
MOI.ScalarCoefficientChange(MOI.VariableIndex(1), 4.0)
MOI.ScalarCoefficientChange(MOI.VariableIndex(1), 0.5)
MOI.ScalarCoefficientChange(MOI.VariableIndex(3), 2.0)
]
MOI.modify(model, [ci1, ci2, ci2], changes_cis)
fc1 = MOI.get(model, MOI.ConstraintFunction(), ci1)
@test MOI.coefficient.(fc1.terms) == [4.0, 1.0, 1.0]
fc2 = MOI.get(model, MOI.ConstraintFunction(), ci2)
@test MOI.coefficient.(fc2.terms) == [0.5, 1.0, 2.0]
changes_obj = [
MOI.ScalarCoefficientChange(MOI.VariableIndex(1), 4.0)
MOI.ScalarCoefficientChange(MOI.VariableIndex(2), 10.0)
MOI.ScalarCoefficientChange(MOI.VariableIndex(3), 2.0)
]
MOI.modify(
model,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
changes_obj,
)
obj = MOI.get(
model,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
)
@test MOI.coefficient.(obj.terms) == [4.0, 10.0, 2.0]
return
end
function test_moi_glpk()
# TODO see why tests error or fail
MOI.Test.runtests(
MOI.Bridges.full_bridge_optimizer(
POI.Optimizer(GLPK.Optimizer()),
Float64,
),
MOI.Test.Config();
exclude = [
# GLPK returns INVALID_MODEL instead of INFEASIBLE
"test_constraint_ZeroOne_bounds_3",
],
)
return
end
function test_moi_ipopt()
model = MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
MOI.Bridges.full_bridge_optimizer(
POI.Optimizer(Ipopt.Optimizer()),
Float64,
),
)
MOI.set(model, MOI.Silent(), true)
# Without fixed_variable_treatment set, duals are not computed for variables
# that have lower_bound == upper_bound.
MOI.set(
model,
MOI.RawOptimizerAttribute("fixed_variable_treatment"),
"make_constraint",
)
MOI.Test.runtests(
model,
MOI.Test.Config(
atol = 1e-4,
rtol = 1e-4,
optimal_status = MOI.LOCALLY_SOLVED,
exclude = Any[
MOI.ConstraintBasisStatus,
MOI.DualObjectiveValue,
MOI.ObjectiveBound,
],
);
exclude = String[
# Tests purposefully excluded:
# - Upstream: ZeroBridge does not support ConstraintDual
"test_conic_linear_VectorOfVariables_2",
# - Excluded because this test is optional
"test_model_ScalarFunctionConstantNotZero",
# - Excluded because Ipopt returns NORM_LIMIT instead of
# DUAL_INFEASIBLE
"test_solve_TerminationStatus_DUAL_INFEASIBLE",
# - Excluded because Ipopt returns INVALID_MODEL instead of
# LOCALLY_SOLVED
"test_linear_VectorAffineFunction_empty_row",
# - Excluded because Ipopt returns LOCALLY_INFEASIBLE instead of
# INFEASIBLE
"INFEASIBLE",
"test_solve_DualStatus_INFEASIBILITY_CERTIFICATE_",
],
)
return
end
function test_moi_ListOfConstraintTypesPresent()
N = 10
ipopt = Ipopt.Optimizer()
model = POI.Optimizer(ipopt)
MOI.set(model, MOI.Silent(), true)
x = MOI.add_variables(model, N / 2)
y =
first.(
MOI.add_constrained_variable.(
model,
MOI.Parameter.(ones(Int(N / 2))),
),
)
MOI.add_constraint(
model,
MOI.ScalarQuadraticFunction(
MOI.ScalarQuadraticTerm.(1.0, x, y),
MOI.ScalarAffineTerm{Float64}[],
0.0,
),
MOI.GreaterThan(1.0),
)
result = MOI.get(model, MOI.ListOfConstraintTypesPresent())
expected = [
(MOI.ScalarQuadraticFunction{Float64}, MOI.GreaterThan{Float64}),
(MOI.VariableIndex, MOI.Parameter{Float64}),
]
@test Set(result) == Set(expected)
@test length(result) == length(expected)
return
end
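# Small production-planning LP: the parameter w enters the objective while y
# and z enter the constraint left-hand sides; primals, duals, and objective
# values are re-checked after each parameter update.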
function test_production_problem_example()
optimizer = POI.Optimizer(GLPK.Optimizer())
c = [4.0, 3.0]
A1 = [2.0, 1.0, 1.0]
A2 = [1.0, 2.0, 1.0]
b1 = 4.0
b2 = 4.0
x = MOI.add_variables(optimizer, length(c))
@test typeof(x[1]) == MOI.VariableIndex
w, cw = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
z, cz = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
@test MOI.get(optimizer, MOI.VariablePrimal(), w) == 0
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(0.0))
end
cons1 = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.(A1, [x[1], x[2], y]),
0.0,
)
MOI.add_constraint(optimizer, cons1, MOI.LessThan(b1))
cons2 = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.(A2, [x[1], x[2], z]),
0.0,
)
MOI.add_constraint(optimizer, cons2, MOI.LessThan(b2))
@test cons1.terms[1].coefficient == 2
@test POI._parameter_in_model(optimizer, cons2.terms[3].variable)
obj_func = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.([c[1], c[2], 3.0], [x[1], x[2], w]),
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
obj_func,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
MOI.optimize!(optimizer)
MOI.get(optimizer, MOI.TerminationStatus())
MOI.get(optimizer, MOI.PrimalStatus())
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 28 / 3, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 4 / 3, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), 4 / 3, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cy), 5 / 3, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cz), 2 / 3, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cw), -3.0, atol = ATOL)
MOI.get(optimizer, MOI.VariablePrimal(), w)
MOI.get(optimizer, MOI.VariablePrimal(), y)
MOI.get(optimizer, MOI.VariablePrimal(), z)
MOI.set(optimizer, MOI.ConstraintSet(), cw, MOI.Parameter(2.0))
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(1.0))
MOI.set(optimizer, MOI.ConstraintSet(), cz, MOI.Parameter(1.0))
MOI.optimize!(optimizer)
@test MOI.get(optimizer, MOI.VariablePrimal(), w) == 2.0
@test MOI.get(optimizer, MOI.VariablePrimal(), y) == 1.0
@test MOI.get(optimizer, MOI.VariablePrimal(), z) == 1.0
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 13.0, atol = ATOL)
@test MOI.get.(optimizer, MOI.VariablePrimal(), x) == [1.0, 1.0]
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cy), 5 / 3, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cz), 2 / 3, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cw), -3.0, atol = ATOL)
MOI.set(optimizer, MOI.ConstraintSet(), cw, MOI.Parameter(0.0))
MOI.optimize!(optimizer)
@test MOI.get(optimizer, MOI.VariablePrimal(), w) == 0.0
@test MOI.get(optimizer, MOI.VariablePrimal(), y) == 1.0
@test MOI.get(optimizer, MOI.VariablePrimal(), z) == 1.0
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 7, atol = ATOL)
@test MOI.get.(optimizer, MOI.VariablePrimal(), x) == [1.0, 1.0]
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(-5.0))
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 12.0, atol = ATOL)
@test MOI.get.(optimizer, MOI.VariablePrimal(), x) == [3.0, 0.0]
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cy), 0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cz), 4, atol = ATOL)
return
end
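# Variant of the production problem where the parameters carry non-unit
# multipliers (3.0 * y and 0.5 * z), so the chain rule between constraint
# duals and parameter duals can be verified.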
function test_production_problem_example_duals()
optimizer = POI.Optimizer(GLPK.Optimizer())
c = [4.0, 3.0]
A1 = [2.0, 1.0, 3.0]
A2 = [1.0, 2.0, 0.5]
b1 = 4.0
b2 = 4.0
x = MOI.add_variables(optimizer, length(c))
@test typeof(x[1]) == MOI.VariableIndex
w, cw = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
z, cz = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
@test MOI.get(optimizer, MOI.VariablePrimal(), w) == 0
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(0.0))
end
cons1 = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.(A1, [x[1], x[2], y]),
0.0,
)
ci1 = MOI.add_constraint(optimizer, cons1, MOI.LessThan(b1))
cons2 = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.(A2, [x[1], x[2], z]),
0.0,
)
ci2 = MOI.add_constraint(optimizer, cons2, MOI.LessThan(b2))
@test cons1.terms[1].coefficient == 2
@test POI._parameter_in_model(optimizer, cons2.terms[3].variable)
obj_func = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.([c[1], c[2], 2.0], [x[1], x[2], w]),
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
obj_func,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
MOI.optimize!(optimizer)
MOI.get(optimizer, MOI.TerminationStatus())
MOI.get(optimizer, MOI.PrimalStatus())
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 28 / 3, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 4 / 3, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), 4 / 3, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cy), 5, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cz), 2 / 6, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cw), -2.0, atol = ATOL)
@test ≈(
MOI.get(optimizer, MOI.ConstraintDual(), cy),
-3 * MOI.get(optimizer, MOI.ConstraintDual(), ci1),
atol = 1e-4,
)
@test ≈(
MOI.get(optimizer, MOI.ConstraintDual(), cz),
-0.5 * MOI.get(optimizer, MOI.ConstraintDual(), ci2),
atol = 1e-4,
)
MOI.set(optimizer, MOI.ConstraintSet(), cw, MOI.Parameter(2.0))
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(1.0))
MOI.set(optimizer, MOI.ConstraintSet(), cz, MOI.Parameter(1.0))
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 7.0, atol = ATOL)
@test MOI.get.(optimizer, MOI.VariablePrimal(), x) == [0.0, 1.0]
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cy), 9.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cz), 0.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cw), -2.0, atol = ATOL)
MOI.set(optimizer, MOI.ConstraintSet(), cw, MOI.Parameter(0.0))
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 3.0, atol = ATOL)
@test MOI.get.(optimizer, MOI.VariablePrimal(), x) == [0.0, 1.0]
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cy), 9.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cz), 0.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cw), -2.0, atol = ATOL)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(-5.0))
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 14.0, atol = ATOL)
@test ≈(
MOI.get.(optimizer, MOI.VariablePrimal(), x),
[3.5, 0.0],
atol = ATOL,
)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cy), 0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cz), 2, atol = ATOL)
return
end
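# Same model as above, but the first constraint uses an Interval set and the
# inner solver sits behind a cached, bridged optimizer.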
function test_production_problem_example_parameters_for_duals_and_intervals()
cached = MOI.Bridges.full_bridge_optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
GLPK.Optimizer(),
),
Float64,
)
optimizer = POI.Optimizer(cached)
c = [4.0, 3.0]
A1 = [2.0, 1.0, 3.0]
A2 = [1.0, 2.0, 0.5]
b1 = 4.0
b2 = 4.0
x = MOI.add_variables(optimizer, length(c))
@test typeof(x[1]) == MOI.VariableIndex
w, cw = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
z, cz = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
@test MOI.get(optimizer, MOI.VariablePrimal(), w) == 0
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(0.0))
end
cons1 = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.(A1, [x[1], x[2], y]),
0.0,
)
ci1 = MOI.add_constraint(optimizer, cons1, MOI.Interval(-Inf, b1))
cons2 = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.(A2, [x[1], x[2], z]),
0.0,
)
ci2 = MOI.add_constraint(optimizer, cons2, MOI.LessThan(b2))
@test cons1.terms[1].coefficient == 2
@test POI._parameter_in_model(optimizer, cons2.terms[3].variable)
obj_func = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.([c[1], c[2], 2.0], [x[1], x[2], w]),
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
obj_func,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
MOI.optimize!(optimizer)
MOI.get(optimizer, MOI.TerminationStatus())
MOI.get(optimizer, MOI.PrimalStatus())
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 28 / 3, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 4 / 3, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), 4 / 3, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cy), 5, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cz), 2 / 6, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cw), -2.0, atol = ATOL)
@test ≈(
MOI.get(optimizer, MOI.ConstraintDual(), cy),
-3 * MOI.get(optimizer, MOI.ConstraintDual(), ci1),
atol = 1e-4,
)
@test ≈(
MOI.get(optimizer, MOI.ConstraintDual(), cz),
-0.5 * MOI.get(optimizer, MOI.ConstraintDual(), ci2),
atol = 1e-4,
)
MOI.set(optimizer, MOI.ConstraintSet(), cw, MOI.Parameter(2.0))
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(1.0))
MOI.set(optimizer, MOI.ConstraintSet(), cz, MOI.Parameter(1.0))
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 7.0, atol = ATOL)
@test MOI.get.(optimizer, MOI.VariablePrimal(), x) == [0.0, 1.0]
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cy), 9.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cz), 0.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cw), -2.0, atol = ATOL)
MOI.set(optimizer, MOI.ConstraintSet(), cw, MOI.Parameter(0.0))
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 3.0, atol = ATOL)
@test MOI.get.(optimizer, MOI.VariablePrimal(), x) == [0.0, 1.0]
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cy), 9.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cz), 0.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cw), -2.0, atol = ATOL)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(-5.0))
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 14.0, atol = ATOL)
@test ≈(
MOI.get.(optimizer, MOI.VariablePrimal(), x),
[3.5, 0.0],
atol = ATOL,
)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cy), 0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), cz), 2, atol = ATOL)
return
end
function test_vector_parameter_affine_nonnegatives()
"""
min x + y
x - t + 1 >= 0
y - t + 2 >= 0
opt
x* = t-1
y* = t-2
obj = 2*t-3
"""
cached = MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
SCS.Optimizer(),
)
model = POI.Optimizer(cached)
MOI.set(model, MOI.Silent(), true)
x = MOI.add_variable(model)
y = MOI.add_variable(model)
t, ct = MOI.add_constrained_variable(model, MOI.Parameter(5.0))
A = [1.0 0 -1; 0 1 -1]
b = [1.0; 2]
terms =
MOI.VectorAffineTerm.(
1:2,
MOI.ScalarAffineTerm.(A, reshape([x, y, t], 1, 3)),
)
f = MOI.VectorAffineFunction(vec(terms), b)
    set = MOI.Nonnegatives(2)
    cnn = MOI.add_constraint(model, f, set)
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.([1.0, 1.0], [y, x]),
0.0,
),
)
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(model)
@test MOI.get(model, MOI.PrimalStatus()) == MOI.FEASIBLE_POINT
@test MOI.get(model, MOI.DualStatus()) == MOI.FEASIBLE_POINT
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ 4 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), y) ≈ 3 atol = ATOL
@test MOI.get(model, MOI.ConstraintPrimal(), cnn) ≈ [0.0, 0.0] atol = ATOL
@test MOI.get(model, MOI.ObjectiveValue()) ≈ 7 atol = ATOL
@test MOI.get(model, MOI.DualObjectiveValue()) ≈ 7 atol = ATOL
@test MOI.get(model, MOI.ConstraintDual(), cnn) ≈ [1.0, 1.0] atol = ATOL
MOI.set(model, POI.ParameterValue(), t, 6)
MOI.optimize!(model)
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ 5 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), y) ≈ 4 atol = ATOL
@test MOI.get(model, MOI.ObjectiveValue()) ≈ 9 atol = ATOL
return
end
function test_vector_parameter_affine_nonpositives()
"""
min x + y
- x + t - 1 ≤ 0
- y + t - 2 ≤ 0
opt
x* = t-1
y* = t-2
obj = 2*t-3
"""
cached = MOI.Bridges.full_bridge_optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
SCS.Optimizer(),
),
Float64,
)
model = POI.Optimizer(cached)
MOI.set(model, MOI.Silent(), true)
x = MOI.add_variable(model)
y = MOI.add_variable(model)
t, ct = MOI.add_constrained_variable(model, MOI.Parameter(5.0))
A = [-1.0 0 1; 0 -1 1]
b = [-1.0; -2]
terms =
MOI.VectorAffineTerm.(
1:2,
MOI.ScalarAffineTerm.(A, reshape([x, y, t], 1, 3)),
)
f = MOI.VectorAffineFunction(vec(terms), b)
    set = MOI.Nonpositives(2)
    cnn = MOI.add_constraint(model, f, set)
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.([1.0, 1.0], [y, x]),
0.0,
),
)
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(model)
@test MOI.get(model, MOI.PrimalStatus()) == MOI.FEASIBLE_POINT
@test MOI.get(model, MOI.DualStatus()) == MOI.FEASIBLE_POINT
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ 4 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), y) ≈ 3 atol = ATOL
@test MOI.get(model, MOI.ConstraintPrimal(), cnn) ≈ [0.0, 0.0] atol = ATOL
@test MOI.get(model, MOI.ObjectiveValue()) ≈ 7 atol = ATOL
@test MOI.get(model, MOI.DualObjectiveValue()) ≈ 7 atol = ATOL
@test MOI.get(model, MOI.ConstraintDual(), cnn) ≈ [-1.0, -1.0] atol = ATOL
MOI.set(model, POI.ParameterValue(), t, 6)
MOI.optimize!(model)
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ 5 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), y) ≈ 4 atol = ATOL
@test MOI.get(model, MOI.ObjectiveValue()) ≈ 9 atol = ATOL
return
end
function test_vector_soc_parameters()
"""
Problem SOC2 from MOI
min x
s.t. y ≥ 1/√2
(x-p)² + y² ≤ 1
in conic form:
min x
s.t. -1/√2 + y ∈ R₊
1 - t ∈ {0}
(t, x-p ,y) ∈ SOC₃
opt
x* = p - 1/√2
y* = 1/√2
"""
cached = MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
SCS.Optimizer(),
)
model = POI.Optimizer(cached)
MOI.set(model, MOI.Silent(), true)
x, y, t = MOI.add_variables(model, 3)
p, cp = MOI.add_constrained_variable(model, MOI.Parameter(0.0))
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(1.0, x)], 0.0),
)
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
cnon = MOI.add_constraint(
model,
MOI.VectorAffineFunction(
[MOI.VectorAffineTerm(1, MOI.ScalarAffineTerm(1.0, y))],
[-1 / √2],
),
MOI.Nonnegatives(1),
)
ceq = MOI.add_constraint(
model,
MOI.VectorAffineFunction(
[MOI.VectorAffineTerm(1, MOI.ScalarAffineTerm(-1.0, t))],
[1.0],
),
MOI.Zeros(1),
)
A = [
1.0 0.0 0.0 0.0
0.0 1.0 0.0 -1
0.0 0.0 1.0 0.0
]
f = MOI.VectorAffineFunction(
vec(
MOI.VectorAffineTerm.(
1:3,
MOI.ScalarAffineTerm.(A, reshape([t, x, y, p], 1, 4)),
),
),
zeros(3),
)
csoc = MOI.add_constraint(model, f, MOI.SecondOrderCone(3))
f_error = MOI.VectorOfVariables([t, p, y])
@test_throws ErrorException MOI.add_constraint(
model,
f_error,
MOI.SecondOrderCone(3),
)
MOI.optimize!(model)
@test MOI.get(model, MOI.ObjectiveValue()) ≈ -1 / √2 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ -1 / √2 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), y) ≈ 1 / √2 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), t) ≈ 1 atol = ATOL
MOI.set(model, POI.ParameterValue(), p, 1)
MOI.optimize!(model)
@test MOI.get(model, MOI.ObjectiveValue()) ≈ 1 - 1 / √2 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ 1 - 1 / √2 atol = ATOL
return
end
# TODO(odow): What is this doing here!!!
function test_vector_soc_no_parameters()
"""
Problem SOC2 from MOI
min x
s.t. y ≥ 1/√2
x² + y² ≤ 1
in conic form:
min x
s.t. -1/√2 + y ∈ R₊
1 - t ∈ {0}
(t, x ,y) ∈ SOC₃
opt
x* = 1/√2
y* = 1/√2
"""
cached = MOI.Bridges.full_bridge_optimizer(
MOI.Utilities.CachingOptimizer(
MOI.Utilities.UniversalFallback(MOI.Utilities.Model{Float64}()),
SCS.Optimizer(),
),
Float64,
)
model = POI.Optimizer(cached)
MOI.set(model, MOI.Silent(), true)
x, y, t = MOI.add_variables(model, 3)
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(1.0, x)], 0.0),
)
MOI.set(model, MOI.ObjectiveSense(), MOI.MIN_SENSE)
cnon = MOI.add_constraint(
model,
MOI.VectorAffineFunction(
[MOI.VectorAffineTerm(1, MOI.ScalarAffineTerm(1.0, y))],
[-1 / √2],
),
MOI.Nonnegatives(1),
)
ceq = MOI.add_constraint(
model,
MOI.VectorAffineFunction(
[MOI.VectorAffineTerm(1, MOI.ScalarAffineTerm(-1.0, t))],
[1.0],
),
MOI.Zeros(1),
)
f = MOI.VectorOfVariables([t, x, y])
csoc = MOI.add_constraint(model, f, MOI.SecondOrderCone(3))
MOI.optimize!(model)
@test MOI.get(model, MOI.ObjectiveValue()) ≈ -1 / √2 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ -1 / √2 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), y) ≈ 1 / √2 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), t) ≈ 1 atol = ATOL
return
end
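# Baseline QP without parameters: a sanity check of the quadratic pipeline.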
function test_qp_no_parameters_1()
ipopt = Ipopt.Optimizer()
MOI.set(ipopt, MOI.RawOptimizerAttribute("print_level"), 0)
opt_in =
MOI.Utilities.CachingOptimizer(MOI.Utilities.Model{Float64}(), ipopt)
optimizer = POI.Optimizer(opt_in)
Q = [4.0 1.0; 1.0 2.0]
q = [1.0; 1.0]
G = [1.0 1.0; 1.0 0.0; 0.0 1.0; -1.0 -1.0; -1.0 0.0; 0.0 -1.0]
h = [1.0; 0.7; 0.7; -1.0; 0.0; 0.0]
x = MOI.add_variables(optimizer, 2)
quad_terms = MOI.ScalarQuadraticTerm{Float64}[]
for i in 1:2
for j in i:2 # indexes (i,j), (j,i) will be mirrored. specify only one kind
push!(quad_terms, MOI.ScalarQuadraticTerm(Q[i, j], x[i], x[j]))
end
end
objective_function = MOI.ScalarQuadraticFunction(
quad_terms,
MOI.ScalarAffineTerm.(q, x),
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
objective_function,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
for i in 1:6
MOI.add_constraint(
optimizer,
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(G[i, :], x), 0.0),
MOI.LessThan(h[i]),
)
end
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 1.88, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 0.3, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), 0.7, atol = ATOL)
return
end
function test_qp_no_parameters_2()
ipopt = Ipopt.Optimizer()
MOI.set(ipopt, MOI.RawOptimizerAttribute("print_level"), 0)
opt_in =
MOI.Utilities.CachingOptimizer(MOI.Utilities.Model{Float64}(), ipopt)
optimizer = POI.Optimizer(opt_in)
A = [0.0 1.0; 1.0 0.0]
a = [0.0, 0.0]
c = [2.0, 1.0]
x = MOI.add_variables(optimizer, 2)
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(1.0))
MOI.add_constraint(optimizer, x_i, MOI.LessThan(5.0))
end
quad_terms = MOI.ScalarQuadraticTerm{Float64}[]
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 2], x[1], x[2]))
constraint_function = MOI.ScalarQuadraticFunction(
[MOI.ScalarQuadraticTerm(A[1, 2], x[1], x[2])],
MOI.ScalarAffineTerm.(a, x),
0.0,
)
MOI.add_constraint(optimizer, constraint_function, MOI.LessThan(9.0))
obj_func =
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, [x[1], x[2]]), 0.0)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
obj_func,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 11.8, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 5.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), 1.8, atol = ATOL)
return
end
function test_qp_parameter_in_affine_constraint()
ipopt = Ipopt.Optimizer()
MOI.set(ipopt, MOI.RawOptimizerAttribute("print_level"), 0)
opt_in =
MOI.Utilities.CachingOptimizer(MOI.Utilities.Model{Float64}(), ipopt)
optimizer = POI.Optimizer(opt_in)
Q = [3.0 2.0; 2.0 1.0]
q = [1.0, 6.0]
G = [2.0 3.0 1.0; 1.0 1.0 1.0]
h = [4.0; 3.0]
x = MOI.add_variables(optimizer, 2)
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
quad_terms = MOI.ScalarQuadraticTerm{Float64}[]
for i in 1:2
for j in i:2 # indexes (i,j), (j,i) will be mirrored. specify only one kind
push!(quad_terms, MOI.ScalarQuadraticTerm(Q[i, j], x[i], x[j]))
end
end
objective_function = MOI.ScalarQuadraticFunction(
quad_terms,
MOI.ScalarAffineTerm.(q, x),
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
objective_function,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
for i in 1:2
MOI.add_constraint(
optimizer,
MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.(G[i, :], [x[1], x[2], y]),
0.0,
),
MOI.GreaterThan(h[i]),
)
end
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 12.5, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 5.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), -2.0, atol = ATOL)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(1.0))
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 5.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 3.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), -1.0, atol = ATOL)
return
end
function test_qp_parameter_in_quadratic_constraint()
ipopt = Ipopt.Optimizer()
MOI.set(ipopt, MOI.RawOptimizerAttribute("print_level"), 0)
opt_in =
MOI.Utilities.CachingOptimizer(MOI.Utilities.Model{Float64}(), ipopt)
optimizer = POI.Optimizer(opt_in)
Q = [3.0 2.0; 2.0 1.0]
q = [1.0, 6.0, 1.0]
G = [2.0 3.0 1.0 0.0; 1.0 1.0 0.0 1.0]
h = [4.0; 3.0]
x = MOI.add_variables(optimizer, 2)
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
w, cw = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
quad_terms = MOI.ScalarQuadraticTerm{Float64}[]
for i in 1:2
for j in i:2 # indexes (i,j), (j,i) will be mirrored. specify only one kind
push!(quad_terms, MOI.ScalarQuadraticTerm(Q[i, j], x[i], x[j]))
end
end
objective_function = MOI.ScalarQuadraticFunction(
quad_terms,
MOI.ScalarAffineTerm.(q, [x[1], x[2], y]),
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
objective_function,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.add_constraint(
optimizer,
MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.(G[1, :], [x[1], x[2], y, w]),
0.0,
),
MOI.GreaterThan(h[1]),
)
MOI.add_constraint(
optimizer,
MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.(G[2, :], [x[1], x[2], y, w]),
0.0,
),
MOI.GreaterThan(h[2]),
)
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 12.5, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 5.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), -2.0, atol = ATOL)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(1.0))
MOI.set(optimizer, MOI.ConstraintSet(), cw, MOI.Parameter(2.0))
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 5.7142, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 2.1428, atol = ATOL)
@test ≈(
MOI.get(optimizer, MOI.VariablePrimal(), x[2]),
-0.4285,
atol = ATOL,
)
return
end
function test_qp_variable_times_variable_plus_parameter()
ipopt = Ipopt.Optimizer()
MOI.set(ipopt, MOI.RawOptimizerAttribute("print_level"), 0)
opt_in =
MOI.Utilities.CachingOptimizer(MOI.Utilities.Model{Float64}(), ipopt)
optimizer = POI.Optimizer(opt_in)
A = [2.0 1.0; 1.0 2.0]
a = [1.0, 1.0]
c = [2.0, 1.0]
x = MOI.add_variables(optimizer, 2)
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(0.0))
end
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
quad_terms = MOI.ScalarQuadraticTerm{Float64}[]
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 1], x[1], x[1]))
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 2], x[1], x[2]))
push!(quad_terms, MOI.ScalarQuadraticTerm(A[2, 2], x[2], x[2]))
constraint_function = MOI.ScalarQuadraticFunction(
quad_terms,
MOI.ScalarAffineTerm.(a, [x[1], y]),
0.0,
)
cons_index =
MOI.add_constraint(optimizer, constraint_function, MOI.LessThan(25.0))
obj_func =
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, [x[1], x[2]]), 0.0)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
obj_func,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 9.0664, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 4.3665, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), 1 / 3, atol = ATOL)
@test ≈(
MOI.get(optimizer, MOI.ConstraintDual(), cy),
-MOI.get(optimizer, MOI.ConstraintDual(), cons_index),
atol = ATOL,
)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(2.0))
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 8.6609, atol = ATOL)
return
end
function test_qp_variable_times_variable_plus_parameter_duals()
ipopt = Ipopt.Optimizer()
MOI.set(ipopt, MOI.RawOptimizerAttribute("print_level"), 0)
opt_in =
MOI.Utilities.CachingOptimizer(MOI.Utilities.Model{Float64}(), ipopt)
optimizer = POI.Optimizer(opt_in)
A = [2.0 1.0; 1.0 2.0]
a = [1.0, 2.0]
c = [2.0, 1.0]
x = MOI.add_variables(optimizer, 2)
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(0.0))
end
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
quad_terms = MOI.ScalarQuadraticTerm{Float64}[]
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 1], x[1], x[1]))
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 2], x[1], x[2]))
push!(quad_terms, MOI.ScalarQuadraticTerm(A[2, 2], x[2], x[2]))
constraint_function = MOI.ScalarQuadraticFunction(
quad_terms,
MOI.ScalarAffineTerm.(a, [x[1], y]),
0.0,
)
cons_index =
MOI.add_constraint(optimizer, constraint_function, MOI.LessThan(25.0))
obj_func =
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, [x[1], x[2]]), 0.0)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
obj_func,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 9.0664, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 4.3665, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), 1 / 3, atol = ATOL)
@test ≈(
MOI.get(optimizer, MOI.ConstraintDual(), cy),
-2 * MOI.get(optimizer, MOI.ConstraintDual(), cons_index),
atol = ATOL,
)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(2.0))
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 8.2376, atol = ATOL)
return
end
function test_qp_parameter_times_variable()
ipopt = Ipopt.Optimizer()
MOI.set(ipopt, MOI.RawOptimizerAttribute("print_level"), 0)
opt_in =
MOI.Utilities.CachingOptimizer(MOI.Utilities.Model{Float64}(), ipopt)
optimizer = POI.Optimizer(opt_in)
A = [2.0 1.0; 1.0 2.0]
a = [1.0, 1.0]
c = [2.0, 1.0]
x = MOI.add_variables(optimizer, 2)
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(0.0))
end
MOI.add_constraint(optimizer, x[1], MOI.LessThan(20.0))
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
quad_terms = MOI.ScalarQuadraticTerm{Float64}[]
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 1], x[1], x[1]))
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 2], x[1], y))
push!(quad_terms, MOI.ScalarQuadraticTerm(A[2, 2], y, y))
constraint_function = MOI.ScalarQuadraticFunction(
quad_terms,
MOI.ScalarAffineTerm.(a, x),
0.0,
)
MOI.add_constraint(optimizer, constraint_function, MOI.LessThan(30.0))
obj_func =
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, [x[1], x[2]]), 0.0)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
obj_func,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 30.25, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 0.5, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), 29.25, atol = ATOL)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(2.0))
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 22.0, atol = ATOL)
return
end
function test_qp_variable_times_parameter()
ipopt = Ipopt.Optimizer()
MOI.set(ipopt, MOI.RawOptimizerAttribute("print_level"), 0)
opt_in =
MOI.Utilities.CachingOptimizer(MOI.Utilities.Model{Float64}(), ipopt)
optimizer = POI.Optimizer(opt_in)
A = [2.0 1.0; 1.0 2.0]
a = [1.0, 1.0]
c = [2.0, 1.0]
x = MOI.add_variables(optimizer, 2)
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(0.0))
end
MOI.add_constraint(optimizer, x[1], MOI.LessThan(20.0))
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
quad_terms = MOI.ScalarQuadraticTerm{Float64}[]
push!(quad_terms, MOI.ScalarQuadraticTerm(A[2, 2], y, y))
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 2], y, x[1]))
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 1], x[1], x[1]))
constraint_function = MOI.ScalarQuadraticFunction(
quad_terms,
MOI.ScalarAffineTerm.(a, x),
0.0,
)
MOI.add_constraint(optimizer, constraint_function, MOI.LessThan(30.0))
obj_func =
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, [x[1], x[2]]), 0.0)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
obj_func,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 30.25, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 0.5, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), 29.25, atol = ATOL)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(2.0))
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ObjectiveValue()), 22.0, atol = ATOL)
return
end
function test_qp_parameter_times_parameter()
ipopt = Ipopt.Optimizer()
MOI.set(ipopt, MOI.RawOptimizerAttribute("print_level"), 0)
opt_in =
MOI.Utilities.CachingOptimizer(MOI.Utilities.Model{Float64}(), ipopt)
optimizer = POI.Optimizer(opt_in)
A = [2.0 1.0; 1.0 2.0]
a = [1.0, 1.0]
c = [2.0, 1.0]
x = MOI.add_variables(optimizer, 2)
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(0.0))
end
MOI.add_constraint(optimizer, x[1], MOI.LessThan(20.0))
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
z, cz = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
quad_terms = MOI.ScalarQuadraticTerm{Float64}[]
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 1], y, y))
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 2], y, z))
push!(quad_terms, MOI.ScalarQuadraticTerm(A[2, 2], z, z))
constraint_function = MOI.ScalarQuadraticFunction(
quad_terms,
MOI.ScalarAffineTerm.(a, x),
0.0,
)
MOI.add_constraint(optimizer, constraint_function, MOI.LessThan(30.0))
obj_func =
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(c, [x[1], x[2]]), 0.0)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
obj_func,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
MOI.optimize!(optimizer)
@test isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 50.0, atol = ATOL)
@test isapprox(
MOI.get(optimizer, MOI.VariablePrimal(), x[1]),
20.0,
atol = ATOL,
)
@test isapprox(
MOI.get(optimizer, MOI.VariablePrimal(), x[2]),
10.0,
atol = ATOL,
)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(2.0))
MOI.optimize!(optimizer)
@test isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 42.0, atol = ATOL)
MOI.set(optimizer, MOI.ConstraintSet(), cz, MOI.Parameter(1.0))
MOI.optimize!(optimizer)
@test isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 36.0, atol = ATOL)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(-1.0))
MOI.set(optimizer, MOI.ConstraintSet(), cz, MOI.Parameter(-1.0))
MOI.optimize!(optimizer)
@test isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 45.0, atol = ATOL)
return
end
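# Quadratic objective with parameter products (including a y * y term); the
# post-update assertions are currently disabled (see the commented @test
# lines below).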
function test_qp_quadratic_constant()
ipopt = Ipopt.Optimizer()
MOI.set(ipopt, MOI.RawOptimizerAttribute("print_level"), 0)
opt_in =
MOI.Utilities.CachingOptimizer(MOI.Utilities.Model{Float64}(), ipopt)
optimizer = POI.Optimizer(opt_in)
Q = [3.0 2.0 0.0; 2.0 1.0 0.0; 0.0 0.0 1.0]
q = [1.0, 6.0, 0.0]
G = [2.0 3.0; 1.0 1.0]
h = [4.0; 3.0]
x = MOI.add_variables(optimizer, 2)
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
quad_terms = MOI.ScalarQuadraticTerm{Float64}[]
for i in 1:2
for j in i:2 # indexes (i,j), (j,i) will be mirrored. specify only one kind
push!(quad_terms, MOI.ScalarQuadraticTerm(Q[i, j], x[i], x[j]))
end
end
push!(quad_terms, MOI.ScalarQuadraticTerm(Q[1, 3], x[1], y))
push!(quad_terms, MOI.ScalarQuadraticTerm(Q[2, 3], x[2], y))
push!(quad_terms, MOI.ScalarQuadraticTerm(Q[3, 3], y, y))
objective_function = MOI.ScalarQuadraticFunction(
quad_terms,
MOI.ScalarAffineTerm.(q, [x[1], x[2], y]),
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
objective_function,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.add_constraint(
optimizer,
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(G[1, :], x), 0.0),
MOI.GreaterThan(h[1]),
)
MOI.add_constraint(
optimizer,
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.(G[2, :], x), 0.0),
MOI.GreaterThan(h[2]),
)
MOI.optimize!(optimizer)
@test isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 12.5, atol = ATOL)
@test isapprox.(
MOI.get(optimizer, MOI.VariablePrimal(), x[1]),
5.0,
atol = ATOL,
)
@test isapprox.(
MOI.get(optimizer, MOI.VariablePrimal(), x[2]),
-2.0,
atol = ATOL,
)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(1.0))
MOI.optimize!(optimizer)
# @test isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 5.7142, atol = ATOL)
# @test isapprox.(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 2.1428, atol = ATOL)
# @test isapprox.(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), -0.4285, atol = ATOL)
return
end
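# Parameter-times-parameter product in the objective: duals of multiplicative
# parameters are unavailable and must throw, as must setting the value of a
# parameter index that does not exist in the model.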
function test_qp_objective_parameter_times_parameter()
ipopt = Ipopt.Optimizer()
MOI.set(ipopt, MOI.RawOptimizerAttribute("print_level"), 0)
opt_in =
MOI.Utilities.CachingOptimizer(MOI.Utilities.Model{Float64}(), ipopt)
optimizer = POI.Optimizer(opt_in)
a = [1.0, 1.0]
x = MOI.add_variables(optimizer, 2)
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(0.0))
end
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(1.0))
z, cz = MOI.add_constrained_variable(optimizer, MOI.Parameter(1.0))
quad_terms = MOI.ScalarQuadraticTerm{Float64}[]
push!(quad_terms, MOI.ScalarQuadraticTerm(1.0, y, z))
objective_function = MOI.ScalarQuadraticFunction(
quad_terms,
MOI.ScalarAffineTerm.(a, x),
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
objective_function,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(optimizer)
@test isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 1.0, atol = ATOL)
@test isapprox(
MOI.get(optimizer, MOI.VariablePrimal(), x[1]),
0.0,
atol = ATOL,
)
err =
ErrorException("Cannot compute the dual of a multiplicative parameter")
@test_throws err MOI.get(optimizer, MOI.ConstraintDual(), cy)
@test_throws err MOI.get(optimizer, MOI.ConstraintDual(), cz)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(2.0))
MOI.optimize!(optimizer)
@test isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 2.0, atol = ATOL)
MOI.set(optimizer, MOI.ConstraintSet(), cz, MOI.Parameter(3.0))
MOI.optimize!(optimizer)
@test isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 6.0, atol = ATOL)
MOI.set(optimizer, POI.ParameterValue(), y, 5)
MOI.set(optimizer, POI.ParameterValue(), z, 5.0)
@test_throws ErrorException MOI.set(
optimizer,
POI.ParameterValue(),
MOI.VariableIndex(10872368175),
5.0,
)
MOI.optimize!(optimizer)
    @test isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 25.0, atol = ATOL)
    return
end
function test_qp_objective_affine_parameter()
ipopt = Ipopt.Optimizer()
MOI.set(ipopt, MOI.RawOptimizerAttribute("print_level"), 0)
opt_in =
MOI.Utilities.CachingOptimizer(MOI.Utilities.Model{Float64}(), ipopt)
optimizer = POI.Optimizer(opt_in)
A = [0.0 1.0; 1.0 0.0]
a = [2.0, 1.0]
x = MOI.add_variables(optimizer, 2)
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(0.0))
end
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(1.0))
z, cz = MOI.add_constrained_variable(optimizer, MOI.Parameter(1.0))
quad_terms = MOI.ScalarQuadraticTerm{Float64}[]
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 1], x[1], x[1]))
push!(quad_terms, MOI.ScalarQuadraticTerm(A[1, 2], x[1], x[2]))
push!(quad_terms, MOI.ScalarQuadraticTerm(A[2, 2], x[2], x[2]))
objective_function = MOI.ScalarQuadraticFunction(
quad_terms,
MOI.ScalarAffineTerm.(a, [y, z]),
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
objective_function,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(optimizer)
@test isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 3.0, atol = ATOL)
@test isapprox(
MOI.get(optimizer, MOI.VariablePrimal(), x[1]),
0,
atol = ATOL,
)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(2.0))
MOI.optimize!(optimizer)
@test isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 5.0, atol = ATOL)
MOI.set(optimizer, MOI.ConstraintSet(), cz, MOI.Parameter(3.0))
MOI.optimize!(optimizer)
@test isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 7.0, atol = ATOL)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(5.0))
MOI.set(optimizer, MOI.ConstraintSet(), cz, MOI.Parameter(5.0))
MOI.optimize!(optimizer)
@test isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 15.0, atol = ATOL)
return
end
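# POI.QuadraticObjectiveCoef: attach a parameter-dependent coefficient to the
# (x, y) quadratic term on top of quadratic, affine, single-variable, and
# empty objective functions.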
function test_qp_objective_parameter_in_quadratic_part()
model = POI.Optimizer(Ipopt.Optimizer())
MOI.set(model, MOI.Silent(), true)
x = MOI.add_variable(model)
y = MOI.add_variable(model)
z = MOI.add_variable(model)
p = first(MOI.add_constrained_variable.(model, MOI.Parameter(1.0)))
MOI.add_constraint(model, x, MOI.GreaterThan(0.0))
MOI.add_constraint(model, y, MOI.GreaterThan(0.0))
cons1 =
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([2.0, 1.0], [x, y]), 0.0)
ci1 = MOI.add_constraint(model, cons1, MOI.LessThan(4.0))
cons2 =
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([1.0, 2.0], [x, y]), 0.0)
ci2 = MOI.add_constraint(model, cons2, MOI.LessThan(4.0))
MOI.set(model, MOI.ObjectiveSense(), MOI.MAX_SENSE)
obj_func = MOI.ScalarQuadraticFunction(
[
MOI.ScalarQuadraticTerm(1.0, x, x)
MOI.ScalarQuadraticTerm(1.0, y, y)
],
MOI.ScalarAffineTerm{Float64}[],
0.0,
)
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
obj_func,
)
MOI.set(model, POI.QuadraticObjectiveCoef(), (x, y), 2p + 3)
@test MOI.get(model, POI.QuadraticObjectiveCoef(), (x, y)) ≈
MOI.ScalarAffineFunction{Int64}(
MOI.ScalarAffineTerm{Int64}[MOI.ScalarAffineTerm{Int64}(
2,
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 1),
)],
3,
)
@test_throws ErrorException MOI.get(
model,
POI.QuadraticObjectiveCoef(),
(x, z),
)
MOI.optimize!(model)
@test MOI.get(model, MOI.ObjectiveValue()) ≈ 32 / 3 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ 4 / 3 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), y) ≈ 4 / 3 atol = ATOL
MOI.set(model, POI.ParameterValue(), p, 2.0)
MOI.optimize!(model)
@test MOI.get(model, MOI.ObjectiveValue()) ≈ 128 / 9 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ 4 / 3 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), y) ≈ 4 / 3 atol = ATOL
model = POI.Optimizer(Ipopt.Optimizer())
MOI.set(model, MOI.Silent(), true)
x = MOI.add_variable(model)
y = MOI.add_variable(model)
p = first(MOI.add_constrained_variable.(model, MOI.Parameter(1.0)))
MOI.add_constraint(model, x, MOI.GreaterThan(0.0))
MOI.add_constraint(model, y, MOI.GreaterThan(0.0))
cons1 =
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([2.0, 1.0], [x, y]), 0.0)
ci1 = MOI.add_constraint(model, cons1, MOI.LessThan(4.0))
cons2 =
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([1.0, 2.0], [x, y]), 0.0)
ci2 = MOI.add_constraint(model, cons2, MOI.LessThan(4.0))
MOI.set(model, MOI.ObjectiveSense(), MOI.MAX_SENSE)
obj_func = MOI.ScalarAffineFunction(
[
MOI.ScalarAffineTerm(1.0, x)
MOI.ScalarAffineTerm(2.0, y)
],
1.0,
)
MOI.set(
model,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
obj_func,
)
MOI.set(model, POI.QuadraticObjectiveCoef(), (x, y), p)
MOI.optimize!(model)
@test MOI.get(model, MOI.ObjectiveValue()) ≈ 61 / 9 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ 4 / 3 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), y) ≈ 4 / 3 atol = ATOL
@test MOI.get(model, POI.QuadraticObjectiveCoef(), (x, y)) ≈
MOI.VariableIndex(POI.PARAMETER_INDEX_THRESHOLD + 1)
MOI.set(model, POI.ParameterValue(), p, 2.0)
MOI.optimize!(model)
@test MOI.get(model, MOI.ObjectiveValue()) ≈ 77 / 9 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ 4 / 3 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), y) ≈ 4 / 3 atol = ATOL
model = POI.Optimizer(Ipopt.Optimizer())
MOI.set(model, MOI.Silent(), true)
x = MOI.add_variable(model)
y = MOI.add_variable(model)
p = first(MOI.add_constrained_variable.(model, MOI.Parameter(1.0)))
MOI.add_constraint(model, x, MOI.GreaterThan(0.0))
MOI.add_constraint(model, y, MOI.GreaterThan(0.0))
cons1 =
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([2.0, 1.0], [x, y]), 0.0)
ci1 = MOI.add_constraint(model, cons1, MOI.LessThan(4.0))
cons2 =
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([1.0, 2.0], [x, y]), 0.0)
ci2 = MOI.add_constraint(model, cons2, MOI.LessThan(4.0))
MOI.set(model, MOI.ObjectiveSense(), MOI.MAX_SENSE)
obj_func = x
MOI.set(model, MOI.ObjectiveFunction{MOI.VariableIndex}(), obj_func)
MOI.set(model, POI.QuadraticObjectiveCoef(), (x, y), p)
MOI.optimize!(model)
@test MOI.get(model, MOI.ObjectiveValue()) ≈ 28 / 9 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ 4 / 3 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), y) ≈ 4 / 3 atol = ATOL
MOI.set(model, POI.ParameterValue(), p, 2.0)
MOI.optimize!(model)
@test MOI.get(model, MOI.ObjectiveValue()) ≈ 44 / 9 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ 4 / 3 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), y) ≈ 4 / 3 atol = ATOL
model = POI.Optimizer(Ipopt.Optimizer())
MOI.set(model, MOI.Silent(), true)
x = MOI.add_variable(model)
y = MOI.add_variable(model)
p = first(MOI.add_constrained_variable.(model, MOI.Parameter(1.0)))
MOI.add_constraint(model, x, MOI.GreaterThan(0.0))
MOI.add_constraint(model, y, MOI.GreaterThan(0.0))
cons1 =
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([2.0, 1.0], [x, y]), 0.0)
ci1 = MOI.add_constraint(model, cons1, MOI.LessThan(4.0))
cons2 =
MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([1.0, 2.0], [x, y]), 0.0)
ci2 = MOI.add_constraint(model, cons2, MOI.LessThan(4.0))
MOI.set(model, MOI.ObjectiveSense(), MOI.MAX_SENSE)
MOI.set(model, POI.QuadraticObjectiveCoef(), (x, y), p)
MOI.optimize!(model)
@test MOI.get(model, MOI.ObjectiveValue()) ≈ 16 / 9 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ 4 / 3 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), y) ≈ 4 / 3 atol = ATOL
MOI.set(model, POI.ParameterValue(), p, 2.0)
MOI.optimize!(model)
@test MOI.get(model, MOI.ObjectiveValue()) ≈ 32 / 9 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), x) ≈ 4 / 3 atol = ATOL
@test MOI.get(model, MOI.VariablePrimal(), y) ≈ 4 / 3 atol = ATOL
return
end
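# Conflict (IIS) statuses must be forwarded through POI, including for the
# parameter constraint itself.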
function test_compute_conflict!()
T = Float64
mock = MOI.Utilities.MockOptimizer(MOI.Utilities.Model{T}())
MOI.set(mock, MOI.ConflictStatus(), MOI.COMPUTE_CONFLICT_NOT_CALLED)
model = POI.Optimizer(
MOI.Utilities.CachingOptimizer(MOI.Utilities.Model{T}(), mock),
)
x, x_ci = MOI.add_constrained_variable(model, MOI.GreaterThan(1.0))
p, p_ci = MOI.add_constrained_variable(model, MOI.Parameter(2.0))
ci = MOI.add_constraint(model, 2.0 * x + 3.0 * p, MOI.LessThan(0.0))
@test MOI.get(model, MOI.ConflictStatus()) ==
MOI.COMPUTE_CONFLICT_NOT_CALLED
MOI.Utilities.set_mock_optimize!(
mock,
mock::MOI.Utilities.MockOptimizer -> begin
MOI.Utilities.mock_optimize!(
mock,
MOI.INFEASIBLE,
MOI.NO_SOLUTION,
MOI.NO_SOLUTION;
constraint_conflict_status = [
(MOI.VariableIndex, MOI.Parameter{T}) =>
[MOI.MAYBE_IN_CONFLICT],
(MOI.VariableIndex, MOI.GreaterThan{T}) =>
[MOI.IN_CONFLICT],
(MOI.ScalarAffineFunction{T}, MOI.LessThan{T}) =>
[MOI.IN_CONFLICT],
],
)
MOI.set(mock, MOI.ConflictStatus(), MOI.CONFLICT_FOUND)
end,
)
MOI.optimize!(model)
@test MOI.get(model, MOI.TerminationStatus()) == MOI.INFEASIBLE
MOI.compute_conflict!(model)
@test MOI.get(model, MOI.ConflictStatus()) == MOI.CONFLICT_FOUND
@test MOI.get(model, MOI.ConstraintConflictStatus(), x_ci) ==
MOI.IN_CONFLICT
@test MOI.get(model, MOI.ConstraintConflictStatus(), p_ci) ==
MOI.MAYBE_IN_CONFLICT
@test MOI.get(model, MOI.ConstraintConflictStatus(), ci) == MOI.IN_CONFLICT
return
end
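# With evaluate_duals = false, querying a parameter dual must raise
# MOI.GetAttributeNotAllowed.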
function test_duals_not_available()
optimizer = POI.Optimizer(GLPK.Optimizer(); evaluate_duals = false)
MOI.set(optimizer, MOI.Silent(), true)
x = MOI.add_variables(optimizer, 2)
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
z = MOI.VariableIndex(4)
cz = MOI.ConstraintIndex{MOI.VariableIndex,MOI.Parameter{Float64}}(4)
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(0.0))
end
cons1 = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.([1.0, 1.0], [x[1], y]),
0.0,
)
c1 = MOI.add_constraint(optimizer, cons1, MOI.EqualTo(2.0))
obj_func = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.([1.0, 1.0], [x[1], y]),
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
obj_func,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MIN_SENSE)
MOI.optimize!(optimizer)
@test_throws MOI.GetAttributeNotAllowed MOI.get(
optimizer,
MOI.ConstraintDual(),
cy,
)
return
end
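# Duals of ordinary affine constraints must still be correct when those
# constraints also involve parameters.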
function test_duals_without_parameters()
optimizer = POI.Optimizer(GLPK.Optimizer())
MOI.set(optimizer, MOI.Silent(), true)
x = MOI.add_variables(optimizer, 3)
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
z, cz = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
cons1 = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.([1.0, -1.0], [x[1], y]),
0.0,
)
c1 = MOI.add_constraint(optimizer, cons1, MOI.LessThan(0.0))
cons2 = MOI.ScalarAffineFunction([MOI.ScalarAffineTerm(1.0, x[2])], 0.0)
c2 = MOI.add_constraint(optimizer, cons2, MOI.LessThan(1.0))
cons3 = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.([1.0, -1.0], [x[3], z]),
0.0,
)
c3 = MOI.add_constraint(optimizer, cons3, MOI.LessThan(0.0))
obj_func = MOI.ScalarAffineFunction(
MOI.ScalarAffineTerm.([1.0, 2.0, 3.0], [x[1], x[2], x[3]]),
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(),
obj_func,
)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
MOI.set(optimizer, MOI.ConstraintSet(), cy, MOI.Parameter(1.0))
MOI.set(optimizer, MOI.ConstraintSet(), cz, MOI.Parameter(1.0))
MOI.optimize!(optimizer)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), c1), -1.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), c2), -2.0, atol = ATOL)
@test ≈(MOI.get(optimizer, MOI.ConstraintDual(), c3), -3.0, atol = ATOL)
return
end
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | code | 710 | # Copyright (c) 2020: Tomás Gutierrez and contributors
#
# Use of this source code is governed by an MIT-style license that can be found
# in the LICENSE.md file or at https://opensource.org/licenses/MIT.
using JuMP
using Test
import GLPK
import Ipopt
import SCS
import LinearAlgebra
import ParametricOptInterface
const POI = ParametricOptInterface
const ATOL = 1e-4
function canonical_compare(f1, f2)
return MOI.Utilities.canonical(f1) ≈ MOI.Utilities.canonical(f2)
end
include("moi_tests.jl")
include("jump_tests.jl")
for name in names(@__MODULE__; all = true)
if startswith("$name", "test_")
@testset "$(name)" begin
getfield(@__MODULE__, name)()
end
end
end
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | docs | 1899 | # ParametricOptInterface.jl
[](https://jump.dev/ParametricOptInterface.jl/stable)
[](https://jump.dev/ParametricOptInterface.jl/dev)
[](https://github.com/jump-dev/ParametricOptInterface.jl/actions?query=workflow%3ACI)
[](https://codecov.io/gh/jump-dev/ParametricOptInterface.jl)
[ParametricOptInterface.jl](https://github.com/jump-dev/ParametricOptInterface.jl)
is a package that adds parameters to models in JuMP and MathOptInterface.
## License
`ParametricOptInterface.jl` is licensed under the
[MIT License](https://github.com/jump-dev/ParametricOptInterface.jl/blob/master/LICENSE.md).
## Installation
Install ParametricOptInterface using `Pkg.add`:
```julia
import Pkg
Pkg.add("ParametricOptInterface")
```
## Documentation
The [documentation for ParametricOptInterface.jl](https://jump.dev/ParametricOptInterface.jl/stable/)
includes a detailed description of the theory behind the package, along with
examples, tutorials, and an API reference.
## Use with JuMP
Use ParametricOptInterface with JuMP by following this brief example:
```julia
using JuMP, HiGHS
import ParametricOptInterface as POI
model = Model(() -> POI.Optimizer(HiGHS.Optimizer()))
@variable(model, x)
@variable(model, p in MOI.Parameter(1.0))
@constraint(model, cons, x + p >= 3)
@objective(model, Min, 2x)
optimize!(model)
MOI.set(model, POI.ParameterValue(), p, 2.0)
optimize!(model)
```
## GSOC2020
ParametricOptInterface began as a [NumFOCUS sponsored Google Summer of Code (2020) project](https://summerofcode.withgoogle.com/archive/2020/projects/4959861055422464).
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | docs | 254 | In this folder we have a collection of cases that use either BenchmarkTools or TimerOutputs to help developers keep track of possible performance regressions.
A good way to run the benchmarks is to start a Julia session and call include("run_benchmarks.jl").
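For example:

    julia> include("run_benchmarks.jl")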
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | docs | 474 | # ParametricOptInterface.jl Documentation
ParametricOptInterface.jl (POI for short) is a package written on top of MathOptInterface.jl that allows users to add parameters to a MOI/JuMP problem explicitly.
## Installation
To install the package you can use `Pkg.add` as follows:
```julia
pkg> add ParametricOptInterface
```
## Contributing
When contributing please note that the package follows the [JuMP style guide](https://jump.dev/JuMP.jl/stable/developers/style/).
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | docs | 3715 | # Manual
## Why use parameters?
A typical optimization model built using `MathOptInterface.jl` (`MOI` for short) has two main components:
1. Variables
2. Constants
Using these basic elements, one can create functions and sets that, together, form the desired optimization model. The goal of `POI` is the implementation of a third type, parameters, which
* are declared similarly to a variable, and inherit some of its functionality (e.g. dual calculation)
* act like constants, in the sense that they have a fixed value that will remain the same unless explicitly changed by the user
A main concern is to implement this new type efficiently, since a typical usage is to change a parameter's value and re-solve in order to analyze the model's behavior, without building a new model from scratch.
## How it works
The main idea applied in POI is that the interaction between the solver, e.g. `GLPK`, and the optimization model is handled by `MOI` as usual. Because of that, `POI` is a higher-level wrapper around `MOI`, responsible for receiving variables, constants and parameters, and forwarding to the lower-level model only variables and constants.
As `POI` receives parameters, it must analyze and decide how they should be handled in the lower-level optimization model (the `MOI` model).
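To make the forwarding concrete, here is a minimal sketch at the MOI level (HiGHS is just an illustrative inner solver): `POI` keeps track of the parameter `p`, while, in effect, the inner model only ever sees the variable `x` with the parameter's current value folded into the constraint's constant.
```julia
using MathOptInterface, HiGHS, ParametricOptInterface
const MOI = MathOptInterface
const POI = ParametricOptInterface

optimizer = POI.Optimizer(HiGHS.Optimizer())
x = MOI.add_variable(optimizer)
# A parameter is added like a variable, constrained to a MOI.Parameter set
p, cp = MOI.add_constrained_variable(optimizer, MOI.Parameter(2.0))
# POI stores x + p <= 3; in effect, the inner model receives x <= 3 - 2.0
MOI.add_constraint(
    optimizer,
    MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([1.0, 1.0], [x, p]), 0.0),
    MOI.LessThan(3.0),
)
```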
## Usage
In this manual we describe how to interact with the optimization model at the MOI level. In the [Examples](@ref) section you can find tutorials on JuMP usage.
### Supported constraints
This is a list of the supported `MOI` constraint functions that can handle parameters. If you try to add a parameter to
a function that is not listed here, an unsupported error will be thrown.
| MOI Function |
|:-------|
| `ScalarAffineFunction` |
| `ScalarQuadraticFunction` |
| `VectorAffineFunction` |
### Supported objective functions
| MOI Function |
|:-------|
| `ScalarAffineFunction` |
| `ScalarQuadraticFunction` |
### Declare an Optimizer
In order to use parameters, the user needs to declare a [`ParametricOptInterface.Optimizer`](@ref) on top of a `MOI` optimizer, such as `HiGHS.Optimizer()`.
```julia
using ParametricOptInterface, MathOptInterface, HiGHS
# Rename ParametricOptInterface and MathOptInterface to simplify the code
const POI = ParametricOptInterface
const MOI = MathOptInterface
# Define an Optimizer on top of the MOI optimizer
optimizer = POI.Optimizer(HiGHS.Optimizer())
```
### Parameters
A `MOI.Parameter` is a set used to define a variable with a fixed value that
can be changed by the user. It is analogous to `MOI.EqualTo`, but can be used
by special methods, like the ones in this package, to remove the fixed variable from the
optimization problem. This permits the use of multiplicative parameters in linear models
and might speed up solves since the number of variables is reduced.
### Adding a new parameter to a model
To add a parameter to a model, we must use the `MOI.add_constrained_variable()` function, passing as its arguments the model and a `MOI.Parameter` with its given value:
```julia
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
```
### Changing the parameter value
To change a given parameter's value, access its `VariableIndex` and set the new value through the `POI.ParameterValue` attribute.
```julia
MOI.set(optimizer, POI.ParameterValue(), y, 2.0)
```
### Retrieving the dual of a parameter
Given an optimized model, one can compute the dual associated with a parameter, **as long as it is an additive term in the constraints or objective**.
One can do so by getting the `POI.ParameterDual` attribute for the parameter's `VariableIndex` (equivalently, the `MOI.ConstraintDual` attribute of the parameter's `MOI.ConstraintIndex`):
```julia
MOI.get(optimizer, POI.ParameterDual(), y)
```
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | docs | 182 | # Reference
```@docs
ParametricOptInterface.ConstraintsInterpretation
ParametricOptInterface.Optimizer
ParametricOptInterface.ParameterDual
ParametricOptInterface.ParameterValue
``` | ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | docs | 13490 | # Benders Quantile Regression
We will apply Norm-1 regression to the [Linear Regression](https://en.wikipedia.org/wiki/Linear_regression) problem.
Linear regression is a statistical tool to obtain the relation between one **dependent variable** and other **explanatory variables**.
In other words, given a set of $n$ explanatory variables $X = \{ X_1, \dots, X_n \}$
we would like to obtain the best possible estimate for $Y$.
In order to accomplish such a task we make the hypothesis that $Y$
is approximately a linear function of $X$:
$$Y = \sum_{j =1}^n \beta_j X_j + \varepsilon$$
where $\varepsilon$ is some random error.
The estimation of the $\beta$ values relies on observations of the variables:
$\{y^i, x_1^i, \dots, x_n^i\}_i$.
In this example we will solve a problem where the explanatory variables are sinusoids of different frequencies.
First, we define the number of explanatory variables and observations
```julia
using ParametricOptInterface, MathOptInterface, JuMP, HiGHS
using TimerOutputs, LinearAlgebra, Random
const POI = ParametricOptInterface
const MOI = MathOptInterface
const OPTIMIZER = HiGHS.Optimizer;
const N_Candidates = 200
const N_Observations = 2000
const N_Nodes = 200
const Observations = 1:N_Observations
const Candidates = 1:N_Candidates
const Nodes = 1:N_Nodes;
```
Initialize a random number generator to keep results deterministic
```julia
rng = Random.MersenneTwister(123);
```
Build the regressor (explanatory) sinusoids
```julia
const X = zeros(N_Candidates, N_Observations)
const time = [obs / N_Observations for obs in Observations]
for obs in Observations, cand in Candidates
t = time[obs]
f = cand
X[cand, obs] = sin(2 * pi * f * t)
end
```
Define coefficients
```julia
β = zeros(N_Candidates)
for i in Candidates
if rand(rng) <= (1 - i / N_Candidates)^2 && i <= 100
β[i] = 4 * rand(rng) / i
end
end
```
Create noisy observations
```julia
const y = X' * β .+ 0.1 * randn(rng, N_Observations)
```
### Benders Decomposition
Benders decomposition is used to solve large optimization problems with some special characteristics.
LPs can be solved with classical linear optimization methods
such as the Simplex method or interior point methods provided by
solvers like HiGHS.
However, these methods do not scale linearly with the problem size.
In the Benders decomposition framework we break the problem into two pieces:
an outer and an inner problem.
Of course, some variables will belong to both problems, and this is where the
cleverness of Benders kicks in:
the outer problem is solved and passes the shared variables to the inner one.
The inner problem is solved with the shared variables FIXED to the values
given by the outer problem. The solution of the inner problem can be used
to generate a constraint for the outer problem that describes the linear
approximation of the cost function of the shared variables.
In many cases, like stochastic programming, the inner problems have an
interesting structure and might be broken into smaller problems to be solved
in parallel.
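In pseudocode, the loop just described looks roughly like this (a sketch; the concrete functions are built in the remainder of this section):
```julia
# x̂ = solve(relaxed outer problem)
# for k in 1:n
#     fix the shared variables of inner problem k to x̂
#     solve inner problem k, recording z_k(x̂) and the duals π_k(x̂)
#     add the cut  z_k ≥ π_k(x̂)ᵀ(x - x̂) + z_k(x̂)  to the outer problem
# end
# repeat until the bounds are close enough
```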
We will describe the decomposition similarly to what is done in
Introduction to Linear Optimization, Bertsimas & Tsitsiklis (Chapter 6.5),
where the problem in question has the form
$$\begin{align}
& \min_{x, y_k} && c^T x && + f_1^T y_1 && + \dots && + f_n^T y_n && \notag \\
& \text{subject to} && Ax && && && && = b \notag \\
& && B_1 x && + D_1 y_1 && && && = d_1 \notag \\
& && \dots && && \dots && && \notag \\
& && B_n x && && && + D_n y_n && = d_n \notag \\
& && x, && y_1, && && y_n && \geq 0 \notag \\
\end{align}$$
### Inner Problem
Given a solution for the $x$ variables we can define the inner problem as
$$\begin{align}
z_k(x) \ = \ & \min_{y_k} && f_k^T y_k && \notag \\
& \text{subject to} && D_k y_k && = d_k - B_k x \notag \\
& && y_k && \geq 0 \notag \\
\end{align}$$
The $z_k(x)$ function represents the cost of the subproblem given a
solution for $x$. This function is a convex function because $x$
affects only the right-hand side of the problem (this is a standard
result in LP theory).
For the special case of Norm-1 regression, the problem is written as:
$$\begin{align}
z_k(\beta) \ = \ & \min_{\varepsilon^{up}, \varepsilon^{dw}} && \sum_{i \in ObsSet(k)} {\varepsilon^{up}}_i + {\varepsilon^{dw}}_i && \notag \\
& \text{subject to} && {\varepsilon^{up}}_i \geq + y_i - \sum_{j \in Candidates} \beta_j x_{i,j} && \forall i \in ObsSet(k) \notag \\
& && {\varepsilon^{dw}}_i \geq - y_i + \sum_{j \in Candidates} \beta_j x_{i,j} && \forall i \in ObsSet(k) \notag \\
& && {\varepsilon^{up}}_i, {\varepsilon^{dw}}_i \geq 0 && \forall i \in ObsSet(k) \notag \\
\end{align}$$
The collection $ObsSet(k)$ is a subset of the observations `1:N_Observations`.
Any partition of the observations is valid.
In this example we will partition with the function:
```julia
function ObsSet(K)
obs_per_block = div(N_Observations, N_Nodes)
return (1+(K-1)*obs_per_block):(K*obs_per_block)
end
```
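With the constants above (`N_Observations = 2000`, `N_Nodes = 200`), each block holds 10 consecutive observations:
```julia
ObsSet(1) == 1:10
ObsSet(2) == 11:20
```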
The inner problem itself can be written in POI as follows:
```julia
function inner_model(K)
# initialize the POI model
inner = direct_model(POI.Optimizer(OPTIMIZER()))
# Define local optimization variables for norm-1 error
@variables(inner, begin
ɛ_up[ObsSet(K)] >= 0
ɛ_dw[ObsSet(K)] >= 0
end)
# create the regression coefficient representation
# Create parameters
β = [@variable(inner, set = MOI.Parameter(0.0)) for i in 1:N_Candidates]
for (i, βi) in enumerate(β)
set_name(βi, "β[$i]")
end
# create local constraints
# Note that *parameter* algebra is implemented just like variables
# algebra. We can multiply parameters by constants, add parameters,
# sum parameters and variables and so on.
@constraints(
inner,
begin
ɛ_up_ctr[i in ObsSet(K)],
ɛ_up[i] >= +sum(X[j, i] * β[j] for j in Candidates) - y[i]
ɛ_dw_ctr[i in ObsSet(K)],
ɛ_dw[i] >= -sum(X[j, i] * β[j] for j in Candidates) + y[i]
end
)
# create local objective function
@objective(inner, Min, sum(ɛ_up[i] + ɛ_dw[i] for i in ObsSet(K)))
# return the correct group of parameters
return (inner, β)
end
```
### Outer Problem
Now that all pieces of the original problem can be represented by
the convex $z_k(x)$ functions, we can recast the problem in the equivalent form:
$$\begin{align}
& \min_{x} && c^T x + z_1(x) + \dots + z_n(x) && \notag \\
& \text{subject to} && Ax = b && \notag \\
& && x \geq 0 && \notag \\
\end{align}$$
However, we cannot pass a problem in this form to a linear programming
solver (it could be passed to other kinds of solvers).
Another standard result of optimization theory is that a convex function
can be represented by its supporting hyperplanes:
$$\begin{align}
z_k(x) \ = \ & \min_{z} && z && \notag \\
& \text{subject to} && z \geq \pi_k(\hat{x}) (x - \hat{x}) + z_k(\hat{x}), \ \forall \hat{x} \in dom(z_k) && \notag \\
\end{align}$$
Then we can re-write (again) the outer problem as
$$\begin{align}
& \min_{x, z_k} && c^T x + z_1 + \dots + z_n \notag \\
& \text{subject to} && z_i \geq \pi_i(\hat{x}) (x - \hat{x}) + z_i(\hat{x}), \ \forall \hat{x} \in dom(z_i), i \in \{1, \dots, n\} \notag \\
& && Ax = b \notag \\
& && x \geq 0 \notag \\
\end{align}$$
This is a linear program! However, it has infinitely many constraints.
We can relax the infinite set of constraints and write:
$$\begin{align}
& \min_{x, z_k} && c^T x + z_1 + \dots + z_n \notag \\
& \text{subject to} && Ax = b \notag \\
& && x \geq 0 \notag \\
\end{align}$$
But now it is only an underestimation of the original problem.
In the case of our problem it can be written as:
$$\begin{align}
& \min_{\varepsilon, \beta} && \sum_{i \in Nodes} \varepsilon_i \notag \\
& \text{subject to} && \varepsilon_i \geq 0, \ \forall i \in Nodes \notag \\
\end{align}$$
This model can be written in JuMP:
```julia
function outer_model()
outer = Model(OPTIMIZER)
@variables(outer, begin
ɛ[Nodes] >= 0
β[1:N_Candidates]
end)
@objective(outer, Min, sum(ɛ[i] for i in Nodes))
sol = zeros(N_Candidates)
return (outer, ɛ, β, sol)
end
```
The method to solve the outer problem and query its solution is given here:
```julia
function outer_solve(outer_model)
model = outer_model[1]
β = outer_model[3]
optimize!(model)
return (value.(β), objective_value(model))
end
```
### Supporting Hyperplanes
With these building blocks in hand, we can start building the algorithm.
So far we know how to:
- Solve the relaxed outer problem
- Obtain the solution for the $\hat{x}$ (or $\beta$ in our case)
Now we can:
- Fix the values of $\hat{x}$ in the inner problems
- Solve the inner problems
- Query the solution of the inner problems to obtain the supporting hyperplane:
the value of $z_k(\hat{x})$, which is the objective value of the inner problem,
and the derivative $\pi_k(\hat{x}) = \frac{d z_k(x)}{d x} \Big|_{x = \hat{x}}$.
The derivative is the dual variable associated with $\hat{x}$,
which is obtained by applying the chain rule to the constraints' duals.
These new steps are executed by the function:
```julia
function inner_solve(model, outer_solution)
β0 = outer_solution[1]
inner = model[1]
# The first step is to fix the values given by the outer problem
@timeit "fix" begin
β = model[2]
MOI.set.(inner, POI.ParameterValue(), β, β0)
end
# here the inner problem is solved
@timeit "opt" optimize!(inner)
# query dual variables, which are sensitivities
# They represent the subgradient (almost a derivative)
# of the objective function for infinitesimal variations
# of the constants in the linear constraints
# POI: we can query dual values of *parameters*
π = MOI.get.(inner, POI.ParameterDual(), β)
# π2 = shadow_price.(β_fix)
obj = objective_value(inner)
rhs = obj - dot(π, β0)
return (rhs, π, obj)
end
```
Now that we have the cutting planes in hand, we can add them to the outer problem:
```julia
function outer_add_cut(outer_model, cut_info, node)
outer = outer_model[1]
ɛ = outer_model[2]
β = outer_model[3]
rhs = cut_info[1]
π = cut_info[2]
@constraint(outer, ɛ[node] >= sum(π[j] * β[j] for j in Candidates) + rhs)
end
```
### Algorithm wrap up
The complete algorithm is:
- Solve the relaxed outer problem
- Obtain the solution for the $\hat{x}$ (or $\beta$ in our case)
- Fix the values of $\hat{x}$ in the inner problems
- Solve the inner problems
- Query the solution of the inner problems to obtain the supporting hyperplanes
- Add the hyperplanes to the outer problem
- Repeat
Now we grab all the pieces that we built and write the Benders
algorithm by calling the above functions in the proper order.
The `@timeit` macros are used to time each step of the algorithm.
```julia
function decomposed_model(;print_timer_outputs::Bool = true)
    reset_timer!() # reset timer for comparison
time_init = @elapsed @timeit "Init" begin
# Create the outer problem with no cuts
@timeit "outer" outer = outer_model()
# initialize solution for the regression coefficients in zero
@timeit "Sol" solution = (zeros(N_Candidates), Inf)
best_sol = deepcopy(solution)
# Create the inner problems
        @timeit "inners" inners =
            [inner_model(i) for i in Nodes]
# Save initial version of the inner problems and create
# the first set of cuts
        @timeit "Cuts" cuts =
            [inner_solve(inners[i], solution) for i in Nodes]
end
UB = +Inf
LB = -Inf
# println("Initialize Iterative step")
time_loop = @elapsed @timeit "Loop" for k in 1:80
# Add cuts generated from each inner problem to the outer problem
        @timeit "add cuts" for i in Nodes
outer_add_cut(outer, cuts[i], i)
end
# Solve the outer problem with the new set of cuts
# Obtain new solution candidate for the regression coefficients
        @timeit "solve outer" solution = outer_solve(outer)
# Pass the new candidate solution to each of the inner problems
# Solve the inner problems and obtain cutting planes
        @timeit "solve nodes" for i in Nodes
            cuts[i] = inner_solve(inners[i], solution)
end
LB = solution[2]
        new_UB = sum(cuts[i][3] for i in Nodes)
if new_UB <= UB
best_sol = deepcopy(solution)
end
UB = min(UB, new_UB)
if abs(UB - LB) / (abs(UB) + abs(LB)) < 0.05
break
end
end
print_timer_outputs && print_timer()
return best_sol[1]
end
```
Run Benders decomposition with POI:
```julia
β2 = decomposed_model(; print_timer_outputs = false);
GC.gc()
β2 = decomposed_model();
``` | ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | docs | 16510 | # Basic Examples
## MOI example - step by step usage
Let's write a step-by-step example of `POI` usage at the MOI level.
First, we declare a [`ParametricOptInterface.Optimizer`](@ref) on top of a `MOI` optimizer. In the example, we consider `HiGHS` as the underlying solver:
```@example moi1
using HiGHS
using MathOptInterface
using ParametricOptInterface
const MOI = MathOptInterface
const POI = ParametricOptInterface
optimizer = POI.Optimizer(HiGHS.Optimizer())
```
We declare the variable `x` as in a typical `MOI` model, and we add a non-negativity constraint:
```@example moi1
x = MOI.add_variables(optimizer, 2)
for x_i in x
MOI.add_constraint(optimizer, x_i, MOI.GreaterThan(0.0))
end
```
Now, let's consider 3 `MOI.Parameter`. Two of them, `y`, `z`, will be placed in the constraints and one, `w`, in the objective function. We'll start all three of them with a value equal to `0`:
```@example moi1
w, cw = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
y, cy = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
z, cz = MOI.add_constrained_variable(optimizer, MOI.Parameter(0.0))
```
Let's add the constraints. Notice that we treat parameters and variables in the same way when building the functions that will be placed in some set to create a constraint (`Function-in-Set`):
```@example moi1
cons1 = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([2.0, 1.0, 3.0], [x[1], x[2], y]), 0.0)
ci1 = MOI.add_constraint(optimizer, cons1, MOI.LessThan(4.0))
cons2 = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([1.0, 2.0, 0.5], [x[1], x[2], z]), 0.0)
ci2 = MOI.add_constraint(optimizer, cons2, MOI.LessThan(4.0))
```
Finally, we declare and add the objective function, with its respective sense:
```@example moi1
obj_func = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([4.0, 3.0, 2.0], [x[1], x[2], w]), 0.0)
MOI.set(optimizer, MOI.ObjectiveFunction{MOI.ScalarAffineFunction{Float64}}(), obj_func)
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
```
Now we can optimize the model and assess its termination and primal status:
```@example moi1
MOI.optimize!(optimizer)
MOI.get(optimizer, MOI.TerminationStatus())
MOI.get(optimizer, MOI.PrimalStatus())
```
Given the optimized solution, we check that its value is, as expected, equal to `28/3`, and the solution vector `x` is `[4/3, 4/3]`:
```@example moi1
isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 28/3, atol = 1e-4)
isapprox(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 4/3, atol = 1e-4)
isapprox(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), 4/3, atol = 1e-4)
```
We can also retrieve the dual values associated to each parameter, **as they are all additive**:
```@example moi1
MOI.get(optimizer, MOI.ConstraintDual(), cy)
MOI.get(optimizer, MOI.ConstraintDual(), cz)
MOI.get(optimizer, MOI.ConstraintDual(), cw)
```
Notice the direct relationship in this case between the parameters' duals and the associated constraints' duals.
The `y` parameter, for example, only appears in `cons1`. If we compare their duals, we can check that the dual of `y` is equal to its coefficient in `cons1` multiplied by the constraint's dual itself, as expected:
```@example moi1
isapprox(MOI.get(optimizer, MOI.ConstraintDual(), cy), 3*MOI.get(optimizer, MOI.ConstraintDual(), ci1), atol = 1e-4)
```
The same is valid for the remaining parameters. In case a parameter appears in more than one constraint, or both some constraints and in the objective function, its dual will be equal to the linear combination of the functions' duals multiplied by the respective coefficients.
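For instance, `z` appears only in `cons2` with coefficient `0.5`, so the analogous check can be written as (a sketch):
```@example moi1
isapprox(MOI.get(optimizer, MOI.ConstraintDual(), cz), 0.5*MOI.get(optimizer, MOI.ConstraintDual(), ci2), atol = 1e-4)
```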
So far, we only added some parameters that had no influence at first in solving the model. Let's change the values associated to each parameter to assess its implications.
First, we set the value of parameters `y` and `z` to `1.0`. Notice that we are changing the feasible set of the decision variables:
```@example moi1
MOI.set(optimizer, POI.ParameterValue(), y, 1.0)
MOI.set(optimizer, POI.ParameterValue(), z, 1.0)
```
However, if we check the optimized model now, there will be no changes in the objective function value or in the optimized decision variables:
```@example moi1
isapprox.(MOI.get(optimizer, MOI.ObjectiveValue()), 28/3, atol = 1e-4)
isapprox.(MOI.get(optimizer, MOI.VariablePrimal(), x[1]), 4/3, atol = 1e-4)
isapprox.(MOI.get(optimizer, MOI.VariablePrimal(), x[2]), 4/3, atol = 1e-4)
```
Although we changed the parameter values, we didn't optimize the model yet. Thus, **to apply the parameters' changes, the model must be optimized again**:
```@example moi1
MOI.optimize!(optimizer)
```
The `MOI.optimize!()` function handles the necessary updates, properly forwarding the new outer model (`POI` model) additions to the inner model (`MOI` model) which will be handled by the solver. Now we can assess the updated optimized information:
```@example moi1
isapprox.(MOI.get(optimizer, MOI.ObjectiveValue()), 3.0, atol = 1e-4)
MOI.get.(optimizer, MOI.VariablePrimal(), x) == [0.0, 1.0]
```
If we update the parameter `w`, associated to the objective function, we are simply adding a constant to it. Notice how the new objective function is precisely equal to the previous one plus the new value of `w`. In addition, as we didn't update the feasible set, the optimized decision variables remain the same.
```@example moi1
MOI.set(optimizer, POI.ParameterValue(), w, 2.0)
# Once again, the model must be optimized to incorporate the changes
MOI.optimize!(optimizer)
# Only the objective function value changes
isapprox.(MOI.get(optimizer, MOI.ObjectiveValue()), 7.0, atol = 1e-4)
MOI.get.(optimizer, MOI.VariablePrimal(), x) == [0.0, 1.0]
```
## JuMP Example - step by step usage
Let's write a step-by-step example of `POI` usage at the JuMP level.
First, we declare a `Model` on top of a `Optimizer` of an underlying solver. In the example, we consider `HiGHS` as the underlying solver:
```@example jump1
using HiGHS
using JuMP
using ParametricOptInterface
const POI = ParametricOptInterface
model = Model(() -> ParametricOptInterface.Optimizer(HiGHS.Optimizer()))
```
We declare the variable `x` as in a typical `JuMP` model:
```@example jump1
@variable(model, x[i = 1:2] >= 0)
```
Now, let's consider 3 `MOI.Parameter`. Two of them, `y`, `z`, will be placed in the constraints and one, `w`, in the objective function. We'll start all three of them with a value equal to `0`:
```@example jump1
@variable(model, y in MOI.Parameter(0.0))
@variable(model, z in MOI.Parameter(0.0))
@variable(model, w in MOI.Parameter(0.0))
```
Let's add the constraints. Notice that we treat parameters the same way we treat variables when writing the model:
```@example jump1
@constraint(model, c1, 2x[1] + x[2] + 3y <= 4)
@constraint(model, c2, x[1] + 2x[2] + 0.5z <= 4)
```
Finally, we declare and add the objective function, with its respective sense:
```@example jump1
@objective(model, Max, 4x[1] + 3x[2] + 2w)
```
We can optimize the model and assess its termination and primal status:
```@example jump1
optimize!(model)
termination_status(model)
primal_status(model)
```
Given the optimized solution, we check that its value is, as expected, equal to `28/3`, and the solution vector `x` is `[4/3, 4/3]`:
```@example jump1
isapprox(objective_value(model), 28/3)
isapprox(value.(x), [4/3, 4/3])
```
We can also retrieve the dual values associated to each parameter, **as they are all additive**:
```@example jump1
MOI.get(model, POI.ParameterDual(), y)
MOI.get(model, POI.ParameterDual(), z)
MOI.get(model, POI.ParameterDual(), w)
```
Notice the direct relationship in this case between the parameters' duals and the associated constraints' duals. The `y` parameter, for example, only appears in `c1`. If we compare their duals, we can check that the dual of `y` is equal to its coefficient in `c1` multiplied by the constraint's dual itself, as expected:
```@example jump1
dual_of_y = MOI.get(model, POI.ParameterDual(), y)
isapprox(dual_of_y, 3 * dual(c1))
```
The same is valid for the remaining parameters. In case a parameter appears in more than one constraint, or both some constraints and in the objective function, its dual will be equal to the linear combination of the functions' duals multiplied by the respective coefficients.
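For instance, `z` appears only in `c2` with coefficient `0.5` (a sketch of the analogous check):
```@example jump1
dual_of_z = MOI.get(model, POI.ParameterDual(), z)
isapprox(dual_of_z, 0.5 * dual(c2))
```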
So far, we only added some parameters that had no influence at first in solving the model. Let's change the values associated to each parameter to assess its implications. First, we set the value of parameters `y` and `z` to `1.0`. Notice that we are changing the feasible set of the decision variables:
```@example jump1
MOI.set(model, POI.ParameterValue(), y, 1)
MOI.set(model, POI.ParameterValue(), z, 1)
# We can also query the value in the parameters
MOI.get(model, POI.ParameterValue(), y)
MOI.get(model, POI.ParameterValue(), z)
```
To apply the parameters' changes, the model must be optimized again:
```@example jump1
optimize!(model)
```
The `optimize!` function handles the necessary updates, properly forwarding the new outer model (`POI` model) additions to the inner model (`MOI` model) which will be handled by the solver. Now we can assess the updated optimized information:
```@example jump1
isapprox(objective_value(model), 3)
isapprox(value.(x), [0, 1])
```
If we update the parameter `w`, associated to the objective function, we are simply adding a constant to it. Notice how the new objective function is precisely equal to the previous one plus the new value of `w`. In addition, as we didn't update the feasible set, the optimized decision variables remain the same.
```@example jump1
MOI.set(model, POI.ParameterValue(), w, 2)
# Once again, the model must be optimized to incorporate the changes
optimize!(model)
# Only the objective function value changes
isapprox(objective_value(model), 7)
isapprox(value.(x), [0, 1])
```
## JuMP Example - Declaring vectors of parameters
Many times it is useful to declare a vector of parameters just like we declare a vector of variables; the JuMP syntax for variables works with parameters too:
```@example jump2
using HiGHS
using JuMP
using ParametricOptInterface
const POI = ParametricOptInterface
model = Model(() -> ParametricOptInterface.Optimizer(HiGHS.Optimizer()))
@variable(model, x[i = 1:3] >= 0)
@variable(model, p1[i = 1:3] in MOI.Parameter(0.0))
@variable(model, p2[i = 1:3] in MOI.Parameter.([1, 10, 45]))
@variable(model, p3[i = 1:3] in MOI.Parameter.(ones(3)))
```
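The same scalar attributes work element-wise on these vectors via broadcasting (a sketch):
```@example jump2
MOI.set.(model, POI.ParameterValue(), p1, [1.0, 2.0, 3.0])
MOI.get.(model, POI.ParameterValue(), p1)
```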
## JuMP Example - Dealing with parametric expressions as variable bounds
A very common pattern that appears when using ParametricOptInterface is to add a variable and later add some expression with parameters that represents the variable bound. The following code illustrates the pattern:
```@example jump3
using HiGHS
using JuMP
using ParametricOptInterface
const POI = ParametricOptInterface
model = direct_model(POI.Optimizer(HiGHS.Optimizer()))
@variable(model, x)
@variable(model, p in MOI.Parameter(0.0))
@constraint(model, x >= p)
```
Since parameters are treated like variables, JuMP lowers this to MOI as `x - p >= 0`, which is not a variable bound but a linear constraint. This means that the current representation of this problem at the solver level is:
```math
\begin{align}
& \min_{x} & 0
\\
& \;\;\text{s.t.} & x & \in \mathbb{R} \\
& & x - p & \geq 0
\end{align}
```
This behaviour might be undesirable because it creates extra rows in your problem. Users can set the [`ParametricOptInterface.ConstraintsInterpretation`](@ref) to control how the linear constraints should be interpreted. Users seeking the most performance out of ParametricOptInterface should use the following pattern:
```@example jump3
using HiGHS
using JuMP
using ParametricOptInterface
const POI = ParametricOptInterface
model = direct_model(POI.Optimizer(HiGHS.Optimizer()))
@variable(model, x)
@variable(model, p in MOI.Parameter(0.0))
# Indicate that all the new constraints will be valid variable bounds
MOI.set(model, POI.ConstraintsInterpretation(), POI.ONLY_BOUNDS)
@constraint(model, x >= p)
# The constraint above is stored as a variable bound rather than as a new
# linear constraint.
# Indicate that the constraints added from now on will not be variable bounds
MOI.set(model, POI.ConstraintsInterpretation(), POI.ONLY_CONSTRAINTS)
# @constraint(model, ...)
```
This way the mathematical representation of the problem will be:
```math
\begin{align}
& \min_{x} & 0
\\
& \;\;\text{s.t.} & x & \geq p
\end{align}
```
which might lead to faster solves.
Users that just want everything to work can use the default value `POI.ONLY_CONSTRAINTS` or try to use `POI.BOUNDS_AND_CONSTRAINTS` and leave it to ParametricOptInterface to interpret the constraints as bounds when applicable and linear constraints otherwise.
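For example, a minimal sketch of opting into the latter interpretation:
```@example jump3
MOI.set(model, POI.ConstraintsInterpretation(), POI.BOUNDS_AND_CONSTRAINTS)
```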
## MOI Example - Parameters multiplying Quadratic terms
Let's start with a simple quadratic problem
```@example moi2
using Ipopt
using MathOptInterface
using ParametricOptInterface
const MOI = MathOptInterface
const POI = ParametricOptInterface
optimizer = POI.Optimizer(Ipopt.Optimizer())
x = MOI.add_variable(optimizer)
y = MOI.add_variable(optimizer)
MOI.add_constraint(optimizer, x, MOI.GreaterThan(0.0))
MOI.add_constraint(optimizer, y, MOI.GreaterThan(0.0))
cons1 = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([2.0, 1.0], [x, y]), 0.0)
ci1 = MOI.add_constraint(optimizer, cons1, MOI.LessThan(4.0))
cons2 = MOI.ScalarAffineFunction(MOI.ScalarAffineTerm.([1.0, 2.0], [x, y]), 0.0)
ci2 = MOI.add_constraint(optimizer, cons2, MOI.LessThan(4.0))
MOI.set(optimizer, MOI.ObjectiveSense(), MOI.MAX_SENSE)
obj_func = MOI.ScalarQuadraticFunction(
[MOI.ScalarQuadraticTerm(1.0, x, x)
MOI.ScalarQuadraticTerm(1.0, y, y)],
MOI.ScalarAffineTerm{Float64}[],
0.0,
)
MOI.set(
optimizer,
MOI.ObjectiveFunction{MOI.ScalarQuadraticFunction{Float64}}(),
obj_func,
)
```
To multiply a parameter in a quadratic term, the user will
need to use the `POI.QuadraticObjectiveCoef` model attribute.
```@example moi2
p = first(MOI.add_constrained_variable.(optimizer, MOI.Parameter(1.0)))
MOI.set(optimizer, POI.QuadraticObjectiveCoef(), (x,y), p)
```
This function will add the term `p*xy` to the objective function.
It's also possible to multiply the quadratic term by a scalar affine function.
```@example moi2
MOI.set(optimizer, POI.QuadraticObjectiveCoef(), (x,y), 2p+3)
```
This will set the term `(2p+3)*xy` to the objective function (it overwrites the last set).
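The parametric function currently attached to the `xy` term can also be queried back, mirroring the JuMP example below (a sketch):
```@example moi2
MOI.get(optimizer, POI.QuadraticObjectiveCoef(), (x, y))
```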
Then, just optimize the model.
```@example moi2
MOI.optimize!(optimizer)
isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 32/3, atol=1e-4)
isapprox(MOI.get(optimizer, MOI.VariablePrimal(), x), 4/3, atol=1e-4)
isapprox(MOI.get(optimizer, MOI.VariablePrimal(), y), 4/3, atol=1e-4)
```
To change the parameter just set `POI.ParameterValue` and optimize again.
```@example moi2
MOI.set(optimizer, POI.ParameterValue(), p, 2.0)
MOI.optimize!(optimizer)
isapprox(MOI.get(optimizer, MOI.ObjectiveValue()), 128/9, atol=1e-4)
isapprox(MOI.get(optimizer, MOI.VariablePrimal(), x), 4/3, atol=1e-4)
isapprox(MOI.get(optimizer, MOI.VariablePrimal(), y), 4/3, atol=1e-4)
```
## JuMP Example - Parameters multiplying Quadratic terms
Let's build the same model as in the MOI example, this time in JuMP:
```@example jump4
using Ipopt
using JuMP
using ParametricOptInterface
const POI = ParametricOptInterface
optimizer = POI.Optimizer(Ipopt.Optimizer())
model = direct_model(optimizer)
@variable(model, x >= 0)
@variable(model, y >= 0)
@variable(model, p in MOI.Parameter(1.0))
@constraint(model, 2x + y <= 4)
@constraint(model, x + 2y <= 4)
@objective(model, Max, (x^2 + y^2)/2)
```
We use the same MOI attribute to add the parameter multiplying the quadratic term.
```@example jump4
MOI.set(backend(model), POI.QuadraticObjectiveCoef(), (index(x),index(y)), 2index(p)+3)
```
If the user prints the `model`, the term `(2p+3)*xy` won't be shown.
It's possible to retrieve the parametric function multiplying the term `xy` with `MOI.get`.
```@example jump4
MOI.get(backend(model), POI.QuadraticObjectiveCoef(), (index(x),index(y)))
```
Then, just optimize the model
```@example jump4
optimize!(model)
isapprox(objective_value(model), 32/3, atol=1e-4)
isapprox(value(x), 4/3, atol=1e-4)
isapprox(value(y), 4/3, atol=1e-4)
```
To change the parameter just set `POI.ParameterValue` and optimize again.
```@example jump4
MOI.set(model, POI.ParameterValue(), p, 2.0)
optimize!(model)
isapprox(objective_value(model), 128/9, atol=1e-4)
isapprox(value(x), 4/3, atol=1e-4)
isapprox(value(y), 4/3, atol=1e-4)
```
| ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.8.2 | 1b93d5117b620c44f2241d77496b270634a4180d | docs | 3065 | # Markowitz Efficient Frontier
In this example, we solve the classical portfolio problem where we introduce the
weight parameter $\gamma$ and maximize $\gamma \cdot \text{expected return} - \text{risk}$. By updating the values of $\gamma$ we trace the efficient frontier.
Given the price changes with mean $\mu$ and covariance $\Sigma$, we can construct the classical portfolio problem:
$$\begin{array}{ll}
\text{maximize} & \gamma \, x^T \mu - x^T \Sigma x \\
\text{subject to} & \| x \|_1 = 1 \\
& x \succeq 0
\end{array}$$
The problem data was taken from the example [portfolio optimization](https://jump.dev/Convex.jl/dev/examples/portfolio_optimization/portfolio_optimization2/)
```julia
using ParametricOptInterface, MathOptInterface, JuMP, Ipopt
using LinearAlgebra, Plots
const POI = ParametricOptInterface
const MOI = MathOptInterface
const MOIU = MOI.Utilities
# generate problem data
μ = [11.5; 9.5; 6] / 100 #expected returns
Σ = [
166 34 58 #covariance matrix
34 64 4
58 4 100
] / 100^2
```
We first build the model with $\gamma$ as parameter in POI
```julia
function first_model(μ,Σ)
cached = MOI.Bridges.full_bridge_optimizer(
MOIU.CachingOptimizer(
MOIU.UniversalFallback(MOIU.Model{Float64}()),
Ipopt.Optimizer(),
),
Float64,
)
optimizer = POI.Optimizer(cached)
portfolio = direct_model(optimizer)
set_silent(portfolio)
N = length(μ)
@variable(portfolio, x[1:N] >= 0)
@variable(portfolio, γ in MOI.Parameter(0.0))
@objective(portfolio, Max, γ*dot(μ,x) - x' * Σ * x)
@constraint(portfolio, sum(x) == 1)
optimize!(portfolio)
return portfolio
end
```
Then, we update the $\gamma$ value in the model
```julia
function update_model!(portfolio,γ_value)
γ = portfolio[:γ]
MOI.set(portfolio, POI.ParameterValue(), γ, γ_value)
optimize!(portfolio)
return portfolio
end
```
Collecting all the return and risk results for each $\gamma$:
```julia
function add_to_dict(portfolios_values,portfolio,μ,Σ)
γ = portfolio[:γ]
γ_value = value(γ)
x = portfolio[:x]
x_value = value.(x)
portfolio_return = dot(μ,x_value)
portfolio_deviation = x_value' * Σ * x_value
portfolios_values[γ_value] = (portfolio_return,portfolio_deviation)
end
```
Run the portfolio optimization for different values of $\gamma$
```julia
portfolio = first_model(μ,Σ)
portfolios_values = Dict()
# Create a reference to the model to change it later
portfolio_ref = [portfolio]
add_to_dict(portfolios_values,portfolio,μ,Σ)
for γ_value in 0.02:0.02:1.0
portfolio_ref[] = update_model!(portfolio_ref[],γ_value)
add_to_dict(portfolios_values,portfolio_ref[],μ,Σ)
end
```
Plot the efficient frontier
```julia
portfolios_values = sort(portfolios_values,by=x->x[1])
portfolios_values_matrix = hcat([[v[1],v[2]] for v in values(portfolios_values)]...)'
plot(portfolios_values_matrix[:,2],portfolios_values_matrix[:,1],legend=false,
xlabel="Standard Deviation", ylabel = "Return", title = "Efficient Frontier")
``` | ParametricOptInterface | https://github.com/jump-dev/ParametricOptInterface.jl.git |
|
[
"MIT"
] | 0.1.1 | 92573689d59edeb259f16715a716e7932f94c261 | code | 149 | import Literate
foreach(["par_ldiv.jl", "iternz.jl"]) do i
Literate.markdown(joinpath("lit", i), "src/"; execute=true, repo_root_path="../")
end | SparseExtra | https://github.com/SobhanMP/SparseExtra.jl.git |
|
[
"MIT"
] | 0.1.1 | 92573689d59edeb259f16715a716e7932f94c261 | code | 995 | # shamelessly ~~stolen from~~ inspired by Krotov.jl
using SparseExtra, Documenter
import Pkg
DocMeta.setdocmeta!(SparseExtra, :DocTestSetup,
:(using SparseExtra); recursive=true)
PROJECT_TOML = Pkg.TOML.parsefile(joinpath(@__DIR__, "..", "Project.toml"))
VERSION = PROJECT_TOML["version"]
NAME = PROJECT_TOML["name"]
AUTHORS = join(PROJECT_TOML["authors"], ", ") * " and contributors"
GITHUB = "https://github.com/SobhanMP/SparseExtra.jl"
println("Starting makedocs")
makedocs(;
authors=AUTHORS,
sitename="SparseExtra.jl",
modules=[SparseExtra],
format=Documenter.HTML(;
prettyurls=true,
canonical="https://SobhanMP.github.io/SparseExtra.jl",
assets=String[],
footer="[$NAME.jl]($GITHUB) v$VERSION docs powered by [Documenter.jl](https://github.com/JuliaDocs/Documenter.jl).",
mathengine=KaTeX()
),
pages=[
"Home" => "index.md",
"`iternz`" => "iternz.md",
"Parallel `ldiv`" => "par_ldiv.md",
]
)
| SparseExtra | https://github.com/SobhanMP/SparseExtra.jl.git |
|
[
"MIT"
] | 0.1.1 | 92573689d59edeb259f16715a716e7932f94c261 | code | 4136 | # # The `iternz` API
# This returns an iterator over the structural non-zero elements of the array (entries that are stored in the structure, even if their value happens to be zero), i.e.
# ```julia
# all(iternz(x)) do (v, k...)
# x[k...] == v
# end
# ```
# The big idea is to abstract away all of the special loops needed to iterate over sparse containers. These include special LinearAlgebra matrices like `Diagonal` and `UpperTriangular`, or `SparseMatrixCSC`. Furthermore, it's possible to use this recursively, i.e. an iteration over a `Diagonal{SparseVector}` will skip the non-stored elements of the `SparseVector`.
# As an example, let's take the sum of the elements of a matrix whose indices satisfy `(i + j) % 7 == 0`. The most general way of writing it is:
using BenchmarkTools, SparseArrays
const n = 10_000
const A = sprandn(n, n, 5 / n);
function general(x::AbstractMatrix)
s = zero(eltype(x))
@inbounds for j in axes(x, 2),
i in axes(x, 1)
if (i + j) % 7 == 0
s += x[i, j]
end
end
return s
end
@benchmark general($A)
# Now this is pretty bad; we can improve the performance by using the sparse structure of the problem:
using SparseArrays: getcolptr, nonzeros, rowvals
function sparse_only(x::SparseMatrixCSC)
s = zero(eltype(x))
@inbounds for j in axes(x, 2),
ind in getcolptr(x)[j]:getcolptr(x)[j + 1] - 1
i = rowvals(x)[ind]
if (i + j) % 7 == 0
s += nonzeros(x)[ind]
end
end
return s
end
# We can test for correctness
sparse_only(A) == general(A)
# and benchmark the function
@benchmark sparse_only($A)
# We can see that, while writing the function requires understanding how CSC matrices are stored, the code is 600x faster. The thing is that this pattern gets repeated everywhere, so we might try to abstract it away. My proposition is the `iternz` API.
using SparseExtra
function iternz_only(x::AbstractMatrix)
s = zero(eltype(x))
for (v, i, j) in iternz(x)
if (i + j) % 7 == 0
s += v
end
end
return s
end
iternz_only(A) == general(A)
#
@benchmark iternz_only($A)
# The speed is the same as the specialized version, but there is no `@inbounds` and no need for ugly loops. As a bonus, it works on all of the specialized matrices:
using LinearAlgebra
all(iternz_only(i(A)) ≈ general(i(A)) for i in [Transpose, UpperTriangular, LowerTriangular, Diagonal, Symmetric]) # symmetric changes the order of execution.
# Since these implementations are written using the `iternz` interface themselves, the code generalizes to the cases where these special matrices are combined, removing the need for tedious specializations.
# For instance, the 3-argument `dot` can be written as:
function iternz_dot(x::AbstractVector, A::AbstractMatrix, y::AbstractVector)
(length(x), length(y)) == size(A) || throw(ArgumentError("bad shape"))
acc = zero(promote_type(eltype(x), eltype(A), eltype(y)))
@inbounds for (v, i, j) in iternz(A)
acc += x[i] * v * y[j]
end
acc
end
const (x, y) = randn(n), randn(n);
const SA = Symmetric(A);
# Correctness tests
dot(x, A, y) ≈ iternz_dot(x, A, y) && dot(x, SA, y) ≈ iternz_dot(x, SA, y)
# Benchmarks
@benchmark dot($x, $A, $y)
#
@benchmark iternz_dot($x, $A, $y)
#
@benchmark dot($x, $SA, $y)
#
@benchmark iternz_dot($x, $SA, $y)
# ## API:
# The API is pretty simple: `iternz(A)` should return an iterable such that
# ```julia
# all(A[ind...] == v for (v, ind...) in iternz(A))
# ```
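# As a small sketch of this contract on a nested container (a `Diagonal`
# wrapping a `SparseVector` only visits the stored entries):
D = Diagonal(sparsevec([1, 3], [2.0, 4.0], 4))
all(D[ind...] == v for (v, ind...) in iternz(D))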
#
# If the array is a container for another array type, the inner iteration should be done via `iternz` as well. This repo provides the `IterateNZ` container whose sole purpose is to hold the array in order to overload `Base.iterate`. Additionally, matrices have the `skip_col` and `skip_row_to` functions defined. The idea is that, if meaningful, `skip_col` should return a state such that iterating on that state gives the first element of the next column, and in the case of `skip_row_to(cont, state, i)`, `iterate` should return the element at `(i, j)` where `j` is the current column.
# ## TODO
# - test with non-one based indexing
| SparseExtra | https://github.com/SobhanMP/SparseExtra.jl.git |
|
[
"MIT"
] | 0.1.1 | 92573689d59edeb259f16715a716e7932f94c261 | code | 381 | # # parallel ldiv!
using SparseExtra, LinearAlgebra, SparseArrays, BenchmarkTools
const n = 10_000
const A = sprandn(n, n, 5 / n);
const C = A + I
const B = Matrix(sprandn(n, n, 1 / n));
const F = lu(C);
const X = similar(B);
# Standard:
@benchmark ldiv!($X, $F, $B)
# With FLoops.jl:
@benchmark par_solve!($X, $F, $B)
# With manual loops:
@benchmark par_ldiv!_t($X, $F, $B)
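# As a sanity check, the parallel versions should produce the same solution as
# the serial one (a quick sketch):
Y = similar(B);
par_solve!(Y, F, B);
ldiv!(X, F, B);
X ≈ Y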
| SparseExtra | https://github.com/SobhanMP/SparseExtra.jl.git |