// Copyright 2018 Mozilla
//
// Licensed under the Apache License, Version 2.0 (the "License"); you may not use
// this file except in compliance with the License. You may obtain a copy of the
// License at http://www.apache.org/licenses/LICENSE-2.0
// Unless required by applicable law or agreed to in writing, software distributed
// under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
// CONDITIONS OF ANY KIND, either express or implied. See the License for the
// specific language governing permissions and limitations under the License.
//! This module exposes a Foreign Function Interface (FFI) that allows Mentat to be
//! called from other languages.
//!
//! Functions that are available to other languages in this module are defined as
//! `extern "C"` functions, which allows them to be laid out correctly for the
//! platform's C ABI. They all carry a `#[no_mangle]` attribute to ensure that
//! Rust's name mangling is turned off, so that they are easier to link against.
//!
//! Mentat's FFI contains unsafe code. As it is an interface between foreign code
//! and native Rust code, Rust cannot guarantee that the types and data that have been passed
//! to it from another language are present and in the format it is expecting.
//! This interface is designed to ensure that nothing unsafe passes through this module
//! and enters Mentat proper.
//!
//! Structs defined with `#[repr(C)]` are guaranteed to have a layout that is compatible
//! with the platform's representation in C.
//!
//! This API passes pointers in two ways, depending on the lifetime of the value and
//! which value owns it.
//! Pointers to values that are guaranteed to live beyond the lifetime of the function
//! are passed over the FFI as a raw pointer:
//!
//! `value as *const Binding`
//!
//! Pointers to values that cannot be guaranteed to live beyond the lifetime of the function
//! are first `Box`ed so that they live on the heap, and the resulting raw pointer is passed instead:
//!
//! `Box::into_raw(Box::new(value))`
//!
//! The memory for a value that is moved onto the heap before being passed over the FFI
//! is no longer managed automatically, but it is still owned by Rust. Therefore the pointer
//! must be returned to Rust in order to be released. To this end a number of `destructor`
//! functions are provided, one for each Rust value type that is passed out, as well as a
//! catch-all destructor to release memory for `#[repr(C)]` values.
//! The destructors reclaim the memory via [Box](std::boxed::Box) and then drop the value,
//! causing the memory to be released.
//!
//! A macro has been provided to make defining destructors easier.
//!
//! `define_destructor!(query_builder_destroy, QueryBuilder);`
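//!
//! For illustration, such a destructor might expand to roughly the following (a sketch,
//! not the macro's literal expansion):
//!
//! ```rust,ignore
//! #[no_mangle]
//! pub unsafe extern "C" fn query_builder_destroy(obj: *mut QueryBuilder) {
//!     if !obj.is_null() {
//!         // Reclaim ownership; dropping the Box releases the heap allocation.
//!         let _ = Box::from_raw(obj);
//!     }
//! }
//! ```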
//!
//! Passing a pointer to memory that has already been released will cause Mentat to crash,
//! so callers have to be careful to ensure they manage their pointers properly.
//! Failure to call a destructor for a value on the heap will cause a memory leak.
//!
//! Generally, the functions exposed in this module have a direct mapping to existing Mentat APIs,
//! in order to keep application logic to a minimum and provide the greatest flexibility
//! for callers using the interface. However, in some cases a single convenience function
//! has been provided in order to make the interface easier to use and reduce the number
//! of calls that have to be made over the FFI to perform a task. An example of this is
//! `store_register_observer`, which takes a single native callback function that is then
//! wrapped inside a Rust closure and added to a [TxObserver](mentat::TxObserver) struct. This is then used to
//! register the observer with the store.
//!
//! Functions that may fail take an out parameter of type `*mut ExternError`. In the event the
//! function fails, information about the error that occurred will be stored inside it (and,
//! typically, a null pointer will be returned). Convenience functions for unpacking a
//! `Result<T, E>` as a `*mut T` while writing any error to the `ExternError` are provided as
//! `translate_result`, `translate_opt_result` (for `Result<Option<T>>`) and `translate_void_result`
//! (for `Result<(), T>`). Callers are responsible for freeing the `message` field of `ExternError`.
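//!
//! For illustration, these helpers follow roughly the shape below (a sketch, not the real
//! implementation, which lives in `utils::error`; `write_error_into` is a hypothetical
//! helper standing in for whatever bookkeeping `ExternError` actually performs):
//!
//! ```rust,ignore
//! unsafe fn translate_result_sketch<T, E: std::fmt::Display>(
//!     result: Result<T, E>,
//!     error: *mut ExternError,
//! ) -> *mut T {
//!     match result {
//!         // Success: move the value to the heap and hand its raw pointer to the caller.
//!         Ok(value) => Box::into_raw(Box::new(value)),
//!         // Failure: record the error in the out parameter and return null.
//!         Err(e) => {
//!             write_error_into(error, e); // hypothetical helper
//!             std::ptr::null_mut()
//!         }
//!     }
//! }
//! ```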
#![allow(unused_doc_comments)]
#![allow(clippy::missing_safety_doc)]
extern crate core;
extern crate libc;
extern crate mentat;
use core::fmt::Display;
use std::collections::BTreeSet;
use std::ffi::CString;
use std::os::raw::{c_char, c_int, c_longlong, c_ulonglong, c_void};
use std::slice;
use std::sync::Arc;
use std::vec;
pub use mentat::{
Binding, CacheDirection, Entid, FindSpec, HasSchema, InProgress, KnownEntid, QueryBuilder,
QueryInputs, QueryOutput, QueryResults, Queryable, RelResult, Store, TxObserver, TxReport,
TypedValue, Uuid, ValueType, Variable,
};
pub use mentat::entity_builder::{BuildTerms, EntityBuilder, InProgressBuilder};
pub mod android;
pub mod utils;
pub use utils::strings::{c_char_to_string, kw_from_string, string_to_c_char};
use utils::error::{translate_opt_result, translate_result, translate_void_result, ExternError};
pub use utils::log;
// type aliases for iterator types.
pub type BindingIterator = vec::IntoIter<Binding>;
pub type BindingListIterator = std::slice::Chunks<'static, mentat::Binding>;
/// Helper macro for asserting one or more pointers are not null at the same time.
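///
/// For example, `assert_not_null!(builder, value);` panics, naming the first null pointer
/// it encounters, before any dereference takes place.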
#[macro_export]
macro_rules! assert_not_null {
($($e:expr),+ $(,)*) => ($(
assert!(!$e.is_null(), concat!("Unexpected null pointer: ", stringify!($e)));
)+);
}
/// A C representation of the change provided by the transaction observers
/// from a single transact.
/// Holds a transaction identifier, the changes as a set of affected attributes
/// and the length of the list of changes.
#[repr(C)]
#[derive(Debug, Clone)]
pub struct TransactionChange {
pub txid: Entid,
pub changes: *const c_longlong,
pub changes_len: c_ulonglong,
}
/// A C representation of the list of changes provided by the transaction observers.
/// Provides the list of changes together with the length of the list.
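///
/// For illustration, a Rust-side view of the data might look like the following
/// (a hypothetical helper; the real consumers are foreign callers indexing manually):
///
/// ```rust,ignore
/// unsafe fn collect_changes(list: &TxChangeList) -> Vec<(Entid, Vec<i64>)> {
///     // View the reports, and each report's changed attributes, as slices.
///     std::slice::from_raw_parts(list.reports, list.len as usize)
///         .iter()
///         .map(|report| {
///             let changes =
///                 std::slice::from_raw_parts(report.changes, report.changes_len as usize);
///             (report.txid, changes.to_vec())
///         })
///         .collect()
/// }
/// ```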
#[repr(C)]
#[derive(Debug)]
pub struct TxChangeList {
pub reports: *const TransactionChange,
pub len: c_ulonglong,
}
#[repr(C)]
#[derive(Debug)]
pub struct InProgressTransactResult<'a, 'c> {
pub in_progress: *mut InProgress<'a, 'c>,
pub tx_report: *mut TxReport,
// TODO: This is a different usage pattern than most uses of ExternError. Is this bad?
pub err: ExternError,
}
impl<'a, 'c> InProgressTransactResult<'a, 'c> {
// This takes a tuple so that we can pass the result of `transact()` into it directly.
unsafe fn from_transact<E: Display>(tuple: (InProgress<'a, 'c>, Result<TxReport, E>)) -> Self {
let (in_progress, tx_result) = tuple;
let mut err = ExternError::default();
let tx_report = translate_result(tx_result, (&mut err) as *mut ExternError);
InProgressTransactResult {
in_progress: Box::into_raw(Box::new(in_progress)),
tx_report,
err,
}
}
}
/// A store cannot be opened twice for the same location.
/// Once created, the reference to the store is held by the caller and not by Rust;
/// the caller is therefore responsible for calling `store_destroy` to release the memory
/// used by the [Store](mentat::Store) in order to avoid a memory leak.
// TODO: Take an `ExternError` parameter, rather than crashing on error.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `store_destroy` is provided for releasing the memory for this
/// pointer type.
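///
/// A minimal Rust-side sketch of the call pattern (the real caller is foreign code, and
/// `store_destroy` is the destructor described in the module documentation):
///
/// ```rust,ignore
/// let mut err = ExternError::default();
/// let path = CString::new("/tmp/demo.db").unwrap();
/// let store = unsafe { store_open(path.as_ptr(), &mut err) };
/// assert!(!store.is_null());
/// // ... use the store, then release it with `store_destroy` ...
/// ```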
#[no_mangle]
pub unsafe extern "C" fn store_open(uri: *const c_char, error: *mut ExternError) -> *mut Store {
assert_not_null!(uri);
let uri = c_char_to_string(uri);
translate_result(Store::open(&uri), error)
}
/// Variant of store_open that opens an encrypted database.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `store_destroy` is provided for releasing the memory for this
/// pointer type.
#[cfg(feature = "sqlcipher")]
#[no_mangle]
pub unsafe extern "C" fn store_open_encrypted(
uri: *const c_char,
key: *const c_char,
error: *mut ExternError,
) -> *mut Store {
assert_not_null!(uri, key);
let uri = c_char_to_string(uri);
let key = c_char_to_string(key);
translate_result(Store::open_with_key(&uri, &key), error)
}
// TODO: open empty
// TODO: dismantle
// TODO: conn
// TODO: begin_read
/// Starts a new transaction to allow multiple transacts to be
/// performed together. This is more efficient than performing
/// a large set of individual commits.
///
/// # Safety
///
/// Callers must ensure that the pointer to the [Store](mentat::Store) is not dangling.
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `in_progress_destroy` is provided for releasing the memory for this
/// pointer type.
///
/// TODO: Document the errors that can result from begin_transaction
#[no_mangle]
pub unsafe extern "C" fn store_begin_transaction<'a, 'c>(
store: *mut Store,
error: *mut ExternError,
) -> *mut InProgress<'a, 'c> {
assert_not_null!(store);
let store = &mut *store;
translate_result(store.begin_transaction(), error)
}
/// Performs a single transact operation using the current in-progress
/// transaction. Takes EDN as a string to transact.
///
/// Returns a pointer to the resulting [TxReport](mentat::TxReport) (null on error).
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `tx_report_destroy` is provided for releasing the memory for this
/// pointer type.
///
/// TODO: Document the errors that can result from transact
#[no_mangle]
pub unsafe extern "C" fn in_progress_transact<'m>(
in_progress: *mut InProgress<'m, 'm>,
transaction: *const c_char,
error: *mut ExternError,
) -> *mut TxReport {
assert_not_null!(in_progress);
let in_progress = &mut *in_progress;
let transaction = c_char_to_string(transaction);
translate_result(in_progress.transact(transaction), error)
}
/// Commits all the transacts that have been performed using this
/// in-progress transaction.
///
/// # Safety
///
/// This consumes the [InProgress](mentat::InProgress); the `in_progress` pointer must not
/// be used (or destroyed) after this call.
///
/// TODO: Document the errors that can result from transact
#[no_mangle]
pub unsafe extern "C" fn in_progress_commit<'m>(
in_progress: *mut InProgress<'m, 'm>,
error: *mut ExternError,
) {
assert_not_null!(in_progress);
let in_progress = Box::from_raw(in_progress);
translate_void_result(in_progress.commit(), error);
}
/// Rolls back all the transacts that have been performed using this
/// in-progress transaction.
///
/// # Safety
///
/// This consumes the [InProgress](mentat::InProgress); the `in_progress` pointer must not
/// be used (or destroyed) after this call.
///
/// TODO: Document the errors that can result from rollback
#[no_mangle]
pub unsafe extern "C" fn in_progress_rollback<'m>(
in_progress: *mut InProgress<'m, 'm>,
error: *mut ExternError,
) {
assert_not_null!(in_progress);
let in_progress = Box::from_raw(in_progress);
translate_void_result(in_progress.rollback(), error);
}
/// Creates a builder using the in progress transaction to allow for programmatic
/// assertion of values.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `in_progress_builder_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder<'m>(
in_progress: *mut InProgress<'m, 'm>,
) -> *mut InProgressBuilder {
assert_not_null!(in_progress);
let in_progress = Box::from_raw(in_progress);
Box::into_raw(Box::new(in_progress.builder()))
}
/// Creates a builder for an entity with `tempid` using the in progress transaction to
/// allow for programmatic assertion of values for that entity.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `entity_builder_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn in_progress_entity_builder_from_temp_id<'m>(
in_progress: *mut InProgress<'m, 'm>,
temp_id: *const c_char,
) -> *mut EntityBuilder<InProgressBuilder> {
assert_not_null!(in_progress);
let in_progress = Box::from_raw(in_progress);
let temp_id = c_char_to_string(temp_id);
Box::into_raw(Box::new(in_progress.builder().describe_tempid(&temp_id)))
}
/// Creates a builder for an entity with `entid` using the in progress transaction to
/// allow for programmatic assertion of values for that entity.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `entity_builder_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn in_progress_entity_builder_from_entid<'m>(
in_progress: *mut InProgress<'m, 'm>,
entid: c_longlong,
) -> *mut EntityBuilder<InProgressBuilder> {
assert_not_null!(in_progress);
let in_progress = Box::from_raw(in_progress);
Box::into_raw(Box::new(in_progress.builder().describe(KnownEntid(entid))))
}
/// Starts a new transaction and creates a builder using the transaction
/// to allow for programmatic assertion of values.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `in_progress_builder_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn store_in_progress_builder<'a, 'c>(
store: *mut Store,
error: *mut ExternError,
) -> *mut InProgressBuilder<'a, 'c> {
assert_not_null!(store);
let store = &mut *store;
let result = store
.begin_transaction()
.map(|in_progress| in_progress.builder());
translate_result(result, error)
}
/// Starts a new transaction and creates a builder for an entity with `tempid`
/// using the transaction to allow for programmatic assertion of values for that entity.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `entity_builder_destroy` is provided for releasing the memory for this
/// pointer type.
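///
/// A Rust-side sketch of the builder flow this enables (the real caller is foreign code;
/// `:person/name` is a hypothetical attribute assumed to exist in the schema):
///
/// ```rust,ignore
/// let mut err = ExternError::default();
/// let temp_id = CString::new("a").unwrap();
/// let attr = CString::new(":person/name").unwrap();
/// let name = CString::new("Grace").unwrap();
/// unsafe {
///     // `store` is a *mut Store previously obtained from `store_open`.
///     let builder = store_entity_builder_from_temp_id(store, temp_id.as_ptr(), &mut err);
///     entity_builder_add_string(builder, attr.as_ptr(), name.as_ptr(), &mut err);
///     let report = entity_builder_commit(builder, &mut err);
///     // ... inspect the report, then release it with `tx_report_destroy` ...
/// }
/// ```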
#[no_mangle]
pub unsafe extern "C" fn store_entity_builder_from_temp_id<'a, 'c>(
store: *mut Store,
temp_id: *const c_char,
error: *mut ExternError,
) -> *mut EntityBuilder<InProgressBuilder<'a, 'c>> {
assert_not_null!(store);
let store = &mut *store;
let temp_id = c_char_to_string(temp_id);
let result = store
.begin_transaction()
.map(|in_progress| in_progress.builder().describe_tempid(&temp_id));
translate_result(result, error)
}
/// Starts a new transaction and creates a builder for an entity with `entid`
/// using the transaction to allow for programmatic assertion of values for that entity.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `entity_builder_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn store_entity_builder_from_entid<'a, 'c>(
store: *mut Store,
entid: c_longlong,
error: *mut ExternError,
) -> *mut EntityBuilder<InProgressBuilder<'a, 'c>> {
assert_not_null!(store);
let store = &mut *store;
let result = store
.begin_transaction()
.map(|in_progress| in_progress.builder().describe(KnownEntid(entid)));
translate_result(result, error)
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/string`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_add_string(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: *const c_char,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = c_char_to_string(value).into();
translate_void_result(builder.add(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/long`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_add_long(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: c_longlong,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = TypedValue::Long(value);
translate_void_result(builder.add(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If `value` is not present as an Entid in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/ref`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_add_ref(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: c_longlong,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = TypedValue::Ref(value);
translate_void_result(builder.add(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If `value` is not present as an attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/keyword`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_add_keyword(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: *const c_char,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = kw_from_string(c_char_to_string(value)).into();
translate_void_result(builder.add(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/boolean`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_add_boolean(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: bool,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = value.into();
translate_void_result(builder.add(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/double`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_add_double(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: f64,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = value.into();
translate_void_result(builder.add(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/instant`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_add_timestamp(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: c_longlong,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = TypedValue::instant(value);
translate_void_result(builder.add(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/uuid`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_add_uuid(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: *const [u8; 16],
error: *mut ExternError,
) {
assert_not_null!(builder, value);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value = &*value;
let value = Uuid::from_slice(value).expect("valid uuid");
let value: TypedValue = value.into();
translate_void_result(builder.add(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/string`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_retract_string(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: *const c_char,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = c_char_to_string(value).into();
translate_void_result(builder.retract(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/long`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_retract_long(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: c_longlong,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = TypedValue::Long(value);
translate_void_result(builder.retract(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/ref`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_retract_ref(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: c_longlong,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = TypedValue::Ref(value);
translate_void_result(builder.retract(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/keyword`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_retract_keyword(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: *const c_char,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = kw_from_string(c_char_to_string(value)).into();
translate_void_result(builder.retract(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/boolean`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_retract_boolean(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: bool,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = value.into();
translate_void_result(builder.retract(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/double`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_retract_double(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: f64,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = value.into();
translate_void_result(builder.retract(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/instant`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_retract_timestamp(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: c_longlong,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = TypedValue::instant(value);
translate_void_result(builder.retract(KnownEntid(entid), kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/uuid`.
///
/// # Safety
/// TODO:
// TODO don't panic if the UUID is not valid - return result instead.
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_retract_uuid(
builder: *mut InProgressBuilder,
entid: c_longlong,
kw: *const c_char,
value: *const [u8; 16],
error: *mut ExternError,
) {
assert_not_null!(builder, value);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value = &*value;
let value = Uuid::from_slice(value).expect("valid uuid");
let value: TypedValue = value.into();
translate_void_result(builder.retract(KnownEntid(entid), kw, value), error);
}
/// Transacts and commits all the assertions and retractions that have been performed
/// using this builder.
///
/// This consumes the builder and the enclosed [InProgress](mentat::InProgress) transaction.
///
/// # Safety
/// TODO:
// TODO: Document the errors that can result from transact
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_commit(
builder: *mut InProgressBuilder,
error: *mut ExternError,
) -> *mut TxReport {
assert_not_null!(builder);
let builder = Box::from_raw(builder);
translate_result(builder.commit(), error)
}
/// Transacts all the assertions and retractions that have been performed
/// using this builder.
///
/// This consumes the builder and returns the enclosed [InProgress](mentat::InProgress) transaction
/// inside the [InProgressTransactResult](InProgressTransactResult) alongside the [TxReport](mentat::TxReport) generated
/// by the transact.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// The destructors `in_progress_destroy` and `tx_report_destroy` are provided for
/// releasing the memory for these pointer types.
///
// TODO: Document the errors that can result from transact
#[no_mangle]
pub unsafe extern "C" fn in_progress_builder_transact<'a, 'c>(
builder: *mut InProgressBuilder<'a, 'c>,
) -> InProgressTransactResult<'a, 'c> {
assert_not_null!(builder);
let builder = Box::from_raw(builder);
InProgressTransactResult::from_transact(builder.transact())
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/string`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_add_string(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: *const c_char,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = c_char_to_string(value).into();
translate_void_result(builder.add(kw, value), error);
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/long`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_add_long(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: c_longlong,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = TypedValue::Long(value);
translate_void_result(builder.add(kw, value), error);
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/ref`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_add_ref(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: c_longlong,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = TypedValue::Ref(value);
translate_void_result(builder.add(kw, value), error);
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/keyword`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_add_keyword(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: *const c_char,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = kw_from_string(c_char_to_string(value)).into();
translate_void_result(builder.add(kw, value), error);
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/boolean`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_add_boolean(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: bool,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = value.into();
translate_void_result(builder.add(kw, value), error);
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/double`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_add_double(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: f64,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = value.into();
translate_void_result(builder.add(kw, value), error);
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/instant`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_add_timestamp(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: c_longlong,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = TypedValue::instant(value);
translate_void_result(builder.add(kw, value), error);
}
/// Uses `builder` to assert `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/uuid`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_add_uuid(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: *const [u8; 16],
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value = &*value;
let value = Uuid::from_slice(value).expect("valid uuid");
let value: TypedValue = value.into();
translate_void_result(builder.add(kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/string`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_retract_string(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: *const c_char,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = c_char_to_string(value).into();
translate_void_result(builder.retract(kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/long`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_retract_long(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: c_longlong,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = TypedValue::Long(value);
translate_void_result(builder.retract(kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/ref`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_retract_ref(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: c_longlong,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = TypedValue::Ref(value);
translate_void_result(builder.retract(kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/keyword`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_retract_keyword(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: *const c_char,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = kw_from_string(c_char_to_string(value)).into();
translate_void_result(builder.retract(kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/boolean`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_retract_boolean(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: bool,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = value.into();
translate_void_result(builder.retract(kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/double`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_retract_double(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: f64,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = value.into();
translate_void_result(builder.retract(kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/instant`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn entity_builder_retract_timestamp(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: c_longlong,
error: *mut ExternError,
) {
assert_not_null!(builder);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value: TypedValue = TypedValue::instant(value);
translate_void_result(builder.retract(kw, value), error);
}
/// Uses `builder` to retract `value` for `kw` on entity `entid`.
///
/// # Errors
///
/// If `entid` is not present in the store.
/// If `kw` is not a valid attribute in the store.
/// If the `:db/type` of the attribute described by `kw` is not `:db.type/uuid`.
///
/// # Safety
/// TODO:
// TODO: Generalise with macro https://github.com/mozilla/mentat/issues/703
// TODO: don't panic if the UUID is not valid - return result instead.
#[no_mangle]
pub unsafe extern "C" fn entity_builder_retract_uuid(
builder: *mut EntityBuilder<InProgressBuilder>,
kw: *const c_char,
value: *const [u8; 16],
error: *mut ExternError,
) {
assert_not_null!(builder, value);
let builder = &mut *builder;
let kw = kw_from_string(c_char_to_string(kw));
let value = &*value;
let value = Uuid::from_slice(value).expect("valid uuid");
let value: TypedValue = value.into();
translate_void_result(builder.retract(kw, value), error);
}
/// Transacts all the assertions and retractions that have been performed
/// using this builder.
///
/// This consumes the builder and returns the enclosed [InProgress](mentat::InProgress) transaction
/// inside the [InProgressTransactResult](InProgressTransactResult) alongside the [TxReport](mentat::TxReport) generated
/// by the transact.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// The destructors `in_progress_destroy` and `tx_report_destroy` are provided for
/// releasing the memory for these pointer types.
///
/// TODO: Document the errors that can result from transact
#[no_mangle]
pub unsafe extern "C" fn entity_builder_transact<'a, 'c>(
builder: *mut EntityBuilder<InProgressBuilder<'a, 'c>>,
) -> InProgressTransactResult<'a, 'c> {
assert_not_null!(builder);
let builder = Box::from_raw(builder);
InProgressTransactResult::from_transact(builder.transact())
}
/// Transacts and commits all the assertions and retractions that have been performed
/// using this builder.
///
/// This consumes the builder and the enclosed [InProgress](mentat::InProgress) transaction.
///
/// # Safety
/// TODO:
/// TODO: Document the errors that can result from transact
#[no_mangle]
pub unsafe extern "C" fn entity_builder_commit(
builder: *mut EntityBuilder<InProgressBuilder>,
error: *mut ExternError,
) -> *mut TxReport {
assert_not_null!(builder);
let builder = Box::from_raw(builder);
translate_result(builder.commit(), error)
}
/// Performs a single transaction against the store.
///
/// # Safety
/// TODO:
/// TODO: Document the errors that can result from transact
#[no_mangle]
pub unsafe extern "C" fn store_transact(
store: *mut Store,
transaction: *const c_char,
error: *mut ExternError,
) -> *mut TxReport {
assert_not_null!(store);
let store = &mut *store;
let transaction = c_char_to_string(transaction);
let result = store.begin_transaction().and_then(|mut in_progress| {
in_progress
.transact(transaction)
.and_then(|tx_report| in_progress.commit().map(|_| tx_report))
});
translate_result(result, error)
}
/// Fetches the `tx_id` for the given [TxReport](mentat::TxReport).
///
/// # Safety
///
/// Callers must ensure that the pointer to the [TxReport](mentat::TxReport) is not dangling.
#[no_mangle]
pub unsafe extern "C" fn tx_report_get_entid(tx_report: *mut TxReport) -> c_longlong {
assert_not_null!(tx_report);
let tx_report = &*tx_report;
tx_report.tx_id as c_longlong
}
/// Fetches the `tx_instant` for the given [TxReport](mentat::TxReport).
///
/// # Safety
///
/// Callers must ensure that the pointer to the [TxReport](mentat::TxReport) is not dangling.
#[no_mangle]
pub unsafe extern "C" fn tx_report_get_tx_instant(tx_report: *mut TxReport) -> c_longlong {
assert_not_null!(tx_report);
let tx_report = &*tx_report;
tx_report.tx_instant.timestamp() as c_longlong
}
/// Fetches the [Entid](mentat::Entid) assigned to the `tempid` during the transaction represented
/// by the given [TxReport](mentat::TxReport).
///
/// Note that this returns the value as a heap-allocated pointer that the caller is responsible
/// for freeing with `destroy()`.
// TODO: This is gross and unnecessary
#[no_mangle]
pub unsafe extern "C" fn tx_report_entity_for_temp_id(
tx_report: *mut TxReport,
tempid: *const c_char,
) -> *mut c_longlong {
assert_not_null!(tx_report);
let tx_report = &*tx_report;
let key = c_char_to_string(tempid);
if let Some(entid) = tx_report.tempids.get(key) {
Box::into_raw(Box::new(*entid as c_longlong))
} else {
std::ptr::null_mut()
}
}
/// Adds an attribute to the cache.
/// `store_cache_attribute_forward` caches values for an attribute keyed by entity
/// (i.e. find values and entities that have this attribute, or find the values of this attribute for an entity).
#[no_mangle]
pub unsafe extern "C" fn store_cache_attribute_forward(
store: *mut Store,
attribute: *const c_char,
error: *mut ExternError,
) {
assert_not_null!(store);
let store = &mut *store;
let kw = kw_from_string(c_char_to_string(attribute));
translate_void_result(store.cache(&kw, CacheDirection::Forward), error);
}
/// Adds an attribute to the cache.
/// `store_cache_attribute_reverse` caches entities for an attribute keyed by value
/// (i.e. find entities that have a particular value for an attribute).
#[no_mangle]
pub unsafe extern "C" fn store_cache_attribute_reverse(
store: *mut Store,
attribute: *const c_char,
error: *mut ExternError,
) {
assert_not_null!(store);
let store = &mut *store;
let kw = kw_from_string(c_char_to_string(attribute));
translate_void_result(store.cache(&kw, CacheDirection::Reverse), error);
}
/// Adds an attribute to the cache.
/// `store_cache_attribute_bi_directional` caches the attribute in both available directions, forward and reverse.
///
/// `Forward` caches values for an attribute keyed by entity
/// (i.e. find values and entities that have this attribute, or find the values of this attribute for an entity).
///
/// `Reverse` caches entities for an attribute keyed by value
/// (i.e. find entities that have a particular value for an attribute).
#[no_mangle]
pub unsafe extern "C" fn store_cache_attribute_bi_directional(
store: *mut Store,
attribute: *const c_char,
error: *mut ExternError,
) {
assert_not_null!(store);
let store = &mut *store;
let kw = kw_from_string(c_char_to_string(attribute));
translate_void_result(store.cache(&kw, CacheDirection::Both), error);
}
/// Creates a [QueryBuilder](mentat::QueryBuilder) from the given store to execute the provided query.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `query_builder_destroy` is provided for releasing the memory for this
/// pointer type.
///
/// TODO: Update QueryBuilder so it only takes a [Store](mentat::Store) pointer on execution
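///
/// A Rust-side sketch of the query flow (the real caller is foreign code; the query and
/// attribute are hypothetical, and variables declared in `:in` can be bound with the
/// `query_builder_bind_*` functions below):
///
/// ```rust,ignore
/// let q = CString::new("[:find ?name . :where [_ :person/name ?name]]").unwrap();
/// let mut err = ExternError::default();
/// unsafe {
///     // `store` is a *mut Store previously obtained from `store_open`.
///     let builder = store_query(store, q.as_ptr());
///     let binding = query_builder_execute_scalar(builder, &mut err);
///     if !binding.is_null() {
///         // Consumes the Binding; free the returned C string with `rust_c_string_destroy`.
///         let name = typed_value_into_string(binding);
///     }
///     // Release the builder with `query_builder_destroy` when finished.
/// }
/// ```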
#[no_mangle]
pub unsafe extern "C" fn store_query<'a>(
store: *mut Store,
query: *const c_char,
) -> *mut QueryBuilder<'a> {
assert_not_null!(store);
let query = c_char_to_string(query);
let store = &mut *store;
Box::into_raw(Box::new(QueryBuilder::new(store, query)))
}
/// Binds a [TypedValue::Long](mentat::TypedValue::Long) to a [Variable](mentat::Variable) with the given name.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn query_builder_bind_long(
query_builder: *mut QueryBuilder,
var: *const c_char,
value: c_longlong,
) {
assert_not_null!(query_builder);
let var = c_char_to_string(var);
let query_builder = &mut *query_builder;
query_builder.bind_long(&var, value);
}
/// Binds a [TypedValue::Ref](mentat::TypedValue::Ref) to a [Variable](mentat::Variable) with the given name.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn query_builder_bind_ref(
query_builder: *mut QueryBuilder,
var: *const c_char,
value: c_longlong,
) {
assert_not_null!(query_builder);
let var = c_char_to_string(var);
let query_builder = &mut *query_builder;
query_builder.bind_ref(&var, value);
}
/// Binds a [TypedValue::Ref](mentat::TypedValue::Ref) to a [Variable](mentat::Variable) with the given name. Takes a keyword as a C string in the format
/// `:namespace/name` and converts it into a [NamespacedKeyword](mentat::NamespacedKeyword).
///
/// # Panics
///
/// If the provided keyword does not map to a valid keyword in the schema.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn query_builder_bind_ref_kw(
query_builder: *mut QueryBuilder,
var: *const c_char,
value: *const c_char,
) {
assert_not_null!(query_builder);
let var = c_char_to_string(var);
let kw = kw_from_string(c_char_to_string(value));
let query_builder = &mut *query_builder;
if let Err(err) = query_builder.bind_ref_from_kw(&var, kw) {
panic!("{}", err);
}
}
/// Binds a [TypedValue::Keyword](mentat::TypedValue::Keyword) to a [Variable](mentat::Variable) with the given name. Takes a keyword as a C string in the format
/// `:namespace/name` and converts it into a [NamespacedKeyword](mentat::NamespacedKeyword).
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn query_builder_bind_kw(
query_builder: *mut QueryBuilder,
var: *const c_char,
value: *const c_char,
) {
assert_not_null!(query_builder);
let var = c_char_to_string(var);
let query_builder = &mut *query_builder;
let kw = kw_from_string(c_char_to_string(value));
query_builder.bind_value(&var, kw);
}
/// Binds a [TypedValue::Boolean](mentat::TypedValue::Boolean) to a [Variable](mentat::Variable) with the given name.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn query_builder_bind_boolean(
query_builder: *mut QueryBuilder,
var: *const c_char,
value: bool,
) {
assert_not_null!(query_builder);
let var = c_char_to_string(var);
let query_builder = &mut *query_builder;
query_builder.bind_value(&var, value);
}
/// Binds a [TypedValue::Double](mentat::TypedValue::Double) to a [Variable](mentat::Variable) with the given name.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn query_builder_bind_double(
query_builder: *mut QueryBuilder,
var: *const c_char,
value: f64,
) {
assert_not_null!(query_builder);
let var = c_char_to_string(var);
let query_builder = &mut *query_builder;
query_builder.bind_value(&var, value);
}
/// Binds a [TypedValue::Instant](mentat::TypedValue::Instant) to a [Variable](mentat::Variable) with the given name.
/// Takes a timestamp in microseconds.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn query_builder_bind_timestamp(
query_builder: *mut QueryBuilder,
var: *const c_char,
value: c_longlong,
) {
assert_not_null!(query_builder);
let var = c_char_to_string(var);
let query_builder = &mut *query_builder;
query_builder.bind_instant(&var, value);
}
/// Binds a [TypedValue::String](mentat::TypedValue::String) to a [Variable](mentat::Variable) with the given name.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn query_builder_bind_string(
query_builder: *mut QueryBuilder,
var: *const c_char,
value: *const c_char,
) {
assert_not_null!(query_builder);
let var = c_char_to_string(var);
let value = c_char_to_string(value);
let query_builder = &mut *query_builder;
query_builder.bind_value(&var, value);
}
/// Binds a [TypedValue::Uuid](mentat::TypedValue::Uuid) to a [Variable](mentat::Variable) with the given name.
/// Takes a `UUID` as a byte slice of length 16. This maps directly to the `uuid_t` C type.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn query_builder_bind_uuid(
query_builder: *mut QueryBuilder,
var: *const c_char,
value: *const [u8; 16],
) {
assert_not_null!(query_builder, value);
let var = c_char_to_string(var);
let value = &*value;
let value = Uuid::from_slice(value).expect("valid uuid");
let query_builder = &mut *query_builder;
query_builder.bind_value(&var, value);
}
/// Executes a query and returns the results as a [Scalar](mentat::QueryResults::Scalar).
///
/// # Panics
///
/// If the find set of the query executed is not structured `[:find ?foo . :where ...]`.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `typed_value_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn query_builder_execute_scalar(
query_builder: *mut QueryBuilder,
error: *mut ExternError,
) -> *mut Binding {
assert_not_null!(query_builder);
let query_builder = &mut *query_builder;
let results = query_builder.execute_scalar();
translate_opt_result(results, error)
}
/// Executes a query and returns the results as a [Coll](mentat::QueryResults::Coll).
///
/// # Panics
///
/// If the find set of the query executed is not structured `[:find [?foo ...] :where ...]`.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `typed_value_list_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn query_builder_execute_coll(
query_builder: *mut QueryBuilder,
error: *mut ExternError,
) -> *mut Vec<Binding> {
assert_not_null!(query_builder);
let query_builder = &mut *query_builder;
let results = query_builder.execute_coll();
translate_result(results, error)
}
/// Executes a query and returns the results as a [Tuple](mentat::QueryResults::Tuple).
///
/// # Panics
///
/// If the find set of the query executed is not structured `[:find [?foo ?bar] :where ...]`.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `typed_value_list_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn query_builder_execute_tuple(
query_builder: *mut QueryBuilder,
error: *mut ExternError,
) -> *mut Vec<Binding> {
assert_not_null!(query_builder);
let query_builder = &mut *query_builder;
let results = query_builder.execute_tuple();
translate_opt_result(results, error)
}
/// Executes a query and returns the results as a [Rel](mentat::QueryResults::Rel).
///
/// # Panics
///
/// If the find set of the query executed is not structured `[:find ?foo ?bar :where ...]`.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `typed_value_result_set_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn query_builder_execute(
query_builder: *mut QueryBuilder,
error: *mut ExternError,
) -> *mut RelResult<Binding> {
assert_not_null!(query_builder);
let query_builder = &mut *query_builder;
let results = query_builder.execute_rel();
translate_result(results, error)
}
fn unwrap_conversion<T>(value: Option<T>, expected_type: ValueType) -> T {
match value {
Some(v) => v,
None => panic!("Typed value cannot be coerced into a {}", expected_type),
}
}
/// Consumes a [Binding](mentat::Binding) and returns the value as a C `long`.
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not [ValueType::Long](mentat::ValueType::Long).
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn typed_value_into_long(typed_value: *mut Binding) -> c_longlong {
assert_not_null!(typed_value);
let typed_value = Box::from_raw(typed_value);
unwrap_conversion(typed_value.into_long(), ValueType::Long)
}
/// Consumes a [Binding](mentat::Binding) and returns the value as an [Entid](mentat::Entid).
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not [ValueType::Ref](mentat::ValueType::Ref).
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn typed_value_into_entid(typed_value: *mut Binding) -> Entid {
assert_not_null!(typed_value);
let typed_value = Box::from_raw(typed_value);
println!("typed value as entid {:?}", typed_value);
unwrap_conversion(typed_value.into_entid(), ValueType::Ref)
}
/// Consumes a [Binding](mentat::Binding) and returns the value as a keyword C `String`.
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not [ValueType::Keyword](mentat::ValueType::Keyword).
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn typed_value_into_kw(typed_value: *mut Binding) -> *mut c_char {
assert_not_null!(typed_value);
let typed_value = Box::from_raw(typed_value);
unwrap_conversion(typed_value.into_kw_c_string(), ValueType::Keyword)
}
/// Consumes a [Binding](mentat::Binding) and returns the value as a boolean represented as an `i32`.
/// If the value of the boolean is `true` the value returned is 1.
/// If the value of the boolean is `false` the value returned is 0.
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not [ValueType::Boolean](mentat::ValueType::Boolean).
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn typed_value_into_boolean(typed_value: *mut Binding) -> i32 {
assert_not_null!(typed_value);
let typed_value = Box::from_raw(typed_value);
if unwrap_conversion(typed_value.into_boolean(), ValueType::Boolean) {
1
} else {
0
}
}
/// Consumes a [Binding](mentat::Binding) and returns the value as a `f64`.
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not [ValueType::Double](mentat::ValueType::Double).
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn typed_value_into_double(typed_value: *mut Binding) -> f64 {
assert_not_null!(typed_value);
let typed_value = Box::from_raw(typed_value);
unwrap_conversion(typed_value.into_double(), ValueType::Double)
}
/// Consumes a [Binding](mentat::Binding) and returns the value as a microsecond timestamp.
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not [ValueType::Instant](mentat::ValueType::Instant).
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn typed_value_into_timestamp(typed_value: *mut Binding) -> c_longlong {
assert_not_null!(typed_value);
let typed_value = Box::from_raw(typed_value);
unwrap_conversion(typed_value.into_timestamp(), ValueType::Instant)
}
/// Consumes a [Binding](mentat::Binding) and returns the value as a C `String`.
///
/// The caller is responsible for freeing the pointer returned from this function using
/// `rust_c_string_destroy`.
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not [ValueType::String](mentat::ValueType::String).
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn typed_value_into_string(typed_value: *mut Binding) -> *mut c_char {
assert_not_null!(typed_value);
let typed_value = Box::from_raw(typed_value);
unwrap_conversion(typed_value.into_c_string(), ValueType::String)
}
/// Consumes a [Binding](mentat::Binding) and returns the value as a UUID byte slice of length 16.
///
/// The caller is responsible for freeing the pointer returned from this function using `uuid_destroy`.
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not [ValueType::Uuid](mentat::ValueType::Uuid).
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn typed_value_into_uuid(typed_value: *mut Binding) -> *mut [u8; 16] {
assert_not_null!(typed_value);
let typed_value = Box::from_raw(typed_value);
let value = unwrap_conversion(typed_value.into_uuid(), ValueType::Uuid);
Box::into_raw(Box::new(*value.as_bytes()))
}
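// A sketch (assumption: `binding` holds a Long and came from one of the query functions
// above). The typed_value_into_* conversions take ownership of the Binding, so the
// pointer must not be reused or passed to typed_value_destroy afterwards.
#[cfg(test)]
#[allow(dead_code)]
unsafe fn example_consume_long_binding(binding: *mut Binding) -> c_longlong {
    // Ownership moves into the conversion; no explicit destructor call is needed.
    typed_value_into_long(binding)
}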
/// Returns the [ValueType](mentat::ValueType) of this [Binding](mentat::Binding).
#[no_mangle]
pub unsafe extern "C" fn typed_value_value_type(typed_value: *mut Binding) -> ValueType {
assert_not_null!(typed_value);
let typed_value = &*typed_value;
typed_value
.value_type()
.unwrap_or_else(|| panic!("Binding is not Scalar and has no ValueType"))
}
/// Returns the value at the provided `index` as a `Vec<Binding>`.
/// If there is no value present at the `index`, a null pointer is returned.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `typed_value_result_set_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn row_at_index(
rows: *mut RelResult<Binding>,
index: c_int,
) -> *mut Vec<Binding> {
assert_not_null!(rows);
let result = &*rows;
result
.row(index as usize)
.map_or_else(std::ptr::null_mut, |v| Box::into_raw(Box::new(v.to_vec())))
}
/// Returns an iterator over the rows of the `RelResult<Binding>`.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `typed_value_result_set_iter_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn typed_value_result_set_into_iter(
rows: *mut RelResult<Binding>,
) -> *mut BindingListIterator {
assert_not_null!(rows);
let result = &*rows;
let rows = result.rows();
Box::into_raw(Box::new(rows))
}
/// Returns the next value in the `iter` as a `Vec<Binding>`.
/// If there is no next value, a null pointer is returned.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `typed_value_list_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn typed_value_result_set_iter_next(
iter: *mut BindingListIterator,
) -> *mut Vec<Binding> {
assert_not_null!(iter);
let iter = &mut *iter;
iter.next().map_or(std::ptr::null_mut(), |v| {
Box::into_raw(Box::new(v.to_vec()))
})
}
/// Consumes the `Vec<Binding>` and returns an iterator over the values.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `typed_value_list_iter_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn typed_value_list_into_iter(
values: *mut Vec<Binding>,
) -> *mut BindingIterator {
assert_not_null!(values);
let result = Box::from_raw(values);
Box::into_raw(Box::new(result.into_iter()))
}
/// Returns the next value in the `iter` as a [Binding](mentat::Binding).
/// If there is no next value, a null pointer is returned.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `typed_value_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn typed_value_list_iter_next(iter: *mut BindingIterator) -> *mut Binding {
assert_not_null!(iter);
let iter = &mut *iter;
iter.next()
.map_or(std::ptr::null_mut(), |v| Box::into_raw(Box::new(v)))
}
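// A sketch (not part of the exposed C API) of how the two iterator layers compose from
// Rust: rows come off the result-set iterator, and each row is consumed into its own
// Binding iterator. `rows` is assumed to be valid for the duration of the call; its own
// destructor is elided here.
#[cfg(test)]
#[allow(dead_code)]
unsafe fn example_count_bindings(rows: *mut RelResult<Binding>) -> usize {
    let mut count = 0;
    let row_iter = typed_value_result_set_into_iter(rows);
    loop {
        let row = typed_value_result_set_iter_next(row_iter);
        if row.is_null() {
            break;
        }
        // Consumes the row Vec, so no typed_value_list_destroy is needed for it.
        let value_iter = typed_value_list_into_iter(row);
        loop {
            let value = typed_value_list_iter_next(value_iter);
            if value.is_null() {
                break;
            }
            count += 1;
            // Each yielded Binding is owned by the caller and must be released.
            typed_value_destroy(value);
        }
        typed_value_list_iter_destroy(value_iter);
    }
    typed_value_result_set_iter_destroy(row_iter);
    count
}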
/// Returns the value at the provided `index` as a [Binding](mentat::Binding).
/// If there is no value present at the `index`, a null pointer is returned.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `typed_value_destroy` is provided for releasing the memory for this
/// pointer type.
#[no_mangle]
pub unsafe extern "C" fn value_at_index(values: *mut Vec<Binding>, index: c_int) -> *mut Binding {
assert_not_null!(values);
let values = &*values;
if index < 0 || (index as usize) >= values.len() {
std::ptr::null_mut()
} else {
// TODO: an older version of this function returned a reference into values. This
// causes `typed_value_into_*` to be memory unsafe, and goes against the documentation.
// Should there be a version that still behaves in this manner?
Box::into_raw(Box::new(values[index as usize].clone()))
}
}
/// Returns the value of the [Binding](mentat::Binding) at `index` as a `long`.
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not `ValueType::Long`.
/// If there is no value at `index`.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn value_at_index_into_long(
values: *mut Vec<Binding>,
index: c_int,
) -> c_longlong {
assert_not_null!(values);
let result = &*values;
let value = result.get(index as usize).expect("No value at index");
unwrap_conversion(value.clone().into_long(), ValueType::Long)
}
/// Returns the value of the [Binding](mentat::Binding) at `index` as an [Entid](mentat::Entid).
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not `ValueType::Ref`.
/// If there is no value at `index`.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn value_at_index_into_entid(
values: *mut Vec<Binding>,
index: c_int,
) -> Entid {
assert_not_null!(values);
let result = &*values;
let value = result.get(index as usize).expect("No value at index");
unwrap_conversion(value.clone().into_entid(), ValueType::Ref)
}
/// Returns the value of the [Binding](mentat::Binding) at `index` as a keyword C `String`.
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not [ValueType::Keyword](mentat::ValueType::Keyword).
/// If there is no value at `index`.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn value_at_index_into_kw(
values: *mut Vec<Binding>,
index: c_int,
) -> *mut c_char {
assert_not_null!(values);
let result = &*values;
let value = result.get(index as usize).expect("No value at index");
unwrap_conversion(value.clone().into_kw_c_string(), ValueType::Keyword) as *mut c_char
}
/// Returns the value of the [Binding](mentat::Binding) at `index` as a boolean represented by a `i32`.
/// If the value of the `boolean` is `true` then the value returned is 1.
/// If the value of the `boolean` is `false` then the value returned is 0.
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not [ValueType::Boolean](mentat::ValueType::Boolean).
/// If there is no value at `index`.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn value_at_index_into_boolean(
values: *mut Vec<Binding>,
index: c_int,
) -> i32 {
assert_not_null!(values);
let result = &*values;
let value = result.get(index as usize).expect("No value at index");
if unwrap_conversion(value.clone().into_boolean(), ValueType::Boolean) {
1
} else {
0
}
}
/// Returns the value of the [Binding](mentat::Binding) at `index` as an `f64`.
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not [ValueType::Double](mentat::ValueType::Double).
/// If there is no value at `index`.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn value_at_index_into_double(
values: *mut Vec<Binding>,
index: c_int,
) -> f64 {
assert_not_null!(values);
let result = &*values;
let value = result.get(index as usize).expect("No value at index");
unwrap_conversion(value.clone().into_double(), ValueType::Double)
}
/// Returns the value of the [Binding](mentat::Binding) at `index` as a microsecond timestamp.
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not [ValueType::Instant](mentat::ValueType::Instant).
/// If there is no value at `index`.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn value_at_index_into_timestamp(
values: *mut Vec<Binding>,
index: c_int,
) -> c_longlong {
assert_not_null!(values);
let result = &*values;
let value = result.get(index as usize).expect("No value at index");
unwrap_conversion(value.clone().into_timestamp(), ValueType::Instant)
}
/// Returns the value of the [Binding](mentat::Binding) at `index` as a C `String`.
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not [ValueType::String](mentat::ValueType::String).
/// If there is no value at `index`.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn value_at_index_into_string(
values: *mut Vec<Binding>,
index: c_int,
) -> *mut c_char {
assert_not_null!(values);
let result = &*values;
let value = result.get(index as usize).expect("No value at index");
unwrap_conversion(value.clone().into_c_string(), ValueType::String) as *mut c_char
}
/// Returns the value of the [Binding](mentat::Binding) at `index` as a UUID byte slice of length 16.
///
/// # Panics
///
/// If the [ValueType](mentat::ValueType) of the [Binding](mentat::Binding) is not [ValueType::Uuid](mentat::ValueType::Uuid).
/// If there is no value at `index`.
///
// TODO Generalise with macro https://github.com/mozilla/mentat/issues/703
#[no_mangle]
pub unsafe extern "C" fn value_at_index_into_uuid(
values: *mut Vec<Binding>,
index: c_int,
) -> *mut [u8; 16] {
assert_not_null!(values);
let result = &*values;
let value = result.get(index as usize).expect("No value at index");
let uuid = unwrap_conversion(value.clone().into_uuid(), ValueType::Uuid);
Box::into_raw(Box::new(*uuid.as_bytes()))
}
/// Returns a pointer to the [Binding](mentat::Binding) associated with the `attribute` as
/// `:namespace/name` for the given `entid`.
/// If there is a value for that `attribute` on the entity with id `entid` then the value is returned.
/// If there is no value for that `attribute` on the entity with id `entid` but the attribute is valid,
/// then a null pointer is returned.
/// If there is no [Attribute](mentat::Attribute) in the [Schema](mentat::Schema) for the given
/// `attribute` then a null pointer is returned and an error is stored in `error`.
///
/// # Safety
///
/// Callers are responsible for managing the memory for the return value.
/// A destructor `typed_value_destroy` is provided for releasing the memory for this
/// pointer type.
///
/// TODO: list the types of error that can be caused by this function
#[no_mangle]
pub unsafe extern "C" fn store_value_for_attribute(
store: *mut Store,
entid: c_longlong,
attribute: *const c_char,
error: *mut ExternError,
) -> *mut Binding {
assert_not_null!(store);
let store = &*store;
let kw = kw_from_string(c_char_to_string(attribute));
let result = store
.lookup_value_for_attribute(entid, &kw)
.map(|o| o.map(Binding::from));
translate_opt_result(result, error)
}
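// A sketch (assumption: `store`, `attribute` and `error` are valid pointers supplied by
// the caller) of the null-checking callers are expected to do around
// store_value_for_attribute.
#[cfg(test)]
#[allow(dead_code)]
unsafe fn example_lookup_value_for_attribute(
    store: *mut Store,
    entid: c_longlong,
    attribute: *const c_char,
    error: *mut ExternError,
) -> bool {
    let value = store_value_for_attribute(store, entid, attribute, error);
    if value.is_null() {
        // Either the attribute has no value for this entity, or an error was written to `error`.
        false
    } else {
        // A non-null Binding is owned by the caller and must be released.
        typed_value_destroy(value);
        true
    }
}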
/// Registers a [TxObserver](mentat::TxObserver) with the `key` to observe changes to `attributes`
/// on this `store`.
/// Calls `callback` if a relevant transaction occurs.
///
/// # Panics
///
/// If there is no [Attribute](mentat::Attribute) in the [Schema](mentat::Schema) for a given `attribute`.
///
#[no_mangle]
pub unsafe extern "C" fn store_register_observer(
store: *mut Store,
key: *const c_char,
attributes: *const Entid,
attributes_len: usize,
callback: extern "C" fn(key: *const c_char, reports: &TxChangeList),
) {
assert_not_null!(store, attributes);
let store = &mut *store;
let mut attribute_set = BTreeSet::new();
let slice = slice::from_raw_parts(attributes, attributes_len);
attribute_set.extend(slice.iter());
let key = c_char_to_string(key);
let tx_observer = Arc::new(TxObserver::new(attribute_set, move |obs_key, batch| {
let reports: Vec<(Entid, Vec<Entid>)> = batch
.into_iter()
.map(|(tx_id, changes)| {
(
*tx_id,
changes.iter().map(|eid| *eid as c_longlong).collect(),
)
})
.collect();
let extern_reports = reports
.iter()
.map(|item| TransactionChange {
txid: item.0,
changes: item.1.as_ptr(),
changes_len: item.1.len() as c_ulonglong,
})
.collect::<Vec<_>>();
let len = extern_reports.len();
let change_list = TxChangeList {
reports: extern_reports.as_ptr(),
len: len as c_ulonglong,
};
let s = string_to_c_char(obs_key);
callback(s, &change_list);
rust_c_string_destroy(s);
}));
store.register_observer(key.to_string(), tx_observer);
}
/// Unregisters a [TxObserver](mentat::TxObserver) with the `key` to observe changes on this `store`.
#[no_mangle]
pub unsafe extern "C" fn store_unregister_observer(store: *mut Store, key: *const c_char) {
assert_not_null!(store);
let store = &mut *store;
let key = c_char_to_string(key).to_string();
store.unregister_observer(&key);
}
/// Returns the [Entid](mentat::Entid) associated with the `attr` as `:namespace/name`.
///
/// # Panics
///
/// If there is no [Attribute](mentat::Attribute) in the [Schema](mentat::Schema) for `attr`.
#[no_mangle]
pub unsafe extern "C" fn store_entid_for_attribute(
store: *mut Store,
attr: *const c_char,
) -> Entid {
assert_not_null!(store);
let store = &mut *store;
let keyword_string = c_char_to_string(attr);
let kw = kw_from_string(keyword_string);
let conn = store.conn();
let current_schema = conn.current_schema();
current_schema
.get_entid(&kw)
.expect("Unable to find entid for invalid attribute")
.into()
}
/// Returns the value at the provided `index` as a [TransactionChange](TransactionChange).
///
/// # Panics
///
/// If there is no value present at the `index`.
///
#[no_mangle]
pub unsafe extern "C" fn tx_change_list_entry_at(
tx_report_list: *mut TxChangeList,
index: c_int,
) -> *const TransactionChange {
assert_not_null!(tx_report_list);
let tx_report_list = &*tx_report_list;
assert!(0 <= index && (index as usize) < (tx_report_list.len as usize));
tx_report_list.reports.offset(index as isize)
}
/// Returns the value at the provided `index` as an [Entid](mentat::Entid).
///
/// # Panics
///
/// If there is no value present at the `index`.
#[no_mangle]
pub unsafe extern "C" fn changelist_entry_at(
tx_report: *mut TransactionChange,
index: c_int,
) -> Entid {
assert_not_null!(tx_report);
let tx_report = &*tx_report;
assert!(0 <= index && (index as usize) < (tx_report.changes_len as usize));
std::ptr::read(tx_report.changes.offset(index as isize))
}
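// A sketch (not part of the original API surface) of a Rust-side observer callback with
// the signature expected by store_register_observer, walking the change list via the two
// accessors above. Real callbacks would typically live on the foreign side of the FFI.
#[cfg(test)]
#[allow(dead_code)]
extern "C" fn example_observer_callback(_key: *const c_char, reports: &TxChangeList) {
    for i in 0..reports.len as c_int {
        // Each entry describes one transaction...
        let report =
            unsafe { &*tx_change_list_entry_at(reports as *const TxChangeList as *mut TxChangeList, i) };
        // ...and lists the entids that changed in it.
        for j in 0..report.changes_len as c_int {
            let _changed: Entid =
                unsafe { changelist_entry_at(report as *const TransactionChange as *mut TransactionChange, j) };
        }
    }
}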
#[no_mangle]
pub unsafe extern "C" fn rust_c_string_destroy(s: *mut c_char) {
if !s.is_null() {
let _ = CString::from_raw(s);
}
}
/// Creates a function with a given `$name` that releases the memory for a type `$t`.
macro_rules! define_destructor (
($name:ident, $t:ty) => (
#[no_mangle]
pub unsafe extern "C" fn $name(obj: *mut $t) {
if !obj.is_null() {
let _ = Box::from_raw(obj);
}
}
)
);
/// Creates a function with a given `$name` that releases the memory
/// for a type `$t` with lifetimes <'a, 'c>.
/// TODO: Move to using `macro_rules` lifetime specifier when it lands in stable
/// This will enable us to specialise `define_destructor` and use repetitions
/// to allow more generic lifetime handling instead of having two functions.
/// https://github.com/rust-lang/rust/issues/34303
/// https://github.com/mozilla/mentat/issues/702
macro_rules! define_destructor_with_lifetimes (
($name:ident, $t:ty) => (
#[no_mangle]
pub unsafe extern "C" fn $name<'a, 'c>(obj: *mut $t) {
if !obj.is_null() {
let _ = Box::from_raw(obj);
}
}
)
);
/// destroy function for releasing the memory for `repr(C)` structs.
define_destructor!(destroy, c_void);
/// destroy function for releasing the memory of UUIDs
define_destructor!(uuid_destroy, [u8; 16]);
/// Destructor for releasing the memory of [InProgressBuilder](mentat::InProgressBuilder).
define_destructor_with_lifetimes!(in_progress_builder_destroy, InProgressBuilder<'a, 'c>);
/// Destructor for releasing the memory of [EntityBuilder](mentat::EntityBuilder).
define_destructor_with_lifetimes!(
entity_builder_destroy,
EntityBuilder<InProgressBuilder<'a, 'c>>
);
/// Destructor for releasing the memory of [QueryBuilder](mentat::QueryBuilder).
define_destructor!(query_builder_destroy, QueryBuilder);
/// Destructor for releasing the memory of [Store](mentat::Store).
define_destructor!(store_destroy, Store);
/// Destructor for releasing the memory of [TxReport](mentat::TxReport).
define_destructor!(tx_report_destroy, TxReport);
/// Destructor for releasing the memory of [Binding](mentat::Binding).
define_destructor!(typed_value_destroy, Binding);
/// Destructor for releasing the memory of [Vec<Binding>](mentat::Binding).
define_destructor!(typed_value_list_destroy, Vec<Binding>);
/// Destructor for releasing the memory of [BindingIterator](BindingIterator).
define_destructor!(typed_value_list_iter_destroy, BindingIterator);
/// Destructor for releasing the memory of [RelResult<Binding>](mentat::RelResult).
define_destructor!(typed_value_result_set_destroy, RelResult<Binding>);
/// Destructor for releasing the memory of [BindingListIterator](::BindingListIterator).
define_destructor!(typed_value_result_set_iter_destroy, BindingListIterator);
/// Destructor for releasing the memory of [InProgress](mentat::InProgress).
define_destructor!(in_progress_destroy, InProgress);
| 35.226375 | 153 | 0.693234 |
9b57d815b6608693516d7fd03792d85ecc067e32 | 7,122 | use std::fmt;
use approx::{ulps_eq, ulps_ne};
use cgmath::prelude::*;
use cgmath::{AbsDiffEq, RelativeEq, UlpsEq};
use cgmath::{BaseFloat, Point3, Vector3, Vector4};
use crate::prelude::*;
use crate::Ray3;
/// A 3-dimensional plane formed from the equation: `A*x + B*y + C*z - D = 0`.
///
/// # Fields
///
/// - `n`: a unit vector representing the normal of the plane where:
/// - `n.x`: corresponds to `A` in the plane equation
/// - `n.y`: corresponds to `B` in the plane equation
/// - `n.z`: corresponds to `C` in the plane equation
/// - `d`: the distance value, corresponding to `D` in the plane equation
///
/// # Notes
///
/// The `A*x + B*y + C*z - D = 0` form is preferred over the other common
/// alternative, `A*x + B*y + C*z + D = 0`, because it tends to avoid
/// superfluous negations (see _Real Time Collision Detection_, p. 55).
#[derive(Copy, Clone, PartialEq)]
#[cfg_attr(feature = "serde", derive(Serialize, Deserialize))]
pub struct Plane<S> {
/// Plane normal
pub n: Vector3<S>,
/// Plane distance value
pub d: S,
}
impl<S: BaseFloat> Plane<S> {
/// Construct a plane from a normal vector and a scalar distance. The
/// plane will be perpendicular to `n`, and `d` units offset from the
/// origin.
pub fn new(n: Vector3<S>, d: S) -> Plane<S> {
Plane { n, d }
}
/// # Arguments
///
/// - `a`: the `x` component of the normal
/// - `b`: the `y` component of the normal
/// - `c`: the `z` component of the normal
/// - `d`: the plane's distance value
pub fn from_abcd(a: S, b: S, c: S, d: S) -> Plane<S> {
Plane {
n: Vector3::new(a, b, c),
d,
}
}
/// Construct a plane from the components of a four-dimensional vector
pub fn from_vector4(v: Vector4<S>) -> Plane<S> {
Plane {
n: Vector3::new(v.x, v.y, v.z),
d: v.w,
}
}
/// Construct a plane from the components of a four-dimensional vector
/// Assuming alternative representation: `A*x + B*y + C*z + D = 0`
pub fn from_vector4_alt(v: Vector4<S>) -> Plane<S> {
Plane {
n: Vector3::new(v.x, v.y, v.z),
d: -v.w,
}
}
/// Constructs a plane that passes through the three points `a`, `b` and `c`
pub fn from_points(a: Point3<S>, b: Point3<S>, c: Point3<S>) -> Option<Plane<S>> {
// create two vectors that run parallel to the plane
let v0 = b - a;
let v1 = c - a;
// find the normal vector that is perpendicular to v0 and v1
let n = v0.cross(v1);
if ulps_eq!(n, &Vector3::zero()) {
None
} else {
// compute the normal and the distance to the plane
let n = n.normalize();
let d = -a.dot(n);
Some(Plane::new(n, d))
}
}
/// Construct a plane from a point and a normal vector.
/// The plane will contain the point `p` and be perpendicular to `n`.
pub fn from_point_normal(p: Point3<S>, n: Vector3<S>) -> Plane<S> {
Plane { n, d: p.dot(n) }
}
/// Normalize a plane.
pub fn normalize(&self) -> Option<Plane<S>> {
if ulps_eq!(self.n, &Vector3::zero()) {
None
} else {
let denom = S::one() / self.n.magnitude();
Some(Plane::new(self.n * denom, self.d * denom))
}
}
}
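// A minimal sketch (not from the upstream crate) of `normalize`: a plane whose normal has
// length 2 has both `n` and `d` halved, leaving the represented plane unchanged.
#[cfg(test)]
mod normalize_sketch {
    use super::Plane;
    use cgmath::Vector3;

    #[test]
    fn normalize_halves_n_and_d() {
        let p = Plane::from_abcd(0.0f64, 0.0, 2.0, 4.0)
            .normalize()
            .expect("normal is non-zero");
        assert_eq!(p.n, Vector3::new(0.0, 0.0, 1.0));
        assert_eq!(p.d, 2.0);
    }
}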
impl<S: AbsDiffEq> AbsDiffEq for Plane<S>
where
S::Epsilon: Copy,
S: BaseFloat,
{
type Epsilon = S::Epsilon;
#[inline]
fn default_epsilon() -> S::Epsilon {
S::default_epsilon()
}
#[inline]
fn abs_diff_eq(&self, other: &Self, epsilon: S::Epsilon) -> bool {
Vector3::abs_diff_eq(&self.n, &other.n, epsilon)
&& S::abs_diff_eq(&self.d, &other.d, epsilon)
}
}
impl<S: RelativeEq> RelativeEq for Plane<S>
where
S::Epsilon: Copy,
S: BaseFloat,
{
#[inline]
fn default_max_relative() -> S::Epsilon {
S::default_max_relative()
}
#[inline]
fn relative_eq(&self, other: &Self, epsilon: S::Epsilon, max_relative: S::Epsilon) -> bool {
Vector3::relative_eq(&self.n, &other.n, epsilon, max_relative)
&& S::relative_eq(&self.d, &other.d, epsilon, max_relative)
}
}
impl<S: UlpsEq> UlpsEq for Plane<S>
where
S::Epsilon: Copy,
S: BaseFloat,
{
#[inline]
fn default_max_ulps() -> u32 {
S::default_max_ulps()
}
#[inline]
fn ulps_eq(&self, other: &Self, epsilon: S::Epsilon, max_ulps: u32) -> bool {
Vector3::ulps_eq(&self.n, &other.n, epsilon, max_ulps)
&& S::ulps_eq(&self.d, &other.d, epsilon, max_ulps)
}
}
impl<S: BaseFloat> fmt::Debug for Plane<S> {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(
f,
"{:?}x + {:?}y + {:?}z - {:?} = 0",
self.n.x, self.n.y, self.n.z, self.d
)
}
}
impl<S: BaseFloat> Continuous<Ray3<S>> for Plane<S> {
type Result = Point3<S>;
fn intersection(&self, r: &Ray3<S>) -> Option<Point3<S>> {
let p = self;
let t = -(p.d + r.origin.dot(p.n)) / r.direction.dot(p.n);
if t < Zero::zero() {
None
} else {
Some(r.origin + r.direction * t)
}
}
}
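// A minimal sketch (not from the upstream crate) of the ray/plane intersection above: a
// plane built with `from_points` from three points on `z = 1` is hit at (0, 0, 1) by a
// ray fired up the z axis from the origin. `Ray3::new(origin, direction)` is assumed to
// be the plain field-wise constructor.
#[cfg(test)]
mod ray_intersection_sketch {
    use super::Plane;
    use crate::prelude::*;
    use crate::Ray3;
    use cgmath::{Point3, Vector3};

    #[test]
    fn ray_hits_plane_through_points() {
        let plane = Plane::from_points(
            Point3::new(0.0f64, 0.0, 1.0),
            Point3::new(1.0, 0.0, 1.0),
            Point3::new(0.0, 1.0, 1.0),
        )
        .expect("points are not collinear");
        let ray = Ray3::new(Point3::new(0.0, 0.0, 0.0), Vector3::new(0.0, 0.0, 1.0));
        assert_eq!(plane.intersection(&ray), Some(Point3::new(0.0, 0.0, 1.0)));
    }
}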
impl<S: BaseFloat> Discrete<Ray3<S>> for Plane<S> {
fn intersects(&self, r: &Ray3<S>) -> bool {
let p = self;
let t = -(p.d + r.origin.dot(p.n)) / r.direction.dot(p.n);
t >= Zero::zero()
}
}
/// See _Real-Time Collision Detection_, p. 210
impl<S: BaseFloat> Continuous<Plane<S>> for Plane<S> {
type Result = Ray3<S>;
fn intersection(&self, p2: &Plane<S>) -> Option<Ray3<S>> {
let p1 = self;
let d = p1.n.cross(p2.n);
let denom = d.dot(d);
if ulps_eq!(denom, &S::zero()) {
None
} else {
let p = (p2.n * p1.d - p1.n * p2.d).cross(d) / denom;
Some(Ray3::new(Point3::from_vec(p), d))
}
}
}
impl<S: BaseFloat> Discrete<Plane<S>> for Plane<S> {
fn intersects(&self, p2: &Plane<S>) -> bool {
let p1 = self;
let d = p1.n.cross(p2.n);
let denom = d.dot(d);
ulps_ne!(denom, &S::zero())
}
}
/// See _Real-Time Collision Detection_, p. 212 - 214
impl<S: BaseFloat> Continuous<(Plane<S>, Plane<S>)> for Plane<S> {
type Result = Point3<S>;
fn intersection(&self, planes: &(Plane<S>, Plane<S>)) -> Option<Point3<S>> {
let (p1, p2, p3) = (self, planes.0, planes.1);
let u = p2.n.cross(p3.n);
let denom = p1.n.dot(u);
if ulps_eq!(denom.abs(), &S::zero()) {
None
} else {
let p = (u * p1.d + p1.n.cross(p2.n * p3.d - p3.n * p2.d)) / denom;
Some(Point3::from_vec(p))
}
}
}
impl<S: BaseFloat> Discrete<(Plane<S>, Plane<S>)> for Plane<S> {
fn intersects(&self, planes: &(Plane<S>, Plane<S>)) -> bool {
let (p1, p2, p3) = (self, planes.0, planes.1);
let u = p2.n.cross(p3.n);
let denom = p1.n.dot(u);
ulps_ne!(denom.abs(), &S::zero())
}
}
| 29.308642 | 96 | 0.535805 |
d997130a8f16fe6dd2e7b7e0a5af78574c9f657f | 13,805 | // Copyright 2019 TiKV Project Authors. Licensed under Apache-2.0.
use std::f64::INFINITY;
use std::path::Path;
use std::sync::Arc;
use std::thread;
use std::time::Duration;
use engine_rocks::raw::{IngestExternalFileOptions, Writable};
use engine_rocks::util::get_cf_handle;
use engine_rocks::util::new_temp_engine;
use engine_rocks::RocksEngine;
use engine_rocks::{Compat, RocksSnapshot, RocksSstWriterBuilder};
use engine_traits::{
CompactExt, DeleteStrategy, Engines, KvEngine, MiscExt, Range, SstWriter, SstWriterBuilder,
ALL_CFS, CF_DEFAULT, CF_WRITE,
};
use keys::data_key;
use kvproto::metapb::{Peer, Region};
use raftstore::store::RegionSnapshot;
use raftstore::store::{apply_sst_cf_file, build_sst_cf_file};
use tempfile::Builder;
use test_raftstore::*;
use tikv::config::TiKvConfig;
use tikv::storage::mvcc::ScannerBuilder;
use tikv::storage::txn::Scanner;
use tikv_util::config::{ReadableDuration, ReadableSize};
use tikv_util::time::Limiter;
use txn_types::{Key, Write, WriteType};
#[test]
fn test_turnoff_titan() {
let mut cluster = new_node_cluster(0, 3);
cluster.cfg.rocksdb.defaultcf.disable_auto_compactions = true;
cluster.cfg.rocksdb.defaultcf.num_levels = 1;
configure_for_enable_titan(&mut cluster, ReadableSize::kb(0));
cluster.run();
assert_eq!(cluster.must_get(b"k1"), None);
let size = 5;
for i in 0..size {
assert!(
cluster
.put(
format!("k{:02}0", i).as_bytes(),
format!("v{}", i).as_bytes(),
)
.is_ok()
);
}
cluster.must_flush_cf(CF_DEFAULT, true);
for i in 0..size {
assert!(
cluster
.put(
format!("k{:02}1", i).as_bytes(),
format!("v{}", i).as_bytes(),
)
.is_ok()
);
}
cluster.must_flush_cf(CF_DEFAULT, true);
for i in cluster.get_node_ids().into_iter() {
let db = cluster.get_engine(i);
assert_eq!(
db.get_property_int(&"rocksdb.num-files-at-level0").unwrap(),
2
);
assert_eq!(
db.get_property_int(&"rocksdb.num-files-at-level1").unwrap(),
0
);
assert_eq!(
db.get_property_int(&"rocksdb.titandb.num-live-blob-file")
.unwrap(),
2
);
assert_eq!(
db.get_property_int(&"rocksdb.titandb.num-obsolete-blob-file")
.unwrap(),
0
);
}
cluster.shutdown();
// Try to reopen the db when titan isn't properly turned off.
configure_for_disable_titan(&mut cluster);
assert!(cluster.pre_start_check().is_err());
configure_for_enable_titan(&mut cluster, ReadableSize::kb(0));
assert!(cluster.pre_start_check().is_ok());
cluster.start().unwrap();
assert_eq!(cluster.must_get(b"k1"), None);
for i in cluster.get_node_ids().into_iter() {
let db = cluster.get_engine(i);
let handle = get_cf_handle(&db, CF_DEFAULT).unwrap();
let opt = vec![("blob_run_mode", "kFallback")];
assert!(db.set_options_cf(handle, &opt).is_ok());
}
cluster.compact_data();
let mut all_check_pass = true;
for _ in 0..10 {
// wait for gc to complete.
sleep_ms(10);
all_check_pass = true;
for i in cluster.get_node_ids().into_iter() {
let db = cluster.get_engine(i);
if db.get_property_int(&"rocksdb.num-files-at-level0").unwrap() != 0 {
all_check_pass = false;
break;
}
if db.get_property_int(&"rocksdb.num-files-at-level1").unwrap() != 1 {
all_check_pass = false;
break;
}
if db
.get_property_int(&"rocksdb.titandb.num-live-blob-file")
.unwrap()
!= 0
{
all_check_pass = false;
break;
}
}
if all_check_pass {
break;
}
}
if !all_check_pass {
panic!("unexpected titan gc results");
}
cluster.shutdown();
configure_for_disable_titan(&mut cluster);
// wait till files are purged, timeout set to purge_obsolete_files_period.
for _ in 1..100 {
sleep_ms(10);
if cluster.pre_start_check().is_ok() {
return;
}
}
assert!(cluster.pre_start_check().is_ok());
}
#[test]
fn test_delete_files_in_range_for_titan() {
let path = Builder::new()
.prefix("test-titan-delete-files-in-range")
.tempdir()
.unwrap();
// Set configs and create engines
let mut cfg = TiKvConfig::default();
let cache = cfg.storage.block_cache.build_shared_cache();
cfg.rocksdb.titan.enabled = true;
cfg.rocksdb.titan.disable_gc = true;
cfg.rocksdb.titan.purge_obsolete_files_period = ReadableDuration::secs(1);
cfg.rocksdb.defaultcf.disable_auto_compactions = true;
// Disable dynamic_level_bytes, otherwise SST files would be ingested to L0.
cfg.rocksdb.defaultcf.dynamic_level_bytes = false;
cfg.rocksdb.defaultcf.titan.min_gc_batch_size = ReadableSize(0);
cfg.rocksdb.defaultcf.titan.discardable_ratio = 0.4;
cfg.rocksdb.defaultcf.titan.sample_ratio = 1.0;
cfg.rocksdb.defaultcf.titan.min_blob_size = ReadableSize(0);
let kv_db_opts = cfg.rocksdb.build_opt();
let kv_cfs_opts = cfg
.rocksdb
.build_cf_opts(&cache, None, cfg.storage.enable_ttl);
let raft_path = path.path().join(Path::new("titan"));
let engines = Engines::new(
RocksEngine::from_db(Arc::new(
engine_rocks::raw_util::new_engine(
path.path().to_str().unwrap(),
Some(kv_db_opts),
ALL_CFS,
Some(kv_cfs_opts),
)
.unwrap(),
)),
RocksEngine::from_db(Arc::new(
engine_rocks::raw_util::new_engine(
raft_path.to_str().unwrap(),
None,
&[CF_DEFAULT],
None,
)
.unwrap(),
)),
);
// Write some mvcc keys and values into db
// default_cf : a_7, b_7
// write_cf : a_8, b_8
let start_ts = 7.into();
let commit_ts = 8.into();
let write = Write::new(WriteType::Put, start_ts, None);
let db = &engines.kv.as_inner();
let default_cf = db.cf_handle(CF_DEFAULT).unwrap();
let write_cf = db.cf_handle(CF_WRITE).unwrap();
db.put_cf(
&default_cf,
&data_key(Key::from_raw(b"a").append_ts(start_ts).as_encoded()),
b"a_value",
)
.unwrap();
db.put_cf(
&write_cf,
&data_key(Key::from_raw(b"a").append_ts(commit_ts).as_encoded()),
&write.as_ref().to_bytes(),
)
.unwrap();
db.put_cf(
&default_cf,
&data_key(Key::from_raw(b"b").append_ts(start_ts).as_encoded()),
b"b_value",
)
.unwrap();
db.put_cf(
&write_cf,
&data_key(Key::from_raw(b"b").append_ts(commit_ts).as_encoded()),
&write.as_ref().to_bytes(),
)
.unwrap();
// Flush and compact the kvs into L6.
db.flush(true).unwrap();
db.c().compact_files_in_range(None, None, None).unwrap();
let value = db.get_property_int(&"rocksdb.num-files-at-level0").unwrap();
assert_eq!(value, 0);
let value = db.get_property_int(&"rocksdb.num-files-at-level6").unwrap();
assert_eq!(value, 1);
// Delete one of the mvcc kvs we have written above.
// Here we make the kvs on the L5 by ingesting SST.
let sst_file_path = Path::new(db.path()).join("for_ingest.sst");
let mut writer = RocksSstWriterBuilder::new()
.build(&sst_file_path.to_str().unwrap())
.unwrap();
writer
.delete(&data_key(
Key::from_raw(b"a").append_ts(start_ts).as_encoded(),
))
.unwrap();
writer.finish().unwrap();
let mut opts = IngestExternalFileOptions::new();
opts.move_files(true);
db.ingest_external_file_cf(&default_cf, &opts, &[sst_file_path.to_str().unwrap()])
.unwrap();
// Now the LSM structure of default cf is:
// L5: [delete(a_7)]
// L6: [put(a_7, blob1), put(b_7, blob1)]
// the ranges of the two SST files overlap.
//
// There is one blob file in Titan
// blob1: (a_7, a_value), (b_7, b_value)
let value = db.get_property_int(&"rocksdb.num-files-at-level0").unwrap();
assert_eq!(value, 0);
let value = db.get_property_int(&"rocksdb.num-files-at-level5").unwrap();
assert_eq!(value, 1);
let value = db.get_property_int(&"rocksdb.num-files-at-level6").unwrap();
assert_eq!(value, 1);
// Used to trigger titan gc
let db = &engines.kv.as_inner();
db.put(b"1", b"1").unwrap();
db.flush(true).unwrap();
db.put(b"2", b"2").unwrap();
db.flush(true).unwrap();
db.c()
.compact_files_in_range(Some(b"0"), Some(b"3"), Some(1))
.unwrap();
// Now the LSM structure of default cf is:
// memtable: [put(b_7, blob4)] (because of Titan GC)
// L0: [put(1, blob2), put(2, blob3)]
// L5: [delete(a_7)]
// L6: [put(a_7, blob1), put(b_7, blob1)]
// the ranges of the two SST files overlap.
//
// There are four blob files in Titan
// blob1: (a_7, a_value), (b_7, b_value)
// blob2: (1, 1)
// blob3: (2, 2)
// blob4: (b_7, b_value)
let value = db.get_property_int(&"rocksdb.num-files-at-level0").unwrap();
assert_eq!(value, 0);
let value = db.get_property_int(&"rocksdb.num-files-at-level1").unwrap();
assert_eq!(value, 1);
let value = db.get_property_int(&"rocksdb.num-files-at-level5").unwrap();
assert_eq!(value, 1);
let value = db.get_property_int(&"rocksdb.num-files-at-level6").unwrap();
assert_eq!(value, 1);
// Wait Titan to purge obsolete files
thread::sleep(Duration::from_secs(2));
// Now the LSM structure of default cf is:
// memtable: [put(b_7, blob4)] (because of Titan GC)
// L0: [put(1, blob2), put(2, blob3)]
// L5: [delete(a_7)]
// L6: [put(a_7, blob1), put(b_7, blob1)]
// the ranges of the two SST files overlap.
//
// There are three blob files in Titan
// blob2: (1, 1)
// blob3: (2, 2)
// blob4: (b_7, b_value)
// `delete_files_in_range` may expose some old keys.
// For Titan it may encounter `missing blob file` in `delete_all_in_range`,
// so we set key_only for Titan.
engines
.kv
.delete_all_in_range(
DeleteStrategy::DeleteFiles,
&[Range::new(
&data_key(Key::from_raw(b"a").as_encoded()),
&data_key(Key::from_raw(b"b").as_encoded()),
)],
)
.unwrap();
engines
.kv
.delete_all_in_range(
DeleteStrategy::DeleteByKey,
&[Range::new(
&data_key(Key::from_raw(b"a").as_encoded()),
&data_key(Key::from_raw(b"b").as_encoded()),
)],
)
.unwrap();
engines
.kv
.delete_all_in_range(
DeleteStrategy::DeleteBlobs,
&[Range::new(
&data_key(Key::from_raw(b"a").as_encoded()),
&data_key(Key::from_raw(b"b").as_encoded()),
)],
)
.unwrap();
// Now the LSM structure of default cf is:
// memtable: [put(b_7, blob4)] (because of Titan GC)
// L0: [put(1, blob2), put(2, blob3)]
// L6: [put(a_7, blob1), put(b_7, blob1)]
// the ranges of the two SST files overlap.
//
// There are three blob files in Titan
// blob2: (1, 1)
// blob3: (2, 2)
// blob4: (b_7, b_value)
let value = db.get_property_int(&"rocksdb.num-files-at-level0").unwrap();
assert_eq!(value, 0);
let value = db.get_property_int(&"rocksdb.num-files-at-level1").unwrap();
assert_eq!(value, 1);
let value = db.get_property_int(&"rocksdb.num-files-at-level5").unwrap();
assert_eq!(value, 0);
let value = db.get_property_int(&"rocksdb.num-files-at-level6").unwrap();
assert_eq!(value, 1);
// Generate a snapshot
let default_sst_file_path = path.path().join("default.sst");
let write_sst_file_path = path.path().join("write.sst");
let limiter = Limiter::new(INFINITY);
build_sst_cf_file::<RocksEngine>(
&default_sst_file_path.to_str().unwrap(),
&engines.kv,
&engines.kv.snapshot(),
CF_DEFAULT,
b"",
b"{",
&limiter,
)
.unwrap();
build_sst_cf_file::<RocksEngine>(
&write_sst_file_path.to_str().unwrap(),
&engines.kv,
&engines.kv.snapshot(),
CF_WRITE,
b"",
b"{",
&limiter,
)
.unwrap();
// Apply the snapshot to other DB.
let dir1 = Builder::new()
.prefix("test-snap-cf-db-apply")
.tempdir()
.unwrap();
let engines1 = new_temp_engine(&dir1);
apply_sst_cf_file(
&default_sst_file_path.to_str().unwrap(),
&engines1.kv,
CF_DEFAULT,
)
.unwrap();
apply_sst_cf_file(
&write_sst_file_path.to_str().unwrap(),
&engines1.kv,
CF_WRITE,
)
.unwrap();
// Do scan on other DB.
let mut r = Region::default();
r.mut_peers().push(Peer::default());
r.set_start_key(b"a".to_vec());
r.set_end_key(b"z".to_vec());
let snapshot = RegionSnapshot::<RocksSnapshot>::from_raw(engines1.kv, r);
let mut scanner = ScannerBuilder::new(snapshot, 10.into())
.range(Some(Key::from_raw(b"a")), None)
.build()
.unwrap();
assert_eq!(
scanner.next().unwrap(),
Some((Key::from_raw(b"b"), b"b_value".to_vec())),
);
}
| 32.71327 | 95 | 0.581384 |
e6f550f03fc4a720c7dec07c56f010039a626cd6 | 1,068 | pub mod scripts;
use serde::{Deserialize, Serialize};
use std::error::Error;
#[derive(Serialize, Deserialize, Debug)]
pub struct DeepLink {
pub link: Option<String>,
pub title: Option<String>,
}
pub const UNKNOWN: DeepLink = DeepLink {
link: None,
title: None,
};
#[derive(Serialize, Deserialize, Debug, Clone)]
pub struct App {
// App bundle id, ie: "com.googlecode.iterm2"
pub id: String,
// App name, ie: "iTerm2"
pub name: String,
// Frontmost window title, ie "osascript -s s ./src/scripts/front_app.applescript"
pub title: Option<String>,
}
impl App {
pub fn deep_link(&self) -> Result<DeepLink, Box<dyn Error>> {
let rs: DeepLink = match &self.id[..] {
//"com.googlecode.iterm2" => scripts::com_googlecode_iterm2()?,
"com.apple.Safari" => scripts::com_apple_Safari()?,
"com.apple.mail" => scripts::com_apple_mail()?,
"com.google.Chrome" | "org.chromium.Chromium" => scripts::com_google_Chrome()?,
_ => UNKNOWN,
};
Ok(rs)
}
}
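// Illustrative sketch (not part of the original module): resolving the deep link for a
// known bundle id. Calling this actually shells out to the AppleScript helpers in
// `scripts`, so it is only meant to document the intended call pattern.
#[allow(dead_code)]
fn example_safari_deep_link() -> Result<DeepLink, Box<dyn Error>> {
    let app = App {
        id: "com.apple.Safari".to_string(),
        name: "Safari".to_string(),
        title: None,
    };
    app.deep_link()
}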
| 28.105263 | 91 | 0.613296 |
f8c6b00f700ca3eccd777d422f75b91963e3fbe9 | 1,713 | use super::{
super::tui_utils::Event,
text::{generate_help_text, generate_info_text},
LedgerList, LedgerTab, LedgerTabState, Trans,
};
use termion::event::Key;
pub fn event(tab: &mut LedgerTab, event: Event<Key>) -> Trans {
if event == Event::Input(Key::Delete) {
if tab.active_list == LedgerList::Accounts {
tab.ledger.remove_account_at(tab.account_cursor);
tab.accounts_cursors.remove(tab.account_cursor);
tab.accounts_names.remove(tab.account_cursor);
tab.transactions_names.remove(tab.account_cursor);
if tab.account_cursor != 0 {
tab.account_cursor -= 1;
}
} else {
let account = tab
.ledger
.accounts
.get_mut(tab.account_cursor)
.expect("Unreachable: txn del acc cursor bounds");
let cursor = tab
.accounts_cursors
.get_mut(tab.account_cursor)
.expect("Unreachable: txn cursor bounds");
let names = tab
.transactions_names
.get_mut(tab.account_cursor)
.expect("Unreachable: txn names cursor bounds");
account.transactions.remove(*cursor);
names.remove(*cursor);
if *cursor != 0 {
*cursor -= 1;
}
if names.len() == 0 {
tab.active_list = LedgerList::Accounts;
}
}
tab.state = LedgerTabState::Normal;
generate_help_text(tab);
generate_info_text(tab);
} else if event != Event::Tick {
tab.state = LedgerTabState::Normal;
}
Trans::None
}
| 34.959184 | 66 | 0.545242 |
3adfa89b83354b1717fa03d3ca9df020d042287e | 49 | pub mod color;
pub mod spinner;
pub mod styling;
| 12.25 | 16 | 0.755102 |
0e862da96bf3ca0a38a35b72c9b4dcf52dd2d5bf | 31,472 | use libc::c_char;
use utils::cstring::CStringUtils;
use utils::error;
use connection;
use disclosed_proof;
use std::ptr;
use utils::threadpool::spawn;
use error::prelude::*;
/// Create a proof for fulfilling a corresponding proof request
///
/// #Params
/// command_handle: command handle to map callback to user context.
///
/// source_id: Institution's identification for the proof, should be unique.
///
/// proof_req: proof request received via "vcx_disclosed_proof_get_requests"
///
/// cb: Callback that provides proof handle or error status
///
/// #Returns
/// Error code as u32
#[no_mangle]
#[allow(unused_variables, unused_mut)]
pub extern fn vcx_disclosed_proof_create_with_request(command_handle: u32,
source_id: *const c_char,
proof_req: *const c_char,
cb: Option<extern fn(xcommand_handle: u32, err: u32, handle: u32)>) -> u32 {
info!("vcx_disclosed_proof_create_with_request >>>");
check_useful_c_callback!(cb, VcxErrorKind::InvalidOption);
check_useful_c_str!(source_id, VcxErrorKind::InvalidOption);
check_useful_c_str!(proof_req, VcxErrorKind::InvalidOption);
trace!("vcx_disclosed_proof_create_with_request(command_handle: {}, source_id: {}, proof_req: {})",
command_handle, source_id, proof_req);
spawn(move || {
match disclosed_proof::create_proof(&source_id, &proof_req) {
Ok(x) => {
trace!("vcx_disclosed_proof_create_with_request_cb(command_handle: {}, rc: {}, handle: {}) source_id: {}",
command_handle,error::SUCCESS.message, x, source_id);
cb(command_handle, 0, x);
}
Err(x) => {
error!("vcx_disclosed_proof_create_with_request_cb(command_handle: {}, rc: {}, handle: {}) source_id: {}",
command_handle, x, 0, source_id);
cb(command_handle, x.into(), 0);
}
};
Ok(())
});
error::SUCCESS.code_num
}
/// Create a proof for fulfilling a corresponding proof request
///
/// #Params
/// command_handle: command handle to map callback to user context.
///
/// source_id: Institution's personal identification for the proof, should be unique.
///
/// connection_handle: connection to query for proof request
///
/// msg_id: msg_id that contains the proof request
///
/// cb: Callback that provides proof handle and proof request or error status
///
/// #Returns
/// Error code as a u32
#[no_mangle]
#[allow(unused_variables, unused_mut)]
pub extern fn vcx_disclosed_proof_create_with_msgid(command_handle: u32,
source_id: *const c_char,
connection_handle: u32,
msg_id: *const c_char,
cb: Option<extern fn(xcommand_handle: u32, err: u32, proof_handle: u32, proof_req: *const c_char)>) -> u32 {
info!("vcx_disclosed_proof_create_with_msgid >>>");
check_useful_c_callback!(cb, VcxErrorKind::InvalidOption);
check_useful_c_str!(source_id, VcxErrorKind::InvalidOption);
check_useful_c_str!(msg_id, VcxErrorKind::InvalidOption);
trace!("vcx_disclosed_proof_create_with_msgid(command_handle: {}, source_id: {}, connection_handle: {}, msg_id: {})",
command_handle, source_id, connection_handle, msg_id);
spawn(move || {
match disclosed_proof::get_proof_request(connection_handle, &msg_id) {
Ok(request) => {
match disclosed_proof::create_proof(&source_id, &request) {
Ok(handle) => {
trace!("vcx_disclosed_proof_create_with_msgid_cb(command_handle: {}, rc: {}, handle: {}, proof_req: {}) source_id: {}",
command_handle, error::SUCCESS.message, handle, request, source_id);
let msg = CStringUtils::string_to_cstring(request);
cb(command_handle, error::SUCCESS.code_num, handle, msg.as_ptr())
}
Err(e) => {
warn!("vcx_disclosed_proof_create_with_msgid_cb(command_handle: {}, rc: {}, handle: {}, proof_req: {}) source_id: {}",
command_handle, e, 0, request, source_id);
let msg = CStringUtils::string_to_cstring(request);
cb(command_handle, e.into(), 0, msg.as_ptr());
}
};
}
Err(e) => cb(command_handle, e.into(), 0, ptr::null()),
};
Ok(())
});
error::SUCCESS.code_num
}
/// Send a proof to the connection, called after having received a proof request
///
/// #params
/// command_handle: command handle to map callback to API user context.
///
/// proof_handle: proof handle that was provided during creation. Used to identify proof object.
///
/// connection_handle: Connection handle that identifies pairwise connection
///
/// cb: Callback that provides error status of proof send request
///
/// #Returns
/// Error code as u32
#[no_mangle]
pub extern fn vcx_disclosed_proof_send_proof(command_handle: u32,
proof_handle: u32,
connection_handle: u32,
cb: Option<extern fn(xcommand_handle: u32, err: u32)>) -> u32 {
info!("vcx_disclosed_proof_send_proof >>>");
check_useful_c_callback!(cb, VcxErrorKind::InvalidOption);
if !disclosed_proof::is_valid_handle(proof_handle) {
return VcxError::from(VcxErrorKind::InvalidDisclosedProofHandle).into()
}
if !connection::is_valid_handle(connection_handle) {
return VcxError::from(VcxErrorKind::InvalidConnectionHandle).into()
}
let source_id = disclosed_proof::get_source_id(proof_handle).unwrap_or_default();
trace!("vcx_disclosed_proof_send_proof(command_handle: {}, proof_handle: {}, connection_handle: {}) source_id: {}",
command_handle, proof_handle, connection_handle, source_id);
spawn(move || {
let err = match disclosed_proof::send_proof(proof_handle, connection_handle) {
Ok(x) => {
trace!("vcx_disclosed_proof_send_proof_cb(command_handle: {}, rc: {}) source_id: {}",
command_handle, error::SUCCESS.message, source_id);
cb(command_handle, error::SUCCESS.code_num);
}
Err(x) => {
error!("vcx_disclosed_proof_send_proof_cb(command_handle: {}, rc: {}) source_id: {}",
command_handle, x, source_id);
cb(command_handle, x.into());
}
};
Ok(())
});
error::SUCCESS.code_num
}
/// Queries agency for proof requests from the given connection.
///
/// #Params
/// command_handle: command handle to map callback to user context.
///
/// connection_handle: Connection to query for proof requests.
///
/// cb: Callback that provides any proof requests and error status of query
///
/// #Returns
/// Error code as a u32
#[no_mangle]
pub extern fn vcx_disclosed_proof_get_requests(command_handle: u32,
connection_handle: u32,
cb: Option<extern fn(xcommand_handle: u32, err: u32, requests: *const c_char)>) -> u32 {
info!("vcx_disclosed_proof_get_requests >>>");
check_useful_c_callback!(cb, VcxErrorKind::InvalidOption);
if !connection::is_valid_handle(connection_handle) {
return VcxError::from(VcxErrorKind::InvalidConnectionHandle).into()
}
trace!("vcx_disclosed_proof_get_requests(command_handle: {}, connection_handle: {})",
command_handle, connection_handle);
spawn(move || {
match disclosed_proof::get_proof_request_messages(connection_handle, None) {
Ok(x) => {
trace!("vcx_disclosed_proof_get_requests_cb(command_handle: {}, rc: {}, msg: {})",
command_handle, error::SUCCESS.message, x);
let msg = CStringUtils::string_to_cstring(x);
cb(command_handle, error::SUCCESS.code_num, msg.as_ptr());
}
Err(x) => {
error!("vcx_disclosed_proof_get_requests_cb(command_handle: {}, rc: {}, msg: {})",
command_handle, error::SUCCESS.message, x);
cb(command_handle, x.into(), ptr::null_mut());
}
};
Ok(())
});
error::SUCCESS.code_num
}
/// Get the current state of the disclosed proof object
///
/// #Params
/// command_handle: command handle to map callback to user context.
///
/// proof_handle: Proof handle that was provided during creation. Used to access disclosed proof object
///
/// cb: Callback that provides most current state of the disclosed proof and error status of request
///
/// #Returns
/// Error code as a u32
#[no_mangle]
pub extern fn vcx_disclosed_proof_get_state(command_handle: u32,
proof_handle: u32,
cb: Option<extern fn(xcommand_handle: u32, err: u32, state: u32)>) -> u32 {
info!("vcx_disclosed_proof_get_state >>>");
check_useful_c_callback!(cb, VcxErrorKind::InvalidOption);
if !disclosed_proof::is_valid_handle(proof_handle) {
return VcxError::from(VcxErrorKind::InvalidDisclosedProofHandle).into()
}
let source_id = disclosed_proof::get_source_id(proof_handle).unwrap_or_default();
trace!("vcx_disclosed_proof_get_state(command_handle: {}, proof_handle: {}), source_id: {:?}",
command_handle, proof_handle, source_id);
spawn(move || {
match disclosed_proof::get_state(proof_handle) {
Ok(s) => {
trace!("vcx_disclosed_proof_get_state_cb(command_handle: {}, rc: {}, state: {}) source_id: {}",
command_handle, error::SUCCESS.message, s, source_id);
cb(command_handle, error::SUCCESS.code_num, s)
}
Err(e) => {
error!("vcx_disclosed_proof_get_state_cb(command_handle: {}, rc: {}, state: {}) source_id: {}",
command_handle, e, 0, source_id);
cb(command_handle, e.into(), 0)
}
};
Ok(())
});
error::SUCCESS.code_num
}
/// Checks for any state change in the disclosed proof and updates the state attribute
///
/// #Params
/// command_handle: command handle to map callback to user context.
///
/// proof_handle: Proof handle that was provided during creation. Used to identify the disclosed proof object
///
/// cb: Callback that provides most current state of the disclosed proof and error status of request
///
/// #Returns
/// Error code as a u32
#[no_mangle]
pub extern fn vcx_disclosed_proof_update_state(command_handle: u32,
proof_handle: u32,
cb: Option<extern fn(xcommand_handle: u32, err: u32, state: u32)>) -> u32 {
info!("vcx_disclosed_proof_update_state >>>");
check_useful_c_callback!(cb, VcxErrorKind::InvalidOption);
if !disclosed_proof::is_valid_handle(proof_handle) {
return VcxError::from(VcxErrorKind::InvalidDisclosedProofHandle).into()
}
let source_id = disclosed_proof::get_source_id(proof_handle).unwrap_or_default();
trace!("vcx_disclosed_proof_update_state(command_handle: {}, proof_handle: {}) source_id: {}",
command_handle, proof_handle, source_id);
spawn(move || {
match disclosed_proof::update_state(proof_handle) {
Ok(s) => {
trace!("vcx_disclosed_proof_update_state_cb(command_handle: {}, rc: {}, state: {}) source_id: {}",
command_handle, error::SUCCESS.message, s, source_id);
cb(command_handle, error::SUCCESS.code_num, s)
}
Err(e) => {
error!("vcx_disclosed_proof_update_state_cb(command_handle: {}, rc: {}, state: {}) source_id: {}",
command_handle, e, 0, source_id);
cb(command_handle, e.into(), 0)
}
};
Ok(())
});
error::SUCCESS.code_num
}
/// Takes the disclosed proof object and returns a json string of all its attributes
///
/// #Params
/// command_handle: command handle to map callback to user context.
///
/// handle: Proof handle that was provided during creation. Used to identify the disclosed proof object
///
/// cb: Callback that provides json string of the disclosed proof's attributes and provides error status
///
/// #Returns
/// Error code as a u32
#[no_mangle]
pub extern fn vcx_disclosed_proof_serialize(command_handle: u32,
proof_handle: u32,
cb: Option<extern fn(xcommand_handle: u32, err: u32, data: *const c_char)>) -> u32 {
info!("vcx_disclosed_proof_serialize >>>");
check_useful_c_callback!(cb, VcxErrorKind::InvalidOption);
if !disclosed_proof::is_valid_handle(proof_handle) {
return VcxError::from(VcxErrorKind::InvalidDisclosedProofHandle).into()
}
let source_id = disclosed_proof::get_source_id(proof_handle).unwrap_or_default();
trace!("vcx_disclosed_proof_serialize(command_handle: {}, proof_handle: {}) source_id: {}",
command_handle, proof_handle, source_id);
spawn(move || {
match disclosed_proof::to_string(proof_handle) {
Ok(x) => {
trace!("vcx_disclosed_proof_serialize_cb(command_handle: {}, rc: {}, data: {}) source_id: {}",
command_handle, error::SUCCESS.message, x, source_id);
let msg = CStringUtils::string_to_cstring(x);
cb(command_handle, error::SUCCESS.code_num, msg.as_ptr());
}
Err(x) => {
error!("vcx_disclosed_proof_serialize_cb(command_handle: {}, rc: {}, data: {}) source_id: {}",
command_handle, x, 0, source_id);
cb(command_handle, x.into(), ptr::null_mut());
}
};
Ok(())
});
error::SUCCESS.code_num
}
/// Takes a json string representing a disclosed proof object and recreates an object matching the json
///
/// #Params
/// command_handle: command handle to map callback to user context.
///
/// proof_data: json string representing a disclosed proof object
///
/// cb: Callback that provides the handle and error status
///
/// #Returns
/// Error code as a u32
#[no_mangle]
pub extern fn vcx_disclosed_proof_deserialize(command_handle: u32,
proof_data: *const c_char,
cb: Option<extern fn(xcommand_handle: u32, err: u32, handle: u32)>) -> u32 {
info!("vcx_disclosed_proof_deserialize >>>");
check_useful_c_callback!(cb, VcxErrorKind::InvalidOption);
check_useful_c_str!(proof_data, VcxErrorKind::InvalidOption);
trace!("vcx_disclosed_proof_deserialize(command_handle: {}, proof_data: {})",
command_handle, proof_data);
spawn(move || {
match disclosed_proof::from_string(&proof_data) {
Ok(x) => {
trace!("vcx_disclosed_proof_deserialize_cb(command_handle: {}, rc: {}, proof_handle: {}) source_id: {}",
command_handle, error::SUCCESS.message, x, disclosed_proof::get_source_id(x).unwrap_or_default());
cb(command_handle, 0, x);
}
Err(x) => {
error!("vcx_disclosed_proof_deserialize_cb(command_handle: {}, rc: {}, proof_handle: {}) source_id: {}",
command_handle, x, 0, "");
cb(command_handle, x.into(), 0);
}
};
Ok(())
});
error::SUCCESS.code_num
}
/// Takes the disclosed proof object and returns a json string of all credentials matching associated proof request from wallet
///
/// #Params
/// command_handle: command handle to map callback to user context.
///
/// handle: Proof handle that was provided during creation. Used to identify the disclosed proof object
///
/// cb: Callback that provides json string of the credentials in wallet associated with proof request
///
/// #Returns
/// Error code as a u32
#[no_mangle]
pub extern fn vcx_disclosed_proof_retrieve_credentials(command_handle: u32,
proof_handle: u32,
cb: Option<extern fn(xcommand_handle: u32, err: u32, data: *const c_char)>) -> u32 {
info!("vcx_disclosed_proof_retrieve_credentials >>>");
check_useful_c_callback!(cb, VcxErrorKind::InvalidOption);
if !disclosed_proof::is_valid_handle(proof_handle) {
return VcxError::from(VcxErrorKind::InvalidDisclosedProofHandle).into()
}
let source_id = disclosed_proof::get_source_id(proof_handle).unwrap_or_default();
trace!("vcx_disclosed_proof_retrieve_credentials(command_handle: {}, proof_handle: {}) source_id: {}",
command_handle, proof_handle, source_id);
spawn(move || {
match disclosed_proof::retrieve_credentials(proof_handle) {
Ok(x) => {
trace!("vcx_disclosed_proof_retrieve_credentials(command_handle: {}, rc: {}, data: {}) source_id: {}",
command_handle, error::SUCCESS.message, x, source_id);
let msg = CStringUtils::string_to_cstring(x);
cb(command_handle, error::SUCCESS.code_num, msg.as_ptr());
}
Err(x) => {
error!("vcx_disclosed_proof_retrieve_credentials(command_handle: {}, rc: {}, data: {}) source_id: {}",
command_handle, x, 0, source_id);
cb(command_handle, x.into(), ptr::null_mut());
}
};
Ok(())
});
error::SUCCESS.code_num
}
/// Takes the disclosed proof object and generates a proof from the selected credentials and self attested attributes
///
/// #Params
/// command_handle: command handle to map callback to user context.
///
///
/// handle: Proof handle that was provided during creation. Used to identify the disclosed proof object
///
/// selected_credentials: a json string with a credential for each proof request attribute.
/// List of possible credentials for each attribute is returned from vcx_disclosed_proof_retrieve_credentials,
/// (user needs to select specific credential to use from list of credentials)
/// {
/// "attrs":{
/// String:{// Attribute key: This may not be the same as the attr name ex. "age_1" where attribute name is "age"
/// "credential": {
/// "cred_info":{
/// "referent":String,
/// "attrs":{ String: String }, // ex. {"age": "111", "name": "Bob"}
/// "schema_id": String,
/// "cred_def_id": String,
/// "rev_reg_id":Option<String>,
/// "cred_rev_id":Option<String>,
/// },
///                   "interval":Option<{to: Option<u64>, from: Option<u64>}>
/// }, // This is the exact credential information selected from list of
/// // credentials returned from vcx_disclosed_proof_retrieve_credentials
/// "tails_file": Option<"String">, // Path to tails file for this credential
/// },
/// },
/// "predicates":{ TODO: will be implemented as part of IS-1095 ticket. }
/// }
/// // selected_credentials can be empty "{}" if the proof only contains self_attested_attrs
///
/// self_attested_attrs: a json string with attributes self attested by user
/// # Examples self_attested_attrs -> '{"self_attested_attr_0":"attested_val"}' | '{}'
///
/// cb: Callback that returns error status
///
/// #Returns
/// Error code as a u32
#[no_mangle]
pub extern fn vcx_disclosed_proof_generate_proof(command_handle: u32,
proof_handle: u32,
selected_credentials: *const c_char,
self_attested_attrs: *const c_char,
cb: Option<extern fn(xcommand_handle: u32, err: u32)>) -> u32 {
info!("vcx_disclosed_proof_generate_proof >>>");
check_useful_c_str!(selected_credentials, VcxErrorKind::InvalidOption);
check_useful_c_str!(self_attested_attrs, VcxErrorKind::InvalidOption);
check_useful_c_callback!(cb, VcxErrorKind::InvalidOption);
if !disclosed_proof::is_valid_handle(proof_handle) {
return VcxError::from(VcxErrorKind::InvalidDisclosedProofHandle).into()
}
let source_id = disclosed_proof::get_source_id(proof_handle).unwrap_or_default();
trace!("vcx_disclosed_proof_generate_proof(command_handle: {}, proof_handle: {}, selected_credentials: {}, self_attested_attrs: {}) source_id: {}",
command_handle, proof_handle, selected_credentials, self_attested_attrs, source_id);
spawn(move || {
match disclosed_proof::generate_proof(proof_handle, selected_credentials, self_attested_attrs) {
Ok(_) => {
trace!("vcx_disclosed_proof_generate_proof(command_handle: {}, rc: {}) source_id: {}",
command_handle, error::SUCCESS.message, source_id);
cb(command_handle, error::SUCCESS.code_num);
}
Err(x) => {
error!("vcx_disclosed_proof_generate_proof(command_handle: {}, rc: {}) source_id: {}",
command_handle, x, source_id);
cb(command_handle, x.into());
}
};
Ok(())
});
error::SUCCESS.code_num
}
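// A hypothetical, fully filled-in `selected_credentials` value matching the schema documented
// above (every identifier and value below is invented for illustration; real entries come from
// vcx_disclosed_proof_retrieve_credentials, and "predicates" is omitted because it is still TODO):
//
// {
//   "attrs": {
//     "age_1": {
//       "credential": {
//         "cred_info": {
//           "referent": "cred-ref-123",
//           "attrs": { "age": "111", "name": "Bob" },
//           "schema_id": "example:2:schema:1.0",
//           "cred_def_id": "example:3:CL:12:tag",
//           "rev_reg_id": null,
//           "cred_rev_id": null
//         },
//         "interval": null
//       },
//       "tails_file": null
//     }
//   }
// }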
/// Releases the disclosed proof object by de-allocating memory
///
/// #Params
/// handle: Proof handle that was provided during creation. Used to access proof object
///
/// #Returns
/// Success
#[no_mangle]
pub extern fn vcx_disclosed_proof_release(handle: u32) -> u32 {
info!("vcx_disclosed_proof_release >>>");
let source_id = disclosed_proof::get_source_id(handle).unwrap_or_default();
match disclosed_proof::release(handle) {
Ok(_) => {
trace!("vcx_disclosed_proof_release(handle: {}, rc: {}), source_id: {:?}",
handle, error::SUCCESS.message, source_id);
error::SUCCESS.code_num
}
Err(e) => {
error!("vcx_disclosed_proof_release(handle: {}, rc: {}), source_id: {:?}",
handle, e, source_id);
e.into()
}
}
}
#[cfg(test)]
mod tests {
extern crate serde_json;
use super::*;
use std::ffi::CString;
use std::time::Duration;
use connection;
use api::VcxStateType;
use utils::constants::DEFAULT_SERIALIZE_VERSION;
use api::return_types_u32;
use serde_json::Value;
pub const BAD_PROOF_REQUEST: &str = r#"{"version": "0.1","to_did": "LtMgSjtFcyPwenK9SHCyb8","from_did": "LtMgSjtFcyPwenK9SHCyb8","claim": {"account_num": ["8BEaoLf8TBmK4BUyX8WWnA"],"name_on_account": ["Alice"]},"schema_seq_no": 48,"issuer_did": "Pd4fnFtRBcMKRVC2go5w3j","claim_name": "Account Certificate","claim_id": "3675417066","msg_ref_id": "ymy5nth"}"#;
#[test]
fn test_vcx_proof_create_with_request_success() {
init!("true");
let cb = return_types_u32::Return_U32_U32::new().unwrap();
assert_eq!(vcx_disclosed_proof_create_with_request(cb.command_handle,
CString::new("test_create").unwrap().into_raw(),
CString::new(::utils::constants::PROOF_REQUEST_JSON).unwrap().into_raw(),
Some(cb.get_callback())), error::SUCCESS.code_num);
assert!(cb.receive(Some(Duration::from_secs(10))).unwrap() > 0);
}
#[test]
fn test_vcx_proof_create_with_request() {
init!("true");
let cb = return_types_u32::Return_U32_U32::new().unwrap();
assert_eq!(vcx_disclosed_proof_create_with_request(
cb.command_handle,
CString::new("test_create").unwrap().into_raw(),
CString::new(BAD_PROOF_REQUEST).unwrap().into_raw(),
Some(cb.get_callback())), error::SUCCESS.code_num);
assert_eq!(cb.receive(Some(Duration::from_secs(10))).err(), Some(error::INVALID_JSON.code_num));
}
#[test]
fn test_create_with_msgid() {
init!("true");
let cxn = ::connection::tests::build_test_connection();
::utils::httpclient::set_next_u8_response(::utils::constants::NEW_PROOF_REQUEST_RESPONSE.to_vec());
let cb = return_types_u32::Return_U32_U32_STR::new().unwrap();
assert_eq!(vcx_disclosed_proof_create_with_msgid(cb.command_handle,
CString::new("test_create_with_msgid").unwrap().into_raw(),
cxn,
CString::new("123").unwrap().into_raw(),
Some(cb.get_callback())), error::SUCCESS.code_num);
let (handle, disclosed_proof) = cb.receive(Some(Duration::from_secs(10))).unwrap();
assert!(handle > 0 && disclosed_proof.is_some());
}
#[test]
fn test_vcx_disclosed_proof_release() {
init!("true");
let handle = disclosed_proof::create_proof("1", ::utils::constants::PROOF_REQUEST_JSON).unwrap();
let unknown_handle = handle + 1;
let err = vcx_disclosed_proof_release(unknown_handle);
assert_eq!(err, error::INVALID_DISCLOSED_PROOF_HANDLE.code_num);
}
#[test]
fn test_vcx_disclosed_proof_serialize_and_deserialize() {
init!("true");
let cb = return_types_u32::Return_U32_STR::new().unwrap();
let handle = disclosed_proof::create_proof("1", ::utils::constants::PROOF_REQUEST_JSON).unwrap();
assert_eq!(vcx_disclosed_proof_serialize(cb.command_handle,
handle,
Some(cb.get_callback())), error::SUCCESS.code_num);
let s = cb.receive(Some(Duration::from_secs(2))).unwrap().unwrap();
let j: Value = serde_json::from_str(&s).unwrap();
assert_eq!(j["version"], DEFAULT_SERIALIZE_VERSION);
let cb = return_types_u32::Return_U32_U32::new().unwrap();
assert_eq!(vcx_disclosed_proof_deserialize(cb.command_handle,
CString::new(s).unwrap().into_raw(),
Some(cb.get_callback())),
error::SUCCESS.code_num);
let handle = cb.receive(Some(Duration::from_secs(2))).unwrap();
assert!(handle > 0);
}
#[test]
fn test_vcx_send_proof() {
init!("true");
let handle = disclosed_proof::create_proof("1", ::utils::constants::PROOF_REQUEST_JSON).unwrap();
assert_eq!(disclosed_proof::get_state(handle).unwrap(), VcxStateType::VcxStateRequestReceived as u32);
let connection_handle = connection::tests::build_test_connection();
let cb = return_types_u32::Return_U32::new().unwrap();
assert_eq!(vcx_disclosed_proof_send_proof(cb.command_handle, handle, connection_handle, Some(cb.get_callback())), error::SUCCESS.code_num);
cb.receive(Some(Duration::from_secs(10))).unwrap();
}
#[test]
fn test_vcx_proof_get_requests() {
init!("true");
let cxn = ::connection::tests::build_test_connection();
::utils::httpclient::set_next_u8_response(::utils::constants::NEW_PROOF_REQUEST_RESPONSE.to_vec());
let cb = return_types_u32::Return_U32_STR::new().unwrap();
assert_eq!(vcx_disclosed_proof_get_requests(cb.command_handle, cxn, Some(cb.get_callback())), error::SUCCESS.code_num as u32);
cb.receive(Some(Duration::from_secs(10))).unwrap();
}
#[test]
fn test_vcx_proof_get_state() {
init!("true");
let handle = disclosed_proof::create_proof("1", ::utils::constants::PROOF_REQUEST_JSON).unwrap();
assert!(handle > 0);
let cb = return_types_u32::Return_U32_U32::new().unwrap();
assert_eq!(vcx_disclosed_proof_get_state(cb.command_handle, handle, Some(cb.get_callback())), error::SUCCESS.code_num);
let state = cb.receive(Some(Duration::from_secs(10))).unwrap();
assert_eq!(state, VcxStateType::VcxStateRequestReceived as u32);
}
#[test]
fn test_vcx_disclosed_proof_retrieve_credentials() {
init!("true");
let cb = return_types_u32::Return_U32_U32::new().unwrap();
assert_eq!(vcx_disclosed_proof_create_with_request(cb.command_handle,
CString::new("test_create").unwrap().into_raw(),
CString::new(::utils::constants::PROOF_REQUEST_JSON).unwrap().into_raw(),
Some(cb.get_callback())), error::SUCCESS.code_num);
let handle = cb.receive(Some(Duration::from_secs(2))).unwrap();
let cb = return_types_u32::Return_U32_STR::new().unwrap();
assert_eq!(vcx_disclosed_proof_retrieve_credentials(cb.command_handle,
handle,
Some(cb.get_callback())),
error::SUCCESS.code_num);
        let _credentials = cb.receive(None).unwrap().unwrap();
}
#[test]
fn test_vcx_disclosed_proof_generate_proof() {
init!("true");
let cb = return_types_u32::Return_U32_U32::new().unwrap();
assert_eq!(vcx_disclosed_proof_create_with_request(cb.command_handle,
CString::new("test_create").unwrap().into_raw(),
CString::new(::utils::constants::PROOF_REQUEST_JSON).unwrap().into_raw(),
Some(cb.get_callback())), error::SUCCESS.code_num);
let proof_handle = cb.receive(Some(Duration::from_secs(10))).unwrap();
let cb = return_types_u32::Return_U32::new().unwrap();
assert_eq!(vcx_disclosed_proof_generate_proof(cb.command_handle,
proof_handle,
CString::new("{}").unwrap().into_raw(),
CString::new("{}").unwrap().into_raw(),
Some(cb.get_callback())), error::SUCCESS.code_num);
cb.receive(Some(Duration::from_secs(10))).unwrap();
}
}
| 44.016783 | 362 | 0.592972 |
381ea168a286d0144a3e460942c46e2b81224ef0 | 1,114 | use crate::common::util::*;
#[test]
fn test_link_existing_file() {
let (at, mut ucmd) = at_and_ucmd!();
let file = "test_link_existing_file";
let link = "test_link_existing_file_link";
at.touch(file);
at.write(file, "foobar");
assert!(at.file_exists(file));
ucmd.args(&[file, link]).succeeds().no_stderr();
assert!(at.file_exists(file));
assert!(at.file_exists(link));
assert_eq!(at.read(file), at.read(link));
}
#[test]
fn test_link_no_circular() {
let (at, mut ucmd) = at_and_ucmd!();
let link = "test_link_no_circular";
ucmd.args(&[link, link])
.fails()
.stderr_is("link: error: No such file or directory (os error 2)\n");
assert!(!at.file_exists(link));
}
#[test]
fn test_link_nonexistent_file() {
let (at, mut ucmd) = at_and_ucmd!();
let file = "test_link_nonexistent_file";
let link = "test_link_nonexistent_file_link";
ucmd.args(&[file, link])
.fails()
.stderr_is("link: error: No such file or directory (os error 2)\n");
assert!(!at.file_exists(file));
assert!(!at.file_exists(link));
}
| 26.52381 | 76 | 0.632855 |
33e6eeee06500c1d5bba8b24e3dba4b694732a73 | 5,709 | #![allow(warnings, clippy, unknown_lints)]
use std::{io::Result, path::PathBuf, process::exit};
pub type Identifier = String;
pub type StringLiteral = String;
pub mod asm;
pub mod hir;
pub mod mir;
use hir::HirProgram;
mod target;
pub use target::{Go, Target, C};
use asciicolor::Colorize;
use comment::cpp::strip;
use lalrpop_util::{lalrpop_mod, ParseError};
lalrpop_mod!(pub parser);
pub fn compile(
cwd: &PathBuf,
ffi_code: impl ToString,
input: impl ToString,
target: impl Target,
) -> Result<()> {
match parse(input).compile(cwd) {
Ok(mir) => match mir.assemble() {
Ok(asm) => match asm.assemble(&target) {
// Add the target's prelude, the FFI code from the user,
// the compiled Oak code, and the target's postlude
Ok(result) => target.compile(
target.prelude() + &ffi_code.to_string() + &result + &target.postlude(),
),
Err(e) => {
eprintln!("compilation error: {}", e.bright_red().underline());
exit(1);
}
},
Err(e) => {
eprintln!("compilation error: {}", e.bright_red().underline());
exit(1);
}
},
Err(e) => {
eprintln!("compilation error: {}", e.bright_red().underline());
exit(1);
}
}
}
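// Pipeline summary (mirrors the match above): `parse` yields an `HirProgram`, `compile`
// lowers it to MIR, `assemble` turns the MIR into the target-independent assembly form,
// and that assembly is rendered for the chosen target (e.g. `C` or `Go`), wrapped in the
// target's prelude/postlude plus the user's FFI code, and handed to `Target::compile`.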
pub fn parse(input: impl ToString) -> HirProgram {
let code = &strip(input.to_string()).unwrap();
match parser::ProgramParser::new().parse(code) {
        // if the parser succeeds, return the parsed program
        Ok(parsed) => parsed,
        // if the parser fails, print a nicely formatted error and exit
Err(e) => {
eprintln!("{}", format_error(&code, e));
exit(1);
}
}
}
type Error<'a, T> = ParseError<usize, T, &'a str>;
/// This formats an error properly given the line, the `unexpected` token as a string,
/// the line number, and the column number of the unexpected token.
fn make_error(line: &str, unexpected: &str, line_number: usize, column_number: usize) -> String {
// The string used to underline the unexpected token
let underline = format!(
"{}^{}",
" ".repeat(column_number),
"-".repeat(unexpected.len() - 1)
);
// Format string properly and return
format!(
"{WS} |
{line_number} | {line}
{WS} | {underline}
{WS} |
{WS} = unexpected `{unexpected}`",
WS = " ".repeat(line_number.to_string().len()),
line_number = line_number,
line = line.bright_yellow().underline(),
underline = underline,
unexpected = unexpected.bright_yellow().underline()
)
}
// Gets the line number, the line, and the column number of the error
fn get_line(script: &str, location: usize) -> (usize, String, usize) {
// Get the line number from the character location
let line_number = script[..location + 1].lines().count();
// Get the line from the line number
let line = match script.lines().nth(line_number - 1) {
Some(line) => line,
None => {
if let Some(line) = script.lines().last() {
line
} else {
""
}
}
}
.replace("\t", " ");
// Get the column number from the location
let mut column = {
let mut current_column = 0;
// For every character in the script until the location of the error,
// keep track of the column location
for ch in script[..location].chars() {
if ch == '\n' {
current_column = 0;
} else if ch == '\t' {
current_column += 4;
} else {
current_column += 1;
}
}
current_column
};
// Trim the beginning of the line and subtract the number of spaces from the column
let trimmed_line = line.trim_start();
column -= (line.len() - trimmed_line.len()) as i32;
(line_number, String::from(trimmed_line), column as usize)
}
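// A minimal sketch (not part of the original crate's test suite) showing what `get_line`
// reports: the 1-based line number, the trimmed line text, and the 0-based column of the
// character at `location`.
#[cfg(test)]
mod get_line_sketch {
    #[test]
    fn reports_line_and_column() {
        let script = "let x = 1\nlet y = ?";
        let location = script.find('?').unwrap();
        let (line_number, line, column) = super::get_line(script, location);
        assert_eq!(line_number, 2);
        assert_eq!(line, "let y = ?");
        assert_eq!(column, 8);
    }
}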
/// This is used to take an LALRPOP error and convert
/// it into a nicely formatted error message
fn format_error<T: core::fmt::Debug>(script: &str, err: Error<T>) -> String {
match err {
Error::InvalidToken { location } => {
let (line_number, line, column) = get_line(script, location);
make_error(
&line,
&(script.as_bytes()[location] as char).to_string(),
line_number,
column,
)
}
Error::UnrecognizedEOF { location, .. } => {
let (line_number, line, _) = get_line(script, location);
make_error(&line, "EOF", line_number, line.len())
}
Error::UnrecognizedToken { token, .. } => {
// The start and end of the unrecognized token
let start = token.0;
let end = token.2;
let (line_number, line, column) = get_line(script, start);
let unexpected = &script[start..end];
make_error(&line, unexpected, line_number, column)
}
Error::ExtraToken { token } => {
// The start and end of the extra token
let start = token.0;
let end = token.2;
let (line_number, line, column) = get_line(script, start);
let unexpected = &script[start..end];
make_error(&line, unexpected, line_number, column)
}
Error::User { error } => format!(
" |\n? | {}\n | {}\n |\n = unexpected compiling error",
error,
format!("^{}", "-".repeat(error.len() - 1))
),
}
}
| 32.622857 | 97 | 0.541776 |
6ad7552a7dc8a29996d76897b765a1c84485a1fc | 17,937 | pub(crate) mod debug;
pub(crate) mod into_shapes;
pub(crate) mod pattern;
pub(crate) mod state;
use self::debug::ExpandTracer;
use self::into_shapes::IntoShapes;
use self::state::{Peeked, TokensIteratorState};
use crate::hir::syntax_shape::flat_shape::{FlatShape, ShapeResult};
use crate::hir::syntax_shape::{ExpandContext, ExpandSyntax, ExpressionListShape};
use crate::hir::SpannedExpression;
use crate::parse::token_tree::{BlockType, DelimitedNode, SpannedToken, SquareType, TokenType};
use getset::{Getters, MutGetters};
use nu_errors::ParseError;
use nu_protocol::SpannedTypeName;
use nu_source::{
HasFallibleSpan, HasSpan, IntoSpanned, PrettyDebugWithSource, Span, Spanned, SpannedItem, Text,
};
use std::borrow::Borrow;
use std::sync::Arc;
#[derive(Getters, MutGetters, Clone, Debug)]
pub struct TokensIterator<'content> {
#[get = "pub"]
#[get_mut = "pub"]
state: TokensIteratorState<'content>,
#[get = "pub"]
#[get_mut = "pub"]
expand_tracer: ExpandTracer<SpannedExpression>,
}
#[derive(Debug)]
pub struct Checkpoint<'content, 'me> {
pub(crate) iterator: &'me mut TokensIterator<'content>,
index: usize,
seen: indexmap::IndexSet<usize>,
shape_start: usize,
committed: bool,
}
impl<'content, 'me> Checkpoint<'content, 'me> {
pub(crate) fn commit(mut self) {
self.committed = true;
}
}
impl<'content, 'me> std::ops::Drop for Checkpoint<'content, 'me> {
fn drop(&mut self) {
if !self.committed {
let state = &mut self.iterator.state;
state.index = self.index;
state.seen = self.seen.clone();
state.shapes.truncate(self.shape_start);
}
}
}
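// Rollback-on-drop: unless `commit` is called first, dropping a `Checkpoint` restores the
// iterator's index, seen-set, and recorded shapes, which is what gives `atomic_parse` and
// `shapes_for` below their all-or-nothing behaviour.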
// For parse_command
impl<'content> TokensIterator<'content> {
pub fn sort_shapes(&mut self) {
// This is pretty dubious, but it works. We should look into a better algorithm that doesn't end up requiring
// this solution.
self.state
.shapes
.sort_by(|a, b| a.span().start().cmp(&b.span().start()));
}
/// Run a block of code, retrieving the shapes that were created during the block. This is
/// used by `parse_command` to associate shapes with a particular flag.
pub fn shapes_for<'me, T>(
&'me mut self,
block: impl FnOnce(&mut TokensIterator<'content>) -> Result<T, ParseError>,
) -> (Result<T, ParseError>, Vec<ShapeResult>) {
let index = self.state.index;
let mut shapes = vec![];
let mut errors = self.state.errors.clone();
let seen = self.state.seen.clone();
std::mem::swap(&mut self.state.shapes, &mut shapes);
std::mem::swap(&mut self.state.errors, &mut errors);
let checkpoint = Checkpoint {
iterator: self,
index,
seen,
committed: false,
shape_start: 0,
};
let value = block(checkpoint.iterator);
let value = match value {
Err(err) => {
drop(checkpoint);
std::mem::swap(&mut self.state.shapes, &mut shapes);
std::mem::swap(&mut self.state.errors, &mut errors);
return (Err(err), vec![]);
}
Ok(value) => value,
};
checkpoint.commit();
std::mem::swap(&mut self.state.shapes, &mut shapes);
(Ok(value), shapes)
}
pub fn extract<T>(&mut self, f: impl Fn(&SpannedToken) -> Option<T>) -> Option<(usize, T)> {
let state = &mut self.state;
for (i, item) in state.tokens.iter().enumerate() {
if state.seen.contains(&i) {
continue;
}
match f(item) {
None => {
continue;
}
Some(value) => {
state.seen.insert(i);
return Some((i, value));
}
}
}
self.move_to(0);
None
}
pub fn remove(&mut self, position: usize) {
self.state.seen.insert(position);
}
}
// Delimited
impl<'content> TokensIterator<'content> {
pub fn block(&mut self) -> Result<Spanned<Vec<SpannedExpression>>, ParseError> {
self.expand_token_with_token_nodes(BlockType, |node, token_nodes| {
token_nodes.delimited(node)
})
}
pub fn square(&mut self) -> Result<Spanned<Vec<SpannedExpression>>, ParseError> {
self.expand_token_with_token_nodes(SquareType, |node, token_nodes| {
token_nodes.delimited(node)
})
}
fn delimited(
&mut self,
DelimitedNode {
delimiter,
spans,
children,
}: DelimitedNode,
) -> Result<(Vec<ShapeResult>, Spanned<Vec<SpannedExpression>>), ParseError> {
let span = spans.0.until(spans.1);
let (child_shapes, expr) = self.child(children[..].spanned(span), |token_nodes| {
token_nodes.expand_infallible(ExpressionListShape).exprs
});
let mut shapes = vec![ShapeResult::Success(
FlatShape::OpenDelimiter(delimiter).spanned(spans.0),
)];
shapes.extend(child_shapes);
shapes.push(ShapeResult::Success(
FlatShape::CloseDelimiter(delimiter).spanned(spans.1),
));
Ok((shapes, expr))
}
}
impl<'content> TokensIterator<'content> {
pub fn new(
items: &'content [SpannedToken],
context: ExpandContext<'content>,
span: Span,
) -> TokensIterator<'content> {
let source = context.source();
TokensIterator {
state: TokensIteratorState {
tokens: items,
span,
index: 0,
seen: indexmap::IndexSet::new(),
shapes: vec![],
errors: indexmap::IndexMap::new(),
context: Arc::new(context),
},
expand_tracer: ExpandTracer::new("Expand Trace", source.clone()),
}
}
pub fn len(&self) -> usize {
self.state.tokens.len()
}
pub fn is_empty(&self) -> bool {
self.state.tokens.is_empty()
}
pub fn source(&self) -> Text {
self.state.context.source().clone()
}
pub fn context(&self) -> &ExpandContext {
&self.state.context
}
pub fn color_result(&mut self, shape: ShapeResult) {
match shape {
ShapeResult::Success(shape) => self.color_shape(shape),
ShapeResult::Fallback { shape, allowed } => self.color_err(shape, allowed),
}
}
pub fn color_shape(&mut self, shape: Spanned<FlatShape>) {
self.with_tracer(|_, tracer| tracer.add_shape(shape.into_trace_shape(shape.span)));
self.state.shapes.push(ShapeResult::Success(shape));
}
pub fn color_err(&mut self, shape: Spanned<FlatShape>, valid_shapes: Vec<String>) {
self.with_tracer(|_, tracer| tracer.add_err_shape(shape.into_trace_shape(shape.span)));
self.state.errors.insert(shape.span, valid_shapes.clone());
self.state.shapes.push(ShapeResult::Fallback {
shape,
allowed: valid_shapes,
});
}
pub fn color_shapes(&mut self, shapes: Vec<Spanned<FlatShape>>) {
self.with_tracer(|_, tracer| {
for shape in &shapes {
tracer.add_shape(shape.into_trace_shape(shape.span))
}
});
for shape in &shapes {
self.state.shapes.push(ShapeResult::Success(*shape));
}
}
pub fn child<'me, T>(
&'me mut self,
tokens: Spanned<&'me [SpannedToken]>,
block: impl FnOnce(&mut TokensIterator<'me>) -> T,
) -> (Vec<ShapeResult>, T) {
let mut shapes = vec![];
std::mem::swap(&mut shapes, &mut self.state.shapes);
let mut errors = self.state.errors.clone();
std::mem::swap(&mut errors, &mut self.state.errors);
let mut expand_tracer = ExpandTracer::new("Expand Trace", self.source());
std::mem::swap(&mut expand_tracer, &mut self.expand_tracer);
let mut iterator = TokensIterator {
state: TokensIteratorState {
tokens: tokens.item,
span: tokens.span,
index: 0,
seen: indexmap::IndexSet::new(),
shapes,
errors,
context: self.state.context.clone(),
},
expand_tracer,
};
let result = block(&mut iterator);
std::mem::swap(&mut iterator.state.shapes, &mut self.state.shapes);
std::mem::swap(&mut iterator.state.errors, &mut self.state.errors);
std::mem::swap(&mut iterator.expand_tracer, &mut self.expand_tracer);
(iterator.state.shapes, result)
}
fn with_tracer(
&mut self,
block: impl FnOnce(&mut TokensIteratorState, &mut ExpandTracer<SpannedExpression>),
) {
let state = &mut self.state;
let tracer = &mut self.expand_tracer;
block(state, tracer)
}
pub fn finish_tracer(&mut self) {
self.with_tracer(|_, tracer| tracer.finish())
}
pub fn atomic_parse<'me, T, E>(
&'me mut self,
block: impl FnOnce(&mut TokensIterator<'content>) -> Result<T, E>,
) -> Result<T, E> {
let state = &mut self.state;
let index = state.index;
let shape_start = state.shapes.len();
let seen = state.seen.clone();
let checkpoint = Checkpoint {
iterator: self,
index,
seen,
committed: false,
shape_start,
};
let value = block(checkpoint.iterator)?;
checkpoint.commit();
Ok(value)
}
fn eof_span(&self) -> Span {
Span::new(self.state.span.end(), self.state.span.end())
}
pub fn span_at_cursor(&mut self) -> Span {
let next = self.peek();
match next.node {
None => self.eof_span(),
Some(node) => node.span(),
}
}
pub fn at_end(&self) -> bool {
next_index(&self.state).is_none()
}
pub fn move_to(&mut self, pos: usize) {
self.state.index = pos;
}
/// Peek the next token in the token stream and return a `Peeked`.
///
/// # Example
///
/// ```ignore
/// let peeked = token_nodes.peek().not_eof();
/// let node = peeked.node;
/// match node.unspanned() {
/// Token::Whitespace => {
/// let node = peeked.commit();
/// return Ok(node.span)
/// }
/// other => return Err(ParseError::mismatch("whitespace", node.spanned_type_name()))
/// }
/// ```
pub fn peek<'me>(&'me mut self) -> Peeked<'content, 'me> {
let state = self.state();
let len = state.tokens.len();
let from = state.index;
let index = next_index(state);
let (node, to) = match index {
None => (None, len),
Some(to) => (Some(&state.tokens[to]), to + 1),
};
Peeked {
node,
iterator: self,
from,
to,
}
}
/// Produce an error corresponding to the next token.
///
/// If the next token is EOF, produce an `UnexpectedEof`. Otherwise, produce a `Mismatch`.
pub fn err_next_token(&mut self, expected: &'static str) -> ParseError {
match next_index(&self.state) {
None => ParseError::unexpected_eof(expected, self.eof_span()),
Some(index) => {
ParseError::mismatch(expected, self.state.tokens[index].spanned_type_name())
}
}
}
fn expand_token_with_token_nodes<
'me,
T: 'me,
U: IntoSpanned<Output = V>,
V: HasFallibleSpan,
F: IntoShapes,
>(
&'me mut self,
expected: impl TokenType<Output = T>,
block: impl FnOnce(T, &mut Self) -> Result<(F, U), ParseError>,
) -> Result<V, ParseError> {
let desc = expected.desc();
let peeked = self.peek().not_eof(desc.borrow())?;
let (shapes, val) = {
let node = peeked.node;
let type_name = node.spanned_type_name();
let func = Box::new(|| Err(ParseError::mismatch(desc.clone().into_owned(), type_name)));
match expected.extract_token_value(node, &func) {
Err(err) => return Err(err),
Ok(value) => match block(value, peeked.iterator) {
Err(err) => return Err(err),
Ok((shape, val)) => {
let span = peeked.node.span();
peeked.commit();
(shape.into_shapes(span), val.into_spanned(span))
}
},
}
};
for shape in &shapes {
self.color_result(shape.clone());
}
Ok(val)
}
/// Expand and color a single token. Takes an `impl TokenType` and produces
/// (() | FlatShape | Vec<Spanned<FlatShape>>, Output) (or an error).
///
/// If a single FlatShape is produced, it is annotated with the span of the
/// original token. Otherwise, each FlatShape in the list must already be
/// annotated.
pub fn expand_token<'me, T, U, V, F>(
&'me mut self,
expected: impl TokenType<Output = T>,
block: impl FnOnce(T) -> Result<(F, U), ParseError>,
) -> Result<V, ParseError>
where
T: 'me,
U: IntoSpanned<Output = V>,
V: HasFallibleSpan,
F: IntoShapes,
{
self.expand_token_with_token_nodes(expected, |value, _| block(value))
}
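    // Hypothetical usage sketch (the token type and shape variant names are illustrative,
    // not taken from a specific call site):
    //
    //     token_nodes.expand_token(BareType, |word_span| Ok((FlatShape::Word, word_span)))
    //
    // Per `IntoShapes`, the first element of the returned pair may be `()`, a single
    // `FlatShape` (which gets spanned with the consumed token's span), or a vector of
    // already-spanned shapes.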
fn commit(&mut self, from: usize, to: usize) {
for index in from..to {
self.state.seen.insert(index);
}
self.state.index = to;
}
pub fn debug_remaining(&self) -> Vec<SpannedToken> {
let mut tokens: TokensIterator = self.clone();
tokens.move_to(0);
tokens.cloned().collect()
}
/// Expand an `ExpandSyntax` whose output is a `Result`, producing either the shape's output
/// or a `ParseError`. If the token stream is at EOF, this method produces a ParseError
/// (`UnexpectedEof`).
///
/// You must use `expand_syntax` if the `Output` of the `ExpandSyntax` is a `Result`, but
/// it's difficult to model this in the Rust type system.
pub fn expand_syntax<U>(
&mut self,
shape: impl ExpandSyntax<Output = Result<U, ParseError>>,
) -> Result<U, ParseError>
where
U: std::fmt::Debug + HasFallibleSpan + PrettyDebugWithSource + Clone + 'static,
{
if self.at_end() {
self.with_tracer(|_, tracer| tracer.start(shape.name(), None));
self.with_tracer(|_, tracer| tracer.eof_frame());
return Err(ParseError::unexpected_eof(shape.name(), self.eof_span()));
}
let (result, added_shapes) = self.expand(shape);
match &result {
Ok(val) => self.finish_expand(val, added_shapes),
Err(err) => self.with_tracer(|_, tracer| tracer.failed(err)),
}
result
}
/// Expand an `impl ExpandSyntax` and produce its Output. Use `expand_infallible` if the
    /// `ExpandSyntax` cannot produce a `Result`. You must use `expand_syntax` instead if EOF
    /// is an error.
///
    /// The purpose of `expand_infallible` is to clearly mark the infallible path through
    /// an entire list of tokens that produces a fully colored version of the source.
///
/// If the `ExpandSyntax` can produce a `Result`, make sure to use `expand_syntax`,
/// which will correctly show the error in the trace.
pub fn expand_infallible<U>(&mut self, shape: impl ExpandSyntax<Output = U>) -> U
where
U: std::fmt::Debug + PrettyDebugWithSource + HasFallibleSpan + Clone + 'static,
{
let (result, added_shapes) = self.expand(shape);
self.finish_expand(&result, added_shapes);
result
}
fn finish_expand<V>(&mut self, val: &V, added_shapes: usize)
where
V: PrettyDebugWithSource + HasFallibleSpan + Clone,
{
self.with_tracer(|_, tracer| {
if val.maybe_span().is_some() || added_shapes > 0 {
tracer.add_result(val.clone());
}
tracer.success();
})
}
pub fn expand<U>(&mut self, shape: impl ExpandSyntax<Output = U>) -> (U, usize)
where
U: std::fmt::Debug + Clone + 'static,
{
let desc = shape.name();
self.with_tracer(|state, tracer| {
tracer.start(
desc,
next_index(state).map(|index| state.tokens[index].clone()),
)
});
let start_shapes = self.state.shapes.len();
let result = shape.expand(self);
let added_shapes = self.state.shapes.len() - start_shapes;
(result, added_shapes)
}
}
impl<'content> Iterator for TokensIterator<'content> {
type Item = &'content SpannedToken;
fn next(&mut self) -> Option<Self::Item> {
next(self)
}
}
fn next_index(state: &TokensIteratorState) -> Option<usize> {
let mut to = state.index;
loop {
if to >= state.tokens.len() {
return None;
}
if state.seen.contains(&to) {
to += 1;
continue;
}
if to >= state.tokens.len() {
return None;
}
return Some(to);
}
}
fn next<'me, 'content>(
iterator: &'me mut TokensIterator<'content>,
) -> Option<&'content SpannedToken> {
let next = next_index(&iterator.state);
let len = iterator.len();
match next {
None => {
iterator.move_to(len);
None
}
Some(index) => {
iterator.move_to(index + 1);
Some(&iterator.state.tokens[index])
}
}
}
| 29.746269 | 117 | 0.557005 |
18ac61b3bcc3b517e9d3eb455c9fb7e2d1664e7d | 105 | fn main() {
let mut v = [-5, 4, 1, -3, 2];
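    // Arrays auto-deref to slices, so the call below is `<[i32]>::sort`: a stable,
    // in-place sort that requires `Ord` (fine for integers, not directly for floats).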
v.sort();
assert!(v == [-5, -3, 1, 2, 4]);
}
| 15 | 36 | 0.333333 |
50bddc049100a3d1484cb0a4694b309a2bd01080 | 26,120 | use std::iter;
use cgmath::prelude::*;
use wgpu::util::DeviceExt;
use winit::{
event::*,
event_loop::{ControlFlow, EventLoop},
window::Window,
};
mod camera;
mod model;
mod texture; // NEW!
use model::{DrawLight, DrawModel, Vertex};
const NUM_INSTANCES_PER_ROW: u32 = 10;
#[repr(C)]
#[derive(Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct Uniforms {
view_position: [f32; 4],
view_proj: [[f32; 4]; 4],
}
impl Uniforms {
fn new() -> Self {
Self {
view_position: [0.0; 4],
view_proj: cgmath::Matrix4::identity().into(),
}
}
// UPDATED!
fn update_view_proj(&mut self, camera: &camera::Camera, projection: &camera::Projection) {
self.view_position = camera.position.to_homogeneous().into();
self.view_proj = (projection.calc_matrix() * camera.calc_matrix()).into()
}
}
struct Instance {
position: cgmath::Vector3<f32>,
rotation: cgmath::Quaternion<f32>,
}
impl Instance {
fn to_raw(&self) -> InstanceRaw {
InstanceRaw {
model: (cgmath::Matrix4::from_translation(self.position)
* cgmath::Matrix4::from(self.rotation))
.into(),
normal: cgmath::Matrix3::from(self.rotation).into(),
}
}
}
#[repr(C)]
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
#[allow(dead_code)]
struct InstanceRaw {
model: [[f32; 4]; 4],
normal: [[f32; 3]; 3],
}
impl model::Vertex for InstanceRaw {
fn desc<'a>() -> wgpu::VertexBufferLayout<'a> {
use std::mem;
wgpu::VertexBufferLayout {
array_stride: mem::size_of::<InstanceRaw>() as wgpu::BufferAddress,
            // We need to switch from using a step mode of Vertex to Instance.
            // This means the vertex shader only advances to the next instance's data
            // when it starts processing a new instance, rather than on every vertex.
step_mode: wgpu::InputStepMode::Instance,
attributes: &[
wgpu::VertexAttribute {
offset: 0,
                    // While our vertex shader only uses locations 0 and 1 now, in later tutorials we'll
                    // be using 2, 3, and 4 for Vertex. We'll start at slot 5 to avoid conflicting with them later
shader_location: 5,
format: wgpu::VertexFormat::Float32x4,
},
                // A mat4 takes up 4 vertex slots as it is technically 4 vec4s. We need to define a slot
                // for each vec4; the shader then reassembles the four vec4s back into a single mat4.
wgpu::VertexAttribute {
offset: mem::size_of::<[f32; 4]>() as wgpu::BufferAddress,
shader_location: 6,
format: wgpu::VertexFormat::Float32x4,
},
wgpu::VertexAttribute {
offset: mem::size_of::<[f32; 8]>() as wgpu::BufferAddress,
shader_location: 7,
format: wgpu::VertexFormat::Float32x4,
},
wgpu::VertexAttribute {
offset: mem::size_of::<[f32; 12]>() as wgpu::BufferAddress,
shader_location: 8,
format: wgpu::VertexFormat::Float32x4,
},
wgpu::VertexAttribute {
offset: mem::size_of::<[f32; 16]>() as wgpu::BufferAddress,
shader_location: 9,
format: wgpu::VertexFormat::Float32x3,
},
wgpu::VertexAttribute {
offset: mem::size_of::<[f32; 19]>() as wgpu::BufferAddress,
shader_location: 10,
format: wgpu::VertexFormat::Float32x3,
},
wgpu::VertexAttribute {
offset: mem::size_of::<[f32; 22]>() as wgpu::BufferAddress,
shader_location: 11,
format: wgpu::VertexFormat::Float32x3,
},
],
}
}
}
#[repr(C)]
#[derive(Debug, Copy, Clone, bytemuck::Pod, bytemuck::Zeroable)]
struct Light {
position: [f32; 3],
// Due to uniforms requiring 16 byte (4 float) spacing, we need to use a padding field here
_padding: u32,
color: [f32; 3],
}
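// Layout note (general WGSL/std140-style rule rather than anything specific to this code):
// a vec3 uniform member is 16-byte aligned, so the u32 above pads `position` from 12 to 16
// bytes and keeps `color` at the offset the shader expects.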
struct State {
surface: wgpu::Surface,
device: wgpu::Device,
queue: wgpu::Queue,
sc_desc: wgpu::SwapChainDescriptor,
swap_chain: wgpu::SwapChain,
render_pipeline: wgpu::RenderPipeline,
obj_model: model::Model,
camera: camera::Camera, // UPDATED!
projection: camera::Projection, // NEW!
camera_controller: camera::CameraController, // UPDATED!
uniforms: Uniforms,
uniform_buffer: wgpu::Buffer,
uniform_bind_group: wgpu::BindGroup,
instances: Vec<Instance>,
#[allow(dead_code)]
instance_buffer: wgpu::Buffer,
depth_texture: texture::Texture,
size: winit::dpi::PhysicalSize<u32>,
light: Light,
light_buffer: wgpu::Buffer,
light_bind_group: wgpu::BindGroup,
light_render_pipeline: wgpu::RenderPipeline,
#[allow(dead_code)]
debug_material: model::Material,
// NEW!
mouse_pressed: bool,
}
fn create_render_pipeline(
device: &wgpu::Device,
layout: &wgpu::PipelineLayout,
color_format: wgpu::TextureFormat,
depth_format: Option<wgpu::TextureFormat>,
vertex_layouts: &[wgpu::VertexBufferLayout],
shader: wgpu::ShaderModuleDescriptor,
) -> wgpu::RenderPipeline {
let shader = device.create_shader_module(&shader);
device.create_render_pipeline(&wgpu::RenderPipelineDescriptor {
label: Some(&format!("{:?}", shader)),
layout: Some(&layout),
vertex: wgpu::VertexState {
module: &shader,
entry_point: "main",
buffers: vertex_layouts,
},
fragment: Some(wgpu::FragmentState {
module: &shader,
entry_point: "main",
targets: &[wgpu::ColorTargetState {
format: color_format,
blend: Some(wgpu::BlendState {
alpha: wgpu::BlendComponent::REPLACE,
color: wgpu::BlendComponent::REPLACE,
}),
write_mask: wgpu::ColorWrite::ALL,
}],
}),
primitive: wgpu::PrimitiveState {
topology: wgpu::PrimitiveTopology::TriangleList,
strip_index_format: None,
front_face: wgpu::FrontFace::Ccw,
cull_mode: Some(wgpu::Face::Back),
// Setting this to anything other than Fill requires Features::NON_FILL_POLYGON_MODE
polygon_mode: wgpu::PolygonMode::Fill,
// Requires Features::DEPTH_CLAMPING
clamp_depth: false,
// Requires Features::CONSERVATIVE_RASTERIZATION
conservative: false,
},
depth_stencil: depth_format.map(|format| wgpu::DepthStencilState {
format,
depth_write_enabled: true,
depth_compare: wgpu::CompareFunction::Less,
stencil: wgpu::StencilState::default(),
bias: wgpu::DepthBiasState::default(),
}),
multisample: wgpu::MultisampleState {
count: 1,
mask: !0,
alpha_to_coverage_enabled: false,
},
})
}
impl State {
async fn new(window: &Window) -> Self {
let size = window.inner_size();
// The instance is a handle to our GPU
// BackendBit::PRIMARY => Vulkan + Metal + DX12 + Browser WebGPU
let instance = wgpu::Instance::new(wgpu::BackendBit::PRIMARY);
let surface = unsafe { instance.create_surface(window) };
let adapter = instance
.request_adapter(&wgpu::RequestAdapterOptions {
power_preference: wgpu::PowerPreference::default(),
compatible_surface: Some(&surface),
})
.await
.unwrap();
let (device, queue) = adapter
.request_device(
&wgpu::DeviceDescriptor {
label: None,
features: wgpu::Features::empty(),
limits: wgpu::Limits::default(),
},
None, // Trace path
)
.await
.unwrap();
let sc_desc = wgpu::SwapChainDescriptor {
usage: wgpu::TextureUsage::RENDER_ATTACHMENT,
format: adapter.get_swap_chain_preferred_format(&surface).unwrap(),
width: size.width,
height: size.height,
present_mode: wgpu::PresentMode::Fifo,
};
let swap_chain = device.create_swap_chain(&surface, &sc_desc);
let texture_bind_group_layout =
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
entries: &[
wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStage::FRAGMENT,
ty: wgpu::BindingType::Texture {
multisampled: false,
sample_type: wgpu::TextureSampleType::Float { filterable: true },
view_dimension: wgpu::TextureViewDimension::D2,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 1,
visibility: wgpu::ShaderStage::FRAGMENT,
ty: wgpu::BindingType::Sampler {
comparison: false,
filtering: true,
},
count: None,
},
// normal map
wgpu::BindGroupLayoutEntry {
binding: 2,
visibility: wgpu::ShaderStage::FRAGMENT,
ty: wgpu::BindingType::Texture {
multisampled: false,
sample_type: wgpu::TextureSampleType::Float { filterable: true },
view_dimension: wgpu::TextureViewDimension::D2,
},
count: None,
},
wgpu::BindGroupLayoutEntry {
binding: 3,
visibility: wgpu::ShaderStage::FRAGMENT,
ty: wgpu::BindingType::Sampler {
comparison: false,
filtering: true,
},
count: None,
},
],
label: Some("texture_bind_group_layout"),
});
// UPDATED!
let camera = camera::Camera::new((0.0, 5.0, 10.0), cgmath::Deg(-90.0), cgmath::Deg(-20.0));
let projection =
camera::Projection::new(sc_desc.width, sc_desc.height, cgmath::Deg(45.0), 0.1, 100.0);
let camera_controller = camera::CameraController::new(4.0, 0.4);
let mut uniforms = Uniforms::new();
uniforms.update_view_proj(&camera, &projection);
let uniform_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Uniform Buffer"),
contents: bytemuck::cast_slice(&[uniforms]),
usage: wgpu::BufferUsage::UNIFORM | wgpu::BufferUsage::COPY_DST,
});
const SPACE_BETWEEN: f32 = 3.0;
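        // With NUM_INSTANCES_PER_ROW = 10 and SPACE_BETWEEN = 3.0, the offsets below work out
        // to 3.0 * (i - 5.0) for i in 0..10, i.e. -15.0 through +12.0 on both axes, which
        // roughly centers the 10x10 grid around the origin.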
let instances = (0..NUM_INSTANCES_PER_ROW)
.flat_map(|z| {
(0..NUM_INSTANCES_PER_ROW).map(move |x| {
let x = SPACE_BETWEEN * (x as f32 - NUM_INSTANCES_PER_ROW as f32 / 2.0);
let z = SPACE_BETWEEN * (z as f32 - NUM_INSTANCES_PER_ROW as f32 / 2.0);
let position = cgmath::Vector3 { x, y: 0.0, z };
let rotation = if position.is_zero() {
cgmath::Quaternion::from_axis_angle(
cgmath::Vector3::unit_z(),
cgmath::Deg(0.0),
)
} else {
cgmath::Quaternion::from_axis_angle(
position.clone().normalize(),
cgmath::Deg(45.0),
)
};
Instance { position, rotation }
})
})
.collect::<Vec<_>>();
let instance_data = instances.iter().map(Instance::to_raw).collect::<Vec<_>>();
let instance_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Instance Buffer"),
contents: bytemuck::cast_slice(&instance_data),
usage: wgpu::BufferUsage::VERTEX,
});
let uniform_bind_group_layout =
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
entries: &[wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStage::VERTEX | wgpu::ShaderStage::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
}],
label: Some("uniform_bind_group_layout"),
});
let uniform_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
layout: &uniform_bind_group_layout,
entries: &[wgpu::BindGroupEntry {
binding: 0,
resource: uniform_buffer.as_entire_binding(),
}],
label: Some("uniform_bind_group"),
});
let res_dir = std::path::Path::new(env!("OUT_DIR")).join("res");
let now = std::time::Instant::now();
let obj_model = model::Model::load(
&device,
&queue,
&texture_bind_group_layout,
res_dir.join("cube.obj"),
)
.unwrap();
println!("Elapsed (Original): {:?}", std::time::Instant::now() - now);
let light = Light {
position: [2.0, 2.0, 2.0],
_padding: 0,
color: [1.0, 1.0, 1.0],
};
let light_buffer = device.create_buffer_init(&wgpu::util::BufferInitDescriptor {
label: Some("Light VB"),
contents: bytemuck::cast_slice(&[light]),
usage: wgpu::BufferUsage::UNIFORM | wgpu::BufferUsage::COPY_DST,
});
let light_bind_group_layout =
device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
entries: &[wgpu::BindGroupLayoutEntry {
binding: 0,
visibility: wgpu::ShaderStage::VERTEX | wgpu::ShaderStage::FRAGMENT,
ty: wgpu::BindingType::Buffer {
ty: wgpu::BufferBindingType::Uniform,
has_dynamic_offset: false,
min_binding_size: None,
},
count: None,
}],
label: None,
});
let light_bind_group = device.create_bind_group(&wgpu::BindGroupDescriptor {
layout: &light_bind_group_layout,
entries: &[wgpu::BindGroupEntry {
binding: 0,
resource: light_buffer.as_entire_binding(),
}],
label: None,
});
let depth_texture =
texture::Texture::create_depth_texture(&device, &sc_desc, "depth_texture");
let render_pipeline_layout =
device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("Render Pipeline Layout"),
bind_group_layouts: &[
&texture_bind_group_layout,
&uniform_bind_group_layout,
&light_bind_group_layout,
],
push_constant_ranges: &[],
});
let render_pipeline = {
let shader = wgpu::ShaderModuleDescriptor {
label: Some("Normal Shader"),
flags: wgpu::ShaderFlags::all(),
source: wgpu::ShaderSource::Wgsl(include_str!("shader.wgsl").into()),
};
create_render_pipeline(
&device,
&render_pipeline_layout,
sc_desc.format,
Some(texture::Texture::DEPTH_FORMAT),
&[model::ModelVertex::desc(), InstanceRaw::desc()],
shader,
)
};
let light_render_pipeline = {
let layout = device.create_pipeline_layout(&wgpu::PipelineLayoutDescriptor {
label: Some("Light Pipeline Layout"),
bind_group_layouts: &[&uniform_bind_group_layout, &light_bind_group_layout],
push_constant_ranges: &[],
});
let shader = wgpu::ShaderModuleDescriptor {
label: Some("Light Shader"),
flags: wgpu::ShaderFlags::all(),
source: wgpu::ShaderSource::Wgsl(include_str!("light.wgsl").into()),
};
create_render_pipeline(
&device,
&layout,
sc_desc.format,
Some(texture::Texture::DEPTH_FORMAT),
&[model::ModelVertex::desc()],
shader,
)
};
let debug_material = {
let diffuse_bytes = include_bytes!("../res/cobble-diffuse.png");
let normal_bytes = include_bytes!("../res/cobble-normal.png");
let diffuse_texture = texture::Texture::from_bytes(
&device,
&queue,
diffuse_bytes,
"res/alt-diffuse.png",
false,
)
.unwrap();
let normal_texture = texture::Texture::from_bytes(
&device,
&queue,
normal_bytes,
"res/alt-normal.png",
true,
)
.unwrap();
model::Material::new(
&device,
"alt-material",
diffuse_texture,
normal_texture,
&texture_bind_group_layout,
)
};
Self {
surface,
device,
queue,
sc_desc,
swap_chain,
render_pipeline,
obj_model,
camera,
projection,
camera_controller,
uniform_buffer,
uniform_bind_group,
uniforms,
instances,
instance_buffer,
depth_texture,
size,
light,
light_buffer,
light_bind_group,
light_render_pipeline,
#[allow(dead_code)]
debug_material,
// NEW!
mouse_pressed: false,
}
}
fn resize(&mut self, new_size: winit::dpi::PhysicalSize<u32>) {
// UPDATED!
self.projection.resize(new_size.width, new_size.height);
self.size = new_size;
self.sc_desc.width = new_size.width;
self.sc_desc.height = new_size.height;
self.swap_chain = self.device.create_swap_chain(&self.surface, &self.sc_desc);
self.depth_texture =
texture::Texture::create_depth_texture(&self.device, &self.sc_desc, "depth_texture");
}
// UPDATED!
fn input(&mut self, event: &DeviceEvent) -> bool {
match event {
DeviceEvent::Key(KeyboardInput {
virtual_keycode: Some(key),
state,
..
}) => self.camera_controller.process_keyboard(*key, *state),
DeviceEvent::MouseWheel { delta, .. } => {
self.camera_controller.process_scroll(delta);
true
}
DeviceEvent::Button {
button: 1, // Left Mouse Button
state,
} => {
self.mouse_pressed = *state == ElementState::Pressed;
true
}
DeviceEvent::MouseMotion { delta } => {
if self.mouse_pressed {
self.camera_controller.process_mouse(delta.0, delta.1);
}
true
}
_ => false,
}
}
fn update(&mut self, dt: std::time::Duration) {
// UPDATED!
self.camera_controller.update_camera(&mut self.camera, dt);
self.uniforms
.update_view_proj(&self.camera, &self.projection);
self.queue.write_buffer(
&self.uniform_buffer,
0,
bytemuck::cast_slice(&[self.uniforms]),
);
// Update the light
let old_position: cgmath::Vector3<_> = self.light.position.into();
self.light.position =
(cgmath::Quaternion::from_axis_angle((0.0, 1.0, 0.0).into(), cgmath::Deg(1.0))
* old_position)
.into();
self.queue
.write_buffer(&self.light_buffer, 0, bytemuck::cast_slice(&[self.light]));
}
fn render(&mut self) -> Result<(), wgpu::SwapChainError> {
let frame = self.swap_chain.get_current_frame()?.output;
let mut encoder = self
.device
.create_command_encoder(&wgpu::CommandEncoderDescriptor {
label: Some("Render Encoder"),
});
{
let mut render_pass = encoder.begin_render_pass(&wgpu::RenderPassDescriptor {
label: Some("Render Pass"),
color_attachments: &[wgpu::RenderPassColorAttachment {
view: &frame.view,
resolve_target: None,
ops: wgpu::Operations {
load: wgpu::LoadOp::Clear(wgpu::Color {
r: 0.1,
g: 0.2,
b: 0.3,
a: 1.0,
}),
store: true,
},
}],
depth_stencil_attachment: Some(wgpu::RenderPassDepthStencilAttachment {
view: &self.depth_texture.view,
depth_ops: Some(wgpu::Operations {
load: wgpu::LoadOp::Clear(1.0),
store: true,
}),
stencil_ops: None,
}),
});
render_pass.set_vertex_buffer(1, self.instance_buffer.slice(..));
render_pass.set_pipeline(&self.light_render_pipeline);
render_pass.draw_light_model(
&self.obj_model,
&self.uniform_bind_group,
&self.light_bind_group,
);
render_pass.set_pipeline(&self.render_pipeline);
render_pass.draw_model_instanced(
&self.obj_model,
0..self.instances.len() as u32,
&self.uniform_bind_group,
&self.light_bind_group,
);
}
self.queue.submit(iter::once(encoder.finish()));
Ok(())
}
}
fn main() {
env_logger::init();
let event_loop = EventLoop::new();
let title = env!("CARGO_PKG_NAME");
let window = winit::window::WindowBuilder::new()
.with_title(title)
.build(&event_loop)
.unwrap();
use futures::executor::block_on;
let mut state = block_on(State::new(&window)); // NEW!
let mut last_render_time = std::time::Instant::now();
event_loop.run(move |event, _, control_flow| {
*control_flow = ControlFlow::Poll;
match event {
Event::MainEventsCleared => window.request_redraw(),
Event::DeviceEvent {
ref event,
.. // We're not using device_id currently
} => {
state.input(event);
}
// UPDATED!
Event::WindowEvent {
ref event,
window_id,
} if window_id == window.id() => {
match event {
WindowEvent::CloseRequested => *control_flow = ControlFlow::Exit,
WindowEvent::KeyboardInput { input, .. } => match input {
KeyboardInput {
state: ElementState::Pressed,
virtual_keycode: Some(VirtualKeyCode::Escape),
..
} => {
*control_flow = ControlFlow::Exit;
}
_ => {}
},
WindowEvent::Resized(physical_size) => {
state.resize(*physical_size);
}
WindowEvent::ScaleFactorChanged { new_inner_size, .. } => {
state.resize(**new_inner_size);
}
_ => {}
}
}
// UPDATED!
Event::RedrawRequested(_) => {
let now = std::time::Instant::now();
let dt = now - last_render_time;
last_render_time = now;
state.update(dt);
match state.render() {
Ok(_) => {}
// Recreate the swap_chain if lost
Err(wgpu::SwapChainError::Lost) => state.resize(state.size),
// The system is out of memory, we should probably quit
Err(wgpu::SwapChainError::OutOfMemory) => *control_flow = ControlFlow::Exit,
// All other errors (Outdated, Timeout) should be resolved by the next frame
Err(e) => eprintln!("{:?}", e),
}
}
_ => {}
}
});
}
| 36.73699 | 107 | 0.501149 |
d7688a19e9af6f8a8cbe1ed50fa9179068e4728a | 456,227 | #![doc = "generated by AutoRust 0.1.0"]
#![allow(unused_mut)]
#![allow(unused_variables)]
#![allow(unused_imports)]
use crate::models::*;
pub mod recoverable_databases {
use crate::models::*;
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
) -> std::result::Result<RecoverableDatabase, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/recoverableDatabases/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: RecoverableDatabase =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror :: Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn list_by_server(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
) -> std::result::Result<RecoverableDatabaseListResult, list_by_server::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/recoverableDatabases",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_server::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_server::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_server::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_server::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: RecoverableDatabaseListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_by_server::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_server::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_server {
use crate::{models, models::*};
#[derive(Debug, thiserror :: Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
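// Hypothetical call-site sketch (the `OperationConfig` value and the id/name strings are
// assumed to exist elsewhere; they are not defined in this module): each generated operation
// is a plain async fn, e.g.
//
//     let db = recoverable_databases::get(&config, subscription_id, rg_name, server, db_name).await?;
//     let all = recoverable_databases::list_by_server(&config, subscription_id, rg_name, server).await?;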
pub mod restorable_dropped_databases {
use crate::models::*;
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
restorable_droppeded_database_id: &str,
) -> std::result::Result<RestorableDroppedDatabase, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/restorableDroppedDatabases/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
restorable_droppeded_database_id
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: RestorableDroppedDatabase =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror :: Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn list_by_server(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
) -> std::result::Result<RestorableDroppedDatabaseListResult, list_by_server::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/restorableDroppedDatabases",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_server::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_server::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_server::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_server::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: RestorableDroppedDatabaseListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_by_server::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_server::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_server {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
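    // Illustrative sketch, not part of the generated code: one way to consume the
    // `list_by_server` operation above. A 404 from the service is folded into
    // `Ok(None)` purely to demonstrate matching on the operation-scoped error
    // enum; every other error variant is propagated unchanged.
    pub async fn example_list_by_server_or_none(
        operation_config: &crate::OperationConfig,
        subscription_id: &str,
        resource_group_name: &str,
        server_name: &str,
    ) -> std::result::Result<Option<RestorableDroppedDatabaseListResult>, list_by_server::Error> {
        match list_by_server(operation_config, subscription_id, resource_group_name, server_name).await {
            Ok(result) => Ok(Some(result)),
            Err(list_by_server::Error::UnexpectedResponse { status_code, .. })
                if status_code == http::StatusCode::NOT_FOUND =>
            {
                Ok(None)
            }
            Err(other) => Err(other),
        }
    }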
}
pub mod restore_points {
use crate::models::*;
pub async fn list_by_database(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
) -> std::result::Result<RestorePointListResult, list_by_database::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/restorePoints",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_database::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_database::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_database::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_database::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: RestorePointListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_database::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_database::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_database {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod servers {
use crate::models::*;
pub async fn check_name_availability(
operation_config: &crate::OperationConfig,
subscription_id: &str,
parameters: &CheckNameAvailabilityRequest,
) -> std::result::Result<CheckNameAvailabilityResponse, check_name_availability::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/providers/Microsoft.Sql/checkNameAvailability",
operation_config.base_path(),
subscription_id
);
let mut url = url::Url::parse(url_str).map_err(check_name_availability::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(check_name_availability::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(check_name_availability::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(check_name_availability::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(check_name_availability::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: CheckNameAvailabilityResponse = serde_json::from_slice(rsp_body)
.map_err(|source| check_name_availability::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(check_name_availability::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod check_name_availability {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn list(
operation_config: &crate::OperationConfig,
subscription_id: &str,
) -> std::result::Result<ServerListResult, list::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/providers/Microsoft.Sql/servers",
operation_config.base_path(),
subscription_id
);
let mut url = url::Url::parse(url_str).map_err(list::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(list::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServerListResult =
serde_json::from_slice(rsp_body).map_err(|source| list::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list::Error::DefaultResponse { status_code }),
}
}
pub mod list {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn list_by_resource_group(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
subscription_id: &str,
) -> std::result::Result<ServerListResult, list_by_resource_group::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers",
operation_config.base_path(),
subscription_id,
resource_group_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_resource_group::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_resource_group::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(list_by_resource_group::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_resource_group::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServerListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_resource_group::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_by_resource_group::Error::DefaultResponse { status_code }),
}
}
pub mod list_by_resource_group {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn get(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
subscription_id: &str,
) -> std::result::Result<Server, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: Server =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(get::Error::DefaultResponse { status_code }),
}
}
pub mod get {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
parameters: &Server,
subscription_id: &str,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: Server = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(create_or_update::Response::Accepted202),
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: Server = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
status_code => Err(create_or_update::Error::DefaultResponse { status_code }),
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(Server),
Accepted202,
Created201(Server),
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
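    // Illustrative sketch, not part of the generated code: collapsing the
    // three-way `create_or_update` response above into an `Option`. Both 200 and
    // 201 carry a `Server` body; a 202 means the request was accepted for
    // asynchronous processing and carries no body, so `None` is returned and any
    // polling is left to the caller.
    pub async fn example_create_or_update_server(
        operation_config: &crate::OperationConfig,
        resource_group_name: &str,
        server_name: &str,
        parameters: &Server,
        subscription_id: &str,
    ) -> std::result::Result<Option<Server>, create_or_update::Error> {
        match create_or_update(operation_config, resource_group_name, server_name, parameters, subscription_id).await? {
            create_or_update::Response::Ok200(server) | create_or_update::Response::Created201(server) => Ok(Some(server)),
            create_or_update::Response::Accepted202 => Ok(None),
        }
    }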
pub async fn update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
parameters: &ServerUpdate,
subscription_id: &str,
) -> std::result::Result<update::Response, update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PATCH);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(update::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: Server =
serde_json::from_slice(rsp_body).map_err(|source| update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(update::Response::Accepted202),
status_code => Err(update::Error::DefaultResponse { status_code }),
}
}
pub mod update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(Server),
Accepted202,
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn delete(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
subscription_id: &str,
) -> std::result::Result<delete::Response, delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(delete::Response::Ok200),
http::StatusCode::ACCEPTED => Ok(delete::Response::Accepted202),
http::StatusCode::NO_CONTENT => Ok(delete::Response::NoContent204),
status_code => Err(delete::Error::DefaultResponse { status_code }),
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
Accepted202,
NoContent204,
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
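// Illustrative sketch, not part of the generated code: the unit-only response
// enum of `servers::delete` distinguishes synchronous completion (200/204) from
// an accepted asynchronous deletion (202). This helper reports which path the
// service took; it does not poll the pending operation.
pub async fn example_delete_server_is_pending(
    operation_config: &crate::OperationConfig,
    resource_group_name: &str,
    server_name: &str,
    subscription_id: &str,
) -> std::result::Result<bool, servers::delete::Error> {
    let response = servers::delete(operation_config, resource_group_name, server_name, subscription_id).await?;
    Ok(matches!(response, servers::delete::Response::Accepted202))
}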
pub mod server_connection_policies {
use crate::models::*;
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
connection_policy_name: &str,
) -> std::result::Result<ServerConnectionPolicy, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/connectionPolicies/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
connection_policy_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServerConnectionPolicy =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
connection_policy_name: &str,
parameters: &ServerConnectionPolicy,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/connectionPolicies/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
connection_policy_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServerConnectionPolicy = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: ServerConnectionPolicy = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
status_code => {
let rsp_body = rsp.body();
Err(create_or_update::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(ServerConnectionPolicy),
Created201(ServerConnectionPolicy),
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
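// Illustrative sketch, not part of the generated code: every operation borrows
// the same `crate::OperationConfig`, so a single configuration value can drive
// calls across unrelated groups. Note that parameter order differs between
// groups (`servers::get` takes the subscription id last, while
// `server_connection_policies::get` takes it first), so call sites are easy to
// get wrong without a wrapper like this.
pub async fn example_fetch_server_and_connection_policy(
    operation_config: &crate::OperationConfig,
    subscription_id: &str,
    resource_group_name: &str,
    server_name: &str,
    connection_policy_name: &str,
) -> std::result::Result<(), Box<dyn std::error::Error>> {
    let _server = servers::get(operation_config, resource_group_name, server_name, subscription_id).await?;
    let _policy = server_connection_policies::get(
        operation_config,
        subscription_id,
        resource_group_name,
        server_name,
        connection_policy_name,
    )
    .await?;
    Ok(())
}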
pub mod databases {
use crate::models::*;
pub async fn pause(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
) -> std::result::Result<pause::Response, pause::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/pause",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(pause::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(pause::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.header(http::header::CONTENT_LENGTH, 0);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(pause::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(pause::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(pause::Response::Ok200),
http::StatusCode::ACCEPTED => Ok(pause::Response::Accepted202),
status_code => {
let rsp_body = rsp.body();
Err(pause::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod pause {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
Accepted202,
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn resume(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
) -> std::result::Result<resume::Response, resume::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/resume",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(resume::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(resume::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.header(http::header::CONTENT_LENGTH, 0);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(resume::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(resume::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::ACCEPTED => Ok(resume::Response::Accepted202),
http::StatusCode::OK => Ok(resume::Response::Ok200),
status_code => {
let rsp_body = rsp.body();
Err(resume::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod resume {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Accepted202,
Ok200,
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
expand: Option<&str>,
) -> std::result::Result<Database, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
if let Some(expand) = expand {
url.query_pairs_mut().append_pair("$expand", expand);
}
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: Database =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
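    // Illustrative sketch, not part of the generated code: `get` above exposes
    // the optional `$expand` query parameter as `Option<&str>`. Passing `None`
    // requests the default resource shape; passing `Some(..)` with a
    // child-collection name supported by this API version asks the service to
    // inline that collection in the response.
    pub async fn example_get_database_default_shape(
        operation_config: &crate::OperationConfig,
        subscription_id: &str,
        resource_group_name: &str,
        server_name: &str,
        database_name: &str,
    ) -> std::result::Result<Database, get::Error> {
        get(
            operation_config,
            subscription_id,
            resource_group_name,
            server_name,
            database_name,
            None,
        )
        .await
    }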
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
parameters: &Database,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: Database = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: Database = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(create_or_update::Response::Accepted202),
status_code => {
let rsp_body = rsp.body();
Err(create_or_update::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(Database),
Created201(Database),
Accepted202,
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn update(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
parameters: &DatabaseUpdate,
) -> std::result::Result<update::Response, update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PATCH);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(update::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: Database =
serde_json::from_slice(rsp_body).map_err(|source| update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(update::Response::Accepted202),
status_code => {
let rsp_body = rsp.body();
Err(update::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(Database),
Accepted202,
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn delete(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
) -> std::result::Result<delete::Response, delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(delete::Response::Ok200),
http::StatusCode::NO_CONTENT => Ok(delete::Response::NoContent204),
status_code => {
let rsp_body = rsp.body();
Err(delete::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
NoContent204,
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn list_by_server(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
expand: Option<&str>,
filter: Option<&str>,
) -> std::result::Result<DatabaseListResult, list_by_server::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_server::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_server::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
if let Some(expand) = expand {
url.query_pairs_mut().append_pair("$expand", expand);
}
if let Some(filter) = filter {
url.query_pairs_mut().append_pair("$filter", filter);
}
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_server::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_server::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: DatabaseListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_by_server::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_server::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_server {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn get_by_elastic_pool(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
elastic_pool_name: &str,
database_name: &str,
) -> std::result::Result<Database, get_by_elastic_pool::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/elasticPools/{}/databases/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
elastic_pool_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(get_by_elastic_pool::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get_by_elastic_pool::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get_by_elastic_pool::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(get_by_elastic_pool::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: Database = serde_json::from_slice(rsp_body)
.map_err(|source| get_by_elastic_pool::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get_by_elastic_pool::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get_by_elastic_pool {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn list_by_elastic_pool(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
elastic_pool_name: &str,
) -> std::result::Result<DatabaseListResult, list_by_elastic_pool::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/elasticPools/{}/databases",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
elastic_pool_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_elastic_pool::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_elastic_pool::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_elastic_pool::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_elastic_pool::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: DatabaseListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_elastic_pool::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_elastic_pool::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_elastic_pool {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn get_by_recommended_elastic_pool(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
recommended_elastic_pool_name: &str,
database_name: &str,
) -> std::result::Result<Database, get_by_recommended_elastic_pool::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/recommendedElasticPools/{}/databases/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
recommended_elastic_pool_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(get_by_recommended_elastic_pool::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get_by_recommended_elastic_pool::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(get_by_recommended_elastic_pool::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(get_by_recommended_elastic_pool::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: Database = serde_json::from_slice(rsp_body)
.map_err(|source| get_by_recommended_elastic_pool::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get_by_recommended_elastic_pool::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get_by_recommended_elastic_pool {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn list_by_recommended_elastic_pool(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
recommended_elastic_pool_name: &str,
) -> std::result::Result<DatabaseListResult, list_by_recommended_elastic_pool::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/recommendedElasticPools/{}/databases",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
recommended_elastic_pool_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_recommended_elastic_pool::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_recommended_elastic_pool::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(list_by_recommended_elastic_pool::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_recommended_elastic_pool::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: DatabaseListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_recommended_elastic_pool::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_recommended_elastic_pool::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_recommended_elastic_pool {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn import(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
parameters: &ImportRequest,
) -> std::result::Result<import::Response, import::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/import",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(import::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(import::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(import::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(import::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(import::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ImportExportResponse =
serde_json::from_slice(rsp_body).map_err(|source| import::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(import::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(import::Response::Accepted202),
status_code => {
let rsp_body = rsp.body();
Err(import::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod import {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(ImportExportResponse),
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
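/// Sends `PUT .../servers/{server_name}/databases/{database_name}/extensions/{extension_name}` with the
/// [`ImportExtensionRequest`] as the JSON body. `201 Created` carries an [`ImportExportResponse`];
/// `202 Accepted` is returned without a body.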
pub async fn create_import_operation(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
extension_name: &str,
parameters: &ImportExtensionRequest,
) -> std::result::Result<create_import_operation::Response, create_import_operation::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/extensions/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
extension_name
);
let mut url = url::Url::parse(url_str).map_err(create_import_operation::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_import_operation::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_import_operation::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(create_import_operation::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_import_operation::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: ImportExportResponse = serde_json::from_slice(rsp_body)
.map_err(|source| create_import_operation::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_import_operation::Response::Created201(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(create_import_operation::Response::Accepted202),
status_code => {
let rsp_body = rsp.body();
Err(create_import_operation::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod create_import_operation {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Created201(ImportExportResponse),
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
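/// Sends `POST .../servers/{server_name}/databases/{database_name}/export` with `parameters` as the JSON
/// body. `200 OK` yields [`export::Response::Ok200`] with an [`ImportExportResponse`]; `202 Accepted`
/// yields [`export::Response::Accepted202`].
///
/// Illustrative usage sketch only; the identifier values and the `operation_config`/`export_request`
/// bindings below are placeholders supplied by the caller, not values defined in this crate:
///
/// ```ignore
/// let rsp = export(
///     &operation_config,                      // crate::OperationConfig built by the caller
///     "00000000-0000-0000-0000-000000000000", // subscription id (placeholder)
///     "my-resource-group",                    // resource group name (placeholder)
///     "my-sql-server",                        // server name (placeholder)
///     "my-database",                          // database name (placeholder)
///     &export_request,                        // models::ExportRequest assembled by the caller
/// )
/// .await?;
/// match rsp {
///     export::Response::Ok200(result) => println!("export finished: {:?}", result),
///     export::Response::Accepted202 => println!("export accepted; the service completes it asynchronously"),
/// }
/// ```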
pub async fn export(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
parameters: &ExportRequest,
) -> std::result::Result<export::Response, export::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/export",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(export::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(export::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(export::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(export::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(export::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ImportExportResponse =
serde_json::from_slice(rsp_body).map_err(|source| export::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(export::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(export::Response::Accepted202),
status_code => {
let rsp_body = rsp.body();
Err(export::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod export {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(ImportExportResponse),
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
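/// Sends `GET .../databases/{database_name}/metrics`, appending the caller-supplied `filter` as the
/// `$filter` query parameter alongside `api-version`; a `200 OK` body is deserialized into [`MetricListResult`].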
pub async fn list_metrics(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
filter: &str,
) -> std::result::Result<MetricListResult, list_metrics::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/metrics",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(list_metrics::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_metrics::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
url.query_pairs_mut().append_pair("$filter", filter);
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_metrics::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_metrics::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: MetricListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_metrics::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_metrics::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_metrics {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
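/// Sends `GET .../databases/{database_name}/metricDefinitions` and deserializes a `200 OK` body into
/// [`MetricDefinitionListResult`].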
pub async fn list_metric_definitions(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
) -> std::result::Result<MetricDefinitionListResult, list_metric_definitions::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/metricDefinitions",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(list_metric_definitions::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_metric_definitions::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(list_metric_definitions::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_metric_definitions::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: MetricDefinitionListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_metric_definitions::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_metric_definitions::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_metric_definitions {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod elastic_pool_activities {
use crate::models::*;
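/// Sends `GET .../servers/{server_name}/elasticPools/{elastic_pool_name}/elasticPoolActivity` and
/// deserializes a `200 OK` body into [`ElasticPoolActivityListResult`].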
pub async fn list_by_elastic_pool(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
elastic_pool_name: &str,
) -> std::result::Result<ElasticPoolActivityListResult, list_by_elastic_pool::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/elasticPools/{}/elasticPoolActivity",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
elastic_pool_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_elastic_pool::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_elastic_pool::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_elastic_pool::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_elastic_pool::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ElasticPoolActivityListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_elastic_pool::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_elastic_pool::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_elastic_pool {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod elastic_pool_database_activities {
use crate::models::*;
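/// Sends `GET .../elasticPools/{elastic_pool_name}/elasticPoolDatabaseActivity` and deserializes a
/// `200 OK` body into [`ElasticPoolDatabaseActivityListResult`].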
pub async fn list_by_elastic_pool(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
elastic_pool_name: &str,
) -> std::result::Result<ElasticPoolDatabaseActivityListResult, list_by_elastic_pool::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/elasticPools/{}/elasticPoolDatabaseActivity",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
elastic_pool_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_elastic_pool::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_elastic_pool::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_elastic_pool::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_elastic_pool::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ElasticPoolDatabaseActivityListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_elastic_pool::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_elastic_pool::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_elastic_pool {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod service_tier_advisors {
use crate::models::*;
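/// Sends `GET .../databases/{database_name}/serviceTierAdvisors/{service_tier_advisor_name}` and
/// deserializes a `200 OK` body into [`ServiceTierAdvisor`].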
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
service_tier_advisor_name: &str,
) -> std::result::Result<ServiceTierAdvisor, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/serviceTierAdvisors/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
service_tier_advisor_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServiceTierAdvisor =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
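/// Sends `GET .../databases/{database_name}/serviceTierAdvisors` and deserializes a `200 OK` body into
/// [`ServiceTierAdvisorListResult`].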
pub async fn list_by_database(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
) -> std::result::Result<ServiceTierAdvisorListResult, list_by_database::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/serviceTierAdvisors",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_database::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_database::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_database::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_database::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServiceTierAdvisorListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_database::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_database::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_database {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod transparent_data_encryptions {
use crate::models::*;
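/// Sends `GET .../databases/{database_name}/transparentDataEncryption/{transparent_data_encryption_name}`
/// and deserializes a `200 OK` body into [`TransparentDataEncryption`].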
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
transparent_data_encryption_name: &str,
) -> std::result::Result<TransparentDataEncryption, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/transparentDataEncryption/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
transparent_data_encryption_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: TransparentDataEncryption =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
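/// Sends `PUT .../transparentDataEncryption/{transparent_data_encryption_name}` with the
/// [`TransparentDataEncryption`] configuration as the JSON body; both `200 OK` and `201 Created`
/// return the stored configuration.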
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
transparent_data_encryption_name: &str,
parameters: &TransparentDataEncryption,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/transparentDataEncryption/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
transparent_data_encryption_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: TransparentDataEncryption = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: TransparentDataEncryption = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
status_code => {
let rsp_body = rsp.body();
Err(create_or_update::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(TransparentDataEncryption),
Created201(TransparentDataEncryption),
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod transparent_data_encryption_activities {
use crate::models::*;
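/// Sends `GET .../transparentDataEncryption/{transparent_data_encryption_name}/operationResults` and
/// deserializes a `200 OK` body into [`TransparentDataEncryptionActivityListResult`].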
pub async fn list_by_configuration(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
transparent_data_encryption_name: &str,
) -> std::result::Result<TransparentDataEncryptionActivityListResult, list_by_configuration::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
    "{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/transparentDataEncryption/{}/operationResults",
    operation_config.base_path(),
    subscription_id,
    resource_group_name,
    server_name,
    database_name,
    transparent_data_encryption_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_configuration::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_configuration::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(list_by_configuration::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_configuration::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: TransparentDataEncryptionActivityListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_configuration::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_configuration::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_configuration {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod database_threat_detection_policies {
use crate::models::*;
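/// Sends `GET .../databases/{database_name}/securityAlertPolicies/{security_alert_policy_name}` and
/// deserializes a `200 OK` body into [`DatabaseSecurityAlertPolicy`]; any other status is reported as
/// [`get::Error::DefaultResponse`] and the response body is not captured.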
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
security_alert_policy_name: &str,
) -> std::result::Result<DatabaseSecurityAlertPolicy, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/securityAlertPolicies/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
security_alert_policy_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: DatabaseSecurityAlertPolicy =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(get::Error::DefaultResponse { status_code }),
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
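/// Sends `PUT .../securityAlertPolicies/{security_alert_policy_name}` with the policy as the JSON body;
/// `200 OK` and `201 Created` both return the stored [`DatabaseSecurityAlertPolicy`].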
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
security_alert_policy_name: &str,
parameters: &DatabaseSecurityAlertPolicy,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/securityAlertPolicies/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
security_alert_policy_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: DatabaseSecurityAlertPolicy = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: DatabaseSecurityAlertPolicy = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
status_code => Err(create_or_update::Error::DefaultResponse { status_code }),
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(DatabaseSecurityAlertPolicy),
Created201(DatabaseSecurityAlertPolicy),
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod data_masking_policies {
use crate::models::*;
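/// Sends `GET .../databases/{database_name}/dataMaskingPolicies/{data_masking_policy_name}` and
/// deserializes a `200 OK` body into [`DataMaskingPolicy`].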
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
data_masking_policy_name: &str,
) -> std::result::Result<DataMaskingPolicy, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/dataMaskingPolicies/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
data_masking_policy_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: DataMaskingPolicy =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
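/// Sends `PUT .../dataMaskingPolicies/{data_masking_policy_name}` with the [`DataMaskingPolicy`] as the
/// JSON body and returns the stored policy from a `200 OK` response.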
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
data_masking_policy_name: &str,
parameters: &DataMaskingPolicy,
) -> std::result::Result<DataMaskingPolicy, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/dataMaskingPolicies/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
data_masking_policy_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: DataMaskingPolicy = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(create_or_update::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod data_masking_rules {
use crate::models::*;
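/// Sends `PUT .../dataMaskingPolicies/{data_masking_policy_name}/rules/{data_masking_rule_name}` with the
/// [`DataMaskingRule`] as the JSON body; `200 OK` and `201 Created` both return the stored rule.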
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
data_masking_policy_name: &str,
data_masking_rule_name: &str,
parameters: &DataMaskingRule,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/dataMaskingPolicies/{}/rules/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
data_masking_policy_name,
data_masking_rule_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: DataMaskingRule = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: DataMaskingRule = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
status_code => {
let rsp_body = rsp.body();
Err(create_or_update::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(DataMaskingRule),
Created201(DataMaskingRule),
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
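/// Sends `GET .../dataMaskingPolicies/{data_masking_policy_name}/rules` and deserializes a `200 OK` body
/// into [`DataMaskingRuleListResult`].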
pub async fn list_by_database(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
data_masking_policy_name: &str,
) -> std::result::Result<DataMaskingRuleListResult, list_by_database::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/dataMaskingPolicies/{}/rules",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
data_masking_policy_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_database::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_database::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_database::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_database::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: DataMaskingRuleListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_database::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_database::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_database {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod elastic_pools {
use crate::models::*;
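/// Sends `GET .../servers/{server_name}/elasticPools/{elastic_pool_name}` and deserializes a `200 OK`
/// body into [`ElasticPool`].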
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
elastic_pool_name: &str,
) -> std::result::Result<ElasticPool, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/elasticPools/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
elastic_pool_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ElasticPool =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
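/// Sends `PUT .../servers/{server_name}/elasticPools/{elastic_pool_name}` with the [`ElasticPool`]
/// `parameters` as the JSON body. `200 OK` and `201 Created` return the resulting pool; `202 Accepted`
/// is returned without a body.
///
/// Minimal handling sketch for the three documented outcomes; the bindings and the pool name below are
/// placeholders supplied by the caller, not values defined in this crate:
///
/// ```ignore
/// match create_or_update(&operation_config, subscription_id, resource_group, server_name, "my-pool", &pool).await? {
///     create_or_update::Response::Ok200(pool) | create_or_update::Response::Created201(pool) => {
///         println!("elastic pool provisioned: {:?}", pool);
///     }
///     create_or_update::Response::Accepted202 => {
///         println!("provisioning accepted; the service completes it asynchronously");
///     }
/// }
/// ```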
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
elastic_pool_name: &str,
parameters: &ElasticPool,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/elasticPools/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
elastic_pool_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ElasticPool = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: ElasticPool = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(create_or_update::Response::Accepted202),
status_code => {
let rsp_body = rsp.body();
Err(create_or_update::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(ElasticPool),
Created201(ElasticPool),
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
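/// Sends `PATCH .../elasticPools/{elastic_pool_name}` with the [`ElasticPoolUpdate`] as the JSON body;
/// `200 OK` returns the updated [`ElasticPool`], while `202 Accepted` is returned without a body.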
pub async fn update(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
elastic_pool_name: &str,
parameters: &ElasticPoolUpdate,
) -> std::result::Result<update::Response, update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/elasticPools/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
elastic_pool_name
);
let mut url = url::Url::parse(url_str).map_err(update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PATCH);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(update::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ElasticPool =
serde_json::from_slice(rsp_body).map_err(|source| update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(update::Response::Accepted202),
status_code => {
let rsp_body = rsp.body();
Err(update::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(ElasticPool),
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
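// Issues a DELETE against the elastic pool resource. `Ok200` indicates the pool
// was deleted; `NoContent204` conventionally means there was nothing to delete.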
pub async fn delete(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
elastic_pool_name: &str,
) -> std::result::Result<delete::Response, delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/elasticPools/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
elastic_pool_name
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(delete::Response::Ok200),
http::StatusCode::NO_CONTENT => Ok(delete::Response::NoContent204),
status_code => {
let rsp_body = rsp.body();
Err(delete::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
NoContent204,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
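// Lists all elastic pools on a server via GET .../servers/{server}/elasticPools
// and deserializes the 200 response into an `ElasticPoolListResult`.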
pub async fn list_by_server(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
) -> std::result::Result<ElasticPoolListResult, list_by_server::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/elasticPools",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_server::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_server::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_server::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_server::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ElasticPoolListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_by_server::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_server::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_server {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
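// Returns resource metrics for an elastic pool. The `filter` argument is sent
// as the `$filter` query parameter (typically an OData-style expression that
// selects metric names and a time window).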
pub async fn list_metrics(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
elastic_pool_name: &str,
filter: &str,
) -> std::result::Result<MetricListResult, list_metrics::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/elasticPools/{}/metrics",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
elastic_pool_name
);
let mut url = url::Url::parse(url_str).map_err(list_metrics::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_metrics::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
url.query_pairs_mut().append_pair("$filter", filter);
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_metrics::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_metrics::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: MetricListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_metrics::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_metrics::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_metrics {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
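// Returns the metric definitions available for an elastic pool, deserialized
// from the 200 response into a `MetricDefinitionListResult`.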
pub async fn list_metric_definitions(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
elastic_pool_name: &str,
) -> std::result::Result<MetricDefinitionListResult, list_metric_definitions::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/elasticPools/{}/metricDefinitions",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
elastic_pool_name
);
let mut url = url::Url::parse(url_str).map_err(list_metric_definitions::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_metric_definitions::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(list_metric_definitions::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_metric_definitions::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: MetricDefinitionListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_metric_definitions::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_metric_definitions::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_metric_definitions {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod firewall_rules {
use crate::models::*;
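// Gets a single server firewall rule by name
// (GET .../servers/{server}/firewallRules/{rule}).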
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
firewall_rule_name: &str,
) -> std::result::Result<FirewallRule, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/firewallRules/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
firewall_rule_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: FirewallRule =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
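// Creates or updates a server firewall rule via PUT. `Created201` is returned
// when a new rule is created and `Ok200` when an existing rule is replaced.
//
// Minimal usage sketch (not part of the generated API; `config` and `rule` are
// assumed to exist, with `rule: FirewallRule` carrying the allowed IP range):
//
//     match create_or_update(&config, "sub-id", "rg", "server", "allow-clients", &rule).await? {
//         create_or_update::Response::Ok200(r) | create_or_update::Response::Created201(r) => {
//             println!("firewall rule in place: {:?}", r)
//         }
//     }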
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
firewall_rule_name: &str,
parameters: &FirewallRule,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/firewallRules/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
firewall_rule_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: FirewallRule = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: FirewallRule = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
status_code => {
let rsp_body = rsp.body();
Err(create_or_update::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(FirewallRule),
Created201(FirewallRule),
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
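// Deletes a server firewall rule. `Ok200` on deletion, `NoContent204` when the
// rule was not present.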
pub async fn delete(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
firewall_rule_name: &str,
) -> std::result::Result<delete::Response, delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/firewallRules/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
firewall_rule_name
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(delete::Response::Ok200),
http::StatusCode::NO_CONTENT => Ok(delete::Response::NoContent204),
status_code => {
let rsp_body = rsp.body();
Err(delete::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
NoContent204,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
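// Lists all firewall rules defined on a server as a `FirewallRuleListResult`.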
pub async fn list_by_server(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
) -> std::result::Result<FirewallRuleListResult, list_by_server::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/firewallRules",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_server::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_server::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_server::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_server::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: FirewallRuleListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_by_server::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_server::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_server {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod geo_backup_policies {
use crate::models::*;
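// Gets a geo backup policy of a database
// (GET .../databases/{database}/geoBackupPolicies/{policy}).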
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
geo_backup_policy_name: &str,
) -> std::result::Result<GeoBackupPolicy, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/geoBackupPolicies/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
geo_backup_policy_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: GeoBackupPolicy =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
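// Creates or updates a database geo backup policy via PUT; the service answers
// 201 for a newly created policy and 200 for an update of an existing one.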
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
geo_backup_policy_name: &str,
parameters: &GeoBackupPolicy,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/geoBackupPolicies/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
geo_backup_policy_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: GeoBackupPolicy = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: GeoBackupPolicy = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
status_code => {
let rsp_body = rsp.body();
Err(create_or_update::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Created201(GeoBackupPolicy),
Ok200(GeoBackupPolicy),
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
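// Lists the geo backup policies of a database as a `GeoBackupPolicyListResult`.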
pub async fn list_by_database(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
) -> std::result::Result<GeoBackupPolicyListResult, list_by_database::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/geoBackupPolicies",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_database::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_database::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_database::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_database::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: GeoBackupPolicyListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_database::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_database::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_database {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod replication_links {
use crate::models::*;
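// Gets a database replication link by its link id
// (GET .../databases/{database}/replicationLinks/{linkId}).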
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
link_id: &str,
) -> std::result::Result<ReplicationLink, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/replicationLinks/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
link_id
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ReplicationLink =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
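// Deletes (terminates) a database replication link. `Ok200` on deletion,
// `NoContent204` when the link was not found.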
pub async fn delete(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
link_id: &str,
) -> std::result::Result<delete::Response, delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/replicationLinks/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
link_id
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(delete::Response::Ok200),
http::StatusCode::NO_CONTENT => Ok(delete::Response::NoContent204),
status_code => {
let rsp_body = rsp.body();
Err(delete::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
NoContent204,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
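// POSTs to .../replicationLinks/{linkId}/failover to fail the database over to
// the secondary of this replication link (the forced variant that allows data
// loss is `failover_allow_data_loss` below). The service responds with
// `Accepted202` or `NoContent204` and returns no body.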
pub async fn failover(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
link_id: &str,
) -> std::result::Result<failover::Response, failover::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/replicationLinks/{}/failover",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
link_id
);
let mut url = url::Url::parse(url_str).map_err(failover::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(failover::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.header(http::header::CONTENT_LENGTH, 0);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(failover::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(failover::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::NO_CONTENT => Ok(failover::Response::NoContent204),
http::StatusCode::ACCEPTED => Ok(failover::Response::Accepted202),
status_code => {
let rsp_body = rsp.body();
Err(failover::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod failover {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
NoContent204,
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
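// POSTs to .../replicationLinks/{linkId}/forceFailoverAllowDataLoss, the forced
// counterpart of `failover` that may lose data not yet replicated to the
// secondary.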
pub async fn failover_allow_data_loss(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
link_id: &str,
) -> std::result::Result<failover_allow_data_loss::Response, failover_allow_data_loss::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/replicationLinks/{}/forceFailoverAllowDataLoss",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
link_id
);
let mut url = url::Url::parse(url_str).map_err(failover_allow_data_loss::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(failover_allow_data_loss::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.header(http::header::CONTENT_LENGTH, 0);
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(failover_allow_data_loss::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(failover_allow_data_loss::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::NO_CONTENT => Ok(failover_allow_data_loss::Response::NoContent204),
http::StatusCode::ACCEPTED => Ok(failover_allow_data_loss::Response::Accepted202),
status_code => {
let rsp_body = rsp.body();
Err(failover_allow_data_loss::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod failover_allow_data_loss {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
NoContent204,
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
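// POSTs the serialized `UnlinkParameters` to
// .../replicationLinks/{linkId}/unlink to remove the replication relationship.
// Responds with `Accepted202` or `NoContent204` and no body.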
pub async fn unlink(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
link_id: &str,
parameters: &UnlinkParameters,
) -> std::result::Result<unlink::Response, unlink::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/replicationLinks/{}/unlink",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
link_id
);
let mut url = url::Url::parse(url_str).map_err(unlink::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(unlink::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(unlink::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(unlink::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(unlink::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::NO_CONTENT => Ok(unlink::Response::NoContent204),
http::StatusCode::ACCEPTED => Ok(unlink::Response::Accepted202),
status_code => {
let rsp_body = rsp.body();
Err(unlink::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod unlink {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
NoContent204,
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
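// Lists all replication links of a database as a `ReplicationLinkListResult`.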
pub async fn list_by_database(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
) -> std::result::Result<ReplicationLinkListResult, list_by_database::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/replicationLinks",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_database::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_database::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_database::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_database::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ReplicationLinkListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_database::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_database::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_database {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod server_azure_ad_administrators {
use crate::models::*;
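// Gets the Azure Active Directory administrator configured on a server
// (GET .../servers/{server}/administrators/{administratorName}).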
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
administrator_name: &str,
) -> std::result::Result<ServerAzureAdAdministrator, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/administrators/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
administrator_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServerAzureAdAdministrator =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
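// Creates or updates the server's Azure AD administrator via PUT. All of
// `Ok200`, `Created201`, and `Accepted202` carry a `ServerAzureAdAdministrator`
// body; 202 conventionally means the change is still being applied.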
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
administrator_name: &str,
properties: &ServerAzureAdAdministrator,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/administrators/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
administrator_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(properties).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServerAzureAdAdministrator = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: ServerAzureAdAdministrator = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
http::StatusCode::ACCEPTED => {
let rsp_body = rsp.body();
let rsp_value: ServerAzureAdAdministrator = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Accepted202(rsp_value))
}
status_code => {
let rsp_body = rsp.body();
Err(create_or_update::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(ServerAzureAdAdministrator),
Created201(ServerAzureAdAdministrator),
Accepted202(ServerAzureAdAdministrator),
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
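// Deletes the server's Azure AD administrator. Each handled status (200, 202,
// 204) is deserialized into a `ServerAzureAdAdministrator` body by the
// generated code.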
pub async fn delete(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
administrator_name: &str,
) -> std::result::Result<delete::Response, delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/administrators/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
administrator_name
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::ACCEPTED => {
let rsp_body = rsp.body();
let rsp_value: ServerAzureAdAdministrator =
serde_json::from_slice(rsp_body).map_err(|source| delete::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(delete::Response::Accepted202(rsp_value))
}
http::StatusCode::NO_CONTENT => {
let rsp_body = rsp.body();
let rsp_value: ServerAzureAdAdministrator =
serde_json::from_slice(rsp_body).map_err(|source| delete::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(delete::Response::NoContent204(rsp_value))
}
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServerAzureAdAdministrator =
serde_json::from_slice(rsp_body).map_err(|source| delete::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(delete::Response::Ok200(rsp_value))
}
status_code => {
let rsp_body = rsp.body();
Err(delete::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Accepted202(ServerAzureAdAdministrator),
NoContent204(ServerAzureAdAdministrator),
Ok200(ServerAzureAdAdministrator),
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
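// Lists the Azure AD administrators of a server as a
// `ServerAdministratorListResult`.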
pub async fn list_by_server(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
) -> std::result::Result<ServerAdministratorListResult, list_by_server::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/administrators",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_server::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_server::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_server::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_server::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServerAdministratorListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_by_server::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_server::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_server {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod server_communication_links {
use crate::models::*;
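// Gets a server communication link by name
// (GET .../servers/{server}/communicationLinks/{name}).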
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
communication_link_name: &str,
) -> std::result::Result<ServerCommunicationLink, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/communicationLinks/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
communication_link_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServerCommunicationLink =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
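/// Creates or updates a server communication link (PUT). Returns `Created201` with the resulting link, or `Accepted202` if the operation is still in progress.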
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
communication_link_name: &str,
parameters: &ServerCommunicationLink,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/communicationLinks/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
communication_link_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: ServerCommunicationLink = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(create_or_update::Response::Accepted202),
status_code => {
let rsp_body = rsp.body();
Err(create_or_update::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Created201(ServerCommunicationLink),
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
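/// Deletes the named communication link from the specified server.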
pub async fn delete(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
communication_link_name: &str,
) -> std::result::Result<(), delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/communicationLinks/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
communication_link_name
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(()),
status_code => {
let rsp_body = rsp.body();
Err(delete::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
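/// Lists the communication links defined on the specified server.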
pub async fn list_by_server(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
) -> std::result::Result<ServerCommunicationLinkListResult, list_by_server::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/communicationLinks",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_server::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_server::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_server::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_server::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServerCommunicationLinkListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_by_server::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_server::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_server {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod service_objectives {
use crate::models::*;
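/// Gets the service objective with the given name for the specified server.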
pub async fn get(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
service_objective_name: &str,
) -> std::result::Result<ServiceObjective, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/serviceObjectives/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
service_objective_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServiceObjective =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(get::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
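/// Lists the service objectives available on the specified server.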
pub async fn list_by_server(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
) -> std::result::Result<ServiceObjectiveListResult, list_by_server::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/serviceObjectives",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_server::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_server::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_server::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_server::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServiceObjectiveListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_by_server::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_server::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_server {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod server_usages {
use crate::models::*;
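/// Lists usage metrics for the specified server.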
pub async fn list_by_server(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
) -> std::result::Result<ServerUsageListResult, list_by_server::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/usages",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_server::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_server::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_server::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_server::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServerUsageListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_by_server::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_server::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_server {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod database_usages {
use crate::models::*;
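/// Lists usage metrics for the specified database.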
pub async fn list_by_database(
operation_config: &crate::OperationConfig,
subscription_id: &str,
resource_group_name: &str,
server_name: &str,
database_name: &str,
) -> std::result::Result<DatabaseUsageListResult, list_by_database::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/usages",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_database::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_database::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_database::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_database::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: DatabaseUsageListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_database::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => {
let rsp_body = rsp.body();
Err(list_by_database::Error::UnexpectedResponse {
status_code,
body: rsp_body.clone(),
})
}
}
}
pub mod list_by_database {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("Unexpected HTTP status code {}", status_code)]
UnexpectedResponse { status_code: http::StatusCode, body: bytes::Bytes },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod capabilities {
use crate::models::*;
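/// Gets the capabilities available in the specified location for the given subscription.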
pub async fn list_by_location(
operation_config: &crate::OperationConfig,
location_name: &str,
subscription_id: &str,
) -> std::result::Result<LocationCapabilities, list_by_location::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/providers/Microsoft.Sql/locations/{}/capabilities",
operation_config.base_path(),
subscription_id,
location_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_location::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_location::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_location::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_location::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: LocationCapabilities = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_location::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_by_location::Error::DefaultResponse { status_code }),
}
}
pub mod list_by_location {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod database_blob_auditing_policies {
use crate::models::*;
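/// Gets the blob auditing policy with the given name for the specified database.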
pub async fn get(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
blob_auditing_policy_name: &str,
subscription_id: &str,
) -> std::result::Result<DatabaseBlobAuditingPolicy, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/auditingSettings/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
blob_auditing_policy_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: DatabaseBlobAuditingPolicy =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(get::Error::DefaultResponse { status_code }),
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
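/// Creates or updates a database blob auditing policy (PUT). Returns the resulting policy via `Ok200` or `Created201`.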
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
blob_auditing_policy_name: &str,
parameters: &DatabaseBlobAuditingPolicy,
subscription_id: &str,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/auditingSettings/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
blob_auditing_policy_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: DatabaseBlobAuditingPolicy = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: DatabaseBlobAuditingPolicy = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
status_code => Err(create_or_update::Error::DefaultResponse { status_code }),
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(DatabaseBlobAuditingPolicy),
Created201(DatabaseBlobAuditingPolicy),
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
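/// Lists the blob auditing policies defined on the specified database.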
pub async fn list_by_database(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
subscription_id: &str,
) -> std::result::Result<DatabaseBlobAuditingPolicyListResult, list_by_database::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/auditingSettings",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_database::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_database::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_database::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_database::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: DatabaseBlobAuditingPolicyListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_database::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_by_database::Error::DefaultResponse { status_code }),
}
}
pub mod list_by_database {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod encryption_protectors {
use crate::models::*;
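/// Revalidates the named encryption protector on the specified server (POST on the `revalidate` action).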
pub async fn revalidate(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
encryption_protector_name: &str,
subscription_id: &str,
) -> std::result::Result<revalidate::Response, revalidate::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/encryptionProtector/{}/revalidate",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
encryption_protector_name
);
let mut url = url::Url::parse(url_str).map_err(revalidate::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(revalidate::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.header(http::header::CONTENT_LENGTH, 0);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(revalidate::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(revalidate::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(revalidate::Response::Ok200),
http::StatusCode::ACCEPTED => Ok(revalidate::Response::Accepted202),
status_code => Err(revalidate::Error::DefaultResponse { status_code }),
}
}
pub mod revalidate {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
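/// Lists the encryption protectors for the specified server.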
pub async fn list_by_server(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
subscription_id: &str,
) -> std::result::Result<EncryptionProtectorListResult, list_by_server::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/encryptionProtector",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_server::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_server::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_server::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_server::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: EncryptionProtectorListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_by_server::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_by_server::Error::DefaultResponse { status_code }),
}
}
pub mod list_by_server {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
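/// Gets the encryption protector with the given name for the specified server.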
pub async fn get(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
encryption_protector_name: &str,
subscription_id: &str,
) -> std::result::Result<EncryptionProtector, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/encryptionProtector/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
encryption_protector_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: EncryptionProtector =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(get::Error::DefaultResponse { status_code }),
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
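/// Creates or updates an encryption protector (PUT). Returns `Ok200` with the resource, or `Accepted202` if the operation is still in progress.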
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
encryption_protector_name: &str,
parameters: &EncryptionProtector,
subscription_id: &str,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/encryptionProtector/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
encryption_protector_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: EncryptionProtector = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(create_or_update::Response::Accepted202),
status_code => Err(create_or_update::Error::DefaultResponse { status_code }),
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(EncryptionProtector),
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod failover_groups {
use crate::models::*;
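/// Gets the failover group with the given name on the specified server.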
pub async fn get(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
failover_group_name: &str,
subscription_id: &str,
) -> std::result::Result<FailoverGroup, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/failoverGroups/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
failover_group_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: FailoverGroup =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(get::Error::DefaultResponse { status_code }),
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
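/// Creates or updates a failover group (PUT). Returns the resulting group via `Ok200` or `Created201`, or `Accepted202` if the operation is still in progress.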
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
failover_group_name: &str,
parameters: &FailoverGroup,
subscription_id: &str,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/failoverGroups/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
failover_group_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: FailoverGroup = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(create_or_update::Response::Accepted202),
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: FailoverGroup = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
status_code => Err(create_or_update::Error::DefaultResponse { status_code }),
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(FailoverGroup),
Accepted202,
Created201(FailoverGroup),
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
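/// Applies a partial update to an existing failover group (PATCH with a `FailoverGroupUpdate` body).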
pub async fn update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
failover_group_name: &str,
parameters: &FailoverGroupUpdate,
subscription_id: &str,
) -> std::result::Result<update::Response, update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/failoverGroups/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
failover_group_name
);
let mut url = url::Url::parse(url_str).map_err(update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PATCH);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(update::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: FailoverGroup =
serde_json::from_slice(rsp_body).map_err(|source| update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(update::Response::Accepted202),
status_code => Err(update::Error::DefaultResponse { status_code }),
}
}
pub mod update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(FailoverGroup),
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
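/// Deletes the failover group with the given name from the specified server.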
pub async fn delete(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
failover_group_name: &str,
subscription_id: &str,
) -> std::result::Result<delete::Response, delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/failoverGroups/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
failover_group_name
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(delete::Response::Ok200),
http::StatusCode::ACCEPTED => Ok(delete::Response::Accepted202),
http::StatusCode::NO_CONTENT => Ok(delete::Response::NoContent204),
status_code => Err(delete::Error::DefaultResponse { status_code }),
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
Accepted202,
NoContent204,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
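/// Lists the failover groups defined on the specified server.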
pub async fn list_by_server(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
subscription_id: &str,
) -> std::result::Result<FailoverGroupListResult, list_by_server::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/failoverGroups",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_server::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_server::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_server::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_server::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: FailoverGroupListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_by_server::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_by_server::Error::DefaultResponse { status_code }),
}
}
pub mod list_by_server {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
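/// Triggers a failover of the named failover group (POST on the `failover` action).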
pub async fn failover(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
failover_group_name: &str,
subscription_id: &str,
) -> std::result::Result<failover::Response, failover::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/failoverGroups/{}/failover",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
failover_group_name
);
let mut url = url::Url::parse(url_str).map_err(failover::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(failover::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.header(http::header::CONTENT_LENGTH, 0);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(failover::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(failover::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: FailoverGroup =
serde_json::from_slice(rsp_body).map_err(|source| failover::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(failover::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(failover::Response::Accepted202),
status_code => Err(failover::Error::DefaultResponse { status_code }),
}
}
pub mod failover {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(FailoverGroup),
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
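/// Forces a failover of the named failover group, which may incur data loss (POST on the `forceFailoverAllowDataLoss` action).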
pub async fn force_failover_allow_data_loss(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
failover_group_name: &str,
subscription_id: &str,
) -> std::result::Result<force_failover_allow_data_loss::Response, force_failover_allow_data_loss::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/failoverGroups/{}/forceFailoverAllowDataLoss",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
failover_group_name
);
let mut url = url::Url::parse(url_str).map_err(force_failover_allow_data_loss::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(force_failover_allow_data_loss::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.header(http::header::CONTENT_LENGTH, 0);
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(force_failover_allow_data_loss::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(force_failover_allow_data_loss::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: FailoverGroup = serde_json::from_slice(rsp_body)
.map_err(|source| force_failover_allow_data_loss::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(force_failover_allow_data_loss::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(force_failover_allow_data_loss::Response::Accepted202),
status_code => Err(force_failover_allow_data_loss::Error::DefaultResponse { status_code }),
}
}
pub mod force_failover_allow_data_loss {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(FailoverGroup),
Accepted202,
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod managed_instances {
use crate::models::*;
pub async fn list_by_resource_group(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
subscription_id: &str,
) -> std::result::Result<ManagedInstanceListResult, list_by_resource_group::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/managedInstances",
operation_config.base_path(),
subscription_id,
resource_group_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_resource_group::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_resource_group::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(list_by_resource_group::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_resource_group::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ManagedInstanceListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_resource_group::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_by_resource_group::Error::DefaultResponse { status_code }),
}
}
pub mod list_by_resource_group {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
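    // Hedged usage sketch (illustrative only, not emitted by the code generator): it simply
    // forwards the list call and returns the raw result. The resource group name and the
    // zeroed subscription id are placeholders chosen for this example.
    #[allow(dead_code)]
    async fn list_by_resource_group_usage_sketch(
        operation_config: &crate::OperationConfig,
    ) -> Result<ManagedInstanceListResult, list_by_resource_group::Error> {
        list_by_resource_group(operation_config, "example-rg", "00000000-0000-0000-0000-000000000000").await
    }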
pub async fn get(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
managed_instance_name: &str,
subscription_id: &str,
) -> std::result::Result<ManagedInstance, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/managedInstances/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
managed_instance_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ManagedInstance =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(get::Error::DefaultResponse { status_code }),
}
}
pub mod get {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
managed_instance_name: &str,
parameters: &ManagedInstance,
subscription_id: &str,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/managedInstances/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
managed_instance_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ManagedInstance = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(create_or_update::Response::Accepted202),
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: ManagedInstance = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
status_code => Err(create_or_update::Error::DefaultResponse { status_code }),
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(ManagedInstance),
Accepted202,
Created201(ManagedInstance),
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
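    // Hedged usage sketch (illustrative only): the caller is assumed to have built a
    // `ManagedInstance` payload elsewhere, since its fields live in `crate::models` rather
    // than here. Both the 200 and 201 variants carry the provisioned instance, while 202
    // means the service is still working on it.
    #[allow(dead_code)]
    async fn create_or_update_usage_sketch(
        operation_config: &crate::OperationConfig,
        parameters: &ManagedInstance,
    ) -> Result<(), create_or_update::Error> {
        match create_or_update(
            operation_config,
            "example-rg",
            "example-managed-instance",
            parameters,
            "00000000-0000-0000-0000-000000000000",
        )
        .await?
        {
            create_or_update::Response::Ok200(instance) | create_or_update::Response::Created201(instance) => {
                println!("managed instance provisioned: {:?}", instance);
            }
            create_or_update::Response::Accepted202 => {
                println!("request accepted; provisioning continues asynchronously");
            }
        }
        Ok(())
    }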
pub async fn update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
managed_instance_name: &str,
parameters: &ManagedInstanceUpdate,
subscription_id: &str,
) -> std::result::Result<update::Response, update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/managedInstances/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
managed_instance_name
);
let mut url = url::Url::parse(url_str).map_err(update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PATCH);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(update::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ManagedInstance =
serde_json::from_slice(rsp_body).map_err(|source| update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(update::Response::Accepted202),
status_code => Err(update::Error::DefaultResponse { status_code }),
}
}
pub mod update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(ManagedInstance),
Accepted202,
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn delete(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
managed_instance_name: &str,
subscription_id: &str,
) -> std::result::Result<delete::Response, delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/managedInstances/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
managed_instance_name
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(delete::Response::Ok200),
http::StatusCode::ACCEPTED => Ok(delete::Response::Accepted202),
http::StatusCode::NO_CONTENT => Ok(delete::Response::NoContent204),
status_code => Err(delete::Error::DefaultResponse { status_code }),
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
Accepted202,
NoContent204,
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
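    // Hedged usage sketch (illustrative only): deletion is long-running as well, so all three
    // documented success codes are handled. The names and subscription id are placeholders.
    #[allow(dead_code)]
    async fn delete_usage_sketch(operation_config: &crate::OperationConfig) -> Result<(), delete::Error> {
        match delete(
            operation_config,
            "example-rg",
            "example-managed-instance",
            "00000000-0000-0000-0000-000000000000",
        )
        .await?
        {
            delete::Response::Ok200 => println!("managed instance deleted"),
            delete::Response::Accepted202 => println!("delete accepted; completion is asynchronous"),
            delete::Response::NoContent204 => println!("managed instance was already gone"),
        }
        Ok(())
    }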
pub async fn list(
operation_config: &crate::OperationConfig,
subscription_id: &str,
) -> std::result::Result<ManagedInstanceListResult, list::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/providers/Microsoft.Sql/managedInstances",
operation_config.base_path(),
subscription_id
);
let mut url = url::Url::parse(url_str).map_err(list::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(list::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ManagedInstanceListResult =
serde_json::from_slice(rsp_body).map_err(|source| list::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list::Error::DefaultResponse { status_code }),
}
}
pub mod list {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod operations {
use crate::models::*;
pub async fn list(operation_config: &crate::OperationConfig) -> std::result::Result<OperationListResult, list::Error> {
let http_client = operation_config.http_client();
        let url_str = &format!("{}/providers/Microsoft.Sql/operations", operation_config.base_path());
let mut url = url::Url::parse(url_str).map_err(list::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(list::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: OperationListResult =
serde_json::from_slice(rsp_body).map_err(|source| list::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list::Error::DefaultResponse { status_code }),
}
}
pub mod list {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
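    // Hedged usage sketch (illustrative only): listing the available Microsoft.Sql operations
    // needs nothing beyond the `OperationConfig`, so the call is forwarded unchanged.
    #[allow(dead_code)]
    async fn list_usage_sketch(operation_config: &crate::OperationConfig) -> Result<OperationListResult, list::Error> {
        list(operation_config).await
    }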
}
pub mod server_keys {
use crate::models::*;
pub async fn list_by_server(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
subscription_id: &str,
) -> std::result::Result<ServerKeyListResult, list_by_server::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/keys",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_server::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_server::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_server::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_server::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServerKeyListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_by_server::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_by_server::Error::DefaultResponse { status_code }),
}
}
pub mod list_by_server {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn get(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
key_name: &str,
subscription_id: &str,
) -> std::result::Result<ServerKey, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/keys/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
key_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServerKey =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(get::Error::DefaultResponse { status_code }),
}
}
pub mod get {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
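    // Hedged usage sketch (illustrative only): fetches a single server key and hands the
    // deserialized `ServerKey` back to the caller. The key name format is dictated by the
    // service; "example-key" here is just a placeholder.
    #[allow(dead_code)]
    async fn get_usage_sketch(operation_config: &crate::OperationConfig) -> Result<ServerKey, get::Error> {
        get(
            operation_config,
            "example-rg",
            "example-server",
            "example-key",
            "00000000-0000-0000-0000-000000000000",
        )
        .await
    }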
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
key_name: &str,
parameters: &ServerKey,
subscription_id: &str,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/keys/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
key_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: ServerKey = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(create_or_update::Response::Accepted202),
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: ServerKey = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
status_code => Err(create_or_update::Error::DefaultResponse { status_code }),
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(ServerKey),
Accepted202,
Created201(ServerKey),
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn delete(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
key_name: &str,
subscription_id: &str,
) -> std::result::Result<delete::Response, delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/keys/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
key_name
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(delete::Response::Ok200),
http::StatusCode::ACCEPTED => Ok(delete::Response::Accepted202),
http::StatusCode::NO_CONTENT => Ok(delete::Response::NoContent204),
status_code => Err(delete::Error::DefaultResponse { status_code }),
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
Accepted202,
NoContent204,
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod sync_agents {
use crate::models::*;
pub async fn get(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
sync_agent_name: &str,
subscription_id: &str,
) -> std::result::Result<SyncAgent, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/syncAgents/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
sync_agent_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncAgent =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(get::Error::DefaultResponse { status_code }),
}
}
pub mod get {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
sync_agent_name: &str,
parameters: &SyncAgent,
subscription_id: &str,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/syncAgents/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
sync_agent_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncAgent = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(create_or_update::Response::Accepted202),
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: SyncAgent = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
status_code => Err(create_or_update::Error::DefaultResponse { status_code }),
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(SyncAgent),
Accepted202,
Created201(SyncAgent),
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn delete(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
sync_agent_name: &str,
subscription_id: &str,
) -> std::result::Result<delete::Response, delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/syncAgents/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
sync_agent_name
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(delete::Response::Ok200),
http::StatusCode::ACCEPTED => Ok(delete::Response::Accepted202),
http::StatusCode::NO_CONTENT => Ok(delete::Response::NoContent204),
status_code => Err(delete::Error::DefaultResponse { status_code }),
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
Accepted202,
NoContent204,
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn list_by_server(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
subscription_id: &str,
) -> std::result::Result<SyncAgentListResult, list_by_server::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/syncAgents",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_server::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_server::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_server::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_server::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncAgentListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_by_server::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_by_server::Error::DefaultResponse { status_code }),
}
}
pub mod list_by_server {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn generate_key(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
sync_agent_name: &str,
subscription_id: &str,
) -> std::result::Result<SyncAgentKeyProperties, generate_key::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/syncAgents/{}/generateKey",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
sync_agent_name
);
let mut url = url::Url::parse(url_str).map_err(generate_key::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(generate_key::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.header(http::header::CONTENT_LENGTH, 0);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(generate_key::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(generate_key::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncAgentKeyProperties =
serde_json::from_slice(rsp_body).map_err(|source| generate_key::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(generate_key::Error::DefaultResponse { status_code }),
}
}
pub mod generate_key {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
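    // Hedged usage sketch (illustrative only): regenerating a sync agent key is a POST with an
    // empty body, as implemented above; the returned `SyncAgentKeyProperties` is passed through.
    #[allow(dead_code)]
    async fn generate_key_usage_sketch(
        operation_config: &crate::OperationConfig,
    ) -> Result<SyncAgentKeyProperties, generate_key::Error> {
        generate_key(
            operation_config,
            "example-rg",
            "example-server",
            "example-sync-agent",
            "00000000-0000-0000-0000-000000000000",
        )
        .await
    }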
pub async fn list_linked_databases(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
sync_agent_name: &str,
subscription_id: &str,
) -> std::result::Result<SyncAgentLinkedDatabaseListResult, list_linked_databases::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/syncAgents/{}/linkedDatabases",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
sync_agent_name
);
let mut url = url::Url::parse(url_str).map_err(list_linked_databases::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_linked_databases::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(list_linked_databases::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_linked_databases::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncAgentLinkedDatabaseListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_linked_databases::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_linked_databases::Error::DefaultResponse { status_code }),
}
}
pub mod list_linked_databases {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod sync_groups {
use crate::models::*;
pub async fn list_sync_database_ids(
operation_config: &crate::OperationConfig,
location_name: &str,
subscription_id: &str,
) -> std::result::Result<SyncDatabaseIdListResult, list_sync_database_ids::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/providers/Microsoft.Sql/locations/{}/syncDatabaseIds",
operation_config.base_path(),
subscription_id,
location_name
);
let mut url = url::Url::parse(url_str).map_err(list_sync_database_ids::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_sync_database_ids::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(list_sync_database_ids::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_sync_database_ids::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncDatabaseIdListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_sync_database_ids::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_sync_database_ids::Error::DefaultResponse { status_code }),
}
}
pub mod list_sync_database_ids {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn refresh_hub_schema(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
subscription_id: &str,
) -> std::result::Result<refresh_hub_schema::Response, refresh_hub_schema::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}/refreshHubSchema",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name
);
let mut url = url::Url::parse(url_str).map_err(refresh_hub_schema::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(refresh_hub_schema::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.header(http::header::CONTENT_LENGTH, 0);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(refresh_hub_schema::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(refresh_hub_schema::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(refresh_hub_schema::Response::Ok200),
http::StatusCode::ACCEPTED => Ok(refresh_hub_schema::Response::Accepted202),
status_code => Err(refresh_hub_schema::Error::DefaultResponse { status_code }),
}
}
pub mod refresh_hub_schema {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
Accepted202,
}
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn list_hub_schemas(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
subscription_id: &str,
) -> std::result::Result<SyncFullSchemaPropertiesListResult, list_hub_schemas::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}/hubSchemas",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name
);
let mut url = url::Url::parse(url_str).map_err(list_hub_schemas::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_hub_schemas::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_hub_schemas::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_hub_schemas::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncFullSchemaPropertiesListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_hub_schemas::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_hub_schemas::Error::DefaultResponse { status_code }),
}
}
pub mod list_hub_schemas {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn list_logs(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
start_time: &str,
end_time: &str,
type_: &str,
continuation_token: Option<&str>,
subscription_id: &str,
) -> std::result::Result<SyncGroupLogListResult, list_logs::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}/logs",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name
);
let mut url = url::Url::parse(url_str).map_err(list_logs::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_logs::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
url.query_pairs_mut().append_pair("startTime", start_time);
url.query_pairs_mut().append_pair("endTime", end_time);
url.query_pairs_mut().append_pair("type", type_);
if let Some(continuation_token) = continuation_token {
url.query_pairs_mut().append_pair("continuationToken", continuation_token);
}
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_logs::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_logs::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncGroupLogListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_logs::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_logs::Error::DefaultResponse { status_code }),
}
}
pub mod list_logs {
use crate::{models, models::*};
        #[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
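    // Hedged usage sketch (illustrative only): `list_logs` is the one operation here with extra
    // query parameters. The time window, the "All" log-type filter and the `None` continuation
    // token are assumed example values; consult the service documentation for the accepted forms.
    #[allow(dead_code)]
    async fn list_logs_usage_sketch(
        operation_config: &crate::OperationConfig,
    ) -> Result<SyncGroupLogListResult, list_logs::Error> {
        list_logs(
            operation_config,
            "example-rg",
            "example-server",
            "example-db",
            "example-sync-group",
            "2020-01-01T00:00:00Z",
            "2020-01-02T00:00:00Z",
            "All",
            None,
            "00000000-0000-0000-0000-000000000000",
        )
        .await
    }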
pub async fn cancel_sync(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
subscription_id: &str,
) -> std::result::Result<(), cancel_sync::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}/cancelSync",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name
);
let mut url = url::Url::parse(url_str).map_err(cancel_sync::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(cancel_sync::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.header(http::header::CONTENT_LENGTH, 0);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(cancel_sync::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(cancel_sync::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(()),
status_code => Err(cancel_sync::Error::DefaultResponse { status_code }),
}
}
pub mod cancel_sync {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn trigger_sync(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
subscription_id: &str,
) -> std::result::Result<(), trigger_sync::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}/triggerSync",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name
);
let mut url = url::Url::parse(url_str).map_err(trigger_sync::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(trigger_sync::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.header(http::header::CONTENT_LENGTH, 0);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(trigger_sync::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(trigger_sync::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(()),
status_code => Err(trigger_sync::Error::DefaultResponse { status_code }),
}
}
pub mod trigger_sync {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
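/// Fetches a single sync group with a GET request. A hedged usage sketch,
/// assuming a caller-provided `OperationConfig` and placeholder resource names:
///
/// ```ignore
/// let sync_group = get(
///     &operation_config,
///     "example-rg",
///     "example-server",
///     "example-db",
///     "example-sync-group",
///     "<subscription-id>",
/// )
/// .await?;
/// // `sync_group` is a `models::SyncGroup` deserialized from the 200 response.
/// ```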
pub async fn get(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
subscription_id: &str,
) -> std::result::Result<SyncGroup, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncGroup =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(get::Error::DefaultResponse { status_code }),
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
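/// Creates or updates a sync group with a PUT request. The service may answer
/// 200, 201, or 202, which is reflected in `create_or_update::Response`. A hedged
/// sketch, assuming `operation_config` and a `SyncGroup` value (`parameters`)
/// built elsewhere from the crate's `models`:
///
/// ```ignore
/// match create_or_update(
///     &operation_config,
///     "example-rg",
///     "example-server",
///     "example-db",
///     "example-sync-group",
///     &parameters,
///     "<subscription-id>",
/// )
/// .await?
/// {
///     create_or_update::Response::Ok200(group) | create_or_update::Response::Created201(group) => {
///         // The service returned the resulting sync group.
///     }
///     create_or_update::Response::Accepted202 => {
///         // The operation was accepted and completes asynchronously.
///     }
/// }
/// ```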
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
parameters: &SyncGroup,
subscription_id: &str,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncGroup = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(create_or_update::Response::Accepted202),
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: SyncGroup = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
status_code => Err(create_or_update::Error::DefaultResponse { status_code }),
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(SyncGroup),
Accepted202,
Created201(SyncGroup),
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
parameters: &SyncGroup,
subscription_id: &str,
) -> std::result::Result<update::Response, update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name
);
let mut url = url::Url::parse(url_str).map_err(update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PATCH);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(update::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncGroup =
serde_json::from_slice(rsp_body).map_err(|source| update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(update::Response::Accepted202),
status_code => Err(update::Error::DefaultResponse { status_code }),
}
}
pub mod update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(SyncGroup),
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
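/// Deletes a sync group. The three success variants in `delete::Response` mirror
/// the 200/202/204 status codes the service can return. A hedged sketch with
/// placeholder names and a caller-provided `OperationConfig`:
///
/// ```ignore
/// match delete(
///     &operation_config,
///     "example-rg",
///     "example-server",
///     "example-db",
///     "example-sync-group",
///     "<subscription-id>",
/// )
/// .await?
/// {
///     delete::Response::Ok200 | delete::Response::NoContent204 => {
///         // 200 and 204 both indicate the delete request completed synchronously.
///     }
///     delete::Response::Accepted202 => {
///         // Deletion was accepted and continues asynchronously.
///     }
/// }
/// ```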
pub async fn delete(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
subscription_id: &str,
) -> std::result::Result<delete::Response, delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(delete::Response::Ok200),
http::StatusCode::ACCEPTED => Ok(delete::Response::Accepted202),
http::StatusCode::NO_CONTENT => Ok(delete::Response::NoContent204),
status_code => Err(delete::Error::DefaultResponse { status_code }),
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
Accepted202,
NoContent204,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
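/// Lists all sync groups under a database. A hedged sketch, assuming a
/// caller-provided `OperationConfig` and placeholder resource names:
///
/// ```ignore
/// let result = list_by_database(
///     &operation_config,
///     "example-rg",
///     "example-server",
///     "example-db",
///     "<subscription-id>",
/// )
/// .await?;
/// // `result` is a `models::SyncGroupListResult`; see the `models` module for its fields.
/// ```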
pub async fn list_by_database(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
subscription_id: &str,
) -> std::result::Result<SyncGroupListResult, list_by_database::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_database::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_database::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_database::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_database::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncGroupListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_database::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_by_database::Error::DefaultResponse { status_code }),
}
}
pub mod list_by_database {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod sync_members {
use crate::models::*;
pub async fn get(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
sync_member_name: &str,
subscription_id: &str,
) -> std::result::Result<SyncMember, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}/syncMembers/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name,
sync_member_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncMember =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(get::Error::DefaultResponse { status_code }),
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
sync_member_name: &str,
parameters: &SyncMember,
subscription_id: &str,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}/syncMembers/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name,
sync_member_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncMember = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(create_or_update::Response::Accepted202),
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: SyncMember = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
status_code => Err(create_or_update::Error::DefaultResponse { status_code }),
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(SyncMember),
Accepted202,
Created201(SyncMember),
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
sync_member_name: &str,
parameters: &SyncMember,
subscription_id: &str,
) -> std::result::Result<update::Response, update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}/syncMembers/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name,
sync_member_name
);
let mut url = url::Url::parse(url_str).map_err(update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PATCH);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(update::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncMember =
serde_json::from_slice(rsp_body).map_err(|source| update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(update::Response::Accepted202),
status_code => Err(update::Error::DefaultResponse { status_code }),
}
}
pub mod update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(SyncMember),
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn delete(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
sync_member_name: &str,
subscription_id: &str,
) -> std::result::Result<delete::Response, delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}/syncMembers/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name,
sync_member_name
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(delete::Response::Ok200),
http::StatusCode::ACCEPTED => Ok(delete::Response::Accepted202),
http::StatusCode::NO_CONTENT => Ok(delete::Response::NoContent204),
status_code => Err(delete::Error::DefaultResponse { status_code }),
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
Accepted202,
NoContent204,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
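/// Lists the sync members that belong to a sync group. A hedged sketch with
/// placeholder names and a caller-provided `OperationConfig`:
///
/// ```ignore
/// let members = list_by_sync_group(
///     &operation_config,
///     "example-rg",
///     "example-server",
///     "example-db",
///     "example-sync-group",
///     "<subscription-id>",
/// )
/// .await?;
/// // `members` is a `models::SyncMemberListResult`.
/// ```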
pub async fn list_by_sync_group(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
subscription_id: &str,
) -> std::result::Result<SyncMemberListResult, list_by_sync_group::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}/syncMembers",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_sync_group::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_sync_group::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_sync_group::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_sync_group::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncMemberListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_sync_group::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_by_sync_group::Error::DefaultResponse { status_code }),
}
}
pub mod list_by_sync_group {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn list_member_schemas(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
sync_member_name: &str,
subscription_id: &str,
) -> std::result::Result<SyncFullSchemaPropertiesListResult, list_member_schemas::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}/syncMembers/{}/schemas",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name,
sync_member_name
);
let mut url = url::Url::parse(url_str).map_err(list_member_schemas::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_member_schemas::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_member_schemas::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_member_schemas::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SyncFullSchemaPropertiesListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_member_schemas::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_member_schemas::Error::DefaultResponse { status_code }),
}
}
pub mod list_member_schemas {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
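/// Asks the service to refresh the schema of a sync member by POSTing to the
/// `refreshSchema` endpoint. A 202 response means the request was accepted and
/// the refresh runs asynchronously. A hedged sketch with placeholder names and a
/// caller-provided `OperationConfig`:
///
/// ```ignore
/// match refresh_member_schema(
///     &operation_config,
///     "example-rg",
///     "example-server",
///     "example-db",
///     "example-sync-group",
///     "example-sync-member",
///     "<subscription-id>",
/// )
/// .await?
/// {
///     refresh_member_schema::Response::Ok200 => { /* refresh completed */ }
///     refresh_member_schema::Response::Accepted202 => { /* refresh continues asynchronously */ }
/// }
/// ```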
pub async fn refresh_member_schema(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
database_name: &str,
sync_group_name: &str,
sync_member_name: &str,
subscription_id: &str,
) -> std::result::Result<refresh_member_schema::Response, refresh_member_schema::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/databases/{}/syncGroups/{}/syncMembers/{}/refreshSchema",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
database_name,
sync_group_name,
sync_member_name
);
let mut url = url::Url::parse(url_str).map_err(refresh_member_schema::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::POST);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(refresh_member_schema::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.header(http::header::CONTENT_LENGTH, 0);
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(refresh_member_schema::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(refresh_member_schema::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(refresh_member_schema::Response::Ok200),
http::StatusCode::ACCEPTED => Ok(refresh_member_schema::Response::Accepted202),
status_code => Err(refresh_member_schema::Error::DefaultResponse { status_code }),
}
}
pub mod refresh_member_schema {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod subscription_usages {
use crate::models::*;
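/// Lists subscription usages for an Azure location. A hedged sketch, assuming a
/// caller-provided `OperationConfig` and a placeholder location name:
///
/// ```ignore
/// let usages = list_by_location(&operation_config, "westus", "<subscription-id>").await?;
/// // `usages` is a `models::SubscriptionUsageListResult`.
/// ```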
pub async fn list_by_location(
operation_config: &crate::OperationConfig,
location_name: &str,
subscription_id: &str,
) -> std::result::Result<SubscriptionUsageListResult, list_by_location::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/providers/Microsoft.Sql/locations/{}/usages",
operation_config.base_path(),
subscription_id,
location_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_location::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_location::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_location::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_location::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SubscriptionUsageListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_location::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_by_location::Error::DefaultResponse { status_code }),
}
}
pub mod list_by_location {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn get(
operation_config: &crate::OperationConfig,
location_name: &str,
usage_name: &str,
subscription_id: &str,
) -> std::result::Result<SubscriptionUsage, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/providers/Microsoft.Sql/locations/{}/usages/{}",
operation_config.base_path(),
subscription_id,
location_name,
usage_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: SubscriptionUsage =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(get::Error::DefaultResponse { status_code }),
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod virtual_clusters {
use crate::models::*;
pub async fn list(
operation_config: &crate::OperationConfig,
subscription_id: &str,
) -> std::result::Result<VirtualClusterListResult, list::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/providers/Microsoft.Sql/virtualClusters",
operation_config.base_path(),
subscription_id
);
let mut url = url::Url::parse(url_str).map_err(list::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(list::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: VirtualClusterListResult =
serde_json::from_slice(rsp_body).map_err(|source| list::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list::Error::DefaultResponse { status_code }),
}
}
pub mod list {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn list_by_resource_group(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
subscription_id: &str,
) -> std::result::Result<VirtualClusterListResult, list_by_resource_group::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/virtualClusters",
operation_config.base_path(),
subscription_id,
resource_group_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_resource_group::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_resource_group::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder
.body(req_body)
.map_err(list_by_resource_group::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_resource_group::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: VirtualClusterListResult = serde_json::from_slice(rsp_body)
.map_err(|source| list_by_resource_group::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_by_resource_group::Error::DefaultResponse { status_code }),
}
}
pub mod list_by_resource_group {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn get(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
virtual_cluster_name: &str,
subscription_id: &str,
) -> std::result::Result<VirtualCluster, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/virtualClusters/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
virtual_cluster_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: VirtualCluster =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(get::Error::DefaultResponse { status_code }),
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
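/// Applies a PATCH update to a virtual cluster. A hedged sketch, assuming
/// `operation_config` and a `VirtualClusterUpdate` value (`update_parameters`)
/// built elsewhere from the crate's `models`:
///
/// ```ignore
/// match update(
///     &operation_config,
///     "example-rg",
///     "example-virtual-cluster",
///     &update_parameters,
///     "<subscription-id>",
/// )
/// .await?
/// {
///     update::Response::Ok200(cluster) => { /* the updated `models::VirtualCluster` was returned */ }
///     update::Response::Accepted202 => { /* the update continues asynchronously */ }
/// }
/// ```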
pub async fn update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
virtual_cluster_name: &str,
parameters: &VirtualClusterUpdate,
subscription_id: &str,
) -> std::result::Result<update::Response, update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/virtualClusters/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
virtual_cluster_name
);
let mut url = url::Url::parse(url_str).map_err(update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PATCH);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(update::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: VirtualCluster =
serde_json::from_slice(rsp_body).map_err(|source| update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(update::Response::Accepted202),
status_code => Err(update::Error::DefaultResponse { status_code }),
}
}
pub mod update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(VirtualCluster),
Accepted202,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn delete(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
virtual_cluster_name: &str,
subscription_id: &str,
) -> std::result::Result<delete::Response, delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/virtualClusters/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
virtual_cluster_name
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(delete::Response::Ok200),
http::StatusCode::ACCEPTED => Ok(delete::Response::Accepted202),
http::StatusCode::NO_CONTENT => Ok(delete::Response::NoContent204),
status_code => Err(delete::Error::DefaultResponse { status_code }),
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
Accepted202,
NoContent204,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
}
pub mod virtual_network_rules {
use crate::models::*;
pub async fn get(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
virtual_network_rule_name: &str,
subscription_id: &str,
) -> std::result::Result<VirtualNetworkRule, get::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/virtualNetworkRules/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
virtual_network_rule_name
);
let mut url = url::Url::parse(url_str).map_err(get::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(get::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(get::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(get::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: VirtualNetworkRule =
serde_json::from_slice(rsp_body).map_err(|source| get::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(get::Error::DefaultResponse { status_code }),
}
}
pub mod get {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
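/// Creates or updates a virtual network rule on a server with a PUT request.
/// A hedged sketch, assuming `operation_config` and a `VirtualNetworkRule` value
/// (`rule`) built elsewhere from the crate's `models`:
///
/// ```ignore
/// match create_or_update(
///     &operation_config,
///     "example-rg",
///     "example-server",
///     "example-vnet-rule",
///     &rule,
///     "<subscription-id>",
/// )
/// .await?
/// {
///     create_or_update::Response::Ok200(rule) | create_or_update::Response::Created201(rule) => {
///         // The resulting `models::VirtualNetworkRule` was returned.
///     }
///     create_or_update::Response::Accepted202 => {
///         // The operation was accepted and completes asynchronously.
///     }
/// }
/// ```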
pub async fn create_or_update(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
virtual_network_rule_name: &str,
parameters: &VirtualNetworkRule,
subscription_id: &str,
) -> std::result::Result<create_or_update::Response, create_or_update::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/virtualNetworkRules/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
virtual_network_rule_name
);
let mut url = url::Url::parse(url_str).map_err(create_or_update::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::PUT);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(create_or_update::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = azure_core::to_json(parameters).map_err(create_or_update::Error::SerializeError)?;
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(create_or_update::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(create_or_update::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: VirtualNetworkRule = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Ok200(rsp_value))
}
http::StatusCode::ACCEPTED => Ok(create_or_update::Response::Accepted202),
http::StatusCode::CREATED => {
let rsp_body = rsp.body();
let rsp_value: VirtualNetworkRule = serde_json::from_slice(rsp_body)
.map_err(|source| create_or_update::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(create_or_update::Response::Created201(rsp_value))
}
status_code => Err(create_or_update::Error::DefaultResponse { status_code }),
}
}
pub mod create_or_update {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200(VirtualNetworkRule),
Accepted202,
Created201(VirtualNetworkRule),
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn delete(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
virtual_network_rule_name: &str,
subscription_id: &str,
) -> std::result::Result<delete::Response, delete::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/virtualNetworkRules/{}",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name,
virtual_network_rule_name
);
let mut url = url::Url::parse(url_str).map_err(delete::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::DELETE);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(delete::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(delete::Error::BuildRequestError)?;
let rsp = http_client.execute_request(req).await.map_err(delete::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => Ok(delete::Response::Ok200),
http::StatusCode::ACCEPTED => Ok(delete::Response::Accepted202),
http::StatusCode::NO_CONTENT => Ok(delete::Response::NoContent204),
status_code => Err(delete::Error::DefaultResponse { status_code }),
}
}
pub mod delete {
use crate::{models, models::*};
#[derive(Debug)]
pub enum Response {
Ok200,
Accepted202,
NoContent204,
}
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
pub async fn list_by_server(
operation_config: &crate::OperationConfig,
resource_group_name: &str,
server_name: &str,
subscription_id: &str,
) -> std::result::Result<VirtualNetworkRuleListResult, list_by_server::Error> {
let http_client = operation_config.http_client();
let url_str = &format!(
"{}/subscriptions/{}/resourceGroups/{}/providers/Microsoft.Sql/servers/{}/virtualNetworkRules",
operation_config.base_path(),
subscription_id,
resource_group_name,
server_name
);
let mut url = url::Url::parse(url_str).map_err(list_by_server::Error::ParseUrlError)?;
let mut req_builder = http::request::Builder::new();
req_builder = req_builder.method(http::Method::GET);
if let Some(token_credential) = operation_config.token_credential() {
let token_response = token_credential
.get_token(operation_config.token_credential_resource())
.await
.map_err(list_by_server::Error::GetTokenError)?;
req_builder = req_builder.header(http::header::AUTHORIZATION, format!("Bearer {}", token_response.token.secret()));
}
url.query_pairs_mut().append_pair("api-version", operation_config.api_version());
let req_body = bytes::Bytes::from_static(azure_core::EMPTY_BODY);
req_builder = req_builder.uri(url.as_str());
let req = req_builder.body(req_body).map_err(list_by_server::Error::BuildRequestError)?;
let rsp = http_client
.execute_request(req)
.await
.map_err(list_by_server::Error::ExecuteRequestError)?;
match rsp.status() {
http::StatusCode::OK => {
let rsp_body = rsp.body();
let rsp_value: VirtualNetworkRuleListResult =
serde_json::from_slice(rsp_body).map_err(|source| list_by_server::Error::DeserializeError(source, rsp_body.clone()))?;
Ok(rsp_value)
}
status_code => Err(list_by_server::Error::DefaultResponse { status_code }),
}
}
pub mod list_by_server {
use crate::{models, models::*};
#[derive(Debug, thiserror::Error)]
pub enum Error {
#[error("HTTP status code {}", status_code)]
DefaultResponse { status_code: http::StatusCode },
#[error("Failed to parse request URL: {0}")]
ParseUrlError(url::ParseError),
#[error("Failed to build request: {0}")]
BuildRequestError(http::Error),
#[error("Failed to execute request: {0}")]
ExecuteRequestError(azure_core::HttpError),
#[error("Failed to serialize request body: {0}")]
SerializeError(serde_json::Error),
#[error("Failed to deserialize response: {0}, body: {1:?}")]
DeserializeError(serde_json::Error, bytes::Bytes),
#[error("Failed to get access token: {0}")]
GetTokenError(azure_core::Error),
}
}
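    // Illustrative sketch, not part of the generated client: one way a caller might
    // drive the operations above. The resource group, server name, and subscription id
    // below are placeholder values, and obtaining an `OperationConfig` is assumed to
    // happen elsewhere in the consuming crate.
    #[allow(dead_code)]
    async fn example_list_rules(operation_config: &crate::OperationConfig) -> Result<(), Box<dyn std::error::Error>> {
        let rules = list_by_server(
            operation_config,
            "example-rg",
            "example-server",
            "00000000-0000-0000-0000-000000000000",
        )
        .await?;
        let _ = rules; // inspect the returned VirtualNetworkRuleListResult as needed
        Ok(())
    }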
}
| 48.267774 | 309 | 0.595397 |
8f007be6f6902922c77ebbecb43b22ce24192125 | 13,289 | use std::net::{Ipv4Addr, Ipv6Addr, SocketAddrV4, SocketAddrV6};
use bytes::{Buf, BufMut};
use err_derive::Error;
use crate::coding::{BufExt, BufMutExt, UnexpectedEnd};
use crate::packet::ConnectionId;
use crate::{
varint, Side, TransportConfig, TransportError, MAX_CID_SIZE, MIN_CID_SIZE, RESET_TOKEN_SIZE,
VERSION,
};
// Apply a given macro to a list of all the transport parameters having integer types, along with
// their codes and default values. Using this helps us avoid error-prone duplication of the
// contained information across decoding, encoding, and the `Default` impl. Whenever we want to do
// something with transport parameters, we'll handle the bulk of cases by writing a macro that takes
// a list of arguments in this form, then passing it to this macro.
macro_rules! apply_params {
($macro:ident) => {
$macro! {
// name (id) = default,
idle_timeout(0x0001) = 0,
max_packet_size(0x0003) = 65527,
initial_max_data(0x0004) = 0,
initial_max_stream_data_bidi_local(0x0005) = 0,
initial_max_stream_data_bidi_remote(0x0006) = 0,
initial_max_stream_data_uni(0x0007) = 0,
initial_max_streams_bidi(0x0008) = 0,
initial_max_streams_uni(0x0009) = 0,
ack_delay_exponent(0x000a) = 3,
max_ack_delay(0x000b) = 25,
}
};
}
macro_rules! make_struct {
{$($name:ident ($code:expr) = $default:expr,)*} => {
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
pub struct TransportParameters {
$(pub $name : u64,)*
pub disable_migration: bool,
// Server-only
pub original_connection_id: Option<ConnectionId>,
pub stateless_reset_token: Option<[u8; RESET_TOKEN_SIZE]>,
pub preferred_address: Option<PreferredAddress>,
}
impl Default for TransportParameters {
/// Standard defaults, used if the peer does not supply a given parameter.
fn default() -> Self {
Self {
$($name: $default,)*
disable_migration: false,
original_connection_id: None,
stateless_reset_token: None,
preferred_address: None,
}
}
}
}
}
apply_params!(make_struct);
impl TransportParameters {
pub fn new(config: &TransportConfig) -> Self {
TransportParameters {
initial_max_streams_bidi: config.stream_window_bidi,
initial_max_streams_uni: config.stream_window_uni,
initial_max_data: config.receive_window,
initial_max_stream_data_bidi_local: config.stream_receive_window,
initial_max_stream_data_bidi_remote: config.stream_receive_window,
initial_max_stream_data_uni: config.stream_receive_window,
idle_timeout: config.idle_timeout,
max_ack_delay: 0, // Unimplemented
..Self::default()
}
}
}
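// Illustrative sketch, not part of the original source: building the local transport
// parameters from a `TransportConfig` and encoding them for the client side of a
// handshake. This assumes `TransportConfig` provides a `Default` implementation with
// sensible values.
#[allow(dead_code)]
fn example_encode_client_params() -> Vec<u8> {
    let config = TransportConfig::default();
    let params = TransportParameters::new(&config);
    let mut buf = Vec::new();
    params.write(Side::Client, &mut buf);
    buf
}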
#[derive(Debug, Copy, Clone, Eq, PartialEq)]
pub struct PreferredAddress {
address_v4: Option<SocketAddrV4>,
address_v6: Option<SocketAddrV6>,
connection_id: ConnectionId,
stateless_reset_token: [u8; RESET_TOKEN_SIZE],
}
impl PreferredAddress {
fn wire_size(&self) -> u16 {
4 + 2 + 16 + 2 + 1 + self.connection_id.len() as u16 + 16
}
fn write<W: BufMut>(&self, w: &mut W) {
w.write(self.address_v4.map_or(Ipv4Addr::UNSPECIFIED, |x| *x.ip()));
w.write::<u16>(self.address_v4.map_or(0, |x| x.port()));
w.write(self.address_v6.map_or(Ipv6Addr::UNSPECIFIED, |x| *x.ip()));
w.write::<u16>(self.address_v6.map_or(0, |x| x.port()));
w.write::<u8>(self.connection_id.len() as u8);
w.put_slice(&self.connection_id);
w.put_slice(&self.stateless_reset_token);
}
fn read<R: Buf>(r: &mut R) -> Result<Self, Error> {
let ip_v4 = r.get::<Ipv4Addr>()?;
let port_v4 = r.get::<u16>()?;
let ip_v6 = r.get::<Ipv6Addr>()?;
let port_v6 = r.get::<u16>()?;
let cid_len = r.get::<u8>()?;
if r.remaining() < cid_len as usize
|| (cid_len != 0 && (cid_len < MIN_CID_SIZE as u8 || cid_len > MAX_CID_SIZE as u8))
{
return Err(Error::Malformed);
}
let mut stage = [0; MAX_CID_SIZE];
r.copy_to_slice(&mut stage[0..cid_len as usize]);
let cid = ConnectionId::new(&stage[0..cid_len as usize]);
if r.remaining() < 16 {
return Err(Error::Malformed);
}
let mut token = [0; RESET_TOKEN_SIZE];
r.copy_to_slice(&mut token);
let address_v4 = if ip_v4.is_unspecified() && port_v4 == 0 {
None
} else {
Some(SocketAddrV4::new(ip_v4, port_v4))
};
let address_v6 = if ip_v6.is_unspecified() && port_v6 == 0 {
None
} else {
Some(SocketAddrV6::new(ip_v6, port_v6, 0, 0))
};
if address_v4.is_none() && address_v6.is_none() {
return Err(Error::IllegalValue);
}
Ok(Self {
address_v4,
address_v6,
connection_id: cid,
stateless_reset_token: token,
})
}
}
#[derive(Debug, Copy, Clone, Eq, PartialEq, Error)]
pub enum Error {
#[error(display = "version negotiation was tampered with")]
VersionNegotiation,
#[error(display = "parameter had illegal value")]
IllegalValue,
#[error(display = "parameters were malformed")]
Malformed,
}
impl From<Error> for TransportError {
fn from(e: Error) -> Self {
match e {
Error::VersionNegotiation => TransportError::VERSION_NEGOTIATION_ERROR(""),
Error::IllegalValue => TransportError::TRANSPORT_PARAMETER_ERROR("illegal value"),
Error::Malformed => TransportError::TRANSPORT_PARAMETER_ERROR("malformed"),
}
}
}
impl From<UnexpectedEnd> for Error {
fn from(_: UnexpectedEnd) -> Self {
Error::Malformed
}
}
impl TransportParameters {
pub fn write<W: BufMut>(&self, side: Side, w: &mut W) {
if side.is_server() {
w.write::<u32>(VERSION); // Negotiated version
w.write::<u8>(8); // Bytes of supported versions
w.write::<u32>(0x0a1a_2a3a); // Reserved version
w.write::<u32>(VERSION); // Real supported version
} else {
w.write::<u32>(VERSION); // Initially requested version
}
let mut buf = Vec::new();
macro_rules! write_params {
{$($name:ident ($code:expr) = $default:expr,)*} => {
$(
if self.$name != $default {
buf.write::<u16>($code);
buf.write::<u16>(varint::size(self.$name).expect("value too large") as u16);
buf.write_var(self.$name);
}
)*
}
}
apply_params!(write_params);
if let Some(ref x) = self.original_connection_id {
buf.write::<u16>(0x0000);
buf.write::<u16>(x.len() as u16);
buf.put_slice(x);
}
if let Some(ref x) = self.stateless_reset_token {
buf.write::<u16>(0x0002);
buf.write::<u16>(16);
buf.put_slice(x);
}
if self.disable_migration {
buf.write::<u16>(0x000c);
buf.write::<u16>(0);
}
if let Some(ref x) = self.preferred_address {
buf.write::<u16>(0x000d);
buf.write::<u16>(x.wire_size());
x.write(&mut buf);
}
w.write::<u16>(buf.len() as u16);
w.put_slice(&buf);
}
pub fn read<R: Buf>(side: Side, r: &mut R) -> Result<Self, Error> {
if side.is_server() {
if r.remaining() < 26 {
return Err(Error::Malformed);
}
// We only support one version, so there is no validation to do here.
r.get::<u32>().unwrap();
} else {
if r.remaining() < 31 {
return Err(Error::Malformed);
}
let negotiated = r.get::<u32>().unwrap();
if negotiated != VERSION {
return Err(Error::VersionNegotiation);
}
let supported_bytes = r.get::<u8>().unwrap();
if supported_bytes < 4 || supported_bytes > 252 || supported_bytes % 4 != 0 {
return Err(Error::Malformed);
}
let mut found = false;
for _ in 0..(supported_bytes / 4) {
found |= r.get::<u32>().unwrap() == negotiated;
}
if !found {
return Err(Error::VersionNegotiation);
}
}
// Initialize to protocol-specified defaults
let mut params = TransportParameters::default();
let params_len = r.get::<u16>().unwrap();
if params_len as usize != r.remaining() {
return Err(Error::Malformed);
}
// State to check for duplicate transport parameters.
macro_rules! param_state {
{$($name:ident ($code:expr) = $default:expr,)*} => {{
struct ParamState {
$($name: bool,)*
}
ParamState {
$($name: false,)*
}
}}
}
let mut got = apply_params!(param_state);
while r.has_remaining() {
if r.remaining() < 4 {
return Err(Error::Malformed);
}
let id = r.get::<u16>().unwrap();
let len = r.get::<u16>().unwrap();
if r.remaining() < len as usize {
return Err(Error::Malformed);
}
match id {
0x0000 => {
if len < MIN_CID_SIZE as u16
|| len > MAX_CID_SIZE as u16
|| params.original_connection_id.is_some()
{
return Err(Error::Malformed);
}
let mut staging = [0; MAX_CID_SIZE];
r.copy_to_slice(&mut staging[0..len as usize]);
params.original_connection_id =
Some(ConnectionId::new(&staging[0..len as usize]));
}
0x0002 => {
if len != 16 || params.stateless_reset_token.is_some() {
return Err(Error::Malformed);
}
let mut tok = [0; RESET_TOKEN_SIZE];
r.copy_to_slice(&mut tok);
params.stateless_reset_token = Some(tok);
}
0x000c => {
if len != 0 || params.disable_migration {
return Err(Error::Malformed);
}
params.disable_migration = true;
}
0x000d => {
if params.preferred_address.is_some() {
return Err(Error::Malformed);
}
params.preferred_address =
Some(PreferredAddress::read(&mut r.take(len as usize))?);
}
_ => {
macro_rules! parse {
{$($name:ident ($code:expr) = $default:expr,)*} => {
match id {
$($code => {
params.$name = r.get_var()?;
if len != varint::size(params.$name).unwrap() as u16 || got.$name { return Err(Error::Malformed); }
got.$name = true;
})*
_ => r.advance(len as usize),
}
}
}
apply_params!(parse);
}
}
}
if params.ack_delay_exponent > 20
|| params.max_ack_delay >= 1 << 14
|| (side.is_server()
&& (params.stateless_reset_token.is_some() || params.preferred_address.is_some()))
{
return Err(Error::IllegalValue);
}
Ok(params)
}
}
#[cfg(test)]
mod test {
use super::*;
use bytes::IntoBuf;
#[test]
fn coding() {
let mut buf = Vec::new();
let params = TransportParameters {
initial_max_streams_bidi: 16,
initial_max_streams_uni: 16,
ack_delay_exponent: 2,
max_packet_size: 1200,
preferred_address: Some(PreferredAddress {
address_v4: Some(SocketAddrV4::new(Ipv4Addr::LOCALHOST, 42)),
address_v6: None,
connection_id: ConnectionId::new(&[]),
stateless_reset_token: [0xab; RESET_TOKEN_SIZE],
}),
..TransportParameters::default()
};
params.write(Side::Server, &mut buf);
assert_eq!(
TransportParameters::read(Side::Client, &mut buf.into_buf()).unwrap(),
params
);
}
}
| 34.697128 | 135 | 0.513432 |
21d43e26b3a2961c13f4fb4a0c54e6b006f89427 | 6,586 | use zygote::Zygote;
use std::fmt;
use gen::Gen;
#[derive(Clone)]
pub struct Chromosome {
dominant: Zygote,
recessive: Zygote,
pub decoded: Vec<u64>,
}
impl fmt::Display for Chromosome {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{}\n{}", self.dominant, self.recessive)
}
}
impl fmt::Debug for Chromosome {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
fmt::Display::fmt(&self, f)
}
}
impl Chromosome {
pub fn new(dominant: Zygote, recessive: Zygote) -> Self {
Self {
            decoded: vec![0; dominant.u64s_amount()],
dominant,
recessive,
}
}
pub fn overwrite(&mut self, source: &Chromosome) {
self.dominant.overwrite(&source.dominant);
self.recessive.overwrite(&source.recessive);
}
pub fn decode_genotype(&mut self) {
let mut p = 0;
while p < self.dominant.u64s_amount() {
let dd = self.dominant.get_d_u64(p);
let dv = self.dominant.get_v_u64(p);
let rd = self.recessive.get_d_u64(p);
let rv = self.recessive.get_v_u64(p);
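            // Note (added for clarity): `d` bits mark dominant genes and `v` bits carry
            // gene values. A value from the recessive zygote is expressed only when its
            // gene is dominant (`rd`) and the matching gene of the dominant zygote is
            // recessive (`!dd`); in every other case the dominant zygote's value wins.
            // The expression below encodes exactly that rule (see the decoding tests).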
self.decoded[p] = dv & !rd | rd & rv & !dd | dd & dv;
p += 1
}
}
#[allow(dead_code)]
fn from_strings(dominant: &str, recessive: &str) -> Chromosome {
Chromosome {
dominant: dominant.parse::<Zygote>().unwrap(),
recessive: recessive.parse::<Zygote>().unwrap(),
            decoded: vec![0; dominant.len()],
}
}
pub fn cross_zygotes(&mut self, begin: usize, amount: usize) {
self.dominant.cross_bidirectional(
&mut self.recessive,
begin,
amount,
);
}
pub fn cross_chromosomes(&mut self, that: &Chromosome, begin: usize, amount: usize) {
self.dominant.cross(&that.dominant, begin, amount);
self.recessive.cross(&that.recessive, begin, amount);
}
pub fn mutate(&mut self, pos: usize, new_gen: &Gen) {
self.dominant.mutate(pos, new_gen);
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn to_string_should_concat_zygotes() {
assert_eq!(
Chromosome::from_strings(
"dDrR rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr ",
"RrDd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd",
).to_string(),
"dDrR rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr\
\nRrDd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd"
)
}
}
#[cfg(test)]
mod decoding_first_zygote_with_dominant_genes {
use super::*;
#[test]
fn must_override_recessive_genes_of_second_one() {
let mut chr = Chromosome::from_strings("DDdd", "RrRr");
chr.decode_genotype();
assert_eq!(chr.decoded, vec![0b1100u64,0,0,0]);
}
#[test]
fn must_override_dominant_genes_of_second_one() {
let mut chr = Chromosome::from_strings("DDdd", "DdDd");
chr.decode_genotype();
assert_eq!(chr.decoded, vec![0b1100u64,0,0,0]);
}
}
#[cfg(test)]
mod decoding_first_zygote_with_recessive_genes {
use super::*;
#[test]
fn must_override_recessive_genes_of_second_one() {
let mut chr = Chromosome::from_strings("RRrr", "RrRr");
chr.decode_genotype();
assert_eq!(chr.decoded, vec![0b1100u64,0,0,0]);
}
#[test]
fn must_override_dominant_genes_of_second_one() {
let mut chr = Chromosome::from_strings("RRrr", "DdDd");
chr.decode_genotype();
assert_eq!(chr.decoded, vec![0b1010u64,0,0,0]);
}
}
#[cfg(test)]
mod crossing_zygote {
use super::*;
#[test]
fn must_swap_3_genes_starting_from_pos_2() {
let mut chr = Chromosome::from_strings(
"dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd",
"rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr",
);
chr.cross_zygotes(2, 3);
assert_eq!(
chr.to_string(),
"dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddr rrdd\
\nrrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrd ddrr"
);
}
#[quickcheck]
fn must_swap_whole_right_pos_if_amount_is_more_than_length(pos: usize) -> bool {
let mut chr = Chromosome::from_strings(
"dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd",
"rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr",
);
chr.cross_zygotes(3, 61 + pos);
chr.to_string() ==
"rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rddd\
\ndddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd drrr"
}
#[test]
fn cross_chromosomes() {
let mut first = Chromosome::from_strings(
"dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd",
"rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr",
);
let mut second = Chromosome::from_strings(
"DDDD DDDD DDDD DDDD DDDD DDDD DDDD DDDD DDDD DDDD DDDD DDDD DDDD DDDD DDDD DDDD",
"RRRR RRRR RRRR RRRR RRRR RRRR RRRR RRRR RRRR RRRR RRRR RRRR RRRR RRRR RRRR RRRR",
);
first.cross_chromosomes(&mut second, 1, 2);
assert_eq!(
first.to_string(),
"dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dDDd\
\nrrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rRRr"
)
}
#[test]
fn mutate_gen_in_dominant() {
let mut chr = Chromosome::from_strings(
"dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd",
"rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr",
);
chr.mutate(2, &Gen::R1);
assert_eq!(
chr.to_string(),
"dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dddd dRdd\
\nrrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr rrrr"
)
}
}
fn _bools_to_str(bools: &[bool]) -> String {
bools
.iter()
.map(|b| if *b { '1' } else { '0' })
.rev()
.collect()
}
| 33.095477 | 99 | 0.600213 |
0ea545c25e5060be60353c77f1f3ea6028be11a2 | 13,639 | use std::fs;
use std::sync::Once;
use crate::{libindy, settings, utils};
use crate::agency_client::mocking::AgencyMockDecrypted;
use crate::init::{init_issuer_config, open_as_main_wallet};
use crate::init::PoolConfig;
use crate::libindy::utils::pool::reset_pool_handle;
use crate::libindy::utils::pool::test_utils::{create_test_ledger_config, delete_test_pool, open_test_pool};
use crate::libindy::utils::wallet::{close_main_wallet, create_and_open_as_main_wallet, create_indy_wallet, delete_wallet, reset_wallet_handle, WalletConfig};
use crate::libindy::utils::wallet::{configure_issuer_wallet, create_wallet};
use crate::settings::set_testing_defaults;
use crate::utils::constants;
use crate::utils::file::write_file;
use crate::utils::get_temp_dir_path;
use crate::utils::plugins::init_plugin;
use crate::utils::provision::{AgentProvisionConfig, provision_cloud_agent};
use crate::utils::test_logger::LibvcxDefaultLogger;
pub struct SetupEmpty; // clears settings, sets up logging
pub struct SetupDefaults; // set default settings
pub struct SetupMocks; // set default settings and enable test mode
pub struct SetupIndyMocks; // set default settings and enable indy mode
pub struct SetupWallet {
pub wallet_config: WalletConfig,
skip_cleanup: bool,
} // creates wallet with random name, configures wallet settings
pub struct SetupPoolConfig {
skip_cleanup: bool,
pub pool_config: PoolConfig,
}
pub struct SetupLibraryWallet {
pub wallet_config: WalletConfig,
} // set default settings and init indy wallet
pub struct SetupLibraryWalletPoolZeroFees {
pub institution_did: String,
} // set default settings, init indy wallet, init pool, set zero fees
pub struct SetupAgencyMock {
pub wallet_config: WalletConfig,
} // set default settings and enable mock agency mode
pub struct SetupLibraryAgencyV2; // init indy wallet, init pool, provision 2 agents. use protocol type 2.0
pub struct SetupLibraryAgencyV2ZeroFees; // init indy wallet, init pool, provision 2 agents. use protocol type 2.0, set zero fees
fn setup() {
init_test_logging();
settings::clear_config();
set_testing_defaults();
}
fn setup_empty() {
settings::clear_config();
init_test_logging();
}
fn tear_down() {
settings::clear_config();
reset_wallet_handle().unwrap();
reset_pool_handle();
settings::get_agency_client_mut().unwrap().disable_test_mode();
AgencyMockDecrypted::clear_mocks();
}
impl SetupEmpty {
pub fn init() {
setup_empty();
}
}
impl Drop for SetupEmpty {
fn drop(&mut self) {
tear_down()
}
}
impl SetupDefaults {
pub fn init() {
debug!("SetupDefaults :: starting");
setup();
debug!("SetupDefaults :: finished");
}
}
impl Drop for SetupDefaults {
fn drop(&mut self) {
tear_down()
}
}
impl SetupMocks {
fn _init() -> SetupMocks {
settings::set_config_value(settings::CONFIG_ENABLE_TEST_MODE, "true");
settings::get_agency_client_mut().unwrap().enable_test_mode();
SetupMocks
}
pub fn init() -> SetupMocks {
setup();
SetupMocks::_init()
}
pub fn init_without_threadpool() -> SetupMocks {
setup();
SetupMocks::_init()
}
}
impl Drop for SetupMocks {
fn drop(&mut self) {
tear_down()
}
}
impl SetupLibraryWallet {
pub fn init() -> SetupLibraryWallet {
setup();
let wallet_name: String = format!("Test_SetupLibraryWallet_{}", uuid::Uuid::new_v4().to_string());
let wallet_key: String = settings::DEFAULT_WALLET_KEY.into();
let wallet_kdf: String = settings::WALLET_KDF_RAW.into();
let wallet_config = WalletConfig {
wallet_name: wallet_name.clone(),
wallet_key: wallet_key.clone(),
wallet_key_derivation: wallet_kdf.to_string(),
wallet_type: None,
storage_config: None,
storage_credentials: None,
rekey: None,
rekey_derivation_method: None,
};
settings::set_config_value(settings::CONFIG_ENABLE_TEST_MODE, "false");
settings::get_agency_client_mut().unwrap().disable_test_mode();
create_and_open_as_main_wallet(&wallet_config).unwrap();
SetupLibraryWallet { wallet_config }
}
}
impl Drop for SetupLibraryWallet {
fn drop(&mut self) {
let _res = close_main_wallet().unwrap();
delete_wallet(&self.wallet_config).unwrap();
tear_down()
}
}
impl SetupWallet {
pub fn init() -> SetupWallet {
init_test_logging();
let wallet_name: String = format!("Test_SetupWallet_{}", uuid::Uuid::new_v4().to_string());
settings::get_agency_client_mut().unwrap().disable_test_mode();
let wallet_config = WalletConfig {
wallet_name: wallet_name.clone(),
wallet_key: settings::DEFAULT_WALLET_KEY.into(),
wallet_key_derivation: settings::WALLET_KDF_RAW.into(),
wallet_type: None,
storage_config: None,
storage_credentials: None,
rekey: None,
rekey_derivation_method: None,
};
create_indy_wallet(&wallet_config).unwrap();
SetupWallet { wallet_config, skip_cleanup: false }
}
pub fn skip_cleanup(mut self) -> SetupWallet {
self.skip_cleanup = true;
self
}
}
impl Drop for SetupWallet {
fn drop(&mut self) {
        if !self.skip_cleanup {
let _res = close_main_wallet().unwrap_or_else(|_e| error!("Failed to close main wallet while dropping SetupWallet test config."));
delete_wallet(&self.wallet_config).unwrap_or_else(|_e| error!("Failed to delete wallet while dropping SetupWallet test config."));
reset_wallet_handle().unwrap_or_else(|_e| error!("Failed to reset wallet handle while dropping SetupWallet test config."));
}
}
}
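// Illustrative usage, not part of the original source: a test holds the setup guard
// for its whole body so that the `Drop` impl above cleans up the wallet when the
// test finishes.
//
//     #[test]
//     fn exercises_the_wallet() {
//         let setup = SetupWallet::init();
//         // ... test code using setup.wallet_config ...
//     }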
impl SetupPoolConfig {
pub fn init() -> SetupPoolConfig {
create_test_ledger_config();
let genesis_path = utils::get_temp_dir_path(settings::DEFAULT_GENESIS_PATH).to_str().unwrap().to_string();
let pool_config = PoolConfig {
genesis_path,
pool_name: None,
pool_config: None,
};
SetupPoolConfig { skip_cleanup: false, pool_config }
}
pub fn skip_cleanup(mut self) -> SetupPoolConfig {
self.skip_cleanup = true;
self
}
}
impl Drop for SetupPoolConfig {
fn drop(&mut self) {
        if !self.skip_cleanup {
delete_test_pool();
reset_pool_handle();
}
}
}
impl SetupIndyMocks {
pub fn init() -> SetupIndyMocks {
setup();
settings::set_config_value(settings::CONFIG_ENABLE_TEST_MODE, "true");
settings::get_agency_client_mut().unwrap().enable_test_mode();
SetupIndyMocks {}
}
}
impl Drop for SetupIndyMocks {
fn drop(&mut self) {
tear_down()
}
}
impl SetupLibraryWalletPoolZeroFees {
pub fn init() -> SetupLibraryWalletPoolZeroFees {
setup();
let institution_did = setup_indy_env(true);
SetupLibraryWalletPoolZeroFees {
institution_did
}
}
}
impl Drop for SetupLibraryWalletPoolZeroFees {
fn drop(&mut self) {
cleanup_indy_env();
tear_down()
}
}
impl SetupAgencyMock {
pub fn init() -> SetupAgencyMock {
setup();
let wallet_name: String = format!("Test_SetupWalletAndPool_{}", uuid::Uuid::new_v4().to_string());
settings::get_agency_client_mut().unwrap().enable_test_mode();
let wallet_config = WalletConfig {
wallet_name: wallet_name.clone(),
wallet_key: settings::DEFAULT_WALLET_KEY.into(),
wallet_key_derivation: settings::WALLET_KDF_RAW.into(),
wallet_type: None,
storage_config: None,
storage_credentials: None,
rekey: None,
rekey_derivation_method: None,
};
create_and_open_as_main_wallet(&wallet_config).unwrap();
SetupAgencyMock { wallet_config }
}
}
impl Drop for SetupAgencyMock {
fn drop(&mut self) {
let _res = close_main_wallet().unwrap();
delete_wallet(&self.wallet_config).unwrap();
tear_down()
}
}
impl SetupLibraryAgencyV2 {
pub fn init() -> SetupLibraryAgencyV2 {
setup();
debug!("SetupLibraryAgencyV2 init >> going to setup agency environment");
setup_agency_env();
debug!("SetupLibraryAgencyV2 init >> completed");
SetupLibraryAgencyV2
}
}
impl Drop for SetupLibraryAgencyV2 {
fn drop(&mut self) {
cleanup_agency_env();
tear_down()
}
}
impl SetupLibraryAgencyV2ZeroFees {
pub fn init() -> SetupLibraryAgencyV2ZeroFees {
setup();
setup_agency_env();
SetupLibraryAgencyV2ZeroFees
}
}
impl Drop for SetupLibraryAgencyV2ZeroFees {
fn drop(&mut self) {
cleanup_agency_env();
tear_down()
}
}
#[macro_export]
macro_rules! assert_match {
($pattern:pat, $var:expr) => (
assert!(match $var {
$pattern => true,
_ => false
})
);
}
/* dummy */
pub const AGENCY_ENDPOINT: &'static str = "http://localhost:8080";
pub const AGENCY_DID: &'static str = "VsKV7grR1BUE29mG2Fm2kX";
pub const AGENCY_VERKEY: &'static str = "Hezce2UWMZ3wUhVkh2LfKSs8nDzWwzs2Win7EzNN3YaR";
pub const C_AGENCY_ENDPOINT: &'static str = "http://localhost:8080";
pub const C_AGENCY_DID: &'static str = "VsKV7grR1BUE29mG2Fm2kX";
pub const C_AGENCY_VERKEY: &'static str = "Hezce2UWMZ3wUhVkh2LfKSs8nDzWwzs2Win7EzNN3YaR";
lazy_static! {
static ref TEST_LOGGING_INIT: Once = Once::new();
}
pub fn init_test_logging() {
TEST_LOGGING_INIT.call_once(|| {
LibvcxDefaultLogger::init_testing_logger();
})
}
pub fn create_new_seed() -> String {
let x = rand::random::<u32>();
format!("{:032}", x)
}
pub fn configure_trustee_did() {
settings::set_config_value(settings::CONFIG_ENABLE_TEST_MODE, "false");
libindy::utils::anoncreds::libindy_prover_create_master_secret(settings::DEFAULT_LINK_SECRET_ALIAS).unwrap();
let (my_did, my_vk) = libindy::utils::signus::create_and_store_my_did(Some(constants::TRUSTEE_SEED), None).unwrap();
settings::set_config_value(settings::CONFIG_INSTITUTION_DID, &my_did);
settings::set_config_value(settings::CONFIG_INSTITUTION_VERKEY, &my_vk);
}
pub fn setup_libnullpay_nofees() {
init_plugin(settings::DEFAULT_PAYMENT_PLUGIN, settings::DEFAULT_PAYMENT_INIT_FUNCTION);
libindy::utils::payments::test_utils::token_setup(None, None, true);
}
pub fn setup_indy_env(use_zero_fees: bool) -> String {
settings::set_config_value(settings::CONFIG_ENABLE_TEST_MODE, "false");
settings::get_agency_client_mut().unwrap().disable_test_mode();
init_plugin(settings::DEFAULT_PAYMENT_PLUGIN, settings::DEFAULT_PAYMENT_INIT_FUNCTION);
let enterprise_seed = "000000000000000000000000Trustee1";
let config_wallet = WalletConfig {
wallet_name: format!("wallet_{}", uuid::Uuid::new_v4().to_string()),
wallet_key: settings::DEFAULT_WALLET_KEY.into(),
wallet_key_derivation: settings::WALLET_KDF_RAW.into(),
wallet_type: None,
storage_config: None,
storage_credentials: None,
rekey: None,
rekey_derivation_method: None,
};
let config_provision_agent = AgentProvisionConfig {
agency_did: AGENCY_DID.to_string(),
agency_verkey: AGENCY_VERKEY.to_string(),
agency_endpoint: AGENCY_ENDPOINT.to_string(),
agent_seed: None,
};
create_wallet(&config_wallet).unwrap();
open_as_main_wallet(&config_wallet).unwrap();
let config_issuer = configure_issuer_wallet(enterprise_seed).unwrap();
init_issuer_config(&config_issuer).unwrap();
provision_cloud_agent(&config_provision_agent).unwrap();
settings::set_config_value(settings::CONFIG_GENESIS_PATH, utils::get_temp_dir_path(settings::DEFAULT_GENESIS_PATH).to_str().unwrap());
open_test_pool();
libindy::utils::payments::test_utils::token_setup(None, None, use_zero_fees);
let institution_did = settings::get_config_value(settings::CONFIG_INSTITUTION_DID).unwrap();
institution_did
}
pub fn cleanup_indy_env() {
delete_test_pool();
}
pub fn cleanup_agency_env() {
delete_test_pool();
}
pub fn setup_agency_env() {
debug!("setup_agency_env >> clearing up settings");
settings::clear_config();
init_plugin(settings::DEFAULT_PAYMENT_PLUGIN, settings::DEFAULT_PAYMENT_INIT_FUNCTION);
settings::set_config_value(settings::CONFIG_GENESIS_PATH, utils::get_temp_dir_path(settings::DEFAULT_GENESIS_PATH).to_str().unwrap());
open_test_pool();
}
pub struct TempFile {
pub path: String,
}
impl TempFile {
pub fn prepare_path(filename: &str) -> TempFile {
let file_path = get_temp_dir_path(filename).to_str().unwrap().to_string();
TempFile { path: file_path }
}
pub fn create(filename: &str) -> TempFile {
let file_path = get_temp_dir_path(filename).to_str().unwrap().to_string();
fs::File::create(&file_path).unwrap();
TempFile { path: file_path }
}
pub fn create_with_data(filename: &str, data: &str) -> TempFile {
let mut file = TempFile::create(filename);
file.write(data);
file
}
pub fn write(&mut self, data: &str) {
write_file(&self.path, data).unwrap()
}
}
impl Drop for TempFile {
fn drop(&mut self) {
fs::remove_file(&self.path).unwrap()
}
}
| 30.04185 | 157 | 0.67351 |
f5f66fd42efbbe4c603e3d251a3b872fdea86122 | 11,515 | //! Support for a calling of an imported function.
use pyo3::prelude::*;
use pyo3::types::{PyAny, PyDict, PyTuple};
use crate::code_memory::CodeMemory;
use crate::function::Function;
use crate::memory::Memory;
use crate::value::{read_value_from, write_value_to};
use cranelift_codegen::ir::types;
use cranelift_codegen::ir::{InstBuilder, StackSlotData, StackSlotKind};
use cranelift_codegen::Context;
use cranelift_codegen::{binemit, ir, isa};
use cranelift_entity::{EntityRef, PrimaryMap};
use cranelift_frontend::{FunctionBuilder, FunctionBuilderContext};
use cranelift_wasm::{DefinedFuncIndex, FuncIndex};
use target_lexicon::HOST;
use wasmtime_environ::{Export, Module};
use wasmtime_runtime::{Imports, InstanceHandle, VMContext, VMFunctionBody};
use core::cmp;
use std::cell::RefCell;
use std::collections::{HashMap, HashSet};
use std::rc::Rc;
struct BoundPyFunction {
name: String,
obj: PyObject,
}
struct ImportObjState {
calls: Vec<BoundPyFunction>,
#[allow(dead_code)]
code_memory: CodeMemory,
}
unsafe extern "C" fn stub_fn(vmctx: *mut VMContext, call_id: u32, values_vec: *mut i64) {
let gil = Python::acquire_gil();
let py = gil.python();
let mut instance = InstanceHandle::from_vmctx(vmctx);
let (_name, obj) = {
let state = instance
.host_state()
.downcast_mut::<ImportObjState>()
.expect("state");
let name = state.calls[call_id as usize].name.to_owned();
let obj = state.calls[call_id as usize].obj.clone_ref(py);
(name, obj)
};
let module = instance.module_ref();
let signature = &module.signatures[module.functions[FuncIndex::new(call_id as usize)]];
let mut args = Vec::new();
for i in 1..signature.params.len() {
args.push(read_value_from(
py,
values_vec.offset(i as isize - 1),
signature.params[i].value_type,
))
}
let result = obj.call(py, PyTuple::new(py, args), None).expect("result");
for i in 0..signature.returns.len() {
let val = if result.is_none() {
0.into_object(py) // FIXME default ???
} else {
if i > 0 {
panic!("multiple returns unsupported");
}
result.clone_ref(py)
};
write_value_to(
py,
values_vec.offset(i as isize),
signature.returns[i].value_type,
val,
);
}
}
/// Create a trampoline for invoking a python function.
fn make_trampoline(
isa: &dyn isa::TargetIsa,
code_memory: &mut CodeMemory,
fn_builder_ctx: &mut FunctionBuilderContext,
call_id: u32,
signature: &ir::Signature,
) -> *const VMFunctionBody {
    // Mostly a reverse copy of the similar method from wasmtime's
// wasmtime-jit/src/compiler.rs.
let pointer_type = isa.pointer_type();
let mut stub_sig = ir::Signature::new(isa.frontend_config().default_call_conv);
// Add the `vmctx` parameter.
stub_sig.params.push(ir::AbiParam::special(
pointer_type,
ir::ArgumentPurpose::VMContext,
));
// Add the `call_id` parameter.
stub_sig.params.push(ir::AbiParam::new(types::I32));
// Add the `values_vec` parameter.
stub_sig.params.push(ir::AbiParam::new(pointer_type));
let values_vec_len = 8 * cmp::max(signature.params.len() - 1, signature.returns.len()) as u32;
let mut context = Context::new();
context.func =
ir::Function::with_name_signature(ir::ExternalName::user(0, 0), signature.clone());
let ss = context.func.create_stack_slot(StackSlotData::new(
StackSlotKind::ExplicitSlot,
values_vec_len,
));
let value_size = 8;
{
let mut builder = FunctionBuilder::new(&mut context.func, fn_builder_ctx);
let block0 = builder.create_ebb();
builder.append_ebb_params_for_function_params(block0);
builder.switch_to_block(block0);
builder.seal_block(block0);
let values_vec_ptr_val = builder.ins().stack_addr(pointer_type, ss, 0);
let mflags = ir::MemFlags::trusted();
for i in 1..signature.params.len() {
if i == 0 {
continue;
}
let val = builder.func.dfg.ebb_params(block0)[i];
builder.ins().store(
mflags,
val,
values_vec_ptr_val,
((i - 1) * value_size) as i32,
);
}
let vmctx_ptr_val = builder.func.dfg.ebb_params(block0)[0];
let call_id_val = builder.ins().iconst(types::I32, call_id as i64);
let callee_args = vec![vmctx_ptr_val, call_id_val, values_vec_ptr_val];
let new_sig = builder.import_signature(stub_sig.clone());
let callee_value = builder
.ins()
.iconst(pointer_type, stub_fn as *const VMFunctionBody as i64);
builder
.ins()
.call_indirect(new_sig, callee_value, &callee_args);
let mflags = ir::MemFlags::trusted();
let mut results = Vec::new();
for (i, r) in signature.returns.iter().enumerate() {
let load = builder.ins().load(
r.value_type,
mflags,
values_vec_ptr_val,
(i * value_size) as i32,
);
results.push(load);
}
builder.ins().return_(&results);
builder.finalize()
}
let mut code_buf: Vec<u8> = Vec::new();
let mut reloc_sink = RelocSink {};
let mut trap_sink = binemit::NullTrapSink {};
context
.compile_and_emit(isa, &mut code_buf, &mut reloc_sink, &mut trap_sink)
.expect("compile_and_emit");
code_memory
.allocate_copy_of_byte_slice(&code_buf)
.expect("allocate_copy_of_byte_slice")
.as_ptr()
}
fn parse_annotation_type(s: &str) -> ir::Type {
match s {
"I32" | "i32" => types::I32,
"I64" | "i64" => types::I64,
"F32" | "f32" => types::F32,
"F64" | "f64" => types::F64,
_ => panic!("unknown type in annotations"),
}
}
fn get_signature_from_py_annotation(
annot: &PyDict,
pointer_type: ir::Type,
call_conv: isa::CallConv,
) -> PyResult<ir::Signature> {
let mut params = Vec::new();
params.push(ir::AbiParam::special(
pointer_type,
ir::ArgumentPurpose::VMContext,
));
let mut returns = None;
for (name, value) in annot.iter() {
let ty = parse_annotation_type(&value.to_string());
match name.to_string().as_str() {
"return" => returns = Some(ty),
_ => params.push(ir::AbiParam::new(ty)),
}
}
Ok(ir::Signature {
params,
returns: match returns {
Some(r) => vec![ir::AbiParam::new(r)],
None => vec![],
},
call_conv,
})
}
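// For illustration (not in the original source): a Python callable annotated as
//
//     def adder(a: "i32", b: "i32") -> "i32":
//         return a + b
//
// would yield a signature of (vmctx, i32, i32) -> i32 through the parsing above,
// with the special `vmctx` parameter prepended automatically.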
pub fn into_instance_from_obj(
py: Python,
global_exports: Rc<RefCell<HashMap<String, Option<wasmtime_runtime::Export>>>>,
obj: &PyAny,
) -> PyResult<InstanceHandle> {
let isa = {
let isa_builder =
cranelift_native::builder().expect("host machine is not a supported target");
let flag_builder = cranelift_codegen::settings::builder();
isa_builder.finish(cranelift_codegen::settings::Flags::new(flag_builder))
};
let mut fn_builder_ctx = FunctionBuilderContext::new();
let mut module = Module::new();
let mut finished_functions: PrimaryMap<DefinedFuncIndex, *const VMFunctionBody> =
PrimaryMap::new();
let mut code_memory = CodeMemory::new();
let pointer_type = types::Type::triple_pointer_type(&HOST);
let call_conv = isa::CallConv::triple_default(&HOST);
let obj = obj.cast_as::<PyDict>()?;
let mut bound_functions = Vec::new();
let mut dependencies = HashSet::new();
let mut memories = PrimaryMap::new();
for (name, item) in obj.iter() {
if item.is_callable() {
let sig = if item.get_type().is_subclass::<Function>()? {
// TODO faster calls?
let wasm_fn = item.cast_as::<Function>()?;
dependencies.insert(wasm_fn.instance.clone());
wasm_fn.get_signature()
} else if item.hasattr("__annotations__")? {
let annot = item.getattr("__annotations__")?.cast_as::<PyDict>()?;
get_signature_from_py_annotation(&annot, pointer_type, call_conv)?
} else {
// TODO support calls without annotations?
continue;
};
let sig_id = module.signatures.push(sig.clone());
let func_id = module.functions.push(sig_id);
module
.exports
.insert(name.to_string(), Export::Function(func_id));
let trampoline = make_trampoline(
isa.as_ref(),
&mut code_memory,
&mut fn_builder_ctx,
func_id.index() as u32,
&sig,
);
finished_functions.push(trampoline);
bound_functions.push(BoundPyFunction {
name: name.to_string(),
obj: item.into_object(py),
});
} else if item.get_type().is_subclass::<Memory>()? {
let wasm_mem = item.cast_as::<Memory>()?;
dependencies.insert(wasm_mem.instance.clone());
let plan = wasm_mem.get_plan();
let mem_id = module.memory_plans.push(plan);
let _mem_id_2 = memories.push(wasm_mem.into_import());
assert_eq!(mem_id, _mem_id_2);
let _mem_id_3 = module
.imported_memories
.push((String::from(""), String::from("")));
assert_eq!(mem_id, _mem_id_3);
module
.exports
.insert(name.to_string(), Export::Memory(mem_id));
}
}
let imports = Imports::new(
dependencies,
PrimaryMap::new(),
PrimaryMap::new(),
memories,
PrimaryMap::new(),
);
let data_initializers = Vec::new();
let signatures = PrimaryMap::new();
code_memory.publish();
let import_obj_state = ImportObjState {
calls: bound_functions,
code_memory,
};
Ok(InstanceHandle::new(
Rc::new(module),
global_exports,
finished_functions.into_boxed_slice(),
imports,
&data_initializers,
signatures.into_boxed_slice(),
None,
Box::new(import_obj_state),
)
.expect("instance"))
}
/// We don't expect trampoline compilation to produce any relocations, so
/// this `RelocSink` just asserts that it doesn't receive any.
struct RelocSink {}
impl binemit::RelocSink for RelocSink {
fn reloc_ebb(
&mut self,
_offset: binemit::CodeOffset,
_reloc: binemit::Reloc,
_ebb_offset: binemit::CodeOffset,
) {
panic!("trampoline compilation should not produce ebb relocs");
}
fn reloc_external(
&mut self,
_offset: binemit::CodeOffset,
_reloc: binemit::Reloc,
_name: &ir::ExternalName,
_addend: binemit::Addend,
) {
panic!("trampoline compilation should not produce external symbol relocs");
}
fn reloc_jt(
&mut self,
_offset: binemit::CodeOffset,
_reloc: binemit::Reloc,
_jt: ir::JumpTable,
) {
panic!("trampoline compilation should not produce jump table relocs");
}
}
| 32.164804 | 98 | 0.592358 |
bf6c345d59e182a4bbe7aabacfb70eb8982a5980 | 1,517 | use crate::input::structures;
use crate::{input::rooms, protos::cao_commands};
use tonic::{Request, Response, Status};
use tracing::info;
#[derive(Clone)]
pub struct CommandService {
world: crate::WorldContainer,
}
impl std::fmt::Debug for CommandService {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("CommandService").finish()
}
}
impl CommandService {
pub fn new(world: crate::WorldContainer) -> Self {
Self { world }
}
}
#[tonic::async_trait]
impl cao_commands::command_server::Command for CommandService {
async fn place_structure(
&self,
request: Request<cao_commands::PlaceStructureCommand>,
) -> Result<Response<cao_commands::CommandResult>, Status> {
info!("Placing structure");
let mut w = self.world.write().await;
structures::place_structure(&mut w, request.get_ref())
.map(|_: ()| Response::new(cao_commands::CommandResult {}))
.map_err(|err| Status::invalid_argument(err.to_string()))
}
async fn take_room(
&self,
request: tonic::Request<cao_commands::TakeRoomCommand>,
) -> Result<tonic::Response<cao_commands::CommandResult>, tonic::Status> {
info!("Taking room");
let mut w = self.world.write().await;
rooms::take_room(&mut w, request.get_ref())
.map(|_: ()| Response::new(cao_commands::CommandResult {}))
.map_err(|err| Status::invalid_argument(err.to_string()))
}
}
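// Illustrative sketch, not part of the original source: exposing this service over
// gRPC. `CommandServer` is the tonic-generated wrapper expected to sit next to the
// `Command` trait in `command_server`, the bind address is arbitrary, and tonic's
// default `transport` feature is assumed to be enabled.
#[allow(dead_code)]
async fn serve_commands(world: crate::WorldContainer) -> Result<(), tonic::transport::Error> {
    let addr: std::net::SocketAddr = "0.0.0.0:50051".parse().expect("valid socket address");
    tonic::transport::Server::builder()
        .add_service(cao_commands::command_server::CommandServer::new(CommandService::new(world)))
        .serve(addr)
        .await
}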
| 32.276596 | 78 | 0.637442 |
723ceb807435a4e3e809bee6aa6db4606184504f | 47,287 | #![allow(unused_imports, non_camel_case_types)]
use crate::model::CodeableConcept::CodeableConcept;
use crate::model::ContactDetail::ContactDetail;
use crate::model::Element::Element;
use crate::model::Extension::Extension;
use crate::model::Identifier::Identifier;
use crate::model::Meta::Meta;
use crate::model::Narrative::Narrative;
use crate::model::Period::Period;
use crate::model::Reference::Reference;
use crate::model::RelatedArtifact::RelatedArtifact;
use crate::model::ResourceList::ResourceList;
use crate::model::UsageContext::UsageContext;
use serde_json::json;
use serde_json::value::Value;
use std::borrow::Cow;
/// The ResearchDefinition resource describes the conditional state (population and
/// any exposures being compared within the population) and outcome (if specified)
/// that the knowledge (evidence, assertion, recommendation) is about.
#[derive(Debug)]
pub struct ResearchDefinition<'a> {
pub(crate) value: Cow<'a, Value>,
}
impl ResearchDefinition<'_> {
pub fn new(value: &Value) -> ResearchDefinition {
ResearchDefinition {
value: Cow::Borrowed(value),
}
}
pub fn to_json(&self) -> Value {
(*self.value).clone()
}
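    // Illustrative usage, not part of the original source: wrap an already parsed JSON
    // resource and read fields through the typed accessors below.
    //
    //     let raw: serde_json::Value = serde_json::from_str(json_text)?;
    //     let research_definition = ResearchDefinition::new(&raw);
    //     let description = research_definition.description();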
/// Extensions for approvalDate
pub fn _approval_date(&self) -> Option<Element> {
if let Some(val) = self.value.get("_approvalDate") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for comment
pub fn _comment(&self) -> Option<Vec<Element>> {
if let Some(Value::Array(val)) = self.value.get("_comment") {
return Some(
val.into_iter()
.map(|e| Element {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// Extensions for copyright
pub fn _copyright(&self) -> Option<Element> {
if let Some(val) = self.value.get("_copyright") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for date
pub fn _date(&self) -> Option<Element> {
if let Some(val) = self.value.get("_date") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for description
pub fn _description(&self) -> Option<Element> {
if let Some(val) = self.value.get("_description") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for experimental
pub fn _experimental(&self) -> Option<Element> {
if let Some(val) = self.value.get("_experimental") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for implicitRules
pub fn _implicit_rules(&self) -> Option<Element> {
if let Some(val) = self.value.get("_implicitRules") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for language
pub fn _language(&self) -> Option<Element> {
if let Some(val) = self.value.get("_language") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for lastReviewDate
pub fn _last_review_date(&self) -> Option<Element> {
if let Some(val) = self.value.get("_lastReviewDate") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for name
pub fn _name(&self) -> Option<Element> {
if let Some(val) = self.value.get("_name") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for publisher
pub fn _publisher(&self) -> Option<Element> {
if let Some(val) = self.value.get("_publisher") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for purpose
pub fn _purpose(&self) -> Option<Element> {
if let Some(val) = self.value.get("_purpose") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for shortTitle
pub fn _short_title(&self) -> Option<Element> {
if let Some(val) = self.value.get("_shortTitle") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for status
pub fn _status(&self) -> Option<Element> {
if let Some(val) = self.value.get("_status") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for subtitle
pub fn _subtitle(&self) -> Option<Element> {
if let Some(val) = self.value.get("_subtitle") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for title
pub fn _title(&self) -> Option<Element> {
if let Some(val) = self.value.get("_title") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for url
pub fn _url(&self) -> Option<Element> {
if let Some(val) = self.value.get("_url") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for usage
pub fn _usage(&self) -> Option<Element> {
if let Some(val) = self.value.get("_usage") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for version
pub fn _version(&self) -> Option<Element> {
if let Some(val) = self.value.get("_version") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The date on which the resource content was approved by the publisher. Approval
/// happens once when the content is officially approved for usage.
pub fn approval_date(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("approvalDate") {
return Some(string);
}
return None;
}
    /// An individual or organization primarily involved in the creation and
/// maintenance of the content.
pub fn author(&self) -> Option<Vec<ContactDetail>> {
if let Some(Value::Array(val)) = self.value.get("author") {
return Some(
val.into_iter()
.map(|e| ContactDetail {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// A human-readable string to clarify or explain concepts about the resource.
pub fn comment(&self) -> Option<Vec<&str>> {
if let Some(Value::Array(val)) = self.value.get("comment") {
return Some(
val.into_iter()
.map(|e| e.as_str().unwrap())
.collect::<Vec<_>>(),
);
}
return None;
}
/// Contact details to assist a user in finding and communicating with the
/// publisher.
pub fn contact(&self) -> Option<Vec<ContactDetail>> {
if let Some(Value::Array(val)) = self.value.get("contact") {
return Some(
val.into_iter()
.map(|e| ContactDetail {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// These resources do not have an independent existence apart from the resource
/// that contains them - they cannot be identified independently, and nor can they
/// have their own independent transaction scope.
pub fn contained(&self) -> Option<Vec<ResourceList>> {
if let Some(Value::Array(val)) = self.value.get("contained") {
return Some(
val.into_iter()
.map(|e| ResourceList {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// A copyright statement relating to the research definition and/or its contents.
/// Copyright statements are generally legal restrictions on the use and publishing
/// of the research definition.
pub fn copyright(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("copyright") {
return Some(string);
}
return None;
}
/// The date (and optionally time) when the research definition was published. The
/// date must change when the business version changes and it must change if the
/// status code changes. In addition, it should change when the substantive content
/// of the research definition changes.
pub fn date(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("date") {
return Some(string);
}
return None;
}
/// A free text natural language description of the research definition from a
/// consumer's perspective.
pub fn description(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("description") {
return Some(string);
}
return None;
}
/// An individual or organization primarily responsible for internal coherence of
/// the content.
pub fn editor(&self) -> Option<Vec<ContactDetail>> {
if let Some(Value::Array(val)) = self.value.get("editor") {
return Some(
val.into_iter()
.map(|e| ContactDetail {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// The period during which the research definition content was or is planned to be
/// in active use.
pub fn effective_period(&self) -> Option<Period> {
if let Some(val) = self.value.get("effectivePeriod") {
return Some(Period {
value: Cow::Borrowed(val),
});
}
return None;
}
/// An individual or organization responsible for officially endorsing the content
/// for use in some setting.
pub fn endorser(&self) -> Option<Vec<ContactDetail>> {
if let Some(Value::Array(val)) = self.value.get("endorser") {
return Some(
val.into_iter()
.map(|e| ContactDetail {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// A Boolean value to indicate that this research definition is authored for
/// testing purposes (or education/evaluation/marketing) and is not intended to be
/// used for genuine usage.
pub fn experimental(&self) -> Option<bool> {
if let Some(val) = self.value.get("experimental") {
return Some(val.as_bool().unwrap());
}
return None;
}
/// A reference to a ResearchElementDefinition resource that defines the exposure
/// for the research.
pub fn exposure(&self) -> Option<Reference> {
if let Some(val) = self.value.get("exposure") {
return Some(Reference {
value: Cow::Borrowed(val),
});
}
return None;
}
/// A reference to a ResearchElementDefinition resource that defines the
/// exposureAlternative for the research.
pub fn exposure_alternative(&self) -> Option<Reference> {
if let Some(val) = self.value.get("exposureAlternative") {
return Some(Reference {
value: Cow::Borrowed(val),
});
}
return None;
}
/// May be used to represent additional information that is not part of the basic
/// definition of the resource. To make the use of extensions safe and manageable,
/// there is a strict set of governance applied to the definition and use of
/// extensions. Though any implementer can define an extension, there is a set of
/// requirements that SHALL be met as part of the definition of the extension.
pub fn extension(&self) -> Option<Vec<Extension>> {
if let Some(Value::Array(val)) = self.value.get("extension") {
return Some(
val.into_iter()
.map(|e| Extension {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// The logical id of the resource, as used in the URL for the resource. Once
/// assigned, this value never changes.
pub fn id(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("id") {
return Some(string);
}
return None;
}
/// A formal identifier that is used to identify this research definition when it is
/// represented in other formats, or referenced in a specification, model, design or
/// an instance.
pub fn identifier(&self) -> Option<Vec<Identifier>> {
if let Some(Value::Array(val)) = self.value.get("identifier") {
return Some(
val.into_iter()
.map(|e| Identifier {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// A reference to a set of rules that were followed when the resource was
/// constructed, and which must be understood when processing the content. Often,
/// this is a reference to an implementation guide that defines the special rules
/// along with other profiles etc.
pub fn implicit_rules(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("implicitRules") {
return Some(string);
}
return None;
}
/// A legal or geographic region in which the research definition is intended to be
/// used.
pub fn jurisdiction(&self) -> Option<Vec<CodeableConcept>> {
if let Some(Value::Array(val)) = self.value.get("jurisdiction") {
return Some(
val.into_iter()
.map(|e| CodeableConcept {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// The base language in which the resource is written.
pub fn language(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("language") {
return Some(string);
}
return None;
}
/// The date on which the resource content was last reviewed. Review happens
/// periodically after approval but does not change the original approval date.
pub fn last_review_date(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("lastReviewDate") {
return Some(string);
}
return None;
}
/// A reference to a Library resource containing the formal logic used by the
/// ResearchDefinition.
pub fn library(&self) -> Option<Vec<&str>> {
if let Some(Value::Array(val)) = self.value.get("library") {
return Some(
val.into_iter()
.map(|e| e.as_str().unwrap())
.collect::<Vec<_>>(),
);
}
return None;
}
/// The metadata about the resource. This is content that is maintained by the
/// infrastructure. Changes to the content might not always be associated with
/// version changes to the resource.
pub fn meta(&self) -> Option<Meta> {
if let Some(val) = self.value.get("meta") {
return Some(Meta {
value: Cow::Borrowed(val),
});
}
return None;
}
/// May be used to represent additional information that is not part of the basic
/// definition of the resource and that modifies the understanding of the element
/// that contains it and/or the understanding of the containing element's
/// descendants. Usually modifier elements provide negation or qualification. To
/// make the use of extensions safe and manageable, there is a strict set of
/// governance applied to the definition and use of extensions. Though any
/// implementer is allowed to define an extension, there is a set of requirements
/// that SHALL be met as part of the definition of the extension. Applications
/// processing a resource are required to check for modifier extensions. Modifier
/// extensions SHALL NOT change the meaning of any elements on Resource or
/// DomainResource (including cannot change the meaning of modifierExtension
/// itself).
pub fn modifier_extension(&self) -> Option<Vec<Extension>> {
if let Some(Value::Array(val)) = self.value.get("modifierExtension") {
return Some(
val.into_iter()
.map(|e| Extension {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// A natural language name identifying the research definition. This name should be
/// usable as an identifier for the module by machine processing applications such
/// as code generation.
pub fn name(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("name") {
return Some(string);
}
return None;
}
/// A reference to a ResearchElementDefinition resource that defines the outcome for
/// the research.
pub fn outcome(&self) -> Option<Reference> {
if let Some(val) = self.value.get("outcome") {
return Some(Reference {
value: Cow::Borrowed(val),
});
}
return None;
}
/// A reference to a ResearchElementDefinition resource that defines the population
/// for the research.
pub fn population(&self) -> Reference {
Reference {
value: Cow::Borrowed(&self.value["population"]),
}
}
/// The name of the organization or individual that published the research
/// definition.
pub fn publisher(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("publisher") {
return Some(string);
}
return None;
}
/// Explanation of why this research definition is needed and why it has been
/// designed as it has.
pub fn purpose(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("purpose") {
return Some(string);
}
return None;
}
/// Related artifacts such as additional documentation, justification, or
/// bibliographic references.
pub fn related_artifact(&self) -> Option<Vec<RelatedArtifact>> {
if let Some(Value::Array(val)) = self.value.get("relatedArtifact") {
return Some(
val.into_iter()
.map(|e| RelatedArtifact {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// An individual or organization primarily responsible for review of some aspect of
/// the content.
pub fn reviewer(&self) -> Option<Vec<ContactDetail>> {
if let Some(Value::Array(val)) = self.value.get("reviewer") {
return Some(
val.into_iter()
.map(|e| ContactDetail {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// The short title provides an alternate title for use in informal descriptive
/// contexts where the full, formal title is not necessary.
pub fn short_title(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("shortTitle") {
return Some(string);
}
return None;
}
/// The status of this research definition. Enables tracking the life-cycle of the
/// content.
pub fn status(&self) -> Option<ResearchDefinitionStatus> {
if let Some(Value::String(val)) = self.value.get("status") {
return Some(ResearchDefinitionStatus::from_string(&val).unwrap());
}
return None;
}
/// The intended subjects for the ResearchDefinition. If this element is not
/// provided, a Patient subject is assumed, but the subject of the
/// ResearchDefinition can be anything.
pub fn subject_codeable_concept(&self) -> Option<CodeableConcept> {
if let Some(val) = self.value.get("subjectCodeableConcept") {
return Some(CodeableConcept {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The intended subjects for the ResearchDefinition. If this element is not
/// provided, a Patient subject is assumed, but the subject of the
/// ResearchDefinition can be anything.
pub fn subject_reference(&self) -> Option<Reference> {
if let Some(val) = self.value.get("subjectReference") {
return Some(Reference {
value: Cow::Borrowed(val),
});
}
return None;
}
/// An explanatory or alternate title for the ResearchDefinition giving additional
/// information about its content.
pub fn subtitle(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("subtitle") {
return Some(string);
}
return None;
}
/// A human-readable narrative that contains a summary of the resource and can be
/// used to represent the content of the resource to a human. The narrative need not
/// encode all the structured data, but is required to contain sufficient detail to
/// make it "clinically safe" for a human to just read the narrative. Resource
/// definitions may define what content should be represented in the narrative to
/// ensure clinical safety.
pub fn text(&self) -> Option<Narrative> {
if let Some(val) = self.value.get("text") {
return Some(Narrative {
value: Cow::Borrowed(val),
});
}
return None;
}
/// A short, descriptive, user-friendly title for the research definition.
pub fn title(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("title") {
return Some(string);
}
return None;
}
/// Descriptive topics related to the content of the ResearchDefinition. Topics
/// provide a high-level categorization grouping types of ResearchDefinitions that
/// can be useful for filtering and searching.
pub fn topic(&self) -> Option<Vec<CodeableConcept>> {
if let Some(Value::Array(val)) = self.value.get("topic") {
return Some(
val.into_iter()
.map(|e| CodeableConcept {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// An absolute URI that is used to identify this research definition when it is
/// referenced in a specification, model, design or an instance; also called its
/// canonical identifier. This SHOULD be globally unique and SHOULD be a literal
/// address at which an authoritative instance of this research definition
/// is (or will be) published. This URL can be the target of a canonical reference.
/// It SHALL remain the same when the research definition is stored on different
/// servers.
pub fn url(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("url") {
return Some(string);
}
return None;
}
/// A detailed description, from a clinical perspective, of how the
/// ResearchDefinition is used.
pub fn usage(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("usage") {
return Some(string);
}
return None;
}
/// The content was developed with a focus and intent of supporting the contexts
/// that are listed. These contexts may be general categories (gender, age, ...) or
/// may be references to specific programs (insurance plans, studies, ...) and may
/// be used to assist with indexing and searching for appropriate research
/// definition instances.
pub fn use_context(&self) -> Option<Vec<UsageContext>> {
if let Some(Value::Array(val)) = self.value.get("useContext") {
return Some(
val.into_iter()
.map(|e| UsageContext {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// The identifier that is used to identify this version of the research definition
/// when it is referenced in a specification, model, design or instance. This is an
/// arbitrary value managed by the research definition author and is not expected to
/// be globally unique. For example, it might be a timestamp (e.g. yyyymmdd) if a
/// managed version is not available. There is also no expectation that versions can
/// be placed in a lexicographical sequence. To provide a version consistent with
/// the Decision Support Service specification, use the format Major.Minor.Revision
/// (e.g. 1.0.0). For more information on versioning knowledge assets, refer to the
/// Decision Support Service specification. Note that a version is required for non-
/// experimental active artifacts.
pub fn version(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("version") {
return Some(string);
}
return None;
}
pub fn validate(&self) -> bool {
if let Some(_val) = self._approval_date() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._comment() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self._copyright() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._date() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._description() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._experimental() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._implicit_rules() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._language() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._last_review_date() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._name() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._publisher() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._purpose() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._short_title() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._status() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._subtitle() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._title() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._url() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._usage() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._version() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.approval_date() {}
if let Some(_val) = self.author() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.comment() {
_val.into_iter().for_each(|_e| {});
}
if let Some(_val) = self.contact() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.contained() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.copyright() {}
if let Some(_val) = self.date() {}
if let Some(_val) = self.description() {}
if let Some(_val) = self.editor() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.effective_period() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.endorser() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.experimental() {}
if let Some(_val) = self.exposure() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.exposure_alternative() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.extension() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.id() {}
if let Some(_val) = self.identifier() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.implicit_rules() {}
if let Some(_val) = self.jurisdiction() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.language() {}
if let Some(_val) = self.last_review_date() {}
if let Some(_val) = self.library() {
_val.into_iter().for_each(|_e| {});
}
if let Some(_val) = self.meta() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.modifier_extension() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.name() {}
if let Some(_val) = self.outcome() {
if !_val.validate() {
return false;
}
}
if !self.population().validate() {
return false;
}
if let Some(_val) = self.publisher() {}
if let Some(_val) = self.purpose() {}
if let Some(_val) = self.related_artifact() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.reviewer() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.short_title() {}
if let Some(_val) = self.status() {}
if let Some(_val) = self.subject_codeable_concept() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.subject_reference() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.subtitle() {}
if let Some(_val) = self.text() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.title() {}
if let Some(_val) = self.topic() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.url() {}
if let Some(_val) = self.usage() {}
if let Some(_val) = self.use_context() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.version() {}
return true;
}
}
#[derive(Debug)]
pub struct ResearchDefinitionBuilder {
pub(crate) value: Value,
}
impl ResearchDefinitionBuilder {
pub fn build(&self) -> ResearchDefinition {
ResearchDefinition {
value: Cow::Owned(self.value.clone()),
}
}
pub fn with(existing: ResearchDefinition) -> ResearchDefinitionBuilder {
ResearchDefinitionBuilder {
value: (*existing.value).clone(),
}
}
pub fn new(population: Reference) -> ResearchDefinitionBuilder {
let mut __value: Value = json!({});
__value["population"] = json!(population.value);
return ResearchDefinitionBuilder { value: __value };
}
pub fn _approval_date<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_approvalDate"] = json!(val.value);
return self;
}
pub fn _comment<'a>(&'a mut self, val: Vec<Element>) -> &'a mut ResearchDefinitionBuilder {
self.value["_comment"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn _copyright<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_copyright"] = json!(val.value);
return self;
}
pub fn _date<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_date"] = json!(val.value);
return self;
}
pub fn _description<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_description"] = json!(val.value);
return self;
}
pub fn _experimental<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_experimental"] = json!(val.value);
return self;
}
pub fn _implicit_rules<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_implicitRules"] = json!(val.value);
return self;
}
pub fn _language<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_language"] = json!(val.value);
return self;
}
pub fn _last_review_date<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_lastReviewDate"] = json!(val.value);
return self;
}
pub fn _name<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_name"] = json!(val.value);
return self;
}
pub fn _publisher<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_publisher"] = json!(val.value);
return self;
}
pub fn _purpose<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_purpose"] = json!(val.value);
return self;
}
pub fn _short_title<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_shortTitle"] = json!(val.value);
return self;
}
pub fn _status<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_status"] = json!(val.value);
return self;
}
pub fn _subtitle<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_subtitle"] = json!(val.value);
return self;
}
pub fn _title<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_title"] = json!(val.value);
return self;
}
pub fn _url<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_url"] = json!(val.value);
return self;
}
pub fn _usage<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_usage"] = json!(val.value);
return self;
}
pub fn _version<'a>(&'a mut self, val: Element) -> &'a mut ResearchDefinitionBuilder {
self.value["_version"] = json!(val.value);
return self;
}
pub fn approval_date<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["approvalDate"] = json!(val);
return self;
}
pub fn author<'a>(&'a mut self, val: Vec<ContactDetail>) -> &'a mut ResearchDefinitionBuilder {
self.value["author"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn comment<'a>(&'a mut self, val: Vec<&str>) -> &'a mut ResearchDefinitionBuilder {
self.value["comment"] = json!(val);
return self;
}
pub fn contact<'a>(&'a mut self, val: Vec<ContactDetail>) -> &'a mut ResearchDefinitionBuilder {
self.value["contact"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn contained<'a>(
&'a mut self,
val: Vec<ResourceList>,
) -> &'a mut ResearchDefinitionBuilder {
self.value["contained"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn copyright<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["copyright"] = json!(val);
return self;
}
pub fn date<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["date"] = json!(val);
return self;
}
pub fn description<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["description"] = json!(val);
return self;
}
pub fn editor<'a>(&'a mut self, val: Vec<ContactDetail>) -> &'a mut ResearchDefinitionBuilder {
self.value["editor"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn effective_period<'a>(&'a mut self, val: Period) -> &'a mut ResearchDefinitionBuilder {
self.value["effectivePeriod"] = json!(val.value);
return self;
}
pub fn endorser<'a>(
&'a mut self,
val: Vec<ContactDetail>,
) -> &'a mut ResearchDefinitionBuilder {
self.value["endorser"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn experimental<'a>(&'a mut self, val: bool) -> &'a mut ResearchDefinitionBuilder {
self.value["experimental"] = json!(val);
return self;
}
pub fn exposure<'a>(&'a mut self, val: Reference) -> &'a mut ResearchDefinitionBuilder {
self.value["exposure"] = json!(val.value);
return self;
}
pub fn exposure_alternative<'a>(
&'a mut self,
val: Reference,
) -> &'a mut ResearchDefinitionBuilder {
self.value["exposureAlternative"] = json!(val.value);
return self;
}
pub fn extension<'a>(&'a mut self, val: Vec<Extension>) -> &'a mut ResearchDefinitionBuilder {
self.value["extension"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn id<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["id"] = json!(val);
return self;
}
pub fn identifier<'a>(&'a mut self, val: Vec<Identifier>) -> &'a mut ResearchDefinitionBuilder {
self.value["identifier"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn implicit_rules<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["implicitRules"] = json!(val);
return self;
}
pub fn jurisdiction<'a>(
&'a mut self,
val: Vec<CodeableConcept>,
) -> &'a mut ResearchDefinitionBuilder {
self.value["jurisdiction"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn language<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["language"] = json!(val);
return self;
}
pub fn last_review_date<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["lastReviewDate"] = json!(val);
return self;
}
pub fn library<'a>(&'a mut self, val: Vec<&str>) -> &'a mut ResearchDefinitionBuilder {
self.value["library"] = json!(val);
return self;
}
pub fn meta<'a>(&'a mut self, val: Meta) -> &'a mut ResearchDefinitionBuilder {
self.value["meta"] = json!(val.value);
return self;
}
pub fn modifier_extension<'a>(
&'a mut self,
val: Vec<Extension>,
) -> &'a mut ResearchDefinitionBuilder {
self.value["modifierExtension"] =
json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn name<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["name"] = json!(val);
return self;
}
pub fn outcome<'a>(&'a mut self, val: Reference) -> &'a mut ResearchDefinitionBuilder {
self.value["outcome"] = json!(val.value);
return self;
}
pub fn publisher<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["publisher"] = json!(val);
return self;
}
pub fn purpose<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["purpose"] = json!(val);
return self;
}
pub fn related_artifact<'a>(
&'a mut self,
val: Vec<RelatedArtifact>,
) -> &'a mut ResearchDefinitionBuilder {
self.value["relatedArtifact"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn reviewer<'a>(
&'a mut self,
val: Vec<ContactDetail>,
) -> &'a mut ResearchDefinitionBuilder {
self.value["reviewer"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn short_title<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["shortTitle"] = json!(val);
return self;
}
pub fn status<'a>(
&'a mut self,
val: ResearchDefinitionStatus,
) -> &'a mut ResearchDefinitionBuilder {
self.value["status"] = json!(val.to_string());
return self;
}
pub fn subject_codeable_concept<'a>(
&'a mut self,
val: CodeableConcept,
) -> &'a mut ResearchDefinitionBuilder {
self.value["subjectCodeableConcept"] = json!(val.value);
return self;
}
pub fn subject_reference<'a>(
&'a mut self,
val: Reference,
) -> &'a mut ResearchDefinitionBuilder {
self.value["subjectReference"] = json!(val.value);
return self;
}
pub fn subtitle<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["subtitle"] = json!(val);
return self;
}
pub fn text<'a>(&'a mut self, val: Narrative) -> &'a mut ResearchDefinitionBuilder {
self.value["text"] = json!(val.value);
return self;
}
pub fn title<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["title"] = json!(val);
return self;
}
pub fn topic<'a>(&'a mut self, val: Vec<CodeableConcept>) -> &'a mut ResearchDefinitionBuilder {
self.value["topic"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn url<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["url"] = json!(val);
return self;
}
pub fn usage<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["usage"] = json!(val);
return self;
}
pub fn use_context<'a>(
&'a mut self,
val: Vec<UsageContext>,
) -> &'a mut ResearchDefinitionBuilder {
self.value["useContext"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn version<'a>(&'a mut self, val: &str) -> &'a mut ResearchDefinitionBuilder {
self.value["version"] = json!(val);
return self;
}
}
#[derive(Debug)]
pub enum ResearchDefinitionStatus {
Draft,
Active,
Retired,
Unknown,
}
impl ResearchDefinitionStatus {
pub fn from_string(string: &str) -> Option<ResearchDefinitionStatus> {
match string {
"draft" => Some(ResearchDefinitionStatus::Draft),
"active" => Some(ResearchDefinitionStatus::Active),
"retired" => Some(ResearchDefinitionStatus::Retired),
"unknown" => Some(ResearchDefinitionStatus::Unknown),
_ => None,
}
}
pub fn to_string(&self) -> String {
match self {
ResearchDefinitionStatus::Draft => "draft".to_string(),
ResearchDefinitionStatus::Active => "active".to_string(),
ResearchDefinitionStatus::Retired => "retired".to_string(),
ResearchDefinitionStatus::Unknown => "unknown".to_string(),
}
}
}
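// A minimal sanity-check sketch (not part of the generated file): round-trips a
// status code through the `from_string`/`to_string` helpers defined above.
#[cfg(test)]
mod research_definition_status_tests {
    use super::*;

    #[test]
    fn status_round_trips_through_strings() {
        let status = ResearchDefinitionStatus::from_string("draft").unwrap();
        assert_eq!(status.to_string(), "draft");
        assert!(ResearchDefinitionStatus::from_string("bogus").is_none());
    }
}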
| 34.043916 | 100 | 0.545731 |
6ae5bf8fc7215e15821b87d0b8581411a8ebe801 | 1,577 | #[test]
fn margin_and_flex_row() {
let mut stretch = stretch::Stretch::new();
let node0 = stretch
.new_node(
stretch::style::Style {
flex_grow: 1f32,
margin: stretch::geometry::Rect {
start: stretch::style::Dimension::Points(10f32),
end: stretch::style::Dimension::Points(10f32),
..Default::default()
},
..Default::default()
},
&[],
)
.unwrap();
let node = stretch
.new_node(
stretch::style::Style {
size: stretch::geometry::Size {
width: stretch::style::Dimension::Points(100f32),
height: stretch::style::Dimension::Points(100f32),
..Default::default()
},
..Default::default()
},
&[node0],
)
.unwrap();
stretch.compute_layout(node, stretch::geometry::Size::undefined()).unwrap();
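    // With `flex_grow: 1` the child fills the 100pt parent minus its 10pt start
    // and end margins: 100 - 10 - 10 = 80pt wide, offset 10pt from the left edge.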
assert_eq!(stretch.layout(node).unwrap().size.width, 100f32);
assert_eq!(stretch.layout(node).unwrap().size.height, 100f32);
assert_eq!(stretch.layout(node).unwrap().location.x, 0f32);
assert_eq!(stretch.layout(node).unwrap().location.y, 0f32);
assert_eq!(stretch.layout(node0).unwrap().size.width, 80f32);
assert_eq!(stretch.layout(node0).unwrap().size.height, 100f32);
assert_eq!(stretch.layout(node0).unwrap().location.x, 10f32);
assert_eq!(stretch.layout(node0).unwrap().location.y, 0f32);
}
| 38.463415 | 80 | 0.538998 |
0ec4b2790f0fd6cfda5907f35e9a47b4550b9be7 | 4,929 | use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::time::{Instant, Duration};
use gilrs::{Gilrs, Button, Axis};
use multiinput::{RawInputManager, RawEvent};
use pad_motion::protocol::*;
use pad_motion::server::*;
fn main() {
let running = Arc::new(AtomicBool::new(true));
{
let running = running.clone();
ctrlc::set_handler(move || {
running.store(false, Ordering::SeqCst);
}).expect("Error setting Ctrl-C handler");
}
let server = Arc::new(Server::new(None, None).unwrap());
let server_thread_join_handle = {
let server = server.clone();
server.start(running.clone())
};
let controller_info = ControllerInfo {
slot_state: SlotState::Connected,
device_type: DeviceType::FullGyro,
connection_type: ConnectionType::USB,
.. Default::default()
};
server.update_controller_info(controller_info);
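    // Maps a gamepad axis value in [-1.0, 1.0] onto an unsigned byte
    // (approximately 0..=254, with 127 as the neutral center).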
fn to_stick_value(input: f32) -> u8 {
(input * 127.0 + 127.0) as u8
}
let mut gilrs = Gilrs::new().unwrap();
let mut mouse_manager = RawInputManager::new().unwrap();
mouse_manager.register_devices(multiinput::DeviceType::Mice);
let now = Instant::now();
while running.load(Ordering::SeqCst) {
// Consume controller events
while let Some(_event) = gilrs.next_event() {
}
let mut delta_rotation_x = 0.0;
let mut delta_rotation_y = 0.0;
let mut delta_mouse_wheel = 0.0;
while let Some(event) = mouse_manager.get_event() {
match event {
RawEvent::MouseMoveEvent(_mouse_id, delta_x, delta_y) => {
delta_rotation_x += delta_x as f32;
delta_rotation_y += delta_y as f32;
},
RawEvent::MouseWheelEvent(_mouse_id, delta) => {
delta_mouse_wheel += delta as f32;
}
_ => ()
}
}
let first_gamepad = gilrs.gamepads().next();
let controller_data = {
if let Some((_id, gamepad)) = first_gamepad {
let analog_button_value = |button| {
gamepad.button_data(button).map(|data| (data.value() * 255.0) as u8).unwrap_or(0)
};
ControllerData {
connected: true,
d_pad_left: gamepad.is_pressed(Button::DPadLeft),
d_pad_down: gamepad.is_pressed(Button::DPadDown),
d_pad_right: gamepad.is_pressed(Button::DPadRight),
d_pad_up: gamepad.is_pressed(Button::DPadUp),
start: gamepad.is_pressed(Button::Start),
right_stick_button: gamepad.is_pressed(Button::RightThumb),
left_stick_button: gamepad.is_pressed(Button::LeftThumb),
select: gamepad.is_pressed(Button::Select),
triangle: gamepad.is_pressed(Button::North),
circle: gamepad.is_pressed(Button::East),
cross: gamepad.is_pressed(Button::South),
square: gamepad.is_pressed(Button::West),
r1: gamepad.is_pressed(Button::RightTrigger),
l1: gamepad.is_pressed(Button::LeftTrigger),
r2: gamepad.is_pressed(Button::RightTrigger2),
l2: gamepad.is_pressed(Button::LeftTrigger2),
ps: analog_button_value(Button::Mode),
left_stick_x: to_stick_value(gamepad.value(Axis::LeftStickX)),
left_stick_y: to_stick_value(gamepad.value(Axis::LeftStickY)),
right_stick_x: to_stick_value(gamepad.value(Axis::RightStickX)),
right_stick_y: to_stick_value(gamepad.value(Axis::RightStickY)),
analog_d_pad_left: analog_button_value(Button::DPadLeft),
analog_d_pad_down: analog_button_value(Button::DPadDown),
analog_d_pad_right: analog_button_value(Button::DPadRight),
analog_d_pad_up: analog_button_value(Button::DPadUp),
analog_triangle: analog_button_value(Button::North),
analog_circle: analog_button_value(Button::East),
analog_cross: analog_button_value(Button::South),
analog_square: analog_button_value(Button::West),
analog_r1: analog_button_value(Button::RightTrigger),
analog_l1: analog_button_value(Button::LeftTrigger),
analog_r2: analog_button_value(Button::RightTrigger2),
analog_l2: analog_button_value(Button::LeftTrigger2),
motion_data_timestamp: now.elapsed().as_micros() as u64,
gyroscope_pitch: -delta_rotation_y * 10.0,
gyroscope_roll: delta_rotation_x * 10.0,
gyroscope_yaw: delta_mouse_wheel * 300.0,
.. Default::default()
}
} else {
ControllerData {
connected: true,
motion_data_timestamp: now.elapsed().as_micros() as u64,
gyroscope_pitch: -delta_rotation_y * 10.0,
gyroscope_roll: delta_rotation_x * 10.0,
gyroscope_yaw: delta_mouse_wheel * 300.0,
.. Default::default()
}
}
};
server.update_controller_data(0, controller_data);
std::thread::sleep(Duration::from_millis(10));
}
server_thread_join_handle.join().unwrap();
}
| 37.340909 | 91 | 0.654088 |
c13e543da05598456f2761fca652c5459e9a5972 | 12,453 | //! Module for kernels
//!
//! Currently used within Gaussian Processes and SVMs.
use std::ops::{Add, Mul};
use linalg::Vector;
use linalg::norm::{Euclidean, VectorNorm, VectorMetric};
use rulinalg::utils;
/// The Kernel trait
///
/// Requires a function mapping two vectors to a scalar.
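///
/// # Examples
///
/// Implementing the trait only requires supplying the `kernel` function. The
/// `ConstantKernel` type below is purely illustrative and is not part of the
/// crate:
///
/// ```
/// use rusty_machine::learning::toolkit::kernel::Kernel;
///
/// struct ConstantKernel {
///     value: f64,
/// }
///
/// impl Kernel for ConstantKernel {
///     fn kernel(&self, _x1: &[f64], _x2: &[f64]) -> f64 {
///         self.value
///     }
/// }
///
/// let ker = ConstantKernel { value: 2.0 };
/// assert_eq!(ker.kernel(&[1., 2., 3.], &[3., 4., 5.]), 2.0);
/// ```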
pub trait Kernel {
/// The kernel function.
///
/// Takes two equal length slices and returns a scalar.
fn kernel(&self, x1: &[f64], x2: &[f64]) -> f64;
}
/// The sum of two kernels
///
/// This struct should not be directly instantiated but instead
/// is created when we add two kernels together.
///
/// Note that it will be more efficient to implement the final kernel
/// manually yourself. However this provides an easy mechanism to test
/// different combinations.
///
/// # Examples
///
/// ```
/// use rusty_machine::learning::toolkit::kernel::{Kernel, Polynomial, HyperTan, KernelArith};
///
/// let poly_ker = Polynomial::new(1f64,2f64,3f64);
/// let hypert_ker = HyperTan::new(1f64,2.5);
///
/// let poly_plus_hypert_ker = KernelArith(poly_ker) + KernelArith(hypert_ker);
///
/// println!("{0}", poly_plus_hypert_ker.kernel(&[1f64,2f64,3f64],
/// &[3f64,1f64,2f64]));
/// ```
#[derive(Debug)]
pub struct KernelSum<T, U>
where T: Kernel,
U: Kernel
{
k1: T,
k2: U,
}
/// Computes the sum of the two associated kernels.
impl<T, U> Kernel for KernelSum<T, U>
where T: Kernel,
U: Kernel
{
fn kernel(&self, x1: &[f64], x2: &[f64]) -> f64 {
self.k1.kernel(x1, x2) + self.k2.kernel(x1, x2)
}
}
/// The pointwise product of two kernels
///
/// This struct should not be directly instantiated but instead
/// is created when we multiply two kernels together.
///
/// Note that it will be more efficient to implement the final kernel
/// manually yourself. However this provides an easy mechanism to test
/// different combinations.
///
/// # Examples
///
/// ```
/// use rusty_machine::learning::toolkit::kernel::{Kernel, Polynomial, HyperTan, KernelArith};
///
/// let poly_ker = Polynomial::new(1f64,2f64,3f64);
/// let hypert_ker = HyperTan::new(1f64,2.5);
///
/// let poly_plus_hypert_ker = KernelArith(poly_ker) * KernelArith(hypert_ker);
///
/// println!("{0}", poly_plus_hypert_ker.kernel(&[1f64,2f64,3f64],
/// &[3f64,1f64,2f64]));
/// ```
#[derive(Debug)]
pub struct KernelProd<T, U>
where T: Kernel,
U: Kernel
{
k1: T,
k2: U,
}
/// Computes the product of the two associated kernels.
impl<T, U> Kernel for KernelProd<T, U>
where T: Kernel,
U: Kernel
{
fn kernel(&self, x1: &[f64], x2: &[f64]) -> f64 {
self.k1.kernel(x1, x2) * self.k2.kernel(x1, x2)
}
}
/// A wrapper tuple struct used for kernel arithmetic
#[derive(Debug)]
pub struct KernelArith<K: Kernel>(pub K);
impl<T: Kernel, U: Kernel> Add<KernelArith<T>> for KernelArith<U> {
type Output = KernelSum<U, T>;
fn add(self, ker: KernelArith<T>) -> KernelSum<U, T> {
KernelSum {
k1: self.0,
k2: ker.0,
}
}
}
impl<T: Kernel, U: Kernel> Mul<KernelArith<T>> for KernelArith<U> {
type Output = KernelProd<U, T>;
fn mul(self, ker: KernelArith<T>) -> KernelProd<U, T> {
KernelProd {
k1: self.0,
k2: ker.0,
}
}
}
/// The Linear Kernel
///
/// k(x,y) = x<sup>T</sup>y + c
#[derive(Clone, Copy, Debug)]
pub struct Linear {
/// Constant term added to inner product.
pub c: f64,
}
impl Linear {
/// Constructs a new Linear Kernel.
///
/// # Examples
///
/// ```
/// use rusty_machine::learning::toolkit::kernel;
/// use rusty_machine::learning::toolkit::kernel::Kernel;
///
/// let ker = kernel::Linear::new(5.0);
///
/// println!("{0}", ker.kernel(&[1.,2.,3.], &[3.,4.,5.]));
/// ```
pub fn new(c: f64) -> Linear {
Linear { c: c }
}
}
/// Constructs the default Linear Kernel
///
/// The defaults are:
///
/// - c = 0
impl Default for Linear {
fn default() -> Linear {
Linear { c: 0f64 }
}
}
impl Kernel for Linear {
fn kernel(&self, x1: &[f64], x2: &[f64]) -> f64 {
utils::dot(x1, x2) + self.c
}
}
/// The Polynomial Kernel
///
/// k(x,y) = (αx<sup>T</sup>y + c)<sup>d</sup>
#[derive(Clone, Copy, Debug)]
pub struct Polynomial {
/// Scaling of the inner product.
pub alpha: f64,
/// Constant added to inner product.
pub c: f64,
/// The power to raise the sum to.
pub d: f64,
}
impl Polynomial {
/// Constructs a new Polynomial Kernel.
///
/// # Examples
///
/// ```
/// use rusty_machine::learning::toolkit::kernel;
/// use rusty_machine::learning::toolkit::kernel::Kernel;
///
/// // Constructs a new polynomial with alpha = 1, c = 0, d = 2.
/// let ker = kernel::Polynomial::new(1.0, 0.0, 2.0);
///
/// println!("{0}", ker.kernel(&[1.,2.,3.], &[3.,4.,5.]));
/// ```
pub fn new(alpha: f64, c: f64, d: f64) -> Polynomial {
Polynomial {
alpha: alpha,
c: c,
d: d,
}
}
}
/// Constructs the default Polynomial Kernel.
///
/// The defaults are:
///
/// - alpha = 1
/// - c = 0
/// - d = 1
impl Default for Polynomial {
fn default() -> Polynomial {
Polynomial {
alpha: 1f64,
c: 0f64,
d: 1f64,
}
}
}
impl Kernel for Polynomial {
fn kernel(&self, x1: &[f64], x2: &[f64]) -> f64 {
(self.alpha * utils::dot(x1, x2) + self.c).powf(self.d)
}
}
/// Squared exponential kernel
///
/// Equivalently a gaussian function.
///
/// k(x,y) = A _exp_(-||x-y||<sup>2</sup> / 2l<sup>2</sup>)
///
/// Where A is the amplitude and l the length scale.
#[derive(Clone, Copy, Debug)]
pub struct SquaredExp {
/// The length scale of the kernel.
pub ls: f64,
/// The amplitude of the kernel.
pub ampl: f64,
}
impl SquaredExp {
/// Construct a new squared exponential kernel.
///
/// # Examples
///
/// ```
/// use rusty_machine::learning::toolkit::kernel;
/// use rusty_machine::learning::toolkit::kernel::Kernel;
///
/// // Construct a kernel with lengthscale 2 and amplitude 1.
/// let ker = kernel::SquaredExp::new(2f64, 1f64);
///
/// println!("{0}", ker.kernel(&[1.,2.,3.], &[3.,4.,5.]));
/// ```
pub fn new(ls: f64, ampl: f64) -> SquaredExp {
SquaredExp {
ls: ls,
ampl: ampl,
}
}
}
/// Constructs the default Squared Exp kernel.
///
/// The defaults are:
///
/// - ls = 1
/// - ampl = 1
impl Default for SquaredExp {
fn default() -> SquaredExp {
SquaredExp {
ls: 1f64,
ampl: 1f64,
}
}
}
impl Kernel for SquaredExp {
/// The squared exponential kernel function.
fn kernel(&self, x1: &[f64], x2: &[f64]) -> f64 {
assert_eq!(x1.len(), x2.len());
let diff = Vector::new(x1.to_vec()) - Vector::new(x2.to_vec());
let x = -diff.dot(&diff) / (2f64 * self.ls * self.ls);
(self.ampl * x.exp())
}
}
/// The Exponential Kernel
///
/// k(x,y) = A _exp_(-||x-y|| / 2l<sup>2</sup>)
///
/// Where A is the amplitude and l is the length scale.
#[derive(Clone, Copy, Debug)]
pub struct Exponential {
/// The length scale of the kernel.
pub ls: f64,
/// The amplitude of the kernel.
pub ampl: f64,
}
impl Exponential {
/// Construct a new exponential kernel.
///
/// # Examples
///
/// ```
/// use rusty_machine::learning::toolkit::kernel;
/// use rusty_machine::learning::toolkit::kernel::Kernel;
///
/// // Construct a kernel with lengthscale 2 and amplitude 1.
/// let ker = kernel::Exponential::new(2f64, 1f64);
///
/// println!("{0}", ker.kernel(&[1.,2.,3.], &[3.,4.,5.]));
/// ```
pub fn new(ls: f64, ampl: f64) -> Exponential {
Exponential {
ls: ls,
ampl: ampl,
}
}
}
/// Constructs the default Exponential kernel.
///
/// The defaults are:
///
/// - ls = 1
/// - amplitude = 1
impl Default for Exponential {
fn default() -> Exponential {
Exponential {
ls: 1f64,
ampl: 1f64,
}
}
}
impl Kernel for Exponential {
/// The exponential kernel function.
fn kernel(&self, x1: &[f64], x2: &[f64]) -> f64 {
assert_eq!(x1.len(), x2.len());
let diff = Vector::new(x1.to_vec()) - Vector::new(x2.to_vec());
let x = -Euclidean.norm(&diff) / (2f64 * self.ls * self.ls);
(self.ampl * x.exp())
}
}
/// The Hyperbolic Tangent Kernel.
///
/// ker(x,y) = _tanh_(αx<sup>T</sup>y + c)
#[derive(Clone, Copy, Debug)]
pub struct HyperTan {
/// The scaling of the inner product.
pub alpha: f64,
/// The constant to add to the inner product.
pub c: f64,
}
impl HyperTan {
/// Constructs a new Hyperbolic Tangent Kernel.
///
/// # Examples
///
/// ```
/// use rusty_machine::learning::toolkit::kernel;
/// use rusty_machine::learning::toolkit::kernel::Kernel;
///
/// // Construct a kernel with alpha = 1, c = 2.
/// let ker = kernel::HyperTan::new(1.0, 2.0);
///
/// println!("{0}", ker.kernel(&[1.,2.,3.], &[3.,4.,5.]));
/// ```
pub fn new(alpha: f64, c: f64) -> HyperTan {
HyperTan {
alpha: alpha,
c: c,
}
}
}
/// Constructs a default Hyperbolic Tangent Kernel.
///
/// The defaults are:
///
/// - alpha = 1
/// - c = 0
impl Default for HyperTan {
fn default() -> HyperTan {
HyperTan {
alpha: 1f64,
c: 0f64,
}
}
}
impl Kernel for HyperTan {
fn kernel(&self, x1: &[f64], x2: &[f64]) -> f64 {
(self.alpha * utils::dot(x1, x2) + self.c).tanh()
}
}
/// The Multiquadric Kernel.
///
/// k(x,y) = _sqrt_(||x-y||<sup>2</sup> + c<sup>2</sup>)
#[derive(Clone, Copy, Debug)]
pub struct Multiquadric {
/// Constant added to square of difference.
pub c: f64,
}
impl Multiquadric {
/// Constructs a new Multiquadric Kernel.
///
/// # Examples
///
/// ```
/// use rusty_machine::learning::toolkit::kernel;
/// use rusty_machine::learning::toolkit::kernel::Kernel;
///
/// // Construct a kernel with c = 2.
/// let ker = kernel::Multiquadric::new(2.0);
///
/// println!("{0}", ker.kernel(&[1.,2.,3.], &[3.,4.,5.]));
/// ```
pub fn new(c: f64) -> Multiquadric {
Multiquadric { c: c }
}
}
/// Constructs a default Multiquadric Kernel.
///
/// The defaults are:
///
/// - c = 0
impl Default for Multiquadric {
fn default() -> Multiquadric {
Multiquadric { c: 0f64 }
}
}
impl Kernel for Multiquadric {
fn kernel(&self, x1: &[f64], x2: &[f64]) -> f64 {
assert_eq!(x1.len(), x2.len());
Euclidean.metric(&(x1.into()), &(x2.into())).hypot(self.c)
}
}
/// The Rational Quadratic Kernel.
///
/// k(x,y) = (1 + ||x-y||<sup>2</sup> / (2αl<sup>2</sup>))<sup>-α</sup>
#[derive(Clone, Copy, Debug)]
pub struct RationalQuadratic {
/// Controls inverse power and difference scale.
pub alpha: f64,
/// Length scale controls scale of difference.
pub ls: f64,
}
impl RationalQuadratic {
/// Constructs a new Rational Quadratic Kernel.
///
/// # Examples
///
/// ```
/// use rusty_machine::learning::toolkit::kernel;
/// use rusty_machine::learning::toolkit::kernel::Kernel;
///
/// // Construct a kernel with alpha = 2, ls = 2.
/// let ker = kernel::RationalQuadratic::new(2.0, 2.0);
///
/// println!("{0}", ker.kernel(&[1.,2.,3.], &[3.,4.,5.]));
/// ```
pub fn new(alpha: f64, ls: f64) -> RationalQuadratic {
RationalQuadratic {
alpha: alpha,
ls: ls,
}
}
}
/// Constructs the default Rational Quadratic Kernel.
///
/// The defaults are:
///
/// - alpha = 1
/// - ls = 1
impl Default for RationalQuadratic {
fn default() -> RationalQuadratic {
RationalQuadratic {
alpha: 1f64,
ls: 1f64,
}
}
}
impl Kernel for RationalQuadratic {
fn kernel(&self, x1: &[f64], x2: &[f64]) -> f64 {
let diff = Vector::new(x1.to_vec()) - Vector::new(x2.to_vec());
(1f64 + diff.dot(&diff) / (2f64 * self.alpha * self.ls * self.ls)).powf(-self.alpha)
}
}
| 24.227626 | 94 | 0.549105 |
7ad61aa90077f756651a1231191702b0f298273e | 818 | use lsp_types::request::Request;
use lsp_types::Range;
use lsp_types::TextDocumentIdentifier;
use serde::{Deserialize, Serialize};
#[derive(Deserialize, Serialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct SyntaxTreeParams {
pub text_document: TextDocumentIdentifier,
}
pub enum SyntaxTree {}
impl Request for SyntaxTree {
type Params = SyntaxTreeParams;
type Result = String;
const METHOD: &'static str = "asm/syntaxTree";
}
#[derive(Deserialize, Serialize, Debug)]
#[serde(rename_all = "camelCase")]
pub struct RunAnalysisParams {
pub text_document: TextDocumentIdentifier,
pub range: Option<Range>,
}
pub enum RunAnalysis {}
impl Request for RunAnalysis {
type Params = RunAnalysisParams;
type Result = String;
const METHOD: &'static str = "asm/runAnalysis";
}
| 24.058824 | 51 | 0.729829 |
dd0112847f079d88e4aac71fb5def93ef51f6110 | 4,774 | pub mod clients;
mod connection_string;
mod connection_string_builder;
mod copy_id;
mod copy_progress;
mod errors;
mod into_azure_path;
pub mod prelude;
pub mod shared_access_signature;
use std::convert::TryInto;
pub use self::connection_string::{ConnectionString, EndpointProtocol};
pub use self::connection_string_builder::ConnectionStringBuilder;
pub use self::into_azure_path::IntoAzurePath;
pub(crate) mod headers;
use bytes::Bytes;
pub use copy_id::{copy_id_from_headers, CopyId};
pub use copy_progress::CopyProgress;
pub(crate) mod parsing_xml;
mod stored_access_policy;
pub use errors::Error;
pub(crate) mod xml;
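// Marker types and traits used as compile-time type-state flags: `Yes`/`No`
// record (typically in builder type parameters) whether a value has been
// assigned yet.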
#[derive(Debug, Clone, Eq, PartialEq, Copy, Serialize, Deserialize)]
pub struct Yes;
#[derive(Debug, Clone, Eq, PartialEq, Copy, Serialize, Deserialize)]
pub struct No;
pub trait ToAssign: std::fmt::Debug {}
pub trait Assigned: ToAssign {}
pub trait NotAssigned: ToAssign {}
impl ToAssign for Yes {}
impl ToAssign for No {}
impl Assigned for Yes {}
impl NotAssigned for No {}
#[derive(Debug, Clone, PartialEq)]
pub struct IPRange {
pub start: std::net::IpAddr,
pub end: std::net::IpAddr,
}
use serde::{Deserialize, Deserializer};
pub use stored_access_policy::{StoredAccessPolicy, StoredAccessPolicyList};
#[derive(Debug, Clone, PartialEq)]
pub struct ConsistencyCRC64(Bytes);
const CRC64_BYTE_LENGTH: usize = 8;
impl ConsistencyCRC64 {
/// Decodes from base64 encoded input
pub fn decode(input: impl AsRef<[u8]>) -> Result<Self, Error> {
let bytes = base64::decode(input).map_err(Error::Base64DecodeError)?;
let bytes = Bytes::from(bytes);
match bytes.len() {
CRC64_BYTE_LENGTH => Ok(Self(bytes)),
len => Err(Error::CRC64Not8BytesLong(len)),
}
}
pub fn bytes(&self) -> &Bytes {
&self.0
}
pub fn as_slice(&self) -> &[u8; CRC64_BYTE_LENGTH] {
// we check the length when decoding, so this unwrap is safe
self.0.as_ref().try_into().unwrap()
}
}
impl AsRef<[u8; CRC64_BYTE_LENGTH]> for ConsistencyCRC64 {
fn as_ref(&self) -> &[u8; CRC64_BYTE_LENGTH] {
self.as_slice()
}
}
impl<'de> Deserialize<'de> for ConsistencyCRC64 {
fn deserialize<D>(deserializer: D) -> Result<Self, <D as Deserializer<'de>>::Error>
where
D: Deserializer<'de>,
{
let bytes = String::deserialize(deserializer)?;
Ok(ConsistencyCRC64::decode(bytes).map_err(serde::de::Error::custom)?)
}
}
#[derive(Debug, Clone, PartialEq)]
pub struct ConsistencyMD5(Bytes);
const MD5_BYTE_LENGTH: usize = 16;
impl ConsistencyMD5 {
/// Decodes from base64 encoded input
pub fn decode(input: impl AsRef<[u8]>) -> Result<Self, Error> {
let bytes = base64::decode(input).map_err(Error::Base64DecodeError)?;
let bytes = Bytes::from(bytes);
match bytes.len() {
MD5_BYTE_LENGTH => Ok(Self(bytes)),
len => Err(Error::DigestNot16BytesLong(len)),
}
}
pub fn bytes(&self) -> &Bytes {
&self.0
}
pub fn as_slice(&self) -> &[u8; MD5_BYTE_LENGTH] {
// we check the length when decoding, so this unwrap is safe
self.0.as_ref().try_into().unwrap()
}
}
impl AsRef<[u8; MD5_BYTE_LENGTH]> for ConsistencyMD5 {
fn as_ref(&self) -> &[u8; MD5_BYTE_LENGTH] {
self.as_slice()
}
}
impl<'de> Deserialize<'de> for ConsistencyMD5 {
fn deserialize<D>(deserializer: D) -> Result<Self, <D as Deserializer<'de>>::Error>
where
D: Deserializer<'de>,
{
let bytes = String::deserialize(deserializer)?;
Ok(ConsistencyMD5::decode(bytes).map_err(serde::de::Error::custom)?)
}
}
#[cfg(test)]
mod test {
use super::*;
use serde::de::value::{Error, StringDeserializer};
use serde::de::IntoDeserializer;
#[test]
fn should_deserialize_consistency_crc64() {
let input = base64::encode([1, 2, 4, 8, 16, 32, 64, 128]);
let deserializer: StringDeserializer<Error> = input.into_deserializer();
let content_crc64 = ConsistencyCRC64::deserialize(deserializer).unwrap();
assert_eq!(
content_crc64,
ConsistencyCRC64(Bytes::from_static(&[1, 2, 4, 8, 16, 32, 64, 128]))
);
}
#[test]
fn should_deserialize_consistency_md5() {
let input = base64::encode([1, 2, 4, 8, 16, 32, 64, 128, 1, 2, 4, 8, 16, 32, 64, 128]);
let deserializer: StringDeserializer<Error> = input.into_deserializer();
let content_md5 = ConsistencyMD5::deserialize(deserializer).unwrap();
assert_eq!(
content_md5,
ConsistencyMD5(Bytes::from_static(&[
1, 2, 4, 8, 16, 32, 64, 128, 1, 2, 4, 8, 16, 32, 64, 128
]))
);
}
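    // An additional sanity check (not part of the original suite): `decode`
    // should reject base64 input that does not decode to exactly 8 / 16 bytes.
    #[test]
    fn should_reject_wrong_length_input() {
        assert!(ConsistencyCRC64::decode(base64::encode([1, 2, 3])).is_err());
        assert!(ConsistencyMD5::decode(base64::encode([1, 2, 3])).is_err());
    }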
}
| 29.8375 | 95 | 0.647884 |
bfc7bdc9badb05a99c37ae96d64023b0652f19db | 353 | use crate::transformer::transformers;
use crate::utils::table;
/// display all transformers available
pub fn list() {
let mut table = table();
table.set_titles(row!["name", "description"]);
for transformer in transformers() {
table.add_row(row![transformer.id(), transformer.description()]);
}
let _ = table.printstd();
}
| 23.533333 | 73 | 0.660057 |
9c4cc58daec0fe1d5cbcc0e4f7ccd73423e6389d | 1,146 | use crate::result::*;
use crate::svc;
use crate::crt0;
use crate::ipc::sf;
use crate::service;
use crate::service::fatal;
use crate::service::fatal::IService;
use core::mem;
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
pub enum AssertMode {
ProcessExit,
FatalThrow,
SvcBreak,
Panic
}
pub fn assert(mode: AssertMode, rc: ResultCode) {
if rc.is_failure() {
match mode {
AssertMode::ProcessExit => {
crt0::exit(rc);
},
AssertMode::FatalThrow => {
match service::new_service_object::<fatal::Service>() {
Ok(fatal) => {
let _ = fatal.get().throw_with_policy(rc, fatal::Policy::ErrorScreen, sf::ProcessId::new());
},
_ => {}
};
},
AssertMode::SvcBreak => {
svc::break_(svc::BreakReason::Panic, &rc as *const _ as *const u8, mem::size_of::<ResultCode>());
},
AssertMode::Panic => {
let res: Result<()> = Err(rc);
res.unwrap();
},
}
}
} | 27.95122 | 116 | 0.487784 |
75c1aa0f49a7f3f03902b871a332ad0bf1fadb13 | 3,996 | // Copyright (c) 2021, BlockProject 3D
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification,
// are permitted provided that the following conditions are met:
//
// * Redistributions of source code must retain the above copyright notice,
// this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above copyright notice,
// this list of conditions and the following disclaimer in the documentation
// and/or other materials provided with the distribution.
// * Neither the name of BlockProject 3D nor the names of its contributors
// may be used to endorse or promote products derived from this software
// without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
// CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
// EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
// PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
// LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
// NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
// SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
use crate::shader::{Target, Type};
/// The required settings to create a new BPXS.
///
/// *This is intended to be generated with the help of [Builder](crate::shader::Builder).*
#[derive(Clone)]
pub struct Settings
{
/// The assembly hash of the shader package.
pub assembly_hash: u64,
/// The target rendering API of the shader package.
pub target: Target,
/// The type of the shader package (Assembly or Pipeline).
pub ty: Type
}
/// Utility to simplify generation of [Settings](crate::shader::Settings) required when creating a new BPXS.
pub struct Builder
{
settings: Settings
}
impl Default for Builder
{
fn default() -> Self
{
Self::new()
}
}
impl Builder
{
/// Creates a new BPX Shader Package builder.
pub fn new() -> Builder
{
Builder {
settings: Settings {
assembly_hash: 0,
target: Target::Any,
ty: Type::Pipeline
}
}
}
/// Defines the shader assembly this package is linked against.
///
/// *By default, no shader assembly is linked and the hash is 0.*
///
/// # Arguments
///
/// * `hash`: the shader assembly hash.
///
/// returns: ShaderPackBuilder
pub fn assembly(mut self, hash: u64) -> Self
{
self.settings.assembly_hash = hash;
self
}
/// Defines the target of this shader package.
///
/// *By default, the target is Any.*
///
/// # Arguments
///
/// * `target`: the shader target.
///
/// returns: ShaderPackBuilder
pub fn target(mut self, target: Target) -> Self
{
self.settings.target = target;
self
}
/// Defines the shader package type.
///
/// *By default, the type is Pipeline.*
///
/// # Arguments
///
/// * `ty`: the shader package type (pipeline/program or assembly).
///
/// returns: ShaderPackBuilder
pub fn ty(mut self, ty: Type) -> Self
{
self.settings.ty = ty;
self
}
/// Returns the built settings.
pub fn build(&self) -> Settings
{
self.settings.clone()
}
}
impl From<&mut Builder> for Settings
{
fn from(builder: &mut Builder) -> Self
{
builder.build()
}
}
impl From<Builder> for Settings
{
fn from(builder: Builder) -> Self
{
builder.build()
}
}
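// A minimal usage sketch (not part of the original file): exercises the fluent
// builder API using only items defined in this module.
#[cfg(test)]
mod builder_usage_tests
{
    use super::*;

    #[test]
    fn builder_applies_assembly_hash()
    {
        let settings: Settings = Builder::new().assembly(42).build();
        assert_eq!(settings.assembly_hash, 42);
        let defaulted: Settings = Builder::default().into();
        assert_eq!(defaulted.assembly_hash, 0);
    }
}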
| 28.140845 | 108 | 0.64014 |
2132c956fe7f26ef89bcf51824c1a395f3535e24 | 4,927 | use std::error::Error as _;
use std::io;
use time::error::{
ComponentRange, ConversionRange, Error, Format, IndeterminateOffset, InvalidFormatDescription,
Parse, ParseFromDescription, TryFromParsed,
};
use time::format_description::{self, modifier, Component, FormatItem};
use time::macros::format_description;
use time::parsing::Parsed;
use time::{Date, Time};
macro_rules! assert_display_eq {
($a:expr, $b:expr $(,)?) => {
assert_eq!($a.to_string(), $b.to_string())
};
}
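// `assert_dbg_reflexive` formats the same value twice with `{:?}` and compares
// the results; the comparison is trivially true, so its purpose is simply to
// exercise the `Debug` implementation without panicking.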
macro_rules! assert_dbg_reflexive {
($a:expr) => {
assert_eq!(format!("{:?}", $a), format!("{:?}", $a))
};
}
macro_rules! assert_source {
($err:expr,None $(,)?) => {
assert!($err.source().is_none())
};
($err:expr, $source:ty $(,)?) => {
assert!($err.source().unwrap().is::<$source>())
};
}
fn component_range() -> ComponentRange {
Date::from_ordinal_date(0, 367).unwrap_err()
}
fn insufficient_type_information() -> Format {
Time::MIDNIGHT
.format(&time::format_description::well_known::Rfc3339)
.unwrap_err()
}
fn unexpected_trailing_characters() -> Parse {
Time::parse("a", &format_description!("")).unwrap_err()
}
fn invalid_format_description() -> InvalidFormatDescription {
format_description::parse("[").unwrap_err()
}
fn io_error() -> io::Error {
io::Error::last_os_error()
}
fn invalid_literal() -> ParseFromDescription {
Parsed::parse_literal(b"a", b"b").unwrap_err()
}
#[test]
fn debug() {
assert_eq!(format!("{:?}", FormatItem::Literal(b"abcdef")), "abcdef");
assert_dbg_reflexive!(FormatItem::Compound(&[FormatItem::Component(
Component::Day(modifier::Day::default())
)]));
assert_dbg_reflexive!(Parse::from(ParseFromDescription::InvalidComponent("a")));
assert_dbg_reflexive!(invalid_format_description());
}
#[test]
fn display() {
assert_display_eq!(ConversionRange, Error::from(ConversionRange));
assert_display_eq!(component_range(), Error::from(component_range()));
assert_display_eq!(component_range(), TryFromParsed::from(component_range()));
assert_display_eq!(IndeterminateOffset, Error::from(IndeterminateOffset));
assert_display_eq!(
TryFromParsed::InsufficientInformation,
Error::from(TryFromParsed::InsufficientInformation)
);
assert_display_eq!(
insufficient_type_information(),
Error::from(insufficient_type_information())
);
assert_display_eq!(
Format::InvalidComponent("a"),
Error::from(Format::InvalidComponent("a"))
);
assert_display_eq!(
ParseFromDescription::InvalidComponent("a"),
Error::from(Parse::from(ParseFromDescription::InvalidComponent("a")))
);
assert_display_eq!(invalid_literal(), Parse::from(invalid_literal()));
assert_display_eq!(
component_range(),
Error::from(Parse::from(TryFromParsed::from(component_range())))
);
assert_display_eq!(
ParseFromDescription::InvalidComponent("a"),
Parse::from(ParseFromDescription::InvalidComponent("a"))
);
assert_display_eq!(
component_range(),
Parse::from(TryFromParsed::from(component_range()))
);
assert_display_eq!(
unexpected_trailing_characters(),
Error::from(unexpected_trailing_characters()),
);
assert_display_eq!(
invalid_format_description(),
Error::from(invalid_format_description())
);
assert_display_eq!(io_error(), Format::from(io_error()));
}
#[test]
fn source() {
assert_source!(Error::from(ConversionRange), ConversionRange);
assert_source!(Error::from(component_range()), ComponentRange);
assert_source!(TryFromParsed::from(component_range()), ComponentRange);
assert_source!(TryFromParsed::InsufficientInformation, None);
assert_source!(insufficient_type_information(), None);
assert_source!(Format::InvalidComponent("a"), None);
assert_source!(Error::from(insufficient_type_information()), Format);
assert_source!(Error::from(IndeterminateOffset), IndeterminateOffset);
assert_source!(
Parse::from(TryFromParsed::InsufficientInformation),
TryFromParsed
);
assert_source!(
Error::from(TryFromParsed::InsufficientInformation),
TryFromParsed
);
assert_source!(
Parse::from(ParseFromDescription::InvalidComponent("a")),
ParseFromDescription
);
assert_source!(
Error::from(ParseFromDescription::InvalidComponent("a")),
ParseFromDescription
);
assert_source!(unexpected_trailing_characters(), None);
assert_source!(Error::from(unexpected_trailing_characters()), None);
assert_source!(
Error::from(invalid_format_description()),
InvalidFormatDescription
);
assert_source!(Format::from(io_error()), io::Error);
}
#[test]
fn component_name() {
assert_eq!(component_range().name(), "ordinal");
}
| 31.787097 | 98 | 0.676477 |
483ff8ebcd214f09d362b31822ecb19f903c4cc6 | 975 | use std::sync::Arc;
use std::time::Duration;
use tokio::{sync::Mutex, task};
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error + Send + Sync>> {
console_subscriber::init();
task::Builder::default()
.name("main-task")
.spawn(async move {
let count = Arc::new(Mutex::new(0));
for i in 0..5 {
let my_count = Arc::clone(&count);
let task_name = format!("increment-{}", i);
tokio::task::Builder::default()
.name(&task_name)
.spawn(async move {
for _ in 0..10 {
let mut lock = my_count.lock().await;
*lock += 1;
tokio::time::sleep(Duration::from_secs(1)).await;
}
});
}
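            // Spin until all five increment tasks (10 increments each) have
            // driven the shared counter to 50.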
while *count.lock().await < 50 {}
})
.await?;
Ok(())
}
| 30.46875 | 77 | 0.420513 |
482636a6c51c55ba7b9fba7c33be62c3f3ec05ef | 7,038 | use crate::thread_worker::Worker;
use crate::types::*;
use crossbeam_channel::{Receiver, Sender, TryRecvError};
use jsonrpc_core::{self, Call, Output};
use std::collections::HashMap;
use std::io::{self, BufRead, BufReader, BufWriter, Error, ErrorKind, Read, Write};
use std::process::{Command, Stdio};
pub struct LanguageServerTransport {
// The field order is important as it defines the order of drop.
// We want to exit a writer loop first (after sending exit notification),
// then close all pipes and wait until child process is finished.
// That helps to ensure that reader loop is not stuck trying to read from the language server.
pub to_lang_server: Worker<ServerMessage, Void>,
pub from_lang_server: Worker<Void, ServerMessage>,
pub errors: Worker<Void, Void>,
}
pub fn start(cmd: &str, args: &[String]) -> Result<LanguageServerTransport, String> {
info!("Starting Language server `{} {}`", cmd, args.join(" "));
let mut child = match Command::new(cmd)
.args(args)
.stdin(Stdio::piped())
.stdout(Stdio::piped())
.stderr(Stdio::piped())
.spawn()
{
Ok(c) => c,
Err(err) => {
return Err(match err.kind() {
ErrorKind::NotFound | ErrorKind::PermissionDenied => format!("{}: {}", err, cmd),
_ => format!("{}", err),
})
}
};
let writer = BufWriter::new(child.stdin.take().expect("Failed to open stdin"));
let reader = BufReader::new(child.stdout.take().expect("Failed to open stdout"));
// NOTE 1024 is arbitrary
let channel_capacity = 1024;
// XXX temporary way of tracing language server errors
let mut stderr = BufReader::new(child.stderr.take().expect("Failed to open stderr"));
let errors = Worker::spawn(
"Language server errors",
channel_capacity,
move |receiver, _| loop {
if let Err(TryRecvError::Disconnected) = receiver.try_recv() {
return;
}
let mut buf = String::new();
match stderr.read_to_string(&mut buf) {
Ok(_) => {
if buf.is_empty() {
return;
}
error!("Language server error: {}", buf);
}
Err(e) => {
error!("Failed to read from language server stderr: {}", e);
return;
}
}
},
);
// XXX
let from_lang_server = Worker::spawn(
"Messages from language server",
channel_capacity,
move |receiver, sender| {
if let Err(msg) = reader_loop(reader, receiver, &sender) {
error!("{}", msg);
}
},
);
let to_lang_server = Worker::spawn(
"Messages to language server",
channel_capacity,
move |receiver, _| {
if writer_loop(writer, &receiver).is_err() {
error!("Failed to write message to language server");
}
// NOTE prevent zombie
debug!("Waiting for language server process end");
drop(child.stdin.take());
drop(child.stdout.take());
drop(child.stderr.take());
std::thread::sleep(std::time::Duration::from_secs(1));
match child.try_wait() {
Ok(None) => {
std::thread::sleep(std::time::Duration::from_secs(1));
if let Ok(None) = child.try_wait() {
// Okay, we asked politely enough and waited long enough.
child.kill().unwrap();
}
}
Err(_) => {
error!("Language server wasn't running was it?!");
}
_ => {}
}
},
);
Ok(LanguageServerTransport {
from_lang_server,
to_lang_server,
errors,
})
}
fn reader_loop(
mut reader: impl BufRead,
receiver: Receiver<Void>,
sender: &Sender<ServerMessage>,
) -> io::Result<()> {
let mut headers: HashMap<String, String> = HashMap::default();
loop {
if let Err(TryRecvError::Disconnected) = receiver.try_recv() {
return Ok(());
}
headers.clear();
loop {
let mut header = String::new();
if reader.read_line(&mut header)? == 0 {
debug!("Language server closed pipe, stopping reading");
return Ok(());
}
let header = header.trim();
if header.is_empty() {
break;
}
let parts: Vec<&str> = header.split(": ").collect();
if parts.len() != 2 {
return Err(Error::new(ErrorKind::Other, "Failed to parse header"));
}
headers.insert(parts[0].to_string(), parts[1].to_string());
}
let content_len = headers
.get("Content-Length")
.ok_or_else(|| Error::new(ErrorKind::Other, "Failed to get Content-Length header"))?
.parse()
.map_err(|_| Error::new(ErrorKind::Other, "Failed to parse Content-Length header"))?;
let mut content = vec![0; content_len];
reader.read_exact(&mut content)?;
let msg = String::from_utf8(content)
.map_err(|_| Error::new(ErrorKind::Other, "Failed to read content as UTF-8 string"))?;
debug!("From server: {}", msg);
let output: serde_json::Result<Output> = serde_json::from_str(&msg);
match output {
Ok(output) => {
if sender.send(ServerMessage::Response(output)).is_err() {
return Err(Error::new(ErrorKind::Other, "Failed to send response"));
}
}
Err(_) => {
let msg: Call = serde_json::from_str(&msg).map_err(|_| {
Error::new(ErrorKind::Other, "Failed to parse language server message")
})?;
if sender.send(ServerMessage::Request(msg)).is_err() {
return Err(Error::new(ErrorKind::Other, "Failed to send response"));
}
}
}
}
}
fn writer_loop(mut writer: impl Write, receiver: &Receiver<ServerMessage>) -> io::Result<()> {
for request in receiver {
let request = match request {
ServerMessage::Request(request) => serde_json::to_string(&request),
ServerMessage::Response(response) => serde_json::to_string(&response),
}?;
debug!("To server: {}", request);
write!(
writer,
"Content-Length: {}\r\n\r\n{}",
request.len(),
request
)?;
writer.flush()?;
}
// NOTE we rely on the assumption that language server will exit when its stdin is closed
// without need to kill child process
debug!("Received signal to stop language server, closing pipe");
Ok(())
}
| 36.848168 | 98 | 0.52998 |
dd339456a6d4673506e10287f518289654839e0f | 4,713 | //! Type names for rkyv_dyn.
//!
//! The goal of `TypeName` is to avoid allocations if possible. If all you need is the hash of a
//! type name, then there's no reason to allocate a string to do it.
//!
//! rkyv_typename provides a derive macro to easily implement [`TypeName`], and has options to
//! easily customize your type's name.
//!
//! # Examples
//! ```
//! use rkyv_typename::TypeName;
//! #[derive(TypeName)]
//! #[typename = "CoolType"]
//! struct Example<T>(T);
//!
//! let mut type_name = String::new();
//! Example::<i32>::build_type_name(|piece| type_name += piece);
//! assert_eq!(type_name, "CoolType<i32>");
//! ```
//!
//! ## Features
//!
//! - `std`: Implements [`TypeName`] for standard library types (enabled by default)
#![deny(rustdoc::broken_intra_doc_links)]
#![deny(missing_docs)]
#![deny(rustdoc::missing_crate_level_docs)]
#![cfg_attr(not(feature = "std"), no_std)]
mod core_impl;
#[cfg(feature = "std")]
mod std_impl;
pub use rkyv_typename_derive::TypeName;
/// Builds a name for a type.
///
/// An implementation can be derived automatically with `#[derive(TypeName)]`. See
/// [TypeName](macro@TypeName) for more details.
///
/// Names cannot be guaranteed to be unique and, although they are usually suitable to use as keys,
/// precautions should be taken to ensure that, if name collisions happen, they are detected and
/// fixable.
///
/// # Examples
///
/// Most of the time, `#[derive(TypeName)]` will suit your needs. However, if you need more control,
/// you can always implement it manually:
///
/// ```
/// use rkyv_typename::TypeName;
///
/// struct Example;
///
/// impl TypeName for Example {
/// fn build_type_name<F: FnMut(&str)>(mut f: F) {
/// f("CoolStruct");
/// }
/// }
///
/// struct GenericExample<T, U, V>(T, U, V);
///
/// impl<
/// T: TypeName,
/// U: TypeName,
/// V: TypeName
/// > TypeName for GenericExample<T, U, V> {
/// fn build_type_name<F: FnMut(&str)>(mut f: F) {
/// f("CoolGeneric<");
/// T::build_type_name(&mut f);
/// f(", ");
/// U::build_type_name(&mut f);
/// f(", ");
/// V::build_type_name(&mut f);
/// f(">");
/// }
/// }
///
/// fn type_name<T: TypeName>() -> String {
/// let mut result = String::new();
/// T::build_type_name(|piece| result += piece);
/// result
/// }
///
/// assert_eq!(type_name::<Example>(), "CoolStruct");
/// assert_eq!(
/// type_name::<GenericExample<i32, Option<String>, Example>>(),
/// "CoolGeneric<i32, core::option::Option<alloc::string::String>, CoolStruct>"
/// );
/// ```
pub trait TypeName {
/// Submits the pieces of the type name to the given function.
fn build_type_name<F: FnMut(&str)>(f: F);
}
impl<T: TypeName> TypeName for &T {
fn build_type_name<F: FnMut(&str)>(mut f: F) {
f("&");
T::build_type_name(f);
}
}
#[cfg(test)]
mod tests {
use crate as rkyv_typename;
use crate::TypeName;
fn type_name_string<T: TypeName>() -> String {
let mut result = String::new();
T::build_type_name(|piece| result += piece);
result
}
#[test]
fn builtin_types() {
assert_eq!(type_name_string::<i32>(), "i32");
assert_eq!(type_name_string::<(i32,)>(), "(i32,)");
assert_eq!(type_name_string::<(i32, i32)>(), "(i32, i32)");
assert_eq!(type_name_string::<[[u8; 4]; 8]>(), "[[u8; 4]; 8]");
assert_eq!(
type_name_string::<Option<[String; 1]>>(),
"core::option::Option<[alloc::string::String; 1]>"
);
assert_eq!(
type_name_string::<Option<[Option<u8>; 4]>>(),
"core::option::Option<[core::option::Option<u8>; 4]>"
);
}
#[test]
fn derive() {
#[derive(TypeName)]
struct Test;
assert_eq!(type_name_string::<Test>(), "rkyv_typename::tests::Test");
}
#[test]
fn derive_generic() {
#[derive(TypeName)]
struct Test<T, U, V>(T, U, V);
assert_eq!(
type_name_string::<Test<u8, [i32; 4], Option<String>>>(),
"rkyv_typename::tests::Test<u8, [i32; 4], core::option::Option<alloc::string::String>>"
);
}
#[test]
fn derive_custom_typename() {
#[derive(TypeName)]
#[typename = "Custom"]
struct Test;
assert_eq!(type_name_string::<Test>(), "Custom");
#[derive(TypeName)]
#[typename = "GenericCustom"]
struct GenericTest<T>(T);
assert_eq!(type_name_string::<GenericTest<i32>>(), "GenericCustom<i32>");
assert_eq!(
type_name_string::<GenericTest<Test>>(),
"GenericCustom<Custom>"
);
}
}
| 28.053571 | 100 | 0.572884 |
9cb7f0b47cdaef6d5ff25d38c4d844f778d28c9f | 13,274 | #[doc = "Reader of register RTC_CNTL_RESET_STATE"]
pub type R = crate::R<u32, super::RTC_CNTL_RESET_STATE>;
#[doc = "Writer for register RTC_CNTL_RESET_STATE"]
pub type W = crate::W<u32, super::RTC_CNTL_RESET_STATE>;
#[doc = "Register RTC_CNTL_RESET_STATE `reset()`'s with value 0"]
impl crate::ResetValue for super::RTC_CNTL_RESET_STATE {
type Type = u32;
#[inline(always)]
fn reset_value() -> Self::Type {
0
}
}
#[doc = "Reader of field `RTC_CNTL_DRESET_MASK_PROCPU`"]
pub type RTC_CNTL_DRESET_MASK_PROCPU_R = crate::R<bool, bool>;
#[doc = "Write proxy for field `RTC_CNTL_DRESET_MASK_PROCPU`"]
pub struct RTC_CNTL_DRESET_MASK_PROCPU_W<'a> {
w: &'a mut W,
}
impl<'a> RTC_CNTL_DRESET_MASK_PROCPU_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 25)) | (((value as u32) & 0x01) << 25);
self.w
}
}
#[doc = "Reader of field `RTC_CNTL_DRESET_MASK_APPCPU`"]
pub type RTC_CNTL_DRESET_MASK_APPCPU_R = crate::R<bool, bool>;
#[doc = "Write proxy for field `RTC_CNTL_DRESET_MASK_APPCPU`"]
pub struct RTC_CNTL_DRESET_MASK_APPCPU_W<'a> {
w: &'a mut W,
}
impl<'a> RTC_CNTL_DRESET_MASK_APPCPU_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 24)) | (((value as u32) & 0x01) << 24);
self.w
}
}
#[doc = "Write proxy for field `RTC_CNTL_JTAG_RESET_FLAG_CLR_APPCPU`"]
pub struct RTC_CNTL_JTAG_RESET_FLAG_CLR_APPCPU_W<'a> {
w: &'a mut W,
}
impl<'a> RTC_CNTL_JTAG_RESET_FLAG_CLR_APPCPU_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 23)) | (((value as u32) & 0x01) << 23);
self.w
}
}
#[doc = "Write proxy for field `RTC_CNTL_JTAG_RESET_FLAG_CLR_PROCPU`"]
pub struct RTC_CNTL_JTAG_RESET_FLAG_CLR_PROCPU_W<'a> {
w: &'a mut W,
}
impl<'a> RTC_CNTL_JTAG_RESET_FLAG_CLR_PROCPU_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 22)) | (((value as u32) & 0x01) << 22);
self.w
}
}
#[doc = "Reader of field `RTC_CNTL_JTAG_RESET_FLAG_APPCPU`"]
pub type RTC_CNTL_JTAG_RESET_FLAG_APPCPU_R = crate::R<bool, bool>;
#[doc = "Reader of field `RTC_CNTL_JTAG_RESET_FLAG_PROCPU`"]
pub type RTC_CNTL_JTAG_RESET_FLAG_PROCPU_R = crate::R<bool, bool>;
#[doc = "Reader of field `RTC_CNTL_OCD_HALT_ON_RESET_PROCPU`"]
pub type RTC_CNTL_OCD_HALT_ON_RESET_PROCPU_R = crate::R<bool, bool>;
#[doc = "Write proxy for field `RTC_CNTL_OCD_HALT_ON_RESET_PROCPU`"]
pub struct RTC_CNTL_OCD_HALT_ON_RESET_PROCPU_W<'a> {
w: &'a mut W,
}
impl<'a> RTC_CNTL_OCD_HALT_ON_RESET_PROCPU_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 19)) | (((value as u32) & 0x01) << 19);
self.w
}
}
#[doc = "Reader of field `RTC_CNTL_OCD_HALT_ON_RESET_APPCPU`"]
pub type RTC_CNTL_OCD_HALT_ON_RESET_APPCPU_R = crate::R<bool, bool>;
#[doc = "Write proxy for field `RTC_CNTL_OCD_HALT_ON_RESET_APPCPU`"]
pub struct RTC_CNTL_OCD_HALT_ON_RESET_APPCPU_W<'a> {
w: &'a mut W,
}
impl<'a> RTC_CNTL_OCD_HALT_ON_RESET_APPCPU_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 18)) | (((value as u32) & 0x01) << 18);
self.w
}
}
#[doc = "Write proxy for field `RTC_CNTL_ALL_RESET_FLAG_CLR_APPCPU`"]
pub struct RTC_CNTL_ALL_RESET_FLAG_CLR_APPCPU_W<'a> {
w: &'a mut W,
}
impl<'a> RTC_CNTL_ALL_RESET_FLAG_CLR_APPCPU_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 17)) | (((value as u32) & 0x01) << 17);
self.w
}
}
#[doc = "Write proxy for field `RTC_CNTL_ALL_RESET_FLAG_CLR_PROCPU`"]
pub struct RTC_CNTL_ALL_RESET_FLAG_CLR_PROCPU_W<'a> {
w: &'a mut W,
}
impl<'a> RTC_CNTL_ALL_RESET_FLAG_CLR_PROCPU_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 16)) | (((value as u32) & 0x01) << 16);
self.w
}
}
#[doc = "Reader of field `RTC_CNTL_ALL_RESET_FLAG_APPCPU`"]
pub type RTC_CNTL_ALL_RESET_FLAG_APPCPU_R = crate::R<bool, bool>;
#[doc = "Reader of field `RTC_CNTL_ALL_RESET_FLAG_PROCPU`"]
pub type RTC_CNTL_ALL_RESET_FLAG_PROCPU_R = crate::R<bool, bool>;
#[doc = "Reader of field `RTC_CNTL_STAT_VECTOR_SEL_PROCPU`"]
pub type RTC_CNTL_STAT_VECTOR_SEL_PROCPU_R = crate::R<bool, bool>;
#[doc = "Write proxy for field `RTC_CNTL_STAT_VECTOR_SEL_PROCPU`"]
pub struct RTC_CNTL_STAT_VECTOR_SEL_PROCPU_W<'a> {
w: &'a mut W,
}
impl<'a> RTC_CNTL_STAT_VECTOR_SEL_PROCPU_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 13)) | (((value as u32) & 0x01) << 13);
self.w
}
}
#[doc = "Reader of field `RTC_CNTL_STAT_VECTOR_SEL_APPCPU`"]
pub type RTC_CNTL_STAT_VECTOR_SEL_APPCPU_R = crate::R<bool, bool>;
#[doc = "Write proxy for field `RTC_CNTL_STAT_VECTOR_SEL_APPCPU`"]
pub struct RTC_CNTL_STAT_VECTOR_SEL_APPCPU_W<'a> {
w: &'a mut W,
}
impl<'a> RTC_CNTL_STAT_VECTOR_SEL_APPCPU_W<'a> {
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !(0x01 << 12)) | (((value as u32) & 0x01) << 12);
self.w
}
}
#[doc = "Reader of field `RTC_CNTL_RESET_CAUSE_APPCPU`"]
pub type RTC_CNTL_RESET_CAUSE_APPCPU_R = crate::R<u8, u8>;
#[doc = "Reader of field `RTC_CNTL_RESET_CAUSE_PROCPU`"]
pub type RTC_CNTL_RESET_CAUSE_PROCPU_R = crate::R<u8, u8>;
impl R {
#[doc = "Bit 25"]
#[inline(always)]
pub fn rtc_cntl_dreset_mask_procpu(&self) -> RTC_CNTL_DRESET_MASK_PROCPU_R {
RTC_CNTL_DRESET_MASK_PROCPU_R::new(((self.bits >> 25) & 0x01) != 0)
}
#[doc = "Bit 24"]
#[inline(always)]
pub fn rtc_cntl_dreset_mask_appcpu(&self) -> RTC_CNTL_DRESET_MASK_APPCPU_R {
RTC_CNTL_DRESET_MASK_APPCPU_R::new(((self.bits >> 24) & 0x01) != 0)
}
#[doc = "Bit 21"]
#[inline(always)]
pub fn rtc_cntl_jtag_reset_flag_appcpu(&self) -> RTC_CNTL_JTAG_RESET_FLAG_APPCPU_R {
RTC_CNTL_JTAG_RESET_FLAG_APPCPU_R::new(((self.bits >> 21) & 0x01) != 0)
}
#[doc = "Bit 20"]
#[inline(always)]
pub fn rtc_cntl_jtag_reset_flag_procpu(&self) -> RTC_CNTL_JTAG_RESET_FLAG_PROCPU_R {
RTC_CNTL_JTAG_RESET_FLAG_PROCPU_R::new(((self.bits >> 20) & 0x01) != 0)
}
#[doc = "Bit 19"]
#[inline(always)]
pub fn rtc_cntl_ocd_halt_on_reset_procpu(&self) -> RTC_CNTL_OCD_HALT_ON_RESET_PROCPU_R {
RTC_CNTL_OCD_HALT_ON_RESET_PROCPU_R::new(((self.bits >> 19) & 0x01) != 0)
}
#[doc = "Bit 18"]
#[inline(always)]
pub fn rtc_cntl_ocd_halt_on_reset_appcpu(&self) -> RTC_CNTL_OCD_HALT_ON_RESET_APPCPU_R {
RTC_CNTL_OCD_HALT_ON_RESET_APPCPU_R::new(((self.bits >> 18) & 0x01) != 0)
}
#[doc = "Bit 15"]
#[inline(always)]
pub fn rtc_cntl_all_reset_flag_appcpu(&self) -> RTC_CNTL_ALL_RESET_FLAG_APPCPU_R {
RTC_CNTL_ALL_RESET_FLAG_APPCPU_R::new(((self.bits >> 15) & 0x01) != 0)
}
#[doc = "Bit 14"]
#[inline(always)]
pub fn rtc_cntl_all_reset_flag_procpu(&self) -> RTC_CNTL_ALL_RESET_FLAG_PROCPU_R {
RTC_CNTL_ALL_RESET_FLAG_PROCPU_R::new(((self.bits >> 14) & 0x01) != 0)
}
#[doc = "Bit 13"]
#[inline(always)]
pub fn rtc_cntl_stat_vector_sel_procpu(&self) -> RTC_CNTL_STAT_VECTOR_SEL_PROCPU_R {
RTC_CNTL_STAT_VECTOR_SEL_PROCPU_R::new(((self.bits >> 13) & 0x01) != 0)
}
#[doc = "Bit 12"]
#[inline(always)]
pub fn rtc_cntl_stat_vector_sel_appcpu(&self) -> RTC_CNTL_STAT_VECTOR_SEL_APPCPU_R {
RTC_CNTL_STAT_VECTOR_SEL_APPCPU_R::new(((self.bits >> 12) & 0x01) != 0)
}
#[doc = "Bits 6:11"]
#[inline(always)]
pub fn rtc_cntl_reset_cause_appcpu(&self) -> RTC_CNTL_RESET_CAUSE_APPCPU_R {
RTC_CNTL_RESET_CAUSE_APPCPU_R::new(((self.bits >> 6) & 0x3f) as u8)
}
#[doc = "Bits 0:5"]
#[inline(always)]
pub fn rtc_cntl_reset_cause_procpu(&self) -> RTC_CNTL_RESET_CAUSE_PROCPU_R {
RTC_CNTL_RESET_CAUSE_PROCPU_R::new((self.bits & 0x3f) as u8)
}
}
impl W {
#[doc = "Bit 25"]
#[inline(always)]
pub fn rtc_cntl_dreset_mask_procpu(&mut self) -> RTC_CNTL_DRESET_MASK_PROCPU_W {
RTC_CNTL_DRESET_MASK_PROCPU_W { w: self }
}
#[doc = "Bit 24"]
#[inline(always)]
pub fn rtc_cntl_dreset_mask_appcpu(&mut self) -> RTC_CNTL_DRESET_MASK_APPCPU_W {
RTC_CNTL_DRESET_MASK_APPCPU_W { w: self }
}
#[doc = "Bit 23"]
#[inline(always)]
pub fn rtc_cntl_jtag_reset_flag_clr_appcpu(&mut self) -> RTC_CNTL_JTAG_RESET_FLAG_CLR_APPCPU_W {
RTC_CNTL_JTAG_RESET_FLAG_CLR_APPCPU_W { w: self }
}
#[doc = "Bit 22"]
#[inline(always)]
pub fn rtc_cntl_jtag_reset_flag_clr_procpu(&mut self) -> RTC_CNTL_JTAG_RESET_FLAG_CLR_PROCPU_W {
RTC_CNTL_JTAG_RESET_FLAG_CLR_PROCPU_W { w: self }
}
#[doc = "Bit 19"]
#[inline(always)]
pub fn rtc_cntl_ocd_halt_on_reset_procpu(&mut self) -> RTC_CNTL_OCD_HALT_ON_RESET_PROCPU_W {
RTC_CNTL_OCD_HALT_ON_RESET_PROCPU_W { w: self }
}
#[doc = "Bit 18"]
#[inline(always)]
pub fn rtc_cntl_ocd_halt_on_reset_appcpu(&mut self) -> RTC_CNTL_OCD_HALT_ON_RESET_APPCPU_W {
RTC_CNTL_OCD_HALT_ON_RESET_APPCPU_W { w: self }
}
#[doc = "Bit 17"]
#[inline(always)]
pub fn rtc_cntl_all_reset_flag_clr_appcpu(&mut self) -> RTC_CNTL_ALL_RESET_FLAG_CLR_APPCPU_W {
RTC_CNTL_ALL_RESET_FLAG_CLR_APPCPU_W { w: self }
}
#[doc = "Bit 16"]
#[inline(always)]
pub fn rtc_cntl_all_reset_flag_clr_procpu(&mut self) -> RTC_CNTL_ALL_RESET_FLAG_CLR_PROCPU_W {
RTC_CNTL_ALL_RESET_FLAG_CLR_PROCPU_W { w: self }
}
#[doc = "Bit 13"]
#[inline(always)]
pub fn rtc_cntl_stat_vector_sel_procpu(&mut self) -> RTC_CNTL_STAT_VECTOR_SEL_PROCPU_W {
RTC_CNTL_STAT_VECTOR_SEL_PROCPU_W { w: self }
}
#[doc = "Bit 12"]
#[inline(always)]
pub fn rtc_cntl_stat_vector_sel_appcpu(&mut self) -> RTC_CNTL_STAT_VECTOR_SEL_APPCPU_W {
RTC_CNTL_STAT_VECTOR_SEL_APPCPU_W { w: self }
}
}
| 35.778976 | 100 | 0.632063 |
f486eb980f2b805113365051d415ea10db33ceab | 1,419 | use bytecodec;
use std;
use trackable::error::TrackableError;
use trackable::error::{ErrorKind as TrackableErrorKind, ErrorKindExt};
use url;
/// This crate specific `Error` type.
#[derive(Debug, Clone)]
pub struct Error(TrackableError<ErrorKind>);
derive_traits_for_trackable_error_newtype!(Error, ErrorKind);
impl From<std::io::Error> for Error {
fn from(f: std::io::Error) -> Self {
ErrorKind::Other.cause(f).into()
}
}
impl From<std::sync::mpsc::RecvError> for Error {
fn from(f: std::sync::mpsc::RecvError) -> Self {
ErrorKind::Other.cause(f).into()
}
}
impl From<bytecodec::Error> for Error {
fn from(f: bytecodec::Error) -> Self {
let bytecodec_error_kind = *f.kind();
let kind = match *f.kind() {
bytecodec::ErrorKind::InvalidInput => ErrorKind::InvalidInput,
bytecodec::ErrorKind::UnexpectedEos => ErrorKind::UnexpectedEos,
_ => ErrorKind::Other,
};
track!(kind.takes_over(f); bytecodec_error_kind).into()
}
}
impl From<url::ParseError> for Error {
fn from(f: url::ParseError) -> Self {
ErrorKind::InvalidInput.cause(f).into()
}
}
/// Possible error kinds.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
#[allow(missing_docs)]
pub enum ErrorKind {
InvalidInput,
UnexpectedEos,
Timeout,
TemporarilyUnavailable,
Other,
}
impl TrackableErrorKind for ErrorKind {}
| 28.959184 | 76 | 0.658915 |
5bd77fec93b8b0785b9eeece324df6a2764cdc94 | 408 | use enums;
#[derive(Clone, Debug)]
pub struct AVG {
pub issues: Vec<String>,
pub fixed: Option<String>,
pub severity: enums::Severity,
pub status: enums::Status,
}
impl Default for AVG {
fn default() -> AVG {
AVG {
issues: vec![],
fixed: None,
severity: enums::Severity::Unknown,
status: enums::Status::Unknown,
}
}
}
| 19.428571 | 47 | 0.539216 |
1d6f32749df9b36cb64c8428c5ba9c45398c88f0 | 7,072 | use wasm_bindgen::prelude::*;
#[wasm_bindgen]
pub struct Board {
board: Vec<Cell>,
}
#[wasm_bindgen]
extern "C" {
#[wasm_bindgen(js_namespace = console)]
fn log(message: String);
}
#[wasm_bindgen]
#[derive(Copy, Clone, Debug, PartialEq)]
pub enum Cell {
EMPTY = 0,
PLAYER1 = 1,
PLAYER2 = 2,
TIE = 3,
}
#[wasm_bindgen]
impl Board {
#[wasm_bindgen(constructor, catch)]
pub fn new(width: u32) -> Board {
if width < 3 || width > 10 {
panic!("Width value must be between 3 and 10 {}", width);
}
let mut board = vec![];
for _ in 0..width.pow(2) {
board.push(Cell::EMPTY);
}
Self { board }
}
pub fn get_dim(&self) -> usize {
(self.board.len() as f64).sqrt() as usize
}
#[wasm_bindgen(js_name=getCell)]
pub fn get_cell(&self, x: usize, y: usize) -> Cell {
//let res = self.board[self.get_index(x, y)];
//log(format!("{:?}", res));
self.board[self.get_index(x, y)]
}
pub fn get_index(&self, x: usize, y: usize) -> usize {
self.get_dim() * x + y
}
#[wasm_bindgen(js_name=playerMove)]
pub fn player_move(&mut self, x: usize, y: usize, player: Cell) {
let index = self.get_index(x, y);
self.board[index] = player;
}
#[wasm_bindgen(js_name=checkWin)]
pub fn check_win(&self) -> Cell {
let board_dim = self.get_dim();
//Check rows
for row in self.board.chunks(board_dim) {
if all_equal(row) && row[0] != Cell::EMPTY {
return row[0]
}
}
//Check columns
for x in 0..board_dim {
let mut col = vec![];
for y in 0..board_dim {
col.push(self.get_cell(y, x));
}
if all_equal(&col) && col[0] != Cell::EMPTY {
return col[0];
}
}
//check diagonal
let mut left = vec![];
let mut right = vec![];
for i in 0..board_dim {
left.push(self.get_cell(i, i));
right.push(self.get_cell(board_dim - i - 1, i));
}
for diag in vec![left, right].iter() {
if all_equal(diag) && diag[0] != Cell::EMPTY {
return diag[0];
}
}
        // check if the game is over
if self
.board
.iter()
.any(|cell| *cell == Cell::EMPTY)
{
return Cell::EMPTY;
}
//
Cell::TIE
}
pub fn clone_board(original: &Board) -> Board {
let mut board = vec![];
for cell in original.board.iter() {
board.push(cell.clone());
}
Board { board }
}
}
fn all_equal(v: &[Cell]) -> bool {
!v.iter().any(|curr| *curr != v[0])
}
#[test]
fn test_constructor() {
let board = Board::new(3);
assert!(
board.board
== vec![
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY
]
);
let board = Board::new(4);
assert!(
board.board
== vec![
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY,
Cell::EMPTY
]
);
}
#[test]
#[should_panic]
fn test_constructor_should_panic() {
    let _board = Board::new(1);
}
#[test]
fn test_length() {
let board = Board::new(3);
assert!(board.get_dim() == 3);
let board = Board::new(6);
assert!(board.get_dim() == 6);
}
#[test]
fn test_get_move() {
let mut board = Board::new(3);
board.player_move(0, 0, Cell::PLAYER1);
board.player_move(2, 2, Cell::PLAYER2);
board.player_move(1, 2, Cell::PLAYER1);
assert!(board.get_cell(0, 0) == Cell::PLAYER1);
assert!(board.get_cell(2, 2) == Cell::PLAYER2);
assert!(board.get_cell(1, 2) == Cell::PLAYER1);
assert!(board.get_cell(0, 1) == Cell::EMPTY);
}
#[test]
fn test_check_player1_win() {
let mut board = Board::new(3);
board.player_move(0, 0, Cell::PLAYER1);
board.player_move(0, 1, Cell::PLAYER2);
board.player_move(0, 2, Cell::PLAYER1);
board.player_move(1, 0, Cell::PLAYER1);
board.player_move(1, 1, Cell::PLAYER2);
board.player_move(1, 2, Cell::PLAYER1);
board.player_move(2, 0, Cell::PLAYER1);
board.player_move(2, 1, Cell::PLAYER2);
board.player_move(2, 2, Cell::PLAYER1);
assert!(board.check_win() == Cell::PLAYER1);
}
#[test]
fn test_check_player2_win() {
let mut board = Board::new(3);
board.player_move(0, 0, Cell::PLAYER1);
board.player_move(0, 1, Cell::PLAYER2);
board.player_move(0, 2, Cell::PLAYER2);
board.player_move(1, 0, Cell::PLAYER1);
board.player_move(1, 1, Cell::PLAYER2);
board.player_move(1, 2, Cell::PLAYER2);
board.player_move(2, 0, Cell::PLAYER2);
board.player_move(2, 1, Cell::PLAYER2);
board.player_move(2, 2, Cell::PLAYER2);
assert!(board.check_win() == Cell::PLAYER2);
}
#[test]
fn test_check_diagonal_win() {
let mut board = Board::new(3);
board.player_move(0, 0, Cell::PLAYER1);
board.player_move(0, 1, Cell::PLAYER2);
board.player_move(0, 2, Cell::PLAYER2);
board.player_move(1, 0, Cell::PLAYER1);
board.player_move(1, 1, Cell::PLAYER1);
board.player_move(1, 2, Cell::EMPTY);
board.player_move(2, 0, Cell::PLAYER2);
board.player_move(2, 1, Cell::PLAYER2);
board.player_move(2, 2, Cell::PLAYER1);
assert!(board.check_win() == Cell::PLAYER1);
}
#[test]
fn test_check_game_not_finished_yet() {
let mut board = Board::new(3);
board.player_move(0, 0, Cell::EMPTY);
board.player_move(0, 1, Cell::PLAYER2);
board.player_move(0, 2, Cell::PLAYER2);
board.player_move(1, 0, Cell::PLAYER1);
board.player_move(1, 1, Cell::PLAYER1);
board.player_move(1, 2, Cell::EMPTY);
board.player_move(2, 0, Cell::PLAYER2);
board.player_move(2, 1, Cell::PLAYER2);
board.player_move(2, 2, Cell::PLAYER1);
assert!(board.check_win() == Cell::EMPTY);
}
#[test]
fn test_check_tie() {
let mut board = Board::new(3);
board.player_move(0, 0, Cell::PLAYER2);
board.player_move(0, 1, Cell::PLAYER2);
board.player_move(0, 2, Cell::PLAYER1);
board.player_move(1, 0, Cell::PLAYER1);
board.player_move(1, 1, Cell::PLAYER1);
board.player_move(1, 2, Cell::PLAYER2);
board.player_move(2, 0, Cell::PLAYER2);
board.player_move(2, 1, Cell::PLAYER2);
board.player_move(2, 2, Cell::PLAYER1);
assert!(board.check_win() == Cell::TIE);
}
| 25.904762 | 69 | 0.536765 |
2246729429a72e1a440dde1d6d45fcb71f8ba3e7 | 4,350 | /// Storage formats, and io functions for rbspy's internal raw storage format.
///
/// rbspy has a versioned "raw" storage format. The versioning info is stored,
/// along with a "magic number" at the start of the file. The magic number plus
/// version are the first 8 bytes of the file, and are represented as
///
/// b"rbspyXY\n"
///
/// Here, `XY` is a decimal number in [0-99]
///
/// The use of b'\n' as a terminator effectively reserves a byte, and provides
/// flexibility to go to a different version encoding scheme if this format
/// changes _way_ too much.
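///
/// For illustration only (a small sketch, not code used elsewhere in this
/// module), the 8-byte tag of a version-2 file breaks down as:
///
/// ```
/// let tag = b"rbspy02\n";
/// assert_eq!(&tag[..5], b"rbspy"); // the magic number
/// assert_eq!(&tag[5..7], b"02");   // the decimal version, in [0-99]
/// assert_eq!(tag[7], b'\n');       // the reserved terminator byte
/// ```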
extern crate flate2;
use std::io;
use std::io::prelude::*;
use std::fs::File;
use std::path::Path;
use std::time::SystemTime;
use crate::core::types::Header;
use crate::core::types::StackTrace;
use self::flate2::Compression;
use failure::Error;
use serde_json;
mod v0;
mod v1;
mod v2;
pub struct Store {
encoder: flate2::write::GzEncoder<File>,
}
impl Store {
pub fn new(out_path: &Path, sample_rate: u32) -> Result<Store, io::Error> {
let file = File::create(out_path)?;
let mut encoder = flate2::write::GzEncoder::new(file, Compression::default());
encoder.write_all("rbspy02\n".as_bytes())?;
let json = serde_json::to_string(&Header {
sample_rate: Some(sample_rate),
rbspy_version: Some(env!("CARGO_PKG_VERSION").to_string()),
start_time: Some(SystemTime::now()),
})?;
writeln!(&mut encoder, "{}", json)?;
Ok(Store { encoder })
}
pub fn write(&mut self, trace: &StackTrace) -> Result<(), Error> {
let json = serde_json::to_string(trace)?;
writeln!(&mut self.encoder, "{}", json)?;
Ok(())
}
pub fn complete(self) {
drop(self.encoder)
}
}
#[derive(Clone, Debug, Copy, Eq, PartialEq, Ord, PartialOrd)]
pub(crate) struct Version(u64);
impl ::std::fmt::Display for Version {
fn fmt(&self, f: &mut ::std::fmt::Formatter) -> ::std::fmt::Result {
write!(f, "{}", self.0)
}
}
impl Version {
/// Parse bytes to a version.
///
/// # Errors
/// Fails with `StorageError::Invalid` if the version tag is in an unknown
/// format.
fn try_from(b: &[u8]) -> Result<Version, StorageError> {
if &b[0..3] == "00\n".as_bytes() {
Ok(Version(0))
} else if &b[0..3] == "01\n".as_bytes() {
Ok(Version(1))
} else if &b[0..3] == "02\n".as_bytes() {
Ok(Version(2))
} else {
Err(StorageError::Invalid)
}
}
}
#[derive(Fail, Debug)]
pub(crate) enum StorageError {
/// The file doesn't begin with the magic tag `rbspy` + version number.
#[fail(display = "Invalid rbspy file")]
Invalid,
/// The version of the rbspy file can't be handled by this version of rbspy.
#[fail(display = "Cannot handle rbspy format {}", _0)]
UnknownVersion(Version),
/// An IO error occurred.
#[fail(display = "IO error {:?}", _0)]
Io(#[cause] io::Error),
}
/// Types that can be deserialized from an `io::Read` into something convertible
/// to the current internal form.
pub(crate) trait Storage: Into<v2::Data> {
fn from_reader<R: Read>(r: R) -> Result<Self, Error>;
fn version() -> Version;
}
fn read_version(r: &mut dyn Read) -> Result<Version, StorageError> {
let mut buf = [0u8; 8];
    // Read the full 8-byte tag; a short read here would leave `buf` partially
    // filled and make the version check below fail spuriously.
    r.read_exact(&mut buf).map_err(StorageError::Io)?;
match &buf[..5] {
b"rbspy" => Ok(Version::try_from(&buf[5..])?),
_ => Err(StorageError::Invalid),
}
}
pub(crate) fn from_reader<R: Read>(r: R) -> Result<v2::Data, Error> {
// This will read 8 bytes, leaving the reader's cursor at the start of the
// "real" data.
let mut reader = flate2::read::GzDecoder::new(r);
let version = read_version(&mut reader)?;
match version {
Version(0) => {
let intermediate = v0::Data::from_reader(reader)?;
Ok(intermediate.into())
}
Version(1) => {
let intermediate = v1::Data::from_reader(reader)?;
Ok(intermediate.into())
}
Version(2) => {
let intermediate = v2::Data::from_reader(reader)?;
Ok(intermediate)
}
v => Err(StorageError::UnknownVersion(v).into()),
}
}
| 30.208333 | 86 | 0.594253 |
4b7edda3cf860d03d08c0afa6a3e4df4fd60195b | 1,480 | /*
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
*/
use c25519::big::NLEN;
use arch::Chunk;
// Base Bits= 56
// Curve25519 Modulus
pub const MODULUS:[Chunk;NLEN]=[0xFFFFFFFFFFFFED,0xFFFFFFFFFFFFFF,0xFFFFFFFFFFFFFF,0xFFFFFFFFFFFFFF,0x7FFFFFFF];
pub const R2MODP:[Chunk;NLEN]=[0xA4000000000000,0x5,0x0,0x0,0x0];
pub const MCONST:Chunk=0x13;
// c25519 Curve
pub const CURVE_COF_I:isize = 8;
pub const CURVE_A:isize = 486662;
pub const CURVE_B_I:isize = 0;
pub const CURVE_COF:[Chunk;NLEN]=[0x8,0x0,0x0,0x0,0x0];
pub const CURVE_B:[Chunk;NLEN]=[0x0,0x0,0x0,0x0,0x0];
pub const CURVE_ORDER:[Chunk;NLEN]=[0x12631A5CF5D3ED,0xF9DEA2F79CD658,0x14DE,0x0,0x10000000];
pub const CURVE_GX:[Chunk;NLEN]=[0x9,0x0,0x0,0x0,0x0];
pub const CURVE_GY:[Chunk;NLEN]=[0x0,0x0,0x0,0x0,0x0];
| 37.948718 | 112 | 0.782432 |
ac42dde4de9e9c3bfe5090a4c3bff66682122b4d | 1,582 | // errors2.rs
// Say we're writing a game where you can buy items with tokens. All items cost
// 5 tokens, and whenever you purchase items there is a processing fee of 1
// token. A player of the game will type in how many items they want to buy,
// and the `total_cost` function will calculate the total number of tokens.
// Since the player typed in the quantity, though, we get it as a string-- and
// they might have typed anything, not just numbers!
// Right now, this function isn't handling the error case at all (and isn't
// handling the success case properly either). What we want to do is:
// if we call the `parse` function on a string that is not a number, that
// function will return a `ParseIntError`, and in that case, we want to
// immediately return that error from our function and not try to multiply
// and add.
// There are at least two ways to implement this that are both correct-- but
// one is a lot shorter! Execute `rustlings hint errors2` for hints to both ways.
use std::num::ParseIntError;
pub fn total_cost(item_quantity: &str) -> Result<i32, ParseIntError> {
let processing_fee = 1;
let cost_per_item = 5;
Ok(item_quantity.parse::<i32>()? * cost_per_item + processing_fee)
}
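// For comparison, here is a sketch of the longer of the two correct approaches
// mentioned above: an explicit `match` instead of the `?` operator. The name
// `total_cost_with_match` is illustrative and is not part of the exercise.
pub fn total_cost_with_match(item_quantity: &str) -> Result<i32, ParseIntError> {
    let processing_fee = 1;
    let cost_per_item = 5;
    match item_quantity.parse::<i32>() {
        // On success, apply the same pricing formula as `total_cost`.
        Ok(quantity) => Ok(quantity * cost_per_item + processing_fee),
        // On a parse failure, return the `ParseIntError` to the caller right away.
        Err(e) => Err(e),
    }
}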
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn item_quantity_is_a_valid_number() {
assert_eq!(total_cost("34"), Ok(171));
}
#[test]
fn item_quantity_is_an_invalid_number() {
assert_eq!(
total_cost("beep boop").unwrap_err().to_string(),
"invalid digit found in string"
);
}
}
| 35.954545 | 81 | 0.692162 |
39e97be7770c2418110b6765b39892582bed8ac9 | 50,116 | #![allow(non_camel_case_types)]
#![allow(non_upper_case_globals)]
#![allow(non_snake_case)]
#![allow(unused)]
#![allow(clippy::all)]
use libc::timespec;
/* automatically generated by rust-bindgen */
pub const FIO_IOOPS_VERSION: u32 = 24;
pub type __uint8_t = libc::c_uchar;
pub type __uint16_t = libc::c_ushort;
pub type __int32_t = libc::c_int;
pub type __uint32_t = libc::c_uint;
pub type __int64_t = libc::c_long;
pub type __uint64_t = libc::c_ulong;
pub type __intptr_t = __int64_t;
pub type __size_t = __uint64_t;
pub type __time_t = __int64_t;
pub type __off_t = __int64_t;
pub type __pid_t = __int32_t;
pub type __suseconds_t = libc::c_long;
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union __mbstate_t {
pub __mbstate8: [libc::c_char; 128usize],
pub _mbstateL: __int64_t,
_bindgen_union_align: [u64; 16usize],
}
pub type time_t = __time_t;
pub type pid_t = __pid_t;
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct pthread {
_unused: [u8; 0],
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct pthread_cond {
_unused: [u8; 0],
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct pthread_mutex {
_unused: [u8; 0],
}
pub type pthread_t = *mut pthread;
pub type pthread_mutex_t = *mut pthread_mutex;
pub type pthread_cond_t = *mut pthread_cond;
pub type suseconds_t = __suseconds_t;
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct timeval {
pub tv_sec: time_t,
pub tv_usec: suseconds_t,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct rusage {
pub ru_utime: timeval,
pub ru_stime: timeval,
pub ru_maxrss: libc::c_long,
pub ru_ixrss: libc::c_long,
pub ru_idrss: libc::c_long,
pub ru_isrss: libc::c_long,
pub ru_minflt: libc::c_long,
pub ru_majflt: libc::c_long,
pub ru_nswap: libc::c_long,
pub ru_inblock: libc::c_long,
pub ru_oublock: libc::c_long,
pub ru_msgsnd: libc::c_long,
pub ru_msgrcv: libc::c_long,
pub ru_nsignals: libc::c_long,
pub ru_nvcsw: libc::c_long,
pub ru_nivcsw: libc::c_long,
}
pub type fpos_t = __off_t;
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct __sbuf {
pub _base: *mut libc::c_uchar,
pub _size: libc::c_int,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct __sFILE {
pub _p: *mut libc::c_uchar,
pub _r: libc::c_int,
pub _w: libc::c_int,
pub _flags: libc::c_short,
pub _file: libc::c_short,
pub _bf: __sbuf,
pub _lbfsize: libc::c_int,
pub _cookie: *mut libc::c_void,
pub _close: ::std::option::Option<unsafe extern "C" fn(arg1: *mut libc::c_void) -> libc::c_int>,
pub _read: ::std::option::Option<
unsafe extern "C" fn(
arg1: *mut libc::c_void,
arg2: *mut libc::c_char,
arg3: libc::c_int,
) -> libc::c_int,
>,
pub _seek: ::std::option::Option<
unsafe extern "C" fn(arg1: *mut libc::c_void, arg2: fpos_t, arg3: libc::c_int) -> fpos_t,
>,
pub _write: ::std::option::Option<
unsafe extern "C" fn(
arg1: *mut libc::c_void,
arg2: *const libc::c_char,
arg3: libc::c_int,
) -> libc::c_int,
>,
pub _ub: __sbuf,
pub _up: *mut libc::c_uchar,
pub _ur: libc::c_int,
pub _ubuf: [libc::c_uchar; 3usize],
pub _nbuf: [libc::c_uchar; 1usize],
pub _lb: __sbuf,
pub _blksize: libc::c_int,
pub _offset: fpos_t,
pub _fl_mutex: *mut pthread_mutex,
pub _fl_owner: *mut pthread,
pub _fl_count: libc::c_int,
pub _orientation: libc::c_int,
pub _mbstate: __mbstate_t,
pub _flags2: libc::c_int,
}
pub type FILE = __sFILE;
pub type bool_ = libc::c_int;
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct _cpuset {
pub __bits: [libc::c_long; 4usize],
}
pub type cpuset_t = _cpuset;
pub const fio_ddir_DDIR_READ: fio_ddir = 0;
pub const fio_ddir_DDIR_WRITE: fio_ddir = 1;
pub const fio_ddir_DDIR_TRIM: fio_ddir = 2;
pub const fio_ddir_DDIR_SYNC: fio_ddir = 3;
pub const fio_ddir_DDIR_DATASYNC: fio_ddir = 4;
pub const fio_ddir_DDIR_SYNC_FILE_RANGE: fio_ddir = 5;
pub const fio_ddir_DDIR_WAIT: fio_ddir = 6;
pub const fio_ddir_DDIR_LAST: fio_ddir = 7;
pub const fio_ddir_DDIR_INVAL: fio_ddir = -1;
pub const fio_ddir_DDIR_RWDIR_CNT: fio_ddir = 3;
pub const fio_ddir_DDIR_RWDIR_SYNC_CNT: fio_ddir = 4;
pub type fio_ddir = i32;
pub const td_ddir_TD_DDIR_READ: td_ddir = 1;
pub const td_ddir_TD_DDIR_WRITE: td_ddir = 2;
pub const td_ddir_TD_DDIR_RAND: td_ddir = 4;
pub const td_ddir_TD_DDIR_TRIM: td_ddir = 8;
pub const td_ddir_TD_DDIR_RW: td_ddir = 3;
pub const td_ddir_TD_DDIR_RANDREAD: td_ddir = 5;
pub const td_ddir_TD_DDIR_RANDWRITE: td_ddir = 6;
pub const td_ddir_TD_DDIR_RANDRW: td_ddir = 7;
pub const td_ddir_TD_DDIR_RANDTRIM: td_ddir = 12;
pub const td_ddir_TD_DDIR_TRIMWRITE: td_ddir = 10;
pub type td_ddir = u32;
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct flist_head {
pub next: *mut flist_head,
pub prev: *mut flist_head,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct taus88_state {
pub s1: libc::c_uint,
pub s2: libc::c_uint,
pub s3: libc::c_uint,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct taus258_state {
pub s1: u64,
pub s2: u64,
pub s3: u64,
pub s4: u64,
pub s5: u64,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct frand_state {
pub use64: libc::c_uint,
pub __bindgen_anon_1: frand_state__bindgen_ty_1,
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union frand_state__bindgen_ty_1 {
pub state32: taus88_state,
pub state64: taus258_state,
_bindgen_union_align: [u64; 5usize],
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct zipf_state {
pub nranges: u64,
pub theta: f64,
pub zeta2: f64,
pub zetan: f64,
pub pareto_pow: f64,
pub rand: frand_state,
pub rand_off: u64,
pub disable_hash: bool_,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct axmap {
_unused: [u8; 0],
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct fio_lfsr {
pub xormask: u64,
pub last_val: u64,
pub cached_bit: u64,
pub max_val: u64,
pub num_vals: u64,
pub cycle_length: u64,
pub cached_cycle_length: u64,
pub spin: libc::c_uint,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct gauss_state {
pub r: frand_state,
pub nranges: u64,
pub stddev: libc::c_uint,
pub disable_hash: bool_,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct zoned_block_device_info {
_unused: [u8; 0],
}
pub const fio_filetype_FIO_TYPE_FILE: fio_filetype = 1;
pub const fio_filetype_FIO_TYPE_BLOCK: fio_filetype = 2;
pub const fio_filetype_FIO_TYPE_CHAR: fio_filetype = 3;
pub const fio_filetype_FIO_TYPE_PIPE: fio_filetype = 4;
pub type fio_filetype = u32;
pub const fio_file_flags_FIO_FILE_open: fio_file_flags = 1;
pub const fio_file_flags_FIO_FILE_closing: fio_file_flags = 2;
pub const fio_file_flags_FIO_FILE_extend: fio_file_flags = 4;
pub const fio_file_flags_FIO_FILE_done: fio_file_flags = 8;
pub const fio_file_flags_FIO_FILE_size_known: fio_file_flags = 16;
pub const fio_file_flags_FIO_FILE_hashed: fio_file_flags = 32;
pub const fio_file_flags_FIO_FILE_partial_mmap: fio_file_flags = 64;
pub const fio_file_flags_FIO_FILE_axmap: fio_file_flags = 128;
pub const fio_file_flags_FIO_FILE_lfsr: fio_file_flags = 256;
pub type fio_file_flags = u32;
pub const file_lock_mode_FILE_LOCK_NONE: file_lock_mode = 0;
pub const file_lock_mode_FILE_LOCK_EXCLUSIVE: file_lock_mode = 1;
pub const file_lock_mode_FILE_LOCK_READWRITE: file_lock_mode = 2;
pub type file_lock_mode = u32;
pub const fio_fallocate_mode_FIO_FALLOCATE_NONE: fio_fallocate_mode = 1;
pub const fio_fallocate_mode_FIO_FALLOCATE_POSIX: fio_fallocate_mode = 2;
pub const fio_fallocate_mode_FIO_FALLOCATE_KEEP_SIZE: fio_fallocate_mode = 3;
pub const fio_fallocate_mode_FIO_FALLOCATE_NATIVE: fio_fallocate_mode = 4;
pub type fio_fallocate_mode = u32;
#[repr(C)]
#[derive(Copy, Clone)]
pub struct fio_file {
pub hash_list: flist_head,
pub filetype: fio_filetype,
pub fd: libc::c_int,
pub shadow_fd: libc::c_int,
pub major: libc::c_uint,
pub minor: libc::c_uint,
pub fileno: libc::c_int,
pub file_name: *mut libc::c_char,
pub real_file_size: u64,
pub file_offset: u64,
pub io_size: u64,
pub zbd_info: *mut zoned_block_device_info,
pub last_pos: [u64; 3usize],
pub last_start: [u64; 3usize],
pub first_write: u64,
pub last_write: u64,
pub last_write_comp: *mut u64,
pub last_write_idx: libc::c_uint,
pub __bindgen_anon_1: fio_file__bindgen_ty_1,
pub __bindgen_anon_2: fio_file__bindgen_ty_2,
pub __bindgen_anon_3: fio_file__bindgen_ty_3,
pub __bindgen_anon_4: fio_file__bindgen_ty_4,
pub references: libc::c_int,
pub flags: fio_file_flags,
pub du: *mut disk_util,
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union fio_file__bindgen_ty_1 {
pub engine_pos: u64,
pub engine_data: *mut libc::c_void,
_bindgen_union_align: u64,
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union fio_file__bindgen_ty_2 {
pub lock: *mut fio_sem,
pub rwlock: *mut fio_rwlock,
_bindgen_union_align: u64,
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union fio_file__bindgen_ty_3 {
pub io_axmap: *mut axmap,
pub lfsr: fio_lfsr,
_bindgen_union_align: [u64; 8usize],
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union fio_file__bindgen_ty_4 {
pub zipf: zipf_state,
pub gauss: gauss_state,
_bindgen_union_align: [u64; 13usize],
}
extern "C" {
pub fn generic_open_file(arg1: *mut thread_data, arg2: *mut fio_file) -> libc::c_int;
}
extern "C" {
pub fn generic_close_file(arg1: *mut thread_data, arg2: *mut fio_file) -> libc::c_int;
}
pub type os_cpu_mask_t = cpuset_t;
pub const fio_opt_type_FIO_OPT_INVALID: fio_opt_type = 0;
pub const fio_opt_type_FIO_OPT_STR: fio_opt_type = 1;
pub const fio_opt_type_FIO_OPT_STR_ULL: fio_opt_type = 2;
pub const fio_opt_type_FIO_OPT_STR_MULTI: fio_opt_type = 3;
pub const fio_opt_type_FIO_OPT_STR_VAL: fio_opt_type = 4;
pub const fio_opt_type_FIO_OPT_STR_VAL_TIME: fio_opt_type = 5;
pub const fio_opt_type_FIO_OPT_STR_STORE: fio_opt_type = 6;
pub const fio_opt_type_FIO_OPT_RANGE: fio_opt_type = 7;
pub const fio_opt_type_FIO_OPT_INT: fio_opt_type = 8;
pub const fio_opt_type_FIO_OPT_ULL: fio_opt_type = 9;
pub const fio_opt_type_FIO_OPT_BOOL: fio_opt_type = 10;
pub const fio_opt_type_FIO_OPT_FLOAT_LIST: fio_opt_type = 11;
pub const fio_opt_type_FIO_OPT_STR_SET: fio_opt_type = 12;
pub const fio_opt_type_FIO_OPT_DEPRECATED: fio_opt_type = 13;
pub const fio_opt_type_FIO_OPT_SOFT_DEPRECATED: fio_opt_type = 14;
pub const fio_opt_type_FIO_OPT_UNSUPPORTED: fio_opt_type = 15;
pub type fio_opt_type = u32;
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct value_pair {
pub ival: *const libc::c_char,
pub oval: libc::c_ulonglong,
pub help: *const libc::c_char,
pub orval: libc::c_int,
pub cb: *mut libc::c_void,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct fio_option {
pub name: *const libc::c_char,
pub lname: *const libc::c_char,
pub alias: *const libc::c_char,
pub type_: fio_opt_type,
pub off1: libc::c_uint,
pub off2: libc::c_uint,
pub off3: libc::c_uint,
pub off4: libc::c_uint,
pub off5: libc::c_uint,
pub off6: libc::c_uint,
pub maxval: libc::c_ulonglong,
pub minval: libc::c_int,
pub maxfp: f64,
pub minfp: f64,
pub interval: libc::c_uint,
pub maxlen: libc::c_uint,
pub neg: libc::c_int,
pub prio: libc::c_int,
pub cb: *mut libc::c_void,
pub help: *const libc::c_char,
pub def: *const libc::c_char,
pub posval: [value_pair; 24usize],
pub parent: *const libc::c_char,
pub hide: libc::c_int,
pub hide_on_set: libc::c_int,
pub inverse: *const libc::c_char,
pub inv_opt: *mut fio_option,
pub verify: ::std::option::Option<
unsafe extern "C" fn(arg1: *const fio_option, arg2: *mut libc::c_void) -> libc::c_int,
>,
pub prof_name: *const libc::c_char,
pub prof_opts: *mut libc::c_void,
pub category: u64,
pub group: u64,
pub gui_data: *mut libc::c_void,
pub is_seconds: libc::c_int,
pub is_time: libc::c_int,
pub no_warn_def: libc::c_int,
pub pow2: libc::c_int,
pub no_free: libc::c_int,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct fio_rb_node {
pub rb_parent_color: isize,
pub rb_right: *mut fio_rb_node,
pub rb_left: *mut fio_rb_node,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct rb_root {
pub rb_node: *mut fio_rb_node,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct fio_fp64 {
pub u: fio_fp64__bindgen_ty_1,
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union fio_fp64__bindgen_ty_1 {
pub i: u64,
pub f: f64,
pub filler: [u8; 16usize],
_bindgen_union_align: [u64; 2usize],
}
pub type fio_fp64_t = fio_fp64;
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct workqueue_work {
pub list: flist_head,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct submit_worker {
pub thread: pthread_t,
pub lock: pthread_mutex_t,
pub cond: pthread_cond_t,
pub work_list: flist_head,
pub flags: libc::c_uint,
pub index: libc::c_uint,
pub seq: u64,
pub wq: *mut workqueue,
pub priv_: *mut libc::c_void,
pub sk_out: *mut sk_out,
}
pub type workqueue_work_fn = ::std::option::Option<
unsafe extern "C" fn(arg1: *mut submit_worker, arg2: *mut workqueue_work) -> libc::c_int,
>;
pub type workqueue_pre_sleep_flush_fn =
::std::option::Option<unsafe extern "C" fn(arg1: *mut submit_worker) -> bool_>;
pub type workqueue_pre_sleep_fn =
::std::option::Option<unsafe extern "C" fn(arg1: *mut submit_worker)>;
pub type workqueue_alloc_worker_fn =
::std::option::Option<unsafe extern "C" fn(arg1: *mut submit_worker) -> libc::c_int>;
pub type workqueue_free_worker_fn =
::std::option::Option<unsafe extern "C" fn(arg1: *mut submit_worker)>;
pub type workqueue_init_worker_fn =
::std::option::Option<unsafe extern "C" fn(arg1: *mut submit_worker) -> libc::c_int>;
pub type workqueue_exit_worker_fn =
::std::option::Option<unsafe extern "C" fn(arg1: *mut submit_worker, arg2: *mut libc::c_uint)>;
pub type workqueue_update_acct_fn =
::std::option::Option<unsafe extern "C" fn(arg1: *mut submit_worker)>;
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct workqueue_ops {
pub fn_: workqueue_work_fn,
pub pre_sleep_flush_fn: workqueue_pre_sleep_flush_fn,
pub pre_sleep_fn: workqueue_pre_sleep_fn,
pub update_acct_fn: workqueue_update_acct_fn,
pub alloc_worker_fn: workqueue_alloc_worker_fn,
pub free_worker_fn: workqueue_free_worker_fn,
pub init_worker_fn: workqueue_init_worker_fn,
pub exit_worker_fn: workqueue_exit_worker_fn,
pub nice: libc::c_uint,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct workqueue {
pub max_workers: libc::c_uint,
pub td: *mut thread_data,
pub ops: workqueue_ops,
pub work_seq: u64,
pub workers: *mut submit_worker,
pub next_free_worker: libc::c_uint,
pub flush_cond: pthread_cond_t,
pub flush_lock: pthread_mutex_t,
pub stat_lock: pthread_mutex_t,
pub wake_idle: libc::c_int,
}
#[repr(C)]
pub struct io_u {
pub start_time: timespec,
pub issue_time: timespec,
pub file: *mut fio_file,
pub flags: libc::c_uint,
pub ddir: fio_ddir,
pub acct_ddir: fio_ddir,
pub numberio: libc::c_ushort,
pub buflen: libc::c_ulonglong,
pub offset: libc::c_ulonglong,
pub buf: *mut libc::c_void,
pub rand_seed: u64,
pub xfer_buf: *mut libc::c_void,
pub xfer_buflen: libc::c_ulonglong,
pub buf_filled_len: libc::c_ulonglong,
pub ipo: *mut io_piece,
pub resid: libc::c_ulonglong,
pub error: libc::c_uint,
pub __bindgen_anon_1: io_u__bindgen_ty_1,
pub __bindgen_anon_2: io_u__bindgen_ty_2,
pub post_submit: ::std::option::Option<unsafe extern "C" fn(arg1: *const io_u, success: bool_)>,
pub end_io: ::std::option::Option<
unsafe extern "C" fn(arg1: *mut thread_data, arg2: *mut *mut io_u) -> libc::c_int,
>,
pub __bindgen_anon_3: io_u__bindgen_ty_3,
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union io_u__bindgen_ty_1 {
pub index: libc::c_uint,
pub seen: libc::c_uint,
pub engine_data: *mut libc::c_void,
_bindgen_union_align: u64,
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union io_u__bindgen_ty_2 {
pub verify_list: flist_head,
pub work: workqueue_work,
_bindgen_union_align: [u64; 2usize],
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union io_u__bindgen_ty_3 {
pub mmap_data: *mut libc::c_void,
_bindgen_union_align: u64,
}
pub const fio_q_status_FIO_Q_COMPLETED: fio_q_status = 0;
pub const fio_q_status_FIO_Q_QUEUED: fio_q_status = 1;
pub const fio_q_status_FIO_Q_BUSY: fio_q_status = 2;
pub type fio_q_status = u32;
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct ioengine_ops {
pub list: flist_head,
pub name: *const libc::c_char,
pub version: libc::c_int,
pub flags: libc::c_int,
pub setup: ::std::option::Option<unsafe extern "C" fn(arg1: *mut thread_data) -> libc::c_int>,
pub init: ::std::option::Option<unsafe extern "C" fn(arg1: *mut thread_data) -> libc::c_int>,
pub prep: ::std::option::Option<
unsafe extern "C" fn(arg1: *mut thread_data, arg2: *mut io_u) -> libc::c_int,
>,
pub queue: ::std::option::Option<
unsafe extern "C" fn(arg1: *mut thread_data, arg2: *mut io_u) -> fio_q_status,
>,
pub commit: ::std::option::Option<unsafe extern "C" fn(arg1: *mut thread_data) -> libc::c_int>,
pub getevents: ::std::option::Option<
unsafe extern "C" fn(
arg1: *mut thread_data,
arg2: libc::c_uint,
arg3: libc::c_uint,
arg4: *const timespec,
) -> libc::c_int,
>,
pub event: ::std::option::Option<
unsafe extern "C" fn(arg1: *mut thread_data, arg2: libc::c_int) -> *mut io_u,
>,
pub errdetails:
::std::option::Option<unsafe extern "C" fn(arg1: *mut io_u) -> *mut libc::c_char>,
pub cancel: ::std::option::Option<
unsafe extern "C" fn(arg1: *mut thread_data, arg2: *mut io_u) -> libc::c_int,
>,
pub cleanup: ::std::option::Option<unsafe extern "C" fn(arg1: *mut thread_data)>,
pub open_file: ::std::option::Option<
unsafe extern "C" fn(arg1: *mut thread_data, arg2: *mut fio_file) -> libc::c_int,
>,
pub close_file: ::std::option::Option<
unsafe extern "C" fn(arg1: *mut thread_data, arg2: *mut fio_file) -> libc::c_int,
>,
pub invalidate: ::std::option::Option<
unsafe extern "C" fn(arg1: *mut thread_data, arg2: *mut fio_file) -> libc::c_int,
>,
pub unlink_file: ::std::option::Option<
unsafe extern "C" fn(arg1: *mut thread_data, arg2: *mut fio_file) -> libc::c_int,
>,
pub get_file_size: ::std::option::Option<
unsafe extern "C" fn(arg1: *mut thread_data, arg2: *mut fio_file) -> libc::c_int,
>,
pub terminate: ::std::option::Option<unsafe extern "C" fn(arg1: *mut thread_data)>,
pub iomem_alloc: ::std::option::Option<
unsafe extern "C" fn(arg1: *mut thread_data, arg2: usize) -> libc::c_int,
>,
pub iomem_free: ::std::option::Option<unsafe extern "C" fn(arg1: *mut thread_data)>,
pub io_u_init: ::std::option::Option<
unsafe extern "C" fn(arg1: *mut thread_data, arg2: *mut io_u) -> libc::c_int,
>,
pub io_u_free:
::std::option::Option<unsafe extern "C" fn(arg1: *mut thread_data, arg2: *mut io_u)>,
pub option_struct_size: libc::c_int,
pub options: *mut fio_option,
}
pub const fio_ioengine_flags_FIO_SYNCIO: fio_ioengine_flags = 1;
pub const fio_ioengine_flags_FIO_RAWIO: fio_ioengine_flags = 2;
pub const fio_ioengine_flags_FIO_DISKLESSIO: fio_ioengine_flags = 4;
pub const fio_ioengine_flags_FIO_NOEXTEND: fio_ioengine_flags = 8;
pub const fio_ioengine_flags_FIO_NODISKUTIL: fio_ioengine_flags = 16;
pub const fio_ioengine_flags_FIO_UNIDIR: fio_ioengine_flags = 32;
pub const fio_ioengine_flags_FIO_NOIO: fio_ioengine_flags = 64;
pub const fio_ioengine_flags_FIO_PIPEIO: fio_ioengine_flags = 128;
pub const fio_ioengine_flags_FIO_BARRIER: fio_ioengine_flags = 256;
pub const fio_ioengine_flags_FIO_MEMALIGN: fio_ioengine_flags = 512;
pub const fio_ioengine_flags_FIO_BIT_BASED: fio_ioengine_flags = 1024;
pub const fio_ioengine_flags_FIO_FAKEIO: fio_ioengine_flags = 2048;
pub const fio_ioengine_flags_FIO_NOSTATS: fio_ioengine_flags = 4096;
pub const fio_ioengine_flags_FIO_NOFILEHASH: fio_ioengine_flags = 8192;
pub type fio_ioengine_flags = u32;
#[repr(C)]
#[derive(Copy, Clone)]
pub struct io_stat {
pub max_val: u64,
pub min_val: u64,
pub samples: u64,
pub mean: fio_fp64_t,
pub S: fio_fp64_t,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct io_hist {
pub samples: u64,
pub hist_last: libc::c_ulong,
pub list: flist_head,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct io_logs {
pub list: flist_head,
pub nr_samples: u64,
pub max_samples: u64,
pub log: *mut libc::c_void,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct io_log {
pub io_logs: flist_head,
pub cur_log_max: u32,
pub pending: *mut io_logs,
pub log_ddir_mask: libc::c_uint,
pub filename: *mut libc::c_char,
pub td: *mut thread_data,
pub log_type: libc::c_uint,
pub disabled: bool_,
pub log_offset: libc::c_uint,
pub log_gz: libc::c_uint,
pub log_gz_store: libc::c_uint,
pub avg_window: [io_stat; 3usize],
pub avg_msec: libc::c_ulong,
pub avg_last: [libc::c_ulong; 3usize],
pub hist_window: [io_hist; 3usize],
pub hist_msec: libc::c_ulong,
pub hist_coarseness: libc::c_uint,
pub chunk_lock: pthread_mutex_t,
pub chunk_seq: libc::c_uint,
pub chunk_list: flist_head,
pub deferred_free_lock: pthread_mutex_t,
pub deferred_items: [*mut libc::c_void; 8usize],
pub deferred: libc::c_uint,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct io_piece {
pub __bindgen_anon_1: io_piece__bindgen_ty_1,
pub trim_list: flist_head,
pub __bindgen_anon_2: io_piece__bindgen_ty_2,
pub offset: libc::c_ulonglong,
pub numberio: libc::c_ushort,
pub len: libc::c_ulong,
pub flags: libc::c_uint,
pub ddir: fio_ddir,
pub __bindgen_anon_3: io_piece__bindgen_ty_3,
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union io_piece__bindgen_ty_1 {
pub rb_node: fio_rb_node,
pub list: flist_head,
_bindgen_union_align: [u64; 3usize],
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union io_piece__bindgen_ty_2 {
pub fileno: libc::c_int,
pub file: *mut fio_file,
_bindgen_union_align: u64,
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union io_piece__bindgen_ty_3 {
pub delay: libc::c_ulong,
pub file_action: libc::c_uint,
_bindgen_union_align: u64,
}
#[repr(C)]
#[repr(align(8))]
pub struct thread_stat {
pub _bindgen_opaque_blob: [u64; 11841usize],
}
#[repr(C)]
#[repr(align(4))]
#[derive(Copy, Clone)]
pub union thread_stat__bindgen_ty_1 {
pub continue_on_error: u16,
pub pad2: u32,
_bindgen_union_align: u32,
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union thread_stat__bindgen_ty_2 {
pub ss_iops_data: *mut u64,
pub pad4: u64,
_bindgen_union_align: u64,
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union thread_stat__bindgen_ty_3 {
pub ss_bw_data: *mut u64,
pub pad5: u64,
_bindgen_union_align: u64,
}
pub const fio_cs_CS_GTOD: fio_cs = 1;
pub const fio_cs_CS_CGETTIME: fio_cs = 2;
pub const fio_cs_CS_CPUCLOCK: fio_cs = 3;
pub const fio_cs_CS_INVAL: fio_cs = 4;
pub type fio_cs = u32;
#[doc = " Pattern format description. The input for \'parse_pattern\'."]
#[doc = " Describes format with its name and callback, which should"]
#[doc = " be called to paste something inside the buffer."]
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct pattern_fmt_desc {
pub fmt: *const libc::c_char,
pub len: libc::c_uint,
pub paste: ::std::option::Option<
unsafe extern "C" fn(
buf: *mut libc::c_char,
len: libc::c_uint,
priv_: *mut libc::c_void,
) -> libc::c_int,
>,
}
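// Hedged sketch (not part of the generated bindings): a callback compatible with
// the `paste` field above could be written as follows; the name `paste_zeros`
// and its zero-filling body are illustrative only, and the return-value
// convention should be checked against fio's pattern parser.
//
//     unsafe extern "C" fn paste_zeros(
//         buf: *mut libc::c_char,
//         len: libc::c_uint,
//         _priv: *mut libc::c_void,
//     ) -> libc::c_int {
//         std::ptr::write_bytes(buf, 0, len as usize);
//         0
//     }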
#[doc = " Pattern format. The output of \'parse_pattern\'."]
#[doc = " Describes the exact position inside the xbuffer."]
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct pattern_fmt {
pub off: libc::c_uint,
pub desc: *const pattern_fmt_desc,
}
pub const error_type_ERROR_TYPE_NONE: error_type = 0;
pub const error_type_ERROR_TYPE_READ: error_type = 1;
pub const error_type_ERROR_TYPE_WRITE: error_type = 2;
pub const error_type_ERROR_TYPE_VERIFY: error_type = 4;
pub const error_type_ERROR_TYPE_ANY: error_type = 65535;
pub type error_type = u32;
pub const fio_zone_mode_ZONE_MODE_NOT_SPECIFIED: fio_zone_mode = 0;
pub const fio_zone_mode_ZONE_MODE_NONE: fio_zone_mode = 1;
pub const fio_zone_mode_ZONE_MODE_STRIDED: fio_zone_mode = 2;
pub const fio_zone_mode_ZONE_MODE_ZBD: fio_zone_mode = 3;
pub type fio_zone_mode = u32;
pub const fio_memtype_MEM_MALLOC: fio_memtype = 0;
pub const fio_memtype_MEM_SHM: fio_memtype = 1;
pub const fio_memtype_MEM_SHMHUGE: fio_memtype = 2;
pub const fio_memtype_MEM_MMAP: fio_memtype = 3;
pub const fio_memtype_MEM_MMAPHUGE: fio_memtype = 4;
pub const fio_memtype_MEM_MMAPSHARED: fio_memtype = 5;
pub const fio_memtype_MEM_CUDA_MALLOC: fio_memtype = 6;
pub type fio_memtype = u32;
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct bssplit {
pub bs: u64,
pub perc: u32,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct zone_split {
pub access_perc: u8,
pub size_perc: u8,
pub pad: [u8; 6usize],
pub size: u64,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct thread_options {
pub magic: libc::c_int,
pub set_options: [u64; 8usize],
pub description: *mut libc::c_char,
pub name: *mut libc::c_char,
pub wait_for: *mut libc::c_char,
pub directory: *mut libc::c_char,
pub filename: *mut libc::c_char,
pub filename_format: *mut libc::c_char,
pub opendir: *mut libc::c_char,
pub ioengine: *mut libc::c_char,
pub ioengine_so_path: *mut libc::c_char,
pub mmapfile: *mut libc::c_char,
pub td_ddir: td_ddir,
pub rw_seq: libc::c_uint,
pub kb_base: libc::c_uint,
pub unit_base: libc::c_uint,
pub ddir_seq_nr: libc::c_uint,
pub ddir_seq_add: libc::c_longlong,
pub iodepth: libc::c_uint,
pub iodepth_low: libc::c_uint,
pub iodepth_batch: libc::c_uint,
pub iodepth_batch_complete_min: libc::c_uint,
pub iodepth_batch_complete_max: libc::c_uint,
pub serialize_overlap: libc::c_uint,
pub unique_filename: libc::c_uint,
pub size: libc::c_ulonglong,
pub io_size: libc::c_ulonglong,
pub size_percent: libc::c_uint,
pub fill_device: libc::c_uint,
pub file_append: libc::c_uint,
pub file_size_low: libc::c_ulonglong,
pub file_size_high: libc::c_ulonglong,
pub start_offset: libc::c_ulonglong,
pub start_offset_align: libc::c_ulonglong,
pub bs: [libc::c_ulonglong; 3usize],
pub ba: [libc::c_ulonglong; 3usize],
pub min_bs: [libc::c_ulonglong; 3usize],
pub max_bs: [libc::c_ulonglong; 3usize],
pub bssplit: [*mut bssplit; 3usize],
pub bssplit_nr: [libc::c_uint; 3usize],
pub ignore_error: [*mut libc::c_int; 3usize],
pub ignore_error_nr: [libc::c_uint; 3usize],
pub error_dump: libc::c_uint,
pub nr_files: libc::c_uint,
pub open_files: libc::c_uint,
pub file_lock_mode: file_lock_mode,
pub odirect: libc::c_uint,
pub oatomic: libc::c_uint,
pub invalidate_cache: libc::c_uint,
pub create_serialize: libc::c_uint,
pub create_fsync: libc::c_uint,
pub create_on_open: libc::c_uint,
pub create_only: libc::c_uint,
pub end_fsync: libc::c_uint,
pub pre_read: libc::c_uint,
pub sync_io: libc::c_uint,
pub write_hint: libc::c_uint,
pub verify: libc::c_uint,
pub do_verify: libc::c_uint,
pub verify_interval: libc::c_uint,
pub verify_offset: libc::c_uint,
pub verify_pattern: [libc::c_char; 512usize],
pub verify_pattern_bytes: libc::c_uint,
pub verify_fmt: [pattern_fmt; 8usize],
pub verify_fmt_sz: libc::c_uint,
pub verify_fatal: libc::c_uint,
pub verify_dump: libc::c_uint,
pub verify_async: libc::c_uint,
pub verify_backlog: libc::c_ulonglong,
pub verify_batch: libc::c_uint,
pub experimental_verify: libc::c_uint,
pub verify_state: libc::c_uint,
pub verify_state_save: libc::c_uint,
pub use_thread: libc::c_uint,
pub unlink: libc::c_uint,
pub unlink_each_loop: libc::c_uint,
pub do_disk_util: libc::c_uint,
pub override_sync: libc::c_uint,
pub rand_repeatable: libc::c_uint,
pub allrand_repeatable: libc::c_uint,
pub rand_seed: libc::c_ulonglong,
pub log_avg_msec: libc::c_uint,
pub log_hist_msec: libc::c_uint,
pub log_hist_coarseness: libc::c_uint,
pub log_max: libc::c_uint,
pub log_offset: libc::c_uint,
pub log_gz: libc::c_uint,
pub log_gz_store: libc::c_uint,
pub log_unix_epoch: libc::c_uint,
pub norandommap: libc::c_uint,
pub softrandommap: libc::c_uint,
pub bs_unaligned: libc::c_uint,
pub fsync_on_close: libc::c_uint,
pub bs_is_seq_rand: libc::c_uint,
pub verify_only: libc::c_uint,
pub random_distribution: libc::c_uint,
pub exitall_error: libc::c_uint,
pub zone_split: [*mut zone_split; 3usize],
pub zone_split_nr: [libc::c_uint; 3usize],
pub zipf_theta: fio_fp64_t,
pub pareto_h: fio_fp64_t,
pub gauss_dev: fio_fp64_t,
pub random_generator: libc::c_uint,
pub perc_rand: [libc::c_uint; 3usize],
pub hugepage_size: libc::c_uint,
pub rw_min_bs: libc::c_ulonglong,
pub thinktime: libc::c_uint,
pub thinktime_spin: libc::c_uint,
pub thinktime_blocks: libc::c_uint,
pub fsync_blocks: libc::c_uint,
pub fdatasync_blocks: libc::c_uint,
pub barrier_blocks: libc::c_uint,
pub start_delay: libc::c_ulonglong,
pub start_delay_orig: libc::c_ulonglong,
pub start_delay_high: libc::c_ulonglong,
pub timeout: libc::c_ulonglong,
pub ramp_time: libc::c_ulonglong,
pub ss_state: libc::c_uint,
pub ss_limit: fio_fp64_t,
pub ss_dur: libc::c_ulonglong,
pub ss_ramp_time: libc::c_ulonglong,
pub overwrite: libc::c_uint,
pub bw_avg_time: libc::c_uint,
pub iops_avg_time: libc::c_uint,
pub loops: libc::c_uint,
pub zone_range: libc::c_ulonglong,
pub zone_size: libc::c_ulonglong,
pub zone_skip: libc::c_ulonglong,
pub zone_mode: fio_zone_mode,
pub lockmem: libc::c_ulonglong,
pub mem_type: fio_memtype,
pub mem_align: libc::c_uint,
pub max_latency: libc::c_ulonglong,
pub stonewall: libc::c_uint,
pub new_group: libc::c_uint,
pub numjobs: libc::c_uint,
pub cpumask: os_cpu_mask_t,
pub verify_cpumask: os_cpu_mask_t,
pub log_gz_cpumask: os_cpu_mask_t,
pub cpus_allowed_policy: libc::c_uint,
pub numa_cpunodes: *mut libc::c_char,
pub numa_mem_mode: libc::c_ushort,
pub numa_mem_prefer_node: libc::c_uint,
pub numa_memnodes: *mut libc::c_char,
pub gpu_dev_id: libc::c_uint,
pub start_offset_percent: libc::c_uint,
pub iolog: libc::c_uint,
pub rwmixcycle: libc::c_uint,
pub rwmix: [libc::c_uint; 3usize],
pub nice: libc::c_uint,
pub ioprio: libc::c_uint,
pub ioprio_class: libc::c_uint,
pub file_service_type: libc::c_uint,
pub group_reporting: libc::c_uint,
pub stats: libc::c_uint,
pub fadvise_hint: libc::c_uint,
pub fallocate_mode: fio_fallocate_mode,
pub zero_buffers: libc::c_uint,
pub refill_buffers: libc::c_uint,
pub scramble_buffers: libc::c_uint,
pub buffer_pattern: [libc::c_char; 512usize],
pub buffer_pattern_bytes: libc::c_uint,
pub compress_percentage: libc::c_uint,
pub compress_chunk: libc::c_uint,
pub dedupe_percentage: libc::c_uint,
pub time_based: libc::c_uint,
pub disable_lat: libc::c_uint,
pub disable_clat: libc::c_uint,
pub disable_slat: libc::c_uint,
pub disable_bw: libc::c_uint,
pub unified_rw_rep: libc::c_uint,
pub gtod_reduce: libc::c_uint,
pub gtod_cpu: libc::c_uint,
pub clocksource: fio_cs,
pub no_stall: libc::c_uint,
pub trim_percentage: libc::c_uint,
pub trim_batch: libc::c_uint,
pub trim_zero: libc::c_uint,
pub trim_backlog: libc::c_ulonglong,
pub clat_percentiles: libc::c_uint,
pub lat_percentiles: libc::c_uint,
pub percentile_precision: libc::c_uint,
pub percentile_list: [fio_fp64_t; 20usize],
pub read_iolog_file: *mut libc::c_char,
pub read_iolog_chunked: bool_,
pub write_iolog_file: *mut libc::c_char,
pub merge_blktrace_file: *mut libc::c_char,
pub merge_blktrace_scalars: [fio_fp64_t; 20usize],
pub merge_blktrace_iters: [fio_fp64_t; 20usize],
pub write_bw_log: libc::c_uint,
pub write_lat_log: libc::c_uint,
pub write_iops_log: libc::c_uint,
pub write_hist_log: libc::c_uint,
pub bw_log_file: *mut libc::c_char,
pub lat_log_file: *mut libc::c_char,
pub iops_log_file: *mut libc::c_char,
pub hist_log_file: *mut libc::c_char,
pub replay_redirect: *mut libc::c_char,
pub exec_prerun: *mut libc::c_char,
pub exec_postrun: *mut libc::c_char,
pub rate: [u64; 3usize],
pub ratemin: [u64; 3usize],
pub ratecycle: libc::c_uint,
pub io_submit_mode: libc::c_uint,
pub rate_iops: [libc::c_uint; 3usize],
pub rate_iops_min: [libc::c_uint; 3usize],
pub rate_process: libc::c_uint,
pub rate_ign_think: libc::c_uint,
pub ioscheduler: *mut libc::c_char,
pub continue_on_error: error_type,
pub profile: *mut libc::c_char,
pub cgroup: *mut libc::c_char,
pub cgroup_weight: libc::c_uint,
pub cgroup_nodelete: libc::c_uint,
pub uid: libc::c_uint,
pub gid: libc::c_uint,
pub flow_id: libc::c_int,
pub flow: libc::c_int,
pub flow_watermark: libc::c_int,
pub flow_sleep: libc::c_uint,
pub offset_increment: libc::c_ulonglong,
pub number_ios: libc::c_ulonglong,
pub sync_file_range: libc::c_uint,
pub latency_target: libc::c_ulonglong,
pub latency_window: libc::c_ulonglong,
pub latency_percentile: fio_fp64_t,
pub sig_figs: libc::c_uint,
pub block_error_hist: libc::c_uint,
pub replay_align: libc::c_uint,
pub replay_scale: libc::c_uint,
pub replay_time_scale: libc::c_uint,
pub replay_skip: libc::c_uint,
pub per_job_logs: libc::c_uint,
pub allow_create: libc::c_uint,
pub allow_mounted_write: libc::c_uint,
pub read_beyond_wp: libc::c_uint,
pub max_open_zones: libc::c_int,
pub zrt: fio_fp64_t,
pub zrf: fio_fp64_t,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct prof_io_ops {
pub td_init: ::std::option::Option<unsafe extern "C" fn(arg1: *mut thread_data) -> libc::c_int>,
pub td_exit: ::std::option::Option<unsafe extern "C" fn(arg1: *mut thread_data)>,
pub io_u_lat: ::std::option::Option<
unsafe extern "C" fn(arg1: *mut thread_data, arg2: u64) -> libc::c_int,
>,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct fio_sem {
pub lock: pthread_mutex_t,
pub cond: pthread_cond_t,
pub value: libc::c_int,
pub waiters: libc::c_int,
pub magic: libc::c_int,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct disk_util_stats {
pub ios: [u64; 2usize],
pub merges: [u64; 2usize],
pub sectors: [u64; 2usize],
pub ticks: [u64; 2usize],
pub io_ticks: u64,
pub time_in_queue: u64,
pub msec: u64,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct disk_util_stat {
pub name: [u8; 64usize],
pub s: disk_util_stats,
}
#[repr(C)]
#[derive(Copy, Clone)]
pub struct disk_util_agg {
pub ios: [u64; 2usize],
pub merges: [u64; 2usize],
pub sectors: [u64; 2usize],
pub ticks: [u64; 2usize],
pub io_ticks: u64,
pub time_in_queue: u64,
pub slavecount: u32,
pub pad: u32,
pub max_util: fio_fp64_t,
}
#[repr(C)]
pub struct disk_util {
pub list: flist_head,
pub slavelist: flist_head,
pub sysfs_root: *mut libc::c_char,
pub path: [libc::c_char; 1024usize],
pub major: libc::c_int,
pub minor: libc::c_int,
pub dus: disk_util_stat,
pub last_dus: disk_util_stat,
pub agg: disk_util_agg,
pub slaves: flist_head,
pub time: timespec,
pub lock: *mut fio_sem,
pub users: libc::c_ulong,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct sk_out {
pub refs: libc::c_uint,
pub sk: libc::c_int,
pub lock: fio_sem,
pub list: flist_head,
pub wait: fio_sem,
pub xmit: fio_sem,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct io_u_queue {
pub io_us: *mut *mut io_u,
pub nr: libc::c_uint,
pub max: libc::c_uint,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct io_u_ring {
pub head: libc::c_uint,
pub tail: libc::c_uint,
pub max: libc::c_uint,
pub ring: *mut *mut io_u,
}
#[repr(C)]
pub struct steadystate_data {
pub limit: f64,
pub dur: libc::c_ulonglong,
pub ramp_time: libc::c_ulonglong,
pub state: u32,
pub head: libc::c_uint,
pub tail: libc::c_uint,
pub iops_data: *mut u64,
pub bw_data: *mut u64,
pub slope: f64,
pub deviation: f64,
pub criterion: f64,
pub sum_y: u64,
pub sum_x: u64,
pub sum_x_sq: u64,
pub sum_xy: u64,
pub oldest_y: u64,
pub prev_time: timespec,
pub prev_iops: u64,
pub prev_bytes: u64,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct zone_split_index {
pub size_perc: u8,
pub size_perc_prev: u8,
pub size: u64,
pub size_prev: u64,
}
#[repr(C)]
pub struct thread_data {
pub opt_list: flist_head,
pub flags: libc::c_ulong,
pub o: thread_options,
pub eo: *mut libc::c_void,
pub thread: pthread_t,
pub thread_number: libc::c_uint,
pub subjob_number: libc::c_uint,
pub groupid: libc::c_uint,
pub ts: thread_stat,
pub client_type: libc::c_int,
pub slat_log: *mut io_log,
pub clat_log: *mut io_log,
pub clat_hist_log: *mut io_log,
pub lat_log: *mut io_log,
pub bw_log: *mut io_log,
pub iops_log: *mut io_log,
pub log_compress_wq: workqueue,
pub parent: *mut thread_data,
pub stat_io_bytes: [u64; 3usize],
pub bw_sample_time: timespec,
pub stat_io_blocks: [u64; 3usize],
pub iops_sample_time: timespec,
pub update_rusage: libc::c_int,
pub rusage_sem: *mut fio_sem,
pub ru_start: rusage,
pub ru_end: rusage,
pub files: *mut *mut fio_file,
pub file_locks: *mut libc::c_uchar,
pub files_size: libc::c_uint,
pub files_index: libc::c_uint,
pub nr_open_files: libc::c_uint,
pub nr_done_files: libc::c_uint,
pub __bindgen_anon_1: thread_data__bindgen_ty_1,
pub __bindgen_anon_2: thread_data__bindgen_ty_2,
pub __bindgen_anon_3: thread_data__bindgen_ty_3,
pub error: libc::c_int,
pub sig: libc::c_int,
pub done: libc::c_int,
pub stop_io: libc::c_int,
pub pid: pid_t,
pub orig_buffer: *mut libc::c_char,
pub orig_buffer_size: usize,
pub runstate: libc::c_int,
pub terminate: bool_,
pub last_was_sync: bool_,
pub last_ddir: fio_ddir,
pub mmapfd: libc::c_int,
pub iolog_buf: *mut libc::c_void,
pub iolog_f: *mut FILE,
pub rand_seeds: [libc::c_ulong; 19usize],
pub bsrange_state: [frand_state; 3usize],
pub verify_state: frand_state,
pub trim_state: frand_state,
pub delay_state: frand_state,
pub buf_state: frand_state,
pub buf_state_prev: frand_state,
pub dedupe_state: frand_state,
pub zone_state: frand_state,
pub zone_state_index: *mut *mut zone_split_index,
pub verify_batch: libc::c_uint,
pub trim_batch: libc::c_uint,
pub vstate: *mut thread_io_list,
pub shm_id: libc::c_int,
pub io_ops: *mut ioengine_ops,
pub io_ops_init: libc::c_int,
pub io_ops_data: *mut libc::c_void,
pub io_ops_dlhandle: *mut libc::c_void,
pub cur_depth: libc::c_uint,
pub io_u_queued: libc::c_uint,
pub io_u_in_flight: libc::c_uint,
pub io_u_requeues: io_u_ring,
pub io_u_freelist: io_u_queue,
pub io_u_all: io_u_queue,
pub io_u_lock: pthread_mutex_t,
pub free_cond: pthread_cond_t,
pub verify_list: flist_head,
pub verify_threads: *mut pthread_t,
pub nr_verify_threads: libc::c_uint,
pub verify_cond: pthread_cond_t,
pub verify_thread_exit: libc::c_int,
pub rate_bps: [u64; 3usize],
pub rate_next_io_time: [u64; 3usize],
pub rate_bytes: [libc::c_ulong; 3usize],
pub rate_blocks: [libc::c_ulong; 3usize],
pub rate_io_issue_bytes: [libc::c_ulonglong; 3usize],
pub lastrate: [timespec; 3usize],
pub last_usec: [i64; 3usize],
pub poisson_state: [frand_state; 3usize],
pub io_wq: workqueue,
pub total_io_size: u64,
pub fill_device_size: u64,
pub io_issues: [u64; 3usize],
pub io_issue_bytes: [u64; 3usize],
pub loops: u64,
pub io_blocks: [u64; 3usize],
pub this_io_blocks: [u64; 3usize],
pub io_bytes: [u64; 3usize],
pub this_io_bytes: [u64; 3usize],
pub io_skip_bytes: u64,
pub zone_bytes: u64,
pub sem: *mut fio_sem,
pub bytes_done: [u64; 3usize],
pub random_state: frand_state,
pub start: timespec,
pub epoch: timespec,
pub unix_epoch: libc::c_ulonglong,
pub last_issue: timespec,
pub time_offset: libc::c_long,
pub ts_cache: timespec,
pub terminate_time: timespec,
pub ts_cache_nr: libc::c_uint,
pub ts_cache_mask: libc::c_uint,
pub ramp_time_over: bool_,
pub latency_ts: timespec,
pub latency_qd: libc::c_uint,
pub latency_qd_high: libc::c_uint,
pub latency_qd_low: libc::c_uint,
pub latency_failed: libc::c_uint,
pub latency_ios: u64,
pub latency_end_run: libc::c_int,
pub rwmix_state: frand_state,
pub rwmix_issues: libc::c_ulong,
pub rwmix_ddir: fio_ddir,
pub ddir_seq_nr: libc::c_uint,
pub seq_rand_state: [frand_state; 3usize],
pub io_hist_tree: rb_root,
pub io_hist_list: flist_head,
pub io_hist_len: libc::c_ulong,
pub io_log_list: flist_head,
pub io_log_rfile: *mut FILE,
pub io_log_current: libc::c_uint,
pub io_log_checkmark: libc::c_uint,
pub io_log_highmark: libc::c_uint,
pub io_log_highmark_time: timespec,
pub trim_list: flist_head,
pub trim_entries: libc::c_ulong,
pub file_service_nr: libc::c_uint,
pub file_service_left: libc::c_uint,
pub file_service_file: *mut fio_file,
pub sync_file_range_nr: libc::c_uint,
pub file_size_state: frand_state,
pub total_err_count: libc::c_uint,
pub first_error: libc::c_int,
pub flow: *mut fio_flow,
pub prof_io_ops: prof_io_ops,
pub prof_data: *mut libc::c_void,
pub pinned_mem: *mut libc::c_void,
pub ss: steadystate_data,
pub verror: [libc::c_char; 128usize],
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union thread_data__bindgen_ty_1 {
pub next_file: libc::c_uint,
pub next_file_state: frand_state,
_bindgen_union_align: [u64; 6usize],
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union thread_data__bindgen_ty_2 {
pub next_file_zipf: zipf_state,
pub next_file_gauss: gauss_state,
_bindgen_union_align: [u64; 13usize],
}
#[repr(C)]
#[repr(align(8))]
#[derive(Copy, Clone)]
pub union thread_data__bindgen_ty_3 {
pub zipf_theta: f64,
pub pareto_h: f64,
pub gauss_dev: f64,
_bindgen_union_align: u64,
}
pub const opt_category___FIO_OPT_C_GENERAL: opt_category = 0;
pub const opt_category___FIO_OPT_C_IO: opt_category = 1;
pub const opt_category___FIO_OPT_C_FILE: opt_category = 2;
pub const opt_category___FIO_OPT_C_STAT: opt_category = 3;
pub const opt_category___FIO_OPT_C_LOG: opt_category = 4;
pub const opt_category___FIO_OPT_C_PROFILE: opt_category = 5;
pub const opt_category___FIO_OPT_C_ENGINE: opt_category = 6;
pub const opt_category___FIO_OPT_C_NR: opt_category = 7;
pub const opt_category_FIO_OPT_C_GENERAL: opt_category = 1;
pub const opt_category_FIO_OPT_C_IO: opt_category = 2;
pub const opt_category_FIO_OPT_C_FILE: opt_category = 4;
pub const opt_category_FIO_OPT_C_STAT: opt_category = 8;
pub const opt_category_FIO_OPT_C_LOG: opt_category = 16;
pub const opt_category_FIO_OPT_C_PROFILE: opt_category = 32;
pub const opt_category_FIO_OPT_C_ENGINE: opt_category = 64;
pub const opt_category_FIO_OPT_C_INVALID: opt_category = 128;
pub type opt_category = u32;
pub const opt_category_group___FIO_OPT_G_RATE: opt_category_group = 0;
pub const opt_category_group___FIO_OPT_G_ZONE: opt_category_group = 1;
pub const opt_category_group___FIO_OPT_G_RWMIX: opt_category_group = 2;
pub const opt_category_group___FIO_OPT_G_VERIFY: opt_category_group = 3;
pub const opt_category_group___FIO_OPT_G_TRIM: opt_category_group = 4;
pub const opt_category_group___FIO_OPT_G_IOLOG: opt_category_group = 5;
pub const opt_category_group___FIO_OPT_G_IO_DEPTH: opt_category_group = 6;
pub const opt_category_group___FIO_OPT_G_IO_FLOW: opt_category_group = 7;
pub const opt_category_group___FIO_OPT_G_DESC: opt_category_group = 8;
pub const opt_category_group___FIO_OPT_G_FILENAME: opt_category_group = 9;
pub const opt_category_group___FIO_OPT_G_IO_BASIC: opt_category_group = 10;
pub const opt_category_group___FIO_OPT_G_CGROUP: opt_category_group = 11;
pub const opt_category_group___FIO_OPT_G_RUNTIME: opt_category_group = 12;
pub const opt_category_group___FIO_OPT_G_PROCESS: opt_category_group = 13;
pub const opt_category_group___FIO_OPT_G_CRED: opt_category_group = 14;
pub const opt_category_group___FIO_OPT_G_CLOCK: opt_category_group = 15;
pub const opt_category_group___FIO_OPT_G_IO_TYPE: opt_category_group = 16;
pub const opt_category_group___FIO_OPT_G_THINKTIME: opt_category_group = 17;
pub const opt_category_group___FIO_OPT_G_RANDOM: opt_category_group = 18;
pub const opt_category_group___FIO_OPT_G_IO_BUF: opt_category_group = 19;
pub const opt_category_group___FIO_OPT_G_TIOBENCH: opt_category_group = 20;
pub const opt_category_group___FIO_OPT_G_ERR: opt_category_group = 21;
pub const opt_category_group___FIO_OPT_G_E4DEFRAG: opt_category_group = 22;
pub const opt_category_group___FIO_OPT_G_NETIO: opt_category_group = 23;
pub const opt_category_group___FIO_OPT_G_RDMA: opt_category_group = 24;
pub const opt_category_group___FIO_OPT_G_LIBAIO: opt_category_group = 25;
pub const opt_category_group___FIO_OPT_G_ACT: opt_category_group = 26;
pub const opt_category_group___FIO_OPT_G_LATPROF: opt_category_group = 27;
pub const opt_category_group___FIO_OPT_G_RBD: opt_category_group = 28;
pub const opt_category_group___FIO_OPT_G_HTTP: opt_category_group = 29;
pub const opt_category_group___FIO_OPT_G_GFAPI: opt_category_group = 30;
pub const opt_category_group___FIO_OPT_G_MTD: opt_category_group = 31;
pub const opt_category_group___FIO_OPT_G_HDFS: opt_category_group = 32;
pub const opt_category_group___FIO_OPT_G_SG: opt_category_group = 33;
pub const opt_category_group___FIO_OPT_G_NR: opt_category_group = 34;
pub const opt_category_group_FIO_OPT_G_RATE: opt_category_group = 1;
pub const opt_category_group_FIO_OPT_G_ZONE: opt_category_group = 2;
pub const opt_category_group_FIO_OPT_G_RWMIX: opt_category_group = 4;
pub const opt_category_group_FIO_OPT_G_VERIFY: opt_category_group = 8;
pub const opt_category_group_FIO_OPT_G_TRIM: opt_category_group = 16;
pub const opt_category_group_FIO_OPT_G_IOLOG: opt_category_group = 32;
pub const opt_category_group_FIO_OPT_G_IO_DEPTH: opt_category_group = 64;
pub const opt_category_group_FIO_OPT_G_IO_FLOW: opt_category_group = 128;
pub const opt_category_group_FIO_OPT_G_DESC: opt_category_group = 256;
pub const opt_category_group_FIO_OPT_G_FILENAME: opt_category_group = 512;
pub const opt_category_group_FIO_OPT_G_IO_BASIC: opt_category_group = 1024;
pub const opt_category_group_FIO_OPT_G_CGROUP: opt_category_group = 2048;
pub const opt_category_group_FIO_OPT_G_RUNTIME: opt_category_group = 4096;
pub const opt_category_group_FIO_OPT_G_PROCESS: opt_category_group = 8192;
pub const opt_category_group_FIO_OPT_G_CRED: opt_category_group = 16384;
pub const opt_category_group_FIO_OPT_G_CLOCK: opt_category_group = 32768;
pub const opt_category_group_FIO_OPT_G_IO_TYPE: opt_category_group = 65536;
pub const opt_category_group_FIO_OPT_G_THINKTIME: opt_category_group = 131072;
pub const opt_category_group_FIO_OPT_G_RANDOM: opt_category_group = 262144;
pub const opt_category_group_FIO_OPT_G_IO_BUF: opt_category_group = 524288;
pub const opt_category_group_FIO_OPT_G_TIOBENCH: opt_category_group = 1048576;
pub const opt_category_group_FIO_OPT_G_ERR: opt_category_group = 2097152;
pub const opt_category_group_FIO_OPT_G_E4DEFRAG: opt_category_group = 4194304;
pub const opt_category_group_FIO_OPT_G_NETIO: opt_category_group = 8388608;
pub const opt_category_group_FIO_OPT_G_RDMA: opt_category_group = 16777216;
pub const opt_category_group_FIO_OPT_G_LIBAIO: opt_category_group = 33554432;
pub const opt_category_group_FIO_OPT_G_ACT: opt_category_group = 67108864;
pub const opt_category_group_FIO_OPT_G_LATPROF: opt_category_group = 134217728;
pub const opt_category_group_FIO_OPT_G_RBD: opt_category_group = 268435456;
pub const opt_category_group_FIO_OPT_G_HTTP: opt_category_group = 536870912;
pub const opt_category_group_FIO_OPT_G_GFAPI: opt_category_group = 1073741824;
pub const opt_category_group_FIO_OPT_G_MTD: opt_category_group = 2147483648;
pub const opt_category_group_FIO_OPT_G_HDFS: opt_category_group = 4294967296;
pub const opt_category_group_FIO_OPT_G_SG: opt_category_group = 8589934592;
pub const opt_category_group_FIO_OPT_G_INVALID: opt_category_group = 17179869184;
pub type opt_category_group = u64;
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct fio_rwlock {
pub _address: u8,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct thread_io_list {
pub _address: u8,
}
#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct fio_flow {
pub _address: u8,
}
// file ef79582ce62e1b82c1af1f5fef5fd1932b191149
use std::{cell::RefCell, collections::BTreeMap};
use dces::prelude::{Entity, EntityComponentManager};
use crate::{css_engine::*, prelude::*, render::*, shell::WindowShell, tree::Tree};
use super::{MessageBox, WidgetContainer};
/// The `Context` provides the states with access to the objects they can work with.
pub struct Context<'a> {
ecm: &'a mut EntityComponentManager<Tree, StringComponentStore>,
window_shell: &'a mut WindowShell<WindowAdapter>,
pub entity: Entity,
pub theme: &'a ThemeValue,
render_objects: &'a RefCell<BTreeMap<Entity, Box<dyn RenderObject>>>,
layouts: &'a mut BTreeMap<Entity, Box<dyn Layout>>,
handlers: &'a mut EventHandlerMap,
states: &'a RefCell<BTreeMap<Entity, Box<dyn State>>>,
new_states: &'a mut BTreeMap<Entity, Box<dyn State>>,
}
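// States created while this `Context` is alive are collected in `new_states` and
// merged back into the shared `states` map when the context is dropped (see the
// `Drop` impl below).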
impl<'a> Drop for Context<'a> {
fn drop(&mut self) {
self.states.borrow_mut().append(&mut self.new_states);
}
}
impl<'a> Context<'a> {
/// Creates a new context.
pub fn new(
ecs: (
Entity,
&'a mut EntityComponentManager<Tree, StringComponentStore>,
),
window_shell: &'a mut WindowShell<WindowAdapter>,
theme: &'a ThemeValue,
render_objects: &'a RefCell<BTreeMap<Entity, Box<dyn RenderObject>>>,
layouts: &'a mut BTreeMap<Entity, Box<dyn Layout>>,
handlers: &'a mut EventHandlerMap,
states: &'a RefCell<BTreeMap<Entity, Box<dyn State>>>,
new_states: &'a mut BTreeMap<Entity, Box<dyn State>>,
) -> Self {
Context {
entity: ecs.0,
ecm: ecs.1,
window_shell,
theme,
render_objects,
layouts,
handlers,
states,
new_states,
}
}
// -- Widgets --
/// Returns a specific widget.
pub fn get_widget(&mut self, entity: Entity) -> WidgetContainer<'_> {
WidgetContainer::new(entity, self.ecm, self.theme)
}
/// Returns the widget associated with the current state context.
pub fn widget(&mut self) -> WidgetContainer<'_> {
self.get_widget(self.entity)
}
/// Returns the window widget.
pub fn window(&mut self) -> WidgetContainer<'_> {
let root = self.ecm.entity_store().root;
self.get_widget(root)
}
/// Returns a child of the widget of the current state referenced by css `id`.
/// Panics if no child with the given id exists.
pub fn child<'b>(&mut self, id: impl Into<&'b str>) -> WidgetContainer<'_> {
self.entity_of_child(id)
.map(move |child| self.get_widget(child))
.unwrap()
}
/// Returns a child of the widget of the current state referenced by css `id`.
/// Returns `None` if no child with the given id exists.
pub fn try_child<'b>(&mut self, id: impl Into<&'b str>) -> Option<WidgetContainer<'_>> {
self.entity_of_child(id)
.map(move |child| self.get_widget(child))
}
/// Returns the parent of the current widget.
/// Panics if the parent does not exist.
pub fn parent(&mut self) -> WidgetContainer<'_> {
let entity = self.ecm.entity_store().parent[&self.entity].unwrap();
self.get_widget(entity)
}
/// Returns the parent of the current widget.
/// Returns `None` if the current widget is the root.
pub fn try_parent(&mut self) -> Option<WidgetContainer<'_>> {
if self.ecm.entity_store().parent[&self.entity] == None {
return None;
}
let entity = self.ecm.entity_store().parent[&self.entity].unwrap();
Some(self.get_widget(entity))
}
/// Returns a parent of the widget of the current state referenced by css `id`.
/// Panics if a parent with the given id could not be found.
pub fn parent_from_id<'b>(&mut self, id: impl Into<&'b str>) -> WidgetContainer<'_> {
let mut current = self.entity;
let id = id.into();
while let Some(parent) = self.ecm.entity_store().parent[&current] {
if let Ok(selector) = self
.ecm
.component_store()
.get::<Selector>("selector", parent)
{
if let Some(parent_id) = &selector.id {
if parent_id == id {
return self.get_widget(parent);
}
}
}
current = parent;
}
panic!(
"Parent with id: {}, of child with entity: {} could not be found",
id, self.entity.0
);
}
/// Returns a parent of the widget of the current state referenced by css `id`.
/// Returns `None` if no parent with the given id exists.
pub fn try_parent_from_id<'b>(
&mut self,
id: impl Into<&'b str>,
) -> Option<WidgetContainer<'_>> {
let mut current = self.entity;
let id = id.into();
while let Some(parent) = self.ecm.entity_store().parent[&current] {
if let Ok(selector) = self
.ecm
.component_store()
.get::<Selector>("selector", parent)
{
if let Some(parent_id) = &selector.id {
if parent_id == id {
return Some(self.get_widget(parent));
}
}
}
current = parent;
}
None
}
/// Returns the child of the current widget at the given index.
/// Panics if no child exists at the given index.
pub fn child_from_index(&mut self, index: usize) -> WidgetContainer<'_> {
let entity = self.ecm.entity_store().children[&self.entity][index];
self.get_widget(entity)
}
/// Returns the child of the current widget at the given index.
/// Returns `None` if the index is out of bounds or the widget has no children.
pub fn try_child_from_index(&mut self, index: usize) -> Option<WidgetContainer<'_>> {
if index >= self.ecm.entity_store().children[&self.entity].len() {
return None;
}
let entity = self.ecm.entity_store().children[&self.entity][index];
Some(self.get_widget(entity))
}
// -- Widgets --
// -- Manipulation --
/// Returns the current build ctx.
pub fn build_context(&mut self) -> BuildContext {
BuildContext::new(
self.ecm,
self.render_objects,
self.layouts,
self.handlers,
self.new_states,
self.theme,
)
}
/// Appends a child widget to the given parent.
pub fn append_child_to<W: Widget>(&mut self, child: W, parent: Entity) {
let bctx = &mut self.build_context();
let child = child.build(bctx);
bctx.append_child(parent, child)
}
/// Appends a child widget by entity to the given parent.
pub fn append_child_entity_to(&mut self, child: Entity, parent: Entity) {
self.build_context().append_child(parent, child)
}
/// Appends a child to the current widget.
pub fn append_child<W: Widget>(&mut self, child: W) {
self.append_child_to(child, self.entity);
}
/// Appends a child widget by entity to the current widget.
pub fn append_child_entity(&mut self, child: Entity) {
self.append_child_entity_to(child, self.entity);
}
/// Removes a child from the current widget. If the given entity is not a child
/// of the current widget, nothing happens.
pub fn remove_child(&mut self, child: Entity) {
self.remove_child_from(child, self.entity);
}
/// Removes a child from the given parent. If the given entity is not a child
/// of the given parent, nothing happens.
pub fn remove_child_from(&mut self, child: Entity, parent: Entity) {
if self.ecm.entity_store().children[&parent].contains(&child) {
self.ecm.remove_entity(child);
}
}
/// Clears all children of the current widget.
pub fn clear_children(&mut self) {
self.clear_children_of(self.entity);
}
/// Clears all children of the given widget.
pub fn clear_children_of(&mut self, parent: Entity) {
while !self.ecm.entity_store().children[&parent].is_empty() {
let child = self.ecm.entity_store().children[&parent][0];
self.ecm.remove_entity(child);
}
}
// -- Manipulation --
/// Returns the entity id of a child referenced by the given css id.
pub fn entity_of_child<'b>(&mut self, id: impl Into<&'b str>) -> Option<Entity> {
let id = id.into();
let mut current_node = self.entity;
loop {
if let Ok(selector) = self
.ecm
.component_store()
.get::<Selector>("selector", current_node)
{
if let Some(child_id) = &selector.id {
if child_id == id {
return Some(current_node);
}
}
}
let mut it = self.ecm.entity_store().start_node(current_node).into_iter();
it.next();
if let Some(node) = it.next() {
current_node = node;
} else {
break;
}
}
None
}
/// Returns the entity of the parent referenced by css `element`.
/// Returns `None` if no parent with the given element exists.
pub fn parent_entity_by_element<'b>(&mut self, element: impl Into<&'b str>) -> Option<Entity> {
let mut current = self.entity;
let element = element.into();
while let Some(parent) = self.ecm.entity_store().parent[&current] {
if let Ok(selector) = self
.ecm
.component_store()
.get::<Selector>("selector", parent)
{
if let Some(parent_element) = &selector.element {
if parent_element == element
&& self
.ecm
.component_store()
.is_origin::<Selector>("selector", parent)
{
return Some(parent);
}
}
}
current = parent;
}
None
}
/// Returns the entity of the parent.
pub fn entity_of_parent(&mut self) -> Option<Entity> {
self.ecm.entity_store().parent[&self.entity]
}
/// Returns the index of the given entity within its parent's children.
pub fn index_as_child(&mut self, entity: Entity) -> Option<usize> {
if let Some(parent) = self.ecm.entity_store().parent[&entity] {
return self.ecm.entity_store().children[&parent]
.iter()
.position(|e| *e == entity);
}
None
}
/// Sends a message to the widget with the given id over the message channel.
pub fn send_message(&mut self, target_widget: &str, message: impl Into<MessageBox>) {
let mut entity = None;
if let Ok(global) = self.ecm.component_store().get::<Global>("global", 0.into()) {
if let Some(en) = global.id_map.get(target_widget) {
entity = Some(*en);
}
}
if let Some(entity) = entity {
self.window_shell
.adapter()
.messages
.entry(entity)
.or_insert_with(Vec::new)
.push(message.into());
} else {
println!(
"Context send_message: widget id {} not found.",
target_widget
);
}
}
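// Hedged usage sketch (names are illustrative, not part of this crate): given a
// `ctx: &mut Context` inside a state and a `MyMessage` type implementing
// `Into<MessageBox>`, another widget can be notified by its id:
//
//     ctx.send_message("status_label", MyMessage::TextChanged);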
/// Pushes an event to the event queue with the given `strategy`.
pub fn push_event_strategy<E: Event>(&mut self, event: E, strategy: EventStrategy) {
self.window_shell
.adapter()
.event_queue
.register_event_with_strategy(event, strategy, self.entity);
}
/// Pushes an event to the event queue.
pub fn push_event<E: Event>(&mut self, event: E) {
self.window_shell
.adapter()
.event_queue
.register_event(event, self.entity);
}
/// Pushes an event to the event queue.
pub fn push_event_by_entity<E: Event>(&mut self, event: E, entity: Entity) {
self.window_shell
.adapter()
.event_queue
.register_event(event, entity);
}
/// Returns a mutable reference of the 2d render ctx.
pub fn render_context_2_d(&mut self) -> &mut RenderContext2D {
self.window_shell.render_context_2_d()
}
}
// file 75c43acd95359539e6519fb8e8f9d249180fc3ae
//! De-/serialization functions for `std::time::SystemTime` objects represented as seconds since
//! the UNIX epoch. Delegates to `js_int::UInt` to ensure integer size is within bounds.
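// Typical use (mirroring the tests at the bottom of this file): annotate a
// `SystemTime` field with `#[serde(with = "...")]` pointing at this module so
// serde delegates to the `serialize`/`deserialize` functions defined below.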
use std::{
convert::TryFrom,
time::{Duration, SystemTime, UNIX_EPOCH},
};
use js_int::UInt;
use serde::{
de::{Deserialize, Deserializer},
ser::{Error, Serialize, Serializer},
};
/// Serialize a SystemTime.
///
/// Will fail if integer is greater than the maximum integer that can be unambiguously represented
/// by an f64.
pub fn serialize<S>(time: &SystemTime, serializer: S) -> Result<S::Ok, S::Error>
where
S: Serializer,
{
// If this unwrap fails, the system this is executed on is completely broken.
let time_since_epoch = time.duration_since(UNIX_EPOCH).unwrap();
match UInt::try_from(time_since_epoch.as_secs()) {
Ok(uint) => uint.serialize(serializer),
Err(err) => Err(S::Error::custom(err)),
}
}
/// Deserializes a SystemTime.
///
/// Will fail if integer is greater than the maximum integer that can be unambiguously represented
/// by an f64.
pub fn deserialize<'de, D>(deserializer: D) -> Result<SystemTime, D::Error>
where
D: Deserializer<'de>,
{
let secs = UInt::deserialize(deserializer)?;
Ok(UNIX_EPOCH + Duration::from_secs(secs.into()))
}
#[cfg(test)]
mod tests {
use std::time::{Duration, SystemTime, UNIX_EPOCH};
use serde::{Deserialize, Serialize};
use serde_json::json;
#[derive(Clone, Debug, PartialEq, Deserialize, Serialize)]
struct SystemTimeTest {
#[serde(with = "super")]
timestamp: SystemTime,
}
#[test]
fn deserialize() {
let json = json!({ "timestamp": 3000 });
assert_eq!(
serde_json::from_value::<SystemTimeTest>(json).unwrap(),
SystemTimeTest { timestamp: UNIX_EPOCH + Duration::from_secs(3000) },
);
}
#[test]
fn serialize() {
let request = SystemTimeTest { timestamp: UNIX_EPOCH + Duration::new(2000, 0) };
assert_eq!(serde_json::to_value(&request).unwrap(), json!({ "timestamp": 2000 }));
}
}
// file e96004023039dea21d6bad9626dafb587d9dc730
use crate::value::RefVal;
use crate::RefType;
use crate::host::HostFuncBody;
use crate::inst::*;
use crate::module::*;
use crate::value::Value;
use anyhow::Result;
use std::iter;
use wasmparser::{FuncType, FunctionBody, Type};
#[derive(Clone, Copy, Debug)]
pub struct InstIndex(pub u32);
impl InstIndex {
pub fn zero() -> InstIndex {
InstIndex(0)
}
}
pub enum FunctionInstance {
Defined(DefinedFunctionInstance),
Native(NativeFunctionInstance),
}
impl FunctionInstance {
pub fn ty(&self) -> &FuncType {
match self {
Self::Defined(defined) => defined.ty(),
Self::Native(host) => host.ty(),
}
}
pub fn defined(&self) -> Option<&DefinedFunctionInstance> {
match self {
Self::Defined(defined) => Some(defined),
_ => None,
}
}
pub fn name(&self) -> &String {
match self {
Self::Defined(defined) => &defined.name,
Self::Native(host) => host.field_name(),
}
}
}
pub struct DefinedFunctionInstance {
name: String,
ty: FuncType,
module_index: ModuleIndex,
instructions: Vec<Instruction>,
default_locals: Vec<Value>,
}
impl DefinedFunctionInstance {
pub(crate) fn new(
name: String,
ty: FuncType,
module_index: ModuleIndex,
body: FunctionBody,
base_offset: usize,
) -> Result<Self> {
let mut locals = Vec::new();
let reader = body.get_locals_reader()?;
for local in reader {
let (count, value_type) = local?;
let elements = iter::repeat(value_type).take(count as usize);
locals.append(&mut elements.collect());
}
let mut reader = body.get_operators_reader()?;
let mut instructions = Vec::new();
while !reader.eof() {
let inst = transform_inst(&mut reader, base_offset)?;
instructions.push(inst);
}
// Compute default local values here instead of frame initialization
// to avoid re-computation
let mut local_tys = ty.params.to_vec();
local_tys.append(&mut locals.to_vec());
let mut default_locals = Vec::new();
for ty in local_tys {
let v = match ty {
Type::I32 => Value::I32(0),
Type::I64 => Value::I64(0),
Type::F32 => Value::F32(0),
Type::F64 => Value::F64(0),
Type::ExternRef => Value::Ref(RefVal::NullRef(RefType::ExternRef)),
Type::FuncRef => Value::Ref(RefVal::NullRef(RefType::FuncRef)),
_ => unimplemented!("local initialization of type {:?}", ty),
};
default_locals.push(v);
}
Ok(Self {
name,
ty,
module_index,
instructions,
default_locals,
})
}
pub fn name(&self) -> &String {
&self.name
}
pub fn ty(&self) -> &FuncType {
&self.ty
}
pub fn module_index(&self) -> ModuleIndex {
self.module_index
}
pub(crate) fn instructions(&self) -> &[Instruction] {
&self.instructions
}
pub(crate) fn inst(&self, index: InstIndex) -> Option<&Instruction> {
self.instructions.get(index.0 as usize)
}
pub(crate) fn default_locals(&self) -> &[Value] {
&self.default_locals
}
}
pub struct NativeFunctionInstance {
ty: FuncType,
module_name: String,
field_name: String,
code: HostFuncBody,
}
impl NativeFunctionInstance {
pub fn ty(&self) -> &FuncType {
&self.ty
}
pub fn module_name(&self) -> &String {
&self.module_name
}
pub fn field_name(&self) -> &String {
&self.field_name
}
pub fn code(&self) -> &HostFuncBody {
&self.code
}
pub fn new(ty: FuncType, module_name: String, field_name: String, code: HostFuncBody) -> Self {
Self {
ty,
module_name,
field_name,
code,
}
}
}
// file 18e1489b994a597d04be23733930e33b3d28e0a2
use {
crate::parse_instruction::{
check_num_accounts, ParsableProgram, ParseInstructionError, ParsedInstructionEnum,
},
bincode::deserialize,
serde_json::json,
solana_sdk::{
instruction::CompiledInstruction, pubkey::Pubkey, system_instruction::SystemInstruction,
},
};
pub fn parse_system(
instruction: &CompiledInstruction,
account_keys: &[Pubkey],
) -> Result<ParsedInstructionEnum, ParseInstructionError> {
let system_instruction: SystemInstruction = deserialize(&instruction.data)
.map_err(|_| ParseInstructionError::InstructionNotParsable(ParsableProgram::System))?;
match instruction.accounts.iter().max() {
Some(index) if (*index as usize) < account_keys.len() => {}
_ => {
// Runtime should prevent this from ever happening
return Err(ParseInstructionError::InstructionKeyMismatch(
ParsableProgram::System,
));
}
}
match system_instruction {
SystemInstruction::CreateAccount {
lamports,
space,
owner,
} => {
check_num_system_accounts(&instruction.accounts, 2)?;
Ok(ParsedInstructionEnum {
instruction_type: "createAccount".to_string(),
info: json!({
"source": account_keys[instruction.accounts[0] as usize].to_string(),
"newAccount": account_keys[instruction.accounts[1] as usize].to_string(),
"lamports": lamports,
"space": space,
"owner": owner.to_string(),
}),
})
}
SystemInstruction::Assign { owner } => {
check_num_system_accounts(&instruction.accounts, 1)?;
Ok(ParsedInstructionEnum {
instruction_type: "assign".to_string(),
info: json!({
"account": account_keys[instruction.accounts[0] as usize].to_string(),
"owner": owner.to_string(),
}),
})
}
SystemInstruction::CreateFNode { reward_address, node_type } => {
check_num_system_accounts(&instruction.accounts, 2)?;
Ok(ParsedInstructionEnum {
instruction_type: "createfnode".to_string(),
info: json!({
"source": account_keys[instruction.accounts[0] as usize].to_string(),
"reward_address": reward_address.to_string(),
"node_type": node_type,
}),
})
}
SystemInstruction::AddGrant { id, receiving_address, amount } => {
check_num_system_accounts(&instruction.accounts, 3)?;
Ok(ParsedInstructionEnum {
instruction_type: "addgrant".to_string(),
info: json!({
"source": account_keys[instruction.accounts[0] as usize].to_string(),
"grant_id": id,
"receiving_address": receiving_address.to_string(),
"amount": amount,
}),
})
}
SystemInstruction::VoteOnGrant { grant_hash, vote, node_hash } => {
check_num_system_accounts(&instruction.accounts, 2)?;
Ok(ParsedInstructionEnum {
instruction_type: "VoteOnGrant".to_string(),
info: json!({
"source": account_keys[instruction.accounts[0] as usize].to_string(),
"grant_hash": grant_hash.to_string(),
"vote": vote,
"node_hash": node_hash,
}),
})
}
SystemInstruction::DissolveGrant { grant_hash } => {
check_num_system_accounts(&instruction.accounts, 2)?;
Ok(ParsedInstructionEnum {
instruction_type: "DissolveGrant".to_string(),
info: json!({
"source": account_keys[instruction.accounts[0] as usize].to_string(),
"grant_hash": grant_hash.to_string(),
}),
})
}
SystemInstruction::Transfer { lamports } => {
check_num_system_accounts(&instruction.accounts, 2)?;
Ok(ParsedInstructionEnum {
instruction_type: "transfer".to_string(),
info: json!({
"source": account_keys[instruction.accounts[0] as usize].to_string(),
"destination": account_keys[instruction.accounts[1] as usize].to_string(),
"lamports": lamports,
}),
})
}
SystemInstruction::CreateAccountWithSeed {
base,
seed,
lamports,
space,
owner,
} => {
check_num_system_accounts(&instruction.accounts, 2)?;
Ok(ParsedInstructionEnum {
instruction_type: "createAccountWithSeed".to_string(),
info: json!({
"source": account_keys[instruction.accounts[0] as usize].to_string(),
"newAccount": account_keys[instruction.accounts[1] as usize].to_string(),
"base": base.to_string(),
"seed": seed,
"lamports": lamports,
"space": space,
"owner": owner.to_string(),
}),
})
}
SystemInstruction::AdvanceNonceAccount => {
check_num_system_accounts(&instruction.accounts, 3)?;
Ok(ParsedInstructionEnum {
instruction_type: "advanceNonce".to_string(),
info: json!({
"nonceAccount": account_keys[instruction.accounts[0] as usize].to_string(),
"recentBlockhashesSysvar": account_keys[instruction.accounts[1] as usize].to_string(),
"nonceAuthority": account_keys[instruction.accounts[2] as usize].to_string(),
}),
})
}
SystemInstruction::WithdrawNonceAccount(lamports) => {
check_num_system_accounts(&instruction.accounts, 5)?;
Ok(ParsedInstructionEnum {
instruction_type: "withdrawFromNonce".to_string(),
info: json!({
"nonceAccount": account_keys[instruction.accounts[0] as usize].to_string(),
"destination": account_keys[instruction.accounts[1] as usize].to_string(),
"recentBlockhashesSysvar": account_keys[instruction.accounts[2] as usize].to_string(),
"rentSysvar": account_keys[instruction.accounts[3] as usize].to_string(),
"nonceAuthority": account_keys[instruction.accounts[4] as usize].to_string(),
"lamports": lamports,
}),
})
}
SystemInstruction::InitializeNonceAccount(authority) => {
check_num_system_accounts(&instruction.accounts, 3)?;
Ok(ParsedInstructionEnum {
instruction_type: "initializeNonce".to_string(),
info: json!({
"nonceAccount": account_keys[instruction.accounts[0] as usize].to_string(),
"recentBlockhashesSysvar": account_keys[instruction.accounts[1] as usize].to_string(),
"rentSysvar": account_keys[instruction.accounts[2] as usize].to_string(),
"nonceAuthority": authority.to_string(),
}),
})
}
SystemInstruction::AuthorizeNonceAccount(authority) => {
check_num_system_accounts(&instruction.accounts, 1)?;
Ok(ParsedInstructionEnum {
instruction_type: "authorizeNonce".to_string(),
info: json!({
"nonceAccount": account_keys[instruction.accounts[0] as usize].to_string(),
"nonceAuthority": account_keys[instruction.accounts[1] as usize].to_string(),
"newAuthorized": authority.to_string(),
}),
})
}
SystemInstruction::Allocate { space } => {
check_num_system_accounts(&instruction.accounts, 1)?;
Ok(ParsedInstructionEnum {
instruction_type: "allocate".to_string(),
info: json!({
"account": account_keys[instruction.accounts[0] as usize].to_string(),
"space": space,
}),
})
}
SystemInstruction::AllocateWithSeed {
base,
seed,
space,
owner,
} => {
check_num_system_accounts(&instruction.accounts, 2)?;
Ok(ParsedInstructionEnum {
instruction_type: "allocateWithSeed".to_string(),
info: json!({
"account": account_keys[instruction.accounts[0] as usize].to_string(),
"base": base.to_string(),
"seed": seed,
"space": space,
"owner": owner.to_string(),
}),
})
}
SystemInstruction::AssignWithSeed { base, seed, owner } => {
check_num_system_accounts(&instruction.accounts, 2)?;
Ok(ParsedInstructionEnum {
instruction_type: "assignWithSeed".to_string(),
info: json!({
"account": account_keys[instruction.accounts[0] as usize].to_string(),
"base": base.to_string(),
"seed": seed,
"owner": owner.to_string(),
}),
})
}
SystemInstruction::TransferWithSeed {
lamports,
from_seed,
from_owner,
} => {
check_num_system_accounts(&instruction.accounts, 3)?;
Ok(ParsedInstructionEnum {
instruction_type: "transferWithSeed".to_string(),
info: json!({
"source": account_keys[instruction.accounts[0] as usize].to_string(),
"sourceBase": account_keys[instruction.accounts[1] as usize].to_string(),
"destination": account_keys[instruction.accounts[2] as usize].to_string(),
"lamports": lamports,
"sourceSeed": from_seed,
"sourceOwner": from_owner.to_string(),
}),
})
}
}
}
fn check_num_system_accounts(accounts: &[u8], num: usize) -> Result<(), ParseInstructionError> {
check_num_accounts(accounts, num, ParsableProgram::System)
}
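// For orientation: each successful parse yields a `ParsedInstructionEnum` whose
// `info` field is a JSON object keyed by account roles; for example a `Transfer`
// produces `{"source": ..., "destination": ..., "lamports": ...}`, as exercised
// by the tests below.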
#[cfg(test)]
mod test {
use {
super::*,
solana_sdk::{message::Message, pubkey::Pubkey, system_instruction},
};
#[test]
#[allow(clippy::same_item_push)]
fn test_parse_system_instruction() {
let mut keys: Vec<Pubkey> = vec![];
for _ in 0..6 {
keys.push(solana_sdk::pubkey::new_rand());
}
let lamports = 55;
let space = 128;
let instruction =
system_instruction::create_account(&keys[0], &keys[1], lamports, space, &keys[2]);
let message = Message::new(&[instruction], None);
assert_eq!(
parse_system(&message.instructions[0], &keys[0..2]).unwrap(),
ParsedInstructionEnum {
instruction_type: "createAccount".to_string(),
info: json!({
"source": keys[0].to_string(),
"newAccount": keys[1].to_string(),
"lamports": lamports,
"owner": keys[2].to_string(),
"space": space,
}),
}
);
assert!(parse_system(&message.instructions[0], &keys[0..1]).is_err());
let instruction = system_instruction::assign(&keys[0], &keys[1]);
let message = Message::new(&[instruction], None);
assert_eq!(
parse_system(&message.instructions[0], &keys[0..1]).unwrap(),
ParsedInstructionEnum {
instruction_type: "assign".to_string(),
info: json!({
"account": keys[0].to_string(),
"owner": keys[1].to_string(),
}),
}
);
assert!(parse_system(&message.instructions[0], &[]).is_err());
let instruction = system_instruction::transfer(&keys[0], &keys[1], lamports);
let message = Message::new(&[instruction], None);
assert_eq!(
parse_system(&message.instructions[0], &keys[0..2]).unwrap(),
ParsedInstructionEnum {
instruction_type: "transfer".to_string(),
info: json!({
"source": keys[0].to_string(),
"destination": keys[1].to_string(),
"lamports": lamports,
}),
}
);
assert!(parse_system(&message.instructions[0], &keys[0..1]).is_err());
let seed = "test_seed";
let instruction = system_instruction::create_account_with_seed(
&keys[0], &keys[2], &keys[1], seed, lamports, space, &keys[3],
);
let message = Message::new(&[instruction], None);
assert_eq!(
parse_system(&message.instructions[0], &keys[0..3]).unwrap(),
ParsedInstructionEnum {
instruction_type: "createAccountWithSeed".to_string(),
info: json!({
"source": keys[0].to_string(),
"newAccount": keys[2].to_string(),
"lamports": lamports,
"base": keys[1].to_string(),
"seed": seed,
"owner": keys[3].to_string(),
"space": space,
}),
}
);
let seed = "test_seed";
let instruction = system_instruction::create_account_with_seed(
&keys[0], &keys[1], &keys[0], seed, lamports, space, &keys[3],
);
let message = Message::new(&[instruction], None);
assert_eq!(
parse_system(&message.instructions[0], &keys[0..2]).unwrap(),
ParsedInstructionEnum {
instruction_type: "createAccountWithSeed".to_string(),
info: json!({
"source": keys[0].to_string(),
"newAccount": keys[1].to_string(),
"lamports": lamports,
"base": keys[0].to_string(),
"seed": seed,
"owner": keys[3].to_string(),
"space": space,
}),
}
);
assert!(parse_system(&message.instructions[0], &keys[0..1]).is_err());
let instruction = system_instruction::allocate(&keys[0], space);
let message = Message::new(&[instruction], None);
assert_eq!(
parse_system(&message.instructions[0], &keys[0..1]).unwrap(),
ParsedInstructionEnum {
instruction_type: "allocate".to_string(),
info: json!({
"account": keys[0].to_string(),
"space": space,
}),
}
);
assert!(parse_system(&message.instructions[0], &[]).is_err());
let instruction =
system_instruction::allocate_with_seed(&keys[1], &keys[0], seed, space, &keys[2]);
let message = Message::new(&[instruction], None);
assert_eq!(
parse_system(&message.instructions[0], &keys[0..2]).unwrap(),
ParsedInstructionEnum {
instruction_type: "allocateWithSeed".to_string(),
info: json!({
"account": keys[1].to_string(),
"base": keys[0].to_string(),
"seed": seed,
"owner": keys[2].to_string(),
"space": space,
}),
}
);
assert!(parse_system(&message.instructions[0], &keys[0..1]).is_err());
let instruction = system_instruction::assign_with_seed(&keys[1], &keys[0], seed, &keys[2]);
let message = Message::new(&[instruction], None);
assert_eq!(
parse_system(&message.instructions[0], &keys[0..2]).unwrap(),
ParsedInstructionEnum {
instruction_type: "assignWithSeed".to_string(),
info: json!({
"account": keys[1].to_string(),
"base": keys[0].to_string(),
"seed": seed,
"owner": keys[2].to_string(),
}),
}
);
assert!(parse_system(&message.instructions[0], &keys[0..1]).is_err());
let instruction = system_instruction::transfer_with_seed(
&keys[1],
&keys[0],
seed.to_string(),
&keys[3],
&keys[2],
lamports,
);
let message = Message::new(&[instruction], None);
assert_eq!(
parse_system(&message.instructions[0], &keys[0..3]).unwrap(),
ParsedInstructionEnum {
instruction_type: "transferWithSeed".to_string(),
info: json!({
"source": keys[1].to_string(),
"sourceBase": keys[0].to_string(),
"sourceSeed": seed,
"sourceOwner": keys[3].to_string(),
"lamports": lamports,
"destination": keys[2].to_string()
}),
}
);
assert!(parse_system(&message.instructions[0], &keys[0..2]).is_err());
}
#[test]
#[allow(clippy::same_item_push)]
fn test_parse_system_instruction_nonce() {
let mut keys: Vec<Pubkey> = vec![];
for _ in 0..5 {
keys.push(solana_sdk::pubkey::new_rand());
}
let instruction = system_instruction::advance_nonce_account(&keys[1], &keys[0]);
let message = Message::new(&[instruction], None);
assert_eq!(
parse_system(&message.instructions[0], &keys[0..3]).unwrap(),
ParsedInstructionEnum {
instruction_type: "advanceNonce".to_string(),
info: json!({
"nonceAccount": keys[1].to_string(),
"recentBlockhashesSysvar": keys[2].to_string(),
"nonceAuthority": keys[0].to_string(),
}),
}
);
assert!(parse_system(&message.instructions[0], &keys[0..2]).is_err());
let lamports = 55;
let instruction =
system_instruction::withdraw_nonce_account(&keys[1], &keys[0], &keys[2], lamports);
let message = Message::new(&[instruction], None);
assert_eq!(
parse_system(&message.instructions[0], &keys[0..5]).unwrap(),
ParsedInstructionEnum {
instruction_type: "withdrawFromNonce".to_string(),
info: json!({
"nonceAccount": keys[1].to_string(),
"destination": keys[2].to_string(),
"recentBlockhashesSysvar": keys[3].to_string(),
"rentSysvar": keys[4].to_string(),
"nonceAuthority": keys[0].to_string(),
"lamports": lamports
}),
}
);
assert!(parse_system(&message.instructions[0], &keys[0..4]).is_err());
let instructions =
system_instruction::create_nonce_account(&keys[0], &keys[1], &keys[4], lamports);
let message = Message::new(&instructions, None);
assert_eq!(
parse_system(&message.instructions[1], &keys[0..4]).unwrap(),
ParsedInstructionEnum {
instruction_type: "initializeNonce".to_string(),
info: json!({
"nonceAccount": keys[1].to_string(),
"recentBlockhashesSysvar": keys[2].to_string(),
"rentSysvar": keys[3].to_string(),
"nonceAuthority": keys[4].to_string(),
}),
}
);
assert!(parse_system(&message.instructions[1], &keys[0..3]).is_err());
let instruction = system_instruction::authorize_nonce_account(&keys[1], &keys[0], &keys[2]);
let message = Message::new(&[instruction], None);
assert_eq!(
parse_system(&message.instructions[0], &keys[0..2]).unwrap(),
ParsedInstructionEnum {
instruction_type: "authorizeNonce".to_string(),
info: json!({
"nonceAccount": keys[1].to_string(),
"newAuthorized": keys[2].to_string(),
"nonceAuthority": keys[0].to_string(),
}),
}
);
assert!(parse_system(&message.instructions[0], &keys[0..1]).is_err());
}
}
// file 08211ee33575f7cca4335b50cdcf706412644db3
use crate::ops::*;
use std::cell::Cell;
#[derive(Default, Debug, Copy, Clone)]
pub struct CpuPinIn {
pub data: u8,
pub irq: bool,
pub nmi: bool,
pub reset: bool,
pub power: bool,
pub dmc_req: Option<u16>,
pub oam_req: Option<u8>,
}
#[derive(Debug, Copy, Clone)]
enum PendingDmcRead {
Pending(u16, u32),
Reading,
Resume,
}
#[derive(Debug, Copy, Clone)]
enum OamDma {
Read(u16, u16),
Write(u16, u16),
}
#[derive(Debug, Copy, Clone)]
enum Irq {
ReadPcOne(u16),
ReadPcTwo(u16),
WriteRegPcHigh(u16),
WriteRegPcLow(u16),
WriteRegP(u16),
ReadHighJump(u16),
ReadLowJump(u16),
UpdateRegPc,
}
#[derive(Debug, Copy, Clone)]
enum Power {
ReadRegPcLow,
ReadRegPcHigh,
UpdateRegPc(u16),
}
#[derive(Debug, Copy, Clone)]
pub enum TickResult {
Read(u16),
Write(u16, u8),
Idle,
DmcRead(u8),
}
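// Each call to `Cpu::tick` yields exactly one of these bus operations: a memory
// read or write request, an idle cycle, or the byte just fetched on behalf of a
// pending DMC DMA read.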
#[derive(Copy, Clone)]
enum AddressResult {
Address(u16),
TickAddress(TickResult, u16),
Next(TickResult, Addressing),
}
#[derive(Copy, Clone)]
enum ExecResult {
Done,
Next(TickResult, Instruction),
Tick(TickResult),
}
#[derive(Debug, Copy, Clone)]
enum Stage {
Fetch,
Decode,
Address(Addressing, Instruction),
Execute(u16, Instruction),
OamDma(OamDma),
Reset(Power),
Power(Power),
Irq(Irq),
}
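// The stages cover the regular instruction pipeline (Fetch, Decode, Address,
// Execute) plus the multi-cycle interruptions (OamDma, Irq, Power, Reset);
// `tick` dispatches on the current stage once per cycle.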
#[derive(Debug, Copy, Clone)]
pub struct CpuDebugState {
pub reg_a: u8,
pub reg_x: u8,
pub reg_y: u8,
pub reg_pc: u16,
pub reg_sp: u8,
pub reg_p: u8,
pub instruction_addr: Option<u16>,
pub cycle: u64,
}
pub struct Cpu {
current_tick: u64,
power_up_pc: Option<u16>,
pin_in: CpuPinIn,
reg_a: u32,
reg_x: u32,
reg_y: u32,
reg_pc: u32,
reg_sp: u32,
flag_c: u32,
flag_z: u32,
flag_i: u32,
flag_d: u32,
flag_v: u32,
flag_s: u32,
stage: Stage,
last_tick: TickResult,
instruction_addr: Option<u16>,
dmc_hold: u8,
dmc_hold_addr: u16,
pending_dmc: Option<PendingDmcRead>,
pending_nmi: Cell<Option<u32>>,
pending_oam_dma: Cell<Option<u8>>,
pending_power: bool,
pending_reset: bool,
irq_delay: u32,
irq_set_delay: u32,
}
impl Cpu {
pub fn new() -> Cpu {
Cpu {
current_tick: 0,
power_up_pc: None,
pin_in: Default::default(),
reg_a: 0,
reg_x: 0,
reg_y: 0,
reg_pc: 0,
reg_sp: 0,
flag_c: 0,
flag_z: 0,
flag_i: 0,
flag_d: 0,
flag_v: 0,
flag_s: 0,
instruction_addr: None,
last_tick: TickResult::Read(0),
dmc_hold: 0,
dmc_hold_addr: 0,
stage: Stage::Fetch,
pending_dmc: None,
pending_nmi: Cell::new(None),
pending_oam_dma: Cell::new(None),
pending_power: false,
pending_reset: false,
irq_delay: 0,
irq_set_delay: 0,
}
}
pub fn power_up_pc(&mut self, pc: Option<u16>) {
self.power_up_pc = pc;
}
fn power(&mut self, step: Power) -> TickResult {
use Power::*;
use TickResult::*;
match step {
ReadRegPcLow => {
self.stage = Stage::Power(ReadRegPcHigh);
Read(0xfffc)
}
ReadRegPcHigh => {
self.stage = Stage::Power(UpdateRegPc(self.pin_in.data as u16));
Read(0xfffc + 1)
}
UpdateRegPc(low_addr) => {
let high_addr = (self.pin_in.data as u16) << 8;
self.reg_pc = (low_addr | high_addr) as u32;
self.set_reg_p(0x34);
self.reg_sp = 0xfd;
if let Some(addr) = self.power_up_pc {
self.reg_pc = addr as u32;
}
self.fetch()
}
}
}
fn reset(&mut self, step: Power) -> TickResult {
use Power::*;
use TickResult::*;
match step {
ReadRegPcLow => {
self.stage = Stage::Reset(ReadRegPcHigh);
Read(0xfffc)
}
ReadRegPcHigh => {
self.stage = Stage::Reset(UpdateRegPc(self.pin_in.data as u16));
Read(0xfffc + 1)
}
UpdateRegPc(low_addr) => {
let high_addr = (self.pin_in.data as u16) << 8;
self.reg_pc = (low_addr | high_addr) as u32;
self.reg_sp = self.reg_sp.wrapping_sub(3);
self.flag_i = 1;
self.fetch()
}
}
}
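    // Status-register packing: the `flag_*` fields hold raw bytes rather than
    // booleans. `flag_z` stores the last result value (the zero flag reads as
    // set when it is 0) and `flag_s` stores the last result value (the negative
    // flag is its bit 7). `reg_p`/`set_reg_p` convert to and from the packed
    // 6502 layout C=0x01, Z=0x02, I=0x04, D=0x08, V=0x40, N=0x80; bits 0x30
    // (break/unused) are never stored. See the test sketch at the end of this
    // file.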
fn reg_p(&self) -> u8 {
let mut val = 0;
if self.flag_c != 0 {
val |= 0x01;
}
if self.flag_z == 0 {
val |= 0x02;
}
if self.flag_i != 0 {
val |= 0x04;
}
if self.flag_d != 0 {
val |= 0x08;
}
if self.flag_v != 0 {
val |= 0x40;
}
if self.flag_s & 0x80 != 0 {
val |= 0x80;
}
val
}
fn set_reg_p(&mut self, val: u32) {
self.flag_c = val & 0x01;
self.flag_z = (val & 0x02) ^ 0x02;
self.flag_i = val & 0x04;
self.flag_d = val & 0x08;
self.flag_v = val & 0x40;
self.flag_s = val & 0x80;
}
fn oam_dma_req(&self) {
if let Some(addr) = self.pin_in.oam_req {
self.pending_oam_dma.set(Some(addr));
}
}
pub fn nmi_req(&self, delay: u32) {
self.pending_nmi.set(Some(delay));
}
fn dmc_req(&mut self) {
if let Some(addr) = self.pin_in.dmc_req {
self.pending_dmc = Some(PendingDmcRead::Pending(addr, 4));
}
}
pub fn nmi_cancel(&self) {
self.pending_nmi.set(None);
}
pub fn tick(&mut self, pin_in: CpuPinIn) -> TickResult {
self.pin_in = pin_in;
self.current_tick += 1;
self.oam_dma_req();
self.dmc_req();
self.instruction_addr = None;
if self.pin_in.power {
self.pending_power = true;
}
if self.pin_in.reset {
self.pending_reset = true;
}
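        // DMC DMA stall: while a sample fetch is pending, read cycles latch the
        // bus value and are replaced with idle cycles until the countdown
        // expires; the sample address is then read and handed to the APU via
        // `DmcRead`, and the latched byte is restored into `pin_in.data` so the
        // interrupted instruction resumes with the value it originally read.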
match self.pending_dmc {
Some(PendingDmcRead::Pending(addr, count)) => {
let mut was_read = false;
if let TickResult::Read(addr) = self.last_tick {
self.dmc_hold = self.pin_in.data;
self.dmc_hold_addr = addr;
was_read = true;
}
if count == 0 {
self.pending_dmc = Some(PendingDmcRead::Reading);
return TickResult::Read(addr);
} else {
self.pending_dmc = Some(PendingDmcRead::Pending(addr, count - 1));
if was_read {
return TickResult::Idle;
}
}
}
Some(PendingDmcRead::Reading) => {
self.pending_dmc = Some(PendingDmcRead::Resume);
return TickResult::DmcRead(self.pin_in.data);
}
Some(PendingDmcRead::Resume) => {
self.pending_dmc = None;
self.pin_in.data = self.dmc_hold;
}
None => (),
}
self.last_tick = match self.stage {
Stage::Fetch => self.fetch(),
Stage::Decode => self.decode(),
Stage::Address(addressing, instruction) => self.addressing(addressing, instruction),
Stage::Execute(address, instruction) => self.execute(address, instruction),
Stage::OamDma(oam) => self.oam_dma(oam),
Stage::Irq(irq) => self.irq_nmi(irq),
Stage::Power(step) => self.power(step),
Stage::Reset(step) => self.reset(step),
};
self.last_tick
}
pub fn debug_state(&self) -> CpuDebugState {
CpuDebugState {
reg_a: self.reg_a as u8,
reg_x: self.reg_x as u8,
reg_y: self.reg_y as u8,
reg_sp: self.reg_sp as u8,
reg_p: self.reg_p(),
reg_pc: self.reg_pc as u16,
instruction_addr: self.instruction_addr,
cycle: self.current_tick,
}
}
fn read_pc(&mut self) -> TickResult {
let pc = self.reg_pc as u16;
self.reg_pc = pc.wrapping_add(1) as u32;
TickResult::Read(pc)
}
fn pop_stack(&mut self) -> TickResult {
self.reg_sp = self.reg_sp.wrapping_add(1) & 0xff;
let addr = self.reg_sp as u16 | 0x100;
TickResult::Read(addr)
}
fn push_stack(&mut self, value: u8) -> TickResult {
let addr = self.reg_sp as u16 | 0x100;
self.reg_sp = self.reg_sp.wrapping_sub(1) & 0xff;
TickResult::Write(addr, value)
}
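    // OAM DMA: copies one 256-byte page to the PPU by alternating a read from
    // `high_addr | low_addr` with a write of that byte to $2004 until the low
    // byte reaches 255.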
fn oam_dma(&mut self, oam: OamDma) -> TickResult {
match oam {
OamDma::Read(high_addr, low_addr) => {
self.stage = Stage::OamDma(OamDma::Write(high_addr, low_addr));
TickResult::Read(high_addr | low_addr)
}
OamDma::Write(high_addr, low_addr) => {
if low_addr == 255 {
self.stage = Stage::Fetch;
} else {
self.stage = Stage::OamDma(OamDma::Read(high_addr, low_addr + 1));
}
TickResult::Write(0x2004, self.pin_in.data)
}
}
}
fn irq_nmi(&mut self, irq: Irq) -> TickResult {
use self::Irq::*;
match irq {
ReadPcOne(addr) => {
self.stage = Stage::Irq(Irq::ReadPcTwo(addr));
TickResult::Read(self.reg_pc as u16)
}
ReadPcTwo(addr) => {
self.stage = Stage::Irq(Irq::WriteRegPcHigh(addr));
TickResult::Read(self.reg_pc as u16)
}
WriteRegPcHigh(addr) => {
self.stage = Stage::Irq(Irq::WriteRegPcLow(addr));
let val = (self.reg_pc >> 8) & 0xff;
self.push_stack(val as u8)
}
WriteRegPcLow(addr) => {
self.stage = Stage::Irq(Irq::WriteRegP(addr));
let val = self.reg_pc & 0xff;
self.push_stack(val as u8)
}
WriteRegP(addr) => {
if self.pending_nmi.get().is_some() {
self.pending_nmi.set(None);
self.stage = Stage::Irq(Irq::ReadHighJump(0xfffa));
} else {
self.stage = Stage::Irq(Irq::ReadHighJump(addr));
}
let val = self.reg_p() | 0x20;
self.push_stack(val)
}
ReadHighJump(addr) => {
self.stage = Stage::Irq(Irq::ReadLowJump(addr));
TickResult::Read(addr)
}
ReadLowJump(addr) => {
self.stage = Stage::Irq(Irq::UpdateRegPc);
self.reg_pc &= 0xff00;
self.reg_pc |= self.pin_in.data as u32;
self.flag_i = 1;
TickResult::Read(addr + 1)
}
UpdateRegPc => {
self.reg_pc &= 0x00ff;
self.reg_pc |= ((self.pin_in.data as u16) << 8) as u32;
self.fetch()
}
}
}
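    // Chooses what happens at the next instruction boundary. Later checks
    // overwrite `stage`, so the effective priority is: plain decode < OAM DMA
    // < IRQ < NMI < power < reset. The `irq_delay`/`irq_set_delay` counters
    // model the one-instruction latency of CLI/SEI/PLP when they change the
    // I flag.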
fn interrupt(&mut self) -> Stage {
let mut stage = Stage::Decode;
if let Some(high_addr) = self.pending_oam_dma.get() {
self.pending_oam_dma.set(None);
stage = Stage::OamDma(OamDma::Read((high_addr as u16) << 8, 0))
}
if self.pin_in.irq && (self.flag_i == 0 || self.irq_set_delay != 0) && self.irq_delay == 0 {
if self.irq_set_delay != 0 {
self.irq_set_delay -= 1;
}
stage = Stage::Irq(Irq::ReadPcOne(0xfffe))
}
if self.irq_set_delay != 0 {
self.irq_set_delay -= 1;
}
if self.irq_delay != 0 {
self.irq_delay -= 1;
}
match self.pending_nmi.get() {
Some(0) => {
self.pending_nmi.set(None);
stage = Stage::Irq(Irq::ReadPcOne(0xfffa));
}
Some(count) => {
self.pending_nmi.set(Some(count - 1));
}
None => (),
}
if self.pin_in.power {
self.pin_in.power = false;
stage = Stage::Power(Power::ReadRegPcLow);
}
if self.pending_reset {
self.pending_reset = false;
stage = Stage::Reset(Power::ReadRegPcLow);
}
stage
}
fn fetch(&mut self) -> TickResult {
self.stage = self.interrupt();
match self.stage {
Stage::Fetch => self.read_pc(),
Stage::Decode => {
self.instruction_addr = Some(self.reg_pc as u16);
self.read_pc()
}
Stage::Address(addressing, instruction) => self.addressing(addressing, instruction),
Stage::Execute(address, instruction) => self.execute(address, instruction),
Stage::OamDma(oam) => self.oam_dma(oam),
Stage::Irq(irq) => self.irq_nmi(irq),
Stage::Power(step) => self.power(step),
Stage::Reset(step) => self.reset(step),
}
}
fn decode(&mut self) -> TickResult {
let op = super::ops::OPS[self.pin_in.data as usize];
self.addressing(op.addressing, op.instruction)
}
fn addressing(&mut self, addressing: Addressing, instruction: Instruction) -> TickResult {
use Addressing::*;
let address_res = match addressing {
None => self.addr_none(),
Accumulator => self.addr_accumulator(),
Immediate => self.addr_immediate(),
ZeroPage(step) => self.addr_zero_page(step),
ZeroPageOffset(reg, step) => self.addr_zero_page_offset(reg, step),
Absolute(step) => self.addr_absolute(step),
AbsoluteOffset(reg, dummy, step) => self.addr_absolute_offset(reg, dummy, step),
IndirectAbsolute(step) => self.addr_indirect_absolute(step),
Relative(step) => self.addr_relative(step),
IndirectX(step) => self.addr_indirect_x(step),
IndirectY(dummy, step) => self.addr_indirect_y(dummy, step),
};
match address_res {
AddressResult::Next(tick, next) => {
self.stage = Stage::Address(next, instruction);
tick
}
AddressResult::TickAddress(tick, address) => {
self.stage = Stage::Execute(address, instruction);
tick
}
AddressResult::Address(addr) => self.execute(addr, instruction),
}
}
fn addr_none(&mut self) -> AddressResult {
AddressResult::TickAddress(TickResult::Read(self.reg_pc as u16), 0x0000)
}
fn addr_accumulator(&mut self) -> AddressResult {
AddressResult::TickAddress(TickResult::Read(self.reg_pc as u16), self.reg_a as u16)
}
fn addr_immediate(&mut self) -> AddressResult {
let addr = self.reg_pc as u16;
self.reg_pc = self.reg_pc.wrapping_add(1);
AddressResult::Address(addr)
}
fn addr_zero_page(&mut self, step: ZeroPage) -> AddressResult {
use AddressResult::*;
use ZeroPage::*;
match step {
Read => Next(self.read_pc(), Addressing::ZeroPage(Decode)),
Decode => Address(self.pin_in.data as u16),
}
}
fn addr_zero_page_offset(&mut self, reg: Reg, step: ZeroPageOffset) -> AddressResult {
use AddressResult::*;
use ZeroPageOffset::*;
match step {
ReadImmediate => {
let next = Addressing::ZeroPageOffset(reg, ApplyOffset);
Next(self.read_pc(), next)
}
ApplyOffset => {
let reg = match reg {
Reg::X => self.reg_x,
Reg::Y => self.reg_y,
};
let addr = self.pin_in.data.wrapping_add(reg as u8);
TickAddress(TickResult::Read(self.pin_in.data as u16), addr as u16)
}
}
}
fn addr_absolute(&mut self, step: Absolute) -> AddressResult {
use Absolute::*;
use AddressResult::*;
match step {
ReadLow => {
let next = Addressing::Absolute(ReadHigh);
Next(self.read_pc(), next)
}
ReadHigh => {
let low_addr = self.pin_in.data as u16;
let next = Addressing::Absolute(Decode(low_addr));
Next(self.read_pc(), next)
}
Decode(low_addr) => {
let high_addr = (self.pin_in.data as u16) << 8;
let addr = low_addr | high_addr;
Address(addr)
}
}
}
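    // Absolute,X / Absolute,Y addressing: adding the index may cross a page
    // boundary, in which case a dummy read is issued at the un-carried address
    // (`wrapping_add` keeps the original high byte). Store and read-modify-write
    // forms use `DummyRead::Always` and take the extra cycle unconditionally.
    // `addr_indirect_y` below reuses the same scheme.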
fn addr_absolute_offset(
&mut self,
reg: Reg,
dummy: DummyRead,
step: AbsoluteOffset,
) -> AddressResult {
use AbsoluteOffset::*;
use AddressResult::*;
match step {
ReadLow => {
let next = Addressing::AbsoluteOffset(reg, dummy, ReadHigh);
Next(self.read_pc(), next)
}
ReadHigh => {
let next = Addressing::AbsoluteOffset(reg, dummy, Decode(self.pin_in.data as u16));
Next(self.read_pc(), next)
}
Decode(low_addr) => {
let high_addr = (self.pin_in.data as u16) << 8;
let addr = high_addr | low_addr;
let reg = match reg {
Reg::X => self.reg_x,
Reg::Y => self.reg_y,
};
let reg = (reg & 0xff) as u16;
let offset_addr = addr.wrapping_add(reg);
let will_wrap = will_wrap(addr, reg);
match (will_wrap, dummy) {
(true, DummyRead::OnCarry) | (_, DummyRead::Always) => {
let dummy_addr = wrapping_add(addr, reg);
TickAddress(TickResult::Read(dummy_addr), offset_addr)
}
_ => Address(offset_addr),
}
}
}
}
fn addr_indirect_absolute(&mut self, step: IndirectAbsolute) -> AddressResult {
use AddressResult::*;
use IndirectAbsolute::*;
match step {
ReadLow => {
let next = Addressing::IndirectAbsolute(ReadHigh);
Next(self.read_pc(), next)
}
ReadHigh => {
let next = Addressing::IndirectAbsolute(ReadIndirectLow(self.pin_in.data as u16));
Next(self.read_pc(), next)
}
ReadIndirectLow(low_addr) => {
let high_addr = (self.pin_in.data as u16) << 8;
let addr = low_addr | high_addr;
let next = Addressing::IndirectAbsolute(ReadIndirectHigh(addr));
Next(TickResult::Read(addr), next)
}
ReadIndirectHigh(addr) => {
let addr = wrapping_add(addr, 1);
let next = Addressing::IndirectAbsolute(Decode(self.pin_in.data as u16));
Next(TickResult::Read(addr), next)
}
Decode(low_addr) => {
let high_addr = (self.pin_in.data as u16) << 8;
let addr = low_addr | high_addr;
Address(addr)
}
}
}
fn addr_relative(&mut self, step: Relative) -> AddressResult {
use AddressResult::*;
use Relative::*;
match step {
ReadRegPc => Next(self.read_pc(), Addressing::Relative(Decode)),
Decode => Address(self.pin_in.data as u16),
}
}
fn addr_indirect_x(&mut self, step: IndirectX) -> AddressResult {
use AddressResult::*;
use IndirectX::*;
match step {
ReadBase => {
let next = Addressing::IndirectX(ReadDummy);
Next(self.read_pc(), next)
}
ReadDummy => {
let addr = self.pin_in.data.wrapping_add(self.reg_x as u8) as u16;
let next = Addressing::IndirectX(ReadIndirectLow(addr));
Next(TickResult::Read(self.pin_in.data as u16), next)
}
ReadIndirectLow(offset_addr) => {
let next = Addressing::IndirectX(ReadIndirectHigh(offset_addr));
Next(TickResult::Read(offset_addr), next)
}
ReadIndirectHigh(offset_addr) => {
let next = Addressing::IndirectX(Decode(self.pin_in.data as u16));
let high_offset_addr = wrapping_add(offset_addr, 1);
Next(TickResult::Read(high_offset_addr), next)
}
Decode(low_addr) => {
let high_addr = (self.pin_in.data as u16) << 8;
let addr = low_addr | high_addr;
Address(addr)
}
}
}
fn addr_indirect_y(&mut self, dummy: DummyRead, step: IndirectY) -> AddressResult {
use AddressResult::*;
use IndirectY::*;
match step {
ReadBase => {
let next = Addressing::IndirectY(dummy, ReadZeroPageLow);
Next(self.read_pc(), next)
}
ReadZeroPageLow => {
let zp_low_addr = self.pin_in.data as u16;
let next = Addressing::IndirectY(dummy, ReadZeroPageHigh(zp_low_addr));
Next(TickResult::Read(zp_low_addr), next)
}
ReadZeroPageHigh(zp_low_addr) => {
let zp_high_addr = wrapping_add(zp_low_addr, 1);
let low_addr = self.pin_in.data as u16;
let next = Addressing::IndirectY(dummy, Decode(low_addr));
Next(TickResult::Read(zp_high_addr), next)
}
Decode(low_addr) => {
let high_addr = (self.pin_in.data as u16) << 8;
let addr = low_addr | high_addr;
let reg_y = (self.reg_y & 0xff) as u16;
let offset_addr = addr.wrapping_add(reg_y);
let will_wrap = will_wrap(addr, reg_y);
match (will_wrap, dummy) {
(true, DummyRead::OnCarry) | (_, DummyRead::Always) => {
let dummy_addr = wrapping_add(addr, reg_y);
TickAddress(TickResult::Read(dummy_addr), offset_addr)
}
_ => Address(offset_addr),
}
}
}
}
fn execute(&mut self, address: u16, instruction: Instruction) -> TickResult {
use Instruction::*;
let exec_result = match instruction {
Adc(step) => self.inst_adc(address, step),
And(step) => self.inst_and(address, step),
Asl(step) => self.inst_asl(address, step),
Asla => self.inst_asla(),
Bcc(step) => {
let cond = self.flag_c == 0;
self.inst_branch(address, step, cond)
}
Bcs(step) => {
let cond = self.flag_c != 0;
self.inst_branch(address, step, cond)
}
Beq(step) => {
let cond = self.flag_z == 0;
self.inst_branch(address, step, cond)
}
Bit(step) => self.inst_bit(address, step),
Bmi(step) => {
let cond = self.flag_s & 0x80 != 0;
self.inst_branch(address, step, cond)
}
Bne(step) => {
let cond = self.flag_z != 0;
self.inst_branch(address, step, cond)
}
Bpl(step) => {
let cond = self.flag_s & 0x80 == 0;
self.inst_branch(address, step, cond)
}
Brk(step) => self.inst_brk(address, step),
Bvc(step) => {
let cond = self.flag_v == 0;
self.inst_branch(address, step, cond)
}
Bvs(step) => {
let cond = self.flag_v != 0;
self.inst_branch(address, step, cond)
}
Clc => self.inst_clc(),
Cld => self.inst_cld(),
Cli => self.inst_cli(),
Clv => self.inst_clv(),
Cmp(step) => self.inst_cmp(address, step),
Cpx(step) => self.inst_cpx(address, step),
Cpy(step) => self.inst_cpy(address, step),
Dec(step) => self.inst_dec(address, step),
Dex => self.inst_dex(),
Dey => self.inst_dey(),
Eor(step) => self.inst_eor(address, step),
Inc(step) => self.inst_inc(address, step),
Inx => self.inst_inx(),
Iny => self.inst_iny(),
Jmp => self.inst_jmp(address),
Jsr(step) => self.inst_jsr(address, step),
Lda(step) => self.inst_lda(address, step),
Ldx(step) => self.inst_ldx(address, step),
Ldy(step) => self.inst_ldy(address, step),
Lsr(step) => self.inst_lsr(address, step),
Lsra => self.inst_lsra(),
Nop => self.inst_nop(),
Ora(step) => self.inst_ora(address, step),
Pha => self.inst_pha(),
Php => self.inst_php(),
Pla(step) => self.inst_pla(step),
Plp(step) => self.inst_plp(step),
Rol(step) => self.inst_rol(address, step),
Rola => self.inst_rola(),
Ror(step) => self.inst_ror(address, step),
Rora => self.inst_rora(),
Rti(step) => self.inst_rti(step),
Rts(step) => self.inst_rts(step),
Sbc(step) => self.inst_sbc(address, step),
Sec => self.inst_sec(),
Sed => self.inst_sed(),
Sei => self.inst_sei(),
Sta => self.inst_sta(address),
Stx => self.inst_stx(address),
Sty => self.inst_sty(address),
Tax => self.inst_tax(),
Tay => self.inst_tay(),
Tsx => self.inst_tsx(),
Txa => self.inst_txa(),
Txs => self.inst_txs(),
Tya => self.inst_tya(),
IllAhx => self.ill_inst_ahx(address),
IllAlr(step) => self.ill_inst_alr(address, step),
IllAnc(step) => self.ill_inst_anc(address, step),
IllArr(step) => self.ill_inst_arr(address, step),
IllAxs(step) => self.ill_inst_axs(address, step),
IllDcp(step) => self.ill_inst_dcp(address, step),
IllIsc(step) => self.ill_inst_isc(address, step),
IllKil => self.ill_inst_kil(),
IllLas => self.ill_inst_las(address),
IllLax(step) => self.ill_inst_lax(address, step),
IllNop => self.ill_inst_nop(),
IllNopAddr => self.ill_inst_nop_addr(address),
IllRla(step) => self.ill_inst_rla(address, step),
IllRra(step) => self.ill_inst_rra(address, step),
IllSax => self.ill_inst_sax(address),
IllSbc(step) => self.ill_inst_sbc(address, step),
IllShx => self.ill_inst_shx(address),
IllShy => self.ill_inst_shy(address),
IllSlo(step) => self.ill_inst_slo(address, step),
IllSre(step) => self.ill_inst_sre(address, step),
IllTas => self.ill_inst_tas(address),
IllXaa(step) => self.ill_inst_xaa(address, step),
};
match exec_result {
ExecResult::Next(tick, next) => {
self.stage = Stage::Execute(address, next);
tick
}
ExecResult::Tick(tick) => {
self.stage = Stage::Fetch;
tick
}
ExecResult::Done => self.fetch(),
}
}
fn inst_adc(&mut self, addr: u16, step: ReadExec) -> ExecResult {
match step {
ReadExec::Read => {
ExecResult::Next(TickResult::Read(addr), Instruction::Adc(ReadExec::Exec))
}
ReadExec::Exec => {
let data = self.pin_in.data as u32;
let reg_a = self.reg_a.wrapping_add(data.wrapping_add(self.flag_c));
self.flag_v = ((!(self.reg_a ^ data) & (self.reg_a ^ reg_a)) >> 7) & 1;
self.flag_c = if reg_a > 0xff { 1 } else { 0 };
self.reg_a = reg_a & 0xff;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
ExecResult::Done
}
}
}
fn inst_and(&mut self, addr: u16, step: ReadExec) -> ExecResult {
match step {
ReadExec::Read => {
ExecResult::Next(TickResult::Read(addr), Instruction::And(ReadExec::Exec))
}
ReadExec::Exec => {
let data = self.pin_in.data as u32;
self.reg_a &= data;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
ExecResult::Done
}
}
}
fn inst_asl(&mut self, addr: u16, step: ReadDummyExec) -> ExecResult {
use ExecResult::*;
use ReadDummyExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Asl(Dummy)),
Dummy => {
let data = self.pin_in.data;
Next(TickResult::Write(addr, data), Instruction::Asl(Exec(data)))
}
Exec(data) => {
let value = self.asl(data as u32) as u8;
Tick(TickResult::Write(addr, value))
}
}
}
fn inst_asla(&mut self) -> ExecResult {
self.reg_a = self.asl(self.reg_a);
ExecResult::Done
}
fn asl(&mut self, mut value: u32) -> u32 {
self.flag_c = (value >> 7) & 1;
value = (value << 1) & 0xff;
self.flag_z = value;
self.flag_s = value;
value
}
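    // Conditional branches: the operand is a signed 8-bit offset, so values of
    // 0x80 and above branch backwards (hence the `wrapping_sub(256)`). A taken
    // branch costs one extra cycle, plus a dummy read of the partially updated
    // PC when the target lies on a different page.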
fn inst_branch(&mut self, addr: u16, step: Branch, condition: bool) -> ExecResult {
use self::Branch::*;
use ExecResult::*;
match step {
Check => {
if condition {
// TODO: Messy setting it to BCC
Next(TickResult::Read(addr), Instruction::Bcc(Branch))
} else {
Done
}
}
Branch => {
let high_pc = self.reg_pc & 0xff00;
if addr < 0x080 {
let offset_pc = self.reg_pc.wrapping_add(addr as u32);
self.reg_pc = offset_pc;
if high_pc != offset_pc & 0xff00 {
let dummy_pc = (high_pc | (offset_pc & 0xff)) as u16;
Tick(TickResult::Read(dummy_pc))
} else {
Done
}
} else {
let offset_pc = self.reg_pc.wrapping_add(addr as u32).wrapping_sub(256);
self.reg_pc = offset_pc;
if high_pc != (offset_pc & 0xff00) {
let dummy_pc = (high_pc | (offset_pc & 0xff)) as u16;
Tick(TickResult::Read(dummy_pc))
} else {
Done
}
}
}
}
}
fn inst_bit(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Bit(Exec)),
Exec => {
let data = self.pin_in.data as u32;
self.flag_s = data & 0x80;
self.flag_v = (data >> 6) & 1;
self.flag_z = data & self.reg_a;
Done
}
}
}
fn inst_brk(&mut self, addr: u16, step: Break) -> ExecResult {
use Break::*;
use ExecResult::*;
match step {
ReadDummy => Next(TickResult::Read(addr), Instruction::Brk(WriteRegPcHigh)),
WriteRegPcHigh => {
let pc_high = ((self.reg_pc >> 8) & 0xff) as u8;
Next(self.push_stack(pc_high), Instruction::Brk(WriteRegPcLow))
}
WriteRegPcLow => {
let pc_low = (self.reg_pc & 0xff) as u8;
Next(self.push_stack(pc_low), Instruction::Brk(WriteRegP))
}
WriteRegP => {
let reg_p = self.reg_p() | 0x30;
self.flag_i = 1;
let jump = match self.pending_nmi.get() {
Some(_) => {
self.pending_nmi.set(None);
ReadHighJump(0xfffa)
}
_ => ReadHighJump(0xfffe),
};
Next(self.push_stack(reg_p), Instruction::Brk(jump))
}
ReadHighJump(addr) => Next(TickResult::Read(addr), Instruction::Brk(ReadLowJump(addr))),
ReadLowJump(addr) => {
let low_value = self.pin_in.data as u16;
Next(
TickResult::Read(addr + 1),
Instruction::Brk(UpdateRegPc(low_value)),
)
}
UpdateRegPc(low_value) => {
let high_value = (self.pin_in.data as u16) << 8;
self.reg_pc = (low_value | high_value) as u32;
Done
}
}
}
fn inst_clc(&mut self) -> ExecResult {
self.flag_c = 0;
ExecResult::Done
}
fn inst_cld(&mut self) -> ExecResult {
self.flag_d = 0;
ExecResult::Done
}
fn inst_cli(&mut self) -> ExecResult {
if self.flag_i == 1 {
self.irq_delay = 1;
}
self.flag_i = 0;
ExecResult::Done
}
fn inst_clv(&mut self) -> ExecResult {
self.flag_v = 0;
ExecResult::Done
}
fn inst_cmp(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Cmp(Exec)),
Exec => {
let value = self.pin_in.data as u32;
self.flag_c = if self.reg_a >= value { 1 } else { 0 };
self.flag_z = if self.reg_a == value { 0 } else { 1 };
self.flag_s = self.reg_a.wrapping_sub(value) & 0xff;
Done
}
}
}
fn inst_cpx(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Cpx(Exec)),
Exec => {
let value = self.pin_in.data as u32;
self.flag_c = if self.reg_x >= value { 1 } else { 0 };
self.flag_z = if self.reg_x == value { 0 } else { 1 };
self.flag_s = self.reg_x.wrapping_sub(value) & 0xff;
Done
}
}
}
fn inst_cpy(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Cpy(Exec)),
Exec => {
let value = self.pin_in.data as u32;
self.flag_c = if self.reg_y >= value { 1 } else { 0 };
self.flag_z = if self.reg_y == value { 0 } else { 1 };
self.flag_s = self.reg_y.wrapping_sub(value) & 0xff;
Done
}
}
}
fn inst_dec(&mut self, addr: u16, step: ReadDummyExec) -> ExecResult {
use ExecResult::*;
use ReadDummyExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Dec(Dummy)),
Dummy => {
let value = self.pin_in.data;
Next(
TickResult::Write(addr, value),
Instruction::Dec(Exec(value)),
)
}
Exec(value) => {
let value = value.wrapping_sub(1) as u32;
self.flag_s = value;
self.flag_z = value;
Tick(TickResult::Write(addr, value as u8))
}
}
}
fn inst_dex(&mut self) -> ExecResult {
self.reg_x = self.reg_x.wrapping_sub(1) & 0xff;
self.flag_s = self.reg_x;
self.flag_z = self.reg_x;
ExecResult::Done
}
fn inst_dey(&mut self) -> ExecResult {
self.reg_y = self.reg_y.wrapping_sub(1) & 0xff;
self.flag_s = self.reg_y;
self.flag_z = self.reg_y;
ExecResult::Done
}
fn inst_eor(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Eor(Exec)),
Exec => {
let value = self.pin_in.data as u32;
self.reg_a ^= value;
self.reg_a &= 0xff;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
Done
}
}
}
fn inst_inc(&mut self, addr: u16, step: ReadDummyExec) -> ExecResult {
use ExecResult::*;
use ReadDummyExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Inc(Dummy)),
Dummy => {
let value = self.pin_in.data;
Next(
TickResult::Write(addr, value),
Instruction::Inc(Exec(value)),
)
}
Exec(value) => {
let value = value.wrapping_add(1) as u32;
self.flag_s = value;
self.flag_z = value;
Tick(TickResult::Write(addr, value as u8))
}
}
}
fn inst_inx(&mut self) -> ExecResult {
self.reg_x = self.reg_x.wrapping_add(1) & 0xff;
self.flag_s = self.reg_x;
self.flag_z = self.reg_x;
ExecResult::Done
}
fn inst_iny(&mut self) -> ExecResult {
self.reg_y = self.reg_y.wrapping_add(1) & 0xff;
self.flag_s = self.reg_y;
self.flag_z = self.reg_y;
ExecResult::Done
}
fn inst_jmp(&mut self, addr: u16) -> ExecResult {
self.reg_pc = addr as u32;
ExecResult::Done
}
fn inst_jsr(&mut self, addr: u16, step: Jsr) -> ExecResult {
use ExecResult::*;
use Jsr::*;
match step {
ReadDummy => {
let dummy_addr = self.reg_sp | 0x100;
Next(
TickResult::Read(dummy_addr as u16),
Instruction::Jsr(WriteRegPcHigh),
)
}
WriteRegPcHigh => {
let value = (self.reg_pc.wrapping_sub(1) >> 8) & 0xff;
Next(
self.push_stack(value as u8),
Instruction::Jsr(WriteRegPcLow),
)
}
WriteRegPcLow => {
let value = self.reg_pc.wrapping_sub(1) & 0xff;
self.reg_pc = addr as u32;
Tick(self.push_stack(value as u8))
}
}
}
fn inst_lda(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Lda(Exec)),
Exec => {
self.reg_a = self.pin_in.data as u32;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
Done
}
}
}
fn inst_ldx(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Ldx(Exec)),
Exec => {
self.reg_x = self.pin_in.data as u32;
self.flag_s = self.reg_x;
self.flag_z = self.reg_x;
Done
}
}
}
fn inst_ldy(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Ldy(Exec)),
Exec => {
self.reg_y = self.pin_in.data as u32;
self.flag_s = self.reg_y;
self.flag_z = self.reg_y;
Done
}
}
}
fn inst_lsr(&mut self, addr: u16, step: ReadDummyExec) -> ExecResult {
use ExecResult::*;
use ReadDummyExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Lsr(Dummy)),
Dummy => {
let data = self.pin_in.data;
Next(TickResult::Write(addr, data), Instruction::Lsr(Exec(data)))
}
Exec(data) => {
let value = self.lsr(data);
Tick(TickResult::Write(addr, value))
}
}
}
fn inst_lsra(&mut self) -> ExecResult {
self.reg_a = self.lsr(self.reg_a as u8) as u32;
ExecResult::Done
}
fn lsr(&mut self, value: u8) -> u8 {
self.flag_c = (value as u32) & 1;
let value = value >> 1;
self.flag_s = value as u32;
self.flag_z = value as u32;
value
}
fn inst_nop(&mut self) -> ExecResult {
ExecResult::Done
}
fn inst_ora(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Ora(Exec)),
Exec => {
self.reg_a = (self.reg_a | self.pin_in.data as u32) & 0xff;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
Done
}
}
}
fn inst_pha(&mut self) -> ExecResult {
ExecResult::Tick(self.push_stack(self.reg_a as u8))
}
fn inst_php(&mut self) -> ExecResult {
let value = self.reg_p() as u8 | 0x30;
ExecResult::Tick(self.push_stack(value))
}
fn inst_pla(&mut self, step: DummyReadExec) -> ExecResult {
use DummyReadExec::*;
use ExecResult::*;
match step {
Dummy => {
let dummy_addr = self.reg_sp | 0x100;
Next(TickResult::Read(dummy_addr as u16), Instruction::Pla(Read))
}
Read => Next(self.pop_stack(), Instruction::Pla(Exec)),
Exec => {
self.reg_a = self.pin_in.data as u32;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
Done
}
}
}
fn inst_plp(&mut self, step: DummyReadExec) -> ExecResult {
use DummyReadExec::*;
use ExecResult::*;
match step {
Dummy => {
let dummy_addr = self.reg_sp | 0x100;
Next(TickResult::Read(dummy_addr as u16), Instruction::Plp(Read))
}
Read => Next(self.pop_stack(), Instruction::Plp(Exec)),
Exec => {
let value = self.pin_in.data as u32;
if self.flag_i == 1 && value & 0x04 == 0 {
self.irq_delay = 1;
}
if self.flag_i == 0 && value & 0x04 != 0 {
self.irq_set_delay = 1;
}
self.set_reg_p(value);
Done
}
}
}
fn inst_rol(&mut self, addr: u16, step: ReadDummyExec) -> ExecResult {
use ExecResult::*;
use ReadDummyExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Rol(Dummy)),
Dummy => {
let value = self.pin_in.data;
Next(
TickResult::Write(addr, value),
Instruction::Rol(Exec(value)),
)
}
Exec(data) => {
let value = self.rol(data);
Tick(TickResult::Write(addr, value))
}
}
}
fn inst_rola(&mut self) -> ExecResult {
self.reg_a = self.rol(self.reg_a as u8) as u32;
ExecResult::Done
}
fn rol(&mut self, value: u8) -> u8 {
let value = value as u32;
let c = if self.flag_c != 0 { 1 } else { 0 };
self.flag_c = value >> 7 & 1;
let value = (value << 1 | c) & 0xff;
self.flag_s = value;
self.flag_z = value;
value as u8
}
fn inst_ror(&mut self, addr: u16, step: ReadDummyExec) -> ExecResult {
use ExecResult::*;
use ReadDummyExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Ror(Dummy)),
Dummy => {
let value = self.pin_in.data;
Next(
TickResult::Write(addr, value),
Instruction::Ror(Exec(value)),
)
}
Exec(data) => {
let value = self.ror(data);
Tick(TickResult::Write(addr, value))
}
}
}
fn inst_rora(&mut self) -> ExecResult {
self.reg_a = self.ror(self.reg_a as u8) as u32;
ExecResult::Done
}
fn ror(&mut self, value: u8) -> u8 {
let value = value as u32;
let c = if self.flag_c != 0 { 0x80 } else { 0 };
self.flag_c = value & 1;
let value = (value >> 1 | c) & 0xff;
self.flag_s = value;
self.flag_z = value;
value as u8
}
fn inst_rti(&mut self, step: Rti) -> ExecResult {
use ExecResult::*;
use Rti::*;
match step {
Dummy => {
let dummy_addr = self.reg_sp | 0x100;
Next(
TickResult::Read(dummy_addr as u16),
Instruction::Rti(ReadRegP),
)
}
ReadRegP => Next(self.pop_stack(), Instruction::Rti(ReadRegPcLow)),
ReadRegPcLow => {
let reg_p = self.pin_in.data;
self.set_reg_p(reg_p as u32);
Next(self.pop_stack(), Instruction::Rti(ReadRegPcHigh))
}
ReadRegPcHigh => {
let low_value = self.pin_in.data;
Next(self.pop_stack(), Instruction::Rti(Exec(low_value as u16)))
}
Exec(low_addr) => {
let high_addr = (self.pin_in.data as u16) << 8;
self.reg_pc = (high_addr | low_addr) as u32;
Done
}
}
}
fn inst_rts(&mut self, step: Rts) -> ExecResult {
use ExecResult::*;
use Rts::*;
match step {
Dummy => {
let dummy_addr = self.reg_sp | 0x100;
Next(
TickResult::Read(dummy_addr as u16),
Instruction::Rts(ReadRegPcLow),
)
}
ReadRegPcLow => Next(self.pop_stack(), Instruction::Rts(ReadRegPcHigh)),
ReadRegPcHigh => {
let low_value = self.pin_in.data as u16;
Next(self.pop_stack(), Instruction::Rts(Exec(low_value)))
}
Exec(low_addr) => {
let high_addr = (self.pin_in.data as u16) << 8;
self.reg_pc = (high_addr | low_addr).wrapping_add(1) as u32;
Tick(TickResult::Read(self.reg_pc as u16))
}
}
}
fn inst_sbc(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::Sbc(Exec)),
Exec => {
let value = self.pin_in.data as i32;
let temp_a = self.reg_a as i32;
let temp = temp_a.wrapping_sub(value.wrapping_sub(self.flag_c as i32 - 1));
self.flag_v = (((temp_a ^ value) & (temp_a ^ temp)) >> 7) as u32 & 1;
self.flag_c = if temp < 0 { 0 } else { 1 };
self.reg_a = (temp as u32) & 0xff;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
Done
}
}
}
fn inst_sec(&mut self) -> ExecResult {
self.flag_c = 1;
ExecResult::Done
}
fn inst_sed(&mut self) -> ExecResult {
self.flag_d = 1;
ExecResult::Done
}
fn inst_sei(&mut self) -> ExecResult {
if self.flag_i == 0 {
self.irq_set_delay = 1;
}
self.flag_i = 1;
ExecResult::Done
}
fn inst_sta(&mut self, addr: u16) -> ExecResult {
ExecResult::Tick(TickResult::Write(addr, self.reg_a as u8))
}
fn inst_stx(&mut self, addr: u16) -> ExecResult {
ExecResult::Tick(TickResult::Write(addr, self.reg_x as u8))
}
fn inst_sty(&mut self, addr: u16) -> ExecResult {
ExecResult::Tick(TickResult::Write(addr, self.reg_y as u8))
}
fn inst_tax(&mut self) -> ExecResult {
self.reg_x = self.reg_a;
self.flag_s = self.reg_x;
self.flag_z = self.reg_x;
ExecResult::Done
}
fn inst_tay(&mut self) -> ExecResult {
self.reg_y = self.reg_a;
self.flag_s = self.reg_y;
self.flag_z = self.reg_y;
ExecResult::Done
}
fn inst_tsx(&mut self) -> ExecResult {
self.reg_x = self.reg_sp;
self.flag_s = self.reg_x;
self.flag_z = self.reg_x;
ExecResult::Done
}
fn inst_txa(&mut self) -> ExecResult {
self.reg_a = self.reg_x;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
ExecResult::Done
}
fn inst_txs(&mut self) -> ExecResult {
self.reg_sp = self.reg_x;
ExecResult::Done
}
fn inst_tya(&mut self) -> ExecResult {
self.reg_a = self.reg_y;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
ExecResult::Done
}
fn ill_inst_ahx(&mut self, addr: u16) -> ExecResult {
ExecResult::Tick(TickResult::Read(addr))
}
fn ill_inst_alr(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::IllAlr(Exec)),
Exec => {
let value = self.pin_in.data as u32;
self.reg_a &= value;
self.flag_c = self.reg_a & 1;
self.reg_a >>= 1;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
Done
}
}
}
fn ill_inst_anc(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::IllAnc(Exec)),
Exec => {
let value = self.pin_in.data as u32;
self.reg_a &= value;
self.flag_c = (self.reg_a >> 7) & 1;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
Done
}
}
}
fn ill_inst_arr(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::IllArr(Exec)),
Exec => {
let value = self.pin_in.data as u32;
self.reg_a &= value;
if self.flag_c != 0 {
self.flag_c = self.reg_a & 1;
self.reg_a = ((self.reg_a >> 1) | 0x80) & 0xff;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
} else {
self.flag_c = self.reg_a & 1;
self.reg_a = (self.reg_a >> 1) & 0xff;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
}
match ((self.reg_a & 0x40), (self.reg_a & 0x20)) {
(0, 0) => {
self.flag_c = 0;
self.flag_v = 0;
}
(_, 0) => {
self.flag_c = 1;
self.flag_v = 1;
}
(0, _) => {
self.flag_c = 0;
self.flag_v = 1;
}
(_, _) => {
self.flag_c = 1;
self.flag_v = 0;
}
}
Done
}
}
}
fn ill_inst_axs(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::IllAxs(Exec)),
Exec => {
let value = self.pin_in.data as u32;
self.reg_x &= self.reg_a;
let temp = self.reg_x.wrapping_sub(value);
self.flag_c = if temp > self.reg_x { 0 } else { 1 };
self.reg_x = temp & 0xff;
self.flag_s = self.reg_x;
self.flag_z = self.reg_x;
Done
}
}
}
fn ill_inst_dcp(&mut self, addr: u16, step: ReadDummyExec) -> ExecResult {
use ExecResult::*;
use ReadDummyExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::IllDcp(Dummy)),
Dummy => {
let data = self.pin_in.data;
Next(
TickResult::Write(addr, data),
Instruction::IllDcp(Exec(data)),
)
}
Exec(data) => {
let value = data.wrapping_sub(1) as u32;
self.flag_c = if self.reg_a >= value { 1 } else { 0 };
self.flag_z = if self.reg_a == value { 0 } else { 1 };
self.flag_s = self.reg_a.wrapping_sub(value) & 0xff;
Tick(TickResult::Write(addr, value as u8))
}
}
}
fn ill_inst_isc(&mut self, addr: u16, step: ReadDummyExec) -> ExecResult {
use ExecResult::*;
use ReadDummyExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::IllIsc(Dummy)),
Dummy => {
let data = self.pin_in.data;
Next(
TickResult::Write(addr, data),
Instruction::IllIsc(Exec(data)),
)
}
Exec(data) => {
let value = data.wrapping_add(1) as i32;
let temp_a = self.reg_a as i32;
let temp = temp_a.wrapping_sub(value.wrapping_sub(self.flag_c as i32 - 1));
self.flag_v = (((temp_a ^ value) & (temp_a ^ temp)) >> 7) as u32 & 1;
self.flag_c = if temp < 0 { 0 } else { 1 };
self.reg_a = (temp as u32) & 0xff;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
Tick(TickResult::Write(addr, value as u8))
}
}
}
fn ill_inst_kil(&mut self) -> ExecResult {
eprintln!("KIL encountered");
ExecResult::Done
}
fn ill_inst_las(&mut self, addr: u16) -> ExecResult {
ExecResult::Tick(TickResult::Read(addr))
}
fn ill_inst_lax(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::IllLax(Exec)),
Exec => {
self.reg_a = self.pin_in.data as u32;
self.reg_x = self.reg_a;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
Done
}
}
}
fn ill_inst_nop(&mut self) -> ExecResult {
ExecResult::Done
}
fn ill_inst_nop_addr(&mut self, addr: u16) -> ExecResult {
ExecResult::Tick(TickResult::Read(addr))
}
fn ill_inst_rla(&mut self, addr: u16, step: ReadDummyExec) -> ExecResult {
use ExecResult::*;
use ReadDummyExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::IllRla(Dummy)),
Dummy => {
let data = self.pin_in.data;
Next(
TickResult::Write(addr, data),
Instruction::IllRla(Exec(data)),
)
}
Exec(data) => {
let c = if self.flag_c != 0 { 1 } else { 0 };
self.flag_c = (data as u32) >> 7 & 1;
let value = ((data as u32) << 1 | c) & 0xff;
self.reg_a &= value;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
Tick(TickResult::Write(addr, value as u8))
}
}
}
fn ill_inst_rra(&mut self, addr: u16, step: ReadDummyExec) -> ExecResult {
use ExecResult::*;
use ReadDummyExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::IllRra(Dummy)),
Dummy => {
let data = self.pin_in.data;
Next(
TickResult::Write(addr, data),
Instruction::IllRra(Exec(data)),
)
}
Exec(data) => {
let data = data as u32;
let c = if self.flag_c != 0 { 0x80 } else { 0 };
self.flag_c = data & 1;
let data = (data >> 1 | c) & 0xff;
let value = self.reg_a.wrapping_add(data.wrapping_add(self.flag_c));
self.flag_v = ((!(self.reg_a ^ data) & (self.reg_a ^ value)) >> 7) & 1;
self.flag_c = if value > 0xff { 1 } else { 0 };
self.reg_a = value & 0xff;
self.flag_s = value & 0xff;
self.flag_z = value & 0xff;
Tick(TickResult::Write(addr, data as u8))
}
}
}
fn ill_inst_sax(&mut self, addr: u16) -> ExecResult {
let value = (self.reg_a & self.reg_x) & 0xff;
ExecResult::Tick(TickResult::Write(addr, value as u8))
}
fn ill_inst_sbc(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::IllSbc(Exec)),
Exec => {
let value = self.pin_in.data as i32;
let temp_a = self.reg_a as i32;
let temp = temp_a.wrapping_sub(value.wrapping_sub(self.flag_c as i32 - 1));
self.flag_v = (((temp_a ^ value) & (temp_a ^ temp)) >> 7) as u32 & 1;
self.flag_c = if temp < 0 { 0 } else { 1 };
self.reg_a = (temp as u32) & 0xff;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
Done
}
}
}
fn ill_inst_shx(&mut self, addr: u16) -> ExecResult {
let temp_addr = addr as u32;
let value = (self.reg_x & ((temp_addr >> 8).wrapping_add(1))) & 0xff;
ExecResult::Tick(TickResult::Write(addr, value as u8))
}
fn ill_inst_shy(&mut self, addr: u16) -> ExecResult {
let temp_addr = addr as u32;
let value = (self.reg_y & ((temp_addr >> 8).wrapping_add(1))) & 0xff;
ExecResult::Tick(TickResult::Write(addr, value as u8))
}
fn ill_inst_slo(&mut self, addr: u16, step: ReadDummyExec) -> ExecResult {
use ExecResult::*;
use ReadDummyExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::IllSlo(Dummy)),
Dummy => {
let data = self.pin_in.data;
Next(
TickResult::Write(addr, data),
Instruction::IllSlo(Exec(data)),
)
}
Exec(data) => {
let value = data as u32;
self.flag_c = (value >> 7) & 1;
let value = (value << 1) & 0xff;
self.reg_a |= value;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
Tick(TickResult::Write(addr, value as u8))
}
}
}
fn ill_inst_sre(&mut self, addr: u16, step: ReadDummyExec) -> ExecResult {
use ExecResult::*;
use ReadDummyExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::IllSre(Dummy)),
Dummy => {
let data = self.pin_in.data;
Next(
TickResult::Write(addr, data),
Instruction::IllSre(Exec(data)),
)
}
Exec(data) => {
let value = data as u32;
self.flag_c = value & 1;
let value = value >> 1;
self.reg_a ^= value;
self.reg_a &= 0xff;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
Tick(TickResult::Write(addr, value as u8))
}
}
}
fn ill_inst_tas(&mut self, addr: u16) -> ExecResult {
self.reg_sp = self.reg_x & self.reg_a;
let value = self.reg_sp & ((addr as u32) >> 8);
ExecResult::Tick(TickResult::Write(addr, value as u8))
}
fn ill_inst_xaa(&mut self, addr: u16, step: ReadExec) -> ExecResult {
use ExecResult::*;
use ReadExec::*;
match step {
Read => Next(TickResult::Read(addr), Instruction::IllXaa(Exec)),
Exec => {
let value = self.pin_in.data as u32;
self.reg_a = self.reg_x & value;
self.flag_s = self.reg_a;
self.flag_z = self.reg_a;
Done
}
}
}
}
fn will_wrap(addr: u16, add: u16) -> bool {
addr & 0xff00 != addr.wrapping_add(add) & 0xff00
}
fn wrapping_add(addr: u16, add: u16) -> u16 {
(addr & 0xff00) | (addr.wrapping_add(add) & 0xff)
}
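// Hedged test sketch (not part of the original source): exercises the private
// status-flag packing and the page-cross helpers above, assuming the rest of
// the crate (`crate::ops`) builds as-is.
#[cfg(test)]
mod cpu_unit_tests {
    use super::*;

    #[test]
    fn status_register_round_trip() {
        let mut cpu = Cpu::new();
        // Bits 4 and 5 (break/unused) are never stored, so they read back as 0.
        cpu.set_reg_p(0xff);
        assert_eq!(cpu.reg_p(), 0xcf);
        // Power-up value 0x34: only the interrupt-disable bit survives the mask.
        cpu.set_reg_p(0x34);
        assert_eq!(cpu.reg_p(), 0x04);
    }

    #[test]
    fn page_cross_helpers() {
        // Adding 0x10 to 0x00f8 crosses into page 0x01xx...
        assert!(will_wrap(0x00f8, 0x10));
        // ...but the dummy-read address stays on the original page.
        assert_eq!(wrapping_add(0x00f8, 0x10), 0x0008);
        assert!(!will_wrap(0x00f0, 0x05));
        assert_eq!(wrapping_add(0x00f0, 0x05), 0x00f5);
    }
}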
| 33.177436 | 100 | 0.481856 |
ab0696c86c86af3fccdee9e8a387c6e00f5334dc | 2,718 | use crate::graph::{Graph, GraphError};
use std::ops::Add;
type Parents = Vec<Option<usize>>;
type Distances<T> = Vec<Option<T>>;
impl<T> Graph<T> where T: PartialOrd + Copy + Default + Add<Output = T> {
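    /// Single-source shortest paths via Bellman-Ford: relaxes every edge up to
    /// |V| times and returns, for each vertex, its predecessor on a shortest
    /// path and its distance from `from` (`None` when unreachable). Returns an
    /// error if `from` is out of range, or if relaxation never settles, which
    /// indicates a negative cycle reachable from `from`.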
pub fn bellman_ford(&self, from: usize) -> Result<(Parents, Distances<T>), GraphError> {
if from >= self.size() {
            return Err(GraphError::new("The vertex is missing from the graph"));
}
let mut parents = vec![None; self.size()];
let mut distances = vec![None; self.size()];
distances[from] = Some(Default::default());
for _ in 0..self.adj.len() {
let mut any = false;
for idx in 0..self.adj.len() {
if distances[idx].is_some() {
for edge in &self.adj[idx] {
if distances[edge.to].is_none() {
parents[edge.to] = Some(idx);
distances[edge.to] = Some(edge.weight + distances[idx].unwrap());
any = true;
} else if edge.weight + distances[idx].unwrap() < distances[edge.to].unwrap() {
parents[edge.to] = Some(idx);
distances[edge.to] = Some(edge.weight + distances[idx].unwrap());
any = true
}
}
}
}
if !any {
return Ok((parents, distances));
}
}
Err(GraphError::new("Exists cycle"))
}
}
#[cfg(test)]
#[test]
fn test_bellman_ford(){
let mut graph = Graph::new(10);
graph.add_oriented_edge(1, 2, 2.0).unwrap();
graph.add_oriented_edge(2, 3, 5.0).unwrap();
graph.add_oriented_edge(3, 5, 7.0).unwrap();
graph.add_oriented_edge(1, 5, 19.0).unwrap();
let (parents, distances) = graph.bellman_ford(1).unwrap();
assert_eq!(graph.search_path(5, &parents).unwrap().unwrap(), vec![1, 2, 3, 5]);
assert_eq!(graph.search_path(3, &parents).unwrap().unwrap(), vec![1, 2, 3]);
assert_eq!(distances[5].unwrap(), 14.0);
assert_eq!(distances[7], None);
let mut graph = Graph::new(4);
graph.add_oriented_edge(1, 2, 2.0).unwrap();
graph.add_oriented_edge(2, 3, -2.0).unwrap();
graph.add_oriented_edge(3, 4, -2.0).unwrap();
graph.add_oriented_edge(4, 2, -2.0).unwrap();
let res = graph.bellman_ford(1);
    assert!(res.is_err(), "expected an error for the negative cycle");
let mut graph = Graph::new(4);
graph.add_oriented_edge(1, 3, 2.0).unwrap();
graph.add_oriented_edge(2, 4, -2.0).unwrap();
let (_, dist) = graph.bellman_ford(1).unwrap();
assert_eq!(dist, vec![None, Some(0.0), None, Some(2.0), None]);
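    // Added check (sketch): a start vertex outside the graph is rejected up front.
    assert!(graph.bellman_ford(dist.len()).is_err());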
} | 38.28169 | 103 | 0.533481 |
3a4516aceafd54e1db80d186f05142989bd7a3af | 21,812 | // Copyright (c) The Dijets Core Contributors
// SPDX-License-Identifier: Apache-2.0
//! This module contains verification of usage of dependencies for modules and scripts.
use move_binary_format::{
access::{ModuleAccess, ScriptAccess},
binary_views::BinaryIndexedView,
errors::{verification_error, Location, PartialVMError, PartialVMResult, VMResult},
file_format::{
AbilitySet, Bytecode, CodeOffset, CompiledModule, CompiledScript, FunctionDefinitionIndex,
FunctionHandleIndex, ModuleHandleIndex, SignatureToken, StructHandleIndex,
StructTypeParameter, TableIndex, Visibility,
},
IndexKind,
};
use move_core_types::{identifier::Identifier, language_storage::ModuleId, vm_status::StatusCode};
use std::collections::{BTreeMap, BTreeSet, HashMap};
struct Context<'a, 'b> {
resolver: BinaryIndexedView<'a>,
// (Module -> CompiledModule) for (at least) all immediate dependencies
dependency_map: BTreeMap<ModuleId, &'b CompiledModule>,
// (Module::StructName -> handle) for all types of all dependencies
struct_id_to_handle_map: HashMap<(ModuleId, Identifier), StructHandleIndex>,
// (Module::FunctionName -> handle) for all functions that can ever be called by this
// module/script in all dependencies
func_id_to_handle_map: HashMap<(ModuleId, Identifier), FunctionHandleIndex>,
// (handle -> visibility) for all function handles found in the module being checked
function_visibilities: HashMap<FunctionHandleIndex, Visibility>,
}
impl<'a, 'b> Context<'a, 'b> {
fn module(
module: &'a CompiledModule,
dependencies: impl IntoIterator<Item = &'b CompiledModule>,
) -> Self {
Self::new(BinaryIndexedView::Module(module), dependencies)
}
fn script(
script: &'a CompiledScript,
dependencies: impl IntoIterator<Item = &'b CompiledModule>,
) -> Self {
Self::new(BinaryIndexedView::Script(script), dependencies)
}
fn new(
resolver: BinaryIndexedView<'a>,
dependencies: impl IntoIterator<Item = &'b CompiledModule>,
) -> Self {
let self_module = resolver.self_id();
let self_module_idx = resolver.self_handle_idx();
let empty_defs = &vec![];
let self_function_defs = match &resolver {
BinaryIndexedView::Module(m) => m.function_defs(),
BinaryIndexedView::Script(_) => empty_defs,
};
let dependency_map = dependencies
.into_iter()
.filter(|d| Some(d.self_id()) != self_module)
.map(|d| (d.self_id(), d))
.collect();
let mut context = Self {
resolver,
dependency_map,
struct_id_to_handle_map: HashMap::new(),
func_id_to_handle_map: HashMap::new(),
function_visibilities: HashMap::new(),
};
let mut dependency_visibilities = HashMap::new();
for (module_id, module) in &context.dependency_map {
let friend_module_ids: BTreeSet<_> = module.immediate_friends().into_iter().collect();
// Module::StructName -> def handle idx
for struct_def in module.struct_defs() {
let struct_handle = module.struct_handle_at(struct_def.struct_handle);
let struct_name = module.identifier_at(struct_handle.name);
context.struct_id_to_handle_map.insert(
(module_id.clone(), struct_name.to_owned()),
struct_def.struct_handle,
);
}
// Module::FuncName -> def handle idx
for func_def in module.function_defs() {
let func_handle = module.function_handle_at(func_def.function);
let func_name = module.identifier_at(func_handle.name);
dependency_visibilities.insert(
(module_id.clone(), func_name.to_owned()),
func_def.visibility,
);
let may_be_called = match func_def.visibility {
Visibility::Public | Visibility::Script => true,
Visibility::Friend => self_module
.as_ref()
.map_or(false, |self_id| friend_module_ids.contains(self_id)),
Visibility::Private => false,
};
if may_be_called {
context
.func_id_to_handle_map
.insert((module_id.clone(), func_name.to_owned()), func_def.function);
}
}
}
for function_def in self_function_defs {
let visibility = function_def.visibility;
context
.function_visibilities
.insert(function_def.function, visibility);
}
for (idx, function_handle) in context.resolver.function_handles().iter().enumerate() {
if Some(function_handle.module) == self_module_idx {
continue;
}
let owner_module_id = context
.resolver
.module_id_for_handle(context.resolver.module_handle_at(function_handle.module));
let function_name = context.resolver.identifier_at(function_handle.name);
let visibility =
match dependency_visibilities.get(&(owner_module_id, function_name.to_owned())) {
// The visibility does not need to be set here. If the function does not
// link, it will be reported by verify_imported_functions
None => continue,
Some(vis) => *vis,
};
context
.function_visibilities
.insert(FunctionHandleIndex(idx as TableIndex), visibility);
}
context
}
}
pub fn verify_module<'a>(
module: &CompiledModule,
dependencies: impl IntoIterator<Item = &'a CompiledModule>,
) -> VMResult<()> {
verify_module_impl(module, dependencies)
.map_err(|e| e.finish(Location::Module(module.self_id())))
}
fn verify_module_impl<'a>(
module: &CompiledModule,
dependencies: impl IntoIterator<Item = &'a CompiledModule>,
) -> PartialVMResult<()> {
let context = &Context::module(module, dependencies);
verify_imported_modules(context)?;
verify_imported_structs(context)?;
verify_imported_functions(context)?;
verify_all_script_visibility_usage(context)
}
pub fn verify_script<'a>(
script: &CompiledScript,
dependencies: impl IntoIterator<Item = &'a CompiledModule>,
) -> VMResult<()> {
verify_script_impl(script, dependencies).map_err(|e| e.finish(Location::Script))
}
pub fn verify_script_impl<'a>(
script: &CompiledScript,
dependencies: impl IntoIterator<Item = &'a CompiledModule>,
) -> PartialVMResult<()> {
let context = &Context::script(script, dependencies);
verify_imported_modules(context)?;
verify_imported_structs(context)?;
verify_imported_functions(context)?;
verify_all_script_visibility_usage(context)
}
fn verify_imported_modules(context: &Context) -> PartialVMResult<()> {
let self_module = context.resolver.self_handle_idx();
for (idx, module_handle) in context.resolver.module_handles().iter().enumerate() {
let module_id = context.resolver.module_id_for_handle(module_handle);
if Some(ModuleHandleIndex(idx as u16)) != self_module
&& !context.dependency_map.contains_key(&module_id)
{
return Err(verification_error(
StatusCode::MISSING_DEPENDENCY,
IndexKind::ModuleHandle,
idx as TableIndex,
));
}
}
Ok(())
}
fn verify_imported_structs(context: &Context) -> PartialVMResult<()> {
let self_module = context.resolver.self_handle_idx();
for (idx, struct_handle) in context.resolver.struct_handles().iter().enumerate() {
if Some(struct_handle.module) == self_module {
continue;
}
let owner_module_id = context
.resolver
.module_id_for_handle(context.resolver.module_handle_at(struct_handle.module));
// TODO: remove unwrap
let owner_module = context.dependency_map.get(&owner_module_id).unwrap();
let struct_name = context.resolver.identifier_at(struct_handle.name);
match context
.struct_id_to_handle_map
.get(&(owner_module_id, struct_name.to_owned()))
{
Some(def_idx) => {
let def_handle = owner_module.struct_handle_at(*def_idx);
if !compatible_struct_abilities(struct_handle.abilities, def_handle.abilities)
|| !compatible_struct_type_parameters(
&struct_handle.type_parameters,
&def_handle.type_parameters,
)
{
return Err(verification_error(
StatusCode::TYPE_MISMATCH,
IndexKind::StructHandle,
idx as TableIndex,
));
}
}
None => {
return Err(verification_error(
StatusCode::LOOKUP_FAILED,
IndexKind::StructHandle,
idx as TableIndex,
))
}
}
}
Ok(())
}
fn verify_imported_functions(context: &Context) -> PartialVMResult<()> {
let self_module = context.resolver.self_handle_idx();
for (idx, function_handle) in context.resolver.function_handles().iter().enumerate() {
if Some(function_handle.module) == self_module {
continue;
}
let owner_module_id = context
.resolver
.module_id_for_handle(context.resolver.module_handle_at(function_handle.module));
let function_name = context.resolver.identifier_at(function_handle.name);
// TODO: remove unwrap
let owner_module = context.dependency_map.get(&owner_module_id).unwrap();
match context
.func_id_to_handle_map
.get(&(owner_module_id.clone(), function_name.to_owned()))
{
Some(def_idx) => {
let def_handle = owner_module.function_handle_at(*def_idx);
// compatible type parameter constraints
if !compatible_fun_type_parameters(
&function_handle.type_parameters,
&def_handle.type_parameters,
) {
return Err(verification_error(
StatusCode::TYPE_MISMATCH,
IndexKind::FunctionHandle,
idx as TableIndex,
));
}
// same parameters
let handle_params = context.resolver.signature_at(function_handle.parameters);
let def_params = match context.dependency_map.get(&owner_module_id) {
Some(module) => module.signature_at(def_handle.parameters),
None => {
return Err(verification_error(
StatusCode::LOOKUP_FAILED,
IndexKind::FunctionHandle,
idx as TableIndex,
))
}
};
compare_cross_module_signatures(
context,
&handle_params.0,
&def_params.0,
owner_module,
)
.map_err(|e| e.at_index(IndexKind::FunctionHandle, idx as TableIndex))?;
// same return_
let handle_return = context.resolver.signature_at(function_handle.return_);
let def_return = match context.dependency_map.get(&owner_module_id) {
Some(module) => module.signature_at(def_handle.return_),
None => {
return Err(verification_error(
StatusCode::LOOKUP_FAILED,
IndexKind::FunctionHandle,
idx as TableIndex,
))
}
};
compare_cross_module_signatures(
context,
&handle_return.0,
&def_return.0,
owner_module,
)
.map_err(|e| e.at_index(IndexKind::FunctionHandle, idx as TableIndex))?;
}
None => {
return Err(verification_error(
StatusCode::LOOKUP_FAILED,
IndexKind::FunctionHandle,
idx as TableIndex,
));
}
}
}
Ok(())
}
// The local view must be a subset of (or equal to) the defined set of abilities. Conceptually, the
// local view can be more constrained than the defined one. Removing abilities locally does nothing
// but limit the local usage.
// (Note this works because there are no negative constraints, i.e. you cannot constrain a type
// parameter with the absence of an ability)
fn compatible_struct_abilities(
local_struct_abilities_declaration: AbilitySet,
defined_struct_abilities: AbilitySet,
) -> bool {
local_struct_abilities_declaration.is_subset(defined_struct_abilities)
}
// - The number of type parameters must be the same
// - Each pair of parameters must satisfy [`compatible_type_parameter_constraints`]
fn compatible_fun_type_parameters(
local_type_parameters_declaration: &[AbilitySet],
defined_type_parameters: &[AbilitySet],
) -> bool {
local_type_parameters_declaration.len() == defined_type_parameters.len()
&& local_type_parameters_declaration
.iter()
.zip(defined_type_parameters)
.all(
|(
local_type_parameter_constraints_declaration,
defined_type_parameter_constraints,
)| {
compatible_type_parameter_constraints(
*local_type_parameter_constraints_declaration,
*defined_type_parameter_constraints,
)
},
)
}
// - The number of type parameters must be the same
// - Each pair of parameters must satisfy [`compatible_type_parameter_constraints`] and [`compatible_type_parameter_phantom_decl`]
fn compatible_struct_type_parameters(
local_type_parameters_declaration: &[StructTypeParameter],
defined_type_parameters: &[StructTypeParameter],
) -> bool {
local_type_parameters_declaration.len() == defined_type_parameters.len()
&& local_type_parameters_declaration
.iter()
.zip(defined_type_parameters)
.all(
|(local_type_parameter_declaration, defined_type_parameter)| {
compatible_type_parameter_phantom_decl(
local_type_parameter_declaration,
defined_type_parameter,
) && compatible_type_parameter_constraints(
local_type_parameter_declaration.constraints,
defined_type_parameter.constraints,
)
},
)
}
// The local view of a type parameter must be a superset of (or equal to) the defined
// constraints. Conceptually, the local view can be more constrained than the defined one as the
// local context is only limiting usage, and cannot take advantage of the additional constraints.
fn compatible_type_parameter_constraints(
local_type_parameter_constraints_declaration: AbilitySet,
defined_type_parameter_constraints: AbilitySet,
) -> bool {
defined_type_parameter_constraints.is_subset(local_type_parameter_constraints_declaration)
}
// Adding phantom declarations relaxes the requirements for clients, thus, the local view may
// lack a phantom declaration present in the definition.
fn compatible_type_parameter_phantom_decl(
local_type_parameter_declaration: &StructTypeParameter,
defined_type_parameter: &StructTypeParameter,
) -> bool {
// local_type_parameter_declaration.is_phantom => defined_type_parameter.is_phantom
!local_type_parameter_declaration.is_phantom || defined_type_parameter.is_phantom
}
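// Hedged test sketch (not part of the original source): illustrates the
// direction of the subset rules documented above, assuming the
// `AbilitySet::EMPTY`/`AbilitySet::ALL` constants and the public
// `StructTypeParameter` fields exposed by `move_binary_format`.
#[cfg(test)]
mod compatibility_tests {
    use super::*;

    #[test]
    fn ability_and_constraint_subset_directions() {
        // A local struct handle may drop abilities relative to the definition...
        assert!(compatible_struct_abilities(AbilitySet::EMPTY, AbilitySet::ALL));
        assert!(!compatible_struct_abilities(AbilitySet::ALL, AbilitySet::EMPTY));
        // ...while type-parameter constraints go the other way: the local view
        // may only add constraints, never remove them.
        assert!(compatible_type_parameter_constraints(
            AbilitySet::ALL,
            AbilitySet::EMPTY
        ));
        assert!(!compatible_type_parameter_constraints(
            AbilitySet::EMPTY,
            AbilitySet::ALL
        ));
        // Arity must match exactly for function type parameters.
        assert!(!compatible_fun_type_parameters(&[], &[AbilitySet::EMPTY]));
    }

    #[test]
    fn phantom_declarations_may_be_dropped_locally() {
        let phantom = StructTypeParameter {
            constraints: AbilitySet::EMPTY,
            is_phantom: true,
        };
        let plain = StructTypeParameter {
            constraints: AbilitySet::EMPTY,
            is_phantom: false,
        };
        // The local view may omit a phantom declaration present in the
        // definition, but may not invent one.
        assert!(compatible_type_parameter_phantom_decl(&plain, &phantom));
        assert!(!compatible_type_parameter_phantom_decl(&phantom, &plain));
    }
}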
fn compare_cross_module_signatures(
context: &Context,
handle_sig: &[SignatureToken],
def_sig: &[SignatureToken],
def_module: &CompiledModule,
) -> PartialVMResult<()> {
if handle_sig.len() != def_sig.len() {
return Err(PartialVMError::new(StatusCode::TYPE_MISMATCH));
}
for (handle_type, def_type) in handle_sig.iter().zip(def_sig) {
compare_types(context, handle_type, def_type, def_module)?;
}
Ok(())
}
fn compare_types(
context: &Context,
handle_type: &SignatureToken,
def_type: &SignatureToken,
def_module: &CompiledModule,
) -> PartialVMResult<()> {
match (handle_type, def_type) {
(SignatureToken::Bool, SignatureToken::Bool)
| (SignatureToken::U8, SignatureToken::U8)
| (SignatureToken::U64, SignatureToken::U64)
| (SignatureToken::U128, SignatureToken::U128)
| (SignatureToken::Address, SignatureToken::Address)
| (SignatureToken::Signer, SignatureToken::Signer) => Ok(()),
(SignatureToken::Vector(ty1), SignatureToken::Vector(ty2)) => {
compare_types(context, ty1, ty2, def_module)
}
(SignatureToken::Struct(idx1), SignatureToken::Struct(idx2)) => {
compare_structs(context, *idx1, *idx2, def_module)
}
(
SignatureToken::StructInstantiation(idx1, inst1),
SignatureToken::StructInstantiation(idx2, inst2),
) => {
compare_structs(context, *idx1, *idx2, def_module)?;
compare_cross_module_signatures(context, inst1, inst2, def_module)
}
(SignatureToken::Reference(ty1), SignatureToken::Reference(ty2))
| (SignatureToken::MutableReference(ty1), SignatureToken::MutableReference(ty2)) => {
compare_types(context, ty1, ty2, def_module)
}
(SignatureToken::TypeParameter(idx1), SignatureToken::TypeParameter(idx2)) => {
if idx1 != idx2 {
Err(PartialVMError::new(StatusCode::TYPE_MISMATCH))
} else {
Ok(())
}
}
_ => Err(PartialVMError::new(StatusCode::TYPE_MISMATCH)),
}
}
fn compare_structs(
context: &Context,
idx1: StructHandleIndex,
idx2: StructHandleIndex,
def_module: &CompiledModule,
) -> PartialVMResult<()> {
// grab ModuleId and struct name for the module being verified
let struct_handle = context.resolver.struct_handle_at(idx1);
let module_handle = context.resolver.module_handle_at(struct_handle.module);
let module_id = context.resolver.module_id_for_handle(module_handle);
let struct_name = context.resolver.identifier_at(struct_handle.name);
// grab ModuleId and struct name for the definition
let def_struct_handle = def_module.struct_handle_at(idx2);
let def_module_handle = def_module.module_handle_at(def_struct_handle.module);
let def_module_id = def_module.module_id_for_handle(def_module_handle);
let def_struct_name = def_module.identifier_at(def_struct_handle.name);
if module_id != def_module_id || struct_name != def_struct_name {
Err(PartialVMError::new(StatusCode::TYPE_MISMATCH))
} else {
Ok(())
}
}
fn verify_all_script_visibility_usage(context: &Context) -> PartialVMResult<()> {
match &context.resolver {
BinaryIndexedView::Module(m) => {
for (idx, fdef) in m.function_defs().iter().enumerate() {
let code = match &fdef.code {
None => continue,
Some(code) => &code.code,
};
verify_script_visibility_usage(
context,
fdef.visibility,
FunctionDefinitionIndex(idx as TableIndex),
code,
)?
}
Ok(())
}
BinaryIndexedView::Script(s) => verify_script_visibility_usage(
context,
Visibility::Script,
FunctionDefinitionIndex(0),
&s.code().code,
),
}
}
fn verify_script_visibility_usage(
context: &Context,
current_visibility: Visibility,
fdef_idx: FunctionDefinitionIndex,
code: &[Bytecode],
) -> PartialVMResult<()> {
for (idx, instr) in code.iter().enumerate() {
let idx = idx as CodeOffset;
let fhandle_idx = match instr {
Bytecode::Call(fhandle_idx) => fhandle_idx,
Bytecode::CallGeneric(finst_idx) => {
&context
.resolver
.function_instantiation_at(*finst_idx)
.handle
}
_ => continue,
};
let fhandle_vis = context.function_visibilities[fhandle_idx];
match (current_visibility, fhandle_vis) {
(Visibility::Script, Visibility::Script) => (),
(_, Visibility::Script) => {
return Err(PartialVMError::new(
StatusCode::CALLED_SCRIPT_VISIBLE_FROM_NON_SCRIPT_VISIBLE,
)
.at_code_offset(fdef_idx, idx)
.with_message(
"script-visible functions can only be called from scripts or other \
                     script-visible functions"
.to_string(),
));
}
_ => (),
}
}
Ok(())
}
| 40.243542 | 130 | 0.603429 |
8aac93f0777281cbfbbe362be7fa06563d7484cd | 2,003 | #[doc = "Register `PCCR45` reader"]
pub struct R(crate::R<PCCR45_SPEC>);
impl core::ops::Deref for R {
type Target = crate::R<PCCR45_SPEC>;
#[inline(always)]
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl From<crate::R<PCCR45_SPEC>> for R {
#[inline(always)]
fn from(reader: crate::R<PCCR45_SPEC>) -> Self {
R(reader)
}
}
#[doc = "Register `PCCR45` writer"]
pub struct W(crate::W<PCCR45_SPEC>);
impl core::ops::Deref for W {
type Target = crate::W<PCCR45_SPEC>;
#[inline(always)]
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl core::ops::DerefMut for W {
#[inline(always)]
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
impl From<crate::W<PCCR45_SPEC>> for W {
#[inline(always)]
fn from(writer: crate::W<PCCR45_SPEC>) -> Self {
W(writer)
}
}
impl W {
#[doc = "Writes raw bits to the register."]
#[inline(always)]
pub unsafe fn bits(&mut self, bits: u32) -> &mut Self {
self.0.bits(bits);
self
}
}
#[doc = "PWM45 Clock Configuration Register\n\nThis register you can [`read`](crate::generic::Reg::read), [`write_with_zero`](crate::generic::Reg::write_with_zero), [`reset`](crate::generic::Reg::reset), [`write`](crate::generic::Reg::write), [`modify`](crate::generic::Reg::modify). See [API](https://docs.rs/svd2rust/#read--modify--write-api).\n\nFor information about available fields see [pccr45](index.html) module"]
pub struct PCCR45_SPEC;
impl crate::RegisterSpec for PCCR45_SPEC {
type Ux = u32;
}
#[doc = "`read()` method returns [pccr45::R](R) reader structure"]
impl crate::Readable for PCCR45_SPEC {
type Reader = R;
}
#[doc = "`write(|w| ..)` method takes [pccr45::W](W) writer structure"]
impl crate::Writable for PCCR45_SPEC {
type Writer = W;
}
#[doc = "`reset()` method sets PCCR45 to value 0"]
impl crate::Resettable for PCCR45_SPEC {
#[inline(always)]
fn reset_value() -> Self::Ux {
0
}
}
| 30.815385 | 421 | 0.618073 |
715dc837cff2c9b49524deb10cd6d865cd2ca4ba | 15,857 | // Copyright (c) The Starcoin Core Contributors
// SPDX-License-Identifier: Apache-2.0
use crate::helper::{decode_key, gen_keypair, generate_node_name, load_key, save_key};
use crate::{
get_available_port_from, get_random_available_port, parse_key_val, ApiQuotaConfig, BaseConfig,
ConfigModule, QuotaDuration, StarcoinOpt,
};
use anyhow::Result;
use clap::Parser;
use network_api::messages::{NotificationMessage, BLOCK_PROTOCOL_NAME};
use network_p2p_types::{
is_memory_addr, memory_addr,
multiaddr::{Multiaddr, Protocol},
MultiaddrWithPeerId,
};
use once_cell::sync::Lazy;
use rand::seq::SliceRandom;
use rand::thread_rng;
use serde::{Deserialize, Serialize};
use starcoin_crypto::ed25519::{Ed25519PrivateKey, Ed25519PublicKey};
use starcoin_logger::prelude::*;
use starcoin_types::peer_info::PeerId;
use std::borrow::Cow;
use std::collections::HashSet;
use std::net::Ipv4Addr;
use std::num::NonZeroU32;
use std::path::PathBuf;
use std::str::FromStr;
use std::sync::Arc;
pub static G_DEFAULT_NETWORK_PORT: u16 = 9840;
static G_NETWORK_KEY_FILE: Lazy<PathBuf> = Lazy::new(|| PathBuf::from("network_key"));
#[derive(Debug, Default, Clone, PartialEq, Deserialize, Serialize, Parser)]
pub struct NetworkRpcQuotaConfiguration {
#[serde(skip_serializing_if = "Option::is_none")]
#[clap(
name = "p2prpc-default-global-api-quota",
long,
help = "default global p2p rpc quota, eg: 1000/s"
)]
pub default_global_api_quota: Option<ApiQuotaConfig>,
#[serde(skip_serializing_if = "Option::is_none")]
#[clap(
name = "p2prpc-custom-global-api-quota",
long,
number_of_values = 1,
parse(try_from_str = parse_key_val)
)]
/// customize global p2p rpc quota, eg: get_block=100/s
/// number_of_values = 1 forces the user to repeat the -D option for each key-value pair:
/// my_program -D a=1 -D b=2
pub custom_global_api_quota: Option<Vec<(String, ApiQuotaConfig)>>,
#[serde(skip_serializing_if = "Option::is_none")]
#[clap(
name = "p2prpc-default-user-api-quota",
long,
help = "default p2p rpc quota of a peer, eg: 1000/s"
)]
pub default_user_api_quota: Option<ApiQuotaConfig>,
#[serde(skip_serializing_if = "Option::is_none")]
#[clap(
name = "p2prpc-custom-user-api-quota",
long,
help = "customize p2p rpc quota of a peer, eg: get_block=10/s",
parse(try_from_str = parse_key_val),
number_of_values = 1
)]
pub custom_user_api_quota: Option<Vec<(String, ApiQuotaConfig)>>,
}
impl NetworkRpcQuotaConfiguration {
pub fn default_global_api_quota(&self) -> ApiQuotaConfig {
self.default_global_api_quota
.clone()
.unwrap_or(ApiQuotaConfig {
max_burst: NonZeroU32::new(1000).expect("New NonZeroU32 should success."),
duration: QuotaDuration::Second,
})
}
pub fn custom_global_api_quota(&self) -> Vec<(String, ApiQuotaConfig)> {
self.custom_global_api_quota.clone().unwrap_or_default()
}
pub fn default_user_api_quota(&self) -> ApiQuotaConfig {
self.default_user_api_quota
.clone()
.unwrap_or(ApiQuotaConfig {
max_burst: NonZeroU32::new(50).expect("New NonZeroU32 should success."),
duration: QuotaDuration::Second,
})
}
pub fn custom_user_api_quota(&self) -> Vec<(String, ApiQuotaConfig)> {
self.custom_user_api_quota.clone().unwrap_or_default()
}
pub fn merge(&mut self, o: &Self) -> Result<()> {
if o.default_global_api_quota.is_some() {
self.default_global_api_quota = o.default_global_api_quota.clone();
}
//TODO should merge two vec?
if o.custom_global_api_quota.is_some() {
self.custom_global_api_quota = o.custom_global_api_quota.clone();
}
if o.default_user_api_quota.is_some() {
self.default_user_api_quota = o.default_user_api_quota.clone();
}
if o.custom_user_api_quota.is_some() {
self.custom_user_api_quota = o.custom_user_api_quota.clone();
}
Ok(())
}
}
// To avoid a conflict between the seed vec and subcommands, define a custom type to parse seeds.
//https://github.com/TeXitoi/clap/issues/367
#[derive(Default, Clone, Debug, Deserialize, PartialEq, Serialize)]
pub struct Seeds(pub Vec<MultiaddrWithPeerId>);
impl Seeds {
pub fn into_vec(self) -> Vec<MultiaddrWithPeerId> {
self.into()
}
pub fn merge(&mut self, other: &Seeds) {
let mut seeds = HashSet::new();
seeds.extend(self.0.clone().into_iter());
seeds.extend(other.0.clone().into_iter());
let mut seeds: Vec<MultiaddrWithPeerId> = seeds.into_iter().collect();
//keep order in config
seeds.sort();
self.0 = seeds;
}
pub fn is_empty(&self) -> bool {
self.0.is_empty()
}
}
impl FromStr for Seeds {
type Err = anyhow::Error;
fn from_str(s: &str) -> Result<Self, Self::Err> {
let seeds = s
.split(',')
.filter(|s| !s.is_empty())
.map(MultiaddrWithPeerId::from_str)
.collect::<Result<Vec<MultiaddrWithPeerId>, network_p2p_types::ParseErr>>()?;
Ok(Seeds(seeds))
}
}
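// A minimal sketch of the parsing behaviour above: ',' is the delimiter and empty
// fragments are skipped, so an empty or delimiter-only string yields no seeds.
#[cfg(test)]
mod seeds_parsing_sketch {
    use super::Seeds;
    use std::str::FromStr;

    #[test]
    fn empty_and_delimiter_only_inputs_yield_no_seeds() {
        assert!(Seeds::from_str("").unwrap().is_empty());
        assert!(Seeds::from_str(",,").unwrap().is_empty());
    }
}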
#[allow(clippy::from_over_into)]
impl Into<Vec<MultiaddrWithPeerId>> for Seeds {
fn into(self) -> Vec<MultiaddrWithPeerId> {
self.0
}
}
impl From<Vec<MultiaddrWithPeerId>> for Seeds {
fn from(seeds: Vec<MultiaddrWithPeerId>) -> Self {
Seeds(seeds)
}
}
#[derive(Default, Clone, Debug, Deserialize, PartialEq, Serialize, Parser)]
#[serde(deny_unknown_fields)]
pub struct NetworkConfig {
#[serde(skip_serializing_if = "Option::is_none")]
#[clap(long = "node-name")]
    /// Node network name, just for display; if absent, a random name will be generated.
pub node_name: Option<String>,
#[serde(skip)]
#[clap(long = "node-key")]
/// Node network private key string.
    /// This option is skipped in the config file and only supported as a cli option; after init the key is written to node_key_file.
pub node_key: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
#[clap(long = "node-key-file", parse(from_os_str), conflicts_with("node-key"))]
/// Node network private key file, default is network_key under the data dir.
pub node_key_file: Option<PathBuf>,
#[serde(skip_serializing_if = "Seeds::is_empty")]
#[serde(default)]
#[clap(long = "seed", default_value = "")]
    /// P2P network seeds; multiple seeds should use ',' as the delimiter.
pub seeds: Seeds,
/// Enable peer discovery on local networks.
    /// By default this option is `false`, and it is only supported as a cli option.
#[serde(skip)]
#[clap(long = "discover-local")]
pub discover_local: Option<bool>,
#[serde(skip)]
#[clap(long = "disable-seed")]
    /// Do not connect to seed nodes, including builtin and config seeds.
    /// This option is skipped in the config file and only supported as a cli option.
pub disable_seed: bool,
#[clap(flatten)]
pub network_rpc_quotas: NetworkRpcQuotaConfiguration,
#[serde(skip_serializing_if = "Option::is_none")]
#[clap(long)]
    /// Min peers to propagate new blocks and new transactions. Default 8.
min_peers_to_propagate: Option<u32>,
#[serde(skip_serializing_if = "Option::is_none")]
#[clap(long)]
    /// Max peers to propagate new blocks and new transactions. Default 128.
max_peers_to_propagate: Option<u32>,
#[serde(skip_serializing_if = "Option::is_none")]
#[clap(long)]
    /// Max count for incoming peers. Default 25.
max_incoming_peers: Option<u32>,
#[serde(skip_serializing_if = "Option::is_none")]
#[clap(long)]
    /// Max count for outgoing connected peers. Default 75.
    /// max peers = max_incoming_peers + max_outgoing_peers
max_outgoing_peers: Option<u32>,
#[serde(skip_serializing_if = "Option::is_none")]
#[clap(long)]
    /// P2P network listen address. Default is /ip4/0.0.0.0/tcp/9840.
listen: Option<Multiaddr>,
#[serde(skip)]
#[clap(skip)]
base: Option<Arc<BaseConfig>>,
#[serde(skip)]
#[clap(skip)]
network_keypair: Option<(Ed25519PrivateKey, Ed25519PublicKey)>,
#[serde(skip)]
#[clap(skip)]
generate_listen: Option<Multiaddr>,
#[serde(skip_serializing_if = "Option::is_none")]
#[clap(name = "unsupported-protocols", long, use_value_delimiter = true)]
pub unsupported_protocols: Option<Vec<String>>,
}
impl NetworkConfig {
fn base(&self) -> &BaseConfig {
self.base.as_ref().expect("Config should init.")
}
pub fn listen(&self) -> Multiaddr {
self.generate_listen.clone().expect("Config should init.")
}
pub fn seeds(&self) -> Vec<MultiaddrWithPeerId> {
if self.disable_seed {
return vec![];
}
let mut seeds: HashSet<MultiaddrWithPeerId> =
self.seeds.clone().into_vec().into_iter().collect();
seeds.extend(self.base().net().boot_nodes().iter().cloned());
let self_peer_id = self.self_peer_id();
seeds.retain(|node| {
if &node.peer_id == self_peer_id.origin() {
info!(
"Self peer_id({}) contains in boot nodes, removed.",
self_peer_id
);
false
} else {
true
}
});
let mut seeds: Vec<MultiaddrWithPeerId> = seeds.into_iter().collect();
// shuffle seeds, connect seeds with random orders.
seeds.shuffle(&mut thread_rng());
seeds
}
pub fn network_keypair(&self) -> &(Ed25519PrivateKey, Ed25519PublicKey) {
self.network_keypair.as_ref().expect("Config should init.")
}
pub fn self_address(&self) -> MultiaddrWithPeerId {
let addr = self.listen();
let host = if is_memory_addr(&addr) {
addr
} else {
addr.replace(0, |_p| Some(Protocol::Ip4(Ipv4Addr::new(127, 0, 0, 1))))
.expect("Replace multi address fail.")
};
MultiaddrWithPeerId::new(host, self.self_peer_id().into())
}
pub fn discover_local(&self) -> bool {
self.discover_local.unwrap_or(false)
}
pub fn disable_seed(&self) -> bool {
self.disable_seed
}
pub fn self_peer_id(&self) -> PeerId {
PeerId::from_ed25519_public_key(self.network_keypair().1.clone())
}
pub fn max_peers_to_propagate(&self) -> u32 {
self.max_peers_to_propagate.unwrap_or(128)
}
pub fn min_peers_to_propagate(&self) -> u32 {
self.min_peers_to_propagate.unwrap_or(8)
}
pub fn max_incoming_peers(&self) -> u32 {
self.max_incoming_peers.unwrap_or(25)
}
pub fn max_outgoing_peers(&self) -> u32 {
self.max_outgoing_peers.unwrap_or(75)
}
pub fn node_name(&self) -> String {
self.node_name.clone().unwrap_or_else(generate_node_name)
}
fn node_key_file(&self) -> PathBuf {
let path = self.node_key_file.as_ref().unwrap_or(&G_NETWORK_KEY_FILE);
if path.is_absolute() {
path.clone()
} else {
self.base().data_dir().join(path.as_path())
}
}
    /// Node key loading steps:
    /// 1. if node_key is Some, directly decode the key.
    /// 2. otherwise, try to load the node key from node_key_file.
    /// 3. if node_key_file does not exist, generate a key and save it to node_key_file.
fn load_or_generate_keypair(&mut self) -> Result<()> {
let keypair = match self.node_key.as_ref() {
Some(node_key) => decode_key(node_key)?,
None => {
let path = self.node_key_file();
if path.exists() {
load_key(&path)?
} else {
let keypair = gen_keypair();
save_key(&keypair.0.to_bytes(), &path)?;
keypair
}
}
};
self.network_keypair = Some(keypair);
Ok(())
}
fn generate_listen_address(&mut self) {
if self.listen.is_some() {
self.generate_listen = self.listen.clone();
} else {
let base = self.base();
let port = if base.net().is_test() {
get_random_available_port()
} else if base.net().is_dev() {
get_available_port_from(G_DEFAULT_NETWORK_PORT)
} else {
G_DEFAULT_NETWORK_PORT
};
//test env use in memory transport.
let listen = if base.net().is_test() {
memory_addr(port as u64)
} else {
format!("/ip4/0.0.0.0/tcp/{}", port)
.parse()
.expect("Parse multi address fail.")
};
self.generate_listen = Some(listen);
}
}
pub fn supported_network_protocols(&self) -> Vec<Cow<'static, str>> {
let protocols = NotificationMessage::protocols();
if let Some(unsupported_protocols) = &self.unsupported_protocols {
return protocols
.into_iter()
.filter(|protocol| {
!unsupported_protocols.contains(&protocol.to_string())
|| protocol == BLOCK_PROTOCOL_NAME
})
.collect();
}
protocols
}
}
impl ConfigModule for NetworkConfig {
fn merge_with_opt(&mut self, opt: &StarcoinOpt, base: Arc<BaseConfig>) -> Result<()> {
self.base = Some(base);
self.seeds.merge(&opt.network.seeds);
if opt.network.disable_seed {
self.disable_seed = opt.network.disable_seed;
}
self.network_rpc_quotas
.merge(&opt.network.network_rpc_quotas)?;
if opt.network.node_name.is_some() {
self.node_name = opt.network.node_name.clone();
}
if self.node_name.is_none() {
self.node_name = Some(generate_node_name())
}
if opt.network.node_key.is_some() {
self.node_key = opt.network.node_key.clone();
}
if opt.network.listen.is_some() {
self.listen = opt.network.listen.clone();
}
if let Some(m) = opt.network.max_peers_to_propagate {
self.max_peers_to_propagate = Some(m);
}
if let Some(m) = opt.network.min_peers_to_propagate {
self.min_peers_to_propagate = Some(m);
}
if opt.network.discover_local.is_some() {
self.discover_local = opt.network.discover_local;
}
if opt.network.max_incoming_peers.is_some() {
self.max_incoming_peers = opt.network.max_incoming_peers;
}
if opt.network.max_outgoing_peers.is_some() {
self.max_outgoing_peers = opt.network.max_outgoing_peers;
}
if opt.network.unsupported_protocols.is_some() {
let mut protocols: HashSet<String> = self
.unsupported_protocols
.clone()
.unwrap_or_default()
.into_iter()
.collect();
protocols.extend(
opt.network
.unsupported_protocols
.clone()
.unwrap_or_default(),
);
self.unsupported_protocols = Some(
protocols
.into_iter()
.filter(|protocol| !protocol.eq_ignore_ascii_case(BLOCK_PROTOCOL_NAME))
.map(|protocol| protocol.to_lowercase())
.collect(),
);
}
self.load_or_generate_keypair()?;
self.generate_listen_address();
Ok(())
}
}
| 33.595339 | 116 | 0.604528 |
fc60c30fcb8cdd0f9ecc0850eb4851047aeb31f4 | 1,726 | #[doc = "Writer for register TASKS_CTSTOP"]
pub type W = crate::W<u32, super::TASKS_CTSTOP>;
#[doc = "Register TASKS_CTSTOP `reset()`'s with value 0"]
impl crate::ResetValue for super::TASKS_CTSTOP {
type Type = u32;
#[inline(always)]
fn reset_value() -> Self::Type {
0
}
}
#[doc = "Stop calibration timer\n\nValue on reset: 0"]
#[derive(Clone, Copy, Debug, PartialEq)]
pub enum TASKS_CTSTOP_AW {
#[doc = "1: Trigger task"]
TRIGGER = 1,
}
impl From<TASKS_CTSTOP_AW> for bool {
#[inline(always)]
fn from(variant: TASKS_CTSTOP_AW) -> Self {
variant as u8 != 0
}
}
#[doc = "Write proxy for field `TASKS_CTSTOP`"]
pub struct TASKS_CTSTOP_W<'a> {
w: &'a mut W,
}
impl<'a> TASKS_CTSTOP_W<'a> {
#[doc = r"Writes `variant` to the field"]
#[inline(always)]
pub fn variant(self, variant: TASKS_CTSTOP_AW) -> &'a mut W {
{
self.bit(variant.into())
}
}
#[doc = "Trigger task"]
#[inline(always)]
pub fn trigger(self) -> &'a mut W {
self.variant(TASKS_CTSTOP_AW::TRIGGER)
}
#[doc = r"Sets the field bit"]
#[inline(always)]
pub fn set_bit(self) -> &'a mut W {
self.bit(true)
}
#[doc = r"Clears the field bit"]
#[inline(always)]
pub fn clear_bit(self) -> &'a mut W {
self.bit(false)
}
#[doc = r"Writes raw bits to the field"]
#[inline(always)]
pub fn bit(self, value: bool) -> &'a mut W {
self.w.bits = (self.w.bits & !0x01) | ((value as u32) & 0x01);
self.w
}
}
impl W {
#[doc = "Bit 0 - Stop calibration timer"]
#[inline(always)]
pub fn tasks_ctstop(&mut self) -> TASKS_CTSTOP_W {
TASKS_CTSTOP_W { w: self }
}
}
| 26.96875 | 70 | 0.572422 |
9024b5324d6b48768cbb638ae25510f191f78dbf | 3,013 | /// A macro is exposed so that we can embed the program ID.
#[macro_export]
macro_rules! vote_weight_record {
($id:expr) => {
/// Anchor wrapper for the SPL governance program's VoterWeightRecord type.
#[derive(Clone)]
pub struct VoterWeightRecord(spl_governance_addin_api::voter_weight::VoterWeightRecord);
impl anchor_lang::AccountDeserialize for VoterWeightRecord {
fn try_deserialize(buf: &mut &[u8]) -> std::result::Result<Self, ProgramError> {
let mut data = buf;
let vwr: spl_governance_addin_api::voter_weight::VoterWeightRecord =
anchor_lang::AnchorDeserialize::deserialize(&mut data)
.map_err(|_| anchor_lang::__private::ErrorCode::AccountDidNotDeserialize)?;
if !solana_program::program_pack::IsInitialized::is_initialized(&vwr) {
return Err(anchor_lang::__private::ErrorCode::AccountDidNotSerialize.into());
}
Ok(VoterWeightRecord(vwr))
}
fn try_deserialize_unchecked(
buf: &mut &[u8],
) -> std::result::Result<Self, ProgramError> {
let mut data = buf;
let vwr: spl_governance_addin_api::voter_weight::VoterWeightRecord =
anchor_lang::AnchorDeserialize::deserialize(&mut data)
.map_err(|_| anchor_lang::__private::ErrorCode::AccountDidNotDeserialize)?;
Ok(VoterWeightRecord(vwr))
}
}
impl anchor_lang::AccountSerialize for VoterWeightRecord {
fn try_serialize<W: std::io::Write>(
&self,
writer: &mut W,
) -> std::result::Result<(), ProgramError> {
let mut to_write = &mut self.0.clone();
//to_write.account_discriminator = *b"2ef99b4b";
to_write.account_discriminator = VoterWeightRecord::discriminator();
anchor_lang::AnchorSerialize::serialize(to_write, writer)
.map_err(|_| anchor_lang::__private::ErrorCode::AccountDidNotSerialize)?;
Ok(())
}
}
impl anchor_lang::Owner for VoterWeightRecord {
fn owner() -> Pubkey {
$id
}
}
impl anchor_lang::Discriminator for VoterWeightRecord {
fn discriminator() -> [u8; 8] {
//*b"2ef99b4b"
spl_governance_addin_api::voter_weight::VoterWeightRecord::ACCOUNT_DISCRIMINATOR
}
}
impl std::ops::Deref for VoterWeightRecord {
type Target = spl_governance_addin_api::voter_weight::VoterWeightRecord;
fn deref(&self) -> &Self::Target {
&self.0
}
}
impl std::ops::DerefMut for VoterWeightRecord {
fn deref_mut(&mut self) -> &mut Self::Target {
&mut self.0
}
}
};
}
| 40.716216 | 99 | 0.563226 |
11f3028fedfa2edeb791b43c1a8ab16f10f1d32f | 186 | use rsip_derives::UntypedHeader;
/// The `Subscription-State` header in its [untyped](super) form.
#[derive(UntypedHeader, Debug, PartialEq, Eq, Clone)]
pub struct SubscriptionState(String);
| 31 | 59 | 0.763441 |
398e61358aece786cedfbf79a0c146589b12e1d6 | 38,109 | use super::operation::{AddOperation, UserOperation};
use super::segment_updater::SegmentUpdater;
use super::PreparedCommit;
use bit_set::BitSet;
use core::Index;
use core::Segment;
use core::SegmentComponent;
use core::SegmentId;
use core::SegmentMeta;
use core::SegmentReader;
use crossbeam::channel;
use directory::DirectoryLock;
use docset::DocSet;
use error::TantivyError;
use fastfield::write_delete_bitset;
use futures::{Canceled, Future};
use indexer::delete_queue::{DeleteCursor, DeleteQueue};
use indexer::doc_opstamp_mapping::DocToOpstampMapping;
use indexer::operation::DeleteOperation;
use indexer::stamper::Stamper;
use indexer::MergePolicy;
use indexer::SegmentEntry;
use indexer::SegmentWriter;
use postings::compute_table_size;
use schema::Document;
use schema::IndexRecordOption;
use schema::Term;
use std::mem;
use std::ops::Range;
use std::sync::Arc;
use std::thread;
use std::thread::JoinHandle;
use Opstamp;
use Result;
// Size of the margin for the heap. A segment is closed when the remaining memory
// in the heap goes below MARGIN_IN_BYTES.
pub const MARGIN_IN_BYTES: usize = 1_000_000;
// We impose the memory per thread to be at least 3 MB.
pub const HEAP_SIZE_MIN: usize = ((MARGIN_IN_BYTES as u32) * 3u32) as usize;
pub const HEAP_SIZE_MAX: usize = u32::max_value() as usize - MARGIN_IN_BYTES;
// Adding a document will block if the number of docs waiting in the queue to be indexed
// reaches `PIPELINE_MAX_SIZE_IN_DOCS`
const PIPELINE_MAX_SIZE_IN_DOCS: usize = 10_000;
type OperationSender = channel::Sender<Vec<AddOperation>>;
type OperationReceiver = channel::Receiver<Vec<AddOperation>>;
/// Split the thread memory budget between
/// - the heap, and
/// - the hash table itself.
///
/// Returns the size of the hash table, as a number of bits (capped at 19, i.e. 512K entries).
fn initial_table_size(per_thread_memory_budget: usize) -> usize {
assert!(per_thread_memory_budget > 1_000);
let table_size_limit: usize = per_thread_memory_budget / 3;
if let Some(limit) = (1..)
.take_while(|num_bits: &usize| compute_table_size(*num_bits) < table_size_limit)
.last()
{
limit.min(19) // we cap it at 2^19 = 512K.
} else {
unreachable!(
"Per thread memory is too small: {}",
per_thread_memory_budget
);
}
}
/// `IndexWriter` is the user entry-point to add document to an index.
///
/// It manages a small number of indexing threads, as well as a shared
/// indexing queue.
/// Each indexing thread builds its own independent `Segment`, via
/// a `SegmentWriter` object.
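///
/// A minimal usage sketch, modeled on the unit tests at the bottom of this file
/// (error handling elided):
///
/// ```ignore
/// let mut schema_builder = Schema::builder();
/// let text_field = schema_builder.add_text_field("text", TEXT);
/// let index = Index::create_in_ram(schema_builder.build());
///
/// // 3 MB of heap budget for the indexing threads.
/// let mut index_writer = index.writer(3_000_000)?;
/// index_writer.add_document(doc!(text_field => "hello"));
/// index_writer.commit()?; // documents become visible to readers only after commit()
/// ```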
pub struct IndexWriter {
// the lock is just used to bind the
// lifetime of the lock with that of the IndexWriter.
_directory_lock: Option<DirectoryLock>,
index: Index,
heap_size_in_bytes_per_thread: usize,
workers_join_handle: Vec<JoinHandle<Result<()>>>,
operation_receiver: OperationReceiver,
operation_sender: OperationSender,
segment_updater: SegmentUpdater,
worker_id: usize,
num_threads: usize,
generation: usize,
delete_queue: DeleteQueue,
stamper: Stamper,
committed_opstamp: Opstamp,
}
/// Open a new index writer. Attempts to acquire a lockfile.
///
/// The lockfile should be deleted on drop, but it is possible
/// that due to a panic or other error, a stale lockfile will be
/// left in the index directory. If you are sure that no other
/// `IndexWriter` on the system is accessing the index directory,
/// it is safe to manually delete the lockfile.
///
/// `num_threads` specifies the number of indexing workers that
/// should work at the same time.
/// # Errors
/// If the lockfile already exists, returns `TantivyError::LockFailure`.
/// # Panics
/// If the heap size per thread is too small, panics.
pub fn open_index_writer(
index: &Index,
num_threads: usize,
heap_size_in_bytes_per_thread: usize,
directory_lock: DirectoryLock,
) -> Result<IndexWriter> {
if heap_size_in_bytes_per_thread < HEAP_SIZE_MIN {
let err_msg = format!(
"The heap size per thread needs to be at least {}.",
HEAP_SIZE_MIN
);
return Err(TantivyError::InvalidArgument(err_msg));
}
if heap_size_in_bytes_per_thread >= HEAP_SIZE_MAX {
let err_msg = format!("The heap size per thread cannot exceed {}", HEAP_SIZE_MAX);
return Err(TantivyError::InvalidArgument(err_msg));
}
let (document_sender, document_receiver): (OperationSender, OperationReceiver) =
channel::bounded(PIPELINE_MAX_SIZE_IN_DOCS);
let delete_queue = DeleteQueue::new();
let current_opstamp = index.load_metas()?.opstamp;
let stamper = Stamper::new(current_opstamp);
let segment_updater =
SegmentUpdater::create(index.clone(), stamper.clone(), &delete_queue.cursor())?;
let mut index_writer = IndexWriter {
_directory_lock: Some(directory_lock),
heap_size_in_bytes_per_thread,
index: index.clone(),
operation_receiver: document_receiver,
operation_sender: document_sender,
segment_updater,
workers_join_handle: vec![],
num_threads,
delete_queue,
committed_opstamp: current_opstamp,
stamper,
generation: 0,
worker_id: 0,
};
index_writer.start_workers()?;
Ok(index_writer)
}
pub fn compute_deleted_bitset(
delete_bitset: &mut BitSet,
segment_reader: &SegmentReader,
delete_cursor: &mut DeleteCursor,
doc_opstamps: &DocToOpstampMapping,
target_opstamp: Opstamp,
) -> Result<bool> {
let mut might_have_changed = false;
#[cfg_attr(feature = "cargo-clippy", allow(clippy::while_let_loop))]
loop {
if let Some(delete_op) = delete_cursor.get() {
if delete_op.opstamp > target_opstamp {
break;
} else {
// A delete operation should only affect
// document that were inserted after it.
//
// Limit doc helps identify the first document
// that may be affected by the delete operation.
let limit_doc = doc_opstamps.compute_doc_limit(delete_op.opstamp);
let inverted_index = segment_reader.inverted_index(delete_op.term.field());
if let Some(mut docset) =
inverted_index.read_postings(&delete_op.term, IndexRecordOption::Basic)
{
while docset.advance() {
let deleted_doc = docset.doc();
if deleted_doc < limit_doc {
delete_bitset.insert(deleted_doc as usize);
might_have_changed = true;
}
}
}
}
} else {
break;
}
delete_cursor.advance();
}
Ok(might_have_changed)
}
/// Advance delete for the given segment up
/// to the target opstamp.
pub fn advance_deletes(
mut segment: Segment,
segment_entry: &mut SegmentEntry,
target_opstamp: Opstamp,
) -> Result<()> {
{
if segment_entry.meta().delete_opstamp() == Some(target_opstamp) {
// We are already up-to-date here.
return Ok(());
}
let segment_reader = SegmentReader::open(&segment)?;
let max_doc = segment_reader.max_doc();
let mut delete_bitset: BitSet = match segment_entry.delete_bitset() {
Some(previous_delete_bitset) => (*previous_delete_bitset).clone(),
None => BitSet::with_capacity(max_doc as usize),
};
let delete_cursor = segment_entry.delete_cursor();
compute_deleted_bitset(
&mut delete_bitset,
&segment_reader,
delete_cursor,
&DocToOpstampMapping::None,
target_opstamp,
)?;
// TODO optimize
for doc in 0u32..max_doc {
if segment_reader.is_deleted(doc) {
delete_bitset.insert(doc as usize);
}
}
let num_deleted_docs = delete_bitset.len();
if num_deleted_docs > 0 {
segment = segment.with_delete_meta(num_deleted_docs as u32, target_opstamp);
let mut delete_file = segment.open_write(SegmentComponent::DELETE)?;
write_delete_bitset(&delete_bitset, &mut delete_file)?;
}
}
segment_entry.set_meta(segment.meta().clone());
Ok(())
}
fn index_documents(
memory_budget: usize,
segment: &Segment,
generation: usize,
document_iterator: &mut Iterator<Item = Vec<AddOperation>>,
segment_updater: &mut SegmentUpdater,
mut delete_cursor: DeleteCursor,
) -> Result<bool> {
let schema = segment.schema();
let segment_id = segment.id();
let table_size = initial_table_size(memory_budget);
let mut segment_writer = SegmentWriter::for_segment(table_size, segment.clone(), &schema)?;
for documents in document_iterator {
for doc in documents {
segment_writer.add_document(doc, &schema)?;
}
let mem_usage = segment_writer.mem_usage();
if mem_usage >= memory_budget - MARGIN_IN_BYTES {
info!(
"Buffer limit reached, flushing segment with maxdoc={}.",
segment_writer.max_doc()
);
break;
}
}
if !segment_updater.is_alive() {
return Ok(false);
}
let num_docs = segment_writer.max_doc();
// this is ensured by the call to peek before starting
// the worker thread.
assert!(num_docs > 0);
let doc_opstamps: Vec<Opstamp> = segment_writer.finalize()?;
let segment_meta = SegmentMeta::new(segment_id, num_docs);
let last_docstamp: Opstamp = *(doc_opstamps.last().unwrap());
let delete_bitset_opt = if delete_cursor.get().is_some() {
let doc_to_opstamps = DocToOpstampMapping::from(doc_opstamps);
let segment_reader = SegmentReader::open(segment)?;
let mut deleted_bitset = BitSet::with_capacity(num_docs as usize);
let may_have_deletes = compute_deleted_bitset(
&mut deleted_bitset,
&segment_reader,
&mut delete_cursor,
&doc_to_opstamps,
last_docstamp,
)?;
if may_have_deletes {
Some(deleted_bitset)
} else {
None
}
} else {
        // if there are no delete operations in the queue, no need
// to even open the segment.
None
};
let segment_entry = SegmentEntry::new(segment_meta, delete_cursor, delete_bitset_opt);
Ok(segment_updater.add_segment(generation, segment_entry))
}
impl IndexWriter {
    /// Waits for the indexing and merging threads to terminate, consuming the index writer.
pub fn wait_merging_threads(mut self) -> Result<()> {
// this will stop the indexing thread,
// dropping the last reference to the segment_updater.
drop(self.operation_sender);
let former_workers_handles = mem::replace(&mut self.workers_join_handle, vec![]);
for join_handle in former_workers_handles {
join_handle
.join()
.expect("Indexing Worker thread panicked")
.map_err(|_| {
TantivyError::ErrorInThread("Error in indexing worker thread.".into())
})?;
}
drop(self.workers_join_handle);
let result = self
.segment_updater
.wait_merging_thread()
.map_err(|_| TantivyError::ErrorInThread("Failed to join merging thread.".into()));
if let Err(ref e) = result {
error!("Some merging thread failed {:?}", e);
}
result
}
#[doc(hidden)]
pub fn add_segment(&mut self, segment_meta: SegmentMeta) {
let delete_cursor = self.delete_queue.cursor();
let segment_entry = SegmentEntry::new(segment_meta, delete_cursor, None);
self.segment_updater
.add_segment(self.generation, segment_entry);
}
/// Creates a new segment.
///
/// This method is useful only for users trying to do complex
/// operations, like converting an index format to another.
///
/// It is safe to start writing file associated to the new `Segment`.
/// These will not be garbage collected as long as an instance object of
/// `SegmentMeta` object associated to the new `Segment` is "alive".
pub fn new_segment(&self) -> Segment {
self.index.new_segment()
}
/// Spawns a new worker thread for indexing.
/// The thread consumes documents from the pipeline.
///
fn add_indexing_worker(&mut self) -> Result<()> {
let document_receiver_clone = self.operation_receiver.clone();
let mut segment_updater = self.segment_updater.clone();
let generation = self.generation;
let mut delete_cursor = self.delete_queue.cursor();
let mem_budget = self.heap_size_in_bytes_per_thread;
let index = self.index.clone();
let join_handle: JoinHandle<Result<()>> = thread::Builder::new()
.name(format!(
"thrd-tantivy-index{}-gen{}",
self.worker_id, generation
))
.spawn(move || {
loop {
let mut document_iterator =
document_receiver_clone.clone().into_iter().peekable();
// the peeking here is to avoid
// creating a new segment's files
                    // if no documents are available.
//
// this is a valid guarantee as the
// peeked document now belongs to
// our local iterator.
if let Some(operations) = document_iterator.peek() {
if let Some(first) = operations.first() {
delete_cursor.skip_to(first.opstamp);
} else {
return Ok(());
}
} else {
// No more documents.
// Happens when there is a commit, or if the `IndexWriter`
// was dropped.
return Ok(());
}
let segment = index.new_segment();
index_documents(
mem_budget,
&segment,
generation,
&mut document_iterator,
&mut segment_updater,
delete_cursor.clone(),
)?;
}
})?;
self.worker_id += 1;
self.workers_join_handle.push(join_handle);
Ok(())
}
/// Accessor to the merge policy.
pub fn get_merge_policy(&self) -> Arc<Box<MergePolicy>> {
self.segment_updater.get_merge_policy()
}
/// Set the merge policy.
pub fn set_merge_policy(&self, merge_policy: Box<MergePolicy>) {
self.segment_updater.set_merge_policy(merge_policy);
}
fn start_workers(&mut self) -> Result<()> {
for _ in 0..self.num_threads {
self.add_indexing_worker()?;
}
Ok(())
}
/// Detects and removes the files that
/// are not used by the index anymore.
pub fn garbage_collect_files(&mut self) -> Result<()> {
self.segment_updater.garbage_collect_files()
}
/// Merges a given list of segments
///
/// `segment_ids` is required to be non-empty.
pub fn merge(
&mut self,
segment_ids: &[SegmentId],
) -> Result<impl Future<Item = SegmentMeta, Error = Canceled>> {
self.segment_updater.start_merge(segment_ids)
}
/// Closes the current document channel send.
/// and replace all the channels by new ones.
///
/// The current workers will keep on indexing
/// the pending document and stop
/// when no documents are remaining.
///
    /// Returns the former operation receiver.
fn recreate_document_channel(&mut self) -> OperationReceiver {
let (document_sender, document_receiver): (OperationSender, OperationReceiver) =
channel::bounded(PIPELINE_MAX_SIZE_IN_DOCS);
mem::replace(&mut self.operation_sender, document_sender);
mem::replace(&mut self.operation_receiver, document_receiver)
}
/// Rollback to the last commit
///
    /// This cancels all of the updates that
    /// happened after the last commit.
/// After calling rollback, the index is in the same
/// state as it was after the last commit.
///
/// The opstamp at the last commit is returned.
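    ///
    /// A minimal sketch, modeled on `test_commit_and_rollback` below (setup elided):
    ///
    /// ```ignore
    /// index_writer.add_document(doc!(text_field => "a"));
    /// index_writer.rollback()?; // the uncommitted document is discarded
    /// // with no prior commit, the committed opstamp falls back to 0
    /// assert_eq!(index_writer.commit_opstamp(), 0u64);
    /// ```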
pub fn rollback(&mut self) -> Result<Opstamp> {
info!("Rolling back to opstamp {}", self.committed_opstamp);
// marks the segment updater as killed. From now on, all
// segment updates will be ignored.
self.segment_updater.kill();
let document_receiver = self.operation_receiver.clone();
// take the directory lock to create a new index_writer.
let directory_lock = self
._directory_lock
.take()
.expect("The IndexWriter does not have any lock. This is a bug, please report.");
let new_index_writer: IndexWriter = open_index_writer(
&self.index,
self.num_threads,
self.heap_size_in_bytes_per_thread,
directory_lock,
)?;
// the current `self` is dropped right away because of this call.
//
// This will drop the document queue, and the thread
// should terminate.
mem::replace(self, new_index_writer);
// Drains the document receiver pipeline :
// Workers don't need to index the pending documents.
//
// This will reach an end as the only document_sender
// was dropped with the index_writer.
for _ in document_receiver.clone() {}
Ok(self.committed_opstamp)
}
/// Prepares a commit.
///
/// Calling `prepare_commit()` will cut the indexing
/// queue. All pending documents will be sent to the
/// indexing workers. They will then terminate, regardless
/// of the size of their current segment and flush their
/// work on disk.
///
/// Once a commit is "prepared", you can either
/// call
/// * `.commit()`: to accept this commit
/// * `.abort()`: to cancel this commit.
///
/// In the current implementation, `PreparedCommit` borrows
/// the `IndexWriter` mutably so we are guaranteed that no new
/// document can be added as long as it is committed or is
/// dropped.
///
/// It is also possible to add a payload to the `commit`
/// using this API.
/// See [`PreparedCommit::set_payload()`](PreparedCommit.html)
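    ///
    /// A minimal sketch, modeled on `test_prepare_with_commit_message` below
    /// (writer setup elided):
    ///
    /// ```ignore
    /// let mut prepared_commit = index_writer.prepare_commit()?;
    /// prepared_commit.set_payload("first commit"); // optional payload stored in the metas
    /// prepared_commit.commit()?;                   // or prepared_commit.abort()?
    /// ```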
pub fn prepare_commit(&mut self) -> Result<PreparedCommit> {
// Here, because we join all of the worker threads,
        // all of the segment updates for this commit have been
// sent.
//
        // No document belonging to the next generation has been
        // pushed yet, because add_document can only happen
// on this thread.
// This will move uncommitted segments to the state of
// committed segments.
info!("Preparing commit");
// this will drop the current document channel
// and recreate a new one.
self.recreate_document_channel();
let former_workers_join_handle = mem::replace(&mut self.workers_join_handle, Vec::new());
for worker_handle in former_workers_join_handle {
let indexing_worker_result = worker_handle
.join()
.map_err(|e| TantivyError::ErrorInThread(format!("{:?}", e)))?;
indexing_worker_result?;
// add a new worker for the next generation.
self.add_indexing_worker()?;
}
let commit_opstamp = self.stamper.stamp();
let prepared_commit = PreparedCommit::new(self, commit_opstamp);
info!("Prepared commit {}", commit_opstamp);
Ok(prepared_commit)
}
/// Commits all of the pending changes
///
/// A call to commit blocks.
/// After it returns, all of the document that
/// were added since the last commit are published
/// and persisted.
///
    /// In case of a crash or a hardware failure (as
/// long as the hard disk is spared), it will be possible
/// to resume indexing from this point.
///
/// Commit returns the `opstamp` of the last document
/// that made it in the commit.
///
pub fn commit(&mut self) -> Result<Opstamp> {
self.prepare_commit()?.commit()
}
pub(crate) fn segment_updater(&self) -> &SegmentUpdater {
&self.segment_updater
}
/// Delete all documents containing a given term.
///
/// Delete operation only affects documents that
/// were added in previous commits, and documents
/// that were added previously in the same commit.
///
/// Like adds, the deletion itself will be visible
/// only after calling `commit()`.
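    ///
    /// A minimal sketch (the `text_field` and writer setup are assumed):
    ///
    /// ```ignore
    /// let term = Term::from_field_text(text_field, "obsolete");
    /// index_writer.delete_term(term);
    /// index_writer.commit()?; // deletions become visible only after commit()
    /// ```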
pub fn delete_term(&mut self, term: Term) -> Opstamp {
let opstamp = self.stamper.stamp();
let delete_operation = DeleteOperation { opstamp, term };
self.delete_queue.push(delete_operation);
opstamp
}
/// Returns the opstamp of the last successful commit.
///
/// This is, for instance, the opstamp the index will
/// rollback to if there is a failure like a power surge.
///
/// This is also the opstamp of the commit that is currently
/// available for searchers.
pub fn commit_opstamp(&self) -> Opstamp {
self.committed_opstamp
}
/// Adds a document.
///
/// If the indexing pipeline is full, this call may block.
///
/// The opstamp is an increasing `u64` that can
/// be used by the client to align commits with its own
/// document queue.
///
/// Currently it represents the number of documents that
/// have been added since the creation of the index.
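    ///
    /// A minimal sketch (schema and writer setup are assumed):
    ///
    /// ```ignore
    /// let opstamp = index_writer.add_document(doc!(text_field => "hello"));
    /// ```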
pub fn add_document(&mut self, document: Document) -> Opstamp {
let opstamp = self.stamper.stamp();
let add_operation = AddOperation { opstamp, document };
let send_result = self.operation_sender.send(vec![add_operation]);
if let Err(e) = send_result {
panic!("Failed to index document. Sending to indexing channel failed. This probably means all of the indexing threads have panicked. {:?}", e);
}
opstamp
}
/// Gets a range of stamps from the stamper and "pops" the last stamp
    /// from the range, returning a tuple of the last opstamp and the popped
/// range.
///
/// The total number of stamps generated by this method is `count + 1`;
/// each operation gets a stamp from the `stamps` iterator and `last_opstamp`
/// is for the batch itself.
fn get_batch_opstamps(&mut self, count: Opstamp) -> (Opstamp, Range<Opstamp>) {
let Range { start, end } = self.stamper.stamps(count + 1u64);
let last_opstamp = end - 1;
let stamps = Range {
start,
end: last_opstamp,
};
(last_opstamp, stamps)
}
/// Runs a group of document operations ensuring that the operations are
    /// assigned contiguous u64 opstamps and that add operations of the same
/// group are flushed into the same segment.
///
/// If the indexing pipeline is full, this call may block.
///
/// Each operation of the given `user_operations` will receive an in-order,
/// contiguous u64 opstamp. The entire batch itself is also given an
/// opstamp that is 1 greater than the last given operation. This
/// `batch_opstamp` is the return value of `run`. An empty group of
/// `user_operations`, an empty `Vec<UserOperation>`, still receives
/// a valid opstamp even though no changes were _actually_ made to the index.
///
/// Like adds and deletes (see `IndexWriter.add_document` and
/// `IndexWriter.delete_term`), the changes made by calling `run` will be
/// visible to readers only after calling `commit()`.
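    ///
    /// A minimal sketch, modeled on `test_ordered_batched_operations` below
    /// (field and writer setup elided):
    ///
    /// ```ignore
    /// let operations = vec![
    ///     UserOperation::Delete(Term::from_field_text(text_field, "a")),
    ///     UserOperation::Add(doc!(text_field => "a")),
    /// ];
    /// let batch_opstamp = index_writer.run(operations);
    /// index_writer.commit()?;
    /// ```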
pub fn run(&mut self, user_operations: Vec<UserOperation>) -> Opstamp {
let count = user_operations.len() as u64;
if count == 0 {
return self.stamper.stamp();
}
let (batch_opstamp, stamps) = self.get_batch_opstamps(count);
let mut adds: Vec<AddOperation> = Vec::new();
for (user_op, opstamp) in user_operations.into_iter().zip(stamps) {
match user_op {
UserOperation::Delete(term) => {
let delete_operation = DeleteOperation { opstamp, term };
self.delete_queue.push(delete_operation);
}
UserOperation::Add(document) => {
let add_operation = AddOperation { opstamp, document };
adds.push(add_operation);
}
}
}
let send_result = self.operation_sender.send(adds);
if let Err(e) = send_result {
panic!("Failed to index document. Sending to indexing channel failed. This probably means all of the indexing threads have panicked. {:?}", e);
};
batch_opstamp
}
}
#[cfg(test)]
mod tests {
use super::super::operation::UserOperation;
use super::initial_table_size;
use collector::TopDocs;
use directory::error::LockError;
use error::*;
use indexer::NoMergePolicy;
use query::TermQuery;
use schema::{self, IndexRecordOption};
use Index;
use ReloadPolicy;
use Term;
#[test]
fn test_operations_group() {
// an operations group with 2 items should cause 3 opstamps 0, 1, and 2.
let mut schema_builder = schema::Schema::builder();
let text_field = schema_builder.add_text_field("text", schema::TEXT);
let index = Index::create_in_ram(schema_builder.build());
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
let operations = vec![
UserOperation::Add(doc!(text_field=>"a")),
UserOperation::Add(doc!(text_field=>"b")),
];
let batch_opstamp1 = index_writer.run(operations);
assert_eq!(batch_opstamp1, 2u64);
}
#[test]
fn test_ordered_batched_operations() {
// * one delete for `doc!(field=>"a")`
// * one add for `doc!(field=>"a")`
// * one add for `doc!(field=>"b")`
// * one delete for `doc!(field=>"b")`
// after commit there is one doc with "a" and 0 doc with "b"
let mut schema_builder = schema::Schema::builder();
let text_field = schema_builder.add_text_field("text", schema::TEXT);
let index = Index::create_in_ram(schema_builder.build());
let reader = index
.reader_builder()
.reload_policy(ReloadPolicy::Manual)
.try_into()
.unwrap();
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
let a_term = Term::from_field_text(text_field, "a");
let b_term = Term::from_field_text(text_field, "b");
let operations = vec![
UserOperation::Delete(a_term),
UserOperation::Add(doc!(text_field=>"a")),
UserOperation::Add(doc!(text_field=>"b")),
UserOperation::Delete(b_term),
];
index_writer.run(operations);
index_writer.commit().expect("failed to commit");
reader.reload().expect("failed to load searchers");
let a_term = Term::from_field_text(text_field, "a");
let b_term = Term::from_field_text(text_field, "b");
let a_query = TermQuery::new(a_term, IndexRecordOption::Basic);
let b_query = TermQuery::new(b_term, IndexRecordOption::Basic);
let searcher = reader.searcher();
let a_docs = searcher
.search(&a_query, &TopDocs::with_limit(1))
.expect("search for a failed");
let b_docs = searcher
.search(&b_query, &TopDocs::with_limit(1))
.expect("search for b failed");
assert_eq!(a_docs.len(), 1);
assert_eq!(b_docs.len(), 0);
}
#[test]
fn test_empty_operations_group() {
let schema_builder = schema::Schema::builder();
let index = Index::create_in_ram(schema_builder.build());
let mut index_writer = index.writer(3_000_000).unwrap();
let operations1 = vec![];
let batch_opstamp1 = index_writer.run(operations1);
assert_eq!(batch_opstamp1, 0u64);
let operations2 = vec![];
let batch_opstamp2 = index_writer.run(operations2);
assert_eq!(batch_opstamp2, 1u64);
}
#[test]
fn test_lockfile_stops_duplicates() {
let schema_builder = schema::Schema::builder();
let index = Index::create_in_ram(schema_builder.build());
let _index_writer = index.writer(3_000_000).unwrap();
match index.writer(3_000_000) {
Err(TantivyError::LockFailure(LockError::LockBusy, _)) => {}
_ => panic!("Expected a `LockFailure` error"),
}
}
#[test]
fn test_lockfile_already_exists_error_msg() {
let schema_builder = schema::Schema::builder();
let index = Index::create_in_ram(schema_builder.build());
let _index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
match index.writer_with_num_threads(1, 3_000_000) {
Err(err) => {
let err_msg = err.to_string();
assert!(err_msg.contains("already an `IndexWriter`"));
}
_ => panic!("Expected LockfileAlreadyExists error"),
}
}
#[test]
fn test_set_merge_policy() {
let schema_builder = schema::Schema::builder();
let index = Index::create_in_ram(schema_builder.build());
let index_writer = index.writer(3_000_000).unwrap();
assert_eq!(
format!("{:?}", index_writer.get_merge_policy()),
"LogMergePolicy { min_merge_size: 8, min_layer_size: 10000, \
level_log_size: 0.75 }"
);
let merge_policy = Box::new(NoMergePolicy::default());
index_writer.set_merge_policy(merge_policy);
assert_eq!(
format!("{:?}", index_writer.get_merge_policy()),
"NoMergePolicy"
);
}
#[test]
fn test_lockfile_released_on_drop() {
let schema_builder = schema::Schema::builder();
let index = Index::create_in_ram(schema_builder.build());
{
let _index_writer = index.writer(3_000_000).unwrap();
// the lock should be released when the
// index_writer leaves the scope.
}
let _index_writer_two = index.writer(3_000_000).unwrap();
}
#[test]
fn test_commit_and_rollback() {
let mut schema_builder = schema::Schema::builder();
let text_field = schema_builder.add_text_field("text", schema::TEXT);
let index = Index::create_in_ram(schema_builder.build());
let reader = index
.reader_builder()
.reload_policy(ReloadPolicy::Manual)
.try_into()
.unwrap();
let num_docs_containing = |s: &str| {
let searcher = reader.searcher();
let term = Term::from_field_text(text_field, s);
searcher.doc_freq(&term)
};
{
// writing the segment
let mut index_writer = index.writer(3_000_000).unwrap();
index_writer.add_document(doc!(text_field=>"a"));
index_writer.rollback().unwrap();
assert_eq!(index_writer.commit_opstamp(), 0u64);
assert_eq!(num_docs_containing("a"), 0);
{
index_writer.add_document(doc!(text_field=>"b"));
index_writer.add_document(doc!(text_field=>"c"));
}
assert!(index_writer.commit().is_ok());
reader.reload().unwrap();
assert_eq!(num_docs_containing("a"), 0);
assert_eq!(num_docs_containing("b"), 1);
assert_eq!(num_docs_containing("c"), 1);
}
reader.reload().unwrap();
reader.searcher();
}
#[test]
fn test_with_merges() {
let mut schema_builder = schema::Schema::builder();
let text_field = schema_builder.add_text_field("text", schema::TEXT);
let index = Index::create_in_ram(schema_builder.build());
let reader = index
.reader_builder()
.reload_policy(ReloadPolicy::Manual)
.try_into()
.unwrap();
let num_docs_containing = |s: &str| {
let term_a = Term::from_field_text(text_field, s);
reader.searcher().doc_freq(&term_a)
};
{
// writing the segment
let mut index_writer = index.writer(12_000_000).unwrap();
// create 8 segments with 100 tiny docs
for _doc in 0..100 {
index_writer.add_document(doc!(text_field=>"a"));
}
index_writer.commit().expect("commit failed");
for _doc in 0..100 {
index_writer.add_document(doc!(text_field=>"a"));
}
// this should create 8 segments and trigger a merge.
index_writer.commit().expect("commit failed");
index_writer
.wait_merging_threads()
.expect("waiting merging thread failed");
reader.reload().unwrap();
assert_eq!(num_docs_containing("a"), 200);
assert!(index.searchable_segments().unwrap().len() < 8);
}
}
#[test]
fn test_prepare_with_commit_message() {
let mut schema_builder = schema::Schema::builder();
let text_field = schema_builder.add_text_field("text", schema::TEXT);
let index = Index::create_in_ram(schema_builder.build());
{
// writing the segment
let mut index_writer = index.writer(12_000_000).unwrap();
// create 8 segments with 100 tiny docs
for _doc in 0..100 {
index_writer.add_document(doc!(text_field => "a"));
}
{
let mut prepared_commit = index_writer.prepare_commit().expect("commit failed");
prepared_commit.set_payload("first commit");
prepared_commit.commit().expect("commit failed");
}
{
let metas = index.load_metas().unwrap();
assert_eq!(metas.payload.unwrap(), "first commit");
}
for _doc in 0..100 {
index_writer.add_document(doc!(text_field => "a"));
}
index_writer.commit().unwrap();
{
let metas = index.load_metas().unwrap();
assert!(metas.payload.is_none());
}
}
}
#[test]
fn test_prepare_but_rollback() {
let mut schema_builder = schema::Schema::builder();
let text_field = schema_builder.add_text_field("text", schema::TEXT);
let index = Index::create_in_ram(schema_builder.build());
{
// writing the segment
let mut index_writer = index.writer_with_num_threads(4, 12_000_000).unwrap();
// create 8 segments with 100 tiny docs
for _doc in 0..100 {
index_writer.add_document(doc!(text_field => "a"));
}
{
let mut prepared_commit = index_writer.prepare_commit().expect("commit failed");
prepared_commit.set_payload("first commit");
prepared_commit.abort().expect("commit failed");
}
{
let metas = index.load_metas().unwrap();
assert!(metas.payload.is_none());
}
for _doc in 0..100 {
index_writer.add_document(doc!(text_field => "b"));
}
index_writer.commit().unwrap();
}
let num_docs_containing = |s: &str| {
let term_a = Term::from_field_text(text_field, s);
index
.reader_builder()
.reload_policy(ReloadPolicy::Manual)
.try_into()
.unwrap()
.searcher()
.doc_freq(&term_a)
};
assert_eq!(num_docs_containing("a"), 0);
assert_eq!(num_docs_containing("b"), 100);
}
#[test]
fn test_hashmap_size() {
assert_eq!(initial_table_size(100_000), 11);
assert_eq!(initial_table_size(1_000_000), 14);
assert_eq!(initial_table_size(10_000_000), 17);
assert_eq!(initial_table_size(1_000_000_000), 19);
}
#[cfg(not(feature = "no_fail"))]
#[test]
fn test_write_commit_fails() {
use fail;
let mut schema_builder = schema::Schema::builder();
let text_field = schema_builder.add_text_field("text", schema::TEXT);
let index = Index::create_in_ram(schema_builder.build());
let mut index_writer = index.writer_with_num_threads(1, 3_000_000).unwrap();
for _ in 0..100 {
index_writer.add_document(doc!(text_field => "a"));
}
index_writer.commit().unwrap();
fail::cfg("RAMDirectory::atomic_write", "return(error_write_failed)").unwrap();
for _ in 0..100 {
index_writer.add_document(doc!(text_field => "b"));
}
assert!(index_writer.commit().is_err());
let num_docs_containing = |s: &str| {
let term_a = Term::from_field_text(text_field, s);
index.reader().unwrap().searcher().doc_freq(&term_a)
};
assert_eq!(num_docs_containing("a"), 100);
assert_eq!(num_docs_containing("b"), 0);
fail::cfg("RAMDirectory::atomic_write", "off").unwrap();
}
}
| 36.190883 | 155 | 0.602456 |
f76a4ce4c89e82a935f75a00424d63470a3d6326 | 203,437 | // Code generated by software.amazon.smithy.rust.codegen.smithy-rs. DO NOT EDIT.
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct CancelUpdateStackError {
pub kind: CancelUpdateStackErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum CancelUpdateStackErrorKind {
TokenAlreadyExistsError(crate::error::TokenAlreadyExistsError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for CancelUpdateStackError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
CancelUpdateStackErrorKind::TokenAlreadyExistsError(_inner) => _inner.fmt(f),
CancelUpdateStackErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for CancelUpdateStackError {
fn code(&self) -> Option<&str> {
CancelUpdateStackError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl CancelUpdateStackError {
pub fn new(kind: CancelUpdateStackErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: CancelUpdateStackErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: CancelUpdateStackErrorKind::Unhandled(err.into()),
}
}
// Consider if this should actually be `Option<Cow<&str>>`. This would enable us to use display as implemented
// by std::Error to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_token_already_exists_error(&self) -> bool {
matches!(
&self.kind,
CancelUpdateStackErrorKind::TokenAlreadyExistsError(_)
)
}
}
impl std::error::Error for CancelUpdateStackError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
CancelUpdateStackErrorKind::TokenAlreadyExistsError(_inner) => Some(_inner),
CancelUpdateStackErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
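// A minimal handling sketch for the error type above (the surrounding application code is
// assumed; only methods defined in this module are used):
#[allow(dead_code)]
fn describe_cancel_update_stack_error(err: &CancelUpdateStackError) -> String {
    if err.is_token_already_exists_error() {
        "client request token was already used".to_string()
    } else {
        // Fall back to the raw service error code, if the response carried one.
        err.code().unwrap_or("unknown error code").to_string()
    }
}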
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ContinueUpdateRollbackError {
pub kind: ContinueUpdateRollbackErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ContinueUpdateRollbackErrorKind {
TokenAlreadyExistsError(crate::error::TokenAlreadyExistsError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ContinueUpdateRollbackError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ContinueUpdateRollbackErrorKind::TokenAlreadyExistsError(_inner) => _inner.fmt(f),
ContinueUpdateRollbackErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ContinueUpdateRollbackError {
fn code(&self) -> Option<&str> {
ContinueUpdateRollbackError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ContinueUpdateRollbackError {
pub fn new(kind: ContinueUpdateRollbackErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ContinueUpdateRollbackErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ContinueUpdateRollbackErrorKind::Unhandled(err.into()),
}
}
// Consider if this should actually be `Option<Cow<&str>>`. This would enable us to use display as implemented
// by std::Error to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_token_already_exists_error(&self) -> bool {
matches!(
&self.kind,
ContinueUpdateRollbackErrorKind::TokenAlreadyExistsError(_)
)
}
}
impl std::error::Error for ContinueUpdateRollbackError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ContinueUpdateRollbackErrorKind::TokenAlreadyExistsError(_inner) => Some(_inner),
ContinueUpdateRollbackErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct CreateChangeSetError {
pub kind: CreateChangeSetErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum CreateChangeSetErrorKind {
AlreadyExistsError(crate::error::AlreadyExistsError),
InsufficientCapabilitiesError(crate::error::InsufficientCapabilitiesError),
LimitExceededError(crate::error::LimitExceededError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for CreateChangeSetError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
CreateChangeSetErrorKind::AlreadyExistsError(_inner) => _inner.fmt(f),
CreateChangeSetErrorKind::InsufficientCapabilitiesError(_inner) => _inner.fmt(f),
CreateChangeSetErrorKind::LimitExceededError(_inner) => _inner.fmt(f),
CreateChangeSetErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for CreateChangeSetError {
fn code(&self) -> Option<&str> {
CreateChangeSetError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl CreateChangeSetError {
pub fn new(kind: CreateChangeSetErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: CreateChangeSetErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: CreateChangeSetErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_already_exists_error(&self) -> bool {
matches!(&self.kind, CreateChangeSetErrorKind::AlreadyExistsError(_))
}
pub fn is_insufficient_capabilities_error(&self) -> bool {
matches!(
&self.kind,
CreateChangeSetErrorKind::InsufficientCapabilitiesError(_)
)
}
pub fn is_limit_exceeded_error(&self) -> bool {
matches!(&self.kind, CreateChangeSetErrorKind::LimitExceededError(_))
}
}
impl std::error::Error for CreateChangeSetError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
CreateChangeSetErrorKind::AlreadyExistsError(_inner) => Some(_inner),
CreateChangeSetErrorKind::InsufficientCapabilitiesError(_inner) => Some(_inner),
CreateChangeSetErrorKind::LimitExceededError(_inner) => Some(_inner),
CreateChangeSetErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct CreateStackError {
pub kind: CreateStackErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum CreateStackErrorKind {
AlreadyExistsError(crate::error::AlreadyExistsError),
InsufficientCapabilitiesError(crate::error::InsufficientCapabilitiesError),
LimitExceededError(crate::error::LimitExceededError),
TokenAlreadyExistsError(crate::error::TokenAlreadyExistsError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for CreateStackError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
CreateStackErrorKind::AlreadyExistsError(_inner) => _inner.fmt(f),
CreateStackErrorKind::InsufficientCapabilitiesError(_inner) => _inner.fmt(f),
CreateStackErrorKind::LimitExceededError(_inner) => _inner.fmt(f),
CreateStackErrorKind::TokenAlreadyExistsError(_inner) => _inner.fmt(f),
CreateStackErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for CreateStackError {
fn code(&self) -> Option<&str> {
CreateStackError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl CreateStackError {
pub fn new(kind: CreateStackErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: CreateStackErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: CreateStackErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_already_exists_error(&self) -> bool {
matches!(&self.kind, CreateStackErrorKind::AlreadyExistsError(_))
}
pub fn is_insufficient_capabilities_error(&self) -> bool {
matches!(
&self.kind,
CreateStackErrorKind::InsufficientCapabilitiesError(_)
)
}
pub fn is_limit_exceeded_error(&self) -> bool {
matches!(&self.kind, CreateStackErrorKind::LimitExceededError(_))
}
pub fn is_token_already_exists_error(&self) -> bool {
matches!(&self.kind, CreateStackErrorKind::TokenAlreadyExistsError(_))
}
}
impl std::error::Error for CreateStackError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
CreateStackErrorKind::AlreadyExistsError(_inner) => Some(_inner),
CreateStackErrorKind::InsufficientCapabilitiesError(_inner) => Some(_inner),
CreateStackErrorKind::LimitExceededError(_inner) => Some(_inner),
CreateStackErrorKind::TokenAlreadyExistsError(_inner) => Some(_inner),
CreateStackErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
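// Hedged illustration only (not generated code): matching on the public `kind` field of a
// `CreateStackError`. Inside this crate the match below is exhaustive; downstream crates would
// also need a wildcard arm because `CreateStackErrorKind` is `#[non_exhaustive]`. The function
// name `describe_create_stack_error` is made up.
#[allow(dead_code)]
fn describe_create_stack_error(err: &CreateStackError) -> &'static str {
    match &err.kind {
        CreateStackErrorKind::AlreadyExistsError(_) => "a stack with that name already exists",
        CreateStackErrorKind::InsufficientCapabilitiesError(_) => "required capabilities were not acknowledged",
        CreateStackErrorKind::LimitExceededError(_) => "an account limit was exceeded",
        CreateStackErrorKind::TokenAlreadyExistsError(_) => "the client request token was already used",
        CreateStackErrorKind::Unhandled(_) => "an unexpected error was returned by the service",
    }
}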
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct CreateStackInstancesError {
pub kind: CreateStackInstancesErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum CreateStackInstancesErrorKind {
InvalidOperationError(crate::error::InvalidOperationError),
LimitExceededError(crate::error::LimitExceededError),
OperationIdAlreadyExistsError(crate::error::OperationIdAlreadyExistsError),
OperationInProgressError(crate::error::OperationInProgressError),
StackSetNotFoundError(crate::error::StackSetNotFoundError),
StaleRequestError(crate::error::StaleRequestError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for CreateStackInstancesError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
CreateStackInstancesErrorKind::InvalidOperationError(_inner) => _inner.fmt(f),
CreateStackInstancesErrorKind::LimitExceededError(_inner) => _inner.fmt(f),
CreateStackInstancesErrorKind::OperationIdAlreadyExistsError(_inner) => _inner.fmt(f),
CreateStackInstancesErrorKind::OperationInProgressError(_inner) => _inner.fmt(f),
CreateStackInstancesErrorKind::StackSetNotFoundError(_inner) => _inner.fmt(f),
CreateStackInstancesErrorKind::StaleRequestError(_inner) => _inner.fmt(f),
CreateStackInstancesErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for CreateStackInstancesError {
fn code(&self) -> Option<&str> {
CreateStackInstancesError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl CreateStackInstancesError {
pub fn new(kind: CreateStackInstancesErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: CreateStackInstancesErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: CreateStackInstancesErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_invalid_operation_error(&self) -> bool {
matches!(
&self.kind,
CreateStackInstancesErrorKind::InvalidOperationError(_)
)
}
pub fn is_limit_exceeded_error(&self) -> bool {
matches!(
&self.kind,
CreateStackInstancesErrorKind::LimitExceededError(_)
)
}
pub fn is_operation_id_already_exists_error(&self) -> bool {
matches!(
&self.kind,
CreateStackInstancesErrorKind::OperationIdAlreadyExistsError(_)
)
}
pub fn is_operation_in_progress_error(&self) -> bool {
matches!(
&self.kind,
CreateStackInstancesErrorKind::OperationInProgressError(_)
)
}
pub fn is_stack_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
CreateStackInstancesErrorKind::StackSetNotFoundError(_)
)
}
pub fn is_stale_request_error(&self) -> bool {
matches!(
&self.kind,
CreateStackInstancesErrorKind::StaleRequestError(_)
)
}
}
impl std::error::Error for CreateStackInstancesError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
CreateStackInstancesErrorKind::InvalidOperationError(_inner) => Some(_inner),
CreateStackInstancesErrorKind::LimitExceededError(_inner) => Some(_inner),
CreateStackInstancesErrorKind::OperationIdAlreadyExistsError(_inner) => Some(_inner),
CreateStackInstancesErrorKind::OperationInProgressError(_inner) => Some(_inner),
CreateStackInstancesErrorKind::StackSetNotFoundError(_inner) => Some(_inner),
CreateStackInstancesErrorKind::StaleRequestError(_inner) => Some(_inner),
CreateStackInstancesErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
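// Illustrative sketch: walking the error chain exposed by the `std::error::Error` impl above.
// Nothing here relies on anything beyond what is already defined in this file; the helper name
// is made up for the example.
#[allow(dead_code)]
fn print_create_stack_instances_error_chain(err: &CreateStackInstancesError) {
    eprintln!("error: {}", err);
    let mut source = std::error::Error::source(err);
    while let Some(cause) = source {
        eprintln!("caused by: {}", cause);
        source = cause.source();
    }
}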
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct CreateStackSetError {
pub kind: CreateStackSetErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum CreateStackSetErrorKind {
CreatedButModifiedError(crate::error::CreatedButModifiedError),
LimitExceededError(crate::error::LimitExceededError),
NameAlreadyExistsError(crate::error::NameAlreadyExistsError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for CreateStackSetError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
CreateStackSetErrorKind::CreatedButModifiedError(_inner) => _inner.fmt(f),
CreateStackSetErrorKind::LimitExceededError(_inner) => _inner.fmt(f),
CreateStackSetErrorKind::NameAlreadyExistsError(_inner) => _inner.fmt(f),
CreateStackSetErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for CreateStackSetError {
fn code(&self) -> Option<&str> {
CreateStackSetError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl CreateStackSetError {
pub fn new(kind: CreateStackSetErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: CreateStackSetErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: CreateStackSetErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_created_but_modified_error(&self) -> bool {
matches!(
&self.kind,
CreateStackSetErrorKind::CreatedButModifiedError(_)
)
}
pub fn is_limit_exceeded_error(&self) -> bool {
matches!(&self.kind, CreateStackSetErrorKind::LimitExceededError(_))
}
pub fn is_name_already_exists_error(&self) -> bool {
matches!(
&self.kind,
CreateStackSetErrorKind::NameAlreadyExistsError(_)
)
}
}
impl std::error::Error for CreateStackSetError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
CreateStackSetErrorKind::CreatedButModifiedError(_inner) => Some(_inner),
CreateStackSetErrorKind::LimitExceededError(_inner) => Some(_inner),
CreateStackSetErrorKind::NameAlreadyExistsError(_inner) => Some(_inner),
CreateStackSetErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DeleteChangeSetError {
pub kind: DeleteChangeSetErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DeleteChangeSetErrorKind {
InvalidChangeSetStatusError(crate::error::InvalidChangeSetStatusError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DeleteChangeSetError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DeleteChangeSetErrorKind::InvalidChangeSetStatusError(_inner) => _inner.fmt(f),
DeleteChangeSetErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DeleteChangeSetError {
fn code(&self) -> Option<&str> {
DeleteChangeSetError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DeleteChangeSetError {
pub fn new(kind: DeleteChangeSetErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DeleteChangeSetErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DeleteChangeSetErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_invalid_change_set_status_error(&self) -> bool {
matches!(
&self.kind,
DeleteChangeSetErrorKind::InvalidChangeSetStatusError(_)
)
}
}
impl std::error::Error for DeleteChangeSetError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DeleteChangeSetErrorKind::InvalidChangeSetStatusError(_inner) => Some(_inner),
DeleteChangeSetErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DeleteStackError {
pub kind: DeleteStackErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DeleteStackErrorKind {
TokenAlreadyExistsError(crate::error::TokenAlreadyExistsError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DeleteStackError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DeleteStackErrorKind::TokenAlreadyExistsError(_inner) => _inner.fmt(f),
DeleteStackErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DeleteStackError {
fn code(&self) -> Option<&str> {
DeleteStackError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DeleteStackError {
pub fn new(kind: DeleteStackErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DeleteStackErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DeleteStackErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_token_already_exists_error(&self) -> bool {
matches!(&self.kind, DeleteStackErrorKind::TokenAlreadyExistsError(_))
}
}
impl std::error::Error for DeleteStackError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DeleteStackErrorKind::TokenAlreadyExistsError(_inner) => Some(_inner),
DeleteStackErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DeleteStackInstancesError {
pub kind: DeleteStackInstancesErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DeleteStackInstancesErrorKind {
InvalidOperationError(crate::error::InvalidOperationError),
OperationIdAlreadyExistsError(crate::error::OperationIdAlreadyExistsError),
OperationInProgressError(crate::error::OperationInProgressError),
StackSetNotFoundError(crate::error::StackSetNotFoundError),
StaleRequestError(crate::error::StaleRequestError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DeleteStackInstancesError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DeleteStackInstancesErrorKind::InvalidOperationError(_inner) => _inner.fmt(f),
DeleteStackInstancesErrorKind::OperationIdAlreadyExistsError(_inner) => _inner.fmt(f),
DeleteStackInstancesErrorKind::OperationInProgressError(_inner) => _inner.fmt(f),
DeleteStackInstancesErrorKind::StackSetNotFoundError(_inner) => _inner.fmt(f),
DeleteStackInstancesErrorKind::StaleRequestError(_inner) => _inner.fmt(f),
DeleteStackInstancesErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DeleteStackInstancesError {
fn code(&self) -> Option<&str> {
DeleteStackInstancesError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DeleteStackInstancesError {
pub fn new(kind: DeleteStackInstancesErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DeleteStackInstancesErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DeleteStackInstancesErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_invalid_operation_error(&self) -> bool {
matches!(
&self.kind,
DeleteStackInstancesErrorKind::InvalidOperationError(_)
)
}
pub fn is_operation_id_already_exists_error(&self) -> bool {
matches!(
&self.kind,
DeleteStackInstancesErrorKind::OperationIdAlreadyExistsError(_)
)
}
pub fn is_operation_in_progress_error(&self) -> bool {
matches!(
&self.kind,
DeleteStackInstancesErrorKind::OperationInProgressError(_)
)
}
pub fn is_stack_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
DeleteStackInstancesErrorKind::StackSetNotFoundError(_)
)
}
pub fn is_stale_request_error(&self) -> bool {
matches!(
&self.kind,
DeleteStackInstancesErrorKind::StaleRequestError(_)
)
}
}
impl std::error::Error for DeleteStackInstancesError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DeleteStackInstancesErrorKind::InvalidOperationError(_inner) => Some(_inner),
DeleteStackInstancesErrorKind::OperationIdAlreadyExistsError(_inner) => Some(_inner),
DeleteStackInstancesErrorKind::OperationInProgressError(_inner) => Some(_inner),
DeleteStackInstancesErrorKind::StackSetNotFoundError(_inner) => Some(_inner),
DeleteStackInstancesErrorKind::StaleRequestError(_inner) => Some(_inner),
DeleteStackInstancesErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DeleteStackSetError {
pub kind: DeleteStackSetErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DeleteStackSetErrorKind {
OperationInProgressError(crate::error::OperationInProgressError),
StackSetNotEmptyError(crate::error::StackSetNotEmptyError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DeleteStackSetError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DeleteStackSetErrorKind::OperationInProgressError(_inner) => _inner.fmt(f),
DeleteStackSetErrorKind::StackSetNotEmptyError(_inner) => _inner.fmt(f),
DeleteStackSetErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DeleteStackSetError {
fn code(&self) -> Option<&str> {
DeleteStackSetError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DeleteStackSetError {
pub fn new(kind: DeleteStackSetErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DeleteStackSetErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DeleteStackSetErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_operation_in_progress_error(&self) -> bool {
matches!(
&self.kind,
DeleteStackSetErrorKind::OperationInProgressError(_)
)
}
pub fn is_stack_set_not_empty_error(&self) -> bool {
matches!(
&self.kind,
DeleteStackSetErrorKind::StackSetNotEmptyError(_)
)
}
}
impl std::error::Error for DeleteStackSetError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DeleteStackSetErrorKind::OperationInProgressError(_inner) => Some(_inner),
DeleteStackSetErrorKind::StackSetNotEmptyError(_inner) => Some(_inner),
DeleteStackSetErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DeregisterTypeError {
pub kind: DeregisterTypeErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DeregisterTypeErrorKind {
CFNRegistryError(crate::error::CFNRegistryError),
TypeNotFoundError(crate::error::TypeNotFoundError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DeregisterTypeError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DeregisterTypeErrorKind::CFNRegistryError(_inner) => _inner.fmt(f),
DeregisterTypeErrorKind::TypeNotFoundError(_inner) => _inner.fmt(f),
DeregisterTypeErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DeregisterTypeError {
fn code(&self) -> Option<&str> {
DeregisterTypeError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DeregisterTypeError {
pub fn new(kind: DeregisterTypeErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DeregisterTypeErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DeregisterTypeErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_cfn_registry_error(&self) -> bool {
matches!(&self.kind, DeregisterTypeErrorKind::CFNRegistryError(_))
}
pub fn is_type_not_found_error(&self) -> bool {
matches!(&self.kind, DeregisterTypeErrorKind::TypeNotFoundError(_))
}
}
impl std::error::Error for DeregisterTypeError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DeregisterTypeErrorKind::CFNRegistryError(_inner) => Some(_inner),
DeregisterTypeErrorKind::TypeNotFoundError(_inner) => Some(_inner),
DeregisterTypeErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DescribeAccountLimitsError {
pub kind: DescribeAccountLimitsErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DescribeAccountLimitsErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DescribeAccountLimitsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DescribeAccountLimitsErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DescribeAccountLimitsError {
fn code(&self) -> Option<&str> {
DescribeAccountLimitsError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DescribeAccountLimitsError {
pub fn new(kind: DescribeAccountLimitsErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DescribeAccountLimitsErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DescribeAccountLimitsErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for DescribeAccountLimitsError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DescribeAccountLimitsErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
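// Hedged example (not part of the generated code) of constructing this error manually, e.g. in
// a test double. A `&str` converts into `Box<dyn std::error::Error + Send + Sync>`, so plain
// text is enough here; real values normally come from the response deserializer.
#[allow(dead_code)]
fn sample_describe_account_limits_error() -> DescribeAccountLimitsError {
    DescribeAccountLimitsError::unhandled("simulated transport failure")
}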
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DescribeChangeSetError {
pub kind: DescribeChangeSetErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DescribeChangeSetErrorKind {
ChangeSetNotFoundError(crate::error::ChangeSetNotFoundError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DescribeChangeSetError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DescribeChangeSetErrorKind::ChangeSetNotFoundError(_inner) => _inner.fmt(f),
DescribeChangeSetErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DescribeChangeSetError {
fn code(&self) -> Option<&str> {
DescribeChangeSetError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DescribeChangeSetError {
pub fn new(kind: DescribeChangeSetErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DescribeChangeSetErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DescribeChangeSetErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_change_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
DescribeChangeSetErrorKind::ChangeSetNotFoundError(_)
)
}
}
impl std::error::Error for DescribeChangeSetError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DescribeChangeSetErrorKind::ChangeSetNotFoundError(_inner) => Some(_inner),
DescribeChangeSetErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DescribeStackDriftDetectionStatusError {
pub kind: DescribeStackDriftDetectionStatusErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DescribeStackDriftDetectionStatusErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DescribeStackDriftDetectionStatusError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DescribeStackDriftDetectionStatusErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DescribeStackDriftDetectionStatusError {
fn code(&self) -> Option<&str> {
DescribeStackDriftDetectionStatusError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DescribeStackDriftDetectionStatusError {
pub fn new(
kind: DescribeStackDriftDetectionStatusErrorKind,
meta: smithy_types::Error,
) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DescribeStackDriftDetectionStatusErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DescribeStackDriftDetectionStatusErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for DescribeStackDriftDetectionStatusError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DescribeStackDriftDetectionStatusErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DescribeStackEventsError {
pub kind: DescribeStackEventsErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DescribeStackEventsErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DescribeStackEventsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DescribeStackEventsErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DescribeStackEventsError {
fn code(&self) -> Option<&str> {
DescribeStackEventsError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DescribeStackEventsError {
pub fn new(kind: DescribeStackEventsErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DescribeStackEventsErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DescribeStackEventsErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for DescribeStackEventsError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DescribeStackEventsErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DescribeStackInstanceError {
pub kind: DescribeStackInstanceErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DescribeStackInstanceErrorKind {
StackInstanceNotFoundError(crate::error::StackInstanceNotFoundError),
StackSetNotFoundError(crate::error::StackSetNotFoundError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DescribeStackInstanceError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DescribeStackInstanceErrorKind::StackInstanceNotFoundError(_inner) => _inner.fmt(f),
DescribeStackInstanceErrorKind::StackSetNotFoundError(_inner) => _inner.fmt(f),
DescribeStackInstanceErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DescribeStackInstanceError {
fn code(&self) -> Option<&str> {
DescribeStackInstanceError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DescribeStackInstanceError {
pub fn new(kind: DescribeStackInstanceErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DescribeStackInstanceErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DescribeStackInstanceErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_stack_instance_not_found_error(&self) -> bool {
matches!(
&self.kind,
DescribeStackInstanceErrorKind::StackInstanceNotFoundError(_)
)
}
pub fn is_stack_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
DescribeStackInstanceErrorKind::StackSetNotFoundError(_)
)
}
}
impl std::error::Error for DescribeStackInstanceError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DescribeStackInstanceErrorKind::StackInstanceNotFoundError(_inner) => Some(_inner),
DescribeStackInstanceErrorKind::StackSetNotFoundError(_inner) => Some(_inner),
DescribeStackInstanceErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DescribeStackResourceError {
pub kind: DescribeStackResourceErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DescribeStackResourceErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DescribeStackResourceError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DescribeStackResourceErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DescribeStackResourceError {
fn code(&self) -> Option<&str> {
DescribeStackResourceError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DescribeStackResourceError {
pub fn new(kind: DescribeStackResourceErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DescribeStackResourceErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DescribeStackResourceErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for DescribeStackResourceError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DescribeStackResourceErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DescribeStackResourceDriftsError {
pub kind: DescribeStackResourceDriftsErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DescribeStackResourceDriftsErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DescribeStackResourceDriftsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DescribeStackResourceDriftsErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DescribeStackResourceDriftsError {
fn code(&self) -> Option<&str> {
DescribeStackResourceDriftsError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DescribeStackResourceDriftsError {
pub fn new(kind: DescribeStackResourceDriftsErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DescribeStackResourceDriftsErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DescribeStackResourceDriftsErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for DescribeStackResourceDriftsError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DescribeStackResourceDriftsErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DescribeStackResourcesError {
pub kind: DescribeStackResourcesErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DescribeStackResourcesErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DescribeStackResourcesError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DescribeStackResourcesErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DescribeStackResourcesError {
fn code(&self) -> Option<&str> {
DescribeStackResourcesError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DescribeStackResourcesError {
pub fn new(kind: DescribeStackResourcesErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DescribeStackResourcesErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DescribeStackResourcesErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for DescribeStackResourcesError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DescribeStackResourcesErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DescribeStacksError {
pub kind: DescribeStacksErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DescribeStacksErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DescribeStacksError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DescribeStacksErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DescribeStacksError {
fn code(&self) -> Option<&str> {
DescribeStacksError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DescribeStacksError {
pub fn new(kind: DescribeStacksErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DescribeStacksErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DescribeStacksErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for DescribeStacksError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DescribeStacksErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DescribeStackSetError {
pub kind: DescribeStackSetErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DescribeStackSetErrorKind {
StackSetNotFoundError(crate::error::StackSetNotFoundError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DescribeStackSetError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DescribeStackSetErrorKind::StackSetNotFoundError(_inner) => _inner.fmt(f),
DescribeStackSetErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DescribeStackSetError {
fn code(&self) -> Option<&str> {
DescribeStackSetError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DescribeStackSetError {
pub fn new(kind: DescribeStackSetErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DescribeStackSetErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DescribeStackSetErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_stack_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
DescribeStackSetErrorKind::StackSetNotFoundError(_)
)
}
}
impl std::error::Error for DescribeStackSetError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DescribeStackSetErrorKind::StackSetNotFoundError(_inner) => Some(_inner),
DescribeStackSetErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DescribeStackSetOperationError {
pub kind: DescribeStackSetOperationErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DescribeStackSetOperationErrorKind {
OperationNotFoundError(crate::error::OperationNotFoundError),
StackSetNotFoundError(crate::error::StackSetNotFoundError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DescribeStackSetOperationError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DescribeStackSetOperationErrorKind::OperationNotFoundError(_inner) => _inner.fmt(f),
DescribeStackSetOperationErrorKind::StackSetNotFoundError(_inner) => _inner.fmt(f),
DescribeStackSetOperationErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DescribeStackSetOperationError {
fn code(&self) -> Option<&str> {
DescribeStackSetOperationError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DescribeStackSetOperationError {
pub fn new(kind: DescribeStackSetOperationErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DescribeStackSetOperationErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DescribeStackSetOperationErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_operation_not_found_error(&self) -> bool {
matches!(
&self.kind,
DescribeStackSetOperationErrorKind::OperationNotFoundError(_)
)
}
pub fn is_stack_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
DescribeStackSetOperationErrorKind::StackSetNotFoundError(_)
)
}
}
impl std::error::Error for DescribeStackSetOperationError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DescribeStackSetOperationErrorKind::OperationNotFoundError(_inner) => Some(_inner),
DescribeStackSetOperationErrorKind::StackSetNotFoundError(_inner) => Some(_inner),
DescribeStackSetOperationErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
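// Sketch of how retry logic might consult the `ProvideErrorKind` implementations in this file;
// the helper `is_modeled_retryable` is hypothetical and not part of the SDK. As generated here,
// `retryable_error_kind` always returns `None`, so this reports false for every error.
#[allow(dead_code)]
fn is_modeled_retryable(err: &DescribeStackSetOperationError) -> bool {
    use smithy_types::retry::ProvideErrorKind;
    err.retryable_error_kind().is_some()
}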
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DescribeTypeError {
pub kind: DescribeTypeErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DescribeTypeErrorKind {
CFNRegistryError(crate::error::CFNRegistryError),
TypeNotFoundError(crate::error::TypeNotFoundError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DescribeTypeError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DescribeTypeErrorKind::CFNRegistryError(_inner) => _inner.fmt(f),
DescribeTypeErrorKind::TypeNotFoundError(_inner) => _inner.fmt(f),
DescribeTypeErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DescribeTypeError {
fn code(&self) -> Option<&str> {
DescribeTypeError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DescribeTypeError {
pub fn new(kind: DescribeTypeErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DescribeTypeErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DescribeTypeErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_cfn_registry_error(&self) -> bool {
matches!(&self.kind, DescribeTypeErrorKind::CFNRegistryError(_))
}
pub fn is_type_not_found_error(&self) -> bool {
matches!(&self.kind, DescribeTypeErrorKind::TypeNotFoundError(_))
}
}
impl std::error::Error for DescribeTypeError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DescribeTypeErrorKind::CFNRegistryError(_inner) => Some(_inner),
DescribeTypeErrorKind::TypeNotFoundError(_inner) => Some(_inner),
DescribeTypeErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DescribeTypeRegistrationError {
pub kind: DescribeTypeRegistrationErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DescribeTypeRegistrationErrorKind {
CFNRegistryError(crate::error::CFNRegistryError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DescribeTypeRegistrationError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DescribeTypeRegistrationErrorKind::CFNRegistryError(_inner) => _inner.fmt(f),
DescribeTypeRegistrationErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DescribeTypeRegistrationError {
fn code(&self) -> Option<&str> {
DescribeTypeRegistrationError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DescribeTypeRegistrationError {
pub fn new(kind: DescribeTypeRegistrationErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DescribeTypeRegistrationErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DescribeTypeRegistrationErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_cfn_registry_error(&self) -> bool {
matches!(
&self.kind,
DescribeTypeRegistrationErrorKind::CFNRegistryError(_)
)
}
}
impl std::error::Error for DescribeTypeRegistrationError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DescribeTypeRegistrationErrorKind::CFNRegistryError(_inner) => Some(_inner),
DescribeTypeRegistrationErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DetectStackDriftError {
pub kind: DetectStackDriftErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DetectStackDriftErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DetectStackDriftError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DetectStackDriftErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DetectStackDriftError {
fn code(&self) -> Option<&str> {
DetectStackDriftError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DetectStackDriftError {
pub fn new(kind: DetectStackDriftErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DetectStackDriftErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DetectStackDriftErrorKind::Unhandled(err.into()),
}
}
    // Consider if this should actually be `Option<Cow<&str>>`. That would let us fall back to the
    // `Display` implementation required by `std::error::Error` to generate a message in that case.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for DetectStackDriftError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DetectStackDriftErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DetectStackResourceDriftError {
pub kind: DetectStackResourceDriftErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DetectStackResourceDriftErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DetectStackResourceDriftError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DetectStackResourceDriftErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DetectStackResourceDriftError {
fn code(&self) -> Option<&str> {
DetectStackResourceDriftError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DetectStackResourceDriftError {
pub fn new(kind: DetectStackResourceDriftErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DetectStackResourceDriftErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DetectStackResourceDriftErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for DetectStackResourceDriftError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DetectStackResourceDriftErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct DetectStackSetDriftError {
pub kind: DetectStackSetDriftErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum DetectStackSetDriftErrorKind {
InvalidOperationError(crate::error::InvalidOperationError),
OperationInProgressError(crate::error::OperationInProgressError),
StackSetNotFoundError(crate::error::StackSetNotFoundError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for DetectStackSetDriftError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
DetectStackSetDriftErrorKind::InvalidOperationError(_inner) => _inner.fmt(f),
DetectStackSetDriftErrorKind::OperationInProgressError(_inner) => _inner.fmt(f),
DetectStackSetDriftErrorKind::StackSetNotFoundError(_inner) => _inner.fmt(f),
DetectStackSetDriftErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for DetectStackSetDriftError {
fn code(&self) -> Option<&str> {
DetectStackSetDriftError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl DetectStackSetDriftError {
pub fn new(kind: DetectStackSetDriftErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: DetectStackSetDriftErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: DetectStackSetDriftErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_invalid_operation_error(&self) -> bool {
matches!(
&self.kind,
DetectStackSetDriftErrorKind::InvalidOperationError(_)
)
}
pub fn is_operation_in_progress_error(&self) -> bool {
matches!(
&self.kind,
DetectStackSetDriftErrorKind::OperationInProgressError(_)
)
}
pub fn is_stack_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
DetectStackSetDriftErrorKind::StackSetNotFoundError(_)
)
}
}
impl std::error::Error for DetectStackSetDriftError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
DetectStackSetDriftErrorKind::InvalidOperationError(_inner) => Some(_inner),
DetectStackSetDriftErrorKind::OperationInProgressError(_inner) => Some(_inner),
DetectStackSetDriftErrorKind::StackSetNotFoundError(_inner) => Some(_inner),
DetectStackSetDriftErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
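// Illustrative sketch, not generated code: matching directly on the error kind. All
// current variants of `DetectStackSetDriftErrorKind` are covered here; because the enum
// is `#[non_exhaustive]`, code living outside this crate would also need a wildcard arm.
// The summary strings are example text only.
fn detect_stack_set_drift_error_summary(err: &DetectStackSetDriftError) -> &'static str {
    match &err.kind {
        DetectStackSetDriftErrorKind::InvalidOperationError(_) => {
            "the drift-detection operation is not valid for this stack set"
        }
        DetectStackSetDriftErrorKind::OperationInProgressError(_) => {
            "another operation on this stack set is still in progress"
        }
        DetectStackSetDriftErrorKind::StackSetNotFoundError(_) => "the stack set was not found",
        DetectStackSetDriftErrorKind::Unhandled(_) => "an unexpected error was returned",
    }
}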
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct EstimateTemplateCostError {
pub kind: EstimateTemplateCostErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum EstimateTemplateCostErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for EstimateTemplateCostError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
EstimateTemplateCostErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for EstimateTemplateCostError {
fn code(&self) -> Option<&str> {
EstimateTemplateCostError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl EstimateTemplateCostError {
pub fn new(kind: EstimateTemplateCostErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: EstimateTemplateCostErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: EstimateTemplateCostErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for EstimateTemplateCostError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
EstimateTemplateCostErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ExecuteChangeSetError {
pub kind: ExecuteChangeSetErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ExecuteChangeSetErrorKind {
ChangeSetNotFoundError(crate::error::ChangeSetNotFoundError),
InsufficientCapabilitiesError(crate::error::InsufficientCapabilitiesError),
InvalidChangeSetStatusError(crate::error::InvalidChangeSetStatusError),
TokenAlreadyExistsError(crate::error::TokenAlreadyExistsError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ExecuteChangeSetError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ExecuteChangeSetErrorKind::ChangeSetNotFoundError(_inner) => _inner.fmt(f),
ExecuteChangeSetErrorKind::InsufficientCapabilitiesError(_inner) => _inner.fmt(f),
ExecuteChangeSetErrorKind::InvalidChangeSetStatusError(_inner) => _inner.fmt(f),
ExecuteChangeSetErrorKind::TokenAlreadyExistsError(_inner) => _inner.fmt(f),
ExecuteChangeSetErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ExecuteChangeSetError {
fn code(&self) -> Option<&str> {
ExecuteChangeSetError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ExecuteChangeSetError {
pub fn new(kind: ExecuteChangeSetErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ExecuteChangeSetErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ExecuteChangeSetErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_change_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
ExecuteChangeSetErrorKind::ChangeSetNotFoundError(_)
)
}
pub fn is_insufficient_capabilities_error(&self) -> bool {
matches!(
&self.kind,
ExecuteChangeSetErrorKind::InsufficientCapabilitiesError(_)
)
}
pub fn is_invalid_change_set_status_error(&self) -> bool {
matches!(
&self.kind,
ExecuteChangeSetErrorKind::InvalidChangeSetStatusError(_)
)
}
pub fn is_token_already_exists_error(&self) -> bool {
matches!(
&self.kind,
ExecuteChangeSetErrorKind::TokenAlreadyExistsError(_)
)
}
}
impl std::error::Error for ExecuteChangeSetError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ExecuteChangeSetErrorKind::ChangeSetNotFoundError(_inner) => Some(_inner),
ExecuteChangeSetErrorKind::InsufficientCapabilitiesError(_inner) => Some(_inner),
ExecuteChangeSetErrorKind::InvalidChangeSetStatusError(_inner) => Some(_inner),
ExecuteChangeSetErrorKind::TokenAlreadyExistsError(_inner) => Some(_inner),
ExecuteChangeSetErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
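// Sketch under a stated assumption: a caller that polls `ExecuteChangeSet` while the
// change set is still being created may treat `InvalidChangeSetStatusError` as worth
// retrying after a delay. Whether that is appropriate depends on the workflow; this
// helper only demonstrates the `is_*` predicates generated above.
fn execute_change_set_is_worth_retrying(err: &ExecuteChangeSetError) -> bool {
    // Modeled conditions that will not succeed on a plain retry.
    if err.is_change_set_not_found_error()
        || err.is_insufficient_capabilities_error()
        || err.is_token_already_exists_error()
    {
        return false;
    }
    // Possibly transient: the change set may simply not be in an executable state yet.
    err.is_invalid_change_set_status_error()
}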
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct GetStackPolicyError {
pub kind: GetStackPolicyErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum GetStackPolicyErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for GetStackPolicyError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
GetStackPolicyErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for GetStackPolicyError {
fn code(&self) -> Option<&str> {
GetStackPolicyError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl GetStackPolicyError {
pub fn new(kind: GetStackPolicyErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: GetStackPolicyErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: GetStackPolicyErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for GetStackPolicyError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
GetStackPolicyErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct GetTemplateError {
pub kind: GetTemplateErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum GetTemplateErrorKind {
ChangeSetNotFoundError(crate::error::ChangeSetNotFoundError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for GetTemplateError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
GetTemplateErrorKind::ChangeSetNotFoundError(_inner) => _inner.fmt(f),
GetTemplateErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for GetTemplateError {
fn code(&self) -> Option<&str> {
GetTemplateError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl GetTemplateError {
pub fn new(kind: GetTemplateErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: GetTemplateErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: GetTemplateErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_change_set_not_found_error(&self) -> bool {
matches!(&self.kind, GetTemplateErrorKind::ChangeSetNotFoundError(_))
}
}
impl std::error::Error for GetTemplateError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
GetTemplateErrorKind::ChangeSetNotFoundError(_inner) => Some(_inner),
GetTemplateErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
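// Sketch, not part of the generated deserializers: the `unhandled` constructor above is
// the hook for surfacing unexpected failures as a `GetTemplateError`. The inputs below
// are hypothetical; any value convertible into `Box<dyn std::error::Error + Send + Sync>`
// (including plain strings) is accepted.
fn get_template_error_from_utf8(parse_err: std::str::Utf8Error) -> GetTemplateError {
    GetTemplateError::unhandled(parse_err)
}
fn get_template_error_for_empty_body() -> GetTemplateError {
    GetTemplateError::unhandled("unexpected empty response body")
}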
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct GetTemplateSummaryError {
pub kind: GetTemplateSummaryErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum GetTemplateSummaryErrorKind {
StackSetNotFoundError(crate::error::StackSetNotFoundError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for GetTemplateSummaryError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
GetTemplateSummaryErrorKind::StackSetNotFoundError(_inner) => _inner.fmt(f),
GetTemplateSummaryErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for GetTemplateSummaryError {
fn code(&self) -> Option<&str> {
GetTemplateSummaryError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl GetTemplateSummaryError {
pub fn new(kind: GetTemplateSummaryErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: GetTemplateSummaryErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: GetTemplateSummaryErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_stack_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
GetTemplateSummaryErrorKind::StackSetNotFoundError(_)
)
}
}
impl std::error::Error for GetTemplateSummaryError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
GetTemplateSummaryErrorKind::StackSetNotFoundError(_inner) => Some(_inner),
GetTemplateSummaryErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ListChangeSetsError {
pub kind: ListChangeSetsErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ListChangeSetsErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ListChangeSetsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ListChangeSetsErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ListChangeSetsError {
fn code(&self) -> Option<&str> {
ListChangeSetsError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ListChangeSetsError {
pub fn new(kind: ListChangeSetsErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ListChangeSetsErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ListChangeSetsErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for ListChangeSetsError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ListChangeSetsErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ListExportsError {
pub kind: ListExportsErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ListExportsErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ListExportsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ListExportsErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ListExportsError {
fn code(&self) -> Option<&str> {
ListExportsError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ListExportsError {
pub fn new(kind: ListExportsErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ListExportsErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ListExportsErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for ListExportsError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ListExportsErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ListImportsError {
pub kind: ListImportsErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ListImportsErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ListImportsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ListImportsErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ListImportsError {
fn code(&self) -> Option<&str> {
ListImportsError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ListImportsError {
pub fn new(kind: ListImportsErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ListImportsErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ListImportsErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for ListImportsError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ListImportsErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
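// Several of the `List*` errors above (for example `ListExportsError` and
// `ListImportsError`) model no service-specific variants at all, so handling them tends
// to go through the shared metadata instead. Sketch of a generic helper constrained only
// on the `smithy_types::retry::ProvideErrorKind` impls in this module; the logging style
// and function name are assumptions, e.g. `log_operation_error("ListImports", &err)`.
fn log_operation_error<E>(operation: &str, err: &E)
where
    E: smithy_types::retry::ProvideErrorKind + std::fmt::Display,
{
    match err.code() {
        Some(code) => eprintln!("{} failed with service error code `{}`: {}", operation, code, err),
        None => eprintln!("{} failed without a modeled error code: {}", operation, err),
    }
}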
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ListStackInstancesError {
pub kind: ListStackInstancesErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ListStackInstancesErrorKind {
StackSetNotFoundError(crate::error::StackSetNotFoundError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ListStackInstancesError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ListStackInstancesErrorKind::StackSetNotFoundError(_inner) => _inner.fmt(f),
ListStackInstancesErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ListStackInstancesError {
fn code(&self) -> Option<&str> {
ListStackInstancesError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ListStackInstancesError {
pub fn new(kind: ListStackInstancesErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ListStackInstancesErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ListStackInstancesErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_stack_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
ListStackInstancesErrorKind::StackSetNotFoundError(_)
)
}
}
impl std::error::Error for ListStackInstancesError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ListStackInstancesErrorKind::StackSetNotFoundError(_inner) => Some(_inner),
ListStackInstancesErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ListStackResourcesError {
pub kind: ListStackResourcesErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ListStackResourcesErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ListStackResourcesError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ListStackResourcesErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ListStackResourcesError {
fn code(&self) -> Option<&str> {
ListStackResourcesError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ListStackResourcesError {
pub fn new(kind: ListStackResourcesErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ListStackResourcesErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ListStackResourcesErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for ListStackResourcesError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ListStackResourcesErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ListStacksError {
pub kind: ListStacksErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ListStacksErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ListStacksError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ListStacksErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ListStacksError {
fn code(&self) -> Option<&str> {
ListStacksError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ListStacksError {
pub fn new(kind: ListStacksErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ListStacksErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ListStacksErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for ListStacksError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ListStacksErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ListStackSetOperationResultsError {
pub kind: ListStackSetOperationResultsErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ListStackSetOperationResultsErrorKind {
OperationNotFoundError(crate::error::OperationNotFoundError),
StackSetNotFoundError(crate::error::StackSetNotFoundError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ListStackSetOperationResultsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ListStackSetOperationResultsErrorKind::OperationNotFoundError(_inner) => _inner.fmt(f),
ListStackSetOperationResultsErrorKind::StackSetNotFoundError(_inner) => _inner.fmt(f),
ListStackSetOperationResultsErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ListStackSetOperationResultsError {
fn code(&self) -> Option<&str> {
ListStackSetOperationResultsError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ListStackSetOperationResultsError {
pub fn new(kind: ListStackSetOperationResultsErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ListStackSetOperationResultsErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ListStackSetOperationResultsErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_operation_not_found_error(&self) -> bool {
matches!(
&self.kind,
ListStackSetOperationResultsErrorKind::OperationNotFoundError(_)
)
}
pub fn is_stack_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
ListStackSetOperationResultsErrorKind::StackSetNotFoundError(_)
)
}
}
impl std::error::Error for ListStackSetOperationResultsError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ListStackSetOperationResultsErrorKind::OperationNotFoundError(_inner) => Some(_inner),
ListStackSetOperationResultsErrorKind::StackSetNotFoundError(_inner) => Some(_inner),
ListStackSetOperationResultsErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ListStackSetOperationsError {
pub kind: ListStackSetOperationsErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ListStackSetOperationsErrorKind {
StackSetNotFoundError(crate::error::StackSetNotFoundError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ListStackSetOperationsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ListStackSetOperationsErrorKind::StackSetNotFoundError(_inner) => _inner.fmt(f),
ListStackSetOperationsErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ListStackSetOperationsError {
fn code(&self) -> Option<&str> {
ListStackSetOperationsError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ListStackSetOperationsError {
pub fn new(kind: ListStackSetOperationsErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ListStackSetOperationsErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ListStackSetOperationsErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_stack_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
ListStackSetOperationsErrorKind::StackSetNotFoundError(_)
)
}
}
impl std::error::Error for ListStackSetOperationsError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ListStackSetOperationsErrorKind::StackSetNotFoundError(_inner) => Some(_inner),
ListStackSetOperationsErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ListStackSetsError {
pub kind: ListStackSetsErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ListStackSetsErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ListStackSetsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ListStackSetsErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ListStackSetsError {
fn code(&self) -> Option<&str> {
ListStackSetsError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ListStackSetsError {
pub fn new(kind: ListStackSetsErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ListStackSetsErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ListStackSetsErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for ListStackSetsError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ListStackSetsErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ListTypeRegistrationsError {
pub kind: ListTypeRegistrationsErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ListTypeRegistrationsErrorKind {
CFNRegistryError(crate::error::CFNRegistryError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ListTypeRegistrationsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ListTypeRegistrationsErrorKind::CFNRegistryError(_inner) => _inner.fmt(f),
ListTypeRegistrationsErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ListTypeRegistrationsError {
fn code(&self) -> Option<&str> {
ListTypeRegistrationsError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ListTypeRegistrationsError {
pub fn new(kind: ListTypeRegistrationsErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ListTypeRegistrationsErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ListTypeRegistrationsErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_cfn_registry_error(&self) -> bool {
matches!(
&self.kind,
ListTypeRegistrationsErrorKind::CFNRegistryError(_)
)
}
}
impl std::error::Error for ListTypeRegistrationsError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ListTypeRegistrationsErrorKind::CFNRegistryError(_inner) => Some(_inner),
ListTypeRegistrationsErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ListTypesError {
pub kind: ListTypesErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ListTypesErrorKind {
CFNRegistryError(crate::error::CFNRegistryError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ListTypesError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ListTypesErrorKind::CFNRegistryError(_inner) => _inner.fmt(f),
ListTypesErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ListTypesError {
fn code(&self) -> Option<&str> {
ListTypesError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ListTypesError {
pub fn new(kind: ListTypesErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ListTypesErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ListTypesErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_cfn_registry_error(&self) -> bool {
matches!(&self.kind, ListTypesErrorKind::CFNRegistryError(_))
}
}
impl std::error::Error for ListTypesError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ListTypesErrorKind::CFNRegistryError(_inner) => Some(_inner),
ListTypesErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ListTypeVersionsError {
pub kind: ListTypeVersionsErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ListTypeVersionsErrorKind {
CFNRegistryError(crate::error::CFNRegistryError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ListTypeVersionsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ListTypeVersionsErrorKind::CFNRegistryError(_inner) => _inner.fmt(f),
ListTypeVersionsErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ListTypeVersionsError {
fn code(&self) -> Option<&str> {
ListTypeVersionsError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ListTypeVersionsError {
pub fn new(kind: ListTypeVersionsErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ListTypeVersionsErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ListTypeVersionsErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_cfn_registry_error(&self) -> bool {
matches!(&self.kind, ListTypeVersionsErrorKind::CFNRegistryError(_))
}
}
impl std::error::Error for ListTypeVersionsError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ListTypeVersionsErrorKind::CFNRegistryError(_inner) => Some(_inner),
ListTypeVersionsErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct RecordHandlerProgressError {
pub kind: RecordHandlerProgressErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum RecordHandlerProgressErrorKind {
InvalidStateTransitionError(crate::error::InvalidStateTransitionError),
OperationStatusCheckFailedError(crate::error::OperationStatusCheckFailedError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for RecordHandlerProgressError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
RecordHandlerProgressErrorKind::InvalidStateTransitionError(_inner) => _inner.fmt(f),
RecordHandlerProgressErrorKind::OperationStatusCheckFailedError(_inner) => {
_inner.fmt(f)
}
RecordHandlerProgressErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for RecordHandlerProgressError {
fn code(&self) -> Option<&str> {
RecordHandlerProgressError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl RecordHandlerProgressError {
pub fn new(kind: RecordHandlerProgressErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: RecordHandlerProgressErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: RecordHandlerProgressErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_invalid_state_transition_error(&self) -> bool {
matches!(
&self.kind,
RecordHandlerProgressErrorKind::InvalidStateTransitionError(_)
)
}
pub fn is_operation_status_check_failed_error(&self) -> bool {
matches!(
&self.kind,
RecordHandlerProgressErrorKind::OperationStatusCheckFailedError(_)
)
}
}
impl std::error::Error for RecordHandlerProgressError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
RecordHandlerProgressErrorKind::InvalidStateTransitionError(_inner) => Some(_inner),
RecordHandlerProgressErrorKind::OperationStatusCheckFailedError(_inner) => Some(_inner),
RecordHandlerProgressErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct RegisterTypeError {
pub kind: RegisterTypeErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum RegisterTypeErrorKind {
CFNRegistryError(crate::error::CFNRegistryError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for RegisterTypeError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
RegisterTypeErrorKind::CFNRegistryError(_inner) => _inner.fmt(f),
RegisterTypeErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for RegisterTypeError {
fn code(&self) -> Option<&str> {
RegisterTypeError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl RegisterTypeError {
pub fn new(kind: RegisterTypeErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: RegisterTypeErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: RegisterTypeErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_cfn_registry_error(&self) -> bool {
matches!(&self.kind, RegisterTypeErrorKind::CFNRegistryError(_))
}
}
impl std::error::Error for RegisterTypeError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
RegisterTypeErrorKind::CFNRegistryError(_inner) => Some(_inner),
RegisterTypeErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct SetStackPolicyError {
pub kind: SetStackPolicyErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum SetStackPolicyErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for SetStackPolicyError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
SetStackPolicyErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for SetStackPolicyError {
fn code(&self) -> Option<&str> {
SetStackPolicyError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl SetStackPolicyError {
pub fn new(kind: SetStackPolicyErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: SetStackPolicyErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: SetStackPolicyErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for SetStackPolicyError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
SetStackPolicyErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct SetTypeDefaultVersionError {
pub kind: SetTypeDefaultVersionErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum SetTypeDefaultVersionErrorKind {
CFNRegistryError(crate::error::CFNRegistryError),
TypeNotFoundError(crate::error::TypeNotFoundError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for SetTypeDefaultVersionError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
SetTypeDefaultVersionErrorKind::CFNRegistryError(_inner) => _inner.fmt(f),
SetTypeDefaultVersionErrorKind::TypeNotFoundError(_inner) => _inner.fmt(f),
SetTypeDefaultVersionErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for SetTypeDefaultVersionError {
fn code(&self) -> Option<&str> {
SetTypeDefaultVersionError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl SetTypeDefaultVersionError {
pub fn new(kind: SetTypeDefaultVersionErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: SetTypeDefaultVersionErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: SetTypeDefaultVersionErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_cfn_registry_error(&self) -> bool {
matches!(
&self.kind,
SetTypeDefaultVersionErrorKind::CFNRegistryError(_)
)
}
pub fn is_type_not_found_error(&self) -> bool {
matches!(
&self.kind,
SetTypeDefaultVersionErrorKind::TypeNotFoundError(_)
)
}
}
impl std::error::Error for SetTypeDefaultVersionError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
SetTypeDefaultVersionErrorKind::CFNRegistryError(_inner) => Some(_inner),
SetTypeDefaultVersionErrorKind::TypeNotFoundError(_inner) => Some(_inner),
SetTypeDefaultVersionErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct SignalResourceError {
pub kind: SignalResourceErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum SignalResourceErrorKind {
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for SignalResourceError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
SignalResourceErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for SignalResourceError {
fn code(&self) -> Option<&str> {
SignalResourceError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl SignalResourceError {
pub fn new(kind: SignalResourceErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: SignalResourceErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: SignalResourceErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for SignalResourceError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
SignalResourceErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct StopStackSetOperationError {
pub kind: StopStackSetOperationErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum StopStackSetOperationErrorKind {
InvalidOperationError(crate::error::InvalidOperationError),
OperationNotFoundError(crate::error::OperationNotFoundError),
StackSetNotFoundError(crate::error::StackSetNotFoundError),
    /// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code.
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for StopStackSetOperationError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
StopStackSetOperationErrorKind::InvalidOperationError(_inner) => _inner.fmt(f),
StopStackSetOperationErrorKind::OperationNotFoundError(_inner) => _inner.fmt(f),
StopStackSetOperationErrorKind::StackSetNotFoundError(_inner) => _inner.fmt(f),
StopStackSetOperationErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for StopStackSetOperationError {
fn code(&self) -> Option<&str> {
StopStackSetOperationError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl StopStackSetOperationError {
pub fn new(kind: StopStackSetOperationErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: StopStackSetOperationErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: StopStackSetOperationErrorKind::Unhandled(err.into()),
}
}
    // TODO: consider whether this should return `Option<Cow<'_, str>>`; that would let us fall back to the
    // `Display` implementation required by `std::error::Error` when the service did not supply a message.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_invalid_operation_error(&self) -> bool {
matches!(
&self.kind,
StopStackSetOperationErrorKind::InvalidOperationError(_)
)
}
pub fn is_operation_not_found_error(&self) -> bool {
matches!(
&self.kind,
StopStackSetOperationErrorKind::OperationNotFoundError(_)
)
}
pub fn is_stack_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
StopStackSetOperationErrorKind::StackSetNotFoundError(_)
)
}
}
impl std::error::Error for StopStackSetOperationError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
StopStackSetOperationErrorKind::InvalidOperationError(_inner) => Some(_inner),
StopStackSetOperationErrorKind::OperationNotFoundError(_inner) => Some(_inner),
StopStackSetOperationErrorKind::StackSetNotFoundError(_inner) => Some(_inner),
StopStackSetOperationErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
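// Sketch: printing the full cause chain of a `StopStackSetOperationError` by walking
// `std::error::Error::source()`, which the impl above wires to the matched inner variant.
// The logging destination and function name are assumptions made for the example.
fn print_stop_stack_set_operation_error_chain(err: &StopStackSetOperationError) {
    eprintln!("error: {}", err);
    let mut source = std::error::Error::source(err);
    while let Some(cause) = source {
        eprintln!("caused by: {}", cause);
        source = cause.source();
    }
}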
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct UpdateStackError {
pub kind: UpdateStackErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum UpdateStackErrorKind {
InsufficientCapabilitiesError(crate::error::InsufficientCapabilitiesError),
TokenAlreadyExistsError(crate::error::TokenAlreadyExistsError),
/// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for UpdateStackError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
UpdateStackErrorKind::InsufficientCapabilitiesError(_inner) => _inner.fmt(f),
UpdateStackErrorKind::TokenAlreadyExistsError(_inner) => _inner.fmt(f),
UpdateStackErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for UpdateStackError {
fn code(&self) -> Option<&str> {
UpdateStackError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl UpdateStackError {
pub fn new(kind: UpdateStackErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: UpdateStackErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: UpdateStackErrorKind::Unhandled(err.into()),
}
}
// Consider whether this should instead return `Option<Cow<'_, str>>`; that would let us fall back to
// the `Display` impl (required by `std::error::Error`) to generate a message when none is present.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_insufficient_capabilities_error(&self) -> bool {
matches!(
&self.kind,
UpdateStackErrorKind::InsufficientCapabilitiesError(_)
)
}
pub fn is_token_already_exists_error(&self) -> bool {
matches!(&self.kind, UpdateStackErrorKind::TokenAlreadyExistsError(_))
}
}
impl std::error::Error for UpdateStackError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
UpdateStackErrorKind::InsufficientCapabilitiesError(_inner) => Some(_inner),
UpdateStackErrorKind::TokenAlreadyExistsError(_inner) => Some(_inner),
UpdateStackErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
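// Illustrative sketch (not part of the generated API): constructing an `UpdateStackError` for a
// failure that does not map to a modeled variant, such as a transport error surfaced while reading
// the response. The use of `std::io::Error` here is an assumption chosen for demonstration.
#[allow(dead_code)]
fn example_unhandled_update_stack_error() -> UpdateStackError {
    let io_err = std::io::Error::new(std::io::ErrorKind::Other, "connection reset");
    // `unhandled` accepts anything convertible into a boxed `std::error::Error + Send + Sync`.
    UpdateStackError::unhandled(io_err)
}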
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct UpdateStackInstancesError {
pub kind: UpdateStackInstancesErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum UpdateStackInstancesErrorKind {
InvalidOperationError(crate::error::InvalidOperationError),
OperationIdAlreadyExistsError(crate::error::OperationIdAlreadyExistsError),
OperationInProgressError(crate::error::OperationInProgressError),
StackInstanceNotFoundError(crate::error::StackInstanceNotFoundError),
StackSetNotFoundError(crate::error::StackSetNotFoundError),
StaleRequestError(crate::error::StaleRequestError),
/// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for UpdateStackInstancesError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
UpdateStackInstancesErrorKind::InvalidOperationError(_inner) => _inner.fmt(f),
UpdateStackInstancesErrorKind::OperationIdAlreadyExistsError(_inner) => _inner.fmt(f),
UpdateStackInstancesErrorKind::OperationInProgressError(_inner) => _inner.fmt(f),
UpdateStackInstancesErrorKind::StackInstanceNotFoundError(_inner) => _inner.fmt(f),
UpdateStackInstancesErrorKind::StackSetNotFoundError(_inner) => _inner.fmt(f),
UpdateStackInstancesErrorKind::StaleRequestError(_inner) => _inner.fmt(f),
UpdateStackInstancesErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for UpdateStackInstancesError {
fn code(&self) -> Option<&str> {
UpdateStackInstancesError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl UpdateStackInstancesError {
pub fn new(kind: UpdateStackInstancesErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: UpdateStackInstancesErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: UpdateStackInstancesErrorKind::Unhandled(err.into()),
}
}
// Consider whether this should instead return `Option<Cow<'_, str>>`; that would let us fall back to
// the `Display` impl (required by `std::error::Error`) to generate a message when none is present.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_invalid_operation_error(&self) -> bool {
matches!(
&self.kind,
UpdateStackInstancesErrorKind::InvalidOperationError(_)
)
}
pub fn is_operation_id_already_exists_error(&self) -> bool {
matches!(
&self.kind,
UpdateStackInstancesErrorKind::OperationIdAlreadyExistsError(_)
)
}
pub fn is_operation_in_progress_error(&self) -> bool {
matches!(
&self.kind,
UpdateStackInstancesErrorKind::OperationInProgressError(_)
)
}
pub fn is_stack_instance_not_found_error(&self) -> bool {
matches!(
&self.kind,
UpdateStackInstancesErrorKind::StackInstanceNotFoundError(_)
)
}
pub fn is_stack_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
UpdateStackInstancesErrorKind::StackSetNotFoundError(_)
)
}
pub fn is_stale_request_error(&self) -> bool {
matches!(
&self.kind,
UpdateStackInstancesErrorKind::StaleRequestError(_)
)
}
}
impl std::error::Error for UpdateStackInstancesError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
UpdateStackInstancesErrorKind::InvalidOperationError(_inner) => Some(_inner),
UpdateStackInstancesErrorKind::OperationIdAlreadyExistsError(_inner) => Some(_inner),
UpdateStackInstancesErrorKind::OperationInProgressError(_inner) => Some(_inner),
UpdateStackInstancesErrorKind::StackInstanceNotFoundError(_inner) => Some(_inner),
UpdateStackInstancesErrorKind::StackSetNotFoundError(_inner) => Some(_inner),
UpdateStackInstancesErrorKind::StaleRequestError(_inner) => Some(_inner),
UpdateStackInstancesErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
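// Illustrative sketch (not part of the generated API): matching on the public `kind` field directly
// instead of the boolean helpers, e.g. to map each modeled variant to a retry decision. The decision
// strings are assumptions for demonstration only.
#[allow(dead_code)]
fn example_classify_update_stack_instances_error(err: &UpdateStackInstancesError) -> &'static str {
    match &err.kind {
        UpdateStackInstancesErrorKind::OperationInProgressError(_) => "retry once the running operation finishes",
        UpdateStackInstancesErrorKind::StaleRequestError(_) => "refresh local state and retry",
        UpdateStackInstancesErrorKind::StackSetNotFoundError(_)
        | UpdateStackInstancesErrorKind::StackInstanceNotFoundError(_) => "nothing to update",
        _ => "treat as unrecoverable",
    }
}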
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct UpdateStackSetError {
pub kind: UpdateStackSetErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum UpdateStackSetErrorKind {
InvalidOperationError(crate::error::InvalidOperationError),
OperationIdAlreadyExistsError(crate::error::OperationIdAlreadyExistsError),
OperationInProgressError(crate::error::OperationInProgressError),
StackInstanceNotFoundError(crate::error::StackInstanceNotFoundError),
StackSetNotFoundError(crate::error::StackSetNotFoundError),
StaleRequestError(crate::error::StaleRequestError),
/// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for UpdateStackSetError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
UpdateStackSetErrorKind::InvalidOperationError(_inner) => _inner.fmt(f),
UpdateStackSetErrorKind::OperationIdAlreadyExistsError(_inner) => _inner.fmt(f),
UpdateStackSetErrorKind::OperationInProgressError(_inner) => _inner.fmt(f),
UpdateStackSetErrorKind::StackInstanceNotFoundError(_inner) => _inner.fmt(f),
UpdateStackSetErrorKind::StackSetNotFoundError(_inner) => _inner.fmt(f),
UpdateStackSetErrorKind::StaleRequestError(_inner) => _inner.fmt(f),
UpdateStackSetErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for UpdateStackSetError {
fn code(&self) -> Option<&str> {
UpdateStackSetError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl UpdateStackSetError {
pub fn new(kind: UpdateStackSetErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: UpdateStackSetErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: UpdateStackSetErrorKind::Unhandled(err.into()),
}
}
// Consider whether this should instead return `Option<Cow<'_, str>>`; that would let us fall back to
// the `Display` impl (required by `std::error::Error`) to generate a message when none is present.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
pub fn is_invalid_operation_error(&self) -> bool {
matches!(
&self.kind,
UpdateStackSetErrorKind::InvalidOperationError(_)
)
}
pub fn is_operation_id_already_exists_error(&self) -> bool {
matches!(
&self.kind,
UpdateStackSetErrorKind::OperationIdAlreadyExistsError(_)
)
}
pub fn is_operation_in_progress_error(&self) -> bool {
matches!(
&self.kind,
UpdateStackSetErrorKind::OperationInProgressError(_)
)
}
pub fn is_stack_instance_not_found_error(&self) -> bool {
matches!(
&self.kind,
UpdateStackSetErrorKind::StackInstanceNotFoundError(_)
)
}
pub fn is_stack_set_not_found_error(&self) -> bool {
matches!(
&self.kind,
UpdateStackSetErrorKind::StackSetNotFoundError(_)
)
}
pub fn is_stale_request_error(&self) -> bool {
matches!(&self.kind, UpdateStackSetErrorKind::StaleRequestError(_))
}
}
impl std::error::Error for UpdateStackSetError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
UpdateStackSetErrorKind::InvalidOperationError(_inner) => Some(_inner),
UpdateStackSetErrorKind::OperationIdAlreadyExistsError(_inner) => Some(_inner),
UpdateStackSetErrorKind::OperationInProgressError(_inner) => Some(_inner),
UpdateStackSetErrorKind::StackInstanceNotFoundError(_inner) => Some(_inner),
UpdateStackSetErrorKind::StackSetNotFoundError(_inner) => Some(_inner),
UpdateStackSetErrorKind::StaleRequestError(_inner) => Some(_inner),
UpdateStackSetErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
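// Illustrative sketch (not part of the generated API): walking the cause chain exposed by the
// `std::error::Error` impl above, e.g. when logging an `UpdateStackSetError`. The logging target
// (`eprintln!`) is an assumption for demonstration only.
#[allow(dead_code)]
fn example_log_update_stack_set_error_chain(err: &UpdateStackSetError) {
    // Top-level description first, then each nested cause returned by `source()`.
    eprintln!("update stack set failed: {}", err);
    let mut source = std::error::Error::source(err);
    while let Some(cause) = source {
        eprintln!("  caused by: {}", cause);
        source = cause.source();
    }
}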
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct UpdateTerminationProtectionError {
pub kind: UpdateTerminationProtectionErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum UpdateTerminationProtectionErrorKind {
/// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for UpdateTerminationProtectionError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
UpdateTerminationProtectionErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for UpdateTerminationProtectionError {
fn code(&self) -> Option<&str> {
UpdateTerminationProtectionError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl UpdateTerminationProtectionError {
pub fn new(kind: UpdateTerminationProtectionErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: UpdateTerminationProtectionErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: UpdateTerminationProtectionErrorKind::Unhandled(err.into()),
}
}
// Consider whether this should instead return `Option<Cow<'_, str>>`; that would let us fall back to
// the `Display` impl (required by `std::error::Error`) to generate a message when none is present.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for UpdateTerminationProtectionError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
UpdateTerminationProtectionErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
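// Illustrative sketch (not part of the generated API): even though the only modeled kind here is
// `Unhandled`, the generic `meta` still carries the service error code, request id, and message,
// which is usually enough to report the failure. The report format below is an assumption.
#[allow(dead_code)]
fn example_report_update_termination_protection_error(err: &UpdateTerminationProtectionError) -> String {
    format!(
        "UpdateTerminationProtection failed: code={:?}, request_id={:?}, message={:?}",
        err.code(),
        err.request_id(),
        err.message()
    )
}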
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub struct ValidateTemplateError {
pub kind: ValidateTemplateErrorKind,
pub(crate) meta: smithy_types::Error,
}
#[non_exhaustive]
#[derive(std::fmt::Debug)]
pub enum ValidateTemplateErrorKind {
/// An unexpected error, e.g. invalid JSON returned by the service or an unknown error code
Unhandled(Box<dyn std::error::Error + Send + Sync + 'static>),
}
impl std::fmt::Display for ValidateTemplateError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match &self.kind {
ValidateTemplateErrorKind::Unhandled(_inner) => _inner.fmt(f),
}
}
}
impl smithy_types::retry::ProvideErrorKind for ValidateTemplateError {
fn code(&self) -> Option<&str> {
ValidateTemplateError::code(self)
}
fn retryable_error_kind(&self) -> Option<smithy_types::retry::ErrorKind> {
None
}
}
impl ValidateTemplateError {
pub fn new(kind: ValidateTemplateErrorKind, meta: smithy_types::Error) -> Self {
Self { kind, meta }
}
pub fn unhandled(err: impl Into<Box<dyn std::error::Error + Send + Sync + 'static>>) -> Self {
Self {
kind: ValidateTemplateErrorKind::Unhandled(err.into()),
meta: Default::default(),
}
}
pub fn generic(err: smithy_types::Error) -> Self {
Self {
meta: err.clone(),
kind: ValidateTemplateErrorKind::Unhandled(err.into()),
}
}
// Consider whether this should instead return `Option<Cow<'_, str>>`; that would let us fall back to
// the `Display` impl (required by `std::error::Error`) to generate a message when none is present.
pub fn message(&self) -> Option<&str> {
self.meta.message()
}
pub fn meta(&self) -> &smithy_types::Error {
&self.meta
}
pub fn request_id(&self) -> Option<&str> {
self.meta.request_id()
}
pub fn code(&self) -> Option<&str> {
self.meta.code()
}
}
impl std::error::Error for ValidateTemplateError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self.kind {
ValidateTemplateErrorKind::Unhandled(_inner) => Some(_inner.as_ref()),
}
}
}
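// Illustrative sketch (not part of the generated API): how retry middleware consults the
// `ProvideErrorKind` impl above. For `ValidateTemplateError` no modeled retry kind is reported,
// so the hint is always `None`; shown only to make the trait's role concrete.
#[allow(dead_code)]
fn example_validate_template_retry_hint(err: &ValidateTemplateError) -> Option<smithy_types::retry::ErrorKind> {
    use smithy_types::retry::ProvideErrorKind;
    err.retryable_error_kind()
}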
/// <p>Another operation has been performed on this stack set since the specified operation
/// was performed. </p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct StaleRequestError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for StaleRequestError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("StaleRequestError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl StaleRequestError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for StaleRequestError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "StaleRequestError [StaleRequestException]")?;
if let Some(inner_1) = &self.message {
write!(f, ": {}", inner_1)?;
}
Ok(())
}
}
impl std::error::Error for StaleRequestError {}
/// See [`StaleRequestError`](crate::error::StaleRequestError)
pub mod stale_request_error {
/// A builder for [`StaleRequestError`](crate::error::StaleRequestError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`StaleRequestError`](crate::error::StaleRequestError)
pub fn build(self) -> crate::error::StaleRequestError {
crate::error::StaleRequestError {
message: self.message,
}
}
}
}
impl StaleRequestError {
/// Creates a new builder-style object to manufacture [`StaleRequestError`](crate::error::StaleRequestError)
pub fn builder() -> crate::error::stale_request_error::Builder {
crate::error::stale_request_error::Builder::default()
}
}
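// Illustrative sketch (not part of the generated API): constructing a modeled `StaleRequestError`
// through its builder, e.g. to simulate a service response in a unit test. The message text is an
// arbitrary assumption.
#[allow(dead_code)]
fn example_build_stale_request_error() -> StaleRequestError {
    StaleRequestError::builder()
        .message("the targeted stack set operation was superseded by a newer request")
        .build()
}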
/// <p>The specified stack set doesn't exist.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct StackSetNotFoundError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for StackSetNotFoundError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("StackSetNotFoundError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl StackSetNotFoundError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for StackSetNotFoundError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "StackSetNotFoundError [StackSetNotFoundException]")?;
if let Some(inner_2) = &self.message {
write!(f, ": {}", inner_2)?;
}
Ok(())
}
}
impl std::error::Error for StackSetNotFoundError {}
/// See [`StackSetNotFoundError`](crate::error::StackSetNotFoundError)
pub mod stack_set_not_found_error {
/// A builder for [`StackSetNotFoundError`](crate::error::StackSetNotFoundError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`StackSetNotFoundError`](crate::error::StackSetNotFoundError)
pub fn build(self) -> crate::error::StackSetNotFoundError {
crate::error::StackSetNotFoundError {
message: self.message,
}
}
}
}
impl StackSetNotFoundError {
/// Creates a new builder-style object to manufacture [`StackSetNotFoundError`](crate::error::StackSetNotFoundError)
pub fn builder() -> crate::error::stack_set_not_found_error::Builder {
crate::error::stack_set_not_found_error::Builder::default()
}
}
/// <p>The specified stack instance doesn't exist.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct StackInstanceNotFoundError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for StackInstanceNotFoundError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("StackInstanceNotFoundError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl StackInstanceNotFoundError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for StackInstanceNotFoundError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"StackInstanceNotFoundError [StackInstanceNotFoundException]"
)?;
if let Some(inner_3) = &self.message {
write!(f, ": {}", inner_3)?;
}
Ok(())
}
}
impl std::error::Error for StackInstanceNotFoundError {}
/// See [`StackInstanceNotFoundError`](crate::error::StackInstanceNotFoundError)
pub mod stack_instance_not_found_error {
/// A builder for [`StackInstanceNotFoundError`](crate::error::StackInstanceNotFoundError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`StackInstanceNotFoundError`](crate::error::StackInstanceNotFoundError)
pub fn build(self) -> crate::error::StackInstanceNotFoundError {
crate::error::StackInstanceNotFoundError {
message: self.message,
}
}
}
}
impl StackInstanceNotFoundError {
/// Creates a new builder-style object to manufacture [`StackInstanceNotFoundError`](crate::error::StackInstanceNotFoundError)
pub fn builder() -> crate::error::stack_instance_not_found_error::Builder {
crate::error::stack_instance_not_found_error::Builder::default()
}
}
/// <p>Another operation is currently in progress for this stack set. Only one operation can
/// be performed for a stack set at a given time.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct OperationInProgressError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for OperationInProgressError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("OperationInProgressError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl OperationInProgressError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for OperationInProgressError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "OperationInProgressError [OperationInProgressException]")?;
if let Some(inner_4) = &self.message {
write!(f, ": {}", inner_4)?;
}
Ok(())
}
}
impl std::error::Error for OperationInProgressError {}
/// See [`OperationInProgressError`](crate::error::OperationInProgressError)
pub mod operation_in_progress_error {
/// A builder for [`OperationInProgressError`](crate::error::OperationInProgressError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`OperationInProgressError`](crate::error::OperationInProgressError)
pub fn build(self) -> crate::error::OperationInProgressError {
crate::error::OperationInProgressError {
message: self.message,
}
}
}
}
impl OperationInProgressError {
/// Creates a new builder-style object to manufacture [`OperationInProgressError`](crate::error::OperationInProgressError)
pub fn builder() -> crate::error::operation_in_progress_error::Builder {
crate::error::operation_in_progress_error::Builder::default()
}
}
/// <p>The specified operation ID already exists.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct OperationIdAlreadyExistsError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for OperationIdAlreadyExistsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("OperationIdAlreadyExistsError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl OperationIdAlreadyExistsError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for OperationIdAlreadyExistsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"OperationIdAlreadyExistsError [OperationIdAlreadyExistsException]"
)?;
if let Some(inner_5) = &self.message {
write!(f, ": {}", inner_5)?;
}
Ok(())
}
}
impl std::error::Error for OperationIdAlreadyExistsError {}
/// See [`OperationIdAlreadyExistsError`](crate::error::OperationIdAlreadyExistsError)
pub mod operation_id_already_exists_error {
/// A builder for [`OperationIdAlreadyExistsError`](crate::error::OperationIdAlreadyExistsError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`OperationIdAlreadyExistsError`](crate::error::OperationIdAlreadyExistsError)
pub fn build(self) -> crate::error::OperationIdAlreadyExistsError {
crate::error::OperationIdAlreadyExistsError {
message: self.message,
}
}
}
}
impl OperationIdAlreadyExistsError {
/// Creates a new builder-style object to manufacture [`OperationIdAlreadyExistsError`](crate::error::OperationIdAlreadyExistsError)
pub fn builder() -> crate::error::operation_id_already_exists_error::Builder {
crate::error::operation_id_already_exists_error::Builder::default()
}
}
/// <p>The specified operation isn't valid.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct InvalidOperationError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for InvalidOperationError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("InvalidOperationError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl InvalidOperationError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for InvalidOperationError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "InvalidOperationError [InvalidOperationException]")?;
if let Some(inner_6) = &self.message {
write!(f, ": {}", inner_6)?;
}
Ok(())
}
}
impl std::error::Error for InvalidOperationError {}
/// See [`InvalidOperationError`](crate::error::InvalidOperationError)
pub mod invalid_operation_error {
/// A builder for [`InvalidOperationError`](crate::error::InvalidOperationError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`InvalidOperationError`](crate::error::InvalidOperationError)
pub fn build(self) -> crate::error::InvalidOperationError {
crate::error::InvalidOperationError {
message: self.message,
}
}
}
}
impl InvalidOperationError {
/// Creates a new builder-style object to manufacture [`InvalidOperationError`](crate::error::InvalidOperationError)
pub fn builder() -> crate::error::invalid_operation_error::Builder {
crate::error::invalid_operation_error::Builder::default()
}
}
/// <p>A client request token already exists.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct TokenAlreadyExistsError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for TokenAlreadyExistsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("TokenAlreadyExistsError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl TokenAlreadyExistsError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for TokenAlreadyExistsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "TokenAlreadyExistsError [TokenAlreadyExistsException]")?;
if let Some(inner_7) = &self.message {
write!(f, ": {}", inner_7)?;
}
Ok(())
}
}
impl std::error::Error for TokenAlreadyExistsError {}
/// See [`TokenAlreadyExistsError`](crate::error::TokenAlreadyExistsError)
pub mod token_already_exists_error {
/// A builder for [`TokenAlreadyExistsError`](crate::error::TokenAlreadyExistsError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`TokenAlreadyExistsError`](crate::error::TokenAlreadyExistsError)
pub fn build(self) -> crate::error::TokenAlreadyExistsError {
crate::error::TokenAlreadyExistsError {
message: self.message,
}
}
}
}
impl TokenAlreadyExistsError {
/// Creates a new builder-style object to manufacture [`TokenAlreadyExistsError`](crate::error::TokenAlreadyExistsError)
pub fn builder() -> crate::error::token_already_exists_error::Builder {
crate::error::token_already_exists_error::Builder::default()
}
}
/// <p>The template contains resources with capabilities that weren't specified in the
/// Capabilities parameter.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct InsufficientCapabilitiesError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for InsufficientCapabilitiesError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("InsufficientCapabilitiesError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl InsufficientCapabilitiesError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for InsufficientCapabilitiesError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"InsufficientCapabilitiesError [InsufficientCapabilitiesException]"
)?;
if let Some(inner_8) = &self.message {
write!(f, ": {}", inner_8)?;
}
Ok(())
}
}
impl std::error::Error for InsufficientCapabilitiesError {}
/// See [`InsufficientCapabilitiesError`](crate::error::InsufficientCapabilitiesError)
pub mod insufficient_capabilities_error {
/// A builder for [`InsufficientCapabilitiesError`](crate::error::InsufficientCapabilitiesError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`InsufficientCapabilitiesError`](crate::error::InsufficientCapabilitiesError)
pub fn build(self) -> crate::error::InsufficientCapabilitiesError {
crate::error::InsufficientCapabilitiesError {
message: self.message,
}
}
}
}
impl InsufficientCapabilitiesError {
/// Creates a new builder-style object to manufacture [`InsufficientCapabilitiesError`](crate::error::InsufficientCapabilitiesError)
pub fn builder() -> crate::error::insufficient_capabilities_error::Builder {
crate::error::insufficient_capabilities_error::Builder::default()
}
}
/// <p>The specified ID refers to an operation that doesn't exist.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct OperationNotFoundError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for OperationNotFoundError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("OperationNotFoundError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl OperationNotFoundError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for OperationNotFoundError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "OperationNotFoundError [OperationNotFoundException]")?;
if let Some(inner_9) = &self.message {
write!(f, ": {}", inner_9)?;
}
Ok(())
}
}
impl std::error::Error for OperationNotFoundError {}
/// See [`OperationNotFoundError`](crate::error::OperationNotFoundError)
pub mod operation_not_found_error {
/// A builder for [`OperationNotFoundError`](crate::error::OperationNotFoundError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`OperationNotFoundError`](crate::error::OperationNotFoundError)
pub fn build(self) -> crate::error::OperationNotFoundError {
crate::error::OperationNotFoundError {
message: self.message,
}
}
}
}
impl OperationNotFoundError {
/// Creates a new builder-style object to manufacture [`OperationNotFoundError`](crate::error::OperationNotFoundError)
pub fn builder() -> crate::error::operation_not_found_error::Builder {
crate::error::operation_not_found_error::Builder::default()
}
}
/// <p>The specified type does not exist in the CloudFormation registry.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct TypeNotFoundError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for TypeNotFoundError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("TypeNotFoundError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl TypeNotFoundError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for TypeNotFoundError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "TypeNotFoundError [TypeNotFoundException]")?;
if let Some(inner_10) = &self.message {
write!(f, ": {}", inner_10)?;
}
Ok(())
}
}
impl std::error::Error for TypeNotFoundError {}
/// See [`TypeNotFoundError`](crate::error::TypeNotFoundError)
pub mod type_not_found_error {
/// A builder for [`TypeNotFoundError`](crate::error::TypeNotFoundError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`TypeNotFoundError`](crate::error::TypeNotFoundError)
pub fn build(self) -> crate::error::TypeNotFoundError {
crate::error::TypeNotFoundError {
message: self.message,
}
}
}
}
impl TypeNotFoundError {
/// Creates a new builder-style object to manufacture [`TypeNotFoundError`](crate::error::TypeNotFoundError)
pub fn builder() -> crate::error::type_not_found_error::Builder {
crate::error::type_not_found_error::Builder::default()
}
}
/// <p>An error occurred during a CloudFormation registry operation.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct CFNRegistryError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for CFNRegistryError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("CFNRegistryError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl CFNRegistryError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for CFNRegistryError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "CFNRegistryError [CFNRegistryException]")?;
if let Some(inner_11) = &self.message {
write!(f, ": {}", inner_11)?;
}
Ok(())
}
}
impl std::error::Error for CFNRegistryError {}
/// See [`CFNRegistryError`](crate::error::CFNRegistryError)
pub mod cfn_registry_error {
/// A builder for [`CFNRegistryError`](crate::error::CFNRegistryError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`CFNRegistryError`](crate::error::CFNRegistryError)
pub fn build(self) -> crate::error::CFNRegistryError {
crate::error::CFNRegistryError {
message: self.message,
}
}
}
}
impl CFNRegistryError {
/// Creates a new builder-style object to manufacture [`CFNRegistryError`](crate::error::CFNRegistryError)
pub fn builder() -> crate::error::cfn_registry_error::Builder {
crate::error::cfn_registry_error::Builder::default()
}
}
/// <p>Error reserved for use by the <a href="https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/what-is-cloudformation-cli.html">CloudFormation CLI</a>. CloudFormation does not return this error to users.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct OperationStatusCheckFailedError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for OperationStatusCheckFailedError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("OperationStatusCheckFailedError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl OperationStatusCheckFailedError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for OperationStatusCheckFailedError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"OperationStatusCheckFailedError [OperationStatusCheckFailedException]"
)?;
if let Some(inner_12) = &self.message {
write!(f, ": {}", inner_12)?;
}
Ok(())
}
}
impl std::error::Error for OperationStatusCheckFailedError {}
/// See [`OperationStatusCheckFailedError`](crate::error::OperationStatusCheckFailedError)
pub mod operation_status_check_failed_error {
/// A builder for [`OperationStatusCheckFailedError`](crate::error::OperationStatusCheckFailedError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`OperationStatusCheckFailedError`](crate::error::OperationStatusCheckFailedError)
pub fn build(self) -> crate::error::OperationStatusCheckFailedError {
crate::error::OperationStatusCheckFailedError {
message: self.message,
}
}
}
}
impl OperationStatusCheckFailedError {
/// Creates a new builder-style object to manufacture [`OperationStatusCheckFailedError`](crate::error::OperationStatusCheckFailedError)
pub fn builder() -> crate::error::operation_status_check_failed_error::Builder {
crate::error::operation_status_check_failed_error::Builder::default()
}
}
/// <p>Error reserved for use by the <a href="https://docs.aws.amazon.com/cloudformation-cli/latest/userguide/what-is-cloudformation-cli.html">CloudFormation CLI</a>. CloudFormation does not return this error to users.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct InvalidStateTransitionError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for InvalidStateTransitionError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("InvalidStateTransitionError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl InvalidStateTransitionError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for InvalidStateTransitionError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"InvalidStateTransitionError [InvalidStateTransitionException]"
)?;
if let Some(inner_13) = &self.message {
write!(f, ": {}", inner_13)?;
}
Ok(())
}
}
impl std::error::Error for InvalidStateTransitionError {}
/// See [`InvalidStateTransitionError`](crate::error::InvalidStateTransitionError)
pub mod invalid_state_transition_error {
/// A builder for [`InvalidStateTransitionError`](crate::error::InvalidStateTransitionError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`InvalidStateTransitionError`](crate::error::InvalidStateTransitionError)
pub fn build(self) -> crate::error::InvalidStateTransitionError {
crate::error::InvalidStateTransitionError {
message: self.message,
}
}
}
}
impl InvalidStateTransitionError {
/// Creates a new builder-style object to manufacture [`InvalidStateTransitionError`](crate::error::InvalidStateTransitionError)
pub fn builder() -> crate::error::invalid_state_transition_error::Builder {
crate::error::invalid_state_transition_error::Builder::default()
}
}
/// <p>The specified change set name or ID doesn't exist. To view valid change sets for a
/// stack, use the <code>ListChangeSets</code> action.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct ChangeSetNotFoundError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for ChangeSetNotFoundError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("ChangeSetNotFoundError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl ChangeSetNotFoundError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for ChangeSetNotFoundError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "ChangeSetNotFoundError [ChangeSetNotFoundException]")?;
if let Some(inner_14) = &self.message {
write!(f, ": {}", inner_14)?;
}
Ok(())
}
}
impl std::error::Error for ChangeSetNotFoundError {}
/// See [`ChangeSetNotFoundError`](crate::error::ChangeSetNotFoundError)
pub mod change_set_not_found_error {
/// A builder for [`ChangeSetNotFoundError`](crate::error::ChangeSetNotFoundError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`ChangeSetNotFoundError`](crate::error::ChangeSetNotFoundError)
pub fn build(self) -> crate::error::ChangeSetNotFoundError {
crate::error::ChangeSetNotFoundError {
message: self.message,
}
}
}
}
impl ChangeSetNotFoundError {
/// Creates a new builder-style object to manufacture [`ChangeSetNotFoundError`](crate::error::ChangeSetNotFoundError)
pub fn builder() -> crate::error::change_set_not_found_error::Builder {
crate::error::change_set_not_found_error::Builder::default()
}
}
/// <p>The specified change set can't be used to update the stack. For example, the change
/// set status might be <code>CREATE_IN_PROGRESS</code>, or the stack status might be
/// <code>UPDATE_IN_PROGRESS</code>.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct InvalidChangeSetStatusError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for InvalidChangeSetStatusError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("InvalidChangeSetStatusError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl InvalidChangeSetStatusError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for InvalidChangeSetStatusError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"InvalidChangeSetStatusError [InvalidChangeSetStatusException]"
)?;
if let Some(inner_15) = &self.message {
write!(f, ": {}", inner_15)?;
}
Ok(())
}
}
impl std::error::Error for InvalidChangeSetStatusError {}
/// See [`InvalidChangeSetStatusError`](crate::error::InvalidChangeSetStatusError)
pub mod invalid_change_set_status_error {
/// A builder for [`InvalidChangeSetStatusError`](crate::error::InvalidChangeSetStatusError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`InvalidChangeSetStatusError`](crate::error::InvalidChangeSetStatusError)
pub fn build(self) -> crate::error::InvalidChangeSetStatusError {
crate::error::InvalidChangeSetStatusError {
message: self.message,
}
}
}
}
impl InvalidChangeSetStatusError {
/// Creates a new builder-style object to manufacture [`InvalidChangeSetStatusError`](crate::error::InvalidChangeSetStatusError)
pub fn builder() -> crate::error::invalid_change_set_status_error::Builder {
crate::error::invalid_change_set_status_error::Builder::default()
}
}
/// <p>You can't yet delete this stack set, because it still contains one or more stack
/// instances. Delete all stack instances from the stack set before deleting the stack
/// set.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct StackSetNotEmptyError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for StackSetNotEmptyError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("StackSetNotEmptyError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl StackSetNotEmptyError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for StackSetNotEmptyError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "StackSetNotEmptyError [StackSetNotEmptyException]")?;
if let Some(inner_16) = &self.message {
write!(f, ": {}", inner_16)?;
}
Ok(())
}
}
impl std::error::Error for StackSetNotEmptyError {}
/// See [`StackSetNotEmptyError`](crate::error::StackSetNotEmptyError)
pub mod stack_set_not_empty_error {
/// A builder for [`StackSetNotEmptyError`](crate::error::StackSetNotEmptyError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`StackSetNotEmptyError`](crate::error::StackSetNotEmptyError)
pub fn build(self) -> crate::error::StackSetNotEmptyError {
crate::error::StackSetNotEmptyError {
message: self.message,
}
}
}
}
impl StackSetNotEmptyError {
/// Creates a new builder-style object to manufacture [`StackSetNotEmptyError`](crate::error::StackSetNotEmptyError)
pub fn builder() -> crate::error::stack_set_not_empty_error::Builder {
crate::error::stack_set_not_empty_error::Builder::default()
}
}
/// <p>The specified name is already in use.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct NameAlreadyExistsError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for NameAlreadyExistsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("NameAlreadyExistsError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl NameAlreadyExistsError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for NameAlreadyExistsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "NameAlreadyExistsError [NameAlreadyExistsException]")?;
if let Some(inner_17) = &self.message {
write!(f, ": {}", inner_17)?;
}
Ok(())
}
}
impl std::error::Error for NameAlreadyExistsError {}
/// See [`NameAlreadyExistsError`](crate::error::NameAlreadyExistsError)
pub mod name_already_exists_error {
/// A builder for [`NameAlreadyExistsError`](crate::error::NameAlreadyExistsError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`NameAlreadyExistsError`](crate::error::NameAlreadyExistsError)
pub fn build(self) -> crate::error::NameAlreadyExistsError {
crate::error::NameAlreadyExistsError {
message: self.message,
}
}
}
}
impl NameAlreadyExistsError {
/// Creates a new builder-style object to manufacture [`NameAlreadyExistsError`](crate::error::NameAlreadyExistsError)
pub fn builder() -> crate::error::name_already_exists_error::Builder {
crate::error::name_already_exists_error::Builder::default()
}
}
/// <p>The quota for the resource has already been reached.</p>
/// <p>For information on resource and stack limitations, see <a href="https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cloudformation-limits.html">Limits</a> in
/// the <i>AWS CloudFormation User Guide</i>.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct LimitExceededError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for LimitExceededError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("LimitExceededError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl LimitExceededError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for LimitExceededError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "LimitExceededError [LimitExceededException]")?;
if let Some(inner_18) = &self.message {
write!(f, ": {}", inner_18)?;
}
Ok(())
}
}
impl std::error::Error for LimitExceededError {}
/// See [`LimitExceededError`](crate::error::LimitExceededError)
pub mod limit_exceeded_error {
/// A builder for [`LimitExceededError`](crate::error::LimitExceededError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`LimitExceededError`](crate::error::LimitExceededError)
pub fn build(self) -> crate::error::LimitExceededError {
crate::error::LimitExceededError {
message: self.message,
}
}
}
}
impl LimitExceededError {
/// Creates a new builder-style object to manufacture [`LimitExceededError`](crate::error::LimitExceededError)
pub fn builder() -> crate::error::limit_exceeded_error::Builder {
crate::error::limit_exceeded_error::Builder::default()
}
}
/// <p>The specified resource exists, but has been changed.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct CreatedButModifiedError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for CreatedButModifiedError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("CreatedButModifiedError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl CreatedButModifiedError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for CreatedButModifiedError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "CreatedButModifiedError [CreatedButModifiedException]")?;
if let Some(inner_19) = &self.message {
write!(f, ": {}", inner_19)?;
}
Ok(())
}
}
impl std::error::Error for CreatedButModifiedError {}
/// See [`CreatedButModifiedError`](crate::error::CreatedButModifiedError)
pub mod created_but_modified_error {
/// A builder for [`CreatedButModifiedError`](crate::error::CreatedButModifiedError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`CreatedButModifiedError`](crate::error::CreatedButModifiedError)
pub fn build(self) -> crate::error::CreatedButModifiedError {
crate::error::CreatedButModifiedError {
message: self.message,
}
}
}
}
impl CreatedButModifiedError {
/// Creates a new builder-style object to manufacture [`CreatedButModifiedError`](crate::error::CreatedButModifiedError)
pub fn builder() -> crate::error::created_but_modified_error::Builder {
crate::error::created_but_modified_error::Builder::default()
}
}
/// <p>The resource with the name requested already exists.</p>
#[non_exhaustive]
#[derive(std::clone::Clone, std::cmp::PartialEq)]
pub struct AlreadyExistsError {
pub message: std::option::Option<std::string::String>,
}
impl std::fmt::Debug for AlreadyExistsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let mut formatter = f.debug_struct("AlreadyExistsError");
formatter.field("message", &self.message);
formatter.finish()
}
}
impl AlreadyExistsError {
pub fn message(&self) -> Option<&str> {
self.message.as_deref()
}
}
impl std::fmt::Display for AlreadyExistsError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "AlreadyExistsError [AlreadyExistsException]")?;
if let Some(inner_20) = &self.message {
write!(f, ": {}", inner_20)?;
}
Ok(())
}
}
impl std::error::Error for AlreadyExistsError {}
/// See [`AlreadyExistsError`](crate::error::AlreadyExistsError)
pub mod already_exists_error {
/// A builder for [`AlreadyExistsError`](crate::error::AlreadyExistsError)
#[non_exhaustive]
#[derive(std::default::Default, std::clone::Clone, std::cmp::PartialEq, std::fmt::Debug)]
pub struct Builder {
pub(crate) message: std::option::Option<std::string::String>,
}
impl Builder {
pub fn message(mut self, input: impl Into<std::string::String>) -> Self {
self.message = Some(input.into());
self
}
pub fn set_message(mut self, input: std::option::Option<std::string::String>) -> Self {
self.message = input;
self
}
/// Consumes the builder and constructs a [`AlreadyExistsError`](crate::error::AlreadyExistsError)
pub fn build(self) -> crate::error::AlreadyExistsError {
crate::error::AlreadyExistsError {
message: self.message,
}
}
}
}
impl AlreadyExistsError {
/// Creates a new builder-style object to manufacture [`AlreadyExistsError`](crate::error::AlreadyExistsError)
pub fn builder() -> crate::error::already_exists_error::Builder {
crate::error::already_exists_error::Builder::default()
}
}
| 34.91282 | 222 | 0.63603 |
0a27c566c761745accf9c6b40d536f5c3685ac87 | 15,965 | use der_parser::ber::*;
use der_parser::der::*;
use der_parser::error::*;
use der_parser::*;
use hex_literal::hex;
use nom::branch::alt;
use nom::combinator::{complete, eof, map, map_res};
use nom::error::ErrorKind;
use nom::multi::many0;
use nom::sequence::tuple;
use nom::*;
use oid::Oid;
use pretty_assertions::assert_eq;
use test_case::test_case;
#[derive(Debug, PartialEq)]
struct MyStruct<'a> {
a: BerObject<'a>,
b: BerObject<'a>,
}
fn parse_struct01(i: &[u8]) -> BerResult<MyStruct> {
parse_der_sequence_defined_g(|i: &[u8], _| {
let (i, a) = parse_ber_integer(i)?;
let (i, b) = parse_ber_integer(i)?;
Ok((i, MyStruct { a, b }))
})(i)
}
fn parse_struct01_complete(i: &[u8]) -> BerResult<MyStruct> {
parse_der_sequence_defined_g(|i: &[u8], _| {
let (i, a) = parse_ber_integer(i)?;
let (i, b) = parse_ber_integer(i)?;
eof(i)?;
Ok((i, MyStruct { a, b }))
})(i)
}
// verifying tag
fn parse_struct04(i: &[u8], tag: Tag) -> BerResult<MyStruct> {
parse_der_container(|i: &[u8], hdr| {
if hdr.tag() != tag {
return Err(Err::Error(BerError::InvalidTag));
}
let (i, a) = parse_ber_integer(i)?;
let (i, b) = parse_ber_integer(i)?;
eof(i)?;
Ok((i, MyStruct { a, b }))
})(i)
}
#[test_case(&hex!("30 00"), Ok(&[]) ; "empty seq")]
#[test_case(&hex!("30 0a 02 03 01 00 01 02 03 01 00 00"), Ok(&[0x10001, 0x10000]) ; "seq ok")]
#[test_case(&hex!("30 07 02 03 01 00 01 02 03 01"), Err(Err::Error(BerError::NomError(ErrorKind::Eof))) ; "incomplete")]
#[test_case(&hex!("31 0a 02 03 01 00 01 02 03 01 00 00"), Err(Err::Error(BerError::unexpected_tag(Tag::Sequence, Tag::Set))) ; "invalid tag")]
#[test_case(&hex!("30 80 02 03 01 00 01 00 00"), Ok(&[0x10001]) ; "indefinite seq ok")]
#[test_case(&hex!("30 80"), Err(Err::Incomplete(Needed::new(1))) ; "indefinite incomplete")]
fn tc_ber_seq_of(i: &[u8], out: Result<&[u32], Err<BerError>>) {
fn parser(i: &[u8]) -> BerResult {
parse_ber_sequence_of(parse_ber_integer)(i)
}
let res = parser(i);
match out {
Ok(l) => {
let (rem, res) = res.expect("could not parse sequence of");
assert!(rem.is_empty());
if let BerObjectContent::Sequence(res) = res.content {
pretty_assertions::assert_eq!(res.len(), l.len());
for (a, b) in res.iter().zip(l.iter()) {
pretty_assertions::assert_eq!(a.as_u32().unwrap(), *b);
}
} else {
panic!("wrong type for parsed object");
}
}
Err(e) => {
pretty_assertions::assert_eq!(res, Err(e));
}
}
}
#[test_case(&hex!("30 0a 02 03 01 00 01 02 03 01 00 00"), Ok(&[0x10001, 0x10000]) ; "seq ok")]
#[test_case(&hex!("30 07 02 03 01 00 01 02 01"), Err(Err::Incomplete(Needed::new(1))) ; "incomplete")]
#[test_case(&hex!("31 0a 02 03 01 00 01 02 03 01 00 00"), Err(Err::Error(BerError::unexpected_tag(Tag::Sequence, Tag::Set))) ; "invalid tag")]
#[test_case(&hex!("30 80 02 03 01 00 01 02 03 01 00 00 00 00"), Ok(&[0x10001, 0x10000]) ; "indefinite seq")]
fn tc_ber_seq_defined(i: &[u8], out: Result<&[u32], Err<BerError>>) {
fn parser(i: &[u8]) -> BerResult<BerObject> {
parse_ber_sequence_defined(map(
tuple((parse_ber_integer, parse_ber_integer)),
|(a, b)| vec![a, b],
))(i)
}
let res = parser(i);
match out {
Ok(l) => {
let (rem, res) = res.expect("could not parse sequence");
assert!(rem.is_empty());
if let BerObjectContent::Sequence(res) = res.content {
pretty_assertions::assert_eq!(res.len(), l.len());
for (a, b) in res.iter().zip(l.iter()) {
pretty_assertions::assert_eq!(a.as_u32().unwrap(), *b);
}
} else {
panic!("wrong type for parsed object");
}
}
Err(e) => {
pretty_assertions::assert_eq!(res, Err(e));
}
}
}
#[test_case(&hex!("31 00"), Ok(&[]) ; "empty set")]
#[test_case(&hex!("31 0a 02 03 01 00 01 02 03 01 00 00"), Ok(&[0x10001, 0x10000]) ; "set ok")]
#[test_case(&hex!("31 07 02 03 01 00 01 02 03 01"), Err(Err::Error(BerError::NomError(ErrorKind::Eof))) ; "incomplete")]
#[test_case(&hex!("30 0a 02 03 01 00 01 02 03 01 00 00"), Err(Err::Error(BerError::unexpected_tag(Tag::Set, Tag::Sequence))) ; "invalid tag")]
#[test_case(&hex!("31 80 02 03 01 00 01 00 00"), Ok(&[0x10001]) ; "indefinite set ok")]
#[test_case(&hex!("31 80"), Err(Err::Incomplete(Needed::new(1))) ; "indefinite incomplete")]
fn tc_ber_set_of(i: &[u8], out: Result<&[u32], Err<BerError>>) {
fn parser(i: &[u8]) -> BerResult {
parse_ber_set_of(parse_ber_integer)(i)
}
let res = parser(i);
match out {
Ok(l) => {
let (rem, res) = res.expect("could not parse set of");
assert!(rem.is_empty());
if let BerObjectContent::Set(res) = res.content {
pretty_assertions::assert_eq!(res.len(), l.len());
for (a, b) in res.iter().zip(l.iter()) {
pretty_assertions::assert_eq!(a.as_u32().unwrap(), *b);
}
} else {
panic!("wrong type for parsed object");
}
}
Err(e) => {
pretty_assertions::assert_eq!(res, Err(e));
}
}
}
#[test_case(&hex!("31 0a 02 03 01 00 01 02 03 01 00 00"), Ok(&[0x10001, 0x10000]) ; "set ok")]
#[test_case(&hex!("31 07 02 03 01 00 01 02 01"), Err(Err::Incomplete(Needed::new(1))) ; "incomplete")]
#[test_case(&hex!("30 0a 02 03 01 00 01 02 03 01 00 00"), Err(Err::Error(BerError::unexpected_tag(Tag::Set, Tag::Sequence))) ; "invalid tag")]
#[test_case(&hex!("31 80 02 03 01 00 01 02 03 01 00 00 00 00"), Ok(&[0x10001, 0x10000]) ; "indefinite set")]
fn tc_ber_set_defined(i: &[u8], out: Result<&[u32], Err<BerError>>) {
fn parser(i: &[u8]) -> BerResult<BerObject> {
parse_ber_set_defined(map(
tuple((parse_ber_integer, parse_ber_integer)),
|(a, b)| vec![a, b],
))(i)
}
let res = parser(i);
match out {
Ok(l) => {
let (rem, res) = res.expect("could not parse set");
assert!(rem.is_empty());
if let BerObjectContent::Set(res) = res.content {
pretty_assertions::assert_eq!(res.len(), l.len());
for (a, b) in res.iter().zip(l.iter()) {
pretty_assertions::assert_eq!(a.as_u32().unwrap(), *b);
}
} else {
panic!("wrong type for parsed object");
}
}
Err(e) => {
pretty_assertions::assert_eq!(res, Err(e));
}
}
}
#[test]
fn empty_seq() {
let data = &hex!("30 00");
let (_, res) = parse_ber_sequence(data).expect("parsing empty sequence failed");
assert!(res.as_sequence().unwrap().is_empty());
}
#[test]
fn struct01() {
let bytes = [
0x30, 0x0a, 0x02, 0x03, 0x01, 0x00, 0x01, 0x02, 0x03, 0x01, 0x00, 0x00,
];
let empty = &b""[..];
let expected = MyStruct {
a: BerObject::from_int_slice(b"\x01\x00\x01"),
b: BerObject::from_int_slice(b"\x01\x00\x00"),
};
let res = parse_struct01(&bytes);
assert_eq!(res, Ok((empty, expected)));
}
#[test]
fn struct02() {
let empty = &b""[..];
let bytes = [
0x30, 0x45, 0x31, 0x0b, 0x30, 0x09, 0x06, 0x03, 0x55, 0x04, 0x06, 0x13, 0x02, 0x46, 0x52,
0x31, 0x13, 0x30, 0x11, 0x06, 0x03, 0x55, 0x04, 0x08, 0x0c, 0x0a, 0x53, 0x6f, 0x6d, 0x65,
0x2d, 0x53, 0x74, 0x61, 0x74, 0x65, 0x31, 0x21, 0x30, 0x1f, 0x06, 0x03, 0x55, 0x04, 0x0a,
0x16, 0x18, 0x49, 0x6e, 0x74, 0x65, 0x72, 0x6e, 0x65, 0x74, 0x20, 0x57, 0x69, 0x64, 0x67,
0x69, 0x74, 0x73, 0x20, 0x50, 0x74, 0x79, 0x20, 0x4c, 0x74, 0x64,
];
#[derive(Debug, PartialEq)]
struct Attr<'a> {
oid: Oid<'a>,
val: BerObject<'a>,
}
#[derive(Debug, PartialEq)]
struct Rdn<'a> {
a: Attr<'a>,
}
#[derive(Debug, PartialEq)]
struct Name<'a> {
l: Vec<Rdn<'a>>,
}
let expected = Name {
l: vec![
Rdn {
a: Attr {
oid: Oid::from(&[2, 5, 4, 6]).unwrap(), // countryName
val: BerObject::from_obj(BerObjectContent::PrintableString("FR")),
},
},
Rdn {
a: Attr {
oid: Oid::from(&[2, 5, 4, 8]).unwrap(), // stateOrProvinceName
val: BerObject::from_obj(BerObjectContent::UTF8String("Some-State")),
},
},
Rdn {
a: Attr {
oid: Oid::from(&[2, 5, 4, 10]).unwrap(), // organizationName
val: BerObject::from_obj(BerObjectContent::IA5String(
"Internet Widgits Pty Ltd",
)),
},
},
],
};
fn parse_directory_string(i: &[u8]) -> BerResult {
alt((
parse_ber_utf8string,
parse_ber_printablestring,
parse_ber_ia5string,
))(i)
}
fn parse_attr_type_and_value(i: &[u8]) -> BerResult<Attr> {
fn clone_oid(x: BerObject) -> Result<Oid, BerError> {
x.as_oid().map(|o| o.clone())
}
parse_der_sequence_defined_g(|i: &[u8], _| {
let (i, o) = map_res(parse_ber_oid, clone_oid)(i)?;
let (i, s) = parse_directory_string(i)?;
Ok((i, Attr { oid: o, val: s }))
})(i)
}
fn parse_rdn(i: &[u8]) -> BerResult<Rdn> {
parse_der_set_defined_g(|i: &[u8], _| {
let (i, a) = parse_attr_type_and_value(i)?;
Ok((i, Rdn { a }))
})(i)
}
fn parse_name(i: &[u8]) -> BerResult<Name> {
parse_der_sequence_defined_g(|i: &[u8], _| {
let (i, l) = many0(complete(parse_rdn))(i)?;
Ok((i, Name { l }))
})(i)
}
let parsed = parse_name(&bytes).unwrap();
assert_eq!(parsed, (empty, expected));
//
assert_eq!(parsed.1.l[0].a.val.as_str(), Ok("FR"));
assert_eq!(parsed.1.l[1].a.val.as_str(), Ok("Some-State"));
assert_eq!(parsed.1.l[2].a.val.as_str(), Ok("Internet Widgits Pty Ltd"));
}
#[test]
fn struct_with_garbage() {
let bytes = [
0x30, 0x0c, 0x02, 0x03, 0x01, 0x00, 0x01, 0x02, 0x03, 0x01, 0x00, 0x00, 0xff, 0xff,
];
let empty = &b""[..];
let expected = MyStruct {
a: BerObject::from_int_slice(b"\x01\x00\x01"),
b: BerObject::from_int_slice(b"\x01\x00\x00"),
};
assert_eq!(parse_struct01(&bytes), Ok((empty, expected)));
assert_eq!(
parse_struct01_complete(&bytes),
Err(Err::Error(error_position!(&bytes[12..], ErrorKind::Eof)))
);
}
#[test]
fn struct_verify_tag() {
let bytes = [
0x30, 0x0a, 0x02, 0x03, 0x01, 0x00, 0x01, 0x02, 0x03, 0x01, 0x00, 0x00,
];
let empty = &b""[..];
let expected = MyStruct {
a: BerObject::from_int_slice(b"\x01\x00\x01"),
b: BerObject::from_int_slice(b"\x01\x00\x00"),
};
let res = parse_struct04(&bytes, Tag::Sequence);
assert_eq!(res, Ok((empty, expected)));
let res = parse_struct04(&bytes, Tag::Set);
assert_eq!(res, Err(Err::Error(BerError::InvalidTag)));
}
#[test_case(&hex!("a2 05 02 03 01 00 01"), Ok(0x10001) ; "tag ok")]
#[test_case(&hex!("a2 80 02 03 01 00 01 00 00"), Ok(0x10001) ; "indefinite tag ok")]
#[test_case(&hex!("a3 05 02 03 01 00 01"), Err(BerError::unexpected_tag(Tag(2), Tag(3))) ; "invalid tag")]
#[test_case(&hex!("22 05 02 03 01 00 01"), Err(BerError::InvalidClass) ; "invalid class")]
#[test_case(&hex!("82 05 02 03 01 00 01"), Err(BerError::ConstructExpected) ; "construct expected")]
fn tc_ber_tagged_explicit_g(i: &[u8], out: Result<u32, BerError>) {
fn parse_int_explicit(i: &[u8]) -> BerResult<u32> {
parse_ber_tagged_explicit_g(2, move |content, _hdr| {
let (rem, obj) = parse_ber_integer(content)?;
let value = obj.as_u32()?;
Ok((rem, value))
})(i)
}
let res = parse_int_explicit(i);
match out {
Ok(expected) => {
pretty_assertions::assert_eq!(res, Ok((&b""[..], expected)));
}
Err(e) => {
pretty_assertions::assert_eq!(res, Err(Err::Error(e)));
}
}
}
#[test]
fn tagged_explicit() {
fn parse_int_explicit(i: &[u8]) -> BerResult<u32> {
map_res(
parse_der_tagged_explicit(2, parse_der_integer),
|x: BerObject| x.as_tagged()?.2.as_u32(),
)(i)
}
let bytes = &[0xa2, 0x05, 0x02, 0x03, 0x01, 0x00, 0x01];
// EXPLICIT tagged value parsing
let (rem, val) = parse_int_explicit(bytes).expect("Could not parse explicit int");
assert!(rem.is_empty());
assert_eq!(val, 0x10001);
// wrong tag
assert_eq!(
parse_der_tagged_explicit(3, parse_der_integer)(bytes as &[u8]),
Err(Err::Error(BerError::unexpected_tag(Tag(3), Tag(2))))
);
// wrong type
assert_eq!(
parse_der_tagged_explicit(2, parse_der_bool)(bytes as &[u8]),
Err(Err::Error(BerError::unexpected_tag(Tag(1), Tag(2))))
);
}
#[test_case(&hex!("82 03 01 00 01"), Ok(0x10001) ; "tag ok")]
#[test_case(&hex!("83 03 01 00 01"), Err(BerError::unexpected_tag(Tag(2), Tag(3))) ; "invalid tag")]
fn tc_ber_tagged_implicit_g(i: &[u8], out: Result<u32, BerError>) {
fn parse_int_implicit(i: &[u8]) -> BerResult<u32> {
parse_ber_tagged_implicit_g(2, |content, hdr, depth| {
let (rem, obj) = parse_ber_content(Tag::Integer)(content, &hdr, depth)?;
let value = obj.as_u32()?;
Ok((rem, value))
})(i)
}
let res = parse_int_implicit(i);
match out {
Ok(expected) => {
pretty_assertions::assert_eq!(res, Ok((&b""[..], expected)));
}
Err(e) => {
pretty_assertions::assert_eq!(res, Err(Err::Error(e)));
}
}
}
#[test]
fn tagged_implicit() {
fn parse_int_implicit(i: &[u8]) -> BerResult<u32> {
map_res(
parse_der_tagged_implicit(2, parse_der_content(Tag::Integer)),
|x: BerObject| x.as_u32(),
)(i)
}
let bytes = &[0x82, 0x03, 0x01, 0x00, 0x01];
// IMPLICIT tagged value parsing
let (rem, val) = parse_int_implicit(bytes).expect("could not parse implicit int");
assert!(rem.is_empty());
assert_eq!(val, 0x10001);
// wrong tag
assert_eq!(
parse_der_tagged_implicit(3, parse_der_content(Tag::Integer))(bytes as &[u8]),
Err(Err::Error(BerError::unexpected_tag(Tag(3), Tag(2))))
);
}
#[test]
fn application() {
#[derive(Debug, PartialEq)]
struct SimpleStruct {
a: u32,
}
fn parse_app01(i: &[u8]) -> BerResult<SimpleStruct> {
parse_der_container(|i, hdr| {
if hdr.class() != Class::Application {
return Err(Err::Error(BerError::InvalidClass));
}
if hdr.tag() != Tag(2) {
return Err(Err::Error(BerError::InvalidTag));
}
let (i, a) = map_res(parse_ber_integer, |x: BerObject| x.as_u32())(i)?;
Ok((i, SimpleStruct { a }))
})(i)
}
let bytes = &[0x62, 0x05, 0x02, 0x03, 0x01, 0x00, 0x01];
let (rem, app) = parse_app01(bytes).expect("could not parse application");
assert!(rem.is_empty());
assert_eq!(app, SimpleStruct { a: 0x10001 });
}
#[test]
#[ignore = "not yet implemented"]
fn ber_constructed_string() {
// this encoding is equivalent to "04 05 01 AB 23 7F CA"
let data = &hex!(
"
24 80
04 02 01 ab
04 02 23 7f
04 01 ca
00 00"
);
let _ = parse_ber_octetstring(data).expect("parsing failed");
}
| 36.366743 | 142 | 0.545882 |
ff944e06c81fc83343a23c39bf4f802465c94531 | 3,214 | pub use ruffle_wstr::*;
use std::ops::Deref;
use gc_arena::{Collect, Gc, MutationContext};
use std::borrow::Cow;
#[derive(Clone, Copy, Collect)]
#[collect(no_drop)]
enum Source<'gc> {
Owned(Gc<'gc, OwnedWStr>),
Static(&'static WStr),
}
#[derive(Collect)]
#[collect(require_static)]
struct OwnedWStr(WString);
#[derive(Clone, Copy, Collect)]
#[collect(no_drop)]
pub struct AvmString<'gc> {
source: Source<'gc>,
}
impl<'gc> AvmString<'gc> {
pub fn new_utf8<'s, S: Into<Cow<'s, str>>>(
gc_context: MutationContext<'gc, '_>,
string: S,
) -> Self {
let buf = match string.into() {
Cow::Owned(utf8) => WString::from_utf8_owned(utf8),
Cow::Borrowed(utf8) => WString::from_utf8(utf8),
};
Self {
source: Source::Owned(Gc::allocate(gc_context, OwnedWStr(buf))),
}
}
pub fn new_utf8_bytes<'b, B: Into<Cow<'b, [u8]>>>(
gc_context: MutationContext<'gc, '_>,
bytes: B,
) -> Result<Self, std::str::Utf8Error> {
let utf8 = match bytes.into() {
Cow::Owned(b) => Cow::Owned(String::from_utf8(b).map_err(|e| e.utf8_error())?),
Cow::Borrowed(b) => Cow::Borrowed(std::str::from_utf8(b)?),
};
Ok(Self::new_utf8(gc_context, utf8))
}
pub fn new<S: Into<WString>>(gc_context: MutationContext<'gc, '_>, string: S) -> Self {
Self {
source: Source::Owned(Gc::allocate(gc_context, OwnedWStr(string.into()))),
}
}
pub fn as_wstr(&self) -> &WStr {
match &self.source {
Source::Owned(s) => &s.0,
Source::Static(s) => s,
}
}
pub fn concat(
gc_context: MutationContext<'gc, '_>,
left: AvmString<'gc>,
right: AvmString<'gc>,
) -> AvmString<'gc> {
if left.is_empty() {
right
} else if right.is_empty() {
left
} else {
let mut out = WString::from(left.as_wstr());
out.push_str(&right);
Self::new(gc_context, out)
}
}
#[inline]
pub fn ptr_eq(this: &Self, other: &Self) -> bool {
match (this.source, other.source) {
(Source::Owned(this), Source::Owned(other)) => Gc::ptr_eq(this, other),
(Source::Static(this), Source::Static(other)) => std::ptr::eq(this, other),
_ => false,
}
}
}
impl Default for AvmString<'_> {
fn default() -> Self {
Self {
source: Source::Static(WStr::empty()),
}
}
}
impl<'gc> From<&'static str> for AvmString<'gc> {
#[inline]
fn from(str: &'static str) -> Self {
// TODO(moulins): actually check that `str` is valid ASCII.
Self {
source: Source::Static(WStr::from_units(str.as_bytes())),
}
}
}
impl<'gc> From<&'static WStr> for AvmString<'gc> {
#[inline]
fn from(str: &'static WStr) -> Self {
Self {
source: Source::Static(str),
}
}
}
impl<'gc> Deref for AvmString<'gc> {
type Target = WStr;
#[inline]
fn deref(&self) -> &Self::Target {
self.as_wstr()
}
}
wstr_impl_traits!(impl['gc] for AvmString<'gc>);
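// Illustrative sketch, not part of the original module: it exercises the `From<&'static str>`
// and `Default` impls above, which need no GC mutation context because they are backed by
// static data. Assumes `len`/`is_empty` are reachable through the `Deref<Target = WStr>` impl.
#[cfg(test)]
mod avm_string_static_sketch {
    use super::AvmString;

    #[test]
    fn static_and_default_strings() {
        // Built from a 'static ASCII str, so no MutationContext is required.
        let s = AvmString::from("swf");
        assert_eq!(s.len(), 3);
        // The default value is the empty static string.
        assert!(AvmString::default().is_empty());
    }
}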
| 25.712 | 91 | 0.53267 |
e946a6777f595137dbc5eebda3ee634f51c779f1 | 14,808 | // This file is generated by rust-protobuf 2.25.2. Do not edit
// @generated
// https://github.com/rust-lang/rust-clippy/issues/702
#![allow(unknown_lints)]
#![allow(clippy::all)]
#![allow(unused_attributes)]
#![cfg_attr(rustfmt, rustfmt::skip)]
#![allow(box_pointers)]
#![allow(dead_code)]
#![allow(missing_docs)]
#![allow(non_camel_case_types)]
#![allow(non_snake_case)]
#![allow(non_upper_case_globals)]
#![allow(trivial_casts)]
#![allow(unused_imports)]
#![allow(unused_results)]
//! Generated file from `src/response.proto`
/// Generated files are compatible only with the same version
/// of protobuf runtime.
// const _PROTOBUF_VERSION_CHECK: () = ::protobuf::VERSION_2_25_2;
#[derive(PartialEq,Clone,Default)]
pub struct MsgInstantiateContractResponse {
// message fields
pub contract_address: ::std::string::String,
pub data: ::std::vec::Vec<u8>,
// special fields
pub unknown_fields: ::protobuf::UnknownFields,
pub cached_size: ::protobuf::CachedSize,
}
impl<'a> ::std::default::Default for &'a MsgInstantiateContractResponse {
fn default() -> &'a MsgInstantiateContractResponse {
<MsgInstantiateContractResponse as ::protobuf::Message>::default_instance()
}
}
impl MsgInstantiateContractResponse {
pub fn new() -> MsgInstantiateContractResponse {
::std::default::Default::default()
}
// string contract_address = 1;
pub fn get_contract_address(&self) -> &str {
&self.contract_address
}
pub fn clear_contract_address(&mut self) {
self.contract_address.clear();
}
// Param is passed by value, moved
pub fn set_contract_address(&mut self, v: ::std::string::String) {
self.contract_address = v;
}
// Mutable pointer to the field.
// If field is not initialized, it is initialized with default value first.
pub fn mut_contract_address(&mut self) -> &mut ::std::string::String {
&mut self.contract_address
}
// Take field
pub fn take_contract_address(&mut self) -> ::std::string::String {
::std::mem::replace(&mut self.contract_address, ::std::string::String::new())
}
// bytes data = 2;
pub fn get_data(&self) -> &[u8] {
&self.data
}
pub fn clear_data(&mut self) {
self.data.clear();
}
// Param is passed by value, moved
pub fn set_data(&mut self, v: ::std::vec::Vec<u8>) {
self.data = v;
}
// Mutable pointer to the field.
// If field is not initialized, it is initialized with default value first.
pub fn mut_data(&mut self) -> &mut ::std::vec::Vec<u8> {
&mut self.data
}
// Take field
pub fn take_data(&mut self) -> ::std::vec::Vec<u8> {
::std::mem::replace(&mut self.data, ::std::vec::Vec::new())
}
}
impl ::protobuf::Message for MsgInstantiateContractResponse {
fn is_initialized(&self) -> bool {
true
}
fn merge_from(&mut self, is: &mut ::protobuf::CodedInputStream<'_>) -> ::protobuf::ProtobufResult<()> {
while !is.eof()? {
let (field_number, wire_type) = is.read_tag_unpack()?;
match field_number {
1 => {
::protobuf::rt::read_singular_proto3_string_into(wire_type, is, &mut self.contract_address)?;
},
2 => {
::protobuf::rt::read_singular_proto3_bytes_into(wire_type, is, &mut self.data)?;
},
_ => {
::protobuf::rt::read_unknown_or_skip_group(field_number, wire_type, is, self.mut_unknown_fields())?;
},
};
}
::std::result::Result::Ok(())
}
// Compute sizes of nested messages
#[allow(unused_variables)]
fn compute_size(&self) -> u32 {
let mut my_size = 0;
if !self.contract_address.is_empty() {
my_size += ::protobuf::rt::string_size(1, &self.contract_address);
}
if !self.data.is_empty() {
my_size += ::protobuf::rt::bytes_size(2, &self.data);
}
my_size += ::protobuf::rt::unknown_fields_size(self.get_unknown_fields());
self.cached_size.set(my_size);
my_size
}
fn write_to_with_cached_sizes(&self, os: &mut ::protobuf::CodedOutputStream<'_>) -> ::protobuf::ProtobufResult<()> {
if !self.contract_address.is_empty() {
os.write_string(1, &self.contract_address)?;
}
if !self.data.is_empty() {
os.write_bytes(2, &self.data)?;
}
os.write_unknown_fields(self.get_unknown_fields())?;
::std::result::Result::Ok(())
}
fn get_cached_size(&self) -> u32 {
self.cached_size.get()
}
fn get_unknown_fields(&self) -> &::protobuf::UnknownFields {
&self.unknown_fields
}
fn mut_unknown_fields(&mut self) -> &mut ::protobuf::UnknownFields {
&mut self.unknown_fields
}
fn as_any(&self) -> &dyn (::std::any::Any) {
self as &dyn (::std::any::Any)
}
fn as_any_mut(&mut self) -> &mut dyn (::std::any::Any) {
self as &mut dyn (::std::any::Any)
}
fn into_any(self: ::std::boxed::Box<Self>) -> ::std::boxed::Box<dyn (::std::any::Any)> {
self
}
fn descriptor(&self) -> &'static ::protobuf::reflect::MessageDescriptor {
Self::descriptor_static()
}
fn new() -> MsgInstantiateContractResponse {
MsgInstantiateContractResponse::new()
}
fn descriptor_static() -> &'static ::protobuf::reflect::MessageDescriptor {
static descriptor: ::protobuf::rt::LazyV2<::protobuf::reflect::MessageDescriptor> = ::protobuf::rt::LazyV2::INIT;
descriptor.get(|| {
let mut fields = ::std::vec::Vec::new();
fields.push(::protobuf::reflect::accessor::make_simple_field_accessor::<_, ::protobuf::types::ProtobufTypeString>(
"contract_address",
|m: &MsgInstantiateContractResponse| { &m.contract_address },
|m: &mut MsgInstantiateContractResponse| { &mut m.contract_address },
));
fields.push(::protobuf::reflect::accessor::make_simple_field_accessor::<_, ::protobuf::types::ProtobufTypeBytes>(
"data",
|m: &MsgInstantiateContractResponse| { &m.data },
|m: &mut MsgInstantiateContractResponse| { &mut m.data },
));
::protobuf::reflect::MessageDescriptor::new_pb_name::<MsgInstantiateContractResponse>(
"MsgInstantiateContractResponse",
fields,
file_descriptor_proto()
)
})
}
fn default_instance() -> &'static MsgInstantiateContractResponse {
static instance: ::protobuf::rt::LazyV2<MsgInstantiateContractResponse> = ::protobuf::rt::LazyV2::INIT;
instance.get(MsgInstantiateContractResponse::new)
}
}
impl ::protobuf::Clear for MsgInstantiateContractResponse {
fn clear(&mut self) {
self.contract_address.clear();
self.data.clear();
self.unknown_fields.clear();
}
}
impl ::std::fmt::Debug for MsgInstantiateContractResponse {
fn fmt(&self, f: &mut ::std::fmt::Formatter<'_>) -> ::std::fmt::Result {
::protobuf::text_format::fmt(self, f)
}
}
impl ::protobuf::reflect::ProtobufValue for MsgInstantiateContractResponse {
fn as_ref(&self) -> ::protobuf::reflect::ReflectValueRef {
::protobuf::reflect::ReflectValueRef::Message(self)
}
}
#[derive(PartialEq,Clone,Default)]
pub struct MsgExecuteContractResponse {
// message fields
pub data: ::std::vec::Vec<u8>,
// special fields
pub unknown_fields: ::protobuf::UnknownFields,
pub cached_size: ::protobuf::CachedSize,
}
impl<'a> ::std::default::Default for &'a MsgExecuteContractResponse {
fn default() -> &'a MsgExecuteContractResponse {
<MsgExecuteContractResponse as ::protobuf::Message>::default_instance()
}
}
impl MsgExecuteContractResponse {
pub fn new() -> MsgExecuteContractResponse {
::std::default::Default::default()
}
// bytes data = 1;
pub fn get_data(&self) -> &[u8] {
&self.data
}
pub fn clear_data(&mut self) {
self.data.clear();
}
// Param is passed by value, moved
pub fn set_data(&mut self, v: ::std::vec::Vec<u8>) {
self.data = v;
}
// Mutable pointer to the field.
// If field is not initialized, it is initialized with default value first.
pub fn mut_data(&mut self) -> &mut ::std::vec::Vec<u8> {
&mut self.data
}
// Take field
pub fn take_data(&mut self) -> ::std::vec::Vec<u8> {
::std::mem::replace(&mut self.data, ::std::vec::Vec::new())
}
}
impl ::protobuf::Message for MsgExecuteContractResponse {
fn is_initialized(&self) -> bool {
true
}
fn merge_from(&mut self, is: &mut ::protobuf::CodedInputStream<'_>) -> ::protobuf::ProtobufResult<()> {
while !is.eof()? {
let (field_number, wire_type) = is.read_tag_unpack()?;
match field_number {
1 => {
::protobuf::rt::read_singular_proto3_bytes_into(wire_type, is, &mut self.data)?;
},
_ => {
::protobuf::rt::read_unknown_or_skip_group(field_number, wire_type, is, self.mut_unknown_fields())?;
},
};
}
::std::result::Result::Ok(())
}
// Compute sizes of nested messages
#[allow(unused_variables)]
fn compute_size(&self) -> u32 {
let mut my_size = 0;
if !self.data.is_empty() {
my_size += ::protobuf::rt::bytes_size(1, &self.data);
}
my_size += ::protobuf::rt::unknown_fields_size(self.get_unknown_fields());
self.cached_size.set(my_size);
my_size
}
fn write_to_with_cached_sizes(&self, os: &mut ::protobuf::CodedOutputStream<'_>) -> ::protobuf::ProtobufResult<()> {
if !self.data.is_empty() {
os.write_bytes(1, &self.data)?;
}
os.write_unknown_fields(self.get_unknown_fields())?;
::std::result::Result::Ok(())
}
fn get_cached_size(&self) -> u32 {
self.cached_size.get()
}
fn get_unknown_fields(&self) -> &::protobuf::UnknownFields {
&self.unknown_fields
}
fn mut_unknown_fields(&mut self) -> &mut ::protobuf::UnknownFields {
&mut self.unknown_fields
}
fn as_any(&self) -> &dyn (::std::any::Any) {
self as &dyn (::std::any::Any)
}
fn as_any_mut(&mut self) -> &mut dyn (::std::any::Any) {
self as &mut dyn (::std::any::Any)
}
fn into_any(self: ::std::boxed::Box<Self>) -> ::std::boxed::Box<dyn (::std::any::Any)> {
self
}
fn descriptor(&self) -> &'static ::protobuf::reflect::MessageDescriptor {
Self::descriptor_static()
}
fn new() -> MsgExecuteContractResponse {
MsgExecuteContractResponse::new()
}
fn descriptor_static() -> &'static ::protobuf::reflect::MessageDescriptor {
static descriptor: ::protobuf::rt::LazyV2<::protobuf::reflect::MessageDescriptor> = ::protobuf::rt::LazyV2::INIT;
descriptor.get(|| {
let mut fields = ::std::vec::Vec::new();
fields.push(::protobuf::reflect::accessor::make_simple_field_accessor::<_, ::protobuf::types::ProtobufTypeBytes>(
"data",
|m: &MsgExecuteContractResponse| { &m.data },
|m: &mut MsgExecuteContractResponse| { &mut m.data },
));
::protobuf::reflect::MessageDescriptor::new_pb_name::<MsgExecuteContractResponse>(
"MsgExecuteContractResponse",
fields,
file_descriptor_proto()
)
})
}
fn default_instance() -> &'static MsgExecuteContractResponse {
static instance: ::protobuf::rt::LazyV2<MsgExecuteContractResponse> = ::protobuf::rt::LazyV2::INIT;
instance.get(MsgExecuteContractResponse::new)
}
}
impl ::protobuf::Clear for MsgExecuteContractResponse {
fn clear(&mut self) {
self.data.clear();
self.unknown_fields.clear();
}
}
impl ::std::fmt::Debug for MsgExecuteContractResponse {
fn fmt(&self, f: &mut ::std::fmt::Formatter<'_>) -> ::std::fmt::Result {
::protobuf::text_format::fmt(self, f)
}
}
impl ::protobuf::reflect::ProtobufValue for MsgExecuteContractResponse {
fn as_ref(&self) -> ::protobuf::reflect::ReflectValueRef {
::protobuf::reflect::ReflectValueRef::Message(self)
}
}
static file_descriptor_proto_data: &'static [u8] = b"\
\n\x12src/response.proto\"_\n\x1eMsgInstantiateContractResponse\x12)\n\
\x10contract_address\x18\x01\x20\x01(\tR\x0fcontractAddress\x12\x12\n\
\x04data\x18\x02\x20\x01(\x0cR\x04data\"0\n\x1aMsgExecuteContractRespons\
e\x12\x12\n\x04data\x18\x01\x20\x01(\x0cR\x04dataJ\xd8\x04\n\x06\x12\x04\
\0\0\x0e\x01\n\x08\n\x01\x0c\x12\x03\0\0\x12\n_\n\x02\x04\0\x12\x04\x03\
\0\x08\x01\x1aS\x20MsgInstantiateContractResponse\x20defines\x20the\x20M\
sg/InstantiateContract\x20response\x20type.\n\n\n\n\x03\x04\0\x01\x12\
\x03\x03\x08&\nR\n\x04\x04\0\x02\0\x12\x03\x05\x02\x1e\x1aE\x20ContractA\
ddress\x20is\x20the\x20bech32\x20address\x20of\x20the\x20new\x20contract\
\x20instance.\n\n\x0c\n\x05\x04\0\x02\0\x05\x12\x03\x05\x02\x08\n\x0c\n\
\x05\x04\0\x02\0\x01\x12\x03\x05\t\x19\n\x0c\n\x05\x04\0\x02\0\x03\x12\
\x03\x05\x1c\x1d\nO\n\x04\x04\0\x02\x01\x12\x03\x07\x02\x11\x1aB\x20Data\
\x20contains\x20base64-encoded\x20bytes\x20to\x20returned\x20from\x20the\
\x20contract\n\n\x0c\n\x05\x04\0\x02\x01\x05\x12\x03\x07\x02\x07\n\x0c\n\
\x05\x04\0\x02\x01\x01\x12\x03\x07\x08\x0c\n\x0c\n\x05\x04\0\x02\x01\x03\
\x12\x03\x07\x0f\x10\nW\n\x02\x04\x01\x12\x04\x0b\0\x0e\x01\x1aK\x20MsgE\
xecuteContractResponse\x20defines\x20the\x20Msg/ExecuteContract\x20respo\
nse\x20type.\n\n\n\n\x03\x04\x01\x01\x12\x03\x0b\x08\"\nO\n\x04\x04\x01\
\x02\0\x12\x03\r\x02\x11\x1aB\x20Data\x20contains\x20base64-encoded\x20b\
ytes\x20to\x20returned\x20from\x20the\x20contract\n\n\x0c\n\x05\x04\x01\
\x02\0\x05\x12\x03\r\x02\x07\n\x0c\n\x05\x04\x01\x02\0\x01\x12\x03\r\x08\
\x0c\n\x0c\n\x05\x04\x01\x02\0\x03\x12\x03\r\x0f\x10b\x06proto3\
";
static file_descriptor_proto_lazy: ::protobuf::rt::LazyV2<::protobuf::descriptor::FileDescriptorProto> = ::protobuf::rt::LazyV2::INIT;
fn parse_descriptor_proto() -> ::protobuf::descriptor::FileDescriptorProto {
::protobuf::Message::parse_from_bytes(file_descriptor_proto_data).unwrap()
}
pub fn file_descriptor_proto() -> &'static ::protobuf::descriptor::FileDescriptorProto {
file_descriptor_proto_lazy.get(|| {
parse_descriptor_proto()
})
}
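// Illustrative sketch, hand-written rather than generated: a serialize/parse round trip using
// the rust-protobuf 2.x `Message` trait, the same trait this file already uses for
// `parse_from_bytes` above.
#[cfg(test)]
mod response_roundtrip_sketch {
    use super::MsgExecuteContractResponse;
    use protobuf::Message;

    #[test]
    fn round_trips_execute_response() {
        // Build a response, encode it, then decode it back and compare the payload.
        let mut msg = MsgExecuteContractResponse::new();
        msg.set_data(b"payload".to_vec());
        let bytes = msg.write_to_bytes().expect("encoding should succeed");
        let parsed = MsgExecuteContractResponse::parse_from_bytes(&bytes)
            .expect("decoding should succeed");
        assert_eq!(parsed.get_data(), &b"payload"[..]);
    }
}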
| 35.090047 | 134 | 0.618922 |
8717e5be390ab3e035537cae3c547e7029d5e43b | 436 | //! Tests auto-converted from "sass-spec/spec/non_conformant/scss-tests/004_test_variables.hrx"
#[test]
fn test() {
assert_eq!(
crate::rsass(
"foo {\
\n $var: 2;\
\n $another-var: 4;\
\n a: $var;\
\n b: $var + $another-var;}\
\n"
)
.unwrap(),
"foo {\
\n a: 2;\
\n b: 6;\
\n}\
\n"
);
}
| 19.818182 | 95 | 0.37844 |
6742d6d8d8264d8bb3726308703bca15d19dec0a | 4,717 | // Copyright 2019 The Fuchsia Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.
use crate::agent::{AgentError, BlueprintHandle, Context, Invocation, Lifespan, Payload};
use crate::base::SettingType;
use crate::message::base::{Audience, MessengerType};
use crate::monitor;
use crate::service;
use crate::service_context::ServiceContext;
use anyhow::{format_err, Error};
use std::collections::HashSet;
use std::sync::Arc;
/// Authority provides the ability to execute agents sequentially or simultaneously for a given
/// stage.
pub struct Authority {
    // Signatures of registered agents, used to address invocations.
agent_signatures: Vec<service::message::Signature>,
// Factory passed to agents for communicating with the service.
messenger_factory: service::message::Factory,
// Messenger
messenger: service::message::Messenger,
// Available components
available_components: HashSet<SettingType>,
// Available resource monitors
resource_monitor_actor: Option<monitor::environment::Actor>,
}
impl Authority {
pub async fn create(
messenger_factory: service::message::Factory,
available_components: HashSet<SettingType>,
resource_monitor_actor: Option<monitor::environment::Actor>,
) -> Result<Authority, Error> {
let (client, _) = messenger_factory
.create(MessengerType::Unbound)
.await
.map_err(|_| anyhow::format_err!("could not create agent messenger for authority"))?;
return Ok(Authority {
agent_signatures: Vec::new(),
messenger_factory,
messenger: client,
available_components,
resource_monitor_actor,
});
}
pub async fn register(&mut self, blueprint: BlueprintHandle) {
let agent_receptor = self
.messenger_factory
.create(MessengerType::Unbound)
.await
.expect("agent receptor should be created")
.1;
let signature = agent_receptor.get_signature();
blueprint
.create(
Context::new(
agent_receptor,
self.messenger_factory.clone(),
self.available_components.clone(),
self.resource_monitor_actor.clone(),
)
.await,
)
.await;
self.agent_signatures.push(signature);
}
/// Invokes each registered agent for a given lifespan. If sequential is true,
/// invocations will only proceed to the next agent once the current
/// invocation has been successfully acknowledged. When sequential is false,
/// agents will receive their invocations without waiting. However, the
    /// overall completion (signaled by this method's returned future resolving)
    /// will not occur until all invocations have been acknowledged.
pub async fn execute_lifespan(
&self,
lifespan: Lifespan,
service_context: Arc<ServiceContext>,
sequential: bool,
) -> Result<(), Error> {
let mut pending_receptors = Vec::new();
for &signature in &self.agent_signatures {
let mut receptor = self
.messenger
.message(
Payload::Invocation(Invocation {
lifespan: lifespan.clone(),
service_context: Arc::clone(&service_context),
})
.into(),
Audience::Messenger(signature),
)
.send();
if sequential {
let result = process_payload(receptor.next_payload().await);
if result.is_err() {
return result;
}
} else {
pending_receptors.push(receptor);
}
}
// Pending acks should only be present for non sequential execution. In
// this case wait for each to complete.
for mut receptor in pending_receptors {
let result = process_payload(receptor.next_payload().await);
if result.is_err() {
return result;
}
}
Ok(())
}
}
fn process_payload(
payload: Result<(service::Payload, service::message::MessageClient), Error>,
) -> Result<(), Error> {
match payload {
Ok((service::Payload::Agent(Payload::Complete(Ok(_))), _)) => Ok(()),
Ok((service::Payload::Agent(Payload::Complete(Err(AgentError::UnhandledLifespan))), _)) => {
Ok(())
}
_ => Err(format_err!("invocation failed")),
}
}
| 34.940741 | 100 | 0.59593 |
215e3980761cdabea8e1344dce9a169fcb2fde1c | 734 | pub(crate) mod internal {
pub use std::marker::Unpin;
pub use async_recursion::async_recursion;
pub use futures::io::AsyncBufReadExt;
pub use tokio::io::{AsyncRead, AsyncReadExt, AsyncWrite, AsyncWriteExt, BufReader, BufWriter};
pub use tokio::prelude::*;
pub use crate::*;
}
mod client;
mod command;
mod decoder;
mod encoder;
mod error;
mod server;
mod value;
pub use client::Client;
pub use command::Command;
pub use decoder::Decoder;
pub use encoder::Encoder;
pub use error::Error;
pub use server::{Handle, Handler, Response, Server};
pub use value::{Float, Map, Set, Value};
pub use worm_derive::Handler;
pub use async_trait::async_trait;
pub use tokio::net::ToSocketAddrs;
#[cfg(test)]
mod tests;
| 20.388889 | 98 | 0.717984 |
e427f49b4d04dac4bbe4393e72dfd9ea08b7a05f | 3,761 | // Generated from definition io.k8s.api.authorization.v1.NonResourceAttributes
/// NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface
#[derive(Clone, Debug, Default, PartialEq)]
pub struct NonResourceAttributes {
/// Path is the URL path of the request
pub path: Option<String>,
/// Verb is the standard HTTP verb
pub verb: Option<String>,
}
impl<'de> ::serde::Deserialize<'de> for NonResourceAttributes {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> where D: ::serde::Deserializer<'de> {
#[allow(non_camel_case_types)]
enum Field {
Key_path,
Key_verb,
Other,
}
impl<'de> ::serde::Deserialize<'de> for Field {
fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> where D: ::serde::Deserializer<'de> {
struct Visitor;
impl<'de> ::serde::de::Visitor<'de> for Visitor {
type Value = Field;
fn expecting(&self, f: &mut ::std::fmt::Formatter) -> ::std::fmt::Result {
write!(f, "field identifier")
}
fn visit_str<E>(self, v: &str) -> Result<Self::Value, E> where E: ::serde::de::Error {
Ok(match v {
"path" => Field::Key_path,
"verb" => Field::Key_verb,
_ => Field::Other,
})
}
}
deserializer.deserialize_identifier(Visitor)
}
}
struct Visitor;
impl<'de> ::serde::de::Visitor<'de> for Visitor {
type Value = NonResourceAttributes;
fn expecting(&self, f: &mut ::std::fmt::Formatter) -> ::std::fmt::Result {
write!(f, "struct NonResourceAttributes")
}
fn visit_map<A>(self, mut map: A) -> Result<Self::Value, A::Error> where A: ::serde::de::MapAccess<'de> {
let mut value_path: Option<String> = None;
let mut value_verb: Option<String> = None;
while let Some(key) = ::serde::de::MapAccess::next_key::<Field>(&mut map)? {
match key {
Field::Key_path => value_path = ::serde::de::MapAccess::next_value(&mut map)?,
Field::Key_verb => value_verb = ::serde::de::MapAccess::next_value(&mut map)?,
Field::Other => { let _: ::serde::de::IgnoredAny = ::serde::de::MapAccess::next_value(&mut map)?; },
}
}
Ok(NonResourceAttributes {
path: value_path,
verb: value_verb,
})
}
}
deserializer.deserialize_struct(
"NonResourceAttributes",
&[
"path",
"verb",
],
Visitor,
)
}
}
impl ::serde::Serialize for NonResourceAttributes {
fn serialize<S>(&self, serializer: S) -> Result<S::Ok, S::Error> where S: ::serde::Serializer {
let mut state = serializer.serialize_struct(
"NonResourceAttributes",
0 +
self.path.as_ref().map_or(0, |_| 1) +
self.verb.as_ref().map_or(0, |_| 1),
)?;
if let Some(value) = &self.path {
::serde::ser::SerializeStruct::serialize_field(&mut state, "path", value)?;
}
if let Some(value) = &self.verb {
::serde::ser::SerializeStruct::serialize_field(&mut state, "verb", value)?;
}
::serde::ser::SerializeStruct::end(state)
}
}
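// Illustrative sketch, hand-written rather than generated: a serde_json round trip through the
// `Serialize`/`Deserialize` impls above. It assumes `serde_json` is available to this crate;
// the field names ("path", "verb") come directly from the impls in this file.
#[cfg(test)]
mod non_resource_attributes_serde_sketch {
    use super::NonResourceAttributes;

    #[test]
    fn round_trips_through_json() {
        let attrs = NonResourceAttributes {
            path: Some("/healthz".to_owned()),
            verb: Some("get".to_owned()),
        };
        // Serialization writes "path" before "verb", matching the impl above.
        let json = serde_json::to_string(&attrs).expect("serialization should succeed");
        assert_eq!(json, r#"{"path":"/healthz","verb":"get"}"#);
        let parsed: NonResourceAttributes =
            serde_json::from_str(&json).expect("deserialization should succeed");
        assert_eq!(parsed, attrs);
    }
}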
| 36.872549 | 127 | 0.506248 |
5bd1232d4a4af55606971c5300eef48efa415e63 | 1,849 | use crate::*;
pub(crate) fn assert_one_yocto() {
assert_eq!(
env::attached_deposit(),
1,
"Requires attached deposit of exactly 1 yoctoNEAR"
)
}
pub(crate) fn assert_self() {
assert_eq!(
env::predecessor_account_id(),
env::current_account_id(),
"Method is private"
);
}
impl Contract {
pub(crate) fn internal_deposit(&mut self, account_id: &AccountId, amount: Balance) {
let balance = self
.accounts
.get(&account_id)
.expect("The account is not registered");
if let Some(new_balance) = balance.checked_add(amount) {
self.accounts.insert(&account_id, &new_balance);
} else {
env::panic(b"Balance overflow");
}
}
pub(crate) fn internal_withdraw(&mut self, account_id: &AccountId, amount: Balance) {
let balance = self
.accounts
.get(&account_id)
.expect("The account is not registered");
if let Some(new_balance) = balance.checked_sub(amount) {
self.accounts.insert(&account_id, &new_balance);
} else {
env::panic(b"The account doesn't have enough balance");
}
}
pub(crate) fn internal_transfer(
&mut self,
sender_id: &AccountId,
receiver_id: &AccountId,
amount: Balance,
memo: Option<String>,
) {
assert_ne!(
sender_id, receiver_id,
"Sender and receiver should be different"
);
self.internal_withdraw(sender_id, amount);
self.internal_deposit(receiver_id, amount);
env::log(format!("Transfer {} from {} to {}", amount, sender_id, receiver_id).as_bytes());
if let Some(memo) = memo {
env::log(format!("Memo: {}", memo).as_bytes());
}
}
}
| 29.349206 | 98 | 0.569497 |
765f549ef2f1dd33a06e7205c63318d9a4413600 | 1,015 | /*!
Utilities for suspending the test.
*/
use core::fmt::{Debug, Display};
use core::time::Duration;
use std::thread::sleep;
use tracing::{error, warn};
/**
Call this function in the middle of the test code of interest,
so that we can suspend the test and still interact with the
spawned Gaia chains and chain supervisor for debugging.
*/
pub fn suspend<R>() -> R {
warn!("suspending the test indefinitely. you can still interact with any spawned chains and relayers");
loop {
sleep(Duration::from_secs(999_999_999))
}
}
pub fn hang_on_error<E: Debug + Display>(hang_on_fail: bool) -> impl FnOnce(E) -> E {
move |e| {
if hang_on_fail {
error!("test failure occured with HANG_ON_FAIL=1, suspending the test to allow debugging: {:?}",
e);
suspend()
} else {
error!("test failure occured. set HANG_ON_FAIL=1 to suspend the test on failure for debugging: {}",
e);
e
}
}
}
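// Illustrative sketch, not part of the original module: wrapping a fallible test step with
// `hang_on_error`, so that a failure suspends the test only when the flag is set. `run_step`
// and its `String` error type are hypothetical placeholders.
#[allow(dead_code)]
fn hang_on_error_sketch(hang_on_fail: bool) -> Result<(), String> {
    fn run_step() -> Result<(), String> {
        Err("simulated failure".to_owned())
    }
    // `map_err` routes the error through `hang_on_error`, which either suspends or just logs.
    run_step().map_err(hang_on_error(hang_on_fail))
}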
| 27.432432 | 111 | 0.614778 |
23e63a8a73909df0c2a568a955ff66c8b10ca299 | 2,411 | use std::io::Error as IoError;
use std::str::Utf8Error;
use std::num::ParseIntError;
use std::fmt::{Display, Formatter};
/// Reads the file contents
fn _read_file(path: &str) -> std::result::Result<String, std::io::Error> {
std::fs::read_to_string(path)
}
/// Converts bytes to UTF-8 content
fn _to_utf8(v: &[u8]) -> std::result::Result<&str, std::str::Utf8Error> {
std::str::from_utf8(v)
}
/// Converts a string to a u32 number
fn _to_u32(v: &str) -> std::result::Result<u32, std::num::ParseIntError> {
v.parse::<u32>()
}
//----Convert Error
#[derive(Debug)]
enum CustomError {
ParseIntError(std::num::ParseIntError),
Utf8Error(std::str::Utf8Error),
IoError(std::io::Error),
}
impl std::error::Error for CustomError {
fn source(&self) -> Option<&(dyn std::error::Error + 'static)> {
match &self {
CustomError::IoError(ref e) => Some(e),
CustomError::Utf8Error(ref e) => Some(e),
CustomError::ParseIntError(ref e) => Some(e),
}
}
}
impl Display for CustomError {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match &self {
CustomError::IoError(ref e) => e.fmt(f),
CustomError::Utf8Error(ref e) => e.fmt(f),
CustomError::ParseIntError(ref e) => e.fmt(f),
}
}
}
impl From<ParseIntError> for CustomError {
fn from(s: std::num::ParseIntError) -> Self {
CustomError::ParseIntError(s)
}
}
impl From<IoError> for CustomError {
fn from(s: std::io::Error) -> Self {
CustomError::IoError(s)
}
}
impl From<Utf8Error> for CustomError {
fn from(s: std::str::Utf8Error) -> Self {
CustomError::Utf8Error(s)
}
}
/// Custom Result type: IResult
pub type IResult<I> = std::result::Result<I, CustomError>;
pub type IOResult<I, O> = std::result::Result<(I, O), CustomError>;
/// Reads the file contents
fn read_file(path: &str) -> IResult<String> {
let val = std::fs::read_to_string(path)?;
Ok(val)
}
/// Converts bytes to UTF-8 content
fn to_utf8(v: &[u8]) -> IResult<&str> {
let x = std::str::from_utf8(v)?;
Ok(x)
}
/// Converts a string to a u32 number
fn to_u32(v: &str) -> IResult<u32> {
let i = v.parse::<u32>()?;
Ok(i)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_to_u32() -> std::result::Result<(), CustomError> {
let path = "./dat";
let v = read_file(path)?;
let x = to_utf8(v.as_bytes())?;
let u = to_u32(x)?;
assert_eq!(8, u);
Ok(())
}
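    // Illustrative addition: `.into()` lifts a std error into `CustomError` through the
    // `From` impls above, without going through `?`.
    #[test]
    fn test_into_custom_error() {
        let err: CustomError = "abc".parse::<u32>().unwrap_err().into();
        assert!(matches!(err, CustomError::ParseIntError(_)));
    }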
} | 23.182692 | 74 | 0.577354 |
5b52b1925de9a71a438469d6bded0c528303b5f5 | 2,135 | // Copyright 2017 PingCAP, Inc.
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
extern crate benchmark;
extern crate clap;
extern crate futures;
extern crate grpcio as grpc;
extern crate grpcio_proto as grpc_proto;
#[macro_use]
extern crate log;
extern crate rand;
use std::env;
use std::sync::Arc;
use benchmark::{init_log, Worker};
use clap::{App, Arg};
use futures::sync::oneshot;
use futures::Future;
use grpc::{Environment, ServerBuilder};
use grpc_proto::testing::services_grpc::create_worker_service;
use rand::Rng;
const LOG_FILE: &str = "GRPCIO_BENCHMARK_LOG_FILE";
fn main() {
let matches = App::new("Benchmark QpsWorker")
.about("ref http://www.grpc.io/docs/guides/benchmarking.html")
.arg(
Arg::with_name("port")
.long("driver_port")
.help("The port the worker should listen on. For example, \"8080\"")
.takes_value(true),
)
.get_matches();
let port: u16 = matches.value_of("port").unwrap_or("8080").parse().unwrap();
let _log_guard = init_log(
env::var(LOG_FILE)
.ok()
.map(|lf| format!("{}.{}", lf, rand::thread_rng().gen::<u32>())),
);
let env = Arc::new(Environment::new(2));
let (tx, rx) = oneshot::channel();
let worker = Worker::new(tx);
let service = create_worker_service(worker);
let mut server = ServerBuilder::new(env)
.register_service(service)
.bind("[::]", port)
.build()
.unwrap();
for &(ref host, port) in server.bind_addrs() {
info!("listening on {}:{}", host, port);
}
server.start();
let _ = rx.wait();
let _ = server.shutdown().wait();
}
| 29.246575 | 84 | 0.635129 |
d7bab851cd5447fd0b0997fc0b06c40642f4368e | 439 | //! This module contains implementations for creating **Comments**, **Overrides**, and **Updates**
//! on a bodhi instance. Creating **Releases** is possible with the REST API, but not implemented
//! yet.
mod traits;
pub(crate) use traits::Create;
mod comments;
pub use comments::{CommentBuilder, NewComment};
mod overrides;
pub use overrides::{NewOverride, OverrideBuilder};
mod updates;
pub use updates::{NewUpdate, UpdateBuilder};
| 27.4375 | 98 | 0.742597 |
ac16b43fbfa7f105bdc6db2d80607bf6a7da3785 | 78,793 | /*
* Swaggy Jenkins
*
* Jenkins API clients generated from Swagger / Open API specification
*
* The version of the OpenAPI document: 1.1.2-pre.0
* Contact: [email protected]
* Generated by: https://openapi-generator.tech
*/
use reqwest;
use crate::apis::ResponseContent;
use super::{Error, configuration};
/// struct for typed errors of method [`delete_pipeline_queue_item`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum DeletePipelineQueueItemError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_authenticated_user`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetAuthenticatedUserError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_classes`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetClassesError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_json_web_key`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetJsonWebKeyError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_json_web_token`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetJsonWebTokenError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_organisation`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetOrganisationError {
Status401(),
Status403(),
Status404(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_organisations`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetOrganisationsError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineError {
Status401(),
Status403(),
Status404(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_activities`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineActivitiesError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_branch`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineBranchError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_branch_run`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineBranchRunError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_branches`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineBranchesError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_folder`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineFolderError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_folder_pipeline`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineFolderPipelineError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_queue`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineQueueError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_run`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineRunError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_run_log`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineRunLogError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_run_node`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineRunNodeError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_run_node_step`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineRunNodeStepError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_run_node_step_log`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineRunNodeStepLogError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_run_node_steps`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineRunNodeStepsError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_run_nodes`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineRunNodesError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipeline_runs`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelineRunsError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_pipelines`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetPipelinesError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_scm`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetScmError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_scm_organisation_repositories`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetScmOrganisationRepositoriesError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_scm_organisation_repository`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetScmOrganisationRepositoryError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_scm_organisations`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetScmOrganisationsError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_user`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetUserError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_user_favorites`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetUserFavoritesError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`get_users`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum GetUsersError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`post_pipeline_run`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum PostPipelineRunError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`post_pipeline_runs`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum PostPipelineRunsError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`put_pipeline_favorite`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum PutPipelineFavoriteError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`put_pipeline_run`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum PutPipelineRunError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`search`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum SearchError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// struct for typed errors of method [`search_classes`]
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(untagged)]
pub enum SearchClassesError {
Status401(),
Status403(),
UnknownValue(serde_json::Value),
}
/// Delete queue item from an organization pipeline queue
pub async fn delete_pipeline_queue_item(configuration: &configuration::Configuration, organization: &str, pipeline: &str, queue: &str) -> Result<(), Error<DeletePipelineQueueItemError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/queue/{queue}", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline), queue=crate::apis::urlencode(queue));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::DELETE, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
Ok(())
} else {
let local_var_entity: Option<DeletePipelineQueueItemError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve authenticated user details for an organization
pub async fn get_authenticated_user(configuration: &configuration::Configuration, organization: &str) -> Result<crate::models::User, Error<GetAuthenticatedUserError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/user/", local_var_configuration.base_path, organization=crate::apis::urlencode(organization));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetAuthenticatedUserError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
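// Illustrative usage sketch, hand-written rather than generated: calling
// `get_authenticated_user` with a client configuration. The base path and credentials are
// placeholders, and the `Configuration::new()` constructor plus the `base_path`/`basic_auth`
// fields (and `Debug` on `crate::models::User`) are assumed to follow the usual
// openapi-generator reqwest layout.
#[allow(dead_code)]
async fn example_get_authenticated_user() {
    let mut config = configuration::Configuration::new();
    config.base_path = "http://localhost:8080".to_owned();
    config.basic_auth = Some(("admin".to_owned(), Some("admin-api-token".to_owned())));
    match get_authenticated_user(&config, "jenkins").await {
        Ok(user) => println!("authenticated as: {:?}", user),
        Err(err) => eprintln!("request failed: {}", err),
    }
}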
/// Get a list of class names supported by a given class
pub async fn get_classes(configuration: &configuration::Configuration, class: &str) -> Result<String, Error<GetClassesError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/classes/{class}", local_var_configuration.base_path, class=crate::apis::urlencode(class));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetClassesError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve JSON Web Key
pub async fn get_json_web_key(configuration: &configuration::Configuration, key: i32) -> Result<String, Error<GetJsonWebKeyError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/jwt-auth/jwks/{key}", local_var_configuration.base_path, key=key);
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetJsonWebKeyError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve JSON Web Token
pub async fn get_json_web_token(configuration: &configuration::Configuration, expiry_time_in_mins: Option<i32>, max_expiry_time_in_mins: Option<i32>) -> Result<String, Error<GetJsonWebTokenError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/jwt-auth/token", local_var_configuration.base_path);
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_str) = expiry_time_in_mins {
local_var_req_builder = local_var_req_builder.query(&[("expiryTimeInMins", &local_var_str.to_string())]);
}
if let Some(ref local_var_str) = max_expiry_time_in_mins {
local_var_req_builder = local_var_req_builder.query(&[("maxExpiryTimeInMins", &local_var_str.to_string())]);
}
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetJsonWebTokenError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
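// Usage sketch (not part of the generated bindings): request a short-lived JSON Web
// Token from the jwt-auth endpoint above. The host below is a placeholder, and the
// sketch assumes the generated `configuration::Configuration` implements `Default`
// and exposes a public `base_path: String`, as in typical openapi-generator output.
#[allow(dead_code)]
async fn jwt_token_example() {
    let mut conf = configuration::Configuration::default();
    conf.base_path = "http://jenkins.example.com:8080".to_owned();
    // Ask for a token valid for five minutes; both expiry arguments are optional.
    match get_json_web_token(&conf, Some(5), None).await {
        Ok(token) => println!("jwt: {}", token),
        // `{:?}` assumes the generated error types derive `Debug`.
        Err(err) => eprintln!("token request failed: {:?}", err),
    }
}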
/// Retrieve organization details
pub async fn get_organisation(configuration: &configuration::Configuration, organization: &str) -> Result<crate::models::Organisation, Error<GetOrganisationError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}", local_var_configuration.base_path, organization=crate::apis::urlencode(organization));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetOrganisationError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve details for all organizations
pub async fn get_organisations(configuration: &configuration::Configuration) -> Result<Vec<crate::models::Organisation>, Error<GetOrganisationsError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/", local_var_configuration.base_path);
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetOrganisationsError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve pipeline details for an organization
pub async fn get_pipeline(configuration: &configuration::Configuration, organization: &str, pipeline: &str) -> Result<crate::models::Pipeline, Error<GetPipelineError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve details of all activities for an organization pipeline
pub async fn get_pipeline_activities(configuration: &configuration::Configuration, organization: &str, pipeline: &str) -> Result<Vec<crate::models::PipelineActivity>, Error<GetPipelineActivitiesError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/activities", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineActivitiesError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve branch details for an organization pipeline
pub async fn get_pipeline_branch(configuration: &configuration::Configuration, organization: &str, pipeline: &str, branch: &str) -> Result<crate::models::BranchImpl, Error<GetPipelineBranchError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/branches/{branch}/", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline), branch=crate::apis::urlencode(branch));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineBranchError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve branch run details for an organization pipeline
pub async fn get_pipeline_branch_run(configuration: &configuration::Configuration, organization: &str, pipeline: &str, branch: &str, run: &str) -> Result<crate::models::PipelineRun, Error<GetPipelineBranchRunError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/branches/{branch}/runs/{run}", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline), branch=crate::apis::urlencode(branch), run=crate::apis::urlencode(run));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineBranchRunError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve details of all branches for an organization pipeline
pub async fn get_pipeline_branches(configuration: &configuration::Configuration, organization: &str, pipeline: &str) -> Result<crate::models::MultibranchPipeline, Error<GetPipelineBranchesError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/branches", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineBranchesError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve pipeline folder for an organization
pub async fn get_pipeline_folder(configuration: &configuration::Configuration, organization: &str, folder: &str) -> Result<crate::models::PipelineFolderImpl, Error<GetPipelineFolderError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{folder}/", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), folder=crate::apis::urlencode(folder));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineFolderError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve pipeline details for an organization folder
pub async fn get_pipeline_folder_pipeline(configuration: &configuration::Configuration, organization: &str, pipeline: &str, folder: &str) -> Result<crate::models::PipelineImpl, Error<GetPipelineFolderPipelineError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{folder}/pipelines/{pipeline}", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline), folder=crate::apis::urlencode(folder));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineFolderPipelineError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve queue details for an organization pipeline
pub async fn get_pipeline_queue(configuration: &configuration::Configuration, organization: &str, pipeline: &str) -> Result<Vec<crate::models::QueueItemImpl>, Error<GetPipelineQueueError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/queue", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineQueueError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve run details for an organization pipeline
pub async fn get_pipeline_run(configuration: &configuration::Configuration, organization: &str, pipeline: &str, run: &str) -> Result<crate::models::PipelineRun, Error<GetPipelineRunError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/runs/{run}", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline), run=crate::apis::urlencode(run));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineRunError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Get log for a pipeline run
pub async fn get_pipeline_run_log(configuration: &configuration::Configuration, organization: &str, pipeline: &str, run: &str, start: Option<i32>, download: Option<bool>) -> Result<String, Error<GetPipelineRunLogError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/runs/{run}/log", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline), run=crate::apis::urlencode(run));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_str) = start {
local_var_req_builder = local_var_req_builder.query(&[("start", &local_var_str.to_string())]);
}
if let Some(ref local_var_str) = download {
local_var_req_builder = local_var_req_builder.query(&[("download", &local_var_str.to_string())]);
}
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineRunLogError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
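// Usage sketch (not part of the generated bindings): fetch the console log of a run,
// starting at offset 0 and without forcing a download. Host, organization, pipeline,
// and run names are placeholders, and the configuration fields are assumed to match
// typical openapi-generator output (`Default`, public `base_path`, tuple `basic_auth`).
// Note that because the generated function above deserializes the body with
// `serde_json::from_str::<String>`, a plain-text log body is reported as an error.
#[allow(dead_code)]
async fn run_log_example() {
    let mut conf = configuration::Configuration::default();
    conf.base_path = "http://jenkins.example.com:8080".to_owned();
    conf.basic_auth = Some(("admin".to_owned(), Some("api-token".to_owned())));
    match get_pipeline_run_log(&conf, "jenkins", "my-pipeline", "1", Some(0), Some(false)).await {
        Ok(log) => println!("{}", log),
        Err(err) => eprintln!("fetching the run log failed: {:?}", err),
    }
}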
/// Retrieve run node details for an organization pipeline
pub async fn get_pipeline_run_node(configuration: &configuration::Configuration, organization: &str, pipeline: &str, run: &str, node: &str) -> Result<crate::models::PipelineRunNode, Error<GetPipelineRunNodeError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/runs/{run}/nodes/{node}", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline), run=crate::apis::urlencode(run), node=crate::apis::urlencode(node));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineRunNodeError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve run node step details for an organization pipeline
pub async fn get_pipeline_run_node_step(configuration: &configuration::Configuration, organization: &str, pipeline: &str, run: &str, node: &str, step: &str) -> Result<crate::models::PipelineStepImpl, Error<GetPipelineRunNodeStepError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/runs/{run}/nodes/{node}/steps/{step}", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline), run=crate::apis::urlencode(run), node=crate::apis::urlencode(node), step=crate::apis::urlencode(step));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineRunNodeStepError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Get log for a pipeline run node step
pub async fn get_pipeline_run_node_step_log(configuration: &configuration::Configuration, organization: &str, pipeline: &str, run: &str, node: &str, step: &str) -> Result<String, Error<GetPipelineRunNodeStepLogError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/runs/{run}/nodes/{node}/steps/{step}/log", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline), run=crate::apis::urlencode(run), node=crate::apis::urlencode(node), step=crate::apis::urlencode(step));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineRunNodeStepLogError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve details of all steps for a run node of an organization pipeline
pub async fn get_pipeline_run_node_steps(configuration: &configuration::Configuration, organization: &str, pipeline: &str, run: &str, node: &str) -> Result<Vec<crate::models::PipelineStepImpl>, Error<GetPipelineRunNodeStepsError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/runs/{run}/nodes/{node}/steps", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline), run=crate::apis::urlencode(run), node=crate::apis::urlencode(node));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineRunNodeStepsError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve details of all nodes for a run of an organization pipeline
pub async fn get_pipeline_run_nodes(configuration: &configuration::Configuration, organization: &str, pipeline: &str, run: &str) -> Result<Vec<crate::models::PipelineRunNode>, Error<GetPipelineRunNodesError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/runs/{run}/nodes", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline), run=crate::apis::urlencode(run));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineRunNodesError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve details of all runs for an organization pipeline
pub async fn get_pipeline_runs(configuration: &configuration::Configuration, organization: &str, pipeline: &str) -> Result<Vec<crate::models::PipelineRun>, Error<GetPipelineRunsError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/runs", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelineRunsError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve details of all pipelines for an organization
pub async fn get_pipelines(configuration: &configuration::Configuration, organization: &str) -> Result<Vec<crate::models::Pipeline>, Error<GetPipelinesError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/", local_var_configuration.base_path, organization=crate::apis::urlencode(organization));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetPipelinesError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
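// Usage sketch (not part of the generated bindings): list every pipeline visible in an
// organization using HTTP basic auth. The host, organization name ("jenkins"), and
// credentials are placeholders; `basic_auth` is assumed to be the
// `(user, Option<password>)` tuple consumed by the request builders above, and
// `Configuration` is assumed to implement `Default`.
#[allow(dead_code)]
async fn list_pipelines_example() {
    let mut conf = configuration::Configuration::default();
    conf.base_path = "http://jenkins.example.com:8080".to_owned();
    conf.basic_auth = Some(("admin".to_owned(), Some("api-token".to_owned())));
    match get_pipelines(&conf, "jenkins").await {
        Ok(pipelines) => println!("found {} pipelines", pipelines.len()),
        Err(err) => eprintln!("listing pipelines failed: {:?}", err),
    }
}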
/// Retrieve SCM details for an organization
pub async fn get_scm(configuration: &configuration::Configuration, organization: &str, scm: &str) -> Result<crate::models::GithubScm, Error<GetScmError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/scm/{scm}", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), scm=crate::apis::urlencode(scm));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetScmError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve details of an SCM organization's repositories for an organization
pub async fn get_scm_organisation_repositories(configuration: &configuration::Configuration, organization: &str, scm: &str, scm_organisation: &str, credential_id: Option<&str>, page_size: Option<i32>, page_number: Option<i32>) -> Result<Vec<crate::models::GithubOrganization>, Error<GetScmOrganisationRepositoriesError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/scm/{scm}/organizations/{scmOrganisation}/repositories", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), scm=crate::apis::urlencode(scm), scmOrganisation=crate::apis::urlencode(scm_organisation));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_str) = credential_id {
local_var_req_builder = local_var_req_builder.query(&[("credentialId", &local_var_str.to_string())]);
}
if let Some(ref local_var_str) = page_size {
local_var_req_builder = local_var_req_builder.query(&[("pageSize", &local_var_str.to_string())]);
}
if let Some(ref local_var_str) = page_number {
local_var_req_builder = local_var_req_builder.query(&[("pageNumber", &local_var_str.to_string())]);
}
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetScmOrganisationRepositoriesError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve SCM organization repository details for an organization
pub async fn get_scm_organisation_repository(configuration: &configuration::Configuration, organization: &str, scm: &str, scm_organisation: &str, repository: &str, credential_id: Option<&str>) -> Result<Vec<crate::models::GithubOrganization>, Error<GetScmOrganisationRepositoryError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/scm/{scm}/organizations/{scmOrganisation}/repositories/{repository}", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), scm=crate::apis::urlencode(scm), scmOrganisation=crate::apis::urlencode(scm_organisation), repository=crate::apis::urlencode(repository));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_str) = credential_id {
local_var_req_builder = local_var_req_builder.query(&[("credentialId", &local_var_str.to_string())]);
}
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetScmOrganisationRepositoryError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve details of the SCM organizations for an organization
pub async fn get_scm_organisations(configuration: &configuration::Configuration, organization: &str, scm: &str, credential_id: Option<&str>) -> Result<Vec<crate::models::GithubOrganization>, Error<GetScmOrganisationsError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/scm/{scm}/organizations", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), scm=crate::apis::urlencode(scm));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_str) = credential_id {
local_var_req_builder = local_var_req_builder.query(&[("credentialId", &local_var_str.to_string())]);
}
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetScmOrganisationsError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve user details for an organization
pub async fn get_user(configuration: &configuration::Configuration, organization: &str, user: &str) -> Result<crate::models::User, Error<GetUserError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/users/{user}", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), user=crate::apis::urlencode(user));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetUserError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve favorites for a user
pub async fn get_user_favorites(configuration: &configuration::Configuration, user: &str) -> Result<Vec<crate::models::FavoriteImpl>, Error<GetUserFavoritesError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/users/{user}/favorites", local_var_configuration.base_path, user=crate::apis::urlencode(user));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetUserFavoritesError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Retrieve details of all users for an organization
pub async fn get_users(configuration: &configuration::Configuration, organization: &str) -> Result<crate::models::User, Error<GetUsersError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/users/", local_var_configuration.base_path, organization=crate::apis::urlencode(organization));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<GetUsersError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Replay an organization pipeline run
pub async fn post_pipeline_run(configuration: &configuration::Configuration, organization: &str, pipeline: &str, run: &str) -> Result<crate::models::QueueItemImpl, Error<PostPipelineRunError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/runs/{run}/replay", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline), run=crate::apis::urlencode(run));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::POST, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<PostPipelineRunError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Start a build for an organization pipeline
pub async fn post_pipeline_runs(configuration: &configuration::Configuration, organization: &str, pipeline: &str) -> Result<crate::models::QueueItemImpl, Error<PostPipelineRunsError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/runs", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::POST, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<PostPipelineRunsError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
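// Usage sketch (not part of the generated bindings): queue a new run for a pipeline and
// print the resulting queue item; a queued or running build could later be cancelled
// through `put_pipeline_run` (the `/stop` endpoint below). Host, organization, and
// pipeline names are placeholders, and `{:?}` assumes the generated models derive `Debug`.
#[allow(dead_code)]
async fn trigger_run_example() {
    let mut conf = configuration::Configuration::default();
    conf.base_path = "http://jenkins.example.com:8080".to_owned();
    conf.basic_auth = Some(("admin".to_owned(), Some("api-token".to_owned())));
    match post_pipeline_runs(&conf, "jenkins", "my-pipeline").await {
        Ok(queue_item) => println!("queued: {:?}", queue_item),
        Err(err) => eprintln!("failed to queue a run: {:?}", err),
    }
}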
/// Favorite/unfavorite a pipeline
pub async fn put_pipeline_favorite(configuration: &configuration::Configuration, organization: &str, pipeline: &str, body: bool) -> Result<crate::models::FavoriteImpl, Error<PutPipelineFavoriteError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/favorite", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::PUT, local_var_uri_str.as_str());
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
local_var_req_builder = local_var_req_builder.json(&body);
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<PutPipelineFavoriteError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Stop a build of an organization pipeline
pub async fn put_pipeline_run(configuration: &configuration::Configuration, organization: &str, pipeline: &str, run: &str, blocking: Option<&str>, time_out_in_secs: Option<i32>) -> Result<crate::models::PipelineRun, Error<PutPipelineRunError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/organizations/{organization}/pipelines/{pipeline}/runs/{run}/stop", local_var_configuration.base_path, organization=crate::apis::urlencode(organization), pipeline=crate::apis::urlencode(pipeline), run=crate::apis::urlencode(run));
let mut local_var_req_builder = local_var_client.request(reqwest::Method::PUT, local_var_uri_str.as_str());
if let Some(ref local_var_str) = blocking {
local_var_req_builder = local_var_req_builder.query(&[("blocking", &local_var_str.to_string())]);
}
if let Some(ref local_var_str) = time_out_in_secs {
local_var_req_builder = local_var_req_builder.query(&[("timeOutInSecs", &local_var_str.to_string())]);
}
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<PutPipelineRunError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Search for any resource details
pub async fn search(configuration: &configuration::Configuration, q: &str) -> Result<String, Error<SearchError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/search/", local_var_configuration.base_path);
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
local_var_req_builder = local_var_req_builder.query(&[("q", &q.to_string())]);
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<SearchError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
/// Get classes details
pub async fn search_classes(configuration: &configuration::Configuration, q: &str) -> Result<String, Error<SearchClassesError>> {
let local_var_configuration = configuration;
let local_var_client = &local_var_configuration.client;
let local_var_uri_str = format!("{}/blue/rest/classes/", local_var_configuration.base_path);
let mut local_var_req_builder = local_var_client.request(reqwest::Method::GET, local_var_uri_str.as_str());
local_var_req_builder = local_var_req_builder.query(&[("q", &q.to_string())]);
if let Some(ref local_var_user_agent) = local_var_configuration.user_agent {
local_var_req_builder = local_var_req_builder.header(reqwest::header::USER_AGENT, local_var_user_agent.clone());
}
if let Some(ref local_var_auth_conf) = local_var_configuration.basic_auth {
local_var_req_builder = local_var_req_builder.basic_auth(local_var_auth_conf.0.to_owned(), local_var_auth_conf.1.to_owned());
};
let local_var_req = local_var_req_builder.build()?;
let local_var_resp = local_var_client.execute(local_var_req).await?;
let local_var_status = local_var_resp.status();
let local_var_content = local_var_resp.text().await?;
if !local_var_status.is_client_error() && !local_var_status.is_server_error() {
serde_json::from_str(&local_var_content).map_err(Error::from)
} else {
let local_var_entity: Option<SearchClassesError> = serde_json::from_str(&local_var_content).ok();
let local_var_error = ResponseContent { status: local_var_status, content: local_var_content, entity: local_var_entity };
Err(Error::ResponseError(local_var_error))
}
}
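// Editor's note: the sketch below is not part of the generated client; it only
// illustrates how the endpoint functions above are typically driven. It assumes
// the caller already holds a `configuration::Configuration`, and the helper name
// `print_search_result_size` is hypothetical.
#[allow(dead_code)]
async fn print_search_result_size(
    configuration: &configuration::Configuration,
    query: &str,
) -> Result<(), Error<SearchError>> {
    // `search` sends `query` as the `q` query parameter (reqwest URL-encodes it)
    // and returns the raw response body whenever the status is not a 4xx/5xx.
    let body = search(configuration, query).await?;
    println!("search returned {} bytes", body.len());
    Ok(())
}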
| 51.465056 | 375 | 0.75725 |
f99c2cc5ff9cf8ff851434b903dbfd06836eb07a | 567 | // rustfmt-wrap_comments: true
// rustfmt-max_width: 50
// This example shows how to configure fern to output really nicely colored logs
// - aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
// - aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
// - aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
// - when the log level is info, the level name is green and the rest of the line is white
// - aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
// - aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
fn func1() {}
| 47.25 | 94 | 0.802469 |
8a22bc06aceebdc11b0e789f49cbab7206ccf06b | 496 | extern crate libtest;
use serde_json;
use std::io;
use libtest::models::all::required::Assembled;
use libtest::models::idol::ExpandsJson;
fn main() -> Result<(), i32> {
let mut value: serde_json::Value =
serde_json::from_reader(io::stdin()).expect("Invalid json input");
let expanded = Assembled::expand_json(&mut value);
let expanded = match expanded {
Some(new) => new,
None => value,
};
println!("{}", serde_json::to_string_pretty(&expanded).expect(""));
Ok(())
}
| 22.545455 | 70 | 0.66129 |
d91011d49177a200d8716e4e84f154e364e7dd46 | 16,418 | //! Functions and types for working with CUDA kernels.
use context::{CacheConfig, SharedMemoryConfig};
use cuda_sys::cuda::{self, CUfunction};
use error::{CudaResult, ToResult};
use module::Module;
use std::marker::PhantomData;
use std::mem::transmute;
/// Dimensions of a grid, or the number of thread blocks in a kernel launch.
///
/// Each component of a `GridSize` must be at least 1. The maximum size depends on your device's
/// compute capability, but maximums of `x = (2^31)-1, y = 65535, z = 65535` are common. Launching
/// a kernel with a grid size greater than these limits will cause an error.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct GridSize {
/// Width of grid in blocks
pub x: u32,
/// Height of grid in blocks
pub y: u32,
/// Depth of grid in blocks
pub z: u32,
}
impl GridSize {
/// Create a one-dimensional grid of `x` blocks
#[inline]
pub fn x(x: u32) -> GridSize {
GridSize { x, y: 1, z: 1 }
}
/// Create a two-dimensional grid of `x * y` blocks
#[inline]
pub fn xy(x: u32, y: u32) -> GridSize {
GridSize { x, y, z: 1 }
}
/// Create a three-dimensional grid of `x * y * z` blocks
#[inline]
pub fn xyz(x: u32, y: u32, z: u32) -> GridSize {
GridSize { x, y, z }
}
}
impl From<u32> for GridSize {
fn from(x: u32) -> GridSize {
GridSize::x(x)
}
}
impl From<(u32, u32)> for GridSize {
fn from((x, y): (u32, u32)) -> GridSize {
GridSize::xy(x, y)
}
}
impl From<(u32, u32, u32)> for GridSize {
fn from((x, y, z): (u32, u32, u32)) -> GridSize {
GridSize::xyz(x, y, z)
}
}
impl<'a> From<&'a GridSize> for GridSize {
fn from(other: &GridSize) -> GridSize {
other.clone()
}
}
/// Dimensions of a thread block, or the number of threads in a block.
///
/// Each component of a `BlockSize` must be at least 1. The maximum size depends on your device's
/// compute capability, but maximums of `x = 1024, y = 1024, z = 64` are common. In addition, the
/// limit on total number of threads in a block (`x * y * z`) is also defined by the compute
/// capability, typically 1024. Launching a kernel with a block size greater than these limits will
/// cause an error.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct BlockSize {
/// X dimension of each thread block
pub x: u32,
/// Y dimension of each thread block
pub y: u32,
/// Z dimension of each thread block
pub z: u32,
}
impl BlockSize {
/// Create a one-dimensional block of `x` threads
#[inline]
pub fn x(x: u32) -> BlockSize {
BlockSize { x, y: 1, z: 1 }
}
/// Create a two-dimensional block of `x * y` threads
#[inline]
pub fn xy(x: u32, y: u32) -> BlockSize {
BlockSize { x, y, z: 1 }
}
/// Create a three-dimensional block of `x * y * z` threads
#[inline]
pub fn xyz(x: u32, y: u32, z: u32) -> BlockSize {
BlockSize { x, y, z }
}
}
impl From<u32> for BlockSize {
fn from(x: u32) -> BlockSize {
BlockSize::x(x)
}
}
impl From<(u32, u32)> for BlockSize {
fn from((x, y): (u32, u32)) -> BlockSize {
BlockSize::xy(x, y)
}
}
impl From<(u32, u32, u32)> for BlockSize {
fn from((x, y, z): (u32, u32, u32)) -> BlockSize {
BlockSize::xyz(x, y, z)
}
}
impl<'a> From<&'a BlockSize> for BlockSize {
fn from(other: &BlockSize) -> BlockSize {
other.clone()
}
}
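// Editor's note: illustrative helper, not part of the original API. It shows the
// usual ceiling-division calculation for covering `n` elements with 1-D blocks;
// the name `launch_dims_1d` is hypothetical.
#[allow(dead_code)]
fn launch_dims_1d(n: u32, threads_per_block: u32) -> (GridSize, BlockSize) {
    // Round up so every element is covered; kernels are expected to bounds-check
    // the extra threads in the final block. Both dimensions must be at least 1.
    let blocks = ((n + threads_per_block - 1) / threads_per_block).max(1);
    (GridSize::x(blocks), BlockSize::x(threads_per_block))
}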
/// All supported function attributes for [Function::get_attribute](struct.Function.html#method.get_attribute)
#[repr(u32)]
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
pub enum FunctionAttribute {
/// The maximum number of threads per block, beyond which a launch would fail. This depends on
/// both the function and the device.
MaxThreadsPerBlock = 0,
/// The size in bytes of the statically-allocated shared memory required by this function.
SharedMemorySizeBytes = 1,
/// The size in bytes of the constant memory required by this function
ConstSizeBytes = 2,
/// The size in bytes of local memory used by each thread of this function
LocalSizeBytes = 3,
/// The number of registers used by each thread of this function
NumRegisters = 4,
/// The PTX virtual architecture version for which the function was compiled. This value is the
/// major PTX version * 10 + the minor PTX version, so version 1.3 would return the value 13.
PtxVersion = 5,
/// The binary architecture version for which the function was compiled. Encoded the same way as
/// PtxVersion.
BinaryVersion = 6,
/// The attribute to indicate whether the function has been compiled with user specified
/// option "-Xptxas --dlcm=ca" set.
CacheModeCa = 7,
#[doc(hidden)]
__Nonexhaustive = 8,
}
/// Handle to a global kernel function.
#[derive(Debug)]
pub struct Function<'a> {
inner: CUfunction,
module: PhantomData<&'a Module>,
}
impl<'a> Function<'a> {
pub(crate) fn new(inner: CUfunction, _module: &Module) -> Function {
Function {
inner,
module: PhantomData,
}
}
/// Returns information about a function.
///
/// # Examples:
///
/// ```
/// # use rustacuda::*;
/// # let _ctx = quick_init().unwrap();
/// # use rustacuda::module::Module;
/// # use std::ffi::CString;
/// # let ptx = CString::new(include_str!("../resources/add.ptx")).unwrap();
/// # let module = Module::load_from_string(&ptx).unwrap();
/// # let name = CString::new("sum").unwrap();
/// use rustacuda::function::FunctionAttribute;
/// let function = module.get_function(&name).unwrap();
/// let shared_memory = function.get_attribute(FunctionAttribute::SharedMemorySizeBytes).unwrap();
/// println!("This function uses {} bytes of shared memory", shared_memory);
/// ```
pub fn get_attribute(&self, attr: FunctionAttribute) -> CudaResult<i32> {
unsafe {
let mut val = 0i32;
cuda::cuFuncGetAttribute(
&mut val as *mut i32,
// This should be safe, as the repr and values of FunctionAttribute should match.
::std::mem::transmute(attr),
self.inner,
).to_result()?;
Ok(val)
}
}
/// Sets the preferred cache configuration for this function.
///
/// On devices where L1 cache and shared memory use the same hardware resources, this sets the
/// preferred cache configuration for this function. This is only a preference. The
/// driver will use the requested configuration if possible, but is free to choose a different
/// configuration if required to execute the function. This setting will override the
/// context-wide setting.
///
/// This setting does nothing on devices where the size of the L1 cache and shared memory are
/// fixed.
///
/// # Example:
///
/// ```
/// # use rustacuda::*;
/// # let _ctx = quick_init().unwrap();
/// # use rustacuda::module::Module;
/// # use std::ffi::CString;
/// # let ptx = CString::new(include_str!("../resources/add.ptx")).unwrap();
/// # let module = Module::load_from_string(&ptx).unwrap();
/// # let name = CString::new("sum").unwrap();
/// use rustacuda::context::CacheConfig;
/// let mut function = module.get_function(&name).unwrap();
/// function.set_cache_config(CacheConfig::PreferL1).unwrap();
/// ```
pub fn set_cache_config(&mut self, config: CacheConfig) -> CudaResult<()> {
unsafe { cuda::cuFuncSetCacheConfig(self.inner, transmute(config)).to_result() }
}
/// Sets the preferred shared memory configuration for this function.
///
/// On devices with configurable shared memory banks, this function will set this function's
/// shared memory bank size which is used for subsequent launches of this function. If not set,
/// the context-wide setting will be used instead.
///
/// # Example:
///
/// ```
/// # use rustacuda::*;
/// # let _ctx = quick_init().unwrap();
/// # use rustacuda::module::Module;
/// # use std::ffi::CString;
/// # let ptx = CString::new(include_str!("../resources/add.ptx")).unwrap();
/// # let module = Module::load_from_string(&ptx).unwrap();
/// # let name = CString::new("sum").unwrap();
/// use rustacuda::context::SharedMemoryConfig;
/// let mut function = module.get_function(&name).unwrap();
/// function.set_shared_memory_config(SharedMemoryConfig::EightByteBankSize).unwrap();
/// ```
pub fn set_shared_memory_config(&mut self, cfg: SharedMemoryConfig) -> CudaResult<()> {
unsafe { cuda::cuFuncSetSharedMemConfig(self.inner, transmute(cfg)).to_result() }
}
pub(crate) fn to_inner(&self) -> CUfunction {
self.inner
}
}
/// Launch a kernel function asynchronously.
///
/// # Syntax:
///
/// The format of this macro is designed to resemble the triple-chevron syntax used to launch
/// kernels in CUDA C. There are two forms available:
///
/// ```ignore
/// let result = launch!(module.function_name<<<grid, block, shared_memory_size, stream>>>(parameter1, parameter2...));
/// ```
///
/// This will load a kernel called `function_name` from the module `module` and launch it with
/// the given grid/block size on the given stream. Unlike in CUDA C, the shared memory size and
/// stream parameters are not optional. The shared memory size is the number of bytes of dynamic
/// shared memory to allocate per thread block (the memory declared as `extern __shared__ int x[]`
/// in CUDA C, not the fixed-length arrays created by `__shared__ int x[64]`); it will usually be zero.
/// `stream` must be the name of a [`Stream`](stream/struct.Stream.html) value.
/// `grid` can be any value which implements [`Into<GridSize>`](function/struct.GridSize.html) (such as
/// `u32` values, tuples of up to three `u32` values, and GridSize structures) and likewise `block`
/// can be any value that implements [`Into<BlockSize>`](function/struct.BlockSize.html).
///
/// NOTE: due to some limitations of Rust's macro system, `module` and `stream` must be local
/// variable names. Paths or function calls will not work.
///
/// The second form is similar:
///
/// ```ignore
/// let result = launch!(function<<<grid, block, shared_memory_size, stream>>>(parameter1, parameter2...));
/// ```
///
/// In this variant, the `function` parameter must be a variable. Use this form to avoid looking up
/// the kernel function for each call.
///
/// # Safety:
///
/// Launching kernels must be done in an `unsafe` block. Calling a kernel is similar to calling a
/// foreign-language function, as the kernel itself could be written in C or unsafe Rust. The kernel
/// must accept the same number and type of parameters that are passed to the `launch!` macro. The
/// kernel must not write invalid data (for example, invalid enums) into areas of memory that can
/// be copied back to the host. The programmer must ensure that the host does not access device or
/// unified memory that the kernel could write to until after calling `stream.synchronize()`.
///
/// # Examples:
///
/// ```
/// # #[macro_use]
/// # extern crate rustacuda;
/// use rustacuda::memory::*;
/// use rustacuda::module::Module;
/// use rustacuda::stream::*;
/// use std::ffi::CString;
///
/// # fn main() {
///
/// // Set up the context, load the module, and create a stream to run kernels in.
/// let _ctx = rustacuda::quick_init().unwrap();
/// let ptx = CString::new(include_str!("../resources/add.ptx")).unwrap();
/// let module = Module::load_from_string(&ptx).unwrap();
/// let stream = Stream::new(StreamFlags::NON_BLOCKING, None).unwrap();
///
/// // Create buffers for data
/// let mut in_x = DeviceBuffer::from_slice(&[1.0f32; 10]).unwrap();
/// let mut in_y = DeviceBuffer::from_slice(&[2.0f32; 10]).unwrap();
/// let mut out_1 = DeviceBuffer::from_slice(&[0.0f32; 10]).unwrap();
/// let mut out_2 = DeviceBuffer::from_slice(&[0.0f32; 10]).unwrap();
///
/// // This kernel adds each element in `in_x` and `in_y` and writes the result into `out`.
/// unsafe {
/// // Launch the kernel with one block of one thread, no dynamic shared memory on `stream`.
/// let result = launch!(module.sum<<<1, 1, 0, stream>>>(
/// in_x.as_device_ptr(),
/// in_y.as_device_ptr(),
/// out_1.as_device_ptr(),
/// out_1.len()
/// ));
/// // `launch!` returns an error in case anything went wrong with the launch itself, but
/// // kernel launches are asynchronous so errors caused by the kernel (eg. invalid memory
/// // access) will show up later at some other CUDA API call (probably at `synchronize()`
/// // below).
/// result.unwrap();
///
/// // Launch the kernel again using the `function` form:
/// let function_name = CString::new("sum").unwrap();
/// let sum = module.get_function(&function_name).unwrap();
/// // Launch with 1x1x1 (1) blocks of 10x1x1 (10) threads, to show that you can use tuples to
/// // configure grid and block size.
/// let result = launch!(sum<<<(1, 1, 1), (10, 1, 1), 0, stream>>>(
/// in_x.as_device_ptr(),
/// in_y.as_device_ptr(),
/// out_2.as_device_ptr(),
/// out_2.len()
/// ));
/// result.unwrap();
/// }
///
/// // Kernel launches are asynchronous, so we wait for the kernels to finish executing.
/// stream.synchronize().unwrap();
///
/// // Copy the results back to host memory
/// let mut out_host = [0.0f32; 20];
/// out_1.copy_to(&mut out_host[0..10]).unwrap();
/// out_2.copy_to(&mut out_host[10..20]).unwrap();
///
/// for x in out_host.iter() {
/// assert_eq!(3.0, *x);
/// }
///
/// # }
/// ```
///
#[macro_export]
macro_rules! launch {
($module:ident . $function:ident <<<$grid:expr, $block:expr, $shared:expr, $stream:ident>>>( $( $arg:expr),* )) => {
{
let name = std::ffi::CString::new(stringify!($function)).unwrap();
let function = $module.get_function(&name);
match function {
Ok(f) => launch!(f<<<$grid, $block, $shared, $stream>>>( $($arg),* ) ),
Err(e) => Err(e),
}
}
};
($function:ident <<<$grid:expr, $block:expr, $shared:expr, $stream:ident>>>( $( $arg:expr),* )) => {
{
fn assert_impl_devicecopy<T: $crate::memory::DeviceCopy>(_val: T) {};
if false {
$(
assert_impl_devicecopy($arg);
)*
};
$stream.launch(&$function, $grid, $block, $shared,
&[
$(
&$arg as *const _ as *mut ::std::ffi::c_void,
)*
]
)
}
};
}
#[cfg(test)]
mod test {
use super::*;
use memory::CopyDestination;
use memory::DeviceBuffer;
use quick_init;
use std::ffi::CString;
use stream::{Stream, StreamFlags};
#[test]
fn test_launch() {
let _context = quick_init();
let ptx_text = CString::new(include_str!("../resources/add.ptx")).unwrap();
let module = Module::load_from_string(&ptx_text).unwrap();
unsafe {
let mut in_x = DeviceBuffer::from_slice(&[2.0f32; 128]).unwrap();
let mut in_y = DeviceBuffer::from_slice(&[1.0f32; 128]).unwrap();
let mut out: DeviceBuffer<f32> = DeviceBuffer::uninitialized(128).unwrap();
let stream = Stream::new(StreamFlags::NON_BLOCKING, None).unwrap();
launch!(module.sum<<<1, 128, 0, stream>>>(in_x.as_device_ptr(), in_y.as_device_ptr(), out.as_device_ptr(), out.len())).unwrap();
stream.synchronize().unwrap();
let mut out_host = [0f32; 128];
out.copy_to(&mut out_host[..]).unwrap();
for x in out_host.iter() {
assert_eq!(3, *x as u32);
}
}
}
}
| 38.00463 | 141 | 0.594957 |
b95af1609fc018160bf0c4f927f4df88e77a4808 | 819 | use super::File;
use futures::{try_ready, Future, Poll};
use std::fs::OpenOptions as StdOpenOptions;
use std::io;
use std::path::Path;
/// Future returned by `File::open` and resolves to a `File` instance.
#[derive(Debug)]
pub struct OpenFuture<P> {
options: StdOpenOptions,
path: P,
}
impl<P> OpenFuture<P>
where
P: AsRef<Path> + Send + 'static,
{
pub(crate) fn new(options: StdOpenOptions, path: P) -> Self {
OpenFuture { options, path }
}
}
impl<P> Future for OpenFuture<P>
where
P: AsRef<Path> + Send + 'static,
{
type Item = File;
type Error = io::Error;
fn poll(&mut self) -> Poll<Self::Item, Self::Error> {
let std = try_ready!(crate::blocking_io(|| self.options.open(&self.path)));
let file = File::from_std(std);
Ok(file.into())
}
}
| 22.135135 | 83 | 0.620269 |
effe6ecadd27297ecd09d2f5cdba0d86ac32c22e | 18,573 | //! Types used in the rest of the compiler.
use std::sync::Arc;
use std::{rc::Rc, collections::HashMap};
use num::BigInt;
use crate::util::FileSpan;
use crate::elab::{environment::{AtomID, Environment},
lisp::{LispVal, Uncons}};
// /// An argument to a function.
// #[derive(Debug, DeepSizeOf)]
// pub struct Arg {
// /// The name of the argument, if not `_`.
// pub name: Option<(AtomID, FileSpan)>,
// /// True if the argument is a ghost variable (computationally irrelevant).
// pub ghost: bool,
// /// The (unparsed) type of the argument.
// pub ty: LispVal,
// }
// impl PartialEq<Arg> for Arg {
// fn eq(&self, other: &Arg) -> bool {
// let b = match (&self.name, &other.name) {
// (None, None) => true,
// (&Some((a, _)), &Some((b, _))) => a == b,
// _ => false
// };
// b && self.ghost == other.ghost && self.ty == other.ty
// }
// }
// impl Eq for Arg {}
/// The type of variant, or well founded order that recursions decrease.
#[derive(PartialEq, Eq, Debug, DeepSizeOf)]
pub enum VariantType {
/// This variant is a nonnegative natural number which decreases to 0.
Down,
/// This variant is a natural number or integer which increases while
/// remaining less than this constant.
UpLt(LispVal),
/// This variant is a natural number or integer which increases while
/// remaining less than or equal to this constant.
UpLe(LispVal)
}
/// A variant is a pure expression, together with a
/// well founded order that decreases on all calls.
pub type Variant = (LispVal, VariantType);
/// An invariant is a local variable in a loop, that is passed as an argument
/// on recursive calls.
#[derive(Debug, DeepSizeOf)]
pub struct Invariant {
/// The variable name.
pub name: AtomID,
/// True if the variable is ghost (computationally irrelevant).
pub ghost: bool,
/// The type of the variable, or none for inferred.
pub ty: Option<LispVal>,
/// The initial value of the variable.
pub val: Option<LispVal>,
}
/// A block is a local scope. Like functions, this requires explicit importing
/// of variables from external scope if they will be mutated after the block.
#[derive(Debug, DeepSizeOf)]
pub struct Block {
/// The list of variables that will be updated by the block. Variables
/// in external scope that are not in this list are treated as read only.
pub muts: Box<[(AtomID, Option<FileSpan>)]>,
/// The statements of the block.
pub stmts: Uncons
}
/// A tuple pattern, which destructures the results of assignments from functions with
/// multiple return values, as well as explicit tuple values and structs.
#[derive(Debug, DeepSizeOf)]
pub enum TuplePattern {
/// A variable binding, or `_` for an ignored binding. The `bool` is true if the variable
/// is ghost.
Name(bool, AtomID, Option<FileSpan>),
/// A type ascription. The type is unparsed.
Typed(Box<TuplePattern>, LispVal),
/// A tuple, with the given arguments.
Tuple(Box<[TuplePattern]>),
}
impl TuplePattern {
  /// The `_` tuple pattern. This is marked as ghost because it can't be referred to, so
/// it is always safe to make irrelevant.
pub const UNDER: TuplePattern = TuplePattern::Name(true, AtomID::UNDER, None);
/// The name of a variable binding (or `_` for a tuple pattern)
#[must_use] pub fn name(&self) -> AtomID {
match self {
&TuplePattern::Name(_, a, _) => a,
TuplePattern::Typed(p, _) => p.name(),
_ => AtomID::UNDER
}
}
/// The span of a variable binding (or [`None`] for a tuple pattern).
#[must_use] pub fn fspan(&self) -> Option<&FileSpan> {
match self {
TuplePattern::Name(_, _, fsp) => fsp.as_ref(),
TuplePattern::Typed(p, _) => p.fspan(),
_ => None
}
}
/// True if all the bindings in this pattern are ghost.
#[must_use] pub fn ghost(&self) -> bool {
match self {
&TuplePattern::Name(g, _, _) => g,
TuplePattern::Typed(p, _) => p.ghost(),
TuplePattern::Tuple(ps) => ps.iter().all(TuplePattern::ghost),
}
}
/// The type of this binding, or `_` if there is no explicit type.
#[must_use] pub fn ty(&self) -> LispVal {
match self {
TuplePattern::Typed(_, ty) => ty.clone(),
_ => LispVal::atom(AtomID::UNDER)
}
}
}
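// Editor's note: illustrative constructor, not in the original source; the name
// `typed_binding` is hypothetical. A binding `{x : T}` is represented as a
// `Typed` node wrapping a (non-ghost) `Name` node.
#[allow(dead_code)]
fn typed_binding(x: AtomID, ty: LispVal) -> TuplePattern {
  TuplePattern::Typed(Box::new(TuplePattern::Name(false, x, None)), ty)
}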
impl PartialEq<TuplePattern> for TuplePattern {
fn eq(&self, other: &TuplePattern) -> bool {
match (self, other) {
(TuplePattern::Name(g1, a1, _), TuplePattern::Name(g2, a2, _)) => g1 == g2 && a1 == a2,
(TuplePattern::Typed(p1, ty1), TuplePattern::Typed(p2, ty2)) => p1 == p2 && ty1 == ty2,
(TuplePattern::Tuple(ps1), TuplePattern::Tuple(ps2)) => ps1 == ps2,
_ => false
}
}
}
impl Eq for TuplePattern {}
/// A pattern, the left side of a switch statement.
#[derive(Debug, DeepSizeOf)]
pub enum Pattern {
/// A variable binding, unless this is the name of a constant in which case
/// it is a constant value.
VarOrConst(AtomID),
/// A numeric literal.
Number(BigInt),
/// A hypothesis pattern, which binds the first argument to a proof that the
/// scrutinee satisfies the pattern argument.
Hyped(AtomID, Box<Pattern>),
/// A pattern guard: Matches the inner pattern, and then if the expression returns
/// true, this is also considered to match.
With(Box<(Pattern, LispVal)>),
/// A disjunction of patterns.
Or(Box<[Pattern]>),
}
/// An expression or statement. A block is a list of expressions.
#[derive(Debug, DeepSizeOf)]
pub enum Expr {
/// A `()` literal.
Nil,
/// A variable reference.
Var(AtomID),
/// A number literal.
Number(BigInt),
/// A let binding.
Let {
/// True if the `rhs` expression should not be evaluated,
/// and all variables in the declaration should be considered ghost.
ghost: bool,
/// A tuple pattern, containing variable bindings.
lhs: TuplePattern,
/// The expression to evaluate, or [`None`] for uninitialized.
rhs: Option<Box<Expr>>,
},
/// A function call (or something that looks like one at parse time).
Call {
/// The function to call.
f: AtomID,
/// The function arguments.
args: Box<[Expr]>,
/// The variant, if needed.
variant: Option<Variant>,
},
/// An entailment proof, which takes a proof of `P1 * ... * Pn => Q` and expressions proving
/// `P1, ..., Pn` and is a hypothesis of type `Q`.
Entail(LispVal, Box<[Expr]>),
/// A block scope.
Block(Block),
/// A label, which looks exactly like a local function but has no independent stack frame.
  /// It is called like a regular function, but such calls can only appear in tail position.
Label {
/// The name of the label
name: AtomID,
/// The arguments of the label
args: Box<[TuplePattern]>,
/// The variant, for recursive calls
variant: Option<Variant>,
/// The code that is executed when you jump to the label
body: Block,
},
/// An if-then-else expression (at either block or statement level). The initial atom names
/// a hypothesis that the expression is true in one branch and false in the other.
If(Box<(Option<AtomID>, Expr, Expr, Expr)>),
/// A switch (pattern match) statement, given the initial expression and a list of match arms.
Switch(Box<Expr>, Box<[(Pattern, Expr)]>),
/// A while loop.
While {
/// A hypothesis that the condition is true in the loop and false after it.
hyp: Option<AtomID>,
/// The loop condition.
cond: Box<Expr>,
/// The variant, which must decrease on every round around the loop.
var: Option<Variant>,
/// The invariants, which must be supplied on every round around the loop.
invar: Box<[Invariant]>,
/// The body of the loop.
body: Block,
},
/// A hole `_`, which is a compile error but queries the compiler to provide a type context.
Hole(FileSpan),
}
/// A procedure kind, which defines the different kinds of function-like declarations.
#[derive(Copy, Clone, PartialEq, Eq, Debug)]
pub enum ProcKind {
/// A (pure) function, which generates a logic level function as well as code. (Body required.)
Func,
/// A procedure, which is opaque except for its type. (Body provided.)
Proc,
  /// A procedure declaration, used for forward declarations. (Body not provided.)
ProcDecl,
/// An intrinsic declaration, which is only here to put the function declaration in user code.
/// The compiler will ensure this matches an existing intrinsic, and intrinsics cannot be
/// called until they are declared using an `intrinsic` declaration.
Intrinsic,
}
crate::deep_size_0!(ProcKind);
/// A procedure (or function or intrinsic), a top level item similar to function declarations in C.
#[derive(Debug, DeepSizeOf)]
pub struct Proc {
/// The type of declaration: `func`, `proc`, `proc` with no body, or `intrinsic`.
pub kind: ProcKind,
/// The name of the procedure.
pub name: AtomID,
/// The span of the procedure name.
pub span: Option<FileSpan>,
/// The arguments of the procedure.
pub args: Box<[TuplePattern]>,
/// The return values of the procedure. (Functions and procedures return multiple values in MMC.)
pub rets: Box<[TuplePattern]>,
/// The variant, used for recursive functions.
pub variant: Option<Variant>,
/// The body of the procedure.
pub body: Block,
}
impl Proc {
/// Checks if this proc equals `other`, ignoring the `body` and `kind` fields.
/// (This is how we validate a proc against a proc decl.)
#[must_use] pub fn eq_decl(&self, other: &Proc) -> bool {
self.name == other.name &&
self.args == other.args &&
self.rets == other.rets &&
self.variant == other.variant &&
self.body.muts == other.body.muts
}
}
/// A field of a struct.
#[derive(Debug, DeepSizeOf)]
pub struct Field {
/// The name of the field.
pub name: AtomID,
/// True if the field is computationally irrelevant.
pub ghost: bool,
/// The type of the field (unparsed).
pub ty: LispVal,
}
/// A top level program item. (A program AST is a list of program items.)
#[derive(Debug, DeepSizeOf)]
pub enum AST {
/// A procedure, behind an Arc so it can be cheaply copied.
Proc(Arc<Proc>),
/// A global variable declaration.
Global {
/// The variable(s) being declared
lhs: TuplePattern,
/// The value of the declaration
rhs: Option<LispVal>,
},
/// A constant declaration.
Const {
/// The constant(s) being declared
lhs: TuplePattern,
/// The value of the declaration
rhs: LispVal,
},
/// A type definition.
Typedef {
/// The name of the newly declared type
name: AtomID,
/// The span of the name
span: Option<FileSpan>,
/// The arguments of the type declaration, for a parametric type
args: Box<[TuplePattern]>,
/// The value of the declaration (another type)
val: LispVal,
},
/// A structure definition.
Struct {
/// The name of the structure
name: AtomID,
/// The span of the name
span: Option<FileSpan>,
/// The parameters of the type
args: Box<[TuplePattern]>,
/// The fields of the structure
fields: Box<[TuplePattern]>,
},
}
impl AST {
/// Make a new `AST::Proc`.
#[must_use] pub fn proc(p: Proc) -> AST { AST::Proc(Arc::new(p)) }
}
macro_rules! make_keywords {
{$($(#[$attr:meta])* $x:ident: $e:expr,)*} => {
make_keywords! {@IMPL $($(#[$attr])* $x concat!("The keyword `", $e, "`.\n"), $e,)*}
};
{@IMPL $($(#[$attr:meta])* $x:ident $doc0:expr, $e:expr,)*} => {
/// The type of MMC keywords, which are atoms with a special role in the MMC parser.
#[derive(Debug, PartialEq, Eq, Copy, Clone)]
pub enum Keyword { $(#[doc=$doc0] $(#[$attr])* $x),* }
crate::deep_size_0!(Keyword);
impl Environment {
/// Make the initial MMC keyword map in the given environment.
#[allow(clippy::string_lit_as_bytes)]
pub fn make_keywords(&mut self) -> HashMap<AtomID, Keyword> {
let mut atoms = HashMap::new();
$(atoms.insert(self.get_atom($e.as_bytes()), Keyword::$x);)*
atoms
}
}
}
}
make_keywords! {
Add: "+",
Arrow: "=>",
Begin: "begin",
Colon: ":",
ColonEq: ":=",
Const: "const",
Else: "else",
Entail: "entail",
Func: "func",
Finish: "finish",
Ghost: "ghost",
Global: "global",
Intrinsic: "intrinsic",
Invariant: "invariant",
If: "if",
Le: "<=",
Lt: "<",
Mut: "mut",
Or: "or",
Proc: "proc",
Star: "*",
Struct: "struct",
Switch: "switch",
Typedef: "typedef",
Variant: "variant",
While: "while",
With: "with",
}
/// Possible sizes for integer operations and types.
#[derive(Copy, Clone, Debug)]
pub enum Size {
/// 8 bits, or 1 byte. Used for `u8` and `i8`.
S8,
/// 16 bits, or 2 bytes. Used for `u16` and `i16`.
S16,
/// 32 bits, or 4 bytes. Used for `u32` and `i32`.
S32,
/// 64 bits, or 8 bytes. Used for `u64` and `i64`.
S64,
/// Unbounded size. Used for `nat` and `int`. (These types are only legal for
/// ghost variables, but they are also used to indicate "correct to an unbounded model"
/// for operations like [`Unop::BitNot`] when it makes sense. We do not actually support
/// bignum compilation.)
Inf,
}
crate::deep_size_0!(Size);
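// Editor's note: illustrative helper, not in the original source; the method name
// `bytes` is hypothetical. It spells out the byte width that each `Size` variant
// denotes, per the documentation above.
impl Size {
  /// The number of bytes of a fixed-size integer, or `None` for [`Size::Inf`].
  #[allow(dead_code)]
  fn bytes(self) -> Option<u8> {
    match self {
      Size::S8 => Some(1),
      Size::S16 => Some(2),
      Size::S32 => Some(4),
      Size::S64 => Some(8),
      Size::Inf => None,
    }
  }
}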
/// (Elaborated) unary operations.
#[derive(Copy, Clone, Debug)]
pub enum Unop {
/// Logical (boolean) NOT
Not,
/// Bitwise NOT. For fixed size this is the operation `2^n - x - 1`, and
  /// for infinite size this is `-x - 1`. Note that for signed two's complement integers,
  /// bitwise NOT is `-x - 1` regardless of size, so it matches the infinite-size operation.
BitNot(Size),
}
crate::deep_size_0!(Unop);
/// (Elaborated) binary operations.
#[derive(Copy, Clone, Debug)]
pub enum Binop {
/// Integer addition
Add,
/// Integer multiplication
Mul,
/// Logical (boolean) AND
And,
/// Logical (boolean) OR
Or,
/// Bitwise AND, for signed or unsigned integers of any size
BitAnd,
/// Bitwise OR, for signed or unsigned integers of any size
BitOr,
/// Bitwise XOR, for signed or unsigned integers of any size
BitXor,
/// Less than, for signed or unsigned integers of any size
Lt,
/// Less than or equal, for signed or unsigned integers of any size
Le,
/// Equal, for signed or unsigned integers of any size
Eq,
/// Not equal, for signed or unsigned integers of any size
Ne,
}
crate::deep_size_0!(Binop);
/// A proof expression, or "hypothesis".
#[derive(Debug, DeepSizeOf)]
pub enum ProofExpr {
/// An assertion expression `(assert p): p`.
Assert(Box<PureExpr>),
}
/// Pure expressions in an abstract domain. The interpretation depends on the type,
/// but most expressions operate on the type of (signed unbounded) integers.
#[derive(Debug, DeepSizeOf)]
pub enum PureExpr {
/// A variable.
Var(AtomID),
/// An integer or natural number.
Int(BigInt),
/// The unit value `()`.
Unit,
/// A boolean literal.
Bool(bool),
/// A unary operation.
Unop(Unop, Rc<PureExpr>),
/// A binary operation.
Binop(Binop, Rc<PureExpr>, Rc<PureExpr>),
/// A tuple constructor.
Tuple(Box<[PureExpr]>),
/// An index operation `(index a i h): T` where `a: (array T n)`,
/// `i: nat`, and `h: i < n`.
Index(Box<PureExpr>, Rc<PureExpr>, Box<ProofExpr>),
/// An deref operation `(* x): T` where `x: (own T)`.
DerefOwn(Box<PureExpr>),
/// An deref operation `(* x): T` where `x: (& T)`.
Deref(Box<PureExpr>),
/// An deref operation `(* x): T` where `x: (&mut T)`.
DerefMut(Box<PureExpr>),
/// A ghost expression.
Ghost(Rc<PureExpr>),
}
/// A type, which classifies regular variables (not type variables, not hypotheses).
#[derive(Debug, DeepSizeOf)]
pub enum Type {
/// A type variable.
Var(AtomID),
/// `()` is the type with one element; `sizeof () = 0`.
Unit,
/// `bool` is the type of booleans, that is, bytes which are 0 or 1; `sizeof bool = 1`.
Bool,
/// `i(8*N)` is the type of N byte signed integers `sizeof i(8*N) = N`.
Int(Size),
/// `u(8*N)` is the type of N byte unsigned integers; `sizeof u(8*N) = N`.
UInt(Size),
/// The type `(array T n)` is an array of `n` elements of type `T`;
/// `sizeof (array T n) = sizeof T * n`.
Array(Box<Type>, Rc<PureExpr>),
/// `(own T)` is a type of owned pointers. The typehood predicate is
/// `x :> own T` iff `E. v (x |-> v) * v :> T`.
Own(Box<Type>),
/// `(& T)` is a type of borrowed pointers. This type is treated specially;
/// the `x |-> v` assumption is stored separately from regular types, and
/// `(* x)` is treated as sugar for `v`.
Ref(Box<Type>),
/// `(&mut T)` is a type of mutable pointers. This is also treated specially;
/// it is sugar for `(mut {x : (own T)})`, which is to say `x` has
/// type `own T` in the function but must also be passed back out of the
/// function.
RefMut(Box<Type>),
/// `(list A B C)` is a tuple type with elements `A, B, C`;
/// `sizeof (list A B C) = sizeof A + sizeof B + sizeof C`.
List(Box<[Type]>),
/// `(struct {x : A} {y : B} {z : C})` is the dependent version of `list`;
/// it is a tuple type with elements `A, B, C`, but the types `A, B, C` can
/// themselves refer to `x, y, z`.
/// `sizeof (struct {x : A} {_ : B x}) = sizeof A + max_x (sizeof (B x))`.
///
/// The top level declaration `(struct foo {x : A} {y : B})` desugars to
/// `(typedef foo (struct {x : A} {y : B}))`.
Struct(Box<[Type]>),
/// `(and A B C)` is an intersection type of `A, B, C`;
/// `sizeof (and A B C) = max (sizeof A, sizeof B, sizeof C)`, and
/// the typehood predicate is `x :> (and A B C)` iff
/// `x :> A /\ x :> B /\ x :> C`. (Note that this is regular conjunction,
/// not separating conjunction.)
And(Box<[Type]>),
/// `(or A B C)` is an undiscriminated anonymous union of types `A, B, C`.
/// `sizeof (or A B C) = max (sizeof A, sizeof B, sizeof C)`, and
/// the typehood predicate is `x :> (or A B C)` iff
/// `x :> A \/ x :> B \/ x :> C`.
Or(Box<[Type]>),
  /// `(ghost A)` is a computationally irrelevant version of `A`, which means
/// that the logical storage of `(ghost A)` is the same as `A` but the physical storage
/// is the same as `()`. `sizeof (ghost A) = 0`.
Ghost(Box<Type>),
/// A propositional type, used for hypotheses.
Prop(Box<Prop>),
/// A user-defined type-former.
_User(AtomID, Box<[Type]>, Box<[PureExpr]>),
}
impl Type {
/// Create a ghost node if the boolean is true.
#[must_use] pub fn ghost_if(ghost: bool, this: Type) -> Type {
if ghost { Type::Ghost(Box::new(this)) } else { this }
}
}
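// Editor's note: illustrative value, not in the original source; the name
// `example_array_type` is hypothetical. It spells out how the surface type
// `(array u8 4)` from the grammar above is represented with these constructors.
#[allow(dead_code)]
fn example_array_type() -> Type {
  Type::Array(Box::new(Type::UInt(Size::S8)), Rc::new(PureExpr::Int(BigInt::from(4))))
}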
/// A separating proposition, which classifies hypotheses / proof terms.
#[derive(Clone, Debug, DeepSizeOf)]
pub enum Prop {
/// An unresolved metavariable.
MVar(usize),
/// An (executable) boolean expression, interpreted as a pure proposition
Pure(Rc<PureExpr>),
}
| 33.525271 | 99 | 0.635277 |
4b1fd251d97d67d24ac5d7cbcea5fc955d5576c9 | 930 | use std::ops::Deref;
fn main() {
    let m = MyBox::new(String::from("Rust"));
    // Deref coercion converts &MyBox<String> to &String and then to &str,
    // so `hello` can take `&m` directly.
    hello(&m);
    let c = CustomSmartPointer { data: String::from("my stuff") };
    let d = CustomSmartPointer { data: String::from("other stuff") };
    println!("CustomSmartPointers created.");
    // This `c` shadows the earlier binding of the same name; the earlier value is
    // not freed by shadowing and is still dropped at the end of `main`.
    let c = CustomSmartPointer { data: String::from("some data") };
    println!("CustomSmartPointer created.");
    // `std::mem::drop` runs the destructor early; calling `c.drop()` directly
    // would not compile.
    drop(c);
    println!("CustomSmartPointer dropped before the end of main.");
    // `m`, `d`, and the first `c` are dropped here, in reverse declaration order.
}
fn hello(name: &str) {
println!("Hello, {}!", name);
}
struct MyBox<T>(T);
impl<T> MyBox<T> {
fn new(x: T) -> MyBox<T> {
MyBox(x)
}
}
impl<T> Deref for MyBox<T> {
type Target = T;
fn deref(&self) -> &T {
&self.0
}
}
struct CustomSmartPointer {
data: String,
}
impl Drop for CustomSmartPointer {
fn drop(&mut self) {
println!("Dropping CustomSmartPointer with data `{}`!", self.data);
}
} | 20.666667 | 75 | 0.591398 |
d6249f19e8eb204eaad15cb5c4d0797d7f687f00 | 13,252 | // Copyright 2020 Parity Technologies (UK) Ltd.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the "Software"),
// to deal in the Software without restriction, including without limitation
// the rights to use, copy, modify, merge, publish, distribute, sublicense,
// and/or sell copies of the Software, and to permit persons to whom the
// Software is furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
// FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
// DEALINGS IN THE SOFTWARE.
mod error;
pub(crate) mod handler;
mod listeners;
mod substream;
pub(crate) mod pool;
pub use error::{
ConnectionError, PendingConnectionError, PendingInboundConnectionError,
PendingOutboundConnectionError,
};
pub use handler::{ConnectionHandler, ConnectionHandlerEvent, IntoConnectionHandler};
pub use listeners::{ListenerId, ListenersEvent, ListenersStream};
pub use pool::{ConnectionCounters, ConnectionLimits};
pub use pool::{EstablishedConnection, EstablishedConnectionIter, PendingConnection};
pub use substream::{Close, Substream, SubstreamEndpoint};
use crate::multiaddr::{Multiaddr, Protocol};
use crate::muxing::StreamMuxer;
use crate::PeerId;
use std::hash::Hash;
use std::{error::Error, fmt, pin::Pin, task::Context, task::Poll};
use substream::{Muxing, SubstreamEvent};
/// Connection identifier.
#[derive(Debug, Copy, Clone, Hash, PartialEq, Eq, PartialOrd, Ord)]
pub struct ConnectionId(usize);
impl ConnectionId {
/// Creates a `ConnectionId` from a non-negative integer.
///
/// This is primarily useful for creating connection IDs
/// in test environments. There is in general no guarantee
/// that all connection IDs are based on non-negative integers.
pub fn new(id: usize) -> Self {
ConnectionId(id)
}
}
/// The endpoint roles associated with a peer-to-peer communication channel.
#[derive(Debug, Copy, Clone, PartialEq, Eq, Hash)]
pub enum Endpoint {
/// The socket comes from a dialer.
Dialer,
/// The socket comes from a listener.
Listener,
}
impl std::ops::Not for Endpoint {
type Output = Endpoint;
fn not(self) -> Self::Output {
match self {
Endpoint::Dialer => Endpoint::Listener,
Endpoint::Listener => Endpoint::Dialer,
}
}
}
impl Endpoint {
/// Is this endpoint a dialer?
pub fn is_dialer(self) -> bool {
matches!(self, Endpoint::Dialer)
}
/// Is this endpoint a listener?
pub fn is_listener(self) -> bool {
matches!(self, Endpoint::Listener)
}
}
/// The endpoint roles associated with a pending peer-to-peer connection.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub enum PendingPoint {
/// The socket comes from a dialer.
///
/// There is no single address associated with the Dialer of a pending
/// connection. Addresses are dialed in parallel. Only once the first dial
/// is successful is the address of the connection known.
Dialer,
/// The socket comes from a listener.
Listener {
/// Local connection address.
local_addr: Multiaddr,
/// Address used to send back data to the remote.
send_back_addr: Multiaddr,
},
}
impl From<ConnectedPoint> for PendingPoint {
fn from(endpoint: ConnectedPoint) -> Self {
match endpoint {
ConnectedPoint::Dialer { .. } => PendingPoint::Dialer,
ConnectedPoint::Listener {
local_addr,
send_back_addr,
} => PendingPoint::Listener {
local_addr,
send_back_addr,
},
}
}
}
/// The endpoint roles associated with an established peer-to-peer connection.
#[derive(PartialEq, Eq, Debug, Clone, Hash)]
pub enum ConnectedPoint {
/// We dialed the node.
Dialer {
/// Multiaddress that was successfully dialed.
address: Multiaddr,
},
/// We received the node.
Listener {
/// Local connection address.
local_addr: Multiaddr,
/// Address used to send back data to the remote.
send_back_addr: Multiaddr,
},
}
impl From<&'_ ConnectedPoint> for Endpoint {
fn from(endpoint: &'_ ConnectedPoint) -> Endpoint {
endpoint.to_endpoint()
}
}
impl From<ConnectedPoint> for Endpoint {
fn from(endpoint: ConnectedPoint) -> Endpoint {
endpoint.to_endpoint()
}
}
impl ConnectedPoint {
/// Turns the `ConnectedPoint` into the corresponding `Endpoint`.
pub fn to_endpoint(&self) -> Endpoint {
match self {
ConnectedPoint::Dialer { .. } => Endpoint::Dialer,
ConnectedPoint::Listener { .. } => Endpoint::Listener,
}
}
/// Returns true if we are `Dialer`.
pub fn is_dialer(&self) -> bool {
match self {
ConnectedPoint::Dialer { .. } => true,
ConnectedPoint::Listener { .. } => false,
}
}
/// Returns true if we are `Listener`.
pub fn is_listener(&self) -> bool {
match self {
ConnectedPoint::Dialer { .. } => false,
ConnectedPoint::Listener { .. } => true,
}
}
/// Returns true if the connection is relayed.
pub fn is_relayed(&self) -> bool {
match self {
ConnectedPoint::Dialer { address } => address,
ConnectedPoint::Listener { local_addr, .. } => local_addr,
}
.iter()
.any(|p| p == Protocol::P2pCircuit)
}
/// Returns the address of the remote stored in this struct.
///
/// For `Dialer`, this returns `address`. For `Listener`, this returns `send_back_addr`.
///
/// Note that the remote node might not be listening on this address and hence the address might
/// not be usable to establish new connections.
pub fn get_remote_address(&self) -> &Multiaddr {
match self {
ConnectedPoint::Dialer { address } => address,
ConnectedPoint::Listener { send_back_addr, .. } => send_back_addr,
}
}
/// Modifies the address of the remote stored in this struct.
///
/// For `Dialer`, this modifies `address`. For `Listener`, this modifies `send_back_addr`.
pub fn set_remote_address(&mut self, new_address: Multiaddr) {
match self {
ConnectedPoint::Dialer { address } => *address = new_address,
ConnectedPoint::Listener { send_back_addr, .. } => *send_back_addr = new_address,
}
}
}
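// Editor's note: small illustrative test, not part of the original module; it
// exercises the accessors documented above. The multiaddr literal is arbitrary.
#[cfg(test)]
mod connected_point_example {
    use super::*;

    #[test]
    fn dialer_remote_address() {
        let address: Multiaddr = "/ip4/127.0.0.1/tcp/30333".parse().unwrap();
        let point = ConnectedPoint::Dialer { address: address.clone() };
        assert!(point.is_dialer() && !point.is_listener());
        // For a dialer, the remote address is the address that was dialed.
        assert_eq!(point.get_remote_address(), &address);
    }
}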
/// Information about a successfully established connection.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Connected {
/// The connected endpoint, including network address information.
pub endpoint: ConnectedPoint,
/// Information obtained from the transport.
pub peer_id: PeerId,
}
/// Event generated by a [`Connection`].
#[derive(Debug, Clone)]
pub enum Event<T> {
/// Event generated by the [`ConnectionHandler`].
Handler(T),
/// Address of the remote has changed.
AddressChange(Multiaddr),
}
/// A multiplexed connection to a peer with an associated `ConnectionHandler`.
pub struct Connection<TMuxer, THandler>
where
TMuxer: StreamMuxer,
THandler: ConnectionHandler<Substream = Substream<TMuxer>>,
{
/// Node that handles the muxing.
muxing: substream::Muxing<TMuxer, THandler::OutboundOpenInfo>,
/// Handler that processes substreams.
handler: THandler,
}
impl<TMuxer, THandler> fmt::Debug for Connection<TMuxer, THandler>
where
TMuxer: StreamMuxer,
THandler: ConnectionHandler<Substream = Substream<TMuxer>> + fmt::Debug,
{
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Connection")
.field("muxing", &self.muxing)
.field("handler", &self.handler)
.finish()
}
}
impl<TMuxer, THandler> Unpin for Connection<TMuxer, THandler>
where
TMuxer: StreamMuxer,
THandler: ConnectionHandler<Substream = Substream<TMuxer>>,
{
}
impl<TMuxer, THandler> Connection<TMuxer, THandler>
where
TMuxer: StreamMuxer,
THandler: ConnectionHandler<Substream = Substream<TMuxer>>,
{
/// Builds a new `Connection` from the given substream multiplexer
/// and connection handler.
pub fn new(muxer: TMuxer, handler: THandler) -> Self {
Connection {
muxing: Muxing::new(muxer),
handler,
}
}
/// Returns a reference to the `ConnectionHandler`
pub fn handler(&self) -> &THandler {
&self.handler
}
/// Returns a mutable reference to the `ConnectionHandler`
pub fn handler_mut(&mut self) -> &mut THandler {
&mut self.handler
}
/// Notifies the connection handler of an event.
pub fn inject_event(&mut self, event: THandler::InEvent) {
self.handler.inject_event(event);
}
/// Begins an orderly shutdown of the connection, returning the connection
/// handler and a `Future` that resolves when connection shutdown is complete.
pub fn close(self) -> (THandler, Close<TMuxer>) {
(self.handler, self.muxing.close().0)
}
/// Polls the connection for events produced by the associated handler
/// as a result of I/O activity on the substream multiplexer.
pub fn poll(
mut self: Pin<&mut Self>,
cx: &mut Context<'_>,
) -> Poll<Result<Event<THandler::OutEvent>, ConnectionError<THandler::Error>>> {
loop {
let mut io_pending = false;
// Perform I/O on the connection through the muxer, informing the handler
// of new substreams.
match self.muxing.poll(cx) {
Poll::Pending => io_pending = true,
Poll::Ready(Ok(SubstreamEvent::InboundSubstream { substream })) => self
.handler
.inject_substream(substream, SubstreamEndpoint::Listener),
Poll::Ready(Ok(SubstreamEvent::OutboundSubstream {
user_data,
substream,
})) => {
let endpoint = SubstreamEndpoint::Dialer(user_data);
self.handler.inject_substream(substream, endpoint)
}
Poll::Ready(Ok(SubstreamEvent::AddressChange(address))) => {
self.handler.inject_address_change(&address);
return Poll::Ready(Ok(Event::AddressChange(address)));
}
Poll::Ready(Err(err)) => return Poll::Ready(Err(ConnectionError::IO(err))),
}
// Poll the handler for new events.
match self.handler.poll(cx) {
Poll::Pending => {
if io_pending {
return Poll::Pending; // Nothing to do
}
}
Poll::Ready(Ok(ConnectionHandlerEvent::OutboundSubstreamRequest(user_data))) => {
self.muxing.open_substream(user_data);
}
Poll::Ready(Ok(ConnectionHandlerEvent::Custom(event))) => {
return Poll::Ready(Ok(Event::Handler(event)));
}
Poll::Ready(Err(err)) => return Poll::Ready(Err(ConnectionError::Handler(err))),
}
}
}
}
/// Borrowed information about an incoming connection currently being negotiated.
#[derive(Debug, Copy, Clone)]
pub struct IncomingInfo<'a> {
/// Local connection address.
pub local_addr: &'a Multiaddr,
/// Address used to send back data to the remote.
pub send_back_addr: &'a Multiaddr,
}
impl<'a> IncomingInfo<'a> {
/// Builds the [`PendingPoint`] corresponding to the incoming connection.
pub fn to_pending_point(&self) -> PendingPoint {
PendingPoint::Listener {
local_addr: self.local_addr.clone(),
send_back_addr: self.send_back_addr.clone(),
}
}
/// Builds the [`ConnectedPoint`] corresponding to the incoming connection.
pub fn to_connected_point(&self) -> ConnectedPoint {
ConnectedPoint::Listener {
local_addr: self.local_addr.clone(),
send_back_addr: self.send_back_addr.clone(),
}
}
}
/// Information about a connection limit.
#[derive(Debug, Clone)]
pub struct ConnectionLimit {
/// The maximum number of connections.
pub limit: u32,
/// The current number of connections.
pub current: u32,
}
impl fmt::Display for ConnectionLimit {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}/{}", self.current, self.limit)
}
}
/// A `ConnectionLimit` can represent an error if it has been exceeded.
impl Error for ConnectionLimit {}
| 33.979487 | 100 | 0.632659 |
0a14cf0bd7b8221757f08766fb7bfd24d7a8eaf4 | 1,905 | pub(crate) mod character_data_seal {
pub trait Seal {
#[doc(hidden)]
fn as_web_sys_character_data(&self) -> &web_sys::CharacterData;
}
}
pub trait CharacterData: character_data_seal::Seal {
fn len(&self) -> u32 {
self.as_web_sys_character_data().length()
}
fn data(&self) -> String {
self.as_web_sys_character_data()
.node_value()
.unwrap_or(String::new())
}
fn set_data(&self, value: &str) {
self.as_web_sys_character_data().set_node_value(Some(value));
}
}
macro_rules! impl_character_data_traits {
($tpe:ident, $web_sys_tpe:ident) => {
impl $crate::dom::character_data_seal::Seal for $tpe {
fn as_web_sys_character_data(&self) -> &web_sys::CharacterData {
self.inner.as_ref()
}
}
impl $crate::dom::CharacterData for $tpe {}
impl AsRef<web_sys::CharacterData> for $tpe {
fn as_ref(&self) -> &web_sys::CharacterData {
use $crate::dom::character_data_seal::Seal;
self.as_web_sys_character_data()
}
}
impl $crate::dom::range_bound_container_seal::Seal for $tpe {
fn as_web_sys_node(&self) -> &web_sys::Node {
use $crate::dom::character_data_seal::Seal;
self.as_web_sys_character_data().as_ref()
}
}
impl $crate::dom::RangeBoundContainer for $tpe {}
$crate::dom::impl_node_traits!($tpe);
$crate::dom::impl_child_node_for_character_data!($tpe);
$crate::dom::impl_owned_node!($tpe);
$crate::dom::impl_element_sibling_for_character_data!($tpe);
$crate::dom::impl_try_from_node!($tpe, $web_sys_tpe);
};
($tpe:ident) => {
$crate::dom::impl_character_data_traits!($tpe, $tpe);
};
}
pub(crate) use impl_character_data_traits;
| 29.765625 | 76 | 0.592126 |
e5864c57e4e17f016dced275e830d4f0a2d37a17 | 2,907 | use std::path::Path;
use std::io::{Read, Write};
use std::fs::File;
use libc::getuid;
use process_util::env_path_find;
use launcher::Context;
use unshare::{Command, Stdio};
pub struct SystemInfo {
pub expect_inotify_limit: Option<usize>,
}
pub fn check(cinfo: &SystemInfo, context: &Context)
-> Result<(), String>
{
match cinfo.expect_inotify_limit {
Some(val) => check_sysctl(context,
"fs.inotify.max_user_watches", val,
"http://bit.ly/max_user_watches", 524288),
None => {}
}
Ok(())
}
fn check_sysctl(context: &Context, name: &str, expect: usize,
link: &str, max: usize) {
let path = Path::new("/proc/sys").join(name.replace(".", "/"));
let mut buf = String::with_capacity(10);
let val: Option<usize> = File::open(&path).ok()
.and_then(|mut f| f.read_to_string(&mut buf).ok())
.and_then(|_| buf.trim().parse().ok());
let real = match val {
None => {
warn!("Can't read sysctl {:?}", name);
return;
}
Some(x) => x,
};
if real >= expect {
return;
}
if context.settings.auto_apply_sysctl && expect <= max {
let uid = unsafe { getuid() };
if uid == 0 {
File::create(&path)
.and_then(|mut f| f.write_all(format!("{}", expect).as_bytes()))
.map_err(|e| error!("Can't apply sysctl {}: {}", name, e)).ok();
} else if let Some(cmdpath) = env_path_find("sudo") {
let mut sysctl = Command::new(cmdpath);
sysctl.stdin(Stdio::null());
sysctl.arg("-k");
sysctl.arg("sysctl");
sysctl.arg(format!("{}={}", name, expect));
warn!("The sysctl setting {name} is {is} but \
at least {expected} is expected. \
Running the following command to fix it:\n \
{cmd:?}\n\
More info: {link}",
name=name, is=real, expected=expect, link=link, cmd=sysctl);
match sysctl.status() {
Ok(st) if !st.success() => {
error!("Error running sysctl {:?}", st);
},
Err(e) => {
error!("Error running sysctl: {:?}", e);
},
_ => {},
}
} else {
error!("Error running sysctl: `sudo` not found");
}
} else {
warn!("The sysctl setting {name} is {is} but \
at least {expected} is expected. \
To fix it till next reboot run:\n \
sysctl {name}={expected}\n\
More info: {link}",
name=name, is=real, expected=expect, link=link);
if expect > max {
warn!("Additionally we can't autofix it \
because value is too large. So be careful.")
}
}
}
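// Editor's note: small illustrative test, not part of the original module; it
// documents the name-to-path mapping used by `check_sysctl` above.
#[cfg(test)]
mod sysctl_path_example {
    use std::path::Path;

    #[test]
    fn dotted_name_maps_to_proc_sys_path() {
        let name = "fs.inotify.max_user_watches";
        let path = Path::new("/proc/sys").join(name.replace(".", "/"));
        assert_eq!(path.as_path(), Path::new("/proc/sys/fs/inotify/max_user_watches"));
    }
}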
| 32.3 | 78 | 0.492948 |
0a5aa1b7bd34695cebb9a8451c45d0d5e25eb52f | 965 | // Copyright 2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Check that we get an error in a multidisptach scenario where the
// set of impls is ambiguous.
trait Convert<Target> {
fn convert(&self) -> Target;
}
impl Convert<i8> for i32 {
fn convert(&self) -> i8 {
*self as i8
}
}
impl Convert<i16> for i32 {
fn convert(&self) -> i16 {
*self as i16
}
}
fn test<T,U>(_: T, _: U)
where T : Convert<U>
{
}
fn a() {
test(22_i32, std::default::Default::default()); //~ ERROR type annotations required
}
fn main() {}
| 24.125 | 87 | 0.666321 |
5b33a094d18f6c472167380fa33127409f4837fc | 7,892 | // Copyright (c) The Libra Core Contributors
// SPDX-License-Identifier: Apache-2.0
#![forbid(unsafe_code)]
use crate::{aws::Aws, instance::Instance};
use anyhow::{ensure, format_err, Result};
use config_builder::ValidatorConfig;
use generate_keypair::load_key_from_file;
use libra_config::config::AdmissionControlConfig;
use libra_crypto::ed25519::{Ed25519PrivateKey, Ed25519PublicKey};
use libra_crypto::test_utils::KeyPair;
use rand::prelude::*;
use rusoto_ec2::{DescribeInstancesRequest, Ec2, Filter, Tag};
use slog_scope::*;
use std::collections::HashMap;
use std::convert::TryInto;
use std::{thread, time::Duration};
#[derive(Clone)]
pub struct Cluster {
// guaranteed non-empty
instances: Vec<Instance>,
prometheus_ip: Option<String>,
mint_key_pair: KeyPair<Ed25519PrivateKey, Ed25519PublicKey>,
}
impl Cluster {
pub fn from_host_port(peers: Vec<(String, u32)>, mint_file: &str) -> Self {
let instances: Vec<Instance> = peers
.into_iter()
.map(|host_port| {
Instance::new(
format!("{}:{}", &host_port.0, host_port.1), /* short_hash */
host_port.0,
host_port.1,
)
})
.collect();
let mint_key_pair: KeyPair<Ed25519PrivateKey, Ed25519PublicKey> =
load_key_from_file(mint_file).expect("invalid faucet keypair file");
Self {
instances,
prometheus_ip: None,
mint_key_pair,
}
}
pub fn discover(aws: &Aws) -> Result<Self> {
let mut instances = vec![];
let mut next_token = None;
let mut retries_left = 10;
let mut prometheus_ip: Option<String> = None;
loop {
let filters = vec![
Filter {
name: Some("tag:Workspace".into()),
values: Some(vec![aws.workspace().clone()]),
},
Filter {
name: Some("instance-state-name".into()),
values: Some(vec!["running".into()]),
},
];
let result = aws
.ec2()
.describe_instances(DescribeInstancesRequest {
filters: Some(filters),
max_results: Some(1000),
dry_run: None,
instance_ids: None,
next_token: next_token.clone(),
})
.sync();
let result = match result {
Err(e) => {
warn!(
"Failed to describe aws instances: {:?}, retries left: {}",
e, retries_left
);
thread::sleep(Duration::from_secs(1));
if retries_left == 0 {
panic!("Last attempt to describe instances failed");
}
retries_left -= 1;
continue;
}
Ok(r) => r,
};
let ac_port = AdmissionControlConfig::default().admission_control_service_port as u32;
for reservation in result.reservations.expect("no reservations") {
for aws_instance in reservation.instances.expect("no instances") {
let ip = aws_instance
.private_ip_address
.expect("Instance does not have private IP address");
let tags = aws_instance.tags.expect("Instance does not have tags");
let role = parse_tags(tags);
match role {
InstanceRole::Prometheus => {
prometheus_ip = Some(ip);
}
InstanceRole::Peer(peer_name) => {
instances.push(Instance::new(peer_name, ip, ac_port));
}
_ => {}
}
}
}
next_token = result.next_token;
if next_token.is_none() {
break;
}
}
ensure!(
!instances.is_empty(),
"No instances were discovered for cluster"
);
let prometheus_ip =
prometheus_ip.ok_or_else(|| format_err!("Prometheus was not found in workspace"))?;
let seed = "1337133713371337133713371337133713371337133713371337133713371337";
let seed = hex::decode(seed).expect("Invalid hex in seed.");
let seed = seed[..32].try_into().expect("Invalid seed");
let (_, mint_key) = ValidatorConfig::new()
.seed(seed)
.build_faucet_client()
.expect("Unable to build faucet keys");
let mint_key_pair = KeyPair::from(mint_key);
Ok(Self {
instances,
prometheus_ip: Some(prometheus_ip),
mint_key_pair,
})
}
pub fn random_instance(&self) -> Instance {
let mut rnd = rand::thread_rng();
self.instances.choose(&mut rnd).unwrap().clone()
}
pub fn instances(&self) -> &Vec<Instance> {
&self.instances
}
pub fn into_instances(self) -> Vec<Instance> {
self.instances
}
pub fn prometheus_ip(&self) -> Option<&String> {
self.prometheus_ip.as_ref()
}
pub fn mint_key_pair(&self) -> &KeyPair<Ed25519PrivateKey, Ed25519PublicKey> {
&self.mint_key_pair
}
pub fn get_instance(&self, name: &str) -> Option<&Instance> {
self.instances
.iter()
.find(|instance| instance.peer_name() == name)
}
/// Splits this cluster into two
///
    /// Returns a tuple of two clusters:
    /// the first element contains a cluster with `c` random instances from `self`,
    /// the second element contains a cluster with the remaining instances from `self`.
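    ///
    /// A minimal usage sketch (assuming an already-constructed `cluster`; marked
    /// `ignore` so it is not compiled as a doc-test):
    ///
    /// ```ignore
    /// // Take 3 random instances; the remaining instances form the second cluster.
    /// let (taken, rest) = cluster.split_n_random(3);
    /// assert_eq!(taken.instances().len(), 3);
    /// ```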
pub fn split_n_random(&self, c: usize) -> (Self, Self) {
assert!(c <= self.instances.len());
let mut rng = ThreadRng::default();
let mut sub = vec![];
let mut rem = self.instances.clone();
for _ in 0..c {
let idx_remove = rng.gen_range(0, rem.len());
let instance = rem.remove(idx_remove);
sub.push(instance);
}
(self.new_sub_cluster(sub), self.new_sub_cluster(rem))
}
fn new_sub_cluster(&self, instances: Vec<Instance>) -> Self {
Cluster {
instances,
prometheus_ip: self.prometheus_ip.clone(),
mint_key_pair: self.mint_key_pair.clone(),
}
}
pub fn sub_cluster(&self, ids: Vec<String>) -> Cluster {
let mut instances = Vec::with_capacity(ids.len());
for id in ids {
let instance = self.get_instance(&id);
match instance {
Some(instance) => instances.push(instance.clone()),
None => panic!("Can not make sub_cluster: instance {} is not found", id),
}
}
assert!(!instances.is_empty(), "No instances for subcluster");
self.new_sub_cluster(instances)
}
}
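/// Maps the EC2 tags of an instance to an `InstanceRole`: `Role=validator`
/// becomes `Peer` carrying the instance's `Name` tag, `Role=monitoring` becomes
/// `Prometheus`, and anything else is `Unknown`.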
fn parse_tags(tags: Vec<Tag>) -> InstanceRole {
let mut map: HashMap<_, _> = tags.into_iter().map(|tag| (tag.key, tag.value)).collect();
let role = map.remove(&Some("Role".to_string()));
if role == Some(Some("validator".to_string())) {
let peer_name = map.remove(&Some("Name".to_string()));
let peer_name = peer_name.expect("Validator instance without Name");
let peer_name = peer_name.expect("'Name' tag without value");
return InstanceRole::Peer(peer_name);
} else if role == Some(Some("monitoring".to_string())) {
return InstanceRole::Prometheus;
}
InstanceRole::Unknown
}
enum InstanceRole {
Peer(String),
Prometheus,
Unknown,
}
| 35.710407 | 98 | 0.538267 |
f916cd7396282b3c83994915d72b8092ef1d31c5 | 86 | #[derive(Debug, Clone)]
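/// Either a borrowed `&T` or an owned, boxed `T` (a minimal `Cow`-like reference).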
pub enum CowRef<'a, T> {
Ref(&'a T),
Boxed(Box<T>),
}
| 14.333333 | 24 | 0.523256 |
db623a6570fc04cc02068fea4ef86e6e19fc726e | 14,020 | //
// Copyright 2017 hasselc Developers
//
// Licensed under the Apache License, Version 2.0, <LICENSE-APACHE or
// http://apache.org/licenses/LICENSE-2.0> or the MIT license <LICENSE-MIT or
// http://opensource.org/licenses/MIT>, at your option. This file may not be
// copied, modified, or distributed except according to those terms.
//
use error::{self, ErrorKind};
use ir::block::{Block, CallData, Expr, Statement};
use src_tag::SrcTagged;
use symbol_table::{SymbolName, SymbolTable};
use base_type::BaseType;
pub trait TypeChecking {
fn base_type(&self) -> Option<&BaseType>;
fn infer_types(&mut self, symbol_table: &SymbolTable) -> error::Result<()>;
fn imply_type(&mut self, base_type: &BaseType);
fn imply_defaults(&mut self);
fn resolve_type(&mut self, symbol_table: &SymbolTable) -> error::Result<BaseType>;
}
impl TypeChecking for Expr {
fn base_type(&self) -> Option<&BaseType> {
use ir::block::Expr::*;
match *self {
Number(ref data) => data.value_type.as_ref(),
Symbol(ref data) => data.value_type.as_ref(),
BinaryOp(ref data) => data.result_type.as_ref(),
Call(ref data) => data.base_type(),
ArrayIndex(ref data) => data.array_type.as_ref().and_then(|at| at.underlying_type()),
}
}
fn infer_types(&mut self, symbol_table: &SymbolTable) -> error::Result<()> {
use ir::block::Expr::*;
match *self {
Number(_) => {}
Symbol(ref mut data) => {
if let Some(constant) = symbol_table.constant(data.symbol) {
data.value_type = Some(constant.base_type);
} else if let Some(variable) = symbol_table.variable(data.symbol) {
data.value_type = Some(variable.base_type);
} else {
unreachable!()
}
}
BinaryOp(ref mut data) => {
data.left.infer_types(symbol_table)?;
data.right.infer_types(symbol_table)?;
if !data.op.is_arithmetic() && data.left.base_type().is_none() && data.right.base_type().is_none() {
data.left.imply_defaults();
data.right.imply_defaults();
}
let left_type = data.left.base_type().cloned();
let right_type = data.right.base_type().cloned();
if left_type.is_none() && right_type.is_some() {
data.left.imply_type(right_type.as_ref().unwrap());
data.result_type = right_type;
} else if left_type.is_some() && right_type.is_none() {
data.right.imply_type(left_type.as_ref().unwrap());
data.result_type = left_type;
}
}
Call(ref mut data) => data.infer_types(symbol_table)?,
ArrayIndex(ref mut data) => {
data.index.infer_types(symbol_table)?;
if data.index.base_type().is_none() {
data.index.imply_type(&BaseType::U16);
}
if let Some(constant) = symbol_table.constant(data.array) {
data.array_type = Some(constant.base_type);
} else if let Some(variable) = symbol_table.variable(data.array) {
data.array_type = Some(variable.base_type);
} else {
unreachable!()
}
}
}
Ok(())
}
fn imply_type(&mut self, base_type: &BaseType) {
use ir::block::Expr::*;
match *self {
Number(ref mut data) => {
if data.value_type.is_none() {
data.value_type = Some(base_type.clone());
}
}
BinaryOp(ref mut data) => {
if data.result_type.is_none() {
data.result_type = Some(base_type.clone());
if data.op.is_arithmetic() {
data.left.imply_type(base_type);
data.right.imply_type(base_type);
}
}
}
Symbol(_) | Call(_) | ArrayIndex(_) => {}
}
}
fn imply_defaults(&mut self) {
if let Expr::Number(ref mut data) = *self {
if data.value_type.is_none() {
data.value_type = Some(BaseType::U8);
}
}
}
fn resolve_type(&mut self, symbol_table: &SymbolTable) -> error::Result<BaseType> {
use ir::block::Expr::*;
match *self {
Number(ref data) => match data.value_type {
Some(ref base_type) => Ok(base_type.clone()),
None => Err(ErrorKind::TypeExprError(data.tag, "Can't infer type of number".into()).into()),
},
BinaryOp(ref mut data) => {
let left_type = data.left.resolve_type(symbol_table)?;
let right_type = data.right.resolve_type(symbol_table)?;
match BaseType::choose_type(&left_type, &right_type) {
Some(base_type) => {
data.result_type = Some(base_type.clone());
Ok(base_type)
}
None => Err(ErrorKind::TypeExprError(
data.tag,
format!(
"Can't perform arithmetic between {} and {}",
left_type, right_type
),
).into()),
}
}
Call(ref mut data) => data.resolve_type(symbol_table),
ArrayIndex(ref mut data) => {
if !data.array_type.as_ref().unwrap().can_index() {
Err(ErrorKind::TypeExprError(
data.tag,
format!("Can't index {}", data.array_type.as_ref().unwrap()),
).into())
} else if data.index.base_type().is_none() {
Err(ErrorKind::TypeExprError(
data.index.src_tag(),
"Can't infer type of array index".into(),
).into())
} else {
Ok(data.array_type
.as_ref()
.unwrap()
.underlying_type()
.unwrap()
.clone())
}
}
Symbol(ref data) => Ok(data.value_type.as_ref().unwrap().clone()),
}
}
}
impl TypeChecking for CallData {
fn base_type(&self) -> Option<&BaseType> {
self.return_type.as_ref()
}
fn infer_types(&mut self, symbol_table: &SymbolTable) -> error::Result<()> {
if let Some(function) = symbol_table.function_by_name(&self.function) {
self.return_type = Some(function.read().unwrap().return_type.clone());
let expected_arg_count = function.read().unwrap().parameters.len();
if self.arguments.len() != expected_arg_count {
return Err(ErrorKind::ExpectedNArgumentsGotM(
self.tag,
SymbolName::clone(&self.function),
expected_arg_count,
self.arguments.len(),
).into());
}
for (index, argument) in self.arguments.iter_mut().enumerate() {
argument.infer_types(symbol_table)?;
if argument.base_type().is_none() {
argument.imply_type(&function.read().unwrap().parameters[index].base_type);
}
}
} else {
return Err(ErrorKind::SymbolNotFound(self.tag, SymbolName::clone(&self.function)).into());
}
Ok(())
}
fn imply_type(&mut self, _base_type: &BaseType) {}
fn imply_defaults(&mut self) {}
fn resolve_type(&mut self, symbol_table: &SymbolTable) -> error::Result<BaseType> {
if let Some(function) = symbol_table.function_by_name(&self.function) {
for (index, argument) in self.arguments.iter_mut().enumerate() {
let argument_type = argument.resolve_type(symbol_table)?;
if !argument_type.can_assign_into(&function.read().unwrap().parameters[index].base_type) {
return Err(ErrorKind::TypeExprError(
argument.src_tag(),
format!(
"Argument {} expected {} but got a {}",
index + 1,
function.read().unwrap().parameters[index].base_type,
argument_type
),
).into());
}
}
Ok(self.return_type.as_ref().unwrap().clone())
} else {
unreachable!()
}
}
}
impl TypeChecking for Statement {
fn base_type(&self) -> Option<&BaseType> {
None
}
fn infer_types(&mut self, symbol_table: &SymbolTable) -> error::Result<()> {
use ir::block::Statement::*;
match *self {
Assign(ref mut data) => {
data.left_value.infer_types(symbol_table)?;
data.right_value.infer_types(symbol_table)?
}
Call(ref mut data) => data.infer_types(symbol_table)?,
Conditional(ref mut data) => {
data.condition.infer_types(symbol_table)?;
for statement in &mut data.when_true {
statement.infer_types(symbol_table)?;
}
for statement in &mut data.when_false {
statement.infer_types(symbol_table)?;
}
}
Return(ref mut data) => {
if let Some(ref mut value) = data.value {
value.infer_types(symbol_table)?;
}
}
WhileLoop(ref mut data) => {
data.condition.infer_types(symbol_table)?;
for statement in &mut data.body {
statement.infer_types(symbol_table)?;
}
}
Break | GoTo(_) | InlineAsm(_) => {}
}
Ok(())
}
fn imply_type(&mut self, base_type: &BaseType) {
use ir::block::Statement::*;
match *self {
Return(ref mut data) => {
if let Some(ref mut value) = *(&mut data.value) {
value.imply_type(base_type);
}
}
Conditional(ref mut data) => {
for statement in &mut data.when_true {
statement.imply_type(base_type);
}
for statement in &mut data.when_false {
statement.imply_type(base_type);
}
}
WhileLoop(ref mut data) => for statement in &mut data.body {
statement.imply_type(base_type);
},
_ => {}
}
}
fn imply_defaults(&mut self) {}
fn resolve_type(&mut self, symbol_table: &SymbolTable) -> error::Result<BaseType> {
use ir::block::Statement::*;
match *self {
Assign(ref mut data) => {
let left_type = data.left_value.resolve_type(symbol_table)?;
data.right_value.imply_type(&left_type);
let right_type = data.right_value.resolve_type(symbol_table)?;
if !right_type.can_assign_into(&left_type) {
return Err(ErrorKind::TypeExprError(
data.tag,
format!("Can't assign {} into {}", right_type, left_type),
).into());
}
data.value_type = Some(left_type);
}
Call(ref mut data) => {
data.resolve_type(symbol_table)?;
}
Conditional(ref mut data) => {
let condition_type = data.condition.resolve_type(symbol_table)?;
if !condition_type.can_cast_into(&BaseType::Bool) {
return Err(
ErrorKind::TypeExprError(data.tag, "Condition can't evaluate to a boolean".into()).into(),
);
}
for statement in &mut data.when_true {
statement.resolve_type(symbol_table)?;
}
for statement in &mut data.when_false {
statement.resolve_type(symbol_table)?;
}
}
Return(ref mut data) => {
if let Some(ref mut value) = *(&mut data.value) {
data.value_type = Some(value.resolve_type(symbol_table)?);
}
}
WhileLoop(ref mut data) => {
let condition_type = data.condition.resolve_type(symbol_table)?;
if !condition_type.can_cast_into(&BaseType::Bool) {
return Err(
ErrorKind::TypeExprError(data.tag, "Condition can't evaluate to a boolean".into()).into(),
);
}
for statement in &mut data.body {
statement.resolve_type(symbol_table)?;
}
}
Break | GoTo(_) | InlineAsm(_) => {}
}
Ok(BaseType::Void)
}
}
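/// Runs the type-checking passes over every statement of every block: infer
/// types from the symbol table, imply the block's declared return type where
/// nothing was inferred, then resolve and verify the final types.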
pub fn resolve_types(blocks: &mut Vec<Block>) -> error::Result<()> {
for block in blocks {
let symbol_table = &*block.symbol_table.read().unwrap();
let return_type = block.metadata.read().unwrap().return_type.clone();
for statement in &mut block.body {
statement.infer_types(symbol_table)?;
statement.imply_type(&return_type);
statement.resolve_type(symbol_table)?;
}
// TODO: Iterate over all return statements (recursively) and verify return type
}
Ok(())
}
| 39.716714 | 116 | 0.498859 |
f402fab5cacdc07f819167084100d9a936586990 | 12,396 | use crate::core::{InternedString, PackageId, SourceId};
use crate::sources::git;
use crate::sources::registry::MaybeLock;
use crate::sources::registry::{RegistryConfig, RegistryData, CRATE_TEMPLATE, VERSION_TEMPLATE};
use crate::util::errors::{CargoResult, CargoResultExt};
use crate::util::{Config, Filesystem, Sha256};
use lazycell::LazyCell;
use log::{debug, trace};
use std::cell::{Cell, Ref, RefCell};
use std::fmt::Write as FmtWrite;
use std::fs::{self, File, OpenOptions};
use std::io::prelude::*;
use std::io::SeekFrom;
use std::mem;
use std::path::Path;
use std::str;
pub struct RemoteRegistry<'cfg> {
index_path: Filesystem,
cache_path: Filesystem,
source_id: SourceId,
config: &'cfg Config,
tree: RefCell<Option<git2::Tree<'static>>>,
repo: LazyCell<git2::Repository>,
head: Cell<Option<git2::Oid>>,
current_sha: Cell<Option<InternedString>>,
}
impl<'cfg> RemoteRegistry<'cfg> {
pub fn new(source_id: SourceId, config: &'cfg Config, name: &str) -> RemoteRegistry<'cfg> {
RemoteRegistry {
index_path: config.registry_index_path().join(name),
cache_path: config.registry_cache_path().join(name),
source_id,
config,
tree: RefCell::new(None),
repo: LazyCell::new(),
head: Cell::new(None),
current_sha: Cell::new(None),
}
}
fn repo(&self) -> CargoResult<&git2::Repository> {
self.repo.try_borrow_with(|| {
let path = self.config.assert_package_cache_locked(&self.index_path);
// Fast path without a lock
if let Ok(repo) = git2::Repository::open(&path) {
trace!("opened a repo without a lock");
return Ok(repo);
}
// Ok, now we need to lock and try the whole thing over again.
trace!("acquiring registry index lock");
match git2::Repository::open(&path) {
Ok(repo) => Ok(repo),
Err(_) => {
drop(fs::remove_dir_all(&path));
fs::create_dir_all(&path)?;
// Note that we'd actually prefer to use a bare repository
// here as we're not actually going to check anything out.
// All versions of Cargo, though, share the same CARGO_HOME,
// so for compatibility with older Cargo which *does* do
// checkouts we make sure to initialize a new full
// repository (not a bare one).
//
// We should change this to `init_bare` whenever we feel
// like enough time has passed or if we change the directory
// that the folder is located in, such as by changing the
// hash at the end of the directory.
//
// Note that in the meantime we also skip `init.templatedir`
// as it can be misconfigured sometimes or otherwise add
// things that we don't want.
let mut opts = git2::RepositoryInitOptions::new();
opts.external_template(false);
Ok(git2::Repository::init_opts(&path, &opts)
.chain_err(|| "failed to initialized index git repository")?)
}
}
})
}
fn head(&self) -> CargoResult<git2::Oid> {
if self.head.get().is_none() {
let oid = self.repo()?.refname_to_id("refs/remotes/origin/master")?;
self.head.set(Some(oid));
}
Ok(self.head.get().unwrap())
}
fn tree(&self) -> CargoResult<Ref<'_, git2::Tree<'_>>> {
{
let tree = self.tree.borrow();
if tree.is_some() {
return Ok(Ref::map(tree, |s| s.as_ref().unwrap()));
}
}
let repo = self.repo()?;
let commit = repo.find_commit(self.head()?)?;
let tree = commit.tree()?;
// Unfortunately in libgit2 the tree objects look like they've got a
// reference to the repository object which means that a tree cannot
// outlive the repository that it came from. Here we want to cache this
// tree, though, so to accomplish this we transmute it to a static
// lifetime.
//
// Note that we don't actually hand out the static lifetime, instead we
// only return a scoped one from this function. Additionally the repo
// we loaded from (above) lives as long as this object
// (`RemoteRegistry`) so we then just need to ensure that the tree is
// destroyed first in the destructor, hence the destructor on
// `RemoteRegistry` below.
let tree = unsafe { mem::transmute::<git2::Tree<'_>, git2::Tree<'static>>(tree) };
*self.tree.borrow_mut() = Some(tree);
Ok(Ref::map(self.tree.borrow(), |s| s.as_ref().unwrap()))
}
fn filename(&self, pkg: PackageId) -> String {
format!("{}-{}.crate", pkg.name(), pkg.version())
}
}
const LAST_UPDATED_FILE: &str = ".last-updated";
impl<'cfg> RegistryData for RemoteRegistry<'cfg> {
fn prepare(&self) -> CargoResult<()> {
self.repo()?; // create intermediate dirs and initialize the repo
Ok(())
}
fn index_path(&self) -> &Filesystem {
&self.index_path
}
fn assert_index_locked<'a>(&self, path: &'a Filesystem) -> &'a Path {
self.config.assert_package_cache_locked(path)
}
fn current_version(&self) -> Option<InternedString> {
if let Some(sha) = self.current_sha.get() {
return Some(sha);
}
let sha = InternedString::new(&self.head().ok()?.to_string());
self.current_sha.set(Some(sha));
Some(sha)
}
fn load(
&self,
_root: &Path,
path: &Path,
data: &mut dyn FnMut(&[u8]) -> CargoResult<()>,
) -> CargoResult<()> {
// Note that the index calls this method and the filesystem is locked
// in the index, so we don't need to worry about an `update_index`
// happening in a different process.
let repo = self.repo()?;
let tree = self.tree()?;
let entry = tree.get_path(path)?;
let object = entry.to_object(repo)?;
let blob = match object.as_blob() {
Some(blob) => blob,
None => failure::bail!("path `{}` is not a blob in the git repo", path.display()),
};
data(blob.content())
}
fn config(&mut self) -> CargoResult<Option<RegistryConfig>> {
debug!("loading config");
self.prepare()?;
self.config.assert_package_cache_locked(&self.index_path);
let mut config = None;
self.load(Path::new(""), Path::new("config.json"), &mut |json| {
config = Some(serde_json::from_slice(json)?);
Ok(())
})?;
trace!("config loaded");
Ok(config)
}
fn update_index(&mut self) -> CargoResult<()> {
if self.config.offline() {
if self.repo()?.is_empty()? {
// An empty repository is guaranteed to fail, since hitting
// this path means we need at least one crate. This is an
// attempt to provide a better error message other than "no
// matching package named …".
failure::bail!(
"unable to fetch {} in offline mode\n\
Try running without the offline flag, or try running \
`cargo fetch` within your project directory before going offline.",
self.source_id
);
}
return Ok(());
}
if self.config.cli_unstable().no_index_update {
return Ok(());
}
// Make sure the index is only updated once per session since it is an
// expensive operation. This generally only happens when the resolver
// is run multiple times, such as during `cargo publish`.
if self.config.updated_sources().contains(&self.source_id) {
return Ok(());
}
debug!("updating the index");
// Ensure that we'll actually be able to acquire an HTTP handle later on
// once we start trying to download crates. This will weed out any
// problems with `.cargo/config` configuration related to HTTP.
//
// This way if there's a problem the error gets printed before we even
// hit the index, which may not actually read this configuration.
self.config.http()?;
self.prepare()?;
self.head.set(None);
*self.tree.borrow_mut() = None;
self.current_sha.set(None);
let path = self.config.assert_package_cache_locked(&self.index_path);
self.config
.shell()
.status("Updating", self.source_id.display_index())?;
// git fetch origin master
let url = self.source_id.url();
let refspec = "refs/heads/master:refs/remotes/origin/master";
let repo = self.repo.borrow_mut().unwrap();
git::fetch(repo, url.as_str(), refspec, self.config)
.chain_err(|| format!("failed to fetch `{}`", url))?;
self.config.updated_sources().insert(self.source_id);
// Create a dummy file to record the mtime for when we updated the
// index.
File::create(&path.join(LAST_UPDATED_FILE))?;
Ok(())
}
fn download(&mut self, pkg: PackageId, _checksum: &str) -> CargoResult<MaybeLock> {
let filename = self.filename(pkg);
        // Attempt to open a read-only copy first to avoid an exclusive write
// lock and also work with read-only filesystems. Note that we check the
// length of the file like below to handle interrupted downloads.
//
// If this fails then we fall through to the exclusive path where we may
// have to redownload the file.
let path = self.cache_path.join(&filename);
let path = self.config.assert_package_cache_locked(&path);
if let Ok(dst) = File::open(&path) {
let meta = dst.metadata()?;
if meta.len() > 0 {
return Ok(MaybeLock::Ready(dst));
}
}
let config = self.config()?.unwrap();
let mut url = config.dl;
if !url.contains(CRATE_TEMPLATE) && !url.contains(VERSION_TEMPLATE) {
write!(url, "/{}/{}/download", CRATE_TEMPLATE, VERSION_TEMPLATE).unwrap();
}
let url = url
.replace(CRATE_TEMPLATE, &*pkg.name())
.replace(VERSION_TEMPLATE, &pkg.version().to_string());
Ok(MaybeLock::Download {
url,
descriptor: pkg.to_string(),
})
}
fn finish_download(
&mut self,
pkg: PackageId,
checksum: &str,
data: &[u8],
) -> CargoResult<File> {
// Verify what we just downloaded
let actual = Sha256::new().update(data).finish_hex();
if actual != checksum {
failure::bail!("failed to verify the checksum of `{}`", pkg)
}
let filename = self.filename(pkg);
self.cache_path.create_dir()?;
let path = self.cache_path.join(&filename);
let path = self.config.assert_package_cache_locked(&path);
let mut dst = OpenOptions::new()
.create(true)
.read(true)
.write(true)
.open(&path)?;
let meta = dst.metadata()?;
if meta.len() > 0 {
return Ok(dst);
}
dst.write_all(data)?;
dst.seek(SeekFrom::Start(0))?;
Ok(dst)
}
fn is_crate_downloaded(&self, pkg: PackageId) -> bool {
let filename = format!("{}-{}.crate", pkg.name(), pkg.version());
let path = Path::new(&filename);
let path = self.cache_path.join(path);
let path = self.config.assert_package_cache_locked(&path);
if let Ok(dst) = File::open(path) {
if let Ok(meta) = dst.metadata() {
return meta.len() > 0;
}
}
false
}
}
impl<'cfg> Drop for RemoteRegistry<'cfg> {
fn drop(&mut self) {
// Just be sure to drop this before our other fields
self.tree.borrow_mut().take();
}
}
| 37.677812 | 95 | 0.56131 |
28916f0adf6a0de569c3c68dbc5ec401258524a4 | 532 | use rand;
use rand::distributions::{Distribution, Standard};
use {Address, AddressBusIO, Data};
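/// A pseudo-random data source on the address bus: reads return the currently
/// stored value (initially zero), and each write discards the written value and
/// replaces the stored one with a fresh random number for the next read.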
#[derive(Default)]
pub struct Random<T: Data> {
value: T,
}
impl<T: Data> Random<T> {
pub fn new() -> Random<T> {
Random { value: T::zero() }
}
}
impl<T: Address, U: Data> AddressBusIO<T, U> for Random<U>
where
Standard: Distribution<U>,
{
fn read(&mut self, _address: T) -> U {
self.value
}
fn write(&mut self, _address: T, _value: U) {
self.value = rand::random::<U>();
}
}
| 19 | 58 | 0.578947 |
9b61aa4bca7883b447c3372d9e08771dec81ab46 | 20,844 | use crate::blob::generate_blob_uri;
use crate::blob::responses::PutBlobResponse;
use azure_sdk_storage_core::client::Client;
use azure_sdk_storage_core::ClientRequired;
use azure_sdk_core::errors::{check_status_extract_headers_and_body, AzureError};
use azure_sdk_core::headers::BLOB_TYPE;
use azure_sdk_core::lease::LeaseId;
use azure_sdk_core::modify_conditions::IfMatchCondition;
use azure_sdk_core::{
BlobNameRequired, BlobNameSupport, CacheControlOption, CacheControlSupport, ClientRequestIdOption, ClientRequestIdSupport,
ContainerNameRequired, ContainerNameSupport, ContentDispositionOption, ContentDispositionSupport, ContentEncodingOption,
ContentEncodingSupport, ContentLanguageOption, ContentLanguageSupport, ContentTypeOption, ContentTypeSupport, IfMatchConditionOption,
IfMatchConditionSupport, LeaseIdOption, LeaseIdSupport, MetadataOption, MetadataSupport, No, TimeoutOption, TimeoutSupport, ToAssign,
Yes,
};
use futures::future::{done, ok};
use futures::prelude::*;
use hyper::{Method, StatusCode};
use std::collections::HashMap;
use std::marker::PhantomData;
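/// Builder for the Put Blob operation with an `AppendBlob` blob type.
///
/// The `ContainerNameSet` and `BlobNameSet` type parameters are type-state
/// markers (`Yes`/`No`): each setter for a mandatory field returns a builder
/// with the corresponding marker flipped to `Yes`, and `finalize` is only
/// implemented for `PutAppendBlobBuilder<'a, Yes, Yes>`, so a request cannot be
/// issued until both the container name and the blob name are set.
///
/// A minimal crate-internal usage sketch (hypothetical `client`, container and
/// blob names; not compiled here):
///
/// ```ignore
/// let fut = PutAppendBlobBuilder::new(&client)
///     .with_container_name("mycontainer")
///     .with_blob_name("append.log")
///     .finalize();
/// ```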
#[derive(Debug, Clone)]
pub struct PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
client: &'a Client,
p_container_name: PhantomData<ContainerNameSet>,
p_blob_name: PhantomData<BlobNameSet>,
container_name: Option<&'a str>,
blob_name: Option<&'a str>,
timeout: Option<u64>,
content_type: Option<&'a str>,
content_encoding: Option<&'a str>,
content_language: Option<&'a str>,
cache_control: Option<&'a str>,
content_disposition: Option<&'a str>,
metadata: Option<&'a HashMap<&'a str, &'a str>>,
lease_id: Option<&'a LeaseId>,
if_match_condition: Option<IfMatchCondition<'a>>,
client_request_id: Option<&'a str>,
}
impl<'a> PutAppendBlobBuilder<'a, No, No> {
#[inline]
pub(crate) fn new(client: &'a Client) -> PutAppendBlobBuilder<'a, No, No> {
PutAppendBlobBuilder {
client,
p_container_name: PhantomData {},
container_name: None,
p_blob_name: PhantomData {},
blob_name: None,
timeout: None,
content_type: None,
content_encoding: None,
content_language: None,
cache_control: None,
content_disposition: None,
metadata: None,
lease_id: None,
if_match_condition: None,
client_request_id: None,
}
}
}
impl<'a, ContainerNameSet, BlobNameSet> ClientRequired<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
#[inline]
fn client(&self) -> &'a Client {
self.client
}
}
impl<'a, BlobNameSet> ContainerNameRequired<'a> for PutAppendBlobBuilder<'a, Yes, BlobNameSet>
where
BlobNameSet: ToAssign,
{
#[inline]
fn container_name(&self) -> &'a str {
self.container_name.unwrap()
}
}
impl<'a, ContainerNameSet> BlobNameRequired<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, Yes>
where
ContainerNameSet: ToAssign,
{
#[inline]
fn blob_name(&self) -> &'a str {
self.blob_name.unwrap()
}
}
impl<'a, ContainerNameSet, BlobNameSet> TimeoutOption for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
#[inline]
fn timeout(&self) -> Option<u64> {
self.timeout
}
}
impl<'a, ContainerNameSet, BlobNameSet> ContentTypeOption<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
#[inline]
fn content_type(&self) -> Option<&'a str> {
self.content_type
}
}
impl<'a, ContainerNameSet, BlobNameSet> ContentEncodingOption<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
#[inline]
fn content_encoding(&self) -> Option<&'a str> {
self.content_encoding
}
}
impl<'a, ContainerNameSet, BlobNameSet> ContentLanguageOption<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
#[inline]
fn content_language(&self) -> Option<&'a str> {
self.content_language
}
}
impl<'a, ContainerNameSet, BlobNameSet> CacheControlOption<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
#[inline]
fn cache_control(&self) -> Option<&'a str> {
self.cache_control
}
}
impl<'a, ContainerNameSet, BlobNameSet> ContentDispositionOption<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
#[inline]
fn content_disposition(&self) -> Option<&'a str> {
self.content_disposition
}
}
impl<'a, ContainerNameSet, BlobNameSet> MetadataOption<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
#[inline]
fn metadata(&self) -> Option<&'a HashMap<&'a str, &'a str>> {
self.metadata
}
}
impl<'a, ContainerNameSet, BlobNameSet> LeaseIdOption<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
#[inline]
fn lease_id(&self) -> Option<&'a LeaseId> {
self.lease_id
}
}
impl<'a, ContainerNameSet, BlobNameSet> IfMatchConditionOption<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
#[inline]
fn if_match_condition(&self) -> Option<IfMatchCondition<'a>> {
self.if_match_condition
}
}
impl<'a, ContainerNameSet, BlobNameSet> ClientRequestIdOption<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
#[inline]
fn client_request_id(&self) -> Option<&'a str> {
self.client_request_id
}
}
impl<'a, ContainerNameSet, BlobNameSet> ContainerNameSupport<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
type O = PutAppendBlobBuilder<'a, Yes, BlobNameSet>;
#[inline]
fn with_container_name(self, container_name: &'a str) -> Self::O {
PutAppendBlobBuilder {
client: self.client,
p_container_name: PhantomData {},
p_blob_name: PhantomData {},
container_name: Some(container_name),
blob_name: self.blob_name,
timeout: self.timeout,
content_type: self.content_type,
content_encoding: self.content_encoding,
content_language: self.content_language,
cache_control: self.cache_control,
content_disposition: self.content_disposition,
metadata: self.metadata,
lease_id: self.lease_id,
if_match_condition: self.if_match_condition,
client_request_id: self.client_request_id,
}
}
}
impl<'a, ContainerNameSet, BlobNameSet> BlobNameSupport<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
type O = PutAppendBlobBuilder<'a, ContainerNameSet, Yes>;
#[inline]
fn with_blob_name(self, blob_name: &'a str) -> Self::O {
PutAppendBlobBuilder {
client: self.client,
p_container_name: PhantomData {},
p_blob_name: PhantomData {},
container_name: self.container_name,
blob_name: Some(blob_name),
timeout: self.timeout,
content_type: self.content_type,
content_encoding: self.content_encoding,
content_language: self.content_language,
cache_control: self.cache_control,
content_disposition: self.content_disposition,
metadata: self.metadata,
lease_id: self.lease_id,
if_match_condition: self.if_match_condition,
client_request_id: self.client_request_id,
}
}
}
impl<'a, ContainerNameSet, BlobNameSet> TimeoutSupport for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
type O = PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>;
#[inline]
fn with_timeout(self, timeout: u64) -> Self::O {
PutAppendBlobBuilder {
client: self.client,
p_container_name: PhantomData {},
p_blob_name: PhantomData {},
container_name: self.container_name,
blob_name: self.blob_name,
timeout: Some(timeout),
content_type: self.content_type,
content_encoding: self.content_encoding,
content_language: self.content_language,
cache_control: self.cache_control,
content_disposition: self.content_disposition,
metadata: self.metadata,
lease_id: self.lease_id,
if_match_condition: self.if_match_condition,
client_request_id: self.client_request_id,
}
}
}
impl<'a, ContainerNameSet, BlobNameSet> ContentTypeSupport<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
type O = PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>;
#[inline]
fn with_content_type(self, content_type: &'a str) -> Self::O {
PutAppendBlobBuilder {
client: self.client,
p_container_name: PhantomData {},
p_blob_name: PhantomData {},
container_name: self.container_name,
blob_name: self.blob_name,
timeout: self.timeout,
content_type: Some(content_type),
content_encoding: self.content_encoding,
content_language: self.content_language,
cache_control: self.cache_control,
content_disposition: self.content_disposition,
metadata: self.metadata,
lease_id: self.lease_id,
if_match_condition: self.if_match_condition,
client_request_id: self.client_request_id,
}
}
}
impl<'a, ContainerNameSet, BlobNameSet> ContentEncodingSupport<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
type O = PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>;
#[inline]
fn with_content_encoding(self, content_encoding: &'a str) -> Self::O {
PutAppendBlobBuilder {
client: self.client,
p_container_name: PhantomData {},
p_blob_name: PhantomData {},
container_name: self.container_name,
blob_name: self.blob_name,
timeout: self.timeout,
content_type: self.content_type,
content_encoding: Some(content_encoding),
content_language: self.content_language,
cache_control: self.cache_control,
content_disposition: self.content_disposition,
metadata: self.metadata,
lease_id: self.lease_id,
if_match_condition: self.if_match_condition,
client_request_id: self.client_request_id,
}
}
}
impl<'a, ContainerNameSet, BlobNameSet> ContentLanguageSupport<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
type O = PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>;
#[inline]
fn with_content_language(self, content_language: &'a str) -> Self::O {
PutAppendBlobBuilder {
client: self.client,
p_container_name: PhantomData {},
p_blob_name: PhantomData {},
container_name: self.container_name,
blob_name: self.blob_name,
timeout: self.timeout,
content_type: self.content_type,
content_encoding: self.content_encoding,
content_language: Some(content_language),
cache_control: self.cache_control,
content_disposition: self.content_disposition,
metadata: self.metadata,
lease_id: self.lease_id,
if_match_condition: self.if_match_condition,
client_request_id: self.client_request_id,
}
}
}
impl<'a, ContainerNameSet, BlobNameSet> CacheControlSupport<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
type O = PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>;
#[inline]
fn with_cache_control(self, cache_control: &'a str) -> Self::O {
PutAppendBlobBuilder {
client: self.client,
p_container_name: PhantomData {},
p_blob_name: PhantomData {},
container_name: self.container_name,
blob_name: self.blob_name,
timeout: self.timeout,
content_type: self.content_type,
content_encoding: self.content_encoding,
content_language: self.content_language,
cache_control: Some(cache_control),
content_disposition: self.content_disposition,
metadata: self.metadata,
lease_id: self.lease_id,
if_match_condition: self.if_match_condition,
client_request_id: self.client_request_id,
}
}
}
impl<'a, ContainerNameSet, BlobNameSet> ContentDispositionSupport<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
type O = PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>;
#[inline]
fn with_content_disposition(self, content_disposition: &'a str) -> Self::O {
PutAppendBlobBuilder {
client: self.client,
p_container_name: PhantomData {},
p_blob_name: PhantomData {},
container_name: self.container_name,
blob_name: self.blob_name,
timeout: self.timeout,
content_type: self.content_type,
content_encoding: self.content_encoding,
content_language: self.content_language,
cache_control: self.cache_control,
content_disposition: Some(content_disposition),
metadata: self.metadata,
lease_id: self.lease_id,
if_match_condition: self.if_match_condition,
client_request_id: self.client_request_id,
}
}
}
impl<'a, ContainerNameSet, BlobNameSet> MetadataSupport<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
type O = PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>;
#[inline]
fn with_metadata(self, metadata: &'a HashMap<&'a str, &'a str>) -> Self::O {
PutAppendBlobBuilder {
client: self.client,
p_container_name: PhantomData {},
p_blob_name: PhantomData {},
container_name: self.container_name,
blob_name: self.blob_name,
timeout: self.timeout,
content_type: self.content_type,
content_encoding: self.content_encoding,
content_language: self.content_language,
cache_control: self.cache_control,
content_disposition: self.content_disposition,
metadata: Some(metadata),
lease_id: self.lease_id,
if_match_condition: self.if_match_condition,
client_request_id: self.client_request_id,
}
}
}
impl<'a, ContainerNameSet, BlobNameSet> LeaseIdSupport<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
type O = PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>;
#[inline]
fn with_lease_id(self, lease_id: &'a LeaseId) -> Self::O {
PutAppendBlobBuilder {
client: self.client,
p_container_name: PhantomData {},
p_blob_name: PhantomData {},
container_name: self.container_name,
blob_name: self.blob_name,
timeout: self.timeout,
content_type: self.content_type,
content_encoding: self.content_encoding,
content_language: self.content_language,
cache_control: self.cache_control,
content_disposition: self.content_disposition,
metadata: self.metadata,
lease_id: Some(lease_id),
if_match_condition: self.if_match_condition,
client_request_id: self.client_request_id,
}
}
}
impl<'a, ContainerNameSet, BlobNameSet> IfMatchConditionSupport<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
type O = PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>;
#[inline]
fn with_if_match_condition(self, if_match_condition: IfMatchCondition<'a>) -> Self::O {
PutAppendBlobBuilder {
client: self.client,
p_container_name: PhantomData {},
p_blob_name: PhantomData {},
container_name: self.container_name,
blob_name: self.blob_name,
timeout: self.timeout,
content_type: self.content_type,
content_encoding: self.content_encoding,
content_language: self.content_language,
cache_control: self.cache_control,
content_disposition: self.content_disposition,
metadata: self.metadata,
lease_id: self.lease_id,
if_match_condition: Some(if_match_condition),
client_request_id: self.client_request_id,
}
}
}
impl<'a, ContainerNameSet, BlobNameSet> ClientRequestIdSupport<'a> for PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
type O = PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>;
#[inline]
fn with_client_request_id(self, client_request_id: &'a str) -> Self::O {
PutAppendBlobBuilder {
client: self.client,
p_container_name: PhantomData {},
p_blob_name: PhantomData {},
container_name: self.container_name,
blob_name: self.blob_name,
timeout: self.timeout,
content_type: self.content_type,
content_encoding: self.content_encoding,
content_language: self.content_language,
cache_control: self.cache_control,
content_disposition: self.content_disposition,
metadata: self.metadata,
lease_id: self.lease_id,
if_match_condition: self.if_match_condition,
client_request_id: Some(client_request_id),
}
}
}
// methods callable regardless
impl<'a, ContainerNameSet, BlobNameSet> PutAppendBlobBuilder<'a, ContainerNameSet, BlobNameSet>
where
ContainerNameSet: ToAssign,
BlobNameSet: ToAssign,
{
}
impl<'a> PutAppendBlobBuilder<'a, Yes, Yes> {
#[inline]
pub fn finalize(self) -> impl Future<Item = PutBlobResponse, Error = AzureError> {
let mut uri = generate_blob_uri(&self, None);
if let Some(timeout) = TimeoutOption::to_uri_parameter(&self) {
uri = format!("{}?{}", uri, timeout);
}
trace!("uri == {:?}", uri);
let req = self.client().perform_request(
&uri,
&Method::PUT,
|ref mut request| {
ContentTypeOption::add_header(&self, request);
ContentEncodingOption::add_header(&self, request);
ContentLanguageOption::add_header(&self, request);
CacheControlOption::add_header(&self, request);
ContentDispositionOption::add_header(&self, request);
MetadataOption::add_header(&self, request);
request.header(BLOB_TYPE, "AppendBlob");
LeaseIdOption::add_header(&self, request);
IfMatchConditionOption::add_header(&self, request);
ClientRequestIdOption::add_header(&self, request);
},
None,
);
done(req)
.from_err()
.and_then(move |response| check_status_extract_headers_and_body(response, StatusCode::CREATED))
.and_then(move |(headers, _body)| done(PutBlobResponse::from_headers(&headers)).and_then(ok))
}
}
| 34.74 | 137 | 0.660814 |
7af0c57a0e58d2247b76e108422be29520516acc | 7,938 | use super::*;
use crate::types::BalanceOf;
use bitcoin::{
formatter::{Formattable, TryFormattable},
types::{
BlockBuilder, H256Le, RawBlockHeader, TransactionBuilder, TransactionInputBuilder, TransactionInputSource,
TransactionOutput,
},
};
use btc_relay::{BtcAddress, BtcPublicKey, Pallet as BtcRelay};
use currency::Amount;
use frame_benchmarking::{account, benchmarks, impl_benchmark_test_suite};
use frame_support::{assert_ok, traits::Get};
use frame_system::RawOrigin;
use oracle::Pallet as Oracle;
use orml_traits::MultiCurrency;
use primitives::{CurrencyId, VaultId};
use security::Pallet as Security;
use sp_core::{H160, U256};
use sp_runtime::traits::One;
use sp_std::prelude::*;
use vault_registry::{
types::{Vault, Wallet},
Pallet as VaultRegistry,
};
#[cfg(test)]
use crate::Pallet as Relay;
type UnsignedFixedPoint<T> = <T as currency::Config>::UnsignedFixedPoint;
pub const DEFAULT_TESTING_CURRENCY: CurrencyId = CurrencyId::DOT;
fn dummy_public_key() -> BtcPublicKey {
BtcPublicKey([
2, 205, 114, 218, 156, 16, 235, 172, 106, 37, 18, 153, 202, 140, 176, 91, 207, 51, 187, 55, 18, 45, 222, 180,
119, 54, 243, 97, 173, 150, 161, 169, 230,
])
}
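/// Credits `amount` of the default testing collateral currency to `account_id`
/// so that benchmark setup code can deposit it as collateral.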
fn mint_collateral<T: crate::Config>(account_id: &T::AccountId, amount: BalanceOf<T>) {
<orml_tokens::Pallet<T>>::deposit(DEFAULT_TESTING_CURRENCY, account_id, amount).unwrap();
}
benchmarks! {
initialize {
let height = 0u32;
let origin: T::AccountId = account("Origin", 0, 0);
let stake = 100u32;
let address = BtcAddress::P2PKH(H160::from([0; 20]));
let block = BlockBuilder::new()
.with_version(4)
.with_coinbase(&address, 50, 3)
.with_timestamp(1588813835)
.mine(U256::from(2).pow(254.into())).unwrap();
let block_header = RawBlockHeader::from_bytes(&block.header.try_format().unwrap()).unwrap();
}: _(RawOrigin::Signed(origin), block_header, height)
store_block_header {
let origin: T::AccountId = account("Origin", 0, 0);
let address = BtcAddress::P2PKH(H160::from([0; 20]));
let height = 0;
let stake = 100u32;
let init_block = BlockBuilder::new()
.with_version(4)
.with_coinbase(&address, 50, 3)
.with_timestamp(1588813835)
.mine(U256::from(2).pow(254.into())).unwrap();
let init_block_hash = init_block.header.hash;
let raw_block_header = RawBlockHeader::from_bytes(&init_block.header.try_format().unwrap())
.expect("could not serialize block header");
let block_header = BtcRelay::<T>::parse_raw_block_header(&raw_block_header).unwrap();
BtcRelay::<T>::initialize(origin.clone(), block_header, height).unwrap();
let block = BlockBuilder::new()
.with_previous_hash(init_block_hash)
.with_version(4)
.with_coinbase(&address, 50, 3)
.with_timestamp(1588814835)
.mine(U256::from(2).pow(254.into())).unwrap();
let raw_block_header = RawBlockHeader::from_bytes(&block.header.try_format().unwrap())
.expect("could not serialize block header");
}: _(RawOrigin::Signed(origin), raw_block_header)
report_vault_theft {
let origin: T::AccountId = account("Origin", 0, 0);
let relayer_id: T::AccountId = account("Relayer", 0, 0);
let vault_address = BtcAddress::P2PKH(H160::from_slice(&[
126, 125, 148, 208, 221, 194, 29, 131, 191, 188, 252, 119, 152, 228, 84, 126, 223, 8,
50, 170,
]));
let address = BtcAddress::P2PKH(H160([0; 20]));
let vault_id: VaultId<T::AccountId, _> = VaultId::new(
account("Vault", 0, 0),
T::GetGriefingCollateralCurrencyId::get(),
T::GetWrappedCurrencyId::get()
);
let mut vault = Vault {
wallet: Wallet::new(dummy_public_key()),
id: vault_id.clone(),
..Vault::new(vault_id.clone(), Default::default())
};
vault.wallet.add_btc_address(vault_address);
VaultRegistry::<T>::insert_vault(
&vault_id,
vault
);
VaultRegistry::<T>::set_secure_collateral_threshold(vault_id.currencies.clone(), UnsignedFixedPoint::<T>::one());
VaultRegistry::<T>::set_collateral_ceiling(vault_id.currencies.clone(), 1_000_000_000u32.into());
mint_collateral::<T>(&vault_id.account_id, 1000u32.into());
assert_ok!(VaultRegistry::<T>::try_deposit_collateral(&vault_id, &Amount::new(1000u32.into(), T::GetGriefingCollateralCurrencyId::get())));
let height = 0;
let block = BlockBuilder::new()
.with_version(4)
.with_coinbase(&address, 50, 3)
.with_timestamp(1588813835)
.mine(U256::from(2).pow(254.into())).unwrap();
let block_hash = block.header.hash;
let raw_block_header = RawBlockHeader::from_bytes(&block.header.try_format().unwrap()).unwrap();
let block_header = BtcRelay::<T>::parse_raw_block_header(&raw_block_header).unwrap();
Security::<T>::set_active_block_number(1u32.into());
BtcRelay::<T>::initialize(relayer_id.clone(), block_header, height).unwrap();
let value = 0;
let transaction = TransactionBuilder::new()
.with_version(2)
.add_input(
TransactionInputBuilder::new()
.with_sequence(4294967295)
.with_source(TransactionInputSource::FromOutput(H256Le::from_bytes_le(&[
193, 80, 65, 160, 109, 235, 107, 56, 24, 176, 34, 250, 197, 88, 218, 76,
226, 9, 127, 8, 96, 200, 246, 66, 16, 91, 186, 217, 210, 155, 224, 42,
]), 1))
.with_script(&[
73, 48, 70, 2, 33, 0, 207, 210, 162, 211, 50, 178, 154, 220, 225, 25, 197,
90, 159, 173, 211, 192, 115, 51, 32, 36, 183, 226, 114, 81, 62, 81, 98, 60,
161, 89, 147, 72, 2, 33, 0, 155, 72, 45, 127, 123, 77, 71, 154, 255, 98,
189, 205, 174, 165, 70, 103, 115, 125, 86, 248, 212, 214, 61, 208, 62, 195,
239, 101, 30, 217, 162, 84, 1, 33, 3, 37, 248, 176, 57, 161, 24, 97, 101,
156, 155, 240, 63, 67, 252, 78, 160, 85, 243, 167, 28, 214, 12, 123, 31,
212, 116, 171, 87, 143, 153, 119, 250,
])
.build(),
)
.add_output(TransactionOutput::payment(value.into(), &address))
.build();
let block = BlockBuilder::new()
.with_previous_hash(block_hash)
.with_version(4)
.with_coinbase(&address, 50, 3)
.with_timestamp(1588813835)
.add_transaction(transaction.clone())
.mine(U256::from(2).pow(254.into())).unwrap();
let tx_id = transaction.tx_id();
let proof = block.merkle_proof(&[tx_id]).unwrap().try_format().unwrap();
let raw_tx = transaction.format_with(true);
let raw_block_header = RawBlockHeader::from_bytes(&block.header.try_format().unwrap()).unwrap();
let block_header = BtcRelay::<T>::parse_raw_block_header(&raw_block_header).unwrap();
BtcRelay::<T>::store_block_header(&relayer_id, block_header).unwrap();
Security::<T>::set_active_block_number(Security::<T>::active_block_number() +
BtcRelay::<T>::parachain_confirmations() + 1u32.into());
Oracle::<T>::_set_exchange_rate(DEFAULT_TESTING_CURRENCY,
<T as currency::Config>::UnsignedFixedPoint::one()
).unwrap();
}: _(RawOrigin::Signed(origin), vault_id, proof, raw_tx)
}
impl_benchmark_test_suite!(Relay, crate::mock::ExtBuilder::build_with(|_| {}), crate::mock::Test);
| 41.778947 | 147 | 0.597002 |
23a6e54e0a174b92d6b9b6ca6fff5e1204347195 | 583 | // Copyright 2019 TiKV Project Authors. Licensed under Apache-2.0.
use crate::engine::KvEngine;
use crate::iterable::Iterable;
use crate::peekable::Peekable;
use std::fmt::Debug;
use std::ops::Deref;
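/// A consistent, read-only view of a `KvEngine`: it can be peeked and iterated
/// like the engine itself, reports its column-family names, and can be wrapped
/// into a cheaply cloneable, thread-safe `SyncSnapshot`.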
pub trait Snapshot<E>
where
Self: 'static + Peekable + Iterable + Send + Sync + Sized + Debug,
E: KvEngine,
{
type SyncSnapshot: SyncSnapshot<Self>;
fn cf_names(&self) -> Vec<&str>;
fn into_sync(self) -> Self::SyncSnapshot;
fn get_db(&self) -> &E;
}
pub trait SyncSnapshot<T>
where
Self: Clone + Send + Sync + Sized + Debug + Deref<Target = T>,
{
}
| 20.821429 | 70 | 0.662093 |
1a037f3a0ebe854f18cfade0451fa7019f8d6f08 | 17,770 | use std::{collections::BTreeSet, iter::FromIterator};
use assert_matches::assert_matches;
use once_cell::sync::Lazy;
use casper_engine_test_support::{
DeployItemBuilder, ExecuteRequestBuilder, InMemoryWasmTestBuilder, DEFAULT_ACCOUNT_ADDR,
DEFAULT_PAYMENT, DEFAULT_RUN_GENESIS_REQUEST,
};
use casper_execution_engine::core::{engine_state::Error, execution};
use casper_types::{
contracts::{self, CONTRACT_INITIAL_VERSION, MAX_GROUPS},
runtime_args, Group, Key, RuntimeArgs,
};
const CONTRACT_GROUPS: &str = "manage_groups.wasm";
const PACKAGE_HASH_KEY: &str = "package_hash_key";
const PACKAGE_ACCESS_KEY: &str = "package_access_key";
const CREATE_GROUP: &str = "create_group";
const REMOVE_GROUP: &str = "remove_group";
const EXTEND_GROUP_UREFS: &str = "extend_group_urefs";
const REMOVE_GROUP_UREFS: &str = "remove_group_urefs";
const GROUP_NAME_ARG: &str = "group_name";
const UREFS_ARG: &str = "urefs";
const NEW_UREFS_COUNT: u64 = 3;
const GROUP_1_NAME: &str = "Group 1";
const TOTAL_NEW_UREFS_ARG: &str = "total_new_urefs";
const TOTAL_EXISTING_UREFS_ARG: &str = "total_existing_urefs";
const ARG_AMOUNT: &str = "amount";
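/// Default arguments for the `create_group` entry point: create "Group 1" with
/// one newly created and one existing URef.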
static DEFAULT_CREATE_GROUP_ARGS: Lazy<RuntimeArgs> = Lazy::new(|| {
runtime_args! {
GROUP_NAME_ARG => GROUP_1_NAME,
TOTAL_NEW_UREFS_ARG => 1u64,
TOTAL_EXISTING_UREFS_ARG => 1u64,
}
});
#[ignore]
#[test]
fn should_create_and_remove_group() {
    // This test runs a contract that, after every call, extends the same key
    // with more data.
let exec_request_1 = ExecuteRequestBuilder::standard(
*DEFAULT_ACCOUNT_ADDR,
CONTRACT_GROUPS,
RuntimeArgs::default(),
)
.build();
let mut builder = InMemoryWasmTestBuilder::default();
builder.run_genesis(&DEFAULT_RUN_GENESIS_REQUEST);
builder.exec(exec_request_1).expect_success().commit();
let account = builder
.query(None, Key::Account(*DEFAULT_ACCOUNT_ADDR), &[])
.expect("should query account")
.as_account()
.cloned()
.expect("should be account");
let package_hash = account
.named_keys()
.get(PACKAGE_HASH_KEY)
.expect("should have contract package");
let _access_uref = account
.named_keys()
.get(PACKAGE_ACCESS_KEY)
.expect("should have package hash");
let exec_request_2 = {
        // This inserts the package as an argument because this test can work
        // from different accounts, which might not have the same keys in their
        // session code.
let deploy = DeployItemBuilder::new()
.with_address(*DEFAULT_ACCOUNT_ADDR)
.with_stored_versioned_contract_by_name(
PACKAGE_HASH_KEY,
Some(CONTRACT_INITIAL_VERSION),
CREATE_GROUP,
DEFAULT_CREATE_GROUP_ARGS.clone(),
)
.with_empty_payment_bytes(runtime_args! { ARG_AMOUNT => *DEFAULT_PAYMENT })
.with_authorization_keys(&[*DEFAULT_ACCOUNT_ADDR])
.with_deploy_hash([3; 32])
.build();
ExecuteRequestBuilder::new().push_deploy(deploy).build()
};
builder.exec(exec_request_2).expect_success().commit();
let query_result = builder
.query(None, *package_hash, &[])
.expect("should have result");
let contract_package = query_result
.as_contract_package()
.expect("should be package");
assert_eq!(contract_package.groups().len(), 1);
let group_1 = contract_package
.groups()
.get(&Group::new(GROUP_1_NAME))
.expect("should have group");
assert_eq!(group_1.len(), 2);
let exec_request_3 = {
        // This inserts the package as an argument because this test can work
        // from different accounts, which might not have the same keys in their
        // session code.
let args = runtime_args! {
GROUP_NAME_ARG => GROUP_1_NAME,
};
let deploy = DeployItemBuilder::new()
.with_address(*DEFAULT_ACCOUNT_ADDR)
.with_stored_versioned_contract_by_name(
PACKAGE_HASH_KEY,
Some(CONTRACT_INITIAL_VERSION),
REMOVE_GROUP,
args,
)
.with_empty_payment_bytes(runtime_args! { ARG_AMOUNT => *DEFAULT_PAYMENT })
.with_authorization_keys(&[*DEFAULT_ACCOUNT_ADDR])
.with_deploy_hash([3; 32])
.build();
ExecuteRequestBuilder::new().push_deploy(deploy).build()
};
builder.exec(exec_request_3).expect_success().commit();
let query_result = builder
.query(None, *package_hash, &[])
.expect("should have result");
let contract_package = query_result
.as_contract_package()
.expect("should be package");
assert_eq!(
contract_package.groups().get(&Group::new(GROUP_1_NAME)),
None
);
}
#[ignore]
#[test]
fn should_create_and_extend_user_group() {
    // This test runs a contract that, after every call, extends the same key
    // with more data.
let exec_request_1 = ExecuteRequestBuilder::standard(
*DEFAULT_ACCOUNT_ADDR,
CONTRACT_GROUPS,
RuntimeArgs::default(),
)
.build();
let mut builder = InMemoryWasmTestBuilder::default();
builder.run_genesis(&DEFAULT_RUN_GENESIS_REQUEST);
builder.exec(exec_request_1).expect_success().commit();
let account = builder
.query(None, Key::Account(*DEFAULT_ACCOUNT_ADDR), &[])
.expect("should query account")
.as_account()
.cloned()
.expect("should be account");
let package_hash = account
.named_keys()
.get(PACKAGE_HASH_KEY)
.expect("should have contract package");
let _access_uref = account
.named_keys()
.get(PACKAGE_ACCESS_KEY)
.expect("should have package hash");
let exec_request_2 = {
        // This inserts the package as an argument because this test can work
        // from different accounts, which might not have the same keys in their
        // session code.
let deploy = DeployItemBuilder::new()
.with_address(*DEFAULT_ACCOUNT_ADDR)
.with_stored_versioned_contract_by_name(
PACKAGE_HASH_KEY,
Some(CONTRACT_INITIAL_VERSION),
CREATE_GROUP,
DEFAULT_CREATE_GROUP_ARGS.clone(),
)
.with_empty_payment_bytes(runtime_args! { ARG_AMOUNT => *DEFAULT_PAYMENT })
.with_authorization_keys(&[*DEFAULT_ACCOUNT_ADDR])
.with_deploy_hash([5; 32])
.build();
ExecuteRequestBuilder::new().push_deploy(deploy).build()
};
builder.exec(exec_request_2).expect_success().commit();
let query_result = builder
.query(None, *package_hash, &[])
.expect("should have result");
let contract_package = query_result
.as_contract_package()
.expect("should be package");
assert_eq!(contract_package.groups().len(), 1);
let group_1 = contract_package
.groups()
.get(&Group::new(GROUP_1_NAME))
.expect("should have group");
assert_eq!(group_1.len(), 2);
let exec_request_3 = {
        // This inserts the package as an argument because this test can work
        // from different accounts, which might not have the same keys in their
        // session code.
let args = runtime_args! {
GROUP_NAME_ARG => GROUP_1_NAME,
TOTAL_NEW_UREFS_ARG => NEW_UREFS_COUNT,
};
let deploy = DeployItemBuilder::new()
.with_address(*DEFAULT_ACCOUNT_ADDR)
.with_stored_versioned_contract_by_name(
PACKAGE_HASH_KEY,
Some(CONTRACT_INITIAL_VERSION),
EXTEND_GROUP_UREFS,
args,
)
.with_empty_payment_bytes(runtime_args! { ARG_AMOUNT => *DEFAULT_PAYMENT })
.with_authorization_keys(&[*DEFAULT_ACCOUNT_ADDR])
.with_deploy_hash([3; 32])
.build();
ExecuteRequestBuilder::new().push_deploy(deploy).build()
};
builder.exec(exec_request_3).expect_success().commit();
let query_result = builder
.query(None, *package_hash, &[])
.expect("should have result");
let contract_package = query_result
.as_contract_package()
.expect("should be package");
let group_1_extended = contract_package
.groups()
.get(&Group::new(GROUP_1_NAME))
.expect("should have group");
assert!(group_1_extended.len() > group_1.len());
// Calculates how many new urefs were created
let new_urefs: BTreeSet<_> = group_1_extended.difference(group_1).collect();
assert_eq!(new_urefs.len(), NEW_UREFS_COUNT as usize);
}
#[ignore]
#[test]
fn should_create_and_remove_urefs_from_group() {
    // This test runs a contract that, after every call, extends the same key
    // with more data.
let exec_request_1 = ExecuteRequestBuilder::standard(
*DEFAULT_ACCOUNT_ADDR,
CONTRACT_GROUPS,
RuntimeArgs::default(),
)
.build();
let mut builder = InMemoryWasmTestBuilder::default();
builder.run_genesis(&DEFAULT_RUN_GENESIS_REQUEST);
builder.exec(exec_request_1).expect_success().commit();
let account = builder
.query(None, Key::Account(*DEFAULT_ACCOUNT_ADDR), &[])
.expect("should query account")
.as_account()
.cloned()
.expect("should be account");
let package_hash = account
.named_keys()
.get(PACKAGE_HASH_KEY)
.expect("should have contract package");
let _access_uref = account
.named_keys()
.get(PACKAGE_ACCESS_KEY)
.expect("should have package hash");
let exec_request_2 = {
        // This inserts the package as an argument because this test can work
        // from different accounts, which might not have the same keys in their
        // session code.
let deploy = DeployItemBuilder::new()
.with_address(*DEFAULT_ACCOUNT_ADDR)
.with_stored_versioned_contract_by_name(
PACKAGE_HASH_KEY,
Some(CONTRACT_INITIAL_VERSION),
CREATE_GROUP,
DEFAULT_CREATE_GROUP_ARGS.clone(),
)
.with_empty_payment_bytes(runtime_args! { ARG_AMOUNT => *DEFAULT_PAYMENT })
.with_authorization_keys(&[*DEFAULT_ACCOUNT_ADDR])
.with_deploy_hash([3; 32])
.build();
ExecuteRequestBuilder::new().push_deploy(deploy).build()
};
builder.exec(exec_request_2).expect_success().commit();
let query_result = builder
.query(None, *package_hash, &[])
.expect("should have result");
let contract_package = query_result
.as_contract_package()
.expect("should be package");
assert_eq!(contract_package.groups().len(), 1);
let group_1 = contract_package
.groups()
.get(&Group::new(GROUP_1_NAME))
.expect("should have group");
assert_eq!(group_1.len(), 2);
let urefs_to_remove = Vec::from_iter(group_1.to_owned());
let exec_request_3 = {
        // This inserts the package as an argument because this test can work
        // from different accounts, which might not have the same keys in their
        // session code.
let args = runtime_args! {
GROUP_NAME_ARG => GROUP_1_NAME,
UREFS_ARG => urefs_to_remove,
};
let deploy = DeployItemBuilder::new()
.with_address(*DEFAULT_ACCOUNT_ADDR)
.with_stored_versioned_contract_by_name(
PACKAGE_HASH_KEY,
Some(CONTRACT_INITIAL_VERSION),
REMOVE_GROUP_UREFS,
args,
)
.with_empty_payment_bytes(runtime_args! { ARG_AMOUNT => *DEFAULT_PAYMENT })
.with_authorization_keys(&[*DEFAULT_ACCOUNT_ADDR])
.with_deploy_hash([3; 32])
.build();
ExecuteRequestBuilder::new().push_deploy(deploy).build()
};
builder.exec(exec_request_3).expect_success().commit();
let query_result = builder
.query(None, *package_hash, &[])
.expect("should have result");
let contract_package = query_result
.as_contract_package()
.expect("should be package");
let group_1_modified = contract_package
.groups()
.get(&Group::new(GROUP_1_NAME))
.expect("should have group 1");
assert!(group_1_modified.len() < group_1.len());
}
#[ignore]
#[test]
fn should_limit_max_urefs_while_extending() {
    // This test runs a contract that, after every call, extends the same key
    // with more data.
let exec_request_1 = ExecuteRequestBuilder::standard(
*DEFAULT_ACCOUNT_ADDR,
CONTRACT_GROUPS,
RuntimeArgs::default(),
)
.build();
let mut builder = InMemoryWasmTestBuilder::default();
builder.run_genesis(&DEFAULT_RUN_GENESIS_REQUEST);
builder.exec(exec_request_1).expect_success().commit();
let account = builder
.query(None, Key::Account(*DEFAULT_ACCOUNT_ADDR), &[])
.expect("should query account")
.as_account()
.cloned()
.expect("should be account");
let package_hash = account
.named_keys()
.get(PACKAGE_HASH_KEY)
.expect("should have contract package");
let _access_uref = account
.named_keys()
.get(PACKAGE_ACCESS_KEY)
.expect("should have package hash");
let exec_request_2 = {
        // This inserts the package as an argument because this test can work
        // from different accounts, which might not have the same keys in their
        // session code.
let deploy = DeployItemBuilder::new()
.with_address(*DEFAULT_ACCOUNT_ADDR)
.with_stored_versioned_contract_by_name(
PACKAGE_HASH_KEY,
Some(CONTRACT_INITIAL_VERSION),
CREATE_GROUP,
DEFAULT_CREATE_GROUP_ARGS.clone(),
)
.with_empty_payment_bytes(runtime_args! { ARG_AMOUNT => *DEFAULT_PAYMENT })
.with_authorization_keys(&[*DEFAULT_ACCOUNT_ADDR])
.with_deploy_hash([3; 32])
.build();
ExecuteRequestBuilder::new().push_deploy(deploy).build()
};
builder.exec(exec_request_2).expect_success().commit();
let query_result = builder
.query(None, *package_hash, &[])
.expect("should have result");
let contract_package = query_result
.as_contract_package()
.expect("should be package");
assert_eq!(contract_package.groups().len(), 1);
let group_1 = contract_package
.groups()
.get(&Group::new(GROUP_1_NAME))
.expect("should have group");
assert_eq!(group_1.len(), 2);
let exec_request_3 = {
        // This inserts the package as an argument because this test can work
        // from different accounts, which might not have the same keys in their
        // session code.
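        // Extends the group so that it reaches MAX_GROUPS urefs in total
        // (asserted below); the follow-up request then tries to add one more
        // and should be rejected.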
let args = runtime_args! {
GROUP_NAME_ARG => GROUP_1_NAME,
TOTAL_NEW_UREFS_ARG => 8u64,
};
let deploy = DeployItemBuilder::new()
.with_address(*DEFAULT_ACCOUNT_ADDR)
.with_stored_versioned_contract_by_name(
PACKAGE_HASH_KEY,
Some(CONTRACT_INITIAL_VERSION),
EXTEND_GROUP_UREFS,
args,
)
.with_empty_payment_bytes(runtime_args! { ARG_AMOUNT => *DEFAULT_PAYMENT })
.with_authorization_keys(&[*DEFAULT_ACCOUNT_ADDR])
.with_deploy_hash([5; 32])
.build();
ExecuteRequestBuilder::new().push_deploy(deploy).build()
};
let exec_request_4 = {
        // This inserts the package as an argument because this test can work
        // from different accounts, which might not have the same keys in their
        // session code.
let args = runtime_args! {
GROUP_NAME_ARG => GROUP_1_NAME,
// Exceeds by 1
TOTAL_NEW_UREFS_ARG => 1u64,
};
let deploy = DeployItemBuilder::new()
.with_address(*DEFAULT_ACCOUNT_ADDR)
.with_stored_versioned_contract_by_name(
PACKAGE_HASH_KEY,
Some(CONTRACT_INITIAL_VERSION),
EXTEND_GROUP_UREFS,
args,
)
.with_empty_payment_bytes(runtime_args! { ARG_AMOUNT => *DEFAULT_PAYMENT })
.with_authorization_keys(&[*DEFAULT_ACCOUNT_ADDR])
.with_deploy_hash([32; 32])
.build();
ExecuteRequestBuilder::new().push_deploy(deploy).build()
};
builder.exec(exec_request_3).expect_success().commit();
let query_result = builder
.query(None, *package_hash, &[])
.expect("should have result");
let contract_package = query_result
.as_contract_package()
.expect("should be package");
let group_1_modified = contract_package
.groups()
.get(&Group::new(GROUP_1_NAME))
.expect("should have group 1");
assert_eq!(group_1_modified.len(), MAX_GROUPS as usize);
// Tries to exceed the limit by 1
builder.exec(exec_request_4).commit();
let response = builder
.get_exec_results()
.last()
.expect("should have last response");
assert_eq!(response.len(), 1);
let exec_response = response.last().expect("should have response");
let error = exec_response.as_error().expect("should have error");
let error = assert_matches!(error, Error::Exec(execution::Error::Revert(e)) => e);
assert_eq!(error, &contracts::Error::MaxTotalURefsExceeded.into());
}
| 34.774951 | 95 | 0.630051 |
f502cfa43ff76440352c80ad5985ffbf1abaf34b | 106 | pub mod contract;
pub mod state;
mod global;
mod math;
mod querier;
mod user;
#[cfg(test)]
mod testing;
| 9.636364 | 17 | 0.707547 |
ef6d13534651760d7f1684d8484b02e98ddd53dc | 2,401 | use crate::util::{print_part_1, print_part_2};
use std::collections::HashSet;
use std::fs::read_to_string;
use std::time::Instant;
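/// Counts the distinct houses visited while following the arrow instructions
/// (`<`, `>`, `^`, `v`). With `part == 1` a single santa follows every
/// instruction; with `part == 2` two santas alternate instructions
/// (e.g. "^v" visits 3 houses in part 2).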
fn santa_travel(input: &str, part: usize) -> usize {
let mut santa_index = 0;
let mut coll = HashSet::new();
let mut x = [0, 0];
let mut y = [0, 0];
coll.insert((x[santa_index], y[santa_index]));
for dir in input.chars() {
match dir {
'<' => {
x[santa_index] -= 1;
}
'>' => {
x[santa_index] += 1;
}
'^' => {
y[santa_index] += 1;
}
'v' => {
y[santa_index] -= 1;
}
_ => unreachable!(),
}
coll.insert((x[santa_index], y[santa_index]));
if part == 2 {
santa_index ^= 1; // flip to next santa
}
}
coll.len()
}
pub fn main() {
let input = read_to_string("inputs/day03.txt").expect("Input not found..");
// PART 1
let start = Instant::now();
let known_answer = "2572";
let part_1: usize = santa_travel(&input, 1);
let duration = start.elapsed();
print_part_1(&part_1.to_string(), &known_answer, duration);
// PART 2
let start = Instant::now();
let known_answer = "2631";
let part_2: usize = santa_travel(&input, 2);
let duration = start.elapsed();
print_part_2(&part_2.to_string(), &known_answer, duration);
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_example_1() {
let input = ">";
let answer = santa_travel(&input, 1);
assert_eq!(answer, 2);
}
#[test]
fn test_example_2() {
let input = "^>v<";
let answer = santa_travel(&input, 1);
assert_eq!(answer, 4);
}
#[test]
fn test_example_3() {
let input = "^v^v^v^v^v";
let answer = santa_travel(&input, 1);
assert_eq!(answer, 2);
}
#[test]
fn test_example_4() {
let input = "^v";
let answer = santa_travel(&input, 2);
assert_eq!(answer, 3);
}
#[test]
fn test_example_5() {
let input = "^>v<";
let answer = santa_travel(&input, 2);
assert_eq!(answer, 3);
}
#[test]
fn test_example_6() {
let input = "^v^v^v^v^v";
let answer = santa_travel(&input, 2);
assert_eq!(answer, 11);
}
}
| 23.772277 | 79 | 0.502291 |
1afb3728c9d72e7d00d20a153631a312b72302e6 | 17,621 | // Copyright 2012-2014 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
use ascii::*;
use collections::HashMap;
use collections;
use env::split_paths;
use env;
use ffi::{OsString, OsStr};
use fmt;
use fs;
use io::{self, Error, ErrorKind};
use libc::c_void;
use mem;
use os::windows::ffi::OsStrExt;
use path::Path;
use ptr;
use sys::mutex::Mutex;
use sys::c;
use sys::fs::{OpenOptions, File};
use sys::handle::Handle;
use sys::pipe::{self, AnonPipe};
use sys::stdio;
use sys::{self, cvt};
use sys_common::{AsInner, FromInner};
////////////////////////////////////////////////////////////////////////////////
// Command
////////////////////////////////////////////////////////////////////////////////
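// Environment variable names are case-insensitive on Windows, so normalize
// keys to uppercase before using them in the environment map.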
fn mk_key(s: &OsStr) -> OsString {
FromInner::from_inner(sys::os_str::Buf {
inner: s.as_inner().inner.to_ascii_uppercase()
})
}
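// Rejects any value that contains an interior NUL code unit, since the Win32
// calls below treat NUL as a string terminator.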
fn ensure_no_nuls<T: AsRef<OsStr>>(str: T) -> io::Result<T> {
if str.as_ref().encode_wide().any(|b| b == 0) {
Err(io::Error::new(ErrorKind::InvalidInput, "nul byte found in provided data"))
} else {
Ok(str)
}
}
pub struct Command {
program: OsString,
args: Vec<OsString>,
env: Option<HashMap<OsString, OsString>>,
cwd: Option<OsString>,
flags: u32,
detach: bool, // not currently exposed in std::process
stdin: Option<Stdio>,
stdout: Option<Stdio>,
stderr: Option<Stdio>,
}
pub enum Stdio {
Inherit,
Null,
MakePipe,
Handle(Handle),
}
pub struct StdioPipes {
pub stdin: Option<AnonPipe>,
pub stdout: Option<AnonPipe>,
pub stderr: Option<AnonPipe>,
}
struct DropGuard<'a> {
lock: &'a Mutex,
}
impl Command {
pub fn new(program: &OsStr) -> Command {
Command {
program: program.to_os_string(),
args: Vec::new(),
env: None,
cwd: None,
flags: 0,
detach: false,
stdin: None,
stdout: None,
stderr: None,
}
}
pub fn arg(&mut self, arg: &OsStr) {
self.args.push(arg.to_os_string())
}
fn init_env_map(&mut self){
if self.env.is_none() {
self.env = Some(env::vars_os().map(|(key, val)| {
(mk_key(&key), val)
}).collect());
}
}
pub fn env(&mut self, key: &OsStr, val: &OsStr) {
self.init_env_map();
self.env.as_mut().unwrap().insert(mk_key(key), val.to_os_string());
}
pub fn env_remove(&mut self, key: &OsStr) {
self.init_env_map();
self.env.as_mut().unwrap().remove(&mk_key(key));
}
pub fn env_clear(&mut self) {
self.env = Some(HashMap::new())
}
pub fn cwd(&mut self, dir: &OsStr) {
self.cwd = Some(dir.to_os_string())
}
pub fn stdin(&mut self, stdin: Stdio) {
self.stdin = Some(stdin);
}
pub fn stdout(&mut self, stdout: Stdio) {
self.stdout = Some(stdout);
}
pub fn stderr(&mut self, stderr: Stdio) {
self.stderr = Some(stderr);
}
pub fn creation_flags(&mut self, flags: u32) {
self.flags = flags;
}
pub fn spawn(&mut self, default: Stdio, needs_stdin: bool)
-> io::Result<(Process, StdioPipes)> {
// To have the spawning semantics of unix/windows stay the same, we need
// to read the *child's* PATH if one is provided. See #15149 for more
// details.
let program = self.env.as_ref().and_then(|env| {
for (key, v) in env {
if OsStr::new("PATH") != &**key { continue }
// Split the value and test each path to see if the
// program exists.
for path in split_paths(&v) {
let path = path.join(self.program.to_str().unwrap())
.with_extension(env::consts::EXE_EXTENSION);
if fs::metadata(&path).is_ok() {
return Some(path.into_os_string())
}
}
break
}
None
});
let mut si = zeroed_startupinfo();
si.cb = mem::size_of::<c::STARTUPINFO>() as c::DWORD;
si.dwFlags = c::STARTF_USESTDHANDLES;
let program = program.as_ref().unwrap_or(&self.program);
let mut cmd_str = make_command_line(program, &self.args)?;
cmd_str.push(0); // add null terminator
// stolen from the libuv code.
let mut flags = self.flags | c::CREATE_UNICODE_ENVIRONMENT;
if self.detach {
flags |= c::DETACHED_PROCESS | c::CREATE_NEW_PROCESS_GROUP;
}
let (envp, _data) = make_envp(self.env.as_ref())?;
let (dirp, _data) = make_dirp(self.cwd.as_ref())?;
let mut pi = zeroed_process_information();
// Prepare all stdio handles to be inherited by the child. This
// currently involves duplicating any existing ones with the ability to
// be inherited by child processes. Note, however, that once an
// inheritable handle is created, *any* spawned child will inherit that
// handle. We only want our own child to inherit this handle, so we wrap
// the remaining portion of this spawn in a mutex.
//
// For more information, msdn also has an article about this race:
// http://support.microsoft.com/kb/315939
static CREATE_PROCESS_LOCK: Mutex = Mutex::new();
let _guard = DropGuard::new(&CREATE_PROCESS_LOCK);
let mut pipes = StdioPipes {
stdin: None,
stdout: None,
stderr: None,
};
let null = Stdio::Null;
let default_stdin = if needs_stdin {&default} else {&null};
let stdin = self.stdin.as_ref().unwrap_or(default_stdin);
let stdout = self.stdout.as_ref().unwrap_or(&default);
let stderr = self.stderr.as_ref().unwrap_or(&default);
let stdin = stdin.to_handle(c::STD_INPUT_HANDLE, &mut pipes.stdin)?;
let stdout = stdout.to_handle(c::STD_OUTPUT_HANDLE,
&mut pipes.stdout)?;
let stderr = stderr.to_handle(c::STD_ERROR_HANDLE,
&mut pipes.stderr)?;
si.hStdInput = stdin.raw();
si.hStdOutput = stdout.raw();
si.hStdError = stderr.raw();
unsafe {
cvt(c::CreateProcessW(ptr::null(),
cmd_str.as_mut_ptr(),
ptr::null_mut(),
ptr::null_mut(),
c::TRUE, flags, envp, dirp,
&mut si, &mut pi))
}?;
// We close the thread handle because we don't care about keeping
// the thread id valid, and we aren't keeping the thread handle
// around to be able to close it later.
drop(Handle::new(pi.hThread));
Ok((Process { handle: Handle::new(pi.hProcess) }, pipes))
}
}
impl fmt::Debug for Command {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "{:?}", self.program)?;
for arg in &self.args {
write!(f, " {:?}", arg)?;
}
Ok(())
}
}
impl<'a> DropGuard<'a> {
fn new(lock: &'a Mutex) -> DropGuard<'a> {
unsafe {
lock.lock();
DropGuard { lock: lock }
}
}
}
impl<'a> Drop for DropGuard<'a> {
fn drop(&mut self) {
unsafe {
self.lock.unlock();
}
}
}
impl Stdio {
fn to_handle(&self, stdio_id: c::DWORD, pipe: &mut Option<AnonPipe>)
-> io::Result<Handle> {
match *self {
            // If no stdio handle is available, then inheriting means that it
            // should still be unavailable, so propagate the
            // INVALID_HANDLE_VALUE.
Stdio::Inherit => {
match stdio::get(stdio_id) {
Ok(io) => io.handle().duplicate(0, true,
c::DUPLICATE_SAME_ACCESS),
Err(..) => Ok(Handle::new(c::INVALID_HANDLE_VALUE)),
}
}
Stdio::MakePipe => {
let ours_readable = stdio_id != c::STD_INPUT_HANDLE;
let pipes = pipe::anon_pipe(ours_readable)?;
*pipe = Some(pipes.ours);
cvt(unsafe {
c::SetHandleInformation(pipes.theirs.handle().raw(),
c::HANDLE_FLAG_INHERIT,
c::HANDLE_FLAG_INHERIT)
})?;
Ok(pipes.theirs.into_handle())
}
Stdio::Handle(ref handle) => {
handle.duplicate(0, true, c::DUPLICATE_SAME_ACCESS)
}
            // Open up a reference to NUL with appropriate read/write
            // permissions as well as the ability to be inherited by child
            // processes (as this is about to be inherited).
Stdio::Null => {
let size = mem::size_of::<c::SECURITY_ATTRIBUTES>();
let mut sa = c::SECURITY_ATTRIBUTES {
nLength: size as c::DWORD,
lpSecurityDescriptor: ptr::null_mut(),
bInheritHandle: 1,
};
let mut opts = OpenOptions::new();
opts.read(stdio_id == c::STD_INPUT_HANDLE);
opts.write(stdio_id != c::STD_INPUT_HANDLE);
opts.security_attributes(&mut sa);
File::open(Path::new("NUL"), &opts).map(|file| {
file.into_handle()
})
}
}
}
}
////////////////////////////////////////////////////////////////////////////////
// Processes
////////////////////////////////////////////////////////////////////////////////
/// A value representing a child process.
///
/// The lifetime of this value is linked to the lifetime of the actual
/// process - the Process destructor calls self.finish() which waits
/// for the process to terminate.
pub struct Process {
handle: Handle,
}
impl Process {
pub fn kill(&mut self) -> io::Result<()> {
cvt(unsafe {
c::TerminateProcess(self.handle.raw(), 1)
})?;
Ok(())
}
pub fn id(&self) -> u32 {
unsafe {
c::GetProcessId(self.handle.raw()) as u32
}
}
pub fn wait(&mut self) -> io::Result<ExitStatus> {
unsafe {
let res = c::WaitForSingleObject(self.handle.raw(), c::INFINITE);
if res != c::WAIT_OBJECT_0 {
return Err(Error::last_os_error())
}
let mut status = 0;
cvt(c::GetExitCodeProcess(self.handle.raw(), &mut status))?;
Ok(ExitStatus(status))
}
}
pub fn try_wait(&mut self) -> io::Result<Option<ExitStatus>> {
unsafe {
match c::WaitForSingleObject(self.handle.raw(), 0) {
c::WAIT_OBJECT_0 => {}
c::WAIT_TIMEOUT => {
return Ok(None);
}
_ => return Err(io::Error::last_os_error()),
}
let mut status = 0;
cvt(c::GetExitCodeProcess(self.handle.raw(), &mut status))?;
Ok(Some(ExitStatus(status)))
}
}
pub fn handle(&self) -> &Handle { &self.handle }
pub fn into_handle(self) -> Handle { self.handle }
}
#[derive(PartialEq, Eq, Clone, Copy, Debug)]
pub struct ExitStatus(c::DWORD);
impl ExitStatus {
pub fn success(&self) -> bool {
self.0 == 0
}
pub fn code(&self) -> Option<i32> {
Some(self.0 as i32)
}
}
impl From<c::DWORD> for ExitStatus {
fn from(u: c::DWORD) -> ExitStatus {
ExitStatus(u)
}
}
impl fmt::Display for ExitStatus {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "exit code: {}", self.0)
}
}
fn zeroed_startupinfo() -> c::STARTUPINFO {
c::STARTUPINFO {
cb: 0,
lpReserved: ptr::null_mut(),
lpDesktop: ptr::null_mut(),
lpTitle: ptr::null_mut(),
dwX: 0,
dwY: 0,
dwXSize: 0,
dwYSize: 0,
dwXCountChars: 0,
        dwYCountChars: 0,
dwFillAttribute: 0,
dwFlags: 0,
wShowWindow: 0,
cbReserved2: 0,
lpReserved2: ptr::null_mut(),
hStdInput: c::INVALID_HANDLE_VALUE,
hStdOutput: c::INVALID_HANDLE_VALUE,
hStdError: c::INVALID_HANDLE_VALUE,
}
}
fn zeroed_process_information() -> c::PROCESS_INFORMATION {
c::PROCESS_INFORMATION {
hProcess: ptr::null_mut(),
hThread: ptr::null_mut(),
dwProcessId: 0,
dwThreadId: 0
}
}
// Produces a wide string *without terminating null*; returns an error if
// `prog` or any of the `args` contain a nul.
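// Quoting follows the CommandLineToArgvW conventions exercised by the tests at
// the bottom of this file: an argument containing spaces or tabs (or an empty
// argument) is wrapped in double quotes, and embedded quotes/backslashes are
// escaped, e.g. `a b c` -> `"a b c"` and `aa"bb` -> `aa\"bb`.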
fn make_command_line(prog: &OsStr, args: &[OsString]) -> io::Result<Vec<u16>> {
// Encode the command and arguments in a command line string such
// that the spawned process may recover them using CommandLineToArgvW.
let mut cmd: Vec<u16> = Vec::new();
append_arg(&mut cmd, prog)?;
for arg in args {
cmd.push(' ' as u16);
append_arg(&mut cmd, arg)?;
}
return Ok(cmd);
fn append_arg(cmd: &mut Vec<u16>, arg: &OsStr) -> io::Result<()> {
        // If an argument has 0 characters then we need to quote it to ensure
        // that it actually gets passed through on the command line; otherwise
        // it will be dropped entirely when parsed on the other end.
ensure_no_nuls(arg)?;
let arg_bytes = &arg.as_inner().inner.as_inner();
let quote = arg_bytes.iter().any(|c| *c == b' ' || *c == b'\t')
|| arg_bytes.is_empty();
if quote {
cmd.push('"' as u16);
}
let mut iter = arg.encode_wide();
let mut backslashes: usize = 0;
while let Some(x) = iter.next() {
if x == '\\' as u16 {
backslashes += 1;
} else {
if x == '"' as u16 {
// Add n+1 backslashes to total 2n+1 before internal '"'.
for _ in 0..(backslashes+1) {
cmd.push('\\' as u16);
}
}
backslashes = 0;
}
cmd.push(x);
}
if quote {
// Add n backslashes to total 2n before ending '"'.
for _ in 0..backslashes {
cmd.push('\\' as u16);
}
cmd.push('"' as u16);
}
Ok(())
}
}
fn make_envp(env: Option<&collections::HashMap<OsString, OsString>>)
-> io::Result<(*mut c_void, Vec<u16>)> {
// On Windows we pass an "environment block" which is not a char**, but
// rather a concatenation of null-terminated k=v\0 sequences, with a final
// \0 to terminate.
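    //
    // Conceptually (before UTF-16 encoding) the block looks like:
    //   "KEY1=value1\0KEY2=value2\0\0"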
match env {
Some(env) => {
let mut blk = Vec::new();
for pair in env {
blk.extend(ensure_no_nuls(pair.0)?.encode_wide());
blk.push('=' as u16);
blk.extend(ensure_no_nuls(pair.1)?.encode_wide());
blk.push(0);
}
blk.push(0);
Ok((blk.as_mut_ptr() as *mut c_void, blk))
}
_ => Ok((ptr::null_mut(), Vec::new()))
}
}
fn make_dirp(d: Option<&OsString>) -> io::Result<(*const u16, Vec<u16>)> {
match d {
Some(dir) => {
let mut dir_str: Vec<u16> = ensure_no_nuls(dir)?.encode_wide().collect();
dir_str.push(0);
Ok((dir_str.as_ptr(), dir_str))
},
None => Ok((ptr::null(), Vec::new()))
}
}
#[cfg(test)]
mod tests {
use ffi::{OsStr, OsString};
use super::make_command_line;
#[test]
fn test_make_command_line() {
fn test_wrapper(prog: &str, args: &[&str]) -> String {
let command_line = &make_command_line(OsStr::new(prog),
&args.iter()
.map(|a| OsString::from(a))
.collect::<Vec<OsString>>())
.unwrap();
String::from_utf16(command_line).unwrap()
}
assert_eq!(
test_wrapper("prog", &["aaa", "bbb", "ccc"]),
"prog aaa bbb ccc"
);
assert_eq!(
test_wrapper("C:\\Program Files\\blah\\blah.exe", &["aaa"]),
"\"C:\\Program Files\\blah\\blah.exe\" aaa"
);
assert_eq!(
test_wrapper("C:\\Program Files\\test", &["aa\"bb"]),
"\"C:\\Program Files\\test\" aa\\\"bb"
);
assert_eq!(
test_wrapper("echo", &["a b c"]),
"echo \"a b c\""
);
assert_eq!(
test_wrapper("echo", &["\" \\\" \\", "\\"]),
"echo \"\\\" \\\\\\\" \\\\\" \\"
);
assert_eq!(
test_wrapper("\u{03c0}\u{042f}\u{97f3}\u{00e6}\u{221e}", &[]),
"\u{03c0}\u{042f}\u{97f3}\u{00e6}\u{221e}"
);
}
}
| 32.096539 | 87 | 0.508654 |
f9d1a1fe2886ff20af6d97ffa3dddb2535835b7f | 377 | // Copyright 2020-2022 IOTA Stiftung
// SPDX-License-Identifier: Apache-2.0
pub use auto_save::OptionAutoSave;
pub use auto_save::WasmAutoSave;
pub use identity_setup::WasmIdentitySetup;
pub use key_location::WasmKeyLocation;
pub use method_content::*;
pub use signature::WasmSignature;
mod auto_save;
mod identity_setup;
mod key_location;
mod method_content;
mod signature;
| 23.5625 | 42 | 0.809019 |
e5dc4b71daaf5873bb0758fccccf826ba29ee091 | 5,481 | // Copyright (C) 2019 Alibaba Cloud Computing. All rights reserved.
// SPDX-License-Identifier: Apache-2.0 or BSD-3-Clause
//
// Portions Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved.
//
// Portions Copyright 2017 The Chromium OS Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE-BSD-Google file.
//! Common traits and structs for vhost-kern and vhost-user backend drivers.
use super::Result;
use std::os::unix::io::RawFd;
use vmm_sys_util::eventfd::EventFd;
/// Maximum number of memory regions supported.
pub const VHOST_MAX_MEMORY_REGIONS: usize = 255;
/// Vring/virtqueue configuration data.
pub struct VringConfigData {
/// Maximum queue size supported by the driver.
pub queue_max_size: u16,
/// Actual queue size negotiated by the driver.
pub queue_size: u16,
/// Bitmask of vring flags.
pub flags: u32,
/// Descriptor table address.
pub desc_table_addr: u64,
/// Used ring buffer address.
pub used_ring_addr: u64,
/// Available ring buffer address.
pub avail_ring_addr: u64,
/// Optional address for logging.
pub log_addr: Option<u64>,
}
/// Memory region configuration data.
#[derive(Default, Clone, Copy)]
pub struct VhostUserMemoryRegionInfo {
/// Guest physical address of the memory region.
pub guest_phys_addr: u64,
/// Size of the memory region.
pub memory_size: u64,
/// Virtual address in the current process.
pub userspace_addr: u64,
/// Optional offset where region starts in the mapped memory.
pub mmap_offset: u64,
    /// Optional file descriptor for mmap.
pub mmap_handle: RawFd,
}
/// An interface for setting up vhost-based backend drivers.
///
/// Vhost devices are a subset of virtio devices which improve a virtio device's performance by
/// delegating data plane operations to dedicated IO service processes. Vhost devices use the
/// same virtqueue layout as virtio devices, allowing vhost devices to be mapped directly to
/// virtio devices.
///
/// The purpose of vhost is to implement a subset of a virtio device's functionality outside the
/// VMM process. Typically the fast paths for IO operations are delegated to the dedicated IO
/// service processes, while the slow path for device configuration is still handled by the VMM
/// process. It may also be used to control access permissions of virtio backend devices.
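///
/// # Example (sketch)
///
/// A minimal, hypothetical initialization sequence written against this trait;
/// `init_backend` and `backend` are names invented purely for illustration and
/// any concrete implementation may require additional setup:
///
/// ```ignore
/// fn init_backend<B: VhostBackend>(backend: &mut B) -> Result<()> {
///     // Claim ownership before issuing any other vhost commands.
///     backend.set_owner()?;
///     // Negotiate features: enable (a subset of) what the backend supports.
///     let supported = backend.get_features()?;
///     backend.set_features(supported)?;
///     Ok(())
/// }
/// ```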
pub trait VhostBackend: std::marker::Sized {
/// Get a bitmask of supported virtio/vhost features.
fn get_features(&mut self) -> Result<u64>;
/// Inform the vhost subsystem which features to enable.
/// This should be a subset of supported features from get_features().
///
/// # Arguments
/// * `features` - Bitmask of features to set.
fn set_features(&mut self, features: u64) -> Result<()>;
/// Set the current process as the owner of the vhost backend.
/// This must be run before any other vhost commands.
fn set_owner(&mut self) -> Result<()>;
    /// Originally used to request disabling all rings.
    /// This is no longer used.
fn reset_owner(&mut self) -> Result<()>;
/// Set the guest memory mappings for vhost to use.
fn set_mem_table(&mut self, regions: &[VhostUserMemoryRegionInfo]) -> Result<()>;
/// Set base address for page modification logging.
fn set_log_base(&mut self, base: u64, fd: Option<RawFd>) -> Result<()>;
/// Specify an eventfd file descriptor to signal on log write.
fn set_log_fd(&mut self, fd: RawFd) -> Result<()>;
/// Set the number of descriptors in the vring.
///
/// # Arguments
/// * `queue_index` - Index of the queue to set descriptor count for.
/// * `num` - Number of descriptors in the queue.
fn set_vring_num(&mut self, queue_index: usize, num: u16) -> Result<()>;
/// Set the addresses for a given vring.
///
/// # Arguments
/// * `queue_index` - Index of the queue to set addresses for.
/// * `config_data` - Configuration data for a vring.
fn set_vring_addr(&mut self, queue_index: usize, config_data: &VringConfigData) -> Result<()>;
/// Set the first index to look for available descriptors.
///
/// # Arguments
/// * `queue_index` - Index of the queue to modify.
/// * `num` - Index where available descriptors start.
fn set_vring_base(&mut self, queue_index: usize, base: u16) -> Result<()>;
/// Get the available vring base offset.
fn get_vring_base(&mut self, queue_index: usize) -> Result<u32>;
/// Set the eventfd to trigger when buffers have been used by the host.
///
/// # Arguments
/// * `queue_index` - Index of the queue to modify.
/// * `fd` - EventFd to trigger.
fn set_vring_call(&mut self, queue_index: usize, fd: &EventFd) -> Result<()>;
/// Set the eventfd that will be signaled by the guest when buffers are
/// available for the host to process.
///
/// # Arguments
/// * `queue_index` - Index of the queue to modify.
/// * `fd` - EventFd that will be signaled from guest.
fn set_vring_kick(&mut self, queue_index: usize, fd: &EventFd) -> Result<()>;
/// Set the eventfd that will be signaled by the guest when error happens.
///
/// # Arguments
/// * `queue_index` - Index of the queue to modify.
/// * `fd` - EventFd that will be signaled from guest.
fn set_vring_err(&mut self, queue_index: usize, fd: &EventFd) -> Result<()>;
}
| 40.301471 | 98 | 0.678708 |
ff59c7ea7617a563fc51b5b5692cc80137be391a | 54,258 | use regex::Regex;
#[derive(Clone, Debug, PartialEq)]
pub enum Token {
Address,
AndEquals,
Anonymous,
Arrow,
As,
Assembly,
Assignment,
ASMAssign,
BitwiseAnd,
BitwiseOr,
BitwiseXor,
Bool,
Break,
Byte,
Bytes,
Bytes1,
Bytes2,
Bytes3,
Bytes4,
Bytes5,
Bytes6,
Bytes7,
Bytes8,
Bytes9,
Bytes10,
Bytes11,
Bytes12,
Bytes13,
Bytes14,
Bytes15,
Bytes16,
Bytes17,
Bytes18,
Bytes19,
Bytes20,
Bytes21,
Bytes22,
Bytes23,
Bytes24,
Bytes25,
Bytes26,
Bytes27,
Bytes28,
Bytes29,
Bytes30,
Bytes31,
Bytes32,
CloseBrace,
CloseBracket,
CloseParenthesis,
Colon,
Comma,
CommentMulti,
CommentSingle,
Constant,
Continue,
Contract,
Days,
DecimalNumber(String),
Decrement,
Delete,
Divide,
DivideEquals,
Do,
Dot,
Else,
Emit,
Enum,
EOF,
Equals,
Ether,
Event,
EventParameter,
Exclamation,
External,
False,
Finney,
Fixed,
For,
From,
Function,
GreaterThan,
GreaterThanOrEquals,
Hex,
HexLiteral(String),
HexNumber(String),
Hours,
Identifier(String),
If,
Illegal,
Import,
Increment,
Indexed,
Int,
Int8,
Int16,
Int24,
Int32,
Int40,
Int48,
Int56,
Int64,
Int72,
Int80,
Int88,
Int96,
Int104,
Int112,
Int120,
Int128,
Int136,
Int144,
Int152,
Int160,
Int168,
Int176,
Int184,
Int192,
Int200,
Int208,
Int216,
Int224,
Int232,
Int240,
Int248,
Int256,
Interface,
Internal,
Is,
LessThan,
LessThanOrEquals,
Let,
Library,
LogicalAnd,
LogicalOr,
Mapping,
Memory,
Minus,
MinusEquals,
Minutes,
ModEquals,
Modifier,
Modulus,
Multiply,
MultiplyEquals,
New,
NoMatch,
NotEquals,
OpenBrace,
OpenBracket,
OpenParenthesis,
OrEquals,
Parameter,
Payable,
Plus,
PlusEquals,
Power,
Pragma,
Private,
Public,
Pure,
Question,
Return,
Returns,
Seconds,
Semicolon,
ShiftLeft,
ShiftLeftEquals,
ShiftRight,
ShiftRightEquals,
StateVariable,
Storage,
String,
StringLiteral(String),
Struct,
Szabo,
Throw,
Tilda,
True,
Ufixed,
Uint,
Uint8,
Uint16,
Uint24,
Uint32,
Uint40,
Uint48,
Uint56,
Uint64,
Uint72,
Uint80,
Uint88,
Uint96,
Uint104,
Uint112,
Uint120,
Uint128,
Uint136,
Uint144,
Uint152,
Uint160,
Uint168,
Uint176,
Uint184,
Uint192,
Uint200,
Uint208,
Uint216,
Uint224,
Uint232,
Uint240,
Uint248,
Uint256,
UserDefinedTypeName,
Using,
Var,
Version(String),
View,
Weeks,
Wei,
While,
XorEquals,
Years,
}
impl Token {
// Returns whether the Token is a unit
pub fn is_number_unit(&self) -> bool {
return match self {
Token::Days => true,
Token::Ether => true,
Token::Finney => true,
Token::Hours => true,
Token::Minutes => true,
Token::Seconds => true,
Token::Szabo => true,
Token::Weeks => true,
Token::Wei => true,
Token::Years => true,
_ => false
}
}
// Returns whether the Token is an int
pub fn is_int(&self) -> bool {
return match self {
Token::Int => true,
Token::Int8 => true,
Token::Int16 => true,
Token::Int24 => true,
Token::Int32 => true,
Token::Int40 => true,
Token::Int48 => true,
Token::Int56 => true,
Token::Int64 => true,
Token::Int72 => true,
Token::Int80 => true,
Token::Int88 => true,
Token::Int96 => true,
Token::Int104 => true,
Token::Int112 => true,
Token::Int120 => true,
Token::Int128 => true,
Token::Int136 => true,
Token::Int144 => true,
Token::Int152 => true,
Token::Int160 => true,
Token::Int168 => true,
Token::Int176 => true,
Token::Int184 => true,
Token::Int192 => true,
Token::Int200 => true,
Token::Int208 => true,
Token::Int216 => true,
Token::Int224 => true,
Token::Int232 => true,
Token::Int240 => true,
Token::Int248 => true,
Token::Int256 => true,
_ => false
}
}
// Returns whether the token is an unsigned integer
pub fn is_uint(&self) -> bool {
return match self {
Token::Uint => true,
Token::Uint8 => true,
Token::Uint16 => true,
Token::Uint24 => true,
Token::Uint32 => true,
Token::Uint40 => true,
Token::Uint48 => true,
Token::Uint56 => true,
Token::Uint64 => true,
Token::Uint72 => true,
Token::Uint80 => true,
Token::Uint88 => true,
Token::Uint96 => true,
Token::Uint104 => true,
Token::Uint112 => true,
Token::Uint120 => true,
Token::Uint128 => true,
Token::Uint136 => true,
Token::Uint144 => true,
Token::Uint152 => true,
Token::Uint160 => true,
Token::Uint168 => true,
Token::Uint176 => true,
Token::Uint184 => true,
Token::Uint192 => true,
Token::Uint200 => true,
Token::Uint208 => true,
Token::Uint216 => true,
Token::Uint224 => true,
Token::Uint232 => true,
Token::Uint240 => true,
Token::Uint248 => true,
Token::Uint256 => true,
_ => false
}
}
// Returns whether the Token is a byte, bytes, or bytesXX
pub fn is_byte(&self) -> bool {
return match self {
Token::Byte => true,
Token::Bytes => true,
Token::Bytes1 => true,
Token::Bytes2 => true,
Token::Bytes3 => true,
Token::Bytes4 => true,
Token::Bytes5 => true,
Token::Bytes6 => true,
Token::Bytes7 => true,
Token::Bytes8 => true,
Token::Bytes9 => true,
Token::Bytes10 => true,
Token::Bytes11 => true,
Token::Bytes12 => true,
Token::Bytes13 => true,
Token::Bytes14 => true,
Token::Bytes15 => true,
Token::Bytes16 => true,
Token::Bytes17 => true,
Token::Bytes18 => true,
Token::Bytes19 => true,
Token::Bytes20 => true,
Token::Bytes21 => true,
Token::Bytes22 => true,
Token::Bytes23 => true,
Token::Bytes24 => true,
Token::Bytes25 => true,
Token::Bytes26 => true,
Token::Bytes27 => true,
Token::Bytes28 => true,
Token::Bytes29 => true,
Token::Bytes30 => true,
Token::Bytes31 => true,
Token::Bytes32 => true,
_ => false
}
}
// Returns whether the token represents an elementary type
// (address, bool, string, var, int, uint, byte)
pub fn is_elementary_type(&self) -> bool {
return match self {
Token::Address => true,
Token::Bool => true,
Token::String => true,
Token::Var => true,
int if int.is_int() => true,
uint if uint.is_uint() => true,
byte if byte.is_byte() => true,
_ => false
}
}
}
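// Bounds-checked helpers for inspecting the character at a given index of a
// line; every method returns false when the index is past the end.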
trait LineMatch {
fn match_idx(&self, idx: usize, val: char) -> bool;
fn is_digit_at(&self, idx: usize) -> bool;
fn is_rational_at(&self, idx: usize) -> bool;
fn is_hex_digit_at(&self, idx: usize) -> bool;
fn is_hex_delim_at(&self, idx: usize) -> bool;
fn is_whitespace_at(&self, idx: usize) -> bool;
}
impl LineMatch for Vec<char> {
fn match_idx(&self, idx: usize, val: char) -> bool {
match self.get(idx) {
Some(v) => v == &val,
None => false
}
}
fn is_digit_at(&self, idx: usize) -> bool {
match self.get(idx) {
Some(v) => v.is_digit(10),
None => false
}
}
fn is_hex_digit_at(&self, idx: usize) -> bool {
match self.get(idx) {
Some(v) => v.is_digit(16),
None => false
}
}
fn is_rational_at(&self, idx: usize) -> bool {
match self.get(idx) {
Some(v) => v.is_rational(),
None => false
}
}
fn is_hex_delim_at(&self, idx: usize) -> bool {
match self.get(idx) {
Some(v) => *v == 'x' || *v == 'X',
None => false
}
}
fn is_whitespace_at(&self, idx: usize) -> bool {
match self.get(idx) {
Some(v) => v.is_whitespace(),
None => false
}
}
}
trait CharExt {
fn starts_rational(&self) -> bool;
fn starts_iden_or_keyword(&self) -> bool;
fn is_iden_or_keyword_part(&self) -> bool;
fn is_whitespace(&self) -> bool;
fn is_rational(&self) -> bool;
}
impl CharExt for char {
// Not allowed: Leading 0, leading 'e'
fn starts_rational(&self) -> bool {
return (self.is_digit(10) || *self == '.') && *self != '0';
}
// If self could be the first character of an identifier, returns true
fn starts_iden_or_keyword(&self) -> bool {
return *self == '_' || *self == '$' || self.is_ascii_alphabetic();
}
// If self could be a component of an identifier or keyword, returns true
fn is_iden_or_keyword_part(&self) -> bool {
return self.starts_iden_or_keyword() || self.is_digit(10);
}
// Returns true if the char could be a part of a rational literal
fn is_rational(&self) -> bool {
return
*self == 'e' ||
*self == 'E' ||
*self == '.' ||
self.is_digit(10);
}
// If self is whitespace, returns true
fn is_whitespace(&self) -> bool {
return *self == ' ' || *self == '\n' || *self == '\t' || *self == '\r';
}
}
trait AsString {
fn as_string(&self) -> String;
}
impl AsString for Vec<char> {
fn as_string(&self) -> String {
return self.into_iter().collect();
}
}
pub fn to_chars(string: &str) -> Vec<char> {
return string.chars().collect::<Vec<char>>();
}
pub fn to_identifier(string: &str) -> Token {
return Token::Identifier(string.to_string());
}
pub fn to_string_literal(string: &str) -> Token {
return Token::StringLiteral(string.to_string());
}
pub fn to_decimal_number(string: &str) -> Token {
return Token::DecimalNumber(string.to_string());
}
pub fn to_hex_number(string: &str) -> Token {
return Token::HexNumber(string.to_string());
}
pub fn to_hex_literal(string: &str) -> Token {
return Token::HexLiteral(string.to_string());
}
/**
* Given a collected string, returns the matching Token
* Returns Token::NoMatch if no match is found
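 *
 * e.g. "uint256" => Token::Uint256, "foo" => Token::Identifier("foo"),
 * "0xff" => Token::HexNumber("0xff"), "1.2e3" => Token::DecimalNumber("1.2e3")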
*/
fn match_collected(collected: String) -> Token {
let decimal_re = Regex::new(r"^[0-9]+(\.[0-9]*)?([eE][0-9]+)?$").unwrap();
let id_re = Regex::new(r"^[a-zA-Z\$_][a-zA-Z0-9\$_]*$").unwrap();
let hex_re = Regex::new(r"^0x[0-9a-fA-F]*$").unwrap();
let hex_literal_re = Regex::new(r#"^hex(\\"([0-9a-fA-F]{2})*\\"|'([0-9a-fA-F]{2})*')$"#).unwrap();
let version_re = Regex::new(r"^\^?[0-9]+\.[0-9]+\.[0-9]+").unwrap();
return match collected.as_ref() {
"address" => Token::Address,
"anonymous" => Token::Anonymous,
"as" => Token::As,
"assembly" => Token::Assembly,
"bool" => Token::Bool,
"break" => Token::Break,
"byte" => Token::Byte,
"bytes" => Token::Bytes,
"bytes1" => Token::Bytes1,
"bytes2" => Token::Bytes2,
"bytes3" => Token::Bytes3,
"bytes4" => Token::Bytes4,
"bytes5" => Token::Bytes5,
"bytes6" => Token::Bytes6,
"bytes7" => Token::Bytes7,
"bytes8" => Token::Bytes8,
"bytes9" => Token::Bytes9,
"bytes10" => Token::Bytes10,
"bytes11" => Token::Bytes11,
"bytes12" => Token::Bytes12,
"bytes13" => Token::Bytes13,
"bytes14" => Token::Bytes14,
"bytes15" => Token::Bytes15,
"bytes16" => Token::Bytes16,
"bytes17" => Token::Bytes17,
"bytes18" => Token::Bytes18,
"bytes19" => Token::Bytes19,
"bytes20" => Token::Bytes20,
"bytes21" => Token::Bytes21,
"bytes22" => Token::Bytes22,
"bytes23" => Token::Bytes23,
"bytes24" => Token::Bytes24,
"bytes25" => Token::Bytes25,
"bytes26" => Token::Bytes26,
"bytes27" => Token::Bytes27,
"bytes28" => Token::Bytes28,
"bytes29" => Token::Bytes29,
"bytes30" => Token::Bytes30,
"bytes31" => Token::Bytes31,
"bytes32" => Token::Bytes32,
"constant" => Token::Constant,
"continue" => Token::Continue,
"contract" => Token::Contract,
"days" => Token::Days,
"delete" => Token::Delete,
"do" => Token::Do,
"else" => Token::Else,
"emit" => Token::Emit,
"enum" => Token::Enum,
"ether" => Token::Ether,
"event" => Token::Event,
"external" => Token::External,
"false" => Token::False,
"finney" => Token::Finney,
"fixed" => Token::Fixed,
"for" => Token::For,
"from" => Token::From,
"function" => Token::Function,
"hex" => Token::Hex,
"hours" => Token::Hours,
"if" => Token::If,
"import" => Token::Import,
"indexed" => Token::Indexed,
"int" => Token::Int,
"int8" => Token::Int8,
"int16" => Token::Int16,
"int24" => Token::Int24,
"int32" => Token::Int32,
"int40" => Token::Int40,
"int48" => Token::Int48,
"int56" => Token::Int56,
"int64" => Token::Int64,
"int72" => Token::Int72,
"int80" => Token::Int80,
"int88" => Token::Int88,
"int96" => Token::Int96,
"int104" => Token::Int104,
"int112" => Token::Int112,
"int120" => Token::Int120,
"int128" => Token::Int128,
"int136" => Token::Int136,
"int144" => Token::Int144,
"int152" => Token::Int152,
"int160" => Token::Int160,
"int168" => Token::Int168,
"int176" => Token::Int176,
"int184" => Token::Int184,
"int192" => Token::Int192,
"int200" => Token::Int200,
"int208" => Token::Int208,
"int216" => Token::Int216,
"int224" => Token::Int224,
"int232" => Token::Int232,
"int240" => Token::Int240,
"int248" => Token::Int248,
"int256" => Token::Int256,
"interface" => Token::Interface,
"internal" => Token::Internal,
"is" => Token::Is,
"let" => Token::Let,
"library" => Token::Library,
"mapping" => Token::Mapping,
"memory" => Token::Memory,
"minutes" => Token::Minutes,
"modifier" => Token::Modifier,
"new" => Token::New,
"payable" => Token::Payable,
"pragma" => Token::Pragma,
"private" => Token::Private,
"public" => Token::Public,
"pure" => Token::Pure,
"return" => Token::Return,
"returns" => Token::Returns,
"seconds" => Token::Seconds,
"storage" => Token::Storage,
"string" => Token::String,
"struct" => Token::Struct,
"szabo" => Token::Szabo,
"throw" => Token::Throw,
"true" => Token::True,
"ufixed" => Token::Ufixed,
"uint" => Token::Uint,
"uint8" => Token::Uint8,
"uint16" => Token::Uint16,
"uint24" => Token::Uint24,
"uint32" => Token::Uint32,
"uint40" => Token::Uint40,
"uint48" => Token::Uint48,
"uint56" => Token::Uint56,
"uint64" => Token::Uint64,
"uint72" => Token::Uint72,
"uint80" => Token::Uint80,
"uint88" => Token::Uint88,
"uint96" => Token::Uint96,
"uint104" => Token::Uint104,
"uint112" => Token::Uint112,
"uint120" => Token::Uint120,
"uint128" => Token::Uint128,
"uint136" => Token::Uint136,
"uint144" => Token::Uint144,
"uint152" => Token::Uint152,
"uint160" => Token::Uint160,
"uint168" => Token::Uint168,
"uint176" => Token::Uint176,
"uint184" => Token::Uint184,
"uint192" => Token::Uint192,
"uint200" => Token::Uint200,
"uint208" => Token::Uint208,
"uint216" => Token::Uint216,
"uint224" => Token::Uint224,
"uint232" => Token::Uint232,
"uint240" => Token::Uint240,
"uint248" => Token::Uint248,
"uint256" => Token::Uint256,
"using" => Token::Using,
"var" => Token::Var,
"view" => Token::View,
"weeks" => Token::Weeks,
"wei" => Token::Wei,
"while" => Token::While,
"years" => Token::Years,
"_" => to_identifier("_"),
id if id_re.is_match(id) => Token::Identifier(id.to_string()),
hex if hex_re.is_match(hex) => Token::HexNumber(hex.to_string()),
num if decimal_re.is_match(num) => Token::DecimalNumber(num.to_string()),
hex if hex_literal_re.is_match(hex) => Token::HexLiteral(hex.to_string()),
version if version_re.is_match(version) => Token::Version(version.to_string()),
_ => Token::NoMatch
}
}
/**
 * Matches . at line[*cur] with Token::Dot, or with a rational literal
 * (e.g. ".14") if the next character is a digit
*/
fn match_period(line: &Vec<char>, cur: &mut usize) -> Token {
if line.is_digit_at(*cur + 1) {
return match_rational(line, cur);
} else {
return Token::Dot;
}
}
/**
* Matches : at line[*cur] with its corresponding Token
* : | Colon
* := | ASMAssign
*/
fn match_colon(line: &Vec<char>, cur: &mut usize) -> Token {
if line.match_idx(*cur + 1, '=') {
*cur += 1;
return Token::ASMAssign;
} else {
return Token::Colon;
}
}
/**
* Matches = at line[*cur] with its corresponding Token
* = | Assignment
* == | Equals
* => | Arrow
*/
fn match_equals(line: &Vec<char>, cur: &mut usize) -> Token {
if line.match_idx(*cur + 1, '=') {
*cur += 1;
return Token::Equals;
} else if line.match_idx(*cur + 1, '>') {
*cur += 1;
return Token::Arrow;
} else {
return Token::Assignment;
}
}
/**
* Matches + at line[*cur] with its corresponding Token
* + | Plus
* ++ | Increment
* += | PlusEquals
*/
fn match_plus(line: &Vec<char>, cur: &mut usize) -> Token {
if line.match_idx(*cur + 1, '+') {
*cur += 1;
return Token::Increment;
} else if line.match_idx(*cur + 1, '=') {
*cur += 1;
return Token::PlusEquals;
} else {
return Token::Plus;
}
}
/**
* Matches - at line[*cur] with its corresponding Token
* - | Minus
* -- | Decrement
* -= | MinusEquals
*/
fn match_minus(line: &Vec<char>, cur: &mut usize) -> Token {
if line.match_idx(*cur + 1, '-') {
*cur += 1;
return Token::Decrement;
} else if line.match_idx(*cur + 1, '=') {
*cur += 1;
return Token::MinusEquals;
} else {
return Token::Minus;
}
}
/**
* Matches * at line[*cur] with its corresponding Token
* * | Multiply
* ** | Power
* *= | MultiplyEquals
*/
fn match_star(line: &Vec<char>, cur: &mut usize) -> Token {
if line.match_idx(*cur + 1, '*') {
*cur += 1;
return Token::Power;
} else if line.match_idx(*cur + 1, '=') {
*cur += 1;
return Token::MultiplyEquals;
} else {
return Token::Multiply;
}
}
/**
* Matches / at line[*cur] with its corresponding Token
* / | Divide
* // | CommentSingle
* /* | CommentMulti */
* /= | DivideEquals
*/
fn match_slash(line: &Vec<char>, cur: &mut usize) -> Token {
if line.match_idx(*cur + 1, '=') {
*cur += 1;
return Token::DivideEquals;
} else if line.match_idx(*cur + 1, '/') {
*cur += 1;
return Token::CommentSingle;
} else if line.match_idx(*cur + 1, '*') {
*cur += 1;
return Token::CommentMulti;
} else {
return Token::Divide;
}
}
/**
* Matches > at line[*cur] with its corresponding Token
 * >    | GreaterThan
* >= | GreaterThanOrEquals
* >> | ShiftRight
* >>= | ShiftRightEquals
* >>> | TODO
* >>>= | TODO
*/
fn match_rarrow(line: &Vec<char>, cur: &mut usize) -> Token {
if line.match_idx(*cur + 1, '=') {
*cur += 1;
return Token::GreaterThanOrEquals;
} else if line.match_idx(*cur + 1, '>') {
if line.match_idx(*cur + 2, '=') {
*cur += 2;
return Token::ShiftRightEquals;
} else if line.match_idx(*cur + 2, '>') {
if line.match_idx(*cur + 3, '=') {
*cur += 3;
return Token::Illegal; // TODO
} else {
*cur += 2;
return Token::Illegal; // TODO
}
} else {
*cur += 1;
return Token::ShiftRight;
}
} else {
return Token::GreaterThan;
}
}
/**
* Matches < at line[*cur] with its corresponding Token
 * <   | LessThan
* <= | LessThanOrEquals
* << | ShiftLeft
* <<= | ShiftLeftEquals
*/
fn match_larrow(line: &Vec<char>, cur: &mut usize) -> Token {
if line.match_idx(*cur + 1, '=') {
*cur += 1;
return Token::LessThanOrEquals;
} else if line.match_idx(*cur + 1, '<') {
if line.match_idx(*cur + 2, '=') {
*cur += 2;
return Token::ShiftLeftEquals;
} else {
*cur += 1;
return Token::ShiftLeft;
}
} else {
return Token::LessThan;
}
}
/**
* Matches ! at line[*cur] with its corresponding Token
* ! | Exclamation
* != | NotEquals
*/
fn match_exclamation(line: &Vec<char>, cur: &mut usize) -> Token {
if line.match_idx(*cur + 1, '=') {
*cur += 1;
return Token::NotEquals;
} else {
return Token::Exclamation;
}
}
/**
* Matches % at line[*cur] with its corresponding Token
* % | Modulus
* %= | ModEquals
*/
fn match_percent(line: &Vec<char>, cur: &mut usize) -> Token {
if line.match_idx(*cur + 1, '=') {
*cur += 1;
return Token::ModEquals;
} else {
return Token::Modulus;
}
}
/**
* Matches & at line[*cur] with its corresponding Token
* & | BitwiseAnd
* && | LogicalAnd
* &= | AndEquals
*/
fn match_and(line: &Vec<char>, cur: &mut usize) -> Token {
if line.match_idx(*cur + 1, '&') {
*cur += 1;
return Token::LogicalAnd;
} else if line.match_idx(*cur + 1, '=') {
*cur += 1;
return Token::AndEquals;
} else {
return Token::BitwiseAnd;
}
}
/**
* Matches | at line[*cur] with its corresponding Token
* | | BitwiseOr
* || | LogicalOr
* |= | OrEquals
*/
fn match_or(line: &Vec<char>, cur: &mut usize) -> Token {
if line.match_idx(*cur + 1, '|') {
*cur += 1;
return Token::LogicalOr;
} else if line.match_idx(*cur + 1, '=') {
*cur += 1;
return Token::OrEquals;
} else {
return Token::BitwiseOr;
}
}
/**
 * Matches ^ at line[*cur] with its corresponding Token
* ^ | BitwiseXor
* ^= | XorEquals
*/
fn match_xor(line: &Vec<char>, cur: &mut usize) -> Token {
if line.match_idx(*cur + 1, '=') {
*cur += 1;
return Token::XorEquals;
} else {
return Token::BitwiseXor;
}
}
/**
* Matches a string literal at line[*cur] with its corresponding Token
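 *
 * The surrounding quotes are kept as part of the literal, e.g. the input
 * "test.sol" (quotes included) yields Token::StringLiteral("\"test.sol\"")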
*/
fn match_string(line: &Vec<char>, cur: &mut usize) -> Token {
let first_quote = line[*cur].to_string();
let mut collected = String::from(first_quote.clone());
*cur += 1;
while *cur < line.len() {
if line[*cur].to_string() == r"\".to_string() {
return Token::Illegal; // TODO handle escapes
} else if line[*cur].to_string() == first_quote {
collected.push(line[*cur]);
return Token::StringLiteral(collected);
} else {
collected.push(line[*cur]);
*cur += 1;
}
}
Token::EOF
}
/**
* Increments cur until end of line or until a non-whitespace character
* Returns Token::NoMatch, which is used in the match statement
*/
fn skip_whitespace(line: &Vec<char>, cur: &mut usize) -> Token {
while *cur < line.len() && line.is_whitespace_at(*cur + 1) {
*cur += 1;
}
Token::NoMatch
}
fn match_hex_literal(line: &Vec<char>, cur: &mut usize, collected: String) -> Token {
let first_quote = line[*cur].to_string();
let mut literal = collected.clone();
literal.push(line[*cur]);
*cur += 1;
while *cur < line.len() {
if line[*cur].to_string() == first_quote {
literal.push(line[*cur]);
*cur -= 1;
return Token::HexLiteral(literal);
} else if line.is_hex_digit_at(*cur) {
literal.push(line[*cur]);
*cur += 1;
} else {
return Token::Illegal;
}
}
Token::EOF
}
/**
* Matches an identifier or keyword at line[*cur]
*/
fn match_identifier_or_keyword(line: &Vec<char>, cur: &mut usize) -> Token {
let mut collected = String::new();
while *cur < line.len() && line[*cur].is_iden_or_keyword_part() {
collected.push(line[*cur]);
*cur += 1;
}
*cur -= 1;
let mut result = match_collected(collected);
// Special case - found "hex"
if result == Token::Hex {
if line.match_idx(*cur + 1, '"') || line.match_idx(*cur + 1, '\'') {
*cur += 1;
return match_hex_literal(line, cur, "hex".to_string());
} else {
return Token::Illegal;
}
}
return result;
}
/**
* Matches a leading '0'. If this does not correspond to a hex
* number, returns Token::Illegal
*/
fn match_hex_number(line: &Vec<char>, cur: &mut usize) -> Token {
let mut collected = String::new();
if !line.is_hex_delim_at(*cur + 1) {
return Token::Illegal;
}
collected.push(line[*cur]);
collected.push(line[*cur + 1]);
*cur += 2;
while *cur < line.len() && line.is_hex_digit_at(*cur) {
collected.push(line[*cur]);
*cur += 1;
}
*cur -= 1;
// Cannot only have '0x'
if collected.len() <= 2 {
return Token::Illegal;
} else {
return Token::HexNumber(collected);
}
}
/**
* Matches a decimal literal at line[*cur] and returns its Token
*/
fn match_rational(line: &Vec<char>, cur: &mut usize) -> Token {
let mut collected = String::new();
let mut decimal_found = false;
let mut version_found = false;
let mut exponent_found = false;
while *cur < line.len() {
if line.match_idx(*cur, '.') {
// Cannot have a decimal after an exponent
// If we find 2 decimals, we are parsing a Version
// Allowed: { var a = 14.4; } || { var a = 1.4e5; }
// Not allowed: { var a = 1e4.5; }
if decimal_found {
if version_found {
return Token::Illegal;
} else {
version_found = true;
}
} else if exponent_found {
return Token::Illegal;
} else {
decimal_found = true;
}
} else if line.match_idx(*cur, 'e') || line.match_idx(*cur, 'E') {
// Cannot have 2 exponents, or an exponent in a version
if exponent_found || version_found {
return Token::Illegal;
} else {
// If we find an exponent, there must be at least 1 more digit
// (Trailing decimals are allowed, though!)
// Allowed: { var a = 14.; }
// Not allowed: { var a = 14e; }
if *cur + 1 == line.len() || !line[*cur + 1].is_digit(10) {
return Token::Illegal;
} else {
exponent_found = true;
}
}
} else if !line.is_digit_at(*cur) {
*cur -= 1;
if version_found {
return Token::Version(collected);
}
return Token::DecimalNumber(collected);
}
collected.push(line[*cur]);
*cur += 1;
}
*cur -= 1;
Token::DecimalNumber(collected)
}
/**
* Returns the next Token found in the line and increments cur
* to the end of the Token in the parsed line
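 *
 * e.g. scanning `a += 1` yields Identifier("a"), PlusEquals,
 * DecimalNumber("1"), and finally EOF.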
*/
pub fn next_token(line: &Vec<char>, cur: &mut usize) -> Token {
loop {
if *cur >= line.len() {
return Token::EOF;
}
let t = match line[*cur] {
';' => Token::Semicolon,
'{' => Token::OpenBrace,
'}' => Token::CloseBrace,
'(' => Token::OpenParenthesis,
')' => Token::CloseParenthesis,
'[' => Token::OpenBracket,
']' => Token::CloseBracket,
'?' => Token::Question,
',' => Token::Comma,
'~' => Token::Tilda,
'.' => match_period(line, cur),
':' => match_colon(line, cur), // : :=
'=' => match_equals(line, cur), // = == =>
'+' => match_plus(line, cur), // + ++ +=
'-' => match_minus(line, cur), // - -- -=
'*' => match_star(line, cur), // * ** *=
'/' => match_slash(line, cur), // / // /* /=
'>' => match_rarrow(line, cur), // > >= >> >>= >>> >>>=
'<' => match_larrow(line, cur), // < <= << <<=
'!' => match_exclamation(line, cur), // ! !=
'%' => match_percent(line, cur), // % %=
'&' => match_and(line, cur), // & && &=
'|' => match_or(line, cur), // | || |=
'^' => match_xor(line, cur), // ^ ^=
'"' | '\'' => match_string(line, cur),
'0' => {
if line.match_idx(*cur + 1, ' ') {
to_decimal_number("0")
} else if line.is_hex_delim_at(*cur + 1) {
match_hex_number(line, cur)
} else if line.match_idx(*cur + 1, '.') {
match_rational(line, cur)
} else {
Token::Illegal
}
},
non if non.is_whitespace() => skip_whitespace(line, cur),
num if num.starts_rational() => match_rational(line, cur),
chr if chr.starts_iden_or_keyword() => match_identifier_or_keyword(line, cur),
_ => Token::Illegal
};
if t == Token::Illegal {
return t;
} else {
*cur += 1;
if t != Token::NoMatch {
return t;
} else if *cur >= line.len() {
return Token::EOF;
}
}
}
}
// Return the next token in the line, without incrementing cur
pub fn peek_token(line: &Vec<char>, cur: &mut usize) -> Token {
let old = *cur;
let next = next_token(line, cur);
*cur = old;
next
}
#[cfg(test)]
mod tests {
use super::*;
fn fail_test(expect: Token, actual: Token) {
panic!("Expected: {:?} | Actual: {:?}", expect, actual);
}
fn expect_next_token(s: &Vec<char>, cur: &mut usize, t: Token) {
match next_token(&s, cur) {
ref next if *next == t => (),
actual => fail_test(t, actual)
};
}
/* Colon */
#[test]
fn test_pragma1() {
let s = to_chars("^0.4.25;");
let cur = &mut 0;
expect_next_token(&s, cur, Token::BitwiseXor);
expect_next_token(&s, cur, Token::Version(String::from("0.4.25")));
expect_next_token(&s, cur, Token::Semicolon);
}
#[test]
fn test_colon() {
let s = to_chars(":");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Colon);
}
#[test]
fn test_asmassign() {
let s = to_chars(":=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::ASMAssign);
}
/* Equals */
#[test]
fn test_assignment() {
let s = to_chars("=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Assignment);
}
#[test]
fn test_equals() {
let s = to_chars("==");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Equals);
}
#[test]
fn test_arrow() {
let s = to_chars("=>");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Arrow);
}
/* Plus */
#[test]
fn test_plus() {
let s = to_chars("+");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Plus);
}
#[test]
fn test_increment() {
let s = to_chars("++");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Increment);
}
#[test]
fn test_plus_equals() {
let s = to_chars("+=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::PlusEquals);
}
/* Minus */
#[test]
fn test_minus() {
let s = to_chars("-");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Minus);
}
#[test]
fn test_decrement() {
let s = to_chars("--");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Decrement);
}
#[test]
fn test_minus_equals() {
let s = to_chars("-=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::MinusEquals);
}
/* Star */
#[test]
fn test_multiply() {
let s = to_chars("*");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Multiply);
}
#[test]
fn test_power() {
let s = to_chars("**");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Power);
}
#[test]
fn test_multiply_equals() {
let s = to_chars("*=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::MultiplyEquals);
}
/* Slash */
#[test]
fn test_divide() {
let s = to_chars("/");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Divide);
}
#[test]
fn test_comment_single() {
let s = to_chars("//");
let cur = &mut 0;
expect_next_token(&s, cur, Token::CommentSingle);
}
#[test]
fn test_comment_multi() {
let s = to_chars("/*");
let cur = &mut 0;
expect_next_token(&s, cur, Token::CommentMulti);
}
#[test]
fn test_divide_equals() {
let s = to_chars("/=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::DivideEquals);
}
/* RArrow */
#[test]
fn test_greater_than() {
let s = to_chars(">");
let cur = &mut 0;
expect_next_token(&s, cur, Token::GreaterThan);
}
#[test]
fn test_greater_than_or_equals() {
let s = to_chars(">=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::GreaterThanOrEquals);
}
#[test]
fn test_shift_right() {
let s = to_chars(">>");
let cur = &mut 0;
expect_next_token(&s, cur, Token::ShiftRight);
}
#[test]
fn test_shift_right_equals() {
let s = to_chars(">>=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::ShiftRightEquals);
}
#[test]
fn test_thing_0() { // TODO
let s = to_chars(">>>");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Illegal);
}
#[test]
fn test_thing_1() { // TODO
let s = to_chars(">>>=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Illegal);
}
/* LArrow */
#[test]
fn test_less_than() {
let s = to_chars("<");
let cur = &mut 0;
expect_next_token(&s, cur, Token::LessThan);
}
#[test]
fn test_less_than_or_equals() {
let s = to_chars("<=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::LessThanOrEquals);
}
#[test]
fn test_shift_left() {
let s = to_chars("<<");
let cur = &mut 0;
expect_next_token(&s, cur, Token::ShiftLeft);
}
#[test]
fn test_shift_left_equals() {
let s = to_chars("<<=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::ShiftLeftEquals);
}
/* Exclamation */
#[test]
fn test_exclamation() {
let s = to_chars("!");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Exclamation);
}
#[test]
fn test_not_equals() {
let s = to_chars("!=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::NotEquals);
}
/* Percent */
#[test]
fn test_modulus() {
let s = to_chars("%");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Modulus);
}
#[test]
fn test_mod_equals() {
let s = to_chars("%=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::ModEquals);
}
/* And */
#[test]
fn test_bitwise_and() {
let s = to_chars("&");
let cur = &mut 0;
expect_next_token(&s, cur, Token::BitwiseAnd);
}
#[test]
fn test_logical_and() {
let s = to_chars("&&");
let cur = &mut 0;
expect_next_token(&s, cur, Token::LogicalAnd);
}
#[test]
fn test_and_equals() {
let s = to_chars("&=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::AndEquals);
}
/* Or */
#[test]
fn test_bitwise_or() {
let s = to_chars("|");
let cur = &mut 0;
expect_next_token(&s, cur, Token::BitwiseOr);
}
#[test]
fn test_logical_or() {
let s = to_chars("||");
let cur = &mut 0;
expect_next_token(&s, cur, Token::LogicalOr);
}
#[test]
fn test_or_equals() {
let s = to_chars("|=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::OrEquals);
}
/* Xor */
#[test]
fn test_bitwise_xor() {
let s = to_chars("^");
let cur = &mut 0;
expect_next_token(&s, cur, Token::BitwiseXor);
}
#[test]
fn test_xor_equals() {
let s = to_chars("^=");
let cur = &mut 0;
expect_next_token(&s, cur, Token::XorEquals);
}
/* StringLiteral */
#[test]
fn test_string_literal_0() {
let s = to_chars("\"\"");
let cur = &mut 0;
expect_next_token(&s, cur, Token::StringLiteral(s.as_string()));
}
#[test]
fn test_string_literal_1() {
let s = to_chars("''");
let cur = &mut 0;
expect_next_token(&s, cur, Token::StringLiteral(s.as_string()));
}
#[test]
fn test_string_literal_2() {
let s = to_chars("\"test.sol\"");
let cur = &mut 0;
expect_next_token(&s, cur, Token::StringLiteral(s.as_string()));
}
/* Whitespace */
#[test]
fn test_whitespace_0() {
let s = to_chars(" ++");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Increment);
}
#[test]
fn test_whitespace_1() {
let s = to_chars(" ++ +");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Increment);
expect_next_token(&s, cur, Token::Plus);
}
#[test]
fn test_whitespace_3() {
let s = to_chars(" ++ -- / \"literal\"");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Increment);
expect_next_token(&s, cur, Token::Decrement);
expect_next_token(&s, cur, Token::Divide);
expect_next_token(&s, cur, to_string_literal("\"literal\""));
}
/* Number literals */
#[test]
fn test_numbers_0() {
let s = to_chars("1 12+123");
let cur = &mut 0;
expect_next_token(&s, cur, to_decimal_number("1"));
expect_next_token(&s, cur, to_decimal_number("12"));
expect_next_token(&s, cur, Token::Plus);
expect_next_token(&s, cur, to_decimal_number("123"));
}
#[test]
fn test_numbers_1() {
let s = to_chars("0 0.1");
let cur = &mut 0;
expect_next_token(&s, cur, to_decimal_number("0"));
expect_next_token(&s, cur, to_decimal_number("0.1"));
}
#[test]
fn test_numbers_2() {
let s = to_chars("01234");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Illegal);
}
#[test]
fn test_numbers_3() {
let s = to_chars("1.2e3 4E5");
let cur = &mut 0;
expect_next_token(&s, cur, to_decimal_number("1.2e3"));
expect_next_token(&s, cur, to_decimal_number("4E5"));
}
#[test]
fn test_numbers_4() {
let s = to_chars(".14 .Iden Iden.Iden");
let cur = &mut 0;
expect_next_token(&s, cur, to_decimal_number(".14"));
expect_next_token(&s, cur, Token::Dot);
expect_next_token(&s, cur, to_identifier("Iden"));
expect_next_token(&s, cur, to_identifier("Iden"));
expect_next_token(&s, cur, Token::Dot);
expect_next_token(&s, cur, to_identifier("Iden"));
}
#[test]
fn test_hex_numbers_0() {
let s = to_chars("0x");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Illegal);
}
#[test]
fn test_hex_numbers_1() {
let s = to_chars("0xFF 0 0xF");
let cur = &mut 0;
expect_next_token(&s, cur, to_hex_number("0xFF"));
expect_next_token(&s, cur, to_decimal_number("0"));
expect_next_token(&s, cur, to_hex_number("0xF"));
}
#[test]
fn test_hex_numbers_2() {
let s = to_chars("0xdf 0xZZ");
let cur = &mut 0;
expect_next_token(&s, cur, to_hex_number("0xdf"));
expect_next_token(&s, cur, Token::Illegal);
}
/* Keywords */
#[test]
fn test_address() {
let s = to_chars("address");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Address);
}
#[test]
fn test_anonymous() {
let s = to_chars("anonymous");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Anonymous);
}
#[test]
fn test_as() {
let s = to_chars("as");
let cur = &mut 0;
expect_next_token(&s, cur, Token::As);
}
#[test]
fn test_assembly() {
let s = to_chars("assembly");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Assembly);
}
#[test]
fn test_bool() {
let s = to_chars("bool");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Bool);
}
#[test]
fn test_break() {
let s = to_chars("break");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Break);
}
#[test]
fn test_byte() {
let s = to_chars("byte");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Byte);
}
#[test]
fn test_bytes() {
let s = to_chars("bytes");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Bytes);
}
#[test]
fn test_bytes1() {
let s = to_chars("bytes1");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Bytes1);
}
#[test]
fn test_bytes32() {
let s = to_chars("bytes32");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Bytes32);
}
#[test]
fn test_constant() {
let s = to_chars("constant");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Constant);
}
#[test]
fn test_continue() {
let s = to_chars("continue");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Continue);
}
#[test]
fn test_contract() {
let s = to_chars("contract");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Contract);
}
#[test]
fn test_days() {
let s = to_chars("days");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Days);
}
#[test]
fn test_delete() {
let s = to_chars("delete");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Delete);
}
#[test]
fn test_do() {
let s = to_chars("do");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Do);
}
#[test]
fn test_else() {
let s = to_chars("else");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Else);
}
#[test]
fn test_emit() {
let s = to_chars("emit");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Emit);
}
#[test]
fn test_enum() {
let s = to_chars("enum");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Enum);
}
#[test]
fn test_ether() {
let s = to_chars("ether");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Ether);
}
#[test]
fn test_event() {
let s = to_chars("event");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Event);
}
#[test]
fn test_external() {
let s = to_chars("external");
let cur = &mut 0;
expect_next_token(&s, cur, Token::External);
}
#[test]
fn test_false() {
let s = to_chars("false");
let cur = &mut 0;
expect_next_token(&s, cur, Token::False);
}
#[test]
fn test_finney() {
let s = to_chars("finney");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Finney);
}
#[test]
fn test_fixed() {
let s = to_chars("fixed");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Fixed);
}
#[test]
fn test_for() {
let s = to_chars("for");
let cur = &mut 0;
expect_next_token(&s, cur, Token::For);
}
#[test]
fn test_from() {
let s = to_chars("from");
let cur = &mut 0;
expect_next_token(&s, cur, Token::From);
}
#[test]
fn test_function() {
let s = to_chars("function");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Function);
}
#[test]
fn test_hex() {
let s = to_chars("hex");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Illegal);
}
#[test]
fn test_hex_literal1() {
let s = to_chars("hex\"DEADBEEF\"");
let cur = &mut 0;
expect_next_token(&s, cur, to_hex_literal("hex\"DEADBEEF\""));
}
#[test]
fn test_hex_literal2() {
let s = to_chars("hex\"ZZZZ\"");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Illegal);
}
#[test]
fn test_hours() {
let s = to_chars("hours");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Hours);
}
#[test]
fn test_if() {
let s = to_chars("if");
let cur = &mut 0;
expect_next_token(&s, cur, Token::If);
}
#[test]
fn test_import() {
let s = to_chars("import");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Import);
}
#[test]
fn test_indexed() {
let s = to_chars("indexed");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Indexed);
}
#[test]
fn test_int() {
let s = to_chars("int");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Int);
}
#[test]
fn test_int8() {
let s = to_chars("int8");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Int8);
}
#[test]
fn test_int16() {
let s = to_chars("int16");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Int16);
}
#[test]
fn test_int256() {
let s = to_chars("int256");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Int256);
}
#[test]
fn test_interface() {
let s = to_chars("interface");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Interface);
}
#[test]
fn test_internal() {
let s = to_chars("internal");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Internal);
}
#[test]
fn test_is() {
let s = to_chars("is");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Is);
}
#[test]
fn test_let() {
let s = to_chars("let");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Let);
}
#[test]
fn test_library() {
let s = to_chars("library");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Library);
}
#[test]
fn test_mapping() {
let s = to_chars("mapping");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Mapping);
}
#[test]
fn test_memory() {
let s = to_chars("memory");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Memory);
}
#[test]
fn test_minutes() {
let s = to_chars("minutes");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Minutes);
}
#[test]
fn test_modifier() {
let s = to_chars("modifier");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Modifier);
}
#[test]
fn test_new() {
let s = to_chars("new");
let cur = &mut 0;
expect_next_token(&s, cur, Token::New);
}
#[test]
fn test_payable() {
let s = to_chars("payable");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Payable);
}
#[test]
fn test_pragma() {
let s = to_chars("pragma");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Pragma);
}
#[test]
fn test_private() {
let s = to_chars("private");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Private);
}
#[test]
fn test_public() {
let s = to_chars("public");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Public);
}
#[test]
fn test_pure() {
let s = to_chars("pure");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Pure);
}
#[test]
fn test_return() {
let s = to_chars("return");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Return);
}
#[test]
fn test_returns() {
let s = to_chars("returns");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Returns);
}
#[test]
fn test_seconds() {
let s = to_chars("seconds");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Seconds);
}
#[test]
fn test_storage() {
let s = to_chars("storage");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Storage);
}
#[test]
fn test_string() {
let s = to_chars("string");
let cur = &mut 0;
expect_next_token(&s, cur, Token::String);
}
#[test]
fn test_struct() {
let s = to_chars("struct");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Struct);
}
#[test]
fn test_szabo() {
let s = to_chars("szabo");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Szabo);
}
#[test]
fn test_throw() {
let s = to_chars("throw");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Throw);
}
#[test]
fn test_true() {
let s = to_chars("true");
let cur = &mut 0;
expect_next_token(&s, cur, Token::True);
}
#[test]
fn test_ufixed() {
let s = to_chars("ufixed");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Ufixed);
}
#[test]
fn test_uint() {
let s = to_chars("uint");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Uint);
}
#[test]
fn test_uint8() {
let s = to_chars("uint8");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Uint8);
}
#[test]
fn test_uint16() {
let s = to_chars("uint16");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Uint16);
}
#[test]
fn test_uint256() {
let s = to_chars("uint256");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Uint256);
}
#[test]
fn test_using() {
let s = to_chars("using");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Using);
}
#[test]
fn test_var() {
let s = to_chars("var");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Var);
}
#[test]
fn test_view() {
let s = to_chars("view");
let cur = &mut 0;
expect_next_token(&s, cur, Token::View);
}
#[test]
fn test_weeks() {
let s = to_chars("weeks");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Weeks);
}
#[test]
fn test_wei() {
let s = to_chars("wei");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Wei);
}
#[test]
fn test_while() {
let s = to_chars("while");
let cur = &mut 0;
expect_next_token(&s, cur, Token::While);
}
#[test]
fn test_years() {
let s = to_chars("years");
let cur = &mut 0;
expect_next_token(&s, cur, Token::Years);
}
#[test]
fn test_placeholder() {
let s = to_chars("_");
let cur = &mut 0;
expect_next_token(&s, cur, to_identifier("_"));
}
}
| 25.061432 | 102 | 0.500313 |
2f5e7d367242aeb471d1aadabed68716322ee18a | 47,120 | #![allow(unused_imports, non_camel_case_types)]
use crate::models::r5::Annotation::Annotation;
use crate::models::r5::Attachment::Attachment;
use crate::models::r5::CodeableConcept::CodeableConcept;
use crate::models::r5::Element::Element;
use crate::models::r5::Extension::Extension;
use crate::models::r5::Identifier::Identifier;
use crate::models::r5::Meta::Meta;
use crate::models::r5::Narrative::Narrative;
use crate::models::r5::Observation_Component::Observation_Component;
use crate::models::r5::Observation_ReferenceRange::Observation_ReferenceRange;
use crate::models::r5::Period::Period;
use crate::models::r5::Quantity::Quantity;
use crate::models::r5::Range::Range;
use crate::models::r5::Ratio::Ratio;
use crate::models::r5::Reference::Reference;
use crate::models::r5::ResourceList::ResourceList;
use crate::models::r5::SampledData::SampledData;
use crate::models::r5::Timing::Timing;
use serde_json::json;
use serde_json::value::Value;
use std::borrow::Cow;
/// Measurements and simple assertions made about a patient, device or other subject.
#[derive(Debug)]
pub struct Observation<'a> {
pub(crate) value: Cow<'a, Value>,
}
impl Observation<'_> {
pub fn new(value: &Value) -> Observation {
Observation {
value: Cow::Borrowed(value),
}
}
pub fn to_json(&self) -> Value {
(*self.value).clone()
}
/// Extensions for effectiveDateTime
pub fn _effective_date_time(&self) -> Option<Element> {
if let Some(val) = self.value.get("_effectiveDateTime") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for effectiveInstant
pub fn _effective_instant(&self) -> Option<Element> {
if let Some(val) = self.value.get("_effectiveInstant") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for implicitRules
pub fn _implicit_rules(&self) -> Option<Element> {
if let Some(val) = self.value.get("_implicitRules") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for instantiatesCanonical
pub fn _instantiates_canonical(&self) -> Option<Element> {
if let Some(val) = self.value.get("_instantiatesCanonical") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for issued
pub fn _issued(&self) -> Option<Element> {
if let Some(val) = self.value.get("_issued") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for language
pub fn _language(&self) -> Option<Element> {
if let Some(val) = self.value.get("_language") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for status
pub fn _status(&self) -> Option<Element> {
if let Some(val) = self.value.get("_status") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for valueBoolean
pub fn _value_boolean(&self) -> Option<Element> {
if let Some(val) = self.value.get("_valueBoolean") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for valueDateTime
pub fn _value_date_time(&self) -> Option<Element> {
if let Some(val) = self.value.get("_valueDateTime") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for valueInteger
pub fn _value_integer(&self) -> Option<Element> {
if let Some(val) = self.value.get("_valueInteger") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for valueString
pub fn _value_string(&self) -> Option<Element> {
if let Some(val) = self.value.get("_valueString") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Extensions for valueTime
pub fn _value_time(&self) -> Option<Element> {
if let Some(val) = self.value.get("_valueTime") {
return Some(Element {
value: Cow::Borrowed(val),
});
}
return None;
}
/// A plan, proposal or order that is fulfilled in whole or in part by this event.
/// For example, a MedicationRequest may require a patient to have laboratory test
/// performed before it is dispensed.
pub fn based_on(&self) -> Option<Vec<Reference>> {
if let Some(Value::Array(val)) = self.value.get("basedOn") {
return Some(
val.into_iter()
.map(|e| Reference {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// Indicates the site on the subject's body where the observation was made (i.e. the
/// target site).
pub fn body_site(&self) -> Option<CodeableConcept> {
if let Some(val) = self.value.get("bodySite") {
return Some(CodeableConcept {
value: Cow::Borrowed(val),
});
}
return None;
}
/// A code that classifies the general type of observation being made.
pub fn category(&self) -> Option<Vec<CodeableConcept>> {
if let Some(Value::Array(val)) = self.value.get("category") {
return Some(
val.into_iter()
.map(|e| CodeableConcept {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// Describes what was observed. Sometimes this is called the observation "name".
pub fn code(&self) -> CodeableConcept {
CodeableConcept {
value: Cow::Borrowed(&self.value["code"]),
}
}
/// Some observations have multiple component observations. These component
/// observations are expressed as separate code value pairs that share the same
/// attributes. Examples include systolic and diastolic component observations
/// for blood pressure measurement and multiple component observations for genetics
/// observations.
pub fn component(&self) -> Option<Vec<Observation_Component>> {
if let Some(Value::Array(val)) = self.value.get("component") {
return Some(
val.into_iter()
.map(|e| Observation_Component {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// These resources do not have an independent existence apart from the resource that
/// contains them - they cannot be identified independently, nor can they have their
/// own independent transaction scope.
pub fn contained(&self) -> Option<Vec<ResourceList>> {
if let Some(Value::Array(val)) = self.value.get("contained") {
return Some(
val.into_iter()
.map(|e| ResourceList {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// Provides a reason why the expected value in the element Observation.value[x] is
/// missing.
pub fn data_absent_reason(&self) -> Option<CodeableConcept> {
if let Some(val) = self.value.get("dataAbsentReason") {
return Some(CodeableConcept {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The target resource that represents a measurement from which this observation
/// value is derived. For example, a calculated anion gap or a fetal measurement based
/// on an ultrasound image.
pub fn derived_from(&self) -> Option<Vec<Reference>> {
if let Some(Value::Array(val)) = self.value.get("derivedFrom") {
return Some(
val.into_iter()
.map(|e| Reference {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// The device used to generate the observation data.
pub fn device(&self) -> Option<Reference> {
if let Some(val) = self.value.get("device") {
return Some(Reference {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The time or time-period the observed value is asserted as being true. For
/// biological subjects - e.g. human patients - this is usually called the
/// "physiologically relevant time". This is usually either the time of the procedure
/// or of specimen collection, but very often the source of the date/time is not
/// known, only the date/time itself.
pub fn effective_date_time(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("effectiveDateTime") {
return Some(string);
}
return None;
}
/// The time or time-period the observed value is asserted as being true. For
/// biological subjects - e.g. human patients - this is usually called the
/// "physiologically relevant time". This is usually either the time of the procedure
/// or of specimen collection, but very often the source of the date/time is not
/// known, only the date/time itself.
pub fn effective_instant(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("effectiveInstant") {
return Some(string);
}
return None;
}
/// The time or time-period the observed value is asserted as being true. For
/// biological subjects - e.g. human patients - this is usually called the
/// "physiologically relevant time". This is usually either the time of the procedure
/// or of specimen collection, but very often the source of the date/time is not
/// known, only the date/time itself.
pub fn effective_period(&self) -> Option<Period> {
if let Some(val) = self.value.get("effectivePeriod") {
return Some(Period {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The time or time-period the observed value is asserted as being true. For
/// biological subjects - e.g. human patients - this is usually called the
/// "physiologically relevant time". This is usually either the time of the procedure
/// or of specimen collection, but very often the source of the date/time is not
/// known, only the date/time itself.
pub fn effective_timing(&self) -> Option<Timing> {
if let Some(val) = self.value.get("effectiveTiming") {
return Some(Timing {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The healthcare event (e.g. a patient and healthcare provider interaction) during
/// which this observation is made.
pub fn encounter(&self) -> Option<Reference> {
if let Some(val) = self.value.get("encounter") {
return Some(Reference {
value: Cow::Borrowed(val),
});
}
return None;
}
/// May be used to represent additional information that is not part of the basic
/// definition of the resource. To make the use of extensions safe and manageable,
/// there is a strict set of governance applied to the definition and use of
/// extensions. Though any implementer can define an extension, there is a set of
/// requirements that SHALL be met as part of the definition of the extension.
pub fn extension(&self) -> Option<Vec<Extension>> {
if let Some(Value::Array(val)) = self.value.get("extension") {
return Some(
val.into_iter()
.map(|e| Extension {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// The actual focus of an observation when it is not the patient of record
/// representing something or someone associated with the patient such as a spouse,
/// parent, fetus, or donor. For example, fetus observations in a mother's record.
/// The focus of an observation could also be an existing condition, an intervention,
/// the subject's diet, another observation of the subject, or a body structure
/// such as tumor or implanted device. An example use case would be using the
/// Observation resource to capture whether the mother is trained to change her
/// child's tracheostomy tube. In this example, the child is the patient of record and
/// the mother is the focus.
pub fn focus(&self) -> Option<Vec<Reference>> {
if let Some(Value::Array(val)) = self.value.get("focus") {
return Some(
val.into_iter()
.map(|e| Reference {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// This observation is a group observation (e.g. a battery, a panel of tests, a set
/// of vital sign measurements) that includes the target as a member of the group.
pub fn has_member(&self) -> Option<Vec<Reference>> {
if let Some(Value::Array(val)) = self.value.get("hasMember") {
return Some(
val.into_iter()
.map(|e| Reference {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// The logical id of the resource, as used in the URL for the resource. Once
/// assigned, this value never changes.
pub fn id(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("id") {
return Some(string);
}
return None;
}
/// A unique identifier assigned to this observation.
pub fn identifier(&self) -> Option<Vec<Identifier>> {
if let Some(Value::Array(val)) = self.value.get("identifier") {
return Some(
val.into_iter()
.map(|e| Identifier {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// A reference to a set of rules that were followed when the resource was
/// constructed, and which must be understood when processing the content. Often, this
/// is a reference to an implementation guide that defines the special rules along
/// with other profiles etc.
pub fn implicit_rules(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("implicitRules") {
return Some(string);
}
return None;
}
/// The reference to a FHIR ObservationDefinition resource that provides the
/// definition that is adhered to in whole or in part by this Observation instance.
pub fn instantiates_canonical(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("instantiatesCanonical") {
return Some(string);
}
return None;
}
/// The reference to a FHIR ObservationDefinition resource that provides the
/// definition that is adhered to in whole or in part by this Observation instance.
pub fn instantiates_reference(&self) -> Option<Reference> {
if let Some(val) = self.value.get("instantiatesReference") {
return Some(Reference {
value: Cow::Borrowed(val),
});
}
return None;
}
/// A categorical assessment of an observation value. For example, high, low, normal.
pub fn interpretation(&self) -> Option<Vec<CodeableConcept>> {
if let Some(Value::Array(val)) = self.value.get("interpretation") {
return Some(
val.into_iter()
.map(|e| CodeableConcept {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// The date and time this version of the observation was made available to providers,
/// typically after the results have been reviewed and verified.
pub fn issued(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("issued") {
return Some(string);
}
return None;
}
/// The base language in which the resource is written.
pub fn language(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("language") {
return Some(string);
}
return None;
}
/// The metadata about the resource. This is content that is maintained by the
/// infrastructure. Changes to the content might not always be associated with version
/// changes to the resource.
pub fn meta(&self) -> Option<Meta> {
if let Some(val) = self.value.get("meta") {
return Some(Meta {
value: Cow::Borrowed(val),
});
}
return None;
}
/// Indicates the mechanism used to perform the observation.
pub fn method(&self) -> Option<CodeableConcept> {
if let Some(val) = self.value.get("method") {
return Some(CodeableConcept {
value: Cow::Borrowed(val),
});
}
return None;
}
/// May be used to represent additional information that is not part of the basic
/// definition of the resource and that modifies the understanding of the element
/// that contains it and/or the understanding of the containing element's descendants.
/// Usually modifier elements provide negation or qualification. To make the use of
/// extensions safe and manageable, there is a strict set of governance applied to
/// the definition and use of extensions. Though any implementer is allowed to define
/// an extension, there is a set of requirements that SHALL be met as part of the
/// definition of the extension. Applications processing a resource are required to
/// check for modifier extensions. Modifier extensions SHALL NOT change the meaning
/// of any elements on Resource or DomainResource (including cannot change the meaning
/// of modifierExtension itself).
pub fn modifier_extension(&self) -> Option<Vec<Extension>> {
if let Some(Value::Array(val)) = self.value.get("modifierExtension") {
return Some(
val.into_iter()
.map(|e| Extension {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// Comments about the observation or the results.
pub fn note(&self) -> Option<Vec<Annotation>> {
if let Some(Value::Array(val)) = self.value.get("note") {
return Some(
val.into_iter()
.map(|e| Annotation {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// A larger event of which this particular Observation is a component or step. For
/// example, an observation as part of a procedure.
pub fn part_of(&self) -> Option<Vec<Reference>> {
if let Some(Value::Array(val)) = self.value.get("partOf") {
return Some(
val.into_iter()
.map(|e| Reference {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// Who was responsible for asserting the observed value as "true".
pub fn performer(&self) -> Option<Vec<Reference>> {
if let Some(Value::Array(val)) = self.value.get("performer") {
return Some(
val.into_iter()
.map(|e| Reference {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// Guidance on how to interpret the value by comparison to a normal or recommended
/// range. Multiple reference ranges are interpreted as an "OR". In other words,
/// to represent two distinct target populations, two `referenceRange` elements would
/// be used.
pub fn reference_range(&self) -> Option<Vec<Observation_ReferenceRange>> {
if let Some(Value::Array(val)) = self.value.get("referenceRange") {
return Some(
val.into_iter()
.map(|e| Observation_ReferenceRange {
value: Cow::Borrowed(e),
})
.collect::<Vec<_>>(),
);
}
return None;
}
/// The specimen that was used when this observation was made.
pub fn specimen(&self) -> Option<Reference> {
if let Some(val) = self.value.get("specimen") {
return Some(Reference {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The status of the result value.
pub fn status(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("status") {
return Some(string);
}
return None;
}
/// The patient, or group of patients, location, device, organization, procedure
/// or practitioner this observation is about and into whose or what record the
/// observation is placed. If the actual focus of the observation is different from
/// the subject (or a sample of, part, or region of the subject), the `focus` element
/// or the `code` itself specifies the actual focus of the observation.
pub fn subject(&self) -> Option<Reference> {
if let Some(val) = self.value.get("subject") {
return Some(Reference {
value: Cow::Borrowed(val),
});
}
return None;
}
/// A human-readable narrative that contains a summary of the resource and can be used
/// to represent the content of the resource to a human. The narrative need not encode
/// all the structured data, but is required to contain sufficient detail to make it
/// "clinically safe" for a human to just read the narrative. Resource definitions
/// may define what content should be represented in the narrative to ensure clinical
/// safety.
pub fn text(&self) -> Option<Narrative> {
if let Some(val) = self.value.get("text") {
return Some(Narrative {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The information determined as a result of making the observation, if the
/// information has a simple value.
pub fn value_attachment(&self) -> Option<Attachment> {
if let Some(val) = self.value.get("valueAttachment") {
return Some(Attachment {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The information determined as a result of making the observation, if the
/// information has a simple value.
pub fn value_boolean(&self) -> Option<bool> {
if let Some(val) = self.value.get("valueBoolean") {
return Some(val.as_bool().unwrap());
}
return None;
}
/// The information determined as a result of making the observation, if the
/// information has a simple value.
pub fn value_codeable_concept(&self) -> Option<CodeableConcept> {
if let Some(val) = self.value.get("valueCodeableConcept") {
return Some(CodeableConcept {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The information determined as a result of making the observation, if the
/// information has a simple value.
pub fn value_date_time(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("valueDateTime") {
return Some(string);
}
return None;
}
/// The information determined as a result of making the observation, if the
/// information has a simple value.
pub fn value_integer(&self) -> Option<f64> {
if let Some(val) = self.value.get("valueInteger") {
return Some(val.as_f64().unwrap());
}
return None;
}
/// The information determined as a result of making the observation, if the
/// information has a simple value.
pub fn value_period(&self) -> Option<Period> {
if let Some(val) = self.value.get("valuePeriod") {
return Some(Period {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The information determined as a result of making the observation, if the
/// information has a simple value.
pub fn value_quantity(&self) -> Option<Quantity> {
if let Some(val) = self.value.get("valueQuantity") {
return Some(Quantity {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The information determined as a result of making the observation, if the
/// information has a simple value.
pub fn value_range(&self) -> Option<Range> {
if let Some(val) = self.value.get("valueRange") {
return Some(Range {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The information determined as a result of making the observation, if the
/// information has a simple value.
pub fn value_ratio(&self) -> Option<Ratio> {
if let Some(val) = self.value.get("valueRatio") {
return Some(Ratio {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The information determined as a result of making the observation, if the
/// information has a simple value.
pub fn value_sampled_data(&self) -> Option<SampledData> {
if let Some(val) = self.value.get("valueSampledData") {
return Some(SampledData {
value: Cow::Borrowed(val),
});
}
return None;
}
/// The information determined as a result of making the observation, if the
/// information has a simple value.
pub fn value_string(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("valueString") {
return Some(string);
}
return None;
}
/// The information determined as a result of making the observation, if the
/// information has a simple value.
pub fn value_time(&self) -> Option<&str> {
if let Some(Value::String(string)) = self.value.get("valueTime") {
return Some(string);
}
return None;
}
pub fn validate(&self) -> bool {
if let Some(_val) = self._effective_date_time() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._effective_instant() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._implicit_rules() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._instantiates_canonical() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._issued() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._language() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._status() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._value_boolean() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._value_date_time() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._value_integer() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._value_string() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self._value_time() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.based_on() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.body_site() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.category() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if !self.code().validate() {
return false;
}
if let Some(_val) = self.component() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.contained() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.data_absent_reason() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.derived_from() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.device() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.effective_date_time() {}
if let Some(_val) = self.effective_instant() {}
if let Some(_val) = self.effective_period() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.effective_timing() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.encounter() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.extension() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.focus() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.has_member() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.id() {}
if let Some(_val) = self.identifier() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.implicit_rules() {}
if let Some(_val) = self.instantiates_canonical() {}
if let Some(_val) = self.instantiates_reference() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.interpretation() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.issued() {}
if let Some(_val) = self.language() {}
if let Some(_val) = self.meta() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.method() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.modifier_extension() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.note() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.part_of() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.performer() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.reference_range() {
if !_val.into_iter().map(|e| e.validate()).all(|x| x == true) {
return false;
}
}
if let Some(_val) = self.specimen() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.status() {}
if let Some(_val) = self.subject() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.text() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.value_attachment() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.value_boolean() {}
if let Some(_val) = self.value_codeable_concept() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.value_date_time() {}
if let Some(_val) = self.value_integer() {}
if let Some(_val) = self.value_period() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.value_quantity() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.value_range() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.value_ratio() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.value_sampled_data() {
if !_val.validate() {
return false;
}
}
if let Some(_val) = self.value_string() {}
if let Some(_val) = self.value_time() {}
return true;
}
}
#[derive(Debug)]
pub struct ObservationBuilder {
pub(crate) value: Value,
}
impl ObservationBuilder {
pub fn build(&self) -> Observation {
Observation {
value: Cow::Owned(self.value.clone()),
}
}
pub fn with(existing: Observation) -> ObservationBuilder {
ObservationBuilder {
value: (*existing.value).clone(),
}
}
pub fn new(code: CodeableConcept) -> ObservationBuilder {
let mut __value: Value = json!({});
__value["code"] = json!(code.value);
return ObservationBuilder { value: __value };
}
pub fn _effective_date_time<'a>(&'a mut self, val: Element) -> &'a mut ObservationBuilder {
self.value["_effectiveDateTime"] = json!(val.value);
return self;
}
pub fn _effective_instant<'a>(&'a mut self, val: Element) -> &'a mut ObservationBuilder {
self.value["_effectiveInstant"] = json!(val.value);
return self;
}
pub fn _implicit_rules<'a>(&'a mut self, val: Element) -> &'a mut ObservationBuilder {
self.value["_implicitRules"] = json!(val.value);
return self;
}
pub fn _instantiates_canonical<'a>(&'a mut self, val: Element) -> &'a mut ObservationBuilder {
self.value["_instantiatesCanonical"] = json!(val.value);
return self;
}
pub fn _issued<'a>(&'a mut self, val: Element) -> &'a mut ObservationBuilder {
self.value["_issued"] = json!(val.value);
return self;
}
pub fn _language<'a>(&'a mut self, val: Element) -> &'a mut ObservationBuilder {
self.value["_language"] = json!(val.value);
return self;
}
pub fn _status<'a>(&'a mut self, val: Element) -> &'a mut ObservationBuilder {
self.value["_status"] = json!(val.value);
return self;
}
pub fn _value_boolean<'a>(&'a mut self, val: Element) -> &'a mut ObservationBuilder {
self.value["_valueBoolean"] = json!(val.value);
return self;
}
pub fn _value_date_time<'a>(&'a mut self, val: Element) -> &'a mut ObservationBuilder {
self.value["_valueDateTime"] = json!(val.value);
return self;
}
pub fn _value_integer<'a>(&'a mut self, val: Element) -> &'a mut ObservationBuilder {
self.value["_valueInteger"] = json!(val.value);
return self;
}
pub fn _value_string<'a>(&'a mut self, val: Element) -> &'a mut ObservationBuilder {
self.value["_valueString"] = json!(val.value);
return self;
}
pub fn _value_time<'a>(&'a mut self, val: Element) -> &'a mut ObservationBuilder {
self.value["_valueTime"] = json!(val.value);
return self;
}
pub fn based_on<'a>(&'a mut self, val: Vec<Reference>) -> &'a mut ObservationBuilder {
self.value["basedOn"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn body_site<'a>(&'a mut self, val: CodeableConcept) -> &'a mut ObservationBuilder {
self.value["bodySite"] = json!(val.value);
return self;
}
pub fn category<'a>(&'a mut self, val: Vec<CodeableConcept>) -> &'a mut ObservationBuilder {
self.value["category"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn component<'a>(
&'a mut self,
val: Vec<Observation_Component>,
) -> &'a mut ObservationBuilder {
self.value["component"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn contained<'a>(&'a mut self, val: Vec<ResourceList>) -> &'a mut ObservationBuilder {
self.value["contained"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn data_absent_reason<'a>(
&'a mut self,
val: CodeableConcept,
) -> &'a mut ObservationBuilder {
self.value["dataAbsentReason"] = json!(val.value);
return self;
}
pub fn derived_from<'a>(&'a mut self, val: Vec<Reference>) -> &'a mut ObservationBuilder {
self.value["derivedFrom"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn device<'a>(&'a mut self, val: Reference) -> &'a mut ObservationBuilder {
self.value["device"] = json!(val.value);
return self;
}
pub fn effective_date_time<'a>(&'a mut self, val: &str) -> &'a mut ObservationBuilder {
self.value["effectiveDateTime"] = json!(val);
return self;
}
pub fn effective_instant<'a>(&'a mut self, val: &str) -> &'a mut ObservationBuilder {
self.value["effectiveInstant"] = json!(val);
return self;
}
pub fn effective_period<'a>(&'a mut self, val: Period) -> &'a mut ObservationBuilder {
self.value["effectivePeriod"] = json!(val.value);
return self;
}
pub fn effective_timing<'a>(&'a mut self, val: Timing) -> &'a mut ObservationBuilder {
self.value["effectiveTiming"] = json!(val.value);
return self;
}
pub fn encounter<'a>(&'a mut self, val: Reference) -> &'a mut ObservationBuilder {
self.value["encounter"] = json!(val.value);
return self;
}
pub fn extension<'a>(&'a mut self, val: Vec<Extension>) -> &'a mut ObservationBuilder {
self.value["extension"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn focus<'a>(&'a mut self, val: Vec<Reference>) -> &'a mut ObservationBuilder {
self.value["focus"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn has_member<'a>(&'a mut self, val: Vec<Reference>) -> &'a mut ObservationBuilder {
self.value["hasMember"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn id<'a>(&'a mut self, val: &str) -> &'a mut ObservationBuilder {
self.value["id"] = json!(val);
return self;
}
pub fn identifier<'a>(&'a mut self, val: Vec<Identifier>) -> &'a mut ObservationBuilder {
self.value["identifier"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn implicit_rules<'a>(&'a mut self, val: &str) -> &'a mut ObservationBuilder {
self.value["implicitRules"] = json!(val);
return self;
}
pub fn instantiates_canonical<'a>(&'a mut self, val: &str) -> &'a mut ObservationBuilder {
self.value["instantiatesCanonical"] = json!(val);
return self;
}
pub fn instantiates_reference<'a>(&'a mut self, val: Reference) -> &'a mut ObservationBuilder {
self.value["instantiatesReference"] = json!(val.value);
return self;
}
pub fn interpretation<'a>(
&'a mut self,
val: Vec<CodeableConcept>,
) -> &'a mut ObservationBuilder {
self.value["interpretation"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn issued<'a>(&'a mut self, val: &str) -> &'a mut ObservationBuilder {
self.value["issued"] = json!(val);
return self;
}
pub fn language<'a>(&'a mut self, val: &str) -> &'a mut ObservationBuilder {
self.value["language"] = json!(val);
return self;
}
pub fn meta<'a>(&'a mut self, val: Meta) -> &'a mut ObservationBuilder {
self.value["meta"] = json!(val.value);
return self;
}
pub fn method<'a>(&'a mut self, val: CodeableConcept) -> &'a mut ObservationBuilder {
self.value["method"] = json!(val.value);
return self;
}
pub fn modifier_extension<'a>(&'a mut self, val: Vec<Extension>) -> &'a mut ObservationBuilder {
self.value["modifierExtension"] =
json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn note<'a>(&'a mut self, val: Vec<Annotation>) -> &'a mut ObservationBuilder {
self.value["note"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn part_of<'a>(&'a mut self, val: Vec<Reference>) -> &'a mut ObservationBuilder {
self.value["partOf"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn performer<'a>(&'a mut self, val: Vec<Reference>) -> &'a mut ObservationBuilder {
self.value["performer"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn reference_range<'a>(
&'a mut self,
val: Vec<Observation_ReferenceRange>,
) -> &'a mut ObservationBuilder {
self.value["referenceRange"] = json!(val.into_iter().map(|e| e.value).collect::<Vec<_>>());
return self;
}
pub fn specimen<'a>(&'a mut self, val: Reference) -> &'a mut ObservationBuilder {
self.value["specimen"] = json!(val.value);
return self;
}
pub fn status<'a>(&'a mut self, val: &str) -> &'a mut ObservationBuilder {
self.value["status"] = json!(val);
return self;
}
pub fn subject<'a>(&'a mut self, val: Reference) -> &'a mut ObservationBuilder {
self.value["subject"] = json!(val.value);
return self;
}
pub fn text<'a>(&'a mut self, val: Narrative) -> &'a mut ObservationBuilder {
self.value["text"] = json!(val.value);
return self;
}
pub fn value_attachment<'a>(&'a mut self, val: Attachment) -> &'a mut ObservationBuilder {
self.value["valueAttachment"] = json!(val.value);
return self;
}
pub fn value_boolean<'a>(&'a mut self, val: bool) -> &'a mut ObservationBuilder {
self.value["valueBoolean"] = json!(val);
return self;
}
pub fn value_codeable_concept<'a>(
&'a mut self,
val: CodeableConcept,
) -> &'a mut ObservationBuilder {
self.value["valueCodeableConcept"] = json!(val.value);
return self;
}
pub fn value_date_time<'a>(&'a mut self, val: &str) -> &'a mut ObservationBuilder {
self.value["valueDateTime"] = json!(val);
return self;
}
pub fn value_integer<'a>(&'a mut self, val: f64) -> &'a mut ObservationBuilder {
self.value["valueInteger"] = json!(val);
return self;
}
pub fn value_period<'a>(&'a mut self, val: Period) -> &'a mut ObservationBuilder {
self.value["valuePeriod"] = json!(val.value);
return self;
}
pub fn value_quantity<'a>(&'a mut self, val: Quantity) -> &'a mut ObservationBuilder {
self.value["valueQuantity"] = json!(val.value);
return self;
}
pub fn value_range<'a>(&'a mut self, val: Range) -> &'a mut ObservationBuilder {
self.value["valueRange"] = json!(val.value);
return self;
}
pub fn value_ratio<'a>(&'a mut self, val: Ratio) -> &'a mut ObservationBuilder {
self.value["valueRatio"] = json!(val.value);
return self;
}
pub fn value_sampled_data<'a>(&'a mut self, val: SampledData) -> &'a mut ObservationBuilder {
self.value["valueSampledData"] = json!(val.value);
return self;
}
pub fn value_string<'a>(&'a mut self, val: &str) -> &'a mut ObservationBuilder {
self.value["valueString"] = json!(val);
return self;
}
pub fn value_time<'a>(&'a mut self, val: &str) -> &'a mut ObservationBuilder {
self.value["valueTime"] = json!(val);
return self;
}
}
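// --- Usage sketch (not part of the generated code) ---
// `Observation` is a thin wrapper over a borrowed `serde_json::Value`, so a
// resource can be read straight from JSON with the accessors above and then
// amended through `ObservationBuilder`. The field values below are invented
// for illustration, and the `"text"` key inside `code` assumes the usual
// CodeableConcept shape defined elsewhere in this crate.
#[cfg(test)]
mod observation_usage_sketch {
    use super::{Observation, ObservationBuilder};
    use serde_json::json;

    #[test]
    fn read_then_amend() {
        let raw = json!({
            "resourceType": "Observation",
            "status": "final",
            "code": { "text": "Body temperature" },
            "valueString": "37.0 C"
        });

        // Borrow the JSON and read it through the typed accessors.
        let observation = Observation::new(&raw);
        assert_eq!(observation.status(), Some("final"));
        assert_eq!(observation.value_string(), Some("37.0 C"));
        assert!(observation.validate());

        // Clone into a builder, tweak one field, and build an owned copy.
        let mut builder = ObservationBuilder::with(observation);
        let amended = builder.status("amended").build();
        assert_eq!(amended.status(), Some("amended"));
        assert_eq!(amended.value_string(), Some("37.0 C"));
    }
}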
| 35.007429 | 100 | 0.546923 |
b97d5833224aef820dc156ed2b639b180918ca46 | 4,071 | use std::io::{self, Write};
use std::mem::size_of;
use bytemuck::bytes_of;
use crate::internal::align_offset;
use crate::std140::{AsStd140, Std140, WriteStd140};
/**
Type that enables writing correctly aligned `std140` values to a buffer.
`Writer` is useful when many values need to be laid out in a row that cannot be
represented by a struct alone, like dynamically sized arrays or dynamically
laid-out values.
## Example
In this example, we'll write a length-prefixed list of lights to a buffer.
`std140::Writer` helps align correctly, even across multiple structs, which can
be tricky and error-prone otherwise.
```glsl
struct PointLight {
vec3 position;
vec3 color;
float brightness;
};
buffer POINT_LIGHTS {
uint len;
PointLight[] lights;
} point_lights;
```
```
use crevice::std140::{self, AsStd140};
#[derive(AsStd140)]
struct PointLight {
position: mint::Vector3<f32>,
color: mint::Vector3<f32>,
brightness: f32,
}
let lights = vec![
PointLight {
position: [0.0, 1.0, 0.0].into(),
color: [1.0, 0.0, 0.0].into(),
brightness: 0.6,
},
PointLight {
position: [0.0, 4.0, 3.0].into(),
color: [1.0, 1.0, 1.0].into(),
brightness: 1.0,
},
];
# fn map_gpu_buffer_for_write() -> &'static mut [u8] {
# Box::leak(vec![0; 1024].into_boxed_slice())
# }
let target_buffer = map_gpu_buffer_for_write();
let mut writer = std140::Writer::new(target_buffer);
let light_count = lights.len() as u32;
writer.write(&light_count)?;
// Crevice will automatically insert the required padding to align the
// PointLight structure correctly. In this case, there will be 12 bytes of
// padding between the length field and the light list.
writer.write(lights.as_slice())?;
# fn unmap_gpu_buffer() {}
unmap_gpu_buffer();
# Ok::<(), std::io::Error>(())
```
*/
pub struct Writer<W> {
writer: W,
offset: usize,
}
impl<W: Write> Writer<W> {
/// Create a new `Writer`, wrapping a buffer, file, or other type that
/// implements [`std::io::Write`].
pub fn new(writer: W) -> Self {
Self { writer, offset: 0 }
}
/// Write a new value to the underlying buffer, writing zeroed padding where
/// necessary.
///
/// Returns the offset into the buffer that the value was written to.
pub fn write<T>(&mut self, value: &T) -> io::Result<usize>
where
T: WriteStd140 + ?Sized,
{
value.write_std140(self)
}
/// Write an iterator of values to the underlying buffer.
///
/// Returns the offset into the buffer that the first value was written to.
    /// If no values were written, returns the writer's current `len()`.
pub fn write_iter<I, T>(&mut self, iter: I) -> io::Result<usize>
where
I: IntoIterator<Item = T>,
T: WriteStd140,
{
let mut first_offset = None;
for item in iter {
let offset = item.write_std140(self)?;
if first_offset.is_none() {
first_offset = Some(offset);
}
}
Ok(first_offset.unwrap_or(self.offset))
}
/// Write an `Std140` type to the underlying buffer.
pub fn write_std140<T>(&mut self, value: &T) -> io::Result<usize>
where
T: Std140,
{
let padding = align_offset(self.offset, T::ALIGNMENT);
for _ in 0..padding {
self.writer.write_all(&[0])?;
}
self.offset += padding;
let value = value.as_std140();
self.writer.write_all(bytes_of(&value))?;
let write_here = self.offset;
self.offset += size_of::<T>();
Ok(write_here)
}
/// Write a slice of values to the underlying buffer.
#[deprecated(
since = "0.6.0",
note = "Use `write` instead -- it now works on slices."
)]
pub fn write_slice<T>(&mut self, slice: &[T]) -> io::Result<usize>
where
T: AsStd140,
{
self.write(slice)
}
/// Returns the amount of data written by this `Writer`.
pub fn len(&self) -> usize {
self.offset
}
}
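// --- Usage sketch (not part of the original module) ---
// Exercises `write`, `write_iter`, and `len` against a plain `Vec<u8>` standing
// in for a mapped GPU buffer. It assumes the blanket `WriteStd140`
// implementation for primitives that the doc example above also relies on.
#[cfg(test)]
mod writer_usage_sketch {
    use super::Writer;

    #[test]
    fn offsets_are_reported_per_write() -> std::io::Result<()> {
        let mut buffer: Vec<u8> = Vec::new();
        let mut writer = Writer::new(&mut buffer);

        // The first value lands at offset 0.
        assert_eq!(writer.write(&1u32)?, 0);

        // `write_iter` returns the offset of the first element it wrote;
        // a following u32 needs no extra padding, so it starts at offset 4.
        assert_eq!(writer.write_iter(vec![2u32, 3, 4])?, 4);

        // `len` tracks every byte emitted, padding included.
        let written = writer.len();
        assert_eq!(written, buffer.len());
        assert_eq!(written, 16);
        Ok(())
    }
}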
| 25.285714 | 80 | 0.610661 |
bbe80136340c29cee961227ef39fd59bc18fefd3 | 43 | mod a;
mod b;
pub use self::{a::*, b::*};
| 8.6 | 27 | 0.465116 |
cca353a1d5e625f0e6fbfa3b3b0b2938d4da9492 | 3,662 | use crate::{
render_resource::{Texture, TextureView},
renderer::RenderDevice,
};
use bevy_ecs::prelude::ResMut;
use bevy_utils::HashMap;
use wgpu::{TextureDescriptor, TextureViewDescriptor};
/// The internal representation of a [`CachedTexture`] used to track whether it was recently used
/// and is currently taken.
struct CachedTextureMeta {
texture: Texture,
default_view: TextureView,
taken: bool,
frames_since_last_use: usize,
}
/// A cached GPU [`Texture`] with corresponding [`TextureView`].
/// This is useful for textures that are created repeatedly (each frame) in the rendering process
/// to reduce the amount of GPU memory allocations.
pub struct CachedTexture {
pub texture: Texture,
pub default_view: TextureView,
}
/// This resource caches textures that are created repeatedly in the rendering process and
/// are only required for one frame.
#[derive(Default)]
pub struct TextureCache {
textures: HashMap<wgpu::TextureDescriptor<'static>, Vec<CachedTextureMeta>>,
}
impl TextureCache {
/// Retrieves a texture that matches the `descriptor`. If no matching one is found a new
/// [`CachedTexture`] is created.
pub fn get(
&mut self,
render_device: &RenderDevice,
descriptor: TextureDescriptor<'static>,
) -> CachedTexture {
match self.textures.entry(descriptor) {
std::collections::hash_map::Entry::Occupied(mut entry) => {
for texture in entry.get_mut().iter_mut() {
if !texture.taken {
texture.frames_since_last_use = 0;
texture.taken = true;
return CachedTexture {
texture: texture.texture.clone(),
default_view: texture.default_view.clone(),
};
}
}
let texture = render_device.create_texture(&entry.key().clone());
let default_view = texture.create_view(&TextureViewDescriptor::default());
entry.get_mut().push(CachedTextureMeta {
texture: texture.clone(),
default_view: default_view.clone(),
frames_since_last_use: 0,
taken: true,
});
CachedTexture {
texture,
default_view,
}
}
std::collections::hash_map::Entry::Vacant(entry) => {
let texture = render_device.create_texture(entry.key());
let default_view = texture.create_view(&TextureViewDescriptor::default());
entry.insert(vec![CachedTextureMeta {
texture: texture.clone(),
default_view: default_view.clone(),
taken: true,
frames_since_last_use: 0,
}]);
CachedTexture {
texture,
default_view,
}
}
}
}
/// Updates the cache and only retains recently used textures.
pub fn update(&mut self) {
for textures in self.textures.values_mut() {
for texture in textures.iter_mut() {
texture.frames_since_last_use += 1;
texture.taken = false;
}
textures.retain(|texture| texture.frames_since_last_use < 3);
}
}
}
/// Updates the [`TextureCache`] so that it only retains recently used textures.
pub fn update_texture_cache_system(mut texture_cache: ResMut<TextureCache>) {
texture_cache.update();
}
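// --- Usage sketch (not part of the upstream module) ---
// A prepare system or render-graph node would typically fetch its per-frame
// attachment like this. Building the `TextureDescriptor` (size, format, usage)
// is assumed to happen elsewhere; identical descriptors requested in the same
// frame draw from the same pool, and `update_texture_cache_system` evicts
// textures left unused for three frames.
#[allow(dead_code)]
fn fetch_frame_texture(
    texture_cache: &mut TextureCache,
    render_device: &RenderDevice,
    descriptor: TextureDescriptor<'static>,
) -> CachedTexture {
    texture_cache.get(render_device, descriptor)
}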
| 36.257426 | 97 | 0.575369 |
1860e7a228ff0835b6b6f28a997e634ade07b683 | 935 | // Copyright 2020-2021 The Datafuse Authors.
//
// SPDX-License-Identifier: Apache-2.0.
#[tokio::test]
async fn test_drop_database_interpreter() -> anyhow::Result<()> {
use common_planners::*;
use futures::TryStreamExt;
use pretty_assertions::assert_eq;
use crate::interpreters::*;
use crate::sql::*;
let ctx = crate::tests::try_create_context()?;
if let PlanNode::DropDatabase(plan) =
PlanParser::create(ctx.clone()).build_from_sql("drop database default")?
{
let executor = DropDatabaseInterpreter::try_create(ctx, plan.clone())?;
assert_eq!(executor.name(), "DropDatabaseInterpreter");
let stream = executor.execute().await?;
let result = stream.try_collect::<Vec<_>>().await?;
let expected = vec!["++", "++"];
common_datablocks::assert_blocks_sorted_eq(expected, result.as_slice());
} else {
        panic!("expected a DropDatabase plan node")
}
Ok(())
}
| 30.16129 | 80 | 0.640642 |
d948dc552cdbcab874b9b4a2e41be150c1e0e02a | 301 | // @has variant_tuple_struct.json "$.index[*][?(@.name=='EnumTupleStruct')].visibility" \"public\"
// @has - "$.index[*][?(@.name=='EnumTupleStruct')].kind" \"enum\"
pub enum EnumTupleStruct {
// @has - "$.index[*][?(@.name=='VariantA')].inner.variant_kind" \"tuple\"
VariantA(u32, String),
}
| 43 | 98 | 0.607973 |
ab4dd810e049e1dcb725325c67f07fb3f20ec886 | 1,755 | use core::{ops::Deref, ptr::NonNull};
/// Describes a physical mapping created by `AcpiHandler::map_physical_region` and unmapped by
/// `AcpiHandler::unmap_physical_region`. The region mapped must be at least `size_of::<T>()`
/// bytes, but may be bigger.
pub struct PhysicalMapping<T> {
pub physical_start: usize,
pub virtual_start: NonNull<T>,
pub region_length: usize, // Can be equal or larger than size_of::<T>()
pub mapped_length: usize, // Differs from `region_length` if padding is added for alignment
}
impl<T> Deref for PhysicalMapping<T> {
type Target = T;
fn deref(&self) -> &T {
unsafe { self.virtual_start.as_ref() }
}
}
/// An implementation of this trait must be provided to allow `acpi` to access platform-specific
/// functionality, such as mapping regions of physical memory. You are free to implement these
/// however you please, as long as they conform to the documentation of each function.
pub trait AcpiHandler {
/// Given a starting physical address and a size, map a region of physical memory that contains
/// a `T` (but may be bigger than `size_of::<T>()`). The address doesn't have to be
/// page-aligned, so the implementation may have to add padding to either end. The given
/// size must be greater than or equal to the size of a `T`. The virtual address the memory is
/// mapped to does not matter, as long as it is accessible from `acpi`.
fn map_physical_region<T>(&mut self, physical_address: usize, size: usize) -> PhysicalMapping<T>;
/// Unmap the given physical mapping. Safe because we consume the mapping, and so it can't be
/// used after being passed to this function.
fn unmap_physical_region<T>(&mut self, region: PhysicalMapping<T>);
}
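// Illustrative sketch (not part of the original crate): a handler for an environment
// in which all physical memory is already identity-mapped, so "mapping" reduces to a
// pointer cast and unmapping is a no-op. The type name `IdentityMapped` is made up;
// real handlers will usually create and tear down page-table mappings here.
#[allow(dead_code)]
struct IdentityMapped;
impl AcpiHandler for IdentityMapped {
    fn map_physical_region<T>(&mut self, physical_address: usize, size: usize) -> PhysicalMapping<T> {
        PhysicalMapping {
            physical_start: physical_address,
            // Assumes the tables never start at physical address 0.
            virtual_start: NonNull::new(physical_address as *mut T).unwrap(),
            region_length: size,
            mapped_length: size,
        }
    }
    fn unmap_physical_region<T>(&mut self, _region: PhysicalMapping<T>) {
        // Nothing to unmap: the identity mapping is permanent.
    }
}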
| 48.75 | 101 | 0.709972 |
f759ec0d043701a59ce9f51730ea63080c7ad5b8 | 88,947 | //! Tock default Process implementation.
//!
//! `ProcessStandard` is an implementation for a userspace process running on
//! the Tock kernel.
use core::cell::Cell;
use core::cmp;
use core::fmt::Write;
use core::ptr::NonNull;
use core::{mem, ptr, slice, str};
use crate::collections::queue::Queue;
use crate::collections::ring_buffer::RingBuffer;
use crate::config;
use crate::debug;
use crate::errorcode::ErrorCode;
use crate::kernel::Kernel;
use crate::platform::chip::Chip;
use crate::platform::mpu::{self, MPU};
use crate::process::{Error, FunctionCall, FunctionCallSource, Process, State, Task};
use crate::process::{FaultAction, ProcessCustomGrantIdentifer, ProcessId, ProcessStateCell};
use crate::process::{ProcessAddresses, ProcessSizes};
use crate::process_policies::ProcessFaultPolicy;
use crate::process_utilities::ProcessLoadError;
use crate::processbuffer::{ReadOnlyProcessBuffer, ReadWriteProcessBuffer};
use crate::syscall::{self, Syscall, SyscallReturn, UserspaceKernelBoundary};
use crate::upcall::UpcallId;
use crate::utilities::cells::{MapCell, NumericCellExt};
// The completion code for a process if it faulted.
const COMPLETION_FAULT: u32 = 0xffffffff;
/// State for helping with debugging apps.
///
/// These pointers and counters are not strictly required for kernel operation,
/// but provide helpful information when an app crashes.
struct ProcessStandardDebug {
/// If this process was compiled for fixed addresses, save the address
/// it must be at in flash. This is useful for debugging and saves having
/// to re-parse the entire TBF header.
fixed_address_flash: Option<u32>,
/// If this process was compiled for fixed addresses, save the address
/// it must be at in RAM. This is useful for debugging and saves having
/// to re-parse the entire TBF header.
fixed_address_ram: Option<u32>,
/// Where the process has started its heap in RAM.
app_heap_start_pointer: Option<*const u8>,
/// Where the start of the stack is for the process. If the kernel does the
/// PIC setup for this app then we know this, otherwise we need the app to
/// tell us where it put its stack.
app_stack_start_pointer: Option<*const u8>,
/// How low have we ever seen the stack pointer.
app_stack_min_pointer: Option<*const u8>,
/// How many syscalls have occurred since the process started.
syscall_count: usize,
/// What was the most recent syscall.
last_syscall: Option<Syscall>,
/// How many upcalls were dropped because the queue was insufficiently
/// long.
dropped_upcall_count: usize,
/// How many times this process has been paused because it exceeded its
/// timeslice.
timeslice_expiration_count: usize,
}
/// Entry that is stored in the grant pointer table at the top of process
/// memory.
///
/// One copy of this entry struct is stored per grant region defined in the
/// kernel. This type allows the core kernel to lookup a grant based on the
/// driver_num associated with the grant, and also holds the pointer to the
/// memory allocated for the particular grant.
#[repr(C)]
struct GrantPointerEntry {
/// The syscall driver number associated with the allocated grant.
///
/// This defaults to 0 if the grant has not been allocated. Note, however,
/// that 0 is a valid driver_num, and therefore cannot be used to check if a
/// grant is allocated or not.
driver_num: usize,
/// The start of the memory location where the grant has been allocated, or
/// null if the grant has not been allocated.
grant_ptr: *mut u8,
}
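// Illustrative sketch (not part of the kernel): how the core kernel conceptually
// resolves a driver number to a grant using this table. An entry only counts as
// allocated when `grant_ptr` is non-null (a driver_num of 0 alone is not enough, as
// noted above). The function name is hypothetical; the real lookup is
// `lookup_grant_from_driver_num()` further below.
#[allow(dead_code)]
fn example_lookup_grant_index(entries: &[GrantPointerEntry], driver_num: usize) -> Option<usize> {
    entries
        .iter()
        .position(|entry| !entry.grant_ptr.is_null() && entry.driver_num == driver_num)
}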
/// A type for userspace processes in Tock.
pub struct ProcessStandard<'a, C: 'static + Chip> {
/// Identifier of this process and the index of the process in the process
/// table.
process_id: Cell<ProcessId>,
/// Pointer to the main Kernel struct.
kernel: &'static Kernel,
/// Pointer to the struct that defines the actual chip the kernel is running
/// on. This is used because processes have subtle hardware-based
/// differences. Specifically, the actual syscall interface and how
/// processes are switched to is architecture-specific, and how memory must
/// be allocated for memory protection units is also hardware-specific.
chip: &'static C,
/// Application memory layout:
///
/// ```text
/// ╒════════ ← memory_start + memory_len
/// ╔═ │ Grant Pointers
/// ║ │ ──────
/// │ Process Control Block
/// D │ ──────
/// Y │ Grant Regions
/// N │
/// A │ ↓
/// M │ ────── ← kernel_memory_break
/// I │
/// C │ ────── ← app_break ═╗
/// │ ║
/// ║ │ ↑ A
/// ║ │ Heap P C
/// ╠═ │ ────── ← app_heap_start R C
/// │ Data O E
/// F │ ────── ← data_start_pointer C S
/// I │ Stack E S
/// X │ ↓ S I
/// E │ S B
/// D │ ────── ← current_stack_pointer L
/// │ ║ E
/// ╚═ ╘════════ ← memory_start ═╝
/// ```
///
/// The start of process memory. We store this as a pointer and length and
    /// not a slice due to Rust aliasing rules. If we were to store a slice,
    /// then any other slice to the same memory, or any ProcessBuffer used in
    /// the kernel, would be undefined behavior.
memory_start: *const u8,
/// Number of bytes of memory allocated to this process.
memory_len: usize,
/// Reference to the slice of `GrantPointerEntry`s stored in the process's
/// memory reserved for the kernel. These driver numbers are zero and
/// pointers are null if the grant region has not been allocated. When the
/// grant region is allocated these pointers are updated to point to the
/// allocated memory and the driver number is set to match the driver that
/// owns the grant. No other reference to these pointers exists in the Tock
/// kernel.
grant_pointers: MapCell<&'static mut [GrantPointerEntry]>,
/// Pointer to the end of the allocated (and MPU protected) grant region.
kernel_memory_break: Cell<*const u8>,
/// Pointer to the end of process RAM that has been sbrk'd to the process.
app_break: Cell<*const u8>,
/// Pointer to high water mark for process buffers shared through `allow`
allow_high_water_mark: Cell<*const u8>,
/// Process flash segment. This is the region of nonvolatile flash that
/// the process occupies.
flash: &'static [u8],
/// Collection of pointers to the TBF header in flash.
header: tock_tbf::types::TbfHeader,
/// State saved on behalf of the process each time the app switches to the
/// kernel.
stored_state:
MapCell<<<C as Chip>::UserspaceKernelBoundary as UserspaceKernelBoundary>::StoredState>,
/// The current state of the app. The scheduler uses this to determine
/// whether it can schedule this app to execute.
///
/// The `state` is used both for bookkeeping for the scheduler as well as
/// for enabling control by other parts of the system. The scheduler keeps
/// track of if a process is ready to run or not by switching between the
/// `Running` and `Yielded` states. The system can control the process by
/// switching it to a "stopped" state to prevent the scheduler from
/// scheduling it.
state: ProcessStateCell<'static>,
/// How to respond if this process faults.
fault_policy: &'a dyn ProcessFaultPolicy,
/// Configuration data for the MPU
mpu_config: MapCell<<<C as Chip>::MPU as MPU>::MpuConfig>,
/// MPU regions are saved as a pointer-size pair.
mpu_regions: [Cell<Option<mpu::Region>>; 6],
/// Essentially a list of upcalls that want to call functions in the
/// process.
tasks: MapCell<RingBuffer<'a, Task>>,
/// Count of how many times this process has entered the fault condition and
/// been restarted. This is used by some `ProcessRestartPolicy`s to
/// determine if the process should be restarted or not.
restart_count: Cell<usize>,
/// Name of the app.
process_name: &'static str,
/// Values kept so that we can print useful debug messages when apps fault.
debug: MapCell<ProcessStandardDebug>,
}
impl<C: Chip> Process for ProcessStandard<'_, C> {
fn processid(&self) -> ProcessId {
self.process_id.get()
}
fn enqueue_task(&self, task: Task) -> Result<(), ErrorCode> {
// If this app is in a `Fault` state then we shouldn't schedule
// any work for it.
if !self.is_active() {
return Err(ErrorCode::NODEVICE);
}
let ret = self.tasks.map_or(Err(ErrorCode::FAIL), |tasks| {
match tasks.enqueue(task) {
true => {
// The task has been successfully enqueued.
Ok(())
}
false => {
// The task could not be enqueued as there is
// insufficient space in the ring buffer.
Err(ErrorCode::NOMEM)
}
}
});
if ret.is_ok() {
self.kernel.increment_work();
} else {
// On any error we were unable to enqueue the task. Record the
// error, but importantly do _not_ increment kernel work.
self.debug.map(|debug| {
debug.dropped_upcall_count += 1;
});
}
ret
}
fn ready(&self) -> bool {
self.tasks.map_or(false, |ring_buf| ring_buf.has_elements())
|| self.state.get() == State::Running
}
fn remove_pending_upcalls(&self, upcall_id: UpcallId) {
self.tasks.map(|tasks| {
let count_before = tasks.len();
tasks.retain(|task| match task {
// Remove only tasks that are function calls with an id equal
// to `upcall_id`.
Task::FunctionCall(function_call) => match function_call.source {
FunctionCallSource::Kernel => true,
FunctionCallSource::Driver(id) => {
if id != upcall_id {
true
} else {
self.kernel.decrement_work();
false
}
}
},
_ => true,
});
if config::CONFIG.trace_syscalls {
let count_after = tasks.len();
debug!(
"[{:?}] remove_pending_upcalls[{:#x}:{}] = {} upcall(s) removed",
self.processid(),
upcall_id.driver_num,
upcall_id.subscribe_num,
count_before - count_after,
);
}
});
}
fn get_state(&self) -> State {
self.state.get()
}
fn set_yielded_state(&self) {
if self.state.get() == State::Running {
self.state.update(State::Yielded);
}
}
fn stop(&self) {
match self.state.get() {
State::Running => self.state.update(State::StoppedRunning),
State::Yielded => self.state.update(State::StoppedYielded),
_ => {} // Do nothing
}
}
fn resume(&self) {
match self.state.get() {
State::StoppedRunning => self.state.update(State::Running),
State::StoppedYielded => self.state.update(State::Yielded),
_ => {} // Do nothing
}
}
fn set_fault_state(&self) {
// Use the per-process fault policy to determine what action the kernel
// should take since the process faulted.
let action = self.fault_policy.action(self);
match action {
FaultAction::Panic => {
// process faulted. Panic and print status
self.state.update(State::Faulted);
panic!("Process {} had a fault", self.process_name);
}
FaultAction::Restart => {
self.try_restart(COMPLETION_FAULT);
}
FaultAction::Stop => {
// This looks a lot like restart, except we just leave the app
// how it faulted and mark it as `Faulted`. By clearing
// all of the app's todo work it will not be scheduled, and
// clearing all of the grant regions will cause capsules to drop
// this app as well.
self.terminate(COMPLETION_FAULT);
self.state.update(State::Faulted);
}
}
}
fn try_restart(&self, completion_code: u32) {
// Terminate the process, freeing its state and removing any
// pending tasks from the scheduler's queue.
self.terminate(completion_code);
// If there is a kernel policy that controls restarts, it should be
// implemented here. For now, always restart.
let _res = self.restart();
        // Decide what to do with res later. E.g., if we can't restart, we may
        // want to reclaim the process's resources.
}
fn terminate(&self, _completion_code: u32) {
        // Remove the tasks that were scheduled for the app from the kernel's
        // count of outstanding work.
let tasks_len = self.tasks.map_or(0, |tasks| tasks.len());
for _ in 0..tasks_len {
self.kernel.decrement_work();
}
// And remove those tasks
self.tasks.map(|tasks| {
tasks.empty();
});
// Clear any grant regions this app has setup with any capsules.
unsafe {
self.grant_ptrs_reset();
}
// Mark the app as stopped so the scheduler won't try to run it.
self.state.update(State::Terminated);
}
fn get_restart_count(&self) -> usize {
self.restart_count.get()
}
fn has_tasks(&self) -> bool {
self.tasks.map_or(false, |tasks| tasks.has_elements())
}
fn dequeue_task(&self) -> Option<Task> {
self.tasks.map_or(None, |tasks| {
tasks.dequeue().map(|cb| {
self.kernel.decrement_work();
cb
})
})
}
fn pending_tasks(&self) -> usize {
self.tasks.map_or(0, |tasks| tasks.len())
}
fn mem_start(&self) -> *const u8 {
self.memory_start
}
fn mem_end(&self) -> *const u8 {
self.memory_start.wrapping_add(self.memory_len)
}
fn flash_start(&self) -> *const u8 {
self.flash.as_ptr()
}
fn flash_non_protected_start(&self) -> *const u8 {
((self.flash.as_ptr() as usize) + self.header.get_protected_size() as usize) as *const u8
}
fn flash_end(&self) -> *const u8 {
self.flash.as_ptr().wrapping_add(self.flash.len())
}
fn kernel_memory_break(&self) -> *const u8 {
self.kernel_memory_break.get()
}
fn number_writeable_flash_regions(&self) -> usize {
self.header.number_writeable_flash_regions()
}
fn get_writeable_flash_region(&self, region_index: usize) -> (u32, u32) {
self.header.get_writeable_flash_region(region_index)
}
fn update_stack_start_pointer(&self, stack_pointer: *const u8) {
if stack_pointer >= self.mem_start() && stack_pointer < self.mem_end() {
self.debug.map(|debug| {
debug.app_stack_start_pointer = Some(stack_pointer);
// We also reset the minimum stack pointer because whatever
// value we had could be entirely wrong by now.
debug.app_stack_min_pointer = Some(stack_pointer);
});
}
}
fn update_heap_start_pointer(&self, heap_pointer: *const u8) {
if heap_pointer >= self.mem_start() && heap_pointer < self.mem_end() {
self.debug.map(|debug| {
debug.app_heap_start_pointer = Some(heap_pointer);
});
}
}
fn app_memory_break(&self) -> *const u8 {
self.app_break.get()
}
fn setup_mpu(&self) {
self.mpu_config.map(|config| {
self.chip.mpu().configure_mpu(&config, &self.processid());
});
}
fn add_mpu_region(
&self,
unallocated_memory_start: *const u8,
unallocated_memory_size: usize,
min_region_size: usize,
) -> Option<mpu::Region> {
self.mpu_config.and_then(|mut config| {
let new_region = self.chip.mpu().allocate_region(
unallocated_memory_start,
unallocated_memory_size,
min_region_size,
mpu::Permissions::ReadWriteOnly,
&mut config,
);
if new_region.is_none() {
return None;
}
for region in self.mpu_regions.iter() {
if region.get().is_none() {
region.set(new_region);
return new_region;
}
}
// Not enough room in Process struct to store the MPU region.
None
})
}
fn remove_mpu_region(&self, region: mpu::Region) -> Result<(), ErrorCode> {
self.mpu_config.map_or(Err(ErrorCode::INVAL), |mut config| {
// Find the existing mpu region that we are removing; it needs to match exactly.
if let Some(internal_region) = self
.mpu_regions
.iter()
.find(|r| r.get().map_or(false, |r| r == region))
{
self.chip
.mpu()
.remove_memory_region(region, &mut config)
.or(Err(ErrorCode::FAIL))?;
// Remove this region from the tracking cache of mpu_regions
internal_region.set(None);
Ok(())
} else {
Err(ErrorCode::INVAL)
}
})
}
fn sbrk(&self, increment: isize) -> Result<*const u8, Error> {
// Do not modify an inactive process.
if !self.is_active() {
return Err(Error::InactiveApp);
}
let new_break = unsafe { self.app_break.get().offset(increment) };
self.brk(new_break)
}
fn brk(&self, new_break: *const u8) -> Result<*const u8, Error> {
// Do not modify an inactive process.
if !self.is_active() {
return Err(Error::InactiveApp);
}
self.mpu_config
.map_or(Err(Error::KernelError), |mut config| {
if new_break < self.allow_high_water_mark.get() || new_break >= self.mem_end() {
Err(Error::AddressOutOfBounds)
} else if new_break > self.kernel_memory_break.get() {
Err(Error::OutOfMemory)
} else if let Err(_) = self.chip.mpu().update_app_memory_region(
new_break,
self.kernel_memory_break.get(),
mpu::Permissions::ReadWriteOnly,
&mut config,
) {
Err(Error::OutOfMemory)
} else {
let old_break = self.app_break.get();
self.app_break.set(new_break);
self.chip.mpu().configure_mpu(&config, &self.processid());
Ok(old_break)
}
})
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
fn build_readwrite_process_buffer(
&self,
buf_start_addr: *mut u8,
size: usize,
) -> Result<ReadWriteProcessBuffer, ErrorCode> {
if !self.is_active() {
// Do not operate on an inactive process
return Err(ErrorCode::FAIL);
}
        // A process is allowed to pass any pointer if the buffer length is 0,
        // so that it can revoke kernel access to a memory region without
        // granting access to another one.
if size == 0 {
// Clippy complains that we're dereferencing a pointer in a public
// and safe function here. While we are not dereferencing the
// pointer here, we pass it along to an unsafe function, which is as
// dangerous (as it is likely to be dereferenced down the line).
//
// Relevant discussion:
// https://github.com/rust-lang/rust-clippy/issues/3045
//
// It should be fine to ignore the lint here, as a buffer of length
// 0 will never allow dereferencing any memory in a safe manner.
//
// ### Safety
//
            // We specify a zero-length buffer, so the implementation of
// `ReadWriteProcessBuffer` will handle any safety issues.
// Therefore, we can encapsulate the unsafe.
Ok(unsafe { ReadWriteProcessBuffer::new(buf_start_addr, 0, self.processid()) })
} else if self.in_app_owned_memory(buf_start_addr, size) {
// TODO: Check for buffer aliasing here
// Valid buffer, we need to adjust the app's watermark
// note: in_app_owned_memory ensures this offset does not wrap
let buf_end_addr = buf_start_addr.wrapping_add(size);
let new_water_mark = cmp::max(self.allow_high_water_mark.get(), buf_end_addr);
self.allow_high_water_mark.set(new_water_mark);
// Clippy complains that we're dereferencing a pointer in a public
// and safe function here. While we are not dereferencing the
// pointer here, we pass it along to an unsafe function, which is as
// dangerous (as it is likely to be dereferenced down the line).
//
// Relevant discussion:
// https://github.com/rust-lang/rust-clippy/issues/3045
//
// It should be fine to ignore the lint here, as long as we make
// sure that we're pointing towards userspace memory (verified using
// `in_app_owned_memory`) and respect alignment and other
// constraints of the Rust references created by
// ReadWriteProcessBuffer.
//
// ### Safety
//
// We encapsulate the unsafe here on the condition in the TODO
// above, as we must ensure that this `ReadWriteProcessBuffer` will
// be the only reference to this memory.
Ok(unsafe { ReadWriteProcessBuffer::new(buf_start_addr, size, self.processid()) })
} else {
Err(ErrorCode::INVAL)
}
}
#[allow(clippy::not_unsafe_ptr_arg_deref)]
fn build_readonly_process_buffer(
&self,
buf_start_addr: *const u8,
size: usize,
) -> Result<ReadOnlyProcessBuffer, ErrorCode> {
if !self.is_active() {
// Do not operate on an inactive process
return Err(ErrorCode::FAIL);
}
        // A process is allowed to pass any pointer if the buffer length is 0,
        // so that it can revoke kernel access to a memory region without
        // granting access to another one.
if size == 0 {
// Clippy complains that we're dereferencing a pointer in a public
// and safe function here. While we are not dereferencing the
// pointer here, we pass it along to an unsafe function, which is as
// dangerous (as it is likely to be dereferenced down the line).
//
// Relevant discussion:
// https://github.com/rust-lang/rust-clippy/issues/3045
//
// It should be fine to ignore the lint here, as a buffer of length
// 0 will never allow dereferencing any memory in a safe manner.
//
// ### Safety
//
            // We specify a zero-length buffer, so the implementation of
// `ReadOnlyProcessBuffer` will handle any safety issues. Therefore,
// we can encapsulate the unsafe.
Ok(unsafe { ReadOnlyProcessBuffer::new(buf_start_addr, 0, self.processid()) })
} else if self.in_app_owned_memory(buf_start_addr, size)
|| self.in_app_flash_memory(buf_start_addr, size)
{
// TODO: Check for buffer aliasing here
if self.in_app_owned_memory(buf_start_addr, size) {
// Valid buffer, and since this is in read-write memory (i.e.
// not flash), we need to adjust the process's watermark. Note:
// `in_app_owned_memory()` ensures this offset does not wrap.
let buf_end_addr = buf_start_addr.wrapping_add(size);
let new_water_mark = cmp::max(self.allow_high_water_mark.get(), buf_end_addr);
self.allow_high_water_mark.set(new_water_mark);
}
// Clippy complains that we're dereferencing a pointer in a public
// and safe function here. While we are not dereferencing the
// pointer here, we pass it along to an unsafe function, which is as
// dangerous (as it is likely to be dereferenced down the line).
//
// Relevant discussion:
// https://github.com/rust-lang/rust-clippy/issues/3045
//
// It should be fine to ignore the lint here, as long as we make
// sure that we're pointing towards userspace memory (verified using
// `in_app_owned_memory` or `in_app_flash_memory`) and respect
// alignment and other constraints of the Rust references created by
// ReadWriteProcessBuffer.
//
// ### Safety
//
// We encapsulate the unsafe here on the condition in the TODO
// above, as we must ensure that this `ReadOnlyProcessBuffer` will
// be the only reference to this memory.
Ok(unsafe { ReadOnlyProcessBuffer::new(buf_start_addr, size, self.processid()) })
} else {
Err(ErrorCode::INVAL)
}
}
unsafe fn set_byte(&self, addr: *mut u8, value: u8) -> bool {
if self.in_app_owned_memory(addr, 1) {
// We verify that this will only write process-accessible memory,
// but this can still be undefined behavior if something else holds
// a reference to this memory.
*addr = value;
true
} else {
false
}
}
fn grant_is_allocated(&self, grant_num: usize) -> Option<bool> {
// Do not modify an inactive process.
if !self.is_active() {
return None;
}
        // Check the grant pointer table to see whether this grant has been allocated.
self.grant_pointers.map_or(None, |grant_pointers| {
// Implement `grant_pointers[grant_num]` without a chance of a
// panic.
grant_pointers
.get(grant_num)
.map_or(None, |grant_entry| Some(!grant_entry.grant_ptr.is_null()))
})
}
fn allocate_grant(
&self,
grant_num: usize,
driver_num: usize,
size: usize,
align: usize,
) -> Option<NonNull<u8>> {
// Do not modify an inactive process.
if !self.is_active() {
return None;
}
// Verify the grant_num is valid.
if grant_num >= self.kernel.get_grant_count_and_finalize() {
return None;
}
// Verify that the grant is not already allocated. If the pointer is not
// null then the grant is already allocated.
if let Some(is_allocated) = self.grant_is_allocated(grant_num) {
if is_allocated {
return None;
}
}
// Verify that there is not already a grant allocated with the same
// driver_num.
let exists = self.grant_pointers.map_or(false, |grant_pointers| {
// Check our list of grant pointers if the driver number is used.
grant_pointers.iter().any(|grant_entry| {
// Check if the grant is both allocated (its grant pointer is
// non null) and the driver number matches.
(!grant_entry.grant_ptr.is_null()) && grant_entry.driver_num == driver_num
})
});
// If we find a match, then the driver_num must already be used and the
// grant allocation fails.
if exists {
return None;
}
// Use the shared grant allocator function to actually allocate memory.
// Returns `None` if the allocation cannot be created.
if let Some(grant_ptr) = self.allocate_in_grant_region_internal(size, align) {
// Update the grant pointer to the address of the new allocation.
self.grant_pointers.map_or(None, |grant_pointers| {
// Implement `grant_pointers[grant_num] = grant_ptr` without a
// chance of a panic.
grant_pointers
.get_mut(grant_num)
.map_or(None, |grant_entry| {
// Actually set the driver num and grant pointer.
grant_entry.driver_num = driver_num;
grant_entry.grant_ptr = grant_ptr.as_ptr() as *mut u8;
// If all of this worked, return the allocated pointer.
Some(grant_ptr)
})
})
} else {
// Could not allocate the memory for the grant region.
None
}
}
fn allocate_custom_grant(
&self,
size: usize,
align: usize,
) -> Option<(ProcessCustomGrantIdentifer, NonNull<u8>)> {
// Do not modify an inactive process.
if !self.is_active() {
return None;
}
// Use the shared grant allocator function to actually allocate memory.
// Returns `None` if the allocation cannot be created.
if let Some(ptr) = self.allocate_in_grant_region_internal(size, align) {
// Create the identifier that the caller will use to get access to
// this custom grant in the future.
let identifier = self.create_custom_grant_identifier(ptr);
Some((identifier, ptr))
} else {
// Could not allocate memory for the custom grant.
None
}
}
fn enter_grant(&self, grant_num: usize) -> Result<*mut u8, Error> {
// Do not try to access the grant region of inactive process.
if !self.is_active() {
return Err(Error::InactiveApp);
}
// Retrieve the grant pointer from the `grant_pointers` slice. We use
// `[slice].get()` so that if the grant number is invalid this will
// return `Err` and not panic.
self.grant_pointers
.map_or(Err(Error::KernelError), |grant_pointers| {
// Implement `grant_pointers[grant_num]` without a chance of a
// panic.
match grant_pointers.get_mut(grant_num) {
Some(grant_entry) => {
// Get a copy of the actual grant pointer.
let grant_ptr = grant_entry.grant_ptr;
// Check if the grant pointer is marked that the grant
// has already been entered. If so, return an error.
if (grant_ptr as usize) & 0x1 == 0x1 {
// Lowest bit is one, meaning this grant has been
// entered.
Err(Error::AlreadyInUse)
} else {
// Now, to mark that the grant has been entered, we
// set the lowest bit to one and save this as the
// grant pointer.
grant_entry.grant_ptr = (grant_ptr as usize | 0x1) as *mut u8;
// And we return the grant pointer to the entered
// grant.
Ok(grant_ptr)
}
}
None => Err(Error::AddressOutOfBounds),
}
})
}
fn enter_custom_grant(
&self,
identifier: ProcessCustomGrantIdentifer,
) -> Result<*mut u8, Error> {
// Do not try to access the grant region of inactive process.
if !self.is_active() {
return Err(Error::InactiveApp);
}
// Get the address of the custom grant based on the identifier.
let custom_grant_address = self.get_custom_grant_address(identifier);
// We never deallocate custom grants and only we can change the
// `identifier` so we know this is a valid address for the custom grant.
Ok(custom_grant_address as *mut u8)
}
fn leave_grant(&self, grant_num: usize) {
// Do not modify an inactive process.
if !self.is_active() {
return;
}
self.grant_pointers.map(|grant_pointers| {
// Implement `grant_pointers[grant_num]` without a chance of a
// panic.
match grant_pointers.get_mut(grant_num) {
Some(grant_entry) => {
// Get a copy of the actual grant pointer.
let grant_ptr = grant_entry.grant_ptr;
// Now, to mark that the grant has been released, we set the
// lowest bit back to zero and save this as the grant
// pointer.
grant_entry.grant_ptr = (grant_ptr as usize & !0x1) as *mut u8;
}
None => {}
}
});
}
fn grant_allocated_count(&self) -> Option<usize> {
// Do not modify an inactive process.
if !self.is_active() {
return None;
}
self.grant_pointers.map(|grant_pointers| {
// Filter our list of grant pointers into just the non null ones,
// and count those. A grant is allocated if its grant pointer is non
// null.
grant_pointers
.iter()
.filter(|grant_entry| !grant_entry.grant_ptr.is_null())
.count()
})
}
fn lookup_grant_from_driver_num(&self, driver_num: usize) -> Result<usize, Error> {
self.grant_pointers
.map_or(Err(Error::KernelError), |grant_pointers| {
                // Search the list of grant pointers for the first allocated
                // grant (non-null grant pointer) whose driver_num matches.
match grant_pointers.iter().position(|grant_entry| {
// Only consider allocated grants.
(!grant_entry.grant_ptr.is_null()) && grant_entry.driver_num == driver_num
}) {
Some(idx) => Ok(idx),
None => Err(Error::OutOfMemory),
}
})
}
fn is_valid_upcall_function_pointer(&self, upcall_fn: NonNull<()>) -> bool {
let ptr = upcall_fn.as_ptr() as *const u8;
let size = mem::size_of::<*const u8>();
// It is ok if this function is in memory or flash.
self.in_app_flash_memory(ptr, size) || self.in_app_owned_memory(ptr, size)
}
fn get_process_name(&self) -> &'static str {
self.process_name
}
fn set_syscall_return_value(&self, return_value: SyscallReturn) {
match self.stored_state.map(|stored_state| unsafe {
// Actually set the return value for a particular process.
//
// The UKB implementation uses the bounds of process-accessible
// memory to verify that any memory changes are valid. Here, the
// unsafe promise we are making is that the bounds passed to the UKB
// are correct.
self.chip
.userspace_kernel_boundary()
.set_syscall_return_value(
self.mem_start(),
self.app_break.get(),
stored_state,
return_value,
)
}) {
Some(Ok(())) => {
// If we get an `Ok` we are all set.
}
Some(Err(())) => {
// If we get an `Err`, then the UKB implementation could not set
// the return value, likely because the process's stack is no
// longer accessible to it. All we can do is fault.
self.set_fault_state();
}
None => {
// We should never be here since `stored_state` should always be
// occupied.
self.set_fault_state();
}
}
}
fn set_process_function(&self, callback: FunctionCall) {
// See if we can actually enqueue this function for this process.
// Architecture-specific code handles actually doing this since the
// exact method is both architecture- and implementation-specific.
//
// This can fail, for example if the process does not have enough memory
// remaining.
match self.stored_state.map(|stored_state| {
// Let the UKB implementation handle setting the process's PC so
// that the process executes the upcall function. We encapsulate
// unsafe here because we are guaranteeing that the memory bounds
// passed to `set_process_function` are correct.
unsafe {
self.chip.userspace_kernel_boundary().set_process_function(
self.mem_start(),
self.app_break.get(),
stored_state,
callback,
)
}
}) {
Some(Ok(())) => {
// If we got an `Ok` we are all set and should mark that this
// process is ready to be scheduled.
// Move this process to the "running" state so the scheduler
// will schedule it.
self.state.update(State::Running);
}
Some(Err(())) => {
// If we got an Error, then there was likely not enough room on
// the stack to allow the process to execute this function given
// the details of the particular architecture this is running
// on. This process has essentially faulted, so we mark it as
// such.
self.set_fault_state();
}
None => {
// We should never be here since `stored_state` should always be
// occupied.
self.set_fault_state();
}
}
}
fn switch_to(&self) -> Option<syscall::ContextSwitchReason> {
// Cannot switch to an invalid process
if !self.is_active() {
return None;
}
let (switch_reason, stack_pointer) =
self.stored_state.map_or((None, None), |stored_state| {
// Switch to the process. We guarantee that the memory pointers
// we pass are valid, ensuring this context switch is safe.
// Therefore we encapsulate the `unsafe`.
unsafe {
let (switch_reason, optional_stack_pointer) = self
.chip
.userspace_kernel_boundary()
.switch_to_process(self.mem_start(), self.app_break.get(), stored_state);
(Some(switch_reason), optional_stack_pointer)
}
});
// If the UKB implementation passed us a stack pointer, update our
// debugging state. This is completely optional.
stack_pointer.map(|sp| {
self.debug.map(|debug| {
match debug.app_stack_min_pointer {
None => debug.app_stack_min_pointer = Some(sp),
Some(asmp) => {
// Update max stack depth if needed.
if sp < asmp {
debug.app_stack_min_pointer = Some(sp);
}
}
}
});
});
switch_reason
}
fn debug_syscall_count(&self) -> usize {
self.debug.map_or(0, |debug| debug.syscall_count)
}
fn debug_dropped_upcall_count(&self) -> usize {
self.debug.map_or(0, |debug| debug.dropped_upcall_count)
}
fn debug_timeslice_expiration_count(&self) -> usize {
self.debug
.map_or(0, |debug| debug.timeslice_expiration_count)
}
fn debug_timeslice_expired(&self) {
self.debug
.map(|debug| debug.timeslice_expiration_count += 1);
}
fn debug_syscall_called(&self, last_syscall: Syscall) {
self.debug.map(|debug| {
debug.syscall_count += 1;
debug.last_syscall = Some(last_syscall);
});
}
fn debug_heap_start(&self) -> Option<*const u8> {
self.debug
.map_or(None, |debug| debug.app_heap_start_pointer.map(|p| p))
}
fn debug_stack_start(&self) -> Option<*const u8> {
self.debug
.map_or(None, |debug| debug.app_stack_start_pointer.map(|p| p))
}
fn debug_stack_end(&self) -> Option<*const u8> {
self.debug
.map_or(None, |debug| debug.app_stack_min_pointer.map(|p| p))
}
fn get_addresses(&self) -> ProcessAddresses {
ProcessAddresses {
flash_start: self.flash_start() as usize,
flash_non_protected_start: self.flash_non_protected_start() as usize,
flash_end: self.flash_end() as usize,
sram_start: self.mem_start() as usize,
sram_app_brk: self.app_memory_break() as usize,
sram_grant_start: self.kernel_memory_break() as usize,
sram_end: self.mem_end() as usize,
sram_heap_start: self.debug.map_or(None, |debug| {
debug.app_heap_start_pointer.map(|p| p as usize)
}),
sram_stack_top: self.debug.map_or(None, |debug| {
debug.app_stack_start_pointer.map(|p| p as usize)
}),
sram_stack_bottom: self.debug.map_or(None, |debug| {
debug.app_stack_min_pointer.map(|p| p as usize)
}),
}
}
fn get_sizes(&self) -> ProcessSizes {
ProcessSizes {
grant_pointers: mem::size_of::<GrantPointerEntry>()
* self.kernel.get_grant_count_and_finalize(),
upcall_list: Self::CALLBACKS_OFFSET,
process_control_block: Self::PROCESS_STRUCT_OFFSET,
}
}
fn print_memory_map(&self, writer: &mut dyn Write) {
if !config::CONFIG.debug_panics {
return;
}
// Flash
let flash_end = self.flash.as_ptr().wrapping_add(self.flash.len()) as usize;
let flash_start = self.flash.as_ptr() as usize;
let flash_protected_size = self.header.get_protected_size() as usize;
let flash_app_start = flash_start + flash_protected_size;
let flash_app_size = flash_end - flash_app_start;
// Grant pointers size.
let grant_ptr_size = mem::size_of::<GrantPointerEntry>();
let grant_ptrs_num = self.kernel.get_grant_count_and_finalize();
let sram_grant_pointers_size = grant_ptrs_num * grant_ptr_size;
// SRAM addresses
let sram_end = self.mem_end() as usize;
let sram_grant_pointers_start = sram_end - sram_grant_pointers_size;
let sram_upcall_list_start = sram_grant_pointers_start - Self::CALLBACKS_OFFSET;
let process_struct_memory_location = sram_upcall_list_start - Self::PROCESS_STRUCT_OFFSET;
let sram_grant_start = self.kernel_memory_break.get() as usize;
let sram_heap_end = self.app_break.get() as usize;
let sram_heap_start: Option<usize> = self.debug.map_or(None, |debug| {
debug.app_heap_start_pointer.map(|p| p as usize)
});
let sram_stack_start: Option<usize> = self.debug.map_or(None, |debug| {
debug.app_stack_start_pointer.map(|p| p as usize)
});
let sram_stack_bottom: Option<usize> = self.debug.map_or(None, |debug| {
debug.app_stack_min_pointer.map(|p| p as usize)
});
let sram_start = self.mem_start() as usize;
// SRAM sizes
let sram_upcall_list_size = Self::CALLBACKS_OFFSET;
let sram_process_struct_size = Self::PROCESS_STRUCT_OFFSET;
let sram_grant_size = process_struct_memory_location - sram_grant_start;
let sram_grant_allocated = process_struct_memory_location - sram_grant_start;
// application statistics
let events_queued = self.pending_tasks();
let syscall_count = self.debug.map_or(0, |debug| debug.syscall_count);
let last_syscall = self.debug.map(|debug| debug.last_syscall);
let dropped_upcall_count = self.debug.map_or(0, |debug| debug.dropped_upcall_count);
let restart_count = self.restart_count.get();
let _ = writer.write_fmt(format_args!(
"\
𝐀𝐩𝐩: {} - [{:?}]\
\r\n Events Queued: {} Syscall Count: {} Dropped Upcall Count: {}\
\r\n Restart Count: {}\r\n",
self.process_name,
self.state.get(),
events_queued,
syscall_count,
dropped_upcall_count,
restart_count,
));
let _ = match last_syscall {
Some(syscall) => writer.write_fmt(format_args!(" Last Syscall: {:?}\r\n", syscall)),
None => writer.write_str(" Last Syscall: None\r\n"),
};
let _ = writer.write_fmt(format_args!(
"\
\r\n\
\r\n ╔═══════════╤══════════════════════════════════════════╗\
\r\n ║ Address │ Region Name Used | Allocated (bytes) ║\
\r\n ╚{:#010X}═╪══════════════════════════════════════════╝\
\r\n │ Grant Ptrs {:6}\
\r\n │ Upcalls {:6}\
\r\n │ Process {:6}\
\r\n {:#010X} ┼───────────────────────────────────────────\
\r\n │ ▼ Grant {:6} | {:6}{}\
\r\n {:#010X} ┼───────────────────────────────────────────\
\r\n │ Unused\
\r\n {:#010X} ┼───────────────────────────────────────────",
sram_end,
sram_grant_pointers_size,
sram_upcall_list_size,
sram_process_struct_size,
process_struct_memory_location,
sram_grant_size,
sram_grant_allocated,
exceeded_check(sram_grant_size, sram_grant_allocated),
sram_grant_start,
sram_heap_end,
));
match sram_heap_start {
Some(sram_heap_start) => {
let sram_heap_size = sram_heap_end - sram_heap_start;
let sram_heap_allocated = sram_grant_start - sram_heap_start;
let _ = writer.write_fmt(format_args!(
"\
\r\n │ ▲ Heap {:6} | {:6}{} S\
\r\n {:#010X} ┼─────────────────────────────────────────── R",
sram_heap_size,
sram_heap_allocated,
exceeded_check(sram_heap_size, sram_heap_allocated),
sram_heap_start,
));
}
None => {
let _ = writer.write_str(
"\
\r\n │ ▲ Heap ? | ? S\
\r\n ?????????? ┼─────────────────────────────────────────── R",
);
}
}
match (sram_heap_start, sram_stack_start) {
(Some(sram_heap_start), Some(sram_stack_start)) => {
let sram_data_size = sram_heap_start - sram_stack_start;
let sram_data_allocated = sram_data_size as usize;
let _ = writer.write_fmt(format_args!(
"\
\r\n │ Data {:6} | {:6} A",
sram_data_size, sram_data_allocated,
));
}
_ => {
let _ = writer.write_str(
"\
\r\n │ Data ? | ? A",
);
}
}
match (sram_stack_start, sram_stack_bottom) {
(Some(sram_stack_start), Some(sram_stack_bottom)) => {
let sram_stack_size = sram_stack_start - sram_stack_bottom;
let sram_stack_allocated = sram_stack_start - sram_start;
let _ = writer.write_fmt(format_args!(
"\
\r\n {:#010X} ┼─────────────────────────────────────────── M\
\r\n │ ▼ Stack {:6} | {:6}{}",
sram_stack_start,
sram_stack_size,
sram_stack_allocated,
exceeded_check(sram_stack_size, sram_stack_allocated),
));
}
_ => {
let _ = writer.write_str(
"\
\r\n ?????????? ┼─────────────────────────────────────────── M\
\r\n │ ▼ Stack ? | ?",
);
}
}
let _ = writer.write_fmt(format_args!(
"\
\r\n {:#010X} ┼───────────────────────────────────────────\
\r\n │ Unused\
\r\n {:#010X} ┴───────────────────────────────────────────\
\r\n .....\
\r\n {:#010X} ┬─────────────────────────────────────────── F\
\r\n │ App Flash {:6} L\
\r\n {:#010X} ┼─────────────────────────────────────────── A\
\r\n │ Protected {:6} S\
\r\n {:#010X} ┴─────────────────────────────────────────── H\
\r\n",
sram_stack_bottom.unwrap_or(0),
sram_start,
flash_end,
flash_app_size,
flash_app_start,
flash_protected_size,
flash_start
));
}
fn print_full_process(&self, writer: &mut dyn Write) {
if !config::CONFIG.debug_panics {
return;
}
self.print_memory_map(writer);
self.stored_state.map(|stored_state| {
// We guarantee the memory bounds pointers provided to the UKB are
// correct.
unsafe {
self.chip.userspace_kernel_boundary().print_context(
self.mem_start(),
self.app_break.get(),
stored_state,
writer,
);
}
});
// Display grant information.
let number_grants = self.kernel.get_grant_count_and_finalize();
let _ = writer.write_fmt(format_args!(
"\
\r\n Total number of grant regions defined: {}\r\n",
self.kernel.get_grant_count_and_finalize()
));
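        // Grant addresses are printed three per line, filled column-by-column,
        // so compute the number of lines needed (rounding up).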
let rows = (number_grants + 2) / 3;
// Access our array of grant pointers.
self.grant_pointers.map(|grant_pointers| {
// Iterate each grant and show its address.
for i in 0..rows {
for j in 0..3 {
let index = i + (rows * j);
if index >= number_grants {
break;
}
// Implement `grant_pointers[grant_num]` without a chance of
// a panic.
grant_pointers.get(index).map(|grant_entry| {
if grant_entry.grant_ptr.is_null() {
let _ =
writer.write_fmt(format_args!(" Grant {:>2} : -- ", index));
} else {
let _ = writer.write_fmt(format_args!(
" Grant {:>2} {:#x}: {:p}",
index, grant_entry.driver_num, grant_entry.grant_ptr
));
}
});
}
let _ = writer.write_fmt(format_args!("\r\n"));
}
});
// Display the current state of the MPU for this process.
self.mpu_config.map(|config| {
let _ = writer.write_fmt(format_args!("{}", config));
});
// Print a helpful message on how to re-compile a process to view the
// listing file. If a process is PIC, then we also need to print the
// actual addresses the process executed at so that the .lst file can be
// generated for those addresses. If the process was already compiled
// for a fixed address, then just generating a .lst file is fine.
self.debug.map(|debug| {
if debug.fixed_address_flash.is_some() {
// Fixed addresses, can just run `make lst`.
let _ = writer.write_fmt(format_args!(
"\
\r\nTo debug, run `make lst` in the app's folder\
\r\nand open the arch.{:#x}.{:#x}.lst file.\r\n\r\n",
debug.fixed_address_flash.unwrap_or(0),
debug.fixed_address_ram.unwrap_or(0)
));
} else {
// PIC, need to specify the addresses.
let sram_start = self.mem_start() as usize;
let flash_start = self.flash.as_ptr() as usize;
let flash_init_fn = flash_start + self.header.get_init_function_offset() as usize;
let _ = writer.write_fmt(format_args!(
"\
\r\nTo debug, run `make debug RAM_START={:#x} FLASH_INIT={:#x}`\
\r\nin the app's folder and open the .lst file.\r\n\r\n",
sram_start, flash_init_fn
));
}
});
}
}
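// Illustrative sketch (not part of the kernel): `enter_grant()` and `leave_grant()`
// above track whether a grant is currently entered by tagging the least-significant
// bit of the grant pointer, which is always zero for word-aligned grant allocations.
// The function below is hypothetical and only demonstrates the bit arithmetic.
#[allow(dead_code)]
fn example_grant_entered_tagging(grant_addr: usize) -> (usize, bool, usize) {
    let entered = grant_addr | 0x1; // stored by enter_grant() while the grant is entered
    let already_in_use = (entered & 0x1) == 0x1; // the AlreadyInUse check in enter_grant()
    let restored = entered & !0x1; // stored by leave_grant(), recovering the original address
    (entered, already_in_use, restored)
}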
// Only used if debug_panics == true
#[allow(unused)]
fn exceeded_check(size: usize, allocated: usize) -> &'static str {
if size > allocated {
" EXCEEDED!"
} else {
" "
}
}
impl<C: 'static + Chip> ProcessStandard<'_, C> {
// Memory offset for upcall ring buffer (10 element length).
const CALLBACK_LEN: usize = 10;
const CALLBACKS_OFFSET: usize = mem::size_of::<Task>() * Self::CALLBACK_LEN;
// Memory offset to make room for this process's metadata.
const PROCESS_STRUCT_OFFSET: usize = mem::size_of::<ProcessStandard<C>>();
pub(crate) unsafe fn create<'a>(
kernel: &'static Kernel,
chip: &'static C,
app_flash: &'static [u8],
header_length: usize,
app_version: u16,
remaining_memory: &'a mut [u8],
fault_policy: &'static dyn ProcessFaultPolicy,
require_kernel_version: bool,
index: usize,
) -> Result<(Option<&'static dyn Process>, &'a mut [u8]), ProcessLoadError> {
// Get a slice for just the app header.
let header_flash = app_flash
.get(0..header_length as usize)
.ok_or(ProcessLoadError::NotEnoughFlash)?;
// Parse the full TBF header to see if this is a valid app. If the
// header can't parse, we will error right here.
let tbf_header = tock_tbf::parse::parse_tbf_header(header_flash, app_version)?;
// First thing: check that the process is at the correct location in
// flash if the TBF header specified a fixed address. If there is a
// mismatch we catch that early.
if let Some(fixed_flash_start) = tbf_header.get_fixed_address_flash() {
// The flash address in the header is based on the app binary,
// so we need to take into account the header length.
let actual_address = app_flash.as_ptr() as u32 + tbf_header.get_protected_size();
let expected_address = fixed_flash_start;
if actual_address != expected_address {
return Err(ProcessLoadError::IncorrectFlashAddress {
actual_address,
expected_address,
});
}
}
let process_name = tbf_header.get_package_name();
// If this isn't an app (i.e. it is padding) or it is an app but it
// isn't enabled, then we can skip it and do not create a `Process`
// object.
if !tbf_header.is_app() || !tbf_header.enabled() {
if config::CONFIG.debug_load_processes {
if !tbf_header.is_app() {
debug!(
"Padding in flash={:#010X}-{:#010X}",
app_flash.as_ptr() as usize,
app_flash.as_ptr() as usize + app_flash.len() - 1
);
}
if !tbf_header.enabled() {
debug!(
"Process not enabled flash={:#010X}-{:#010X} process={:?}",
app_flash.as_ptr() as usize,
app_flash.as_ptr() as usize + app_flash.len() - 1,
process_name.unwrap_or("(no name)")
);
}
}
// Return no process and the full memory slice we were given.
return Ok((None, remaining_memory));
}
if let Some((major, minor)) = tbf_header.get_kernel_version() {
// If the `KernelVersion` header is present, we read the requested
// kernel version and compare it to the running kernel version.
if crate::MAJOR != major || crate::MINOR < minor {
// If the kernel major version is different, we prevent the
// process from being loaded.
//
// If the kernel major version is the same, we compare the
// kernel minor version. The current running kernel minor
// version has to be greater or equal to the one that the
// process has requested. If not, we prevent the process from
// loading.
if config::CONFIG.debug_load_processes {
debug!("WARN process {:?} not loaded as it requires kernel version >= {}.{} and < {}.0, (running kernel {}.{})", process_name.unwrap_or("(no name)"), major, minor, (major+1), crate::MAJOR, crate::MINOR);
}
return Err(ProcessLoadError::IncompatibleKernelVersion {
version: Some((major, minor)),
});
}
} else {
if require_kernel_version {
// If enforcing the kernel version is requested, and the
// `KernelVersion` header is not present, we prevent the process
// from loading.
if config::CONFIG.debug_load_processes {
debug!("WARN process {:?} not loaded as it has no kernel version header, please upgrade to elf2tab >= 0.8.0",
                        process_name.unwrap_or("(no name)"));
}
return Err(ProcessLoadError::IncompatibleKernelVersion { version: None });
}
}
// Otherwise, actually load the app.
let process_ram_requested_size = tbf_header.get_minimum_app_ram_size() as usize;
let init_fn = app_flash
.as_ptr()
.offset(tbf_header.get_init_function_offset() as isize) as usize;
// Initialize MPU region configuration.
let mut mpu_config: <<C as Chip>::MPU as MPU>::MpuConfig = Default::default();
// Allocate MPU region for flash.
if chip
.mpu()
.allocate_region(
app_flash.as_ptr(),
app_flash.len(),
app_flash.len(),
mpu::Permissions::ReadExecuteOnly,
&mut mpu_config,
)
.is_none()
{
if config::CONFIG.debug_load_processes {
debug!(
"[!] flash={:#010X}-{:#010X} process={:?} - couldn't allocate MPU region for flash",
app_flash.as_ptr() as usize,
app_flash.as_ptr() as usize + app_flash.len() - 1,
process_name
);
}
return Err(ProcessLoadError::MpuInvalidFlashLength);
}
// Determine how much space we need in the application's memory space
// just for kernel and grant state. We need to make sure we allocate
// enough memory just for that.
// Make room for grant pointers.
let grant_ptr_size = mem::size_of::<GrantPointerEntry>();
let grant_ptrs_num = kernel.get_grant_count_and_finalize();
let grant_ptrs_offset = grant_ptrs_num * grant_ptr_size;
// Initial size of the kernel-owned part of process memory can be
// calculated directly based on the initial size of all kernel-owned
// data structures.
let initial_kernel_memory_size =
grant_ptrs_offset + Self::CALLBACKS_OFFSET + Self::PROCESS_STRUCT_OFFSET;
// By default we start with the initial size of process-accessible
// memory set to 0. This maximizes the flexibility that processes have
// to allocate their memory as they see fit. If a process needs more
// accessible memory it must use the `brk` memop syscalls to request
// more memory.
//
// We must take into account any process-accessible memory required by
// the context switching implementation and allocate at least that much
// memory so that we can successfully switch to the process. This is
// architecture and implementation specific, so we query that now.
let min_process_memory_size = chip
.userspace_kernel_boundary()
.initial_process_app_brk_size();
// We have to ensure that we at least ask the MPU for
// `min_process_memory_size` so that we can be sure that `app_brk` is
// not set inside the kernel-owned memory region. Now, in practice,
// processes should not request 0 (or very few) bytes of memory in their
// TBF header (i.e. `process_ram_requested_size` will almost always be
// much larger than `min_process_memory_size`), as they are unlikely to
// work with essentially no available memory. But, we still must protect
// for that case.
let min_process_ram_size = cmp::max(process_ram_requested_size, min_process_memory_size);
// Minimum memory size for the process.
let min_total_memory_size = min_process_ram_size + initial_kernel_memory_size;
// Check if this process requires a fixed memory start address. If so,
// try to adjust the memory region to work for this process.
//
// Right now, we only support skipping some RAM and leaving a chunk
// unused so that the memory region starts where the process needs it
// to.
let remaining_memory = if let Some(fixed_memory_start) = tbf_header.get_fixed_address_ram()
{
// The process does have a fixed address.
if fixed_memory_start == remaining_memory.as_ptr() as u32 {
// Address already matches.
remaining_memory
} else if fixed_memory_start > remaining_memory.as_ptr() as u32 {
// Process wants a memory address farther in memory. Try to
// advance the memory region to make the address match.
let diff = (fixed_memory_start - remaining_memory.as_ptr() as u32) as usize;
if diff > remaining_memory.len() {
// We ran out of memory.
let actual_address =
remaining_memory.as_ptr() as u32 + remaining_memory.len() as u32 - 1;
let expected_address = fixed_memory_start;
return Err(ProcessLoadError::MemoryAddressMismatch {
actual_address,
expected_address,
});
} else {
// Change the memory range to start where the process
// requested it.
remaining_memory
.get_mut(diff..)
.ok_or(ProcessLoadError::InternalError)?
}
} else {
// Address is earlier in memory, nothing we can do.
let actual_address = remaining_memory.as_ptr() as u32;
let expected_address = fixed_memory_start;
return Err(ProcessLoadError::MemoryAddressMismatch {
actual_address,
expected_address,
});
}
} else {
remaining_memory
};
// Determine where process memory will go and allocate MPU region for
// app-owned memory.
let (app_memory_start, app_memory_size) = match chip.mpu().allocate_app_memory_region(
remaining_memory.as_ptr() as *const u8,
remaining_memory.len(),
min_total_memory_size,
min_process_memory_size,
initial_kernel_memory_size,
mpu::Permissions::ReadWriteOnly,
&mut mpu_config,
) {
Some((memory_start, memory_size)) => (memory_start, memory_size),
None => {
// Failed to load process. Insufficient memory.
if config::CONFIG.debug_load_processes {
debug!(
"[!] flash={:#010X}-{:#010X} process={:?} - couldn't allocate memory region of size >= {:#X}",
app_flash.as_ptr() as usize,
app_flash.as_ptr() as usize + app_flash.len() - 1,
process_name,
min_total_memory_size
);
}
return Err(ProcessLoadError::NotEnoughMemory);
}
};
// Get a slice for the memory dedicated to the process. This can fail if
// the MPU returns a region of memory that is not inside of the
// `remaining_memory` slice passed to `create()` to allocate the
// process's memory out of.
let memory_start_offset = app_memory_start as usize - remaining_memory.as_ptr() as usize;
// First split the remaining memory into a slice that contains the
// process memory and a slice that will not be used by this process.
let (app_memory_oversize, unused_memory) =
remaining_memory.split_at_mut(memory_start_offset + app_memory_size);
// Then since the process's memory need not start at the beginning of
// the remaining slice given to create(), get a smaller slice as needed.
let app_memory = app_memory_oversize
.get_mut(memory_start_offset..)
.ok_or(ProcessLoadError::InternalError)?;
// Check if the memory region is valid for the process. If a process
// included a fixed address for the start of RAM in its TBF header (this
// field is optional, processes that are position independent do not
// need a fixed address) then we check that we used the same address
// when we allocated it in RAM.
if let Some(fixed_memory_start) = tbf_header.get_fixed_address_ram() {
let actual_address = app_memory.as_ptr() as u32;
let expected_address = fixed_memory_start;
if actual_address != expected_address {
return Err(ProcessLoadError::MemoryAddressMismatch {
actual_address,
expected_address,
});
}
}
// Set the initial process-accessible memory to the amount specified by
// the context switch implementation.
let initial_app_brk = app_memory.as_ptr().add(min_process_memory_size);
// Set the initial allow high water mark to the start of process memory
// since no `allow` calls have been made yet.
let initial_allow_high_water_mark = app_memory.as_ptr();
// Set up initial grant region.
let mut kernel_memory_break = app_memory.as_mut_ptr().add(app_memory.len());
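        // The kernel-owned data structures are carved off the top of the process's
        // memory block, moving this break pointer downward in three steps: grant
        // pointers first, then the upcall ring buffer, then the process struct
        // itself (matching the memory map diagram above).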
// Now that we know we have the space we can setup the grant
// pointers.
kernel_memory_break = kernel_memory_break.offset(-(grant_ptrs_offset as isize));
// This is safe today, as MPU constraints ensure that `memory_start`
// will always be aligned on at least a word boundary, and that
// memory_size will be aligned on at least a word boundary, and
// `grant_ptrs_offset` is a multiple of the word size. Thus,
// `kernel_memory_break` must be word aligned. While this is unlikely to
// change, it should be more proactively enforced.
//
// TODO: https://github.com/tock/tock/issues/1739
#[allow(clippy::cast_ptr_alignment)]
// Set all grant pointers to null.
let grant_pointers = slice::from_raw_parts_mut(
kernel_memory_break as *mut GrantPointerEntry,
grant_ptrs_num,
);
for grant_entry in grant_pointers.iter_mut() {
grant_entry.driver_num = 0;
grant_entry.grant_ptr = ptr::null_mut();
}
// Now that we know we have the space we can setup the memory for the
// upcalls.
kernel_memory_break = kernel_memory_break.offset(-(Self::CALLBACKS_OFFSET as isize));
// This is safe today, as MPU constraints ensure that `memory_start`
// will always be aligned on at least a word boundary, and that
// memory_size will be aligned on at least a word boundary, and
// `grant_ptrs_offset` is a multiple of the word size. Thus,
// `kernel_memory_break` must be word aligned. While this is unlikely to
// change, it should be more proactively enforced.
//
// TODO: https://github.com/tock/tock/issues/1739
#[allow(clippy::cast_ptr_alignment)]
// Set up ring buffer for upcalls to the process.
let upcall_buf =
slice::from_raw_parts_mut(kernel_memory_break as *mut Task, Self::CALLBACK_LEN);
let tasks = RingBuffer::new(upcall_buf);
// Last thing in the kernel region of process RAM is the process struct.
kernel_memory_break = kernel_memory_break.offset(-(Self::PROCESS_STRUCT_OFFSET as isize));
let process_struct_memory_location = kernel_memory_break;
// Create the Process struct in the app grant region.
let mut process: &mut ProcessStandard<C> =
&mut *(process_struct_memory_location as *mut ProcessStandard<'static, C>);
// Ask the kernel for a unique identifier for this process that is being
// created.
let unique_identifier = kernel.create_process_identifier();
// Save copies of these in case the app was compiled for fixed addresses
// for later debugging.
let fixed_address_flash = tbf_header.get_fixed_address_flash();
let fixed_address_ram = tbf_header.get_fixed_address_ram();
process
.process_id
.set(ProcessId::new(kernel, unique_identifier, index));
process.kernel = kernel;
process.chip = chip;
process.allow_high_water_mark = Cell::new(initial_allow_high_water_mark);
process.memory_start = app_memory.as_ptr();
process.memory_len = app_memory.len();
process.header = tbf_header;
process.kernel_memory_break = Cell::new(kernel_memory_break);
process.app_break = Cell::new(initial_app_brk);
process.grant_pointers = MapCell::new(grant_pointers);
process.flash = app_flash;
process.stored_state = MapCell::new(Default::default());
// Mark this process as unstarted
process.state = ProcessStateCell::new(process.kernel);
process.fault_policy = fault_policy;
process.restart_count = Cell::new(0);
process.mpu_config = MapCell::new(mpu_config);
process.mpu_regions = [
Cell::new(None),
Cell::new(None),
Cell::new(None),
Cell::new(None),
Cell::new(None),
Cell::new(None),
];
process.tasks = MapCell::new(tasks);
process.process_name = process_name.unwrap_or("");
process.debug = MapCell::new(ProcessStandardDebug {
fixed_address_flash: fixed_address_flash,
fixed_address_ram: fixed_address_ram,
app_heap_start_pointer: None,
app_stack_start_pointer: None,
app_stack_min_pointer: None,
syscall_count: 0,
last_syscall: None,
dropped_upcall_count: 0,
timeslice_expiration_count: 0,
});
let flash_protected_size = process.header.get_protected_size() as usize;
let flash_app_start_addr = app_flash.as_ptr() as usize + flash_protected_size;
process.tasks.map(|tasks| {
tasks.enqueue(Task::FunctionCall(FunctionCall {
source: FunctionCallSource::Kernel,
pc: init_fn,
argument0: flash_app_start_addr,
argument1: process.memory_start as usize,
argument2: process.memory_len,
argument3: process.app_break.get() as usize,
}));
});
// Handle any architecture-specific requirements for a new process.
//
// NOTE! We have to ensure that the start of process-accessible memory
// (`app_memory_start`) is word-aligned. Since we currently start
// process-accessible memory at the beginning of the allocated memory
// region, we trust the MPU to give us a word-aligned starting address.
//
// TODO: https://github.com/tock/tock/issues/1739
match process.stored_state.map(|stored_state| {
chip.userspace_kernel_boundary().initialize_process(
app_memory_start,
initial_app_brk,
stored_state,
)
}) {
Some(Ok(())) => {}
_ => {
if config::CONFIG.debug_load_processes {
debug!(
"[!] flash={:#010X}-{:#010X} process={:?} - couldn't initialize process",
app_flash.as_ptr() as usize,
app_flash.as_ptr() as usize + app_flash.len() - 1,
process_name
);
}
return Err(ProcessLoadError::InternalError);
}
};
kernel.increment_work();
// Return the process object and the slice of memory remaining for future processes.
Ok((Some(process), unused_memory))
}
/// Restart the process, resetting all of its state and re-initializing it
/// to start running. Assumes the process is not running but is still in
/// flash and still has its memory region allocated to it. This implements
/// the mechanism of restart.
fn restart(&self) -> Result<(), ErrorCode> {
// We need a new process identifier for this process since the restarted
// version is in effect a new process. This is also necessary to
// invalidate any stored `ProcessId`s that point to the old version of
// the process. However, the process has not moved locations in the
// processes array, so we copy the existing index.
let old_index = self.process_id.get().index;
let new_identifier = self.kernel.create_process_identifier();
self.process_id
.set(ProcessId::new(self.kernel, new_identifier, old_index));
// Reset debug information that is per-execution and not per-process.
self.debug.map(|debug| {
debug.syscall_count = 0;
debug.last_syscall = None;
debug.dropped_upcall_count = 0;
debug.timeslice_expiration_count = 0;
});
// FLASH
// We are going to start this process over again, so need the init_fn
// location.
let app_flash_address = self.flash_start();
let init_fn = unsafe {
app_flash_address.offset(self.header.get_init_function_offset() as isize) as usize
};
// Reset MPU region configuration.
//
// TODO: ideally, this would be moved into a helper function used by
// both create() and reset(), but process load debugging complicates
// this. We just want to create new config with only flash and memory
// regions.
let mut mpu_config: <<C as Chip>::MPU as MPU>::MpuConfig = Default::default();
// Allocate MPU region for flash.
let app_mpu_flash = self.chip.mpu().allocate_region(
self.flash.as_ptr(),
self.flash.len(),
self.flash.len(),
mpu::Permissions::ReadExecuteOnly,
&mut mpu_config,
);
if app_mpu_flash.is_none() {
// We were unable to allocate an MPU region for flash. This is very
// unexpected since we previously ran this process. However, we
// return now and leave the process faulted and it will not be
// scheduled.
return Err(ErrorCode::FAIL);
}
// RAM
// Re-determine the minimum amount of RAM the kernel must allocate to
// the process based on the specific requirements of the syscall
// implementation.
let min_process_memory_size = self
.chip
.userspace_kernel_boundary()
.initial_process_app_brk_size();
// Recalculate initial_kernel_memory_size as was done in create()
let grant_ptr_size = mem::size_of::<(usize, *mut u8)>();
let grant_ptrs_num = self.kernel.get_grant_count_and_finalize();
let grant_ptrs_offset = grant_ptrs_num * grant_ptr_size;
let initial_kernel_memory_size =
grant_ptrs_offset + Self::CALLBACKS_OFFSET + Self::PROCESS_STRUCT_OFFSET;
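// That is: the grant pointer table, the upcall ring buffer, and the
// process struct, mirroring the regions that create() carves off the top
// of process RAM.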
let app_mpu_mem = self.chip.mpu().allocate_app_memory_region(
self.mem_start(),
self.memory_len,
self.memory_len, // we want exactly as much memory as we had before the restart
min_process_memory_size,
initial_kernel_memory_size,
mpu::Permissions::ReadWriteOnly,
&mut mpu_config,
);
let (app_mpu_mem_start, app_mpu_mem_len) = match app_mpu_mem {
Some((start, len)) => (start, len),
None => {
// We couldn't configure the MPU for the process. This shouldn't
// happen since we were able to start the process before, but at
// this point it is better to leave the app faulted and not
// schedule it.
return Err(ErrorCode::NOMEM);
}
};
// Reset memory pointers now that we know the layout of the process
// memory and know that we can configure the MPU.
// app_brk is set `min_process_memory_size` bytes above the start of
// process memory, matching the minimum break required by the syscall
// implementation.
let app_brk = app_mpu_mem_start.wrapping_add(min_process_memory_size);
self.app_break.set(app_brk);
// kernel_brk is calculated backwards from the end of memory, offset by
// the size of the initial kernel data structures.
let kernel_brk = app_mpu_mem_start
.wrapping_add(app_mpu_mem_len)
.wrapping_sub(initial_kernel_memory_size);
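// Illustrative example (hypothetical numbers): a 16 KiB region starting
// at 0x2000_0000 with 0x100 bytes of initial kernel structures puts
// kernel_brk at 0x2000_3F00.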
self.kernel_memory_break.set(kernel_brk);
// High water mark for `allow`ed memory is reset to the start of the
// process's memory region.
self.allow_high_water_mark.set(app_mpu_mem_start);
// Drop the old config and use the clean one
self.mpu_config.replace(mpu_config);
// Handle any architecture-specific requirements for a process when it
// first starts (as it would when it is new).
let ukb_init_process = self.stored_state.map_or(Err(()), |stored_state| unsafe {
self.chip.userspace_kernel_boundary().initialize_process(
app_mpu_mem_start,
app_brk,
stored_state,
)
});
match ukb_init_process {
Ok(()) => {}
Err(_) => {
// We couldn't initialize the architecture-specific state for
// this process. This shouldn't happen since the app was able to
// be started before, but at this point the app is no longer
// valid. The best thing we can do now is leave the app as still
// faulted and not schedule it.
return Err(ErrorCode::RESERVE);
}
};
// And queue up this app to be restarted.
let flash_protected_size = self.header.get_protected_size() as usize;
let flash_app_start = app_flash_address as usize + flash_protected_size;
// Mark the state as `Unstarted` for the scheduler.
self.state.update(State::Unstarted);
// Mark that we restarted this process.
self.restart_count.increment();
// Enqueue the initial function.
self.tasks.map(|tasks| {
tasks.enqueue(Task::FunctionCall(FunctionCall {
source: FunctionCallSource::Kernel,
pc: init_fn,
argument0: flash_app_start,
argument1: self.mem_start() as usize,
argument2: self.memory_len,
argument3: self.app_break.get() as usize,
}));
});
// Mark that the process is ready to run.
self.kernel.increment_work();
Ok(())
}
/// Checks if the buffer represented by the passed-in base pointer and size
/// is within the RAM bounds currently exposed to the process (i.e. ending
/// at `app_break`). If this method returns `true`, the buffer is guaranteed
/// to be accessible to the process and to not overlap with the grant
/// region.
fn in_app_owned_memory(&self, buf_start_addr: *const u8, size: usize) -> bool {
let buf_end_addr = buf_start_addr.wrapping_add(size);
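// `wrapping_add` keeps this computation well-defined even for a bogus
// `size`; the `buf_end_addr >= buf_start_addr` check below rejects any
// end address that wrapped around the top of the address space.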
buf_end_addr >= buf_start_addr
&& buf_start_addr >= self.mem_start()
&& buf_end_addr <= self.app_break.get()
}
/// Checks if the buffer represented by the passed-in base pointer and size
/// is within the readable region of an application's flash memory. If
/// this method returns `true`, the buffer is guaranteed to be readable by
/// the process.
fn in_app_flash_memory(&self, buf_start_addr: *const u8, size: usize) -> bool {
let buf_end_addr = buf_start_addr.wrapping_add(size);
buf_end_addr >= buf_start_addr
&& buf_start_addr >= self.flash_non_protected_start()
&& buf_end_addr <= self.flash_end()
}
/// Reset all `grant_ptr`s to NULL.
unsafe fn grant_ptrs_reset(&self) {
self.grant_pointers.map(|grant_pointers| {
for grant_entry in grant_pointers.iter_mut() {
grant_entry.driver_num = 0;
grant_entry.grant_ptr = ptr::null_mut();
}
});
}
/// Allocate memory in a process's grant region.
///
/// Ensures that the allocation is of `size` bytes and aligned to `align`
/// bytes.
///
/// If there is not enough memory, or the MPU cannot isolate the process
/// accessible region from the new kernel memory break after doing the
/// allocation, then this will return `None`.
fn allocate_in_grant_region_internal(&self, size: usize, align: usize) -> Option<NonNull<u8>> {
self.mpu_config.and_then(|mut config| {
// First, compute the candidate new pointer. Note that at this point
// we have not yet checked whether there is space for this
// allocation or that it meets alignment requirements.
let new_break_unaligned = self.kernel_memory_break.get().wrapping_sub(size);
// Our minimum alignment requirement is two bytes, so that the
// lowest bit of the address will always be zero and we can use it
// as a flag. It doesn't hurt to increase the alignment (except for
// potentially a wasted byte) so we make sure `align` is at least
// two.
let align = cmp::max(align, 2);
// The alignment must be a power of two, 2^a. The expression
// `!(align - 1)` then returns a mask with leading ones, followed by
// `a` trailing zeros.
let alignment_mask = !(align - 1);
let new_break = (new_break_unaligned as usize & alignment_mask) as *const u8;
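// Illustrative example: with `align == 8`, `alignment_mask == !0b111`, so
// an unaligned candidate of 0x2000_0FAD rounds down to 0x2000_0FA8.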
// Verify there is space for this allocation
if new_break < self.app_break.get() {
None
// Verify it didn't wrap around
} else if new_break > self.kernel_memory_break.get() {
None
// Verify this is compatible with the MPU.
} else if let Err(_) = self.chip.mpu().update_app_memory_region(
self.app_break.get(),
new_break,
mpu::Permissions::ReadWriteOnly,
&mut config,
) {
None
} else {
// Allocation is valid.
// We always allocate down, so we must lower the
// kernel_memory_break.
self.kernel_memory_break.set(new_break);
// We need `grant_ptr` as a mutable pointer.
let grant_ptr = new_break as *mut u8;
// ### Safety
//
// Here we are guaranteeing that `grant_ptr` is not null. We can
// ensure this because we just created `grant_ptr` based on the
// process's allocated memory, and we know it cannot be null.
unsafe { Some(NonNull::new_unchecked(grant_ptr)) }
}
})
}
/// Create the identifier for a custom grant that `grant.rs` uses to access
/// that custom grant.
///
/// We create this identifier by calculating the number of bytes between
/// where the custom grant starts and the end of the process memory.
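///
/// For example (illustrative addresses): if process memory ends at
/// 0x2000_8000 and the custom grant begins at 0x2000_7F80, the stored
/// offset is 0x80.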
fn create_custom_grant_identifier(&self, ptr: NonNull<u8>) -> ProcessCustomGrantIdentifer {
let custom_grant_address = ptr.as_ptr() as usize;
let process_memory_end = self.mem_end() as usize;
ProcessCustomGrantIdentifer {
offset: process_memory_end - custom_grant_address,
}
}
/// Use a `ProcessCustomGrantIdentifer` to find the address of the custom
/// grant.
///
/// This reverses `create_custom_grant_identifier()`.
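///
/// Continuing the illustrative example above: a memory end of 0x2000_8000
/// minus a stored offset of 0x80 recovers 0x2000_7F80.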
fn get_custom_grant_address(&self, identifier: ProcessCustomGrantIdentifer) -> usize {
let process_memory_end = self.mem_end() as usize;
// Subtract the offset in the identifier from the end of the process
// memory to get the address of the custom grant.
process_memory_end - identifier.offset
}
/// Check if the process is active.
///
/// "Active" is defined as the process can resume executing in the future.
/// This means its state in the `Process` struct is still valid, and that
/// the kernel could resume its execution without completely restarting and
/// resetting its state.
///
/// A process is inactive if the kernel cannot resume its execution, such as
/// if the process faults and is in an invalid state, or if the process
/// explicitly exits.
fn is_active(&self) -> bool {
let current_state = self.state.get();
current_state != State::Terminated && current_state != State::Faulted
}
}
| 41.389949 | 223 | 0.562088 |
902f2c72409ef8b8ceda31ecb8d6eb764ef491ae | 3,601 | // Copyright 2012 The Rust Project Developers. See the COPYRIGHT
// file at the top-level directory of this distribution and at
// http://rust-lang.org/COPYRIGHT.
//
// Licensed under the Apache License, Version 2.0 <LICENSE-APACHE or
// http://www.apache.org/licenses/LICENSE-2.0> or the MIT license
// <LICENSE-MIT or http://opensource.org/licenses/MIT>, at your
// option. This file may not be copied, modified, or distributed
// except according to those terms.
// Microbenchmarks for various functions in std and extra
#![feature(macro_rules)]
extern crate rand;
extern crate time;
use time::precise_time_s;
use rand::Rng;
use std::mem::swap;
use std::os;
use std::str;
use std::vec;
use std::io::File;
macro_rules! bench (
($argv:expr, $id:ident) => (maybe_run_test($argv, stringify!($id).to_owned(), $id))
)
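// `bench!(argv, foo)` expands to `maybe_run_test(argv, "foo".to_owned(), foo)`;
// `stringify!` derives the printed test name from the function identifier.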
fn main() {
let argv = os::args();
let _tests = argv.slice(1, argv.len());
bench!(argv, shift_push);
bench!(argv, read_line);
bench!(argv, vec_plus);
bench!(argv, vec_append);
bench!(argv, vec_push_all);
bench!(argv, is_utf8_ascii);
bench!(argv, is_utf8_multibyte);
}
fn maybe_run_test(argv: &[~str], name: ~str, test: ||) {
let mut run_test = false;
if os::getenv("RUST_BENCH").is_some() {
run_test = true
} else if argv.len() > 0 {
run_test = argv.iter().any(|x| x == &"all".to_owned()) || argv.iter().any(|x| x == &name)
}
if !run_test {
return
}
let start = precise_time_s();
test();
let stop = precise_time_s();
println!("{}:\t\t{} ms", name, (stop - start) * 1000.0);
}
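// Benchmark: repeatedly remove the first element of a 30000-element vector
// and push it onto a second vector (stresses front removal).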
fn shift_push() {
let mut v1 = Vec::from_elem(30000, 1);
let mut v2 = Vec::new();
while v1.len() > 0 {
v2.push(v1.shift().unwrap());
}
}
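// Benchmark: read the k-nucleotide test data file line by line through a
// BufferedReader, three passes.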
fn read_line() {
use std::io::BufferedReader;
let mut path = Path::new(env!("CFG_SRC_DIR"));
path.push("src/test/bench/shootout-k-nucleotide.data");
for _ in range(0, 3) {
let mut reader = BufferedReader::new(File::open(&path).unwrap());
for _line in reader.lines() {
}
}
}
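// Benchmark: grow a vector 1500 times, randomly choosing between an
// in-place `push_all_move` and rebuilding it with clone-and-append.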
fn vec_plus() {
let mut r = rand::task_rng();
let mut v = Vec::new();
let mut i = 0;
while i < 1500 {
let rv = Vec::from_elem(r.gen_range(0u, i + 1), i);
if r.gen() {
v.push_all_move(rv);
} else {
v = rv.clone().append(v.as_slice());
}
i += 1;
}
}
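// Benchmark: the same growth pattern, but always via `clone().append()`,
// with the operand order chosen at random.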
fn vec_append() {
let mut r = rand::task_rng();
let mut v = Vec::new();
let mut i = 0;
while i < 1500 {
let rv = Vec::from_elem(r.gen_range(0u, i + 1), i);
if r.gen() {
v = v.clone().append(rv.as_slice());
}
else {
v = rv.clone().append(v.as_slice());
}
i += 1;
}
}
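// Benchmark: grow a vector with `push_all`, occasionally swapping the
// source and destination vectors first.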
fn vec_push_all() {
let mut r = rand::task_rng();
let mut v = Vec::new();
for i in range(0u, 1500) {
let mut rv = Vec::from_elem(r.gen_range(0u, i + 1), i);
if r.gen() {
v.push_all(rv.as_slice());
}
else {
swap(&mut v, &mut rv);
v.push_all(rv.as_slice());
}
}
}
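// Benchmark: re-validate a growing ASCII-only byte vector with
// `str::is_utf8` after every push.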
fn is_utf8_ascii() {
let mut v : Vec<u8> = Vec::new();
for _ in range(0u, 20000) {
v.push('b' as u8);
if !str::is_utf8(v.as_slice()) {
fail!("is_utf8 failed");
}
}
}
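// Benchmark: as above, but appending a sample containing 1- to 4-byte
// UTF-8 sequences on each iteration.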
fn is_utf8_multibyte() {
let s = "b¢€𤭢";
let mut v : Vec<u8> = Vec::new();
for _ in range(0u, 5000) {
v.push_all(s.as_bytes());
if !str::is_utf8(v.as_slice()) {
fail!("is_utf8 failed");
}
}
}
| 23.383117 | 97 | 0.546515 |