defmodule AWS.Batch do
@moduledoc """
AWS Batch enables you to run batch computing workloads on the AWS Cloud.
Batch computing is a common way for developers, scientists, and engineers
to access large amounts of compute resources, and AWS Batch removes the
undifferentiated heavy lifting of configuring and managing the required
infrastructure. AWS Batch will be familiar to users of traditional batch
computing software. This service can efficiently provision resources in
response to jobs submitted in order to eliminate capacity constraints,
reduce compute costs, and deliver results quickly.
As a fully managed service, AWS Batch enables developers, scientists, and
engineers to run batch computing workloads of any scale. AWS Batch
automatically provisions compute resources and optimizes the workload
distribution based on the quantity and scale of the workloads. With AWS
Batch, there is no need to install or manage batch computing software,
which allows you to focus on analyzing results and solving problems. AWS
Batch reduces operational complexities, saves time, and reduces costs,
which makes it easy for developers, scientists, and engineers to run their
batch jobs in the AWS Cloud.
"""
@doc """
Cancels jobs in an AWS Batch job queue. Jobs that are in the `SUBMITTED`,
`PENDING`, or `RUNNABLE` state are cancelled. Jobs that have progressed to
`STARTING` or `RUNNING` are not cancelled (but the API operation still
succeeds, even if no jobs are cancelled); these jobs must be terminated
with the `TerminateJob` operation.
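## Example
A minimal sketch; the job ID is hypothetical and `client` is assumed to be a
configured AWS client map:
AWS.Batch.cancel_job(client, %{
"jobId" => "1d828f65-7a4d-42e8-996d-3b900ed59dc4",
"reason" => "Cancelling job."
})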
"""
def cancel_job(client, input, options \\ []) do
url = "/v1/canceljob"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Creates an AWS Batch compute environment. You can create `MANAGED` or
`UNMANAGED` compute environments.
In a managed compute environment, AWS Batch manages the compute resources
within the environment, based on the compute resources that you specify.
Instances launched into a managed compute environment use the latest Amazon
ECS-optimized AMI. You can choose to use Amazon EC2 On-Demand instances in
your managed compute environment, or you can use Amazon EC2 Spot instances
that only launch when the Spot bid price is below a specified percentage of
the On-Demand price.
In an unmanaged compute environment, you can manage your own compute
resources. This provides more compute resource configuration options, such
as using a custom AMI, but you must ensure that your AMI meets the Amazon
ECS container instance AMI specification. For more information, see
[Container Instance
AMIs](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/container_instance_AMIs.html)
in the *Amazon EC2 Container Service Developer Guide*. After you have
created your unmanaged compute environment, you can use the
`DescribeComputeEnvironments` operation to find the Amazon ECS cluster that
is associated with it and then manually launch your container instances
into that Amazon ECS cluster. For more information, see [Launching an
Amazon ECS Container
Instance](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/launch_container_instance.html)
in the *Amazon EC2 Container Service Developer Guide*.
"""
def create_compute_environment(client, input, options \\ []) do
url = "/v1/createcomputeenvironment"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Creates an AWS Batch job queue. When you create a job queue, you associate
one or more compute environments to the queue and assign an order of
preference for the compute environments.
You also set a priority to the job queue that determines the order in which
the AWS Batch scheduler places jobs onto its associated compute
environments. For example, if a compute environment is associated with more
than one job queue, the job queue with a higher priority is given
preference for scheduling jobs to that compute environment.
"""
def create_job_queue(client, input, options \\ []) do
url = "/v1/createjobqueue"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Deletes an AWS Batch compute environment.
Before you can delete a compute environment, you must set its state to
`DISABLED` with the `UpdateComputeEnvironment` API operation and
disassociate it from any job queues with the `UpdateJobQueue` API
operation.
"""
def delete_compute_environment(client, input, options \\ []) do
url = "/v1/deletecomputeenvironment"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Deletes the specified job queue. You must first disable submissions for a
queue with the `UpdateJobQueue` operation and terminate any jobs that have
not completed with the `TerminateJob` operation.
It is not necessary to disassociate compute environments from a queue
before submitting a `DeleteJobQueue` request.
"""
def delete_job_queue(client, input, options \\ []) do
url = "/v1/deletejobqueue"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Deregisters an AWS Batch job definition.
"""
def deregister_job_definition(client, input, options \\ []) do
url = "/v1/deregisterjobdefinition"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Describes one or more of your compute environments.
If you are using an unmanaged compute environment, you can use the
`DescribeComputeEnvironments` operation to determine the `ecsClusterArn` of the
cluster into which you should launch your Amazon ECS container instances.
"""
def describe_compute_environments(client, input, options \\ []) do
url = "/v1/describecomputeenvironments"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Describes a list of job definitions. You can specify a `status` (such as
`ACTIVE`) to only return job definitions that match that status.
"""
def describe_job_definitions(client, input, options \\ []) do
url = "/v1/describejobdefinitions"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Describes one or more of your job queues.
"""
def describe_job_queues(client, input, options \\ []) do
url = "/v1/describejobqueues"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Describes a list of AWS Batch jobs.
"""
def describe_jobs(client, input, options \\ []) do
url = "/v1/describejobs"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Returns a list of AWS Batch jobs for a specified job queue. You can filter the
results by job status with the `jobStatus` parameter.
"""
def list_jobs(client, input, options \\ []) do
url = "/v1/listjobs"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Registers an AWS Batch job definition.
"""
def register_job_definition(client, input, options \\ []) do
url = "/v1/registerjobdefinition"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Submits an AWS Batch job from a job definition. Parameters specified during
`SubmitJob` override parameters defined in the job definition.
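## Example
A minimal sketch; all names are hypothetical and `client` is assumed to be a
configured AWS client map:
AWS.Batch.submit_job(client, %{
"jobName" => "example-job",
"jobQueue" => "HighPriority",
"jobDefinition" => "sleep60:1"
})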
"""
def submit_job(client, input, options \\ []) do
url = "/v1/submitjob"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Terminates jobs in a job queue. Jobs that are in the `STARTING` or
`RUNNING` state are terminated, which causes them to transition to
`FAILED`. Jobs that have not progressed to the `STARTING` state are
cancelled.
"""
def terminate_job(client, input, options \\ []) do
url = "/v1/terminatejob"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Updates an AWS Batch compute environment.
"""
def update_compute_environment(client, input, options \\ []) do
url = "/v1/updatecomputeenvironment"
headers = []
request(client, :post, url, headers, input, options, nil)
end
@doc """
Updates a job queue.
"""
def update_job_queue(client, input, options \\ []) do
url = "/v1/updatejobqueue"
headers = []
request(client, :post, url, headers, input, options, nil)
end
defp request(client, method, url, headers, input, options, success_status_code) do
client = %{client | service: "batch"}
host = get_host("batch", client)
url = get_url(host, url, client)
headers = Enum.concat([{"Host", host},
{"Content-Type", "application/x-amz-json-1.1"}],
headers)
payload = encode_payload(input)
headers = AWS.Request.sign_v4(client, method, url, headers, payload)
perform_request(method, url, payload, headers, options, success_status_code)
end
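# Without an explicit success status code, 200/202/204 responses count as success.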
defp perform_request(method, url, payload, headers, options, nil) do
case HTTPoison.request(method, url, payload, headers, options) do
{:ok, response=%HTTPoison.Response{status_code: 200, body: ""}} ->
{:ok, response}
{:ok, response=%HTTPoison.Response{status_code: 200, body: body}} ->
{:ok, Poison.Parser.parse!(body), response}
{:ok, response=%HTTPoison.Response{status_code: 202, body: body}} ->
{:ok, Poison.Parser.parse!(body), response}
{:ok, response=%HTTPoison.Response{status_code: 204, body: body}} ->
{:ok, Poison.Parser.parse!(body), response}
{:ok, _response=%HTTPoison.Response{body: body}} ->
reason = Poison.Parser.parse!(body)["message"]
{:error, reason}
{:error, %HTTPoison.Error{reason: reason}} ->
{:error, %HTTPoison.Error{reason: reason}}
end
end
defp perform_request(method, url, payload, headers, options, success_status_code) do
case HTTPoison.request(method, url, payload, headers, options) do
{:ok, response=%HTTPoison.Response{status_code: ^success_status_code, body: ""}} ->
{:ok, nil, response}
{:ok, response=%HTTPoison.Response{status_code: ^success_status_code, body: body}} ->
{:ok, Poison.Parser.parse!(body), response}
{:ok, _response=%HTTPoison.Response{body: body}} ->
reason = Poison.Parser.parse!(body)["message"]
{:error, reason}
{:error, %HTTPoison.Error{reason: reason}} ->
{:error, %HTTPoison.Error{reason: reason}}
end
end
defp get_host(endpoint_prefix, client) do
if client.region == "local" do
"localhost"
else
"#{endpoint_prefix}.#{client.region}.#{client.endpoint}"
end
end
defp get_url(host, url, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}#{url}/"
end
defp encode_payload(input) do
if input != nil do
Poison.Encoder.encode(input, [])
else
""
end
end
end
# Source: lib/aws/batch.ex
defmodule Apoc do
@moduledoc """
Comprehensive docs coming soon!
"""
alias Apoc.Hazmat
@typedoc """
Hex (lowercase) encoded string
See `Apoc.hex/1`
"""
@type hexstring :: binary()
@typedoc """
An encoded string that represents a string encoded in Apoc's encoding scheme
(URL safe Base 64).
See `Apoc.encode/1`
"""
@type encoded_string :: binary()
@doc """
Hash a message with the default hashing scheme
and encode it with `Apoc.encode/1`.
See `Apoc.Hash` for other hashing schemes and encoding options
"""
@spec hash(message :: binary) :: {:ok, hexstring} | :error
def hash(message) do
# TODO: Use a "defaults" module
Apoc.Hazmat.Hash.SHA256.hash_encode(message)
end
@spec decode(encoded_string()) :: {:ok, binary()} | :error
@doc """
Decodes a URL safe base 64 string to binary, returning
a tuple of `{:ok, decoded_binary}` if successful or `:error` otherwise.
## Examples
iex> Apoc.decode("AQIDBAU")
{:ok, <<1, 2, 3, 4, 5>>}
iex> Apoc.decode("^&%")
:error
"""
def decode(encoded) do
Base.url_decode64(encoded, padding: false)
end
@spec decode!(encoded_string()) :: binary()
@doc """
Similar to `decode/1` but returns the decoded binary directly
rather than a tuple. Raises an error if decoding fails.
## Examples
iex> Apoc.decode!("AQIDBAU")
<<1, 2, 3, 4, 5>>
iex> Apoc.decode!("&^%")
** (ArgumentError) non-alphabet digit found: "&" (byte 38)
"""
def decode!(encoded) do
Base.url_decode64!(encoded, padding: false)
end
@doc """
Encodes a binary as a URL safe base 64 string
## Example
```
iex> Apoc.encode(<<16, 32, 64>>)
"ECBA"
```
## Encoding Scheme
Base 64 is similar to hex encoding but now instead of using 4-bit nibbles
it uses groups of 6 bits (64 possible values) and then assigns each to
a character as defined here https://hexdocs.pm/elixir/Base.html#module-base-64-url-and-filename-safe-alphabet.
The algorithm is a little more complex now as we have to worry about padding
to the nearest multiple of 6 bits. However, a simple example can be demonstrated
with 3 bytes, which is 24 bits and already a multiple of 6.
Take the binary `<<10, 10, 10>>`, we can break it into 6-bit components:
```
iex> <<a::6, b::6, c::6, d::6>> = <<10, 10, 10>>
...> [a, b, c, d]
[2, 32, 40, 10]
```
Now mapping each value to the safe alphabet we get:
```
iex> Apoc.encode(<<10, 10, 10>>)
"CgoK"
```
"""
@spec encode(payload :: binary) :: encoded_string()
def encode(payload) when is_binary(payload) do
Base.url_encode64(payload, padding: false)
end
@doc """
Encodes a binary in hex format.
Hex strings represent a binary by splitting each
byte into two parts of 4-bits (called a "nibble").
Each nibble has 16 possible values, 0 through to 15.
Values 0 to 9 stay as they are while values 10 to 15
are mapped to the letters a through f.
## Example
```
iex> Apoc.hex(<<27, 90, 33, 46>>)
"1b5a212e"
```
## Encoding Scheme
The binary `<<184>>` splits into the nibbles x and y:
```
iex> <<x::4, y::4>> = <<184>>
...> [x, y]
[11, 8]
```
Now 11 maps to the character "b" while 8 stays the same
so the hex encoding of the byte `<<184>>` is "b8".
```
iex> Apoc.hex(<<184>>)
"b8"
```
Note that hex strings are exactly twice as long (in bytes)
as the original binary.
See also `Base.encode16/2`
"""
@spec hex(payload :: binary) :: hexstring()
def hex(payload) when is_binary(payload) do
Base.encode16(payload, case: :lower)
end
@doc """
Decodes a hex encoded string into binary, returning `{:ok, binary}`
if successful and `:error` if not.
See also `hex/1` and `unhex!/1`
## Examples
iex> Apoc.unhex("0102030405")
{:ok, <<1, 2, 3, 4, 5>>}
iex> Apoc.unhex("XX")
:error
"""
def unhex(hexstring) when is_binary(hexstring) do
Base.decode16(hexstring, case: :lower)
end
@doc """
Decodes a hex encoded string into binary and returns the result
directly. An error is raised if the string cannot be decoded.
See also `hex/1` and `unhex/1`
## Examples
iex> Apoc.unhex!("0102030405")
<<1, 2, 3, 4, 5>>
iex> Apoc.unhex!("XX")
** (ArgumentError) non-alphabet digit found: "X" (byte 88)
"""
def unhex!(hexstring) when is_binary(hexstring) do
Base.decode16!(hexstring, case: :lower)
end
@deprecated "Use unhex/1 and unhex!/1 instead"
def decode_hex(payload) do
Base.decode16(payload, case: :lower)
end
@doc """
Simple wrapper to `:crypto.strong_rand_bytes/1`.
Returns a secure binary of `num` random bytes
"""
def rand_bytes(num) do
:crypto.strong_rand_bytes(num)
end
# TODO: Test these
defdelegate encrypt(message, key), to: Hazmat.AEAD.AESGCM
defdelegate decrypt(encrypted, key), to: Hazmat.AEAD.AESGCM
@doc """
Signs a message with the given key by generating a Message Authentication Code (MAC),
often referred to as a tag. Returns a tuple of the form `{:ok, tag}`, with the
tag encoded as per `Apoc.encode/1`, or `:error` otherwise.
The default MAC adapter is `Apoc.Hazmat.MAC.HMAC256`. See also `Apoc.Adapters.MAC`.
## Examples
iex> Apoc.sign("hello", Apoc.decode!("<KEY>"))
{:ok, "tP6Nlf174bt05APQxaqXQTnyO-tOpvTJV2WPcD_rej4"}
iex> Apoc.sign("hello", <<0>>)
{:error, "Invalid key size"}
"""
@spec sign(message :: binary(), key :: binary(), opts :: list()) :: {:ok, binary()} | :error
def sign(message, key, opts \\ []) do
with {:ok, binary_tag} <- Hazmat.MAC.HMAC256.sign(message, key, opts) do
{:ok, Apoc.encode(binary_tag)}
end
end
@doc """
Similar to `Apoc.sign/3` but returns the tag directly if successful (instead of a tuple)
and raises `Apoc.Error` in the case of an error.
## Examples
iex> Apoc.sign!("hello", Apoc.decode!("<KEY>"))
"tP6Nlf174bt05APQxaqXQTnyO-tOpvTJV2WPcD_rej4"
iex> Apoc.sign!("hello", <<0>>)
** (Apoc.Error) Invalid key size
"""
@spec sign!(message :: binary(), key :: binary(), opts :: list()) :: binary()
def sign!(message, key, opts \\ []) do
message
|> Hazmat.MAC.HMAC256.sign!(key, opts)
|> Apoc.encode()
end
@doc """
Verifies a message given the tag encoded by `Apoc.encode/1` or by a 3rd party in
Base64 encoding. If you are verifying tags with other encodings you should use one of the
modules in `Apoc.Hazmat.MAC`.
Returns `true` if verification is successful, and `false` otherwise.
See also `Apoc.sign/3`.
## Examples
iex> "tP6Nlf174bt05APQxaqXQTnyO-tOpvTJV2WPcD_rej4"
...> |> Apoc.verify("hello", Apoc.decode!("0Eqm2Go54JdQPIjS3FkQaSEy1Z-W22eRVRoBNrvp4ok"))
true
iex> "tP6Nlf174bt05APQxaqXQTnyO-tOpvTJV2WPcD_rej4"
...> |> Apoc.verify("hello-tamper", Apoc.decode!("0Eqm2Go54JdQPIjS3FkQaSEy1Z-W22eRVRoBNrvp4ok"))
false
"""
@spec verify(tag :: encoded_string(), message :: iodata(), key :: Apoc.Adapter.MAC.key(), opts :: Keyword.t()) :: true | false
def verify(tag, message, key, opts \\ []) do
with {:ok, binary} <- Apoc.decode(tag) do
Hazmat.MAC.HMAC256.verify(binary, message, key, opts)
end
end
@doc """
Compares two binaries for equality in constant time
to avoid timing attacks.
See https://codahale.com/a-lesson-in-timing-attacks/
and `Plug.Crypto`.
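## Examples
iex> Apoc.secure_compare(<<1, 2, 3>>, <<1, 2, 3>>)
true
iex> Apoc.secure_compare(<<1, 2, 3>>, <<1, 2, 4>>)
false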
"""
# TODO: Move to Util?
def secure_compare(left, right) do
if byte_size(left) == byte_size(right) do
secure_compare(left, right, 0) == 0
else
false
end
end
defp secure_compare(<<x, left::binary>>, <<y, right::binary>>, acc) do
import Bitwise
xorred = x ^^^ y
secure_compare(left, right, acc ||| xorred)
end
defp secure_compare(<<>>, <<>>, acc) do
acc
end
end
# Source: lib/apoc.ex
defmodule CargueroTaskBunny.Queue do
@moduledoc """
Convenience functions for accessing CargueroTaskBunny queues.
It's a semi private module normally wrapped by other modules.
## Sub Queues
When CargueroTaskBunny creates (declares) a queue on RabbitMQ, it also creates the following sub queues.
- [queue-name].scheduled: holds jobs to be executed in the future
- [queue-name].retry: holds jobs to be retried
- [queue-name].rejected: holds jobs that were rejected (failed more than max retry times or wrong message format)
"""
@doc """
Declares a queue with sub queues.
Queue.declare_with_subqueues(:default, "normal_jobs")
For this call, the function creates (declares) four queues:
- normal_jobs: a queue that holds jobs to process
- normal_jobs.scheduled: a queue that holds jobs to process in the future
- normal_jobs.retry: a queue that holds jobs failed and waiting to retry
- normal_jobs.rejected: a queue that holds jobs failed and won't be retried
"""
@spec declare_with_subqueues(%AMQP.Connection{} | atom, String.t()) :: {map, map, map, map}
def declare_with_subqueues(host, work_queue) when is_atom(host) do
conn = CargueroTaskBunny.Connection.get_connection!(host)
declare_with_subqueues(conn, work_queue)
end
def declare_with_subqueues(connection, work_queue) do
{:ok, channel} = AMQP.Channel.open(connection)
scheduled_queue = scheduled_queue(work_queue)
retry_queue = retry_queue(work_queue)
rejected_queue = rejected_queue(work_queue)
work = declare(channel, work_queue, durable: true)
rejected = declare(channel, rejected_queue, durable: true)
# Set main queue as dead letter exchange of retry queue.
# It will requeue the message once message TTL is over.
retry_options = [
arguments: [
{"x-dead-letter-exchange", :longstr, ""},
{"x-dead-letter-routing-key", :longstr, work_queue}
],
durable: true
]
retry = declare(channel, retry_queue, retry_options)
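# The scheduled queue works the same way: messages dead-letter back into
# the main queue once their per-message TTL expires.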
scheduled_options = [
arguments: [
{"x-dead-letter-exchange", :longstr, ""},
{"x-dead-letter-routing-key", :longstr, work_queue}
],
durable: true
]
scheduled = declare(channel, scheduled_queue, scheduled_options)
AMQP.Channel.close(channel)
{work, retry, rejected, scheduled}
end
@doc """
Deletes the queue and its subqueues.
"""
@spec delete_with_subqueues(%AMQP.Connection{} | atom, String.t()) :: :ok
def delete_with_subqueues(host, work_queue) when is_atom(host) do
conn = CargueroTaskBunny.Connection.get_connection!(host)
delete_with_subqueues(conn, work_queue)
end
def delete_with_subqueues(connection, work_queue) do
{:ok, channel} = AMQP.Channel.open(connection)
work_queue
|> queue_with_subqueues()
|> Enum.each(fn queue ->
AMQP.Queue.delete(channel, queue)
end)
AMQP.Channel.close(channel)
:ok
end
@doc false
# Declares a single queue with the options
@spec declare(%AMQP.Channel{}, String.t(), keyword) :: map
def declare(channel, queue, options \\ []) do
options = options ++ [durable: true]
{:ok, state} = AMQP.Queue.declare(channel, queue, options)
state
end
@doc """
Returns the message count and consumer count for the given queue.
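## Example
For example (illustrative values):
CargueroTaskBunny.Queue.state(:default, "normal_jobs")
# => %{queue: "normal_jobs", message_count: 3, consumer_count: 1}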
"""
@spec state(%AMQP.Connection{} | atom, String.t()) :: map
def state(host_or_conn \\ :default, queue)
def state(host, queue) when is_atom(host) do
conn = CargueroTaskBunny.Connection.get_connection!(host)
state(conn, queue)
end
def state(connection, queue) do
{:ok, channel} = AMQP.Channel.open(connection)
{:ok, state} = AMQP.Queue.status(channel, queue)
AMQP.Channel.close(channel)
state
end
@doc """
Returns a list that contains the queue and its subqueue.
"""
@spec queue_with_subqueues(String.t()) :: [String.t()]
def queue_with_subqueues(work_queue) do
[work_queue] ++ subqueues(work_queue)
end
@doc """
Returns all subqueues for the work queue.
"""
@spec subqueues(String.t()) :: [String.t()]
def subqueues(work_queue) do
[
retry_queue(work_queue),
rejected_queue(work_queue),
scheduled_queue(work_queue)
]
end
@doc """
Returns a name of retry queue.
"""
@spec retry_queue(String.t()) :: String.t()
def retry_queue(work_queue) do
work_queue <> ".retry"
end
@doc """
Returns a name of rejected queue.
"""
@spec rejected_queue(String.t()) :: String.t()
def rejected_queue(work_queue) do
work_queue <> ".rejected"
end
@doc """
Returns a name of scheduled queue.
"""
@spec scheduled_queue(String.t()) :: String.t()
def scheduled_queue(work_queue) do
work_queue <> ".scheduled"
end
end
# Source: lib/carguero_task_bunny/queue.ex
defmodule Exrabbit.Channel do
@moduledoc """
This module exposes some channel-level AMQP methods.
Mostly the functions that don't belong in neither `Exrabbit.Producer` nor
`Exrabbit.Consumer` are kept here.
"""
use Exrabbit.Records
@type conn :: pid
@type chan :: pid
@type await_confirms_result :: :ok | {:error, :timeout} | {:error, :nack}
@doc """
Open a new channel on an established connection.
Returns a new channel or fails.
"""
@spec open(conn) :: chan | no_return
def open(conn) when is_pid(conn) do
{:ok, chan} = :amqp_connection.open_channel(conn)
chan
end
@doc """
Close previously opened channel.
"""
@spec close(chan) :: :ok
def close(chan), do: :amqp_channel.close(chan)
@doc """
Switch the channel to confirm-mode or tx-mode.
Once set, the mode cannot be changed afterwards.
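## Example
For instance, to enable publisher confirms on a channel:
Exrabbit.Channel.set_mode(chan, :confirm)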
"""
@spec set_mode(chan, :confirm | :tx) :: :ok
def set_mode(chan, :confirm) do
confirm_select_ok() = :amqp_channel.call(chan, confirm_select())
:ok
end
def set_mode(chan, :tx) do
tx_select_ok() = :amqp_channel.call(chan, tx_select())
:ok
end
@doc """
Set QoS (Quality of Service) on the channel.
The second argument should be an `Exrabbit.Records.basic_qos` record.
"""
@spec set_qos(chan, Exrabbit.Records.basic_qos) :: :ok
def set_qos(chan, basic_qos()=qos) do
basic_qos_ok() = :amqp_channel.call(chan, qos)
:ok
end
@doc """
Acknowledge a message.
## Options
* `multiple: <boolean>` - when `true`, acknowledges all messages up to and
including the current one in a single request; default: `false`
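## Example
A sketch, assuming `tag` is the delivery tag of a received message:
Exrabbit.Channel.ack(chan, tag, multiple: true)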
"""
@spec ack(chan, binary) :: :ok
@spec ack(chan, binary, Keyword.t) :: :ok
def ack(chan, tag, opts \\ []) do
method = basic_ack(
delivery_tag: tag,
multiple: Keyword.get(opts, :multiple, false)
)
:amqp_channel.call(chan, method)
end
@doc """
Reject a message (RabbitMQ extension).
## Options
* `multiple: <boolean>` - reject all messages up to and including the
current one; default: `false`
* `requeue: <boolean>` - put rejected messages back into the queue;
default: `true`
"""
@spec nack(chan, binary) :: :ok
@spec nack(chan, binary, Keyword.t) :: :ok
def nack(chan, tag, opts \\ []) do
method = basic_nack(
delivery_tag: tag,
multiple: Keyword.get(opts, :multiple, false),
requeue: Keyword.get(opts, :requeue, true)
)
:amqp_channel.call(chan, method)
end
@doc """
Reject a message.
## Options
* `requeue: <boolean>` - put the message back into the queue; default: `true`
"""
@spec reject(chan, binary) :: :ok
@spec reject(chan, binary, Keyword.t) :: :ok
def reject(chan, tag, opts \\ []) do
method = basic_reject(
delivery_tag: tag,
requeue: Keyword.get(opts, :requeue, true)
)
:amqp_channel.call(chan, method)
end
@doc """
Await for message confirms.
Returns `:ok` or `{:error, <reason>}` where `<reason>` can be one of the
following:
* `:timeout` - the timeout has run out before reply was received
* `:nack` - at least one message hasn't been confirmed
"""
@spec await_confirms(chan) :: await_confirms_result
@spec await_confirms(chan, non_neg_integer) :: await_confirms_result
def await_confirms(chan, timeout \\ confirm_timeout()) do
case :amqp_channel.wait_for_confirms(chan, timeout) do
:timeout -> {:error, :timeout}
false -> {:error, :nack}
true -> :ok
end
end
@doc """
Redeliver all currently unacknowledged messages.
## Options
* `requeue: <boolean>` - when `false` (default), the messages will be
redelivered to the original consumer; when `true`, the messages will be
put back into the queue and potentially be delivered to another consumer
of that queue
"""
@spec recover(chan) :: :ok
@spec recover(chan, Keyword.t) :: :ok
def recover(chan, options \\ []) do
basic_recover_ok() = :amqp_channel.call(chan, basic_recover(requeue: Keyword.get(options, :requeue, false)))
:ok
end
@doc """
Commit current transaction.
See http://www.rabbitmq.com/amqp-0-9-1-reference.html#tx.commit for details.
"""
@spec commit(chan) :: :ok
def commit(chan) do
tx_commit_ok() = :amqp_channel.call(chan, tx_commit())
:ok
end
@doc """
Rollback current transaction.
See http://www.rabbitmq.com/amqp-0-9-1-reference.html#tx.rollback for details.
"""
@spec rollback(chan) :: :ok
def rollback(chan) do
tx_rollback_ok() = :amqp_channel.call(chan, tx_rollback())
:ok
end
@doc """
Delete an exchange.
## Options
* `if_unused: <boolean>` - only delete the exchange if it has no queue
bindings
"""
@spec exchange_delete(chan, binary) :: :ok
@spec exchange_delete(chan, binary, Keyword.t) :: :ok
def exchange_delete(chan, name, options \\ []) when is_binary(name) do
method = exchange_delete(
exchange: name,
if_unused: Keyword.get(options, :if_unused, false)
)
exchange_delete_ok() = :amqp_channel.call(chan, method)
:ok
end
@doc """
Clear a queue.
Returns the number of messages it contained.
"""
@spec queue_purge(chan, binary) :: non_neg_integer
def queue_purge(chan, name) when is_binary(name) do
method = queue_purge(queue: name)
queue_purge_ok(message_count: cnt) = :amqp_channel.call(chan, method)
cnt
end
@doc """
Delete a queue.
Returns the number of messages it contained.
## Options
* `if_unused: <boolean>` - only delete the queue if it has no consumers
(this options doesn't seem to work in the underlying Erlang client)
* `if_empty: <boolean>` - only delete the queue if it has no messages
"""
@spec queue_delete(chan, binary) :: non_neg_integer
@spec queue_delete(chan, binary, Keyword.t) :: non_neg_integer
def queue_delete(chan, name, options \\ []) when is_binary(name) do
method = queue_delete(
queue: name,
if_unused: Keyword.get(options, :if_unused, false),
if_empty: Keyword.get(options, :if_empty, false)
)
queue_delete_ok(message_count: cnt) = :amqp_channel.call(chan, method)
cnt
end
defp confirm_timeout, do: Application.get_env(:exrabbit, :confirm_timeout, 15000)
end
# Source: lib/exrabbit/channel.ex
alias GraphQL.Lang.AST.Visitor
alias GraphQL.Lang.AST.InitialisingVisitor
alias GraphQL.Lang.AST.PostprocessingVisitor
defmodule GraphQL.Lang.AST.CompositeVisitor do
@moduledoc """
A CompositeVisitor composes two Visitor implementations into a single Visitor.
This provides the ability to chain an arbitrary number of visitors together.
The *outer_visitor* notionally wraps the *inner_visitor*. The order of operations is thus:
1. outer_visitor.enter
2. inner_visitor.enter
3. inner_visitor.leave
4. outer_visitor.leave
"""
defstruct outer_visitor: nil, inner_visitor: nil
@doc """
Composes two Visitors, returning a new one.
"""
def compose(outer_visitor, inner_visitor) do
%GraphQL.Lang.AST.CompositeVisitor{outer_visitor: outer_visitor, inner_visitor: inner_visitor}
end
@doc """
Composes an arbitrarily long list of Visitors into a single Visitor.
The order of the list is outer-to-inner. The leftmost visitor will be invoked first
upon 'enter' and last upon 'leave'.
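For example, compose([a, b, c]) is equivalent to compose(a, compose(b, c)):
a.enter, b.enter, c.enter run on the way in, then c.leave, b.leave, a.leave
on the way out.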
"""
def compose([visitor]), do: visitor
def compose([outer_visitor|rest]), do: compose(outer_visitor, compose(rest))
end
defimpl Visitor, for: GraphQL.Lang.AST.CompositeVisitor do
@doc """
Invoke *enter* on the outer visitor first, passing the resulting accumulator to the *enter*
call on the *inner* visitor.
If the outer visitor's enter returns :skip, the inner visitor is not invoked and
traversal stops with the merged accumulator.
"""
def enter(composite_visitor, node, accumulator) do
{v1_next_action, v1_accumulator} =
Visitor.enter(composite_visitor.outer_visitor, node, accumulator)
accumulator = Map.merge(accumulator, v1_accumulator)
if v1_next_action == :skip do
{:skip, accumulator}
else
Visitor.enter(composite_visitor.inner_visitor, node, accumulator)
end
end
@doc """
Invoke *leave* on the inner visitor first, passing the resulting accumulator to the *leave*
call on the *outer* visitor.
Returns the accumulator produced by the outer visitor's *leave*.
"""
def leave(composite_visitor, node, accumulator) do
v1_accumulator = Visitor.leave(composite_visitor.inner_visitor, node, accumulator)
v2_accumulator = Visitor.leave(composite_visitor.outer_visitor, node, Map.merge(accumulator, v1_accumulator))
v2_accumulator
end
end
defimpl InitialisingVisitor, for: GraphQL.Lang.AST.CompositeVisitor do
@doc """
Invokes *init* on the outer visitor first, then calls *init* on the inner visitor
passing the accumulator from the *outer*.
Returns the accumulator of the *inner* visitor.
"""
def init(composite_visitor, accumulator) do
accumulator = InitialisingVisitor.init(composite_visitor.outer_visitor, accumulator)
InitialisingVisitor.init(composite_visitor.inner_visitor, accumulator)
end
end
defimpl PostprocessingVisitor, for: GraphQL.Lang.AST.CompositeVisitor do
@doc """
Invokes *finish* on the inner visitor first, then calls *finish* on the outer visitor
passing the accumulator from the *inner*.
Returns the accumulator of the *outer* visitor.
"""
def finish(composite_visitor, accumulator) do
accumulator = PostprocessingVisitor.finish(composite_visitor.inner_visitor, accumulator)
PostprocessingVisitor.finish(composite_visitor.outer_visitor, accumulator)
end
end
# Source: lib/graphql/lang/ast/composite_visitor.ex
defmodule RayTracer.Tasks.Chapter11 do
@moduledoc """
This module tests reflections and refractions from Chapter 11
"""
alias RayTracer.RTuple
alias RayTracer.Shape
alias RayTracer.Sphere
alias RayTracer.Plane
alias RayTracer.Canvas
alias RayTracer.Material
alias RayTracer.Color
alias RayTracer.Light
alias RayTracer.World
alias RayTracer.Camera
alias RayTracer.Pattern
alias RayTracer.StripePattern
alias RayTracer.CheckerPattern
alias RayTracer.Transformations
import RTuple, only: [point: 3, vector: 3]
import Light, only: [point_light: 2]
import Transformations
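# Rotation angles below are in radians (1.5708 ≈ π/2, 0.31415 ≈ π/10).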
@doc """
Generates a file that tests rendering a world
"""
@spec execute :: :ok
def execute(w \\ 100, h \\ 50) do
# Benchmark(fn -> RayTracer.Tasks.Chapter11.execute end)
world = build_world()
camera = build_camera(w, h)
camera
|> Camera.render(world)
|> Canvas.export_to_ppm_file
:ok
end
defp build_world do
objects = [
floor(),
ceiling(),
west_wall(),
east_wall(),
north_wall(),
south_wall(),
red_sphere(),
blue_glass_sphere(),
green_glass_sphere()
]
light = point_light(point(-4.9, 4.9, -1), Color.new(1, 1, 1))
World.new(objects, light)
end
defp build_camera(w, h) do
transform = view_transform(point(-2.6, 1.5, -3.9), point(-0.6, 1, -0.8), vector(0, 1, 0))
Camera.new(w, h, 1.152, transform)
end
defp wall_material do
transform = Transformations.compose([
scaling(0.25, 0.25, 0.25),
rotation_y(1.5708)
])
pattern = StripePattern.new(Color.new(0.45, 0.45, 0.45), Color.new(0.55, 0.55, 0.55)) |> Pattern.set_transform(transform)
%Material{
pattern: pattern,
ambient: 0,
diffuse: 0.4,
specular: 0,
reflective: 0.3
}
end
def floor do
pattern = CheckerPattern.new(Color.new(0.35, 0.35, 0.35), Color.new(0.65, 0.65, 0.65))
material = %Material{specular: 0, reflective: 0.4, pattern: pattern}
transform = rotation_y(0.31415)
%Plane{material: material} |> Shape.set_transform(transform)
end
def ceiling do
material = %Material{color: Color.new(0.8, 0.8, 0.8), specular: 0, ambient: 0.3}
transform = translation(0, 5, 0)
%Plane{material: material} |> Shape.set_transform(transform)
end
def west_wall do
transform = Transformations.compose([
rotation_y(1.5708),
rotation_z(1.5708),
translation(-5, 0, 0)
])
%Plane{material: wall_material()} |> Shape.set_transform(transform)
end
def east_wall do
transform = Transformations.compose([
rotation_y(1.5708),
rotation_z(1.5708),
translation(5, 0, 0)
])
%Plane{material: wall_material()} |> Shape.set_transform(transform)
end
def north_wall do
transform = Transformations.compose([
rotation_x(1.5708),
translation(0, 0, 5)
])
%Plane{material: wall_material()} |> Shape.set_transform(transform)
end
def south_wall do
transform = Transformations.compose([
rotation_x(1.5708),
translation(0, 0, -5)
])
%Plane{material: wall_material()} |> Shape.set_transform(transform)
end
def red_sphere do
transform = Transformations.compose([
translation(-0.6, 1, 0.6)
])
material = %Material{
color: Color.new(1, 0.3, 0.2),
specular: 0.4,
shininess: 5
}
%Sphere{material: material} |> Shape.set_transform(transform)
end
def blue_glass_sphere do
transform = Transformations.compose([
scaling(0.7, 0.7, 0.7),
translation(0.6, 0.7, -0.6)
])
material = %Material{
color: Color.new(0, 0, 0.2),
ambient: 0,
diffuse: 0.4,
specular: 0.9,
shininess: 300,
reflective: 0.9,
transparency: 0.9,
refractive_index: 1.5
}
%Sphere{material: material} |> Shape.set_transform(transform)
end
def green_glass_sphere do
transform = Transformations.compose([
scaling(0.5, 0.5, 0.5),
translation(-0.7, 0.5, -0.8)
])
material = %Material{
color: Color.new(0, 0.2, 0),
ambient: 0,
diffuse: 0.4,
specular: 0.9,
shininess: 300,
reflective: 0.9,
transparency: 0.9,
refractive_index: 1.5
}
%Sphere{material: material} |> Shape.set_transform(transform)
end
end
# Source: lib/tasks/chapter11.ex
defmodule Stops.Stop do
@moduledoc """
Domain model for a Stop.
"""
alias Stops.{Api, Stop}
@derive {Jason.Encoder, except: [:bike_storage, :fare_facilities]}
defstruct id: nil,
parent_id: nil,
child_ids: [],
name: nil,
note: nil,
accessibility: [],
address: nil,
municipality: nil,
parking_lots: [],
fare_facilities: [],
bike_storage: [],
latitude: nil,
longitude: nil,
is_child?: false,
station?: false,
has_fare_machine?: false,
has_charlie_card_vendor?: false,
closed_stop_info: nil,
type: nil,
platform_name: nil,
platform_code: nil,
description: nil,
zone: nil
@type id_t :: String.t()
@type stop_type :: :stop | :station | :entrance | :generic_node
@type t :: %Stop{
id: id_t,
parent_id: id_t | nil,
child_ids: [id_t],
name: String.t() | nil,
note: String.t() | nil,
accessibility: [String.t()],
address: String.t() | nil,
municipality: String.t() | nil,
parking_lots: [Stop.ParkingLot.t()],
fare_facilities: MapSet.t(Api.fare_facility()),
bike_storage: [Api.bike_storage_types()],
latitude: float,
longitude: float,
is_child?: boolean,
station?: boolean,
has_fare_machine?: boolean,
has_charlie_card_vendor?: boolean,
closed_stop_info: Stops.Stop.ClosedStopInfo.t() | nil,
type: stop_type,
platform_name: String.t() | nil,
platform_code: String.t() | nil,
description: String.t() | nil,
zone: String.t() | nil
}
defimpl Util.Position do
def latitude(stop), do: stop.latitude
def longitude(stop), do: stop.longitude
end
@doc """
Returns a boolean indicating whether we know the accessibility status of the stop.
"""
@spec accessibility_known?(t) :: boolean
def accessibility_known?(%__MODULE__{accessibility: ["unknown" | _]}), do: false
def accessibility_known?(%__MODULE__{}), do: true
@doc """
Returns a boolean indicating whether we consider the stop accessible.
A stop can have accessibility features but not be considered accessible.
"""
@spec accessible?(t) :: boolean
def accessible?(%__MODULE__{accessibility: ["accessible" | _]}), do: true
def accessible?(%__MODULE__{}), do: false
@doc """
Returns a boolean indicating whether the stop has a known zone
"""
@spec has_zone?(t | id_t) :: boolean
def has_zone?(<<id::binary>>) do
case Stops.Repo.get(id) do
nil -> false
stop -> has_zone?(stop)
end
end
def has_zone?(%__MODULE__{zone: zone}) when not is_nil(zone), do: true
def has_zone?(_), do: false
end
defmodule Stops.Stop.ParkingLot do
@moduledoc """
A group of parking spots at a Stop.
"""
@derive Jason.Encoder
defstruct [
:name,
:address,
:capacity,
:payment,
:manager,
:utilization,
:note,
:latitude,
:longitude
]
@type t :: %Stops.Stop.ParkingLot{
name: String.t(),
address: String.t(),
capacity: Stops.Stop.ParkingLot.Capacity.t() | nil,
payment: Stops.Stop.ParkingLot.Payment.t() | nil,
manager: Stops.Stop.ParkingLot.Manager.t() | nil,
utilization: Stops.Stop.ParkingLot.Utilization.t() | nil,
note: String.t() | nil,
latitude: float | nil,
longitude: float | nil
}
end
defmodule Stops.Stop.ParkingLot.Payment do
@moduledoc """
Info about payment for parking at a Stop.
GTFS Property Mappings:
:methods - list of payment-form-accepted
:mobile_app - {payment-app, payment-app-id, payment-app-url}
:daily_rate, :monthly_rate - {fee-daily, fee-monthly}
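For example, a hypothetical property map like
%{"payment-form-accepted" => ["cash"], "fee-daily" => "$5.00"}
parses to a struct with methods: ["cash"], daily_rate: "$5.00", and no monthly
rate (assuming `Stops.Helpers.struct_or_nil/1` returns nil for an all-nil
mobile app struct).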
"""
@derive Jason.Encoder
defstruct [:methods, :mobile_app, :daily_rate, :monthly_rate]
@type t :: %__MODULE__{
methods: [String.t()],
mobile_app: Stops.Stop.ParkingLot.Payment.MobileApp.t() | nil,
daily_rate: String.t() | nil,
monthly_rate: String.t() | nil
}
@spec parse(map) :: t
def parse(props) do
%__MODULE__{
methods: Map.get(props, "payment-form-accepted", []),
mobile_app:
Stops.Helpers.struct_or_nil(Stops.Stop.ParkingLot.Payment.MobileApp.parse(props)),
daily_rate: Map.get(props, "fee-daily"),
monthly_rate: Map.get(props, "fee-monthly")
}
end
end
defmodule Stops.Stop.ParkingLot.Payment.MobileApp do
@moduledoc """
GTFS Property Mappings:
:name - payment-app
:id - payment-app-id
:url - payment-app-url
"""
@derive Jason.Encoder
defstruct [:name, :id, :url]
@type t :: %__MODULE__{
name: String.t() | nil,
id: String.t() | nil,
url: String.t() | nil
}
@spec parse(map) :: t
def parse(props) do
%__MODULE__{
name: Map.get(props, "payment-app"),
id: Map.get(props, "payment-app-id"),
url: Map.get(props, "payment-app-url")
}
end
end
defmodule Stops.Stop.ParkingLot.Capacity do
@moduledoc """
Info about parking capacity at a Stop.
GTFS Property Mappings:
:total - capacity
:accessible - capacity-accessible
:overnight - overnight-allowed
:type - enclosed
"""
@derive Jason.Encoder
defstruct [:total, :accessible, :overnight, :type]
@type t :: %__MODULE__{
total: integer | nil,
accessible: integer | nil,
overnight: String.t(),
type: String.t() | nil
}
@spec parse(map) :: t
def parse(props) do
%__MODULE__{
total: Map.get(props, "capacity"),
accessible: Map.get(props, "capacity-accessible"),
overnight: pretty_overnight_msg(Map.get(props, "overnight-allowed")),
type: pretty_parking_type(Map.get(props, "enclosed"))
}
end
# GTFS values:
# "1 for true, 2 for false, or 0 for no information"
@spec pretty_parking_type(integer) :: String.t() | nil
defp pretty_parking_type(0), do: nil
defp pretty_parking_type(1), do: "Garage"
defp pretty_parking_type(2), do: "Surface Lot"
@spec pretty_overnight_msg(String.t() | nil) :: String.t()
defp pretty_overnight_msg("no"), do: "Not available"
defp pretty_overnight_msg("yes"), do: "Available"
defp pretty_overnight_msg("yes-except-snow"), do: "Available, except during snow emergencies"
defp pretty_overnight_msg("no-except-snow"), do: "Not available, except during snow emergencies"
defp pretty_overnight_msg("yes-snow-unknown"),
do: "Available. During snow emergencies, check posted signs."
defp pretty_overnight_msg(_), do: "Unknown"
end
defmodule Stops.Stop.ParkingLot.Manager do
@moduledoc """
A manager of a parking lot.
GTFS Property Mappings:
:name - operator
:contact: - contact
:phone - contact-phone
:url - contact-url
"""
@derive Jason.Encoder
defstruct [:name, :contact, :phone, :url]
@type t :: %__MODULE__{
name: String.t() | nil,
contact: String.t() | nil,
phone: String.t() | nil,
url: String.t() | nil
}
@spec parse(map) :: t
def parse(props) do
%__MODULE__{
name: Map.get(props, "operator"),
contact: Map.get(props, "contact"),
phone: Map.get(props, "contact-phone"),
url: Map.get(props, "contact-url")
}
end
end
defmodule Stops.Stop.ParkingLot.Utilization do
@moduledoc """
Utilization data for a parking lot.
GTFS Property Mappings:
:arrive_before - weekday-arrive-before
:typical: - weekday-typical-utilization
"""
@derive Jason.Encoder
defstruct [:arrive_before, :typical]
@type t :: %__MODULE__{
arrive_before: String.t() | nil,
typical: integer | nil
}
@spec parse(map) :: t
def parse(props) do
%__MODULE__{
arrive_before: pretty_date(Map.get(props, "weekday-arrive-before")),
typical: Map.get(props, "weekday-typical-utilization")
}
end
@spec pretty_date(String.t()) :: String.t() | nil
defp pretty_date(date) do
case Timex.parse(date, "{h24}:{m}:{s}") do
{:ok, time} ->
case Timex.format(time, "{h24}:{m} {AM}") do
{:ok, out} -> out
end
{:error, _} ->
nil
end
end
end
defmodule Stops.Stop.ClosedStopInfo do
@moduledoc """
Information about stations not in API data.
"""
@derive Jason.Encoder
defstruct reason: "",
info_link: ""
@type t :: %Stops.Stop.ClosedStopInfo{
reason: String.t(),
info_link: String.t()
}
end
# Source: apps/stops/lib/stop.ex
defmodule Day20 do
@moduledoc """
AoC 2019, Day 20 - Donut Maze
"""
@doc """
Steps to get from AA to ZZ
"""
def part1 do
Util.priv_file(:day20, "day20_input.txt")
|> File.read!()
|> path("AA", "ZZ")
end
@doc """
Recursive maze steps from AA to ZZ
"""
def part2 do
Util.priv_file(:day20, "day20_input.txt")
|> File.read!()
|> recursive_path("AA", "ZZ")
end
@doc """
Compute recursive path length in a map
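Inner portals descend one level and portals on the outer edge ascend one
level; AA and ZZ only act as entrance and exit on the outermost level.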
"""
def recursive_path(str, start, goal) do
{links, map} = parse(str)
start = String.to_charlist(start)
goal = String.to_charlist(goal)
start_v = Map.get(links, start) |> elem(1)
goal_v = Map.get(links, goal) |> elem(1)
b = bounds(map)
bfs(map, b, {goal_v, 0}, :queue.from_list([{{start_v, 0}, 0}]), MapSet.new([{start_v, 0}]))
end
defp bounds(map) do
pts = Map.keys(map)
x = Enum.map(pts, fn {x, _y} -> x end) |> Enum.min_max()
y = Enum.map(pts, fn {_x, y} -> y end) |> Enum.min_max()
{x, y}
end
defp bfs(map, b, goal, q, visited) do
{{:value, {target = {loc, depth}, steps}}, q} = :queue.out(q)
if target == goal do
steps
else
{new_q, new_visited} = recursive_neighbors(map, b, q, visited, loc, depth, steps)
bfs(map, b, goal, new_q, new_visited)
end
end
defp recursive_neighbors(map, b, q, visited, loc, depth, steps) do
{new_q, new_v} = add_link({q, visited}, map, b, depth, loc, steps)
get_neigh(map, loc)
|> Enum.filter(&(first_level(&1, depth, b)))
|> Enum.filter(fn {loc, _rest} -> !MapSet.member?(visited, {loc, depth}) end)
|> Enum.map(fn {loc, _type} -> {loc, depth} end)
|> Enum.reduce({new_q, new_v},
fn (item, {new_q, new_v}) ->
{:queue.in({item, steps+1}, new_q), MapSet.put(new_v, item)}
end)
end
defp add_link(curr, map, b, depth, loc, steps), do: add_link(curr, b, depth, {loc, Map.get(map, loc)}, steps)
defp add_link(curr, _b, _depth, {_loc, {:space, _, _}}, _steps), do: curr
defp add_link(curr, _b, _depth, {_loc, {:link, _, nil}}, _steps), do: curr
defp add_link(curr = {q, v}, {{min_x, max_x}, {min_y, max_y}}, depth, {{x, y}, {:link, _str, link}}, steps) do
depth = if x == min_x || x == max_x || y == min_y || y == max_y do
depth - 1
else
depth + 1
end
n = {link, depth}
if MapSet.member?(v, n) do
curr
else
{:queue.in({n, steps+1}, q), MapSet.put(v, n)}
end
end
defp first_level({{x, y}, {:link, str, _link}}, 0, {{min_x, max_x}, {min_y, max_y}}) do
str in ['AA', 'ZZ'] || (x != min_x && x != max_x && y != min_y && y != max_y)
end
defp first_level({_loc, {:link, str, _link}}, _depth, _b) do
str not in ['AA', 'ZZ']
end
defp first_level(_v, _d, _b), do: true
@doc """
Compute path length in a map
"""
def path(str, start, goal) do
{links, map} = parse(str)
g = make_graph(map)
start = String.to_charlist(start)
goal = String.to_charlist(goal)
start_v = Map.get(links, start) |> elem(1)
goal_v = Map.get(links, goal) |> elem(1)
path = Graph.get_shortest_path(g, start_v, goal_v)
Enum.count(path) - 1
end
defp make_graph(map) do
Map.keys(map)
|> Enum.reduce(Graph.new(), &(add_node(&1, &2, map)))
end
defp add_node(k, g, map) do
{type, _str, link} = Map.get(map, k)
g = if type == :link && link != nil do
Graph.add_edge(g, k, link)
|> Graph.add_edge(link, k)
else
g
end
get_neigh(map, k)
|> Enum.reduce(g, fn ({loc, {v, _, _}}, acc) ->
if v == :space do
Graph.add_edge(acc, k, loc)
|> Graph.add_edge(loc, k)
else
acc
end
end)
end
@doc """
Parse a maze
"""
def parse(str) do
String.split(str, "\n", trim: true)
|> Enum.with_index()
|> Enum.map(&parse_row/1)
|> List.flatten()
|> Enum.filter(fn {_loc, v} -> v != "#" and v != " " end)
|> Enum.into(%{})
|> link_portals()
end
defp parse_row({row, num}) do
String.graphemes(row)
|> Enum.with_index()
|> Enum.map(fn {c, col} -> {{col, num}, c} end)
end
defp link_portals(map) do
letters = Map.keys(map)
|> Enum.reduce(%{},
fn (k, acc) ->
v = Map.get(map, k)
if v != "." do
Map.put(acc, k, v)
else
acc
end
end)
neighbors = Map.keys(letters)
|> Enum.reduce([],
fn (k, acc) ->
us = Map.get(map, k)
lst = get_neigh(map, k)
if Enum.count(lst) == 2 do
[{{k, us}, lst} | acc]
else
acc
end
end
)
stripped = Map.keys(map)
|> Enum.reduce(%{},
fn (k, acc) ->
v = Map.get(map, k)
if v == "." do
Map.put(acc, k, {:space, nil, nil})
else
acc
end
end)
{links, map} = Enum.reduce(neighbors,
{%{}, stripped},
fn ({a, [b, c]}, {links, whole}) ->
{p, val} = Enum.sort([a, b, c])
|> node_val()
whole = Map.put(whole, p, {:link, val, nil})
v = Map.get(links, val)
links = if v == nil do
Map.put(links, val, {val, p, nil})
else
{_val, other_p, _} = v
Map.put(links, val, {val, p, other_p})
end
{links, whole}
end)
map = Map.keys(map)
|> Enum.reduce(%{},
fn (k, acc) ->
v = {type, str, _} = Map.get(map, k)
if type == :link do
{str, loc1, loc2} = Map.get(links, str)
other_loc = if loc1 == k, do: loc2, else: loc1
Map.put(acc, k, {:link, str, other_loc})
else
Map.put(acc, k, v)
end
end)
{links, map}
end
defp node_val([{{x, _}, c1}, {{x, _}, c2}, {loc, "."}]) do
{loc, String.to_charlist(c1 <> c2)}
end
defp node_val([{loc, "."}, {{x, _}, c1}, {{x, _}, c2}]) do
{loc, String.to_charlist(c1 <> c2)}
end
defp node_val([{{_, y}, c1}, {{_, y}, c2}, {loc, "."}]) do
{loc, String.to_charlist(c1 <> c2)}
end
defp node_val([{loc, "."}, {{_, y}, c1}, {{_, y}, c2}]) do
{loc, String.to_charlist(c1 <> c2)}
end
defp get_neigh(map, {x, y}) do
[{x+1, y}, {x-1, y}, {x, y+1}, {x, y-1}]
|> Enum.map(fn k -> {k, Map.get(map, k)} end)
|> Enum.filter(fn {_k, v} -> v != nil end)
end
end
# Source: apps/day20/lib/day20.ex
defmodule UpsilonBattle.Engine do
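# Note: this default is evaluated once at compile time, so every
# %UpsilonBattle.Engine{} shares the same map_id unless one is set explicitly.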
defstruct map_id: UUID.uuid4()
@doc """
Initializes the engine.
"""
def init(_context) do
end
@doc """
Registers a new user.
May fail if there are already 4 players.
Returns: {:ok, context, [available_start_positions]} or {:ok, context, :observateur}
"""
def add_user(_context, _user_id, _username) do
end
@doc """
Returns the map.
"""
def get_map(_context) do
end
@doc """
Returns the players.
"""
def get_players(_context) do
end
@doc """
Returns true or false :)
"""
def user_known?(_context, _user_id) do
end
@doc """
Returns whether the user is a player or not.
"""
def user_is_player?(_context, _user_id) do
end
@doc """
Stores a user's communication socket.
May fail if the user is unknown.
Returns {:ok, context} or {:erreur, :inconnu}
"""
def register_user_socket(_context, _user_id, _socket) do
end
@doc """
Returns the start positions available to the user.
"""
def get_user_start_position(_context, _user_id) do
end
@doc """
Places the user on the map (only during the placement phase).
Returns {:ok, context} or {:erreur, :inconnu}, {:erreur, :impossible}, {:erreur, :deja_fait}
"""
def set_user_initial_position(_context, _user_id, _x, _y) do
end
@doc """
Tells which player has to play.
"""
def get_current_player(_context) do
end
@doc """
Tells whether the user is the current player.
"""
def is_current_player(_context, _user_id) do
end
@doc """
Notifies the players that they must refresh themselves.
"""
def notify_all_users_refresh(_context) do
UpsilonBattle.RefreshChannel.refresh_all_players()
end
@doc """
Returns:
{:erreur, :inconnu}
{:erreur, :pas_son_tour}
{:ok, :move} if we are waiting for the player's move
{:ok, :attack} if we are waiting for the player's attack
{:ok, :position} if we are waiting for the player's position
"""
def get_player_next_action(_context, _user_id) do
end
@doc """
move_set = [{x, y}]
describing every tile the player will pass through.
Returns {:ok, context} or {:erreur, :colision} or {:erreur, :inconnu}
"""
def player_moves(_context, _user_id, _move_set) do
end
@doc """
Attacks a tile.
Returns {:ok, context} or {:erreur, :inconnu}
"""
def player_attacks(_context, _user_id, _target_x, _target_y) do
end
@doc """
Records that the current player must be switched.
Returns {:ok, context}
"""
def switch_to_next_player(_context) do
end
end
# Source: lib/battle/engine.ex
defmodule Nosedrum do
@moduledoc """
`nosedrum` is a command framework for use with the excellent
[`nostrum`](https://github.com/Kraigie/nostrum) library.
It contains behaviour specifications for easily implementing command handling
in your bot along with other conveniences to ease creating an interactive bot.
`nosedrum`'s provided implementations are largely based on what was originally
written for [bolt](https://github.com/jchristgit/bolt). bolt also contains
around [57
commands](https://github.com/jchristgit/bolt/tree/master/lib/bolt/cogs) based
off the `Nosedrum.Command` behaviour that you can explore if you're looking
for inspiration.
The command processing related parts of the framework consists of three parts:
- `Nosedrum.Command`, the behaviour that all commands must implement.
- `Nosedrum.Invoker`, the behaviour of command processors. Command processors
take a message, look it up in the provided storage implementation,
and invoke commands as required. nosedrum ships with an implementation of
this based on bolt's original command parser named `Nosedrum.Invoker.Split`.
- `Nosedrum.Storage`, the behaviour of command storages. Command storages
allow for fast and simple lookups of commands and command groups and store
command names along with their corresponding `Nosedrum.Command`
implementations internally. An ETS-based command storage implementation is
provided with `Nosedrum.Storage.ETS`.
Additionally, the following utilities are provided:
- `Nosedrum.Converters`, functions for converting parts of messages to objects
from Nostrum such as channels, members, and roles.
- `Nosedrum.MessageCache`, a behaviour for defining message caches, along with
an ETS-based and an Agent-based implementation.
Simply add `:nosedrum` to your `mix.exs`:
def deps do
[
{:nosedrum, "~> #{String.replace_trailing(Mix.Project.config()[:version], ".0", "")}"},
# To use the GitHub version of Nostrum:
# {:nostrum, github: "Kraigie/nostrum", override: true}
]
end
# Getting started
To start off, your commands need to implement the `Nosedrum.Command` behaviour.
As a simple example, let's reimplement
[`ed`](https://www.gnu.org/fun/jokes/ed-msg.html).
defmodule MyBot.Cogs.Ed do
@behaviour Nosedrum.Command
alias Nostrum.Api
@impl true
def usage, do: ["ed [-GVhs] [-p string] [file]"]
@impl true
def description, do: "Ed is the standard text editor."
@impl true
def predicates, do: []
@impl true
def command(msg, _args) do
{:ok, _msg} = Api.create_message(msg.channel_id, "?")
end
end
With your commands defined, choose a `Nosedrum.Storage` implementation and add
it to your application callback. We will use the included
`Nosedrum.Storage.ETS` implementation here, but feel free to write your own:
defmodule MyBot.Application do
use Application
def start(_type, _args) do
children = [
Nosedrum.Storage.ETS,
MyBot.Consumer
]
options = [strategy: :one_for_one, name: MyBot.Supervisor]
Supervisor.start_link(children, options)
end
end
Finally, we hook things up in our consumer: we will load commands once the bot
is ready, and invoke the command invoker on each message.
defmodule MyBot.Consumer do
alias Nosedrum.Invoker.Split, as: CommandInvoker
alias Nosedrum.Storage.ETS, as: CommandStorage
use Nostrum.Consumer
@commands %{
"ed" => MyBot.Cogs.Ed
}
def handle_event({:READY, {_data}, _ws_state}) do
Enum.each(@commands, fn {name, cog} -> CommandStorage.add_command([name], cog) end)
end
def handle_event({:MESSAGE_CREATE, {msg}, _ws_state}) do
CommandInvoker.handle_message(msg, CommandStorage)
end
def handle_event(_data), do: :ok
end
That's all we need to get started with. If you want to customize your bot's
prefix, set the `nosedrum.prefix` configuration variable:
config :nosedrum,
prefix: System.get_env("BOT_PREFIX") || "."
If no value is configured, the default prefix used depends on the chosen
command invoker implementation. `Nosedrum.Invoker.Split` defaults to `.`.
"""
# vim: textwidth=80 sw=2 ts=2:
end
# Source: lib/nosedrum.ex
defmodule ExViva.Decoders.Sample do
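# v!(key) expands to `Map.fetch!(sample, key)`, reading the `sample`
# variable bound in the caller's scope.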
defmacrop v!(key) do
quote do
Map.fetch!(var!(sample), unquote(key))
end
end
def simple_decode(sample) do
unit = v!("Unit")
type = parse_type(v!("Type"), unit)
%ExViva.Sample{
heading: v!("Heading"),
unit: v!("Unit"),
trend: v!("Trend"),
water_level_reference: v!("WaterLevelReference"),
calm: v!("Calm"),
type: type,
value: parse_value(v!("Value"), unit, type)
}
|> with_name(sample)
|> with_common(sample)
end
defp with_station_id(result, %{"StationID" => station_id}) do
%{result | station_id: station_id}
end
defp with_message(result, %{"Msg" => message}) do
%{result | message: message}
end
defp with_quality(result, %{"Quality" => "Ok"}) do
%{result | quality: "ok"}
end
defp with_quality(result, %{"Quality" => quality}) do
%{result | quality: quality}
end
defp with_timestamp(result, %{"Updated" => updated}) do
%{result | updated_at: parse_timestamp(updated)}
end
defp with_name(result, %{"Name" => name}) do
%{result | name: name}
end
defp with_common(result, sample) do
result
|> with_name(sample)
|> with_station_id(sample)
|> with_message(sample)
|> with_quality(sample)
|> with_timestamp(sample)
end
defp directional_float(value) do
case String.split(value, " ", parts: 2) do
[_dir, value] ->
String.to_float(value)
[value] ->
single_float(value)
end
end
defp parse_timestamp(timestamp) do
String.replace(timestamp, " ", "T")
|> NaiveDateTime.from_iso8601!()
end
defp single_float(value) do
try do
String.to_float(value)
rescue
ArgumentError ->
String.to_integer(value) * 1.0
end
end
defp parse_value(value, unit, type) do
try do
parse_value(value, unit)
rescue
error ->
require Logger
Logger.error("failed to decode: #{inspect(type)}")
reraise error, __STACKTRACE__
end
end
defp parse_value(value, "m³/s"), do: single_float(value)
defp parse_value(value, "mm/h"), do: single_float(value)
defp parse_value(value, "#/cm2/h"), do: single_float(value)
defp parse_value("-", _unit), do: nil
defp parse_value(value, unit) when unit in ["m/s", "knop", "s"], do: directional_float(value)
defp parse_value(value, "cm"), do: single_float(value)
defp parse_value(value, unit) when unit in ["‰", "%", "kg/m³", "°C", "mbar"],
do: single_float(value)
defp parse_value(">" <> value, "m") do
{:less_than, String.to_integer(value)}
end
defp parse_value(value, "m"), do: directional_float(value)
defp parse_type("wind", _unit), do: :wind
defp parse_type("level", _unit), do: :water_level
defp parse_type("watertemp", _unit), do: :water_temperature
defp parse_type("stream", _unit), do: :stream
defp parse_type("water", "‰"), do: :salinity
defp parse_type("water", "kg/m³"), do: :water_density
defp parse_type("water", "m³/s"), do: :water_flow
defp parse_type("pressure", "mbar"), do: :air_pressure
defp parse_type("air", "%"), do: :humidity
defp parse_type("airtemp", "°C"), do: :temperature
defp parse_type("sight", "m"), do: :sight
defp parse_type("wave", "m"), do: :wave_height
defp parse_type("wave", "s"), do: :wave_period
defp parse_type("rain", "mm/h"), do: :rain
defp parse_type("rain", "#/cm2/h"), do: :hail_intensity
end
|
lib/ex_viva/decoders/sample.ex
| 0.582729 | 0.535463 |
sample.ex
|
starcoder
|
defmodule XDR.Type.Array do
@moduledoc """
A fixed-length array of some other type
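A hypothetical usage sketch (assumes an `XDR.Type.Int` element type is
available; the functions are those of the `XDR.Type` protocol implemented
below):
array =
%XDR.Type.Array{}
|> XDR.Type.build_type(type: %XDR.Type.Int{}, length: 3)
|> XDR.Type.build_value!([1, 2, 3])
encoded = XDR.Type.encode!(array)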
"""
defstruct type_name: "Array", length: nil, data_type: nil, values: []
@type t() :: %__MODULE__{
type_name: String.t(),
length: XDR.Size.t(),
data_type: XDR.Type.t(),
values: list(XDR.Type.t())
}
@type options() :: [type: XDR.Type.t(), length: XDR.Size.t()]
defimpl XDR.Type do
alias XDR.Error
def build_type(%{type_name: name} = type, options) do
data_type = Keyword.get(options, :type)
length = Keyword.get(options, :length)
unless data_type && XDR.Size.valid?(length) do
raise Error,
message: ":length and :type options required for #{name}",
type: name
end
%{type | data_type: data_type, length: length}
end
def resolve_type!(%{data_type: data_type} = type, %{} = custom_types) do
%{type | data_type: XDR.Type.resolve_type!(data_type, custom_types)}
end
def build_value!(%{data_type: data_type, length: length, type_name: name} = type, raw_values)
when is_list(raw_values) do
unless length(raw_values) == length do
raise Error,
message: "Wrong length, expected #{length} values",
type: name,
data: raw_values
end
values =
raw_values
|> Enum.with_index()
|> Enum.map(fn {value, index} ->
Error.wrap_call(:build_value!, [data_type, value], index)
end)
%{type | values: values}
end
def extract_value!(%{values: values}) do
values
|> Enum.with_index()
|> Enum.map(fn {value, index} ->
Error.wrap_call(:extract_value!, [value], index)
end)
end
def encode!(%{values: values}) do
values
|> Enum.with_index()
|> Enum.map(fn {value, index} ->
Error.wrap_call(:encode!, [value], index)
end)
|> Enum.join()
end
def decode!(%{length: length, data_type: data_type} = type, encoding) do
{reversed_values, rest} =
Enum.reduce(0..(length - 1), {[], encoding}, fn index, {vals, prev_rest} ->
{current_value, next_rest} =
Error.wrap_call(XDR.Type, :decode!, [data_type, prev_rest], index)
{[current_value | vals], next_rest}
end)
{%{type | values: Enum.reverse(reversed_values)}, rest}
end
end
end
|
lib/xdr/types/array.ex
| 0.791378 | 0.476214 |
array.ex
|
starcoder
|
defmodule ETag.Plug do
@moduledoc """
A drop-in plug to add support for shallow [ETags](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/ETag).
Shallow means that it uses the whole response to generate the ETag and does
not care about the specific content of each response. It is not
context-sensitive.
For deep (that is, context-sensitive) generation of ETags, take a look
at [Phoenix ETag](https://github.com/michalmuskala/phoenix_etag).
# Usage
You can use the plug without any configuration; it then falls back to the
defaults described in the "Configuration" section.
plug ETag.Plug
You can also provide a number of options, see the "Configuration" section for details.
plug ETag.Plug,
generator: MyCustomGenerator,
methods: ["GET", "HEAD"],
status_codes: [:ok, 201, :not_modified]
# Configuration
## `generator`
Expects a module implementing the `ETag.Generator` behaviour. The plug ships
with a number of "default" generators:
- `ETag.Generator.MD5`
- `ETag.Generator.SHA1`
- `ETag.Generator.SHA512`
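A custom generator is simply a module implementing the `ETag.Generator`
behaviour. A hypothetical sketch (judging from `generate_etag/2` below, the
behaviour expects a `generate/1` callback receiving the response body):
defmodule MyCustomGenerator do
@behaviour ETag.Generator
@impl true
def generate(content), do: Base.encode16(:erlang.md5(content))
end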
### Default
iex> Application.fetch_env!(:etag_plug, :generator)
#{inspect(Application.fetch_env!(:etag_plug, :generator))}
## `methods`
Expects a list of strings, describing the HTTP methods for which ETags
should be generated and evaluated.
### Default
iex> Application.fetch_env!(:etag_plug, :methods)
#{inspect(Application.fetch_env!(:etag_plug, :methods))}
## `status_codes`
Expects an enumerable of integers (or status atoms) which define the statuses
for which ETags should be handled and generated.
### Default
iex> Application.fetch_env!(:etag_plug, :status_codes)
#{inspect(Application.fetch_env!(:etag_plug, :status_codes))}
"""
import Plug.Conn,
only: [
register_before_send: 2,
resp: 3
]
require Logger
defdelegate init(opts), to: ETag.Plug.Options, as: :sanitize!
def call(conn, opts) do
register_before_send(conn, &handle_etag(&1, opts))
end
def handle_etag(conn, opts) do
if should_handle_etag?(conn, opts) do
do_handle_etag(conn, opts)
else
conn
end
end
defp should_handle_etag?(conn, opts) do
relevant_method?(conn, opts) and relevant_status?(conn, opts)
end
defp relevant_method?(%Plug.Conn{method: method}, opts) do
String.upcase(method) in Keyword.fetch!(opts, :methods)
end
defp relevant_status?(%Plug.Conn{status: status}, opts) do
Plug.Conn.Status.code(status) in Keyword.fetch!(opts, :status_codes)
end
defp do_handle_etag(conn, opts) do
case generate_etag(conn.resp_body, opts) do
nil ->
conn
etag ->
conn
|> ETag.put(etag)
|> respond_304_if_not_modified(etag)
end
end
defp generate_etag(content, opts) do
opts
|> Keyword.fetch!(:generator)
|> apply(:generate, [content])
end
defp respond_304_if_not_modified(conn, etag) do
conn
|> ETag.match?(etag)
|> if do
resp(conn, 304, "")
else
conn
end
end
end
|
lib/etag/plug.ex
| 0.786828 | 0.461684 |
plug.ex
|
starcoder
|
defmodule SMPPEX.Session do
@moduledoc """
Module for implementing custom SMPP Session entities.
To implement a Session entity, one should implement several callbacks (the `SMPPEX.Session` behaviour).
The recommended way to do this is to `use` `SMPPEX.Session`:
```
defmodule MySession do
use SMPPEX.Session
# ...Callback implementation
end
```
In this case all callbacks have reasonable defaults.
"""
alias :erlang, as: Erlang
alias __MODULE__, as: Session
alias SMPPEX.Compat
alias SMPPEX.Pdu
alias SMPPEX.PduStorage
alias SMPPEX.RawPdu
alias SMPPEX.Session.AutoPduHandler
alias SMPPEX.Session.Defaults
alias SMPPEX.SMPPTimers
alias SMPPEX.TransportSession
require Logger
@behaviour TransportSession
defstruct [
:module,
:module_state,
:timers,
:pdus,
:auto_pdu_handler,
:response_limit,
:sequence_number, # increment before use
:time,
:timer_resolution,
:tick_timer_ref
]
@default_call_timeout 5000
@type state :: term
@type request :: term
@type reason :: term
@type reply :: term
@type send_pdu_result :: TransportSession.send_pdu_result()
@type session :: pid
@type from :: TransportSession.from()
@doc """
Invoked when a session is started after a connection is successfully established.
The `args` argument is taken directly from the `ESME.start_link` or `MC.start` call.
The return value should be either `{:ok, state}`, then the session will successfully start and the returned state will be later passed to the other callbacks, or `{:stop, reason}`, then the session will stop with the returned reason.
"""
@callback init(
socket :: TransportSession.socket(),
transport :: TransportSession.transport(),
args :: term
) ::
{:ok, state}
| {:stop, reason}
@doc """
Invoked when the session receives an incoming PDU (which is not a response PDU).
The callback return values indicate the following:
* `{:ok, state}` — use `state` as the new session state;
* `{:ok, pdus, state}` — use `state` as the new session state and additionally send `pdus` to the connection;
* `{:stop, reason, state}` — stop with reason `reason` and use `state` as the new session state.
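For example, a minimal implementation that just logs incoming PDUs (a
sketch; assumes `require Logger` in the implementing module):
```
def handle_pdu(pdu, state) do
Logger.info(inspect(pdu))
{:ok, state}
end
```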
"""
@callback handle_pdu(pdu :: Pdu.t(), state) ::
{:ok, state}
| {:ok, [Pdu.t()], state}
| {:stop, reason, state}
@doc """
Invoked when the session receives an incoming PDU which couldn't be correctly parsed.
The callback return values indicate the following:
* `{:ok, state}` — use `state` as the new session state;
* `{:ok, pdus, state}` — use `state` as the new session state and additionally send `pdus` to the connection;
* `{:stop, reason, state}` — stop with reason `reason` and use `state` as the new session state.
"""
@callback handle_unparsed_pdu(pdu :: RawPdu.t(), error :: term, state) ::
{:ok, state}
| {:ok, [Pdu.t()], state}
| {:stop, reason, state}
@doc """
Invoked when the session receives a response to a previously sent PDU.
`pdu` argument contains the received response PDU, `original_pdu` contains
the previously sent pdu for which the handled response is received.
The callback return values indicate the following:
* `{:ok, state}` — use `state` as the new session state;
* `{:ok, pdus, state}` — use `state` as the new session state and additionally send `pdus` to the connection;
* `{:stop, reason, state}` — stop with reason `reason` and use `state` as the new session state.
"""
@callback handle_resp(pdu :: Pdu.t(), original_pdu :: Pdu.t(), state) ::
{:ok, state}
| {:ok, [Pdu.t()], state}
| {:stop, reason, state}
@doc """
Invoked when the session does not receive responses to previously sent PDUs
within the specified timeout.
The `pdus` argument contains the PDUs for which no response was received. If a
response is received later, it will be dropped (with an `info` log message).
The callback return values indicate the following:
* `{:ok, state}` — use `state` as the new session state;
* `{:ok, pdus, state}` — use `state` as the new session state and additionally send `pdus` to the connection;
* `{:stop, reason, state}` — stop with reason `reason` and use `state` as the new session state.
"""
@callback handle_resp_timeout(pdus :: [Pdu.t()], state) ::
{:ok, state}
| {:ok, [Pdu.t()], state}
| {:stop, reason, state}
@doc """
Invoked when the SMPP session has successfully sent a PDU to the transport or failed to do so.
The `pdu` argument contains the PDU for which the send status is reported. `send_pdu_result` can be
either `:ok` or `{:error, reason}`.
The returned value is used as the new state.
"""
@callback handle_send_pdu_result(pdu :: Pdu.t(), send_pdu_result, state) :: state
@doc """
Invoked when the connection's socket reported an error.
The returned value should be `{reason, state}`. The session stops then with `reason`.
"""
@callback handle_socket_error(error :: term, state) :: {exit_reason :: term, state}
@doc """
Invoked when the connection is closed by the peer.
The returned value should be `{reason, state}`. The session stops then with `reason`.
"""
@callback handle_socket_closed(state) :: {exit_reason :: term, state}
@doc """
Invoked to handle an arbitrary synchronous `request` sent to the session via the `Session.call/3` function.
The `from` argument can be used to send a response asynchronously via `Session.reply/2`.
The returned values indicate the following:
* `{:reply, reply, state}` — reply with `reply` and use `state` as the new state;
* `{:reply, reply, pdus, state}` — reply with `reply`, use `state` as the new state and additionally send `pdus` to the peer;
* `{:noreply, state}` — do not reply and use `state` as the new state. The reply can be sent later via `Session.reply`;
* `{:noreply, pdus, state}` — do not reply, use `state` as the new state and additionally send `pdus` to the peer. The reply can be sent later via `Session.reply`;
* `{:stop, reason, reply, state}` — reply with `reply`, use `state` as the new state and exit with `reason`;
* `{:stop, reason, state}` — do not reply, use `state` as the new state and exit with `reason`.
"""
@callback handle_call(request, from, state) ::
{:reply, reply, state}
| {:reply, reply, [Pdu.t()], state}
| {:noreply, state}
| {:noreply, [Pdu.t()], state}
| {:stop, reason, reply, state}
| {:stop, reason, state}
@doc """
Invoked to handle an arbitrary asynchronous `request` sent to the session via the `Session.cast/2` function.
The returned values indicate the following:
* `{:noreply, state}` — use `state` as the new state;
* `{:noreply, pdus, state}` — use `state` as the new state and additionally send `pdus` to the peer;
* `{:stop, reason, state}` — use `state` as the new state and exit with `reason`.
"""
@callback handle_cast(request, state) ::
{:noreply, state}
| {:noreply, [Pdu.t()], state}
| {:stop, reason, state}
@doc """
Invoked to handle a generic message `request` sent to the session process.
The returned values indicate the following:
* `{:noreply, state}` — use `state` as the new state;
* `{:noreply, pdus, state}` — use `state` as the new state and additionally send `pdus` to the peer;
* `{:stop, reason, state}` — use `state` as the new state and exit with `reason`.
"""
@callback handle_info(request, state) ::
{:noreply, state}
| {:noreply, [Pdu.t()], state}
| {:stop, reason, state}
@doc """
Invoked when the session process is about to exit.
`lost_pdus` contains a list of non-response PDUs sent by the session to the peer which have not yet received a response.
The returned value is either `:stop` or `{:stop, last_pdus, state}`, where `last_pdus` is a list of PDUs which will be sent to the peer before socket close, and `state` is the new state. For example, an ESME can send an unbind PDU or an MC can send negative resps for pending `submit_sm`s if needed.
This callback is called from the underlying `GenServer` `terminate` callbacks, so it has all the corresponding caveats, for example, sometimes it may not be called, see [`GenServer.terminate/2` docs](https://hexdocs.pm/elixir/GenServer.html#c:terminate/2).
"""
@callback terminate(reason, lost_pdus :: [Pdu.t()], state) ::
:stop
| {:stop, [Pdu.t()], state}
@doc """
Invoked to change the state of the session when a different version of a module is loaded (hot code swapping) and the state’s term structure should be changed. The method has the same semantics as the original `GenServer.code_change/3` callback.
"""
@callback code_change(old_vsn :: term | {:down, term}, state, extra :: term) ::
{:ok, state}
| {:error, reason}
defmacro __using__(_) do
quote location: :keep do
@behaviour SMPPEX.Session
require Logger
@doc false
def init(_socket, _transport, args) do
{:ok, args}
end
@doc false
def handle_pdu(_pdu, state), do: {:ok, state}
@doc false
def handle_unparsed_pdu(_pdu, _error, state), do: {:ok, state}
@doc false
def handle_resp(_pdu, _original_pdu, state), do: {:ok, state}
@doc false
def handle_resp_timeout(_pdus, state), do: {:ok, state}
@doc false
def handle_send_pdu_result(_pdu, _result, state), do: state
@doc false
def handle_socket_error(error, state), do: {{:socket_error, error}, state}
@doc false
def handle_socket_closed(state), do: {:socket_closed, state}
@doc false
def handle_call(_request, _from, state), do: {:reply, :ok, state}
@doc false
def handle_cast(_request, state), do: {:noreply, state}
@doc false
def handle_info(_request, state), do: {:noreply, state}
@doc false
def terminate(reason, lost_pdus, _state) do
Logger.info(
"Session #{inspect(self())} stopped with reason: #{inspect(reason)}, lost_pdus: #{
inspect(lost_pdus)
}"
)
:stop
end
@doc false
def code_change(_vsn, state, _extra), do: {:ok, state}
defoverridable init: 3,
handle_pdu: 2,
handle_unparsed_pdu: 3,
handle_resp: 3,
handle_resp_timeout: 2,
handle_send_pdu_result: 3,
handle_socket_error: 2,
handle_socket_closed: 1,
handle_call: 3,
handle_cast: 2,
handle_info: 2,
terminate: 3,
code_change: 3
end
end
# Public interface
@spec send_pdu(session, Pdu.t()) :: :ok
@doc """
Sends a PDU from the session to the peer.
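For example (a sketch; assumes the PDU is built with `SMPPEX.Pdu.Factory`):
Session.send_pdu(session, SMPPEX.Pdu.Factory.enquire_link())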
"""
def send_pdu(pid, pdu) do
TransportSession.call(pid, {:send_pdu, pdu})
end
@spec stop(session) :: :ok
@doc """
Stops the session synchronously.
"""
def stop(pid, reason \\ :normal) do
TransportSession.call(pid, {:stop, reason})
end
@spec call(session, request :: term, timeout) :: term
@doc """
Makes a synchronous call to the session.
The call is handled by `handle_call/3` `SMPPEX.Session` callback.
"""
def call(pid, request, timeout \\ @default_call_timeout) do
TransportSession.call(pid, {:call, request}, timeout)
end
@spec cast(session, request :: term) :: :ok
@doc """
Makes an asynchronous call to the session.
The call is handled by `handle_cast/2` `SMPPEX.Session` callback.
"""
def cast(pid, request) do
TransportSession.cast(pid, {:cast, request})
end
@spec reply(from, response :: term) :: :ok
@doc """
Replies to a client that called the `Session.call` function.
This function can be used to explicitly send a reply to a client that called `call/3`.
`from` must be the `from` argument (the second argument) accepted by `handle_call/3` callbacks.
The return value is always `:ok`.
"""
def reply(from, response) do
TransportSession.reply(from, response)
end
# SMPP.TransportSession callbacks
def init(socket, transport, [{module, args}, session_opts]) do
case module.init(socket, transport, args) do
{:ok, state} ->
timer_resolution =
Keyword.get(session_opts, :timer_resolution, Defaults.timer_resolution())
timer_ref = Erlang.start_timer(timer_resolution, self(), :emit_tick)
enquire_link_limit =
Keyword.get(session_opts, :enquire_link_limit, Defaults.enquire_link_limit())
enquire_link_resp_limit =
Keyword.get(session_opts, :enquire_link_resp_limit, Defaults.enquire_link_resp_limit())
inactivity_limit =
Keyword.get(session_opts, :inactivity_limit, Defaults.inactivity_limit())
session_init_limit =
Keyword.get(session_opts, :session_init_limit, Defaults.session_init_limit())
time = Compat.monotonic_time()
timers =
SMPPTimers.new(
time,
session_init_limit,
enquire_link_limit,
enquire_link_resp_limit,
inactivity_limit
)
pdu_storage = PduStorage.new()
response_limit = Keyword.get(session_opts, :response_limit, Defaults.response_limit())
auto_pdu_handler = AutoPduHandler.new()
sequence_number = Keyword.get(session_opts, :sequence_number, 0)
{:ok,
%Session{
module: module,
module_state: state,
timers: timers,
pdus: pdu_storage,
auto_pdu_handler: auto_pdu_handler,
response_limit: response_limit,
sequence_number: sequence_number,
time: time,
timer_resolution: timer_resolution,
tick_timer_ref: timer_ref
}}
{:stop, _} = stop ->
stop
end
end
def handle_pdu({:unparsed_pdu, raw_pdu, error}, st) do
{st.module.handle_unparsed_pdu(raw_pdu, error, st.module_state), st}
|> process_handle_unparsed_pdu_reply()
end
def handle_pdu({:pdu, pdu}, st) do
new_st = update_timers_with_incoming_pdu(pdu, st)
case AutoPduHandler.handle_pdu(new_st.auto_pdu_handler, pdu, new_st.sequence_number) do
:proceed ->
handle_pdu_by_callback_module(pdu, new_st)
{:skip, pdus, new_sequence_number} ->
{:ok, pdus, %Session{new_st | sequence_number: new_sequence_number}}
end
end
def handle_send_pdu_result(pdu, send_pdu_result, st) do
new_st = update_timers_with_outgoing_pdu(pdu, send_pdu_result, st)
with {:error, _} <- send_pdu_result do
_ = PduStorage.fetch(new_st.pdus, Pdu.sequence_number(pdu))
end
case AutoPduHandler.handle_send_pdu_result(new_st.auto_pdu_handler, pdu) do
:proceed ->
new_module_state =
st.module.handle_send_pdu_result(pdu, send_pdu_result, new_st.module_state)
%Session{new_st | module_state: new_module_state}
:skip ->
new_st
end
end
def handle_call({:send_pdu, pdu}, _from, st) do
{{:reply, :ok, [pdu], st.module_state}, st}
|> process_handle_call_reply()
end
def handle_call({:stop, reason}, _from, st) do
{{:stop, reason, :ok, st.module_state}, st}
|> process_handle_call_reply()
end
def handle_call({:call, request}, from, st) do
{st.module.handle_call(request, from, st.module_state), st}
|> process_handle_call_reply()
end
def handle_call(request, from, st) do
handle_call({:call, request}, from, st)
end
def handle_cast({:cast, request}, st) do
{st.module.handle_cast(request, st.module_state), st}
|> process_handle_cast_reply()
end
def handle_cast(request, st) do
handle_cast({:cast, request}, st)
end
@doc false
def handle_info({:timeout, _timer_ref, :emit_tick}, st) do
new_tick_timer_ref = Erlang.start_timer(st.timer_resolution, self(), :emit_tick)
Erlang.cancel_timer(st.tick_timer_ref)
Kernel.send(self(), {:tick, Compat.monotonic_time()})
{:noreply, [], %Session{st | tick_timer_ref: new_tick_timer_ref}}
end
def handle_info({:tick, time}, st) do
Kernel.send(self(), {:check_timers, time})
Kernel.send(self(), {:check_expired_pdus, time})
{:noreply, [], %Session{st | time: time}}
end
def handle_info({:check_timers, time}, st) do
check_timers(time, st)
end
def handle_info({:check_expired_pdus, time}, st) do
check_expired_pdus(time, st)
end
def handle_info(request, st) do
{st.module.handle_info(request, st.module_state), st}
|> process_handle_info_reply()
end
def handle_socket_closed(st) do
{reason, new_module_state} = st.module.handle_socket_closed(st.module_state)
{reason, %Session{st | module_state: new_module_state}}
end
def handle_socket_error(error, st) do
{reason, new_module_state} = st.module.handle_socket_error(error, st.module_state)
{reason, %Session{st | module_state: new_module_state}}
end
def terminate(reason, st) do
lost_pdus = PduStorage.fetch_all(st.pdus)
case st.module.terminate(reason, lost_pdus, st.module_state) do
:stop ->
{[], st}
{:stop, pdus, new_module_state} when is_list(pdus) ->
{pdus, %Session{st | module_state: new_module_state}}
other ->
exit({:bad_terminate_reply, other})
end
end
def code_change(old_vsn, st, extra) do
case st.module.code_change(old_vsn, st.module_state, extra) do
{:ok, new_module_state} ->
{:ok, %Session{st | module_state: new_module_state}}
{:error, _} = err ->
err
end
end
# Private
defp handle_pdu_by_callback_module(pdu, st) do
if Pdu.resp?(pdu) do
pdu
|> handle_resp_pdu(st)
|> process_handle_resp_reply()
else
pdu
|> handle_non_resp_pdu(st)
|> process_handle_pdu_reply()
end
end
defp handle_non_resp_pdu(pdu, st) do
{st.module.handle_pdu(pdu, st.module_state), st}
end
defp handle_resp_pdu(pdu, st) do
sequence_number = Pdu.sequence_number(pdu)
case PduStorage.fetch(st.pdus, sequence_number) do
[] ->
Logger.info(
"Session #{inspect(self())}, resp for unknown pdu(sequence_number: #{sequence_number}), dropping"
)
{{:ok, st.module_state}, st}
[original_pdu] ->
{st.module.handle_resp(pdu, original_pdu, st.module_state), st}
end
end
defp update_timers_with_incoming_pdu(pdu, st) do
new_timers =
cond do
Pdu.bind_resp?(pdu) && Pdu.success_resp?(pdu) ->
st.timers
|> SMPPTimers.handle_bind(st.time)
|> SMPPTimers.handle_peer_transaction(st.time)
Pdu.resp?(pdu) ->
st.timers
|> SMPPTimers.handle_peer_action(st.time)
true ->
st.timers
|> SMPPTimers.handle_peer_transaction(st.time)
end
%Session{st | timers: new_timers}
end
defp update_timers_with_outgoing_pdu(pdu, send_pdu_result, st) do
new_timers =
if send_pdu_result == :ok and Pdu.bind_resp?(pdu) and Pdu.success_resp?(pdu) do
st.timers
|> SMPPTimers.handle_bind(st.time)
else
st.timers
end
%Session{st | timers: new_timers}
end
defp check_expired_pdus(time, st) do
AutoPduHandler.drop_expired(st.auto_pdu_handler, time)
case PduStorage.fetch_expired(st.pdus, time) do
[] ->
{:noreply, [], st}
pdus ->
module_reply = st.module.handle_resp_timeout(pdus, st.module_state)
process_handle_resp_timeout_reply({module_reply, st})
end
end
defp check_timers(time, st) do
case SMPPTimers.handle_tick(st.timers, time) do
{:ok, new_timers} ->
new_st = %Session{st | timers: new_timers}
{:noreply, [], new_st}
{:stop, reason} ->
Logger.info("Session #{inspect(self())}, being stopped by timers(#{reason})")
{:stop, {:timers, reason}, [], st}
{:enquire_link, new_timers} ->
{enquire_link, new_sequence_number} =
AutoPduHandler.enquire_link(
st.auto_pdu_handler,
time + st.response_limit,
st.sequence_number
)
{:noreply, [enquire_link],
%Session{
st
| sequence_number: new_sequence_number,
timers: new_timers
}}
end
end
defp save_sent_pdus(pdus, st, pdus_to_send \\ [])
defp save_sent_pdus([], st, pdus_to_send), do: {st, Enum.reverse(pdus_to_send)}
defp save_sent_pdus([pdu | pdus], st, pdus_to_send) do
if Pdu.resp?(pdu) do
save_sent_pdus(pdus, st, [pdu | pdus_to_send])
else
sequence_number = st.sequence_number + 1
new_pdu = %Pdu{pdu | sequence_number: sequence_number}
true = PduStorage.store(st.pdus, new_pdu, st.time + st.response_limit)
new_st = %Session{st | sequence_number: sequence_number}
save_sent_pdus(pdus, new_st, [new_pdu | pdus_to_send])
end
end
defp process_handle_pdu_reply({{:ok, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_pdu_reply({{:ok, _pdus, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_pdu_reply({{:stop, _reason, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_pdu_reply({reply, st}), do: {:stop, {:bad_handle_pdu_reply, reply}, [], st}
defp process_handle_unparsed_pdu_reply({{:ok, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_unparsed_pdu_reply({{:ok, _pdus, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_unparsed_pdu_reply({{:stop, _reason, _mst}, _st} = arg),
do: process_reply(arg)
defp process_handle_unparsed_pdu_reply({reply, st}),
do: {:stop, {:bad_handle_unparsed_pdu_reply, reply}, [], st}
defp process_handle_resp_reply({{:ok, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_resp_reply({{:ok, _pdus, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_resp_reply({{:stop, _reason, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_resp_reply({reply, st}),
do: {:stop, {:bad_handle_resp_reply, reply}, [], st}
defp process_handle_resp_timeout_reply({{:ok, mst}, st}),
do: process_reply({{:noreply, mst}, st})
defp process_handle_resp_timeout_reply({{:ok, pdus, mst}, st}),
do: process_reply({{:noreply, pdus, mst}, st})
defp process_handle_resp_timeout_reply({{:stop, _reason, _mst}, _st} = arg),
do: process_reply(arg)
defp process_handle_resp_timeout_reply({reply, st}),
do: {:stop, {:bad_handle_resp_timeout_reply, reply}, [], st}
defp process_handle_call_reply({{:reply, _reply, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_call_reply({{:reply, _reply, _pdus, _mst}, _st} = arg),
do: process_reply(arg)
defp process_handle_call_reply({{:noreply, _pdus, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_call_reply({{:noreply, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_call_reply({{:stop, _rsn, _reply, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_call_reply({{:stop, _rsn, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_call_reply({reply, st}),
do: {:stop, {:bad_handle_call_reply, reply}, [], st}
defp process_handle_cast_reply({{:noreply, _pdus, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_cast_reply({{:noreply, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_cast_reply({{:stop, _rsn, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_cast_reply({reply, st}),
do: {:stop, {:bad_handle_cast_reply, reply}, [], st}
defp process_handle_info_reply({{:noreply, _pdus, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_info_reply({{:noreply, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_info_reply({{:stop, _rsn, _mst}, _st} = arg), do: process_reply(arg)
defp process_handle_info_reply({reply, st}),
do: {:stop, {:bad_handle_info_reply, reply}, [], st}
defp process_reply({{:ok, module_state}, st}) do
{:ok, [], %Session{st | module_state: module_state}}
end
defp process_reply({{:ok, pdus, module_state}, st}) do
{new_st, pdus_to_send} = save_sent_pdus(pdus, st)
{:ok, pdus_to_send, %Session{new_st | module_state: module_state}}
end
defp process_reply({{:reply, reply, module_state}, st}) do
{:reply, reply, [], %Session{st | module_state: module_state}}
end
defp process_reply({{:reply, reply, pdus, module_state}, st}) do
{new_st, pdus_to_send} = save_sent_pdus(pdus, st)
{:reply, reply, pdus_to_send, %Session{new_st | module_state: module_state}}
end
defp process_reply({{:noreply, module_state}, st}) do
{:noreply, [], %Session{st | module_state: module_state}}
end
defp process_reply({{:noreply, pdus, module_state}, st}) do
{new_st, pdus_to_send} = save_sent_pdus(pdus, st)
{:noreply, pdus_to_send, %Session{new_st | module_state: module_state}}
end
defp process_reply({{:stop, reason, reply, module_state}, st}) do
{:stop, reason, reply, [], %Session{st | module_state: module_state}}
end
defp process_reply({{:stop, reason, module_state}, st}) do
{:stop, reason, [], %Session{st | module_state: module_state}}
end
end
|
lib/smppex/session.ex
| 0.843638 | 0.810779 |
session.ex
|
starcoder
|
defmodule Re.Listing do
@moduledoc """
Model for listings, that is, each apartment or real estate piece on sale.
"""
use Ecto.Schema
import Ecto.Changeset
alias Re.Listings.Liquidity
schema "listings" do
field :uuid, Ecto.UUID
field :type, :string
field :complement, :string
field :description, :string
field :price, :integer
field :property_tax, :float
field :maintenance_fee, :float
field :floor, :string
field :rooms, :integer
field :bathrooms, :integer
field :restrooms, :integer
field :area, :integer
field :garage_spots, :integer, default: 0
field :garage_type, :string
field :suites, :integer
field :dependencies, :integer
field :balconies, :integer
field :has_elevator, :boolean
field :matterport_code, :string
field :status, :string, default: "inactive"
field :is_exclusive, :boolean, default: false
field :is_release, :boolean, default: false
field :is_exportable, :boolean, default: true
field :orientation, :string
field :floor_count, :integer
field :unit_per_floor, :integer
field :sun_period, :string
field :elevators, :integer
field :construction_year, :integer
field :price_per_area, :float
field :suggested_price, :float
field :deactivation_reason, :string
field :sold_price, :integer
field :liquidity_ratio, :float
belongs_to :address, Re.Address
belongs_to :development, Re.Development,
references: :uuid,
foreign_key: :development_uuid,
type: Ecto.UUID
belongs_to :user, Re.User
belongs_to :owner_contact, Re.OwnerContact,
references: :uuid,
foreign_key: :owner_contact_uuid,
type: Ecto.UUID
has_many :images, Re.Image
has_many :price_history, Re.Listings.PriceHistory
has_many :listings_favorites, Re.Favorite
has_many :favorited, through: [:listings_favorites, :user]
has_many :interests, Re.Interest
has_many :units, Re.Unit
many_to_many :tags, Re.Tag,
join_through: Re.ListingTag,
join_keys: [listing_uuid: :uuid, tag_uuid: :uuid],
on_replace: :delete
timestamps()
end
@types ~w(Apartamento Casa Cobertura)
@garage_types ~w(contract condominium)
@orientation_types ~w(frontside backside lateral inside)
@sun_period_types ~w(morning evening)
@deactivation_reasons ~w(duplicated gave_up left_emcasa publication_mistake rented
rejected sold sold_by_emcasa temporarily_suspended to_be_published
went_exclusive)
@required ~w(type description price rooms bathrooms area garage_spots garage_type
address_id user_id suites dependencies has_elevator)a
@optional ~w(complement floor matterport_code is_exclusive status property_tax
maintenance_fee balconies restrooms is_release is_exportable
orientation floor_count unit_per_floor sun_period elevators
construction_year owner_contact_uuid suggested_price
deactivation_reason sold_price)a
@attributes @required ++ @optional
@price_lower_limit 200_000
@price_upper_limit 100_000_000
def price_lower_limit, do: @price_lower_limit
def price_upper_limit, do: @price_upper_limit
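@doc """
Builds a changeset for creating or updating a listing. A hypothetical
example (attribute values are illustrative only):
changeset(%Re.Listing{}, %{type: "Apartamento", price: 500_000, rooms: 2})
"""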
def changeset(struct, params) do
struct
|> cast(params, @attributes)
|> validate_attributes()
|> validate_number(
:price,
greater_than_or_equal_to: @price_lower_limit,
less_than_or_equal_to: @price_upper_limit
)
|> validate_inclusion(:type, @types)
|> validate_inclusion(:garage_type, @garage_types)
|> validate_inclusion(:orientation, @orientation_types)
|> validate_inclusion(:sun_period, @sun_period_types)
|> validate_inclusion(:deactivation_reason, @deactivation_reasons)
|> put_default_deactivation_reason()
|> validate_status()
|> generate_uuid()
|> calculate_price_per_area()
|> calculate_liquidity()
end
@development_required ~w(type description price area address_id development_uuid)a
@development_optional ~w(rooms bathrooms garage_spots garage_type
suites dependencies complement floor matterport_code
is_exclusive status property_tax
maintenance_fee balconies restrooms is_release is_exportable
orientation floor_count unit_per_floor sun_period elevators
construction_year)a
@development_attributes @development_required ++ @development_optional
def development_changeset(struct, params) do
struct
|> cast(params, @development_attributes)
|> cast_assoc(:development)
|> validate_required(@development_required)
|> validate_inclusion(:type, @types)
|> generate_uuid()
end
def changeset_update_tags(struct, tags) do
struct
|> change()
|> put_assoc(:tags, tags)
end
@more_than_zero_attributes ~w(property_tax maintenance_fee
bathrooms garage_spots suites
dependencies balconies restrooms)a
defp validate_attributes(changeset) do
Enum.reduce(@more_than_zero_attributes, changeset, &greater_than/2)
end
defp greater_than(attr, changeset) do
validate_number(changeset, attr, greater_than_or_equal_to: 0)
end
def listing_types(), do: @types
defp generate_uuid(changeset), do: Re.ChangesetHelper.generate_uuid(changeset)
defp calculate_price_per_area(%Ecto.Changeset{valid?: true} = changeset) do
price = get_field(changeset, :price, nil)
area = get_field(changeset, :area, nil)
set_price_per_area(price, area, changeset)
end
defp calculate_price_per_area(changeset), do: changeset
defp set_price_per_area(0, _, changeset), do: changeset
defp set_price_per_area(nil, _, changeset), do: changeset
defp set_price_per_area(_, 0, changeset), do: changeset
defp set_price_per_area(_, nil, changeset), do: changeset
defp set_price_per_area(price, area, changeset) do
put_change(changeset, :price_per_area, price / area)
end
defp calculate_liquidity(changeset) do
price = get_field(changeset, :price, 0)
suggested_price = get_field(changeset, :suggested_price, 0)
put_change(changeset, :liquidity_ratio, Liquidity.calculate(price, suggested_price))
end
defp validate_status(changeset) do
if get_field(changeset, :status) == "inactive" do
validate_required(changeset, :deactivation_reason)
else
changeset
end
end
defp put_default_deactivation_reason(changeset) do
case {
get_field(changeset, :status),
get_field(changeset, :deactivation_reason),
get_field(changeset, :id)
} do
{"inactive", nil, nil} -> put_change(changeset, :deactivation_reason, "to_be_published")
{_, _, _} -> changeset
end
end
end
|
apps/re/lib/listings/schemas/listing.ex
| 0.672547 | 0.550305 |
listing.ex
|
starcoder
|
defmodule Nx.Backend do
@moduledoc """
The behaviour for tensor backends.
Each backend is a module that defines a struct and implements the callbacks
defined in this module. The callbacks are mostly implementations of the
functions in the `Nx` module with the tensor output shape given as the first
argument.
`Nx` backends come in two flavors: opaque backends, whose data you should
not access directly except through the functions in the `Nx`
module, and public ones, whose data can be directly accessed and
traversed. The former typically have the `Backend` suffix.
`Nx` ships with the following backends:
* `Nx.BinaryBackend` - an opaque backend written in pure Elixir
that stores the data in Elixir's binaries. This is the default
backend used by the `Nx` module. The backend itself (and its
data) is private and must not be accessed directly.
* `Nx.TemplateBackend` - an opaque backend that works as
a template in APIs to declare the type, shape, and names of
tensors to be expected in the future.
* `Nx.Defn.Expr` - a public backend used by `defn` to build
expression trees that are traversed by custom compilers.
This module also includes functions that are meant to be shared
across backends.
"""
@type t :: %{__struct__: atom()}
@type tensor :: Nx.Tensor.t()
@type shape :: Nx.Tensor.shape()
@type axis :: Nx.Tensor.axis()
@type axes :: Nx.Tensor.axes()
@type backend_options :: term()
@callback eye(tensor) :: tensor
@callback iota(tensor, axis | nil) :: tensor
@callback random_uniform(tensor, number, number) :: tensor
@callback random_normal(tensor, mu :: float, sigma :: float) :: tensor
@callback from_binary(out :: tensor, binary, backend_options) :: tensor
@callback backend_deallocate(tensor) :: :ok | :already_deallocated
@callback backend_copy(tensor, module, backend_options) :: tensor
@callback backend_transfer(tensor, module, backend_options) :: tensor
@callback to_batched_list(out :: tensor, tensor) :: [tensor]
@callback to_binary(tensor, limit :: non_neg_integer) :: binary
@callback inspect(tensor, Inspect.Opts.t()) :: tensor
@callback as_type(out :: tensor, tensor) :: tensor
@callback bitcast(out :: tensor, tensor) :: tensor
@callback reshape(out :: tensor, tensor, shape) :: tensor
@callback squeeze(out :: tensor, tensor, axes) :: tensor
@callback broadcast(out :: tensor, tensor, shape, axes) :: tensor
@callback transpose(out :: tensor, tensor, axes) :: tensor
@callback pad(out :: tensor, tensor, pad_value :: tensor, padding_config :: list()) :: tensor
@callback reverse(out :: tensor, tensor, axes) :: tensor
@callback dot(out :: tensor, tensor, axes, tensor, axes) :: tensor
@callback clip(out :: tensor, tensor, min :: tensor, max :: tensor) :: tensor
@callback slice(out :: tensor, tensor, list, list, list) :: tensor
@callback concatenate(out :: tensor, tensor, axis) :: tensor
@callback select(out :: tensor, tensor, tensor, tensor) :: tensor
@callback conv(out :: tensor, tensor, kernel :: tensor, keyword) :: tensor
@callback all?(out :: tensor, tensor, keyword) :: tensor
@callback any?(out :: tensor, tensor, keyword) :: tensor
@callback sum(out :: tensor, tensor, keyword) :: tensor
@callback product(out :: tensor, tensor, keyword) :: tensor
@callback reduce_max(out :: tensor, tensor, keyword) :: tensor
@callback reduce_min(out :: tensor, tensor, keyword) :: tensor
@callback argmax(out :: tensor, tensor, keyword) :: tensor
@callback argmin(out :: tensor, tensor, keyword) :: tensor
@callback reduce(out :: tensor, tensor, acc :: tensor, keyword, fun) :: tensor
@callback reduce_window(out :: tensor, tensor, acc :: tensor, shape, keyword, fun) :: tensor
@callback window_sum(out :: tensor, tensor, shape, keyword) :: tensor
@callback window_product(out :: tensor, tensor, shape, keyword) :: tensor
@callback window_max(out :: tensor, tensor, shape, keyword) :: tensor
@callback window_min(out :: tensor, tensor, shape, keyword) :: tensor
@callback map(out :: tensor, tensor, fun) :: tensor
@callback sort(out :: tensor, tensor, keyword) :: tensor
@callback scatter_window_max(out :: tensor, tensor, tensor, shape, keyword, tensor) :: tensor
@callback scatter_window_min(out :: tensor, tensor, tensor, shape, keyword, tensor) :: tensor
@callback cholesky(out :: tensor, tensor) :: tensor
@callback lu({p :: tensor, l :: tensor, u :: tensor}, tensor, keyword) :: tensor
@callback qr({q :: tensor, r :: tensor}, tensor, keyword) :: tensor
@callback svd({u :: tensor, s :: tensor, v :: tensor}, tensor, keyword) :: tensor
binary_ops =
[:add, :subtract, :multiply, :power, :remainder, :divide, :atan2, :min, :max, :quotient] ++
[:bitwise_and, :bitwise_or, :bitwise_xor, :left_shift, :right_shift] ++
[:equal, :not_equal, :greater, :less, :greater_equal, :less_equal] ++
[:logical_and, :logical_or, :logical_xor] ++
[:outer]
for binary_op <- binary_ops do
@callback unquote(binary_op)(out :: t, t, t) :: t
end
unary_ops =
Enum.map(Nx.Shared.unary_math_funs(), &elem(&1, 0)) ++
[:abs, :bitwise_not, :ceil, :floor, :negate, :round, :sign] ++
[:count_leading_zeros, :population_count]
for unary_op <- unary_ops do
@callback unquote(unary_op)(out :: t, t) :: t
end
alias Inspect.Algebra, as: IA
@doc """
Inspects the tensor whose data is given by `binary`.
Note that `binary` may have fewer elements than the
tensor size but, in such cases, it must strictly have
more elements than `inspect_opts.limit`.
"""
def inspect(%{shape: shape, type: type}, binary, inspect_opts) do
open = IA.color("[", :list, inspect_opts)
sep = IA.color(",", :list, inspect_opts)
close = IA.color("]", :list, inspect_opts)
dims = Tuple.to_list(shape)
{data, _rest, _limit} = chunk(dims, binary, type, inspect_opts.limit, {open, sep, close})
data
end
defp chunk([], data, {kind, size}, limit, _docs) do
# TODO: Simplify inspection once nonfinite are officially supported in the VM
{doc, tail} =
case kind do
:s ->
<<head::size(size)-signed-native, tail::binary>> = data
{Integer.to_string(head), tail}
:u ->
<<head::size(size)-unsigned-native, tail::binary>> = data
{Integer.to_string(head), tail}
:f ->
<<head::size(size)-bitstring, tail::binary>> = data
{inspect_float(head, size), tail}
:bf ->
<<head::16-bitstring, tail::binary>> = data
{inspect_bf16(head), tail}
end
if limit == :infinity, do: {doc, tail, limit}, else: {doc, tail, limit - 1}
end
defp chunk([dim | dims], data, type, limit, {open, sep, close} = docs) do
{acc, rest, limit} =
chunk_each(dim, data, [], limit, fn chunk, limit ->
chunk(dims, chunk, type, limit, docs)
end)
{open, sep, close, nest} =
if dims == [] do
{open, IA.concat(sep, " "), close, 0}
else
{IA.concat(open, IA.line()), IA.concat(sep, IA.line()), IA.concat(IA.line(), close), 2}
end
doc =
open
|> IA.concat(IA.concat(Enum.intersperse(acc, sep)))
|> IA.nest(nest)
|> IA.concat(close)
{doc, rest, limit}
end
defp chunk_each(0, data, acc, limit, _fun) do
{Enum.reverse(acc), data, limit}
end
defp chunk_each(_dim, data, acc, 0, _fun) do
{Enum.reverse(["..." | acc]), data, 0}
end
defp chunk_each(dim, data, acc, limit, fun) do
{doc, rest, limit} = fun.(data, limit)
chunk_each(dim - 1, rest, [doc | acc], limit, fun)
end
defp inspect_bf16(<<0xFF80::16-native>>), do: "-Inf"
defp inspect_bf16(<<0x7F80::16-native>>), do: "Inf"
defp inspect_bf16(<<0xFFC1::16-native>>), do: "NaN"
defp inspect_bf16(<<0xFF81::16-native>>), do: "NaN"
if System.endianness() == :little do
defp inspect_bf16(bf16) do
<<x::float-little-32>> = <<0::16, bf16::binary>>
Float.to_string(x)
end
defp inspect_float(data, 32) do
case data do
<<0xFF800000::32-native>> -> "-Inf"
<<0x7F800000::32-native>> -> "Inf"
<<_::16, fdf8:f53e:61e4::18, _::7, _sign::1, 0x7F::7>> -> "NaN"
<<x::float-32-native>> -> Float.to_string(x)
end
end
defp inspect_float(data, 64) do
case data do
<<0x7FF0000000000000::64-native>> -> "Inf"
<<0xFFF0000000000000::64-native>> -> "-Inf"
<<_::48, 0xF::4, _::4, _sign::1, 0x7F::7>> -> "NaN"
<<x::float-64-native>> -> Float.to_string(x)
end
end
else
defp inspect_bf16(bf16) do
<<x::float-big-32>> = <<bf16::binary, 0::16>>
Float.to_string(x)
end
defp inspect_float(data, 32) do
case data do
<<0xFF800000::32-native>> -> "-Inf"
<<0x7F800000::32-native>> -> "Inf"
<<_sign::1, 0x7F::7, fdf8:f53e:61e4::18, _::7, _::16>> -> "NaN"
<<x::float-32-native>> -> Float.to_string(x)
end
end
defp inspect_float(data, 64) do
case data do
<<0x7FF0000000000000::64-native>> -> "Inf"
<<0xFFF0000000000000::64-native>> -> "-Inf"
<<_sign::1, 0x7F::7, 0xF::4, _::4, _::48>> -> "NaN"
<<x::float-64-native>> -> Float.to_string(x)
end
end
end
end
|
lib/nx/backend.ex
| 0.857604 | 0.567997 |
backend.ex
|
starcoder
|
defmodule Nebulex.Entry do
@moduledoc """
Defines a Cache Entry.
This is the structure used by the caches for representing cache entries.
"""
# Cache entry definition
defstruct key: nil,
value: nil,
touched: nil,
ttl: :infinity,
time_unit: :millisecond
@typedoc """
Defines a generic struct for a cache entry.
The entry depends on the adapter completely, this struct/type aims to define
the common fields.
"""
@type t :: %__MODULE__{
key: any,
value: any,
touched: integer,
ttl: timeout,
time_unit: System.time_unit()
}
alias Nebulex.Time
@doc """
Encodes a term (such as a cache entry) into a URL-safe Base64 binary.
## Example
iex> "hello"
...> |> Nebulex.Entry.encode()
...> |> Nebulex.Entry.decode()
"hello"
"""
@spec encode(term, [term]) :: binary
def encode(data, opts \\ []) do
data
|> :erlang.term_to_binary(opts)
|> Base.url_encode64()
end
@doc """
Decodes a previously encoded entry.
## Example
iex> "hello"
...> |> Nebulex.Entry.encode()
...> |> Nebulex.Entry.decode()
"hello"
"""
# sobelow_skip ["Misc.BinToTerm"]
@spec decode(binary, [term]) :: term
def decode(data, opts \\ []) when is_binary(data) do
data
|> Base.url_decode64!()
|> :erlang.binary_to_term(opts)
end
@doc """
Returns whether the given `entry` has expired or not.
## Example
iex> Nebulex.Entry.expired?(%Nebulex.Entry{})
false
iex> Nebulex.Entry.expired?(
...> %Nebulex.Entry{touched: Nebulex.Time.now() - 10, ttl: 1}
...> )
true
"""
@spec expired?(t) :: boolean
def expired?(%__MODULE__{ttl: :infinity}), do: false
def expired?(%__MODULE__{touched: touched, ttl: ttl, time_unit: unit}) do
Time.now(unit) - touched >= ttl
end
@doc """
Returns the remaining time-to-live.
## Example
iex> Nebulex.Entry.ttl(%Nebulex.Entry{})
:infinity
iex> ttl =
...> Nebulex.Entry.ttl(
...> %Nebulex.Entry{touched: Nebulex.Time.now(), ttl: 100}
...> )
iex> ttl > 0
true
"""
@spec ttl(t) :: timeout
def ttl(%__MODULE__{ttl: :infinity}), do: :infinity
def ttl(%__MODULE__{ttl: ttl, touched: touched, time_unit: unit}) do
ttl - (Time.now(unit) - touched)
end
end
|
lib/nebulex/entry.ex
| 0.901043 | 0.44077 |
entry.ex
|
starcoder
|
defmodule ExProtobuf.Parser do
defmodule ParserError do
defexception [:message]
end
def parse_files!(files, options \\ []) do
Enum.reduce(files, [], fn(path, defs) ->
schema = File.read!(path)
new_defs = parse!(schema, options)
defs ++ new_defs
end) |> finalize!(options)
end
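@doc """
Parses and post-processes a protobuf schema given as a string.
A hypothetical example:
defs = ExProtobuf.Parser.parse_string!("message Ping { required uint32 seq = 1; }")
"""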
def parse_string!(string, options \\ []) do
parse!(string, options) |> finalize!(options)
end
defp finalize!(defs, options) do
case :gpb_parse.post_process_all_files(defs, options) do
{:ok, defs} -> defs
{:error, error} -> raise ParserError, message: format_error(error)
end
end
defp format_error([ref_to_undefined_msg_or_enum: {{root_path, field}, type}]) do
type_ref = Enum.map(type, &Atom.to_string/1) |> Enum.join
invalid_ref = Enum.reverse([field|root_path]) |> Enum.map(&Atom.to_string/1) |> Enum.join
"Reference to undefined message or enum #{type_ref} at #{invalid_ref}"
end
defp format_error(error) when is_binary(error), do: error
defp format_error(error), do: Macro.to_string(error)
defp parse(defs, options) when is_list(defs) do
:gpb_parse.post_process_one_file(defs, options)
end
defp parse(string, options) when is_binary(string) do
case :gpb_scan.string('#{string}') do
{:ok, tokens, _} ->
lines = String.split(string, "\n", parts: :infinity) |> Enum.count
case :gpb_parse.parse(tokens ++ [{:'$end', lines + 1}]) do
{:ok, defs} -> parse(defs, options)
error ->
error
end
error ->
error
end
end
defp parse!(string, options) do
case parse(string, options) do
{:ok, defs} -> defs
{:error, error} -> raise ParserError, message: format_error(error)
end
end
end
|
lib/exprotobuf/parser.ex
| 0.518059 | 0.565659 |
parser.ex
|
starcoder
|
defmodule EQL.AST.Join do
@moduledoc false
@behaviour EQL.Expression
alias EQL.Expression
alias EQL.AST.{Ident, Mutation, Params, Prop, Query, Union}
defstruct key: nil,
query: nil
@type t(p, q) :: %__MODULE__{
key: p,
query: q
}
@type t ::
t(
Prop.t() | Ident.t() | Params.t(Prop.t() | Ident.t()),
Query.t() | Union.t() | recursion
)
@type expr :: %{
required(prop_or_ident | Params.expr(prop_or_ident)) =>
Query.expr() | Union.expr() | recursion
}
@type mutation_join :: t(Mutation.t(), Query.t())
@type prop_or_ident :: Prop.expr() | Ident.expr()
@type recursion :: non_neg_integer | :infinity
@spec new(
Prop.t() | Ident.t() | Params.t(Prop.t() | Ident.t()),
Query.t() | Union.t() | recursion
) :: t
def new(key, query) do
%__MODULE__{
key: key,
query: query
}
end
defguard is_join(x) when is_map(x) and map_size(x) == 1
@impl Expression
def to_ast(join) when is_join(join) do
with {key, val} <- extract_key_query(join),
key when not is_nil(key) <- Expression.to_ast([Prop, Ident, Params], key),
query when not is_nil(query) <- subquery_to_ast(val) do
new(key, query)
else
_ -> nil
end
end
def to_ast(_), do: nil
@spec extract_key_query(map) :: {term, term}
defp extract_key_query(join) do
[key | _] = Map.keys(join)
{key, Map.get(join, key)}
end
@spec subquery_to_ast(term) :: Query.t() | Union.t() | recursion | nil
defp subquery_to_ast(:infinity), do: :infinity
defp subquery_to_ast(subquery) when is_integer(subquery), do: subquery
defp subquery_to_ast(subquery) do
Expression.to_ast([Query, Union], subquery)
end
defimpl EQL.AST do
def to_expr(join) do
%{@protocol.to_expr(join.key) => @protocol.to_expr(join.query)}
end
def get_key(join) do
@protocol.to_expr(join.key)
end
end
end
|
lib/eql/ast/join.ex
| 0.766512 | 0.448487 |
join.ex
|
starcoder
|
defmodule UltraDark.Transaction do
alias UltraDark.Transaction
alias UltraDark.Utilities
alias Decimal, as: D
@moduledoc """
Contains all the functions that pertain to creating valid transactions
"""
defstruct id: nil,
inputs: [],
outputs: [],
fee: 0,
designations: [],
timestamp: nil,
# Most transactions will be pay-to-public-key
txtype: "P2PK"
@type t :: %__MODULE__{}
@spec calculate_outputs(Transaction.t()) :: %{outputs: list, fee: Decimal.t()}
def calculate_outputs(transaction) do
%{designations: designations} = transaction
fee = calculate_fee(transaction)
outputs =
designations
|> Enum.with_index()
|> Enum.map(fn {designation, idx} ->
%{
txoid: "#{transaction.id}:#{idx}",
addr: designation[:addr],
amount: designation[:amount]
}
end)
%{outputs: outputs, fee: fee}
end
@doc """
Each transaction consists of multiple inputs and outputs. Inputs to any particular transaction are just outputs
from other transactions. This is called the UTXO model. In order to efficiently represent the UTXOs within the transaction,
we can calculate the merkle root of the inputs of the transaction.
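For example (hypothetical txoids):
calculate_hash(%Transaction{inputs: [%{txoid: "abc123:0"}, %{txoid: "def456:1"}]})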
"""
@spec calculate_hash(Transaction.t()) :: String.t()
def calculate_hash(transaction) do
transaction.inputs
|> Enum.map(& &1[:txoid])
|> Utilities.calculate_merkle_root()
end
@doc """
In order for a block to be considered valid, it must have a coinbase as the FIRST transaction in the block.
This coinbase has a single output, designated to the address of the miner, and the output amount is
the block reward plus any transaction fees from within the transaction
"""
@spec generate_coinbase(Decimal.t(), String.t()) :: Transaction.t()
def generate_coinbase(amount, miner_address) do
timestamp = DateTime.utc_now() |> DateTime.to_string()
txid = Utilities.sha_base16(miner_address <> timestamp)
%Transaction{
id: txid,
txtype: "COINBASE",
timestamp: timestamp,
outputs: [
%{txoid: "#{txid}:0", addr: miner_address, amount: amount}
]
}
end
@spec sum_inputs(list) :: Decimal.t()
def sum_inputs(inputs) do
Enum.reduce(inputs, D.new(0), fn %{amount: amount}, acc -> D.add(amount, acc) end)
end
@spec calculate_fee(Transaction.t()) :: Decimal.t()
def calculate_fee(transaction) do
D.sub(sum_inputs(transaction.inputs), sum_inputs(transaction.designations))
end
end
|
lib/transaction.ex
| 0.836955 | 0.555496 |
transaction.ex
|
starcoder
|
defmodule Nerves.Grove.OneNumberLeds do
@moduledoc """
String.to_integer("FC", 16)|>Integer.digits(2)
c ("lib/nerves_grove/one_number_led.ex")
Ring#Logger.attach
alias Nerves.Grove.OneNumberLeds
OneNumberLeds.set_one_segment_pins(17, 18, 27, 23, 22, 24, 25, 6)
"""
require Logger
alias Pigpiox.GPIO
@digits_code %{
# [a,b,c,d,e,f,g,h]
null: [0, 0, 0, 0, 0, 0, 0, 0],
zero: [1, 1, 1, 1, 1, 1, 0, 0],
one: [0, 1, 1, 0, 0, 0, 0, 0],
two: [1, 1, 0, 1, 1, 0, 1, 0],
three: [1, 1, 1, 1, 0, 0, 1, 0],
four: [0, 1, 1, 0, 0, 1, 1, 0],
five: [1, 0, 1, 1, 0, 1, 1, 0],
six: [1, 0, 1, 1, 1, 1, 1, 0],
seven: [1, 1, 1, 0, 0, 0, 0, 0],
eight: [1, 1, 1, 1, 1, 1, 1, 0],
nine: [1, 1, 1, 1, 0, 1, 1, 0],
A: [1, 1, 1, 0, 1, 1, 1, 0],
B: [1, 1, 1, 1, 1, 1, 1, 0],
C: [0, 1, 1, 1, 1, 1, 1, 0]
}
@pins_code [:a, :b, :c, :d, :e, :f, :g, :h]
def set_one_segment_pins(pin_a, pin_b, pin_c, pin_d, pin_e, pin_f, pin_g, pin_h) do
input_pins = [pin_a, pin_b, pin_c, pin_d, pin_e, pin_f, pin_g, pin_h]
segment_pins =
for n <- 0..7 do
pin_code = @pins_code |> Enum.at(n)
input_pin = input_pins |> Enum.at(n)
# GPIO.set_mode(input_pin, :output)
# Logger.debug("input_pin: #{input_pin}")
{pin_code, input_pin}
end
segment_pins
end
@doc """
c ("lib/nerves_grove/one_number_led.ex")
alias Nerves.Grove.OneNumberLeds
digit_pids = OneNumberLeds.set_one_segment_pins(0,1,2,3,4,5,6,7)
OneNumberLeds.write(digit_pids,:eight)
"""
def write(digit_pins, digit) do
digit_bits = @digits_code[digit]
# Logger.debug("digit_pins #{inspect(digit_pins)}")
# Logger.debug("digit_bits #{inspect(digit_bits)}")
for n <- 0..7 do
digit_bit = digit_bits |> Enum.at(n)
pin = digit_pins |> Enum.at(n) |> Kernel.elem(1)
# digit_bit is always 0 or 1, so it can be written to the pin directly
GPIO.write(pin, digit_bit)
end
end
end
|
lib/nerves_grove/one_number_led.ex
| 0.574992 | 0.52829 |
one_number_led.ex
|
starcoder
|
defmodule Aecore.Channel.ChannelOffChainTx do
@moduledoc """
Structure of an Offchain Channel Transaction. Implements a cryptographically signed container for channel updates associated with an offchain chainstate.
"""
@behaviour Aecore.Channel.ChannelTransaction
alias Aecore.Channel.ChannelOffChainTx
alias Aecore.Channel.Updates.ChannelTransferUpdate
alias Aecore.Channel.ChannelOffChainUpdate
alias Aecore.Keys
alias Aecore.Chain.Identifier
alias Aecore.Tx.SignedTx
alias Aeutil.TypeToTag
@version 1
@signedtx_version 1
@typedoc """
Structure of the ChannelOffChainTx type
"""
@type t :: %ChannelOffChainTx{
channel_id: binary(),
sequence: non_neg_integer(),
updates: list(ChannelOffChainUpdate.update_types()),
state_hash: binary(),
signatures: {binary(), binary()}
}
@typedoc """
The type of errors returned by the functions in this module
"""
@type error :: {:error, String.t()}
@doc """
Definition of Aecore ChannelOffChainTx structure
## Parameters
- channel_id: ID of the channel
- sequence: Number of the update round
- updates: List of updates to the offchain chainstate
- state_hash: Root hash of the offchain chainstate after applying the updates
- signatures: Initiator/Responder signatures of the offchain transaction
"""
defstruct [
:channel_id,
:sequence,
:updates,
:state_hash,
:signatures
]
use Aecore.Util.Serializable
require Logger
@doc """
Validates the signatures under the offchain transaction.
"""
@spec verify_signatures(ChannelOffChainTx.t(), {Keys.pubkey(), Keys.pubkey()}) :: :ok | error()
def verify_signatures(%ChannelOffChainTx{signatures: {_, _}} = state, {
initiator_pubkey,
responder_pubkey
}) do
cond do
!verify_signature_for_key(state, initiator_pubkey) ->
{:error, "#{__MODULE__}: Invalid initiator signature"}
!verify_signature_for_key(state, responder_pubkey) ->
{:error, "#{__MODULE__}: Invalid responder signature"}
true ->
:ok
end
end
def verify_signatures(%ChannelOffChainTx{}, _) do
{:error, "#{__MODULE__}: Invalid signatures count"}
end
@doc """
Checks if there is a signature for the specified pubkey.
"""
@spec verify_signature_for_key(ChannelOffChainTx.t(), Keys.pubkey()) :: boolean()
def verify_signature_for_key(%ChannelOffChainTx{signatures: {<<>>, _}}, _) do
false
end
def verify_signature_for_key(
%ChannelOffChainTx{signatures: {signature1, signature2}} = state,
pubkey
) do
Keys.verify(signing_form(state), signature1, pubkey) or
verify_signature_for_key(%ChannelOffChainTx{state | signatures: {signature2, <<>>}}, pubkey)
end
@spec signature_for_offchain_tx(ChannelOffChainTx.t(), Keys.sign_priv_key()) :: binary()
defp signature_for_offchain_tx(%ChannelOffChainTx{} = offchain_tx, priv_key)
when is_binary(priv_key) do
offchain_tx
|> signing_form()
|> Keys.sign(priv_key)
end
defp signing_form(%ChannelOffChainTx{} = tx) do
rlp_encode(%ChannelOffChainTx{tx | signatures: {<<>>, <<>>}})
end
@doc """
Signs the offchain transaction with the provided private key.
"""
@spec sign(ChannelOffChainTx.t(), Keys.sign_priv_key()) :: ChannelOffChainTx.t()
def sign(%ChannelOffChainTx{signatures: {<<>>, <<>>}} = offchain_tx, priv_key) do
signature = signature_for_offchain_tx(offchain_tx, priv_key)
{:ok, %ChannelOffChainTx{offchain_tx | signatures: {signature, <<>>}}}
end
def sign(%ChannelOffChainTx{signatures: {existing_signature, <<>>}} = offchain_tx, priv_key) do
new_signature = signature_for_offchain_tx(offchain_tx, priv_key)
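# Keep the signature pair sorted so that the serialized form is
# deterministic (rlp_decode_signed/1 expects signature1 < signature2).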
if new_signature > existing_signature do
{:ok, %ChannelOffChainTx{offchain_tx | signatures: {existing_signature, new_signature}}}
else
{:ok, %ChannelOffChainTx{offchain_tx | signatures: {new_signature, existing_signature}}}
end
end
@doc """
Creates a new offchain transaction containing a transfer update between the specified accounts. The resulting offchain transaction is not tied to any offchain chainstate.
"""
@spec initialize_transfer(binary(), Keys.pubkey(), Keys.pubkey(), non_neg_integer()) ::
ChannelOffChainTx.t()
def initialize_transfer(channel_id, from, to, amount) do
%ChannelOffChainTx{
channel_id: channel_id,
updates: [ChannelTransferUpdate.new(from, to, amount)],
signatures: {<<>>, <<>>}
}
end
@spec offchain_updates(ChannelOffChainTx.t()) :: list(ChannelUpdates.update_types())
def offchain_updates(%ChannelOffChainTx{updates: updates}) do
updates
end
@doc """
Serializes the offchain transaction - signatures are not being included
"""
@spec encode_to_list(ChannelOffChainTx.t()) :: list(binary())
def encode_to_list(%ChannelOffChainTx{
signatures: {<<>>, <<>>},
channel_id: channel_id,
sequence: sequence,
updates: updates,
state_hash: state_hash
}) do
encoded_updates = Enum.map(updates, &ChannelOffChainUpdate.encode_to_list/1)
[
:binary.encode_unsigned(@version),
Identifier.create_encoded_to_binary(channel_id, :channel),
:binary.encode_unsigned(sequence),
encoded_updates,
state_hash
]
end
def encode_to_list(%ChannelOffChainTx{
signatures: {_, _}
}) do
throw("#{__MODULE__}: Serialization.rlp_encode is not supported for a signed ChannelOffChainTx")
end
def rlp_encode(%ChannelOffChainTx{signatures: {<<>>, <<>>}} = tx) do
Serialization.rlp_encode(tx)
end
def rlp_encode(%ChannelOffChainTx{signatures: {signature1, signature2}} = tx) do
{:ok, signedtx_tag} = TypeToTag.type_to_tag(SignedTx)
ExRLP.encode([
signedtx_tag,
@signedtx_version,
[signature1, signature2],
ChannelOffChainTx.rlp_encode(%ChannelOffChainTx{tx | signatures: {<<>>, <<>>}})
])
end
@doc """
Deserializes the serialized offchain transaction. The resulting transaction does not contain any signatures.
"""
@spec decode_from_list(non_neg_integer(), list(binary())) :: {:ok, ChannelOffChainTx.t()} | error()
def decode_from_list(@version, [
encoded_channel_id,
sequence,
encoded_updates,
state_hash
]) do
with {:ok, channel_id} <-
Identifier.decode_from_binary_to_value(encoded_channel_id, :channel),
decoded_updates <- Enum.map(encoded_updates, &ChannelOffChainUpdate.decode_from_list/1),
# Look for errors
errors <- for({:error, _} = err <- decoded_updates, do: err),
nil <- List.first(errors) do
{:ok,
%ChannelOffChainTx{
channel_id: channel_id,
sequence: :binary.decode_unsigned(sequence),
updates: decoded_updates,
state_hash: state_hash
}}
else
{:error, _} = error ->
error
end
end
def decode_from_list(@version, data) do
{:error, "#{__MODULE__}: decode_from_list: Invalid serialization: #{inspect(data)}"}
end
def decode_from_list(version, _) do
{:error, "#{__MODULE__}: decode_from_list: Unknown version #{version}"}
end
def rlp_decode_signed(binary) do
result =
try do
ExRLP.decode(binary)
rescue
e ->
{:error, "#{__MODULE__}: rlp_decode: IIllegal serialization: #{Exception.message(e)}"}
end
{:ok, signedtx_tag} = TypeToTag.type_to_tag(SignedTx)
signedtx_tag_bin = :binary.encode_unsigned(signedtx_tag)
signedtx_ver_bin = :binary.encode_unsigned(@signedtx_version)
case result do
[^signedtx_tag_bin, ^signedtx_ver_bin, [signature1, signature2], data]
when signature1 < signature2 ->
case rlp_decode(data) do
{:ok, %ChannelOffChainTx{} = tx} ->
{:ok, %ChannelOffChainTx{tx | signatures: {signature1, signature2}}}
{:error, _} = error ->
error
end
[^signedtx_tag_bin, ^signedtx_ver_bin | _] ->
{:error, "#{__MODULE__}: Invalid signedtx serialization"}
[^signedtx_tag_bin | _] ->
{:error, "#{__MODULE__}: Unknown signedtx version"}
list when is_list(list) ->
{:error, "#{__MODULE__}: Invalid tag"}
{:error, _} = error ->
error
end
end
end
|
apps/aecore/lib/aecore/channel/channel_off_chain_tx.ex
| 0.875734 | 0.405625 |
channel_off_chain_tx.ex
|
starcoder
|
defmodule Graphmath.Mat33 do
@moduledoc """
This is the 3D mathematics library for graphmath.
This submodule handles 3x3 matrices using tuples of floats.
"""
@type mat33 :: {float, float, float, float, float, float, float, float, float}
@type vec3 :: {float, float, float}
@type vec2 :: {float, float}
@doc """
`identity()` creates an identity `mat33`.
This returns an identity `mat33`.
"""
@spec identity() :: mat33
def identity() do
{1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0}
end
@doc """
`zero()` creates a zeroed `mat33`.
This returns a zeroed `mat33`.
"""
@spec zero() :: mat33
def zero() do
{0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0}
end
@doc """
`add(a,b)` adds one `mat33` to another `mat33`.
`a` is the first `mat33`.
`b` is the second `mat33`.
This returns a `mat33` which is the element-wise sum of `a` and `b`.
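Example (the result follows element-wise from the definition):
    iex> a = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0}
    iex> b = {9.0, 8.0, 7.0, 6.0, 5.0, 4.0, 3.0, 2.0, 1.0}
    iex> Graphmath.Mat33.add(a, b)
    {10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0, 10.0}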
"""
@spec add(mat33, mat33) :: mat33
def add(a, b) do
{a11, a12, a13, a21, a22, a23, a31, a32, a33} = a
{b11, b12, b13, b21, b22, b23, b31, b32, b33} = b
{a11 + b11, a12 + b12, a13 + b13, a21 + b21, a22 + b22, a23 + b23, a31 + b31, a32 + b32,
a33 + b33}
end
@doc """
`subtract(a,b)` subtracts one `mat33` from another `mat33`.
`a` is the minuend.
`b` is the subtrahend.
This returns a `mat33` formed by the element-wise subtraction of `b` from `a`.
"""
@spec subtract(mat33, mat33) :: mat33
def subtract(a, b) do
{a11, a12, a13, a21, a22, a23, a31, a32, a33} = a
{b11, b12, b13, b21, b22, b23, b31, b32, b33} = b
{a11 - b11, a12 - b12, a13 - b13, a21 - b21, a22 - b22, a23 - b23, a31 - b31, a32 - b32,
a33 - b33}
end
@doc """
`scale( a, k )` scales every element in a `mat33` by a coefficient k.
`a` is the `mat33` to scale.
`k` is the float to scale by.
This returns a `mat33` `a` scaled element-wise by `k`.
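Example (element-wise, per the definition):
    iex> Graphmath.Mat33.scale({1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0}, 2.0)
    {2.0, 4.0, 6.0, 8.0, 10.0, 12.0, 14.0, 16.0, 18.0}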
"""
@spec scale(mat33, float) :: mat33
def scale(a, k) do
{a11, a12, a13, a21, a22, a23, a31, a32, a33} = a
{a11 * k, a12 * k, a13 * k, a21 * k, a22 * k, a23 * k, a31 * k, a32 * k, a33 * k}
end
@doc """
`make_scale( k )` creates a `mat33` that uniformly scales.
`k` is the float value to scale by.
This returns a `mat33` whose diagonal is all `k`s.
"""
@spec make_scale(float) :: mat33
def make_scale(k) do
{k, 0.0, 0.0, 0.0, k, 0.0, 0.0, 0.0, k}
end
@doc """
`make_scale( sx, sy, sz )` creates a `mat33` that scales each axis independently.
`sx` is a float for scaling the x-axis.
`sy` is a float for scaling the y-axis.
`sz` is a float for scaling the z-axis.
This returns a `mat33` whose diagonal is `{ sx, sy, sz }`.
Note that, when used with `vec2`s via the *transform* methods, `sz` will have no effect.
"""
@spec make_scale(float, float, float) :: mat33
def make_scale(sx, sy, sz) do
{sx, 0.0, 0.0, 0.0, sy, 0.0, 0.0, 0.0, sz}
end
@doc """
`make_translate( tx, ty )` creates a mat33 that translates a vec2 by (tx, ty).
`tx` is a float for translating along the x-axis.
`ty` is a float for translating along the y-axis.
This returns a `mat33` which translates by a `vec2` `{ tx, ty }`.
"""
@spec make_translate(float, float) :: mat33
def make_translate(tx, ty) do
{1.0, 0.0, 0.0, 0.0, 1.0, 0.0, tx, ty, 1.0}
end
@doc """
`make_rotate( theta )` creates a mat33 that rotates a vec2 by `theta` radians about the +Z axis.
`theta` is the float of the number of radians of rotation the matrix will provide.
This returns a `mat33` which rotates by `theta` radians about the +Z axis.
"""
@spec make_rotate(float) :: mat33
def make_rotate(theta) do
st = :math.sin(theta)
ct = :math.cos(theta)
{ct, st, 0.0, -st, ct, 0.0, 0.0, 0.0, 1.0}
end
@doc """
`round( a, sigfigs )` rounds every element of a `mat33` to some number of decimal places.
`a` is the `mat33` to round.
`sigfigs` is an integer in [0,15] giving the number of decimal places to round to.
This returns a `mat33` which is the result of rounding `a`.
"""
@spec round(mat33, 0..15) :: mat33
def round(a, sigfigs) do
{a11, a12, a13, a21, a22, a23, a31, a32, a33} = a
{
Float.round(1.0 * a11, sigfigs),
Float.round(1.0 * a12, sigfigs),
Float.round(1.0 * a13, sigfigs),
Float.round(1.0 * a21, sigfigs),
Float.round(1.0 * a22, sigfigs),
Float.round(1.0 * a23, sigfigs),
Float.round(1.0 * a31, sigfigs),
Float.round(1.0 * a32, sigfigs),
Float.round(1.0 * a33, sigfigs)
}
end
@doc """
`multiply( a, b )` multiply two matrices a and b together.
`a` is the `mat33` multiplicand.
`b` is the `mat33` multiplier.
This returns the `mat33` product of `a` and `b`.
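Example (multiplying by the identity returns `a` unchanged):
    iex> m = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0}
    iex> Graphmath.Mat33.multiply(m, Graphmath.Mat33.identity())
    {1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0}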
"""
@spec multiply(mat33, mat33) :: mat33
def multiply(a, b) do
{a11, a12, a13, a21, a22, a23, a31, a32, a33} = a
{b11, b12, b13, b21, b22, b23, b31, b32, b33} = b
{
a11 * b11 + a12 * b21 + a13 * b31,
a11 * b12 + a12 * b22 + a13 * b32,
a11 * b13 + a12 * b23 + a13 * b33,
a21 * b11 + a22 * b21 + a23 * b31,
a21 * b12 + a22 * b22 + a23 * b32,
a21 * b13 + a22 * b23 + a23 * b33,
a31 * b11 + a32 * b21 + a33 * b31,
a31 * b12 + a32 * b22 + a33 * b32,
a31 * b13 + a32 * b23 + a33 * b33
}
end
@doc """
`multiply_transpose( a, b )` multiply two matrices a and b<sup>T</sup> together.
`a` is the `mat33` multiplicand.
`b` is the `mat33` multiplier.
This returns the `mat33` product of `a` and `b`<sup>T</sup>.
"""
@spec multiply_transpose(mat33, mat33) :: mat33
def multiply_transpose(a, b) do
{a11, a12, a13, a21, a22, a23, a31, a32, a33} = a
{b11, b21, b31, b12, b22, b32, b13, b23, b33} = b
{
a11 * b11 + a12 * b21 + a13 * b31,
a11 * b12 + a12 * b22 + a13 * b32,
a11 * b13 + a12 * b23 + a13 * b33,
a21 * b11 + a22 * b21 + a23 * b31,
a21 * b12 + a22 * b22 + a23 * b32,
a21 * b13 + a22 * b23 + a23 * b33,
a31 * b11 + a32 * b21 + a33 * b31,
a31 * b12 + a32 * b22 + a33 * b32,
a31 * b13 + a32 * b23 + a33 * b33
}
end
@doc """
`column0( a )` selects the first column of a `mat33`.
`a` is the `mat33` to take the first column of.
This returns a `vec3` representing the first column of `a`.
"""
@spec column0(mat33) :: vec3
def column0(a) do
{a11, _, _, a21, _, _, a31, _, _} = a
{a11, a21, a31}
end
@doc """
`column1( a )` selects the second column of a `mat33`.
`a` is the `mat33` to take the second column of.
This returns a `vec3` representing the second column of `a`.
"""
@spec column1(mat33) :: vec3
def column1(a) do
{_, a12, _, _, a22, _, _, a32, _} = a
{a12, a22, a32}
end
@doc """
`column2( a )` selects the third column of a `mat33`.
`a` is the `mat33` to take the third column of.
This returns a `vec3` representing the third column of `a`.
"""
@spec column2(mat33) :: vec3
def column2(a) do
{_, _, a13, _, _, a23, _, _, a33} = a
{a13, a23, a33}
end
@doc """
`row0( a )` selects the first row of a `mat33`.
`a` is the `mat33` to take the first row of.
This returns a `vec3` representing the first row of `a`.
"""
@spec row0(mat33) :: vec3
def row0(a) do
{a11, a12, a13, _, _, _, _, _, _} = a
{a11, a12, a13}
end
@doc """
`row1( a )` selects the second row of a `mat33`.
`a` is the `mat33` to take the second row of.
This returns a `vec3` representing the second row of `a`.
"""
@spec row1(mat33) :: vec3
def row1(a) do
{_, _, _, a21, a22, a23, _, _, _} = a
{a21, a22, a23}
end
@doc """
`row2( a )` selects the third row of a `mat33`.
`a` is the `mat33` to take the third row of.
This returns a `vec3` representing the third row of `a`.
"""
@spec row2(mat33) :: vec3
def row2(a) do
{_, _, _, _, _, _, a31, a32, a33} = a
{a31, a32, a33}
end
@doc """
`diag( a )` selects the diagonal of a `mat33`.
`a` is the `mat33` to take the diagonal of.
This returns a `vec3` representing the diagonal of `a`.
"""
@spec diag(mat33) :: vec3
def diag(a) do
{a11, _, _, _, a22, _, _, _, a33} = a
{a11, a22, a33}
end
@doc """
`at( a, i, j)` selects an element of a `mat33`.
`a` is the `mat33` to index.
`i` is the row integer index [0,2].
`j` is the column integer index [0,2].
This returns a float from the matrix at row `i` and column `j`.
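Example (row 1, column 2 of the matrix below is `6.0`):
    iex> Graphmath.Mat33.at({1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0}, 1, 2)
    6.0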
"""
@spec at(mat33, non_neg_integer, non_neg_integer) :: float
def at(a, i, j) do
elem(a, 3 * i + j)
end
@doc """
`apply( a, v )` transforms a `vec3` by a `mat33`.
`a` is the `mat33` to transform by.
`v` is the `vec3` to be transformed.
This returns a `vec3` representing **A****v**.
This is the "full" application of a matrix, and uses all elements.
"""
@spec apply(mat33, vec3) :: vec3
def apply(a, v) do
{a11, a12, a13, a21, a22, a23, a31, a32, a33} = a
{x, y, z} = v
{
a11 * x + a12 * y + a13 * z,
a21 * x + a22 * y + a23 * z,
a31 * x + a32 * y + a33 * z
}
end
@doc """
`apply_transpose( a, v )` transforms a `vec3` by a transposed `mat33`.
`a` is the `mat33` to transform by.
`v` is the `vec3` to be transformed.
This returns a `vec3` representing **A**<sup>T</sup>**v**.
This is the "full" application of a matrix, and uses all elements.
"""
@spec apply_transpose(mat33, vec3) :: vec3
def apply_transpose(a, v) do
{a11, a21, a31, a12, a22, a32, a13, a23, a33} = a
{x, y, z} = v
{
a11 * x + a12 * y + a13 * z,
a21 * x + a22 * y + a23 * z,
a31 * x + a32 * y + a33 * z
}
end
@doc """
`apply_left( v, a )` transforms a `vec3` by a `mat33`, applied on the left.
`a` is the `mat33` to transform by.
`v` is the `vec3` to be transformed.
This returns a `vec3` representing **v****A**.
This is the "full" application of a matrix, and uses all elements.
"""
@spec apply_left(vec3, mat33) :: vec3
def apply_left(v, a) do
{a11, a12, a13, a21, a22, a23, a31, a32, a33} = a
{x, y, z} = v
{
a11 * x + a21 * y + a31 * z,
a12 * x + a22 * y + a32 * z,
a13 * x + a23 * y + a33 * z
}
end
@doc """
`apply_left_transpose( v, a )` transforms a `vec3` by a transposed `mat33`, applied on the left.
`a` is the `mat33` to transform by.
`v` is the `vec3` to be transformed.
This returns a `vec3` representing **v****A**<sup>T</sup>.
This is the "full" application of a matrix, and uses all elements.
"""
@spec apply_left_transpose(vec3, mat33) :: vec3
def apply_left_transpose(v, a) do
{a11, a21, a31, a12, a22, a32, a13, a23, a33} = a
{x, y, z} = v
{
a11 * x + a21 * y + a31 * z,
a12 * x + a22 * y + a32 * z,
a13 * x + a23 * y + a33 * z
}
end
@doc """
`transform_point( a, v )` transforms a `vec2` point by a `mat33`.
`a` is a `mat33` used to transform the point.
`v` is a `vec2` to be transformed.
This returns a `vec2` representing the application of `a` to `v`.
The point `v` is internally treated as having a third coordinate equal to 1.0.
Note that transforming a point will work for all transforms.
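Example (translating the point `{1.0, 1.0}` by `{2.0, 3.0}`):
    iex> Graphmath.Mat33.transform_point(Graphmath.Mat33.make_translate(2.0, 3.0), {1.0, 1.0})
    {3.0, 4.0}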
"""
@spec transform_point(mat33, vec2) :: vec2
def transform_point(a, v) do
{a11, a21, _, a12, a22, _, a13, a23, _} = a
{x, y} = v
{
a11 * x + a12 * y + a13,
a21 * x + a22 * y + a23
}
end
@doc """
`transform_vector( a, v )` transforms a `vec2` vector by a `mat33`.
`a` is a `mat33` used to transform the vector.
`v` is a `vec2` to be transformed.
This returns a `vec2` representing the application of `a` to `v`.
The vector `v` is internally treated as having a third coordinate equal to 0.0.
Note that transforming a vector will work for only rotations, scales, and shears.
"""
@spec transform_vector(mat33, vec2) :: vec2
def transform_vector(a, v) do
{a11, a21, _, a12, a22, _, _, _, _} = a
{x, y} = v
{
a11 * x + a12 * y,
a21 * x + a22 * y
}
end
@doc """
`inverse(a)` calculates the inverse matrix.
`a` is the `mat33` to be inverted.
Returns a `mat33` representing `a`<sup>-1</sup>.
Raises an `ArithmeticError` when the determinant of `a` is zero, since such a matrix has no inverse.
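Example (the identity is its own inverse):
    iex> Graphmath.Mat33.inverse(Graphmath.Mat33.identity())
    {1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0}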
"""
@spec inverse(mat33) :: mat33
def inverse(a) do
{a00, a01, a02, a10, a11, a12, a20, a21, a22} = a
v00 = a11 * a22 - a12 * a21
v01 = a02 * a21 - a01 * a22
v02 = a01 * a12 - a02 * a11
v10 = a12 * a20 - a10 * a22
v11 = a00 * a22 - a02 * a20
v12 = a02 * a10 - a00 * a12
v20 = a10 * a21 - a11 * a20
v21 = a01 * a20 - a00 * a21
v22 = a00 * a11 - a01 * a10
f_det = a00 * v00 + a01 * v10 + a02 * v20
if f_det == 0.0 do
  raise ArithmeticError, message: "a matrix with a zero determinant has no inverse"
end
f_inv_det = 1.0 / f_det
{v00 * f_inv_det, v01 * f_inv_det, v02 * f_inv_det, v10 * f_inv_det, v11 * f_inv_det,
v12 * f_inv_det, v20 * f_inv_det, v21 * f_inv_det, v22 * f_inv_det}
end
end
|
lib/graphmath/Mat33.ex
| 0.948799 | 0.927888 |
Mat33.ex
|
starcoder
|
defmodule Codenamex.Game do
@moduledoc """
This module manages the game logic.
All the functions besides setup/0 expect a game state.
A state is a variation of what was created by the setup/0 function.
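A typical flow (sketch; player names are arbitrary strings):

    game = Codenamex.Game.setup()
    {:ok, game} = Codenamex.Game.add_player(game, "alice")
    {:ok, game} = Codenamex.Game.pick_team(game, "alice", "red", "regular")
    {:ok, game} = Codenamex.Game.start(game)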
"""
alias Codenamex.Game.Board
alias Codenamex.Game.Player
alias Codenamex.Game.Team
defstruct [
guests: nil,
blue_team: nil,
red_team: nil,
board: nil,
winner: nil,
turn: nil,
touched_card: nil,
over: false,
status: :pending
]
def setup do
board = Board.setup()
%__MODULE__{
board: board,
turn: board.first_team,
guests: Team.setup(),
blue_team: Team.setup(),
red_team: Team.setup()
}
end
def start(game) do
case game do
%{status: :started} -> {:error, :already_in_progress}
_ -> {:ok, %{game | status: :started}}
end
end
def next_turn(game, player_name) do
case allowed_to_finish_turn?(game, player_name) do
true -> {:ok, %{game | turn: next_team(game.turn)}}
false -> {:error, :wrong_turn}
end
end
def touch_intent(game, word, player_name) do
case allowed_to_touch_card?(game, player_name) do
true -> touch_intent(game, word)
false -> {:error, :wrong_turn}
end
end
def touch_card(game, word, player_name) do
case allowed_to_touch_card?(game, player_name) do
true -> touch_card(game, word)
false -> {:error, :wrong_turn}
end
end
def next_team("red"), do: "blue"
def next_team("blue"), do: "red"
def fetch_players(game) do
guests = Team.fetch_players(game.guests)
red = Team.fetch_players(game.red_team)
blue = Team.fetch_players(game.blue_team)
%{guests: guests, red_team: red, blue_team: blue}
end
def add_player(game, player_name) do
player = Player.setup(player_name, "regular")
case Team.add_player(game.guests, player, "regular") do
{:ok, team} -> {:ok, %{game | guests: team}}
error -> error
end
end
def pick_team(game, player_name, "red", type) do
player = Player.setup(player_name, type)
current_team = find_team(game, player_name)
updated_game = remove_from_team(game, player_name, current_team)
case Team.add_player(updated_game.red_team, player, type) do
{:ok, team} -> {:ok, %{updated_game | red_team: team}}
error -> error
end
end
def pick_team(game, player_name, "blue", type) do
player = Player.setup(player_name, type)
current_team = find_team(game, player_name)
updated_game = remove_from_team(game, player_name, current_team)
case Team.add_player(updated_game.blue_team, player, type) do
{:ok, team} -> {:ok, %{updated_game | blue_team: team}}
error -> error
end
end
def remove_player(game, player_name) do
case find_team(game, player_name) do
nil -> {:error, :player_not_found}
team -> {:ok, remove_from_team(game, player_name, team)}
end
end
def restart(game) do
new_board = Board.setup()
if new_board.first_team == game.board.first_team do
%{red_team: red_players, blue_team: blue_players} = game
%{game | board: new_board, red_team: blue_players, blue_team: red_players, winner: nil, over: false}
else
%{game | board: new_board, winner: nil, over: false}
end
end
defp touch_intent(game, word) do
case Board.touch_intent(game.board, word) do
{:ok, _} -> {:ok, game}
{:error, reason} -> {:error, reason}
end
end
defp touch_card(game, word) do
case Board.touch_card(game.board, word) do
{:ok, {touched_card, updated_board}} ->
{:ok, update_state(game, updated_board, touched_card)}
{:error, reason} ->
{:error, reason}
end
end
defp update_state(game, updated_board, %{color: "black"} = touched_card) do
%{game | board: updated_board, touched_card: touched_card, winner: next_team(game.turn), over: true}
end
defp update_state(game, updated_board, %{color: "yellow"} = touched_card) do
%{game | board: updated_board, touched_card: touched_card, turn: next_team(game.turn)}
end
defp update_state(%{turn: "red"} = game, updated_board, %{color: "red"} = touched_card) do
case updated_board.red_cards do
0 ->
%{game | board: updated_board, touched_card: touched_card, winner: "red", over: true}
_ ->
%{game | board: updated_board, touched_card: touched_card}
end
end
defp update_state(game = %{turn: "red"}, updated_board, %{color: "blue"} = touched_card) do
case updated_board.blue_cards do
0 ->
%{game | board: updated_board, touched_card: touched_card, winner: "blue", over: true}
_ ->
%{game | board: updated_board, touched_card: touched_card, turn: "blue"}
end
end
defp update_state(game = %{turn: "blue"}, updated_board, %{color: "blue"} = touched_card) do
case updated_board.blue_cards do
0 ->
%{game | board: updated_board, touched_card: touched_card, winner: "blue", over: true}
_ ->
%{game | board: updated_board, touched_card: touched_card}
end
end
defp update_state(game = %{turn: "blue"}, updated_board, %{color: "red"} = touched_card) do
case updated_board.red_cards do
0 ->
%{game | board: updated_board, touched_card: touched_card, winner: "red", over: true}
_ ->
%{game | board: updated_board, touched_card: touched_card, turn: "red"}
end
end
defp allowed_to_touch_card?(game, player_name) do
player_team_color = find_team(game, player_name)
team = fetch_team(game, player_team_color)
player = Team.fetch_player(team, player_name)
(player_team_color == game.turn) && Player.can_select_word?(player)
end
defp allowed_to_finish_turn?(game, player_name) do
find_team(game, player_name) == game.turn
end
defp find_team(game, player_name) do
cond do
Team.has_player?(game.guests, player_name) -> "guest"
Team.has_player?(game.red_team, player_name) -> "red"
Team.has_player?(game.blue_team, player_name) -> "blue"
true -> nil
end
end
defp fetch_team(game, "red") do
game.red_team
end
defp fetch_team(game, "blue") do
game.blue_team
end
defp remove_from_team(game, player_name, "guest") do
%{game | guests: Team.remove_player(game.guests, player_name)}
end
defp remove_from_team(game, player_name, "red") do
%{game | red_team: Team.remove_player(game.red_team, player_name)}
end
defp remove_from_team(game, player_name, "blue") do
%{game | blue_team: Team.remove_player(game.blue_team, player_name)}
end
end
|
lib/codenamex/game.ex
| 0.708414 | 0.545467 |
game.ex
|
starcoder
|
defmodule Seely.Router do
@moduledoc """
Functions to find routes in a user-defined router. (See `Seely.DefaultRouter`).
"""
@doc ~s"""
Create a new router (which is nothing than a simple `Keyword` list)
with initially one key only, the `:module` where the actual router is defined.
Keys: `routes` and `parse_opts` will be added later.
"""
def new(module) do
Keyword.new(module: module)
end
@doc ~s"""
Parse the command the user entered.
The options and routes are fetched from the given `router`
(see `Seely.DefaultRouter`). The `command` gets parsed by `Seely.Parser` and
resolved to a route of the form `{controller, function, args}` if one could be
found. Otherwise a route to `Seely.EchoController`'s `:error` function
(carrying `{404, "No route found"}`) is returned.
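Example (sketch; `Seely.DefaultRouter` is the user-defined router referenced
above, and the route it yields depends on its configured routes):

    Seely.Router.parse("echo hello", Seely.DefaultRouter)
    #=> {controller, function, [params, opts]}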
"""
def parse(command, router) do
options = apply(router, :parse_opts, [])
routes = apply(router, :routes, [])
Seely.Parser.parse(command, options)
|> Seely.Router.find_route(routes)
end
@doc """
Find a route for a given parsed command. A parsed command, as returned from
`parse/2` has the form
{parsed_options, parameters, invalid_options}
# Example: {[upper: true, trim: true], ["echo", " string "], []}
The function either returns a found route in the form
{controller, function, args}
or, if no route could be found, it returns a route to the `Seely.EchoController`'s
`:error` function.
"""
def find_route(parsed_command, routes) do
case parsed_command do
{opts, [cmd | params], []} ->
routes
|> Enum.find(fn {c, _controller, _function} ->
cmd == c
end)
|> build_route(opts, params)
{_opts, [_cmd | _params], invalid_options} ->
{Seely.EchoController, :error,
[
{500,
"invalid options: #{
inspect(invalid_options,
pretty: true
)
}"}
]}
unknown ->
{Seely.EchoController, :echo, ["No route #{inspect(unknown)}"]}
end
end
defp build_route(nil, _opts, _params) do
{Seely.EchoController, :error, [{404, "No route found"}]}
end
defp build_route({_cmd, controller, function}, opts, params) do
{controller, function, [params, Keyword.new(opts)]}
end
end
|
lib/seely/controllers/router.ex
| 0.778691 | 0.543893 |
router.ex
|
starcoder
|
defmodule Re.Listings.Filters do
@moduledoc """
Module for grouping filter queries
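Example (sketch; `Re.Listing` stands in for whatever listings queryable the
caller filters, and is an assumption here):

    params = %{"max_price" => 1_000_000, "types" => ["Apartamento"]}
    query = Re.Listings.Filters.apply(Re.Listing, params)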
"""
use Ecto.Schema
import Ecto.{
Query,
Changeset
}
alias Re.Listings.Filters.Relax
schema "listings_filter" do
field :max_price, :integer
field :min_price, :integer
field :max_rooms, :integer
field :min_rooms, :integer
field :max_suites, :integer
field :min_suites, :integer
field :max_bathrooms, :integer
field :min_bathrooms, :integer
field :min_area, :integer
field :max_area, :integer
field :neighborhoods, {:array, :string}
field :types, {:array, :string}
field :max_lat, :float
field :min_lat, :float
field :max_lng, :float
field :min_lng, :float
field :neighborhoods_slugs, {:array, :string}
field :max_garage_spots, :integer
field :min_garage_spots, :integer
field :garage_types, {:array, :string}
field :cities, {:array, :string}
field :cities_slug, {:array, :string}
field :states_slug, {:array, :string}
field :is_exportable, :boolean
field :tags_slug, {:array, :string}
field :tags_uuid, {:array, :string}
field :statuses, {:array, :string}
field :min_floor_count, :integer
field :max_floor_count, :integer
field :min_unit_per_floor, :integer
field :max_unit_per_floor, :integer
field :has_elevator, :boolean
field :orientations, {:array, :string}
field :sun_periods, {:array, :string}
field :min_age, :integer
field :max_age, :integer
field :min_price_per_area, :float
field :max_price_per_area, :float
field :min_maintenance_fee, :float
field :max_maintenance_fee, :float
field :is_release, :boolean
field :exclude_similar_for_primary_market, :boolean
end
@filters ~w(max_price min_price max_rooms min_rooms max_suites min_suites min_area max_area
neighborhoods types max_lat min_lat max_lng min_lng neighborhoods_slugs
max_garage_spots min_garage_spots garage_types cities cities_slug states_slug
is_exportable tags_slug tags_uuid statuses min_floor_count max_floor_count
min_unit_per_floor max_unit_per_floor has_elevator orientations sun_periods
min_age max_age min_price_per_area max_price_per_area min_maintenance_fee
max_maintenance_fee max_bathrooms min_bathrooms is_release
exclude_similar_for_primary_market)a
def changeset(struct, params \\ %{}), do: cast(struct, params, @filters)
def apply(query, params) do
params
|> cast()
|> build_query(query)
end
def cast(params) do
%__MODULE__{}
|> changeset(params)
|> Map.get(:changes)
end
def relax(params) do
params
|> cast()
|> Relax.apply()
end
defp build_query(params, query) do
params
|> Enum.reduce(query, &attr_filter/2)
end
defp attr_filter({:max_price, max_price}, query) do
from(l in query, where: l.price <= ^max_price)
end
defp attr_filter({:min_price, min_price}, query) do
from(l in query, where: l.price >= ^min_price)
end
defp attr_filter({:max_rooms, max_rooms}, query) do
from(l in query, where: l.rooms <= ^max_rooms)
end
defp attr_filter({:min_rooms, min_rooms}, query) do
from(l in query, where: l.rooms >= ^min_rooms)
end
defp attr_filter({:max_suites, max_suites}, query) do
from(l in query, where: l.suites <= ^max_suites)
end
defp attr_filter({:min_suites, min_suites}, query) do
from(l in query, where: l.suites >= ^min_suites)
end
defp attr_filter({:max_bathrooms, max_bathrooms}, query) do
from(l in query, where: l.bathrooms <= ^max_bathrooms)
end
defp attr_filter({:min_bathrooms, min_bathrooms}, query) do
from(l in query, where: l.bathrooms >= ^min_bathrooms)
end
defp attr_filter({:min_area, min_area}, query) do
from(l in query, where: l.area >= ^min_area)
end
defp attr_filter({:max_area, max_area}, query) do
from(l in query, where: l.area <= ^max_area)
end
defp attr_filter({:neighborhoods, []}, query), do: query
defp attr_filter({:statuses, []}, query), do: query
defp attr_filter({:statuses, statuses}, query) do
from(l in query, where: l.status in ^statuses)
end
defp attr_filter({:neighborhoods, neighborhoods}, query) do
from(
l in query,
join: ad in assoc(l, :address),
on: ad.id == l.address_id,
where: ad.neighborhood in ^neighborhoods
)
end
defp attr_filter({:neighborhoods_slugs, []}, query), do: query
defp attr_filter({:neighborhoods_slugs, neighborhood_slugs}, query) do
from(
l in query,
join: ad in assoc(l, :address),
on: ad.id == l.address_id,
where: ad.neighborhood_slug in ^neighborhood_slugs
)
end
defp attr_filter({:types, []}, query), do: query
defp attr_filter({:types, types}, query) do
from(l in query, where: l.type in ^types)
end
defp attr_filter({:max_lat, max_lat}, query) do
from(
l in query,
join: ad in assoc(l, :address),
where: ad.lat <= ^max_lat
)
end
defp attr_filter({:min_lat, min_lat}, query) do
from(
l in query,
join: ad in assoc(l, :address),
where: ad.lat >= ^min_lat
)
end
defp attr_filter({:max_lng, max_lng}, query) do
from(
l in query,
join: ad in assoc(l, :address),
where: ad.lng <= ^max_lng
)
end
defp attr_filter({:min_lng, min_lng}, query) do
from(
l in query,
join: ad in assoc(l, :address),
where: ad.lng >= ^min_lng
)
end
defp attr_filter({:max_garage_spots, max_garage_spots}, query) do
from(l in query, where: l.garage_spots <= ^max_garage_spots)
end
defp attr_filter({:min_garage_spots, min_garage_spots}, query) do
from(l in query, where: l.garage_spots >= ^min_garage_spots)
end
defp attr_filter({:garage_types, []}, query), do: query
defp attr_filter({:garage_types, garage_types}, query) do
from(l in query, where: l.garage_type in ^garage_types)
end
defp attr_filter({:cities, []}, query), do: query
defp attr_filter({:cities, cities}, query) do
from(
l in query,
join: ad in assoc(l, :address),
on: ad.id == l.address_id,
where: ad.city in ^cities
)
end
defp attr_filter({:cities_slug, []}, query), do: query
defp attr_filter({:cities_slug, cities_slug}, query) do
from(
l in query,
join: ad in assoc(l, :address),
on: ad.id == l.address_id,
where: ad.city_slug in ^cities_slug
)
end
defp attr_filter({:states_slug, []}, query), do: query
defp attr_filter({:states_slug, states_slug}, query) do
from(
l in query,
join: ad in assoc(l, :address),
on: ad.id == l.address_id,
where: ad.state_slug in ^states_slug
)
end
defp attr_filter({:is_exportable, is_exportable}, query) do
from(l in query, where: l.is_exportable == ^is_exportable)
end
defp attr_filter({:tags_slug, []}, query), do: query
defp attr_filter({:tags_slug, slugs}, query) do
from(l in query,
join: t in assoc(l, :tags),
where: t.name_slug in ^slugs,
group_by: l.id
)
end
defp attr_filter({:tags_uuid, []}, query), do: query
defp attr_filter({:tags_uuid, uuids}, query) do
from(l in query,
join: t in assoc(l, :tags),
where: t.uuid in ^uuids,
group_by: l.id
)
end
defp attr_filter({:min_floor_count, floor_count}, query) do
from(
l in query,
where: l.floor_count >= ^floor_count
)
end
defp attr_filter({:max_floor_count, floor_count}, query) do
from(
l in query,
where: l.floor_count <= ^floor_count
)
end
defp attr_filter({:min_unit_per_floor, unit_per_floor}, query) do
from(
l in query,
where: l.unit_per_floor >= ^unit_per_floor
)
end
defp attr_filter({:max_unit_per_floor, unit_per_floor}, query) do
from(
l in query,
where: l.unit_per_floor <= ^unit_per_floor
)
end
defp attr_filter({:has_elevator, false}, query) do
from(
l in query,
where: l.elevators == ^0
)
end
defp attr_filter({:has_elevator, true}, query) do
from(
l in query,
where: l.elevators >= ^1
)
end
defp attr_filter({:orientations, []}, query), do: query
defp attr_filter({:orientations, orientations}, query) do
from(
l in query,
where: l.orientation in ^orientations
)
end
defp attr_filter({:sun_periods, []}, query), do: query
defp attr_filter({:sun_periods, sun_periods}, query) do
from(
l in query,
where: l.sun_period in ^sun_periods
)
end
defp attr_filter({:min_age, age}, query) do
from(
l in query,
where: l.construction_year <= ^age_to_year(age)
)
end
defp attr_filter({:max_age, age}, query) do
from(
l in query,
where: l.construction_year >= ^age_to_year(age)
)
end
defp attr_filter({:min_price_per_area, price_per_area}, query) do
from(
l in query,
where: l.price_per_area >= ^price_per_area
)
end
defp attr_filter({:max_price_per_area, price_per_area}, query) do
from(
l in query,
where: l.price_per_area <= ^price_per_area
)
end
defp attr_filter({:min_maintenance_fee, maintenance_fee}, query) do
from(
l in query,
where: l.maintenance_fee >= ^maintenance_fee
)
end
defp attr_filter({:max_maintenance_fee, maintenance_fee}, query) do
from(
l in query,
where: l.maintenance_fee <= ^maintenance_fee
)
end
defp attr_filter({:is_release, is_release}, query) do
from(
l in query,
where: l.is_release == ^is_release
)
end
defp attr_filter({:exclude_similar_for_primary_market, true}, query) do
from(
l in query,
where: l.is_release == false or l.is_exportable == true
)
end
defp attr_filter(_, query), do: query
defp age_to_year(age) do
today = Date.utc_today()
today.year - age
end
end
|
apps/re/lib/listings/filters/filters.ex
| 0.727201 | 0.498047 |
filters.ex
|
starcoder
|
defmodule Pipe.List do
@moduledoc """
Pipes which act in a list-like (or stream-like) manner.
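The building blocks below are meant to be composed via the `Pipe` module
(aliased as `P` in this file); the exact composition operator lives in `Pipe`
and is not shown here. A sketch of the pieces themselves:

    src  = Pipe.List.source_list([1, 2, 3])
    dbl  = Pipe.List.map(&(&1 * 2))
    sink = Pipe.List.consume()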
"""
require Pipe, as: P
## Sources
@doc """
Yield all elements of the list.
"""
def source_list(list)
def source_list([]) do
P.source do
P.return nil
end
end
def source_list([h|t]) do
P.source do
P.yield(h)
source_list(t)
end
end
## Conduits
@doc """
Only pass those values for which the filter returns a non-nil non-false value.
The result is the upstream result.
"""
def filter(f) do
P.conduit do
t <- P.await_result()
case t do
{ :result, r } ->
return r
{ :value, x } -> P.conduit do
if (f.(x)) do
P.yield(x)
end
filter(f)
end
end
end
end
@doc """
Map a function over the input values.
The result is the upstream result.
"""
def map(f) do
P.conduit do
t <- P.await_result()
case t do
{ :result, r } ->
return r
{ :value, x } -> P.conduit do
P.yield(f.(x))
map(f)
end
end
end
end
## Sinks
@doc """
Return all remaining elements as a list.
"""
def consume(), do: do_consume([])
defp do_consume(acc) do
P.sink do
r <- P.await()
case r do
[] ->
return(:lists.reverse(acc))
[x] ->
do_consume([x|acc])
end
end
end
@doc """
Ignore all the input and return the upstream result.
"""
def skip_all() do
P.sink do
t <- P.await_result()
case t do
{ :result, r } ->
return r
{ :value, _ } ->
skip_all()
end
end
end
@doc """
Consume input values while the predicate function returns a true value and
return those input values as a list.
"""
def take_while(f), do: do_take_while([], f)
defp do_take_while(acc, f) do
P.sink do
t <- P.await()
case t do
[] -> return :lists.reverse(acc)
[x] ->
if f.(x) do
do_take_while([x|acc], f)
else
P.return_leftovers(:lists.reverse(acc), [x])
end
end
end
end
end
|
lib/pipe/list.ex
| 0.687315 | 0.452596 |
list.ex
|
starcoder
|
defmodule AWS.CloudDirectory do
@moduledoc """
Amazon Cloud Directory
Amazon Cloud Directory is a component of the AWS Directory Service that
simplifies the development and management of cloud-scale web, mobile, and IoT
applications.
This guide describes the Cloud Directory operations that you can call
programmatically and includes detailed information on data types and errors. For
information about Cloud Directory features, see [AWS Directory Service](https://aws.amazon.com/directoryservice/) and the [Amazon Cloud Directory Developer
Guide](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/what_is_cloud_directory.html).
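Example call shape (sketch; building `client` is done elsewhere through this
library's client/credentials setup, which is assumed here):

    AWS.CloudDirectory.list_directories(client, %{"MaxResults" => 10})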
"""
@doc """
Adds a new `Facet` to an object.
An object can have more than one facet applied on it.
"""
def add_facet_to_object(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object/facets"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Copies the input published schema, at the specified version, into the
`Directory` with the same name and version as that of the published schema.
"""
def apply_schema(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/schema/apply"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Attaches an existing object to another object.
An object can be accessed in two ways:
1. Using the path
2. Using `ObjectIdentifier`
"""
def attach_object(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object/attach"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Attaches a policy object to a regular object.
An object can have a limited number of attached policies.
"""
def attach_policy(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/policy/attach"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Attaches the specified object to the specified index.
"""
def attach_to_index(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/index/attach"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Attaches a typed link to a specified source and target object.
For more information, see [Typed Links](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/directory_objects_links.html#directory_objects_links_typedlink).
"""
def attach_typed_link(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/typedlink/attach"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Performs all the read operations in a batch.
"""
def batch_read(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/batchread"
{headers, input} =
[
{"ConsistencyLevel", "x-amz-consistency-level"},
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Performs all the write operations in a batch.
Either all the operations succeed or none.
"""
def batch_write(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/batchwrite"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Creates a `Directory` by copying the published schema into the directory.
A directory cannot be created without a schema.
You can also quickly create a directory using a managed schema, called the
`QuickStartSchema`. For more information, see [Managed Schema](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/schemas_managed.html)
in the *Amazon Cloud Directory Developer Guide*.
"""
def create_directory(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/directory/create"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Creates a new `Facet` in a schema.
Facet creation is allowed only in development or applied schemas.
"""
def create_facet(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/facet/create"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Creates an index object.
See [Indexing and search](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/indexing_search.html)
for more information.
"""
def create_index(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/index"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Creates an object in a `Directory`.
Additionally attaches the object to a parent, if a parent reference and
`LinkName` is specified. An object is simply a collection of `Facet` attributes.
You can also use this API call to create a policy object, if the facet from
which you create the object is a policy facet.
"""
def create_object(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Creates a new schema in a development state.
A schema can exist in three phases:
* *Development:* This is a mutable phase of the schema. All new
schemas are in the development phase. Once the schema is finalized, it can be
published.
* *Published:* Published schemas are immutable and have a version
associated with them.
* *Applied:* Applied schemas are mutable in a way that allows you to
add new schema facets. You can also add new, nonrequired attributes to existing
schema facets. You can apply only published schemas to directories.
"""
def create_schema(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/schema/create"
headers = []
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Creates a `TypedLinkFacet`.
For more information, see [Typed Links](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/directory_objects_links.html#directory_objects_links_typedlink).
"""
def create_typed_link_facet(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/typedlink/facet/create"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Deletes a directory.
Only disabled directories can be deleted, and deletion cannot be undone.
Exercise extreme caution when deleting directories.
"""
def delete_directory(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/directory"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Deletes a given `Facet`.
All attributes and `Rule`s that are associated with the facet will be deleted.
Only development schema facets are allowed deletion.
"""
def delete_facet(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/facet/delete"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Deletes an object and its associated attributes.
Only objects with no children and no parents can be deleted. The maximum number
of attributes that can be deleted during an object deletion is 30. For more
information, see [Amazon Cloud Directory Limits](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/limits.html).
"""
def delete_object(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object/delete"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Deletes a given schema.
Only schemas in the development or published state can be deleted.
"""
def delete_schema(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/schema"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Deletes a `TypedLinkFacet`.
For more information, see [Typed Links](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/directory_objects_links.html#directory_objects_links_typedlink).
"""
def delete_typed_link_facet(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/typedlink/facet/delete"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Detaches the specified object from the specified index.
"""
def detach_from_index(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/index/detach"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Detaches a given object from the parent object.
The object that is to be detached from the parent is specified by the link name.
"""
def detach_object(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object/detach"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Detaches a policy from an object.
"""
def detach_policy(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/policy/detach"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Detaches a typed link from a specified source and target object.
For more information, see [Typed Links](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/directory_objects_links.html#directory_objects_links_typedlink).
"""
def detach_typed_link(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/typedlink/detach"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Disables the specified directory.
Disabled directories cannot be read or written to. Only enabled directories can
be disabled. Disabled directories may be reenabled.
"""
def disable_directory(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/directory/disable"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Enables the specified directory.
Only disabled directories can be enabled. Once enabled, the directory can then
be read and written to.
"""
def enable_directory(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/directory/enable"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Returns current applied schema version ARN, including the minor version in use.
"""
def get_applied_schema_version(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/schema/getappliedschema"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Retrieves metadata about a directory.
"""
def get_directory(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/directory/get"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Gets details of the `Facet`, such as facet name, attributes, `Rule`s, or
`ObjectType`.
You can call this on all kinds of schema facets -- published, development, or
applied.
"""
def get_facet(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/facet"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Retrieves attributes that are associated with a typed link.
"""
def get_link_attributes(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/typedlink/attributes/get"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Retrieves attributes within a facet that are associated with an object.
"""
def get_object_attributes(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object/attributes/get"
{headers, input} =
[
{"ConsistencyLevel", "x-amz-consistency-level"},
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Retrieves metadata about an object.
"""
def get_object_information(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object/information"
{headers, input} =
[
{"ConsistencyLevel", "x-amz-consistency-level"},
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Retrieves a JSON representation of the schema.
See [JSON Schema Format](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/schemas_jsonformat.html#schemas_json)
for more information.
"""
def get_schema_as_json(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/schema/json"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Returns the identity attribute order for a specific `TypedLinkFacet`.
For more information, see [Typed Links](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/directory_objects_links.html#directory_objects_links_typedlink).
"""
def get_typed_link_facet_information(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/typedlink/facet/get"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Lists schema major versions applied to a directory.
If `SchemaArn` is provided, lists the minor version.
"""
def list_applied_schema_arns(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/schema/applied"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Lists indices attached to the specified object.
"""
def list_attached_indices(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object/indices"
{headers, input} =
[
{"ConsistencyLevel", "x-amz-consistency-level"},
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Retrieves each Amazon Resource Name (ARN) of schemas in the development state.
"""
def list_development_schema_arns(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/schema/development"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Lists directories created within an account.
"""
def list_directories(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/directory/list"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Retrieves attributes attached to the facet.
"""
def list_facet_attributes(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/facet/attributes"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Retrieves the names of facets that exist in a schema.
"""
def list_facet_names(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/facet/list"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Returns a paginated list of all the incoming `TypedLinkSpecifier` information
for an object.
It also supports filtering by typed link facet and identity attributes. For more
information, see [Typed Links](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/directory_objects_links.html#directory_objects_links_typedlink).
"""
def list_incoming_typed_links(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/typedlink/incoming"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Lists objects attached to the specified index.
"""
def list_index(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/index/targets"
{headers, input} =
[
{"ConsistencyLevel", "x-amz-consistency-level"},
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Lists the major version families of each managed schema.
If a major version ARN is provided as SchemaArn, the minor version revisions in
that family are listed instead.
"""
def list_managed_schema_arns(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/schema/managed"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Lists all attributes that are associated with an object.
"""
def list_object_attributes(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object/attributes"
{headers, input} =
[
{"ConsistencyLevel", "x-amz-consistency-level"},
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Returns a paginated list of child objects that are associated with a given
object.
"""
def list_object_children(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object/children"
{headers, input} =
[
{"ConsistencyLevel", "x-amz-consistency-level"},
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Retrieves all available parent paths for any object type such as node, leaf
node, policy node, and index node objects.
For more information about objects, see [Directory Structure](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/key_concepts_directorystructure.html).
Use this API to evaluate all parents for an object. The call returns all objects
from the root of the directory up to the requested object. The API returns the
number of paths based on user-defined `MaxResults`, in case there are multiple
paths to the parent. The order of the paths and nodes returned is consistent
among multiple API calls unless the objects are deleted or moved. Paths from
the target object that do not lead to the directory root are ignored.
"""
def list_object_parent_paths(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object/parentpaths"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Lists parent objects that are associated with a given object in pagination
fashion.
"""
def list_object_parents(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object/parent"
{headers, input} =
[
{"ConsistencyLevel", "x-amz-consistency-level"},
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Returns policies attached to an object in pagination fashion.
"""
def list_object_policies(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object/policy"
{headers, input} =
[
{"ConsistencyLevel", "x-amz-consistency-level"},
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Returns a paginated list of all the outgoing `TypedLinkSpecifier` information
for an object.
It also supports filtering by typed link facet and identity attributes. For more
information, see [Typed Links](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/directory_objects_links.html#directory_objects_links_typedlink).
"""
def list_outgoing_typed_links(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/typedlink/outgoing"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Returns all of the `ObjectIdentifiers` to which a given policy is attached.
"""
def list_policy_attachments(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/policy/attachment"
{headers, input} =
[
{"ConsistencyLevel", "x-amz-consistency-level"},
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Lists the major version families of each published schema.
If a major version ARN is provided as `SchemaArn`, the minor version revisions
in that family are listed instead.
"""
def list_published_schema_arns(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/schema/published"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Returns tags for a resource.
Tagging is currently supported only for directories with a limit of 50 tags per
directory. All 50 tags are returned for a given directory with this API call.
"""
def list_tags_for_resource(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/tags"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Returns a paginated list of all attribute definitions for a particular
`TypedLinkFacet`.
For more information, see [Typed Links](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/directory_objects_links.html#directory_objects_links_typedlink).
"""
def list_typed_link_facet_attributes(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/typedlink/facet/attributes"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Returns a paginated list of `TypedLink` facet names for a particular schema.
For more information, see [Typed Links](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/directory_objects_links.html#directory_objects_links_typedlink).
"""
def list_typed_link_facet_names(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/typedlink/facet/list"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Lists all policies from the root of the `Directory` to the object specified.
If there are no policies present, an empty list is returned. If policies are
present, and if some objects don't have the policies attached, it returns the
`ObjectIdentifier` for such objects. If policies are present, it returns
`ObjectIdentifier`, `policyId`, and `policyType`. Paths that don't lead to the
root from the target object are ignored. For more information, see
[Policies](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/key_concepts_directory.html#key_concepts_policies).
"""
def lookup_policy(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/policy/lookup"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
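  # Usage sketch (the ARN and selector are hypothetical); the response pairs
  # each path from the root with the policies found along it:
  #
  #     input = %{
  #       "DirectoryArn" => "arn:aws:clouddirectory:us-east-1:111122223333:directory/EXAMPLE",
  #       "ObjectReference" => %{"Selector" => "/finance/reports"}
  #     }
  #     {:ok, %{"PolicyToPathList" => _paths}, _} = lookup_policy(client, input)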
@doc """
Publishes a development schema with a major version and a recommended minor
version.
"""
def publish_schema(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/schema/publish"
{headers, input} =
[
{"DevelopmentSchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Allows a schema to be updated using JSON upload.
Only available for development schemas. See [JSON Schema Format](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/schemas_jsonformat.html#schemas_json)
for more information.
"""
def put_schema_from_json(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/schema/json"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Removes the specified facet from the specified object.
"""
def remove_facet_from_object(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object/facets/delete"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Adds tags to a resource.
"""
def tag_resource(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/tags/add"
headers = []
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Removes tags from a resource.
"""
def untag_resource(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/tags/remove"
headers = []
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Does the following:
1. Adds new `Attributes`, `Rules`, or `ObjectTypes`.
2. Updates existing `Attributes`, `Rules`, or `ObjectTypes`.
3. Deletes existing `Attributes`, `Rules`, or `ObjectTypes`.
"""
def update_facet(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/facet"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Updates a given typed link’s attributes.
Attributes to be updated must not contribute to the typed link’s identity, as
defined by its `IdentityAttributeOrder`.
"""
def update_link_attributes(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/typedlink/attributes/update"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :post, path_, query_, headers, input, options, 200)
end
@doc """
Updates a given object's attributes.
"""
def update_object_attributes(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/object/update"
{headers, input} =
[
{"DirectoryArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Updates the schema name with a new name.
Only development schema names can be updated.
"""
def update_schema(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/schema/update"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Updates a `TypedLinkFacet`.
For more information, see [Typed Links](https://docs.aws.amazon.com/clouddirectory/latest/developerguide/directory_objects_links.html#directory_objects_links_typedlink).
"""
def update_typed_link_facet(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/typedlink/facet"
{headers, input} =
[
{"SchemaArn", "x-amz-data-partition"},
]
|> AWS.Request.build_params(input)
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Upgrades a single directory in-place using the `PublishedSchemaArn` with schema
updates found in `MinorVersion`.
Backwards-compatible minor version upgrades are instantaneously available for
readers on all objects in the directory. Note: This is a synchronous API call
and upgrades only one schema on a given directory per call. To upgrade multiple
directories from one schema, you would need to call this API on each directory.
"""
def upgrade_applied_schema(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/schema/upgradeapplied"
headers = []
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@doc """
Upgrades a published schema under a new minor version revision using the current
contents of `DevelopmentSchemaArn`.
"""
def upgrade_published_schema(client, input, options \\ []) do
path_ = "/amazonclouddirectory/2017-01-11/schema/upgradepublished"
headers = []
query_ = []
request(client, :put, path_, query_, headers, input, options, 200)
end
@spec request(AWS.Client.t(), atom(), binary(), list(), list(), map(), list(), pos_integer() | nil) ::
{:ok, map() | nil, map()}
| {:error, term()}
defp request(client, method, path, query, headers, input, options, success_status_code) do
client = %{client | service: "clouddirectory"}
host = build_host("clouddirectory", client)
url = host
|> build_url(path, client)
|> add_query(query, client)
additional_headers = [{"Host", host}, {"Content-Type", "application/x-amz-json-1.1"}]
headers = AWS.Request.add_headers(additional_headers, headers)
payload = encode!(client, input)
headers = AWS.Request.sign_v4(client, method, url, headers, payload)
perform_request(client, method, url, payload, headers, options, success_status_code)
end
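  # Accept the response either when no explicit success code was given (any
  # of 200/202/204 counts) or when the status matches `success_status_code`;
  # the two `when` clauses below combine as a logical OR.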
defp perform_request(client, method, url, payload, headers, options, success_status_code) do
case AWS.Client.request(client, method, url, payload, headers, options) do
{:ok, %{status_code: status_code, body: body} = response}
when is_nil(success_status_code) and status_code in [200, 202, 204]
when status_code == success_status_code ->
body = if(body != "", do: decode!(client, body))
{:ok, body, response}
{:ok, response} ->
{:error, {:unexpected_response, response}}
error = {:error, _reason} -> error
end
end
defp build_host(_endpoint_prefix, %{region: "local", endpoint: endpoint}) do
endpoint
end
defp build_host(_endpoint_prefix, %{region: "local"}) do
"localhost"
end
defp build_host(endpoint_prefix, %{region: region, endpoint: endpoint}) do
"#{endpoint_prefix}.#{region}.#{endpoint}"
end
defp build_url(host, path, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}#{path}"
end
defp add_query(url, [], _client) do
url
end
defp add_query(url, query, client) do
querystring = encode!(client, query, :query)
"#{url}?#{querystring}"
end
defp encode!(client, payload, format \\ :json) do
AWS.Client.encode!(client, payload, format)
end
defp decode!(client, payload) do
AWS.Client.decode!(client, payload, :json)
end
end
|
lib/aws/generated/cloud_directory.ex
| 0.901271 | 0.417271 |
cloud_directory.ex
|
starcoder
|
defmodule AWS.CodeDeploy do
@moduledoc """
AWS CodeDeploy
AWS CodeDeploy is a deployment service that automates application
deployments to Amazon EC2 instances, on-premises instances running in your
own facility, or serverless AWS Lambda functions.
You can deploy a nearly unlimited variety of application content, such as
an updated Lambda function, code, web and configuration files, executables,
packages, scripts, multimedia files, and so on. AWS CodeDeploy can deploy
application content stored in Amazon S3 buckets, GitHub repositories, or
Bitbucket repositories. You do not need to make changes to your existing
code before you can use AWS CodeDeploy.
AWS CodeDeploy makes it easier for you to rapidly release new features,
helps you avoid downtime during application deployment, and handles the
complexity of updating your applications, without many of the risks
associated with error-prone manual deployments.
**AWS CodeDeploy Components**
Use the information in this guide to help you work with the following AWS
CodeDeploy components:
<ul> <li> **Application**: A name that uniquely identifies the application
you want to deploy. AWS CodeDeploy uses this name, which functions as a
container, to ensure the correct combination of revision, deployment
configuration, and deployment group are referenced during a deployment.
</li> <li> **Deployment group**: A set of individual instances or
CodeDeploy Lambda applications. A Lambda deployment group contains a group
of applications. An EC2/On-premises deployment group contains individually
tagged instances, Amazon EC2 instances in Auto Scaling groups, or both.
</li> <li> **Deployment configuration**: A set of deployment rules and
deployment success and failure conditions used by AWS CodeDeploy during a
deployment.
</li> <li> **Deployment**: The process and the components used in the
process of updating a Lambda function or of installing content on one or
more instances.
</li> <li> **Application revisions**: For an AWS Lambda deployment, this is
an AppSpec file that specifies the Lambda function to update and one or
more functions to validate deployment lifecycle events. For an
EC2/On-premises deployment, this is an archive file containing source
content—source code, web pages, executable files, and deployment
scripts—along with an AppSpec file. Revisions are stored in Amazon S3
buckets or GitHub repositories. For Amazon S3, a revision is uniquely
identified by its Amazon S3 object key and its ETag, version, or both. For
GitHub, a revision is uniquely identified by its commit ID.
</li> </ul> This guide also contains information to help you get details
about the instances in your deployments, to make on-premises instances
available for AWS CodeDeploy deployments, and to get details about a Lambda
function deployment.
**AWS CodeDeploy Information Resources**
<ul> <li> [AWS CodeDeploy User
Guide](http://docs.aws.amazon.com/codedeploy/latest/userguide)
</li> <li> [AWS CodeDeploy API Reference
Guide](http://docs.aws.amazon.com/codedeploy/latest/APIReference/)
</li> <li> [AWS CLI Reference for AWS
CodeDeploy](http://docs.aws.amazon.com/cli/latest/reference/deploy/index.html)
</li> <li> [AWS CodeDeploy Developer
Forum](https://forums.aws.amazon.com/forum.jspa?forumID=179)
</li> </ul>
"""
@doc """
Adds tags to on-premises instances.
"""
def add_tags_to_on_premises_instances(client, input, options \\ []) do
request(client, "AddTagsToOnPremisesInstances", input, options)
end
@doc """
Gets information about one or more application revisions.
"""
def batch_get_application_revisions(client, input, options \\ []) do
request(client, "BatchGetApplicationRevisions", input, options)
end
@doc """
Gets information about one or more applications.
"""
def batch_get_applications(client, input, options \\ []) do
request(client, "BatchGetApplications", input, options)
end
@doc """
Gets information about one or more deployment groups.
"""
def batch_get_deployment_groups(client, input, options \\ []) do
request(client, "BatchGetDeploymentGroups", input, options)
end
@doc """
Gets information about one or more instances that are part of a deployment
group.
"""
def batch_get_deployment_instances(client, input, options \\ []) do
request(client, "BatchGetDeploymentInstances", input, options)
end
@doc """
Gets information about one or more deployments.
"""
def batch_get_deployments(client, input, options \\ []) do
request(client, "BatchGetDeployments", input, options)
end
@doc """
Gets information about one or more on-premises instances.
"""
def batch_get_on_premises_instances(client, input, options \\ []) do
request(client, "BatchGetOnPremisesInstances", input, options)
end
@doc """
For a blue/green deployment, starts the process of rerouting traffic from
instances in the original environment to instances in the replacement
environment without waiting for a specified wait time to elapse. (Traffic
rerouting, which is achieved by registering instances in the replacement
environment with the load balancer, can start as soon as all instances have
a status of Ready.)
"""
def continue_deployment(client, input, options \\ []) do
request(client, "ContinueDeployment", input, options)
end
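  # Usage sketch (the deployment ID is hypothetical):
  #
  #     continue_deployment(client, %{"deploymentId" => "d-EXAMPLE111"})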
@doc """
Creates an application.
"""
def create_application(client, input, options \\ []) do
request(client, "CreateApplication", input, options)
end
@doc """
Deploys an application revision through the specified deployment group.
"""
def create_deployment(client, input, options \\ []) do
request(client, "CreateDeployment", input, options)
end
@doc """
Creates a deployment configuration.
"""
def create_deployment_config(client, input, options \\ []) do
request(client, "CreateDeploymentConfig", input, options)
end
@doc """
Creates a deployment group to which application revisions will be deployed.
"""
def create_deployment_group(client, input, options \\ []) do
request(client, "CreateDeploymentGroup", input, options)
end
@doc """
Deletes an application.
"""
def delete_application(client, input, options \\ []) do
request(client, "DeleteApplication", input, options)
end
@doc """
Deletes a deployment configuration.
<note> A deployment configuration cannot be deleted if it is currently in
use. Predefined configurations cannot be deleted.
</note>
"""
def delete_deployment_config(client, input, options \\ []) do
request(client, "DeleteDeploymentConfig", input, options)
end
@doc """
Deletes a deployment group.
"""
def delete_deployment_group(client, input, options \\ []) do
request(client, "DeleteDeploymentGroup", input, options)
end
@doc """
Deletes a GitHub account connection.
"""
def delete_git_hub_account_token(client, input, options \\ []) do
request(client, "DeleteGitHubAccountToken", input, options)
end
@doc """
Deregisters an on-premises instance.
"""
def deregister_on_premises_instance(client, input, options \\ []) do
request(client, "DeregisterOnPremisesInstance", input, options)
end
@doc """
Gets information about an application.
"""
def get_application(client, input, options \\ []) do
request(client, "GetApplication", input, options)
end
@doc """
Gets information about an application revision.
"""
def get_application_revision(client, input, options \\ []) do
request(client, "GetApplicationRevision", input, options)
end
@doc """
Gets information about a deployment.
"""
def get_deployment(client, input, options \\ []) do
request(client, "GetDeployment", input, options)
end
@doc """
Gets information about a deployment configuration.
"""
def get_deployment_config(client, input, options \\ []) do
request(client, "GetDeploymentConfig", input, options)
end
@doc """
Gets information about a deployment group.
"""
def get_deployment_group(client, input, options \\ []) do
request(client, "GetDeploymentGroup", input, options)
end
@doc """
Gets information about an instance as part of a deployment.
"""
def get_deployment_instance(client, input, options \\ []) do
request(client, "GetDeploymentInstance", input, options)
end
@doc """
Gets information about an on-premises instance.
"""
def get_on_premises_instance(client, input, options \\ []) do
request(client, "GetOnPremisesInstance", input, options)
end
@doc """
Lists information about revisions for an application.
"""
def list_application_revisions(client, input, options \\ []) do
request(client, "ListApplicationRevisions", input, options)
end
@doc """
Lists the applications registered with the applicable IAM user or AWS
account.
"""
def list_applications(client, input, options \\ []) do
request(client, "ListApplications", input, options)
end
@doc """
Lists the deployment configurations with the applicable IAM user or AWS
account.
"""
def list_deployment_configs(client, input, options \\ []) do
request(client, "ListDeploymentConfigs", input, options)
end
@doc """
Lists the deployment groups for an application registered with the
applicable IAM user or AWS account.
"""
def list_deployment_groups(client, input, options \\ []) do
request(client, "ListDeploymentGroups", input, options)
end
@doc """
Lists the instances for a deployment associated with the applicable IAM user
or AWS account.
"""
def list_deployment_instances(client, input, options \\ []) do
request(client, "ListDeploymentInstances", input, options)
end
@doc """
Lists the deployments in a deployment group for an application registered
with the applicable IAM user or AWS account.
"""
def list_deployments(client, input, options \\ []) do
request(client, "ListDeployments", input, options)
end
@doc """
Lists the names of stored connections to GitHub accounts.
"""
def list_git_hub_account_token_names(client, input, options \\ []) do
request(client, "ListGitHubAccountTokenNames", input, options)
end
@doc """
Gets a list of names for one or more on-premises instances.
Unless otherwise specified, both registered and deregistered on-premises
instance names will be listed. To list only registered or deregistered
on-premises instance names, use the registration status parameter.
"""
def list_on_premises_instances(client, input, options \\ []) do
request(client, "ListOnPremisesInstances", input, options)
end
@doc """
Sets the result of a Lambda validation function. The function validates one
or both lifecycle events (`BeforeAllowTraffic` and `AfterAllowTraffic`) and
returns `Succeeded` or `Failed`.
"""
def put_lifecycle_event_hook_execution_status(client, input, options \\ []) do
request(client, "PutLifecycleEventHookExecutionStatus", input, options)
end
@doc """
Registers with AWS CodeDeploy a revision for the specified application.
"""
def register_application_revision(client, input, options \\ []) do
request(client, "RegisterApplicationRevision", input, options)
end
@doc """
Registers an on-premises instance.
<note> Only one IAM ARN (an IAM session ARN or IAM user ARN) is supported
in the request. You cannot use both.
</note>
"""
def register_on_premises_instance(client, input, options \\ []) do
request(client, "RegisterOnPremisesInstance", input, options)
end
@doc """
Removes one or more tags from one or more on-premises instances.
"""
def remove_tags_from_on_premises_instances(client, input, options \\ []) do
request(client, "RemoveTagsFromOnPremisesInstances", input, options)
end
@doc """
In a blue/green deployment, overrides any specified wait time and starts
terminating instances immediately after the traffic routing is completed.
"""
def skip_wait_time_for_instance_termination(client, input, options \\ []) do
request(client, "SkipWaitTimeForInstanceTermination", input, options)
end
@doc """
Attempts to stop an ongoing deployment.
"""
def stop_deployment(client, input, options \\ []) do
request(client, "StopDeployment", input, options)
end
@doc """
Changes the name of an application.
"""
def update_application(client, input, options \\ []) do
request(client, "UpdateApplication", input, options)
end
@doc """
Changes information about a deployment group.
"""
def update_deployment_group(client, input, options \\ []) do
request(client, "UpdateDeploymentGroup", input, options)
end
@spec request(map(), binary(), map(), list()) ::
{:ok, Poison.Parser.t | nil, Poison.Response.t} |
{:error, Poison.Parser.t} |
{:error, HTTPoison.Error.t}
defp request(client, action, input, options) do
client = %{client | service: "codedeploy"}
host = get_host("codedeploy", client)
url = get_url(host, client)
headers = [{"Host", host},
{"Content-Type", "application/x-amz-json-1.1"},
{"X-Amz-Target", "CodeDeploy_20141006.#{action}"}]
payload = Poison.Encoder.encode(input, [])
headers = AWS.Request.sign_v4(client, "POST", url, headers, payload)
case HTTPoison.post(url, payload, headers, options) do
{:ok, response=%HTTPoison.Response{status_code: 200, body: ""}} ->
{:ok, nil, response}
{:ok, response=%HTTPoison.Response{status_code: 200, body: body}} ->
{:ok, Poison.Parser.parse!(body), response}
{:ok, _response=%HTTPoison.Response{body: body}} ->
error = Poison.Parser.parse!(body)
exception = error["__type"]
message = error["message"]
{:error, {exception, message}}
{:error, %HTTPoison.Error{reason: reason}} ->
{:error, %HTTPoison.Error{reason: reason}}
end
end
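  # Note: request/4 above sends every operation as a JSON POST to the service
  # root; the operation is selected by the "X-Amz-Target" header
  # ("CodeDeploy_20141006.<Action>") rather than by the URL path.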
defp get_host(endpoint_prefix, client) do
if client.region == "local" do
"localhost"
else
"#{endpoint_prefix}.#{client.region}.#{client.endpoint}"
end
end
defp get_url(host, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}/"
end
end
|
lib/aws/code_deploy.ex
| 0.791418 | 0.470128 |
code_deploy.ex
|
starcoder
|
defmodule Elibuf.Primitives.Enum do
defstruct name: nil, values: [], allow_alias: false
defmodule Value do
defstruct name: nil, order: nil, type: :enum
def new_value(name, order) when is_bitstring(name) and is_integer(order) and order >= 0 do
%__MODULE__{name: name, order: order}
end
def new_value(name) when is_bitstring(name) do
%__MODULE__{name: name}
end
def set_name(%__MODULE__{} = value, name_value) when is_bitstring(name_value) do
%{value | name: name_value}
end
def has_name?(%__MODULE__{} = value) do
is_bitstring(value.name)
end
def set_order(%__MODULE__{} = value, order_value) when is_integer(order_value) do
%{value | order: order_value}
end
def has_order?(%__MODULE__{} = value) do
is_integer(value.order) && value.order >= 0
end
def generate(%__MODULE__{} = value) do
"\t" <> value.name <> " = " <> Integer.to_string(value.order) <> "; // " <> inspect(value)
end
def generate_list(valuelist, :auto_order) when is_list(valuelist) do
generate_list(valuelist, 0)
end
def generate_list(valuelist, starting_point) when is_list(valuelist) and is_integer(starting_point) do
valuelist
|> Enum.reverse
|> Enum.with_index(starting_point)
|> Enum.map(fn value ->
real_value = elem(value, 0)
set_order(real_value, elem(value, 1))
end)
|> Enum.map(fn value ->
generate(value)
end)
end
def generate_list(valuelist) when is_list(valuelist) do
valuelist
|> Enum.sort(&(&1.order <= &2.order))
|> Enum.map(fn value ->
generate(value)
end)
end
def validate(%__MODULE__{} = value) do
validation_errors = %{}
|> Map.put(:has_name, has_name?(value))
|> Map.put(:has_order, has_order?(value))
case validation_errors do
%{has_name: false} -> {validation_errors, false}
%{has_order: false} -> {validation_errors, false}
_ -> {validation_errors, true}
end
end
def valid?(%__MODULE__{} = value) do
elem(validate(value), 1)
end
end
def new_enum() do
%__MODULE__{}
end
def new_enum(name) when is_bitstring(name) do
%__MODULE__{name: name}
end
def new_enum(name, :allow_alias) when is_bitstring(name) do
%__MODULE__{name: name, allow_alias: true}
end
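  # Building and rendering an enum, as a sketch (names and orders are
  # illustrative; `generate/1` sorts values by order before rendering):
  #
  #     alias Elibuf.Primitives.Enum, as: PbEnum
  #     enum =
  #       PbEnum.new_enum("Status")
  #       |> PbEnum.add_value(PbEnum.Value.new_value("UNKNOWN", 0))
  #       |> PbEnum.add_value(PbEnum.Value.new_value("ACTIVE", 1))
  #     IO.puts(PbEnum.generate(enum))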
def set_name(%__MODULE__{} = enum, name_value) when is_bitstring(name_value) do
%{enum | name: name_value}
end
def has_name?(%__MODULE__{} = enum) do
enum.name != nil && is_bitstring(enum.name)
end
def add_value(%__MODULE__{} = enum, %Value{} = enum_value) do
%{enum | values: [enum_value | enum.values ]}
end
def remove_value(%__MODULE__{} = enum, value_name) when is_bitstring(value_name) do
index = enum.values
|> Enum.find_index(&(&1.name == value_name))
new_list = enum.values
|> List.delete_at(index)
%{enum | values: new_list}
end
def remove_value(%__MODULE__{} = enum, %Value{} = enum_value) do
new_list = enum.values
|> List.delete(enum_value)
%{enum | values: new_list}
end
def has_value?(%__MODULE__{} = enum, value_name) when is_bitstring(value_name) do
enum.values
|> Enum.any?(&(&1.name == value_name))
end
def has_values?(%__MODULE__{} = enum) do
length(enum.values) > 0
end
def set_alias(%__MODULE__{} = enum, alias_value) when is_boolean(alias_value) do
%{enum | allow_alias: alias_value}
end
def toggle_alias(%__MODULE__{} = enum) do
%{enum | allow_alias: !enum.allow_alias}
end
def allow_alias?(%__MODULE__{} = enum) do
enum.allow_alias
end
# True when two or more values share an order, i.e. when the enum needs
# `allow_alias` set to be valid.
def should_alias?(%__MODULE__{} = enum) do
uniq_values = enum.values
|> Enum.uniq_by(fn %{} = value -> value.order end)
length(enum.values) != length(uniq_values)
end
def validate(%__MODULE__{} = enum) do
validation_errors = %{}
|> Map.put(:has_name, has_name?(enum))
|> Map.put(:has_values, has_values?(enum))
|> Map.put(:aliasing_check, should_alias?(enum))
case validation_errors do
%{has_name: false} -> {validation_errors, false}
%{has_values: false} -> {validation_errors, false}
_ -> {validation_errors, true}
end
end
def valid?(%__MODULE__{} = enum) do
elem(validate(enum), 1)
end
def generate(%__MODULE__{} = enum) do
return_value =
case allow_alias?(enum) do
true -> "enum " <> enum.name <> " {\n\toption allow_alias = true;\n"
false -> "enum " <> enum.name <> " {\n"
end
values = enum.values
|> Value.generate_list
|> Enum.join("\n")
return_value <> values <> "\n}\n"
end
def generate(%__MODULE__{} = enum, :auto_order) do
return_value =
case allow_alias?(enum) do
true -> "enum " <> enum.name <> " { // " <> inspect(Map.delete(enum, :values)) <> "\n\toption allow_alias = true;\n"
false -> "enum " <> enum.name <> " {\n"
end
values = enum.values
|> Value.generate_list(:auto_order)
|> Enum.join("\n")
return_value <> values <> "\n}\n"
end
end
|
lib/primitives/enum.ex
| 0.664105 | 0.511656 |
enum.ex
|
starcoder
|
defmodule Day6 do
@type orbits() :: %{String.t() => String.t()}
@spec add_orbit([String.t()], orbits()) :: orbits()
defp add_orbit([center, body], orbit_map) do
Map.put(orbit_map, body, center)
end
@spec count_orbits(orbits(), String.t()) :: integer
defp count_orbits(map, center) do
case center do
"COM" -> 1
_ -> 1 + count_orbits(map, map[center])
end
end
@spec map_orbits([String.t()]) :: orbits()
def map_orbits(orbit_definitions) do
Enum.map(orbit_definitions, fn s -> String.split(s, ")") end)
|> Enum.reduce(%{}, &add_orbit/2)
end
@spec map_and_count([String.t()]) :: integer
def map_and_count(orbit_definitions) do
orbit_map = map_orbits(orbit_definitions)
Enum.map(orbit_map, fn {_body, center} -> count_orbits(orbit_map, center) end)
|> Enum.sum()
end
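  # Worked example: with ["COM)A", "A)B"], B orbits A which orbits COM, so A
  # contributes 1 orbit and B contributes 2 (one direct, one indirect):
  #
  #     Day6.map_and_count(["COM)A", "A)B"])
  #     #=> 3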
@spec part1(String.t()) :: integer
def part1(file_name) do
Files.read_lines!(file_name)
|> map_and_count()
end
@spec get_transfer_to_com(orbits(), String.t()) :: [String.t()]
def get_transfer_to_com(orbit_map, body) do
case body do
"COM" -> []
_ -> [body | get_transfer_to_com(orbit_map, orbit_map[body])]
end
end
@spec do_intersect([String.t()], [String.t()]) :: integer
defp do_intersect(path_1, path_2) do
[h_1 | tail_1] = path_1
[h_2 | tail_2] = path_2
case h_1 == h_2 do
false -> length(path_1) + length(path_2) - 2
true -> do_intersect(tail_1, tail_2)
end
end
@spec count_transfers([String.t()], [String.t()]) :: integer
def count_transfers(path_1, path_2) do
do_intersect(Enum.reverse(path_1), Enum.reverse(path_2))
end
@spec transfers(orbits(), String.t(), String.t()) :: integer
def transfers(orbit_map, body_1, body_2) do
path_1 = get_transfer_to_com(orbit_map, body_1)
path_2 = get_transfer_to_com(orbit_map, body_2)
count_transfers(path_1, path_2)
end
@spec part2(String.t()) :: integer
def part2(file_name) do
orbit_map = map_orbits(Files.read_lines!(file_name))
transfers(orbit_map, "YOU", "SAN")
end
end
|
lib/day6.ex
| 0.837354 | 0.563918 |
day6.ex
|
starcoder
|
defmodule Cafex.Protocol.OffsetCommit do
@moduledoc """
This api saves out the consumer's position in the stream for one or more partitions.
The offset commit request support version 0, 1 and 2.
To read more details, visit the [A Guide to The Kafka Protocol](https://cwiki.apache.org/confluence/display/KAFKA/A+Guide+To+The+Kafka+Protocol#AGuideToTheKafkaProtocol-OffsetCommitRequest).
"""
use Cafex.Protocol, api: :offset_commit
@default_consumer_group_generation_id -1
@default_timestamp -1
defrequest do
field :api_version, [default: 0], api_version
field :consumer_group, [default: "cafex"], String.t
field :consumer_group_generation_id, integer | nil
field :consumer_id, String.t | nil
field :retention_time, integer | nil
field :topics, [topic]
@type api_version :: 0 | 1 | 2
@type topic :: {topic_name :: String.t, partitions :: [partition]}
@type partition :: partition_v0 | partition_v1 | partition_v2
@type partition_v0 :: {partition :: integer, offset :: integer, metadata :: binary}
@type partition_v1 :: {partition :: integer, offset :: integer, timestamp :: integer, metadata :: binary}
@type partition_v2 :: {partition :: integer, offset :: integer, metadata :: binary}
end
defresponse do
field :topics, [topic]
@type topic :: {topic_name :: String.t, partitions :: [partition]}
@type partition :: {partition :: integer, error :: Cafex.Protocol.error}
end
def api_version(%Request{api_version: api_version}), do: api_version
def encode(request) do
request |> fill_default |> do_encode
end
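  # Wire-format differences between versions: v0 carries only the consumer
  # group and topics; v1 adds the generation id, consumer id, and a
  # per-partition timestamp; v2 drops the per-partition timestamp in favour
  # of a request-level retention time.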
defp do_encode(%{api_version: 0} = request), do: encode_0(request)
defp do_encode(%{api_version: 1} = request), do: encode_1(request)
defp do_encode(%{api_version: 2} = request), do: encode_2(request)
defp fill_default(%{api_version: version,
consumer_group_generation_id: id,
consumer_id: consumer_id,
retention_time: time,
topics: topics} = request) do
id = case id do
nil -> @default_consumer_group_generation_id
other -> other
end
time = case time do
nil -> @default_timestamp
other -> other
end
consumer_id = case consumer_id do
nil -> ""
other -> other
end
topics = case version do
1 ->
Enum.map(topics, fn {topic_name, partitions} ->
partitions = Enum.map(partitions, fn
{p, o, m} -> {p, o, @default_timestamp, m}
{_, _, _, _} = partition -> partition
end)
{topic_name, partitions}
end)
_ -> topics
end
%{request | consumer_group_generation_id: id, consumer_id: consumer_id, retention_time: time, topics: topics}
end
defp encode_0(%{consumer_group: consumer_group, topics: topics}) do
[encode_string(consumer_group),
encode_array(topics, &encode_topic_0/1)]
|> IO.iodata_to_binary
end
defp encode_1(%{consumer_group: consumer_group,
consumer_group_generation_id: consumer_group_generation_id,
consumer_id: consumer_id,
topics: topics}) do
[encode_string(consumer_group),
<<consumer_group_generation_id :: 32-signed>>,
encode_string(consumer_id),
encode_array(topics, &encode_topic_1/1)]
|> IO.iodata_to_binary
end
defp encode_2(%{consumer_group: consumer_group,
consumer_group_generation_id: consumer_group_generation_id,
consumer_id: consumer_id,
retention_time: retention_time,
topics: topics}) do
[encode_string(consumer_group),
<<consumer_group_generation_id :: 32-signed>>,
encode_string(consumer_id),
<<retention_time :: 64>>,
encode_array(topics, &encode_topic_2/1)]
|> IO.iodata_to_binary
end
defp encode_topic_0(data), do: encode_topic(data, &encode_partition_0/1)
defp encode_topic_1(data), do: encode_topic(data, &encode_partition_1/1)
defp encode_topic_2(data), do: encode_topic(data, &encode_partition_2/1)
defp encode_topic({topic, partitions}, func) do
[encode_string(topic),
encode_array(partitions, func)]
end
defp encode_partition_0({partition, offset, metadata}) do
[<< partition :: 32-signed, offset :: 64 >>, encode_string(metadata)]
end
defp encode_partition_1({partition, offset, timestamp, metadata}) do
[<< partition :: 32-signed, offset :: 64, timestamp :: 64 >>, encode_string(metadata)]
end
defp encode_partition_2(data), do: encode_partition_0(data)
@spec decode(binary) :: Response.t
def decode(data) when is_binary(data) do
{topics, _} = decode_array(data, &decode_topic/1)
%Response{topics: topics}
end
defp decode_topic(<< size :: 16-signed, topic :: size(size)-binary, rest :: binary >>) do
{partitions, rest} = decode_array(rest, &decode_partition/1)
{{topic, partitions}, rest}
end
defp decode_partition(<< partition :: 32-signed, error_code :: 16-signed, rest :: binary >>) do
{{partition, decode_error(error_code)}, rest}
end
end
|
lib/cafex/protocol/offset_commit.ex
| 0.730097 | 0.409103 |
offset_commit.ex
|
starcoder
|
defmodule VolleyFire do
@moduledoc ~S"""
This module provides a self-scheduling task runner.
There are two main functions:
* roll
The idea is to start all the tasks on the list in
a wrapper that waits to receive a `:fire` message.
The controller sends out count `:fire` messages, and
when any of those tasks finishes, it sends a `:fire`
message to the next task on the list. There's no evidence
that doing it this way makes much sense, but it
was fun to write and shows what is possible on the BEAM.
* rank
Does the same thing, but only calls Task.async
when new slots are available. No `:fire` messages
are required.
"""
@doc ~S"""
Keep count tasks active from the list.
roll starts all the tasks and puts them into a receive
loop waiting for a `:fire` message. As tasks finish, tasks
in the list are sent `:fire` messages.
The function returns a pid_list of the final count tasks.
"""
def roll(function_list, count) do
pid_list = ready(function_list)
{fire_now, rolling} = Enum.split(pid_list, count)
Enum.map(fire_now, &fire(&1))
await(fire_now, rolling, &fire(&1))
end
@doc ~S"""
Keep count tasks active from the list.
rank starts count tasks from the list and
calls `Task.async` as the initial tasks finish
execution.
The function returns a pid_list of the final count tasks.
"""
def rank(function_list, count) do
{start_now, rolling} = Enum.split(function_list, count)
pid_list = Enum.map(start_now, &start(&1))
await(pid_list, rolling, &start(&1))
end
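  # Usage sketch: keep at most 2 of these 4 tasks running at a time, then
  # await whatever is still in flight (the task bodies are placeholders):
  #
  #     1..4
  #     |> Enum.map(fn i -> fn -> Process.sleep(100 * i); i end end)
  #     |> VolleyFire.rank(2)
  #     |> Enum.map(&Task.await/1)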
# need to return the entire task structure.
def fire(%Task{pid: pid} = task) do
send pid, :fire
task
end
def fire(task_list) when is_list(task_list) do
Enum.map(task_list, &fire(&1))
end
def ready(function_list) when is_list(function_list) do
Enum.map(function_list, &ready(&1))
end
def ready(function) do
Task.async(fn ->
receive do
:fire -> function.()
end
end)
end
def start(function_list) when is_list(function_list) do
Enum.map(function_list, &start(&1))
end
def start(function) do
Task.async(function)
end
def await(tasks,[],_ready_function) do
tasks
end
def await(tasks, rolling, fire_function) do
still_running = Enum.filter(tasks, fn(task) -> is_nil(Task.yield(task,0)) end)
{to_fire, rest} = Enum.split(rolling, Enum.count(tasks) - Enum.count(still_running))
now_running = fire_function.(to_fire)
await(still_running ++ now_running, rest, fire_function)
end
end
|
lib/volley_fire.ex
| 0.64579 | 0.684547 |
volley_fire.ex
|
starcoder
|
defmodule Collision.Vector.Vector3 do
@moduledoc """
Three dimensional vectors.
"""
defstruct x: 0.0, y: 0.0, z: 0.0
alias Collision.Vector.Vector3
@type t :: Vector3.t
@doc """
Convert a tuple to a vector.
## Examples
iex> Collision.Vector.Vector3.from_tuple({1.0, 1.5, 2.0})
%Collision.Vector.Vector3{x: 1.0, y: 1.5, z: 2.0}
"""
@spec from_tuple({float, float, float}) :: t
def from_tuple({x, y, z}), do: %Vector3{x: x, y: y, z: z}
@doc """
Cross product of two vectors
## Examples
iex> Collision.Vector.Vector3.cross_product(
...> %Collision.Vector.Vector3{x: 2.0, y: 1.0, z: -1.0},
...> %Collision.Vector.Vector3{x: -3.0, y: 4.0, z: 1}
...> )
%Collision.Vector.Vector3{x: 5.0, y: 1.0, z: 11.0}
"""
@spec cross_product(t, t) :: t
def cross_product(%Vector3{x: x1, y: y1, z: z1},
%Vector3{x: x2, y: y2, z: z2}) do
x_term = -z1 * y2 + y1 * z2
y_term = z1 * x2 - x1 * z2
z_term = -y1 * x2 + x1 * y2
%Vector3{x: x_term, y: y_term, z: z_term}
end
defimpl Vector, for: Vector3 do
@type t :: Vector3.t
@type scalar :: float
@spec to_tuple(t) :: {float, float, float}
def to_tuple(%Vector3{x: x, y: y, z: z}), do: {x, y, z}
@spec round_components(t, integer) :: t
def round_components(%Vector3{x: x, y: y, z: z}, n) do
%Vector3{x: Float.round(x, n), y: Float.round(y, n), z: Float.round(z, n)}
end
@spec scalar_mult(t, scalar) :: t
def scalar_mult(%Vector3{x: x, y: y, z: z}, k) do
%Vector3{x: x * k, y: y * k, z: z * k}
end
@spec add(t, t) :: t
def add(%Vector3{x: x1, y: y1, z: z1}, %Vector3{x: x2, y: y2, z: z2}) do
%Vector3{x: x1 + x2, y: y1 + y2, z: z1 + z2}
end
@spec subtract(t, t) :: t
def subtract(%Vector3{x: x1, y: y1, z: z1}, %Vector3{x: x2, y: y2, z: z2}) do
%Vector3{x: x1 - x2, y: y1 - y2, z: z1 - z2}
end
@spec magnitude(t) :: float
def magnitude(%Vector3{} = v1) do
:math.sqrt(magnitude_squared(v1))
end
@spec magnitude_squared(t) :: float
def magnitude_squared(%Vector3{} = v1) do
dot_product(v1, v1)
end
@spec normalize(t) :: t
def normalize(%Vector3{x: x1, y: y1, z: z1} = v1) do
mag = magnitude(v1)
%Vector3{x: x1 / mag, y: y1 / mag, z: z1 / mag}
end
@spec dot_product(t, t) :: float
def dot_product(%Vector3{x: x1, y: y1, z: z1}, %Vector3{x: x2, y: y2, z: z2}) do
x1 * x2 + y1 * y2 + z1 * z2
end
@spec projection(t, t) :: t
def projection(%Vector3{} = v1, %Vector3{} = v2) do
dot = dot_product(v1, v2)
dot_normalized = dot / magnitude_squared(v2)
Vector.scalar_mult(v2, dot_normalized)
end
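    # Sanity check: projecting (2, 0, 0) onto (1, 1, 0) yields (1, 1, 0),
    # since the dot product (2.0) divided by |v2|^2 (2.0) gives a scale of 1.0:
    #
    #     v1 = Collision.Vector.Vector3.from_tuple({2.0, 0.0, 0.0})
    #     v2 = Collision.Vector.Vector3.from_tuple({1.0, 1.0, 0.0})
    #     Vector.projection(v1, v2)
    #     #=> %Collision.Vector.Vector3{x: 1.0, y: 1.0, z: 0.0}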
end
end
|
lib/collision/vector/vector3.ex
| 0.931983 | 0.991364 |
vector3.ex
|
starcoder
|
defmodule Gi do
@moduledoc """
Manipulate images through the GraphicsMagick (`gm`) command-line interface.
"""
import Gi.Command
import Gi.Image
alias Gi.{Image, Command}
@doc """
Opens image source, raises a `File.Error` exception in case of failure.
## Parameters
- path: path to file image.
## Example
iex> Gi.open("test/example.jpg")
%Gi.Image{
animated: false,
dirty: %{},
ext: ".jpg",
format: nil,
frame_count: 1,
height: nil,
list_command: [],
path: "test/example.jpg",
width: nil
}
"""
@spec open(binary()) :: Image.t()
def open(path) do
unless File.regular?(path), do: raise(File.Error)
%Image{path: path, ext: Path.extname(path)}
end
@doc """
Get information of image.
## Example
iex> Gi.open("test/example.jpg")
...> |> Gi.gm_identify
%Gi.Image{
animated: false,
dirty: %{},
ext: ".jpg",
format: "JPEG (Joint Photographic Experts Group JFIF format)",
frame_count: 1,
height: 312,
list_command: [],
path: "test/example.jpg",
width: 820
}
"""
@spec gm_identify(Image.t()) :: Image.t()
def gm_identify(image) do
# Todo: check animated
{output, 0} = System.cmd("gm", ["identify", "-verbose", image.path])
format = Regex.named_captures(~r/Format: (?<format>[[:alnum:][:blank:]()]+)/, output)
image = %{image | format: format["format"]}
geo = Regex.named_captures(~r/Geometry: (?<geometry>\w+)/, output)
Regex.named_captures(~r/(?<width>\w+)x(?<height>\d+)/, geo["geometry"])
|> Enum.reduce(image, fn {k, v}, acc ->
Map.put(acc, String.to_atom(k), String.to_integer(v))
end)
end
@doc """
Save image.
## Options
- :path - Value as path. Save as image to path
## Example
iex> Gi.open("test/example.jpg")
...> |> Gi.save()
%Gi.Image{
animated: false,
dirty: %{},
ext: ".jpg",
format: nil,
frame_count: 1,
height: nil,
list_command: [],
path: "test/example.jpg",
width: nil
}
iex> Gi.open("test/example.jpg")
...> |> Gi.save(path: "test/new_example.jpg")
%Gi.Image{
animated: false,
dirty: %{},
ext: ".jpg",
format: nil,
frame_count: 1,
height: nil,
list_command: [],
path: "test/new_example.jpg",
width: nil
}
"""
@spec save(Image.t(), Keyword.t()) :: Image.t()
def save(image, opt \\ []) do
save_as = Keyword.get(opt, :path)
case save_as do
nil -> do_save(image)
path -> do_save_as(image, path)
end
end
@doc """
Mogrify image with option.
## Options
- :resize - Resize image to value "WxH" or "WxH!".
- "WxH" keep ratio of the original image.
- Example: "400x300", "150x100" ...
- "WxH!" exact size.
- Example: "300x200!", "200x100!" ...
- :format - Format image to value as jpg, png, webp...
- :draw - Draw object on image:
- "text x,y string" - draw string at position x,y.
- Example: "text 150,150 'Theta.vn'"
- "image Over x,y,w,h file" - draw file on image at position x,y with width w va height h.
- Example: "image Over 0,0,400,600 d/logo.png"
- :pointsize - pointsize of the PostScript, X11, or TrueType font for text, value as integer.
## Example
# Resize image to width x height with ratio (WxH)
iex> Gi.open("test/example.jpg")
...> |> Gi.gm_mogrify(resize: "200x100")
...> |> Gi.save(path: "test/example_resize.jpg")
iex> Gi.open("test/example_resize.jpg")
...> |> Gi.gm_identify
%Gi.Image{
animated: false,
dirty: %{},
ext: ".jpg",
format: "JPEG (Joint Photographic Experts Group JFIF format)",
frame_count: 1,
height: 76,
list_command: [],
path: "test/example_resize.jpg",
width: 200
}
# Resize image to width x height (WxH!)
Gi.open("example.jpg") # example.jpg (300x200)
|> Gi.gm_mogrify(resize: "200x100!")
|> Gi.save() # => example.jpg (200x100)
# Format image to jpg, png, webp, ...
Gi.open("example.jpg")
|> Gi.gm_mogrify(format: "webp")
|> Gi.save() # => create new file "example.webp"
# Draw text on image "text x,y string"
Gi.open("example.jpg")
|> Gi.gm_mogrify(draw: "text 150,150 'Lang Pham'")
|> Gi.save()
# Draw text on image "text x,y string" with pointsize,
Gi.open("example.jpg")
|> Gi.gm_mogrify([pointsize: 30, draw: "text 150,150 'Lang Pham'"])
|> Gi.save()
# Draw image on image "image Over x,y,w,h file"
Gi.open("example.jpg")
|> Gi.gm_mogrify(draw: "image Over 100,100,200, 200 dir/logo.a")
|> Gi.save()
# Multi utilities
Gi.open("example.jpg")
|> Gi.gm_mogrify([resize: "300x200", draw: "text 150,150 'Theta.vn'"])
|> Gi.save()
"""
@spec gm_mogrify(Image.t(), Keyword.t()) :: Image.t()
def gm_mogrify(image, opts) do
param =
Enum.reduce(opts, [], fn x, acc -> acc ++ ["-#{Atom.to_string(elem(x, 0))}", elem(x, 1)] end)
c = %Command{
command: :gm,
sub_command: :mogrify,
param: param
}
format =
Keyword.pop_values(opts, :format)
|> elem(0)
|> List.last()
dirty =
case format do
nil -> %{}
ext -> %{mogrify_format: ext}
end
image = %{image | dirty: dirty}
add_command(image, c)
end
@doc """
Combine multiple images into one
## Example
# Combine multiple images into one
iex> Gi.open("test/frame.png")
...> |> Gi.gm_composite(["test/example.jpg","test/save.png"])
%Gi.Image{
animated: false,
dirty: %{},
ext: ".png",
format: nil,
frame_count: 1,
height: nil,
list_command: [],
path: "test/save.png",
width: nil
}
"""
@spec gm_composite(Image.t(), []) :: Image.t()
def gm_composite(image, list_path) do
param = list_path
c = %Command{
command: :gm,
sub_command: :composite,
param: param
}
add_command(image, c)
|> do_command()
end
@spec do_save_as(Image.t(), String.t()) :: Image.t()
defp do_save_as(image, path) do
dir_name = Path.dirname(path)
File.mkdir_p!(dir_name)
File.cp(image.path, path)
image = %{image | path: path}
if length(image.list_command) == 0 do
image
else
do_command(image)
end
end
@spec do_save(Image.t()) :: Image.t()
defp do_save(image) do
if length(image.list_command) == 0 do
image
else
do_command(image)
end
end
@spec add_command(Image.t(), command) :: Image.t() when command: Command.t()
defp add_command(image, command) when is_command(command) do
command = image.list_command ++ [command]
%{image | list_command: command}
end
defp add_command(image, _), do: image
defp do_command(image) do
Enum.reduce(image.list_command, image, fn action, img -> do_action(img, action) end)
end
defp do_action(img, action) do
case action.command do
:gm -> do_gm(img, action)
_ -> img
end
end
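  # do_gm/2 shells out to the `gm` binary. For :mogrify the image path is
  # appended as the last argument and the file is edited in place (a pending
  # :format change recorded in image.dirty swaps the file extension
  # afterwards); for :composite the last element of the param list is the
  # output path, which becomes the image's new path.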
defp do_gm(image, action) do
case action.sub_command do
nil ->
image
:mogrify ->
param = [Atom.to_string(action.sub_command) | action.param] ++ [image.path]
System.cmd(Atom.to_string(action.command), param)
case Map.get(image.dirty, :mogrify_format) do
nil ->
image
ext ->
file = image.path
%{image | path: "#{Path.rootname(file)}.#{ext}"}
end
:composite ->
param = action.param
if length(param) < 2 do
image
else
{_head, tail} = Enum.split(param, -1)
param_with_path = [Atom.to_string(action.sub_command)] ++ [image.path | param]
System.cmd(Atom.to_string(action.command), param_with_path)
result = %{image | path: List.first(tail)}
%{result | list_command: []}
end
_ ->
image
end
end
end
|
lib/gi.ex
| 0.812682 | 0.479747 |
gi.ex
|
starcoder
|
defmodule Exexec do
@moduledoc """
Execute and control OS processes from Elixir.
An idiomatic Elixir wrapper for the excellent
[erlexec](https://github.com/saleyn/erlexec) library, Exexec provides an Elixir
interface as well as some nice Elixir-y goodies on top.
"""
import Exexec.ToErl
@type command :: String.t() | [Path.t() | [String.t()]]
@type os_pid :: non_neg_integer
@type gid :: non_neg_integer
@type output_file_option ::
{:append, boolean}
| {:mode, non_neg_integer}
@type output_device :: :stdout | :stderr
@type output_file_options :: [output_file_option]
@type output_device_option ::
boolean
| :null
| :close
| :print
| Path.t()
| {Path.t(), output_file_options}
| pid
| (output_device, os_pid, binary -> any)
@type command_option ::
{:monitor, boolean}
| {:sync, boolean}
| {:executable, Path.t()}
| {:cd, Path.t()}
| {:env, %{String.t() => String.t()}}
| {:kill_command, String.t()}
| {:kill_timeout, non_neg_integer}
| {:kill_group, boolean}
| {:group, String.t()}
| {:user, String.t()}
| {:success_exit_code, exit_code}
| {:nice, -20..20}
| {:stdin, boolean | :null | :close | Path.t()}
| {:stdout, :stderr | output_device_option}
| {:stderr, :stdout | output_device_option}
| {:pty, boolean}
@type command_options :: [command_option]
@type exec_option ::
{:debug, boolean | non_neg_integer}
| {:root, boolean}
| {:verbose, boolean}
| {:args, [String.t()]}
| {:alarm, non_neg_integer}
| {:user, String.t()}
| {:limit_users, [String.t()]}
| {:port_path, Path.t()}
| {:env, %{String.t() => String.t()}}
@type exec_options :: [exec_option]
@type signal :: pos_integer
@type on_run ::
{:ok, pid, os_pid}
| {:ok, [{output_device, [binary]}]}
| {:error, any}
@type exit_code :: non_neg_integer
@doc """
Send `signal` to `pid`.
`pid` can be an `Exexec` pid, OS pid, or port.
"""
@spec kill(pid | os_pid | port, signal) :: :ok | {:error, any}
defdelegate kill(pid, signal), to: :exec
@doc """
Start an `Exexec` process to manage existing `os_pid` with options `options`.
`os_pid` can also be a port.
"""
@spec manage(os_pid | port) :: {:ok, pid, os_pid} | {:error, any}
@spec manage(os_pid | port, command_options) :: {:ok, pid, os_pid} | {:error, any}
def manage(os_pid, options \\ []),
do: :exec.manage(os_pid, command_options_to_erl(options))
@doc """
Returns the OS pid for `Exexec` process `pid`.
"""
@spec os_pid(pid) :: {:ok, os_pid} | {:error, any}
def os_pid(pid) do
case :exec.ospid(pid) do
{:error, reason} -> {:error, reason}
os_pid -> {:ok, os_pid}
end
end
@doc """
Returns the `Exexec` pid for `os_pid`.
"""
@spec pid(os_pid) :: {:ok, pid} | {:error, any}
def pid(os_pid) do
case :exec.pid(os_pid) do
{:error, reason} -> {:error, reason}
:undefined -> {:error, :undefined}
pid -> {:ok, pid}
end
end
@doc """
Run an external `command` with `options`.
"""
@spec run(command) :: on_run
@spec run(command, command_options) :: on_run
def run(command, options \\ []) do
command = command_to_erl(command)
options = command_options_to_erl(options)
:exec.run(command, options)
end
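  # Usage sketch (command and option are illustrative): with `stdout` set to
  # a pid, erlexec delivers output as `{:stdout, os_pid, data}` messages.
  #
  #     {:ok, _pid, os_pid} = Exexec.run("echo hello", stdout: self())
  #     receive do
  #       {:stdout, ^os_pid, data} -> IO.inspect(data)
  #     end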
@doc """
Run an external `command` with `options`, linking to the current process.
If the external process exits with code 0, the linked process will not exit.
"""
@spec run_link(command) :: on_run
@spec run_link(command, command_options) :: on_run
def run_link(command, options \\ []) do
command = command_to_erl(command)
options = command_options_to_erl(options)
:exec.run_link(command, options)
end
@doc """
Send `data` to the stdin of `pid`.
`pid` can be an `Exexec` pid or an OS pid.
"""
@spec send(pid | os_pid, binary | :eof) :: :ok
defdelegate send(pid, data), to: :exec
@doc """
Change group ID of `os_pid` to `gid`.
"""
@spec set_gid(os_pid, gid) :: :ok | {:error, any}
defdelegate set_gid(os_pid, gid), to: :exec, as: :setpgid
@doc """
Convert integer `signal` to atom, or return `signal`.
"""
@spec signal(signal) :: atom | integer
defdelegate signal(signal), to: :exec
@doc """
Start `Exexec`.
"""
@spec start() :: {:ok, pid} | {:error, any}
defdelegate start(), to: :exec
@doc """
Start `Exexec` with `options`.
"""
@spec start(exec_options) :: {:ok, pid} | {:error, any}
def start(options) do
:exec.start(exec_options_to_erl(options))
end
@doc """
Start `Exexec` and link to calling process.
"""
@spec start_link :: {:ok, pid} | {:error, any}
def start_link(), do: start_link([])
@doc """
Start `Exexec` with `options` and link to calling process.
"""
@spec start_link(exec_options) :: {:ok, pid} | {:error, any}
def start_link(options) do
:exec.start_link(exec_options_to_erl(options))
end
@doc """
Interpret `exit_code`.
If the program exited by signal, returns `{:signal, signal, core}` where `signal`
is the atom or integer signal and `core` is whether a core file was generated.
"""
@spec status(exit_code) :: {:status, exit_code} | {:signal, signal | atom, boolean}
defdelegate status(exit_code), to: :exec
@doc """
Stop `pid`.
`pid` can be an `Exexec` pid, OS pid, or port.
The OS process is terminated gracefully. If `:kill_command` was specified,
that command is executed and a timer is started. If the process doesn't exit
immediately, then by default after 5 seconds SIGKILL will be sent to the process.
"""
@spec stop(pid | os_pid | port) :: :ok | {:error, any}
defdelegate stop(pid), to: :exec
@doc """
Stop `pid` and wait for it to exit for `timeout` milliseconds.
See `Exexec.stop/1`.
"""
@spec stop_and_wait(pid | os_pid | port) :: :ok | {:error, any}
@spec stop_and_wait(pid | os_pid | port, integer) :: :ok | {:error, any}
def stop_and_wait(pid, timeout \\ 5_000), do: :exec.stop_and_wait(pid, timeout)
@doc """
Return a list of OS pids managed by `Exexec`.
"""
@spec which_children() :: [os_pid]
defdelegate which_children(), to: :exec
end
|
lib/exexec.ex
| 0.745769 | 0.42668 |
exexec.ex
|
starcoder
|
defmodule Tox do
@moduledoc """
Some structs and functions to work with dates, times, durations, periods, and
intervals.
"""
@typedoc """
Units related to dates and times.
"""
@type unit ::
:year
| :month
| :week
| :day
| :hour
| :minute
| :second
| :microsecond
@typedoc """
An amount of time with a specified unit e.g. `{:second, 500}`.
"""
@type duration :: {unit(), integer()}
@typedoc """
Boundaries specifies whether the start and end of an interval are included or
excluded.
* `:open`: start and end are excluded
* `:closed`: start and end are included
* `:left_open`: start is excluded and end is included
* `:right_open`: start is included and end is excluded
"""
@type boundaries :: :closed | :left_open | :right_open | :open
@doc """
Shift the `DateTime`, `NaiveDateTime`, `Date` or `Time` by the given duration.
"""
@spec shift(date_or_time, [duration()]) :: date_or_time
when date_or_time: DateTime.t() | NaiveDateTime.t() | Date.t() | Time.t()
def shift(%DateTime{} = datetime, duration), do: Tox.DateTime.shift(datetime, duration)
def shift(%NaiveDateTime{} = naive, duration), do: Tox.NaiveDateTime.shift(naive, duration)
def shift(%Date{} = date, duration), do: Tox.Date.shift(date, duration)
def shift(%Time{} = time, duration), do: Tox.Time.shift(time, duration)
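  # Example sketch (durations are keyword lists of `t:duration/0` tuples):
  #
  #     Tox.shift(~D[2020-01-01], day: 10)
  #     #=> ~D[2020-01-11]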
@doc false
@spec days_per_week :: integer()
def days_per_week, do: 7
@doc false
@spec week(Calendar.date()) :: {Calendar.year(), non_neg_integer()}
def week(%{calendar: Calendar.ISO, year: year, month: month, day: day}) do
:calendar.iso_week_number({year, month, day})
end
Code.ensure_loaded(Date)
if function_exported?(Date, :beginning_of_week, 2) do
@doc false
@spec day_of_week(Calendar.calendar(), integer(), non_neg_integer, non_neg_integer) :: 1..7
def day_of_week(calendar, year, month, day) do
{day, _epoch_day_of_week, _last_day_of_week} =
calendar.day_of_week(year, month, day, :default)
day
end
else
@doc false
@spec day_of_week(Calendar.calendar(), integer(), non_neg_integer, non_neg_integer) :: 1..7
def day_of_week(calendar, year, month, day) do
calendar.day_of_week(year, month, day)
end
end
end
|
lib/tox.ex
| 0.914415 | 0.791741 |
tox.ex
|
starcoder
|
defmodule FaultTree do
@moduledoc """
Main module for creating and interacting with fault trees.
"""
use TypedStruct
require Logger
alias FaultTree.Node
typedstruct do
field :next_id, integer(), default: 0
field :nodes, list(Node.t()), default: []
end
@type error_type :: {:error, String.t()}
@type result :: t() | error_type()
@doc """
Create a new fault tree with an `OR` gate as the root.
"""
@spec create() :: t()
def create(), do: create(:or)
@doc """
Create a new fault tree with the passed in `Node` as the root.
"""
@spec create(Node.t()) :: t()
def create(root = %Node{}) do
%FaultTree{next_id: root.id + 1, nodes: [root]}
end
@doc """
Create a new fault tree and generate a node of the given type for the root.
"""
@spec create(atom) :: t()
def create(root_type) when root_type != :basic do
%Node{id: 0, name: "root", type: root_type}
|> create()
end
@doc """
Add a node to the fault tree. Some validations are performed to make sure the node can
logically be added to the tree.
"""
@spec add_node(FaultTree.t(), Node.t()) :: result()
def add_node(tree, node) do
id = tree.next_id
node = node |> Map.put(:id, id)
case validate_node(tree, node) do
{:error, msg} ->
Logger.error(msg)
{:error, :invalid}
{:ok, tree} ->
tree
|> Map.put(:next_id, id + 1)
|> Map.update!(:nodes, fn nodes -> [node | nodes] end)
end
end
@doc """
Add a basic node to the fault tree with a pre-defined probability.
"""
def add_basic(tree, probability, name), do: add_basic(tree, nil, probability, name, nil)
def add_basic(tree, parent, probability, name), do: add_basic(tree, parent, probability, name, nil)
def add_basic(tree, parent, probability, name, description) do
node = %Node{type: :basic, name: name, probability: Decimal.new(probability),
parent: parent, description: description}
add_node(tree, node)
end
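  # Building a small two-event tree under an AND root, as a sketch (names and
  # probabilities are illustrative; probabilities are given as strings so
  # `Decimal.new/1` parses them exactly):
  #
  #     tree =
  #       FaultTree.create(:and)
  #       |> FaultTree.add_basic("root", "0.01", "pump_failure")
  #       |> FaultTree.add_basic("root", "0.02", "valve_failure")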
@doc """
Add a logic gate to the fault tree.
"""
def add_logic(tree, parent, type, name, description \\ nil) do
node = %Node{type: type, name: name, parent: parent, description: description}
add_node(tree, node)
end
@doc """
Add an OR gate to the fault tree. Any child nodes failing will cause this node to fail.
"""
def add_or_gate(tree, parent, name, description \\ nil), do: add_logic(tree, parent, :or, name, description)
@doc """
Add an AND gate to the fault tree. All children must fail for this node to fail.
"""
def add_and_gate(tree, parent, name, description \\ nil), do: add_logic(tree, parent, :and, name, description)
@doc """
Add an ATLEAST/VOTING gate to the fault tree. This requires that a minimum of K out of N child nodes fail in
order for this node to be marked as failing.
"""
def add_atleast_gate(tree, parent, min, total, name, description \\ nil) do
node = %Node{type: :atleast, name: name, parent: parent, description: description, atleast: {min, total}}
add_node(tree, node)
end
@doc """
Add a transfer node. This is a reference to a node that already exists in the tree. Transfer nodes cannot have anything modified;
changes must happen on the source.
"""
def add_transfer(tree, parent, source) do
node = %Node{type: :transfer, source: source, name: source, parent: parent}
add_node(tree, node)
end
@doc """
Perform some validation for a new node against the existing tree.
"""
def validate_node(tree, node) do
with {:ok, tree} <- validate_parent(tree, node),
{:ok, tree} <- validate_probability(tree, node),
{:ok, tree} <- validate_atleast(tree, node),
{:ok, tree} <- validate_transfer(tree, node),
{:ok, tree} <- validate_name(tree, node) do
{:ok, tree}
else
err -> err
end
end
@doc """
Validate that the name of the node is unique in the fault tree.
"""
def validate_name(tree, %{type: :transfer, name: name, source: source}) when name == source, do: {:ok, tree}
def validate_name(tree, %{name: new_name}) do
case Enum.find(tree.nodes, fn %{name: name} -> name == new_name end) do
nil -> {:ok, tree}
_ -> {:error, "Name already exists in the tree"}
end
end
@doc """
Validate that the gate types allow setting this node as a child of its listed parent.
"""
def validate_parent(tree, %Node{parent: nil}), do: {:ok, tree}
def validate_parent(tree, node) do
parent = find_by_field(tree, :name, node.parent)
case parent do
nil -> {:error, "Parent not found in tree"}
%Node{type: :basic} -> {:error, "Basic nodes cannot have children"}
%Node{type: :transfer} -> {:error, "Transfer nodes cannot have children"}
%Node{type: :atleast} ->
case {node.type, find_children(parent, tree.nodes)} do
{_, []} -> {:ok, tree}
{:transfer, children} ->
case Enum.filter(children, fn %{name: name} -> name != node.name end) do
[] -> {:ok, tree}
_ -> {:error, "ATLEAST gates can only have a single child node"}
end
{_, _} -> {:error, "ATLEAST gates can only have a single child node"}
end
_ -> {:ok, tree}
end
end
@doc """
Validate that a probability is only set on basic nodes.
Logic gates will have their probability calculated when the tree is built.
"""
def validate_probability(tree, node) do
case node do
%Node{type: :basic, probability: p} when p != nil and p > 0 -> {:ok, tree}
%Node{type: :basic} -> {:error, "Basic events must have a probability set"}
%Node{probability: p} when p != nil and p > 0 -> {:error, "Only basic events should have a probability set"}
_ -> {:ok, tree}
end
end
@doc """
Validate that ATLEAST gates have their parameters set.
"""
def validate_atleast(tree, node) do
case node do
%Node{type: :atleast, atleast: nil} -> {:error, "ATLEAST gates must have minimum and total set"}
_ -> {:ok, tree}
end
end
@doc """
Validate that TRANSFER gates have a source that exists in the tree.
"""
def validate_transfer(tree, node = %Node{type: :transfer}) do
case find_by_field(tree, :name, node.source) do
nil -> {:error, "Source not found for TRANSFER gate"}
_ -> {:ok, tree}
end
end
def validate_transfer(tree, _node), do: {:ok, tree}
@doc """
Convert a tree to JSON.
"""
@spec to_json(t() | map()) :: String.t()
def to_json(tree = %FaultTree{}), do: tree |> build() |> to_json()
def to_json(tree) do
Poison.encode!(tree)
end
def build(tree), do: FaultTree.Analyzer.process(tree)
@doc """
Convert from a string containing fault tree logic into the tree object.
"""
@spec parse(String.t()) :: t()
def parse(doc), do: FaultTree.Parser.XML.parse(doc)
def find_children(%Node{type: :transfer}, _nodes), do: []
def find_children(node, nodes), do: Enum.filter(nodes, fn x -> x.parent == node.name end)
defp find_by_field(tree, field, value), do: tree.nodes |> Enum.find(fn node -> Map.get(node, field) == value end)
end
|
lib/fault_tree.ex
| 0.895139 | 0.666755 |
fault_tree.ex
|
starcoder
|
defmodule Aecore.Channel.Tx.ChannelSlashTx do
@moduledoc """
Module defining the ChannelSlash transaction
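A hypothetical usage sketch (`offchain_state` is assumed to be a
`ChannelStateOffChain.t()` built elsewhere):
    tx = ChannelSlashTx.create(offchain_state)
    ChannelSlashTx.channel_id(tx)
    ChannelSlashTx.sequence(tx)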
"""
@behaviour Aecore.Tx.Transaction
alias Aecore.Channel.Tx.ChannelSlashTx
alias Aecore.Tx.{SignedTx, DataTx}
alias Aecore.Account.AccountStateTree
alias Aecore.Chain.{Chainstate, Identifier}
alias Aecore.Channel.{ChannelStateOnChain, ChannelStateOffChain, ChannelStateTree}
require Logger
@version 1
@typedoc "Expected structure for the ChannelSlash Transaction"
@type payload :: %{
state: map()
}
@typedoc "Reason for the error"
@type reason :: String.t()
@typedoc "Structure that holds specific transaction info in the chainstate."
@type tx_type_state() :: ChannelStateTree.t()
@typedoc "Structure of the ChannelSlash Transaction type"
@type t :: %ChannelSlashTx{
state: ChannelStateOffChain.t()
}
@doc """
Definition of the ChannelSlashTx structure
## Parameters
- state - the state with which the channel is going to be slashed
"""
defstruct [:state]
@spec get_chain_state_name :: atom()
def get_chain_state_name, do: :channels
@spec init(payload()) :: ChannelSlashTx.t()
def init(%{state: state}) do
%ChannelSlashTx{state: ChannelStateOffChain.init(state)}
end
@spec create(ChannelStateOffChain.t()) :: ChannelSlashTx.t()
def create(state) do
%ChannelSlashTx{state: state}
end
@spec sequence(ChannelSlashTx.t()) :: non_neg_integer()
def sequence(%ChannelSlashTx{state: %ChannelStateOffChain{sequence: sequence}}), do: sequence
@spec channel_id(ChannelSlashTx.t()) :: binary()
def channel_id(%ChannelSlashTx{state: %ChannelStateOffChain{channel_id: id}}), do: id
@doc """
Validates the transaction without considering state
"""
@spec validate(ChannelSlashTx.t(), DataTx.t()) :: :ok | {:error, reason()}
def validate(
%ChannelSlashTx{state: %ChannelStateOffChain{sequence: sequence}},
%DataTx{} = data_tx
) do
senders = DataTx.senders(data_tx)
cond do
length(senders) != 1 ->
{:error, "#{__MODULE__}: Invalid senders size"}
sequence == 0 ->
{:error, "#{__MODULE__}: Can't slash with zero state"}
true ->
:ok
end
end
@doc """
Slashes the channel
"""
@spec process_chainstate(
Chainstate.accounts(),
ChannelStateTree.t(),
non_neg_integer(),
ChannelSlashTx.t(),
DataTx.t()
) :: {:ok, {Chainstate.accounts(), ChannelStateTree.t()}}
def process_chainstate(
accounts,
channels,
block_height,
%ChannelSlashTx{
state:
%ChannelStateOffChain{
channel_id: channel_id
} = state
},
_data_tx
) do
new_channels =
ChannelStateTree.update!(channels, channel_id, fn channel ->
ChannelStateOnChain.apply_slashing(channel, block_height, state)
end)
{:ok, {accounts, new_channels}}
end
@doc """
Validates the transaction with state considered
"""
@spec preprocess_check(
Chainstate.accounts(),
ChannelStateTree.t(),
non_neg_integer(),
ChannelSlashTx.t(),
DataTx.t()
) :: :ok | {:error, reason()}
def preprocess_check(
accounts,
channels,
_block_height,
%ChannelSlashTx{state: state},
%DataTx{fee: fee} = data_tx
) do
sender = DataTx.main_sender(data_tx)
channel = ChannelStateTree.get(channels, state.channel_id)
cond do
AccountStateTree.get(accounts, sender).balance - fee < 0 ->
{:error, "#{__MODULE__}: Negative sender balance"}
channel == :none ->
{:error, "#{__MODULE__}: Channel doesn't exist (already closed?)"}
ChannelStateOnChain.active?(channel) ->
{:error, "#{__MODULE__}: Can't slash active channel"}
true ->
ChannelStateOnChain.validate_slashing(channel, state)
end
end
@spec deduct_fee(
Chainstate.accounts(),
non_neg_integer(),
ChannelSlashTx.t(),
DataTx.t(),
non_neg_integer()
) :: Chainstate.accounts()
def deduct_fee(accounts, block_height, _tx, %DataTx{} = data_tx, fee) do
DataTx.standard_deduct_fee(accounts, block_height, data_tx, fee)
end
@spec is_minimum_fee_met?(SignedTx.t()) :: boolean()
def is_minimum_fee_met?(%SignedTx{data: %DataTx{fee: fee}}) do
fee >= Application.get_env(:aecore, :tx_data)[:minimum_fee]
end
@spec encode_to_list(ChannelSlashTx.t(), DataTx.t()) :: list()
def encode_to_list(%ChannelSlashTx{state: state}, %DataTx{
senders: senders,
nonce: nonce,
fee: fee,
ttl: ttl
}) do
[
:binary.encode_unsigned(@version),
Identifier.encode_list_to_binary(senders),
:binary.encode_unsigned(nonce),
ChannelStateOffChain.encode_to_list(state),
:binary.encode_unsigned(fee),
:binary.encode_unsigned(ttl)
]
end
@spec decode_from_list(non_neg_integer(), list()) :: {:ok, DataTx.t()} | {:error, reason()}
def decode_from_list(@version, [encoded_senders, nonce, [state_ver_bin | state], fee, ttl]) do
state_ver = :binary.decode_unsigned(state_ver_bin)
case ChannelStateOffChain.decode_from_list(state_ver, state) do
{:ok, state} ->
payload = %ChannelSlashTx{state: state}
DataTx.init_binary(
ChannelSlashTx,
payload,
encoded_senders,
:binary.decode_unsigned(fee),
:binary.decode_unsigned(nonce),
:binary.decode_unsigned(ttl)
)
{:error, _} = error ->
error
end
end
def decode_from_list(@version, data) do
{:error, "#{__MODULE__}: decode_from_list: Invalid serialization: #{inspect(data)}"}
end
def decode_from_list(version, _) do
{:error, "#{__MODULE__}: decode_from_list: Unknown version #{version}"}
end
end
|
apps/aecore/lib/aecore/channel/tx/channel_slash_tx.ex
| 0.878269 | 0.418786 |
channel_slash_tx.ex
|
starcoder
|
defmodule Scidata.KuzushijiMNIST do
@moduledoc """
Module for downloading the [Kuzushiji-MNIST dataset](https://github.com/rois-codh/kmnist).
"""
alias Scidata.Utils
@base_url "http://codh.rois.ac.jp/kmnist/dataset/kmnist/"
@train_image_file "train-images-idx3-ubyte.gz"
@train_label_file "train-labels-idx1-ubyte.gz"
@test_image_file "t10k-images-idx3-ubyte.gz"
@test_label_file "t10k-labels-idx1-ubyte.gz"
@doc """
Downloads the Kuzushiji MNIST training dataset or fetches it locally.
Returns a tuple of format:
{{images_binary, images_type, images_shape},
{labels_binary, labels_type, labels_shape}}
If you want to one-hot encode the labels, you can:
labels_binary
|> Nx.from_binary(labels_type)
|> Nx.new_axis(-1)
|> Nx.equal(Nx.tensor(Enum.to_list(0..9)))
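A hypothetical call sketch (requires network access on first use; KMNIST ships
60,000 training images of 28x28 pixels, so the shapes below follow the IDX headers):
    {{images, {:u, 8}, {60000, 1, 28, 28}}, {labels, {:u, 8}, {60000}}} =
      Scidata.KuzushijiMNIST.download()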
## Options
* `:base_url` - Dataset base URL.
Defaults to `"http://codh.rois.ac.jp/kmnist/dataset/kmnist/"`
* `:train_image_file` - Training set image filename.
Defaults to `"train-images-idx3-ubyte.gz"`
* `:train_label_file` - Training set label filename.
Defaults to `"train-labels-idx1-ubyte.gz"`
* `:cache_dir` - Cache directory.
Defaults to `System.tmp_dir!()`
"""
def download(opts \\ []) do
{download_images(:train, opts), download_labels(:train, opts)}
end
@doc """
Downloads the Kuzushiji MNIST test dataset or fetches it locally.
## Options
* `:base_url` - Dataset base URL.
Defaults to `"http://codh.rois.ac.jp/kmnist/dataset/kmnist/"`
* `:test_image_file` - Test set image filename.
Defaults to `"t10k-images-idx3-ubyte.gz"`
* `:test_label_file` - Test set label filename.
Defaults to `"t10k-labels-idx1-ubyte.gz"`
* `:cache_dir` - Cache directory.
Defaults to `System.tmp_dir!()`
"""
def download_test(opts \\ []) do
{download_images(:test, opts), download_labels(:test, opts)}
end
defp download_images(:train, opts) do
download_images(opts[:train_image_file] || @train_image_file, opts)
end
defp download_images(:test, opts) do
download_images(opts[:test_image_file] || @test_image_file, opts)
end
defp download_images(filename, opts) do
base_url = opts[:base_url] || @base_url
data = Utils.get!(base_url <> filename, opts).body
<<_::32, n_images::32, n_rows::32, n_cols::32, images::binary>> = data
{images, {:u, 8}, {n_images, 1, n_rows, n_cols}}
end
defp download_labels(:train, opts) do
download_labels(opts[:train_label_file] || @train_label_file, opts)
end
defp download_labels(:test, opts) do
download_labels(opts[:test_label_file] || @test_label_file, opts)
end
defp download_labels(filename, opts) do
base_url = opts[:base_url] || @base_url
data = Utils.get!(base_url <> filename, opts).body
<<_::32, n_labels::32, labels::binary>> = data
{labels, {:u, 8}, {n_labels}}
end
end
|
lib/scidata/kuzushiji_mnist.ex
| 0.858941 | 0.609495 |
kuzushiji_mnist.ex
|
starcoder
|
defmodule Rajska.ObjectAuthorization do
@moduledoc """
Absinthe middleware to ensure object permissions.
Authorizes all of Absinthe's [objects](https://hexdocs.pm/absinthe/Absinthe.Schema.Notation.html#object/3) requested in a query by checking the permission defined in each object's meta `authorize`.
## Usage
[Create your Authorization module and add it and QueryAuthorization to your Absinthe.Schema](https://hexdocs.pm/rajska/Rajska.html#module-usage). Then set the permitted role to access an object:
```elixir
object :wallet_balance do
meta :authorize, :admin
field :total, :integer
end
object :company do
meta :authorize, :user
field :name, :string
field :wallet_balance, :wallet_balance
end
object :user do
meta :authorize, :all
field :email, :string
field :company, :company
end
```
With the permissions above, a query like the following would only be allowed by an admin user:
```graphql
{
userQuery {
name
email
company {
name
walletBalance { total }
}
}
}
```
Object Authorization middleware runs after Query Authorization middleware (if added) and before the query is resolved. It recursively checks the requested objects' permissions using the `c:Rajska.Authorization.is_role_authorized?/2` function (which is also used by Query Authorization); that function can be overridden by your own implementation.
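A hypothetical wiring sketch using Absinthe's standard `middleware/3` callback
(`MyApp.Schema` is assumed; check Rajska's own docs for its helper functions):
```elixir
defmodule MyApp.Schema do
  use Absinthe.Schema

  # Append the middleware to query and mutation fields only.
  def middleware(middleware, _field, %Absinthe.Type.Object{identifier: identifier})
      when identifier in [:query, :mutation] do
    middleware ++ [Rajska.ObjectAuthorization]
  end

  def middleware(middleware, _field, _object), do: middleware
end
```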
"""
@behaviour Absinthe.Middleware
alias Absinthe.{
Resolution,
Schema,
Type
}
alias Rajska.Introspection
alias Type.{Custom, Scalar}
def call(%Resolution{state: :resolved} = resolution, _config), do: resolution
def call(%Resolution{definition: definition} = resolution, _config) do
authorize(definition.schema_node.type, definition.selections, resolution)
end
defp authorize(type, fields, resolution) do
type
|> Introspection.get_object_type()
|> lookup_object(resolution.schema)
|> authorize_object(fields, resolution)
end
defp lookup_object(object_type, schema) do
Schema.lookup_type(schema, object_type)
end
# When is a Scalar, Custom or Enum type, authorize.
defp authorize_object(%type{} = object, fields, resolution)
when type in [Scalar, Custom, Type.Enum, Type.Enum.Value] do
put_result(true, fields, resolution, object)
end
# When is an user defined object, lookup the authorize meta tag.
defp authorize_object(object, fields, resolution) do
object
|> Type.meta(:authorize)
|> is_authorized?(resolution.context, object)
|> put_result(fields, resolution, object)
end
defp is_authorized?(nil, _, object), do: raise "No meta authorize defined for object #{inspect object.identifier}"
defp is_authorized?(permission, context, _object) do
Rajska.apply_auth_mod(context, :is_context_authorized?, [context, permission])
end
defp put_result(true, fields, resolution, _type), do: find_associations(fields, resolution)
defp put_result(false, _fields, resolution, object) do
Resolution.put_result(resolution, {:error, "Not authorized to access object #{object.identifier}"})
end
defp find_associations([%{selections: []} | tail], resolution) do
find_associations(tail, resolution)
end
defp find_associations(
[%{schema_node: schema_node, selections: selections} | tail],
resolution
) do
authorize(schema_node.type, selections ++ tail, resolution)
end
defp find_associations([], resolution), do: resolution
end
|
lib/middlewares/object_authorization.ex
| 0.869035 | 0.895477 |
object_authorization.ex
|
starcoder
|
defmodule Bamboo.PostmarkHelper do
@moduledoc """
Functions for using features specific to Postmark, e.g. templates
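A hypothetical combined sketch (template alias and params are illustrative):
    email
    |> Bamboo.PostmarkHelper.tag("welcome-email")
    |> Bamboo.PostmarkHelper.template("welcome-template", %{"name" => "John"})
    |> Bamboo.PostmarkHelper.put_param("TrackOpens", true)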
"""
alias Bamboo.Email
@doc """
Set a single tag for an email that allows you to categorize outgoing emails
and get detailed statistics.
A convenience function for `put_private(email, :tag, "my-tag")`
## Example
tag(email, "welcome-email")
"""
def tag(email, tag) do
Email.put_private(email, :tag, tag)
end
@doc """
Send emails using Postmark's template API.
Set up Postmark to send emails using a template. Use this in conjunction with
the template content to offload template rendering to Postmark. The
template id specified here must match the template id in Postmark.
Postmark's API docs for this can be found [here](https://postmarkapp.com/developer/api/templates-api#email-with-template).
## Example
template(email, "9746128")
template(email, "9746128", %{"name" => "Name", "content" => "John"})
"""
def template(email, template_id, template_model \\ %{})
def template(email, template_id, template_model) when is_integer(template_id) do
email
|> Email.put_private(:template_id, template_id)
|> Email.put_private(:template_model, template_model)
end
def template(email, template_alias, template_model) when is_binary(template_alias) do
email
|> Email.put_private(:template_alias, template_alias)
|> Email.put_private(:template_model, template_model)
end
@doc """
Put extra message parameters that are used by Postmark. You can set things
like TrackOpens, TrackLinks or Attachments.
## Example
put_param(email, "TrackLinks", "HtmlAndText")
put_param(email, "TrackOpens", true)
put_param(email, "Attachments", [
%{
Name: "file.txt",
Content: "/some/file.txt" |> File.read!() |> Base.encode64(),
ContentType: "txt"
}
])
"""
def put_param(%Email{private: %{message_params: _}} = email, key, value) do
put_in(email.private[:message_params][key], value)
end
def put_param(email, key, value) do
email
|> Email.put_private(:message_params, %{})
|> put_param(key, value)
end
end
|
lib/bamboo/postmark_helper.ex
| 0.814643 | 0.413063 |
postmark_helper.ex
|
starcoder
|
defmodule Performance.Kafka do
@moduledoc """
Utilities for working with Kafka in performance tests
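A hypothetical tuning sketch (`:my_app` and the field values are illustrative;
`SetupConfig` fields mirror the consumer options used in `generate_consumer_scenarios/1`):
    Performance.Kafka.tune_consumer_parameters(
      :my_app,
      %Performance.SetupConfig{max_bytes: 10_000_000, max_wait_time: 10_000}
    )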
"""
use Retry
import SmartCity.TestHelper, only: [eventually: 3]
alias Performance.SetupConfig
require Logger
def tune_consumer_parameters(otp_app, %SetupConfig{} = params) do
{_messages, kafka_parameters} = Map.split(params, [:messages])
existing_topic_config = Application.get_env(otp_app, :topic_subscriber_config, Keyword.new())
updated_topic_config =
Keyword.merge(
existing_topic_config,
Keyword.new(Map.from_struct(kafka_parameters))
)
Application.put_env(otp_app, :topic_subscriber_config, updated_topic_config)
Logger.info("Tuned kafka config:")
Application.get_env(otp_app, :topic_subscriber_config)
|> inspect()
|> Logger.info()
end
def get_message_count(endpoints, topic, num_partitions) do
0..(num_partitions - 1)
|> Enum.map(fn partition -> :brod.resolve_offset(endpoints, topic, partition) end)
|> Enum.map(fn {:ok, value} -> value end)
|> Enum.sum()
end
def load_messages(endpoints, dataset, topic, messages, expected_count, producer_chunk_size) do
num_producers = max(div(expected_count, producer_chunk_size), 1)
producer_name = :"#{topic}_producer"
Logger.info(
"Loading #{expected_count} messages into kafka with #{num_producers} producers for topic #{
topic
}"
)
{:ok, producer_pid} =
Elsa.Supervisor.start_link(
endpoints: endpoints,
producer: [topic: topic],
connection: producer_name
)
Elsa.Producer.ready?(producer_name)
messages
|> Stream.map(&prepare_messages(&1, dataset))
|> Stream.chunk_every(producer_chunk_size)
|> Enum.map(&spawn_producer_chunk(&1, topic, producer_name))
|> Enum.each(&Task.await(&1, :infinity))
eventually(
fn ->
current_total = get_total_messages(endpoints, topic, 1)
current_total >= expected_count
end,
200,
5000
)
Process.exit(producer_pid, :normal)
Logger.info("Done loading #{expected_count} messages into #{topic}")
end
def get_total_messages(endpoints, topic, num_partitions \\ 1) do
0..(num_partitions - 1)
|> Enum.map(fn partition -> :brod.resolve_offset(endpoints, topic, partition) end)
|> Enum.map(fn {:ok, value} -> value end)
|> Enum.sum()
end
def wait_for_topic!(endpoints, topic) do
wait exponential_backoff(100) |> Stream.take(10) do
Elsa.topic?(endpoints, topic)
after
_ -> topic
else
_ -> raise "Timed out waiting for #{topic} to be available"
end
end
def generate_consumer_scenarios(message_combos) do
low_max_bytes = {"lmb", 1_000_000}
mid_max_bytes = {"mmb", 10_000_000}
high_max_bytes = {"hmb", 100_000_000}
low_max_wait_time = {"lmw", 1_000}
mid_max_wait_time = {"mmw", 10_000}
high_max_wait_time = {"hmw", 60_000}
low_min_bytes = {"lmib", 0}
mid_min_bytes = {"mmib", 5_000}
high_min_bytes = {"hmib", 1_000_000}
low_prefetch_count = {"lpc", 0}
mid_prefetch_count = {"mpc", 100_000}
high_prefetch_count = {"hpc", 1_000_000}
low_prefetch_bytes = {"lpb", 1_000_000}
mid_prefetch_bytes = {"mpb", 10_000_000}
high_prefetch_bytes = {"hpb", 100_000_000}
combos =
Combinatorics.product([
message_combos,
[low_max_bytes, mid_max_bytes, high_max_bytes],
[low_max_wait_time, mid_max_wait_time, high_max_wait_time],
[low_min_bytes, mid_min_bytes, high_min_bytes],
[low_prefetch_count, mid_prefetch_count, high_prefetch_count],
[low_prefetch_bytes, mid_prefetch_bytes, high_prefetch_bytes]
])
Enum.map(combos, fn l ->
{names, values} = Enum.unzip(l)
label = Enum.join(names, ".")
options =
Enum.zip(
[:messages, :max_bytes, :max_wait_time, :min_bytes, :prefetch_count, :prefetch_bytes],
values
)
|> Keyword.new()
{label, struct(SetupConfig, options)}
end)
|> Map.new()
end
def setup_topics(names, endpoints) do
Enum.map(names, fn name ->
Elsa.create_topic(endpoints, name)
wait_for_topic!(endpoints, name)
end)
|> List.to_tuple()
end
def setup_topics(prefixes, dataset, endpoints) do
Enum.map(prefixes, fn prefix ->
topic = "#{prefix}-#{dataset.id}"
Logger.info("Setting up #{topic} for #{dataset.id}")
topic
end)
|> setup_topics(endpoints)
end
def delete_topics(names, endpoints) do
Enum.map(names, fn name ->
Elsa.delete_topic(endpoints, name)
end)
|> List.to_tuple()
end
def delete_topics(prefixes, dataset, endpoints) do
Enum.map(prefixes, fn prefix ->
topic = "#{prefix}-#{dataset.id}"
Logger.info("Deleting topic #{topic} for #{dataset.id}")
topic
end)
|> delete_topics(endpoints)
end
defp prepare_messages({key, message}, dataset) do
json =
message
|> Map.put(:dataset_id, dataset.id)
|> Jason.encode!()
{key, json}
end
defp spawn_producer_chunk(chunk, topic, producer_name) do
Task.async(fn ->
chunk
|> Stream.chunk_every(1000)
|> Enum.each(fn load_chunk ->
Elsa.produce(producer_name, topic, load_chunk, partition: 0)
end)
end)
end
end
|
apps/performance/lib/performance/kafka.ex
| 0.649245 | 0.425874 |
kafka.ex
|
starcoder
|
defmodule Geocalc do
@moduledoc """
Calculate distance, bearing and more between Latitude/Longitude points.
"""
alias Geocalc.Calculator
alias Geocalc.Calculator.Polygon
alias Geocalc.Point
@doc """
Calculates distance between 2 points.
Return distance in meters.
## Example
iex> berlin = [52.5075419, 13.4251364]
iex> paris = [48.8588589, 2.3475569]
iex> Geocalc.distance_between(berlin, paris)
878327.4291149472
iex> Geocalc.distance_between(paris, berlin)
878327.4291149472
## Example
iex> berlin = %{lat: 52.5075419, lon: 13.4251364}
iex> london = %{lat: 51.5286416, lng: -0.1015987}
iex> paris = %{latitude: 48.8588589, longitude: 2.3475569}
iex> Geocalc.distance_between(berlin, paris)
878327.4291149472
iex> Geocalc.distance_between(paris, london)
344229.88946533133
"""
@spec distance_between(Point.t(), Point.t()) :: number
def distance_between(point_1, point_2) do
Calculator.distance_between(point_1, point_2)
end
@doc """
Calculates if a point is within a given radius of
the center of a circle. Return boolean.
## Example
iex> berlin = [52.5075419, 13.4251364]
iex> paris = [48.8588589, 2.3475569]
iex> Geocalc.within?(10, paris, berlin)
false
iex> Geocalc.within?(10, berlin, paris)
false
## Example
iex> san_juan = %{lat: 18.4655, lon: 66.1057}
iex> puerto_rico = %{lat: 18.2208, lng: 66.5901}
iex> Geocalc.within?(170_000, puerto_rico, san_juan)
true
"""
@spec within?(number, Point.t(), Point.t()) :: boolean()
def within?(radius, _center, _point) when radius < 0, do: false
def within?(radius, center, point) do
Calculator.distance_between(center, point) <= radius
end
@doc """
Calculates if a point is within a polygon. Return boolean.
## Example
iex> point = [14.952242, 60.1696017]
iex> poly = [[24.950899, 60.169158], [24.953492, 60.169158], [24.953510, 60.170104], [24.950958, 60.169990]]
iex> Geocalc.within?(poly, point)
false
## Example
iex> point = [24.952242, 60.1696017]
iex> poly = [[24.950899, 60.169158], [24.953492, 60.169158], [24.953510, 60.170104], [24.950958, 60.169990]]
iex> Geocalc.within?(poly, point)
true
## Example
iex> point = [24.976567, 60.1612500]
iex> poly = [[24.950899, 60.169158], [24.953492, 60.169158], [24.953510, 60.170104], [24.950958, 60.169990]]
iex> Geocalc.within?(poly, point)
false
"""
@spec within?([Point.t()], Point.t()) :: boolean()
def within?(poly, point) do
Polygon.point_in_polygon?(poly, point)
end
@doc """
Calculates bearing.
Return radians.
## Example
iex> berlin = {52.5075419, 13.4251364}
iex> paris = {48.8588589, 2.3475569}
iex> Geocalc.bearing(berlin, paris)
-1.9739245359361486
iex> Geocalc.bearing(paris, berlin)
1.0178267866082613
## Example
iex> berlin = %{lat: 52.5075419, lon: 13.4251364}
iex> paris = %{latitude: 48.8588589, longitude: 2.3475569}
iex> Geocalc.bearing(berlin, paris)
-1.9739245359361486
"""
@spec bearing(Point.t(), Point.t()) :: number
def bearing(point_1, point_2) do
Calculator.bearing(point_1, point_2)
end
@doc """
Finds point between start and end points in direction to end point
with given distance (in meters).
Finds point from start point with given distance (in meters) and bearing.
Return array with latitude and longitude.
## Example
iex> berlin = [52.5075419, 13.4251364]
iex> paris = [48.8588589, 2.3475569]
iex> bearing = Geocalc.bearing(berlin, paris)
iex> distance = 400_000
iex> Geocalc.destination_point(berlin, bearing, distance)
{:ok, [50.97658022467569, 8.165929595956982]}
## Example
iex> zero_point = {0.0, 0.0}
iex> equator_degrees = 90.0
iex> equator_bearing = Geocalc.degrees_to_radians(equator_degrees)
iex> distance = 1_000_000
iex> Geocalc.destination_point(zero_point, equator_bearing, distance)
{:ok, [5.484172965344896e-16, 8.993216059187306]}
## Example
iex> berlin = %{lat: 52.5075419, lon: 13.4251364}
iex> bearing = -1.9739245359361486
iex> distance = 100_000
iex> Geocalc.destination_point(berlin, bearing, distance)
{:ok, [52.147030316318904, 12.076990111001148]}
## Example
iex> berlin = [52.5075419, 13.4251364]
iex> paris = [48.8588589, 2.3475569]
iex> distance = 250_000
iex> Geocalc.destination_point(berlin, paris, distance)
{:ok, [51.578054644172525, 10.096282782248409]}
"""
@type point_or_bearing() :: Point.t() | number
@spec destination_point(Point.t(), point_or_bearing(), number) :: tuple
def destination_point(point_1, point_2, distance) do
Calculator.destination_point(point_1, point_2, distance)
end
@doc """
Finds intersection point from start points with given bearings.
Return array with latitude and longitude.
Raise an exception if no intersection point found.
## Example
iex> berlin = [52.5075419, 13.4251364]
iex> berlin_bearing = -2.102
iex> london = [51.5286416, -0.1015987]
iex> london_bearing = 1.502
iex> Geocalc.intersection_point(berlin, berlin_bearing, london, london_bearing)
{:ok, [51.49271112601574, 10.735322818996854]}
## Example
iex> berlin = {52.5075419, 13.4251364}
iex> london = {51.5286416, -0.1015987}
iex> paris = {48.8588589, 2.3475569}
iex> Geocalc.intersection_point(berlin, london, paris, london)
{:ok, [51.5286416, -0.10159869999999019]}
## Example
iex> berlin = %{lat: 52.5075419, lng: 13.4251364}
iex> bearing = Geocalc.degrees_to_radians(90.0)
iex> Geocalc.intersection_point(berlin, bearing, berlin, bearing)
{:error, "No intersection point found"}
"""
@spec intersection_point(Point.t(), point_or_bearing(), Point.t(), point_or_bearing()) :: tuple
def intersection_point(point_1, bearing_1, point_2, bearing_2) do
Calculator.intersection_point(point_1, bearing_1, point_2, bearing_2)
rescue
ArithmeticError -> {:error, "No intersection point found"}
end
@doc """
Calculates a bounding box around a point with a radius in meters
Returns an array with 2 points (list format): the bottom-left (southwest) point
and the top-right (northeast) one.
## Example
iex> berlin = [52.5075419, 13.4251364]
iex> radius = 10_000
iex> Geocalc.bounding_box(berlin, radius)
[[52.417520954378574, 13.277235453275123], [52.59756284562143, 13.573037346724874]]
"""
@spec bounding_box(Point.t(), number) :: list
def bounding_box(point, radius_in_m) do
Calculator.bounding_box(point, radius_in_m)
end
@doc """
Calculates a bounding box for a list of points
Returns an array with 2 points (list format): the bottom-left (southwest) point
and the top-right (northeast) one.
## Example
iex> berlin = [52.5075419, 13.4251364]
iex> london = [51.5286416, -0.1015987]
iex> paris = [48.8588589, 2.3475569]
iex> Geocalc.bounding_box_for_points([berlin, london, paris])
[[48.8588589, -0.1015987], [52.5075419, 13.4251364]]
"""
@spec bounding_box_for_points(list) :: list
def bounding_box_for_points(points) do
Calculator.bounding_box_for_points(points)
end
@doc """
Extend the bounds to contain the given bounds
Returns an array with 2 points (list format): the bottom-left (southwest) point
and the top-right (northeast) one.
## Example
iex> berlin = [52.5075419, 13.4251364]
iex> london = [51.5286416, -0.1015987]
iex> Geocalc.extend_bounding_box([berlin, berlin], [london, london])
[[51.5286416, -0.1015987], [52.5075419, 13.4251364]]
"""
@spec extend_bounding_box(list, list) :: list
def extend_bounding_box(bounding_box_1, bounding_box_2) do
Calculator.extend_bounding_box(bounding_box_1, bounding_box_2)
end
@doc """
Returns `true` if the bounding box contains the given point.
## Example
iex> germany = [[47.27, 5.87], [55.1, 15.04]]
iex> berlin = [52.5075419, 13.4251364]
iex> Geocalc.contains_point?(germany, berlin)
true
"""
@spec contains_point?(list, Point.t()) :: boolean
def contains_point?(bounding_box, point) do
Calculator.contains_point?(bounding_box, point)
end
@doc """
Returns `true` if the bounding box intersects the given bounds.
Two bounds intersect if they have at least one point in common.
## Example
iex> germany = [[47.27, 5.87], [55.1, 15.04]]
iex> poland = [[49.0, 14.12], [55.03, 24.15]]
iex> Geocalc.intersects_bounding_box?(germany, poland)
true
"""
@spec intersects_bounding_box?(list, list) :: boolean
def intersects_bounding_box?(bounding_box_1, bounding_box_2) do
Calculator.intersects_bounding_box?(bounding_box_1, bounding_box_2)
end
@doc """
Returns `true` if the bounding box overlaps the given bounds.
Two bounds overlap if their intersection is an area.
## Example
iex> germany = [[47.27, 5.87], [55.1, 15.04]]
iex> berlin_suburbs = [[52.338261, 13.08835], [52.67551, 13.76116]]
iex> Geocalc.overlaps_bounding_box?(germany, berlin_suburbs)
true
"""
@spec overlaps_bounding_box?(list, list) :: boolean
def overlaps_bounding_box?(bounding_box_1, bounding_box_2) do
Calculator.overlaps_bounding_box?(bounding_box_1, bounding_box_2)
end
@doc """
Compute the geographic center (aka geographic midpoint, center of gravity)
for an array of geocoded objects and/or [lat,lon] arrays (can be mixed).
Any objects missing coordinates are ignored. Follows the procedure
documented at http://www.geomidpoint.com/calculation.html.
## Example
iex> point_1 = [0, 0]
iex> point_2 = [0, 3]
iex> Geocalc.geographic_center([point_1, point_2])
[0.0, 1.5]
"""
@spec geographic_center(list) :: Point.t()
def geographic_center(points) do
Calculator.geographic_center(points)
end
@doc """
Converts radians to degrees.
Return degrees.
## Example
iex> Geocalc.radians_to_degrees(2.5075419)
143.67156782221554
## Example
iex> Geocalc.radians_to_degrees(-0.1015987)
-5.821176714015797
"""
@spec radians_to_degrees(number) :: number
def radians_to_degrees(radians) do
Calculator.radians_to_degrees(radians)
end
@doc """
Converts degrees to radians.
Return radians.
## Example
iex> Geocalc.degrees_to_radians(143.67156782221554)
2.5075419
## Example
iex> Geocalc.degrees_to_radians(-10.735322818996854)
-0.18736672945597435
"""
@spec degrees_to_radians(number) :: number
def degrees_to_radians(degrees) do
Calculator.degrees_to_radians(degrees)
end
@doc """
Returns maximum latitude reached when travelling on a great circle on given
bearing from the point (Clairaut's formula). Negate the result for the
minimum latitude (in the Southern hemisphere).
The maximum latitude is independent of longitude; it will be the same for all
points on a given latitude.
Return radians.
## Example
iex> berlin = [52.5075419, 13.4251364]
iex> paris = [48.8588589, 2.3475569]
iex> bearing = Geocalc.bearing(berlin, paris)
iex> Geocalc.max_latitude(berlin, bearing)
55.953467429882835
"""
@spec max_latitude(Point.t(), number) :: number
def max_latitude(point, bearing) do
Calculator.max_latitude(point, bearing)
end
@doc """
Compute distance from the point to great circle defined by start-point
and end-point.
Return distance in meters.
## Example
iex> berlin = [52.5075419, 13.4251364]
iex> london = [51.5286416, -0.1015987]
iex> paris = [48.8588589, 2.3475569]
iex> Geocalc.cross_track_distance_to(berlin, london, paris)
-877680.2992295175
"""
@spec cross_track_distance_to(Point.t(), Point.t(), Point.t()) :: number
def cross_track_distance_to(point, path_start_point, path_end_point) do
Calculator.cross_track_distance_to(point, path_start_point, path_end_point)
end
@doc """
Returns the pair of meridians at which a great circle defined by two points
crosses the given latitude.
Return longitudes.
## Example
iex> berlin = [52.5075419, 13.4251364]
iex> paris = [48.8588589, 2.3475569]
iex> Geocalc.crossing_parallels(berlin, paris, 12.3456)
{:ok, 123.179463369946, -39.81144878508576}
## Example
iex> point_1 = %{lat: 0, lng: 0}
iex> point_2 = %{lat: -180, lng: -90}
iex> latitude = 45.0
iex> Geocalc.crossing_parallels(point_1, point_2, latitude)
{:error, "Not found"}
"""
@spec crossing_parallels(Point.t(), Point.t(), number) :: tuple
def crossing_parallels(point_1, path_2, latitude) do
Calculator.crossing_parallels(point_1, path_2, latitude)
end
end
|
lib/geocalc.ex
| 0.92669 | 0.690709 |
geocalc.ex
|
starcoder
|
defmodule Cocktail.ScheduleState do
@moduledoc false
alias Cocktail.{RuleState, Schedule, Span}
@type t :: %__MODULE__{
recurrence_rules: [RuleState.t()],
recurrence_times: [Cocktail.time()],
exception_times: [Cocktail.time()],
start_time: Cocktail.time(),
current_time: Cocktail.time(),
duration: pos_integer | nil
}
@enforce_keys [:start_time, :current_time]
defstruct recurrence_rules: [],
recurrence_times: [],
exception_times: [],
start_time: nil,
current_time: nil,
duration: nil
@spec new(Schedule.t(), Cocktail.time() | nil) :: t
def new(%Schedule{} = schedule, nil), do: new(schedule, schedule.start_time)
def new(%Schedule{} = schedule, current_time) do
current_time =
if Timex.compare(current_time, schedule.start_time) < 0,
do: schedule.start_time,
else: current_time
recurrence_times_after_current_time =
schedule.recurrence_times
|> Enum.filter(&(Timex.compare(&1, current_time) >= 0))
|> Enum.sort(&(Timex.compare(&1, &2) <= 0))
%__MODULE__{
recurrence_rules: schedule.recurrence_rules |> Enum.map(&RuleState.new/1),
recurrence_times: recurrence_times_after_current_time,
exception_times: schedule.exception_times |> Enum.sort(&(Timex.compare(&1, &2) <= 0)),
start_time: schedule.start_time,
current_time: current_time,
duration: schedule.duration
}
|> at_least_one_time()
end
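# A hypothetical internal-usage sketch (this module is private to Cocktail;
# `schedule` is assumed to be a `Cocktail.Schedule.t()`):
#
#     state = Cocktail.ScheduleState.new(schedule, nil)
#     {occurrence, state} = Cocktail.ScheduleState.next_time(state)
#
# `next_time/1` returns `nil` once the schedule is exhausted.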
@spec next_time(t) :: {Cocktail.occurrence(), t} | nil
def next_time(%__MODULE__{} = state) do
{time, remaining_rules} = next_time_from_recurrence_rules(state)
{time, remaining_times} = next_time_from_recurrence_times(state.recurrence_times, time)
{is_exception, remaining_exceptions} = apply_exception_time(state.exception_times, time)
result = next_occurrence_and_state(time, remaining_rules, remaining_times, remaining_exceptions, state)
case result do
{occurrence, state} ->
if is_exception do
next_time(state)
else
{occurrence, state}
end
nil ->
nil
end
end
@spec next_time_from_recurrence_rules(t) :: {Cocktail.time() | nil, [RuleState.t()]}
defp next_time_from_recurrence_rules(state) do
remaining_rules =
state.recurrence_rules
|> Enum.map(&RuleState.next_time(&1, state.current_time, state.start_time))
|> Enum.filter(fn r -> !is_nil(r.current_time) end)
time = min_time_for_rules(remaining_rules)
{time, remaining_rules}
end
@spec next_time_from_recurrence_times([Cocktail.time()], Cocktail.time() | nil) ::
{Cocktail.time() | nil, [Cocktail.time()]}
defp next_time_from_recurrence_times([], current_time), do: {current_time, []}
defp next_time_from_recurrence_times([next_time | rest], nil), do: {next_time, rest}
defp next_time_from_recurrence_times([next_time | rest] = times, current_time) do
if Timex.compare(next_time, current_time) <= 0 do
{next_time, rest}
else
{current_time, times}
end
end
@spec apply_exception_time([Cocktail.time()], Cocktail.time() | nil) :: {boolean, [Cocktail.time()]}
defp apply_exception_time([], _), do: {false, []}
defp apply_exception_time(exceptions, nil), do: {false, exceptions}
defp apply_exception_time([next_exception | rest] = exceptions, current_time) do
if Timex.compare(next_exception, current_time) == 0 do
{true, rest}
else
{false, exceptions}
end
end
@spec next_occurrence_and_state(Cocktail.time(), [RuleState.t()], [Cocktail.time()], [Cocktail.time()], t) ::
{Cocktail.occurrence(), t} | nil
defp next_occurrence_and_state(nil, _, _, _, _), do: nil
defp next_occurrence_and_state(time, rules, times, exceptions, state) do
occurrence = span_or_time(time, state.duration)
new_state = %{
state
| recurrence_rules: rules,
recurrence_times: times,
exception_times: exceptions,
current_time: Timex.shift(time, seconds: 1)
}
{occurrence, new_state}
end
@spec span_or_time(Cocktail.time() | nil, pos_integer | nil) :: Cocktail.occurrence()
defp span_or_time(time, nil), do: time
defp span_or_time(time, duration), do: Span.new(time, Timex.shift(time, seconds: duration))
@spec min_time_for_rules([RuleState.t()]) :: Cocktail.time() | nil
defp min_time_for_rules([]), do: nil
defp min_time_for_rules([rule]), do: rule.current_time
defp min_time_for_rules(rules) do
rules
|> Enum.min_by(&Timex.to_erl(&1.current_time))
|> Map.get(:current_time)
end
@spec at_least_one_time(t) :: t
def at_least_one_time(%__MODULE__{recurrence_rules: [], recurrence_times: []} = state),
do: %{state | recurrence_times: [state.start_time]}
def at_least_one_time(%__MODULE__{} = state), do: state
end
|
lib/cocktail/schedule_state.ex
| 0.763307 | 0.419232 |
schedule_state.ex
|
starcoder
|
defmodule Manticoresearch.Api.Search do
@moduledoc """
API calls for all endpoints tagged `Search`.
"""
alias Manticoresearch.Connection
import Manticoresearch.RequestBuilder
@doc """
Perform reverse search on a percolate index
Performs a percolate search. This method must be used only on percolate indexes. Expects two parameters: the index name and an object with an array of documents to be tested. An example of the documents object: ``` {\"query\":{\"percolate\":{\"document\":{\"content\":\"sample content\"}}}} ``` Responds with an object with matched stored queries: ``` {'timed_out':false,'hits':{'total':2,'max_score':1,'hits':[{'_index':'idx_pq_1','_type':'doc','_id':'2','_score':'1','_source':{'query':{'match':{'title':'some'},}}},{'_index':'idx_pq_1','_type':'doc','_id':'5','_score':'1','_source':{'query':{'ql':'some | none'}}}]}} ```
## Parameters
- connection (Manticoresearch.Connection): Connection to server
- index (String.t): Name of the percolate index
- percolate_request (PercolateRequest):
- opts (KeywordList): [optional] Optional parameters
## Returns
{:ok, %Manticoresearch.Model.SearchResponse{}} on success
{:error, info} on failure
"""
@spec percolate(Tesla.Env.client, String.t, Manticoresearch.Model.PercolateRequest.t, keyword()) :: {:ok, Manticoresearch.Model.SearchResponse.t} | {:error, Tesla.Env.t}
def percolate(connection, index, percolate_request, _opts \\ []) do
%{}
|> method(:post)
|> url("/json/pq/#{index}/search")
|> add_param(:body, :body, percolate_request)
|> Enum.into([])
|> (&Connection.request(connection, &1)).()
|> evaluate_response([
{ 200, %Manticoresearch.Model.SearchResponse{}},
{ :default, %Manticoresearch.Model.ErrorResponse{}}
])
end
@doc """
Performs a search
Expects an object with mandatory properties: * the index name * the match query object Example: ``` {'index':'movies','query':{'bool':{'must':[{'query_string':' movie'}]}},'script_fields':{'myexpr':{'script':{'inline':'IF(rating>8,1,0)'}}},'sort':[{'myexpr':'desc'},{'_score':'desc'}],'profile':true} ``` It responds with an object with: - time of execution - if the query timed out - an array with hits (matched documents) - additionally, if profiling is enabled, an array with profiling information is attached ``` {'took':10,'timed_out':false,'hits':{'total':2,'hits':[{'_id':'1','_score':1,'_source':{'gid':11}},{'_id':'2','_score':1,'_source':{'gid':12}}]}} ``` For more information about the match query syntax and additional parameters that can be set on the input and response, please check: https://docs.manticoresearch.com/latest/html/http_reference/json_search.html.
## Parameters
- connection (Manticoresearch.Connection): Connection to server
- search_request (SearchRequest):
- opts (KeywordList): [optional] Optional parameters
## Returns
{:ok, %Manticoresearch.Model.SearchResponse{}} on success
{:error, info} on failure
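A hypothetical call sketch (the `SearchRequest` fields mirror the JSON payload
shown above; the connection is assumed to come from `Manticoresearch.Connection`):
    {:ok, response} =
      Manticoresearch.Api.Search.search(connection, %Manticoresearch.Model.SearchRequest{
        index: "movies",
        query: %{query_string: "movie"}
      })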
"""
@spec search(Tesla.Env.client, Manticoresearch.Model.SearchRequest.t, keyword()) :: {:ok, Manticoresearch.Model.SearchResponse.t} | {:error, Tesla.Env.t}
def search(connection, search_request, _opts \\ []) do
%{}
|> method(:post)
|> url("/json/search")
|> add_param(:body, :body, search_request)
|> Enum.into([])
|> (&Connection.request(connection, &1)).()
|> evaluate_response([
{ 200, %Manticoresearch.Model.SearchResponse{}},
{ :default, %Manticoresearch.Model.ErrorResponse{}}
])
end
end
|
out/manticoresearch-elixir/lib/manticoresearch/api/search.ex
| 0.844537 | 0.721768 |
search.ex
|
starcoder
|
defmodule Ash.Changeset do
@moduledoc """
Changesets are used to create and update data in Ash.
Create a changeset with `create/2` or `update/2`, and alter the attributes
and relationships using the functions provided in this module. Nothing in this module
actually incurs changes in a data layer. To commit a changeset, see `c:Ash.Api.create/2`
and `c:Ash.Api.update/2`.
## Primary Keys
For relationship manipulation using `append_to_relationship/3`, `remove_from_relationship/3`
and `replace_relationship/3` there are three types that can be used for primary keys:
1.) An instance of the resource in question.
2.) If the primary key is just a single field, e.g. `:id`, then a single value, e.g. `1`
3.) A map of keys to values representing the primary key, e.g. `%{id: 1}` or `%{id: 1, org_id: 2}`
## Join Attributes
For many to many relationships, the attributes on a join relationship may be set while relating items
by passing a tuple of the primary key and the changes to be applied. This is done via upserts, so
update validations on the join resource are *not* applied, but create validations are.
For example:
```elixir
Ash.Changeset.replace_relationship(changeset, :linked_tickets, [
{1, %{link_type: "blocking"}},
{a_ticket, %{link_type: "caused_by"}},
{%{id: 2}, %{link_type: "related_to"}}
])
```
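A hypothetical end-to-end sketch (the resource, API, and `author` record are
assumed names; committing happens through `c:Ash.Api.create/2`):
```elixir
MyApp.Post
|> Ash.Changeset.new(%{title: "Hello"})
|> Ash.Changeset.replace_relationship(:author, author)
|> MyApp.Api.create()
```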
"""
defstruct [
:data,
:action_type,
:resource,
:api,
context: %{},
after_action: [],
before_action: [],
errors: [],
valid?: true,
attributes: %{},
relationships: %{},
change_dependencies: [],
requests: []
]
defimpl Inspect do
import Inspect.Algebra
def inspect(changeset, opts) do
container_doc(
"#Ash.Changeset<",
[
concat("action_type: ", inspect(changeset.action_type)),
concat("attributes: ", to_doc(changeset.attributes, opts)),
concat("relationships: ", to_doc(changeset.relationships, opts)),
concat("errors: ", to_doc(changeset.errors, opts)),
concat("data: ", to_doc(changeset.data, opts)),
concat("valid?: ", to_doc(changeset.valid?, opts))
],
">",
opts,
fn str, _ -> str end
)
end
end
@type t :: %__MODULE__{}
alias Ash.Error.{
Changes.InvalidAttribute,
Changes.InvalidRelationship,
Changes.NoSuchAttribute,
Changes.NoSuchRelationship,
Invalid.NoSuchResource
}
@doc "Return a changeset over a resource or a record"
@spec new(Ash.resource() | Ash.record(), initial_attributes :: map) :: t
def new(resource, initial_attributes \\ %{})
def new(%resource{} = record, initial_attributes) do
if Ash.Resource.resource?(resource) do
%__MODULE__{resource: resource, data: record, action_type: :update}
|> change_attributes(initial_attributes)
else
%__MODULE__{resource: resource, action_type: :create, data: struct(resource)}
|> add_error(NoSuchResource.exception(resource: resource))
end
end
def new(resource, initial_attributes) do
if Ash.Resource.resource?(resource) do
%__MODULE__{resource: resource, action_type: :create, data: struct(resource)}
|> change_attributes(initial_attributes)
else
%__MODULE__{resource: resource, action_type: :create, data: struct(resource)}
|> add_error(NoSuchResource.exception(resource: resource))
end
end
@doc """
Wraps a function in the before/after action hooks of a changeset.
The function takes a changeset and if it returns
`{:ok, result}`, the result will be passed through the after
action hooks.
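A hypothetical sketch (the function receives the changeset after its
`before_action` hooks have run):
    Ash.Changeset.with_hooks(changeset, fn changeset ->
      {:ok, Ash.Changeset.apply_attributes(changeset)}
    end)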
"""
@spec with_hooks(t(), (t() -> {:ok, Ash.record()} | {:error, term})) ::
{:ok, term} | {:error, term}
def with_hooks(changeset, func) do
changeset =
Enum.reduce_while(changeset.before_action, changeset, fn before_action, changeset ->
case before_action.(changeset) do
%{valid?: true} = changeset -> {:cont, changeset}
changeset -> {:halt, changeset}
end
end)
if changeset.valid? do
case func.(changeset) do
{:ok, result} ->
Enum.reduce_while(
changeset.after_action,
{:ok, result},
fn after_action, {:ok, result} ->
case after_action.(changeset, result) do
{:ok, new_result} -> {:cont, {:ok, new_result}}
{:error, error} -> {:halt, {:error, error}}
end
end
)
{:error, error} ->
{:error, error}
end
else
{:error, changeset.errors}
end
end
@doc "Gets the changing value or the original value of an attribute"
@spec get_attribute(t, atom) :: term
def get_attribute(changeset, attribute) do
case fetch_change(changeset, attribute) do
{:ok, value} ->
value
:error ->
get_data(changeset, attribute)
end
end
@doc "Gets the new value for an attribute, or `:error` if it is not being changed"
@spec fetch_change(t, atom) :: {:ok, any} | :error
def fetch_change(changeset, attribute) do
Map.fetch(changeset.attributes, attribute)
end
@doc "Gets the original value for an attribute"
@spec get_data(t, atom) :: term
def get_data(changeset, attribute) do
Map.get(changeset.data, attribute)
end
@spec put_context(t(), atom, term) :: t()
def put_context(changeset, key, value) do
%{changeset | context: Map.put(changeset.context, key, value)}
end
@spec set_context(t(), map) :: t()
def set_context(changeset, map) do
%{changeset | context: Map.merge(changeset.context, map)}
end
@doc """
Appends a record or list of records to a relationship. Stacks with previous removals/additions.
Accepts a primary key or a list of primary keys. See the section on "Primary Keys" in the
module documentation for more.
For many to many relationships, accepts changes for any `join_attributes` configured on
the resource. See the section on "Join Attributes" in the module documentation for more.
Cannot be used with `belongs_to` or `has_one` relationships.
See `replace_relationship/3` for manipulating `belongs_to` and `has_one` relationships.
"""
@spec append_to_relationship(t, atom, Ash.primary_key() | [Ash.primary_key()]) :: t()
def append_to_relationship(changeset, relationship, record_or_records) do
case Ash.Resource.relationship(changeset.resource, relationship) do
nil ->
error =
NoSuchRelationship.exception(
resource: changeset.resource,
name: relationship
)
add_error(changeset, error)
%{cardinality: :one, type: type} = relationship ->
error =
InvalidRelationship.exception(
relationship: relationship.name,
message: "Cannot append to a #{type} relationship"
)
add_error(changeset, error)
%{writable?: false} = relationship ->
error =
InvalidRelationship.exception(
relationship: relationship.name,
message: "Relationship is not editable"
)
add_error(changeset, error)
%{type: :many_to_many} = relationship ->
case primary_keys_with_changes(relationship, List.wrap(record_or_records)) do
{:ok, primary_keys} ->
relationships =
changeset.relationships
|> Map.put_new(relationship.name, %{})
|> add_to_relationship_key_and_reconcile(relationship, :add, primary_keys)
%{changeset | relationships: relationships}
{:error, error} ->
add_error(changeset, error)
end
relationship ->
case primary_key(relationship, List.wrap(record_or_records)) do
{:ok, primary_keys} ->
relationships =
changeset.relationships
|> Map.put_new(relationship.name, %{})
|> add_to_relationship_key_and_reconcile(relationship, :add, primary_keys)
%{changeset | relationships: relationships}
{:error, error} ->
add_error(changeset, error)
end
end
end
@doc """
Removes a record or list of records from a relationship. Stacks with previous removals/additions.
Accepts a primary key or a list of primary keys. See the section on "Primary Keys" in the
module documentation for more.
Cannot be used with `belongs_to` or `has_one` relationships.
See `replace_relationship/3` for manipulating those relationships.
"""
@spec remove_from_relationship(t, atom, Ash.primary_key() | [Ash.primary_key()]) :: t()
def remove_from_relationship(changeset, relationship, record_or_records) do
case Ash.Resource.relationship(changeset.resource, relationship) do
nil ->
error =
NoSuchRelationship.exception(
resource: changeset.resource,
name: relationship
)
add_error(changeset, error)
%{cardinality: :one, type: type} = relationship ->
error =
InvalidRelationship.exception(
relationship: relationship.name,
message: "Cannot remove from a #{type} relationship"
)
add_error(changeset, error)
%{writable?: false} = relationship ->
error =
InvalidRelationship.exception(
relationship: relationship.name,
message: "Relationship is not editable"
)
add_error(changeset, error)
relationship ->
case primary_key(relationship, List.wrap(record_or_records)) do
{:ok, primary_keys} ->
relationships =
changeset.relationships
|> Map.put_new(relationship.name, %{})
|> add_to_relationship_key_and_reconcile(relationship, :remove, primary_keys)
%{changeset | relationships: relationships}
{:error, error} ->
  add_error(changeset, error)
end
end
end
defp add_to_relationship_key_and_reconcile(relationships, relationship, key, to_add) do
Map.update!(relationships, relationship.name, fn relationship_changes ->
relationship_changes
|> Map.put_new(key, [])
|> Map.update!(key, &Kernel.++(to_add, &1))
|> reconcile_relationship_changes()
end)
end
@doc """
Replaces the value of a relationship. Any previous additions/removals are cleared.
Accepts a primary key or a list of primary keys. See the section on "Primary Keys" in the
module documentation for more.
For many to many relationships, accepts changes for any `join_attributes` configured on
the resource. See the section on "Join Attributes" in the module documentation for more.
For a `has_many` or `many_to_many` relationship, this means removing any currently related
records that are not present in the replacement list, and creating any that do not exist
in the data layer.
For a `belongs_to` or `has_one`, replace with a `nil` value to unset a relationship.
"""
@spec replace_relationship(
t(),
atom(),
Ash.primary_key() | [Ash.primary_key()] | nil
) :: t()
def replace_relationship(changeset, relationship, record_or_records) do
case Ash.Resource.relationship(changeset.resource, relationship) do
nil ->
error =
NoSuchRelationship.exception(
resource: changeset.resource,
name: relationship
)
add_error(changeset, error)
%{writable?: false} = relationship ->
error =
InvalidRelationship.exception(
relationship: relationship.name,
message: "Relationship is not editable"
)
add_error(changeset, error)
%{cardinality: :one, type: type}
when is_list(record_or_records) and length(record_or_records) > 1 ->
error =
InvalidRelationship.exception(
relationship: relationship.name,
message: "Cannot replace a #{type} relationship with multiple records"
)
add_error(changeset, error)
%{type: :many_to_many} = relationship ->
case primary_keys_with_changes(relationship, List.wrap(record_or_records)) do
{:ok, primary_key} ->
relationships =
Map.put(changeset.relationships, relationship.name, %{replace: primary_key})
%{changeset | relationships: relationships}
{:error, error} ->
add_error(changeset, error)
end
relationship ->
record =
if relationship.cardinality == :one do
if is_list(record_or_records) do
List.first(record_or_records)
else
record_or_records
end
else
List.wrap(record_or_records)
end
case primary_key(relationship, record) do
{:ok, primary_key} ->
relationships =
Map.put(changeset.relationships, relationship.name, %{replace: primary_key})
%{changeset | relationships: relationships}
{:error, error} ->
add_error(changeset, error)
end
end
end
@doc "Returns true if an attribute exists in the changes"
@spec changing_attribute?(t(), atom) :: boolean
def changing_attribute?(changeset, attribute) do
Map.has_key?(changeset.attributes, attribute)
end
@doc "Returns true if a relationship exists in the changes"
@spec changing_relationship?(t(), atom) :: boolean
def changing_relationship?(changeset, relationship) do
Map.has_key?(changeset.relationships, relationship)
end
@doc "Change an attribute only if is not currently being changed"
@spec change_new_attribute(t(), atom, term) :: t()
def change_new_attribute(changeset, attribute, value) do
if changing_attribute?(changeset, attribute) do
changeset
else
change_attribute(changeset, attribute, value)
end
end
@doc """
Change an attribute if is not currently being changed, by calling the provided function
Use this if you want to only perform some expensive calculation for an attribute value
only if there isn't already a change for that attribute
"""
@spec change_new_attribute_lazy(t(), atom, (() -> any)) :: t()
def change_new_attribute_lazy(changeset, attribute, func) do
if changing_attribute?(changeset, attribute) do
changeset
else
change_attribute(changeset, attribute, func.())
end
end
@doc "Calls `change_attribute/3` for each key/value pair provided"
@spec change_attributes(t(), map | Keyword.t()) :: t()
def change_attributes(changeset, changes) do
Enum.reduce(changes, changeset, fn {key, value}, changeset ->
change_attribute(changeset, key, value)
end)
end
@doc "Adds a change to the changeset, unless the value matches the existing value"
def change_attribute(changeset, attribute, value) do
case Ash.Resource.attribute(changeset.resource, attribute) do
nil ->
error =
NoSuchAttribute.exception(
resource: changeset.resource,
name: attribute
)
add_error(changeset, error)
%{writable?: false} = attribute ->
add_attribute_invalid_error(changeset, attribute, "Attribute is not writable")
attribute ->
with {:ok, casted} <- Ash.Type.cast_input(attribute.type, value),
:ok <- validate_allow_nil(attribute, casted),
:ok <- Ash.Type.apply_constraints(attribute.type, casted, attribute.constraints) do
data_value = Map.get(changeset.data, attribute.name)
cond do
is_nil(data_value) and is_nil(casted) ->
changeset
Ash.Type.equal?(attribute.type, casted, data_value) ->
changeset
true ->
%{changeset | attributes: Map.put(changeset.attributes, attribute.name, casted)}
end
else
:error ->
add_attribute_invalid_error(changeset, attribute)
{:error, error_or_errors} ->
error_or_errors
|> List.wrap()
|> Enum.reduce(changeset, &add_attribute_invalid_error(&2, attribute, &1))
end
end
end
@doc "Calls `force_change_attribute/3` for each key/value pair provided"
@spec force_change_attributes(t(), map) :: t()
def force_change_attributes(changeset, changes) do
Enum.reduce(changes, changeset, fn {key, value}, changeset ->
force_change_attribute(changeset, key, value)
end)
end
@doc "Changes an attribute even if it isn't writable"
@spec force_change_attribute(t(), atom, any) :: t()
def force_change_attribute(changeset, attribute, value) do
case Ash.Resource.attribute(changeset.resource, attribute) do
nil ->
error =
NoSuchAttribute.exception(
resource: changeset.resource,
name: attribute
)
add_error(changeset, error)
attribute ->
with {:ok, casted} <- Ash.Type.cast_input(attribute.type, value),
:ok <- Ash.Type.apply_constraints(attribute.type, casted, attribute.constraints) do
data_value = Map.get(changeset.data, attribute.name)
cond do
is_nil(data_value) and is_nil(casted) ->
changeset
Ash.Type.equal?(attribute.type, casted, data_value) ->
changeset
true ->
%{changeset | attributes: Map.put(changeset.attributes, attribute.name, casted)}
end
else
:error ->
add_attribute_invalid_error(changeset, attribute)
{:error, error_or_errors} ->
error_or_errors
|> List.wrap()
|> Enum.reduce(changeset, &add_attribute_invalid_error(&2, attribute, &1))
end
end
end
@doc "Adds a before_action hook to the changeset."
@spec before_action(t(), (t() -> t())) :: t()
def before_action(changeset, func) do
%{changeset | before_action: [func | changeset.before_action]}
end
@doc "Adds an after_action hook to the changeset."
@spec after_action(t(), (t(), Ash.record() -> {:ok, Ash.record()} | {:error, term})) :: t()
def after_action(changeset, func) do
%{changeset | after_action: [func | changeset.after_action]}
end
@doc "Returns the original data with attribute changes merged."
@spec apply_attributes(t()) :: Ash.record()
def apply_attributes(changeset) do
Enum.reduce(changeset.attributes, changeset.data, fn {attribute, value}, data ->
Map.put(data, attribute, value)
end)
end
@doc "Adds an error to the changesets errors list, and marks the change as `valid?: false`"
@spec add_error(t(), Ash.error()) :: t()
def add_error(changeset, error) do
%{changeset | errors: [error | changeset.errors], valid?: false}
end
defp reconcile_relationship_changes(%{replace: _, add: add} = changes) do
changes
|> Map.delete(:add)
|> Map.update!(:replace, fn replace ->
replace ++ add
end)
|> reconcile_relationship_changes()
end
defp reconcile_relationship_changes(%{replace: _, remove: remove} = changes) do
changes
|> Map.delete(:remove)
|> Map.update!(:replace, fn replace ->
Enum.reject(replace, &(&1 in remove))
end)
|> reconcile_relationship_changes()
end
defp reconcile_relationship_changes(changes) do
changes
|> update_if_present(:replace, &uniq_if_list/1)
|> update_if_present(:remove, &uniq_if_list/1)
|> update_if_present(:add, &uniq_if_list/1)
end
defp uniq_if_list(list) when is_list(list), do: Enum.uniq(list)
defp uniq_if_list(other), do: other
defp update_if_present(map, key, func) do
if Map.has_key?(map, key) do
Map.update!(map, key, func)
else
map
end
end
defp through_changeset(relationship, changes) do
new(relationship.through, changes)
end
defp primary_keys_with_changes(_, []), do: {:ok, []}
defp primary_keys_with_changes(relationship, records) do
Enum.reduce_while(records, {:ok, []}, fn
{record, changes}, {:ok, acc} ->
with {:ok, primary_key} <- primary_key(relationship, record),
%{valid?: true} = changeset <- through_changeset(relationship, changes) do
{:cont, {:ok, [{primary_key, changeset} | acc]}}
else
%{valid?: false, errors: errors} -> {:halt, {:error, errors}}
{:error, error} -> {:halt, {:error, error}}
end
record, {:ok, acc} ->
case primary_key(relationship, record) do
{:ok, primary_key} -> {:cont, {:ok, [primary_key | acc]}}
{:error, error} -> {:halt, {:error, error}}
end
end)
end
defp primary_key(_, nil), do: {:ok, nil}
defp primary_key(relationship, records) when is_list(records) do
case Ash.Resource.primary_key(relationship.destination) do
[_field] ->
multiple_primary_keys(relationship, records)
_ ->
case single_primary_key(relationship, records) do
{:ok, keys} ->
{:ok, keys}
{:error, _} ->
do_primary_key(relationship, records)
end
end
end
defp primary_key(relationship, record) do
do_primary_key(relationship, record)
end
defp do_primary_key(relationship, record) when is_map(record) do
primary_key = Ash.Resource.primary_key(relationship.destination)
is_pkey_map? =
Enum.all?(primary_key, fn key ->
Map.has_key?(record, key) || Map.has_key?(record, to_string(key))
end)
if is_pkey_map? do
pkey =
Enum.reduce(primary_key, %{}, fn key, acc ->
case Map.fetch(record, key) do
{:ok, value} -> Map.put(acc, key, value)
:error -> Map.put(acc, key, Map.get(record, to_string(key)))
end
end)
{:ok, pkey}
else
error =
InvalidRelationship.exception(
relationship: relationship.name,
message: "Invalid identifier #{inspect(record)}"
)
{:error, error}
end
end
defp do_primary_key(relationship, record) do
single_primary_key(relationship, record)
end
defp multiple_primary_keys(relationship, values) do
Enum.reduce_while(values, {:ok, []}, fn record, {:ok, primary_keys} ->
case do_primary_key(relationship, record) do
{:ok, pkey} -> {:cont, {:ok, [pkey | primary_keys]}}
{:error, error} -> {:halt, {:error, error}}
end
end)
end
defp single_primary_key(relationship, value) do
with [field] <- Ash.Resource.primary_key(relationship.destination),
attribute <- Ash.Resource.attribute(relationship.destination, field),
{:ok, casted} <- Ash.Type.cast_input(attribute.type, value) do
{:ok, %{field => casted}}
else
_ ->
error =
InvalidRelationship.exception(
relationship: relationship.name,
message: "Invalid identifier #{inspect(value)}"
)
{:error, error}
end
end
@doc false
def changes_depend_on(changeset, dependency) do
%{changeset | change_dependencies: [dependency | changeset.change_dependencies]}
end
@doc false
def add_requests(changeset, requests) when is_list(requests) do
Enum.reduce(requests, changeset, &add_requests(&2, &1))
end
def add_requests(changeset, request) do
%{changeset | requests: [request | changeset.requests]}
end
defp validate_allow_nil(%{allow_nil?: false} = attribute, nil) do
{:error,
InvalidAttribute.exception(
field: attribute.name,
message: "must be present",
validation: {:present, 1, 1}
)}
end
defp validate_allow_nil(_, _), do: :ok
defp add_attribute_invalid_error(changeset, attribute, message \\ nil) do
error =
InvalidAttribute.exception(
field: attribute.name,
validation: {:cast, attribute.type},
message: message
)
add_error(changeset, error)
end
end
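# A minimal usage sketch for the functions above. `MyApp.Post` is a
# hypothetical Ash resource and the attribute names are illustrative;
# `Ash.Changeset.new/1` is assumed from Ash's public API.
#
#     changeset =
#       MyApp.Post
#       |> Ash.Changeset.new()
#       |> Ash.Changeset.force_change_attributes(%{title: "Hello", locked?: true})
#       |> Ash.Changeset.before_action(fn changeset -> changeset end)
#       |> Ash.Changeset.after_action(fn _changeset, record -> {:ok, record} end)
#
#     # apply_attributes/1 merges the staged changes onto the original data:
#     record = Ash.Changeset.apply_attributes(changeset)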
# source: lib/ash/changeset/changeset.ex
defmodule List do
@moduledoc """
Implements functions that only make sense for lists
and cannot be part of the Enum protocol. In general,
favor using the Enum API instead of List.
Some functions in this module expect an index. Index
access for a list is linear. Negative indexes are also
supported, but they imply the list will be traversed twice:
once to calculate the proper index and again to perform
the operation.
A decision was taken to delegate most functions to
Erlang's standard library but follow Elixir's convention
of receiving the target (in this case, a list) as the
first argument.
"""
@compile :inline_list_funcs
@doc """
Deletes the given item from the list. Returns a list without
the item. If the item occurs more than once in the list, just
the first occurrence is removed.
## Examples
iex> List.delete([1, 2, 3], 1)
[2,3]
iex> List.delete([1, 2, 2, 3], 2)
[1, 2, 3]
"""
@spec delete(list, any) :: list
def delete(list, item) do
:lists.delete(item, list)
end
@doc """
Duplicates the given element `n` times in a list.
## Examples
iex> List.duplicate("hello", 3)
["hello","hello","hello"]
iex> List.duplicate([1, 2], 2)
[[1,2],[1,2]]
"""
@spec duplicate(elem, non_neg_integer) :: [elem] when elem: var
def duplicate(elem, n) do
:lists.duplicate(n, elem)
end
@doc """
Flattens the given `list` of nested lists.
## Examples
iex> List.flatten([1, [[2], 3]])
[1,2,3]
"""
@spec flatten(deep_list) :: list when deep_list: [any | deep_list]
def flatten(list) do
:lists.flatten(list)
end
@doc """
Flattens the given `list` of nested lists.
The list `tail` will be added at the end of
the flattened list.
## Examples
iex> List.flatten([1, [[2], 3]], [4, 5])
[1,2,3,4,5]
"""
@spec flatten(deep_list, [elem]) :: [elem] when elem: var, deep_list: [elem | deep_list]
def flatten(list, tail) do
:lists.flatten(list, tail)
end
@doc """
Folds (reduces) the given list to the left with
a function. Requires an accumulator.
## Examples
iex> List.foldl([5, 5], 10, fn (x, acc) -> x + acc end)
20
iex> List.foldl([1, 2, 3, 4], 0, fn (x, acc) -> x - acc end)
2
"""
@spec foldl([elem], acc, (elem, acc -> acc)) :: acc when elem: var, acc: var
def foldl(list, acc, function) when is_list(list) and is_function(function) do
:lists.foldl(function, acc, list)
end
@doc """
Folds (reduces) the given list to the right with
a function. Requires an accumulator.
## Examples
iex> List.foldr([1, 2, 3, 4], 0, fn (x, acc) -> x - acc end)
-2
"""
@spec foldr([elem], acc, (elem, acc -> acc)) :: acc when elem: var, acc: var
def foldr(list, acc, function) when is_list(list) and is_function(function) do
:lists.foldr(function, acc, list)
end
@doc """
Returns the first element in `list` or `nil` if `list` is empty.
## Examples
iex> List.first([])
nil
iex> List.first([1])
1
iex> List.first([1, 2, 3])
1
"""
@spec first([elem]) :: nil | elem when elem: var
def first([]), do: nil
def first([h|_]), do: h
@doc """
Returns the last element in `list` or `nil` if `list` is empty.
## Examples
iex> List.last([])
nil
iex> List.last([1])
1
iex> List.last([1, 2, 3])
3
"""
@spec last([elem]) :: nil | elem when elem: var
def last([]), do: nil
def last([h]), do: h
def last([_|t]), do: last(t)
@doc """
Receives a list of tuples and returns the first tuple
where the item at `position` in the tuple matches the
given `key`.
## Examples
iex> List.keyfind([a: 1, b: 2], :a, 0)
{ :a, 1 }
iex> List.keyfind([a: 1, b: 2], 2, 1)
{ :b, 2 }
iex> List.keyfind([a: 1, b: 2], :c, 0)
nil
"""
@spec keyfind([tuple], any, non_neg_integer, any) :: any
def keyfind(list, key, position, default \\ nil) do
:lists.keyfind(key, position + 1, list) || default
end
@doc """
Receives a list of tuples and returns `true` if there is
a tuple where the item at `position` in the tuple matches
the given `key`.
## Examples
iex> List.keymember?([a: 1, b: 2], :a, 0)
true
iex> List.keymember?([a: 1, b: 2], 2, 1)
true
iex> List.keymember?([a: 1, b: 2], :c, 0)
false
"""
@spec keymember?([tuple], any, non_neg_integer) :: boolean
def keymember?(list, key, position) do
:lists.keymember(key, position + 1, list)
end
@doc """
Receives a list of tuples and replaces the item
identified by `key` at `position` if it exists.
## Examples
iex> List.keyreplace([a: 1, b: 2], :a, 0, { :a, 3 })
[a: 3, b: 2]
"""
@spec keyreplace([tuple], any, non_neg_integer, tuple) :: [tuple]
def keyreplace(list, key, position, new_tuple) do
:lists.keyreplace(key, position + 1, list, new_tuple)
end
@doc """
Receives a list of tuples and sorts the items
at `position` of the tuples. The sort is stable.
## Examples
iex> List.keysort([a: 5, b: 1, c: 3], 1)
[b: 1, c: 3, a: 5]
iex> List.keysort([a: 5, c: 1, b: 3], 0)
[a: 5, b: 3, c: 1]
"""
@spec keysort([tuple], non_neg_integer) :: [tuple]
def keysort(list, position) do
:lists.keysort(position + 1, list)
end
@doc """
Receives a list of tuples and replaces the item
identified by `key` at `position`. If the item
does not exist, it is added to the end of the list.
## Examples
iex> List.keystore([a: 1, b: 2], :a, 0, { :a, 3 })
[a: 3, b: 2]
iex> List.keystore([a: 1, b: 2], :c, 0, { :c, 3 })
[a: 1, b: 2, c: 3]
"""
@spec keystore([tuple], any, non_neg_integer, tuple) :: [tuple]
def keystore(list, key, position, new_tuple) do
:lists.keystore(key, position + 1, list, new_tuple)
end
@doc """
Receives a list of tuples and deletes the first tuple
where the item at `position` matches the
given `item`. Returns the new list.
## Examples
iex> List.keydelete([a: 1, b: 2], :a, 0)
[b: 2]
iex> List.keydelete([a: 1, b: 2], 2, 1)
[a: 1]
iex> List.keydelete([a: 1, b: 2], :c, 0)
[a: 1, b: 2]
"""
@spec keydelete([tuple], any, non_neg_integer) :: [tuple]
def keydelete(list, key, position) do
:lists.keydelete(key, position + 1, list)
end
@doc """
Wraps the argument in a list.
If the argument is already a list, returns the list.
If the argument is `nil`, returns an empty list.
## Examples
iex> List.wrap("hello")
["hello"]
iex> List.wrap([1, 2, 3])
[1,2,3]
iex> List.wrap(nil)
[]
"""
@spec wrap(list | any) :: list
def wrap(list) when is_list(list) do
list
end
def wrap(nil) do
[]
end
def wrap(other) do
[other]
end
@doc """
Zips corresponding elements from each list in `list_of_lists`.
## Examples
iex> List.zip([[1, 2], [3, 4], [5, 6]])
[{1, 3, 5}, {2, 4, 6}]
iex> List.zip([[1, 2], [3], [5, 6]])
[{1, 3, 5}]
"""
@spec zip([list]) :: [tuple]
def zip([]), do: []
def zip(list_of_lists) when is_list(list_of_lists) do
do_zip(list_of_lists, [])
end
@doc """
Unzips the given list of lists or tuples into separate lists and returns a
list of lists.
## Examples
iex> List.unzip([{1, 2}, {3, 4}])
[[1, 3], [2, 4]]
iex> List.unzip([{1, :a, "apple"}, {2, :b, "banana"}, {3, :c}])
[[1, 2, 3], [:a, :b, :c]]
"""
@spec unzip([tuple]) :: [list]
def unzip(list) when is_list(list) do
:lists.map(&tuple_to_list/1, zip(list))
end
@doc """
Returns a list with `value` inserted at the specified `index`.
Note that `index` is capped at the list length. Negative indices
indicate an offset from the end of the list.
## Examples
iex> List.insert_at([1, 2, 3, 4], 2, 0)
[1, 2, 0, 3, 4]
iex> List.insert_at([1, 2, 3], 10, 0)
[1, 2, 3, 0]
iex> List.insert_at([1, 2, 3], -1, 0)
[1, 2, 3, 0]
iex> List.insert_at([1, 2, 3], -10, 0)
[0, 1, 2, 3]
"""
@spec insert_at(list, integer, any) :: list
def insert_at(list, index, value) do
if index < 0 do
do_insert_at(list, length(list) + index + 1, value)
else
do_insert_at(list, index, value)
end
end
@doc """
Returns a list with a replaced value at the specified `index`.
Negative indices indicate an offset from the end of the list.
If `index` is out of bounds, the original `list` is returned.
## Examples
iex> List.replace_at([1, 2, 3], 0, 0)
[0, 2, 3]
iex> List.replace_at([1, 2, 3], 10, 0)
[1, 2, 3]
iex> List.replace_at([1, 2, 3], -1, 0)
[1, 2, 0]
iex> List.replace_at([1, 2, 3], -10, 0)
[1, 2, 3]
"""
@spec replace_at(list, integer, any) :: list
def replace_at(list, index, value) do
if index < 0 do
do_replace_at(list, length(list) + index, value)
else
do_replace_at(list, index, value)
end
end
@doc """
Returns a list with an updated value at the specified `index`.
Negative indices indicate an offset from the end of the list.
If `index` is out of bounds, the original `list` is returned.
## Examples
iex> List.update_at([1, 2, 3], 0, &(&1 + 10))
[11, 2, 3]
iex> List.update_at([1, 2, 3], 10, &(&1 + 10))
[1, 2, 3]
iex> List.update_at([1, 2, 3], -1, &(&1 + 10))
[1, 2, 13]
iex> List.update_at([1, 2, 3], -10, &(&1 + 10))
[1, 2, 3]
"""
@spec update_at([elem], integer, (elem -> any)) :: list when elem: var
def update_at(list, index, fun) do
if index < 0 do
do_update_at(list, length(list) + index, fun)
else
do_update_at(list, index, fun)
end
end
@doc """
Produces a new list by removing the value at the specified `index`.
Negative indices indicate an offset from the end of the list.
If `index` is out of bounds, the original `list` is returned.
## Examples
iex> List.delete_at([1, 2, 3], 0)
[2, 3]
iex> List.delete_at([1, 2, 3], 10)
[1, 2, 3]
iex> List.delete_at([1, 2, 3], -1)
[1, 2]
"""
@spec delete_at(list, integer) :: list
def delete_at(list, index) do
if index < 0 do
do_delete_at(list, length(list) + index)
else
do_delete_at(list, index)
end
end
## Helpers
# replace_at
defp do_replace_at([], _index, _value) do
[]
end
defp do_replace_at(list, index, _value) when index < 0 do
list
end
defp do_replace_at([_old|rest], 0, value) do
[ value | rest ]
end
defp do_replace_at([h|t], index, value) do
[ h | do_replace_at(t, index - 1, value) ]
end
# insert_at
defp do_insert_at([], _index, value) do
[ value ]
end
defp do_insert_at(list, index, value) when index <= 0 do
[ value | list ]
end
defp do_insert_at([h|t], index, value) do
[ h | do_insert_at(t, index - 1, value) ]
end
# update_at
defp do_update_at([value|list], 0, fun) do
[ fun.(value) | list ]
end
defp do_update_at(list, index, _fun) when index < 0 do
list
end
defp do_update_at([h|t], index, fun) do
[ h | do_update_at(t, index - 1, fun) ]
end
defp do_update_at([], _index, _fun) do
[]
end
# delete_at
defp do_delete_at([], _index) do
[]
end
defp do_delete_at([_|t], 0) do
t
end
defp do_delete_at(list, index) when index < 0 do
list
end
defp do_delete_at([h|t], index) do
[h | do_delete_at(t, index-1)]
end
# zip
defp do_zip(list, acc) do
converter = fn x, acc -> do_zip_each(to_list(x), acc) end
{mlist, heads} = :lists.mapfoldl converter, [], list
case heads do
nil -> :lists.reverse acc
_ -> do_zip mlist, [list_to_tuple(:lists.reverse(heads))|acc]
end
end
defp do_zip_each(_, nil) do
{ nil, nil }
end
defp do_zip_each([h|t], acc) do
{ t, [h|acc] }
end
defp do_zip_each([], _) do
{ nil, nil }
end
defp to_list(tuple) when is_tuple(tuple), do: tuple_to_list(tuple)
defp to_list(list) when is_list(list), do: list
end
# source: lib/elixir/lib/list.ex
defmodule Forth do
defp exec(word, st) when is_integer(word), do: [word | st]
defp exec("+", [a, b | st]), do: [a + b | st]
defp exec("+", _), do: raise(Forth.StackUnderflow)
defp exec("-", [a, b | st]), do: [b - a | st]
defp exec("-", _), do: raise(Forth.StackUnderflow)
defp exec("*", [a, b | st]), do: [a * b | st]
defp exec("*", _), do: raise(Forth.StackUnderflow)
defp exec("/", [0, _ | _]), do: raise(Forth.DivisionByZero)
defp exec("/", [a, b | st]), do: [div(b, a) | st]
defp exec("/", _), do: raise(Forth.StackUnderflow)
defp exec("DUP", [a | st]), do: [a, a | st]
defp exec("DUP", _), do: raise(Forth.StackUnderflow)
defp exec("DROP", [_ | st]), do: st
defp exec("DROP", _), do: raise(Forth.StackUnderflow)
defp exec("SWAP", [a, b | st]), do: [b, a | st]
defp exec("SWAP", _), do: raise(Forth.StackUnderflow)
defp exec("OVER", [a, b | st]), do: [b, a, b | st]
defp exec("OVER", _), do: raise(Forth.StackUnderflow)
defp exec(word, _), do: raise(Forth.UnknownWord, word: word)
defmodule Evaluator do
defstruct state: :start, cmddef: [], stack: [], words: %{}
end
@opaque evaluator :: %Evaluator{}
@doc """
Create a new evaluator.
"""
@spec new() :: evaluator
def new() do
%Evaluator{}
end
@doc """
Evaluate an input string, updating the evaluator state.
"""
@spec eval(evaluator, String.t()) :: evaluator
def eval(ev, s) do
s
|> split_words
|> eval_words(ev)
end
defp eval_words(words, ev) do
words |> Enum.reduce(ev, &eval_word/2)
end
defp eval_word(":", %Evaluator{state: :start} = ev) do
%{ev | state: :defining_word, cmddef: []}
end
defp eval_word(word, %Evaluator{state: :start, stack: stack, words: words} = ev) do
case Map.get(words, word) do
nil -> %{ev | stack: exec(word, stack)}
ts -> ts |> eval_words(ev)
end
end
defp eval_word(";", %Evaluator{state: :defining_word, cmddef: cmddef} = ev) do
[cmd | expansion] = cmddef |> Enum.reverse()
define_word(ev, cmd, expansion)
end
defp eval_word(word, %Evaluator{state: :defining_word, cmddef: cmddef} = ev) do
%{ev | cmddef: [word | cmddef]}
end
defp define_word(_ev, word, _expansion) when is_integer(word) do
raise(Forth.InvalidWord, word: word)
end
defp define_word(%Evaluator{words: words} = ev, word, expansion) do
expansion =
expansion
|> Enum.flat_map(fn t ->
case Map.get(words, t) do
nil -> [t]
ts -> ts
end
end)
%{ev | state: :start, words: Map.put(words, word, expansion)}
end
defp split_words(s) do
s
|> String.split(~r{([^[:print:]]|[[:space:]]| )}u)
|> Enum.map(fn word ->
case Integer.parse(word) do
{i, ""} -> i
_ -> String.upcase(word)
end
end)
end
@doc """
Return the current stack as a string with the element on top of the stack
being the rightmost element in the string.
"""
@spec format_stack(evaluator) :: String.t()
def format_stack(%Evaluator{stack: stack}) do
stack
|> Enum.reverse()
|> Enum.map_join(" ", &Integer.to_string/1)
end
defmodule StackUnderflow do
defexception []
def message(_), do: "stack underflow"
end
defmodule InvalidWord do
defexception word: nil
def message(e), do: "invalid word: #{inspect(e.word)}"
end
defmodule UnknownWord do
defexception word: nil
def message(e), do: "unknown word: #{inspect(e.word)}"
end
defmodule DivisionByZero do
defexception []
def message(_), do: "division by zero"
end
end
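# Usage sketch for the evaluator above (inputs chosen for illustration):
#
#     ev =
#       Forth.new()
#       |> Forth.eval("1 2 3 +")
#       |> Forth.eval(": double 2 * ;")
#       |> Forth.eval("double")
#
#     Forth.format_stack(ev)
#     #=> "1 10"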
# source: exercises/practice/forth/.meta/example.ex
defmodule Kronos do
@moduledoc """
Kronos is a tool to facilitate the manipulation of dates (via Timestamps).
This library uses seconds as its reference unit.
iex> import Kronos
...> use Kronos.Infix
...> {:ok, t} = new({2010, 12, 20}, {0, 0, 0})
...> r = t + ~t(2)day + ~t(3)hour + ~t(10)minute + ~t(13)second
...> Kronos.to_string(r)
"2010-12-22 03:10:13Z"
The measures references are :
- `Kronos.second`
- `Kronos.minute`
- `Kronos.fifteen_minutes`
- `Kronos.half_hour`
- `Kronos.hour`
- `Kronos.half_day`
- `Kronos.day`
- `Kronos.week`
- `Kronos.month` **This measure is an approximation (30 days)**
- `Kronos.year` **This measure is an approximation (365 days)**
## Unsafe values
`Kronos.month` and `Kronos.year` are unsafe: they represent an "approximation"
of a month or a year. You should never use it with `Kronos.truncate/2`,
`Kronos.next/2` and `Kronos.pred/2` (and of course as a `precision flag`).
"""
@first_day_of_week 3
@days_of_week [
:mon,
:tue,
:wed,
:thu,
:fri,
:sat,
:sun
]
@typedoc """
This type represents a typed timestamp
"""
@type t :: Mizur.typed_value
@typedoc """
This type represents a specific week type
"""
@type week_t :: {t, day_of_week}
@typedoc """
This type represents a metric_type
"""
@type metric :: Mizur.metric_type | week_t
@typedoc """
This type represents a range between two timestamp
"""
@type duration :: Mizur.Range.range
@typedoc """
This type represents a triplet of non negative values
"""
@type non_neg_triplet :: {
non_neg_integer,
non_neg_integer,
non_neg_integer
}
@typedoc """
This type represents a couple date-time
"""
@type datetime_t :: {
non_neg_triplet,
non_neg_triplet
}
@typedoc """
This type represents a failable result
"""
@type result :: {:ok, t} | {:error, atom}
@typedoc """
This type represents the day of the week
"""
@type day_of_week ::
:mon
| :tue
| :wed
| :thu
| :fri
| :sat
| :sun
# Internals helpers
def one({mod, unit, _, _, _}), do: apply(mod, unit, [1])
def one({t, _}), do: one(t)
defp int_to_dow(i), do: Enum.at(@days_of_week, i)
defp dow_to_int(d) do
Enum.find_index(
@days_of_week,
fn(x) -> x == d end
)
end
defp modulo(a, b) do
cond do
a >= 0 -> rem(a, b)
true -> b - 1 - rem(-a-1, b)
end
end
defp second?({_, :second, _, _, _}), do: true
defp second?(_), do: false
defp simple_week?({_, :week, _, _, _}), do: true
defp simple_week?(_), do: false
# Definition of the Metric-System
@doc """
Returns a week metric anchored at the given `start` day, used when
truncating a `Kronos.t` to the beginning of a week.
"""
@spec week([start: day_of_week]) :: week_t
def week(start: day), do: {week(), day}
use Mizur.System
type second
type minute = 60 * second
type fifteen_minutes = 15 * 60 * second
type half_hour = 30 * 60 * second
type hour = 60 * 60 * second
type half_day = 12 * 60 * 60 * second
type day = 24 * 60 * 60 * second
type week = 7 * 24 * 60 * 60 * second
type month = 30 * 24 * 60 * 60 * second
type year = 365 * 24 * 60 * 60 * second
@doc """
Converts a `Kronos.t` into a string, using the `DateTime`
string representation.
"""
@spec to_string(t) :: String.t
def to_string(value) do
case to_datetime(value) do
{:error, reason} -> "Invalid[#{reason}]"
{:ok, datetime} -> "#{datetime}"
end
end
@doc """
Returns if the given year is a leap year.
iex> Kronos.leap_year?(2004)
true
iex> Kronos.leap_year?(2017)
false
"""
@spec leap_year?(non_neg_integer) :: boolean
def leap_year?(year) do
rem(year, 4) === 0
and (rem(year, 100) > 0 or rem(year, 400) === 0)
end
@doc """
Returns a `Kronos.t` with the number of days in a month;
the month is referenced by `year` and `month` (a non-negative integer).
iex> Kronos.days_in(2004, 2)
Kronos.day(29)
iex> Kronos.days_in(2005, 2)
Kronos.day(28)
iex> Kronos.days_in(2005, 1)
Kronos.day(31)
iex> Kronos.days_in(2001, 4)
Kronos.day(30)
"""
@spec days_in(non_neg_integer, 1..12) :: t
def days_in(year, month), do: day(aux_days_in(year, month))
defp aux_days_in(year, 2), do: (if (leap_year?(year)), do: 29, else: 28)
defp aux_days_in(_, month) when month in [4, 6, 9, 11], do: 30
defp aux_days_in(_, _), do: 31
@doc """
Converts an integer (timestamp) to a `Kronos.result`
"""
@spec new(integer) :: result
def new(timestamp) when is_integer(timestamp) do
case DateTime.from_unix(timestamp) do
{:ok, _datetime} -> {:ok, second(timestamp)}
{:error, reason } -> {:error, reason}
end
end
@doc """
Converts an erlang datetime representation to a `Kronos.result`
"""
@spec new(datetime_t) :: result
def new({{_, _, _}, {_, _, _}} = erl_tuple) do
case NaiveDateTime.from_erl(erl_tuple) do
{:error, reason1} -> {:error, reason1}
{:ok, naive} ->
{:ok, result} = DateTime.from_naive(naive, "Etc/UTC")
{:ok, from_datetime(result)}
end
end
@doc """
Converts two tuple (date, time) to a `Kronos.result`
"""
@spec new(non_neg_triplet, non_neg_triplet) :: result
def new({_, _, _} = date, {_, _, _} = time) do
new({date, time})
end
@doc """
Same of `Kronos.new/1` but raise an `ArgumentError` if the
timestamp creation failed.
"""
@spec new!(integer | datetime_t) :: t
def new!(input) do
case new(input) do
{:ok, result} -> result
{:error, reason} ->
raise ArgumentError, message: "Invalid argument, #{reason}"
end
end
@doc """
Same of `Kronos.new/2` but raise an `ArgumentError` if the
timestamp creation failed.
"""
@spec new!(non_neg_triplet, non_neg_triplet) :: t
def new!(date, time), do: new!({date, time})
@doc """
Creates a duration between two `Kronos.t`. This duration
is a `Mizur.Range.range`.
iex> a = Kronos.new!(1)
...> b = Kronos.new!(100)
...> Kronos.laps(a, b)
Mizur.Range.new(Kronos.new!(1), Kronos.new!(100))
"""
@spec laps(t, t) :: duration
def laps(a, b), do: Mizur.Range.new(a, b)
@doc """
Check if a `Kronos.t` is include into a `Kronos.duration`.
iex> duration = KronosTest.mock(:duration, 2017, 2018)
...> a = KronosTest.mock(:day, 2015, 12, 10)
...> b = KronosTest.mock(:day, 2017, 5, 10)
...> {Kronos.include?(a, in: duration), Kronos.include?(b, in: duration)}
{false, true}
"""
@spec include?(t, [in: duration]) :: boolean
def include?(a, in: b), do: Mizur.Range.include?(a, in: b)
@doc """
Checks that two durations have an intersection.
iex> durationA = KronosTest.mock(:duration, 2016, 2018)
...> durationB = KronosTest.mock(:duration, 2017, 2019)
...> Kronos.overlap?(durationA, with: durationB)
true
"""
@spec overlap?(duration, [with: duration]) :: boolean
def overlap?(a, with: b), do: Mizur.Range.overlap?(a, b)
@doc """
Creates a duration between two `Kronos.t`. This duration
is a `Mizur.Range.range`.
iex> a = Kronos.new!(1)
...> b = Kronos.new!(100)
...> [from: a, to: b] |> Kronos.laps
Mizur.Range.new(Kronos.new!(1), Kronos.new!(100))
"""
@spec laps([from: t, to: t]) :: duration
def laps(from: a, to: b), do: laps(a, b)
@doc """
Returns the current timestamp (in a `Kronos.t`)
"""
@spec now() :: t
def now() do
DateTime.utc_now
|> DateTime.to_unix(:second)
|> second()
end
@doc """
Returns the wrapped values (into a `Kronos.t`) as an
integer in `second`. This function is mainly used to convert
`Kronos.t` to `DateTime.t`.
iex> x = Kronos.new!(2000)
...> Kronos.to_integer(x)
2000
"""
@spec to_integer(t) :: integer
def to_integer(timestamp) do
elt = Mizur.from(timestamp, to: second())
round(Mizur.unwrap(elt))
end
@doc """
Converts a `Kronos.t` to a `DateTime.t`, the result is wrapped
into `{:ok, value}` or `{:error, reason}`.
iex> ts = 1493119897
...> a = Kronos.new!(ts)
...> b = DateTime.from_unix(1493119897)
...> Kronos.to_datetime(a) == b
true
"""
@spec to_datetime(t) :: {:ok, DateTime.t} | {:error, atom}
def to_datetime(timestamp) do
timestamp
|> to_integer()
|> DateTime.from_unix(:second)
end
@doc """
Converts a `Kronos.t` to a `DateTime.t`. Raise an `ArgumentError` if
the timestamp is not valid.
iex> ts = 1493119897
...> a = Kronos.new!(ts)
...> b = DateTime.from_unix!(1493119897)
...> Kronos.to_datetime!(a) == b
true
"""
@spec to_datetime!(t) :: DateTime.t
def to_datetime!(timestamp) do
timestamp
|> to_integer()
|> DateTime.from_unix!(:second)
end
@doc """
Converts a `DateTime.t` into a `Kronos.t`
"""
@spec from_datetime(DateTime.t) :: t
def from_datetime(datetime) do
datetime
|> DateTime.to_unix(:second)
|> second()
end
@doc """
`Kronos.after?(a, b)` check if `a` is later in time than `b`.
iex> {a, b} = {Kronos.new!(2), Kronos.new!(1)}
...> Kronos.after?(a, b)
true
You can specify a `precision`, to ignore minutes, hours or days.
(By passing a `precision`, both parameters will be truncated via
`Kronos.truncate/2`).
"""
@spec after?(t, t, metric) :: boolean
def after?(a, b, precision \\ second()) do
use Mizur.Infix, only: [>: 2]
truncate(a, at: precision) > truncate(b, at: precision)
end
@doc """
`Kronos.before?(a, b)` check if `a` is earlier in time than `b`.
iex> {a, b} = {Kronos.new!(2), Kronos.new!(1)}
...> Kronos.before?(b, a)
true
You can specify a `precision`, to ignore minutes, hours or days.
(By passing a `precision`, both parameters will be truncated via
`Kronos.truncate/2`).
"""
@spec before?(t, t, metric) :: boolean
def before?(a, b, precision \\ second()) do
use Mizur.Infix, only: [<: 2]
truncate(a, at: precision) < truncate(b, at: precision)
end
@doc """
`Kronos.equivalent?(a, b)` check if `a` is at the same moment of `b`.
iex> {a, b} = {Kronos.new!(2), Kronos.new!(1)}
...> Kronos.equivalent?(b, a, Kronos.hour())
true
You can specify a `precision`, to ignore minutes, hours or days.
(By passing a `precision`, both parameters will be truncated via
`Kronos.truncate/2`).
"""
@spec equivalent?(t, t, metric) :: boolean
def equivalent?(a, b, precision \\ second()) do
use Mizur.Infix, only: [==: 2]
truncate(a, at: precision) == truncate(b, at: precision)
end
@doc """
Rounds the given timestamp (`timestamp`) to the given type (`at`).
iex> ts = Kronos.new!({2017, 10, 24}, {23, 12, 07})
...> Kronos.truncate(ts, at: Kronos.hour())
Kronos.new!({2017, 10, 24}, {23, 0, 0})
For example :
- truncate of 2017/10/24 23:12:07 at `minute` gives : 2017/10/24 23:12:00
- truncate of 2017/10/24 23:12:07 at `hour` gives : 2017/10/24 23:00:00
- truncate of 2017/10/24 23:12:07 at `day` gives : 2017/10/24 00:00:00
"""
@spec truncate(t, [at: metric]) :: t
def truncate(timestamp, at: {_, dow}) do
ts = truncate(timestamp, at: day())
f = modulo(day_of_week_internal(ts) - dow_to_int(dow), 7)
Mizur.sub(ts, day(f))
end
def truncate({base, _} = timestamp, at: t) do
cond do
second?(t) -> timestamp
simple_week?(t) -> truncate(timestamp, at: week(start: :mon))
true ->
seconds = to_integer(timestamp)
factor = to_integer(one(t))
(seconds - modulo(seconds, factor))
|> second()
|> Mizur.from(to: base)
end
end
@doc """
Returns the difference (always positive) between the two members
of a duration.
iex> duration = KronosTest.mock(:duration, 2017, 2018)
...> Mizur.from((Kronos.diff(duration)), to: Kronos.day)
Kronos.day(365)
"""
@spec diff(duration) :: t
def diff(duration) do
{a, b} = Mizur.Range.sort(duration)
Mizur.sub(b, a)
end
@doc """
Jumps to the next value of a `type`. For example
`next(Kronos.day, of: Kronos.new({2017, 10, 10}, {22, 12, 12}))` gives the
date `2017-10-11, 0:0:0`.
iex> t = KronosTest.mock(:day, 2017, 10, 10)
...> Kronos.next(Kronos.day, of: t)
KronosTest.mock(:day, 2017, 10, 11)
"""
@spec next(metric, [of: t]) :: t
def next(t, of: ts) do
Mizur.add(ts, one(t))
|> truncate(at: t)
end
@doc """
Jumps to the previous value of a `type`. For example
`pred(Kronos.day, of: Kronos.new({2017, 10, 10}, {22, 12, 12}))` gives the
date `2017-10-09, 0:0:0`.
iex> t = KronosTest.mock(:day, 2017, 10, 10)
...> Kronos.pred(Kronos.day, of: t)
KronosTest.mock(:day, 2017, 10, 9)
"""
@spec pred(metric, [of: t]) :: t
def pred(t, of: ts) do
Mizur.sub(ts, one(t))
|> truncate(at: t)
end
@doc """
Returns the day of the week from a `Kronos.t`.
0 for Monday, 6 for Sunday.
iex> a = KronosTest.mock(:day, 1970, 1, 1, 12, 10, 11)
...> Kronos.day_of_week_internal(a)
3
iex> a = KronosTest.mock(:day, 2017, 4, 29, 0, 3, 11)
...> Kronos.day_of_week_internal(a)
5
"""
@spec day_of_week_internal(t) :: 0..6
def day_of_week_internal(ts) do
ts
|> truncate(at: day())
|> Mizur.from(to: day())
|> Mizur.unwrap()
|> round()
|> Kernel.+(@first_day_of_week)
|> modulo(7)
end
@doc """
Returns the day of the week from a `Kronos.t`,
as an atom from `:mon` to `:sun`.
iex> a = KronosTest.mock(:day, 1970, 1, 1, 12, 10, 11)
...> Kronos.day_of_week(a)
:thu
iex> a = KronosTest.mock(:day, 2017, 4, 29, 0, 3, 11)
...> Kronos.day_of_week(a)
:sat
"""
@spec day_of_week(t) :: day_of_week
def day_of_week(ts) do
day_of_week_internal(ts)
|> int_to_dow()
end
@doc """
Returns the seconds (relatives) of a timestamp.
iex> a = KronosTest.mock(:day, 2017, 10, 10, 23, 45, 53)
...> Kronos.seconds_of(a)
53
iex> a = KronosTest.mock(:day, 1903, 10, 10, 13, 22, 7)
...> Kronos.seconds_of(a)
7
"""
@spec seconds_of(t) :: integer
def seconds_of(timestamp) do
modulo(to_integer(timestamp), 60)
end
@doc """
Returns the minutes (relatives) of a timestamp.
iex> a = KronosTest.mock(:day, 2017, 10, 10, 23, 45, 53)
...> Kronos.minutes_of(a)
45
iex> a = KronosTest.mock(:day, 1903, 10, 10, 13, 22, 7)
...> Kronos.minutes_of(a)
22
"""
@spec minutes_of(t) :: integer
def minutes_of(timestamp) do
timestamp
|> truncate(at: minute())
|> to_integer()
|> Kernel.div(60)
|> modulo(60)
end
@doc """
Returns the hours (relatives) of a timestamp.
iex> a = KronosTest.mock(:day, 2017, 10, 10, 23, 45, 53)
...> Kronos.hours_of(a)
23
iex> a = KronosTest.mock(:day, 1903, 10, 10, 13, 22, 7)
...> Kronos.hours_of(a)
13
"""
@spec hours_of(t) :: integer
def hours_of(timestamp) do
timestamp
|> truncate(at: hour())
|> to_integer()
|> Kernel.div(3600)
|> modulo(24)
end
end
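# Usage sketch for the API above (values are illustrative):
#
#     {:ok, t} = Kronos.new({2017, 10, 24}, {23, 12, 7})
#     Kronos.truncate(t, at: Kronos.hour())   # 2017-10-24 23:00:00
#     Kronos.day_of_week(t)                   #=> :tue
#     Kronos.to_string(t)                     #=> "2017-10-24 23:12:07Z"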
# source: lib/kronos.ex
defmodule RGBMatrix.Engine do
@moduledoc """
Renders [`Animation`](`RGBMatrix.Animation`)s and outputs colors to be
displayed by anything that registers itself with `register_paintable/2`.
"""
use GenServer
alias KeyboardLayout.LED
alias RGBMatrix.Animation
@type frame :: %{LED.id() => RGBMatrix.any_color_model()}
defmodule State do
@moduledoc false
defstruct [:animation, :paintables, :last_frame, :timer]
end
# Client
@doc """
Start the engine.
This module registers its process globally and is expected to be started by a
supervisor.
"""
@spec start_link(any) :: GenServer.on_start()
def start_link(_args) do
GenServer.start_link(__MODULE__, [], name: __MODULE__)
end
@doc """
Sets the given animation as the currently active animation.
"""
@spec set_animation(animation :: Animation.t()) :: :ok
def set_animation(animation) do
GenServer.cast(__MODULE__, {:set_animation, animation})
end
@doc """
Register a paint function for the engine to send frames to.
This function is idempotent.
"""
@spec register_paintable(paint_fn :: function) :: {:ok, function, frame}
def register_paintable(paint_fn) do
{:ok, frame} = GenServer.call(__MODULE__, {:register_paintable, paint_fn})
{:ok, paint_fn, frame}
end
@doc """
Unregister a paint function so the engine no longer sends frames to it.
This function is idempotent.
"""
@spec unregister_paintable(paint_fn :: function) :: :ok
def unregister_paintable(paint_fn) do
GenServer.call(__MODULE__, {:unregister_paintable, paint_fn})
end
@doc """
Sends interaction events to the engine. Animations may or may not respond
to these interaction events.
"""
@spec interact(led :: LED.t()) :: :ok
def interact(led) do
GenServer.cast(__MODULE__, {:interact, led})
end
# Server
@impl GenServer
def init(_args) do
state = %State{
last_frame: %{},
paintables: MapSet.new()
}
{:ok, state}
end
defp add_paintable(paint_fn, state) do
paintables = MapSet.put(state.paintables, paint_fn)
%State{state | paintables: paintables}
end
defp remove_paintable(paint_fn, state) do
paintables = MapSet.delete(state.paintables, paint_fn)
%State{state | paintables: paintables}
end
defp schedule_next_render(state, :ignore) do
state
end
defp schedule_next_render(state, :never) do
cancel_timer(state)
end
defp schedule_next_render(state, 0) do
send(self(), :render)
cancel_timer(state)
end
defp schedule_next_render(state, ms) when is_integer(ms) and ms > 0 do
state = cancel_timer(state)
%{state | timer: Process.send_after(self(), :render, ms)}
end
defp cancel_timer(%{timer: nil} = state), do: state
defp cancel_timer(state) do
Process.cancel_timer(state.timer)
%{state | timer: nil}
end
@impl true
def handle_info(:render, %{animation: nil} = state) do
{:noreply, state}
end
def handle_info(:render, state) do
{render_in, new_colors, animation} = Animation.render(state.animation)
frame = update_frame(state.last_frame, new_colors)
state =
%State{state | animation: animation, last_frame: frame}
|> paint(frame)
|> schedule_next_render(render_in)
{:noreply, state}
end
defp update_frame(frame, new_colors) do
Enum.reduce(new_colors, frame, fn {led_id, color}, frame ->
Map.put(frame, led_id, color)
end)
end
defp paint(state, frame) do
Enum.reduce(state.paintables, state, fn paint_fn, state ->
case paint_fn.(frame) do
:ok -> state
:unregister -> remove_paintable(paint_fn, state)
end
end)
end
@impl GenServer
def handle_cast({:set_animation, nil}, state) do
state =
%State{state | animation: nil, last_frame: %{}}
|> cancel_timer()
{:noreply, state}
end
def handle_cast({:set_animation, animation}, state) do
state =
%State{state | animation: animation, last_frame: %{}}
|> schedule_next_render(0)
{:noreply, state}
end
@impl GenServer
def handle_cast({:interact, _led}, %{animation: nil} = state) do
{:noreply, state}
end
def handle_cast({:interact, led}, state) do
{render_in, animation} = Animation.interact(state.animation, led)
state =
%State{state | animation: animation}
|> schedule_next_render(render_in)
{:noreply, state}
end
@impl GenServer
def handle_call({:register_paintable, paint_fn}, _from, state) do
state = add_paintable(paint_fn, state)
{:reply, {:ok, state.last_frame}, state}
end
@impl GenServer
def handle_call({:unregister_paintable, key}, _from, state) do
state = remove_paintable(key, state)
{:reply, :ok, state}
end
end
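# Usage sketch: registering a paint function with the engine above. The
# function receives a frame (%{led_id => color}) and returns :ok to keep
# receiving frames, or :unregister to be removed.
#
#     paint_fn = fn frame ->
#       IO.inspect(map_size(frame), label: "LEDs painted")
#       :ok
#     end
#
#     {:ok, ^paint_fn, _last_frame} = RGBMatrix.Engine.register_paintable(paint_fn)
#     # ...later:
#     :ok = RGBMatrix.Engine.unregister_paintable(paint_fn)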
# source: lib/rgb_matrix/engine.ex
defmodule Snitch.Data.Schema.Adjustment do
@moduledoc """
Models a generic `adjustment` to keep a track of adjustments
made against any entity.
Adjustments can be made against entities such as an `order` or
`lineitem` due to various reasons such as adding a promotion, or adding
taxes etc.
The adjustments table has a polymorphic relationship with the actions leading
to it.
"""
use Snitch.Data.Schema
@typedoc """
Represents adjustments.
### Fields
- `adjustable_type`: The type of adjustable for which adjustment is created
it can be an `order` or a `line_item`.
- `adjustable_id`: The id of the adjustable for which the adjustment was
created.
- `amount`: The amount for the adjustment. It can be positive or negative
depending on whether the amount has to be added to or subtracted from the
adjustable total, e.g. it is negative in case of promotions and positive
in case of taxes.
- `eligible`: Whether the adjustment should be considered while calculating
totals for the adjustable. Only adjustments with `eligible` set to true are
considered during total calculation. This field is especially important
when handling promotions: a promotion is considered applied if the
adjustments it created are eligible.
- `included`: Whether the adjusted amount is already present in the
adjustable total. If it is false, the amount should be considered during
total computation.
"""
@type t :: %__MODULE__{}
schema "snitch_adjustments" do
field(:adjustable_type, AdjustableEnum)
field(:adjustable_id, :integer)
field(:amount, :decimal)
field(:label, :string)
field(:eligible, :boolean, default: false)
field(:included, :boolean, default: false)
timestamps()
end
@required_params ~w(adjustable_id adjustable_type amount)a
@optional_params ~w(label eligible included)a
@all_params @required_params ++ @optional_params
def create_changeset(%__MODULE__{} = adjustment, params) do
adjustment
|> cast(params, @all_params)
|> validate_required(@required_params)
end
def update_changeset(%__MODULE__{} = adjustment, params) do
adjustment
|> cast(params, @optional_params)
end
end
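# Usage sketch: a negative promotion adjustment against an order. The
# adjustable type/id, the amount, and `Repo` are illustrative assumptions,
# not values taken from this codebase.
#
#     %Snitch.Data.Schema.Adjustment{}
#     |> Snitch.Data.Schema.Adjustment.create_changeset(%{
#       adjustable_type: :order,
#       adjustable_id: 42,
#       amount: Decimal.new("-5.00"),
#       label: "promotion: SUMMER10"
#     })
#     |> Repo.insert()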
# source: apps/snitch_core/lib/core/data/schema/adjustment/adjustment.ex
defmodule Nebulex.Adapters.Dist do
@moduledoc """
Adapter module for distributed or partitioned cache.
A distributed, or partitioned, cache is a clustered, fault-tolerant cache
that has linear scalability. Data is partitioned among all the machines
of the cluster. For fault-tolerance, partitioned caches can be configured
to keep each piece of data on one or more unique machines within a cluster.
This adapter in particular doesn't have fault-tolerance built in: each piece of
data is kept on a single node/machine (sharding), therefore, if a node fails,
the data kept by that node won't be available to the rest of the cluster.
This adapter depends on a local cache adapter, it adds a thin layer
on top of it in order to distribute requests across a group of nodes,
where is supposed the local cache is running already.
PG2 is used by the adapter to manage the cluster nodes. When the distributed
cache is started in a node, it creates a PG2 group and joins it (the cache
supervisor PID is joined to the group). Then, when a function is invoked,
the adapter picks a node from the node list (using the PG2 group members),
and then the function is executed on that node. In the same way, when the
supervisor process of the distributed cache dies, the PID of that process
is automatically removed from the PG2 group; this is why it's recommended
to use a distributed hashing algorithm for the node picker.
## Features
* Support for Distributed Cache
* Support for Sharding; handled by `Nebulex.Adapter.NodeSelector`
* Support for transactions via Erlang global name registration facility
## Options
These options can be set through the config file:
* `:local` - The Local Cache module. The value to this option should be
`Nebulex.Adapters.Local`, unless you want to provide a custom local
cache adapter.
* `:node_selector` - The module that implements the node picker interface
`Nebulex.Adapter.NodeSelector`. If this option is not set, the default
implementation provided by the interface is used.
## Runtime options
These options apply to all adapter's functions.
* `:timeout` - The time-out value in milliseconds for the command that
will be executed. If the timeout is exceeded, then the current process
will exit. This adapter uses `Task.await/2` internally, therefore,
check the function documentation to learn more about it. For commands
like `set_many` and `get_many`, if the timeout is exceeded, the task
is shut down but the current process doesn't exit; the result
associated with that task is simply skipped in the reduce phase.
## Example
`Nebulex.Cache` is the wrapper around the cache. We can define the local
and distributed cache as follows:
defmodule MyApp.LocalCache do
use Nebulex.Cache,
otp_app: :my_app,
adapter: Nebulex.Adapters.Local
end
defmodule MyApp.DistCache do
use Nebulex.Cache,
otp_app: :my_app,
adapter: Nebulex.Adapters.Dist
end
Where the configuration for the cache must be in your application environment,
usually defined in your `config/config.exs`:
config :my_app, MyApp.LocalCache,
n_shards: 2,
gc_interval: 3600
config :my_app, MyApp.DistCache,
local: MyApp.LocalCache
For more information about the usage, check out `Nebulex.Cache`.
## Extended API
This adapter provides some additional functions to the `Nebulex.Cache` API.
### `__local__`
Returns the local cache adapter (the local backend).
### `get_node/1`
This function invokes `c:Nebulex.Adapter.NodeSelector.get_node/2` internally.
MyCache.get_node("mykey")
### `get_nodes/0`
Returns the nodes that belongs to the caller Cache.
MyCache.get_nodes()
## Limitations
This adapter has a limitation for two functions: `get_and_update/4` and
`update/5`. They both take an anonymous function as a parameter,
and anonymous functions are compiled into the module where they are created,
which means they don't necessarily exist on remote nodes. To ensure these
functions work as expected, you must provide functions from modules that
exist on all nodes of the group.
"""
# Inherit default node selector function
use Nebulex.Adapter.NodeSelector
# Inherit default transaction implementation
use Nebulex.Adapter.Transaction
# Provide Cache Implementation
@behaviour Nebulex.Adapter
@behaviour Nebulex.Adapter.Queryable
alias Nebulex.Adapters.Dist.RPC
alias Nebulex.Object
## Adapter
@impl true
defmacro __before_compile__(env) do
otp_app = Module.get_attribute(env.module, :otp_app)
config = Module.get_attribute(env.module, :config)
node_selector = Keyword.get(config, :node_selector, __MODULE__)
task_supervisor = Module.concat([env.module, TaskSupervisor])
unless local = Keyword.get(config, :local) do
raise ArgumentError,
"missing :local configuration in " <>
"config #{inspect(otp_app)}, #{inspect(env.module)}"
end
quote do
alias Nebulex.Adapters.Dist.Cluster
alias Nebulex.Adapters.Local.Generation
def __local__, do: unquote(local)
def __task_sup__, do: unquote(task_supervisor)
def get_nodes do
Cluster.get_nodes(__MODULE__)
end
def get_node(key) do
unquote(node_selector).get_node(get_nodes(), key)
end
def init(config) do
:ok = Cluster.join(__MODULE__)
{:ok, config}
end
end
end
@impl true
def init(opts) do
cache = Keyword.fetch!(opts, :cache)
{:ok, [{Task.Supervisor, name: cache.__task_sup__}]}
end
@impl true
def get(cache, key, opts) do
call(cache, key, :get, [key, opts], opts)
end
@impl true
def get_many(cache, keys, opts) do
map_reduce(keys, cache, :get_many, opts, %{}, fn
res, _, acc when is_map(res) ->
Map.merge(acc, res)
_, _, acc ->
acc
end)
end
@impl true
def set(cache, object, opts) do
call(cache, object.key, :set, [object, opts], opts)
end
@impl true
def set_many(cache, objects, opts) do
objects
|> map_reduce(cache, :set_many, opts, [], fn
:ok, _, acc ->
acc
{:error, err_keys}, _, acc ->
err_keys ++ acc
_, group, acc ->
for(obj <- group, do: obj.key) ++ acc
end)
|> case do
[] -> :ok
acc -> {:error, acc}
end
end
@impl true
def delete(cache, key, opts) do
call(cache, key, :delete, [key, opts], opts)
end
@impl true
def take(cache, key, opts) do
call(cache, key, :take, [key, opts], opts)
end
@impl true
def has_key?(cache, key) do
call(cache, key, :has_key?, [key])
end
@impl true
def object_info(cache, key, attr) do
call(cache, key, :object_info, [key, attr])
end
@impl true
def expire(cache, key, ttl) do
call(cache, key, :expire, [key, ttl])
end
@impl true
def update_counter(cache, key, incr, opts) do
call(cache, key, :update_counter, [key, incr, opts], opts)
end
@impl true
def size(cache) do
Enum.reduce(cache.get_nodes(), 0, fn node, acc ->
node
|> rpc_call(cache, :size, [])
|> Kernel.+(acc)
end)
end
@impl true
def flush(cache) do
Enum.each(cache.get_nodes(), fn node ->
rpc_call(node, cache, :flush, [])
end)
end
## Queryable
@impl true
def all(cache, query, opts) do
for node <- cache.get_nodes(),
elems <- rpc_call(node, cache, :all, [query, opts], opts),
do: elems
end
@impl true
def stream(cache, query, opts) do
Stream.resource(
fn ->
cache.get_nodes()
end,
fn
[] ->
{:halt, []}
[node | nodes] ->
elements =
rpc_call(
node,
__MODULE__,
:eval_local_stream,
[cache, query, opts],
cache.__task_sup__,
opts
)
{elements, nodes}
end,
& &1
)
end
@doc """
Helper to perform `stream/3` locally.
"""
def eval_local_stream(cache, query, opts) do
cache.__local__
|> cache.__local__.__adapter__.stream(query, opts)
|> Enum.to_list()
end
## Private Functions
defp call(cache, key, fun, args, opts \\ []) do
key
|> cache.get_node()
|> rpc_call(cache, fun, args, opts)
end
defp rpc_call(node, cache, fun, args, opts \\ []) do
rpc_call(
node,
cache.__local__.__adapter__,
fun,
[cache.__local__ | args],
cache.__task_sup__,
opts
)
end
defp rpc_call(node, mod, fun, args, supervisor, opts) do
case RPC.call(node, mod, fun, args, supervisor, Keyword.get(opts, :timeout, 5000)) do
{:badrpc, remote_ex} ->
raise remote_ex
response ->
response
end
end
defp group_keys_by_node(enum, cache) do
Enum.reduce(enum, %{}, fn
%Object{key: key} = object, acc ->
node = cache.get_node(key)
Map.put(acc, node, [object | Map.get(acc, node, [])])
key, acc ->
node = cache.get_node(key)
Map.put(acc, node, [key | Map.get(acc, node, [])])
end)
end
defp map_reduce(enum, cache, action, opts, reduce_acc, reduce_fun) do
groups = group_keys_by_node(enum, cache)
tasks =
for {node, group} <- groups do
Task.Supervisor.async(
{cache.__task_sup__, node},
cache.__local__.__adapter__,
action,
[cache.__local__, group, opts]
)
end
tasks
|> Task.yield_many(Keyword.get(opts, :timeout, 5000))
|> :lists.zip(Map.values(groups))
|> Enum.reduce(reduce_acc, fn
{{_task, {:ok, res}}, group}, acc ->
reduce_fun.(res, group, acc)
{{_task, {:exit, _reason}}, group}, acc ->
reduce_fun.(:exit, group, acc)
{{task, nil}, group}, acc ->
_ = Task.shutdown(task, :brutal_kill)
reduce_fun.(nil, group, acc)
end)
end
end
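# Usage sketch for the runtime :timeout option described in the moduledoc,
# assuming the MyApp.DistCache example cache defined there and the
# Nebulex 1.x cache API (set/3, get/2):
#
#     MyApp.DistCache.set("key", "value", timeout: 10_000)
#     MyApp.DistCache.get("key", timeout: 10_000)
#     #=> "value"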
# source: lib/nebulex/adapters/dist.ex
defmodule TrainLoc.Vehicles.Vehicles do
@moduledoc """
Functions for working with collections of vehicles.
"""
alias TrainLoc.Conflicts.Conflict
alias TrainLoc.Utilities.Time
alias TrainLoc.Vehicles.Vehicle
use Timex
require Logger
@spec new() :: %{}
@spec new([Vehicle.t()]) :: map
def new do
%{}
end
def new(vehicles) do
Enum.reduce(vehicles, %{}, fn x, acc -> Map.put(acc, x.vehicle_id, x) end)
end
@spec all_vehicles(map) :: [Vehicle.t()]
def all_vehicles(map) do
Map.values(map)
end
@spec all_ids(map) :: [String.t()]
def all_ids(map) do
Map.keys(map)
end
@spec get(map, Vehicle.t()) :: Vehicle.t() | nil
def get(vehicles, vehicle_id) do
Map.get(vehicles, vehicle_id)
end
@spec put(map, Vehicle.t()) :: map
def put(vehicles, vehicle) do
Map.put(vehicles, vehicle.vehicle_id, vehicle)
end
@doc """
Updates or inserts vehicles into a map of 'old vehicles'.
"""
@spec upsert(map, [Vehicle.t()]) :: map
def upsert(old_vehicles, new_vehicles) do
log_changed_assigns(old_vehicles, new_vehicles)
# Merge the incoming list of vehicles into the map of old vehicles
Enum.reduce(new_vehicles, old_vehicles, fn new_vehicle, acc ->
_ = Vehicle.log_vehicle(new_vehicle)
Map.put(acc, new_vehicle.vehicle_id, new_vehicle)
end)
end
@spec delete(map, String.t()) :: map
def delete(vehicles, vehicle_id) do
Map.delete(vehicles, vehicle_id)
end
@doc """
Detects conflicting assignments. Returns a list of
`TrainLoc.Conflicts.Conflict` structs.
A conflict is when multiple vehicles are assigned to the same trip or block.
"""
@spec find_duplicate_logons(map) :: [Conflict.t()]
def find_duplicate_logons(vehicles) do
same_trip =
vehicles
|> Map.values()
|> Enum.group_by(& &1.trip)
|> Enum.reject(&reject_group?/1)
|> Enum.map(&Conflict.from_tuple(&1, :trip))
same_block =
vehicles
|> Map.values()
|> Enum.group_by(& &1.block)
|> Enum.reject(&reject_group?/1)
|> Enum.map(&Conflict.from_tuple(&1, :block))
Enum.concat(same_trip, same_block)
end
@spec reject_group?({String.t(), [Vehicle.t()]}) :: boolean
defp reject_group?({_, [_]}), do: true
defp reject_group?({"000", _}), do: true
defp reject_group?({_, _}), do: false
@spec log_changed_assigns(map, [Vehicle.t()]) :: any
defp log_changed_assigns(old_vehicles, new_vehicles) do
for new <- new_vehicles do
old = Map.get(old_vehicles, new.vehicle_id, new)
_ =
if old.block != new.block do
Logger.debug(fn ->
"BLOCK CHANGE #{Time.format_datetime(new.timestamp)} - #{new.vehicle_id}: #{old.block}->#{new.block}"
end)
end
_ =
if old.trip != new.trip do
Logger.debug(fn ->
"TRIP CHANGE #{Time.format_datetime(new.timestamp)} - #{new.vehicle_id}: #{old.trip}->#{new.trip}"
end)
end
end
:ok
end
end
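# Usage sketch: two vehicles assigned to the same trip yield one conflict.
# Only the fields this module reads are shown; the Vehicle struct is
# assumed to default the rest.
#
#     v1 = %TrainLoc.Vehicles.Vehicle{vehicle_id: "1001", trip: "123", block: "B1"}
#     v2 = %TrainLoc.Vehicles.Vehicle{vehicle_id: "1002", trip: "123", block: "B2"}
#
#     TrainLoc.Vehicles.Vehicles.new([v1, v2])
#     |> TrainLoc.Vehicles.Vehicles.find_duplicate_logons()
#     #=> one %TrainLoc.Conflicts.Conflict{} for trip "123"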
# source: apps/train_loc/lib/train_loc/vehicles/vehicles.ex
defmodule Terrasol.Document do
@nul32 <<0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
0, 0, 0>>
@moduledoc """
Handling of the Earthstar document format and the resulting
`Terrasol.Document.t` structures
"""
@enforce_keys [
:author,
:content,
:contentHash,
:deleteAfter,
:format,
:path,
:signature,
:timestamp,
:workspace
]
@derive Jason.Encoder
defstruct author: "",
content: "",
contentHash: "",
deleteAfter: nil,
format: "es.4",
path: "",
signature: "",
timestamp: 1,
workspace: ""
@typedoc "An Earthstar document"
@type t() :: %__MODULE__{
author: Terrasol.Author.t(),
content: String.t(),
contentHash: String.t(),
deleteAfter: nil | pos_integer(),
format: String.t(),
path: Terrasol.Path.t(),
signature: String.t(),
timestamp: pos_integer(),
workspace: Terrasol.Workspace.t()
}
defp compute_hash(doc) do
Terrasol.bencode(
:crypto.hash(
:sha256,
gather_fields(
doc,
[:author, :contentHash, :deleteAfter, :format, :path, :timestamp, :workspace],
""
)
)
)
end
defp content_hash(doc), do: :crypto.hash(:sha256, doc.content)
@doc """
Build a `Terrasol.Document` from a map containing all or some of the required keys.
This is resolved internally in a deterministic way which is implementation-specific
and should not be depended upon to remain the same between versions.
A `:ttl` key may be used. It will be parsed into a `:deleteAfter` using
the document timestamp and adding `Terrasol.duration_us(ttl)`
The final value is passed through `parse/1` returning as that function does.
"""
def build(map) do
build(map, [
:timestamp,
:ttl,
:deleteAfter,
:format,
:workspace,
:path,
:author,
:content,
:contentHash,
:signature
])
end
defp build(map, []), do: parse(map)
defp build(map, [key | rest]), do: build(val_or_gen(map, key), rest)
defp val_or_gen(map, :ttl) do
case Map.fetch(map, :ttl) do
:error ->
map
{:ok, val} ->
map
|> Map.delete(:ttl)
|> Map.put(:deleteAfter, map.timestamp + Terrasol.duration_us(val))
end
end
defp val_or_gen(map, key) do
case Map.fetch(map, key) do
:error -> Map.put(map, key, default(key, map))
_ -> map
end
end
defp default(:timestamp, _), do: :erlang.system_time(:microsecond)
defp default(:format, _), do: "es.4"
defp default(:workspace, _), do: "+terrasol.scratch"
defp default(:path, map) do
case(Map.fetch(map, :deleteAfter)) do
:error -> "/terrasol/scratch/default.txt"
{:ok, nil} -> "/terrasol/scratch/default.txt"
_ -> "/terrasol/scratch/!default.txt"
end
end
defp default(:author, _), do: Terrasol.Author.build(%{})
defp default(:content, _), do: "Auto-text from Terrasol."
defp default(:contentHash, map), do: map |> content_hash |> Terrasol.bencode()
defp default(:deleteAfter, _), do: nil
defp default(:signature, map) do
{priv, pub} =
case Terrasol.Author.parse(map.author) do
:error -> {@nul32, @nul32}
%Terrasol.Author{privatekey: nil, publickey: pk} -> {@nul32, pk}
%Terrasol.Author{privatekey: sk, publickey: pk} -> {sk, pk}
end
map |> compute_hash |> Ed25519.signature(priv, pub) |> Terrasol.bencode()
end
defp gather_fields(_, [], str), do: str
defp gather_fields(doc, [f | rest], str) do
case Map.fetch!(doc, f) do
nil -> gather_fields(doc, rest, str)
val -> gather_fields(doc, rest, str <> "#{f}\t#{val}\n")
end
end
@doc """
Parse and return a `Terrasol.Document` from a map.
Returns `{:invalid, [error_field]}` on an invalid document
A string document is presumed to be JSON
"""
def parse(document)
def parse(doc) when is_binary(doc) do
try do
doc
|> Jason.decode!(keys: :atoms!)
|> parse()
rescue
_ -> {:error, [:badjson]}
end
end
def parse(%__MODULE__{} = doc) do
parse_fields(
doc,
[
:author,
:content,
:contentHash,
:path,
:deleteAfter,
:format,
:signature,
:timestamp,
:workspace
],
[]
)
end
def parse(%{} = doc) do
struct(__MODULE__, doc) |> parse
end
def parse(_), do: {:error, [:badformat]}
defp parse_fields(doc, [], []), do: doc
defp parse_fields(_, [], errs), do: {:invalid, Enum.sort(errs)}
defp parse_fields(doc, [f | rest], errs) when f == :author do
author = Terrasol.Author.parse(doc.author)
errlist =
case author do
%Terrasol.Author{} -> errs
:error -> [f | errs]
end
parse_fields(%{doc | author: author}, rest, errlist)
end
defp parse_fields(doc, [f | rest], errs) when f == :workspace do
ws = Terrasol.Workspace.parse(doc.workspace)
errlist =
case ws do
%Terrasol.Workspace{} -> errs
:error -> [f | errs]
end
parse_fields(%{doc | workspace: ws}, rest, errlist)
end
defp parse_fields(doc, [f | rest], errs) when f == :path do
path = Terrasol.Path.parse(doc.path)
errlist =
case path do
%Terrasol.Path{} -> errs
:error -> [f | errs]
end
parse_fields(%{doc | path: path}, rest, errlist)
end
defp parse_fields(doc, [f | rest], errs) when f == :format do
errlist =
case doc.format do
"es.4" -> errs
_ -> [f | errs]
end
parse_fields(doc, rest, errlist)
end
@min_ts 10_000_000_000_000
@max_ts 9_007_199_254_740_990
defp parse_fields(doc, [f | rest], errs) when f == :deleteAfter do
# Spec min int or after now from our perspective
min_allowed = Enum.max([@min_ts, :erlang.system_time(:microsecond)])
val = doc.deleteAfter
ephem =
case doc.path do
%Terrasol.Path{ephemeral: val} -> val
_ -> false
end
errlist =
case {is_nil(val), ephem, not is_integer(val) or (val >= min_allowed and val <= @max_ts)} do
{true, true, _} -> [:ephem_delete_mismatch | errs]
{false, false, _} -> [:ephem_delete_mismatch | errs]
{_, _, false} -> [f | errs]
{_, _, true} -> errs
end
parse_fields(doc, rest, errlist)
end
defp parse_fields(doc, [f | rest], errs) when f == :timestamp do
# Spec max int or 10 minutes into the future
max_allowed = Enum.min([@max_ts, :erlang.system_time(:microsecond) + 600_000_000])
val = doc.timestamp
errlist =
case is_integer(val) and val >= @min_ts and val <= max_allowed do
true -> errs
false -> [f | errs]
end
parse_fields(doc, rest, errlist)
end
@max_doc_bytes 4_000_000
defp parse_fields(doc, [f | rest], errs) when f == :content do
val = doc.content
errlist =
case byte_size(val) < @max_doc_bytes and String.valid?(val) do
true -> errs
false -> [f | errs]
end
parse_fields(doc, rest, errlist)
end
defp parse_fields(doc, [f | rest], errs) when f == :contentHash do
computed_hash = content_hash(doc)
published_hash =
case Terrasol.bdecode(doc.contentHash) do
:error -> @nul32
val -> val
end
errlist =
case Equivalex.equal?(computed_hash, published_hash) do
true -> errs
false -> [f | errs]
end
parse_fields(doc, rest, errlist)
end
defp parse_fields(doc, [f | rest], errs) when f == :signature do
sig =
case Terrasol.bdecode(doc.signature) do
:error -> @nul32
val -> val
end
author_pub_key =
case Terrasol.Author.parse(doc.author) do
:error -> @nul32
%Terrasol.Author{publickey: pk} -> pk
end
errlist =
case Ed25519.valid_signature?(sig, compute_hash(doc), author_pub_key) do
true -> errs
false -> [f | errs]
end
parse_fields(doc, rest, errlist)
end
# Skip unimplemented checks
defp parse_fields(doc, [_f | rest], errs), do: parse_fields(doc, rest, errs)
end
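# Usage sketch: building a document from a sparse map, per build/1 above.
# Missing fields are filled with the documented defaults, and a :ttl key is
# turned into :deleteAfter. The "1h" value format is an assumption about
# what Terrasol.duration_us/1 accepts.
#
#     case Terrasol.Document.build(%{content: "hello", ttl: "1h"}) do
#       %Terrasol.Document{} = doc -> doc
#       {:invalid, fields} -> {:error, fields}
#     end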
# source: lib/terrasol/document.ex
defmodule Cldr.LanguageTag do
@moduledoc """
Represents a language tag as defined in [rfc5646](https://tools.ietf.org/html/rfc5646)
with extensions "u" and "t" as defined in [BCP 47](https://tools.ietf.org/html/bcp47).
Language tags are used to help identify languages, whether spoken,
written, signed, or otherwise signaled, for the purpose of
communication. This includes constructed and artificial languages
but excludes languages not intended primarily for human
communication, such as programming languages.
## Syntax
A language tag is composed from a sequence of one or more "subtags",
each of which refines or narrows the range of language identified by
the overall tag. Subtags, in turn, are a sequence of alphanumeric
characters (letters and digits), distinguished and separated from
other subtags in a tag by a hyphen ("-", [Unicode] U+002D).
There are different types of subtag, each of which is distinguished
by length, position in the tag, and content: each subtag's type can
be recognized solely by these features. This makes it possible to
extract and assign some semantic information to the subtags, even if
the specific subtag values are not recognized. Thus, a language tag
processor need not have a list of valid tags or subtags (that is, a
copy of some version of the IANA Language Subtag Registry) in order
to perform common searching and matching operations. The only
exceptions to this ability to infer meaning from subtag structure are
the grandfathered tags listed in the productions 'regular' and
'irregular' below. These tags were registered under [RFC3066] and
are a fixed list that can never change.
The syntax of the language tag in ABNF is:
Language-Tag = langtag ; normal language tags
/ privateuse ; private use tag
/ grandfathered ; grandfathered tags
langtag = language
["-" script]
["-" region]
*("-" variant)
*("-" extension)
["-" privateuse]
language = 2*3ALPHA ; shortest ISO 639 code
["-" extlang] ; sometimes followed by
; extended language subtags
/ 4ALPHA ; or reserved for future use
/ 5*8ALPHA ; or registered language subtag
extlang = 3ALPHA ; selected ISO 639 codes
*2("-" 3ALPHA) ; permanently reserved
script = 4ALPHA ; ISO 15924 code
region = 2ALPHA ; ISO 3166-1 code
/ 3DIGIT ; UN M.49 code
variant = 5*8alphanum ; registered variants
/ (DIGIT 3alphanum)
extension = singleton 1*("-" (2*8alphanum))
; Single alphanumerics
; "x" reserved for private use
singleton = DIGIT ; 0 - 9
/ %x41-57 ; A - W
/ %x59-5A ; Y - Z
/ %x61-77 ; a - w
/ %x79-7A ; y - z
privateuse = "x" 1*("-" (1*8alphanum))
grandfathered = irregular ; non-redundant tags registered
/ regular ; during the RFC 3066 era
irregular = "en-GB-oed" ; irregular tags do not match
/ "i-ami" ; the 'langtag' production and
/ "i-bnn" ; would not otherwise be
/ "i-default" ; considered 'well-formed'
/ "i-enochian" ; These tags are all valid,
/ "i-hak" ; but most are deprecated
/ "i-klingon" ; in favor of more modern
/ "i-lux" ; subtags or subtag
/ "i-mingo" ; combination
/ "i-navajo"
/ "i-pwn"
/ "i-tao"
/ "i-tay"
/ "i-tsu"
/ "sgn-BE-FR"
/ "sgn-BE-NL"
/ "sgn-CH-DE"
regular = "art-lojban" ; these tags match the 'langtag'
/ "cel-gaulish" ; production, but their subtags
/ "no-bok" ; are not extended language
/ "no-nyn" ; or variant subtags: their meaning
/ "zh-guoyu" ; is defined by their registration
/ "zh-hakka" ; and all of these are deprecated
/ "zh-min" ; in favor of a more modern
/ "zh-min-nan" ; subtag or sequence of subtags
/ "zh-xiang"
alphanum = (ALPHA / DIGIT) ; letters and numbers
All subtags have a maximum length of eight characters. Whitespace is
not permitted in a language tag. There is a subtlety in the ABNF
production 'variant': a variant starting with a digit has a minimum
length of four characters, while those starting with a letter have a
minimum length of five characters.
## Unicode BCP 47 Extension type "u" - Locale
Extension | Description | Examples
--------- | ------------------------------- | ---------
ca | Calendar type | buddhist, chinese, gregory
cf | Currency format style | standard, account
co | Collation type | standard, search, phonetic, pinyin
cu | Currency type | ISO4217 code like "USD", "EUR"
fw | First day of the week identifier | sun, mon, tue, wed, ...
hc | Hour cycle identifier | h12, h23, h11, h24
lb | Line break style identifier | strict, normal, loose
lw | Word break identifier | normal, breakall, keepall
ms | Measurement system identifier | metric, ussystem, uksystem
nu | Number system identifier | arabext, armnlow, roman, tamldec
rg | Region override | The value is a unicode_region_subtag for a regular region (not a macroregion), suffixed by "ZZZZ"
sd | Subdivision identifier | A unicode_subdivision_id, which is a unicode_region_subtag concatenated with a unicode_subdivision_suffix.
ss | Break suppressions identifier | none, standard
tz | Timezone identifier | Short identifiers defined in terms of a TZ time zone database
va | Common variant type | POSIX style locale variant
## Unicode BCP 47 Extension type "t" - Transforms
Extension | Description
--------- | -----------------------------------------
mo | Transform extension mechanism: to reference an authority or rules for a type of transformation
s0 | Transform source: for non-languages/scripts, such as fullwidth-halfwidth conversion.
d0 | Transform destination: for non-languages/scripts, such as fullwidth-halfwidth conversion.
i0 | Input Method Engine transform
k0 | Keyboard transform
t0 | Machine Translation: Used to indicate content that has been machine translated
h0 | Hybrid Locale Identifiers: h0 with the value 'hybrid' indicates that the -t- value is a language that is mixed into the main language tag to form a hybrid
x0 | Private use transform
Extensions are formatted by specifying keyword pairs after an extension
separator. The example `de-DE-u-co-phonebk` specifies German as spoken in
Germany with the `phonebk` collation. Another example, "en-Latn-AU-u-cf-account",
represents English in the Latin script as spoken in Australia, formatting
currencies with the "accounting" style.
"""
alias Cldr.LanguageTag.Parser
if Code.ensure_loaded?(Jason) do
@derive Jason.Encoder
end
defstruct language: nil,
language_subtags: [],
script: nil,
territory: nil,
language_variant: nil,
locale: %{},
transform: %{},
extensions: %{},
private_use: [],
requested_locale_name: nil,
canonical_locale_name: nil,
cldr_locale_name: nil,
rbnf_locale_name: nil,
gettext_locale_name: nil
@type t :: %__MODULE__{
        language: String.t(),
        language_subtags: [String.t(), ...] | [],
        script: String.t() | nil,
        territory: String.t() | nil,
        language_variant: String.t() | nil,
        locale: map(),
        transform: map(),
        extensions: map(),
        private_use: [String.t(), ...] | [],
        requested_locale_name: String.t(),
        canonical_locale_name: String.t(),
        cldr_locale_name: String.t() | nil,
        rbnf_locale_name: String.t() | nil,
        gettext_locale_name: String.t() | nil
      }
@doc """
Parse a locale name into a `Cldr.LanguageTag` struct.
* `locale_name` is any valid locale name returned by `Cldr.known_locale_names/1`
Returns:
* `{:ok, language_tag}` or
* `{:error, reason}`
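## Example
A successful parse returns a populated struct. The fields shown below are
illustrative, not exhaustive:
    Cldr.LanguageTag.parse("en-Latn-US")
    #=> {:ok, %Cldr.LanguageTag{language: "en", script: "Latn", territory: "US", ...}}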
"""
def parse(locale_name) when is_binary(locale_name) do
Parser.parse(locale_name)
end
@doc """
Parse a locale name into a `Cldr.LanguageTag` struct and raises on error
## Arguments
* `locale_name` is any valid locale name returned by `Cldr.known_locale_names/1`
Returns:
* `language_tag` or
* raises an exception
"""
@spec parse!(Cldr.Locale.locale_name()) :: t() | none()
def parse!(locale_string) when is_binary(locale_string) do
Parser.parse!(locale_string)
end
@doc """
Reconstitute a textual language tag from a
LanguageTag that is suitable
to pass to a collator.
## Example
iex> {:ok, locale} = Cldr.validate_locale "en-US-u-co-phonebk-nu-arab", MyApp.Cldr
iex> Cldr.LanguageTag.to_string(locale)
"en-Latn-US-u-co-phonebk-nu-arab"
"""
@spec to_string(t) :: String.t()
def to_string(%__MODULE__{} = locale) do
basic_tag =
[
locale.language,
locale.language_subtags,
locale.script,
locale.territory,
locale.language_variant
]
|> List.flatten()
|> Enum.reject(&is_nil/1)
|> Enum.join("-")
locale_extension =
locale.locale
|> Enum.map(fn
{k, v} when is_atom(k) -> "#{Parser.inverse_locale_key_map()[k]}-#{v}"
_ -> nil
end)
|> Enum.reject(&is_nil/1)
|> Enum.join("-")
if locale_extension != "" do
basic_tag <> "-u-" <> locale_extension
else
basic_tag
end
end
end
|
lib/cldr/language_tag.ex
| 0.856573 | 0.629832 |
language_tag.ex
|
starcoder
|
defmodule Game.Character do
@moduledoc """
Character GenServer client
A character is a player (session genserver) or an NPC (genserver). They should
handle the following casts:
- `{:targeted, player}`
- `{:apply_effects, effects, player}`
"""
alias Game.Character.Simple
alias Game.Character.Via
@typedoc """
A simple character struct
"""
@type t :: %Simple{}
@doc """
Convert a character into a stripped down version
"""
def to_simple(character = %Simple{}), do: character
def to_simple(character), do: Simple.from_character(character)
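@doc """
Builds a minimal character struct for a remote "gossip" player.
Only the type and name are known for remote players, so the remaining
fields keep their struct defaults.
"""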
def simple_gossip(player_name) do
%Simple{
type: "gossip",
name: player_name
}
end
@doc """
Let the target know they are being targeted
"""
@spec being_targeted(tuple(), Character.t()) :: :ok
def being_targeted(target, player) do
GenServer.cast({:via, Via, who(target)}, {:targeted, player})
end
@doc """
Apply effects on the target
"""
@spec apply_effects(tuple(), [Effect.t()], Character.t(), String.t()) :: :ok
def apply_effects(target, effects, from, description) do
GenServer.cast(
{:via, Via, who(target)},
{:apply_effects, effects, to_simple(from), description}
)
end
@doc """
Reply to the sending character what effects were applied
"""
@spec effects_applied(Character.t(), [Effect.t()], Character.t()) :: :ok
def effects_applied(from, effects, target) do
GenServer.cast({:via, Via, who(from)}, {:effects_applied, effects, to_simple(target)})
end
@doc """
Get character information about the character
"""
@spec info(Character.t()) :: Character.t()
def info(target) do
GenServer.call({:via, Via, who(target)}, :info)
end
@doc """
Notify a character of an event
"""
@spec notify(Character.t(), map()) :: :ok
def notify(target, event) do
GenServer.cast({:via, Via, who(target)}, {:notify, event})
end
@doc """
Check if a character equals another character, generally the simple version
"""
def equal?(nil, _target), do: false
def equal?(_character, nil), do: false
def equal?({_, character}, {_, target}), do: equal?(character, target)
def equal?({_, character}, target), do: equal?(character, target)
def equal?(character, {_, target}), do: equal?(character, target)
def equal?(character, target) do
character.type == target.type && character.id == target.id
end
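# `equal?/2` unwraps `{type, struct}` tuples on either side before comparing,
# e.g. (illustrative values):
#
#     equal?({:npc, %Simple{type: "npc", id: 1}}, %Simple{type: "npc", id: 1})
#     #=> true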
@doc """
Normalizes a character reference for `:via` registry lookup. A `Simple` struct is passed through as-is.
"""
def who(character = %Simple{}), do: character
end
|
lib/game/character.ex
| 0.860266 | 0.567907 |
character.ex
|
starcoder
|
defmodule Stopsel.Invoker do
@moduledoc """
Routes a message through a router.
This module relies on `Stopsel.Router` for matching the routes,
which ensures that only active routes will be tried to match against.
"""
alias Stopsel.{Command, Message, Router, Utils}
require Logger
@type reason :: Router.match_error() | {:halted, Message.t()}
@type prefix_error :: :wrong_prefix
@doc """
Tries to match a message against the loaded routes of the specified router.
The message can either be a `Stopsel.Message` struct or any data structure
that implements `Stopsel.Message.Protocol`.
### Return values
This function will return either `{:ok, value}` or `{:error, reason}`
The `value` in `{:ok, value}` is the result of the executed command.
The `reason` in `{:error, reason}` can be one of the following values
* `:no_match` - No matching route was found for the message
* `{:multiple_matches, matches}` - Multiple matching routes were found for
the message. This should be avoided.
* `{:halted, message}` - The message was halted in the pipeline.
```elixir
iex> import ExUnit.CaptureIO
iex> Stopsel.Router.load_router(MyApp.Router)
true
iex> capture_io(fn -> Stopsel.Invoker.invoke("hello", MyApp.Router) end)
"Hello world!\\n"
iex> Stopsel.Invoker.invoke("helloooo", MyApp.Router)
{:error, :no_match}
```
Only loaded routes will be found.
iex> Stopsel.Router.load_router(MyApp.Router)
true
iex> Stopsel.Router.unload_route(MyApp.Router, ~w"hello")
iex> Stopsel.Invoker.invoke("hello", MyApp.Router)
{:error, :no_match}
"""
@spec invoke(Message.t() | term(), Router.router()) ::
{:ok, term} | {:error, reason()}
def invoke(%Message{} = message, router) do
with {:ok, %Command{} = command} <-
Router.match_route(router, parse_path(message)) do
%{message | rest: command.rest}
|> Message.assign(command.assigns)
|> Message.put_params(command.params)
|> apply_stopsel(command.stopsel)
|> case do
%Message{halted?: true} = message ->
{:error, {:halted, message}}
%Message{} = message ->
{:ok, do_invoke(message, command.module, command.function)}
other ->
raise Stopsel.InvalidMessage, other
end
end
end
def invoke(message, router) do
message = %Message{
assigns: Message.Protocol.assigns(message),
content: Message.Protocol.content(message),
params: Message.Protocol.params(message)
}
invoke(message, router)
end
@doc """
Same as `invoke/2` but also checks that the message starts with the
specified prefix.
Returns `{:error, :wrong_prefix}` otherwise.
iex> import ExUnit.CaptureIO
iex> Stopsel.Router.load_router(MyApp.Router)
true
iex> capture_io(fn ->
...> assert {:ok, :ok} == Stopsel.Invoker.invoke("!hello", MyApp.Router, "!")
...> end)
"Hello world!\\n"
iex> capture_io(fn ->
...> assert {:ok, :ok} == Stopsel.Invoker.invoke("! hello", MyApp.Router, "!")
...> end)
"Hello world!\\n"
"""
@spec invoke(Message.t() | term(), Router.router(), String.t()) ::
{:ok, term} | {:error, reason() | prefix_error()}
def invoke(message, router, ""), do: invoke(message, router)
def invoke(message, router, nil), do: invoke(message, router)
def invoke(%Message{} = message, router, prefix) do
with {:ok, message} <- check_prefix(message, prefix) do
invoke(message, router)
end
end
def invoke(message, router, prefix) do
message = %Message{
assigns: Message.Protocol.assigns(message),
content: Message.Protocol.content(message),
params: Message.Protocol.params(message)
}
invoke(message, router, prefix)
end
defp check_prefix(message, prefix) do
if String.starts_with?(message.content, prefix) do
new_message =
Map.update!(message, :content, fn message ->
message
|> String.trim_leading(prefix)
|> String.trim_leading()
end)
{:ok, new_message}
else
{:error, :wrong_prefix}
end
end
defp parse_path(%Message{content: content}) do
try do
OptionParser.split(content)
rescue
_ -> Utils.split_message(content)
end
end
defp apply_stopsel(message, stopsel) do
Enum.reduce_while(stopsel, message, fn
{{module, function}, config}, message ->
module
|> apply(function, [message, config])
|> handle_message(function)
{module, opts}, message ->
config = module.init(opts)
message
|> module.call(config)
|> handle_message(module)
end)
end
defp handle_message(%Message{} = message, cause) do
if message.halted? do
Logger.debug("Halted message in #{cause}")
{:halt, message}
else
{:cont, message}
end
end
defp handle_message(other, _), do: raise(Stopsel.InvalidMessage, other)
defp do_invoke(%Message{halted?: true} = message, _, _), do: message
defp do_invoke(%Message{} = message, module, function) do
result = apply(module, function, [message, message.params])
Logger.debug("Message handled returned #{inspect(result)}")
result
end
end
|
lib/stopsel/invoker.ex
| 0.884208 | 0.687787 |
invoker.ex
|
starcoder
|
defmodule Still.Compiler.ViewHelpers.ContentTag do
@moduledoc """
Implements an arbitrary content tag with the given content.
"""
@doc """
Renders an arbitrary HTML tag with the given content.
If `content` is `nil`, the rendered tag is self-closing.
`opts` should contain the relevant HTML attributes (e.g. `class: "myelem"`).
`aria` attributes should be in the `aria_name` format (e.g. `aria_label: "Label"`).
All data attributes should be within a `data` keyword list (e.g. `data: [method:
"POST", foo: "bar"]`).
## Examples
iex> content_tag("a", "My link", href: "https://example.org", data: [method: "POST", something: "value"], aria_label: "Label")
"<a href=\\"https://example.org\\" data-method=\\"POST\\" data-something=\\"value\\" aria-label=\\"Label\\">My link</a>"
"""
def render(tag, content, opts) do
{data, opts} = Keyword.pop(opts, :data, [])
data_attrs = translate_data_attrs(data)
basic_attrs = translate_basic_attrs(opts)
attrs =
basic_attrs
|> Enum.concat(data_attrs)
|> Enum.join(" ")
opening_tag(tag, content, attrs) <> (content || "") <> closing_tag(tag, content)
end
defp opening_tag(tag, content, attrs) when is_nil(content) do
"<#{tag} #{attrs}"
end
defp opening_tag(tag, _content, attrs) do
"<#{tag} #{attrs}>"
end
defp closing_tag(_tag, content) when is_nil(content) do
"/>"
end
defp closing_tag(tag, _content) do
"</#{tag}>"
end
defp translate_basic_attrs(attrs) do
Enum.map(attrs, fn {attr, value} -> ~s(#{translate_attr_name(attr)}="#{value}") end)
end
defp translate_data_attrs(data) do
Enum.map(data, fn {attr, value} -> ~s(data-#{attr}="#{value}") end)
end
defp translate_attr_name(name) when is_atom(name) do
name
|> Atom.to_string()
|> translate_attr_name()
end
defp translate_attr_name("aria_" <> name) do
"aria-#{String.replace(name, "_", "-")}"
end
defp translate_attr_name(name) do
name
end
end
|
lib/still/compiler/view_helpers/content_tag.ex
| 0.824356 | 0.497925 |
content_tag.ex
|
starcoder
|
defmodule AWS.Keyspaces do
@moduledoc """
Amazon Keyspaces (for Apache Cassandra) is a scalable, highly available, and
managed Apache Cassandra-compatible database service.
Amazon Keyspaces makes it easy to migrate, run, and scale Cassandra workloads in
the Amazon Web Services Cloud. With just a few clicks on the Amazon Web Services
Management Console or a few lines of code, you can create keyspaces and tables
in Amazon Keyspaces, without deploying any infrastructure or installing
software.
In addition to supporting Cassandra Query Language (CQL) requests via
open-source Cassandra drivers, Amazon Keyspaces supports data definition
language (DDL) operations to manage keyspaces and tables using the Amazon Web
Services SDK and CLI. This API reference describes the supported DDL operations
in detail.
For the list of all supported CQL APIs, see [Supported Cassandra APIs, operations, and data types in Amazon
Keyspaces](https://docs.aws.amazon.com/keyspaces/latest/devguide/cassandra-apis.html)
in the *Amazon Keyspaces Developer Guide*.
To learn how Amazon Keyspaces API actions are recorded with CloudTrail, see
[Amazon Keyspaces information in CloudTrail](https://docs.aws.amazon.com/keyspaces/latest/devguide/logging-using-cloudtrail.html#service-name-info-in-cloudtrail)
in the *Amazon Keyspaces Developer Guide*.
For more information about Amazon Web Services APIs, for example how to
implement retry logic or how to sign Amazon Web Services API requests, see
[Amazon Web Services APIs](https://docs.aws.amazon.com/general/latest/gr/aws-apis.html) in the
*General Reference*.
"""
alias AWS.Client
alias AWS.Request
def metadata do
%AWS.ServiceMetadata{
abbreviation: nil,
api_version: "2022-02-10",
content_type: "application/x-amz-json-1.0",
credential_scope: nil,
endpoint_prefix: "cassandra",
global?: false,
protocol: "json",
service_id: "Keyspaces",
signature_version: "v4",
signing_name: "cassandra",
target_prefix: "KeyspacesService"
}
end
@doc """
The `CreateKeyspace` operation adds a new keyspace to your account.
In an Amazon Web Services account, keyspace names must be unique within each
Region.
`CreateKeyspace` is an asynchronous operation. You can monitor the creation
status of the new keyspace by using the `GetKeyspace` operation.
For more information, see [Creating keyspaces](https://docs.aws.amazon.com/keyspaces/latest/devguide/working-with-keyspaces.html#keyspaces-create)
in the *Amazon Keyspaces Developer Guide*.
"""
def create_keyspace(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateKeyspace", input, options)
end
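# A minimal invocation sketch. The credentials, region, and input key are
# assumptions based on the AWS HTTP API, not verified here:
#
#     client = AWS.Client.create("ACCESS_KEY_ID", "SECRET_ACCESS_KEY", "us-east-1")
#     AWS.Keyspaces.create_keyspace(client, %{"keyspaceName" => "my_keyspace"})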
@doc """
The `CreateTable` operation adds a new table to the specified keyspace.
Within a keyspace, table names must be unique.
`CreateTable` is an asynchronous operation. When the request is received, the
status of the table is set to `CREATING`. You can monitor the creation status of
the new table by using the `GetTable` operation, which returns the current
`status` of the table. You can start using a table when the status is `ACTIVE`.
For more information, see [Creating tables](https://docs.aws.amazon.com/keyspaces/latest/devguide/working-with-tables.html#tables-create)
in the *Amazon Keyspaces Developer Guide*.
"""
def create_table(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "CreateTable", input, options)
end
@doc """
The `DeleteKeyspace` operation deletes a keyspace and all of its tables.
"""
def delete_keyspace(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteKeyspace", input, options)
end
@doc """
The `DeleteTable` operation deletes a table and all of its data.
After a `DeleteTable` request is received, the specified table is in the
`DELETING` state until Amazon Keyspaces completes the deletion. If the table is
in the `ACTIVE` state, you can delete it. If a table is either in the `CREATING`
or `UPDATING` states, then Amazon Keyspaces returns a `ResourceInUseException`.
If the specified table does not exist, Amazon Keyspaces returns a
`ResourceNotFoundException`. If the table is already in the `DELETING` state, no
error is returned.
"""
def delete_table(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DeleteTable", input, options)
end
@doc """
Returns the name and the Amazon Resource Name (ARN) of the specified keyspace.
"""
def get_keyspace(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetKeyspace", input, options)
end
@doc """
Returns information about the table, including the table's name and current
status, the keyspace name, configuration settings, and metadata.
To read table metadata using `GetTable`, `Select` action permissions for the
table and system tables are required to complete the operation.
"""
def get_table(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetTable", input, options)
end
@doc """
Returns a list of keyspaces.
"""
def list_keyspaces(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListKeyspaces", input, options)
end
@doc """
Returns a list of tables for a specified keyspace.
"""
def list_tables(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListTables", input, options)
end
@doc """
Returns a list of all tags associated with the specified Amazon Keyspaces
resource.
"""
def list_tags_for_resource(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "ListTagsForResource", input, options)
end
@doc """
Restores the specified table to the specified point in time within the
`earliest_restorable_timestamp` and the current time.
For more information about restore points, see [ Time window for PITR continuous backups](https://docs.aws.amazon.com/keyspaces/latest/devguide/PointInTimeRecovery_HowItWorks.html#howitworks_backup_window)
in the *Amazon Keyspaces Developer Guide*.
Any number of users can execute up to 4 concurrent restores (any type of
restore) in a given account.
When you restore using point in time recovery, Amazon Keyspaces restores your
source table's schema and data to the state based on the selected timestamp
`(day:hour:minute:second)` to a new table. The Time to Live (TTL) settings are
also restored to the state based on the selected timestamp.
In addition to the table's schema, data, and TTL settings, `RestoreTable`
restores the capacity mode, encryption, and point-in-time recovery settings from
the source table. Unlike the table's schema, data, and TTL settings, which are
restored based on the selected timestamp, these settings are always restored
based on the table's settings as of the current time or when the table was
deleted.
You can also overwrite these settings during restore:
• Read/write capacity mode
• Provisioned throughput capacity settings
• Point-in-time (PITR) settings
• Tags
For more information, see [PITR restore settings](https://docs.aws.amazon.com/keyspaces/latest/devguide/PointInTimeRecovery_HowItWorks.html#howitworks_backup_settings)
in the *Amazon Keyspaces Developer Guide*.
Note that the following settings are not restored, and you must configure them
manually for the new table:
• Automatic scaling policies (for tables that use provisioned capacity mode)
• Identity and Access Management (IAM) policies
• Amazon CloudWatch metrics and alarms
"""
def restore_table(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "RestoreTable", input, options)
end
@doc """
Associates a set of tags with an Amazon Keyspaces resource.
You can then activate these user-defined tags so that they appear on the Cost
Management Console for cost allocation tracking. For more information, see
[Adding tags and labels to Amazon Keyspaces resources](https://docs.aws.amazon.com/keyspaces/latest/devguide/tagging-keyspaces.html)
in the *Amazon Keyspaces Developer Guide*.
For IAM policy examples that show how to control access to Amazon Keyspaces
resources based on tags, see [Amazon Keyspaces resource access based on tags](https://docs.aws.amazon.com/keyspaces/latest/devguide/security_iam_id-based-policy-examples-tags)
in the *Amazon Keyspaces Developer Guide*.
"""
def tag_resource(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "TagResource", input, options)
end
@doc """
Removes the association of tags from an Amazon Keyspaces resource.
"""
def untag_resource(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UntagResource", input, options)
end
@doc """
Adds new columns to the table or updates one of the table's settings, for
example, capacity mode, encryption, point-in-time recovery, or Time to Live (TTL) settings.
Note that you can only update one specific table setting per update operation.
"""
def update_table(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "UpdateTable", input, options)
end
end
|
lib/aws/generated/keyspaces.ex
| 0.896407 | 0.480662 |
keyspaces.ex
|
starcoder
|
defmodule Job.Pipeline do
@moduledoc """
High-level interface for running `Job`-powered pipeline of actions.
A pipeline is a collection of actions which can be executed in sequence or parallel, and which
all have to succeed for the job to succeed.
## Example
Job.run(
Pipeline.sequence([
action_1,
action_2,
Pipeline.parallel([action_3, action_4]),
action_5
])
)
Such pipeline can be visually represented as:
```text
-> action_3
action_1 -> action_2 -> action_5
-> action_4
```
An action inside a pipeline can be any `t:Job.action/0` that responds with `{:ok, result} | {:error, reason}`. Other
responses are not supported. An action crash will be converted into an `{:error, exit_reason}` response.
A pipeline succeeds if all of its action succeed. In such case, the response of the pipeline is the list of responses
of each action. For the example above, the successful response will be:
{:ok, [
action_1_response,
action_2_response,
[action_3_response, action_4_response],
action_5_response,
]}
An error response depends on the type of pipeline. A sequence pipeline stops on first error, responding with
`{:error, reason}`. A parallel pipeline waits for all the actions to finish. If any of them responded with an error,
the aggregated response will be `{:error, [error1, error2, ...]}`.
Note that in a nested pipeline, where the top-level element is a sequence, an error response can be
`{:error, action_error | [action_error]}`. In addition, the list of errors might be nested. You can consider using
functions such as `List.wrap/1` and `List.flatten/1` to convert the error(s) into a flat list.
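For example, a caller that wants a flat list of errors regardless of nesting
can normalize the result (a sketch, assuming `pipeline` was built as above):
    case Job.run(pipeline) do
      {:ok, results} -> results
      {:error, errors} -> {:error, errors |> List.wrap() |> List.flatten()}
    end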
"""
@doc """
Returns a specification for running a sequential pipeline as a `Job` action.
The corresponding action will return `{:ok, [action_result]} | {:error, action_error}`
See `Job.start_action/2` for details.
"""
@spec sequence([Job.action()], [Job.action_opt()]) :: Job.action()
def sequence(actions, opts \\ []), do: &{{Task, fn -> &1.(run_sequence(actions)) end}, opts}
@doc """
Returns a specification for running a parallel pipeline as a `Job` action.
The corresponding action will return `{:ok, [action_result]} | {:error, [action_error]}`
See `Job.start_action/2` for details.
"""
@spec parallel([Job.action()], [Job.action_opt()]) :: Job.action()
def parallel(actions, opts \\ []), do: &{{Task, fn -> &1.(run_parallel(actions)) end}, opts}
defp run_sequence(actions) do
result =
Enum.reduce_while(
actions,
[],
fn action, previous_results ->
result =
with {:ok, pid} <- start_action(action),
do: await_pipeline_action(pid)
case result do
{:ok, result} -> {:cont, [result | previous_results]}
{:error, _} = error -> {:halt, error}
end
end
)
with results when is_list(results) <- result, do: {:ok, Enum.reverse(results)}
end
defp run_parallel(actions) do
actions
|> Enum.map(&start_action/1)
|> Enum.map(&with {:ok, pid} <- &1, do: await_pipeline_action(pid))
|> Enum.split_with(&match?({:ok, _}, &1))
|> case do
{successes, []} -> {:ok, Enum.map(successes, fn {:ok, result} -> result end)}
{_, errors} -> {:error, Enum.map(errors, fn {_, result} -> result end)}
end
end
defp start_action(action), do: Job.start_action(action, timeout: :infinity)
defp await_pipeline_action(pid) do
case Job.await(pid) do
{:ok, _} = success -> success
{:error, _} = error -> error
{:exit, reason} -> {:error, reason}
_other -> raise "Pipeline action must return `{:ok, result} | {:error, reason}`"
end
end
end
|
lib/job/pipeline.ex
| 0.92329 | 0.943191 |
pipeline.ex
|
starcoder
|
defmodule Reactivity.DSL.EventStream do
@moduledoc """
The DSL for distributed reactive programming,
specifically, operations applicable to Event Streams.
"""
alias Reactivity.DSL.SignalObs, as: Sobs
alias Reactivity.DSL.Signal, as: Signal
alias ReactiveMiddleware.Registry
alias Observables.Obs
require Logger
@doc """
Checks if the given object `o` is an Event Stream.
"""
def is_event_stream?({:event_stream, _sobs, _gs}=_o), do: true
def is_event_stream?(_o), do: false
@doc """
Creates an Event Stream from a plain Observable `obs`.
Attaches the given Guarantee `g` to it if provided.
Otherwise attaches the globally defined Guarantee,
which is FIFO (the absence of any Guarantee) by default.
"""
def from_plain_obs(obs) do
g = Registry.get_guarantee()
from_plain_obs(obs, g)
end
def from_plain_obs(obs, g) do
sobs =
obs
|> Sobs.from_plain_obs()
es = {:event_stream, sobs, []}
case g do
nil -> es
_ -> es |> Signal.add_guarantee(g)
end
end
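# A usage sketch; `some_plain_obs/0` stands in for any plain Observables
# source and is hypothetical:
#
#     es = EventStream.from_plain_obs(some_plain_obs())
#     EventStream.is_event_stream?(es)
#     #=> true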
@doc """
Creates an Event Stream from a Signal Observable `sobs` and tags it with the given guarantees `gs`.
The assumption here is that the contexts of the Signal Observable have already been attached.
The primitive can be used for Guarantees with non-obvious Contexts (other than e.g. counters)
the developer might come up with.
Attaches the given Guarantee to it if provided without changing the context.
Otherwise attaches the globally defined Guarantee,
which is FIFO (the absence of any Guarantee) by default.
"""
def from_signal_obs(sobs) do
g = Registry.get_guarantee()
gs =
case g do
nil -> []
_ -> [g]
end
from_signal_obs(sobs, gs)
end
def from_signal_obs(sobs, gs) do
{:event_stream, sobs, gs}
end
@doc """
Transforms the Event Stream `es` into a Behaviour by adhering to its latest change.
"""
def hold({:event_stream, sobs, cgs}=_es) do
{:behaviour, sobs, cgs}
end
@doc """
Filters out the Event Stream's values that do not satisfy the given predicate.
The expected function should take one argument, the value of an Observable and return a Boolean:
true if the value should be produced, false if the value should be discarded.
If no Guarantee is provided, the filter does not alter the Event Stream Messages.
The consequences of using this operator in this way are left to the developer.
If however a Guarantee is provided, it is attached to the resulting Event Stream as its new Guarantee,
replacing any previous ones. This is reflected in the Message Contexts.
Thus, filtering in this way can be considered to be the creation of a new source Signal
with the given Guarantee in a stratified dependency graph.
"""
def filter({:event_stream, sobs, _cg}=_es, pred, g) do
fobs =
sobs
|> Sobs.to_plain_obs()
|> Obs.filter(pred)
|> Sobs.from_plain_obs()
|> Sobs.add_context(g)
{:event_stream, fobs, [g]}
end
def filter({:event_stream, sobs, cgs}=_es, pred) do
fobs =
sobs
|> Obs.filter(fn {v, _cs} -> pred.(v) end)
{:event_stream, fobs, cgs}
end
@doc """
Merges multiple Event Streams together
The resulting Event Stream carries the events of all composed Event Streams as they arrived.
If no Guarantee is provided, the merge does not alter the Event Stream Messages.
A necessary condition for this operation to be valid then is that the given Event Streams all carry the same Guarantees.
The consequences of using this operator in this way are left to the developer.
If however a Guarantee is provided, it is attached to the resulting Event Stream as its new Guarantee,
replacing any previous ones. This is reflected in the Message Contexts.
Thus, merging in this way can be considered to be the creation of a new source Signal
with the given Guarantee in a stratified dependency graph.
"""
def merge(ess, g) do
sobss =
ess
|> Enum.map(fn {:event_stream, sobs, _gs} -> sobs end)
mobs =
Obs.merge(sobss)
|> Sobs.set_context(g)
{:event_stream, mobs, [g]}
end
def merge([{:event_stream, _obs, gs} | _st] = signals) do
sobss =
signals
|> Enum.map(fn {:event_stream, sobs, _gs} -> sobs end)
mobs = Obs.merge(sobss)
{:event_stream, mobs, gs}
end
@doc """
Merges multiple Event Streams together
The resulting Event Stream carries the events of all composed Event Streams in a round-robin fashion.
Should not be used for Event Streams with known discrepancies in event occurrence frequency,
since messages will accumulate and create a memory leak.
If no Guarantee is provided, the merge does not alter the Event Stream Messages.
A necessary condition for this operation to be valid then is that the given Event Streams all carry the same Guarantees.
The consequences of using this operator in this way are left to the developer.
If however a Guarantee is provided, it is attached to the resulting Event Stream as its new Guarantee,
replacing any previous ones. This is reflected in the Message Contexts.
Thus, merging in this way can be considered to be the creation of a new source Signal
with the given Guarantee in a stratified dependency graph.
"""
def rotate(ess, g) do
sobss =
ess
|> Enum.map(fn {:event_stream, sobs, _cgs} -> sobs end)
robs =
Obs.rotate(sobss)
|> Sobs.set_context(g)
{:event_stream, robs, [g]}
end
def rotate([{:event_stream, _obs, cgs} | _st] = ess) do
sobss =
ess
|> Enum.map(fn {:event_stream, sobs, _cgs} -> sobs end)
robs = Obs.rotate(sobss)
{:event_stream, robs, cgs}
end
@doc """
Applies a given binary function `f` to the values of an Event Stream `es` and its previous result.
Works in the same way as the Enum.scan function:
Enum.scan(1..10, fn(x,y) -> x + y end)
=> [1, 3, 6, 10, 15, 21, 28, 36, 45, 55]
"""
def scan({:event_stream, sobs, cgs}=_es, f, default \\ nil) do
svobs =
sobs
|> Sobs.to_plain_obs()
|> Obs.scan(f, default)
cobs =
sobs
|> Sobs.to_context_obs()
nobs =
svobs
|> Obs.zip(cobs)
{:event_stream, nobs, cgs}
end
@doc """
Delays each produced item by the given interval.
"""
def delay({:event_stream, sobs, cgs}=_es, interval) do
dobs =
sobs
|> Obs.delay(interval)
{:event_stream, dobs, cgs}
end
@doc """
Filters out values of an Event Stream `es` that have already been produced at some point.
If no Guarantee is provided, it does not alter the Event Stream Messages.
The consequences of using this operator in this way are left to the developer.
If however a Guarantee `g` is provided, it is attached to the resulting Event Stream as its new Guarantee,
replacing any previous ones. This is reflected in the Message Contexts.
This can be considered to be the creation of a new source Signal
with the given Guarantee in a stratified dependency graph.
"""
def distinct({:event_stream, sobs, _cgs}=_es, g) do
dsobs =
sobs
|> Sobs.to_plain_obs()
|> Obs.distinct(fn(x,y) -> x == y end)
|> Sobs.from_plain_obs()
|> Sobs.add_context(g)
{:event_stream, dsobs, [g]}
end
def distinct({:event_stream, sobs, cgs}=_es) do
dsobs =
sobs
|> Obs.distinct(fn({v1, _cs1}, {v2, _cs2}) -> v1 == v2 end)
{:event_stream, dsobs, cgs}
end
@doc """
Filters out values of an Event Stream `es` that are equal to the most recently produced value.
If no Guarantee is provided, it does not alter the Event Stream Messages.
The consequences of using this operator in this way are left to the developer.
If however a Guarantee is provided, it is attached to the resulting Event Stream as its new Guarantee,
replacing any previous ones. This is reflected in the Message Contexts.
This can be considered to be the creation of a new source Signal
with the given Guarantee in a stratified dependency graph.
"""
def novel({:event_stream, sobs, _cgs}=_es, g) do
nsobs =
sobs
|> Sobs.to_plain_obs()
|> Obs.novel(fn(x,y) -> x == y end)
|> Sobs.from_plain_obs()
|> Sobs.add_context(g)
{:event_stream, nsobs, [g]}
end
def novel({:event_stream, sobs, cgs}=_es) do
nsobs =
sobs
|> Obs.novel(fn({v1, _cs1}, {v2, _cs2}) -> v1 == v2 end)
{:event_stream, nsobs, cgs}
end
@doc """
Applies a procedure to the values of an Event Stream `es` without changing them.
Generally used for side effects.
"""
def each({:event_stream, sobs, cgs}=_es, proc) do
sobs
|> Sobs.to_plain_obs()
|> Obs.each(proc)
{:event_stream, sobs, cgs}
end
@doc """
Switches from an initial Event Stream to newly supplied Event Streams.
Takes an initial Event Stream `es` and a higher-order Event Stream `he` carrying Event Streams.
Returns an Event Stream that is at first equal to the initial Event Stream.
Each time the higher order Event Stream emits a new Event Stream,
the returned Event Stream switches to this new Event Stream.
Requires that all Event Streams carry values of the same type
and have the same set of consistency guarantees.
"""
def switch({:event_stream, es_sobs, gs}=_es, {:event_stream, hes_sobs, _}=_he) do
switch_obs =
hes_sobs
|> Obs.map(fn {{:event_stream, obs, _}, _gs} -> obs end)
robs = Obs.switch(es_sobs, switch_obs)
{:event_stream, robs, gs}
end
@doc """
Switches from one Event Stream to another on an event occurrence.
Takes three Event Streams.
Returns an Event Stream that emits the events of the first Event Stream `es1` until an event on the third Event Stream `es` occurs,
at which point the resulting Event Stream switches to the second Event Stream `es2`.
The value of the switching event is irrelevant.
Requires that both Event Streams have the same set of consistency guarantees
and carry values of the same type.
"""
def until({:event_stream, es_sobs1, gs1}=_es1, {:event_stream, es_sobs2, _gs2}=_es2, {:event_stream, es_sobs, _gse}=_es) do
robs = Obs.until(es_sobs1, es_sobs2, es_sobs)
{:event_stream, robs, gs1}
end
end
|
lib/reactivity/dsl/event_stream.ex
| 0.900991 | 0.533094 |
event_stream.ex
|
starcoder
|
defmodule ShortMaps do
@default_modifier ?s
@doc ~S"""
Returns a map with the given keys bound to variables with the same name.
This macro sigil is used to reduce boilerplate when writing pattern matches on
maps that bind variables with the same name as the map keys. For example,
given a map that looks like this:
my_map = %{foo: "foo", bar: "bar", baz: "baz"}
..the following is very common Elixir code:
%{foo: foo, bar: bar, baz: baz} = my_map
foo #=> "foo"
The `~m` sigil provides a shorter way to do exactly this. It splits the given
list of words on whitespace (i.e., like the `~w` sigil) and creates a map with
these keys as the keys and with variables with the same name as values. Using
this sigil, this code can be reduced to just this:
~m(foo bar baz)a = my_map
foo #=> "foo"
`~m` can be used in regular pattern matches like the ones in the examples
above but also inside function heads:
defmodule Test do
import ShortMaps
def test(~m(foo)a), do: foo
def test(_), do: :no_match
end
Test.test %{foo: "hello world"} #=> "hello world"
Test.test %{bar: "hey there!"} #=> :no_match
## Modifiers
The `~m` sigil supports both maps with atom keys as well as string keys. Atom
keys can be specified using the `a` modifier, while string keys can be
specified with the `s` modifier (which is the default).
iex> ~m(my_key)s = %{"my_key" => "my value"}
iex> my_key
"my value"
iex> ~m(my_key)a = %{my_key: "my value"}
iex> my_key
"my value"
## Pinning
Matching using the `~m` sigil has full support for the pin operator:
iex> bar = "bar"
iex> ~m(foo ^bar)a = %{foo: "foo", bar: "bar"} #=> this is ok, `bar` matches
iex> foo
"foo"
iex> bar
"bar"
iex> ~m(foo ^bar)a = %{foo: "FOO", bar: "bar"}
iex> foo # still ok, since we didn't pin `foo`, it's now bound to a new value
"FOO"
iex> bar
"bar"
iex> ~m(^bar)a = %{foo: "foo", bar: "BAR"}
** (MatchError) no match of right hand side value: %{bar: "BAR", foo: "foo"}
## Structs
For using structs instead of plain maps, the first word must be the struct
name prefixed with `%`:
defmodule Foo do
defstruct bar: nil
end
~m(%Foo bar)a = %Foo{bar: 4711}
bar #=> 4711
Structs only support atom keys, so you **must** use the `a` modifier or an
exception will be raised.
## Pitfalls
Interpolation isn't supported. `~m(#{foo})` will raise an `ArgumentError`
exception.
The variables associated with the keys in the map have to exist in the scope
if the `~m` sigil is used outside a pattern match:
foo = "foo"
~m(foo bar) #=> ** (RuntimeError) undefined function: bar/0
## Discussion
For more information on this sigil and the discussion that lead to it, visit
[this
topic](https://groups.google.com/forum/#!topic/elixir-lang-core/NoUo2gqQR3I)
in the Elixir mailing list.
"""
defmacro sigil_m(term, modifiers)
defmacro sigil_m({:<<>>, line, [string]}, modifiers) do
sigil_m_function(line, String.split(string), modifier(modifiers), __CALLER__)
end
defmacro sigil_m({:<<>>, _, _}, _modifiers) do
raise ArgumentError, "interpolation is not supported with the ~m sigil"
end
# We raise when the modifier is ?s and we're trying to build a struct.
defp sigil_m_function(_line, ["%" <> _struct_name | _rest], ?s, _caller) do
raise ArgumentError, "structs can only consist of atom keys"
end
defp sigil_m_function(_line, ["%" <> struct_name | rest], ?a, caller) do
struct_module_quoted = resolve_module(struct_name, caller)
pairs = make_pairs(rest, ?a)
quote do: %unquote(struct_module_quoted){unquote_splicing(pairs)}
end
defp sigil_m_function(line, words, modifier, _caller) do
pairs = make_pairs(words, modifier)
{:%{}, line, pairs}
end
defp resolve_module(struct_name, env) do
Code.string_to_quoted!(struct_name, file: env.file, line: env.line)
end
defp make_pairs(words, modifier) do
keys = Enum.map(words, &strip_pin/1)
variables = Enum.map(words, &handle_var/1)
ensure_valid_variable_names(keys)
case modifier do
?a -> keys |> Enum.map(&String.to_atom/1) |> Enum.zip(variables)
?s -> keys |> Enum.zip(variables)
end
end
defp strip_pin("^" <> name),
do: name
defp strip_pin(name),
do: name
defp handle_var("^" <> name) do
{:^, [], [Macro.var(String.to_atom(name), nil)]}
end
defp handle_var(name) do
String.to_atom(name) |> Macro.var(nil)
end
defp modifier([]),
do: @default_modifier
defp modifier([mod]) when mod in 'as',
do: mod
defp modifier(_),
do: raise(ArgumentError, "only these modifiers are supported: s, a")
defp ensure_valid_variable_names(keys) do
Enum.each keys, fn k ->
unless k =~ ~r/\A[a-zA-Z_]\w*\Z/ do
raise ArgumentError, "invalid variable name: #{k}"
end
end
end
end
|
lib/short_maps.ex
| 0.82176 | 0.519948 |
short_maps.ex
|
starcoder
|
alias Graphqexl.Schema
alias Treex.Tree
defmodule Graphqexl.Query.ResultSet do
@moduledoc """
Result of a GraphQL `t:Graphqexl.Query.t/0` operation, including any errors
"""
@moduledoc since: "0.1.0"
defstruct data: %{}, errors: %{}
@type t :: %Graphqexl.Query.ResultSet{data: Map.t, errors: Map.t}
@doc """
Filter the given `t:Graphqexl.Query.ResultSet.t/0` to the `t:Treex.Tree.t/0` of fields specified.
Returns: `t:Graphqexl.Query.ResultSet.t/0`
"""
@doc since: "0.1.0"
@spec filter(t, Tree.t):: t
# TODO: error handling needs to put info into %{result_set | errors: <info>}
def filter(result_set, fields), do: %{result_set | data: result_set.data |> filter_data(fields)}
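# Illustrative example (tree shape per `Treex.Tree` with `value`/`children`):
#
#     fields = %Treex.Tree{value: :user, children: [%Treex.Tree{value: :name, children: []}]}
#     filter(%Graphqexl.Query.ResultSet{data: %{user: %{name: "Ada", age: 36}}}, fields)
#     #=> %Graphqexl.Query.ResultSet{data: %{user: %{name: "Ada"}}, errors: %{}}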
@doc """
Validate that a given `t:Graphqexl.Query.ResultSet.t/0` conforms to the given
`t:Graphqexl.Schema.t/0` and is therefore fine to serialize and return in an HTTP response.
Returns: `t:Graphqexl.Query.ResultSet.t/0` (for chainability)
"""
@doc since: "0.1.0"
@spec validate!(t, Schema.t):: t
# recurse through the entire result_set and check that the values are coercible into the types
# specified in the given schema. if any are not, set them to nil in the data field and add an
# appropriate error in the error field (for now, can implement the error rendering in this module,
# but will eventually probably warrant its own module
# TODO: real implementation
def validate!(result_set, _schema), do: result_set
@doc false
@spec filter_data(Map.t, Tree.t):: Map.t
defp filter_data(data, fields),
do: %{fields.value => filter_data_value(data |> Map.get(fields.value), fields.children)}
@doc false
@spec filter_data_value(Map.t, list(Tree.t)):: Map.t | list(Map.t | term) | term
defp filter_data_value(data, []), do: data
defp filter_data_value(data, children) when is_list(data) do
data
|> Enum.map(&(children |> map_filter_data(&1)))
|> Enum.map(&(&1 |> reduce_merge))
end
defp filter_data_value(data, children) do
children
|> Enum.map(&(data |> filter_data(&1) |> Enum.into(%{})))
|> reduce_merge
end
@doc false
@spec map_filter_data(list(term), Map.t):: list(Map.t)
defp map_filter_data(enum, datum), do: enum |> Enum.map(&(datum |> filter_data(&1)))
@doc false
@spec reduce_merge(list(Map.t)):: Map.t
defp reduce_merge(maps), do: maps |> Enum.reduce(%{}, &(&1 |> Map.merge(&2)))
end
|
lib/graphqexl/query/result_set.ex
| 0.660282 | 0.570092 |
result_set.ex
|
starcoder
|
defmodule Evision.Nx do
@moduledoc """
OpenCV mat to Nx tensor.
`:nx` is an optional dependency, so if you want to use
functions in `OpenCV.Nx`, you need to add it to the dependency
list.
"""
import Evision.Errorize
unless Code.ensure_loaded?(Nx) do
@compile {:no_warn_undefined, Nx}
end
@doc """
Transform an `Evision.Mat` reference to `Nx.tensor`.
The resulting tensor is in the shape `{height, width, channels}`.
### Example
```elixir
iex> {:ok, mat} = Evision.imread("/path/to/exist/img.png")
iex> nx_tensor = Evision.Nx.to_nx(mat)
...> #Nx.Tensor<
...> u8[1080][1920][3]
...> [[ ... pixel data ... ]]
...> >
```
"""
@doc namespace: :external
@spec to_nx(reference()) :: Nx.Tensor.t() | {:error, String.t()}
def to_nx(mat) do
with {:ok, mat_type} <- Evision.Mat.type(mat),
{:ok, mat_shape} <- Evision.Mat.shape(mat),
{:ok, bin} <- Evision.Mat.to_binary(mat) do
bin
|> Nx.from_binary(mat_type)
|> Nx.reshape(mat_shape)
else
{:error, reason} ->
{:error, reason}
end
end
deferror(to_nx(mat))
@doc """
Converts a tensor of `Nx` to `Mat` of evision (OpenCV).
If the tensor has three dimensions, it is expected
to have shape`{height, width, channels}`.
"""
@doc namespace: :external
@spec to_mat(Nx.Tensor.t()) :: {:ok, reference()} | {:error, String.t()}
def to_mat(t) when is_struct(t, Nx.Tensor) do
case Nx.shape(t) do
{height, width, channels} ->
to_mat(Nx.to_binary(t), Nx.type(t), height, width, channels)
shape ->
Evision.Mat.from_binary_by_shape(Nx.to_binary(t), Nx.type(t), shape)
end
end
deferror(to_mat(t))
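# A round-trip sketch (assumes :nx is available; shape and values are
# illustrative):
#
#     t = Nx.iota({2, 3, 1}, type: {:u, 8})
#     {:ok, mat} = Evision.Nx.to_mat(t)
#     Evision.Nx.to_nx(mat) |> Nx.shape()
#     #=> {2, 3, 1}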
@spec to_mat(
binary(),
{atom(), pos_integer()},
pos_integer(),
pos_integer(),
pos_integer()
) :: {:ok, reference()} | {:error, charlist()}
defp to_mat(binary, type, rows, cols, channels) do
Evision.Mat.from_binary(binary, type, rows, cols, channels)
end
end
|
lib/opencv_nx.ex
| 0.893386 | 0.89996 |
opencv_nx.ex
|
starcoder
|
defmodule AWS.KinesisVideo do
@moduledoc """
"""
@doc """
Creates a signaling channel.
`CreateSignalingChannel` is an asynchronous operation.
"""
def create_signaling_channel(client, input, options \\ []) do
path_ = "/createSignalingChannel"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Creates a new Kinesis video stream.
When you create a new stream, Kinesis Video Streams assigns it a version number.
When you change the stream's metadata, Kinesis Video Streams updates the
version.
`CreateStream` is an asynchronous operation.
For information about how the service works, see [How it Works](https://docs.aws.amazon.com/kinesisvideostreams/latest/dg/how-it-works.html).
You must have permissions for the `KinesisVideo:CreateStream` action.
"""
def create_stream(client, input, options \\ []) do
path_ = "/createStream"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Deletes a specified signaling channel.
`DeleteSignalingChannel` is an asynchronous operation. If you don't specify the
channel's current version, the most recent version is deleted.
"""
def delete_signaling_channel(client, input, options \\ []) do
path_ = "/deleteSignalingChannel"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Deletes a Kinesis video stream and the data contained in the stream.
This method marks the stream for deletion, and makes the data in the stream
inaccessible immediately.
To ensure that you have the latest version of the stream before deleting it, you
can specify the stream version. Kinesis Video Streams assigns a version to each
stream. When you update a stream, Kinesis Video Streams assigns a new version
number. To get the latest stream version, use the `DescribeStream` API.
This operation requires permission for the `KinesisVideo:DeleteStream` action.
"""
def delete_stream(client, input, options \\ []) do
path_ = "/deleteStream"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Returns the most current information about the signaling channel.
You must specify either the name or the Amazon Resource Name (ARN) of the
channel that you want to describe.
"""
def describe_signaling_channel(client, input, options \\ []) do
path_ = "/describeSignalingChannel"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Returns the most current information about the specified stream.
You must specify either the `StreamName` or the `StreamARN`.
"""
def describe_stream(client, input, options \\ []) do
path_ = "/describeStream"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Gets an endpoint for a specified stream for either reading or writing.
Use this endpoint in your application to read from the specified stream (using
the `GetMedia` or `GetMediaForFragmentList` operations) or write to it (using
the `PutMedia` operation).
The returned endpoint does not have the API name appended. The client needs to
add the API name to the returned endpoint.
In the request, specify the stream either by `StreamName` or `StreamARN`.
"""
def get_data_endpoint(client, input, options \\ []) do
path_ = "/getDataEndpoint"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Provides an endpoint for the specified signaling channel to send and receive
messages.
This API uses the `SingleMasterChannelEndpointConfiguration` input parameter,
which consists of the `Protocols` and `Role` properties.
`Protocols` is used to determine the communication mechanism. For example, if
you specify `WSS` as the protocol, this API produces a secure websocket
endpoint. If you specify `HTTPS` as the protocol, this API generates an HTTPS
endpoint.
`Role` determines the messaging permissions. A `MASTER` role results in this API
generating an endpoint that a client can use to communicate with any of the
viewers on the channel. A `VIEWER` role results in this API generating an
endpoint that a client can use to communicate only with a `MASTER`.
"""
def get_signaling_channel_endpoint(client, input, options \\ []) do
path_ = "/getSignalingChannelEndpoint"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Returns an array of `ChannelInfo` objects.
Each object describes a signaling channel. To retrieve only those channels that
satisfy a specific condition, you can specify a `ChannelNameCondition`.
"""
def list_signaling_channels(client, input, options \\ []) do
path_ = "/listSignalingChannels"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Returns an array of `StreamInfo` objects.
Each object describes a stream. To retrieve only streams that satisfy a specific
condition, you can specify a `StreamNameCondition`.
"""
def list_streams(client, input, options \\ []) do
path_ = "/listStreams"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Returns a list of tags associated with the specified signaling channel.
"""
def list_tags_for_resource(client, input, options \\ []) do
path_ = "/ListTagsForResource"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Returns a list of tags associated with the specified stream.
In the request, you must specify either the `StreamName` or the `StreamARN`.
"""
def list_tags_for_stream(client, input, options \\ []) do
path_ = "/listTagsForStream"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Adds one or more tags to a signaling channel.
A *tag* is a key-value pair (the value is optional) that you can define and
assign to AWS resources. If you specify a tag that already exists, the tag value
is replaced with the value that you specify in the request. For more
information, see [Using Cost Allocation Tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html)
in the *AWS Billing and Cost Management User Guide*.
"""
def tag_resource(client, input, options \\ []) do
path_ = "/TagResource"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Adds one or more tags to a stream.
A *tag* is a key-value pair (the value is optional) that you can define and
assign to AWS resources. If you specify a tag that already exists, the tag value
is replaced with the value that you specify in the request. For more
information, see [Using Cost Allocation Tags](https://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/cost-alloc-tags.html)
in the *AWS Billing and Cost Management User Guide*.
You must provide either the `StreamName` or the `StreamARN`.
This operation requires permission for the `KinesisVideo:TagStream` action.
Kinesis video streams support up to 50 tags.
"""
def tag_stream(client, input, options \\ []) do
path_ = "/tagStream"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Removes one or more tags from a signaling channel.
In the request, specify only a tag key or keys; don't specify the value. If you
specify a tag key that does not exist, it's ignored.
"""
def untag_resource(client, input, options \\ []) do
path_ = "/UntagResource"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Removes one or more tags from a stream.
In the request, specify only a tag key or keys; don't specify the value. If you
specify a tag key that does not exist, it's ignored.
In the request, you must provide the `StreamName` or `StreamARN`.
"""
def untag_stream(client, input, options \\ []) do
path_ = "/untagStream"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Increases or decreases the stream's data retention period by the value that you
specify.
To indicate whether you want to increase or decrease the data retention period,
specify the `Operation` parameter in the request body. In the request, you must
specify either the `StreamName` or the `StreamARN`.
The retention period that you specify replaces the current value.
This operation requires permission for the `KinesisVideo:UpdateDataRetention`
action.
Changing the data retention period affects the data in the stream as follows:
* If the data retention period is increased, existing data is
retained for the new retention period. For example, if the data retention period
is increased from one hour to seven hours, all existing data is retained for
seven hours.
* If the data retention period is decreased, existing data is
retained for the new retention period. For example, if the data retention period
is decreased from seven hours to one hour, all existing data is retained for one
hour, and any data older than one hour is deleted immediately.
"""
def update_data_retention(client, input, options \\ []) do
path_ = "/updateDataRetention"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
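# Example (sketch; hypothetical values — parameter names follow the Kinesis
# Video Streams UpdateDataRetention API):
#
#     {:ok, _body, _response} =
#       update_data_retention(client, %{
#         "StreamName" => "my-stream",
#         "CurrentVersion" => "version-123",
#         "Operation" => "INCREASE_DATA_RETENTION",
#         "DataRetentionChangeInHours" => 6
#       })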
@doc """
Updates the existing signaling channel.
This is an asynchronous operation and takes time to complete.
If the `MessageTtlSeconds` value is updated (either increased or reduced), it
only applies to new messages sent via this channel after it's been updated.
Existing messages are still expired as per the previous `MessageTtlSeconds`
value.
"""
def update_signaling_channel(client, input, options \\ []) do
path_ = "/updateSignalingChannel"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@doc """
Updates stream metadata, such as the device name and media type.
You must provide the stream name or the Amazon Resource Name (ARN) of the
stream.
To make sure that you have the latest version of the stream before updating it,
you can specify the stream version. Kinesis Video Streams assigns a version to
each stream. When you update a stream, Kinesis Video Streams assigns a new
version number. To get the latest stream version, use the `DescribeStream` API.
`UpdateStream` is an asynchronous operation, and takes time to complete.
"""
def update_stream(client, input, options \\ []) do
path_ = "/updateStream"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, nil)
end
@spec request(AWS.Client.t(), binary(), binary(), list(), list(), map(), list(), pos_integer()) ::
{:ok, map() | nil, map()}
| {:error, term()}
defp request(client, method, path, query, headers, input, options, success_status_code) do
client = %{client | service: "kinesisvideo"}
host = build_host("kinesisvideo", client)
url = host
|> build_url(path, client)
|> add_query(query, client)
additional_headers = [{"Host", host}, {"Content-Type", "application/x-amz-json-1.1"}]
headers = AWS.Request.add_headers(additional_headers, headers)
payload = encode!(client, input)
headers = AWS.Request.sign_v4(client, method, url, headers, payload)
perform_request(client, method, url, payload, headers, options, success_status_code)
end
defp perform_request(client, method, url, payload, headers, options, success_status_code) do
case AWS.Client.request(client, method, url, payload, headers, options) do
{:ok, %{status_code: status_code, body: body} = response}
when is_nil(success_status_code) and status_code in [200, 202, 204]
when status_code == success_status_code ->
body = if(body != "", do: decode!(client, body))
{:ok, body, response}
{:ok, response} ->
{:error, {:unexpected_response, response}}
error = {:error, _reason} -> error
end
end
defp build_host(_endpoint_prefix, %{region: "local", endpoint: endpoint}) do
endpoint
end
defp build_host(_endpoint_prefix, %{region: "local"}) do
"localhost"
end
defp build_host(endpoint_prefix, %{region: region, endpoint: endpoint}) do
"#{endpoint_prefix}.#{region}.#{endpoint}"
end
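# e.g. build_host("kinesisvideo", %{region: "us-west-2", endpoint: "amazonaws.com"})
#      #=> "kinesisvideo.us-west-2.amazonaws.com"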
defp build_url(host, path, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}#{path}"
end
defp add_query(url, [], _client) do
url
end
defp add_query(url, query, client) do
querystring = encode!(client, query, :query)
"#{url}?#{querystring}"
end
defp encode!(client, payload, format \\ :json) do
AWS.Client.encode!(client, payload, format)
end
defp decode!(client, payload) do
AWS.Client.decode!(client, payload, :json)
end
end
|
lib/aws/generated/kinesis_video.ex
| 0.907501 | 0.536374 |
kinesis_video.ex
|
starcoder
|
defmodule Wow.AuctionBid do
@moduledoc """
Represents an item sold on the auction house. It doesn't record when the item was added to the
auction house; that information lives in `auction_timestamp`.
"""
alias Wow.Repo
use Ecto.Schema
import Ecto.Query, only: [from: 2]
import Ecto.Changeset
@type raw_entry :: %{optional(String.t) => String.t}
@type t :: Ecto.Schema.t
@bid_ttl_in_days 32
@derive {Jason.Encoder, only: [:id, :item_id, :buyout, :quantity, :rand, :context,
:timestamps, :realm, :character, :realm_id, :character_id]}
schema "auction_bid" do
field :bid, :integer
field :buyout, :integer
field :quantity, :integer
field :rand, :integer
field :context, :integer
field :first_dump_timestamp, :utc_datetime
field :last_dump_timestamp, :utc_datetime
field :last_time_left, :string
belongs_to :realm, Wow.Realm
belongs_to :character, Wow.Character
belongs_to :item, Wow.Item
end
@spec changeset(Wow.AuctionBid.t, map) :: Ecto.Changeset.t
def changeset(%Wow.AuctionBid{} = bid, params \\ %{}) do
bid
|> cast(params, [:id, :item_id, :buyout, :quantity, :rand, :context, :realm_id, :character_id, :first_dump_timestamp, :last_dump_timestamp, :last_time_left])
|> validate_required([:item_id, :buyout, :quantity, :rand, :context, :realm_id, :character_id, :first_dump_timestamp, :last_dump_timestamp, :last_time_left])
|> unique_constraint(:id, name: :auction_bid_pkey)
end
@spec insert(Wow.AuctionBid.t, map) :: t
def insert(%Wow.AuctionBid{} = bid, attrs \\ %{}) do
{:ok, result} = bid
|> changeset(attrs)
|> Repo.insert(returning: true, on_conflict: {:replace, [:last_dump_timestamp, :last_time_left, :bid, :character_id]}, conflict_target: :id)
result
end
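# Example (sketch): inserting the same auction id twice acts as an upsert,
# refreshing the last-seen fields instead of raising on the primary key.
# `attrs` must supply the remaining required fields (realm_id, character_id, ...):
#
#     entry |> Wow.AuctionBid.from_entry() |> Wow.AuctionBid.insert(attrs)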
@spec from_entries([Wow.AuctionEntry.t]) :: [Wow.AuctionBid.t]
def from_entries(entries) do
entries
|> Enum.map(&Wow.AuctionBid.from_entry/1)
end
@spec from_entry(Wow.AuctionEntry.t) :: Wow.AuctionBid.t
def from_entry(e) do
%Wow.AuctionBid{
id: e.auc_id,
bid: e.bid,
item_id: e.item,
buyout: e.buyout,
quantity: e.quantity,
rand: e.rand,
context: e.context,
first_dump_timestamp: e.dump_timestamp,
last_dump_timestamp: e.dump_timestamp,
last_time_left: e.time_left,
}
end
@spec find_by_item_id(non_neg_integer, String.t, String.t, DateTime.t) :: [Wow.AuctionEntry.Subset.t]
defp find_by_item_id(item_id, region, realm, start_date) do
query = from entry in Wow.AuctionBid,
inner_join: r in assoc(entry, :realm),
inner_join: c in assoc(entry, :character),
where: entry.item_id == ^item_id
and r.name == ^realm
and r.region == ^region
and entry.first_dump_timestamp > ^start_date
and not is_nil(c.faction),
select: {entry.first_dump_timestamp, entry.buyout, entry.quantity, c.faction}
query
|> Repo.all
|> Wow.AuctionEntry.Subset.tuple_to_subset
end
@spec find_by_item_id_with_sampling(non_neg_integer, String.t, String.t, non_neg_integer, DateTime.t) :: %{data: [t], initial_count: non_neg_integer}
def find_by_item_id_with_sampling(item_id, region, realm, max, start_date) do
result = find_by_item_id(item_id, region, realm, start_date)
:rand.seed(:exsplus, {1, 2, 3})
%{
initial_count: length(result),
data: result |> Enum.take_random(max)
}
end
@spec most_expensive_items :: [Wow.Item.ItemWithCount]
def most_expensive_items do
key = "model.auction_bid.most_expensive"
case Cachex.get(:wow_cache, key) do
{:ok, nil} ->
lower = Timex.now |> Timex.shift(hours: -24)
upper = Timex.now
query = from entry in Wow.AuctionBid,
inner_join: item in assoc(entry, :item),
where: entry.first_dump_timestamp > ^lower
and entry.first_dump_timestamp <= ^upper,
having: count(entry.item_id) > 50,
limit: 3,
order_by: [desc: fragment("median(buyout / quantity)::bigint")],
group_by: [item.id, item.name, item.icon],
select: {
item.id,
item.name,
item.icon,
fragment("median(buyout / quantity)::bigint"),
item.sell_price,
item.item_level,
item.required_level,
item.quality,
item.description,
item.stats,
count(item.id)
}
response = query
|> Repo.all
|> Enum.map(&Wow.Item.ItemWithCount.tuple_to_subset/1)
Cachex.put(:wow_cache, key, response, ttl: :timer.minutes(30))
response
{:ok, response} -> response
end
end
@spec most_present_items :: [Wow.Item.ItemWithCount]
def most_present_items do
key = "model.auction_bid.most_present_item"
case Cachex.get(:wow_cache, key) do
{:ok, nil} ->
lower = Timex.now |> Timex.shift(hours: -24)
upper = Timex.now
query = from entry in Wow.AuctionBid,
inner_join: item in assoc(entry, :item),
where: entry.first_dump_timestamp > ^lower
and entry.first_dump_timestamp <= ^upper,
limit: 3,
order_by: [desc: count(item.id)],
group_by: [item.id, item.name, item.icon],
select: {
item.id,
item.name,
item.icon,
fragment("median(buyout / quantity)::bigint"),
item.sell_price,
item.item_level,
item.required_level,
item.quality,
item.description,
item.stats,
count(item.id)
}
response = query
|> Repo.all
|> Enum.map(&Wow.Item.ItemWithCount.tuple_to_subset/1)
Cachex.put(:wow_cache, key, response, ttl: :timer.minutes(30))
response
{:ok, response} -> response
end
end
@spec delete_old :: {integer(), nil | [term()]}
def delete_old do
limit = Timex.now |> Timex.shift(hours: -24 * @bid_ttl_in_days)
(from bid in Wow.AuctionBid,
where: bid.last_dump_timestamp < ^limit)
|> Repo.delete_all
end
end
|
lib/wow/auction_bid.ex
| 0.595728 | 0.407982 |
auction_bid.ex
|
starcoder
|
defmodule EctoTrail do
@moduledoc """
EctoTrail allows you to store changeset changes in a separate `audit_log` table.
## Usage
1. Add `ecto_trail` to your list of dependencies in `mix.exs`:
def deps do
[{:ecto_trail, "~> 0.1.0"}]
end
2. Ensure `ecto_trail` is started before your application:
def application do
[extra_applications: [:ecto_trail]]
end
3. Add a migration that creates `audit_log` table to `priv/repo/migrations` folder:
defmodule EctoTrail.TestRepo.Migrations.CreateAuditLogTable do
@moduledoc false
use Ecto.Migration
def change do
create table(:audit_log, primary_key: false) do
add :id, :uuid, primary_key: true
add :actor_id, :string, null: false
add :resource, :string, null: false
add :resource_id, :string, null: false
add :changeset, :map, null: false
timestamps([type: :utc_datetime, updated_at: false])
end
end
end
4. Use `EctoTrail` in your repo:
defmodule MyApp.Repo do
use Ecto.Repo, otp_app: :my_app
use EctoTrail
end
5. Use logging functions instead of defaults. See `EctoTrail` module docs.
You can configure audit_log table name (default `audit_log`) in config:
config :ecto_trail,
table_name: "custom_audit_log_name"
If you use multiple Repo and `audit_log` should be stored in tables with different names,
you can configure Schema module for each Repo:
defmodule MyApp.Repo do
use Ecto.Repo, otp_app: :my_app
use EctoTrail, schema: My.Custom.ChangeLogSchema
end
"""
alias Ecto.{Changeset, Multi}
alias EctoTrail.Changelog
require Logger
defmacro __using__(opts) do
schema = Keyword.get(opts, :schema, Changelog)
quote do
@doc """
Call `c:Ecto.Repo.insert/2` operation and store changes in a `change_log` table.
Insert arguments, return and options same as `c:Ecto.Repo.insert/2` has.
"""
@spec insert_and_log(
struct_or_changeset :: Ecto.Schema.t() | Ecto.Changeset.t(),
actor_id :: String.t(),
opts :: Keyword.t()
) ::
{:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()}
def insert_and_log(struct_or_changeset, actor_id, opts \\ []),
do: EctoTrail.insert_and_log(__MODULE__, struct_or_changeset, actor_id, opts)
@doc """
Call `c:Ecto.Repo.update/2` operation and store changes in a `change_log` table.
Insert arguments, return and options same as `c:Ecto.Repo.update/2` has.
"""
@spec update_and_log(
changeset :: Ecto.Changeset.t(),
actor_id :: String.t(),
opts :: Keyword.t()
) ::
{:ok, Ecto.Schema.t()}
| {:error, Ecto.Changeset.t()}
def update_and_log(changeset, actor_id, opts \\ []),
do: EctoTrail.update_and_log(__MODULE__, changeset, actor_id, opts)
@doc """
Returns the Ecto schema struct used for the `change_log` table.
"""
@spec audit_log_schema :: struct()
def audit_log_schema, do: struct(unquote(schema))
end
end
@doc """
Call `c:Ecto.Repo.insert/2` operation and store changes in a `change_log` table.
Insert arguments, return and options same as `c:Ecto.Repo.insert/2` has.
"""
@spec insert_and_log(
repo :: Ecto.Repo.t(),
struct_or_changeset :: Ecto.Schema.t() | Ecto.Changeset.t(),
actor_id :: String.t(),
opts :: Keyword.t()
) ::
{:ok, Ecto.Schema.t()} | {:error, Ecto.Changeset.t()}
def insert_and_log(repo, struct_or_changeset, actor_id, opts \\ []) do
Multi.new()
|> Multi.insert(:operation, struct_or_changeset, opts)
|> run_logging_transaction(repo, struct_or_changeset, actor_id)
end
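# Example (sketch; assumes a repo that `use`s EctoTrail, e.g. MyApp.Repo,
# and an existing `user_changeset`):
#
#     {:ok, user} = MyApp.Repo.insert_and_log(user_changeset, "actor-42")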
@doc """
Call `c:Ecto.Repo.update/2` operation and store changes in a `change_log` table.
Insert arguments, return and options same as `c:Ecto.Repo.update/2` has.
"""
@spec update_and_log(
repo :: Ecto.Repo.t(),
changeset :: Ecto.Changeset.t(),
actor_id :: String.t(),
opts :: Keyword.t()
) ::
{:ok, Ecto.Schema.t()}
| {:error, Ecto.Changeset.t()}
def update_and_log(repo, changeset, actor_id, opts \\ []) do
Multi.new()
|> Multi.update(:operation, changeset, opts)
|> run_logging_transaction(repo, changeset, actor_id)
end
defp run_logging_transaction(multi, repo, struct_or_changeset, actor_id) do
multi
|> Multi.run(:changelog, fn repo, acc ->
log_changes(repo, acc, struct_or_changeset, actor_id)
end)
|> repo.transaction()
|> build_result()
end
defp build_result({:ok, %{operation: operation}}),
do: {:ok, operation}
defp build_result({:error, :operation, reason, _changes_so_far}),
do: {:error, reason}
defp log_changes(repo, multi_acc, struct_or_changeset, actor_id) do
%{operation: operation} = multi_acc
associations = operation.__struct__.__schema__(:associations)
resource = operation.__struct__.__schema__(:source)
embeds = operation.__struct__.__schema__(:embeds)
changes =
struct_or_changeset
|> get_changes()
|> get_embed_changes(embeds)
|> get_assoc_changes(associations)
result =
%{
actor_id: to_string(actor_id),
resource: resource,
resource_id: to_string(operation.id),
changeset: changes
}
|> changelog_changeset(repo)
|> repo.insert()
case result do
{:ok, changelog} ->
{:ok, changelog}
{:error, reason} ->
Logger.error(
"Failed to store changes in audit log: #{inspect(struct_or_changeset)} " <>
"by actor #{inspect(actor_id)}. Reason: #{inspect(reason)}"
)
{:ok, reason}
end
end
defp get_changes(%Changeset{changes: changes}),
do: map_custom_ecto_types(changes)
defp get_changes(changes) when is_map(changes),
do: changes |> Changeset.change(%{}) |> get_changes()
defp get_changes(changes) when is_list(changes),
do:
changes
|> Enum.map_reduce([], fn ch, acc -> {nil, List.insert_at(acc, -1, get_changes(ch))} end)
|> elem(1)
defp get_embed_changes(changeset, embeds) do
Enum.reduce(embeds, changeset, fn embed, changeset ->
case Map.get(changeset, embed) do
nil ->
changeset
embed_changes ->
Map.put(changeset, embed, get_changes(embed_changes))
end
end)
end
defp get_assoc_changes(changeset, associations) do
Enum.reduce(associations, changeset, fn assoc, changeset ->
case Map.get(changeset, assoc) do
nil ->
changeset
assoc_changes ->
Map.put(changeset, assoc, get_changes(assoc_changes))
end
end)
end
defp map_custom_ecto_types(changes) do
Enum.into(changes, %{}, &map_custom_ecto_type/1)
end
defp map_custom_ecto_type({_field, %Changeset{}} = input), do: input
defp map_custom_ecto_type({field, value}) when is_map(value) do
case Map.has_key?(value, :__struct__) do
true -> {field, inspect(value)}
false -> {field, value}
end
end
defp map_custom_ecto_type(value), do: value
defp changelog_changeset(attrs, repo) do
Changeset.cast(repo.audit_log_schema(), attrs, ~w(actor_id resource resource_id changeset)a)
end
end
|
lib/ecto_trail/ecto_trail.ex
| 0.807271 | 0.417865 |
ecto_trail.ex
|
starcoder
|
defmodule Defecto do
import ExUnit.Assertions
@moduledoc """
Convenient chainable assertions for Ecto changesets.
"""
defmacro __using__(options) do
quote do
import Defecto
@repo unquote(options[:repo])
end
end
@doc """
Assert a change is valid.
"""
def assert_change(model, params \\ %{}, changeset_fun \\ :changeset) do
changeset = apply(model.__struct__, changeset_fun, [model, params])
assert changeset.valid?
changeset
end
@doc """
Assert a change is invalid.
"""
def refute_change(model, params \\ %{}, changeset_fun \\ :changeset) do
changeset = apply(model.__struct__, changeset_fun, [model, params])
refute changeset.valid?
changeset
end
@doc """
Assert a changed field exists.
"""
def assert_change_field(changeset, field) do
assert true == Map.has_key?(changeset.changes, field)
changeset
end
@doc """
Assert a changed field does not exist.
"""
def refute_change_field(changeset, field) do
assert false == Map.has_key?(changeset.changes, field)
changeset
end
@doc """
Assert a changed value is equal to value.
"""
def assert_change_value(changeset, field, value) do
assert value == changeset.changes[field]
changeset
end
@doc """
Assert a changed value is not equal to value.
"""
def refute_change_value(changeset, field, value) do
refute value == changeset.changes[field]
changeset
end
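# Example (sketch; assumes a hypothetical `User` schema with a `:name` field
# and a `changeset/2` function):
#
#     %User{}
#     |> assert_change(%{name: "Ada"})
#     |> assert_change_field(:name)
#     |> assert_change_value(:name, "Ada")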
@doc """
Assert an error value is equal to value.
"""
def assert_error_value(changeset, field, value) do
assert value == changeset.errors[field]
changeset
end
@doc """
Assert an error value is not equal to value.
"""
def refute_error_value(changeset, field, value) do
refute value == changeset.errors[field]
changeset
end
@doc """
Assert an insertion produces the expected result and changeset.
"""
defmacro assert_insert(changeset, result) do
quote do
assert_insert(unquote(changeset), unquote(result), @repo)
end
end
@doc """
Assert an insertion produces the expected result and changeset.
"""
def assert_insert(changeset, result, repo) do
assert { ^result, changeset } = repo.insert(changeset)
changeset
end
@doc """
Assert an insertion does not produce the expected result and changeset.
"""
defmacro refute_insert(changeset, result) do
quote do
refute_insert(unquote(changeset), unquote(result), @repo)
end
end
@doc """
Assert an insertion does not produce the expected result and changeset.
"""
def refute_insert(changeset, result, repo) do
# `refute` on a match expression raises MatchError when the match fails; use match?/2
refute match?({^result, _}, repo.insert(changeset))
changeset
end
@doc """
Assert an update produces the expected result and changeset.
"""
defmacro assert_update(changeset, result) do
quote do
assert_update(unquote(changeset), unquote(result), @repo)
end
end
@doc """
Assert an update produces the expected result and changeset.
"""
def assert_update(changeset, result, repo) do
assert { ^result, changeset } = repo.update(changeset)
changeset
end
@doc """
Assert an update does not produce the expected result and changeset.
"""
defmacro refute_update(changeset, result) do
quote do
refute_update(unquote(changeset), unquote(result), @repo)
end
end
@doc """
Assert an update does not produce the expected result and changeset.
"""
def refute_update(changeset, result, repo) do
refute match?({^result, _}, repo.update(changeset))
changeset
end
end
|
lib/defecto.ex
| 0.834542 | 0.624079 |
defecto.ex
|
starcoder
|
defmodule Resty.Resource do
@moduledoc """
This module provides a few functions to work with resource structs. Resource
structs are created thanks to the `Resty.Resource.Base` module.
"""
@typedoc """
A resource struct (also called a resource).
"""
@type t() :: struct()
@typedoc """
A resource module.
For the resource `%Post{}` the resource module would be `Post`
"""
@type mod() :: module()
@typedoc """
A resource primary key.
"""
@type primary_key() :: any()
@typedoc """
Parameters used when building an url.
"""
@type url_parameters :: Keyword.t()
@doc """
Clone the given resource
This will create a new resource struct from the given one. The new struct
will be marked as not persisted and will not have an id.
*This is not deep cloning. If there are associations their ids and persisted
states won't be updated.*
```
iex> Post
...> |> Resty.Repo.first!()
...> |> Resty.Resource.clone()
%Post{
id: nil,
body: "<PASSWORD>",
name: "<NAME>",
author_id: 1,
author: %Resty.Associations.NotLoaded{},
comments: %Resty.Associations.NotLoaded{}
}
```
"""
def clone(resource), do: clone(resource.__struct__, resource)
defp clone(resource_module, resource) do
resource
|> Map.take(resource_module.raw_attributes())
|> Map.delete(resource_module.primary_key())
|> resource_module.build()
end
@doc """
Return the primary_key of the given resource.
"""
def get_primary_key(resource = %{__struct__: resource_module}) do
Map.get(resource, resource_module.primary_key())
end
@doc """
Return the raw attributes of the resource. These attributes are what will
eventually get sent to the underlying api.
"""
def raw_attributes(resource = %{__struct__: resource_module}) do
resource |> Map.take(resource_module.raw_attributes())
end
@doc """
Is the given resource new? (not persisted)
```
iex> Post.build()
...> |> Resty.Resource.new?()
true
iex> Post.build()
...> |> Resty.Repo.save!()
...> |> Resty.Resource.new?()
false
```
"""
def new?(resource), do: !persisted?(resource)
@doc """
Has the given resource been persisted?
```
iex> Post.build()
...> |> Resty.Resource.persisted?()
false
iex> Post.build()
...> |> Resty.Repo.save!()
...> |> Resty.Resource.persisted?()
true
```
"""
def persisted?(resource)
def persisted?(%{__persisted__: persisted}), do: persisted
@doc """
Mark the given resource as persisted
```
iex> Post.build()
...> |> Resty.Resource.mark_as_persisted()
...> |> Resty.Resource.persisted?()
true
```
"""
def mark_as_persisted(resource) do
%{resource | __persisted__: true}
end
@doc """
Build a URL to the resource.
```
iex> Post |> Resty.Resource.url_for()
"site.tld/posts"
iex> Post.build(id: 1) |> Resty.Resource.url_for()
"site.tld/posts/1"
```
"""
def url_for(module_or_resource)
def url_for(resource_module) when is_atom(resource_module) do
url_for(resource_module, [])
end
def url_for(resource) when is_map(resource) do
resource_module = resource.__struct__
id = Map.get(resource, resource_module.primary_key())
url_for(resource_module, id)
end
@doc """
Build a URL to the resource.
```
iex> Post |> Resty.Resource.url_for(key: "value")
"site.tld/posts?key=value"
iex> Post.build(id: 1) |> Resty.Resource.url_for(key: "value")
"site.tld/posts/1?key=value"
iex> Post |> Resty.Resource.url_for("slug")
"site.tld/posts/slug"
```
"""
def url_for(module_or_resource, id_or_params)
def url_for(resource_module, params) when is_atom(resource_module) and is_list(params) do
url_for(resource_module, nil, params)
end
def url_for(resource, params) when is_map(resource) and is_list(params) do
resource_module = resource.__struct__
id = Map.get(resource, resource_module.primary_key())
url_for(resource_module, id, params)
end
def url_for(resource_module, id) when is_atom(resource_module) do
url_for(resource_module, id, [])
end
@doc """
Build a URL to the resource.
```
iex> Post |> Resty.Resource.url_for("slug", key: "value")
"site.tld/posts/slug?key=value"
```
"""
def url_for(resource_module, resource_id, params)
when is_atom(resource_module) and is_list(params) do
Resty.Resource.UrlBuilder.build(resource_module, resource_id, params)
end
end
|
lib/resty/resource.ex
| 0.815453 | 0.59305 |
resource.ex
|
starcoder
|
defmodule PlugSigaws do
@moduledoc """
Plug to authenticate HTTP requests that have been signed using AWS Signature V4.
(Refer to this [Blog post](https://handnot2.github.io/blog/elixir/aws-signature-sigaws))
[](http://inch-ci.org/github/handnot2/plug_sigaws)
### Plug Pipeline Setup
This plug relies on `sigaws` library to verify the signature. When the
signature verification fails, further pipeline processing is halted.
Upon successful signature verification, an "assign" (`:sigaws_ctxt`)
is setup in the returned connection containing the verification context.
Edit your `router.ex` file and add this plug to the appropriate pipeline.
```elixir
pipeline :api do
plug :accepts, ["json"]
plug PlugSigaws
end
```
You can use this plug to secure access to non-api resources as well using
"presigned URLs".
### Content Parser Setup
The signature verification process involves computing the hash digest of the
request body in its raw form. Given the Plug restriction that the request body
can be read only once, it is imperative that any content parsers used in
the Plug pipeline make the raw content available for hash computation.
Edit the `endpoint.ex` file and replace the `:json` and `:urlencoded` parsers
with the corresponding `PlugSigaws` versions that make the raw content
available for hash computation.
```elixir
plug Plug.Parsers,
parsers: [PlugSigaws.Parsers.JSON, PlugSigaws.Parsers.URLENCODED, :multipart],
pass: ["*/*"],
json_decoder: Poison
```
> This plug checks for `conn.assigns[:raw_body]` in the connection. Any
> content parser plug present before this plug that consumes the request body
> should make the raw content available in the `:raw_body` assign. **Without this
> the signature verification may fail.**
>
> If the raw body is not available as an assign, this plug will read the request
> body by calling `Plug.Conn.read_body/2` and make it available in the assign
> for subsequent consumption in the pipeline.
### Quickstart Verification Provider
Verifying the signature involves making sure that the region/service used
in the signature are valid for the server hosting the service. It also
involves recomputing the signature from the request data and comparing
against what is passed in.
The `sigaws` package includes a "quickstart" provider that can be used to
quickly try out signature verification. **You need three things to make
use of this provider**:
1. Add the quick start provider to the project dependencies.
```elixir
defp deps do
{:sigaws_quickstart_provider, "~> 0.1"}
end
```
2. Add this provider to your supervision tree. This is needed so that
the credentials can be read from a file.
```elixir
use Application
def start(_type, _args) do
import Supervisor.Spec
children = [
worker(SigawsQuickStartProvider, [[name: :sigaws_provider]]),
# ....
]
# ....
Supervisor.start_link(children, opts)
end
```
3. Add the following to your `config.exs`:
```elixir
config :plug_sigaws,
provider: SigawsQuickStartProvider
config :sigaws_quickstart_provider,
regions: "us-east-1,alpha-quad,beta-quad,gamma-quad,delta-quad",
services: "my-service,img-service",
creds_file: "sigaws_quickstart.creds"
```
The quickstart provider configuration parameters:
- `regions` -- Set this to a comma separated list of region names.
For example, `us-east-1,gamma-quad,delta-quad`.
A request signed with a region not in this list will fail.
- `services` -- Set this to a comma separated list of service names.
Just as the was case with regions, a request
signed with a service not in this list will fail.
- `creds_file` -- Path to the credentials file. The quickstart provider
reads this file to get the list of valid access key IDs and their
corresponding secrets. Each line in this file represents a valid
credential with a colon separating the access key ID and the secret.
Here are the defaults used when any of these environment variables is not set:
| Parameter | Default |
|:-------- |:------- |
| `regions` | `us-east-1` |
| `services` | `my-service` |
| `creds_file` | `sigaws_quickstart.creds` in the current working directory|
### Build your own Verification Provider
Most probably you want the access key ID/secrets stored in a database or some
other external system.
> Use `SigawsQuickStartProvider` (a separate Hex package) as a starting point
> and build your own provider.
Configure `plug_sigaws` to use your provider instead of the quickstart provider.
"""
import Plug.Conn
require Logger
def init(opts), do: opts
@doc """
Performs AWS Signature verification using the request data in the connection.
Upon success, verification information is made available in an assign called
`:sigaws_ctxt`.
Failure results in an HTTP `401` response.
"""
def call(conn, _opts) do
{conn, body} =
if conn.assigns[:raw_body] != nil do
{conn, conn.assigns[:raw_body]}
else
{:ok, body, conn} = conn |> read_body()
{conn |> assign(:raw_body, body), body}
end
provider = Application.get_env(:plug_sigaws, :provider)
unless provider do
Logger.log(:error, "ERROR: plug_sigaws provider config not set")
end
verification_opts = [
provider: provider,
method: conn.method,
headers: conn.req_headers,
params: conn.query_params,
body: body
]
verification_result = Sigaws.verify(conn.request_path, verification_opts)
case verification_result do
{:ok, %Sigaws.Ctxt{} = ctxt} ->
assign(conn, :sigaws_ctxt, ctxt)
{:error, error, info} when is_atom(error) ->
msg = "#{Atom.to_string(error)}: #{inspect info}"
conn |> resp(401, msg) |> halt()
{:error, msg} -> conn |> resp(401, msg) |> halt()
end
rescue
error ->
conn
|> resp(401, "Authorization Failed: #{inspect error}")
|> halt()
end
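# Example (sketch): downstream plugs and controllers can read the verified
# context from the assign set above. The `access_key` field name on
# `Sigaws.Ctxt` is taken from the sigaws docs; treat it as an assumption.
#
#     def index(conn, _params) do
#       %Sigaws.Ctxt{access_key: access_key} = conn.assigns[:sigaws_ctxt]
#       ...
#     end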
end
|
lib/plug_sigaws.ex
| 0.898144 | 0.834002 |
plug_sigaws.ex
|
starcoder
|
defmodule Game do
@moduledoc """
Provides the functions for a hangman game
"""
@doc """
Makes a list of "-" the same size as the word
## Parameters
- Letter_list: list of all the letters of a word
## Example
iex> Game.turn_ocult(String.codepoints("Marcelo"))
["-", "-", "-", "-", "-", "-", "-"]
"""
def turn_ocult(letter_list), do: Enum.map(letter_list, fn _ -> "-" end)
@doc """
Verifies if the letter given by the user matches the letter of the original word, switching the "-" in the ocult list at the same index.
Used only by `check_letter/3`.
## Parameters
- {Value, Ocult}
- Value: char that represents the current letter of the word
- Ocult: char that represents the current letter of the ocult list
- Value: char that represents the letter given by the user
"""
def att_ocult({value, _}, value), do: value
def att_ocult({_, ocult}, _), do: ocult
@doc """
Transforms the word and the ocult list into a single list of tuples {x, y} where x is a letter of the original word and y is the element at the same index in ocult, then applies the given letter.
## Parameters
- Word: String, the original word
- Letter: String, the letter given by the user
- Ocult: the ocult list
## Example
iex> Game.check_letter("Marcelo", "a", ["-", "-", "-", "-", "-", "-", "-"])
["-", "a", "-", "-", "-", "-", "-"]
"""
def check_letter(word, letter, ocult) do
list_letter = Enum.zip(String.codepoints(word), ocult)
Enum.map(list_letter, fn e -> att_ocult(e, letter) end)
end
@doc """
Compares the original word with the current ocult word; if they are the same, the user wins.
## Parameters
- Word: String, the original word
- Ocult: String, the ocult word
## Example
iex> Game.win?("Marcelo", "-a-c-lo")
["-", "a", "-", "c", "-", "l", "o"]
iex> Game.win?("Marcelo", "Marcelo")
:ok
"""
def win?(word, word), do: :ok
def win?(_, ocult), do: String.codepoints(ocult)
@doc """
Draws the ocult word on the terminal and gets a letter from the user
## Parameters
- Ocult: list of all "-" and letters
- Attempts: integer, the number of attempts left
## Example
iex> Game.draw(["-", "-", "-", "-", "-", "-", "-"], 10)
Word: -------
10 attempts left.
Type a letter:
"""
def draw(ocult, attempts) do
IO.puts "\nWord: #{ocult}"
IO.puts "#{attempts} attempts left."
String.trim(IO.gets "Type a letter: ")
end
@doc """
Starts the game loop
## Parameters
- Word: a random word taken from the list of words
- Ocult: the same word, but hidden, e.g. "--------"
- Attempts: integer, the number of attempts left
## Example
iex> Game.loop("Marcelo", Game.turn_ocult(String.codepoints("Marcelo")), 2)
# Game starts
iex> Game.loop("Marcelo", :ok, 1)
# Congratulations, you freed the Alchemist.
iex> Game.loop("Marcelo", ["m", "a", "r", "-", "e", "l", "o"], -1)
# You lose, all your attempts are over.
"""
def loop(word, :ok, _), do: IO.puts "\nCongratulations, you freed the Alchemist. \nThe word is #{word}."
def loop(_, _, -1), do: IO.puts "\nYou lose, all your attempts are over. \nThe Alchemist remains stuck."
def loop(word, ocult, attempts) do
letter = draw(ocult, attempts)
new_ocult = check_letter(word, letter, ocult)
win = win?(word, Enum.join(new_ocult))
loop(word, win, attempts - 1)
end
end
words = ["javascrtip", "python", "clojure", "haskell", "java", "ruby",
"elixir", "erlang", "ocalm", "groovy", "pascal", "swift"]
# Get a random word from the list
# :rand.uniform/1 is 1-based while Enum.at/2 is 0-based, so the original
# Enum.at(words, :rand.uniform(length(words))) could return nil; use Enum.random/1
word = Enum.random(words)
ocult = Game.turn_ocult(String.codepoints(word))
# Start the game
Game.loop(word, ocult, 15)
|
hanged-alchemist.ex
| 0.712532 | 0.671545 |
hanged-alchemist.ex
|
starcoder
|
defmodule BitstylesPhoenix.Component.Error do
use BitstylesPhoenix.Component
alias Phoenix.HTML.Form, as: PhxForm
@moduledoc """
Component for showing UI errors.
"""
@doc """
Render errors from a Phoenix.HTML.Form.
## Attributes
- `form` *(required)* - The form to render the input form.
- `field` *(required)* - The name of the field for the input.
- `class` - Extra classes to pass to the wrapping `ul` if there are multiple errors.
See `BitstylesPhoenix.Helper.classnames/1` for usage.
- `error_class` - Extra classes to pass to the error component.
See `BitstylesPhoenix.Helper.classnames/1` for usage.
See also `BitstylesPhoenix.Component.Form`.
Uses the `translate_errors` MFA from the config to translate field errors (e.g. with `gettext`).
"""
story("A single error", '''
iex> assigns = %{form: @error_form}
...> render ~H"""
...> <.ui_errors form={@form} field={:single} />
...> """
"""
<span class="u-fg--warning" phx-feedback-for="user[single]">
is too short
</span>
"""
''')
story("Multiple errors", '''
iex> assigns = %{form: @error_form}
...> render ~H"""
...> <.ui_errors form={@form} field={:multiple} />
...> """
"""
<ul class="u-padding-xl-left">
<li>
<span class="u-fg--warning" phx-feedback-for="user[multiple]">
is simply bad
</span>
</li>
<li>
<span class="u-fg--warning" phx-feedback-for="user[multiple]">
not fun
</span>
</li>
</ul>
"""
''')
def ui_errors(assigns) do
assigns.form.errors
|> Keyword.get_values(assigns.field)
|> case do
[] ->
~H""
[error] ->
assigns = assign(assigns, error: error)
~H"""
<.ui_error
error={@error}
phx-feedback-for={PhxForm.input_name(assigns.form, assigns.field)}
class={assigns[:error_class]} />
"""
errors ->
class = classnames(["u-padding-xl-left", assigns[:class]])
assigns = assign(assigns, class: class)
~H"""
<ul class={@class}>
<%= for error <- errors do %>
<li>
<.ui_error
error={error}
phx-feedback-for={PhxForm.input_name(assigns.form, assigns.field)}
class={assigns[:error_class]} />
</li>
<% end %>
</ul>
"""
end
end
@doc """
Generates tag for custom errors.
## Attributes
- `error` *(required)* - The error to render (expected to be a tuple with `{message :: String.t(), opts :: keyword()}`).
- All other attributes are passed to the outer `span` tag.
Uses the `translate_errors` MFA from the config to translate field errors (e.g. with `gettext`).
The error will be rendered with the warning color, as
specified in [bitstyles colors](https://bitcrowd.github.io/bitstyles/?path=/docs/utilities-fg--warning).
"""
story("An error tag", '''
iex> assigns = %{}
...> render ~H"""
...> <.ui_error error={{"Foo error", []}} />
...> """
"""
<span class="u-fg--warning">
Foo error
</span>
"""
''')
story("An error tag extra options and classes", '''
iex> assigns = %{error: {"Foo error", []}}
...> render ~H"""
...> <.ui_error error={@error} phx-feedback-for="foo" class="bar" />
...> """
"""
<span class="u-fg--warning bar" phx-feedback-for="foo">
Foo error
</span>
"""
''')
def ui_error(assigns) do
extra = assigns_to_attributes(assigns, [:class, :error, :field, :form])
class = classnames(["u-fg--warning", assigns[:class]])
assigns = assign(assigns, class: class, extra: extra)
~H"""
<span class={@class} {@extra}><%= translate_error(@error) %></span>
"""
end
defp translate_error(error) do
{mod, translate_fn, args} =
Application.get_env(
:bitstyles_phoenix,
:translate_errors,
{__MODULE__, :no_translation, []}
)
apply(mod, translate_fn, args ++ [error])
end
@doc false
def no_translation({error, _}) do
error
end
def no_translation(error), do: error
end
|
lib/bitstyles_phoenix/component/error.ex
| 0.845002 | 0.443962 |
error.ex
|
starcoder
|
defmodule Exnoops.Directbot do
@moduledoc """
Module to interact with Github's Noop: Directbot
See the [official `noop` documentation](https://noopschallenge.com/challenges/directbot) for API information including the accepted parameters
"""
require Logger
import Exnoops.API
@noop "directbot"
@doc """
Query Directbot for direction(s)
+ Parameters are sent with a keyword list into the function
+ Returns a list of tuples:
`{direction, distance, speed, coordinate tuple {{a_x, a_y}, {b_x, b_y}}}`
## Examples
iex> Exnoops.Directbot.get_direction()
{:ok, [{:up, 96, 97, nil}]}
iex> Exnoops.Directbot.get_direction([count: 5])
{:ok, [
{:down, 73, 58, nil},
{:right, 58, 69, nil},
{:down, 42, 12, nil},
{:right, 51, 84, nil},
{:down, 35, 14, nil},
]}
# Set max speed and distance
iex> Exnoops.Directbot.get_direction([count: 5, speed: 5, distance: 10])
{:ok, [
{:left, 10, 2, nil},
{:down, 10, 2, nil},
{:right, 10, 2, nil},
{:down, 10, 1, nil},
{:up, 10, 4, nil}
]}
iex> Exnoops.Directbot.get_direction([count: 1, connected: 1])
{:ok, [{:up, 32, 6, {{84,609}, {91,609}}}]}
"""
@spec get_direction(keyword()) :: {atom(), list()}
def get_direction(opts \\ []) when is_list(opts) do
Logger.debug("Calling Directbot.get_direction()")
case get("/" <> @noop, opts) do
{:ok, %{"directions" => directions}} ->
{:ok, format_directions(directions)}
error ->
error
end
end
def format_directions(directions) do
Enum.map(directions, &line_to_tuple/1)
end
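# Example (sketch) of the shape this expects from the API; the "coordinates"
# key is only present for connected directions, and defaults to nil otherwise:
#
#     format_directions([%{"direction" => "up", "distance" => 96, "speed" => 97}])
#     #=> [{:up, 96, 97, nil}]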
defp line_to_tuple(line_map) do
line_map
|> Map.update("coordinates", nil, fn
%{"a" => %{"x" => a_x, "y" => a_y}, "b" => %{"x" => b_x, "y" => b_y}} ->
{{a_x, a_y}, {b_x, b_y}}
end)
|> (fn %{
"direction" => direction,
"distance" => distance,
"speed" => speed,
"coordinates" => coordinates
} ->
{String.to_atom(direction), distance, speed, coordinates}
end).()
end
end
|
lib/exnoops/directbot.ex
| 0.765155 | 0.663973 |
directbot.ex
|
starcoder
|
defmodule Pandex do
@moduledoc ~S"""
Pandex is a lightweight Elixir wrapper for [Pandoc](http://pandoc.org). It enables you to convert Markdown, CommonMark, HTML, LaTeX, and JSON to HTML, HTML5, opendocument, RTF, Textile, AsciiDoc, Markdown, JSON, and others. Pandex has no dependencies other than Pandoc itself.
Pandex enables you to perform any combination of the conversion below:
|Convert From (any) | Convert To (any) |
|:-------------------|:-------------------|
| commonmark | asciidoc |
| gfm | beamer |
| html | commonmark |
| json | context |
| latex | docbook |
| markdown | dzslides |
| markdown_github * | gfm |
| markdown_mmd | html |
| markdown_phpextra | html5 |
| markdown_strict | json |
| rst | latex |
| textile | man |
| | markdown |
| | markdown_github * |
| | markdown_mmd |
| | markdown_phpextra |
| | markdown_strict |
| | mediawiki |
| | opendocument |
| | org |
| | plain |
| | rst |
| | rtf |
| | s5 |
| | slidy |
| |texinfo |
| | textile |
`*` Deprecated: `markdown_github`. Use `gfm` instead.
# Usage
Pandex follows the syntax of `<format from>_to_<format to> <string>`
## Examples:
iex> Pandex.gfm_to_html("# Title \n\n## List\n\n- one\n- two\n- three\n")
{:ok, "<h1 id=\"title\">Title</h1>\n<h2 id=\"list\">List</h2>\n<ul>\n<li>one</li>\n<li>two</li>\n<li>three</li>\n</ul>\n"}
iex> Pandex.latex_to_html5("\\section{Title}\n\n\\subsection{List}\n\n\\begin{itemize}\n\\tightlist\n\\item\n one\n\\item\n two\n\\item\n three\n\\end{itemize}\n")
{:ok, "<h1 id=\"title\">Title</h1>\n<h2 id=\"list\">List</h2>\n<ul>\n<li><p>one</p></li>\n<li><p>two</p></li>\n<li><p>three</p></li>\n</ul>\n"}
iex> Pandex.latex_to_json("\\section{Title}\\label{title}\n\n\\subsection{List}\\label{list}\n\n\\begin{itemize}\n\\item\n one\n\\item\n two\n\\item\n three\n\\end{itemize}\n")
{:ok, "{\"blocks\":[{\"t\":\"Header\",\"c\":[1,[\"title\",[],[]],[{\"t\":\"Str\",\"c\":\"Title\"}]]},{\"t\":\"Header\",\"c\":[2,[\"list\",[],[]],[{\"t\":\"Str\",\"c\":\"List\"}]]},{\"t\":\"BulletList\",\"c\":[[{\"t\":\"Para\",\"c\":[{\"t\":\"Str\",\"c\":\"one\"}]}],[{\"t\":\"Para\",\"c\":[{\"t\":\"Str\",\"c\":\"two\"}]}],[{\"t\":\"Para\",\"c\":[{\"t\":\"Str\",\"c\":\"three\"}]}]]}],\"pandoc-api-version\":[1,17,5,4],\"meta\":{}}\n"}
"""
@readers [
"commonmark",
"gfm",
"html",
"json",
"latex",
"markdown",
"markdown_github",
"markdown_mmd",
"markdown_phpextra",
"markdown_strict",
"rst",
"textile"
]
@writers [
"asciidoc",
"beamer",
"commonmark",
"context",
"docbook",
"dzslides",
"gfm",
"html",
"html5",
"json",
"latex",
"man",
"markdown",
"markdown_github",
"markdown_mmd",
"markdown_phpextra",
"markdown_strict",
"mediawiki",
"opendocument",
"org",
"plain",
"rst",
"rtf",
"s5",
"slidy",
"texinfo",
"textile"
]
@tmp_folder ".tmp"
Enum.each(@readers, fn reader ->
Enum.each(@writers, fn writer ->
# Convert a string from one format to another.
# Example: `Pandex.markdown_to_html5("# Title \n\n## List\n\n- one\n- two\n- three\n")`
def unquote(:"#{reader}_to_#{writer}")(string, options \\ []) do
convert_string(string, unquote(reader), unquote(writer), options)
end
# Convert a file from one format to another.
# Example: `Pandex.markdown_file_to_html("sample.md") `
def unquote(:"#{reader}_file_to_#{writer}")(file, options \\ []) do
convert_file(file, unquote(reader), unquote(writer), options)
end
end)
end)
@doc """
`convert_string` works under the hood of all the other string conversion functions.
"""
def convert_string(string, from, to, options \\ []) when is_list(options) do
unless File.dir?(@tmp_folder), do: File.mkdir(@tmp_folder)
file = Path.join(@tmp_folder, random_filename())
File.write(file, string)
result =
case System.cmd("pandoc", [file, "--from=#{from}", "--to=#{to}" | options]) do
{output, 0} -> {:ok, output}
e -> {:error, e}
end
File.rm(file)
result
end
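# Example (sketch; requires the `pandoc` binary on $PATH). Extra command-line
# flags can be passed through `options`:
#
#     {:ok, html} = Pandex.convert_string("# Hi", "markdown", "html")
#     {:ok, doc} = Pandex.markdown_to_html5("# Hi", ["--standalone"])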
@doc """
`convert_file` works under the hood of all the other functions.
"""
def convert_file(file, from, to, options \\ []) when is_list(options) do
case System.cmd("pandoc", [file, "--from=#{from}", "--to=#{to}" | options]) do
{output, 0} -> {:ok, output}
e -> {:error, e}
end
end
# Private
defp random_filename do
Enum.join([random_string(), "-", timestamp(), ".tmp"])
end
defp random_string(length \\ 36) do
length
|> :crypto.strong_rand_bytes()
|> Base.url_encode64()
|> String.slice(0, length)
|> String.downcase()
end
defp timestamp do
:os.system_time(:seconds)
end
end
|
lib/pandex.ex
| 0.746878 | 0.493714 |
pandex.ex
|
starcoder
|
defmodule Grizzly.ZWave.Commands.NodeInfoCacheReport do
@moduledoc """
Report the cached node information
This command is normally used to respond to the `NodeInfoCacheGet` command
Params:
- `:seq_number` - the sequence number of the network command, normally from
the `NodeInfoCacheGet` command (required)
- `:status` - the status of the node information (required)
- `:age` - the age of the cache data. A number that is expressed `2 ^ n`
minutes (required)
- `:listening?` - if the node is listening node or sleeping node (required)
- `:command_classes` - a list of command class lists tagged by security attributes (optional,
defaults to an empty list)
- `:basic_device_class` - the basic device class (required)
- `:generic_device_class` - the generic device class (required)
- `:specific_device_class` - the specific device class (required)
"""
@behaviour Grizzly.ZWave.Command
import Bitwise
alias Grizzly.ZWave.Command
alias Grizzly.ZWave.CommandClasses.NetworkManagementProxy
alias Grizzly.ZWave.{CommandClasses, DeviceClasses}
@typedoc """
The status of the refresh of the node information cache
Status:
- `:ok` - the requested node id could found and up-to-date information is
returned
- `:not_responding` - the requested node id could be found but fresh
information could not be retrieved
- `:unknown` - the node id is unknown
"""
@type status :: :ok | :not_responding | :unknown
@type tagged_command_classes ::
{:non_secure_supported, [CommandClasses.command_class()]}
| {:non_secure_controlled, [CommandClasses.command_class()]}
| {:secure_supported, [CommandClasses.command_class()]}
| {:secure_controlled, [CommandClasses.command_class()]}
@type param ::
{:seq_number, Grizzly.seq_number()}
| {:status, status()}
| {:age, 1..14}
| {:listening?, boolean()}
| {:command_classes, [tagged_command_classes]}
| {:basic_device_class, DeviceClasses.basic_device_class()}
| {:generic_device_class, DeviceClasses.generic_device_class()}
| {:specific_device_class, DeviceClasses.specific_device_class()}
@impl true
@spec new([param]) :: {:ok, Grizzly.ZWave.Command.t()}
def new(params) do
# TODO validate params
command = %Command{
name: :node_info_cache_report,
command_byte: 0x04,
command_class: NetworkManagementProxy,
params: params,
impl: __MODULE__
}
{:ok, command}
end
@impl true
def encode_params(command) do
seq_number = Command.param!(command, :seq_number)
status_byte = encode_status(Command.param!(command, :status))
age = Command.param!(command, :age)
listening_byte = encode_listening?(Command.param!(command, :listening?))
command_classes = Command.param!(command, :command_classes)
basic_device_class_byte =
DeviceClasses.basic_device_class_to_byte(Command.param!(command, :basic_device_class))
generic_device_class = Command.param!(command, :generic_device_class)
specific_device_class_byte =
DeviceClasses.specific_device_class_to_byte(
generic_device_class,
Command.param!(command, :specific_device_class)
)
optional_functionality_byte = encode_optional_functionality_byte(command_classes)
# the `0x00` byte is a reserved byte for Z-Wave and must be set to 0x00
<<seq_number, status_byte ||| age, listening_byte, optional_functionality_byte, 0x00,
basic_device_class_byte, DeviceClasses.generic_device_class_to_byte(generic_device_class),
specific_device_class_byte>> <> CommandClasses.command_class_list_to_binary(command_classes)
end
@impl true
def decode_params(
<<seq_number, status::size(4), age::size(4), list?::size(1), _::size(7), _, _keys,
basic_device_class_byte, generic_device_class_byte, specific_device_class_byte,
command_classes::binary>>
) do
{:ok, basic_device_class} =
DeviceClasses.basic_device_class_from_byte(basic_device_class_byte)
{:ok, generic_device_class} =
DeviceClasses.generic_device_class_from_byte(generic_device_class_byte)
{:ok, specific_device_class} =
DeviceClasses.specific_device_class_from_byte(
generic_device_class,
specific_device_class_byte
)
{:ok,
[
seq_number: seq_number,
basic_device_class: basic_device_class,
generic_device_class: generic_device_class,
specific_device_class: specific_device_class,
listening?: bit_to_bool(list?),
command_classes: CommandClasses.command_class_list_from_binary(command_classes),
status: decode_status(status),
age: age
]}
end
# The 4-bit status occupies the upper nibble so it can be OR'ed with the 4-bit
# age; values mirror decode_status/1 below
def encode_status(:ok), do: 0x00
def encode_status(:not_responding), do: 0x10
def encode_status(:unknown), do: 0x20
def encode_command_classes(_), do: 0
def encode_listening?(true), do: 0x80
def encode_listening?(false), do: 0x00
def bit_to_bool(bit), do: bit == 1
def encode_optional_functionality_byte([]), do: 0x00
def encode_optional_functionality_byte(_), do: 0x80
def decode_command_classes(""), do: []
def decode_command_classes(_), do: []
def decode_status(0x00), do: :ok
def decode_status(0x01), do: :not_responding
def decode_status(0x02), do: :unknown
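# Sketch of the nibble packing used above: the 4-bit status shares a byte with
# the 4-bit age, so e.g. status :not_responding (0x1) with age 3 round-trips as:
#
#     <<status::size(4), age::size(4)>> = <<0x13>>
#     # status == 0x1, age == 3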
end
|
lib/grizzly/zwave/commands/node_info_cached_report.ex
| 0.805632 | 0.488283 |
node_info_cached_report.ex
|
starcoder
|
defmodule LocalLedger.CachedBalance do
@moduledoc """
This module is an interface to the LocalLedgerDB Balance schema. It is responsible for caching
balances and serves as an interface to retrieve the current balances (which will either be
loaded from a cached balance or computed - or both).
"""
alias LocalLedgerDB.{Balance, CachedBalance, Transaction}
@doc """
Caches all the balances using a batch stream mechanism for retrieval (1000 at a time). This
is meant to be used from some kind of scheduler, but can also be run manually.
"""
@spec cache_all() :: {}
def cache_all do
Balance.stream_all(fn balance ->
{:ok, calculate_with_strategy(balance)}
end)
end
@doc """
Get all the balance amounts for the given balance.
"""
@spec all(Balance.t) :: {:ok, Map.t}
def all(balance) do
{:ok, get_amounts(balance)}
end
@doc """
Get the balance amount for the specified minted token (friendly_id) and
the given balance.
"""
@spec get(Balance.t, String.t) :: {:ok, Map.t}
def get(balance, friendly_id) do
amounts = get_amounts(balance)
{:ok, %{friendly_id => amounts[friendly_id] || 0}}
end
defp get_amounts(balance) do
balance.address
|> CachedBalance.get()
|> calculate_amounts(balance)
end
defp calculate_amounts(nil, balance), do: calculate_from_beginning_and_insert(balance)
defp calculate_amounts(cached_balance, balance) do
balance.address
|> Transaction.calculate_all_balances(%{
since: cached_balance.computed_at
})
|> add_amounts(cached_balance.amounts)
end
defp add_amounts(amounts_1, amounts_2) do
Map.keys(amounts_1) ++ Map.keys(amounts_2)
|> Enum.map(fn friendly_id ->
{friendly_id, (amounts_1[friendly_id] || 0) + (amounts_2[friendly_id] || 0)}
end)
|> Enum.into(%{})
end
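# For example (sketch):
#
#     add_amounts(%{"OMG:123" => 10}, %{"OMG:123" => 5, "BTC:456" => 1})
#     #=> %{"OMG:123" => 15, "BTC:456" => 1}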
defp calculate_with_strategy(balance) do
:local_ledger
|> Application.get_env(:balance_caching_strategy)
|> calculate_with_strategy(balance)
end
defp calculate_with_strategy("since_last_cached", balance) do
case CachedBalance.get(balance.address) do
nil -> calculate_from_beginning_and_insert(balance)
cached_balance -> calculate_from_cached_and_insert(balance, cached_balance)
end
end
defp calculate_with_strategy("since_beginning", balance) do
calculate_from_beginning_and_insert(balance)
end
defp calculate_with_strategy(_, balance) do
calculate_with_strategy("since_beginning", balance)
end
defp calculate_from_beginning_and_insert(balance) do
computed_at = NaiveDateTime.utc_now()
balance.address
|> Transaction.calculate_all_balances(%{upto: computed_at})
|> insert(balance, computed_at)
end
defp calculate_from_cached_and_insert(balance, cached_balance) do
computed_at = NaiveDateTime.utc_now()
balance.address
|> Transaction.calculate_all_balances(%{
since: cached_balance.computed_at,
upto: computed_at
})
|> add_amounts(cached_balance.amounts)
|> insert(balance, computed_at)
end
defp insert(amounts, balance, computed_at) do
if Enum.any?(amounts, fn {_token, amount} -> amount > 0 end) do
{:ok, _} = CachedBalance.insert(%{
amounts: amounts,
balance_address: balance.address,
computed_at: computed_at
})
end
amounts
end
end
|
apps/local_ledger/lib/local_ledger/cached_balance.ex
| 0.835383 | 0.538983 |
cached_balance.ex
|
starcoder
|
defmodule Shiftplaner.Event do
@moduledoc false
use Ecto.Schema
alias Shiftplaner.{Event, Repo, Weekend}
import Ecto.{Query, Changeset}, warn: false
require Logger
@primary_key {:id, :binary_id, autogenerate: true}
@foreign_key_type Ecto.UUID
@preloads :weekends
@type id :: String.t
@type t :: %__MODULE__{name: String.t, active: boolean, weekends: list(Shiftplaner.Weekend.t)}
schema "event" do
field :name
field :active, :boolean, default: false
has_many :weekends, Weekend, on_delete: :delete_all
timestamps()
end
@spec add_weekend_to_event(Shiftplaner.Event.t, Shiftplaner.Weekend.t)
:: {:ok, Shiftplaner.Event.t} | {:error, Ecto.Changeset.t}
def add_weekend_to_event(%Event{} = event, list_of_weekends) when is_list(list_of_weekends) do
Enum.each(list_of_weekends, &add_weekend_to_event(event, &1))
end
def add_weekend_to_event(%Event{} = event, %Weekend{} = weekend) do
event
|> event_changeset(%{})
|> put_assoc(:weekends, [weekend])
|> Repo.update()
|> update_result()
end
def add_weekend_to_event(%Event{} = event, attrs) when is_map(attrs) do
event
|> event_changeset(attrs)
|> Repo.update()
|> update_result()
end
@doc """
Tries to create an event from the given ```attrs```.
Returns either ```{:ok, event}``` or ```{:error, changeset}```
"""
@spec create_event(map) :: {:ok, Shiftplaner.Event.t} | {:error, Ecto.Changeset.t}
def create_event(attrs) do
%Event{}
|> event_changeset(attrs)
|> Repo.insert()
|> insert_result()
end
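# Example (sketch):
#
#     {:ok, event} = Shiftplaner.Event.create_event(%{name: "Summer Fest", active: true})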
@doc """
creates a changeset for the given ```event```
event = An event of type ```%Shiftplaner.Event{}```
Returns an ```%Ecto.Changeset{}``` for the ```%Shiftplaner.Event{}```
"""
@spec change_event(Shiftplaner.Event.t) :: Ecto.Changeset.t
def change_event(%Event{} = event) do
event_changeset(event, %{})
end
@doc """
It returns {:ok, event} if the struct has been successfully deleted or {:error, changeset}
if there was a validation or a known constraint error.
"""
@spec delete_event(Shiftplaner.Event.t) :: {:ok, Shiftplaner.Event.t} | {:error, Ecto.Changeset.t}
def delete_event(%Event{} = event) do
event
|> Repo.delete()
end
@doc """
Lists all active and inactive events and preloads the weekends.
Returns a list of ```Shiftplaner.Event```
"""
@spec list_all_events :: list(Shiftplaner.Event.t)
def list_all_events do
Event
|> Repo.all()
|> Repo.preload(:weekends)
end
@doc """
Lists all active events. The struct is fully preloaded.
"""
@spec list_all_active_events :: list(Shiftplaner.Event.t)
def list_all_active_events do
Event
|> where([e], e.active == true)
|> join(:left, [e], weekends in assoc(e, :weekends))
|> join(:left, [e, w], days in assoc(w, :days))
|> join(:left, [e, w, d], shifts in assoc(d, :shifts))
|> join(:left, [e, w, d, s], available_persons in assoc(s, :available_persons))
|> join(:left, [e, w, d, s], dispositioned_persons in assoc(s, :dispositioned_persons))
|> join(:left, [e, w, d, s], dispositioned_griller in assoc(s, :dispositioned_griller))
|> preload(
[
_,
weekends,
days,
shifts,
available_persons,
dispositioned_persons,
dispositioned_griller
],
[
weekends: {
weekends,
days: {
days,
shifts: {
shifts,
available_persons: available_persons,
dispositioned_persons: dispositioned_persons,
dispositioned_griller: dispositioned_griller
}
}
}
]
)
|> order_by(
[e, w, d, s, ava_p, disp_p, disp_g],
[e.inserted_at, d.date, s.start_time, disp_p.sure_name]
)
|> Repo.all
end
@doc """
List all shift ids for an given event.
event: either an event_id or an ```Shiftplaner.Event```
Returns: a list of all shift ids for an given event.
"""
@spec list_all_shifts_for_event(
Shiftplaner.Event.t | Shiftplaner.Event.id
)
:: list(Shiftplaner.Shift.id) | no_return
def list_all_shifts_for_event(%Event{} = event) do
list_all_shifts_for_event(event.id)
end
def list_all_shifts_for_event(event_id) when is_binary(event_id) do
query = from e in Event,
where: e.id == ^event_id,
join: w in assoc(e, :weekends),
join: d in assoc(w, :days),
join: s in assoc(d, :shifts),
select: s.id
Repo.all(query)
end
@doc """
Tries to get the event for the given binary uuid.
Returns either `{:ok, event}` or `{:error, :could_not_fetch_event}`.
"""
@spec get_event(String.t) :: {:ok, Shiftplaner.Event.t} | {:error, :could_not_fetch_event}
def get_event(id) when is_binary(id) do
Event
|> where([e], e.id == ^id)
|> Repo.one
|> Repo.preload(:weekends)
|> result_to_tuple()
end
@doc """
Similar to ```get_event/1``` but raises if no record is found.
"""
@spec get_event!(String.t) :: Shiftplaner.Event.t | no_return
def get_event!(id) when is_binary(id) do
case get_event(id) do
{:ok, event} -> event
{:error, _} -> raise RuntimeError, message: "Could not fetch event :("
end
end
def preload_all(%Event{} = event) do
query = from e in Event,
where: e.id == ^event.id,
left_join: weekends in assoc(e, :weekends),
left_join: days in assoc(weekends, :days),
left_join: shifts in assoc(days, :shifts),
preload: [weekends: {weekends, days: {days, :shifts}}]
Repo.one(query)
end
@doc """
Updates the given ```Event```.
If successful returns ```{:ok, updated_event}```.
If unsuccessful returns ```{:error, changeset}```.
"""
@spec update_event(Shiftplaner.Event.t, map) :: {:ok, Shiftplaner.Event.t} | {
:error,
Ecto.Changeset.t
}
def update_event(%Event{} = event, attrs) do
event
|> event_changeset(attrs)
|> Repo.update()
|> update_result()
end
defp event_changeset(%Event{} = event, attrs) do
event
|> Repo.preload(@preloads)
|> cast(attrs, [:name, :active])
|> cast_assoc(:weekends)
|> validate_required([:name])
end
defp insert_result({:ok, %Event{} = event}) do
Logger.debug fn ->
"successfully inserted event - #{event.id}: #{event.name}"
end
{:ok, event}
end
defp insert_result({:error, reason}) do
Logger.warn fn -> "Could not insert event - #{reason}" end
{:error, reason}
end
defp result_to_tuple(%Event{} = event) do
{:ok, event}
end
defp result_to_tuple(_) do
{:error, :could_not_fetch_event}
end
defp update_result({:ok, %Event{} = event}) do
Logger.debug fn ->
"successfully updated event - #{event.id}: #{event.name}"
end
{:ok, event}
end
defp update_result({:error, reason}) do
Logger.warn fn -> "Could not update event - #{reason}" end
{:error, reason}
end
end
|
lib/shiftplaner/event.ex
| 0.900102 | 0.623291 |
event.ex
|
starcoder
|
require Logger
defmodule ExoSQL.Parser do
@moduledoc """
Parsed an SQL statement into a ExoSQL.Query.
The Query then needs to be planned and executed.
It also resolves partial column and table names using data from the context
and its schema functions.
Uses leex and yecc to perform a first phase parsing, and then
convert an dprocess the structure using more context knowledge to return
a proper Query struct.
"""
~S"""
Parses the yecc-provided parse tree into a more realistic and complete parse
tree with all data resolved.
"""
defp real_parse(parsed, context) do
%{
with: with_,
select: select,
from: from,
where: where,
groupby: groupby,
join: join,
orderby: orderby,
limit: limit,
offset: offset,
union: union
} = parsed
{select, select_options} = select
if ExoSQL.debug_mode(context) do
Logger.debug("ExoSQL Parser #{inspect(parsed, pretty: true)} #{inspect(context)}")
end
{context, with_parsed} =
if with_ != [] do
# Logger.debug("Parsed #{inspect parsed, pretty: true}")
context = Map.put(context, :with, %{})
# Context adds the known columns to be used later, and returns the
# :with_parsed which are the queries to be executed once by the main query.
Enum.reduce(with_, {context, []}, fn
{name, select}, {context, with_parsed} ->
{:ok, parsed} = real_parse(select, context)
# Logger.debug("parse with #{inspect parsed}")
columns =
resolve_columns(parsed, context)
|> Enum.map(fn {_, _, col} -> {:with, name, col} end)
# Logger.debug("Columns for #{inspect(name)}: #{inspect(columns)}")
context = put_in(context, [:with, name], columns)
{context, with_parsed ++ [{name, parsed}]}
end)
else
{context, []}
end
all_tables_at_context = resolve_all_tables(context)
# Logger.debug("All tables #{inspect all_tables_at_context}")
# Logger.debug("Resolve tables #{inspect(from, pretty: true)}")
{from, cross_joins} =
case from do
[] ->
{nil, []}
[from | cross_joins] ->
from = resolve_table(from, all_tables_at_context, context)
cross_joins =
Enum.map(cross_joins, fn
# already a cross join lateral, not change
{:cross_join_lateral, opts} ->
{:cross_join_lateral, opts}
# difficult for the erlang parser, just redo it here.
{:alias, {{:cross_join_lateral, cjl}, alias_}} ->
{:cross_join_lateral, {:alias, {cjl, alias_}}}
# functions are always lateral
{:fn, f} ->
{:cross_join_lateral, {:fn, f}}
{:alias, {{:fn, f}, al}} ->
{:cross_join_lateral, {:alias, {{:fn, f}, al}}}
# Was using , operator -> cross joins
other ->
{:cross_join, other}
end)
{from, cross_joins}
end
# Logger.debug("from #{inspect (cross_joins ++ join), pretty: true}")
join =
Enum.map(cross_joins ++ join, fn
{:cross_join_lateral, table} ->
context =
Map.put(
context,
"__parent__",
resolve_all_columns([from], context) ++ Map.get(context, "__parent__", [])
)
# Logger.debug("Resolve table, may need my columns #{inspect table} #{inspect context}")
resolved = resolve_table(table, all_tables_at_context, context)
{:cross_join_lateral, resolved}
{type, {:select, query}} ->
{:ok, parsed} = real_parse(query, context)
# Logger.debug("Resolved #{inspect parsed}")
{type, parsed}
{type, {{:select, query}, ops}} ->
{:ok, parsed} = real_parse(query, context)
# Logger.debug("Resolved #{inspect parsed}")
{type, {parsed, ops}}
{type, {:table, table}} ->
resolved = resolve_table({:table, table}, all_tables_at_context, context)
{type, resolved}
{type, {{:table, table}, ops}} ->
# Logger.debug("F is #{inspect {table, ops}}")
resolved = resolve_table({:table, table}, all_tables_at_context, context)
{type, {resolved, ops}}
{_type, {{:alias, {{:fn, _}, _}}, _}} = orig ->
orig
{type, {{:alias, {orig, alias_}}, ops}} ->
# Logger.debug("Table is #{inspect orig}")
resolved = resolve_table(orig, all_tables_at_context, context)
{type, {{:alias, {resolved, alias_}}, ops}}
end)
# Logger.debug("Prepare join tables #{inspect(join, pretty: true)}")
all_tables =
if join != [] do
[from] ++
Enum.map(join, fn
{_type, {:table, from}} ->
{:table, from}
{_type, {:alias, {from, alias_}}} ->
{:alias, {from, alias_}}
{_type, {:fn, args}} ->
{:fn, args}
{_type, {from, _on}} ->
from
{_type, from} ->
from
end)
else
if from == nil do
[]
else
[from]
end
end
# Logger.debug("All tables at all columns #{inspect(all_tables)}")
select_columns = resolve_all_columns(all_tables, context)
all_columns = Map.get(context, "__parent__", []) ++ select_columns
# Logger.debug("Resolved columns at query: #{inspect(all_columns)}")
# Now resolve references to tables, as in FROM xx, LATERAL nested(xx.json, "a")
from = resolve_column(from, all_columns, context)
groupby =
if groupby do
Enum.map(groupby, &resolve_column(&1, all_columns, context))
else
nil
end
# Logger.debug("All tables #{inspect all_tables}")
join =
Enum.map(join, fn
{type, {:fn, {func, params}}} ->
# Logger.debug("params #{inspect params}")
params = Enum.map(params, &resolve_column(&1, all_columns, context))
{type, {:fn, {func, params}}}
{type, {:alias, {{:fn, {func, params}}, alias_}}} ->
# Logger.debug("params #{inspect params}")
params = Enum.map(params, &resolve_column(&1, all_columns, context))
{type, {:alias, {{:fn, {func, params}}, alias_}}}
{type, {{:table, table}, expr}} ->
{type,
{
{:table, table},
resolve_column(expr, all_columns, context)
}}
{type, {any, expr}} ->
{type,
{
any,
resolve_column(expr, all_columns, context)
}}
{type, %ExoSQL.Query{} = query} ->
{type, query}
end)
# then resolve all expressions, as we know which tables to use
# Logger.debug("Get select resolved: #{inspect select}")
select =
case select do
[{:all_columns}] ->
# SELECT * do not include parent columns.
select_columns |> Enum.map(&{:column, &1})
_other ->
Enum.map(select, &resolve_column(&1, all_columns, context))
end
# Logger.debug("Resolved: #{inspect select}")
distinct =
case Keyword.get(select_options, :distinct) do
nil -> nil
other -> resolve_column(other, all_columns, context)
end
crosstab = Keyword.get(select_options, :crosstab)
where =
if where do
resolve_column(where, all_columns, context)
else
nil
end
# Resolve orderby
orderby =
Enum.map(orderby, fn {type, expr} ->
{type, resolve_column(expr, all_columns, context)}
end)
# resolve union
union =
if union do
{type, other} = union
{:ok, other} = real_parse(other, context)
{type, other}
end
with_ = with_parsed
{:ok,
%ExoSQL.Query{
select: select,
distinct: distinct,
crosstab: crosstab,
# all the tables it gets data from, but use only the first and the joins.
from: from,
where: where,
groupby: groupby,
join: join,
orderby: orderby,
limit: limit,
offset: offset,
union: union,
with: with_
}}
end
@doc """
Parses an SQL statement and returns the parsed ExoSQL struct.
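Returns `{:ok, %ExoSQL.Query{}}` on success or `{:error, reason}` on parse
errors. A minimal sketch (the empty context is an assumption):
{:ok, %ExoSQL.Query{}} = ExoSQL.Parser.parse("SELECT 1 + 1", %{})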
"""
def parse(sql, context) do
try do
sql = String.to_charlist(sql)
lexed =
case :sql_lexer.string(sql) do
{:ok, lexed, _lines} -> lexed
{:error, {other, _}} -> throw(other)
end
parsed =
case :sql_parser.parse(lexed) do
{:ok, parsed} -> parsed
{:error, any} -> throw(any)
end
# Logger.debug("Yeec parsed: #{inspect parsed, pretty: true}")
real_parse(parsed, context)
catch
{line_number, :sql_lexer, msg} ->
{:error, {:syntax, {msg, line_number}}}
{line_number, :sql_parser, msg} ->
{:error, {:syntax, {to_string(msg), line_number}}}
any ->
Logger.debug("Generic error at SQL parse: #{inspect(any)}")
{:error, any}
end
end
@doc ~S"""
Calculates the list of all fully-qualified (FQ) columns.
This simplifies the later gathering of which table has which column and so on,
especially when aliases are taken into account.
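For example (names illustrative), a table "products" in a database "A" with
columns "name" and "price" resolves to:
[{"A", "products", "name"}, {"A", "products", "price"}]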
"""
def resolve_all_columns(tables, context) do
# Logger.debug("Resolve all tables #{inspect tables}")
Enum.flat_map(tables, &resolve_columns(&1, context))
end
def resolve_columns({:alias, {any, alias_}}, context) do
case resolve_columns(any, context) do
# only one answer, same name as "table", alias it
[{_, a, a}] ->
[{:tmp, alias_, alias_}]
other ->
Enum.map(other, fn {_db, _table, column} -> {:tmp, alias_, column} end)
end
end
def resolve_columns({:table, {:with, table}}, context) do
# Logger.debug("Get :with columns: #{inspect table} #{inspect context, pretty: true}")
context[:with][table]
end
def resolve_columns({:table, nil}, _context) do
[]
end
def resolve_columns({:table, {db, table}}, context) do
# Logger.debug("table #{inspect {db, table}}")
{:ok, schema} = ExoSQL.schema(db, table, context)
Enum.map(schema[:columns], &{db, table, &1})
end
# no column names given, just unnest
def resolve_columns({:fn, {"unnest", [_expr]}}, _context) do
[{:tmp, "unnest", "unnest"}]
end
def resolve_columns({:fn, {"unnest", [_expr | columns]}}, _context) do
columns |> Enum.map(fn {:lit, col} -> {:tmp, "unnest", col} end)
end
def resolve_columns({:fn, {function, _params}}, _context) do
[{:tmp, function, function}]
end
def resolve_columns({:lateral, something}, context) do
resolve_columns(something, context)
end
def resolve_columns({:select, query}, _context) do
{columns, _} =
Enum.reduce(query[:select], {[], 1}, fn column, {acc, count} ->
# Logger.debug("Resolve column name for: #{inspect column}")
column =
case column do
{:column, {_db, _table, column}} ->
{:tmp, :tmp, column}
{:alias, {_, alias_}} ->
{:tmp, :tmp, alias_}
_expr ->
{:tmp, :tmp, "col_#{count}"}
end
# Logger.debug("Resolved: #{inspect column}")
{acc ++ [column], count + 1}
end)
# Logger.debug("Get column from select #{inspect query[:select]}: #{inspect columns}")
columns
end
def resolve_columns(%ExoSQL.Query{} = q, _context) do
get_query_columns(q)
end
def resolve_columns({:columns, columns}, _context) do
columns
end
def get_table_columns({db, table}, all_columns) do
for {^db, ^table, column} <- all_columns, do: column
end
@doc ~S"""
Resolves all known tables in this context. This helps to fully qualify tables.
TODO: could be more efficient by accessing the schemas as little as possible,
but maybe not possible.
"""
def resolve_all_tables(context) do
Enum.flat_map(context, fn
{:with_parsed, _with_} ->
[]
{:with, with_} ->
Map.keys(with_) |> Enum.map(&{:with, &1})
{db, _config} ->
{:ok, tables} = ExoSQL.schema(db, context)
tables |> Enum.map(&{db, &1})
end)
end
@doc ~S"""
Given a table-like tuple, returns the real table names.
The table-like can be a function, a lateral join, or a simple table. It
resolves unknown parts; for example, {:table, {nil, "table"}} will be filled
in with the proper db.
It returns the same form, but with more data, and calling it again will
return the same result (the resolution is idempotent).
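For example (db and table names illustrative):
resolve_table({:table, {nil, "products"}}, [{"A", "products"}], context)
#=> {:table, {"A", "products"}}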
"""
def resolve_table({:table, {nil, name}}, all_tables, _context) when is_binary(name) do
# Logger.debug("Resolve #{inspect name} at #{inspect all_tables}")
options = for {db, ^name} <- all_tables, do: {db, name}
# Logger.debug("Options are #{inspect options}")
case options do
[table] -> {:table, table}
l when l == [] -> raise "Can't find table #{inspect(name)}"
_other -> raise "Ambiguous table name #{inspect(name)}"
end
end
def resolve_table({:table, {_db, _name}} = orig, _all_tables, _context) do
orig
end
def resolve_table({:select, query}, _all_tables, context) do
{:ok, parsed} = real_parse(query, context)
parsed
end
def resolve_table({:fn, _function} = orig, _all_tables, _context), do: orig
def resolve_table({:alias, {table, alias_}}, all_tables, context) do
{:alias, {resolve_table(table, all_tables, context), alias_}}
end
def resolve_table({:lateral, table}, all_tables, context) do
resolved = resolve_table(table, all_tables, context)
{:lateral, resolved}
end
def resolve_table(other, _all_tables, _context) do
Logger.error("Cant resolve table #{inspect(other)}")
# maybe it do not have the type tagged at other ({:table, other}). Typical fail here.
raise "Cant resolve table #{inspect(other)}"
end
@doc ~S"""
Given the list of tables, the context, and an unknown column, return the
fully-qualified name (FQN) of the column.
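For example (names illustrative), the unqualified {:column, {nil, nil, "price"}}
resolved against [{"A", "products", "price"}] becomes:
{:column, {"A", "products", "price"}}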
"""
def resolve_column({:column, {nil, nil, column}}, all_columns, context) do
found =
Enum.filter(all_columns, fn
{_db, _table, ^column} -> true
_other -> false
end)
found =
case found do
[one] ->
{:column, one}
[] ->
parent_schema = Map.get(context, "__parent__", false)
if parent_schema do
{:column, found} =
resolve_column({:column, {nil, nil, column}}, parent_schema, context)
# Logger.debug("Found column from parent #{inspect found}")
{:column, found}
else
raise "Not found #{inspect(column)} in #{inspect(all_columns)}"
end
many ->
raise "Ambiguous column #{inspect(column)} in #{inspect(many)}"
end
if found do
found
else
raise "Not found #{inspect(column)} in #{inspect(all_columns)}"
end
end
def resolve_column({:column, {nil, table, column}}, all_columns, context) do
# Logger.debug("Find #{inspect {nil, table, column}} at #{inspect all_columns} + #{inspect Map.get(context, "__parent__", [])}")
found =
Enum.find(all_columns, fn
{_db, ^table, ^column} -> true
_other -> false
end)
if found do
{:column, found}
else
parent_schema = Map.get(context, "__parent__", [])
if parent_schema != [] do
context = Map.drop(context, ["__parent__"])
case resolve_column({:column, {nil, table, column}}, parent_schema, context) do
{:column, found} ->
{:column, found}
_ ->
throw({:not_found, {table, column}, :in, all_columns})
end
else
throw({:not_found, {table, column}, :in, all_columns})
end
end
end
def resolve_column({:column, _} = column, _schema, _context), do: column
def resolve_column({:op, {op, ex1, ex2}}, all_columns, context) do
{:op,
{op, resolve_column(ex1, all_columns, context), resolve_column(ex2, all_columns, context)}}
end
def resolve_column({:fn, {f, params}}, all_columns, context) do
params = Enum.map(params, &resolve_column(&1, all_columns, context))
{:fn, {f, params}}
end
def resolve_column({:distinct, expr}, all_columns, context) do
{:distinct, resolve_column(expr, all_columns, context)}
end
def resolve_column({:case, list}, all_columns, context) do
# Logger.debug("Resolve case #{inspect list, pretty: true}")
list =
Enum.map(list, fn
{c, e} ->
{resolve_column(c, all_columns, context), resolve_column(e, all_columns, context)}
{e} ->
{resolve_column(e, all_columns, context)}
end)
{:case, list}
end
def resolve_column({:case, expr, list}, all_columns, context) do
expr = resolve_column(expr, all_columns, context)
list =
Enum.map(list, fn
{c, e} ->
{resolve_column(c, all_columns, context), resolve_column(e, all_columns, context)}
{e} ->
{resolve_column(e, all_columns, context)}
end)
{:case, expr, list}
end
def resolve_column({:alias, {expr, alias_}}, all_columns, context) do
{:alias, {resolve_column(expr, all_columns, context), alias_}}
end
def resolve_column({:select, query}, all_columns, context) do
all_columns = all_columns ++ Map.get(context, "__parent__", [])
context = Map.put(context, "__parent__", all_columns)
{:ok, parsed} = real_parse(query, context)
# Logger.debug("Parsed query #{inspect query} -> #{inspect parsed, pretty: true}")
{:select, parsed}
end
def resolve_column({:lateral, expr}, all_columns, context) do
{:lateral, resolve_column(expr, all_columns, context)}
end
def resolve_column(other, _schema, _context) do
other
end
defp get_query_columns(%ExoSQL.Query{select: select, crosstab: nil}) do
get_column_names_or_alias(select, 1)
end
defp get_query_columns(%ExoSQL.Query{select: select, crosstab: :all_columns}) do
# only the first is sure.. the rest too dynamic to know
[hd(get_column_names_or_alias(select, 1))]
end
defp get_query_columns(%ExoSQL.Query{select: select, crosstab: crosstab})
when is_list(crosstab) do
first = hd(get_column_names_or_alias(select, 1))
more = crosstab |> Enum.map(&{:tmp, :tmp, &1})
[first | more]
end
defp get_query_columns({:columns, columns}) do
columns
end
defp get_column_names_or_alias([{:column, column} | rest], count) do
[column | get_column_names_or_alias(rest, count + 1)]
end
defp get_column_names_or_alias([{:alias, {_column, alias_}} | rest], count) do
[{:tmp, :tmp, alias_} | get_column_names_or_alias(rest, count + 1)]
end
defp get_column_names_or_alias([{:fn, {"unnest", [_from | columns]}} | rest], count) do
Enum.map(columns, fn {:lit, name} -> {:tmp, :tmp, name} end) ++
get_column_names_or_alias(rest, count + 1)
end
defp get_column_names_or_alias([_head | rest], count) do
[{:tmp, :tmp, "col_#{count}"} | get_column_names_or_alias(rest, count + 1)]
end
defp get_column_names_or_alias([], _count), do: []
end
|
lib/parser.ex
| 0.647352 | 0.546194 |
parser.ex
|
starcoder
|
defmodule Zippy.ZForest do
@moduledoc """
Zipper forests are a zipper structure where each node is a list of
subtrees. You can iterate over this structure, and thus represent a minimum spanning tree, a DOM, an undo tree, etc.
Adding, replacing and deleting operations are constant time.
This module is a port of <NAME>'s [“Zippers”](https://github.com/ferd/zippers) library, under the MIT licence.
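## Example
iex> forest = Zippy.ZForest.root(:a)
iex> Zippy.ZForest.value(forest)
{:ok, :a}
iex> forest |> Zippy.ZForest.insert(:b) |> Zippy.ZForest.value()
{:ok, :b}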
"""
alias __MODULE__
@typep zlist(a) :: {prev::list(a), next::list(a)}
@typep znode() :: zlist({term(), zlist(term())})
@typep thread() :: [znode()]
@typedoc "A Zipper forest"
@type t() :: {thread(), znode()}
@doc "Create an empty zipper forest with `value` as its first element."
@spec root(term()) :: ZForest.t
def root(value) do
{[],
{[],
[{value,
{[], []}
}
]
}
}
end
@doc """
Extract the node value from the current tree position as `{:ok, value}`.
If there is no item, the function returns `{:error, nil}`
"""
@spec value(ZForest.t) :: {:ok, term()} | {:error, nil}
def value({_thread, {_prev, []}}), do: {:error, nil}
def value({_thread, {_prev, [{value, _children}|_next]}}), do: {:ok, value}
@doc "Replace the node at the `current` position with `value`."
@spec replace(ZForest.t, term()) :: ZForest.t
def replace({thread, {left, [_ | right]}}, value) do
{thread,
{left,
[{value, {[], []}} | right]
}
}
end
@doc "Insert a new node at the `current` position."
@spec insert(ZForest.t, term) :: ZForest.t
def insert({thread, {left, right}}, value) do
{thread,
{left,
[{value, {[], []}} | right]
}
}
end
@doc "Delete the node at the `current position`. The next one on the right will take its place."
@spec delete(ZForest.t) :: ZForest.t
def delete({thread, {left, [_|right]}}) do
{thread, {left, right}}
end
@doc """
Moves to the previous node from the `current` item.
If we are already at the top, this function returns `nil`.
"""
@spec prev(ZForest.t) :: ZForest.t | nil
def prev({_thread, {[], _next}}), do: nil
def prev({thread, {[h|t], right}}) do
{thread, {t, [h|right]}}
end
@doc """
Moves to the next node from the `current` item.
If there is no next node, this function returns `nil`.
"""
@spec next(ZForest.t) :: ZForest.t | nil
def next({_thread, {_prev, []}}), do: nil
def next({thread, {left, [h|t]}}) do
{thread, {[h|left], t}}
end
@doc """
Moves down the forest to the children of the `current` node.
If we are already at the bottom, this function returns `nil`.
"""
@spec down(ZForest.t) :: ZForest.t | nil
def down({_thread, {_left, []}}), do: nil
def down({thread, {left, [{value, children}|right]}}) do
{
[
{left,
[value|right]} | thread],
children
}
end
@doc """
Moves up the forest to the parent of the `current` node, without rewinding the `current` node's child list.
If we are already at the top, this function returns `nil`.
"""
@spec up(ZForest.t) :: ZForest.t | nil
def up({[], _children}), do: nil
def up({[{left, [value|right]}|thread], children}) do
{thread,
{left,
[{value, children}|right]
}
}
end
@doc """
Moves up the forest to the parent of the `current` node, while rewinding the `current` node's child list.
This allows the programmer to access children as if it were the first time, all the time.
If we are already at the top, this function returns `nil`.
"""
@spec rup(ZForest.t) :: ZForest.t | nil
def rup({[], _children}), do: nil
def rup({[{parent_left, [value|parent_right]}|thread], {left, right}}) do
{thread,
{parent_left,
[{value, {[], Enum.reverse(left) ++ right}}|parent_right]
}
}
end
end
|
lib/zippy/ZForest.ex
| 0.816955 | 0.67684 |
ZForest.ex
|
starcoder
|
defmodule Toml do
@moduledoc File.read!(Path.join([__DIR__, "..", "README.md"]))
@type key :: binary | atom | term
@type opt ::
{:keys, :atoms | :atoms! | :strings | (key -> term)}
| {:filename, String.t()}
| {:transforms, [Toml.Transform.t()]}
@type opts :: [opt]
@type reason :: {:invalid_toml, binary} | binary
@type error :: {:error, reason}
@doc """
Decode the given binary as TOML content
## Options
You can pass the following options to configure the decoder behavior:
* `:filename` - pass a filename to use in error messages
* `:keys` - controls how keys in the document are decoded. Possible values are:
* `:strings` (default) - decodes keys as strings
* `:atoms` - converts keys to atoms with `String.to_atom/1`
* `:atoms!` - converts keys to atoms with `String.to_existing_atom/1`
* `(key -> term)` - converts keys using the provided function
* `:transforms` - a list of custom transformations to apply to decoded TOML values,
see `c:Toml.Transform.transform/2` for details.
## Decoding keys to atoms
The `:atoms` option uses the `String.to_atom/1` call that can create atoms at runtime.
Since the atoms are not garbage collected, this can pose a DoS attack vector when used
on user-controlled data. It is recommended that you either avoid converting to atoms,
by using `keys: :strings`, or require known keys, by using the `keys: :atoms!` option,
which will cause decoding to fail if the key is not an atom already in the atom table.
## Transformations
You should rarely need custom datatype transformations, but in some cases it can be quite
useful. In particular if you want to transform things like IP addresses from their string
form to the Erlang address tuples used in most `:inet` APIs, a custom transform can ensure
that all addresses are usable right away, and that validation of those addresses is done as
part of decoding the document.
Keep in mind that transforms add additional work to decoding, which may result in reduced
performance. If you don't need the convenience or the validation, deferring such
conversions until the values are used may be a better approach than incurring the overhead during decoding.
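## Example
A minimal sketch (the TOML snippet is illustrative):
iex> Toml.decode(~s(title = "TOML"))
{:ok, %{"title" => "TOML"}}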
"""
@spec decode(binary) :: {:ok, map} | error
@spec decode(binary, opts) :: {:ok, map} | error
defdelegate decode(bin, opts \\ []), to: __MODULE__.Decoder
@doc """
Same as `decode/1`, but returns the document directly, or raises `Toml.Error` if it fails.
"""
@spec decode!(binary) :: map | no_return
@spec decode!(binary, opts) :: map | no_return
defdelegate decode!(bin, opts \\ []), to: __MODULE__.Decoder
@doc """
Decode the file at the given path as TOML
Takes same options as `decode/2`
"""
@spec decode_file(binary) :: {:ok, map} | error
@spec decode_file(binary, opts) :: {:ok, map} | error
defdelegate decode_file(path, opts \\ []), to: __MODULE__.Decoder
@doc """
Same as `decode_file/1`, but returns the document directly, or raises `Toml.Error` if it fails.
"""
@spec decode_file!(binary) :: map | no_return
@spec decode_file!(binary, opts) :: map | no_return
defdelegate decode_file!(path, opts \\ []), to: __MODULE__.Decoder
@doc """
Decode the given stream as TOML.
Takes same options as `decode/2`
"""
@spec decode_stream(Enumerable.t()) :: {:ok, map} | error
@spec decode_stream(Enumerable.t(), opts) :: {:ok, map} | error
defdelegate decode_stream(stream, opts \\ []), to: __MODULE__.Decoder
@doc """
Same as `decode_stream/1`, but returns the document directly, or raises `Toml.Error` if it fails.
"""
@spec decode_stream!(Enumerable.t()) :: map | no_return
@spec decode_stream!(Enumerable.t(), opts) :: map | no_return
defdelegate decode_stream!(stream, opts \\ []), to: __MODULE__.Decoder
end
|
lib/toml.ex
| 0.929176 | 0.646139 |
toml.ex
|
starcoder
|
defmodule Toby.Data.Applications do
@moduledoc """
Utilities for gathering application data such as the process tree.
"""
alias Toby.Data.Node
alias Toby.Util.Tree
def applications(node) do
{:ok, applications_in_tree(node)}
end
def application(node, app) do
with {:ok, data} <- Node.application(node, app) do
{tree, tree_last_idx} =
node
|> application_process_tree(app)
|> Tree.to_indexed_tree()
app_data =
data
|> Enum.into(%{})
|> Map.merge(%{
name: app,
process_tree: tree,
process_tree_size: tree_last_idx
})
{:ok, app_data}
else
:undefined -> {:ok, nil}
end
end
defp application_process_tree(node, app) do
case application_master(node, app) do
nil -> nil
pid -> process_tree(node, pid, [application_controller(node)])
end
end
defp process_tree(_node, port, _parents) when is_port(port) do
{port, []}
end
defp process_tree(node, pid, parents) when is_pid(pid) do
{:links, links} = Node.process_info(node, pid, :links)
child_pids = links -- parents
children = for child <- child_pids, do: process_tree(node, child, [pid | parents])
case Node.process_info(node, pid, :registered_name) do
{:registered_name, name} -> {name, children}
_ -> {pid, children}
end
end
defp application_controller(node) do
Node.where_is(node, :application_controller)
end
defp application_master(node, app) do
Enum.find(application_masters(node), fn pid ->
case Node.application_by_pid(node, pid) do
{:ok, ^app} -> true
_ -> false
end
end)
end
defp application_masters(node) do
{:links, masters} = Node.process_info(node, application_controller(node), :links)
masters
end
defp applications_in_tree(node) do
Enum.flat_map(application_masters(node), fn pid ->
case Node.application_by_pid(node, pid) do
{:ok, app} -> [app]
_ -> []
end
end)
end
end
|
lib/toby/data/applications.ex
| 0.651798 | 0.408129 |
applications.ex
|
starcoder
|
defmodule Mix.Tasks.Licenses do
@moduledoc """
Lists all dependencies along with a summary of their licenses.
This task checks each entry in dependency package's `:licenses` list against the SPDX License List.
To see details about licenses that are not found in the SPDX list, use `mix licenses.explain`.
## Command line options
* `--osi` - additionally check if all licenses are approved by the [Open Source Initiative](https://opensource.org/licenses)
* `--update` - pull down a fresh copy of the SPDX license list instead of using the version checked in with this tool.
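## Example
An illustrative run (output abbreviated; dependency names are hypothetical):
$ mix licenses
Dependency   Status
toml         all checks passed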
"""
@shortdoc "Lists all dependencies along with a summary of their licenses."
use Mix.Task
alias HexLicenses.Rule.{Deprecation, OSIApproval, SPDXListed}
alias HexLicenses.Rule
@impl Mix.Task
def run(args) do
license_list =
if "--update" in args do
HexLicenses.SPDX.fetch_licenses()
|> HexLicenses.SPDX.parse_licenses()
else
HexLicenses.SPDX.licenses()
end
checks = [
SPDXListed.new(license_list),
Deprecation.new(license_list)
]
checks =
if "--osi" in args do
[OSIApproval.new(license_list) | checks]
else
checks
end
results =
HexLicenses.license_check(checks)
|> Map.new(fn {dep, results} ->
{dep, summarize_all(results)}
end)
first_column_width =
Map.keys(results)
|> Enum.map(&to_string/1)
|> Enum.map(&String.length/1)
|> Enum.max(fn -> 0 end)
|> max(String.length("Dependency"))
|> Kernel.+(2)
rows =
Enum.sort_by(results, fn {dep, _summary} -> to_string(dep) end)
|> Enum.map(fn {dep, summary} ->
dep = String.pad_trailing(to_string(dep), first_column_width)
[dep, summary]
end)
|> Enum.map(&IO.ANSI.format/1)
header =
IO.ANSI.format([:faint, String.pad_trailing("Dependency", first_column_width), "Status"])
shell = Mix.shell()
shell.info(header)
Enum.each(rows, &shell.info/1)
end
defp summarize_all(results) do
if Enum.all?(results, &Rule.pass?/1) do
IO.ANSI.format([:green, "all checks passed"])
else
str =
Enum.map(results, &Rule.failure_summary/1)
|> Enum.join(", ")
IO.ANSI.format([:red, str])
end
end
end
|
lib/mix/tasks/licenses.ex
| 0.820829 | 0.435421 |
licenses.ex
|
starcoder
|
defmodule Veritaserum do
@moduledoc """
Sentiment analysis based on AFINN-165, emojis and some enhancements.
Also supports:
- emojis (❤️, 😱...)
- boosters (*very*, *really*...)
- negators (*don't*, *not*...).
"""
alias Veritaserum.Evaluator
@supported_languages ["pt", "es", "en"]
@doc """
Returns the list of supported languages.
iex> Veritaserum.supported_languages()
["pt", "es", "en"]
"""
@spec supported_languages() :: list(String.t())
def supported_languages(), do: @supported_languages
@doc """
Returns the sentiment value for the given text.
`lang` can be used to specify the language of the text (see `supported_languages/0` for the available languages)
iex> Veritaserum.analyze(["I ❤️ Veritaserum", "Veritaserum is really awesome"])
[3, 5]
iex> Veritaserum.analyze("I love Veritaserum")
3
"""
@spec analyze(String.t() | list(String.t()), String.t()) :: integer | list(integer)
def analyze(input, lang \\ "en")
def analyze(input, lang) when is_list(input) and lang in @supported_languages do
Enum.map(input, &analyze(&1, lang))
end
def analyze(input, lang) when is_bitstring(input) and lang in @supported_languages do
input
|> clean
|> String.split()
|> Enum.map(&mark_word(&1, lang))
|> get_score()
|> round()
end
def analyze(_, _), do: nil
@doc """
Returns a tuple of the sentiment value and the metadata for the given text.
`lang` can be used to specify the language of the text (see `supported_languages/0` for the available languages)
iex> Veritaserum.analyze_with_metadata("I love Veritaserum")
{3, [{:neutral, 0, "i"}, {:word, 3, "love"}, {:neutral, 0, "veritaserum"}]}
"""
@spec analyze_with_metadata(String.t(), String.t()) :: {number(), [{atom, number, String.t()}]}
def analyze_with_metadata(input, lang \\ "en")
def analyze_with_metadata(input, lang)
when is_bitstring(input) and lang in @supported_languages do
list_with_marks =
input
|> clean
|> String.split()
|> Enum.map(&mark_word(&1, lang))
score = get_score(list_with_marks)
{score, list_with_marks}
end
def analyze_with_metadata(_, _), do: nil
# Mark every word in the input with type and score
defp mark_word(word, lang) do
with {_, nil, _} <- {:negator, Evaluator.evaluate_negator(word, lang), word},
{_, nil, _} <- {:booster, Evaluator.evaluate_booster(word, lang), word},
{_, nil, _} <- {:emoticon, Evaluator.evaluate_emoticon(word), word},
{_, nil, _} <- {:word, Evaluator.evaluate_word(word, lang), word},
do: {:neutral, 0, word}
end
# Compute the score from a list of marked words
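# For example (tuples as produced by mark_word/2):
# [{:booster, 1, "really"}, {:word, 4, "awesome"}] scores 4 + 1 = 5, and
# [{:negator, 1, "not"}, {:word, 3, "love"}] scores -3.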
defp get_score(words) do
[List.first(words) | words]
|> Stream.chunk_every(2, 1)
|> Stream.map(fn pair ->
# TODO Completely rethink this to allow higher proximity between boosters/negators and the words
case pair do
[{:negator, _, _}, {:word, score, _}] ->
-score
[{:booster, booster_score, _}, {:word, word_score, _}] ->
if word_score > 0, do: word_score + booster_score, else: word_score - booster_score
[_, {type, score, _}] when type in [:word, :emoticon] ->
score
_ ->
0
end
end)
|> Enum.sum()
end
# Clean and sanitize the input text
defp clean(text) do
# Returns the minor version of Elixir, i.e. 8 for version 1.8.x
ver = System.version() |> String.split(".") |> Enum.at(1) |> String.to_integer()
cleaned_text =
text
|> String.replace(~r/\n/, " ")
|> String.downcase()
|> String.replace(~r/[.,\/#!$%\^&\*;:{}=_`\"~()]/, " ")
if ver >= 9 do
String.replace(cleaned_text, Evaluator.emoticon_list(), fn match -> " #{match} " end)
else
String.replace(cleaned_text, Evaluator.emoticon_list(), " ", insert_replaced: 1)
end
|> String.replace(~r/ {2,}/, " ")
end
end
|
lib/veritaserum.ex
| 0.684686 | 0.433142 |
veritaserum.ex
|
starcoder
|
defmodule HideInPng do
@moduledoc """
## Description
HideInPng is an Elixir module for hiding files within
PNG images.
## Examples
iex(2)> c "hideinpng.ex"
[HideInPng]
iex(3)> HideInPng.encode("imgs/dice.png", "imgs/mushroom.png")
:ok
iex(4)> HideInPng.decode("imgs/dice.png", "imgs/decoded.png")
:ok
## Author
<NAME>
github.com/corey-p
"""
def encode(target_path, payload_path) do
# Open target and payload files
{:ok, target_file} = File.open(target_path, [:read, :write])
{:ok, payload_file} = File.open(payload_path, [:read])
start(target_file, payload_file, "encode")
end
def decode(target_path, destination_path) do
# Open target and destination files
{:ok, target_file} = File.open(target_path, [:read])
{:ok, destination_file} = File.open(destination_path, [:write])
start(target_file, destination_file, "decode")
end
defp start(target, payload, mode) do
# Parse out the IHDR chunk
<<
0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A,
length :: size(32),
"IHDR",
_width :: size(32),
_height :: size(32),
_bit_depth,
_color_type,
_compression_method,
_filter_method,
_interlace_method,
_crc :: size(32),
chunks :: binary
>> = IO.binread(target, :all)
# Recursively iterate over remaining chunks
read_chunks(target, payload, chunks, mode, length)
File.close(target)
File.close(payload)
end
defp read_chunks(_, _, <<>>, _, _), do: 1
defp read_chunks(target, payload, <<
length :: size(32),
chunk_type :: binary - size(4),
chunk_data :: binary - size(length),
_crc :: size(32),
chunks :: binary
>>, mode, acc) do
cond do
mode == "encode" and chunk_type == "IEND" ->
inject_payload(target, payload, acc)
mode == "decode" and chunk_type == "PAYL" ->
IO.binwrite(payload, chunk_data)
true -> read_chunks(target, payload, chunks, mode, acc + length)
end
end
defp inject_payload(target, payload, position) do
# Read target to position
IO.binread(target, position)
# Build binary to inject
payload_binary = IO.binread(payload, :all)
package_binary = <<byte_size(payload_binary) :: size(32)>> <> "PAYL" <> payload_binary <> "FCRC"
# Write to target
IO.binwrite(target, package_binary)
# Write the end chunk
end_chunk_binary = <<0 :: size(32)>> <> "IEND"
IO.binwrite(target, end_chunk_binary)
end
end
|
hideinpng.ex
| 0.617628 | 0.442817 |
hideinpng.ex
|
starcoder
|
defmodule HtmlBuilder do
@moduledoc """
The module holds the logic for building the HTML tree.
Instead of creating and injecting functions for each tag
into the caller's module, it post-walks the AST and changes
nodes that represent an HTML tag. The nodes will be changed to
call the `tag` macro with the appropriate
tag name, optional attributes and body.
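For example, a sketch of the resulting DSL (assumes `div` is listed in
resources/tags.txt):
markup do
  div class: "greeting" do
    text "Hello"
  end
end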
"""
@external_resource tags_path = Path.join([__DIR__, "resources", "tags.txt"])
@external_resource entities_path = Path.join([__DIR__, "resources", "entities.txt"])
@tags (for line <- File.stream!(tags_path, [], :line) do
line |> String.trim |> String.to_atom
end)
@html_entities (for line <- File.stream!(entities_path, [], :line), into: %{} do
[decoded, encoded] = line |> String.split(",") |> Enum.map(&String.trim(&1))
{decoded, encoded }
end)
defmacro markup(do: block) do
quote do
# Avoid clash with div macro for <div> tag
import Kernel, except: [div: 2]
{:ok, var!(buffer, Html)} = start_buffer([])
{:ok, var!(whitespaces, Html)} = start_buffer([])
unquote(Macro.postwalk(block, &postwalk/1))
result = render(var!(buffer, Html))
[:ok, :ok] = stop_buffers([var!(buffer, Html), var!(whitespaces, Html)])
result
end
end
def postwalk({:text, _meta, [string]}) do
sanitized = string |> to_string |> sanitize
quote do: put_buffer(var!(buffer, Html), " #{concate(var!(whitespaces, Html))}#{unquote(sanitized)}")
end
def postwalk({tag_name, _meta, [[do: inner]]}) when tag_name in @tags do
quote do: tag(unquote(tag_name), [], do: unquote(inner))
end
def postwalk({tag_name, _meta, [attrs, [do: inner]]}) when tag_name in @tags do
quote do: tag(unquote(tag_name), unquote(attrs), do: unquote(inner))
end
def postwalk(ast), do: ast
defmacro tag(name, attrs, do: inner) do
quote do
put_buffer(var!(buffer, Html), open_tag(unquote(name), concate(var!(whitespaces, Html)), unquote(attrs)))
put_buffer(var!(whitespaces, Html), " ")
unquote(inner)
pop_buffer(var!(whitespaces, Html))
put_buffer(var!(buffer, Html), end_tag(unquote(name), concate(var!(whitespaces, Html))))
end
end
def open_tag(name, whitespaces, []), do: "#{whitespaces}<#{name}>\n"
def open_tag(name, whitespaces, attrs) do
attr_html = for {key, val} <- attrs, into: "", do: " #{key}=\"#{val}\""
"#{whitespaces}<#{name}#{attr_html}>\n"
end
def end_tag(name, whitespaces), do: "\n#{whitespaces}</#{name}>"
# Need to find a better way to dynamically create the regex using keys of @html_entities
def sanitize(string), do: Regex.replace(~r/(<|>|&|"|'|¢|£|¥|€|©|®)/, string, fn e -> "#{encode_html_entity(e)}" end)
def start_buffer(state), do: Agent.start_link(fn -> state end)
def put_buffer(buff, content), do: Agent.update(buff, &[content | &1])
def pop_buffer(buff), do: Agent.update(buff, &List.delete_at(&1, 0))
def stop_buffers(buffs), do: buffs |> Enum.map(&Agent.stop/1)
def concate(buff), do: render(buff)
def render(buff), do: Agent.get(buff, &(&1)) |> Enum.reverse |> Enum.join("")
def encode_html_entity(entity), do: @html_entities[to_string(entity)]
end
|
lib/html_builder.ex
| 0.592077 | 0.422803 |
html_builder.ex
|
starcoder
|
defmodule FlowAssertions.AssertionA do
@moduledoc """
Assertions used to test other assertions.
These assertions do not work on the textual output of an assertion
error. Instead, they inspect the internals of the
`ExUnit.AssertionError` structure. Here is an example:
```elixir
assertion_fails(
"some text"
[left: "something"],
fn ->
assert_no_value(data, :something_field)
end)
```
The first argument ("some text") describes the expected `:message` field.
The next line describes the value expected in the `:left` field.
As far as I know, the `ExUnit.AssertionError` isn't guaranteed to
stay stable, so beware. As of August 2020, these are the fields
you can use:
    defexception left: @no_value,
      right: @no_value,
      message: @no_value,
      expr: @no_value,
      args: @no_value,
      doctest: @no_value,
      context: :expr
"""
import ExUnit.Assertions
alias ExUnit.AssertionError
import FlowAssertions.Define.Defchain
alias FlowAssertions.MapA
@doc """
Check if an assertion fails in the expected way.
The assertion must be wrapped in a function:
```elixir
fn ->
assert_no_value(data, :something_field)
end)
```
A typical call looks like this:
```elixir
assertion_fails(
"some text"
[left: "something"],
fn -> ... end)
```
The first argument is the expected `:message` field in the
`ExUnit.AssertionError` structure.
The second argument is a keyword list describing the other fields of
the `AssertionError` structure.
The `:message` value and the other field values will be compared using
`FlowAssertions.MiscA.good_enough?/2`. So, for example, the check of the
error message need not be exact:
```elixir
assertion_fails(
~r/some text.*final text/,
...
```
If the assertion correctly fails, the return value is the
`ExUnit.AssertionError` the assertion created.
"""
def assertion_fails(message, kws \\ [], f) do
assert_raise(AssertionError, f)
|> MapA.assert_fields(kws ++ [message: message])
end
@doc """
A variant of `assertion_fails/3` that is useful in chaining.
This assertion is tailored for a function that is passed through a pipeline
and exposed to different arguments at each stage:
```elixir
msg = "..."
(&error_content/1) # <= parentheses are important
|> assertion_fails_for(:error, msg)
|> assertion_fails_for({:ok, "content"}, msg)
```
In the above case, `FlowAssertions.MiscA.error_content/1` is exposed to
two different values. Both should cause an `ExUnit.AssertionError` with a
particular `:message`.
A third argument could describe other `ExUnit.AssertionError`
fields, as in `assertion_fails/3`.
Note the parentheses around `error_content/1`. They are required by
Elixir's precedence rules: `&` binds more loosely than `|>`. If you
leave the parentheses off, the expression turns into a single
function that's never executed - meaning that the test can never
fail, no matter how broken the code is.
"""
defchain assertion_fails_for(under_test, left, message, kws \\ []) do
assert_raise(AssertionError, fn -> under_test.(left) end)
|> MapA.assert_fields(kws ++ [message: message, left: left])
end
end
|
lib/assertion_a.ex
| 0.930482 | 0.97372 |
assertion_a.ex
|
starcoder
|
defmodule Commodity.ChannelCase do
@moduledoc """
This module defines the test case to be used by
channel tests.
Such tests rely on `Phoenix.ChannelTest` and also
import other functionality to make it easier
to build common data structures and query the data layer.
Finally, if the test case interacts with the database,
it cannot be async. For this reason, every test runs
inside a transaction which is reset at the beginning
of the test unless the test case is marked as async.
"""
use ExUnit.CaseTemplate
import Ecto.Query
alias Commodity.Repo
alias Commodity.Api.Iam.User
alias Commodity.Api.Util.JWTView
using do
quote do
# Import conveniences for testing with channels
use Phoenix.ChannelTest
alias Commodity.Repo
alias Commodity.Api.Iam.User
alias Commodity.Api.Iam.AccessControl.PermissionSet
alias Commodity.Api.Iam.AccessControl.PermissionSetGrant
# The default endpoint for testing
@endpoint Commodity.Endpoint
import Commodity.ChannelCase
end
end
setup tags do
:ok = Ecto.Adapters.SQL.Sandbox.checkout(Commodity.Repo)
unless tags[:async] do
Ecto.Adapters.SQL.Sandbox.mode(Commodity.Repo, {:shared, self()})
end
cond do
tags[:login] == :user ->
query = from u in User,
join: up in User.Passphrase,
on: u.id == up.user_id,
limit: 1,
order_by: [asc: u.id],
select: {u, up}
{user, passphrase} = Repo.one!(query)
user = assign_token(user, passphrase)
{:ok, user: user}
true ->
:ok
end
end
defp assign_token(user = %User{}, passphrase) do
jwk = %{
"kty" => "oct",
"k" => Keyword.fetch!(Application.get_env(:commodity, :jwk), :secret_key_base)
}
jws = %{
"alg" => "HS256",
"typ" => "JWT"
}
issuer = Keyword.fetch!(Application.get_env(:commodity, :jwt), :iss)
expire =
:os.system_time(:seconds) + Keyword.fetch!(Application.get_env(:commodity, :jwt), :exp)
payload = %{"iss" => issuer,
"exp" => expire,
"sub" => "access"}
jwt =
payload
|> Map.merge(JWTView.render("jwt.json", %{user: user, passphrase: passphrase}))
{_, token} =
JOSE.JWT.sign(jwk, jws, jwt)
|> JOSE.JWS.compact()
user =
Map.put(user, :jwt, token)
user
end
end
|
test/support/channel_case.ex
| 0.702938 | 0.438485 |
channel_case.ex
|
starcoder
|
defmodule Transmission do
use ExActor.GenServer
alias Transmission.Api
alias Transmission.TorrentAdd
alias Transmission.TorrentGet
alias Transmission.TorrentReannounce
alias Transmission.TorrentRemove
alias Transmission.TorrentStart
alias Transmission.TorrentStartNow
alias Transmission.TorrentStop
alias Transmission.TorrentVerify
defstart start_link(url, username, password) do
initial_state(%{
tesla: Api.new(url, username, password),
token: nil
})
end
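# Illustrative usage (URL and credentials are hypothetical). ExActor's defcall
# generates interface functions that take the server pid as their first argument:
#
# {:ok, pid} = Transmission.start_link("http://localhost:9091/transmission/rpc", "admin", "secret")
# torrents = Transmission.get_torrents(pid)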
defcall get_torrents(ids \\ nil), state: state do
{token, %{torrents: torrents}} =
Api.execute_method(state.tesla, state.token, TorrentGet.method(ids))
set_and_reply(%{state | token: token}, torrents)
end
defcall stop_torrents(ids \\ nil), state: state do
{token, %{}} = Api.execute_method(state.tesla, state.token, TorrentStop.method(ids))
set_and_reply(%{state | token: token}, nil)
end
defcall start_torrents(ids \\ nil), state: state do
{token, %{}} = Api.execute_method(state.tesla, state.token, TorrentStart.method(ids))
set_and_reply(%{state | token: token}, nil)
end
defcall start_now_torrents(ids \\ nil), state: state do
{token, %{}} = Api.execute_method(state.tesla, state.token, TorrentStartNow.method(ids))
set_and_reply(%{state | token: token}, nil)
end
defcall verify_torrents(ids \\ nil), state: state do
{token, %{}} = Api.execute_method(state.tesla, state.token, TorrentVerify.method(ids))
set_and_reply(%{state | token: token}, nil)
end
defcall reannounce_torrents(ids \\ nil), state: state do
{token, %{}} = Api.execute_method(state.tesla, state.token, TorrentReannounce.method(ids))
set_and_reply(%{state | token: token}, nil)
end
defcall add_torrent(options), state: state do
{token, id} =
case Api.execute_method(state.tesla, state.token, TorrentAdd.method(options)) do
{token, %{"torrent-added": %{id: id}}} -> {token, id}
{token, %{"torrent-duplicate": %{id: id}}} -> {token, id}
end
set_and_reply(%{state | token: token}, id)
end
defcall remove_torrent(ids, delete_local_data \\ false), state: state do
{token, %{}} =
Api.execute_method(state.tesla, state.token, TorrentRemove.method(ids, delete_local_data))
set_and_reply(%{state | token: token}, nil)
end
end
|
lib/transmission.ex
| 0.550607 | 0.428981 |
transmission.ex
|
starcoder
|
defmodule RTQueue do
@moduledoc """
An Elixir real-time queue implementation.
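## Example
iex> q = RTQueue.new() |> RTQueue.push(1) |> RTQueue.push(2)
iex> RTQueue.front(q)
1
iex> q |> RTQueue.pop() |> RTQueue.front()
2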
"""
@typedoc """
The state type in the RTQueue module represents the state of the incremental rebuild (reverse, then concat) of the queue's front list.
"""
@type state :: :Empty
| {:Reverse, non_neg_integer, list, list, list, list}
| {:Concat, non_neg_integer, list, list}
| {:Done, list}
@typedoc """
The q type stands for the realtime data for a RTQueue.
"""
@type q :: {list, non_neg_integer, state, list, non_neg_integer}
defstruct realtime: {[], 0, :Empty, [], 0}
@typedoc """
The t type stands for the RTQueue.
"""
@type t :: %RTQueue{realtime: q}
@doc """
Return an empty queue.
"""
@spec new() :: t
def new(), do: %RTQueue{realtime: {[], 0, :Empty, [], 0}}
@doc """
Return true when the queue is empty, false when the queue is not empty.
"""
@spec empty?(t) :: boolean
def empty?(queue) do
{_, lenf, _, _, _} = queue.realtime
lenf == 0
end
@spec next(state) :: state
defp next({:Reverse, n, [x | f], fp, [y | r], rp}) do
{:Reverse, (n + 1), f, [x | fp], r, [y | rp]}
end
defp next({:Reverse, n, [], fp, [y], rp}) do
{:Concat, n, fp, [y | rp]}
end
defp next({:Concat, 0, _, acc}) do
{:Done, acc}
end
defp next({:Concat, n, [x | fp], acc}) do
{:Concat, (n - 1), fp, [x | acc]}
end
defp next(s), do: s
@spec abort(state) :: state
defp abort({:Concat, 0, _, [_ | acc]}) do
{:Done, acc}
end
defp abort({:Concat, n, fp, acc}) do
{:Concat, (n - 1), fp, acc}
end
defp abort({:Reverse, n, f, fp, r, rp}) do
{:Reverse, (n - 1), f, fp, r, rp}
end
defp abort(s), do: s
@spec step(list, non_neg_integer, state, list, non_neg_integer) :: t
defp step(f, lenf, s, r, lenr) do
sp =
if Enum.empty?(f) do
s |> next() |> next()
else
s |> next()
end
case sp do
{:Done, fp} -> %RTQueue{realtime: {fp, lenf, :Empty, r, lenr}}
sp -> %RTQueue{realtime: {f, lenf, sp, r, lenr}}
end
end
@spec balance(list, non_neg_integer, state, list, non_neg_integer) :: t
defp balance(f, lenf, s, r, lenr) do
cond do
lenr <= lenf -> step(f, lenf, s, r, lenr)
true -> step(f, lenf + lenr, {:Reverse, 0, f, [], r, []}, [], 0)
end
end
@doc """
Push an element to the back of a queue.
"""
@spec push(t, any) :: t
def push(queue, x) do
{f, lenf, s, r, lenr} = queue.realtime
balance(f, lenf, s, [x | r], (lenr + 1))
end
@doc """
Pop the front element of a queue.
"""
@spec pop(t) :: t
def pop(queue) do
{[_ | f], lenf, s, r, lenr} = queue.realtime
balance(f, lenf - 1, abort(s), r, lenr)
end
@doc """
Get the front element of a queue.
"""
@spec front(t) :: any
def front(queue) do
{[x | _], _, _, _, _} = queue.realtime
x
end
@doc """
Return the size of a queue.
"""
@spec size(t) :: non_neg_integer
def size(queue) do
{_, lenf, _, _, lenr} = queue.realtime
lenf + lenr
end
defimpl Inspect do
def inspect(queue, _opts \\ []) do
case RTQueue.empty?(queue) do
false -> Inspect.Algebra.concat([
"#RTQueue<[",
"size: " <> to_string(RTQueue.size(queue)),
", front: " <> Kernel.inspect(RTQueue.front(queue)),
"]>"
])
true -> "Empty #RTQueue"
end
end
end
end
defmodule FQueue do
@moduledoc """
An Elixir queue implementation using a finger tree.
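## Example (assumes the FList.FTree dependency is available)
iex> q = FQueue.new() |> FQueue.push(:a) |> FQueue.push(:b)
iex> FQueue.front(q)
:a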
"""
require FList.FTree
defstruct tree: :Empty
@type t :: %FQueue{tree: FList.FTree.t}
@doc """
Return an empty queue.
"""
@spec new() :: t
def new(), do: %FQueue{tree: :Empty}
@doc """
Return the size of a queue.
"""
@spec size(t) :: non_neg_integer
def size(queue), do: FList.FTree.sizeT(queue.tree)
@doc """
Return true when the queue is empty, false when the queue is not empty.
"""
@spec empty?(t) :: boolean
def empty?(queue), do: queue.tree == :Empty
@doc """
Get the front element of a queue.
"""
@spec front(t) :: any
def front(queue), do: FList.FTree.head(queue.tree)
@doc """
Get the back element of a queue.
"""
@spec back(t) :: any
def back(queue), do: FList.FTree.last(queue.tree)
@doc """
Push an element to the back of a queue.
"""
@spec push(t, any) :: t
def push(queue, element) do
%FQueue{queue | tree: queue.tree |> FList.FTree.snoc(element)}
end
@doc """
Pop the front element of a queue.
"""
@spec pop(t) :: t
def pop(queue) do
%FQueue{queue | tree: queue.tree |> FList.FTree.tail()}
end
defimpl Inspect do
def inspect(queue, _opts \\ []) do
case FQueue.empty?(queue) do
false -> Inspect.Algebra.concat([
"#FQueue<[",
"size: " <> to_string(FQueue.size(queue)),
", front: " <> Kernel.inspect(FQueue.front(queue)),
"]>"
])
true -> "Empty #FQueue"
end
end
end
end
defmodule LQueue do
@moduledoc """
An Elixir lazy queue implementation using Stream.
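## Example
iex> q = LQueue.new() |> LQueue.push(1) |> LQueue.push(2)
iex> LQueue.front(q)
1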
"""
defstruct stream: %Stream{Stream.__struct__ | enum: []}, size: 0
@type t :: %LQueue{stream: Enumerable.t}
@doc """
Returns an empty queue by default, or wraps the given stream `s` of length `len`.
"""
@spec new(Enumerable.t, non_neg_integer) :: t
def new(s \\ nil, len \\ 0) do
case s do
nil -> %LQueue{stream: Stream.concat([]), size: len}
s -> %LQueue{stream: s, size: len}
end
end
@doc """
Push an element to the back of a queue.
"""
@spec push(t, any) :: t
def push(queue, ele) do
Stream.concat(queue.stream, [ele]) |> new(queue.size + 1)
end
@doc """
Get the front element of a queue.
"""
@spec front(t) :: any
def front(queue) do
cond do
queue.size > 0 -> Enum.fetch!(queue.stream, 0)
true -> :error
end
end
@doc """
Pop the front element of a queue.
"""
@spec pop(t) :: t
def pop(queue, num \\ 1) do
cond do
queue.size >= num -> Stream.drop(queue.stream, num) |> new(queue.size - num)
true -> :error
end
end
@doc """
Return true when the queue is empty, false when the queue is not empty.
"""
@spec empty?(t) :: boolean
def empty?(queue) do
queue.size == 0
end
@doc """
Return the size of a queue.
"""
@spec size(t) :: non_neg_integer
def size(queue) do
queue.size
end
defimpl Inspect do
def inspect(queue, _opts \\ []) do
case LQueue.empty?(queue) do
false -> Inspect.Algebra.concat([
"#LQueue<[",
"size: " <> to_string(LQueue.size(queue)),
", front: " <> Kernel.inspect(LQueue.front(queue)),
"]>"
])
true -> "Empty #LQueue"
end
end
end
end
|
lib/exqueue.ex
| 0.776114 | 0.497681 |
exqueue.ex
|
starcoder
|
defmodule Day21 do
@weapons [%{cost: 8, damage: 4},
%{cost: 10, damage: 5},
%{cost: 25, damage: 6},
%{cost: 40, damage: 7},
%{cost: 74, damage: 8}]
@armor [%{cost: 13, armor: 1},
%{cost: 31, armor: 2},
%{cost: 53, armor: 3},
%{cost: 75, armor: 4},
%{cost: 102, armor: 5}]
@rings [%{cost: 25, damage: 1},
%{cost: 50, damage: 2},
%{cost: 100, damage: 3},
%{cost: 20, armor: 1},
%{cost: 40, armor: 2},
%{cost: 80, armor: 3}]
def part1(boss) do
all_combinations()
|> Enum.map(fn items ->
item = consolidate(items)
{Map.fetch!(item, :cost), item}
end)
|> Enum.sort
|> Enum.dedup
|> Enum.drop_while(fn {_, you} ->
you = Map.put(you, :hit_points, 100)
play(you, boss) == :boss end)
|> Enum.take(1)
|> hd
|> elem(0)
end
def part2(boss) do
all_combinations()
|> Enum.map(fn items ->
item = consolidate(items)
{Map.fetch!(item, :cost), item}
end)
|> Enum.sort
|> Enum.dedup
|> Enum.reverse
|> Enum.drop_while(fn {_, you} ->
you = Map.put(you, :hit_points, 100)
play(you, boss) == :you end)
|> Enum.take(1)
|> hd
|> elem(0)
end
def play(you, boss) do
with {:ok, boss} <- hit(you, boss, :you),
{:ok, you} <- hit(boss, you, :boss)
do
play(you, boss)
end
end
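# For example (hypothetical items), consolidate/1 sums each stat across the
# chosen equipment:
# consolidate([%{cost: 8, damage: 4}, %{cost: 13, armor: 1}])
# #=> %{cost: 21, damage: 4, armor: 1}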
defp consolidate(items) do
initial = %{cost: 0, damage: 0, armor: 0}
Enum.reduce(items, initial, fn item, acc ->
Enum.reduce(item, acc, fn {key, value}, acc ->
Map.update!(acc, key, &(&1 + value))
end)
end)
end
defp hit(attacker, defender, winner) do
%{damage: damage} = attacker
%{hit_points: points, armor: armor} = defender
damage = max(damage - armor, 1)
points = points - damage
defender = %{defender | hit_points: points}
if points <= 0 do
winner
else
{:ok, defender}
end
end
defp all_combinations() do
weapon_armor =
Enum.flat_map(@weapons, fn weapon ->
Enum.flat_map(@armor, fn armor ->
[[weapon, armor]]
end)
end)
combinations =
Enum.reduce(@weapons, weapon_armor, fn weapon, acc ->
[[weapon] | acc]
end)
Enum.reduce(combinations, combinations, fn combination, acc ->
Enum.reduce(@rings, acc, fn ring, acc ->
one_ring = [ring | combination]
acc = [one_ring | acc]
Enum.reduce(@rings -- [ring], acc, fn ring, acc ->
[[ring | one_ring] | acc]
end)
end)
end)
end
end
|
day21/lib/day21.ex
| 0.576304 | 0.628906 |
day21.ex
|
starcoder
|
defmodule Framebuffer do
@moduledoc """
Abstraction over Linux framebuffer devices.
## Linux headers
- [linux/fb.h](https://github.com/torvalds/linux/blob/master/include/linux/fb.h)
- [uapi/linux/fb.h](https://github.com/torvalds/linux/blob/master/include/uapi/linux/fb.h)
## Linux documentation
- [fb device API](https://github.com/torvalds/linux/blob/master/Documentation/fb/api.rst)
- [fb driver API](https://github.com/torvalds/linux/blob/master/Documentation/driver-api/frame-buffer.rst)
"""
defstruct [
:ref,
:fix_screeninfo,
:var_screeninfo
]
@type t() :: %__MODULE__{
ref: reference(),
fix_screeninfo: Framebuffer.Screeninfo.Fix.t(),
var_screeninfo: Framebuffer.Screeninfo.Var.t()
}
@type device_t() :: Path.t()
@type pixel_t() :: {x(), y(), color()}
@type x() :: non_neg_integer()
@type y() :: non_neg_integer()
@typedoc "Color: {red, green, blue}"
@type color() :: {non_neg_integer(), non_neg_integer(), non_neg_integer()}
@doc """
Opens a framebuffer device and returns an `:ok` tuple with a `t:Framebuffer.t/0`
including fixed and variable device information and a reference to an open
file descriptor. This file descriptor is kept open for the lifetime of the
reference.
## Arguments
| parameter | required | default |
| --------- | -------- | -------- |
| device | false | /dev/fb0 |
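## Example
An illustrative call (requires an accessible framebuffer device):
{:ok, fb} = Framebuffer.open("/dev/fb0")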
"""
@spec open(device_t()) :: {:ok, Framebuffer.t()} | {:error, term()}
def open(device \\ "/dev/fb0"), do: Framebuffer.NIF.open(device)
@doc """
Optimistic version of `open/1`. Raises on any error.
"""
@spec open!(device_t()) :: Framebuffer.t() | no_return()
def open!(device \\ "/dev/fb0") do
case Framebuffer.NIF.open(device) do
{:ok, framebuffer} -> framebuffer
{:error, error} -> raise(error)
end
end
@doc """
Given an open framebuffer, refresh its fixed and variable device information.
## Arguments
| parameter | required | default |
| ----------- | -------- | ------- |
| framebuffer | true | |
"""
@spec info(Framebuffer.t()) :: {:ok, Framebuffer.t()} | {:error, term()}
def info(framebuffer), do: Framebuffer.NIF.info(framebuffer)
@spec info!(Framebuffer.t()) :: Framebuffer.t() | no_return()
def info!(framebuffer) do
case Framebuffer.NIF.info(framebuffer) do
{:ok, framebuffer} -> framebuffer
{:error, error} -> raise(error)
end
end
defimpl String.Chars do
def to_string(framebuffer) do
var = framebuffer.var_screeninfo
"""
mode "#{var.xres}x#{var.yres}"
geometry #{var.xres} #{var.yres} #{var.xres_virtual} #{var.yres_virtual} #{var.bits_per_pixel}
timings #{var.pixclock} #{var.left_margin} #{var.right_margin} #{var.upper_margin} #{var.lower_margin} #{var.hsync_len} #{var.vsync_len}
nonstd #{var.nonstd}
rgba #{var.red},#{var.green},#{var.blue},#{var.transp}
endmode
"""
end
end
@spec put_pixel(Framebuffer.t(), x(), y(), color()) :: :ok | {:error, term()}
def put_pixel(framebuffer, x, y, color) do
cond do
x >= framebuffer.var_screeninfo.xres ->
{:error, "out of bounds"}
y >= framebuffer.var_screeninfo.yres ->
{:error, "out of bounds"}
true ->
Framebuffer.NIF.put_pixel(framebuffer, x, y, color)
end
end
@spec clear(Framebuffer.t()) :: {:ok, Framebuffer.t()}
def clear(framebuffer) do
zero = {0, 0, 0}
framebuffer
|> to_stream()
|> Stream.each(fn {x, y} ->
:ok = Framebuffer.NIF.put_pixel(framebuffer, x, y, zero)
end)
|> Stream.run()
{:ok, framebuffer}
end
@spec rand(Framebuffer.t()) :: {:ok, Framebuffer.t()}
def rand(framebuffer) do
framebuffer
|> to_stream()
|> Stream.each(fn {x, y} ->
color = {:rand.uniform(256) - 1, :rand.uniform(256) - 1, :rand.uniform(256) - 1}
:ok = Framebuffer.NIF.put_pixel(framebuffer, x, y, color)
end)
|> Stream.run()
{:ok, framebuffer}
end
defp to_stream(framebuffer) do
xlimit = framebuffer.var_screeninfo.xres - 1
ylimit = framebuffer.var_screeninfo.yres - 1
Stream.unfold(
{0, 0},
fn
# stop once the final pixel has been emitted
nil ->
nil
# emit the bottom-right pixel, then stop on the next step
{^xlimit, ^ylimit} = prev ->
{prev, nil}
{^xlimit, y} = prev ->
next = {0, y + 1}
{prev, next}
{x, y} = prev ->
next = {x + 1, y}
{prev, next}
end
)
end
end
|
lib/framebuffer.ex
| 0.872266 | 0.554048 |
framebuffer.ex
|
starcoder
|
defmodule Nostrum.Struct.Permission do
@moduledoc """
Functions that work on permissions.
Some functions return a list of permissions. You can use enumerable functions
to work with permissions:
```Elixir
alias Nostrum.Cache.GuildCache
alias Nostrum.Struct.Guild.Member
guild = GuildCache.get!(279093381723062272)
member = Enum.find(guild.members, & &1.id === 177888205536886784)
member_perms = Member.guild_permissions(member, guild)
if :administrator in member_perms do
IO.puts("This user has the administrator permission.")
end
```
"""
use Bitwise
@typedoc """
Represents a single permission as a bitvalue.
"""
@type bit :: non_neg_integer
@typedoc """
Represents a set of permissions as a bitvalue.
"""
@type bitset :: non_neg_integer
@type general_permission ::
:create_instant_invite
| :kick_members
| :ban_members
| :administrator
| :manage_channels
| :manage_guild
| :view_audit_log
| :view_channel
| :change_nickname
| :manage_nicknames
| :manage_roles
| :manage_webhooks
| :manage_emojis
@type text_permission ::
:add_reactions
| :send_messages
| :send_tts_messages
| :manage_messages
| :embed_links
| :attach_files
| :read_message_history
| :mention_everyone
| :use_external_emojis
@type voice_permission ::
:connect
| :speak
| :mute_members
| :deafen_members
| :move_members
| :use_vad
@type t ::
general_permission
| text_permission
| voice_permission
@permission_to_bit_map %{
create_instant_invite: 0x00000001,
kick_members: 0x00000002,
ban_members: 0x00000004,
administrator: 0x00000008,
manage_channels: 0x00000010,
manage_guild: 0x00000020,
add_reactions: 0x00000040,
view_audit_log: 0x00000080,
view_channel: 0x00000400,
send_messages: 0x00000800,
send_tts_messages: 0x00001000,
manage_messages: 0x00002000,
embed_links: 0x00004000,
attach_files: 0x00008000,
read_message_history: 0x00010000,
mention_everyone: 0x00020000,
use_external_emojis: 0x00040000,
connect: 0x00100000,
speak: 0x00200000,
mute_members: 0x00400000,
deafen_members: 0x00800000,
move_members: 0x01000000,
use_vad: 0x02000000,
change_nickname: 0x04000000,
manage_nicknames: 0x08000000,
manage_roles: 0x10000000,
manage_webhooks: 0x20000000,
manage_emojis: 0x40000000
}
@bit_to_permission_map Map.new(@permission_to_bit_map, fn {k, v} -> {v, k} end)
@permission_list Map.keys(@permission_to_bit_map)
@doc """
Returns `true` if `term` is a permission; otherwise returns `false`.
## Examples
```Elixir
iex> Nostrum.Struct.Permission.is_permission(:administrator)
true
iex> Nostrum.Struct.Permission.is_permission(:not_a_permission)
false
```
"""
defguard is_permission(term) when is_atom(term) and term in @permission_list
@doc """
Returns a list of all permissions.
"""
@spec all() :: [t]
def all, do: @permission_list
@doc """
Converts the given bit to a permission.
This function returns `:error` if `bit` does not map to a permission.
## Examples
```Elixir
iex> Nostrum.Struct.Permission.from_bit(0x04000000)
{:ok, :change_nickname}
iex> Nostrum.Struct.Permission.from_bit(0)
:error
```
"""
@spec from_bit(bit) :: {:ok, t} | :error
def from_bit(bit) do
Map.fetch(@bit_to_permission_map, bit)
end
@doc """
Same as `from_bit/1`, but raises `ArgumentError` in case of failure.
## Examples
```Elixir
iex> Nostrum.Struct.Permission.from_bit!(0x04000000)
:change_nickname
iex> Nostrum.Struct.Permission.from_bit!(0)
** (ArgumentError) expected a valid bit, got: `0`
```
"""
@spec from_bit!(bit) :: t
def from_bit!(bit) do
case from_bit(bit) do
{:ok, perm} -> perm
:error -> raise(ArgumentError, "expected a valid bit, got: `#{inspect(bit)}`")
end
end
@doc """
Converts the given bitset to a list of permissions.
If the bitset contains invalid bits, returns `{:error, invalid_bits}`.
## Examples
```Elixir
iex> Nostrum.Struct.Permission.from_bitset(0x08000002)
{:ok, [:kick_members, :manage_nicknames]}
iex> Nostrum.Struct.Permission.from_bitset(0)
{:ok, []}
iex> Nostrum.Struct.Permission.from_bitset(0x4000000000000)
{:error, [0x4000000000000]}
```
"""
@spec from_bitset(bitset) :: {:ok, [t]} | {:error, [bit]}
def from_bitset(bitset) do
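    # Test each bit position 0..53 against the bitset; bits that map to known
    # permissions succeed, unrecognized set bits are collected as errors.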
{errors, successes} =
0..53
|> Enum.map(fn index -> 0x1 <<< index end)
|> Enum.filter(fn mask -> (bitset &&& mask) === mask end)
|> Enum.map(fn bit ->
case from_bit(bit) do
{:ok, perm} -> {:ok, perm}
:error -> {:error, bit}
end
end)
|> Enum.split_with(&match?({:error, _}, &1))
case errors do
[] ->
{:ok, successes |> Enum.map(fn {:ok, perm} -> perm end)}
errors ->
{:error, errors |> Enum.map(fn {:error, bit} -> bit end)}
end
end
@doc """
Same as `from_bitset/1`, but raises `ArgumentError` in case of failure.
## Examples
```Elixir
iex> Nostrum.Struct.Permission.from_bitset!(0x04000000)
[:change_nickname]
"""
@spec from_bitset!(bitset) :: [t] | no_return
def from_bitset!(bitset) do
case from_bitset(bitset) do
{:ok, perms} ->
perms
{:error, invalid_bits} ->
raise(ArgumentError, "got a bitset with invalid bits `#{inspect(invalid_bits)}`")
end
end
@doc """
Converts the given permission to a bit.
## Examples
```Elixir
iex> Nostrum.Struct.Permission.to_bit(:administrator)
8
```
"""
@spec to_bit(t) :: bit
def to_bit(permission) when is_permission(permission), do: @permission_to_bit_map[permission]
@doc """
Converts the given enumerable of permissions to a bitset.
## Examples
```Elixir
iex> Nostrum.Struct.Permission.to_bitset([:administrator, :create_instant_invite])
9
```
"""
@spec to_bitset(Enum.t()) :: bitset
def to_bitset(permissions) do
permissions
|> Enum.map(&to_bit(&1))
    |> Enum.reduce(0, fn bit, acc -> acc ||| bit end)
end
end
|
lib/nostrum/struct/permission.ex
| 0.892834 | 0.771026 |
permission.ex
|
starcoder
|
defmodule PlugCacheControl.Header do
@moduledoc false
@typep maybe(t) :: t | nil
@type t :: %__MODULE__{
must_revalidate: maybe(boolean()),
no_cache: maybe(boolean()),
no_store: maybe(boolean()),
no_transform: maybe(boolean()),
proxy_revalidate: maybe(boolean()),
private: maybe(boolean()),
public: maybe(boolean()),
max_age: maybe(integer()),
s_maxage: maybe(integer()),
stale_while_revalidate: maybe(integer()),
stale_if_error: maybe(integer())
}
@directives [
:must_revalidate,
:no_cache,
:no_store,
:no_transform,
:proxy_revalidate,
:private,
:public,
:max_age,
:s_maxage,
:stale_while_revalidate,
:stale_if_error
]
@numeric_dir [:max_age, :s_maxage, :stale_while_revalidate, :stale_if_error]
defstruct @directives
defguardp is_directive(directive) when directive in @directives
defguardp is_numeric(directive) when directive in @numeric_dir
defguardp is_delta(time)
when is_integer(time) and time >= 0
@doc """
Creates a new header struct.
"""
@spec new() :: t()
def new, do: %__MODULE__{}
@doc """
Creates a new header struct from an enumerable.
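  A minimal sketch (the order of directives in the rendered string may vary,
  since it follows struct-field enumeration order):
      header = PlugCacheControl.Header.new(public: true, max_age: {1, :hour})
      to_string(header)
      #=> e.g. "max-age=3600, public"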
"""
@spec new(Enum.t()) :: t()
def new(directives) do
put_many(%__MODULE__{}, directives)
end
@doc """
Puts a value of a directive in the header struct.
"""
@spec put(t(), Utils.directive(), term()) :: t()
def put(%__MODULE__{} = header, directive, value) do
do_put(header, directive, value)
end
  @doc """
  Puts multiple directive values in the header struct.
  """
  @spec put_many(t(), Enum.t()) :: t()
def put_many(%__MODULE__{} = header, directives) do
Enum.reduce(directives, header, fn {directive, value}, header ->
put(header, directive, value)
end)
end
  @doc """
  Renders the header struct as a `Cache-Control` header value string.
  """
  @spec to_string(t()) :: String.t()
def to_string(%__MODULE__{} = header) do
Kernel.to_string(header)
end
defp do_put(_, directive, _) when not is_directive(directive) do
raise ArgumentError, "Invalid directive #{inspect(directive)}"
end
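  # `public` and `private` are mutually exclusive, so setting one also sets
  # the other to its complement.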
defp do_put(header, :public, value) when is_boolean(value) do
%{header | public: value, private: !value}
end
defp do_put(header, :private, value) when is_boolean(value) do
%{header | public: !value, private: value}
end
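  # A field list renders as a quoted directive argument, e.g. `no-cache="set-cookie"`.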
defp do_put(header, :no_cache, fields) when is_list(fields) do
joined_fields = Enum.join(fields, ", ")
%{header | no_cache: "\"#{joined_fields}\""}
end
defp do_put(header, directive, {time, _unit} = dur)
when is_numeric(directive) and is_delta(time) do
do_put(header, directive, duration_to_seconds(dur))
end
defp do_put(header, directive, field) when is_numeric(directive) and is_delta(field) do
struct_put!(header, directive, field)
end
defp do_put(header, directive, value) when is_boolean(value) do
struct_put!(header, directive, value)
end
defp do_put(_, directive, value) do
raise ArgumentError, "Invalid value #{inspect(value)} for directive #{inspect(directive)}."
end
defimpl String.Chars do
defp atom_to_directive(atom) when is_atom(atom) do
atom
|> Atom.to_string()
|> String.replace("_", "-")
end
def to_string(header) do
header
|> Map.from_struct()
|> Enum.reduce([], fn
{_key, nil}, acc -> acc
{_key, false}, acc -> acc
{key, true}, acc -> [atom_to_directive(key) | acc]
{key, value}, acc -> ["#{atom_to_directive(key)}=#{value}" | acc]
end)
|> Enum.join(", ")
end
end
defp duration_to_seconds({period, _unit} = dur) when is_integer(period) and period >= 0,
do: do_duration_to_seconds(dur)
defp do_duration_to_seconds({period, unit}) when unit in [:second, :seconds], do: period
defp do_duration_to_seconds({period, unit}) when unit in [:minute, :minutes], do: period * 60
defp do_duration_to_seconds({period, unit}) when unit in [:hour, :hours], do: period * 60 * 60
defp do_duration_to_seconds({period, unit}) when unit in [:day, :days],
do: period * 60 * 60 * 24
defp do_duration_to_seconds({period, unit}) when unit in [:week, :weeks],
do: period * 60 * 60 * 24 * 7
defp do_duration_to_seconds({period, unit}) when unit in [:year, :years],
do: period * 60 * 60 * 24 * 365
defp do_duration_to_seconds({_, unit}),
do: raise(ArgumentError, "Invalid unit #{inspect(unit)}.")
defp struct_put!(struct, field, value) when is_struct(struct) and is_atom(field) do
to_merge = Map.put(%{}, field, value)
struct!(struct, to_merge)
end
end
|
lib/plug_cache_control/header.ex
| 0.787278 | 0.431345 |
header.ex
|
starcoder
|
defmodule ExVault.KV2 do
@moduledoc """
A wrapper over the basic operations for working with KV v2 data.
Construct a *backend*--a client paired with the mount path for the `kv`
version 2 secrets engine it interacts with--using the `ExVault.KV2.new/2`
function.
  Each of the operations in this module has a variant that operates on a client
and mount path, and another that operates on a backend.
See the [Vault documentation](https://www.vaultproject.io/docs/secrets/kv/kv-v2.html)
for the secrets engine.
"""
defstruct [:client, :mount]
@type t :: %__MODULE__{
client: ExVault.client(),
mount: String.t()
}
@doc """
Create a new backend for the `kv` version 2 secrets engine.
Params:
* `client` the `ExVault` client.
* `mount` the mount path for the `kv` secrets engine.
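  A minimal usage sketch (assumes `client` is an already-configured `ExVault`
  client and the engine is mounted at `"secret"`; the path is illustrative):
      kv = ExVault.KV2.new(client, "secret")
      {:ok, %ExVault.KV2.GetData{data: data}} = ExVault.KV2.get_data(kv, "myapp/config")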
"""
@spec new(ExVault.client(), String.t()) :: t()
def new(client, mount), do: %__MODULE__{client: client, mount: mount}
# TODO: config
defmodule GetData do
@moduledoc """
Response struct for the data returned by the `kv` version 2 secrets engine.
"""
alias ExVault.Response.{Logical, Success}
defstruct [:resp, :data, :metadata]
@typedoc """
Struct containing the data and metadata for a path in the secrets engine.
See the [Vault documentation](https://www.vaultproject.io/api/secret/kv/kv-v2.html#read-secret-version)
for the `kv` version 2 API.
* `:resp`: The original response received from Vault.
* `:data`: The key/value pairs for the path.
* `:metadata`: The metadata associated with the data.
"""
@type t :: %__MODULE__{
resp: Success.t(),
data: %{optional(String.t()) => String.t()},
metadata: %{optional(String.t()) => any()}
}
@doc false
def mkresp({:ok, resp = %Success{logical: %Logical{data: data}}}),
do:
{:ok,
%__MODULE__{
resp: resp,
data: data["data"],
metadata: data["metadata"]
}}
@doc false
def mkresp(resp), do: resp
end
@typedoc "Response type returned by `get_data/3` and `get_data/4`."
@type get_data_response :: {:ok, GetData.t() | ExVault.Response.Error.t()} | {:error, any()}
@doc """
Read the value of a key.
Params:
* `client` the `ExVault` client.
* `mount` the mount path for the `kv` secrets engine.
* `path` the path to the key in the secrets engine.
  * `opts` options; supports `:version` to request a specific secret version.
"""
@spec get_data(ExVault.client(), String.t(), String.t(), keyword()) :: get_data_response()
def get_data(client, mount, path, opts) do
query = Keyword.take(opts, [:version])
client
|> ExVault.read("#{mount}/data/#{path}", query: query)
|> GetData.mkresp()
end
@doc """
Read the value of a key.
Params:
* `backend` the `ExVault.KV2` backend.
* `path` the path to the key in the secrets engine.
  * `opts` options; supports `:version` to request a specific secret version.
"""
@spec get_data(t(), String.t(), keyword()) :: get_data_response()
def get_data(backend, path, opts \\ []), do: get_data(backend.client, backend.mount, path, opts)
@doc """
Write the value of a key.
Params:
* `client` the `ExVault` client.
* `mount` the mount path for the `kv` secrets engine.
* `path` the path to the key in the secrets engine.
* `data` the data to write as a JSON-compatible map.
* `opts` TODO: document opts.
"""
@spec put_data(ExVault.client(), String.t(), String.t(), map(), keyword()) :: ExVault.response()
  def put_data(client, mount, path, data, _opts) do
# TODO: cas
ExVault.write(client, "#{mount}/data/#{path}", %{"data" => data})
end
@doc """
Write the value of a key.
Params:
* `backend` the `ExVault.KV2` backend.
* `path` the path to the key in the secrets engine.
* `data` the data to write as a JSON-compatible map.
* `opts` TODO: document opts.
"""
@spec put_data(t(), String.t(), map(), keyword()) :: ExVault.response()
def put_data(backend, path, data, opts \\ []),
do: put_data(backend.client, backend.mount, path, data, opts)
# TODO: delete
# TODO: undelete
# TODO: destroy
# TODO: list
# TODO: read metadata
# TODO: update metadata
# TODO: delete metadata
end
|
lib/exvault/kv2.ex
| 0.666822 | 0.745097 |
kv2.ex
|
starcoder
|
defmodule AWS.Textract do
@moduledoc """
Amazon Textract detects and analyzes text in documents and converts it into
machine-readable text.
This is the API reference documentation for Amazon Textract.
"""
alias AWS.Client
alias AWS.Request
def metadata do
%AWS.ServiceMetadata{
abbreviation: nil,
api_version: "2018-06-27",
content_type: "application/x-amz-json-1.1",
credential_scope: nil,
endpoint_prefix: "textract",
global?: false,
protocol: "json",
service_id: "Textract",
signature_version: "v4",
signing_name: "textract",
target_prefix: "Textract"
}
end
@doc """
Analyzes an input document for relationships between detected items.
The types of information returned are as follows:
* Form data (key-value pairs). The related information is returned
in two `Block` objects, each of type `KEY_VALUE_SET`: a KEY `Block` object and a
VALUE `Block` object. For example, *Name: <NAME>* contains a key and
value. *Name:* is the key. *<NAME>* is the value.
* Table and table cell data. A TABLE `Block` object contains
information about a detected table. A CELL `Block` object is returned for each
cell in a table.
* Lines and words of text. A LINE `Block` object contains one or
more WORD `Block` objects. All lines and words that are detected in the document
are returned (including text that doesn't have a relationship with the value of
`FeatureTypes`).
Selection elements such as check boxes and option buttons (radio buttons) can be
detected in form data and in tables. A SELECTION_ELEMENT `Block` object contains
information about a selection element, including the selection status.
You can choose which type of analysis to perform by specifying the
`FeatureTypes` list.
The output is returned in a list of `Block` objects.
`AnalyzeDocument` is a synchronous operation. To analyze documents
asynchronously, use `StartDocumentAnalysis`.
For more information, see [Document Text Analysis](https://docs.aws.amazon.com/textract/latest/dg/how-it-works-analyzing.html).
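  A call sketch (bucket and object names are illustrative, and `client` is
  assumed to be a configured `%AWS.Client{}`; see the Amazon Textract API
  reference for the full request shape):
      input = %{
        "Document" => %{"S3Object" => %{"Bucket" => "my-bucket", "Name" => "form.png"}},
        "FeatureTypes" => ["FORMS", "TABLES"]
      }
      {:ok, output, _response} = AWS.Textract.analyze_document(client, input)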
"""
def analyze_document(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "AnalyzeDocument", input, options)
end
@doc """
Detects text in the input document.
Amazon Textract can detect lines of text and the words that make up a line of
text. The input document must be an image in JPEG or PNG format.
`DetectDocumentText` returns the detected text in an array of `Block` objects.
  Each document page has an associated `Block` of type PAGE. Each PAGE `Block`
object is the parent of LINE `Block` objects that represent the lines of
detected text on a page. A LINE `Block` object is a parent for each word that
makes up the line. Words are represented by `Block` objects of type WORD.
`DetectDocumentText` is a synchronous operation. To analyze documents
asynchronously, use `StartDocumentTextDetection`.
For more information, see [Document Text Detection](https://docs.aws.amazon.com/textract/latest/dg/how-it-works-detecting.html).
"""
def detect_document_text(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "DetectDocumentText", input, options)
end
@doc """
Gets the results for an Amazon Textract asynchronous operation that analyzes
text in a document.
You start asynchronous text analysis by calling `StartDocumentAnalysis`, which
returns a job identifier (`JobId`). When the text analysis operation finishes,
Amazon Textract publishes a completion status to the Amazon Simple Notification
Service (Amazon SNS) topic that's registered in the initial call to
`StartDocumentAnalysis`. To get the results of the text-detection operation,
first check that the status value published to the Amazon SNS topic is
`SUCCEEDED`. If so, call `GetDocumentAnalysis`, and pass the job identifier
(`JobId`) from the initial call to `StartDocumentAnalysis`.
`GetDocumentAnalysis` returns an array of `Block` objects. The following types
of information are returned:
* Form data (key-value pairs). The related information is returned
in two `Block` objects, each of type `KEY_VALUE_SET`: a KEY `Block` object and a
VALUE `Block` object. For example, *Name: <NAME>* contains a key and
value. *Name:* is the key. *<NAME>* is the value.
* Table and table cell data. A TABLE `Block` object contains
information about a detected table. A CELL `Block` object is returned for each
cell in a table.
* Lines and words of text. A LINE `Block` object contains one or
more WORD `Block` objects. All lines and words that are detected in the document
are returned (including text that doesn't have a relationship with the value of
the `StartDocumentAnalysis` `FeatureTypes` input parameter).
Selection elements such as check boxes and option buttons (radio buttons) can be
detected in form data and in tables. A SELECTION_ELEMENT `Block` object contains
information about a selection element, including the selection status.
Use the `MaxResults` parameter to limit the number of blocks that are returned.
If there are more results than specified in `MaxResults`, the value of
`NextToken` in the operation response contains a pagination token for getting
the next set of results. To get the next page of results, call
`GetDocumentAnalysis`, and populate the `NextToken` request parameter with the
token value that's returned from the previous call to `GetDocumentAnalysis`.
For more information, see [Document Text Analysis](https://docs.aws.amazon.com/textract/latest/dg/how-it-works-analyzing.html).
"""
def get_document_analysis(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetDocumentAnalysis", input, options)
end
@doc """
Gets the results for an Amazon Textract asynchronous operation that detects text
in a document.
Amazon Textract can detect lines of text and the words that make up a line of
text.
You start asynchronous text detection by calling `StartDocumentTextDetection`,
which returns a job identifier (`JobId`). When the text detection operation
finishes, Amazon Textract publishes a completion status to the Amazon Simple
Notification Service (Amazon SNS) topic that's registered in the initial call to
`StartDocumentTextDetection`. To get the results of the text-detection
operation, first check that the status value published to the Amazon SNS topic
is `SUCCEEDED`. If so, call `GetDocumentTextDetection`, and pass the job
identifier (`JobId`) from the initial call to `StartDocumentTextDetection`.
`GetDocumentTextDetection` returns an array of `Block` objects.
  Each document page has an associated `Block` of type PAGE. Each PAGE `Block`
object is the parent of LINE `Block` objects that represent the lines of
detected text on a page. A LINE `Block` object is a parent for each word that
makes up the line. Words are represented by `Block` objects of type WORD.
Use the MaxResults parameter to limit the number of blocks that are returned. If
there are more results than specified in `MaxResults`, the value of `NextToken`
in the operation response contains a pagination token for getting the next set
of results. To get the next page of results, call `GetDocumentTextDetection`,
and populate the `NextToken` request parameter with the token value that's
returned from the previous call to `GetDocumentTextDetection`.
For more information, see [Document Text Detection](https://docs.aws.amazon.com/textract/latest/dg/how-it-works-detecting.html).
"""
def get_document_text_detection(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "GetDocumentTextDetection", input, options)
end
@doc """
Starts the asynchronous analysis of an input document for relationships between
detected items such as key-value pairs, tables, and selection elements.
`StartDocumentAnalysis` can analyze text in documents that are in JPEG, PNG, and
PDF format. The documents are stored in an Amazon S3 bucket. Use
`DocumentLocation` to specify the bucket name and file name of the document.
`StartDocumentAnalysis` returns a job identifier (`JobId`) that you use to get
the results of the operation. When text analysis is finished, Amazon Textract
publishes a completion status to the Amazon Simple Notification Service (Amazon
SNS) topic that you specify in `NotificationChannel`. To get the results of the
text analysis operation, first check that the status value published to the
Amazon SNS topic is `SUCCEEDED`. If so, call `GetDocumentAnalysis`, and pass the
job identifier (`JobId`) from the initial call to `StartDocumentAnalysis`.
For more information, see [Document Text Analysis](https://docs.aws.amazon.com/textract/latest/dg/how-it-works-analyzing.html).
"""
def start_document_analysis(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "StartDocumentAnalysis", input, options)
end
@doc """
Starts the asynchronous detection of text in a document.
Amazon Textract can detect lines of text and the words that make up a line of
text.
`StartDocumentTextDetection` can analyze text in documents that are in JPEG,
PNG, and PDF format. The documents are stored in an Amazon S3 bucket. Use
`DocumentLocation` to specify the bucket name and file name of the document.
  `StartDocumentTextDetection` returns a job identifier (`JobId`) that you use to
results of the operation. When text detection is finished, Amazon Textract
publishes a completion status to the Amazon Simple Notification Service (Amazon
SNS) topic that you specify in `NotificationChannel`. To get the results of the
text detection operation, first check that the status value published to the
Amazon SNS topic is `SUCCEEDED`. If so, call `GetDocumentTextDetection`, and
pass the job identifier (`JobId`) from the initial call to
`StartDocumentTextDetection`.
For more information, see [Document Text Detection](https://docs.aws.amazon.com/textract/latest/dg/how-it-works-detecting.html).
"""
def start_document_text_detection(%Client{} = client, input, options \\ []) do
Request.request_post(client, metadata(), "StartDocumentTextDetection", input, options)
end
end
|
lib/aws/generated/textract.ex
| 0.914796 | 0.644756 |
textract.ex
|
starcoder
|
defmodule ReconLib do
require :recon_lib
@moduledoc """
Regroups useful functionality used by recon when dealing with data
from the node. The functions in this module allow quick runtime
access to fancier behaviour than what would be done using recon
module itself.
"""
@type diff :: [Recon.proc_attrs() | Recon.inet_attrs()]
@type milliseconds :: non_neg_integer
@type interval_ms :: non_neg_integer
@type scheduler_id :: pos_integer
@type sched_time ::
{scheduler_id, active_time :: non_neg_integer, total_time :: non_neg_integer}
@doc """
Compare two samples and return a list based on some key. The type
mentioned for the structure is `diff()` (`{key, val, other}`), which
is compatible with the `Recon.proc_attrs()` type.
"""
@spec sliding_window(first :: diff, last :: diff) :: diff
def sliding_window(first, last) do
:recon_lib.sliding_window(first, last)
end
@doc """
Runs a fun once, waits `ms`, runs the fun again, and returns both
results.
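  For example, sampling total reduction counts one second apart (illustrative):
      {first, second} = ReconLib.sample(1000, fn -> :erlang.statistics(:reductions) end)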
"""
@spec sample(milliseconds, (() -> term)) ::
{first :: term, second :: term}
def sample(delay, fun), do: :recon_lib.sample(delay, fun)
@doc """
Takes a list of terms, and counts how often each of them appears in
the list. The list returned is in no particular order.
"""
@spec count([term]) :: [{term, count :: integer}]
def count(terms), do: :recon_lib.count(terms)
@doc """
Returns a list of all the open ports in the VM, coupled with
one of the properties desired from `:erlang.port_info/1` and
`:erlang.port_info/2`
"""
@spec port_list(attr :: atom) :: [{port, term}]
def port_list(attr), do: :recon_lib.port_list(attr)
@doc """
Returns a list of all the open ports in the VM, but only if the
`attr`'s resulting value matches `val`. `attr` must be a property
accepted by `:erlang.port_info/2`.
"""
@spec port_list(attr :: atom, term) :: [port]
def port_list(attr, val), do: :recon_lib.port_list(attr, val)
@doc """
Returns the attributes (`Recon.proc_attrs/0`) of all processes of
the node, except the caller.
"""
@spec proc_attrs(term) :: [Recon.proc_attrs()]
def proc_attrs(attr_name) do
:recon_lib.proc_attrs(attr_name)
end
@doc """
Returns the attributes of a given process. This form of attributes
is standard for most comparison functions for processes in recon.
A special attribute is `binary_memory`, which will reduce the memory
used by the process for binary data on the global heap.
"""
@spec proc_attrs(term, pid) :: {:ok, Recon.proc_attrs()} | {:error, term}
def proc_attrs(attr_name, pid) do
:recon_lib.proc_attrs(attr_name, pid)
end
@doc """
  Returns the attributes (`Recon.inet_attrs/0`) of all inet ports (UDP,
SCTP, TCP) of the node.
"""
@spec inet_attrs(term) :: [Recon.inet_attrs()]
def inet_attrs(attr_name), do: :recon_lib.inet_attrs(attr_name)
@doc """
Returns the attributes required for a given inet port (UDP, SCTP,
TCP). This form of attributes is standard for most comparison
functions for processes in recon.
"""
  @spec inet_attrs(Recon.inet_attr_name(), port) ::
{:ok, Recon.inet_attrs()} | {:error, term}
def inet_attrs(attr, port), do: :recon_lib.inet_attrs(attr, port)
@doc """
  Equivalent of `pid(x, y, z)` in Elixir's IEx shell.
"""
@spec triple_to_pid(non_neg_integer, non_neg_integer, non_neg_integer) :: pid
def triple_to_pid(x, y, z), do: :recon_lib.triple_to_pid(x, y, z)
@doc """
Transforms a given term to a pid.
"""
@spec term_to_pid(Recon.pid_term()) :: pid
def term_to_pid(term) do
pre_process_pid_term(term) |> :recon_lib.term_to_pid()
end
defp pre_process_pid_term({_a, _b, _c} = pid_term) do
pid_term
end
  defp pre_process_pid_term(<<"#PID", pid_term::binary>>) do
    to_charlist(pid_term)
  end
  defp pre_process_pid_term(pid_term) when is_binary(pid_term) do
    to_charlist(pid_term)
  end
defp pre_process_pid_term(pid_term) do
pid_term
end
@doc """
Transforms a given term to a port.
"""
@spec term_to_port(Recon.port_term()) :: port
  def term_to_port(term) when is_binary(term) do
    to_charlist(term) |> :recon_lib.term_to_port()
  end
def term_to_port(term) do
:recon_lib.term_to_port(term)
end
@doc """
Calls a given function every `interval` milliseconds and supports
a map-like interface (each result is modified and returned)
"""
@spec time_map(
n :: non_neg_integer,
interval_ms,
fun :: (state :: term -> {term, state :: term}),
initial_state :: term,
mapfun :: (term -> term)
) :: [term]
def time_map(n, interval, fun, state, map_fun) do
:recon_lib.time_map(n, interval, fun, state, map_fun)
end
@doc """
Calls a given function every `interval` milliseconds and supports
a fold-like interface (each result is modified and accumulated)
"""
@spec time_fold(
n :: non_neg_integer,
interval_ms,
fun :: (state :: term -> {term, state :: term}),
initial_state :: term,
foldfun :: (term, acc0 :: term -> acc1 :: term),
initial_acc :: term
) :: [term]
def time_fold(n, interval, fun, state, fold_fun, init) do
:recon_lib.time_fold(n, interval, fun, state, fold_fun, init)
end
@doc """
Diffs two runs of :erlang.statistics(scheduler_wall_time) and
returns usage metrics in terms of cores and 0..1 percentages.
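  A usage sketch (scheduler wall-time accounting must be enabled first):
      :erlang.system_flag(:scheduler_wall_time, true)
      {first, last} = ReconLib.sample(1000, fn -> :erlang.statistics(:scheduler_wall_time) end)
      ReconLib.scheduler_usage_diff(first, last)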
"""
@spec scheduler_usage_diff(sched_time, sched_time) ::
[{scheduler_id, usage :: number}]
def scheduler_usage_diff(first, last) do
:recon_lib.scheduler_usage_diff(first, last)
end
end
|
lib/recon_lib.ex
| 0.876178 | 0.505066 |
recon_lib.ex
|
starcoder
|
defmodule Map.Element do
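  @moduledoc """
  Convenience helpers for map elements (nodes, ways, and relations):
  tag-based filtering and lookup, bounding-box computation, and tag
  manipulation.
  """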
@type single() :: Map.Node.t() | Map.Way.t() | Map.Relation.t()
@type collection() :: [single()]
@type indexed_collection() :: %{binary() => single()}
@typep flexible_collection() :: collection() | indexed_collection()
@spec filter_by_tag(flexible_collection(), atom(), binary() | [binary()]) :: [single()]
def filter_by_tag(map, tag, value) when is_map(map),
do: map |> Map.values() |> filter_by_tag(tag, value)
def filter_by_tag(list, tag, value) when is_binary(value),
do: filter_by_tag(list, tag, List.wrap(value))
def filter_by_tag(list, tag, values) when is_list(list) and is_atom(tag) and is_list(values) do
Enum.filter(list, fn %{tags: tags} -> Enum.member?(values, tags[tag]) end)
end
@spec find_by_tag(flexible_collection(), atom(), binary() | [binary()]) :: single() | nil
def find_by_tag(map, tag, value) when is_map(map),
do: map |> Map.values() |> find_by_tag(tag, value)
def find_by_tag(list, tag, value) when is_binary(value),
do: find_by_tag(list, tag, List.wrap(value))
def find_by_tag(list, tag, values)
when is_list(list) and is_atom(tag) and is_list(values) do
Enum.find(list, fn %{tags: tags} -> Enum.member?(values, tags[tag]) end)
end
@spec bbox(flexible_collection() | single()) :: Geo.BoundingBox.t() | nil
def bbox([]), do: nil
def bbox(list) when is_list(list) do
list
|> Parallel.map(&bbox/1)
|> Enum.reduce(&Geo.CheapRuler.union/2)
end
def bbox(%{bbox: bbox}) when not is_nil(bbox), do: bbox
def bbox(%Map.Node{} = n), do: Geo.CheapRuler.bbox(n)
def bbox(%Map.Way{nodes: nodes}), do: Geo.CheapRuler.bbox(nodes)
def bbox(%Map.Relation{members: members}), do: members |> Enum.map(& &1.ref) |> bbox()
def bbox(map) when is_map(map), do: map |> Map.values() |> bbox()
def with_bbox(elem), do: Map.put(elem, :bbox, bbox(elem))
@spec add_new_tags(single(), map()) :: single()
def add_new_tags(%{tags: tags} = elem, extra_tags) do
tags = Map.merge(extra_tags, tags)
%{elem | tags: tags}
end
@spec keep_only_tags(single(), list()) :: single()
def keep_only_tags(%{tags: tags} = elem, tags_to_keep) do
tags = Map.take(tags, tags_to_keep)
%{elem | tags: tags}
end
end
|
lib/map/element.ex
| 0.79653 | 0.416559 |
element.ex
|
starcoder
|
defmodule AWS.Datapipeline do
@moduledoc """
AWS Data Pipeline configures and manages a data-driven workflow called a
pipeline. AWS Data Pipeline handles the details of scheduling and ensuring
that data dependencies are met so that your application can focus on
processing the data.
AWS Data Pipeline provides a JAR implementation of a task runner called AWS
Data Pipeline Task Runner. AWS Data Pipeline Task Runner provides logic for
common data management scenarios, such as performing database queries and
running data analysis using Amazon Elastic MapReduce (Amazon EMR). You can
use AWS Data Pipeline Task Runner as your task runner, or you can write
your own task runner to provide custom data management.
AWS Data Pipeline implements two main sets of functionality. Use the first
set to create a pipeline and define data sources, schedules, dependencies,
and the transforms to be performed on the data. Use the second set in your
task runner application to receive the next task ready for processing. The
logic for performing the task, such as querying the data, running data
analysis, or converting the data from one format to another, is contained
within the task runner. The task runner performs the task assigned to it by
the web service, reporting progress to the web service as it does so. When
the task is done, the task runner reports the final success or failure of
the task to the web service.
"""
@doc """
Validates the specified pipeline and starts processing pipeline tasks. If
the pipeline does not pass validation, activation fails.
If you need to pause the pipeline to investigate an issue with a component,
such as a data source or script, call `DeactivatePipeline`.
To activate a finished pipeline, modify the end date for the pipeline and
then activate it.
"""
def activate_pipeline(client, input, options \\ []) do
request(client, "ActivatePipeline", input, options)
end
@doc """
Adds or modifies tags for the specified pipeline.
"""
def add_tags(client, input, options \\ []) do
request(client, "AddTags", input, options)
end
@doc """
Creates a new, empty pipeline. Use `PutPipelineDefinition` to populate the
pipeline.
"""
def create_pipeline(client, input, options \\ []) do
request(client, "CreatePipeline", input, options)
end
@doc """
Deactivates the specified running pipeline. The pipeline is set to the
`DEACTIVATING` state until the deactivation process completes.
To resume a deactivated pipeline, use `ActivatePipeline`. By default, the
pipeline resumes from the last completed execution. Optionally, you can
specify the date and time to resume the pipeline.
"""
def deactivate_pipeline(client, input, options \\ []) do
request(client, "DeactivatePipeline", input, options)
end
@doc """
Deletes a pipeline, its pipeline definition, and its run history. AWS Data
Pipeline attempts to cancel instances associated with the pipeline that are
currently being processed by task runners.
Deleting a pipeline cannot be undone. You cannot query or restore a deleted
pipeline. To temporarily pause a pipeline instead of deleting it, call
`SetStatus` with the status set to `PAUSE` on individual components.
Components that are paused by `SetStatus` can be resumed.
"""
def delete_pipeline(client, input, options \\ []) do
request(client, "DeletePipeline", input, options)
end
@doc """
Gets the object definitions for a set of objects associated with the
pipeline. Object definitions are composed of a set of fields that define
the properties of the object.
"""
def describe_objects(client, input, options \\ []) do
request(client, "DescribeObjects", input, options)
end
@doc """
Retrieves metadata about one or more pipelines. The information retrieved
includes the name of the pipeline, the pipeline identifier, its current
state, and the user account that owns the pipeline. Using account
credentials, you can retrieve metadata about pipelines that you or your IAM
users have created. If you are using an IAM user account, you can retrieve
metadata about only those pipelines for which you have read permissions.
To retrieve the full pipeline definition instead of metadata about the
pipeline, call `GetPipelineDefinition`.
"""
def describe_pipelines(client, input, options \\ []) do
request(client, "DescribePipelines", input, options)
end
@doc """
Task runners call `EvaluateExpression` to evaluate a string in the context
of the specified object. For example, a task runner can evaluate SQL
queries stored in Amazon S3.
"""
def evaluate_expression(client, input, options \\ []) do
request(client, "EvaluateExpression", input, options)
end
@doc """
Gets the definition of the specified pipeline. You can call
`GetPipelineDefinition` to retrieve the pipeline definition that you
provided using `PutPipelineDefinition`.
"""
def get_pipeline_definition(client, input, options \\ []) do
request(client, "GetPipelineDefinition", input, options)
end
@doc """
Lists the pipeline identifiers for all active pipelines that you have
permission to access.
"""
def list_pipelines(client, input, options \\ []) do
request(client, "ListPipelines", input, options)
end
@doc """
Task runners call `PollForTask` to receive a task to perform from AWS Data
Pipeline. The task runner specifies which tasks it can perform by setting a
value for the `workerGroup` parameter. The task returned can come from any
of the pipelines that match the `workerGroup` value passed in by the task
runner and that was launched using the IAM user credentials specified by
the task runner.
If tasks are ready in the work queue, `PollForTask` returns a response
immediately. If no tasks are available in the queue, `PollForTask` uses
  long-polling and holds on to a poll connection for up to 90 seconds,
  during which time the first newly scheduled task is handed to the task
  runner. To accommodate this, set the socket timeout in your task runner to
90 seconds. The task runner should not call `PollForTask` again on the same
`workerGroup` until it receives a response, and this can take up to 90
seconds.
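  A call sketch (the worker group name is illustrative, and `client` is assumed
  to be an already-configured AWS client):
      input = %{"workerGroup" => "wg-12345"}
      {:ok, output, _response} = AWS.Datapipeline.poll_for_task(client, input)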
"""
def poll_for_task(client, input, options \\ []) do
request(client, "PollForTask", input, options)
end
@doc """
Adds tasks, schedules, and preconditions to the specified pipeline. You can
use `PutPipelineDefinition` to populate a new pipeline.
  `PutPipelineDefinition` also validates the configuration as it adds it to
  the pipeline. Changes to the pipeline are saved unless one of the following
  validation errors exists in the pipeline:
  1. An object is missing a name or identifier field.
  2. A string or reference field is empty.
  3. The number of objects in the pipeline exceeds the maximum allowed objects.
  4. The pipeline is in a `FINISHED` state.
  Pipeline object definitions are passed to the `PutPipelineDefinition` action
  and returned by the `GetPipelineDefinition` action.
"""
def put_pipeline_definition(client, input, options \\ []) do
request(client, "PutPipelineDefinition", input, options)
end
@doc """
Queries the specified pipeline for the names of objects that match the
specified set of conditions.
"""
def query_objects(client, input, options \\ []) do
request(client, "QueryObjects", input, options)
end
@doc """
Removes existing tags from the specified pipeline.
"""
def remove_tags(client, input, options \\ []) do
request(client, "RemoveTags", input, options)
end
@doc """
Task runners call `ReportTaskProgress` when assigned a task to acknowledge
that it has the task. If the web service does not receive this
acknowledgement within 2 minutes, it assigns the task in a subsequent
`PollForTask` call. After this initial acknowledgement, the task runner
only needs to report progress every 15 minutes to maintain its ownership of
the task. You can change this reporting time from 15 minutes by specifying
a `reportProgressTimeout` field in your pipeline.
If a task runner does not report its status after 5 minutes, AWS Data
Pipeline assumes that the task runner is unable to process the task and
reassigns the task in a subsequent response to `PollForTask`. Task runners
should call `ReportTaskProgress` every 60 seconds.
"""
def report_task_progress(client, input, options \\ []) do
request(client, "ReportTaskProgress", input, options)
end
@doc """
Task runners call `ReportTaskRunnerHeartbeat` every 15 minutes to indicate
that they are operational. If the AWS Data Pipeline Task Runner is launched
on a resource managed by AWS Data Pipeline, the web service can use this
call to detect when the task runner application has failed and restart a
new instance.
"""
def report_task_runner_heartbeat(client, input, options \\ []) do
request(client, "ReportTaskRunnerHeartbeat", input, options)
end
@doc """
Requests that the status of the specified physical or logical pipeline
objects be updated in the specified pipeline. This update might not occur
immediately, but is eventually consistent. The status that can be set
depends on the type of object (for example, DataNode or Activity). You
cannot perform this operation on `FINISHED` pipelines and attempting to do
so returns `InvalidRequestException`.
"""
def set_status(client, input, options \\ []) do
request(client, "SetStatus", input, options)
end
@doc """
Task runners call `SetTaskStatus` to notify AWS Data Pipeline that a task
is completed and provide information about the final status. A task runner
  makes this call regardless of whether the task was successful. A task runner
does not need to call `SetTaskStatus` for tasks that are canceled by the
web service during a call to `ReportTaskProgress`.
"""
def set_task_status(client, input, options \\ []) do
request(client, "SetTaskStatus", input, options)
end
@doc """
Validates the specified pipeline definition to ensure that it is well
formed and can be run without error.
"""
def validate_pipeline_definition(client, input, options \\ []) do
request(client, "ValidatePipelineDefinition", input, options)
end
@spec request(AWS.Client.t(), binary(), map(), list()) ::
{:ok, map() | nil, map()}
| {:error, term()}
defp request(client, action, input, options) do
client = %{client | service: "datapipeline"}
host = build_host("datapipeline", client)
url = build_url(host, client)
headers = [
{"Host", host},
{"Content-Type", "application/x-amz-json-1.1"},
{"X-Amz-Target", "DataPipeline.#{action}"}
]
payload = encode!(client, input)
headers = AWS.Request.sign_v4(client, "POST", url, headers, payload)
post(client, url, payload, headers, options)
end
defp post(client, url, payload, headers, options) do
case AWS.Client.request(client, :post, url, payload, headers, options) do
{:ok, %{status_code: 200, body: body} = response} ->
body = if body != "", do: decode!(client, body)
{:ok, body, response}
{:ok, response} ->
{:error, {:unexpected_response, response}}
error = {:error, _reason} -> error
end
end
defp build_host(_endpoint_prefix, %{region: "local", endpoint: endpoint}) do
endpoint
end
defp build_host(_endpoint_prefix, %{region: "local"}) do
"localhost"
end
defp build_host(endpoint_prefix, %{region: region, endpoint: endpoint}) do
"#{endpoint_prefix}.#{region}.#{endpoint}"
end
defp build_url(host, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}/"
end
defp encode!(client, payload) do
AWS.Client.encode!(client, payload, :json)
end
defp decode!(client, payload) do
AWS.Client.decode!(client, payload, :json)
end
end
|
lib/aws/generated/datapipeline.ex
| 0.904368 | 0.823506 |
datapipeline.ex
|
starcoder
|
defmodule SimpleBudget.Calculations.Daily do
@moduledoc false
import Ecto.Query
alias SimpleBudget.Repo
def all do
remaining = remaining()
remaining_per_day = remaining_per_day(remaining)
%{
remaining: remaining,
remaining_per_day: remaining_per_day
}
end
defp remaining do
remaining_credit_after_debt()
|> Decimal.sub(savings())
|> Decimal.sub(goals())
end
defp remaining_credit_after_debt do
Decimal.sub(credits(), debts())
end
defp remaining_per_day(remaining) do
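    # Spread the remaining amount evenly across the days left in the current
    # month; on the month's final day, return the full remainder.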
days_left =
Timex.now()
|> Timex.end_of_month()
|> Timex.diff(Timex.now(), :days)
if days_left == 0 do
remaining
else
Decimal.div(remaining, days_left)
end
end
defp credits do
credits_query =
from(
a in "accounts",
where: a.debt == false,
select: sum(a.balance)
)
credits = credits_query |> Repo.one() |> zero_or_decimal()
adjustments_query =
from(
a in "accounts",
where: a.debt == false,
join: adjustments in "adjustments",
on: adjustments.account_id == a.id,
select: sum(adjustments.total)
)
adjustments = adjustments_query |> Repo.one() |> zero_or_decimal()
Decimal.add(credits, adjustments)
end
defp debts do
debts_query =
from(
a in "accounts",
where: a.debt == true,
select: sum(a.balance)
)
debts = debts_query |> Repo.one() |> zero_or_decimal()
adjustments_query =
from(
a in "accounts",
where: a.debt == true,
join: adjustments in "adjustments",
on: adjustments.account_id == a.id,
select: sum(adjustments.total)
)
adjustments = adjustments_query |> Repo.one() |> zero_or_decimal()
Decimal.add(debts, adjustments)
end
defp savings do
savings_query =
from(
a in "savings",
select: sum(a.amount)
)
savings_query |> Repo.one() |> zero_or_decimal()
end
defp zero_or_decimal(input) when is_nil(input) do
0.0 |> Decimal.from_float()
end
defp zero_or_decimal(input) when is_float(input) do
input |> Decimal.from_float()
end
defp zero_or_decimal(input) do
input |> Decimal.round(1)
end
defp goals do
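    # Prorate each goal linearly in SQL: the share of `target` that should
    # have been set aside by now, given the goal's start and end dates.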
goals_query =
from(
a in "goals",
select:
sum(
fragment("(target / (end_date - start_date)) * DATE_PART('day', now() - start_date)")
)
)
goals_query |> Repo.one() |> zero_or_decimal()
end
end
|
lib/simple_budget/calculations/daily.ex
| 0.613352 | 0.401013 |
daily.ex
|
starcoder
|