defprotocol Realm.Apply do
@moduledoc """
An extension of `Realm.Functor`, `Apply` provides a way to _apply_ arguments
to functions when both are wrapped in the same kind of container. This can be
seen as running function application "in a context".
For a nice, illustrated introduction,
see [Functors, Applicatives, And Monads In Pictures](http://adit.io/posts/2013-04-17-functors,_applicatives,_and_monads_in_pictures.html).
## Graphically
If function application looks like this
data |> function == result
and a functor looks like this
%Container<data> ~> function == %Container<result>
then an apply looks like
%Container<data> ~>> %Container<function> == %Container<result>
which is similar to function application inside containers, plus the ability to
attach special effects to applications.
data --------------- function ---------------> result
%Container<data> --- %Container<function> ---> %Container<result>
This lets us do functorial things like
* continue applying values to a curried function resulting from a `Realm.Functor.map/2`
* apply multiple functions to multiple arguments (with lists)
* propogate some state (like [`Nothing`](https://hexdocs.pm/algae/Algae.Maybe.Nothing.html#content)
in [`Algae.Maybe`](https://hexdocs.pm/algae/Algae.Maybe.html#content))
but now with a much larger number of arguments, reusing partially applied functions,
and running effects with the function container as well as the data container.
## Examples
iex> ap([fn x -> x + 1 end, fn y -> y * 10 end], [1, 2, 3])
[2, 3, 4, 10, 20, 30]
iex> [100, 200]
...> |> Realm.Functor.map(curry(fn x, y, z -> x * y / z end))
...> |> provide([5, 2])
...> |> provide([100, 50])
[5.0, 10.0, 2.0, 4.0, 10.0, 20.0, 4.0, 8.0]
# ↓ ↓
# 100 * 5 / 100 200 * 5 / 50
iex> import Realm.Functor
...>
...> [100, 200]
...> ~> fn(x, y, z) ->
...> x * y / z
...> end <<~ [5, 2]
...> <<~ [100, 50]
[5.0, 10.0, 2.0, 4.0, 10.0, 20.0, 4.0, 8.0]
# ↓ ↓
# 100 * 5 / 100 200 * 5 / 50
%Algae.Maybe.Just{just: 42}
~> fn(x, y, z) ->
x * y / z
end <<~ %Algae.Maybe.Nothing{}
<<~ %Algae.Maybe.Just{just: 99}
#=> %Algae.Maybe.Nothing{}
## `convey` vs `ap`
`convey` and `ap` essentially associate in opposite directions. For example,
large data is _usually_ more efficient with `ap`, and large numbers of
functions are _usually_ more efficient with `convey`.
It's also a matter of consistency. In Elixir, we like to think of a "subject"
being piped through a series of transformations. This places the function argument
as the second argument. In `Realm.Functor`, this was of little consequence.
However, in `Apply`, we're essentially running superpowered function application.
`ap` is short for `apply`, so as not to conflict with `Kernel.apply/2`, and is meant
to respect a similar API, with the function as the first argument. This also reads
nicely when piped, as it becomes `[funs] |> ap([args1]) |> ap([args2])`,
which is similar in structure to `fun.(arg1).(arg2)`.
With potentially multiple functions being applied over potentially
many arguments, we need to worry about ordering. `convey` not only flips
the order of arguments, but also who is in control of ordering.
`convey` typically runs all functions over each argument (`first_arg ⬸ all_funs`),
and `ap` runs each function over all arguments (`first_fun ⬸ all_args`).
This may change the order of results, and is a feature, not a bug.
iex> [1, 2, 3]
...> |> Realm.Apply.convey([&(&1 + 1), &(&1 * 10)])
[
2, 10, # [(1 + 1), (1 * 10)]
3, 20, # [(2 + 1), (2 * 10)]
4, 30 # [(3 + 1), (3 * 10)]
]
iex> [&(&1 + 1), &(&1 * 10)]
...> |> ap([1, 2, 3])
[
2, 3, 4, # [(1 + 1), (2 + 1), (3 + 1)]
10, 20, 30 # [(1 * 10), (2 * 10), (3 * 10)]
]
## Type Class
An instance of `Realm.Apply` must also implement `Realm.Functor`,
and define `Realm.Apply.convey/2`.
Functor [map/2]
↓
Apply [convey/2]
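A minimal instance can be sketched for a hypothetical single-value wrapper
(the `Wrapper` struct below is illustrative and not part of Realm):

```elixir
defmodule Wrapper do
  defstruct [:value]
end

defimpl Realm.Functor, for: Wrapper do
  # Map over the single wrapped value
  def map(%Wrapper{value: v}, fun), do: %Wrapper{value: fun.(v)}
end

defimpl Realm.Apply, for: Wrapper do
  # Unwrap the function and the argument, rewrap the result
  def convey(%Wrapper{value: v}, %Wrapper{value: fun}), do: %Wrapper{value: fun.(v)}
end

Realm.Apply.convey(%Wrapper{value: 41}, %Wrapper{value: &(&1 + 1)})
#=> %Wrapper{value: 42}
```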
"""
@doc """
Pipe arguments to functions, when both are wrapped in the same
type of data structure.
## Examples
iex> [1, 2, 3]
...> |> Apply.convey([fn x -> x + 1 end, fn y -> y * 10 end])
[2, 10, 3, 20, 4, 30]
"""
@spec convey(t(), t()) :: t()
def convey(apply, func)
end
defmodule Realm.Apply.Algebra do
alias Realm.{Apply, Functor}
import Quark.Curry
@doc """
Alias for `convey/2`.
Why "hose"?
* Pipes (`|>`) are application with arguments flipped
* `ap/2` is like function application "in a context"
* The opposite of `ap` is a contextual pipe
* `hose`s are a kind of flexible pipe
Q.E.D.

## Examples
iex> [1, 2, 3]
...> |> hose([fn x -> x + 1 end, fn y -> y * 10 end])
[2, 10, 3, 20, 4, 30]
"""
@spec hose(Apply.t(), Apply.t()) :: Apply.t()
def hose(apply, func), do: Apply.convey(apply, func)
@doc """
Reverse arguments and sequencing of `convey/2`.
Conceptually this makes operations happen in
a different order than `convey/2`, with the left-side arguments (functions) being
run on all right-side arguments, in that order. We're altering the _sequencing_
of function applications.
## Examples
iex> import Realm.Apply.Algebra
...> ap([fn x -> x + 1 end, fn y -> y * 10 end], [1, 2, 3])
[2, 3, 4, 10, 20, 30]
# For comparison
iex> Apply.convey([1, 2, 3], [fn x -> x + 1 end, fn y -> y * 10 end])
[2, 10, 3, 20, 4, 30]
iex> [100, 200]
...> |> Realm.Functor.map(curry(fn)(x, y, z) -> x * y / z end)
...> |> ap([5, 2])
...> |> ap([100, 50])
[5.0, 10.0, 2.0, 4.0, 10.0, 20.0, 4.0, 8.0]
# ↓ ↓
# 100 * 5 / 100 200 * 5 / 50
"""
@spec ap(Apply.t(), Apply.t()) :: Apply.t()
def ap(func, apply) do
lift(apply, func, fn arg, fun -> fun.(arg) end)
end
@doc """
Sequence actions, replacing the first/previous values with the last argument.
This is essentially a sequence of actions forgetting the first argument.
## Examples
iex> import Realm.Apply.Algebra
...> [1, 2, 3]
...> |> then([4, 5, 6])
...> |> then([7, 8, 9])
[
7, 8, 9,
7, 8, 9,
7, 8, 9,
7, 8, 9,
7, 8, 9,
7, 8, 9,
7, 8, 9,
7, 8, 9,
7, 8, 9
]
iex> import Realm.Apply.Algebra
...> {1, 2, 3} |> then({4, 5, 6}) |> then({7, 8, 9})
{12, 15, 9}
"""
@spec then(Apply.t(), Apply.t()) :: Apply.t()
def then(left, right), do: over(&Quark.constant(&2, &1), left, right)
@doc """
Sequence actions, replacing the last argument with the first argument's values.
This is essentially a sequence of actions forgetting the second argument.
## Examples
iex> import Realm.Apply.Algebra
...> [1, 2, 3]
...> |> following([3, 4, 5])
...> |> following([5, 6, 7])
[
1, 1, 1, 1, 1, 1, 1, 1, 1,
2, 2, 2, 2, 2, 2, 2, 2, 2,
3, 3, 3, 3, 3, 3, 3, 3, 3
]
iex> import Realm.Apply.Algebra
...> {1, 2, 3} |> following({4, 5, 6}) |> following({7, 8, 9})
{12, 15, 3}
"""
@spec following(Apply.t(), Apply.t()) :: Apply.t()
def following(left, right), do: lift(right, left, &Quark.constant(&2, &1))
@doc """
Extends `Functor.map/2` to apply arguments to a binary function
## Examples
iex> import Realm.Apply.Algebra
...> lift([1, 2], [3, 4], &+/2)
[4, 5, 5, 6]
iex> import Realm.Apply.Algebra
...> [1, 2]
...> |> lift([3, 4], &*/2)
[3, 6, 4, 8]
"""
@spec lift(Apply.t(), Apply.t(), fun()) :: Apply.t()
def lift(a, b, fun) do
a
|> Functor.map(curry(fun))
|> (fn f -> Apply.convey(b, f) end).()
end
@doc """
Extends `lift` to apply arguments to a ternary function
## Examples
iex> import Realm.Apply.Algebra
...> lift([1, 2], [3, 4], [5, 6], fn(a, b, c) -> a * b - c end)
[-2, -3, 1, 0, -1, -2, 3, 2]
"""
@spec lift(Apply.t(), Apply.t(), Apply.t(), fun()) :: Apply.t()
def lift(a, b, c, fun), do: a |> lift(b, fun) |> ap(c)
@doc """
Extends `lift` to apply arguments to a quaternary function
## Examples
iex> import Realm.Apply.Algebra
...> lift([1, 2], [3, 4], [5, 6], [7, 8], fn(a, b, c, d) -> a * b - c + d end)
[5, 6, 4, 5, 8, 9, 7, 8, 6, 7, 5, 6, 10, 11, 9, 10]
"""
@spec lift(Apply.t(), Apply.t(), Apply.t(), Apply.t(), fun()) :: Apply.t()
def lift(a, b, c, d, fun), do: a |> lift(b, c, fun) |> ap(d)
@doc """
Extends `over` to apply arguments to a binary function
## Examples
iex> over(&+/2, [1, 2], [3, 4])
[4, 5, 5, 6]
iex> (&*/2)
...> |> over([1, 2], [3, 4])
[3, 4, 6, 8]
"""
@spec over(fun(), Apply.t(), Apply.t()) :: Apply.t()
def over(fun, a, b), do: a |> Functor.map(curry(fun)) |> ap(b)
@doc """
Extends `over` to apply arguments to a ternary function
## Examples
iex> import Realm.Apply.Algebra
...> fn(a, b, c) -> a * b - c end
...> |> over([1, 2], [3, 4], [5, 6])
[-2, -3, -1, -2, 1, 0, 3, 2]
"""
@spec over(fun(), Apply.t(), Apply.t(), Apply.t()) :: Apply.t()
def over(fun, a, b, c), do: fun |> over(a, b) |> ap(c)
@doc """
Extends `over` to apply arguments to a quaternary function
## Examples
iex> import Realm.Apply.Algebra
...> fn(a, b, c, d) -> a * b - c + d end
...> |> over([1, 2], [3, 4], [5, 6], [7, 8])
[5, 6, 4, 5, 6, 7, 5, 6, 8, 9, 7, 8, 10, 11, 9, 10]
"""
@spec over(fun(), Apply.t(), Apply.t(), Apply.t(), Apply.t()) :: Apply.t()
def over(fun, a, b, c, d), do: fun |> over(a, b, c) |> ap(d)
end
defimpl Realm.Apply, for: Function do
use Quark
# Reader-style instance: apply both functions to the same input,
# then apply one result to the other
def convey(g, f), do: fn x -> curry(f).(x).(curry(g).(x)) end
end
defimpl Realm.Apply, for: List do
def convey(val_list, fun_list) when is_list(fun_list) do
Enum.flat_map(val_list, fn val ->
Enum.map(fun_list, fn fun -> fun.(val) end)
end)
end
end
# Contents must be semigroups
defimpl Realm.Apply, for: Tuple do
alias Realm.Semigroup
def convey({v, w}, {a, fun}) do
{Semigroup.append(v, a), fun.(w)}
end
def convey({v, w, x}, {a, b, fun}) do
{Semigroup.append(v, a), Semigroup.append(w, b), fun.(x)}
end
def convey({v, w, x, y}, {a, b, c, fun}) do
{Semigroup.append(v, a), Semigroup.append(w, b), Semigroup.append(x, c), fun.(y)}
end
def convey({v, w, x, y, z}, {a, b, c, d, fun}) do
{
Semigroup.append(v, a),
Semigroup.append(w, b),
Semigroup.append(x, c),
Semigroup.append(y, d),
fun.(z)
}
end
def convey(left, right) when tuple_size(left) == tuple_size(right) do
last_index = tuple_size(left) - 1
left
|> Tuple.to_list()
|> Enum.zip(Tuple.to_list(right))
|> Enum.with_index()
|> Enum.map(fn
{{arg, fun}, ^last_index} -> fun.(arg)
{{left, right}, _} -> Semigroup.append(left, right)
end)
|> List.to_tuple()
end
end
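With a `Realm.Semigroup` instance for binaries (assumed here to be string
concatenation), the tuple instance acts like a Writer: the non-function slots
accumulate, and the function in the last slot is applied to the last value:

```elixir
# Hedged sketch: relies on an assumed Realm.Semigroup implementation for BitString
Realm.Apply.convey({"doubled ", 21}, {"and done ", fn x -> x * 2 end})
#=> {"doubled and done ", 42}
```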
defmodule ApiWeb.Presence do
@moduledoc """
Provides presence tracking to channels and processes.
See the [`Phoenix.Presence`](http://hexdocs.pm/phoenix/Phoenix.Presence.html)
docs for more details.
## Usage
Presences can be tracked in your channel after joining:
defmodule Api.MyChannel do
use ApiWeb, :channel
alias ApiWeb.Presence
def join("some:topic", _params, socket) do
send(self(), :after_join)
{:ok, assign(socket, :user_id, ...)}
end
def handle_info(:after_join, socket) do
push(socket, "presence_state", Presence.list(socket))
{:ok, _} = Presence.track(socket, socket.assigns.user_id, %{
online_at: inspect(System.system_time(:second))
})
{:noreply, socket}
end
end
In the example above, `Presence.track` is used to register this
channel's process as a presence for the socket's user ID, with
a map of metadata. Next, the current presence list for
the socket's topic is pushed to the client as a `"presence_state"` event.
Finally, a diff of presence join and leave events will be sent to the
client as they happen in real-time with the "presence_diff" event.
See `Phoenix.Presence.list/2` for details on the presence data structure.
## Fetching Presence Information
The `fetch/2` callback is triggered when using `list/1`
and serves as a mechanism to fetch presence information a single time,
before broadcasting the information to all channel subscribers.
This prevents N query problems and gives you a single place to group
isolated data fetching to extend presence metadata.
The function receives a topic and map of presences and must return a
map of data matching the Presence data structure:
%{"123" => %{metas: [%{status: "away", phx_ref: ...}]},
"456" => %{metas: [%{status: "online", phx_ref: ...}]}}
The `:metas` key must be kept, but you can extend the map of information
to include any additional information. For example:
def fetch(_topic, entries) do
users = entries |> Map.keys() |> Accounts.get_users_map(entries)
# => %{"123" => %{name: "User 123"}, "456" => %{name: nil}}
for {key, %{metas: metas}} <- entries, into: %{} do
{key, %{metas: metas, user: users[key]}}
end
end
The function above fetches all users from the database who
have registered presences for the given topic. The fetched
information is then extended with a `:user` key of the user's
information, while maintaining the required `:metas` field from the
original presence data.
"""
use Phoenix.Presence,
otp_app: :api,
pubsub_server: Api.PubSub
require Logger
@doc """
Catches exits due to a `GenServer.call/2` timeout.
Note that the client still needs to handle and ignore the late GenServer reply
(a two-element tuple with a reference as the first element).
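A sketch of handling both outcomes (the topic name here is hypothetical):

```elixir
case ApiWeb.Presence.safe_list("room:lobby") do
  {:ok, presences} -> presences
  {:error, :timeout} -> %{}
end
```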
"""
@spec safe_list(Phoenix.Socket.t() | String.t()) ::
{:ok, Phoenix.Presence.presences()} | {:error, :timeout}
def safe_list(socket_or_topic) do
try do
{:ok, __MODULE__.list(socket_or_topic)}
catch
:exit, _ ->
Logger.warn("#{__MODULE__}: safe_list/1 timed out.")
{:error, :timeout}
end
end
@spec safe_update(Phoenix.Socket.t(), String.t(), map() | (map() -> map())) ::
{:ok, ref :: binary()} | {:error, :timeout | term()}
def safe_update(socket, key, meta) do
try do
__MODULE__.update(socket, key, meta)
catch
:exit, _ ->
Logger.warn("#{__MODULE__}: safe_update/3 timed out.")
{:error, :timeout}
end
end
defp base_meta(type, %{"name" => name}) when is_binary(name) do
%{
"isHost" => type == :host,
"name" => name,
"result" => []
}
end
# Builds presence metadata for the host, including the game name,
# an initial state, and a time-based seed.
def meta(type = :host, payload = %{"game" => game}) when is_binary(game) do
base_meta(type, payload)
|> Map.put("game", game)
|> Map.put("state", 0)
|> Map.put("seed", DateTime.utc_now() |> DateTime.to_unix() |> to_string())
end
def meta(type = :non_host, payload) do
base_meta(type, payload)
end
end
if match?({:module, _}, Code.ensure_compiled(ExAws.S3)) do
defmodule Trunk.Storage.S3 do
@moduledoc """
A `Trunk.Storage` implementation for Amazon’s S3 service.
"""
@behaviour Trunk.Storage
@doc ~S"""
Saves the file to Amazon S3.
- `directory` - The directory (will be combined with the `filename` to form the S3 key).
- `filename` - The name of the file (will be combined with the `directory` to form the S3 key).
- `source_path` - The full path to the file to be stored. This is a path to the uploaded file or a temporary file that has undergone transformation
- `opts` - The options for the storage system
- `bucket:` (required) The S3 bucket in which to store the object
- `ex_aws:` (optional) override options for `ex_aws`
- All other options are passed through to S3 put object which means you can pass in anything accepted by `t:ExAws.S3.put_object_opts/0` including but not limited to `:acl`, `:meta`, `:content_type`, and `:content_disposition`
## Example:
The file will be saved to s3.amazonaws.com/my-bucket/path/to/file.ext
```
Trunk.Storage.S3.save("path/to/", "file.ext", "/tmp/uploaded_file.ext", bucket: "my-bucket")
```
"""
@spec save(String.t(), String.t(), String.t(), keyword) :: :ok | {:error, :file.posix()}
def save(directory, filename, source_path, opts \\ []) do
key = directory |> Path.join(filename)
bucket = Keyword.fetch!(opts, :bucket)
ex_aws_opts = Keyword.get(opts, :ex_aws, [])
save_opts = Keyword.drop(opts, [:bucket, :ex_aws])
with {:ok, source_data} <- File.read(source_path),
{:ok, _result} <- put_object(bucket, key, source_data, save_opts, ex_aws_opts) do
:ok
else
error -> error
end
end
defp put_object(bucket, key, source_data, storage_opts, ex_aws_opts) do
bucket
|> ExAws.S3.put_object(key, source_data, storage_opts)
|> ExAws.request(ex_aws_opts)
end
@doc ~S"""
Retrieves the file from Amazon S3 and writes it to `destination_path`.
Accepts the same `bucket:` and `ex_aws:` options as `save/4`.
"""
def retrieve(directory, filename, destination_path, opts \\ []) do
key = directory |> Path.join(filename)
bucket = Keyword.fetch!(opts, :bucket)
ex_aws_opts = Keyword.get(opts, :ex_aws, [])
{:ok, %{body: data}} = get_object(bucket, key, ex_aws_opts)
File.write(destination_path, data, [:binary, :write])
end
defp get_object(bucket, key, ex_aws_opts) do
bucket
|> ExAws.S3.get_object(key)
|> ExAws.request(ex_aws_opts)
end
@doc ~S"""
Deletes the file from Amazon S3.
- `directory` - The directory (will be combined with the `filename` to form the S3 key).
- `filename` - The name of the file (will be combined with the `directory` to form the S3 key).
- `opts` - The options for the storage system
- `bucket:` (required) The S3 bucket in which to store the object
- `ex_aws:` (optional) override options for `ex_aws`
## Example:
The file will be removed from s3.amazonaws.com/my-bucket/path/to/file.ext
```
Trunk.Storage.S3.delete("path/to/", "file.ext", bucket: "my-bucket")
```
"""
@spec delete(String.t(), String.t(), keyword) :: :ok | {:error, :file.posix()}
def delete(directory, filename, opts \\ []) do
key = directory |> Path.join(filename)
bucket = Keyword.fetch!(opts, :bucket)
ex_aws_opts = Keyword.get(opts, :ex_aws, [])
bucket
|> ExAws.S3.delete_object(key)
|> ExAws.request(ex_aws_opts)
|> case do
{:ok, _} -> :ok
error -> error
end
end
@doc ~S"""
Generates a URL to the S3 object
- `directory` - The directory (will be combined with the `filename` to form the S3 key).
- `filename` - The name of the file (will be combined with the `directory` to form the S3 key).
- `opts` - The options for the storage system
- `bucket:` (required) The S3 bucket in which to store the object.
- `virtual_host:` (optional) boolean indicator whether to generate a virtual host style URL or not.
- `signed:` (optional) boolean whether to sign the URL or not.
- `ex_aws:` (optional) override options for `ex_aws`
## Example:
```
Trunk.Storage.S3.build_uri("path/to", "file.ext", bucket: "my-bucket")
#=> "https://s3.amazonaws.com/my-bucket/path/to/file.ext"
Trunk.Storage.S3.build_uri("path/to", "file.ext", bucket: "my-bucket", virtual_host: true)
#=> "https://my-bucket.s3.amazonaws.com/path/to/file.ext"
Trunk.Storage.S3.build_uri("path/to", "file.ext", bucket: "my-bucket", signed: true)
#=> "https://s3.amazonaws.com/my-bucket/path/to/file.ext?X-Amz-Algorithm=AWS4-HMAC-SHA256&…"
```
"""
def build_uri(directory, filename, opts \\ []) do
key = directory |> Path.join(filename)
bucket = Keyword.fetch!(opts, :bucket)
ex_aws_opts = Keyword.get(opts, :ex_aws, [])
config = ExAws.Config.new(:s3, ex_aws_opts)
{:ok, url} = ExAws.S3.presigned_url(config, :get, bucket, key, opts)
if Keyword.get(opts, :signed, false) do
url
else
uri = URI.parse(url)
%{uri | query: nil} |> URI.to_string()
end
end
end
end
defmodule Mgp.Utils do
@default_date Date.from_iso8601!("2016-10-01")
@default_time Time.from_iso8601!("08:00:00")
def pluck(list, []), do: list
def pluck([], _), do: []
def pluck(list, indexes), do: pluck(list, indexes, 0, [])
def pluck(_, [], _, agg), do: :lists.reverse(agg)
def pluck([], _, _, agg), do: :lists.reverse(agg)
def pluck([head | tail], [cur_idx | rest_idx] = indexes, index, agg) do
case cur_idx == index do
true -> pluck(tail, rest_idx, index + 1, [head | agg])
false -> pluck(tail, indexes, index + 1, agg)
end
end
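The `pluck` clauses above make a single pass over the list, keeping only the
elements whose zero-based position appears in the (ascending) index list:

```elixir
Mgp.Utils.pluck([:a, :b, :c, :d], [0, 2])
#=> [:a, :c]
```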
@spec dbf_to_csv(binary()) :: {any(), non_neg_integer()}
def dbf_to_csv(dbf_file) do
System.cmd("dbview", ["-d", "!", "-b", "-t", dbf_file])
end
@spec default_date() :: Date.t()
def default_date(), do: @default_date
@spec default_time() :: Time.t()
def default_time(), do: @default_time
@spec to_decimal(binary() | integer() | Decimal.t()) :: nil | Decimal.t()
def to_decimal(""), do: nil
def to_decimal(n), do: Decimal.new(n)
def to_timestamp(lmd, lmt) do
{:ok, timestamp} = NaiveDateTime.new(to_date(lmd), to_time(lmt))
timestamp
end
@spec to_date(nil | binary()) :: Date.t()
def to_date(<<y0, y1, y2, y3, m0, m1, d0, d1>>) do
Date.from_iso8601!(<<y0, y1, y2, y3, "-", m0, m1, "-", d0, d1>>)
end
def to_date(""), do: @default_date
def to_date(nil), do: @default_date
@spec to_time(nil | binary()) :: Time.t()
def to_time(""), do: @default_time
def to_time(nil), do: @default_time
def to_time(time), do: Time.from_iso8601!(time)
@spec to_integer(binary()) :: any()
def to_integer(""), do: nil
def to_integer(int) do
case String.contains?(int, ".") do
true ->
String.to_integer(hd(String.split(int, ".")))
false ->
String.to_integer(int)
end
end
@spec nil?(any()) :: any()
def nil?(""), do: nil
def nil?(string), do: string
@spec clean_string(binary()) :: binary()
def clean_string(bin) do
case String.valid?(bin) do
true ->
bin
false ->
bin
|> String.codepoints()
|> Enum.filter(&String.valid?(&1))
|> Enum.join()
end
end
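`clean_string/1` leaves valid UTF-8 untouched and otherwise drops the invalid
codepoints:

```elixir
Mgp.Utils.clean_string("ok" <> <<0xFF>>)
#=> "ok"
```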
@spec clean_line(binary()) :: binary()
def clean_line(line) do
case String.contains?(line, "\"") do
true -> String.replace(line, "\"", "")
false -> line
end
end
def format_date(d) do
:io_lib.format("~2..0B-~2..0B-~4..0B", [d.day, d.month, d.year])
end
def format_currency(n), do: Number.Delimit.number_to_delimited(n)
end
defmodule BlobFont.CLI do
@moduledoc """
BlobFont is a utility to convert BMFont files into blob
font files.
usage:
blob_font [options] [file]
Options:
* `--help`, `-h` - Prints this message and exits
* `--newline`, `-n` - Add a newline letter
* `--tab`, `-t` - Add a tab letter
* `--unicode`, `-u` - Set that the output should be unicode
* `--ascii`, `-a` - Set that the output should be ascii
* `--trim`, `-tr` - Trim the texture extension
* `--name`, `-tn NAME` - Set the full texture name
* `--letter`, `-l CHAR [-glyph x y w h] [-offset x y] [-adv advance]` - Add or modify a letter
* `--letter-remove`, `-lr CHAR` - Remove a letter
* `--bmfont` - Specify input type is BMFont
* `--stdin`, `-in` - Specify that the input will be passed through stdin
"""
def main(args \\ [], opts \\ [])
def main([cmd|_], _) when cmd in ["-h", "--help"], do: help()
def main([cmd|args], opts) when cmd in ["-n", "--newline"], do: main(args, [{ :letter, { ?\n, [] } }|opts])
def main([cmd|args], opts) when cmd in ["-t", "--tab"], do: main(args, [{ :letter, { ?\t, [] } }|opts])
def main([cmd|args], opts) when cmd in ["-u", "--unicode"], do: main(args, [{ :unicode, true }|opts])
def main([cmd|args], opts) when cmd in ["-a", "--ascii"], do: main(args, [{ :unicode, false }|opts])
def main([cmd|args], opts) when cmd in ["-tr", "--trim"], do: main(args, [{ :trim, true }|opts])
def main([cmd, name|args], opts) when cmd in ["-tn", "--name"], do: main(args, [{ :name, name }|opts])
def main([cmd, char|args], opts) when cmd in ["-l", "--letter"] do
{ args, letter_opts } = letter_options(args)
main(args, [{ :letter, { String.to_charlist(char) |> hd, letter_opts } }|opts])
end
def main([cmd, char|args], opts) when cmd in ["-lr", "--letter-remove"] do
main(args, [{ :letter, { String.to_charlist(char) |> hd, :remove } }|opts])
end
def main([cmd|args], opts) when cmd in ["--bmfont"], do: main(args, [{ :type, :bmfont }|opts])
def main([cmd|args], opts) when cmd in ["-in", "--stdin"], do: main(args, [{ :stdin, true }|opts])
def main([file], opts), do: convert(File.read!(file), Path.extname(file), opts)
def main([], opts) do
:ok = :io.setopts(:standard_io, encoding: :latin1)
case opts[:stdin] && IO.binread(:all) do
data when is_binary(data) and bit_size(data) > 0 ->
:ok = :io.setopts(:standard_io, encoding: :utf8)
convert(data, "", opts)
_ ->
:ok = :io.setopts(:standard_io, encoding: :utf8)
help()
end
end
def main(_, _), do: help()
defp letter_options(args, opts \\ [])
defp letter_options(["-glyph", x, y, w, h|args], opts), do: letter_options(args, [{ :glyph, { to_integer(x), to_integer(y), to_integer(w), to_integer(h) } }|opts])
defp letter_options(["-offset", x, y|args], opts), do: letter_options(args, [{ :offset, { to_integer(x), to_integer(y) } }|opts])
defp letter_options(["-adv", x|args], opts), do: letter_options(args, [{ :advance, to_integer(x) }|opts])
defp letter_options(args, opts), do: { args, opts }
defp to_integer(value) do
{ value, _ } = Integer.parse(value)
value
end
defp help(), do: get_docs() |> SimpleMarkdown.convert(render: &SimpleMarkdownExtensionCLI.Formatter.format/1) |> IO.puts
defp get_docs() do
if Version.match?(System.version, "> 1.7.0") do
{ :docs_v1, _, :elixir, "text/markdown", %{ "en" => doc }, _, _ } = Code.fetch_docs(__MODULE__)
doc
else
{ _, doc } = Code.get_docs(__MODULE__, :moduledoc)
doc
end
end
defp modify_bmfont_char(char, [{ :glyph, { x, y, w, h } }|opts]), do: modify_bmfont_char(%{ char | x: x, y: y, width: w, height: h }, opts)
defp modify_bmfont_char(char, [{ :offset, { x, y } }|opts]), do: modify_bmfont_char(%{ char | xoffset: x, yoffset: y }, opts)
defp modify_bmfont_char(char, [{ :advance, x }|opts]), do: modify_bmfont_char(%{ char | xadvance: x }, opts)
defp modify_bmfont_char(char, []), do: char
defp modify_bmfont_char([%{ id: id }|chars], { id, :remove }, new_chars), do: new_chars ++ chars
defp modify_bmfont_char([char = %{ id: id }|chars], { id, opts }, new_chars), do: [modify_bmfont_char(char, opts)|new_chars] ++ chars
defp modify_bmfont_char([char|chars], letter, new_chars), do: modify_bmfont_char(chars, letter, [char|new_chars])
defp modify_bmfont_char([], { id, opts }, new_chars), do: [modify_bmfont_char(%BMFont.Char{ id: id }, opts)|new_chars]
defp modify_bmfont(font, [{ :letter, letter }|opts]), do: Map.update!(font, :chars, &modify_bmfont_char(&1, letter, [])) |> modify_bmfont(opts)
defp modify_bmfont(font, [{ :unicode, unicode }|opts]), do: Map.update!(font, :info, &(%{ &1 | unicode: unicode })) |> modify_bmfont(opts)
defp modify_bmfont(font, [{ :name, name }|opts]), do: Map.update!(font, :pages, &Enum.map(&1, fn page -> %{ page | file: name } end)) |> modify_bmfont(opts)
defp modify_bmfont(font, [{ :trim, true }|opts]), do: Map.update!(font, :pages, &Enum.map(&1, fn page -> %{ page | file: Path.rootname(page.file) } end)) |> modify_bmfont(opts)
defp modify_bmfont(font, [_|opts]), do: modify_bmfont(font, opts)
defp modify_bmfont(font, []), do: font
defp type_for_content(<<"BMF", _ :: binary>>), do: :bmfont
defp type_for_content(_), do: nil
defp type_for_extension(_), do: nil
defp input_type(input, ext, opts) do
with nil <- opts[:type],
nil <- type_for_extension(ext),
nil <- type_for_content(input) do
:bmfont
else
type -> type
end
end
defp convert(input, ext, opts) do
case input_type(input, ext, opts) do
:bmfont -> BMFont.parse(input) |> modify_bmfont(opts)
end
|> BlobFont.convert
|> IO.puts
end
end
defmodule IO do
@moduledoc """
Module responsible for doing IO. The functions in this
module expect iodata as argument, encoded in UTF-8.
An iodata can be:
* A list of integers representing a string. Any unicode
character must be represented with one entry in the list,
this entry being an integer with the codepoint value;
* A binary in which unicode characters are represented
with many bytes (Elixir's default representation);
* A list of binaries or a list of char lists (as described above);
* If none of the above, `to_binary` is invoked on the
given argument.
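For example, each of these calls writes the same three characters:

```elixir
IO.write "abc"
IO.write 'abc'
IO.write ["a", 'b', ?c]
```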
"""
@doc """
Reads `count` bytes from the IO device. It returns:
* `data` - The input characters.
* :eof - End of file was encountered.
* {:error, reason} - Other (rare) error condition,
for instance {:error, :estale} if reading from an
NFS file system.
"""
def read(device // :stdio, count) do
Erlang.io.get_chars(map_dev(device), "", count)
end
@doc """
Read a line from the IO device. It returns:
* `data` - The input characters.
* :eof - End of file was encountered.
* {:error, reason} - Other (rare) error condition,
for instance {:error, :estale} if reading from an
NFS file system.
This function does the same as `gets/2`,
except the prompt is not required as argument.
"""
def readline(device // :stdio) do
Erlang.io.get_line(map_dev(device), "")
end
@doc """
Writes the given argument to the given device.
By default the device is the standard output.
The argument is expected to be a chardata (i.e.
a char list or an unicode binary).
It returns `:ok` if it succeeds.
## Examples
IO.write "sample"
#=> "sample"
IO.write :stderr, "error"
#=> "error"
"""
def write(device // :stdio, item) do
Erlang.io.put_chars map_dev(device), to_iodata(item)
end
@doc """
Writes the argument to the device, similarly to `write/2`,
but adds a newline at the end. The argument is expected
to be chardata.
"""
def puts(device // :stdio, item) do
erl_dev = map_dev(device)
Erlang.io.put_chars erl_dev, to_iodata(item)
Erlang.io.nl(erl_dev)
end
@doc """
Inspects and writes the given argument to the device
followed by a new line. Returns the item given.
"""
def inspect(device // :stdio, item) do
puts device, Kernel.inspect(item)
item
end
@doc """
Gets `count` bytes from the IO device. It returns:
* `data` - The input characters.
* :eof - End of file was encountered.
* {:error, reason} - Other (rare) error condition,
for instance {:error, :estale} if reading from an
NFS file system.
"""
def getb(device // :stdio, prompt, count // 1) do
Erlang.io.get_chars(map_dev(device), to_iodata(prompt), count)
end
@doc """
Reads a line from the IO device. It returns:
* `data` - The characters in the line terminated
by a LF (or end of file).
* :eof - End of file was encountered.
* {:error, reason} - Other (rare) error condition,
for instance {:error, :estale} if reading from an
NFS file system.
"""
def gets(device // :stdio, prompt) do
Erlang.io.get_line(map_dev(device), to_iodata(prompt))
end
# Map the Elixir names for standard io and error to Erlang names
defp map_dev(:stdio), do: :standard_io
defp map_dev(:stderr), do: :standard_error
defp map_dev(other), do: other
defp to_iodata(io) when is_list(io) or is_binary(io), do: io
defp to_iodata(other), do: to_binary(other)
end
defmodule OpenLocationCode do
@pair_code_length 10
@separator "+"
@separator_position 8
@padding "0"
@latitude_max 90
@longitude_max 180
@code_alphabet "23456789CFGHJMPQRVWX"
#The resolution values in degrees for each position in the lat/lng pair
#encoding. These give the place value of each position, and therefore the
#dimensions of the resulting area.
@pair_resolutions [20.0, 1.0, 0.05, 0.0025, 0.000125]
@moduledoc """
Open Location Code (OLC) is a geocoding system for identifying an area anywhere on planet Earth. Originally developed in
2014, OLCs are also called "plus codes". Nearby locations have similar codes, and they can be encoded and decoded offline.
As blocks are refined to a smaller and smaller area, the number of trailing zeros in a plus code will shrink.
For more information on the OLC specification, check the [OLC Wikipedia entry](https://en.wikipedia.org/wiki/Open_Location_Code)
There are two main functions in this module--encoding and decoding.
"""
@doc """
Encodes a location into an Open Location Code string.
Produces a code of the specified length, or the default length if no length
is provided. The length determines the accuracy of the code. The default length is
10 characters, returning a code of approximately 13.5x13.5 meters. Longer
codes represent smaller areas, but lengths > 14 refer to areas smaller than the accuracy of
most devices.
Latitude is in signed decimal degrees and will be clipped to the range -90 to 90. Longitude
is in signed decimal degrees and will be clipped to the range -180 to 180.
## Examples
iex> OpenLocationCode.encode(20.375,2.775, 6)
"7FG49Q00+"
iex> OpenLocationCode.encode(20.3700625,2.7821875)
"7FG49QCJ+2V"
"""
def encode(latitude, longitude, code_length \\ @pair_code_length) do
latitude = clip_latitude(latitude)
longitude = normalize_longitude(longitude)
latitude = if latitude == 90 do
latitude - precision_by_length(code_length)
else
latitude
end
encode_pairs(latitude + @latitude_max, longitude + @longitude_max, code_length, "", 0)
end
@doc """
Decodes a code string into an `OpenLocationCode.CodeArea` struct
## Examples
iex> OpenLocationCode.decode("6PH57VP3+PR")
%OpenLocationCode.CodeArea{lat_resolution: 1.25e-4,
long_resolution: 1.25e-4,
south_latitude: 1.2867499999999998,
west_longitude: 103.85449999999999}
"""
def decode(olcstring) do
code = clean_code(olcstring)
{south_lat, west_long, lat_res, long_res} = decode_location(code)
%OpenLocationCode.CodeArea{south_latitude: south_lat,
west_longitude: west_long,
lat_resolution: lat_res,
long_resolution: long_res}
end
# Codec functions
defp encode_pairs(adj_latitude, adj_longitude, code_length, code, digit_count) when digit_count < code_length do
place_value = (digit_count / 2)
|> floor
|> resolution_for_pos
{ncode, adj_latitude} = append_code(code, adj_latitude, place_value)
digit_count = digit_count + 1
{ncode, adj_longitude} = append_code(ncode, adj_longitude, place_value)
digit_count = digit_count + 1
# Add the separator once its position is reached within the code
ncode = if digit_count == @separator_position and digit_count < code_length do
ncode <> @separator
else
ncode
end
encode_pairs(adj_latitude, adj_longitude, code_length, ncode, digit_count)
end
defp encode_pairs(_, _, code_length, code, digit_count) when digit_count == code_length do
code
|> pad_trailing
|> ensure_separator
end
defp append_code(code, adj_coord, place_value) do
digit_value = floor(adj_coord / place_value)
adj_coord = adj_coord - (digit_value * place_value)
code = code <> String.at(@code_alphabet, digit_value)
{ code, adj_coord }
end
defp pad_trailing(code) do
if String.length(code) < @separator_position do
String.pad_trailing(code, @separator_position, @padding)
else
code
end
end
defp ensure_separator(code) do
if String.length(code) == @separator_position do
code <> @separator
else
code
end
end
defp floor(num) when is_number(num) do
Kernel.trunc(:math.floor(num))
end
defp resolution_for_pos(position) do
Enum.at(@pair_resolutions, position)
end
defp clip_latitude(latitude) do
Kernel.min(90, Kernel.max(-90, latitude))
end
defp normalize_longitude(longitude) do
case longitude do
l when l < -180 -> normalize_longitude(l + 360)
l when l > 180 -> normalize_longitude(l - 360)
l -> l
end
end
defp precision_by_length(code_length) do
if code_length <= @pair_code_length do
:math.pow(20, (div(code_length,-2)) + 2)
else
:math.pow(20,-3) / (:math.pow(5,(code_length - @pair_code_length)))
end
end
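# Sanity check (a sketch of the arithmetic above): for the default 10-character
# code, div(10, -2) + 2 == -3, so the precision is 20^-3 = 0.000125 degrees of
# latitude, roughly 14 meters, matching the ~13.5x13.5 m area quoted in the
# encode docs.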
defp clean_code(code) do
code |> String.replace(@separator, "") |> String.replace_trailing(@padding, "")
end
defp decode_location(code) do
_decode_location(0, code, String.length(code), -90.0, -180.0, 400.0, 400.0)
end
defp _decode_location(digit, code, code_length, south_lat, west_long, lat_res, long_res) when digit < code_length do
code_at_digit = String.at(code, digit)
if digit < @pair_code_length do
code_at_digit1 = String.at(code, digit+1)
lat_res = lat_res / 20
long_res = long_res / 20
south_lat = south_lat + (lat_res * index_of_codechar(code_at_digit))
west_long = west_long + (long_res * index_of_codechar(code_at_digit1))
_decode_location(digit + 2, code, code_length, south_lat, west_long, lat_res, long_res)
else
lat_res = lat_res / 5
long_res = long_res / 4
row = div(index_of_codechar(code_at_digit), 4)
col = rem(index_of_codechar(code_at_digit), 4)
south_lat = south_lat + (lat_res * row)
west_long = west_long + (long_res * col)
_decode_location(digit + 1, code, code_length, south_lat, west_long, lat_res, long_res)
end
end
defp _decode_location(digit, _, code_length, south_lat, west_long, lat_res, long_res) when digit == code_length do
{south_lat, west_long, lat_res, long_res}
end
defp index_of_codechar(codechar) do
{index, _} = :binary.match(@code_alphabet, codechar)
index
end
end
# source: lib/openlocationcode.ex
defmodule RayTracer.Light do
@moduledoc """
This module defines light sources
"""
alias RayTracer.RTuple
alias RayTracer.Color
alias RayTracer.Material
alias RayTracer.Pattern
alias RayTracer.Shape
import RTuple, only: [normalize: 1, reflect: 2]
@type t :: %__MODULE__{
position: RTuple.point,
intensity: Color.t
}
defstruct [:position, :intensity]
@doc """
Builds a light source with no size, existing at a single point in space.
It also has a color intensity.
"""
@spec point_light(RTuple.point, Color.t) :: t
def point_light(position, intensity) do
%__MODULE__{position: position, intensity: intensity}
end
@doc """
Computes a resulting color after applying the Phong shading model.
`material` - material definition for the illuminated object.
`position` - point being illuminated
`light` - light source
`eyev` - eye vector
`normalv` - surface normal vector
`in_shadow` - indicates whether the point is in shadow of an object
"""
@spec lighting(Material.t, Shape.t, t, RTuple.point, RTuple.vector, RTuple.vector, boolean) :: Color.t
def lighting(material, object, light, position, eyev, normalv, in_shadow) do
# Combine the surface color with the light's color/intensity
effective_color =
material
|> material_color_at(object, position)
|> Color.hadamard_product(light.intensity)
# Find the direction to the light source
lightv = light.position |> RTuple.sub(position) |> normalize
# Compute the ambient contribution
ambient = effective_color |> Color.mul(material.ambient)
if in_shadow do
ambient
else
# `light_dot_normal` represents the cosine of the angle between the
# light vector and the normal vector. A negative number means the
# light is on the other side of the surface.
light_dot_normal = RTuple.dot(lightv, normalv)
diffuse = calc_diffuse(light_dot_normal, effective_color, material)
specular = calc_specular(light_dot_normal, lightv, normalv, eyev, material, light)
ambient |> Color.add(diffuse) |> Color.add(specular)
end
end
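# The sum above is the standard Phong model (sketch; `c` is the effective
# color, `m` the material, and the dot products are the cosines computed in
# the helpers below):
#
#   ambient  = c * m.ambient
#   diffuse  = c * m.diffuse * dot(lightv, normalv)    # black when the light is behind the surface
#   specular = light.intensity * m.specular * dot(reflectv, eyev)^m.shininess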
defp material_color_at(material, object, point) do
if material.pattern do
material.pattern |> Pattern.pattern_at_shape(object, point)
else
material.color
end
end
defp calc_diffuse(light_dot_normal, _, _) when light_dot_normal < 0, do: Color.black
defp calc_diffuse(light_dot_normal, effective_color, material) do
effective_color |> Color.mul(material.diffuse * light_dot_normal)
end
defp calc_specular(light_dot_normal, _, _, _, _, _) when light_dot_normal < 0, do: Color.black
defp calc_specular(_, lightv, normalv, eyev, material, light) do
# `reflect_dot_eye` represents the cosine of the angle between the
# reflection vector and the eye vector. A negative number means the
# light reflects away from the eye.
reflectv = lightv |> RTuple.negate() |> reflect(normalv)
reflect_dot_eye = reflectv |> RTuple.dot(eyev)
if reflect_dot_eye <= 0 do
Color.black
else
# compute the specular contribution
factor = :math.pow(reflect_dot_eye, material.shininess)
light.intensity |> Color.mul(material.specular * factor)
end
end
end
# source: lib/light.ex
defmodule Prometheus.Collector do
@moduledoc """
A collector for a set of metrics.
Normal users should use `Prometheus.Metric.Gauge`, `Prometheus.Metric.Counter`,
`Prometheus.Metric.Summary`
and `Prometheus.Metric.Histogram`.
Implementing `:prometheus_collector` behaviour is for advanced uses such as proxying
metrics from another monitoring system.
It is the responsibility of the implementer to ensure produced metrics are valid.
You will be working with Prometheus data model directly (see `Prometheus.Model` ).
Callbacks:
- `collect_mf(registry, callback)` - called by exporters and formats.
Should call `callback` for each `MetricFamily` of this collector;
- `collect_metrics(name, data)` - called by `MetricFamily` constructor.
Should return Metric list for each MetricFamily identified by `name`.
`data` is a term associated with MetricFamily by collect_mf.
- `deregister_cleanup(registry)` - called when collector unregistered by
`registry`. If collector is stateful you can put cleanup code here.
Example (simplified [`:prometheus_vm_memory_collector`](https://github.com/deadtrickster/prometheus.erl/blob/master/doc/prometheus_vm_memory_collector.md)):
```
iex(3)> defmodule Prometheus.VMMemoryCollector do
...(3)> use Prometheus.Collector
...(3)>
...(3)> @labels [:processes, :atom, :binary, :code, :ets]
...(3)>
...(3)> def collect_mf(_registry, callback) do
...(3)> memory = :erlang.memory()
...(3)> callback.(create_gauge(
...(3)> :erlang_vm_bytes_total,
...(3)> "The total amount of memory currently allocated.",
...(3)> memory))
...(3)> :ok
...(3)> end
...(3)>
...(3)> def collect_metrics(:erlang_vm_bytes_total, memory) do
...(3)> Prometheus.Model.gauge_metrics(
...(3)> for label <- @labels do
...(3)> {[type: label], memory[label]}
...(3)> end)
...(3)> end
...(3)>
...(3)> defp create_gauge(name, help, data) do
...(3)> Prometheus.Model.create_mf(name, help, :gauge, __MODULE__, data)
...(3)> end
...(3)> end
iex(4)> Prometheus.Registry.register_collector(Prometheus.VMMemoryCollector)
:ok
iex(5)> r = ~r/# TYPE erlang_vm_bytes_total gauge
...(5)> # HELP erlang_vm_bytes_total
...(5)> The total amount of memory currently allocated.
...(5)> erlang_vm_bytes_total{type=\"processes\"} [1-9]
...(5)> erlang_vm_bytes_total{type=\"atom\"} [1-9]
...(5)> erlang_vm_bytes_total{type=\"binary\"} [1-9]
...(5)> erlang_vm_bytes_total{type=\"code\"} [1-9]
...(5)> erlang_vm_bytes_total{type=\"ets\"} [1-9]/
iex(6)> Regex.match?(r, Prometheus.Format.Text.format)
true
```
"""
defmacro __using__(_opts) do
quote location: :keep do
@behaviour :prometheus_collector
require Prometheus.Error
require Prometheus.Model
def deregister_cleanup(_registry) do
:ok
end
defoverridable deregister_cleanup: 1
end
end
use Prometheus.Erlang, :prometheus_collector
@doc """
Calls `callback` for each MetricFamily of this collector.
"""
delegate collect_mf(registry \\ :default, collector, callback)
end
# source: astreu/deps/prometheus_ex/lib/prometheus/collector.ex
defmodule VintageNet.RouteManager do
use GenServer
require Logger
alias VintageNet.Interface.Classification
alias VintageNet.Route.{Calculator, InterfaceInfo, IPRoute, Properties}
@moduledoc """
This module manages the default route.
Devices with more than one network interface may have more than one
way of reaching the Internet. The routing table decides which interface
an IP packet should use by looking at the "default route" entries.
One interface is chosen.
Since not all interfaces are equal, we'd like Linux to pick the
fastest and lowest-latency one. For example, one could
prefer wired Ethernet over WiFi and prefer WiFi over a cellular
connection. This module lets you specify an ordering for interfaces
and sets up the routes based on this ordering.
This module also handles networking failures. One failure that
Linux can't figure out on its own is whether an interface can
reach the Internet. Internet reachability is handled elsewhere
like in the `ConnectivityChecker` module. This module should be
told reachability status so that it can properly order default
routes so that the best reachable interface is used.
IMPORTANT: This module uses priority-based routing. Make sure the
following kernel options are enabled:
```text
CONFIG_IP_ADVANCED_ROUTER=y
CONFIG_IP_MULTIPLE_TABLES=y
```
"""
defmodule State do
@moduledoc false
defstruct prioritization: nil, interfaces: nil, route_state: nil, routes: []
end
@doc """
Start the route manager.
"""
@spec start_link(keyword) :: GenServer.on_start()
def start_link(args) do
GenServer.start_link(__MODULE__, args, name: __MODULE__)
end
@doc """
Stop the route manager.
"""
@spec stop() :: :ok
def stop() do
GenServer.stop(__MODULE__)
end
@doc """
Set the default route for an interface.
This replaces any existing routes on that interface
"""
@spec set_route(
VintageNet.ifname(),
[{:inet.ip_address(), VintageNet.prefix_length()}],
:inet.ip_address(),
Classification.connection_status()
) ::
:ok
def set_route(ifname, ip_subnets, route, status \\ :lan) do
GenServer.call(__MODULE__, {:set_route, ifname, ip_subnets, route, status})
end
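# Illustrative call (hypothetical addresses; assumes the RouteManager process
# is running and that `:internet` is a valid connection status):
#
#     RouteManager.set_route("eth0", [{{192, 168, 1, 5}, 24}], {192, 168, 1, 1}, :internet)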
@doc """
Set the connection status on an interface.
Changing the connection status can re-prioritize routing. The
specified interface doesn't need to have a default route.
"""
@spec set_connection_status(VintageNet.ifname(), Classification.connection_status()) :: :ok
def set_connection_status(ifname, status) do
GenServer.call(__MODULE__, {:set_connection_status, ifname, status})
end
@doc """
Clear out the default gateway for an interface.
"""
@spec clear_route(VintageNet.ifname()) :: :ok
def clear_route(ifname) do
GenServer.call(__MODULE__, {:clear_route, ifname})
end
@doc """
Set the order that default gateways should be used
The list is ordered from highest priority to lowest
"""
@spec set_prioritization([Classification.prioritization()]) :: :ok
def set_prioritization(priorities) do
GenServer.call(__MODULE__, {:set_prioritization, priorities})
end
## GenServer
@impl true
def init(_args) do
# Fresh slate
IPRoute.clear_all_routes()
IPRoute.clear_all_rules(Calculator.rule_table_index_range())
state =
%State{
prioritization: Classification.default_prioritization(),
interfaces: %{},
route_state: Calculator.init()
}
|> update_route_tables()
{:ok, state}
end
@impl true
def handle_call({:set_route, ifname, ip_subnets, default_gateway, status}, _from, state) do
_ = Logger.info("RouteManager: set_route #{ifname} -> #{inspect(status)}")
ifentry = %InterfaceInfo{
interface_type: Classification.to_type(ifname),
ip_subnets: ip_subnets,
default_gateway: default_gateway,
status: status
}
new_state =
put_in(state.interfaces[ifname], ifentry)
|> update_route_tables()
{:reply, :ok, new_state}
end
@impl true
def handle_call({:set_connection_status, ifname, status}, _from, state) do
new_state =
state
|> update_connection_status(ifname, status)
{:reply, :ok, new_state}
end
@impl true
def handle_call({:clear_route, ifname}, _from, state) do
new_state =
if Map.has_key?(state.interfaces, ifname) do
_ = Logger.info("RouteManager: clear_route #{ifname}")
%{state | interfaces: Map.delete(state.interfaces, ifname)}
|> update_route_tables()
else
state
end
{:reply, :ok, new_state}
end
@impl true
def handle_call({:set_prioritization, priorities}, _from, state) do
new_state =
state
|> Map.put(:prioritization, priorities)
|> update_route_tables()
{:reply, :ok, new_state}
end
# Only process routes if the status changes
defp update_connection_status(
%State{interfaces: interfaces} = state,
ifname,
new_status
) do
case interfaces[ifname] do
nil ->
state
ifentry ->
if ifentry.status != new_status do
_ =
Logger.info("RouteManager: set_connection_status #{ifname} -> #{inspect(new_status)}")
put_in(state.interfaces[ifname].status, new_status)
|> update_route_tables()
else
state
end
end
end
defp update_route_tables(state) do
# See what changed and then run it.
{new_route_state, new_routes} =
Calculator.compute(state.route_state, state.interfaces, state.prioritization)
route_delta = List.myers_difference(state.routes, new_routes)
# Update Linux's routing tables
Enum.each(route_delta, &handle_delta/1)
# Update the global routing properties in the property table
Properties.update_available_interfaces(new_routes)
Properties.update_best_connection(state.interfaces)
%{state | route_state: new_route_state, routes: new_routes}
end
defp handle_delta({:eq, _anything}), do: :ok
defp handle_delta({:del, deletes}) do
Enum.each(deletes, &handle_delete/1)
end
defp handle_delta({:ins, inserts}) do
Enum.each(inserts, &handle_insert/1)
end
defp handle_delete({:default_route, ifname, _default_gateway, _metric, table_index}) do
IPRoute.clear_a_route(ifname, table_index)
|> warn_on_error("clear_a_route")
end
defp handle_delete({:local_route, ifname, address, subnet_bits, metric, table_index}) do
IPRoute.clear_a_local_route(ifname, address, subnet_bits, metric, table_index)
|> warn_on_error("clear_a_local_route")
end
defp handle_delete({:rule, table_index, _address}) do
IPRoute.clear_a_rule(table_index)
|> warn_on_error("clear_a_rule")
end
defp handle_insert({:default_route, ifname, default_gateway, metric, table_index}) do
:ok = IPRoute.add_default_route(ifname, default_gateway, metric, table_index)
end
defp handle_insert({:rule, table_index, address}) do
:ok = IPRoute.add_rule(address, table_index)
end
defp handle_insert({:local_route, ifname, address, subnet_bits, metric, table_index}) do
if table_index == :main do
# HACK: Delete automatically created local routes that have a 0 metric
_ = IPRoute.clear_a_local_route(ifname, address, subnet_bits, 0, :main)
:ok
end
:ok = IPRoute.add_local_route(ifname, address, subnet_bits, metric, table_index)
end
defp warn_on_error(:ok, _label), do: :ok
defp warn_on_error({:error, reason}, label) do
Logger.warn("route_manager(#{label}): ignoring failure #{inspect(reason)}")
end
end
# source: lib/vintage_net/route_manager.ex
defmodule P6 do
@moduledoc """
For the primes in the interval [K, N], use a two-pointer (sliding-window) search to find the longest sequence satisfying the hash function.
# Examples
> P6.solve(2, 2)
2
> P6.solve(1, 11)
3
> P6.solve(10, 100)
31
"""
import Integer, only: [is_odd: 1]
def main do
k = IO.read(:line) |> String.trim() |> String.to_integer()
n = IO.read(:line) |> String.trim() |> String.to_integer()
solve(k, n) |> IO.puts
end
defmodule Prime do
defstruct value: 2
def next(prime \\ %__MODULE__{value: 2})
def next(%__MODULE__{value: 2}), do: %__MODULE__{value: 3}
def next(%__MODULE__{value: n} = prime) do
if is_prime(n + 2),
do: %{prime | value: n + 2},
else: next(%{prime | value: n + 2})
end
def first(1), do: 2
def first(2), do: 2
def first(3), do: 3
def first(n) when is_odd(n) do
if is_prime(n), do: n, else: first(n + 2)
end
def first(n), do: first(n + 1)
@doc """
Determines whether a number is prime.
# Examples
iex> Prime.is_prime(1)
false
iex> [2, 3, 5, 7, 11, 13, 17, 19]
...> |> Enum.map(&Prime.is_prime/1)
[true, true, true, true, true, true, true, true]
iex> Prime.is_prime(4)
false
iex> Prime.is_prime(24)
false
iex> Prime.is_prime(58)
false
"""
def is_prime(n)
def is_prime(n) when n < 2, do: false
def is_prime(2), do: true
def is_prime(3), do: true
def is_prime(n) when 3 < n do
if rem(n, 2) == 0, do: false, else: is_prime(n, 3)
end
defp is_prime(n, i) when i * i <= n do
if rem(n, i) == 0, do: false, else: is_prime(n, i + 2)
end
defp is_prime(_, _), do: true
defimpl Enumerable do
def count(_), do: {:error, __MODULE__}
def member?(_, _), do: {:error, __MODULE__}
def slice(_), do: {:error, __MODULE__}
def reduce(_prime, {:halt, acc}, _fun), do: {:halted, acc}
def reduce(prime, {:suspend, acc}, fun), do: {:suspended, acc, &reduce(prime, &1, fun)}
def reduce(%{value: v} = prime, {:cont, acc}, fun) do
reduce(Prime.next(prime), fun.(v, acc), fun)
end
end
end
def solve(k, n) do
%Prime{value: Prime.first(k)}
|> Enum.reduce_while({%{}, nil, nil}, fn
prime, acc when n < prime ->
{:halt, acc}
prime, {memo, nil, max} ->
memo = Map.put(memo, prime, MapSet.new([frank_hash(prime)]))
{:cont, {memo, [prime], max}}
prime, {memo, start_with, max} ->
hashed_value = frank_hash(prime)
memo = Map.put(memo, prime, MapSet.new([hashed_value]))
{memo, max, complete} = start_with
|> Enum.reduce({memo, max, []}, fn target, {memo, max, complete} ->
if MapSet.member?(memo[target], hashed_value) do
with max when not is_nil(max) <- max,
true <- MapSet.size(memo[max]) <= MapSet.size(memo[target]),
true <- max < target do
{memo, target, [max | complete]}
else
nil ->
{memo, target, complete}
_ ->
{memo, max, [target | complete]}
end
else
memo = Map.put(memo, target, MapSet.put(memo[target], hashed_value))
{memo, max, complete}
end
end)
{
:cont,
{
Enum.reduce(complete, memo, fn
v, memo when v == max -> memo
v, memo -> Map.delete(memo, v)
end),
Enum.reject([prime | start_with], fn v -> v in complete end),
max
}
}
end)
|> (fn {memo, start_with, max} ->
start_with
|> Enum.reduce({memo, max}, fn
target, {memo, max} ->
with max when not is_nil(max) <- max,
true <- MapSet.size(memo[max]) <= MapSet.size(memo[target]),
true <- max < target do
{memo, target}
else
nil ->
{memo, target}
_ ->
{memo, max}
end
end)
|> elem(1)
end).()
end
def frank_hash(n) when n < 10, do: n
def frank_hash(n), do: n |> Integer.digits() |> Enum.sum() |> frank_hash()
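# frank_hash/1 computes the digital root: it sums decimal digits until a
# single digit remains. For example, frank_hash(199): 1 + 9 + 9 = 19, then
# 1 + 9 = 10, then 1 + 0 = 1.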
@doc """
An approach that pulls primes out one at a time using `Stream`.
"""
def stream(n, k) do
n..k
|> Stream.filter(fn v -> Prime.is_prime(v) end)
|> Enum.reduce({%{}, nil, nil}, fn
prime, {memo, nil, max} ->
memo = Map.put(memo, prime, MapSet.new([frank_hash(prime)]))
{memo, [prime], max}
prime, {memo, start_with, max} ->
hashed_value = frank_hash(prime)
memo = Map.put(memo, prime, MapSet.new([hashed_value]))
{memo, max, complete} = start_with
|> Enum.reduce({memo, max, []}, fn target, {memo, max, complete} ->
if MapSet.member?(memo[target], hashed_value) do
with max when not is_nil(max) <- max,
true <- MapSet.size(memo[max]) <= MapSet.size(memo[target]),
true <- max < target do
{memo, target, [max | complete]}
else
nil ->
{memo, target, complete}
_ ->
{memo, max, [target | complete]}
end
else
memo = Map.put(memo, target, MapSet.put(memo[target], hashed_value))
{memo, max, complete}
end
end)
{
Enum.reduce(complete, memo, fn
v, memo when v == max -> memo
v, memo -> Map.delete(memo, v)
end),
Enum.reject([prime | start_with], fn v -> v in complete end),
max
}
end)
|> (fn {memo, start_with, max} ->
start_with
|> Enum.reduce({memo, max}, fn
target, {memo, max} ->
with max when not is_nil(max) <- max,
true <- MapSet.size(memo[max]) <= MapSet.size(memo[target]),
true <- max < target do
{memo, target}
else
nil ->
{memo, target}
_ ->
{memo, max}
end
end)
|> elem(1)
end).()
end
@doc """
An approach that builds the list of primes in the range up front.
"""
def prime(n, k) do
%Prime{}
|> to_list([], n, k)
|> Enum.reduce({%{}, nil, nil}, fn
prime, {memo, nil, max} ->
memo = Map.put(memo, prime, MapSet.new([frank_hash(prime)]))
{memo, [prime], max}
prime, {memo, start_with, max} ->
hashed_value = frank_hash(prime)
memo = Map.put(memo, prime, MapSet.new([hashed_value]))
{memo, max, complete} = start_with
|> Enum.reduce({memo, max, []}, fn target, {memo, max, complete} ->
if MapSet.member?(memo[target], hashed_value) do
with max when not is_nil(max) <- max,
true <- MapSet.size(memo[max]) <= MapSet.size(memo[target]),
true <- max < target do
{memo, target, [max | complete]}
else
nil ->
{memo, target, complete}
_ ->
{memo, max, [target | complete]}
end
else
memo = Map.put(memo, target, MapSet.put(memo[target], hashed_value))
{memo, max, complete}
end
end)
{
Enum.reduce(complete, memo, fn
v, memo when v == max -> memo
v, memo -> Map.delete(memo, v)
end),
Enum.reject([prime | start_with], fn v -> v in complete end),
max
}
end)
|> (fn {memo, start_with, max} ->
start_with
|> Enum.reduce({memo, max}, fn
target, {memo, max} ->
with max when not is_nil(max) <- max,
true <- MapSet.size(memo[max]) <= MapSet.size(memo[target]),
true <- max < target do
{memo, target}
else
nil ->
{memo, target}
_ ->
{memo, max}
end
end)
|> elem(1)
end).()
end
@doc """
An approach that first finds the primes at or above k, then pulls them out with a `Stream`.
"""
def prime_n(k, n) do
%Prime{value: Prime.first(k)}
|> Stream.take_while(&(&1 <= n))
|> Enum.reduce({%{}, nil, nil}, fn
prime, {memo, nil, max} ->
memo = Map.put(memo, prime, MapSet.new([frank_hash(prime)]))
{memo, [prime], max}
prime, {memo, start_with, max} ->
hashed_value = frank_hash(prime)
memo = Map.put(memo, prime, MapSet.new([hashed_value]))
{memo, max, complete} = start_with
|> Enum.reduce({memo, max, []}, fn target, {memo, max, complete} ->
if MapSet.member?(memo[target], hashed_value) do
with max when not is_nil(max) <- max,
true <- MapSet.size(memo[max]) <= MapSet.size(memo[target]),
true <- max < target do
{memo, target, [max | complete]}
else
nil ->
{memo, target, complete}
_ ->
{memo, max, [target | complete]}
end
else
memo = Map.put(memo, target, MapSet.put(memo[target], hashed_value))
{memo, max, complete}
end
end)
{
Enum.reduce(complete, memo, fn
v, memo when v == max -> memo
v, memo -> Map.delete(memo, v)
end),
Enum.reject([prime | start_with], fn v -> v in complete end),
max
}
end)
|> (fn {memo, start_with, max} ->
start_with
|> Enum.reduce({memo, max}, fn
target, {memo, max} ->
with max when not is_nil(max) <- max,
true <- MapSet.size(memo[max]) <= MapSet.size(memo[target]),
true <- max < target do
{memo, target}
else
nil ->
{memo, target}
_ ->
{memo, max}
end
end)
|> elem(1)
end).()
end
@doc """
This looks fastest overall.
An approach that skips `Stream` and uses `Enum.reduce_while/3` to run the computation while generating the primes.
"""
def prime_w(k, n) do
%Prime{value: Prime.first(k)}
|> Enum.reduce_while({%{}, nil, nil}, fn
prime, acc when n < prime ->
{:halt, acc}
prime, {memo, nil, max} ->
memo = Map.put(memo, prime, MapSet.new([frank_hash(prime)]))
{:cont, {memo, [prime], max}}
prime, {memo, start_with, max} ->
hashed_value = frank_hash(prime)
memo = Map.put(memo, prime, MapSet.new([hashed_value]))
{memo, max, complete} = start_with
|> Enum.reduce({memo, max, []}, fn target, {memo, max, complete} ->
if MapSet.member?(memo[target], hashed_value) do
with max when not is_nil(max) <- max,
true <- MapSet.size(memo[max]) <= MapSet.size(memo[target]),
true <- max < target do
{memo, target, [max | complete]}
else
nil ->
{memo, target, complete}
_ ->
{memo, max, [target | complete]}
end
else
memo = Map.put(memo, target, MapSet.put(memo[target], hashed_value))
{memo, max, complete}
end
end)
{
:cont,
{
Enum.reduce(complete, memo, fn
v, memo when v == max -> memo
v, memo -> Map.delete(memo, v)
end),
Enum.reject([prime | start_with], fn v -> v in complete end),
max
}
}
end)
|> (fn {memo, start_with, max} ->
start_with
|> Enum.reduce({memo, max}, fn
target, {memo, max} ->
with max when not is_nil(max) <- max,
true <- MapSet.size(memo[max]) <= MapSet.size(memo[target]),
true <- max < target do
{memo, target}
else
nil ->
{memo, target}
_ ->
{memo, max}
end
end)
|> elem(1)
end).()
end
@doc """
An approach that skips `Stream` and uses `Enum.reduce_while/3` to run the computation while generating the primes,
and that additionally never deletes map entries that are no longer needed.
"""
def prime_d(k, n) do
%Prime{value: Prime.first(k)}
|> Enum.reduce_while({%{}, nil, nil}, fn
prime, acc when n < prime ->
{:halt, acc}
prime, {memo, nil, max} ->
memo = Map.put(memo, prime, MapSet.new([frank_hash(prime)]))
{:cont, {memo, [prime], max}}
prime, {memo, start_with, max} ->
hashed_value = frank_hash(prime)
memo = Map.put(memo, prime, MapSet.new([hashed_value]))
{memo, max, complete} = start_with
|> Enum.reduce({memo, max, []}, fn target, {memo, max, complete} ->
if MapSet.member?(memo[target], hashed_value) do
with max when not is_nil(max) <- max,
true <- MapSet.size(memo[max]) <= MapSet.size(memo[target]),
true <- max < target do
{memo, target, [max | complete]}
else
nil ->
{memo, target, complete}
_ ->
{memo, max, [target | complete]}
end
else
memo = Map.put(memo, target, MapSet.put(memo[target], hashed_value))
{memo, max, complete}
end
end)
{:cont,
{
memo,
Enum.reject([prime | start_with], fn v -> v in complete end),
max
}
}
end)
|> (fn {memo, start_with, max} ->
start_with
|> Enum.reduce({memo, max}, fn
target, {memo, max} ->
with max when not is_nil(max) <- max,
true <- MapSet.size(memo[max]) <= MapSet.size(memo[target]),
true <- max < target do
{memo, target}
else
nil ->
{memo, target}
_ ->
{memo, max}
end
end)
|> elem(1)
end).()
end
def to_list(%Prime{value: v} = p, [], n, k) when v < n, do: to_list(Prime.next(p), [], n, k)
def to_list(%Prime{value: v}, l, _n, k) when k < v, do: l |> Enum.reverse()
def to_list(%Prime{value: v} = p, l, n, k), do: to_list(Prime.next(p), [v | l], n, k)
end
"""
defmodule Main do
import Integer, only: [is_odd: 1]
defmodule Prime do
defstruct value: 2
def next(prime \\ %__MODULE__{value: 2})
def next(%__MODULE__{value: 2}), do: %__MODULE__{value: 3}
def next(%__MODULE__{value: n} = prime) do
if is_prime(n + 2),
do: %{prime | value: n + 2},
else: next(%{prime | value: n + 2})
end
def first(1), do: 2
def first(2), do: 2
def first(3), do: 3
def first(n) when is_odd(n), do: if is_prime(n), do: n, else: first(n + 2)
def first(n), do: first(n + 1)
def is_prime(n)
def is_prime(n) when n < 2, do: false
def is_prime(2), do: true
def is_prime(3), do: true
def is_prime(n) when 3 < n,
do: if rem(n, 2) == 0, do: false, else: is_prime(n, 3)
defp is_prime(n, i) when i * i <= n,
do: if rem(n, i) == 0, do: false, else: is_prime(n, i + 2)
defp is_prime(_, _), do: true
defimpl Enumerable do
def count(_), do: {:error, __MODULE__}
def member?(_, _), do: {:error, __MODULE__}
def slice(_), do: {:error, __MODULE__}
def reduce(_prime, {:halt, acc}, _fun), do: {:halted, acc}
def reduce(prime, {:suspend, acc}, fun),
do: {:suspended, acc, &reduce(prime, &1, fun)}
def reduce(%{value: v} = prime, {:cont, acc}, fun),
do: reduce(Prime.next(prime), fun.(v, acc), fun)
end
end
defp r_to_i, do: IO.read(:line) |> String.trim() |> String.to_integer()
def main do
IO.puts solve(r_to_i(), r_to_i())
end
def solve(k, n) do
%Prime{value: Prime.first(k)}
|> Enum.reduce_while({%{}, nil, nil}, fn
prime, acc when n < prime ->
{:halt, acc}
prime, {memo, nil, max} ->
{:cont, {Map.put(memo, prime, MapSet.new([frank_hash(prime)])), [prime], max}}
prime, {memo, start_with, max} ->
hashed_value = frank_hash(prime)
memo = Map.put(memo, prime, MapSet.new([hashed_value]))
{memo, max, complete} = start_with
|> Enum.reduce({memo, max, []}, fn target, {memo, max, complete} ->
if MapSet.member?(memo[target], hashed_value) do
with max when not is_nil(max) <- max,
true <- MapSet.size(memo[max]) <= MapSet.size(memo[target]),
true <- max < target do
{memo, target, [max | complete]}
else
nil ->
{memo, target, complete}
_ ->
{memo, max, [target | complete]}
end
else
{Map.put(memo, target, MapSet.put(memo[target], hashed_value)), max, complete}
end
end)
{
:cont,
{
Enum.reduce(complete, memo, fn
v, memo when v == max -> memo
v, memo -> Map.delete(memo, v)
end),
Enum.reject([prime | start_with], fn v -> v in complete end),
max
}
}
end)
|> (fn {memo, start_with, max} ->
start_with
|> Enum.reduce({memo, max}, fn
target, {memo, max} ->
with max when not is_nil(max) <- max,
true <- MapSet.size(memo[max]) <= MapSet.size(memo[target]),
true <- max < target do
{memo, target}
else
nil ->
{memo, target}
_ ->
{memo, max}
end
end)
|> elem(1)
end).()
end
defp frank_hash(n) when n < 10, do: n
defp frank_hash(n), do: n |> Integer.digits() |> Enum.sum() |> frank_hash()
end
"""
# source: lib/100/p6.ex
defmodule Node do
@moduledoc """
Functions related to VM nodes.
Some of the functions in this module are inlined by the compiler,
similar to functions in the `Kernel` module and they are explicitly
marked in their docs as "inlined by the compiler". For more information
about inlined functions, check out the `Kernel` module.
"""
@type t :: node
@doc """
Turns a non-distributed node into a distributed node.
This functionality starts the `:net_kernel` and other
related processes.
"""
@spec start(node, :longnames | :shortnames, non_neg_integer) :: {:ok, pid} | {:error, term}
def start(name, type \\ :longnames, tick_time \\ 15000) do
:net_kernel.start([name, type, tick_time])
end
@doc """
Turns a distributed node into a non-distributed node.
For other nodes in the network, this is the same as the node going down.
Only possible when the node was started with `Node.start/3`, otherwise
returns `{:error, :not_allowed}`. Returns `{:error, :not_found}` if the
local node is not alive.
"""
@spec stop() :: :ok | {:error, :not_allowed | :not_found}
def stop() do
:net_kernel.stop()
end
@doc """
Returns the current node.
It returns the same as the built-in `node()`.
"""
@spec self :: t
def self do
:erlang.node()
end
@doc """
Returns `true` if the local node is alive.
That is, if the node can be part of a distributed system.
"""
@spec alive? :: boolean
def alive? do
:erlang.is_alive()
end
@doc """
Returns a list of all visible nodes in the system, excluding
the local node.
Same as `list(:visible)`.
Inlined by the compiler.
"""
@spec list :: [t]
def list do
:erlang.nodes()
end
@type state :: :visible | :hidden | :connected | :this | :known
@doc """
Returns a list of nodes according to the argument given.
When the argument is a list, the result is the list of nodes
satisfying the disjunction(s) of the list elements.
For more information, see `:erlang.nodes/1`.
Inlined by the compiler.
"""
@spec list(state | [state]) :: [t]
def list(args) do
:erlang.nodes(args)
end
@doc """
Monitors the status of the node.
If `flag` is `true`, monitoring is turned on.
If `flag` is `false`, monitoring is turned off.
For more information, see `:erlang.monitor_node/2`.
For monitoring status changes of all nodes, see `:net_kernel.monitor_nodes/3`.
"""
@spec monitor(t, boolean) :: true
def monitor(node, flag) do
:erlang.monitor_node(node, flag)
end
@doc """
Behaves as `monitor/2` except that it allows an extra
option to be given, namely `:allow_passive_connect`.
For more information, see `:erlang.monitor_node/3`.
For monitoring status changes of all nodes, see `:net_kernel.monitor_nodes/3`.
"""
@spec monitor(t, boolean, [:allow_passive_connect]) :: true
def monitor(node, flag, options) do
:erlang.monitor_node(node, flag, options)
end
@doc """
Tries to set up a connection to node.
Returns `:pang` if it fails, or `:pong` if it is successful.
## Examples
iex> Node.ping(:unknown_node)
:pang
"""
@spec ping(t) :: :pong | :pang
def ping(node) do
:net_adm.ping(node)
end
@doc """
Forces the disconnection of a node.
This will appear to the `node` as if the local node has crashed.
This function is mainly used in the Erlang network authentication
protocols. Returns `true` if disconnection succeeds, otherwise `false`.
If the local node is not alive, the function returns `:ignored`.
For more information, see `:erlang.disconnect_node/1`.
"""
@spec disconnect(t) :: boolean | :ignored
def disconnect(node) do
:erlang.disconnect_node(node)
end
@doc """
Establishes a connection to `node`.
Returns `true` if successful, `false` if not, and the atom
`:ignored` if the local node is not alive.
For more information, see `:net_kernel.connect_node/1`.
"""
@spec connect(t) :: boolean | :ignored
def connect(node) do
:net_kernel.connect_node(node)
end
@doc """
Returns the PID of a new process started by the application of `fun`
on `node`. If `node` does not exist, a useless PID is returned.
For the list of available options, see `:erlang.spawn/2`.
Inlined by the compiler.
"""
@spec spawn(t, (() -> any)) :: pid
def spawn(node, fun) do
:erlang.spawn(node, fun)
end
@doc """
Returns the PID of a new process started by the application of `fun`
on `node`.
If `node` does not exist, a useless PID is returned.
For the list of available options, see `:erlang.spawn_opt/3`.
Inlined by the compiler.
"""
@spec spawn(t, (() -> any), Process.spawn_opts()) :: pid | {pid, reference}
def spawn(node, fun, opts) do
:erlang.spawn_opt(node, fun, opts)
end
@doc """
Returns the PID of a new process started by the application of
`module.function(args)` on `node`.
If `node` does not exist, a useless PID is returned.
For the list of available options, see `:erlang.spawn/4`.
Inlined by the compiler.
"""
@spec spawn(t, module, atom, [any]) :: pid
def spawn(node, module, fun, args) do
:erlang.spawn(node, module, fun, args)
end
@doc """
Returns the PID of a new process started by the application of
`module.function(args)` on `node`.
If `node` does not exist, a useless PID is returned.
For the list of available options, see `:erlang.spawn/5`.
Inlined by the compiler.
"""
@spec spawn(t, module, atom, [any], Process.spawn_opts()) :: pid | {pid, reference}
def spawn(node, module, fun, args, opts) do
:erlang.spawn_opt(node, module, fun, args, opts)
end
@doc """
Returns the PID of a new linked process started by the application of `fun` on `node`.
A link is created between the calling process and the new process, atomically.
If `node` does not exist, a useless PID is returned (and due to the link, an exit
signal with exit reason `:noconnection` will be received).
Inlined by the compiler.
"""
@spec spawn_link(t, (() -> any)) :: pid
def spawn_link(node, fun) do
:erlang.spawn_link(node, fun)
end
@doc """
Returns the PID of a new linked process started by the application of
`module.function(args)` on `node`.
A link is created between the calling process and the new process, atomically.
If `node` does not exist, a useless PID is returned (and due to the link, an exit
signal with exit reason `:noconnection` will be received).
Inlined by the compiler.
"""
@spec spawn_link(t, module, atom, [any]) :: pid
def spawn_link(node, module, fun, args) do
:erlang.spawn_link(node, module, fun, args)
end
@doc """
Sets the magic cookie of `node` to the atom `cookie`.
The default node is `Node.self/0`, the local node. If `node` is the local node,
the function also sets the cookie of all other unknown nodes to `cookie`.
This function will raise `FunctionClauseError` if the given `node` is not alive.
"""
@spec set_cookie(t, atom) :: true
def set_cookie(node \\ Node.self(), cookie) when is_atom(cookie) do
:erlang.set_cookie(node, cookie)
end
@doc """
Returns the magic cookie of the local node.
Returns the cookie if the node is alive, otherwise `:nocookie`.
"""
@spec get_cookie() :: atom
def get_cookie() do
:erlang.get_cookie()
end
end
# lib/elixir/lib/node.ex
defmodule Snitch.Data.Schema.PromotionRule.OrderTotal do
@moduledoc """
Models the `promotion rule` based on order total.
"""
use Snitch.Data.Schema
use Snitch.Data.Schema.PromotionRule
alias Snitch.Domain.Order, as: OrderDomain
@type t :: %__MODULE__{}
@name "Order Item Total"
embedded_schema do
field(:lower_range, :decimal, default: 0.0)
field(:upper_range, :decimal, default: 0.0)
end
def changeset(%__MODULE__{} = data, params) do
data
|> cast(params, [:lower_range, :upper_range])
end
def rule_name() do
@name
end
@doc """
Checks if the supplied order meets the criteria of the promotion rule
`order total`.
Takes as input the `order` and the `rule_data`, which in this case holds the
`lower_range` and `upper_range`. The order total is evaluated against the
specified ranges and should fall between them.
### Note
If the `upper_range` is not set (defaults to 0), it is ignored and the
order is evaluated only against the `lower_range`.
"""
def eligible(order, rule_data) do
order_total = OrderDomain.total_amount(order)
if satisfies_rule?(order_total, rule_data) do
{true, "order satisfies the rule"}
else
{false, "order doesn't fall under the item total condition"}
end
end
defp satisfies_rule?(order_total, rule_data) do
order_total_in_range?(
order_total,
Decimal.cast(rule_data["lower_range"]),
Decimal.cast(rule_data["upper_range"])
)
end
defp order_total_in_range?(order_total, lower_range, %Decimal{coef: 0}) do
currency = order_total.currency
lower_range = Money.new!(currency, lower_range)
case Money.compare(order_total, lower_range) do
:gt ->
true
_ ->
false
end
end
defp order_total_in_range?(order_total, lower_range, upper_range) do
currency = order_total.currency
lower_range = Money.new!(currency, lower_range)
upper_range = Money.new!(currency, upper_range)
value_lower =
case Money.compare(order_total, lower_range) do
:gt ->
true
_ ->
false
end
value_upper =
case Money.compare(order_total, upper_range) do
:lt ->
true
_ ->
false
end
value_lower && value_upper
end
end
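The range predicate behind `eligible/2` can be sketched without the `Money` and `Decimal` dependencies. This is a hedged illustration with plain numbers standing in for monetary amounts; a `nil` upper bound plays the role of the ignored zero `upper_range`:

```elixir
# Dependency-free sketch of the check in satisfies_rule?/2: the order
# total must exceed lower_range and, when an upper bound is set, stay
# below upper_range.
in_range? = fn
  total, lower, nil -> total > lower
  total, lower, upper -> total > lower and total < upper
end

in_range?.(150, 100, 200) # => true
in_range?.(150, 100, nil) # => true
in_range?.(250, 100, 200) # => false
```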
# apps/snitch_core/lib/core/data/schema/promotion/promotion_rule/order_total.ex
defmodule Ecto.Date do
@moduledoc """
An Ecto type for dates.
"""
defstruct [:year, :month, :day]
@doc """
Converts an `Ecto.Date` into a date triplet.
"""
def to_erl(%Ecto.Date{year: year, month: month, day: day}) do
{year, month, day}
end
@doc """
Converts a date triplet into an `Ecto.Date`.
"""
def from_erl({year, month, day}) do
%Ecto.Date{year: year, month: month, day: day}
end
@doc """
Returns an `Ecto.Date` in local time.
"""
def local do
from_erl(:erlang.date)
end
@doc """
Returns an `Ecto.Date` in UTC.
"""
def utc do
{date, _time} = :erlang.universaltime
from_erl(date)
end
end
defmodule Ecto.Time do
@moduledoc """
An Ecto type for time.
"""
defstruct [:hour, :min, :sec]
@doc """
Converts an `Ecto.Time` into a time triplet.
"""
def to_erl(%Ecto.Time{hour: hour, min: min, sec: sec}) do
{hour, min, sec}
end
@doc """
Converts a time triplet into an `Ecto.Time`.
"""
def from_erl({hour, min, sec}) do
%Ecto.Time{hour: hour, min: min, sec: sec}
end
@doc """
Returns an `Ecto.Time` in local time.
"""
def local do
from_erl(:erlang.time)
end
@doc """
Returns an `Ecto.Time` in UTC.
"""
def utc do
{_date, time} = :erlang.universaltime
from_erl(time)
end
end
defmodule Ecto.DateTime do
@moduledoc """
An Ecto type for dates and times.
"""
defstruct [:year, :month, :day, :hour, :min, :sec]
@doc """
Converts an `Ecto.DateTime` into a `{date, time}` tuple.
"""
def to_erl(%Ecto.DateTime{year: year, month: month, day: day, hour: hour, min: min, sec: sec}) do
{{year, month, day}, {hour, min, sec}}
end
@doc """
Converts a `{date, time}` tuple into an `Ecto.DateTime`.
"""
def from_erl({{year, month, day}, {hour, min, sec}}) do
%Ecto.DateTime{year: year, month: month, day: day,
hour: hour, min: min, sec: sec}
end
@doc """
Converts `Ecto.DateTime` into an `Ecto.Date`.
"""
def to_date(%Ecto.DateTime{year: year, month: month, day: day}) do
%Ecto.Date{year: year, month: month, day: day}
end
@doc """
Converts `Ecto.DateTime` into an `Ecto.Time`.
"""
def to_time(%Ecto.DateTime{hour: hour, min: min, sec: sec}) do
%Ecto.Time{hour: hour, min: min, sec: sec}
end
@doc """
Converts the given `Ecto.Date` and `Ecto.Time` into `Ecto.DateTime`.
"""
def from_date_and_time(%Ecto.Date{year: year, month: month, day: day},
%Ecto.Time{hour: hour, min: min, sec: sec}) do
%Ecto.DateTime{year: year, month: month, day: day,
hour: hour, min: min, sec: sec}
end
@doc """
Returns an `Ecto.DateTime` in local time.
"""
def local do
from_erl(:erlang.localtime)
end
@doc """
Returns an `Ecto.DateTime` in UTC.
"""
def utc do
from_erl(:erlang.universaltime)
end
end
# lib/ecto/datetime.ex
defmodule Oli.Delivery.Hierarchy do
@moduledoc """
A module for hierarchy and HierarchyNode operations and utilities
A delivery hierarchy is the main structure in which a course curriculum is organized
to be delivered. It is mainly persisted through section resource records. A hierarchy is
also a generic in-memory representation of a curriculum which can be passed into
delivery-centric functions from an authoring context, in which case the hierarchy could
be ephemeral and section_resources are empty (e.g. course preview)
See also HierarchyNode for more details
"""
import Oli.Utils
alias Oli.Delivery.Hierarchy.HierarchyNode
alias Oli.Resources.Numbering
alias Oli.Publishing.PublishedResource
@doc """
From a constructed hierarchy root node, or a collection of hierarchy nodes, return
an ordered flat list of the nodes of only the pages in the hierarchy.
"""
def flatten_pages(nodes) when is_list(nodes) do
Enum.reduce(nodes, [], &flatten_pages(&1, &2))
end
def flatten_pages(%HierarchyNode{} = node), do: flatten_pages(node, []) |> Enum.reverse()
defp flatten_pages(%HierarchyNode{} = node, all) do
if node.revision.resource_type_id == Oli.Resources.ResourceType.get_id_by_type("page") do
[node | all]
else
Enum.reduce(node.children, all, &flatten_pages(&1, &2))
end
end
@doc """
From a constructed hierarchy root node return an ordered flat list of all the nodes
in the hierarchy. Containers appear before their contents
"""
def flatten_hierarchy(%HierarchyNode{} = node),
do: flatten_hierarchy(node, []) |> Enum.reverse()
defp flatten_hierarchy(%HierarchyNode{} = node, all) do
all = [node | all]
Enum.reduce(node.children, all, &flatten_hierarchy(&1, &2))
end
def create_hierarchy(revision, published_resources_by_resource_id) do
numbering_tracker = Numbering.init_numbering_tracker()
level = 0
create_hierarchy(revision, published_resources_by_resource_id, level, numbering_tracker)
end
defp create_hierarchy(revision, published_resources_by_resource_id, level, numbering_tracker) do
{index, numbering_tracker} = Numbering.next_index(numbering_tracker, level, revision)
children =
Enum.map(revision.children, fn child_id ->
%PublishedResource{revision: child_revision} =
published_resources_by_resource_id[child_id]
create_hierarchy(
child_revision,
published_resources_by_resource_id,
level + 1,
numbering_tracker
)
end)
%PublishedResource{publication: pub} =
published_resources_by_resource_id[revision.resource_id]
%HierarchyNode{
uuid: uuid(),
numbering: %Numbering{
index: index,
level: level
},
revision: revision,
resource_id: revision.resource_id,
project_id: pub.project_id,
children: children
}
end
@doc """
Crawls the hierarchy and removes any nodes with duplicate resource_ids.
The first node encountered with a resource_id will be left in place,
any subsequent duplicates will be removed from the hierarchy
"""
def purge_duplicate_resources(%HierarchyNode{} = hierarchy) do
purge_duplicate_resources(hierarchy, %{})
|> then(fn {hierarchy, _} -> hierarchy end)
end
def purge_duplicate_resources(
%HierarchyNode{resource_id: resource_id, children: children} = node,
processed_nodes
) do
processed_nodes = Map.put_new(processed_nodes, resource_id, node)
{children, processed_nodes} =
Enum.reduce(children, {[], processed_nodes}, fn child, {children, processed_nodes} ->
# filter out any child which has already been processed or recursively process the child node
if Map.has_key?(processed_nodes, child.resource_id) do
# skip child, as it is a duplicate resource
{children, processed_nodes}
else
{child, processed_nodes} = purge_duplicate_resources(child, processed_nodes)
{[child | children], processed_nodes}
end
end)
|> then(fn {children, processed_nodes} -> {Enum.reverse(children), processed_nodes} end)
{%HierarchyNode{node | children: children}, processed_nodes}
end
def find_in_hierarchy(
%HierarchyNode{uuid: uuid, children: children} = node,
uuid_to_find
)
when is_binary(uuid_to_find) do
if uuid == uuid_to_find do
node
else
Enum.reduce(children, nil, fn child, acc ->
if acc == nil, do: find_in_hierarchy(child, uuid_to_find), else: acc
end)
end
end
def find_in_hierarchy(
%HierarchyNode{children: children} = node,
find_by
)
when is_function(find_by) do
if find_by.(node) do
node
else
Enum.reduce(children, nil, fn child, acc ->
if acc == nil, do: find_in_hierarchy(child, find_by), else: acc
end)
end
end
def reorder_children(
children,
node,
source_index,
index
) do
insert_index =
if source_index < index do
index - 1
else
index
end
children =
Enum.filter(children, fn %HierarchyNode{revision: r} -> r.id !== node.revision.id end)
|> List.insert_at(insert_index, node)
children
end
def find_and_update_node(hierarchy, node) do
if hierarchy.uuid == node.uuid do
node
else
%HierarchyNode{
hierarchy
| children:
Enum.map(hierarchy.children, fn child -> find_and_update_node(child, node) end)
}
end
end
def find_and_remove_node(hierarchy, uuid) do
if uuid in Enum.map(hierarchy.children, & &1.uuid) do
%HierarchyNode{
hierarchy
| children: Enum.filter(hierarchy.children, fn child -> child.uuid != uuid end)
}
else
%HierarchyNode{
hierarchy
| children:
Enum.map(hierarchy.children, fn child -> find_and_remove_node(child, uuid) end)
}
end
end
def move_node(hierarchy, node, destination_uuid) do
hierarchy = find_and_remove_node(hierarchy, node.uuid)
destination = find_in_hierarchy(hierarchy, destination_uuid)
updated_container = %HierarchyNode{destination | children: [node | destination.children]}
find_and_update_node(hierarchy, updated_container)
end
def add_materials_to_hierarchy(
hierarchy,
active,
selection,
published_resources_by_resource_id_by_pub
) do
nodes =
selection
|> Enum.map(fn {publication_id, resource_id} ->
revision =
published_resources_by_resource_id_by_pub
|> Map.get(publication_id)
|> Map.get(resource_id)
|> Map.get(:revision)
create_hierarchy(revision, published_resources_by_resource_id_by_pub[publication_id])
end)
find_and_update_node(hierarchy, %HierarchyNode{active | children: active.children ++ nodes})
|> Numbering.renumber_hierarchy()
|> then(fn {updated_hierarchy, _numberings} -> updated_hierarchy end)
end
@doc """
Given a hierarchy node, this function "flattens" all nodes below into a list, in the order that
a student would encounter the resources working linearly through a course.
As an example, consider the following hierarchy:
--Unit 1
----Module 1
------Page A
------Page B
--Unit 2
----Module 2
------Page C
The above would be flattened to:
Unit 1
Module 1
Page A
Page B
Unit 2
Module 2
Page C
"""
def flatten(%HierarchyNode{} = root) do
flatten_helper(root, [], [])
|> Enum.reverse()
end
defp flatten_helper(%HierarchyNode{children: children}, flattened_nodes, current_ancestors) do
Enum.reduce(children, flattened_nodes, fn node, acc ->
node = %{node | ancestors: current_ancestors}
case Oli.Resources.ResourceType.get_type_by_id(node.revision.resource_type_id) do
"container" -> flatten_helper(node, [node | acc], current_ancestors ++ [node])
_ -> [node | acc]
end
end)
end
@doc """
Debugging utility to inspect a hierarchy without all the noise. Choose which keys
to drop in the HierarchyNodes using the drop_keys option.
"""
def inspect(%HierarchyNode{} = hierarchy, opts \\ []) do
label = Keyword.get(opts, :label)
drop_keys = Keyword.get(opts, :drop_keys, [:revision, :section_resource])
drop_r(hierarchy, drop_keys)
# credo:disable-for-next-line Credo.Check.Warning.IoInspect
|> IO.inspect(label: label)
hierarchy
end
defp drop_r(%HierarchyNode{children: children} = node, drop_keys) do
%HierarchyNode{node | children: Enum.map(children, &drop_r(&1, drop_keys))}
|> Map.drop([:__struct__ | drop_keys])
end
end
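The depth-first traversal that `flatten_hierarchy/1` performs can be sketched without the `HierarchyNode` struct; plain maps stand in for nodes in this illustration:

```elixir
# Minimal sketch of the flattening order: each container appears before
# its contents, matching the Unit/Module/Page example in the moduledoc.
flatten = fn node, recur ->
  [node.name | Enum.flat_map(node.children, &recur.(&1, recur))]
end

tree = %{name: "Unit 1", children: [
  %{name: "Module 1", children: [
    %{name: "Page A", children: []},
    %{name: "Page B", children: []}
  ]}
]}

flatten.(tree, flatten)
# => ["Unit 1", "Module 1", "Page A", "Page B"]
```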
# lib/oli/delivery/hierarchy.ex
defmodule Helper.Parser do
@moduledoc false
defmacro parser(country, code) do
quote do
@doc """
Same as `parse/1`, but the number doesn't include the international code; instead you specify the country as an atom with its two-letter code.
For NANP countries you can use the atom `:nanp` or the two-letter code of any country in NANP.
For the United Kingdom it is possible to use the better-known acronym `:uk` or the official two-letter code `:gb`.
```
iex> Phone.parse("5132345678", :br)
{:ok, %{a2: "BR", a3: "BRA", country: "Brazil", international_code: "55", area_code: "51", number: "32345678", area_abbreviation: "RS", area_type: "state", area_name: "Rio Grande do Sul"}}
iex> Phone.parse("(51)3234-5678", :br)
{:ok, %{a2: "BR", a3: "BRA", country: "Brazil", international_code: "55", area_code: "51", number: "32345678", area_abbreviation: "RS", area_type: "state", area_name: "Rio Grande do Sul"}}
iex> Phone.parse("51 3234-5678", :br)
{:ok, %{a2: "BR", a3: "BRA", country: "Brazil", international_code: "55", area_code: "51", number: "32345678", area_abbreviation: "RS", area_type: "state", area_name: "Rio Grande do Sul"}}
iex> Phone.parse(5132345678, :br)
{:ok, %{a2: "BR", a3: "BRA", country: "Brazil", international_code: "55", area_code: "51", number: "32345678", area_abbreviation: "RS", area_type: "state", area_name: "Rio Grande do Sul"}}
```
"""
def parse(number, unquote(country)) when is_bitstring(number) do
parse(unquote(code) <> number)
end
def parse(number, unquote(country)) when is_integer(number) do
number = Integer.to_string(number)
parse(unquote(code) <> number)
end
def parse(_, unquote(country)) do
parse(unquote(country))
end
@doc """
Same as `parse/2`, except it raises on error.
```
iex> Phone.parse!("5132345678", :br)
%{a2: "BR", a3: "BRA", country: "Brazil", international_code: "55", area_code: "51", number: "32345678", area_abbreviation: "RS", area_type: "state", area_name: "Rio Grande do Sul"}
iex> Phone.parse!("(51)3234-5678", :br)
%{a2: "BR", a3: "BRA", country: "Brazil", international_code: "55", area_code: "51", number: "32345678", area_abbreviation: "RS", area_type: "state", area_name: "Rio Grande do Sul"}
iex> Phone.parse!("51 3234-5678", :br)
%{a2: "BR", a3: "BRA", country: "Brazil", international_code: "55", area_code: "51", number: "32345678", area_abbreviation: "RS", area_type: "state", area_name: "Rio Grande do Sul"}
iex> Phone.parse!(5132345678, :br)
%{a2: "BR", a3: "BRA", country: "Brazil", international_code: "55", area_code: "51", number: "32345678", area_abbreviation: "RS", area_type: "state", area_name: "Rio Grande do Sul"}
```
"""
def parse!(number, unquote(country)) when is_bitstring(number) do
parse!(unquote(code) <> number)
end
def parse!(number, unquote(country)) when is_integer(number) do
number = Integer.to_string(number)
parse!(unquote(code) <> number)
end
def parse!(_, unquote(country)) do
parse!(unquote(country))
end
end
end
defmacro country_parser do
quote do
parser :ad, "376"
parser :ae, "971"
parser :af, "93"
parser :al, "355"
parser :am, "374"
parser :ao, "244"
parser :ar, "54"
parser :at, "43"
parser :aw, "297"
parser :az, "994"
parser :ba, "387"
parser :bd, "880"
parser :be, "32"
parser :bg, "359"
parser :bh, "973"
parser :bi, "257"
parser :bj, "229"
parser :bn, "673"
parser :bo, "591"
parser :br, "55"
parser :bt, "975"
parser :bw, "267"
parser :by, "375"
parser :bz, "501"
parser :cd, "243"
parser :cf, "236"
parser :cg, "242"
parser :ch, "41"
parser :ci, "225"
parser :ck, "682"
parser :cl, "56"
parser :cm, "237"
parser :cn, "86"
parser :co, "57"
parser :cr, "506"
parser :cu, "53"
parser :cv, "238"
parser :cw, "599"
parser :cy, "357"
parser :cz, "420"
parser :de, "49"
parser :dj, "253"
parser :dk, "45"
parser :dz, "213"
parser :ec, "593"
parser :ee, "372"
parser :eg, "20"
parser :er, "291"
parser :es, "34"
parser :et, "251"
parser :fi, "358"
parser :fj, "679"
parser :fm, "691"
parser :fo, "298"
parser :fr, "33"
parser :ga, "241"
parser :gb, "44"
parser :ge, "995"
parser :gf, "594"
parser :gh, "233"
parser :gi, "350"
parser :gl, "299"
parser :gm, "220"
parser :gn, "224"
parser :gq, "240"
parser :gt, "502"
parser :gw, "245"
parser :gy, "592"
parser :gr, "30"
parser :uk, "44"
parser :hk, "852"
parser :hn, "504"
parser :hr, "385"
parser :ht, "509"
parser :hu, "36"
parser :id, "62"
parser :ie, "353"
parser :il, "972"
parser :in, "91"
parser :io, "246"
parser :iq, "964"
parser :ir, "98"
parser :is, "354"
parser :it, "39"
parser :jo, "962"
parser :jp, "81"
parser :ke, "254"
parser :kg, "996"
parser :kh, "855"
parser :ki, "686"
parser :km, "269"
parser :kp, "850"
parser :kr, "82"
parser :kw, "965"
parser :kz, "7"
parser :la, "856"
parser :lb, "961"
parser :li, "423"
parser :lk, "94"
parser :lr, "231"
parser :ls, "266"
parser :lt, "370"
parser :lu, "352"
parser :lv, "371"
parser :ly, "218"
parser :ma, "212"
parser :mc, "377"
parser :md, "373"
parser :me, "382"
parser :mg, "261"
parser :mh, "692"
parser :mk, "389"
parser :ml, "223"
parser :mm, "95"
parser :mn, "976"
parser :mo, "853"
parser :mq, "596"
parser :mr, "222"
parser :mt, "356"
parser :mu, "230"
parser :mv, "960"
parser :mw, "265"
parser :mx, "52"
parser :my, "60"
parser :mz, "258"
parser :nanp, "1"
parser :ag, "1"
parser :ai, "1"
parser :as, "1"
parser :bb, "1"
parser :bm, "1"
parser :bs, "1"
parser :ca, "1"
parser :dm, "1"
parser :do, "1"
parser :gd, "1"
parser :gu, "1"
parser :jm, "1"
parser :kn, "1"
parser :ky, "1"
parser :lc, "1"
parser :mp, "1"
parser :ms, "1"
parser :pr, "1"
parser :sx, "1"
parser :tc, "1"
parser :tt, "1"
parser :us, "1"
parser :vc, "1"
parser :vg, "1"
parser :vi, "1"
parser :na, "264"
parser :nc, "687"
parser :ne, "227"
parser :ng, "234"
parser :ni, "505"
parser :nl, "31"
parser :no, "47"
parser :np, "977"
parser :nr, "674"
parser :nu, "683"
parser :nz, "64"
parser :om, "968"
parser :pa, "507"
parser :pe, "51"
parser :pf, "689"
parser :pg, "675"
parser :ph, "63"
parser :pk, "92"
parser :pl, "48"
parser :pm, "508"
parser :ps, "970"
parser :pt, "351"
parser :pw, "680"
parser :py, "595"
parser :qa, "974"
parser :ro, "40"
parser :rs, "381"
parser :ru, "7"
parser :rw, "250"
parser :sa, "966"
parser :sb, "677"
parser :sc, "248"
parser :sd, "249"
parser :se, "46"
parser :sg, "65"
parser :sh, "290"
parser :si, "386"
parser :sk, "421"
parser :sl, "232"
parser :sm, "378"
parser :sn, "221"
parser :so, "252"
parser :sr, "597"
parser :ss, "211"
parser :st, "239"
parser :sv, "503"
parser :sy, "963"
parser :sz, "268"
parser :td, "235"
parser :tg, "228"
parser :th, "66"
parser :tj, "992"
parser :tk, "690"
parser :tl, "670"
parser :tm, "993"
parser :tn, "216"
parser :to, "676"
parser :tr, "90"
parser :tv, "688"
parser :tw, "886"
parser :tz, "255"
parser :ua, "380"
parser :ug, "256"
parser :uy, "598"
parser :uz, "998"
parser :ve, "58"
parser :vn, "84"
parser :vu, "678"
parser :wf, "681"
parser :ws, "685"
parser :ye, "967"
parser :za, "27"
parser :zm, "260"
parser :zw, "263"
end
end
end
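The clauses that the `parser/2` macro generates reduce to prepending the country code before delegating to `parse/1`. A minimal sketch of that expansion (the number is a made-up example, with `"55"` as generated for `parser :br, "55"`):

```elixir
code = "55"
number = 5_132_345_678

# Mirrors the two generated parse/2 clauses: binaries are concatenated
# directly, integers are converted to a string first.
full =
  cond do
    is_bitstring(number) -> code <> number
    is_integer(number) -> code <> Integer.to_string(number)
  end
# full == "555132345678"
```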
# lib/helpers/parser.ex
defmodule Cassette.Controller do
@moduledoc """
A helper module to quickly validate roles and get the current user
To use in your controller, add as a plug restricting the actions:
```elixir
defmodule MyApp.MyController do
use MyApp.Web, :controller
use Cassette.Controller
plug :require_role!, "ADMIN" when action in [:edit, :update, :new, :create]
def update(conn, %{"id" => id}) do
something = Repo.get!(Something, id)
changeset = Something.changeset(something)
render(conn, "edit.html", something: something, changeset: changeset)
end
end
```
You can also customize how a forbidden situation is handled:
```elixir
defmodule MyApp.MyController do
use MyApp.Web, :controller
use Cassette.Controller, on_forbidden: fn(conn) ->
redirect(conn, to: "/403.html")
end
plug :require_role!, "VIEWER"
def index(conn, _params) do
render(conn, "index.html")
end
end
```
You can use one of your controller functions as well:
```elixir
defmodule MyApp.MyController do
use MyApp.Web, :controller
use Cassette.Controller, on_forbidden: &MyApp.MyController.forbidden/1
plug :require_role!, "VIEWER"
def index(conn, _params) do
render(conn, "index.html")
end
end
```
Or since `require_role!/2` halts the connection you may do the following for simple actions.
```elixir
defmodule MyApp.MyController do
use MyApp.Web, :controller
use Cassette.Controller
def index(conn, _params) do
conn
|> require_role!("VIEWER")
|> render("index.html")
end
end
```
You can also write your own plugs using the "softer" `has_role?/2` or `has_raw_role?/2`:
```elixir
defmodule MyApp.MyController do
use MyApp.web, :controller
use Cassette.Controller
plug :check_authorization
def index(conn, _params) do
render(conn, "index.html")
end
def check_authorization(conn, _params) do
if has_role?(conn, "viewer") do
conn
else
conn
|> render("forbidden.html")
|> halt
end
end
end
```
"""
alias Cassette.Plug.RequireRolePlug
alias Plug.Conn
defmacro __using__(opts \\ []) do
quote do
import Conn
import Cassette.Plug.RequireRolePlug,
only: [current_user: 1, has_role?: 3, has_raw_role?: 2]
defp __forbidden_callback__ do
unquote(opts[:on_forbidden]) ||
fn conn ->
conn
|> resp(403, "Forbidden")
|> halt
end
end
@doc """
Tests if the user has the role. Where role can be any of the terms accepted by any implementation of `has_role?/2`.
This will halt the connection and set the status to forbidden if authorization fails.
"""
@spec require_role!(Conn.t(), RequireRolePlug.role_param()) :: Conn.t()
def require_role!(conn, roles) do
if has_role?(conn, roles, unquote(opts)) do
conn
else
__forbidden_callback__().(conn)
end
end
@doc """
Returns if the user has the role.
"""
@spec has_role?(Conn.t(), RequireRolePlug.role_param()) :: boolean
def has_role?(conn, roles) do
has_role?(conn, roles, unquote(opts))
end
@doc """
Tests if the user has the (raw) role. Where role can be any of the terms accepted by any implementation of `has_raw_role?/2`.
This will halt the connection and set the status to forbidden if authorization fails.
"""
@spec require_raw_role!(Conn.t(), RequireRolePlug.role_param()) :: Conn.t()
def require_raw_role!(conn, roles) do
if has_raw_role?(conn, roles) do
conn
else
__forbidden_callback__().(conn)
end
end
end
end
end
# lib/cassette/controller.ex
defmodule Tai.Venues.Product do
@type status ::
:unknown
| :pre_trading
| :trading
| :restricted
| :post_trading
| :end_of_day
| :halt
| :auction_match
| :break
| :settled
| :unlisted
@typedoc """
The product to buy/sell or the underlying product used to buy/sell. For the product BTCUSD
- BTC = base asset
- USD = quote asset
"""
@type asset :: Tai.Markets.Asset.symbol()
@type venue_asset :: String.t()
@typedoc """
The underlying value of the product. Spot products will always have a value = 1. Derivative products
can have values > 1.
e.g. OkEx quarterly futures product has a value of 100 where 1 contract represents $100 USD.
"""
@type value :: Decimal.t()
@typedoc """
The side that the value represents
"""
@type value_side :: :base | :quote
@typedoc """
Whether or not the product can be used as collateral for a portfolios balance
"""
@type collateral :: true | false
@typedoc """
The ratio of balance of the quote asset that is used as collateral in the portfolio balance
"""
@type collateral_weight :: Decimal.t() | nil
@typedoc """
A derivative contract where PnL settlement is a different asset to the base or quote assets.
"""
@type quanto :: true | false
@typedoc """
A derivative contract where the PnL settlement is in the base asset, e.g. XBTUSD settles PnL in XBT
"""
@type inverse :: true | false
@typedoc """
The expiration date
"""
@type expiry :: DateTime.t() | nil
@type symbol :: atom
@type venue_symbol :: String.t()
@type type :: :spot | :future | :swap | :option | :leveraged_token | :bvol | :ibvol | :move
@type t :: %Tai.Venues.Product{
venue_id: Tai.Venue.id(),
symbol: symbol,
venue_symbol: venue_symbol,
alias: String.t() | nil,
base: asset,
quote: asset,
venue_base: venue_asset,
venue_quote: venue_asset,
status: status,
type: type,
listing: DateTime.t() | nil,
expiry: expiry,
collateral: collateral,
collateral_weight: collateral_weight,
price_increment: Decimal.t(),
size_increment: Decimal.t(),
min_price: Decimal.t(),
min_size: Decimal.t(),
min_notional: Decimal.t() | nil,
max_price: Decimal.t() | nil,
max_size: Decimal.t() | nil,
value: value,
value_side: value_side,
is_quanto: quanto,
is_inverse: inverse,
maker_fee: Decimal.t() | nil,
taker_fee: Decimal.t() | nil,
strike: Decimal.t() | nil,
option_type: :call | :put | nil
}
@enforce_keys ~w[
venue_id
symbol
venue_symbol
base
quote
venue_base
venue_quote
status
type
collateral
price_increment
size_increment
min_price
min_size
value
value_side
is_quanto
is_inverse
]a
defstruct ~w[
venue_id
symbol
venue_symbol
alias
base
quote
venue_base
venue_quote
status
type
listing
expiry
collateral
collateral_weight
price_increment
size_increment
min_notional
min_price
min_size
max_size
max_price
value
value_side
is_quanto
is_inverse
maker_fee
taker_fee
strike
option_type
]a
end
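The `value`/`value_side` semantics described in the typedocs above can be illustrated with plain arithmetic. The numbers below follow the OkEx example from the docs (one contract represents 100 USD on the quote side); the price is an assumed figure:

```elixir
contract_value = 100   # value, denominated on the :quote side (USD)
contracts = 5
price = 20_000         # assumed BTC/USD price, for illustration only

# Quote-side notional of the position, and its base-asset equivalent.
notional_usd = contracts * contract_value
base_equivalent = notional_usd / price
# notional_usd == 500; base_equivalent == 0.025 (BTC)
```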
# apps/tai/lib/tai/venues/product.ex
defmodule Harald.HCI do
@moduledoc """
> The HCI provides a uniform interface method of accessing a Bluetooth Controller’s
> capabilities.
Reference: Version 5.0, Vol. 2, Part E, 1
"""
alias Harald.{HCI.Event, Serializable}
@behaviour Serializable
@typedoc """
OpCode Group Field.
See `t:opcode/0`
"""
@type ogf :: 0..63
@typedoc """
OpCode Command Field.
See `t:opcode/0`
"""
@type ocf :: 0..1023
@typedoc """
> Each command is assigned a 2 byte Opcode used to uniquely identify different types of
> commands. The Opcode parameter is divided into two fields, called the OpCode Group Field (OGF)
> and OpCode Command Field (OCF). The OGF occupies the upper 6 bits of the Opcode, while the OCF
> occupies the remaining 10 bits. The OGF of 0x3F is reserved for vendor-specific debug
> commands. The organization of the opcodes allows additional information to be inferred without
> fully decoding the entire Opcode.
Reference: Version 5.0, Vol. 2, Part E, 5.4.1
"""
@type opcode :: binary()
@type opt :: boolean() | binary()
@type opts :: binary() | [opt()]
@type command :: <<_::8, _::_*8>>
@spec opcode(ogf(), ocf()) :: opcode()
def opcode(ogf, ocf) when ogf < 64 and ocf < 1024 do
<<opcode::size(16)>> = <<ogf::size(6), ocf::size(10)>>
<<opcode::little-size(16)>>
end
@spec command(opcode(), opts()) :: command()
def command(opcode, opts \\ "")
def command(opcode, [_ | _] = opts) do
opts_bin = for o <- opts, into: "", do: to_bin(o)
command(opcode, opts_bin)
end
def command(opcode, opts) do
s = byte_size(opts)
opcode <> <<s::size(8)>> <> opts
end
@doc """
Convert a value to a binary.
iex> to_bin(false)
<<0>>
iex> to_bin(true)
<<1>>
iex> to_bin(<<1, 2, 3>>)
<<1, 2, 3>>
"""
@spec to_bin(boolean() | binary()) :: binary()
def to_bin(false), do: <<0>>
def to_bin(true), do: <<1>>
def to_bin(bin) when is_binary(bin), do: bin
@impl Serializable
for module <- Event.event_modules() do
def serialize(%unquote(module){} = event) do
{:ok, bin} = Event.serialize(event)
{:ok, <<Event.indicator(), bin::binary>>}
end
end
@impl Serializable
def deserialize(<<4, rest::binary>>) do
case Event.deserialize(rest) do
{:ok, _} = ret -> ret
{:error, bin} when is_binary(bin) -> {:error, <<4, bin::binary>>}
{:error, data} -> {:error, data}
end
end
def deserialize(bin), do: {:error, bin}
end
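# Usage sketch (not part of the original file): building the HCI_Reset command
# (OGF 0x03, OCF 0x0003) with the functions above. The OGF/OCF values come from
# the Bluetooth Core spec; the byte output is shown for illustration.
#
#     opcode = Harald.HCI.opcode(0x03, 0x0003)
#     # => <<0x03, 0x0C>>  (0x0C03, emitted little-endian)
#     Harald.HCI.command(opcode)
#     # => <<0x03, 0x0C, 0x00>>  (opcode followed by a zero parameter length)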
# source: lib/harald/hci.ex
defmodule Plaid.Item do
@moduledoc """
Functions for Plaid `item` endpoint.
"""
import Plaid, only: [make_request_with_cred: 4, validate_cred: 1]
alias Plaid.Utils
@derive Jason.Encoder
defstruct available_products: [],
billed_products: [],
error: nil,
institution_id: nil,
item_id: nil,
webhook: nil,
consent_expiration_time: nil,
request_id: nil,
status: nil
@type t :: %__MODULE__{
available_products: [String.t()],
billed_products: [String.t()],
error: String.t() | nil,
institution_id: String.t(),
item_id: String.t(),
webhook: String.t(),
consent_expiration_time: String.t(),
request_id: String.t(),
status: Plaid.Item.Status.t()
}
@type params :: %{required(atom) => String.t()}
@type config :: %{required(atom) => String.t()}
@type service :: :dwolla | :modern_treasury
@endpoint :item
defmodule Status do
@moduledoc """
Plaid Item Status data structure.
"""
@derive Jason.Encoder
defstruct investments: nil,
transactions: nil,
last_webhook: nil
@type t :: %__MODULE__{
investments: Plaid.Item.Status.Investments.t(),
transactions: Plaid.Item.Status.Transactions.t(),
last_webhook: Plaid.Item.Status.LastWebhook.t()
}
defmodule Investments do
@moduledoc """
Plaid Item Status Investments data structure.
"""
@derive Jason.Encoder
defstruct last_successful_update: nil, last_failed_update: nil
@type t :: %__MODULE__{last_successful_update: String.t(), last_failed_update: String.t()}
end
defmodule Transactions do
@moduledoc """
Plaid Item Status Transactions data structure.
"""
@derive Jason.Encoder
defstruct last_successful_update: nil, last_failed_update: nil
@type t :: %__MODULE__{last_successful_update: String.t(), last_failed_update: String.t()}
end
defmodule LastWebhook do
@moduledoc """
Plaid Item Status LastWebhook data structure.
"""
@derive Jason.Encoder
defstruct sent_at: nil, code_sent: nil
@type t :: %__MODULE__{sent_at: String.t(), code_sent: String.t()}
end
end
@doc """
Gets an Item.
Parameters
```
%{access_token: "access-env-identifier"}
```
"""
@spec get(params, config | nil) :: {:ok, Plaid.Item.t()} | {:error, Plaid.Error.t()}
def get(params, config \\ %{}) do
config = validate_cred(config)
endpoint = "#{@endpoint}/get"
make_request_with_cred(:post, endpoint, config, params)
|> Utils.handle_resp(@endpoint)
end
@doc """
Exchanges a public token for an access token and item id.
Parameters
```
%{public_token: "public-env-identifier"}
```
Response
```
{:ok, %{access_token: "access-env-identifier", item_id: "some-id", request_id: "f24wfg"}}
```
"""
@spec exchange_public_token(params, config | nil) :: {:ok, map} | {:error, Plaid.Error.t()}
def exchange_public_token(params, config \\ %{}) do
config = validate_cred(config)
endpoint = "#{@endpoint}/public_token/exchange"
make_request_with_cred(:post, endpoint, config, params)
|> Utils.handle_resp(@endpoint)
end
@doc """
Creates a public token. To be used to put Plaid Link into update mode.
Parameters
```
%{access_token: "access-env-identifier"}
```
Response
```
{:ok, %{public_token: "access-env-identifier", expiration: 3600, request_id: "kg414f"}}
```
"""
@spec create_public_token(params, config | nil) :: {:ok, map} | {:error, Plaid.Error.t()}
def create_public_token(params, config \\ %{}) do
config = validate_cred(config)
endpoint = "#{@endpoint}/public_token/create"
make_request_with_cred(:post, endpoint, config, params)
|> Utils.handle_resp(@endpoint)
end
@doc """
Updates an Item's webhook.
Parameters
```
%{access_token: "access-env-identifier", webhook: "http://mywebsite/api"}
```
"""
@spec update_webhook(params, config | nil) :: {:ok, Plaid.Item.t()} | {:error, Plaid.Error.t()}
def update_webhook(params, config \\ %{}) do
config = validate_cred(config)
endpoint = "#{@endpoint}/webhook/update"
make_request_with_cred(:post, endpoint, config, params)
|> Utils.handle_resp(@endpoint)
end
@doc """
Invalidates access token and returns a new one.
Parameters
```
%{access_token: "access-env-identifier"}
```
Response
```
{:ok, %{new_access_token: "access-env-identifier", request_id: "gag8fs"}}
```
"""
@spec rotate_access_token(params, config | nil) :: {:ok, map} | {:error, Plaid.Error.t()}
def rotate_access_token(params, config \\ %{}) do
config = validate_cred(config)
endpoint = "#{@endpoint}/access_token/invalidate"
make_request_with_cred(:post, endpoint, config, params)
|> Utils.handle_resp(@endpoint)
end
@doc """
Updates a V1 access token to V2.
Parameters
```
%{access_token_v1: "<PASSWORD>"}
```
Response
```
{:ok, %{access_token: "access-env-identifier", item_id: "some-id", request_id: "f24wfg"}}
```
"""
@spec update_version_access_token(params, config | nil) ::
{:ok, map} | {:error, Plaid.Error.t()}
def update_version_access_token(params, config \\ %{}) do
config = validate_cred(config)
endpoint = "#{@endpoint}/access_token/update_version"
make_request_with_cred(:post, endpoint, config, params)
|> Utils.handle_resp(@endpoint)
end
@doc """
Removes an Item.
Parameters
```
%{access_token: "access-env-identifier"}
```
Response
```
{:ok, %{request_id: "[Unique request ID]"}}
```
"""
@spec remove(params, config | nil) :: {:ok, map} | {:error, Plaid.Error.t()}
def remove(params, config \\ %{}) do
config = validate_cred(config)
endpoint = "#{@endpoint}/remove"
make_request_with_cred(:post, endpoint, config, params)
|> Utils.handle_resp(@endpoint)
end
@doc """
[Creates a processor token](https://developers.dwolla.com/resources/dwolla-plaid-integration.html)
used to create an authenticated funding source with Dwolla.
Parameters
```
%{access_token: "access-env-identifier", account_id: "plaid-account-id"}
```
Response
```
{:ok, %{processor_token: "some-token", request_id: "k522f2"}}
```
"""
@deprecated "Use create_processor_token/3 instead"
@spec create_processor_token(params, config | nil) :: {:ok, map} | {:error, Plaid.Error.t()}
def create_processor_token(params, config \\ %{}) do
create_processor_token(params, :dwolla, config)
end
@doc """
Creates a processor token used to integrate with services external to Plaid.
Parameters
```
%{access_token: "access-env-identifier", account_id: "plaid-account-id"}
```
Response
```
{:ok, %{processor_token: "some-token", request_id: "k522f2"}}
```
"""
@spec create_processor_token(params, service, config | nil) ::
{:ok, map} | {:error, Plaid.Error.t()}
def create_processor_token(params, service, config) do
config = validate_cred(config)
endpoint = "processor/#{service_to_string(service)}/processor_token/create"
make_request_with_cred(:post, endpoint, config, params)
|> Utils.handle_resp(@endpoint)
end
defp service_to_string(:dwolla), do: "dwolla"
defp service_to_string(:modern_treasury), do: "modern_treasury"
@doc """
[Creates a stripe bank account token](https://stripe.com/docs/ach)
used to create an authenticated funding source with Stripe.
Parameters
```
%{access_token: "<PASSWORD>", account_id: "plaid-account-id"}
```
Response
```
{:ok, %{stripe_bank_account_token: "<KEY>", request_id: "[Unique request ID]"}}
```
"""
@spec create_stripe_bank_account_token(params, config | nil) ::
{:ok, map} | {:error, Plaid.Error.t()}
def create_stripe_bank_account_token(params, config \\ %{}) do
config = validate_cred(config)
endpoint = "processor/stripe/bank_account_token/create"
make_request_with_cred(:post, endpoint, config, params)
|> Utils.handle_resp(@endpoint)
end
end
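# Usage sketch (not part of the original file): a typical Plaid Link flow,
# exchanging a public token for an access token and then fetching the Item.
# The token values here are placeholders.
#
#     {:ok, %{access_token: access_token}} =
#       Plaid.Item.exchange_public_token(%{public_token: "public-sandbox-123"})
#
#     {:ok, %Plaid.Item{} = item} = Plaid.Item.get(%{access_token: access_token})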
# source: lib/plaid/item.ex
defmodule Mix.Tasks.Absinthe.Gen.Resolver do
use Mix.Task
alias Mix.AbsintheGeneratorUtils
@shortdoc "Generates an absinthe resolver"
@moduledoc """
Generates an Absinthe Schema
### Options
#{NimbleOptions.docs(AbsintheGenerator.Resolver.definitions())}
### Specifying Middleware
To specify middleware we can utilize the following syntax
```bash
pre_middleware:mutation:AuthMiddleware post_middleware:all:ChangesetErrorFormatter
```
Middleware can be set for `mutation`, `query`, `subscription` or `all` and can
also be set to either run pre or post resolution using `pre_middleware` or `post_middleware`
### Example
```bash
mix absinthe.gen.resolver func_name:MyModule.function
--app-name MyApp
--resolver-name students
--moduledoc "this is the test"
```
"""
@resolver_regex ~r/^[a-z_]+(:(with_parent|with_resolution|with_parent:with_resolution)){0,1}:[A-Za-z]+\.[a-z_]+$/
def run(args) do
AbsintheGeneratorUtils.ensure_not_in_umbrella!("absinthe.gen.resolver")
{args, extra_args} = AbsintheGeneratorUtils.parse_path_opts(args, [
path: :string,
app_name: :string,
moduledoc: :string,
resolver_name: :string
])
parsed_resolver_functions = extra_args
|> validate_resolver_string
|> parse_resolver_functions
args
|> Map.new
|> Map.put(:resolver_functions, parsed_resolver_functions)
|> serialize_to_resolver_struct
|> AbsintheGenerator.Resolver.run
|> AbsintheGeneratorUtils.write_template(path_from_args(args))
end
defp path_from_args(args) do
Keyword.get(
args,
:path,
"./lib/#{Macro.underscore(args[:app_name])}_web/resolvers/#{Macro.underscore(args[:resolver_name])}.ex"
)
end
defp validate_resolver_string(resolver_parts) do
if resolver_parts === [] or Enum.all?(resolver_parts, &Regex.match?(@resolver_regex, &1)) do
resolver_parts
else
Mix.raise("""
\n
Resolver format isn't setup properly and must match the following regex
#{inspect @resolver_regex}
Example:
func_name:MyModule.function
all_users:with_parent:Account.all_users
all_users:with_resolution:Account.all_users
all_users:with_parent:with_resolution:Account.all_users
""")
end
end
defp parse_resolver_functions(parsed_resolver_functions) do
Enum.map(parsed_resolver_functions, fn resolver_function ->
case String.split(resolver_function, ":") do
[resolver_func_name, fn_name] ->
"""
def #{resolver_func_name}(params, _resolution) do
#{fn_name}(params)
end
"""
[resolver_func_name, "with_parent", "with_resolution", fn_name] ->
"""
def #{resolver_func_name}(params, parent, resolution) do
#{fn_name}(params, parent, resolution)
end
"""
[resolver_func_name, "with_parent", fn_name] ->
"""
def #{resolver_func_name}(params, parent) do
#{fn_name}(params, parent)
end
"""
[resolver_func_name, "with_resolution", fn_name] ->
"""
def #{resolver_func_name}(params, resolution) do
#{fn_name}(params, resolution)
end
"""
end
end)
end
defp serialize_to_resolver_struct(params) do
%AbsintheGenerator.Resolver{
app_name: params[:app_name],
moduledoc: params[:moduledoc],
resolver_name: params[:resolver_name],
resolver_functions: params[:resolver_functions]
}
end
end
# source: lib/mix/tasks/resolver.ex
defmodule ListDict do
@moduledoc """
A Dict implementation that works on lists of two-items tuples.
This dictionary is only recommended for keeping a small amount
of values. Other dict alternatives are more viable for keeping
any other amount than a handful.
For more information about the functions and their APIs, please
consult the `Dict` module.
"""
@doc """
Returns a new `ListDict`, i.e. an empty list.
"""
def new, do: []
@doc false
def new(pairs) do
IO.write :stderr, "ListDict.new/1 is deprecated, please use Enum.into/2 instead\n#{Exception.format_stacktrace}"
Enum.to_list pairs
end
@doc false
def new(list, transform) when is_function(transform) do
IO.write :stderr, "ListDict.new/2 is deprecated, please use Enum.into/3 instead\n#{Exception.format_stacktrace}"
Enum.map list, transform
end
def keys(dict) do
for { key, _ } <- dict, do: key
end
def values(dict) do
for { _, value } <- dict, do: value
end
def size(dict) do
length(dict)
end
def has_key?(dict, key)
def has_key?([{ key, _ }|_], key), do: true
def has_key?([{ _, _ }|t], key), do: has_key?(t, key)
def has_key?([], _key), do: false
def get(dict, key, default \\ nil)
def get([{ key, value }|_], key, _default), do: value
def get([{ _, _ }|t], key, default), do: get(t, key, default)
def get([], _key, default), do: default
def fetch(dict, key)
def fetch([{ key, value }|_], key), do: { :ok, value }
def fetch([{ _, _ }|t], key), do: fetch(t, key)
def fetch([], _key), do: :error
def fetch!(dict, key) do
case fetch(dict, key) do
{ :ok, value } -> value
:error -> raise(KeyError, key: key, term: dict)
end
end
def pop(dict, key, default \\ nil) do
{ get(dict, key, default), delete(dict, key) }
end
def put(dict, key, val) do
[{key, val}|delete(dict, key)]
end
def put_new(dict, key, val) do
case has_key?(dict, key) do
true -> dict
false -> [{key, val}|dict]
end
end
def delete(dict, key)
def delete([{ key, _ }|t], key), do: t
def delete([{ _, _ } = h|t], key), do: [h|delete(t, key)]
def delete([], _key), do: []
def merge(dict, enum, callback \\ fn(_k, _v1, v2) -> v2 end) do
Enum.reduce enum, dict, fn { k, v2 }, acc ->
update(acc, k, v2, fn(v1) -> callback.(k, v1, v2) end)
end
end
def split(dict, keys) do
acc = { [], [] }
{take, drop} = Enum.reduce dict, acc, fn({ k, v }, { take, drop }) ->
if k in keys do
{ [{k, v}|take], drop }
else
{ take, [{k, v}|drop] }
end
end
{Enum.reverse(take), Enum.reverse(drop)}
end
def take(dict, keys) do
for { k, _ } = tuple <- dict, k in keys, do: tuple
end
def drop(dict, keys) do
for { k, _ } = tuple <- dict, not k in keys, do: tuple
end
def update!(list, key, fun) do
update!(list, key, fun, list)
end
defp update!([{key, value}|list], key, fun, _dict) do
[{key, fun.(value)}|delete(list, key)]
end
defp update!([{_, _} = e|list], key, fun, dict) do
[e|update!(list, key, fun, dict)]
end
defp update!([], key, _fun, dict) do
raise(KeyError, key: key, term: dict)
end
def update([{key, value}|dict], key, _initial, fun) do
[{key, fun.(value)}|delete(dict, key)]
end
def update([{_, _} = e|dict], key, initial, fun) do
[e|update(dict, key, initial, fun)]
end
def update([], key, initial, _fun) do
[{key, initial}]
end
def empty(_dict) do
IO.write :stderr, "ListDict.empty/1 is deprecated, please use Collectable.empty/1 instead\n#{Exception.format_stacktrace}"
[]
end
def equal?(dict, other) do
:lists.keysort(1, dict) === :lists.keysort(1, other)
end
@doc false
def reduce(_, { :halt, acc }, _fun), do: { :halted, acc }
def reduce(list, { :suspend, acc }, fun), do: { :suspended, acc, &reduce(list, &1, fun) }
def reduce([], { :cont, acc }, _fun), do: { :done, acc }
def reduce([{_,_}=h|t], { :cont, acc }, fun), do: reduce(t, fun.(h, acc), fun)
def to_list(dict), do: dict
end
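# Usage sketch (not part of the original file): ListDict keeps key-value pairs
# in a plain list, so every operation is linear-time and it is only intended
# for small collections.
#
#     dict = ListDict.new() |> ListDict.put(:a, 1) |> ListDict.put(:b, 2)
#     ListDict.get(dict, :a)       # => 1
#     ListDict.delete(dict, :a)    # => [b: 2]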
# source: lib/elixir/lib/list_dict.ex
defmodule PandaDoc do
@moduledoc """
Documentation for `PandaDoc` which provides an API for pandadoc.com.
## Installation
This package can be installed by adding `pandadoc` to your list of dependencies in `mix.exs`:
```elixir
def deps do
[{:pandadoc, "~> 0.1.2"}]
end
```
## Configuration
Put the following lines into your `config.exs` or better, into your environment configuration files like `test.exs`, `dev.exs` or `prod.exs`.
```elixir
config :pandadoc, api_key: "<your api key>"
```
## WebHooks in Phoenix
Put the following lines in a file called `pandadoc_controller.ex` inside your controllers directory.
```elixir
defmodule YourAppWeb.PandaDocController do
use PandaDoc.PhoenixController
def handle_document_change(id, status, _details) do
id
|> Documents.get_by_pandadoc_id!()
|> Documents.update_document(%{status: status})
end
def handle_document_complete(id, pdf, status, _details) do
id
|> Documents.get_by_pandadoc_id!()
|> Documents.update_document(%{data: pdf, status: status})
end
end
```
Put the following lines into your `router.ex` and configure the WebHook in the pandadoc portal.
```elixir
post "/callbacks/pandadoc", YourAppWeb.PandaDocController, :webhook
```
## Usage
iex> recipients = [
%PandaDoc.Recipient{
email: "<EMAIL>",
first_name: "Jane",
last_name: "Example",
role: "signer1"
}
]
iex> PandaDoc.create_document("Sample PandaDoc PDF.pdf", pdf_bytes, recipients)
{:ok, "msFYActMfJHqNTKH8YSvF1"}
"""
import PandaDoc.RequestBuilder
alias PandaDoc.Connection
alias Tesla.Multipart
@doc """
Creates a new Document from the given PDF file.
## Parameters
- name (String): Name of the document
- pdf_bytes (Binary): PDF content
- recipients ([PandaDoc.Model.Recipient]): Array of Recipients
- fields (Map): [optional] Field-mappings for the PDF
- tags ([String]): [optional] Array of Tags
- parse_form_fields (Boolean): [optional] Should PandaDoc parse old-style PDF Fields?
- connection (PandaDoc.Connection): [optional] Connection to server
## Returns
- `{:ok, document_id}` on success
- `{:error, info}` on failure
## Examples
iex> pdf_bytes = File.read!("/path/to/my.pdf")
iex> recipients = [
%PandaDoc.Model.Recipient{email: "<EMAIL>", first_name: "Jane", last_name: "Example", role: "signer1"}
]
iex> fields = %{
name: %PandaDoc.Model.Field{value: "John", role: "signer1"}
}
iex> PandaDoc.create_document("Sample PandaDoc PDF.pdf", pdf_bytes, recipients, fields, ["tag1"])
{:ok, "msFYActMfJHqNTKH8YSvF1"}
"""
@spec create_document(
String.t(),
binary(),
list(PandaDoc.Model.Recipient.t()),
map() | nil,
list(String.t()) | nil,
boolean() | nil,
Tesla.Env.client() | nil
) ::
{:ok, String.t()}
| {:ok, PandaDoc.Model.BasicDocumentResponse.t()}
| {:ok, PandaDoc.Model.ErrorResponse.t()}
| {:error, Tesla.Env.t()}
def create_document(
name,
pdf_bytes,
recipients,
fields \\ %{},
tags \\ [],
parse_form_fields \\ false,
client \\ Connection.new()
) do
json =
%{
name: name,
tags: tags,
fields: fields,
recipients: recipients,
parse_form_fields: parse_form_fields
}
|> Poison.encode!()
mp =
Multipart.new()
|> Multipart.add_content_type_param("charset=utf-8")
|> Multipart.add_field("data", json)
|> Multipart.add_file_content(pdf_bytes, name,
headers: [{"content-type", "application/pdf"}]
)
with {:ok, %PandaDoc.Model.BasicDocumentResponse{id: id}} <-
%{}
|> method(:post)
|> url("/documents")
|> add_param(:body, :body, mp)
|> Enum.into([])
|> (&Tesla.request(client, &1)).()
|> evaluate_response([
{201, %PandaDoc.Model.BasicDocumentResponse{}},
{400, %PandaDoc.Model.ErrorResponse{}},
{403, %PandaDoc.Model.ErrorResponse{}},
{500, %PandaDoc.Model.ErrorResponse{}}
]) do
{:ok, id}
end
end
@doc """
Move a document to sent status and send an optional email.
## Parameters
- id (String): PandaDoc Document ID
- subject (String): [optional] E-Mail Subject
- message (String): [optional] E-Mail Message
- silent (Boolean): [optional] Disable the e-mail notification to recipients
- connection (PandaDoc.Connection): [optional] Connection to server
## Returns
- `{:ok, %PandaDoc.Model.BasicDocumentResponse{}}` on success
- `{:error, info}` on failure
## Examples
iex> PandaDoc.send_document("msFYActMfJHqNTKH8YSvF1", "Document ready", "Hi there, please sign this document")
{:ok, %PandaDoc.Model.BasicDocumentResponse{id: "msFYActMfJHqNTKH8YSvF1", status: "document.sent"}}
"""
@spec send_document(
String.t(),
String.t() | nil,
String.t() | nil,
boolean() | nil,
Tesla.Env.client() | nil
) ::
{:ok, PandaDoc.Model.BasicDocumentResponse.t()}
| {:ok, PandaDoc.Model.ErrorResponse.t()}
| {:error, Tesla.Env.t()}
def send_document(
id,
subject \\ nil,
message \\ nil,
silent \\ false,
client \\ Connection.new()
) do
json = %{
subject: subject,
message: message,
silent: silent
}
%{}
|> method(:post)
|> url("/documents/#{id}/send")
|> add_param(:body, :body, json)
|> Enum.into([])
|> (&Tesla.request(client, &1)).()
|> evaluate_response([
{200, %PandaDoc.Model.BasicDocumentResponse{}}
])
end
@doc """
Get basic data about a document such as name, status, and dates.
## Parameters
- id (String): PandaDoc Document ID
- connection (PandaDoc.Connection): [optional] Connection to server
## Returns
- `{:ok, %PandaDoc.Model.BasicDocumentResponse{}}` on success
- `{:error, info}` on failure
## Examples
iex> PandaDoc.document_status("msFYActMfJHqNTKH8YSvF1")
{:ok, %PandaDoc.Model.BasicDocumentResponse{id: "msFYActMfJHqNTKH8YSvF1", status: "document.waiting_approval"}}
"""
@spec document_status(String.t(), Tesla.Env.client() | nil) ::
{:ok, PandaDoc.Model.BasicDocumentResponse.t()}
| {:ok, PandaDoc.Model.ErrorResponse.t()}
| {:error, Tesla.Env.t()}
def document_status(id, client \\ Connection.new()) do
%{}
|> method(:get)
|> url("/documents/#{id}")
|> Enum.into([])
|> (&Tesla.request(client, &1)).()
|> evaluate_response([
{200, %PandaDoc.Model.BasicDocumentResponse{}},
{400, %PandaDoc.Model.ErrorResponse{}},
{403, %PandaDoc.Model.ErrorResponse{}},
{404, %PandaDoc.Model.ErrorResponse{}},
{500, %PandaDoc.Model.ErrorResponse{}}
])
end
@doc """
Get detailed data about a document such as name, status, dates, fields, metadata and much more.
## Parameters
- id (String): PandaDoc Document ID
- connection (PandaDoc.Connection): [optional] Connection to server
## Returns
- `{:ok, %PandaDoc.Model.DocumentResponse{}}` on success
- `{:error, info}` on failure
## Examples
iex> PandaDoc.document_details("msFYActMfJHqNTKH8YSvF1")
{:ok, %PandaDoc.Model.DocumentResponse{id: "msFYActMfJHqNTKH8YSvF1", status: "document.waiting_approval"}}
"""
@spec document_details(String.t(), Tesla.Env.client() | nil) ::
{:ok, PandaDoc.Model.DocumentResponse.t()}
| {:ok, PandaDoc.Model.ErrorResponse.t()}
| {:error, Tesla.Env.t()}
def document_details(id, client \\ Connection.new()) do
%{}
|> method(:get)
|> url("/documents/#{id}/details")
|> Enum.into([])
|> (&Tesla.request(client, &1)).()
|> evaluate_response([
{200, %PandaDoc.Model.DocumentResponse{}},
{400, %PandaDoc.Model.ErrorResponse{}},
{403, %PandaDoc.Model.ErrorResponse{}},
{404, %PandaDoc.Model.ErrorResponse{}},
{500, %PandaDoc.Model.ErrorResponse{}}
])
end
@doc """
Generates a link for the given recipient that you can just email or iframe with a validity of `lifetime` seconds (86400 by default).
## Parameters
- id (String): PandaDoc Document ID
- recipient_email (String): Recipient E-Mail Address
- lifetime (Integer): [optional] Seconds for this Link to be valid. Defaults to 86_400.
- connection (PandaDoc.Connection): [optional] Connection to server
## Returns
- `{:ok, url, %DateTime{} = expires_at}` on success
- `{:error, info}` on failure
## Examples
iex> PandaDoc.share_document("msFYActMfJHqNTKH8YSvF1", "<EMAIL>", 900)
{:ok, "https://app.pandadoc.com/s/msFYActMfJHqNTKH8YSvF1", ~U[2017-08-29T22:18:44.315Z]}
"""
@spec share_document(String.t(), String.t(), integer() | nil, Tesla.Env.client() | nil) ::
{:ok, String.t(), DateTime.t()}
| {:ok, PandaDoc.Model.ErrorResponse.t()}
| {:error, Tesla.Env.t()}
def share_document(id, recipient_email, lifetime \\ 86_400, client \\ Connection.new()) do
json = %{
recipient: recipient_email,
lifetime: lifetime
}
with {:ok, %PandaDoc.Model.BasicDocumentResponse{id: share_id, expires_at: expires_at}} <-
%{}
|> method(:post)
|> url("/documents/#{id}/session")
|> add_param(:body, :body, json)
|> Enum.into([])
|> (&Tesla.request(client, &1)).()
|> evaluate_response([
{201, %PandaDoc.Model.BasicDocumentResponse{}},
{400, %PandaDoc.Model.ErrorResponse{}},
{403, %PandaDoc.Model.ErrorResponse{}},
{404, %PandaDoc.Model.ErrorResponse{}},
{500, %PandaDoc.Model.ErrorResponse{}}
]) do
{:ok, "https://app.pandadoc.com/s/#{share_id}", expires_at}
end
end
@doc """
Download a PDF of any document.
## Parameters
- id (String): PandaDoc Document ID
- query (Keywords): [optional] Query parameters for Watermarks
- connection (PandaDoc.Connection): [optional] Connection to server
## Returns
- `{:ok, pdf_bytes}` on success
- `{:error, info}` on failure
## Examples
iex> PandaDoc.download_document("msFYActMfJHqNTKH8YSvF1", watermark_text: "WATERMARKED")
{:ok, pdf_bytes}
"""
@spec download_document(String.t(), keyword(String.t()) | nil, Tesla.Env.client() | nil) ::
{:ok, binary()} | {:ok, PandaDoc.Model.ErrorResponse.t()} | {:error, Tesla.Env.t()}
def download_document(id, query \\ [], client \\ Connection.new()) do
optional_params = %{
:watermark_text => :query,
:watermark_color => :query,
:watermark_font_size => :query,
:watermark_opacity => :query
}
%{}
|> method(:get)
|> url("/documents/#{id}/download")
|> add_optional_params(optional_params, query)
|> Enum.into([])
|> (&Tesla.request(client, &1)).()
|> evaluate_response([
{200, :bytes},
{400, %PandaDoc.Model.ErrorResponse{}},
{403, %PandaDoc.Model.ErrorResponse{}},
{404, %PandaDoc.Model.ErrorResponse{}},
{500, %PandaDoc.Model.ErrorResponse{}}
])
end
@doc """
Download a signed PDF of a completed document.
- id (String): PandaDoc Document ID
- query (Keywords): [optional] Query parameters for Watermarks
- connection (PandaDoc.Connection): [optional] Connection to server
## Returns
- `{:ok, pdf_bytes}` on success
- `{:error, info}` on failure
## Examples
iex> PandaDoc.download_protected_document("msFYActMfJHqNTKH8YSvF1")
{:ok, pdf_bytes}
"""
@spec download_protected_document(
String.t(),
keyword(String.t()) | nil,
Tesla.Env.client() | nil
) ::
{:ok, binary()} | {:ok, PandaDoc.Model.ErrorResponse.t()} | {:error, Tesla.Env.t()}
def download_protected_document(id, query \\ [], client \\ Connection.new()) do
optional_params = %{
:hard_copy_type => :query
}
%{}
|> method(:get)
|> url("/documents/#{id}/download-protected")
|> add_optional_params(optional_params, query)
|> Enum.into([])
|> (&Tesla.request(client, &1)).()
|> evaluate_response([
{200, :bytes},
{400, %PandaDoc.Model.ErrorResponse{}},
{403, %PandaDoc.Model.ErrorResponse{}},
{404, %PandaDoc.Model.ErrorResponse{}},
{500, %PandaDoc.Model.ErrorResponse{}}
])
end
@doc """
Delete a document.
## Parameters
- id (String): PandaDoc Document ID
- connection (PandaDoc.Connection): [optional] Connection to server
## Returns
- `{:ok, :ok}` on success
- `{:error, info}` on failure
## Examples
iex> PandaDoc.delete_document("msFYActMfJHqNTKH8YSvF1")
:ok
"""
@spec delete_document(String.t(), Tesla.Env.client() | nil) ::
{:ok, :atom} | {:ok, PandaDoc.Model.ErrorResponse.t()} | {:error, Tesla.Env.t()}
def delete_document(id, client \\ Connection.new()) do
%{}
|> method(:delete)
|> url("/documents/#{id}")
|> Enum.into([])
|> (&Tesla.request(client, &1)).()
|> evaluate_response([
{204, :ok},
{400, %PandaDoc.Model.ErrorResponse{}},
{403, %PandaDoc.Model.ErrorResponse{}},
{404, %PandaDoc.Model.ErrorResponse{}},
{500, %PandaDoc.Model.ErrorResponse{}}
])
end
@doc """
List documents, optionally filter by a search query or tags.
## Parameters
- query (Keywords): [optional] Query parameters
- connection (PandaDoc.Connection): [optional] Connection to server
## Returns
- `{:ok, %PandaDoc.Model.DocumentListResponse{}}` on success
- `{:error, info}` on failure
## Examples
iex> PandaDoc.list_documents()
{:ok, %PandaDoc.Model.DocumentListResponse{results: [%PandaDoc.Model.BasicDocumentResponse{}]}}
"""
@spec list_documents(keyword(String.t()) | nil, Tesla.Env.client() | nil) ::
{:ok, PandaDoc.Model.DocumentListResponse.t()}
| {:ok, PandaDoc.Model.ErrorResponse.t()}
| {:error, Tesla.Env.t()}
def list_documents(query \\ [], client \\ Connection.new()) do
optional_params = %{
:q => :query,
:tag => :query,
:status => :query,
:count => :query,
:page => :query,
:deleted => :query,
:id => :query,
:template_id => :query,
:folder_uuid => :query
}
%{}
|> method(:get)
|> url("/documents")
|> add_optional_params(optional_params, query)
|> Enum.into([])
|> (&Tesla.request(client, &1)).()
|> evaluate_response([
{200, %PandaDoc.Model.DocumentListResponse{}},
{400, %PandaDoc.Model.ErrorResponse{}},
{403, %PandaDoc.Model.ErrorResponse{}},
{404, %PandaDoc.Model.ErrorResponse{}},
{500, %PandaDoc.Model.ErrorResponse{}}
])
end
end
# source: lib/panda_doc.ex
defmodule Day12.Instruction do
@moduledoc """
Parses an instruction.
"""
@copy_number ~r{cpy (\d+) ([a-d])}
@copy_register ~r{cpy ([a-d]) ([a-d])}
@increment ~r{inc ([a-d])}
@decrement ~r{dec ([a-d])}
@jump_if_non_zero ~r{jnz ([a-d]) (-?\d+)}
@jump_numbers ~r{jnz (-?\d+) (-?\d+)}
def parse(input) do
input = input |> String.trim()
parsers() |> Enum.find_value(&(&1).(input))
end
defp parsers do
[
&parse_copy_number/1,
&parse_copy_register/1,
&parse_increment/1,
&parse_decrement/1,
&parse_jump_if_non_zero/1,
&parse_jump_numbers/1
]
end
defp parse_copy_number(input) do
result = @copy_number |> Regex.run(input)
if result do
[_, num, register] = result
{:copy_number, num |> parse_num, register |> parse_register}
end
end
defp parse_copy_register(input) do
result = @copy_register |> Regex.run(input)
if result do
[_, from, to] = result
{:copy_register, from |> parse_register, to |> parse_register}
end
end
defp parse_increment(input) do
result = @increment |> Regex.run(input)
if result do
[_, register] = result
{:increment, register |> parse_register}
end
end
defp parse_decrement(input) do
result = @decrement |> Regex.run(input)
if result do
[_, register] = result
{:decrement, register |> parse_register}
end
end
defp parse_jump_if_non_zero(input) do
result = @jump_if_non_zero |> Regex.run(input)
if result do
[_, register, steps] = result
{:jump_if_non_zero, register |> parse_register, steps |> parse_num}
end
end
defp parse_jump_numbers(input) do
result = @jump_numbers |> Regex.run(input)
if result do
[_, num, steps] = result
if num == "0", do: :no_op, else: {:jump, steps |> parse_num}
end
end
defp parse_num(input) do
{num, ""} = input |> Integer.parse
num
end
def parse_register(input) do
input |> String.to_atom
end
end
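# Usage sketch (not part of the original file): each line of assembunny input
# parses to a tagged tuple, and a jump on a constant zero folds to :no_op.
#
#     Day12.Instruction.parse("cpy 41 a")   # => {:copy_number, 41, :a}
#     Day12.Instruction.parse("jnz a -2")   # => {:jump_if_non_zero, :a, -2}
#     Day12.Instruction.parse("jnz 0 2")    # => :no_op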
# source: 2016/day12/lib/day12/instruction.ex
defmodule Money.Backend do
@moduledoc false
def define_money_module(config) do
module = inspect(__MODULE__)
backend = config.backend
config = Macro.escape(config)
quote location: :keep, bind_quoted: [module: module, backend: backend, config: config] do
defmodule Money do
@moduledoc false
if Cldr.Config.include_module_docs?(config.generate_docs) do
@moduledoc """
A backend module for Money.
This module provides the same api as the Money
module however:
* It matches the standard behaviour of other
`ex_cldr` based libraries in maintaining the
main public API on the backend module
* It does not require the `:backend` option to
be provided since that is implied through the
use of the backend module.
All the functions in this module delegate to
the functions in `Money`.
"""
end
defdelegate validate_currency(currency_code), to: Cldr
defdelegate known_currencies, to: Cldr
defdelegate known_current_currencies, to: :"Elixir.Money"
defdelegate known_historic_currencies, to: :"Elixir.Money"
defdelegate known_tender_currencies, to: :"Elixir.Money"
@doc """
Returns a %:'Elixir.Money'{} struct from a currency code and a currency amount or
an error tuple of the form `{:error, {exception, message}}`.
## Arguments
* `currency_code` is an ISO4217 three-character upcased binary or atom
* `amount` is an integer, string or Decimal
## Options
`:locale` is any known locale. The locale is used to normalize any
binary (String) amounts to a form that can be consumed by `Decimal.new/1`.
This consists of removing any localised grouping characters and replacing
the localised decimal separator with a ".".
Note that the `currency_code` and `amount` arguments can be supplied in
either order,
## Examples
iex> #{inspect(__MODULE__)}.new(:USD, 100)
#Money<:USD, 100>
iex> #{inspect(__MODULE__)}.new(100, :USD)
#Money<:USD, 100>
iex> #{inspect(__MODULE__)}.new("USD", 100)
#Money<:USD, 100>
iex> #{inspect(__MODULE__)}.new("thb", 500)
#Money<:THB, 500>
iex> #{inspect(__MODULE__)}.new("EUR", Decimal.new(100))
#Money<:EUR, 100>
iex> #{inspect(__MODULE__)}.new(:EUR, "100.30")
#Money<:EUR, 100.30>
iex> #{inspect(__MODULE__)}.new(:XYZZ, 100)
{:error, {Money.UnknownCurrencyError, "The currency :XYZZ is invalid"}}
iex> #{inspect(__MODULE__)}.new("1.000,99", :EUR, locale: "de")
#Money<:EUR, 1000.99>
iex> #{inspect(__MODULE__)}.new 123.445, :USD
{:error,
{Money.InvalidAmountError,
"Float amounts are not supported in new/2 due to potenial " <>
"rounding and precision issues. If absolutely required, " <>
"use Money.from_float/2"}}
"""
@spec new(
:"Elixir.Money".amount() | :"Elixir.Money".currency_code(),
:"Elixir.Money".amount()
| :"Elixir.Money".currency_code(),
Keyword.t()
) ::
:"Elixir.Money".t() | {:error, {module(), String.t()}}
def new(currency_code, amount, options \\ []) do
:"Elixir.Money".new(currency_code, amount, options)
end
@doc """
Returns a %:'Elixir.Money'{} struct from a currency code and a currency amount. Raises an
exception if the currency code is invalid.
## Arguments
* `currency_code` is an ISO4217 three-character upcased binary or atom
* `amount` is an integer, float or Decimal
## Examples
Money.new!(:XYZZ, 100)
** (Money.UnknownCurrencyError) Currency :XYZZ is not known
(ex_money) lib/money.ex:177: Money.new!/2
"""
@spec new!(
:"Elixir.Money".amount() | :"Elixir.Money".currency_code(),
:"Elixir.Money".amount()
| :"Elixir.Money".currency_code(),
Keyword.t()
) ::
:"Elixir.Money".t() | no_return()
def new!(currency_code, amount, options \\ []) do
:"Elixir.Money".new!(currency_code, amount, options)
end
@doc """
Returns a %:'Elixir.Money'{} struct from a currency code and a float amount, or
an error tuple of the form `{:error, {exception, message}}`.
Floats are fraught with danger in computer arithmetic due to the
unexpected loss of precision during rounding. The IEEE754 standard
indicates that a number with a precision of 16 digits should
round-trip convert without loss of fidelity. This function supports
numbers with a precision up to 15 digits and will error if the
provided amount is outside that range.
**Note** that `Money` cannot detect lack of precision or rounding errors
introduced upstream. This function therefore should be used with
great care and its use should be considered potentially harmful.
## Arguments
* `currency_code` is an ISO4217 three-character upcased binary or atom
* `amount` is a float
## Examples
iex> #{inspect(__MODULE__)}.from_float 1.23456, :USD
#Money<:USD, 1.23456>
iex> #{inspect(__MODULE__)}.from_float 1.234567890987656, :USD
{:error,
{Money.InvalidAmountError,
"The precision of the float 1.234567890987656 is " <>
"greater than 15 which could lead to unexpected results. " <>
"Reduce the precision or call Money.new/2 with a Decimal or String amount"}}
"""
# @doc since: "2.0.0"
@spec from_float(
float | :"Elixir.Money".currency_code(),
float | :"Elixir.Money".currency_code()
) ::
:"Elixir.Money".t() | {:error, {module(), String.t()}}
def from_float(currency_code, amount) do
:"Elixir.Money".from_float(currency_code, amount)
end
@doc """
Returns a %:'Elixir.Money'{} struct from a currency code and a float amount, or
raises an exception if the currency code is invalid.
See `Money.from_float/2` for further information.
**Note** that `Money` cannot detect lack of precision or rounding errors
introduced upstream. This function therefore should be used with
great care and its use should be considered potentially harmful.
## Arguments
* `currency_code` is an ISO4217 three-character upcased binary or atom
* `amount` is a float
## Examples
iex> #{inspect(__MODULE__)}.from_float!(:USD, 1.234)
#Money<:USD, 1.234>
Money.from_float!(:USD, 1.234567890987654)
#=> ** (Money.InvalidAmountError) The precision of the float 1.234567890987654 is greater than 15 which could lead to unexpected results. Reduce the precision or call Money.new/2 with a Decimal or String amount
(ex_money) lib/money.ex:293: Money.from_float!/2
"""
# @doc since: "2.0.0"
@spec from_float!(:"Elixir.Money".currency_code(), float) ::
:"Elixir.Money".t() | no_return()
def from_float!(currency_code, amount) do
:"Elixir.Money".from_float!(currency_code, amount)
end
@doc """
Parse a string and return a `Money.t` or an error.
The string to be parsed is required to have a currency
code and an amount. The currency code may be placed
before the amount or after, but not both.
Parsing is strict. Additional text surrounding the
currency code and amount will cause the parse to
fail.
## Arguments
* `string` is a string to be parsed
* `options` is a keyword list of options that is
passed to `Money.new/3` with the exception of
the options listed below
## Options
* `backend` is any module() that includes `use Cldr` and therefore
is a `Cldr` backend module(). The default is `Money.default_backend()`
* `locale_name` is any valid locale name returned by `Cldr.known_locale_names/1`
or a `Cldr.LanguageTag` struct returned by `Cldr.Locale.new!/2`
The default is `<backend>.get_locale()`
* `currency_filter` is an `atom` or list of `atoms` representing the
currency types to be considered for a match. If a list of
atoms is given, the currency must meet all criteria for
it to be considered.
* `:all`, the default, considers all currencies
* `:current` considers those currencies that have a `:to`
date of `nil` and are also known ISO4217 currencies
* `:historic` is the opposite of `:current`
* `:tender` considers currencies that are legal tender
* `:unannotated` considers currencies that don't have
"(some string)" in their names. These are usually
financial instruments.
* `fuzzy` is a float greater than `0.0` and less than or
equal to `1.0` which is used as input to the
`String.jaro_distance/2` to determine if the provided
currency string is *close enough* to a known currency
string to definitively identify a currency code.
It is recommended to use numbers greater than `0.8` in
order to reduce false positives.
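How a fuzzy threshold filters candidate matches can be sketched with `String.jaro_distance/2` directly. The candidate list and threshold below are assumptions for illustration; the real matching runs over the full CLDR currency data:

```elixir
# Score an input against a few hypothetical currency descriptions and keep
# only candidates that clear the threshold.
candidates = ["euro", "us dollar", "yen"]
threshold = 0.8

best_match = fn input ->
  scored =
    candidates
    |> Enum.map(&{&1, String.jaro_distance(input, &1)})
    |> Enum.filter(fn {_name, score} -> score >= threshold end)

  case scored do
    [] -> nil
    _ -> Enum.max_by(scored, fn {_name, score} -> score end)
  end
end

best_match.("euros") # close enough to "euro" at a 0.8 threshold
best_match.("zloty") # no candidate clears the threshold, so nil
```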
## Returns
* a `Money.t` if parsing is successful or
* `{:error, {exception, reason}}` if an error is
detected.
## Examples
iex> #{inspect(__MODULE__)}.parse("USD 100")
#Money<:USD, 100>
iex> #{inspect(__MODULE__)}.parse "USD 100,00", locale: "de"
#Money<:USD, 100.00>
iex> #{inspect(__MODULE__)}.parse("100 USD")
#Money<:USD, 100>
iex> #{inspect(__MODULE__)}.parse("100 eurosports", fuzzy: 0.8)
#Money<:EUR, 100>
iex> #{inspect(__MODULE__)}.parse("100 eurosports", fuzzy: 0.9)
{:error,
{Money.UnknownCurrencyError, "The currency \\"eurosports\\" is unknown or not supported"}}
iex> #{inspect(__MODULE__)}.parse("100 afghan afghanis")
#Money<:AFN, 100>
iex> #{inspect(__MODULE__)}.parse("100")
{:error,
{Money.Invalid, "A currency code, symbol or description must be specified but was not found in \\"100\\""}}
iex> #{inspect(__MODULE__)}.parse("USD 100 with trailing text")
{:error,
{Money.ParseError, "Could not parse \\"USD 100 with trailing text\\"."}}
"""
# @doc since: "3.2.0"
@spec parse(String.t(), Keyword.t()) ::
:"Elixir.Money".t() | {:error, {module(), String.t()}}
def parse(string, options \\ []) do
:"Elixir.Money".parse(string, options)
end
@doc """
Returns a formatted string representation of a `Money{}`.
Formatting is performed according to the rules defined by CLDR. See
`Cldr.Number.to_string/2` for formatting options. The default is to format
as a currency which applies the appropriate rounding and fractional digits
for the currency.
## Arguments
* `money` is any valid `Money.t` type returned
by `Money.new/2`
* `options` is a keyword list of options
## Returns
* `{:ok, string}` or
* `{:error, reason}`
## Options
* `:backend` is any CLDR backend module. The default is
`Money.default_backend()`.
* Any other options are passed to `Cldr.Number.to_string/3`
## Examples
iex> #{inspect(__MODULE__)}.to_string Money.new(:USD, 1234)
{:ok, "$1,234.00"}
iex> #{inspect(__MODULE__)}.to_string Money.new(:JPY, 1234)
{:ok, "¥1,234"}
iex> #{inspect(__MODULE__)}.to_string Money.new(:THB, 1234)
{:ok, "THB 1,234.00"}
iex> #{inspect(__MODULE__)}.to_string Money.new(:USD, 1234), format: :long
{:ok, "1,234 US dollars"}
"""
@spec to_string(:"Elixir.Money".t(), Keyword.t() | Cldr.Number.Format.Options.t()) ::
{:ok, String.t()} | {:error, {atom, String.t()}}
def to_string(money, options \\ []) do
options = Keyword.put(options, :backend, unquote(backend))
:"Elixir.Money".to_string(money, options)
end
@doc """
Returns a formatted string representation of a `Money.t` or raises if
there is an error.
Formatting is performed according to the rules defined by CLDR. See
`Cldr.Number.to_string!/2` for formatting options. The default is to format
as a currency which applies the appropriate rounding and fractional digits
for the currency.
## Arguments
* `money` is any valid `Money.t` type returned
by `Money.new/2`
* `options` is a keyword list of options
## Options
* `:backend` is any CLDR backend module. The default is
`Money.default_backend()`.
* Any other options are passed to `Cldr.Number.to_string/3`
## Examples
iex> #{inspect(__MODULE__)}.to_string! Money.new(:USD, 1234)
"$1,234.00"
iex> #{inspect(__MODULE__)}.to_string! Money.new(:JPY, 1234)
"¥1,234"
iex> #{inspect(__MODULE__)}.to_string! Money.new(:THB, 1234)
"THB 1,234.00"
iex> #{inspect(__MODULE__)}.to_string! Money.new(:USD, 1234), format: :long
"1,234 US dollars"
"""
@spec to_string!(:"Elixir.Money".t(), Keyword.t()) :: String.t() | no_return()
def to_string!(%:"Elixir.Money"{} = money, options \\ []) do
options = Keyword.put(options, :backend, unquote(backend))
:"Elixir.Money".to_string!(money, options)
end
@doc """
Returns the amount part of a `Money` type as a `Decimal`
## Arguments
* `money` is any valid `Money.t` type returned
by `Money.new/2`
## Returns
* a `Decimal.t`
## Example
iex> m = #{inspect(__MODULE__)}.new("USD", 100)
iex> #{inspect(__MODULE__)}.to_decimal(m)
#Decimal<100>
"""
@spec to_decimal(money :: :"Elixir.Money".t()) :: Decimal.t()
def to_decimal(%:"Elixir.Money"{} = money) do
:"Elixir.Money".to_decimal(money)
end
@doc """
The absolute value of a `Money` amount.
Returns a `Money` type with a positive sign for the amount.
## Arguments
* `money` is any valid `Money.t` type returned
by `Money.new/2`
## Returns
* a `Money.t`
## Example
iex> m = #{inspect(__MODULE__)}.new("USD", -100)
iex> #{inspect(__MODULE__)}.abs(m)
#Money<:USD, 100>
"""
@spec abs(money :: :"Elixir.Money".t()) :: :"Elixir.Money".t()
def abs(%:"Elixir.Money"{} = money) do
:"Elixir.Money".abs(money)
end
@doc """
Add two `Money` values.
## Arguments
* `money_1` and `money_2` are any valid `Money.t` types returned
by `Money.new/2`
## Returns
* `{:ok, money}` or
* `{:error, reason}`
## Example
iex> #{inspect(__MODULE__)}.add Money.new(:USD, 200), Money.new(:USD, 100)
{:ok, Money.new(:USD, 300)}
iex> #{inspect(__MODULE__)}.add Money.new(:USD, 200), Money.new(:AUD, 100)
{:error, {ArgumentError, "Cannot add monies with different currencies. " <>
"Received :USD and :AUD."}}
"""
@spec add(money_1 :: :"Elixir.Money".t(), money_2 :: :"Elixir.Money".t()) ::
{:ok, :"Elixir.Money".t()} | {:error, {module(), String.t()}}
def add(%:"Elixir.Money"{} = money_1, %:"Elixir.Money"{} = money_2) do
:"Elixir.Money".add(money_1, money_2)
end
@doc """
Add two `Money` values and raise on error.
## Arguments
* `money_1` and `money_2` are any valid `Money.t` types returned
by `Money.new/2`
## Returns
* `{:ok, money}` or
* raises an exception
## Examples
iex> #{inspect(__MODULE__)}.add! Money.new(:USD, 200), Money.new(:USD, 100)
#Money<:USD, 300>
#{inspect(__MODULE__)}.add! Money.new(:USD, 200), Money.new(:CAD, 500)
** (ArgumentError) Cannot add two %:'Elixir.Money'{} with different currencies. Received :USD and :CAD.
"""
def add!(%:"Elixir.Money"{} = money_1, %:"Elixir.Money"{} = money_2) do
:"Elixir.Money".add!(money_1, money_2)
end
@doc """
Subtract one `Money` value struct from another.
## Options
* `money_1` and `money_2` are any valid `Money.t` types returned
by `Money.new/2`
## Returns
* `{:ok, money}` or
* `{:error, reason}`
## Example
iex> #{inspect(__MODULE__)}.sub Money.new(:USD, 200), Money.new(:USD, 100)
{:ok, Money.new(:USD, 100)}
"""
@spec sub(money_1 :: :"Elixir.Money".t(), money_2 :: :"Elixir.Money".t()) ::
{:ok, :"Elixir.Money".t()} | {:error, {module(), String.t()}}
def sub(%:"Elixir.Money"{} = money_1, %:"Elixir.Money"{} = money_2) do
:"Elixir.Money".sub(money_1, money_2)
end
@doc """
Subtract one `Money` value struct from another and raise on error.
Returns either `{:ok, money}` or `{:error, reason}`.
## Arguments
* `money_1` and `money_2` are any valid `Money.t` types returned
by `Money.new/2`
## Returns
* a `Money.t` struct or
* raises an exception
## Examples
iex> #{inspect(__MODULE__)}.sub! Money.new(:USD, 200), Money.new(:USD, 100)
#Money<:USD, 100>
#{inspect(__MODULE__)}.sub! Money.new(:USD, 200), Money.new(:CAD, 500)
** (ArgumentError) Cannot subtract monies with different currencies. Received :USD and :CAD.
"""
@spec sub!(money_1 :: :"Elixir.Money".t(), money_2 :: :"Elixir.Money".t()) ::
:"Elixir.Money".t() | none()
def sub!(%:"Elixir.Money"{} = money_1, %:"Elixir.Money"{} = money_2) do
:"Elixir.Money".sub!(money_1, money_2)
end
@doc """
Multiply a `Money` value by a number.
## Arguments
* `money` is any valid `Money.t` type returned
by `Money.new/2`
* `number` is an integer, float or `Decimal.t`
> Note that multiplying one %:'Elixir.Money'{} by another is not supported.
## Returns
* `{:ok, money}` or
* `{:error, reason}`
## Example
iex> #{inspect(__MODULE__)}.mult(Money.new(:USD, 200), 2)
{:ok, Money.new(:USD, 400)}
iex> #{inspect(__MODULE__)}.mult(Money.new(:USD, 200), "xx")
{:error, {ArgumentError, "Cannot multiply money by \\"xx\\""}}
"""
@spec mult(:"Elixir.Money".t(), Cldr.Math.number_or_decimal()) ::
{:ok, :"Elixir.Money".t()} | {:error, {module(), String.t()}}
def mult(%:"Elixir.Money"{} = money, number) do
:"Elixir.Money".mult(money, number)
end
@doc """
Multiply a `Money` value by a number and raise on error.
## Arguments
* `money` is any valid `Money.t` types returned
by `Money.new/2`
* `number` is an integer, float or `Decimal.t`
## Returns
* a `Money.t` or
* raises an exception
## Examples
iex> #{inspect(__MODULE__)}.mult!(Money.new(:USD, 200), 2)
#Money<:USD, 400>
#{inspect(__MODULE__)}.mult!(Money.new(:USD, 200), :invalid)
** (ArgumentError) Cannot multiply money by :invalid
"""
@spec mult!(:"Elixir.Money".t(), Cldr.Math.number_or_decimal()) ::
:"Elixir.Money".t() | none()
def mult!(%:"Elixir.Money"{} = money, number) do
:"Elixir.Money".mult!(money, number)
end
@doc """
Divide a `Money` value by a number.
## Arguments
* `money` is any valid `Money.t` types returned
by `Money.new/2`
* `number` is an integer, float or `Decimal.t`
> Note that dividing one %:'Elixir.Money'{} by another is not supported.
## Returns
* `{:ok, money}` or
* `{:error, reason}`
## Example
iex> #{inspect(__MODULE__)}.div Money.new(:USD, 200), 2
{:ok, Money.new(:USD, 100)}
iex> #{inspect(__MODULE__)}.div(Money.new(:USD, 200), "xx")
{:error, {ArgumentError, "Cannot divide money by \\"xx\\""}}
"""
@spec div(:"Elixir.Money".t(), Cldr.Math.number_or_decimal()) ::
{:ok, :"Elixir.Money".t()} | {:error, {module(), String.t()}}
def div(%:"Elixir.Money"{} = money_1, number) do
:"Elixir.Money".div(money_1, number)
end
@doc """
Divide a `Money` value by a number and raise on error.
## Arguments
* `money` is any valid `Money.t` types returned
by `Money.new/2`
* `number` is an integer, float or `Decimal.t`
## Returns
* a `Money.t` struct or
* raises an exception
## Examples
iex> #{inspect(__MODULE__)}.div Money.new(:USD, 200), 2
{:ok, Money.new(:USD, 100)}
#{inspect(__MODULE__)}.div(Money.new(:USD, 200), "xx")
** (ArgumentError) Cannot divide money by "xx"
"""
def div!(%:"Elixir.Money"{} = money, number) do
:"Elixir.Money".div!(money, number)
end
@doc """
Returns a boolean indicating if two `Money` values are equal
## Arguments
* `money_1` and `money_2` are any valid `Money.t` types returned
by `Money.new/2`
## Returns
* `true` or `false`
## Example
iex> #{inspect(__MODULE__)}.equal? Money.new(:USD, 200), Money.new(:USD, 200)
true
iex> #{inspect(__MODULE__)}.equal? Money.new(:USD, 200), Money.new(:USD, 100)
false
"""
@spec equal?(money_1 :: :"Elixir.Money".t(), money_2 :: :"Elixir.Money".t()) :: boolean
def equal?(%:"Elixir.Money"{} = money_1, %:"Elixir.Money"{} = money_2) do
:"Elixir.Money".equal?(money_1, money_2)
end
@doc """
Compares two `Money` values numerically. If the first is greater than
the second, `:gt` is returned; if less, `:lt`; if equal, `:eq`.
## Arguments
* `money_1` and `money_2` are any valid `Money.t` types returned
by `Money.new/2`
## Returns
* `:gt` | `:eq` | `:lt` or
* `{:error, {module(), String.t}}`
## Examples
iex> #{inspect(__MODULE__)}.compare Money.new(:USD, 200), Money.new(:USD, 100)
:gt
iex> #{inspect(__MODULE__)}.compare Money.new(:USD, 200), Money.new(:USD, 200)
:eq
iex> #{inspect(__MODULE__)}.compare Money.new(:USD, 200), Money.new(:USD, 500)
:lt
iex> #{inspect(__MODULE__)}.compare Money.new(:USD, 200), Money.new(:CAD, 500)
{:error,
{ArgumentError,
"Cannot compare monies with different currencies. Received :USD and :CAD."}}
"""
@spec compare(money_1 :: :"Elixir.Money".t(), money_2 :: :"Elixir.Money".t()) ::
:gt | :eq | :lt | {:error, {module(), String.t()}}
def compare(%:"Elixir.Money"{} = money_1, %:"Elixir.Money"{} = money_2) do
:"Elixir.Money".compare(money_1, money_2)
end
@doc """
Compares two `Money` values numerically and raises on error.
## Arguments
* `money_1` and `money_2` are any valid `Money.t` types returned
by `Money.new/2`
## Returns
* `:gt` | `:eq` | `:lt` or
* raises an exception
## Examples
#{inspect(__MODULE__)}.compare! Money.new(:USD, 200), Money.new(:CAD, 500)
** (ArgumentError) Cannot compare monies with different currencies. Received :USD and :CAD.
"""
def compare!(%:"Elixir.Money"{} = money_1, %:"Elixir.Money"{} = money_2) do
:"Elixir.Money".compare!(money_1, money_2)
end
@doc """
Compares two `Money` values numerically. If the first number is greater
than the second, `1` is returned; if less than, `-1` is returned;
if both numbers are equal, `0` is returned.
## Arguments
* `money_1` and `money_2` are any valid `Money.t` types returned
by `Money.new/2`
## Returns
* `-1` | `0` | `1` or
* `{:error, {module(), String.t}}`
## Examples
iex> #{inspect(__MODULE__)}.cmp Money.new(:USD, 200), Money.new(:USD, 100)
1
iex> #{inspect(__MODULE__)}.cmp Money.new(:USD, 200), Money.new(:USD, 200)
0
iex> #{inspect(__MODULE__)}.cmp Money.new(:USD, 200), Money.new(:USD, 500)
-1
iex> #{inspect(__MODULE__)}.cmp Money.new(:USD, 200), Money.new(:CAD, 500)
{:error,
{ArgumentError,
"Cannot compare monies with different currencies. Received :USD and :CAD."}}
"""
@spec cmp(money_1 :: :"Elixir.Money".t(), money_2 :: :"Elixir.Money".t()) ::
-1 | 0 | 1 | {:error, {module(), String.t()}}
def cmp(%:"Elixir.Money"{} = money_1, %:"Elixir.Money"{} = money_2) do
:"Elixir.Money".cmp(money_1, money_2)
end
@doc """
Compares two `Money` values numerically and raises on error.
## Arguments
* `money_1` and `money_2` are any valid `Money.t` types returned
by `Money.new/2`
## Returns
* `-1` | `0` | `1` or
* raises an exception
## Examples
#{inspect(__MODULE__)}.cmp! Money.new(:USD, 200), Money.new(:CAD, 500)
** (ArgumentError) Cannot compare monies with different currencies. Received :USD and :CAD.
"""
def cmp!(%:"Elixir.Money"{} = money_1, %:"Elixir.Money"{} = money_2) do
:"Elixir.Money".cmp!(money_1, money_2)
end
@doc """
Split a `Money` value into a number of parts maintaining the currency's
precision and rounding and ensuring that the parts sum to the original
amount.
## Arguments
* `money` is a `%:'Elixir.Money'{}` struct
* `parts` is an integer number of parts into which the `money` is split
Returns a tuple `{dividend, remainder}` as the function result
derived as follows:
1. Round the money amount to the required currency precision using
`Money.round/1`
2. Divide the result of step 1 by the integer divisor
3. Round the result of the division to the precision of the currency
using `Money.round/1`
4. Return two numbers: the result of the division and any remainder
that could not be applied given the precision of the currency.
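The four steps above can be sketched with integer minor units (cents); this is a hypothetical helper, not the library's Decimal-based implementation:

```elixir
defmodule SplitSketch do
  @moduledoc false
  # Split an amount (in minor units) into `parts`, returning the per-part
  # amount and the remainder that cannot be distributed at this precision.
  def split(minor_units, parts) when is_integer(parts) and parts > 0 do
    per_part = div(minor_units, parts)
    {per_part, minor_units - per_part * parts}
  end
end

# $123.70 into 9 parts -> $13.74 per part with $0.04 left over
SplitSketch.split(12370, 9)
#=> {1374, 4}
```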
## Examples
#{inspect(__MODULE__)}.split Money.new(123.5, :JPY), 3
{¥41, ¥1}
#{inspect(__MODULE__)}.split Money.new(123.4, :JPY), 3
{¥41, ¥0}
#{inspect(__MODULE__)}.split Money.new(123.7, :USD), 9
{$13.74, $0.04}
"""
@spec split(:"Elixir.Money".t(), non_neg_integer) ::
{:"Elixir.Money".t(), :"Elixir.Money".t()}
def split(%:"Elixir.Money"{} = money, parts) when is_integer(parts) do
:"Elixir.Money".split(money, parts)
end
@doc """
Round a `Money` value into the acceptable range for the requested currency.
## Arguments
* `money` is a `%:'Elixir.Money'{}` struct
* `opts` is a keyword list of options
## Options
* `:rounding_mode` that defines how the number will be rounded. See
`Decimal.Context`. The default is `:half_even` which is also known
as "banker's rounding"
* `:currency_digits` which determines the rounding increment.
The valid options are `:cash`, `:accounting` and `:iso` or
an integer value representing the rounding factor. The
default is `:iso`.
## Notes
There are two kinds of rounding applied:
1. Round to the appropriate number of fractional digits
2. Apply an appropriate rounding increment. Most currencies
round to the same precision as their number of decimal digits, but some,
such as `:CHF`, round to a minimum increment such as `0.05` for cash
amounts. The rounding increment is applied when the option
`:currency_digits` is set to `:cash`
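Rounding to a cash increment can be sketched in integer minor units: the `0.05` increment for `:CHF` cash amounts becomes an increment of 5 cents (illustrative only; Money uses `Decimal` arithmetic):

```elixir
# Round an amount in minor units to the nearest multiple of `increment`.
round_to_increment = fn minor_units, increment ->
  round(minor_units / increment) * increment
end

round_to_increment.(12373, 5)
#=> 12375, i.e. CHF 123.75
```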
## Examples
iex> #{inspect(__MODULE__)}.round Money.new("123.73", :CHF), currency_digits: :cash
#Money<:CHF, 123.75>
iex> #{inspect(__MODULE__)}.round Money.new("123.73", :CHF), currency_digits: 0
#Money<:CHF, 124>
iex> #{inspect(__MODULE__)}.round Money.new("123.7456", :CHF)
#Money<:CHF, 123.75>
iex> #{inspect(__MODULE__)}.round Money.new("123.7456", :JPY)
#Money<:JPY, 124>
"""
@spec round(:"Elixir.Money".t(), Keyword.t()) :: :"Elixir.Money".t()
def round(%:"Elixir.Money"{} = money, options \\ []) do
:"Elixir.Money".round(money, options)
end
@doc """
Set the fractional part of a `Money`.
## Arguments
* `money` is a `%:'Elixir.Money'{}` struct
* `fraction` is an integer amount that will be set
as the fraction of the `money`
## Notes
The fraction can only be set if it matches the number of
decimal digits for the currency associated with the `money`.
Therefore, for a currency with 2 decimal digits, the
maximum for `fraction` is `99`.
## Examples
iex> #{inspect(__MODULE__)}.put_fraction Money.new(:USD, "2.49"), 99
#Money<:USD, 2.99>
iex> #{inspect(__MODULE__)}.put_fraction Money.new(:USD, "2.49"), 0
#Money<:USD, 2.0>
iex> #{inspect(__MODULE__)}.put_fraction Money.new(:USD, "2.49"), 999
{:error,
{Money.InvalidAmountError, "Rounding up to 999 is invalid for currency :USD"}}
"""
def put_fraction(%:"Elixir.Money"{} = money, upto) when is_integer(upto) do
:"Elixir.Money".put_fraction(money, upto)
end
@doc """
Convert `money` from one currency to another.
## Arguments
* `money` is any `Money.t` struct returned by `Cldr.Currency.new/2`
* `to_currency` is a valid currency code into which the `money` is converted
* `rates` is a `Map` of currency rates where the map key is an upcased
atom or string and the value is a Decimal conversion factor. The default is the
latest available exchange rates returned from `Money.ExchangeRates.latest_rates()`
## Examples
#{inspect(__MODULE__)}.to_currency(Money.new(:USD, 100), :AUD, %{USD: Decimal.new(1), AUD: Decimal.from_float(0.7345)})
{:ok, #Money<:AUD, 73.4500>}
#{inspect(__MODULE__)}.to_currency(Money.new("USD", 100), "AUD", %{"USD" => Decimal.new(1), "AUD" => Decimal.from_float(0.7345)})
{:ok, #Money<:AUD, 73.4500>}
iex> #{inspect(__MODULE__)}.to_currency Money.new(:USD, 100), :AUDD, %{USD: Decimal.new(1), AUD: Decimal.from_float(0.7345)}
{:error, {Cldr.UnknownCurrencyError, "The currency :AUDD is invalid"}}
iex> #{inspect(__MODULE__)}.to_currency Money.new(:USD, 100), :CHF, %{USD: Decimal.new(1), AUD: Decimal.from_float(0.7345)}
{:error, {Money.ExchangeRateError, "No exchange rate is available for currency :CHF"}}
"""
@spec to_currency(
:"Elixir.Money".t(),
:"Elixir.Money".currency_code(),
:"Elixir.Money.ExchangeRates".t()
| {:ok, :"Elixir.Money.ExchangeRates".t()}
| {:error, {module(), String.t()}}
) :: {:ok, :"Elixir.Money".t()} | {:error, {module(), String.t()}}
def to_currency(money, to_currency, rates \\ :"Elixir.Money.ExchangeRates".latest_rates()) do
:"Elixir.Money".to_currency(money, to_currency, rates)
end
@doc """
Convert `money` from one currency to another and raises on error
## Arguments
* `money` is any `Money.t` struct returned by `Cldr.Currency.new/2`
* `to_currency` is a valid currency code into which the `money` is converted
* `rates` is a `Map` of currency rates where the map key is an upcased
atom or string and the value is a Decimal conversion factor. The default is the
latest available exchange rates returned from `Money.ExchangeRates.latest_rates()`
## Examples
iex> #{inspect(__MODULE__)}.to_currency! Money.new(:USD, 100), :AUD, %{USD: Decimal.new(1), AUD: Decimal.from_float(0.7345)}
#Money<:AUD, 73.4500>
iex> #{inspect(__MODULE__)}.to_currency! Money.new("USD", 100), "AUD", %{"USD" => Decimal.new(1), "AUD" => Decimal.from_float(0.7345)}
#Money<:AUD, 73.4500>
#{inspect(__MODULE__)}.to_currency! Money.new(:USD, 100), :ZZZ, %{USD: Decimal.new(1), AUD: Decimal.from_float(0.7345)}
** (Cldr.UnknownCurrencyError) Currency :ZZZ is not known
"""
@spec to_currency!(
:"Elixir.Money".t(),
:"Elixir.Money".currency_code(),
:"Elixir.Money.ExchangeRates".t()
| {:ok, :"Elixir.Money.ExchangeRates".t()}
| {:error, {module(), String.t()}}
) :: :"Elixir.Money".t() | no_return
def to_currency!(
%:"Elixir.Money"{} = money,
currency,
rates \\ :"Elixir.Money.ExchangeRates".latest_rates()
) do
:"Elixir.Money".to_currency!(money, currency, rates)
end
@doc """
Returns the effective cross-rate to convert from one currency
to another.
## Arguments
* `from` is any `Money.t` struct returned by `Cldr.Currency.new/2` or a valid
currency code
* `to_currency` is a valid currency code into which the `money` is converted
* `rates` is a `Map` of currency rates where the map key is an upcased
atom or string and the value is a Decimal conversion factor. The default is the
latest available exchange rates returned from `Money.ExchangeRates.latest_rates()`
## Examples
#{inspect(__MODULE__)}.cross_rate(Money.new(:USD, 100), :AUD, %{USD: Decimal.new(1), AUD: Decimal.new("0.7345")})
{:ok, #Decimal<0.7345>}
#{inspect(__MODULE__)}.cross_rate Money.new(:USD, 100), :ZZZ, %{USD: Decimal.new(1), AUD: Decimal.new("0.7345")}
** (Cldr.UnknownCurrencyError) Currency :ZZZ is not known
"""
@spec cross_rate(
:"Elixir.Money".t() | :"Elixir.Money".currency_code(),
:"Elixir.Money".currency_code(),
:"Elixir.Money.ExchangeRates".t() | {:ok, :"Elixir.Money.ExchangeRates".t()}
) :: {:ok, Decimal.t()} | {:error, {module(), String.t()}}
def cross_rate(from, to, rates \\ :"Elixir.Money.ExchangeRates".latest_rates()) do
:"Elixir.Money".cross_rate(from, to, rates)
end
@doc """
Returns the effective cross-rate to convert from one currency
to another.
## Arguments
* `from` is any `Money.t` struct returned by `Cldr.Currency.new/2` or a valid
currency code
* `to_currency` is a valid currency code into which the `money` is converted
* `rates` is a `Map` of currency rates where the map key is an upcased
atom or string and the value is a Decimal conversion factor. The default is the
latest available exchange rates returned from `Money.ExchangeRates.latest_rates()`
## Examples
iex> #{inspect(__MODULE__)}.cross_rate!(Money.new(:USD, 100), :AUD, %{USD: Decimal.new(1), AUD: Decimal.new("0.7345")})
#Decimal<0.7345>
iex> #{inspect(__MODULE__)}.cross_rate!(:USD, :AUD, %{USD: Decimal.new(1), AUD: Decimal.new("0.7345")})
#Decimal<0.7345>
#{inspect(__MODULE__)}.cross_rate! Money.new(:USD, 100), :ZZZ, %{USD: Decimal.new(1), AUD: Decimal.new("0.7345")}
** (Cldr.UnknownCurrencyError) Currency :ZZZ is not known
"""
@spec cross_rate!(
:"Elixir.Money".t() | :"Elixir.Money".currency_code(),
:"Elixir.Money".currency_code(),
:"Elixir.Money.ExchangeRates".t() | {:ok, :"Elixir.Money.ExchangeRates".t()}
) :: Decimal.t() | no_return
def cross_rate!(from, to_currency, rates \\ :"Elixir.Money.ExchangeRates".latest_rates()) do
:"Elixir.Money".cross_rate!(from, to_currency, rates)
end
@doc """
Calls `Decimal.reduce/1` on the given `:'Elixir.Money'.t()`
This will normalize the coefficient and exponent of the
decimal amount in a standard way that may aid in
native comparison of `%:'Elixir.Money'.t()` items.
## Example
iex> x = %Money{currency: :USD, amount: %Decimal{sign: 1, coef: 42, exp: 0}}
#Money<:USD, 42>
iex> y = %Money{currency: :USD, amount: %Decimal{sign: 1, coef: 4200000000, exp: -8}}
#Money<:USD, 42.00000000>
iex> x == y
false
iex> y = Money.normalize(y)
#Money<:USD, 42>
iex> x == y
true
"""
@spec normalize(:"Elixir.Money".t()) :: :"Elixir.Money".t()
@doc since: "5.0.0"
def normalize(%:"Elixir.Money"{} = money) do
:"Elixir.Money".normalize(money)
end
@deprecated "Use #{inspect __MODULE__}.normalize/1 instead."
def reduce(money) do
normalize(money)
end
@doc """
Returns a tuple comprising the currency code, integer amount,
exponent, and remainder.
Some services require submission of money items as an integer
with an implied exponent that is appropriate to the currency.
Rather than return only the integer, `Money.to_integer_exp`
returns the currency code, integer, exponent and remainder.
The remainder is included because to return an integer
money with an implied exponent the `Money` has to be rounded
potentially leaving a remainder.
## Arguments
* `money` is any `Money.t` struct returned by `Cldr.Currency.new/2`
## Notes
* Since the returned integer is expected to have the implied fractional
digits the `Money` needs to be rounded which is what this function does.
## Example
iex> m = #{inspect(__MODULE__)}.new(:USD, "200.012356")
#Money<:USD, 200.012356>
iex> #{inspect(__MODULE__)}.to_integer_exp(m)
{:USD, 20001, -2, Money.new(:USD, "0.002356")}
iex> m = #{inspect(__MODULE__)}.new(:USD, "200.00")
#Money<:USD, 200.00>
iex> #{inspect(__MODULE__)}.to_integer_exp(m)
{:USD, 20000, -2, Money.new(:USD, "0.00")}
"""
def to_integer_exp(%:"Elixir.Money"{} = money) do
:"Elixir.Money".to_integer_exp(money)
end
@doc """
Convert an integer representation of money into a `Money` struct.
This is the inverse operation of `Money.to_integer_exp/1`. Note
that the ISO definition of currency digits (subunit) is *always*
used. This is, in some cases, such as the Colombian Peso (COP),
different from the CLDR definition.
## Options
* `integer` is an integer representation of a money item including
any decimal digits; i.e. `20000` would be interpreted to mean `200.00`
* `currency` is the currency code for the `integer`. The assumed
number of decimal places is derived from the currency code.
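The implied-exponent interpretation can be sketched with integer arithmetic (illustrative only; Money constructs a `Decimal` rather than splitting the value):

```elixir
# Split an integer into whole and fractional parts for a currency with
# `digits` implied decimal places.
from_integer_sketch = fn integer, digits ->
  factor = Integer.pow(10, digits)
  {div(integer, factor), rem(integer, factor)}
end

from_integer_sketch.(20012, 2)
#=> {200, 12}, i.e. 200.12 for a two-digit currency such as :USD
```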
## Returns
* A `Money` struct or
* `{:error, {Cldr.UnknownCurrencyError, message}}`
## Examples
iex> #{inspect(__MODULE__)}.from_integer(20000, :USD)
#Money<:USD, 200.00>
iex> #{inspect(__MODULE__)}.from_integer(200, :JPY)
#Money<:JPY, 200>
iex> #{inspect(__MODULE__)}.from_integer(20012, :USD)
#Money<:USD, 200.12>
iex> #{inspect(__MODULE__)}.from_integer(20012, :COP)
#Money<:COP, 200.12>
"""
@spec from_integer(integer, :"Elixir.Money".currency_code()) ::
:"Elixir.Money".t() | {:error, module(), String.t()}
def from_integer(amount, currency) when is_integer(amount) do
:"Elixir.Money".from_integer(amount, currency)
end
@doc """
Return a zero amount `Money.t` in the given currency
## Example
iex> #{inspect(__MODULE__)}.zero(:USD)
#Money<:USD, 0>
iex> money = Money.new(:USD, 200)
iex> #{inspect(__MODULE__)}.zero(money)
#Money<:USD, 0>
iex> #{inspect(__MODULE__)}.zero :ZZZ
{:error, {Cldr.UnknownCurrencyError, "The currency :ZZZ is invalid"}}
"""
@spec zero(:"Elixir.Money".currency_code() | :"Elixir.Money".t()) :: :"Elixir.Money".t()
def zero(%:"Elixir.Money"{} = money) do
:"Elixir.Money".zero(money)
end
def zero(currency) when is_atom(currency) do
:"Elixir.Money".zero(currency)
end
@doc false
def from_integer({_currency, _integer, _exponent, _remainder} = value) do
:"Elixir.Money".from_integer(value)
end
defp default_backend do
:"Elixir.Money".default_backend()
end
end
end
end
end
# lib/money/backend.ex
defmodule Naboo.Domains do
@moduledoc """
The Domains context.
"""
import Ecto.Query, warn: false
alias Naboo.Repo
alias Naboo.Domain.Address
alias Naboo.Domain.Node
@doc """
Returns the list of all nodes.
## Examples
iex> list_nodes()
[%Node{}, ...]
"""
def list_nodes(), do: Repo.all(Node)
@doc """
Safely gets a single node.
Returns nil if the Node does not exist.
## Examples
iex> get_node(123)
%Node{}
iex> get_node(456)
nil
"""
def get_node(id), do: Repo.get(Node, id)
@doc """
Gets a single node.
Raises `Ecto.NoResultsError` if the Node does not exist.
## Examples
iex> get_node!(123)
%Node{}
iex> get_node!(456)
** (Ecto.NoResultsError)
"""
def get_node!(id), do: Repo.get!(Node, id)
@doc """
Creates a node.
## Examples
iex> create_node(%{field: value})
{:ok, %Node{}}
iex> create_node(%{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def create_node(attrs \\ %{}) do
%Node{}
|> Node.changeset(attrs)
|> Repo.insert()
end
@doc """
Updates a node.
## Examples
iex> update_node(node, %{field: new_value})
{:ok, %Node{}}
iex> update_node(node, %{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def update_node(%Node{} = node, attrs) do
node
|> Node.changeset(attrs)
|> Repo.update()
end
@doc """
Deletes a node.
## Examples
iex> delete_node(node)
{:ok, %Node{}}
iex> delete_node(node)
{:error, %Ecto.Changeset{}}
"""
def delete_node(%Node{} = node) do
Repo.delete(node)
end
@doc """
Returns an `%Ecto.Changeset{}` for tracking node changes.
## Examples
iex> change_node(node)
%Ecto.Changeset{data: %Node{}}
"""
def change_node(%Node{} = node, attrs \\ %{}) do
Node.changeset(node, attrs)
end
@doc """
Returns the list of all addresses.
## Examples
iex> list_addresses()
[%Address{}, ...]
"""
def list_addresses(), do: Repo.all(Address)
@doc """
Safely gets a single address.
Returns nil if the Address does not exist.
## Examples
iex> get_address(123)
%Address{}
iex> get_address(456)
nil
"""
def get_address(id), do: Repo.get(Address, id)
@doc """
Gets a single address.
Raises `Ecto.NoResultsError` if the Address does not exist.
## Examples
iex> get_address!(123)
%Address{}
iex> get_address!(456)
** (Ecto.NoResultsError)
"""
def get_address!(id), do: Repo.get!(Address, id)
@doc """
Creates an address.
## Examples
iex> create_address(%{field: value})
{:ok, %Address{}}
iex> create_address(%{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def create_address(attrs \\ %{}) do
%Address{}
|> Address.changeset(attrs)
|> Repo.insert()
end
@doc """
Updates an address.
## Examples
iex> update_address(address, %{field: new_value})
{:ok, %Address{}}
iex> update_address(address, %{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def update_address(%Address{} = address, attrs) do
address
|> Address.changeset(attrs)
|> Repo.update()
end
@doc """
Deletes an address.
## Examples
iex> delete_address(address)
{:ok, %Address{}}
iex> delete_address(address)
{:error, %Ecto.Changeset{}}
"""
def delete_address(%Address{} = address) do
Repo.delete(address)
end
@doc """
Returns an `%Ecto.Changeset{}` for tracking address changes.
## Examples
iex> change_address(address)
%Ecto.Changeset{data: %Address{}}
"""
def change_address(%Address{} = address, attrs \\ %{}) do
Address.changeset(address, attrs)
end
end
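The safe/bang split above follows the usual Ecto convention; a quick illustration (the ids are made up, assuming 123 exists and 456 does not):

```elixir
Naboo.Domains.get_node(123)   # => %Naboo.Domain.Node{...}
Naboo.Domains.get_node(456)   # => nil
Naboo.Domains.get_node!(456)  # raises Ecto.NoResultsError
```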
defmodule Pushex.GCM.Request do
@moduledoc """
`Pushex.GCM.Request` represents a request that will be sent to GCM.
It contains the notification, and all the metadata that can be sent with it.
Only keys with a value will be sent to GCM, so that the proper defaults
are used.
"""
use Vex.Struct
defstruct [
:app,
:registration_ids,
:to,
:collapse_key,
:priority,
:content_available,
:delay_while_idle,
:time_to_live,
:restricted_package_name,
:data,
:notification
]
@type t :: %__MODULE__{
app: Pushex.GCM.App.t,
registration_ids: [String.t],
to: String.t,
collapse_key: String.t,
priority: String.t,
content_available: boolean,
delay_while_idle: boolean,
time_to_live: non_neg_integer,
restricted_package_name: String.t,
data: map,
notification: map
}
validates :app,
presence: true,
type: [is: Pushex.GCM.App]
validates :registration_ids,
type: [is: [[list: :binary], :nil]],
presence: [if: [to: nil]]
validates :to,
type: [is: [:binary, :nil]],
presence: [if: [registration_ids: nil]]
validates :collapse_key,
type: [is: [:binary, :nil]]
validates :priority,
type: [is: [:binary, :nil]],
inclusion: [in: ~w(high normal), allow_nil: true]
validates :content_available,
type: [is: [:boolean, :nil]]
validates :delay_while_idle,
type: [is: [:boolean, :nil]]
validates :time_to_live,
type: [is: [:integer, :nil]]
validates :restricted_package_name,
type: [is: [:binary, :nil]]
validates :data,
type: [is: [:map, :nil]]
validates :notification,
type: [is: [:map, :nil]]
def create(params) do
Pushex.Util.create_struct(__MODULE__, params)
end
def create!(params) do
Pushex.Util.create_struct!(__MODULE__, params)
end
def create!(notification, app, opts) do
params = opts
|> Keyword.merge(create_base_request(notification))
|> Keyword.put(:app, app)
|> normalize_opts
Pushex.Util.create_struct!(__MODULE__, params)
end
defp create_base_request(notification) do
keys = ["notification", "data", :notification, :data]
req = Enum.reduce(keys, %{}, fn(elem, acc) -> add_field(acc, notification, elem) end)
if Enum.empty?(req) do
[notification: notification]
else
Keyword.new(req)
end
end
defp add_field(request, notification, field) do
case Map.fetch(notification, field) do
{:ok, value} -> Map.put(request, field, value)
:error -> request
end
end
defp normalize_opts(opts) do
if is_list(opts[:to]) do
{to, opts} = Keyword.pop(opts, :to)
Keyword.put(opts, :registration_ids, to)
else
opts
end
end
end
defimpl Poison.Encoder, for: Pushex.GCM.Request do
def encode(request, options) do
request
|> Map.from_struct()
|> Enum.filter(fn {_key, value} -> not is_nil(value) end)
|> Enum.into(%{})
|> Map.delete(:app)
|> Poison.encode!(options)
end
end
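A minimal sketch of what the encoder does to a sparse request (the field values are illustrative):

```elixir
request = %Pushex.GCM.Request{to: "device-token", data: %{msg: "hi"}}

request
|> Map.from_struct()
|> Enum.filter(fn {_key, value} -> not is_nil(value) end)
|> Enum.into(%{})
|> Map.delete(:app)
# => %{to: "device-token", data: %{msg: "hi"}}
```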
defmodule StarkInfra.IssuingInvoice do
alias __MODULE__, as: IssuingInvoice
alias StarkInfra.Utils.Rest
alias StarkInfra.Utils.Check
alias StarkInfra.User.Project
alias StarkInfra.User.Organization
alias StarkInfra.Error
@moduledoc """
# IssuingInvoice struct
"""
@doc """
The IssuingInvoice structs created in your Workspace load your Issuing balance when paid.
## Parameters (required):
- `:amount` [integer]: IssuingInvoice value in cents. ex: 1234 (= R$ 12.34)
## Parameters (optional):
- `:tax_id` [string, default sub-issuer tax ID]: payer tax ID (CPF or CNPJ) with or without formatting. ex: "01234567890" or "20.018.183/0001-80"
- `:name` [string, default sub-issuer name]: payer name. ex: "Iron Bank S.A."
- `:tags` [list of strings, default []]: list of strings for tagging. ex: ["travel", "food"]
## Attributes (return-only):
- `:id` [string]: unique id returned when IssuingInvoice is created. ex: "5656565656565656"
- `:status` [string]: current IssuingInvoice status. ex: "created", "expired", "overdue", "paid"
- `:issuing_transaction_id` [string]: ledger transaction ids linked to this IssuingInvoice. ex: "issuing-invoice/5656565656565656"
- `:updated` [DateTime]: latest update DateTime for the IssuingInvoice. ex: ~U[2020-03-10 10:30:00Z]
- `:created` [DateTime]: creation datetime for the IssuingInvoice. ex: ~U[2020-03-10 10:30:00Z]
"""
@enforce_keys [
:amount
]
defstruct [
:amount,
:tax_id,
:name,
:tags,
:id,
:status,
:issuing_transaction_id,
:updated,
:created
]
@type t() :: %__MODULE__{}
@doc """
Send a list of IssuingInvoice structs for creation in the Stark Infra API
## Parameters (required):
- `:invoice` [IssuingInvoice struct]: IssuingInvoice struct to be created in the API.
## Options:
- `:user` [Organization/Project, default nil]: Organization or Project struct returned from StarkInfra.project(). Only necessary if default project or organization has not been set in configs.
## Return:
- IssuingInvoice struct with updated attributes
"""
@spec create(
invoice: IssuingInvoice.t(),
user: Organization.t() | Project.t() | nil
) ::
{:ok, IssuingInvoice.t()} |
{:error, [Error.t()]}
def create(invoice, options \\ []) do
Rest.post_single(
resource(),
invoice,
options
)
end
@doc """
Same as create(), but it will unwrap the error tuple and raise in case of errors.
"""
@spec create!(
invoice: IssuingInvoice.t(),
user: Organization.t() | Project.t() | nil
) :: any
def create!(invoice, options \\ []) do
Rest.post_single!(
resource(),
invoice,
options
)
end
@doc """
Receive a single IssuingInvoice struct previously created in the Stark Infra API by its id
## Parameters (required):
- `:id` [string]: struct unique id. ex: "5656565656565656"
## Options:
- `:user` [Organization/Project, default nil]: Organization or Project struct returned from StarkInfra.project(). Only necessary if default project or organization has not been set in configs.
## Return:
- IssuingInvoice struct with updated attributes
"""
@spec get(
id: binary,
user: Organization.t() | Project.t() | nil
) ::
{:ok, IssuingInvoice.t()} |
{:error, [Error.t()]}
def get(id, options \\ []) do
Rest.get_id(
resource(),
id,
options
)
end
@doc """
Same as get(), but it will unwrap the error tuple and raise in case of errors.
"""
@spec get!(
id: binary,
user: Organization.t() | Project.t() | nil
) :: any
def get!(id, options \\ []) do
Rest.get_id!(
resource(),
id,
options
)
end
@doc """
Receive a stream of IssuingInvoices structs previously created in the Stark Infra API
## Options:
- `:limit` [integer, default nil]: maximum number of structs to be retrieved. Unlimited if nil. ex: 35
- `:after` [Date or string, default nil]: date filter for structs created only after specified date. ex: ~D[2020-03-25]
- `:before` [Date or string, default nil]: date filter for structs created only before specified date. ex: ~D[2020-03-25]
- `:status` [list of strings, default nil]: filter for status of retrieved structs. ex: ["created", "expired", "overdue", "paid"]
- `:tags` [list of strings, default nil]: tags to filter retrieved structs. ex: ["tony", "stark"]
- `:user` [Organization/Project, default nil]: Organization or Project struct returned from StarkInfra.project(). Only necessary if default project or organization has not been set in configs.
## Return:
- stream of IssuingInvoices structs with updated attributes
"""
@spec query(
limit: integer | nil,
after: Date.t() | binary | nil,
before: Date.t() | binary | nil,
status: [binary] | nil,
tags: [binary] | nil,
user: Organization.t() | Project.t() | nil
) ::
{:ok, {binary, [IssuingInvoice.t()]}} |
{:error, [Error.t()]}
def query(options \\ []) do
Rest.get_list(
resource(),
options
)
end
@doc """
Same as query(), but it will unwrap the error tuple and raise in case of errors.
"""
@spec query!(
limit: integer | nil,
after: Date.t() | binary | nil,
before: Date.t() | binary | nil,
status: [binary] | nil,
tags: [binary] | nil,
user: Organization.t() | Project.t() | nil
) :: any
def query!(options \\ []) do
Rest.get_list!(
resource(),
options
)
end
@doc """
Receive a list of IssuingInvoices structs previously created in the Stark Infra API and the cursor to the next page.
## Options:
- `:cursor` [string, default nil]: cursor returned on the previous page function call
- `:limit` [integer, default 100]: maximum number of structs to be retrieved per page. ex: 35
- `:after` [Date or string, default nil]: date filter for structs created only after specified date. ex: ~D[2020-03-25]
- `:before` [Date or string, default nil]: date filter for structs created only before specified date. ex: ~D[2020-03-25]
- `:status` [list of strings, default nil]: filter for status of retrieved structs. ex: ["created", "expired", "overdue", "paid"]
- `:tags` [list of strings, default nil]: tags to filter retrieved structs. ex: ["tony", "stark"]
- `:user` [Organization/Project, default nil]: Organization or Project struct returned from StarkInfra.project(). Only necessary if default project or organization has not been set in configs.
## Return:
- list of IssuingInvoices structs with updated attributes
- cursor to retrieve the next page of IssuingInvoices structs
"""
@spec page(
cursor: binary | nil,
limit: integer | nil,
after: Date.t() | binary | nil,
before: Date.t() | binary | nil,
status: [binary] | nil,
tags: [binary] | nil,
user: Organization.t() | Project.t() | nil
) ::
{:ok, {binary, [IssuingInvoice.t()]}} |
{:error, [Error.t()]}
def page(options \\ []) do
Rest.get_page(
resource(),
options
)
end
@doc """
Same as page(), but it will unwrap the error tuple and raise in case of errors.
"""
@spec page!(
cursor: binary | nil,
limit: integer | nil,
after: Date.t() | binary | nil,
before: Date.t() | binary | nil,
status: [binary] | nil,
tags: [binary] | nil,
user: Organization.t() | Project.t() | nil
) :: any
def page!(options \\ []) do
Rest.get_page!(
resource(),
options
)
end
@doc false
def resource() do
{
"IssuingInvoice",
&resource_maker/1
}
end
@doc false
def resource_maker(json) do
%IssuingInvoice{
amount: json[:amount],
tax_id: json[:tax_id],
name: json[:name],
tags: json[:tags],
id: json[:id],
status: json[:status],
issuing_transaction_id: json[:issuing_transaction_id],
updated: json[:updated] |> Check.datetime(),
created: json[:created] |> Check.datetime()
}
end
end
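The `page/1` cursor API above lends itself to a simple accumulation loop; a hypothetical sketch (the `InvoicePager` module name and the limit are illustrative):

```elixir
defmodule InvoicePager do
  # Follows cursors until the API returns a nil cursor, accumulating
  # all IssuingInvoice structs along the way.
  def all(cursor \\ nil, acc \\ []) do
    case StarkInfra.IssuingInvoice.page(cursor: cursor, limit: 100) do
      {:ok, {nil, invoices}} -> {:ok, acc ++ invoices}
      {:ok, {next_cursor, invoices}} -> all(next_cursor, acc ++ invoices)
      {:error, errors} -> {:error, errors}
    end
  end
end
```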
defmodule ExWikipedia.Page do
@moduledoc """
`ExWikipedia.page/2` delegates here. This module represents the current
implementation.
"""
@behaviour ExWikipedia
@follow_redirect true
@default_http_client HTTPoison
@default_json_parser Jason
@default_status_key :status_code
@default_body_key :body
alias ExWikipedia.PageParser
@typedoc """
- `:external_links` - List of fully qualified URLs to associate with the Wikipedia page.
- `:categories` - List of categories the Wikipedia page belongs to
- `:content` - String text found on Wikipedia page
- `:images` - List of relative URLs pointing to images found on the Wikipedia page
- `:page_id` - Wikipedia page id represented as an integer
- `:revision_id` - Wikipedia page revision id represented as an integer
- `:summary` - String text representing Wikipedia page's summary
- `:title` - String title of the Wikipedia page
- `:url` - Fully qualified URL of Wikipedia page
- `:is_redirect?` - Boolean. Indicates whether the content is from a page
redirected from the one requested.
"""
@type t :: %__MODULE__{
external_links: [String.t()],
categories: [String.t()],
content: binary(),
images: [String.t()],
page_id: integer(),
revision_id: integer(),
summary: binary(),
title: binary(),
url: binary(),
is_redirect?: boolean(),
links: [String.t()]
}
@enforce_keys [:content, :page_id, :summary, :title, :url]
defstruct external_links: [],
categories: [],
content: "",
images: [],
page_id: nil,
revision_id: nil,
summary: "",
title: "",
url: "",
is_redirect?: false,
links: []
@doc """
Fetches a Wikipedia page by its ID.
## Options
- `:http_client`: HTTP Client used to fetch Wikipedia page via Wikipedia's integer ID. Default: `#{@default_http_client}`
- `:decoder`: Decoder used to decode JSON returned from Wikipedia API. Default: `#{@default_json_parser}`
- `:http_headers`: HTTP headers that are passed into the client. Default: []
- `:http_opts`: HTTP options passed to the client. Default: []
- `:body_key`: key inside the HTTP client's response which contains the response body.
This may change depending on the client used. Default: `#{@default_body_key}`
- `:status_key`: key inside the HTTP client's response which returns the HTTP status code.
This may change depending on the client used. Default: `#{@default_status_key}`
- `:parser`: Parser used to parse response returned from client. Default: `ExWikipedia.PageParser`
- `:parser_opts`: Parser options passed the the parser. Default: `[]`.
See `ExWikipedia.PageParser` for supported options.
- `:follow_redirect`: indicates whether or not the content from a redirected
page constitutes a valid response. Default: `#{inspect(@follow_redirect)}`
- `:language`: Language for searching through wikipedia. Default: "en"
- `:by`: Queries Wikipedia API by `:page_id` or `:title`. Default: :page_id
"""
@impl ExWikipedia
def fetch(id, opts \\ [])
def fetch(id, opts) do
http_client = Keyword.get(opts, :http_client, @default_http_client)
decoder = Keyword.get(opts, :decoder, @default_json_parser)
http_headers = Keyword.get(opts, :http_headers, [])
http_opts = Keyword.get(opts, :http_opts, [])
body_key = Keyword.get(opts, :body_key, @default_body_key)
status_key = Keyword.get(opts, :status_key, @default_status_key)
parser = Keyword.get(opts, :parser, PageParser)
follow_redirect = Keyword.get(opts, :follow_redirect, @follow_redirect)
parser_opts =
opts
|> Keyword.get(:parser_opts, [])
|> Keyword.put(:follow_redirect, follow_redirect)
with {:ok, raw_response} <-
http_client.get(build_url(id, opts), http_headers, http_opts),
:ok <- ok_http_status_code(raw_response, status_key),
{:ok, body} <- get_body(raw_response, body_key),
{:ok, response} <- decoder.decode(body, keys: :atoms),
{:ok, parsed_response} <- parser.parse(response, parser_opts) do
{:ok, struct(__MODULE__, parsed_response)}
else
_ ->
{:error,
"There is no page with #{Keyword.get(opts, :by, :page_id)} #{id} in #{Keyword.get(opts, :language, Application.fetch_env!(:ex_wikipedia, :default_language))}.wikipedia.org"}
end
end
defp get_body(raw_response, body_key) do
case Map.fetch(raw_response, body_key) do
{:ok, body} ->
{:ok, body}
:error ->
{:error, "#{inspect(body_key)} not found as key in response: #{inspect(raw_response)}"}
end
end
defp ok_http_status_code(raw_response, status_key) do
case Map.fetch(raw_response, status_key) do
{:ok, 200} -> :ok
_ -> {:error, "#{inspect(status_key)} not 200 in response: #{inspect(raw_response)}"}
end
end
defp build_url(page, opts) do
language =
Keyword.get(opts, :language, Application.fetch_env!(:ex_wikipedia, :default_language))
type = Keyword.get(opts, :by, :page_id)
build_by_type(page, language, type)
end
defp build_by_type(page, lang, :title) do
"https://#{lang}.wikipedia.org/w/api.php?action=parse&page=#{page}&format=json&redirects=true&prop=text|langlinks|categories|links|templates|images|externallinks|sections|revid|displaytitle|iwlinks|properties|parsewarnings|headhtml"
|> URI.encode()
end
defp build_by_type(page, lang, :page_id) do
"https://#{lang}.wikipedia.org/w/api.php?action=parse&pageid=#{page}&format=json&redirects=true&prop=text|langlinks|categories|links|templates|images|externallinks|sections|revid|displaytitle|iwlinks|properties|parsewarnings|headhtml"
end
defp build_by_type(_page, _lang, type),
do: raise(ArgumentError, "Type #{inspect(type)} is not supported.")
end
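Hypothetical usage of `fetch/2`, exercising the `:by` and `:language` options (the page id and title below are made up for illustration):

```elixir
{:ok, page} = ExWikipedia.Page.fetch(12345, by: :page_id, language: "en")
{:ok, page} = ExWikipedia.Page.fetch("Some title", by: :title)
page.summary
```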
defmodule EllipticCurve.Signature do
@moduledoc """
Used to convert signatures between the struct format (raw numbers r and s) and the der or base 64 formats.
Functions:
- fromBase64()
- fromBase64!()
- fromDer()
- fromDer!()
- toBase64()
- toDer()
"""
alias __MODULE__, as: Signature
alias EllipticCurve.Utils.{Der, Base64, BinaryAscii}
@doc """
Holds signature data. Is usually extracted from base64 strings.
Parameters:
- `:r` [integer]: first signature number;
- `:s` [integer]: second signature number;
"""
defstruct [:r, :s]
@doc """
Converts a base 64 signature into the decoded struct format
Parameters:
- `base64` [string]: message that will be signed
Returns {:ok, signature}:
- `signature` [%EllipticCurve.Signature]: decoded signature, exposing r and s;
## Example:
iex> EllipticCurve.Ecdsa.fromBase64("<KEY>")
{:ok, %EllipticCurve.Signature{r: 114398670046563728651181765316495176217036114587592994448444521545026466264118, s: 65366972607021398158454632864220554542282541376523937745916477386966386597715}}
"""
def fromBase64(base64) do
{:ok, fromBase64!(base64)}
rescue
e in RuntimeError -> {:error, e}
end
@doc """
Converts a base 64 signature into the decoded struct format
Parameters:
- base64 [string]: signature in base 64 format
Returns {:ok, signature}:
- signature [%EllipticCurve.Signature]: decoded signature, exposing r and s;
## Example:
iex> EllipticCurve.Ecdsa.fromBase64!("<KEY>")
%EllipticCurve.Signature{r: 114398670046563728651181765316495176217036114587592994448444521545026466264118, s: 65366972607021398158454632864220554542282541376523937745916477386966386597715}
"""
def fromBase64!(base64String) do
base64String
|> Base64.decode()
|> fromDer!()
end
@doc """
Converts a der signature (raw binary) into the decoded struct format
Parameters:
- `der` [string]: signature in der format (raw binary)
Returns {:ok, signature}:
- `signature` [%EllipticCurve.Signature]: decoded signature, exposing r and s;
## Example:
iex> EllipticCurve.Ecdsa.fromDer(<<48, 69, 2, 33, 0, 211, 243, 12, 93, ...>>)
{:ok, %EllipticCurve.Signature{r: 95867440227398247533351136059968563162267771464707645727187625451839377520639, s: 35965164910442916948460815891253401171705649249124379540577916592403246631835}}
"""
def fromDer(der) do
{:ok, fromDer!(der)}
rescue
e in RuntimeError -> {:error, e}
end
@doc """
Converts a der signature (raw binary) into the decoded struct format
Parameters:
- `der` [string]: signature in der format (raw binary)
Returns:
- `signature` [%EllipticCurve.Signature]: decoded signature, exposing r and s;
## Example:
iex> EllipticCurve.Ecdsa.fromDer!(<<48, 69, 2, 33, 0, 211, 243, 12, 93, ...>>)
%EllipticCurve.Signature{r: 95867440227398247533351136059968563162267771464707645727187625451839377520639, s: 35965164910442916948460815891253401171705649249124379540577916592403246631835}
"""
def fromDer!(der) do
{rs, firstEmpty} = Der.removeSequence(der)
if byte_size(firstEmpty) > 0 do
raise "trailing junk after DER signature: " <> BinaryAscii.hexFromBinary(firstEmpty)
end
{r, rest} = Der.removeInteger(rs)
{s, secondEmpty} = Der.removeInteger(rest)
if byte_size(secondEmpty) > 0 do
raise "trailing junk after DER numbers: " <> BinaryAscii.hexFromBinary(secondEmpty)
end
%Signature{r: r, s: s}
end
@doc """
Converts a signature in decoded struct format into a base 64 string
Parameters:
- `signature` [%EllipticCurve.Signature]: decoded signature struct;
Returns:
- `base64` [string]: signature in base 64 format
## Example:
iex> EllipticCurve.Ecdsa.toBase64(%EllipticCurve.Signature{r: 123, s: 456})
"YXNvZGlqYW9pZGphb2lkamFvaWRqc2Fpb3NkamE="
"""
def toBase64(signature) do
signature
|> toDer()
|> Base64.encode()
end
@doc """
Converts a signature in decoded struct format into der format (raw binary)
Parameters:
- `signature` [%EllipticCurve.Signature]: decoded signature struct;
Returns:
- `der` [string]: signature in der format
## Example:
iex> EllipticCurve.Ecdsa.toDer(%EllipticCurve.Signature{r: 95867440227398247533351136059968563162267771464707645727187625451839377520639, s: 35965164910442916948460815891253401171705649249124379540577916592403246631835})
<<48, 69, 2, 33, 0, 211, 243, 12, 93, 107, 214, 149, 243, ...>>
"""
def toDer(signature) do
Der.encodeSequence([
Der.encodeInteger(signature.r),
Der.encodeInteger(signature.s)
])
end
end
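A round-trip sketch: encoding a signature to DER and decoding it back should preserve `r` and `s` (the values are illustrative):

```elixir
alias EllipticCurve.Signature

signature = %Signature{r: 123, s: 456}

signature
|> Signature.toDer()
|> Signature.fromDer!()
# => %Signature{r: 123, s: 456}
```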
defmodule Drunkard.Recipes do
@moduledoc """
The Recipes context.
"""
import Ecto.Query, warn: false
alias Drunkard.Repo
alias Drunkard.Recipes.Slug
alias Drunkard.Recipes.Ingredient
@doc """
Returns the list of ingredients.
## Examples
iex> list_ingredients()
[%Ingredient{}, ...]
"""
def list_ingredients, do: Repo.all(Ingredient)
def list_ingredients(preload_fields) do
Repo.all(Ingredient) |> Enum.map(fn i -> ingredient_preload(i, preload_fields) end)
end
@doc """
Gets a single ingredient.
Raises `Ecto.NoResultsError` if the Ingredient does not exist.
## Examples
iex> get_ingredient!(123)
%Ingredient{}
iex> get_ingredient!(456)
** (Ecto.NoResultsError)
"""
def get_ingredient!(%{uuid: uuid}), do: Repo.get!(Ingredient, uuid)
def get_ingredient!(%{name: name}), do: Repo.get_by!(Ingredient, name: name)
def get_ingredients_and_preload!(%{name_part: name_part}) do
Repo.all(from i in Ingredient, where: ilike(i.name, ^"%#{name_part}%"), preload: [:icon, :image, :alternatives, :tags])
end
def get_ingredients_and_preload!(names) do
Repo.all(from i in Ingredient, where: i.name in ^names, preload: [:icon, :image, :alternatives, :tags])
end
def get_ingredients_and_preload!(names, preload_fields) do
Repo.all(from i in Ingredient, where: i.name in ^names, preload: ^preload_fields)
end
def ingredient_preload(ingredient, preload_fields) do
Repo.preload(ingredient, preload_fields)
end
@doc """
Creates an ingredient.
## Examples
iex> create_ingredient(%{field: value})
{:ok, %Ingredient{}}
iex> create_ingredient(%{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def create_ingredient(attrs \\ %{}) do
%Ingredient{}
|> Ingredient.changeset(attrs)
|> Repo.insert()
end
def create_ingredient!(attrs \\ %{}) do
%Ingredient{}
|> Ingredient.changeset(attrs)
|> Repo.insert!()
end
@doc """
Updates an ingredient.
## Examples
iex> update_ingredient(ingredient, %{field: new_value})
{:ok, %Ingredient{}}
iex> update_ingredient(ingredient, %{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def update_ingredient(%Ingredient{} = ingredient, attrs) do
ingredient
|> Ingredient.changeset(attrs)
|> Repo.update()
end
def update_ingredient!(%Ingredient{} = ingredient, attrs) do
ingredient
|> Ingredient.changeset(attrs)
|> Repo.update!()
end
@doc """
Deletes an ingredient.
## Examples
iex> delete_ingredient(ingredient)
{:ok, %Ingredient{}}
iex> delete_ingredient(ingredient)
{:error, %Ecto.Changeset{}}
"""
def delete_ingredient(%Ingredient{} = ingredient) do
Repo.delete(ingredient)
end
def delete_ingredient!(%Ingredient{} = ingredient) do
Repo.delete!(ingredient)
end
@doc """
Returns an `%Ecto.Changeset{}` for tracking ingredient changes.
## Examples
iex> change_ingredient(ingredient)
%Ecto.Changeset{source: %Ingredient{}}
"""
def change_ingredient(%Ingredient{} = ingredient) do
Ingredient.changeset(ingredient, %{})
end
alias Drunkard.Recipes.IngredientToIngredient
def create_ingredient_ingredient_link!(attrs) do
%IngredientToIngredient{}
|> IngredientToIngredient.changeset(attrs)
|> Repo.insert!()
end
alias Drunkard.Recipes.Glass
@doc """
Returns the list of glasses.
## Examples
iex> list_glasses()
[%Glass{}, ...]
"""
def list_glasses do
Repo.all(Glass)
end
@doc """
Gets a single glass.
Raises `Ecto.NoResultsError` if the Glass does not exist.
## Examples
iex> get_glass!(123)
%Glass{}
iex> get_glass!(456)
** (Ecto.NoResultsError)
"""
def get_glass!(%{uuid: uuid}), do: Repo.get!(Glass, uuid)
def get_glass!(%{name: name}), do: Repo.get_by!(Glass, name: name)
def get_glasses_and_preload!(%{name_part: name_part}) do
Repo.all(from g in Glass, where: ilike(g.name, ^"%#{name_part}%"), preload: [:icon, :image, :recipes])
end
def glass_preload(glass, preload_fields) do
Repo.preload(glass, preload_fields)
end
@doc """
Creates a glass.
## Examples
iex> create_glass(%{field: value})
{:ok, %Glass{}}
iex> create_glass(%{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def create_glass(attrs \\ %{}) do
%Glass{}
|> Glass.changeset(attrs)
|> Repo.insert()
end
def create_glass!(attrs \\ %{}) do
%Glass{}
|> Glass.changeset(attrs)
|> Repo.insert!()
end
@doc """
Updates a glass.
## Examples
iex> update_glass(glass, %{field: new_value})
{:ok, %Glass{}}
iex> update_glass(glass, %{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def update_glass(%Glass{} = glass, attrs) do
glass
|> Glass.changeset(attrs)
|> Repo.update()
end
@doc """
Deletes a Glass.
## Examples
iex> delete_glass(glass)
{:ok, %Glass{}}
iex> delete_glass(glass)
{:error, %Ecto.Changeset{}}
"""
def delete_glass(%Glass{} = glass) do
Repo.delete(glass)
end
@doc """
Returns an `%Ecto.Changeset{}` for tracking glass changes.
## Examples
iex> change_glass(glass)
%Ecto.Changeset{source: %Glass{}}
"""
def change_glass(%Glass{} = glass) do
Glass.changeset(glass, %{})
end
alias Drunkard.Recipes.Recipe
@doc """
Returns the list of recipes.
## Examples
iex> list_recipes()
[%Recipe{}, ...]
"""
def list_recipes do
Repo.all(Recipe)
end
@doc """
Gets a single recipe.
Raises `Ecto.NoResultsError` if the Recipe does not exist.
## Examples
iex> get_recipe!(123)
%Recipe{}
iex> get_recipe!(456)
** (Ecto.NoResultsError)
"""
def get_recipe!(%{uuid: uuid}), do: Repo.get!(Recipe, uuid)
def get_recipe!(%{name: name}), do: Repo.get_by!(Recipe, name: name)
def get_random_recipe!(), do: Repo.all(from r in Recipe, order_by: fragment("random()"), limit: 1) |> Enum.at(0)
def get_recipes_and_preload!(%{name_part: name_part}) do
Repo.all(from r in Recipe, where: ilike(r.name, ^"%#{name_part}%"), preload: [:icon, :image, :glass, :tags])
end
def get_recipes_by_ingredient_preload!(%{ingredient_uuid: iuuid, preload_fields: preload_fields}) do
Repo.all(from r in Recipe,
where: fragment(
"? @> ? or ? @> ?",
r.recipe_ingredients,
^[%{ingredient: iuuid}],
r.recipe_ingredients,
^[%{alternatives: [iuuid]}]
),
preload: ^preload_fields
)
end
def get_recipes_by_ingredient_preload!(%{ingredient_uuid: _iuuid} = params) do
get_recipes_by_ingredient_preload!(Map.put(params, :preload_fields, [:icon, :image, :glass, :tags]))
end
def get_recipes_by_ingredients_preload!(%{ingredient_uuids: ingredient_uuids, preload_fields: preload_fields}) do
Repo.all(from r in Recipe,
join: iv in fragment("jsonb_array_elements(?)", r.ingredient_variations), on: true,
where: fragment(
"? @> ?",
^ingredient_uuids,
iv
),
group_by: r.uuid,
preload: ^preload_fields
)
end
def get_recipes_by_ingredients_preload!(%{ingredient_uuids: _ingredient_uuids} = params) do
get_recipes_by_ingredients_preload!(Map.put(params, :preload_fields, [:icon, :image, :glass, :tags]))
end
def recipe_preload(recipe, preload_fields) do
Repo.preload(recipe, preload_fields)
end
@doc """
Creates a recipe.
## Examples
iex> create_recipe(%{field: value})
{:ok, %Recipe{}}
iex> create_recipe(%{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def create_recipe(attrs \\ %{}) do
%Recipe{}
|> Recipe.changeset(attrs)
|> Repo.insert()
end
def create_recipe!(attrs \\ %{}) do
%Recipe{}
|> Recipe.changeset(attrs)
|> Repo.insert!()
end
@doc """
Updates a recipe.
## Examples
iex> update_recipe(recipe, %{field: new_value})
{:ok, %Recipe{}}
iex> update_recipe(recipe, %{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def update_recipe(%Recipe{} = recipe, attrs) do
recipe
|> Recipe.changeset(attrs)
|> Repo.update()
end
@doc """
Deletes a Recipe.
## Examples
iex> delete_recipe(recipe)
{:ok, %Recipe{}}
iex> delete_recipe(recipe)
{:error, %Ecto.Changeset{}}
"""
def delete_recipe(%Recipe{} = recipe) do
Repo.delete(recipe)
end
@doc """
Returns an `%Ecto.Changeset{}` for tracking recipe changes.
## Examples
iex> change_recipe(recipe)
%Ecto.Changeset{source: %Recipe{}}
"""
def change_recipe(%Recipe{} = recipe) do
Recipe.changeset(recipe, %{})
end
alias Drunkard.Recipes.Tag
@doc """
Returns the list of tags.
## Examples
iex> list_tags()
[%Tag{}, ...]
"""
def list_tags do
Repo.all(Tag)
end
@doc """
Gets a single tag.
Raises `Ecto.NoResultsError` if the Tag does not exist.
## Examples
iex> get_tag!(123)
%Tag{}
iex> get_tag!(456)
** (Ecto.NoResultsError)
"""
def get_tag!(%{uuid: uuid}), do: Repo.get!(Tag, uuid)
def get_tag!(%{name: name}), do: Repo.get_by!(Tag, name: name)
def get_tag_by_name(name) do
Repo.all(from t in Tag, where: t.name == ^name)
end
@doc """
Creates a tag.
## Examples
iex> create_tag(%{field: value})
{:ok, %Tag{}}
iex> create_tag(%{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def create_tag(attrs \\ %{}) do
%Tag{}
|> Tag.changeset(attrs)
|> Repo.insert()
end
@doc """
Updates a tag.
## Examples
iex> update_tag(tag, %{field: new_value})
{:ok, %Tag{}}
iex> update_tag(tag, %{field: bad_value})
{:error, %Ecto.Changeset{}}
"""
def update_tag(%Tag{} = tag, attrs) do
tag
|> Tag.changeset(attrs)
|> Repo.update()
end
@doc """
Deletes a Tag.
## Examples
iex> delete_tag(tag)
{:ok, %Tag{}}
iex> delete_tag(tag)
{:error, %Ecto.Changeset{}}
"""
def delete_tag(%Tag{} = tag) do
Repo.delete(tag)
end
@doc """
Returns an `%Ecto.Changeset{}` for tracking tag changes.
## Examples
iex> change_tag(tag)
%Ecto.Changeset{source: %Tag{}}
"""
def change_tag(%Tag{} = tag) do
Tag.changeset(tag, %{})
end
end
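The `fragment("? @> ?", ...)` calls in `get_recipes_by_ingredient_preload!/1` rely on Postgres jsonb containment; a sketch of the shape being matched (the uuids and extra keys are illustrative):

```elixir
# A recipe matches when its recipe_ingredients jsonb array contains
# either of these partial objects:
#   recipe_ingredients @> '[{"ingredient": "ing-uuid"}]'
#   recipe_ingredients @> '[{"alternatives": ["ing-uuid"]}]'
recipe_ingredients = [
  %{"ingredient" => "ing-uuid", "amount" => "50ml", "alternatives" => []}
]
```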
defimpl Timex.Protocol, for: Date do
@moduledoc """
This module represents all functions specific to creating/manipulating/comparing Dates (year/month/day)
"""
use Timex.Constants
import Timex.Macros
@epoch_seconds :calendar.datetime_to_gregorian_seconds({{1970, 1, 1}, {0, 0, 0}})
def to_julian(%Date{:year => y, :month => m, :day => d}) do
Timex.Calendar.Julian.julian_date(y, m, d)
end
def to_gregorian_seconds(date), do: to_seconds(date, :zero)
def to_gregorian_microseconds(date), do: to_seconds(date, :zero) * (1_000 * 1_000)
def to_unix(date), do: trunc(to_seconds(date, :epoch))
def to_date(date), do: date
def to_datetime(%Date{} = date, timezone) do
with {:tzdata, tz} when is_binary(tz) <- {:tzdata, Timex.Timezone.name_of(timezone)},
{:ok, datetime} <- Timex.DateTime.new(date, ~T[00:00:00], tz, Timex.Timezone.Database) do
datetime
else
{:tzdata, err} ->
err
{:error, _} = err ->
err
{:gap, _a, b} ->
b
{:ambiguous, _a, b} ->
b
end
end
def to_naive_datetime(%Date{year: y, month: m, day: d}) do
Timex.NaiveDateTime.new!(y, m, d, 0, 0, 0)
end
def to_erl(%Date{year: y, month: m, day: d}), do: {y, m, d}
def century(%Date{:year => year}), do: Timex.century(year)
def is_leap?(%Date{year: year}), do: :calendar.is_leap_year(year)
def beginning_of_day(%Date{} = date), do: date
def end_of_day(%Date{} = date), do: date
def beginning_of_week(%Date{} = date, weekstart) do
case Timex.days_to_beginning_of_week(date, weekstart) do
{:error, _} = err -> err
days -> shift(date, days: -days)
end
end
def end_of_week(%Date{} = date, weekstart) do
case Timex.days_to_end_of_week(date, weekstart) do
{:error, _} = err ->
err
days_to_end ->
shift(date, days: days_to_end)
end
end
def beginning_of_year(%Date{} = date),
do: %{date | :month => 1, :day => 1}
def end_of_year(%Date{} = date),
do: %{date | :month => 12, :day => 31}
def beginning_of_quarter(%Date{month: month} = date) do
month = 1 + 3 * (Timex.quarter(month) - 1)
%{date | :month => month, :day => 1}
end
def end_of_quarter(%Date{month: month} = date) do
month = 3 * Timex.quarter(month)
end_of_month(%{date | :month => month, :day => 1})
end
def beginning_of_month(%Date{} = date),
do: %{date | :day => 1}
def end_of_month(%Date{} = date),
do: %{date | :day => days_in_month(date)}
def quarter(%Date{year: y, month: m, day: d}), do: Calendar.ISO.quarter_of_year(y, m, d)
def days_in_month(%Date{:year => y, :month => m}), do: Timex.days_in_month(y, m)
def week_of_month(%Date{:year => y, :month => m, :day => d}), do: Timex.week_of_month(y, m, d)
def weekday(%Date{} = date), do: Timex.Date.day_of_week(date)
def weekday(%Date{} = date, weekstart), do: Timex.Date.day_of_week(date, weekstart)
def day(%Date{} = date),
do: 1 + Timex.diff(date, %Date{:year => date.year, :month => 1, :day => 1}, :days)
def is_valid?(%Date{:year => y, :month => m, :day => d}) do
:calendar.valid_date({y, m, d})
end
def iso_week(%Date{:year => y, :month => m, :day => d}),
do: Timex.iso_week(y, m, d)
def from_iso_day(%Date{year: year}, day) when is_day_of_year(day) do
{year, month, day_of_month} = Timex.Helpers.iso_day_to_date_tuple(year, day)
%Date{year: year, month: month, day: day_of_month}
end
def set(%Date{} = date, options) do
validate? = Keyword.get(options, :validate, true)
Enum.reduce(options, date, fn
_option, {:error, _} = err ->
err
option, %Date{} = result ->
case option do
{:validate, _} ->
result
{:datetime, {{y, m, d}, {_, _, _}}} ->
if validate? do
%{
result
| :year => Timex.normalize(:year, y),
:month => Timex.normalize(:month, m),
:day => Timex.normalize(:day, {y, m, d})
}
else
%{result | :year => y, :month => m, :day => d}
end
{:date, {y, m, d}} ->
if validate? do
{yn, mn, dn} = Timex.normalize(:date, {y, m, d})
%{result | :year => yn, :month => mn, :day => dn}
else
%{result | :year => y, :month => m, :day => d}
end
{:day, d} ->
if validate? do
%{result | :day => Timex.normalize(:day, {result.year, result.month, d})}
else
%{result | :day => d}
end
{name, val} when name in [:year, :month] ->
if validate? do
Map.put(result, name, Timex.normalize(name, val))
else
Map.put(result, name, val)
end
{name, _} when name in [:time, :timezone, :hour, :minute, :second, :microsecond] ->
result
{option_name, _} ->
{:error, {:invalid_option, option_name}}
end
end)
end
def shift(%Date{} = date, [{_, 0}]), do: date
def shift(%Date{} = date, options) do
allowed_options =
Enum.filter(options, fn
{:hours, value} when value >= 24 or value <= -24 ->
true
{:hours, _} ->
false
{:minutes, value} when value >= 24 * 60 or value <= -24 * 60 ->
true
{:minutes, _} ->
false
{:seconds, value} when value >= 24 * 60 * 60 or value <= -24 * 60 * 60 ->
true
{:seconds, _} ->
false
{:milliseconds, value}
when value >= 24 * 60 * 60 * 1000 or value <= -24 * 60 * 60 * 1000 ->
true
{:milliseconds, _} ->
false
{:microseconds, {value, _}}
when value >= 24 * 60 * 60 * 1000 * 1000 or value <= -24 * 60 * 60 * 1000 * 1000 ->
true
{:microseconds, value}
when value >= 24 * 60 * 60 * 1000 * 1000 or value <= -24 * 60 * 60 * 1000 * 1000 ->
true
{:microseconds, _} ->
false
{_type, _value} ->
true
end)
case Timex.shift(to_naive_datetime(date), allowed_options) do
{:error, _} = err ->
err
%NaiveDateTime{:year => y, :month => m, :day => d} ->
%Date{year: y, month: m, day: d}
end
end
def shift(_, _), do: {:error, :badarg}
defp to_seconds(%Date{year: y, month: m, day: d}, :zero),
do: :calendar.datetime_to_gregorian_seconds({{y, m, d}, {0, 0, 0}})
defp to_seconds(%Date{year: y, month: m, day: d}, :epoch),
do: :calendar.datetime_to_gregorian_seconds({{y, m, d}, {0, 0, 0}}) - @epoch_seconds
end
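The quarter functions above rely on `Timex.quarter/1` mapping months 1..12 to quarters 1..4. A self-contained sketch of the same arithmetic (assuming that mapping), which makes the `1 + 3 * (q - 1)` and `3 * q` month calculations easy to check:

```elixir
# Assumed behaviour of Timex.quarter/1 for a month number.
quarter = fn month -> div(month - 1, 3) + 1 end

# Month arithmetic used by beginning_of_quarter/1 and end_of_quarter/1.
first_month_of_quarter = fn month -> 1 + 3 * (quarter.(month) - 1) end
last_month_of_quarter = fn month -> 3 * quarter.(month) end

1 = first_month_of_quarter.(2)    # February -> Q1 starts in January
4 = first_month_of_quarter.(6)    # June -> Q2 starts in April
9 = last_month_of_quarter.(7)     # July -> Q3 ends in September
12 = last_month_of_quarter.(11)   # November -> Q4 ends in December
```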
# lib/date/date.ex
defmodule Brex.Rule do
@moduledoc """
The behaviour for module based rules which requires an `evaluate/1` function.
Also offers some helpful functions to deal with all kinds of rules.
Furthermore contains the `Brex.Rule.Evaluable` protocol which represents the
basic building block of `Brex`. Currently supported rule types are:
- `atom` or rather Modules
- `function` with arity 1
- `struct`s, take a look at `Brex.Rule.Struct` for details
# Example - Module based rule
defmodule OkRule do
@behaviour Brex.Rule
@impl Brex.Rule
def evaluate(:ok), do: true
@impl Brex.Rule
def evaluate({:ok, _}), do: true
@impl Brex.Rule
def evaluate(_), do: false
end
"""
alias Brex.Types
@type t :: any()
@callback evaluate(value :: Types.value()) :: Types.evaluation()
defprotocol Evaluable do
@moduledoc """
The main rule protocol. Each rule needs to implement this protocol to be
considered a rule.
Take a look at `Brex.Rule.Struct` for details on implementing struct based
custom rules.
"""
@spec evaluate(t(), Types.value()) :: Types.evaluation()
def evaluate(rule, value)
end
@doc """
Calls `evaluate/2` with the given rule and value and wraps it in a
`Brex.Result` struct.
"""
@spec evaluate(t(), Types.value()) :: Types.result()
def evaluate(rule, value) do
%Brex.Result{
evaluation: Evaluable.evaluate(rule, value),
rule: rule,
value: value
}
end
@doc """
Returns the type or rather the implementation module for
`Brex.Rule.Evaluable`.
## Examples
iex> Brex.Rule.type(&is_list/1)
Brex.Rule.Evaluable.Function
iex> Brex.Rule.type(SomeModuleRule)
Brex.Rule.Evaluable.Atom
iex> Brex.Rule.type(Brex.all([]))
Brex.Rule.Evaluable.Brex.Operator
iex> Brex.Rule.type("something")
nil
"""
@spec type(t()) :: module() | nil
def type(rule), do: Evaluable.impl_for(rule)
@doc """
Returns the number of clauses this rule has.
## Examples
iex> Brex.Rule.number_of_clauses([])
0
iex> rules = [fn _ -> true end]
iex> Brex.Rule.number_of_clauses(rules)
1
iex> rules = [fn _ -> true end, Brex.any(fn _ -> false end, fn _ -> true end)]
iex> Brex.Rule.number_of_clauses(rules)
3
"""
@spec number_of_clauses(t() | list(t())) :: non_neg_integer()
def number_of_clauses(rules) when is_list(rules) do
rules
|> Enum.map(&number_of_clauses/1)
|> Enum.sum()
end
def number_of_clauses(rule) do
case Brex.Operator.clauses(rule) do
{:ok, clauses} -> number_of_clauses(clauses)
:error -> 1
end
end
end
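The dispatch idea behind `Brex.Rule.Evaluable` can be sketched in a self-contained script: a protocol with implementations for `Function` and `Atom` covers two of the rule types listed in the moduledoc. The protocol and module names here are illustrative, not part of Brex:

```elixir
defprotocol MiniEvaluable do
  def evaluate(rule, value)
end

# Arity-1 functions are applied directly.
defimpl MiniEvaluable, for: Function do
  def evaluate(fun, value), do: fun.(value)
end

# Atoms are treated as modules exporting evaluate/1.
defimpl MiniEvaluable, for: Atom do
  def evaluate(module, value), do: module.evaluate(value)
end

defmodule OkRule do
  def evaluate(:ok), do: true
  def evaluate({:ok, _}), do: true
  def evaluate(_), do: false
end

true = MiniEvaluable.evaluate(&is_list/1, [])
true = MiniEvaluable.evaluate(OkRule, {:ok, 1})
false = MiniEvaluable.evaluate(OkRule, :error)
```

`Brex.Rule.type/2` above does the same kind of lookup with `Evaluable.impl_for/1`, returning the implementation module instead of evaluating.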
# lib/brex/rule.ex
defmodule Memory do
@moduledoc """
Module for working with the VM's internal memory.
The VM's internal memory is indexed (chunked) every 32 bytes,
thus represented with a mapping between index and a 32-byte integer (word).
Data from the memory can be accessed at byte level.
"""
use Bitwise
@chunk_size 32
@word_size 256
@doc """
Read a word from the memory, starting at a given `address`.
"""
@spec load(integer(), map()) :: {integer(), map()}
def load(address, state) do
memory = State.memory(state)
{memory_index, bit_position} = get_index_in_memory(address)
prev_saved_value = Map.get(memory, memory_index, 0)
next_saved_value = Map.get(memory, memory_index + @chunk_size, 0)
<<_::size(bit_position), prev::binary>> = <<prev_saved_value::256>>
<<next::size(bit_position), _::binary>> = <<next_saved_value::256>>
value_binary = prev <> <<next::size(bit_position)>>
memory1 = update_memory_size(address + 31, memory)
{binary_word_to_integer(value_binary), State.set_memory(memory1, state)}
end
@doc """
Write a word (32 bytes) to the memory, starting at a given `address`
"""
@spec store(integer(), integer(), map()) :: map()
def store(address, value, state) do
memory = State.memory(state)
{memory_index, bit_position} = get_index_in_memory(address)
remaining_bits = @word_size - bit_position
<<prev_bits::size(remaining_bits), next_bits::binary>> = <<value::256>>
prev_saved_value = Map.get(memory, memory_index, 0)
new_prev_value =
write_part(
bit_position,
<<prev_bits::size(remaining_bits)>>,
remaining_bits,
<<prev_saved_value::256>>
)
memory1 = Map.put(memory, memory_index, binary_word_to_integer(new_prev_value))
memory2 =
if rem(address, @chunk_size) != 0 do
next_saved_value = Map.get(memory, memory_index + @chunk_size, 0)
new_next_value = write_part(0, next_bits, bit_position, <<next_saved_value::256>>)
Map.put(memory1, memory_index + @chunk_size, binary_word_to_integer(new_next_value))
else
memory1
end
memory3 = update_memory_size(address + 31, memory2)
State.set_memory(memory3, state)
end
@doc """
Write 1 byte to the memory, at a given `address`
"""
@spec store8(integer(), integer(), map()) :: map()
def store8(address, value, state) do
memory = State.memory(state)
{memory_index, bit_position} = get_index_in_memory(address)
saved_value = Map.get(memory, memory_index, 0)
new_value = write_part(bit_position, <<value::size(8)>>, 8, <<saved_value::256>>)
memory1 = Map.put(memory, memory_index, binary_word_to_integer(new_value))
memory2 = update_memory_size(address + 7, memory1)
State.set_memory(memory2, state)
end
@doc """
Get the current size of the memory, in chunks
"""
@spec memory_size_words(map()) :: non_neg_integer()
def memory_size_words(state) do
memory = State.memory(state)
Map.get(memory, :size)
end
@doc """
Get the current size of the memory, in bytes
"""
@spec memory_size_bytes(map()) :: non_neg_integer()
def memory_size_bytes(state) do
memory_size_words = memory_size_words(state)
memory_size_words * @chunk_size
end
@doc """
Get n bytes (area) from the memory, starting at a given address
"""
@spec get_area(integer(), integer(), map()) :: {binary(), map()}
def get_area(from, bytes, state) do
memory = State.memory(state)
{memory_index, bit_position} = get_index_in_memory(from)
area = read(<<>>, bytes, bit_position, memory_index, memory)
memory1 = update_memory_size(from + bytes - 1, memory)
{area, State.set_memory(memory1, state)}
end
@doc """
Write n bytes (area) to the memory, starting at a given address
"""
  @spec write_area(integer(), binary(), map()) :: map()
def write_area(from, bytes, state) do
memory = State.memory(state)
{memory_index, bit_position} = get_index_in_memory(from)
memory1 = write(bytes, bit_position, memory_index, memory)
memory2 = update_memory_size(from + byte_size(bytes) - 1, memory1)
State.set_memory(memory2, state)
end
# Read n bytes from the memory (may be bigger than 32 bytes),
# starting at a given address in the memory
defp read(read_value, 0, _bit_position, _memory_index, _memory) do
read_value
end
defp read(read_value, bytes_left, bit_position, memory_index, memory) do
memory_index_bits_left =
if bit_position == 0 do
@word_size
else
@word_size - bit_position
end
size_bits = bytes_left * 8
bits_to_read =
if memory_index_bits_left <= size_bits do
memory_index_bits_left
else
size_bits
end
saved_value = Map.get(memory, memory_index, 0)
<<_::size(bit_position), read_part::size(bits_to_read), _::binary>> = <<saved_value::256>>
new_read_value = read_value <> <<read_part::size(bits_to_read)>>
new_bytes_left = bytes_left - round(bits_to_read / 8)
read(new_read_value, new_bytes_left, 0, memory_index + @chunk_size, memory)
end
# Write n bytes to the memory (may be bigger than 32 bytes),
# starting at a given address in the memory
defp write(<<>>, _bit_position, _memory_index, memory) do
memory
end
defp write(bytes, bit_position, memory_index, memory) do
memory_index_bits_left =
if bit_position == 0 do
@word_size
else
@word_size - bit_position
end
size_bits = byte_size(bytes) * 8
bits_to_write =
if memory_index_bits_left <= size_bits do
memory_index_bits_left
else
size_bits
end
saved_value = Map.get(memory, memory_index, 0)
<<new_bytes::size(bits_to_write), bytes_left::binary>> = bytes
new_value_binary =
write_part(
bit_position,
<<new_bytes::size(bits_to_write)>>,
bits_to_write,
<<saved_value::256>>
)
memory1 = Map.put(memory, memory_index, binary_word_to_integer(new_value_binary))
write(bytes_left, 0, memory_index + @chunk_size, memory1)
end
# Get the index in memory, in which the `address` is positioned
defp get_index_in_memory(address) do
    memory_index = div(address, @chunk_size) * @chunk_size
bit_position = rem(address, @chunk_size) * 8
{memory_index, bit_position}
end
# Write given binary to a given memory chunk
defp write_part(bit_position, value_binary, size_bits, chunk_binary) do
<<prev::size(bit_position), _::size(size_bits), next::binary>> = chunk_binary
<<prev::size(bit_position)>> <> value_binary <> next
end
# Get the integer value of a `word`
defp binary_word_to_integer(word) do
<<word_integer::size(256), _::binary>> = word
word_integer
end
# Update the size of the memory, if the given `address` is positioned
# outside of the biggest allocated chunk
defp update_memory_size(address, memory) do
{memory_index, _} = get_index_in_memory(address)
current_mem_size_words = Map.get(memory, :size)
if (memory_index + @chunk_size) / @chunk_size > current_mem_size_words do
Map.put(memory, :size, round((memory_index + @chunk_size) / @chunk_size))
else
memory
end
end
end
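The chunk-index arithmetic in `get_index_in_memory/1` can be checked in isolation: an address maps to the start of the 32-byte chunk containing it, plus a bit offset into that chunk's 256-bit word. A minimal sketch (addresses assumed non-negative, as in the VM):

```elixir
chunk_size = 32

index_for = fn address ->
  memory_index = div(address, chunk_size) * chunk_size
  bit_position = rem(address, chunk_size) * 8
  {memory_index, bit_position}
end

{0, 0} = index_for.(0)     # start of the first chunk
{0, 40} = index_for.(5)    # byte 5 of the first chunk -> bit 40
{32, 0} = index_for.(32)   # first byte of the second chunk
{32, 8} = index_for.(33)   # byte 1 of the second chunk -> bit 8
```

A word load at an unaligned address therefore spans two chunks, which is why `load/2` stitches together `prev` and `next` above.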
# apps/aevm/lib/memory.ex
defmodule VintageNet.IP.DhcpdConfig do
@moduledoc """
This is a helper module for VintageNet.Technology implementations that use
a DHCP server.
DHCP server parameters are:
* `:start` - Start of the lease block
* `:end` - End of the lease block
* `:max_leases` - The maximum number of leases
* `:decline_time` - The amount of time that an IP will be reserved (leased to nobody)
  * `:conflict_time` - The amount of time that an IP will be reserved
  * `:offer_time` - How long an offered address is reserved (seconds)
  * `:min_lease` - If a client asks for a lease below this value, it will be rounded up to this value (seconds)
  * `:auto_time` - The time period at which udhcpd will write out the leases file.
* `:static_leases` - list of `{mac_address, ip_address}`
* `:options` - a map DHCP response options to set. Such as:
* `:dns` - IP_LIST
* `:domain` - STRING - [0x0f] client's domain suffix
* `:hostname` - STRING
* `:mtu` - NUM
* `:router` - IP_LIST
* `:search` - STRING_LIST - [0x77] search domains
* `:serverid` - IP (defaults to the interface's IP address)
* `:subnet` - IP (as a subnet mask)
> #### :options {: .info}
> Options may also be passed in as integers. These are passed directly to the DHCP server
> and their values are strings that are not interpreted by VintageNet. Use this to support
> custom DHCP header options. For more details on DHCP response options see RFC 2132
## Example
```
VintageNet.configure("wlan0", %{
type: VintageNetWiFi,
vintage_net_wifi: %{
networks: [
%{
mode: :ap,
ssid: "test ssid",
key_mgmt: :none
}
]
},
dhcpd: %{
start: "192.168.24.2",
end: "192.168.24.10",
options: %{
dns: ["1.1.1.1", "1.0.0.1"],
subnet: "192.168.24.255",
router: ["192.168.24.1"]
}
}
})
```
"""
alias VintageNet.{Command, IP}
alias VintageNet.Interface.RawConfig
@ip_list_options [:dns, :router]
@ip_options [:serverid, :subnet]
@int_options [:mtu]
@string_options [:hostname, :domain]
@string_list_options [:search]
@list_options @ip_list_options ++ @string_list_options
@doc """
Normalize the DHCPD parameters in a configuration.
"""
@spec normalize(map()) :: map()
def normalize(%{dhcpd: dhcpd} = config) do
# Normalize IP addresses
new_dhcpd =
dhcpd
|> Map.update(:start, {192, 168, 0, 20}, &IP.ip_to_tuple!/1)
|> Map.update(:end, {192, 168, 0, 254}, &IP.ip_to_tuple!/1)
|> normalize_static_leases()
|> normalize_options()
|> Map.take([
:start,
:end,
:max_leases,
:decline_time,
:conflict_time,
:offer_time,
:min_lease,
:auto_time,
:static_leases,
:options
])
%{config | dhcpd: new_dhcpd}
end
def normalize(config), do: config
defp normalize_static_leases(%{static_leases: leases} = dhcpd_config) do
new_leases = Enum.map(leases, &normalize_lease/1)
%{dhcpd_config | static_leases: new_leases}
end
defp normalize_static_leases(dhcpd_config), do: dhcpd_config
defp normalize_lease({hwaddr, ipa}) do
{hwaddr, IP.ip_to_tuple!(ipa)}
end
defp normalize_options(%{options: options} = dhcpd_config) do
new_options = for option <- options, into: %{}, do: normalize_option(option)
%{dhcpd_config | options: new_options}
end
defp normalize_options(dhcpd_config), do: dhcpd_config
defp normalize_option({ip_option, ip})
when ip_option in @ip_options do
{ip_option, IP.ip_to_tuple!(ip)}
end
defp normalize_option({ip_list_option, ip_list})
when ip_list_option in @ip_list_options and is_list(ip_list) do
{ip_list_option, Enum.map(ip_list, &IP.ip_to_tuple!/1)}
end
defp normalize_option({string_list_option, string_list})
when string_list_option in @string_list_options and is_list(string_list) do
{string_list_option, Enum.map(string_list, &to_string/1)}
end
defp normalize_option({list_option, one_item})
when list_option in @list_options and not is_list(one_item) do
# Fix super-easy mistake of not passing a list when there's only one item
normalize_option({list_option, [one_item]})
end
defp normalize_option({int_option, value})
when int_option in @int_options and
is_integer(value) do
{int_option, value}
end
defp normalize_option({string_option, string})
when string_option in @string_options do
{string_option, to_string(string)}
end
defp normalize_option({other_option, string})
when is_integer(other_option) and is_binary(string) do
{other_option, to_string(string)}
end
defp normalize_option({bad_option, _value}) do
raise ArgumentError,
"Unknown dhcpd option '#{bad_option}'. Options unknown to VintageNet can be passed in as integers."
end
@doc """
Add udhcpd configuration commands for running a DHCP server
"""
@spec add_config(RawConfig.t(), map(), keyword()) :: RawConfig.t()
def add_config(
%RawConfig{
ifname: ifname,
files: files,
child_specs: child_specs
} = raw_config,
%{dhcpd: dhcpd_config},
opts
) do
tmpdir = Keyword.fetch!(opts, :tmpdir)
udhcpd_conf_path = Path.join(tmpdir, "udhcpd.conf.#{ifname}")
new_files =
files ++
[
{udhcpd_conf_path, udhcpd_contents(ifname, dhcpd_config, tmpdir)}
]
new_child_specs =
child_specs ++
[
Supervisor.child_spec(
{MuonTrap.Daemon,
[
"udhcpd",
[
"-f",
udhcpd_conf_path
],
Command.add_muon_options(
stderr_to_stdout: true,
log_output: :debug,
env: BEAMNotify.env(name: "vintage_net_comm", report_env: true)
)
]},
id: :udhcpd
)
]
%RawConfig{raw_config | files: new_files, child_specs: new_child_specs}
end
def add_config(raw_config, _config_without_dhcpd, _opts), do: raw_config
defp udhcpd_contents(ifname, dhcpd, tmpdir) do
pidfile = Path.join(tmpdir, "udhcpd.#{ifname}.pid")
lease_file = Path.join(tmpdir, "udhcpd.#{ifname}.leases")
initial = """
interface #{ifname}
pidfile #{pidfile}
lease_file #{lease_file}
notify_file #{BEAMNotify.bin_path()}
"""
config = Enum.map(dhcpd, &to_udhcpd_string/1)
IO.iodata_to_binary([initial, "\n", config, "\n"])
end
defp to_udhcpd_string({:start, val}) do
"start #{IP.ip_to_string(val)}\n"
end
defp to_udhcpd_string({:end, val}) do
"end #{IP.ip_to_string(val)}\n"
end
defp to_udhcpd_string({:max_leases, val}) do
"max_leases #{val}\n"
end
defp to_udhcpd_string({:decline_time, val}) do
"decline_time #{val}\n"
end
defp to_udhcpd_string({:conflict_time, val}) do
"conflict_time #{val}\n"
end
defp to_udhcpd_string({:offer_time, val}) do
"offer_time #{val}\n"
end
defp to_udhcpd_string({:min_lease, val}) do
"min_lease #{val}\n"
end
defp to_udhcpd_string({:auto_time, val}) do
"auto_time #{val}\n"
end
defp to_udhcpd_string({:static_leases, leases}) do
Enum.map(leases, fn {mac, ip} ->
"static_lease #{mac} #{IP.ip_to_string(ip)}\n"
end)
end
defp to_udhcpd_string({:options, options}) do
for option <- options do
["opt ", to_udhcpd_option_string(option), "\n"]
end
end
defp to_udhcpd_option_string({option, ip}) when option in @ip_options do
[to_string(option), " ", IP.ip_to_string(ip)]
end
defp to_udhcpd_option_string({option, ip_list}) when option in @ip_list_options do
[to_string(option), " " | ip_list_to_iodata(ip_list)]
end
defp to_udhcpd_option_string({option, string_list}) when option in @string_list_options do
[to_string(option), " " | Enum.intersperse(string_list, " ")]
end
defp to_udhcpd_option_string({option, value}) when option in @int_options do
[to_string(option), " ", to_string(value)]
end
defp to_udhcpd_option_string({option, string}) when option in @string_options do
[to_string(option), " ", string]
end
defp to_udhcpd_option_string({other_option, string}) when is_integer(other_option) do
[to_string(other_option), " ", string]
end
defp ip_list_to_iodata(ip_list) do
ip_list
|> Enum.map(&IP.ip_to_string/1)
|> Enum.intersperse(" ")
end
end
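For the `wlan0` example in the moduledoc above, `udhcpd_contents/3` renders roughly the following `udhcpd.conf`. The paths assume a `/tmp` tmpdir, the `notify_file` value is whatever `BEAMNotify.bin_path/0` returns, and the line order within each section follows map iteration, so it is not guaranteed:

```
interface wlan0
pidfile /tmp/udhcpd.wlan0.pid
lease_file /tmp/udhcpd.wlan0.leases
notify_file <BEAMNotify.bin_path()>

start 192.168.24.2
end 192.168.24.10
opt dns 1.1.1.1 1.0.0.1
opt router 192.168.24.1
opt subnet 192.168.24.255
```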
# lib/vintage_net/ip/dhcpd_config.ex
defmodule AWS.AppConfig do
@moduledoc """
AWS AppConfig
Use AWS AppConfig, a capability of AWS Systems Manager, to create, manage, and
quickly deploy application configurations.
AppConfig supports controlled deployments to applications of any size and
includes built-in validation checks and monitoring. You can use AppConfig with
applications hosted on Amazon EC2 instances, AWS Lambda, containers, mobile
applications, or IoT devices.
To prevent errors when deploying application configurations, especially for
production systems where a simple typo could cause an unexpected outage,
AppConfig includes validators. A validator provides a syntactic or semantic
check to ensure that the configuration you want to deploy works as intended. To
validate your application configuration data, you provide a schema or a Lambda
function that runs against the configuration. The configuration deployment or
update can only proceed when the configuration data is valid.
During a configuration deployment, AppConfig monitors the application to ensure
that the deployment is successful. If the system encounters an error, AppConfig
rolls back the change to minimize impact for your application users. You can
configure a deployment strategy for each application or environment that
includes deployment criteria, including velocity, bake time, and alarms to
monitor. Similar to error monitoring, if a deployment triggers an alarm,
AppConfig automatically rolls back to the previous version.
AppConfig supports multiple use cases. Here are some examples.
* **Application tuning**: Use AppConfig to carefully introduce
changes to your application that can only be tested with production traffic.
* **Feature toggle**: Use AppConfig to turn on new features that
require a timely deployment, such as a product launch or announcement.
* **Allow list**: Use AppConfig to allow premium subscribers to
access paid content.
* **Operational issues**: Use AppConfig to reduce stress on your
application when a dependency or other external factor impacts the system.
This reference is intended to be used with the [AWS AppConfig User Guide](http://docs.aws.amazon.com/systems-manager/latest/userguide/appconfig.html).
"""
@doc """
An application in AppConfig is a logical unit of code that provides capabilities
for your customers.
For example, an application can be a microservice that runs on Amazon EC2
instances, a mobile application installed by your users, a serverless
application using Amazon API Gateway and AWS Lambda, or any system you run on
behalf of others.
"""
def create_application(client, input, options \\ []) do
path_ = "/applications"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 201)
end
@doc """
Information that enables AppConfig to access the configuration source.
Valid configuration sources include Systems Manager (SSM) documents, SSM
Parameter Store parameters, and Amazon S3 objects. A configuration profile
includes the following information.
* The Uri location of the configuration data.
* The AWS Identity and Access Management (IAM) role that provides
access to the configuration data.
* A validator for the configuration data. Available validators
include either a JSON Schema or an AWS Lambda function.
For more information, see [Create a Configuration and a Configuration Profile](http://docs.aws.amazon.com/systems-manager/latest/userguide/appconfig-creating-configuration-and-profile.html)
in the *AWS AppConfig User Guide*.
"""
def create_configuration_profile(client, application_id, input, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/configurationprofiles"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 201)
end
@doc """
A deployment strategy defines important criteria for rolling out your
configuration to the designated targets.
A deployment strategy includes: the overall duration required, a percentage of
targets to receive the deployment during each interval, an algorithm that
defines how percentage grows, and bake time.
"""
def create_deployment_strategy(client, input, options \\ []) do
path_ = "/deploymentstrategies"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 201)
end
@doc """
For each application, you define one or more environments.
An environment is a logical deployment group of AppConfig targets, such as
applications in a `Beta` or `Production` environment. You can also define
environments for application subcomponents such as the `Web`, `Mobile` and
`Back-end` components for your application. You can configure Amazon CloudWatch
alarms for each environment. The system monitors alarms during a configuration
deployment. If an alarm is triggered, the system rolls back the configuration.
"""
def create_environment(client, application_id, input, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/environments"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 201)
end
@doc """
Create a new configuration in the AppConfig configuration store.
"""
def create_hosted_configuration_version(client, application_id, configuration_profile_id, input, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/configurationprofiles/#{URI.encode(configuration_profile_id)}/hostedconfigurationversions"
{headers, input} =
[
{"ContentType", "Content-Type"},
{"Description", "Description"},
{"LatestVersionNumber", "Latest-Version-Number"},
]
|> AWS.Request.build_params(input)
query_ = []
case request(client, :post, path_, query_, headers, input, options, 201) do
{:ok, body, response} when not is_nil(body) ->
body =
[
{"Application-Id", "ApplicationId"},
{"Configuration-Profile-Id", "ConfigurationProfileId"},
{"Content-Type", "ContentType"},
{"Description", "Description"},
{"Version-Number", "VersionNumber"},
]
|> Enum.reduce(body, fn {header_name, key}, acc ->
case List.keyfind(response.headers, header_name, 0) do
nil -> acc
{_header_name, value} -> Map.put(acc, key, value)
end
end)
{:ok, body, response}
result ->
result
end
end
@doc """
Delete an application.
Deleting an application does not delete a configuration from a host.
"""
def delete_application(client, application_id, input, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}"
headers = []
query_ = []
request(client, :delete, path_, query_, headers, input, options, 204)
end
@doc """
Delete a configuration profile.
Deleting a configuration profile does not delete a configuration from a host.
"""
def delete_configuration_profile(client, application_id, configuration_profile_id, input, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/configurationprofiles/#{URI.encode(configuration_profile_id)}"
headers = []
query_ = []
request(client, :delete, path_, query_, headers, input, options, 204)
end
@doc """
Delete a deployment strategy.
Deleting a deployment strategy does not delete a configuration from a host.
"""
def delete_deployment_strategy(client, deployment_strategy_id, input, options \\ []) do
path_ = "/deployementstrategies/#{URI.encode(deployment_strategy_id)}"
headers = []
query_ = []
request(client, :delete, path_, query_, headers, input, options, 204)
end
@doc """
Delete an environment.
Deleting an environment does not delete a configuration from a host.
"""
def delete_environment(client, application_id, environment_id, input, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/environments/#{URI.encode(environment_id)}"
headers = []
query_ = []
request(client, :delete, path_, query_, headers, input, options, 204)
end
@doc """
Delete a version of a configuration from the AppConfig configuration store.
"""
def delete_hosted_configuration_version(client, application_id, configuration_profile_id, version_number, input, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/configurationprofiles/#{URI.encode(configuration_profile_id)}/hostedconfigurationversions/#{URI.encode(version_number)}"
headers = []
query_ = []
request(client, :delete, path_, query_, headers, input, options, 204)
end
@doc """
Retrieve information about an application.
"""
def get_application(client, application_id, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Receive information about a configuration.
AWS AppConfig uses the value of the `ClientConfigurationVersion` parameter to
identify the configuration version on your clients. If you don’t send
`ClientConfigurationVersion` with each call to `GetConfiguration`, your clients
receive the current configuration. You are charged each time your clients
receive a configuration.
To avoid excess charges, we recommend that you include the
`ClientConfigurationVersion` value with every call to `GetConfiguration`. This
value must be saved on your client. Subsequent calls to `GetConfiguration` must
pass this value by using the `ClientConfigurationVersion` parameter.
"""
def get_configuration(client, application, configuration, environment, client_configuration_version \\ nil, client_id, options \\ []) do
path_ = "/applications/#{URI.encode(application)}/environments/#{URI.encode(environment)}/configurations/#{URI.encode(configuration)}"
headers = []
query_ = []
query_ = if !is_nil(client_id) do
[{"client_id", client_id} | query_]
else
query_
end
query_ = if !is_nil(client_configuration_version) do
[{"client_configuration_version", client_configuration_version} | query_]
else
query_
end
case request(client, :get, path_, query_, headers, nil, options, 200) do
{:ok, body, response} when not is_nil(body) ->
body =
[
{"Configuration-Version", "ConfigurationVersion"},
{"Content-Type", "ContentType"},
]
|> Enum.reduce(body, fn {header_name, key}, acc ->
case List.keyfind(response.headers, header_name, 0) do
nil -> acc
{_header_name, value} -> Map.put(acc, key, value)
end
end)
{:ok, body, response}
result ->
result
end
end
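The `Enum.reduce/3` above copies selected response headers into the decoded body map. A standalone sketch of that pattern (the header names and body content here are illustrative, not tied to any real response):

```elixir
defmodule HeaderMerge do
  # Copy selected response headers into a decoded body map.
  # `List.keyfind/3` looks each header up by name in the {name, value}
  # list; missing headers leave the body untouched.
  def merge(body, headers, mapping) do
    Enum.reduce(mapping, body, fn {header_name, key}, acc ->
      case List.keyfind(headers, header_name, 0) do
        nil -> acc
        {_header_name, value} -> Map.put(acc, key, value)
      end
    end)
  end
end

headers = [{"Content-Type", "application/json"}, {"Configuration-Version", "3"}]

body =
  HeaderMerge.merge(
    %{"Content" => "..."},
    headers,
    [{"Configuration-Version", "ConfigurationVersion"}, {"Content-Type", "ContentType"}]
  )
# body now also carries "ConfigurationVersion" and "ContentType" keys
```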
@doc """
Retrieve information about a configuration profile.
"""
def get_configuration_profile(client, application_id, configuration_profile_id, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/configurationprofiles/#{URI.encode(configuration_profile_id)}"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Retrieve information about a configuration deployment.
"""
def get_deployment(client, application_id, deployment_number, environment_id, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/environments/#{URI.encode(environment_id)}/deployments/#{URI.encode(deployment_number)}"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Retrieve information about a deployment strategy.
A deployment strategy defines important criteria for rolling out your
configuration to the designated targets. A deployment strategy includes the
overall duration required, a percentage of targets to receive the deployment
during each interval, an algorithm that defines how the percentage grows, and
the bake time.
"""
def get_deployment_strategy(client, deployment_strategy_id, options \\ []) do
path_ = "/deploymentstrategies/#{URI.encode(deployment_strategy_id)}"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Retrieve information about an environment.
An environment is a logical deployment group of AppConfig applications, such as
applications in a `Production` environment or in an `EU_Region` environment.
Each configuration deployment targets an environment. You can enable one or more
Amazon CloudWatch alarms for an environment. If an alarm is triggered during a
deployment, AppConfig rolls back the configuration.
"""
def get_environment(client, application_id, environment_id, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/environments/#{URI.encode(environment_id)}"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Get information about a specific configuration version.
"""
def get_hosted_configuration_version(client, application_id, configuration_profile_id, version_number, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/configurationprofiles/#{URI.encode(configuration_profile_id)}/hostedconfigurationversions/#{URI.encode(version_number)}"
headers = []
query_ = []
case request(client, :get, path_, query_, headers, nil, options, 200) do
{:ok, body, response} when not is_nil(body) ->
body =
[
{"Application-Id", "ApplicationId"},
{"Configuration-Profile-Id", "ConfigurationProfileId"},
{"Content-Type", "ContentType"},
{"Description", "Description"},
{"Version-Number", "VersionNumber"},
]
|> Enum.reduce(body, fn {header_name, key}, acc ->
case List.keyfind(response.headers, header_name, 0) do
nil -> acc
{_header_name, value} -> Map.put(acc, key, value)
end
end)
{:ok, body, response}
result ->
result
end
end
@doc """
List all applications in your AWS account.
"""
def list_applications(client, max_results \\ nil, next_token \\ nil, options \\ []) do
path_ = "/applications"
headers = []
query_ = []
query_ = if !is_nil(next_token) do
[{"next_token", next_token} | query_]
else
query_
end
query_ = if !is_nil(max_results) do
[{"max_results", max_results} | query_]
else
query_
end
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Lists the configuration profiles for an application.
"""
def list_configuration_profiles(client, application_id, max_results \\ nil, next_token \\ nil, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/configurationprofiles"
headers = []
query_ = []
query_ = if !is_nil(next_token) do
[{"next_token", next_token} | query_]
else
query_
end
query_ = if !is_nil(max_results) do
[{"max_results", max_results} | query_]
else
query_
end
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
List deployment strategies.
"""
def list_deployment_strategies(client, max_results \\ nil, next_token \\ nil, options \\ []) do
path_ = "/deploymentstrategies"
headers = []
query_ = []
query_ = if !is_nil(next_token) do
[{"next_token", next_token} | query_]
else
query_
end
query_ = if !is_nil(max_results) do
[{"max_results", max_results} | query_]
else
query_
end
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Lists the deployments for an environment.
"""
def list_deployments(client, application_id, environment_id, max_results \\ nil, next_token \\ nil, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/environments/#{URI.encode(environment_id)}/deployments"
headers = []
query_ = []
query_ = if !is_nil(next_token) do
[{"next_token", next_token} | query_]
else
query_
end
query_ = if !is_nil(max_results) do
[{"max_results", max_results} | query_]
else
query_
end
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
List the environments for an application.
"""
def list_environments(client, application_id, max_results \\ nil, next_token \\ nil, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/environments"
headers = []
query_ = []
query_ = if !is_nil(next_token) do
[{"next_token", next_token} | query_]
else
query_
end
query_ = if !is_nil(max_results) do
[{"max_results", max_results} | query_]
else
query_
end
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
View a list of configurations stored in the AppConfig configuration store by
version.
"""
def list_hosted_configuration_versions(client, application_id, configuration_profile_id, max_results \\ nil, next_token \\ nil, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/configurationprofiles/#{URI.encode(configuration_profile_id)}/hostedconfigurationversions"
headers = []
query_ = []
query_ = if !is_nil(next_token) do
[{"next_token", next_token} | query_]
else
query_
end
query_ = if !is_nil(max_results) do
[{"max_results", max_results} | query_]
else
query_
end
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Retrieves the list of key-value tags assigned to the resource.
"""
def list_tags_for_resource(client, resource_arn, options \\ []) do
path_ = "/tags/#{URI.encode(resource_arn)}"
headers = []
query_ = []
request(client, :get, path_, query_, headers, nil, options, 200)
end
@doc """
Starts a deployment.
"""
def start_deployment(client, application_id, environment_id, input, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/environments/#{URI.encode(environment_id)}/deployments"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 201)
end
@doc """
Stops a deployment.
This API action works only on deployments that have a status of `DEPLOYING`.
This action moves the deployment to a status of `ROLLED_BACK`.
"""
def stop_deployment(client, application_id, deployment_number, environment_id, input, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/environments/#{URI.encode(environment_id)}/deployments/#{URI.encode(deployment_number)}"
headers = []
query_ = []
request(client, :delete, path_, query_, headers, input, options, 202)
end
@doc """
Assigns metadata, in the form of tags, to an AppConfig resource.
Tags help organize and categorize your AppConfig resources. Each tag consists of
a key and an optional value, both of which you define. You can specify a maximum
of 50 tags for a resource.
"""
def tag_resource(client, resource_arn, input, options \\ []) do
path_ = "/tags/#{URI.encode(resource_arn)}"
headers = []
query_ = []
request(client, :post, path_, query_, headers, input, options, 204)
end
@doc """
Deletes a tag key and value from an AppConfig resource.
"""
def untag_resource(client, resource_arn, input, options \\ []) do
path_ = "/tags/#{URI.encode(resource_arn)}"
headers = []
{query_, input} =
[
{"TagKeys", "tagKeys"},
]
|> AWS.Request.build_params(input)
request(client, :delete, path_, query_, headers, input, options, 204)
end
@doc """
Updates an application.
"""
def update_application(client, application_id, input, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}"
headers = []
query_ = []
request(client, :patch, path_, query_, headers, input, options, 200)
end
@doc """
Updates a configuration profile.
"""
def update_configuration_profile(client, application_id, configuration_profile_id, input, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/configurationprofiles/#{URI.encode(configuration_profile_id)}"
headers = []
query_ = []
request(client, :patch, path_, query_, headers, input, options, 200)
end
@doc """
Updates a deployment strategy.
"""
def update_deployment_strategy(client, deployment_strategy_id, input, options \\ []) do
path_ = "/deploymentstrategies/#{URI.encode(deployment_strategy_id)}"
headers = []
query_ = []
request(client, :patch, path_, query_, headers, input, options, 200)
end
@doc """
Updates an environment.
"""
def update_environment(client, application_id, environment_id, input, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/environments/#{URI.encode(environment_id)}"
headers = []
query_ = []
request(client, :patch, path_, query_, headers, input, options, 200)
end
@doc """
Uses the validators in a configuration profile to validate a configuration.
"""
def validate_configuration(client, application_id, configuration_profile_id, input, options \\ []) do
path_ = "/applications/#{URI.encode(application_id)}/configurationprofiles/#{URI.encode(configuration_profile_id)}/validators"
headers = []
{query_, input} =
[
{"ConfigurationVersion", "configuration_version"},
]
|> AWS.Request.build_params(input)
request(client, :post, path_, query_, headers, input, options, 204)
end
@spec request(AWS.Client.t(), atom(), binary(), list(), list(), map() | nil, list(), pos_integer()) ::
{:ok, map() | nil, map()}
| {:error, term()}
defp request(client, method, path, query, headers, input, options, success_status_code) do
client = %{client | service: "appconfig"}
host = build_host("appconfig", client)
url = host
|> build_url(path, client)
|> add_query(query, client)
additional_headers = [{"Host", host}, {"Content-Type", "application/x-amz-json-1.1"}]
headers = AWS.Request.add_headers(additional_headers, headers)
payload = encode!(client, input)
headers = AWS.Request.sign_v4(client, method, url, headers, payload)
perform_request(client, method, url, payload, headers, options, success_status_code)
end
defp perform_request(client, method, url, payload, headers, options, success_status_code) do
case AWS.Client.request(client, method, url, payload, headers, options) do
{:ok, %{status_code: status_code, body: body} = response}
when is_nil(success_status_code) and status_code in [200, 202, 204]
when status_code == success_status_code ->
body = if(body != "", do: decode!(client, body))
{:ok, body, response}
{:ok, response} ->
{:error, {:unexpected_response, response}}
error = {:error, _reason} -> error
end
end
defp build_host(_endpoint_prefix, %{region: "local", endpoint: endpoint}) do
endpoint
end
defp build_host(_endpoint_prefix, %{region: "local"}) do
"localhost"
end
defp build_host(endpoint_prefix, %{region: region, endpoint: endpoint}) do
"#{endpoint_prefix}.#{region}.#{endpoint}"
end
defp build_url(host, path, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}#{path}"
end
defp add_query(url, [], _client) do
url
end
defp add_query(url, query, client) do
querystring = encode!(client, query, :query)
"#{url}?#{querystring}"
end
defp encode!(client, payload, format \\ :json) do
AWS.Client.encode!(client, payload, format)
end
defp decode!(client, payload) do
AWS.Client.decode!(client, payload, :json)
end
end
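The `list_*` functions above return one page at a time and thread a `next_token` through the query string. A minimal, self-contained sketch of draining such an API with `Stream.unfold/2` (`fetch_page` here is a stand-in for a real client call, not part of this module):

```elixir
# Stand-in for a paginated list_* call: given a token (nil on the first
# request), returns {items, next_token}; next_token of nil means done.
fetch_page = fn
  nil -> {[1, 2], "t1"}
  "t1" -> {[3], "t2"}
  "t2" -> {[4, 5], nil}
end

# Drain every page by feeding each next_token back into the next request.
# :start / :done sentinels are needed because nil already means "no token".
all_items =
  Stream.unfold(:start, fn
    :done ->
      nil

    token ->
      token = if token == :start, do: nil, else: token
      {items, next} = fetch_page.(token)
      {items, next || :done}
  end)
  |> Enum.concat()
```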
# (end of lib/aws/generated/app_config.ex)
defmodule Bitcraft do
@moduledoc """
The following are the main Bitcraft components:
* `Bitcraft.BitBlock` - This is the main Bitcraft component. It provides
a DSL that allows to define bit-blocks with their segments (useful for
building binary protocols) and automatically injects encoding and decoding
functions for them.
* `Bitcraft` - This is a helper module that provides utility functions to
work with bit strings and binaries.
"""
import Bitwise
use Bitcraft.Helpers
# Base data types for binaries
@type base_type ::
integer
| float
| binary
| bitstring
| byte
| char
@typedoc "Segment type"
@type segment_type :: base_type | Bitcraft.BitBlock.Array.t()
@typedoc "Codable segment type"
@type codable_segment_type :: base_type | [base_type]
## API
@doc """
Encodes the given `input` into a bitstring.
## Options
* `:size` - The size in bits for the input to encode. The default
value depends on the type: for integers it is 8, for floats it is 64, and
for other data types it is `nil`. If the `input` is a list, this option is
skipped, since it is handled as an array and the size will be
`array_length * element_size`.
* `:type` - The segment type given by `Bitcraft.segment_type()`.
Defaults to `:integer`.
* `:sign` - If the input is an integer, defines if it is `:signed`
or `:unsigned`. Defaults to `:unsigned`.
* `:endian` - Applies to `utf32`, `utf16`, `float`, `integer`.
Defines the endianness, `:big` or `:little`. Defaults to `:big`.
## Example
iex> Bitcraft.encode_segment(15)
<<15>>
iex> Bitcraft.encode_segment(255, size: 4)
<<15::size(4)>>
iex> Bitcraft.encode_segment(-3.3, size: 64, type: :float)
<<192, 10, 102, 102, 102, 102, 102, 102>>
iex> Bitcraft.encode_segment("hello", type: :binary)
"hello"
iex> Bitcraft.encode_segment(<<1, 2, 3>>, type: :bits)
<<1, 2, 3>>
iex> Bitcraft.encode_segment([1, -2, 3], type: %Bitcraft.BitBlock.Array{
...> type: :integer, element_size: 4},
...> sign: :signed
...> )
<<30, 3::size(4)>>
"""
@spec encode_segment(codable_segment_type, Keyword.t()) :: bitstring
def encode_segment(input, opts \\ []) do
type = Keyword.get(opts, :type, :integer)
sign = Keyword.get(opts, :sign, :unsigned)
endian = Keyword.get(opts, :endian, :big)
size = Keyword.get(opts, :size)
size =
cond do
is_nil(size) and is_integer(input) -> 8
is_nil(size) and is_float(input) -> 64
true -> size
end
encode_segment(input, size, type, sign, endian)
end
@doc """
Returns a tuple `{decoded_value, leftover}` where the first element is the
decoded value of the given `input` (according to the given `opts`)
and the second element is the leftover.
## Options
* `:size` - The size in bits to decode. Defaults to `byte_size(input) * 8`.
If the type is `Bitcraft.BitBlock.Array.t()`, the size should match
`array_length * element_size`.
* `:type` - The segment type given by `Bitcraft.segment_type()`.
Defaults to `:integer`.
* `:sign` - If the input is an integer, defines if it is `:signed`
or `:unsigned`. Defaults to `:unsigned`.
* `:endian` - Applies to `utf32`, `utf16`, `float`, `integer`.
Defines the endianness, `:big` or `:little`. Defaults to `:big`.
## Example
iex> 3
...> |> Bitcraft.encode_segment(size: 4)
...> |> Bitcraft.decode_segment(size: 4)
{3, ""}
iex> -3.3
...> |> Bitcraft.encode_segment(size: 64, type: :float, sign: :signed)
...> |> Bitcraft.decode_segment(size: 64, type: :float, sign: :signed)
{-3.3, ""}
iex> "test"
...> |> Bitcraft.encode_segment(type: :binary)
...> |> Bitcraft.decode_segment(size: 4, type: :binary)
{"test", ""}
iex> <<1, 2, 3, 4>>
...> |> Bitcraft.encode_segment(type: :bits)
...> |> Bitcraft.decode_segment(size: 32, type: :bits)
{<<1, 2, 3, 4>>, ""}
iex> alias Bitcraft.BitBlock.Array
iex> [1, 2]
...> |> Bitcraft.encode_segment(type: %Array{})
...> |> Bitcraft.decode_segment(size: 16, type: %Array{})
{[1, 2], ""}
iex> [3.3, -7.7, 9.9]
...> |> Bitcraft.encode_segment(
...> type: %Array{type: :float, element_size: 64},
...> sign: :signed
...> )
...> |> Bitcraft.decode_segment(
...> size: 192,
...> type: %Array{type: :float, element_size: 64},
...> sign: :signed
...> )
{[3.3, -7.7, 9.9], ""}
"""
@spec decode_segment(bitstring, Keyword.t()) :: {codable_segment_type, bitstring}
def decode_segment(input, opts \\ []) do
type = Keyword.get(opts, :type, :integer)
sign = Keyword.get(opts, :sign, :unsigned)
endian = Keyword.get(opts, :endian, :big)
size = opts[:size] || byte_size(input) * 8
decode_segment(input, size, type, sign, endian)
end
@doc """
Returns the number of `1`s in binary representation of the given `integer`.
## Example
iex> Bitcraft.count_ones(15)
4
iex> Bitcraft.count_ones(255)
8
"""
@spec count_ones(integer) :: integer
def count_ones(integer) when is_integer(integer) do
count_ones(integer, 0)
end
defp count_ones(0, count), do: count
defp count_ones(integer, count) do
count_ones(integer &&& integer - 1, count + 1)
end
end
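`count_ones/2` above relies on the classic Kernighan trick: `n &&& (n - 1)` clears the lowest set bit, so the loop runs once per set bit rather than once per bit of the integer. The same idea as a standalone recursive sketch (for non-negative integers):

```elixir
defmodule PopCount do
  import Bitwise

  # n &&& (n - 1) clears the lowest set bit, e.g. 0b1100 &&& 0b1011 == 0b1000,
  # so the recursion depth equals the number of set bits.
  def ones(0), do: 0
  def ones(n) when n > 0, do: 1 + ones(n &&& n - 1)
end

PopCount.ones(0b1011)
# => 3
```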
# (end of lib/bitcraft.ex)
defmodule AdventOfCode.Day05 do
import AdventOfCode.Utils
@type coordinate :: {integer(), integer()}
@type vent :: {coordinate(), coordinate()}
@type heatmap :: %{required(coordinate()) => integer()}
@spec part1([binary()]) :: integer()
def part1(args) do
parse_args(args)
|> number_intersections(:filtered)
end
@spec part2([binary()]) :: integer()
def part2(args) do
parse_args(args)
|> number_intersections(:unfiltered)
end
@spec number_intersections([vent()], :filtered | :unfiltered) :: integer()
defp number_intersections(vents, should_filter) do
case should_filter do
:filtered -> Enum.filter(vents, &hor_or_vert?/1)
:unfiltered -> vents
end
|> generate_heatmap()
|> Map.values()
|> Enum.filter(&(&1 >= 2))
|> Enum.count()
end
@spec generate_heatmap([vent()]) :: heatmap()
defp generate_heatmap(vents) do
Enum.reduce(vents, %{}, fn vent, acc ->
Map.merge(acc, vent_heatmap(vent), fn _k, v1, v2 -> v1 + v2 end)
end)
end
@spec vent_heatmap(vent()) :: heatmap()
defp vent_heatmap(vent) do
vent_coverage(vent)
|> Enum.reduce(%{}, fn coordinate, acc ->
Map.update(acc, coordinate, 1, &(&1 + 1))
end)
end
@spec vent_coverage(vent) :: [coordinate]
defp vent_coverage(vent) do
{{x1, y1}, {x2, y2}} = vent
if hor_or_vert?(vent) do
for x <- x1..x2,
y <- y1..y2,
do: {x, y}
else
Enum.zip(x1..x2, y1..y2)
end
end
@spec hor_or_vert?(vent()) :: boolean()
defp hor_or_vert?(vent) do
{{x1, y1}, {x2, y2}} = vent
x1 == x2 || y1 == y2
end
@spec parse_args([binary()]) :: [vent()]
defp parse_args(args), do: Enum.map(args, &parse_line/1)
@spec parse_line(binary()) :: {coordinate(), coordinate()}
defp parse_line(line) do
String.split(line, " -> ")
|> Enum.map(&parse_coordinate/1)
|> Kernel.then(&List.to_tuple/1)
end
@spec parse_coordinate(binary()) :: coordinate()
defp parse_coordinate(coordinate) do
coordinate
|> String.split(",")
|> Enum.map(&parse_int!/1)
|> Kernel.then(&List.to_tuple/1)
end
end
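The branch in `vent_coverage/1` matters: a comprehension over `x1..x2` and `y1..y2` would fill a whole rectangle for diagonal vents, so 45-degree lines are zipped coordinate by coordinate instead. Both cases, sketched with a toy coverage function (not the module above):

```elixir
coverage = fn {{x1, y1}, {x2, y2}} ->
  if x1 == x2 or y1 == y2 do
    # Horizontal/vertical: one range is a single value, so the
    # comprehension degenerates to a straight line of points.
    for x <- x1..x2, y <- y1..y2, do: {x, y}
  else
    # Diagonal: pair the coordinates step by step instead of
    # taking the cartesian product.
    Enum.zip(x1..x2, y1..y2)
  end
end

coverage.({{1, 1}, {1, 3}})
# => [{1, 1}, {1, 2}, {1, 3}]

coverage.({{9, 7}, {7, 9}})
# => [{9, 7}, {8, 8}, {7, 9}]
```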
# (end of lib/advent_of_code/day_05.ex)
defmodule AdventOfCode2019.CarePackage do
@moduledoc """
Day 13 — https://adventofcode.com/2019/day/13
"""
require AdventOfCode2019.IntcodeComputer
@spec part1(Enumerable.t()) :: integer
def part1(in_stream) do
in_stream
|> load_program()
|> play()
|> List.first()
end
@spec part2(Enumerable.t()) :: integer
def part2(in_stream) do
in_stream
|> load_program()
|> Map.put(0, 2)
|> play()
|> List.last()
end
defp load_program(in_stream) do
in_stream
|> Stream.map(&AdventOfCode2019.IntcodeComputer.load_program/1)
|> Enum.take(1)
|> List.first()
end
@spec play(map) :: list
defp play(program), do: play({:noop, {program, 0, 0}, nil}, [], 0, 0, 0, 0, 0)
@spec play(tuple, list, integer, integer, integer, integer, integer) :: list
defp play({:done, _state, _id}, _tile, count, _pad, _ball, _input, score), do: [count, score]
defp play({:output, state, score}, [0, -1], count, pad, ball, input, _score) do
AdventOfCode2019.IntcodeComputer.step(state, [input])
|> play([], count, pad, ball, input, score)
end
defp play({:output, state, 2}, [_y, _x], count, pad, ball, input, score) do
AdventOfCode2019.IntcodeComputer.step(state, [input])
|> play([], count + 1, pad, ball, input, score)
end
defp play({:output, state, 3}, [_y, pad], count, _pad, ball, input, score) do
AdventOfCode2019.IntcodeComputer.step(state, [input])
|> play([], count, pad, ball, joystick(pad, ball), score)
end
defp play({:output, state, 4}, [_y, ball], count, pad, _ball, input, score) do
AdventOfCode2019.IntcodeComputer.step(state, [input])
|> play([], count, pad, ball, joystick(pad, ball), score)
end
defp play({:output, state, _id}, [_y, _x], count, pad, ball, input, score) do
AdventOfCode2019.IntcodeComputer.step(state, [input])
|> play([], count, pad, ball, input, score)
end
defp play({:output, state, yx}, tile, count, pad, ball, input, score) do
AdventOfCode2019.IntcodeComputer.step(state, [input])
|> play([yx | tile], count, pad, ball, input, score)
end
defp play({_result, state, _id}, tile, count, pad, ball, input, score) do
AdventOfCode2019.IntcodeComputer.step(state, [input])
|> play(tile, count, pad, ball, input, score)
end
@spec joystick(integer, integer) :: integer
defp joystick(x, x), do: 0
defp joystick(pad, ball) when pad < ball, do: 1
defp joystick(_, _), do: -1
end
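`joystick/2` above is a neat use of multi-clause heads: `joystick(x, x)` matches only when both arguments are equal, so the paddle-tracking logic needs no `if`. The same pattern in isolation:

```elixir
defmodule Joystick do
  # The first clause binds both arguments to the same variable, so it
  # only matches when the paddle and ball share an x coordinate.
  def move(x, x), do: 0
  def move(pad, ball) when pad < ball, do: 1
  def move(_pad, _ball), do: -1
end

Joystick.move(3, 3)  # => 0  (stay put)
Joystick.move(2, 5)  # => 1  (move right toward the ball)
Joystick.move(7, 5)  # => -1 (move left)
```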
# (end of lib/advent_of_code_2019/day13.ex)
defmodule Beacon.Chain.Params do
@moduledoc """
Defines global parameters for ETH 2.0 as defined in the specification:
https://github.com/ethereum/eth2.0-specs/blob/dev/specs/phase0/beacon-chain.md
"""
values = [
# Misc
eth1_follow_distance: :math.pow(2, 10),
max_committees_per_slot: :math.pow(2, 6),
target_committee_size: :math.pow(2, 7),
max_validators_per_committee: :math.pow(2, 11),
min_per_epoch_churn_limit: :math.pow(2, 16),
shuffle_round_count: 90,
min_genesis_active_validator_count: :math.pow(2, 14),
min_genesis_time: 1_578_009_600,
hysteresis_quotient: 4,
hysteresis_downward_multiplier: 1,
hysteresis_upward_multiplier: 5,
# Gwei values
min_deposit_amount: :math.pow(2, 0) * :math.pow(10, 9),
max_effective_balance: :math.pow(2, 5) * :math.pow(10, 9),
ejection_balance: :math.pow(2, 4) * :math.pow(10, 9),
effective_balance_increment: :math.pow(2, 0) * :math.pow(10, 9),
# Initial values
genesis_fork_version: 0x00000000,
bls_withdrawal_prefix: 0x00,
# Time Parameters
min_genesis_delay: 86_400,
seconds_per_slot: 12,
seconds_per_eth1_block: 14,
min_attestation_inclusion_delay: :math.pow(2, 0),
slots_per_epoch: :math.pow(2, 5),
min_seed_lookahead: :math.pow(2, 0),
max_seed_lookahead: :math.pow(2, 2),
min_epochs_to_inactivity_penalty: :math.pow(2, 2),
epochs_per_eth1_voting_period: :math.pow(2, 5),
slots_per_historical_root: :math.pow(2, 13),
min_validator_withdrawability_delay: :math.pow(2, 8),
shard_committee_period: :math.pow(2, 8),
# State list lengths
epochs_per_historical_vector: :math.pow(2, 16),
epochs_per_slashing_vector: :math.pow(2, 13),
historical_roots_limit: :math.pow(2, 24),
validator_registry_limit: :math.pow(2, 40),
# Rewards and penalties
base_reward_factor: :math.pow(2, 6),
whistleblower_reward_quotient: :math.pow(2, 9),
proposer_reward_quotient: :math.pow(2, 3),
inactivity_penalty_quotient: :math.pow(2, 24),
min_slashing_penalty_quotient: :math.pow(2, 5),
# Max operations per Block
max_proposer_slashings: :math.pow(2, 4),
max_attester_slashings: :math.pow(2, 1),
max_attestations: :math.pow(2, 7),
max_deposits: :math.pow(2, 4),
max_voluntary_exits: :math.pow(2, 4),
# Domain types
domain_beacon_proposer: 0x00000000,
domain_beacon_attester: 0x01000000,
domain_randao: 0x02000000,
domain_voluntary_exit: 0x04000000,
domain_selection_proof: 0x05000000,
domain_aggregate_and_proof: 0x06000000
]
for {key, value} <- values do
def decode(unquote(key)), do: unquote(value)
end
end
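One caveat with the table above: `:math.pow/2` works in floating point, so values like `:math.pow(2, 40)` come back as floats (`1099511627776.0`), and very large powers can lose integer precision. If exact integers are wanted, a left shift is the usual alternative; a sketch:

```elixir
import Bitwise

# :math.pow/2 always returns a float:
:math.pow(2, 5)   # => 32.0

# Shifting 1 left by n yields an exact integer 2^n instead:
pow2 = fn n -> 1 <<< n end

pow2.(5)    # => 32
pow2.(40)   # => 1099511627776
```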
# (end of apps/panacea_beacon/lib/panacea_beacon/chain/params.ex)
defmodule GraphQL.Schema do
@type t :: %GraphQL.Schema{
query: map,
mutation: map,
type_cache: map,
directives: [GraphQL.Type.Directive.t]
}
alias GraphQL.Type.Input
alias GraphQL.Type.Interface
alias GraphQL.Type.Union
alias GraphQL.Type.ObjectType
alias GraphQL.Type.Introspection
alias GraphQL.Type.CompositeType
alias GraphQL.Lang.AST.Nodes
defstruct query: nil,
mutation: nil,
type_cache: nil,
directives: [
GraphQL.Type.Directives.include,
GraphQL.Type.Directives.skip
]
def with_type_cache(schema = %{type_cache: nil}), do: new(schema)
def with_type_cache(schema), do: schema
def new(%{query: query, mutation: mutation}) do
%GraphQL.Schema{query: query, mutation: mutation, type_cache: do_reduce_types(query, mutation)}
end
def new(%{mutation: mutation}), do: new(%{query: nil, mutation: mutation})
def new(%{query: query}), do: new(%{query: query, mutation: nil})
# FIXME: I think *schema* should be the first argument in this module.
def type_from_ast(nil, _), do: nil
def type_from_ast(%{kind: :NonNullType} = input_type_ast, schema) do
%GraphQL.Type.NonNull{ofType: type_from_ast(input_type_ast.type, schema)}
end
def type_from_ast(%{kind: :ListType} = input_type_ast, schema) do
%GraphQL.Type.List{ofType: type_from_ast(input_type_ast.type, schema)}
end
def type_from_ast(%{kind: :NamedType} = input_type_ast, schema) do
schema.type_cache |> Map.get(input_type_ast.name.value, :not_found)
end
defp do_reduce_types(query, mutation) do
%{}
|> reduce_types(query)
|> reduce_types(mutation)
|> reduce_types(Introspection.Schema.type)
end
defp reduce_types(typemap, %{ofType: list_type}) do
reduce_types(typemap, list_type)
end
defp reduce_types(typemap, %Interface{} = type) do
Map.put(typemap, type.name, type)
end
defp reduce_types(typemap, %Union{} = type) do
typemap = Map.put(typemap, type.name, type)
Enum.reduce(type.types, typemap, fn(fieldtype,map) ->
reduce_types(map, fieldtype)
end)
end
defp reduce_types(typemap, %ObjectType{} = type) do
if Map.has_key?(typemap, type.name) do
typemap
else
typemap = Map.put(typemap, type.name, type)
thunk_fields = CompositeType.get_fields(type)
typemap =
Enum.reduce(thunk_fields, typemap, fn {_, fieldtype}, typemap ->
_reduce_arguments(typemap, fieldtype)
|> reduce_types(fieldtype.type)
end)
Enum.reduce(type.interfaces, typemap, fn fieldtype, map ->
reduce_types(map, fieldtype)
end)
end
end
defp reduce_types(typemap, %Input{} = type) do
if Map.has_key?(typemap, type.name) do
typemap
else
typemap = Map.put(typemap, type.name, type)
thunk_fields = CompositeType.get_fields(type)
Enum.reduce(thunk_fields, typemap, fn {_, fieldtype}, typemap ->
_reduce_arguments(typemap, fieldtype)
|> reduce_types(fieldtype.type)
end)
end
end
defp reduce_types(typemap, %{name: name} = type), do: Map.put(typemap, name, type)
defp reduce_types(typemap, nil), do: typemap
defp reduce_types(typemap, type_module) when is_atom(type_module) do
reduce_types(typemap, apply(type_module, :type, []))
end
@spec operation_root_type(GraphQL.Schema.t, Nodes.operation_node) :: map | nil
def operation_root_type(schema, operation) do
Map.get(schema, operation.operation)
end
defp _reduce_arguments(typemap, %{args: args}) do
field_arg_types = Enum.map(args, fn{_,v} -> v.type end)
Enum.reduce(field_arg_types, typemap, fn(fieldtype,typemap) ->
reduce_types(typemap, fieldtype)
end)
end
defp _reduce_arguments(typemap, _), do: typemap
end
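`type_from_ast/2` walks wrapper nodes (`NonNullType`, `ListType`) recursively until it reaches a `NamedType` that can be looked up in the type cache. A toy version of that walk over plain maps (the AST shape and the `{:non_null, _}` / `{:list, _}` wrapper tuples here are simplified stand-ins for the real structs):

```elixir
defmodule AstWalk do
  # Unwrap NonNull/List wrappers recursively, then resolve the name
  # against the cache, falling back to :not_found like the real code.
  def resolve(%{kind: :NonNullType, type: inner}, cache),
    do: {:non_null, resolve(inner, cache)}

  def resolve(%{kind: :ListType, type: inner}, cache),
    do: {:list, resolve(inner, cache)}

  def resolve(%{kind: :NamedType, name: name}, cache),
    do: Map.get(cache, name, :not_found)
end

cache = %{"String" => :string}

ast = %{kind: :NonNullType, type: %{kind: :ListType, type: %{kind: :NamedType, name: "String"}}}

AstWalk.resolve(ast, cache)
# => {:non_null, {:list, :string}}
```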
# (end of lib/graphql/type/schema.ex)
defmodule Asciichart do
@moduledoc """
ASCII chart generation.
Ported to Elixir from [https://github.com/kroitor/asciichart](https://github.com/kroitor/asciichart)
"""
@doc ~S"""
Generates a chart for the specified list of numbers.
Optionally, the following settings can be provided:
* :offset - the number of characters to set as the chart's offset (left)
* :height - adjusts the height of the chart
* :padding - one or more characters to use for the label's padding (left)
## Examples
iex> Asciichart.plot([1, 2, 3, 3, 2, 1])
{:ok, "3.00 ┤ ╭─╮ \n2.00 ┤╭╯ ╰╮ \n1.00 ┼╯ ╰ \n "}
# should render as
3.00 ┤ ╭─╮
2.00 ┤╭╯ ╰╮
1.00 ┼╯ ╰
iex> Asciichart.plot([1, 2, 6, 6, 2, 1], height: 2)
{:ok, "6.00 ┼ \n3.50 ┤ ╭─╮ \n1.00 ┼─╯ ╰─ \n "}
# should render as
6.00 ┼
3.50 ┤ ╭─╮
1.00 ┼─╯ ╰─
iex> Asciichart.plot([1, 2, 5, 5, 4, 3, 2, 100, 0], height: 3, offset: 10, padding: "__")
{:ok, " 100.00 ┼ ╭╮ \n _50.00 ┤ ││ \n __0.00 ┼──────╯╰ \n "}
# should render as
100.00 ┼ ╭╮
_50.00 ┤ ││
__0.00 ┼──────╯╰
# Rendering of empty charts is not supported
iex> Asciichart.plot([])
{:error, "No data"}
"""
@spec plot([number], %{optional(atom) => any} | Keyword.t()) :: {:ok, String.t()} | {:error, String.t()}
def plot(series, cfg \\ %{}) do
case series do
[] ->
{:error, "No data"}
[_ | _] ->
minimum = Enum.min(series)
maximum = Enum.max(series)
interval = abs(maximum - minimum)
offset = cfg[:offset] || 3
height = if cfg[:height], do: cfg[:height] - 1, else: interval
padding = cfg[:padding] || " "
ratio = height / interval
min2 = Float.floor(minimum * ratio)
max2 = Float.ceil(maximum * ratio)
intmin2 = trunc(min2)
intmax2 = trunc(max2)
rows = abs(intmax2 - intmin2)
width = length(series) + offset
# empty space
result =
0..(rows + 1)
|> Enum.map(fn x ->
{x, 0..width |> Enum.map(fn y -> {y, " "} end) |> Enum.into(%{})}
end)
|> Enum.into(%{})
max_label_size =
(maximum / 1)
|> Float.round(cfg[:decimals] || 2)
|> :erlang.float_to_binary(decimals: cfg[:decimals] || 2)
|> String.length()
min_label_size =
(minimum / 1)
|> Float.round(cfg[:decimals] || 2)
|> :erlang.float_to_binary(decimals: cfg[:decimals] || 2)
|> String.length()
label_size = max(min_label_size, max_label_size)
# axis and labels
result =
intmin2..intmax2
|> Enum.reduce(result, fn y, map ->
label =
(maximum - (y - intmin2) * interval / rows)
|> Float.round(cfg[:decimals] || 2)
|> :erlang.float_to_binary(decimals: cfg[:decimals] || 2)
|> String.pad_leading(label_size, padding)
updated_map = put_in(map[y - intmin2][max(offset - String.length(label), 0)], label)
put_in(updated_map[y - intmin2][offset - 1], if(y == 0, do: "┼", else: "┤"))
end)
# first value
y0 = trunc(Enum.at(series, 0) * ratio - min2)
result = put_in(result[rows - y0][offset - 1], "┼")
# plot the line
result =
0..(length(series) - 2)
|> Enum.reduce(result, fn x, map ->
y0 = trunc(Enum.at(series, x + 0) * ratio - intmin2)
y1 = trunc(Enum.at(series, x + 1) * ratio - intmin2)
if y0 == y1 do
put_in(map[rows - y0][x + offset], "─")
else
updated_map =
put_in(
map[rows - y1][x + offset],
if(y0 > y1, do: "╰", else: "╭")
)
updated_map =
put_in(
updated_map[rows - y0][x + offset],
if(y0 > y1, do: "╮", else: "╯")
)
(min(y0, y1) + 1)..max(y0, y1)
|> Enum.drop(-1)
|> Enum.reduce(updated_map, fn y, map ->
put_in(map[rows - y][x + offset], "│")
end)
end
end)
# ensures cell order, regardless of map sizes
result =
result
|> Enum.sort_by(fn {k, _} -> k end)
|> Enum.map(fn {_, x} ->
x
|> Enum.sort_by(fn {k, _} -> k end)
|> Enum.map(fn {_, y} -> y end)
|> Enum.join()
end)
|> Enum.join("\n")
{:ok, result}
end
end
end
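Asciichart builds its canvas as a map of maps (`%{row => %{col => char}}`), which lets `put_in/2` address any cell by path before the final sort-and-join. The same idea in isolation (grid size and glyphs here are arbitrary):

```elixir
# A 2x3 canvas of spaces, addressable as grid[row][col].
grid =
  for row <- 0..1, into: %{} do
    {row, for(col <- 0..2, into: %{}, do: {col, " "})}
  end

# put_in/2 follows the Access path row -> col to set a single cell.
grid = put_in(grid[0][1], "─")
grid = put_in(grid[1][2], "╯")

# Render: sort rows and columns by key so map ordering never matters.
rendered =
  grid
  |> Enum.sort_by(fn {row, _} -> row end)
  |> Enum.map_join("\n", fn {_, cols} ->
    cols |> Enum.sort_by(fn {col, _} -> col end) |> Enum.map_join(&elem(&1, 1))
  end)

# rendered == " ─ \n  ╯"
```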
# (end of lib/asciichart.ex)
defmodule MazeServer.MazeAi do
@moduledoc """
This is an AI for a maze game, developed in Elixir.
It solves mazes with the following algorithms:
* Breadth-first search (BFS)
* Iterative deepening depth-first search (IDS)
* A* search (A Star)
All of these algorithms use graph search with **special** frontier rules or a heuristic function.
"""
@doc """
this function will return main test board.
"""
def init_board do
[
"1111111111111111111111",
"1001100000000000110001",
"1111111110000000000001",
"1111000100000111111101",
"1011100100000111111101",
"1011100111000111111101",
"1000000111000000000001",
"1000000111000111111111",
"1111110100000000000021",
"1000000001000000000001",
"1000000001000000000001",
"1000000111110011000001",
"1000000001000011000011",
"1000000001000011010011",
"1001000000000011010011",
"1000000000000011010011",
"1000000000000000010011",
"1111000000000000010011",
"1111011001110000010001",
"1100011001110000011111",
"1100011001110000011111",
"1111111111111111111111"
]
end
@doc """
`find_target` takes a board, searches it for the target point, and returns the `end_point`.
## Examples
iex> MazeServer.MazeAi.find_target(["111", "121", "111"])
{1, 1}
"""
def find_target(board) do
x =
board
|> Enum.filter(&String.contains?(&1, "2"))
|> Enum.at(0)
|> String.to_charlist()
|> Enum.find_index(&(&1 == ?2))
y =
board
|> Enum.find_index(&String.contains?(&1, "2"))
{x, y}
end
@doc """
`expander` checks the frontier queue and the explored set
to avoid redundant work: revisiting a node or visiting a wall node.
It returns a new, valid frontier.
## Examples
iex> MazeServer.MazeAi.expander([%{x: 1, y: 2}], %{x: 1, y: 2, state: "0"}, [%{x: 4, y: 3}], fn f, p -> List.insert_at(f, -1, p) end)
[%{x: 1, y: 2}]
iex> MazeServer.MazeAi.expander([%{x: 2, y: 4}], %{x: 1, y: 2, state: "0"}, [%{x: 1, y: 2}], fn f, p -> List.insert_at(f, -1, p) end)
[%{x: 2, y: 4}]
iex> MazeServer.MazeAi.expander([%{x: 1, y: 2}], %{x: 4, y: 2, state: "1"}, [%{x: 4, y: 3}], fn f, p -> List.insert_at(f, -1, p) end)
[%{x: 1, y: 2}]
iex> MazeServer.MazeAi.expander([%{x: 1, y: 2}], %{x: 4, y: 2, state: "0"}, [%{x: 4, y: 3}], fn f, p -> List.insert_at(f, -1, p) end)
[%{x: 1, y: 2}, %{state: "0", x: 4, y: 2}]
"""
def expander(frontier, point, explored_set, frontier_push) do
unless Enum.any?(frontier, &(&1.x == point.x and &1.y == point.y)) or
Enum.any?(explored_set, &(&1.x == point.x and &1.y == point.y)) or point.state == "1" do
frontier_push.(frontier, point)
else
frontier
end
end
@doc """
`expand` creates the child nodes of a point: its four orthogonal neighbours.
"""
def expand(
%{x: x, y: y} = point,
board,
frontier,
explored_set,
frontier_push,
g,
h,
end_point,
expanding
) do
points =
[
create_point(%{x: x + 1, y: y}, board, g, h, end_point, point),
create_point(%{x: x - 1, y: y}, board, g, h, end_point, point),
create_point(%{x: x, y: y + 1}, board, g, h, end_point, point),
create_point(%{x: x, y: y - 1}, board, g, h, end_point, point)
]
|> Enum.shuffle()
expanding.(frontier, Enum.at(points, 0), explored_set, frontier_push)
|> expanding.(Enum.at(points, 1), explored_set, frontier_push)
|> expanding.(Enum.at(points, 2), explored_set, frontier_push)
|> expanding.(Enum.at(points, 3), explored_set, frontier_push)
end
defp state_maker(board, %{x: x, y: y}) do
board
|> Enum.at(y)
|> String.at(x)
end
@doc """
Takes a location (x, y) on the board and makes a point node.
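For instance, with a zero heuristic the root node (no parent) can be built like this (the board and coordinates are illustrative):

```elixir
# Root node: path_cost is just the heuristic value, level starts at 0.
MazeServer.MazeAi.create_point(%{x: 1, y: 1}, ["111", "101", "111"], nil, fn _, _ -> 0 end, {1, 1}, nil)
# => %{x: 1, y: 1, state: "0", parent: nil, path_cost: 0, level: 0}
```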
"""
def create_point(%{x: x, y: y} = point, board, _g, h, end_point, nil) do
%{
x: x,
y: y,
state: state_maker(board, point),
parent: nil,
path_cost: 0 + h.({x, y}, end_point),
level: 0
}
end
def create_point(
%{x: x, y: y} = point,
board,
g,
h,
end_point,
%{path_cost: path_cost, level: level} = parent
) do
%{
x: x,
y: y,
state: state_maker(board, point),
parent: parent,
path_cost: g.(path_cost, {x, y}) + h.({x, y}, end_point),
level: level + 1
}
end
@doc """
`graph_search` is the basic function for searching graphs.
It uses `frontier_push` and `frontier_pop` to define the frontier rules.
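For example, a breadth-first search over the test board could be wired up as follows (a sketch: the start location, the zero heuristic, and the FIFO frontier functions are illustrative, not fixed by this module):

```elixir
board = MazeServer.MazeAi.init_board()
# Seed the frontier with a root point; the heuristic here always returns 0.
start = MazeServer.MazeAi.create_point(%{x: 1, y: 1}, board, nil, fn _, _ -> 0 end, {}, nil)
MazeServer.MazeAi.graph_search(
  [start],  # frontier
  [],       # explored set
  board,
  "2",      # goal cell
  "1",      # wall cell
  -1,       # no depth cutoff (levels are always >= 0)
  fn                                  # FIFO pop
    [] -> {nil, []}
    [point | rest] -> {point, rest}
  end,
  fn frontier, point -> List.insert_at(frontier, -1, point) end  # FIFO push
)
```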
"""
def graph_search(
frontier,
explored_set,
board,
goal,
wall,
limit,
frontier_pop,
frontier_push
)
when is_list(frontier) and is_list(explored_set) and
is_bitstring(goal) and is_bitstring(wall) and is_number(limit) and
is_function(frontier_pop, 1) and is_function(frontier_push, 2) do
graph_search(
frontier,
explored_set,
{},
board,
goal,
wall,
limit,
fn pc, _ -> pc + 1 end,
fn _, _ -> 0 end,
frontier_pop,
frontier_push
)
end
def graph_search(
frontier,
explored_set,
end_point,
board,
goal,
wall,
limit,
g,
h,
frontier_pop,
frontier_push,
expander \\ &expander/4
)
when is_list(frontier) and is_list(explored_set) and is_tuple(end_point) and
is_bitstring(goal) and is_bitstring(wall) and is_number(limit) and
is_function(g, 2) and is_function(h, 2) and
is_function(frontier_pop, 1) and is_function(frontier_push, 2) do
{point, new_frontier} = frontier_pop.(frontier)
new_explored_set = [point | explored_set]
cond do
point == nil ->
[:error]
point.state == goal ->
[:ok, point, %{explored_set: new_explored_set}]
point.level == limit ->
[
graph_search(
new_frontier,
new_explored_set,
end_point,
board,
goal,
wall,
limit,
g,
h,
frontier_pop,
frontier_push
),
:cutoff
]
true ->
expand(point, board, new_frontier, explored_set, frontier_push, g, h, end_point, expander)
|> graph_search(
new_explored_set,
end_point,
board,
goal,
wall,
limit,
g,
h,
frontier_pop,
frontier_push
)
end
end
end
defmodule Zaryn.Mining.ValidationContext do
@moduledoc """
Represent the transaction validation workflow state
"""
defstruct [
:transaction,
:previous_transaction,
:welcome_node,
:coordinator_node,
:cross_validation_nodes,
:validation_stamp,
unspent_outputs: [],
cross_validation_stamps: [],
cross_validation_nodes_confirmation: <<>>,
validation_nodes_view: <<>>,
chain_storage_nodes: [],
chain_storage_nodes_view: <<>>,
beacon_storage_nodes: [],
beacon_storage_nodes_view: <<>>,
sub_replication_tree: %{
chain: <<>>,
beacon: <<>>,
IO: <<>>
},
full_replication_tree: %{
chain: [],
beacon: [],
IO: []
},
io_storage_nodes: [],
previous_storage_nodes: [],
replication_nodes_confirmation: %{
chain: <<>>,
beacon: <<>>,
IO: <<>>
},
valid_pending_transaction?: false
]
alias Zaryn.Contracts
alias Zaryn.Crypto
alias Zaryn.Election
alias Zaryn.Mining.Fee
alias Zaryn.Mining.ProofOfWork
alias Zaryn.OracleChain
alias Zaryn.P2P
alias Zaryn.P2P.Node
alias Zaryn.Replication
alias Zaryn.TransactionChain
alias Zaryn.TransactionChain.Transaction
alias Zaryn.TransactionChain.Transaction.CrossValidationStamp
alias Zaryn.TransactionChain.Transaction.ValidationStamp
alias Zaryn.TransactionChain.Transaction.ValidationStamp.LedgerOperations
alias Zaryn.TransactionChain.Transaction.ValidationStamp.LedgerOperations.UnspentOutput
alias Zaryn.TransactionChain.TransactionData
alias Zaryn.Utils
@type t :: %__MODULE__{
transaction: Transaction.t(),
previous_transaction: nil | Transaction.t(),
unspent_outputs: list(UnspentOutput.t()),
welcome_node: Node.t(),
coordinator_node: Node.t(),
cross_validation_nodes: list(Node.t()),
previous_storage_nodes: list(Node.t()),
chain_storage_nodes: list(Node.t()),
beacon_storage_nodes: list(Node.t()),
io_storage_nodes: list(Node.t()),
cross_validation_nodes_confirmation: bitstring(),
validation_stamp: nil | ValidationStamp.t(),
full_replication_tree: %{
chain: list(bitstring()),
beacon: list(bitstring()),
IO: list(bitstring())
},
sub_replication_tree: %{
chain: bitstring(),
beacon: bitstring(),
IO: bitstring()
},
cross_validation_stamps: list(CrossValidationStamp.t()),
replication_nodes_confirmation: %{
chain: bitstring(),
beacon: bitstring(),
IO: bitstring()
},
validation_nodes_view: bitstring(),
chain_storage_nodes_view: bitstring(),
beacon_storage_nodes_view: bitstring(),
valid_pending_transaction?: boolean()
}
@doc """
Create a new mining context.
It extracts the coordinator and cross validation nodes from the validation nodes list.
It computes the P2P views based on the availability of the cross validation nodes and the beacon and chain storage nodes.
## Examples
iex> ValidationContext.new(
...> transaction: %Transaction{},
...> welcome_node: %Node{last_public_key: "key1", availability_history: <<1::1>>},
...> validation_nodes: [%Node{last_public_key: "key2", availability_history: <<1::1>>}, %Node{last_public_key: "key3", availability_history: <<1::1>>}],
...> chain_storage_nodes: [%Node{last_public_key: "key4", availability_history: <<1::1>>}, %Node{last_public_key: "key5", availability_history: <<1::1>>}],
...> beacon_storage_nodes: [%Node{last_public_key: "key6", availability_history: <<1::1>>}, %Node{last_public_key: "key7", availability_history: <<1::1>>}])
%ValidationContext{
transaction: %Transaction{},
welcome_node: %Node{last_public_key: "key1", availability_history: <<1::1>>},
coordinator_node: %Node{last_public_key: "key2", availability_history: <<1::1>>},
cross_validation_nodes: [%Node{last_public_key: "key3", availability_history: <<1::1>>}],
cross_validation_nodes_confirmation: <<0::1>>,
chain_storage_nodes: [%Node{last_public_key: "key4", availability_history: <<1::1>>}, %Node{last_public_key: "key5", availability_history: <<1::1>>}],
beacon_storage_nodes: [%Node{last_public_key: "key6", availability_history: <<1::1>>}, %Node{last_public_key: "key7", availability_history: <<1::1>>}]
}
"""
@spec new(opts :: Keyword.t()) :: t()
def new(attrs \\ []) when is_list(attrs) do
{coordinator_node, cross_validation_nodes} =
case Keyword.get(attrs, :validation_nodes) do
[coordinator_node | []] ->
{coordinator_node, [coordinator_node]}
[coordinator_node | cross_validation_nodes] ->
{coordinator_node, cross_validation_nodes}
end
nb_cross_validations_nodes = length(cross_validation_nodes)
tx = Keyword.get(attrs, :transaction)
welcome_node = Keyword.get(attrs, :welcome_node)
chain_storage_nodes = Keyword.get(attrs, :chain_storage_nodes)
beacon_storage_nodes = Keyword.get(attrs, :beacon_storage_nodes)
%__MODULE__{
transaction: tx,
welcome_node: welcome_node,
coordinator_node: coordinator_node,
cross_validation_nodes: cross_validation_nodes,
cross_validation_nodes_confirmation: <<0::size(nb_cross_validations_nodes)>>,
chain_storage_nodes: chain_storage_nodes,
beacon_storage_nodes: beacon_storage_nodes
}
end
@doc """
Set the pending transaction validation flag
"""
@spec set_pending_transaction_validation(t(), boolean()) :: t()
def set_pending_transaction_validation(context = %__MODULE__{}, valid?)
when is_boolean(valid?) do
%{context | valid_pending_transaction?: valid?}
end
@doc """
Determine if enough confirmations have been retrieved from the cross validation nodes
## Examples
iex> %ValidationContext{
...> cross_validation_nodes_confirmation: <<1::1, 0::1, 1::1>>,
...> cross_validation_nodes: [
...> %Node{first_public_key: "key1"},
...> %Node{first_public_key: "key2"},
...> %Node{first_public_key: "key3"}
...> ]
...> }
...> |> ValidationContext.enough_confirmations?()
false
iex> %ValidationContext{
...> cross_validation_nodes_confirmation: <<1::1, 1::1, 1::1>>,
...> cross_validation_nodes: [
...> %Node{first_public_key: "key1"},
...> %Node{first_public_key: "key2"},
...> %Node{first_public_key: "key3"}
...> ]
...> }
...> |> ValidationContext.enough_confirmations?()
true
"""
@spec enough_confirmations?(t()) :: boolean()
def enough_confirmations?(%__MODULE__{
cross_validation_nodes: cross_validation_nodes,
cross_validation_nodes_confirmation: confirmed_nodes
}) do
Enum.reduce(cross_validation_nodes, <<>>, fn _, acc -> <<1::1, acc::bitstring>> end) ==
confirmed_nodes
end
@doc """
Confirm a cross validation node by setting a bit to 1 in the confirmation list
## Examples
iex> %ValidationContext{
...> cross_validation_nodes: [
...> %Node{last_public_key: "key2"},
...> %Node{last_public_key: "key3"}
...> ],
...> cross_validation_nodes_confirmation: <<0::1, 0::1>>
...> }
...> |> ValidationContext.confirm_validation_node("key3")
%ValidationContext{
cross_validation_nodes: [
%Node{last_public_key: "key2"},
%Node{last_public_key: "key3"}
],
cross_validation_nodes_confirmation: <<0::1, 1::1>>
}
"""
def confirm_validation_node(
context = %__MODULE__{cross_validation_nodes: cross_validation_nodes},
node_public_key
) do
index = Enum.find_index(cross_validation_nodes, &(&1.last_public_key == node_public_key))
Map.update!(
context,
:cross_validation_nodes_confirmation,
&Utils.set_bitstring_bit(&1, index)
)
end
@doc """
Add the validation stamp to the mining context
"""
@spec add_validation_stamp(t(), ValidationStamp.t()) :: t()
def add_validation_stamp(context = %__MODULE__{}, stamp = %ValidationStamp{}) do
%{context | validation_stamp: stamp} |> add_io_storage_nodes()
end
@doc """
Determines if the expected cross validation stamps have been received
## Examples
iex> %ValidationContext{
...> cross_validation_stamps: [
...> %CrossValidationStamp{},
...> %CrossValidationStamp{},
...> %CrossValidationStamp{},
...> ],
...> cross_validation_nodes: [
...> %Node{},
...> %Node{},
...> %Node{},
...> %Node{},
...> ]
...> }
...> |> ValidationContext.enough_cross_validation_stamps?()
false
"""
@spec enough_cross_validation_stamps?(t()) :: boolean()
def enough_cross_validation_stamps?(%__MODULE__{
cross_validation_nodes: cross_validation_nodes,
cross_validation_stamps: stamps
}) do
length(cross_validation_nodes) == length(stamps)
end
@doc """
Determines if the atomic commitment has been reached from the cross validation stamps.
"""
@spec atomic_commitment?(t()) :: boolean()
def atomic_commitment?(%__MODULE__{transaction: tx, cross_validation_stamps: stamps}) do
%{tx | cross_validation_stamps: stamps}
|> Transaction.atomic_commitment?()
end
@doc """
Add a cross validation stamp if it does not already exist
"""
@spec add_cross_validation_stamp(t(), CrossValidationStamp.t()) :: t()
def add_cross_validation_stamp(
context = %__MODULE__{
validation_stamp: validation_stamp
},
stamp = %CrossValidationStamp{
node_public_key: from
}
) do
cond do
!cross_validation_node?(context, from) ->
context
!CrossValidationStamp.valid_signature?(stamp, validation_stamp) ->
context
cross_validation_stamp_exists?(context, from) ->
context
true ->
Map.update!(context, :cross_validation_stamps, &[stamp | &1])
end
end
defp cross_validation_stamp_exists?(
%__MODULE__{cross_validation_stamps: stamps},
node_public_key
)
when is_binary(node_public_key) do
Enum.any?(stamps, &(&1.node_public_key == node_public_key))
end
@doc """
Determines if a node is a cross validation node
## Examples
iex> %ValidationContext{
...> coordinator_node: %Node{last_public_key: "key1"},
...> cross_validation_nodes: [
...> %Node{last_public_key: "key2"},
...> %Node{last_public_key: "key3"},
...> %Node{last_public_key: "key4"},
...> ]
...> }
...> |> ValidationContext.cross_validation_node?("key3")
true
iex> %ValidationContext{
...> coordinator_node: %Node{last_public_key: "key1"},
...> cross_validation_nodes: [
...> %Node{last_public_key: "key2"},
...> %Node{last_public_key: "key3"},
...> %Node{last_public_key: "key4"},
...> ]
...> }
...> |> ValidationContext.cross_validation_node?("key1")
false
"""
@spec cross_validation_node?(t(), Crypto.key()) :: boolean()
def cross_validation_node?(
%__MODULE__{cross_validation_nodes: cross_validation_nodes},
node_public_key
)
when is_binary(node_public_key) do
Enum.any?(cross_validation_nodes, &(&1.last_public_key == node_public_key))
end
@doc """
Add the replication tree and initialize the replication nodes confirmation list
## Examples
iex> %ValidationContext{
...> full_replication_tree: %{ chain: [<<0::1, 1::1>>, <<1::1, 0::1>>], beacon: [<<0::1, 1::1>>, <<1::1, 0::1>>], IO: [<<0::1, 1::1>>, <<1::1, 0::1>>] },
...> sub_replication_tree: %{ chain: <<1::1, 0::1>>, beacon: <<1::1, 0::1>>, IO: <<1::1, 0::1>> },
...> replication_nodes_confirmation: %{ chain: <<0::1, 0::1>>, beacon: <<0::1, 0::1>>, IO: <<0::1, 0::1>> }
...> } = %ValidationContext{
...> coordinator_node: %Node{last_public_key: "key1"},
...> cross_validation_nodes: [%Node{last_public_key: "key2"}],
...> }
...> |> ValidationContext.add_replication_tree(%{ chain: [<<0::1, 1::1>>, <<1::1, 0::1>>], beacon: [<<0::1, 1::1>>, <<1::1, 0::1>>], IO: [<<0::1, 1::1>>, <<1::1, 0::1>>] }, "key2")
"""
@spec add_replication_tree(
t(),
replication_trees :: %{
chain: list(bitstring()),
beacon: list(bitstring()),
IO: list(bitstring())
},
node_public_key :: Crypto.key()
) :: t()
def add_replication_tree(
context = %__MODULE__{
coordinator_node: coordinator_node,
cross_validation_nodes: cross_validation_nodes
},
tree = %{chain: chain_tree, beacon: beacon_tree, IO: io_tree},
node_public_key
)
when is_list(chain_tree) and is_list(beacon_tree) and is_list(io_tree) and
is_binary(node_public_key) do
validation_nodes = [coordinator_node | cross_validation_nodes]
validator_index = Enum.find_index(validation_nodes, &(&1.last_public_key == node_public_key))
sub_chain_tree = Enum.at(chain_tree, validator_index)
sub_beacon_tree = Enum.at(beacon_tree, validator_index)
sub_io_tree = Enum.at(io_tree, validator_index)
sub_tree_size = bit_size(sub_chain_tree)
%{
context
| sub_replication_tree: %{
chain: sub_chain_tree,
beacon: sub_beacon_tree,
IO: sub_io_tree
},
full_replication_tree: tree,
replication_nodes_confirmation: %{
chain: <<0::size(sub_tree_size)>>,
beacon: <<0::size(sub_tree_size)>>,
IO: <<0::size(sub_tree_size)>>
}
}
end
@doc """
Get the entire list of storage nodes (transaction chain, beacon chain, I/O)
## Examples
iex> %ValidationContext{
...> chain_storage_nodes: [%Node{first_public_key: "key1"}, %Node{first_public_key: "key2"}],
...> beacon_storage_nodes: [%Node{first_public_key: "key3"}, %Node{first_public_key: "key1"}],
...> io_storage_nodes: [%Node{first_public_key: "key4"}, %Node{first_public_key: "key5"}]
...> }
...> |> ValidationContext.get_storage_nodes()
%{
%Node{first_public_key: "key1"} => [:beacon, :chain],
%Node{first_public_key: "key2"} => [:chain],
%Node{first_public_key: "key3"} => [:beacon],
%Node{first_public_key: "key4"} => [:IO],
%Node{first_public_key: "key5"} => [:IO]
}
"""
@spec get_storage_nodes(t()) :: %{Node.t() => list(atom())}
def get_storage_nodes(%__MODULE__{
chain_storage_nodes: chain_storage_nodes,
beacon_storage_nodes: beacon_storage_nodes,
io_storage_nodes: io_storage_nodes
}) do
[{:chain, chain_storage_nodes}, {:beacon, beacon_storage_nodes}, {:IO, io_storage_nodes}]
|> Enum.reduce(%{}, fn {role, nodes}, acc ->
Enum.reduce(nodes, acc, fn node, acc ->
Map.update(acc, node, [role], &[role | &1])
end)
end)
end
@doc """
Get the replication nodes from the replication trees for the current sub-tree
## Examples
iex> %ValidationContext{
...> chain_storage_nodes: [
...> %Node{last_public_key: "key5"},
...> %Node{last_public_key: "key7"}
...> ],
...> beacon_storage_nodes: [
...> %Node{last_public_key: "key10"},
...> %Node{last_public_key: "key11"}
...> ],
...> io_storage_nodes: [
...> %Node{last_public_key: "key12"},
...> %Node{last_public_key: "key5"}
...> ],
...> sub_replication_tree: %{
...> chain: <<1::1, 0::1>>,
...> beacon: <<1::1, 0::1>>,
...> IO: <<0::1, 1::1>>
...> }
...> }
...> |> ValidationContext.get_replication_nodes()
%{
%Node{last_public_key: "key10"} => [:beacon],
%Node{last_public_key: "key5"} => [:chain, :IO]
}
"""
@spec get_replication_nodes(t()) :: %{Node.t() => list(atom())}
def get_replication_nodes(%__MODULE__{
sub_replication_tree: %{
chain: chain_tree,
beacon: beacon_tree,
IO: io_tree
},
chain_storage_nodes: chain_storage_nodes,
beacon_storage_nodes: beacon_storage_nodes,
io_storage_nodes: io_storage_nodes
}) do
chain_storage_node_indexes = get_storage_nodes_tree_indexes(chain_tree)
beacon_storage_node_indexes = get_storage_nodes_tree_indexes(beacon_tree)
io_storage_node_indexes = get_storage_nodes_tree_indexes(io_tree)
%{
chain: Enum.map(chain_storage_node_indexes, &Enum.at(chain_storage_nodes, &1)),
beacon: Enum.map(beacon_storage_node_indexes, &Enum.at(beacon_storage_nodes, &1)),
IO: Enum.map(io_storage_node_indexes, &Enum.at(io_storage_nodes, &1))
}
|> Enum.reduce(%{}, fn {role, nodes}, acc ->
Enum.reduce(nodes, acc, fn node, acc ->
Map.update(acc, node, [role], &[role | &1])
end)
end)
end
defp get_storage_nodes_tree_indexes(tree) do
tree
|> Utils.bitstring_to_integer_list()
|> Enum.with_index()
|> Enum.filter(&match?({1, _}, &1))
|> Enum.map(&elem(&1, 1))
end
@doc """
Get the transaction validated including the validation stamp and cross validation stamps
"""
@spec get_validated_transaction(t()) :: Transaction.t()
def get_validated_transaction(%__MODULE__{
transaction: transaction,
validation_stamp: validation_stamp,
cross_validation_stamps: cross_validation_stamps
}) do
%{
transaction
| validation_stamp: validation_stamp,
cross_validation_stamps: cross_validation_stamps
}
end
@doc """
Acknowledge the replication confirmation from the given storage node for the given tree types
## Examples
iex> %ValidationContext{replication_nodes_confirmation: %{
...> chain: <<0::1, 0::1, 1::1>>,
...> IO: <<0::1, 0::1, 0::1>>,
...> beacon: <<0::1, 0::1, 0::1>>
...> }} = %ValidationContext{
...> replication_nodes_confirmation: %{ chain: <<0::1, 0::1, 0::1>>, beacon: <<0::1, 0::1, 0::1>>, IO: <<0::1, 0::1, 0::1>> },
...> sub_replication_tree: %{ chain: <<0::1, 0::1, 0::1>>, IO: <<0::1, 0::1, 0::1>>, beacon: <<0::1, 0::1, 0::1>>},
...> coordinator_node: %Node{last_public_key: "key1"},
...> cross_validation_nodes: [%Node{last_public_key: "key2"}, %Node{last_public_key: "key3"}],
...> chain_storage_nodes: [
...> %Node{first_public_key: "key10", last_public_key: "key10"},
...> %Node{first_public_key: "key11", last_public_key: "key11"},
...> %Node{first_public_key: "key12", last_public_key: "key12"}
...> ]
...> }
...> |> ValidationContext.confirm_replication("key12", [:chain])
"""
@spec confirm_replication(
t(),
storage_node_key :: Crypto.key(),
tree_types :: list(:chain | :beacon | :IO)
) :: t()
def confirm_replication(
context = %__MODULE__{
chain_storage_nodes: chain_storage_nodes,
beacon_storage_nodes: beacon_storage_nodes,
io_storage_nodes: io_storage_nodes
},
from,
tree_types
) do
Enum.reduce(tree_types, context, fn
:chain, acc ->
index = Enum.find_index(chain_storage_nodes, &(&1.last_public_key == from))
Map.update!(acc, :replication_nodes_confirmation, fn nodes ->
Map.update!(nodes, :chain, &Utils.set_bitstring_bit(&1, index))
end)
:beacon, acc ->
index = Enum.find_index(beacon_storage_nodes, &(&1.last_public_key == from))
Map.update!(acc, :replication_nodes_confirmation, fn nodes ->
Map.update!(nodes, :beacon, &Utils.set_bitstring_bit(&1, index))
end)
:IO, acc ->
index = Enum.find_index(io_storage_nodes, &(&1.last_public_key == from))
Map.update!(acc, :replication_nodes_confirmation, fn nodes ->
Map.update!(nodes, :IO, &Utils.set_bitstring_bit(&1, index))
end)
end)
end
@doc """
Determine if the required number of replication node confirmations has been reached
## Examples
iex> %ValidationContext{
...> replication_nodes_confirmation: %{
...> chain: <<0::1, 1::1, 0::1>>,
...> IO: <<0::1, 0::1, 0::1>>,
...> beacon: <<0::1, 1::1, 0::1>>
...> },
...> sub_replication_tree: %{
...> chain: <<0::1, 1::1, 0::1>>,
...> IO: <<0::1, 1::1, 0::1>>,
...> beacon: <<0::1, 1::1, 0::1>>
...> }
...> }
...> |> ValidationContext.enough_replication_confirmations?()
false
iex> %ValidationContext{
...> replication_nodes_confirmation: %{
...> chain: <<0::1, 1::1, 0::1>>,
...> IO: <<0::1, 1::1, 0::1>>,
...> beacon: <<0::1, 1::1, 0::1>>
...> },
...> sub_replication_tree: %{
...> chain: <<0::1, 1::1, 0::1>>,
...> IO: <<0::1, 1::1, 0::1>>,
...> beacon: <<0::1, 1::1, 0::1>>
...> }
...> }
...> |> ValidationContext.enough_replication_confirmations?()
true
"""
@spec enough_replication_confirmations?(t()) :: boolean()
def enough_replication_confirmations?(%__MODULE__{
replication_nodes_confirmation: replication_nodes_confirmation,
sub_replication_tree: replication_tree
}) do
Enum.all?(replication_nodes_confirmation, fn {tree, confirmations} ->
Utils.count_bitstring_bits(confirmations) ==
Utils.count_bitstring_bits(Map.get(replication_tree, tree))
end)
end
@doc """
Initialize the transaction mining context
"""
@spec put_transaction_context(
t(),
Transaction.t(),
list(UnspentOutput.t()),
list(Node.t()),
bitstring(),
bitstring(),
bitstring()
) :: t()
def put_transaction_context(
context = %__MODULE__{},
previous_transaction,
unspent_outputs,
previous_storage_nodes,
chain_storage_nodes_view,
beacon_storage_nodes_view,
validation_nodes_view
) do
context
|> Map.put(:previous_transaction, previous_transaction)
|> Map.put(:unspent_outputs, unspent_outputs)
|> Map.put(:previous_storage_nodes, previous_storage_nodes)
|> Map.put(:chain_storage_nodes_view, chain_storage_nodes_view)
|> Map.put(:beacon_storage_nodes_view, beacon_storage_nodes_view)
|> Map.put(:validation_nodes_view, validation_nodes_view)
end
@doc """
Aggregate the transaction mining context with the incoming context retrieved from the validation nodes
## Examples
iex> %ValidationContext{
...> previous_storage_nodes: [%Node{first_public_key: "key1"}],
...> chain_storage_nodes_view: <<1::1, 1::1, 1::1>>,
...> beacon_storage_nodes_view: <<1::1, 0::1, 1::1>>,
...> validation_nodes_view: <<1::1, 1::1, 0::1>>,
...> cross_validation_nodes: [%Node{last_public_key: "key3"}, %Node{last_public_key: "key5"}],
...> cross_validation_nodes_confirmation: <<0::1, 0::1>>
...> }
...> |> ValidationContext.aggregate_mining_context(
...> [%Node{first_public_key: "key2"}],
...> <<1::1, 0::1, 1::1>>,
...> <<1::1, 1::1, 1::1>>,
...> <<1::1, 1::1, 1::1>>,
...> "key5"
...> )
%ValidationContext{
previous_storage_nodes: [
%Node{first_public_key: "key1"},
%Node{first_public_key: "key2"}
],
chain_storage_nodes_view: <<1::1, 1::1, 1::1>>,
beacon_storage_nodes_view: <<1::1, 1::1, 1::1>>,
validation_nodes_view: <<1::1, 1::1, 1::1>>,
cross_validation_nodes_confirmation: <<0::1, 1::1>>,
cross_validation_nodes: [%Node{last_public_key: "key3"}, %Node{last_public_key: "key5"}]
}
"""
@spec aggregate_mining_context(
t(),
list(Node.t()),
bitstring(),
bitstring(),
bitstring(),
Crypto.key()
) :: t()
def aggregate_mining_context(
context = %__MODULE__{},
previous_storage_nodes,
validation_nodes_view,
chain_storage_nodes_view,
beacon_storage_nodes_view,
from
)
when is_list(previous_storage_nodes) and is_bitstring(validation_nodes_view) and
is_bitstring(chain_storage_nodes_view) and
is_bitstring(beacon_storage_nodes_view) do
if cross_validation_node?(context, from) do
context
|> confirm_validation_node(from)
|> aggregate_p2p_views(
validation_nodes_view,
chain_storage_nodes_view,
beacon_storage_nodes_view
)
|> aggregate_previous_storage_nodes(previous_storage_nodes)
else
context
end
end
defp aggregate_p2p_views(
context = %__MODULE__{
validation_nodes_view: validation_nodes_view1,
chain_storage_nodes_view: chain_storage_nodes_view1,
beacon_storage_nodes_view: beacon_storage_nodes_view1
},
validation_nodes_view2,
chain_storage_nodes_view2,
beacon_storage_nodes_view2
)
when is_bitstring(validation_nodes_view2) and is_bitstring(chain_storage_nodes_view2) and
is_bitstring(beacon_storage_nodes_view2) do
%{
context
| validation_nodes_view:
Utils.aggregate_bitstring(validation_nodes_view1, validation_nodes_view2),
chain_storage_nodes_view:
Utils.aggregate_bitstring(chain_storage_nodes_view1, chain_storage_nodes_view2),
beacon_storage_nodes_view:
Utils.aggregate_bitstring(beacon_storage_nodes_view1, beacon_storage_nodes_view2)
}
end
defp aggregate_previous_storage_nodes(
context = %__MODULE__{previous_storage_nodes: previous_nodes},
received_previous_storage_nodes
)
when is_list(received_previous_storage_nodes) do
previous_storage_nodes = P2P.distinct_nodes([previous_nodes, received_previous_storage_nodes])
%{context | previous_storage_nodes: previous_storage_nodes}
end
@doc """
Return the validation nodes
"""
@spec get_validation_nodes(t()) :: list(Node.t())
def get_validation_nodes(%__MODULE__{
coordinator_node: coordinator_node,
cross_validation_nodes: cross_validation_nodes
}) do
[coordinator_node | cross_validation_nodes] |> P2P.distinct_nodes()
end
@doc """
Create a validation stamp based on the validation context and add it to the context
"""
@spec create_validation_stamp(t()) :: t()
def create_validation_stamp(
context = %__MODULE__{
transaction: tx,
previous_transaction: prev_tx,
unspent_outputs: unspent_outputs,
coordinator_node: coordinator_node,
cross_validation_nodes: cross_validation_nodes,
previous_storage_nodes: previous_storage_nodes,
valid_pending_transaction?: valid_pending_transaction?
}
) do
initial_error = if valid_pending_transaction?, do: nil, else: :pending_transaction
validation_stamp =
%ValidationStamp{
timestamp: DateTime.utc_now(),
proof_of_work: do_proof_of_work(tx),
proof_of_integrity: TransactionChain.proof_of_integrity([tx, prev_tx]),
proof_of_election:
Election.validation_nodes_election_seed_sorting(tx, DateTime.utc_now()),
ledger_operations:
%LedgerOperations{
transaction_movements:
tx
|> Transaction.get_movements()
|> LedgerOperations.resolve_transaction_movements(DateTime.utc_now()),
fee:
Fee.calculate(
tx,
OracleChain.get_zaryn_price(DateTime.utc_now()) |> Keyword.fetch!(:usd)
)
}
|> LedgerOperations.from_transaction(tx)
|> LedgerOperations.distribute_rewards(
coordinator_node,
cross_validation_nodes,
previous_storage_nodes
)
|> LedgerOperations.consume_inputs(tx.address, unspent_outputs),
recipients: resolve_transaction_recipients(tx),
errors: [initial_error, chain_error(prev_tx, tx)] |> Enum.filter(& &1)
}
|> ValidationStamp.sign()
add_io_storage_nodes(%{context | validation_stamp: validation_stamp})
end
defp chain_error(nil, _tx = %Transaction{}), do: nil
defp chain_error(
prev_tx = %Transaction{data: %TransactionData{code: prev_code}},
tx = %Transaction{}
)
when prev_code != "" do
unless Contracts.accept_new_contract?(prev_tx, tx) do
:contract_validation
end
end
defp chain_error(_, _), do: nil
defp resolve_transaction_recipients(%Transaction{
data: %TransactionData{recipients: recipients}
}) do
recipients
|> Task.async_stream(&TransactionChain.resolve_last_address(&1, DateTime.utc_now()),
on_timeout: :kill_task
)
|> Enum.filter(&match?({:ok, _}, &1))
|> Enum.into([], fn {:ok, res} -> res end)
end
defp add_io_storage_nodes(
context = %__MODULE__{transaction: tx, validation_stamp: validation_stamp}
) do
io_storage_nodes = Replication.io_storage_nodes(%{tx | validation_stamp: validation_stamp})
%{context | io_storage_nodes: io_storage_nodes}
end
@doc """
Create a replication tree based on the validation context (storage nodes and validation nodes)
and store it as a bitstring list.
## Examples
iex> %ValidationContext{
...> coordinator_node: %Node{first_public_key: "key1", network_patch: "AAA", last_public_key: "key1"},
...> cross_validation_nodes: [%Node{first_public_key: "key2", network_patch: "FAC", last_public_key: "key2"}],
...> chain_storage_nodes: [%Node{first_public_key: "key3", network_patch: "BBB"}, %Node{first_public_key: "key4", network_patch: "EFC"}]
...> }
...> |> ValidationContext.create_replication_tree()
%ValidationContext{
sub_replication_tree: %{
chain: <<1::1, 0::1>>,
beacon: <<>>,
IO: <<>>
},
full_replication_tree: %{
IO: [],
beacon: [],
chain: [<<1::1, 0::1>>, <<0::1, 1::1>>]
},
replication_nodes_confirmation: %{
IO: <<0::1, 0::1>>,
beacon: <<0::1, 0::1>>,
chain: <<0::1, 0::1>>
},
coordinator_node: %Node{first_public_key: "key1", network_patch: "AAA", last_public_key: "key1"},
cross_validation_nodes: [%Node{first_public_key: "key2", network_patch: "FAC", last_public_key: "key2"}],
chain_storage_nodes: [%Node{first_public_key: "key3", network_patch: "BBB"}, %Node{first_public_key: "key4", network_patch: "EFC"}]
}
"""
@spec create_replication_tree(t()) :: t()
def create_replication_tree(
context = %__MODULE__{
chain_storage_nodes: chain_storage_nodes,
beacon_storage_nodes: beacon_storage_nodes,
io_storage_nodes: io_storage_nodes
}
) do
validation_nodes = get_validation_nodes(context)
chain_replication_tree = Replication.generate_tree(validation_nodes, chain_storage_nodes)
beacon_replication_tree = Replication.generate_tree(validation_nodes, beacon_storage_nodes)
io_replication_tree = Replication.generate_tree(validation_nodes, io_storage_nodes)
tree = %{
chain:
Enum.map(chain_replication_tree, fn {_, list} ->
P2P.bitstring_from_node_subsets(chain_storage_nodes, list)
end),
beacon:
Enum.map(beacon_replication_tree, fn {_, list} ->
P2P.bitstring_from_node_subsets(beacon_storage_nodes, list)
end),
IO:
Enum.map(io_replication_tree, fn {_, list} ->
P2P.bitstring_from_node_subsets(io_storage_nodes, list)
end)
}
sub_tree = %{
chain: tree |> Map.get(:chain) |> Enum.at(0, <<>>),
beacon: tree |> Map.get(:beacon) |> Enum.at(0, <<>>),
IO: tree |> Map.get(:IO) |> Enum.at(0, <<>>)
}
sub_tree_size = sub_tree |> Map.get(:chain) |> bit_size()
%{
context
| sub_replication_tree: sub_tree,
full_replication_tree: tree,
replication_nodes_confirmation: %{
chain: <<0::size(sub_tree_size)>>,
beacon: <<0::size(sub_tree_size)>>,
IO: <<0::size(sub_tree_size)>>
}
}
end
defp do_proof_of_work(tx) do
result =
tx
|> ProofOfWork.list_origin_public_keys_candidates()
|> ProofOfWork.find_transaction_origin_public_key(tx)
case result do
{:ok, pow} ->
pow
{:error, :not_found} ->
""
end
end
@doc """
Cross validate the validation stamp using the validation context as reference and
listing the potential inconsistencies.
The cross validation stamp is then signed and stored in the context
"""
@spec cross_validate(t()) :: t()
def cross_validate(context = %__MODULE__{validation_stamp: validation_stamp}) do
inconsistencies = validation_stamp_inconsistencies(context)
cross_stamp =
%CrossValidationStamp{inconsistencies: inconsistencies}
|> CrossValidationStamp.sign(validation_stamp)
%{context | cross_validation_stamps: [cross_stamp]}
end
defp validation_stamp_inconsistencies(context = %__MODULE__{validation_stamp: stamp}) do
subsets_verifications = [
timestamp: fn -> valid_timestamp(stamp, context) end,
signature: fn -> valid_stamp_signature(stamp, context) end,
proof_of_work: fn -> valid_stamp_proof_of_work?(stamp, context) end,
proof_of_integrity: fn -> valid_stamp_proof_of_integrity?(stamp, context) end,
proof_of_election: fn -> valid_stamp_proof_of_election?(stamp, context) end,
transaction_fee: fn -> valid_stamp_fee?(stamp, context) end,
transaction_movements: fn -> valid_stamp_transaction_movements?(stamp, context) end,
recipients: fn -> valid_stamp_recipients?(stamp, context) end,
node_movements: fn -> valid_stamp_node_movements?(stamp, context) end,
unspent_outputs: fn -> valid_stamp_unspent_outputs?(stamp, context) end,
errors: fn -> valid_stamp_errors?(stamp, context) end
]
subsets_verifications
|> Enum.map(&{elem(&1, 0), elem(&1, 1).()})
|> Enum.filter(&match?({_, false}, &1))
|> Enum.map(&elem(&1, 0))
end
defp valid_timestamp(%ValidationStamp{timestamp: timestamp}, _) do
diff = DateTime.diff(timestamp, DateTime.utc_now())
diff <= 0 and diff > -10
end
defp valid_stamp_signature(stamp = %ValidationStamp{}, %__MODULE__{
coordinator_node: %Node{last_public_key: coordinator_node_public_key}
}) do
ValidationStamp.valid_signature?(stamp, coordinator_node_public_key)
end
defp valid_stamp_proof_of_work?(%ValidationStamp{proof_of_work: pow}, %__MODULE__{
transaction: tx
}) do
case pow do
"" ->
do_proof_of_work(tx) == ""
_ ->
Transaction.verify_origin_signature?(tx, pow)
end
end
defp valid_stamp_proof_of_integrity?(%ValidationStamp{proof_of_integrity: poi}, %__MODULE__{
transaction: tx,
previous_transaction: prev_tx
}),
do: TransactionChain.proof_of_integrity([tx, prev_tx]) == poi
defp valid_stamp_proof_of_election?(
%ValidationStamp{proof_of_election: poe, timestamp: timestamp},
%__MODULE__{
transaction: tx
}
),
do: poe == Election.validation_nodes_election_seed_sorting(tx, timestamp)
defp valid_stamp_fee?(
%ValidationStamp{timestamp: timestamp, ledger_operations: %LedgerOperations{fee: fee}},
%__MODULE__{transaction: tx}
) do
Fee.calculate(
tx,
OracleChain.get_zaryn_price(timestamp) |> Keyword.fetch!(:usd)
) == fee
end
defp valid_stamp_errors?(%ValidationStamp{errors: errors}, %__MODULE__{
transaction: tx,
previous_transaction: prev_tx,
valid_pending_transaction?: valid_pending_transaction?
}) do
initial_error = if valid_pending_transaction?, do: nil, else: :pending_transaction
[initial_error, chain_error(prev_tx, tx)] |> Enum.filter(& &1) == errors
end
defp valid_stamp_recipients?(%ValidationStamp{recipients: recipients}, %__MODULE__{
transaction: tx
}),
do: resolve_transaction_recipients(tx) == recipients
defp valid_stamp_transaction_movements?(
%ValidationStamp{
timestamp: timestamp,
ledger_operations: ops
},
%__MODULE__{transaction: tx}
) do
LedgerOperations.valid_transaction_movements?(ops, Transaction.get_movements(tx), timestamp)
end
defp valid_stamp_unspent_outputs?(
%ValidationStamp{
ledger_operations: %LedgerOperations{fee: fee, unspent_outputs: next_unspent_outputs}
},
%__MODULE__{
transaction: tx,
unspent_outputs: previous_unspent_outputs
}
) do
%LedgerOperations{unspent_outputs: expected_unspent_outputs} =
%LedgerOperations{
fee: fee,
transaction_movements: Transaction.get_movements(tx)
}
|> LedgerOperations.from_transaction(tx)
|> LedgerOperations.consume_inputs(tx.address, previous_unspent_outputs)
expected_unspent_outputs == next_unspent_outputs
end
defp valid_stamp_node_movements?(%ValidationStamp{ledger_operations: ops}, %__MODULE__{
transaction: tx,
coordinator_node: %Node{last_public_key: coordinator_node_public_key},
cross_validation_nodes: cross_validation_nodes,
unspent_outputs: unspent_outputs
}) do
previous_storage_nodes =
P2P.distinct_nodes([unspent_storage_nodes(unspent_outputs), previous_storage_nodes(tx)])
with true <- LedgerOperations.valid_node_movements_roles?(ops),
true <-
LedgerOperations.valid_node_movements_cross_validation_nodes?(
ops,
Enum.map(cross_validation_nodes, & &1.last_public_key)
),
true <-
LedgerOperations.valid_node_movements_previous_storage_nodes?(
ops,
Enum.map(previous_storage_nodes, & &1.last_public_key)
),
true <- LedgerOperations.valid_reward_distribution?(ops),
true <-
LedgerOperations.has_node_movement_with_role?(
ops,
coordinator_node_public_key,
:coordinator_node
),
true <-
Enum.all?(
cross_validation_nodes,
&LedgerOperations.has_node_movement_with_role?(
ops,
&1.last_public_key,
:cross_validation_node
)
) do
true
end
end
defp unspent_storage_nodes([]), do: []
defp unspent_storage_nodes(unspent_outputs) do
unspent_outputs
|> Stream.map(&Replication.chain_storage_nodes(&1.from))
|> Enum.to_list()
end
defp previous_storage_nodes(tx) do
tx
|> Transaction.previous_address()
|> Replication.chain_storage_nodes()
end
end
lib/zaryn/mining/validation_context.ex
defmodule Certbot.Acme.Plug do
@moduledoc """
Plug used to intercept challenge verification calls on the request path
`/.well-known/acme-challenge/<token>`.
The plug can be placed early in the pipeline. When using Phoenix, it should
be placed before your router in your `endpoint.ex`.
If you plan on redirecting http to https using Plug.SSL, place it after this plug.
`Certbot.Acme.Plug` needs to work over http.
It requires two options.
- `:challenge_store` -- The challenge store used, so when a verification call
comes in, it can check whether it knows the token. It needs to be the same store
where the `Certbot.Provider.Acme` provider stores the challenges.
- `:jwk` -- A jwk map, see below for an example on how to generate one from
a private key.
## Example
```
@jwk "priv/cert/selfsigned_key.pem" |> File.read!() |> JOSE.JWK.from_pem() |> JOSE.JWK.to_map()
plug Certbot.Acme.Plug, challenge_store: Certbot.ChallengeStore.Default, jwk: @jwk
```
"""
alias Certbot.Acme.Challenge
@spec init(any) :: {atom, any}
def init(opts) do
challenge_store = Keyword.fetch!(opts, :challenge_store)
jwk = Keyword.fetch!(opts, :jwk)
validate_jwk!(jwk)
{challenge_store, jwk}
end
@spec call(Plug.Conn.t(), {atom, any}) :: Plug.Conn.t()
def call(conn, {challenge_store, jwk}) do
case conn.request_path do
"/.well-known/acme-challenge/" <> token ->
reply_challenge(conn, token, {challenge_store, jwk})
_ ->
conn
end
end
defp reply_challenge(conn, token, {challenge_store, jwk}) do
case challenge_store.find_by_token(token) do
{:ok, challenge} ->
authorization = Challenge.authorization(challenge, jwk)
conn
|> Plug.Conn.send_resp(200, authorization)
|> Plug.Conn.halt()
_ ->
conn
|> Plug.Conn.send_resp(404, "Not found")
|> Plug.Conn.halt()
end
end
defp validate_jwk!(jwk) do
%JOSE.JWK{} = JOSE.JWK.from_map(jwk)
jwk
rescue
_ -> reraise ArgumentError, "Invalid jwk supplied to `Certbot.Acme.Plug`"
end
end
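
The moduledoc above notes that the plug must run before any HTTP-to-HTTPS redirect. A minimal sketch of that ordering inside a hypothetical Phoenix endpoint (the endpoint and application names are assumptions, not part of Certbot):

```elixir
defmodule MyAppWeb.Endpoint do
  use Phoenix.Endpoint, otp_app: :my_app

  # Same key-loading pipeline as in the moduledoc example
  @jwk "priv/cert/selfsigned_key.pem"
       |> File.read!()
       |> JOSE.JWK.from_pem()
       |> JOSE.JWK.to_map()

  # ACME challenges must stay reachable over plain HTTP, so this plug
  # comes before Plug.SSL's redirect.
  plug Certbot.Acme.Plug, challenge_store: Certbot.ChallengeStore.Default, jwk: @jwk
  plug Plug.SSL, rewrite_on: [:x_forwarded_proto]

  plug MyAppWeb.Router
end
```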
lib/certbot/acme/plug.ex
defmodule EllipticCurve.PrivateKey do
@moduledoc """
Used to create private keys or convert them between struct and .der or .pem formats. Also allows creation of public keys from private keys.
Functions:
- generate()
- toPem()
- toDer()
- fromPem()
- fromPem!()
- fromDer()
- fromDer!()
"""
alias EllipticCurve.{PublicKey, Curve}
alias EllipticCurve.PrivateKey.{Data}
alias EllipticCurve.PublicKey.Data, as: PublicKeyData
alias EllipticCurve.Utils.Integer, as: IntegerUtils
alias EllipticCurve.Utils.{Der, BinaryAscii, Math}
@hexAt "\x00"
@doc """
Creates a new private key
Parameters:
- secret [int]: private key secret; Default: nil -> random key will be generated;
- curve [atom]: curve name; Default: :secp256k1;
Returns:
- privateKey [%EllipticCurve.PrivateKey.Data]: private key struct
## Example:
iex> EllipticCurve.PrivateKey.generate()
%EllipticCurve.PrivateKey.Data{...}
"""
def generate(secret \\ nil, curve \\ :secp256k1)
def generate(secret, curve) when is_nil(secret) do
generate(
IntegerUtils.between(
1,
Curve.KnownCurves.getCurveByName(curve)."N" - 1
),
curve
)
end
def generate(secret, curve) do
%Data{
secret: secret,
curve: Curve.KnownCurves.getCurveByName(curve)
}
end
@doc """
Gets the public key associated with a private key
Parameters:
- privateKey [%EllipticCurve.PrivateKey.Data]: private key struct
Returns:
- publicKey [%EllipticCurve.PublicKey.Data]: public key struct
## Example:
iex> EllipticCurve.PrivateKey.getPublicKey(privateKey)
%EllipticCurve.PublicKey.Data{...}
"""
def getPublicKey(privateKey) do
curveData = privateKey.curve
%PublicKeyData{
point:
Math.multiply(
curveData."G",
privateKey.secret,
curveData."N",
curveData."A",
curveData."P"
),
curve: curveData
}
end
@doc """
Converts a private key in decoded struct format into a pem string
Parameters:
- privateKey [%EllipticCurve.PrivateKey.Data]: decoded private key struct;
Returns:
- pem [string]: private key in pem format
## Example:
iex> EllipticCurve.PrivateKey.toPem(%EllipticCurve.PrivateKey.Data{...})
"-----<KEY>"
"""
def toPem(privateKey) do
Der.toPem(
toDer(privateKey),
"EC PRIVATE KEY"
)
end
@doc """
Converts a private key in decoded struct format into a der string (raw binary)
Parameters:
- privateKey [%EllipticCurve.PrivateKey.Data]: decoded private key struct;
Returns:
- der [string]: private key in der format
## Example:
iex> EllipticCurve.PrivateKey.toDer(%EllipticCurve.PrivateKey.Data{...})
<<48, 116, 2, 1, 1, 4, 32, 59, 210, 253, 23, 93, 23, ...>>
"""
def toDer(privateKey) do
Der.encodeSequence([
Der.encodeInteger(1),
Der.encodeOctetString(toString(privateKey)),
Der.encodeConstructed(0, Der.encodeOid(privateKey.curve.oid)),
Der.encodeConstructed(
1,
Der.encodeBitString(PublicKey.toString(getPublicKey(privateKey), true))
)
])
end
@doc false
def toString(privateKey) do
BinaryAscii.stringFromNumber(privateKey.secret, Curve.getLength(privateKey.curve))
end
@doc """
Converts a private key in pem format into decoded struct format
Parameters:
- pem [string]: private key in pem format
Returns {:ok, privateKey}:
- privateKey [%EllipticCurve.PrivateKey.Data]: decoded private key struct;
## Example:
iex> EllipticCurve.PrivateKey.fromPem("-----<KEY>")
{:ok, %EllipticCurve.PrivateKey.Data{...}}
"""
def fromPem(pem) do
{:ok, fromPem!(pem)}
rescue
e in RuntimeError -> {:error, e}
end
@doc """
Converts a private key in pem format into decoded struct format
Parameters:
- pem [string]: private key in pem format
Returns:
- privateKey [%EllipticCurve.PrivateKey.Data]: decoded private key struct;
## Example:
iex> EllipticCurve.PrivateKey.fromPem!("-----<KEY>")
%EllipticCurve.PrivateKey.Data{...}
"""
def fromPem!(pem) do
String.split(pem, "-----BEGIN EC PRIVATE KEY-----")
|> List.last()
|> Der.fromPem()
|> fromDer!
end
@doc """
Converts a private key in der format into decoded struct format
Parameters:
- der [string]: private key in der format
Returns {:ok, privateKey}:
- privateKey [%EllipticCurve.PrivateKey.Data]: decoded private key struct;
## Example:
iex> EllipticCurve.PrivateKey.fromDer(<<48, 116, 2, 1, 1, 4, 32, 59, 210, 253, 23, 93, 23, ...>>)
{:ok, %EllipticCurve.PrivateKey.Data{...}}
"""
def fromDer(der) do
{:ok, fromDer!(der)}
rescue
e in RuntimeError -> {:error, e}
end
@doc """
Converts a private key in der format into decoded struct format
Parameters:
- der [string]: private key in der format
Returns:
- privateKey [%EllipticCurve.PrivateKey.Data]: decoded private key struct;
## Example:
iex> EllipticCurve.PrivateKey.fromDer!(<<48, 116, 2, 1, 1, 4, 32, 59, 210, 253, 23, 93, 23, ...>>)
%EllipticCurve.PrivateKey.Data{...}
"""
def fromDer!(der) do
{bytes1, empty} = Der.removeSequence(der)
if byte_size(empty) != 0 do
throw("trailing junk after DER private key: #{BinaryAscii.hexFromBinary(empty)}")
end
{one, bytes2} = Der.removeInteger(bytes1)
if one != 1 do
throw("expected '1' at start of DER private key, got #{one}")
end
{privateKeyString, bytes3} = Der.removeOctetString(bytes2)
{tag, curveOidString, _bytes4} = Der.removeConstructed(bytes3)
if tag != 0 do
throw("expected tag 0 in DER private key, got #{tag}")
end
{oidCurve, empty} = Der.removeObject(curveOidString)
if byte_size(empty) != 0 do
throw("trailing junk after DER private key curve_oid: #{BinaryAscii.hexFromBinary(empty)}")
end
privateKeyStringLength = byte_size(privateKeyString)
curveData = Curve.KnownCurves.getCurveByOid(oidCurve)
curveLength = Curve.getLength(curveData)
if privateKeyStringLength < curveLength do
(String.duplicate(@hexAt, curveLength - privateKeyStringLength) <> privateKeyString)
|> fromString!(curveData.name)
else
fromString!(privateKeyString, curveData.name)
end
end
@doc false
def fromString(string, curve \\ :secp256k1) do
{:ok, fromString!(string, curve)}
rescue
e in RuntimeError -> {:error, e}
end
@doc false
def fromString!(string, curve \\ :secp256k1) do
%Data{
secret: BinaryAscii.numberFromString(string),
curve: Curve.KnownCurves.getCurveByName(curve)
}
end
end
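
A short round trip showing the public functions above working together; this is a usage sketch, assuming the default `:secp256k1` curve:

```elixir
# Generate a key, serialize it to PEM, and parse it back.
private_key = EllipticCurve.PrivateKey.generate()
pem = EllipticCurve.PrivateKey.toPem(private_key)

{:ok, parsed} = EllipticCurve.PrivateKey.fromPem(pem)
true = parsed.secret == private_key.secret

# The matching public key is derived from the same struct:
public_key = EllipticCurve.PrivateKey.getPublicKey(private_key)
```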
lib/privateKey/privateKey.ex
defmodule Conejo.Consumer do
@moduledoc """
`Conejo.Consumer` is the behaviour which will help you to implement your own RabbitMQ consumers.
### Configuration
Conejo.Consumer needs a configuration in the environment files.
Example:
```elixir
config :my_application, :consumer,
exchange: "my_exchange",
exchange_type: "topic",
queue_name: "my_queue",
queue_declaration_options: [{:auto_delete, true}, {:exclusive, true}],
queue_bind_options: [routing_key: "example"],
consume_options: [no_ack: true]
```
### Definition
```elixir
defmodule MyConsumer do
use Conejo.Consumer
def handle_consume(_channel, payload, _params) do
IO.inspect payload
end
end
```
### Start Up
```elixir
options = Application.get_all_env(:my_application)[:consumer]
{:ok, consumer} = MyConsumer.start_link(options, [name: :consumer])
```
"""
@type channel :: AMQP.Channel
@type payload :: any
@type params :: %{}
@type ack_opts :: [multiple: boolean]
@type nack_opts :: [multiple: boolean, requeue: boolean]
@type reject_opts :: [requeue: boolean]
@doc """
It will be executed after a message is received in an independent process.
* **payload**: The received message.
* **params**: All the available parameters related to the received message.
It has to return:
* If there is acknowledge:
* `:ack`
* `{:ack, ack_opts}`
* `:nack`
* `{:nack, nack_opts}`
* `:reject`
* `{:reject, reject_opts}`
where the options are Keyword lists like : _[multiple: boolean, requeue: boolean]_
* If there is no acknowledge:
* Any other value
"""
@callback handle_consume(channel, payload, params) ::
:ack
| {:ack, ack_opts}
| :nack
| {:nack, nack_opts}
| :reject
| {:reject, reject_opts}
| any
defmacro __using__(_) do
quote location: :keep do
use Conejo.Channel
@behaviour Conejo.Consumer
def declare_queue(chan, queue, options) do
AMQP.Queue.declare(chan, queue, options)
end
def declare_exchange(_chan, _exchange, _exchange_type) do
nil
end
def bind_queue(chan, queue, exchange, options) do
AMQP.Queue.bind(chan, queue, exchange, options)
end
def consume_data(chan, queue, options) do
AMQP.Basic.consume(chan, queue, nil, options)
end
def do_consume(channel, payload, params) do
case handle_consume(channel, payload, params) do
:ack ->
AMQP.Basic.ack(channel, params.delivery_tag)
{:ack, ack_opts} ->
AMQP.Basic.ack(channel, params.delivery_tag, ack_opts)
:nack ->
AMQP.Basic.nack(channel, params.delivery_tag)
{:nack, nack_opts} ->
AMQP.Basic.nack(channel, params.delivery_tag, nack_opts)
_ ->
:ok
end
end
def async_publish(_publisher, _exchange, _topic, _message) do
nil
end
def sync_publish(_publisher, _exchange, _topic, _message) do
nil
end
end
end
end
lib/conejo/consumer.ex
defmodule Zstream do
@moduledoc """
Module for reading and writing ZIP file stream
## Example
```
Zstream.zip([
Zstream.entry("report.csv", Stream.map(records, &CSV.dump/1)),
Zstream.entry("catfilm.mp4", File.stream!("/catfilm.mp4", [], 512), coder: Zstream.Coder.Stored)
])
|> Stream.into(File.stream!("/archive.zip"))
|> Stream.run
```
```
File.stream!("archive.zip", [], 512)
|> Zstream.unzip()
|> Enum.reduce(%{}, fn
{:entry, %Zstream.Entry{name: file_name} = entry}, state -> state
{:data, data}, state -> state
{:data, :eof}, state -> state
end)
```
"""
defmodule Entry do
@type t :: %__MODULE__{
name: String.t(),
compressed_size: integer(),
mtime: NaiveDateTime.t(),
size: integer(),
extras: list()
}
defstruct [:name, :compressed_size, :mtime, :size, :extras]
end
@opaque entry :: map
@doc """
Creates a ZIP file entry with the given `name`
The `enum` could be either lazy `Stream` or `List`. The elements in `enum`
should be of type `iodata`
## Options
* `:coder` (module | {module, list}) - The compressor that should be
used to encode the data. Available options are
`Zstream.Coder.Deflate` - use deflate compression
`Zstream.Coder.Stored` - store without any compression
Defaults to `Zstream.Coder.Deflate`
* `:encryption_coder` ({module, keyword}) - The encryption module that should be
used to encrypt the data. Available options are
`Zstream.EncryptionCoder.Traditional` - use traditional zip
encryption scheme. `:password` key should be present in the
options. Example `{Zstream.EncryptionCoder.Traditional, password:
"<PASSWORD>"}`
`Zstream.EncryptionCoder.None` - no encryption
Defaults to `Zstream.EncryptionCoder.None`
* `:mtime` (DateTime) - File last modification time. Defaults to system local time.
"""
@spec entry(String.t(), Enumerable.t(), Keyword.t()) :: entry
defdelegate entry(name, enum, options \\ []), to: Zstream.Zip
@doc """
Creates a ZIP file stream
entries are consumed one by one in the given order
## Options
* `:zip64` (boolean) - If set to `true` zip64 format is used. Zip64
can support files more than 4 GB in size, but not all the unzip
programs support this format. Defaults to `false`
"""
@spec zip([entry], Keyword.t()) :: Enumerable.t()
defdelegate zip(entries, options \\ []), to: Zstream.Zip
@doc """
Unzips file stream
returns a new stream which emits the following tuples for each zip entry
{`:entry`, `t:Zstream.Entry.t/0`} - Indicates a new file entry.
{`:data`, `t:iodata/0` | `:eof`} - one or more data tuples will be emitted for each entry. `:eof` indicates end of data tuples for current entry.
### NOTES
Unzip doesn't support all valid zip files. Zip file format allows
the writer to write the file size info after the file data, which
allows the writer to zip streams with unknown size. But this
prevents the reader from unzipping the file in a streaming fashion,
because to find the file size one has to go to the end of the
stream. Ironically, if you use Zstream to zip a file, the same file
can't be unzipped using Zstream.
* doesn't support file which uses data descriptor header
* doesn't support encrypted file
"""
defdelegate unzip(stream), to: Zstream.Unzip
end
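
The `unzip/1` moduledoc describes an `{:entry, …}` / `{:data, …}` tuple stream; one common way to consume it is to accumulate each entry's chunks into a map of file name to binary. A sketch under the assumption that `archive.zip` is readable by Zstream (see the NOTES above):

```elixir
# Track the current entry name alongside the accumulator; chunks are
# prepended for efficiency and joined on :eof.
{files, _current} =
  File.stream!("archive.zip", [], 512)
  |> Zstream.unzip()
  |> Enum.reduce({%{}, nil}, fn
    {:entry, %Zstream.Entry{name: name}}, {acc, _} ->
      {Map.put(acc, name, []), name}

    {:data, :eof}, {acc, name} ->
      {Map.update!(acc, name, fn chunks ->
         chunks |> Enum.reverse() |> IO.iodata_to_binary()
       end), name}

    {:data, chunk}, {acc, name} ->
      {Map.update!(acc, name, &[chunk | &1]), name}
  end)
```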
lib/zstream.ex
defmodule Solid.Expression do
@moduledoc """
Expression evaluation for the following binary operators:
== != > < >= <=
Also combine expressions with `and`, `or`
"""
alias Solid.Argument
@type value :: number | iolist | boolean | nil
@doc """
Evaluate a single expression
iex> Solid.Expression.eval({"Beer Pack", :contains, "Pack"})
true
iex> Solid.Expression.eval({1, :==, 2})
false
iex> Solid.Expression.eval({1, :==, 1})
true
iex> Solid.Expression.eval({1, :!=, 2})
true
iex> Solid.Expression.eval({1, :!=, 1})
false
iex> Solid.Expression.eval({1, :<, 2})
true
iex> Solid.Expression.eval({1, :<, 1})
false
iex> Solid.Expression.eval({1, :>, 2})
false
iex> Solid.Expression.eval({2, :>, 1})
true
iex> Solid.Expression.eval({1, :>=, 1})
true
iex> Solid.Expression.eval({1, :>=, 0})
true
iex> Solid.Expression.eval({1, :>=, 2})
false
iex> Solid.Expression.eval({1, :<=, 1})
true
iex> Solid.Expression.eval({1, :<=, 0})
false
iex> Solid.Expression.eval({1, :<=, 2})
true
iex> Solid.Expression.eval({"Meat", :contains, "Pack"})
false
iex> Solid.Expression.eval({["Beer", "Pack"], :contains, "Pack"})
true
iex> Solid.Expression.eval({["Meat"], :contains, "Pack"})
false
iex> Solid.Expression.eval({nil, :contains, "Pack"})
false
iex> Solid.Expression.eval({"Meat", :contains, nil})
false
iex> Solid.Expression.eval(true)
true
iex> Solid.Expression.eval(false)
false
iex> Solid.Expression.eval(nil)
false
iex> Solid.Expression.eval(1)
true
iex> Solid.Expression.eval("")
true
iex> Solid.Expression.eval({0, :<=, nil})
false
iex> Solid.Expression.eval({1.0, :<, nil})
false
iex> Solid.Expression.eval({nil, :>=, 1.0})
false
iex> Solid.Expression.eval({nil, :>, 0})
false
"""
@spec eval({value, atom, value} | value) :: boolean
def eval({nil, :contains, _v2}), do: false
def eval({_v1, :contains, nil}), do: false
def eval({v1, :contains, v2}) when is_list(v1), do: v2 in v1
def eval({v1, :contains, v2}), do: String.contains?(v1, v2)
def eval({v1, :<=, nil}) when is_number(v1), do: false
def eval({v1, :<, nil}) when is_number(v1), do: false
def eval({nil, :>=, v2}) when is_number(v2), do: false
def eval({nil, :>, v2}) when is_number(v2), do: false
def eval({v1, op, v2}), do: apply(Kernel, op, [v1, v2])
def eval(value) do
if value do
true
else
false
end
end
@doc """
Evaluate a list of expressions combined with `or`, `and`
"""
@spec eval(list, map) :: boolean
def eval(exps, context) when is_list(exps) do
exps
|> Enum.chunk_every(2)
|> Enum.reverse()
|> Enum.reduce(nil, fn
[exp, :bool_and], acc ->
do_eval(exp, context) and acc
[exp, :bool_or], acc ->
do_eval(exp, context) or acc
[exp], nil ->
do_eval(exp, context)
end)
end
defp do_eval([arg1: v1, op: [op], arg2: v2], context) do
v1 = get_argument(v1, context)
v2 = get_argument(v2, context)
eval({v1, op, v2})
end
defp do_eval(value, context), do: eval(get_argument(value, context))
defp get_argument([argument: argument, filters: filters], context) do
Argument.get(argument, context, filters: filters)
end
defp get_argument([argument: argument], context) do
Argument.get(argument, context)
end
defp get_argument(argument, context) do
Argument.get(argument, context)
end
end
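
The binary operators above compose with `and`/`or` via `eval/2`, which (as the chunk/reverse/reduce pipeline shows) evaluates right-to-left with no precedence, matching Liquid. The same combination can be traced by hand using the public `eval/1`:

```elixir
# Evaluating `1 < 2 and "Beer" contains "Pack"` expression by expression:
left = Solid.Expression.eval({1, :<, 2})                     # true
right = Solid.Expression.eval({"Beer", :contains, "Pack"})   # false
left and right                                               # false
```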
lib/solid/expression.ex
defmodule Pummpcomm.Cgm.Timestamper do
@moduledoc """
Adds timestamps to events based on the preceding timestamps or timestamped events in a series of events
"""
# Constants
@relative_events [
:sensor_weak_signal,
:sensor_calibration,
:sensor_glucose_value,
:sensor_data_low,
:sensor_data_high,
:sensor_error,
:sensor_packet
]
# Types
@typedoc """
An event
"""
@type event :: tuple
@typedoc """
The name of an `event`
"""
@type event_name :: atom
# Functions
@doc """
If the event can serve as a reference timestamp for relative events that precede it
"""
@spec is_reference_event?(event) :: boolean
def is_reference_event?(event), do: event_key(event) == :sensor_timestamp
@doc """
If the event's timestamp is relative to the reference event's timestamp
"""
@spec is_relative_event?(event) :: boolean
def is_relative_event?(event), do: event_key(event) in @relative_events
@doc """
All events that are relative to the previous timestamp in the event series
"""
@spec relative_events() :: [event_name, ...]
def relative_events, do: @relative_events
@doc """
Adds timestamps to all relative events in `events` using the reference events' timestamps
"""
@spec timestamp_events([event]) :: [event]
def timestamp_events(events) do
reverse_events = Enum.reverse(events)
process_events(reverse_events, [], try_find_timestamp(reverse_events, 0))
end
## Private Functions
defp process_events([], processed, _), do: processed
defp process_events([event | tail], processed, timestamp) do
cond do
is_reference_event?(event) ->
reference_timestamp = elem(event, 1)[:timestamp]
process_events(tail, [event | processed], reference_timestamp)
is_relative_event?(event) ->
event = add_timestamp(event, timestamp)
timestamp = decrement_timestamp(timestamp)
process_events(tail, [event | processed], timestamp)
true ->
process_events(tail, [event | processed], timestamp)
end
end
defp decrement_timestamp(nil), do: nil
defp decrement_timestamp(timestamp), do: Timex.shift(timestamp, minutes: -5)
defp add_timestamp(event, timestamp) do
{event_key(event), Map.put(event_map(event), :timestamp, timestamp)}
end
defp try_find_timestamp([], _), do: nil
defp try_find_timestamp([event | tail], count) do
cond do
is_relative_event?(event) ->
try_find_timestamp(tail, count + 1)
event_key(event) in [:data_end, :nineteen_something, :null_byte] ->
try_find_timestamp(tail, count)
is_reference_event?(event) && event_map(event)[:event_type] == :last_rf ->
timestamp = event_map(event)[:timestamp]
Timex.shift(timestamp, minutes: count * 5)
true ->
nil
end
end
defp event_key(event), do: elem(event, 0)
defp event_map(event) when tuple_size(event) == 1, do: %{}
defp event_map(event) when tuple_size(event) >= 2, do: elem(event, 1)
end
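
A sketch of `timestamp_events/1` in action: a `:last_rf` reference timestamp followed by two relative events, which get stamped five minutes apart after the reference. The event payloads (`:sgv`) are illustrative assumptions, and `Timex`-compatible timestamps are assumed:

```elixir
reference = ~N[2017-01-01 12:00:00]

events = [
  {:sensor_timestamp, %{timestamp: reference, event_type: :last_rf}},
  {:sensor_glucose_value, %{sgv: 100}},
  {:sensor_glucose_value, %{sgv: 105}}
]

[_, {_, first}, {_, second}] = Pummpcomm.Cgm.Timestamper.timestamp_events(events)
# Relative events are back-filled from the reference in 5-minute steps:
# first.timestamp  => ~N[2017-01-01 12:05:00]
# second.timestamp => ~N[2017-01-01 12:10:00]
```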
lib/pummpcomm/cgm/timestamper.ex
defmodule Learn.CourseUser do
# Struct for a v1 CourseUser
defstruct [:userId, :courseId, :childCourseId, :dataSourceId, :availability, :courseRoleId, :user, :course]
@doc """
{
"userId": "string",
"courseId": "string",
"childCourseId": "string",
"dataSourceId": "string",
"created": "2019-04-18T21:02:44.928Z",
"availability": {
"available": "Yes"
},
"courseRoleId": "Instructor",
"bypassCourseAvailabilityUntil": "2019-04-18T21:02:44.928Z",
"lastAccessed": "2019-04-18T21:02:44.928Z",
"user": {
"id": "string",
"uuid": "string",
"externalId": "string",
"dataSourceId": "string",
"userName": "string",
"studentId": "string",
"educationLevel": "K8",
"gender": "Female",
"birthDate": "2019-04-18T21:02:44.928Z",
"created": "2019-04-18T21:02:44.928Z",
"lastLogin": "2019-04-18T21:02:44.928Z",
"institutionRoleIds": [
"string"
],
"systemRoleIds": [
"SystemAdmin"
],
"availability": {
"available": "Yes"
},
"name": {
"given": "string",
"family": "string",
"middle": "string",
"other": "string",
"suffix": "string",
"title": "string"
},
"job": {
"title": "string",
"department": "string",
"company": "string"
},
"contact": {
"homePhone": "string",
"mobilePhone": "string",
"businessPhone": "string",
"businessFax": "string",
"email": "string",
"webPage": "string"
},
"address": {
"street1": "string",
"street2": "string",
"city": "string",
"state": "string",
"zipCode": "string",
"country": "string"
},
"locale": {
"id": "string",
"calendar": "Gregorian",
"firstDayOfWeek": "Sunday"
}
},
"course": {
"id": "string",
"uuid": "string",
"externalId": "string",
"dataSourceId": "string",
"courseId": "string",
"name": "string",
"description": "string",
"created": "2019-04-18T21:02:44.928Z",
"organization": true,
"ultraStatus": "Undecided",
"allowGuests": true,
"readOnly": true,
"termId": "string",
"availability": {
"available": "Yes",
"duration": {
"type": "Continuous",
"start": "2019-04-18T21:02:44.929Z",
"end": "2019-04-18T21:02:44.929Z",
"daysOfUse": 0
}
},
"enrollment": {
"type": "InstructorLed",
"start": "2019-04-18T21:02:44.929Z",
"end": "2019-04-18T21:02:44.929Z",
"accessCode": "string"
},
"locale": {
"id": "string",
"force": true
},
"hasChildren": true,
"parentId": "string",
"externalAccessUrl": "string",
"guestAccessUrl": "string"
}
}
Create a new CourseUser from the JSON that comes back from GET /courses/user_id
"""
def new_from_json(json) do # returns a CourseUser
my_map = Poison.decode!(json)
user = Learn.RestUtil.to_struct(Learn.CourseUser, my_map)
user
end
end #Learn.CourseUser
lib/learn/course_user.ex
defmodule NodePing.Results do
@moduledoc """
Get check results and uptime information
"""
alias NodePing.Helpers, as: Helpers
alias NodePing.HttpRequests, as: HttpRequests
@api_url "https://api.nodeping.com/api/1"
@results_path "/results"
@uptime_path "/results/uptime"
@current_path "/results/current"
@doc """
Get results for specified check
## Parameters
- `token` - NodePing API token that was provided with account
- `id` - Check id of the check you want to get
- `opts` - Optional arguments to filter results. Empty list if no opts.
- `customerid` - optional ID to access a subaccount
The list of values for `opts` can be found in NodePing's documents here:
https://nodeping.com/docs-api-results.html#get
## Opts
- `span` - number of hours of results to retrieve. If used in combination with limit the narrower range will be used.
- `limit` - optional integer - number of records to retrieve. Defaults to 300 if span is not set.
- `start` - date/time for the start of the results. Timestamps should be milliseconds, or an RFC2822 or ISO 8601 date
- `end` - date/time for the end of the results.
- `offset` - integer offset to have the system perform uptime calculations for a different time zone.
- `clean` - boolean sets whether to use the older format for the result record that includes a doc object. `true` strongly recommended
## Examples
iex> opts = [{:span, 48}, {:limit, 10}]
iex> token = System.fetch_env!("TOKEN")
iex> checkid = "201205050153W2Q4C-0J2HSIRF"
iex> {:ok, results} = NodePing.Results.get_results(token, checkid, opts)
"""
def get_results(token, id, opts, customerid \\ nil) do
querystrings =
([{:token, token}] ++ opts)
|> Helpers.add_cust_id(customerid)
|> Helpers.merge_querystrings()
(@api_url <> @results_path <> "/#{id}" <> querystrings)
|> HttpRequests.get()
end
@doc """
Get results for specified check
## Parameters
- `token` - NodePing API token that was provided with account
- `id` - Check id of the check you want to get
- `opts` - Optional arguments to filter results. Empty list if no opts.
- `customerid` - optional ID to access a subaccount
The list of values for `opts` can be found in NodePing's documents here:
https://nodeping.com/docs-api-results.html#get
## Opts
- `span` - number of hours of results to retrieve. If used in combination with limit the narrower range will be used.
- `limit` - optional integer - number of records to retrieve. Defaults to 300 if span is not set.
- `start` - date/time for the start of the results. Timestamps should be milliseconds, or an RFC2822 or ISO 8601 date
- `end` - date/time for the end of the results.
- `offset` - integer offset to have the system perform uptime calculations for a different time zone.
- `clean` - boolean sets whether to use the older format for the result record that includes a doc object. `true` strongly recommended
## Examples
iex> opts = [{:span, 48}, {:limit, 10}]
iex> token = System.fetch_env!("TOKEN")
iex> checkid = "201205050153W2Q4C-0J2HSIRF"
iex> results = NodePing.Results.get_results!(token, checkid, opts)
"""
def get_results!(token, id, opts, customerid \\ nil) do
case NodePing.Results.get_results(token, id, opts, customerid) do
{:ok, result} -> result
{:error, error} -> error
end
end
@doc """
Get uptime information for a check
## Parameters
- `token` - NodePing API token that was provided with account
- `id` - Check id of the check you want to get uptime information about
- `opts` - optional list of args for specifying the range of information to gather. Empty list for no opts
- `customerid` - optional ID to access a subaccount
## Opts
- `interval` - "days" or "months". "months" is the default
- `start` - date/time for the start of the results. Timestamps should be milliseconds, or an RFC2822 or ISO 8601 date
- `end` - date/time for the end of the results.
## Examples
iex> opts = [{:interval, "days"}, {:start, "2020-02"}, {:end, "2020-05"}]
iex> token = System.fetch_env!("TOKEN")
iex> checkid = "201205050153W2Q4C-0J2HSIRF"
iex> {:ok, results} = NodePing.Results.uptime(token, checkid, opts)
"""
def uptime(token, id, opts, customerid \\ nil) do
querystrings =
([{:token, token}] ++ opts)
|> Helpers.add_cust_id(customerid)
|> Helpers.merge_querystrings()
(@api_url <> @uptime_path <> "/#{id}" <> querystrings)
|> HttpRequests.get()
end
@doc """
Get uptime information for a check
## Parameters
- `token` - NodePing API token that was provided with account
- `id` - Check id of the check you want to get uptime information about
- `opts` - optional list of args for specifying the range of information to gather. Empty list for no opts
- `customerid` - optional ID to access a subaccount
## Opts
- `interval` - "days" or "months". "months" is the default
- `start` - date/time for the start of the results. Timestamps should be milliseconds, or an RFC2822 or ISO 8601 date
- `end` - date/time for the end of the results.
## Examples
iex> opts = [{:interval, "days"}, {:start, "2020-02"}, {:end, "2020-05"}]
iex> token = System.fetch_env!("TOKEN")
iex> checkid = "201205050153W2Q4C-0J2HSIRF"
iex> results = NodePing.Results.uptime!(token, checkid, opts)
"""
def uptime!(token, id, opts, customerid \\ nil) do
case NodePing.Results.uptime(token, id, opts, customerid) do
{:ok, result} -> result
{:error, error} -> error
end
end
@doc """
Retrieves information about current "events" for checks. Events include down events
and disabled checks. If you need a list of all checks with their passing/failing state,
please use the 'checks' list rather than this 'current' call.
## Parameters
- `token` - NodePing API token that was provided with account
- `customerid` - optional ID to access a subaccount
"""
def current_events(token, customerid \\ nil) do
querystrings =
Helpers.add_cust_id([{:token, token}], customerid)
|> Helpers.merge_querystrings()
(@api_url <> @current_path <> querystrings)
|> HttpRequests.get()
end
@doc """
Retrieves information about current "events" for checks. Events include down events
and disabled checks. If you need a list of all checks with their passing/failing state,
please use the 'checks' list rather than this 'current' call.
## Parameters
- `token` - NodePing API token that was provided with account
- `customerid` - optional ID to access a subaccount
"""
def current_events!(token, customerid \\ nil) do
case NodePing.Results.current_events(token, customerid) do
{:ok, result} -> result
{:error, error} -> error
end
end
@doc """
Retrieves information about "events" for checks. Events include down events and disabled checks.
https://nodeping.com/docs-api-results.html#events
## Parameters
- `token` - NodePing API token that was provided with account
- `id` - Check id of the check you want to get events for
- `opts` - Optional list of tuples to specify returned results
## Opts
- `start` - Start date to retrieve events from a specific range of time.
- `end` - End date to retrieve events from a specific range of time.
- `limit` - limit for the number of records to retrieve.
## Examples
iex> token = System.fetch_env!("TOKEN")
iex> checkid = "201205050153W2Q4C-0J2HSIRF"
iex> start = "2020-06"
iex> finish = "2020-07"
iex> limit = 10
iex> opts = [{:start, start}, {:end, finish}, {:limit, limit}]
iex> {:ok, results} = NodePing.Results.get_events(token, checkid, opts)
"""
def get_events(token, checkid, opts \\ []) do
querystrings = Helpers.merge_querystrings([{:token, token}] ++ opts)
(@api_url <> "/results/events/#{checkid}" <> querystrings)
|> HttpRequests.get()
end
@doc """
Retrieves information about "events" for checks. Events include down events and disabled checks.
https://nodeping.com/docs-api-results.html#events
## Parameters
- `token` - NodePing API token that was provided with account
- `id` - Check id of the check you want to get events for
- `opts` - Optional list of tuples to specify returned results
## Opts
- `start` - Start date to retrieve events from a specific range of time.
- `end` - End date to retrieve events from a specific range of time.
- `limit` - limit for the number of records to retrieve.
## Examples
iex> token = System.fetch_env!("TOKEN")
iex> checkid = "201205050153W2Q4C-0J2HSIRF"
iex> start = "2020-06"
iex> finish = "2020-07"
iex> limit = 10
iex> opts = [{:start, start}, {:end, finish}, {:limit, limit}]
iex> results = NodePing.Results.get_events!(token, checkid, opts)
"""
def get_events!(token, checkid, opts \\ []) do
case get_events(token, checkid, opts) do
{:ok, result} -> result
{:error, error} -> error
end
end
end
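`Helpers.merge_querystrings/1` is defined elsewhere in this package; as a rough sketch of what such a helper might do (the behavior here is an assumption, not the real implementation), keyword opts can be rendered into a query string with `URI.encode_query/1` from the standard library:

```elixir
# Hypothetical stand-in for Helpers.merge_querystrings/1: turns
# [{:token, "abc123"}, {:span, 48}] into "?token=abc123&span=48".
merge_querystrings = fn opts ->
  "?" <> URI.encode_query(opts)
end

merge_querystrings.([{:token, "abc123"}, {:span, 48}, {:limit, 10}])
# => "?token=abc123&span=48&limit=10"
```

`URI.encode_query/1` also percent-encodes values, so date strings passed as `start`/`end` opts stay URL-safe.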
# file: lib/results.ex
defmodule ECS.System do
@moduledoc """
Functions to setup and control systems.
# Basic Usage
A system iterates over entities with certain components, defined in
`component_keys/0`, and calls `perform/1` on each entity. `perform/1` should
return the entity_pid when the entity should continue to exist in the shared
collection.
## Examples
# Define a service to display entities' names.
defmodule DisplayNames do
use ECS.System
# Accepts entities with a name component.
def component_keys, do: [:name]
def perform(entity) do
# Displays the entity's name.
IO.puts entity.name
# Return the entity so that it is passed onto other systems.
entity
end
end
# Run entities through systems.
{updated_entities, _states} = ECS.System.run([DisplayNames], entities)
# Stateful System
A system may also specify `initial_state/0` (returns `nil` by default) which
will be passed as the second argument to `perform/2`. `perform/2` should
return a tuple in the form `{entity, state}` where state will be the new state
for the next `perform/2` call. You can use this, for example, to keep track of
changes made, such as to note how many entities actually changed. This state
will be returned by `run/2` in the second element of the tuple as a list,
where each item in the list is the corresponding state from the System passed
to `run/2`.
This could be used, for example, to track collisions amongst collidable
components for later use.
## Examples
defmodule Counter do
use ECS.System
def component_keys, do: [:countable]
def initial_state, do: 0
def perform(entity, counter) do
{entity, counter + 1}
end
end
systems = [Counter]
entities = [%{countable: true}]
{_, [num_countable_entities]} = ECS.System.run(systems, entities)
# Interacting with Multiple Components
Often, a system requires knowledge of multiple components in order to have the
intended effects, such as the aforementioned collision resolution. You can
specify `other_component_keys/0` similar to `component_keys/0` which will
cause affiliated entities to be passed to `perform/3` as the third argument.
## Examples
defmodule MultiplyCounter do
use ECS.System
def other_component_keys, do: [:multiplier]
def component_keys, do: [:countable]
def initial_state, do: 0
def perform(entity, counter, multipliers) do
factor = case multipliers do
[] -> 1
m ->
Enum.reduce(m, 0, fn e, x ->
x + e.multiplier.factor
end)
end
{entity, counter + 1 * factor}
end
end
## Pre/Post Hooks
You may also specify `pre_perform` and `post_perform` which can be used to
transform the state and/or entities before any `perform` calls are made. This
can be used for more dynamic filtering or transformation from within the
system.
"""
@doc "Defines the component keys to search for that the system processes."
@callback component_keys() :: [atom]
@doc "Defines the component keys to search for that the system's entities may interact with."
@callback other_component_keys() :: [atom]
@doc "Defines the initial state for a system's run cycle."
@callback initial_state() :: state :: any
@doc "Called before any `perform` calls are made in order to transform the entities list or the state."
@callback pre_perform(entities :: [pid], state :: any, opts :: Keyword.t()) ::
{entities :: [pid], state :: any}
@doc "Modifies the given `entity` and, optionally, the system's current run's `state`."
@callback perform(
entity :: pid,
state :: term | nil,
other_entities :: [pid] | [],
opts :: Keyword.t()
) :: {entity :: pid, state :: any} | entity :: pid
@doc "Called after all `perform` calls are made in order to transform the entities list or the state."
@callback post_perform(entities :: [pid], state :: any, opts :: Keyword.t()) ::
{entities :: [pid], state :: any}
defmacro __using__(_opts) do
quote do
@behaviour ECS.System
def initial_state(), do: nil
defoverridable initial_state: 0
def other_component_keys(), do: []
defoverridable other_component_keys: 0
def pre_perform(entities, state, _opts \\ nil), do: {entities, state}
defoverridable pre_perform: 3
def post_perform(entities, state, _opts \\ nil), do: {entities, state}
defoverridable post_perform: 3
def perform(entity) do
entity
end
defoverridable perform: 1
def perform(entity, nil) do
perform(entity)
end
def perform(entity, state) do
{entity, state}
end
defoverridable perform: 2
def perform(entity, state, _other_entities) do
perform(entity, state)
end
defoverridable perform: 3
def perform(entity, state, _other_entities, opts) do
perform(entity, state, other_entities)
end
defoverridable perform: 4
end
end
@doc "Run `systems` over `entities`."
def run(systems, entities, opts \\ []) do
{entities, states} =
Enum.reduce(systems, {entities, []}, fn system, {entities, states} ->
{entities, state} = do_run(system, entities, opts)
{entities, [state | states]}
end)
{entities, Enum.reverse(states)}
end
defp do_run([], entities, _opts), do: {entities, nil}
defp do_run(system, entities, opts) do
state = system.initial_state()
{entities, state} = system.pre_perform(entities, state, opts)
other_entities =
  case system.other_component_keys() do
    [] ->
      []

    keys ->
      Enum.filter(entities, fn entity ->
        Enum.all?(keys, &Map.has_key?(entity, &1))
      end)
  end
{entities, state} =
Enum.map_reduce(entities, state, fn entity, state ->
iterate(system, entity, state, other_entities, opts)
end)
system.post_perform(entities, state, opts)
end
defp iterate(system, entity, state, other_entities, opts) do
if Enum.all?(system.component_keys(), &Map.has_key?(entity, &1)) do
case system.perform(entity, state, other_entities, opts) do
{entity, state} -> {entity, state}
entity -> {entity, state}
end
else
{entity, state}
end
end
end
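The run cycle above — filter entities by component keys, thread state through each `perform` call — can be sketched standalone with plain maps. This is an illustration of the data flow only, not the real `ECS.System` behaviour module:

```elixir
# A "system" here is just a map: component keys to match, an initial
# state, and a perform fun returning {entity, new_state}.
system = %{
  component_keys: [:countable],
  initial_state: 0,
  perform: fn entity, count -> {entity, count + 1} end
}

entities = [%{countable: true}, %{name: "no-op"}, %{countable: true}]

# map_reduce walks the entities once, skipping those missing a key.
{entities, count} =
  Enum.map_reduce(entities, system.initial_state, fn entity, state ->
    if Enum.all?(system.component_keys, &Map.has_key?(entity, &1)) do
      system.perform.(entity, state)
    else
      {entity, state}
    end
  end)

count
# => 2
```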
# file: lib/ecs/system.ex
defmodule Model.Schedule do
@moduledoc """
The arrival drop off (`drop_off_type`) time (`arrival_time`) and departure pick up (`pickup_type`) time
(`departure_time`) to/from a stop (`stop_id`) at a given sequence (`stop_sequence`) along a trip (`trip_id`) going in
a direction (`direction_id`) along a route (`route_id`) when the trip is following a service (`service_id`) to
determine when it is active.
See [GTFS `stop_times.txt`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stop_timestxt)
For predictions of the actual arrival/departure time, see `Model.Prediction.t`.
"""
use Recordable, [
:trip_id,
:stop_id,
:arrival_time,
:departure_time,
:stop_sequence,
:stop_headsign,
:pickup_type,
:drop_off_type,
:position,
:route_id,
:direction_id,
:service_id,
:timepoint?
]
@typedoc """
| Value | Description |
|-------|-----------------------------|
| `0` | Regularly scheduled |
| `1` | Not available |
| `2` | Must phone agency |
| `3` | Must coordinate with driver |
See [GTFS `stop_times.txt` `drop_off_type` and `pickup_type`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stop_timestxt)
"""
@type pickup_drop_off_type :: 0..3
@typedoc """
The number of seconds past midnight
"""
@type seconds_past_midnight :: non_neg_integer
@typedoc """
* `true` - `arrival_time` and `departure_time` are exact
* `false` - `arrival_time` and `departure_time` are approximate or interpolated
"""
@type timepoint :: boolean
@typedoc """
* `:arrival_time` - When the vehicle arrives at `stop_id`. `nil` if the first stop (`stop_id`) on the trip
(`trip_id`). See
[GTFS `stop_times.txt` `arrival_time`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stop_timestxt)
* `:departure_time` - When the vehicle departs from `stop_id`. `nil` if the last stop (`stop_id`) on the trip
(`trip_id`). See
[GTFS `stop_times.txt` `departure_time`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stop_timestxt)
* `:direction_id` - Which direction along `route_id` the `trip_id` is going. See
[GTFS `trips.txt` `direction_id`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#tripstxt)
* `:drop_off_type` - How the vehicle arrives at `stop_id`. See
[GTFS `stop_times.txt` `drop_off_type`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stop_timestxt)
* `:pickup_type` - How the vehicle departs from `stop_id`. See
[GTFS `stop_times.txt` `pickup_type`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stop_timestxt)
* `:position` - Marks the first and last stop on the trip, so that the range of `stop_sequence` does not need to be
known or calculated.
* `:route_id` - The route `trip_id` is on doing in `direction_id`. See
[GTFS `trips.txt` `route_id`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#tripstxt)
* `:service_id` - The service that `trip_id` is following to determine when it is active. See
[GTFS `trips.txt` `service_id`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#tripstxt)
* `:stop_id` - The stop being arrived at and departed from. See
[GTFS `stop_times.txt` `stop_id`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stop_timestxt)
* `:stop_sequence` - The sequence the `stop_id` is arrived at during the `trip_id`. The stop sequence is
monotonically increasing along the trip, but the `stop_sequence` along the `trip_id` are not necessarily
consecutive. See
[GTFS `stop_times.txt` `stop_sequence`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stop_timestxt)
* `:stop_headsign` - Text identifying destination of the trip, overriding trip-level headsign if present. See [GTFS `stop_times.txt` `stop_headsign`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stop_timestxt)
* `:timepoint?` - `true` if `arrival_time` and `departure_time` are exact; otherwise, `false`. See
[GTFS `stop_times.txt` `timepoint`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stop_timestxt)
* `:trip_id` - The trip on which `stop_id` occurs in `stop_sequence`. See
[GTFS `stop_times.txt` `trip_id`](https://github.com/google/transit/blob/master/gtfs/spec/en/reference.md#stop_timestxt)
"""
@type t :: %__MODULE__{
arrival_time: seconds_past_midnight | nil,
departure_time: seconds_past_midnight | nil,
direction_id: Model.Direction.id(),
drop_off_type: pickup_drop_off_type,
pickup_type: pickup_drop_off_type,
position: :first | :last | nil,
route_id: Model.Route.id(),
service_id: Model.Service.id(),
stop_id: Model.Stop.id(),
stop_sequence: non_neg_integer,
stop_headsign: String.t() | nil,
timepoint?: timepoint,
trip_id: Model.Trip.id()
}
@doc """
The arrival time or departure time of the schedule.
"""
@spec time(t) :: seconds_past_midnight
def time(%__MODULE__{arrival_time: time}) when not is_nil(time) do
time
end
def time(%__MODULE__{departure_time: time}) when not is_nil(time) do
time
end
end
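A `seconds_past_midnight` value can exceed 24 hours in GTFS for trips that run past midnight. A hypothetical helper (not part of this module) rendering such a value as a clock string:

```elixir
# Render a seconds-past-midnight value (possibly over 24h for
# overnight trips) as an HH:MM:SS string, zero-padded.
to_clock = fn seconds ->
  hours = div(seconds, 3600)
  minutes = seconds |> rem(3600) |> div(60)
  secs = rem(seconds, 60)

  :io_lib.format("~2..0B:~2..0B:~2..0B", [hours, minutes, secs])
  |> IO.iodata_to_binary()
end

to_clock.(90_000)
# => "25:00:00"
```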
# file: apps/model/lib/model/schedule.ex
defmodule Timex.Parsers.DateFormat.Directive do
@moduledoc """
This module defines parsing directives for all date/time
tokens timex knows about. It is composed of a Directive struct,
containing the rules for parsing a given token, and a `get/1`
function, which fetches a directive for a given token value, i.e. `:year4`.
"""
alias Timex.DateFormat.Formats
alias Timex.Parsers.DateFormat.Directive
require Formats
@derive Access
defstruct token: :undefined,
# The number of characters this directive can occupy
# Should either be :word, meaning it is bounded by the
# next occurance of whitespace, an integer, which is a
# strict limit on the length (no more, no less), or a range
# which defines the min and max lengths that are considered
# valid
len: 0,
# The minimum value of a numeric directive
min: false,
# The maximum value of a numeric directive
max: false,
# Allows :numeric, :alpha, :match, :format, :char
type: :undefined,
# Either false, or a number representing the amount of padding
pad: false,
pad_type: :zero,
# Can be false, meaning no validation, a function to call which
# will be passed the parsed value as a string, and should return
# true/false, or a regex, which will be required to match the parsed
# value.
validate: false,
# If type: :match is given, this should contain either a match value, or
# a list of values of which the parsed value should be a member.
match: false,
# If type: :format is given, this is the format specification to parse
# the input string with.
# Expected format:
# [tokenizer: <module>, format: <format string>]
format: false,
# If this token is not required in the source string
optional: false,
# The raw token
raw: ""
@doc """
Gets a parsing directive for the given token name, where the token name
is an atom.
## Example
iex> alias Timex.Parsers.DateFormat.Directive
iex> Directive.get(:year4)
%Directive{token: :year4, len: 1..4, type: :numeric, pad: 0}
"""
@spec get(atom) :: %Directive{}
def get(token)
# Years
def get(:year4), do: %Directive{token: :year4, len: 1..4, type: :numeric, pad: 0}
def get(:year2), do: %Directive{token: :year2, len: 1..2, type: :numeric, pad: 0}
def get(:century), do: %Directive{token: :century, len: 1..2, type: :numeric, pad: 0}
def get(:iso_year4), do: %Directive{token: :iso_year4, len: 1..4, type: :numeric, pad: 0}
def get(:iso_year2), do: %Directive{token: :iso_year2, len: 1..2, type: :numeric, pad: 0}
# Months
def get(:month), do: %Directive{token: :month, len: 1..2, min: 1, max: 12, type: :numeric, pad: 0}
def get(:mshort), do: %Directive{token: :mshort, len: 3, type: :word, validate: :month_to_num}
def get(:mfull), do: %Directive{token: :mfull, len: :word, type: :word, validate: :month_to_num}
# Days
def get(:day), do: %Directive{token: :day, len: 1..2, min: 1, max: 31, type: :numeric, pad: 0}
def get(:oday), do: %Directive{token: :oday, len: 1..3, min: 1, max: 366, type: :numeric, pad: 0}
# Weeks
def get(:iso_weeknum), do: %Directive{token: :iso_weeknum, len: 1..2, min: 1, max: 53, type: :numeric, pad: 0}
def get(:week_mon), do: %Directive{token: :week_mon, len: 1..2, min: 1, max: 53, type: :numeric, pad: 0}
def get(:week_sun), do: %Directive{token: :week_sun, len: 1..2, min: 1, max: 53, type: :numeric, pad: 0}
def get(:wday_mon), do: %Directive{token: :wday_mon, len: 1, min: 0, max: 6, type: :numeric, pad: 0}
def get(:wday_sun), do: %Directive{token: :wday_sun, len: 1, min: 1, max: 7, type: :numeric, pad: 0}
def get(:wdshort), do: %Directive{token: :wdshort, len: 3, type: :word, validate: :day_to_num}
def get(:wdfull), do: %Directive{token: :wdfull, len: :word, type: :word, validate: :day_to_num}
# Hours
def get(:hour24), do: %Directive{token: :hour24, len: 1..2, min: 0, max: 24, type: :numeric, pad: 0}
def get(:hour12), do: %Directive{token: :hour12, len: 1..2, min: 1, max: 12, type: :numeric, pad: 0}
def get(:min), do: %Directive{token: :min, len: 1..2, min: 0, max: 59, type: :numeric, pad: 0}
def get(:sec), do: %Directive{token: :sec, len: 1..2, min: 0, max: 59, type: :numeric, pad: 0}
def get(:sec_fractional), do: %Directive{token: :sec_fractional, len: 1..3, min: 0, max: 999, type: :numeric, pad: 0, optional: true}
def get(:sec_epoch), do: %Directive{token: :sec_epoch, len: :word, type: :numeric, pad: 0}
def get(:am), do: %Directive{token: :am, len: 2, type: :match, match: ["am", "pm"]}
def get(:AM), do: %Directive{token: :AM, len: 2, type: :match, match: ["AM", "PM"]}
# Timezones
def get(:zname), do: %Directive{token: :zname, len: 1..4, type: :word}
def get(:zoffs), do: %Directive{token: :zoffs, len: 5, type: :word, validate: ~r/^[-+]\d{4}$/}
def get(:zoffs_colon), do: %Directive{token: :zoffs_colon, len: 6, type: :word, validate: ~r/^[-+]\d{2}:\d{2}$/}
def get(:zoffs_sec), do: %Directive{token: :zoffs_sec, len: 9, type: :word, validate: ~r/^[-+]\d{2}:\d{2}:\d{2}$/}
# Preformatted Directives
def get(:iso_8601), do: %Directive{token: :iso_8601, type: :format, format: Formats.iso_8601}
def get(:iso_8601z), do: %Directive{token: :iso_8601z, type: :format, format: Formats.iso_8601z}
def get(:iso_date), do: %Directive{token: :iso_date, type: :format, format: Formats.iso_date}
def get(:iso_time), do: %Directive{token: :iso_time, type: :format, format: Formats.iso_time}
def get(:iso_week), do: %Directive{token: :iso_week, type: :format, format: Formats.iso_week}
def get(:iso_weekday), do: %Directive{token: :iso_weekday, type: :format, format: Formats.iso_weekday}
def get(:iso_ordinal), do: %Directive{token: :iso_ordinal, type: :format, format: Formats.iso_ordinal}
def get(:rfc_822), do: %Directive{token: :rfc_822, type: :format, format: Formats.rfc_822}
def get(:rfc_822z), do: %Directive{token: :rfc_822z, type: :format, format: Formats.rfc_822z}
def get(:rfc_1123), do: %Directive{token: :rfc_1123, type: :format, format: Formats.rfc_1123}
def get(:rfc_1123z), do: %Directive{token: :rfc_1123z, type: :format, format: Formats.rfc_1123z}
def get(:rfc_3339), do: %Directive{token: :rfc_3339, type: :format, format: Formats.rfc_3339}
def get(:rfc_3339z), do: %Directive{token: :rfc_3339z, type: :format, format: Formats.rfc_3339z}
def get(:ansic), do: %Directive{token: :ansic, type: :format, format: Formats.ansic}
def get(:unix), do: %Directive{token: :unix, type: :format, format: Formats.unix}
def get(:kitchen), do: %Directive{token: :kitchen, type: :format, format: Formats.kitchen}
def get(:slashed), do: %Directive{token: :slashed, type: :format, format: Formats.slashed_date}
def get(:strftime_iso_date), do: %Directive{token: :strftime_iso_date, type: :format, format: Formats.strftime_iso_date}
def get(:strftime_clock), do: %Directive{token: :strftime_clock, type: :format, format: Formats.strftime_clock}
def get(:strftime_kitchen), do: %Directive{token: :strftime_kitchen, type: :format, format: Formats.strftime_kitchen}
def get(:strftime_shortdate), do: %Directive{token: :strftime_shortdate, type: :format, format: Formats.strftime_shortdate}
# Catch-all
def get(directive), do: {:error, "Unrecognized directive: #{directive}"}
end
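Directives carrying a `validate:` regex, such as `:zoffs`, accept a parsed value only when the pattern matches the whole token; for example:

```elixir
# The :zoffs directive validates offsets such as "+0200" with
# the anchored regex from its Directive struct.
zoffs = ~r/^[-+]\d{4}$/

Regex.match?(zoffs, "+0200")  # => true
Regex.match?(zoffs, "02:00")  # => false
```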
# file: lib/parsers/dateformat/directive.ex
defmodule RDF.Triple do
@moduledoc """
Helper functions for RDF triples.
An RDF triple is represented as a plain Elixir tuple consisting of three valid
RDF values for subject, predicate and object.
"""
alias RDF.Statement
@type t :: {Statement.subject, Statement.predicate, Statement.object}
@type coercible_t ::
{Statement.coercible_subject, Statement.coercible_predicate,
Statement.coercible_object}
@type t_values :: {String.t, String.t, any}
@doc """
Creates a `RDF.Triple` with proper RDF values.
An error is raised when the given elements are not coercible to RDF values.
Note: The `RDF.triple` function is a shortcut to this function.
## Examples
iex> RDF.Triple.new("http://example.com/S", "http://example.com/p", 42)
{~I<http://example.com/S>, ~I<http://example.com/p>, RDF.literal(42)}
iex> RDF.Triple.new(EX.S, EX.p, 42)
{RDF.iri("http://example.com/S"), RDF.iri("http://example.com/p"), RDF.literal(42)}
"""
@spec new(
Statement.coercible_subject,
Statement.coercible_predicate,
Statement.coercible_object
) :: t
def new(subject, predicate, object) do
{
Statement.coerce_subject(subject),
Statement.coerce_predicate(predicate),
Statement.coerce_object(object)
}
end
@doc """
Creates a `RDF.Triple` with proper RDF values.
An error is raised when the given elements are not coercible to RDF values.
Note: The `RDF.triple` function is a shortcut to this function.
## Examples
iex> RDF.Triple.new {"http://example.com/S", "http://example.com/p", 42}
{~I<http://example.com/S>, ~I<http://example.com/p>, RDF.literal(42)}
iex> RDF.Triple.new {EX.S, EX.p, 42}
{RDF.iri("http://example.com/S"), RDF.iri("http://example.com/p"), RDF.literal(42)}
"""
@spec new(coercible_t) :: t
def new({subject, predicate, object}), do: new(subject, predicate, object)
@doc """
Returns a tuple of native Elixir values from a `RDF.Triple` of RDF terms.
Returns `nil` if one of the components of the given tuple is not convertible via `RDF.Term.value/1`.
The optional second argument allows to specify a custom mapping with a function
which will receive a tuple `{statement_position, rdf_term}` where
`statement_position` is one of the atoms `:subject`, `:predicate` or `:object`,
while `rdf_term` is the RDF term to be mapped. When the given function returns
`nil` this will be interpreted as an error and will become the overall result
of the `values/2` call.
## Examples
iex> RDF.Triple.values {~I<http://example.com/S>, ~I<http://example.com/p>, RDF.literal(42)}
{"http://example.com/S", "http://example.com/p", 42}
iex> {~I<http://example.com/S>, ~I<http://example.com/p>, RDF.literal(42)}
...> |> RDF.Triple.values(fn
...> {:object, object} -> RDF.Term.value(object)
...> {_, term} -> term |> to_string() |> String.last()
...> end)
{"S", "p", 42}
"""
@spec values(t | any, Statement.term_mapping) :: t_values | nil
def values(triple, mapping \\ &Statement.default_term_mapping/1)
def values({subject, predicate, object}, mapping) do
with subject_value when not is_nil(subject_value) <- mapping.({:subject, subject}),
predicate_value when not is_nil(predicate_value) <- mapping.({:predicate, predicate}),
object_value when not is_nil(object_value) <- mapping.({:object, object})
do
{subject_value, predicate_value, object_value}
else
_ -> nil
end
end
def values(_, _), do: nil
@doc """
Checks if the given tuple is a valid RDF triple.
The elements of a valid RDF triple must be RDF terms. On the subject
position only IRIs and blank nodes are allowed, while on the predicate
position only IRIs are allowed. The object position can be any RDF term.
"""
@spec valid?(t | any) :: boolean
def valid?(tuple)
def valid?({_, _, _} = triple), do: Statement.valid?(triple)
def valid?(_), do: false
end
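The `with`-based short-circuiting used by `values/2` can be illustrated standalone: each tuple position is passed to the mapping, and a `nil` from any position fails the whole conversion. The names here are illustrative only, not part of the module:

```elixir
# Map each position of a three-tuple; return nil for the whole
# tuple if any single mapping returns nil (mirrors values/2).
values = fn {s, p, o}, mapping ->
  with s_val when not is_nil(s_val) <- mapping.({:subject, s}),
       p_val when not is_nil(p_val) <- mapping.({:predicate, p}),
       o_val when not is_nil(o_val) <- mapping.({:object, o}) do
    {s_val, p_val, o_val}
  else
    _ -> nil
  end
end

# Custom mapping: upcase the subject, pass everything else through.
upcase_subject = fn
  {:subject, term} -> String.upcase(term)
  {_, term} -> term
end

values.({"s", "p", 42}, upcase_subject)
# => {"S", "p", 42}
```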
# file: lib/rdf/triple.ex
defmodule ExCrud do
@moduledoc """
A module that injects crud based functions into a context module for use in Ecto based applications
Call the `use ExCrud` macro to inject crud functions into the context module
In each Context Module - you want to use the crud functions
-----------------------------------------------------------
defmodule MyApp.Post do
use ExCrud, context: MyApp.Repo, schema_module: MyApp.Post.PostSchema
end
List of the Crud functions available in the context module
----------------------------------------------------------
- `add/1`
## Examples:
- `iex> MyApp.Post.add(%{key1: value1, key2: value2})`
`{:ok, struct}`
- `iex> MyApp.Post.add(key1: value1, key2: value)`
`{:ok, struct}`
- `get/1`
## Examples:
- `iex> MyApp.Post.get(1)`
`{:ok, struct}`
- `iex> MyApp.Post.get(id: 1)`
`{:ok, struct}`
- `iex> MyApp.Post.get(%{id: 1})`
`{:ok, struct}`
- `get_all/0`
## Examples
- `iex> MyApp.Post.get_all()`
`{:ok, list of structures}`
- `get_few/1`
## Examples
- `iex> MyApp.Post.get_few(200)`
`{:ok, list of structures}`
- `update/2`
## Examples
- `iex> MyApp.Post.update(struct, key: value)`
`{:ok, struct}`
- `iex> MyApp.Post.update(struct, %{key: value})`
`{:ok, struct}`
- `iex> MyApp.Post.update(id, key: value)`
`{:ok, struct}`
- `iex> MyApp.Post.update(id, %{key: value})`
`{:ok, struct}`
- `delete/1`
## Examples
- `iex> MyApp.Post.delete(1)`
`{:ok, structure}`
- `iex> MyApp.Post.delete(struct)`
`{:ok, structure}`
Inspiration
-----------
- Here are two existing libraries that inspired this one and I stand on their shoulders. I just wanted to
do this slightly differently and hence this library.
- https://github.com/PavelDotsenko/CRUD
- https://github.com/jungsoft/crudry
"""
@moduledoc since: "1.0.5"
use Ecto.Schema
defmacro __using__(opts) do
quote bind_quoted: [opts: opts] do
import Ecto.Query, only: [from: 2, where: 2, where: 3, offset: 2]
@cont Keyword.get(opts, :context)
@schema Keyword.get(opts, :schema_module)
ExCrud.Utils.raise_context_not_set_error(@cont)
ExCrud.Utils.raise_schema_not_set_error(@schema)
@doc """
Returns the current Repo
"""
def context(), do: @cont
@doc """
Adds a new entity to the database
## Takes in parameters:
- `opts`: Map or `key: value` parameters separated by commas
## Returns:
- `{:ok, struct}`
- `{:error, error as a string or list of errors}`
## Examples:
- `iex> MyApp.Post.add(%{key1: value1, key2: value2})`
`{:ok, struct}`
- `iex> MyApp.Post.add(key1: value1, key2: value)`
`{:ok, struct}`
"""
def add(opts), do: @cont.insert(set_field(@schema, opts)) |> response(@schema)
@doc """
Retrieves structure from DB
## Takes in parameters:
- Using `id` records from the database
- `id`: Structure identifier in the database
- Search by a bunch of `keys: value` of a record in the database
- `opts`: Map or `key: value` parameters separated by commas
## Returns:
- `{:ok, struct}`
- `{:error, error as a string}`
## Examples:
- `iex> MyApp.Post.get(1)`
`{:ok, struct}`
- `iex> MyApp.Post.get(id: 1)`
`{:ok, struct}`
- `iex> MyApp.Post.get(%{id: 1})`
`{:ok, struct}`
"""
def get(id) when is_integer(id) or is_binary(id) do
@cont.get(@schema, id) |> response(@schema)
end
@doc """
Retrieves structure from DB
## Takes in parameters:
- Using `id` records from the database
- `id`: Structure identifier in the database
- Search by a bunch of `keys: value` of a record in the database
- `opts`: Map or `key: value` parameters separated by commas
## Returns:
- `{:ok, struct}`
- `{:error, error as a string}`
## Examples:
- `iex> MyApp.Post.get(1)`
`{:ok, struct}`
- `iex> MyApp.Post.get(id: 1)`
`{:ok, struct}`
- `iex> MyApp.Post.get(%{id: 1})`
`{:ok, struct}`
"""
def get(opts) when is_list(opts) or is_map(opts) do
@cont.get_by(@schema, opts_to_map(opts)) |> response(@schema)
end
@doc """
Returns a list of structures from the database corresponding to the given Module
## Takes in parameters:
## Returns:
- `{:ok, list of structures}`
- `{:ok, []}`
## Examples
- `iex> MyApp.Post.get_all()`
`{:ok, list of structures}`
"""
def get_all() do
{:ok, @cont.all(from(item in @schema, select: item, order_by: item.id))}
end
@doc """
Returns a list of structures from the database corresponding to the given Module
## Takes in parameters:
- `opts`: Map or `key: value` parameters separated by commas
## Returns
- `{:ok, list of structures}`
- `{:ok, []}`
## Examples
- `iex> MyApp.Post.get_all(id: 1)`
`{:ok, list of structures}`
- `iex> MyApp.Post.get_all(%{id: 1})`
`{:ok, list of structures}`
"""
def get_all(opts) when is_list(opts) or is_map(opts) do
{:ok, @cont.all(from(i in @schema, select: i, order_by: i.id) |> filter(opts))}
end
@doc """
Returns the specified number of items for the module
## Takes in parameters:
- `limit`: Number of items to display
## Returns
- `{:ok, list of structures}`
- `{:ok, []}`
## Examples
- `iex> MyApp.Post.get_few(200)`
`{:ok, list of structures}`
"""
def get_few(limit) when is_integer(limit) do
{:ok, @cont.all(from(i in @schema, select: i, order_by: i.id, limit: ^limit))}
end
@doc """
Returns the specified number of items for a module starting from a specific item
## Takes in parameters:
- `limit`: Number of items to display
- `offset`: First element number
## Returns
- `{:ok, list of structures}`
- `{:ok, []}`
## Examples
- `iex> MyApp.Post.get_few(200, 50)`
`{:ok, list of structures}`
"""
def get_few(limit, offset) when is_integer(limit) and is_integer(offset) do
query = from(i in @schema, select: i, order_by: i.id, limit: ^limit, offset: ^offset)
{:ok, @cont.all(query)}
end
@doc """
  Returns the specified number of items for a module that match the given options
## Takes in parameters:
- `limit`: Number of items to display
  - `opts`: Map, or `key: value` parameters separated by commas
## Returns
- `{:ok, list of structures}`
- `{:ok, []}`
## Examples
- `iex> MyApp.Post.get_few(200, key: value)`
`{:ok, list of structures}`
- `iex> MyApp.Post.get_few(200, %{key: value})`
`{:ok, list of structures}`
"""
def get_few(limit, opts) when is_list(opts) or is_map(opts) do
query = from(i in @schema, select: i, order_by: i.id, limit: ^limit)
{:ok, @cont.all(query |> filter(opts))}
end
@doc """
  Returns the specified number of items for a module, starting from a specific item and filtered by the given options
## Takes in parameters:
- `limit`: Number of items to display
- `offset`: First element number
  - `opts`: Map, or `key: value` parameters separated by commas
## Returns
- `{:ok, list of structures}`
- `{:ok, []}`
## Examples
- `iex> MyApp.Post.get_few(200, 50, key: value)`
`{:ok, list of structures}`
- `iex> MyApp.Post.get_few(200, 50, %{key: value})`
`{:ok, list of structures}`
"""
def get_few(limit, offset, opts) when is_list(opts) or is_map(opts) do
    query = from(i in @schema, select: i, order_by: i.id, limit: ^limit)
{:ok, @cont.all(query |> filter(opts) |> offset(^offset))}
end
@doc """
Makes changes to the structure from the database
## Takes in parameters:
- `item`: Structure for change
  - `opts`: Map, or `key: value` parameters separated by commas
  ## Returns
  - `{:ok, structure}`
  - `{:error, error as a string or list of errors}`
  ## Examples
  - `iex> MyApp.Post.update(item, key: value)`
  `{:ok, structure}`
  - `iex> MyApp.Post.update(item, %{key: value})`
  `{:ok, structure}`
"""
def update(%@schema{} = item, opts) when is_struct(item) do
ExCrud.Utils.check_changeset_function(item.__struct__)
item.__struct__.changeset(item, opts_to_map(opts))
|> @cont.update()
end
@doc """
Makes changes to the structure from the database
## Takes in parameters:
- `id`: Structure identifier in the database
  - `opts`: Map, or `key: value` parameters separated by commas
## Returns
- `{:ok, structure}`
  - `{:error, error as a string or list of errors}`
## Examples
- `iex> MyApp.Post.update(1, key: value)`
`{:ok, structure}`
- `iex> MyApp.Post.update(1, %{key: value})`
`{:ok, structure}`
"""
def update(id, opts) when is_integer(id) or is_binary(id),
do: get(id) |> update_response(opts)
@doc """
Makes changes to the structure from the database
## Takes in parameters:
- `key`: Field from structure
- `val`: Field value
  - `opts`: Map, or `key: value` parameters separated by commas
## Returns
- `{:ok, structure}`
- `{:error, error as a string or list of errors}`
## Examples
- `iex> MyApp.Post.update(key, 1, key: value)`
`{:ok, structure}`
- `iex> MyApp.Post.update(key, 1, %{key: value})`
`{:ok, structure}`
"""
def update(key, val, opts), do: get([{key, val}]) |> update_response(opts)
@doc """
Removes the specified structure from the database
## Takes in parameters:
- `item`: Structure
## Returns
- `{:ok, structure}`
- `{:error, error as a string or list of errors}`
## Examples
- `iex> MyApp.Post.delete(structure)`
`{:ok, structure}`
"""
def delete(%@schema{} = item) when is_struct(item) do
try do
@cont.delete(item)
rescue
      _ -> {:error, module_title(item) <> " is not found"}
end
end
@doc """
Removes the specified structure from the database
## Takes in parameters:
- `id`: Structure identifier in the database
## Returns
- `{:ok, structure}`
- `{:error, error as a string or list of errors}`
## Examples
- `iex> MyApp.Post.delete(1)`
`{:ok, structure}`
"""
def delete(id), do: get(id) |> delete_response()
@doc """
Returns a list of structures in which the values of the specified fields partially or completely correspond to the entered text
## Takes in parameters:
  - `opts`: Map, or `key: value` parameters with the text to search for
## Returns
- `{:ok, list of structures}`
- `{:ok, []}`
## Examples
- `iex> MyApp.Post.find(key: "sample")`
`{:ok, list of structures}`
- `iex> MyApp.Post.find(%{key: "sample"})`
`{:ok, list of structures}`
"""
def find(opts),
do: from(item in @schema, select: item) |> find(opts_to_map(opts), Enum.count(opts), 0)
defp set_field(mod, opts) do
ExCrud.Utils.check_changeset_function(mod)
mod.changeset(mod.__struct__, opts_to_map(opts))
end
defp opts_to_map(opts) when is_map(opts), do: opts
defp opts_to_map(opts) when is_list(opts),
do: Enum.reduce(opts, %{}, fn {key, value}, acc -> Map.put(acc, key, value) end)
defp find(query, opts, count, acc) do
{key, val} = Enum.at(opts, acc)
result = query |> where([i], ilike(field(i, ^key), ^"%#{val}%"))
if acc < count - 1,
do: find(result, opts, count, acc + 1),
else: {:ok, @cont.all(result)}
end
defp filter(query, opts), do: filter(query, opts, Enum.count(opts), 0)
defp filter(query, opts, count, acc) do
fields = Map.new([Enum.at(opts, acc)]) |> Map.to_list()
result = query |> where(^fields)
if acc < count - 1, do: filter(result, opts, count, acc + 1), else: result
end
defp module_title(mod) when is_struct(mod), do: module_title(mod.__struct__)
defp module_title(mod), do: Module.split(mod) |> Enum.at(Enum.count(Module.split(mod)) - 1)
defp error_handler(err) when is_struct(err),
do: Enum.map(err.errors, fn {key, {msg, _}} -> error_str(key, msg) end)
defp error_handler(err) when is_tuple(err),
do: Enum.map([err], fn {_, message} -> message end)
defp error_handler(error), do: error
defp delete_response({:error, reason}), do: {:error, error_handler(reason)}
defp delete_response({:ok, item}), do: delete(item)
defp update_response({:error, reason}, _opts), do: {:error, error_handler(reason)}
defp update_response({:ok, item}, opts), do: update(item, opts)
defp response(nil, mod), do: {:error, module_title(mod) <> " not found"}
defp response({:error, reason}, _module), do: {:error, error_handler(reason)}
defp response({:ok, item}, _module), do: {:ok, item}
defp response(item, _module), do: {:ok, item}
defp error_str(key, msg), do: "#{Atom.to_string(key) |> String.capitalize()}: #{msg}"
end
end
end
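The `opts_to_map/1` helper above is what lets every public function accept either a keyword list or a map. A minimal standalone sketch of that normalization (the `OptsDemo` module name is hypothetical, not part of the library):

```elixir
defmodule OptsDemo do
  # Maps pass through unchanged.
  def opts_to_map(opts) when is_map(opts), do: opts

  # Keyword lists are folded into a map, mirroring the private helper above;
  # a later duplicate key overwrites an earlier one, as with Map.put/3.
  def opts_to_map(opts) when is_list(opts) do
    Enum.reduce(opts, %{}, fn {key, value}, acc -> Map.put(acc, key, value) end)
  end
end

IO.inspect(OptsDemo.opts_to_map(id: 1, title: "post"))
# => %{id: 1, title: "post"}
```

Either call style therefore reaches the repository layer with the same map argument.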
# lib/ex_crud.ex
defmodule Currency do
@moduledoc """
Represents a Currency type accordingly to ISO 4217
"""
alias __MODULE__, as: Currency
alias Repository.CurrencyRepository, as: CurrencyRepository
defstruct alpha_code: "BRL",
numeric_code: 986,
exponent: 2,
name: "Brazilian Real",
symbol: "R$"
@doc """
Finds `Currency` from a given `alpha_code`
## Examples:
```
iex> Currency.find(:BRL)
%Currency{alpha_code: "BRL", exponent: 2, name: "Brazilian Real", numeric_code: 986, symbol: "R$"}
iex> Currency.find(:usd)
%Currency{alpha_code: "USD", exponent: 2, name: "US Dollar", numeric_code: 840, symbol: "$"}
iex> Currency.find(:LBR)
nil
"""
def find(alpha_code) do
find!(alpha_code)
rescue
ArgumentError -> nil
end
@doc """
Finds `Currency` from a given `alpha_code`
## Examples:
```
iex> Currency.find!(:BRL)
%Currency{alpha_code: "BRL", exponent: 2, name: "Brazilian Real", numeric_code: 986, symbol: "R$"}
iex> Currency.find!(:usd)
%Currency{alpha_code: "USD", exponent: 2, name: "US Dollar", numeric_code: 840, symbol: "$"}
iex> Currency.find!("brl")
%Currency{alpha_code: "BRL", exponent: 2, name: "Brazilian Real", numeric_code: 986, symbol: "R$"}
iex> Currency.find!(86)
** (ArgumentError) "86" must be atom or string
"""
def find!(alpha_code) do
cond do
is_atom(alpha_code) ->
Atom.to_string(alpha_code)
|> String.upcase()
|> String.to_atom()
|> get!()
is_binary(alpha_code) ->
String.upcase(alpha_code)
|> String.to_atom()
|> get!()
true ->
raise(ArgumentError, message: "\"#{alpha_code}\" must be atom or string")
end
end
@doc """
Returns `Currency` exponent
## Examples:
```
iex> Currency.get_factor(Currency.find!(:BRL))
100
"""
def get_factor(%Currency{exponent: exponent}) do
:math.pow(10, exponent) |> round()
end
defp get!(alpha_code) do
{_ok, currencies} = CurrencyRepository.all()
case Map.fetch(currencies, alpha_code) do
{:ok, currency} -> currency
      :error -> raise_not_found_currency(alpha_code)
end
end
@doc """
Returns the `alpha_code` represented as an `atom`
## Examples:
```
iex> Currency.to_atom(Currency.find!(:JPY))
:JPY
"""
def to_atom(%Currency{alpha_code: alpha_code}) do
alpha_code |> String.to_existing_atom()
end
defp raise_not_found_currency(alpha_code) do
raise ArgumentError,
message: "Currency #{alpha_code} not found"
end
end
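The core of `find!/1` above is case normalization: an atom or string in any case becomes an upcased atom key for the repository lookup. A self-contained sketch of just that step (the `AlphaCode` module name is hypothetical):

```elixir
defmodule AlphaCode do
  # :brl, :BRL, "brl", and "BRL" all normalize to :BRL.
  def normalize(code) when is_atom(code) do
    code |> Atom.to_string() |> String.upcase() |> String.to_atom()
  end

  def normalize(code) when is_binary(code) do
    code |> String.upcase() |> String.to_atom()
  end

  # Anything else raises, mirroring Currency.find!/1.
  def normalize(code) do
    raise(ArgumentError, message: "\"#{inspect(code)}\" must be atom or string")
  end
end

IO.inspect(AlphaCode.normalize("brl"))
# => :BRL
```

`find/1` then wraps this in `rescue` so callers get `nil` instead of an exception.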
# lib/money/currency/currency.ex
defmodule Pandadoc.Options.CreateFromTemplate do
@moduledoc """
Structure for creating documents
"""
alias Pandadoc.Options
@fields quote(
do: [
folder_uuid: String.t() | nil,
tags: list(String.t()) | nil,
recipients: list(Options.Recipient.t()) | nil,
tokens: list(Options.Token.t()) | nil,
fields: %{required(String.t()) => Options.Field.t()} | nil,
metadata: %{required(String.t()) => String.t()} | nil,
pricing_tables: list(Options.PricingTable.t()) | nil,
content_placeholders: list(Options.ContentPlaceHolders.t()) | nil,
images: list(Options.Image.t()) | nil
]
)
defstruct Keyword.keys(@fields)
@type t() :: %__MODULE__{unquote_splicing(@fields)}
end
defmodule Pandadoc.Options.CreateFromPDF do
@moduledoc """
Structure for creating documents
"""
alias Pandadoc.Options
@fields quote(
do: [
tags: list(String.t()) | nil,
recipients: list(Options.Recipient.t()) | nil,
fields: %{required(String.t()) => Options.Field.t()} | nil,
parse_form_fields: boolean()
]
)
defstruct Keyword.keys(@fields)
@type t() :: %__MODULE__{unquote_splicing(@fields)}
end
defmodule Pandadoc.Options.Token do
@moduledoc """
Structure for a token
"""
@fields quote(
do: [
name: String.t(),
value: String.t()
]
)
defstruct Keyword.keys(@fields)
@type t() :: %__MODULE__{unquote_splicing(@fields)}
end
defmodule Pandadoc.Options.Field do
@moduledoc """
Structure for a field
"""
@fields quote(
do: [
value: String.t(),
role: String.t() | nil
]
)
defstruct Keyword.keys(@fields)
@type t() :: %__MODULE__{unquote_splicing(@fields)}
end
defmodule Pandadoc.Options.Recipient do
@moduledoc """
Structure for recipients
"""
@fields quote(
do: [
email: String.t(),
first_name: String.t() | nil,
last_name: String.t() | nil,
role: String.t() | nil,
signing_order: integer() | nil
]
)
defstruct Keyword.keys(@fields)
@type t() :: %__MODULE__{unquote_splicing(@fields)}
end
defmodule Pandadoc.Options.Image do
@moduledoc """
Structure for images
"""
@fields quote(
do: [
name: String.t(),
urls: list(String.t())
]
)
defstruct Keyword.keys(@fields)
@type t() :: %__MODULE__{unquote_splicing(@fields)}
end
defmodule Pandadoc.Options.PricingTable do
@moduledoc """
  Structure for a pricing table
"""
alias Pandadoc.Options
@fields quote(
do: [
name: String.t(),
options: map() | nil,
sections: list(Options.PricingTableSection.t())
]
)
defstruct Keyword.keys(@fields)
@type t() :: %__MODULE__{unquote_splicing(@fields)}
end
defmodule Pandadoc.Options.PricingTableSection do
@moduledoc """
  Structure for pricing table sections
"""
alias Pandadoc.Options
@fields quote(
do: [
title: String.t(),
default: boolean(),
rows: list(Options.PricingTableRow.t())
]
)
defstruct Keyword.keys(@fields)
@type t() :: %__MODULE__{unquote_splicing(@fields)}
end
defmodule Pandadoc.Options.PricingTableRow do
@moduledoc """
  Structure for pricing table rows
"""
@fields quote(
do: [
options: map() | nil,
data: map() | nil,
custom_fields: map() | nil
]
)
defstruct Keyword.keys(@fields)
@type t() :: %__MODULE__{unquote_splicing(@fields)}
end
defmodule Pandadoc.Options.ContentPlaceHolders do
@moduledoc """
Structure for content placeholders
"""
@fields quote(
do: [
block_id: String.t(),
content_library_items: list(map()) | nil
]
)
defstruct Keyword.keys(@fields)
@type t() :: %__MODULE__{unquote_splicing(@fields)}
end
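Every module above uses the same pattern: the typespec keyword list is captured once with `quote/2`, then reused to derive both the struct keys and the `@type t()` definition, so fields and types can never drift apart. A minimal self-contained instance of the pattern (the `TokenSketch` name is hypothetical):

```elixir
defmodule TokenSketch do
  # The quoted keyword list holds field names as keys and typespec ASTs as
  # values; Keyword.keys/1 extracts just the names for defstruct.
  @fields quote(
            do: [
              name: String.t(),
              value: String.t()
            ]
          )

  defstruct Keyword.keys(@fields)

  # unquote_splicing/1 re-injects the same name/type pairs into the type.
  @type t() :: %__MODULE__{unquote_splicing(@fields)}
end

token = %TokenSketch{name: "discount", value: "10%"}
IO.inspect(token.name)
# => "discount"
```

Adding a field means touching only the `@fields` list; struct and typespec update together.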
# lib/pandadoc/documents/create_options.ex
defmodule AWS.CloudSearch do
@moduledoc """
Amazon CloudSearch Configuration Service
You use the Amazon CloudSearch configuration service to create, configure, and
manage search domains.
Configuration service requests are submitted using the AWS Query protocol. AWS
Query requests are HTTP or HTTPS requests submitted via HTTP GET or POST with a
query parameter named Action.
The endpoint for configuration service requests is region-specific:
cloudsearch.*region*.amazonaws.com. For example,
cloudsearch.us-east-1.amazonaws.com. For a current list of supported regions and
endpoints, see [Regions and Endpoints](http://docs.aws.amazon.com/general/latest/gr/rande.html#cloudsearch_region).
"""
@doc """
Indexes the search suggestions.
For more information, see [Configuring Suggesters](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/getting-suggestions.html#configuring-suggesters)
in the *Amazon CloudSearch Developer Guide*.
"""
def build_suggesters(client, input, options \\ []) do
request(client, "BuildSuggesters", input, options)
end
@doc """
Creates a new search domain.
For more information, see [Creating a Search Domain](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/creating-domains.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def create_domain(client, input, options \\ []) do
request(client, "CreateDomain", input, options)
end
@doc """
Configures an analysis scheme that can be applied to a `text` or `text-array`
field to define language-specific text processing options.
For more information, see [Configuring Analysis Schemes](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-analysis-schemes.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def define_analysis_scheme(client, input, options \\ []) do
request(client, "DefineAnalysisScheme", input, options)
end
@doc """
Configures an ``Expression`` for the search domain.
Used to create new expressions and modify existing ones. If the expression
exists, the new configuration replaces the old one. For more information, see
[Configuring Expressions](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-expressions.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def define_expression(client, input, options \\ []) do
request(client, "DefineExpression", input, options)
end
@doc """
Configures an ``IndexField`` for the search domain.
Used to create new fields and modify existing ones. You must specify the name of
the domain you are configuring and an index field configuration. The index field
configuration specifies a unique name, the index field type, and the options you
want to configure for the field. The options you can specify depend on the
``IndexFieldType``. If the field exists, the new configuration replaces the old
one. For more information, see [Configuring Index Fields](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-index-fields.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def define_index_field(client, input, options \\ []) do
request(client, "DefineIndexField", input, options)
end
@doc """
Configures a suggester for a domain.
A suggester enables you to display possible matches before users finish typing
their queries. When you configure a suggester, you must specify the name of the
text field you want to search for possible matches and a unique name for the
suggester. For more information, see [Getting Search Suggestions](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/getting-suggestions.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def define_suggester(client, input, options \\ []) do
request(client, "DefineSuggester", input, options)
end
@doc """
Deletes an analysis scheme.
For more information, see [Configuring Analysis Schemes](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-analysis-schemes.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def delete_analysis_scheme(client, input, options \\ []) do
request(client, "DeleteAnalysisScheme", input, options)
end
@doc """
Permanently deletes a search domain and all of its data.
Once a domain has been deleted, it cannot be recovered. For more information,
see [Deleting a Search Domain](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/deleting-domains.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def delete_domain(client, input, options \\ []) do
request(client, "DeleteDomain", input, options)
end
@doc """
Removes an ``Expression`` from the search domain.
For more information, see [Configuring Expressions](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-expressions.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def delete_expression(client, input, options \\ []) do
request(client, "DeleteExpression", input, options)
end
@doc """
Removes an ``IndexField`` from the search domain.
For more information, see [Configuring Index Fields](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-index-fields.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def delete_index_field(client, input, options \\ []) do
request(client, "DeleteIndexField", input, options)
end
@doc """
Deletes a suggester.
For more information, see [Getting Search Suggestions](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/getting-suggestions.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def delete_suggester(client, input, options \\ []) do
request(client, "DeleteSuggester", input, options)
end
@doc """
Gets the analysis schemes configured for a domain.
An analysis scheme defines language-specific text processing options for a
`text` field. Can be limited to specific analysis schemes by name. By default,
shows all analysis schemes and includes any pending changes to the
configuration. Set the `Deployed` option to `true` to show the active
configuration and exclude pending changes. For more information, see
[Configuring Analysis Schemes](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-analysis-schemes.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def describe_analysis_schemes(client, input, options \\ []) do
request(client, "DescribeAnalysisSchemes", input, options)
end
@doc """
Gets the availability options configured for a domain.
By default, shows the configuration with any pending changes. Set the `Deployed`
option to `true` to show the active configuration and exclude pending changes.
For more information, see [Configuring Availability Options](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-availability-options.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def describe_availability_options(client, input, options \\ []) do
request(client, "DescribeAvailabilityOptions", input, options)
end
@doc """
Returns the domain's endpoint options, specifically whether all requests to the
domain must arrive over HTTPS.
For more information, see [Configuring Domain Endpoint Options](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-domain-endpoint-options.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def describe_domain_endpoint_options(client, input, options \\ []) do
request(client, "DescribeDomainEndpointOptions", input, options)
end
@doc """
Gets information about the search domains owned by this account.
Can be limited to specific domains. Shows all domains by default. To get the
number of searchable documents in a domain, use the console or submit a
`matchall` request to your domain's search endpoint:
`q=matchall&q.parser=structured&size=0`. For more information, see
[Getting Information about a Search Domain](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/getting-domain-info.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def describe_domains(client, input, options \\ []) do
request(client, "DescribeDomains", input, options)
end
@doc """
Gets the expressions configured for the search domain.
Can be limited to specific expressions by name. By default, shows all
expressions and includes any pending changes to the configuration. Set the
`Deployed` option to `true` to show the active configuration and exclude pending
changes. For more information, see [Configuring Expressions](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-expressions.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def describe_expressions(client, input, options \\ []) do
request(client, "DescribeExpressions", input, options)
end
@doc """
Gets information about the index fields configured for the search domain.
Can be limited to specific fields by name. By default, shows all fields and
includes any pending changes to the configuration. Set the `Deployed` option to
`true` to show the active configuration and exclude pending changes. For more
information, see [Getting Domain Information](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/getting-domain-info.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def describe_index_fields(client, input, options \\ []) do
request(client, "DescribeIndexFields", input, options)
end
@doc """
Gets the scaling parameters configured for a domain.
A domain's scaling parameters specify the desired search instance type and
replication count. For more information, see [Configuring Scaling Options](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-scaling-options.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def describe_scaling_parameters(client, input, options \\ []) do
request(client, "DescribeScalingParameters", input, options)
end
@doc """
Gets information about the access policies that control access to the domain's
document and search endpoints.
By default, shows the configuration with any pending changes. Set the `Deployed`
option to `true` to show the active configuration and exclude pending changes.
For more information, see [Configuring Access for a Search Domain](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-access.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def describe_service_access_policies(client, input, options \\ []) do
request(client, "DescribeServiceAccessPolicies", input, options)
end
@doc """
Gets the suggesters configured for a domain.
A suggester enables you to display possible matches before users finish typing
their queries. Can be limited to specific suggesters by name. By default, shows
all suggesters and includes any pending changes to the configuration. Set the
`Deployed` option to `true` to show the active configuration and exclude pending
changes. For more information, see [Getting Search Suggestions](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/getting-suggestions.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def describe_suggesters(client, input, options \\ []) do
request(client, "DescribeSuggesters", input, options)
end
@doc """
Tells the search domain to start indexing its documents using the latest
indexing options.
This operation must be invoked to activate options whose `OptionStatus` is
`RequiresIndexDocuments`.
"""
def index_documents(client, input, options \\ []) do
request(client, "IndexDocuments", input, options)
end
@doc """
Lists all search domains owned by an account.
"""
def list_domain_names(client, input, options \\ []) do
request(client, "ListDomainNames", input, options)
end
@doc """
Configures the availability options for a domain.
Enabling the Multi-AZ option expands an Amazon CloudSearch domain to an
additional Availability Zone in the same Region to increase fault tolerance in
the event of a service disruption. Changes to the Multi-AZ option can take about
half an hour to become active. For more information, see [Configuring Availability
Options](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-availability-options.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def update_availability_options(client, input, options \\ []) do
request(client, "UpdateAvailabilityOptions", input, options)
end
@doc """
Updates the domain's endpoint options, specifically whether all requests to the
domain must arrive over HTTPS.
For more information, see [Configuring Domain Endpoint Options](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-domain-endpoint-options.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def update_domain_endpoint_options(client, input, options \\ []) do
request(client, "UpdateDomainEndpointOptions", input, options)
end
@doc """
Configures scaling parameters for a domain.
A domain's scaling parameters specify the desired search instance type and
replication count. Amazon CloudSearch will still automatically scale your domain
based on the volume of data and traffic, but not below the desired instance type
and replication count. If the Multi-AZ option is enabled, these values control
the resources used per Availability Zone. For more information, see [Configuring Scaling
Options](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-scaling-options.html)
in the *Amazon CloudSearch Developer Guide*.
"""
def update_scaling_parameters(client, input, options \\ []) do
request(client, "UpdateScalingParameters", input, options)
end
@doc """
Configures the access rules that control access to the domain's document and
search endpoints.
For more information, see [ Configuring Access for an Amazon CloudSearch Domain](http://docs.aws.amazon.com/cloudsearch/latest/developerguide/configuring-access.html).
"""
def update_service_access_policies(client, input, options \\ []) do
request(client, "UpdateServiceAccessPolicies", input, options)
end
@spec request(AWS.Client.t(), binary(), map(), list()) ::
{:ok, map() | nil, map()}
| {:error, term()}
defp request(client, action, input, options) do
client = %{client | service: "cloudsearch"}
host = build_host("cloudsearch", client)
url = build_url(host, client)
headers = [
{"Host", host},
{"Content-Type", "application/x-www-form-urlencoded"}
]
input = Map.merge(input, %{"Action" => action, "Version" => "2013-01-01"})
payload = encode!(client, input)
headers = AWS.Request.sign_v4(client, "POST", url, headers, payload)
post(client, url, payload, headers, options)
end
defp post(client, url, payload, headers, options) do
case AWS.Client.request(client, :post, url, payload, headers, options) do
{:ok, %{status_code: 200, body: body} = response} ->
body = if body != "", do: decode!(client, body)
{:ok, body, response}
{:ok, response} ->
{:error, {:unexpected_response, response}}
error = {:error, _reason} -> error
end
end
defp build_host(_endpoint_prefix, %{region: "local", endpoint: endpoint}) do
endpoint
end
defp build_host(_endpoint_prefix, %{region: "local"}) do
"localhost"
end
defp build_host(endpoint_prefix, %{region: region, endpoint: endpoint}) do
"#{endpoint_prefix}.#{region}.#{endpoint}"
end
defp build_url(host, %{:proto => proto, :port => port}) do
"#{proto}://#{host}:#{port}/"
end
defp encode!(client, payload) do
AWS.Client.encode!(client, payload, :query)
end
defp decode!(client, payload) do
AWS.Client.decode!(client, payload, :xml)
end
end
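The moduledoc notes that configuration requests use the AWS Query protocol: every call is a form-encoded POST with an `Action` parameter. The private `request/4` above does exactly that by merging `Action` and `Version` into the input before encoding. A standalone sketch of the payload assembly, with `URI.encode_query/1` standing in for `AWS.Client.encode!/3` (the input values are illustrative):

```elixir
# Caller-supplied parameters for, e.g., CreateDomain.
input = %{"DomainName" => "movies"}

# request/4 merges the operation name and API version into the same map,
# then form-encodes the result for the POST body.
payload =
  input
  |> Map.merge(%{"Action" => "CreateDomain", "Version" => "2013-01-01"})
  |> URI.encode_query()

IO.puts(payload)
# => Action=CreateDomain&DomainName=movies&Version=2013-01-01
```

The signed request then carries this payload with a `Content-Type` of `application/x-www-form-urlencoded`, as set in the headers above.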
# lib/aws/generated/cloud_search.ex
defmodule JaResource.Index do
import Plug.Conn, only: [put_status: 2]
@moduledoc """
Provides `handle_index/2`, `filter/4` and `sort/4` callbacks.
It relies on (and uses):
* JaResource.Repo
* JaResource.Records
* JaResource.Serializable
When used JaResource.Index defines the `index/2` action suitable for handling
json-api requests.
To customize the behaviour of the index action the following callbacks can be implemented:
* handle_index/2
* render_index/3
* filter/4
* sort/4
* JaResource.Records.records/1
* JaResource.Repo.repo/0
* JaResource.Serializable.serialization_opts/3
"""
@doc """
Returns the models to be represented by this resource.
Default implementation is the result of the JaResource.Records.records/2
callback. Usually a module or an `%Ecto.Query{}`.
The results of this callback are passed to the filter and sort callbacks before the query is executed.
`handle_index/2` can alternatively return a conn with any response/body.
Example custom implementation:
def handle_index(conn, _params) do
case conn.assigns[:user] do
nil -> App.Post
user -> User.own_posts(user)
end
end
In most cases JaResource.Records.records/1, filter/4, and sort/4 are the
better customization hooks.
"""
@callback handle_index(Plug.Conn.t(), map) :: Plug.Conn.t() | JaResource.records()
@doc """
Callback executed for each `filter` param.
For example, if you wanted to optionally filter on an Article's category and
issue, your request url might look like:
/api/articles?filter[category]=elixir&filter[issue]=12
You would then want two callbacks:
def filter(_conn, query, "category", category) do
where(query, category: category)
end
def filter(_conn, query, "issue", issue_id) do
where(query, issue_id: issue_id)
end
  You can also use guards to whitelist a handful of attributes:
@filterable_attrs ~w(title category author_id issue_id)
def filter(_conn, query, attr, val) when attr in @filterable_attrs do
where(query, [{String.to_existing_atom(attr), val}])
end
Anything not explicitly matched by your callbacks will be ignored.
"""
@callback filter(Plug.Conn.t(), JaResource.records(), String.t(), String.t()) ::
JaResource.records()
@doc """
Callback executed for each value in the sort param.
Fourth argument is the direction as an atom, either `:asc` or `:desc` based
upon the presence or not of a `-` prefix.
For example if you wanted to sort by date then title your request url might
look like:
/api/articles?sort=-created,title
You would then want two callbacks:
def sort(_conn, query, "created", direction) do
order_by(query, [{direction, :inserted_at}])
end
def sort(_conn, query, "title", direction) do
order_by(query, [{direction, :title}])
end
Anything not explicitly matched by your callbacks will be ignored.
"""
  @callback sort(Plug.Conn.t(), JaResource.records(), String.t(), :asc | :desc) ::
JaResource.records()
@doc """
Callback executed to query repo.
By default this just calls `all/2` on the repo. Can be customized for
pagination, monitoring, etc. For example to paginate with Scrivener:
def handle_index_query(%{query_params: qp}, query) do
repo().paginate(query, qp["page"] || %{})
end
"""
@callback handle_index_query(Plug.Conn.t(), Ecto.Query.t() | module) :: any
@doc """
Returns a `Plug.Conn` in response to successful update.
Default implementation renders the view.
"""
@callback render_index(Plug.Conn.t(), JaResource.records(), list) :: Plug.Conn.t()
@doc """
Execute the index action on a given module implementing Index behaviour and conn.
"""
def call(controller, conn) do
conn
|> controller.handle_index(conn.params)
|> JaResource.Index.filter(conn, controller)
|> JaResource.Index.sort(conn, controller)
|> JaResource.Index.execute_query(conn, controller)
|> JaResource.Index.respond(conn, controller)
end
defmacro __using__(_) do
quote do
use JaResource.Repo
use JaResource.Records
use JaResource.Serializable
@behaviour JaResource.Index
@before_compile JaResource.Index
def handle_index_query(_conn, query), do: repo().all(query)
def render_index(conn, models, opts) do
conn
|> Phoenix.Controller.render(:index, data: models, opts: opts)
end
      def handle_index(conn, _params), do: records(conn)
defoverridable handle_index: 2, render_index: 3, handle_index_query: 2
end
end
@doc false
defmacro __before_compile__(_) do
quote do
def filter(_conn, results, _key, _val), do: results
def sort(_conn, results, _key, _dir), do: results
end
end
@doc false
def filter(results, conn = %{params: %{"filter" => filters}}, resource) do
filters
|> Map.keys()
|> Enum.reduce(results, fn k, acc ->
resource.filter(conn, acc, k, filters[k])
end)
end
def filter(results, _conn, _controller), do: results
@sort_regex ~r/(-?)(\S*)/
@doc false
def sort(results, conn = %{params: %{"sort" => fields}}, controller) do
fields
|> String.split(",")
|> Enum.reduce(results, fn field, acc ->
case Regex.run(@sort_regex, field) do
[_, "", field] -> controller.sort(conn, acc, field, :asc)
[_, "-", field] -> controller.sort(conn, acc, field, :desc)
end
end)
end
def sort(results, _conn, _controller), do: results
@doc false
def execute_query(%Plug.Conn{} = conn, _conn, _controller), do: conn
def execute_query(results, _conn, _controller) when is_list(results), do: results
def execute_query(query, conn, controller), do: controller.handle_index_query(conn, query)
@doc false
def respond(%Plug.Conn{} = conn, _oldconn, _controller), do: conn
def respond({:error, errors}, conn, _controller), do: error(conn, errors)
def respond(models, conn, controller) do
opts = controller.serialization_opts(conn, conn.query_params, models)
controller.render_index(conn, models, opts)
end
defp error(conn, errors) do
conn
|> put_status(:internal_server_error)
|> Phoenix.Controller.render(:errors, data: errors)
end
end
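A hypothetical controller using this behaviour might look as follows. This is an illustrative sketch only: the `MyApp` names and `Post` schema are assumptions, not part of JaResource.

```elixir
defmodule MyApp.PostController do
  use MyApp.Web, :controller
  use JaResource.Index
  import Ecto.Query, only: [where: 2, order_by: 2]

  # Narrow the queryable set before filtering and sorting.
  def handle_index(_conn, _params), do: MyApp.Post

  # Whitelist one filter: GET /posts?filter[category]=elixir
  def filter(_conn, query, "category", category) do
    where(query, category: ^category)
  end

  # Whitelist one sort field: GET /posts?sort=-inserted_at
  def sort(_conn, query, "inserted_at", direction) do
    order_by(query, [{^direction, :inserted_at}])
  end
end
```

Unrecognized filter and sort keys fall through to the no-op defaults injected by `__before_compile__/1` above.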
# source: lib/ja_resource/index.ex
defmodule Sippet.Transactions.Server.Invite do
@moduledoc false
use Sippet.Transactions.Server, initial_state: :proceeding
alias Sippet.Message
alias Sippet.Message.StatusLine
alias Sippet.Transactions.Server.State
@t2 4_000
@before_trying 200
@timer_g 500
@timer_h 64 * @timer_g
# timer I is 5s
@timer_i 5_000
def init(%State{key: key, sippet: sippet} = data) do
# add an alias for incoming ACK requests for status codes != 200
Registry.register(sippet, {:transaction, %{key | method: :ack}}, nil)
super(data)
end
defp retry(
{past_wait, passed_time},
%State{extras: %{last_response: last_response}} = data
) do
send_response(last_response, data)
new_delay = min(past_wait * 2, @t2)
{:keep_state_and_data, [{:state_timeout, new_delay, {new_delay, passed_time + new_delay}}]}
end
def proceeding(:enter, _old_state, %State{request: request} = data) do
receive_request(request, data)
{:keep_state_and_data, [{:state_timeout, @before_trying, :still_trying}]}
end
def proceeding(:state_timeout, :still_trying, %State{request: request} = data) do
response = request |> Message.to_response(100)
data = send_response(response, data)
{:keep_state, data}
end
def proceeding(
:cast,
{:incoming_request, _request},
%State{extras: %{last_response: last_response}} = data
) do
send_response(last_response, data)
:keep_state_and_data
end
def proceeding(:cast, {:incoming_request, _request}, _data),
do: :keep_state_and_data
def proceeding(:cast, {:outgoing_response, response}, data) do
data = send_response(response, data)
case StatusLine.status_code_class(response.start_line) do
1 -> {:keep_state, data}
2 -> {:stop, :normal, data}
_ -> {:next_state, :completed, data}
end
end
def proceeding(:cast, {:error, reason}, data),
do: shutdown(reason, data)
def proceeding(event_type, event_content, data),
do: unhandled_event(event_type, event_content, data)
def completed(:enter, _old_state, %State{request: request} = data) do
actions =
if reliable?(request, data) do
[{:state_timeout, @timer_h, {@timer_h, @timer_h}}]
else
[{:state_timeout, @timer_g, {@timer_g, @timer_g}}]
end
{:keep_state_and_data, actions}
end
def completed(:state_timeout, time_event, data) do
{_past_wait, passed_time} = time_event
if passed_time >= @timer_h do
timeout(data)
else
retry(time_event, data)
end
end
def completed(
:cast,
{:incoming_request, request},
%State{extras: %{last_response: last_response}} = data
) do
case request.start_line.method do
:invite ->
send_response(last_response, data)
:keep_state_and_data
:ack ->
{:next_state, :confirmed, data}
_otherwise ->
shutdown(:invalid_method, data)
end
end
def completed(:cast, {:error, reason}, data),
do: shutdown(reason, data)
def completed(event_type, event_content, data),
do: unhandled_event(event_type, event_content, data)
def confirmed(:enter, _old_state, %State{request: request} = data) do
if reliable?(request, data) do
{:stop, :normal, data}
else
{:keep_state_and_data, [{:state_timeout, @timer_i, nil}]}
end
end
def confirmed(:state_timeout, _nil, data),
do: {:stop, :normal, data}
def confirmed(:cast, {:incoming_request, _request}, _data),
do: :keep_state_and_data
def confirmed(:cast, {:error, _reason}, _data),
do: :keep_state_and_data
def confirmed(event_type, event_content, data),
do: unhandled_event(event_type, event_content, data)
end
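The retransmission back-off implemented by `retry/2` above doubles the interval on each firing until it caps at T2, matching the INVITE server transaction retransmission rules (Timer G). A standalone sketch of the resulting delay sequence:

```elixir
t2 = 4_000
timer_g = 500

delays =
  timer_g
  |> Stream.iterate(&min(&1 * 2, t2))
  |> Enum.take(6)

# delays == [500, 1000, 2000, 4000, 4000, 4000]
```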
# source: lib/sippet/transactions/server/invite.ex
defmodule APNSx.Server do
@moduledoc """
Simulates an APNs server.
* Collects all certificate verifications and channel messages.
* Accepts any SSL connection (no real verification).
* Supports queued commands that trigger behaviours when a client sends data:
  * `:close` terminates the connection
  * `{:respond, data}` responds with the given data
"""
use GenServer
@doc """
Starts a server using the `certfile` and `keyfile` pair as paths to load
the SSL crypto and listens on `port`.
"""
@spec start(String.t, String.t, non_neg_integer) :: {:ok, pid}
def start(certfile, keyfile, port) do
GenServer.start_link(__MODULE__, {certfile, keyfile, port})
end
def init({certfile, keyfile, port}) do
server = self()
opts = [certfile: certfile,
keyfile: keyfile,
reuseaddr: true,
verify: :verify_peer,
verify_fun: {fn(a,b,c) -> verify(server, {a,b,c}) end, nil}]
{:ok, listen_socket} = :ssl.listen(port, opts)
Task.start fn ->
{:ok, socket} = :ssl.transport_accept(listen_socket)
:ok = :ssl.ssl_accept(socket)
:ssl.controlling_process(socket, server)
end
{:ok, {[],[]}}
end
@doc """
Retrieves all collected certificate verification calls and channel messages
from `server` in order of their arrival
"""
@spec retrieve_log(pid) :: [...]
def retrieve_log(server) do
GenServer.call(server, :retrieve_log)
end
@doc """
Enqueues a command `cmd` which will be used for the next incoming
channel message to the `server`
"""
@spec queue_cmd(pid, any) :: :ok
def queue_cmd(server, cmd) do
GenServer.call(server, {:queue_cmd, cmd})
end
defp verify(server, payload) do
GenServer.call(server, {:verify, payload})
end
def handle_call(:retrieve_log, _, {cmds, log}) do
{:reply, Enum.reverse(log), {cmds, log}}
end
def handle_call({:queue_cmd, cmd}, _, {cmds, log}) do
{:reply, :ok, {[cmd |cmds], log}}
end
def handle_call({:verify, {cert, result, user_state}}, _, {cmds, log}) do
log = [{:cert, {cert, result}} | log]
{:reply, {:valid, user_state}, {cmds, log}}
end
def handle_info({:ssl, _, data}, {[], log}) do
log = [{:ssl, data} | log]
{:noreply, {[], log}}
end
def handle_info({:ssl, ssl_socket, data}, {[cmd | cmds], log}) do
log = [{:ssl, data} | log]
case cmd do
{:respond, data} ->
:ok = :ssl.send(ssl_socket, data)
:close ->
:ok = :ssl.close(ssl_socket)
end
{:noreply, {cmds, log}}
end
end
# source: lib/apns/server.ex
defmodule Weaver.Loom do
@moduledoc """
Enables running a topology of concurrent Weaver workers using `GenStage`.
## Supervisor
`Weaver.Loom` implements the `Supervisor` specification, so you can run it
as part of any supervision tree:
```
defmodule MyApp.Application do
...
def start(_type, _args) do
children = [
...
Weaver.Loom
]
opts = [strategy: :one_for_one, name: MyApp.Supervisor]
Supervisor.start_link(children, opts)
end
end
```
## Usage
The easiest way to use Loom is to build an event with `Weaver.Loom.prepare/4`
and pass it to `Weaver.Loom.weave/1`, providing:
* a GraphQL query (String)
* a schema module
* an optional list of options (see `Weaver.prepare/2`)
* a callback function
### Callback function
The callback function is called with:
* a `result` tuple (`Weaver.Step.Result`)
* a map of `assigns`
It may return either of:
* `{:ok, dispatch, next, dispatch_assigns, next_assigns}` to signal Loom to continue the stream
* `dispatch` is a list of steps to be dispatched to the next level of workers - usually the (modified) result's `dispatched` list
* `next` is a step to be processed next by the same worker - usually the result's `next` step
* `dispatch_assigns` is a map to be passed to callbacks in the `dispatch` steps
* `next_assigns` is a map to be passed to callbacks in the `next` step
* `{:retry, assigns, delay}` to signal Loom to retry the step after the given delay (in milliseconds)
* `{:error, error}` to signal Loom to stop processing this stream
It can choose based on the `errors` it receives as an argument.
### Error handling
Error cases outside Weaver:
* an error in the resolver that can be retried (e.g. connection timeout)
-> resolver adds retry with delay hint (in milliseconds) to `errors`
e.g. `{:retry, reason, delay}`
* an error in the resolver that can not be retried (e.g. wrong type returned)
-> resolver adds error to `errors`
e.g. `{:error, reason}`
* an error in the callback function
-> the callback is responsible for handling its errors
-> Loom will catch uncaught errors, ignore them, and continue with the next event
"""
use Supervisor
alias Weaver.Loom.{Consumer, Event, Producer, Prosumer}
def start_link(_arg) do
Supervisor.start_link(__MODULE__, :ok, name: __MODULE__)
end
def prepare(query, schema, opts \\ [], callback)
def prepare(query, schema, opts, callback) when is_binary(query) do
with {:ok, step} <- Weaver.prepare(query, schema, opts) do
prepare(step, schema, opts, callback)
end
end
def prepare(step = %Absinthe.Blueprint{}, _schema, _opts, callback) do
%Event{step: step, callback: callback}
end
def weave(event = %Event{}) do
Producer.add(event)
end
def init(:ok) do
children = [
Producer,
processor(:weaver_processor_1a, [Producer]),
processor(:weaver_processor_2a, [:weaver_processor_1a]),
processor(:weaver_processor_3a, [:weaver_processor_2a]),
processor(:weaver_processor_4a, [:weaver_processor_3a]),
processor(:weaver_processor_5a, [:weaver_processor_4a]),
processor(:weaver_processor_6a, [:weaver_processor_5a]),
processor(:weaver_processor_7a, [:weaver_processor_6a]),
processor(:weaver_processor_8a, [:weaver_processor_7a]),
processor(:weaver_processor_9a, [:weaver_processor_8a], Consumer)
]
Supervisor.init(children, strategy: :rest_for_one)
end
defp processor(name, subscriptions, role \\ Prosumer) do
Supervisor.child_spec({role, {name, subscriptions}}, id: name)
end
end
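A minimal callback sketch matching the contract described in the moduledoc. The `Weaver.Step.Result` accessors used here (`errors/1`, `dispatched/1`, `next/1`) are assumptions based on the moduledoc and may differ from the actual API:

```elixir
# Illustrative callback; the Result accessor names are assumed, not confirmed.
callback = fn result, assigns ->
  case Weaver.Step.Result.errors(result) do
    [] ->
      dispatch = Weaver.Step.Result.dispatched(result)
      next = Weaver.Step.Result.next(result)
      {:ok, dispatch, next, assigns, assigns}

    [{:retry, _reason, delay} | _] ->
      # Ask Loom to retry this step after `delay` milliseconds.
      {:retry, assigns, delay}

    [{:error, reason} | _] ->
      # Stop processing this stream.
      {:error, reason}
  end
end
```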
# source: lib/weaver/loom/loom.ex
defmodule Vttyl.Decode do
@moduledoc false
alias Vttyl.Part
def parse(enum_content) do
enum_content
|> Stream.map(fn line -> Regex.replace(~r/#.*/, line, "") end)
|> Stream.map(&String.trim/1)
|> Stream.reject(&(&1 in ["", "WEBVTT"]))
|> Stream.chunk_while(%Part{}, &parse_chunk/2, &parse_chunk_after/1)
|> Stream.filter(&full_chunk?/1)
end
defp parse_chunk(line, acc) do
acc =
cond do
Regex.match?(~r/^\d+$/, line) ->
%Part{acc | part: String.to_integer(line)}
not is_nil(acc.part) and timestamps?(line) ->
{start_ts, end_ts} = parse_timestamps(line)
%Part{acc | start: start_ts, end: end_ts}
# Text content should be on one line and the other stuff should have appeared
not is_nil(acc.part) and not is_nil(acc.start) and not is_nil(acc.end) and line != "" ->
%Part{acc | text: line}
true ->
acc
end
if full_chunk?(acc) do
{:cont, acc, %Part{}}
else
{:cont, acc}
end
end
defp parse_chunk_after(acc) do
if full_chunk?(acc) do
{:cont, acc, %Part{}}
else
{:cont, acc}
end
end
defp full_chunk?(%Part{part: part, start: start, end: ts_end, text: text}) do
not is_nil(part) and not is_nil(start) and not is_nil(ts_end) and not is_nil(text)
end
@ts_pattern ~S"(?:(\d{2,}):)?(\d{2}):(\d{2})\.(\d{3})"
@line_regex ~r/#{@ts_pattern} --> #{@ts_pattern}/
@ts_regex ~r/#{@ts_pattern}/
# 00:00:00.000 --> 00:01:01.000
defp timestamps?(line) do
Regex.match?(@line_regex, line)
end
defp parse_timestamps(line) do
line
|> String.split("-->")
|> Enum.map(fn ts ->
ts = String.trim(ts)
[hour, minute, second, millisecond] = Regex.run(@ts_regex, ts, capture: :all_but_first)
case hour do
"" -> 0
hour -> String.to_integer(hour) * 3_600_000
end +
String.to_integer(minute) * 60_000 +
String.to_integer(second) * 1_000 +
String.to_integer(millisecond)
end)
|> List.to_tuple()
end
end
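To make the timestamp arithmetic in `parse_timestamps/1` concrete: the cue time `00:01:01.500` reduces to milliseconds via the same formula used above (standalone sketch, not part of the library):

```elixir
# hours -> 3_600_000 ms, minutes -> 60_000 ms, seconds -> 1_000 ms
to_ms = fn hour, minute, second, millisecond ->
  hour * 3_600_000 + minute * 60_000 + second * 1_000 + millisecond
end

to_ms.(0, 1, 1, 500)
# => 61_500
```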
# source: lib/vttyl/decode.ex
defmodule Genome.LooseSequence do
alias Genome.Nucleotide
import Genome.Sequence, only: [encode: 1, decode: 2, reverse_complement: 1]
def pattern_count(seq, pattern, d), do: Enum.count(pattern_matches(seq, pattern, d))
def pattern_matches(seq, pattern, d, index \\ 0, acc \\ [])
def pattern_matches(seq, pattern, _, _, acc) when length(pattern) > length(seq), do: acc
def pattern_matches(seq, pattern, d, index, acc) do
k = length(pattern)
kmer = Enum.take(seq, k)
new_acc = if hamming_distance(kmer, pattern) <= d, do: [index | acc], else: acc
pattern_matches(tl(seq), pattern, d, index + 1, new_acc)
end
def frequencies(seq, k, d, acc \\ %{}) do
with kmer <- Enum.take(seq, k),
^k <- Enum.count(kmer) do
new_acc =
kmer
|> neighbors(d)
|> Enum.reduce(acc, fn neighbor, acc -> Map.update(acc, encode(neighbor), 1, & &1 + 1) end)
frequencies(tl(seq), k, d, new_acc)
else
_ -> acc
end
end
@doc """
Finds the most frequent k-mers with mismatches in a sequence.
iex> "ACGTTGCATGTCGCATGATGCATGAGAGCT"
...> |> Genome.Sequence.from_string()
...> |> Genome.LooseSequence.frequent_patterns(4, 1)
...> |> Enum.map(&Genome.Sequence.to_string/1)
...> |> MapSet.new()
MapSet.new(~w|GATG ATGC ATGT|)
"""
def frequent_patterns(seq, k, d) do
{patterns, _} =
seq
|> frequencies(k, d)
|> reduce_to_most_frequent(k)
patterns
end
@doc """
Finds the most frequent k-mers with mismatches and reverse complements in a sequence.
iex> "ACGTTGCATGTCGCATGATGCATGAGAGCT"
...> |> Genome.Sequence.from_string()
...> |> Genome.LooseSequence.frequent_patterns_with_reverse_complements(4, 1)
...> |> Enum.map(&Genome.Sequence.to_string/1)
...> |> MapSet.new()
MapSet.new(~w|ATGT ACAT|)
"""
def frequent_patterns_with_reverse_complements(seq, k, d) do
direct_freqs =
seq
|> frequencies(k, d)
freqs =
seq
|> reverse_complement()
|> frequencies(k, d, direct_freqs)
{patterns, _} =
freqs
|> reduce_to_most_frequent(k)
patterns
end
@doc """
iex> "ACG"
...> |> Genome.Sequence.from_string()
...> |> Genome.LooseSequence.neighbors(1)
...> |> Enum.map(&Genome.Sequence.to_string/1)
...> |> MapSet.new()
MapSet.new(~w|CCG TCG GCG AAG ATG AGG ACA ACC ACT ACG|)
"""
def neighbors(seq, 0), do: MapSet.new([seq])
def neighbors([_], _), do: Nucleotide.all() |> Enum.map(&[&1])
def neighbors([head|tail], d) when d > 0 do
tail
|> neighbors(d)
|> Enum.flat_map(fn neighbor ->
if hamming_distance(neighbor, tail) < d,
do: Nucleotide.all() |> Enum.map(& [&1|neighbor]),
else: [[head|neighbor]]
end)
|> MapSet.new()
end
@doc """
iex> Genome.LooseSequence.hamming_distance([0, 1, 2, 3, 2, 1, 0], [1, 2, 3, 3, 2, 1, 0])
3
"""
def hamming_distance(seq1, seq2) do
Enum.zip(seq1, seq2)
|> Enum.count(fn {a, b} -> a != b end)
end
defp reduce_to_most_frequent(freqs, k) do
freqs
|> Enum.reduce({[], 0}, fn
{encoded_pattern, count}, {_, winning_count} when count > winning_count ->
{MapSet.new([decode(encoded_pattern, k)]), count}
{encoded_pattern, count}, {patterns, count} ->
{MapSet.put(patterns, decode(encoded_pattern, k)), count}
_, acc ->
acc
end)
end
end
# source: lib/genome/loose_sequence.ex
defmodule ArtemisWeb.AsyncRenderLive do
use ArtemisWeb.LiveView
@moduledoc """
Asynchronously render a template
## Fetch Data Asynchronously (Optional)
Can be passed an arbitrary `async_data` function to be executed as part of
the async load. It is excluded from the assign data and never exposed to the
client.
Supports multiple formats:
- Tuple: `{Module, :function_name}`
- Named Function: `&custom_function/1`
- Anonymous Function: `fn _assigns -> true end`
Note: In order to pass a named or anonymous function, it must first be
serialized with the exposed `serialize` function.
"""
@async_data_timeout :timer.minutes(5)
@ignored_session_keys ["async_data", "conn"]
@default_async_render_type :component
# LiveView Callbacks
@impl true
def mount(_params, session, socket) do
async_fetch? = Map.get(session, "async_fetch?", true)
async_render_reload_limit = session["async_data_reload_limit"]
async_render_type = session["async_render_type"] || @default_async_render_type
async_status_after_initial_render = session["async_status_after_initial_render"] || :loading
private_state = [
async_data: session["async_data"]
]
{:ok, async_render_private_state_pid} = ArtemisWeb.AsyncRenderLivePrivateState.start_link(private_state)
socket =
socket
|> add_session_to_assigns(session)
|> assign(:async_data, nil)
|> assign(:async_data_reload_count, 0)
|> assign(:async_data_reload_limit, async_render_reload_limit)
|> assign(:async_fetch?, async_fetch?)
|> assign(:async_render_private_state_pid, async_render_private_state_pid)
|> assign(:async_render_type, async_render_type)
|> assign(:async_status, :loading)
|> assign(:async_status_after_initial_render, async_status_after_initial_render)
|> maybe_fetch_data()
if connected?(socket) && async_fetch? do
Process.send_after(self(), :async_data, 10)
end
{:ok, socket}
end
@impl true
def render(assigns) do
Phoenix.View.render(ArtemisWeb.LayoutView, "async_render.html", assigns)
end
# GenServer Callbacks
@impl true
def handle_info(:async_data, socket) do
{:noreply, add_async_data(socket)}
end
def handle_info(:context_cache_updating, socket) do
socket = assign(socket, :async_data_reload_count, socket.assigns.async_data_reload_count + 1)
socket =
case below_reload_limit?(socket) do
true -> assign(socket, :async_status, :reloading)
false -> socket
end
{:noreply, socket}
end
def handle_info(:context_cache_updated, socket) do
if below_reload_limit?(socket) do
Process.send(self(), :async_data, [])
end
{:noreply, socket}
end
def handle_info(_, socket) do
{:noreply, socket}
end
# Callbacks
def deserialize(binary) do
:erlang.binary_to_term(binary)
rescue
_error in ArgumentError -> binary
end
def serialize(term), do: :erlang.term_to_binary(term)
# Helpers
defp add_session_to_assigns(socket, session) do
Enum.reduce(session, socket, fn {key, value}, acc ->
atom_key = Artemis.Helpers.to_atom(key)
case Enum.member?(@ignored_session_keys, key) do
false -> assign(acc, atom_key, value)
true -> acc
end
end)
end
defp maybe_fetch_data(socket) do
case socket.assigns.async_fetch? do
true -> socket
_ -> add_async_data(socket, async_status: socket.assigns.async_status_after_initial_render)
end
end
defp add_async_data(socket, options \\ []) do
async_data = fetch_async_data(socket)
async_status = Keyword.get(options, :async_status, :loaded)
socket
|> assign(:async_data, async_data)
|> assign(:async_status, async_status)
end
defp fetch_async_data(socket) do
pid = socket.assigns.async_render_private_state_pid
message = {:async_data, self(), socket.assigns}
GenServer.call(pid, message, @async_data_timeout)
end
defp below_reload_limit?(socket) do
reload_count = socket.assigns.async_data_reload_count
reload_limit = socket.assigns.async_data_reload_limit
cond do
reload_limit -> reload_count <= reload_limit
true -> true
end
end
end
# source: apps/artemis_web/lib/artemis_web/live/async_render_live.ex
defmodule Issues.TableFormatter do
@doc """
Prints an issue table based on the selected issue columns.
Column alignment is calculated dynamically from the maximum data length of each column.
"""
def table_print(issues, columns) do
  header_column_lengths = column_widths(issues, columns)
  IO.puts create_header(columns, header_column_lengths)
  IO.puts create_separator(columns, header_column_lengths)
  issues
  |> Enum.each(fn(x) -> IO.puts(create_body(columns, header_column_lengths, x)) end)
end
@doc """
Returns the length of the longer tuple element (column-name length vs. column-data length).
"""
def get_longer(column_info) do
case {String.length(elem(column_info,0)),elem(column_info,1)} do
{_,0} -> 0
{x,y} when x > y -> x
{_,y} -> y
end
end
@doc """
Create a separator between table header and row data based on selected columns and maximal column length values
"""
def create_separator(columns, header_column_lengths) do
columns
|> Enum.zip(header_column_lengths)
|> Enum.map(fn(x) -> "+#{String.duplicate("-",get_longer(x))}" end)
|> Enum.join
end
@doc """
Creates table body based on selected columns for a specific issue row
"""
def create_body(columns, header_column_lengths, row) do
columns
|> Enum.zip(header_column_lengths)
|> Enum.map(fn(x) -> "|#{String.pad_trailing(to_string(row[elem(x,0)]),get_longer(x))}" end)
|> Enum.join
end
@doc """
Creates table header based on list of selected columns from list of issues
"""
def create_header(columns, header_column_lengths) do
columns
|> Enum.zip(header_column_lengths)
|> Enum.map(fn(x) -> "|#{String.pad_trailing(elem(x,0),get_longer(x))}" end)
|> Enum.join
end
@doc """
Gets list of maximal lengths for each column from issues
"""
def column_widths(issues, columns) do
columns
|> Enum.map(fn(n) -> String.length(column_max_width(issues,n)) end)
end
@doc """
Gets value of maximal length from list of maps, based on map key as filter
"""
def column_max_width(list, name) do
list
|> Enum.map(fn(x) -> to_string(x[name]) end)
|> Enum.max_by(&String.length/1)
end
end
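A quick usage sketch with fabricated sample issues. The printed widths come from `column_widths/2`, so the exact trailing padding may differ slightly:

```elixir
issues = [
  %{"number" => 1, "title" => "Crash on startup"},
  %{"number" => 42, "title" => "Docs typo"}
]

Issues.TableFormatter.table_print(issues, ["number", "title"])
# Prints approximately:
# |number|title
# +------+----------------
# |1     |Crash on startup
# |42    |Docs typo
```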
# source: issues/lib/issues/table_formatter.ex
defmodule MapSchema.Macros.JsonEncoding do
@moduledoc false
@doc """
The JsonEncoding module composes the macros that build the following functions:
- `json_encode(map)`
  Takes a map and casts it to a JSON string.
- `json_decode(json)`
  Takes a JSON string and builds a map, following the rules of the schema.
  Note: it checks the data types.
- `json_decode(mapa, json)`
  Takes a JSON string and updates the values of an existing map, following the rules of the schema.
  Note: it checks the data types.
"""
def install do
encode_methods = install_json_encode()
decore_1_using_mapschema = install_decode_to_mapschema()
decode_2_mutable = install_decode_mutable()
[encode_methods, decore_1_using_mapschema, decode_2_mutable]
end
defp install_json_encode do
quote do
@doc """
Encodes the given map to JSON.
Note: this method uses the Jason library.
"""
def unquote(:json_encode)(var!(mapa)) when is_map(var!(mapa)) do
Jason.encode!(var!(mapa))
end
def unquote(:json_encode)(_) do
MapSchema.Exceptions.throw_error_should_be_a_map()
end
end
end
defp install_decode_to_mapschema do
quote do
@doc """
Decodes JSON into a map, checking every type against the schema.
Note: this method uses the Jason library.
"""
def unquote(:json_decode)(var!(json)) do
put(%{}, Jason.decode!(var!(json)))
end
end
end
defp install_decode_mutable do
quote do
@doc """
Decodes JSON and mutates an existing map, checking every type against the schema.
Internally it uses `put/2`.
Note: this method uses the Jason library.
## Parameters
- mapa: the map to update
- json: the JSON string to decode
"""
@spec json_decode(
any(),
binary()
| maybe_improper_list(
binary() | maybe_improper_list(any(), binary() | []) | byte(),
binary() | []
)
) :: any()
def unquote(:json_decode)(var!(mapa), var!(json)) when is_map(var!(mapa)) do
put(var!(mapa), Jason.decode!(var!(json)))
end
def unquote(:json_decode)(_, _) do
MapSchema.Exceptions.throw_error_should_be_a_map()
end
end
end
end
# source: lib/skeleton/macros/json_encoding.ex
defmodule Zaryn.SelfRepair.Sync.BeaconSummaryHandler.TransactionHandler do
@moduledoc false
alias Zaryn.BeaconChain.Slot.TransactionSummary
alias Zaryn.Crypto
alias Zaryn.P2P
alias Zaryn.P2P.Message.GetTransaction
alias Zaryn.P2P.Message.NotFound
alias Zaryn.Replication
alias Zaryn.TransactionChain.Transaction
alias Zaryn.Utils
require Logger
@doc """
Determines whether the transaction should be downloaded by the local node.
It first verifies the chain storage node election; if the local node is not
elected there, it performs a storage node election based on the transaction movements.
"""
@spec download_transaction?(TransactionSummary.t()) :: boolean()
def download_transaction?(%TransactionSummary{
address: address,
type: type,
movements_addresses: mvt_addresses
}) do
node_list = [P2P.get_node_info() | P2P.authorized_nodes()] |> P2P.distinct_nodes()
chain_storage_nodes = Replication.chain_storage_nodes_with_type(address, type, node_list)
if Utils.key_in_node_list?(chain_storage_nodes, Crypto.first_node_public_key()) do
true
else
Enum.any?(mvt_addresses, fn address ->
io_storage_nodes = Replication.chain_storage_nodes(address, node_list)
node_pool_address = Crypto.hash(Crypto.last_node_public_key())
Utils.key_in_node_list?(io_storage_nodes, Crypto.first_node_public_key()) or
address == node_pool_address
end)
end
end
@doc """
Request the transaction for the closest storage nodes and replicate it locally.
"""
@spec download_transaction(TransactionSummary.t(), patch :: binary()) ::
:ok | {:error, :invalid_transaction}
def download_transaction(
%TransactionSummary{address: address, type: type, timestamp: timestamp},
node_patch
)
when is_binary(node_patch) do
Logger.info("Synchronize missed transaction", transaction: "#{type}@#{Base.encode16(address)}")
storage_nodes =
case P2P.authorized_nodes(timestamp) do
[] ->
Replication.chain_storage_nodes_with_type(address, type)
nodes ->
Replication.chain_storage_nodes_with_type(address, type, nodes)
end
response =
storage_nodes
|> Enum.reject(&(&1.first_public_key == Crypto.first_node_public_key()))
|> P2P.reply_first(%GetTransaction{address: address})
case response do
{:ok, tx = %Transaction{}} ->
node_list = [P2P.get_node_info() | P2P.authorized_nodes()] |> P2P.distinct_nodes()
roles =
[
chain:
Replication.chain_storage_node?(
address,
type,
Crypto.last_node_public_key(),
node_list
),
IO: Replication.io_storage_node?(tx, Crypto.last_node_public_key(), node_list)
]
|> Utils.get_keys_from_value_match(true)
Replication.process_transaction(tx, roles, self_repair?: true)
{:ok, %NotFound{}} ->
Logger.error("Transaction not found from remote nodes during self repair",
transaction: "#{type}@#{Base.encode16(address)}"
)
{:error, :network_issue} ->
Logger.error("Network issue during self repair",
transaction: "#{type}@#{Base.encode16(address)}"
)
end
end
end
# source: lib/zaryn/self_repair/sync/beacon_summary_handler/transaction_handler.ex
defmodule Kernel.SpecialForms do
@moduledoc """
In this module we define Elixir special forms. Special forms
cannot be overridden by the developer and are the basic
building blocks of Elixir code.
Some of those forms are lexical (like `alias`, `case`, etc).
The macros `{}` and `<<>>` are also special forms used to define
tuple and binary data structures respectively.
This module also documents Elixir's pseudo variables (`__ENV__`,
`__MODULE__`, `__DIR__` and `__CALLER__`). Pseudo variables return
information about Elixir's compilation environment and can only
be read, never assigned to.
Finally, it also documents 2 special forms, `__block__` and
`__aliases__`, which are not intended to be called directly by the
developer but they appear in quoted contents since they are essential
in Elixir's constructs.
"""
@doc """
Creates a tuple.
Only two-item tuples are considered literals in Elixir.
Therefore all other tuples are represented in the AST
as a call to the special form `:{}`.
Conveniences for manipulating tuples can be found in the
`Tuple` module. Some functions for working with tuples are
also available in `Kernel`, namely `Kernel.elem/2`,
`Kernel.put_elem/3` and `Kernel.tuple_size/1`.
## Examples
iex> {1, 2, 3}
{1, 2, 3}
iex> quote do: {1, 2, 3}
{:{}, [], [1, 2, 3]}
"""
defmacro unquote(:{})(args)
@doc """
Creates a map.
Maps are key-value stores where keys are compared
using the match operator (`===`). Maps can be created with
the `%{}` special form where keys are associated via `=>`:
%{1 => 2}
Maps also support the keyword notation, as other special forms,
as long as they are at the end of the argument list:
%{hello: :world, with: :keywords}
%{:hello => :world, with: :keywords}
If a map has duplicated keys, the last key will always have
higher precedence:
iex> %{a: :b, a: :c}
%{a: :c}
Conveniences for manipulating maps can be found in the
`Map` module.
## Access syntax
Besides the access functions available in the `Map` module,
like `Map.get/3` and `Map.fetch/2`, a map can be accessed using the
`.` operator:
iex> map = %{a: :b}
iex> map.a
:b
Note that the `.` operator expects the field to exist in the map.
If not, an `ArgumentError` is raised.
## Update syntax
Maps also support an update syntax:
iex> map = %{:a => :b}
iex> %{map | :a => :c}
%{:a => :c}
Notice the update syntax requires the given keys to exist.
Trying to update a key that does not exist will raise an `ArgumentError`.
## AST representation
Regardless of whether `=>` or the keyword syntax is used, maps are
always represented internally as a list of two-item tuples
for simplicity:
iex> quote do: %{:a => :b, c: :d}
{:%{}, [], [{:a, :b}, {:c, :d}]}
"""
defmacro unquote(:%{})(args)
@doc """
Creates a struct.
A struct is a tagged map that allows developers to provide
default values for keys, tags to be used in polymorphic
dispatches and compile time assertions.
To define a struct, you just need to implement the `__struct__/0`
function in a module:
defmodule User do
def __struct__ do
%{name: "john", age: 27}
end
end
In practice though, structs are usually defined with the
`Kernel.defstruct/1` macro:
defmodule User do
defstruct name: "john", age: 27
end
Now a struct can be created as follows:
%User{}
Underneath a struct is just a map with a `__struct__` field
pointing to the `User` module:
%User{} == %{__struct__: User, name: "john", age: 27}
A struct also validates that the given keys are part of the defined
struct. The example below will fail because there is no key
`:full_name` in the `User` struct:
%User{full_name: "<NAME>"}
Note that a struct specifies a minimum set of keys required
for operations. Other keys can be added to structs via the
regular map operations:
user = %User{}
Map.put(user, :a_non_struct_key, :value)
An update operation specific for structs is also available:
%User{user | age: 28}
The syntax above will guarantee the given keys are valid at
compilation time and it will guarantee at runtime the given
argument is a struct, failing with `BadStructError` otherwise.
Although structs are maps, by default structs do not implement
any of the protocols implemented for maps. Check
`Kernel.defprotocol/2` for more information on how structs
can be used with protocols for polymorphic dispatch. Also
see `Kernel.struct/2` for examples on how to create and update
structs dynamically.
"""
defmacro unquote(:%)(struct, map)
@doc """
Defines a new bitstring.
## Examples
iex> << 1, 2, 3 >>
<< 1, 2, 3 >>
## Bitstring types
A bitstring is made of many segments. Each segment has a
type, which defaults to integer:
iex> <<1, 2, 3>>
<<1, 2, 3>>
By default, Elixir also accepts the segment to be a literal
string or a literal char list, which are expanded to integers:
iex> <<0, "foo">>
<<0, 102, 111, 111>>
Any other type needs to be explicitly tagged. For example,
in order to store a float type in the binary, one has to do:
iex> <<3.14 :: float>>
<<64, 9, 30, 184, 81, 235, 133, 31>>
This also means that variables need to be explicitly tagged,
otherwise Elixir defaults to integer:
iex> rest = "oo"
iex> <<102, rest>>
** (ArgumentError) argument error
We can solve this by explicitly tagging it as a binary:
iex> rest = "oo"
iex> <<102, rest :: binary>>
"foo"
The type can be integer, float, bitstring/bits, binary/bytes,
utf8, utf16 or utf32, e.g.:
iex> rest = "oo"
iex> <<102 :: float, rest :: binary>>
<<64, 89, 128, 0, 0, 0, 0, 0, 111, 111>>
An integer can be any arbitrary precision integer. A float is an
IEEE 754 binary32 or binary64 floating point number. A bitstring
is an arbitrary series of bits. A binary is a special case of
bitstring that has a total size divisible by 8.
The utf8, utf16, and utf32 types are for unicode codepoints. They
can also be applied to literal strings and char lists:
iex> <<"foo" :: utf16>>
<<0, 102, 0, 111, 0, 111>>
The bits type is an alias for bitstring. The bytes type is an
alias for binary.
The signedness can also be given as signed or unsigned. The
signedness only matters for matching and relevant only for
integers. If unspecified, it defaults to unsigned. Example:
iex> <<-100 :: signed, _rest :: binary>> = <<-100, "foo">>
<<156, 102, 111, 111>>
This match would have failed if we did not specify that the
value -100 is signed. If we're matching into a variable instead
of a value, the signedness won't be checked; rather, the number
will simply be interpreted as having the given (or implied)
signedness, e.g.:
iex> <<val, _rest :: binary>> = <<-100, "foo">>
iex> val
156
Here, `val` is interpreted as unsigned.
The endianness of a segment can be big, little or native (the
latter meaning it will be resolved at VM load time).
Many options can be given by using `-` as separator; the order
is arbitrary. The following are all the same:
<<102 :: integer-native, rest :: binary>>
<<102 :: native-integer, rest :: binary>>
<<102 :: unsigned-big-integer, rest :: binary>>
<<102 :: unsigned-big-integer-size(8), rest :: binary>>
<<102 :: unsigned-big-integer-8, rest :: binary>>
<<102 :: 8-integer-big-unsigned, rest :: binary>>
<<102, rest :: binary>>
And so on.
Endianness only makes sense for integers and some UTF code
point types (utf16 and utf32).
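For instance, the byte order of the same 16-bit integer flips between
the big and little modifiers (a small illustrative snippet):

```elixir
# 1 encoded as a 16-bit big-endian integer: most significant byte first
big = <<1 :: big-size(16)>>

# The same integer in little-endian order: least significant byte first
little = <<1 :: little-size(16)>>

big    #=> <<0, 1>>
little #=> <<1, 0>>
```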
Finally, we can also specify size and unit for each segment. The
unit is multiplied by the size to give the effective size of
the segment in bits. The default unit for integers, floats,
and bitstrings is 1. For binaries, it is 8.
Since integers are the default type, the default unit is 1. The
example below matches because the string "foo" takes 24 bits and we
match it against a segment of 24 bits, 8 of which are taken by the
integer 102 and the remaining 16 bits are specified by the rest.
iex> <<102, _rest :: size(16)>> = "foo"
"foo"
We can also match by specifying size and unit explicitly:
iex> <<102, _rest :: size(2)-unit(8)>> = "foo"
"foo"
However, if we expect a size of 32, it won't match:
iex> <<102, _rest :: size(32)>> = "foo"
** (MatchError) no match of right hand side value: "foo"
Size and unit are not applicable to utf8, utf16, and utf32.
The default size for integers is 8. For floats, it is 64. For
binaries, it is the size of the binary. Only the last binary
in a binary match can use the default size (all others must
have their size specified explicitly).
iex> <<3.14 :: float>>
<<64, 9, 30, 184, 81, 235, 133, 31>>
iex> <<3.14 :: float-32>>
<<64, 72, 245, 195>>
Size and unit can also be specified using a syntax shortcut
when passing integer values:
iex> x = 1
iex> << x :: 8 >> == << x :: size(8) >>
true
iex> << x :: 8 * 4 >> == << x :: size(8)-unit(4) >>
true
This syntax reflects the fact that the effective size is given by
multiplying the size by the unit.
For floats, `size * unit` must result in 32 or 64, corresponding
to binary32 and binary64, respectively.
"""
defmacro unquote(:<<>>)(args)
@doc """
Defines a remote call or an alias.
The dot (`.`) in Elixir can be used for remote calls:
iex> String.downcase("FOO")
"foo"
In the example above, we used `.` to invoke `downcase` in the
`String` alias, passing "FOO" as argument. We can also use the dot
for creating aliases:
iex> Hello.World
Hello.World
This time, we have joined two aliases, defining the final alias
`Hello.World`.
## Syntax
The right side of `.` may be a word starting with an uppercase letter, which
represents an alias, a word starting with a lowercase letter or underscore, any
valid language operator, or any name wrapped in single or double quotes. These
are all valid examples:
iex> Kernel.Sample
Kernel.Sample
iex> Kernel.length([1, 2, 3])
3
iex> Kernel.+(1, 2)
3
iex> Kernel."length"([1, 2, 3])
3
iex> Kernel.'+'(1, 2)
3
Note that `Kernel."HELLO"` will be treated as a remote call and not an alias.
This choice was made so that every time single or double quotes are used, we have
a remote call regardless of the quote contents. This decision is also reflected
in the quoted expressions discussed below.
## Quoted expression
When `.` is used, the quoted expression may take two distinct
forms. When the right side starts with a lowercase letter (or
underscore):
iex> quote do: String.downcase("FOO")
{{:., [], [{:__aliases__, [alias: false], [:String]}, :downcase]}, [], ["FOO"]}
Notice we have an inner tuple, containing the atom `:.` representing
the dot as first element:
{:., [], [{:__aliases__, [alias: false], [:String]}, :downcase]}
This tuple follows the general quoted expression structure in Elixir,
with the name as first element, a keyword list of metadata as second,
and the arguments as third. In this case, the arguments are the
alias `String` and the atom `:downcase`. The second argument is **always**
an atom:
iex> quote do: String."downcase"("FOO")
{{:., [], [{:__aliases__, [alias: false], [:String]}, :downcase]}, [], ["FOO"]}
The tuple containing `:.` is wrapped in another tuple, which actually
represents the function call, and has `"FOO"` as argument.
When the right side is an alias (i.e. starts with uppercase), we get instead:
iex> quote do: Hello.World
{:__aliases__, [alias: false], [:Hello, :World]}
We go into more detail about aliases in the `__aliases__` special form
documentation.
## Unquoting
We can also use unquote to generate a remote call in a quoted expression:
iex> x = :downcase
iex> quote do: String.unquote(x)("FOO")
{{:., [], [{:__aliases__, [alias: false], [:String]}, :downcase]}, [], ["FOO"]}
Similar to `Kernel."HELLO"`, `unquote(x)` will always generate a remote call,
independent of the value of `x`. To generate an alias via the quoted expression,
one needs to rely on `Module.concat/2`:
iex> x = Sample
iex> quote do: Module.concat(String, unquote(x))
{{:., [], [{:__aliases__, [alias: false], [:Module]}, :concat]}, [],
[{:__aliases__, [alias: false], [:String]}, Sample]}
"""
defmacro unquote(:.)(left, right)
@doc """
`alias` is used to set up aliases, often useful with module names.
## Examples
`alias` can be used to set up an alias for any module:
defmodule Math do
alias MyKeyword, as: Keyword
end
In the example above, we have set up `MyKeyword` to be aliased
as `Keyword`. So now, any reference to `Keyword` will be
automatically replaced by `MyKeyword`.
In case one wants to access the original `Keyword`, it can be done
by accessing `Elixir`:
Keyword.values #=> uses MyKeyword.values
Elixir.Keyword.values #=> uses Keyword.values
Notice that calling `alias` without the `as:` option automatically
sets an alias based on the last part of the module. For example:
alias Foo.Bar.Baz
Is the same as:
alias Foo.Bar.Baz, as: Baz
## Lexical scope
`import`, `require` and `alias` are called directives and all
have lexical scope. This means you can set up aliases inside
specific functions and it won't affect the overall scope.
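As an illustration (the module names here are hypothetical), an alias set
inside one function is invisible in its siblings:

```elixir
defmodule Math.Fancy do
  def add(a, b), do: a + b
end

defmodule Calculator do
  def plus(a, b) do
    # This alias only exists inside plus/2
    alias Math.Fancy, as: F
    F.add(a, b)
  end

  def minus(a, b) do
    # F is not available here; the full name is required
    Math.Fancy.add(a, -b)
  end
end
```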
## Warnings
If you alias a module and you don't use the alias, Elixir is
going to issue a warning implying the alias is not being used.
In case the alias is generated automatically by a macro,
Elixir won't emit any warnings though, since the alias
was not explicitly defined.
Both warning behaviours can be changed by explicitly
setting the `:warn` option to `true` or `false`.
"""
defmacro alias(module, opts)
@doc """
Requires a given module to be compiled and loaded.
## Examples
Notice that usually modules should not be required before usage;
the only exception is when you want to use the macros from a module.
In such cases, you need to explicitly require them.
Let's suppose you created your own `if` implementation in the module
`MyMacros`. If you want to invoke it, you need to first explicitly
require `MyMacros`:
defmodule Math do
require MyMacros
MyMacros.if do_something, it_works
end
An attempt to call a macro that was not loaded will raise an error.
## Alias shortcut
`require` also accepts `as:` as an option so it automatically sets
up an alias. Please check `alias` for more information.
"""
defmacro require(module, opts)
@doc """
Imports functions and macros from other modules.
`import` allows one to easily access functions or macros from
other modules without using the qualified name.
## Examples
If you are using several functions from a given module, you can
import those functions and reference them as local functions,
for example:
iex> import List
iex> flatten([1, [2], 3])
[1, 2, 3]
## Selector
By default, Elixir imports functions and macros from the given
module, except the ones starting with underscore (which are
usually callbacks):
import List
A developer can filter the import to only macros or functions via
the `:only` option:
import List, only: :functions
import List, only: :macros
Alternatively, Elixir allows a developer to pass pairs of
name/arities to `:only` or `:except` for fine-grained control
over what to import (or not):
import List, only: [flatten: 1]
import String, except: [split: 2]
Notice that calling `except` for a previously declared `import`
simply filters the previously imported elements. For example:
import List, only: [flatten: 1, keyfind: 3]
import List, except: [flatten: 1]
After the two import calls above, only `List.keyfind/3` will be
imported.
## Lexical scope
It is important to notice that `import` is lexical. This means you
can import specific macros inside specific functions:
defmodule Math do
def some_function do
# 1) Disable `if/2` from Kernel
import Kernel, except: [if: 2]
# 2) Require the new `if` macro from MyMacros
import MyMacros
# 3) Use the new macro
if do_something, it_works
end
end
In the example above, we imported macros from `MyMacros`,
replacing the original `if/2` implementation by our own
within that specific function. All other functions in that
module will still be able to use the original one.
## Warnings
If you import a module and you don't use any of the imported
functions or macros from this module, Elixir is going to issue
a warning implying the import is not being used.
In case the import is generated automatically by a macro,
Elixir won't emit any warnings though, since the import
was not explicitly defined.
Both warning behaviours can be changed by explicitly
setting the `:warn` option to `true` or `false`.
## Ambiguous function/macro names
If two modules `A` and `B` are imported and they both contain
a `foo` function with an arity of `1`, an error is only emitted
if an ambiguous call to `foo/1` is actually made; that is, the
errors are emitted lazily, not eagerly.
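A sketch of this behaviour, with two hypothetical modules that both
export `foo/1`:

```elixir
defmodule A do
  def foo(x), do: {:a, x}
end

defmodule B do
  def foo(x), do: {:b, x}
end

defmodule Client do
  import A
  import B

  # This compiles: no ambiguous call to foo/1 is made.
  def distinct, do: :ok

  # Uncommenting the definition below would raise a CompileError,
  # since foo/1 is imported from both A and B:
  # def ambiguous, do: foo(1)
end
```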
"""
defmacro import(module, opts)
@doc """
Returns the current environment information as a `Macro.Env` struct.
In the environment you can access the current filename,
line numbers, set up aliases, the current function and others.
"""
defmacro __ENV__
@doc """
Returns the current module name as an atom, or `nil` when not
inside a module.
Although the module can be accessed in the `__ENV__`, this macro
is a convenient shortcut.
"""
defmacro __MODULE__
@doc """
Returns the current directory as a binary.
Although the directory can be accessed as `Path.dirname(__ENV__.file)`,
this macro is a convenient shortcut.
"""
defmacro __DIR__
@doc """
Returns the current calling environment as a `Macro.Env` struct.
In the environment you can access the filename, line numbers,
set up aliases, the function and others.
"""
defmacro __CALLER__
@doc """
Accesses an already bound variable in match clauses.
## Examples
Elixir allows variables to be rebound via static single assignment:
iex> x = 1
iex> x = 2
iex> x
2
However, in some situations, it is useful to match against an existing
value, instead of rebinding. This can be done with the `^` special form:
iex> x = 1
iex> ^x = List.first([1])
iex> ^x = List.first([2])
** (MatchError) no match of right hand side value: 2
Note that `^` always refers to the value of `x` prior to the match. The
following example will match:
iex> x = 0
iex> {x, ^x} = {1, 0}
iex> x
1
"""
defmacro ^(var)
@doc """
Matches the value on the right against the pattern on the left.
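For example, a match binds variables in the pattern on the left and
raises `MatchError` when the right side does not fit the pattern:

```elixir
# Destructure a tuple, binding a and b
{a, b} = {1, 2}

a #=> 1
b #=> 2

# A failing match raises MatchError:
# {:ok, value} = {:error, :enoent}
```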
"""
defmacro left = right
@doc """
Used by types and bitstrings to specify types.
This operator is used in two distinct occasions in Elixir.
It is used in typespecs to specify the type of a variable,
function or of a type itself:
@type number :: integer | float
@spec add(number, number) :: number
It may also be used in bit strings to specify the type
of a given bit segment:
<<int::integer-little, rest::bits>> = bits
Read the documentation for `Kernel.Typespec` and
`<<>>/1` for more information on typespecs and
bitstrings respectively.
"""
defmacro left :: right
@doc ~S"""
Gets the representation of any expression.
## Examples
quote do: sum(1, 2, 3)
#=> {:sum, [], [1, 2, 3]}
## Explanation
Any Elixir code can be represented using Elixir data structures.
The building block of Elixir macros is a tuple with three elements,
for example:
{:sum, [], [1, 2, 3]}
The tuple above represents a function call to `sum` passing 1, 2 and
3 as arguments. The tuple elements are:
* The first element of the tuple is always an atom or
another tuple in the same representation.
* The second element of the tuple represents metadata.
* The third element of the tuple is the list of arguments for the
function call. The third element may also be an atom, in which
case it usually represents a variable (or a local call).
## Options
* `:unquote` - when `false`, disables unquoting. Useful when you have a quote
inside another quote and want to control what quote is able to unquote.
* `:location` - when set to `:keep`, keeps the current line and file from
quote. Read the Stacktrace information section below for more
information.
* `:context` - sets the resolution context.
* `:bind_quoted` - passes a binding to the macro. Whenever a binding is
given, `unquote` is automatically disabled.
## Quote literals
Besides the tuple described above, Elixir has a few literals that
when quoted return themselves. They are:
:sum #=> Atoms
1 #=> Integers
2.0 #=> Floats
[1, 2] #=> Lists
"strings" #=> Strings
{key, value} #=> Tuples with two elements
## Quote and macros
`quote` is commonly used with macros for code generation. As an exercise,
let's define a macro that multiplies a number by itself (squared). Note
there is no reason to define this as a macro (and it would actually be
seen as bad practice), but it is simple enough to allow us to focus
on the important aspects of quotes and macros:
defmodule Math do
defmacro squared(x) do
quote do
unquote(x) * unquote(x)
end
end
end
We can invoke it as:
import Math
IO.puts "Got #{squared(5)}"
At first, there is nothing in this example that actually reveals it is a
macro. But what is happening is that, at compilation time, `squared(5)`
becomes `5 * 5`. The argument `5` is duplicated in the produced code; we
can see this behaviour in practice because our macro actually has
a bug:
import Math
my_number = fn ->
IO.puts "Returning 5"
5
end
IO.puts "Got #{squared(my_number.())}"
The example above will print:
Returning 5
Returning 5
25
Notice how "Returning 5" was printed twice, instead of just once. This is
because a macro receives an expression and not a value (which is what we
would expect in a regular function). This means that:
squared(my_number.())
Actually expands to:
my_number.() * my_number.()
Which invokes the function twice, explaining why we get the printed value
twice! In the majority of the cases, this is actually unexpected behaviour,
and that's why one of the first things you need to keep in mind when it
comes to macros is to **not unquote the same value more than once**.
Let's fix our macro:
defmodule Math do
defmacro squared(x) do
quote do
x = unquote(x)
x * x
end
end
end
Now invoking `squared(my_number.())` as before will print the value just
once.
In fact, this pattern is so common that most of the times you will want
to use the `bind_quoted` option with `quote`:
defmodule Math do
defmacro squared(x) do
quote bind_quoted: [x: x] do
x * x
end
end
end
`:bind_quoted` will translate to the same code as the example above.
`:bind_quoted` can be used in many cases and is seen as good practice,
not only because it helps prevent us from running into common mistakes, but
also because it allows us to leverage other tools exposed by macros, such as
the unquote fragments discussed in some sections below.
Before we finish this brief introduction, you will notice that, even though
we defined a variable `x` inside our quote:
quote do
x = unquote(x)
x * x
end
When we call:
import Math
squared(5)
x #=> ** (RuntimeError) undefined function or variable: x
We can see that `x` did not leak to the user context. This happens
because Elixir macros are hygienic, a topic we will discuss at length
in the next sections as well.
## Hygiene in variables
Consider the following example:
defmodule Hygiene do
defmacro no_interference do
quote do: a = 1
end
end
require Hygiene
a = 10
Hygiene.no_interference
a #=> 10
In the example above, `a` returns 10 even though the macro
is apparently setting it to 1, because variables defined
in the macro do not affect the context the macro is executed in.
If you want to set or get a variable in the caller's context, you
can do it with the help of the `var!` macro:
defmodule NoHygiene do
defmacro interference do
quote do: var!(a) = 1
end
end
require NoHygiene
a = 10
NoHygiene.interference
a #=> 1
Note that you cannot even access variables defined in the same
module unless you explicitly give them a context:
defmodule Hygiene do
defmacro write do
quote do
a = 1
end
end
defmacro read do
quote do
a
end
end
end
Hygiene.write
Hygiene.read
#=> ** (RuntimeError) undefined function or variable: a
For such, you can explicitly pass the current module scope as
argument:
defmodule ContextHygiene do
defmacro write do
quote do
var!(a, ContextHygiene) = 1
end
end
defmacro read do
quote do
var!(a, ContextHygiene)
end
end
end
ContextHygiene.write
ContextHygiene.read
#=> 1
## Hygiene in aliases
Aliases inside quote are hygienic by default.
Consider the following example:
defmodule Hygiene do
alias HashDict, as: D
defmacro no_interference do
quote do: D.new
end
end
require Hygiene
Hygiene.no_interference #=> #HashDict<[]>
Notice that, even though the alias `D` is not available
in the context where the macro is expanded, the code above works
because `D` still expands to `HashDict`.
Similarly, even if we defined an alias with the same name
before invoking a macro, it won't affect the macro's result:
defmodule Hygiene do
alias HashDict, as: D
defmacro no_interference do
quote do: D.new
end
end
require Hygiene
alias SomethingElse, as: D
Hygiene.no_interference #=> #HashDict<[]>
In some cases, you want to access an alias or a module defined
in the caller. For such, you can use the `alias!` macro:
defmodule Hygiene do
# This will expand to Elixir.Nested.hello
defmacro no_interference do
quote do: Nested.hello
end
# This will expand to Nested.hello for
# whatever is Nested in the caller
defmacro interference do
quote do: alias!(Nested).hello
end
end
defmodule Parent do
defmodule Nested do
def hello, do: "world"
end
require Hygiene
Hygiene.no_interference
#=> ** (UndefinedFunctionError) ...
Hygiene.interference
#=> "world"
end
## Hygiene in imports
Similar to aliases, imports in Elixir are hygienic. Consider the
following code:
defmodule Hygiene do
defmacrop get_size do
quote do
size("hello")
end
end
def return_size do
import Kernel, except: [size: 1]
get_size
end
end
Hygiene.return_size #=> 5
Notice how `return_size` returns 5 even though the `size/1`
function is not imported. In fact, even if `return_size` imported
a function from another module, it wouldn't affect the function
result:
def return_size do
import Dict, only: [size: 1]
get_size
end
Calling this new `return_size` will still return 5 as result.
Elixir is smart enough to delay the resolution to the latest
moment possible. So, if you call `size("hello")` inside quote,
but no `size/1` function is available, it is then expanded in
the caller:
defmodule Lazy do
defmacrop get_size do
import Kernel, except: [size: 1]
quote do
size([a: 1, b: 2])
end
end
def return_size do
import Kernel, except: [size: 1]
import Dict, only: [size: 1]
get_size
end
end
Lazy.return_size #=> 2
## Stacktrace information
When defining functions via macros, developers have the option of
choosing if runtime errors will be reported from the caller or from
inside the quote. Let's see an example:
# adder.ex
defmodule Adder do
@doc "Defines a function that adds two numbers"
defmacro defadd do
quote location: :keep do
def add(a, b), do: a + b
end
end
end
# sample.ex
defmodule Sample do
import Adder
defadd
end
When using `location: :keep` and invalid arguments are given to
`Sample.add/2`, the stacktrace information will point to the file
and line inside the quote. Without `location: :keep`, the error is
reported to where `defadd` was invoked. Note `location: :keep` affects
only definitions inside the quote.
## Binding and unquote fragments
Elixir's quote/unquote mechanism provides a functionality called
unquote fragments. Unquote fragments provide an easy way to generate
functions on the fly. Consider this example:
kv = [foo: 1, bar: 2]
Enum.each kv, fn {k, v} ->
def unquote(k)(), do: unquote(v)
end
In the example above, we have generated the functions `foo/0` and
`bar/0` dynamically. Now, imagine that we want to convert this
functionality into a macro:
defmacro defkv(kv) do
Enum.map kv, fn {k, v} ->
quote do
def unquote(k)(), do: unquote(v)
end
end
end
We can invoke this macro as:
defkv [foo: 1, bar: 2]
However, we can't invoke it as follows:
kv = [foo: 1, bar: 2]
defkv kv
This is because the macro is expecting its arguments to be a
keyword list at **compilation** time. Since in the example above
we are passing the representation of the variable `kv`, our
code fails.
This is actually a common pitfall when developing macros. We are
assuming a particular shape in the macro. We can work around it
by unquoting the variable inside the quoted expression:
defmacro defkv(kv) do
quote do
Enum.each unquote(kv), fn {k, v} ->
def unquote(k)(), do: unquote(v)
end
end
end
If you try to run our new macro, you will notice it won't
even compile, complaining that the variables `k` and `v`
do not exist. This is because of the ambiguity: `unquote(k)`
can either be an unquote fragment, as previously, or a regular
unquote as in `unquote(kv)`.
One solution to this problem is to disable unquoting in the
macro, however, doing that would make it impossible to inject the
`kv` representation into the tree. That's when the `:bind_quoted`
option comes to the rescue (again!). By using `:bind_quoted`, we
can automatically disable unquoting while still injecting the
desired variables into the tree:
defmacro defkv(kv) do
quote bind_quoted: [kv: kv] do
Enum.each kv, fn {k, v} ->
def unquote(k)(), do: unquote(v)
end
end
end
In fact, the `:bind_quoted` option is recommended every time
one desires to inject a value into the quote.
"""
defmacro quote(opts, block)
@doc """
Unquotes the given expression from inside a macro.
## Examples
Imagine the situation where you have a variable `value` and
you want to inject it inside some quote. The first attempt
would be:
value = 13
quote do: sum(1, value, 3)
Which would then return:
{:sum, [], [1, {:value, [], quoted}, 3]}
Which is not the expected result. For this, we use unquote:
value = 13
quote do: sum(1, unquote(value), 3)
#=> {:sum, [], [1, 13, 3]}
"""
defmacro unquote(:unquote)(expr)
@doc """
Unquotes the given list, expanding its elements. Similar
to `unquote`.
## Examples
values = [2, 3, 4]
quote do: sum(1, unquote_splicing(values), 5)
#=> {:sum, [], [1, 2, 3, 4, 5]}
"""
defmacro unquote(:unquote_splicing)(expr)
@doc ~S"""
Comprehensions allow you to quickly build a data structure from
an enumerable or a bitstring.
Let's start with an example:
iex> for n <- [1, 2, 3, 4], do: n * 2
[2, 4, 6, 8]
A comprehension accepts many generators and filters. Enumerable
generators are defined using `<-`:
# A list generator:
iex> for n <- [1, 2, 3, 4], do: n * 2
[2, 4, 6, 8]
# A comprehension with two generators
iex> for x <- [1, 2], y <- [2, 3], do: x*y
[2, 3, 4, 6]
Filters can also be given:
# A comprehension with a generator and a filter
iex> for n <- [1, 2, 3, 4, 5, 6], rem(n, 2) == 0, do: n
[2, 4, 6]
Note that generators can also be used to filter, as they remove any value
that doesn't match the left side of `<-`:
iex> for {:user, name} <- [user: "john", admin: "john", user: "meg"] do
...> String.upcase(name)
...> end
["JOHN", "MEG"]
Bitstring generators are also supported and are very useful when you
need to organize bitstring streams:
iex> pixels = <<213, 45, 132, 64, 76, 32, 76, 0, 0, 234, 32, 15>>
iex> for <<r::8, g::8, b::8 <- pixels >>, do: {r, g, b}
[{213, 45, 132}, {64, 76, 32}, {76, 0, 0}, {234, 32, 15}]
Variable assignments inside the comprehension, be it in generators,
filters or inside the block, are not reflected outside of the
comprehension.
## Into
In the examples above, the result returned by the comprehension was
always a list. The returned result can be configured by passing an
`:into` option, that accepts any structure as long as it implements
the `Collectable` protocol.
For example, we can use bitstring generators with the `:into` option
to easily remove all spaces in a string:
iex> for <<c <- " hello world ">>, c != ?\s, into: "", do: <<c>>
"helloworld"
The `IO` module provides streams that are both `Enumerable` and
`Collectable`. Here is an upcase echo server using comprehensions:
for line <- IO.stream(:stdio, :line), into: IO.stream(:stdio, :line) do
String.upcase(line)
end
"""
defmacro for(args)
@doc """
Defines an anonymous function.
## Examples
iex> add = fn a, b -> a + b end
iex> add.(1, 2)
3
"""
defmacro unquote(:fn)(clauses)
@doc """
Internal special form for block expressions.
This is the special form used whenever we have a block
of expressions in Elixir. This special form is private
and should not be invoked directly:
iex> quote do: (1; 2; 3)
{:__block__, [], [1, 2, 3]}
"""
defmacro __block__(args)
@doc """
Captures or creates an anonymous function.
## Capture
The capture operator is most commonly used to capture a
function with given name and arity from a module:
iex> fun = &Kernel.is_atom/1
iex> fun.(:atom)
true
iex> fun.("string")
false
In the example above, we captured `Kernel.is_atom/1` as an
anonymous function and then invoked it.
The capture operator can also be used to capture local functions,
including private ones, and imported functions by omitting the
module name:
&local_function/1
## Anonymous functions
The capture operator can also be used to partially apply
functions, where `&1`, `&2` and so on can be used as value
placeholders. For example:
iex> double = &(&1 * 2)
iex> double.(2)
4
In other words, `&(&1 * 2)` is equivalent to `fn x -> x * 2 end`.
Another example using a local function:
iex> fun = &is_atom(&1)
iex> fun.(:atom)
true
The `&` operator can be used with more complex expressions:
iex> fun = &(&1 + &2 + &3)
iex> fun.(1, 2, 3)
6
As well as with lists and tuples:
iex> fun = &{&1, &2}
iex> fun.(1, 2)
{1, 2}
iex> fun = &[&1|&2]
iex> fun.(1, 2)
[1|2]
The only restriction when creating anonymous functions is that at
least one placeholder must be present, i.e. the expression must contain
at least `&1`:
# No placeholder fails to compile
&var
# Block expressions are also not supported
&(foo(&1, &2); &3 + &4)
"""
defmacro unquote(:&)(expr)
@doc """
Internal special form to hold aliases information.
It is usually compiled to an atom:
iex> quote do: Foo.Bar
{:__aliases__, [alias: false], [:Foo, :Bar]}
Elixir represents `Foo.Bar` as `__aliases__` so calls can be
unambiguously identified by the operator `:.`. For example:
iex> quote do: Foo.bar
{{:., [], [{:__aliases__, [alias: false], [:Foo]}, :bar]}, [], []}
Whenever an expression iterator sees a `:.` as the tuple key,
it can be sure that it represents a call and the second argument
in the list is an atom.
On the other hand, aliases hold some properties:
1. The head element of aliases can be any term that must expand to
an atom at compilation time.
2. The tail elements of aliases are guaranteed to always be atoms.
3. When the head element of aliases is the atom `:Elixir`, no expansion happens.
"""
defmacro __aliases__(args)
@doc """
Calls the overridden function when overriding it with `defoverridable`.
See `Kernel.defoverridable` for more information and documentation.
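A minimal sketch (the module and function names are illustrative): a
function marked as overridable can be redefined, and the new definition
can reach the original one via `super`:

```elixir
defmodule Greeter do
  def hello(name), do: "Hello, " <> name

  # Mark hello/1 as overridable, then redefine it.
  defoverridable hello: 1

  # The new definition delegates to the original via super/1
  def hello(name), do: super(name) <> "!"
end

Greeter.hello("meg") #=> "Hello, meg!"
```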
"""
defmacro super(args)
@doc """
Matches the given expression against the given clauses.
## Examples
case thing do
{:selector, i, value} when is_integer(i) ->
value
value ->
value
end
In the example above, we match `thing` against each clause "head"
and execute the clause "body" corresponding to the first clause
that matches. If no clause matches, an error is raised.
## Variables handling
Notice that variables bound in a clause "head" do not leak to the
outer context:
case data do
{:ok, value} -> value
:error -> nil
end
value #=> unbound variable value
However, variables explicitly bound in the clause "body" are
accessible from the outer context:
value = 7
case lucky? do
false -> value = 13
true -> true
end
value #=> 7 or 13
In the example above, value is going to be `7` or `13` depending on
the value of `lucky?`. In case `value` has no previous value before
case, clauses that do not explicitly bind a value have the variable
bound to `nil`.
"""
defmacro case(condition, clauses)
@doc """
Evaluates the expression corresponding to the first clause that
evaluates to a truthy value.
Raises an error if all conditions evaluate to `nil` or `false`.
## Examples
cond do
1 + 1 == 1 ->
"This will never match"
2 * 2 != 4 ->
"Nor this"
true ->
"This will"
end
"""
defmacro cond(clauses)
@doc ~S"""
Evaluates the given expressions and handles any error, exit,
or throw that may have happened.
## Examples
try do
do_something_that_may_fail(some_arg)
rescue
ArgumentError ->
IO.puts "Invalid argument given"
catch
value ->
IO.puts "caught #{value}"
else
value ->
IO.puts "Success! The result was #{value}"
after
IO.puts "This is printed regardless of whether it failed or succeeded"
end
The rescue clause is used to handle exceptions, while the catch
clause can be used to catch thrown values. The else clause can
be used to control flow based on the result of the expression.
Catch, rescue and else clauses work based on pattern matching.
Note that calls inside `try` are not tail recursive since the VM
needs to keep the stacktrace in case an exception happens.
## Rescue clauses
Besides relying on pattern matching, rescue clauses provide some
conveniences around exceptions that allow one to rescue an
exception by its name. All the following formats are valid rescue
expressions:
try do
UndefinedModule.undefined_function
rescue
UndefinedFunctionError -> nil
end
try do
UndefinedModule.undefined_function
rescue
[UndefinedFunctionError] -> nil
end
# rescue and bind to x
try do
UndefinedModule.undefined_function
rescue
x in [UndefinedFunctionError] -> nil
end
# rescue all and bind to x
try do
UndefinedModule.undefined_function
rescue
x -> nil
end
## Erlang errors
Erlang errors are transformed into Elixir ones during rescue:
try do
:erlang.error(:badarg)
rescue
ArgumentError -> :ok
end
The most common Erlang errors will be transformed into their
Elixir counterparts. Those which are not will be transformed
into `ErlangError`:
try do
:erlang.error(:unknown)
rescue
ErlangError -> :ok
end
In fact, `ErlangError` can be used to rescue any error that is
not a proper Elixir error. For example, it can be used to rescue
the earlier `:badarg` error too, prior to transformation:
try do
:erlang.error(:badarg)
rescue
ErlangError -> :ok
end
## Catching throws and exits
The catch clause can be used to catch thrown values and exits.
try do
exit(:shutdown)
catch
:exit, :shutdown -> IO.puts "Exited with shutdown reason"
end
try do
throw(:sample)
catch
:throw, :sample ->
IO.puts "sample thrown"
end
The catch clause also supports `:error`, as in Erlang, although this is
commonly avoided in favor of raise/rescue control mechanisms.
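For illustration, an `:error` raised from Erlang can be caught directly
(though rescue is usually preferred):

```elixir
result =
  try do
    :erlang.error(:oops)
  catch
    # Catch the Erlang error by kind and value
    :error, :oops -> "caught an error"
  end

result #=> "caught an error"
```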
## Else clauses
Else clauses allow the result of the expression to be pattern
matched on:
x = 2
try do
1 / x
rescue
ArithmeticError ->
:infinity
else
y when y < 1 and y > -1 ->
:small
_ ->
:large
end
If an else clause is not present, the result of the expression will
be returned, provided no exceptions are raised:
x = 1
^x =
try do
1 / x
rescue
ArithmeticError ->
:infinity
end
However, when an else clause is present but the result of the expression
does not match any of the patterns, an exception will be raised. This
exception will not be caught by a catch or rescue in the same try:
x = 1
try do
try do
1 / x
rescue
# The TryClauseError can not be rescued here:
TryClauseError ->
:error_a
else
0 ->
:small
end
rescue
# The TryClauseError is rescued here:
TryClauseError ->
:error_b
end
Similarly, an exception inside an else clause is not caught or rescued
inside the same `try`:
try do
try do
nil
catch
# The exit(1) call below can not be caught here:
:exit, _ ->
:exit_a
else
_ ->
exit(1)
end
catch
# The exit is caught here:
:exit, _ ->
:exit_b
end
This means the VM no longer needs to keep the stacktrace once inside
an else clause, so tail recursion is possible when using a `try`
with a tail call as the final call inside an else clause. The same
is true for `rescue` and `catch` clauses.
## Variable handling
Since an expression inside `try` may not have been evaluated
due to an exception, any variable created inside `try` cannot
be accessed externally. For instance:
try do
x = 1
do_something_that_may_fail(same_arg)
:ok
catch
_, _ -> :failed
end
x #=> unbound variable `x`
In the example above, `x` cannot be accessed since it was defined
inside the `try` clause. A common practice to address this issue
is to return the variables defined inside `try`:
x =
try do
x = 1
do_something_that_may_fail(same_arg)
x
catch
_, _ -> :failed
end
"""
defmacro try(args)
@doc """
Checks if there is a message matching the given clauses
in the current process mailbox.
If there is no such message, the current process hangs
until a message arrives, or until the given timeout expires.
## Examples
receive do
{:selector, i, value} when is_integer(i) ->
value
value when is_atom(value) ->
value
_ ->
IO.puts :stderr, "Unexpected message received"
end
An optional after clause can be given in case the message was not
received after the specified period of time:
receive do
{:selector, i, value} when is_integer(i) ->
value
value when is_atom(value) ->
value
_ ->
IO.puts :stderr, "Unexpected message received"
after
5000 ->
IO.puts :stderr, "No message in 5 seconds"
end
The `after` clause can be specified even if there are no match clauses.
There are two special cases for the timeout value given to `after`:
* `:infinity` - the process should wait indefinitely for a matching
message, this is the same as not using a timeout
* 0 - if there is no matching message in the mailbox, the timeout
will occur immediately
## Variables handling
The `receive` special form handles variables exactly as the `case`
special macro. For more information, check the docs for `case/2`.
"""
defmacro receive(args)
end
----- lib/elixir/lib/kernel/special_forms.ex -----
defmodule GameServer.DaraDots.DaraDotsGame do
use GenServer
alias Phoenix.PubSub
alias GameServer.DaraDots.{Board, Coordinate}
@broadcast_frequency 70
def start(id) do
GenServer.start(__MODULE__, id, name: via_tuple(id))
end
defp via_tuple(id) do
{:via, Registry, {GameServer.Registry, {__MODULE__, id}}}
end
@impl GenServer
def init(game_id) do
# Distances are represented as percentages for the board to display
initial_state = %{
game_id: game_id
}
# Setup the initial pieces
{:ok, board} = Board.new()
initial_state = Map.put(initial_state, :board, board)
# Start the regular state broadcasting
Process.send_after(self(), :broadcast_game_state, @broadcast_frequency)
{:ok, initial_state}
end
@impl GenServer
def handle_info(:broadcast_game_state, state) do
Process.send_after(self(), :broadcast_game_state, @broadcast_frequency)
broadcast_game_state(state)
{:noreply, state}
end
defp broadcast_game_state(state) do
# generate the game state to be broadcast
state_to_broadcast = %{
dots:
Enum.map(
state.board.dot_coords,
fn coord -> coord |> Coordinate.to_list end
),
bot_alpha: state.board.bot_linker_alpha.coord |> Coordinate.to_list,
bot_beta: state.board.bot_linker_beta.coord |> Coordinate.to_list,
top_alpha: state.board.top_linker_alpha.coord |> Coordinate.to_list,
top_beta: state.board.top_linker_beta.coord |> Coordinate.to_list,
movable_dots:
Enum.map(
Board.get_movable_coords(state.board, :top_linker_beta) |> MapSet.to_list(),
fn coord -> Coordinate.to_list(coord) end
),
runner_pieces:
Enum.map(
MapSet.to_list(state.board.runner_pieces),
fn runner -> Coordinate.to_list(runner.coord) end
)
}
PubSub.broadcast(
GameServer.PubSub,
"dara_dots_game:#{state.game_id}",
{:new_game_state, state_to_broadcast}
)
end
end
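The broadcast payload above flattens each `Coordinate` struct into a plain list before publishing over PubSub. A minimal standalone sketch of that shaping step, using hypothetical `{row, col}` tuples in place of the real `Coordinate` structs:

```elixir
# Hypothetical {row, col} tuples standing in for Coordinate structs.
coords = [{1, 2}, {3, 4}]

# Shape each coordinate into a JSON-friendly list, as broadcast_game_state/1
# does with Coordinate.to_list/1.
payload = Enum.map(coords, fn {row, col} -> [row, col] end)

IO.inspect(payload)
# [[1, 2], [3, 4]]
```

Keeping the payload as plain lists and maps (rather than structs) means it can be JSON-encoded directly for the browser clients subscribed to the `"dara_dots_game:..."` topic.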
----- apps/game_server/lib/game_server/dara_dots/dara_dots_game.ex -----
defmodule VehicleHelpers do
@moduledoc """
Various functions for working on lists of vehicle to show on a map, or render tooltips.
"""
alias Vehicles.Vehicle
alias Predictions.Prediction
alias Routes.{Route, Shape}
alias Stops.Stop
alias Schedules.Trip
alias SiteWeb.ScheduleController.VehicleLocations
import Routes.Route, only: [vehicle_name: 1]
import Phoenix.HTML.Tag, only: [content_tag: 2, content_tag: 3]
import Phoenix.HTML, only: [safe_to_string: 1]
import SiteWeb.ViewHelpers, only: [format_schedule_time: 1]
@type tooltip_index_key :: {Trip.id_t() | nil, Stop.id_t()} | Stop.id_t()
@type tooltip_index :: %{
optional({Trip.id_t() | nil, Stop.id_t()}) => VehicleTooltip.t(),
optional(Stop.id_t()) => VehicleTooltip.t()
}
@doc """
There are multiple places where vehicle tooltips are used. This function is called from the controller to
construct a convenient map that can be used in views / templates to determine if a tooltip is available
and to fetch all of the required data
"""
@spec build_tooltip_index(Route.t(), VehicleLocations.t(), [Prediction.t()]) :: tooltip_index
def build_tooltip_index(route, vehicle_locations, vehicle_predictions) do
indexed_predictions = index_vehicle_predictions(vehicle_predictions)
vehicle_locations
|> Stream.reject(fn {{_trip_id, stop_id}, _status} -> is_nil(stop_id) end)
|> Enum.reduce(%{}, fn vehicle_location, output ->
{{trip_id, child_stop_id}, vehicle_status} = vehicle_location
{prediction, trip} =
if trip_id do
{
prediction_for_stop(indexed_predictions, trip_id, child_stop_id),
Schedules.Repo.trip(trip_id)
}
else
{nil, nil}
end
parent_stop = Stops.Repo.get_parent(child_stop_id)
stop_id = stop_id(parent_stop, child_stop_id)
tooltip = %VehicleTooltip{
vehicle: vehicle_status,
prediction: prediction,
stop_name: stop_name(parent_stop),
trip: trip,
route: route
}
output
|> Map.put(stop_id, tooltip)
|> Map.put({trip_id, stop_id}, tooltip)
end)
end
@spec prediction_for_stop(VehicleLocations.t(), String.t(), String.t()) :: Prediction.t() | nil
defp prediction_for_stop(vehicle_predictions, trip_id, stop_id) do
Map.get(vehicle_predictions, {trip_id, stop_id})
end
@spec index_vehicle_predictions([Prediction.t()]) :: %{
{String.t(), String.t()} => Prediction.t()
}
defp index_vehicle_predictions(predictions) do
predictions
|> Stream.filter(&(&1.trip && &1.stop))
|> Stream.map(&{{&1.trip.id, &1.stop.id}, &1})
|> Enum.into(Map.new())
end
@spec stop_name(Stops.Stop.t() | nil) :: String.t()
defp stop_name(nil), do: ""
defp stop_name(stop), do: stop.name
@spec stop_id(Stops.Stop.t() | nil, String.t()) :: String.t()
defp stop_id(nil, child_stop_id), do: child_stop_id
defp stop_id(stop, _), do: stop.id
@doc """
Get polylines for vehicles that didn't already have their shape included when the route polylines were requested
"""
@spec get_vehicle_polylines(VehicleLocations.t(), [Shape.t()]) :: [String.t()]
def get_vehicle_polylines(locations, route_shapes) do
vehicle_shape_ids = vehicle_shape_ids(locations)
route_shape_ids = MapSet.new(route_shapes, & &1.id)
vehicle_shape_ids
|> MapSet.difference(route_shape_ids)
|> Enum.map(&Routes.Repo.get_shape(&1))
|> Enum.flat_map(fn
[] ->
[]
[%Shape{} = shape | _] ->
[shape.polyline]
end)
end
@spec vehicle_shape_ids(VehicleLocations.t()) :: MapSet.t()
defp vehicle_shape_ids(locations) do
for {_, value} <- locations,
is_binary(value.shape_id),
into: MapSet.new() do
value.shape_id
end
end
@doc """
Function used to return tooltip text for a VehicleTooltip struct
"""
@spec tooltip(VehicleTooltip.t() | nil) :: Phoenix.HTML.Safe.t()
def tooltip(nil) do
""
end
def tooltip(%{
prediction: prediction,
vehicle: vehicle,
trip: trip,
stop_name: stop_name,
route: route
}) do
# Get stop name from vehicle if present, otherwise use provided predicted stop_name
stop_name =
if vehicle.stop_id do
case Stops.Repo.get_parent(vehicle.stop_id) do
nil -> stop_name
%Stops.Stop{name: name} -> name
end
else
stop_name
end
time_text = prediction_time_text(prediction)
status_text = prediction_status_text(prediction)
stop_text = realtime_stop_text(trip, stop_name, vehicle, route)
build_tooltip(time_text, status_text, stop_text)
end
@spec prediction_status_text(Prediction.t() | nil) :: iodata
defp prediction_status_text(%Prediction{status: status, track: track})
when not is_nil(track) and not is_nil(status) do
[String.capitalize(status), " on track ", track]
end
defp prediction_status_text(_) do
[]
end
@spec prediction_time_text(Prediction.t() | nil) :: iodata
defp prediction_time_text(nil) do
[]
end
defp prediction_time_text(%Prediction{time: nil}) do
[]
end
defp prediction_time_text(%Prediction{time: time, departing?: true}) do
["Expected departure at ", format_schedule_time(time)]
end
defp prediction_time_text(%Prediction{time: time}) do
["Expected arrival at ", format_schedule_time(time)]
end
@spec realtime_stop_text(Trip.t() | nil, String.t(), Vehicle.t() | nil, Route.t()) :: iodata
defp realtime_stop_text(trip, stop_name, %Vehicle{status: status}, route) do
[
display_headsign_text(route, trip),
String.downcase(vehicle_name(route)),
display_trip_name(route, trip)
] ++
realtime_status_with_stop(status, stop_name)
end
@spec display_headsign_text(Route.t(), Trip.t() | nil) :: iodata
defp display_headsign_text(_, %{headsign: headsign}), do: [headsign, " "]
defp display_headsign_text(%{name: name}, _), do: [name, " "]
defp display_headsign_text(_, _), do: ""
@spec realtime_status_with_stop(atom, String.t()) :: iodata()
defp realtime_status_with_stop(_status, "") do
[]
end
defp realtime_status_with_stop(status, stop_name) do
[
realtime_status_text(status),
stop_name
]
end
@spec realtime_status_text(atom) :: String.t()
defp realtime_status_text(:incoming), do: " is arriving at "
defp realtime_status_text(:stopped), do: " has arrived at "
defp realtime_status_text(:in_transit), do: " is on the way to "
@spec display_trip_name(Route.t(), Trip.t() | nil) :: iodata
defp display_trip_name(%{type: 2}, %{name: name}), do: [" ", name]
defp display_trip_name(_, _), do: ""
@spec build_tooltip(iodata, iodata, iodata) :: String.t()
defp build_tooltip(time_text, status_text, stop_text) do
time_tag = do_build_tooltip(time_text)
status_tag = do_build_tooltip(status_text)
stop_tag = do_build_tooltip(stop_text)
:div
|> content_tag([stop_tag, time_tag, status_tag])
|> safe_to_string
|> String.replace(~s("), ~s('))
end
@spec do_build_tooltip(iodata) :: Phoenix.HTML.Safe.t()
defp do_build_tooltip([]) do
""
end
defp do_build_tooltip(text) do
content_tag(:p, text, class: 'prediction-tooltip')
end
end
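The tooltip helpers above build their text as iodata (nested lists of strings) and only flatten at the very end, inside `build_tooltip/3`. A small standalone sketch of that pattern, with made-up tooltip fragments:

```elixir
# Made-up fragments mirroring the shape returned by prediction_time_text/1
# and realtime_stop_text/4.
time_text = ["Expected arrival at ", "10:30 AM"]
stop_text = ["Green Line train", " is arriving at ", "Park Street"]

# iodata stays a nested list until explicitly flattened to a binary.
tooltip = IO.iodata_to_binary([stop_text, " - ", time_text])

IO.puts(tooltip)
# Green Line train is arriving at Park Street - Expected arrival at 10:30 AM
```

Building iodata instead of concatenating binaries avoids copying intermediate strings; the empty list `[]` returned by the fallback clauses simply contributes nothing when flattened.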
----- apps/site/lib/vehicle_helpers.ex -----
defmodule Mix.Tasks.Deps do
use Mix.Task
import Mix.Dep, only: [loaded: 1, format_dep: 1, format_status: 1, check_lock: 1]
@shortdoc "Lists dependencies and their status"
@moduledoc ~S"""
Lists all dependencies and their status.
Dependencies must be specified in the `mix.exs` file in one of
the following formats:
{app, requirement}
{app, opts}
{app, requirement, opts}
Where:
* app is an atom
* requirement is a `Version` requirement or a regular expression
* opts is a keyword list of options
For example:
{:plug, ">= 0.4.0"}
{:gettext, git: "https://github.com/elixir-lang/gettext.git", tag: "0.1"}
{:local_dependency, path: "path/to/local_dependency"}
By default, dependencies are fetched using the [Hex package manager](https://hex.pm/):
{:plug, ">= 0.4.0"}
By specifying such dependencies, Mix will automatically install
Hex (if it wasn't previously installed) and download a package
suitable to your project.
Mix also supports Git and path dependencies:
{:foobar, git: "https://github.com/elixir-lang/foobar.git", tag: "0.1"}
{:foobar, path: "path/to/foobar"}
And also in umbrella dependencies:
{:my_app, in_umbrella: true}
Path and in umbrella dependencies are automatically recompiled by
the parent project whenever they change. While fetchable dependencies,
like the ones using `:git`, are recompiled only when fetched/updated.
The dependencies' versions are expected to be formatted according to
Semantic Versioning and the requirements must be specified as defined
in the `Version` module.
## Options
Below we provide a more detailed look into the available options.
### Dependency definition options
* `:app` - when set to `false`, does not read the app file for this
dependency. By default, the app file is read
* `:env` - the environment (as an atom) to run the dependency on; defaults to `:prod`
* `:compile` - a command (string) to compile the dependency; defaults to a `mix`,
`rebar` or `make` command
* `:optional` - marks the dependency as optional. In such cases, the
current project will always include the optional dependency but any
other project that depends on the current project won't be forced to
use the optional dependency. However, if the other project includes
the optional dependency on its own, the requirements and options
specified here will also be applied.
* `:only` - the dependency is made available only in the given environments,
useful when declaring dev- or test-only dependencies; by default the
dependency will be available in all environments. The value of this option
can either be a single environment (like `:dev`) or a list of environments
(like `[:dev, :test]`)
* `:override` - if set to `true` the dependency will override any other
definitions of itself by other dependencies
* `:manager` - Mix can also compile Rebar, Rebar3 and makefile projects
and can fetch sub dependencies of Rebar and Rebar3 projects. Mix will
try to infer the type of project but it can be overridden with this
option by setting it to `:mix`, `:rebar3`, `:rebar` or `:make`. In case
there are conflicting definitions, the first manager in the list above
will be picked up. For example, if a dependency is found with `:rebar3`
and `:rebar` managers in different part of the trees, `:rebar3` will
be automatically picked. You can find the manager by running `mix deps`
and override it by setting the `:override` option in a top-level project.
* `:runtime` - whether the dependency is part of runtime applications.
Defaults to `true` which automatically adds the application to the list
of apps that are started automatically and included in releases
### Git options (`:git`)
* `:git` - the Git repository URI
* `:github` - a shortcut for specifying Git repos from GitHub, uses `git:`
* `:ref` - the reference to checkout (may be a branch, a commit SHA or a tag)
* `:branch` - the Git branch to checkout
* `:tag` - the Git tag to checkout
* `:submodules` - when `true`, initialize submodules for the repo
* `:sparse` - checkout a single directory inside the Git repository and use it
as your Mix dependency. Search "sparse git checkouts" for more information.
If your Git repository requires authentication, such as basic username:password
HTTP authentication via URLs, it can be achieved via Git configuration, keeping
the access rules outside of source control.
git config --global url."https://YOUR_USER:[email protected]/".insteadOf "https://example.com/"
For more information, see the `git config` documentation:
https://git-scm.com/docs/git-config#git-config-urlltbasegtinsteadOf
### Path options (`:path`)
* `:path` - the path for the dependency
* `:in_umbrella` - when `true`, sets a path dependency pointing to
"../#{app}", sharing the same environment as the current application
### Hex options (`:hex`)
See the [Hex usage documentation](https://hex.pm/docs/usage) for Hex options.
## Deps task
The `mix deps` task lists all dependencies in the following format:
APP VERSION (SCM) (MANAGER)
[locked at REF]
STATUS
It supports the following options:
* `--all` - checks all dependencies, regardless of specified environment
"""
@spec run(OptionParser.argv()) :: :ok
def run(args) do
Mix.Project.get!()
{opts, _, _} = OptionParser.parse(args)
loaded_opts = if opts[:all], do: [], else: [env: Mix.env()]
shell = Mix.shell()
Enum.each(loaded(loaded_opts), fn dep ->
%Mix.Dep{scm: scm, manager: manager} = dep
dep = check_lock(dep)
extra = if manager, do: " (#{manager})", else: ""
shell.info("* #{format_dep(dep)}#{extra}")
if formatted = scm.format_lock(dep.opts) do
shell.info(" locked at #{formatted}")
end
shell.info(" #{format_status(dep)}")
end)
end
end
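The three dependency formats described in the moduledoc are ordinary Elixir tuples, so they can be pattern matched directly. A sketch that normalizes the two-element forms (the package names are the illustrative ones from the docs above):

```elixir
deps = [
  {:plug, ">= 0.4.0"},
  {:gettext, git: "https://github.com/elixir-lang/gettext.git", tag: "0.1"},
  {:local_dependency, path: "path/to/local_dependency"}
]

# Distinguish {app, requirement} from {app, opts}: a requirement is a
# binary, while opts is a keyword list.
normalized =
  Enum.map(deps, fn
    {app, req} when is_binary(req) -> {app, req, []}
    {app, opts} when is_list(opts) -> {app, nil, opts}
    {app, req, opts} -> {app, req, opts}
  end)

IO.inspect(hd(normalized))
# {:plug, ">= 0.4.0", []}
```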
----- lib/mix/lib/mix/tasks/deps.ex -----
defmodule FinTex.Model.Transaction do
@moduledoc """
The following fields are public:
* `name` - Name of originator or recipient
* `account_number` - Account number of originator or recipient. Empty if transaction has no account number.
* `bank_code` - Bank code of originator or recipient. Empty if transaction has no bank code.
* `amount` - Transaction amount
* `booking_date` - Booking date
* `value_date` - Value date
* `purpose` - Purpose text. This field might be empty if the transaction has no purpose
* `code` - Business transaction code
* `booking_text` - Booking text. This field might be empty if the transaction has no booking text
* `booked` - This flag indicates whether the transaction is booked or pending
"""
@type t :: %__MODULE__{
name: binary,
account_number: binary,
bank_code: binary,
amount: %Decimal{},
booking_date: DateTime.t,
value_date: DateTime.t,
purpose: binary,
code: non_neg_integer,
booking_text: binary,
booked: boolean
}
defstruct [
:name,
:account_number,
:bank_code,
:amount,
:booking_date,
:value_date,
:purpose,
:code,
:booking_text,
:booked
]
@doc false
def from_statement(%MT940.StatementLineBundle{
account_holder: account_holder,
account_number: account_number,
amount: amount,
bank_code: bank_code,
code: code,
details: details,
entry_date: entry_date,
funds_code: funds_code,
transaction_description: transaction_description,
value_date: value_date
}) do
sign = case funds_code do
:credit -> +1
:debit -> -1
:return_credit -> -1
:return_debit -> +1
end
%__MODULE__{
name: account_holder |> Enum.join(" "),
account_number: account_number,
bank_code: bank_code,
amount: amount |> Decimal.mult(sign |> Decimal.new),
booking_date: entry_date,
value_date: value_date,
purpose: details,
code: code,
booking_text: transaction_description
}
end
end
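The `funds_code` case above maps the four MT940 credit/debit variants onto a sign before multiplying the `Decimal` amount. The same mapping sketched with plain integers (`Decimal` is left out to keep the snippet dependency-free):

```elixir
# Sign convention from from_statement/1: returns invert the original direction.
sign = fn
  :credit -> 1
  :debit -> -1
  :return_credit -> -1
  :return_debit -> 1
end

# A signed amount, as from_statement/1 computes with Decimal.mult/2.
signed = fn amount, funds_code -> amount * sign.(funds_code) end

IO.inspect(signed.(100, :debit))
# -100
```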
----- lib/model/transaction.ex -----
defmodule ExWire.Packet.NewBlockHashes do
@moduledoc """
Advertises new blocks to the network.
```
**NewBlockHashes** [`+0x01`: `P`, [`hash_0`: `B_32`, `number_0`: `P`], [`hash_1`: `B_32`, `number_1`: `P`], ...]
Specify one or more new blocks which have appeared on the
network. To be maximally helpful, nodes should inform peers of all blocks that
they may not be aware of. Including hashes that the sending peer could
reasonably be considered to know (due to the fact they were previously
informed of because that node has itself advertised knowledge of the hashes
through NewBlockHashes) is considered Bad Form, and may reduce the reputation
of the sending node. Including hashes that the sending node later refuses to
honour with a proceeding GetBlockHeaders message is considered Bad Form, and
may reduce the reputation of the sending node.
```
"""
@behaviour ExWire.Packet
@type t :: %__MODULE__{
hashes: [ExWire.Packet.block_hash()]
}
defstruct [
:hashes
]
@doc """
Given a NewBlockHashes packet, serializes for transport over Eth Wire Protocol.
## Examples
iex> %ExWire.Packet.NewBlockHashes{hashes: [{<<5>>, 1}, {<<6>>, 2}]}
...> |> ExWire.Packet.NewBlockHashes.serialize()
[[<<5>>, 1], [<<6>>, 2]]
iex> %ExWire.Packet.NewBlockHashes{hashes: []}
...> |> ExWire.Packet.NewBlockHashes.serialize()
[]
"""
@spec serialize(t) :: ExRLP.t()
def serialize(packet = %__MODULE__{}) do
for {hash, number} <- packet.hashes, do: [hash, number]
end
@doc """
Given an RLP-encoded NewBlockHashes packet from Eth Wire Protocol,
decodes into a NewBlockHashes struct.
## Examples
iex> ExWire.Packet.NewBlockHashes.deserialize([[<<5>>, 1], [<<6>>, 2]])
%ExWire.Packet.NewBlockHashes{hashes: [{<<5>>, 1}, {<<6>>, 2}]}
iex> ExWire.Packet.NewBlockHashes.deserialize([])
** (MatchError) no match of right hand side value: []
"""
@spec deserialize(ExRLP.t()) :: t
def deserialize(rlp) do
# must be an array with at least one element
hash_lists = [_h | _t] = rlp
if Enum.count(hash_lists) > 256, do: raise("Too many hashes")
hashes = for [hash, number] <- hash_lists, do: {hash, number}
%__MODULE__{
hashes: hashes
}
end
@doc """
Handles a NewBlockHashes message. This is when a peer wants to
inform us that she knows about new blocks. For now, we'll do nothing.
## Examples
iex> %ExWire.Packet.NewBlockHashes{hashes: [{<<5>>, 1}, {<<6>>, 2}]}
...> |> ExWire.Packet.NewBlockHashes.handle()
:ok
"""
@spec handle(ExWire.Packet.packet()) :: ExWire.Packet.handle_response()
def handle(_packet = %__MODULE__{}) do
# TODO: Do something
:ok
end
end
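`serialize/1` and `deserialize/1` above are inverse list comprehensions over `{hash, number}` pairs. A standalone round-trip sketch with dummy hashes:

```elixir
hashes = [{<<5>>, 1}, {<<6>>, 2}]

# serialize: each pair becomes a two-element list, ready for RLP encoding.
rlp = for {hash, number} <- hashes, do: [hash, number]

# deserialize: the two-element lists become tuples again.
round_tripped = for [hash, number] <- rlp, do: {hash, number}

# Pin to assert the round trip is lossless.
^hashes = round_tripped
IO.inspect(rlp)
# [[<<5>>, 1], [<<6>>, 2]]
```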
----- apps/ex_wire/lib/ex_wire/packet/new_block_hashes.ex -----
defmodule Crux.Structs.Message do
@moduledoc """
Represents a Discord [Message Object](https://discord.com/developers/docs/resources/channel#message-object).
Differences opposed to the Discord API Object:
- `:mentions` is a MapSet of user ids
"""
@moduledoc since: "0.1.0"
@behaviour Crux.Structs
alias Crux.Structs
alias Crux.Structs.{
Application,
Attachment,
Embed,
Member,
Message,
Reaction,
Snowflake,
Sticker,
User,
Util
}
defstruct [
:id,
:channel_id,
:guild_id,
:author,
:member,
:content,
:timestamp,
:edited_timestamp,
:tts,
:mention_everyone,
:mentions,
:mention_roles,
:mention_channels,
:attachments,
:embeds,
:reactions,
:nonce,
:pinned,
:webhook_id,
:type,
:activity,
:application,
:message_reference,
:flags,
:stickers,
:referenced_message,
:interaction
]
@typedoc since: "0.2.1"
@type message_activity :: %{
optional(:party_id) => String.t(),
type: integer()
}
@typedoc since: "0.2.1"
@type mention_channel :: %{
id: Snowflake.t(),
guild_id: Snowflake.t(),
name: String.t(),
type: non_neg_integer()
}
@typedoc """
* `message_id` is `nil` for the initial message sent when a user starts following a channel.
* `guild_id` is only `nil` for some messages during the initial rollout of this feature.
"""
@typedoc since: "0.2.1"
@type message_reference :: %{
message_id: Snowflake.t() | nil,
guild_id: Snowflake.t() | nil,
channel_id: Snowflake.t()
}
@typedoc """
Additional information for interaction response messages.
For more information see the [Discord Developer Documentation](https://discord.com/developers/docs/interactions/slash-commands#messageinteraction).
"""
@typedoc since: "0.3.0"
@type message_interaction :: %{
id: Snowflake.t(),
type: 1..2,
name: String.t(),
user: User.t()
}
@typedoc since: "0.1.0"
@type t :: %__MODULE__{
id: Snowflake.t(),
channel_id: Snowflake.t(),
guild_id: Snowflake.t() | nil,
author: User.t(),
member: Member.t() | nil,
content: String.t(),
timestamp: String.t(),
edited_timestamp: String.t() | nil,
tts: boolean(),
mention_everyone: boolean(),
mentions: MapSet.t(Snowflake.t()),
mention_roles: MapSet.t(Snowflake.t()),
mention_channels: [mention_channel()],
attachments: [Attachment.t()],
embeds: [Embed.t()],
reactions: %{String.t() => Reaction.t()},
nonce: String.t() | nil,
pinned: boolean(),
webhook_id: Snowflake.t() | nil,
type: integer(),
activity: message_activity() | nil,
application: Application.t() | nil,
message_reference: message_reference() | nil,
flags: Message.Flags.t(),
interaction: message_interaction() | nil
}
@typedoc """
All available types that can be resolved into a message id.
"""
@typedoc since: "0.2.1"
@type id_resolvable() :: Message.t() | Snowflake.t() | String.t()
@doc """
Creates a `t:Crux.Structs.Message.t/0` struct from raw data.
> Automatically invoked by `Crux.Structs.create/2`.
"""
@doc since: "0.1.0"
@spec create(data :: map()) :: t()
def create(data) do
data =
data
|> Util.atomify()
|> Map.update(:application, nil, &Structs.create(&1, Application))
|> Map.update(:attachments, [], &Structs.create(&1, Attachment))
|> Map.update!(:author, &Structs.create(&1, User))
|> Map.update!(:channel_id, &Snowflake.to_snowflake/1)
|> Map.update(:embeds, [], &Structs.create(&1, Embed))
|> Map.update(:guild_id, nil, &Snowflake.to_snowflake/1)
|> Map.update!(:id, &Snowflake.to_snowflake/1)
|> Map.update(:mention_channels, [], &create_mention_channel/1)
|> Map.update(
:mention_roles,
%MapSet{},
&MapSet.new(&1, fn role_id -> Snowflake.to_snowflake(role_id) end)
)
|> Map.update(:mentions, %MapSet{}, &MapSet.new(&1, Util.map_to_id()))
|> Map.update(:message_reference, nil, &create_message_reference/1)
|> Map.update(
:reactions,
%{},
&Map.new(&1, fn reaction ->
reaction = Structs.create(reaction, Reaction)
key = reaction.emoji.id || reaction.emoji.name
{key, reaction}
end)
)
|> Map.update(:webhook_id, nil, &Snowflake.to_snowflake/1)
|> Map.update(:flags, nil, &Message.Flags.resolve/1)
|> Map.update(:stickers, nil, &Util.raw_data_to_map(&1, Sticker))
|> Map.update(:referenced_message, nil, fn
nil -> nil
message -> create(message)
end)
|> Map.update(:interaction, nil, &create_interaction/1)
message = Map.update(data, :member, nil, create_member(data))
struct(__MODULE__, message)
end
defp create_mention_channel(mention_channels)
when is_list(mention_channels) do
Enum.map(mention_channels, &create_mention_channel/1)
end
defp create_mention_channel(%{} = mention_channel) do
mention_channel
|> Map.update!(:id, &Snowflake.to_snowflake/1)
|> Map.update!(:guild_id, &Snowflake.to_snowflake/1)
end
defp create_message_reference(%{} = message_reference) do
message_reference
|> Map.update(:message_id, nil, &Snowflake.to_snowflake/1)
|> Map.update(:channel_id, nil, &Snowflake.to_snowflake/1)
|> Map.update(:guild_id, nil, &Snowflake.to_snowflake/1)
end
defp create_member(data) do
fn member ->
member
|> Map.put(:guild_id, data.guild_id)
|> Map.put(:user, %{id: data.author.id})
|> Structs.create(Member)
end
end
defp create_interaction(nil), do: nil
defp create_interaction(interaction) do
interaction
|> Map.update!(:id, &Snowflake.to_snowflake/1)
|> Map.update!(:user, &Structs.create(&1, User))
end
defimpl String.Chars, for: Crux.Structs.Message do
@spec to_string(Message.t()) :: String.t()
def to_string(%Message{content: content}), do: content
end
end
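`create/1` above turns the raw mention list from Discord into a `MapSet` of user ids, as the moduledoc notes. A minimal sketch of that transformation with made-up raw data:

```elixir
# Made-up raw mention maps as they might arrive from the gateway.
raw_mentions = [%{id: 123, username: "a"}, %{id: 456, username: "b"}]

# Keep only the ids, deduplicated, mirroring
# Map.update(:mentions, %MapSet{}, &MapSet.new(&1, Util.map_to_id())).
mentions = MapSet.new(raw_mentions, & &1.id)

IO.inspect(MapSet.member?(mentions, 123))
# true
```

Storing ids rather than full user structs keeps the message struct small; the users themselves can be looked up in a cache when needed.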
----- lib/structs/message.ex -----
defmodule Pandex do
@readers [
"markdown",
"markdown_github",
"markdown_strict",
"markdown_mmd",
"markdown_phpextra",
"commonmark",
"json",
"rst",
"textile",
"html",
"latex"
]
@writers [
"json",
"html",
"html5",
"s5",
"slidy",
"dzslides",
"docbook",
"docx",
"man",
"opendocument",
"latex",
"beamer",
"context",
"texinfo",
"markdown",
"markdown_github",
"markdown_strict",
"markdown_mmd",
"markdown_phpextra",
"commonmark",
"plain",
"rst",
"mediawiki",
"textile",
"rtf",
"org",
"asciidoc",
"pdf"
]
@moduledoc ~S"""
Pandex is a lightweight Elixir wrapper for [Pandoc](http://pandoc.org). It enables you to convert Markdown, CommonMark, HTML, LaTeX, reStructuredText, Textile, and JSON to HTML, HTML5, OpenDocument, RTF, Textile, AsciiDoc, Markdown, JSON, and other formats. Pandex has no dependencies other than Pandoc itself.
Pandex enables you to perform any combination of the conversion below:
|Convert From (any)| Convert To (any) |
|:-----------------|:-------------------|
|markdown | json |
|markdown_github | html |
|markdown_strict | html5 |
|markdown_mmd | s5 |
|commonmark | slidy |
|json | dzslides |
|rst | docbook |
|textile | man |
|html | opendocument |
|latex | latex |
|markdown_phpextra | beamer |
| | context |
| | texinfo |
| | markdown |
| | markdown_github |
| | markdown_strict |
| | markdown_mmd |
| | markdown_phpextra |
| | commonmark |
| | plain |
| | rst |
| | mediawiki |
| | textile |
| | rtf |
| | org |
| | asciidoc |
# Usage
Pandex follows the syntax of `<format from>_to_<format to> <string>`
## Examples:
iex> Pandex.markdown_to_html "# Title \n\n## List\n\n- one\n- two\n- three\n"
{:ok, "<h1 id=\"title\">Title</h1>\n<h2 id=\"list\">List</h2>\n<ul>\n<li>one</li>\n<li>two</li>\n<li>three</li>\n</ul>\n"}
iex> Pandex.html_to_commonmark "<h1 id=\"title\">Title</h1>\n<h2 id=\"list\">List</h2>\n<ul>\n<li>one</li>\n<li>two</li>\n<li>three</li>\n</ul>\n"
{:ok, "# Title\n\n## List\n\n - one\n - two\n - three\n"}
iex> Pandex.html_to_opendocument "<h1 id=\"title\">Title</h1>\n<h2 id=\"list\">List</h2>\n<ul>\n<li>one</li>\n<li>two</li>\n<li>three</li>\n</ul>\n"
{:ok, "<text:h text:style-name=\"Heading_20_1\" text:outline-level=\"1\"><text:bookmark-start text:name=\"title\" />Title<text:bookmark-end text:name=\"title\" /></text:h>\n<text:h text:style-name=\"Heading_20_2\" text:outline-level=\"2\"><text:bookmark-start text:name=\"list\" />List<text:bookmark-end text:name=\"list\" /></text:h>\n<text:list text:style-name=\"L1\">\n <text:list-item>\n <text:p text:style-name=\"P1\">one</text:p>\n </text:list-item>\n <text:list-item>\n <text:p text:style-name=\"P1\">two</text:p>\n </text:list-item>\n <text:list-item>\n <text:p text:style-name=\"P1\">three</text:p>\n </text:list-item>\n</text:list>\n"}
iex> Pandex.commonmark_to_latex "# Title \n\n## List\n\n- one\n- two\n- three\n"
{:ok, "\\section{Title}\n\n\\subsection{List}\n\n\\begin{itemize}\n\\tightlist\n\\item\n one\n\\item\n two\n\\item\n three\n\\end{itemize}\n"}
iex> Pandex.latex_to_html5 "\\section{Title}\n\n\\subsection{List}\n\n\\begin{itemize}\n\\tightlist\n\\item\n one\n\\item\n two\n\\item\n three\n\\end{itemize}\n"
{:ok, "<h1 id=\"title\">Title</h1>\n<h2 id=\"list\">List</h2>\n<ul>\n<li><p>one</p></li>\n<li><p>two</p></li>\n<li><p>three</p></li>\n</ul>\n"}
iex> Pandex.html_to_latex "<h1 id=\"title\">Title</h1>\n<h2 id=\"list\">List</h2>\n<ul>\n<li><p>one</p></li>\n<li><p>two</p></li>\n<li><p>three</p></li>\n</ul>\n"
{:ok, "\\hypertarget{title}{%\n\\section{Title}\\label{title}}\n\n\\hypertarget{list}{%\n\\subsection{List}\\label{list}}\n\n\\begin{itemize}\n\\item\n one\n\\item\n two\n\\item\n three\n\\end{itemize}\n"}
iex> Pandex.latex_to_json "\\section{Title}\\label{title}\n\n\\subsection{List}\\label{list}\n\n\\begin{itemize}\n\\item\n one\n\\item\n two\n\\item\n three\n\\end{itemize}\n"
{:ok, "{\"blocks\":[{\"t\":\"Header\",\"c\":[1,[\"title\",[],[]],[{\"t\":\"Str\",\"c\":\"Title\"}]]},{\"t\":\"Header\",\"c\":[2,[\"list\",[],[]],[{\"t\":\"Str\",\"c\":\"List\"}]]},{\"t\":\"BulletList\",\"c\":[[{\"t\":\"Para\",\"c\":[{\"t\":\"Str\",\"c\":\"one\"}]}],[{\"t\":\"Para\",\"c\":[{\"t\":\"Str\",\"c\":\"two\"}]}],[{\"t\":\"Para\",\"c\":[{\"t\":\"Str\",\"c\":\"three\"}]}]]}],\"pandoc-api-version\":[1,17,5,4],\"meta\":{}}\n"}
iex> Pandex.markdown_to_rst "# Title \n\n## List\n\n- one\n- two\n- three\n"
{:ok, "Title\n=====\n\nList\n----\n\n- one\n- two\n- three\n"}
iex> Pandex.markdown_to_rtf "# Title \n\n## List\n\n- one\n- two\n- three\n"
{:ok, "{\\pard \\ql \\f0 \\sa180 \\li0 \\fi0 \\b \\fs36 Title\\par}\n{\\pard \\ql \\f0 \\sa180 \\li0 \\fi0 \\b \\fs32 List\\par}\n{\\pard \\ql \\f0 \\sa0 \\li360 \\fi-360 \\bullet \\tx360\\tab one\\par}\n{\\pard \\ql \\f0 \\sa0 \\li360 \\fi-360 \\bullet \\tx360\\tab two\\par}\n{\\pard \\ql \\f0 \\sa0 \\li360 \\fi-360 \\bullet \\tx360\\tab three\\sa180\\par}\n\n"}
iex> Pandex.markdown_to_opendocument "# Title \n\n## List\n\n- one\n- two\n- three\n"
{:ok, "<text:h text:style-name=\"Heading_20_1\" text:outline-level=\"1\"><text:bookmark-start text:name=\"title\" />Title<text:bookmark-end text:name=\"title\" /></text:h>\n<text:h text:style-name=\"Heading_20_2\" text:outline-level=\"2\"><text:bookmark-start text:name=\"list\" />List<text:bookmark-end text:name=\"list\" /></text:h>\n<text:list text:style-name=\"L1\">\n <text:list-item>\n <text:p text:style-name=\"P1\">one</text:p>\n </text:list-item>\n <text:list-item>\n <text:p text:style-name=\"P1\">two</text:p>\n </text:list-item>\n <text:list-item>\n <text:p text:style-name=\"P1\">three</text:p>\n </text:list-item>\n</text:list>\n"}
iex> Pandex.commonmark_to_textile "# Title \n\n## List\n\n- one\n- two\n- three\n"
{:ok, "h1. Title\n\nh2. List\n\n* one\n* two\n* three\n\n"}
iex> Pandex.textile_to_markdown_github "h1. Title\n\nh2. List\n\n* one\n* two\n* three\n\n"
{:ok, "Title\n=====\n\nList\n----\n\n- one\n- two\n- three\n"}
iex> Pandex.textile_to_markdown_phpextra "h1. Title\n\nh2. List\n\n* one\n* two\n* three\n\n"
{:ok, "Title {#title}\n=====\n\nList {#list}\n----\n\n- one\n- two\n- three\n"}
iex> Pandex.textile_to_html5 "h1. Title\n\nh2. List\n\n* one\n* two\n* three\n\n"
{:ok, "<h1 id=\"title\">Title</h1>\n<h2 id=\"list\">List</h2>\n<ul>\n<li>one</li>\n<li>two</li>\n<li>three</li>\n</ul>\n"}
iex> Pandex.textile_to_opendocument "h1. Title\n\nh2. List\n\n* one\n* two\n* three\n\n"
{:ok, "<text:h text:style-name=\"Heading_20_1\" text:outline-level=\"1\"><text:bookmark-start text:name=\"title\" />Title<text:bookmark-end text:name=\"title\" /></text:h>\n<text:h text:style-name=\"Heading_20_2\" text:outline-level=\"2\"><text:bookmark-start text:name=\"list\" />List<text:bookmark-end text:name=\"list\" /></text:h>\n<text:list text:style-name=\"L1\">\n <text:list-item>\n <text:p text:style-name=\"P1\">one</text:p>\n </text:list-item>\n <text:list-item>\n <text:p text:style-name=\"P1\">two</text:p>\n </text:list-item>\n <text:list-item>\n <text:p text:style-name=\"P1\">three</text:p>\n </text:list-item>\n</text:list>\n"}
iex> Pandex.textile_to_asciidoc "h1. Title\n\nh2. List\n\n* one\n* two\n* three\n\n"
{:ok, "== Title\n\n=== List\n\n* one\n* two\n* three\n"}
iex> Pandex.markdown_to_asciidoc "# Title \n\n## List\n\n- one\n- two\n- three\n"
{:ok, "== Title\n\n=== List\n\n* one\n* two\n* three\n"}
"""
Enum.each(@readers, fn reader ->
Enum.each(@writers, fn writer ->
# Function names are atoms, so we convert the string to an atom here. You can also use:
# `name = reader <> "_to_" <> writer |> String.to_atom()`
# Converts a string from one format to another.
# Example: markdown_to_html5 "# Title \n\n## List\n\n- one\n- two\n- three\n"
def unquote(:"#{reader}_to_#{writer}")(string, options \\ []) do
convert_string(string, unquote(reader), unquote(writer), options)
end
# Converts a file from one format to another.
# Example: `markdown_file_to_html("sample.md")`
def unquote(:"#{reader}_file_to_#{writer}")(file, options \\ []) do
convert_file(file, unquote(reader), unquote(writer), options)
end
end)
end)
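# The loop above defines one conversion function per reader/writer pair at
# compile time. A minimal, self-contained sketch of the same pattern
# (module and names below are illustrative, not part of Pandex):

```elixir
defmodule Greeter do
  # Generate hello_<name>/0 at compile time, mirroring how Pandex
  # builds <reader>_to_<writer>/2 via unquote fragments.
  for name <- ["world", "elixir"] do
    def unquote(:"hello_#{name}")() do
      "hello " <> unquote(name)
    end
  end
end

Greeter.hello_world()
# => "hello world"
```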
@doc """
`convert_string` works under the hood of all the other string conversion functions.
"""
def convert_string(string, from \\ "markdown", to \\ "html", options \\ []) do
if !File.dir?(".temp"), do: File.mkdir(".temp")
name = ".temp/" <> random_name()
File.write(name, string)
{output, _} =
System.cmd("pandoc", Enum.concat([name, "--from=#{from}", "--to=#{to}"], options))
File.rm(name)
{:ok, output}
end
@doc """
`convert_file` works under the hood of all the other functions.
"""
def convert_file(file, from \\ "markdown", to \\ "html", options \\ []) do
{output, _} =
System.cmd("pandoc", Enum.concat([file, "--from=#{from}", "--to=#{to}"], options))
{:ok, output}
end
def random_name do
  random_string() <> "-" <> timestamp() <> ".temp"
end
defp random_string do
  # `:random` is deprecated in favor of `:rand`, which seeds itself per process.
  0x100000000000000 |> :rand.uniform() |> Integer.to_string(36) |> String.downcase()
end
defp timestamp do
{megasec, sec, _microsec} = :os.timestamp()
(megasec * 1_000_000 + sec) |> Integer.to_string()
end
end
lib/pandex.ex
defmodule Pigeon.APNS do
@moduledoc """
`Pigeon.Adapter` for Apple Push Notification Service (APNS) push notifications.
## Getting Started
1. Create an `APNS` dispatcher.
```
# lib/apns.ex
defmodule YourApp.APNS do
use Pigeon.Dispatcher, otp_app: :your_app
end
```
2. (Optional) Add configuration to your `config.exs`.
```
# config.exs
config :your_app, YourApp.APNS,
adapter: Pigeon.APNS,
cert: File.read!("cert.pem"),
key: File.read!("key_unencrypted.pem"),
mode: :dev
```
Or use token based authentication:
```
config :your_app, YourApp.APNS,
adapter: Pigeon.APNS,
key: File.read!("AuthKey.p8"),
key_identifier: "<KEY>",
mode: :dev,
team_id: "DEF8901234"
```
3. Start your dispatcher on application boot.
```
defmodule YourApp.Application do
@moduledoc false
use Application
@doc false
def start(_type, _args) do
children = [
YourApp.APNS
]
opts = [strategy: :one_for_one, name: YourApp.Supervisor]
Supervisor.start_link(children, opts)
end
end
```
If you skipped step two, include your configuration.
```
defmodule YourApp.Application do
@moduledoc false
use Application
@doc false
def start(_type, _args) do
children = [
{YourApp.APNS, apns_opts()}
]
opts = [strategy: :one_for_one, name: YourApp.Supervisor]
Supervisor.start_link(children, opts)
end
defp apns_opts do
[
adapter: Pigeon.APNS,
cert: File.read!("cert.pem"),
key: File.read!("key_unencrypted.pem"),
mode: :dev
]
end
end
```
4. Create a notification. **Note: Your push topic is generally the app's bundle identifier.**
```
n = Pigeon.APNS.Notification.new("your message", "your device token", "your push topic")
```
5. Send the packet. Pushes are synchronous and return the notification with an
updated `:response` key.
```
YourApp.APNS.push(n)
```
## Configuration Options
#### Certificate Authentication
- `:cert` - Push certificate. Must be the full-text string of the file contents.
- `:key` - Push private key. Must be the full-text string of the file contents.
#### Token Authentication
- `:key` - JWT private key. Must be the full-text string of the file contents.
- `:key_identifier` - A 10-character key identifier (kid) key, obtained from
your developer account.
- `:team_id` - Your 10-character Team ID, obtained from your developer account.
#### Shared Options
- `:mode` - If set to `:dev` or `:prod`, will set the appropriate `:uri`.
- `:ping_period` - Interval between server pings. Necessary to keep long
running APNS connections alive. Defaults to 10 minutes.
- `:port` - Push server port. Can be any value, but APNS only accepts
`443` and `2197`.
- `:uri` - Push server uri. If set, overrides uri defined by `:mode`.
Useful for test environments.
## Generating Your Certificate and Key .pem
1. In Keychain Access, right-click your push certificate and select _"Export..."_
2. Export the certificate as `cert.p12`
3. Click the dropdown arrow next to the certificate, right-click the private
key and select _"Export..."_
4. Export the private key as `key.p12`
5. From a shell, convert the certificate.
```
openssl pkcs12 -clcerts -nokeys -out cert.pem -in cert.p12
```
6. Convert the key. Be sure to set a PEM pass phrase here. The pass phrase must be 4 or
more characters in length or this will not work. You will need that pass phrase added
here in order to remove it in the next step.
```
openssl pkcs12 -nocerts -out key.pem -in key.p12
```
7. Remove the PEM pass phrase from the key.
```
openssl rsa -in key.pem -out key_unencrypted.pem
```
8. `cert.pem` and `key_unencrypted.pem` can now be used in your configuration.
"""
defstruct queue: Pigeon.NotificationQueue.new(),
stream_id: 1,
socket: nil,
config: nil
@behaviour Pigeon.Adapter
alias Pigeon.{Configurable, NotificationQueue}
alias Pigeon.APNS.ConfigParser
alias Pigeon.Http2.{Client, Stream}
@impl true
def init(opts) do
config = ConfigParser.parse(opts)
Configurable.validate!(config)
state = %__MODULE__{config: config}
case connect_socket(config) do
{:ok, socket} ->
Configurable.schedule_ping(config)
{:ok, %{state | socket: socket}}
{:error, reason} ->
{:stop, reason}
end
end
@impl true
def handle_push(notification, %{config: config, queue: queue} = state) do
headers = Configurable.push_headers(config, notification, [])
payload = Configurable.push_payload(config, notification, [])
Client.default().send_request(state.socket, headers, payload)
new_q = NotificationQueue.add(queue, state.stream_id, notification)
state =
state
|> inc_stream_id()
|> Map.put(:queue, new_q)
{:noreply, state}
end
def handle_info(:ping, state) do
Client.default().send_ping(state.socket)
Configurable.schedule_ping(state.config)
{:noreply, state}
end
def handle_info({:closed, _}, %{config: config} = state) do
case connect_socket(config) do
{:ok, socket} ->
Configurable.schedule_ping(config)
{:noreply, %{state | socket: socket}}
{:error, reason} ->
{:stop, reason}
end
end
@impl true
def handle_info(msg, state) do
case Client.default().handle_end_stream(msg, state) do
{:ok, %Stream{} = stream} -> process_end_stream(stream, state)
_else -> {:noreply, state}
end
end
defp connect_socket(config), do: connect_socket(config, 0)
defp connect_socket(_config, 3), do: {:error, :timeout}
defp connect_socket(config, tries) do
case Configurable.connect(config) do
{:ok, socket} -> {:ok, socket}
{:error, _reason} -> connect_socket(config, tries + 1)
end
end
@doc false
def process_end_stream(%Stream{id: stream_id} = stream, state) do
%{queue: queue, config: config} = state
case NotificationQueue.pop(queue, stream_id) do
{nil, new_queue} ->
# Do nothing if no queued item for stream
{:noreply, %{state | queue: new_queue}}
{notif, new_queue} ->
Configurable.handle_end_stream(config, stream, notif)
{:noreply, %{state | queue: new_queue}}
end
end
@doc false
def inc_stream_id(%{stream_id: stream_id} = state) do
  # Client-initiated HTTP/2 stream IDs are odd, so advance by 2.
  %{state | stream_id: stream_id + 2}
end
end
lib/pigeon/apns.ex
defmodule Unicode do
@moduledoc """
Functions to introspect the Unicode character database and
to provide fast codepoint lookups for scripts, blocks,
categories and properties.
"""
alias Unicode.Utils
@type codepoint :: non_neg_integer
@type codepoint_or_string :: codepoint | String.t()
@doc false
@data_dir Path.join(__DIR__, "../data") |> Path.expand()
def data_dir do
@data_dir
end
@doc """
Returns the version of Unicode in use.
"""
@version File.read!("data/blocks.txt")
|> String.split("\n")
|> Enum.at(0)
|> String.replace("# Blocks-", "")
|> String.replace(".txt", "")
|> String.split(".")
|> Enum.map(&String.to_integer/1)
|> List.to_tuple()
def version do
@version
end
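# The @version pipeline above can be exercised on a sample header line
# (the line below is invented, in the format of data/blocks.txt):

```elixir
version =
  "# Blocks-14.0.0.txt"
  |> String.replace("# Blocks-", "")
  |> String.replace(".txt", "")
  |> String.split(".")
  |> Enum.map(&String.to_integer/1)
  |> List.to_tuple()

# version is now {14, 0, 0}
```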
@doc """
Returns a map of aliases mapping
property names to a module that
serves that property
"""
def property_servers do
Unicode.Property.servers()
end
def fetch_property(property) when is_binary(property) do
Map.fetch(property_servers(), Utils.downcase_and_remove_whitespace(property))
end
def get_property(property) when is_binary(property) do
Map.get(property_servers(), Utils.downcase_and_remove_whitespace(property))
end
@doc """
Returns the Unicode category for a codepoint or a list of
categories for a string.
## Argument
* `codepoint_or_string` is a single integer codepoint
or a `String.t`.
## Returns
* in the case of a single codepoint, an atom representing
one of the categories listed below
* in the case of a string, a list representing the
category for each codepoint in the string
## Notes
These categories match the names of the Unicode character
classes used in various regular expression engines and in
Unicode Sets. The full list of categories is:
| Category | Matches |
| --------- | ----------------------- |
| :C | Other |
| :Cc | Control |
| :Cf | Format |
| :Cn | Unassigned |
| :Co | Private use |
| :Cs | Surrogate |
| :L | Letter |
| :Ll | Lower case letter |
| :Lm | Modifier letter |
| :Lo | Other letter |
| :Lt | Title case letter |
| :Lu | Upper case letter |
| :M | Mark |
| :Mc | Spacing mark |
| :Me | Enclosing mark |
| :Mn | Non-spacing mark |
| :N | Number |
| :Nd | Decimal number |
| :Nl | Letter number |
| :No | Other number |
| :P | Punctuation |
| :Pc | Connector punctuation |
| :Pd | Dash punctuation |
| :Pe | Close punctuation |
| :Pf | Final punctuation |
| :Pi | Initial punctuation |
| :Po | Other punctuation |
| :Ps | Open punctuation |
| :S | Symbol |
| :Sc | Currency symbol |
| :Sk | Modifier symbol |
| :Sm | Mathematical symbol |
| :So | Other symbol |
| :Z | Separator |
| :Zl | Line separator |
| :Zp | Paragraph separator |
| :Zs | Space separator |
Note too that the group level categories like `:L`,
`:M`, `:S` and so on are not assigned to any codepoint.
They can only be identified by combining the results
for each of the subsidiary categories.
## Examples
iex> Unicode.category ?ä
:Ll
iex> Unicode.category ?A
:Lu
iex> Unicode.category ?🧐
:So
iex> Unicode.category ?+
:Sm
iex> Unicode.category ?1
:Nd
iex> Unicode.category "äA"
[:Ll, :Lu]
"""
@spec category(codepoint_or_string) :: atom | [atom, ...]
defdelegate category(codepoint_or_string), to: Unicode.GeneralCategory
@doc """
Returns the script name of a codepoint
or the list of script names for each codepoint
in a string.
## Arguments
* `codepoint_or_string` is a single integer codepoint
or a `String.t`.
## Returns
* in the case of a single codepoint, an atom
script name
* in the case of a string, a list of atom
script names for each codepoint in the
`codepoint_or_string`
## Examples
iex> Unicode.script ?ä
:latin
iex> Unicode.script ?خ
:arabic
iex> Unicode.script ?अ
:devanagari
iex> Unicode.script ?א
:hebrew
iex> Unicode.script ?Ж
:cyrillic
iex> Unicode.script ?δ
:greek
iex> Unicode.script ?ก
:thai
iex> Unicode.script ?ယ
:myanmar
"""
@spec script(codepoint_or_string) :: atom | [atom, ...]
defdelegate script(codepoint_or_string), to: Unicode.Script
@doc """
Returns the block name of a codepoint
or the list of block names for each codepoint
in a string.
## Arguments
* `codepoint_or_string` is a single integer codepoint
or a `String.t`.
## Returns
* in the case of a single codepoint, an atom
block name
* in the case of a string, a list of atom
block names for each codepoint in the
`codepoint_or_string`
## Examples
iex> Unicode.block ?ä
:latin_1_supplement
iex> Unicode.block ?A
:basic_latin
iex> Unicode.block "äA"
[:latin_1_supplement, :basic_latin]
"""
@spec block(codepoint_or_string) :: atom | [atom, ...]
defdelegate block(codepoint_or_string), to: Unicode.Block
@doc """
Returns the list of properties of each codepoint
in a given string or the list of properties for a
given string.
## Arguments
* `codepoint_or_string` is a single integer codepoint
or a `String.t`.
## Returns
* in the case of a single codepoint, a list
of atom properties
* in the case of a string, a list of atom
lists, one for each codepoint in the
`codepoint_or_string`
## Examples
iex> Unicode.properties 0x1bf0
[
:alphabetic,
:case_ignorable,
:grapheme_extend,
:id_continue,
:other_alphabetic,
:xid_continue
]
iex> Unicode.properties ?A
[
:alphabetic,
:ascii_hex_digit,
:cased,
:changes_when_casefolded,
:changes_when_casemapped,
:changes_when_lowercased,
:grapheme_base,
:hex_digit,
:id_continue,
:id_start,
:uppercase,
:xid_continue,
:xid_start
]
iex> Unicode.properties ?+
[:grapheme_base, :math, :pattern_syntax]
iex> Unicode.properties "a1+"
[
[
:alphabetic,
:ascii_hex_digit,
:cased,
:changes_when_casemapped,
:changes_when_titlecased,
:changes_when_uppercased,
:grapheme_base,
:hex_digit,
:id_continue,
:id_start,
:lowercase,
:xid_continue,
:xid_start
],
[
:ascii_hex_digit,
:emoji,
:emoji_component,
:grapheme_base,
:hex_digit,
:id_continue,
:xid_continue
],
[
:grapheme_base,
:math,
:pattern_syntax
]
]
"""
@spec properties(codepoint_or_string) :: [atom, ...] | [[atom, ...], ...]
defdelegate properties(codepoint_or_string), to: Unicode.Property
@doc """
Returns `true` if a single Unicode codepoint (or all characters in the
given string) adhere to the Derived Core Property `Alphabetic`
otherwise returns `false`.
These are all characters that are usually used as representations
of letters/syllables in words/sentences.
## Arguments
* `codepoint_or_string` is a single integer codepoint
or a `String.t`.
## Returns
* `true` or `false`
For the string-version, the result will be true only if _all_
codepoints in the string adhere to the property.
## Examples
iex> Unicode.alphabetic?(?a)
true
iex> Unicode.alphabetic?("A")
true
iex> Unicode.alphabetic?("Elixir")
true
iex> Unicode.alphabetic?("الإكسير")
true
# comma and whitespace
iex> Unicode.alphabetic?("foo, bar")
false
iex> Unicode.alphabetic?("42")
false
iex> Unicode.alphabetic?("龍王")
true
# Summation, \u2211
iex> Unicode.alphabetic?("∑")
false
# Greek capital letter sigma, \u03a3
iex> Unicode.alphabetic?("Σ")
true
"""
@spec alphabetic?(codepoint_or_string) :: boolean
defdelegate alphabetic?(codepoint_or_string), to: Unicode.Property
@doc """
Returns `true` if a single Unicode codepoint (or all characters
in the given string) are either `alphabetic?/1` or
`numeric?/1` otherwise returns `false`.
## Arguments
* `codepoint_or_string` is a single integer codepoint
or a `String.t`.
## Returns
* `true` or `false`
For the string-version, the result will be true only if _all_
codepoints in the string adhere to the property.
### Examples
iex> Unicode.alphanumeric? "1234"
true
iex> Unicode.alphanumeric? "KeyserSöze1995"
true
iex> Unicode.alphanumeric? "3段"
true
iex> Unicode.alphanumeric? "<EMAIL>"
false
"""
@spec alphanumeric?(codepoint_or_string) :: boolean
defdelegate alphanumeric?(codepoint_or_string), to: Unicode.Property
@doc """
Returns `true` if a single Unicode codepoint (or all characters
in the given string) adhere to Unicode category `:Nd`
otherwise returns `false`.
This group of characters represents the decimal digits zero
through nine (0..9) and the equivalents in non-Latin scripts.
## Arguments
* `codepoint_or_string` is a single integer codepoint
or a `String.t`.
## Returns
* `true` or `false`
For the string-version, the result will be true only if _all_
codepoints in the string adhere to the property.
## Examples
    iex> Unicode.digits?("65535")
    true
    iex> Unicode.digits?("42")
    true
"""
@spec digits?(codepoint_or_string) :: boolean
defdelegate digits?(codepoint_or_string), to: Unicode.Property, as: :numeric?
@doc """
Returns `true` if a single Unicode codepoint (or all characters
in the given string) adhere to Unicode categories `:Nd`,
`:Nl` and `:No` otherwise returns `false`.
This group covers the decimal digits zero through nine (0..9),
their equivalents in non-Latin scripts, and other numeric
characters such as letter numbers and fractions.
## Arguments
* `codepoint_or_string` is a single integer codepoint
or a `String.t`.
## Returns
* `true` or `false`
For the string-version, the result will be true only if _all_
codepoints in the string adhere to the property.
## Examples
iex> Unicode.numeric?("65535")
true
iex> Unicode.numeric?("42")
true
iex> Unicode.numeric?("lapis philosophorum")
false
"""
@spec numeric?(codepoint_or_string) :: boolean
defdelegate numeric?(codepoint_or_string), to: Unicode.Property, as: :extended_numeric?
@doc """
Returns `true` if a single Unicode codepoint (or all characters
in the given string) are `emoji` otherwise returns `false`.
## Arguments
* `codepoint_or_string` is a single integer codepoint
or a `String.t`.
## Returns
* `true` or `false`
For the string-version, the result will be true only if _all_
codepoints in the string adhere to the property.
### Examples
iex> Unicode.emoji? "🧐🤓🤩🤩️🤯"
true
"""
@spec emoji?(codepoint_or_string) :: boolean
defdelegate emoji?(codepoint_or_string), to: Unicode.Emoji
@doc """
Returns `true` if a single Unicode codepoint (or all characters
in the given string) adhere to the category `:Sm`, otherwise returns `false`.
These are all characters whose primary usage is in mathematical
concepts (and not in alphabets). Notice that the numerical digits
are not part of this group.
## Arguments
* `codepoint_or_string` is a single integer codepoint
or a `String.t`.
## Returns
* `true` or `false`
For the string-version, the result will be true only if _all_
codepoints in the string adhere to the property.
## Examples
iex> Unicode.math?(?=)
true
iex> Unicode.math?("=")
true
iex> Unicode.math?("1+1=2") # Digits do not have the `:math` property.
false
iex> Unicode.math?("परिस")
false
iex> Unicode.math?("∑") # Summation, \\u2211
true
iex> Unicode.math?("Σ") # Greek capital letter sigma, \\u03a3
false
"""
@spec math?(codepoint_or_string) :: boolean
defdelegate math?(codepoint_or_string), to: Unicode.Property
@doc """
Returns either `true` if the codepoint has the `:cased` property
or `false`.
The `:cased` property means that this character has at least
an upper and lower representation and possibly a titlecase
representation too.
## Arguments
* `codepoint_or_string` is a single integer codepoint
or a `String.t`.
## Returns
* `true` or `false`
For the string-version, the result will be true only if _all_
codepoints in the string adhere to the property.
## Examples
iex> Unicode.cased? ?ယ
false
iex> Unicode.cased? ?A
true
"""
@spec cased?(codepoint_or_string) :: boolean
defdelegate cased?(codepoint_or_string), to: Unicode.Property
@doc """
Returns `true` if a single Unicode codepoint (or all characters
in the given string) adhere to the category `:Ll`, otherwise returns `false`.
Notice that there are many languages that do not have a distinction
between cases. Their characters are not included in this group.
## Arguments
* `codepoint_or_string` is a single integer codepoint
or a `String.t`.
## Returns
* `true` or `false`
For the string-version, the result will be true only if _all_
codepoints in the string adhere to the property.
## Examples
iex> Unicode.lowercase?(?a)
true
iex> Unicode.lowercase?("A")
false
iex> Unicode.lowercase?("Elixir")
false
iex> Unicode.lowercase?("léon")
true
iex> Unicode.lowercase?("foo, bar")
false
iex> Unicode.lowercase?("42")
false
iex> Unicode.lowercase?("Σ")
false
iex> Unicode.lowercase?("σ")
true
"""
@spec lowercase?(codepoint_or_string) :: boolean
defdelegate lowercase?(codepoint_or_string), to: Unicode.Property
defdelegate downcase?(codepoint_or_string), to: Unicode.Property, as: :lowercase?
@doc """
Returns `true` if a single Unicode codepoint (or all characters
in the given string) adhere to the category `:Lu`, otherwise returns `false`.
Notice that there are many languages that do not have a distinction
between cases. Their characters are not included in this group.
## Arguments
* `codepoint_or_string` is a single integer codepoint
or a `String.t`.
## Returns
* `true` or `false`
For the string-version, the result will be true only if _all_
codepoints in the string adhere to the property.
## Examples
iex> Unicode.uppercase?(?a)
false
iex> Unicode.uppercase?("A")
true
iex> Unicode.uppercase?("Elixir")
false
iex> Unicode.uppercase?("CAMEMBERT")
true
iex> Unicode.uppercase?("foo, bar")
false
iex> Unicode.uppercase?("42")
false
iex> Unicode.uppercase?("Σ")
true
iex> Unicode.uppercase?("σ")
false
"""
@spec uppercase?(codepoint_or_string) :: boolean
defdelegate uppercase?(codepoint_or_string), to: Unicode.Property
defdelegate upcase?(codepoint_or_string), to: Unicode.Property, as: :uppercase?
@doc """
Returns a list of tuples representing the
assigned ranges of Unicode code points.
This information is derived from the block
ranges as defined by `Unicode.Block.blocks/0`.
"""
@spec assigned :: [{pos_integer, pos_integer}]
defdelegate assigned, to: Unicode.Block, as: :assigned
@deprecated "Use Unicode.assigned/0"
def ranges do
assigned()
end
@doc """
Returns a list of tuples representing the
full range of Unicode code points.
"""
@all [{0x0, 0x10FFFF}]
@spec all :: [{0x0, 0x10FFFF}]
def all do
@all
end
@doc """
Removes accents (diacritical marks) from
a string.
## Arguments
* `string` is any `String.t`
## Returns
* A string with all diacritical marks
removed
## Notes
The string is first normalised to `:nfd` form
and then all characters in the block
`:combining_diacritical_marks` are removed
from the string.
## Example
iex> Unicode.unaccent("Et Ça sera sa moitié.")
"Et Ca sera sa moitie."
"""
def unaccent(string) do
string
|> normalize_nfd
|> String.to_charlist()
|> remove_diacritical_marks([:combining_diacritical_marks])
|> List.to_string()
end
defp remove_diacritical_marks(charlist, blocks) do
Enum.reduce(charlist, [], fn char, acc ->
if Unicode.Block.block(char) in blocks do
acc
else
[char | acc]
end
end)
|> Enum.reverse()
end
@doc false
def compact_ranges([{as, ae}, {bs, be} | rest]) when ae >= bs - 1 and as <= be do
compact_ranges([{as, be} | rest])
end
def compact_ranges([{as, ae}, {_bs, be} | rest]) when ae >= be do
compact_ranges([{as, ae} | rest])
end
def compact_ranges([first]) do
[first]
end
def compact_ranges([first | rest]) do
[first | compact_ranges(rest)]
end
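# A quick illustration of the merging behaviour, using a trimmed copy of
# compact_ranges/1 (the ranges below are invented, not Unicode data):

```elixir
defmodule Ranges do
  # Merge ranges that overlap or touch; drop ranges fully contained
  # in the previous one. Same clause structure as compact_ranges/1.
  def compact([{as, ae}, {bs, be} | rest]) when ae >= bs - 1 and as <= be,
    do: compact([{as, be} | rest])

  def compact([{as, ae}, {_bs, be} | rest]) when ae >= be,
    do: compact([{as, ae} | rest])

  def compact([first]), do: [first]
  def compact([first | rest]), do: [first | compact(rest)]
end

Ranges.compact([{0, 5}, {3, 9}, {11, 20}])
# => [{0, 9}, {11, 20}]
```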
# OTP 20 introduced the `:unicode: module
# but we also want to support earlier
# versions
@doc false
if Code.ensure_loaded?(:unicode) do
def normalize_nfd(string) do
:unicode.characters_to_nfd_binary(string)
end
else
def normalize_nfd(string) do
String.normalize(string, :nfd)
end
end
end
lib/unicode.ex
defmodule Cldr.Normalize.Units do
@moduledoc false
alias Cldr.Substitution
def normalize(content, locale) do
content
|> normalize_units(locale)
end
@unit_types ["short", "long", "narrow"]
def normalize_units(content, locale) do
units =
units_for_locale(locale)
|> get_in(["main", locale, "units"])
|> Cldr.Map.underscore_keys()
normalized_units =
units
|> Enum.filter(fn {k, _v} -> k in @unit_types end)
|> Enum.into(%{})
|> process_unit_types(@unit_types)
Map.put(content, "units", normalized_units)
end
def process_unit_types(%{} = content, unit_types) do
Enum.reduce(unit_types, content, &process_unit_type(&1, &2))
end
def process_unit_type(type, %{} = content) do
updated_format =
get_in(content, [type])
|> Enum.map(&process_formats/1)
|> Cldr.Map.merge_map_list()
put_in(content, [type], updated_format)
end
def process_formats({unit, formats}) do
parsed_formats =
Enum.map(formats, fn
{"unit_pattern_count_" <> count, template} ->
{:nominative, {count, Substitution.parse(template)}}
{"genitive_count_" <> count, template} ->
{:genitive, {count, Substitution.parse(template)}}
{"accusative_count_" <> count, template} ->
{:accusative, {count, Substitution.parse(template)}}
{"dative_count_" <> count, template} ->
{:dative, {count, Substitution.parse(template)}}
{"locative_count_" <> count, template} ->
{:locative, {count, Substitution.parse(template)}}
{"instrumental_count_" <> count, template} ->
{:instrumental, {count, Substitution.parse(template)}}
{"vocative_count_" <> count, template} ->
{:vocative, {count, Substitution.parse(template)}}
{"display_name", display_name} ->
{"display_name", display_name}
{"gender", gender} ->
{"gender", gender}
{"compound_unit_pattern1" <> rest, template} ->
{"compound_unit_pattern", compound_unit(rest, template)}
{type, template} ->
{type, Substitution.parse(template)}
end)
|> Enum.group_by(&elem(&1, 0), &elem(&1, 1))
|> Enum.map(fn
{k, v} when is_atom(k) ->
{k, Map.new(v)}
{k, [v]} ->
{k, v}
{k, v} when is_list(v) ->
{k, map_nested_compounds(v)}
end)
|> Map.new()
%{unit => parsed_formats}
end
# Decode compound units which can have
# a count, a gender and a grammatical case
# but not necessarily all of them
# The order is <gender> <count> <case>
def compound_unit("_" <> rest, template) do
compound_unit(rest, template)
end
def compound_unit("", template) do
{:nominative, template}
end
# Could be count_one or count_one_case_...
# Followed by a potential "case_"
def compound_unit("count_" <> rest, template) do
case String.split(rest, "_", parts: 2) do
[count] ->
{count, template}
[count, rest] ->
{count, compound_unit(rest, template)}
end
end
# Grammatical case is the terminal clause, nothing
# after it
def compound_unit("case_" <> grammatical_case, template) do
{grammatical_case, template}
end
# Could be gender_masculine_count_one or gender_masculine_count_one_case_...
def compound_unit("gender_" <> rest, template) do
[gender, rest] = String.split(rest, "_", parts: 2)
{gender, compound_unit(rest, template)}
end
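# The suffix grammar nests gender, then count, then case. A trimmed,
# self-contained copy of the clauses above shows the shape of the result
# (key and template below are invented; the real function also runs the
# terminal template through Substitution.parse/1):

```elixir
defmodule CompoundKey do
  # Same clause structure as compound_unit/2: peel off gender_,
  # count_, and case_ prefixes, nesting the remainder.
  def parse("_" <> rest, template), do: parse(rest, template)
  def parse("", template), do: {:nominative, template}

  def parse("count_" <> rest, template) do
    case String.split(rest, "_", parts: 2) do
      [count] -> {count, template}
      [count, rest] -> {count, parse(rest, template)}
    end
  end

  def parse("case_" <> grammatical_case, template), do: {grammatical_case, template}

  def parse("gender_" <> rest, template) do
    [gender, rest] = String.split(rest, "_", parts: 2)
    {gender, parse(rest, template)}
  end
end

CompoundKey.parse("gender_masculine_count_one_case_dative", "{0}-t")
# => {"masculine", {"one", {"dative", "{0}-t"}}}
```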
# Take the nested structure and turn it into maps
def map_nested_compounds(list, acc \\ Map.new())
def map_nested_compounds([], acc) do
acc
end
def map_nested_compounds(value, %{} = _acc) when is_binary(value) do
Substitution.parse(value)
end
def map_nested_compounds({key, value}, acc) do
Map.put(acc, key, map_nested_compounds(value))
end
def map_nested_compounds([{key, value} | rest], acc) do
mapped_value = map_nested_compounds(value)
acc =
Map.update(acc, key, mapped_value, fn
current when is_map(current) and is_map(mapped_value) ->
Map.merge(current, mapped_value)
current when is_map(current) ->
Map.put(current, :nominative, mapped_value)
current when is_list(current) ->
value
|> map_nested_compounds()
|> Map.put(:nominative, current)
end)
map_nested_compounds(rest, acc)
end
def units_for_locale(locale) do
if File.exists?(locale_path(locale)) do
locale
|> locale_path
|> File.read!()
|> Jason.decode!()
else
{:error, {:units_file_not_found, locale_path(locale)}}
end
end
@spec locale_path(binary) :: String.t()
def locale_path(locale) when is_binary(locale) do
Path.join(units_dir(), [locale, "/units.json"])
end
@units_dir Path.join(Cldr.Config.download_data_dir(), ["cldr-units-full", "/main"])
@spec units_dir :: String.t()
def units_dir do
@units_dir
end
end
mix/support/normalize/normalize_units.ex
defmodule Membrane.Core.Element.PadsSpecsParser do
@moduledoc """
Functions parsing element pads specifications, generating functions and docs
based on them.
"""
alias Membrane.{Buffer, Caps, Element}
alias Element.Pad
alias Bunch.Type
use Bunch
@type parsed_pad_specs_t :: %{
:availability => Pad.availability_t(),
:mode => Pad.mode_t(),
:caps => Caps.Matcher.caps_specs_t(),
optional(:demand_unit) => Buffer.Metric.unit_t(),
:direction => Pad.direction_t()
}
@doc """
Generates `membrane_{direction}_pads/0` function, along with docs and typespecs.
Pads specifications are parsed with `parse_pads_specs!/4`, and docs are
generated with `generate_docs_from_pads_specs/1`.
"""
@spec def_pads(Macro.t(), Pad.direction_t()) :: Macro.t()
def def_pads(raw_specs, direction) do
Code.ensure_loaded(Caps.Matcher)
specs =
raw_specs
|> Bunch.Macro.inject_calls([
{Caps.Matcher, :one_of},
{Caps.Matcher, :range}
])
quote do
already_parsed = __MODULE__ |> Module.get_attribute(:membrane_pads) || []
@membrane_pads unquote(specs)
|> unquote(__MODULE__).parse_pads_specs!(
already_parsed,
unquote(direction),
__ENV__
)
|> Kernel.++(already_parsed)
@doc """
Returns pads specification for `#{inspect(__MODULE__)}`
They are the following:
#{@membrane_pads |> unquote(__MODULE__).generate_docs_from_pads_specs()}
"""
if __MODULE__ |> Module.defines?({:membrane_pads, 0}) do
__MODULE__ |> Module.make_overridable(membrane_pads: 0)
else
@impl true
end
def membrane_pads() do
@membrane_pads
end
end
end
@doc """
Parses pads specifications defined with `Membrane.Element.Base.Mixin.SourceBehaviour.def_output_pads/1`
or `Membrane.Element.Base.Mixin.SinkBehaviour.def_input_pads/1`.
"""
@spec parse_pads_specs!(
specs :: [Element.pad_specs_t()],
already_parsed :: [{Pad.name_t(), parsed_pad_specs_t}],
direction :: Pad.direction_t(),
declaration_env :: Macro.Env.t()
) :: parsed_pad_specs_t | no_return
def parse_pads_specs!(specs, already_parsed, direction, env) do
with {:ok, specs} <- parse_pads_specs(specs, already_parsed, direction) do
specs
else
{:error, reason} ->
raise CompileError,
file: env.file,
line: env.line,
description: """
Error parsing pads specs defined in #{inspect(env.module)}.def_#{direction}_pads/1,
reason: #{inspect(reason)}
"""
end
end
@spec parse_pads_specs(
specs :: [Element.pad_specs_t()],
already_parsed :: [{Pad.name_t(), parsed_pad_specs_t}],
direction :: Pad.direction_t()
) :: Type.try_t(parsed_pad_specs_t)
defp parse_pads_specs(specs, already_parsed, direction) do
withl keyword: true <- specs |> Keyword.keyword?(),
dups: [] <- (specs ++ already_parsed) |> Keyword.keys() |> Bunch.Enum.duplicates(),
parse: {:ok, specs} <- specs |> Bunch.Enum.try_map(&parse_pad_specs(&1, direction)) do
specs |> Bunch.TupleList.map_values(&Map.put(&1, :direction, direction)) ~> {:ok, &1}
else
keyword: false -> {:error, {:pads_not_a_keyword, specs}}
dups: dups -> {:error, {:duplicate_pad_names, dups}}
parse: {:error, reason} -> {:error, reason}
end
end
defp parse_pad_specs(spec, direction) do
withl spec: {name, config} when is_atom(name) and is_list(config) <- spec,
config:
{:ok, config} <-
Bunch.Config.parse(config,
availability: [in: [:always, :on_request], default: :always],
caps: [validate: &Caps.Matcher.validate_specs/1],
mode: [in: [:pull, :push], default: :pull],
demand_unit: [
in: [:buffers, :bytes],
require_if: &(&1.mode == :pull and direction == :input)
]
) do
{:ok, {name, config}}
else
spec: spec -> {:error, {:invalid_pad_spec, spec}}
config: {:error, reason} -> {:error, {reason, pad: name}}
end
end
@doc """
Generates docs describing pads, based on pads specification.
"""
@spec generate_docs_from_pads_specs(parsed_pad_specs_t) :: String.t()
def generate_docs_from_pads_specs(pads_specs) do
pads_specs
|> Enum.map(&generate_docs_from_pad_specs/1)
|> Enum.join("\n")
end
defp generate_docs_from_pad_specs({name, config}) do
"""
* Pad `#{inspect(name)}`
#{
config
|> Enum.map(fn {k, v} ->
"* #{k |> to_string() |> String.replace("_", " ")}: #{generate_pad_property_doc(k, v)}"
end)
|> Enum.join("\n")
|> indent(2)
}
"""
end
defp generate_pad_property_doc(:caps, caps) do
caps
|> Bunch.listify()
|> Enum.map(fn
{module, params} ->
params_doc =
params |> Enum.map(fn {k, v} -> "`#{k}`: `#{inspect(v)}`" end) |> Enum.join(", ")
"`#{inspect(module)}`, params: #{params_doc}"
module ->
"`#{inspect(module)}`"
end)
~> (
[doc] -> doc
docs -> docs |> Enum.map(&"\n* #{&1}") |> Enum.join()
)
|> indent()
end
defp generate_pad_property_doc(_k, v) do
"`#{inspect(v)}`"
end
defp indent(string, size \\ 1) do
string
|> String.split("\n")
|> Enum.map(&(String.duplicate(" ", size) <> &1))
|> Enum.join("\n")
end
end
# source: lib/membrane/core/element/pads_specs_parser.ex
defmodule BggXmlApi2.Item do
@moduledoc """
A set of functions for searching and retrieving information on Items.
"""
import SweetXml
alias BggXmlApi2.Api, as: BggApi
@enforce_keys [:id, :name, :type, :year_published]
defstruct [
:id,
:name,
:type,
:year_published,
:image,
:thumbnail,
:description,
:min_players,
:max_players,
:playing_time,
:min_play_time,
:max_play_time,
:average_rating,
:average_weight,
categories: [],
mechanics: [],
families: [],
expansions: [],
designers: [],
artists: [],
publishers: []
]
@doc """
Search for an Item based on `name`.
## Options
Options can be:
* `:exact` - if set to true an exact match search on the name will be done
* `:type` - a list of strings where each one is a type of item to search for,
the types of items available are rpgitem, videogame, boardgame,
boardgameaccessory or boardgameexpansion
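## Example
Illustrative only; this performs a live HTTP request against the BGG API, so the
result depends on current BGG data:
    {:ok, items} = BggXmlApi2.Item.search("Catan", exact: true, type: ["boardgame"])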
"""
@spec search(String.t(), keyword) :: {:ok, [%__MODULE__{}]} | :error
def search(name, opts \\ []) do
result =
name
|> build_search_query_string(opts)
|> BggApi.get()
case result do
{:ok, response} ->
return =
response
|> Map.get(:body)
|> retrieve_multi_item_details(~x"//item"l)
|> Enum.map(&process_item/1)
{:ok, return}
_ ->
:error
end
end
@doc """
Retrieve information on an Item based on `id`.
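For example (illustrative; performs a live request, and the id shown is hypothetical):
    {:ok, %BggXmlApi2.Item{} = item} = BggXmlApi2.Item.info("13")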
"""
@spec info(String.t()) :: {:ok, %BggXmlApi2.Item{}} | {:error, :no_results}
def info(id) do
with {:ok, response} <- BggApi.get("/thing?stats=1&id=#{id}"),
body <- Map.get(response, :body),
{:ok, item} <- retrieve_item_details(body, ~x"//item") do
{:ok, process_item(item)}
else
{:error, _} ->
{:error, :no_results}
end
end
defp build_search_query_string(name, opts) do
exact_search = Keyword.get(opts, :exact, false)
exact = if exact_search, do: "&exact=1", else: ""
type_search = Keyword.get(opts, :type, false)
type = if type_search, do: "&type=#{Enum.join(type_search, ",")}", else: ""
"/search?query=#{URI.encode(name)}#{exact}#{type}"
end
defp retrieve_item_details(xml, path_to_item) do
case xpath(xml, path_to_item) do
nil -> {:error, :no_results}
item -> {:ok, rid(item)}
end
end
defp retrieve_multi_item_details(xml, path_to_items) do
xml
|> xpath(path_to_items)
|> Enum.map(&rid/1)
end
defp rid(item) do
%{
id:
item
|> xpath(~x"./@id")
|> if_charlist_convert_to_string(),
name:
item
|> xpath(~x"./name[@type='primary']/@value")
|> if_charlist_convert_to_string(),
type:
item
|> xpath(~x"./@type")
|> if_charlist_convert_to_string(),
year_published:
item
|> xpath(~x"./yearpublished/@value")
|> if_charlist_convert_to_string(),
image:
item
|> xpath(~x"./image/text()")
|> if_charlist_convert_to_string(),
thumbnail:
item
|> xpath(~x"./thumbnail/text()")
|> if_charlist_convert_to_string(),
description:
item
|> xpath(~x"./description/text()"l)
|> Enum.join(),
min_players:
item
|> xpath(~x"./minplayers/@value")
|> if_charlist_convert_to_integer(),
max_players:
item
|> xpath(~x"./maxplayers/@value")
|> if_charlist_convert_to_integer(),
playing_time:
item
|> xpath(~x"./playingtime/@value")
|> if_charlist_convert_to_integer(),
min_play_time:
item
|> xpath(~x"./minplaytime/@value")
|> if_charlist_convert_to_integer(),
max_play_time:
item
|> xpath(~x"./maxplaytime/@value")
|> if_charlist_convert_to_integer(),
average_rating:
item
|> xpath(~x"./statistics/ratings/average/@value")
|> if_charlist_convert_to_float(),
average_weight:
item
|> xpath(~x"./statistics/ratings/averageweight/@value")
|> if_charlist_convert_to_float(),
categories:
item
|> xpath(~x"./link[@type='boardgamecategory']/@value"l)
|> Enum.map(&if_charlist_convert_to_string/1),
mechanics:
item
|> xpath(~x"./link[@type='boardgamemechanic']/@value"l)
|> Enum.map(&if_charlist_convert_to_string/1),
families:
item
|> xpath(~x"./link[@type='boardgamefamily']/@value"l)
|> Enum.map(&if_charlist_convert_to_string/1),
expansions:
item
|> xpath(~x"./link[@type='boardgameexpansion']/@value"l)
|> Enum.map(&if_charlist_convert_to_string/1),
designers:
item
|> xpath(~x"./link[@type='boardgamedesigner']/@value"l)
|> Enum.map(&if_charlist_convert_to_string/1),
artists:
item
|> xpath(~x"./link[@type='boardgameartist']/@value"l)
|> Enum.map(&if_charlist_convert_to_string/1),
publishers:
item
|> xpath(~x"./link[@type='boardgamepublisher']/@value"l)
|> Enum.map(&if_charlist_convert_to_string/1)
}
end
defp process_item(%{description: ""} = item) do
item = %{item | description: nil}
struct(__MODULE__, item)
end
defp process_item(item) do
item = Map.update(item, :description, nil, &HtmlEntities.decode/1)
struct(__MODULE__, item)
end
defp if_charlist_convert_to_string(possible_charlist) do
if is_list(possible_charlist) do
List.to_string(possible_charlist)
else
possible_charlist
end
end
defp if_charlist_convert_to_integer(possible_charlist) do
if is_list(possible_charlist) do
List.to_integer(possible_charlist)
else
possible_charlist
end
end
defp if_charlist_convert_to_float(possible_charlist) do
if is_list(possible_charlist) do
{float, _} = possible_charlist |> List.to_string() |> Float.parse()
float
else
possible_charlist
end
end
end
# source: lib/bgg_xml_api2/item.ex
defmodule Commandex do
@moduledoc """
Defines a command struct.
Commandex structs are a loose implementation of the command pattern, making it easy
to wrap parameters, data, and errors into a well-defined struct.
## Example
A fully implemented command module might look like this:
defmodule RegisterUser do
import Commandex
command do
param :email
param :password
data :password_hash
data :user
pipeline :hash_password
pipeline :create_user
pipeline :send_welcome_email
end
def hash_password(command, %{password: nil} = _params, _data) do
command
|> put_error(:password, :not_given)
|> halt()
end
def hash_password(command, %{password: password} = _params, _data) do
put_data(command, :password_hash, Base.encode64(password))
end
def create_user(command, %{email: email} = _params, %{password_hash: phash} = _data) do
%User{}
|> User.changeset(%{email: email, password_hash: phash})
|> Repo.insert()
|> case do
{:ok, user} -> put_data(command, :user, user)
{:error, changeset} -> command |> put_error(:repo, changeset) |> halt()
end
end
def send_welcome_email(command, _params, %{user: user}) do
Mailer.send_welcome_email(user)
command
end
end
The `command/1` macro will define a struct that looks like:
%RegisterUser{
success: false,
halted: false,
errors: %{},
params: %{email: nil, password: nil},
data: %{password_hash: nil, user: nil},
pipelines: [:hash_password, :create_user, :send_welcome_email]
}
As well as two functions:
&RegisterUser.new/1
&RegisterUser.run/1
`&new/1` parses parameters into a new struct. These can be either a keyword list
or map with atom/string keys.
`&run/1` takes a command struct and runs it through the pipeline functions defined
in the command. Functions are executed *in the order in which they are defined*.
If a command passes through all pipelines without calling `halt/1`, `:success`
will be set to `true`. Otherwise, subsequent pipelines after the `halt/1` will
be ignored and `:success` will be set to `false`.
## Example
%{email: "[email protected]", password: "p@ssw0rd"}
|> RegisterUser.new()
|> RegisterUser.run()
|> case do
%{success: true, data: %{user: user}} ->
# Success! We've got a user now
%{success: false, errors: %{password: :not_given}} ->
# Respond with a 400 or something
%{success: false, errors: _error} ->
# I'm a lazy programmer that writes catch-all error handling
end
"""
@typedoc """
Command pipeline stage.
A pipeline function can be defined multiple ways:
- `pipeline :do_work` - Name of a function inside the command's module, arity three.
- `pipeline {YourModule, :do_work}` - Arity three.
- `pipeline {YourModule, :do_work, [:additional, "args"]}` - Arity three plus the
number of additional args given.
- `pipeline &YourModule.do_work/1` - Or any anonymous function of arity one.
- `pipeline &YourModule.do_work/3` - Or any anonymous function of arity three.
"""
@type pipeline ::
atom
| {module, atom}
| {module, atom, [any]}
| (command :: struct -> command :: struct)
| (command :: struct, params :: map, data :: map -> command :: struct)
@typedoc """
Command struct.
## Attributes
- `data` - Data generated during the pipeline, defined by `Commandex.data/1`.
- `errors` - Errors generated during the pipeline with `Commandex.put_error/3`
- `halted` - Whether or not the pipeline was halted.
- `params` - Parameters given to the command, defined by `Commandex.param/1`.
- `pipelines` - A list of pipeline functions to execute, defined by `Commandex.pipeline/1`.
- `success` - Whether or not the command was successful. This is only set to
`true` if the command was not halted after running all of the pipelines.
"""
@type command :: %{
__struct__: atom,
data: map,
errors: map,
halted: boolean,
params: map,
pipelines: [pipeline()],
success: boolean
}
@doc """
Defines a command struct with params, data, and pipelines.
"""
@spec command(do: any) :: no_return
defmacro command(do: block) do
prelude =
quote do
for name <- [:struct_fields, :params, :data, :pipelines] do
Module.register_attribute(__MODULE__, name, accumulate: true)
end
for field <- [{:success, false}, {:errors, %{}}, {:halted, false}] do
Module.put_attribute(__MODULE__, :struct_fields, field)
end
try do
import Commandex
unquote(block)
after
:ok
end
end
postlude =
quote unquote: false do
params = for pair <- Module.get_attribute(__MODULE__, :params), into: %{}, do: pair
data = for pair <- Module.get_attribute(__MODULE__, :data), into: %{}, do: pair
pipelines = __MODULE__ |> Module.get_attribute(:pipelines) |> Enum.reverse()
Module.put_attribute(__MODULE__, :struct_fields, {:params, params})
Module.put_attribute(__MODULE__, :struct_fields, {:data, data})
Module.put_attribute(__MODULE__, :struct_fields, {:pipelines, pipelines})
defstruct @struct_fields
@typedoc """
Command struct.
## Attributes
- `data` - Data generated during the pipeline, defined by `Commandex.data/1`.
- `errors` - Errors generated during the pipeline with `Commandex.put_error/3`
- `halted` - Whether or not the pipeline was halted.
- `params` - Parameters given to the command, defined by `Commandex.param/1`.
- `pipelines` - A list of pipeline functions to execute, defined by `Commandex.pipeline/1`.
- `success` - Whether or not the command was successful. This is only set to
`true` if the command was not halted after running all of the pipelines.
"""
@type t :: %__MODULE__{
data: map,
errors: map,
halted: boolean,
params: map,
pipelines: [Commandex.pipeline()],
success: boolean
}
@doc """
Creates a new struct from given parameters.
"""
@spec new(map | Keyword.t()) :: t
def new(opts \\ []) do
Commandex.parse_params(%__MODULE__{}, opts)
end
@doc """
Runs given pipelines in order and returns command struct.
`run/1` can either take parameters that would be passed to `new/1`
or the command struct itself.
"""
@spec run(map | Keyword.t() | t) :: t
def run(%unquote(__MODULE__){pipelines: pipelines} = command) do
pipelines
|> Enum.reduce_while(command, fn fun, acc ->
case acc do
%{halted: false} -> {:cont, Commandex.apply_fun(acc, fun)}
_ -> {:halt, acc}
end
end)
|> Commandex.maybe_mark_successful()
end
def run(params) do
params
|> new()
|> run()
end
end
quote do
unquote(prelude)
unquote(postlude)
end
end
@doc """
Defines a command parameter field.
Parameters are supplied at struct creation, before any pipelines are run.
command do
param :email
param :password
# ...data
# ...pipelines
end
"""
@spec param(atom, Keyword.t()) :: no_return
defmacro param(name, opts \\ []) do
quote do
Commandex.__param__(__MODULE__, unquote(name), unquote(opts))
end
end
@doc """
Defines a command data field.
Data field values are created and set as pipelines are run. Set one with `put_data/3`.
command do
# ...params
data :password_hash
data :user
# ...pipelines
end
"""
@spec data(atom) :: no_return
defmacro data(name) do
quote do
Commandex.__data__(__MODULE__, unquote(name))
end
end
@doc """
Defines a command pipeline.
Pipelines are functions executed against the command, *in the order in which they are defined*.
For example, two pipelines could be defined:
pipeline :check_valid_email
pipeline :create_user
Which could be mentally interpreted as:
command
|> check_valid_email()
|> create_user()
A pipeline function can be defined multiple ways:
- `pipeline :do_work` - Name of a function inside the command's module, arity three.
- `pipeline {YourModule, :do_work}` - Arity three.
- `pipeline {YourModule, :do_work, [:additional, "args"]}` - Arity three plus the
number of additional args given.
- `pipeline &YourModule.do_work/1` - Or any anonymous function of arity one.
- `pipeline &YourModule.do_work/3` - Or any anonymous function of arity three.
"""
@spec pipeline(atom) :: no_return
defmacro pipeline(name) do
quote do
Commandex.__pipeline__(__MODULE__, unquote(name))
end
end
@doc """
Sets a data field with given value.
Define a data field first:
data :password_hash
Set the password pash in one of your pipeline functions:
def hash_password(command, %{password: password} = _params, _data) do
# Better than plaintext, I guess
put_data(command, :password_hash, Base.encode64(password))
end
"""
@spec put_data(command, atom, any) :: command
def put_data(%{data: data} = command, key, val) do
%{command | data: Map.put(data, key, val)}
end
@doc """
Sets error for given key and value.
`:errors` is a map. Putting an error on the same key will overwrite the previous value.
def hash_password(command, %{password: nil} = _params, _data) do
command
|> put_error(:password, :not_supplied)
|> halt()
end
"""
@spec put_error(command, any, any) :: command
def put_error(%{errors: error} = command, key, val) do
%{command | errors: Map.put(error, key, val)}
end
@doc """
Halts a command pipeline.
Any pipelines defined after the halt will be ignored. If a command finishes running through
all pipelines, `:success` will be set to `true`.
def hash_password(command, %{password: nil} = _params, _data) do
command
|> put_error(:password, :not_supplied)
|> halt()
end
"""
@spec halt(command) :: command
def halt(command), do: %{command | halted: true}
@doc false
def maybe_mark_successful(%{halted: false} = command), do: %{command | success: true}
def maybe_mark_successful(command), do: command
@doc false
def parse_params(%{params: p} = struct, params) when is_list(params) do
params = for {key, _} <- p, into: %{}, do: {key, Keyword.get(params, key, p[key])}
%{struct | params: params}
end
def parse_params(%{params: p} = struct, %{} = params) do
params = for {key, _} <- p, into: %{}, do: {key, get_param(params, key, p[key])}
%{struct | params: params}
end
@doc false
def apply_fun(%mod{params: params, data: data} = command, name) when is_atom(name) do
:erlang.apply(mod, name, [command, params, data])
end
def apply_fun(command, fun) when is_function(fun, 1) do
fun.(command)
end
def apply_fun(%{params: params, data: data} = command, fun) when is_function(fun, 3) do
fun.(command, params, data)
end
def apply_fun(%{params: params, data: data} = command, {m, f}) do
:erlang.apply(m, f, [command, params, data])
end
def apply_fun(%{params: params, data: data} = command, {m, f, a}) do
:erlang.apply(m, f, [command, params, data] ++ a)
end
def __param__(mod, name, opts) do
params = Module.get_attribute(mod, :params)
if List.keyfind(params, name, 0) do
raise ArgumentError, "param #{inspect(name)} is already set on command"
end
default = Keyword.get(opts, :default)
Module.put_attribute(mod, :params, {name, default})
end
def __data__(mod, name) do
data = Module.get_attribute(mod, :data)
if List.keyfind(data, name, 0) do
raise ArgumentError, "data #{inspect(name)} is already set on command"
end
Module.put_attribute(mod, :data, {name, nil})
end
def __pipeline__(mod, name) do
Module.put_attribute(mod, :pipelines, name)
end
defp get_param(params, key, default) do
case Map.get(params, key) do
nil ->
Map.get(params, to_string(key), default)
val ->
val
end
end
end
# source: lib/commandex.ex
defmodule RogerUI.Queues do
@moduledoc """
Normalizes the node data structures returned by `Roger.Info.running_jobs/0` in order to obtain queues.
Given a nested data structure, where each element contains nested items:
input = [
"[email protected]": %{
running: %{
"roger_test_partition_1" => %{
default: %{consumer_count: 1, max_workers: 10, message_count: 740, paused: false},
fast: %{consumer_count: 1, max_workers: 10, message_count: 740, paused: false},
other: %{consumer_count: 1, max_workers: 2, message_count: 0, paused: false}
        }
      }
    }
  ]
it returns a list of maps with the following keys:
  [
    %{"partition_name" => "roger_test_partition_1", "queue_name" => :default, "qualified_queue_name" => "roger_test_partition_1-default"}
  ]
"""
alias Roger.Queue
@doc """
Takes a Keyword list that contains the nodes, status and partitions with queues, like this:
[
"[email protected]": %{
running: %{
"roger_test_partition_1" => %{
default: %{consumer_count: 1, max_workers: 10, message_count: 740, paused: false},
fast: %{consumer_count: 1, max_workers: 10, message_count: 740, paused: false},
other: %{consumer_count: 1, max_workers: 2, message_count: 0, paused: false}
          }
        }
      },
      ...
    ]
and transforms it into a list of queues
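For example (illustrative; the function returns a `Stream`, so enumerate it to
materialize the list):
    nodes
    |> RogerUI.Queues.nodes_to_queues()
    |> Enum.to_list()
    #=> [%{"partition_name" => "roger_test_partition_1", "queue_name" => :default, ...}, ...]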
"""
def nodes_to_queues(nodes) do
nodes
|> Keyword.values()
|> Stream.flat_map(&Map.values/1)
|> Stream.flat_map(&partition_to_queues/1)
end
@doc """
Ensures the given queue name is an atom; if it is a string, converts it to an existing atom.
"""
def normalize_name(name) do
if is_atom(name), do: name, else: String.to_existing_atom(name)
end
defp normalize_queues({partition_name, queues}) do
Stream.map(queues, fn {qn, queue} ->
queue
|> Map.put("qualified_queue_name", Queue.make_name(partition_name, qn))
|> Map.put("queue_name", qn)
|> Map.put("partition_name", partition_name)
end)
end
defp partition_to_queues(partition), do: Stream.flat_map(partition, &normalize_queues/1)
end
# source: lib/roger_ui/queues.ex
defmodule KeywordValidator do
@moduledoc """
Functions for validating keyword lists.
The main function in this module is `validate/2`, which allows developers to
validate a keyword list against a given schema.
A schema is simply a map that matches the keys for the keyword list. The values
in the schema represent the options available during validation.
iex> KeywordValidator.validate([foo: :foo], %{foo: [type: :atom, required: true]})
{:ok, [foo: :foo]}
iex> KeywordValidator.validate([foo: :foo], %{foo: [inclusion: [:one, :two]]})
{:error, [foo: ["must be one of: [:one, :two]"]]}
"""
@type val_type ::
:any
| :atom
| :binary
| :bitstring
| :boolean
| :float
| :function
| {:function, arity :: non_neg_integer()}
| :integer
| {:keyword, schema()}
| :list
| {:list, val_type()}
| :map
| :mfa
| :module
| :number
| :pid
| :port
| :struct
| {:struct, module()}
| :timeout
| :tuple
| {:tuple, size :: non_neg_integer()}
| {:tuple, tuple_val_types :: tuple()}
@type key_opt ::
{:default, any()}
| {:required, boolean()}
| {:type, val_type() | [val_type()]}
| {:format, Regex.t()}
| {:custom, (atom(), any() -> [] | [binary()]) | {module(), atom()}}
| {:inclusion, list()}
| {:exclusion, list()}
@type key_opts :: [key_opt()]
@type schema :: %{atom() => key_opts()}
@type invalid :: [{atom(), [String.t()]}]
@type option :: {:strict, boolean()}
@type options :: [option()]
@default_key_opts [
default: nil,
required: false,
type: :any,
format: nil,
custom: [],
inclusion: [],
exclusion: []
]
@doc """
Validates a keyword list using the provided schema.
A schema is a simple map, with each key representing a key in your keyword list.
The values in the map represent the options available for validation.
If the validation passes, we are returned a two-item tuple of `{:ok, keyword}`.
Otherwise, returns `{:error, invalid}` - where `invalid` is a keyword list of errors.
## Schema Options
* `:required` - boolean representing whether the key is required or not, defaults to `false`
* `:default` - the default value for the key if not provided one, defaults to `nil`
* `:type` - the type associated with the key value. must be one of `t:val_type/0`
* `:format` - a regex used to validate string format
* `:inclusion` - a list of items that the value must be a included in
* `:exclusion` - a list of items that the value must not be included in
* `:custom` - a list of two-arity functions or tuples in the format `{module, function}`
that serve as custom validators. the function will be given the key and value as
arguments, and must return a list of string errors (or an empty list if no errors are present)
## Options
* `:strict` - boolean representing whether extra keys will become errors, defaults to `true`
## Examples
iex> KeywordValidator.validate([foo: :foo], %{foo: [type: :atom, required: true]})
{:ok, [foo: :foo]}
iex> KeywordValidator.validate([foo: :foo], %{bar: [type: :any]})
{:error, [foo: ["is not a valid key"]]}
iex> KeywordValidator.validate([foo: :foo], %{bar: [type: :any]}, strict: false)
{:ok, []}
iex> KeywordValidator.validate([foo: :foo], %{foo: [inclusion: [:one, :two]]})
{:error, [foo: ["must be one of: [:one, :two]"]]}
iex> KeywordValidator.validate([foo: {:foo, 1}], %{foo: [type: {:tuple, {:atom, :integer}}]})
{:ok, [foo: {:foo, 1}]}
iex> KeywordValidator.validate([foo: ["one", 2]], %{foo: [type: {:list, :binary}]})
{:error, [foo: ["must be a list of type :binary"]]}
iex> KeywordValidator.validate([foo: "foo"], %{foo: [format: ~r/foo/]})
{:ok, [foo: "foo"]}
iex> KeywordValidator.validate([foo: %Foo{}], %{foo: [type: {:struct, Bar}]})
{:error, [foo: ["must be a struct of type Bar"]]}
iex> KeywordValidator.validate([foo: "foo"], %{foo: [custom: [fn key, val -> ["some error"] end]]})
{:error, [foo: ["some error"]]}
"""
@spec validate(keyword(), schema(), options()) :: {:ok, keyword()} | {:error, invalid()}
def validate(keyword, schema, opts \\ []) when is_list(keyword) and is_map(schema) do
strict = Keyword.get(opts, :strict, true)
valid = []
invalid = []
{keyword, valid, invalid}
|> validate_extra_keys(schema, strict)
|> validate_keys(schema)
|> to_tagged_tuple()
end
@doc """
The same as `validate/2` but raises an `ArgumentError` exception if invalid.
## Example
iex> KeywordValidator.validate!([foo: :foo], %{foo: [type: :atom, required: true]})
[foo: :foo]
iex> KeywordValidator.validate!([foo: :foo], %{foo: [inclusion: [:one, :two]]})
** (ArgumentError) Invalid keyword given.
Keyword:
[foo: :foo]
Invalid:
foo: ["must be one of: [:one, :two]"]
"""
@spec validate!(keyword(), schema(), options()) :: Keyword.t()
def validate!(keyword, schema, opts \\ []) do
case validate(keyword, schema, opts) do
{:ok, valid} ->
valid
{:error, invalid} ->
raise ArgumentError, """
Invalid keyword given.
Keyword:
#{inspect(keyword, pretty: true)}
Invalid:
#{format_invalid(invalid)}
"""
end
end
defp validate_extra_keys(results, _schema, false), do: results
defp validate_extra_keys({keyword, _valid, _invalid} = results, schema, true) do
Enum.reduce(keyword, results, fn {key, _val}, {keyword, valid, invalid} ->
if Map.has_key?(schema, key) do
{keyword, valid, invalid}
else
{keyword, valid, put_error(invalid, key, "is not a valid key")}
end
end)
end
defp put_error(invalid, key, msg) when is_binary(msg) do
Keyword.update(invalid, key, [msg], fn errors ->
[msg | errors]
end)
end
defp put_error(invalid, key, msgs) when is_list(msgs) do
Enum.reduce(msgs, invalid, &put_error(&2, key, &1))
end
defp validate_keys(result, schema) do
Enum.reduce(schema, result, &maybe_validate_key(&1, &2))
end
defp maybe_validate_key({key, opts}, {keyword, valid, invalid}) do
opts = @default_key_opts |> Keyword.merge(opts) |> Enum.into(%{})
if validate_key?(keyword, key, opts) do
validate_key({key, opts}, {keyword, valid, invalid})
else
{keyword, valid, invalid}
end
end
defp validate_key?(keyword, key, opts) do
Keyword.has_key?(keyword, key) || opts.required || opts.default != nil
end
defp validate_key({key, opts}, {keyword, valid, invalid}) do
val = Keyword.get(keyword, key, opts.default)
{key, opts, val, []}
|> validate_required()
|> validate_type()
|> validate_format()
|> validate_inclusion()
|> validate_exclusion()
|> validate_custom()
|> case do
{key, _, val, []} -> {keyword, Keyword.put(valid, key, val), invalid}
{key, _, _, errors} -> {keyword, valid, put_error(invalid, key, errors)}
end
end
defp validate_required({key, %{required: true} = opts, nil, errors}) do
{key, opts, nil, ["is a required key" | errors]}
end
defp validate_required(validation) do
validation
end
defp validate_type({key, opts, val, errors}) do
case validate_type(opts.type, val) do
{:ok, val} -> {key, opts, val, errors}
{:error, msg} -> {key, opts, val, [msg | errors]}
end
end
defp validate_type(:any, val), do: {:ok, val}
defp validate_type(:atom, val) when is_atom(val) and not is_nil(val), do: {:ok, val}
defp validate_type(:atom, _val), do: {:error, "must be an atom"}
defp validate_type(:binary, val) when is_binary(val), do: {:ok, val}
defp validate_type(:binary, _val), do: {:error, "must be a binary"}
defp validate_type(:bitstring, val) when is_bitstring(val), do: {:ok, val}
defp validate_type(:bitstring, _val), do: {:error, "must be a bitstring"}
defp validate_type(:boolean, val) when is_boolean(val), do: {:ok, val}
defp validate_type(:boolean, _val), do: {:error, "must be a boolean"}
defp validate_type(:float, val) when is_float(val), do: {:ok, val}
defp validate_type(:float, _val), do: {:error, "must be a float"}
defp validate_type(:function, val) when is_function(val), do: {:ok, val}
defp validate_type(:function, _val), do: {:error, "must be a function"}
defp validate_type({:function, arity}, val) when is_function(val, arity), do: {:ok, val}
defp validate_type({:function, arity}, _val),
do: {:error, "must be a function of arity #{arity}"}
defp validate_type(:integer, val) when is_integer(val), do: {:ok, val}
defp validate_type(:integer, _val), do: {:error, "must be an integer"}
defp validate_type({:keyword, schema}, val) when is_list(val) do
case validate(val, schema) do
{:ok, val} -> {:ok, val}
{:error, _errors} -> {:error, "must be a keyword with structure: #{schema_string(schema)}"}
end
end
defp validate_type({:keyword, schema}, _val) do
{:error, "must be a keyword with structure: #{schema_string(schema)}"}
end
defp validate_type(:list, val) when is_list(val), do: {:ok, val}
defp validate_type(:list, _val), do: {:error, "must be a list"}
defp validate_type({:list, type}, val) when is_list(val) do
Enum.reduce_while(val, {:ok, []}, fn item, {:ok, acc} ->
case validate_type(type, item) do
{:ok, val} -> {:cont, {:ok, acc ++ [val]}}
{:error, _} -> {:halt, {:error, "must be a list of type #{inspect(type)}"}}
end
end)
end
defp validate_type({:list, type}, _val), do: {:error, "must be a list of type #{inspect(type)}"}
defp validate_type(:map, val) when is_map(val), do: {:ok, val}
defp validate_type(:map, _val), do: {:error, "must be a map"}
defp validate_type(:mfa, {mod, fun, arg} = val)
when is_atom(mod) and not is_nil(mod) and is_atom(fun) and not is_nil(fun) and is_list(arg) do
if Code.ensure_loaded?(mod) and function_exported?(mod, fun, length(arg)) do
{:ok, val}
else
{:error, "must be a mfa"}
end
end
defp validate_type(:mfa, _val), do: {:error, "must be a mfa"}
defp validate_type(:module, val) when is_atom(val) and not is_nil(val) do
if Code.ensure_loaded?(val) do
{:ok, val}
else
{:error, "must be a module"}
end
end
defp validate_type(:module, _val), do: {:error, "must be a module"}
defp validate_type(:number, val) when is_number(val), do: {:ok, val}
defp validate_type(:number, _val), do: {:error, "must be a number"}
defp validate_type(:pid, val) when is_pid(val), do: {:ok, val}
defp validate_type(:pid, _val), do: {:error, "must be a PID"}
defp validate_type(:port, val) when is_port(val), do: {:ok, val}
defp validate_type(:port, _val), do: {:error, "must be a port"}
defp validate_type(:struct, %{__struct__: _} = val), do: {:ok, val}
defp validate_type(:struct, _val), do: {:error, "must be a struct"}
defp validate_type({:struct, type1}, %{__struct__: type2} = val) when type1 == type2,
do: {:ok, val}
defp validate_type({:struct, type}, _val),
do: {:error, "must be a struct of type #{inspect(type)}"}
defp validate_type(:timeout, val) when is_integer(val), do: {:ok, val}
defp validate_type(:timeout, :infinity = val), do: {:ok, val}
defp validate_type(:timeout, _val), do: {:error, "must be a timeout"}
defp validate_type(:tuple, val) when is_tuple(val), do: {:ok, val}
defp validate_type(:tuple, _val), do: {:error, "must be a tuple"}
defp validate_type({:tuple, size}, val)
when is_tuple(val) and is_integer(size) and tuple_size(val) == size,
do: {:ok, val}
defp validate_type({:tuple, size}, _val) when is_integer(size),
do: {:error, "must be a tuple of size #{size}"}
defp validate_type({:tuple, types}, val)
when is_tuple(types) and is_tuple(val) and tuple_size(types) == tuple_size(val) do
type_list = Tuple.to_list(types)
val_list = Tuple.to_list(val)
validations = Enum.zip(type_list, val_list)
Enum.reduce_while(validations, {:ok, {}}, fn {type, val}, {:ok, acc} ->
case validate_type(type, val) do
{:ok, val} -> {:cont, {:ok, Tuple.append(acc, val)}}
{:error, _} -> {:halt, {:error, "must be a tuple with the structure: #{inspect(types)}"}}
end
end)
end
defp validate_type({:tuple, type}, _val),
do: {:error, "must be a tuple with the structure: #{inspect(type)}"}
defp validate_type(types, val) when is_list(types) do
error = {:error, "must be one of the following: #{inspect(types)}"}
Enum.reduce_while(types, error, fn type, acc ->
case validate_type(type, val) do
{:ok, _} = success -> {:halt, success}
{:error, _} -> {:cont, acc}
end
end)
end
defp validate_format({key, %{format: %Regex{} = format} = opts, val, errors}) do
if val =~ format do
{key, opts, val, errors}
else
{key, opts, val, ["has invalid format" | errors]}
end
end
defp validate_format(validation) do
validation
end
defp validate_inclusion({_, %{inclusion: []}, _, _} = validation) do
validation
end
defp validate_inclusion({key, %{inclusion: inclusion} = opts, val, errors}) do
if Enum.member?(inclusion, val) do
{key, opts, val, errors}
else
{key, opts, val, ["must be one of: #{inspect(inclusion)}" | errors]}
end
end
defp validate_exclusion({_, %{exclusion: []}, _, _} = validation) do
validation
end
defp validate_exclusion({key, %{exclusion: exclusion} = opts, val, errors}) do
if Enum.member?(exclusion, val) do
{key, opts, val, ["must not be one of: #{inspect(exclusion)}" | errors]}
else
{key, opts, val, errors}
end
end
defp validate_custom({_, %{custom: []}, _, _} = validation) do
validation
end
defp validate_custom({key, %{custom: custom} = opts, val, errors}) do
errors = Enum.reduce(custom, errors, &validate_custom(&1, key, val, &2))
{key, opts, val, errors}
end
defp validate_custom({module, fun}, key, val, errors) do
apply(module, fun, [key, val]) ++ errors
end
defp validate_custom(validator, key, val, errors) when is_function(validator, 2) do
validator.(key, val) ++ errors
end
defp to_tagged_tuple({_, valid, []}), do: {:ok, valid}
defp to_tagged_tuple({_, _, invalid}), do: {:error, invalid}
defp format_invalid(invalid) do
invalid
|> Enum.reduce("", fn {key, errors}, final ->
final <> "#{key}: #{inspect(errors, pretty: true)}\n"
end)
|> String.trim_trailing("\n")
end
defp schema_string(schema) do
schema =
schema
|> Enum.map(fn {k, v} -> "#{k}: #{inspect(v)}" end)
|> Enum.join(", ")
"[#{schema}]"
end
end
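The union-type clause of `validate_type/2` above tries each candidate type and halts on the first success. A reduced, self-contained sketch of that "first success wins" `Enum.reduce_while/3` pattern (module and validator names here are hypothetical, not part of the library):

```elixir
defmodule UnionCheck do
  # Try each validator in order; halt on the first {:ok, _},
  # otherwise keep the accumulated error.
  def validate(validators, val) do
    error = {:error, "no validator matched"}

    Enum.reduce_while(validators, error, fn validator, acc ->
      case validator.(val) do
        {:ok, _} = success -> {:halt, success}
        {:error, _} -> {:cont, acc}
      end
    end)
  end
end

validators = [
  fn
    v when is_integer(v) -> {:ok, v}
    _ -> {:error, "not an integer"}
  end,
  fn
    v when is_binary(v) -> {:ok, v}
    _ -> {:error, "not a string"}
  end
]

{:ok, "hi"} = UnionCheck.validate(validators, "hi")
{:error, "no validator matched"} = UnionCheck.validate(validators, :atom)
```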
# file: lib/keyword_validator.ex
defmodule Timex.Ecto.TimestampWithTimezone do
@moduledoc """
Support for using Timex with :timestamptz fields
"""
use Timex
@behaviour Ecto.Type
def type, do: :timestamptz
@doc """
Handle casting to Timex.Ecto.TimestampWithTimezone
"""
def cast(%DateTime{} = dt), do: to_local(dt)
# Support embeds_one/embeds_many
def cast(%{"calendar" => _cal,
"year" => y, "month" => m, "day" => d,
"hour" => h, "minute" => mm, "second" => s, "ms" => ms,
"timezone" => %{"full_name" => tzname,
"abbreviation" => abbr,
"offset_std" => offset_std,
"offset_utc" => offset_utc}}) do
dt = %DateTime{
:year => y,
:month => m,
:day => d,
:hour => h,
:minute => mm,
:second => s,
:microsecond => Timex.Ecto.Helpers.millisecond_to_microsecond(ms),
:time_zone => tzname,
:zone_abbr => abbr,
:utc_offset => offset_utc,
:std_offset => offset_std
}
to_local(dt)
end
def cast(%{"calendar" => _cal,
"year" => y, "month" => m, "day" => d,
"hour" => h, "minute" => mm, "second" => s, "millisecond" => ms,
"timezone" => %{"full_name" => tzname,
"abbreviation" => abbr,
"offset_std" => offset_std,
"offset_utc" => offset_utc}}) do
dt = %DateTime{
:year => y,
:month => m,
:day => d,
:hour => h,
:minute => mm,
:second => s,
:microsecond => Timex.Ecto.Helpers.millisecond_to_microsecond(ms),
:time_zone => tzname,
:zone_abbr => abbr,
:utc_offset => offset_utc,
:std_offset => offset_std
}
to_local(dt)
end
def cast(%{"calendar" => _cal,
"year" => y, "month" => m, "day" => d,
"hour" => h, "minute" => mm, "second" => s, "microsecond" => us,
"time_zone" => tzname, "zone_abbr" => abbr, "utc_offset" => offset_utc, "std_offset" => offset_std}) do
    us =
      case us do
        us when is_integer(us) -> Timex.DateTime.Helpers.construct_microseconds(us)
        {_, _} -> us
      end
dt = %DateTime{
:year => y,
:month => m,
:day => d,
:hour => h,
:minute => mm,
:second => s,
:microsecond => us,
:time_zone => tzname,
:zone_abbr => abbr,
:utc_offset => offset_utc,
:std_offset => offset_std
}
to_local(dt)
end
def cast(input) when is_binary(input) do
case Timex.parse(input, "{ISO:Extended}") do
{:ok, dt} -> to_local(dt)
{:error, _} -> :error
end
end
def cast(input) do
case Timex.to_datetime(input) do
{:error, _} ->
case Ecto.DateTime.cast(input) do
{:ok, d} -> load({{d.year, d.month, d.day}, {d.hour, d.min, d.sec, d.usec}})
:error -> :error
end
%DateTime{} = dt ->
to_local(dt)
end
end
@doc """
Load from the native Ecto representation
"""
def load({{_, _, _}, {_, _, _, _}} = dt), do: to_local(Timex.to_datetime(dt))
def load({{_, _, _}, {_, _, _}} = dt), do: to_local(Timex.to_datetime(dt))
def load(_), do: :error
@doc """
Convert to the native Ecto representation
"""
def dump(%DateTime{microsecond: {us, _}} = dt) do
dt = Timezone.convert(dt, "Etc/UTC")
{:ok, {{dt.year, dt.month, dt.day}, {dt.hour, dt.minute, dt.second, us}}}
end
def autogenerate(precision \\ :sec)
def autogenerate(:sec) do
{date, {h, m, s}} = :erlang.universaltime
load({date,{h, m, s, 0}}) |> elem(1)
end
def autogenerate(:usec) do
timestamp = {_,_, usec} = :os.timestamp
{date, {h, m, s}} = :calendar.now_to_datetime(timestamp)
load({date, {h, m, s, usec}}) |> elem(1)
end
defp to_local(%DateTime{} = dt) do
case Timezone.local() do
{:error, _} -> :error
tz -> {:ok, Timezone.convert(dt, tz)}
end
end
end
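Several `cast/1` clauses above delegate to `Timex.Ecto.Helpers.millisecond_to_microsecond/1`. A minimal stand-in for illustration (an assumption about the shape, not the library's implementation): Elixir's `%DateTime{}` stores microseconds as a `{value, precision}` tuple, so a millisecond count maps to `{ms * 1000, 3}` with three digits of precision.

```elixir
# Hypothetical equivalent of millisecond_to_microsecond/1:
# milliseconds -> {microseconds, precision}.
ms_to_us = fn ms -> {ms * 1000, 3} end

{123_000, 3} = ms_to_us.(123)
{0, 3} = ms_to_us.(0)
```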
# file: lib/types/timestamptz.ex
defmodule Ockam.Kafka.Hub.Service.Provider do
@moduledoc """
Implementation for Ockam.Hub.Service.Provider
providing kafka stream services, :stream_kafka and :stream_kafka_index
Services arguments:
stream_kafka:
address_prefix: optional<string>, worker address prefix
stream_prefix: optional<string>, kafka topic prefix
endpoints: optional<string | [string] | [{string, integer}]>, kafka bootstrap endpoints, defaults to "localhost:9092"
user: optional<string>, kafka SASL username
password: optional<string>, kafka SASL password, defaults to "" if only user is set
sasl: optional<atom|string>, kafka sasl mode, defaults to "plain"
ssl: optional<boolean>, if kafka server using ssl, defaults to false
replication_factor: optional<integer> replication factor for topics, defaults to 1
stream_kafka_index:
address_prefix: optional<string>, worker address prefix
stream_prefix: optional<string>, kafka topic prefix
endpoints: optional<string | [string] | [{string, integer}]>, kafka bootstrap endpoints, defaults to "localhost:9092"
user: optional<string>, kafka SASL username
password: optional<string>, kafka SASL password, defaults to "" if only user is set
sasl: optional<atom|string>, kafka sasl mode, defaults to "plain"
ssl: optional<boolean> if kafka server using ssl, defaults to false
"""
@behaviour Ockam.Hub.Service.Provider
alias Ockam.Stream.Index.Worker, as: StreamIndexService
alias Ockam.Stream.Workers.Service, as: StreamService
@services [:stream_kafka, :stream_kafka_index]
@impl true
def services() do
@services
end
@impl true
def start_service(:stream_kafka, args) do
address_prefix = Keyword.get(args, :address_prefix, "")
base_address = Keyword.get(args, :address, "stream_kafka")
prefix_address = prefix_address(base_address, address_prefix)
stream_options = [
storage_mod: Ockam.Stream.Storage.Kafka,
storage_options: storage_options(args)
]
StreamService.create(address: prefix_address, stream_options: stream_options)
end
def start_service(:stream_kafka_index, args) do
address_prefix = Keyword.get(args, :address_prefix, "")
base_address = Keyword.get(args, :address, "stream_kafka_index")
prefix_address = prefix_address(base_address, address_prefix)
StreamIndexService.create(
address: prefix_address,
storage_mod: Ockam.Stream.Index.KafkaOffset,
storage_options: storage_options(args)
)
end
def prefix_address(base_address, "") do
base_address
end
def prefix_address(base_address, prefix) do
prefix <> "_" <> base_address
end
def storage_options(args) do
stream_prefix =
Keyword.get(args, :stream_prefix, Application.get_env(:ockam_kafka, :stream_prefix, ""))
prefix =
case stream_prefix do
"" -> ""
string -> "#{string}_"
end
sasl_options = sasl_options(args)
ssl = Keyword.get(args, :ssl, Application.get_env(:ockam_kafka, :ssl))
replication_factor =
Keyword.get(
args,
:replication_factor,
Application.get_env(:ockam_kafka, :replication_factor)
)
endpoints = endpoints(args)
[
replication_factor: replication_factor,
endpoints: endpoints,
client_config: [ssl: ssl] ++ sasl_options,
topic_prefix: prefix
]
end
  def sasl_options(args) do
    sasl =
      case Keyword.get(args, :sasl, Application.get_env(:ockam_kafka, :sasl)) do
        sasl when is_binary(sasl) -> String.to_atom(sasl)
        sasl -> sasl
      end
user = Keyword.get(args, :user, Application.get_env(:ockam_kafka, :user))
password = Keyword.get(args, :password, Application.get_env(:ockam_kafka, :password))
case user do
nil ->
[]
_defined ->
[sasl: {sasl, user, password}]
end
end
def endpoints(args) do
args
|> Keyword.get(:endpoints, Application.get_env(:ockam_kafka, :endpoints))
|> parse_endpoints()
end
def parse_endpoints(endpoints) when is_list(endpoints) do
Enum.map(endpoints, fn string when is_binary(string) ->
with [host, port_str] <- String.split(string, ":"),
port_int <- String.to_integer(port_str) do
{host, port_int}
else
err ->
raise("Unable to parse kafka endpoints: #{inspect(endpoints)}: #{inspect(err)}")
end
end)
end
def parse_endpoints(endpoints) do
parse_endpoints(String.split(endpoints, ","))
end
end
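The `parse_endpoints/1` functions above turn a comma-separated `"host:port"` string into `{host, port}` tuples. A standalone sketch of that parsing step:

```elixir
# "host1:port1,host2:port2" -> [{"host1", port1}, {"host2", port2}]
parse = fn endpoints ->
  endpoints
  |> String.split(",")
  |> Enum.map(fn entry ->
    [host, port] = String.split(entry, ":")
    {host, String.to_integer(port)}
  end)
end

[{"kafka1", 9092}, {"kafka2", 9093}] = parse.("kafka1:9092,kafka2:9093")
```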
# file: implementations/elixir/ockam/ockam_hub/lib/hub/service/provider/kafka.ex
defmodule Goth.Token do
@moduledoc """
Functions for retrieving the token from the Google API.
"""
@type t :: %__MODULE__{
token: String.t(),
type: String.t(),
scope: String.t(),
expires: non_neg_integer,
sub: String.t() | nil
}
defstruct [
:token,
:type,
:scope,
:sub,
:expires,
# Deprecated fields:
:account
]
@default_url "https://www.googleapis.com/oauth2/v4/token"
@default_scopes ["https://www.googleapis.com/auth/cloud-platform"]
@doc """
Fetch the token from the Google API using the given `config`.
Config may contain the following keys:
* `:source` - See "Source" section below.
* `:http_client` - HTTP client configuration, defaults to using `Goth.HTTPClient.Hackney`.
See `Goth.HTTPClient` for more information.
## Source
Source can be one of:
#### Service account - `{:service_account, credentials, options}`
The `credentials` is a map and can contain the following keys:
* `"private_key"`
* `"client_email"`
The `options` is a keywords list and can contain the following keys:
* `:url` - the URL of the authentication service, defaults to:
`"https://www.googleapis.com/oauth2/v4/token"`
* `:scopes` - the list of token scopes, defaults to `#{inspect(@default_scopes)}`
* `:sub` - an email of user being impersonated, defaults to `nil`
#### Refresh token - `{:refresh_token, credentials, options}`
The `credentials` is a map and can contain the following keys:
* `"refresh_token"`
* `"client_id"`
* `"client_secret"`
The `options` is a keywords list and can contain the following keys:
* `:url` - the URL of the authentication service, defaults to:
`"https://www.googleapis.com/oauth2/v4/token"`
#### Google metadata server - `{:metadata, options}`
The `options` is a keywords list and can contain the following keys:
* `:account` - the name of the account to generate the token for, defaults to `"default"`
* `:url` - the URL of the metadata server, defaults to `"http://metadata.google.internal"`
## Examples
#### Generate a token using a service account credentials file:
iex> credentials = "credentials.json" |> File.read!() |> Jason.decode!()
iex> Goth.Token.fetch(%{source: {:service_account, credentials, []}})
{:ok, %Goth.Token{...}}
You can generate a credentials file containing service account using `gcloud` utility like this:
gcloud iam service-accounts keys create --key-file-type=json --iam-account=... credentials.json
#### Retrieve the token using a refresh token:
iex> credentials = "credentials.json" |> File.read!() |> Jason.decode!()
iex> Goth.Token.fetch(%{source: {:refresh_token, credentials, []}})
{:ok, %Goth.Token{...}}
You can generate a credentials file containing refresh token using `gcloud` utility like this:
gcloud auth application-default login
#### Retrieve the token using the Google metadata server:
iex> Goth.Token.fetch(%{source: {:metadata, []}})
{:ok, %Goth.Token{...}}
See [Storing and retrieving instance metadata](https://cloud.google.com/compute/docs/storing-retrieving-metadata)
for more information on metadata server.
"""
@doc since: "1.3.0"
@spec fetch(map()) :: {:ok, t()} | {:error, Exception.t}
def fetch(config) when is_map(config) do
config =
Map.put_new_lazy(config, :http_client, fn ->
Goth.HTTPClient.init({Goth.HTTPClient.Hackney, []})
end)
request(config)
end
defp request(%{source: {:service_account, credentials, options}} = config)
when is_map(credentials) and is_list(options) do
url = Keyword.get(options, :url, @default_url)
sub = Keyword.get(options, :sub)
scopes = Keyword.get(options, :scopes, @default_scopes)
jwt_scope = Enum.join(scopes, " ")
claims = %{"scope" => jwt_scope}
claims = if sub, do: Map.put(claims, "sub", sub), else: claims
jwt = jwt_encode(claims, credentials)
headers = [{"content-type", "application/x-www-form-urlencoded"}]
grant_type = "urn:ietf:params:oauth:grant-type:jwt-bearer"
body = "grant_type=#{grant_type}&assertion=#{jwt}"
result =
Goth.HTTPClient.request(config.http_client, :post, url, headers, body, [])
|> handle_response()
case result do
{:ok, token} -> {:ok, %{token | scope: jwt_scope, sub: sub || token.sub}}
{:error, error} -> {:error, error}
end
end
defp request(%{source: {:refresh_token, credentials, options}} = config)
when is_map(credentials) and is_list(options) do
url = Keyword.get(options, :url, @default_url)
headers = [{"Content-Type", "application/x-www-form-urlencoded"}]
refresh_token = Map.fetch!(credentials, "refresh_token")
client_id = Map.fetch!(credentials, "client_id")
client_secret = Map.fetch!(credentials, "client_secret")
body =
URI.encode_query(
grant_type: "refresh_token",
refresh_token: refresh_token,
client_id: client_id,
client_secret: client_secret
)
Goth.HTTPClient.request(config.http_client, :post, url, headers, body, [])
|> handle_response()
end
defp request(%{source: {:metadata, options}} = config) when is_list(options) do
account = Keyword.get(options, :account, "default")
url = Keyword.get(options, :url, "http://metadata.google.internal")
url = "#{url}/computeMetadata/v1/instance/service-accounts/#{account}/token"
headers = [{"metadata-flavor", "Google"}]
Goth.HTTPClient.request(config.http_client, :get, url, headers, "", [])
|> handle_response()
end
defp handle_response({:ok, %{status: 200, body: body}}) do
case Jason.decode(body) do
{:ok, attrs} -> {:ok, build_token(attrs)}
{:error, reason} -> {:error, reason}
end
end
defp handle_response({:ok, response}) do
message = """
unexpected status #{response.status} from Google
#{response.body}
"""
{:error, RuntimeError.exception(message)}
end
defp handle_response({:error, exception}) do
{:error, exception}
end
defp jwt_encode(claims, %{"private_key" => private_key, "client_email" => client_email}) do
jwk = JOSE.JWK.from_pem(private_key)
header = %{"alg" => "RS256", "typ" => "JWT"}
unix_time = System.system_time(:second)
default_claims = %{
"iss" => client_email,
"aud" => "https://www.googleapis.com/oauth2/v4/token",
"exp" => unix_time + 3600,
"iat" => unix_time
}
claims = Map.merge(default_claims, claims)
JOSE.JWT.sign(jwk, header, claims) |> JOSE.JWS.compact() |> elem(1)
end
defp build_token(%{"access_token" => _} = attrs) do
%__MODULE__{
expires: System.system_time(:second) + attrs["expires_in"],
token: attrs["access_token"],
type: attrs["token_type"],
scope: attrs["scope"],
sub: attrs["sub"]
}
end
defp build_token(%{"id_token" => jwt}) when is_binary(jwt) do
%JOSE.JWT{fields: fields} = JOSE.JWT.peek_payload(jwt)
%__MODULE__{
expires: fields["exp"],
token: jwt,
type: "Bearer",
scope: fields["aud"],
sub: fields["sub"]
}
end
# Everything below is deprecated.
alias Goth.TokenStore
alias Goth.Client
# Get a `%Goth.Token{}` for a particular `scope`. `scope` can be a single
# scope or multiple scopes joined by a space. See [OAuth 2.0 Scopes for Google APIs](https://developers.google.com/identity/protocols/googlescopes) for all available scopes.
# `sub` needs to be specified if impersonation is used to prevent cache
# leaking between users.
# ## Example
# iex> Token.for_scope("https://www.googleapis.com/auth/pubsub")
# {:ok, %Goth.Token{expires: ..., token: "...", type: "..."} }
@deprecated "Use Goth.fetch/1 instead"
def for_scope(info, sub \\ nil)
@spec for_scope(scope :: String.t(), sub :: String.t() | nil) :: {:ok, t} | {:error, any()}
def for_scope(scope, sub) when is_binary(scope) do
case TokenStore.find({:default, scope}, sub) do
:error -> retrieve_and_store!({:default, scope}, sub)
{:ok, token} -> {:ok, token}
end
end
@spec for_scope(info :: {String.t() | atom(), String.t()}, sub :: String.t() | nil) ::
{:ok, t} | {:error, any()}
def for_scope({account, scope}, sub) do
case TokenStore.find({account, scope}, sub) do
:error -> retrieve_and_store!({account, scope}, sub)
{:ok, token} -> {:ok, token}
end
end
@doc false
# Parse a successful JSON response from Google's token API and extract a `%Goth.Token{}`
def from_response_json(scope, sub \\ nil, json)
@spec from_response_json(String.t(), String.t() | nil, String.t()) :: t
def from_response_json(scope, sub, json) when is_binary(scope) do
{:ok, attrs} = json |> Jason.decode()
%__MODULE__{
token: attrs["access_token"],
type: attrs["token_type"],
scope: scope,
sub: sub,
expires: :os.system_time(:seconds) + attrs["expires_in"],
account: :default
}
end
@spec from_response_json(
{atom() | String.t(), String.t()},
String.t() | nil,
String.t()
) :: t
def from_response_json({account, scope}, sub, json) do
{:ok, attrs} = json |> Jason.decode()
%__MODULE__{
token: attrs["access_token"],
type: attrs["token_type"],
scope: scope,
sub: sub,
expires: :os.system_time(:seconds) + attrs["expires_in"],
account: account
}
end
# Retrieve a new access token from the API. This is useful for expired tokens,
# although `Goth` automatically handles refreshing tokens for you, so you should
# rarely if ever actually need to call this method manually.
@doc false
@spec refresh!(t() | {any(), any()}) :: {:ok, t()}
def refresh!(%__MODULE__{account: account, scope: scope, sub: sub}),
do: refresh!({account, scope}, sub)
def refresh!(%__MODULE__{account: account, scope: scope}), do: refresh!({account, scope})
@doc false
@spec refresh!({any(), any()}, any()) :: {:ok, t()}
def refresh!({account, scope}, sub \\ nil), do: retrieve_and_store!({account, scope}, sub)
@doc false
def queue_for_refresh(%__MODULE__{} = token) do
diff = token.expires - :os.system_time(:seconds)
if diff < 10 do
# just do it immediately
Task.async(fn ->
__MODULE__.refresh!(token)
end)
else
:timer.apply_after((diff - 10) * 1000, __MODULE__, :refresh!, [token])
end
end
defp retrieve_and_store!({account, scope}, sub) do
Client.get_access_token({account, scope}, sub: sub)
|> case do
{:ok, token} ->
TokenStore.store({account, scope}, sub, token)
{:ok, token}
other ->
other
end
end
end
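The `:refresh_token` branch of `request/2` above builds a form-encoded OAuth body with `URI.encode_query/1`. This sketch (with hypothetical credential values) shows the shape of that body on its own:

```elixir
# Keyword lists preserve order, so the encoded pairs come out in the
# order they are given.
body =
  URI.encode_query(
    grant_type: "refresh_token",
    refresh_token: "rt-123",
    client_id: "cid",
    client_secret: "secret"
  )

"grant_type=refresh_token&refresh_token=rt-123&client_id=cid&client_secret=secret" = body
```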
# file: lib/goth/token.ex
defmodule NRepl.Messages.Eval do
import UUID, only: [uuid4: 0]
defstruct op: "eval",
code: nil,
session: nil,
            id: uuid4(),
# The column number in [file] at which [code] starts.
column: nil,
# A fully-qualified symbol naming a var whose function value will be used to evaluate [code], instead of clojure.core/eval (the default).
eval: nil,
# The path to the file containing [code]. clojure.core/*file* will be bound to this.
file: nil,
# The line number in [file] at which [code] starts.
line: nil,
# A fully-qualified symbol naming a var whose function to use to convey interactive errors. Must point to a function that takes a java.lang.Throwable as its sole argument.
"nrepl.middleware.caught/caught": nil,
# If logical true, the printed value of any interactive errors will be returned in the response (otherwise they will be elided). Delegates to nrepl.middleware.print to perform the printing. Defaults to false.
"nrepl.middleware.caught/print?": nil,
# The size of the buffer to use when streaming results. Defaults to 1024.
"nrepl.middleware.print/buffer-size": nil,
# A seq of the keys in the response whose values should be printed.
"nrepl.middleware.print/keys": nil,
# A map of options to pass to the printing function. Defaults to nil.
"nrepl.middleware.print/options": nil,
# A fully-qualified symbol naming a var whose function to use for printing. Must point to a function with signature [value writer options].
"nrepl.middleware.print/print": nil,
# A hard limit on the number of bytes printed for each value.
"nrepl.middleware.print/quota": nil,
# If logical true, the result of printing each value will be streamed to the client over one or more messages.
"nrepl.middleware.print/stream?": nil
def required(), do: [:code]
end
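A caveat worth knowing about the `:id` default above: `defstruct` defaults are evaluated once, at compile time, so every `%Eval{}` built without an explicit `:id` carries the same value. Minimal demonstration with a hypothetical module:

```elixir
defmodule CompileTimeDefault do
  # The default is computed when the module is compiled, not per struct.
  defstruct stamp: :erlang.unique_integer()
end

a = %CompileTimeDefault{}
b = %CompileTimeDefault{}

# Both structs share the identical default, computed at definition time.
true = a.stamp == b.stamp
```

A common fix is a `new/1` constructor that fills `:id` at runtime.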
# file: lib/n_repl/messages/eval.ex
defmodule Tensorflow.MemAllocatorStats do
@moduledoc false
use Protobuf, syntax: :proto3
@type t :: %__MODULE__{
num_allocs: integer,
bytes_in_use: integer,
peak_bytes_in_use: integer,
largest_alloc_size: integer,
fragmentation_metric: float | :infinity | :negative_infinity | :nan
}
defstruct [
:num_allocs,
:bytes_in_use,
:peak_bytes_in_use,
:largest_alloc_size,
:fragmentation_metric
]
field(:num_allocs, 1, type: :int64)
field(:bytes_in_use, 2, type: :int64)
field(:peak_bytes_in_use, 3, type: :int64)
field(:largest_alloc_size, 4, type: :int64)
field(:fragmentation_metric, 5, type: :float)
end
defmodule Tensorflow.MemChunk do
@moduledoc false
use Protobuf, syntax: :proto3
@type t :: %__MODULE__{
address: non_neg_integer,
size: integer,
requested_size: integer,
bin: integer,
op_name: String.t(),
freed_at_count: non_neg_integer,
action_count: non_neg_integer,
in_use: boolean,
step_id: non_neg_integer
}
defstruct [
:address,
:size,
:requested_size,
:bin,
:op_name,
:freed_at_count,
:action_count,
:in_use,
:step_id
]
field(:address, 1, type: :uint64)
field(:size, 2, type: :int64)
field(:requested_size, 3, type: :int64)
field(:bin, 4, type: :int32)
field(:op_name, 5, type: :string)
field(:freed_at_count, 6, type: :uint64)
field(:action_count, 7, type: :uint64)
field(:in_use, 8, type: :bool)
field(:step_id, 9, type: :uint64)
end
defmodule Tensorflow.BinSummary do
@moduledoc false
use Protobuf, syntax: :proto3
@type t :: %__MODULE__{
bin: integer,
total_bytes_in_use: integer,
total_bytes_in_bin: integer,
total_chunks_in_use: integer,
total_chunks_in_bin: integer
}
defstruct [
:bin,
:total_bytes_in_use,
:total_bytes_in_bin,
:total_chunks_in_use,
:total_chunks_in_bin
]
field(:bin, 1, type: :int32)
field(:total_bytes_in_use, 2, type: :int64)
field(:total_bytes_in_bin, 3, type: :int64)
field(:total_chunks_in_use, 4, type: :int64)
field(:total_chunks_in_bin, 5, type: :int64)
end
defmodule Tensorflow.SnapShot do
@moduledoc false
use Protobuf, syntax: :proto3
@type t :: %__MODULE__{
action_count: non_neg_integer,
size: integer
}
defstruct [:action_count, :size]
field(:action_count, 1, type: :uint64)
field(:size, 2, type: :int64)
end
defmodule Tensorflow.MemoryDump do
@moduledoc false
use Protobuf, syntax: :proto3
@type t :: %__MODULE__{
allocator_name: String.t(),
bin_summary: [Tensorflow.BinSummary.t()],
chunk: [Tensorflow.MemChunk.t()],
snap_shot: [Tensorflow.SnapShot.t()],
stats: Tensorflow.MemAllocatorStats.t() | nil
}
defstruct [:allocator_name, :bin_summary, :chunk, :snap_shot, :stats]
field(:allocator_name, 1, type: :string)
field(:bin_summary, 2, repeated: true, type: Tensorflow.BinSummary)
field(:chunk, 3, repeated: true, type: Tensorflow.MemChunk)
field(:snap_shot, 4, repeated: true, type: Tensorflow.SnapShot)
field(:stats, 5, type: Tensorflow.MemAllocatorStats)
end
# file: lib/tensorflow/core/protobuf/bfc_memory_map.pb.ex
defmodule Timewrap do
@moduledoc """
Timewrap is a "Time-Wrapper" through which you can access different
time-sources, Elixir and Erlang offers you. Other than that you
can implement on your own.
Also, _Timewrap_ can do the time-warp, freeze, and unfreeze a
`Timewrap.Timer`.
You can instantiate different `Timewrap.Timer`s, registered and
supervised by `:name`.
The `Timewrap.TimeSupervisor` is started with the `Timewrap.Application`
and implicitly starts the default timer `:default_timer`. This
one is used whenever you call Timewrap-functions without a
timer given as the first argument.
### Configuration
`config/config.exs`
config :timewrap,
timer: :default,
unit: :second,
calendar: Calendar.ISO,
representation: :unix
### Examples:
use Timewrap # imports some handy Timewrap-functions.
#### With default Timer
Timewrap.freeze_timer()
item1 = %{ time: current_time() }
:timer.sleep(1000)
item2 = %{ time: current_time() }
assert item1.time == item2.time
#### Transactions with a given and frozen time
with_frozen_timer(~N[1964-08-31 06:00:00Z], fn ->
... do something while `current_time` will
always return the given timestamp within this
block...
end )
#### Start several independent timers
{:ok, today} = new_timer(:today)
{:ok, next_week} = new_timer(:next_week)
freeze_time(:today, ~N[2019-02-11 09:00:00])
freeze_time(:next_week, ~N[2019-02-18 09:00:00])
... do something ...
unfreeze_time(:today)
unfreeze_time(:next_week)
"""
defmacro __using__(_args) do
quote do
alias Timewrap.Timer
import Timewrap,
only: [
current_time: 0,
current_time: 1,
freeze_time: 0,
freeze_time: 1,
freeze_time: 2,
unfreeze_time: 0,
unfreeze_time: 1,
new_timer: 1,
release_timer: 1,
with_frozen_time: 2
]
end
end
@doc """
Get the `current_time` in the format you've configured.
### Configuration
TODO: Describe configuration here.
### Examples
iex> Timewrap.current_time() == System.system_time(:second)
true
"""
def current_time(timer \\ :default_timer) do
Timewrap.Timer.current_time(timer)
end
@doc """
Freeze the current time. All calls to `current_time` will return
the same value until you call `unfreeze`.
### Example:
iex> frozen = Timewrap.freeze_time()
iex> :timer.sleep(1001)
iex> assert frozen == Timewrap.current_time()
true
"""
def freeze_time(timer \\ :default_timer, time \\ nil) do
freeze(timer, time)
end
@doc """
Unfreeze a frozen time. If the Timer is not frozen, this function
is a _noop_.
### Example:
iex> frozen = Timewrap.freeze_time()
iex> :timer.sleep(1001)
iex> assert frozen == Timewrap.current_time()
iex> reseted = Timewrap.unfreeze_time()
iex> assert reseted == System.system_time(:second)
iex> assert reseted != frozen
true
"""
def unfreeze_time(timer \\ :default_timer) do
unfreeze(timer)
end
@doc """
Start a new timer-agent. A supervised worker of
`Timewrap.TimeSupervisor`.
### Example:
iex> use Timewrap
iex> {:ok, t} = new_timer(:mytime)
iex> assert is_pid(t)
iex> :timer.sleep(100)
iex> Timewrap.release_timer(t)
:ok
"""
def new_timer(name) do
{:ok, pid} = Timewrap.TimeSupervisor.start_timer(name)
{:ok, pid}
end
@doc """
Release a running timer terminates the process and removes
it from supervision.
### Example:
iex> use Timewrap
iex> {:ok, t} = new_timer(:mytime)
iex> :timer.sleep(100)
iex> release_timer(t)
iex> Process.alive?(t)
false
"""
def release_timer(pid) do
if Process.alive?(pid) do
Timewrap.TimeSupervisor.stop_timer(pid)
end
end
@doc """
Execute a given block with a timer frozen at the given time
### Example
iex> use Timewrap
iex> "1964-08-31 06:00:00"
iex> |> Timewrap.with_frozen_time(fn() ->
iex> assert current_time() == -168372000
iex> end)
true
"""
def with_frozen_time(time, fun)
def with_frozen_time(time, fun) when is_binary(time) do
    if String.ends_with?(time, "Z") do
{:ok, dt, offset} = DateTime.from_iso8601(time)
freeze_time(:default_timer, DateTime.to_unix(dt) + offset)
rc = fun.()
unfreeze_time(:default_timer)
rc
else
with_frozen_time(time <> "Z", fun)
end
end
def with_frozen_time(%NaiveDateTime{} = ndt, fun) do
NaiveDateTime.to_iso8601(ndt)
|> with_frozen_time(fun)
end
defp freeze(timer, time), do: Timewrap.Timer.freeze(timer, time)
defp unfreeze(timer), do: Timewrap.Timer.unfreeze(timer)
end
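The string clause of `with_frozen_time/2` above relies on `DateTime.from_iso8601/1`, which returns the parsed datetime converted to UTC together with the input's UTC offset in seconds (`0` for a trailing `"Z"`). The module's own doctest value can be reproduced directly:

```elixir
# Parse an ISO-8601 timestamp; the third element is the offset in seconds.
{:ok, dt, 0} = DateTime.from_iso8601("1964-08-31T06:00:00Z")

# Pre-1970 datetimes yield negative Unix timestamps.
-168_372_000 = DateTime.to_unix(dt)
```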
# file: lib/timewrap.ex
defmodule RoboticaUi.RootManager do
@moduledoc """
Manage active scene and tabs
"""
require Logger
use GenServer
alias Scenic.ViewPort
defmodule Scenes do
@moduledoc false
@type t :: %__MODULE__{
message: atom() | {atom(), any()} | nil
}
defstruct [:message]
end
defmodule Tabs do
@moduledoc false
@type t :: %__MODULE__{
clock: atom() | {atom(), any()} | nil,
schedule: atom() | {atom(), any()} | nil,
local: atom() | {atom(), any()} | nil
}
defstruct [:clock, :schedule, :local]
end
defmodule State do
@moduledoc false
@type t :: %__MODULE__{
scenes: Scenes.t(),
tabs: Tabs.t(),
tab: :clock | :schedule | :local,
timer: reference() | nil,
priority_scene: atom() | {atom(), any()} | nil
}
defstruct scenes: %Scenes{},
tabs: %Tabs{
clock: {RoboticaUi.Scene.Clock, nil},
schedule: {RoboticaUi.Scene.Schedule, nil},
local: {RoboticaUi.Scene.Local, nil}
},
tab: :clock,
timer: nil,
priority_scene: nil
end
def start_link(_opts) do
GenServer.start_link(__MODULE__, :ok, name: __MODULE__)
end
@impl true
def init(_opts) do
state = reset_timer(%State{})
{:ok, state}
end
@spec set_priority_scene(any) :: :ok
def set_priority_scene(scene) do
GenServer.cast(__MODULE__, {:set_priority_scene, scene})
end
@spec set_tab_scene(any, any) :: :ok
def set_tab_scene(id, scene) do
GenServer.cast(__MODULE__, {:set_tab_scene, id, scene})
end
@spec set_tab(any) :: :ok
def set_tab(id) do
GenServer.cast(__MODULE__, {:set_tab, id})
end
@spec reset_screensaver :: :ok
def reset_screensaver do
GenServer.cast(__MODULE__, {:reset_screensaver})
end
# PRIVATE STUFF BELOW
@spec screen_off?(State.t()) :: boolean()
defp screen_off?(state) do
is_nil(state.timer)
end
@spec get_current_scene(State.t()) :: atom() | {atom(), any()} | nil
def get_current_scene(state) do
cond do
not is_nil(state.priority_scene) -> state.priority_scene
screen_off?(state) -> :screen_off
true -> Map.get(state.tabs, state.tab)
end
end
@spec update_state(State.t(), (State.t() -> State.t())) :: State.t()
def update_state(state, callback) do
old_scene = get_current_scene(state)
new_state = callback.(state)
new_scene = get_current_scene(new_state)
if old_scene != :screen_off and new_scene == :screen_off do
screen_off()
end
required_scene =
case new_scene do
:screen_off -> {RoboticaUi.Scene.Screensaver, nil}
new_scene -> new_scene
end
if old_scene != new_scene do
ViewPort.set_root(:main_viewport, required_scene)
end
if old_scene == :screen_off and new_scene != :screen_off do
screen_on()
end
new_state
end
@spec reset_timer(State.t()) :: State.t()
defp reset_timer(state) do
Logger.debug("reset_timer #{inspect(state.timer)}")
case state.timer do
nil -> nil
timer -> Process.cancel_timer(timer)
end
timer = Process.send_after(__MODULE__, :screen_off, 30_000, [])
%State{state | timer: timer}
end
# Screen Control
@spec screen_off :: nil
defp screen_off do
Logger.debug("screen_off")
File.write("/sys/class/backlight/rpi_backlight/bl_power", "1")
try do
System.cmd("vcgencmd", ["display_power", "0"])
rescue
ErlangError -> nil
end
end
@spec screen_on :: nil
defp screen_on do
Logger.info("screen_on")
File.write("/sys/class/backlight/rpi_backlight/bl_power", "0")
try do
System.cmd("vcgencmd", ["display_power", "1"])
rescue
ErlangError -> nil
end
end
# Callback methods
@impl true
def handle_info(:screen_off, state) do
Logger.debug("rx screen_off")
state =
update_state(state, fn state ->
Process.cancel_timer(state.timer)
%State{state | timer: nil}
end)
{:noreply, state}
end
@impl true
def handle_cast({:set_priority_scene, scene}, state) do
Logger.info("rx set_priority_scene #{inspect(scene)}")
state =
update_state(state, fn state ->
%State{state | priority_scene: scene}
end)
{:noreply, state}
end
@impl true
def handle_cast({:set_tab_scene, id, scene}, state) do
Logger.info("rx set_tab_scene #{inspect(id)} #{inspect(scene)}")
# We update the saved state but do not update the display.
state = %State{state | tabs: %{state.tabs | id => scene}}
{:noreply, state}
end
@impl true
def handle_cast({:set_tab, id}, state) do
Logger.info("rx set_tab #{inspect(id)}")
state =
update_state(state, fn state ->
%State{state | tab: id}
end)
{:noreply, state}
end
@impl true
def handle_cast({:reset_screensaver}, state) do
Logger.debug("rx reset_screensaver")
state =
update_state(state, fn state ->
reset_timer(state)
end)
{:noreply, state}
end
end
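`update_state/2` above follows a small "derive, mutate, re-derive, act on the transition" pattern: compute the current scene, apply the state change, compute the new scene, and trigger side effects only on transitions. A reduced sketch with a hypothetical screen flag:

```elixir
# Returns the transition action plus the new state.
diff = fn state, update ->
  old = state.on
  new_state = update.(state)
  new = new_state.on

  action =
    cond do
      old and not new -> :turn_off
      new and not old -> :turn_on
      true -> :noop
    end

  {action, new_state}
end

{:turn_off, %{on: false}} = diff.(%{on: true}, fn s -> %{s | on: false} end)
{:noop, %{on: true}} = diff.(%{on: true}, fn s -> s end)
```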
# file: robotica_ui/lib/root_manager.ex
defmodule Cqrs.BoundedContext do
alias Cqrs.{BoundedContext, Guards}
@moduledoc """
Macros to create proxy functions to [commands](`Cqrs.Command`) and [queries](`Cqrs.Query`) in a module.
## Examples
defmodule Users do
use Cqrs.BoundedContext
command CreateUser
command CreateUser, as: :create_user2
query GetUser
query GetUser, as: :get_user2
end
### Commands
iex> {:error, {:invalid_command, errors}} = Users.create_user(name: "chris", email: "wrong")
...> errors
%{email: ["has invalid format"]}
iex> {:error, {:invalid_command, errors}} = Users.create_user2(name: "chris", email: "wrong")
...> errors
%{email: ["has invalid format"]}
iex> Users.create_user(name: "chris", email: "<EMAIL>")
{:ok, :dispatched}
iex> Users.create_user2(name: "chris", email: "<EMAIL>")
{:ok, :dispatched}
### Queries
iex> Users.get_user!(email: "wrong")
** (Cqrs.QueryError) email has invalid format
iex> {:error, errors} = Users.get_user2(%{bad: "data"})
...> errors
%{email: ["can't be blank"]}
iex> {:error, errors} = Users.get_user(email: "wrong")
...> errors
%{email: ["has invalid format"]}
iex> {:ok, query} = Users.get_user_query(email: "<EMAIL>")
...> query
#Ecto.Query<from u0 in User, where: u0.email == ^"<EMAIL>">
iex> {:ok, user} = Users.get_user(email: "<EMAIL>")
...> %{id: user.id, email: user.email}
%{id: "052c1984-74c9-522f-858f-f04f1d4cc786", email: "<EMAIL>"}
"""
defmacro __using__(_) do
quote do
import BoundedContext, only: [command: 1, command: 2, query: 1, query: 2]
end
end
@doc """
Creates proxy functions to dispatch this command module.
## Functions created
When given `CreateUser`
* `create_user!/0`
* `create_user!/1`
* `create_user!/2`
* `create_user/0`
* `create_user/1`
* `create_user/2`
## Options
* `:then` - A one-arity function to run on the execution result.
"""
defmacro command(command_module, opts \\ []) do
opts = Macro.escape(opts)
function_head =
quote do
function_name = BoundedContext.__function_name__(unquote(command_module), unquote(opts))
BoundedContext.__command_proxy_function_head__(unquote(command_module), function_name)
end
proxies =
quote location: :keep do
function_name = BoundedContext.__function_name__(unquote(command_module), unquote(opts))
Guards.ensure_is_command!(unquote(command_module))
BoundedContext.__command_proxy__(unquote(command_module), function_name, unquote(opts))
end
quote do
Module.eval_quoted(__ENV__, [unquote(function_head), unquote(proxies)])
end
end
def __command_proxy_function_head__(command_module, function_name) do
quote do
required_fields = unquote(command_module).__required_fields__()
if length(required_fields) > 0 do
def unquote(function_name)(attrs, opts \\ [])
def unquote(:"#{function_name}!")(attrs, opts \\ [])
else
def unquote(function_name)(attrs \\ [], opts \\ [])
def unquote(:"#{function_name}!")(attrs \\ [], opts \\ [])
end
end
end
def __command_proxy__(command_module, function_name, opts) do
quote do
@doc """
#{unquote(command_module).__module_docs__()}
"""
def unquote(function_name)(attrs, opts) do
opts = Keyword.merge(unquote(opts), Cqrs.Options.normalize(opts))
BoundedContext.__dispatch_command__(unquote(command_module), attrs, opts)
end
@doc """
#{unquote(command_module).__module_docs__()}
"""
def unquote(:"#{function_name}!")(attrs, opts) do
opts = Keyword.merge(unquote(opts), Cqrs.Options.normalize(opts))
BoundedContext.__dispatch_command__!(unquote(command_module), attrs, opts)
end
end
end
@doc """
Creates proxy functions to create and execute the given query.
## Functions created
When given `ListUsers`
* `list_users!/0`
* `list_users!/1`
* `list_users!/2`
* `list_users/0`
* `list_users/1`
* `list_users/2`
* `list_users_query!/0`
* `list_users_query!/1`
* `list_users_query!/2`
* `list_users_query/0`
* `list_users_query/1`
* `list_users_query/2`
"""
defmacro query(query_module, opts \\ []) do
opts = Macro.escape(opts)
function_head =
quote do
function_name = BoundedContext.__function_name__(unquote(query_module), unquote(opts))
BoundedContext.__query_proxy_function_head__(unquote(query_module), function_name)
end
proxies =
quote location: :keep do
Guards.ensure_is_query!(unquote(query_module))
function_name = BoundedContext.__function_name__(unquote(query_module), unquote(opts))
BoundedContext.__query_proxy__(
unquote(query_module),
function_name,
unquote(opts)
)
end
quote do
Module.eval_quoted(__ENV__, [unquote(function_head), unquote(proxies)])
end
end
def __query_proxy_function_head__(query_module, function_name) do
quote do
required_filters = unquote(query_module).__required_filters__()
if length(required_filters) > 0 do
def unquote(function_name)(filters, opts \\ [])
def unquote(:"#{function_name}!")(filters, opts \\ [])
def unquote(:"#{function_name}_query")(filters, opts \\ [])
def unquote(:"#{function_name}_query!")(filters, opts \\ [])
else
def unquote(function_name)(filters \\ [], opts \\ [])
def unquote(:"#{function_name}!")(filters \\ [], opts \\ [])
def unquote(:"#{function_name}_query")(filters \\ [], opts \\ [])
def unquote(:"#{function_name}_query!")(filters \\ [], opts \\ [])
end
end
end
def __query_proxy__(query_module, function_name, opts) do
quote do
@doc """
#{unquote(query_module).__module_docs__()}
"""
def unquote(function_name)(filters, opts) do
opts = Keyword.merge(unquote(opts), Cqrs.Options.normalize(opts))
BoundedContext.__execute_query__(unquote(query_module), filters, opts)
end
@doc """
#{unquote(query_module).__module_docs__()}
"""
def unquote(:"#{function_name}!")(filters, opts) do
opts = Keyword.merge(unquote(opts), Cqrs.Options.normalize(opts))
BoundedContext.__execute_query__!(unquote(query_module), filters, opts)
end
query_name = unquote(query_module).__name__()
query_headline_modifier = if query_name =~ ~r/^[aeiou]/i, do: "an", else: "a"
@doc """
Creates #{query_headline_modifier} [#{query_name}](`#{unquote(query_module)}`) query without executing it.
#{unquote(query_module).__module_docs__()}
"""
def unquote(:"#{function_name}_query")(filters, opts) do
BoundedContext.__create_query__(unquote(query_module), filters, opts)
end
@doc """
Creates #{query_headline_modifier} [#{query_name}](`#{unquote(query_module)}`) query without executing it.
#{unquote(query_module).__module_docs__()}
"""
def unquote(:"#{function_name}_query!")(filters, opts) do
BoundedContext.__create_query__!(unquote(query_module), filters, opts)
end
end
end
def __function_name__(module, opts) do
[name | _] =
module
|> Module.split()
|> Enum.reverse()
default_function_name =
name
|> to_string
|> Macro.underscore()
|> String.to_atom()
Keyword.get(opts, :as, default_function_name)
end
def __dispatch_command__(module, attrs, opts) do
attrs
|> module.new(opts)
|> module.dispatch(opts)
|> __handle_result__(opts)
end
def __dispatch_command__!(module, attrs, opts) do
attrs
|> module.new!(opts)
|> module.dispatch(opts)
|> __handle_result__(opts)
end
def __create_query__(module, attrs, opts) do
module.new(attrs, opts)
end
def __create_query__!(module, attrs, opts) do
module.new!(attrs, opts)
end
def __execute_query__(module, attrs, opts) do
attrs
|> module.new(opts)
|> module.execute(opts)
|> __handle_result__(opts)
end
def __execute_query__!(module, attrs, opts) do
attrs
|> module.new!(opts)
|> module.execute!(opts)
|> __handle_result__(opts)
end
def __handle_result__(result, opts) do
case Keyword.get(opts, :then, &Function.identity/1) do
fun when is_function(fun, 1) -> fun.(result)
_ -> raise("'then' should be a function/1")
end
end
end
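As a sketch of how the proxy names above are derived: `__function_name__/2` underscores the last segment of the module name unless an `:as` override is given. The module name below is hypothetical and used purely for illustration (`Module.split/1` works on the alias without the module being defined).

```elixir
# Last alias segment, underscored into an atom:
Cqrs.BoundedContext.__function_name__(MyApp.Users.CreateUser, [])
#=> :create_user

# An :as option overrides the derived name:
Cqrs.BoundedContext.__function_name__(MyApp.Users.CreateUser, as: :register_user)
#=> :register_user
```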
|
lib/cqrs/bounded_context.ex
| 0.775477 | 0.411643 |
bounded_context.ex
|
starcoder
|
defmodule Rolodex.Response do
@moduledoc """
Exposes functions and macros for defining reusable responses.
It exposes the following macros, which, when used together, will set up a response:
- `response/2` - for declaring a response
- `desc/1` - for setting an (optional) response description
- `content/2` - for defining a response shape for a specific content type
- `schema/1` and `schema/2` - for defining the shape for a content type
- `example/2` - for defining an (optional) response example for a content type
It also exposes the following functions:
- `is_response_module?/1` - determines if the provided item is a module that
has defined a reusable response
- `to_map/1` - serializes a response module into a map
- `get_refs/1` - traverses a response and searches for any nested `Rolodex.Schema`
refs within
"""
alias Rolodex.DSL
defmacro __using__(_opts) do
quote do
use Rolodex.DSL
import Rolodex.Response, only: :macros
end
end
@doc """
Opens up the response definition for the current module. Will name the response
and generate metadata for the response based on macro calls within the provided
block.
**Accepts**
- `name` - the response name
- `block` - response shape definitions
## Example
defmodule MyResponse do
use Rolodex.Response
response "MyResponse" do
desc "A demo response with multiple content types"
content "application/json" do
schema MyResponseSchema
example :response, %{foo: "bar"}
example :other_response, %{bar: "baz"}
end
content "foo/bar" do
schema AnotherResponseSchema
example :response, %{foo: "bar"}
end
end
end
"""
defmacro response(name, opts) do
DSL.def_content_body(:__response__, name, opts)
end
@doc """
Sets a description for the response
"""
defmacro desc(str), do: DSL.set_desc(str)
@doc """
Sets headers to be included in the response. You can use a shared headers ref
defined via `Rolodex.Headers`, or just pass in a bare map or keyword list. If
the macro is called multiple times, all headers passed in will be merged together
in the docs result.
## Examples
# Shared headers module
defmodule MyResponse do
use Rolodex.Response
response "MyResponse" do
headers MyResponseHeaders
headers MyAdditionalResponseHeaders
end
end
# Headers defined in place
defmodule MyResponse do
use Rolodex.Response
response "MyResponse" do
headers %{
"X-Pagination" => %{
type: :integer,
description: "Pagination information"
}
}
end
end
"""
defmacro headers(metadata), do: DSL.set_headers(metadata)
@doc """
Defines a response shape for the given content type key
**Accepts**
- `key` - a valid content-type key
- `block` - metadata about the response shape for this content type
"""
defmacro content(key, opts) do
DSL.def_content_type_shape(:__response__, key, opts)
end
@doc """
Sets an example for the content type. This macro can be used multiple times
within a content type block to allow multiple examples.
**Accepts**
- `name` - a name for the example
- `body` - a map, which is the example data
"""
defmacro example(name, example_body) do
DSL.set_example(:__response__, name, example_body)
end
@doc """
Sets a schema for the current response content type. There are three ways
you can define a schema for a content-type chunk:
1. You can pass in an alias for a reusable schema defined via `Rolodex.Schema`
2. You can define a schema inline via the same macro syntax used in `Rolodex.Schema`
3. You can define a schema inline via a bare map, which will be parsed with `Rolodex.Field`
## Examples
# Via a reusable schema alias
content "application/json" do
schema MySchema
end
# Can define a schema inline via the schema + field + partial macros
content "application/json" do
schema do
field :id, :uuid
field :name, :string, desc: "The name"
partial PaginationParams
end
end
# Can provide a bare map, which will be parsed via `Rolodex.Field`
content "application/json" do
schema %{
type: :object,
properties: %{
id: :uuid,
name: :string
}
}
end
"""
defmacro schema(mod), do: DSL.set_schema(:__response__, mod)
@doc """
Sets a collection of schemas for the current response content type.
## Examples
# Response is a list
content "application/json" do
schema :list, of: [MySchema]
end
# Response is one of the provided types
content "application/json" do
schema :one_of, of: [MySchema, MyOtherSchema]
end
"""
defmacro schema(collection_type, opts) do
DSL.set_schema(:__response__, collection_type, opts)
end
@doc """
Adds a new field to the schema when defining a schema inline via macros. See
`Rolodex.Field` for more information about valid field metadata.
Accepts
- `identifier` - field name
- `type` - either an atom or another Rolodex.Schema module
- `opts` - a keyword list of options, looks for `desc` and `of` (for array types)
## Example
defmodule MyResponse do
use Rolodex.Response
response "MyResponse" do
content "application/json" do
schema do
# Atomic field with no description
field :id, :uuid
# Atomic field with a description
field :name, :string, desc: "The object's name"
# A field that refers to another, nested object
field :other, OtherSchema
# A field that is an array of items of one-or-more types
field :multi, :list, of: [:string, OtherSchema]
# You can use a shorthand to define a list field, the below is identical
# to the above
field :multi, [:string, OtherSchema]
# A field that is one of the possible provided types
field :any, :one_of, of: [:string, OtherSchema]
end
end
end
end
"""
defmacro field(identifier, type, opts \\ []) do
DSL.set_field(:fields, identifier, type, opts)
end
@doc """
Adds a new partial to the schema when defining a schema inline via macros. A
partial is another schema that will be serialized and merged into the top-level
properties map for the current schema. Partials are useful for shared parameters
used across multiple schemas. Bare keyword lists and maps that are parseable
by `Rolodex.Field` are also supported.
## Example
defmodule PaginationParams do
use Rolodex.Schema
schema "PaginationParams" do
field :page, :integer
field :page_size, :integer
field :total_pages, :integer
end
end
defmodule MyResponse do
use Rolodex.Response
response "MyResponse" do
content "application/json" do
schema do
field :id, :uuid
partial PaginationParams
end
end
end
end
"""
defmacro partial(mod), do: DSL.set_partial(mod)
@doc """
Determines if an arbitrary item is a module that has defined a reusable response
via `Rolodex.Response` macros
## Example
iex> defmodule SimpleResponse do
...> use Rolodex.Response
...> response "SimpleResponse" do
...> content "application/json" do
...> schema MySchema
...> end
...> end
...> end
iex>
iex> # Validating a response module
iex> Rolodex.Response.is_response_module?(SimpleResponse)
true
iex> # Validating some other module
iex> Rolodex.Response.is_response_module?(OtherModule)
false
"""
@spec is_response_module?(any()) :: boolean()
def is_response_module?(mod), do: DSL.is_module_of_type?(mod, :__response__)
@doc """
Serializes the `Rolodex.Response` metadata into a formatted map.
## Example
iex> defmodule MySimpleSchema do
...> use Rolodex.Schema
...>
...> schema "MySimpleSchema" do
...> field :id, :uuid
...> end
...> end
iex>
iex> defmodule MyResponse do
...> use Rolodex.Response
...>
...> response "MyResponse" do
...> desc "A demo response"
...>
...> headers %{"X-Rate-Limited" => :boolean}
...>
...> content "application/json" do
...> schema MySimpleSchema
...> example :response, %{id: "123"}
...> end
...>
...> content "application/json-list" do
...> schema [MySimpleSchema]
...> example :response, [%{id: "123"}]
...> example :another_response, [%{id: "234"}]
...> end
...> end
...> end
iex>
iex> Rolodex.Response.to_map(MyResponse)
%{
desc: "A demo response",
headers: [
%{"X-Rate-Limited" => %{type: :boolean}}
],
content: %{
"application/json" => %{
examples: %{
response: %{id: "123"}
},
schema: %{
type: :ref,
ref: Rolodex.ResponseTest.MySimpleSchema
}
},
"application/json-list" => %{
examples: %{
response: [%{id: "123"}],
another_response: [%{id: "234"}],
},
schema: %{
type: :list,
of: [
%{type: :ref, ref: Rolodex.ResponseTest.MySimpleSchema}
]
}
}
}
}
"""
@spec to_map(module()) :: map()
def to_map(mod), do: DSL.to_content_body_map(&mod.__response__/1)
@doc """
Traverses a serialized Response and collects any nested references to any
Schemas within. See `Rolodex.Field.get_refs/1` for more info.
"""
@spec get_refs(module()) :: [module()]
def get_refs(mod), do: DSL.get_refs_in_content_body(&mod.__response__/1)
end
|
lib/rolodex/response.ex
| 0.906681 | 0.561215 |
response.ex
|
starcoder
|
defmodule Cannes.Tools do
@moduledoc """
`Cannes.Tools` provides a convenient wrapper around the Python library `cantools`.
"""
use Export.Python
@python_dir "lib/python"
@python_module "cantool"
@doc """
Calls the given function with args from the given Python file.
"""
@spec python_call(binary, binary, [any]) :: any
def python_call(file, function, args \\ []) do
{:ok, py} = Python.start(python_path: Path.expand(@python_dir))
Python.call(py, file, function, args)
end
@doc """
Decodes the given signal `data` as a message of the given frame id or name `frame_id_or_name`. Returns a map of signal name-value entries.
If `decode_choices` is `false`, scaled values are not converted to choice strings (if available).
If `scaling` is `false`, no scaling of signals is performed.
## Example
iex> Cannes.Tools.decode_message(2024, <<0x04, 0x41, 0x0C, 0x02, 0x6A, 0x00, 0x00, 0x00>>)
%{
'ParameterID_Service01' => 'S1_PID_0C_EngineRPM',
'S1_PID_0C_EngineRPM' => 154.5,
'length' => 4,
'response' => 4,
'service' => 'Show current data '
}
"""
@spec decode_message(any, any, any, any) :: any
def decode_message(arbitration_id, data, decode_choices \\ true, scaling \\ true) do
python_call(@python_module, "decode_message", [arbitration_id, data, decode_choices, scaling])
end
@doc """
Encodes the given signal `data` as a message of the given frame id or name `frame_id_or_name`. `data` is a map of signal name-value entries.
If `scaling` is `false`, no scaling of signals is performed.
If `padding` is `true`, unused bits are encoded as 1.
If `strict` is `true`, all signal values must be within their allowed ranges, or an exception is raised.
## Example
iex> Cannes.Tools.encode_message(2024, %{"ParameterID_Service01" => "S1_PID_0C_EngineRPM", "S1_PID_0C_EngineRPM" => 154.5, "length" => 4, "response" => 4, "service" => "Show current data "})
<<4, 65, 12, 2, 106, 0, 0, 0>>
"""
@spec encode_message(any, any, any, any, any) :: any
def encode_message(frame_id_or_name, data, scaling \\ true, padding \\ false, strict \\ true) do
python_call(@python_module, "encode_message", [
frame_id_or_name,
data |> Jason.encode!(),
scaling,
padding,
strict
])
end
end
|
lib/tools.ex
| 0.874466 | 0.460471 |
tools.ex
|
starcoder
|
defmodule Etherscan.Util do
@moduledoc false
@denominations [
wei: 1,
kwei: 1000,
mwei: 1_000_000,
gwei: 1_000_000_000,
shannon: 1_000_000_000,
nano: 1_000_000_000,
szabo: 1_000_000_000_000,
micro: 1_000_000_000_000,
finney: 1_000_000_000_000_000,
milli: 1_000_000_000_000_000,
ether: 1_000_000_000_000_000_000
]
@doc """
Formats a string representing an Ethereum balance
"""
@spec format_balance(balance :: String.t()) :: String.t()
def format_balance(balance) do
balance
|> String.to_integer()
|> convert()
end
@spec convert(number :: integer() | float(), opts :: Keyword.t()) :: String.t()
def convert(number, opts \\ [])
def convert(number, opts) when is_number(number) do
denom =
@denominations
|> List.keyfind(Keyword.get(opts, :denomination, :ether), 0)
|> elem(1)
pretty_float(number / denom, Keyword.get(opts, :decimals, 20))
end
def convert(number, opts) when is_binary(number) do
number
|> String.to_integer()
|> convert(opts)
end
@doc """
Converts a float to a nicely formatted string
"""
@spec pretty_float(number :: float() | String.t(), decimals :: integer()) :: String.t()
def pretty_float(number, decimals \\ 20)
def pretty_float(number, decimals) when is_number(number) do
:erlang.float_to_binary(number, [:compact, decimals: decimals])
end
def pretty_float(number, decimals) when is_binary(number) do
number
|> String.to_float()
|> pretty_float(decimals)
end
@doc """
Wraps a value inside a tagged Tuple using the provided tag.
"""
@spec wrap(value :: any(), tag :: atom()) :: {atom(), any()}
def wrap(value, tag) when is_atom(tag), do: {tag, value}
@spec hex_to_number(hex :: String.t()) :: {:ok, integer()} | {:error, String.t()}
def hex_to_number("0x" <> hex) do
hex
|> Integer.parse(16)
|> case do
{integer, _} ->
integer
|> wrap(:ok)
:error ->
"invalid hex - #{inspect("0x" <> hex)}"
|> wrap(:error)
end
end
def hex_to_number(hex), do: "invalid hex - #{inspect(hex)}" |> wrap(:error)
@spec safe_hex_to_number(hex :: String.t()) :: integer()
def safe_hex_to_number(hex) do
hex
|> hex_to_number()
|> case do
{:ok, integer} ->
integer
{:error, _reason} ->
0
end
end
@spec number_to_hex(number :: integer() | String.t()) :: String.t()
def number_to_hex(number) when is_integer(number) do
number
|> Integer.to_string(16)
|> (&Kernel.<>("0x", &1)).()
end
def number_to_hex(number) when is_binary(number) do
number
|> String.to_integer()
|> number_to_hex()
end
end
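A short sketch of how these helpers compose, assuming the module above is compiled; the values follow directly from the defaults:

```elixir
alias Etherscan.Util

# 1 ether expressed in wei, rendered back as ether with 4 decimals:
Util.convert(1_000_000_000_000_000_000, decimals: 4)
#=> "1.0"

# Hex round-trip (note Integer.to_string/2 emits uppercase hex digits):
Util.hex_to_number("0x1b")
#=> {:ok, 27}
Util.number_to_hex(27)
#=> "0x1B"

# Malformed input comes back as a tagged error tuple:
Util.hex_to_number("27")
#=> {:error, "invalid hex - \"27\""}
```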
|
lib/etherscan/util.ex
| 0.866217 | 0.560433 |
util.ex
|
starcoder
|
defmodule Ash.Flow.Transformers.SetTypes do
@moduledoc "Sets the actual types and transforms the type constraints"
use Ash.Dsl.Transformer
alias Ash.Dsl.Transformer
def transform(_resource, dsl_state) do
set_argument_types(dsl_state)
end
defp set_argument_types(dsl_state) do
arguments = Transformer.get_entities(dsl_state, [:flow])
new_arguments =
arguments
|> Enum.reduce_while({:ok, []}, fn argument, {:ok, args} ->
type = Ash.Type.get_type(argument.type)
case validate_constraints(type, argument.constraints) do
{:ok, constraints} ->
{:cont, {:ok, [%{argument | type: type, constraints: constraints} | args]}}
{:error, error} ->
{:halt, {:error, error}}
end
end)
case new_arguments do
{:ok, new_args} ->
{:ok,
Enum.reduce(new_args, dsl_state, fn new_arg, dsl_state ->
Transformer.replace_entity(
dsl_state,
[:flow],
new_arg,
fn replacing ->
replacing.name == new_arg.name
end
)
end)}
{:error, error} ->
{:error, error}
end
end
def validate_constraints(type, constraints) do
case type do
{:array, type} ->
with {:ok, new_constraints} <-
Ash.OptionsHelpers.validate(
Keyword.delete(constraints, :items),
Ash.Type.array_constraints(type)
),
{:ok, item_constraints} <- validate_item_constraints(type, constraints) do
{:ok, Keyword.put(new_constraints, :items, item_constraints)}
end
type ->
schema = Ash.Type.constraints(type)
case Ash.OptionsHelpers.validate(constraints, schema) do
{:ok, constraints} ->
validate_none_reserved(constraints, type)
{:error, error} ->
{:error, error}
end
end
end
defp validate_item_constraints(type, constraints) do
schema = Ash.Type.constraints(type)
case Ash.OptionsHelpers.validate(constraints[:items] || [], schema) do
{:ok, item_constraints} ->
validate_none_reserved(item_constraints, type)
{:error, error} ->
{:error, error}
end
end
@reserved ~w(default source autogenerate read_after_writes virtual primary_key load_in_query redact)a
defp validate_none_reserved(constraints, type) do
case Enum.find(@reserved, &Keyword.has_key?(constraints, &1)) do
nil ->
{:ok, constraints}
key ->
{:error,
"Invalid constraint key #{key} in type #{inspect(type)}. This name is reserved due to the underlying ecto implementation."}
end
end
def after?(Ash.Resource.Transformers.BelongsToAttribute), do: true
def after?(_), do: false
end
|
lib/ash/flow/transformers/set_types.ex
| 0.82379 | 0.498047 |
set_types.ex
|
starcoder
|
defmodule Wabbit.Exchange do
@moduledoc """
Functions to operate on Exchanges.
"""
import Wabbit.Record
@doc """
Declares an Exchange. The default Exchange type is `direct`.
AMQP 0-9-1 brokers provide four pre-declared exchanges:
* Direct exchange: (empty string) or `amq.direct`
* Fanout exchange: `amq.fanout`
* Topic exchange: `amq.topic`
* Headers exchange: `amq.match` (and `amq.headers` in RabbitMQ)
Besides the exchange name and type, the following options can be used:
# Options
* `:durable` - If set, keeps the Exchange between restarts of the broker
* `:auto_delete` - If set, deletes the Exchange once all queues unbind from it
* `:passive` - If set, returns an error if the Exchange does not already exist
* `:internal` - If set, the exchange may not be used directly by publishers,
but only when bound to other exchanges. Internal exchanges are used to
construct wiring that is not visible to applications.
* `:no_wait` - If set, the server will not respond to the method
* `:arguments` - A set of arguments for the declaration
"""
def declare(channel, exchange, type \\ :direct, options \\ []) do
exchange_declare =
exchange_declare(exchange: exchange,
type: Atom.to_string(type),
passive: Keyword.get(options, :passive, false),
durable: Keyword.get(options, :durable, false),
auto_delete: Keyword.get(options, :auto_delete, false),
internal: Keyword.get(options, :internal, false),
nowait: Keyword.get(options, :no_wait, false),
arguments: Keyword.get(options, :arguments, []))
exchange_declare_ok() = :amqp_channel.call(channel, exchange_declare)
:ok
end
@doc """
Deletes an Exchange by name.
When an Exchange is deleted all bindings to it are also deleted
# Options
* `:if_unused` - If set, the server will only delete the exchange
if it has no queue bindings
* `:no_wait` - If set, the server will not respond to the method
"""
def delete(channel, exchange, options \\ []) do
exchange_delete =
exchange_delete(exchange: exchange,
if_unused: Keyword.get(options, :if_unused, false),
nowait: Keyword.get(options, :no_wait, false))
exchange_delete_ok() = :amqp_channel.call(channel, exchange_delete)
:ok
end
@doc """
Binds an Exchange to another Exchange or a Queue using the
exchange.bind AMQP method (a RabbitMQ-specific extension)
# Options
* `:routing_key` - If set, specifies the routing key for the binding
* `:no_wait` - If set, the server will not respond to the method
* `:arguments` - A set of arguments for the binding
"""
def bind(channel, destination, source, options \\ []) do
exchange_bind =
exchange_bind(destination: destination,
source: source,
routing_key: Keyword.get(options, :routing_key, ""),
nowait: Keyword.get(options, :no_wait, false),
arguments: Keyword.get(options, :arguments, []))
exchange_bind_ok() = :amqp_channel.call(channel, exchange_bind)
:ok
end
@doc """
Unbinds an Exchange from another Exchange or a Queue using the
exchange.unbind AMQP method (a RabbitMQ-specific extension)
# Options
* `:routing_key` - If set, specifies the routing key for the unbind
* `:no_wait` - If set, the server will not respond to the method
* `:arguments` - A set of arguments for the unbind
"""
def unbind(channel, destination, source, options \\ []) do
exchange_unbind =
exchange_unbind(destination: destination,
source: source,
routing_key: Keyword.get(options, :routing_key, ""),
nowait: Keyword.get(options, :no_wait, false),
arguments: Keyword.get(options, :arguments, []))
exchange_unbind_ok() = :amqp_channel.call(channel, exchange_unbind)
:ok
end
@doc """
Convenience function to declare an Exchange of type `direct`.
"""
def direct(channel, exchange, options \\ []) do
declare(channel, exchange, :direct, options)
end
@doc """
Convenience function to declare an Exchange of type `fanout`.
"""
def fanout(channel, exchange, options \\ []) do
declare(channel, exchange, :fanout, options)
end
@doc """
Convenience function to declare an Exchange of type `topic`.
"""
def topic(channel, exchange, options \\ []) do
declare(channel, exchange, :topic, options)
end
end
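A usage sketch; the channel and exchange names here are hypothetical, and a running broker is required, so this is illustrative only:

```elixir
# `channel` is assumed to be an open Wabbit channel.
:ok = Wabbit.Exchange.declare(channel, "events", :topic, durable: true)

# RabbitMQ-specific exchange-to-exchange binding:
:ok = Wabbit.Exchange.bind(channel, "audit", "events", routing_key: "user.*")

# Convenience helper, equivalent to declare(channel, "logs", :fanout):
:ok = Wabbit.Exchange.fanout(channel, "logs")
```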
|
lib/wabbit/exchange.ex
| 0.814459 | 0.602559 |
exchange.ex
|
starcoder
|
defmodule MazeServer.MazeAi.AStar do
alias MazeServer.MazeAi
@moduledoc """
this module define A* rules.
in this algorithm, frontier is same as BFS with a tiny difference!
frontier will sort by ascending `path_cost` order after every push.
"""
@doc """
This expander is the same as `MazeAi.expander`, but it also compares `path_cost` before expanding a node: a point already in the frontier is replaced if the new path to it is cheaper.
"""
def expander(frontier, point, explored_set, frontier_push) do
unless Enum.any?(explored_set, &(&1.x == point.x and &1.y == point.y)) or point.state == "1" do
p_index =
Enum.find_index(
frontier,
&(&1.x == point.x and &1.y == point.y and point.path_cost < &1.path_cost)
)
new_frontier =
unless p_index == nil do
List.delete_at(frontier, p_index)
else
frontier
end
frontier_push.(new_frontier, point)
else
frontier
end
end
@doc """
Frontier pop rule for A*.
"""
def frontier_pop(frontier) do
List.pop_at(frontier, 0)
end
@doc """
Frontier push rule for A*.
It keeps the frontier sorted in ascending `path_cost` order.
"""
def frontier_push(frontier, point) do
List.insert_at(frontier, -1, point)
|> Enum.sort(fn %{path_cost: pc1}, %{path_cost: pc2} -> pc1 <= pc2 end)
end
@doc """
Heuristic calculation using the diagonal distance.
## Examples
iex> MazeServer.MazeAi.AStar.h({1, 2}, {4, 5})
6
which holds because __2 * max((4 - 1), (5 - 2)) = 6__.
"""
def h({x, y}, {end_x, end_y}) do
d1 = abs(end_x - x)
d2 = abs(end_y - y)
# Diagonal Distance
2 * max(d1, d2)
end
@doc """
Calculates the path cost of a node as the path cost of its parent node plus one.
"""
def g(path_cost, _) do
path_cost + 1
end
@doc """
A* search. It searches the same way as BFS, but pops nodes in ascending `path_cost` order.
"""
def search(board \\ MazeAi.init_board(), point \\ %{x: 1, y: 14}) do
target = MazeAi.find_target(board)
root = MazeAi.create_point(point, board, nil, &h/2, target, nil)
MazeAi.graph_search(
[root],
[],
target,
board,
"2",
"1",
-1,
&g/2,
&h/2,
&frontier_pop/1,
&frontier_push/2,
&expander/4
)
end
end
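The two pure pieces above can be exercised directly; this sketch just restates the heuristic and the frontier ordering:

```elixir
alias MazeServer.MazeAi.AStar

# Diagonal distance: 2 * max(|dx|, |dy|)
AStar.h({1, 2}, {4, 5})
#=> 6

# frontier_push/2 keeps the frontier sorted by ascending path_cost:
frontier =
  []
  |> AStar.frontier_push(%{path_cost: 3})
  |> AStar.frontier_push(%{path_cost: 1})
  |> AStar.frontier_push(%{path_cost: 2})

Enum.map(frontier, & &1.path_cost)
#=> [1, 2, 3]
```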
|
lib/maze_server/maze_ai/a_star.ex
| 0.852721 | 0.522263 |
a_star.ex
|
starcoder
|