Dataset columns:
  code       — string, lengths 26 to 124k
  docstring  — string, lengths 23 to 125k
  func_name  — string, lengths 1 to 98
  language   — string, 1 distinct value
  repo       — string, lengths 5 to 53
  path       — string, lengths 7 to 151
  url        — string, lengths 50 to 211
  license    — string, 7 distinct values
def get(i)
  return nil if i >= @size || i < 0
  @tuple[i].get
end
Get the value of the element at the given index. @param [Integer] i the index from which to retrieve the value @return [Object] the value at the given index or nil if the index is out of bounds
get
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/tuple.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/tuple.rb
Apache-2.0
def set(i, value)
  return nil if i >= @size || i < 0
  @tuple[i].set(value)
end
Set the element at the given index to the given value @param [Integer] i the index for the element to set @param [Object] value the value to set at the given index @return [Object] the new value of the element at the given index or nil if the index is out of bounds
set
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/tuple.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/tuple.rb
Apache-2.0
def compare_and_set(i, old_value, new_value)
  return false if i >= @size || i < 0
  @tuple[i].compare_and_set(old_value, new_value)
end
Set the value at the given index to the new value if and only if the current value matches the given old value. @param [Integer] i the index for the element to set @param [Object] old_value the value to compare against the current value @param [Object] new_value the value to set at the given index @return [Boolean] true if the value at the given element was set else false
compare_and_set
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/tuple.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/tuple.rb
Apache-2.0
def each
  @tuple.each { |ref| yield ref.get }
end
Calls the given block once for each element in self, passing that element as a parameter. @yieldparam [Object] ref the `Concurrent::AtomicReference` object at the current index
each
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/tuple.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/tuple.rb
Apache-2.0
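Illustrative usage sketch for the Tuple rows above (added for context, not part of the dataset; assumes the public Concurrent::Tuple API of concurrent-ruby 1.2.x):
require 'concurrent'

tuple = Concurrent::Tuple.new(4)          # four slots, each initially nil
tuple.set(0, :ready)                      # store a value at index 0
tuple.get(0)                              #=> :ready
tuple.get(99)                             #=> nil (index out of bounds)
tuple.compare_and_set(0, :ready, :done)   #=> true, swapped atomically
tuple.each { |value| puts value.inspect } # yields the value stored in each slot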
def initialize(value)
  @value = value
  @lock = Mutex.new
end
Create a new `TVar` with an initial value.
initialize
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/tvar.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/tvar.rb
Apache-2.0
def atomically
  raise ArgumentError.new('no block given') unless block_given?

  # Get the current transaction
  transaction = Transaction::current

  # Are we not already in a transaction (not nested)?
  if transaction.nil?
    # New transaction
    begin
      # Retry loop
      loop do
        # Create a new transaction
        transaction = Transaction.new
        Transaction::current = transaction

        # Run the block, aborting on exceptions
        begin
          result = yield
        rescue Transaction::AbortError => e
          transaction.abort
          result = Transaction::ABORTED
        rescue Transaction::LeaveError => e
          transaction.abort
          break result
        rescue => e
          transaction.abort
          raise e
        end

        # If we can commit, break out of the loop
        if result != Transaction::ABORTED
          if transaction.commit
            break result
          end
        end
      end
    ensure
      # Clear the current transaction
      Transaction::current = nil
    end
  else
    # Nested transaction - flatten it and just run the block
    yield
  end
end
Run a block that reads and writes `TVar`s as a single atomic transaction. With respect to the value of `TVar` objects, the transaction is atomic, in that it either happens or it does not, consistent, in that the `TVar` objects involved will never enter an illegal state, and isolated, in that transactions never interfere with each other. You may recognise these properties from database transactions.
There are some very important and unusual semantics that you must be aware of:
* Most importantly, the block that you pass to atomically may be executed more than once. In most cases your code should be free of side-effects, except via TVar.
* If an exception escapes an atomically block it will abort the transaction.
* It is undefined behaviour to use callcc or Fiber with atomically.
* If you create a new thread within an atomically, it will not be part of the transaction. Creating a thread counts as a side-effect.
Transactions within transactions are flattened to a single transaction.
@example
  a = TVar.new(100_000)
  b = TVar.new(100)
  Concurrent::atomically do
    a.value -= 10
    b.value += 10
  end
atomically
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/tvar.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/tvar.rb
Apache-2.0
def abort_transaction
  raise Transaction::AbortError.new
end
Abort a currently running transaction - see `Concurrent::atomically`.
abort_transaction
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/tvar.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/tvar.rb
Apache-2.0
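Illustrative usage sketch for the TVar/atomically rows above (added for context, not part of the dataset; assumes the documented Concurrent::TVar API):
require 'concurrent'

a = Concurrent::TVar.new(100_000)
b = Concurrent::TVar.new(100)

# Transfer 10 between the two TVars; both writes commit together or not at all.
Concurrent::atomically do
  a.value -= 10
  b.value += 10
end

puts a.value  #=> 99990
puts b.value  #=> 110
# Inside a transaction, Concurrent::abort_transaction rolls back and retries the block.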
def compare_and_set(expected_val, new_val, expected_mark, new_mark)
  # Memoize a valid reference to the current AtomicReference for
  # later comparison.
  current = reference
  curr_val, curr_mark = current

  # Ensure that the expected marks match.
  return false unless expected_mark == curr_mark

  if expected_val.is_a? Numeric
    # If the object is a numeric, we need to ensure we are comparing
    # the numerical values
    return false unless expected_val == curr_val
  else
    # Otherwise, we need to ensure we are comparing the object identity.
    # Theoretically, this could be incorrect if a user monkey-patched
    # `Object#equal?`, but they should know that they are playing with
    # fire at that point.
    return false unless expected_val.equal? curr_val
  end

  prospect = immutable_array(new_val, new_mark)

  compare_and_set_reference current, prospect
end
Atomically sets the value and mark to the given updated value and mark given both:
- the current value == the expected value &&
- the current mark == the expected mark
@param [Object] expected_val the expected value @param [Object] new_val the new value @param [Boolean] expected_mark the expected mark @param [Boolean] new_mark the new mark @return [Boolean] `true` if successful. A `false` return indicates that the actual value was not equal to the expected value or the actual mark was not equal to the expected mark
compare_and_set
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/atomic_markable_reference.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/atomic_markable_reference.rb
Apache-2.0
def set(new_val, new_mark)
  self.reference = immutable_array(new_val, new_mark)
end
_Unconditionally_ sets both the reference and the mark to the given new values. @param [Object] new_val the new value @param [Boolean] new_mark the new mark @return [Array] both the new value and the new mark
set
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/atomic_markable_reference.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/atomic_markable_reference.rb
Apache-2.0
def update
  loop do
    old_val, old_mark = reference
    new_val, new_mark = yield old_val, old_mark

    if compare_and_set old_val, new_val, old_mark, new_mark
      return immutable_array(new_val, new_mark)
    end
  end
end
Pass the current value and marked state to the given block, replacing it with the block's results. May retry if the value changes during the block's execution. @yield [Object] Calculate a new value and marked state for the atomic reference using given (old) value and (old) marked @yieldparam [Object] old_val the starting value of the atomic reference @yieldparam [Boolean] old_mark the starting state of marked @return [Array] the new value and new mark
update
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/atomic_markable_reference.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/atomic_markable_reference.rb
Apache-2.0
def try_update!
  old_val, old_mark = reference
  new_val, new_mark = yield old_val, old_mark

  unless compare_and_set old_val, new_val, old_mark, new_mark
    fail ::Concurrent::ConcurrentUpdateError,
         'AtomicMarkableReference: Update failed due to race condition.',
         'Note: If you would like to guarantee an update, please use ' +
         'the `AtomicMarkableReference#update` method.'
  end

  immutable_array(new_val, new_mark)
end
Pass the current value to the given block, replacing it with the block's result. Raise an exception if the update fails. @yield [Object] Calculate a new value and marked state for the atomic reference using given (old) value and (old) marked @yieldparam [Object] old_val the starting value of the atomic reference @yieldparam [Boolean] old_mark the starting state of marked @return [Array] the new value and marked state @raise [Concurrent::ConcurrentUpdateError] if the update fails
try_update!
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/atomic_markable_reference.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/atomic_markable_reference.rb
Apache-2.0
def try_update
  old_val, old_mark = reference
  new_val, new_mark = yield old_val, old_mark

  return unless compare_and_set old_val, new_val, old_mark, new_mark

  immutable_array(new_val, new_mark)
end
Pass the current value to the given block, replacing it with the block's result. Simply return nil if update fails. @yield [Object] Calculate a new value and marked state for the atomic reference using given (old) value and (old) marked @yieldparam [Object] old_val the starting value of the atomic reference @yieldparam [Boolean] old_mark the starting state of marked @return [Array] the new value and marked state, or nil if the update failed
try_update
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/atomic_markable_reference.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/atomic_markable_reference.rb
Apache-2.0
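Illustrative usage sketch for the AtomicMarkableReference rows above (added for context, not part of the dataset; method names assumed from the public concurrent-ruby API):
require 'concurrent'

ref = Concurrent::AtomicMarkableReference.new(0, false)  # value 0, mark false

ref.compare_and_set(0, 1, false, true)   #=> true: value and mark both matched
ref.value                                #=> 1
ref.marked?                              #=> true

# Block form: retried until the compare-and-set succeeds.
ref.update { |val, mark| [val + 1, !mark] }

# Single-shot form: returns nil if another thread won the race.
ref.try_update { |val, mark| [val + 1, mark] }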
def initialize(parties, &block)
  Utility::NativeInteger.ensure_integer_and_bounds parties
  Utility::NativeInteger.ensure_positive_and_no_zero parties

  super(&nil)
  synchronize { ns_initialize parties, &block }
end
Create a new `CyclicBarrier` that waits for `parties` threads @param [Fixnum] parties the number of parties @yield an optional block that will be executed after the last thread arrives and before the others are released @raise [ArgumentError] if `parties` is not an integer or is less than zero
initialize
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/cyclic_barrier.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/cyclic_barrier.rb
Apache-2.0
def parties
  synchronize { @parties }
end
@return [Fixnum] the number of threads needed to pass the barrier
parties
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/cyclic_barrier.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/cyclic_barrier.rb
Apache-2.0
def number_waiting
  synchronize { @number_waiting }
end
@return [Fixnum] the number of threads currently waiting on the barrier
number_waiting
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/cyclic_barrier.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/cyclic_barrier.rb
Apache-2.0
def wait(timeout = nil)
  synchronize do
    return false unless @generation.status == :waiting

    @number_waiting += 1

    if @number_waiting == @parties
      @action.call if @action
      ns_generation_done @generation, :fulfilled
      true
    else
      generation = @generation
      if ns_wait_until(timeout) { generation.status != :waiting }
        generation.status == :fulfilled
      else
        ns_generation_done generation, :broken, false
        false
      end
    end
  end
end
Blocks on the barrier until the number of waiting threads is equal to `parties` or until `timeout` is reached or `reset` is called If a block has been passed to the constructor, it will be executed once by the last arrived thread before releasing the others @param [Fixnum] timeout the number of seconds to wait for the counter or `nil` to block indefinitely @return [Boolean] `true` if the `count` reaches zero else false on `timeout` or on `reset` or if the barrier is broken
wait
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/cyclic_barrier.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/cyclic_barrier.rb
Apache-2.0
def reset
  synchronize { ns_generation_done @generation, :reset }
end
Resets the barrier to its initial state. If there is at least one waiting thread, it will be woken up, the `wait` method will return false and the barrier will be broken. If the barrier is broken, this method restores it to the original state. @return [nil]
reset
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/cyclic_barrier.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/cyclic_barrier.rb
Apache-2.0
def broken?
  synchronize { @generation.status != :waiting }
end
A barrier can be broken when:
- a thread called the `reset` method while at least one other thread was waiting
- at least one thread timed out on the `wait` method
A broken barrier can be restored using `reset`, but it's safer to create a new one. @return [Boolean] true if the barrier is broken otherwise false
broken?
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/cyclic_barrier.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/cyclic_barrier.rb
Apache-2.0
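Illustrative usage sketch for the CyclicBarrier rows above (added for context, not part of the dataset):
require 'concurrent'

barrier = Concurrent::CyclicBarrier.new(3) { puts 'all three parties arrived' }

threads = 3.times.map do |i|
  Thread.new do
    sleep(rand * 0.1)          # simulate some work
    barrier.wait               # blocks until all three parties have arrived
    puts "thread #{i} released"
  end
end

threads.each(&:join)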
def initialize
  super
  synchronize { ns_initialize }
end
Creates a new `Event` in the unset state. Threads calling `#wait` on the `Event` will block.
initialize
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/event.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/event.rb
Apache-2.0
def set?
  synchronize { @set }
end
Is the object in the set state? @return [Boolean] indicating whether or not the `Event` has been set
set?
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/event.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/event.rb
Apache-2.0
def set
  synchronize { ns_set }
end
Trigger the event, setting the state to `set` and releasing all threads waiting on the event. Has no effect if the `Event` has already been set. @return [Boolean] should always return `true`
set
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/event.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/event.rb
Apache-2.0
def reset
  synchronize do
    if @set
      @set = false
      @iteration += 1
    end
    true
  end
end
Reset a previously set event back to the `unset` state. Has no effect if the `Event` has not yet been set. @return [Boolean] should always return `true`
reset
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/event.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/event.rb
Apache-2.0
def wait(timeout = nil)
  synchronize do
    unless @set
      iteration = @iteration
      ns_wait_until(timeout) { iteration < @iteration || @set }
    else
      true
    end
  end
end
Wait a given number of seconds for the `Event` to be set by another thread. Will wait forever when no `timeout` value is given. Returns immediately if the `Event` has already been set. @return [Boolean] true if the `Event` was set before timeout else false
wait
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/event.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/event.rb
Apache-2.0
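Illustrative usage sketch for the Event rows above (added for context, not part of the dataset):
require 'concurrent'

event = Concurrent::Event.new

waiter = Thread.new do
  event.wait(5)          # block up to 5 seconds for the event to be set
  puts 'event was set'
end

event.set                # wake every thread waiting on the event
waiter.join

event.reset              # back to the unset state, ready for reuse
event.set?               #=> false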
def initialize(default = nil, &default_block)
  if default && block_given?
    raise ArgumentError, "Cannot use both value and block as default value"
  end

  if block_given?
    @default_block = default_block
    @default = nil
  else
    @default_block = nil
    @default = default
  end

  @index = LOCALS.next_index(self)
end
Creates a fiber local variable. @param [Object] default the default value when otherwise unset @param [Proc] default_block Optional block that gets called to obtain the default value for each fiber
initialize
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/fiber_local_var.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/fiber_local_var.rb
Apache-2.0
def value
  LOCALS.fetch(@index) { default }
end
Returns the value in the current fiber's copy of this fiber-local variable. @return [Object] the current value
value
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/fiber_local_var.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/fiber_local_var.rb
Apache-2.0
def bind(value)
  if block_given?
    old_value = self.value
    self.value = value
    begin
      yield
    ensure
      self.value = old_value
    end
  end
end
Bind the given value to fiber local storage during execution of the given block. @param [Object] value the value to bind @yield the operation to be performed with the bound variable @return [Object] the value
bind
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/fiber_local_var.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/fiber_local_var.rb
Apache-2.0
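Illustrative usage sketch for the FiberLocalVar rows above (added for context, not part of the dataset; assumes the public Concurrent::FiberLocalVar API introduced in 1.2):
require 'concurrent'

v = Concurrent::FiberLocalVar.new(14)    # 14 is the per-fiber default

v.value                                  #=> 14 in this fiber
v.value = 2

Fiber.new { v.value }.resume             #=> 14 (each fiber gets its own copy)

# Temporarily rebind the value for the duration of a block.
v.bind(99) { v.value }                   #=> 99
v.value                                  #=> 2 (restored afterwards)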
def local_finalizer(index)
  proc do
    free_index(index)
  end
end
When the local goes out of scope, clean up that slot across all locals currently assigned.
local_finalizer
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/locals.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/locals.rb
Apache-2.0
def thread_fiber_finalizer(array_object_id)
  proc do
    weak_synchronize do
      @all_arrays.delete(array_object_id)
    end
  end
end
When a thread/fiber goes out of scope, remove the array from @all_arrays.
thread_fiber_finalizer
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/locals.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/locals.rb
Apache-2.0
def locals
  raise NotImplementedError
end
Returns the locals for the current scope, or nil if none exist.
locals
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/locals.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/locals.rb
Apache-2.0
def locals!
  raise NotImplementedError
end
Returns the locals for the current scope, creating them if necessary.
locals!
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/locals.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/locals.rb
Apache-2.0
def initialize
  super()
  @Counter   = AtomicFixnum.new(0) # single integer which represents lock state
  @ReadLock  = Synchronization::Lock.new
  @WriteLock = Synchronization::Lock.new
end
Implementation notes:
* A goal is to make the uncontended path for both readers/writers lock-free.
* Only if there is reader-writer or writer-writer contention, should locks be used.
* Internal state is represented by a single integer ("counter"), and updated using atomic compare-and-swap operations.
* When the counter is 0, the lock is free.
* Each reader increments the counter by 1 when acquiring a read lock (and decrements by 1 when releasing the read lock).
* The counter is increased by (1 << 15) for each writer waiting to acquire the write lock, and by (1 << 29) if the write lock is taken.
Create a new `ReadWriteLock` in the unlocked state.
initialize
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/read_write_lock.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/read_write_lock.rb
Apache-2.0
def release_read_lock
  while true
    c = @Counter.value
    if @Counter.compare_and_set(c, c-1)
      # If one or more writers were waiting, and we were the last reader, wake a writer up
      if waiting_writer?(c) && running_readers(c) == 1
        @WriteLock.signal
      end
      break
    end
  end
  true
end
Release a previously acquired read lock. @return [Boolean] true if the lock is successfully released
release_read_lock
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/read_write_lock.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/read_write_lock.rb
Apache-2.0
def release_write_lock
  return true unless running_writer?
  c = @Counter.update { |counter| counter - RUNNING_WRITER }
  @ReadLock.broadcast
  @WriteLock.signal if waiting_writers(c) > 0
  true
end
Release a previously acquired write lock. @return [Boolean] true if the lock is successfully released
release_write_lock
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/read_write_lock.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/read_write_lock.rb
Apache-2.0
def write_locked?
  @Counter.value >= RUNNING_WRITER
end
Queries if the write lock is held by any thread. @return [Boolean] true if the write lock is held else false
write_locked?
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/read_write_lock.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/read_write_lock.rb
Apache-2.0
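Illustrative usage sketch for the ReadWriteLock rows above (added for context, not part of the dataset):
require 'concurrent'

lock = Concurrent::ReadWriteLock.new
data = { count: 0 }

readers = 4.times.map do
  Thread.new do
    lock.with_read_lock { data[:count] }        # many readers may hold the lock at once
  end
end

writer = Thread.new do
  lock.with_write_lock { data[:count] += 1 }    # writers get exclusive access
end

(readers + [writer]).each(&:join)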
def initialize
  super()
  @Counter    = AtomicFixnum.new(0)       # single integer which represents lock state
  @ReadQueue  = Synchronization::Lock.new # used to queue waiting readers
  @WriteQueue = Synchronization::Lock.new # used to queue waiting writers
  @HeldCount  = LockLocalVar.new(0)       # indicates # of R & W locks held by this thread
end
Create a new `ReentrantReadWriteLock` in the unlocked state.
initialize
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/reentrant_read_write_lock.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/reentrant_read_write_lock.rb
Apache-2.0
def try_read_lock
  if (held = @HeldCount.value) > 0
    if held & READ_LOCK_MASK == 0
      # If we hold a write lock, but not a read lock...
      @Counter.update { |c| c + 1 }
    end
    @HeldCount.value = held + 1
    return true
  else
    c = @Counter.value
    if !waiting_or_running_writer?(c) && @Counter.compare_and_set(c, c+1)
      @HeldCount.value = held + 1
      return true
    end
  end
  false
end
Try to acquire a read lock and return true if we succeed. If it cannot be acquired immediately, return false. @return [Boolean] true if the lock is successfully acquired
try_read_lock
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/reentrant_read_write_lock.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/reentrant_read_write_lock.rb
Apache-2.0
def release_read_lock
  held = @HeldCount.value = @HeldCount.value - 1
  rlocks_held = held & READ_LOCK_MASK
  if rlocks_held == 0
    c = @Counter.update { |counter| counter - 1 }
    # If one or more writers were waiting, and we were the last reader, wake a writer up
    if waiting_or_running_writer?(c) && running_readers(c) == 0
      @WriteQueue.signal
    end
  elsif rlocks_held == READ_LOCK_MASK
    raise IllegalOperationError, "Cannot release a read lock which is not held"
  end
  true
end
Release a previously acquired read lock. @return [Boolean] true if the lock is successfully released
release_read_lock
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/reentrant_read_write_lock.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/reentrant_read_write_lock.rb
Apache-2.0
def try_write_lock
  if (held = @HeldCount.value) >= WRITE_LOCK_HELD
    @HeldCount.value = held + WRITE_LOCK_HELD
    return true
  else
    c = @Counter.value
    if !waiting_or_running_writer?(c) &&
       running_readers(c) == held &&
       @Counter.compare_and_set(c, c+RUNNING_WRITER)
      @HeldCount.value = held + WRITE_LOCK_HELD
      return true
    end
  end
  false
end
Try to acquire a write lock and return true if we succeed. If it cannot be acquired immediately, return false. @return [Boolean] true if the lock is successfully acquired
try_write_lock
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/reentrant_read_write_lock.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/reentrant_read_write_lock.rb
Apache-2.0
def release_write_lock
  held = @HeldCount.value = @HeldCount.value - WRITE_LOCK_HELD
  wlocks_held = held & WRITE_LOCK_MASK
  if wlocks_held == 0
    c = @Counter.update { |counter| counter - RUNNING_WRITER }
    @ReadQueue.broadcast
    @WriteQueue.signal if waiting_writers(c) > 0
  elsif wlocks_held == WRITE_LOCK_MASK
    raise IllegalOperationError, "Cannot release a write lock which is not held"
  end
  true
end
Release a previously acquired write lock. @return [Boolean] true if the lock is successfully released
release_write_lock
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/reentrant_read_write_lock.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/reentrant_read_write_lock.rb
Apache-2.0
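Illustrative usage sketch for the ReentrantReadWriteLock rows above (added for context, not part of the dataset; method names assumed from the public concurrent-ruby API):
require 'concurrent'

lock = Concurrent::ReentrantReadWriteLock.new

# The same thread may acquire the read lock more than once...
lock.acquire_read_lock
lock.acquire_read_lock
lock.release_read_lock
lock.release_read_lock

# ...and the non-blocking variants report success without waiting.
if lock.try_write_lock
  begin
    # exclusive section
  ensure
    lock.release_write_lock
  end
end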
def initialize(default = nil, &default_block)
  if default && block_given?
    raise ArgumentError, "Cannot use both value and block as default value"
  end

  if block_given?
    @default_block = default_block
    @default = nil
  else
    @default_block = nil
    @default = default
  end

  @index = LOCALS.next_index(self)
end
Creates a thread local variable. @param [Object] default the default value when otherwise unset @param [Proc] default_block Optional block that gets called to obtain the default value for each thread
initialize
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/thread_local_var.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/thread_local_var.rb
Apache-2.0
def value
  LOCALS.fetch(@index) { default }
end
Returns the value in the current thread's copy of this thread-local variable. @return [Object] the current value
value
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/thread_local_var.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/thread_local_var.rb
Apache-2.0
def bind(value)
  if block_given?
    old_value = self.value
    self.value = value
    begin
      yield
    ensure
      self.value = old_value
    end
  end
end
Bind the given value to thread local storage during execution of the given block. @param [Object] value the value to bind @yield the operation to be performed with the bound variable @return [Object] the value
bind
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/thread_local_var.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/atomic/thread_local_var.rb
Apache-2.0
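Illustrative usage sketch for the ThreadLocalVar rows above (added for context, not part of the dataset):
require 'concurrent'

v = Concurrent::ThreadLocalVar.new(14)   # 14 is the per-thread default

v.value = 2
Thread.new { v.value }.value             #=> 14 (each thread sees its own copy)

# Temporarily rebind within a block; the old value is restored on exit.
v.bind(3) { v.value }                    #=> 3
v.value                                  #=> 2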
def notify_observers(*args, &block)
  observers = duplicate_observers
  notify_to(observers, *args, &block)
  self
end
Notifies all registered observers with optional args @param [Object] args arguments to be passed to each observer @return [CopyOnWriteObserverSet] self
notify_observers
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/copy_on_notify_observer_set.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/copy_on_notify_observer_set.rb
Apache-2.0
def notify_and_delete_observers(*args, &block)
  observers = duplicate_and_clear_observers
  notify_to(observers, *args, &block)
  self
end
Notifies all registered observers with optional args and deletes them. @param [Object] args arguments to be passed to each observer @return [CopyOnWriteObserverSet] self
notify_and_delete_observers
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/copy_on_notify_observer_set.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/copy_on_notify_observer_set.rb
Apache-2.0
def notify_observers(*args, &block)
  notify_to(observers, *args, &block)
  self
end
Notifies all registered observers with optional args @param [Object] args arguments to be passed to each observer @return [CopyOnWriteObserverSet] self
notify_observers
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/copy_on_write_observer_set.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/copy_on_write_observer_set.rb
Apache-2.0
def notify_and_delete_observers(*args, &block)
  old = clear_observers_and_return_old
  notify_to(old, *args, &block)
  self
end
Notifies all registered observers with optional args and deletes them. @param [Object] args arguments to be passed to each observer @return [CopyOnWriteObserverSet] self
notify_and_delete_observers
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/copy_on_write_observer_set.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/copy_on_write_observer_set.rb
Apache-2.0
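Illustrative sketch for the observer-set rows above (added for context, not part of the dataset). These are internal collection classes normally reached through Concurrent::Concern::Observable; the block form of add_observer is assumed from the gem's source:
require 'concurrent'

observers = Concurrent::Collection::CopyOnWriteObserverSet.new

# Register a block observer; it receives the arguments passed to notify.
observers.add_observer { |time, result| puts "finished at #{time}: #{result}" }

# Notify every registered observer with the given arguments.
observers.notify_observers(Time.now, 42)

# Notify once more, then drop all observers.
observers.notify_and_delete_observers(Time.now, 43)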
def empty?(head = head())
  head.equal? EMPTY
end
@param [Node] head @return [true, false]
empty?
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/lock_free_stack.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/lock_free_stack.rb
Apache-2.0
def compare_and_push(head, value)
  compare_and_set_head head, Node[value, head]
end
@param [Node] head @param [Object] value @return [true, false]
compare_and_push
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/lock_free_stack.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/lock_free_stack.rb
Apache-2.0
def compare_and_pop(head)
  compare_and_set_head head, head.next_node
end
@param [Node] head @return [true, false]
compare_and_pop
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/lock_free_stack.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/lock_free_stack.rb
Apache-2.0
def compare_and_clear(head)
  compare_and_set_head head, EMPTY
end
@param [Node] head @return [true, false]
compare_and_clear
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/lock_free_stack.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/lock_free_stack.rb
Apache-2.0
def clear_if(head)
  compare_and_set_head head, EMPTY
end
@param [Node] head @return [true, false]
clear_if
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/lock_free_stack.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/lock_free_stack.rb
Apache-2.0
def replace_if(head, new_head)
  compare_and_set_head head, new_head
end
@param [Node] head @param [Node] new_head @return [true, false]
replace_if
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/lock_free_stack.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/lock_free_stack.rb
Apache-2.0
def clear_each(&block)
  while true
    current_head = head
    return self if current_head == EMPTY
    if compare_and_set_head current_head, EMPTY
      each current_head, &block
      return self
    end
  end
end
@return [self] @yield over the cleared stack @yieldparam [Object] value
clear_each
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/lock_free_stack.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/lock_free_stack.rb
Apache-2.0
def swap(x, y)
  temp = @queue[x]
  @queue[x] = @queue[y]
  @queue[y] = temp
end
Exchange the values at the given indexes within the internal array. @param [Integer] x the first index to swap @param [Integer] y the second index to swap @!visibility private
swap
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/ruby_non_concurrent_priority_queue.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/ruby_non_concurrent_priority_queue.rb
Apache-2.0
def ordered?(x, y)
  (@queue[x] <=> @queue[y]) == @comparator
end
Are the items at the given indexes ordered based on the priority order specified at construction? @param [Integer] x the first index from which to retrieve a comparable value @param [Integer] y the second index from which to retrieve a comparable value @return [Boolean] true if the two elements are in the correct priority order else false @!visibility private
ordered?
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/ruby_non_concurrent_priority_queue.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/ruby_non_concurrent_priority_queue.rb
Apache-2.0
def sink(k)
  success = false

  while (j = (2 * k)) <= @length do
    j += 1 if j < @length && !ordered?(j, j+1)
    break if ordered?(k, j)
    swap(k, j)
    success = true
    k = j
  end

  success
end
Percolate down to maintain heap invariant. @param [Integer] k the index at which to start the percolation @!visibility private
sink
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/ruby_non_concurrent_priority_queue.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/ruby_non_concurrent_priority_queue.rb
Apache-2.0
def swim(k)
  success = false

  while k > 1 && !ordered?(k/2, k) do
    swap(k, k/2)
    k = k/2
    success = true
  end

  success
end
Percolate up to maintain heap invariant. @param [Integer] k the index at which to start the percolation @!visibility private
swim
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/ruby_non_concurrent_priority_queue.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/ruby_non_concurrent_priority_queue.rb
Apache-2.0
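Illustrative sketch (added for context, not part of the dataset): the swap/ordered?/sink/swim helpers above maintain a binary heap inside Concurrent::Collection::RubyNonConcurrentPriorityQueue. Assuming its push/pop/peek API and the :order option, usage looks roughly like:
require 'concurrent'

queue = Concurrent::Collection::RubyNonConcurrentPriorityQueue.new(order: :max)

queue.push(3)
queue.push(10)
queue.push(7)

queue.peek    #=> 10 (the highest-priority item stays at the heap root via swim/sink)
queue.pop     #=> 10
queue.pop     #=> 7
queue.length  #=> 1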
def try_await_lock(table, i)
  if table && i >= 0 && i < table.size # bounds check, TODO: why are we bounds checking?
    spins = SPIN_LOCK_ATTEMPTS
    randomizer = base_randomizer = Concurrent::ThreadSafe::Util::XorShiftRandom.get
    while equal?(table.volatile_get(i)) && self.class.locked_hash?(my_hash = hash)
      if spins >= 0
        if (randomizer = (randomizer >> 1)).even? # spin at random
          if (spins -= 1) == 0
            Thread.pass # yield before blocking
          else
            randomizer = base_randomizer = Concurrent::ThreadSafe::Util::XorShiftRandom.xorshift(base_randomizer) if randomizer.zero?
          end
        end
      elsif cas_hash(my_hash, my_hash | WAITING)
        force_acquire_lock(table, i)
        break
      end
    end
  end
end
Spins a while if +LOCKED+ bit set and this node is the first of its bin, and then sets +WAITING+ bits on hash field and blocks (once) if they are still set. It is OK for this method to return even if lock is not available upon exit, which enables these simple single-wait mechanics. The corresponding signalling operation is performed within callers: Upon detecting that +WAITING+ has been set when unlocking lock (via a failed CAS from non-waiting +LOCKED+ state), unlockers acquire the +cheap_synchronize+ lock and perform a +cheap_broadcast+.
try_await_lock
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
Apache-2.0
def clear
  return self unless current_table = table
  current_table_size = current_table.size
  deleted_count = i = 0
  while i < current_table_size
    if !(node = current_table.volatile_get(i))
      i += 1
    elsif (node_hash = node.hash) == MOVED
      current_table      = node.key
      current_table_size = current_table.size
    elsif Node.locked_hash?(node_hash)
      decrement_size(deleted_count) # opportunistically update count
      deleted_count = 0
      node.try_await_lock(current_table, i)
    else
      current_table.try_lock_via_hash(i, node, node_hash) do
        begin
          deleted_count += 1 if NULL != node.value # recheck under lock
          node.value = nil
        end while node = node.next
        current_table.volatile_set(i, nil)
        i += 1
      end
    end
  end
  decrement_size(deleted_count)
  self
end
Implementation for clear. Steps through each bin, removing all nodes.
clear
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
Apache-2.0
def internal_replace(key, expected_old_value = NULL, &block)
  hash          = key_hash(key)
  current_table = table
  while current_table
    if !(node = current_table.volatile_get(i = current_table.hash_to_index(hash)))
      break
    elsif (node_hash = node.hash) == MOVED
      current_table = node.key
    elsif (node_hash & HASH_BITS) != hash && !node.next # precheck
      break # rules out possible existence
    elsif Node.locked_hash?(node_hash)
      try_await_lock(current_table, i, node)
    else
      succeeded, old_value = attempt_internal_replace(key, expected_old_value, hash, current_table, i, node, node_hash, &block)
      return old_value if succeeded
    end
  end
  NULL
end
Internal versions of the insertion methods, each a little more complicated than the last. All have the same basic structure:
1. If table uninitialized, create
2. If bin empty, try to CAS new node
3. If bin stale, use new table
4. Lock and validate; if valid, scan and add or update
The others interweave other checks and/or alternative actions:
* Plain +get_and_set+ checks for and performs resize after insertion.
* compute_if_absent prescans for mapping without lock (and fails to add if present), which also makes pre-emptive resize checks worthwhile.
Someday when details settle down a bit more, it might be worth some factoring to reduce sprawl.
internal_replace
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
Apache-2.0
def table_size_for(entry_count)
  size = 2
  size <<= 1 while size < entry_count
  size
end
Returns a power of two table size for the given desired capacity.
table_size_for
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
Apache-2.0
def initialize_table
  until current_table ||= table
    if (size_ctrl = size_control) == NOW_RESIZING
      Thread.pass # lost initialization race; just spin
    else
      try_in_resize_lock(current_table, size_ctrl) do
        initial_size = size_ctrl > 0 ? size_ctrl : DEFAULT_CAPACITY
        current_table = self.table = Table.new(initial_size)
        initial_size - (initial_size >> 2) # 75% load factor
      end
    end
  end
  current_table
end
Initializes table, using the size recorded in +size_control+.
initialize_table
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
Apache-2.0
def check_for_resize
  while (current_table = table) &&
        MAX_CAPACITY > (table_size = current_table.size) &&
        NOW_RESIZING != (size_ctrl = size_control) &&
        size_ctrl < @counter.sum
    try_in_resize_lock(current_table, size_ctrl) do
      self.table = rebuild(current_table)
      (table_size << 1) - (table_size >> 1) # 75% load factor
    end
  end
end
If table is too small and not already resizing, creates next table and transfers bins. Rechecks occupancy after a transfer to see if another resize is already needed because resizings are lagging additions.
check_for_resize
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
Apache-2.0
def rebuild(table)
  old_table_size = table.size
  new_table      = table.next_in_size_table
  # puts "#{old_table_size} -> #{new_table.size}"
  forwarder      = Node.new(MOVED, new_table, NULL)
  rev_forwarder  = nil
  locked_indexes = nil # holds bins to revisit; nil until needed
  locked_arr_idx = 0
  bin            = old_table_size - 1
  i              = bin
  while true
    if !(node = table.volatile_get(i)) # no lock needed (or available) if bin >= 0, because we're not popping values from locked_indexes until we've run through the whole table
      redo unless (bin >= 0 ? table.cas(i, nil, forwarder) : lock_and_clean_up_reverse_forwarders(table, old_table_size, new_table, i, forwarder))
    elsif Node.locked_hash?(node_hash = node.hash)
      locked_indexes ||= ::Array.new
      if bin < 0 && locked_arr_idx > 0
        locked_arr_idx -= 1
        i, locked_indexes[locked_arr_idx] = locked_indexes[locked_arr_idx], i # swap with another bin
        redo
      end
      if bin < 0 || locked_indexes.size >= TRANSFER_BUFFER_SIZE
        node.try_await_lock(table, i) # no other options -- block
        redo
      end
      rev_forwarder ||= Node.new(MOVED, table, NULL)
      redo unless table.volatile_get(i) == node && node.locked? # recheck before adding to list
      locked_indexes << i
      new_table.volatile_set(i, rev_forwarder)
      new_table.volatile_set(i + old_table_size, rev_forwarder)
    else
      redo unless split_old_bin(table, new_table, i, node, node_hash, forwarder)
    end

    if bin > 0
      i = (bin -= 1)
    elsif locked_indexes && !locked_indexes.empty?
      bin = -1
      i = locked_indexes.pop
      locked_arr_idx = locked_indexes.size - 1
    else
      return new_table
    end
  end
end
Moves and/or copies the nodes in each bin to new table. See above for explanation.
rebuild
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
Apache-2.0
def split_old_bin(table, new_table, i, node, node_hash, forwarder)
  table.try_lock_via_hash(i, node, node_hash) do
    split_bin(new_table, i, node, node_hash)
    table.volatile_set(i, forwarder)
  end
end
Splits a normal bin with list headed by e into lo and hi parts; installs in given table.
split_old_bin
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/atomic_reference_map_backend.rb
Apache-2.0
def initialize(options = nil, &default_proc)
  validate_options_hash!(options) if options.kind_of?(::Hash)
  set_backend(default_proc)
  @default_proc = default_proc
end
WARNING: all public methods of the class must operate on the @backend directly without calling each other. This is important because of the SynchronizedMapBackend which uses a non-reentrant mutex for performance reasons.
initialize
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/non_concurrent_map_backend.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/collection/map/non_concurrent_map_backend.rb
Apache-2.0
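Illustrative sketch (added for context, not part of the dataset): the map backends in the rows above sit behind the public Concurrent::Map class, whose documented API looks roughly like this:
require 'concurrent'

map = Concurrent::Map.new

map[:a] = 1
map.put_if_absent(:b, 2)          #=> nil (key was absent, so 2 is stored)
map.compute_if_absent(:c) { 3 }   #=> 3
map.fetch(:missing, 0)            #=> 0
map.each_pair { |k, v| puts "#{k}=#{v}" }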
def value
  synchronize { apply_deref_options(@value) }
end
NOTE: This module is going away in 2.0. In the mean time we need it to play nicely with the synchronization layer. This means that the including class SHOULD be synchronized and it MUST implement a `#synchronize` method. Not doing so will lead to runtime errors. Return the value this object represents after applying the options specified by the `#set_deref_options` method. @return [Object] the current value of the object
value
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/dereferenceable.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/dereferenceable.rb
Apache-2.0
def set_deref_options(opts = {})
  synchronize { ns_set_deref_options(opts) }
end
@!macro dereferenceable_set_deref_options Set the options which define the operations #value performs before returning data to the caller (dereferencing). @note Most classes that include this module will call `#set_deref_options` from within the constructor, thus allowing these options to be set at object creation. @param [Hash] opts the options defining dereference behavior. @option opts [String] :dup_on_deref (false) call `#dup` before returning the data @option opts [String] :freeze_on_deref (false) call `#freeze` before returning the data @option opts [String] :copy_on_deref (nil) call the given `Proc` passing the internal value and returning the value returned from the proc
set_deref_options
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/dereferenceable.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/dereferenceable.rb
Apache-2.0
def log(level, progname, message = nil, &block)
  logger = if defined?(@logger) && @logger
             @logger
           else
             Concurrent.global_logger
           end
  logger.call level, progname, message, &block
rescue => error
  $stderr.puts "`Concurrent.configuration.logger` failed to log #{[level, progname, message, block]}\n" +
               "#{error.message} (#{error.class})\n#{error.backtrace.join "\n"}"
end
Logs through {Concurrent.global_logger}; it can be overridden by setting @logger. @param [Integer] level one of Logger::Severity constants @param [String] progname e.g. a path of an Actor @param [String, nil] message when nil block is used to generate the message @yieldreturn [String] a message
log
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/logging.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/logging.rb
Apache-2.0
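Illustrative sketch (added for context, not part of the dataset): the log method above delegates to a global logger, which I believe can be replaced with any callable taking (level, progname, message = nil, &block), e.g. one backed by the stdlib Logger:
require 'concurrent'
require 'logger'

stdlib = Logger.new($stderr)

Concurrent.global_logger = lambda do |level, progname, message = nil, &block|
  # Logger#add takes (severity, message, progname) and may use the block for the message.
  stdlib.add(level, message, progname, &block)
end

# Classes that include Concurrent::Concern::Logging then log roughly like this:
#   log(Logger::WARN, self.class.to_s, 'something noteworthy happened')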
def fulfilled?
  state == :fulfilled
end
NOTE: The Dereferenceable module is going away in 2.0. In the mean time we need it to play nicely with the synchronization layer. This means that the including class SHOULD be synchronized and it MUST implement a `#synchronize` method. Not doing so will lead to runtime errors. Has the obligation been fulfilled? @return [Boolean]
fulfilled?
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
def rejected?
  state == :rejected
end
Has the obligation been rejected? @return [Boolean]
rejected?
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
def pending?
  state == :pending
end
Is obligation completion still pending? @return [Boolean]
pending?
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
def unscheduled?
  state == :unscheduled
end
Is the obligation still unscheduled? @return [Boolean]
unscheduled?
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
def complete?
  [:fulfilled, :rejected].include? state
end
Has the obligation completed processing? @return [Boolean]
complete?
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
def incomplete?
  !complete?
end
Is the obligation still awaiting completion of processing? @return [Boolean]
incomplete?
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
def value(timeout = nil)
  wait timeout
  deref
end
The current value of the obligation. Will be `nil` while the state is pending or the operation has been rejected. @param [Numeric] timeout the maximum time in seconds to wait. @return [Object] see Dereferenceable#deref
value
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
def wait(timeout = nil)
  event.wait(timeout) if timeout != 0 && incomplete?
  self
end
Wait until obligation is complete or the timeout has been reached. @param [Numeric] timeout the maximum time in seconds to wait. @return [Obligation] self
wait
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
def wait!(timeout = nil)
  wait(timeout).tap { raise self if rejected? }
end
Wait until obligation is complete or the timeout is reached. Will re-raise any exceptions raised during processing (but will not raise an exception on timeout). @param [Numeric] timeout the maximum time in seconds to wait. @return [Obligation] self @raise [Exception] raises the reason when rejected
wait!
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
def value!(timeout = nil)
  wait(timeout)
  if rejected?
    raise self
  else
    deref
  end
end
The current value of the obligation. Will be `nil` while the state is pending or the operation has been rejected. Will re-raise any exceptions raised during processing (but will not raise an exception on timeout). @param [Numeric] timeout the maximum time in seconds to wait. @return [Object] see Dereferenceable#deref @raise [Exception] raises the reason when rejected
value!
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
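A sketch of the difference between #value and #value! on a rejected obligation, again assuming Concurrent::IVar; the error class and message are made up for illustration:

require 'concurrent'

ivar = Concurrent::IVar.new
ivar.fail(ArgumentError.new('bad input'))

ivar.value    #=> nil, the rejection reason is swallowed
ivar.reason   #=> #<ArgumentError: bad input>

begin
  ivar.value!       # re-raises the stored reason
rescue ArgumentError => e
  puts e.message    # "bad input"
end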
def state
  synchronize { @state }
end
The current state of the obligation. @return [Symbol] the current state
state
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
def reason
  synchronize { @reason }
end
If an exception was raised during processing this will return the exception object. Will return `nil` when the state is pending or if the obligation has been successfully fulfilled. @return [Exception] the exception raised during processing or `nil`
reason
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
def exception(*args)
  raise 'obligation is not rejected' unless rejected?
  reason.exception(*args)
end
@example allows a rejected Obligation to be raised rejected_ivar = Ivar.new.fail raise rejected_ivar
exception
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
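The @example in the docstring above, expanded into a runnable sketch (the RuntimeError is a hypothetical stand-in for any rejection reason):

require 'concurrent'

rejected_ivar = Concurrent::IVar.new.fail(RuntimeError.new('boom'))

begin
  raise rejected_ivar   # Kernel#raise calls #exception, which returns the stored reason
rescue RuntimeError => e
  puts e.message        # "boom"
end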
def compare_and_set_state(next_state, *expected_current)
  synchronize do
    if expected_current.include? @state
      @state = next_state
      true
    else
      false
    end
  end
end
Atomic compare and set operation. State is set to `next_state` only if `current state == expected_current`. @param [Symbol] next_state @param [Symbol] expected_current @return [Boolean] true if the state was changed, false otherwise @!visibility private
compare_and_set_state
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
def if_state(*expected_states)
  synchronize do
    raise ArgumentError.new('no block given') unless block_given?
    if expected_states.include? @state
      yield
    else
      false
    end
  end
end
Executes the block within mutex if current state is included in expected_states @return block value if executed, false otherwise @!visibility private
if_state
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
def ns_check_state?(expected)
  @state == expected
end
Is the obligation currently in the expected state? @param [Symbol] expected the state to check against @return [Boolean] true if in the expected state else false @!visibility private
ns_check_state?
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/obligation.rb
Apache-2.0
def add_observer(observer = nil, func = :update, &block)
  observers.add_observer(observer, func, &block)
end
@!macro observable_add_observer Adds an observer to this set. If a block is passed, the observer will be created by this method and no other params should be passed. @param [Object] observer the observer to add @param [Symbol] func the function to call on the observer during notification. Default is :update @return [Object] the added observer
add_observer
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/observable.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/observable.rb
Apache-2.0
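A hedged sketch of the block form of #add_observer, assuming Concurrent::IVar as the observable; IVar observers are notified with (time, value, reason) once the value is set:

require 'concurrent'

ivar = Concurrent::IVar.new

# Block form: the observer object is created by add_observer itself.
ivar.add_observer do |time, value, reason|
  puts "fulfilled with #{value.inspect} at #{time}" if reason.nil?
end

ivar.set(:done)   # completing the IVar triggers the notification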
def with_observer(observer = nil, func = :update, &block)
  add_observer(observer, func, &block)
  self
end
As `#add_observer` but can be used for chaining. @param [Object] observer the observer to add @param [Symbol] func the function to call on the observer during notification. @return [Observable] self
with_observer
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/observable.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/observable.rb
Apache-2.0
def delete_observers
  observers.delete_observers
  self
end
@!macro observable_delete_observers Remove all observers associated with this object. @return [Observable] self
delete_observers
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/observable.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/concern/observable.rb
Apache-2.0
def fallback_action(*args)
  case fallback_policy
  when :abort
    lambda { raise RejectedExecutionError }
  when :discard
    lambda { false }
  when :caller_runs
    lambda {
      begin
        yield(*args)
      rescue => ex
        # let it fail
        log DEBUG, ex
      end
      true
    }
  else
    lambda { fail "Unknown fallback policy #{fallback_policy}" }
  end
end
Returns an action which executes the `fallback_policy` once the queue size reaches `max_queue`. The reason for the indirection of an action is so that the work can be deferred outside of synchronization. @param [Array] args the arguments to the task which is being handled. @!visibility private
fallback_action
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/abstract_executor_service.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/abstract_executor_service.rb
Apache-2.0
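A sketch of how a fallback policy is chosen in practice; the pool sizes here are arbitrary and only meant to force the over-capacity path that #fallback_action handles:

require 'concurrent'

# :discard corresponds to the `lambda { false }` branch above: over-capacity posts are dropped.
pool = Concurrent::ThreadPoolExecutor.new(
  min_threads:     1,
  max_threads:     1,
  max_queue:       1,
  fallback_policy: :discard
)

10.times { pool.post { sleep 0.1 } }   # rejected posts return false instead of raising
pool.shutdown
pool.wait_for_termination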
def initialize(opts = {})
  defaults  = { idletime: DEFAULT_THREAD_IDLETIMEOUT }
  overrides = { min_threads: 0,
                max_threads: DEFAULT_MAX_POOL_SIZE,
                max_queue: DEFAULT_MAX_QUEUE_SIZE }
  super(defaults.merge(opts).merge(overrides))
end
@!macro cached_thread_pool_method_initialize Create a new thread pool. @param [Hash] opts the options defining pool behavior. @option opts [Symbol] :fallback_policy (`:abort`) the fallback policy @raise [ArgumentError] if `fallback_policy` is not a known policy @see http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Executors.html#newCachedThreadPool--
initialize
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/cached_thread_pool.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/cached_thread_pool.rb
Apache-2.0
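A minimal CachedThreadPool usage sketch (not from the source repo); the Future values are illustrative:

require 'concurrent'

pool = Concurrent::CachedThreadPool.new   # grows on demand, reclaims idle threads

futures = 5.times.map do |i|
  Concurrent::Future.execute(executor: pool) { i * i }
end
futures.map(&:value)   #=> [0, 1, 4, 9, 16]

pool.shutdown
pool.wait_for_termination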
def initialize(num_threads, opts = {})
  raise ArgumentError.new('number of threads must be greater than zero') if num_threads.to_i < 1
  defaults  = { max_queue: DEFAULT_MAX_QUEUE_SIZE,
                idletime: DEFAULT_THREAD_IDLETIMEOUT }
  overrides = { min_threads: num_threads,
                max_threads: num_threads }
  super(defaults.merge(opts).merge(overrides))
end
@!macro fixed_thread_pool_method_initialize Create a new thread pool. @param [Integer] num_threads the number of threads to allocate @param [Hash] opts the options defining pool behavior. @option opts [Symbol] :fallback_policy (`:abort`) the fallback policy @raise [ArgumentError] if `num_threads` is less than or equal to zero @raise [ArgumentError] if `fallback_policy` is not a known policy @see http://docs.oracle.com/javase/8/docs/api/java/util/concurrent/Executors.html#newFixedThreadPool-int-
initialize
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/fixed_thread_pool.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/fixed_thread_pool.rb
Apache-2.0
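A corresponding FixedThreadPool sketch; the thread count and the :caller_runs policy are arbitrary example choices:

require 'concurrent'

pool = Concurrent::FixedThreadPool.new(4, fallback_policy: :caller_runs)

100.times { |i| pool.post(i) { |n| Math.sqrt(n) } }  # extra args are passed through to the block
pool.shutdown
pool.wait_for_termination(10)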
def ns_assign_worker(*args, &task)
  # keep growing if the pool is not at the minimum yet
  worker, _ = (@ready.pop if @pool.size >= @min_length) || ns_add_busy_worker
  if worker
    worker << [task, args]
    true
  else
    false
  end
rescue ThreadError
  # Raised when the operating system refuses to create the new thread
  return false
end
Tries to assign the task to a worker, taking one from @ready or creating a new one. @return [true, false] whether the task was assigned to a worker @!visibility private
ns_assign_worker
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb
Apache-2.0
def ns_enqueue(*args, &task)
  return false if @synchronous

  if !ns_limited_queue? || @queue.size < @max_queue
    @queue << [task, args]
    true
  else
    false
  end
end
Tries to enqueue the task. @return [true, false] whether the task was enqueued @!visibility private
ns_enqueue
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb
Apache-2.0
def ns_add_busy_worker
  return if @pool.size >= @max_length

  @workers_counter += 1
  @pool << (worker = Worker.new(self, @workers_counter))
  @largest_length = @pool.length if @pool.length > @largest_length
  worker
end
Creates a new worker which must receive work to do after it is added. @return [nil, Worker] nil if max capacity is reached @!visibility private
ns_add_busy_worker
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb
Apache-2.0
def ns_ready_worker(worker, last_message, success = true)
  task_and_args = @queue.shift
  if task_and_args
    worker << task_and_args
  else
    # stop workers when !running?, do not return them to @ready
    if running?
      raise unless last_message
      @ready.push([worker, last_message])
    else
      worker.stop
    end
  end
end
Handles a ready worker, giving it a new job or returning it to @ready. @!visibility private
ns_ready_worker
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb
Apache-2.0
def ns_remove_busy_worker(worker)
  @pool.delete(worker)
  stopped_event.set if @pool.empty? && !running?
  true
end
Removes a worker which is not tracked in @ready. @!visibility private
ns_remove_busy_worker
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb
Apache-2.0
def ns_prune_pool
  now = Concurrent.monotonic_time
  stopped_workers = 0
  while !@ready.empty? && (@pool.size - stopped_workers > @min_length)
    worker, last_message = @ready.first
    if now - last_message > self.idletime
      stopped_workers += 1
      @ready.shift
      worker << :stop
    else
      break
    end
  end

  @next_gc_time = Concurrent.monotonic_time + @gc_interval
end
Prunes idle workers: starting with the oldest ready worker, stops any that have been idle longer than idletime, keeping the pool at or above min_length. @!visibility private
ns_prune_pool
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb
Apache-2.0
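A hedged sketch of the public knob behind this pruning logic: the :idletime option of ThreadPoolExecutor feeds the `now - last_message > self.idletime` check above (the numbers are arbitrary):

require 'concurrent'

pool = Concurrent::ThreadPoolExecutor.new(
  min_threads: 2,
  max_threads: 16,
  idletime:    60    # seconds a worker may sit idle before it becomes a pruning candidate
)

pool.post { :warm_up }
pool.length          # current number of worker threads; pruning trims it back toward min_threads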
def work(job)
  job.call
ensure
  synchronize do
    job = @stash.shift || (@being_executed = false)
  end

  # TODO maybe be able to tell caching pool to just enqueue this job, because the current one end at the end of this block
  call_job job if job
end
Ensures the next stashed job, if any, is executed.
work
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/serialized_execution.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/serialized_execution.rb
Apache-2.0
def post(delay, *args, &task)
  raise ArgumentError.new('no block given') unless block_given?
  return false unless running?
  opts = { executor: @task_executor,
           args: args,
           timer_set: self }
  task = ScheduledTask.execute(delay, opts, &task) # may raise exception
  task.unscheduled? ? false : task
end
Post a task to be executed after a given delay (in seconds). If the delay is less than 1/100th of a second the task will be immediately posted to the executor. @param [Float] delay the number of seconds to wait for before executing the task. @param [Array<Object>] args the arguments passed to the task on execution. @yield the task to be performed. @return [Concurrent::ScheduledTask, false] IVar representing the task if the post is successful; false after shutdown. @raise [ArgumentError] if the intended execution time is not in the future. @raise [ArgumentError] if no block is given.
post
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/timer_set.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/timer_set.rb
Apache-2.0
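A minimal TimerSet usage sketch for the #post method above; the delay and block are placeholders:

require 'concurrent'

timers = Concurrent::TimerSet.new

task = timers.post(0.5) { Time.now }   # returns a Concurrent::ScheduledTask
task.pending?    #=> true until the delay elapses
task.value(1)    # block for up to one second for the result

timers.shutdown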
def ns_initialize(opts)
  @queue          = Collection::NonConcurrentPriorityQueue.new(order: :min)
  @task_executor  = Options.executor_from_options(opts) || Concurrent.global_io_executor
  @timer_executor = SingleThreadExecutor.new
  @condition      = Event.new
  @ruby_pid       = $$ # detects if Ruby has forked
end
Initialize the object. @param [Hash] opts the options to create the object with. @!visibility private
ns_initialize
ruby
collabnix/kubelabs
.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/timer_set.rb
https://github.com/collabnix/kubelabs/blob/master/.bundles_cache/ruby/2.6.0/gems/concurrent-ruby-1.2.2/lib/concurrent-ruby/concurrent/executor/timer_set.rb
Apache-2.0