Dataset columns (string lengths are min-max):
  code        string (26 - 124k)
  docstring   string (23 - 125k)
  func_name   string (1 - 98)
  language    string (1 class)
  repo        string (5 - 53)
  path        string (7 - 151)
  url         string (50 - 211)
  license     string (7 classes)
def autocmd(event, options={}, &block)
  register_handler(:autocmd, event, options, block)
end
Register an +nvim+ autocmd. See +:h autocmd+.

@param event [String] The event type. See +:h autocmd-events+.
@param options [Hash] Autocmd options.
@param &block [Proc, nil] The body of the autocmd.

@option options [String] :pattern The buffer name pattern. See +:h autocmd-patterns+.
@option options [String] :eval An +nvim+ expression. Gets evaluated and passed as an argument to the block.
autocmd
ruby
neovim/neovim-ruby
lib/neovim/plugin/dsl.rb
https://github.com/neovim/neovim-ruby/blob/master/lib/neovim/plugin/dsl.rb
MIT
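For orientation, a minimal plugin sketch using this DSL, based on the gem's documented Neovim.plugin interface (the event, pattern, and echoed message are illustrative):

Neovim.plugin do |plug|
  # Echo a message whenever a Ruby buffer is entered
  plug.autocmd(:BufEnter, pattern: "*.rb") do |nvim|
    nvim.command("echom 'Entered a Ruby buffer'")
  end
end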
def script_host!
  @plugin.script_host = true
end
Mark this plugin as the Ruby script host started by nvim. Should only be used in +Neovim::RubyProvider+.
script_host!
ruby
neovim/neovim-ruby
lib/neovim/plugin/dsl.rb
https://github.com/neovim/neovim-ruby/blob/master/lib/neovim/plugin/dsl.rb
MIT
def setup(&block)
  @plugin.setup_blocks << block
end
Register a setup block to run once before the host starts. The block should expect to receive a single argument, a +Neovim::Client+. This is used for bootstrapping the Ruby provider, and is not meant to be used publicly in plugin definitions.
setup
ruby
neovim/neovim-ruby
lib/neovim/plugin/dsl.rb
https://github.com/neovim/neovim-ruby/blob/master/lib/neovim/plugin/dsl.rb
MIT
def rpc(name, &block)
  @plugin.handlers.push(Handler.unqualified(name, block))
end
Directly define a synchronous RPC call without a namespace. This is used for exposing Ruby provider calls, and is not meant to be used publicly in plugin definitions.
rpc
ruby
neovim/neovim-ruby
lib/neovim/plugin/dsl.rb
https://github.com/neovim/neovim-ruby/blob/master/lib/neovim/plugin/dsl.rb
MIT
def register_handler(name, &block)
  @handlers[name.to_s] = ::Proc.new do |client, *args|
    block.call(client, *args)
  end
end
Define an RPC handler for use in remote modules. @param name [String] The handler name. @param block [Proc] The body of the handler.
register_handler
ruby
neovim/neovim-ruby
lib/neovim/remote_module/dsl.rb
https://github.com/neovim/neovim-ruby/blob/master/lib/neovim/remote_module/dsl.rb
MIT
def from_path(path_string_or_path)
  path = Pathname.new(path_string_or_path)
  return false unless path.exist?

  path_dir, file = path.relative_path_from(Pathname.new(domains_path)).split
  backwards_path = path_dir.to_s.split('/').push(file.basename('.txt').to_s)
  domain = backwards_path.reverse.join('.')
  Swot.new(domain)
end
Returns a new Swot instance for the domain file at the given path. Note that the path must be absolute. Returns a Swot instance, or false if no domain is found at the given path.
from_path
ruby
leereilly/swot
lib/swot.rb
https://github.com/leereilly/swot/blob/master/lib/swot.rb
MIT
def valid?
  if domain.nil?
    false
  elsif BLACKLIST.any? { |d| to_s =~ /(\A|\.)#{Regexp.escape(d)}\z/ }
    false
  elsif ACADEMIC_TLDS.include?(domain.tld)
    true
  elsif academic_domain?
    true
  else
    false
  end
end
Figure out whether an email or domain belongs to an academic institution. Returns true if the domain name belongs to an academic institution; false otherwise.
valid?
ruby
leereilly/swot
lib/swot.rb
https://github.com/leereilly/swot/blob/master/lib/swot.rb
MIT
def institution_name
  @institution_name ||= File.read(file_path, :mode => "rb", :external_encoding => "UTF-8").strip rescue nil
end
Figure out the institution name based on the email address/domain. Returns a string with the institution name; nil if nothing is found.
institution_name
ruby
leereilly/swot
lib/swot.rb
https://github.com/leereilly/swot/blob/master/lib/swot.rb
MIT
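A brief usage sketch of the two methods above (the address and lookup result are illustrative, assuming the gem's bundled domain list):

swot = Swot.new("lreilly@stanford.edu")
swot.valid?            # => true
swot.institution_name  # => "Stanford University"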
def academic_domain?
  @academic_domain ||= File.exist?(file_path)
end
Figure out if a domain name is a known academic institution. Returns true if the domain name belongs to a known academic institution; false otherwise.
academic_domain?
ruby
leereilly/swot
lib/swot.rb
https://github.com/leereilly/swot/blob/master/lib/swot.rb
MIT
def each_domain
  return to_enum(:each_domain) unless block_given?

  Pathname.glob(Pathname.new(Swot.domains_path).join('**/*.txt')) do |path|
    yield(Swot.from_path(path))
  end
end
Yields a Swot instance for every domain under lib/domains. Does not include blacklisted or ACADEMIC_TLDS domains. Returns an Enumerator of Swot instances if no block is given.
each_domain
ruby
leereilly/swot
lib/swot/collection_methods.rb
https://github.com/leereilly/swot/blob/master/lib/swot/collection_methods.rb
MIT
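Both calling styles, sketched (assuming the collection methods are exposed on the Swot module):

Swot.each_domain { |swot| puts swot }  # block form
Swot.each_domain.count                 # Enumerator form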
def assert_not(object, message = nil)
  message ||= "Expected #{mu_pp(object)} to be nil or false"
  assert !object, message
end
Extracted from Rails ActiveSupport::Testing::Assertions.

Assert that an expression is not truthy. Passes if <tt>object</tt> is +nil+ or +false+. "Truthy" means "considered true in a conditional" like <tt>if foo</tt>.

  assert_not nil    # => true
  assert_not false  # => true
  assert_not 'foo'  # => Expected "foo" to be nil or false

An error message can be specified.

  assert_not foo, 'foo should be false'
assert_not
ruby
leereilly/swot
test/helper.rb
https://github.com/leereilly/swot/blob/master/test/helper.rb
MIT
def set_lens(focal_length, frame_height = 24.0)
  @fov = 2.0 * Math.rad_to_deg(::Math.atan(frame_height / (focal_length * 2.0)))
  update_projection_matrix
end
Uses focal length (in mm) to estimate and set the FOV. A 35mm (full-frame) camera is assumed if the frame height is not specified. Formula based on http://www.bobatkins.com/photography/technical/field_of_view.html
set_lens
ruby
danini-the-panini/mittsu
lib/mittsu/cameras/perspective_camera.rb
https://github.com/danini-the-panini/mittsu/blob/master/lib/mittsu/cameras/perspective_camera.rb
MIT
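A worked instance of the formula above (the numbers are plain arithmetic, not from the source):

camera.set_lens(35.0)
# fov = 2 * atan(24.0 / (2 * 35.0)) in degrees
#     = 2 * atan(0.3429) ≈ 37.8 degrees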
def set_view_offset(full_width, full_height, x, y, width, height)
  @full_width = full_width
  @full_height = full_height
  @x = x
  @y = y
  @width = width
  @height = height
  update_projection_matrix
end
Sets an offset in a larger frustum. This is useful for multi-window or multi-monitor/multi-machine setups.

For example, if you have 3x2 monitors and each monitor is 1920x1080 and the monitors are in grid like this:

  +---+---+---+
  | A | B | C |
  +---+---+---+
  | D | E | F |
  +---+---+---+

then for each monitor you would call it like this:

  var w = 1920;
  var h = 1080;
  var fullWidth = w * 3;
  var fullHeight = h * 2;

  --A-- camera.setOffset( fullWidth, fullHeight, w * 0, h * 0, w, h );
  --B-- camera.setOffset( fullWidth, fullHeight, w * 1, h * 0, w, h );
  --C-- camera.setOffset( fullWidth, fullHeight, w * 2, h * 0, w, h );
  --D-- camera.setOffset( fullWidth, fullHeight, w * 0, h * 1, w, h );
  --E-- camera.setOffset( fullWidth, fullHeight, w * 1, h * 1, w, h );
  --F-- camera.setOffset( fullWidth, fullHeight, w * 2, h * 1, w, h );

Note there is no reason monitors have to be the same size or in a grid.
set_view_offset
ruby
danini-the-panini/mittsu
lib/mittsu/cameras/perspective_camera.rb
https://github.com/danini-the-panini/mittsu/blob/master/lib/mittsu/cameras/perspective_camera.rb
MIT
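The docstring examples above appear to be carried over from THREE.js and use the JavaScript name setOffset; with this Ruby port, the equivalent call for monitor A of the grid would look like this sketch:

w, h = 1920, 1080
camera.set_view_offset(w * 3, h * 2, w * 0, h * 0, w, h) # monitor A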
def compute_offsets(size = 65535)
  # WebGL limits type of index buffer values to 16-bit.
  # TODO: check what the limit is for OpenGL, as we aren't using WebGL here
  indices = self[:index].array
  vertices = self[:position].array

  faces_count = indices.length / 3

  sorted_indices = Array.new(indices.length) # 16-bit (Uint16Array in THREE.js)

  index_ptr = 0
  vertex_ptr = 0

  offsets = [DrawCall.new(0, 0, 0)]
  offset = offsets.first

  duplicated_vertices = 0
  new_vertex_maps = 0

  face_vertices = Array.new(6)                # (Int32Array)
  vertex_map = Array.new(vertices.length)     # (Int32Array)
  rev_vertex_map = Array.new(vertices.length) # (Int32Array)

  vertices.each_index do |j|
    vertex_map[j] = -1
    rev_vertex_map[j] = -1
  end

  # Traverse every face and reorder vertices in the proper offsets of 65k.
  # We can have more than 65k entries in the index buffer per offset, but only reference 65k values.
  faces_count.times do |findex|
    new_vertex_maps = 0

    3.times do |vo|
      vid = indices[findex * 3 + vo]
      if vertex_map[vid] == -1
        # unmapped vertex
        face_vertices[vo * 2] = vid
        face_vertices[vo * 2 + 1] = -1
        new_vertex_maps += 1
      elsif vertex_map[vid] < offset.index
        # reused vertex from a previous block (duplicate)
        face_vertices[vo * 2] = vid
        face_vertices[vo * 2 + 1] = -1
        duplicated_vertices += 1
      else
        # reused vertex in the current block
        face_vertices[vo * 2] = vid
        face_vertices[vo * 2 + 1] = vertex_map[vid]
      end
    end

    face_max = vertex_ptr + new_vertex_maps
    if face_max > offset.index + size
      new_offset = DrawCall.new(index_ptr, 0, vertex_ptr)
      offsets << new_offset
      offset = new_offset

      # Re-evaluate reused vertices in light of the new offset.
      (0...6).step(2) do |v|
        new_vid = face_vertices[v + 1]
        face_vertices[v + 1] = -1 if new_vid > -1 && new_vid < offset.index
      end
    end

    # Reindex the face.
    (0...6).step(2) do |v|
      vid = face_vertices[v]
      new_vid = face_vertices[v + 1]

      if new_vid == -1
        new_vid = vertex_ptr
        vertex_ptr += 1
      end

      vertex_map[vid] = new_vid
      rev_vertex_map[new_vid] = vid
      sorted_indices[index_ptr] = new_vid - offset.index # XXX: overflows at 16bit
      index_ptr += 1
      offset.count += 1
    end
  end

  # Move all attribute values to map to the new computed indices,
  # and expand the vertex stack to match our new vertex_ptr.
  self.reorder_buffers(sorted_indices, rev_vertex_map, vertex_ptr)
  @draw_calls = offsets

  offsets
end
Compute the draw offsets for large models by chunking the index buffer into chunks of 65k addressable vertices. This method effectively rewrites the index buffer and remaps all attributes to match the new indices. WARNING: This method will also expand the vertex count to prevent triangles from sprawling across draw offsets. size - Defaults to 65535, but allows for larger or smaller chunks.
compute_offsets
ruby
danini-the-panini/mittsu
lib/mittsu/core/buffer_geometry.rb
https://github.com/danini-the-panini/mittsu/blob/master/lib/mittsu/core/buffer_geometry.rb
MIT
def reorder_buffers(index_buffer, index_map, vertex_count)
  # Create a copy of all attributes for reordering
  sorted_attributes = {}
  @attributes.each do |key, attribute|
    next if key == :index
    source_array = attribute.array
    sorted_attributes[key] = source_array.class.new(attribute.item_size * vertex_count)
  end

  # Move attribute positions based on the new index map
  vertex_count.times do |new_vid|
    vid = index_map[new_vid]
    @attributes.each do |key, attribute|
      next if key == :index
      attr_array = attribute.array
      attr_size = attribute.item_size
      sorted_attr = sorted_attributes[key]
      attr_size.times do |k|
        sorted_attr[new_vid * attr_size + k] = attr_array[vid * attr_size + k]
      end
    end
  end

  # Carry the new sorted buffers locally
  @attributes[:index].array = index_buffer
  @attributes.each do |key, attribute|
    next if key == :index
    attribute.array = sorted_attributes[key]
    attribute.num_items = attribute.item_size * vertex_count
  end
end
reorder_buffers: Reorder attributes based on a new index buffer and index map. index_buffer - Uint16Array of the new ordered indices. index_map - Int32Array where the position is the new vertex ID and the value is the old vertex ID for each vertex. vertex_count - Total number of vertices considered in this reordering (in case you want to grow the vertex stack).
reorder_buffers
ruby
danini-the-panini/mittsu
lib/mittsu/core/buffer_geometry.rb
https://github.com/danini-the-panini/mittsu/blob/master/lib/mittsu/core/buffer_geometry.rb
MIT
def make(v1, v2, v3)
  face = Face3.new(v1.index, v2.index, v3.index, [v1.clone, v2.clone, v3.clone])
  @faces << face

  @centroid.copy(v1).add(v2).add(v3).divide_scalar(3)
  azi = azimuth(@centroid)

  @face_vertex_uvs[0] << [
    correct_uv(v1.uv, v1, azi),
    correct_uv(v2.uv, v2, azi),
    correct_uv(v3.uv, v3, azi)
  ]
end
Approximate a curved face with recursively sub-divided triangles.
make
ruby
danini-the-panini/mittsu
lib/mittsu/extras/geometries/polyhedron_geometry.rb
https://github.com/danini-the-panini/mittsu/blob/master/lib/mittsu/extras/geometries/polyhedron_geometry.rb
MIT
def subdivide(face, detail)
  cols = 2.0 ** detail
  a = prepare(@vertices[face.a])
  b = prepare(@vertices[face.b])
  c = prepare(@vertices[face.c])
  v = []

  # Construct all of the vertices for this subdivision.
  for i in 0..cols do
    v[i] = []

    aj = prepare(a.clone.lerp(c, i.to_f / cols.to_f))
    bj = prepare(b.clone.lerp(c, i.to_f / cols.to_f))

    rows = cols - i
    for j in 0..rows do
      v[i][j] = if j.zero? && i == cols
        aj
      else
        prepare(aj.clone.lerp(bj, j.to_f / rows.to_f))
      end
    end
  end

  # Construct all of the faces
  for i in 0...cols do
    for j in (0...(2 * (cols - i) - 1)) do
      k = j / 2

      if j.even?
        make(v[i][k + 1], v[i + 1][k], v[i][k])
      else
        make(v[i][k + 1], v[i + 1][k + 1], v[i + 1][k])
      end
    end
  end
end
Analytically subdivide a face to the required detail level.
subdivide
ruby
danini-the-panini/mittsu
lib/mittsu/extras/geometries/polyhedron_geometry.rb
https://github.com/danini-the-panini/mittsu/blob/master/lib/mittsu/extras/geometries/polyhedron_geometry.rb
MIT
def azimuth(vector)
  ::Math.atan2(vector.z, -vector.x)
end
Angle around the Y axis, counter-clockwise when looking from above.
azimuth
ruby
danini-the-panini/mittsu
lib/mittsu/extras/geometries/polyhedron_geometry.rb
https://github.com/danini-the-panini/mittsu/blob/master/lib/mittsu/extras/geometries/polyhedron_geometry.rb
MIT
def correct_uv(uv, vector, azimuth)
  return Vector2.new(uv.x - 1.0, uv.y) if azimuth < 0
  return Vector2.new(azimuth / 2.0 / ::Math::PI + 0.5, uv.y) if vector.x.zero? && vector.z.zero?
  uv.clone
end
Texture fixing helper. Spheres have some odd behaviours.
correct_uv
ruby
danini-the-panini/mittsu
lib/mittsu/extras/geometries/polyhedron_geometry.rb
https://github.com/danini-the-panini/mittsu/blob/master/lib/mittsu/extras/geometries/polyhedron_geometry.rb
MIT
def random()
  set(Random.new.rand, Random.new.rand, Random.new.rand)
  self
end
TODO: maybe accept three range values as arguments (range_x, range_y, range_z) for this method
random
ruby
danini-the-panini/mittsu
lib/mittsu/math/vector3.rb
https://github.com/danini-the-panini/mittsu/blob/master/lib/mittsu/math/vector3.rb
MIT
def child
  @child ||= Task.new(@buffer, self, depth + 1)
end
Note: task depth is not checked.
child
ruby
redis/hiredis-rb
lib/hiredis/ruby/reader.rb
https://github.com/redis/hiredis-rb/blob/master/lib/hiredis/ruby/reader.rb
BSD-3-Clause
def test_read_against_timeout_with_other_thread
  thread = Thread.new do
    sleep 0.1 while true
  end

  listen do |_|
    hiredis.connect("localhost", DEFAULT_PORT)
    hiredis.timeout = 10_000

    assert_raises Errno::EAGAIN do
      hiredis.read
    end
  end
ensure
  thread.kill
end
Test that the Hiredis thread is scheduled after some time while waiting for the descriptor to be readable.
test_read_against_timeout_with_other_thread
ruby
redis/hiredis-rb
test/connection_test.rb
https://github.com/redis/hiredis-rb/blob/master/test/connection_test.rb
BSD-3-Clause
def test_eagain_on_write_followed_by_remote_drain
  skip "`#flush` doesn't work on hi-redis 1.0. See https://github.com/redis/hiredis/issues/975"

  listen do |server|
    hiredis.connect("localhost", 6380)
    hiredis.timeout = 100_000

    # Find out buffer sizes
    sndbuf = sockopt(hiredis.sock, Socket::SO_SNDBUF)
    rcvbuf = sockopt(hiredis.sock, Socket::SO_RCVBUF)

    # This thread starts reading the server buffer after 50ms. This will
    # cause the local write to first return EAGAIN, wait for the socket to
    # become writable with select(2) and retry.
    begin
      thread = Thread.new do
        sleep(0.050)
        loop do
          server.read(1024)
        end
      end

      # Make request that fills both the remote receive buffer and the local
      # send buffer. This assumes that the size of the receive buffer on the
      # remote end is equal to our local receive buffer size.
      hiredis.write(["x" * rcvbuf])
      hiredis.write(["x" * sndbuf])
      hiredis.flush
      hiredis.disconnect
    ensure
      thread.kill
    end
  end
end
This does not have consistent outcome for different operating systems...

  def test_eagain_on_write
    listen do |server|
      hiredis.connect("localhost", 6380)
      hiredis.timeout = 100_000

      # Find out buffer sizes
      sndbuf = sockopt(hiredis.sock, Socket::SO_SNDBUF)
      rcvbuf = sockopt(hiredis.sock, Socket::SO_RCVBUF)

      # Make request that fills both the remote receive buffer and the local
      # send buffer. This assumes that the size of the receive buffer on the
      # remote end is equal to our local receive buffer size.
      assert_raises Errno::EAGAIN do
        hiredis.write(["x" * rcvbuf * 2])
        hiredis.write(["x" * sndbuf * 2])
        hiredis.flush
      end
    end
  end
test_eagain_on_write_followed_by_remote_drain
ruby
redis/hiredis-rb
test/connection_test.rb
https://github.com/redis/hiredis-rb/blob/master/test/connection_test.rb
BSD-3-Clause
def add_crumb(name, *args)
  options = args.extract_options!
  url = args.first
  raise ArgumentError, "Need more arguments" unless name or options[:record] or block_given?
  raise ArgumentError, "Cannot pass url and use block" if url && block_given?

  before_filter(options) do |instance|
    url = yield instance if block_given?
    url = instance.send url if url.is_a? Symbol

    if url.present?
      if url.kind_of? Array
        url.map! do |name|
          name.is_a?(Symbol) ? instance.instance_variable_get("@#{name}") : name
        end
      end

      if not url.kind_of? String
        url = instance.send :url_for, url
      end
    end

    # Get the return value of the name if it's a proc.
    name = name.call(instance) if name.is_a?(Proc)

    _record = instance.instance_variable_get("@#{name}") unless name.kind_of?(String)
    if _record and _record.respond_to? :to_param
      instance.add_crumb(_record.to_s, url || instance.url_for(_record), options)
    else
      instance.add_crumb(name, url, options)
    end

    # FIXME: url = instance.url_for(name) if name.respond_to?("to_param") && url.nil?
    # FIXME: Add ||= for the name, url above
  end
end
Add a crumb to the crumbs array.

  add_crumb("Home", "/")
  add_crumb(lambda { |instance| instance.business_name }, "/")
  add_crumb("Business") { |instance| instance.business_path }

Works like a before_filter, so +:only+ and +:except+ both work.
add_crumb
ruby
zachinglis/crummy
lib/crummy/action_controller.rb
https://github.com/zachinglis/crummy/blob/master/lib/crummy/action_controller.rb
MIT
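A minimal controller sketch using this filter (the class name, crumb labels, and route helper are illustrative):

class BusinessesController < ApplicationController
  add_crumb "Home", "/"
  add_crumb("Businesses") { |instance| instance.businesses_path }
end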
def add_crumb(name, url=nil, options={})
  crumbs.push [name, url, options]
end
Add a crumb to the crumbs array. add_crumb("Home", "/") add_crumb("Business") { |instance| instance.business_path }
add_crumb
ruby
zachinglis/crummy
lib/crummy/action_controller.rb
https://github.com/zachinglis/crummy/blob/master/lib/crummy/action_controller.rb
MIT
def crumbs
  get_or_set_ivar "@_crumbs", []
end
Lists the crumbs as an array
crumbs
ruby
zachinglis/crummy
lib/crummy/action_controller.rb
https://github.com/zachinglis/crummy/blob/master/lib/crummy/action_controller.rb
MIT
def crumbs
  @_crumbs ||= [] # Give me something to push to
end
List the crumbs as an array
crumbs
ruby
zachinglis/crummy
lib/crummy/action_view.rb
https://github.com/zachinglis/crummy/blob/master/lib/crummy/action_view.rb
MIT
def add_crumb(name, url=nil, options={})
  crumbs.push [name, url, options]
end
Add a crumb to the +crumbs+ array
add_crumb
ruby
zachinglis/crummy
lib/crummy/action_view.rb
https://github.com/zachinglis/crummy/blob/master/lib/crummy/action_view.rb
MIT
def render_crumbs(options = {})
  raise ArgumentError, "Renderer and block given" if options.has_key?(:renderer) && block_given?

  return yield(crumbs, options) if block_given?

  @_renderer ||= if options.has_key?(:renderer)
    options.delete(:renderer)
  else
    require 'crummy/standard_renderer'
    Crummy::StandardRenderer.new
  end

  @_renderer.render_crumbs(crumbs, options)
end
Render the list of crumbs using the configured renderer.
render_crumbs
ruby
zachinglis/crummy
lib/crummy/action_view.rb
https://github.com/zachinglis/crummy/blob/master/lib/crummy/action_view.rb
MIT
def render_crumbs(crumbs, options = {})
  options[:skip_if_blank] ||= Crummy.configuration.skip_if_blank
  return '' if options[:skip_if_blank] && crumbs.count < 1

  options[:format] ||= Crummy.configuration.format
  options[:separator] ||= Crummy.configuration.send(:"#{options[:format]}_separator")
  options[:right_separator] ||= Crummy.configuration.send(:"#{options[:format]}_right_separator")
  options[:links] ||= Crummy.configuration.links
  options[:first_class] ||= Crummy.configuration.first_class
  options[:last_class] ||= Crummy.configuration.last_class
  options[:microdata] ||= Crummy.configuration.microdata if options[:microdata].nil?
  options[:truncate] ||= Crummy.configuration.truncate if options[:truncate]
  options[:last_crumb_linked] = Crummy.configuration.last_crumb_linked if options[:last_crumb_linked].nil?
  options[:right_side] ||= Crummy.configuration.right_side

  last_hash = lambda do |o|
    k = o.map { |c| c.is_a?(Hash) ? (c.empty? ? nil : c) : nil }.compact
    k.empty? ? {} : k.last
  end

  local_global = lambda do |crumb, global_options, param_name|
    last_hash.call(crumb).has_key?(param_name.to_sym) ? last_hash.call(crumb)[param_name.to_sym] : global_options[param_name.to_sym]
  end

  case options[:format]
  when :html
    crumb_string = crumbs.map { |crumb|
      local_global.call(crumb, options, :right_side) ? nil : crumb_to_html(
        crumb,
        local_global.call(crumb, options, :links),
        local_global.call(crumb, options, :first_class),
        local_global.call(crumb, options, :last_class),
        (crumb == crumbs.first),
        (crumb == crumbs.last),
        local_global.call(crumb, options, :microdata),
        local_global.call(crumb, options, :last_crumb_linked),
        local_global.call(crumb, options, :truncate)
      )
    }.compact.join(options[:separator]).html_safe
    crumb_string
  when :html_list
    # Let's set values for special options of html_list format
    options[:li_class] ||= Crummy.configuration.li_class
    options[:ol_class] ||= Crummy.configuration.ol_class
    options[:ol_id] ||= Crummy.configuration.ol_id
    options[:ol_id] = nil if options[:ol_id].blank?

    crumb_string = crumbs.map { |crumb|
      local_global.call(crumb, options, :right_side) ? nil : crumb_to_html_list(
        crumb,
        local_global.call(crumb, options, :links),
        local_global.call(crumb, options, :li_class),
        local_global.call(crumb, options, :first_class),
        local_global.call(crumb, options, :last_class),
        (crumb == crumbs.first),
        (crumb == crumbs.find_all { |c| !last_hash.call(c).fetch(:right_side, false) }.compact.last),
        local_global.call(crumb, options, :microdata),
        local_global.call(crumb, options, :last_crumb_linked),
        local_global.call(crumb, options, :truncate),
        local_global.call(crumb, options, :separator)
      )
    }.compact.join.html_safe

    crumb_right_string = crumbs.reverse.map { |crumb|
      !local_global.call(crumb, options, :right_side) ? nil : crumb_to_html_list(
        crumb,
        local_global.call(crumb, options, :links),
        local_global.call(crumb, options, :li_right_class),
        local_global.call(crumb, options, :first_class),
        local_global.call(crumb, options, :last_class),
        (crumb == crumbs.first),
        (crumb == crumbs.find_all { |c| !local_global.call(c, options, :right_side) }.compact.last),
        local_global.call(crumb, options, :microdata),
        local_global.call(crumb, options, :last_crumb_linked),
        local_global.call(crumb, options, :truncate),
        local_global.call(crumb, options, :right_separator)
      )
    }.compact.join.html_safe

    crumb_string = content_tag(:ol, crumb_string + crumb_right_string, :class => options[:ol_class], :id => options[:ol_id])
    crumb_string
  when :xml
    crumbs.collect do |crumb|
      crumb_to_xml(
        crumb,
        local_global.call(crumb, options, :links),
        local_global.call(crumb, options, :separator),
        (crumb == crumbs.first),
        (crumb == crumbs.last)
      )
    end * ''
  else
    raise ArgumentError, "Unknown breadcrumb output format"
  end
end
Render the list of crumbs as either html or xml.

Takes 3 options:

  The output format. Can either be xml or html. Default :html
    :format => (:html|:xml)
  The separator text. It does not assume you want spaces on either side so you must specify. Default +&raquo;+ for :html and +crumb+ for xml
    :separator => string
  Render links in the output. Default +true+
    :link => boolean

Examples:

  render_crumbs                       #=> <a href="/">Home</a> &raquo; <a href="/businesses">Businesses</a>
  render_crumbs :separator => ' | '   #=> <a href="/">Home</a> | <a href="/businesses">Businesses</a>
  render_crumbs :format => :xml       #=> <crumb href="/">Home</crumb><crumb href="/businesses">Businesses</crumb>
  render_crumbs :format => :html_list #=> <ol class="" id=""><li class=""><a href="/">Home</a></li><li class=""><a href="/">Businesses</a></li></ol>

With :format => :html_list you can specify additional params: li_class, ol_class, ol_id.

The only argument is for the separator text. It does not assume you want spaces on either side so you must specify. Defaults to +&raquo;+

  render_crumbs(" . ") #=> <a href="/">Home</a> . <a href="/businesses">Businesses</a>
render_crumbs
ruby
zachinglis/crummy
lib/crummy/standard_renderer.rb
https://github.com/zachinglis/crummy/blob/master/lib/crummy/standard_renderer.rb
MIT
def invisible_captcha(honeypot = nil, scope = nil, options = {})
  @captcha_ocurrences = 0 unless defined?(@captcha_ocurrences)
  @captcha_ocurrences += 1

  if InvisibleCaptcha.timestamp_enabled || InvisibleCaptcha.spinner_enabled
    session[:invisible_captcha_timestamp] = Time.zone.now.iso8601
  end

  if InvisibleCaptcha.spinner_enabled && @captcha_ocurrences == 1
    session[:invisible_captcha_spinner] = InvisibleCaptcha.encode("#{session[:invisible_captcha_timestamp]}-#{request.remote_ip}")
  end

  build_invisible_captcha(honeypot, scope, options)
end
Builds the honeypot html.

@param honeypot [Symbol] name of honeypot, ie: subtitle => input name: subtitle
@param scope [Symbol] name of honeypot scope, ie: topic => input name: topic[subtitle]
@param options [Hash] html_options for input and invisible_captcha options

@return [String] the generated html
invisible_captcha
ruby
markets/invisible_captcha
lib/invisible_captcha/view_helpers.rb
https://github.com/markets/invisible_captcha/blob/master/lib/invisible_captcha/view_helpers.rb
MIT
def validate(record) hash = record.hash_from_file self.errors = {} if should_have_key?(:order_type, in: hash, as: Hash) if should_have_key?(:fields, in: hash[:order_type], as: Hash) fds = CustomFields::OrderTypeFieldDefSet.new(hash[:order_type][:fields]) errors.merge!(fds.errors) end should_have_key?(:code, in: hash[:order_type], as: String) should_have_key?(:name, in: hash[:order_type], as: String) end unless errors.empty? record.errors.add(:file, errors.values.flatten) end end
Checks record.hash_from_file 1. it should contain a hash 2. the hash should have :order_type key with hash value 3. the hash[:order_type] should have :fields key with hash value 4. the hash[:order_type] should have :code key with string value 5. CustomFields::FieldDefSet.new(hash) should be valid
validate
ruby
hydra-billing/homs
app/validators/order_type_file_validator.rb
https://github.com/hydra-billing/homs/blob/master/app/validators/order_type_file_validator.rb
Apache-2.0
def bp_running?(entity_code, entity_class, bp_codes) processes = active_process_instances(entity_code, entity_class) definitions = fetch_concurrently { process_definitions } definitions_ids = definitions.select { |d| bp_codes.include?(d.key) }.map(&:id) processes.select { |p| p['definitionId'].in?(definitions_ids) }.present? end
TODO: How to distinguish between running process instance and done TODO: Think of suspended process instances
bp_running?
ruby
hydra-billing/homs
hbw/app/models/hbw/common/adapter.rb
https://github.com/hydra-billing/homs/blob/master/hbw/app/models/hbw/common/adapter.rb
Apache-2.0
def fetch_response(method, url, params) responses.fetch(method) .fetch(url) .find { |el| el['params'] == Addressable::URI.unescape(params.to_query) } .fetch('response') end
# Structure of a camunda api mock file: <method>: <url>: - params: <body-1 or querystring-1> response: <response-1> - params: <body-2 or querystring-2> response: <response-2>
fetch_response
ruby
hydra-billing/homs
hbw/app/models/hbw/common/yml_api.rb
https://github.com/hydra-billing/homs/blob/master/hbw/app/models/hbw/common/yml_api.rb
Apache-2.0
def nest_subscriptions(subscriptions)
  parent_subscriptions = subscriptions.select do |subscription|
    subscription['parSubscriptionId'].nil?
  end

  parent_subscriptions.each do |subscription|
    subscription['childServices'] = subscriptions.select do |subs|
      subs['parSubscriptionId'] == subscription['subscriptionId']
    end
  end

  parent_subscriptions
end
Nest child subscriptions into childServices within parent ones.
nest_subscriptions
ruby
hydra-billing/homs
hbw/app/models/hbw/fields/services_table.rb
https://github.com/hydra-billing/homs/blob/master/hbw/app/models/hbw/fields/services_table.rb
Apache-2.0
def nest_services(services)
  parent_services = services.select do |service|
    service['parPriceLineId'].nil?
  end

  parent_services.each do |service|
    service['childServices'] = services.select do |serv|
      serv['parPriceLineId'] == service['priceLineId']
    end
  end

  parent_services
end
Nest child services into childServices within parent ones.
nest_services
ruby
hydra-billing/homs
hbw/app/models/hbw/fields/services_table.rb
https://github.com/hydra-billing/homs/blob/master/hbw/app/models/hbw/fields/services_table.rb
Apache-2.0
def plugin_enabled?(plugin_name)
  @config['plugins'].has_key? plugin_name
end
Returns true if the specified plugin is enabled, false otherwise.
plugin_enabled?
ruby
tumblr/jetpants
lib/jetpants.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants.rb
Apache-2.0
def app_credentials
  {user: @config['mysql_app_user'], pass: @config['mysql_app_password']}
end
Returns a hash containing :user => username string, :pass => password string for the MySQL application user, as found in Jetpants' configuration. Plugins may freely override this if there's a better way to obtain this password -- for example, if you already distribute an application configuration or credentials file to all of your servers.
app_credentials
ruby
tumblr/jetpants
lib/jetpants.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants.rb
Apache-2.0
def replication_credentials
  {user: @config['mysql_repl_user'], pass: @config['mysql_repl_password']}
end
Returns a hash containing :user => username string, :pass => password string for the MySQL replication user, as found in Jetpants' configuration. Plugins may freely override this if there's a better way to obtain this password -- for example, by parsing master.info on a specific slave in your topology. SEE ALSO: DB#replication_credentials, which only falls back to the global version when needed.
replication_credentials
ruby
tumblr/jetpants
lib/jetpants.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants.rb
Apache-2.0
def method_missing(name, *args, &block)
  if @config.has_key? name.to_s
    @config[name.to_s]
  elsif name.to_s[-1] == '=' && @config.has_key?(name.to_s[0..-2])
    var = name.to_s[0..-2]
    @config[var] = args[0]
  elsif @topology.respond_to? name
    @topology.send name, *args, &block
  else
    super
  end
end
Proxy missing top-level Jetpants methods to the configuration hash, or failing that, to the Topology singleton.
method_missing
ruby
tumblr/jetpants
lib/jetpants.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants.rb
Apache-2.0
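A sketch of what this proxying enables (the config key and pool name are illustrative, and the last line assumes the Topology singleton responds to the method):

Jetpants.ssh_keys                  # reads @config['ssh_keys']
Jetpants.ssh_keys = ['~/.ssh/id']  # writes @config['ssh_keys']
Jetpants.pool('user-shard-1')      # falls through to the Topology singleton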
def callback_priority(value)
  @callback_priority = value
end
Set the priority (higher = called first) for any subsequent callbacks defined in the current class.
callback_priority
ruby
tumblr/jetpants
lib/jetpants/callback.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/callback.rb
Apache-2.0
def method_missing(name, *args, &block)
  if @host.respond_to? name
    @host.send name, *args, &block
  else
    super
  end
end
##### Host methods ########################################################

Jetpants::DB delegates missing methods to its Jetpants::Host.
method_missing
ruby
tumblr/jetpants
lib/jetpants/db.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db.rb
Apache-2.0
def respond_to?(name, include_private=false)
  super || @host.respond_to?(name)
end
Alters respond_to? logic to account for delegation of missing methods to the instance's Host.
respond_to?
ruby
tumblr/jetpants
lib/jetpants/db.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db.rb
Apache-2.0
def same_host_as?(db)
  @ip == db.ip
end
Returns true if the supplied Jetpants::DB is on the same Jetpants::Host as self.
same_host_as?
ruby
tumblr/jetpants
lib/jetpants/db.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db.rb
Apache-2.0
def get_ssh_connection
  conn = nil
  attempts = 0

  5.times do |attempt|
    @lock.synchronize do
      if @connection_pool.count > 0
        conn = @connection_pool.shift
      end
    end

    unless conn
      params = {
        :paranoid => false,
        :user_known_hosts_file => '/dev/null',
        :timeout => 5,
      }
      params[:keys] = Jetpants.ssh_keys if Jetpants.ssh_keys
      params[:port] = Jetpants.ssh_port if Jetpants.ssh_port
      user = Jetpants.ssh_user

      begin
        @lock.synchronize do
          conn = Net::SSH.start(@ip, user, params)
        end
      rescue => ex
        output "Unable to SSH on attempt #{attempt + 1}: #{ex.to_s}"
        conn = nil
        next
      end
    end

    # Confirm that the connection works
    if conn
      begin
        result = conn.exec!('echo ping').strip
        raise "Unexpected result" unless result == 'ping'
        @available = true
        return conn
      rescue
        output "Discarding nonfunctional SSH connection"
        conn = nil
      end
    end
  end

  @available = false
  raise "Unable to obtain working SSH connection to #{self} after 5 attempts"
end
Returns a Net::SSH::Connection::Session for the host. Verifies that the connection is working before returning it.
get_ssh_connection
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def save_ssh_connection(conn)
  conn.exec! 'cd ~'
  @lock.synchronize do
    @connection_pool << conn
  end
rescue
  output "Discarding nonfunctional SSH connection"
end
Adds a Net::SSH::Connection::Session to a pool of idle persistent connections.
save_ssh_connection
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def ssh_cmd(cmd, attempts=3)
  attempts ||= 1
  conn = get_ssh_connection
  cmd = [cmd] unless cmd.is_a? Array
  result = nil

  cmd.each do |c|
    failures = 0

    begin
      output "Executing (attempt #{failures + 1} / #{attempts}) on #{@ip}: #{c}" if Jetpants.debug
      result = conn.exec! c do |ch, stream, data|
        if stream == :stderr
          output "SSH ERROR: #{data}"
        end
        ch[:result] ||= ''
        ch[:result] << data
      end
    rescue
      failures += 1
      raise if failures >= attempts
      output "Command \"#{c}\" failed, re-trying after delay"
      sleep(failures)
      retry
    end
  end

  save_ssh_connection conn
  return result
end
Execute the given UNIX command string (or array of strings) as root via SSH. By default, if something is wrong with the SSH connection, the command will be attempted up to 3 times before an exception is thrown. Be sure to set this to 1 or false for commands that are not idempotent. Returns the result of the command executed. If cmd was an array of strings, returns the result of the LAST command executed.
ssh_cmd
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
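A usage sketch of the retry semantics described above (the commands themselves are illustrative):

host.ssh_cmd "uptime"                           # retried up to 3 times on SSH trouble
host.ssh_cmd ["mkdir -p /tmp/demo", "ls /tmp"]  # returns output of the LAST command
host.ssh_cmd "some-non-idempotent-step", false  # not idempotent: never retried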
def ssh_cmd!(cmd)
  ssh_cmd cmd, false
end
Shortcut for use when a command is not idempotent and therefore isn't safe to retry if something goes wonky with the SSH connection.
ssh_cmd!
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def confirm_listening_on_port(port, timeout=10)
  checker_th = Thread.new { ssh_cmd "while [[ `netstat -ln | grep :#{port} | wc -l` -lt 1 ]] ; do sleep 1; done" }
  raise "Nothing is listening on #{@ip}:#{port} after #{timeout} seconds" unless checker_th.join(timeout)
  true
end
Confirm that something is listening on the given port. The timeout param indicates how long to wait (in seconds) for a process to be listening.
confirm_listening_on_port
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def available?
  # If we haven't tried an ssh command yet, @available will be nil. Running
  # a first no-op command will populate it to true or false.
  if @available.nil?
    ssh_cmd 'echo ping' rescue nil
  end
  @available
end
Returns true if the host is accessible via SSH, false otherwise
available?
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def clone_part(work_item, base_dir, targets)
  # Both 'sender' and 'receiver' use the specific parameter YAML file to perform the
  # transfer. These parameters are set by jetpants through the following hashes based
  # on the given work item.
  sender_params = {}
  receiver_params = {}

  should_encrypt = work_item['should_encrypt']

  block_count = work_item['size'] / Jetpants.block_size
  block_count += 1 if (work_item['size'] % Jetpants.block_size != 0)
  block_offset = work_item['offset'] / Jetpants.block_size

  sender_params['base_dir'] = base_dir
  sender_params['filename'] = work_item['filename'].sub('/./', '/')
  sender_params['block_count'] = block_count
  sender_params['block_offset'] = block_offset
  sender_params['block_size'] = Jetpants.block_size
  sender_params['transfer_id'] = work_item['transfer_id']
  sender_params['compression_cmd'] = "#{Jetpants.compress_with}" if Jetpants.compress_with
  sender_params['encryption_cmd'] = "#{Jetpants.encrypt_with}" if (Jetpants.encrypt_with && should_encrypt)

  receiver_params['block_count'] = sender_params['block_count']
  receiver_params['block_offset'] = sender_params['block_offset']
  receiver_params['block_size'] = sender_params['block_size']
  receiver_params['filename'] = sender_params['filename']
  receiver_params['transfer_id'] = work_item['transfer_id']
  receiver_params['decompression_cmd'] = "#{Jetpants.decompress_with}" if Jetpants.decompress_with
  receiver_params['decryption_cmd'] = "#{Jetpants.decrypt_with}" if (Jetpants.decrypt_with && should_encrypt)

  port = work_item['port']
  destinations = targets
  targets = targets.keys
  workers = []

  targets.reverse.each_with_index do |target, i|
    receiver_params['base_dir'] = destinations[target]

    if i == 0
      receiver_params.delete('chain_ip')
      receiver_params.delete('chain_port')
    else
      # If chaining needs to be set up, we add those parameters to YAML
      chain_target = targets.reverse[i - 1]
      receiver_params['chain_ip'] = chain_target.ip
      receiver_params['chain_port'] = port
    end

    string = receiver_params.to_yaml.gsub("\n", "\\n")

    # For each receiver in the chain, write the transfer parameters
    # in the YAML file at the specified location.
    #
    # This location must match the one used by the 'receiver.rb' script in puppet
    cmd = "echo -e \"#{string}\" > #{Jetpants.recv_param_path}"
    target.ssh_cmd(cmd)
  end

  sender_params['target_ip'] = targets[0].ip
  sender_params['target_port'] = port
  string = sender_params.to_yaml.gsub("\n", "\\n") # New lines need additional escape

  # Sender also needs the transfer parameters in the YAML file at the specified
  # location. Set those too.
  cmd = "echo -e \"#{string}\" > #{Jetpants.send_param_path}"
  ssh_cmd(cmd)

  # Trigger the data transfer.
  # Sender is going to read the transfer parameters from the specified
  # location and start the data transfer.
  ssh_cmd!("#{Jetpants.sender_bin_path}")
end
This is the method used by each thread to clone a part of a file to the targets. The work item has all the necessary information for the thread to set up the cloning chain. The basic logic of the copy chain remains the same as that of 'fast_copy_chain', but the chain has been outsourced to the individual scripts 'sender' and 'receiver'. These 2 scripts are deployed on each DB node. Jetpants orchestrates the transfers using 'ncat' (not nc) through a single port, with multiple connections between the sender and the receivers. ('ncat' allows us to exec the 'receiver' script on receiving a connection request from 'sender'; multiple connections are not supported by 'nc'.)
clone_part
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def ensure_transfer_started(targets, work_item)
  targets = targets.keys
  marker = "#{Jetpants.export_location}/__#{work_item['transfer_id']}.success"

  look_for_clone_marker(marker)
  targets.each do |target|
    target.look_for_clone_marker(marker)
  end
end
This method ensures that the transfer has started across the hosts. Success of this method is crucial for the whole operation, because we use the same file to convey the parameters of a transfer. The reason to use the same parameter file is that 'ncat' is execing the transfer for us when a new connection request is received on the port; we cannot exec a different command per connection. Imagine it to be a 'xinetd'. The way we ensure that the transfer started is by checking for the existence of a file at the last node in the chain.
ensure_transfer_started
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def faster_copy_chain(base_dir, targets, options={})
  filenames = options[:files]
  filenames = [filenames] unless filenames.respond_to?(:each)

  progress = {}
  destinations = targets
  targets = targets.keys

  targets.each do |target|
    dir = destinations[target]
    unless options[:overwrite]
      all_paths = filenames.keys.map {|f| dir + f}.join ' '
      dirlist = target.dir_list(all_paths)
      dirlist.each {|name, size| raise "File #{name} exists on destination and has nonzero size!" if size.to_i > 0}
    end
  end

  # Is encryption required?
  should_encrypt = false
  targets.each do |t|
    should_encrypt = should_encrypt || should_encrypt_with?(t)
  end

  # Divide each large file into Jetpants.split_size work items.
  # A thread will operate on each part independently.
  # We do not physically divide the file; the illusion of dividing a file is
  # created using the file offset the "dd" will operate on (man dd, skip and seek options)
  work_items = Queue.new
  filenames.each do |file, file_size|
    remaining_size = file_size
    num_parts = remaining_size / Jetpants.split_size
    num_parts += 1 if (remaining_size % Jetpants.split_size != 0)
    offset = 0
    size = [remaining_size, Jetpants.split_size].min

    for part in (1..num_parts)
      work_item = {'filename' => file, 'offset' => offset, 'size' => size, 'part' => part, 'should_encrypt' => should_encrypt}
      work_item['transfer_id'] = "#{work_item['filename'].gsub("/", "_")}_#{work_item['part']}"
      work_items << work_item

      offset += size
      remaining_size -= size
      size = [remaining_size, Jetpants.split_size].min
    end

    progress[file] = {'total_size' => file_size, 'sent' => 0}
  end

  # Deciding upon the number of threads to use for the copy,
  # based on the minimum core count of the nodes in the chain
  core_counts = [cores]
  targets.each do |target|
    core_counts << target.cores
  end
  num_threads = core_counts.min - 2 # Leave 2 cores for OS
  num_threads = num_threads > 12 ? 12 : num_threads # Cap it somewhere, Flash will be HOT otherwise
  output "Using #{num_threads} threads to clone the big files with #{work_items.size} parts"

  port = options[:port]

  # We only have one port available, so we use the 'ncat' utility that allows us
  # to exec per received connection.
  #
  # What we 'exec' is a ruby script called 'receiver.rb' that does all the
  # ugly shell handling of the chain. So, the following command starts the ncat
  # server on each receiver. When it receives a connection request, it is going
  # to fork-exec the 'receiver' script, which handles the data transfer.
  cmd = "ncat --recv-only -lk #{port} -m 100 --sh-exec \"#{Jetpants.receiver_bin_path}\""
  receiver_cmd = "nohup #{cmd} > /dev/null 2>&1 &"

  targets.each do |target|
    dir = destinations[target]
    raise "Directory #{target}:#{dir} looks suspicious" if dir.include?('..') || dir.include?('./') || dir == '/' || dir == ''
    target.ssh_cmd "mkdir -p #{dir}"
    target.ssh_cmd(receiver_cmd)
    target.confirm_listening_on_port(port, 30)
  end

  workers = {}
  until work_items.empty?
    sleep(1)
    work_item = work_items.deq
    work_item['port'] = port

    worker = Thread.new { clone_part(work_item, base_dir, destinations) }
    workers[worker] = work_item
    ensure_transfer_started(destinations, work_item)

    while workers.count >= num_threads
      watch_progress(workers, progress)
    end
  end
  output "All work items submitted for transfer"

  # All work items have been submitted for processing now;
  # let us just wait for all of them to finish
  until workers.count == 0
    watch_progress(workers, progress)
  end
  output "All work items transferred"

  # OK!! Done, let's kill the ncat servers started on all the targets
  cmd_str = cmd.gsub!('"', '').strip
  targets.each do |target|
    target.ssh_cmd("pkill -f '#{cmd_str}'")
  end

  # Because we initiate the "dd" threads using root, the permissions of the
  # files are lost (owner:group). We fix them now by querying for the same
  # at the source.
  filenames.each do |file, file_size|
    source_file = (base_dir + file).sub('/./', '/')
    mode_stats = get_file_stats(source_file)
    raise "Could not get stats for source #{source_file}. Clone is almost done, you can fix it manually" if mode_stats.nil?

    destinations.each do |target, dir|
      target_file = (dir + file).sub('/./', '/')
      raise "Invalid target file #{target_file} on Target #{target}. Clone is almost done, you can fix it manually" if dir == '/' || dir == ''
      target.ssh_cmd("chmod #{mode_stats['mode']} #{target_file}")
      target.ssh_cmd("chown #{mode_stats['user']}:#{mode_stats['group']} #{target_file}")
    end
  end
end
Method used to divide the work of sending large files. Parameters to this method are exactly similar to those of 'fast_copy_chain'; the :files option, however, includes only the large files that we want to clone using multiple threads. This method does not check whether the compression and encryption binaries are installed, because most of the time it will be preceded by a call to 'fast_copy_chain' to send out small files, and 'fast_copy_chain' does ensure the binaries are installed.
faster_copy_chain
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def dir_list(dir)
  # disable color, 1 file per line, all but . and .., hide owner+group, include type suffix
  ls_out = ssh_cmd "ls --color=never -1AgGF #{dir}"
  result = {}

  ls_out.split("\n").each do |line|
    next unless matches = line.match(/^[\.\w-]+\s+\d+\s+(?<size>\d+).*(?:\d\d:\d\d|\d{4})\s+(?<name>.*)$/)
    file_name = matches[:name]
    file_name = file_name[0...-1] if file_name =~ %r![*/=>@|]$!
    result[file_name.split('/')[-1]] = (matches[:name][-1] == '/' ? '/' : matches[:size].to_i)
  end

  result
end
Given the name of a directory or single file, returns a hash of filename => size of each file present. Subdirectories will be returned with a size of '/', so you can process these differently as needed. WARNING: This is brittle. It parses output of "ls". If anyone has a gem to do better remote file management via ssh, then please by all means send us a pull request!
dir_list
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def compare_dir(base_dir, targets, options={})
  # Normalize the filenames param so it is an array
  filenames = options[:files] || ['.']
  filenames = [filenames] unless filenames.respond_to?(:each)

  # Normalize the targets param, so that targets is an array of Hosts and
  # destinations is a hash of hosts => dirs
  destinations = {}
  targets = [targets] unless targets.respond_to?(:each)
  base_dir += '/' unless base_dir[-1] == '/'

  if targets.is_a? Hash
    destinations = targets
    destinations.each {|t, d| destinations[t] += '/' unless d[-1] == '/'}
    targets = targets.keys
  else
    destinations = targets.inject({}) {|memo, target| memo[target] = base_dir; memo}
  end
  raise "No target hosts supplied" if targets.count < 1

  queue = filenames.map {|f| ['', f]} # array of [subdir, filename] pairs
  while (tuple = queue.shift)
    subdir, filename = tuple
    source_dirlist = dir_list(base_dir + subdir + filename)

    destinations.each do |target, path|
      target_dirlist = target.dir_list(path + subdir + filename)
      source_dirlist.each do |name, size|
        target_size = target_dirlist[name] || 'MISSING'
        raise "Directory listing mismatch when comparing #{self}:#{base_dir}#{subdir}#{filename}/#{name} to #{target}:#{path}#{subdir}#{filename}/#{name} (size: #{size} vs #{target_size})" unless size == target_size
      end
    end

    queue.concat(source_dirlist.map {|name, size| size == '/' ? [subdir + '/' + name, '/'] : nil}.compact)
  end
end
Compares file existence and size between hosts. Param format identical to the first three params of Host#fast_copy_chain, except only supported option is :files. Raises an exception if the files don't exactly match, otherwise returns true.
compare_dir
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def dir_size(dir)
  total_size = 0
  dir_list(dir).each do |name, size|
    total_size += (size == '/' ? dir_size(dir + '/' + name) : size.to_i)
  end
  total_size
end
Recursively computes size of files in dir
dir_size
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def get_file_stats(filename)
  mode_re = /^Access:\s+\((?<mode>\d+)\/(?<permissions>[drwx-]+)\)\s+Uid:\s+\(\s+\d+\/\s+(?<user>\w+)\)\s+Gid:\s+\(\s+\d+\/\s+(?<group>\w+)\)$/x

  result = ssh_cmd("stat #{filename}").split("\n")
  mode_line = result[3]
  tokens = mode_line.match(mode_re)
  # Later when we need more info we will merge hashes obtained from REs
  tokens
end
##### Misc methods ########################################################

`stat` call to get all the information about the given file
get_file_stats
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def set_io_scheduler(name, device='sda')
  output "Setting I/O scheduler for #{device} to #{name}."
  ssh_cmd "echo '#{name}' >/sys/block/#{device}/queue/scheduler"
end
Changes the I/O scheduler to name (such as 'deadline', 'noop', 'cfq') for the specified device.
set_io_scheduler
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def confirm_installed(program_name)
  raise "#{program_name} not installed, or missing from path" unless has_installed(program_name)
  true
end
Confirms that the specified binary is installed and on the shell path.
confirm_installed
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def cores
  return @cores if @cores
  count = ssh_cmd %q{cat /proc/cpuinfo|grep 'processor\s*:' | wc -l}
  @cores = (count ? count.to_i : 1)
end
Returns number of cores on machine. (reflects virtual cores if hyperthreading enabled, so might be 2x real value in that case.) Not currently used by anything in Jetpants base, but might be useful for plugins that want to tailor the concurrency level to the machine's capabilities.
cores
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def memory(in_gb=false)
  line = ssh_cmd 'cat /proc/meminfo | grep MemTotal'
  matches = line.match /(?<size>\d+)\s+(?<unit>kB|mB|gB|B)/
  size = matches[:size].to_i
  multipliers = {kB: 1024, mB: 1024**2, gB: 1024**3, B: 1}
  size *= multipliers[matches[:unit].to_sym]
  in_gb ? size / 1024**3 : size
end
Returns the amount of memory on machine, either in bytes (default) or in GB. Linux-specific.
memory
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
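A quick arithmetic sketch of the unit handling above (the MemTotal value is illustrative):

# Given /proc/meminfo reporting "MemTotal: 16384256 kB":
host.memory        # => 16777478144 (16384256 * 1024 bytes)
host.memory(true)  # => 15 (integer division by 1024**3)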
def to_s
  return @ip
end
Returns the host's IP address as a string.
to_s
ruby
tumblr/jetpants
lib/jetpants/host.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/host.rb
Apache-2.0
def concurrent_each
  collect {|*item| Thread.new {yield(*item)}}.each {|th| th.join}
  self
end
Works like each but runs the block in a separate thread per item.
concurrent_each
ruby
tumblr/jetpants
lib/jetpants/monkeypatch.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/monkeypatch.rb
Apache-2.0
def concurrent_map
  collect {|*item| Thread.new {yield(*item)}}.collect {|th| th.value}
end
Works like map but runs the block in a separate thread per item.
concurrent_map
ruby
tumblr/jetpants
lib/jetpants/monkeypatch.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/monkeypatch.rb
Apache-2.0
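A usage sketch of these Enumerable monkeypatches (the host list and commands are illustrative):

hosts.concurrent_each {|h| h.ssh_cmd 'service mysql status'}  # one thread per item
uptimes = hosts.concurrent_map {|h| h.ssh_cmd 'uptime'}       # values in original order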
def slaves(type=false)
  case type
  when :active_slave, :active   then active_slaves
  when :standby_slave, :standby then standby_slaves
  when :backup_slave, :backup   then backup_slaves
  when false                    then @master.slaves
  else []
  end
end
Returns all slaves, or pass in :active, :standby, or :backup to receive only slaves of a particular type.
slaves
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
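For example, given a Jetpants::Pool instance (sketch):

pool.slaves            # every slave of the pool's master
pool.slaves(:standby)  # standby slaves only
pool.slaves(:backup)   # backup slaves only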
def active_slaves
  @master.slaves.select {|sl| @active_slave_weights[sl]}
end
Returns an array of Jetpants::DB objects. Active slaves are ones that receive read queries from your application.
active_slaves
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
def standby_slaves
  @master.slaves.reject {|sl| @active_slave_weights[sl] || sl.for_backups?}
end
Returns an array of Jetpants::DB objects. Standby slaves do not receive queries from your application. These are for high availability. They can be turned into active slaves or even the master, and can also be used for cloning additional slaves.
standby_slaves
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
def backup_slaves
  @master.slaves.reject {|sl| @active_slave_weights[sl] || !sl.for_backups?}
end
Returns an array of Jetpants::DB objects. Backup slaves are never promoted to active or master. They are for dedicated backup purposes. They may be a different/cheaper hardware spec than other slaves.
backup_slaves
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
def nodes
  [master, slaves].flatten.compact
end
Returns a flat array of all Jetpants::DB objects in the pool: the master and all slaves of all types.
nodes
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
def probe_tables
  master.probe

  @probe_lock.synchronize do
    return unless @tables.nil?

    db = standby_slaves.last || active_slaves.last || master
    if db && db.running?
      output "Probing tables via #{db}"
    else
      output "Warning: unable to probe tables"
      return
    end

    @tables = []
    sql = "SHOW TABLES"
    db.query_return_array(sql).each do |tbl|
      table_name = tbl.values.first
      @tables << db.detect_table_schema(table_name)
    end
  end
end
Look at a database in the pool (preferably a standby slave, but will check active slave or master if nothing else is available) and retrieve a list of tables, detecting their schema
probe_tables
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
def tables
  self.probe_tables unless @tables
  @tables
end
Returns a list of table objects for this pool
tables
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
def get_table(table)
  raise "Pool #{self} does not have table #{table}" unless has_table? table

  @tables.select {|tb| tb.to_s == table}.first
end
Retrieve the table object for a given table name
get_table
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
def has_active_slave(slave_db, weight=100)
  slave_db = slave_db.to_db
  raise "Attempt to mark a DB as its own active slave" if slave_db == @master
  @active_slave_weights[slave_db] = weight
end
Informs Jetpants that slave_db is an active slave. Potentially used by plugins, such as in Topology at start-up time.
has_active_slave
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
def mark_slave_active(slave_db, weight=100)
  raise "Attempt to make a backup slave be an active slave" if slave_db.for_backups?

  has_active_slave slave_db, weight
  sync_configuration
end
Turns a standby slave into an active slave, giving it the specified read weight. Syncs the pool's configuration afterwards. It's up to your asset tracker plugin to actually do something with this information.
mark_slave_active
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
def mark_slave_standby(slave_db)
  slave_db = slave_db.to_db
  raise "Cannot call mark_slave_standby on a master" if slave_db == @master

  @active_slave_weights.delete(slave_db)
  sync_configuration
end
Turns an active slave into a standby slave. Syncs the pool's configuration afterwards. It's up to your asset tracker plugin to actually do something with this information.
mark_slave_standby
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
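A sketch pairing the two transitions; the weight of 50 is arbitrary and the pool lookup is hypothetical as above:

require 'jetpants'

pool  = Jetpants.topology.pool('user-db')  # hypothetical
slave = pool.standby_slaves.first
pool.mark_slave_active(slave, 50)   # start serving reads at weight 50
# ... later, drain reads off the node again:
pool.mark_slave_standby(slave)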
def remove_slave!(slave_db) raise "Slave is not in this pool" unless slave_db.pool == self return false unless (slave_db.running? && slave_db.available?) slave_db.disable_monitoring slave_db.disable_replication! sync_configuration # may or may not be sufficient -- see note above. end
Remove a slave from a pool entirely. This is destructive, i.e., it does a RESET SLAVE on the db. Note that a plugin may want to override this (or implement after_remove_slave!) to actually sync the change to an asset tracker, depending on how the plugin implements Pool#sync_configuration. (If the implementation makes sync_configuration work by iterating over the pool's current slaves to update their status/role/pool, it won't see any slaves that have been removed, and therefore won't update them.) This method has no effect on slaves that are unavailable via SSH or have MySQL stopped, because these are only considered to be in the pool if your asset tracker plugin intentionally adds them. Such plugins could also handle this in the after_remove_slave! callback.
remove_slave!
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
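A hedged sketch of the plugin hook the note above describes; the module name is hypothetical and the method body is illustrative (a real plugin would update its asset tracker rather than just logging):

module MyAssetTrackerPlugin   # hypothetical plugin module
  def after_remove_slave!(slave_db)
    # Jetpants' callback machinery invokes after_* methods automatically;
    # here we only log, but a real tracker update would go in this body.
    output "Recording #{slave_db} as removed from pool #{name}"
  end
end
Jetpants::Pool.send(:include, MyAssetTrackerPlugin)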
def add_alias(name) if @aliases.include? name false else @aliases << name true end end
Informs this pool that it has an alias. A pool may have any number of aliases.
add_alias
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
def summary_info(node, counter, tab, extended_info=false) if extended_info details = {} if !node.running? details[node] = {coordinates: ['unknown'], lag: 'N/A', gtid_exec: 'unknown'} elsif node == @master and !node.is_slave? details[node] = {lag: 'N/A'} if gtid_mode? details[node][:gtid_exec] = node.gtid_executed_from_pool_master_string else details[node][:coordinates] = node.binlog_coordinates(false) end else lag = node.seconds_behind_master lag_str = lag.nil? ? 'NULL' : lag.to_s + 's' details[node] = {lag: lag_str} if gtid_mode? details[node][:gtid_exec] = node.gtid_executed_from_pool_master_string else details[node][:coordinates] = node.repl_binlog_coordinates(false) end end end # tabs below takes care of the indentation depending on the level of replication chain. tabs = ' ' * (tab + 1) # Prepare the extended_info if needed binlog_pos = '' slave_lag = '' if extended_info slave_lag = "lag=#{details[node][:lag]}" unless node == @master && !node.is_slave? binlog_pos = gtid_mode? ? details[node][:gtid_exec] : details[node][:coordinates].join(':') end if node == @master and !node.is_slave? # Preparing the data_set_size and pool alias text alias_text = @aliases.count > 0 ? ' (aliases: ' + @aliases.join(', ') + ')' : '' data_size = @master.running? ? "[#{master.data_set_size(true)}GB]" : '' state_text = (respond_to?(:state) && state != :ready ? " (state: #{state})" : '') print "#{name}#{alias_text}#{state_text} #{data_size}\n" print "\tmaster = %-15s %-32s %s\n" % [node.ip, node.hostname, binlog_pos] else # Determine the slave type below type = node.role.to_s.split('_').first format_str = "%s%-7s slave #{counter + 1} = %-15s %-32s " + (gtid_mode? ? "%-46s" : "%-26s") + " %s\n" print format_str % [tabs, type, node.ip, node.hostname, binlog_pos, slave_lag] end end
Helper that prints one line of summary information for a discovered master or slave node; used by Pool#summary below.
summary_info
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
def summary(extended_info=false, with_children=false, node=@master, depth=1) probe i = 0 summary_info(node, i, depth, extended_info) slave_list = node.slaves slave_roles = Hash.new slave_list.each { |slave| slave_roles[slave] = slave.role } Hash[slave_roles.sort_by{ |k, v| v }].keys.each_with_index do |s, i| summary_info(s, i, depth, extended_info) if s.has_slaves? s.slaves.sort.each do |slave| summary(extended_info, with_children, slave, depth + 1) end end end true end
Displays a summary of the pool's members. This outputs immediately instead of returning a string, so that you can invoke something like: Jetpants.topology.pools.each &:summary to easily display a summary.
summary
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
def promotable_nodes(enslaving_old_master=true) # Array of pool members, either including or excluding the old master as requested comparisons = nodes.select &:running? comparisons.delete master unless enslaving_old_master # Keep track of up to one "last resort" DB, which will only be promoted if # there are no other candidates. The score int allows ranking of last resorts, # to figure out the "least bad" promotion candidate in an emergency. last_resort_candidate = nil last_resort_score = 0 last_resort_warning = "" consider_last_resort = Proc.new do |db, score, warning| if last_resort_candidate.nil? || last_resort_score < score last_resort_candidate = db last_resort_score = score last_resort_warning = warning end end # Build list of good candidates for promotion candidates = nodes.reject {|db| db == master || db.for_backups? || !db.running?} candidates.select! do |candidate| others = comparisons.reject {|db| db == candidate} # Node isn't promotable if it's running a higher version of MySQL than any of its future replicas next if others.any? {|db| candidate.version_cmp(db) > 0} if gtid_mode? # Ordinarily if gtid_mode is already in use in the pool, gtid_deployment_step # should not be enabled anywhere; this likely indicates either an incomplete # GTID rollout occurred, or an automation bug elsewhere. Reject the candidate # outright, unless the old master is dead, in which case we consider it as a # last resort with low score. if candidate.gtid_deployment_step? unless master.running? warning = "gtid_deployment_step is still enabled (indicating incomplete GTID rollout?), but there's no better candidate" consider_last_resort.call(candidate, 0, warning) end next end # See if any replicas would break if this candidate becomes master. If any will # break, only allow promotion as a last resort, with the score based on what # percentage will break breaking_count = others.count {|db| candidate.purged_transactions_needed_by? db} if breaking_count > 0 breaking_pct = (100.0 * (breaking_count.to_f / others.length.to_f)).to_int score = 100 - breaking_pct warning = "#{breaking_pct}% of replicas will break upon promoting this node, but there's no better candidate" consider_last_resort.call(candidate, score, warning) next end end # gtid_mode checks # Only consider active slaves to be full candidates if the old master # is dead and we don't have GTID. In this situation, an active slave may # have the furthest replication progress. But in any other situation, # consider active slaves to be last resort, since promoting one would # also require converting a standby to be an active slave. if candidate.role == :active_slave && (gtid_mode? || master.running?) consider_last_resort.call(candidate, 100, "only promotion candidate is an active slave, since no standby slaves are suitable") next end # If we didn't hit a "next" statement in any of the above checks, the node is promotable true end if candidates.length == 0 && !last_resort_candidate.nil? last_resort_candidate.output "WARNING: #{last_resort_warning}" candidates << last_resort_candidate end candidates end
Returns an array of DBs in this pool that are candidates for promotion to new master. NOTE: doesn't yet handle hierarchical replication scenarios. This method currently only considers direct slaves of the pool master, by virtue of how Pool#nodes works. The enslaving_old_master arg determines whether or not the current pool master would become a replica (enslaving_old_master=true) vs being removed from the pool (enslaving_old_master=false). This just determines whether it is included in version comparison logic, purged binlog logic, etc. The *result* of this method will always exclude the current master regardless, as it doesn't make sense to consider a node that's already the master to be "promotable".
promotable_nodes
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
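A small sketch of inspecting candidates ahead of a failover (pool lookup hypothetical):

require 'jetpants'

pool = Jetpants.topology.pool('user-db')   # hypothetical
# false => plan to eject the old master rather than re-enslave it
candidates = pool.promotable_nodes(false)
candidates.each {|db| puts "could promote: #{db}"}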
def master_promotion!(promoted, enslave_old_master=true) demoted = @master raise "Demoted node is already the master of this pool!" if demoted == promoted raise "Promoted host is not in the right pool!" unless demoted.slaves.include?(promoted) output "Preparing to demote master #{demoted} and promote #{promoted} in its place." live_promotion = demoted.running? # If demoted machine is available, confirm it is read-only and binlog isn't moving, # and then wait for slaves to catch up to this position. Or if using GTID, only need # to wait for new_master to catch up; GTID allows us to repoint lagging slaves without # issue. if live_promotion demoted.enable_read_only! raise "Unable to enable global read-only mode on demoted machine" unless demoted.read_only? raise "Demoted machine still taking writes (from superuser or replication?) despite being read-only" if taking_writes?(0.5) must_catch_up = (gtid_mode? ? [promoted] : demoted.slaves) must_catch_up.concurrent_each do |s| while demoted.ahead_of? s do s.output "Still catching up to demoted master" sleep 1 end end # Demoted machine not available -- wait for slaves' binlogs to stop moving else demoted.slaves.concurrent_each do |s| while s.taking_writes?(1.0) do # Ensure we're not taking writes because a formerly dead master came back to life # In this situation, a human should inspect the old master manually raise "Dead master came back to life, aborting" if s.replicating? s.output "Still catching up on replication" end end end # Stop replication on all slaves replicas = demoted.slaves.dup replicas.each do |s| s.pause_replication if s.replicating? end raise "Unable to stop replication on all slaves" if replicas.any? {|s| s.replicating?} # Determine options for CHANGE MASTER creds = promoted.replication_credentials change_master_options = { user: creds[:user], password: creds[:pass], } if gtid_mode? change_master_options[:auto_position] = true promoted.gtid_executed(true) else change_master_options[:log_file], change_master_options[:log_pos] = promoted.binlog_coordinates end # reset slave on promoted, and make sure read_only is disabled promoted.disable_replication! promoted.disable_read_only! # gather our new replicas replicas.delete promoted replicas << demoted if live_promotion && enslave_old_master # If old master is dead and we're using GTID, try to catch up the new master # from its siblings, in case one of them is further ahead. Currently using the # default 5-minute timeout of DB#replay_missing_transactions, gives up after that. if gtid_mode? && change_master_options[:auto_position] && !live_promotion promoted.replay_missing_transactions(replicas, change_master_options) end # Repoint replicas to the new master replicas.each {|r| r.change_master_to(promoted, change_master_options)} # ensure our replicas are configured correctly by comparing our staged values to current values of replicas promoted_replication_config = { master_host: promoted.ip, master_user: change_master_options[:user], } if gtid_mode? promoted_replication_config[:auto_position] = "1" else promoted_replication_config[:master_log_file] = change_master_options[:log_file] promoted_replication_config[:exec_master_log_pos] = change_master_options[:log_pos].to_s end replicas.each do |r| promoted_replication_config.each do |option, value| raise "Unexpected slave status value for #{option} in replica #{r} after promotion" unless r.slave_status[option] == value end r.resume_replication unless r.replicating? end # Update the pool # Note: if the demoted machine is not available, plugin may need to implement an # after_master_promotion! method which handles this case in configuration tracker @active_slave_weights.delete promoted # if promoting an active slave, remove it from read pool @master = promoted @master_uuid = nil # clear any memoized value sync_configuration Jetpants.topology.write_config output "Promotion complete. Pool master is now #{promoted}." replicas.all? {|r| r.replicating?} end
Demotes the pool's existing master, promoting a slave in its place. The old master will become a slave of the new master if enslave_old_master is true, unless the old master is unavailable/crashed.
master_promotion!
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
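And a sketch of feeding one of those candidates into a promotion; names are hypothetical, and a real failover should be driven by jetpants' own tooling:

require 'jetpants'

pool = Jetpants.topology.pool('user-db')   # hypothetical
new_master = pool.promotable_nodes.first
raise "no safe promotion candidate" unless new_master
# Second arg true => the demoted master becomes a replica of the new one.
pool.master_promotion!(new_master, true)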
def gtid_mode? return @gtid_mode unless @gtid_mode.nil? any_gtids_executed = false # If master is running, it is sufficient to just check it alone, since # replicas must be using GTID if the master is nodes_to_examine = (master.running? ? [master] : slaves) nodes_to_examine.each do |db| begin row = db.query_return_first 'SELECT UPPER(@@global.gtid_mode) AS gtid_mode, @@global.gtid_executed AS gtid_executed' rescue # Treat pre-5.6 MySQL, or MariaDB, as not having GTID enabled. These will # raise an exception because the global vars in the query above don't exist. row = {gtid_mode: 'OFF', gtid_executed: ''} end unless row[:gtid_mode] == 'ON' @gtid_mode = false return @gtid_mode end any_gtids_executed = true unless row[:gtid_executed] == '' end if any_gtids_executed @gtid_mode = true else false # intentionally avoid memoization for this situation -- no way to invalidate properly end end
Returns true if the entire pool is using gtid_mode AND has executed at least one transaction with a GTID, false otherwise. The gtid_executed check allows this method to tell when GTIDs can actually be used (for auto-positioning, telling which node is ahead, etc.) vs when we need to fall back to using coordinates despite @@global.gtid_mode being ON. This method is safe to use even if the master is dead. In most situations, this method memoizes the value on first use, to avoid repeated querying from subsequent calls.
gtid_mode?
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
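A sketch of the branch callers typically make on this predicate, mirroring the CHANGE MASTER logic in master_promotion! above:

require 'jetpants'

pool = Jetpants.topology.pool('user-db')   # hypothetical
if pool.gtid_mode?
  puts "repoint replicas with MASTER_AUTO_POSITION=1"
else
  puts "repoint replicas with explicit binlog file/position coordinates"
end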
def master_uuid return @master_uuid unless @master_uuid.nil? raise "Pool#master_uuid requires gtid_mode" unless gtid_mode? if master.running? @master_uuid = master.server_uuid return @master_uuid end slaves.select(&:running?).each do |s| if s.master == master master_uuid = s.slave_status[:master_uuid] unless master_uuid.nil? || master_uuid == '' @master_uuid = master_uuid return @master_uuid end end end raise "Unable to determine the master_uuid for #{self}" end
Returns the server_uuid of the pool's master. Safe to use even if the master is dead, as long as the asset tracker populates the dead master's @slaves properly (as jetpants_collins already does). Memoizes the value to avoid repeated lookup; methods that modify the pool master clear the memoized value.
master_uuid
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
def after_sync_configuration unless Jetpants.topology.pools.include? self Jetpants.topology.add_pool self end end
Callback to ensure that a sync'ed pool is already in Topology.pools
after_sync_configuration
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
def method_missing(name, *args, &block) if @master.respond_to? name @master.send name, *args, &block else super end end
Jetpants::Pool proxies missing methods to the pool's @master Jetpants::DB instance.
method_missing
ruby
tumblr/jetpants
lib/jetpants/pool.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/pool.rb
Apache-2.0
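Thanks to this proxying, any DB method can be called on the pool directly; a sketch (both proxied methods are used on DB objects elsewhere in this file):

require 'jetpants'

pool = Jetpants.topology.pool('user-db')   # hypothetical
puts pool.ip                   # proxied to @master.ip
puts pool.data_set_size(true)  # proxied to @master; size in GB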
def initialize(min_id, max_id, master, state=:ready, shard_pool_name=nil) @min_id = min_id.to_i @max_id = (max_id.to_s.upcase == 'INFINITY' ? 'INFINITY' : max_id.to_i) @state = state @children = [] # array of shards being initialized by splitting this one @parent = nil shard_pool_name = Jetpants.topology.default_shard_pool if shard_pool_name.nil? @shard_pool = Jetpants.topology.shard_pool(shard_pool_name) super(generate_name, master) end
Constructor for Shard -- * min_id: int * max_id: int or the string "INFINITY" * master: string (IP address) or a Jetpants::DB object * state: one of the above state symbols * shard_pool_name: optional string; defaults to the topology's default shard pool
initialize
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
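A construction sketch showing the 'INFINITY' convention; the IP address is hypothetical, and a configured topology is assumed since the constructor consults the default shard pool:

require 'jetpants'

# Last shard in the keyspace: covers everything from 30000001 upward.
last_shard = Jetpants::Shard.new(30000001, 'INFINITY', '10.42.0.1', :ready)
puts last_shard.name   # e.g. "<pool>-30000001-infinity", per generate_name below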
def generate_name prefix = (@shard_pool.nil?) ? 'anon' : @shard_pool.name.downcase "#{prefix}-#{min_id}-#{max_id.to_s.downcase}" end
Generates a string containing the shard's min and max IDs. Plugin may want to override.
generate_name
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
def in_config? [:ready, :child, :needs_cleanup, :read_only, :offline].include? @state end
Returns true if the shard state is one of the values that indicates it's a live / in-production shard. These states include :ready, :child, :needs_cleanup, :read_only, and :offline.
in_config?
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
def mark_slave_active(slave_db, weight=100) raise "Shards do not support active slaves" end
In default Jetpants, we assume each Shard has 1 master and N standby slaves; we never have active (read) slaves for shards. So calling mark_slave_active on a Shard generates an exception. Plugins may override this behavior, which may be necessary for sites spanning two or more active data centers.
mark_slave_active
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
def standby_slaves result = super if @children.count > 0 is_child_master = {} @children.each {|c| is_child_master[c.master] = true} result.reject {|sl| is_child_master[sl]} else result end end
Returns the master's standby slaves, ignoring any child shards since they are a special case of slaves.
standby_slaves
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
def db(mode=:read) (mode.to_sym == :write && @parent ? @parent.master : master) end
Returns the Jetpants::DB object corresponding to the requested access mode (either :read or :write). Ordinarily this will be the shard's @master, unless this shard is still a child, in which case we send writes to the shard's parent's master instead.
db
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
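A sketch of the read/write routing during a split, while a child shard still forwards writes to its parent; the shard pool name 'myapp' is hypothetical (Topology#shards by pool name is used elsewhere in this file):

require 'jetpants'

shard  = Jetpants.topology.shards('myapp').first   # hypothetical pool name
reader = shard.db(:read)    # this shard's own master
writer = shard.db(:write)   # the parent's master while the shard is a child
puts "reads -> #{reader}, writes -> #{writer}"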
def probe_tables if Jetpants.topology.shards(self.shard_pool.name).first == self super else Jetpants.topology.shards(self.shard_pool.name).first.probe_tables end end
Override the probe_tables method to accommodate shard topology - delegate everything to the first shard.
probe_tables
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
def tables if Jetpants.topology.shards(self.shard_pool.name).first == self super else Jetpants.topology.shards(self.shard_pool.name).first.tables end end
Override the tables accessor to accommodate shard topology - delegate everything to the first shard
tables
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
def add_child(shard) raise "Shard #{shard} already has a parent!" if shard.parent @children << shard shard.parent = self end
Adds a Jetpants::Shard to this shard's array of children, and sets the child's parent to be self.
add_child
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
def remove_child(shard) raise "Shard #{shard} isn't a child of this shard!" unless shard.parent == self @children.delete shard shard.parent = nil end
Removes a Jetpants::Shard from this shard's array of children, and sets the child's parent to nil.
remove_child
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
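A sketch wiring a parent and child together as a split begins; IPs and ranges are hypothetical:

require 'jetpants'

parent = Jetpants::Shard.new(1, 20_000_000, '10.42.0.1')
child  = Jetpants::Shard.new(1, 10_000_000, '10.42.0.2', :initializing)
parent.add_child(child)
puts child.parent == parent   # => true
parent.remove_child(child)    # normally done for you by cleanup!, below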
def move_reads_to_children @state = :deprecated @children.concurrent_each do |c| raise "Child shard #{c} not in :replicating state!" if c.state != :replicating end @children.concurrent_each do |c| c.state = :child c.sync_configuration end sync_configuration end
Puts the shard into the :deprecated state, which triggers reads to move to the child shards.
move_reads_to_children
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
def move_writes_to_children @children.each do |c| c.state = :needs_cleanup c.sync_configuration end end
Transitions the shard's children into the :needs_cleanup state. It is the responsibility of an asset tracker plugin / config generator to implement config generation in a way that actually makes writes go to shards in the :needs_cleanup state.
move_writes_to_children
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
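A sketch of the read-then-write cutover these two methods implement; in practice jetpants' shard-split workflow drives this, and the lookup is hypothetical as above:

require 'jetpants'

parent = Jetpants.topology.shards('myapp').first   # hypothetical
parent.move_reads_to_children    # parent -> :deprecated, children -> :child
parent.move_writes_to_children   # children -> :needs_cleanup
Jetpants.topology.write_config   # regenerate app config so traffic follows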
def prune_data! raise "Cannot prune a shard that isn't still slaving from another shard" unless @master.is_slave? unless [:initializing, :exporting, :importing].include? @state raise "Shard #{self} is not in a state compatible with calling prune_data! (current state=#{@state})" end tables = Table.from_config('sharded_tables', shard_pool.name) if @max_id == 'INFINITY' max_table_value = tables.map do |table| @master.highest_table_key_value(table,table.sharding_keys.first) end.max max_table_value = max_table_value * Jetpants.max_table_multiplier infinity = true else max_table_value = @max_id infinity = false end if @state == :initializing @state = :exporting sync_configuration end if @state == :exporting stop_query_killer export_schemata tables export_data tables, @min_id, max_table_value, infinity @state = :importing sync_configuration end if @state == :importing stop_query_killer disable_monitoring restart_mysql '--skip-log-bin', '--skip-log-slave-updates', '--innodb-autoinc-lock-mode=2', '--skip-slave-start', '--loose-gtid-mode=OFF' raise "Binary logging has somehow been re-enabled. Must abort for safety!" if binary_log_enabled? import_schemata! alter_schemata if respond_to? :alter_schemata import_data tables, @min_id, max_table_value, infinity restart_mysql # to clear out previous option overrides disabling binlog etc enable_monitoring start_query_killer end end
Exports data that should stay on this shard, drops and re-creates tables, and then re-imports the data
prune_data!
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
def clone_slaves_from_master # If shard is already in state :child, it may already have slaves standby_slaves_needed = slaves_layout[:standby_slave] standby_slaves_needed -= standby_slaves.size if @state == :child backup_slaves_needed = slaves_layout[:backup_slave] backup_slaves_needed -= backup_slaves.size if @state == :child if standby_slaves_needed < 1 && backup_slaves_needed < 1 output "Shard already has enough standby slaves and backup slaves, skipping step of cloning more" return end standby_slaves_available = Jetpants.topology.count_spares(role: :standby_slave, like: master) raise "Not enough standby_slave role machines in spare pool!" if standby_slaves_needed > standby_slaves_available backup_slaves_available = Jetpants.topology.count_spares(role: :backup_slave) if backup_slaves_needed > backup_slaves_available if standby_slaves_available > backup_slaves_needed + standby_slaves_needed && agree("Not enough backup_slave role machines in spare pool, would you like to use standby_slaves? [yes/no]: ") standby_slaves_needed = standby_slaves_needed + backup_slaves_needed backup_slaves_needed = 0 else raise "Not enough backup_slave role machines in spare pool!" if backup_slaves_needed > backup_slaves_available end end # Handle state transitions if @state == :child || @state == :importing @state = :replicating sync_configuration elsif @state == :offline || @state == :replicating # intentional no-op, no need to change state else raise "Shard #{self} is not in a state compatible with calling clone_slaves_from_master! (current state=#{@state})" end standby_slaves = Jetpants.topology.claim_spares(standby_slaves_needed, role: :standby_slave, like: master, for_pool: master.pool) backup_slaves = Jetpants.topology.claim_spares(backup_slaves_needed, role: :backup_slave, for_pool: master.pool) enslave!([standby_slaves, backup_slaves].flatten) [standby_slaves, backup_slaves].flatten.each &:resume_replication [self, standby_slaves, backup_slaves].flatten.each { |db| db.catch_up_to_master } @children end
Creates standby and backup slaves for a shard by cloning the master. Only call this on a child shard that isn't in production yet, or on a production shard that's been marked as offline.
clone_slaves_from_master
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
def cleanup! raise "Cannot call cleanup! on a child shard" if @parent # situation A - clean up after a shard split if @state == :deprecated && @children.size > 0 tables = Table.from_config('sharded_tables', pool.shard_pool.name) @master.revoke_all_access! @children.concurrent_each do |child_shard| raise "Child state does not indicate cleanup is needed" unless child_shard.state == :needs_cleanup raise "Child shard master should be a slave in order to clean up" unless child_shard.is_slave? child_shard.master.disable_replication! # stop slaving from parent child_shard.prune_data_to_range tables, child_shard.min_id, child_shard.max_id end # We have to iterate over a copy of the @children array, rather than the array # directly, since Array#each skips elements when you remove elements in-place, # which Shard#remove_child does... @children.dup.each do |child_shard| child_shard.state = :ready remove_child child_shard child_shard.sync_configuration end @state = :recycle # situation B - clean up after a two-step (lockless) shard master promotion elsif @state == :needs_cleanup && @master.master && !@parent eject_master = @master.master eject_slaves = eject_master.slaves.reject { |s| s == @master } rescue [] # stop the new master from replicating from the old master (we are about to eject) @master.disable_replication! eject_slaves.each(&:revoke_all_access!) eject_master.revoke_all_access! # We need to update the asset tracker to no longer consider the ejected # nodes as part of this pool. This includes ejecting the old master, which # might be handled by Pool#after_master_promotion! instead # of Shard#sync_configuration. after_master_promotion!(@master, false) if respond_to? :after_master_promotion! @state = :ready else raise "Shard #{self} is not in a state compatible with calling cleanup! (state=#{state}, child count=#{@children.size})" end sync_configuration end
Cleans up the state of a shard. This has two use-cases: A. Run this on a parent shard after the rest of a shard split is complete. Sets this shard's master to read-only; removes the application user from self (without replicating this change to children); disables replication between the parent and the children; and then removes rows from the children that replicated to the wrong shard. B. Run this on a shard that just underwent a two-step promotion process which moved all reads, and then all writes, to a slave that has slaves of its own. For example, if upgrading MySQL on a shard by creating a newer-version slave and then adding slaves of its own to it (temp hierarchical replication setup). You can use this method to then "eject" the older-version master and its older-version slaves from the pool.
cleanup!
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
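Finally, a sketch of finishing the split from the cutover above (same hypothetical lookup):

require 'jetpants'

parent = Jetpants.topology.shards('myapp').first   # hypothetical
# Situation A: prune mis-routed rows on children, detach them, parent -> :recycle
parent.cleanup!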