Dataset columns:
code — string (length 26 to 124k)
docstring — string (length 23 to 125k)
func_name — string (length 1 to 98)
language — string (1 class)
repo — string (length 5 to 53)
path — string (length 7 to 151)
url — string (length 50 to 211)
license — string (7 classes)
def even_split_id_range(num_children)
  raise "Cannot calculate an even split of last shard" if @max_id == 'INFINITY'
  id_ranges = []
  ids_total = 1 + @max_id - @min_id
  current_min_id = @min_id

  num_children.times do |i|
    ids_this_pool = (ids_total / num_children).floor
    ids_this_pool += 1 if i < (ids_total % num_children)
    id_ranges << [current_min_id, current_min_id + ids_this_pool - 1]
    current_min_id += ids_this_pool
  end

  id_ranges
end
Splits self's ID range into num_children pieces. Returns an array of [low_id, high_id] arrays, suitable for passing to Shard#init_child_shard_masters
even_split_id_range
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
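The split logic above can be sketched as a standalone function, outside the Shard class (the `even_split` name and sample ranges here are illustrative, not part of Jetpants). Any remainder is distributed one extra ID at a time across the first `ids_total % num_children` ranges, so range sizes never differ by more than one:

```ruby
# Standalone sketch of the even-split algorithm: divide [min_id, max_id]
# into num_children contiguous, near-equal inclusive ranges.
def even_split(min_id, max_id, num_children)
  ids_total = 1 + max_id - min_id
  current_min = min_id
  Array.new(num_children) do |i|
    size = ids_total / num_children            # integer division floors
    size += 1 if i < (ids_total % num_children) # spread the remainder
    range = [current_min, current_min + size - 1]
    current_min += size
    range
  end
end
```

With the docstring's own example of 1001..4000 split three ways, this yields [[1001, 2000], [2001, 3000], [3001, 4000]].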
def init_child_shard_masters(id_ranges)
  # Validations: make sure enough machines in spare pool; enough slaves of shard being split;
  # no existing children of shard being split
  # TODO: fix the first check to separately account for :role, ie check master and standby_slave counts separately
  # (this is actually quite difficult since we can't provide a :like node in a sane way)
  spares_needed = id_ranges.size * (1 + Jetpants.standby_slaves_per_pool)
  raise "Not enough machines in spare pool!" if spares_needed > Jetpants.topology.count_spares(role: :master, like: master)
  raise 'Shard split functionality requires Jetpants config setting "standby_slaves_per_pool" is at least 1' if Jetpants.standby_slaves_per_pool < 1
  raise "Must have at least #{Jetpants.standby_slaves_per_pool} slaves of shard being split" if master.slaves.size < Jetpants.standby_slaves_per_pool
  raise "Shard #{self} already has #{@children.size} child shards" if @children.size > 0

  # Set up the child shards, and give them masters
  id_ranges.each do |my_range|
    spare = Jetpants.topology.claim_spare(role: :master, like: master)
    spare.disable_read_only! if (spare.running? && spare.read_only?)
    spare.output "Will be master for new shard with ID range of #{my_range.first} to #{my_range.last} (inclusive)"
    child_shard = Shard.new(my_range.first, my_range.last, spare, :initializing, shard_pool.name)
    child_shard.sync_configuration
    add_child(child_shard)
    Jetpants.topology.add_pool child_shard
  end

  # We'll clone the full parent data set from a standby slave of the shard being split
  source = standby_slaves.first
  targets = @children.map &:master
  source.clone_multi_threaded = self.master.clone_multi_threaded
  source.enslave_siblings! targets
end
Early step of shard split process: initialize child shard pools, pull boxes from spare list to use as masters for these new shards, and then populate them with the full data set from self (the shard being split). Supply an array of [min_id, max_id] arrays, specifying the ID ranges to use for each child. For example, if self has @min_id = 1001 and @max_id = 4000, and you're splitting into 3 evenly-sized child shards, you'd supply [[1001,2000], [2001,3000], [3001, 4000]]
init_child_shard_masters
ruby
tumblr/jetpants
lib/jetpants/shard.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/shard.rb
Apache-2.0
def initialize(name, params={})
  @name = name
  parse_params(params)
end
Create a Table. Possible keys include 'sharding_key', 'chunks', 'order_by', 'create_table', 'pool', 'indexes', and anything else handled by plugins
initialize
ruby
tumblr/jetpants
lib/jetpants/table.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/table.rb
Apache-2.0
def max_pk_val_query
  if @primary_key.is_a?(Array)
    pk_str = @primary_key.join(",")
    pk_ordering = @primary_key.map{|key| "#{key} DESC"}.join(',')
    sql = "SELECT #{pk_str} FROM #{@name} ORDER BY #{pk_ordering} LIMIT 1"
  else
    sql = "SELECT MAX(#{@primary_key}) FROM #{@name}"
  end
  return sql
end
Returns a query for the current maximum primary key value. On a multi-column PK, the query returns the key values of the first record when ordered by the key fields in order, descending
max_pk_val_query
ruby
tumblr/jetpants
lib/jetpants/table.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/table.rb
Apache-2.0
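The two SQL shapes max_pk_val_query produces can be sketched standalone (the `max_pk_sql` helper and the table/column names below are illustrative, not part of Jetpants):

```ruby
# Standalone sketch: MAX() for a scalar primary key, versus ORDER BY
# each key column DESC with LIMIT 1 for a composite primary key.
def max_pk_sql(table, primary_key)
  if primary_key.is_a?(Array)
    ordering = primary_key.map {|k| "#{k} DESC"}.join(',')
    "SELECT #{primary_key.join(',')} FROM #{table} ORDER BY #{ordering} LIMIT 1"
  else
    "SELECT MAX(#{primary_key}) FROM #{table}"
  end
end
```

The composite branch is needed because MAX() on a single column cannot express "the largest (col1, col2) tuple".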
def drop_index_query(index_name)
  raise "Unable to find index #{index_name}!" unless indexes.has_key? index_name
  "ALTER TABLE #{name} DROP INDEX #{index_name}"
end
Generates a query to drop the index named by the symbol passed to the method
drop_index_query
ruby
tumblr/jetpants
lib/jetpants/table.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/table.rb
Apache-2.0
def create_index_query(index_specs)
  index_defs = []

  index_specs.each do |index_name, index_opts|
    raise "Cannot determine index name!" if index_name.nil?
    raise "Cannot determine index metadata for new index #{index_name}!" unless index_opts[:columns].kind_of?(Array)

    index_opts[:columns].each do |col|
      raise "Table #{name} does not have column #{col}" unless columns.include?(col)
    end

    unique = ""
    if index_opts[:unique]
      unique = "UNIQUE"
    end

    index_defs << "ADD #{unique} INDEX #{index_name} (#{index_opts[:columns].join(',')})"
  end

  "ALTER TABLE #{name} #{index_defs.join(", ")}"
end
Generates a query to create the specified indexes, given a hash mapping each index name to its columns and a unique flag, as shown below: {:index_name => {:columns => [:column_one, :column_two], :unique => false}}
create_index_query
ruby
tumblr/jetpants
lib/jetpants/table.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/table.rb
Apache-2.0
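The spec-hash-to-ALTER-TABLE mapping can be sketched without the Table class or its column validation (the `index_sql` helper and sample names are illustrative):

```ruby
# Standalone sketch: turn {name => {:columns => [...], :unique => bool}}
# into a single ALTER TABLE with one ADD ... INDEX clause per spec.
def index_sql(table, index_specs)
  defs = index_specs.map do |name, opts|
    unique = opts[:unique] ? "UNIQUE " : ""
    "ADD #{unique}INDEX #{name} (#{opts[:columns].join(',')})"
  end
  "ALTER TABLE #{table} #{defs.join(', ')}"
end
```

Batching all index definitions into one ALTER TABLE matters on large tables: MySQL can build several indexes in a single table rebuild instead of one rebuild per index.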
def first_pk_col
  if @primary_key.is_a? Array
    @primary_key.first
  else
    @primary_key
  end
end
Returns the first column of the primary key, or nil if there isn't one
first_pk_col
ruby
tumblr/jetpants
lib/jetpants/table.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/table.rb
Apache-2.0
def belongs_to?(pool)
  return @pool == pool
end
Returns true if the table is associated with the supplied pool
belongs_to?
ruby
tumblr/jetpants
lib/jetpants/table.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/table.rb
Apache-2.0
def sql_export_range(min_id=false, max_id=false)
  outfile = export_file_path min_id, max_id
  sql = "SELECT * FROM #{@name} "

  if min_id || max_id
    clauses = case
      when min_id && max_id then @sharding_keys.collect {|col| "(#{col} >= #{min_id} AND #{col} <= #{max_id}) "}
      when min_id then @sharding_keys.collect {|col| "#{col} >= #{min_id} "}
      when max_id then @sharding_keys.collect {|col| "#{col} <= #{max_id} "}
    end
    sql << "WHERE " + clauses.join('OR ')
  end

  sql << "ORDER BY #{@order_by} " if @order_by
  sql << "INTO OUTFILE '#{outfile}'"
end
Returns the SQL for performing a data export of a given ID range
sql_export_range
ruby
tumblr/jetpants
lib/jetpants/table.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/table.rb
Apache-2.0
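The WHERE clause sql_export_range builds for a multi-sharding-key table is an OR of per-column range predicates, which can be sketched in isolation (the `range_where` helper and column names are illustrative):

```ruby
# Standalone sketch: a row is exported if ANY sharding key falls in
# [min_id, max_id], hence one range predicate per key joined by OR.
def range_where(sharding_keys, min_id, max_id)
  clauses = sharding_keys.map {|col| "(#{col} >= #{min_id} AND #{col} <= #{max_id})"}
  "WHERE " + clauses.join(' OR ')
end
```

The OR semantics are also why multi-sharding-key tables can export the same row from two shards, which the import side later papers over with LOAD DATA ... IGNORE.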
def sql_import_range(min_id=false, max_id=false, extra_opts=nil)
  outfile = export_file_path min_id, max_id

  option = ""
  unless extra_opts.nil?
    option = " IGNORE" if extra_opts.strip.upcase == "IGNORE" || (@sharding_keys.count > 1 && (min_id || max_id))
    option = " REPLACE" if extra_opts.strip.upcase == "REPLACE"
  end

  sql = "LOAD DATA INFILE '#{outfile}'#{option} INTO TABLE #{@name} CHARACTER SET binary"
end
Returns the SQL necessary to load the table's data. Note that we use an IGNORE on multi-sharding-key tables. This is because we get duplicate rows between export chunk files in this case.
sql_import_range
ruby
tumblr/jetpants
lib/jetpants/table.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/table.rb
Apache-2.0
def sql_cleanup_next_id(sharding_key, id, direction)
  if direction == :asc
    "SELECT MIN(#{sharding_key}) FROM #{@name} WHERE #{sharding_key} > #{id}"
  elsif direction == :desc
    "SELECT MAX(#{sharding_key}) FROM #{@name} WHERE #{sharding_key} < #{id}"
  else
    raise "Unknown direction parameter #{direction}"
  end
end
Returns the SQL necessary to iterate over a given sharding key by ID -- returns the next ID desired. Useful when performing a cleanup operation over a sparse ID range.
sql_cleanup_next_id
ruby
tumblr/jetpants
lib/jetpants/table.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/table.rb
Apache-2.0
def sql_cleanup_delete(sharding_key, min_keep_id, max_keep_id)
  sql = "DELETE FROM #{@name} WHERE #{sharding_key} = ?"

  # if there are multiple sharding cols, we need to be more careful to keep rows
  # where the OTHER sharding col(s) do fall within the shard's range
  @sharding_keys.each do |other_col|
    next if other_col == sharding_key
    sql << " AND NOT (#{other_col} >= #{min_keep_id} AND #{other_col} <= #{max_keep_id})"
  end

  return sql
end
Returns the SQL necessary to clean rows that shouldn't be on this shard. Pass in a sharding key and the min/max allowed ID on the shard, and get back a SQL DELETE statement. When running that statement, pass in an ID (obtained from sql_cleanup_next_id) as a bind variable.
sql_cleanup_delete
ruby
tumblr/jetpants
lib/jetpants/table.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/table.rb
Apache-2.0
def sql_count_rows(min_id, max_id)
  sql = "SELECT COUNT(*) FROM #{@name}"
  return sql unless min_id && max_id

  wheres = []
  if @sharding_keys.size > 0
    @sharding_keys.each {|col| wheres << "(#{col} >= #{min_id} AND #{col} <= #{max_id})"}
    sql << ' WHERE ' + wheres.join(" OR ")
  elsif first_pk_col
    sql << " WHERE #{first_pk_col} >= #{min_id} AND #{first_pk_col} <= #{max_id}"
  end
  sql
end
Returns SQL to count the number of rows between the given ID ranges. Warning: may give misleading counts on multi-sharding-key tables.
sql_count_rows
ruby
tumblr/jetpants
lib/jetpants/table.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/table.rb
Apache-2.0
def export_file_path(min_id=false, max_id=false)
  case
  when min_id && max_id then "#{Jetpants.export_location}/#{@name}#{min_id}-#{max_id}.out"
  when min_id then "#{Jetpants.export_location}/#{@name}#{min_id}-and-up.out"
  when max_id then "#{Jetpants.export_location}/#{@name}start-#{max_id}.out"
  else "#{Jetpants.export_location}/#{@name}-full.out"
  end
end
Returns a file path (as a String) for the export dumpfile of the given ID range.
export_file_path
ruby
tumblr/jetpants
lib/jetpants/table.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/table.rb
Apache-2.0
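The four-way naming scheme can be sketched with a plain directory argument standing in for Jetpants.export_location (the `export_path` helper and sample paths are illustrative):

```ruby
# Standalone sketch of the dumpfile naming: bounded range, open-ended
# low side, open-ended high side, or full-table export.
def export_path(dir, table, min_id=false, max_id=false)
  case
  when min_id && max_id then "#{dir}/#{table}#{min_id}-#{max_id}.out"
  when min_id then "#{dir}/#{table}#{min_id}-and-up.out"
  when max_id then "#{dir}/#{table}start-#{max_id}.out"
  else "#{dir}/#{table}-full.out"
  end
end
```

Because the name encodes the exact ID range, export and import can agree on chunk file locations without any shared state beyond the range boundaries.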
def load_pools
  output "Notice: no plugin has overridden Topology#load_pools, so *no* pools are imported automatically"
end
Plugin should override so that this reads in a configuration and initializes @pools as appropriate.
load_pools
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def load_shard_pools
  output "Notice: no plugin has overridden Topology#load_shard_pools, so *no* shard pools are imported automatically"
end
Plugin should override this to initialize @shard_pools
load_shard_pools
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def add_pool(pool)
  output "Notice: no plugin has overridden Topology#add_pool, so the pool was *not* added to the topology"
end
Plugin should override so that this adds the given pool to the current topology (@pools)
add_pool
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def add_shard_pool(shard_pool)
  output "Notice: no plugin has overridden Topology#add_shard_pool, so the shard pool was *not* added to the topology"
end
Plugin should override so that this adds the given shard pool to the current topology (@shard_pools)
add_shard_pool
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def claim_spares(count, options={})
  raise "Plugin must override Topology#claim_spares"
end
Plugin should override so that this returns an array of [count] Jetpants::DB objects, or throws an exception if not enough left. Options hash is plugin-specific. Jetpants core will provide these two options, but it's up to a plugin to handle (or ignore) them: :role => :master or :standby_slave, indicating what purpose the new node(s) will be used for. Useful if your hardware spec varies by node role (not recommended!) or if you vet your master candidates more carefully. :like => a Jetpants::DB object, indicating that the spare node hardware spec should be like the specified DB's spec.
claim_spares
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def count_spares(options={})
  raise "Plugin must override Topology#count_spares"
end
Plugin should override so that this returns a count of spare machines matching the selected options. options hash follows same format as for Topology#claim_spares.
count_spares
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def spares(options={})
  raise "Plugin must override Topology#spares"
end
Plugin should override so that this returns a list of spare machines matching the selected options. options hash follows same format as for Topology#claim_spares.
spares
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def valid_roles
  [:master, :active_slave, :standby_slave, :backup_slave]
end
Returns a list of valid role symbols in use in Jetpants.
valid_roles
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def slave_roles
  valid_roles.reject {|r| r == :master}
end
Returns a list of valid role symbols which indicate a slave status
slave_roles
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def shards(shard_pool_name = nil)
  if shard_pool_name.nil?
    shard_pool_name = default_shard_pool
    output "Using default shard pool #{default_shard_pool}"
  end
  pools.select {|p| p.is_a? Shard}.select {|p| p.shard_pool && p.shard_pool.name.downcase == shard_pool_name.downcase}
end
Returns array of this topology's Jetpants::Pool objects of type Jetpants::Shard
shards
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def functional_partitions
  pools.reject {|p| p.is_a? Shard}
end
Returns array of this topology's Jetpants::Pool objects that are NOT of type Jetpants::Shard
functional_partitions
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def pool(target)
  if target.is_a?(DB)
    pools.select {|p| p.master == target}.first
  else
    pools.select {|p| p.name.downcase == target.downcase}.first
  end
end
Finds and returns a single Jetpants::Pool. Target may be a name (string, case insensitive) or master (DB object).
pool
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def shard(min_id, max_id, shard_pool_name = nil)
  shard_pool_name = default_shard_pool if shard_pool_name.nil?

  if max_id.is_a?(String) && max_id.upcase == 'INFINITY'
    max_id = max_id.upcase
  else
    max_id = max_id.to_i
  end
  min_id = min_id.to_i

  shards(shard_pool_name).select {|s| s.min_id == min_id && s.max_id == max_id}.first
end
Finds and returns a single Jetpants::Shard
shard
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def shard_for_id(id, shard_pool = nil)
  shard_pool = default_shard_pool if shard_pool.nil?
  choices = shards(shard_pool).select {|s| s.min_id <= id && (s.max_id == 'INFINITY' || s.max_id >= id)}
  choices.reject! {|s| s.parent && ! s.in_config?} # filter out child shards that are still being built

  # Preferentially return child shards at this point
  if choices.any? {|s| s.parent}
    choices.select {|s| s.parent}.first
  else
    choices.first
  end
end
Returns the Jetpants::Shard that handles the given ID. During a shard split, if the child isn't "in production" yet (ie, it's still being built), this will always return the parent shard. Once the child is fully built / in production, this method will always return the child shard. However, Shard#db(:write) will correctly delegate writes to the parent shard when appropriate in this case. (see also: Topology#shard_db_for_id)
shard_for_id
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def shard_db_for_id(id, mode=:read, shard_pool = nil)
  shard_for_id(id, shard_pool).db(mode)
end
Returns the Jetpants::DB that handles the given ID with the specified mode (either :read or :write)
shard_db_for_id
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def claim_spare(options={})
  claim_spares(1, options)[0]
end
Nicer interface into claim_spares when only one DB is desired -- returns a single Jetpants::DB object instead of an array.
claim_spare
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def valid_role? role
  valid_roles.include? role.to_s.downcase.to_sym
end
Returns whether the supplied role is valid
valid_role?
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def normalize_roles(*roles)
  roles = roles.flatten.map {|r| r.to_s.downcase == 'slave' ? slave_roles.map(&:to_s) : r.to_s.downcase}.flatten
  roles.each {|r| raise "#{r} is not a valid role" unless valid_role? r}
  roles.uniq.map &:to_sym
end
Converts the supplied roles (strings or symbols) into lowercase symbol versions Will expand out special role of :slave to be all slave roles.
normalize_roles
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
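Role normalization can be sketched standalone, with a hard-coded slave-role list standing in for Topology#slave_roles and without the validation step (the `normalize` helper is illustrative):

```ruby
# Standalone sketch: lowercase each role to a symbol, and expand the
# pseudo-role :slave (or 'slave', any case) into every slave role.
SLAVE_ROLES = [:active_slave, :standby_slave, :backup_slave]

def normalize(*roles)
  roles.flatten.map do |r|
    r.to_s.downcase == 'slave' ? SLAVE_ROLES : r.to_s.downcase.to_sym
  end.flatten.uniq
end
```

The flatten/uniq pair means callers can freely mix arrays, strings, symbols, and the :slave shorthand without producing duplicates.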
def clear
  @pools = nil
  @shard_pools = nil
  DB.clear
  Host.clear
end
Clears the pool list and nukes cached DB and Host object lookup tables
clear
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def refresh
  clear
  load_shard_pools
  load_pools
  true
end
Empties and then reloads the pool list
refresh
ruby
tumblr/jetpants
lib/jetpants/topology.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/topology.rb
Apache-2.0
def mysql_root_cmd(cmd, options={})
  raise 'mysql_root_cmd: cannot include "' if cmd.include? '"'
  raise 'mysql_root_cmd: cannot include \n' if cmd.include? "\n"
  terminator = options[:terminator] || '\G'
  attempts = (options[:attempts].nil? ? 3 : (options[:attempts].to_i || 1))
  schema = (options[:schema] == true ? app_schema : options[:schema])
  failures = 0

  begin
    raise "MySQL is not running" unless running?
    supply_root_pw = (Jetpants.mysql_root_password ? "-p#{Jetpants.mysql_root_password}" : '')
    supply_port = (@port == 3306 ? '' : "-h 127.0.0.1 -P #{@port}")
    real_cmd = %Q{mysql #{supply_root_pw} #{supply_port} -ss -e "#{cmd}#{terminator}" #{schema}}
    real_cmd.untaint
    result = ssh_cmd!(real_cmd)
    raise result if result && result.downcase.start_with?('error ')
    result = parse_vertical_result(result) if options[:parse] && terminator == '\G'
    return result
  rescue => ex
    failures += 1
    raise if failures >= attempts
    output "Root query \"#{cmd}\" failed: #{ex.message}, re-trying after delay"
    sleep 3 * failures
    retry
  end
end
Runs the provided SQL statement as root, locally via an SSH command line, and returns the response as a single string. Available options: :terminator:: how to terminate the query, such as '\G' or ';'. (default: '\G') :parse:: parse a single-row, vertical-format result (:terminator must be '\G') and return it as a hash :schema:: name of schema to use, or true to use this DB's default. This may have implications when used with filtered replication! (default: nil, meaning no schema) :attempts:: by default, queries will be attempted up to 3 times. set this to 0 or false for non-idempotent queries.
mysql_root_cmd
ruby
tumblr/jetpants
lib/jetpants/db/client.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/client.rb
Apache-2.0
def disconnect
  if @db
    @db.disconnect rescue nil
    @db = nil
  end
  @user = nil
  @schema = nil
end
Closes the database connection(s) in the connection pool.
disconnect
ruby
tumblr/jetpants
lib/jetpants/db/client.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/client.rb
Apache-2.0
def reconnect(options={})
  disconnect # force disconnection even if we're not changing user or schema
  connect(options)
end
Disconnects and reconnects to the database.
reconnect
ruby
tumblr/jetpants
lib/jetpants/db/client.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/client.rb
Apache-2.0
def connection
  @db || connect
end
Returns a Sequel database object representing the current connection. If no current connection, this will automatically connect with default options.
connection
ruby
tumblr/jetpants
lib/jetpants/db/client.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/client.rb
Apache-2.0
def query(sql, *binds)
  ds = connection.fetch(sql, *binds)
  connection.execute_dui(ds.update_sql) {|c| return c.last_id > 0 ? c.last_id : c.affected_rows}
end
Execute a write (INSERT, UPDATE, DELETE, REPLACE, etc) query. If the query is an INSERT, returns the last insert ID (if an auto_increment column is involved). Otherwise returns the number of affected rows.
query
ruby
tumblr/jetpants
lib/jetpants/db/client.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/client.rb
Apache-2.0
def query_return_array(sql, *binds)
  connection.fetch(sql, *binds).all
end
Execute a read (SELECT) query. Returns an array of hashes.
query_return_array
ruby
tumblr/jetpants
lib/jetpants/db/client.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/client.rb
Apache-2.0
def query_return_first(sql, *binds)
  connection.fetch(sql, *binds).first
end
Execute a read (SELECT) query. Returns a hash of the first row only.
query_return_first
ruby
tumblr/jetpants
lib/jetpants/db/client.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/client.rb
Apache-2.0
def query_return_first_value(sql, *binds)
  connection.fetch(sql, *binds).single_value
end
Execute a read (SELECT) query. Returns the value of the first column of the first row only.
query_return_first_value
ruby
tumblr/jetpants
lib/jetpants/db/client.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/client.rb
Apache-2.0
def parse_vertical_result(text)
  results = {}
  return results unless text
  raise text.chomp if text =~ /^ERROR/

  lines = text.split("\n")
  lines.each do |line|
    # split on the first colon only, so values that contain colons stay intact
    col, val = line.split(':', 2)
    next unless val
    results[col.strip.downcase.to_sym] = val.strip
  end
  results
end
Parses the result of a MySQL query run with a \G terminator. Useful when interacting with MySQL via the command-line client (for secure access to the root user) instead of via the MySQL protocol.
parse_vertical_result
ruby
tumblr/jetpants
lib/jetpants/db/client.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/client.rb
Apache-2.0
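The parsing can be sketched standalone (the `parse_vertical` helper is illustrative; note that splitting on the first colon only is what keeps values like host:port addresses intact):

```ruby
# Standalone sketch: turn \G-style "Column: value" lines into a hash
# keyed by lowercased column-name symbols.
def parse_vertical(text)
  text.split("\n").each_with_object({}) do |line, results|
    col, val = line.split(':', 2)  # first colon only
    next unless val                # skip separators like "*** 1. row ***"
    results[col.strip.downcase.to_sym] = val.strip
  end
end
```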
def export_schemata(tables)
  output 'Exporting table definitions'
  supply_root_pw = (Jetpants.mysql_root_password ? "-p#{Jetpants.mysql_root_password}" : '')
  supply_port = (@port == 3306 ? '' : "-h 127.0.0.1 -P #{@port}")
  cmd = "mysqldump #{supply_root_pw} #{supply_port} -d #{app_schema} " + tables.join(' ') + " >#{Jetpants.export_location}/create_tables_#{@port}.sql"
  cmd.untaint
  result = ssh_cmd(cmd)
  output result
end
Exports the DROP TABLE + CREATE TABLE statements for the given tables via mysqldump
export_schemata
ruby
tumblr/jetpants
lib/jetpants/db/import_export.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/import_export.rb
Apache-2.0
def import_schemata!
  output 'Dropping and re-creating table definitions'
  result = mysql_root_cmd "source #{Jetpants.export_location}/create_tables_#{@port}.sql", terminator: '', schema: true
  output result
end
Executes a .sql file previously created via export_schemata. Warning: this will DESTROY AND RECREATE any tables contained in the file. DO NOT USE ON A DATABASE THAT CONTAINS REAL DATA!!! This method doesn't check first! The statements will replicate to any slaves! PROCEED WITH CAUTION IF RUNNING THIS MANUALLY!
import_schemata!
ruby
tumblr/jetpants
lib/jetpants/db/import_export.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/import_export.rb
Apache-2.0
def export_data(tables, min_id=false, max_id=false, infinity=false)
  pause_replication if @master && ! @repl_paused
  import_export_user = 'jetpants'
  create_user(import_export_user)
  grant_privileges(import_export_user)               # standard privs
  grant_privileges(import_export_user, '*', 'FILE')  # FILE global privs
  reconnect(user: import_export_user)
  @counts ||= {}
  tables.each {|t| @counts[t.name] = export_table_data t, min_id, max_id, infinity}
ensure
  reconnect(user: app_credentials[:user])
  drop_user import_export_user
end
Exports data for the supplied tables. If min/max ID supplied, only exports data where at least one of the table's sharding keys falls within this range. Creates a 'jetpants' db user with FILE permissions for the duration of the export.
export_data
ruby
tumblr/jetpants
lib/jetpants/db/import_export.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/import_export.rb
Apache-2.0
def export_table_data(table, min_id=false, max_id=false, infinity=false)
  unless min_id && max_id && table.chunks > 0
    output "Exporting all data", table
    rows_exported = query(table.sql_export_all)
    output "#{rows_exported} rows exported", table
    return rows_exported
  end

  output "Exporting data for ID range #{min_id}..#{max_id}", table
  lock = Mutex.new
  rows_exported = 0
  chunks_completed = 0

  (min_id..max_id).in_chunks(table.chunks) do |min, max|
    attempts = 0
    begin
      sql = table.sql_export_range(min, max)
      result = query sql
      lock.synchronize do
        rows_exported += result
        chunks_completed += 1
        percent_finished = 100 * chunks_completed / table.chunks
        output("Export #{percent_finished}% complete.", table) if table.chunks >= 40 && chunks_completed % 20 == 0
      end
    rescue => ex
      if attempts >= 10
        output "EXPORT ERROR: #{ex.message}, chunk #{min}-#{max}, giving up", table
        raise
      end
      attempts += 1
      output "EXPORT ERROR: #{ex.message}, chunk #{min}-#{max}, attempt #{attempts}, re-trying after delay", table
      ssh_cmd("rm -f " + table.export_file_path(min, max))
      sleep(1.0 * attempts)
      retry
    end
  end

  if infinity
    attempts = 0
    begin
      output("Exporting infinity range.", table)
      infinity_rows_exported = query(table.sql_export_range(max_id+1, false))
      rows_exported += infinity_rows_exported
      output("Export of infinity range complete.", table)
    rescue => ex
      if attempts >= 10
        output "EXPORT ERROR: #{ex.message}, chunk #{max_id+1}-INFINITY, giving up", table
        raise
      end
      attempts += 1
      output "EXPORT ERROR: #{ex.message}, chunk #{max_id+1}-INFINITY, attempt #{attempts}, re-trying after delay", table
      ssh_cmd("rm -f " + table.export_file_path(max_id+1, false))
      sleep(1.0 * attempts)
      retry
    end
  end

  output "#{rows_exported} rows exported", table
  rows_exported
end
Exports data for a table. Only includes the data subset that falls within min_id and max_id. The export files will be located according to the export_location configuration setting. Returns the number of rows exported.
export_table_data
ruby
tumblr/jetpants
lib/jetpants/db/import_export.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/import_export.rb
Apache-2.0
def import_data(tables, min_id=false, max_id=false, infinity=false, extra_opts=nil)
  raise "Binary logging must be disabled prior to calling DB#import_data" if binary_log_enabled?
  disable_read_only!
  import_export_user = 'jetpants'
  create_user(import_export_user)
  grant_privileges(import_export_user)               # standard privs
  grant_privileges(import_export_user, '*', 'FILE')  # FILE global privs

  # Disable unique checks upon connecting. This has to be done at the :after_connect level in Sequel
  # to guarantee it's being run on every connection in the conn pool. This is mysql2-specific.
  disable_unique_checks_proc = Proc.new {|mysql2_client| mysql2_client.query 'SET unique_checks = 0'}

  reconnect(user: import_export_user, after_connect: disable_unique_checks_proc)

  import_counts = {}
  tables.each {|t| import_counts[t.name] = import_table_data t, min_id, max_id, infinity, extra_opts}

  # Verify counts
  @counts ||= {}
  @counts.each do |name, exported|
    if exported == import_counts[name]
      output "Verified import count matches export count for table #{name}"
    else
      raise "Import count (#{import_counts[name]}) does not match export count (#{exported}) for table #{name}"
    end
  end
ensure
  reconnect(user: app_credentials[:user])
  drop_user(import_export_user)
end
Imports data for a table that was previously exported using export_data. Only includes the data subset that falls within min_id and max_id. If run after export_data (in the same process), import_data will automatically confirm that the import counts match the previous export counts. Creates a 'jetpants' db user with FILE permissions for the duration of the import. Note: the caller must disable binary logging (for speed reasons and to avoid potential GTID problems with complex operations) and set InnoDB autoinc lock mode to 2 (to support chunking of auto-inc tables) prior to calling DB#import_data. This is the caller's responsibility, and can be achieved by calling DB#restart_mysql with appropriate option overrides prior to importing data. After done importing, the caller can clear those settings by calling DB#restart_mysql again with no params.
import_data
ruby
tumblr/jetpants
lib/jetpants/db/import_export.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/import_export.rb
Apache-2.0
def import_table_data(table, min_id=false, max_id=false, infinity=false, extra_opts=nil)
  unless min_id && max_id && table.chunks > 0
    output "Importing all data", table
    rows_imported = query(table.sql_import_all)
    output "#{rows_imported} rows imported", table
    return rows_imported
  end

  output "Importing data for ID range #{min_id}..#{max_id}", table
  lock = Mutex.new
  rows_imported = 0
  chunks_completed = 0

  (min_id..max_id).in_chunks(table.chunks) do |min, max|
    attempts = 0
    begin
      sql = table.sql_import_range(min, max, extra_opts)
      result = query sql
      lock.synchronize do
        rows_imported += result
        chunks_completed += 1
        percent_finished = 100 * chunks_completed / table.chunks
        output("Import #{percent_finished}% complete.", table) if table.chunks >= 40 && chunks_completed % 20 == 0
        chunk_file_name = table.export_file_path(min, max)
        ssh_cmd "rm -f #{chunk_file_name}"
      end
    rescue => ex
      if attempts >= 10
        output "IMPORT ERROR: #{ex.message}, chunk #{min}-#{max}, giving up", table
        raise
      end
      attempts += 1
      output "IMPORT ERROR: #{ex.message}, chunk #{min}-#{max}, attempt #{attempts}, re-trying after delay", table
      sleep(3.0 * attempts)
      retry
    end
  end

  if infinity
    attempts = 0
    begin
      infinity_rows_imported = query(table.sql_import_range(max_id+1, false))
      output("Importing infinity range", table)
      chunk_file_name = table.export_file_path(max_id+1, false)
      ssh_cmd "rm -f #{chunk_file_name}"
      rows_imported += infinity_rows_imported
      output("Import of infinity range complete", table)
    rescue => ex
      if attempts >= 10
        output "IMPORT ERROR: #{ex.message}, chunk #{max_id+1}-INFINITY, giving up", table
        raise
      end
      attempts += 1
      output "IMPORT ERROR: #{ex.message}, chunk #{max_id+1}-INFINITY, attempt #{attempts}, re-trying after delay", table
      sleep(3.0 * attempts)
      retry
    end
  end

  output "#{rows_imported} rows imported", table
  rows_imported
end
Imports the data subset previously dumped through export_data. Returns number of rows imported.
import_table_data
ruby
tumblr/jetpants
lib/jetpants/db/import_export.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/import_export.rb
Apache-2.0
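The import above combines two generic patterns: even chunking of an ID range, and per-chunk retry with a growing delay. A minimal self-contained sketch of that combination (all names here are hypothetical stand-ins, not jetpants API; the real method uses `Range#in_chunks` and a 3-second base delay):

```ruby
# Split an inclusive ID range into roughly-even chunks and yield each one,
# retrying a failed chunk up to max_attempts times with a growing delay.
def process_in_chunks(min_id, max_id, chunks, max_attempts: 10)
  ids_total = 1 + max_id - min_id
  current_min = min_id
  results = []
  chunks.times do |i|
    ids_this_chunk = ids_total / chunks
    ids_this_chunk += 1 if i < (ids_total % chunks)   # spread the remainder
    lo = current_min
    hi = current_min + ids_this_chunk - 1
    attempts = 0
    begin
      results << yield(lo, hi)
    rescue => ex
      raise if attempts >= max_attempts
      attempts += 1
      sleep(0.001 * attempts)   # short delay for the sketch; jetpants uses 3.0 * attempts
      retry
    end
    current_min += ids_this_chunk
  end
  results
end
```

For example, `process_in_chunks(1, 100, 4) { |lo, hi| hi - lo + 1 }` yields the four chunks 1-25, 26-50, 51-75, 76-100, and a block that raises once succeeds on the retry.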
def row_counts(tables, min_id, max_id)
  tables = [tables] unless tables.is_a? Array
  lock = Mutex.new
  row_count = {}
  tables.each do |t|
    row_count[t.name] = 0
    if min_id && max_id && t.chunks > 1
      (min_id..max_id).in_chunks(t.chunks, Jetpants.max_concurrency) do |min, max|
        result = query_return_first_value(t.sql_count_rows(min, max))
        lock.synchronize {row_count[t.name] += result}
      end
    else
      row_count[t.name] = query_return_first_value(t.sql_count_rows(false, false))
    end
    output "#{row_count[t.name]} rows counted", t
  end
  row_count
end
Counts rows falling between min_id and max_id for the supplied tables. Returns a hash mapping table names to counts. Note: runs 10 concurrent queries to perform the count quickly. This is MUCH faster than doing a single count, but far more I/O intensive, so don't use this on a master or active slave.
row_counts
ruby
tumblr/jetpants
lib/jetpants/db/import_export.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/import_export.rb
Apache-2.0
def prune_data_to_range(tables, keep_min_id, keep_max_id)
  reconnect(user: app_credentials[:user])
  tables.each do |t|
    output "Cleaning up data, pruning to only keep range #{keep_min_id}-#{keep_max_id}", t
    rows_deleted = 0
    [:asc, :desc].each {|direction| rows_deleted += delete_table_data_outside_range(t, keep_min_id, keep_max_id, direction)}
    output "Done cleanup; #{rows_deleted} rows deleted", t
  end
end
Cleans up all rows that should no longer be on this db. Supply the ID range (in terms of the table's sharding key) of rows to KEEP.
prune_data_to_range
ruby
tumblr/jetpants
lib/jetpants/db/import_export.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/import_export.rb
Apache-2.0
def delete_table_data_outside_range(table, keep_min_id, keep_max_id, direction)
  rows_deleted = 0

  if direction == :asc
    dir_english = "Ascending"
    boundary = keep_max_id
    output "Removing rows with ID > #{boundary}", table
  elsif direction == :desc
    dir_english = "Descending"
    boundary = keep_min_id
    output "Removing rows with ID < #{boundary}", table
  else
    raise "Unknown direction parameter #{direction}"
  end

  table.sharding_keys.each do |col|
    deleter_sql = table.sql_cleanup_delete(col, keep_min_id, keep_max_id)
    id = boundary
    iter = 0
    while id do
      finder_sql = table.sql_cleanup_next_id(col, id, direction)
      id = query_return_first_value(finder_sql)
      break unless id
      rows_deleted += query(deleter_sql, id)

      # Slow down on multi-col sharding key tables, due to queries being far more expensive
      sleep(0.0001) if table.sharding_keys.size > 1

      iter += 1
      output("#{dir_english} deletion progress: through #{col} #{id}, deleted #{rows_deleted} rows so far", table) if iter % 50000 == 0
    end
  end
  rows_deleted
end
Helper method used by prune_data_to_range. Deletes data for the given table that falls either below the supplied keep_min_id (if direction is :desc) or falls above the supplied keep_max_id (if direction is :asc).
delete_table_data_outside_range
ruby
tumblr/jetpants
lib/jetpants/db/import_export.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/import_export.rb
Apache-2.0
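The deletion walk above starts at the keep-boundary and repeatedly asks "what is the next ID past my cursor?", deleting as it goes, so it never scans the kept range. An in-memory sketch of that pattern (the method name and Array stand-ins are hypothetical; `sql_cleanup_next_id` / `sql_cleanup_delete` play these roles in the real code):

```ruby
# Delete every element of ids outside [keep_min, keep_max] by walking
# outward from the boundary in the given direction. Returns rows deleted.
def prune_outside_range!(ids, keep_min, keep_max, direction)
  boundary = (direction == :asc ? keep_max : keep_min)
  deleted = 0
  id = boundary
  loop do
    # analogue of sql_cleanup_next_id: find the next ID past the cursor
    id = if direction == :asc
           ids.select { |x| x > id }.min
         else
           ids.select { |x| x < id }.max
         end
    break if id.nil?
    ids.delete(id)   # analogue of sql_cleanup_delete
    deleted += 1
  end
  deleted
end
```

Running both directions over `(1..10).to_a` with a keep-range of 3..7 deletes 8, 9, 10 ascending and 2, 1 descending, leaving exactly the kept range.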
def clone_to!(*targets)
  targets.flatten!
  raise "Cannot clone an instance onto its master" if master && targets.include?(master)

  destinations = {}
  targets.each do |t|
    destinations[t] = t.mysql_directory
    raise "Over 100 MB of existing MySQL data on target #{t}, aborting copy!" if t.data_set_size > 100000000
  end

  # Construct the list of files and dirs to copy. We include ib_lru_dump if present
  # (ie, if using Percona Server with innodb_buffer_pool_restore_at_startup enabled)
  # since this will greatly improve warm-up time of the cloned nodes
  databases = mysql_root_cmd("SHOW DATABASES").split("\n").select { |row| row.include?('Database:') }.map{ |line| line.split(":").last.strip }.reject { |s| Jetpants.mysql_clone_ignore.include? s }

  # If using GTID, we need to remember the source's gtid_executed from the point-in-time of the copy.
  # We also need to ensure that the targets match the same gtid-related variables as the source.
  # Ordinarily this should be managed by my.cnf, but while a fleet-wide GTID rollout is still underway,
  # claimed spares won't get the appropriate settings automatically since they aren't in a pool yet.
  pause_replication unless @repl_paused
  if gtid_mode?
    source_gtid_executed = gtid_executed
    targets.each do |t|
      t.add_start_option '--loose-gtid-mode=ON'
      t.add_start_option '--enforce-gtid-consistency=1'
    end
  end
  if gtid_deployment_step?
    targets.each {|t| t.add_start_option '--loose-gtid-deployment-step=1'}
  end

  [self, targets].flatten.concurrent_each {|t| t.stop_query_killer; t.stop_mysql}
  targets.concurrent_each {|t| t.ssh_cmd "rm -rf #{t.mysql_directory}/ib_logfile*"}

  files = (databases + ['ibdata1', app_schema]).uniq
  files += ['*.tokudb', 'tokudb.*', 'log*.tokulog*'] if ssh_cmd("test -f #{mysql_directory}/tokudb.environment 2>/dev/null; echo $?").chomp.to_i == 0
  files << 'ib_lru_dump' if ssh_cmd("test -f #{mysql_directory}/ib_lru_dump 2>/dev/null; echo $?").chomp.to_i == 0

  if @clone_multi_threaded
    multi_threaded_cloning(mysql_directory, destinations, :port => 3306, :files => files, :overwrite => true)
  else
    fast_copy_chain(mysql_directory, destinations, :port => 3306, :files => files, :overwrite => true)
  end
  clone_settings_to!(*targets)

  [self, targets].flatten.concurrent_each do |t|
    t.start_mysql
    t.start_query_killer
  end

  # If the source is using GTID, we need to set the targets' gtid_purged to equal the
  # source's gtid_executed. This is needed because we do not copy binlogs, which are
  # the source of truth for gtid_purged and gtid_executed. (Note, setting gtid_purged
  # also inherently sets gtid_executed.)
  unless source_gtid_executed.nil?
    targets.concurrent_each do |t|
      # Restarts done by jetpants will preserve gtid_mode, but manual restarts might not,
      # since the target isn't in a pool yet
      raise "Target #{t} is not using gtid_mode as expected! Did something restart it out-of-band?" unless t.gtid_mode?

      # If gtid_executed is non-empty on a fresh node, the node probably wasn't fully re-provisioned.
      # This is bad since gtid_executed is set on startup based on the binlog contents, and we can't
      # set gtid_purged unless it's empty. So we have to RESET MASTER to fix this.
      if t.gtid_executed(true) != ''
        t.output 'Node unexpectedly has non-empty gtid_executed! Probably leftover binlogs from previous life...'
        t.output 'Attempting a RESET MASTER to nuke leftovers'
        t.output t.mysql_root_cmd 'RESET MASTER'
      end

      t.gtid_purged = source_gtid_executed
      raise "Expected gtid_executed on target #{t} to now match source, but it doesn't" unless t.gtid_executed == source_gtid_executed
    end
  end
end
Copies mysql db files from self to one or more additional DBs. WARNING: temporarily shuts down mysql on self, and WILL OVERWRITE CONTENTS OF MYSQL DIRECTORY ON TARGETS. Confirms first that none of the targets have over 100MB of data in the schema directory or in ibdata1. MySQL is restarted on source and targets afterwards.
clone_to!
ruby
tumblr/jetpants
lib/jetpants/db/import_export.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/import_export.rb
Apache-2.0
def drop_user(username=false)
  username ||= app_credentials[:user]
  commands = ['SET SESSION sql_log_bin = 0']
  Jetpants.mysql_grant_ips.each do |ip|
    commands << "DROP USER '#{username}'@'#{ip}'"
  end
  commands << "FLUSH PRIVILEGES"
  commands = commands.join '; '
  mysql_root_cmd commands
  Jetpants.mysql_grant_ips.each do |ip|
    output "Dropped user '#{username}'@'#{ip}' (only on this node -- not binlogged)"
  end
end
Drops a user. SEE NOTE ABOVE RE: ALWAYS SKIPS BINLOG
drop_user
ruby
tumblr/jetpants
lib/jetpants/db/privileges.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/privileges.rb
Apache-2.0
def grant_or_revoke_privileges(statement, username, database, privileges)
  preposition = (statement.downcase == 'revoke' ? 'FROM' : 'TO')
  username ||= app_credentials[:user]
  database ||= app_schema
  privileges = Jetpants.mysql_grant_privs if privileges.empty?
  privileges = privileges.join(',')

  commands = ['SET SESSION sql_log_bin = 0']
  Jetpants.mysql_grant_ips.each do |ip|
    commands << "#{statement} #{privileges} ON #{database}.* #{preposition} '#{username}'@'#{ip}'"
  end
  commands << "FLUSH PRIVILEGES"
  commands = commands.join '; '
  mysql_root_cmd commands

  Jetpants.mysql_grant_ips.each do |ip|
    verb = (statement.downcase == 'revoke' ? 'Revoking' : 'Granting')
    target_db = (database == '*' ? 'globally' : "on #{database}.*")
    output "#{verb} privileges #{preposition.downcase} '#{username}'@'#{ip}' #{target_db}: #{privileges.downcase} (only on this node -- not binlogged)"
  end
end
Helper method that can do grants or revokes. SEE NOTE ABOVE RE: ALWAYS SKIPS BINLOG
grant_or_revoke_privileges
ruby
tumblr/jetpants
lib/jetpants/db/privileges.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/privileges.rb
Apache-2.0
def revoke_all_access!
  user_name = app_credentials[:user]
  enable_read_only!
  drop_user(user_name) # never written to binlog, so no risk of it replicating
end
Disables access to a DB by the application user, and sets the DB to read-only. Useful when decommissioning instances from a shard that's been split, or a former slave that's been permanently removed from the pool
revoke_all_access!
ruby
tumblr/jetpants
lib/jetpants/db/privileges.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/privileges.rb
Apache-2.0
def enable_read_only!
  if read_only?
    output "Node already has read_only mode enabled"
    true
  else
    output "Enabling read_only mode"
    mysql_root_cmd 'SET GLOBAL read_only = 1'
    read_only?
  end
end
Enables global read-only mode on the database.
enable_read_only!
ruby
tumblr/jetpants
lib/jetpants/db/privileges.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/privileges.rb
Apache-2.0
def disable_read_only!
  if read_only?
    output "Disabling read_only mode"
    mysql_root_cmd 'SET GLOBAL read_only = 0'
    not read_only?
  else
    output "Confirmed that read_only mode is already disabled"
    true
  end
end
Disables global read-only mode on the database.
disable_read_only!
ruby
tumblr/jetpants
lib/jetpants/db/privileges.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/privileges.rb
Apache-2.0
def override_mysql_grant_ips(ips)
  ip_holder = Jetpants.mysql_grant_ips
  Jetpants.mysql_grant_ips = ips
  begin
    yield
  rescue StandardError, Interrupt, IOError
    Jetpants.mysql_grant_ips = ip_holder
    raise
  end
  Jetpants.mysql_grant_ips = ip_holder
end
Overrides Jetpants.mysql_grant_ips temporarily while executing a block, then sets Jetpants.mysql_grant_ips back to the original values. e.g. master.override_mysql_grant_ips(['10.10.10.10']) do ... end
override_mysql_grant_ips
ruby
tumblr/jetpants
lib/jetpants/db/privileges.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/privileges.rb
Apache-2.0
def resume_replication
  raise "This DB object has no master" unless master

  # Display the replication progress
  if gtid_mode? && gtid_executed != ''
    gtid_executed_from_pool_master_string(true)
  else
    repl_binlog_coordinates(true)
  end

  output "Resuming replication from #{@master}."
  output mysql_root_cmd "START SLAVE"
  @repl_paused = false
end
Starts replication, or restarts replication after a pause
resume_replication
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def pause_replication_with(*db_list)
  raise "DB#pause_replication_with requires at least one DB as parameter!" unless db_list.size > 0
  my_pool = pool(true)
  if my_pool.gtid_mode?
    raise 'DB#pause_replication_with requires all nodes to be in the same pool!' unless db_list.all? {|db| db.pool(true) == my_pool}
    raise 'DB#pause_replication_with cannot be used on a master!' if self == my_pool.master || db_list.include?(my_pool.master)
  else
    raise 'Without GTID, DB#pause_replication_with requires all nodes to have the same master!' unless db_list.all? {|db| db.master == master}
  end

  db_list.unshift self unless db_list.include? self
  db_list.concurrent_each &:pause_replication
  furthest_replica = db_list.inject{|furthest_db, this_db| this_db.ahead_of?(furthest_db) ? this_db : furthest_db}

  if my_pool.gtid_mode?
    gtid_set = furthest_replica.gtid_executed_from_pool_master(true)
    # If no replicas have executed transactions from current pool master, fall back to
    # using the full gtid_executed. This is risky if any replicas have errant transactions
    # (i.e. something ran binlogged statements directly on a replica) since START SLAVE UNTIL
    # will never complete in this situation.
    if gtid_set.nil?
      output 'WARNING: no replicas have executed transactions from current pool master; something may be amiss'
      gtid_set = furthest_replica.gtid_executed(true)
    end
  else
    binlog_coord = furthest_replica.repl_binlog_coordinates(true)
  end

  db_list.select {|db| furthest_replica.ahead_of? db}.concurrent_each do |db|
    db.resume_replication_until(binlog_coord, gtid_set)
  end

  if db_list.any? {|db| furthest_replica.ahead_of?(db) || db.ahead_of?(furthest_replica)}
    raise 'Unexpectedly unable to stop slaves in the same position; perhaps something restarted replication?'
  end
end
Stops replication at the same place on many nodes. Uses GTIDs if enabled, otherwise uses binlog coordinates. Only works on hierarchical replication topologies if using GTID.
pause_replication_with
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def resume_replication_until(binlog_coord, gtid_set=nil, timeout_sec=3600)
  if binlog_coord.nil? && !gtid_set.nil?
    output "Resuming replication until after GTID set #{gtid_set}, waiting for up to #{timeout_sec} seconds."
    output mysql_root_cmd "START SLAVE UNTIL SQL_AFTER_GTIDS = '#{gtid_set}'"
    result = query_return_first_value("SELECT WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS(?, ?)", gtid_set, timeout_sec)
    if result == -1
      mysql_root_cmd "STOP SLAVE" # safer than leaving an UNTIL replication condition set indefinitely
      raise "#{self} did not reach GTID set #{gtid_set} within #{timeout_sec} seconds. Stopping replication."
    end
  elsif !binlog_coord.nil? && gtid_set.nil?
    output "Resuming replication until master coords (#{binlog_coord[0]}, #{binlog_coord[1]}), waiting for up to #{timeout_sec} seconds."
    output mysql_root_cmd "START SLAVE UNTIL MASTER_LOG_FILE = '#{binlog_coord[0]}', MASTER_LOG_POS = #{binlog_coord[1]}"
    result = query_return_first_value("SELECT MASTER_POS_WAIT(?, ?, ?)", *binlog_coord, timeout_sec)
    if result == -1
      mysql_root_cmd "STOP SLAVE" # safer than leaving an UNTIL replication condition set indefinitely
      raise "#{self} did not reach master coords (#{binlog_coord[0]}, #{binlog_coord[1]}) within #{timeout_sec} seconds. Stopping replication."
    end
  else
    raise "DB#resume_replication_until requires EXACTLY ONE of binlog_coord or gtid_set to be non-nil"
  end

  # START SLAVE UNTIL will leave the slave io thread running, so we explicitly stop it
  output mysql_root_cmd "STOP SLAVE IO_THREAD"
  @repl_paused = true
end
Resumes replication up to the specified binlog_coord (array of [logfile string, position int]) or gtid set (string). You must supply binlog_coord OR gtid_set, but not both; set the other one to nil. This method blocks until the specified coordinates/gtids have been reached, up to a max of the specified timeout, after which point it raises an exception.
resume_replication_until
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def enslave_siblings!(targets)
  raise "Can only call enslave_siblings! on a slave instance" unless master
  disable_monitoring
  targets.each {|t| t.disable_monitoring}
  pause_replication unless @repl_paused

  change_master_options = {
    user:     replication_credentials[:user],
    password: replication_credentials[:pass],
  }
  if pool(true).gtid_mode?
    change_master_options[:auto_position] = true
    gtid_executed_from_pool_master_string(true) # display gtid executed value
  else
    change_master_options[:log_file], change_master_options[:log_pos] = repl_binlog_coordinates
  end

  clone_to!(targets)

  targets.each do |t|
    t.change_master_to(master, change_master_options)
    t.enable_read_only!
  end

  [ self, targets ].flatten.each(&:resume_replication) # should already have happened from the clone_to! restart anyway, but just to be explicit
  [ self, targets ].flatten.concurrent_each{|n| n.catch_up_to_master 21600 }
  [ self, targets ].flatten.each(&:enable_monitoring)
end
Wipes out the target instances and turns them into slaves of self's master. Resumes replication on self afterwards, but does NOT automatically start replication on the targets. Warning: takes self offline during the process, so don't use on an active slave!
enslave_siblings!
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def repl_binlog_coordinates(display_info=true)
  raise "This instance is not a slave" unless master
  status = slave_status
  file, pos = status[:relay_master_log_file], status[:exec_master_log_pos].to_i
  output "Has executed through master's binlog coordinates of (#{file}, #{pos})." if display_info
  [file, pos]
end
Use this on a slave to return [master log file name, position] for how far this slave has executed (in terms of its master's binlogs) in its SQL replication thread.
repl_binlog_coordinates
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def binlog_coordinates(display_info=true)
  hash = mysql_root_cmd('SHOW MASTER STATUS', :parse=>true)
  raise "Cannot obtain binlog coordinates of this master because binary logging is not enabled" unless hash[:file]
  output "Own binlog coordinates are (#{hash[:file]}, #{hash[:position].to_i})." if display_info
  [hash[:file], hash[:position].to_i]
end
Returns a two-element array containing [log file name, position] for this database. Only useful when called on a master. This is the current instance's own binlog coordinates, NOT the coordinates of replication progress on a slave!
binlog_coordinates
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def seconds_behind_master
  raise "This instance is not a slave" unless master
  lag = slave_status[:seconds_behind_master]
  lag == 'NULL' ? nil : lag.to_i
end
Returns the number of seconds behind the master the replication execution is, as reported by SHOW SLAVE STATUS.
seconds_behind_master
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def catch_up_to_master(timeout=21600, threshold=3, poll_frequency=5)
  raise "This instance is not a slave" unless master
  resume_replication if @repl_paused

  if pool(true).gtid_mode?
    master_gtid_executed = master.gtid_executed(true)
    master_taking_writes = Proc.new {|db| db.gtid_executed != master_gtid_executed}
  else
    master_coords = master.binlog_coordinates(true)
    master_taking_writes = Proc.new {|db| db.binlog_coordinates != master_coords}
  end

  times_at_zero = 0
  start = Time.now.to_i
  output "Waiting to catch up to master"

  while (Time.now.to_i - start) < timeout
    lag = seconds_behind_master
    if lag == 0
      if master_taking_writes.call(master)
        master_taking_writes = Proc.new {true} # no need for subsequent re-checking
        times_at_zero += 1
        if times_at_zero >= threshold
          output "Caught up to master."
          return true
        end
      elsif !master.ahead_of? self
        output "Caught up to master completely (no writes occurring)."
        return true
      end
      sleep poll_frequency
    elsif lag.nil?
      resume_replication
      sleep 1
      raise "Unable to restart replication" if seconds_behind_master.nil?
    else
      output "Currently #{lag} seconds behind master."
      times_at_zero = 0
      extra_sleep_time = (lag > 30000 ? 300 : (seconds_behind_master / 100).ceil)
      sleep poll_frequency + extra_sleep_time
    end
  end
  raise "This instance did not catch up to its master within #{timeout} seconds"
end
Call this method on a replica to block until it catches up with its master. If this doesn't happen within timeout (seconds), raises an exception. If the pool is currently receiving writes, this method monitors slave lag and will wait for self's SECONDS_BEHIND_MASTER to reach 0 and stay at 0 after repeated polls (based on threshold and poll_frequency). If a large amount of slave lag is seen, polling frequency is automatically adjusted. In other words, with default settings: checks slave lag every 5+ sec, and returns true if slave lag is zero 3 times in a row. Gives up if this does not occur within a one-hour period. If the pool is NOT receiving writes, this method bases its behavior on coords/gtids (depending which is in use) and guarantees that self is at the same position as its master.
catch_up_to_master
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
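The core of catch_up_to_master is a "lag must stay at zero for N consecutive polls" loop. A minimal sketch of just that logic, with the lag probe passed in as a block (the method name and keyword arguments here are hypothetical; `lag_fn` stands in for seconds_behind_master, and the real method layers GTID/coordinate checks and adaptive sleeps on top):

```ruby
# Poll lag_fn until it reports 0 for `threshold` consecutive polls,
# or until `timeout` seconds elapse. Returns true on success, false on timeout.
def wait_until_caught_up(timeout:, threshold:, poll_frequency:, &lag_fn)
  times_at_zero = 0
  start = Time.now.to_f
  while (Time.now.to_f - start) < timeout
    if lag_fn.call == 0
      times_at_zero += 1
      return true if times_at_zero >= threshold
    else
      times_at_zero = 0   # any nonzero lag resets the streak
    end
    sleep poll_frequency
  end
  false
end
```

Resetting the counter on any nonzero reading is the important design choice: a single zero sample can be a fluke between replication bursts, so only a sustained streak counts as caught up.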
def slave_status
  hash = mysql_root_cmd('SHOW SLAVE STATUS', :parse=>true)
  hash = {} if hash[:master_user] == 'test'
  if @master && hash.count < 1
    message = "should be a slave of #{@master}, but SHOW SLAVE STATUS indicates otherwise"
    raise "#{self}: #{message}" if Jetpants.verify_replication
    output message
  end
  hash
end
Returns a hash containing the information from SHOW SLAVE STATUS
slave_status
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def replication_credentials
  user = false
  pass = false
  if master || slaves.count > 0
    target = (@master ? self : @slaves[0])
    results = target.ssh_cmd("cat #{mysql_directory}/master.info | head -6 | tail -2").split
    if results.count == 2 && results[0] != 'test'
      user, pass = results
    end
  end
  user && pass ? {user: user, pass: pass} : Jetpants.replication_credentials
end
Reads an existing master.info file on this db or one of its slaves, propagates the info back to the Jetpants singleton, and returns it as a hash containing :user and :pass. If the node is not a slave and has no slaves, will use the global Jetpants config instead.
replication_credentials
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def ahead_of?(node)
  # During a shard operation, nodes may be in different logical pools according to
  # the asset tracker, but the same physical pool in reality. We only want to raise
  # an exception if the nodes truly share no common ancestry.
  my_pool = pool(true)
  target_pool = node.pool(true)
  if my_pool != target_pool
    my_pool_master = master || self
    my_pool_master = my_pool_master.master while my_pool_master.master
    target_pool_master = node.master || node
    target_pool_master = target_pool_master.master while target_pool_master.master
    raise "Cannot compare nodes with different pools and different top-level masters!" if my_pool_master != target_pool_master
  end

  # Only use GTID if enabled and the nodes are in the same logical pool. Otherwise, methods
  # like gtid_executed_from_pool_master will not work properly.
  if my_pool.gtid_mode? && my_pool == target_pool
    # Ordinarily we only want to concern ourselves with transactions that came from the
    # current pool master. BUT if the target node hasn't executed any of those, then we need
    # to look at other things as a workaround. If self DOES have transactions from pool
    # master and the other node doesn't, we know self is ahead; otherwise, look at full gtid
    # sets (not just from pool master) and see if one node has transactions the other does not.
    node_gtid_exec = node.gtid_executed_from_pool_master(true)
    if node_gtid_exec.nil?
      return true unless gtid_executed_from_pool_master(true).nil?
      self_has_extra = has_extra_transactions_vs? node
      node_has_extra = node.has_extra_transactions_vs? self
      raise "Cannot determine which node is ahead; both have disjoint extra transactions" if self_has_extra && node_has_extra
      return self_has_extra
    else
      return ahead_of_gtid? node_gtid_exec
    end
  end

  # If we get here, we must use coordinates instead of GTID. The correct coordinates
  # to use, on self and on node, depend on their roles relative to each other.
  if node == self.master
    return repl_ahead_of_coordinates?(node.binlog_coordinates)
  elsif self == node.master
    return ahead_of_coordinates?(node.repl_binlog_coordinates)
  elsif self.master == node.master
    # This case also gets triggered if they're both false, but that never
    # occurs -- they wouldn't be in the same pool in that case
    return repl_ahead_of_coordinates?(node.repl_binlog_coordinates)
  else
    raise "Cannot compare coordinates of nodes more than one level apart in replication hierarchy!"
  end
end
Return true if this node's replication progress is ahead of the provided node, or false otherwise. The nodes must be in the same pool to be comparable. Without GTID, they must also be at most one level away from each other on the replication hierarchy.
ahead_of?
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def ahead_of_gtid?(gtid_set)
  self_progress = gtid_executed_from_pool_master

  # Don't try comparing to a node that hasn't executed any transactions from
  # current pool master. The definition of "ahead" in this situation could be
  # undefined, e.g. if self_progress is also nil. Instead in this case, use
  # another method like DB#has_extra_transactions_vs? to get a sane result.
  raise "Cannot call DB#ahead_of_gtid? with a nil arg" if gtid_set.nil?

  if self_progress.nil?
    # self hasn't executed transactions from pool master but other node has: we know we're behind
    false
  elsif gtid_set == self_progress
    # same gtid_executed: we're not "ahead"
    false
  else
    result = query_return_first_value("SELECT gtid_subset(?, ?)", gtid_set, self_progress)
    # a 1 result means "gtid_set is a subset of self_progress" which means self has executed
    # all of these transactions already
    result == 1
  end
end
Returns true if self has executed at least one transaction past the supplied gtid_set The arg should only contain one uuid, obtained from gtid_executed_from_pool_master. (With a full gtid_executed containing multiple uuids, the notion of "ahead" could be undefined, as there's no implied ordering of uuids)
ahead_of_gtid?
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
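The subset test above leans on MySQL's GTID_SUBSET() function. A rough Ruby analogue for the simplified single-UUID case (sets of the form "uuid:lo-hi", as returned by gtid_executed_from_pool_master) can make the "ahead" semantics concrete; everything here is a hypothetical illustration, not jetpants API, and real GTID sets can hold multiple disjoint intervals that this sketch ignores:

```ruby
# Parse "uuid:lo-hi" (or "uuid:n") into an inclusive transaction-ID range.
def gtid_range(gtid_set)
  interval = gtid_set.split(':').last
  lo, hi = interval.split('-').map(&:to_i)
  hi ||= lo
  (lo..hi)
end

# self_progress is "ahead of" other_set when other_set's interval is fully
# contained in self_progress's and the two sets are not identical --
# mirroring ahead_of_gtid?'s nil / equality / gtid_subset branches.
def ahead_of_gtid_set?(self_progress, other_set)
  return false if self_progress.nil?        # no txns from pool master: behind
  return false if other_set == self_progress # identical progress: not "ahead"
  mine  = gtid_range(self_progress)
  other = gtid_range(other_set)
  mine.cover?(other.first) && mine.cover?(other.last)
end
```

So `ahead_of_gtid_set?('u:1-100', 'u:1-50')` is true, while comparing equal sets, a superset argument, or a nil self all report false, matching the method's documented behavior.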
def gtid_deployment_step?
  # to_s is needed because global_variables[:gtid_deployment_step] is nil for non-Percona Server,
  # or for Percona Server versions prior to 5.6.22-72.0
  global_variables[:gtid_deployment_step].to_s.downcase == 'on'
end
note: as implemented, this is Percona Server specific. MySQL 5.6 has no equivalent functionality. https://www.percona.com/doc/percona-server/5.6/flexibility/online_gtid_deployment.html WebScaleSQL ties this in to read_only instead of making it a separate variable. Percona presumably used a separate variable to support master-master pools, but Jetpants does not support these.
gtid_deployment_step?
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def gtid_executed(display_info=false)
  result = query_return_first_value "SELECT @@global.gtid_executed"
  result.gsub! "\n", ''
  if display_info
    displayable_result = (result == '' ? '[empty]' : result)
    output "gtid_executed is #{displayable_result}"
  end
  result
end
This intentionally executes a query instead of using SHOW GLOBAL VARIABLES, because the value can get quite long, and SHOW GLOBAL VARIABLES truncates its output
gtid_executed
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def gtid_executed_from_pool_master(display_info=false)
  uuid = pool(true).master_uuid
  gtid_sets = gtid_executed.split(',')

  # This intentionally will be nil if no transactions executed from pool master
  result = gtid_sets.select {|gs| gs.start_with? "#{uuid}:"}.first

  if display_info
    if result.nil?
      output "gtid_executed does not contain any transactions from pool master #{uuid}!"
    else
      output "gtid_executed from pool master is #{result}"
    end
  end
  result
end
Returns the portion of self's gtid_executed relevant to just the pool's current master. This is useful for comparing replication progress without potentially getting tripped-up by missing transactions from other server_uuids. (which shouldn't normally happen anyway, but if they do, it would cause problems with WAIT_UNTIL_SQL_THREAD_AFTER_GTIDS if using the full gtid_executed set) If the DB has not executed any transactions from the pool master yet, returns nil! This is intentional, so that callers can handle this situation as appropriate.
gtid_executed_from_pool_master
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def gtid_executed_from_pool_master_string(display_info=false)
  result = gtid_executed_from_pool_master(display_info) rescue "unknown:unknown"
  if result.nil?
    uuid = pool(true).master_uuid
    "#{uuid}:none"
  else
    result
  end
end
Like gtid_executed_from_pool_master, but instead of nil in the no-transactions case, returns a user-friendly uuid:none string. In the cannot-determine-master-UUID case, does not throw an exception. This method should generally only be used for display purposes, not for any MySQL function calls!
gtid_executed_from_pool_master_string
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def has_extra_transactions_vs?(relative_to_node)
  my_pool = pool(true)
  raise "Node #{relative_to_node} is not in the same pool as #{self}" unless relative_to_node.pool(true) == my_pool
  raise "DB#has_extra_transactions_vs? requires gtid_mode" unless my_pool.gtid_mode?

  # We specifically obtain gtid_executed for self BEFORE obtaining it for the
  # other node. That way, if writes are still occurring and the other node is
  # higher on the replication chain, we won't get thrown off by the new writes
  # looking like transactions that ran on the replica but not the master.
  self_gtid_exec = self.gtid_executed
  other_gtid_exec = relative_to_node.gtid_executed

  result = query_return_first_value("SELECT gtid_subset(?, ?)", self_gtid_exec, other_gtid_exec)
  # a 0 result means "not a subset" which indicates self_gtid_exec contains transactions missing
  # from other_gtid_exec
  result == 0
end
Returns true if self has executed transactions that relative_to_node has not. relative_to_node must be in the same pool as self. NOTE: if there are writes occurring in the pool, this method should only be used with a relative_to_node that is higher on the replication chain than self (i.e. relative_to_node is self's master, or self's master's master).
has_extra_transactions_vs?
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def purged_transactions_needed_by?(node)
  my_pool = pool(true)
  raise "Node #{node} is not in the same pool as #{self}" unless node.pool(true) == my_pool
  raise "DB#purged_transactions_needed_by? requires gtid_mode" unless my_pool.gtid_mode?

  result = node.query_return_first_value("SELECT gtid_subset(?, @@global.gtid_executed)", gtid_purged)
  # a 0 result means "not a subset" which indicates there are transactions on self's
  # gtid_purged list that have not been executed on node
  result == 0
end
Returns true if self has already purged binlogs containing transactions that the target node would need. This means that if we promoted self to be the master of the target node, replication would break on the target node.
purged_transactions_needed_by?
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def replay_missing_transactions(siblings, change_master_options, timeout=300, progress_interval=5)
  if siblings.empty?
    output "Cannot replay missing transactions -- no siblings provided"
    return false
  end
  if replicating? || pool(true).master.running?
    output "DB#replay_missing_transactions may only be used in a dead-master scenario"
    return false
  end

  furthest_replica = siblings.inject{|furthest_db, this_db| this_db.ahead_of?(furthest_db) ? this_db : furthest_db}
  unless furthest_replica.ahead_of? self
    output "Selected new master is already up-to-date with all transactions executed by other replicas"
    return true
  end

  if furthest_replica.purged_transactions_needed_by? self
    output "WARNING: Needs missing transactions from furthest-ahead replica #{furthest_replica}, but they have already been purged!"
    output "This means we cannot replay these transactions on the new master, so they will effectively be lost."
    gtid_executed(true)
    furthest_replica.gtid_executed(true)
    return false
  end

  output "Obtaining missing transactions by temporarily replicating from furthest-ahead node #{furthest_replica}"
  gtid_executed(true)
  furthest_replica.gtid_executed(true)
  change_master_to furthest_replica, change_master_options
  resume_replication

  attempts = timeout / progress_interval
  while attempts > 0 && replicating? && furthest_replica.has_extra_transactions_vs?(self) do
    sleep progress_interval
    attempts -= 1
    gtid_executed(true)
  end
  disable_replication!

  if furthest_replica.has_extra_transactions_vs?(self)
    output "WARNING: Unable to complete replaying missing transactions."
    output "Giving up and proceeding with the rest of promotion. Some transactions will effectively be lost."
    false
  else
    output "Successfully replayed missing transactions from furthest-ahead node #{furthest_replica}"
    true
  end
end
When a master dies and the new-master candidate is potentially not the furthest-ahead replica, call this method on the new-master to have it catch up on missing transactions from whichever sibling is further ahead. Returns true if the transactions were applied successfully or if no catch up was necessary, or false if not successful. siblings should be an array of other DBs in the pool at the same level as self (but excluding self). timeout and progress_interval are in seconds.
replay_missing_transactions
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
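The furthest-ahead-replica selection in replay_missing_transactions can be sketched in isolation. This is a minimal, hypothetical stand-in: the `Replica` struct and its `ahead_of?` comparison on an integer transaction count are made up for illustration, whereas the real `ahead_of?` compares GTID sets between live nodes.

```ruby
# Hypothetical stand-in for a DB node: ahead_of? here just compares an
# integer "transaction count"; the real method compares GTID sets.
Replica = Struct.new(:name, :txn_count) do
  def ahead_of?(other)
    txn_count > other.txn_count
  end
end

siblings = [
  Replica.new('db1', 100),
  Replica.new('db2', 142),
  Replica.new('db3', 137),
]

# Same inject idiom as replay_missing_transactions: keep whichever node
# is ahead of the current front-runner.
furthest_replica = siblings.inject { |furthest_db, this_db| this_db.ahead_of?(furthest_db) ? this_db : furthest_db }

puts furthest_replica.name  # => "db2"
```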
def repoint_to(new_master_node)
  raise "DB#repoint_to can only be called on a slave" unless is_slave?

  # If the pool is using GTID, repointing is much simpler. Auto-positioning takes care of
  # the logic for us.
  my_pool = pool(true)
  if my_pool.gtid_mode?
    # We don't even need to differentiate between the two cases, all we need is a
    # sanity-check on same pool
    raise "DB#repoint_to must be called on nodes in the same pool" unless new_master_node.pool(true) == my_pool
    raise "#{new_master_node} already purged transactions needed by #{self}" if new_master_node.purged_transactions_needed_by? self
    change_master_to new_master_node, auto_position: true
    if slave_status[:master_host] != new_master_node.ip || slave_status[:auto_position] != '1'
      raise "Unexpected slave status values in replica #{self} after repointing"
    end
    resume_replication unless replicating?
    catch_up_to_master
    return
  end

  # Case 1: we compare the master two levels up with the master_node provided as argument;
  # if equal, we can change the topology by pausing replication on the slave's master and
  # retrieving replication coordinates to set up replication from new_master_node.
  if master.master == new_master_node
    orig_master_node = master
    orig_master_node.pause_replication
    unless seconds_behind_master == 0
      catch_up_to_master
    end
    log, pos = orig_master_node.repl_binlog_coordinates
    # If log.nil? returns true, it means replication is not set up on the orig_master_node,
    # hence pause_replication will fail and resume_replication will fail too.
    raise "No replication status found." if log.nil?
    stop_replication
    orig_master_node.resume_replication
  # Case 2: we compare the master of both slave and new_master_node; if equal, we determine
  # that slave and new_master_node are siblings, and we can retrieve binlog coordinates of
  # new_master_node to set up the slave replicating from it.
  elsif master == new_master_node.master
    pause_replication_with new_master_node
    log, pos = new_master_node.binlog_coordinates
    new_master_node.resume_replication
    # If log.nil? returns true in this case, it means binary logging must not be enabled
    # here. We cannot set up replication without it.
    raise "Binary logging not enabled." if log.nil?
    stop_replication
  else
    raise "Without GTID, DB#repoint_to can work only with cases where Node-to-repoint is a sibling with its future master OR where Node-to-repoint is a tiered slave one level down the future master"
  end

  change_master_options = {
    user: new_master_node.replication_credentials[:user],
    password: new_master_node.replication_credentials[:pass],
    log_file: log,
    log_pos: pos,
  }

  # CHANGE MASTER TO .. command
  reset_replication!
  change_master_to new_master_node, change_master_options

  change_master_options[:master_host] = new_master_node.ip
  change_master_options[:master_user] = change_master_options.delete :user
  change_master_options[:master_log_file] = change_master_options.delete :log_file
  change_master_options[:exec_master_log_pos] = change_master_options.delete :log_pos
  change_master_options[:exec_master_log_pos] = change_master_options[:exec_master_log_pos].to_s
  change_master_options.delete :password
  change_master_options.each do |option, value|
    raise "Unexpected slave status value for #{option} in replica #{self} after repointing" unless slave_status[option] == value
  end

  resume_replication unless replicating?
  catch_up_to_master
end
Modifies the replication topology; supports two different cases: Case 1: A tiered slave is re-pointed one level up, i.e. to be a sibling of its current master. Case 2: A sibling slave is re-pointed one level down, i.e. to replicate from one of its siblings.
repoint_to
ruby
tumblr/jetpants
lib/jetpants/db/replication.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/replication.rb
Apache-2.0
def detect_table_schema(table_name)
  table_sql = "SHOW CREATE TABLE `#{table_name}`"
  create_statement = query_return_first(table_sql).values.last
  pk_sql = "SHOW INDEX IN #{table_name} WHERE Key_name = 'PRIMARY'"
  pk_fields = query_return_array(pk_sql)
  pk_fields.sort_by!{|pk| pk[:Seq_in_index]}

  params = {
    'primary_key' => pk_fields.map{|pk| pk[:Column_name] },
    'create_table' => create_statement,
    'indexes' => connection.indexes(table_name),
    'pool' => pool,
    'columns' => connection.schema(table_name).map{|schema| schema[0]}
  }

  if pool.is_a? Shard
    config_params = Jetpants.send('sharded_tables')[pool.shard_pool.name.downcase]
    unless(config_params[table_name].nil?)
      params.merge!(config_params[table_name])
    end
  end

  Table.new(table_name, params)
end
Query the database for information about the table schema, including primary key, create statement, and columns
detect_table_schema
ruby
tumblr/jetpants
lib/jetpants/db/schema.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/schema.rb
Apache-2.0
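The primary-key extraction in detect_table_schema can be demonstrated with stubbed index rows. The rows below mimic the shape of `SHOW INDEX ... WHERE Key_name = 'PRIMARY'` output; the column names are made up for illustration.

```ruby
# Stub rows mimicking SHOW INDEX output for a composite primary key;
# note they arrive out of index order.
pk_fields = [
  { Column_name: 'post_id', Seq_in_index: 2 },
  { Column_name: 'blog_id', Seq_in_index: 1 },
]

# Same idiom as detect_table_schema: order by position within the index,
# then keep just the column names for the 'primary_key' param.
pk_fields.sort_by! { |pk| pk[:Seq_in_index] }
primary_key = pk_fields.map { |pk| pk[:Column_name] }

puts primary_key.inspect  # => ["blog_id", "post_id"]
```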
def has_table?(table)
  pool.has_table? table
end
Delegates check for a table existing by name up to the pool
has_table?
ruby
tumblr/jetpants
lib/jetpants/db/schema.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/schema.rb
Apache-2.0
def add_start_option(option)
  @start_options ||= []
  @start_options << option unless @start_options.include? option
end
Add a server start option for the instance; it will be combined with options passed into start_mysql
add_start_option
ruby
tumblr/jetpants
lib/jetpants/db/server.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/server.rb
Apache-2.0
def remove_start_option(option)
  @start_options ||= []
  @start_options.delete option
end
Remove a start option from the DB instance; this will not prevent the option from being passed into start_mysql
remove_start_option
ruby
tumblr/jetpants
lib/jetpants/db/server.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/server.rb
Apache-2.0
def stop_mysql
  output "Attempting to shutdown MySQL"

  # Ensure GTID-related variables persist across a planned restart. This is needed regardless
  # of any plugins rewriting my.cnf based on pool membership, since there are scenarios
  # involving new nodes needing correct gtid_mode *prior* to Pool#sync_configuration being
  # called. (Note: DB#start_mysql is smart enough to *temporarily* ignore gtid_mode if
  # specifically starting with binlogging disabled.)
  if gtid_mode?
    add_start_option '--loose-gtid-mode=ON'
    add_start_option '--enforce-gtid-consistency=1'
  end
  if gtid_deployment_step?
    add_start_option '--loose-gtid-deployment-step=1'
  end

  disconnect if @db
  service_stop('mysql')
  running = ssh_cmd "netstat -ln | grep \":#{@port}\\s\" | wc -l"
  raise "[#{@ip}] Failed to shut down MySQL: Something is still listening on port #{@port}" unless running.chomp == '0'
  @options = []
  @running = false
end
Shuts down MySQL, and confirms that it is no longer listening. OK to use this if MySQL is already stopped; it's a no-op then.
stop_mysql
ruby
tumblr/jetpants
lib/jetpants/db/server.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/server.rb
Apache-2.0
def start_mysql(*options)
  if @master
    @repl_paused = options.include?('--skip-slave-start')
  end

  mysql_start_options = [options, start_options].flatten
  mysql_start_options.delete '--loose-gtid-mode=ON' if mysql_start_options.include? '--skip-log-bin'

  running = ssh_cmd "netstat -ln | grep ':#{@port}' | wc -l"
  raise "[#{@ip}] Failed to start MySQL: Something is already listening on port #{@port}" unless running.chomp == '0'

  if mysql_start_options.size == 0
    output "Attempting to start MySQL, no option overrides supplied"
  else
    output "Attempting to start MySQL with options #{mysql_start_options.join(' ')}"
  end
  service_start('mysql', mysql_start_options)

  @options = options
  confirm_listening
  @running = true
  if role == :master && ! @options.include?('--skip-networking')
    disable_read_only!
  end
end
Starts MySQL, and confirms that something is now listening on the port. Raises an exception if MySQL is already running or if something else is already running on its port. Options should be supplied as positional method args, for example: start_mysql '--skip-networking', '--skip-grant-tables'
start_mysql
ruby
tumblr/jetpants
lib/jetpants/db/server.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/server.rb
Apache-2.0
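The option-merging step at the top of start_mysql can be sketched on its own. The option arrays below are hypothetical inputs; the point is the interaction between `--skip-log-bin` and GTID mode, since GTID requires the binary log.

```ruby
# Hypothetical option sets: per-call options plus persistent start options,
# mirroring how start_mysql builds its final argument list.
options = ['--skip-slave-start', '--skip-log-bin']
start_options = ['--loose-gtid-mode=ON', '--enforce-gtid-consistency=1']

mysql_start_options = [options, start_options].flatten
# GTID mode requires the binary log, so drop it when binlogging is disabled.
mysql_start_options.delete '--loose-gtid-mode=ON' if mysql_start_options.include? '--skip-log-bin'

puts mysql_start_options.inspect
```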
def flush_innodb_cache(timeout=1800, poll_frequency=30)
  # Before setting any variables we collect their current values to reset later on.
  prev_innodb_max_dirty_pages_pct = global_variables[:innodb_max_dirty_pages_pct].to_i
  prev_innodb_io_capacity_max = global_variables[:innodb_io_capacity_max].to_i
  prev_innodb_io_capacity = global_variables[:innodb_io_capacity].to_i

  # innodb_io_capacity can improve the performance of dirty page flushing in case faster
  # storage is available; e.g. on flash storage, increasing innodb_io_capacity to 50000
  # can result in InnoDB flushing more aggressively.
  io_capacity = Jetpants.innodb_flush_iops.to_i

  set_vars = "set global innodb_max_dirty_pages_pct = 0, global innodb_io_capacity_max = #{io_capacity + 2000}, global innodb_io_capacity = #{io_capacity}"
  mysql_root_cmd(set_vars)

  total_bufferpool_pages = global_status[:Innodb_buffer_pool_pages_data].to_i
  reset_vars = "set global innodb_io_capacity = #{prev_innodb_io_capacity}, global innodb_io_capacity_max = #{prev_innodb_io_capacity_max}, global innodb_max_dirty_pages_pct = #{prev_innodb_max_dirty_pages_pct}"
  start = Time.now.to_i
  output "Starting to flush dirty buffers to disk"
  while (Time.now.to_i - start) < timeout
    pages = global_status[:Innodb_buffer_pool_pages_dirty].to_i
    if pages < (total_bufferpool_pages / 100) * 1.5
      output "Dirty buffers have been flushed to disk, only 1.5% remaining."
      mysql_root_cmd(reset_vars)
      return true
    else
      output "Number of dirty pages remaining to be flushed: #{pages}"
      sleep poll_frequency
    end
  end

  # Reset the variables *before* raising; a statement after raise would be unreachable.
  mysql_root_cmd(reset_vars)
  raise "This instance was not able to flush all the dirty buffers within #{timeout} seconds. Resetting the mysql variables back to previous values."
end
DB#flush_innodb_cache aggressively flushes dirty pages from the InnoDB buffer pool. This function should mostly be used in conjunction with DB#restart_mysql, where flushing pages prior to restart reduces the time required to shut down MySQL, thereby reducing restart times.
flush_innodb_cache
ruby
tumblr/jetpants
lib/jetpants/db/server.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/server.rb
Apache-2.0
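The stopping condition in flush_innodb_cache is worth isolating: flushing is considered done once dirty pages fall below 1.5% of the buffer pool's data pages. This is a minimal sketch with made-up page counts; the real values come from `SHOW GLOBAL STATUS`.

```ruby
# Same threshold arithmetic as flush_innodb_cache: integer division by 100
# yields "pages per percent", scaled by the 1.5% target.
def flushed_enough?(dirty_pages, total_bufferpool_pages)
  dirty_pages < (total_bufferpool_pages / 100) * 1.5
end

puts flushed_enough?(1_000, 100_000)  # 1% dirty  => true
puts flushed_enough?(5_000, 100_000)  # 5% dirty  => false
```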
def confirm_listening(timeout=10)
  if @options.include? '--skip-networking'
    output 'Unable to confirm mysqld listening because server started with --skip-networking'
    false
  else
    confirm_listening_on_port(@port, timeout)
  end
end
Confirms that a process is listening on the DB's port
confirm_listening
ruby
tumblr/jetpants
lib/jetpants/db/server.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/server.rb
Apache-2.0
def get_query_runtime(duration, database = nil)
  raise 'Percona::Toolkit is not installed on the server' if self.ssh_cmd('which pt-query-digest 2> /dev/null').nil?

  dumpfile = File.join(Dir.tmpdir, 'jetpants_tcpdump.' + (0...8).map { (65 + rand(26)).chr }.join)
  get_tcpdump_sample duration, dumpfile

  if database
    output("Analyzing the tcpdump with pt-query-digest for database '#{database}'")
    pt_query_digest = "pt-query-digest --filter '$event->{db} && $event->{db} eq \"#{database}\"' --type tcpdump --limit 30 - 2> /dev/null"
  else
    output('Analyzing the tcpdump with pt-query-digest')
    pt_query_digest = 'pt-query-digest --type tcpdump --limit 30 - 2> /dev/null'
  end

  output(self.ssh_cmd "tcpdump -s 0 -x -n -q -tttt -r #{dumpfile} | #{pt_query_digest}")
  self.ssh_cmd "rm -f #{dumpfile}"
  nil
end
Run tcpdump on the MySQL traffic and return the top 30 slowest queries
get_query_runtime
ruby
tumblr/jetpants
lib/jetpants/db/server.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/server.rb
Apache-2.0
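The temp-file naming in get_query_runtime uses an 8-character uppercase suffix so concurrent runs don't clobber each other's dump files. A self-contained sketch of the same construction:

```ruby
require 'tmpdir'

# Same filename construction as get_query_runtime: 65 is ASCII 'A', so
# each character is a random uppercase letter.
suffix = (0...8).map { (65 + rand(26)).chr }.join
dumpfile = File.join(Dir.tmpdir, 'jetpants_tcpdump.' + suffix)

puts dumpfile
```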
def master
  return nil unless running? || @master
  probe if @master.nil?
  @master
end
Returns the Jetpants::DB instance that is the master of this instance, or false if there isn't one, or nil if we can't tell because this instance isn't running.
master
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def slaves
  return nil unless running? || @slaves
  probe if @slaves.nil?
  @slaves
end
Returns an Array of Jetpants::DB instances that are slaving from this instance, or nil if we can't tell because this instance isn't running.
slaves
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def repl_paused?
  return nil unless master
  probe if @repl_paused.nil?
  @repl_paused
end
Returns true if replication is paused on this instance, false if it isn't, or nil if this instance isn't a slave (or if we can't tell because the instance isn't running)
repl_paused?
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def running?
  probe if @running.nil?
  @running
end
Returns true if MySQL is running for this instance, false otherwise. Note that if the host isn't available/online/reachable, we consider MySQL to not be running.
running?
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def probed?
  [@master, @slaves, @running].compact.count >= 3
end
Returns true if we've probed this MySQL instance already. Several methods trigger a probe, including master, slaves, repl_paused?, and running?.
probed?
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def probe(force=false, expected_master: nil)
  @probe_mutex.synchronize {
    unless (probed? && !force)
      output "Probing MySQL installation"
      probe_running
      probe_master
      if expected_master.nil? || master == expected_master
        probe_slaves
      else
        output "Not probing slaves, as our master (#{master}) does not match what our caller expected (#{expected_master})"
      end
    end
  }
  self
end
Probes this instance to discover its status, master, and slaves. Several other methods trigger a probe automatically, including master, slaves, repl_paused?, and running?. Ordinarily this method won't re-probe an instance that has already been probed, unless you pass force=true. This can be useful if something external to Jetpants has changed a DB's state while Jetpants is running. For example, if you're using jetpants console and, for whatever reason, you stop replication on a slave manually outside of Jetpants. In this case you will need to force a probe so that Jetpants learns about the change.
probe
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def replicating?
  status = slave_status
  [status[:slave_io_running], status[:slave_sql_running]].all? {|s| s && s.downcase == 'yes'}
end
Returns true if the MySQL slave I/O thread and slave SQL thread are both running, false otherwise. Note that this always checks the current actual state of the instance, as opposed to DB#repl_paused? which just remembers the state from the previous probe and any actions since then.
replicating?
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
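The two-thread check in replicating? can be exercised with stubbed status hashes. The hashes below are made-up stand-ins for `SHOW SLAVE STATUS` output; note the `s && ...` guard also handles a non-replica, where both fields are nil.

```ruby
# Same check as replicating?: both threads must be present and 'yes'.
def both_threads_running?(status)
  [status[:slave_io_running], status[:slave_sql_running]].all? { |s| s && s.downcase == 'yes' }
end

healthy = { slave_io_running: 'Yes', slave_sql_running: 'Yes' }
broken  = { slave_io_running: 'Yes', slave_sql_running: 'No' }
absent  = {}  # not a replica: both fields are nil

puts both_threads_running?(healthy)  # => true
puts both_threads_running?(broken)   # => false
puts both_threads_running?(absent)   # => false
```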
def has_slaves?
  slaves.count > 0
end
Returns true if this instance had at least one slave when it was last probed, false otherwise. (This method will indirectly force a probe if the instance hasn't been probed before.)
has_slaves?
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def read_only?
  global_variables[:read_only].downcase == 'on'
end
Returns true if the global READ_ONLY variable is set, false otherwise.
read_only?
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def taking_connections?(max=4, interval=2.0, threshold=1)
  current_conns = query_return_array('show processlist').count
  return true if current_conns > max
  conn_counter = global_status[:Connections].to_i
  sleep(interval)
  global_status[:Connections].to_i - conn_counter > threshold
end
Confirms instance has no more than [max] connections currently (AS VISIBLE TO THE APP USER), and in [interval] seconds hasn't received more than [threshold] additional connections. You may need to adjust max if running multiple query killers, monitoring agents, etc.
taking_connections?
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
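The second half of taking_connections? compares the `Connections` counter before and after an interval. A deterministic sketch of that delta check, with made-up counter values in place of `SHOW GLOBAL STATUS` polling and sleeping:

```ruby
# Same comparison as taking_connections?: more than `threshold` new
# connections during the interval means the node is still taking traffic.
def gained_connections?(before, after, threshold = 1)
  after - before > threshold
end

puts gained_connections?(1200, 1201)  # one new connection  => false
puts gained_connections?(1200, 1210)  # ten new connections => true
```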
def max_threads_running(tries=8, interval=1.0)
  poll_status_value(:Threads_running, :max, tries, interval)
end
Gets the max threads connected over a time period
max_threads_running
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def poll_status_value(field, type=:max, tries=8, interval=1.0)
  max = 0
  sum = 0
  tries.times do
    value = global_status[field].to_i
    max = value unless max > value
    sum += value
    sleep(interval)
  end

  if type == :max
    max
  elsif type == :avg
    sum.to_f / tries.to_f
  end
end
Gets the max or avg for a mysql value
poll_status_value
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
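The max/avg accumulation in poll_status_value can be tested standalone by passing the samples in directly instead of polling `SHOW GLOBAL STATUS` and sleeping between reads. A minimal sketch of the same arithmetic:

```ruby
# Standalone version of poll_status_value's accumulation loop; `samples`
# replaces the repeated global_status reads.
def poll_samples(samples, type = :max)
  max = 0
  sum = 0
  samples.each do |value|
    max = value unless max > value
    sum += value
  end
  type == :max ? max : sum.to_f / samples.size
end

puts poll_samples([3, 9, 4, 8], :max)  # => 9
puts poll_samples([3, 9, 4, 8], :avg)  # => 6.0
```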