Dataset columns (per-record fields):
- code: string, lengths 26 to 124k
- docstring: string, lengths 23 to 125k
- func_name: string, lengths 1 to 98
- language: string, 1 distinct value
- repo: string, lengths 5 to 53
- path: string, lengths 7 to 151
- url: string, lengths 50 to 211
- license: string, 7 distinct values
def taking_writes?(interval=5.0)
  raise "DB#taking_writes? only works if binary logging enabled" unless binary_log_enabled?

  # if using GTID, check gtid_executed instead of binlog_coordinates
  # since DB#gtid_executed is slightly faster than DB#binlog_coordinates
  if pool(true).gtid_mode?
    executed = gtid_executed(true)
    sleep(interval)
    executed != gtid_executed(true)
  else
    coords = binlog_coordinates
    sleep(interval)
    coords != binlog_coordinates
  end
end
Returns true if the binlog coordinates (or gtid_executed, when GTID is enabled) of this node moved during a window of [interval] seconds, i.e., the node is still taking writes; returns false if they did not move.
taking_writes?
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
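For illustration, a minimal sketch of how taking_writes? might be used before demoting a master; the db variable, the 10-second window, and the surrounding workflow are assumptions, not part of the source:

# db is assumed to be a Jetpants::DB with binary logging enabled
if db.taking_writes?(10.0)
  raise "Aborting demotion: #{db} still received writes during the 10s window"
else
  puts "#{db} looks idle; proceeding"
end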
def is_standby? !(running?) || (is_slave? && !taking_connections?) end
Returns true if this instance appears to be a standby slave, false otherwise. Note that "standby" in this case is based on whether the slave is actively receiving connections, not based on any Pool's understanding of the slave's state. An asset- tracker plugin may want to override this to determine standby status differently.
is_standby?
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def for_backups? @host.hostname.start_with? 'backup' end
Jetpants supports a notion of dedicated backup machines, containing one or more MySQL instances that are considered "backup slaves", which will never be promoted to serve production queries. The default implementation identifies these by a hostname beginning with "backup". You may want to override this with a plugin to use a different scheme if your architecture contains a similar type of node.
for_backups?
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def is_spare? raise "Plugin must override DB#is_spare?" end
Plugins should override this so that it returns whether the node is a spare.
is_spare?
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
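A hedged sketch of the kind of override this docstring asks for: an asset-tracker plugin could mark spares via a Collins status:state pair, reusing collins_status_state from the jetpants_collins records later in this dataset. The 'allocated:spare' value mirrors the spare-query selector shown in query_spare_assets, but the exact convention is an assumption:

module Jetpants
  class DB
    # Hypothetical plugin override: a node is a spare when Collins reports
    # status Allocated with state SPARE (downcased by collins_status_state).
    def is_spare?
      collins_status_state == 'allocated:spare'
    end
  end
end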
def promotable_to_master?(enslaving_old_master=true) pool(true).promotable_nodes(enslaving_old_master).include? self end
Returns true if the node can be promoted to be the master of its pool, false otherwise (also false if the node is ALREADY the master). The enslaving_old_master arg indicates whether the proposed promotion would keep the old master in the pool. Don't use this in hierarchical replication scenarios unless GTID is in use and your asset-tracker plugin handles cross-datacenter promotions properly (jetpants_collins doesn't yet, but support is planned to be added soon!)
promotable_to_master?
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def global_variables
  # Seed reduce with an empty hash so the first row is mapped like the rest
  # (without a seed, the first result row itself would become the accumulator).
  query_return_array('show global variables').reduce({}) do |variables, variable|
    variables[variable[:Variable_name].to_sym] = variable[:Value]
    variables
  end
end
Returns a hash mapping global MySQL variables (as symbols) to their values (as strings).
global_variables
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
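A small usage sketch; the settings queried are standard MySQL variables, and db is an assumed Jetpants::DB handle:

vars = db.global_variables
vars[:read_only]              # => "ON" or "OFF" (values are strings)
vars[:max_connections].to_i   # convert explicitly when you need a number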
def global_status
  # Seed reduce with an empty hash so every status row is mapped consistently.
  query_return_array('show global status').reduce({}) do |variables, variable|
    variables[variable[:Variable_name].to_sym] = variable[:Value]
    variables
  end
end
Returns a hash mapping global MySQL status fields (as symbols) to their values (as strings).
global_status
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def version_tuple
  result = nil
  if running?
    # If the server is running, we can just query it
    result = global_variables[:version].split('.', 3).map(&:to_i) rescue nil
  end
  if result.nil?
    # Otherwise we need to parse the output of mysqld --version
    output = ssh_cmd 'mysqld --version'
    matches = output.downcase.match('ver\s*(\d+)\.(\d+)\.(\d+)')
    raise "Unable to determine version for #{self}" unless matches
    result = matches[1, 3].map(&:to_i)
  end
  result
end
Returns an array of integers representing the version of the MySQL server. For example, Percona Server 5.5.27-rel28.1-log would return [5, 5, 27]
version_tuple
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def normalized_version(precision=2) raise "Invalid precision #{precision}" if precision < 1 || precision > 3 version_tuple[0, precision].join('.') end
Return a string representing the version. The precision indicates how many major/minor version numbers to include; e.g., on 5.5.29, normalized_version(3) returns '5.5.29', normalized_version(2) returns '5.5', and normalized_version(1) returns '5'.
normalized_version
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
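A worked example, assuming a node running MySQL 5.5.29 (hypothetical):

db.normalized_version(3)   # => "5.5.29"
db.normalized_version(2)   # => "5.5"   (the default precision)
db.normalized_version(1)   # => "5"
db.normalized_version(4)   # raises "Invalid precision 4"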
def version_cmp(db, precision=2)
  raise "Invalid precision #{precision}" if precision < 1 || precision > 3
  my_tuple = version_tuple[0, precision]
  other_tuple = db.version_tuple[0, precision]
  my_tuple.each_with_index do |subver, i|
    return -1 if subver < other_tuple[i]
    return 1 if subver > other_tuple[i]
  end
  0
end
Returns -1 if self is running a lower version than db; 1 if self is running a higher version; and 0 if running same version.
version_cmp
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
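An illustrative guard built on version_cmp; source_db and target_db are hypothetical DB objects, and the rule that data shouldn't be copied from a newer version onto an older one is the example's assumption:

# Compare only major.minor (precision 2) before cloning one node onto another
if source_db.version_cmp(target_db, 2) > 0
  raise "Refusing to clone #{source_db} (newer MySQL) onto #{target_db} (older MySQL)"
end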
def pool(create_if_missing=false)
  result = Jetpants.topology.pool(self)
  if !result
    if master
      result = master.pool(create_if_missing)
    elsif create_if_missing
      result = Pool.new('anon_pool_' + ip.tr('.', ''), self)
      def result.sync_configuration; end
    end
  end
  result
end
Returns the Jetpants::Pool that this instance belongs to, if any. Can optionally create an anonymous pool if no pool was found. This anonymous pool intentionally has a blank sync_configuration implementation.
pool
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def role
  probe unless probed?
  p = pool
  case
  when !@master then :master                                 # nodes that aren't slaves (including orphans)
  when p && p.master == self then :master                    # nodes that the topology thinks are masters
  when for_backups? then :backup_slave
  when p && p.active_slave_weights[self] then :active_slave  # if pool in topology, determine based on expected/ideal state
  when !p && !is_standby? then :active_slave                 # if pool missing from topology, determine based on actual state
  else :standby_slave
  end
end
Determines the DB's role in its pool. Returns either :master, :active_slave, :standby_slave, or :backup_slave. Note that we consider a node with no master and no slaves to be a :master, since we can't determine if it had slaves but they're just offline/dead, vs it being an orphaned machine. In hierarchical replication scenarios (such as the child shard masters in the middle of a shard split), we return :master if Jetpants.topology considers the node to be the master for a pool.
role
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
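A sketch of consuming the role symbol; the messages are illustrative only:

case db.role
when :master        then puts "#{db} takes writes"
when :active_slave  then puts "#{db} serves read traffic"
when :standby_slave then puts "#{db} is a promotion/cloning candidate"
when :backup_slave  then puts "#{db} is reserved for backups"
end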
def usable_spare?
  if @spare_validation_errors.nil?
    @spare_validation_errors = []

    # The order of checks is important -- if the node isn't even reachable by SSH,
    # don't run any of the other checks, for example.
    # Note that we probe concurrently in Topology#query_spare_assets, ahead of time
    if !probed?
      @spare_validation_errors << 'Attempt to probe node failed'
    elsif !available?
      @spare_validation_errors << 'Node is not reachable via SSH'
    elsif !running?
      @spare_validation_errors << 'MySQL is not running'
    elsif pool
      @spare_validation_errors << 'Node already has a pool'
    else
      validate_spare
    end

    unless @spare_validation_errors.empty?
      error_text = @spare_validation_errors.join '; '
      output "Removed from spare pool for failing checks: #{error_text}"
    end
  end
  @spare_validation_errors.empty?
end
Returns true if this database is a spare node and looks ready for use, false otherwise. Normally no need for plugins to override this (as of Jetpants 0.8.1), they should override DB#validate_spare instead.
usable_spare?
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def validate_spare @spare_validation_errors << "The node is not marked as a spare in the asset tracker" unless is_spare? end
Performs validation checks on this node to see whether it is a usable spare. The default implementation just ensures that the node is spare according to DB#is_spare? Downstream plugins may override this to do additional checks to ensure the node is in a sane condition. No need to check whether the node is SSH-able, MySQL is running, or not already in a pool -- DB#usable_spare? already does that automatically.
validate_spare
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
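A hedged sketch of a downstream plugin layering an extra check onto validate_spare, as the docstring suggests. The disk-space threshold and the use of DB#mount_stats (seen in the capacity_plan records below) are assumptions for illustration:

module Jetpants
  class DB
    alias_method :validate_spare_without_disk_check, :validate_spare

    # Hypothetical extra check: require roughly 100GB free on the data mount
    def validate_spare
      validate_spare_without_disk_check
      if mount_stats['available'].to_i < 100 * 1024 * 1024   # mount_stats assumed to be in KB
        @spare_validation_errors << 'Less than 100GB of free disk space'
      end
    end
  end
end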
def cleanup_spare! raise "Plugin must override DB#cleanup_spare!" end
Resets the MySQL data on a server and puts it into the spare pool
cleanup_spare!
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def claim! raise "Plugin must override DB#claim!" end
Sets a server as "claimed" in the asset tracker so that no other operation can use it. This is used for moving servers out of the spare pool.
claim!
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def use_ssl_replication? global_variables[:have_ssl] && global_variables[:have_ssl].downcase == "yes" end
Determines whether a server should use SSL as a replication source.
use_ssl_replication?
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def probe_master
  return unless @running # leaves @master as nil to indicate unknown state
  status = slave_status
  if !status || status.count < 1
    @master = false
  else
    @master = self.class.new(status[:master_host], status[:master_port])
    if status[:slave_io_running] != status[:slave_sql_running]
      output "One replication thread is stopped and the other is not."
      if Jetpants.verify_replication
        output "You must repair this node manually, OR remove it from its pool permanently if it is unrecoverable."
        raise "Fatal replication problem on #{self}"
      end
      pause_replication
    else
      @repl_paused = (status[:slave_io_running].downcase == 'no')
    end
  end
end
Checks slave status to determine the master and whether replication is paused. An asset-tracker plugin may want to implement DB#after_probe_master to populate @master even if @running is false.
probe_master
ruby
tumblr/jetpants
lib/jetpants/db/state.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/db/state.rb
Apache-2.0
def service(operation, name, options='')
  @host.ssh_cmd "service #{name} #{operation.to_s} #{options}".rstrip
end
Performs the given operation (:start, :stop, :restart, :status) for the specified service (ie "mysql"). Requires that the "service" bin is in root's PATH. Please be aware that the output format and exit codes for the service binary vary between Linux distros!
service
ruby
tumblr/jetpants
lib/jetpants/hostservice/upstart.rb
https://github.com/tumblr/jetpants/blob/master/lib/jetpants/hostservice/upstart.rb
Apache-2.0
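Hypothetical invocations; as the docstring warns, output and exit codes depend on the distro's service wrapper:

db.service(:stop, 'mysql')
db.service(:start, 'mysql')
db.service(:status, 'mysql')   # runs: service mysql status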
def snapshot storage_sizes = {} timestamp = Time.now.to_i current_sizes_storage = current_sizes all_mounts.each do |key, value| storage_sizes[key] = value storage_sizes[key]['db_sizes'] = current_sizes_storage[key] end store_data(storage_sizes, timestamp) snapshot_autoinc(timestamp) end
Grabs a snapshot of the current data and stores it in MySQL.
snapshot
ruby
tumblr/jetpants
plugins/capacity_plan/capacity_plan.rb
https://github.com/tumblr/jetpants/blob/master/plugins/capacity_plan/capacity_plan.rb
Apache-2.0
def plan(email=false)
  history = get_history
  mount_stats_storage = all_mounts
  now = Time.now.to_i
  output = ''

  ##get segments for 24 hour blocks
  segments = segmentify(history, 60 * 60 * 24)

  total_mysql_dataset, total_growth_per_day = get_total_consumed_stg(mount_stats_storage, segments)
  output += "\n\n________________________________________________________________________________________________________\n"
  output += "Your MySQL data is #{total_mysql_dataset.first.round(2)}#{total_mysql_dataset.last}. It grew by #{total_growth_per_day.first.round(2)}#{total_growth_per_day.last} since yesterday"

  if Jetpants.topology.respond_to? :capacity_plan_notices
    output += "\n\n________________________________________________________________________________________________________\n"
    output += "Notices\n\n"
    output += Jetpants.topology.capacity_plan_notices
  end

  criticals = []
  warnings = []
  ## check to see if any mounts are currently over the usage points
  mount_stats_storage.each do |key, value|
    if value['used'].to_f/value['total'].to_f > Jetpants.plugins['capacity_plan']['critical_mount']
      criticals << key
    elsif value['used'].to_f/value['total'].to_f > Jetpants.plugins['capacity_plan']['warning_mount']
      warnings << key
    end
  end

  if criticals.count > 0
    output += "\n\n________________________________________________________________________________________________________\n"
    output += "Critical Mounts\n\n"
    criticals.each do |mount|
      output += mount + "\n"
    end
  end

  if warnings.count > 0
    output += "\n\n________________________________________________________________________________________________________\n"
    output += "Warning Mounts\n\n"
    warnings.each do |mount|
      output += mount + "\n"
    end
  end

  output += "\n\n________________________________________________________________________________________________________\n"
  output += "Usage and Time Left\n"
  output += " --------- The 'GB per day' and 'Days left' fields are using a growth rate that is calculated by taking \n --------- an exponentially decaying avg\n\n"

  ##get segments for 24 hour blocks
  segments = segmentify(history, 60 * 60 * 24)

  output += "%30s %20s %10s %10s %16s\n" % ["pool name","Current Data Size","GB per day","Days left","(until critical)"]
  output += "%30s %20s %10s %10s\n" % ["---------","-----------------","----------","---------"]

  mount_stats_storage.each do |name, temp|
    growth_rate = false
    segments[name].each do |range, value|
      growth_rate = calc_avg(growth_rate || value, value)
    end
    critical = mount_stats_storage[name]['total'].to_f * Jetpants.plugins['capacity_plan']['critical_mount']
    if (per_day(bytes_to_gb(growth_rate))) <= 0 || ((critical - mount_stats_storage[name]['used'].to_f)/ per_day(growth_rate)) > 999
      output += "%30s %20.2f %10.2f %10s\n" % [name, bytes_to_gb(mount_stats_storage[name]['used'].to_f), (per_day(bytes_to_gb(growth_rate+0))), 'N/A']
    else
      output += "%30s %20.2f %10.2f %10.2f\n" % [name, bytes_to_gb(mount_stats_storage[name]['used'].to_f), (per_day(bytes_to_gb(growth_rate+0))),((critical - mount_stats_storage[name]['used'].to_f)/ per_day(growth_rate))]
    end
  end

  output += "\n\n________________________________________________________________________________________________________\nDay Over Day\n\n"
  output += "%30s %10s %10s %10s %10s %11s\n" % ["pool name", "today", "1 day ago", "2 days ago", "7 days ago", "14 days ago"]
  output += "%30s %10s %10s %10s %10s %11s\n" % ["---------", "-----", "---------", "----------", "----------", "-----------"]

  mount_stats_storage.each do |name, temp|
    out_array = []
    segments[name].each do |range, value|
      out_array << per_day(bytes_to_gb(value))+0
    end
    output += "%30s %10s %10s %10s %10s %11s\n" % [name, (out_array.reverse[0] ? "%.2f" % out_array.reverse[0] : 'N/A'), (out_array.reverse[1] ? "%.2f" % out_array.reverse[1] : 'N/A'), (out_array.reverse[2] ? "%.2f" % out_array.reverse[2] : 'N/A'), (out_array.reverse[7] ? "%.2f" % out_array.reverse[7] : 'N/A'), (out_array.reverse[14] ? "%.2f" % out_array.reverse[14] : 'N/A')]
  end

  date = Time.now.strftime("%Y-%m-%d")
  autoinc_history = get_autoinc_history(date)
  output += "\n________________________________________________________________________________________________________\nAuto-Increment Checker\n\n"
  output += "Top 5 tables with Auto-Increment filling up are: \n"
  output += "%30s %20s %20s %20s %10s %15s\n" % ["Pool name", "Table name", "Column name", "Column type", "Fill ratio", "Current Max val"]
  autoinc_history.each do |hash_key, value|
    value.each do |table, data|
      output += "%30s %20s %20s %20s %10s %15s\n" % [data["pool"], table, data["column_name"], data["column_type"], data["ratio"], data["max_val"]]
    end
  end

  output += outliers

  collins_results = get_hardware_stats
  output += collins_results

  puts output

  html = '<html><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"></head><body><pre style="font-size=20px;">' + output + '</pre></body></html>'

  if email
    Pony.mail(:to => email, :from => 'jetpants', :subject => 'Jetpants Capacity Plan - '+Time.now.strftime("%m/%d/%Y %H:%M:%S"), :html_body => html, :headers => {'X-category' => 'cronjobr'})
  end
end
Generates the capacity plan; if an email address is supplied, also sends the report to that address.
plan
ruby
tumblr/jetpants
plugins/capacity_plan/capacity_plan.rb
https://github.com/tumblr/jetpants/blob/master/plugins/capacity_plan/capacity_plan.rb
Apache-2.0
def calc_avg(avg, new_value, count=false)
  unless count
    (new_value * 0.5) + (avg * (1.0 - 0.5))
  else
    avg + ((new_value - avg) / count)
  end
end
Uses an exponentially decaying average; when a count is supplied, uses a cumulative moving average instead.
calc_avg
ruby
tumblr/jetpants
plugins/capacity_plan/capacity_plan.rb
https://github.com/tumblr/jetpants/blob/master/plugins/capacity_plan/capacity_plan.rb
Apache-2.0
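A worked example of both averaging modes (the numbers are made up):

avg = 10.0
avg = calc_avg(avg, 20.0)      # => 15.0  (0.5*20 + 0.5*10, exponentially decaying)
avg = calc_avg(avg, 30.0)      # => 22.5  (0.5*30 + 0.5*15)

cma = 10.0
cma = calc_avg(cma, 20.0, 2)   # => 15.0  (10 + (20-10)/2, cumulative moving average)
cma = calc_avg(cma, 30.0, 3)   # => 20.0  (15 + (30-15)/3)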
def current_sizes
  pool_sizes = {}
  Jetpants.pools.each do |p|
    pool_sizes[p.name] = p.data_set_size
  end
  pool_sizes
end
Grabs each pool's current actual data set size, including logs (in bytes).
current_sizes
ruby
tumblr/jetpants
plugins/capacity_plan/capacity_plan.rb
https://github.com/tumblr/jetpants/blob/master/plugins/capacity_plan/capacity_plan.rb
Apache-2.0
def all_mounts
  all_mount_stats = {}
  Jetpants.pools.each do |p|
    mount_stats = p.mount_stats
    # check if any of the slaves has less total capacity than the master
    p.slaves.each do |s|
      slave_mount_stats = s.mount_stats
      mount_stats = slave_mount_stats if slave_mount_stats['total'] < mount_stats['total']
    end
    all_mount_stats[p.name] ||= mount_stats
  end
  all_mount_stats
end
Gets every pool's mount data, in kilobytes.
all_mounts
ruby
tumblr/jetpants
plugins/capacity_plan/capacity_plan.rb
https://github.com/tumblr/jetpants/blob/master/plugins/capacity_plan/capacity_plan.rb
Apache-2.0
def get_total_consumed_stg(per_pool_consumed, segments) total_consumed = 0 total_growth = 0 growth_rate = false per_pool_consumed.each do |pool, storage| total_consumed += storage["used"] segments[pool].each do |range, value| growth_rate = calc_avg(growth_rate || value, value) end total_growth += per_day(growth_rate) end return Kibi.humanize(total_consumed), Kibi.humanize(total_growth) end
Gets the total MySQL dataset size across the whole site.
get_total_consumed_stg
ruby
tumblr/jetpants
plugins/capacity_plan/capacity_plan.rb
https://github.com/tumblr/jetpants/blob/master/plugins/capacity_plan/capacity_plan.rb
Apache-2.0
def store_data(mount_data,timestamp) mount_data.each do |key, value| @@db.query('INSERT INTO storage (`timestamp`, `pool`, `total`, `used`, `available`, `db_sizes`) VALUES ( ? , ? , ? , ? , ? , ? )', timestamp.to_s, key, value['total'].to_s, value['used'].to_s, value['available'].to_s, value['db_sizes'].to_s) end end
Loops through the mount data and inserts it into MySQL.
store_data
ruby
tumblr/jetpants
plugins/capacity_plan/capacity_plan.rb
https://github.com/tumblr/jetpants/blob/master/plugins/capacity_plan/capacity_plan.rb
Apache-2.0
def get_history
  history = {}
  @@db.query_return_array('select timestamp, pool, total, used, available, db_sizes from storage order by id').each do |row|
    history[row[:pool]] ||= {}
    history[row[:pool]][row[:timestamp]] ||= {}
    history[row[:pool]][row[:timestamp]]['total'] = row[:total]
    history[row[:pool]][row[:timestamp]]['used'] = row[:used]
    history[row[:pool]][row[:timestamp]]['available'] = row[:available]
    history[row[:pool]][row[:timestamp]]['db_sizes'] = row[:db_sizes]
  end
  history
end
Gets the full storage history for all pools from MySQL.
get_history
ruby
tumblr/jetpants
plugins/capacity_plan/capacity_plan.rb
https://github.com/tumblr/jetpants/blob/master/plugins/capacity_plan/capacity_plan.rb
Apache-2.0
def segmentify(hash, timeperiod) new_hash = {} hash.each do |name, temp| before_timestamp = false keeper = [] last_timestamp = nil last_value = nil hash[name].sort.each do |timestamp, value| new_hash[name] ||= {} last_timestamp = timestamp last_value = value unless before_timestamp && timestamp > (timeperiod - 60 ) + before_timestamp unless before_timestamp before_timestamp = timestamp end keeper << value else new_hash[name][before_timestamp.to_s+"-"+timestamp.to_s] = (keeper[0]['used'].to_f - value['used'].to_f )/(before_timestamp.to_f - timestamp.to_f) before_timestamp = timestamp keeper = [] keeper << value end end if keeper.length > 1 new_hash[name][before_timestamp.to_s+"-"+last_timestamp.to_s] = (keeper[0]['used'].to_f - last_value['used'].to_f )/(before_timestamp.to_f - last_timestamp.to_f) end end new_hash end
Segments the history into groups covering the given time period.
segmentify
ruby
tumblr/jetpants
plugins/capacity_plan/capacity_plan.rb
https://github.com/tumblr/jetpants/blob/master/plugins/capacity_plan/capacity_plan.rb
Apache-2.0
def get_hardware_stats #see if function exists return '' unless Jetpants.topology.respond_to? :machine_status_counts data = Jetpants.topology.machine_status_counts output = '' output += "\n________________________________________________________________________________________________________\n" output += "Hardware status\n\n" headers = ['status'].concat(data.first[1].keys).concat(['total']) output += (headers.map { |i| "%20s"}.join(" ")+"\n") % headers output += (headers.map { |i| "%20s"}.join(" ")+"\n") % headers.map { |i| '------------------'} data.each do |key, status| unless key == 'unallocated' total = 0 status.each do |nodeclass, value| total += value.to_i end output += (headers.map { |i| "%20s"}.join(" ")+"\n") % [key].concat(status.values).concat([total]) end end output += "\nTotal Unallocated nodes - " + data['unallocated'] + "\n\n" output end
Gets a hash of machines to display at the end of the email. You need to provide a Jetpants.topology.machine_status_counts method that returns your machine types and states.
get_hardware_stats
ruby
tumblr/jetpants
plugins/capacity_plan/capacity_plan.rb
https://github.com/tumblr/jetpants/blob/master/plugins/capacity_plan/capacity_plan.rb
Apache-2.0
def outliers output = '' output += "\n________________________________________________________________________________________________________\n" output += "New Outliers\n" output += "--Compare the last 3 days in 2 hour blocks to the same 2 hour block 7, 14, 21, 28 days ago\n\n" output += "%30s %25s %25s %10s %11s\n" % ['Pool Name', 'Start Time', 'End Time', 'Usage', 'Prev Weeks'] output += "%30s %25s %25s %10s %11s\n" % ['---------', '----------', '--------', '-----', '----------'] block_sizes = 60 * 60 * 2 + 120 days_from = [7,14,21,28] Jetpants.pools.each do |p| start_time = Time.now.to_i - 3 * 24 * 60 * 60 counter = 0 counter_time = 0 output_buffer = '' last_per = nil name = p.name while start_time + (60 * 62) < Time.now.to_i temp_array = [] from_blocks = {} from_per = {} now_block = get_history_block(name, start_time, start_time + block_sizes) unless now_block.count == 0 now_per = (now_block.first[1]['used'].to_f - now_block.values.last['used'].to_f)/(now_block.first[0].to_f - now_block.keys.last.to_f) days_from.each do |days| temp = get_history_block(name, start_time - (days * 24 * 60 * 60), start_time - (days * 24 * 60 * 60) + block_sizes) if temp.count >= 2 from_blocks[days] = temp from_per[days] = (from_blocks[days].first[1]['used'].to_f - from_blocks[days].values.last['used'].to_f)/(from_blocks[days].first[0].to_f - from_blocks[days].keys.last.to_f) end end # remove outliers from compare array because we only care about current outliers not old outliers from_per.each do |day, value| if(value > from_per.values.mean * 5.0 || value < from_per.values.mean * -5.0) from_per.delete(day) end end if from_per.count > 0 if((now_per > (from_per.values.mean * 2.2) && from_per.values.mean != 0) || (from_per.values.mean == 0 && now_per > 1048576)) if counter == 0 counter_time = start_time end counter += 1 if counter > 3 output_buffer = "%30s %25s %25s %10.2f %11.2f\n" % [name, Time.at(counter_time.to_i).strftime("%m/%d/%Y %H:%M:%S"), Time.at(start_time + block_sizes).strftime("%m/%d/%Y %H:%M:%S"), per_day(bytes_to_gb(now_per)), per_day(bytes_to_gb(from_per.values.mean))] end else counter = 0 unless output_buffer == '' output += output_buffer output_buffer = '' end end if((now_per > (from_per.values.mean * 5.0) && from_per.values.mean != 0) || (from_per.values.mean == 0 && now_per > 1048576)) output += "%30s %25s %25s %10.2f %11.2f\n" % [name, Time.at(start_time).strftime("%m/%d/%Y %H:%M:%S"), Time.at(start_time + block_sizes).strftime("%m/%d/%Y %H:%M:%S"), per_day(bytes_to_gb(now_per)), per_day(bytes_to_gb(from_per.values.mean))] end end # end if hash has values end start_time += block_sizes - 120 end # end while loop for last 3 days output_buffer = '' counter = 0 counter_time = 0 end output end
figure out the outliers for the last 3 days
outliers
ruby
tumblr/jetpants
plugins/capacity_plan/capacity_plan.rb
https://github.com/tumblr/jetpants/blob/master/plugins/capacity_plan/capacity_plan.rb
Apache-2.0
def get_history_block(pool,time_start,time_stop) history = {} @@db.query_return_array('select timestamp, pool, total, used, available, db_sizes from storage where pool = ? and timestamp >= ? and timestamp <= ? order by id', pool, time_start, time_stop).each do |row| history[row[:timestamp]] ||= {} history[row[:timestamp]]['total'] = row[:total] history[row[:timestamp]]['used'] = row[:used] history[row[:timestamp]]['available'] = row[:available] history[row[:timestamp]]['db_sizes'] = row[:db_sizes] end history end
Gets the storage history from MySQL for one pool, restricted to the given time window.
get_history_block
ruby
tumblr/jetpants
plugins/capacity_plan/capacity_plan.rb
https://github.com/tumblr/jetpants/blob/master/plugins/capacity_plan/capacity_plan.rb
Apache-2.0
def snapshot_autoinc(timestamp) date = Time.now.strftime("%Y-%m-%d") if Jetpants.plugins['capacity_plan']['autoinc_ignore_list'].nil? pools_list = Jetpants.topology.pools else ignore_list = Jetpants.plugins['capacity_plan']['autoinc_ignore_list'] ignore_list.map! { |p| Jetpants.topology.pool(p) } pools_list = Jetpants.topology.pools.reject { |p| ignore_list.include? p } end query = %Q| SELECT * FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'test') AND LOCATE('auto_increment', EXTRA) > 0 | pools_list.each do |p| slave = p.standby_slaves.first if !slave.nil? slave.query_return_array(query).each do |row| table_name = row[:TABLE_NAME] schema_name = row[:TABLE_SCHEMA] column_name = row[:COLUMN_NAME] column_type = row[:COLUMN_TYPE] data_type = row[:DATA_TYPE] data_type_max_value = max_value(data_type) unless column_type.split.last == "unsigned" data_type_max_value = (data_type_max_value / 2) - 1 end sql = "SELECT MAX(#{column_name}) as max_value FROM #{schema_name}.#{table_name}" max_val = '' slave.query_return_array(sql).each do |row| max_val = row[:max_value] end @@db.query('INSERT INTO auto_inc_checker (`timestamp`, `pool`, `table_name`, `column_name`, `column_type`, `max_val`, `data_type_max`) values (?, ?, ?, ?, ?, ?, ?)', timestamp, slave.pool.to_s, table_name, column_name, data_type, max_val, data_type_max_value) end end end end
Gets the auto-increment ratios for all pools.
snapshot_autoinc
ruby
tumblr/jetpants
plugins/capacity_plan/capacity_plan.rb
https://github.com/tumblr/jetpants/blob/master/plugins/capacity_plan/capacity_plan.rb
Apache-2.0
def to_db raise "Can only call to_db on SERVER_NODE assets, but #{self} has type #{type}" unless type.upcase == 'SERVER_NODE' backend_ip_address.to_db end
Convert a Collins::Asset to a Jetpants::DB. Requires asset TYPE to be SERVER_NODE.
to_db
ruby
tumblr/jetpants
plugins/jetpants_collins/asset.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/asset.rb
Apache-2.0
def to_host raise "Can only call to_host on SERVER_NODE assets, but #{self} has type #{type}" unless type.upcase == 'SERVER_NODE' backend_ip_address.to_host end
Convert a Collins::Asset to a Jetpants::Host. Requires asset TYPE to be SERVER_NODE.
to_host
ruby
tumblr/jetpants
plugins/jetpants_collins/asset.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/asset.rb
Apache-2.0
def to_shard_pool raise "Can only call to_shard_pool on CONFIGURATION assets, but #{self} has type #{type}" unless type.upcase == 'CONFIGURATION' raise "Unknown primary role #{primary_role} for configuration asset #{self}" unless primary_role.upcase == 'MYSQL_SHARD_POOL' raise "No shard_pool attribute set on asset #{self}" unless shard_pool && shard_pool.length > 0 Jetpants::ShardPool.new(shard_pool) end
Convert a Collins::Asset to a Jetpants::ShardPool. Requires asset TYPE to be CONFIGURATION.
to_shard_pool
ruby
tumblr/jetpants
plugins/jetpants_collins/asset.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/asset.rb
Apache-2.0
def to_pool raise "Can only call to_pool on CONFIGURATION assets, but #{self} has type #{type}" unless type.upcase == 'CONFIGURATION' raise "Unknown primary role #{primary_role} for configuration asset #{self}" unless ['MYSQL_POOL', 'MYSQL_SHARD'].include?(primary_role.upcase) raise "No pool attribute set on asset #{self}" unless pool && pool.length > 0 # if this node is iniitalizing we know there will be no server assets # associated with it if !shard_state.nil? and shard_state.upcase == "INITIALIZING" master_assets = [] else # Find the master(s) for this pool. If we got back multiple masters, first # try ignoring the remote datacenter ones master_assets = Jetpants.topology.server_node_assets(pool.downcase, :master) end if master_assets.count > 1 results = master_assets.select {|a| a.location.nil? || a.location.upcase == Jetpants::Plugin::JetCollins.datacenter} master_assets = results if results.count > 0 end puts "WARNING: multiple masters found for pool #{pool}; using first match" if master_assets.count > 1 if master_assets.count == 0 puts "WARNING: no masters found for pool #{pool}; ignoring pool entirely" result = nil elsif primary_role.upcase == 'MYSQL_POOL' result = Jetpants::Pool.new(pool.downcase, master_assets.first.to_db) if aliases aliases.split(',').each {|a| result.add_alias(a.downcase)} end result.slave_name = slave_pool_name if slave_pool_name result.master_read_weight = master_read_weight if master_read_weight active_slave_assets = Jetpants.topology.server_node_assets(pool.downcase, :active_slave) active_slave_assets.each do |asset| weight = asset.slave_weight && asset.slave_weight.to_i > 0 ? asset.slave_weight.to_i : 100 result.has_active_slave(asset.to_db, weight) end elsif primary_role.upcase == 'MYSQL_SHARD' result = Jetpants::Shard.new(shard_min_id.to_i, shard_max_id == 'INFINITY' ? 'INFINITY' : shard_max_id.to_i, master_assets.first.to_db, shard_state.downcase.to_sym, shard_pool) # We'll need to set up the parent/child relationship if a shard split is in progress, # BUT we need to wait to do that later since the shards may have been returned by # Collins out-of-order, so the parent shard object might not exist yet. # For now we just remember the NAME of the parent shard. result.has_parent = shard_parent if shard_parent else raise "Unknown configuration asset primary role #{primary_role} for asset #{self}" end result end
Convert a Collins::Asset to either a Jetpants::Pool or a Jetpants::Shard, depending on the value of PRIMARY_ROLE. Requires asset TYPE to be CONFIGURATION.
to_pool
ruby
tumblr/jetpants
plugins/jetpants_collins/asset.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/asset.rb
Apache-2.0
def is_standby? !(running?) || (is_slave? && !taking_connections? && collins_secondary_role == 'standby_slave') end
METHOD OVERRIDES: Adds an actual Collins check to confirm a machine is a standby.
is_standby?
ruby
tumblr/jetpants
plugins/jetpants_collins/db.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/db.rb
Apache-2.0
def for_backups? hostname.start_with?('backup') || in_remote_datacenter? end
Treat any node outside of current data center as being for backups. This prevents inadvertent cross-data-center master promotion.
for_backups?
ruby
tumblr/jetpants
plugins/jetpants_collins/db.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/db.rb
Apache-2.0
def after_probe_master unless @running if collins_secondary_role == 'master' @master = false else pool = Jetpants.topology.pool(collins_pool) @master = pool.master if pool end end # We completely ignore cross-data-center master unless inter_dc_mode is enabled. # This may change in a future Jetpants release, once we support tiered replication more cleanly. if @master && @master.in_remote_datacenter? && !Jetpants::Plugin::JetCollins.inter_dc_mode? @remote_master = @master # keep track of it, in case we need to know later @master = false elsif !@master in_remote_datacenter? # just calling to cache for current node, before we probe its slaves, so that its slaves don't need to query Collins end end
CALLBACKS: Determines the master from Collins if the machine is unreachable or MySQL isn't running.
after_probe_master
ruby
tumblr/jetpants
plugins/jetpants_collins/db.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/db.rb
Apache-2.0
def after_probe_slaves # If this machine has a master AND has slaves of its own AND is in another data center, # ignore its slaves entirely unless inter_dc_mode is enabled. # This may change in a future Jetpants release, once we support tiered replication more cleanly. @slaves = [] if @running && @master && @slaves.count > 0 && in_remote_datacenter? && !Jetpants::Plugin::JetCollins.inter_dc_mode? unless @running p = Jetpants.topology.pool(self) @slaves = (p ? p.slaves_according_to_collins : []) end end
Determine slaves from Collins if machine is unreachable or MySQL isn't running
after_probe_slaves
ruby
tumblr/jetpants
plugins/jetpants_collins/db.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/db.rb
Apache-2.0
def in_remote_datacenter? @host.collins_location != Plugin::JetCollins.datacenter end
NEW METHODS: Returns true if this database is located in a different datacenter than the one jetpants_collins has been configured for, false otherwise.
in_remote_datacenter?
ruby
tumblr/jetpants
plugins/jetpants_collins/db.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/db.rb
Apache-2.0
def pool(create_if_missing=false) result = Jetpants.topology.pool(collins_pool || self) if !result if master result = master.pool(create_if_missing) elsif create_if_missing result = Pool.new('anon_pool_' + ip.tr('.', ''), self) def result.sync_configuration; end end end result end
Returns the Jetpants::Pool that this instance belongs to, if any. Can optionally create an anonymous pool if no pool was found. This anonymous pool intentionally has a blank sync_configuration implementation. Rely on Collins for pool information if it is already in one.
pool
ruby
tumblr/jetpants
plugins/jetpants_collins/db.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/db.rb
Apache-2.0
def collins_location return @collins_location if @collins_location ca = collins_asset @collins_location ||= (ca ? ca.location || Plugin::JetCollins.datacenter : 'unknown') @collins_location.upcase! @collins_location end
Returns which datacenter this host is in. Only a getter, intentionally no setter.
collins_location
ruby
tumblr/jetpants
plugins/jetpants_collins/host.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/host.rb
Apache-2.0
def method_missing(name, *args, &block) if service.respond_to? name Jetpants.with_retries( Jetpants.plugins['jetpants_collins']['retries'], Jetpants.plugins['jetpants_collins']['max_retry_backoff'] ) { service.send name, *args, &block } else super end end
We delegate missing class (module) methods to the collins API client, if it responds to them.
method_missing
ruby
tumblr/jetpants
plugins/jetpants_collins/jetpants_collins.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/jetpants_collins.rb
Apache-2.0
def included(base) base.class_eval do def self.collins_attr_accessor(*fields) fields.each do |field| define_method("collins_#{field}") do Jetpants.with_retries( Jetpants.plugins['jetpants_collins']['retries'], Jetpants.plugins['jetpants_collins']['max_retry_backoff'] ) { (collins_get(field) || '').downcase } end define_method("collins_#{field}=") do |value| Jetpants.with_retries( Jetpants.plugins['jetpants_collins']['retries'], Jetpants.plugins['jetpants_collins']['max_retry_backoff'] ) { result = collins_set(field, value) Jetpants.with_retries( Jetpants.plugins['jetpants_collins']['retries'], Jetpants.plugins['jetpants_collins']['max_retry_backoff'] ) do if field == :status && value.include?(':') fetched = ("#{self.collins_status}:#{self.collins_state}").to_s.downcase else fetched = (collins_get(field) || '').to_s.downcase end expected = (value || '').to_s.downcase if fetched != expected raise "Retrying until Collins reports #{field} changed from '#{fetched}' to '#{expected}'." end end result } end end end # We make these 4 accessors available to ANY class including this mixin collins_attr_accessor :primary_role, :secondary_role, :pool, :status, :state end end
Eigenclass mix-in for collins_attr_accessor. Calling "collins_attr_accessor :foo" in your class body will create methods collins_foo and collins_foo= which automatically get/set the Collins attribute foo.
included
ruby
tumblr/jetpants
plugins/jetpants_collins/jetpants_collins.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/jetpants_collins.rb
Apache-2.0
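An illustrative use of the generated accessors on a DB object; the pool name shown is hypothetical:

db.collins_pool                              # => e.g. "shard-0001-0002" (values come back downcased)
db.collins_secondary_role = 'STANDBY_SLAVE'  # writes, then re-reads until Collins reports the expected value
db.collins_status = 'Allocated:RUNNING'      # the status setter accepts combined status:state values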
def datacenter (Jetpants.plugins['jetpants_collins']['datacenter'] || 'UNKNOWN-DC').upcase end
Returns the 'datacenter' config option for this plugin, or 'UNKNOWN-DC' if none has been configured. This only matters in multi-datacenter Collins topologies.
datacenter
ruby
tumblr/jetpants
plugins/jetpants_collins/jetpants_collins.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/jetpants_collins.rb
Apache-2.0
def enable_inter_dc_mode
  Jetpants.plugins['jetpants_collins']['inter_dc_mode'] = true
  Jetpants.plugins['jetpants_collins']['remote_lookup'] = true
end
Ordinarily, in a multi-datacenter environment, jetpants_collins places a number of restrictions on interacting with assets that aren't in the local datacenter, for safety's sake and to simplify how hierarchical replication trees are represented:
* Won't change Collins attributes on remote server node assets.
* If a local node has a master in a remote datacenter, it is ignored/hidden.
* If a local node has a slave in a remote datacenter, it's treated as a backup_slave, in order to prevent cross-datacenter master promotions. If any of these remote-datacenter slaves have slaves of their own, they're ignored/hidden.
You may DISABLE these restrictions by calling enable_inter_dc_mode. Normally you do NOT want to do this, except in special situations like a migration between datacenters.
enable_inter_dc_mode
ruby
tumblr/jetpants
plugins/jetpants_collins/jetpants_collins.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/jetpants_collins.rb
Apache-2.0
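A minimal sketch of when you might flip this on, assuming a one-off migration script; reloading pools afterwards is the example's assumption about workflow:

# Only for special cases such as a cross-datacenter migration
Jetpants::Plugin::JetCollins.enable_inter_dc_mode
Jetpants.topology.load_pools   # re-read Collins with remote assets now visible (assumed workflow)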
def inter_dc_mode? Jetpants.plugins['jetpants_collins']['inter_dc_mode'] || false end
Returns true if enable_inter_dc_mode has been called, false otherwise.
inter_dc_mode?
ruby
tumblr/jetpants
plugins/jetpants_collins/jetpants_collins.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/jetpants_collins.rb
Apache-2.0
def collins_asset raise "Any class including Plugin::JetCollins must also implement collins_asset instance method!" end
INSTANCE (MIX-IN) METHODS: The base class needs to implement this!
collins_asset
ruby
tumblr/jetpants
plugins/jetpants_collins/jetpants_collins.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/jetpants_collins.rb
Apache-2.0
def collins_status_state values = collins_get :status, :state "#{values[:status]}:#{values[:state]}".downcase end
Returns a single downcased "status:state" string, useful when trying to compare both fields at once.
collins_status_state
ruby
tumblr/jetpants
plugins/jetpants_collins/jetpants_collins.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/jetpants_collins.rb
Apache-2.0
def to_db
  if self.to_s =~ /(\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})/
    Jetpants::DB.new(self.to_s)
  else
    selector = {
      :hostname => "^#{self.to_s}$",
      :details  => true,
      :size     => 1,
      :page     => 0
    }
    assets = Jetpants::Plugin::JetCollins.find(selector, false)
    raise "Invalid hostname: #{self}" if assets.empty?
    assets.first.to_db
  end
end
Converts self to a Jetpants::DB by way of to_s. Only really useful for Strings containing IP addresses, or Objects whose to_s method returns an IP address as a string.
to_db
ruby
tumblr/jetpants
plugins/jetpants_collins/monkeypatch.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/monkeypatch.rb
Apache-2.0
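A usage sketch for the monkeypatch above; the hostname is hypothetical:

'10.42.3.4'.to_db        # IP string: builds a Jetpants::DB directly
'db-backup0001'.to_db    # anything else: exact-hostname Collins search, raises if no asset matches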
def collins_asset(create_if_missing=false) selector = { operation: 'and', details: true, type: 'CONFIGURATION', primary_role: 'MYSQL_POOL', pool: "^#{@name.upcase}$", status: 'Allocated', } selector[:remoteLookup] = true if Jetpants.plugins['jetpants_collins']['remote_lookup'] results = Plugin::JetCollins.find selector, !create_if_missing # If we got back multiple results, try ignoring the remote datacenter ones if results.count > 1 filtered_results = results.select {|a| a.location.nil? || a.location.upcase == Plugin::JetCollins.datacenter} results = filtered_results if filtered_results.count > 0 end if results.count > 1 raise "Multiple configuration assets found for pool #{name}" elsif results.count == 0 && create_if_missing output "Could not find configuration asset for pool; creating now" new_tag = 'mysql-' + @name asset = Collins::Asset.new type: 'CONFIGURATION', tag: new_tag, status: 'Allocated' begin Plugin::JetCollins.create!(asset) rescue collins_set asset: asset, status: 'Allocated' end collins_set asset: asset, primary_role: 'MYSQL_POOL', pool: @name.upcase Plugin::JetCollins.get new_tag elsif results.count == 0 && !create_if_missing raise "Could not find configuration asset for pool #{name}" else results.first end end
Returns a Collins::Asset for this pool. Can optionally create one if not found.
collins_asset
ruby
tumblr/jetpants
plugins/jetpants_collins/pool.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/pool.rb
Apache-2.0
def sync_configuration asset = collins_asset(true) collins_set asset: asset, slave_pool_name: slave_name || '', aliases: aliases.join(',') || '', master_read_weight: master_read_weight [@master, slaves].flatten.each do |db| current_status = (db.collins_status || '').downcase db.collins_status = 'Allocated:RUNNING' unless current_status == 'maintenance' db.collins_pool = @name end @master.collins_secondary_role = 'MASTER' slaves(:active).each do |db| db.collins_secondary_role = 'ACTIVE_SLAVE' weight = @active_slave_weights[db] db.collins_slave_weight = (weight == 100 ? '' : weight) end slaves(:standby).each {|db| db.collins_secondary_role = 'STANDBY_SLAVE'} slaves(:backup).each {|db| db.collins_secondary_role = 'BACKUP_SLAVE'} @claimed_nodes = [] true end
METHOD OVERRIDES: Examines the current state of the pool (as known to Jetpants) and updates Collins to reflect this, in terms of the pool's configuration asset as well as the individual hosts.
sync_configuration
ruby
tumblr/jetpants
plugins/jetpants_collins/pool.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/pool.rb
Apache-2.0
def running_slaves(secondary_role=false) slaves.select { |slave| collins_secondary_role = Jetpants.topology.normalize_roles(slave.collins_secondary_role).first rescue false (slave.collins_status_state.downcase == 'allocated:running') && (secondary_role ? collins_secondary_role == secondary_role : true) } end
Returns the slaves whose Collins status:state is Allocated:RUNNING, optionally restricted to a given secondary role.
running_slaves
ruby
tumblr/jetpants
plugins/jetpants_collins/pool.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/pool.rb
Apache-2.0
def after_remove_slave!(slave_db) slave_db.collins_pool = slave_db.collins_secondary_role = slave_db.collins_slave_weight = '' current_status = (slave_db.collins_status || '').downcase slave_db.collins_status = 'Unallocated' unless current_status == 'maintenance' end
CALLBACKS: Pushes slave removal to Collins. (Normally this type of logic is handled by Pool#sync_configuration, but that won't handle this case, since sync_configuration only updates hosts still in the pool.)
after_remove_slave!
ruby
tumblr/jetpants
plugins/jetpants_collins/pool.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/pool.rb
Apache-2.0
def after_master_promotion!(promoted, enslave_old_master=true) Jetpants.topology.clear_asset_cache # Find the master asset(s) for this pool, filtering down to only current datacenter assets = Jetpants.topology.server_node_assets(@name, :master) assets.reject! {|a| a.location && a.location.upcase != Plugin::JetCollins.datacenter} assets.map(&:to_db).each do |db| if db != @master || !db.running? db.collins_pool = '' db.collins_secondary_role = '' if enslave_old_master db.output 'REMINDER: you must manually put this host into Maintenance status in Collins' unless db.collins_status.downcase == 'maintenance' else db.collins_status = 'Unallocated' end end end # Clean up any slaves that are no longer slaving (again only looking at current datacenter) assets = Jetpants.topology.server_node_assets(@name, :slave) assets.reject! {|a| a.location && a.location.upcase != Plugin::JetCollins.datacenter} assets.map(&:to_db).each do |db| if db.master != @master || !db.running? || db.pool != self db.output "Not replicating from new master, removing from pool #{self}" db.collins_pool = '' db.collins_secondary_role = '' db.collins_status = 'Unallocated' end end end
If the demoted master was offline, records some info in Collins; otherwise there would be two masters listed.
after_master_promotion!
ruby
tumblr/jetpants
plugins/jetpants_collins/pool.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/pool.rb
Apache-2.0
def slaves_according_to_collins results = [] Jetpants.topology.server_node_assets(@name, :slave).each do |asset| slave = asset.to_db output "Collins found slave #{slave.ip} (#{slave.hostname})" results << slave end results end
Called from DB#after_probe_master and DB#after_probe_slaves for machines that are unreachable via SSH, or reachable but MySQL isn't running.
slaves_according_to_collins
ruby
tumblr/jetpants
plugins/jetpants_collins/pool.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/pool.rb
Apache-2.0
def collins_asset(create_if_missing=false) selector = { operation: 'and', details: true, type: 'CONFIGURATION', primary_role: '^MYSQL_SHARD$', shard_min_id: "^#{@min_id}$", shard_max_id: "^#{@max_id}$", shard_pool: "^#{@shard_pool.name}$" } selector[:remoteLookup] = true if Jetpants.plugins['jetpants_collins']['remote_lookup'] results = Plugin::JetCollins.find selector, !create_if_missing # If we got back multiple results, try ignoring the remote datacenter ones if results.count > 1 filtered_results = results.select {|a| a.location.nil? || a.location.upcase == Plugin::JetCollins.datacenter} results = filtered_results if filtered_results.count > 0 end if results.count > 1 raise "Multiple configuration assets found for pool #{name}" elsif results.count == 0 && create_if_missing output "Could not find configuration asset for pool; creating now" new_tag = 'mysql-' + @name asset = Collins::Asset.new type: 'CONFIGURATION', tag: new_tag, status: 'Allocated' begin Plugin::JetCollins.create!(asset) rescue collins_set asset: asset, status: 'Allocated' end collins_set asset: asset, primary_role: 'MYSQL_SHARD', pool: @name.upcase, shard_min_id: @min_id, shard_max_id: @max_id, shard_pool: @shard_pool.name.upcase Plugin::JetCollins.get new_tag elsif results.count == 0 && !create_if_missing raise "Could not find configuration asset for pool #{name}" else results.first end end
Returns a Collins::Asset for this pool
collins_asset
ruby
tumblr/jetpants
plugins/jetpants_collins/shard.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/shard.rb
Apache-2.0
def sync_configuration status = case when in_config? then 'Allocated' when @state == :deprecated then 'Cancelled' when @state == :recycle then 'Decommissioned' else 'Provisioning' end collins_set asset: collins_asset(true), status: status, shard_state: @state.to_s.upcase, shard_parent: @parent ? @parent.name : '' if @state == :recycle [@master, @master.slaves].flatten.each do |db| db.collins_pool = '' db.collins_secondary_role = '' db.collins_status = 'Unallocated' end elsif @state != :initializing # Note that we don't call Pool#slaves here to get all 3 types in one shot, # because that potentially includes child shards, and we don't want to overwrite # their pool setting... [@master, active_slaves, standby_slaves, backup_slaves].flatten.each do |db| current_status = (db.collins_status || '').downcase db.collins_status = 'Allocated:RUNNING' unless current_status == 'maintenance' db.collins_pool = @name end @master.collins_secondary_role = 'MASTER' standby_slaves.each {|db| db.collins_secondary_role = 'STANDBY_SLAVE'} backup_slaves.each {|db| db.collins_secondary_role = 'BACKUP_SLAVE'} end # handle lockless master migration situations if @state == :child && master.master && !@parent to_be_ejected_master = master.master to_be_ejected_master.collins_secondary_role = :standby_slave # not accurate, but no better option for now end true end
METHOD OVERRIDES: Examines the current state of the pool (as known to Jetpants) and updates Collins to reflect this, in terms of the pool's configuration asset as well as the individual hosts.
sync_configuration
ruby
tumblr/jetpants
plugins/jetpants_collins/shard.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/shard.rb
Apache-2.0
def process_spare_selector_options(selector, options) # If you wanted to support an option of :role, and map this to the Collins # SECONDARY_ROLE attribute, you could implement this via: # selector[:secondary_role] = options[:role].to_s.downcase if options[:role] # This could be useful if, for example, you use a different hardware spec # for masters vs slaves. (doing so isn't really recommended, which is why # we omit this logic by default.) # return the selector selector end
METHODS THAT OTHER PLUGINS CAN OVERRIDE. IMPORTANT NOTE: this plugin does NOT implement write_config, since the format of your app configuration file depends entirely on your web framework! You will have to implement this yourself in a separate plugin; the recommended approach is to add serialization methods to Pool and Shard, and call them on each @pool, writing out to a file or pinging a config service, depending on whatever your application uses. Handles extra options for querying spare nodes: takes a Collins selector hash and an options hash, and returns a potentially-modified Collins selector hash. The default implementation implements no special logic. Custom plugins (loaded AFTER jetpants_collins is loaded) can override this method to manipulate the selector; see the commented-out example in the method body.
process_spare_selector_options
ruby
tumblr/jetpants
plugins/jetpants_collins/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/topology.rb
Apache-2.0
def load_pools load_shard_pools if @shard_pools.nil? # We keep a cache of Collins::Asset objects, organized as pool_name => role => [asset, asset, ...] @pool_role_assets = {} # Populate the cache for all master and active_slave nodes. (We restrict to these types # because this is sufficient for creating Pool objects and generating app config files.) Jetpants.topology.server_node_assets(false, :master, :active_slave) @pools = configuration_assets('MYSQL_POOL', 'MYSQL_SHARD').map(&:to_pool) @pools.compact! # remove nils from pools that had no master @pools.sort_by! { |p| sort_pools_callback p } # Set up parent/child relationships between shards currently being split. # We do this in a separate step afterwards so that Topology#pool can find the parent # by name, regardless of ordering in Collins @pools.select {|p| p.has_parent}.each do |p| parent = pool(p.has_parent) or raise "Cannot find parent shard named #{p.has_parent}" parent.add_child(p) end true end
Initializes list of pools + shards from Collins
load_pools
ruby
tumblr/jetpants
plugins/jetpants_collins/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/topology.rb
Apache-2.0
def claim_spares(count, options={}) return [] if count == 0 spare_count = count_spares(options) raise "Not enough spare machines available! Found #{spare_count}, needed #{count}" if spare_count < count assets = query_spare_assets(count, options) claimed_dbs = assets.map do |asset| db = asset.to_db db.claim! if options[:for_pool] options[:for_pool].claimed_nodes << db unless options[:for_pool].claimed_nodes.include? db end db end if options[:for_pool] compare_pool = options[:for_pool] elsif options[:like] && options[:like].pool compare_pool = options[:like].pool else compare_pool = false end if(compare_pool && claimed_dbs.select{|db| db.proximity_score(compare_pool) > 0}.count > 0) compare_pool.output "Unable to claim #{count} nodes with an ideal proximity score!" end claimed_dbs end
Returns (count) DB objects. Pulls from machines in the spare state and converts them to the Allocated status. You can pass in :role to request spares with a particular secondary_role
claim_spares
ruby
tumblr/jetpants
plugins/jetpants_collins/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/topology.rb
Apache-2.0
def count_spares(options={}) options[:no_error_on_zero] = true query_spare_assets(100, options).count end
This method won't ever return a number higher than 100, but that's not a problem, since no single operation requires that many spares
count_spares
ruby
tumblr/jetpants
plugins/jetpants_collins/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/topology.rb
Apache-2.0
def spares(options={}) query_spare_assets(100, options).map(&:to_db) end
This method won't ever return more than 100 nodes, but that's not a problem, since no single operation requires that many spares.
spares
ruby
tumblr/jetpants
plugins/jetpants_collins/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/topology.rb
Apache-2.0
def server_node_assets(pool_name=false, *roles) roles = normalize_roles(roles) if roles.count > 0 # Check for previously-cached result. (Only usable if a pool_name supplied.) if pool_name && @pool_role_assets[pool_name] if roles.count > 0 && roles.all? {|r| @pool_role_assets[pool_name].has_key? r} return roles.map {|r| @pool_role_assets[pool_name][r]}.flatten elsif roles.count == 0 && valid_roles.all? {|r| @pool_role_assets[pool_name].has_key? r} return @pool_role_assets[pool_name].values.flatten end end per_page = Jetpants.plugins['jetpants_collins']['selector_page_size'] || 50 selector = { operation: 'and', details: true, size: per_page, query: 'primary_role = ^DATABASE$ AND type = ^SERVER_NODE$ AND status != ^DECOMMISSIONED$ AND status !=^MAINTENANCE$' } selector[:remoteLookup] = true if Jetpants.plugins['jetpants_collins']['remote_lookup'] selector[:query] += " AND pool = ^#{pool_name}$" if pool_name if roles.count == 1 selector[:query] += " AND secondary_role = ^#{roles.first}$" elsif roles.count > 1 values = roles.map {|r| "secondary_role = ^#{r}$"} selector[:query] += ' AND (' + values.join(' OR ') + ')' end assets = [] done = false page = 0 # Query Collins one or more times, until we've seen all the results until done do selector[:page] = page # find() apparently alters the selector object now, so we dup it # also force JetCollins to retry requests to the Collins server results = Plugin::JetCollins.find selector.dup, true, page == 0 done = (results.count < per_page) || (results.count == 0 && page > 0) page += 1 assets.concat(results.select {|a| a.pool}) # filter out any spare nodes, which will have no pool set end # Next we need to update our @pool_role_assets cache. But first let's set it to [] for each pool/role # that we queried. This intentionally nukes any previous cached data, and also allows us to differentiate # between an empty result and a cache miss. roles = valid_roles if roles.count == 0 seen_pools = assets.map {|a| a.pool.downcase} seen_pools << pool_name if pool_name seen_pools.uniq.each do |p| @pool_role_assets[p] ||= {} roles.each {|r| @pool_role_assets[p][r] = []} end # Filter assets.select! {|a| a.pool && a.secondary_role && %w(allocated maintenance).include?(a.status.downcase)} # Cache assets.each {|a| @pool_role_assets[a.pool.downcase][a.secondary_role.downcase.to_sym] << a} # Return assets end
Returns an array of Collins::Asset objects meeting the given criteria. Caches the result for subsequent use. Optionally supply a pool name to restrict the result to that pool. Optionally supply one or more role symbols (:master, :active_slave, :standby_slave, :backup_slave) to filter the result to just those SECONDARY_ROLE values in Collins.
server_node_assets
ruby
tumblr/jetpants
plugins/jetpants_collins/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/topology.rb
Apache-2.0
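A short illustrative sketch of the caching behavior described above, assuming the method is reachable from a console session; the pool name is hypothetical. The first call queries Collins, the second is served from the @pool_role_assets cache.

assets = Jetpants.topology.server_node_assets('posts', :standby_slave)
assets.each {|a| puts "#{a.tag} #{a.secondary_role}"}
Jetpants.topology.server_node_assets('posts', :standby_slave)  # cache hit, no Collins round-trip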
def configuration_assets(*primary_roles)
  raise "Must supply at least one primary_role" if primary_roles.count < 1

  per_page = Jetpants.plugins['jetpants_collins']['selector_page_size'] || 50
  selector = {
    operation: 'and',
    details: true,
    size: per_page,
    query: 'status != ^DECOMMISSIONED$ AND type = ^CONFIGURATION$',
  }

  if primary_roles.count == 1
    selector[:primary_role] = primary_roles.first
  else
    values = primary_roles.map {|r| "primary_role = ^#{r}$"}
    selector[:query] += ' AND (' + values.join(' OR ') + ')'
  end

  selector[:remoteLookup] = true if Jetpants.plugins['jetpants_collins']['remote_lookup']

  done = false
  page = 0
  assets = []
  until done do
    selector[:page] = page
    # find() apparently alters the selector object now, so we dup it
    # also force JetCollins to retry requests to the Collins server
    page_of_results = Plugin::JetCollins.find selector.dup, true, page == 0
    assets += page_of_results
    page += 1
    done = (page_of_results.count < per_page) || (page_of_results.count == 0 && page > 0)
  end

  # If remote lookup is enabled, remove the remote copy of any pool that exists
  # in both local and remote datacenters.
  if Jetpants.plugins['jetpants_collins']['remote_lookup']
    dc_pool_map = {Plugin::JetCollins.datacenter => {}}
    assets.each do |a|
      location = a.location || Plugin::JetCollins.datacenter
      pool = a.pool ? a.pool.downcase : a.tag[6..-1].downcase # if no pool, strip 'mysql-' off front and use that
      dc_pool_map[location] ||= {}
      dc_pool_map[location][pool] = a
    end
    # Grab everything from current DC first (higher priority over other
    # datacenters), then grab everything from remote DCs.
    final_assets = dc_pool_map[Plugin::JetCollins.datacenter].values
    dc_pool_map.each do |dc, pool_to_assets|
      next if dc == Plugin::JetCollins.datacenter
      pool_to_assets.each do |pool, a|
        final_assets << a unless dc_pool_map[Plugin::JetCollins.datacenter][pool]
      end
    end
    assets = final_assets
  end
  assets
end
Returns an array of configuration assets with the supplied primary role(s)
configuration_assets
ruby
tumblr/jetpants
plugins/jetpants_collins/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/topology.rb
Apache-2.0
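An illustrative call, hedged: the primary_role strings below are assumptions about how pool and shard configuration assets are tagged in Collins and may differ per installation.

pool_assets  = Jetpants.topology.configuration_assets('MYSQL_POOL')
shard_assets = Jetpants.topology.configuration_assets('MYSQL_SHARD')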
def query_spare_assets(count, options={})
  per_page = Jetpants.plugins['jetpants_collins']['selector_page_size'] || 50
  # Intentionally no remoteLookup=true here. We only want to grab spare nodes
  # from the datacenter that Jetpants is running in.
  selector = {
    operation: 'and',
    details: true,
    type: 'SERVER_NODE',
    status: 'Allocated',
    state: 'SPARE',
    primary_role: 'DATABASE',
    size: per_page,
  }
  selector = process_spare_selector_options(selector, options)

  source = options[:like]

  done = false
  page = 0
  nodes = []
  until done do
    selector[:page] = page
    error_on_zero = page == 0 && !options[:no_error_on_zero]
    # find() apparently alters the selector object now, so we dup it
    # also force JetCollins to retry requests to the Collins server
    page_of_results = Plugin::JetCollins.find selector.dup, true, error_on_zero
    nodes += page_of_results
    done = (page_of_results.count < per_page) || (page_of_results.count == 0 && page > 0)
    page += 1
  end

  keep_assets = []
  nodes.map(&:to_db).concurrent_each {|db| db.probe rescue nil}
  nodes.concurrent_each do |node|
    db = node.to_db
    if(db.usable_spare? &&
        (
          !source ||
          (!source.pool && db.usable_with?(source)) ||
          (
            (!options[:for_pool] && source.pool && db.usable_in?(source.pool)) ||
            (options[:for_pool] && db.usable_in?(options[:for_pool]))
          )
        )
      )
      keep_assets << node
    end
  end

  if options[:for_pool]
    compare_pool = options[:for_pool]
  elsif source && source.pool
    compare_pool = source.pool
  else
    compare_pool = false
  end

  # here we compare nodes against the optionally provided source to attempt to
  # claim a node which is not physically local to the source nodes
  if compare_pool
    keep_assets = sort_assets_for_pool(compare_pool, keep_assets)
  end

  keep_assets.slice(0,count)
end
Helper method to query Collins for spare DBs.
query_spare_assets
ruby
tumblr/jetpants
plugins/jetpants_collins/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/jetpants_collins/topology.rb
Apache-2.0
def tables_to_merge(shard_pool)
  tables = Table.from_config('sharded_tables', shard_pool)
  table_list = []
  if (!Jetpants.plugins['merge_helper'].nil? && Jetpants.plugins['merge_helper'].has_key?('table_list'))
    table_list = Jetpants.plugins['merge_helper']['table_list']
  end
  tables.select! { |table| table_list.include? table.name } unless table_list.empty?
  tables
end
Provide a config hook to specify a list of tables to merge, overriding the sharded_tables list
tables_to_merge
ruby
tumblr/jetpants
plugins/merge_helper/merge_helper.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/merge_helper.rb
Apache-2.0
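A hedged sketch of the override described above. The plugin config fragment and the table/pool names are hypothetical, and the call site assumes the helper is reachable from wherever merge logic runs.

# In the Jetpants config file (hypothetical fragment):
#   plugins:
#     merge_helper:
#       table_list: ['posts', 'post_tags']
tables = tables_to_merge('primary')   # restricted to table_list when it is set
tables.map(&:name)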
def probe_master
  return unless running?
  raise "Attempting to probe a database without aggregation capabilities as an aggregate node" unless aggregator?
  probe_aggregate_nodes
end
override the single master state probe with a probe of all replication sources
probe_master
ruby
tumblr/jetpants
plugins/merge_helper/lib/aggregator.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/lib/aggregator.rb
Apache-2.0
def probe_aggregate_nodes
  @aggregating_node_list = []
  @replication_states = {}
  all_slave_statuses.each do |status|
    aggregate_node = DB.new(status[:master_host], status[:master_port])
    @aggregating_node_list << aggregate_node
    if status[:slave_io_running] != status[:slave_sql_running]
      output "One replication thread is stopped and the other is not for #{status[:name]}."
      if Jetpants.verify_replication
        output "You must repair this node manually, OR remove it from its pool permanently if it is unrecoverable."
        raise "Fatal replication problem on #{self}"
      end
      aggregate_pause_replication(aggregate_node)
      @replication_states[aggregate_node] = :paused
    else
      if status[:slave_io_running].downcase == 'yes'
        @replication_states[aggregate_node] = :running
      else
        @replication_states[aggregate_node] = :paused
      end
    end
  end
end
uses multi-source replication semantics to build a list of replication sources
probe_aggregate_nodes
ruby
tumblr/jetpants
plugins/merge_helper/lib/aggregator.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/lib/aggregator.rb
Apache-2.0
def add_node_to_aggregate(node, option_hash = {})
  raise "Attempting to add a node to aggregate to a non-aggregation node" unless aggregator?
  raise "Attempting to add an invalid aggregation source" unless node
  raise "Attempting to add a node that is already being aggregated" if aggregating_for? node

  @replication_states ||= {}

  logfile = option_hash[:log_file]
  pos = option_hash[:log_pos]
  if !(logfile && pos)
    raise "Cannot use coordinates of a new master that is receiving updates" if node.master && ! node.repl_paused?
    logfile, pos = node.binlog_coordinates
  end

  repl_user = option_hash[:user] || replication_credentials[:user]
  repl_pass = option_hash[:password] || replication_credentials[:pass]

  result = mysql_root_cmd "CHANGE MASTER '#{node.pool}' TO " +
    "MASTER_HOST='#{node.ip}', " +
    "MASTER_PORT=#{node.port}, " +
    "MASTER_LOG_FILE='#{logfile}', " +
    "MASTER_LOG_POS=#{pos}, " +
    "MASTER_USER='#{repl_user}', " +
    "MASTER_PASSWORD='#{repl_pass}'"

  output "Adding node #{node} to list of aggregation data sources with coordinates (#{logfile}, #{pos}). #{result}"

  @replication_states[node] = :paused
  @aggregating_node_list << node
  node.slaves << self
end
Similar to operations that change the master. This method uses the aggregating node's pool as the connection name
add_node_to_aggregate
ruby
tumblr/jetpants
plugins/merge_helper/lib/aggregator.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/lib/aggregator.rb
Apache-2.0
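A hedged sketch of wiring an aggregator to two shard masters with explicit coordinates; the host IP, pool names, and binlog coordinates are hypothetical, and Aggregator is assumed to be constructible like any other Jetpants DB object.

aggregator = Aggregator.new('10.0.0.50')
sources = [Jetpants.topology.pool('posts-1-1000').master,
           Jetpants.topology.pool('posts-1001-2000').master]
sources.each do |m|
  aggregator.add_node_to_aggregate(m, log_file: 'mysql-bin.000123', log_pos: 4)
end
aggregator.resume_all_replication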
def pause_all_replication
  raise "Pausing replication with no aggregating nodes" if aggregating_nodes.empty?
  replication_names = aggregating_nodes.map{|node| node.pool}.join(", ")
  output "Pausing replication for #{replication_names}"
  output mysql_root_cmd "STOP ALL SLAVES"
  @replication_states.keys.each do |key|
    @replication_states[key] = :paused
  end
  @repl_paused = true
end
pauses replication from all sources, updating internal state
pause_all_replication
ruby
tumblr/jetpants
plugins/merge_helper/lib/aggregator.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/lib/aggregator.rb
Apache-2.0
def aggregate_resume_replication(node)
  raise "Attempting to resume aggregate replication for a node not in aggregation list" unless aggregating_for? node
  aggregate_repl_binlog_coordinates(node, true)
  output "Resuming aggregate replication from #{node.pool}."
  output mysql_root_cmd "START SLAVE '#{node.pool}'"
  @replication_states[node] = :running
  @repl_paused = !any_replication_running?
end
Resumes replication from a single aggregation source, updating internal state
aggregate_resume_replication
ruby
tumblr/jetpants
plugins/merge_helper/lib/aggregator.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/lib/aggregator.rb
Apache-2.0
def resume_all_replication
  raise "Resuming replication with no aggregating nodes" if aggregating_nodes.empty?
  paused_nodes = replication_states.select{|node,state| state == :paused}.keys.map(&:pool)
  output "Resuming replication for #{paused_nodes.join(", ")}"
  output mysql_root_cmd "START ALL SLAVES"
  @replication_states.keys.each do |key|
    @replication_states[key] = :running
  end
  @repl_paused = false
end
This is potentially dangerous, as it will start all replication streams, even ones that were deliberately left in a paused state
resume_all_replication
ruby
tumblr/jetpants
plugins/merge_helper/lib/aggregator.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/lib/aggregator.rb
Apache-2.0
def aggregate_catch_up_to_master(node, timeout=21600, threshold=3, poll_frequency=5)
  raise "Attempting to catch up aggregate replication for a node which is not in the aggregation list" unless aggregating_for? node
  aggregate_resume_replication(node) if replication_states[node] == :paused

  times_at_zero = 0
  start = Time.now.to_i
  output "Waiting to catch up to aggregation node"

  while (Time.now.to_i - start) < timeout
    lag = aggregate_seconds_behind_master(node)
    if lag == 0
      times_at_zero += 1
      if times_at_zero >= threshold
        output "Caught up to master \"#{node.pool}\" (#{node})."
        return true
      end
      sleep poll_frequency
    elsif lag.nil?
      aggregate_resume_replication(node)
      sleep 1
      raise "Unable to restart replication" if aggregate_seconds_behind_master(node).nil?
    else
      output "Currently #{lag} seconds behind master \"#{node.pool}\" (#{node})."
      times_at_zero = 0
      extra_sleep_time = (lag > 30000 ? 300 : (aggregate_seconds_behind_master(node) / 100).ceil)
      sleep poll_frequency + extra_sleep_time
    end
  end
  raise "This instance did not catch up to its aggregate data source \"#{node.pool}\" (#{node}) within #{timeout} seconds"
end
This is a lot of copypasta, punting on it for now until if/when we integrate more with core
aggregate_catch_up_to_master
ruby
tumblr/jetpants
plugins/merge_helper/lib/aggregator.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/lib/aggregator.rb
Apache-2.0
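A brief hedged follow-up showing how the catch-up helper above might be driven for every source; aggregating_nodes is assumed to return the list built by probe_aggregate_nodes, and the arguments shown are simply the method defaults.

aggregator.aggregating_nodes.each do |source|
  aggregator.aggregate_catch_up_to_master(source, 21600, 3, 5)
end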
def validate_aggregate_row_counts(restart_monitoring = true, tables = false)
  tables = Table.from_config('sharded_tables', aggregating_nodes.first.pool.shard_pool.name) unless tables
  query_nodes = [ slaves, aggregating_nodes ].flatten
  aggregating_nodes.concurrent_each do |node|
    node.disable_monitoring
    node.stop_query_killer
    node.pause_replication
  end
  begin
    node_counts = {}
    # gather counts for source nodes
    aggregating_nodes.concurrent_each do |node|
      counts = tables.limited_concurrent_map(8) { |table|
        rows = node.query_return_first_value("SELECT count(*) FROM #{table}")
        node.output "#{rows}", table
        [ table, rows ]
      }
      node_counts[node] = Hash[counts]
    end

    # wait until here to pause replication
    # to make sure all statements drain through
    slaves.concurrent_each do |node|
      node.disable_monitoring
      node.stop_query_killer
      node.pause_replication
    end

    # gather counts from slave
    # this should be the new shard master
    slave = slaves.last
    aggregate_counts = tables.limited_concurrent_map(8) { |table|
      rows = slave.query_return_first_value("SELECT count(*) FROM #{table}")
      slave.output "#{rows}", table
      [ table, rows ]
    }
    aggregate_counts = Hash[aggregate_counts]

    # sum up source node counts
    total_node_counts = {}
    aggregate_counts.keys.each do |key|
      total_node_counts[key] = 0
      aggregating_nodes.each do |node|
        total_node_counts[key] = total_node_counts[key] + node_counts[node][key]
      end
    end

    # validate row counts
    valid = true
    total_node_counts.each do |table,count|
      if total_node_counts[table] != aggregate_counts[table]
        valid = false
        output "Counts for #{table} did not match. #{aggregate_counts[table]} on combined node and #{total_node_counts[table]} on source nodes"
      end
    end
  ensure
    if restart_monitoring
      query_nodes.concurrent_each do |node|
        node.start_replication
        node.catch_up_to_master
        node.start_query_killer
        node.enable_monitoring
      end
    end
  end
  if valid
    output "Row counts match"
  else
    output "Row count mismatch! check output above"
  end
  valid
end
Performs a validation step of pausing replication and determining row counts on an aggregating server and its data sources. WARNING! This will pause replication on the nodes this machine aggregates from, and perform expensive row count operations on them
validate_aggregate_row_counts
ruby
tumblr/jetpants
plugins/merge_helper/lib/aggregator.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/lib/aggregator.rb
Apache-2.0
def ship_schema_to(node)
  export_schemata tables
  fast_copy_chain(
    Jetpants.export_location,
    node,
    port: 3307,
    files: [ "create_tables_#{node.port}.sql" ],
    overwrite: true
  )
end
Export and ship the table schema to a specified node WARNING! The export created will be destructive to any data on the destination node!
ship_schema_to
ruby
tumblr/jetpants
plugins/merge_helper/lib/db.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/lib/db.rb
Apache-2.0
def inject_counts(counts)
  @counts = counts
end
Allow the insertion of combined export counts for validation
inject_counts
ruby
tumblr/jetpants
plugins/merge_helper/lib/db.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/lib/db.rb
Apache-2.0
def validate_shard_data
  tables = Table.from_config('sharded_tables', shard_pool.name)
  table_statuses = {}
  tables.limited_concurrent_map(8) { |table|
    table.sharding_keys.each do |col|
      range_sql = table.sql_range_check col, @min_id, @max_id

      # use a standby slave, since this query will be very heavy and these shards are live
      db = standby_slaves.last
      result = db.query_return_array range_sql

      if result.first.values.first > 0
        table_statuses[table] = :invalid
      else
        table_statuses[table] = :valid
      end
    end
  }

  table_statuses
end
Runs queries against a slave in the pool to verify sharding key values
validate_shard_data
ruby
tumblr/jetpants
plugins/merge_helper/lib/shard.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/lib/shard.rb
Apache-2.0
def table_export_filenames(full_path = true, tables = false)
  export_filenames = []
  tables = Table.from_config('sharded_tables', shard_pool.name) unless tables
  export_filenames = tables.map { |table| table.export_filenames(@min_id, @max_id) }.flatten

  export_filenames.map!{ |filename| File.basename filename } unless full_path

  export_filenames
end
Generate a list of filenames for exported data
table_export_filenames
ruby
tumblr/jetpants
plugins/merge_helper/lib/shard.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/lib/shard.rb
Apache-2.0
def sql_range_check(sharding_key, min_id, max_id)
  "SELECT count(*) AS invalid_records FROM #{@name} WHERE #{sharding_key} > #{max_id} OR #{sharding_key} < #{min_id}"
end
Generate a query to determine if there are any rows outside of the shard id range
sql_range_check
ruby
tumblr/jetpants
plugins/merge_helper/lib/table.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/lib/table.rb
Apache-2.0
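For concreteness, the query generated above for a hypothetical table named posts with sharding key user_id and shard range 1..1000 would be: SELECT count(*) AS invalid_records FROM posts WHERE user_id > 1000 OR user_id < 1. A nonzero count means rows fall outside the shard's id range.

sql = table.sql_range_check('user_id', 1, 1000)   # 'table', the key, and the range are hypothetical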
def export_filenames(min_id, max_id)
  export_filenames = []
  (min_id..max_id).in_chunks(@chunks) do |min, max|
    export_filenames << export_file_path(min, max)
  end

  export_filenames
end
Generate a list of chunked filenames for import/export
export_filenames
ruby
tumblr/jetpants
plugins/merge_helper/lib/table.rb
https://github.com/tumblr/jetpants/blob/master/plugins/merge_helper/lib/table.rb
Apache-2.0
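A hedged sketch of the chunking behavior: with @chunks set to 4 (an assumption) and a shard covering ids 1..1000, in_chunks yields four sub-ranges and one export file path per sub-range. The exact names come from export_file_path and are not guessed here.

table.export_filenames(1, 1000).each {|path| puts path}   # 'table' stands in for any Table; expect 4 paths when @chunks == 4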
def collins_check_can_be_altered?
  collins_osc_state['current_state'] == "can_be_altered"
end
We're using Collins to contain our implicit state machine, which looks like this: can_be_altered -> being_altered -> can_be_altered, with a branch being_altered -> needs_rename -> can_be_altered. You enter needs_rename by passing `--skip-rename` at the start; after the alter table call completes according to pt-online-schema-change, you exit needs_rename by calling `alter_table_rename`. This method checks whether an alter is already running
collins_check_can_be_altered?
ruby
tumblr/jetpants
plugins/online_schema_change/lib/collins.rb
https://github.com/tumblr/jetpants/blob/master/plugins/online_schema_change/lib/collins.rb
Apache-2.0
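A hedged sketch of how the guard above is meant to gate an alter run against the state machine described in its docstring; start_alter_if_idle is an illustrative name, not the plugin's actual entry point.

def start_alter_if_idle(database, table, alter, skip_rename)
  raise "Another alter is already in progress" unless collins_check_can_be_altered?
  collins_set_being_altered!(database, table, alter, skip_rename)
  # ... run pt-online-schema-change here ...
  # With --skip-rename the pool stays in needs_rename until alter_table_rename
  # is invoked; otherwise clean up right away:
  collins_set_can_be_altered! unless skip_rename
end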
def collins_set_being_altered!(database, table, alter, skip_rename)
  self.collins_osc_state = {
    'running' => true,
    'started' => Time.now.to_i,
    'database' => database,
    'table' => table,
    'alter' => alter,
    'current_state' => "being_altered",
    'next_state' => skip_rename ? "needs_rename" : "can_be_altered"
  }
end
update collins for tracking alters, so there is only one running at a time
collins_set_being_altered!
ruby
tumblr/jetpants
plugins/online_schema_change/lib/collins.rb
https://github.com/tumblr/jetpants/blob/master/plugins/online_schema_change/lib/collins.rb
Apache-2.0
def collins_check_needs_rename?
  collins_osc_state['current_state'] == "needs_rename"
end
check to see if we can progress to needs_rename
collins_check_needs_rename?
ruby
tumblr/jetpants
plugins/online_schema_change/lib/collins.rb
https://github.com/tumblr/jetpants/blob/master/plugins/online_schema_change/lib/collins.rb
Apache-2.0
def collins_set_can_be_altered!
  self.collins_osc_state = nil
end
clean up collins after alter / rename
collins_set_can_be_altered!
ruby
tumblr/jetpants
plugins/online_schema_change/lib/collins.rb
https://github.com/tumblr/jetpants/blob/master/plugins/online_schema_change/lib/collins.rb
Apache-2.0
def has_space_for_alter?(table_name, database_name=nil)
  database_name ||= app_schema
  table_size = dir_size("#{mysql_directory}/#{database_name}/#{table_name}.ibd")
  table_size < mount_stats['available']
end
make sure there is enough space to do an online schema change
has_space_for_alter?
ruby
tumblr/jetpants
plugins/online_schema_change/lib/db.rb
https://github.com/tumblr/jetpants/blob/master/plugins/online_schema_change/lib/db.rb
Apache-2.0
def alter_table_shards(database, table, alter, dry_run=true, shard_pool=nil, skip_rename=false, arbitrary_options=[])
  shard_pool = Jetpants.topology.default_shard_pool if shard_pool.nil?
  my_shards = shards(shard_pool).dup
  ui = PreflightShardUI.new(my_shards)
  ui.run! do |shard,stage|
    # If we're past preflight, we want to not prompt the confirmation.
    force = stage == :all
    shard.alter_table(database, table, alter, dry_run, force, skip_rename, arbitrary_options)
  end
end
Run an alter table on all the sharded pools. If you specify dry run, it will run a dry run on all the shards; otherwise it will run on the first shard and ask if you want to continue on the rest of the shards, 10 shards at a time
alter_table_shards
ruby
tumblr/jetpants
plugins/online_schema_change/lib/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/online_schema_change/lib/topology.rb
Apache-2.0
def drop_old_alter_table_shards(database, table, shard_pool = nil)
  shard_pool = Jetpants.topology.default_shard_pool if shard_pool.nil?
  my_shards = shards(shard_pool).dup
  ui = PreflightShardUI.new(my_shards)
  ui.run! do |shard,_|
    shard.drop_old_alter_table(database, table)
  end
end
Drops the old table from the shards after an alter table, because we do not drop the old table in the OSC. It will do the first shard and ask if you want to continue; after that it will do each one serially
drop_old_alter_table_shards
ruby
tumblr/jetpants
plugins/online_schema_change/lib/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/online_schema_change/lib/topology.rb
Apache-2.0
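A hedged console sketch tying the two shard-wide helpers together: dry-run the alter everywhere, run it for real once the dry run looks good, then drop the leftover tables. The database, table, and DDL strings are hypothetical.

Jetpants.topology.alter_table_shards('myapp', 'posts', 'ADD COLUMN title VARCHAR(255)', true)
Jetpants.topology.alter_table_shards('myapp', 'posts', 'ADD COLUMN title VARCHAR(255)', false)
Jetpants.topology.drop_old_alter_table_shards('myapp', 'posts')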
def after_probe_master
  unless @running
    my_pool, my_role = Jetpants.topology.class.tracker.determine_pool_and_role(@ip, @port)
    @master = (my_role == 'MASTER' ? false : my_pool.master)
  end
end
#### CALLBACKS ############################################################ Determine master from asset tracker if machine is unreachable or MySQL isn't running.
after_probe_master
ruby
tumblr/jetpants
plugins/simple_tracker/lib/db.rb
https://github.com/tumblr/jetpants/blob/master/plugins/simple_tracker/lib/db.rb
Apache-2.0
def after_probe_slaves
  unless @running
    @slaves = Jetpants.topology.class.tracker.determine_slaves(@ip, @port)
  end
end
Determine slaves from asset tracker if machine is unreachable or MySQL isn't running
after_probe_slaves
ruby
tumblr/jetpants
plugins/simple_tracker/lib/db.rb
https://github.com/tumblr/jetpants/blob/master/plugins/simple_tracker/lib/db.rb
Apache-2.0
def to_hash(for_app_config=false)
  if for_app_config
    slave_data = active_slave_weights.map {|db, weight| {'host' => db.to_s, 'weight' => weight}}
  else
    slave_data = active_slave_weights.map {|db, weight| {'host' => db.to_s, 'weight' => weight, 'role' => 'ACTIVE_SLAVE'}} +
                 standby_slaves.map {|db| {'host' => db.to_s, 'role' => 'STANDBY_SLAVE'}} +
                 backup_slaves.map {|db| {'host' => db.to_s, 'role' => 'BACKUP_SLAVE'}}
  end

  {
    'name' => name,
    'aliases' => aliases,
    'slave_name' => slave_name,
    'master' => master.to_s,
    'master_read_weight' => master_read_weight || 0,
    'slaves' => slave_data
  }
end
#### NEW METHODS ########################################################## Converts a Pool to a hash, for use in either the internal asset tracker json (for_app_config=false) or for use in the application config file yaml (for_app_config=true)
to_hash
ruby
tumblr/jetpants
plugins/simple_tracker/lib/pool.rb
https://github.com/tumblr/jetpants/blob/master/plugins/simple_tracker/lib/pool.rb
Apache-2.0
def to_hash(for_app_config=false)
  if for_app_config
    # Ignore shards that shouldn't receive queries from the application
    return nil unless in_config?
    me = {'min_id' => min_id.to_i, 'max_id' => (max_id == 'INFINITY') ? 'INFINITY' : max_id.to_i}

    # We need to correctly handle child shards (which still have writes sent their parent),
    # read-only shards, and offline shards appropriately.
    hosts = case state
            when :ready, :needs_cleanup then {'host' => master.ip}
            when :child then {'host_read' => master.ip, 'host_write' => master.master.ip}
            when :read_only then {'host_read' => master.ip, 'host_write' => false}
            when :merging then {'host_read' => combined_shard.master.ip, 'host_write' => master.ip}
            else {'host' => false}
            end
    me.merge hosts
  else
    slave_data = active_slave_weights.map { |db, weight| {'host' => db.to_s, 'weight' => weight, 'role' => 'ACTIVE_SLAVE'} }
    slave_data += standby_slaves.map { |db| {'host' => db.to_s, 'role' => 'STANDBY_SLAVE'} }
    slave_data += backup_slaves.map { |db| {'host' => db.to_s, 'role' => 'BACKUP_SLAVE'} }
    {
      'min_id' => min_id,
      'max_id' => max_id,
      'parent' => parent ? parent.to_s : nil,
      'state' => state,
      'master' => master,
      'slaves' => slave_data,
      'shard_pool' => shard_pool,
    }
  end
end
#### NEW METHODS ########################################################## Converts a Shard to a hash, for use in either the internal asset tracker json (for_app_config=false) or for use in the application config file yaml (for_app_config=true)
to_hash
ruby
tumblr/jetpants
plugins/simple_tracker/lib/shard.rb
https://github.com/tumblr/jetpants/blob/master/plugins/simple_tracker/lib/shard.rb
Apache-2.0
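An illustrative shape of the app-config output described above, for a hypothetical child shard spanning ids 1001-2000 whose parent master still receives writes; the IPs are made up, and 'shard' stands in for any Shard object.

shard.to_hash(true)
# => { 'min_id' => 1001, 'max_id' => 2000,
#      'host_read' => '10.0.0.21', 'host_write' => '10.0.0.11' }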
def to_hash(for_app_config = true)
  { shard_pool: @name }
end
#### NEW METHODS ########################################################## Converts a ShardPool to a hash, for use in either the internal asset tracker json (for_app_config=false) or for use in the application config file yaml (for_app_config=true)
to_hash
ruby
tumblr/jetpants
plugins/simple_tracker/lib/shardpool.rb
https://github.com/tumblr/jetpants/blob/master/plugins/simple_tracker/lib/shardpool.rb
Apache-2.0
def load_pools
  # Create Pool and Shard objects
  @pools = self.class.tracker.global_pools.map {|h| Pool.from_hash(h)}.compact
  all_shards = self.class.tracker.shards.map {|h| Shard.from_hash(h)}.reject {|s| s.state == :recycle}
  @pools.concat all_shards

  # Now that all shards exist, we can safely assign parent/child relationships
  self.class.tracker.shards.each {|h| Shard.assign_relationships(h, all_shards)}
end
#### METHOD OVERRIDES ##################################################### Populates @pools by reading asset tracker data
load_pools
ruby
tumblr/jetpants
plugins/simple_tracker/lib/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/simple_tracker/lib/topology.rb
Apache-2.0
def load_shard_pools
  @shard_pools = self.class.tracker.shard_pools.map{|h| ShardPool.from_hash(h) }.compact
end
Populate @shard_pools by reading asset tracker data
load_shard_pools
ruby
tumblr/jetpants
plugins/simple_tracker/lib/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/simple_tracker/lib/topology.rb
Apache-2.0
def write_config
  config_file_path = self.class.tracker.app_config_file_path

  # Convert the pool list into a hash
  db_data = {
    'database' => {
      'pools' => functional_partitions.map {|p| p.to_hash(true)},
    },
    'shard_pools' => {}
  }
  shard_pools.each do |shard_pool|
    db_data['shard_pools'][shard_pool.name] = shard_pool.shards.select {|s| s.in_config?}.map {|s| s.to_hash(true)}
  end

  # Convert that hash to YAML and write it to a file
  File.open(config_file_path, 'w') do |f|
    f.write db_data.to_yaml
    f.close
  end
  puts "Regenerated #{config_file_path}"
end
Generates a database configuration file for a hypothetical web application
write_config
ruby
tumblr/jetpants
plugins/simple_tracker/lib/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/simple_tracker/lib/topology.rb
Apache-2.0
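A hedged sketch of the YAML shape written above; pool and shard entries come from Pool#to_hash(true) and Shard#to_hash(true) respectively, and every name and host below is hypothetical.

Jetpants.topology.write_config
# Produces something like:
#   database:
#     pools:
#       - name: users
#         master: 10.0.0.10
#         slaves: [{host: '10.0.0.11', weight: 100}]
#   shard_pools:
#     primary:
#       - {min_id: 1, max_id: 1000, host: '10.0.0.20'}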
def claim_spares(count, options={})
  raise "Not enough spare machines -- requested #{count}, only have #{self.class.tracker.spares.count}" if self.class.tracker.spares.count < count
  hashes = self.class.tracker.spares.shift(count)
  update_tracker_data
  dbs = hashes.map {|h| h.is_a?(Hash) && h['node'] ? h['node'].to_db : h.to_db}

  if options[:for_pool]
    pool = options[:for_pool]
    dbs.each do |db|
      pool.claimed_nodes << db unless pool.claimed_nodes.include? db
    end
  end

  dbs
end
simple_tracker completely ignores any options like :role or :like
claim_spares
ruby
tumblr/jetpants
plugins/simple_tracker/lib/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/simple_tracker/lib/topology.rb
Apache-2.0
def update_tracker_data
  self.class.tracker.global_pools = functional_partitions.map &:to_hash
  self.class.tracker.shards = pools.select{|p| p.is_a? Shard}.reject {|s| s.state == :recycle}.map &:to_hash
  self.class.tracker.shard_pools = shard_pools.map(&:to_hash)
  self.class.tracker.save
end
#### NEW METHODS ########################################################## Called by Pool#sync_configuration to update our asset tracker json. This actually re-writes all the json. With a more dynamic asset tracker (something backed by a database, for example) this wouldn't be necessary - instead Pool#sync_configuration could just update the info for that pool only.
update_tracker_data
ruby
tumblr/jetpants
plugins/simple_tracker/lib/topology.rb
https://github.com/tumblr/jetpants/blob/master/plugins/simple_tracker/lib/topology.rb
Apache-2.0
def before_start_mysql(*options)
  return unless @needs_upgrade

  @repl_paused = false if @master
  running = ssh_cmd "netstat -ln | grep #{@port} | wc -l"
  raise "[#{@ip}] Failed to start MySQL: Something is already listening on port #{@port}" unless running.chomp == '0'

  output "Attempting to start MySQL with --skip-networking --skip-grant-tables in prep for upgrade"
  # Can't use start_mysql here without causing infinite recursion! Also don't need
  # to do all the same checks here, nor do we need to store these to @options.
  service_start('mysql', [ '--skip-networking', '--skip-grant-tables' ])

  output "Attempting to run mysql_upgrade"
  output ssh_cmd('mysql_upgrade')
  output "Upgrade complete"

  @needs_upgrade = false

  # Now shut down mysql, so that start_mysql can restart it without the --skip-* options
  stop_mysql
end
#### CALLBACKS ############################################################ Handle upgrading mysql if needed
before_start_mysql
ruby
tumblr/jetpants
plugins/upgrade_helper/db.rb
https://github.com/tumblr/jetpants/blob/master/plugins/upgrade_helper/db.rb
Apache-2.0