def enumeration (cls):
    from pwkit import unicode_to_str

    name = cls.__name__
    pickle_compat = getattr (cls, '__pickle_compat__', False)

    def __unicode__ (self):
        return '<enumeration holder %s>' % name

    def getattr_error (self, attr):
        raise AttributeError ('enumeration %s does not contain attribute %s'
                              % (name, attr))

    def modattr_error (self, *args, **kwargs):
        raise AttributeError ('modification of %s enumeration not allowed' % name)

    clsdict = {
        '__doc__': cls.__doc__,
        '__slots__': (),
        '__unicode__': __unicode__,
        '__str__': unicode_to_str,
        '__repr__': unicode_to_str,
        '__getattr__': getattr_error,
        '__setattr__': modattr_error,
        '__delattr__': modattr_error,
    }

    for key in dir (cls):
        if not key.startswith ('_'):
            clsdict[key] = getattr (cls, key)

    if pickle_compat:
        clsdict['__call__'] = lambda self, x: x

    enumcls = type (name, (object, ), clsdict)
    return enumcls ()
A very simple decorator for creating enumerations. Unlike Python 3.4 enumerations, this just gives a way to use a class declaration to create an immutable object containing only the values specified in the class. If the attribute ``__pickle_compat__`` is set to True in the decorated class, the resulting enumeration value will be callable, such that ``EnumClass(x)`` simply returns ``x``. This is needed to unpickle enumeration values that were previously implemented using :class:`enum.Enum`.
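A minimal usage sketch, assuming ``enumeration`` is importable from the package as shown; the ``Colors`` class is invented for illustration::

    from pwkit import enumeration

    @enumeration
    class Colors (object):
        RED = 0
        GREEN = 1
        BLUE = 2

    print (Colors.RED)   # -> 0
    Colors.RED = 3       # raises AttributeError: modification not allowed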
def fits_recarray_to_data_frame (recarray, drop_nonscalar_ok=True):
    from pandas import DataFrame

    def normalize ():
        for column in recarray.columns:
            n = column.name
            d = recarray[n]

            if d.ndim != 1:
                if not drop_nonscalar_ok:
                    raise ValueError ('input must have only scalar columns')
                continue

            if d.dtype.isnative:
                yield (n.lower (), d)
            else:
                yield (n.lower (), d.byteswap (True).newbyteorder ())

    return DataFrame (dict (normalize ()))
Convert a FITS data table, stored as a Numpy record array, into a Pandas DataFrame object. By default, non-scalar columns are discarded, but if *drop_nonscalar_ok* is False then a :exc:`ValueError` is raised. Column names are lower-cased. Example:: from pwkit import io, numutil hdu_list = io.Path ('my-table.fits').read_fits () # assuming the first FITS extension is a binary table: df = numutil.fits_recarray_to_data_frame (hdu_list[1].data) FITS data are big-endian, whereas nowadays almost everything is little-endian. This seems to be an issue for Pandas DataFrames, where ``df[['col1', 'col2']]`` triggers an assertion for me if the underlying data are not native-byte-ordered. This function normalizes the read-in data to native endianness to avoid this. See also :meth:`pwkit.io.Path.read_fits_bintable`.
def data_frame_to_astropy_table (dataframe): from astropy.utils import OrderedDict from astropy.table import Table, Column, MaskedColumn from astropy.extern import six out = OrderedDict() for name in dataframe.columns: column = dataframe[name] mask = np.array (column.isnull ()) data = np.array (column) if data.dtype.kind == 'O': # If all elements of an object array are string-like or np.nan # then coerce back to a native numpy str/unicode array. string_types = six.string_types if six.PY3: string_types += (bytes,) nan = np.nan if all(isinstance(x, string_types) or x is nan for x in data): # Force any missing (null) values to b''. Numpy will # upcast to str/unicode as needed. data[mask] = b'' # When the numpy object array is represented as a list then # numpy initializes to the correct string or unicode type. data = np.array([x for x in data]) if np.any(mask): out[name] = MaskedColumn(data=data, name=name, mask=mask) else: out[name] = Column(data=data, name=name) return Table(out)
This is a backport of the Astropy method :meth:`astropy.table.table.Table.from_pandas`. It converts a Pandas :class:`pandas.DataFrame` object to an Astropy :class:`astropy.table.Table`.
def page_data_frame (df, pager_argv=['less'], **kwargs):
    import codecs, subprocess, sys

    pager = subprocess.Popen (pager_argv, shell=False,
                              stdin=subprocess.PIPE,
                              close_fds=True)

    try:
        enc = codecs.getwriter (sys.stdout.encoding or 'utf8') (pager.stdin)
        df.to_string (enc, **kwargs)
    finally:
        enc.close ()
        pager.stdin.close ()
        pager.wait ()
Render a DataFrame as text and send it to a terminal pager program (e.g. `less`), so that one can browse a full table conveniently.

df
  The DataFrame to view.
pager_argv: default ``['less']``
  A list of strings passed to :class:`subprocess.Popen` that launches the pager program.
kwargs
  Additional keywords are passed to :meth:`pandas.DataFrame.to_string`.

Returns ``None``. Execution blocks until the pager subprocess exits.
def slice_around_gaps (values, maxgap):
    if not (maxgap > 0):
        # above test catches NaNs, other weird cases
        raise ValueError ('maxgap must be positive; got %r' % maxgap)

    values = np.asarray (values)
    delta = values[1:] - values[:-1]

    if np.any (delta < 0):
        raise ValueError ('values must be in nondecreasing order')

    whgap = np.where (delta > maxgap)[0] + 1
    prev_idx = None

    for gap_idx in whgap:
        yield slice (prev_idx, gap_idx)
        prev_idx = gap_idx

    yield slice (prev_idx, None)
Given an ordered array of values, generate a set of slices that traverse all of the values. Within each slice, no gap between adjacent values is larger than `maxgap`. In other words, these slices break the array into chunks separated by gaps of size larger than maxgap.
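A small sketch of the behavior, with made-up values and gap size::

    import numpy as np

    values = np.array ([0., 1., 2., 10., 11., 30.])
    for s in slice_around_gaps (values, maxgap=5.):
        print (values[s])
    # -> [0. 1. 2.]   [10. 11.]   [30.]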
def slice_evenly_with_gaps (values, target_len, maxgap):
    if not (target_len > 0):
        raise ValueError ('target_len must be positive; got %r' % target_len)

    values = np.asarray (values)
    l = values.size

    for gapslice in slice_around_gaps (values, maxgap):
        start, stop, ignored_stride = gapslice.indices (l)
        num_elements = stop - start
        nsegments = int (np.floor (float (num_elements) / target_len))
        nsegments = max (nsegments, 1)
        nsegments = min (nsegments, num_elements)
        segment_len = num_elements / nsegments
        offset = 0.
        prev = start

        for _ in range (nsegments):
            offset += segment_len
            next = start + int (round (offset))
            if next > prev:
                yield slice (prev, next)
                prev = next
Given an ordered array of values, generate a set of slices that traverse all of the values. Each slice contains about `target_len` items. However, no slice contains a gap larger than `maxgap`, so a slice may contain only a single item (if it is surrounded on both sides by a large gap). If a non-gapped run of values does not divide evenly into `target_len`, the algorithm errs on the side of making the slices contain more than `target_len` items, rather than fewer. It also attempts to keep the slice size uniform within each non-gapped run.
def reduce_data_frame_evenly_with_gaps (df, valcol, target_len, maxgap, **kwargs):
    return reduce_data_frame (df,
                              slice_evenly_with_gaps (df[valcol], target_len, maxgap),
                              **kwargs)
Reduce" a DataFrame by collapsing rows in grouped chunks, grouping based on gaps in one of the columns. This function combines :func:`reduce_data_frame` with :func:`slice_evenly_with_gaps`.
def usmooth (window, uncerts, *data, **kwargs):
    window = np.asarray (window)

    # Hacky keyword argument handling because you can't write
    # "def foo (*args, k=0)".
    k = kwargs.pop ('k', None)
    if len (kwargs):
        raise TypeError ("usmooth() got an unexpected keyword argument '%s'"
                         % list (kwargs.keys ())[0])
    # Done with kwargs futzing.

    if k is None:
        k = window.size

    conv = lambda q, r: np.convolve (q, r, mode='valid')

    if uncerts is None:
        # Unweighted case: every sample counts equally.
        w = np.ones_like (np.asarray (data[0], dtype=np.float64))
    else:
        w = np.asarray (uncerts) ** -2

    cw = conv (w, window)
    cu = np.sqrt (conv (w, window**2)) / cw
    result = [cu] + [conv (w * np.asarray (x), window) / cw for x in data]

    if k != 1:
        result = [x[::k] for x in result]
    return result
Smooth data series according to a window, weighting based on uncertainties.

Arguments:

window
  The smoothing window.
uncerts
  An array of uncertainties used to weight the smoothing.
data
  One or more data series, of the same size as *uncerts*.
k = None
  If specified, only every *k*-th point of the results will be kept. If k is None (the default), it is set to ``window.size``, i.e. correlated points will be discarded.

Returns: ``(s_uncerts, s_data[0], s_data[1], ...)``, the smoothed uncertainties and data series.

Example::

    u, x, y = numutil.usmooth (np.hamming (7), u, x, y)
def dfsmooth (window, df, ucol, k=None):
    import pandas as pd

    if k is None:
        k = window.size

    conv = lambda q, r: np.convolve (q, r, mode='valid')
    w = df[ucol] ** -2
    invcw = 1. / conv (w, window)

    # XXX: we're not smoothing the index.

    res = {}

    for col in df.columns:
        if col == ucol:
            res[col] = np.sqrt (conv (w, window**2)) * invcw
        else:
            res[col] = conv (w * df[col], window) * invcw

    res = pd.DataFrame (res)
    return res[::k]
Smooth a :class:`pandas.DataFrame` according to a window, weighting based on uncertainties. Arguments are: window The smoothing window. df The :class:`pandas.DataFrame`. ucol The name of the column in *df* that contains the uncertainties to weight by. k = None If specified, only every *k*-th point of the results will be kept. If k is None (the default), it is set to ``window.size``, i.e. correlated points will be discarded. Returns: a smoothed data frame. The returned data frame has a default integer index. Example:: sdata = numutil.dfsmooth (np.hamming (7), data, 'u_temp')
def weighted_mean_df (df, **kwargs):
    return weighted_mean (df[df.columns[0]], df[df.columns[1]], **kwargs)
The same as :func:`weighted_mean`, except the argument is expected to be a two-column :class:`pandas.DataFrame` whose first column gives the data values and second column gives their uncertainties. Returns ``(weighted_mean, uncertainty_in_mean)``.
def weighted_variance (x, weights):
    n = len (x)
    if n < 3:
        raise ValueError ('cannot calculate meaningful variance of fewer '
                          'than three samples')
    wt_mean = np.average (x, weights=weights)
    return np.average (np.square (x - wt_mean), weights=weights) * n / (n - 1)
Return the variance of a weighted sample. The weighted sample mean is calculated and subtracted off, so the returned variance is upweighted by ``n / (n - 1)``. If the sample mean is known to be zero, you should just compute ``np.average (x**2, weights=weights)``.
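A quick illustrative call with made-up numbers; with equal weights the result reduces to the ordinary unbiased sample variance::

    import numpy as np

    x = np.array ([1., 2., 3., 4.])
    weights = np.ones (4)
    print (weighted_variance (x, weights))  # ~1.667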
def unit_tophat_ee (x):
    x = np.asarray (x)
    x1 = np.atleast_1d (x)
    r = ((0 < x1) & (x1 < 1)).astype (x.dtype)

    if x.ndim == 0:
        return np.asscalar (r)
    return r
Tophat function on the unit interval, left-exclusive and right-exclusive. Returns 1 if 0 < x < 1, 0 otherwise.
def make_tophat_ee (lower, upper): if not np.isfinite (lower): raise ValueError ('"lower" argument must be finite number; got %r' % lower) if not np.isfinite (upper): raise ValueError ('"upper" argument must be finite number; got %r' % upper) def range_tophat_ee (x): x = np.asarray (x) x1 = np.atleast_1d (x) r = ((lower < x1) & (x1 < upper)).astype (x.dtype) if x.ndim == 0: return np.asscalar (r) return r range_tophat_ee.__doc__ = ('Ranged tophat function, left-exclusive and ' 'right-exclusive. Returns 1 if %g < x < %g, ' '0 otherwise.') % (lower, upper) return range_tophat_ee
Return a ufunc-like tophat function on the defined range, left-exclusive and right-exclusive. Returns 1 if lower < x < upper, 0 otherwise.
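A brief sketch of how the generated function behaves (the bounds and sample values here are arbitrary)::

    import numpy as np

    tophat = make_tophat_ee (2.0, 5.0)
    print (tophat (np.array ([1., 2., 3., 5., 6.])))
    # -> [0. 0. 1. 0. 0.]  (both endpoints excluded)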
def make_tophat_ei (lower, upper): if not np.isfinite (lower): raise ValueError ('"lower" argument must be finite number; got %r' % lower) if not np.isfinite (upper): raise ValueError ('"upper" argument must be finite number; got %r' % upper) def range_tophat_ei (x): x = np.asarray (x) x1 = np.atleast_1d (x) r = ((lower < x1) & (x1 <= upper)).astype (x.dtype) if x.ndim == 0: return np.asscalar (r) return r range_tophat_ei.__doc__ = ('Ranged tophat function, left-exclusive and ' 'right-inclusive. Returns 1 if %g < x <= %g, ' '0 otherwise.') % (lower, upper) return range_tophat_ei
Return a ufunc-like tophat function on the defined range, left-exclusive and right-inclusive. Returns 1 if lower < x <= upper, 0 otherwise.
def make_tophat_ie (lower, upper): if not np.isfinite (lower): raise ValueError ('"lower" argument must be finite number; got %r' % lower) if not np.isfinite (upper): raise ValueError ('"upper" argument must be finite number; got %r' % upper) def range_tophat_ie (x): x = np.asarray (x) x1 = np.atleast_1d (x) r = ((lower <= x1) & (x1 < upper)).astype (x.dtype) if x.ndim == 0: return np.asscalar (r) return r range_tophat_ie.__doc__ = ('Ranged tophat function, left-inclusive and ' 'right-exclusive. Returns 1 if %g <= x < %g, ' '0 otherwise.') % (lower, upper) return range_tophat_ie
Return a ufunc-like tophat function on the defined range, left-inclusive and right-exclusive. Returns 1 if lower <= x < upper, 0 otherwise.
def make_tophat_ii (lower, upper): if not np.isfinite (lower): raise ValueError ('"lower" argument must be finite number; got %r' % lower) if not np.isfinite (upper): raise ValueError ('"upper" argument must be finite number; got %r' % upper) def range_tophat_ii (x): x = np.asarray (x) x1 = np.atleast_1d (x) r = ((lower <= x1) & (x1 <= upper)).astype (x.dtype) if x.ndim == 0: return np.asscalar (r) return r range_tophat_ii.__doc__ = ('Ranged tophat function, left-inclusive and ' 'right-inclusive. Returns 1 if %g <= x <= %g, ' '0 otherwise.') % (lower, upper) return range_tophat_ii
Return a ufunc-like tophat function on the defined range, left-inclusive and right-inclusive. Returns 1 if lower <= x <= upper, 0 otherwise.
def make_step_lcont (transition): if not np.isfinite (transition): raise ValueError ('"transition" argument must be finite number; got %r' % transition) def step_lcont (x): x = np.asarray (x) x1 = np.atleast_1d (x) r = (x1 > transition).astype (x.dtype) if x.ndim == 0: return np.asscalar (r) return r step_lcont.__doc__ = ('Left-continuous step function. Returns 1 if x > %g, ' '0 otherwise.') % (transition,) return step_lcont
Return a ufunc-like step function that is left-continuous. Returns 1 if x > transition, 0 otherwise.
def make_step_rcont (transition): if not np.isfinite (transition): raise ValueError ('"transition" argument must be finite number; got %r' % transition) def step_rcont (x): x = np.asarray (x) x1 = np.atleast_1d (x) r = (x1 >= transition).astype (x.dtype) if x.ndim == 0: return np.asscalar (r) return r step_rcont.__doc__ = ('Right-continuous step function. Returns 1 if x >= ' '%g, 0 otherwise.') % (transition,) return step_rcont
Return a ufunc-like step function that is right-continuous. Returns 1 if x >= transition, 0 otherwise.
def make_fixed_temp_multi_apec(kTs, name_template='apec%d', norm=None):
    total_model = None
    sub_models = []

    for i, kT in enumerate(kTs):
        component = ui.xsapec(name_template % i)
        component.kT = kT
        ui.freeze(component.kT)
        if norm is not None:
            component.norm = norm
        sub_models.append(component)

        if total_model is None:
            total_model = component
        else:
            total_model = total_model + component

    return total_model, sub_models
Create a model summing multiple APEC components at fixed temperatures.

*kTs*
  An iterable of temperatures for the components, in keV.
*name_template* = 'apec%d'
  A template to use for the names of each component; it is string-formatted with the 0-based component number as an argument.
*norm* = None
  An initial normalization to be used for every component, or None to use the Sherpa default.

Returns: A tuple ``(total_model, sub_models)``, where *total_model* is a Sherpa model representing the sum of the APEC components and *sub_models* is a list of the individual models.

This function creates a vector of APEC model components and sums them. Their *kT* parameters are set and then frozen (using :func:`sherpa.astro.ui.freeze`), so that upon exit from this function, the amplitude of each component is the only free parameter.
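As a hedged sketch of usage within an interactive Sherpa session (the temperatures are invented, and the surrounding data-loading steps are omitted)::

    from sherpa.astro import ui

    total, components = make_fixed_temp_multi_apec ([0.5, 1.0, 2.0])
    ui.set_source (total)  # only the three normalizations remain free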
def expand_rmf_matrix(rmf):
    n_chan = rmf.e_min.size
    n_energy = rmf.n_grp.size

    expanded = np.zeros((n_energy, n_chan))
    mtx_ofs = 0
    grp_ofs = 0

    for i in range(n_energy):
        for j in range(rmf.n_grp[i]):
            f = rmf.f_chan[grp_ofs]
            n = rmf.n_chan[grp_ofs]
            expanded[i,f:f+n] = rmf.matrix[mtx_ofs:mtx_ofs+n]
            mtx_ofs += n
            grp_ofs += 1

    return expanded
Expand an RMF matrix stored in compressed form. *rmf* An RMF object as might be returned by ``sherpa.astro.ui.get_rmf()``. Returns: A non-sparse RMF matrix. The Response Matrix Function (RMF) of an X-ray telescope like Chandra can be stored in a sparse format as defined in `OGIP Calibration Memo CAL/GEN/92-002 <https://heasarc.gsfc.nasa.gov/docs/heasarc/caldb/docs/memos/cal_gen_92_002/cal_gen_92_002.html>`_. For visualization and analysis purposes, it can be useful to de-sparsify the matrices stored in this way. This function does that, returning a two-dimensional Numpy array.
def derive_identity_arf(name, arf): from sherpa.astro.data import DataARF from sherpa.astro.instrument import ARF1D darf = DataARF( name, arf.energ_lo, arf.energ_hi, np.ones(arf.specresp.shape), arf.bin_lo, arf.bin_hi, arf.exposure, header = None, ) return ARF1D(darf, pha=arf._pha)
Create an "identity" ARF that has uniform sensitivity. *name* The name of the ARF object to be created; passed to Sherpa. *arf* An existing ARF object on which to base this one. Returns: A new ARF1D object that has a uniform spectral response vector. In many X-ray observations, the relevant background signal does not behave like an astrophysical source that is filtered through the telescope's response functions. However, I have been unable to get current Sherpa (version 4.9) to behave how I want when working with backround models that are *not* filtered through these response functions. This function constructs an "identity" ARF response function that has uniform sensitivity as a function of detector channel.
def get_source_qq_data(id=None): sdata = ui.get_data(id=id) kev = sdata.get_x() obs_data = sdata.counts model_data = ui.get_model(id=id)(kev) return np.vstack((kev, obs_data, model_data))
Get data for a quantile-quantile plot of the source data and model. *id* The dataset id for which to get the data; defaults if unspecified. Returns: An ndarray of shape ``(3, npts)``. The first slice is the energy axis in keV; the second is the observed values in each bin (counts, or rate, or rate per keV, etc.); the third is the corresponding model value in each bin. The inputs are implicit; the data are obtained from the current state of the Sherpa ``ui`` module.
def get_bkg_qq_data(id=None, bkg_id=None): bdata = ui.get_bkg(id=id, bkg_id=bkg_id) kev = bdata.get_x() obs_data = bdata.counts model_data = ui.get_bkg_model(id=id, bkg_id=bkg_id)(kev) return np.vstack((kev, obs_data, model_data))
Get data for a quantile-quantile plot of the background data and model. *id* The dataset id for which to get the data; defaults if unspecified. *bkg_id* The identifier of the background; defaults if unspecified. Returns: An ndarray of shape ``(3, npts)``. The first slice is the energy axis in keV; the second is the observed values in each bin (counts, or rate, or rate per keV, etc.); the third is the corresponding model value in each bin. The inputs are implicit; the data are obtained from the current state of the Sherpa ``ui`` module.
def download_file(local_filename, url, clobber=False):
    dir_name = os.path.dirname(local_filename)
    mkdirs(dir_name)

    if clobber or not os.path.exists(local_filename):
        i = requests.get(url)

        # if not exists
        if i.status_code == 404:
            print('Failed to download file:', local_filename, url)
            return False

        # write out in 1MB chunks
        chunk_size_in_bytes = 1024*1024  # 1MB
        with open(local_filename, 'wb') as local_file:
            for chunk in i.iter_content(chunk_size=chunk_size_in_bytes):
                local_file.write(chunk)

    return True
Download the given file to *local_filename*. If *clobber* is True, an existing file is overwritten; otherwise the download is skipped. Returns True on success (or when the file already exists), False if the server returns a 404.
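A small usage sketch; the URL and path below are placeholders::

    ok = download_file('data/example.fits',
                       'https://example.com/example.fits',
                       clobber=False)
    if not ok:
        print('download failed')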
def download_json(local_filename, url, clobber=False): with open(local_filename, 'w') as json_file: json_file.write(json.dumps(requests.get(url).json(), sort_keys=True, indent=2, separators=(',', ': ')))
Download the given JSON file, and pretty-print before we output it.
def data_to_imagesurface (data, **kwargs): import cairo data = np.atleast_2d (data) if data.ndim != 2: raise ValueError ('input array may not have more than 2 dimensions') argb32 = data_to_argb32 (data, **kwargs) format = cairo.FORMAT_ARGB32 height, width = argb32.shape stride = cairo.ImageSurface.format_stride_for_width (format, width) if argb32.strides[0] != stride: raise ValueError ('stride of data array not compatible with ARGB32') return cairo.ImageSurface.create_for_data (argb32, format, width, height, stride)
Turn arbitrary data values into a Cairo ImageSurface. The method and arguments are the same as data_to_argb32, except that the data array will be treated as 2D, and higher dimensionalities are not allowed. The return value is a Cairo ImageSurface object. Combined with the write_to_png() method on ImageSurfaces, this is an easy way to quickly visualize 2D data.
def get_token(filename=TOKEN_PATH, envvar=TOKEN_ENVVAR):
    if os.path.isfile(filename):
        with open(filename) as token_file:
            token = token_file.readline().strip()
    else:
        token = os.environ.get(envvar)
        if not token:
            raise ValueError("No token found.\n"
                             "{} file doesn't exist.\n"
                             "{} environment variable is not set.".format(filename, envvar))
    return token
Return the pipeline token used for API access. The local token file is tried first; if it is absent, the environment variable is consulted. Raises :exc:`ValueError` if neither source provides a token.
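Illustrative usage, assuming the ``TOKEN_PATH`` and ``TOKEN_ENVVAR`` defaults are appropriate; the header construction is a hypothetical example, not part of this module::

    token = get_token()
    headers = {'Authorization': 'Bearer {}'.format(token)}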
def stats (self, antnames): nbyant = np.zeros (self.nants, dtype=np.int) sum = np.zeros (self.nants, dtype=np.complex) sumsq = np.zeros (self.nants) q = np.abs (self.normvis - 1) for i in range (self.nsamps): i1, i2 = self.blidxs[i] nbyant[i1] += 1 nbyant[i2] += 1 sum[i1] += q[i] sum[i2] += q[i] sumsq[i1] += q[i]**2 sumsq[i2] += q[i]**2 avg = sum / nbyant std = np.sqrt (sumsq / nbyant - avg**2) navg = 1. / np.median (avg) nstd = 1. / np.median (std) for i in range (self.nants): print (' %2d %10s %3d %f %f %f %f' % (i, antnames[i], nbyant[i], avg[i], std[i], avg[i] * navg, std[i] * nstd))
XXX may be out of date.
def read_stream (stream): section = None key = None data = None for fullline in stream: line = fullline.split ('#', 1)[0] m = sectionre.match (line) if m is not None: # New section if section is not None: if key is not None: section.set_one (key, data.strip ().decode ('utf8')) key = data = None yield section section = Holder () section.section = m.group (1) continue if len (line.strip ()) == 0: if key is not None: section.set_one (key, data.strip ().decode ('utf8')) key = data = None continue m = escre.match (fullline) if m is not None: if section is None: raise InifileError ('key seen without section!') if key is not None: section.set_one (key, data.strip ().decode ('utf8')) key = m.group (1) data = m.group (2).replace (r'\"', '"').replace (r'\n', '\n').replace (r'\\', '\\') section.set_one (key, data.decode ('utf8')) key = data = None continue m = keyre.match (line) if m is not None: if section is None: raise InifileError ('key seen without section!') if key is not None: section.set_one (key, data.strip ().decode ('utf8')) key = m.group (1) data = m.group (2) if not len (data): data = ' ' elif not data[-1].isspace (): data += ' ' continue if line[0].isspace () and key is not None: data += line.strip () + ' ' continue raise InifileError ('unparsable line: ' + line[:-1]) if section is not None: if key is not None: section.set_one (key, data.strip ().decode ('utf8')) yield section
Python 3 compat note: we're assuming `stream` gives bytes not unicode.
def write_stream (stream, holders, defaultsection=None): anybefore = False for h in holders: if anybefore: print ('', file=stream) s = h.get ('section', defaultsection) if s is None: raise ValueError ('cannot determine section name for item <%s>' % h) print ('[%s]' % s, file=stream) for k in sorted (x for x in six.iterkeys (h.__dict__) if x != 'section'): v = h.get (k) if v is None: continue print ('%s = %s' % (k, v), file=stream) anybefore = True
Very simple writing in ini format. The simple stringification of each value in each Holder is printed, and no escaping is performed. (This is most relevant for multiline values or ones containing pound signs.) `None` values are skipped. Arguments: stream A text stream to write to. holders An iterable of objects to write. Their fields will be written as sections. defaultsection=None Section name to use if a holder doesn't contain a `section` field.
def write (stream_or_path, holders, **kwargs): if isinstance (stream_or_path, six.string_types): return write_stream (io.open (stream_or_path, 'wt'), holders, **kwargs) else: return write_stream (stream_or_path, holders, **kwargs)
Very simple writing in ini format. The simple stringification of each value in each Holder is printed, and no escaping is performed. (This is most relevant for multiline values or ones containing pound signs.) `None` values are skipped. Arguments: stream A text stream to write to. holders An iterable of objects to write. Their fields will be written as sections. defaultsection=None Section name to use if a holder doesn't contain a `section` field.
def in_casapy (helper, caltable=None, selectcals={}, plotoptions={}, xaxis=None, yaxis=None, figfile=None): if caltable is None: raise ValueError ('caltable') show_gui = (figfile is None) cp = helper.casans.cp helper.casans.tp.setgui (show_gui) cp.open (caltable) cp.selectcal (**selectcals) cp.plotoptions (**plotoptions) cp.plot (xaxis, yaxis) if show_gui: import pylab as pl pl.show () else: cp.savefig (figfile)
This function is run inside the weirdo casapy IPython environment! A strange set of modules is available, and the `pwkit.environments.casa.scripting` system sets up a very particular environment to allow encapsulated scripting.
def _qrd_solve_full(a, b, ddiag, dtype=np.float): a = np.asarray(a, dtype) b = np.asarray(b, dtype) ddiag = np.asarray(ddiag, dtype) n, m = a.shape assert m >= n assert b.shape == (m, ) assert ddiag.shape == (n, ) # The computation is straightforward. q, r, pmut = _qr_factor_full(a) bqt = np.dot(b, q.T) x, s = _manual_qrd_solve(r[:,:n], pmut, ddiag, bqt, dtype=dtype, build_s=True) return x, s, pmut
Solve the equation A^T x = B, D x = 0. Parameters: a - an n-by-m array, m >= n b - an m-vector ddiag - an n-vector giving the diagonal of D. (The rest of D is 0.) Returns: x - n-vector solving the equation. s - the n-by-n supplementary matrix s. pmut - n-element permutation vector defining the permutation matrix P. The equations are solved in a least-squares sense if the system is rank-deficient. D is a diagonal matrix and hence only its diagonal is in fact supplied as an argument. The matrix s is full lower triangular and solves the equation P^T (A A^T + D D) P = S^T S (needs transposition?) where P is the permutation matrix defined by the vector pmut; it puts the rows of 'a' in order of nonincreasing rank, so that a[pmut] has its rows sorted that way.
def _lmder1_linear_full_rank(n, m, factor, target_fnorm1, target_fnorm2): def func(params, vec): s = params.sum() temp = 2. * s / m + 1 vec[:] = -temp vec[:params.size] += params def jac(params, jac): # jac.shape = (n, m) by LMDER standards jac.fill(-2. / m) for i in range(n): jac[i,i] += 1 guess = np.ones(n) * factor #_lmder1_test(m, func, jac, guess) _lmder1_driver(m, func, jac, guess, target_fnorm1, target_fnorm2, [-1] * n)
A full-rank linear function (lmder test #1)
def _lmder1_linear_r1zcr(n, m, factor, target_fnorm1, target_fnorm2, target_params): def func(params, vec): s = 0 for j in range(1, n - 1): s += (j + 1) * params[j] for i in range(m): vec[i] = i * s - 1 vec[m-1] = -1 def jac(params, jac): jac.fill(0) for i in range(1, n - 1): for j in range(1, m - 1): jac[i,j] = j * (i + 1) guess = np.ones(n) * factor #_lmder1_test(m, func, jac, guess) _lmder1_driver(m, func, jac, guess, target_fnorm1, target_fnorm2, None)
A rank-1 linear function with zero columns and rows (lmder test #3)
def _lmder1_rosenbrock(): def func(params, vec): vec[0] = 10 * (params[1] - params[0]**2) vec[1] = 1 - params[0] def jac(params, jac): jac[0,0] = -20 * params[0] jac[0,1] = -1 jac[1,0] = 10 jac[1,1] = 0 guess = np.asfarray([-1.2, 1]) norm1s = [0.491934955050e+01, 0.134006305822e+04, 0.1430000511923e+06] for i in range(3): _lmder1_driver(2, func, jac, guess * 10**i, norm1s[i], 0, [1, 1])
Rosenbrock function (lmder test #4)
def _lmder1_powell_singular(): def func(params, vec): vec[0] = params[0] + 10 * params[1] vec[1] = np.sqrt(5) * (params[2] - params[3]) vec[2] = (params[1] - 2 * params[2])**2 vec[3] = np.sqrt(10) * (params[0] - params[3])**2 def jac(params, jac): jac.fill(0) jac[0,0] = 1 jac[0,3] = 2 * np.sqrt(10) * (params[0] - params[3]) jac[1,0] = 10 jac[1,2] = 2 * (params[1] - 2 * params[2]) jac[2,1] = np.sqrt(5) jac[2,2] = -2 * jac[2,1] jac[3,1] = -np.sqrt(5) jac[3,3] = -jac[3,0] guess = np.asfarray([3, -1, 0, 1]) _lmder1_test(4, func, jac, guess) _lmder1_test(4, func, jac, guess * 10) _lmder1_test(4, func, jac, guess * 100)
Powell's singular function (lmder test #6). Don't run this as a test, since it just zooms to zero parameters. The precise results depend a lot on nitty-gritty rounding and tolerances and things.
def _lmder1_freudenstein_roth(): def func(params, vec): vec[0] = -13 + params[0] + ((5 - params[1]) * params[1] - 2) * params[1] vec[1] = -29 + params[0] + ((1 + params[1]) * params[1] - 14) * params[1] def jac(params, jac): jac[0] = 1 jac[1,0] = params[1] * (10 - 3 * params[1]) - 2 jac[1,1] = params[1] * (2 + 3 * params[1]) - 14 guess = np.asfarray([0.5, -2]) _lmder1_driver(2, func, jac, guess, 0.200124960962e+02, 0.699887517585e+01, [0.114124844655e+02, -0.896827913732e+00]) _lmder1_driver(2, func, jac, guess * 10, 0.124328339489e+05, 0.699887517449e+01, [0.114130046615e+02, -0.896796038686e+00]) _lmder1_driver(2, func, jac, guess * 100, 0.11426454595762e+08, 0.699887517243e+01, [0.114127817858e+02, -0.896805107492e+00])
Freudenstein and Roth function (lmder1 test #7)
def _lmder1_meyer(): y3 = np.asarray([3.478e4, 2.861e4, 2.365e4, 1.963e4, 1.637e4, 1.372e4, 1.154e4, 9.744e3, 8.261e3, 7.03e3, 6.005e3, 5.147e3, 4.427e3, 3.82e3, 3.307e3, 2.872e3]) def func(params, vec): temp = 5 * (np.arange(16) + 1) + 45 + params[2] tmp1 = params[1] / temp tmp2 = np.exp(tmp1) vec[:] = params[0] * tmp2 - y3 def jac(params, jac): temp = 5 * (np.arange(16) + 1) + 45 + params[2] tmp1 = params[1] / temp tmp2 = np.exp(tmp1) jac[0] = tmp2 jac[1] = params[0] * tmp2 / temp jac[2] = -tmp1 * jac[1] guess = np.asfarray([0.02, 4000, 250]) _lmder1_driver(16, func, jac, guess, 0.4115346655430312e+05, 0.9377945146518742e+01, [0.5609636471026614e-02, 0.6181346346286591e+04, 0.3452236346241440e+03])
Meyer function (lmder1 test #10)
def p_side(self, idx, sidedness): dsideval = _dside_names.get(sidedness) if dsideval is None: raise ValueError('unrecognized sidedness "%s"' % sidedness) p = self._pinfob p[idx] = (p[idx] & ~PI_M_SIDE) | dsideval return self
Acceptable values for *sidedness* are "auto", "pos", "neg", and "two".
def is_strict_subclass (value, klass): return (isinstance (value, type) and issubclass (value, klass) and value is not klass)
Check that `value` is a subclass of `klass` but that it is not actually `klass`. Unlike issubclass(), does not raise an exception if `value` is not a type.
def invoke_tool (namespace, tool_class=None): import sys from .. import cli cli.propagate_sigint () cli.unicode_stdio () cli.backtrace_on_usr1 () if tool_class is None: for value in itervalues (namespace): if is_strict_subclass (value, Multitool): if tool_class is not None: raise PKError ('do not know which Multitool implementation to use') tool_class = value if tool_class is None: raise PKError ('no Multitool implementation to use') tool = tool_class () tool.populate (itervalues (namespace)) tool.commandline (sys.argv)
Invoke a tool and exit. `namespace` is a namespace-type dict from which the tool is initialized. It should contain exactly one value that is a `Multitool` subclass, and this subclass will be instantiated and populated (see `Multitool.populate()`) using the other items in the namespace. Instances and subclasses of `Command` will therefore be registered with the `Multitool`. The tool is then invoked. `pwkit.cli.propagate_sigint()` and `pwkit.cli.unicode_stdio()` are called at the start of this function. It should therefore be only called immediately upon startup of the Python interpreter. This function always exits with an exception. The exception will be SystemExit (0) in case of success. The intended invocation is `invoke_tool (globals ())` in some module that defines a `Multitool` subclass and multiple `Command` subclasses. If `tool_class` is not None, this is used as the tool class rather than searching `namespace`, potentially avoiding problems with modules containing multiple `Multitool` implementations.
def invoke_with_usage (self, args, **kwargs): argv0 = kwargs['argv0'] usage = self._usage (argv0) argv = [argv0] + args uina = 'long' if self.help_if_no_args else False check_usage (usage, argv, usageifnoargs=uina) try: return self.invoke (args, **kwargs) except UsageError as e: wrong_usage (usage, str (e))
Invoke the command with standardized usage-help processing. Same calling convention as `Command.invoke()`.
def get_arg_parser (self, **kwargs): import argparse ap = argparse.ArgumentParser ( prog = kwargs['argv0'], description = self.summary, ) return ap
Return an instance of `argparse.ArgumentParser` used to process this tool's command-line arguments.
def invoke_with_usage (self, args, **kwargs): ap = self.get_arg_parser (**kwargs) args = ap.parse_args (args) return self.invoke (args, **kwargs)
Invoke the command with standardized usage-help processing. Same calling convention as `Command.invoke()`, except here *args* is an un-parsed list of strings.
def register (self, cmd): if cmd.name is None: raise ValueError ('no name set for Command object %r' % cmd) if cmd.name in self.commands: raise ValueError ('a command named "%s" has already been ' 'registered' % cmd.name) self.commands[cmd.name] = cmd return self
Register a new command with the tool. 'cmd' is expected to be an instance of `Command`, although here only the `cmd.name` attribute is investigated. Multiple commands with the same name are not allowed to be registered. Returns 'self'.
def populate (self, values): for value in values: if isinstance (value, Command): self.register (value) elif is_strict_subclass (value, Command) and getattr (value, 'name') is not None: self.register (value ()) return self
Register multiple new commands by investigating the iterable `values`. For each item in `values`, instances of `Command` are registered, and subclasses of `Command` are instantiated (with no arguments passed to the constructor) and registered. Other kinds of values are ignored. Returns 'self'.
def invoke_command (self, cmd, args, **kwargs): new_kwargs = kwargs.copy () new_kwargs['argv0'] = kwargs['argv0'] + ' ' + cmd.name new_kwargs['parent'] = self new_kwargs['parent_kwargs'] = kwargs return cmd.invoke_with_usage (args, **new_kwargs)
This function mainly exists to be overridden by subclasses.
def commandline (self, argv): self.invoke_with_usage (argv[1:], tool=self, argv0=self.cli_name)
Run as if invoked from the command line. 'argv' is a Unix-style list of arguments, where the zeroth item is the program name (which is ignored here). Usage help is printed if deemed appropriate (e.g., no arguments are given). This function always terminates with an exception, with the exception being a SystemExit(0) in case of success. Note that we don't actually use `argv[0]` to set `argv0` because it will generally be the full path to the script name, which is unattractive.
def cited_names_from_aux_file(stream):
    cited = set()

    for line in stream:
        if not line.startswith(r'\citation{'):
            continue

        line = line.rstrip()
        if line[-1] != '}':
            continue  # should issue a warning or something

        entries = line[10:-1]

        for name in entries.split(','):
            name = name.strip()
            if name not in cited:
                yield name
                cited.add(name)
Parse a LaTeX ".aux" file and generate a list of names cited according to LaTeX ``\\citation`` commands. Repeated names are generated only once. The argument should be an opened I/O stream.
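For example, driving the generator with an on-disk .aux file (the path is hypothetical)::

    with open('paper.aux', 'rt') as aux:
        for name in cited_names_from_aux_file(aux):
            print(name)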
def merge_bibtex_collections(citednames, maindict, extradicts, allow_missing=False): allrecords = {} for ed in extradicts: allrecords.update(ed) allrecords.update(maindict) missing = [] from collections import OrderedDict records = OrderedDict() from itertools import chain wantednames = sorted(chain(citednames, six.viewkeys(maindict))) for name in wantednames: rec = allrecords.get(name) if rec is None: missing.append(name) else: records[name] = rec if len(missing) and not allow_missing: # TODO: custom exception so caller can actually see what's missing; # could conceivably stub out missing records or something. raise PKError('missing BibTeX records: %s', ' '.join(missing)) return records
There must be a way to be efficient and stream output instead of loading everything into memory at once, but, meh. Note that we augment `citednames` with all of the names in `maindict`. The intention is that if we've gone to the effort of getting good data for some record, we don't want to trash it if the citation is temporarily removed (even if it ought to be manually recoverable from version control). Seems better to err on the side of preservation; I can write a quick pruning tool later if needed.
def write_bibtex_dict(stream, entries): from bibtexparser.bwriter import BibTexWriter writer = BibTexWriter() writer.indent = ' ' writer.entry_separator = '' first = True for rec in entries: if first: first = False else: stream.write(b'\n') stream.write(writer._entry_to_bibtex(rec).encode('utf8'))
bibtexparser.write converts the entire database to one big string and writes it out in one go. I'm sure it will always all fit in RAM but some things just will not stand.
def merge_bibtex_with_aux(auxpath, mainpath, extradir, parse=get_bibtex_dict, allow_missing=False): auxpath = Path(auxpath) mainpath = Path(mainpath) extradir = Path(extradir) with auxpath.open('rt') as aux: citednames = sorted(cited_names_from_aux_file(aux)) main = mainpath.try_open(mode='rt') if main is None: maindict = {} else: maindict = parse(main) main.close() def gen_extra_dicts(): # If extradir does not exist, Path.glob() will return an empty list, # which seems acceptable to me. for item in sorted(extradir.glob('*.bib')): with item.open('rt') as extra: yield parse(extra) merged = merge_bibtex_collections(citednames, maindict, gen_extra_dicts(), allow_missing=allow_missing) with mainpath.make_tempfile(want='handle', resolution='overwrite') as newbib: write_bibtex_dict(newbib, six.viewvalues(merged))
Merge multiple BibTeX files into a single homogeneously-formatted output, using a LaTeX .aux file to know which records are worth paying attention to. The file identified by `mainpath` will be overwritten with the new .bib contents. This function is intended to be used in a version-control context. Files matching the glob "*.bib" in `extradir` will be read in to supplement the information in `mainpath`. Records already in the file in `mainpath` always take precedence.
def just_smart_bibtools(bib_style, aux, bib): extradir = Path('.bibtex') extradir.ensure_dir(parents=True) bib_export(bib_style, aux, extradir / 'ZZ_bibtools.bib', no_tool_ok=True, quiet=True, ignore_missing=True) merge_bibtex_with_aux(aux, bib, extradir)
Tectonic has taken over most of the features that this tool used to provide, but here's a hack to keep my smart .bib file generation working.
def in_casapy (helper, asdm=None, ms=None): if asdm is None: raise ValueError ('asdm') if ms is None: raise ValueError ('ms') helper.casans.importasdm ( asdm = asdm, vis = ms, asis = 'Antenna Station Receiver Source CalAtmosphere CalWVR CorrelatorMode SBSummary', bdfflags = True, lazy = False, process_caldevice = False, )
This function is run inside the weirdo casapy IPython environment! A strange set of modules is available, and the `pwkit.environments.casa.scripting` system sets up a very particular environment to allow encapsulated scripting.
def bp_to_aap (bp): ap1, ap2 = bp if ap1 < 0: raise ValueError ('first antpol %d is negative' % ap1) if ap2 < 0: raise ValueError ('second antpol %d is negative' % ap2) pol = _fpol_to_pol[((ap1 & 0x7) << 4) + (ap2 & 0x7)] if pol == 0xFF: raise ValueError ('no CASA polarization code for pairing ' '%c-%c' % (fpol_names[ap1 & 0x7], fpol_names[ap2 & 0x7])) return ap1 >> 3, ap2 >> 3, pol
Converts a basepol into a tuple of (ant1, ant2, pol).
def aap_to_bp (ant1, ant2, pol): if ant1 < 0: raise ValueError ('first antenna is below 0: %s' % ant1) if ant2 < ant1: raise ValueError ('second antenna is below first: %s' % ant2) if pol < 1 or pol > 12: raise ValueError ('illegal polarization code %s' % pol) fps = _pol_to_fpol[pol] ap1 = (ant1 << 3) + ((fps >> 4) & 0x07) ap2 = (ant2 << 3) + (fps & 0x07) return ap1, ap2
Create a basepol from antenna numbers and a CASA polarization code.
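A round-trip sketch with :func:`bp_to_aap`; the antenna numbers are arbitrary, and 9 is used here assuming it is a valid CASA polarization code (XX)::

    bp = aap_to_bp (1, 3, 9)
    print (bp_to_aap (bp))  # -> (1, 3, 9)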
def postproc (stats_result): n, mean, scat = stats_result mean *= 180 / np.pi # rad => deg scat /= n # variance-of-samples => variance-of-mean scat **= 0.5 # variance => stddev scat *= 180 / np.pi # rad => deg return mean, scat
Simple helper to postprocess angular outputs from StatsCollectors in the way we want.
def postproc_mask (stats_result): n, mean, scat = stats_result ok = np.isfinite (mean) n = n[ok] mean = mean[ok] scat = scat[ok] mean *= 180 / np.pi # rad => deg scat /= n # variance-of-samples => variance-of-mean scat **= 0.5 # variance => stddev scat *= 180 / np.pi # rad => deg return ok, mean, scat
Simple helper to postprocess angular outputs from StatsCollectors in the way we want.
def finish (self, keyset, mask=True): n_us = len (self._keymap) # By definition (for now), wt >= 1 everywhere, so we don't need to # worry about div-by-zero. wt_us = self._m0[:n_us] mean_us = self._m1[:n_us] / wt_us var_us = self._m2[:n_us] / wt_us - mean_us**2 n_them = len (keyset) wt = np.zeros (n_them, dtype=self._m0.dtype) mean = np.empty (n_them, dtype=self._m1.dtype) mean.fill (np.nan) var = np.empty_like (mean) var.fill (np.nan) us_idx = [] them_idx = [] for them_i, key in enumerate (keyset): us_i = self._keymap[key] if us_i < n_us: them_idx.append (them_i) us_idx.append (us_i) # otherwise, we must not have seen that key wt[them_idx] = wt_us[us_idx] mean[them_idx] = mean_us[us_idx] var[them_idx] = var_us[us_idx] if mask: m = ~np.isfinite (mean) mean = np.ma.MaskedArray (mean, m) var = np.ma.MaskedArray (var, m) self._m0 = self._m1 = self._m2 = None self._keymap.clear () return wt, mean, var
Returns (weights, means, variances), where: weights ndarray of number of samples per key means computed mean value for each key variances computed variance for each key
def _finish_timeslot (self): for fpol, aps in self.ap_by_fpol.items (): aps = sorted (aps) nap = len (aps) for i1, ap1 in enumerate (aps): for i2 in range (i1, nap): ap2 = aps[i2] bp1 = (ap1, ap2) info = self.data_by_bp.get (bp1) if info is None: continue data1, flags1 = info for i3 in range (i2, nap): ap3 = aps[i3] bp2 = (ap2, ap3) info = self.data_by_bp.get (bp2) if info is None: continue data2, flags2 = info bp3 = (ap1, aps[i3]) info = self.data_by_bp.get (bp3) if info is None: continue data3, flags3 = info # try to minimize allocations: tflags = flags1 & flags2 np.logical_and (tflags, flags3, tflags) if not tflags.any (): continue triple = data3.conj () np.multiply (triple, data1, triple) np.multiply (triple, data2, triple) self._process_sample (ap1, ap2, ap3, triple, tflags) # Reset for next timeslot self.cur_time = -1. self.bp_by_ap = None self.ap_by_fpol = None
We have loaded in all of the visibilities in one timeslot. We can now compute the phase closure triples. XXX: we should only process independent triples. Are we???
def _process_sample (self, ap1, ap2, ap3, triple, tflags): # Frequency-resolved: np.divide (triple, np.abs (triple), triple) phase = np.angle (triple) self.ap_spec_stats_by_ddid[self.cur_ddid].accum (ap1, phase, tflags + 0.) self.ap_spec_stats_by_ddid[self.cur_ddid].accum (ap2, phase, tflags + 0.) self.ap_spec_stats_by_ddid[self.cur_ddid].accum (ap3, phase, tflags + 0.) # Frequency-averaged: triple = np.dot (triple, tflags) / tflags.sum () phase = np.angle (triple) self.global_stats_by_time.accum (self.cur_time, phase) self.ap_stats_by_ddid[self.cur_ddid].accum (ap1, phase) self.ap_stats_by_ddid[self.cur_ddid].accum (ap2, phase) self.ap_stats_by_ddid[self.cur_ddid].accum (ap3, phase) self.bp_stats_by_ddid[self.cur_ddid].accum ((ap1, ap2), phase) self.bp_stats_by_ddid[self.cur_ddid].accum ((ap1, ap3), phase) self.bp_stats_by_ddid[self.cur_ddid].accum ((ap2, ap3), phase) self.ap_time_stats_by_ddid[self.cur_ddid].accum (self.cur_time, ap1, phase) self.ap_time_stats_by_ddid[self.cur_ddid].accum (self.cur_time, ap2, phase) self.ap_time_stats_by_ddid[self.cur_ddid].accum (self.cur_time, ap3, phase)
We have computed one independent phase closure triple in one timeslot.
def dftphotom_cli(argv): check_usage(dftphotom_doc, argv, usageifnoargs=True) cfg = Config().parse(argv[1:]) util.logger(cfg.loglevel) dftphotom(cfg)
Command-line access to the :func:`dftphotom` algorithm. This function implements the behavior of the command-line ``casatask dftphotom`` tool, wrapped up into a single callable function. The argument *argv* is a list of command-line arguments, in Unix style where the zeroth item is the name of the command.
def download_links(self, dir_path): links = self.links if not path.exists(dir_path): makedirs(dir_path) for i, url in enumerate(links): if 'start' in self.cseargs: i += int(self.cseargs['start']) ext = self.cseargs['fileType'] ext = '.html' if ext == '' else '.' + ext file_name = self.cseargs['q'].replace(' ', '_') + '_' + str(i) + ext file_path = path.join(dir_path, file_name) r = requests.get(url, stream=True) if r.status_code == 200: with open(file_path, 'wb') as f: r.raw.decode_content = True shutil.copyfileobj(r.raw, f)
Download web pages or images from search result links. Args: dir_path (str): Path of directory to save downloads of :class:`api.results`.links
def get_values(self, k, v):
    metadata = self.metadata
    values = []
    if metadata is not None:
        if k in metadata:
            for metav in metadata[k]:
                if v in metav:
                    values.append(metav[v])
    return values
Get a list of values from the key value metadata attribute. Args: k (str): Key in :class:`api.results`.metadata v (str): Values from each item in the key of :class:`api.results`.metadata Returns: A list containing all the ``v`` values in the ``k`` key for the :class:`api.results`.metadata attribute.
def preview(self, n=10, k='items', kheader='displayLink', klink='link', kdescription='snippet'): if 'searchType' in self.cseargs: searchType = self.cseargs['searchType'] else: searchType = None items = self.metadata[k] # (cse_print) Print results for i, kv in enumerate(items[:n]): if 'start' in self.cseargs: i += int(self.cseargs['start']) # (print_header) Print result header header = '\n[' + str(i) + '] ' + kv[kheader] print(header) print('=' * len(header)) # (print_image) Print result image file if searchType == 'image': link = '\n' + path.basename(kv[klink]) print(link) # (print_description) Print result snippet description = '\n' + kv[kdescription] print(description)
Print a preview of the search results. Args: n (int): Maximum number of search results to preview k (str): Key in :class:`api.results`.metadata to preview kheader (str): Key in :class:`api.results`.metadata[``k``] to use as the header klink (str): Key in :class:`api.results`.metadata[``k``] to use as the link if image search kdescription (str): Key in :class:`api.results`.metadata[``k``] to use as the description
def save_links(self, file_path): data = '\n'.join(self.links) with open(file_path, 'w') as out_file: out_file.write(data)
Saves a text file of the search result links. Saves a text file of the search result links, where each link is saved in a new line. An example is provided below:: http://www.google.ca http://www.gmail.com Args: file_path (str): Path to the text file to save links to.
def save_metadata(self, file_path): data = self.metadata with open(file_path, 'w') as out_file: json.dump(data, out_file)
Saves a json file of the search result metadata. Saves a json file of the search result metadata from :class:`api.results`.metadata. Args: file_path (str): Path to the json file to save metadata to.
def bcj_from_spt (spt):
    return np.where ((spt >= 0) & (spt <= 10),
                     1.53 + 0.148 * spt - 0.0105 * spt**2,
                     np.nan)
Calculate a bolometric correction constant for a J band magnitude based on a spectral type, using the fit of Wilking+ (1999AJ....117..469W). spt - Numerical spectral type. M0=0, M9=9, L0=10, ... Returns: the correction `bcj` such that `m_bol = j_abs + bcj`, or NaN if `spt` is out of range. Valid values of `spt` are between 0 and 10.
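A quick example with a made-up spectral type::

    bcj = bcj_from_spt (6.0)   # numerical type 6 = M6
    # then m_bol = j_abs + bcj for a source with known absolute J magnitude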
def bck_from_spt (spt): # NOTE: the way np.piecewise() is implemented, the last 'true' value in # the condition list is the one that takes precedence. This motivates the # construction of our condition list. # # XXX: I've restructured the implementation; this needs testing! spt = np.asfarray (spt) # we crash with integer inputs for some reason. return np.piecewise (spt, [spt < 30, spt < 19, spt <= 14, spt < 10, (spt < 2) | (spt >= 30)], [lambda s: 3.41 - 0.21 * (s - 20), # Nakajima lambda s: 3.42 - 0.075 * (s - 14), # Dahn, Nakajima lambda s: 3.42 + 0.075 * (s - 14), # Dahn, Nakajima lambda s: 2.43 + 0.0895 * s, # Wilking; only ok for spt >= M2! np.nan])
Calculate a bolometric correction constant for a J band magnitude based on a spectral type, using the fits of Wilking+ (1999AJ....117..469W), Dahn+ (2002AJ....124.1170D), and Nakajima+ (2004ApJ...607..499N). spt - Numerical spectral type. M0=0, M9=9, L0=10, ... Returns: the correction `bck` such that `m_bol = k_abs + bck`, or NaN if `spt` is out of range. Valid values of `spt` are between 2 and 30.
def lbol_from_spt_dist_mag (sptnum, dist_pc, jmag, kmag, format='cgs'): bcj = bcj_from_spt (sptnum) bck = bck_from_spt (sptnum) n = np.zeros (sptnum.shape, dtype=np.int) app_mbol = np.zeros (sptnum.shape) w = np.isfinite (bcj) & np.isfinite (jmag) app_mbol[w] += jmag[w] + bcj[w] n[w] += 1 w = np.isfinite (bck) & np.isfinite (kmag) app_mbol[w] += kmag[w] + bck[w] n[w] += 1 w = (n != 0) abs_mbol = (app_mbol[w] / n[w]) - 5 * (np.log10 (dist_pc[w]) - 1) # note: abs_mbol is filtered by `w` lbol = np.empty (sptnum.shape) lbol.fill (np.nan) lbol[w] = lbol_from_mbol (abs_mbol, format=format) return lbol
Estimate a UCD's bolometric luminosity given some basic parameters. sptnum: the spectral type as a number; 8 -> M8; 10 -> L0 ; 20 -> T0 Valid values range between 0 and 30, ie M0 to Y0. dist_pc: distance to the object in parsecs jmag: object's J-band magnitude or NaN (*not* None) if unavailable kmag: same with K-band magnitude format: either 'cgs', 'logcgs', or 'logsun', defining the form of the outputs. Logarithmic quantities are base 10. This routine can be used with vectors of measurements. The result will be NaN if a value cannot be computed. This routine implements the method documented in the Appendix of Williams et al., 2014ApJ...785....9W (doi:10.1088/0004-637X/785/1/9).
def mass_from_j (j_abs): j_abs = np.asfarray (j_abs) return np.piecewise (j_abs, [j_abs > 11, j_abs <= 11, j_abs < 5.5], [0.1 * cgs.msun, _delfosse_mass_from_j_helper, np.nan])
Estimate mass in cgs from absolute J magnitude, using the relationship of Delfosse+ (2000A&A...364..217D). j_abs - The absolute J magnitude. Returns: the estimated mass in grams. If j_abs > 11, a fixed result of 0.1 Msun is returned. Values of j_abs < 5.5 are illegal and get NaN. There is a discontinuity in the relation at j_abs = 11, which yields 0.0824 Msun.
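A short usage sketch; the magnitude is invented::

    from pwkit import cgs

    mass = mass_from_j (10.0)   # grams
    print (mass / cgs.msun)     # convert to solar masses for readability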
def load_bcah98_mass_radius (tablelines, metallicity=0, heliumfrac=0.275, age_gyr=5., age_tol=0.05): mdata, rdata = [], [] for line in tablelines: a = line.strip ().split () thismetallicity = float (a[0]) if thismetallicity != metallicity: continue thisheliumfrac = float (a[1]) if thisheliumfrac != heliumfrac: continue thisage = float (a[4]) if abs (thisage - age_gyr) > age_tol: continue mass = float (a[3]) * cgs.msun teff = float (a[5]) mbol = float (a[7]) # XXX to check: do they specify m_bol_sun = 4.64? IIRC, yes. lbol = 10**(0.4 * (4.64 - mbol)) * cgs.lsun area = lbol / (cgs.sigma * teff**4) r = np.sqrt (area / (4 * np.pi)) mdata.append (mass) rdata.append (r) return np.asarray (mdata), np.asarray (rdata)
Load mass and radius from the main data table for the famous models of Baraffe+ (1998A&A...337..403B). tablelines An iterable yielding lines from the table data file. I've named the file '1998A&A...337..403B_tbl1-3.dat' in some repositories (it's about 150K, not too bad). metallicity The metallicity of the model to select. heliumfrac The helium fraction of the model to select. age_gyr The age of the model to select, in Gyr. age_tol The tolerance on the matched age, in Gyr. Returns: (mass, radius), where both are Numpy arrays. The ages in the data table vary slightly at fixed metallicity and helium fraction. Therefore, there needs to be a tolerance parameter for matching the age.
def mk_radius_from_mass_bcah98 (*args, **kwargs): from scipy.interpolate import UnivariateSpline m, r = load_bcah98_mass_radius (*args, **kwargs) spl = UnivariateSpline (m, r, s=1) # This allows us to do range-checking with either scalars or vectors with # minimal gymnastics. @numutil.broadcastize (1) def interp (mass_g): if np.any (mass_g < 0.05 * cgs.msun) or np.any (mass_g > 0.7 * cgs.msun): raise ValueError ('mass_g must must be between 0.05 and 0.7 Msun') return spl (mass_g) return interp
Create a function that maps (sub)stellar mass to radius, based on the famous models of Baraffe+ (1998A&A...337..403B). tablelines An iterable yielding lines from the table data file. I've named the file '1998A&A...337..403B_tbl1-3.dat' in some repositories (it's about 150K, not too bad). metallicity The metallicity of the model to select. heliumfrac The helium fraction of the model to select. age_gyr The age of the model to select, in Gyr. age_tol The tolerance on the matched age, in Gyr. Returns: a function mtor(mass_g), return a radius in cm as a function of a mass in grams. The mass must be between 0.05 and 0.7 Msun. The ages in the data table vary slightly at fixed metallicity and helium fraction. Therefore, there needs to be a tolerance parameter for matching the age. This function requires Scipy.
def tauc_from_mass (mass_g):
    m = mass_g / cgs.msun
    return np.piecewise (m,
                         [m < 1.3, m < 0.82, m < 0.65, m < 0.1],
                         [lambda x: 61.7 - 44.7 * x,
                          25.,
                          lambda x: 86.9 - 94.3 * x,
                          70.,
                          np.nan]) * 86400.
Estimate the convective turnover time from mass, using the method described in Cook+ (2014ApJ...785...10C). mass_g - UCD mass in grams. Returns: the convective turnover timescale in seconds. Masses larger than 1.3 Msun are out of range and yield NaN. If the mass is <0.1 Msun, the turnover time is fixed at 70 days. The Cook method was inspired by the description in McLean+ (2012ApJ...746...23M). It is a hybrid of the method described in Reiners & Basri (2010ApJ...710..924R) and the data shown in Kiraga & Stepien (2007AcA....57..149K). However, this version imposes the 70-day cutoff in terms of mass, not spectral type, so that it is entirely defined in terms of a single quantity. There are discontinuities between the different break points! Any future use should tweak the coefficients to make everything smooth.
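For example, with an arbitrarily chosen mid-M-dwarf mass::

    from pwkit import cgs

    tauc = tauc_from_mass (0.3 * cgs.msun)
    print (tauc / 86400.)   # convective turnover time in days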
def serial_ppmap(func, fixed_arg, var_arg_iter):
    return [func(i, fixed_arg, x) for i, x in enumerate(var_arg_iter)]
A serial implementation of the "partially-pickling map" function returned by the :meth:`ParallelHelper.get_ppmap` interface. Its arguments are: *func* A callable taking three arguments and returning a Pickle-able value. *fixed_arg* Any value, even one that is not pickle-able. *var_arg_iter* An iterable that generates Pickle-able values. The functionality is:: def serial_ppmap(func, fixed_arg, var_arg_iter): return [func(i, fixed_arg, x) for i, x in enumerate(var_arg_iter)] Therefore the arguments to your ``func`` function, which actually does the interesting computations, are: *index* The 0-based index number of the item being processed; often this can be ignored. *fixed_arg* The same *fixed_arg* that was passed to ``ppmap``. *var_arg* The *index*'th item in the *var_arg_iter* iterable passed to ``ppmap``.
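A tiny demonstration with a trivial worker function::

    def worker(index, scale, value):
        return scale * value + index

    print(serial_ppmap(worker, 10, [1, 2, 3]))
    # -> [10, 21, 32]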
def multiprocessing_ppmap_worker(in_queue, out_queue, func, fixed_arg): while True: i, var_arg = in_queue.get() if i is None: break out_queue.put((i, func(i, fixed_arg, var_arg)))
Worker for the :mod:`multiprocessing` ppmap implementation. Strongly derived from code posted on StackExchange by "klaus se": `<http://stackoverflow.com/a/16071616/3760486>`_.
def map(self, func, iterable, chunksize=None): # The key magic is that we must call r.get() with a timeout, because a # Condition.wait() without a timeout swallows KeyboardInterrupts. r = self.map_async(func, iterable, chunksize) while True: try: return r.get(self.wait_timeout) except TimeoutError: pass except KeyboardInterrupt: self.terminate() self.join() raise
Equivalent of `map` built-in, without swallowing KeyboardInterrupt.

func
  The function to apply to the items.
iterable
  An iterable of items that will have `func` applied to them.
def _ppmap(self, func, fixed_arg, var_arg_iter):
    n_procs = self.pool_kwargs.get('processes')
    if n_procs is None:
        # Logic copied from multiprocessing.pool.Pool.__init__()
        try:
            from multiprocessing import cpu_count
            n_procs = cpu_count()
        except NotImplementedError:
            n_procs = 1

    in_queue = Queue(1)
    out_queue = Queue()
    procs = [Process(target=multiprocessing_ppmap_worker,
                     args=(in_queue, out_queue, func, fixed_arg))
             for _ in range(n_procs)]

    for p in procs:
        p.daemon = True
        p.start()

    i = -1
    for i, var_arg in enumerate(var_arg_iter):
        in_queue.put((i, var_arg))

    n_items = i + 1
    result = [None] * n_items

    for p in procs:
        in_queue.put((None, None))

    for _ in range(n_items):
        i, value = out_queue.get()
        result[i] = value

    for p in procs:
        p.join()

    return result
The multiprocessing implementation of the partially-Pickling "ppmap"
function. This doesn't use a Pool like map() does, because the whole problem
is that Pool chokes on un-Pickle-able values. Strongly derived from code
posted on StackExchange by "klaus se":
`<http://stackoverflow.com/a/16071616/3760486>`_.

This implementation could definitely be improved -- that's basically what
the Pool class is all about -- but this gets us off the ground for those
cases where the Pickle limitation is important.

XXX This deadlocks if a child process crashes!!! XXX
def fmthours (radians, norm='wrap', precision=3, seps='::'):
    return _fmtsexagesimal (radians * R2H, norm, 24, seps, precision=precision)
Format an angle as sexagesimal hours in a string.

Arguments are:

radians
  The angle, in radians.
norm (default "wrap")
  The normalization mode, used for angles outside of the standard range of 0
  to 2π. If "none", the value is formatted ignoring any potential problems.
  If "wrap", it is wrapped to lie within the standard range. If "raise", a
  :exc:`ValueError` is raised.
precision (default 3)
  The number of decimal places in the "seconds" place to use in the
  formatted string.
seps (default "::")
  A two- or three-item iterable, used to separate the hours, minutes, and
  seconds components. If a third element is present, it appears after the
  seconds component. Specifying "hms" yields something like "12h34m56s";
  specifying ``['', '']`` yields something like "123456".

Returns a string.
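For instance (the exact zero-padding is an implementation detail, but the
output should resemble the comments)::

  import numpy as np

  print (fmthours (np.pi / 2))              # '06:00:00.000'  (pi/2 rad = 6h)
  print (fmthours (np.pi / 2, seps='hms'))  # '06h00m00.000s'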
def fmtdeglon (radians, norm='wrap', precision=2, seps='::'):
    return _fmtsexagesimal (radians * R2D, norm, 360, seps, precision=precision)
Format a longitudinal angle as sexagesimal degrees in a string.

Arguments are:

radians
  The angle, in radians.
norm (default "wrap")
  The normalization mode, used for angles outside of the standard range of 0
  to 2π. If "none", the value is formatted ignoring any potential problems.
  If "wrap", it is wrapped to lie within the standard range. If "raise", a
  :exc:`ValueError` is raised.
precision (default 2)
  The number of decimal places in the "arcseconds" place to use in the
  formatted string.
seps (default "::")
  A two- or three-item iterable, used to separate the degrees, arcminutes,
  and arcseconds components. If a third element is present, it appears after
  the arcseconds component. Specifying "dms" yields something like
  "12d34m56s"; specifying ``['', '']`` yields something like "123456".

Returns a string.
def fmtradec (rarad, decrad, precision=2, raseps='::', decseps='::', intersep=' '):
    return (fmthours (rarad, precision=precision + 1, seps=raseps) +
            text_type (intersep) +
            fmtdeglat (decrad, precision=precision, seps=decseps))
Format equatorial coordinates in a single sexagesimal string.

Returns a string of the RA/lon coordinate, formatted as sexagesimal hours,
then *intersep*, then the Dec/lat coordinate, formatted as degrees. This
yields something like "12:34:56.78 -01:23:45.6".

Arguments are:

rarad
  The right ascension coordinate, in radians. More generically, this is the
  longitudinal coordinate; note that the argument ordering in this function
  differs from that of the other spherical functions, which generally prefer
  coordinates in "lat, lon" order.
decrad
  The declination coordinate, in radians. More generically, this is the
  latitudinal coordinate.
precision (default 2)
  The number of decimal places in the "arcseconds" place of the latitudinal
  (declination) coordinate. The longitudinal (right ascension) coordinate
  gets one additional place, since hours are bigger than degrees.
raseps (default "::")
  A two- or three-item iterable, used to separate the hours, minutes, and
  seconds components of the RA/lon coordinate. If a third element is present,
  it appears after the seconds component. Specifying "hms" yields something
  like "12h34m56s"; specifying ``['', '']`` yields something like "123456".
decseps (default "::")
  A two- or three-item iterable, used to separate the degrees, arcminutes,
  and arcseconds components of the Dec/lat coordinate.
intersep (default " ")
  The string separating the RA/lon and Dec/lat coordinates.
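A quick illustration with made-up coordinates (the exact rounding and sign
formatting depend on the precision and separator arguments)::

  import numpy as np

  ra  = 187.70593 * np.pi / 180   # longitudinal coordinate, radians
  dec =  12.39112 * np.pi / 180   # latitudinal coordinate, radians
  print (fmtradec (ra, dec))      # something like '12:30:49.423 +12:23:28.03'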
def parsehours (hrstr):
    hr = _parsesexagesimal (hrstr, 'hours', False)
    if hr >= 24:
        raise ValueError ('illegal hour specification: ' + hrstr)
    return hr * H2R
Parse a string formatted as sexagesimal hours into an angle. This function converts a textual representation of an angle, measured in hours, into a floating point value measured in radians. The format of *hrstr* is very limited: it may not have leading or trailing whitespace, and the components of the sexagesimal representation must be separated by colons. The input must therefore resemble something like ``"12:34:56.78"``. A :exc:`ValueError` will be raised if the input does not resemble this template. Hours greater than 24 are not allowed, but negative values are.
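A round trip through the parser and the formatter::

  rad = parsehours ('12:34:56.78')
  print (fmthours (rad))   # '12:34:56.780'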
def parsedeglat (latstr):
    deg = _parsesexagesimal (latstr, 'latitude', True)
    if abs (deg) > 90:
        raise ValueError ('illegal latitude specification: ' + latstr)
    return deg * D2R
Parse a latitude formatted as sexagesimal degrees into an angle. This function converts a textual representation of a latitude, measured in degrees, into a floating point value measured in radians. The format of *latstr* is very limited: it may not have leading or trailing whitespace, and the components of the sexagesimal representation must be separated by colons. The input must therefore resemble something like ``"-00:12:34.5"``. A :exc:`ValueError` will be raised if the input does not resemble this template. Latitudes greater than 90 or less than -90 degrees are not allowed.
def sphdist (lat1, lon1, lat2, lon2):
    cd = np.cos (lon2 - lon1)
    sd = np.sin (lon2 - lon1)
    c2 = np.cos (lat2)
    c1 = np.cos (lat1)
    s2 = np.sin (lat2)
    s1 = np.sin (lat1)
    a = np.sqrt ((c2 * sd)**2 + (c1 * s2 - s1 * c2 * cd)**2)
    b = s1 * s2 + c1 * c2 * cd
    return np.arctan2 (a, b)
Calculate the distance between two locations on a sphere.

lat1
  The latitude of the first location.
lon1
  The longitude of the first location.
lat2
  The latitude of the second location.
lon2
  The longitude of the second location.

Returns the separation in radians. All arguments are in radians as well. The
arguments may be vectors.

Note that the ordering of the arguments maps to the nonstandard ordering
``(Dec, RA)`` in equatorial coordinates. In a spherical projection it maps
to ``(Y, X)`` which may also be unexpected.

The distance is computed with the "specialized Vincenty formula". Faster but
more error-prone formulae are possible; see Wikipedia on Great-circle
Distance.
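As a sanity check, two points one degree apart along the equator (note the
``(lat, lon)`` argument order)::

  import numpy as np

  d = sphdist (0., 0., 0., np.radians (1.))
  print (np.degrees (d))   # ~1.0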
def parang (hourangle, declination, latitude):
    return -np.arctan2 (-np.sin (hourangle),
                        np.cos (declination) * np.tan (latitude) -
                        np.sin (declination) * np.cos (hourangle))
Calculate the parallactic angle of a sky position.

This computes the parallactic angle of a sky position expressed in terms of
an hour angle and declination. Arguments:

hourangle
  The hour angle of the location on the sky.
declination
  The declination of the location on the sky.
latitude
  The latitude of the observatory.

Inputs and outputs are all in radians. Implementation adapted from GBTIDL
``parangle.pro``.
def load_skyfield_data():
    import os.path
    from astropy.config import paths
    from skyfield.api import Loader

    cache_dir = os.path.join(paths.get_cache_dir(), 'pwkit')
    loader = Loader(cache_dir)
    planets = loader('de421.bsp')
    ts = loader.timescale()
    return planets, ts
Load data files used in Skyfield. This will download files from the internet if they haven't been downloaded before. Skyfield downloads files to the current directory by default, which is not ideal. Here we abuse astropy and use its cache directory to cache the data files per-user. If we start downloading files in other places in pwkit we should maybe make this system more generic. And the dep on astropy is not at all necessary. Skyfield will print out a progress bar as it downloads things. Returns ``(planets, ts)``, the standard Skyfield ephemeris and timescale data files.
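Typical use, as a sketch (the first call downloads ``de421.bsp`` and the
timescale files into the cache)::

  planets, ts = load_skyfield_data ()
  earth = planets['earth']
  t = ts.utc (2020, 1, 1)   # a Skyfield Time object for further calculations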
def get_2mass_epoch (tmra, tmdec, debug=False):
    import codecs
    try:
        from urllib.request import urlopen
    except ImportError:
        from urllib2 import urlopen

    postdata = b'''-mime=csv
-source=2MASS
-out=_q,JD
-c=%.6f %.6f
-c.eq=J2000''' % (tmra * R2D, tmdec * R2D)

    jd = None

    for line in codecs.getreader('utf-8')(urlopen (_vizurl, postdata)):
        line = line.strip ()
        if debug:
            print_ ('D: 2M >>', line)

        if line.startswith ('1;'):
            jd = float (line[2:])

    if jd is None:
        import sys
        print_ ('warning: 2MASS epoch lookup failed; astrometry could be very wrong!',
                file=sys.stderr)
        return J2000

    return jd - 2400000.5
Given a 2MASS position, look up the epoch when it was observed.

This function uses the CDS Vizier web service to look up information in the
2MASS point source database. Arguments are:

tmra
  The source's J2000 right ascension, in radians.
tmdec
  The source's J2000 declination, in radians.
debug
  If True, the web server's response will be printed to :data:`sys.stdout`.

The return value is an MJD. If the lookup fails, a message will be printed
to :data:`sys.stderr` (unconditionally!) and the :data:`J2000` epoch will be
returned.
def get_simbad_astrometry_info (ident, items=_simbaditems, debug=False):
    import codecs
    try:
        from urllib.parse import quote
    except ImportError:
        from urllib import quote
    try:
        from urllib.request import urlopen
    except ImportError:
        from urllib2 import urlopen

    s = '\\n'.join ('%s %%%s' % (i, i) for i in items)
    s = '''output console=off script=off
format object "%s"
query id %s''' % (s, ident)
    url = _simbadbase + quote (s)
    results = {}
    errtext = None

    for line in codecs.getreader('utf-8')(urlopen (url)):
        line = line.strip ()
        if debug:
            print_ ('D: SA >>', line)

        if errtext is not None:
            errtext += line
        elif line.startswith ('::error'):
            errtext = ''
        elif len (line):
            k, v = line.split (' ', 1)
            results[k] = v

    if errtext is not None:
        raise Exception ('SIMBAD query error: ' + errtext)
    return results
Fetch astrometric information from the Simbad web service.

Given the name of a source as known to the CDS Simbad service, this function
looks up its positional information and returns it in a dictionary. In most
cases you should use an :class:`AstrometryInfo` object and its
:meth:`~AstrometryInfo.fill_from_simbad` method instead of this function.

Arguments:

ident
  The Simbad name of the source to look up.
items
  An iterable of data items to look up. The default fetches position, proper
  motion, parallax, and radial velocity information. Each item name
  resembles the string ``COO(d;A)`` or ``PLX(E)``. The allowed formats are
  defined `on this CDS page
  <http://simbad.u-strasbg.fr/Pages/guide/sim-fscript.htx>`_.
debug
  If true, the response from the webserver will be printed.

The return value is a dictionary with a key corresponding to the textual
result returned for each requested item.
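A sketch of a query (requires network access; the source name is just an
example, and the dictionary keys depend on the *items* requested)::

  info = get_simbad_astrometry_info ('GJ 1214')
  for key, value in sorted (info.items ()):
      print (key, '=', value)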
def predict_without_uncertainties(self, mjd, complain=True):
    import sys
    self.verify(complain=complain)

    planets, ts = load_skyfield_data() # might download stuff from the internet
    earth = planets['earth']
    t = ts.tdb(jd = mjd + 2400000.5)

    # "Best" position. The implementation here is a bit weird to keep
    # parallelism with predict().

    args = {
        'ra_hours': self.ra * R2H,
        'dec_degrees': self.dec * R2D,
    }

    if self.pos_epoch is not None:
        args['jd_of_position'] = self.pos_epoch + 2400000.5

    if self.promo_ra is not None:
        args['ra_mas_per_year'] = self.promo_ra
        args['dec_mas_per_year'] = self.promo_dec

    if self.parallax is not None:
        args['parallax_mas'] = self.parallax

    if self.vradial is not None:
        args['radial_km_per_s'] = self.vradial

    bestra, bestdec, _ = earth.at(t).observe(PromoEpochStar(**args)).radec()
    return bestra.radians, bestdec.radians
Predict the object position at a given MJD. The return value is a tuple ``(ra, dec)``, in radians, giving the predicted position of the object at *mjd*. Unlike :meth:`predict`, the astrometric uncertainties are ignored. This function is therefore deterministic but potentially misleading. If *complain* is True, print out warnings for incomplete information. This function relies on the external :mod:`skyfield` package.
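A hypothetical workflow, assuming an :class:`AstrometryInfo` object can be
constructed empty and populated from Simbad as described above::

  info = AstrometryInfo ()                   # assumed no-argument constructor
  info.fill_from_simbad ('GJ 1214')          # assumed to populate position, proper motion, etc.
  ra, dec = info.predict_without_uncertainties (57000.)   # MJD
  print (fmtradec (ra, dec))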
def print_prediction (self, ptup, precision=2):
    from . import ellipses
    bestra, bestdec, maj, min, pa = ptup

    f = ellipses.sigmascale (1)
    maj *= R2A
    min *= R2A
    pa *= R2D

    print_ ('position =', fmtradec (bestra, bestdec, precision=precision))
    print_ ('err(1σ) = %.*f" × %.*f" @ %.0f°' % (precision, maj * f, precision,
                                                 min * f, pa))
Print a summary of a predicted position. The argument *ptup* is a tuple returned by :meth:`predict`. It is printed to :data:`sys.stdout` in a reasonable format that uses Unicode characters.
def unicode_stdio ():
    if six.PY3:
        return

    enc = sys.stdin.encoding or 'utf-8'
    sys.stdin = codecs.getreader (enc) (sys.stdin)
    enc = sys.stdout.encoding or enc
    sys.stdout = codecs.getwriter (enc) (sys.stdout)
    enc = sys.stderr.encoding or enc
    sys.stderr = codecs.getwriter (enc) (sys.stderr)
Make sure that the standard I/O streams accept Unicode.

In Python 2, the standard I/O streams accept bytes, not Unicode characters.
This means that in principle every Unicode string that we want to output
should be encoded to utf-8 before print()ing. But Python 2.X has a hack
where, if the output is a terminal, it will automatically encode your
strings, using UTF-8 in most cases. BUT this hack doesn't kick in if you
pipe your program's output to another program. So it's easy to write a tool
that works fine in most cases but then blows up when you log its output to a
file.

The proper solution is just to do the encoding right. This function sets
things up to do this in the most sensible way I can devise, if we're running
on Python 2. This approach sets up compatibility with Python 3, which has
the stdio streams be in text mode rather than bytes mode to begin with.

Basically, every command-line Python program should call this right at
startup. I'm tempted to just invoke this code whenever this module is
imported since I foresee many accidental omissions of the call.
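The intended pattern is a one-liner at the top of each command-line script
(the module path shown here is an assumption about where this function
lives)::

  from pwkit.cli import unicode_stdio   # module path assumed

  unicode_stdio ()   # call before anything is printed
  # ... rest of the program ...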
def backtrace_on_usr1 ():
    import signal
    try:
        signal.signal (signal.SIGUSR1, _print_backtrace_signal_handler)
    except Exception as e:
        warn ('failed to set up Python backtraces on SIGUSR1: %s', e)
Install a signal handler such that this program prints a Python traceback upon receipt of SIGUSR1. This could be useful for checking that long-running programs are behaving properly, or for discovering where an infinite loop is occurring. Note, however, that the Python interpreter does not invoke Python signal handlers exactly when the process is signaled. For instance, a signal delivered in the midst of a time.sleep() call will only be seen by Python code after that call completes. This means that this feature may not be as helpful as one might like for debugging certain kinds of problems.
def die (fmt, *args):
    if not len (args):
        raise SystemExit ('error: ' + text_type (fmt))
    raise SystemExit ('error: ' + (fmt % args))
Raise a :exc:`SystemExit` exception with a formatted error message.

:arg str fmt: a format string
:arg args: arguments to the format string

If *args* is empty, a :exc:`SystemExit` exception is raised with the
argument ``'error: ' + str (fmt)``. Otherwise, the string component is
``fmt % args``.

If uncaught, the interpreter exits with an error code and prints the
exception argument.

Example::

   if ndim != 3:
      die ('require exactly 3 dimensions, not %d', ndim)
def pop_option (ident, argv=None):
    if argv is None:
        from sys import argv

    if len (ident) == 1:
        ident = '-' + ident
    else:
        ident = '--' + ident

    found = ident in argv
    if found:
        argv.remove (ident)

    return found
A lame routine for grabbing command-line arguments. Returns a boolean indicating whether the option was present. If it was, it's removed from the argument string. Because of the lame behavior, options can't be combined, and non-boolean options aren't supported. Operates on sys.argv by default. Note that this will proceed merrily if argv[0] matches your option.
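For example, with a command line like ``myprog --force -v input.txt``::

  force = pop_option ('force')   # looks for '--force'; True, and removed from argv
  verbose = pop_option ('v')     # looks for '-v'; True, and removed from argv
  # sys.argv is now ['myprog', 'input.txt']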
def show_usage (docstring, short, stream, exitcode):
    if stream is None:
        from sys import stdout as stream

    if not short:
        print ('Usage:', docstring.strip (), file=stream)
    else:
        intext = False
        for l in docstring.splitlines ():
            if intext:
                if not len (l):
                    break
                print (l, file=stream)
            elif len (l):
                intext = True
                print ('Usage:', l, file=stream)

        print ('\nRun with a sole argument --help for more detailed '
               'usage information.', file=stream)

    raise SystemExit (exitcode)
Print program usage information and exit.

:arg str docstring: the program help text

This function just prints *docstring* and exits. In most cases, the function
:func:`check_usage` should be used: it automatically checks :data:`sys.argv`
for a sole "-h" or "--help" argument and invokes this function.

This function is provided in case there are instances where the user should
get a friendly usage message that :func:`check_usage` doesn't catch. It can
be contrasted with :func:`wrong_usage`, which prints a terser usage message
and exits with an error code.
def check_usage (docstring, argv=None, usageifnoargs=False):
    if argv is None:
        from sys import argv

    if len (argv) == 1 and usageifnoargs:
        show_usage (docstring, (usageifnoargs != 'long'), None, 0)
    if len (argv) == 2 and argv[1] in ('-h', '--help'):
        show_usage (docstring, False, None, 0)
Check if the program has been run with a --help argument; if so, print usage
information and exit.

:arg str docstring: the program help text
:arg argv: the program arguments; taken as :data:`sys.argv` if given as
  :const:`None` (the default). (Note that this implies ``argv[0]`` should be
  the program name and not the first option.)
:arg bool usageifnoargs: if :const:`True`, usage information will be printed
  and the program will exit if no command-line arguments are passed. If
  "long", print long usage. Default is :const:`False`.

This function is intended for small programs launched from the command line.
The intention is for the program help information to be written in its
docstring, and then for the preamble to contain something like::

  \"\"\"myprogram - this is all the usage help you get\"\"\"

  import sys
  ... # other setup
  check_usage (__doc__)
  ... # go on with business

If it is determined that usage information should be shown,
:func:`show_usage` is called and the program exits.

See also :func:`wrong_usage`.
def wrong_usage (docstring, *rest):
    if len (rest) == 0:
        detail = 'invalid command-line arguments'
    elif len (rest) == 1:
        detail = rest[0]
    else:
        detail = rest[0] % tuple (rest[1:])

    print ('error:', detail, '\n', file=sys.stderr) # extra NL
    show_usage (docstring, True, sys.stderr, 1)
Print a message indicating invalid command-line arguments and exit with an
error code.

:arg str docstring: the program help text
:arg rest: an optional specific error message

This function is intended for small programs launched from the command line.
The intention is for the program help information to be written in its
docstring, and then for argument checking to look something like this::

  \"\"\"mytask <input> <output>

  Do something to the input to create the output.
  \"\"\"
  ...
  import sys
  ... # other setup
  check_usage (__doc__)
  ... # more setup
  if len (sys.argv) != 3:
     wrong_usage (__doc__, "expect exactly 2 arguments, not %d",
                  len (sys.argv))

When called, an error message is printed along with the *first stanza* of
*docstring*. The program then exits with an error code and a suggestion to
run the program with a --help argument to see more detailed usage
information. The "first stanza" of *docstring* is defined as everything up
until the first blank line, ignoring any leading blank lines.

The optional message in *rest* is treated as follows. If *rest* is empty,
the error message "invalid command-line arguments" is printed. If it is a
single item, the stringification of that item is printed. If it is more than
one item, the first item is treated as a format string, and it is
percent-formatted with the remaining values. See the above example.

See also :func:`check_usage` and :func:`show_usage`.
def excepthook (self, etype, evalue, etb):
    self.inner_excepthook (etype, evalue, etb)

    if issubclass (etype, KeyboardInterrupt):
        # Don't try this at home, kids. On some systems os.kill (0, ...)
        # signals our entire process group, which is not what we want,
        # so we use os.getpid ().
        signal.signal (signal.SIGINT, signal.SIG_DFL)
        os.kill (os.getpid (), signal.SIGINT)
Handle an uncaught exception. We always forward the exception on to whatever `sys.excepthook` was present upon setup. However, if the exception is a KeyboardInterrupt, we additionally kill ourselves with an uncaught SIGINT, so that invoking programs know what happened.
def calc_nu_b(b):
    return cgs.e * b / (2 * cgs.pi * cgs.me * cgs.c)
Calculate the cyclotron frequency in Hz given a magnetic field strength in Gauss. This is in cycles per second not radians per second; i.e. there is a 2π in the denominator: ν_B = e B / (2π m_e c)
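As a worked example, the prefactor e / (2π m_e c) is about 2.8 MHz per
Gauss, so a 3 kG field (an illustrative value for a strongly magnetized
low-mass star) gives a cyclotron frequency in the radio::

  nu = calc_nu_b (3000.)   # B in Gauss
  print (nu / 1e9)         # ~8.4 GHz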