def calc_snu(eta, kappa, width, elongation, dist):
    omega = (width / dist)**2
    depth = width * elongation
    tau = depth * kappa
    sourcefn = eta / kappa
    return 2 * omega * sourcefn * (1 - np.exp(-tau))
Calculate the flux density S_ν given a simple physical configuration. This is
basic radiative transfer as per Dulk (1985) equations 5, 6, and 11.

eta
  The emissivity, in units of ``erg s^-1 Hz^-1 cm^-3 sr^-1``.
kappa
  The absorption coefficient, in units of ``cm^-1``.
width
  The characteristic cross-sectional width of the emitting region, in cm.
elongation
  The elongation of the emitting region; ``depth = width * elongation``.
dist
  The distance to the emitting region, in cm.

The return value is the flux density, in units of ``erg s^-1 cm^-2 Hz^-1``.
The angular size of the source is taken to be ``(width / dist)**2``.
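A quick numerical sketch (all values are arbitrary placeholders; assumes
``numpy`` is imported as ``np`` and ``calc_snu`` is in scope)::

  eta = 1e-30       # erg s^-1 Hz^-1 cm^-3 sr^-1
  kappa = 1e-12     # cm^-1
  width = 1e10      # cm
  elongation = 2.   # so depth = 2e10 cm
  dist = 3.1e18     # cm, about 1 pc

  snu = calc_snu(eta, kappa, width, elongation, dist)
  # here tau = 0.02, so this is close to the optically-thin limit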
def calc_freefree_kappa(ne, t, hz):
    return 9.78e-3 * ne**2 * hz**-2 * t**-1.5 * (24.5 + np.log(t) - np.log(hz))
Dulk (1985) eq 20, assuming pure hydrogen.
def calc_freefree_eta(ne, t, hz):
    kappa = calc_freefree_kappa(ne, t, hz)
    return kappa * cgs.k * t * hz**2 / cgs.c**2
Dulk (1985) equations 7 and 20, assuming pure hydrogen.
def calc_freefree_snu_ujy(ne, t, width, elongation, dist, ghz):
    hz = ghz * 1e9
    eta = calc_freefree_eta(ne, t, hz)
    kappa = calc_freefree_kappa(ne, t, hz)
    snu = calc_snu(eta, kappa, width, elongation, dist)
    ujy = snu * cgs.jypercgs * 1e6
    return ujy
Calculate a flux density from pure free-free emission.
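For instance, a sketch evaluating a free-free spectrum at a few frequencies
(parameter values are made up for illustration)::

  import numpy as np

  ghz = np.array([1., 5., 10., 30.])
  ujy = calc_freefree_snu_ujy(ne=1e8, t=1e7, width=1e10,
                              elongation=1., dist=3.1e18, ghz=ghz)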
def calc_gs_eta(b, ne, delta, sinth, nu):
    s = nu / calc_nu_b(b)
    return (b * ne * 3.3e-24 * 10**(-0.52 * delta) *
            sinth**(-0.43 + 0.65 * delta) * s**(1.22 - 0.90 * delta))
Calculate the gyrosynchrotron emission coefficient η_ν. This is Dulk (1985)
equation 35, which is a fitting function assuming a power-law electron
population. Arguments are:

b
  Magnetic field strength in Gauss.
ne
  The density of electrons per cubic centimeter with energies greater than
  10 keV.
delta
  The power-law index defining the energy distribution of the electron
  population, with ``n(E) ~ E^(-delta)``. The equation is valid for
  ``2 <~ delta <~ 7``.
sinth
  The sine of the angle between the line of sight and the magnetic field
  direction. The equation is valid for θ > 20° or ``sinth > 0.34`` or so.
nu
  The frequency at which to calculate η, in Hz. The equation is valid for
  ``10 <~ nu/nu_b <~ 100``, which sets a limit on the ratio of ``nu`` and
  ``b``.

The return value is the emission coefficient (AKA "emissivity"), in units of
``erg s^-1 Hz^-1 cm^-3 sr^-1``.

No complaints are raised if you attempt to use the equation outside of its
range of validity.
def calc_gs_kappa(b, ne, delta, sinth, nu):
    s = nu / calc_nu_b(b)
    return (ne / b * 1.4e-9 * 10**(-0.22 * delta) *
            sinth**(-0.09 + 0.72 * delta) * s**(-1.30 - 0.98 * delta))
Calculate the gyrosynchrotron absorption coefficient κ_ν. This is Dulk (1985)
equation 36, which is a fitting function assuming a power-law electron
population. Arguments are:

b
  Magnetic field strength in Gauss.
ne
  The density of electrons per cubic centimeter with energies greater than
  10 keV.
delta
  The power-law index defining the energy distribution of the electron
  population, with ``n(E) ~ E^(-delta)``. The equation is valid for
  ``2 <~ delta <~ 7``.
sinth
  The sine of the angle between the line of sight and the magnetic field
  direction. The equation is valid for θ > 20° or ``sinth > 0.34`` or so.
nu
  The frequency at which to calculate κ, in Hz. The equation is valid for
  ``10 <~ nu/nu_b <~ 100``, which sets a limit on the ratio of ``nu`` and
  ``b``.

The return value is the absorption coefficient, in units of ``cm^-1``.

No complaints are raised if you attempt to use the equation outside of its
range of validity.
def calc_gs_nu_pk(b, ne, delta, sinth, depth):
    coldens = ne * depth
    return (2.72e3 * 10**(0.27 * delta) * sinth**(0.41 + 0.03 * delta) *
            coldens**(0.32 - 0.03 * delta) * b**(0.68 + 0.03 * delta))
Calculate the frequency of peak synchrotron emission, ν_pk. This is Dulk
(1985) equation 39, which is a fitting function assuming a power-law electron
population. Arguments are:

b
  Magnetic field strength in Gauss.
ne
  The density of electrons per cubic centimeter with energies greater than
  10 keV.
delta
  The power-law index defining the energy distribution of the electron
  population, with ``n(E) ~ E^(-delta)``. The equation is valid for
  ``2 <~ delta <~ 7``.
sinth
  The sine of the angle between the line of sight and the magnetic field
  direction. The equation is valid for θ > 20° or ``sinth > 0.34`` or so.
depth
  The path length through the emitting medium, in cm.

The return value is the peak frequency in Hz.

No complaints are raised if you attempt to use the equation outside of its
range of validity.
def calc_gs_snu_ujy(b, ne, delta, sinth, width, elongation, dist, ghz):
    hz = ghz * 1e9
    eta = calc_gs_eta(b, ne, delta, sinth, hz)
    kappa = calc_gs_kappa(b, ne, delta, sinth, hz)
    snu = calc_snu(eta, kappa, width, elongation, dist)
    ujy = snu * cgs.jypercgs * 1e6
    return ujy
Calculate a flux density from pure gyrosynchrotron emission. This combines
Dulk (1985) equations 35 and 36, which are fitting functions assuming a
power-law electron population, with standard radiative transfer through a
uniform medium. Arguments are:

b
  Magnetic field strength in Gauss.
ne
  The density of electrons per cubic centimeter with energies greater than
  10 keV.
delta
  The power-law index defining the energy distribution of the electron
  population, with ``n(E) ~ E^(-delta)``. The equation is valid for
  ``2 <~ delta <~ 7``.
sinth
  The sine of the angle between the line of sight and the magnetic field
  direction. The equation is valid for θ > 20° or ``sinth > 0.34`` or so.
width
  The characteristic cross-sectional width of the emitting region, in cm.
elongation
  The elongation of the emitting region; ``depth = width * elongation``.
dist
  The distance to the emitting region, in cm.
ghz
  The frequencies at which to evaluate the spectrum, **in GHz**.

The return value is the flux density **in μJy**. The arguments can be Numpy
arrays.

No complaints are raised if you attempt to use the equations outside of their
range of validity.
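A usage sketch evaluating a gyrosynchrotron spectrum over a frequency grid
(the numbers are illustrative only; in real use, stay within the validity
ranges stated above)::

  import numpy as np

  ghz = np.linspace(10., 80., 40)
  ujy = calc_gs_snu_ujy(b=300., ne=1e7, delta=3., sinth=0.7,
                        width=1e10, elongation=1., dist=3.1e18, ghz=ghz)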
def calc_synch_eta(b, ne, delta, sinth, nu, E0=1.):
    s = nu / calc_nu_b(b)
    return (b * ne * 8.6e-24 * (delta - 1) * sinth *
            (0.175 * s / (E0**2 * sinth))**(0.5 * (1 - delta)))
Calculate the relativistic synchrotron emission coefficient η_ν. This is Dulk
(1985) equation 40, which is an approximation assuming a power-law electron
population. Arguments are:

b
  Magnetic field strength in Gauss.
ne
  The density of electrons per cubic centimeter with energies greater than E0.
delta
  The power-law index defining the energy distribution of the electron
  population, with ``n(E) ~ E^(-delta)``. The equation is valid for
  ``2 <~ delta <~ 5``.
sinth
  The sine of the angle between the line of sight and the magnetic field
  direction. It's not specified for what range of values the expressions work
  well.
nu
  The frequency at which to calculate η, in Hz. It's not specified for what
  range of values the expressions work well.
E0
  The minimum energy of electrons to consider, in MeV. Defaults to 1 so that
  these functions can be called identically to the gyrosynchrotron functions.

The return value is the emission coefficient (AKA "emissivity"), in units of
``erg s^-1 Hz^-1 cm^-3 sr^-1``.

No complaints are raised if you attempt to use the equation outside of its
range of validity.
def calc_synch_nu_pk(b, ne, delta, sinth, depth, E0=1.):
    coldens = ne * depth
    return (3.2e7 * sinth * E0**((2 * delta - 2) / (delta + 4)) *
            (8.7e-12 * (delta - 1) * coldens / sinth)**(2. / (delta + 4)) *
            b**((delta + 2) / (delta + 4)))
Calculate the frequency of peak synchrotron emission, ν_pk. This is Dulk
(1985) equation 43, which is a fitting function assuming a power-law electron
population. Arguments are:

b
  Magnetic field strength in Gauss.
ne
  The density of electrons per cubic centimeter with energies greater than E0.
delta
  The power-law index defining the energy distribution of the electron
  population, with ``n(E) ~ E^(-delta)``. The equation is valid for
  ``2 <~ delta <~ 5``.
sinth
  The sine of the angle between the line of sight and the magnetic field
  direction. It's not specified for what range of values the expressions work
  well.
depth
  The path length through the emitting medium, in cm.
E0
  The minimum energy of electrons to consider, in MeV. Defaults to 1 so that
  these functions can be called identically to the gyrosynchrotron functions.

The return value is the peak frequency in Hz.

No complaints are raised if you attempt to use the equation outside of its
range of validity.
def calc_synch_snu_ujy(b, ne, delta, sinth, width, elongation, dist, ghz, E0=1.):
    hz = ghz * 1e9
    eta = calc_synch_eta(b, ne, delta, sinth, hz, E0=E0)
    kappa = calc_synch_kappa(b, ne, delta, sinth, hz, E0=E0)
    snu = calc_snu(eta, kappa, width, elongation, dist)
    ujy = snu * cgs.jypercgs * 1e6
    return ujy
Calculate a flux density from pure synchrotron emission. This combines Dulk
(1985) equations 40 and 41, which are fitting functions assuming a power-law
electron population, with standard radiative transfer through a uniform
medium. Arguments are:

b
  Magnetic field strength in Gauss.
ne
  The density of electrons per cubic centimeter with energies greater than E0.
delta
  The power-law index defining the energy distribution of the electron
  population, with ``n(E) ~ E^(-delta)``. The equation is valid for
  ``2 <~ delta <~ 5``.
sinth
  The sine of the angle between the line of sight and the magnetic field
  direction. It's not specified for what range of values the expressions work
  well.
width
  The characteristic cross-sectional width of the emitting region, in cm.
elongation
  The elongation of the emitting region; ``depth = width * elongation``.
dist
  The distance to the emitting region, in cm.
ghz
  The frequencies at which to evaluate the spectrum, **in GHz**.
E0
  The minimum energy of electrons to consider, in MeV. Defaults to 1 so that
  these functions can be called identically to the gyrosynchrotron functions.

The return value is the flux density **in μJy**. The arguments can be Numpy
arrays.

No complaints are raised if you attempt to use the equations outside of their
range of validity.
def makecfgdoc(taskname, doc):
    doc_args = dict(
        bulk = '\n'.join(l for l in doc.splitlines()
                         if not l.startswith('casatask ')),
        taskname = taskname
    )
    return _kwcli_cfg_class_doc_template % doc_args
In Python 2.x you can't alter the __doc__ of a class after you define it, so we need to provide a function that does the munging when we define each class. This is that function.
def clearcal(vis, weightonly=False):
    tb = util.tools.table()
    cb = util.tools.calibrater()

    # cb.open() will create the tables if they're not present, so
    # if that's the case, we don't actually need to run initcalset()
    tb.open(b(vis), nomodify=False)
    colnames = tb.colnames()
    needinit = ('MODEL_DATA' in colnames) or ('CORRECTED_DATA' in colnames)
    if 'IMAGING_WEIGHT' not in colnames:
        c = dict(clearcal_imaging_col_tmpl)
        c['shape'] = tb.getcell(b'DATA', 0).shape[-1:]
        tb.addcols({b'IMAGING_WEIGHT': c}, clearcal_imaging_dminfo_tmpl)
    tb.close()

    if not weightonly:
        import casadef
        if casadef.casa_version.startswith('5.'):
            cb.setvi(old=True, quiet=False)
        cb.open(b(vis))
        if needinit:
            cb.initcalset()
        cb.close()
Fill the imaging and calibration columns (``MODEL_DATA``, ``CORRECTED_DATA``,
``IMAGING_WEIGHT``) of each measurement set with default values, creating the
columns if necessary.

vis (string)
  Path to the input measurement set.
weightonly (boolean)
  If true, just create the ``IMAGING_WEIGHT`` column; do not fill in the
  visibility data columns.

If you want to reset calibration models, these days you probably want
:func:`delmod_cli`. If you want to quickly make the columns go away, you
probably want :func:`delcal`.

Example::

  from pwkit.environments.casa import tasks
  tasks.clearcal('myvis.ms')
def concat(invises, outvis, timesort=False):
    tb = util.tools.table()
    ms = util.tools.ms()

    if os.path.exists(outvis):
        raise RuntimeError('output "%s" already exists' % outvis)

    for invis in invises:
        if not os.path.isdir(invis):
            raise RuntimeError('input "%s" does not exist' % invis)

    tb.open(b(invises[0]))
    tb.copy(b(outvis), deep=True, valuecopy=True)
    tb.close()

    ms.open(b(outvis), nomodify=False)

    for invis in invises[1:]:
        ms.concatenate(msfile=b(invis), freqtol=b(concat_freqtol),
                       dirtol=b(concat_dirtol))

    ms.writehistory(message=b'taskname=tasklib.concat', origin=b'tasklib.concat')
    ms.writehistory(message=b('vis = ' + ', '.join(invises)),
                    origin=b'tasklib.concat')
    ms.writehistory(message=b('timesort = ' + 'FT'[int(timesort)]),
                    origin=b'tasklib.concat')

    if timesort:
        ms.timesort()

    ms.close()
Concatenate visibility measurement sets.

invises (list of str)
  Paths to the input measurement sets.
outvis (str)
  Path to the output measurement set.
timesort (boolean)
  If true, sort the output in time after concatenation.

Example::

  from pwkit.environments.casa import tasks
  tasks.concat(['epoch1.ms', 'epoch2.ms'], 'combined.ms')
def delcal(mspath):
    wantremove = 'MODEL_DATA CORRECTED_DATA'.split()
    tb = util.tools.table()
    tb.open(b(mspath), nomodify=False)
    cols = frozenset(tb.colnames())
    toremove = [b(c) for c in wantremove if c in cols]
    if len(toremove):
        tb.removecols(toremove)
    tb.close()

    # We want to return a `str` type, which is what we already
    # have in Python 2 but not in 3.
    if six.PY2:
        return toremove
    else:
        return [c.decode('utf8') for c in toremove]
Delete the ``MODEL_DATA`` and ``CORRECTED_DATA`` columns from a measurement
set.

mspath (str)
  The path to the MS to modify.

Example::

  from pwkit.environments.casa import tasks
  tasks.delcal('dataset.ms')
def delmod_cli(argv, alter_logger=True):
    check_usage(delmod_doc, argv, usageifnoargs=True)
    if alter_logger:
        util.logger()

    cb = util.tools.calibrater()

    for mspath in argv[1:]:
        cb.open(b(mspath), addcorr=False, addmodel=False)
        cb.delmod(otf=True, scr=False)
        cb.close()
Command-line access to ``delmod`` functionality.

The ``delmod`` task deletes "on-the-fly" model information from a Measurement
Set. It is so easy to implement that a standalone function is essentially
unnecessary. Just write::

  from pwkit.environments.casa import util

  cb = util.tools.calibrater()
  cb.open('dataset.ms', addcorr=False, addmodel=False)
  cb.delmod(otf=True, scr=False)
  cb.close()

If you want to delete the scratch columns, use :func:`delcal`. If you want to
clear the scratch columns, use :func:`clearcal`.
def image2fits(mspath, fitspath, velocity=False, optical=False, bitpix=-32,
               minpix=0, maxpix=-1, overwrite=False, dropstokes=False,
               stokeslast=True, history=True, **kwargs):
    ia = util.tools.image()
    ia.open(b(mspath))
    ia.tofits(outfile=b(fitspath), velocity=velocity, optical=optical,
              bitpix=bitpix, minpix=minpix, maxpix=maxpix,
              overwrite=overwrite, dropstokes=dropstokes,
              stokeslast=stokeslast, history=history, **kwargs)
    ia.close()
Convert an image in MS format to FITS format.

mspath (str)
  The path to the input MS.
fitspath (str)
  The path to the output FITS file.
velocity (boolean)
  (To be documented.)
optical (boolean)
  (To be documented.)
bitpix (integer)
  (To be documented.)
minpix (integer)
  (To be documented.)
maxpix (integer)
  (To be documented.)
overwrite (boolean)
  Whether the task is allowed to overwrite an existing destination file.
dropstokes (boolean)
  Whether the "Stokes" (polarization) axis of the image should be dropped.
stokeslast (boolean)
  Whether the "Stokes" (polarization) axis of the image should be placed as
  the last (innermost?) axis of the image cube.
history (boolean)
  (To be documented.)
``**kwargs``
  Forwarded on to the ``tofits`` function of the CASA ``image`` tool.
def importalma(asdm, ms):
    from .scripting import CasapyScript
    script = os.path.join(os.path.dirname(__file__), 'cscript_importalma.py')
    with CasapyScript(script, asdm=asdm, ms=ms) as cs:
        pass
Convert an ALMA low-level ASDM dataset to Measurement Set format.

asdm (str)
  The path to the input ASDM dataset.
ms (str)
  The path to the output MS dataset.

This implementation automatically infers the value of the "tbuff" parameter.

Example::

  from pwkit.environments.casa import tasks
  tasks.importalma('myalma.asdm', 'myalma.ms')
def importevla(asdm, ms):
    from .scripting import CasapyScript

    # Here's the best way I can figure to find the recommended value of tbuff
    # (= 1.5 * integration time). Obviously you might have different
    # integration times in the dataset and such, and we're just going to
    # ignore that possibility.

    bdfstem = os.listdir(os.path.join(asdm, 'ASDMBinary'))[0]
    bdf = os.path.join(asdm, 'ASDMBinary', bdfstem)
    tbuff = None

    with open(bdf, 'rb') as f:
        for linenum, line in enumerate(f):
            if linenum > 60:
                raise PKError('cannot find integration time info in %s', bdf)

            if not line.startswith(b'<sdmDataSubsetHeader'):
                continue

            try:
                i1 = line.index(b'<interval>') + len(b'<interval>')
                i2 = line.index(b'</interval>')
                if i2 <= i1:
                    raise ValueError()
            except ValueError:
                raise PKError('cannot parse integration time info in %s', bdf)

            tbuff = float(line[i1:i2]) * 1.5e-9  # nanosecs, and want 1.5x
            break

    if tbuff is None:
        raise PKError('found no integration time info')

    print('importevla: %s -> %s with tbuff=%.2f' % (asdm, ms, tbuff))

    script = os.path.join(os.path.dirname(__file__), 'cscript_importevla.py')
    with CasapyScript(script, asdm=asdm, ms=ms, tbuff=tbuff) as cs:
        pass
Convert an EVLA low-level SDM dataset to Measurement Set format.

asdm (str)
  The path to the input ASDM dataset.
ms (str)
  The path to the output MS dataset.

This implementation automatically infers the value of the "tbuff" parameter.

Example::

  from pwkit.environments.casa import tasks
  tasks.importevla('myvla.sdm', 'myvla.ms')
def listobs(vis):
    def inner_list(sink):
        try:
            ms = util.tools.ms()
            ms.open(vis)
            ms.summary(verbose=True)
            ms.close()
        except Exception as e:
            sink.post(b'listobs failed: %s' % e, priority=b'SEVERE')

    for line in util.forkandlog(inner_list):
        info = line.rstrip().split('\t', 3)  # date, priority, origin, message

        if len(info) > 3:
            yield info[3]
        else:
            yield ''
Textually describe the contents of a measurement set.

vis (str)
  The path to the dataset.
Returns
  A generator of lines of human-readable output.

Errors can only be detected by looking at the output.

Example::

  from pwkit.environments.casa import tasks

  for line in tasks.listobs('mydataset.ms'):
      print(line)
def mjd2date(mjd, precision=3):
    from astropy.time import Time
    dt = Time(mjd, format='mjd', scale='utc').to_datetime()
    fracsec = ('%.*f' % (precision, 1e-6 * dt.microsecond)).split('.')[1]
    return '%04d/%02d/%02d/%02d:%02d:%02d.%s' % (
        dt.year, dt.month, dt.day, dt.hour, dt.minute, dt.second, fracsec
    )
Convert an MJD to a date string in the format used by CASA.

mjd (numeric)
  An MJD value in the UTC timescale.
precision (integer, default 3)
  The number of digits of decimal precision in the seconds portion of the
  returned string.
Returns
  A string representing the input argument in CASA format:
  ``YYYY/MM/DD/HH:MM:SS.SSS``.

Example::

  from pwkit.environments.casa import tasks
  print(tasks.mjd2date(55555))   # yields '2010/12/25/00:00:00.000'
def plotants(vis, figfile):
    from .scripting import CasapyScript
    script = os.path.join(os.path.dirname(__file__), 'cscript_plotants.py')
    with CasapyScript(script, vis=vis, figfile=figfile) as cs:
        pass
Plot the physical layout of the antennas described in the MS.

vis (str)
  Path to the input dataset.
figfile (str)
  Path to the output image file.

The output image format will be inferred from the extension of *figfile*.

Example::

  from pwkit.environments.casa import tasks
  tasks.plotants('dataset.ms', 'antennas.png')
def latexify(obj, **kwargs):
    if hasattr(obj, '__pk_latex__'):
        return obj.__pk_latex__(**kwargs)

    if isinstance(obj, text_type):
        from .unicode_to_latex import unicode_to_latex
        return unicode_to_latex(obj)

    if isinstance(obj, bool):
        # isinstance(True, int) = True, so gotta handle this first.
        raise ValueError('no well-defined LaTeXification of bool %r' % obj)

    if isinstance(obj, float):
        nplaces = kwargs.get('nplaces')
        if nplaces is None:
            return '$%f$' % obj
        return '$%.*f$' % (nplaces, obj)

    if isinstance(obj, int):
        return '$%d$' % obj

    if isinstance(obj, binary_type):
        if all(c in _printable_ascii for c in obj):
            return obj.decode('ascii')
        raise ValueError('no safe LaTeXification of binary string %r' % obj)

    raise ValueError('can\'t LaTeXify %r' % obj)
Render an object in LaTeX appropriately.
def latexify_n2col(x, nplaces=None, **kwargs):
    if nplaces is not None:
        t = '%.*f' % (nplaces, x)
    else:
        t = '%f' % x

    if '.' not in t:
        return '$%s$ &' % t

    left, right = t.split('.')
    return '$%s$ & $.%s$' % (left, right)
Render a number into LaTeX in a 2-column format, where the columns split immediately to the left of the decimal point. This gives nice alignment of numbers in a table.
def latexify_u3col(obj, **kwargs):
    if hasattr(obj, '__pk_latex_u3col__'):
        return obj.__pk_latex_u3col__(**kwargs)

    # TODO: there are reasonable ways to format many basic types, but I'm not
    # going to implement them until I need to.

    raise ValueError('can\'t LaTeXify %r in 3-column uncertain format' % obj)
Convert an object to special LaTeX for uncertainty tables. This conversion is meant for uncertain values in a table. The return value should span three columns. The first column ends just before the decimal point in the main number value, if it has one. It has no separation from the second column. The second column goes from the decimal point until just before the "plus-or-minus" indicator. The third column goes from the "plus-or-minus" until the end. If the item being formatted does not fit this schema, it can be wrapped in something like '\multicolumn{3}{c}{...}'.
def latexify_l3col(obj, **kwargs):
    if hasattr(obj, '__pk_latex_l3col__'):
        return obj.__pk_latex_l3col__(**kwargs)

    if isinstance(obj, bool):
        # isinstance(True, int) = True, so gotta handle this first.
        raise ValueError('no well-defined l3col LaTeXification of bool %r' % obj)

    if isinstance(obj, float):
        return '&' + latexify_n2col(obj, **kwargs)

    if isinstance(obj, int):
        return '& $%d$ &' % obj

    raise ValueError('can\'t LaTeXify %r in 3-column limit format' % obj)
Convert an object to special LaTeX for limit tables. This conversion is meant for limit values in a table. The return value should span three columns. The first column is the limit indicator: <, >, ~, etc. The second column is the whole part of the value, up until just before the decimal point. The third column is the decimal point and the fractional part of the value, if present. If the item being formatted does not fit this schema, it can be wrapped in something like '\multicolumn{3}{c}{...}'.
def _broadcast_shapes(s1, s2):
    n1 = len(s1)
    n2 = len(s2)
    n = max(n1, n2)
    res = [1] * n

    for i in range(n):
        if i >= n1:
            c1 = 1
        else:
            c1 = s1[n1-1-i]

        if i >= n2:
            c2 = 1
        else:
            c2 = s2[n2-1-i]

        if c1 == 1:
            rc = c2
        elif c2 == 1 or c1 == c2:
            rc = c1
        else:
            raise ValueError('array shapes %r and %r are not compatible' % (s1, s2))

        res[n-1-i] = rc

    return tuple(res)
Given array shapes `s1` and `s2`, compute the shape of the array that would result from broadcasting them together.
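For example, this mirrors numpy's broadcasting rules::

  _broadcast_shapes((3, 1, 5), (4, 5))   # -> (3, 4, 5)
  _broadcast_shapes((2, 3), (3,))        # -> (2, 3)
  _broadcast_shapes((2, 3), (4, 3))      # raises ValueError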
def uval(self):
    "Accesses :attr:`value` and :attr:`uncert` as a :class:`pwkit.msmt.Uval`."
    from .msmt import Uval
    return Uval.from_norm(self.value, self.uncert)
Accesses :attr:`value` and :attr:`uncert` as a :class:`pwkit.msmt.Uval`.
def set_data(self, data, invsigma=None):
    self.data = np.array(data, dtype=np.float, ndmin=1)

    if invsigma is None:
        self.invsigma = np.ones(self.data.shape)
    else:
        i = np.array(invsigma, dtype=np.float)
        self.invsigma = np.broadcast_arrays(self.data, i)[1]  # allow scalar invsigma

    if self.invsigma.shape != self.data.shape:
        raise ValueError('data values and inverse-sigma values must have same shape')

    return self
Set the data to be modeled. Returns *self*.
def print_soln(self):
    lmax = reduce(max, (len(x) for x in self.pnames), len('r chi sq'))

    if self.puncerts is None:
        for pn, val in zip(self.pnames, self.params):
            print('%s: %14g' % (pn.rjust(lmax), val))
    else:
        for pn, val, err in zip(self.pnames, self.params, self.puncerts):
            frac = abs(100. * err / val)
            print('%s: %14g +/- %14g (%.2f%%)' % (pn.rjust(lmax), val, err, frac))

    if self.rchisq is not None:
        print('%s: %14g' % ('r chi sq'.rjust(lmax), self.rchisq))
    elif self.chisq is not None:
        print('%s: %14g' % ('chi sq'.rjust(lmax), self.chisq))
    else:
        print('%s: unknown/undefined' % ('r chi sq'.rjust(lmax)))

    return self
Print information about the model solution.
def plot(self, modelx, dlines=False, xmin=None, xmax=None,
         ymin=None, ymax=None, **kwargs):
    import omega as om

    modelx = np.asarray(modelx)
    if modelx.shape != self.data.shape:
        raise ValueError('modelx and data arrays must have same shape')

    modely = self.mfunc(modelx)
    sigmas = self.invsigma**-1  # TODO: handle invsigma = 0

    vb = om.layout.VBox(2)
    vb.pData = om.quickXYErr(modelx, self.data, sigmas, 'Data',
                             lines=dlines, **kwargs)

    vb[0] = vb.pData
    vb[0].addXY(modelx, modely, 'Model')
    vb[0].setYLabel('Y')
    vb[0].rebound(False, True)
    vb[0].setBounds(xmin, xmax, ymin, ymax)

    vb[1] = vb.pResid = om.RectPlot()
    vb[1].defaultField.xaxis = vb[1].defaultField.xaxis
    vb[1].addXYErr(modelx, self.resids, sigmas, None, lines=False)
    vb[1].setLabels('X', 'Residuals')
    vb[1].rebound(False, True)
    # ignore Y values since residuals are on different scale:
    vb[1].setBounds(xmin, xmax)

    vb.setWeight(0, 3)
    return vb
Plot the data and model (requires `omega`). This assumes that `data` is 1D and that `mfunc` takes one argument that should be treated as the X variable.
def show_corr(self):
    "Show the parameter correlation matrix with `pwkit.ndshow_gtk3`."
    from .ndshow_gtk3 import view
    d = np.diag(self.covar) ** -0.5
    corr = self.covar * d[np.newaxis, :] * d[:, np.newaxis]
    view(corr, title='Correlation Matrix')
Show the parameter correlation matrix with `pwkit.ndshow_gtk3`.
def set_func(self, func, pnames, args=()):
    from .lmmin import Problem

    self.func = func
    self._args = args
    self.pnames = list(pnames)
    self.lm_prob = Problem(len(self.pnames))
    return self
Set the model function to use an efficient but tedious calling convention. The function should obey the following convention:: def func(param_vec, *args): modeled_data = { do something using param_vec } return modeled_data This function creates the :class:`pwkit.lmmin.Problem` so that the caller can futz with it before calling :meth:`solve`, if so desired. Returns *self*.
def set_simple_func(self, func, args=()):
    code = get_function_code(func)
    npar = code.co_argcount - len(args)
    pnames = code.co_varnames[:npar]

    def wrapper(params, *args):
        return func(*(tuple(params) + args))

    return self.set_func(wrapper, pnames, args)
Set the model function to use a simple but somewhat inefficient calling convention. The function should obey the following convention:: def func(param0, param1, ..., paramN, *args): modeled_data = { do something using the parameters } return modeled_data Returns *self*.
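A fitting sketch (the ``Model(None, data)`` construction is assumed from the
:meth:`as_nonlinear` code below; the function and parameter names are
illustrative)::

  import numpy as np

  x = np.linspace(0., 1., 50)
  data = 2.5 * x + 1.0 + np.random.normal(scale=0.1, size=x.size)

  def line(m, b, x):
      return m * x + b

  mdl = Model(None, data)                # constructor signature assumed
  mdl.set_simple_func(line, args=(x,))   # pnames inferred: ('m', 'b')
  mdl.solve([1., 0.]).print_soln()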
def make_frozen_func(self, params):
    params = np.array(params, dtype=np.float, ndmin=1)
    from functools import partial
    return partial(self.func, params)
Returns a model function frozen to the specified parameter values. Any remaining arguments are left free and must be provided when the function is called. For this model, the returned function is the application of :func:`functools.partial` to the :attr:`func` property of this object.
def solve(self, guess):
    guess = np.array(guess, dtype=np.float, ndmin=1)
    f = self.func
    args = self._args

    def lmfunc(params, vec):
        vec[:] = f(params, *args).flatten()

    self.lm_prob.set_residual_func(self.data.flatten(),
                                   self.invsigma.flatten(),
                                   lmfunc, None)
    self.lm_soln = soln = self.lm_prob.solve(guess)

    self.params = soln.params
    self.puncerts = soln.perror
    self.covar = soln.covar
    self.mfunc = self.make_frozen_func(soln.params)

    # fvec = resids * invsigma = (data - mdata) * invsigma
    self.resids = soln.fvec.reshape(self.data.shape) / self.invsigma
    self.mdata = self.data - self.resids

    # lm_soln.fnorm can be unreliable ("max(fnorm, fnorm1)" branch)
    self.chisq = (self.lm_soln.fvec**2).sum()
    if soln.ndof > 0:
        self.rchisq = self.chisq / soln.ndof

    return self
Solve for the parameters, using an initial guess. This uses the Levenberg-Marquardt optimizer described in :mod:`pwkit.lmmin`. Returns *self*.
def as_nonlinear(self, params=None):
    if params is None:
        params = self.params

    nlm = Model(None, self.data, self.invsigma)
    nlm.set_func(lambda p, x: npoly.polyval(x, p),
                 self.pnames, args=(self.x,))

    if params is not None:
        nlm.solve(params)
    return nlm
Return a `Model` equivalent to this object. The nonlinear solver is less
efficient, but lets you freeze parameters, compute uncertainties, etc.

If the `params` argument is provided, solve() will be called on the returned
object with those parameters. If it is `None` and this object has parameters
in `self.params`, those will be used. Otherwise, solve() will not be called
on the returned object.
def debug_derivative(self, guess):
    from .lmmin import check_derivative
    return check_derivative(self.component.npar, self.data.size,
                            self.lm_model, self.lm_deriv, guess)
Returns ``(explicit, auto)``.
def files(self):
    if self.topic.has_file:
        yield self.topic.file.file_url
    for reply in self.replies:
        if reply.has_file:
            yield reply.file.file_url
Returns the URLs of all files attached to posts in the thread.
def thumbs(self):
    if self.topic.has_file:
        yield self.topic.file.thumbnail_url
    for reply in self.replies:
        if reply.has_file:
            yield reply.file.thumbnail_url
Returns the URLs of all thumbnails in the thread.
def filenames(self):
    if self.topic.has_file:
        yield self.topic.file.filename
    for reply in self.replies:
        if reply.has_file:
            yield reply.file.filename
Returns the filenames of all files attached to posts in the thread.
def thumbnames(self):
    if self.topic.has_file:
        yield self.topic.file.thumbnail_fname
    for reply in self.replies:
        if reply.has_file:
            yield reply.file.thumbnail_fname
Returns the filenames of all thumbnails in the thread.
def file_objects(self):
    if self.topic.has_file:
        yield self.topic.file
    for reply in self.replies:
        if reply.has_file:
            yield reply.file
Returns the :class:`basc_py4chan.File` objects of all files attached to posts in the thread.
def basic(args=None):
    if args is None:
        import sys
        args = sys.argv[1:]

    parsed = Holder()

    for arg in args:
        if arg[0] == '+':
            for kw in arg[1:].split(','):
                parsed.set_one(kw, True)
            # avoid analogous -a,b,c syntax because it gets confused
            # with -a --help, etc.
        else:
            t = arg.split('=', 1)
            if len(t) < 2:
                raise KwargvError('don\'t know what to do with argument "%s"', arg)
            if not len(t[1]):
                raise KwargvError('empty value for keyword argument "%s"', t[0])
            parsed.set_one(t[0], t[1])

    return parsed
Parse the string list *args* as a set of keyword arguments in a very simple-minded way, splitting on equals signs. Returns a :class:`pwkit.Holder` instance with attributes set to strings. The form ``+foo`` is mapped to setting ``foo = True`` on the :class:`pwkit.Holder` instance. If *args* is ``None``, ``sys.argv[1:]`` is used. Raises :exc:`KwargvError` on invalid arguments (i.e., ones without an equals sign or a leading plus sign).
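For example (a sketch; assumes :func:`basic` is importable from its module)::

  parsed = basic(['+verbose', 'out=result.txt'])
  parsed.verbose   # -> True
  parsed.out       # -> 'result.txt'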
def parse_or_die(self, args=None):
    from .cli import die

    try:
        return self.parse(args)
    except KwargvError as e:
        die(e)
Like :meth:`ParseKeywords.parse`, but calls :func:`pwkit.cli.die` if a
:exc:`KwargvError` is raised, printing the exception text. Returns *self*
for convenience.
def cas_a (freq_mhz, year):
    # The snu rule is right out of Baars et al. The dnu is corrected
    # for the frequency being measured in MHz, not GHz.
    snu = 10. ** (5.745 - 0.770 * np.log10 (freq_mhz))  # Jy
    dnu = 0.01 * (0.07 - 0.30 * np.log10 (freq_mhz))  # percent per yr
    loss = (1 - dnu) ** (year - 1980.)
    return snu * loss
Return the flux of Cas A given a frequency and the year of observation, based
on the formula given in Baars et al., 1977.

Parameters:

freq_mhz
  Observation frequency in MHz.
year
  Year of observation. May be floating-point.

Returns: flux density in Jy.
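A usage sketch::

  cas_a(1425., 2015.)   # Cas A flux density in Jy at 1425 MHz, epoch 2015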
def init_cas_a (year):
    year = float (year)
    models['CasA'] = lambda f: cas_a (f, year)
Insert an entry for Cas A into the table of models. The year of the
observations must be specified to account for the time variation of Cas A's
emission.
def add_from_vla_obs (src, Lband, Cband):
    if src in models:
        raise PKError ('already have a model for ' + src)

    fL = np.log10 (1425)
    fC = np.log10 (4860)
    lL = np.log10 (Lband)
    lC = np.log10 (Cband)

    A = (lL - lC) / (fL - fC)
    B = lL - A * fL

    def fluxdens (freq_mhz):
        return 10. ** (A * np.log10 (freq_mhz) + B)

    def spindex (freq_mhz):
        return A

    models[src] = fluxdens
    spindexes[src] = spindex
Add an entry into the models table for a source based on L-band and C-band flux densities.
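The function just fits a straight line in log-log space through the two
measurements. A self-contained sketch of the same arithmetic (the flux
densities here are made up)::

  import numpy as np

  Lband, Cband = 15.0, 7.5   # hypothetical fluxes (Jy) at 1425 and 4860 MHz
  A = (np.log10(Lband) - np.log10(Cband)) / (np.log10(1425) - np.log10(4860))
  B = np.log10(Lband) - A * np.log10(1425)
  s_2500 = 10. ** (A * np.log10(2500.) + B)   # interpolated flux at 2500 MHz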
def in_casapy(helper, vis=None):
    import numpy as np, sys
    from correct_ant_posns import correct_ant_posns

    info = correct_ant_posns(vis, False)
    if len(info) != 3 or info[0] != 0 or not len(info[1]):
        helper.die('failed to fetch VLA antenna positions; got %r', info)

    antenna = info[1]
    parameter = info[2]

    with open(helper.temppath('info.npy'), 'wb') as f:
        np.save(f, antenna)
        np.save(f, parameter)
This function is run inside the weirdo casapy IPython environment! A strange set of modules is available, and the `pwkit.environments.casa.scripting` system sets up a very particular environment to allow encapsulated scripting.
def sigmascale (nsigma):
    from scipy.special import erfc
    return np.sqrt (-2 * np.log (erfc (nsigma / np.sqrt (2))))
Say we take a Gaussian bivariate and convert the parameters of the distribution to an ellipse (major, minor, PA). By what factor should we scale those axes to make the area of the ellipse correspond to the n-sigma confidence interval? Negative or zero values result in NaN.
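For example, this recovers the classic result that the 68.3% (1σ) confidence
ellipse in two dimensions is about 1.5 times larger than the naive 1σ
ellipse::

  sigmascale(1.)   # ≈ 1.515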
def clscale (cl):
    rv = np.sqrt (-2 * np.log (1 - cl))
    rv[np.where (cl <= 0)] = np.nan
    return rv
Say we take a Gaussian bivariate and convert the parameters of the distribution to an ellipse (major, minor, PA). By what factor should we scale those axes to make the area of the ellipse correspond to the confidence interval CL? (I.e. 0 < CL < 1)
def bivell (sx, sy, cxy):
    # See CfA notebook #1 pp. 129-133.
    _bivcheck (sx, sy, cxy)
    from numpy import arctan2, sqrt

    sx2, sy2, cxy2 = sx**2, sy**2, cxy**2

    pa = 0.5 * arctan2 (2 * cxy, sx2 - sy2)
    h = sqrt ((sx2 - sy2)**2 + 4*cxy2)

    t = 2 * (sx2 * sy2 - cxy2) / (sx2 + sy2 - h)
    if t < 0:
        raise ValueError ('covariance just barely out of bounds [1] '
                          '(sx=%.10e, sy=%.10e, cxy=%.10e, cxy/sxsy=%.16f)'
                          % (sx, sy, cxy, cxy / (sx * sy)))
    mjr = sqrt (t)

    t = 2 * (sx2 * sy2 - cxy2) / (sx2 + sy2 + h)
    if t < 0:  # if we got this far, shouldn't happen, but ...
        raise ValueError ('covariance just barely out of bounds [2] '
                          '(sx=%.10e, sy=%.10e, cxy=%.10e, cxy/sxsy=%.16f)'
                          % (sx, sy, cxy, cxy / (sx * sy)))
    mnr = sqrt (t)

    return ellnorm (mjr, mnr, pa)
Given the parameters of a Gaussian bivariate distribution, compute the
parameters for the equivalent 2D Gaussian in ellipse form (major, minor, pa).

Inputs:

* sx: standard deviation (not variance) of x var
* sy: standard deviation (not variance) of y var
* cxy: covariance (not correlation coefficient) of x and y

Outputs:

* mjr: major axis of equivalent 2D Gaussian (sigma, not FWHM)
* mnr: minor axis
* pa: position angle, rotating from +x to +y

Lots of sanity-checking because it's obnoxiously easy to have numerics that
just barely blow up on you.
def bivnorm (sx, sy, cxy):
    _bivcheck (sx, sy, cxy)
    from numpy import pi, sqrt

    t = (sx * sy)**2 - cxy**2
    if t <= 0:
        raise ValueError ('covariance just barely out of bounds '
                          '(sx=%.10e, sy=%.10e, cxy=%.10e, cxy/sxsy=%.16f)'
                          % (sx, sy, cxy, cxy / (sx * sy)))
    return (2 * pi * sqrt (t))**-1
Given the parameters of a Gaussian bivariate distribution, compute the
correct normalization for the equivalent 2D Gaussian. It's
``1 / (2 pi sqrt(sx**2 sy**2 - cxy**2))``. This function adds a lot of sanity
checking.

Inputs:

* sx: standard deviation (not variance) of x var
* sy: standard deviation (not variance) of y var
* cxy: covariance (not correlation coefficient) of x and y

Returns: the scalar normalization.
def bivabc (sx, sy, cxy):
    _bivcheck (sx, sy, cxy)

    sx2, sy2, cxy2 = sx**2, sy**2, cxy**2
    t = 1. / (sx2 * sy2 - cxy2)
    if t <= 0:
        raise ValueError ('covariance just barely out of bounds '
                          '(sx=%.10e, sy=%.10e, cxy=%.10e, cxy/sxsy=%.16f)'
                          % (sx, sy, cxy, cxy / (sx * sy)))

    a = -0.5 * sy2 * t
    c = -0.5 * sx2 * t
    b = cxy * t
    return _abccheck (a, b, c)
Compute nontrivial parameters for evaluating a bivariate distribution as a
2D Gaussian.

Inputs:

* sx: standard deviation (not variance) of x var
* sy: standard deviation (not variance) of y var
* cxy: covariance (not correlation coefficient) of x and y

Returns: (a, b, c), where z = k exp (ax² + bxy + cy²). The proper value for k
can be obtained from bivnorm().
def databiv (xy, coordouter=False, **kwargs):
    xy = np.asarray (xy)
    if xy.ndim != 2:
        raise ValueError ('"xy" must be a 2D array')

    if coordouter:
        if xy.shape[0] != 2:
            raise ValueError ('if "coordouter" is True, first axis of "xy" '
                              'must have size 2')
    else:
        if xy.shape[1] != 2:
            raise ValueError ('if "coordouter" is False, second axis of "xy" '
                              'must have size 2')

    cov = np.cov (xy, rowvar=coordouter, **kwargs)
    sx, sy = np.sqrt (np.diag (cov))
    cxy = cov[0,1]
    return _bivcheck (sx, sy, cxy)
Compute the main parameters of a bivariate distribution from data. The
parameters are returned in the same format as used in the rest of this
module.

* xy: a 2D data array of shape (2, nsamp) or (nsamp, 2)
* coordouter: if True, the coordinate axis is the outer axis, i.e. the shape
  is (2, nsamp). Otherwise, the coordinate axis is the inner axis, i.e. the
  shape is (nsamp, 2).

Returns: (sx, sy, cxy)

In both cases, the first slice along the coordinate axis gives the X data
(i.e., xy[0] or xy[:,0]) and the second slice gives the Y data (xy[1] or
xy[:,1]).
def bivrandom (x0, y0, sx, sy, cxy, size=None):
    from numpy.random import multivariate_normal as mvn
    p0 = np.asarray ([x0, y0])
    cov = np.asarray ([[sx**2, cxy],
                       [cxy, sy**2]])
    return mvn (p0, cov, size)
Compute random values distributed according to the specified bivariate
distribution.

Inputs:

* x0: the center of the x distribution (i.e. its intended mean)
* y0: the center of the y distribution
* sx: standard deviation (not variance) of x var
* sy: standard deviation (not variance) of y var
* cxy: covariance (not correlation coefficient) of x and y
* size (optional): the number of values to compute

Returns: array of shape (size, 2); or just (2,), if size was not specified.

The bivariate parameters of the generated data are approximately recoverable
by calling ``databiv(retval)``.
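A round-trip sketch (assuming ``numpy`` as ``np`` and the functions above in
scope)::

  pts = bivrandom(0., 0., 1.5, 1.0, 0.3, size=10000)
  sx, sy, cxy = databiv(pts)   # approximately (1.5, 1.0, 0.3)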
def bivconvolve (sx_a, sy_a, cxy_a, sx_b, sy_b, cxy_b):
    _bivcheck (sx_a, sy_a, cxy_a)
    _bivcheck (sx_b, sy_b, cxy_b)

    sx_c = np.sqrt (sx_a**2 + sx_b**2)
    sy_c = np.sqrt (sy_a**2 + sy_b**2)
    cxy_c = cxy_a + cxy_b

    return _bivcheck (sx_c, sy_c, cxy_c)
Given two independent bivariate distributions, compute a bivariate distribution corresponding to their convolution. I'm sure this is worked out in a ton of places, but I got the equations from Pineau+ (2011A&A...527A.126P). Returns: (sx_c, sy_c, cxy_c), the parameters of the convolved distribution.
def ellpoint (mjr, mnr, pa, th):
    _ellcheck (mjr, mnr, pa)
    from numpy import cos, sin

    ct, st = cos (th), sin (th)
    cp, sp = cos (pa), sin (pa)
    x = mjr * cp * ct - mnr * sp * st
    y = mjr * sp * ct + mnr * cp * st
    return x, y
Compute a point on an ellipse parametrically.

Inputs:

* mjr: major axis (sigma not FWHM) of the ellipse
* mnr: minor axis (sigma not FWHM) of the ellipse
* pa: position angle (from +x to +y) of the ellipse, radians
* th: the parameter, 0 <= th < 2pi: the eccentric anomaly

Returns: (x, y)

th may be a vector, in which case x and y will be as well.
def elld2 (x0, y0, mjr, mnr, pa, x, y):
    _ellcheck (mjr, mnr, pa)

    dx, dy = x - x0, y - y0
    c, s = np.cos (pa), np.sin (pa)
    a = c * dx + s * dy
    b = -s * dx + c * dy
    return (a / mjr)**2 + (b / mnr)**2
Given a 2D Gaussian expressed as an ellipse (major, minor, pa), compute a
"squared distance parameter" such that z = exp (-0.5 * d2).

Inputs:

* x0: position of Gaussian center on x axis
* y0: position of Gaussian center on y axis
* mjr: major axis (sigma not FWHM) of the Gaussian
* mnr: minor axis (sigma not FWHM) of the Gaussian
* pa: position angle (from +x to +y) of the Gaussian, radians
* x: x coordinates of the locations for which to evaluate d2
* y: y coordinates of the locations for which to evaluate d2

Returns: d2, the distance parameter defined as above.

x0, y0, mjr, and mnr may be in any units so long as they're consistent. x and
y may be arrays (of the same shape), in which case d2 will be an array as
well.
def ellbiv (mjr, mnr, pa):
    _ellcheck (mjr, mnr, pa)

    cpa, spa = np.cos (pa), np.sin (pa)
    q = np.asarray ([[cpa, -spa],
                     [spa, cpa]])
    cov = np.dot (q, np.dot (np.diag ([mjr**2, mnr**2]), q.T))
    sx = np.sqrt (cov[0,0])
    sy = np.sqrt (cov[1,1])
    cxy = cov[0,1]

    return _bivcheck (sx, sy, cxy)
Given a 2D Gaussian expressed as an ellipse (major, minor, pa), compute the
equivalent parameters for a Gaussian bivariate distribution. We assume that
the ellipse is normalized so that the functions evaluate identically for
major = minor.

Inputs:

* mjr: major axis (sigma not FWHM) of the Gaussian
* mnr: minor axis (sigma not FWHM) of the Gaussian
* pa: position angle (from +x to +y) of the Gaussian, radians

Returns:

* sx: standard deviation (not variance) of x var
* sy: standard deviation (not variance) of y var
* cxy: covariance (not correlation coefficient) of x and y
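Since ellbiv() and bivell() invert each other, a quick consistency sketch::

  sx, sy, cxy = ellbiv(2.0, 1.0, 0.3)
  mjr, mnr, pa = bivell(sx, sy, cxy)   # recovers approximately (2.0, 1.0, 0.3)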
def ellabc (mjr, mnr, pa):
    _ellcheck (mjr, mnr, pa)

    cpa, spa = np.cos (pa), np.sin (pa)
    mjrm2, mnrm2 = mjr**-2, mnr**-2

    a = -0.5 * (cpa**2 * mjrm2 + spa**2 * mnrm2)
    c = -0.5 * (spa**2 * mjrm2 + cpa**2 * mnrm2)
    b = cpa * spa * (mnrm2 - mjrm2)

    return _abccheck (a, b, c)
Given a 2D Gaussian expressed as an ellipse (major, minor, pa), compute the
nontrivial parameters for its evaluation.

* mjr: major axis (sigma not FWHM) of the Gaussian
* mnr: minor axis (sigma not FWHM) of the Gaussian
* pa: position angle (from +x to +y) of the Gaussian, radians

Returns: (a, b, c), where z = exp (ax² + bxy + cy²).
def double_ell_distance (mjr0, mnr0, pa0, mjr1, mnr1, pa1, dx, dy):
    # 1. We need to rotate the frame so that ellipse 1 lies on the X axis.
    theta = -np.arctan2 (dy, dx)

    # 2. We also need to express these rotated ellipses in "biv" format.
    sx0, sy0, cxy0 = ellbiv (mjr0, mnr0, pa0 + theta)
    sx1, sy1, cxy1 = ellbiv (mjr1, mnr1, pa1 + theta)

    # 3. Their convolution is:
    sx, sy, cxy = bivconvolve (sx0, sy0, cxy0, sx1, sy1, cxy1)

    # 4. The separation between the centers is still just:
    d = np.sqrt (dx**2 + dy**2)

    # 5. The effective sigma in the purely X direction, taking into account
    # the covariance term, is:
    sigma_eff = sx * np.sqrt (1 - (cxy / (sx * sy))**2)

    # 6. Therefore the answer is:
    return d / sigma_eff
Given two ellipses separated by *dx* and *dy*, compute their separation in terms of σ. Based on Pineau et al (2011A&A...527A.126P). The "0" ellipse is taken to be centered at (0, 0), while the "1" ellipse is centered at (dx, dy).
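A sanity-check sketch: for two circular Gaussians of σ = 1, the convolved
width is sqrt(2), so a separation of 3 should come out as 3/sqrt(2)::

  double_ell_distance(1., 1., 0., 1., 1., 0., 3., 0.)   # ≈ 2.12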
def ellplot (mjr, mnr, pa):
    _ellcheck (mjr, mnr, pa)
    import omega as om

    th = np.linspace (0, 2 * np.pi, 200)
    x, y = ellpoint (mjr, mnr, pa, th)
    return om.quickXY (x, y, 'mjr=%f mnr=%f pa=%f'
                       % (mjr, mnr, pa * 180 / np.pi))
Utility for debugging.
def abcell (a, b, c):
    from numpy import arctan2, sqrt

    bad = _abccheck (a, b, c)

    pa = 0.5 * arctan2 (b, a - c)

    t1 = np.sqrt ((a - c)**2 + b**2)
    t2 = -t1 - a - c
    bad |= (t2 <= 0)
    mjr = t2**-0.5

    t2 = t1 - a - c
    bad |= (t2 <= 0)
    mnr = t2**-0.5

    w = np.where (bad)
    mjr[w] = np.nan
    mnr[w] = np.nan
    pa[w] = np.nan

    return ellnorm (mjr, mnr, pa)
Given the nontrivial parameters for evaluating a 2D Gaussian as a polynomial,
compute the equivalent ellipse parameters (major, minor, pa).

Inputs: (a, b, c), where z = exp (ax² + bxy + cy²)

Returns:

* mjr: major axis (sigma not FWHM) of the Gaussian
* mnr: minor axis (sigma not FWHM) of the Gaussian
* pa: position angle (from +x to +y) of the Gaussian, radians
def abcd2 (x0, y0, a, b, c, x, y):
    _abccheck (a, b, c)
    dx, dy = x - x0, y - y0
    return -2 * (a * dx**2 + b * dx * dy + c * dy**2)
Given a 2D Gaussian expressed as the ABC polynomial coefficients, compute a
"squared distance parameter" such that z = exp (-0.5 * d2).

Inputs:

* x0: position of Gaussian center on x axis
* y0: position of Gaussian center on y axis
* a: such that z = exp (ax² + bxy + cy²)
* b: see above
* c: see above
* x: x coordinates of the locations for which to evaluate d2
* y: y coordinates of the locations for which to evaluate d2

Returns: d2, the distance parameter defined as above. This is pretty trivial.
def _compute_projection(self, X, W):
    # TODO: check W input; handle sparse case
    X = check_array(X)

    D = np.diag(W.sum(1))
    L = D - W
    evals, evecs = eigh_robust(np.dot(X.T, np.dot(L, X)),
                               np.dot(X.T, np.dot(D, X)),
                               eigvals=(0, self.n_components - 1))
    return evecs
Compute the LPP projection matrix.

Parameters
----------
X : array_like, (n_samples, n_features)
    The input data.
W : array_like or sparse matrix, (n_samples, n_samples)
    The precomputed adjacency matrix.

Returns
-------
P : ndarray, (n_features, self.n_components)
    The matrix encoding the locality preserving projection.
def find_common_dtype(*args):
    '''Returns common dtype of numpy and scipy objects.

    Recognizes ndarray, spmatrix and LinearOperator. All other objects are
    ignored (most notably None).'''
    dtypes = []
    for arg in args:
        if type(arg) is numpy.ndarray \
                or isspmatrix(arg) \
                or isinstance(arg, LinearOperator):
            if hasattr(arg, 'dtype'):
                dtypes.append(arg.dtype)
            else:
                warnings.warn('object %s does not have a dtype.' % repr(arg))
    return numpy.find_common_type(dtypes, [])
Returns common dtype of numpy and scipy objects. Recognizes ndarray, spmatrix and LinearOperator. All other objects are ignored (most notably None).
def shape_vecs(*args):
    '''Reshape all ndarrays with ``shape==(n,)`` to ``shape==(n,1)``.

    Recognizes ndarrays and ignores all others.'''
    ret_args = []
    flat_vecs = True
    for arg in args:
        if type(arg) is numpy.ndarray:
            if len(arg.shape) == 1:
                arg = shape_vec(arg)
            else:
                flat_vecs = False
        ret_args.append(arg)
    return flat_vecs, ret_args
Reshape all ndarrays with ``shape==(n,)`` to ``shape==(n,1)``. Recognizes ndarrays and ignores all others.
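For example::

  import numpy

  flat_vecs, (x, A) = shape_vecs(numpy.ones(5), numpy.ones((5, 2)))
  # x.shape == (5, 1); A is returned unchanged; flat_vecs is False
  # because a genuinely 2D array was among the arguments.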
def norm_squared(x, Mx=None, inner_product=ip_euclid):
    '''Compute the norm^2 w.r.t. a given scalar product.'''
    assert(len(x.shape) == 2)
    if Mx is None:
        rho = inner_product(x, x)
    else:
        assert(len(Mx.shape) == 2)
        rho = inner_product(x, Mx)

    if rho.shape == (1, 1):
        if abs(rho[0, 0].imag) > abs(rho[0, 0])*1e-10 \
                or rho[0, 0].real < 0.0:
            raise InnerProductError(('<x,Mx> = %g. Is the inner product '
                                     'indefinite?') % rho[0, 0])

    return numpy.linalg.norm(rho, 2)
Compute the norm^2 w.r.t. a given scalar product.
def get_linearoperator(shape, A, timer=None):
    ret = None
    import scipy.sparse.linalg as scipylinalg

    if isinstance(A, LinearOperator):
        ret = A
    elif A is None:
        ret = IdentityLinearOperator(shape)
    elif isinstance(A, numpy.ndarray) or isspmatrix(A):
        ret = MatrixLinearOperator(A)
    elif isinstance(A, numpy.matrix):
        ret = MatrixLinearOperator(numpy.atleast_2d(numpy.asarray(A)))
    elif isinstance(A, scipylinalg.LinearOperator):
        if not hasattr(A, 'dtype'):
            raise ArgumentError('scipy LinearOperator has no dtype.')
        ret = LinearOperator(A.shape, dot=A.matvec, dot_adj=A.rmatvec,
                             dtype=A.dtype)
    else:
        raise TypeError('type not understood')

    # set up timer if requested
    if A is not None and not isinstance(A, IdentityLinearOperator) \
            and timer is not None:
        ret = TimedLinearOperator(ret, timer)

    # check shape
    if shape != ret.shape:
        raise LinearOperatorError('shape mismatch')

    return ret
Enhances aslinearoperator if A is None.
def orthonormality(V, ip_B=None):
    return norm(numpy.eye(V.shape[1]) - inner(V, V, ip_B=ip_B))
Measure orthonormality of given basis. :param V: a matrix :math:`V=[v_1,\ldots,v_n]` with ``shape==(N,n)``. :param ip_B: (optional) the inner product to use, see :py:meth:`inner`. :return: :math:`\\| I_n - \\langle V,V \\rangle \\|_2`.
def arnoldi_res(A, V, H, ip_B=None):
    N = V.shape[0]
    invariant = H.shape[0] == H.shape[1]
    A = get_linearoperator((N, N), A)
    if invariant:
        res = A*V - numpy.dot(V, H)
    else:
        res = A*V[:, :-1] - numpy.dot(V, H)
    return norm(res, ip_B=ip_B)
Measure Arnoldi residual. :param A: a linear operator that can be used with scipy's aslinearoperator with ``shape==(N,N)``. :param V: Arnoldi basis matrix with ``shape==(N,n)``. :param H: Hessenberg matrix: either :math:`\\underline{H}_{n-1}` with ``shape==(n,n-1)`` or :math:`H_n` with ``shape==(n,n)`` (if the Arnoldi basis spans an A-invariant subspace). :param ip_B: (optional) the inner product to use, see :py:meth:`inner`. :returns: either :math:`\\|AV_{n-1} - V_n \\underline{H}_{n-1}\\|` or :math:`\\|A V_n - V_n H_n\\|` (in the invariant case).
def qr(X, ip_B=None, reorthos=1):
    if ip_B is None and X.shape[1] > 0:
        return scipy.linalg.qr(X, mode='economic')
    else:
        (N, k) = X.shape
        Q = X.copy()
        R = numpy.zeros((k, k), dtype=X.dtype)
        for i in range(k):
            for reortho in range(reorthos+1):
                for j in range(i):
                    alpha = inner(Q[:, [j]], Q[:, [i]], ip_B=ip_B)[0, 0]
                    R[j, i] += alpha
                    Q[:, [i]] -= alpha * Q[:, [j]]
            R[i, i] = norm(Q[:, [i]], ip_B=ip_B)
            if R[i, i] >= 1e-15:
                Q[:, [i]] /= R[i, i]
        return Q, R
QR factorization with customizable inner product.

:param X: array with ``shape==(N,k)``
:param ip_B: (optional) inner product, see :py:meth:`inner`.
:param reorthos: (optional) number of reorthogonalizations. Defaults to 1
  (i.e. 2 runs of modified Gram-Schmidt) which should be enough in most cases
  (TODO: add reference).

:return: Q, R where :math:`X=QR` with :math:`\\langle Q,Q \\rangle=I_k` and
  R upper triangular.
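A sketch with a weighted inner product (this assumes ``ip_B`` may be a
callable, per the :py:meth:`inner` convention; the weight matrix here is
made up)::

  import numpy

  N = 10
  X = numpy.random.rand(N, 3)
  M = numpy.diag(numpy.linspace(1., 2., N))   # SPD weighting matrix

  def ip_M(u, v):
      # <u, v>_M = u^* M v
      return numpy.dot(u.T.conj(), numpy.dot(M, v))

  Q, R = qr(X, ip_B=ip_M)
  # now ip_M(Q, Q) ≈ I_3 and X ≈ Q.dot(R)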
def strakos(n, l_min=0.1, l_max=100, rho=0.9):
    d = [l_min + (i-1)*1./(n-1)*(l_max-l_min)*(rho**(n-i))
         for i in range(1, n+1)]
    return numpy.diag(d)
Return the Strakoš matrix. See [Str92]_.
def bound_perturbed_gmres(pseudo, p, epsilon, deltas):
    '''Compute GMRES perturbation bound based on pseudospectrum

    Computes the GMRES bound from [SifEM13]_.
    '''
    if not numpy.all(numpy.array(deltas) > epsilon):
        raise ArgumentError('all deltas have to be greater than epsilon')

    bound = []
    for delta in deltas:
        # get boundary paths
        paths = pseudo.contour_paths(delta)

        # get vertices on boundary
        vertices = paths.vertices()

        # evaluate polynomial
        supremum = numpy.max(numpy.abs(p(vertices)))

        # compute bound
        bound.append(epsilon/(delta-epsilon)
                     * paths.length()/(2*numpy.pi*delta)
                     * supremum)
    return bound
Compute GMRES perturbation bound based on pseudospectrum Computes the GMRES bound from [SifEM13]_.
def apply(self, x):
    # make sure that x is a (N,*) matrix
    if len(x.shape) != 2:
        raise ArgumentError('x is not a matrix of shape (N,*)')
    if self.beta == 0:
        return x
    return x - self.beta * self.v * numpy.dot(self.v.T.conj(), x)
Apply Householder transformation to vector x. Applies the Householder transformation efficiently to the given vector.
def matrix(self):
    n = self.v.shape[0]
    return numpy.eye(n, n) - self.beta * numpy.dot(self.v, self.v.T.conj())
Build matrix representation of Householder transformation. Builds the matrix representation :math:`H = I - \\beta vv^*`. **Use with care!** This routine may be helpful for testing purposes but should not be used in production codes for high dimensions since the resulting matrix is dense.
def _apply_adj(self, a):
    '''Single application of the adjoint projection.'''
    # is projection the zero operator?
    if self.V.shape[1] == 0:
        return numpy.zeros(a.shape)

    c = inner(self.V, a, ip_B=self.ip_B)

    if self.Q is not None and self.R is not None:
        c = self.Q.dot(scipy.linalg.solve_triangular(self.R.T.conj(), c,
                                                     lower=True))
    return self.W.dot(c)
Single application of the adjoint projection.
def apply(self, a, return_Ya=False):
    # is projection the zero operator?
    if self.V.shape[1] == 0:
        Pa = numpy.zeros(a.shape)
        if return_Ya:
            return Pa, numpy.zeros((0, a.shape[1]))
        return Pa

    if return_Ya:
        x, Ya = self._apply(a, return_Ya=return_Ya)
    else:
        x = self._apply(a)

    for i in range(self.iterations-1):
        z = a - x
        w = self._apply(z)
        x = x + w

    if return_Ya:
        return x, Ya
    return x
r"""Apply the projection to an array. The computation is carried out without explicitly forming the matrix corresponding to the projection (which would be an array with ``shape==(N,N)``). See also :py:meth:`_apply`.
def apply_complement(self, a, return_Ya=False):
    # is projection the zero operator? --> complement is identity
    if self.V.shape[1] == 0:
        if return_Ya:
            return a.copy(), numpy.zeros((0, a.shape[1]))
        return a.copy()

    if return_Ya:
        x, Ya = self._apply(a, return_Ya=True)
    else:
        x = self._apply(a)

    z = a - x

    for i in range(self.iterations-1):
        w = self._apply(z)
        z = z - w

    if return_Ya:
        return z, Ya
    return z
Apply the complementary projection to an array. :param z: array with ``shape==(N,m)``. :return: :math:`P_{\\mathcal{Y}^\\perp,\\mathcal{X}}z = z - P_{\\mathcal{X},\\mathcal{Y}^\\perp} z`.
def operator(self):
    # is projection the zero operator?
    if self.V.shape[1] == 0:
        N = self.V.shape[0]
        return ZeroLinearOperator((N, N))
    return self._get_operator(self.apply, self.apply_adj)
Get a ``LinearOperator`` corresponding to apply(). :return: a LinearOperator that calls apply().
def operator_complement(self):
    # is projection the zero operator? --> complement is identity
    if self.V.shape[1] == 0:
        N = self.V.shape[0]
        return IdentityLinearOperator((N, N))
    return self._get_operator(self.apply_complement,
                              self.apply_complement_adj)
Get a ``LinearOperator`` corresponding to apply_complement(). :return: a LinearOperator that calls apply_complement().
def get(self, key):
    '''Return timings for `key`. Returns 0 if not present.'''
    if key in self and len(self[key]) > 0:
        return min(self[key])
    else:
        return 0
Return timings for `key`. Returns 0 if not present.
def get_ops(self, ops):
    '''Return timings for dictionary ops holding the operation names as keys
    and the number of applications as values.'''
    time = 0.
    for op, count in ops.items():
        time += self.get(op) * count
    return time
Return timings for dictionary ops holding the operation names as keys and the number of applications as values.
def distance(self, other):
    '''Returns the distance to other (0 if intersection is nonempty).'''
    if self & other:
        return 0
    return numpy.max([other.left - self.right, self.left - other.right])
Returns the distance to other (0 if intersection is nonempty).
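For example, assuming ``Interval(left, right)`` objects with this method, two disjoint intervals are separated by the gap between their closest endpoints::

    a = Interval(-2, -1)
    b = Interval(1, 3)
    a.distance(b)  # max(1 - (-1), -2 - 3) = 2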
def min_pos(self):
    '''Returns minimal positive value or None.'''
    if self.__len__() == 0:
        raise ArgumentError('empty set has no minimum positive value.')
    if self.contains(0):
        return None
    positive = [interval for interval in self.intervals if interval.left > 0]
    if len(positive) == 0:
        return None
    return numpy.min([interval.left for interval in positive])
Returns minimal positive value or None.
def max_neg(self):
    '''Returns maximum negative value or None.'''
    if self.__len__() == 0:
        raise ArgumentError('empty set has no maximum negative value.')
    if self.contains(0):
        return None
    negative = [interval for interval in self.intervals if interval.right < 0]
    if len(negative) == 0:
        return None
    return numpy.max([interval.right for interval in negative])
Returns maximum negative value or None.
def min_abs(self):
    '''Returns minimum absolute value.'''
    if self.__len__() == 0:
        raise ArgumentError('empty set has no minimum absolute value.')
    if self.contains(0):
        return 0
    return numpy.min([numpy.abs(val) for val in
                      [self.max_neg(), self.min_pos()] if val is not None])
Returns minimum absolute value.
def max_abs(self):
    '''Returns maximum absolute value.'''
    if self.__len__() == 0:
        raise ArgumentError('empty set has no maximum absolute value.')
    return numpy.max(numpy.abs([self.max(), self.min()]))
Returns maximum absolute value.
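Taken together, these queries behave as in the following sketch (``Intervals`` constructor assumed to take a list of ``Interval`` objects)::

    s = Intervals([Interval(-4, -1), Interval(2, 5)])
    s.min_pos()  # 2
    s.max_neg()  # -1
    s.min_abs()  # 1 = min(|-1|, |2|)
    s.max_abs()  # 5 = max(|-4|, |5|)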
def get_step(self, tol):
    '''Return step at which bound falls below tolerance.'''
    return 2 * numpy.log(tol / 2.) / numpy.log(self.base)
Return step at which bound falls below tolerance.
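The formula inverts a bound of the form :math:`2\\,\\mathrm{base}^{\\mathrm{step}/2} \\le \\mathrm{tol}` for the step; a quick numeric check under assumed values::

    import numpy

    base, tol = 0.5, 1e-6
    step = 2 * numpy.log(tol / 2.) / numpy.log(base)
    numpy.isclose(2 * base**(step / 2), tol)  # -> True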
def minmax_candidates(self):
    '''Get points where derivative is zero.

    Useful for computing the extrema of the polynomial over an interval if
    the polynomial has real roots. In this case, the maximum is attained
    for one of the interval endpoints or a point from the result of this
    function that is contained in the interval.
    '''
    from numpy.polynomial import Polynomial as P
    p = P.fromroots(self.roots)
    return p.deriv(1).roots()
Get points where derivative is zero. Useful for computing the extrema of the polynomial over an interval if the polynomial has real roots. In this case, the maximum is attained for one of the interval endpoints or a point from the result of this function that is contained in the interval.
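For instance, assuming the object stores its real ``roots`` as described::

    from numpy.polynomial import Polynomial as P

    p = P.fromroots([0., 1., 3.])
    candidates = p.deriv(1).roots()
    # max of |p| over [0, 3] is attained at an endpoint or at a
    # candidate contained in the interval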
def config():
    alarm_day = alarm_time = alarm_attempts = song = []
    with open(alarm_config, "r") as conf:
        for line in conf:
            line = line.lstrip()
            if line.startswith("DAY"):
                alarm_day = line[4:].split()
            if line.startswith("ALARM_TIME"):
                alarm_time = line[11:].split()
            if line.startswith("ATTEMPTS"):
                alarm_attempts = line[9:].split()
            if line.startswith("SONG"):
                song = line[5:].split()
    if alarm_day == ["today"]:
        alarm_day = time.strftime("%d").split()
    alarm_args = alarm_day + alarm_time + alarm_attempts + song
    if len(alarm_args) == 4:
        return alarm_args
    print("Error: config file: missing argument")
    sys.exit()
Read the config file from the $HOME directory, e.g. /home/user/.alarm/config
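A hypothetical config matching the parser above (one keyword per line; values illustrative; ``today`` in ``DAY`` is replaced by the current day of the month)::

    DAY today
    ALARM_TIME 07:30
    ATTEMPTS 5
    SONG /home/user/.alarm/song.mp3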
def errors(self):
    try:
        self.now = datetime.datetime.now()
        if len(self.alarm_day) != 2:
            print("error: day: usage 'DD' such as '0%s' not '%s'" % (
                self.alarm_day, self.alarm_day))
            self.RUN_ALARM = False
        if int(self.alarm_day) > calendar.monthrange(
                self.now.year, self.now.month)[1] or int(self.alarm_day) < 1:
            print("error: day: out of range")
            self.RUN_ALARM = False
        # compare alarm time with alarm pattern
        if (len(self.alarm_time) != len(self.alarm_pattern) or
                len(self.alarm_time[0]) != 2 or
                len(self.alarm_time[1]) != 2):
            print("error: time: usage '%s'" % ":".join(self.alarm_pattern))
            self.RUN_ALARM = False
        # check whether alarm hour and minutes are within range
        if int(self.alarm_hour) not in range(0, 24):
            print("error: hour: out of range")
            self.RUN_ALARM = False
        if int(self.alarm_minutes) not in range(0, 60):
            print("error: minutes: out of range")
            self.RUN_ALARM = False
    except ValueError:
        print("Usage '%s'" % ":".join(self.alarm_pattern))
        self.RUN_ALARM = False
    if not os.path.isfile(self.song):
        print("error: song: file does not exist")
        self.RUN_ALARM = False
Check for usage errors
def position(self, x, y, text):
    # save cursor, move to row x / column y, write text, restore cursor
    sys.stdout.write("\x1b7\x1b[%d;%df%s\x1b8" % (x, y, text))
    sys.stdout.flush()
Position the cursor and write ``text`` using ANSI escape sequences: ``\x1b7`` saves the cursor position, ``\x1b[<row>;<col>f`` moves the cursor, and ``\x1b8`` restores it. See http://ascii-table.com/ansi-escape-sequences.php
def set_default_command(self, command):
    cmd_name = command.name
    self.add_command(command)
    self.default_cmd_name = cmd_name
Sets a command function as the default command.
def get_ip_Minv_B(self):
    '''Returns the inner product that is implicitly used with the positive
    definite preconditioner ``M``.'''
    if not isinstance(self.M, utils.IdentityLinearOperator):
        if isinstance(self.Minv, utils.IdentityLinearOperator):
            raise utils.ArgumentError(
                'Minv has to be provided for the evaluation of the inner '
                'product that is implicitly defined by M.')
        if isinstance(self.ip_B, utils.LinearOperator):
            return self.Minv * self.ip_B
        return lambda x, y: self.ip_B(x, self.Minv * y)
    return self.ip_B
Returns the inner product that is implicitly used with the positive definite preconditioner ``M``.
def _get_xk(self, yk):
    '''Compute approximate solution from initial guess and approximate
    solution of the preconditioned linear system.'''
    if yk is not None:
        return self.x0 + self.linear_system.Mr * yk
    return self.x0
Compute approximate solution from initial guess and approximate solution of the preconditioned linear system.
def operations(nsteps):
    '''Returns the number of operations needed for nsteps of GMRES'''
    return {'A': 1 + nsteps,
            'M': 2 + nsteps,
            'Ml': 2 + nsteps,
            'Mr': 1 + nsteps,
            'ip_B': 2 + nsteps + nsteps*(nsteps+1)/2,
            'axpy': 4 + 2*nsteps + nsteps*(nsteps+1)/2
            }
Returns the number of operations needed for nsteps of GMRES
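Combined with ``get_ops`` from earlier in this section, the counts translate into a runtime estimate (a sketch; ``timings`` an assumed ``Timings`` instance)::

    ops = operations(nsteps=20)      # operation counts for 20 GMRES steps
    estimate = timings.get_ops(ops)  # sum over ops of count * fastest time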
def compute_hash(func, string):
    # note: under Python 3, `string` must be a bytes-like object
    h = func()
    h.update(string)
    return h.hexdigest()
Compute the hash of a string using the given hash function (e.g. a constructor from ``hashlib``).
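For example, with the standard library's ``hashlib`` (note the bytes argument)::

    import hashlib

    compute_hash(hashlib.sha256, b"hello world")
    # 'b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9'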
def get_local_serial():
    '''
    Retrieves the serial number from the executing host.
    For example, 'C02NT43PFY14'
    '''
    serial = subprocess.Popen(
        "system_profiler SPHardwareDataType |grep -v tray "
        "|awk '/Serial/ {print $4}'",
        shell=True, stdout=subprocess.PIPE).communicate()[0].strip()
    return [x for x in [serial] if x]
Retrieves the serial number from the executing host. For example, 'C02NT43PFY14'