def pivot_wavelength_ee(bpass): from scipy.integrate import simps return np.sqrt(simps(bpass.resp, bpass.wlen) / simps(bpass.resp / bpass.wlen**2, bpass.wlen))
Compute pivot wavelength assuming equal-energy convention. `bpass` should have two properties, `resp` and `wlen`. The units of `wlen` can be anything, and `resp` need not be normalized in any particular way.
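For reference, the quantity computed here is the standard equal-energy pivot wavelength,

    \lambda_\mathrm{pivot} = \sqrt{ \int R(\lambda)\,d\lambda \Big/ \int R(\lambda)\,\lambda^{-2}\,d\lambda },

which the code evaluates with Simpson's rule over the tabulated response; the result comes out in whatever units `wlen` was supplied in.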
def interpolated_halfmax_points(x, y): from scipy.interpolate import interp1d from scipy.optimize import fmin x = np.asarray(x) y = np.asarray(y) halfmax = 0.5 * y.max() # Guess from the actual samples. delta = y - halfmax guess1 = 0 while delta[guess1] < 0: guess1 += 1 guess2 = y.size - 1 while delta[guess2] < 0: guess2 -= 1 # Interpolate for fanciness. terp = interp1d(x, y, kind='linear', bounds_error=False, fill_value=0.) x1 = fmin(lambda x: (terp(x) - halfmax)**2, x[guess1], disp=False) x2 = fmin(lambda x: (terp(x) - halfmax)**2, x[guess2], disp=False) x1 = np.asscalar(x1) x2 = np.asscalar(x2) if x1 == x2: raise PKError('halfmax finding failed') if x1 > x2: x1, x2 = x2, x1 return x1, x2
Given a curve y(x), find the x coordinates of points that have half the value of max(y), using linear interpolation. We're assuming that y(x) has a bandpass-ish shape, i.e., a single maximum and a drop to zero as we go to the edges of the function's domain. We also assume that x is sorted increasingly.
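A minimal usage sketch, assuming the function above is in scope (the wavelength grid and Gaussian profile below are made up purely for illustration)::

    import numpy as np

    x = np.linspace(4000., 6000., 201)             # hypothetical wavelength grid
    y = np.exp(-0.5 * ((x - 5000.) / 150.) ** 2)   # single-peaked, bandpass-ish curve
    lo, hi = interpolated_halfmax_points(x, y)
    # For this Gaussian, lo and hi should land near 5000 -/+ 176,
    # i.e. half of the FWHM = 2.3548 * sigma on either side of the peak.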
def get_std_registry(): from six import itervalues reg = Registry() for fn in itervalues(builtin_registrars): fn(reg) return reg
Get a Registry object pre-filled with information for standard telescopes.
def pivot_wavelength(self): wl = self.registry._pivot_wavelengths.get((self.telescope, self.band)) if wl is not None: return wl wl = self.calc_pivot_wavelength() self.registry.register_pivot_wavelength(self.telescope, self.band, wl) return wl
Get the bandpass' pivot wavelength. Unlike calc_pivot_wavelength(), this function will use a cached value if available.
def calc_halfmax_points(self): d = self._ensure_data() return interpolated_halfmax_points(d.wlen, d.resp)
Calculate the wavelengths of the filter half-maximum values.
def halfmax_points(self): t = self.registry._halfmaxes.get((self.telescope, self.band)) if t is not None: return t t = self.calc_halfmax_points() self.registry.register_halfmaxes(self.telescope, self.band, t[0], t[1]) return t
Get the bandpass' half-maximum wavelengths. These can be used to compute a representative bandwidth, or for display purposes. Unlike calc_halfmax_points(), this function will use a cached value if available.
def mag_to_fnu(self, mag): if self.native_flux_kind == 'flam': return flam_ang_to_fnu_cgs(self.mag_to_flam(mag), self.pivot_wavelength()) raise PKError('don\'t know how to get f_ν from mag for bandpass %s/%s', self.telescope, self.band)
Convert a magnitude in this band to a f_ν flux density. It is assumed that the magnitude has been computed in the appropriate photometric system. The definition of "appropriate" will vary from case to case.
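Presumably `flam_ang_to_fnu_cgs` applies the usual conversion between the two flux-density conventions, evaluated at the pivot wavelength,

    f_\nu = f_\lambda \, \lambda_\mathrm{pivot}^2 / c ,

with the appropriate powers of 10^8 to translate between per-Angstrom and per-centimeter units.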
def synphot(self, wlen, flam): from scipy.interpolate import interp1d from scipy.integrate import romberg d = self._ensure_data() mflam = interp1d(wlen, flam, kind='linear', bounds_error=False, fill_value=0) mresp = interp1d(d.wlen, d.resp, kind='linear', bounds_error=False, fill_value=0) bmin = d.wlen.min() bmax = d.wlen.max() numer = romberg(lambda x: mresp(x) * mflam(x), bmin, bmax, divmax=20) denom = romberg(lambda x: mresp(x), bmin, bmax, divmax=20) return numer / denom
`wlen` and `flam` give a tabulated model spectrum in wavelength and f_λ units. We interpolate linearly over both the model and the bandpass since they're both discretely sampled. Note that quadratic interpolation is not only much slower but can also blow up fatally in some cases; the latter issue might have to do with really large X values that aren't zero-centered. I used to use the quadrature integrator, but Romberg doesn't issue complaints the way quadrature did. I should probably acquire some idea about what's going on under the hood.
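In other words, the returned value is the response-weighted average of the model spectrum,

    \langle f_\lambda \rangle = \int R(\lambda)\, f_\lambda(\lambda)\,d\lambda \Big/ \int R(\lambda)\,d\lambda ,

with both integrals taken over the wavelength span of the tabulated bandpass.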
def blackbody(self, T): from scipy.integrate import simps d = self._ensure_data() # factor of pi is going from specific intensity (sr^-1) to unidirectional # inner factor of 1e-8 is Å to cm # outer factor of 1e-8 is f_λ in cm^-1 to f_λ in Å^-1 from .cgs import blambda numer_samples = d.resp * np.pi * blambda(d.wlen * 1e-8, T) * 1e-8 numer = simps(numer_samples, d.wlen) denom = simps(d.resp, d.wlen) return numer / denom
Calculate the contribution of a blackbody through this filter. *T* is the blackbody temperature in Kelvin. Returns a band-averaged spectrum in f_λ units. We use the composite Simpson's rule to integrate over the points at which the filter response is sampled. Note that this is a different technique than used by `synphot`, and so may give slightly different answers than that function.
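Written out, the band average evaluated here is

    \langle f_\lambda \rangle = \int R(\lambda)\, \pi B_\lambda(\lambda, T)\,d\lambda \Big/ \int R(\lambda)\,d\lambda ,

where the factor of π converts specific intensity to a unidirectional flux and the factors of 10^-8 in the code handle the Angstrom/centimeter bookkeeping.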
def bands(self, telescope): q = self._seen_bands.get(telescope) if q is None: return [] return list(q)
Return a list of bands associated with the specified telescope.
def register_pivot_wavelength(self, telescope, band, wlen): if (telescope, band) in self._pivot_wavelengths: raise AlreadyDefinedError('pivot wavelength for %s/%s already ' 'defined', telescope, band) self._note(telescope, band) self._pivot_wavelengths[telescope,band] = wlen return self
Register precomputed pivot wavelengths.
def register_halfmaxes(self, telescope, band, lower, upper): if (telescope, band) in self._halfmaxes: raise AlreadyDefinedError('half-max points for %s/%s already ' 'defined', telescope, band) self._note(telescope, band) self._halfmaxes[telescope,band] = (lower, upper) return self
Register precomputed half-max points.
def register_bpass(self, telescope, klass): if telescope in self._bpass_classes: raise AlreadyDefinedError('bandpass class for %s already ' 'defined', telescope) self._note(telescope, None) self._bpass_classes[telescope] = klass return self
Register a Bandpass class.
def get(self, telescope, band): klass = self._bpass_classes.get(telescope) if klass is None: raise NotDefinedError('bandpass data for %s not defined', telescope) bp = klass() bp.registry = self bp.telescope = telescope bp.band = band return bp
Get a Bandpass object for a known telescope and filter.
def _load_data(self, band): # `band` should be 'nuv' or 'fuv' df = bandpass_data_frame('filter_galex_' + band + '.dat', 'wlen resp') df.resp *= df.wlen # QE -> EE response convention. return df
From Morrissey+ 2005, with the actual data coming from http://www.astro.caltech.edu/~capak/filters/. According to the latter, these are in QE units and thus need to be multiplied by the wavelength when integrating per-energy.
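The multiplication applied in `_load_data` reflects the fact that a quantum-efficiency (photon-counting) response weights by photon number, whereas an equal-energy response weights by energy, and a photon of wavelength λ carries energy hc/λ; hence, up to an overall normalization that cancels in band averages,

    R_\mathrm{EE}(\lambda) \propto \lambda \, R_\mathrm{QE}(\lambda) .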
def _load_data(self, band): # `band` should be 'Lp'. df = bandpass_data_frame('filter_mko_' + band + '.dat', 'wlen resp') # Put in increasing wavelength order: df = df[::-1] df.index = np.arange(df.shape[0]) df.wlen *= 1e4 # micron to Angstrom df.resp *= df.wlen # QE to equal-energy response. return df
Filter responses for MKO NIR filters as specified in Tokunaga+ 2002 (see also Tokunaga+ 2005). I downloaded the L' profile from http://irtfweb.ifa.hawaii.edu/~nsfcam/hist/filters.2006.html. Pivot wavelengths from Tokunaga+ 2005 (Table 2) confirm that the profile is in QE convention, although my calculation of the pivot wavelength for L' is actually closer if I assume otherwise. M' and K_s are substantially better in QE convention, though, and based on the paper and nomenclature it seems more appropriate.
def mag_to_fnu(self, mag): return cgs.cgsperjy * self._zeropoints[self.band] * 10**(-0.4 * mag)
Compute F_ν for an MKO IR filter band. This is somewhat ill-defined, since "MKO" specifies a set of filters rather than a photometric system, but the usual convention is to set Vega = 0.
def _load_data(self, band): h = bandpass_data_fits('sdss3_filter_responses.fits') section = 'ugriz'.index(band[0]) + 1 d = h[section].data if d.wavelength.dtype.isnative: df = pd.DataFrame({'wlen': d.wavelength, 'resp': d.respt}) else: df = pd.DataFrame({'wlen': d.wavelength.byteswap(True).newbyteorder(), 'resp': d.respt.byteswap(True).newbyteorder()}) df.resp *= df.wlen # QE to equal-energy response. return df
Filter responses for SDSS. Data table from https://www.sdss3.org/binaries/filter_curves.fits, as linked from https://www.sdss3.org/instruments/camera.php#Filters. SHA1 hash of the file is d3f638c41e21489ba7d6dbe7bb8217d938f21184. "Determined by Jim Gunn in June 2001." Doi+ 2010 have updated estimates but these are per-column in the SDSS camera, which we don't care about. Note that these are for the main SDSS 2.5m telescope. Magnitudes in the primed SDSS system were determined on the "photometric telescope", and the whole reason for the existence of both primed and unprimed ugriz systems is that the two have filters with slightly different behavior. My current application involves an entirely different telescope emulating the primed SDSS photometric system, and their precise system response is neither going to be ultra-precisely characterized nor exactly equal to either of the SDSS systems. These responses will be good enough, though. Wavelengths are in Angstrom. Based on the pivot wavelengths listed in http://www.astro.ljmu.ac.uk/~ikb/research/mags-fluxes/, the data table stores QE responses, so we have to convert them to equal-energy responses. Responses both including and excluding the atmosphere are included; I use the former.
def mag_to_fnu(self, mag): # self.band should be 'up', 'gp', 'rp', 'ip', or 'zp'. if len(self.band) != 2 or self.band[1] != 'p': raise ValueError('band: ' + self.band) return abmag_to_fnu_cgs(mag)
SDSS *primed* magnitudes to F_ν. The primed magnitudes are the "USNO" standard-star system defined in Smith+ (2002AJ....123.2121S) and Fukugita+ (1996AJ....111.1748F). This system is anchored to the AB magnitude system, and as far as I can tell it is not known to have measurable offsets from that system. (As of DR10, the *unprimed* SDSS system is known to have small offsets from AB, but I do not believe that that necessarily has implications for u'g'r'i'z'.) However, as far as I can tell the filter responses of the USNO telescope are not published -- only those of the main SDSS 2.5m telescope. The whole reason for the existence of both the primed and unprimed ugriz systems is that their responses do not quite match. For my current application, which involves a completely different telescope anyway, the difference shouldn't matter.
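For reference, the AB system ties magnitudes directly to f_ν, so `abmag_to_fnu_cgs` presumably evaluates the standard relation

    f_\nu = 3631\,\mathrm{Jy} \times 10^{-0.4\, m_\mathrm{AB}} = 10^{-0.4 (m_\mathrm{AB} + 48.6)}\ \mathrm{erg\,s^{-1}\,cm^{-2}\,Hz^{-1}} .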
def _load_data(self, band): d = bandpass_data_fits('sw' + self._band_map[band] + '_20041120v106.arf')[1].data # note: # data.WAVE_MIN[i] < data.WAVE_MIN[i+1], but # data.WAVE_MIN[i] > data.WAVE_MAX[i] (!) # data.WAVE_MIN[i] = data.WAVE_MAX[i+1] (!) wmid = 0.5 * (d.WAVE_MIN + d.WAVE_MAX) # in Ångström df = pd.DataFrame({'wlen': wmid, 'resp': d.SPECRESP, 'wlo': d.WAVE_MAX, 'whi': d.WAVE_MIN}) return df
In-flight effective areas for the Swift UVOT, as obtained from the CALDB. See Breeveld+ 2011. XXX: confirm that these are equal-energy, not quantum-efficiency.
def _load_data(self, band): # `band` should be 1, 2, 3, or 4. df = bandpass_data_frame('filter_wise_' + str(band) + '.dat', 'wlen resp uncert') df.wlen *= 1e4 # micron to Angstrom df.uncert *= df.resp / 1000. # parts per thou. to absolute values. lo, hi = self._filter_subsets[band] df = df[lo:hi] # clip zero parts of response. return df
From the WISE All-Sky Explanatory Supplement, IV.4.h.i.1, and Jarrett+ 2011. These are relative response per erg and so can be integrated directly against F_nu spectra. Wavelengths are in micron, uncertainties are in parts per thousand.
def invoke (self, args, **kwargs): if len (args) not in (3, 6): raise multitool.UsageError ('c2m expected exactly 3 or 6 arguments') year = int (args[0]) month = int (args[1]) import astropy.time if len (args) == 3: day = float (args[2]) iday = int (math.floor (day)) r = 24 * (day - iday) hour = int (np.floor (r)) r = 60 * (r - hour) minute = int (np.floor (r)) second = 60 * (r - minute) else: iday = int (args[2]) hour = int (args[3]) minute = int (args[4]) second = float (args[5]) s = '%d-%02d-%02d %02d:%02d:%02.8f' % (year, month, iday, hour, minute, second) t = astropy.time.Time (s, format='iso', scale='utc') print ('%.4f' % t.tai.mjd)
c2m - UTC calendar to MJD[TAI]
def clean_comment_body(body): body = _parser.unescape(body) body = re.sub(r'<a [^>]+>(.+?)</a>', r'\1', body) body = body.replace('<br>', '\n') body = re.sub(r'<.+?>', '', body) return body
Returns the given comment HTML as plaintext. Converts all HTML tags and entities within 4chan comments into human-readable text equivalents.
def _create_wcs (fitsheader): wcsmodule = _load_wcs_module () is_pywcs = hasattr (wcsmodule, 'UnitConverter') wcs = wcsmodule.WCS (fitsheader) wcs.wcs.set () wcs.wcs.fix () # I'm interested in MJD computation via datfix() if hasattr (wcs, 'wcs_pix2sky'): wcs.wcs_pix2world = wcs.wcs_pix2sky wcs.wcs_world2pix = wcs.wcs_sky2pix return wcs
For compatibility between astropy and pywcs.
def sanitize_unicode(item): if isinstance(item, text_type): return item.encode('utf8') if isinstance(item, dict): return dict((sanitize_unicode(k), sanitize_unicode(v)) for k, v in six.iteritems(item)) if isinstance(item,(list, tuple)): return item.__class__(sanitize_unicode(x) for x in item) from ...io import Path if isinstance(item, Path): return str(item) return item
Safely pass string values to the CASA tools. item A value to be passed to a CASA tool. In Python 2, the bindings to CASA tasks expect to receive all string values as binary data (:class:`str`) and not Unicode. But :mod:`pwkit` often uses the ``from __future__ import unicode_literals`` statement to prepare for Python 3 compatibility, and other Python modules are getting better about using Unicode consistently, so more and more module code ends up using Unicode strings in cases where they might get exposed to CASA. Doing so will lead to errors. This helper function converts Unicode into UTF-8 encoded bytes for arguments that you might pass to a CASA tool. It will leave non-strings unchanged and recursively transform collections, so you can safely use it just about anywhere. I usually import this as just ``b`` and write ``tool.method(b(arg))``, in analogy with the ``b''`` byte string syntax. This leads to code such as:: from pwkit.environments.casa.util import tools, sanitize_unicode as b tb = tools.table() path = u'data.ms' tb.open(path) # => raises exception tb.open(b(path)) # => works
def datadir(*subdirs): import os.path data = None if 'CASAPATH' in os.environ: data = os.path.join(os.environ['CASAPATH'].split()[0], 'data') if data is None: # The Conda CASA directory layout: try: import casadef except ImportError: pass else: data = os.path.join(os.path.dirname(casadef.task_directory), 'data') if not os.path.isdir(data): # Sigh, hack for CASA 4.7 + Conda; should be straightened out: dn = os.path.dirname data = os.path.join(dn(dn(dn(casadef.task_directory))), 'lib', 'casa', 'data') if not os.path.isdir(data): data = None if data is None: import casac prevp = None p = os.path.dirname(casac.__file__) while len(p) and p != prevp: data = os.path.join(p, 'data') if os.path.isdir(data): break prevp = p p = os.path.dirname(p) if not os.path.isdir(data): raise RuntimeError('cannot identify CASA data directory') return os.path.join(data, *subdirs)
Get a path within the CASA data directory. subdirs Extra elements to append to the returned path. This function locates the directory where CASA resource data files (tables of time offsets, calibrator models, etc.) are stored. If called with no arguments, it simply returns that path. If arguments are provided, they are appended to the returned path using :func:`os.path.join`, making it easy to construct the names of specific data files. For instance:: from pwkit.environments.casa import util cal_image_path = util.datadir('nrao', 'VLA', 'CalModels', '3C286_C.im') tb = util.tools.image() tb.open(cal_image_path)
def logger(filter='WARN'): import os, shutil, tempfile cwd = os.getcwd() tempdir = None try: tempdir = tempfile.mkdtemp(prefix='casautil') try: os.chdir(tempdir) sink = tools.logsink() sink.setlogfile(sanitize_unicode(os.devnull)) try: os.unlink('casapy.log') except OSError as e: if e.errno != 2: raise # otherwise, it's a ENOENT, in which case, no worries. finally: os.chdir(cwd) finally: if tempdir is not None: shutil.rmtree(tempdir, onerror=_rmtree_error) sink.showconsole(True) sink.setglobal(True) sink.filter(sanitize_unicode(filter.upper())) return sink
Set up CASA to write log messages to standard output. filter The log level filter: less urgent messages will not be shown. Valid values are strings: "DEBUG1", "INFO5", ... "INFO1", "INFO", "WARN", "SEVERE". This function creates and returns a CASA “log sink” object that is configured to write to standard output. The default CASA implementation would *always* create a file named ``casapy.log`` in the current directory; this function safely prevents such a file from being left around. This is particularly important if you don’t have write permissions to the current directory.
def _get_extended(scene, resp): root = ElementTree.fromstring(resp.text) items = root.findall("eemetadata:metadataFields/eemetadata:metadataField", NAMESPACES) scene['extended'] = {item.attrib.get('name').strip(): xsi.get(item[0]) for item in items} return scene
Parse metadata returned from the metadataUrl of a USGS scene. :param scene: Dictionary representation of a USGS scene :param resp: Response object from requests/grequests
def _async_requests(urls): session = FuturesSession(max_workers=30) futures = [ session.get(url) for url in urls ] return [ future.result() for future in futures ]
Sends multiple non-blocking requests. Returns a list of responses. :param urls: List of urls
def metadata(dataset, node, entityids, extended=False, api_key=None): api_key = _get_api_key(api_key) url = '{}/metadata'.format(USGS_API) payload = { "jsonRequest": payloads.metadata(dataset, node, entityids, api_key=api_key) } r = requests.post(url, payload) response = r.json() _check_for_usgs_error(response) if extended: metadata_urls = list(map(_get_metadata_url, response['data'])) results = _async_requests(metadata_urls) for idx in range(len(response['data'])): _get_extended(response['data'][idx], results[idx]) return response
Request metadata for a given scene in a USGS dataset. :param dataset: :param node: :param entityids: :param extended: Send a second request to the metadata url to get extended metadata on the scene. :param api_key:
def reraise_context(fmt, *args): import sys if len(args): cstr = fmt % args else: cstr = text_type(fmt) ex = sys.exc_info()[1] if isinstance(ex, EnvironmentError): ex.strerror = '%s: %s' % (cstr, ex.strerror) ex.args = (ex.errno, ex.strerror) else: if len(ex.args): cstr = '%s: %s' % (cstr, ex.args[0]) ex.args = (cstr, ) + ex.args[1:] raise
Reraise an exception with its message modified to specify additional context. This function tries to help provide context when a piece of code encounters an exception while trying to get something done, and it wishes to propagate contextual information farther up the call stack. It only makes sense in Python 2, which does not provide Python 3’s `exception chaining <https://www.python.org/dev/peps/pep-3134/>`_ functionality. Instead of that more sophisticated infrastructure, this function just modifies the textual message associated with the exception being raised. If only a single argument is supplied, the exception text is prepended with the stringification of that argument. If multiple arguments are supplied, the first argument is treated as an old-fashioned ``printf``-type (``%``-based) format string, and the remaining arguments are the formatted values. Example usage:: from pwkit import reraise_context from pwkit.io import Path filename = 'my-filename.txt' try: f = Path(filename).open('rt') for line in f.readlines(): # do stuff ... except Exception as e: reraise_context('while reading "%r"', filename) # The exception is reraised and so control leaves this function. If an exception with text ``"bad value"`` were to be raised inside the ``try`` block in the above example, its text would be modified to read ``"while reading \"my-filename.txt\": bad value"``.
def copy(self): new = self.__class__() new.__dict__ = dict(self.__dict__) return new
Return a shallow copy of this object.
def to_pretty(self, format='str'): if format == 'str': template = '%-*s = %s' elif format == 'repr': template = '%-*s = %r' else: raise ValueError('unrecognized value for "format": %r' % format) d = self.__dict__ maxlen = 0 for k in six.iterkeys(d): maxlen = max(maxlen, len(k)) return '\n'.join(template % (maxlen, k, d[k]) for k in sorted(six.iterkeys(d)))
Return a string with a prettified version of this object’s contents. The format is a multiline string where each line is of the form ``key = value``. If the *format* argument is equal to ``"str"``, each ``value`` is the stringification of the value; if it is ``"repr"``, it is its :func:`repr`. Calling :func:`str` on a :class:`Holder` returns a slightly different pretty stringification that uses a textual representation similar to a Python :class:`dict` literal.
def in_casapy (helper, asdm=None, ms=None, tbuff=None): if asdm is None: raise ValueError ('asdm') if ms is None: raise ValueError ('ms') if tbuff is None: raise ValueError ('tbuff') helper.casans.importevla (asdm=asdm, vis=ms, ocorr_mode='co', online=True, tbuff=tbuff, flagpol=False, tolerance=1.3, applyflags=True, flagbackup=False)
This function is run inside the weirdo casapy IPython environment! A strange set of modules is available, and the `pwkit.environments.casa.scripting` system sets up a very particular environment to allow encapsulated scripting.
def write_stream (stream, holders, defaultsection=None, extrapos=(), sha1sum=False, **kwargs): if sha1sum: import hashlib sha1 = hashlib.sha1 () else: sha1 = None inifile.write_stream (stream, _format_many (holders, defaultsection, extrapos, sha1), defaultsection=defaultsection, **kwargs) if sha1sum: return sha1.digest ()
`extrapos` is basically a hack for multi-step processing. We have some flux measurements that are computed from luminosities and distances. The flux value is therefore an unwrapped Uval, which doesn't retain memory of any positivity constraint it may have had. Therefore, if we write out such a value using this routine, we may get something like `fx:u = 1pm1`, and the next time it's read in we'll get negative fluxes. Fields listed in `extrapos` will have a "P" constraint added if they are imprecise and their typetag is just "f" or "u".
def _process_hdu (self, hdu): "We've hacked the load order a bit to get t0 and mjd0 in _process_main()." if hdu.name == 'EVENTS': pass else: super (Events, self)._process_hdu (hdu)
We've hacked the load order a bit to get t0 and mjd0 in _process_main().
def output(self, kind, line): "*line* should be bytes" self.destination.write(b''.join([ self._cyan, b't=%07d' % (time.time() - self._t0), self._reset, self._kind_prefixes[kind], self.markers[kind], line, self._reset, ])) self.destination.flush()
*line* should be bytes
def output_stderr(self, text): "*text* should be bytes" binary_stderr.write(b''.join([ self._red, b't=%07d' % (time.time() - self._t0), self._reset, b' ', text, ])) binary_stderr.flush()
*text* should be bytes
def get_boards(board_name_list, *args, **kwargs): if isinstance(board_name_list, basestring): board_name_list = board_name_list.split() return [Board(name, *args, **kwargs) for name in board_name_list]
Given a list of boards, return :class:`basc_py4chan.Board` objects. Args: board_name_list (list): List of board names to get, eg: ['b', 'tg'] Returns: list of :class:`basc_py4chan.Board`: Requested boards.
def get_all_boards(*args, **kwargs): # Use https based on how the Board class instances are to be instantiated https = kwargs.get('https', args[1] if len(args) > 1 else False) # Dummy URL generator, only used to generate the board list which doesn't # require a valid board name url_generator = Url(None, https) _fetch_boards_metadata(url_generator) return get_boards(_metadata.keys(), *args, **kwargs)
Returns every board on 4chan. Returns: list of :class:`basc_py4chan.Board`: All boards.
def get_thread(self, thread_id, update_if_cached=True, raise_404=False): # see if already cached cached_thread = self._thread_cache.get(thread_id) if cached_thread: if update_if_cached: cached_thread.update() return cached_thread res = self._requests_session.get( self._url.thread_api_url( thread_id = thread_id ) ) # check if thread exists if raise_404: res.raise_for_status() elif not res.ok: return None thread = Thread._from_request(self, res, thread_id) self._thread_cache[thread_id] = thread return thread
Get a thread from 4chan via 4chan API. Args: thread_id (int): Thread ID update_if_cached (bool): Whether the thread should be updated if it's already in our cache raise_404 (bool): Raise an Exception if thread has 404'd Returns: :class:`basc_py4chan.Thread`: Thread object
def thread_exists(self, thread_id): return self._requests_session.head( self._url.thread_api_url( thread_id=thread_id ) ).ok
Check if a thread exists or has 404'd. Args: thread_id (int): Thread ID Returns: bool: Whether the given thread exists on this board.
def get_threads(self, page=1): url = self._url.page_url(page) return self._request_threads(url)
Returns all threads on a certain page. Gets a list of Thread objects for every thread on the given page. If a thread is already in our cache, the cached version is returned and thread.want_update is set to True on the specific thread object. Pages on 4chan are indexed from 1 onwards. Args: page (int): Page to request threads for. Defaults to the first page. Returns: list of :mod:`basc_py4chan.Thread`: List of Thread objects representing the threads on the given page.
def get_all_thread_ids(self): json = self._get_json(self._url.thread_list()) return [thread['no'] for page in json for thread in page['threads']]
Return the ID of every thread on this board. Returns: list of ints: List of IDs of every thread on this board.
def get_all_threads(self, expand=False): if not expand: return self._request_threads(self._url.catalog()) thread_ids = self.get_all_thread_ids() threads = [self.get_thread(id, raise_404=False) for id in thread_ids] return filter(None, threads)
Return every thread on this board. If not expanded, result is same as get_threads run across all board pages, with last 3-5 replies included. Uses the catalog when not expanding, and uses the flat thread ID listing at /{board}/threads.json when expanding for more efficient resource usage. If expanded, all data of all threads is returned with no omitted posts. Args: expand (bool): Whether to download every single post of every thread. If enabled, this option can be very slow and bandwidth-intensive. Returns: list of :mod:`basc_py4chan.Thread`: List of Thread objects representing every thread on this board.
def refresh_cache(self, if_want_update=False): for thread in tuple(self._thread_cache.values()): if if_want_update: if not thread.want_update: continue thread.update()
Update all threads currently stored in our cache. If *if_want_update* is true, only threads whose *want_update* flag is set are refreshed.
def in_casapy(helper, ms=None, plotdest=None): import numpy as np, os if ms is None: raise ValueError('ms') if plotdest is None: raise ValueError('plotdest') opac = helper.casans.plotweather(vis=ms, plotName=plotdest) opac = np.asarray(opac) with open(helper.temppath('opac.npy'), 'wb') as f: np.save(f, opac)
This function is run inside the weirdo casapy IPython environment! A strange set of modules is available, and the `pwkit.environments.casa.scripting` system sets up a very particular environment to allow encapsulated scripting.
def get_node(dataset, node): if node is None: cur_dir = os.path.dirname(os.path.realpath(__file__)) data_dir = os.path.join(cur_dir, "..", "data") dataset_path = os.path.join(data_dir, "datasets.json") with open(dataset_path, "r") as f: datasets = json.loads(f.read()) node = datasets[dataset].upper() return node
Look up the default node for a USGS dataset in the bundled ``datasets.json`` file; if *node* is already given, it is returned unchanged. .. todo:: Move to more appropriate place in module.
def count_events (env, evtpath, filter): with env.slurp (argv=['dmstat', '%s%s[cols energy]' % (evtpath, filter)], linebreak=True) as s: for etype, payload in s: if etype != 'stdout': continue if b'good:' not in payload: continue return int (payload.split ()[-1]) raise Exception ('parsing of dmstat output failed')
TODO: this can probably be replaced with simply reading the file ourselves!
def prepend_path(orig, text, pathsep=os.pathsep): if orig is None: orig = '' if not len(orig): return text return ''.join([text, pathsep, orig])
Returns a $PATH-like environment variable with `text` prepended. `orig` is the original variable value, or None. `pathsep` is the character separating path elements, defaulting to `os.pathsep`. Example: newpath = cli.prepend_path(oldpath, '/mypackage/bin') See also `prepend_environ_path`.
def prepend_environ_path(env, name, text, pathsep=os.pathsep): env[name] = prepend_path(env.get(name), text, pathsep=pathsep) return env
Prepend `text` into a $PATH-like environment variable. `env` is a dictionary of environment variables and `name` is the variable name. `pathsep` is the character separating path elements, defaulting to `os.pathsep`. The variable will be created if it is not already in `env`. Returns `env`. Example:: prepend_environ_path(env, 'PATH', '/mypackage/bin') The `name` and `text` arguments should be `str` objects; that is, bytes in Python 2 and Unicode in Python 3. Literal strings will be OK unless you use the ``from __future__ import unicode_literals`` feature.
def make_figure9_plot(shlib_path, use_lowlevel=True, **kwargs): import omega as om if use_lowlevel: out_vals = do_figure9_calc_lowlevel(shlib_path, **kwargs) else: out_vals = do_figure9_calc_highlevel(shlib_path, **kwargs) freqs = out_vals[:,OUT_VAL_FREQ] tot_ints = out_vals[:,OUT_VAL_OINT] + out_vals[:,OUT_VAL_XINT] pos = (tot_ints > 0) p = om.quickXY(freqs[pos], tot_ints[pos], 'Calculation', xlog=1, ylog=1) nu_obs = np.array([1.0, 2.0, 3.75, 9.4, 17.0, 34.0]) int_obs = np.array([12.0, 43.0, 29.0, 6.3, 1.7, 0.5]) p.addXY(nu_obs, int_obs, 'Observations', lines=False) p.defaultKeyOverlay.hAlign = 0.93 p.setBounds(0.5, 47, 0.1, 60) p.setLabels('Emission frequency, GHz', 'Total intensity, sfu') return p
Reproduce Figure 9 of the Fleischman & Kuznetsov (2010) paper, using our low-level interfaces. Uses OmegaPlot, of course. Input parameters, etc., come from the file ``Flare071231a.pro`` that is distributed with the paper’s Supplementary Data archive. Invoke with something like:: from pwkit import fk10 fk10.make_figure9_plot('path/to/libGS_Std_HomSrc_CEH.so.64').show()
def new_for_fk10_fig9(cls, shlib_path): inst = (cls(shlib_path) .set_thermal_background(2.1e7, 3e9) .set_bfield(48) .set_edist_powerlaw(0.016, 4.0, 3.7, 5e9/3) .set_freqs(100, 0.5, 50) .set_hybrid_parameters(12, 12) .set_ignore_q_terms(False) .set_obs_angle(50 * np.pi / 180) .set_padist_gaussian_loss_cone(0.5 * np.pi, 0.4) .set_trapezoidal_integration(15)) # haven't yet figured out how to deal with this part: inst.in_vals[0] = 1.33e18 inst.in_vals[1] = 6e8 return inst
Create a calculator initialized to reproduce Figure 9 from FK10. This is mostly to provide a handy way to create a new :class:`Calculator` instance that is initialized with reasonable values for all of its parameters.
def set_bfield(self, B_G): if not (B_G > 0): raise ValueError('must have B_G > 0; got %r' % (B_G,)) self.in_vals[IN_VAL_B] = B_G return self
Set the strength of the local magnetic field. **Call signature** *B_G* The magnetic field strength, in Gauss Returns *self* for convenience in chaining.
def set_bfield_for_s0(self, s0): if not (s0 > 0): raise ValueError('must have s0 > 0; got %r' % (s0,)) B0 = 2 * np.pi * cgs.me * cgs.c * self.in_vals[IN_VAL_FREQ0] / (cgs.e * s0) self.in_vals[IN_VAL_B] = B0 return self
Set B to probe a certain harmonic number. **Call signature** *s0* The harmonic number to probe at the lowest frequency Returns *self* for convenience in chaining. This just proceeds from the relation ``nu = s nu_c = s e B / 2 pi m_e c``. Since *s* and *nu* scale with each other, if multiple frequencies are being probed, the harmonic numbers being probed will scale in the same way.
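Concretely, solving the cited relation for the field strength gives the expression used in the code:

    \nu_0 = s_0 \nu_c = \frac{s_0 e B_0}{2\pi m_e c} \quad\Longrightarrow\quad B_0 = \frac{2\pi m_e c\, \nu_0}{e\, s_0} ,

where ν_0 is the lowest frequency currently configured (``IN_VAL_FREQ0``).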
def set_edist_powerlaw(self, emin_mev, emax_mev, delta, ne_cc): if not (emin_mev >= 0): raise ValueError('must have emin_mev >= 0; got %r' % (emin_mev,)) if not (emax_mev >= emin_mev): raise ValueError('must have emax_mev >= emin_mev; got %r, %r' % (emax_mev, emin_mev)) if not (delta >= 0): raise ValueError('must have delta >= 0; got %r' % (delta,)) if not (ne_cc >= 0): raise ValueError('must have ne_cc >= 0; got %r' % (ne_cc,)) self.in_vals[IN_VAL_EDIST] = EDIST_PLW self.in_vals[IN_VAL_EMIN] = emin_mev self.in_vals[IN_VAL_EMAX] = emax_mev self.in_vals[IN_VAL_DELTA1] = delta self.in_vals[IN_VAL_NB] = ne_cc return self
Set the energy distribution function to a power law. **Call signature** *emin_mev* The minimum energy of the distribution, in MeV *emax_mev* The maximum energy of the distribution, in MeV *delta* The power-law index of the distribution *ne_cc* The number density of energetic electrons, in cm^-3. Returns *self* for convenience in chaining.
def set_edist_powerlaw_gamma(self, gmin, gmax, delta, ne_cc): if not (gmin >= 1): raise ValueError('must have gmin >= 1; got %r' % (gmin,)) if not (gmax >= gmin): raise ValueError('must have gmax >= gmin; got %r, %r' % (gmax, gmin)) if not (delta >= 0): raise ValueError('must have delta >= 0; got %r' % (delta,)) if not (ne_cc >= 0): raise ValueError('must have ne_cc >= 0; got %r' % (ne_cc,)) self.in_vals[IN_VAL_EDIST] = EDIST_PLG self.in_vals[IN_VAL_EMIN] = (gmin - 1) * E0_MEV self.in_vals[IN_VAL_EMAX] = (gmax - 1) * E0_MEV self.in_vals[IN_VAL_DELTA1] = delta self.in_vals[IN_VAL_NB] = ne_cc return self
Set the energy distribution function to a power law in the Lorentz factor **Call signature** *gmin* The minimum Lorentz factor of the distribution *gmax* The maximum Lorentz factor of the distribution *delta* The power-law index of the distribution *ne_cc* The number density of energetic electrons, in cm^-3. Returns *self* for convenience in chaining.
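The kinetic-energy bounds handed to the code follow from the relativistic relation

    E = (\gamma - 1)\, m_e c^2 ,

with m_e c^2 ≈ 0.511 MeV (the ``E0_MEV`` constant), applied to *gmin* and *gmax*.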
def set_freqs(self, n, f_lo_ghz, f_hi_ghz): if not (f_lo_ghz >= 0): raise ValueError('must have f_lo_ghz >= 0; got %r' % (f_lo_ghz,)) if not (f_hi_ghz >= f_lo_ghz): raise ValueError('must have f_hi_ghz >= f_lo_ghz; got %r, %r' % (f_hi_ghz, f_lo_ghz)) if not n >= 1: raise ValueError('must have n >= 1; got %r' % (n,)) self.in_vals[IN_VAL_NFREQ] = n self.in_vals[IN_VAL_FREQ0] = f_lo_ghz * 1e9 # GHz => Hz self.in_vals[IN_VAL_LOGDFREQ] = np.log10(f_hi_ghz / f_lo_ghz) / n return self
Set the frequency grid on which to perform the calculations. **Call signature** *n* The number of frequency points to sample. *f_lo_ghz* The lowest frequency to sample, in GHz. *f_hi_ghz* The highest frequency to sample, in GHz. Returns *self* for convenience in chaining.
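From the parameters stored here, the sampled frequencies presumably form a logarithmic grid; assuming the underlying code evaluates indices i = 0 … n−1, this corresponds to

    f_i = f_\mathrm{lo} \times 10^{i\Delta}, \qquad \Delta = \log_{10}(f_\mathrm{hi}/f_\mathrm{lo}) / n ,

so the highest sampled frequency falls just short of *f_hi_ghz*.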
def set_obs_angle(self, theta_rad): self.in_vals[IN_VAL_THETA] = theta_rad * 180 / np.pi # rad => deg return self
Set the observer angle relative to the field. **Call signature** *theta_rad* The angle between the ray path and the local magnetic field, in radians. Returns *self* for convenience in chaining.
def set_one_freq(self, f_ghz): if not (f_ghz >= 0): raise ValueError('must have f_ghz >= 0; got %r' % (f_ghz,)) self.in_vals[IN_VAL_NFREQ] = 1 self.in_vals[IN_VAL_FREQ0] = f_ghz * 1e9 # GHz -> Hz self.in_vals[IN_VAL_LOGDFREQ] = 1.0 return self
Set the code to calculate results at just one frequency. **Call signature** *f_ghz* The frequency to sample, in GHz. Returns *self* for convenience in chaining.
def set_padist_gaussian_loss_cone(self, boundary_rad, expwidth): self.in_vals[IN_VAL_PADIST] = PADIST_GLC self.in_vals[IN_VAL_LCBDY] = boundary_rad * 180 / np.pi # rad => deg self.in_vals[IN_VAL_DELTAMU] = expwidth return self
Set the pitch-angle distribution to a Gaussian loss cone. **Call signature** *boundary_rad* The angle inside which there are no losses, in radians. *expwidth* The characteristic width of the Gaussian loss profile *in direction-cosine units*. Returns *self* for convenience in chaining. See ``OnlineI.pdf`` in the Supplementary Data for a precise definition. (And note the distinction between α_c and μ_c since not everything is direction cosines.)
def set_thermal_background(self, T_K, nth_cc): if not (T_K >= 0): raise ValueError('must have T_K >= 0; got %r' % (T_K,)) if not (nth_cc >= 0): raise ValueError('must have nth_cc >= 0; got %r' % (nth_cc,)) self.in_vals[IN_VAL_T0] = T_K self.in_vals[IN_VAL_N0] = nth_cc return self
Set the properties of the background thermal plasma. **Call signature** *T_K* The temperature of the background plasma, in Kelvin. *nth_cc* The number density of thermal electrons, in cm^-3. Returns *self* for convenience in chaining. Note that the parameters set here are the same as the ones that describe the thermal electron distribution, if you choose one of the electron energy distributions that explicitly models a thermal component ("thm", "tnt", "tnp", "tng", "kappa" in the code's terminology). For the power-law-y electron distributions, these parameters are used to calculate dispersion parameters (e.g. refractive indices) and a free-free contribution, but their synchrotron contribution is ignored.
def set_trapezoidal_integration(self, n): if not (n >= 2): raise ValueError('must have n >= 2; got %r' % (n,)) self.in_vals[IN_VAL_INTEG_METH] = n + 1 return self
Set the code to use trapezoidal integration. **Call signature** *n* Use this many nodes Returns *self* for convenience in chaining.
def find_rt_coefficients_tot_intens(self, depth0=None): j_O, alpha_O, j_X, alpha_X = self.find_rt_coefficients(depth0=depth0) j_I = j_O + j_X alpha_I = 0.5 * (alpha_O + alpha_X) # uhh... right? return (j_I, alpha_I)
Figure out total-intensity emission and absorption coefficients for the current parameters. **Argument** *depth0* (default None) A first guess to use for a good integration depth, in cm. If None, the most recent value is used. **Return value** A tuple ``(j_I, alpha_I)``, where: *j_I* The total intensity emission coefficient, in erg/s/cm^3/Hz/sr. *alpha_I* The total intensity absorption coefficient, in cm^-1. See :meth:`find_rt_coefficients` for an explanation how this routine works. This version merely postprocesses the results from that method to convert the coefficients to refer to total intensity.
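As a reminder of how such coefficients are typically used (this is textbook radiative transfer, not a statement about the internals of :meth:`find_rt_coefficients`), a homogeneous source of depth *d* along the line of sight emerges with

    I_\nu = \frac{j_I}{\alpha_I} \left(1 - e^{-\alpha_I d}\right), \qquad \tau = \alpha_I d .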
def try_open (*args, **kwargs): try: return io.open (*args, **kwargs) except IOError as e: if e.errno == 2: return None raise
Simply a wrapper for io.open(), unless an IOError with errno=2 (ENOENT, i.e. the file does not exist) is raised, in which case None is returned.
def make_path_func (*baseparts): from os.path import join base = join (*baseparts) def path_func (*args): return join (base, *args) return path_func
Return a function that joins paths onto some base directory.
def djoin (*args): from os.path import join i = 0 alen = len (args) while i < alen and (args[i] == '' or args[i] == '.'): i += 1 if i == alen: return '.' return join (*args[i:])
'Dotless' join, for nicer paths.
def rellink (source, dest): from os.path import isabs, dirname, relpath, abspath if isabs (source): os.symlink (source, dest) elif isabs (dest): os.symlink (abspath (source), dest) else: os.symlink (relpath (source, dirname (dest)), dest)
Create a symbolic link to path *source* from path *dest*. If either *source* or *dest* is an absolute path, the link from *dest* will point to the absolute path of *source*. Otherwise, the link to *source* from *dest* will be a relative link.
def ensure_dir (path, parents=False): if parents: from os.path import dirname parent = dirname (path) if len (parent) and parent != path: ensure_dir (parent, True) try: os.mkdir (path) except OSError as e: if e.errno == 17: # EEXIST return True raise return False
Returns a boolean indicating whether the directory already existed. Will attempt to create parent directories if *parents* is True.
def ensure_symlink (src, dst): try: os.symlink (src, dst) except OSError as e: if e.errno == 17: # EEXIST return True raise return False
Ensure the existence of a symbolic link pointing to src named dst. Returns a boolean indicating whether the symlink already existed.
def expand (self, user=False, vars=False, glob=False, resolve=False): from os import path from glob import glob as do_glob text = text_type (self) if user: text = path.expanduser (text) if vars: text = path.expandvars (text) if glob: results = do_glob (text) if len (results) == 1: text = results[0] elif len (results) > 1: raise IOError ('glob of %r should\'ve returned 0 or 1 matches; got %d' % (text, len (results))) other = self.__class__ (text) if resolve: other = other.resolve () return other
Return a new :class:`Path` with various expansions performed. All expansions are disabled by default but can be enabled by passing in true values in the keyword arguments. user : bool (default False) Expand ``~`` and ``~user`` home-directory constructs. If a username is unmatched or ``$HOME`` is unset, no change is made. Calls :func:`os.path.expanduser`. vars : bool (default False) Expand ``$var`` and ``${var}`` environment variable constructs. Unknown variables are not substituted. Calls :func:`os.path.expandvars`. glob : bool (default False) Evaluate the path as a :mod:`glob` expression and use the matched path. If the glob does not match anything, do not change anything. If the glob matches more than one path, raise an :exc:`IOError`. resolve : bool (default False) Call :meth:`resolve` on the return value before returning it.
def format (self, *args, **kwargs): return self.__class__ (str (self).format (*args, **kwargs))
Return a new path formed by calling :meth:`str.format` on the textualization of this path.
def get_parent (self, mode='naive'): if mode == 'textual': return self.parent if mode == 'resolved': return self.resolve ().parent if mode == 'naive': from os.path import pardir if not len (self.parts): return self.__class__ (pardir) if all (p == pardir for p in self.parts): return self / pardir return self.parent raise ValueError ('unhandled get_parent() mode %r' % (mode, ))
Get the path of this path’s parent directory. Unlike the :attr:`parent` attribute, this function can correctly ascend into parent directories if *self* is ``"."`` or a sequence of ``".."``. The precise way in which it handles these kinds of paths, however, depends on the *mode* parameter: ``"textual"`` Return the same thing as the :attr:`parent` attribute. ``"resolved"`` As *textual*, but on the :meth:`resolve`-d version of the path. This will always return the physical parent directory in the filesystem. The path pointed to by *self* must exist for this call to succeed. ``"naive"`` As *textual*, but the parent of ``"."`` is ``".."``, and the parent of a sequence of ``".."`` is the same sequence with another ``".."``. Note that this manipulation is still strictly textual, so results when called on paths like ``"foo/../bar/../other"`` will likely not be what you want. Furthermore, ``p.get_parent(mode="naive")`` never yields a path equal to ``p``, so some kinds of loops will execute infinitely.
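An illustrative sketch of the three modes, using hypothetical paths and assuming the :class:`Path` class above::

    p = Path('.')
    p.get_parent(mode='textual')             # same as p.parent, i.e. Path('.')
    p.get_parent(mode='naive')               # Path('..')
    Path('../..').get_parent(mode='naive')   # Path('../../..')
    Path('a/b').get_parent(mode='naive')     # Path('a'), same as the parent attribute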
def make_relative (self, other): if self.is_absolute (): return self from os.path import relpath other = self.__class__ (other) return self.__class__ (relpath (text_type (self), text_type (other)))
Return a new path that is the equivalent of this one relative to the path *other*. Unlike :meth:`relative_to`, this will not throw an error if *self* is not a sub-path of *other*; instead, it will use ``..`` to build a relative path. This can result in invalid relative paths if *other* contains a directory symbolic link. If *self* is an absolute path, it is returned unmodified.
def scandir (self): if hasattr (os, 'scandir'): scandir = os.scandir else: from scandir import scandir return scandir (text_type (self))
Iteratively scan this path, assuming it’s a directory. This uses :func:`os.scandir` when it is available and otherwise requires and uses the :mod:`scandir` module. `scandir` is different than `iterdir` because it generates `DirEntry` items rather than Path instances. DirEntry objects have their properties filled from the directory info itself, so querying them avoids syscalls that would be necessary with iterdir(). The generated values are :class:`scandir.DirEntry` objects which have some information pre-filled. These objects have methods ``inode()``, ``is_dir()``, ``is_file()``, ``is_symlink()``, and ``stat()``. They have attributes ``name`` (the basename of the entry) and ``path`` (its full path).
def copy_to (self, dest, preserve='mode'): # shutil.copyfile() doesn't let the destination be a directory, so we # have to manage that possibility ourselves. import shutil dest = Path (dest) if dest.is_dir (): dest = dest / self.name if preserve == 'none': shutil.copyfile (str(self), str(dest)) elif preserve == 'mode': shutil.copy (str(self), str(dest)) elif preserve == 'all': shutil.copy2 (str(self), str(dest)) else: raise ValueError ('unrecognized "preserve" value %r' % (preserve,)) return dest
Copy this path — as a file — to another *dest*. The *preserve* argument specifies which meta-properties of the file should be preserved: ``none`` Only copy the file data. ``mode`` Copy the data and the file mode (permissions, etc). ``all`` Preserve as much as possible: mode, modification times, etc. The destination *dest* may be a directory. Returns the final destination path.
def ensure_dir (self, mode=0o777, parents=False): if parents: p = self.parent if p == self: return False # can never create root; avoids loop when parents=True p.ensure_dir (mode, True) made_it = False try: self.mkdir (mode) made_it = True except OSError as e: if e.errno == 17: # EEXIST? return False # that's fine raise # other exceptions are not fine if not self.is_dir (): import errno raise OSError (errno.ENOTDIR, 'Not a directory', str(self)) return made_it
Ensure that this path exists as a directory. This function calls :meth:`mkdir` on this path, but does not raise an exception if it already exists. It does raise an exception if this path exists but is not a directory. If the directory is created, *mode* is used to set the permissions of the resulting directory, with the important caveat that the current :func:`os.umask` is applied. It returns a boolean indicating if the directory was actually created. If *parents* is true, parent directories will be created in the same manner.
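A minimal usage sketch with hypothetical directory names::

    p = Path('reduction/run1/plots')
    created = p.ensure_dir(parents=True)   # creates 'reduction' and 'run1' too if needed; True if 'plots' was made
    p.ensure_dir()                         # False on a second call: the directory already exists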
def ensure_parent (self, mode=0o777, parents=False): return self.parent.ensure_dir (mode, parents)
Ensure that this path's *parent* directory exists. Returns a boolean whether the parent directory was created. Will attempt to create superior parent directories if *parents* is true.
def rellink_to (self, target, force=False): target = self.__class__ (target) if force: self.try_unlink () if self.is_absolute (): target = target.absolute () # force absolute link return self.symlink_to (target.make_relative (self.parent))
Make this path a symlink pointing to the given *target*, generating the proper relative path using :meth:`make_relative`. This gives different behavior than :meth:`symlink_to`. For instance, ``Path ('a/b').symlink_to ('c')`` results in ``a/b`` pointing to the path ``c``, whereas :meth:`rellink_to` results in it pointing to ``../c``. This can result in broken relative paths if (continuing the example) ``a`` is a symbolic link to a directory. If either *target* or *self* is absolute, the symlink will point at the absolute path to *target*. The intention is that if you’re trying to link ``/foo/bar`` to ``bee/boo``, it probably makes more sense for the link to point to ``/path/to/.../bee/boo`` rather than ``../../../../bee/boo``. If *force* is true, :meth:`try_unlink` will be called on *self* before the link is made, forcing its re-creation.
def rmtree (self, errors='warn'): import shutil if errors == 'ignore': ignore_errors = True onerror = None elif errors == 'warn': ignore_errors = False from .cli import warn def onerror (func, path, exc_info): warn ('couldn\'t rmtree %s: in %s of %s: %s', self, func.__name__, path, exc_info[1]) else: raise ValueError ('unexpected "errors" keyword %r' % (errors,)) shutil.rmtree (text_type (self), ignore_errors=ignore_errors, onerror=onerror) return self
Recursively delete this directory and its contents. The *errors* keyword specifies how errors are handled: "warn" (the default) Print a warning to standard error. "ignore" Ignore errors.
def try_unlink (self): try: self.unlink () return True except OSError as e: if e.errno == 2: return False # ENOENT raise
Try to unlink this path. If it doesn't exist, no error is returned. Returns a boolean indicating whether the path was really unlinked.
def try_open (self, null_if_noexist=False, **kwargs): try: return self.open (**kwargs) except IOError as e: if e.errno == 2: if null_if_noexist: import io, os return io.open (os.devnull, **kwargs) return None raise
Call :meth:`Path.open` on this path (passing *kwargs*) and return the result. If the file doesn't exist, the behavior depends on *null_if_noexist*. If it is false (the default), ``None`` is returned. Otherwise, :data:`os.devnull` is opened and returned.
def as_hdf_store (self, mode='r', **kwargs): from pandas import HDFStore return HDFStore (text_type (self), mode=mode, **kwargs)
Return the path as an opened :class:`pandas.HDFStore` object. Note that the :class:`HDFStore` constructor unconditionally prints messages to standard output when opening and closing files, so use of this function will pollute your program’s standard output. The *kwargs* are forwarded to the :class:`HDFStore` constructor.
def read_astropy_ascii (self, **kwargs): from astropy.io import ascii return ascii.read (text_type (self), **kwargs)
Open as an ASCII table, returning a :class:`astropy.table.Table` object. Keyword arguments are passed to :func:`astropy.io.ascii.read`; valid ones likely include: - ``names = <list>`` (column names) - ``format`` ('basic', 'cds', 'csv', 'ipac', ...) - ``guess = True`` (guess table format) - ``delimiter`` (column delimiter) - ``comment = <regex>`` - ``header_start = <int>`` (line number of header, ignoring blank and comment lines) - ``data_start = <int>`` - ``data_end = <int>`` - ``converters = <dict>`` - ``include_names = <list>`` (names of columns to include) - ``exclude_names = <list>`` (names of columns to exclude; applied after include) - ``fill_values = <dict>`` (filler values)
def read_fits (self, **kwargs): from astropy.io import fits return fits.open (text_type (self), **kwargs)
Open as a FITS file, returning a :class:`astropy.io.fits.HDUList` object. Keyword arguments are passed to :func:`astropy.io.fits.open`; valid ones likely include: - ``mode = 'readonly'`` (or "update", "append", "denywrite", "ostream") - ``memmap = None`` - ``save_backup = False`` - ``cache = True`` - ``uint = False`` - ``ignore_missing_end = False`` - ``checksum = False`` - ``disable_image_compression = False`` - ``do_not_scale_image_data = False`` - ``ignore_blank = False`` - ``scale_back = False``
def read_fits_bintable (self, hdu=1, drop_nonscalar_ok=True, **kwargs): from astropy.io import fits from .numutil import fits_recarray_to_data_frame as frtdf with fits.open (text_type (self), mode='readonly', **kwargs) as hdulist: return frtdf (hdulist[hdu].data, drop_nonscalar_ok=drop_nonscalar_ok)
Open as a FITS file, read in a binary table, and return it as a :class:`pandas.DataFrame`, converted with :func:`pwkit.numutil.fits_recarray_to_data_frame`. The *hdu* argument specifies which HDU to read, with its default 1 indicating the first FITS extension. The *drop_nonscalar_ok* argument specifies if non-scalar table values (which are inexpressible in :class:`pandas.DataFrame`s) should be silently ignored (``True``) or cause a :exc:`ValueError` to be raised (``False``). Other **kwargs** are passed to :func:`astropy.io.fits.open` (see :meth:`Path.read_fits`), although the open mode is hardcoded to be ``"readonly"``.
def read_hdf (self, key, **kwargs): # This one needs special handling because of the "key" and path input. import pandas return pandas.read_hdf (text_type (self), key, **kwargs)
Open as an HDF5 file using :mod:`pandas` and return the item stored under the key *key*. *kwargs* are passed to :func:`pandas.read_hdf`.
def read_inifile (self, noexistok=False, typed=False): if typed: from .tinifile import read_stream else: from .inifile import read_stream try: with self.open ('rb') as f: for item in read_stream (f): yield item except IOError as e: if e.errno != 2 or not noexistok: raise
Open assuming an “ini-file” format and return a generator yielding data records using either :func:`pwkit.inifile.read_stream` (if *typed* is false) or :func:`pwkit.tinifile.read_stream` (if it’s true). The latter version is designed to work with numerical data using the :mod:`pwkit.msmt` subsystem. If *noexistok* is true, a nonexistent file will result in no items being generated rather than an :exc:`IOError` being raised.
def read_json (self, mode='rt', **kwargs): import json with self.open (mode=mode) as f: return json.load (f, **kwargs)
Use the :mod:`json` module to read in this file as a JSON-formatted data structure. Keyword arguments are passed to :func:`json.load`. Returns the read-in data structure.
def read_lines (self, mode='rt', noexistok=False, **kwargs): try: with self.open (mode=mode, **kwargs) as f: for line in f: yield line except IOError as e: if e.errno != 2 or not noexistok: raise
Generate a sequence of lines from the file pointed to by this path, by opening as a regular file and iterating over it. The lines therefore contain their newline characters. If *noexistok*, a nonexistent file will result in an empty sequence rather than an exception. *kwargs* are passed to :meth:`Path.open`.
def read_numpy (self, **kwargs): import numpy as np with self.open ('rb') as f: return np.load (f, **kwargs)
Read this path into a :class:`numpy.ndarray` using :func:`numpy.load`. *kwargs* are passed to :func:`numpy.load`; they likely are: mmap_mode : None, 'r+', 'r', 'w+', 'c' Load the array using memory-mapping allow_pickle : bool = True Whether Pickle-format data are allowed; potential security hazard. fix_imports : bool = True Try to fix Python 2->3 import renames when loading Pickle-format data. encoding : 'ASCII', 'latin1', 'bytes' The encoding to use when reading Python 2 strings in Pickle-format data.
def read_numpy_text (self, dfcols=None, **kwargs): import numpy as np if dfcols is not None: kwargs['unpack'] = True retval = np.loadtxt (text_type (self), **kwargs) if dfcols is not None: import pandas as pd if isinstance (dfcols, six.string_types): dfcols = dfcols.split () retval = pd.DataFrame (dict (zip (dfcols, retval))) return retval
Read this path into a :class:`numpy.ndarray` as a text file using :func:`numpy.loadtxt`. In normal conditions the returned array is two-dimensional, with the first axis spanning the rows in the file and the second axis columns (but see the *unpack* and *dfcols* keywords). If *dfcols* is not None, the return value is a :class:`pandas.DataFrame` constructed from the array. *dfcols* should be an iterable of column names, one for each of the columns returned by the :func:`numpy.loadtxt` call. For convenience, if *dfcols* is a single string, it will be turned into an iterable by a call to :func:`str.split`. The remaining *kwargs* are passed to :func:`numpy.loadtxt`; they likely are: dtype : data type The data type of the resulting array. comments : str If specified, a character indicating the start of a comment. delimiter : str The string that separates values. If unspecified, any span of whitespace works. converters : dict A dictionary mapping zero-based column *number* to a function that will turn the cell text into a number. skiprows : int (default=0) Skip this many lines at the top of the file usecols : sequence Which columns to keep, by number, starting at zero. unpack : bool (default=False) If true, the return value is transposed to be of shape ``(cols, rows)``. ndmin : int (default=0) The returned array will have at least this many dimensions; otherwise mono-dimensional axes will be squeezed.
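A short sketch of the *dfcols* convenience, using a hypothetical whitespace-delimited file with three columns::

    df = Path('photometry.txt').read_numpy_text(dfcols='mjd flux err')
    # Roughly equivalent to:
    #   cols = np.loadtxt('photometry.txt', unpack=True)
    #   df = pd.DataFrame(dict(zip(['mjd', 'flux', 'err'], cols)))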
def read_pandas (self, format='table', **kwargs): import pandas reader = getattr (pandas, 'read_' + format, None) if not callable (reader): raise PKError ('unrecognized Pandas format %r: no function pandas.read_%s', format, format) with self.open ('rb') as f: return reader (f, **kwargs)
Read using :mod:`pandas`. The function ``pandas.read_FORMAT`` is called where ``FORMAT`` is set from the argument *format*. *kwargs* are passed to this function. Supported formats likely include ``clipboard``, ``csv``, ``excel``, ``fwf``, ``gbq``, ``html``, ``json``, ``msgpack``, ``pickle``, ``sql``, ``sql_query``, ``sql_table``, ``stata``, ``table``. Note that ``hdf`` is not supported because it requires a non-keyword argument; see :meth:`Path.read_hdf`.
def read_pickle (self): gen = self.read_pickles () value = next (gen) gen.close () return value
Open the file, unpickle one object from it using :mod:`pickle`, and return it.
def read_pickles (self): try: import cPickle as pickle except ImportError: import pickle with self.open (mode='rb') as f: while True: try: obj = pickle.load (f) except EOFError: break yield obj
Generate a sequence of objects by opening the path and unpickling items until EOF is reached.
def read_text(self, encoding=None, errors=None, newline=None): with self.open (mode='rt', encoding=encoding, errors=errors, newline=newline) as f: return f.read()
Read this path as one large chunk of text. This function reads in the entire file as one big piece of text and returns it. The *encoding*, *errors*, and *newline* keywords are passed to :meth:`open`. This is not a good way to read files unless you know for sure that they are small.
def read_toml(self, encoding=None, errors=None, newline=None, **kwargs): import pytoml with self.open (mode='rt', encoding=encoding, errors=errors, newline=newline) as f: return pytoml.load (f, **kwargs)
Read this path as a TOML document. The `TOML <https://github.com/toml-lang/toml>`_ parsing is done with the :mod:`pytoml` module. The *encoding*, *errors*, and *newline* keywords are passed to :meth:`open`. The remaining *kwargs* are passed to :func:`pytoml.load`. Returns the decoded data structure.
def read_yaml (self, encoding=None, errors=None, newline=None, **kwargs): import yaml with self.open (mode='rt', encoding=encoding, errors=errors, newline=newline) as f: return yaml.load (f, **kwargs)
Read this path as a YAML document. The YAML parsing is done with the :mod:`yaml` module. The *encoding*, *errors*, and *newline* keywords are passed to :meth:`open`. The remaining *kwargs* are passed to :meth:`yaml.load`. Returns the decoded data structure.
def write_pickles (self, objs): try: import cPickle as pickle except ImportError: import pickle with self.open (mode='wb') as f: for obj in objs: pickle.dump (obj, f)
*objs* must be iterable. Write each of its values to this path in sequence using :mod:`cPickle`.
def write_yaml (self, data, encoding=None, errors=None, newline=None, **kwargs): import yaml with self.open (mode='wt', encoding=encoding, errors=errors, newline=newline) as f: return yaml.dump (data, stream=f, **kwargs)
Write *data* to this path as a YAML document. The *encoding*, *errors*, and *newline* keywords are passed to :meth:`open`. The remaining *kwargs* are passed to :func:`yaml.dump`.