id (int64, 0-5.38k) | issuekey (string, length 4-16) | created (string, length 19) | title (string, length 5-252) | description (string, length 1-1.39M) | storypoint (float64, 0-100) |
---|---|---|---|---|---|
1,599 |
DM-7869
|
10/04/2016 09:52:44
|
virtualDevice assumes that the display has a .frame member
|
When running in verbose mode the virtualDevice assumes that it can access {{self.display.frame}}; this isn't always true.
| 1 |
1,600 |
DM-7886
|
10/04/2016 14:42:06
|
Replace pyfits with astropy.io.fits in all code
|
We currently use {{pyfits}} in multiple packages: afw, coadd_chisquared, obs_base, meas_astrom, meas_deblender, meas_extensions_psfex, meas_mosaic, obs_cfht, obs_lsstSim, obs_sdss, obs_subaru and obs_test. Strangely, we only have explicit dependencies on {{pyfits}} listed for afw, obs_base, galsim, healpy and obs_subaru. galsim can use astropy.io.fits or pyfits. healpy really does seem not to work with astropy.io.fits -- is there a newer version that does? Please replace {{pyfits}} with {{astropy.io.fits}} where appropriate and update the table files to correctly express the dependency (removing {{pyfits}} where inappropriate).
| 2 |
1,601 |
DM-7889
|
10/04/2016 17:39:53
|
Activate HSC afterburner functionality
|
We have just merged the functionality of the HSC afterburner (DM-6784, DM-6785). We need to add configuration options in obs_subaru to activate the features in the coadds.
| 5 |
1,602 |
DM-7892
|
10/04/2016 18:28:38
|
Transfer relevant C++ doc guidelines from Confluence to Developer Guide
|
Investigation supporting RFC-225 revealed that some rules in the [Confluence documentation guidelines|https://confluence.lsstcorp.org/display/LDMDG/Documentation+Standards] (e.g., the use of @ for Doxygen tags, or placing all documentation at the point of declaration) are not present in the developer guide. Ensure all relevant rules are present in the final documentation (senior developers have the final word on which rules are relevant).
| 2 |
1,603 |
DM-7893
|
10/04/2016 18:36:14
|
Add exception safety tag to Doxygen
|
To allow easier implementation of DM-7891, all DM projects' doxygen config files should include an alias {{@exceptsafe}} that expands to a paragraph with the heading "Exception Safety". The tag can be used to describe any guarantees made by documented code in the event of an exception. -Yes, this requires touching (almost) every repository in the stack.-
| 2 |
1,604 |
DM-7894
|
10/04/2016 19:33:06
|
mapper and butler queryMetadata method badly documented
|
The {{queryMetadata}} methods in both Mapper and Butler are very badly documented. Their docs both claim to accept a "key" parameter, but the methods don't take "key" (and Butler's version attempts to use one if "format" is not specified), while the "format" parameter (which appears non-optional from the code itself) doesn't have any docstring in Mapper. Looks like these docs rotted badly.
| 1 |
1,605 |
DM-7896
|
10/05/2016 06:54:30
|
meas_extensions_convolved is undocumented
|
Please add at least a README file providing a short summary of its functionality and some bare-bones documentation on how to enable and use it.
| 2 |
1,606 |
DM-7899
|
10/05/2016 12:24:33
|
Move datatypes for Indexed reference catalogs to daf_butlerUtils
|
The new indexed reference catalogs (for photometric and astrometric calibration) are stored under the dataset "cal_ref_cat". This dataset was only added to obs_lsstSim and obs_test, and did not make it to the rest of the cameras. Since this is independent of camera it makes the most sense to move this into the common datasets in daf_butlerUtils. (The other required datatype, "IngestIndexedReferenceTask_config", is already in daf_butlerUtils). This is currently blocking usage of the new Gaia reference catalogs, so it would be very useful to have this implemented soon instead of waiting for the various dataset RFCs to close. Giving this to [~pgee] per his suggestion.
| 1 |
1,607 |
DM-7900
|
10/05/2016 12:32:52
|
Add --batch-type None to possibilities, disabling any MPI
|
It can be desirable to run ctrl_pool enabled commands (such as constructBias.py from pipe_drivers) without any batch system. Please add an option {{--batch-type None}} that simply runs the job in the current process (a sketch of the dispatch follows this entry).
| 2 |
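A minimal sketch of the requested dispatch; {{runJob}} and {{submitToBatch}} are hypothetical names, and the real option handling lives in {{ctrl_pool}}:
{code:python}
# Hedged sketch only: dispatch on the new --batch-type value.
# submitToBatch stands in for the existing ctrl_pool batch path.
def runJob(batchType, job):
    if batchType is None or batchType == "None":
        return job()  # run in the current process, no MPI or batch system
    return submitToBatch(batchType, job)
{code}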
1,608 |
DM-7905
|
10/05/2016 13:47:54
|
Finish Convert GWT code to pure JavaScript (F16)
|
Finish the remaining work for converting GWT code to pure JavaScript and fix the bugs found
| 100 |
1,609 |
DM-7918
|
10/06/2016 12:09:08
|
constraints column is not rendered correctly if empty cell is displayed
|
[~cwang] found a problem in the catalog constraint search panel: the constraint column cells are replaced by disabled input-field cells when an empty value is entered. This component was working before, so it must be due to a recent change in TableRenderer.js. Talking with [~loi] and [~cwang], it seems that [~cwang] noticed it while working on the catalog search for LSST and may have a fix. Please fix soon, for the release.
| 2 |
1,610 |
DM-7921
|
10/06/2016 13:26:43
|
Experiment with multiple series histogram display
|
Multiple-series histogram display (not connected to a table). Show how multiple histogram series can be displayed, and display multiple-series histogram data via the API. (This is needed to understand how to organize multiple-series data.)
| 3 |
1,611 |
DM-7938
|
10/10/2016 01:57:44
|
Re-install Qserv on PDAC
|
The Qserv master on PDAC has been re-installed from scratch for security reasons: https://jira.ncsa.illinois.edu/browse/LSST-801 Qserv needs to be re-installed and re-tested here once the master node is available.
| 2 |
1,612 |
DM-7940
|
10/10/2016 10:27:33
|
Disable colourisation when not writing to a terminal
|
There's code in pex.config.history to colour output, but it is enabled/disabled unconditionally. Please change it to never colour text that isn't going to a terminal (a sketch of the guard follows this entry).
| 0.5 |
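A minimal sketch of the requested guard, assuming the colourisation helper is a plain Python function ({{_colorize}} is a hypothetical name; the real code lives in {{pex.config.history}}):
{code:python}
import os
import sys

def _colorize(text, color):
    """Wrap text in ANSI codes only when stdout is a real terminal."""
    if not sys.stdout.isatty() or os.environ.get("TERM") in (None, "dumb"):
        return text  # plain text for pipes, redirected files, and dumb terminals
    codes = {"red": "31", "green": "32", "yellow": "33", "blue": "34"}
    return "\033[%sm%s\033[0m" % (codes[color], text)
{code}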
1,613 |
DM-7941
|
10/10/2016 11:15:24
|
Fix config for reference object loader
|
{{meas_mosaic}} is failing to un-persist the source matches for the HSC NB0921 filter with: {code} WARNING: Failed to read DataId(initialdata={'taiObs': '2015-03-16', 'pointing': 1170, 'visit': 23046, 'dateObs': '2015-03-16', 'filter': 'NB0921', 'field': 'SSP_UDEEP_COSMOS', 'tract': 0, 'ccd': 41, 'expTime': 900.0}, tag=set([])): Could not find flux field(s) N921_camFlux, N921_flux {code} It seems the filterMap is not being loaded properly. Please fix this.
| 1 |
1,614 |
DM-7947
|
10/10/2016 14:29:49
|
Unit test for Circle class
|
This is one of a series of unit-test tickets for the package {{edu.caltech.ipac.visualize.plot}}.
| 1 |
1,615 |
DM-7948
|
10/10/2016 17:04:42
|
misc. bug fixes related to Gator/Atlas/irsaviewer - feedback from the test team
|
Address the bugs reported by the test team.
| 2 |
1,616 |
DM-7949
|
10/10/2016 17:19:59
|
Delete daf_butlerUtils and move obs_base into lsst
|
Now that the daf_butlerUtils -> obs_base move is complete, we need to move obs_base to live in the lsst organization (it's currently in lsst-dm). github provides a "transfer repository" mechanism: https://help.github.com/articles/transferring-a-repository-owned-by-your-organization/ but we can't use that until we've deleted daf_butlerUtils since obs_base is a fork (github gives the following error: "lsst already has a repository in the lsst/daf_butlerUtils network"). Once we're sure there are no more outstanding daf_butlerUtils branches, we can delete that repo (and thus break the fork link) and move obs_base to lsst and update repos.yaml. This shouldn't break Jenkins at all, as github does redirects when a repository is transferred. I plan to do this at the beginning of November.
| 0.5 |
1,617 |
DM-7951
|
10/10/2016 17:50:17
|
Rename daf_butlerUtils component to obs_base
|
Now that the package move/rename in DM-7915 has been finished, can we please get the daf_butlerUtils component renamed to obs_base to match?
| 0.5 |
1,618 |
DM-7952
|
10/10/2016 18:48:10
|
Modifications to project documents to flow down LSR-REQ-0026, re: predefined transient filters, or remove it
|
In trying to close LIT-101, which was about a reference to "limited classification" in the Level 2 object catalog, [~zivezic] and I rediscovered a gap in DM's requirements flowdown. We discussed this by teleconference today. The [SRD (LPM-17)|http://ls.st/lpm-17*] contains a "will" statement about DM providing "pre-defined filters optimized for traditionally popular transients, such as supernovae and micro lensed sources". This was flowed down nearly verbatim to the [LSR (LSE-29)|http://ls.st/lse-29*] as LSR-REQ-0026, "Predefined Transient Filters": {quote}*Requirement:* Pre-defined filters optimized for traditionally popular transients shall be made available. It shall be possible for the project to add new pre-defined filters as the survey progresses. *Discussion:* The list of pre-defined filters, by way of example, should include ones for supernovae and microlensed sources.{quote} This requirement was never flowed down to the [OSS (LSE-30)|http://ls.st/lse-30*], the [DMSR (LSE-61)|http://ls.st/lse-61*], or the [DPDD (LSE-163)|http://ls.st/lse-163*], in the space of system-level controlled documents. The requirement should either be formally disclaimed, which would require a variance against the SRD and a change request against the LSR, or the proper flowdown should be performed. The latter would be in two parts: a CCB-level change request for the OSS, DMSR, and DPDD, as well as, within DM, the addition of substantive language to [LDM-151|http://ls.st/ldm-151*] towards fulfilling this requirement. As the OSS and DMSR are currently completely silent on this, it would be acceptable simply to flow down the LSR requirement verbatim to both of these (as if a <copy> relationship in SysML terms). However, as noted in LIT-101, the DPDD currently contains language which appears to be in conflict with LSR-REQ-0026, specifically: {quote}we do not plan to provide any classification (eg., “is the light curve consistent with an RR Lyra?”, or “a Type Ia SN?”).{quote} This would have to be edited to clarify that while we will _not_ attempt to produce _exclusive_ classifications - that is, assignment of objects to unique categories - we _will_ provide pre-defined filters, with potentially highly overlapping selections, that provide good completeness but perhaps only very modest purity for a small number of object types of common interest. It is important to retain the notion that this would be done using _only_ LSST data. Strictly speaking, as a "pre-defined filter" will not be thought of by most of our readers as a "data product", it might not need to be mentioned in the DPDD, but because the existing DPDD language suggests a strong conflict, it would be very good to clarify it. It will also be important to harmonize what we do about this requirement with what we say about the "mini-broker" in our requirements flowdown, as this is also not currently very clear.
| 2 |
1,619 |
DM-7955
|
10/11/2016 10:22:48
|
Improve the default log configuration
|
When {{log}}/log4cxx is not configured, it uses the default configuration, which is at DEBUG level. Right now {{log}} is configured explicitly when running CmdLineTasks or Python unit tests, but the stack is also used outside of the Task and utils unit-test environments. It should have a friendlier and more useful default level and pattern, without the user having to do anything.
| 3 |
1,620 |
DM-7956
|
10/11/2016 10:48:28
|
Add the Starlink AST package
|
As per RFC-193, add the Starlink AST package. Name it {{starlink_ast}}. Note: until we have the new WCS code further along, it seems premature to add {{starlink_ast}} to {{lsst_distrib}}.
| 2 |
1,621 |
DM-7963
|
10/11/2016 18:31:36
|
Add cutout image size option to be exposed and user defined in LC viewer
|
The LC viewer should get cutouts rather than full images. We need to decide when the user inputs the desired size of the cutouts and add an input field option in the image viewer to control it. By default it should be a cutout anyway.
| 1 |
1,622 |
DM-7964
|
10/11/2016 18:36:06
|
User should be able to click on a XY-plot point and have the period filled in phase folding panel
|
If you are looking at the periodogram, the user should be able to click on a point in the XY-plot and have that period (from periodogram) put into the phase folding tool. Pending cases to be confirmed/agreed: What about if you click between points? What if you highlight a row in that table?
| 2 |
1,623 |
DM-7965
|
10/11/2016 18:45:48
|
Make consistent highlight colors in the tri-view
|
In the triview, a point in the XY Plot is highlighted. Simultaneously, a row in the table is highlighted and one of the images is outlined. The linking between these elements is a strength of the tool. The linkage would be stronger and look more professional if these were all the same color. The gold color used for the image highlight would work, as long as it is not too fluorescent, and as long as the highlighted point in the XY plot were outlined in black or a darker color. Use the color of the image border for table rows and points on the chart. (12/2/2016 XW)
| 2 |
1,624 |
DM-7967
|
10/11/2016 18:50:01
|
Magnitudes should be plotted decreasing to the top or to the left by default
|
Users shouldn't have to manually flip this. Perhaps Firefly can detect it by the units? Matching 'mag', 'mags', 'magnitude', or 'magnitudes' (case-insensitive) should cover it; a sketch follows this entry. Yes, this requires some sort of hard-coding, but it makes us look like we know what we are doing, so it seems worth it if a more sophisticated solution cannot be found in a reasonable timeframe.
| 2 |
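Firefly itself is JavaScript; this Python fragment just illustrates the proposed matching rule:
{code:python}
import re

# case-insensitive whole-word match for mag / mags / magnitude / magnitudes
_MAG_UNITS = re.compile(r"^mag(nitude)?s?$", re.IGNORECASE)

def shouldFlipAxis(unit):
    """Return True if the column unit looks like a magnitude."""
    return bool(unit) and bool(_MAG_UNITS.match(unit.strip()))

assert shouldFlipAxis("mag") and shouldFlipAxis("Magnitudes")
assert not shouldFlipAxis("Jy")
{code}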
1,625 |
DM-7969
|
10/11/2016 18:55:41
|
Add time zero as user defined offset different than default when phase folding
|
The phase folding tool needs to allow the user to enter a zero point. By default it should use the minimum value of the time in the raw table, but this can be overridden by the user. The phase folding option panel needs a new input field to let the user define it (a sketch of the folding rule follows this entry).
| 2 |
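A sketch of the folding rule with the user-definable zero point ({{phaseFold}} is an illustrative name; {{times}} is the raw time column):
{code:python}
import numpy as np

def phaseFold(times, period, t0=None):
    """Fold times into [0, 1) phases; t0 defaults to the minimum raw time."""
    times = np.asarray(times, dtype=float)
    if t0 is None:
        t0 = times.min()  # default zero point, overridable by the user
    return ((times - t0) / period) % 1.0
{code}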
1,626 |
DM-7970
|
10/11/2016 19:02:05
|
Image settings specific to LC on by default
|
LC image viewer preferences by default should be: * WCS match should be on by default. * The image stretch should be locked by default. * The object position should be centered in the display by default. * The object should be overlaid as the active target (right now the app is just guessing)
| 2 |
1,627 |
DM-7971
|
10/11/2016 19:04:54
|
When a point is highlighted in the phase-folded light curve, the same point should be highlighted in the raw light curve, and vice versa
|
If the user changes XY Plots from phased to raw, the highlighted/active image displayed should not change. But the sequence of images displayed will change, because the image sorting is tied to the XY Plot/Table, which will change from {{phase}} to {{mjd}}. And vice versa when the XY plot is changed from raw to phased. N.B.: this makes more sense when more than one image is displayed.
| 8 |
1,628 |
DM-7976
|
10/12/2016 14:26:53
|
coadd cannot be loaded directly as afw.image.ExposureF
|
Loading a {{deepCoadd}} as an afw Exposure gives the following error: {code:java} >>> import lsst.afw.image as afwImage >>> coadd = afwImage.ExposureF("validation_data_hsc/DATA/rerun/20160805/deepCoadd/HSC-Y/0/7,7.fits") 31424 [0x7fff74d45000] DEBUG afw.image.Mask null - Number of mask planes: 16 Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/opt/sw/lsstsw/stack/DarwinX86/afw/12.1-4-gaba3f16/python/lsst/afw/image/imageLib.py", line 11623, in __init__ this = _imageLib.new_ExposureF(*args) lsst.pex.exceptions.wrappers.LogicError: File "include/lsst/afw/table/BaseRecord.h", line 95, in const typename Field<T>::Element *lsst::afw::table::BaseRecord::getElement(const Key<T> &) const [T = int] Key is not valid (if this is a SourceRecord, make sure slot aliases have been setup). {0} File "src/table/io/InputArchive.cc", line 105, in std::shared_ptr<Persistable> lsst::afw::table::io::InputArchive::Impl::get(int, const lsst::afw::table::io::InputArchive &) loading object with id=132, name='CoaddPsf' {1} lsst::pex::exceptions::LogicError: 'Key is not valid (if this is a SourceRecord, make sure slot aliases have been setup). {0}; loading object with id=132, name='CoaddPsf' {1}' {code} This used to work, or at least it worked with a stack from Sep 8.
| 3 |
1,629 |
DM-7979
|
10/12/2016 18:41:07
|
write tests and document use of URI or relative path as input and output to butler.
|
Butler can take a URI or relative path for its inputs and outputs arguments, provided that it has a way to figure out the details of the repo (e.g., the mapper), and provided the default mode (inputs 'r', outputs 'w') is acceptable. This needs to be documented in the Butler init function, and a formal test written (a usage sketch follows this entry).
| 2 |
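A hedged usage sketch of what the documentation and test should cover; the argument names follow the ticket text, and the repository paths are placeholders:
{code:python}
import lsst.daf.persistence as dafPersist

# relative path, default modes (inputs 'r', outputs 'w')
butler = dafPersist.Butler(inputs="inputRepo", outputs="outputRepo")

# URI form
butler = dafPersist.Butler(inputs="file:///data/inputRepo")
{code}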
1,630 |
DM-7986
|
10/13/2016 14:29:43
|
fix circles connecting & add way to reorder image tabs
|
Fix circles connecting and add a way to reorder image tabs. Most of this ticket is the code to reorder the image tabs in the image select panel.
| 2 |
1,631 |
DM-7987
|
10/13/2016 17:06:15
|
Ambiguous conversion between spherical coordinates and unit vectors
|
The {{sphgeom::LonLat}} and {{sphgeom::UnitVector3d}} classes provide constructors for converting between the two types. However, the axis convention used for the conversion is left unspecified, limiting the situations where these classes' interoperability can be used. I propose that the axis convention currently used by the implementations of these classes be made part of their APIs, so that external code knows what results to expect (a conversion sketch follows this entry): {noformat} (0°, 0°) <--> <1, 0, 0> (90°, 0°) <--> <0, 1, 0> (*, 90°) <--> <0, 0, 1> {noformat}
| 1 |
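The proposed convention is the usual spherical-to-Cartesian mapping; a pure-Python sketch, independent of {{sphgeom}}:
{code:python}
import math

def lonLatToUnitVector(lonDeg, latDeg):
    """Convention above: (0,0) -> <1,0,0>, (90,0) -> <0,1,0>, (*,90) -> <0,0,1>."""
    lon = math.radians(lonDeg)
    lat = math.radians(latDeg)
    return (math.cos(lat) * math.cos(lon),
            math.cos(lat) * math.sin(lon),
            math.sin(lat))
{code}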
1,632 |
DM-7992
|
10/13/2016 22:15:54
|
Handling of SDSS forced photometry data in the PDACv1 light curve viewer
|
(Apologies if I'm missing an existing ticket for this; there doesn't seem to be an exactly on-point one, though DM-7990 covers some of it.) The SUIT portal for PDACv1, the main focus of which is the service of the forced photometry data from SDSS Stripe82 (LSST 2013 processing), needs to be able to deal with multi-band photometry data in a table in which data from all five SDSS filter bands (_u, g, r, i, z_) are mixed together as separate rows. It is a minimal requirement to be able to display the data for a single selected filter band, and this should be possible in a more natural way than requiring the user to enter a filtering expression on the {{filterId}} column. As noted in DM-7990, the next step would be the ability to show the light curves for multiple filter bands over-plotted on each other.
| 1 |
1,633 |
DM-8000
|
10/14/2016 15:52:51
|
Error instantiating MultiBandDriverTask with LoadIndexedReferenceObjectsTask
|
Error running {{multiBandDriver}} if {{refObjLoader}} is retargeted to {{LoadIndexedReferenceObjectsTask}} in {{measureCoaddSources}}, e.g. with this config override: {code:java} from lsst.meas.algorithms.loadIndexedReferenceObjects import LoadIndexedReferenceObjectsTask config.match.refObjLoader.retarget(LoadIndexedReferenceObjectsTask) {code} {code:java} Traceback (most recent call last): File "/software/lsstsw/stack/Linux64/pipe_drivers/12.1-7-ga5bc178+1/bin/multiBandDriver.py", line 3, in <module> MultiBandDriverTask.parseAndSubmit() File "/software/lsstsw/stack/Linux64/ctrl_pool/12.1-1-g3e1834e/python/lsst/ctrl/pool/parallel.py", line 410, in parseAndSubmit if not cls.RunnerClass(cls, batchArgs.parent).precall(batchArgs.parent): # Write config, schema File "/software/lsstsw/stack/Linux64/pipe_base/12.1-1-g06158e9+2/python/lsst/pipe/base/cmdLineTask.py", line 300, in precall task = self.makeTask(parsedCmd=parsedCmd) File "/software/lsstsw/stack/Linux64/pipe_drivers/12.1-7-ga5bc178+1/python/lsst/pipe/drivers/multiBandDriver.py", line 111, in makeTask return self.TaskClass(config=self.config, log=self.log, butler=butler) File "/software/lsstsw/stack/Linux64/pipe_drivers/12.1-7-ga5bc178+1/python/lsst/pipe/drivers/multiBandDriver.py", line 131, in __init__ peakSchema=afwTable.Schema(self.mergeCoaddDetections.merged.getPeakSchema())) File "/software/lsstsw/stack/Linux64/pipe_base/12.1-1-g06158e9+2/python/lsst/pipe/base/task.py", line 237, in makeSubtask subtask = taskField.apply(name=name, parentTask=self, **keyArgs) File "/software/lsstsw/stack/Linux64/pex_config/12.1+6/python/lsst/pex/config/configurableField.py", line 83, in apply return self.target(*args, config=self.value, **kw) File "/software/lsstsw/stack/Linux64/pipe_tasks/12.1-3-g35418c8/python/lsst/pipe/tasks/multiBand.py", line 1039, in __init__ self.makeSubtask("match", butler=butler) File "/software/lsstsw/stack/Linux64/pipe_base/12.1-1-g06158e9+2/python/lsst/pipe/base/task.py", line 237, in makeSubtask subtask = taskField.apply(name=name, parentTask=self, **keyArgs) File "/software/lsstsw/stack/Linux64/pex_config/12.1+6/python/lsst/pex/config/configurableField.py", line 83, in apply return self.target(*args, config=self.value, **kw) File "/software/lsstsw/stack/Linux64/meas_astrom/12.1-2-gf2a177e+2/python/lsst/meas/astrom/directMatch.py", line 78, in __init__ self.makeSubtask("refObjLoader", butler=butler) File "/software/lsstsw/stack/Linux64/pipe_base/12.1-1-g06158e9+2/python/lsst/pipe/base/task.py", line 237, in makeSubtask subtask = taskField.apply(name=name, parentTask=self, **keyArgs) File "/software/lsstsw/stack/Linux64/pex_config/12.1+6/python/lsst/pex/config/configurableField.py", line 83, in apply return self.target(*args, config=self.value, **kw) File "/software/lsstsw/stack/Linux64/meas_algorithms/12.1-6-g1f798ce+1/python/lsst/meas/algorithms/loadIndexedReferenceObjects.py", line 48, in __init__ ingest_config = butler.get(self.config.ingest_config_name, immediate=True) AttributeError: 'NoneType' object has no attribute 'get' {code} I included a way to reproduce this without actual data in {{obs_decam}} branch {{u/hfc/DM-8000}}. There I added an empty Butler repo and the measureCoaddSources config override. With that branch this command reproduces the error: {{multiBandDriver.py $OBS_DECAM_DIR/repo/ --rerun test --cores 1}}
| 2 |
1,634 |
DM-8002
|
10/17/2016 09:00:22
|
Fix unguarded display code in SecondMomentStarSelector
|
Something recent (maybe DM-7848) either broke or uncovered an existing bug in {{SecondMomentStarSelector}}, in which a block of display code (specifically a call to {{lsst.afw.display.ds9.Buffering()}}) is not protected by an {{if display}} guard (a sketch of the guard follows this entry). I'm not sure if we expect to require all display code to be guarded like that, but all of the other display code in this file is, so I'm just going to fix that block. If someone ([~rhl]?) can confirm that unguarded display code is safe, then there's a bug somewhere else and {{SecondMomentStarSelector}} should have a lot of its display guards removed.
| 1 |
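A sketch of the intended guard, in the style of the file's other display code ({{display}} and the plotting body are assumed from the surrounding code):
{code:python}
import lsst.afw.display.ds9 as ds9

if display:
    with ds9.Buffering():
        # ... existing candidate-star plotting calls go here ...
        pass
{code}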
1,635 |
DM-8004
|
10/17/2016 12:53:22
|
Crash in pipe_supertask.NewExampleCmdLineTask
|
I try to follow http://dmtn-002.lsst.io/ and run all examples there. {{NewExampleCmdLineTask}} example crashes with an exception: {noformat} $ cmdLineActivator NewExampleCmdLineTask --extras $OBS_TEST_DIR/data/input/ --output test --id lsst.pipe.supertask.examples.NewExampleCmdLineTask.NewExampleCmdLineTask found! Classes inside module lsst.pipe.supertask.examples.NewExampleCmdLineTask : NewExampleCmdLineTask.NewExampleCmdLineConfig NewExampleCmdLineTask.NewExampleCmdLineTask SuperTask 283 [0x7f1bd2a28740] INFO exampleTask null - exampleTask was initiated root INFO: Config override file does not exist: '/u2/salnikov/lsstsw/stack/Linux64/obs_test/12.1-3-gf809d79/config/exampleTask.py' root INFO: Config override file does not exist: u'/u2/salnikov/lsstsw/stack/Linux64/obs_test/12.1-3-gf809d79/config/test/exampleTask.py' root INFO: input=/u2/salnikov/lsstsw/stack/Linux64/obs_test/12.1-3-gf809d79/data/input root INFO: calib=None root INFO: output=/home/salnikov/test CameraMapper INFO: Loading registry registry from /home/salnikov/test/_parent/registry.sqlite3 Traceback (most recent call last): File "/home/salnikov/pipe_supertask/bin/cmdLineActivator", line 24, in <module> CmdLineActivator.parse_and_run() File "/home/salnikov/pipe_supertask/python/lsst/pipe/supertask/activator.py", line 556, in parse_and_run CmdLineClass.activate() # This is where everything starts File "/home/salnikov/pipe_supertask/python/lsst/pipe/supertask/activator.py", line 234, in activate if self.precall(): File "/home/salnikov/pipe_supertask/python/lsst/pipe/supertask/activator.py", line 222, in precall self.SuperTask.write_config(self.parsed_cmd.butler, clobber=self.clobber_config, do_backup=self.do_backup) File "/home/salnikov/pipe_supertask/python/lsst/pipe/supertask/super_task.py", line 336, in write_config elif butler.datasetExists(config_name): File "/u2/salnikov/lsstsw/stack/Linux64/daf_persistence/12.1+1/python/lsst/daf/persistence/butler.py", line 544, in datasetExists location = repoData.repo.map(datasetType, dataId) File "/u2/salnikov/lsstsw/stack/Linux64/daf_persistence/12.1+1/python/lsst/daf/persistence/repository.py", line 180, in map loc = self._mapper.map(*args, **kwargs) File "/u2/salnikov/lsstsw/stack/Linux64/daf_persistence/12.1+1/python/lsst/daf/persistence/mapper.py", line 172, in map func = getattr(self, 'map_' + datasetType) AttributeError: 'TestMapper' object has no attribute 'map_exampleTask_config' {noformat} Need to see what I'm doing wrong.
| 1 |
1,636 |
DM-8006
|
10/17/2016 13:11:39
|
Add 2D Chart to client side phase folding dialog
|
Add a 2D chart to the client-side phase folding dialog. The chart should update as the user moves the slider. Copied from GitHub (Nov. 15, 2016, XW): The implementation adds a 2D chart to the client-side phase folding dialog; the plot changes as any of the phase folding parameters (time column, flux column, zero time point, period) changes, and the parameter items in the parameter setting dialog are updated. Test: open localhost:8080/firefly/lc.html; open 'lc_raw.tbl' from 'Raw Table' and click 'search'; select the tab 'Upload/Phase folding' and click the button 'Phase Folding'; experiment with the settings of all entries and move the slider to set 'period'; the plot will be updated as any of the parameters (except the flux error column) changes; click the button 'Phase Folded Table' to close the dialog, and a table with a phase column is created (or updated) based on the current settings of period, time column and zero point time. Note: the time and flux column names are set by default for IRSA cases. Those column names are settable where the phase folding dialog component is used.
| 8 |
1,637 |
DM-8007
|
10/17/2016 13:22:29
|
Make firefly_client pip-installable
|
Make the {{firefly_client}} Python API installable via pip. The package will be named {{firefly_client}} and will exist within the {{firefly}} repository. Copied from the pull request (XW, 10/18/2016): copied the Firefly License.txt; added a simple README; added setup.py and setup.cfg; moved package files to the firefly_client directory.
| 1 |
1,638 |
DM-8011
|
10/17/2016 18:06:48
|
Update XRootD from upstream
|
Merge in latest changes from upstream XRootD (includes gcc 6.2 fixes)
| 0.5 |
1,639 |
DM-8015
|
10/17/2016 18:53:32
|
VisitInfo repr() and str() should print a useful summary of contents
|
The new VisitInfo object is a bit opaque from within Python: you can look at the individual components via e.g. {{visitInfo.darkTime}} and {{visitInfo.boresightAirmass}}, but {{print(visitInfo)}} is not helpful. It would be extremely useful for {{str()}} and {{repr()}} to print either the whole contents of the VisitInfo (it's not that much information), or for {{str()}} to print a useful summary and {{repr()}} the whole thing (a sketch follows this entry).
| 2 |
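A sketch of the kind of output being asked for; the real class is wrapped C++, and the attribute list here is illustrative, not exhaustive:
{code:python}
class VisitInfoDemo:
    """Stand-in showing a summary __str__ and a fuller __repr__."""
    def __init__(self, darkTime, boresightAirmass):
        self.darkTime = darkTime
        self.boresightAirmass = boresightAirmass

    def __str__(self):
        return "darkTime=%s, boresightAirmass=%s" % (
            self.darkTime, self.boresightAirmass)

    def __repr__(self):
        return "VisitInfo(darkTime=%r, boresightAirmass=%r)" % (
            self.darkTime, self.boresightAirmass)
{code}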
1,640 |
DM-8021
|
10/18/2016 09:43:41
|
Deal with large pickles
|
[~lauren] is running: {code} coaddDriver.py /tigress/HSC/HSC --rerun lauren/LSST/DM-6816/cosmos --job DM-6816-cosmos-y-coaddDriver --time 100 --cores 96 --batch-type=slurm --mpiexec='-bind-to socket' --id tract=0 filter=HSC-Y --selectId ccd=0..103 filter=HSC-Y visit=274..302:2^306..334:2^342..370:2^1858..1862:2^1868..1882:2^11718..11742:2^22602..22608:2^22626..22632:2^22642..22648:2^22658..22664:2 --batch-submit '--mem-per-cpu 8000' {code} and it is producing: {code} OverflowError on tiger-r8c1n12:19889 in map: integer 2155421250 does not fit in 'int' Traceback (most recent call last): File "/tigress/HSC/LSST/stack_20160915/Linux64/ctrl_pool/12.1+5/python/lsst/ctrl/pool/pool.py", line 99, in wrapper return func(*args, **kwargs) File "/tigress/HSC/LSST/stack_20160915/Linux64/ctrl_pool/12.1+5/python/lsst/ctrl/pool/pool.py", line 218, in wrapper return func(*args, **kwargs) File "/tigress/HSC/LSST/stack_20160915/Linux64/ctrl_pool/12.1+5/python/lsst/ctrl/pool/pool.py", line 554, in map self.comm.scatter(initial, root=self.rank) File "MPI/Comm.pyx", line 1286, in mpi4py.MPI.Comm.scatter (src/mpi4py.MPI.c:109079) File "MPI/msgpickle.pxi", line 707, in mpi4py.MPI.PyMPI_scatter (src/mpi4py.MPI.c:48114) File "MPI/msgpickle.pxi", line 168, in mpi4py.MPI.Pickle.dumpv (src/mpi4py.MPI.c:41672) File "MPI/msgbuffer.pxi", line 35, in mpi4py.MPI.downcast (src/mpi4py.MPI.c:29070) OverflowError: integer 2155421250 does not fit in 'int' application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0 {code} We need to fix or work around this problem.
| 2 |
1,641 |
DM-8023
|
10/18/2016 11:15:43
|
afw interface.py crashes with "ds9" backend because there's no ds9.DisplayImpl
|
Making an afw.display with the "ds9" backend (and perhaps others!) throws an exception and crashes because ds9 doesn't have DisplayImpl. As an example, {{import lsst.afw.display as afwDisplay; display = afwDisplay.Display("ds9")}} throws: {{AttributeError: 'module' object has no attribute 'DisplayImpl'}}
| 2 |
1,642 |
DM-8026
|
10/18/2016 17:05:31
|
log4cxx gcc 6.2 compatibility fixes
|
Minor fixes for gcc 6.2 compatibility
| 0.5 |
1,643 |
DM-8027
|
10/18/2016 17:23:02
|
Update obs_lsstSim to add VisitInfo to eimages
|
The eimage dataset type is special to obs_lsstSim, so it was not updated in the process of implementing the VisitInfo ticket.
| 2 |
1,644 |
DM-8028
|
10/18/2016 17:33:18
|
Add a utility class to upload the unit test data file
|
A utility class is needed to upload unit-test data files to the firefly_test_data tree.
| 2 |
1,645 |
DM-8030
|
10/18/2016 19:52:47
|
Identify and correct differing background model between py2 and py3
|
The background model pixel values differ by 0.00011 ± 0.00035 on average between python2 and python3. The cause of this discrepancy needs to be tracked down and fixed (a comparison sketch follows this entry).
| 2 |
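A minimal comparison sketch, assuming the two background models have been written out as FITS images (the filenames are hypothetical):
{code:python}
import numpy as np
from astropy.io import fits

# bg_py2.fits / bg_py3.fits: background model images written by each Python version
bg2 = fits.getdata("bg_py2.fits").astype(np.float64)
bg3 = fits.getdata("bg_py3.fits").astype(np.float64)
diff = bg2 - bg3
print("mean = %.5g, std = %.5g" % (diff.mean(), diff.std()))
{code}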
1,646 |
DM-8032
|
10/18/2016 20:01:05
|
Tighten testProcessCcd thresholds once background model is fixed
|
Once the {{meas_algorithms}} background model is being generated correctly, the original {{pipe_tasks}} test that first pointed to this problem needs to be put back to higher precision. A comment in {{testProcessCcd.py}} indicates the assertions whose precisions should be restored to their pre-py3-porting values. Presently, this test fails with the tighter thresholds because the background model is different than expected.
| 0.5 |
1,647 |
DM-8034
|
10/18/2016 22:09:56
|
reprocess validation_data* to contain VisitInfo
|
Please reprocess the validation data sets so that their output data contains the VisitInfo metadata structure (see DM-5503). I will provide a specific weekly to use in the reprocessing in a comment.
| 1 |
1,648 |
DM-8035
|
10/19/2016 09:03:51
|
daf_persistence test failures with gcc 5.2 on el5
|
The majority of the {{daf_persistence}} tests fail when attempting to build a conda package via {{conda-lsst}} on el5 with gcc 5.2. {code:java} Traceback (most recent call last): File "tests/reposInButler.py", line 36, in <module> import lsst.daf.persistence as dp File "/home/jhoblitt/conda-lsst/miniconda/conda-bld/work/python/lsst/daf/persistence/__init__.py", line 28, in <module> from .persistenceLib import * File "/home/jhoblitt/conda-lsst/miniconda/conda-bld/work/python/lsst/daf/persistence/persistenceLib.py", line 27, in <module> _persistenceLib = swig_import_helper() File "/home/jhoblitt/conda-lsst/miniconda/conda-bld/work/python/lsst/daf/persistence/persistenceLib.py", line 26, in swig_import_helper return importlib.import_module('_persistenceLib') File "/home/jhoblitt/conda-lsst/miniconda/envs/_build/lib/python2.7/importlib/__init__.py", line 37, in import_module __import__(name) ImportError: No module named _persistenceLib tests/butlerSubset.py {code} I'm at a loss as to why the tests are failing in this environment. The library is being successfully built. {code:java} $ ls -la /home/jhoblitt/conda-lsst/miniconda/conda-bld/work/python/lsst/daf/persistence/_persistenceLib.so -rwxrwxr-x 1 jhoblitt jhoblitt 9716560 Oct 19 14:47 /home/jhoblitt/conda-lsst/miniconda/conda-bld/work/python/lsst/daf/persistence/_persistenceLib.so {code} {code:java} $ swig -copyright SWIG Version 3.0.10 Copyright (c) 1995-1998 University of Utah and the Regents of the University of California Copyright (c) 1998-2005 University of Chicago Copyright (c) 2005-2006 Arizona Board of Regents (University of Arizona) {code}
| 2 |
1,649 |
DM-8044
|
10/20/2016 15:38:02
|
Default local background subtraction to False for safe coadd clipping
|
In RFC-212 we adopted turning on the "junk suppression" temporary local background subtraction by default: {{SourceDetectionConfig.doTempLocalBackground=True}}. Since the safe clipping coadd assembly (introduced in DM-2915) also performs object detection, this had the effect of turning on the junk suppression background there too. The safe clipping algorithm was designed and tuned with no extra background estimations, thus {{doTempLocalBackground}} should default to *False* for that task.
| 0.5 |
1,650 |
DM-8051
|
10/21/2016 14:27:28
|
weekly-release/build-build-tag jobs broken
|
{{weekly-release}} failed on Monday because master was broken. The job seems to be failing in odd ways today, seemingly requiring some updates to the pipeline DSL syntax.
| 1 |
1,651 |
DM-8063
|
10/24/2016 12:45:08
|
Check the Stubbs Am241 gain calibration plan
|
Review the document detailing Stubbs's plan to use Am241 gammas from outside the cryostat to perform absolute gain calibration of the main camera, checking for any gross errors in reasoning.
| 1 |
1,652 |
DM-8074
|
10/25/2016 03:47:40
|
Test "Swarm mode" high availabilty features
|
Swarm mode provides high-availability features, which will be tested on OpenStack.
| 8 |
1,653 |
DM-8076
|
10/25/2016 13:12:31
|
Cleanup repository move wording in DMTN-027
|
There are some lingering incorrect "you should..." statements in the "merging work" section of DMTN-027, that should be referring to the "other developer", and the text needs to be reworded to reflect that dependencies on master need to live in {{lsst}}, and so some extra repository shuffling has to happen. A few relevant quotes from Slack: About how to move things without deleting daf_butlerUtils: {quote} Tim Jenness [11:50 AM] move daf_butlerUtils into lsst-sqre, move obs_base into lsst, move daf_butlerUtils into lsst-dm {quote} About unclear "you": {quote} Kian-Tat Lim [11:53 AM] Oh, except I missed this at the top "Once your rename has been merged to master," One thing about that section that I noticed before but didn't comment on: it's sometimes unclear who "you/your" is -- the renamer or the "other developer" John Parejko [11:55 AM] I tried to make it always be “you” as the person doing the rename. The person the whole document is aimed at. If I got that wrong somewhere, I’m happy to fix it. Kian-Tat Lim [11:55 AM] But "your work-in-progress branch" is really "the other developer", no? Same for "Check that the branch in the new clone matches your branch in meas_worst," @parejkoj So I think the rest of the document is fine as long as you put in that merging to master also involves moving to lsst and change the remote url in the instructions. {quote}
| 0.5 |
1,654 |
DM-8078
|
10/25/2016 14:35:28
|
Remove int and long from schema aliases
|
In python 3 all integers are 64 bit "long" integers, while in python 2, 32 and 64 bit integers are differentiated by the int and long types respectively. This causes confusion when adding a new field to a schema using addField in afw/table/Base.i in python 3, where the python type is converted into a C++ type using a dictionary {code:python} aliases = { long: "L", int: "I", float: "D", str: "String", futurestr: "String", numpy.uint16: "U", numpy.int32: "I", numpy.int64: "L", numpy.float32: "F", numpy.float64: "D", Angle: "Angle", } {code} In RFC-227 it was decided that {{int}} and {{long}} will be removed from {{aliases}}, leaving {code:python} aliases = { str: "String", futurestr: "String", numpy.uint16: "U", numpy.int32: "I", numpy.int64: "L", numpy.float32: "F", numpy.float64: "D", Angle: "Angle", } {code} All existing code using `type=long` will need to be updated as part of this ticket. This will be included in the pybind11 wrapped code, with a deprecation warning for users attempting to add fields using {{int}} or {{long}}.
| 0 |
1,655 |
DM-8080
|
10/25/2016 15:11:52
|
start a old-vs-new butler document
|
[~rhl] requested: I think it'd be really helpful to have a relatively short document describing (or at least mentioning) the features that are in the butler that the `classic' butler didn't have. I think we can put a section in LDM-463 for this.
| 2 |
1,656 |
DM-8081
|
10/25/2016 15:51:38
|
unable to build release git tag when 3rd party deps change
|
The fundamental issue is that {{lsstsw}}/{{lsst-build}} operate on refs in the git repo. Due to the way eups generates version strings from git tags, we are unable to tag the repos for 3rd-party products. This means that we are unable to build the last tagged release from source once a 3rd-party dependency has been upgraded to an incompatible version.
| 3 |
1,657 |
DM-8084
|
10/25/2016 19:31:19
|
Throwing exception TableExistsError with no parameters
|
Database utility function *createTableFromSchema* doesn't provide parameters to the constructor of the exception class *TableExistsError* when throwing the exception. This causes the *wmgr* Web service to fail with the following stack: {code} Traceback (most recent call last): File "/qserv/stack/Linux64/flask/0.10.1.lsst2+1/lib/python/Flask-0.10.1-py2.7.egg/flask/app.py", line 1817, in wsgi_app response = self.full_dispatch_request() File "/qserv/stack/Linux64/flask/0.10.1.lsst2+1/lib/python/Flask-0.10.1-py2.7.egg/flask/app.py", line 1477, in full_dispatch_request rv = self.handle_user_exception(e) File "/qserv/stack/Linux64/flask/0.10.1.lsst2+1/lib/python/Flask-0.10.1-py2.7.egg/flask/app.py", line 1381, in handle_user_exception reraise(exc_type, exc_value, tb) File "/qserv/stack/Linux64/flask/0.10.1.lsst2+1/lib/python/Flask-0.10.1-py2.7.egg/flask/app.py", line 1475, in full_dispatch_request rv = self.dispatch_request() File "/qserv/stack/Linux64/flask/0.10.1.lsst2+1/lib/python/Flask-0.10.1-py2.7.egg/flask/app.py", line 1461, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/qserv/stack/Linux64/qserv/12.1.rc1-3-g72e15fd+3/lib/python/lsst/qserv/wmgr/dbMgr.py", line 321, in createTable utils.createTableFromSchema(dbConn, schema) File "/qserv/stack/Linux64/db/12.1.rc1+1/python/lsst/db/utils.py", line 288, in createTableFromSchema raise TableExistsError() TypeError: __init__() takes at least 4 arguments (1 given) {code} And this is the function affected: {code} Linux64/db/12.1.rc1/python/lsst/db/utils.py {code} {code:python} def createTableFromSchema(conn, schema): """ Create database table from given schema. @param conn Database connection or engine. @param schema String containing full schema of the table (it can be a dump containing "CREATE TABLE", "DROP TABLE IF EXISTS", comments, etc. Raises TableExistsError if the table already exists. Raises sqlalchemy exceptions. """ if conn.engine.url.get_backend_name() == "mysql": try: conn.execute(schema) except OperationalError as exc: log.error('Exception when creating table: %s', exc) if exc.orig.args[0] == MySqlErr.ER_TABLE_EXISTS_ERROR: raise TableExistsError() raise else: raise NoSuchModuleError(conn.engine.url.get_backend_name()) {code} The relevant constructor is found in one of the base classes of the thrown exception: {code} Linux64/sqlalchemy/1.0.8.lsst3+2/lib/python/SQLAlchemy-1.0.8-py2.7-linux-x86_64.egg/sqlalchemy/exc.py {code} {code:python} class DBAPIError(StatementError): ... def __init__(self, statement, params, orig, connection_invalidated=False): {code}
| 2 |
1,658 |
DM-8085
|
10/25/2016 19:40:33
|
Using misspelled name TableExistError for exception TableExistsError
|
The *wmgr* Web service attempts to throw an exception of the non-existing class *TableExistError* instead of *TableExistsError*. This causes the service to fail with the following stack trace: {code} 2016-10-25 23:57:01,814 [PID:392] [ERROR] (log_exception() at app.py:1423) wmgr: Exception on /dbs/sdss_stripe82_00/tables/RunDeepForcedSource/chunks [POST] Traceback (most recent call last): File "/qserv/stack/Linux64/flask/0.10.1.lsst2+1/lib/python/Flask-0.10.1-py2.7.egg/flask/app.py", line 1817, in wsgi_app response = self.full_dispatch_request() File "/qserv/stack/Linux64/flask/0.10.1.lsst2+1/lib/python/Flask-0.10.1-py2.7.egg/flask/app.py", line 1477, in full_dispatch_request rv = self.handle_user_exception(e) File "/qserv/stack/Linux64/flask/0.10.1.lsst2+1/lib/python/Flask-0.10.1-py2.7.egg/flask/app.py", line 1381, in handle_user_exception reraise(exc_type, exc_value, tb) File "/qserv/stack/Linux64/flask/0.10.1.lsst2+1/lib/python/Flask-0.10.1-py2.7.egg/flask/app.py", line 1475, in full_dispatch_request rv = self.dispatch_request() File "/qserv/stack/Linux64/flask/0.10.1.lsst2+1/lib/python/Flask-0.10.1-py2.7.egg/flask/app.py", line 1461, in dispatch_request return self.view_functions[rule.endpoint](**req.view_args) File "/qserv/stack/Linux64/qserv/12.1.rc1-3-g72e15fd+3/lib/python/lsst/qserv/wmgr/dbMgr.py", line 593, in createChunk except utils.TableExistError as exc: AttributeError: 'module' object has no attribute 'TableExistError' 2016-10-25 23:57:01,815 [PID:392] [INFO] (_log() at _internal.py:87) werkzeug: 141.142.181.132 - - [25/Oct/2016 23:57:01] "POST /dbs/sdss_stripe82_00/tables/RunDeepForcedSource/chunks HTTP/1.1" 500 - {code} The relevant code: {code} /qserv/stack/Linux64/qserv/12.1.rc1-3-g72e15fd+3/lib/python/lsst/qserv/wmgr/dbMgr.py:593 {code} {code:python} @dbService.route('/<dbName>/tables/<tblName>/chunks', methods=['POST']) def createChunk(dbName, tblName): ... 593: except utils.TableExistError as exc: {code}
| 2 |
1,659 |
DM-8087
|
10/26/2016 10:57:10
|
scons should clean the rerun
|
When you rerun {{ci_hsc}} after changing something, it fails because the versions have changed. The natural reaction is to run {{scons -c}} (i.e. clean), but unfortunately that doesn't help. Please make {{scons -c}} delete the rerun/ci_hsc directory (a sketch follows this entry).
| 0.5 |
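SCons lets a build register extra paths to remove on {{scons -c}} via its {{Clean()}} call; a sketch for the ci_hsc SConstruct, where {{everything}} stands in for whatever top-level target the build actually defines:
{code:python}
# Inside the ci_hsc SConstruct; Clean() and Dir() are SCons globals.
# `everything` is an assumed name for the build's top-level target/alias.
Clean(everything, Dir("rerun/ci_hsc"))
{code}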
1,660 |
DM-8090
|
10/26/2016 13:25:22
|
Update Verification Documentation 2
|
Update the lsst-dev7 + Verification Cluster documentation to cover the transition/renaming of the head node from lsst-dev7 to lsst-dev01.
| 2 |
1,661 |
DM-8091
|
10/26/2016 13:47:30
|
Fix cmsd warning message
|
Andy H. provided the solution: {quote}Hi Fabrice, Yes, this is common for configurations that don't expect to be dynamically written to but yet export the space in R/W mode (which we do). All the message is saying is you don't have enough space for impromptu writes (which is true but then you don't expect any). So, there is a configuration directive to resolve this problem: cms.space 1k 2k The directive is documented here: http://xrootd.org/doc/dev45/cms_config.htm#_Toc454223038 I would add this directive to the config file as soon as you can and restart the cmsd. Andy{quote}
| 2 |
1,662 |
DM-8094
|
10/26/2016 14:28:52
|
Replace Swarm default test container
|
version should be 'dev' here, but 'dev' need to be updated to contains DM-7139 code: {code} admin/tools/docker/deployment/swarm/manager/env-docker.sh @@ -2,8 +2,8 @@ # Allow to customize Docker container execution # VERSION can be a git ticket branch but with _ instead of / -# example: u_fjammes_DM-4295 -VERSION=dev +# example: tickets_DM-7139, or dev +VERSION=tickets_DM-7139 {code}
| 2 |
1,663 |
DM-8095
|
10/26/2016 14:51:11
|
Remove integration tests warning message
|
The sock option must be replaced by socket: {code} dev@clrinfopc04:/qserv/stack/Linux64/qserv_testdata/12.0+45$ grep -rw sock * ... python/lsst/qserv/tests/sql/cmd.py: self._mysql_cmd.append("--sock=%s" % self.config['mysqld']['sock']) python/lsst/qserv/tests/sql/cmd.py: self._mysql_cmd.append("--sock=%s" % self.config['mysqld']['sock']) {code} Maybe sock should also be replaced in Qserv code/config files in the future...
| 1 |
1,664 |
DM-8096
|
10/26/2016 17:37:27
|
Make "immediate=True" the default for butler.get()
|
Considering how problematic readProxy objects are to deal with, I feel it might be better to default to {{immediate=True}} for {{butler.get()}} calls. For any test code or interactive code, we want to get an immediately usable object, while inside the pipeline we can be more explicit about {{immediate=False}} when we realize that we're waiting on I/O (a usage sketch follows this entry).
| 0.5 |
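A usage sketch of the behaviour in question; the repo path, dataset name, and dataId are placeholders:
{code:python}
import lsst.daf.persistence as dafPersist

butler = dafPersist.Butler(inputs="someRepo")  # placeholder repo path
dataId = {"visit": 1, "ccd": 0}                # placeholder data id
exposure = butler.get("calexp", dataId, immediate=True)  # usable object now
proxy = butler.get("calexp", dataId)                     # default today: lazy readProxy
{code}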
1,665 |
DM-8099
|
10/27/2016 10:37:02
|
LDM-493 v1 Edits from DMLT feedback
|
This ticket covers edits to the [v1 integration branch|https://github.com/lsst/LDM-493/pull/4] of LDM-493 based on feedback from the DMLT.
| 0.5 |
1,666 |
DM-8100
|
10/27/2016 11:38:27
|
Determine what astropy.io.fits does with FITS header cards for missing data
|
As part of implementing RFC-239, determine how {{astropy.io.fits}} handles FITS header cards with missing (unknown) values. I am virtually certain it cannot write such cards, but what happens when it tries to read them? This will inform the implementation of RFC-239 (a probe sketch follows this entry).
| 1 |
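A quick probe one could run, not a statement of the answer (the behaviour is exactly what the ticket asks us to determine); assigning {{None}} is, as far as I know, how astropy spells a valueless card on write:
{code:python}
from astropy.io import fits

header = fits.Header()
header["UNDEF"] = None        # attempt to create a card with no value
print(repr(header.cards[0]))  # inspect how the card is serialized
print(repr(header["UNDEF"]))  # inspect what reading the value gives back
{code}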
1,667 |
DM-8102
|
10/27/2016 12:08:33
|
Introduce Set operations between SpanSets and masks
|
Currently SpanSets can intersect, intersectNot, and union between themselves and another SpanSet. This ticket will add the ability to use these methods with masks as the "other" operand. This work will replace the intersectMask and footprintAndMask methods from the old Footprint implementation.
| 2 |
1,668 |
DM-8103
|
10/27/2016 12:31:30
|
Add method to find edge pixels to SpanSet
|
Add a method to SpanSet which returns a new SpanSet containing the pixels at the border of the original SpanSet (a sketch of the idea follows this entry).
| 2 |
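One common way to get border pixels is erosion followed by set difference; a sketch on a dense boolean mask (the real method would operate on Spans rather than an array):
{code:python}
import numpy as np
from scipy.ndimage import binary_erosion

def edgePixels(mask):
    """Pixels of boolean array `mask` whose 4-neighbourhood leaves the region."""
    return mask & ~binary_erosion(mask)
{code}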
1,669 |
DM-8105
|
10/27/2016 14:24:49
|
Missing test case for SpherePoint
|
One of the changes introduced during review of DM-5529 was a mismatch between the behavior of indexing in Python and C++ -- negative indices are allowed in the former but not the latter, consistent with either language's conventions. However, indexing is tested only in the Python test suite against the Python specification. A test case should be added to {{testSpherePoint.cc}} to verify that {{operator[]}} behaves as specified when called directly.
| 2 |
1,670 |
DM-8106
|
10/27/2016 14:29:02
|
SpherePoint does not have move constructors/assignment
|
The {{SpherePoint}} class explicitly declares that it is using the default copy-constructor and copy assignment operator, but does not do the same with the move constructor and move assignment operator. This is inconsistent with RFC-209. Since there is no reason to disallow move semantics for {{SpherePoint}}, move constructors and move assignment should be explicitly declared. I am not aware of any way in which this change can be tested.
| 1 |
1,671 |
DM-8115
|
10/28/2016 07:04:05
|
jenkins master can't run builds / write temp files
|
I discovered this morning that there are a bunch of "dead" builds (zombies) listed for the build slave that runs on the same node as the jenkins master. The master was reporting that it is unable to create temp files and that is why the threads for these builds crashed. Further investigation shows that the master node is out of disk space and this seems to be due to 100's of GiB of console output from a build that has been running since the 25th and it is still running.
| 0.5 |
1,672 |
DM-8119
|
10/28/2016 10:41:07
|
Incorrect time zone settings within Qserv Docker containers
|
I have noticed that the timezone is not set correctly within the latest version of the Qserv containers. This is an example: {code:bash} % hostname lsst-qserv-db01 % date Fri Oct 28 11:33:00 CDT 2016 % docker exec -it qserv date Fri Oct 28 16:33:10 UTC 2016 {code} This may have various (some of them unpleasant) side effects, such as (to name just a few): * incorrect timing in the log files (which will make it hard to investigate problems) * shifted monitoring data (hard to interpret/analyze the monitoring data) * anything else which is time-dependent within the application (such as scheduling, etc.) It looks like the timezone configuration within the container is set to: {code:bash} % docker exec -it qserv cat /etc/timezone Etc/UTC {code} Should this be solved by running the container with this extra option, which would override the default setting of *Etc/UTC* set at container build time? {code:bash} % ls -l /etc/localtime lrwxrwxrwx 1 root root 35 Sep 23 09:43 /etc/localtime -> /usr/share/zoneinfo/America/Chicago % docker run ... -v /etc/localtime:/etc/localtime:ro {code} This would add a read-only mapping of the host's configuration file into the container.
| 2 |
1,673 |
DM-8122
|
10/28/2016 13:24:29
|
SpherePoint tests fail on macOS
|
Building afw on macOS Sierra I get: {code} tests/testSpherePoint.py .........................F.... ====================================================================== FAIL: testStrValue (__main__.SpherePointTestSuite) Test if __str__ produces output consistent with its spec. ---------------------------------------------------------------------- Traceback (most recent call last): File "tests/testSpherePoint.py", line 762, in testStrValue self.assertRegexpMatches(numbers[1], r'\+|-nan') AssertionError: Regexp didn't match: '\\+|-nan' not found in 'nan' ---------------------------------------------------------------------- Ran 30 tests in 2.210s FAILED (failures=1) {code} It is failing for point {{(180.000000, nan)}}.
| 2 |
1,674 |
DM-8138
|
10/31/2016 08:22:05
|
Implement changes requested in RFC-247
|
Move several common datasets to obs_base. This will not affect the dataset definitions of the individual cameraMappers.
| 3 |
1,675 |
DM-8147
|
10/31/2016 13:54:23
|
Fix error while loading deepCoadd fits file.
|
dax_imgserv experiences an error while loading lsst-qserv-dax01:/datasets/gapon/data/DC_2013/coadd/deepCoadd/r/0/321,4.fits. This same file should be available as: lsst-dev01.ncsa.illinois.edu:/datasets/gapon/data/DC_2013/coadd/deepCoadd/r/0/321,4.fits. The error is: {code} MalformedArchiveError: File "src/CoaddPsf.cc", line 358, in virtual std::shared_ptr<lsst::afw::table::io::Persistable> lsst::meas::algorithms::CoaddPsf::Factory::read(const InputArchive&, const CatalogVector&) const Archive assertion failed: catalogs.size() == 2u {0} File "src/table/io/InputArchive.cc", line 105, in std::shared_ptr<lsst::afw::table::io::Persistable> lsst::afw::table::io::InputArchive::Impl::get(int, const lsst::afw::table::io::InputArchive&) {code} On lsst-qserv-dax01.ncsa.illinois.edu this can be reproduced by {code} $ source /software/lsstsw/stack/loadLSST.bash $ setup meas_algorithms python >>> import lsst.afw.image >>> e = lsst.afw.image.ExposureF("/datasets/gapon/data/DC_2013/coadd/deepCoadd/r/0/321,4/coadd-r-0-321,4.fits") {code}
| 1 |
1,676 |
DM-8155
|
11/01/2016 10:56:14
|
Update XRootD from upstream (again)
|
This captures the logging plugin changes from upstream xrootd xrdssi branch. Passed jenkins.
| 0.5 |
1,677 |
DM-8156
|
11/01/2016 11:14:25
|
add support for Slurm to allocateNodes.py
|
Add support for allocation of HTCondor nodes through Slurm via the allocateNodes.py command. This is related to work being done for DM-8154.
| 20 |
1,678 |
DM-8157
|
11/01/2016 11:20:57
|
Implement an image search processor to access the image from PDAC
|
This is the working extension for DM-8010, which implemented the MetaData and TableData search processors. The image search processor is needed as well. It will also exercise the cutout service that the SLAC imgServ API will provide. The image search processor (LSSTImageSearch) searches the DAX using either an id or a set of ids to locate the image, and then displays them in the tree-view window. There are two types of search URL. Coadded image (and cutout) retrieval: curl -o outImageCoadd.fits "http://lsst-qserv-dax01.ncsa.illinois.edu:5000/image/v0/deepCoadd/id?id=23986176" A Science CCD can also be searched using an id, but the preferred way is using a set of ids, such as: curl -o outImage3.fits "http://lsst-qserv-dax01.ncsa.illinois.edu:5000/image/v0/calexp/ids?run=3325&camcol=1&field=171&filter=z" The image search processor processes the parameters passed from the UI and then builds a URL like the above to find the image.
| 8 |
1,679 |
DM-8169
|
11/02/2016 11:47:00
|
Use -isystem (rather than -I) for include files from external packages
|
See RFC-246 for the reasoning and for an example implementation.
| 2 |
1,680 |
DM-8179
|
11/03/2016 15:22:03
|
Preserve background jobs and statuses beyond a browser session.
|
Currently, background jobs and statuses are kept on the client. That information is lost after a browser reload. We need to save it so that a user can come back at a later time and still have it presented. Implementation: save the data on the server; push all statuses to the client when it connects; continue to update statuses while the client is connected.
| 8 |
1,681 |
DM-8182
|
11/03/2016 17:16:15
|
Resend email notification when email value changes
|
A new email notification should be sent for every successfully completed background job when a new email address is entered.
| 2 |
1,682 |
DM-8187
|
11/04/2016 09:21:41
|
Qserv czar crashes itself and mysql-proxy on invalid queries
|
h1. Problem summary The Qserv *mysql-proxy* service always crashes on queries made against non-existing databases or without any specific database context. The same behavior is observed for queries addressed to databases which are present within the MySQL/MariaDB service of the Qserv *master* node while not being registered with Qserv's CSS. Examples of queries based on the integration test setup: {code:sql} SELECT COUNT(*) FROM AnyTable; SELECT COUNT(*) FROM UnknownDatabase.SomeTable; SELECT COUNT(*) FROM qservTest_case03_mysql.RunDeepSource; {code} h1. Details Once the crash happens, no further details are found in the service's log files (the report was made by logging into a running Docker container): {code:bash} [gapon@lsst-qserv-master01 ~] docker exec -it qserv bash qserv@lsst-qserv-master01:/qserv$ ls -al run/var/log/ .. -rw-r--r-- 1 qserv qserv 2193695372 Nov 4 00:52 mysql-proxy-lua.log -rw-r----- 1 qserv qserv 11811 Nov 4 00:51 mysql-proxy.log {code} The only (and the last) relevant record left in *mysql-proxy-lua.log* is about the query causing the crash. For example: {code} % tail mysql-proxy-lua.log ..[2016-11-04T01:27:40.883-0500] [LWP:1402] DEBUG ccontrol.UserQuerySelect (core/modules/ccontrol/UserQuerySelect.cc:397) - QI=227: UserQuery registered SELECT * FROM R LIMIT 1 {code} Another obstacle to investigating the root cause of the problem was that no core file was left by the crashed process. Further investigation revealed that this was happening because the proxy is usually launched with the *--daemon* option (the report was taken from within a running Docker container): {code:bash} qserv@lsst-qserv-master01:/qserv$ ps -ef | grep proxy qserv 1540 0 0 01:44 ? 00:00:00 mysql-proxy --daemon --proxy-lua-script=… {code} In order to get the core dump, the following actions were taken. First of all, the core configuration file of the container's host machine was modified to prefix core files with the name of the crashed executable: {code:bash} sudo -i echo "%e.core" > /proc/sys/kernel/core_pattern {code} The next step was to ensure no limit for core dumps is set for user *qserv* within the Docker container of the *Master* image: {code:bash} [gapon@lsst-qserv-master01 ~] docker exec -it qserv bash qserv@lsst-qserv-master01:/qserv$ ulimit -c unlimited qserv@lsst-qserv-master01:/qserv$ ulimit -a core file size (blocks, -c) unlimited ... {code} The next step was to disable option *--daemon* in the service management file: {code} /qserv/run/etc/init.d/mysql-proxy {code} The new configuration was tested by stopping/starting the service from within the container: {code} qserv@lsst-qserv-master01:/qserv$ run/etc/init.d/mysql-proxy stop [ ok ing mysql-proxy. qserv@lsst-qserv-master01:/qserv$ run/etc/init.d/mysql-proxy start [ ok ing mysql-proxy.. {code} Then the following query was made to crash the service: {code:sql} SELECT * FROM R LIMIT 1 {code} After the service went down, the desired core file was found in the following folder of the running container: {code:bash} qserv@lsst-qserv-master01:/qserv$ ls -al .. -rw------- 1 qserv qserv 28422144 Nov 4 01:27 mysql-proxy.core.1402 {code} The dump was analyzed with *gdb* to get the stack of the crash: {code} qserv@lsst-qserv-master01:/qserv$ which mysql-proxy /qserv/stack/Linux64/mysqlproxy/0.8.5+12/bin/mysql-proxy qserv@lsst-qserv-master01:/qserv$ gdb `which mysql-proxy` mysql-proxy.core.1402 … Reading symbols from /qserv/stack/Linux64/mysqlproxy/0.8.5+12/bin/mysql-proxy...done. 
[New LWP 1402] [New LWP 1436] [New LWP 1437] [New LWP 1438] [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Core was generated by `mysql-proxy --proxy-lua-script=/qserv/stack/Linux64/qserv/12.1.rc1-3-g72e15fd+3'. Program terminated with signal SIGSEGV, Segmentation fault. #0 0x00007f4bf1445573 in lsst::qserv::qdisp::Executive::setQueryId (this=0x0, id=227) at core/modules/qdisp/Executive.cc:103 103 core/modules/qdisp/Executive.cc: No such file or directory. (gdb) where #0 0x00007f4bf1445573 in lsst::qserv::qdisp::Executive::setQueryId (this=0x0, id=227) at core/modules/qdisp/Executive.cc:103 #1 0x00007f4bf1293bc8 in lsst::qserv::ccontrol::UserQuerySelect::_qMetaRegister (this=0x180ea80) at core/modules/ccontrol/UserQuerySelect.cc:398 #2 0x00007f4bf1290c1c in lsst::qserv::ccontrol::UserQuerySelect::UserQuerySelect (this=0x180ea80, qs=std::shared_ptr (count 2, weak 0) 0x1801870, messageStore=std::shared_ptr (count 2, weak 0) 0x180efc0, executive=std::shared_ptr (empty) 0x0, infileMergerConfig=std::shared_ptr (empty) 0x0, secondaryIndex=std::shared_ptr (count 2, weak 0) 0x179fa70, queryMetadata=std::shared_ptr (count 2, weak 0) 0x17c9da0, czarId=2, errorExtra="") at core/modules/ccontrol/UserQuerySelect.cc:151 #3 0x00007f4bf12868b6 in __gnu_cxx::new_allocator<lsst::qserv::ccontrol::UserQuerySelect>::construct<lsst::qserv::ccontrol::UserQuerySelect<std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&> > (this=0x7ffc21ece13f, __p=0x180ea80) at /usr/include/c++/4.9/ext/new_allocator.h:120 #4 0x00007f4bf128607a in std::allocator_traits<std::allocator<lsst::qserv::ccontrol::UserQuerySelect> >::_S_construct<lsst::qserv::ccontrol::UserQuerySelect<std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&> >(std::allocator<lsst::qserv::ccontrol::UserQuerySelect>&, std::allocator_traits<std::allocator<lsst::qserv::ccontrol::UserQuerySelect> >::__construct_helper*, (lsst::qserv::ccontrol::UserQuerySelect<std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&>&&)...) 
(__a=..., __p=0x180ea80) at /usr/include/c++/4.9/bits/alloc_traits.h:253 #5 0x00007f4bf12858d4 in std::allocator_traits<std::allocator<lsst::qserv::ccontrol::UserQuerySelect> >::construct<lsst::qserv::ccontrol::UserQuerySelect<std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&> >(std::allocator<lsst::qserv::ccontrol::UserQuerySelect>&, lsst::qserv::ccontrol::UserQuerySelect<std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&>*, (lsst::qserv::ccontrol::UserQuerySelect<std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&>&&)...) (__a=..., __p=0x180ea80) at /usr/include/c++/4.9/bits/alloc_traits.h:399 #6 0x00007f4bf1284c74 in std::_Sp_counted_ptr_inplace<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, (__gnu_cxx::_Lock_policy)2>::_Sp_counted_ptr_inplace<std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&> (this=0x180ea70, __a=...) 
at /usr/include/c++/4.9/bits/shared_ptr_base.h:515 #7 0x00007f4bf1283f40 in __gnu_cxx::new_allocator<std::_Sp_counted_ptr_inplace<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, (__gnu_cxx::_Lock_policy)2> >::construct<std::_Sp_counted_ptr_inplace<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, (__gnu_cxx::_Lock_policy)2><std::allocator<lsst::qserv::ccontrol::UserQuerySelect> const, std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&> > (this=0x7ffc21ece387, __p=0x180ea70) at /usr/include/c++/4.9/ext/new_allocator.h:120 #8 0x00007f4bf1283427 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, (__gnu_cxx::_Lock_policy)2> > >::_S_construct<std::_Sp_counted_ptr_inplace<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySele---Type <return> to continue, or q <return> to quit--- ct>, (__gnu_cxx::_Lock_policy)2><std::allocator<lsst::qserv::ccontrol::UserQuerySelect> const, std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&> >(std::allocator<std::_Sp_counted_ptr_inplace<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, (__gnu_cxx::_Lock_policy)2> >&, std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, (__gnu_cxx::_Lock_policy)2> > >::__construct_helper*, (std::_Sp_counted_ptr_inplace<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, (__gnu_cxx::_Lock_policy)2><std::allocator<lsst::qserv::ccontrol::UserQuerySelect> const, std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&>&&)...) 
(__a=..., __p=0x180ea70) at /usr/include/c++/4.9/bits/alloc_traits.h:253 #9 0x00007f4bf1282887 in std::allocator_traits<std::allocator<std::_Sp_counted_ptr_inplace<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, (__gnu_cxx::_Lock_policy)2> > >::construct<std::_Sp_counted_ptr_inplace<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, (__gnu_cxx::_Lock_policy)2><std::allocator<lsst::qserv::ccontrol::UserQuerySelect> const, std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&> >(std::allocator<std::_Sp_counted_ptr_inplace<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, (__gnu_cxx::_Lock_policy)2> >&, std::_Sp_counted_ptr_inplace<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, (__gnu_cxx::_Lock_policy)2><std::allocator<lsst::qserv::ccontrol::UserQuerySelect> const, std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&>*, (std::_Sp_counted_ptr_inplace<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, (__gnu_cxx::_Lock_policy)2><std::allocator<lsst::qserv::ccontrol::UserQuerySelect> const, std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&>&&)...) (__a=..., __p=0x180ea70) at /usr/include/c++/4.9/bits/alloc_traits.h:399 #10 0x00007f4bf1281b27 in std::__shared_count<(__gnu_cxx::_Lock_policy)2>::__shared_count<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&> ( this=0x7ffc21ece928, __a=...) at /usr/include/c++/4.9/bits/shared_ptr_base.h:619 #11 0x00007f4bf1280e15 in std::__shared_ptr<lsst::qserv::ccontrol::UserQuerySelect, (__gnu_cxx::_Lock_policy)2>::__shared_ptr<std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&> ( this=0x7ffc21ece920, __tag=..., __a=...) 
at /usr/include/c++/4.9/bits/shared_ptr_base.h:1090 #12 0x00007f4bf1280388 in std::shared_ptr<lsst::qserv::ccontrol::UserQuerySelect>::shared_ptr<std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&> (this=0x7ffc21ece920, __tag=..., __a=...) at /usr/include/c++/4.9/bits/shared_ptr.h:316 #13 0x00007f4bf127f778 in std::allocate_shared<lsst::qserv::ccontrol::UserQuerySelect, std::allocator<lsst::qserv::ccontrol::UserQuerySelect>, std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, ---Type <return> to continue, or q <return> to quit--- std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&> (__a=...) at /usr/include/c++/4.9/bits/shared_ptr.h:588 #14 0x00007f4bf127eafb in std::make_shared<lsst::qserv::ccontrol::UserQuerySelect, std::shared_ptr<lsst::qserv::qproc::QuerySession>&, std::shared_ptr<lsst::qserv::qdisp::MessageStore>&, std::shared_ptr<lsst::qserv::qdisp::Executive>&, std::shared_ptr<lsst::qserv::rproc::InfileMergerConfig>&, std::shared_ptr<lsst::qserv::qproc::SecondaryIndex>&, std::shared_ptr<lsst::qserv::qmeta::QMeta>&, unsigned int&, std::string&> () at /usr/include/c++/4.9/bits/shared_ptr.h:604 #15 0x00007f4bf127cf8d in lsst::qserv::ccontrol::UserQueryFactory::newUserQuery (this=0x17c79a0, query="SELECT * FROM R LIMIT 1", defaultDb="") at core/modules/ccontrol/UserQueryFactory.cc:126 Python Exception <type 'exceptions.ValueError'> Cannot find type const std::map<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >::_Rep_type: #16 0x00007f4bf129f3aa in lsst::qserv::czar::Czar::submitQuery (this=0x17c6df0, query="SELECT * FROM R LIMIT 1", hints=std::map with 3 elements) at core/modules/czar/Czar.cc:125 Python Exception <type 'exceptions.ValueError'> Cannot find type const std::map<std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::basic_string<char, std::char_traits<char>, std::allocator<char> >, std::less<std::basic_string<char, std::char_traits<char>, std::allocator<char> > >, std::allocator<std::pair<std::basic_string<char, std::char_traits<char>, std::allocator<char> > const, std::basic_string<char, std::char_traits<char>, std::allocator<char> > > > >::_Rep_type: #17 0x00007f4bf17e778a in lsst::qserv::proxy::submitQuery (query="SELECT * FROM R LIMIT 1", hints=std::map with 3 elements) at core/modules/proxy/czarProxy.cc:102 #18 0x00007f4bf17f5556 in _wrap_submitQuery (L=0x17703c0) at build/proxy/czarProxy_wrap.c++:4177 #19 0x00007f4bf38e50c4 in luaD_precall () from /qserv/stack/Linux64/mysqlproxy/0.8.5+12/lib/libmysql-chassis.so.0 #20 0x00007f4bf38e54a4 in luaD_call () from /qserv/stack/Linux64/mysqlproxy/0.8.5+12/lib/libmysql-chassis.so.0 #21 
0x00007f4bf38e487b in luaD_rawrunprotected () from /qserv/stack/Linux64/mysqlproxy/0.8.5+12/lib/libmysql-chassis.so.0 #22 0x00007f4bf38e565b in luaD_pcall () from /qserv/stack/Linux64/mysqlproxy/0.8.5+12/lib/libmysql-chassis.so.0 #23 0x00007f4bf38e2ebc in lua_pcall () from /qserv/stack/Linux64/mysqlproxy/0.8.5+12/lib/libmysql-chassis.so.0 #24 0x00007f4bf38f32f8 in luaB_pcall () from /qserv/stack/Linux64/mysqlproxy/0.8.5+12/lib/libmysql-chassis.so.0 #25 0x00007f4bf38e50c4 in luaD_precall () from /qserv/stack/Linux64/mysqlproxy/0.8.5+12/lib/libmysql-chassis.so.0 #26 0x00007f4bf38ee26a in luaV_execute () from /qserv/stack/Linux64/mysqlproxy/0.8.5+12/lib/libmysql-chassis.so.0 #27 0x00007f4bf38e54ed in luaD_call () from /qserv/stack/Linux64/mysqlproxy/0.8.5+12/lib/libmysql-chassis.so.0 #28 0x00007f4bf38e487b in luaD_rawrunprotected () from /qserv/stack/Linux64/mysqlproxy/0.8.5+12/lib/libmysql-chassis.so.0 #29 0x00007f4bf38e565b in luaD_pcall () from /qserv/stack/Linux64/mysqlproxy/0.8.5+12/lib/libmysql-chassis.so.0 #30 0x00007f4bf38e2ebc in lua_pcall () from /qserv/stack/Linux64/mysqlproxy/0.8.5+12/lib/libmysql-chassis.so.0 #31 0x00007f4bf1a08fa6 in proxy_lua_read_query (con=con@entry=0x176c470) at proxy-plugin.c:1227 #32 0x00007f4bf1a09155 in proxy_read_query (chas=chas@entry=0x1758620, con=con@entry=0x176c470) at proxy-plugin.c:1334 #33 0x00007f4bf36b4c4d in plugin_call (srv=0x1758620, con=0x176c470, state=<optimized out>) at network-mysqld.c:892 #34 0x00007f4bf36b6283 in network_mysqld_con_handle (event_fd=11, events=2, user_data=0x176c470) at network-mysqld.c:1617 #35 0x00007f4bf2958ed0 in event_process_active_single_queue (activeq=<optimized out>, base=<optimized out>) at event.c:1325 #36 event_process_active (base=<optimized out>) at event.c:1392 #37 event_base_loop (base=0x1769f40, flags=flags@entry=0) at event.c:1589 #38 0x00007f4bf2959b87 in event_base_dispatch (event_base=<optimized out>) at event.c:1420 #39 0x00007f4bf38dfa0a in chassis_event_thread_loop (event_thread=0x1769e90) at chassis-event-thread.c:466 #40 0x00007f4bf38df496 in chassis_mainloop (_chas=0x1758620) at chassis-mainloop.c:359 #41 0x0000000000402a09 in main_cmdline (argc=1, argv=0x7ffc21ed1d78) at mysql-proxy-cli.c:597 #42 0x00007f4bf20a1b45 in __libc_start_main (main=0x401db0 <main>, argc=6, argv=0x7ffc21ed1d78, init=<optimized out>, fini=<optimized out>, rtld_fini=<optimized out>, stack_end=0x7ffc21ed1d68) at libc-start.c:287 ---Type <return> to continue, or q <return> to quit--- #43 0x0000000000401dde in _start () {code}
| 1 |
1,683 |
DM-8189
|
11/04/2016 10:30:50
|
CatalogConstraintsPanel need to handle fetch errors
|
At the moment, when an error occurs, the panel does not update, leaving the loading masks visible indefinitely. The panel should instead update itself with error message(s) from the fetch. TODO: - Fix CatalogConstraintsPanel - Add generic error display to BasicTableView - Move createErrorTbl from TablesCntlr to TableUtil
| 2 |
1,684 |
DM-8206
|
11/07/2016 16:44:28
|
Coverage is not shown if active table set before table loads
|
I think there are flaws in how showImages is set in the FireflyViewerManager layoutManager saga. On the first catalog load, coverage is not shown if the active table is set before the table is loaded. At dispatchUpdateLayoutInfo time, when the active table is set before the table is loaded: tableResults.added showImages=false, tableResults.active showImages=false, table.loaded showImages=true, charts.data/chartAdd showImages=false. When the active table is set after the table is loaded: tableResults.added showImages=false, table.loaded showImages=true, charts.data/chartAdd showImages=false, tableResults.active showImages=true. You can test it by changing line 340 in TablesCntlr.js to {code} dispatchActiveTableChanged(tbl_id, options.tbl_group); {code} instead of {code} dispatchAddSaga(doOnTblLoaded, {tbl_id, callback:() => dispatchActiveTableChanged(tbl_id, options.tbl_group)}); {code}
| 8 |
1,685 |
DM-8210
|
11/08/2016 08:47:00
|
Revert accidental rename of id->objectId in ForcedPhotCoaddTask
|
The "id" column in coadd forced photometry outputs is being renamed to "objectId", due to some code intended to *add* an "objectId" column to CCD level forced photometry being accidentally applied.
| 0.5 |
1,686 |
DM-8211
|
11/08/2016 09:29:46
|
Add support for reading a catalog schema
|
It would be helpful to be able to read a catalog schema without reading the entire catalog (e.g., to prepare for concatenating catalogs). I propose adding {{static}} methods: {code} Schema::readFits(std::string const& filename); Schema::fromFitsMetadata(daf::base::PropertySet & header); {code}
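For comparison, the same trick is already possible from Python by reading only the binary-table header with {{astropy.io.fits}}; a minimal sketch (the file name is illustrative): {code:python}
from astropy.io import fits

# Open lazily (memmap) so no table data is read; "src.fits" is a stand-in
# for any FITS catalog file.
with fits.open("src.fits", memmap=True) as hdus:
    hdu = hdus[1]            # catalogs live in an extension HDU
    for col in hdu.columns:  # built from TTYPEn/TFORMn header keywords only
        print(col.name, col.format)
{code}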
| 1 |
1,687 |
DM-8212
|
11/08/2016 09:30:30
|
Add support for reading metadata, length and schema of a catalog
|
The {{butler.get(dataset + "_md", dataId)}} pattern is only supported for images, not catalogs. It would be useful to allow this to work on catalogs. At the same time, it would be useful to add support for getting the length and schema of a catalog.
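A sketch of how the proposed pattern might look for catalogs, assuming the analogues of the image suffixes are exposed as {{_md}}, {{_len}} and {{_schema}} (the suffix names here are illustrative, not final): {code:python}
# "src" and dataId stand in for any catalog dataset type and a valid data ID.
md = butler.get("src_md", dataId)          # FITS header only, no rows read
nRows = butler.get("src_len", dataId)      # number of records in the catalog
schema = butler.get("src_schema", dataId)  # schema of the catalog
{code}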
| 1 |
1,688 |
DM-8215
|
11/08/2016 11:24:47
|
Use DAX metaServ to properly handle multiple databases in PDAC
|
5/15/2017 dbserv v1 (albuquery) is integrated with metaserv v1. As a consequence, column metadata are returned with the data in the metadata section of the result. As a part of this ticket, dbserv v1 and metaserv v1 were tested and debugged, tickets were created for outstanding issues, and SUIT code was updated. * Consolidate PDAC table information, used by SUIT, in LSSTMetaInfo.json on the server side. In the future this file might be converted to a table and stored in metaserv under SUIT curation. To facilitate development, we'd like to store it as a resource with the source code. * Update SUIT code to work with dbserv v1 (albuquery) and metaserv v1 a. Query metaserv to create the table constraints table b. Use column metadata returned by dbserv for data definition info, such as units and description XW (12/21/2016) We will be using MetaServ to get the database names, data types served, table names, and other information. 09/18/2017 (Tatiana Goldina) The information provided by the current [metaserv prototype|http://lsst-qserv-dax01.ncsa.illinois.edu:5005/meta/v1/db/] is not yet sufficient to automate search interface creation. There is an ongoing discussion with the DAX team on which information is missing. See [https://confluence.lsstcorp.org/display/DM/Data+Access+meeting+2017-09-11] At the moment, it is not clear how to replace the information in [LSSTMetaInfo.json|https://github.com/Caltech-IPAC/firefly/blob/dev/src/firefly/java/edu/caltech/ipac/firefly/resources/LSSTMetaInfo.json] (used by SUIT to support PDAC) with the information from metaserv. The current implementation is sufficient to get column name, datatype, unit, and description, if we'd like to descope the use of metaserv to just this purpose. Metaserv prototype: [http://lsst-qserv-dax01.ncsa.illinois.edu:5005/meta/v1/db] Metaserv documentation: [http://dm.lsst.org/dax_metaserv/api.html] Table definitions in the available database schema: [http://lsst-qserv-dax01.ncsa.illinois.edu:5005/meta/v1/db/W13_sdss/sdss_stripe82_00/tables/] Dump of column definitions: [https://gist.github.com/brianv0/e6cfc4ba36ced2a57eb131210ba16c46#file-dump-yaml-L34] _______________________________________ This ticket lists two issues to be considered later: * The test version of the two search processors had a hard-coded database name. Since there will be multiple databases, we need to find a better way to handle this. Should it be passed to the server side or should it be stored in a configuration file? Currently, if the database name is not defined in the property, the hard-coded default name is used. {code} //private static final String DATABASE_NAME =AppProperties.getProperty("lsst.database" , "gapon_sdss_stripe92_patch366_0"); private static final String DATABASE_NAME =AppProperties.getProperty("lsst.database" , ""); {code} * When the DAX server is down, it takes a long time to return a failure message. To avoid confusing the users, the timeout control is set in the getDataToFileUsingPost method of the URLDownload class. The default value is 30 seconds. The timeout field may be moved to the property file later. {code} URLConnection c = makeConnection(url, cookies, requestHeader, false); c.setConnectTimeout(timeout * 1000); // sets the timeout value, in milliseconds c.setReadTimeout(timeout * 1000); {code}
| 20 |
1,689 |
DM-8216
|
11/08/2016 11:33:56
|
Pass full available precision, when doing histogram for Long values
|
This ticket will handle the first part of DM-8180, which will result in a correct histogram display for long values that can be converted to doubles without losing precision (e.g., the LSST HTM index).
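The boundary in question is 2^53: doubles represent every integer exactly up to that point, beyond which distinct long values start to collide. A quick demonstration: {code:python}
# 2**53 ends the range of contiguous integers a double can hold exactly;
# one step beyond, two distinct long values map to the same double.
print(float(2**53) == float(2**53 + 1))  # True -- precision already lost
print(float(2**52) == float(2**52 + 1))  # False -- still exact here
{code}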
| 2 |
1,690 |
DM-8221
|
11/08/2016 15:04:30
|
Inconsistency in forced schema catalogs
|
[~nchotard] has discovered an inconsistency between the {{deepCoadd_forced_src}} catalogs and their accompanying {{_schema}} dataset in some processed Megacam data; one of these (I forget which) has a few additional fields, which mostly seem related to aperture correction. The first step is to add a test for this in {{pipe_tasks}}' {{testCoadds.py}}; if that doesn't fail, we can try to reproduce in ci_hsc, and if that fails to reproduce the problem I'll ask for more details about the specific dataset.
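The consistency check could look roughly like this (a sketch only; {{butler}} and {{dataId}} are assumed to be in scope, and the exact schema comparison may need tuning): {code:python}
# Sketch of the proposed test: the catalog's own schema should match the
# schema advertised by the accompanying _schema dataset.
cat = butler.get("deepCoadd_forced_src", dataId)
schemaCat = butler.get("deepCoadd_forced_src_schema", immediate=True)
assert cat.schema == schemaCat.schema, "catalog and _schema dataset disagree"
{code}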
| 2 |
1,691 |
DM-8222
|
11/08/2016 17:04:03
|
further qserv container time offset fix
|
Improve the original fix proposed by [DM-8119] to make it fully compatible with the Debian Linux distribution used to build Qserv containers. It requires the following two files to be properly initialized: {code} /etc/localtime /etc/timezone {code} *NOTE*: the second file doesn't exist in RedHat-based Linux distributions. Some ideas on how to solve this problem for Docker containers can be found in the following documents: * [https://www.ivankrizsan.se/2015/10/31/time-in-docker-containers/] * [https://debian-administration.org/article/213/Changing_the_timezone_of_your_Debian_system]
| 1 |
1,692 |
DM-8223
|
11/08/2016 17:10:05
|
write a test for getting mapper from _parent in v1 butlers
|
Write a test that shows that the butler can find the mapper from {{_parent}} directories when using v1 butler functionality.
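Roughly the shape such a test could take (a sketch under the assumption that {{Butler.getMapperClass}} is the lookup being exercised; the repository layout is hypothetical): {code:python}
import lsst.daf.persistence as dafPersist

# Hypothetical layout: "child" contains no _mapper file of its own, only a
# _parent link pointing at "parent", which does declare a mapper.
mapperClass = dafPersist.Butler.getMapperClass("child")
assert mapperClass is not None  # resolved by following the _parent chain
{code}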
| 1 |
1,693 |
DM-8224
|
11/08/2016 17:27:49
|
DAX dbserv fails when returning a COUNT() result of zero.
|
This query: {{curl -d 'query=SELECT+COUNT\(*)+FROM+RunDeepForcedSource+WHERE+objectId=3219370046129638' http://lsst-qserv-dax01.ncsa.illinois.edu:5000/db/v0/tap/sync}}, whose WHERE clause matches no rows, fails with a spectacular stream of HTML containing an error report (see attached file). The equivalent {{SELECT+*}} works fine. I suspect this is a {{dbserv}}-level problem. Note that the {{<title>}} element in the HTML contains the string "TypeError: Decimal('0') is not JSON serializable", which seems very suggestive. This is rather high priority to fix, because it is easy to imagine the PDAC SUIT generating queries containing {{COUNT()}} clauses. I don't _know_ that there already is such code, though.
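The {{<title>}} string points at the usual culprit: the MySQL driver returns {{COUNT()}} results as {{Decimal}}, which the standard {{json}} module refuses to serialize. One possible shape of a dbserv-side fix (illustrative only, not the actual dbserv code): {code:python}
import decimal
import json

class DecimalEncoder(json.JSONEncoder):
    """Coerce Decimal values (e.g. COUNT() results) to plain numbers."""
    def default(self, obj):
        if isinstance(obj, decimal.Decimal):
            # COUNT() is always integral; fall back to float otherwise.
            return int(obj) if obj == obj.to_integral_value() else float(obj)
        return super(DecimalEncoder, self).default(obj)

print(json.dumps({"count": decimal.Decimal("0")}, cls=DecimalEncoder))
# -> {"count": 0}
{code}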
| 1 |
1,694 |
DM-8227
|
11/08/2016 23:01:40
|
webserv: don't use single-threaded flask internal server
|
The flask internal web server is single-threaded, which will cause web services to block, e.g., during long-running queries. We will need to place some sort of multithreaded/pooling web server (e.g., nginx + WSGI) in front of the flask service for acceptable behavior until disconnected queries are implemented/adopted.
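As a stopgap short of a full nginx + WSGI deployment, even enabling werkzeug's threaded mode would keep one long query from blocking every other request; a minimal sketch (the import path for the app object is hypothetical): {code:python}
from webserv.app import app  # hypothetical path to the existing flask app

# Handle each request in its own thread instead of the default
# single-threaded development loop.
app.run(host="0.0.0.0", port=5000, threaded=True)
{code}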
| 2 |
1,695 |
DM-8228
|
11/08/2016 23:19:36
|
Discuss 20% DR1 dataset strategy with Serge
|
Talked to Serge about the existing 10% DR1 dataset and potential strategies for building a 20% DR1 dataset. The existing dataset was produced from SDSS Stripe 82 data, scaled up to 10% DR1 by replicating it to cover a greater sky area. Serge thinks we may not be able to similarly reach a 20% DR1 dataset by just scaling coverage (there may not be enough unused coverage left). Instead, we may need to increase chunk density by modifying the replicator to map source HTM cells to HTM cells of the next smaller scale; since each HTM trixel subdivides into four children, this would increase density by a factor of four. We could then either manipulate coverage or thin data during replication to converge on the 20% DR1 target.
| 0.5 |
1,696 |
DM-8229
|
11/09/2016 09:00:50
|
Add sims_survey_fields to lsstsw
|
A new package, {{sims_survey_fields}}, will be added to the _repos.yaml_ in {{lsstsw}}.
| 1 |
1,697 |
DM-8230
|
11/09/2016 12:45:52
|
forcedPhotCcd.py doesn't work on DECam data
|
{{forcedPhotCcd.py}} on DECam data with {{tract,visit,ccdnum}} data IDs fails with a registry lookup error. This is due to an incomplete workaround for a known issue: [incomplete data IDs with skymap keys cannot be filled in by the registry|https://community.lsst.org/t/problem-with-megacam-dataid/1199/8], because the skymap keys are included in the registry queries but don't exist in the registry tables. Our usual workaround for datasets like {{forced_src}} (which typically require a registry lookup to fill in unspecified data ID keys) is to do the registry lookup on a similar type without a skymap key (such as {{src}}) by calling {{Butler.subset}}. In {{obs_decam}}, however, the {{src}} template requires only {{visit, ccdnum}}, while {{forced_src}} also requires {{filter}}. Because the data ID we're passing to {{Butler.subset}} is complete for {{src}}, it skips the registry lookup and passes an incomplete (for {{forced_src}}) data ID on to later code, which fails trying to complete a {{forced_src}} data ID that includes {{tract}}. There are a few ways I can imagine fixing this: - We could modify the {{obs_decam}} template to remove {{filter}}. This is really just a band-aid, but it'd be easy, and since we're discovering this bug now I think it's unlikely there are any DECam CCD forced photometry results sitting on disk that this would break. - We could copy some of the logic in {{ButlerSubset}} into the data ID mangling code for {{forcedPhotCcd.py}}, having it call {{getKeys}} and {{queryMetadata}} directly to fill out the data ID. We could alternatively add a new {{Butler}} method to do this more flexibly, which the {{forcedPhotCcd.py}} code could delegate to. - We could modify {{ButlerSubset}} or {{queryMetadata}} to explicitly ignore skymap ({{tract}} and {{patch}}) data ID keys when querying the registry (see the sketch below). I think this would be a simple fix, and it would avoid painful workarounds of the sort mentioned above when reading {{forced_src}} data. [~ktl], [~npease], the last of the above solutions is my preferred one, but I wanted to check with you to make sure you weren't bothered by the special-casing of "tract" and "patch" data ID keys in the butler. I'm happy to do the work myself (this is to support some DESC work that uncovered the bug, and I'm at DESC hack week).
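A minimal sketch of what that third option might look like inside the registry-query path (names below are illustrative, not actual butler internals): {code:python}
SKYMAP_KEYS = ("tract", "patch")  # keys that never appear in the registry

def stripSkymapKeys(dataId):
    """Drop skymap-only keys before a registry lookup so that partial
    data IDs (e.g. tract + visit + ccdnum) can still be expanded."""
    return {k: v for k, v in dataId.items() if k not in SKYMAP_KEYS}

# The registry query would then see only keys it knows about:
print(stripSkymapKeys({"tract": 0, "visit": 123, "ccdnum": 45}))
# -> {'visit': 123, 'ccdnum': 45}
{code}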
| 1 |
1,698 |
DM-8235
|
11/09/2016 15:36:16
|
make utils.sequencify allow dicts
|
Per https://community.lsst.org/t/bug-in-daf-persistence-blob-master-python-lsst-daf-persistence-utils-py/1389/2, allow dicts to count as a "sequence" in {{sequencify}}.
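A minimal sketch of the requested behavior (the real helper lives in {{lsst.daf.persistence.utils}}; this version is illustrative only): {code:python}
def sequencify(x):
    """Return x unchanged if it already behaves like a sequence
    (now including dict); otherwise wrap it in a tuple."""
    if isinstance(x, (list, tuple, set, dict)):
        return x
    return (x,)

print(sequencify({"a": 1}))  # dict passed through instead of being wrapped
{code}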
| 1 |