id (int64, 0–5.38k) | issuekey (string, 4–16 chars) | created (string, 19 chars) | title (string, 5–252 chars) | description (string, 1–1.39M chars) | storypoint (float64, 0–100)
---|---|---|---|---|---
2,599 | DM-13169 | 01/04/2018 11:18:54 | jenkins security update - spectre/meltdown | Update the running kernel on all jenkins related linux nodes as a partial mitigation of the https://spectreattack.com/ Centos 7/EL7 related links: https://access.redhat.com/security/vulnerabilities/speculativeexecution https://lwn.net/Articles/742919/ https://access.redhat.com/errata/RHSA-2018:0007 EC2 instances may need to be restarted regardless due to AWS patching hypervisors, so it makes sense to update the kernel at the same time. Presumably, the jenkins OSX nodes will also need to be updated but this may be split into a separate ticket. | 1
2,600 | DM-13172 | 01/04/2018 13:09:18 | Make HiPS popular list configurable and update HiPS list content | - Create a method for the application to configure the list containing popular HiPS data. - Research what information to report for the HiPS data list in addition to data's type, title and url. | 8
2,601 | DM-13181 | 01/05/2018 16:33:30 | Flowdown LCR-1024 OSS changes to LSE-61 | In LCR-1024 requirements language was added to LSE-30 to connect it to LSE-163. This ticket is for adding the derived requirements relationships from the relevant DM requirements back to LSE-30. | 1
2,602 | DM-13187 | 01/08/2018 20:28:14 | jointcal selected_*_refStars is not correctly computed | Jointcal reports the {{collected_*_refStars}} and the {{selected_*_refStars}}, which should represent the total number of available refStars from the input refcat, and the number that were associated with fittedStars. The latter is the number that matters for the actual fit. Thanks to [~rowen]'s investigation of a separate issue, I realized that the {{selected_*_refStars}} metrics are incorrect: jointcal does not alter the {{associations.refStarList}} during selection, but rather the pointers between fittedStars and refStars. To fix this, we need to traverse the fittedStarList and count the number of fittedStars that have a non-nullptr {{refStar}}. Once done, we'll have to update all of the {{selected_*_refStars}} metrics. Fixing this will help debug the jointcal test failures in DM-10765. | 2
2,603 | DM-13189 | 01/09/2018 09:38:16 | Add FunctorKey for Boxes | We often store Box2Is and Box2Ds in tables. We should have a FunctorKey to do this to reduce code duplication. Unfortunately we may not be able to use this to actually reduce existing code duplication when code has used a different convention for naming fields (as we'd need to maintain backwards compatibility with already-persisted objects), but this should at least reduce duplication going forward. I'm planning to do this now for DM-12370 so I can use it there. | 1
2,604 | DM-13193 | 01/09/2018 12:04:22 | weekly release w_2018_01 failed | {{w_2018_01}} failed over the weekend due to git-lfs being down during the nebula maintenance period. (Jira was down for maintenance most of yesterday, so there may not be a ticket for the git-lfs problems.) The build was restarted yesterday, and failed while attempting to publish eupspkgs. https://ci.lsst.codes/blue/organizations/jenkins/release%2Fweekly-release/detail/weekly-release/137/pipeline | 0.5
2,605 | DM-13201 | 01/09/2018 15:23:09 | calexp have TPV and SIP terms | The {{calexp}} exposures in {{validation_data_decam}} have TPV distortion terms in HDU 0, which should have been stripped (and probably are when using the current master of the DM stack). In addition they have the expected SIP distortion terms in HDU 1. Thus it is possible that some FITS readers will read the WCS as having both TPV and SIP distortion terms. Please reprocess the calexp (and all other processed files, for consistency). The current DM stack properly strips the TPV terms when generating {{calexp}} exposures, so that should take care of the problem. | 1
2,606 | DM-13204 | 01/10/2018 07:20:38 | Create v14.0 versions of validation_data_(cfht\|decam\|hsc) | Create v14.0 versions of validation_data_(cfht\|decam\|hsc). 1. Update these data reference sets with a processing by v14.0 of the stack 2. Add a v14.0 | 2
2,607 | DM-13212 | 01/10/2018 17:28:19 | dbserv on PDAC returning 502 Bad Gateway when querying image metadata | Tatiana of SUIT reported this bug when executing the following query on PDAC: curl -o imagesContainTarget.json -d 'query=SELECT+*+FROM+sdss_stripe82_00.DeepCoadd+WHERE+scisql_s2PtInCPoly(9.462, -1.152, corner1Ra, corner1Decl, corner2Ra, corner2Decl, corner3Ra, corner3Decl, corner4Ra, corner4Decl)=1;' [http://lsst-qserv-dax01.ncsa.illinois.edu:5000/db/v0/tap/sync] Brian's investigation: a Python 3 issue with base64 encoding of binary data. | 2
2,608 | DM-13213 | 01/11/2018 08:50:28 | Cannot build packages against galsim binaries distributed by eups distrib | I have installed my stack using the eups distrib binaries. Today I went to rebuild the meas_extensions_hsm package, which links to the galsim package, and the build failed. It complained about not being able to find libgalsim.dylib in the galsim package. Upon looking in the directory for galsim I see: {code} -rwxr-xr-x 1 nate staff 3289320 Dec 13 10:12 libgalsim.1.5.dylib lrwxr-xr-x 1 nate staff 190 Dec 13 10:12 libgalsim.dylib -> /Users/square/jenkins/workspace/release/tarball/osx/10.9/clang-800.0.42.1/miniconda3-4.3.21-10a4fa6/build/stack/miniconda3-4.3.21-10a4fa6/DarwinX86/galsim/1.5.1.lsst1/lib/libgalsim.1.5.dylib {code} This shows the symlink for the dylib is still pointing to a directory on the build system. I propose that if this can't be fixed on the packaging side, we need a program like the shebangtron that will rewrite all the symlinks. | 1
2,609 | DM-13214 | 01/11/2018 08:56:26 | Simultaneously recenter all sources in a blend | When the deblender was updated to use different size boxes for all objects, the {{recenter_sources}} method was modified to fit each source's position separately. This might be the reason behind some faint sources that we see drifting during the fit, so this ticket will update the {{recenter_sources}} method to project each source onto the full model so that the positions can be updated simultaneously again. | 1
2,610 | DM-13215 | 01/11/2018 10:54:26 | Install 2018-vintage shared stacks | The shared stacks on lsst-dev01 (and Tiger & Perseus, for the Princeton crowd) should be updated to install 2018-vintage weeklies. This should just be a matter of tweaking the regexp. | 1
2,611 | DM-13225 | 01/12/2018 13:20:59 | Edit SuperTask requirements for order-independence | MagicDraw enforces alphabetic order for requirements (within a package). This makes it impossible to define the order of the requirements as they appear. The existing SuperTask requirements in LDM-556 were written without realizing this and contain references to previous requirements as "above". It is desirable for requirements to be meaningful in isolation in any event, so this task is to clean up the language and avoid positional references. | 2
2,612 | DM-13231 | 01/12/2018 17:07:01 | Make photoCalib outField write to _flux instead of _calFlux | DM-10729 implemented one way of writing back out to a catalog prior to RFC-322 being implemented, which introduced a {{_calFlux}} field for the {{outField}}. {{_calFlux}} isn't used elsewhere in the stack, and is inconsistent with the documentation. This ticket is to change it to write to {{outField+"_flux"}}, which allows writing the calibrated fluxes back to the same field (by having {{inField==outField}}), consistent with what meas_mosaic is currently doing. | 1
2,613 | DM-13232 | 01/12/2018 18:22:25 | Python PropertySet.set mis-handles array of bool | PropertySet and PropertyList both mis-handle set(name, array-of-bool). The call succeeds, but the item is not correctly saved. Consider the following example: {code} from lsst.daf.base import PropertySet, PropertyList ps = PropertySet() # or PropertyList() ps.set("A", [False, True]) ps.get("A") # throws *** lsst.pex.exceptions.wrappers.TypeError: Unknown PropertySet value type for A ps.set("B", False) ps.add("B", True) ps.get("B") # returns [False, True] {code} Note that it is possible to set an array of bool using add, so it seems to be something about PropertySet.set. | 2
2,614 | DM-13234 | 01/15/2018 01:50:42 | Use k8s headless service | A headless service will set up a constant DNS name for all Qserv nodes. This will make configuration easier. | 8
2,615 | DM-13240 | 01/16/2018 10:36:43 | Fixes in dbserv for better handling of SQL errors and the int datatype | Tatiana G. at SUIT reported this: {code} SELECT * FROM sdss_stripe82_01.RunDeepSource1 WHERE qserv_areaspec_ellipse(9.469,-1.152,58,36,0); [DAX] Bad Gateway {code} Instead, it'll be more informative to redirect the SQL error back up through dbserv to the query originator. The {{int}} datatype should be set to {{long}}, removing the implicit assumption that clients of dbserv must be Python-based. | 2
2,616 | DM-13242 | 01/16/2018 11:40:05 | Leftover result tables must produce errors | There was an incident reported by [~vaikunth] on slack: {quote} I did a count(*) on LSST20 and it gave me back real data and columns for the result … [3:31] ```[vthukral@ccosvms0070 in2p3]$ time mysql --host ccqserv125 --port 4040 --user qsmaster LSST20 -e "SELECT COUNT(*) FROM Object;"; +--------------------+---------------------+---------------------------+----------------------------+-----------------------------------+------------------------+------------------------+------------------------+ | ra | decl | raVar | declVar | radeclCov | u_psfFlux | u_psfFluxSigma | u_apFlux | +--------------------+---------------------+---------------------------+----------------------------+-----------------------------------+------------------------+------------------------+------------------------+ | 284.98083849415934 | -63.32859963197394 | 0.03588015008077164 | 0.04575243321809053 | 0.0000000012152185681231455 | 7.396697501646931e-30 | 9.604600362557029e-31 | 9.647062881997884e-30 | ``` (and many more rows) {quote} and Igor dug out this from the log: {noformat} [2018-01-13T00:27:12.724+0100] [LWP:503] DEBUG rproc.InfileMerger (core/modules/rproc/InfileMerger.cc:264) - Merging w/CREATE TABLE result_364646 ENGINE=MyISAM SELECT SUM(QS1_COUNT) FROM result_364646_m [2018-01-13T00:27:12.746+0100] [LWP:503] ERROR util.Error (core/modules/util/Error.cc:50) - Error [1050] Error applying sql: Error 1050: Table 'result_364646' already exists Unable to execute query: CREATE TABLE result_364646 ENGINE=MyISAM SELECT SUM(QS1_COUNT) FROM result_364646_m [2018-01-13T00:27:12.746+0100] [LWP:503] ERROR rproc.InfileMerger (core/modules/rproc/InfileMerger.cc:342) - InfileMerger error: Error applying sql: Error 1050: Table 'result_364646' already exists Unable to execute query: CREATE TABLE result_364646 ENGINE=MyISAM SELECT SUM(QS1_COUNT) FROM result_364646_m {noformat} This looks like there was a leftover result table whose name was reused for a new query (this could happen when someone resets the QMeta autoincrement ID). Qserv should handle this situation better: an error should be returned to the user in that case, rather than data from the old result table (or the leftover table should be deleted). | 2
2,617 | DM-13250 | 01/16/2018 14:38:38 | Write simple filter for sims alerts | Experiment with a simple way to filter alert data in Python using sims data. | 8
2,618 | DM-13258 | 01/17/2018 11:17:09 | upgrade blueocean to 1.5.x | blueocean 1.4.0 was released today: https://plugins.jenkins.io/blueocean The changelog needs to be inspected to see if it addresses any of the open user requests. | 1
2,619 | DM-13264 | 01/18/2018 12:06:15 | Add use case for jobs as composite datasets | Add a use case and requirement that describes the need for the {{Butler}} to have the ability to persist the data blobs associated with a {{Job}} and the JSON file that describes the {{Job}} in different datastores. For example, the blobs may be placed in an object store while the {{Job}} is ingested into a JSON-aware SQL database. | 1
2,620 | DM-13265 | 01/18/2018 12:08:42 | Saga error handling | Firefly is using sagas for action side-effects. An error in masterSaga causes problems in later application behavior, like the following table loads not completing on the client unless they are explicitly backgrounded. To test, modify getCovColumnsForQuery in CoverageWatcher.js to produce an error: {noformat}function getCovColumnsForQuery(options, table) { const cAry= [...options.getCornersColumns(table), null]; const base = cAry.map( (c)=> `"${c.lonCol}","${c.latCol}"`).join(); return base+',"ROW_IDX"'; }{noformat} The function above produces the exception and console output in the attachment when a catalog search is requested in firefly. Notice that the following catalog searches will not complete on the client (trackFetch in TableCntlr.js will not be called). Implementation done: * Use spawn when using dispatchAddSaga. This prevents unhandled exceptions in one saga from cancelling all the siblings. * Use fork with dispatchAddActionWatcher, because we are catching the unhandled exceptions. If an exception occurs in the callback, it won't cancel the saga. * Added documentation for actionWatcherCallback * Fixed a bug preventing remote charts on the test page * Converted some dispatchAddSaga to dispatchAddActionWatcher, including the coverage and image metadata watchers and the firfely.utl.addActionListener API method. Test case: [http://localhost:8080/firefly/demo/ffapi-highlevel-test.html] Open the console to view the output, "Start selection extensions", "Track mouth readout". Before, you would get an error when switching readout from one image to the other, which would cancel both listeners. Now, a single error in a user-defined readout listener does not prevent the following calls from succeeding. | 5
2,621 | DM-13267 | 01/18/2018 12:26:33 | Create presentation for Princeton Monday Meeting | Create a presentation to give the Princeton software group an update on the new deblender, which is likely to be implemented in the stack as part of the current epic. | 2
2,622 | DM-13269 | 01/18/2018 17:14:17 | Improve jointcal debugging output | The fitter chi2 contributions debug output files need to be cleaned up and improved so we can do direct comparisons of jointcal's internal model, pre- and post-fit. Also have to make some plots to test that the output is good. | 8
2,623 | DM-13270 | 01/18/2018 17:25:26 | cherry pick ccdImage method cleanups from DM-9071 | I made a number of cleanups of method names in DM-9071, in this commit: https://github.com/lsst/jointcal/commit/9efc5d23808b5e6f33d69e7ccd9bcf6c0bc844cb These should be picked out of that ticket and pushed to master. | 1
2,624 | DM-13274 | 01/19/2018 09:17:50 | Deblender sometimes fails to model second object, or crashes | This issue was reported by Erin Sheldon's student. The problem was that some blends failed during initialization with a {{ValueError:}} {code:java}/Users/lorena/anaconda3/lib/python3.6/site-packages/proxmin-0.4.3-py3.6.egg/proxmin/operators.py:37: RuntimeWarning: invalid value encountered in true_divide /Users/lorena/anaconda3/lib/python3.6/site-packages/scarlet-0.1.d2e9b44-py3.6-macosx-10.7-x86_64.egg/scarlet/source.py:149: RuntimeWarning: invalid value encountered in less morph[morph<0] = 0 /Users/lorena/anaconda3/lib/python3.6/site-packages/scarlet-0.1.d2e9b44-py3.6-macosx-10.7-x86_64.egg/scarlet/source.py:161: RuntimeWarning: invalid value encountered in greater ypix, xpix = np.where(morph>blend._bg_rms[band]/2) /Users/lorena/anaconda3/lib/python3.6/site-packages/scarlet-0.1.d2e9b44-py3.6-macosx-10.7-x86_64.egg/scarlet/source.py:163: RuntimeWarning: invalid value encountered in greater ypix, xpix = np.where(morph>0) Traceback (most recent call last): File "galsim_deblend.py", line 194, in <module> model,mod2 = make_model(img,bg_rms,B,coords) File "galsim_deblend.py", line 80, in make_model blend = scarlet.Blend(sources, img, bg_rms=bg_rms) File "/Users/lorena/anaconda3/lib/python3.6/site-packages/scarlet-0.1.d2e9b44-py3.6-macosx-10.7-x86_64.egg/scarlet/blend.py", line 76, in __init__ self.init_sources() File "/Users/lorena/anaconda3/lib/python3.6/site-packages/scarlet-0.1.d2e9b44-py3.6-macosx-10.7-x86_64.egg/scarlet/blend.py", line 255, in init_sources self.sources[m].init_source(self, self._img) File "/Users/lorena/anaconda3/lib/python3.6/site-packages/scarlet-0.1.d2e9b44-py3.6-macosx-10.7-x86_64.egg/scarlet/source.py", line 579, in init_source self.init_func(self, blend, img) File "/Users/lorena/anaconda3/lib/python3.6/site-packages/scarlet-0.1.d2e9b44-py3.6-macosx-10.7-x86_64.egg/scarlet/source.py", line 164, in init_templates Ny = np.max(ypix)-np.min(ypix) File "/Users/lorena/anaconda3/lib/python3.6/site-packages/numpy/core/fromnumeric.py", line 2272, in amax out=out, **kwargs) File "/Users/lorena/anaconda3/lib/python3.6/site-packages/numpy/core/_methods.py", line 26, in _amax return umr_maximum(a, axis, None, out, keepdims) ValueError: zero-size array to reduction operation maximum which has no identity {code} And other times failed during the fit with a {{LinAlgError}}: {code:java}/Users/lorena/anaconda3/lib/python3.6/site-packages/proxmin-0.4.3-py3.6.egg/proxmin/utils.py:340: RuntimeWarning: divide by zero encountered in double_scalars /Users/lorena/anaconda3/lib/python3.6/site-packages/proxmin-0.4.3-py3.6.egg/proxmin/utils.py:340: RuntimeWarning: invalid value encountered in multiply /Users/lorena/anaconda3/lib/python3.6/site-packages/proxmin-0.4.3-py3.6.egg/proxmin/utils.py:310: RuntimeWarning: divide by zero encountered in double_scalars /Users/lorena/anaconda3/lib/python3.6/site-packages/proxmin-0.4.3-py3.6.egg/proxmin/utils.py:310: RuntimeWarning: invalid value encountered in multiply /Users/lorena/anaconda3/lib/python3.6/site-packages/proxmin-0.4.3-py3.6.egg/proxmin/utils.py:358: RuntimeWarning: divide by zero encountered in double_scalars /Users/lorena/anaconda3/lib/python3.6/site-packages/proxmin-0.4.3-py3.6.egg/proxmin/utils.py:360: RuntimeWarning: divide by zero encountered in double_scalars /Users/lorena/anaconda3/lib/python3.6/site-packages/proxmin-0.4.3-py3.6.egg/proxmin/utils.py:360: RuntimeWarning: invalid value encountered in true_divide Traceback (most recent call last): File "galsim_deblend.py", line 194, in <module> model,mod2 = make_model(img,bg_rms,B,coords) File "galsim_deblend.py", line 92, in make_model blend.fit(200)#, e_rel=1e-1) File "/Users/lorena/anaconda3/lib/python3.6/site-packages/scarlet-0.1.d2e9b44-py3.6-macosx-10.7-x86_64.egg/scarlet/blend.py", line 229, in fit e_rel=self._e_rel, e_abs=self._e_abs, accelerated=accelerated, traceback=traceback) File "/Users/lorena/anaconda3/lib/python3.6/site-packages/proxmin-0.4.3-py3.6.egg/proxmin/algorithms.py", line 487, in bsdmm File "/Users/lorena/anaconda3/lib/python3.6/site-packages/scarlet-0.1.d2e9b44-py3.6-macosx-10.7-x86_64.egg/scarlet/blend.py", line 541, in _steps_f self._stepAS = [self._cbAS[block](block) for block in [0,1]] File "/Users/lorena/anaconda3/lib/python3.6/site-packages/scarlet-0.1.d2e9b44-py3.6-macosx-10.7-x86_64.egg/scarlet/blend.py", line 541, in <listcomp> self._stepAS = [self._cbAS[block](block) for block in [0,1]] File "/Users/lorena/anaconda3/lib/python3.6/site-packages/proxmin-0.4.3-py3.6.egg/proxmin/utils.py", line 222, in __call__ File "/Users/lorena/anaconda3/lib/python3.6/site-packages/scarlet-0.1.d2e9b44-py3.6-macosx-10.7-x86_64.egg/scarlet/blend.py", line 486, in _one_over_lipschitz LA = np.real(np.linalg.eigvals(SSigma_1S).max()) File "/Users/lorena/anaconda3/lib/python3.6/site-packages/numpy/linalg/linalg.py", line 889, in eigvals _assertFinite(a) File "/Users/lorena/anaconda3/lib/python3.6/site-packages/numpy/linalg/linalg.py", line 217, in _assertFinite raise LinAlgError("Array must not contain infs or NaNs") numpy.linalg.linalg.LinAlgError: Array must not contain infs or NaNs{code} The root cause of both issues is that the new deblender takes coordinates as (y,x) while the user was passing (x,y) (see the discussion on [github|https://github.com/fred3m/scarlet/issues/26] for more). The first error was thrown because the incorrect position given had no flux above the noise level at the peak, so the source couldn't be initialized. We should implement a check for this to warn the user and have some fallback initialization. The second error is caused by a problem with fitting positions for sources that have no flux, which [~pmelchior] has fixed in a soon to be merged branch. I'll make the change to the initialization error and close this ticket once [~pmelchior]'s branch has been merged. | 2
2,625 | DM-13289 | 01/19/2018 14:17:27 | Change SsiSession to be deleted by shared pointer. | SsiSession is currently deleted by a call to Finished, and this can happen before tasks using the object are all done with it, causing a segfault and worker crash. Moving the UnBindRequest call to the destructor and using a shared pointer for the SsiSession object should make it safe. Also, it looks like eliminating the SsiSession::ReplyChannel class will make the code simpler, but some changes need to be made to expose the functions ReplyChannel calls in xrdssi. | 5
2,626 | DM-13301 | 01/19/2018 19:39:32 | Configurable worker identity and chunk availability map in Qserv | This ticket proposes two improvements to the Qserv worker services: 1. Replace the current mechanism of deriving a list of available chunk numbers by parsing table names within MySQL's _information schema_ with a new one based on an explicit persistent configuration stored within a database. The present mechanism has one major limitation preventing integration of Qserv with the dynamic chunk replication system: it runs only once, when the worker services are started. In the proposed model a list of chunk numbers (along with the corresponding table and database names) will be stored in a dedicated table seen by the relevant worker services. This will also allow a consistent view onto a stable collection of chunks, and lay a foundation for implementing a safe and efficient coordination mechanism between the replication system and the Qserv worker services. 2. Add persistent support for the unique identity of the worker services. The idea here is to store some name, unique within a particular Qserv cluster, in the databases used by the workers. These identifiers will be used as a foundation for setting up worker-specific resources and by the replication system. Extend the implementation and the API of Qserv's *wpublish* module to support the proposed improvements. For both tasks, make proper changes to the Qserv management, integration and data loading scripts to populate the newly added table with a valid set of chunks and the worker identity strings, so that these resources are available to the corresponding worker services. Make sure the unit tests are updated accordingly. | 3
2,627 | DM-13302 | 01/19/2018 19:57:27 | Adding support for worker-specific resources in Qserv | Extend the present implementation of the Qserv worker plugin (xrootd SSI) to recognize (and route to the corresponding handler) requests to worker-specific resources. The proposed extension is planned to be used for the following tasks: * integrating Qserv with the replication system, which requires an interface for interacting with the worker services to notify them about changes in chunk availability on the corresponding nodes * allowing worker-specific stats and counters to be pulled into custom service monitoring applications | 3
2,628 | DM-13307 | 01/22/2018 12:20:13 | Add final conclusions to DMTN-028 | Add a summary and conclusions regarding the suitability of Kafka for the needs of the alert distribution system to DMTN-028. | 3
2,629 | DM-13317 | 01/23/2018 14:12:34 | Filtered table display in image overlay fails to apply the filter | This is in the test build [~tatianag] is running in response to DM-13099. I've queried two tables on PDAC: AllWISE catalog sources and WISE All-Sky (4 band) single-epoch sources. I've identified a cluster of sources in the single-epoch data that I suspect come from a single object, and I'm trying to identify that object in the AllWISE catalog. I drew a box around the cluster in the single-epoch data and used it as a filter. I then tried to copy the filter specification from the "filter panel" dialog from the single-epoch table: {quote}"ra" > 0.9909882575279574;"ra" < 0.992205622144179;"decl" > 0.0072994581062348135;"decl" < 0.009400119782442063{quote} to the filter panel on the AllWISE table. That didn't work at all - a separate problem? - so instead I copied the two expressions from the column headers from the one table to the other: "> 0.9909882575279574; < 0.992205622144179" for {{ra}} and "> 0.0072994581062348135; < 0.009400119782442063" for {{decl}}. That worked, and the row I wanted ended up selected in the table. However, the selection was not successfully applied to the coverage image for the AllWISE table, only for the single-epoch table. See screenshots. | 1
2,630 | DM-13322 | 01/23/2018 15:07:10 | memory mishandled inside UnitNormMap | AST UnitNormMap mis-handles the memory for the center parameter when unpersisting: it may allocate one too many doubles and fill the final one with garbage. | 0.5
2,631 | DM-13325 | 01/23/2018 16:20:01 | warpExposure does not propagate visitInfo | [~sullivan] discovered that warpExposure is not propagating the visitInfo of its parent exposure. This is a bug, and it should be a trivial fix on line 250 of {{afw/math/warpExposure.cc}} (doing the same as the wcs and calib), plus a few unit tests. | 2
2,632 | DM-13337 | 01/24/2018 16:24:51 | selection of overlaid catalog does not work after filtering with a region selection (either circle or rectangle) | When I have a catalog overlay and an elliptical or rectangular region, I can successfully filter the catalog on that region. However, I cannot then click on any of the overlay points to select them. I have to get rid of the elliptical region by clicking on the image toolbar. And then I can't get that region back. | 1
2,633 | DM-13342 | 01/25/2018 07:26:30 | Refactor Datastore prototype to improve orthogonality | In order to facilitate work on the Datastore prototype the existing code should be refactored a bit. | 0.5
2,634 | DM-13345 | 01/25/2018 09:43:26 | Improve template and warp variance for Warp Compare | Template: The PSF-matched sigma-clipped coadd variance is computed from the variance planes of the inputs and not the empirical dispersion of the images as expected. Intuitively, if we're looking for outliers, we'd want the template variance to use the dispersion of the images. Warp: As pointed out by [~price], we'd also want to scale the variance in the warps for the same reason we scale the variance on coadds before detection on coadds. | 2
2,635 | DM-13361 | 01/25/2018 12:26:32 | Minimal S3 backed Datastore | This would exercise {{Datastore}} and {{Formatter}} by requiring: * Pass-through of credentials * SHA instead of filename * Serializing into memory rather than file * Write to file then upload. Only implement for one or possibly two {{DatasetType}}s. | 20
2,636 | DM-13363 | 01/25/2018 12:36:41 | Minimal in-memory caching Datastore | This ticket is for creating an in-memory datastore and adjusting the tests to run with it and the existing posix datastore. | 8
2,637 | DM-13371 | 01/25/2018 15:28:15 | Enable flake8 testing in daf_butler | Now that the butler prototype is becoming the new butler, flake8 tests should be enabled. | 0.5
2,638 | DM-13374 | 01/25/2018 15:34:58 | Deconstruct Butler prototype for redesign | Remove all elements (and associated tests) in current prototype of Butler and Registry that will be redesigned (as opposed to keeping everything working during redesign). | 2
2,639 | DM-13379 | 01/26/2018 12:59:05 | Determine whether the LSST LaTeX documentclass document type codes can be retired | Currently, LSST documents generated from LaTeX source use a template that has a "document type" parameter. This parameter has documented values of "DM", "MN", and "CP" at this time, and a commonly-used but undocumented value of "SE" as well (for LSE- and LPM- documents). As the documentation at https://lsst-texmf.lsst.io/lsstdoc.html says: {quote}DM defines the document type to be a “Data Management” document. Other options include MN for minutes and CP for conference proceedings but these are holdovers from the original Gaia class file and currently have no effect on the document output. They are considered optional, but descriptive, at this time.{quote} I suggest that we either add "SE" to the list above or, recognizing that the document type is in fact unused in the LaTeX template (it relies on the DocuShare handle, instead), remove the mention of the document type from the {{lsst-texmf}} documentation and, opportunistically, remove the document types from existing documents when other editing actions occur. | 0.5
2,640 | DM-13381 | 01/26/2018 15:43:44 | Rewrite updateRefCentroids and updateSourceCoords to convert all positions at once | The current updateRefCentroids and updateSourceCoords functions are pure Python and compute one position at a time. SkyWcs performs much better if it can convert a batch of values at once. I could rewrite the Python code to extract all the positions, convert them, then loop through the catalog again. But I worry this will be needlessly slow (the looping is already slow, and this will only make it worse), so I propose to rewrite the functions in C++ instead. | 2
2,641 | DM-13384 | 01/26/2018 18:41:51 | Fix Qserv docker image build | A recent merge has broken docker image build scripts used by developers and Travis and Jenkins CI (undefined variable $DOCKER_RUN_DIR in several Dockerfiles). | 1
2,642 | DM-13388 | 01/27/2018 10:36:27 | Enable visit-level sky subtraction for HSC by default | Turn on visit-level sky subtraction by default for HSC, which I believe means updating the coaddition configuration to expect its outputs to be available. | 0.5
2,643 | DM-13389 | 01/27/2018 10:38:14 | Enable transmission curve attachment for HSC by default | Set {code:java} self.doAttachTransmissionCurve = True{code} in {{SubaruIsrTask}}. | 0.5
2,644 | DM-13396 | 01/29/2018 10:45:43 | Fix coadd mask propagation | DM-9953 created the SENSOR_EDGE mask to mark coadd pixels that were on or near the boundary of an input CCD (and hence should have INEXACT_PSF set as well). It also started the propagation of those EDGE regions to the coadd. Propagating the EDGE regions to the coadd caused problems, however, because many of them were affected by otherwise-unmasked bad pixels (leaks from amps or somesuch), so that was disabled on DM-12931. Contrary to comments on that ticket, this seems to have broken the propagation of SENSOR_EDGE, or perhaps something more recent broke it. We should also consider whether to split the current CLIPPED flag into both CLIPPED and REJECTED, with the latter being used for pixels rejected due to mask values from the calexps rather than an explicit smart-clip algorithm (e.g. SafeClip or CompareWarps). | 2
2,645 | DM-13403 | 01/29/2018 14:15:55 | numpy types fail in butler dataIds | The {{np.int64}} type appears to be treated differently from the built-in python {{int}} type in butler dataIds. As an example: {code} In [9]: butler.datasetExists('src',dataId={'visit':13376, 'ccd':15}) Out[9]: True In [10]: butler.datasetExists('src',dataId={'visit':13376, 'ccd':np.int64(15)}) Out[10]: False {code} However, if one attempts to use that `np.int64` type in a {{butler.get}}, it raises an exception with the value appearing without its type ({{str(15)}} and {{str(np.int64(15))}} are the same), which is extremely confusing to the user. {code} NoResults: No locations for get: datasetType:src dataId:DataId(initialdata={'visit': 13376, 'ccd': 15}, tag=set()) {code} Note also that {{hash(15) == hash(np.int64(15))}}, so there is a general expectation that these will behave similarly. | 1
2,646 | DM-13410 | 01/29/2018 15:06:53 | Shrink input bboxes in inputRecorder per psfMatched Warp in WarpCompare | WarpCompare has no temporal information for the pixels that are outside the boundary of the psfMatched Warps. These pixels are marked NO_DATA. The BBoxes in the CoaddInputRecorder should be shrunk so that we can have an exact CoaddPsf for sources that fall in the border of calexp. | 5
2,647 | DM-13412 | 01/29/2018 16:41:37 | camera mapper should specify DecoratedImageU instead of ImageU | Here's a pointer to the camera mapper in question: lsst.obs.sdss.sdssMapper.SdssMapper. The problem is with the returned image object type (via butler) for the following data: /datasets/sdss/preprocessed/dr7/runs/2708/40/corr/3/fpC-002708-r3-0103.fit.gz Per [~ktl], the correct fix should be the following (and a new RFC for caution): if you want the metadata and WCS, I think it will be sufficient to change these lines [https://github.com/lsst/obs_sdss/blob/master/policy/SdssMapper.yaml#L37-L38] to say `DecoratedImageU` instead of `ImageU`. | 3
2,648 | DM-13413 | 01/30/2018 10:31:37 | blueocean 1.4.0 pipeline view ignoring clicks | After a jenkins core + blueocean update to {{1.4.0}}, clicks in the build pipeline view to change between branches are frequently ignored. It appears that this is only happening for builds that are actively running. A search of the upstream jenkins Jira did not find an existing issue that's even in the ballpark. A screencast should be made to report upstream. Downgrading BO to {{1.3.x}} may be the best option. | 1
2,649 | DM-13416 | 01/30/2018 16:39:16 | Firefly should reconnect periodically to the server when the connection fails | Firefly requires a persistent websocket connection to the server in order to function properly. In the case when the client is disconnected from the server, Firefly should periodically attempt to reconnect. This can happen when one closes their laptop or switches from one network to another. When Firefly is not connected to the server, there should be an indication showing that it's no longer connected. | 2
2,650 | DM-13417 | 01/30/2018 18:22:01 | Cleanup error reporting and docstrings in cameraGeom.utils | The docstring for {{cameraGeom.utils.makeImageFromCamera}} could be cleaned up (return type specified, binSize and other parameters clarified), and the "Unable to fit image for detector" warning message should include the exact exception message that was raised (figuring out the nature of the problem is difficult otherwise). That {{print()}} should probably also be turned into a log message. | 0.5
2,651 | DM-13420 | 01/31/2018 12:57:14 | error message handling for failed table filtering | From DM-13203 items 1, 4, and 5. 1. Generate a meaningful error message when a column filter expression applied to a table is invalid. This may not require any extra parsing effort for the filter expressions - the system knows the name of the column and the text typed in the filter field, and can just wrap the underlying error message with something like "The filter expression 'xyzzy' applied to column 'foobar' is invalid.". For the purposes of debugging it may be useful to provide a UI action (e.g., a disclosure triangle) that makes it possible to see the full underlying exception text. 2. Ensure that, when an invalid filter expression is entered, after dismissing the error message the user is returned to the _same state of their data and previously-specified filters as before the failed attempt to construct a filter_. This does not require arbitrary undos, but only that the displayed state of the table is not changed before the filter has been determined to be valid. 3. Do not use the word "Reload" for the button that dismisses the error dialog. "Back", "Close", "Cancel" would all be better choices. Consider asking for feedback from others, e.g., Vandana, for this choice. "Reload" _strongly_ suggests - to me - that the underlying data retrieval operation would be repeated rather than just abandoning the filter attempt. | 5
2,652 | DM-13438 | 02/01/2018 12:47:24 | Create simulations with real COSMOS galaxies | LSST internal reviewers suggested using a set of simulations using real galaxy data as a more robust test of the new deblender. This ticket involves creating another 10k simulated blends (this time with real galaxies) and running both the new and old deblenders on it. | 8
2,653 | DM-13439 | 02/01/2018 12:48:23 | jenkins update-cmirror job broken | This job fails in the current stripped down jenkins env and probably needs to build in a container: {code:java} [linux-64] Running shell script + wget https://repo.continuum.io/pkgs/free/linux-64/repodata.json /home/jenkins-slave/workspace/sqre/infrastructure/update-cmirror/repodata/linux-64_tmp/durable-716fc7f8/script.sh: line 2: wget: command not found {code} https://ci.lsst.codes/job/sqre/job/infrastructure/job/update-cmirror/267/console This job is also not running periodically as it should be. The cron trigger must have been lost during a re-factoring. | 0.5
2,654 | DM-13441 | 02/01/2018 13:11:50 | Fix nominal gains and readNoise values | Some gains are zero; that's bad (it leads to inf variance, and therefore NaNs). The other values are meaningless as this is currently being used for any given raft, so let's set them all to 1 for now. Also set all the readNoise values to 10 for now. | 2
2,655 | DM-13451 | 02/01/2018 16:32:39 | Make ap_verify responsible for ingestion | Currently, {{ap_pipe}} ingests and processes data. For its ongoing conversion to a {{CmdLineTask}} (DM-13163) and forward-compatibility with {{Pipeline}} classes, {{ap_pipe}} should work on an externally provided repository. For the time being, this functionality should be moved to {{ap_verify}} (which currently requires uningested data), and the final responsibility for ingestion will be determined later. | 8
2,656 | DM-13452 | 02/01/2018 16:47:18 | Extend QuantumGraph implementation | The current implementation of QuantumGraph (under a different name) is rather minimalist and is optimized mostly for efficient storage. We want to extend it to make it usable for other cases as well. | 8
2,657 | DM-13453 | 02/01/2018 16:50:52 | Upgrade psutil to version 5 | psutil has had some major updates since RFC-176 with some significant speed ups. This also seems to fix a DeprecationWarning triggered by the current version (which is my main motivation for updating this). psutil is only used in utils.tests at this time. | 0.5
2,658 | DM-13456 | 02/01/2018 17:53:37 | Finalize and cleanup data for KPM30 | Prepare and finalize data readiness, generate appropriate ObjectIDs and queries for KPM30 run. | 8
2,659 | DM-13459 | 02/01/2018 18:10:15 | Parse single example query statement with antlr4 and build query objects as antlr2 would | Parse a query like "SELECT objectId, ra_PS FROM Object WHERE objectId=1234" with the new antlr4 parser. | 13
2,660 | DM-13460 | 02/01/2018 18:10:47 | Extend antlr4 parser abilities | Extend antlr4 parser beyond simple statement, for example: {{SELECT COUNT ( * ) as OBJ_COUNT FROM Object WHERE qserv_areaspec_box ( 0.1 , - 6 , 4 , 6 ) AND scisql_fluxToAbMag ( zFlux_PS ) BETWEEN 20 AND 24 AND scisql_fluxToAbMag ( gFlux_PS ) - scisql_fluxToAbMag ( rFlux_PS ) BETWEEN 0.1 AND 0.9 AND scisql_fluxToAbMag ( iFlux_PS ) - scisql_fluxToAbMag ( zFlux_PS ) BETWEEN 0.1 AND 1.0)))));}} | 13
2,661 | DM-13477 | 02/02/2018 16:41:54 | Move association math from DIAObjectCollection into AssociationTask | Removes the score and match methods from DIAObjectCollection and puts them, for now, into AssociationTask. Once AssociationTask moves into lsst-distrib we can move them further into meas_algorithms. | 5
2,662 | DM-13485 | 02/05/2018 08:22:46 | Fix NB filter transmission curve dataset filenames | Some narrow-band filter names used by {{installTransmissionCurves.py}} are missing leading zeros (e.g. should be NB0921 rather than NB921). | 0.5
2,663 | DM-13492 | 02/05/2018 14:00:00 | Remove --rerun argument for ap_verify | Nobody remembers why {{--rerun}} was originally added to {{ap_verify}}'s command-line interface; see discussion on DM-12853. In the absence of a compelling case for it (it cannot behave exactly like the {{--rerun}} argument for command line tasks, because {{ap_verify}} does not have an input repository), and given its currently confusing behavior, we should remove {{--rerun}} from the {{ap_verify}} API. It can be added back once we have a clear expectation of how it should behave. | 1
2,664 | DM-13493 | 02/05/2018 14:38:33 | BaseSourceSelectorConfig should not filter on "interpolated" | Given the recent change to how {{interpolated}} is used in the stack, it appears that {{BaseSourceSelectorConfig}} should not include {{interpolated}} in its list of default {{badFlags}}. We may want to add any new "why interpolated" flags instead. | 1
2,665 | DM-13498 | 02/06/2018 09:40:23 | Add config to make WarpCompare very conservative | Currently a drawback of WarpCompare is that any epoch where a source doesn't look like the other epochs gets clipped. This leads to some loss of signal in bright sources. While future work on the image comparison will improve this (background matching, image-to-image psf matching, etc.), for now we probably want a nuclear-option config: if an artifact candidate fits entirely within a template coadd source, then don't clip it. This means that unfortunate CRs or supernovae won't be clipped, but it will improve the photometry. For the HSC release, coadd photometry seems to be a higher priority. | 3
2,666 | DM-13501 | 02/06/2018 13:19:20 | Add obs_decam to validation_data_decam/ups | Add setupRequired(obs_decam) to a ups/validation_data_decam.table file. | 0
2,667 | DM-13506 | 02/06/2018 20:25:16 | Review and fixups for antlr4 eups package | Assist Nate in packaging antlr4 as a third-party TaP eups package. | 2
2,668 | DM-13507 | 02/07/2018 09:10:22 | Add stable hash to SkyMap objects | In the Gen3 butler, the tracts and patches defined by a SkyMap will be loaded into a database, and that will make it much more important to recognize when the same SkyMap has already been loaded. While SkyMap objects already support equality comparison, it'd be nice if they could also produce a stable hash that can be used to uniquely label them. -Since that basically amounts to being able to hash the SkyMap's configuration, I think it makes the most sense to actually add this hashing support directly to {{pex_config}}. Being able to compare hashes to check for config equality seems like it'd be generally useful to.- I'm currently planning to do this with {{hashlib.sha1}}, rather than just the {{hash}} builtin, because I want something that's guaranteed to be stable between Python versions. Note that these in-memory hashes will not be equivalent to hashes of the files in which these objects are stored. | 2
2,669 | DM-13509 | 02/07/2018 13:33:44 | Some pure python packages add to LD_LIBRARY_PATH | The pure python packages listed in Components have ups tables that add themselves to LD_LIBRARY_PATH and two related library paths. They should not do this. Remove the following from their ups table files: {code} envPrepend(LD_LIBRARY_PATH, ${PRODUCT_DIR}/lib) envPrepend(DYLD_LIBRARY_PATH, ${PRODUCT_DIR}/lib) envPrepend(LSST_LIBRARY_PATH, ${PRODUCT_DIR}/lib) {code} | 0.5
2,670 | DM-13510 | 02/07/2018 13:45:11 | Correct inconsistencies in LDM-503 text and tables and improve auto-generation process | The existing LDM-503 package has traces of a strategy to autogenerate substantial sections of the document from a milestone table maintained as the {{dmtestmilstones.csv}} file (as well as other portions from other .csv files). In particular, from this file a {{dmtestmilstones.tex}} file is generated, for use in {{schedtab.tex}} in Section 6 of the document, as well as a file {{testsections.tex}} used as the body of Section 7 of the document. There are signs that both of those {{.tex}} files were hand-edited after the last time the autogeneration was performed, and these edits are at variance with each other, with the outcome that there are three versions, slightly differing, of the associated text. The auto-generation script is not part of the {{Makefile}}, and its outputs are part of the Github _acquis_ for the package, or this would have turned out differently. Note also that the autogeneration script has a provision to substitute a longer description of a milestone into {{testsections.tex}} if one is available. Currently this is only the case for LDM-503-2, for which there is an {{LDM-503-2.tex}} file in the package (and a much more extensive {{f17_drp.tex}} file which it in turn includes). This longer description appears to predate the fuller, separate test specification for DRP and may be redundant or even in conflict with it; I have not checked that. | 3
2,671 | DM-13519 | 02/08/2018 14:53:01 | Implement per-object Galactic Extinction correction in color analysis QA plots | Implement a per-object Galactic Extinction correction for use with the color-analysis QA plots to replace the per-field placeholder included in DM-13154. It looks like there is code in {{sims_photUtils}} (and dependencies) to do this, so this will be an attempt to get that working with the analysis scripts. Note that this requires the A_filter/E(B-V) extinction coefficients for the HSC filters (awaiting a response from the HSC team, the placeholder noted above is just using SDSS filter values). | 5
2,672 | DM-13520 | 02/08/2018 16:47:44 | Add readme to obs_subaru | [~yusra] pointed out the diagram of the HSC focal plane in obs_subaru, which I wouldn't have thought to check. To make it more obvious for future me, I've created a readme and added a note about the diagram to it. | 0.5
2,673 | DM-13524 | 02/09/2018 15:31:43 | Add unit tests for ingestion | As a self-contained module within {{ap_verify}}, {{ingestion}} should have unit tests of its functions. It may be possible to implement a unit test by specifically ingesting only 1-2 files of each type. | 2
2,674 | DM-13526 | 02/09/2018 18:54:44 | Fixed a bug in the schema migration tool for worker databases | The original implementation of the schema migration tool deployed for the worker side databases *qservw_worker* as per [DM-13301] won't work for tables which have '_' in their base names. The original tool was designed and tested for the LSST-style table names: {code}Object_<chunk>{code} However, it won't work for tables like: {code} Sart_More_<chunk> {code} The first '_' will confuse the stored procedure implemented in the original version of the tool. The goal of this ticket is to fix this problem by making the stored procedure look for the last '_' when separating the base names of tables from the trailing chunk numbers. | 0.5
2,675 | DM-13534 | 02/12/2018 12:53:08 | Upgrade ndarray to upstream 1.4.2 | If all goes well, this should allow us to start removing our dependence on the NumPy C API (left for another ticket). This will probably require modifying our pybind11 build to install its CMake config files (unless we're doing that already, which I doubt), since ndarray now uses those to find pybind11. | 1
2,676 | DM-13535 | 02/12/2018 17:16:03 | Accept idiomatic input repositories | Currently, {{ap_pipe}} requires a single input repository with two directories, {{ingested}} and {{calibingested}}, each a repository in its own right. This will be a problem for DM-13163 because most command-line tasks allow the URIs of the data and calib repositories to be independent, and in any case the Stack convention is different from our current usage. This ticket will change the command line to accept a separate calib repository (the argument should behave the same as {{processCcd.py --calib}}?). The API to {{ap_verify}} will not be changed, as it will almost certainly change when DM-13163 is implemented. | 1
2,677 | DM-13536 | 02/12/2018 17:16:13 | Use repositories more idiomatically | For forward-compatibility with DM-13163, {{ap_verify}} should create separate repositories for ingestion and calibration. In effect, the current "output repository" should be a convenient "workspace" directory but not a repository. In addition, the interface module {{pipeline_driver}} should make a distinction between input and output repositories, choosing the location of the latter instead of deferring the choice to {{ap_pipe}}. Neither {{CmdLineTasks}}, nor in the future {{Pipelines}}, are responsible for output paths. Because the API will change again as part of DM-13163, changes to the interface between {{ap_pipe}} and {{ap_verify}} should be kept minimal; most likely, this will entail removing the ill-advised wrappers added in DM-12257 and calling the existing functions from {{pipeline_driver}} again. | 3
2,678 | DM-13538 | 02/13/2018 09:46:08 | Add a Jenkins job to build and deploy Firefly's app to a local k8s cluster | During a pull request or an evaluation of an added feature, it's useful to have a specific version of a Firefly app up and running. Add a job to Jenkins to build and deploy a Firefly app to a local k8s cluster. This job should take as input: branch, env, and a label used to reference the instance running in k8s. | 2
2,679 | DM-13539 | 02/13/2018 09:58:28 | astshim fails to preserve SIP terms for some TAN SIP when writing FITS metadata | [~lauren] has found some cases where a TAN-SIP SkyWcs is not written as TAN-SIP to FITS metadata (instead a local TAN approximation is used). The attached file is a program showing an example. I have reported the issue to David Berry in hopes he can fix it or suggest a workaround. This issue actually consists of two parts: - Normal TAN-SIP WCS cannot be written as FITS-WCS header cards. David Berry has implemented a fix for that problem. - WCS rotated by rotateWcsPixelsBy90 cannot be written as FITS-WCS. I have split that into a separate ticket: DM-13564. | 0.5
2,680 | DM-13546 | 02/13/2018 14:08:08 | Build robust Kafka 3-broker cluster | Current alert_stream Kafka prototype uses a single broker Kafka instance. Build a cluster of 3 brokers and ensure that if one is inaccessible the others can receive and emit alerts. Also, the current alert_stream Kafka cluster loses all data if a broker container goes down because the Kafka container houses all the data. Figure out how and where to mount external volumes for Kafka and Zookeeper so that data is recovered and so that consumer offsets are recovered (consumers unaffected) if one or all brokers go down. | 8
2,681 | DM-13550 | 02/13/2018 18:39:47 | Fix Qserv docker image build | Qserv docker image builds broken after merge of DM-13458 (because libuuid prerequisite is not captured in Debian prereq install script) | 0.5
2,682 | DM-13554 | 02/14/2018 11:52:14 | Build starlink_ast with opt=3 | At present starlink_ast is built using optimization level 2. I propose to build it with our standard optimization level of 3 in the hope of increasing performance. | 0.5
2,683 | DM-13556 | 02/14/2018 12:04:13 | Upgrade sqlalchemy to 1.2 | This is to upgrade lsst/sqlalchemy from 1.0.8 to 1.2.2, with two years' worth of improvements and bug fixes, which will facilitate the dax_v1 implementation in retrieving/mapping data from the SQL database. | 5
2,684 | DM-13557 | 02/14/2018 12:54:13 | Minor config doc fixes for SourceDetectionTask | These are being generated in real-time when answering questions about detection in the DRP team meeting, but I want to get everyone to review my improvements instead of just merging them (even though that's formally permitted). | 0.5
2,685 | DM-13562 | 02/14/2018 15:53:49 | Migrate base docker containers to Centos7 + Devtoolset6 | The new antlr4 package requires gcc 5.0 or greater, but Debian Jessie, on which the Qserv base containers are based, only supports gcc 4.9. Upgrading the base containers to Centos7 + Devtoolset6 seems the best option. This should also address currently busted Travis CI runs, and buildability of containers off latest master (sphgeom pybind11 compiler compatibility issue). | 1
2,686 | DM-13565 | 02/15/2018 00:42:54 | Put correct copyright/license headers in all jointcal files | Jointcal got caught during the RFC-45 copyright/license uncertainty and many of its files don't follow any of the proposals there. We should get them cleaned up, following whatever standard was finalized in that RFC. | 1
2,687 | DM-13568 | 02/15/2018 11:54:56 | nightly-release d_2018_02_15 -- some eups tarball builds failing with pip install error | https://ci.lsst.codes/blue/organizations/jenkins/release%2Ftarball/detail/tarball/1978/pipeline {code:java} + pip install awscli Collecting awscli Downloading awscli-1.14.39-py2.py3-none-any.whl (1.2MB) Collecting rsa<=3.5.0,>=3.1.2 (from awscli) Using cached rsa-3.4.2-py2.py3-none-any.whl Collecting PyYAML<=3.12,>=3.10 (from awscli) Collecting botocore==1.8.43 (from awscli) Could not find a version that satisfies the requirement botocore==1.8.43 (from awscli) (from versions: 0.4.1, 0.4.2, 0.5.0, 0.5.1, 0.5.2, 0.5.3, 0.5.4, 0.6.0, 0.7.0, 0.8.0, 0.8.1, 0.8.2, 0.8.3, 0.9.0, 0.9.1, 0.9.2, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.13.1, 0.14.0, 0.15.0, 0.15.1, 0.16.0, 0.17.0, 0.18.0, 0.19.0, 0.20.0, 0.21.0, 0.22.0, 0.23.0, 0.24.0, 0.25.0, 0.26.0, 0.27.0, 0.28.0, 0.29.0, 0.30.0, 0.31.0, 0.32.0, 0.33.0, 0.34.0, 0.35.0, 0.36.0, 0.37.0, 0.38.0, 0.39.0, 0.40.0, 0.41.0, 0.42.0, 0.43.0, 0.44.0, 0.45.0, 0.46.0, 0.47.0, 0.48.0, 0.49.0, 0.50.0, 0.51.0, 0.52.0, 0.53.0, 0.54.0, 0.55.0, 0.56.0, 0.57.0, 0.58.0, 0.59.0, 0.60.0, 0.61.0, 0.62.0, 0.63.0, 0.64.0, 0.65.0, 0.66.0, 0.67.0, 0.68.0, 0.69.0, 0.70.0, 0.71.0, 0.72.0, 0.73.0, 0.74.0, 0.75.0, 0.76.0, 0.77.0, 0.78.0, 0.79.0, 0.80.0, 0.81.0, 0.82.0, 0.83.0, 0.84.0, 0.85.0, 0.86.0, 0.87.0, 0.88.0, 0.89.0, 0.90.0, 0.91.0, 0.92.0, 0.93.0, 0.94.0, 0.95.0, 0.96.0, 0.97.0, 0.98.0, 0.99.0, 0.100.0, 0.101.0, 0.102.0, 0.103.0, 0.104.0, 0.105.0, 0.106.0, 0.107.0, 0.108.0, 0.109.0, 1.0.0a1, 1.0.0a2, 1.0.0a3, 1.0.0b1, 1.0.0b2, 1.0.0b3, 1.0.0rc1, 1.0.0, 1.0.1, 1.1.0, 1.1.1, 1.1.2, 1.1.3, 1.1.4, 1.1.5, 1.1.6, 1.1.7, 1.1.8, 1.1.9, 1.1.10, 1.1.11, 1.1.12, 1.2.0, 1.2.1, 1.2.2, 1.2.3, 1.2.4, 1.2.5, 1.2.6, 1.2.7, 1.2.8, 1.2.9, 1.2.10, 1.2.11, 1.3.0, 1.3.1, 1.3.2, 1.3.3, 1.3.4, 1.3.5, 1.3.6, 1.3.7, 1.3.8, 1.3.9, 1.3.10, 1.3.11, 1.3.12, 1.3.13, 1.3.14, 1.3.15, 1.3.16, 1.3.17, 1.3.18, 1.3.19, 1.3.20, 1.3.21, 1.3.22, 1.3.23, 1.3.24, 1.3.25, 1.3.26, 1.3.27, 1.3.28, 1.3.29, 1.3.30, 1.4.0, 1.4.1, 1.4.2, 1.4.3, 1.4.4, 1.4.5, 1.4.6, 1.4.7, 1.4.8, 1.4.9, 1.4.10, 1.4.11, 1.4.12, 1.4.13, 1.4.14, 1.4.15, 1.4.16, 1.4.17, 1.4.18, 1.4.19, 1.4.20, 1.4.21, 1.4.22, 1.4.23, 1.4.24, 1.4.25, 1.4.26, 1.4.27, 1.4.28, 1.4.29, 1.4.30, 1.4.31, 1.4.32, 1.4.33, 1.4.34, 1.4.35, 1.4.36, 1.4.37, 1.4.38, 1.4.39, 1.4.40, 1.4.41, 1.4.42, 1.4.43, 1.4.44, 1.4.46, 1.4.47, 1.4.48, 1.4.49, 1.4.50, 1.4.51, 1.4.52, 1.4.53, 1.4.54, 1.4.55, 1.4.56, 1.4.57, 1.4.58, 1.4.59, 1.4.60, 1.4.61, 1.4.62, 1.4.63, 1.4.64, 1.4.65, 1.4.66, 1.4.67, 1.4.68, 1.4.69, 1.4.70, 1.4.71, 1.4.72, 1.4.73, 1.4.74, 1.4.75, 1.4.76, 1.4.77, 1.4.78, 1.4.79, 1.4.80, 1.4.81, 1.4.82, 1.4.83, 1.4.84, 1.4.85, 1.4.86, 1.4.87, 1.4.88, 1.4.89, 1.4.90, 1.4.91, 1.4.92, 1.4.93, 1.5.0, 1.5.1, 1.5.2, 1.5.3, 1.5.4, 1.5.5, 1.5.6, 1.5.7, 1.5.8, 1.5.9, 1.5.10, 1.5.11, 1.5.12, 1.5.13, 1.5.14, 1.5.15, 1.5.16, 1.5.17, 1.5.18, 1.5.19, 1.5.20, 1.5.21, 1.5.22, 1.5.23, 1.5.24, 1.5.25, 1.5.26, 1.5.27, 1.5.28, 1.5.29, 1.5.30, 1.5.31, 1.5.32, 1.5.33, 1.5.34, 1.5.35, 1.5.36, 1.5.37, 1.5.38, 1.5.39, 1.5.40, 1.5.41, 1.5.42, 1.5.43, 1.5.44, 1.5.45, 1.5.46, 1.5.47, 1.5.48, 1.5.49, 1.5.50, 1.5.51, 1.5.52, 1.5.53, 1.5.54, 1.5.55, 1.5.56, 1.5.57, 1.5.58, 1.5.59, 1.5.60, 1.5.61, 1.5.62, 1.5.63, 1.5.64, 1.5.65, 1.5.66, 1.5.67, 1.5.68, 1.5.69, 1.5.70, 1.5.71, 1.5.72, 1.5.73, 1.5.74, 1.5.75, 1.5.76, 1.5.77, 1.5.78, 1.5.79, 1.5.80, 1.5.81, 1.5.82, 1.5.83, 1.5.84, 1.5.85, 1.5.86, 1.5.87, 1.5.88, 1.5.89, 1.5.90, 1.5.91, 1.5.92, 1.5.93, 1.5.94, 1.5.95, 1.6.0, 1.6.1, 1.6.2, 1.6.3, 1.6.4, 1.6.5, 1.6.6, 1.6.7, 1.6.8, 1.7.0, 1.7.1, 1.7.2, 1.7.3, 1.7.4, 1.7.5, 1.7.6, 1.7.7, 1.7.8, 1.7.9, 1.7.10, 1.7.11, 1.7.12, 1.7.13, 1.7.14, 1.7.15, 1.7.16, 1.7.17, 1.7.18, 1.7.19, 1.7.20, 1.7.21, 1.7.22, 1.7.23, 1.7.24, 1.7.25, 1.7.26, 1.7.27, 1.7.28, 1.7.29, 1.7.30, 1.7.31, 1.7.32, 1.7.33, 1.7.34, 1.7.35, 1.7.36, 1.7.37, 1.7.38, 1.7.39, 1.7.40, 1.7.41, 1.7.42, 1.7.43, 1.7.44, 1.7.45, 1.7.46, 1.7.47, 1.7.48, 1.8.0, 1.8.1, 1.8.2, 1.8.3, 1.8.4, 1.8.5, 1.8.6, 1.8.7, 1.8.8, 1.8.9, 1.8.10, 1.8.11, 1.8.12, 1.8.13, 1.8.14, 1.8.15, 1.8.16, 1.8.17, 1.8.18, 1.8.19, 1.8.20, 1.8.21, 1.8.22, 1.8.23, 1.8.24, 1.8.25, 1.8.26, 1.8.27, 1.8.28, 1.8.29, 1.8.30, 1.8.31, 1.8.32, 1.8.33, 1.8.34, 1.8.35, 1.8.36, 1.8.37, 1.8.38, 1.8.39, 1.8.40, 1.8.41, 1.8.42) No matching distribution found for botocore==1.8.43 (from awscli) script returned exit code 1 {code} | 1
2,688 | DM-13571 | 02/15/2018 12:48:45 | fix plot_photoCalib bounds | [~jbosch] identified the bug that caused {{plot_photoCalib.py}} to not be able to plot meas_mosaic output. The fix is to change the linspace bounds. | 0.5
2,689 | DM-13574 | 02/15/2018 16:43:22 | Adding support for building executables of C++ applications | The current build system for the *qserv* package lacks support for building executables from C++ applications. The goal of this ticket is to fix this. | 1
2,690 | DM-13575 | 02/15/2018 18:30:09 | fix minor bug in photometry ipynb | While looking at HSC data, I noticed that my on-sky dependence plots were not handling the initial dataset correctly. I've fixed that, but it should be committed separately from other open tickets. | 0.5
2,691 | DM-13583 | 02/19/2018 08:56:51 | Strategize DMTN upload to Docushare | Propose a plan for handling DM tech notes in Docushare. There are a number of issues: * Generating a PDF from the rst tech note * Allocating numbers in Docushare. | 5
2,692 | DM-13588 | 02/19/2018 22:52:50 | Document API for current L1DB prototype | Pieces of the API that I have now in the L1DB prototype could be useful for the AP group, so I need to make sure it has reasonable documentation. | 2
2,693 | DM-13599 | 02/20/2018 13:28:57 | Update copyright info following RFC-45 | Now that RFC-45 is official, update the daf_butler package to be compliant. | 0.5
2,694 | DM-13600 | 02/20/2018 14:14:17 | Add YAML formatter | Add a YAML formatter to daf_butler. | 1
2,695 | DM-13608 | 02/21/2018 09:01:13 | newinstall should force LANG=C | [~sogo.mineo] notes that: {code:java} the call to `curl` in `n8l::up2date_check` (in `newinstall.sh`) to be prefixed with `env LANG=C` . The message shown by curl when files differ is not always "differ" due to i18n.{code} The easiest way to make sure that the diffs are always legible is to force {{LANG=C}} everywhere. | 2
2,696 | DM-13609 | 02/21/2018 12:13:52 | Undo EXTRACT_PRIVATE override in ip_diffim | {{ip_diffim}}'s Doxygen settings were modified in 2014 to generate HTML documentation for both public and private members. Because of how the build system handles Doxygen overrides, this change had the (presumably unintended) consequence that the Stack-wide documentation also exposed private class members, whether or not they had documentation comments. Per RFC-451, the override will be removed, so that both {{ip_diffim}}'s and the Stack's documentation includes only API members. | 1
2,697 | DM-13612 | 02/21/2018 16:46:52 | Upgrading SQLAlchemy from 1.0.8 to 1.2.2 | The third-party SQLAlchemy is being used in DAX and Sims. FYI, release 1.2.2 is 2+ years newer than 1.0.8, with lots of improvements and new features, as documented in this link: [http://docs.sqlalchemy.org/en/latest/changelog/migration_12.html] which would greatly aid the future work related to DB access in DAX services. | 2
2,698 | DM-13623 | 02/22/2018 12:36:58 | Make the Ipac Jenkins send build messages to the IPAC slack | The Ipac Jenkins currently sends out email on build failures. Make it send messages to an IPAC slack channel. The message could contain at least (if possible): * URL and date of the build/console * URL of the PR to test if the build needs to be tested * Author of the last commit that triggered the build * Last changes or git log | 2
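
The table above is a slice of a Jira story-point dataset, one LSST DM ticket per row, with the schema given in the header. As a minimal sketch of how a table with this schema could be loaded for analysis, assuming a CSV export of the rows is available (the filename `lsst_storypoints.csv` is hypothetical, not something this page provides):

```python
import pandas as pd

# Column types follow the schema in the table header above.
# The filename is an assumption; substitute an actual export of this dataset.
df = pd.read_csv(
    "lsst_storypoints.csv",
    dtype={"id": "int64", "issuekey": "string", "title": "string",
           "description": "string", "storypoint": "float64"},
    parse_dates=["created"],           # created is "MM/DD/YYYY HH:MM:SS"
    date_format="%m/%d/%Y %H:%M:%S",   # requires pandas >= 2.0
)

# Example: distribution of story points, a rough proxy for estimated effort.
print(df["storypoint"].describe())
```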