id (int64, 0-5.38k) | issuekey (string, 4-16 chars) | created (string, 19 chars) | title (string, 5-252 chars) | description (string, 1-1.39M chars) | storypoint (float64, 0-100) |
---|---|---|---|---|---|
99 |
DM-778
|
05/28/2014 22:22:14
|
Restructure and package logging prototype
|
Restructure and package the log4cxx-based prototype (currently in branch u/bchick/protolog). It should go into a package called "log"
| 8 |
100 |
DM-780
|
05/28/2014 22:37:50
|
Access patterns for data store that supports data distribution
|
Data distribution related data store includes things like: chunk --> node mapping, locations of chunk replicas, runtime information about nodes (and maybe also node configuration?). Need to understand access patterns - who needs to access, how frequently, etc.
| 5 |
101 |
DM-781
|
05/28/2014 22:41:33
|
research mysql cluster ndb
|
Check out MySQL Cluster NDB from the perspective of data distribution - could it be potentially useful to store data related to data distribution?
| 2 |
102 |
DM-783
|
05/29/2014 10:40:20
|
Disable failing test cases in automated tests
|
There are currently 4 test cases failing in our automated tests. Until we have a fix, we want to disable them.
| 1 |
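The ticket does not name the failing cases; as an illustration, a minimal sketch of disabling a known-bad case with Python's standard unittest skip decorator (the test class and method names below are hypothetical):

```python
import unittest

class AutomatedQueryTests(unittest.TestCase):
    # Hypothetical failing case; the real test names are not given in the ticket.
    @unittest.skip("Known failure, disabled until a fix lands (DM-783)")
    def testFailingCase(self):
        self.fail("known failure")

    def testPassingCase(self):
        self.assertEqual(1 + 1, 2)

if __name__ == "__main__":
    unittest.main()
```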
103 |
DM-786
|
05/29/2014 12:48:39
|
JOIN queries are broken
|
Running a simple query that does a join: {code} SELECT s.ra, s.decl, o.raRange, o.declRange FROM Object o JOIN Source s USING (objectId) WHERE o.objectId = 390034570102582 AND o.latestObsTime = s.taiMidPoint; {code} results in czar crashing with: {code} 2terminate called after throwing an instance of 'std::logic_error' what(): Attempted subchunk spec list without subchunks. {code} This query has been taken from integration tests (case01, 0003_selectMetadataForOneGalaxy.sql)
| 3 |
104 |
DM-794
|
05/29/2014 19:02:42
|
SQL injection in czar/proxy.py
|
Running automated tests for some queries I observe python exceptions in czar log which look like this: {code} 20140529 19:47:19.364371 0x7faacc003550 INF <py> Query dispatch (7) toUnhandled exception in thread started by <function waitAndUnlock at 0x18cd8c0> Traceback (most recent call last): File "/usr/local/home/salnikov/qserv-master/build/dist/lib/python/lsst/qserv/czar/proxy.py", line 78, in waitAndUnlock lock.unlock() File "/usr/local/home/salnikov/qserv-master/build/dist/lib/python/lsst/qserv/czar/proxy.py", line 65, in unlock self._saveQueryMessages() File "/usr/local/home/salnikov/qserv-master/build/dist/lib/python/lsst/qserv/czar/proxy.py", line 87, in _saveQueryMessages self.db.applySql(Lock.writeTmpl % (self._tableName, chunkId, code, msg, timestamp)) File "/usr/local/home/salnikov/qserv-master/build/dist/lib/python/lsst/qserv/czar/db.py", line 95, in applySql c.execute(sql) File "/u2/salnikov/STACK/Linux64/mysqlpython/1.2.3+8/lib/python/MySQL_python-1.2.3-py2.7-linux-x86_64.egg/MySQLdb/cursors.py", line 174, in execute self.errorhandler(self, exc, value) File "/u2/salnikov/STACK/Linux64/mysqlpython/1.2.3+8/lib/python/MySQL_python-1.2.3-py2.7-linux-x86_64.egg/MySQLdb/connections.py", line 36, in defaulterrorhandler raise errorclass, errorvalue _mysql_exceptions.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'r' AND sce.tract=0 AND sce.patch='159,3';', 1401410839.000000)' at line 1") ok 0.000532 seconds {code} I believe this is due to how query string is being constructed in czar/proxy.py: {code:py} class Lock: writeTmpl = "INSERT INTO %s VALUES (%d, %d, '%s', %f);" # ................... self.db.applySql(Lock.writeTmpl % (self._tableName, chunkId, code, msg, timestamp)) {code} If {{msg}} happens to contain quotes then resulting query is broken. One should not use Python formatting to construct query strings, instead the parameters should be passed directly to {{cursor.execute()}} method.
| 2 |
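As a rough sketch of the fix the ticket proposes, the values can be handed to MySQLdb's {{cursor.execute()}} as bound parameters so the driver escapes them; only the table name, which cannot be a bound parameter, is still interpolated. The helper and variable names below are illustrative, not the actual czar/proxy.py code:

```python
# Illustrative sketch, not the actual czar/proxy.py code.
WRITE_SQL = "INSERT INTO %s VALUES (%%s, %%s, %%s, %%s)"  # %%s -> driver placeholders

def save_query_message(cursor, table_name, chunk_id, code, msg, timestamp):
    sql = WRITE_SQL % table_name          # table names cannot be bound parameters
    # MySQLdb quotes each bound value, so quotes inside `msg` no longer break the statement.
    cursor.execute(sql, (chunk_id, code, msg, timestamp))
```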
105 |
DM-800
|
05/31/2014 00:53:54
|
Zookeeper times out
|
I noticed that running some queries, leaving the system up, then returning a few hours later and running more queries can result in: {code} ZOO_ERROR@handle_socket_error_msg@1723: Socket [127.0.0.1:12181] zk retcode=-4, errno=112(Host is down): failed while receiving a server response {code} It needs to be investigated (if we can reproduce it).
| 3 |
106 |
DM-814
|
06/03/2014 08:29:59
|
Cleanup in core/examples and core/doc
|
- core/examples and core/doc seem to be out of date. Some cleanup here would be welcome.
| 1 |
107 |
DM-817
|
06/03/2014 17:35:51
|
qserv has to use boost from stack
|
To quote Jacek and KT: {code} Andy, re dm-751, KT says never use the system version. J. {code} So we need to switch qserv to eups-boost. This should be easy once DM-751 is done, just add boost to qserv.table. Then one can remove conditional part of {{BoostChecker}} which works with system-installed boost.
| 1 |
108 |
DM-827
|
06/09/2014 10:05:17
|
Reimplement C++/Python Exception Translation
|
I'd like to reimplement our Swig bindings for C++ exceptions to replace the "LsstCppException" class with a more user-friendly mechanism. We'd have a Python exception hierarchy that mirrors the C++ hierarchy (generated automatically with the help of a few Swig macros). These wrapped exceptions could be thrown in Python as if they were pure-Python exceptions, and could be caught in Python in the same way regardless of the language in which they were thrown. We're doing this as part of a "Measurement" sprint because we'd like to define custom exceptions for different kinds of common measurement errors, and we want to be able to raise those exceptions in either language.
| 8 |
109 |
DM-829
|
06/09/2014 10:19:41
|
Algorithm API without (or with optional) Result objects
|
In this design prototype, I'll see how much simpler things could be made by making the main algorithm interface one that sets record values directly, instead of going through an intermediate Result object. Ideally the Result objects would still be an option, but they may not be standardized or reusable.
| 3 |
110 |
DM-832
|
06/09/2014 12:07:51
|
add persistable class for aperture corrections
|
We need to create a persistable, map-like container class to hold aperture corrections, with each element of the container being an instance of the class to be added in DM-740. A prototype has been developed on DM-797 on the HSC side: https://hsc-jira.astro.princeton.edu/jira/browse/HSC-797 and the corresponding code can be found on these changesets: https://github.com/HyperSuprime-Cam/afw/compare/32d7a8e7b75da6f5327fee65515ee59a5b09f6c7...tickets/DM-797
| 2 |
111 |
DM-833
|
06/09/2014 12:09:16
|
implement coaddition for aperture corrections
|
We need to be able to coadd aperture corrections in much the same way we coadd PSFs. See the HSC-side HSC-798 and HSC-897 implementation for a prototype: https://hsc-jira.astro.princeton.edu/jira/browse/HSC-798 https://hsc-jira.astro.princeton.edu/jira/browse/HSC-897 with code here: https://github.com/HyperSuprime-Cam/meas_algorithms/compare/d2782da175c...u/jbosch/DM-798 https://github.com/HyperSuprime-Cam/meas_algorithms/compare/c4fcab3251...u/price/HSC-897a https://github.com/HyperSuprime-Cam/pipe_tasks/compare/6eb48e90be12d...u/price/HSC-897a
| 3 |
112 |
DM-837
|
06/09/2014 13:00:49
|
Rewrite multiple-aperture photometry class
|
We've never figured out how to handle wrapping multiple-aperture photometry algorithms. They can't use the existing Result objects - at least not out of the box. We should try to write a new multiple-aperture photometry algorithm from the ground up, using the old ones on the HSC branch as a guide, but not trying to transfer the old code over. The new one should: - Have the option of using elliptical apertures (as defined by the shape slot) or circular apertures. - Have a transition radius at which we switch from the sinc photometry algorithm to the naive algorithm (for performance reasons).
| 2 |
113 |
DM-840
|
06/09/2014 16:27:00
|
Change code so ImageOrigin must be specified (temporary)
|
Image-like classes have a getBBox method and various constructors that use an ImageOrigin argument which in most or all cases defaults to LOCAL. As the first stage in cleaning this up, try to break code that uses the default as follows: * Remove the default from getBBox(ImageOrigin) so an origin must be specified. * Change the default origin of constructors to a temporary new value UNDEFINED * Modify code that uses image origin to fail if origin is needed (it is ignored if bbox is empty) and is UNDEFINED. Note: this is less safe than changing constructors to not have a default value for origin, because the error will be caught at runtime rather than compile time. However, that is messy because then the bounding box will also have to be always specified, and possibly an HDU, so it would be a much more intrusive change.
| 2 |
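A hedged sketch of what calling code looks like once the getBBox default is gone, assuming the usual lsst.afw.image origin constants and an existing {{maskedImage}} object (this is an illustration, not stack code):

```python
import lsst.afw.image as afwImage

# With the default removed, every caller must say which origin it means.
# `maskedImage` is assumed to be an existing image-like object.
bboxParent = maskedImage.getBBox(afwImage.PARENT)  # absolute (parent) coordinates
bboxLocal = maskedImage.getBBox(afwImage.LOCAL)    # relative to the image's own xy0
```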
114 |
DM-841
|
06/09/2014 16:29:21
|
Change data butler I/O of image-like objects to require imageOrigin if bbox specified (temporary)
|
As part of making PARENT the default for image origin, change the data butler to require that imageOrigin be specified if bbox is specified when reading or writing image-like objects. Note: this ticket turns out to be unnecessary, as the few necessary changes are done as part of DM-840.
| 2 |
115 |
DM-843
|
06/09/2014 16:43:52
|
Restore names of methods that return pixel iterators and locators
|
Restore the names of methods that return pixel iterators and pixel locators on image-like classes. (This is part of the final stage of eliminating LOCAL pixel indexing).
| 2 |
116 |
DM-845
|
06/09/2014 16:46:45
|
Eliminate image origin argument from butler for (un)persisting image-like objects
|
Eliminate the image origin argument for butler get and put when dealing with image-like objects.
| 2 |
117 |
DM-854
|
06/10/2014 00:47:31
|
duplicate column name when running near neighbor query
|
Running a simplified version of the near neighbor query on test data from case01: {code} SELECT DISTINCT o1.objectId, o2.objectId FROM Object o1, Object o2 WHERE scisql_angSep(o1.ra_PS, o1.decl_PS, o2.ra_PS, o2.decl_PS) < 1 AND o1.objectId <> o2.objectId {code} results in an error on the worker: {code} Foreman:Broken! ,q_38f9QueryExec---Duplicate column name 'objectId' Unable to execute query: CREATE TABLE r_13237cd4cfc9e0fa01497bcf\ 67a91add2_6630_0 SELECT o1.objectId,o2.objectId FROM Subchunks_LSST_6630.Object_6630_0 AS o1,Subchunks_LSST_6630.Object_6630_0 AS o2\ WHERE scisql_angSep(o1.ra_PS,o1.decl_PS,o2.ra_PS,o2.decl_PS)<1 AND o1.objectId<>o2.objectId; {code} It is fairly obvious what is going on. "SELECT t1.x, t2.x" is perfectly valid, but if we add "INSERT INTO SELECT t1.x, t2.x", we need to add names, e.g. something like "INSERT INTO SELECT t1.x as x1, t2.x as x2"
| 8 |
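A purely illustrative Python sketch of the rewrite described above: when the user's SELECT is wrapped in a CREATE TABLE ... SELECT, duplicate output columns need generated aliases. This helper is hypothetical; the real rewrite happens inside the czar's query analysis code:

```python
from collections import Counter

def alias_duplicate_columns(columns):
    """columns: list of (table_alias, column) pairs, e.g. [("o1", "objectId"), ("o2", "objectId")]."""
    counts = Counter(col for _, col in columns)
    parts = []
    for tbl, col in columns:
        expr = "%s.%s" % (tbl, col)
        if counts[col] > 1:
            expr += " AS %s_%s" % (tbl, col)  # disambiguate, e.g. o1.objectId AS o1_objectId
        parts.append(expr)
    return ", ".join(parts)

print(alias_duplicate_columns([("o1", "objectId"), ("o2", "objectId")]))
# -> o1.objectId AS o1_objectId, o2.objectId AS o2_objectId
```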
118 |
DM-863
|
06/11/2014 17:51:37
|
near neighbor does not return results
|
A query from qserv_testdata (case01/queries/1051_nn.sql) runs through Qserv, but it returns no results, while the same query run on mysql does return results. The exact query for qserv is: {code} SELECT o1.objectId AS objId FROM Object o1, Object o2 WHERE qserv_areaspec_box(0, 0, 0.2, 1) AND scisql_angSep(o1.ra_PS, o1.decl_PS, o2.ra_PS, o2.decl_PS) < 1 AND o1.objectId <> o2.objectId; {code}
| 1 |
119 |
DM-869
|
06/12/2014 14:20:20
|
disable extraneous warnings from boost (gcc 4.8)
|
Compiling qserv on ubuntu 14.04 (which comes with gcc 4.8.2) results in a huge number of warnings coming from boost. We should use the flag "-Wno-unused-local-typedefs".
| 0.5 |
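A minimal sketch of where such a flag could be added in an SCons-based build. Qserv uses its own scons setup, so the exact environment object differs; this SConstruct fragment is an assumption, not the actual build code:

```python
# SConstruct fragment (illustrative): append the flag so gcc 4.8 stops warning
# about unused local typedefs coming from boost headers.
env = Environment()
env.Append(CCFLAGS=["-Wno-unused-local-typedefs"])
```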
120 |
DM-873
|
06/13/2014 19:18:55
|
XLDB - strategic positioning
|
Discussions with strategic partners. Improving website and adding new context (community, speakers). 1-pager document
| 3 |
121 |
DM-874
|
06/16/2014 07:46:13
|
W'14 newinstall.sh picks up wrong python?
|
newinstall.sh fails with: Installing the basic environment ... Traceback (most recent call last): File "/tmp/test_lsst/eups/bin/eups_impl.py", line 11, in ? import eups.cmd File "/tmp/test_lsst/eups/python/eups/__init__.py", line 5, in ? from cmd import commandCallbacks File "/tmp/test_lsst/eups/python/eups/cmd.py", line 38, in ? import distrib File "/tmp/test_lsst/eups/python/eups/distrib/__init__.py", line 30, in ? from Repositories import Repositories File "/tmp/test_lsst/eups/python/eups/distrib/Repositories.py", line 8, in ? import server File "/tmp/test_lsst/eups/python/eups/distrib/server.py", line 1498 mapping = self._noReinstall if outVersion and outVersion.lower() == "noreinstall" else self._mapping ^ SyntaxError: invalid syntax Perhaps from running the wrong version of python. Full script/log is attached.
| 1 |
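The SyntaxError supports the "wrong python" theory: the failing eups line uses a conditional expression, which only parses on Python 2.5 or newer, and the "in ?" frames in the traceback are how pre-2.5 interpreters label module-level code. A generic illustration, with stand-in values rather than the real eups variables:

```python
# Stand-in values for illustration; the real code uses eups' own variables.
out_version = "noReinstall"
no_reinstall_mapping, default_mapping = {}, {"pkg": "v1"}

# Conditional-expression form (needs Python >= 2.5; this is what old pythons choke on):
mapping = no_reinstall_mapping if out_version.lower() == "noreinstall" else default_mapping

# Equivalent form that any Python 2.x interpreter parses:
if out_version.lower() == "noreinstall":
    mapping = no_reinstall_mapping
else:
    mapping = default_mapping
```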
122 |
DM-875
|
06/16/2014 10:11:49
|
lsst_dm_stack_demo
|
lsst_dm_stack_demo has obsolete benchmark files (circa Release 7.0) which fail to serve the purpose of validating, for the user, the correct functioning of a freshly built Release v8.0 stack. At the very least, the benchmark files should be regenerated for each official Release. Tasks: (1) Build the benchmark files for Release v8.0. (2) Debate (a) recommending the use of 'numdiff' to check if the output is within realistic bounds, or (b) developing another procedure to better show how the current algorithms compare to the algorithms used at the benchmarked Release. (3) Depending on the result of the debate on #2: for (a), provide the appropriate 'numdiff' command invocation in the manual; for (b), implement the new procedure.
| 40 |
123 |
DM-903
|
06/25/2014 19:14:47
|
SourceDetectionTask should only add flags.negative if config.thresholdParity == "both"
|
The SourceDetectionTask always adds "flags.negative" to the schema (if provided) but it is only used if config.thresholdParity == "both". As adding a field to a schema requires that the table passed to the run method have that field, this is a significant nuisance when reusing the task. Please change the code to only modify the schema if it's going to set the flag.
| 1 |
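A hedged sketch of the requested change: only touch the schema when the flag can actually be set. The field name and config option come from the ticket; the surrounding constructor shape is assumed, not copied from meas_algorithms:

```python
import lsst.pipe.base as pipeBase

class SourceDetectionTask(pipeBase.Task):
    # ConfigClass and the rest of the task are unchanged and omitted here.
    def __init__(self, schema=None, **kwds):
        pipeBase.Task.__init__(self, **kwds)
        self.negativeFlagKey = None
        # Only modify the caller's schema if the flag will actually be used.
        if schema is not None and self.config.thresholdParity == "both":
            self.negativeFlagKey = schema.addField(
                "flags.negative", type="Flag",
                doc="set if source was detected as significantly negative")
```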
124 |
DM-911
|
06/27/2014 20:20:25
|
Provide Task documentation for DipoleMeasurementTask
|
See Summary.
| 2 |
125 |
DM-913
|
06/27/2014 20:26:35
|
Provide Task documentation for ImagePsfMatchTask
|
See summary
| 2 |
126 |
DM-914
|
06/27/2014 20:27:19
|
Provide Task documentation for SnapPsfMatchTask
|
See summary
| 2 |
127 |
DM-933
|
06/30/2014 12:52:38
|
Photometric calibration uses a column "flux" not the specified filter unless a colour term is active
|
The photometric calibration code uses a field "flux" in the reference catalog to impose a magnitude limit. If a colour term is specified, it uses the primary and secondary filters to calculate the reference magnitude, but if there is no colour term it uses the column labelled "flux" and ignores the filtername. Please change the code so that "flux" is ignored, and the flux associated with filterName is used.
| 1 |
128 |
DM-951
|
07/08/2014 13:28:14
|
Add Doxygen documentation on rebuilds
|
Master-branch doxygen documentation should be rebuilt on every full master build.
| 20 |
129 |
DM-957
|
07/11/2014 11:00:01
|
Use aliases to clean up table version transition
|
The addition of schema aliases on DM-417 should allow us to clean up some of the transitional code added on DM-545, as we can now alias new versions of fields to the old ones and vice versa.
| 2 |
130 |
DM-964
|
07/15/2014 07:21:49
|
Include aliases in Schema introspection
|
Schema stringification and iteration should include aliases somehow. Likewise the extract() Python methods.
| 1 |
131 |
DM-966
|
07/15/2014 12:25:35
|
fix int/long conversion on 32-bit systems and selected 64-bit systems
|
tests/wrap.py fails in pex_config on 32-bit systems and some 64-bit systems (including Ubuntu 14.04) with the following: {code:no-linenum} tests/wrap.py ...EE.E. ====================================================================== ERROR: testDefaults (__main__.NestedWrapTest) Test that C++ Control object defaults are correctly used as defaults for Config objects. ---------------------------------------------------------------------- Traceback (most recent call last): File "tests/wrap.py", line 89, in testDefaults self.assert_(testLib.checkNestedControl(control, config.a.p, config.a.q, config.b)) File "/home/boutigny/CFHT/stack_5/build/pex_config/tests/testLib.py", line 987, in checkNestedControl return _testLib.checkNestedControl(*args) TypeError: in method 'checkNestedControl', argument 2 of type 'double' ====================================================================== ERROR: testInt64 (__main__.NestedWrapTest) Test that we can wrap C++ Control objects with int64 members. ---------------------------------------------------------------------- Traceback (most recent call last): File "tests/wrap.py", line 95, in testInt64 self.assert_(testLib.checkNestedControl(control, config.a.p, config.a.q, config.b)) File "/home/boutigny/CFHT/stack_5/build/pex_config/tests/testLib.py", line 987, in checkNestedControl return _testLib.checkNestedControl(*args) TypeError: in method 'checkNestedControl', argument 2 of type 'double' ====================================================================== ERROR: testReadControl (__main__.NestedWrapTest) Test reading the values from a C++ Control object into a Config object. ---------------------------------------------------------------------- Traceback (most recent call last): File "tests/wrap.py", line 82, in testReadControl config.readControl(control) File "/home/boutigny/CFHT/stack_5/build/pex_config/python/lsst/pex/config/wrap.py", line 212, in readControl __at=__at, __label=__label, __reset=__reset) File "/home/boutigny/CFHT/stack_5/build/pex_config/python/lsst/pex/config/wrap.py", line 217, in readControl self.update(__at=__at, __label=__label, **values) File "/home/boutigny/CFHT/stack_5/build/pex_config/python/lsst/pex/config/config.py", line 515, in update field.__set__(self, value, at=at, label=label) File "/home/boutigny/CFHT/stack_5/build/pex_config/python/lsst/pex/config/config.py", line 310, in __set__ raise FieldValidationError(self, instance, e.message) FieldValidationError: Field 'a.q' failed validation: Value 4 is of incorrect type long. Expected type int For more information read the Field definition at: File "/home/boutigny/CFHT/stack_5/build/pex_config/python/lsst/pex/config/wrap.py", line 184, in makeConfigClass fields[k] = FieldCls(doc=doc, dtype=dtype, optional=True) And the Config definition at: File "/home/boutigny/CFHT/stack_5/build/pex_config/python/lsst/pex/config/wrap.py", line 131, in makeConfigClass cls = type(name, (base,), {"__doc__":doc}) ---------------------------------------------------------------------- Ran 8 tests in 0.017s FAILED (errors=3) {code} There is a partial fix on u/jbosch/intwrappers; this seems to work for Ubuntu 14.04, but not on 32-bit systems.
| 2 |
132 |
DM-967
|
07/16/2014 10:56:37
|
qserv-configure.py is broken in master
|
It looks like there was a bug introduced either during the merge of DM-622 with master or right before that. Running {{qserv-configure.py}} from master fails now: {code} $ qserv-configure.py File "/usr/local/home/salnikov/qserv-master/build/dist/bin/qserv-configure.py", line 229 ("Do you want to update user configuration file (currently pointing ^ SyntaxError: EOL while scanning string literal {code} I assign this to myself, Fabrice is on vacation now and we need to fix this quickly.
| 1 |
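The error means a string literal is opened but the line ends before the closing quote. The rest of the broken message is not shown in the log, so "<...>" below is a placeholder; the two forms shown are the standard ways to split a long literal:

```python
# Broken shape (schematically) -- the opening quote is never closed on its line:
#   print ("Do you want to update user configuration file (currently pointing
#           <...>")
# Valid alternatives: implicit adjacent-literal concatenation ...
msg = ("Do you want to update user configuration file (currently pointing "
       "<...>")
# ... or an explicit backslash continuation.
msg = "Do you want to update user configuration file (currently pointing " \
      "<...>"
print(msg)
```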
133 |
DM-976
|
07/18/2014 09:51:02
|
Detailed documentation for meas_base tasks
|
We should follow RHL's example for detailed task documentation and document all meas_base tasks.
| 2 |
134 |
DM-977
|
07/18/2014 09:52:29
|
Documentation audit and cleanup for meas_base plugins
|
Many meas_base Plugins and Algorithms have poor documentation, including several whose documentation is a copy/paste relic from some other algorithm. These need to be fixed.
| 2 |
135 |
DM-978
|
07/18/2014 09:56:09
|
add base class for measurement tasks
|
We should consider adding a base class for measurement tasks (SingleFrameMeasurementTask, ForcedMeasurementTask) that includes the callMeasure methods. I'm hoping this will help clean up callMeasure and improve code reuse.
| 1 |
136 |
DM-980
|
07/18/2014 14:40:06
|
convert measurement algorithms in ip_diffim
|
ip_diffim includes a few measurement algorithms which need to be converted to the new framework.
| 5 |
137 |
DM-981
|
07/18/2014 14:49:33
|
convert measurement algorithms in meas_extensions_shapeHSM
|
This is a low-priority ticket to replace the old-style plugins in meas_extensions_shapeHSM with new ones compatible with meas_base. As this isn't a part of the main-line stack, we should delay it until the rest of the meas_base conversion is nearly (or perhaps fully) complete.
| 3 |
138 |
DM-982
|
07/18/2014 14:50:28
|
convert meas_extensions_photometryKron to new measurement framework
|
This is a low-priority ticket to replace the old-style plugins in meas_extensions_photometryKron with new ones compatible with meas_base. As this isn't a part of the main-line stack, we should delay it until the rest of the meas_base conversion is nearly (or perhaps fully) complete.
| 3 |
139 |
DM-984
|
07/21/2014 14:48:27
|
allow partial measurement results to be set when error flag is set
|
We need to be able to return values at the same time that an error flag is set. The easiest way to do this is to have Algorithms take a Result object as an output argument rather than return it. We'll revisit this design later.
| 2 |
140 |
DM-989
|
07/24/2014 12:05:05
|
.my.cnf in user HOME directory breaks setup script
|
Presence of {{.my.cnf}} file in the user HOME directory crashes {{qserv-configure.py}} script if parameters in {{.my.cnf}} conflict with parameters in {{qserv.conf}}. How to reproduce: * create .my.cnf file in the home directory: {code} [client] user = anything # host/port and/or socket host = 127.0.0.1 port = 3306 socket = /tmp/mysql.sock {code} * try to run {{qserv-configure}}, it fails with error: {code} /usr/local/home/salnikov/qserv-run/u.salnikov.DM-595/tmp/configure/mysql.sh: connect: Connection refused /usr/local/home/salnikov/qserv-run/u.salnikov.DM-595/tmp/configure/mysql.sh: line 13: /dev/tcp/127.0.0.1/23306: Connection refused ERROR 2003 (HY000): Can't connect to MySQL server on '127.0.0.1' (111) {code} It looks like {{~/.my.cnf}} may be a left-over from some earlier qserv installation. If I remove it and re-run {{qserv-configure.py}} now it's not created anymore. Maybe worth adding some kind of protection to {{qserv-configure.py}} in case other users have this file in their home directory.
| 2 |
141 |
DM-991
|
07/24/2014 19:51:44
|
add query involving a blob to the integration tests
|
We need to add a query (or more?) to qserv_testdata that involves blobs. Blobs are interesting because they might break some parts of qserv if we fail to escape things properly, etc.
| 2 |
142 |
DM-993
|
07/24/2014 23:01:32
|
improve message from qserv_testdata
|
Currently, when I try to run qserv-benchmark but qserv_testdata was not setup, I am getting {code} CRITICAL Unable to find tests datasets. -- FOR EUPS USERS : Please run : eups distrib install qserv_testdata setup qserv_testdata FOR NON-EUPS USERS : Please fill 'testdata_dir' value in ~/.lsst/qserv.conf with the path of the directory containing tests datasets or use --testdata-dir option. {code} It is important to note in the section for eups users that this has to be called BEFORE qserv is setup, otherwise it has no effect.
| 1 |
143 |
DM-999
|
07/28/2014 11:23:18
|
rename config file(s) in Qserv
|
Rename local.qserv.cnf to qserv-czar.cnf. It is quite likely there are some other config files that would make sense to rename. If you see some candidates, let's discuss them on qserv-l and do the renames.
| 1 |
144 |
DM-1001
|
07/28/2014 12:29:08
|
Modify assertAlmostEqual in ip_diffim subtractExposures.py unit test
|
In the unit test, the comparison self.assertAlmostEqual(skp1[nk][np], skp2[nk][np], 4) fails. However, if changed to self.assertTrue(abs(skp1[nk][np]-skp2[nk][np]) < 10**-4), which is the desired test, it succeeds. This ticket will remove all assertAlmostEqual calls from subtractExposures.py and replace them with a direct comparison of the absolute value of the differences.
| 1 |
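For context on why the two checks differ: assertAlmostEqual(x, y, 4) passes only when round(x - y, 4) == 0, i.e. roughly |x - y| < 0.5e-4, which is twice as strict as the intended 1e-4 tolerance. A small self-contained demonstration, with made-up values for illustration:

```python
import unittest

class ToleranceDemo(unittest.TestCase):
    def testExplicitTolerance(self):
        a, b = 1.00008, 1.00000                # differ by 8e-5
        # assertAlmostEqual(a, b, 4) would FAIL: round(a - b, 4) == 0.0001 != 0
        self.assertTrue(abs(a - b) < 1e-4)      # the ticket's explicit check passes

if __name__ == "__main__":
    unittest.main()
```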
145 |
DM-1004
|
07/29/2014 14:22:10
|
Provide Task documentation for ModelPsfMatchTask
|
See Description (it's currently called PsfMatch)
| 2 |
146 |
DM-1010
|
08/05/2014 13:39:40
|
fix names of meas_base plugins to match new naming standards
|
Some meas_base plugins still have old-style algorithm names.
| 1 |
147 |
DM-1012
|
08/05/2014 13:44:44
|
remove temporary workaround in new SkyCoord algorithm
|
SingleFrameSkyCoordPlugin is using the Footprint Peak, not the centroid slot. According to comments in the code, this is a workaround for some problem with centroids. This needs to be fixed.
| 1 |
148 |
DM-1013
|
08/05/2014 13:51:07
|
Classification should set flags upon failure
|
The classification algorithm claims it can never fail. It can, and should report this.
| 2 |
149 |
DM-1015
|
08/05/2014 14:29:56
|
convert GaussianFlux to use shape, centroid slots
|
We should clean up and simplify the GaussianFlux algorithm so it simply uses the shape and centroid slot values instead of either computing its own or having configurable field names for where to look these up.
| 1 |
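A hedged sketch of what the simplified algorithm would rely on: the slot accessors on an afw.table source record rather than per-algorithm configurable field names. Here {{record}} is assumed to be an existing SourceRecord; this is an illustration, not the algorithm code:

```python
# Illustration only: read the slot values instead of looking up configured field names.
centroid = record.getCentroid()   # Point2D from the centroid slot
shape = record.getShape()         # Quadrupole moments from the shape slot
# The algorithm would then evaluate the Gaussian-weighted flux at `centroid`
# with weight moments taken from `shape`.
```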
150 |
DM-1017
|
08/05/2014 15:51:57
|
fix testForced.py
|
testForced.py is currently passing even though it probably should be failing: it's trying to get centroid values from a source which has neither a valid centroid slot nor a Footprint with Peaks (I suspect because transforming a footprint might remove the peaks). Prior to DM-976, that would have caused a segfault; on DM-976, I've turned it into an exception, which is then turned into a warning by the measurement framework.
| 2 |
151 |
DM-1018
|
08/06/2014 11:40:26
|
Fix incorrect eupspkg config for astrometry_net
|
The clang patch from the 8.0.0 version was (correctly) deleted. However, the patch identity was still left in the eupspkg config's protocol. This will delete the last vestige of the formerly necessary clang patch.
| 2 |
152 |
DM-1022
|
08/08/2014 16:10:29
|
fix warnings related to libraries pulled through dependent package
|
This came up during migrating qserv to the new logging system, and it can be reproduced by taking log4cxx, see DM-983, essentially: {code} eups distrib install -c log4cxx 0.10.0.lsst1 -r http://lsst-web.ncsa.illinois.edu/~becla/distrib -r http://sw.lsstcorp.org/eupspkg {code} cloning the log package (contrib/log.git), building it and installing it in your stack, and finally taking the branch u/jbecla/DM-207 of qserv and building it. The warnings look like: {code}/usr/bin/ld: warning: libutils.so, needed by /usr/local/home/becla/qservDev/Linux64/log/1.0.0/lib/liblog.so, not found (try using -rpath or -rpath-link) /usr/bin/ld: warning: libpex_exceptions.so, needed by /usr/local/home/becla/qservDev/Linux64/log/1.0.0/lib/liblog.so, not found (try using -rpath or -rpath-link) /usr/bin/ld: warning: libbase.so, needed by /usr/local/home/becla/qservDev/Linux64/log/1.0.0/lib/liblog.so, not found (try using -rpath or -rpath-link) {code} They show up when I build the qserv package and are triggered by liblog. I suspect sconsUtils deals with that sort of issue, but since we have our own scons system for qserv it is not handled. Fabrice, can you try to find a reasonable solution for that? Thanks!
| 0.5 |
153 |
DM-1028
|
08/11/2014 14:46:59
|
qserv-version.sh produces incorrect version number
|
I have just installed qserv on a clean machine (this is in a new virtual machine running Ubuntu12.04) which got me version 2014_07.0 installed: {code} $ eups list qserv 2014_07.0 current b76 $ setup qserv $ eups list qserv 2014_07.0 current b76 setup $ echo $QSERV_DIR /opt/salnikov/STACK/Linux64/qserv/2014_07.0 {code} but the {{qserv-version.sh}} script still thinks that I'm running older version: {code} $ qserv-version.sh 2014_05.0 {code}
| 2 |
154 |
DM-1029
|
08/11/2014 15:12:47
|
"source" command is not in standard shell
|
{{qserv-start.sh}} script fails when installed on Ubuntu12.04: {code} $ ~/qserv-run/2014_05.0/bin/qserv-start.sh /home/salnikov/qserv-run/2014_05.0/bin/qserv-start.sh: 4: /home/salnikov/qserv-run/2014_05.0/bin/qserv-start.sh: source: not found /home/salnikov/qserv-run/2014_05.0/bin/qserv-start.sh: 6: /home/salnikov/qserv-run/2014_05.0/bin/qserv-start.sh: check_qserv_run_dir: not found {code} It complains about {{source}} command. {{source}} is not standard POSIX shell command, it is an extension which exists in many shells. Apparently in older Ubuntu version {{/bin/sh}} is stricter about non-standard features. To fix the script one either has to use standard . (dot) command or change shebang to {{#!/bin/bash}}. This of course applies to all our executable scripts.
| 2 |
155 |
DM-1038
|
08/11/2014 22:29:12
|
S15 Implement Query Mgmt in Qserv
|
Initial version of a system for managing queries run through Qserv. This includes capturing information about queries running in Qserv. Note: we are not dealing with query cost estimates here (they will be covered through DM-1490).
| 40 |
156 |
DM-1041
|
08/12/2014 14:13:17
|
eliminate confusing config side-effects in CalibrateTask
|
CalibrateTask does some unexpected things differently if you configure it certain ways, because it perceives certain processing as only being necessary to feed other steps. In particular, if you disable astrometry and photometric calibration, it only runs measurement once, because it assumes the only purpose of the post-PSF measurement is to feed those algorithms. This (as well as poor test coverage) made it easy to break CalibrateTask in the case where those options are disabled a few branches back. After conferring with Simon and Andy, we think the best solution is to remove this sort of conditional processing from CalibrateTask, which should also make it much easier to read. Instead, we'll always do both the initial and final phase of measurement, even if one of those phases is not explicitly being used within CalibrateTask itself.
| 1 |
157 |
DM-1045
|
08/13/2014 11:27:12
|
Create a permanent and accessible mapping of the BB# and the bNNN.
|
Create a permanent and accessible mapping of the BB# and the bNNN. The users are interested in the BB# since it is used to point to the STDIO file from the entire stack build. The bNNN is needed because the daily life of the developer revolves around the stack tagged alternately by the bNNN tags and/or the DM Release tags.
| 2 |
158 |
DM-1054
|
08/14/2014 11:14:23
|
init.d/qserv-czar needs LD_LIBRARY path
|
With the addition of log we now need to find some shared libraries from stack. Current version of qserv-czar init.d script does not capture LD_LIBRARY_PATH, so we should add it there.
| 0.5 |
159 |
DM-1055
|
08/14/2014 13:44:22
|
Remove unnecessary pieces from qserv czar config
|
The config file for the qserv czar has some items that are no longer relevant, and in this issue, we focus on the ones that are clearly the responsibility of our qserv css. This ticket includes: -- removing these items from the installation/configuration templates -- removing these items from sample configuration files -- removing these items from the code that reads in the configuration file and sets defaults for these items -- fixing things that seem to break as a result of this cleanup. danielw volunteers to assist on the last item, as needed.
| 2 |
160 |
DM-1058
|
08/18/2014 13:48:39
|
fix SubSchema handling of "." and "_"
|
SubSchema didn't get included in the rest of the switch from "." to "_" as a field name separator. As part of fixing this, we should also be able to simplify the code in the slot definers in SourceTable.
| 1 |
161 |
DM-1059
|
08/18/2014 14:00:16
|
track down difference in SdssShape implementation
|
The meas_base version of SdssShape produces slightly different outputs from the original version in meas_algorithms, but these should be identical. We should understand this difference rather than assume it's benign just because it's small.
| 2 |
162 |
DM-1067
|
08/19/2014 13:34:06
|
move algorithm implementations out of separate subdirectory
|
We should move the code in the algorithms subdirectory (and namespace) into the .cc files that correspond to individual algorithms. They should generally go into anonymous namespaces there. After doing so, we should do one more test to compare the meas_base and meas_algorithms implementations.
| 1 |
163 |
DM-1068
|
08/19/2014 14:11:39
|
audit and clean up algorithm flag and config usage
|
Check that meas_base plugins and algorithms have appropriate config options and flags (mainly, check that there are no unused config options or flags due to copy/paste relics).
| 1 |
164 |
DM-1070
|
08/19/2014 14:14:23
|
switch default table version to 1
|
Now that all tasks that use catalogs explicitly set the table version, it should be relatively straightforward to set the default version to 1 in afw. Code that cannot handle version > 0 tables should continue to explicitly set version=0.
| 2 |
165 |
DM-1071
|
08/19/2014 14:15:40
|
Switch default measurement tasks to meas_base
|
We should set the default measurement task in ProcessImageTask to SingleFrameMeasurementTask, and note that SourceMeasurementTask and the old forced photometry drivers are deprecated.
| 2 |
166 |
DM-1072
|
08/19/2014 14:17:02
|
create forced wrappers for algorithms
|
We have multiple algorithms in meas_base which could be used in forced mode but have no forced plugin. We should go through the algorithms we have implemented and create forced plugin wrappers for these.
| 1 |
167 |
DM-1073
|
08/19/2014 14:18:09
|
remove old forced photometry tasks
|
After meas_base has been fully integrated, remove the old forced photometry tasks from pipe_tasks
| 1 |
168 |
DM-1076
|
08/19/2014 14:52:21
|
convert afw::table unit tests to version 1
|
Most afw::table unit tests explicitly set version 0. We should change these to test the new behaviors, not the deprecated ones.
| 2 |
169 |
DM-1077
|
08/19/2014 15:01:35
|
Audit TCT recommendations to ensure that all standards updates were installed into Standards documents.
|
Audit TCT recommendations to ensure that all standards updates were installed into Standards documents. It was found that the meeting recorded in: [https://dev.lsstcorp.org/trac/wiki/Winter2012/CodingStandardsChanges] failed to include two recommendations: * recommended: 3-30: I find the Error suffix to be usually more appropriate than Exception. ** current: 3-30. Exception classes SHOULD be suffixed with Exception. * recommended but not specifically included: Namespaces in source files: we should use namespace blocks in source files, and prefer unqualified (or less-qualified) names within those blocks over global-namespace aliases. ** Rule 3-6 is an amalgam of namespace rules which doesn't quite have the particulars desired. FYI: The actual vote was to: "Allow namespace blocks in source code (cc) files." To simplify the future audit, all other recommendations in that specific meeting were verified as installed into the standards.
| 2 |
170 |
DM-1083
|
08/20/2014 12:50:52
|
Fix overload problems in SourceCatalog.append and .extend
|
This example fails with an exception: {code:py} import lsst.afw.table as afwTable schema = afwTable.SourceTable.makeMinimalSchema() st = afwTable.SourceTable.make(schema) cat = afwTable.SourceCatalog(st) tmp = afwTable.SourceCatalog(cat.getTable()) cat.extend(tmp) {code} Expected behavior is that the last line is equivalent to {{cat.extend(tmp, deep=False)}}.
| 1 |
171 |
DM-1088
|
08/21/2014 07:59:46
|
Investigate HTCondor config settings to control speed of ClassAd propagation
|
With default settings we do not have good visibility as to whether an updated ClassAd on a compute node (e.g., CacheDataList now has ccd "S00") will be in effect on the submit node in time for a Job to be matched to an optimal HTCondor node/slot. There are several components (negotiator, schedd, startd) and their associated activities that could impact the time that it takes for a new ClassAd on a worker node to 'propagate' back to the submit side. We investigate these configuration settings to try to determine what thresholds for configuration settings are required to meet a given time cadence of job submissions.
| 2 |
172 |
DM-1113
|
08/22/2014 10:39:51
|
Make the API for ISR explicit
|
The run method of the IsrTask currently takes a dataRef which has getters for calibration products. This makes the task hard to re-use because one needs a butler and because the interface is opaque. This task will make the IsrTask API more transparent. JK: In PMCS this would be Krughoff S
| 20 |
173 |
DM-1125
|
08/26/2014 13:38:38
|
avoid usage of measurement framework in star selectors
|
At least one of the star selectors uses the old measurement framework system to measure the moments of a cloud of points. With the new versions of all the measurement plugins, it should be much easier (and cleaner) to just call the SdssShape algorithm directly, instead of dealing with the complexity of applying the measurement framework to something that isn't really an image.
| 3 |
174 |
DM-1126
|
08/26/2014 15:29:40
|
design new Footprint API
|
This issue is for *planning* (not implementing) some changes to Footprint's interface, including the following: - make Footprint immutable - create a separate SpanRegion class that holds Spans and provides geometric operators but does not hold Peaks or a "region" bbox (Footprint would then hold one of these) - many operations currently implemented as free functions should be moved to methods - we should switch the container from vector<PTR(Span)> to simply vector<Span>, as Span is nonpolymorphic and at least as cheap to copy as a shared_ptr. The output of this issue will be a set of header files that define the new interface, signed off by an SAT design review. Other issues will be responsible for implementing the new interface and fixing code broken by the change.
| 8 |
175 |
DM-1135
|
08/26/2014 16:37:24
|
test how large pixel region used in galaxy fitting needs to be
|
Using simulations built on DM-1132 and driver code from DM-1133, test different pixel region sizes and shapes, and determine at what point shear bias due to finite fit region drops below a TBD threshold.
| 20 |
176 |
DM-1137
|
08/26/2014 19:07:34
|
Evaluate python/c++ documentation generation and publication tools
|
This epic relates to documentation that is provided as part of normal development activities. The desire is to keep this documentation in and near the codebase, as this is best practice for keeping it maintainable. At the other end, we wish to publish this documentation in a coherent and searchable way for users. A number of tools exist in this area and this item requires a preliminary evaluation to be made. This is part of curating our documentation infrastructure. [FE 75% DOC 100% starting August 20th]
| 20 |
177 |
DM-1138
|
08/26/2014 19:23:33
|
Demonstrate & iterate with team on documentation toolchain
|
Following from DM-1137, this epic relates to demonstrating various options for documentation tools workflows to the team, gathering input as to the preferred solution, adopting a workflow, and defining any specific implementation choices. This is part of curating our documentation infrastructure.
| 5 |
178 |
DM-1143
|
08/26/2014 19:33:31
|
Investigate candidates for Verification and Integration Data Sets
|
The task here is to develop a data set that can be used both for continuous integration (build tests) and automatic QA (integration tests). We want to maximise the richness of the data set in terms of its usefulness, but minimise it in terms of its size. DN to co-ordinate contributions. [DN 95% FE 5%]
| 40 |
179 |
DM-1147
|
08/29/2014 08:36:27
|
Create a top-level qserv_distrib package
|
qserv_distrib will be a meta-package embedding qserv, qserv_testdata and partition.
| 2 |
180 |
DM-1151
|
08/29/2014 15:19:09
|
Fix example of IsrTask to be callable with data on disk
|
Currently the example of the IsrTask takes a fake dataref. This is hard to use with real data. In DM-1113 we will update IsrTask to not take a dataRef. This will make it easy to update the example script to work with real data. This ticket will also include removing from the unit tests any fake dataRefs that have become unnecessary as a result of DM-1299.
| 2 |
181 |
DM-1152
|
08/29/2014 18:32:13
|
Css C++ client needs to auto-reconnect
|
The zookeeper client in C++ that the czar uses doesn't auto-reconnect. This capability is provided by the kazoo library that qserv's python layer uses, but not by the C++ client. The zookeeper client disconnects pretty easily: if you step through your code in gdb, the zk client will probably disconnect because its threads expect to keep running. zk sessions may expire too. Our layer should reconnect unless there is really no way to recover without assistance from the calling code (e.g. configuration is wrong, etc.). This ticket includes only basic reconnection attempting, throwing an exception only when some "reconnection-is-impossible" condition is met.
| 2 |
182 |
DM-1160
|
09/01/2014 14:29:41
|
SUI catalog and image interactive visualization with LSST data
|
Using the current software components developed in IPAC to put together a prototype of visualization capabilities. The purpose is to exercise the data access APIs developed by SLAC and get feedback from DM people and potential users of the tool. 20% Goldina, Zhang 10% Roby, Ly, Wu, Ciardi
| 20 |
183 |
DM-1161
|
09/02/2014 12:12:47
|
Cleanup SdssShape
|
We should do a comprehensive cleanup of the SdssShapeAlgorithm class. This includes removing the SdssShapeImpl interface (never supposed to have been public, but it became public) from other code that uses it, and integrating this code directly into the algorithm class. We should also ensure that the source from which the algorithm is derived is clearly cited -- that's Bernstein and Jarvis (2002, http://adsabs.harvard.edu/abs/2002AJ....123..583B); see also DM-2304.
| 8 |
184 |
DM-1188
|
09/17/2014 13:25:49
|
rewrite low-level shapelet evaluation code
|
While trying to track down some bugs on DM-641, I've grown frustrated with the difficulty of testing the deeply-buried (i.e. interfaces I want to test are private) shapelet evaluation code there. That sort of code really belongs in the shapelet package (not meas_multifit) anyway, where I have a lot of similar code, so on this issue I'm going to move it there and refactor the existing code so it all fits together better.
| 2 |
185 |
DM-1192
|
09/18/2014 10:11:14
|
Write a transition plan to move gitolite and Stash repositories to GitHub
|
As recommended by the SAT meeting on 2014-09-16, we need this document to promote the use of GitHub by other subsystems within the project and to understand the impacts on DM. The plan should include, but is not limited to: * Whether and how the repositories should be reorganized. * How existing commit attributions will be translated. * Moving comments in Stash to GitHub
| 20 |
186 |
DM-1195
|
09/18/2014 18:09:24
|
There is a bug in the prescan bbox for megacam.
|
The bounding box of the prescan region in the megacam camera should have zero y extent (I think). Instead it goes from y=-1 to y=2. This is either a bug in the generation of the ampInfoTables or in the way the bounding boxes are interpreted.
| 1 |
187 |
DM-1196
|
09/18/2014 18:32:01
|
exampleUtils in ip_isr is wrong about read corner
|
https://dev.lsstcorp.org/cgit/LSST/DMS/ip_isr.git/tree/examples/exampleUtils.py#n95 Says that the read corner is in assembled coordinates. This is not true, it is in the coordinates of the raw amp. That is, if the raw amp is in electronic coordinates (like the lsstSim images) it is always LL, but if it is pre-assembled, it may be some other corner. This should probably use the methods in cameraGeom.utils to do the image generation.
| 1 |
188 |
DM-1197
|
09/18/2014 19:49:31
|
Support some mixed-type operations for Point and Extent
|
The current lack of automatic conversions in python is pretty irritating, and I think it's a big enough issue for people writing scripts that we should fix it. In particular, allow {code} Point2D + Extent2I Point2D - Extent2I Point2D - Point2I Extent2D + Extent2I Extent2D - Extent2I {code} (and the respective operations in the opposite order where well defined). It would also be good to allow all functions expecting PointD to accept PointI, but I'm not sure if swig makes this possible. It's probably not worth providing C++ overloads for all of these functions (and to be consistent we should probably do all or none). I realize that you invented these types to avoid bare 2-tuples, but I'm not convinced that we shouldn't also provide overloads to transparently convert tuples to afwGeom objects.
| 2 |
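To make the request concrete, a sketch of the desired behaviour versus today's workaround, assuming the lsst.afw.geom Python bindings:

```python
import lsst.afw.geom as afwGeom

p = afwGeom.Point2D(1.5, 2.5)
off = afwGeom.Extent2I(3, 4)

# Desired after this ticket (currently raises a TypeError):
#     q = p + off
# Current workaround: convert the integer extent by hand first.
q = p + afwGeom.Extent2D(off.getX(), off.getY())  # -> point at (4.5, 6.5)
```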
189 |
DM-1211
|
09/24/2014 17:30:16
|
anaconda is too outdated to work with pip
|
The version of anaconda distributed with the stack is too outdated to be used with pip (and probably other things). The issue is an unsafe version of ssh. A workaround is to issue this command while anaconda is setup: {code} conda update conda {code} Warning: it is unwise to try to update anaconda itself (with "conda update anaconda") because that will revert some of the changes and may result in an unusable anaconda. I think what is required is an obvious change to ups/eupspkg.cfg.sh The current version of anaconda is 2.0.1 based on http://repo.continuum.io/archive/ Note: there is no component for anaconda. I will submit another ticket.
| 2 |
190 |
DM-1213
|
09/24/2014 17:41:06
|
cleanup order/grouping of header files
|
We want: * header for the class * then system * then third party * then lsst * then qserv We currently don't have the "lsst" group (with a few exceptions), and we call the last one "local" in most places.
| 1 |
191 |
DM-1217
|
09/25/2014 13:52:17
|
Refactor meas_base Python wrappers and plugin registration
|
meas_base currently has a single Swig library (like most packages), defined within a single .i file (like some packages). It also registers all of its plugins in a single python module, plugins.py. Instead, it should: - Have two Swig libraries: one for the interfaces and helper classes, and one for plugin algorithms. Most downstream packages will only want to %import (and hence #include) the interface, and having them build against everything slows the build down unnecessarily. The package __init__.py should import all symbols from both libraries, so the change would be transparent to the user. - Have separate .i files for each algorithm or small group of algorithms. Each of these could %import the interface library file and the pure-Python registry code, and then register the plugins wrapped there within a %pythoncode block. That'd make the implementation of the algorithms a bit less scattered throughout the package, making them easier to maintain and better examples for new plugins.
| 3 |
192 |
DM-1218
|
09/25/2014 16:08:29
|
Support multiple-aperture fluxes in slots
|
We should be able to use multiple-aperture flux results in slots. While this is technically possible already by setting specific aliases, it doesn't work through the usual mechanisms for setting up slots (the define methods in SourceTable and the SourceSlotConfig in meas_base). After addressing this, we should remove the old SincFlux and NaiveFlux algorithms, as the new CircularApertureFlux algorithm will be able to do everything they can do.
| 2 |
193 |
DM-1231
|
09/30/2014 03:25:03
|
LSE-69: Bring to Phase 3
|
Reflects work needed in Summer 2015
| 0 |
194 |
DM-1232
|
09/30/2014 03:27:04
|
LSE-72: Bring Summer 2014 work to CCB approval
|
Remaining work is to proofread the SysML-ization by Brian Selvy of the LSE-72 draft, do any required cleanup in conjunction with the OCS team, and advocate for LCR-202 (already exists) at the CCB.
| 3 |
195 |
DM-1233
|
09/30/2014 03:43:46
|
Refine requirements and use cases for Level 3 facilities
|
Refine the requirements and use cases for the three branches of Level 3 capabilities exposed to users: * Level 3 programming toolkit (user reconfiguration / extension of DM pipelines and stack) * Level 3 compute cycle delivery (user access to 10% of compute base) * Level 3 data product storage Deliverables: * Refinement, if necessary, to Level 3 requirements in DMSR * Flowed-down requirements as a separate document. Sufficient detail to allow a breakdown of the deliverables in the three areas of Level 3 by annual release cycle through construction period.
| 20 |
196 |
DM-1237
|
09/30/2014 06:58:01
|
LSE-75: Refine WCS and PSF requirements
|
Clarify the data format and precision requirements of the TCS (or other Telescope and Site components) on the reporting of WCS and PSF information by DM on a per-image basis. Depends on the ability of the T&S group to engage with this subject during the Winter 2015 period. Can be deferred to Summer 2015 without major impacts. Current PMCS deadline for Phase 3 readiness of LSE-75 is 29-Sep-2015.
| 8 |
197 |
DM-1242
|
09/30/2014 07:32:12
|
Risk Register refresh 1/2015
|
Periodic review of DM risk register contents. Covers preparation for a review expected at the end of January 2015, the only one during Winter 2015.
| 3 |
198 |
DM-1245
|
09/30/2014 07:37:06
|
Install scisql plugin (shared library) outside of eups stack.
|
The sciSQL plugin is currently deployed in the eups stack (i.e. $MYSQL_DIR/lib/plugin) during the configuration step. However, the eups stack should be immutable during the configuration step. MySQL's plugin-dir option may allow deploying the sciSQL plugin outside of the eups stack (for example in QSERV_RUN_DIR).
| 3 |