id | issuekey | created | title | description | storypoint |
---|---|---|---|---|---|
299 |
DM-1668
|
12/16/2014 16:23:43
|
Create SQL code to read Qserv into Python Pandas data frame
|
This works well, at least for a simple case. You can move directly from a query statement to a Pandas data frame for analysis in just a few lines of code. Here is the start of an iPython Qserv session showing how easy it is. In [6]: import pandas as pd In [7]: import pymysql as db In [8]: conn = db.connect(host='lsst-db1.ipac.caltech.edu',port=4040, user='qsmaster', passwd='', db='LSST') In [11]: df = pd.read_sql("select deepCoaddId, tract, patch, ra, decl from DeepCoadd", conn) In [12]: df Out[12]: deepCoaddId tract patch ra decl 0 26607706 0 406,11 0.669945 1.152218 1 26673242 0 407,11 0.449945 1.152218 2 26804242 0 409,2 0.011595 -0.734160 3 26673154 0 407,0 0.449945 -1.152108 …
| 2 |
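The session in the ticket above can be reproduced against any DBAPI connection; here is a minimal sketch using an in-memory sqlite3 database as a stand-in for the Qserv/pymysql connection (the table name and the sample values are copied from the ticket output, but the sqlite backend is an assumption for illustration):

```python
import sqlite3

import pandas as pd

# Stand-in for the pymysql connection to the Qserv host in the ticket: an
# in-memory sqlite3 database holding a toy DeepCoadd table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE DeepCoadd (deepCoaddId INTEGER, tract INTEGER,
                            patch TEXT, ra REAL, decl REAL);
    INSERT INTO DeepCoadd VALUES (26607706, 0, '406,11', 0.669945, 1.152218);
    INSERT INTO DeepCoadd VALUES (26673242, 0, '407,11', 0.449945, 1.152218);
""")

# A single call goes from a query statement to a DataFrame, as the ticket shows.
df = pd.read_sql("SELECT deepCoaddId, tract, patch, ra, decl FROM DeepCoadd", conn)
```

`pd.read_sql` accepts any DBAPI2 connection, so the same two lines work unchanged whether the connection comes from sqlite3 or pymysql.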
300 |
DM-1670
|
12/16/2014 16:26:12
|
Begin looking at how Python Pandas can be used for LSST data analysis.
|
Pandas is well integrated with the other parts of SciPy: numpy, matplotlib, etc. It’s a good candidate for data analysis, especially where time series are involved. However, there are no multidimensional columns, poor metadata support for FITS files, and a need to use masks instead of NaN values. These may, or may not, be problems. There is a 400-page book about Pandas, so it will take some further time to learn its value, especially with astronomical data in different situations.
| 5 |
301 |
DM-1673
|
12/16/2014 20:45:01
|
Allow SWIG override for broken SWIG installations
|
Dependency on SWIG 2.0+ was introduced into Qserv, and this broke Qserv building on systems relying on SWIG 1.3.x. This ticket introduces basic code to override SWIG_LIB on those systems to allow use of the broken installation (some SWIG search paths are fixed during its build process otherwise).
| 1 |
302 |
DM-1685
|
12/17/2014 17:22:32
|
Minor bug in a test
|
tests/centroid.py has a bug in testMeasureCentroid: "c" is undefined in the following bit of code: {code} if display: ds9.dot("x", c.getX(), c.getY(), ctype=ds9.GREEN) {code}
| 1 |
303 |
DM-1695
|
12/17/2014 20:13:14
|
Implement interfaces for Data Access Services
|
Implement proof of concept, skeleton of the prototype. The work will continue in follow up stories in February and in S15.
| 8 |
304 |
DM-1705
|
12/18/2014 13:22:14
|
S15 Tune Qserv
|
Fix scalability and performance issues uncovered through large scale tests DM-1704
| 100 |
305 |
DM-1706
|
12/18/2014 13:24:21
|
S15 Analyze Qserv Performance
|
Final analysis of Qserv performance, measure KPIs. Based on LDM-240, we are aiming to demonstrate: * 50 simultaneous low volume queries, 18 sec/query * 5 simultaneous high-volume queries, 24 h/query * data size: 10% of DR1 level. * Continuous running for 24 h with no software failures.
| 5 |
306 |
DM-1709
|
12/18/2014 15:07:07
|
Implement result sorting for integration tests
|
We need to be able to sort results, because we can't always rely on ORDER BY. So we need per-query result formatting in the integration tests (sort results for some, don't sort for others, etc.). The following queries have been disabled because we don't have result sorting; once it is implemented, we will need to re-enable them prior to closing this ticket: {code} case02/queries/0003_selectMetadataForOneGalaxy_withUSING.sql case02/queries/3001_query_035.sql case02/queries/3008_selectObjectWithColorMagnitudeGreaterThan.sql case02/queries/3011_selectObjectWithMagnitudes.sql case02/queries/3011_selectObjectWithMagnitudes_noalias.sql {code}
| 8 |
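The per-query sorting described above can be sketched with a tiny helper: queries lacking a reliable ORDER BY get their rows sorted before comparison, while ordered queries are compared as-is. The helper name and signature are hypothetical, not taken from the qserv test harness:

```python
# Hypothetical helper: make integration-test output deterministic for queries
# that have no ORDER BY, while leaving explicitly ordered results untouched.
def normalize_result(rows, sort=True):
    """Return rows as a list of tuples, row-sorted when sort is True."""
    rows = [tuple(r) for r in rows]
    return sorted(rows) if sort else rows

# Two runs that return the same rows in different orders compare equal
# once normalized.
run1 = [(3, 'b'), (1, 'a'), (2, 'c')]
run2 = [(1, 'a'), (2, 'c'), (3, 'b')]
```

With `sort=False` the helper preserves the engine's row order, which is what an ORDER BY query under test would require.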
307 |
DM-1710
|
12/18/2014 16:04:45
|
ValueError in lsst.afw.table.Catalog.extend()
|
{code} from lsst.afw.table import BaseCatalog, Schema s = Schema() c1 = BaseCatalog(s) c2 = BaseCatalog(s) c1.extend(c2) {code} The above fails, saying: {code} Traceback (most recent call last): File "test.py", line 7, in <module> c1.extend(c2) File "/Users/jds/Projects/Astronomy/LSST/stack/DarwinX86/afw/10.0+3/python/lsst/afw/table/tableLib.py", line 6909, in extend _tableLib.BaseCatalog_extend(self, iterable, deep) ValueError: invalid null reference in method 'BaseCatalog_extend', argument 3 of type 'lsst::afw::table::SchemaMapper const &' {code}
| 1 |
308 |
DM-1713
|
12/18/2014 20:49:43
|
S15 Image & File Archive v2
|
System for tracking existing image data sets integrated with metadata services.
| 5 |
309 |
DM-1715
|
12/18/2014 22:23:17
|
Disable query killing
|
Apparently killing a query through Ctrl-C is confusing xrootd. Disable query killing (which seems to be only partly implemented).
| 1 |
310 |
DM-1720
|
12/19/2014 14:07:22
|
Make secondary index for director table only
|
Following discussion on qserv-l, we only need to generate a "secondary" index for the director table; no other table is supposed to have one. We need to modify the data loader to recognize which table is the director table and generate the index only for that table.
| 2 |
311 |
DM-1721
|
12/19/2014 14:13:30
|
S15 Improve Query Coverage in Qserv
|
Query coverage in the qserv integration testing is very limited; we have been turning off more and more queries as we made the qserv code and the data loader more strict. This epic covers work (fixes and improvements) related to * re-enabling test queries marked as "fixme" (when it makes sense; some queries are for features that are not implemented yet) * adding more queries to test interfaces and features that are implemented but are not currently tested.
| 40 |
312 |
DM-1731
|
12/31/2014 12:11:44
|
fix table file handling of MANPATH in dependencies
|
As discussed on DM-1220, the table files for: - mysqlproxy - protobuf - lua - expat should have the MANPATH entry removed entirely, while: - xrootd should have ":" added to the end of its MANPATH value, to allow the default paths to be searched as well.
| 1 |
313 |
DM-1733
|
01/05/2015 14:46:48
|
Build 2015_01 Qserv release
|
See https://confluence.lsstcorp.org/display/DM/Qserv+Release+Procedure for recipe.
| 1 |
314 |
DM-1735
|
01/06/2015 10:16:17
|
Have newinstall.sh check itself against distrib version
|
We want to alert people who are just using a newinstall.sh they have lying around (old or hacked up or...) that they are not using the official server version.
| 1 |
315 |
DM-1738
|
01/06/2015 14:59:08
|
deblender artifacts in noise-replaced images
|
We still see noise artifacts in some deblended images on the LSST side when running the M31 HSC data. They look like the result of running NoiseReplacer on HeavyFootprints in which the children can extend beyond the parents. This was fixed on the HSC side on DM-340 (before the HSC JIRA split off), and I *think* we just need to transfer the fix to LSST.
| 1 |
316 |
DM-1743
|
01/07/2015 18:26:54
|
CSV reader for Qserv partitioner doesn't handle no-escape and no-quote options properly
|
Both the no-quote and no-escape CSV formatting command line options should not have a default value, as specifying any value turns off field escaping and quoting. Furthermore, when quoting is turned off, the reader incorrectly treats embedded NUL characters as a quote character.
| 1 |
317 |
DM-1744
|
01/08/2015 12:37:07
|
Fix SWIG_SWIG_LIB empty list default value
|
See Serge message to Qserv-l "xrootd premature death": {quote} However, there are bigger problems. First of all, master doesn’t build for me. I get this error: File "/home/lsstadm/qserv/SConstruct", line 104: env.Alias("dist-core", get_install_targets()) File "/home/lsstadm/qserv/SConstruct", line 90: exports=['env', 'ARGUMENTS']) File "/home/lsstadm/stack/Linux64/scons/2.3.0+1/lib/scons/SCons/Script/SConscript.py", line 609: return method(*args, **kw) File "/home/lsstadm/stack/Linux64/scons/2.3.0+1/lib/scons/SCons/Script/SConscript.py", line 546: return _SConscript(self.fs, *files, **subst_kw) File "/home/lsstadm/stack/Linux64/scons/2.3.0+1/lib/scons/SCons/Script/SConscript.py", line 260: exec _file_ in call_stack[-1].globals File "/home/lsstadm/qserv/build/SConscript", line 39: canBuild = detect.checkMySql(env) and detect.setXrootd(env) and detect.checkXrootdLink(env) File "/home/lsstadm/qserv/site_scons/detect.py", line 225: xrdLibPath = findXrootdLibPath("XrdCl", env["LIBPATH"]) File "/home/lsstadm/qserv/site_scons/detect.py", line 213: if os.access(os.path.join(path, fName), os.R_OK): File "/home/lsstadm/stack/Linux64/anaconda/2.1.0/lib/python2.7/posixpath.py", line 77: elif path == '' or path.endswith('/'): which is caused by the fact that env[“LIBPATH”] looks like: [[], '/home/lsstadm/stack/Linux64/antlr/2.7.7/lib', '/home/lsstadm/stack/Linux64/boost/1.55.0.1.lsst2/lib', '/home/lsstadm/stack/Linux64/log4cxx/0.10.0.lsst1+2/lib', '/home/lsstadm/stack/Linux64/xrootd/4.0.0rc4-qsClient2/lib', '/home/lsstadm/stack/Linux64/zookeeper/3.4.6/c-binding/lib', '/home/lsstadm/stack/Linux64/mysql/5.1.65.lsst1/lib', '/home/lsstadm/stack/Linux64/protobuf/2.4.1/lib', '/home/lsstadm/stack/Linux64/log/10.0+3/lib'] The first element is [], which comes from https://github.com/LSST/qserv/blob/master/site_scons/state.py#L173 where a PathVariable called SWIG_SWIG_LIB is given a default value of []. 
I can fix the build by changing the default to an empty string… but I don’t know enough scons to say whether that’s the right thing to do. Can one of the scons gurus confirm that’s the right fix? {quote}
| 1 |
318 |
DM-1754
|
01/09/2015 18:14:10
|
Update auto build tool to work with new split repositories
|
After the repository split, changes are required to get the auto build tool to work properly. Firefly and Firefly-based applications are built using the Gradle build system.
| 8 |
319 |
DM-1761
|
01/13/2015 08:56:06
|
Provide input data for exampleCmdLineTask.py
|
{{pipe_tasks/examples/exampleCmdLineTask.py}} reads data from a repository. The comments in {{pipe_tasks/python/lsst/pipe/tasks/exampleCmdLineTask.py}} suggest that {code} # The following will work on an NCSA lsst* computer: examples/exampleCmdLineTask.py /lsst8/krughoff/diffim_data/sparse_diffim_output_v7_2 --id visit=6866601 {code} There are a few problems with that: * External contributors don't have access to {{lsst*}}; * Even though that data exists now, it's unclear how long it will remain there, or what steps are being taken to preserve it; * The mention of this data is fairly well buried -- it does appear in the documentation, but it's certainly not the first thing a new user will stumble upon. At least the first two points could be addressed by referring to a publicly available data repository. For example, the following works once {{afwdata}} has been set up: {code} examples/exampleCmdLineTask.py ${AFWDATA_DIR}/ImSim --id visit=85408556 {code} Although this has the downside of only providing a single image.
| 1 |
320 |
DM-1762
|
01/13/2015 09:49:03
|
Export SUI data (DC_W13_Stripe82_subset)
|
- import sui.sql.bzip2.out (produced by Serge) into MySQL for the DeepSource and DeepForcedSource tables: - remove columns chunkId and subChunkId from each chunk table - merge all chunk tables into the main table - join DeepSource and DeepForcedSource to add the coordinates of the DeepSource (director) object to the DeepForcedSource table, then dump DeepSource and DeepForcedSource to files DeepSource.csv and DeepForcedSource.csv {code:sql} SELECT f.*, COALESCE(s.ra, f.ra), COALESCE(s.decl, f.decl) FROM DeepForcedSource f LEFT JOIN DeepSource s ON (f.deepSourceId = s.deepSourceId) INTO OUTFILE '/db1/dump/DeepForcedSource.csv' FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"' LINES TERMINATED BY '\n'; {code} - Load this file using the Qserv loader. A sample should be made and tested first to validate this procedure. This sample could be added to qserv_testdata
| 3 |
321 |
DM-1770
|
01/13/2015 12:50:27
|
Support DDL in MetaServ - design
|
DDL information is embedded as comments in the master version of the schema (in the "cat" repo). Currently we are only using it for the schema browser. This story involves designing the procedure for loading DDL information into MetaServ. We need to be ready to support a variety of scenarios: * we are getting an already preloaded database, and need to just load metadata about it into metaserv (we might have the original ascii file with extra information, or not) * we are starting from scratch, need to initialize the database (including loading the schema), and need to load the information into metaserv * we already have the database and metadata in metaserv, but we want to change something (e.g. alter table, delete table, or delete database).
| 2 |
322 |
DM-1771
|
01/13/2015 15:02:01
|
move executionOrder from plugin config class to plugin class
|
We originally put the executionOrder parameter (which determines when a plugin is run, relative to others) in the config object, simply because that's where it was in the old framework. But it's really not something that should be configurable, as it depends only on the inputs the algorithm needs, which don't change.
| 1 |
323 |
DM-1783
|
01/16/2015 15:06:25
|
fix faint source and minimum-radius problems in Kron photometry
|
This transfers some improvements to the Kron photometry from the HSC side: - HSC-983: address failures on faint sources - HSC-989: fix the minimum radius - HSC-865: switch to determinant radius instead of semimajor axis - HSC-962: bad radius flag was not being used - HSC-121: fix scaling in forced photometry The story points estimate here is 50% of the actual effort, as the work (already done) also benefited HSC.
| 5 |
324 |
DM-1785
|
01/16/2015 19:25:55
|
Add rotAngle to baseline schema
|
Add "rotAngle DOUBLE" to every table that has image ra/decl.
| 1 |
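Adding the column is a one-line ALTER per affected table. A sketch against an in-memory sqlite3 database (the table name is illustrative, and the baseline schema actually lives in MySQL, where the same ALTER syntax applies):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DeepCoadd (deepCoaddId INTEGER, ra DOUBLE, decl DOUBLE)")

# One ALTER per table that carries image ra/decl, as the story requests.
conn.execute("ALTER TABLE DeepCoadd ADD COLUMN rotAngle DOUBLE")

# Inspect the resulting column list (sqlite-specific introspection).
cols = [row[1] for row in conn.execute("PRAGMA table_info(DeepCoadd)")]
```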
325 |
DM-1792
|
01/20/2015 11:02:20
|
Update documentation and automatic install script w.r.t. new newinstall.sh script
|
newinstall.sh script has evolved and breaks Qserv install procedure.
| 1 |
326 |
DM-1793
|
01/20/2015 11:21:45
|
remove reference data members from FootprintFunctor
|
The FootprintFunctor class uses references for data members, which could cause memory problems if the class (or a subclass) is ever initialized with a temporary. Fixing this would probably require changing the constructor to take a shared_ptr, however, so it would break a lot of downstream code. I'd rather actually rewrite FootprintFunctor entirely (one of the goals for Epic DM-1107), but it's not clear when that will happen; if it slips too much, this issue is to remind us to fix at least this problem.
| 0 |
327 |
DM-1797
|
01/21/2015 10:06:46
|
Package flask
|
The Data Access Webservice APIs rely on flask, so we need to package flask according to the LSST standards. For my initial testing, I just ran "sudo aptitude install python-flask".
| 1 |
328 |
DM-1802
|
01/21/2015 15:46:09
|
remove unused local typedefs
|
gcc 4.8 now warns about locally-defined typedefs that aren't used. We have a few of these in ndarray and afw::gpu that should be removed.
| 1 |
329 |
DM-1803
|
01/21/2015 16:32:13
|
S15 Explore Qserv Authorization
|
Explore authorization centrally: use information generated by parser. Either generate dummy query and run on mysql that runs near czar, or use info produced by parser to determine if user is authorized. Note, we want to limit this to ~1 week, just to reveal potential problems, or do a quick proof of concept.
| 8 |
330 |
DM-1810
|
01/22/2015 12:49:53
|
segfaults in ip_diffim on gcc 4.8
|
I'm seeing test segfaults in ip_diffim on gcc 4.8, similar to those resolved on DM-1725, but with no similar smoking gun yet. Preliminary indication is that the problem is actually in meas_algorithms.
| 2 |
331 |
DM-1812
|
01/23/2015 10:28:09
|
Determine LSE-130 impact of collimated projector calibration plan
|
During a working meeting with Robert Lupton and Chris Stubbs, determine the impact on LSE-130 of the introduction of the collimated projector for calibration.
| 8 |
332 |
DM-1814
|
01/23/2015 10:30:39
|
Support Camera CD-2 (mainly re: LSE-130)
|
Provide slides and other information needed for CD-2, mainly relative to the open questions around LSE-130
| 2 |
333 |
DM-1816
|
01/23/2015 10:33:43
|
Convert LSE-130 to SysML
|
Following CCB recommendation of approval of LSE-130 draft, convert Word draft to SysML and provide a docgen to Robert McKercher for final posting.
| 2 |
334 |
DM-1818
|
01/23/2015 11:35:40
|
Support completion of final document
|
Based on CCB approval of LSE-72 on 10 October, support the completion of the final copy of the document for posting on Docushare.
| 1 |
335 |
DM-1819
|
01/23/2015 11:53:38
|
Complete LSE-140 work as needed to produce final document
|
Complete any review-driven revisions of LSE-140 and support the CCB meeting and following final document preparation.
| 2 |
336 |
DM-1820
|
01/23/2015 11:57:39
|
LSE-140: Collect desired changes for future release
|
Prepare for a future revision (Phase 3) of LSE-140. Collect issues to be addressed in the revision. Determine if any affect Phase 2 scope (which would require a prompt revision). It is not anticipated that there will be an actual revision of LSE-140 during the Winter 2015 cycle, because additional detail on calibration requirements will not be available in time.
| 1 |
337 |
DM-1821
|
01/23/2015 12:05:46
|
Clarify scope of DM data quality analysis requirement
|
Clarify in LSE-140 that the DM data quality analysis referred to is primarily that of the Level 1 data products.
| 0 |
338 |
DM-1824
|
01/23/2015 16:14:23
|
Define issues to be addressed
|
Work with TCS contacts (Jacques Sebag, Paul Lotz, etc.) to define the principal issues
| 1 |
339 |
DM-1841
|
01/26/2015 13:10:12
|
Fix query error on case03: "SELECT scienceCcdExposureId FROM Science_Ccd_Exposure_Metadata"
|
Xrootd prevents the worker from returning more than 2 MB of data. On GB-sized data: {code} mysql --host=127.0.0.1 --port=4040 --user=qsmaster --batch -e "SELECT scienceCcdExposureId FROM Science_Ccd_Exposure_Metadata" ERROR 4120 (Proxy) at line 1: Error during execution: -1 Ref=1 Resource(/chk/qservTest_case03_qserv/1234567890): 20150123-16:27:45, Error merging result, 1420, Result message MD5 mismatch (-1) {code} On integration test case 04: {code} qserv@clrinfoport09:~/src/qserv (u/fjammes/DM-1841 *)⟫ mysql --host=127.0.0.1 --port=4040 --user=qsmaster qservTest_case04_qserv -e "SELECT * FROM DeepForcedSource" ERROR 4120 (Proxy) at line 1: Error during execution: -1 Ref=1 Resource(/chk/qservTest_case04_qserv/6970): 20150204-16:23:43, Error merging result, 1420, Result message MD5 mismatch Ref=2 Resource(/chk/qservTest_case04_qserv/7138): 20150204-16:23:43, Error merging result, 1420, Result message MD5 mismatch Ref=3 (-1) {code}
| 5 |
340 |
DM-1843
|
01/27/2015 09:47:02
|
Permit PropertySets to be represented in event payloads
|
In the old marshalling code, property sets were representable within the payload of the event. This was removed in the new marshalling scheme. There are things (ctrl_orca) that still use this, so it needs to be added to the new marshalling code. At the same time, the new filtering code cannot support this, because JMS headers only take simple data types.
| 2 |
341 |
DM-1844
|
01/27/2015 11:13:51
|
Test Qserv on SL7
|
Needed to run Qserv on CC-IN2P3 cluster.
| 2 |
342 |
DM-1854
|
01/27/2015 22:56:11
|
SUI propose a structure definition for user workspace
|
Workspace is an integral part of SUI. We want to start the discussion and definition of workspace concept and structure. SUI team had several discussions and Xiuqin presented the results at the DM AHM at SLAC. The slides and the discussion notes are here: https://confluence.lsstcorp.org/display/DM/Workspace+discussion
| 20 |
343 |
DM-1860
|
01/28/2015 00:28:21
|
Update documentation for v10_0 release
|
All done bar obtaining some release notes.
| 2 |
344 |
DM-1868
|
01/28/2015 10:17:49
|
Define JSON Results for Data Access Services
|
As discussed at [Data Access Hangout 2015-02-23|https://confluence.lsstcorp.org/display/DM/Data+Access+Hangout+2015-02-23], we should support json format. This story covers defining structure of JSON results for Data Access Services (dbserv, imgserv, metaserv)
| 3 |
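The JSON structure itself was still to be defined when this story was filed; purely as an illustration, here is one plausible envelope for a dbserv-style tabular result, with column metadata separated from row data. Every field name below is hypothetical, not taken from the eventual dbserv schema:

```python
import json

# Hypothetical result envelope: metadata (column names, types, units) kept
# apart from the row data, so clients can interpret the payload generically.
result = {
    "result": {
        "table": {
            "metadata": {"columns": [
                {"name": "deepSourceId", "datatype": "long"},
                {"name": "ra", "datatype": "double", "unit": "deg"},
            ]},
            "data": [[1, 10.0], [2, 20.5]],
        }
    }
}

# Round-trip through the wire format.
payload = json.dumps(result)
decoded = json.loads(payload)
```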
345 |
DM-1873
|
01/28/2015 19:29:35
|
SUI 2D data visualization (XY plot)
|
Implement a better algorithm for spatial binning to visualize large numbers of catalog sources. Plot histograms for tabular data. Plot basic light curves.
| 40 |
346 |
DM-1875
|
01/28/2015 23:35:15
|
SUI infrastructure implementation
|
Identify the hardware resources needed at NCSA for short-term development. Set up the basic git repository and build system. Explore multi-resolution image display for the background image.
| 40 |
347 |
DM-1878
|
01/29/2015 09:21:33
|
Collect, understand, and define more use cases
|
This is an on-going effort. The collected use cases will be posted at confluence page https://confluence.lsstcorp.org/pages/viewpage.action?pageId=41784036.
| 20 |
348 |
DM-1880
|
01/29/2015 11:20:19
|
Implement RESTful interfaces for Database (GET)
|
Implement RESTful interfaces for Database (see all D* in https://confluence.lsstcorp.org/display/DM/API), based on the first prototype developed through DM-1695. The work includes adding support for returning appropriately formatted results (support the most common formats). This covers "GET" type requests only, "POST" will be handled separately.
| 5 |
349 |
DM-1885
|
01/29/2015 11:57:21
|
Contribute to the workspace capability discussion
|
This includes past experience and a collection of use cases.
| 2 |
350 |
DM-1887
|
01/29/2015 17:34:40
|
HDF5 file format study
|
Xiuqin, Loi, Trey, and I discussed HDF5 as a default format to return result sets and metadata from lower-level database services vs. the traditional IPAC table. Here is the summary: Advantages of the IPAC Table format - Simple and human-readable, contains a single table - Fixed-length rows (easy to page through) - Supported by many astronomical tools - Provides a way to pass data types, units, and null values in the header - More metadata can be added through keywords (attributes) Disadvantages of the IPAC table format - Streaming cannot be started before all data are received – need to know column widths before the table can be written (csv is a better alternative) - Only alphanumeric and '_' characters are allowed in column names (small subset of available characters) - Only predefined datatypes and one attribute type (string) - ASCII representation requires about twice as much storage to represent floating-point data as the binary equivalent. Advantages of HDF5 - Can represent complex data and metadata (according to LOFAR, good for representing time series) - Structured data, arbitrary attribute types, datatypes can be combined to create structured datatypes - Flexible datatypes: can be enumerations, bit strings, pointers, composite datatypes, custom atomic datatypes - Access time and storage space optimizations - Partial I/O: “chunked” data for faster access - Supports parallel I/O (reading and writing) - Built-in compression (GNU zlib, but can be replaced with others) - Existing inspection and visualization tools (HDFView, MATLAB, etc.) Disadvantages of HDF5 - Complex - Tuned for efficient I/O and storage of "big" data (hundreds of megabytes and more), not efficient for small reads/writes. - Requires native libraries (available in prepackaged jars, see below) - Not human readable - (?)
Not yet widely supported by astronomical tools (counter-examples: AstroPy, IDL, more at hdfgroup site) Tools and Java wrappers: * JHI5 - the low level JNI wrappers: very flexible, but also quite tedious to use. * Java HDF object package - a high-level interface based on JHI5. * HDFView - a Java-based viewer application based on the Java HDF object package. * JHDF5 - a high-level interface building on the JHI5 layer which provides most of the functionality of HDF5 to Java. The API has a shallow learning curve and hides most of the house-keeping work from the developer. You can run the Java HDF object package (and HDFView) on the JHI5 interface that is part of JHDF5, so the two APIs can co-exist within one Java program. (from StackOverflow answer, 2012) * NetCDF-Java is a Pure Java Library, that reads HDF5. However, it's hard to keep pure java version up-to-date with the standard, does not support all the features. A way to set up native libraries (3rd option from JHDF5 FAQ): "Use a library packaged in a jar file and provided as a resource (by putting the jar file on the class path). Internally this uses the same directory structure as method 2., but packaged in a jar file so you don't have to care about it. Jar files with the appropriate structure are cisd-jhdf5-batteries_included.jar and lib/nativejar/.jar (one file for each platform). This is the simplest way to use the library."
| 1 |
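The "about twice as much storage" point in the comparison above can be checked with the standard library: a binary IEEE double is 8 bytes, while its full-precision decimal form needs roughly 17 significant digits plus a delimiter. A quick sketch:

```python
import struct

# A binary double is always 8 bytes on the wire.
value = 1 / 3
binary_size = len(struct.pack("<d", value))

# The shortest round-tripping decimal form of the same value, plus one
# delimiter character, approximates its cost in an ASCII table cell.
ascii_size = len(repr(value)) + 1
```

For a value like 1/3 the ASCII form is 18-19 characters against 8 binary bytes, which is where the roughly 2x figure comes from; short values such as 0.5 are of course cheaper in ASCII.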
351 |
DM-1897
|
01/30/2015 13:06:15
|
Modify CSS structure to support table deletion
|
Modify CSS structures to support DROP TABLE, as defined in DM-1896.
| 2 |
352 |
DM-1900
|
01/30/2015 13:20:39
|
Worker management service - design
|
We need to replace direct worker-mysql communication and other administrative channels with a special service which will control all worker communication. Some light-weight service running alongside other worker servers, probably HTTP-based. Data loading, start/stop should be handled by this service.
| 5 |
353 |
DM-1901
|
01/30/2015 13:25:08
|
Re-implement data loading scripts based on new worker control service
|
Once we have the new service that controls worker communication, we'll need to reimplement the WorkerAdmin class based on it.
| 8 |
354 |
DM-1903
|
02/02/2015 06:35:40
|
Implementation of calibration transformation framework
|
Following DM-1598 there will be a detailed design and prototype implementation for the calibration & ingest system. This issue covers cleaning up that code, documenting it, having it reviewed, and merging to master.
| 2 |
355 |
DM-1904
|
02/02/2015 10:20:18
|
Continued footprint improvements
|
A redesigned API and support for topological operations within the Footprint class. This continues the work started in DM-1107 in W15. Breakdown: jbosch 15%; swinbank 85%
| 8 |
356 |
DM-1917
|
02/02/2015 16:35:02
|
Fix missing virtual destructors
|
The compiler is warning about some derived class hierarchies that are lacking virtual destructors. We should add at least empty implementations to the base classes of these hierarchies.
| 1 |
357 |
DM-1919
|
02/02/2015 16:55:46
|
Address misc. compiler warnings
|
Fix places where the compiler warns about things we are doing on purpose and don't intend to change. This helps keep compiler noise down so it's easier to notice "real" warnings.
| 1 |
358 |
DM-1943
|
02/04/2015 13:11:15
|
HSC backport: convert Peak to PeakRecord
|
This issue covers transferring all changesets from [HSC-1074|https://hsc-jira.astro.princeton.edu/jira/browse/HSC-1074] and its subtasks, as well as: - An RFC to propose the API change, and any requested modifications generated by the RFC. - Additional fixes to downstream code that's broken by this change (HSC-side changesets should be present for most of downstream fixes, but perhaps not all).
| 8 |
359 |
DM-1945
|
02/04/2015 13:36:48
|
HSC backport: multiband processing for coadds
|
This issue includes transferring changesets from many HSC issues: - HSC-1060 - HSC-1064 - HSC-1065 - HSC-1061 Most of this is in multiBand.py in pipe_tasks, but there are scattered changes elsewhere (including updates to camera mappers to include the new datasets, for which we'll need to modify more than just obs_subaru). However, before we make these changes, we'll need to open an RFC to gather comments on the design of this task. We should qualify there that this is not a long-term plan for consistent multiband processing (which we'll be starting to design on DM-1908), but a step towards better processing in the interim. Note: while I've assigned this to [~lauren], as I think it will be very helpful for her to get familiar with this code by doing the transfers, the RFC will have to involve a collaboration with [~jbosch], [~price], and Bob Armstrong, as we can't expect someone who wasn't involved in the design to be able to write a document justifying it.
| 8 |
360 |
DM-1952
|
02/04/2015 16:13:25
|
Change log priority for message "Unknown column 'whatever' in 'field list'"
|
The following message should be logged with ERROR priority: {code} 0204 15:08:03.748 [0x7f1f4b4f4700] INFO Foreman (build/wdb/QueryAction.cc:250) - [1054] Unknown column 'whatever' in 'field list' {code}
| 0.5 |
361 |
DM-1953
|
02/04/2015 18:37:18
|
Post meas_base move changes to Kron
|
These are leftovers from DM-982; they could be done in a single issue. 1. I commented out code referring to correctfluxes, but it will need to be restored once it is available in the new framework. 2. Jim asked me to replace the computeSincFlux which is currently in PsfImage.cc in meas_algorithms with a similar call in meas_base/ApertureFlux.cc. I did not do this because it became rather complicated, and it can just as easily be done when the meas_algorithms routine is moved or removed. Basically, the templating in ApertureFlux is on Pixel type, whereas in meas_algorithms it is on ImageT (where ImageT is not necessarily a single class hierarchy -- e.g., Image and MaskedImage). So I left this for now.
| 1 |
362 |
DM-1954
|
02/04/2015 23:47:11
|
HSC backport: deblended HeavyFootprints in forced photometry
|
This is a transfer of changesets for [HSC-1062|https://hsc-jira.astro.princeton.edu/jira/browse/HSC-1062]. Unlike most of the HSC backport issues for multiband deblending, these changes will require significant modification on the LSST side, because we need to apply them to the new forced measurement framework in meas_base rather than the old, HSC-only one in meas_algorithms and pipe_tasks. Also include [HSC-1256|https://hsc-jira.astro.princeton.edu/jira/browse/HSC-1256], [HSC-1218|https://hsc-jira.astro.princeton.edu/jira/browse/HSC-1218], [HSC-1235|https://hsc-jira.astro.princeton.edu/jira/browse/HSC-1235], [HSC-1216|https://hsc-jira.astro.princeton.edu/jira/browse/HSC-1216].
| 20 |
363 |
DM-1972
|
02/06/2015 15:12:29
|
upgrade SWIG to 3.0.8 or later
|
SWIG 3.0.5 is now out and has several useful fixes w.r.t. 3.0.2 (which we are presently using) including: - A bug we've had to work around involving templated methods of classes - improved handling of new-style enums (they are no longer hoisted into the global namespace, which was a serious misfeature of SWIG 3.0.2) I propose we try it out using buildbot (when we have some time), and if it works, we adopt it. Adopting it will help us relax the restrictions on what C++11 features can be used in C++ header files.
| 2 |
364 |
DM-1973
|
02/06/2015 15:56:23
|
Build 2015_02 Qserv release
|
See https://confluence.lsstcorp.org/display/DM/Qserv+Release+Procedure for recipe.
| 1 |
365 |
DM-1974
|
02/06/2015 18:30:02
|
Fix enclose, escape, and line termination characters in qserv-data-loader
|
Add this string to mysql loader 'LOAD DATA INFILE' command: {code} q += "ENCLOSED BY '%s' ESCAPED BY '%s' LINES TERMINATED BY '%s'" % (enclose, escape, newline) {code} and add params in cfg file.
| 2 |
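The string appended by the ticket can be exercised on its own; here is a sketch of assembling the full statement from hypothetical config values (file and table names are made up, and a production loader should also escape any quote characters inside the parameters rather than interpolating them blindly):

```python
# Hypothetical config values.  In the SQL text, '\\' denotes one backslash and
# '\n' a newline, so the Python strings carry the backslashes literally.
enclose, escape, newline = '"', "\\\\", "\\n"

q = ("LOAD DATA INFILE '/tmp/Object.csv' INTO TABLE Object"
     " FIELDS TERMINATED BY ','")
# The line this ticket adds: thread the three characters through to MySQL.
q += " ENCLOSED BY '%s' ESCAPED BY '%s' LINES TERMINATED BY '%s'" % (
    enclose, escape, newline)
```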
366 |
DM-1982
|
02/09/2015 12:30:15
|
Fix JDBC timestamp error
|
JDBC driver returns an error on next query: {code:sql} sql> select * from Science_Ccd_Exposure [2015-02-06 13:39:37] 1 row(s) retrieved starting from 0 in 927/970 ms [2015-02-06 13:39:37] [S1009] Cannot convert value '0000-00-00 00:00:00' from column 32 to TIMESTAMP. [2015-02-06 13:39:37] [S1009] Value '[B@548997d1' can not be represented as java.sql.Timestamp {code}
| 1 |
367 |
DM-1987
|
02/10/2015 23:04:48
|
Redesign/Refactor WCS and Coord
|
50% KSK, 50% RO. Currently WCS is mutable and Coord objects are heavyweight. Refactor WCS to be immutable and make Coord less heavyweight. Include lists of Coord objects. It's possible astropy could inform work in that area. Also, remove TanWcs in favor of TanSipWcs, since TanWcs can have SIP terms.
| 40 |
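The "immutable by construction" goal of the refactor above can be sketched in plain Python: a frozen dataclass rejects in-place mutation, so any value derived from the object can never be silently invalidated. The class below is purely illustrative and is not the afw API:

```python
from dataclasses import dataclass

# Illustrative immutable coordinate: assignment after construction raises,
# which is the property the WCS refactor wants.
@dataclass(frozen=True)
class Coord:
    ra: float   # degrees
    dec: float  # degrees

c = Coord(ra=10.5, dec=-5.25)
try:
    c.ra = 0.0          # frozen dataclass: this must fail
    mutated = True
except AttributeError:  # FrozenInstanceError subclasses AttributeError
    mutated = False
```

"Changing" such an object means constructing a new one, which keeps sharing safe and makes lightweight lists of coordinates easy to reason about.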
368 |
DM-1994
|
02/11/2015 14:30:29
|
Story point display and roll-up in epic display
|
I understand that there is a pending request to display the story points for individual story issues in the mini-table in which they are displayed for an epic. It would also be useful to see a rolled-up total of the story points for the defined set of stories - so that, among other things, this could be compared to the story point value for the epic. Ideally the story points for the roll-up might be displayed as "nn (mm)" where nn is the total points and mm is the number of points remaining to do (or done already - I don't care which as long as the definition is clear).
| 1 |
369 |
DM-2005
|
02/12/2015 18:07:11
|
switch ndarray to external package
|
There is already an external ndarray project on GitHub (we've been using a fork of that). We should merge the forks and switch to using the external package.
| 2 |
370 |
DM-2009
|
02/12/2015 18:12:43
|
Please add cbegin and cend to afw tables
|
It would be helpful if afw tables had the C++11 iterator methods cbegin and cend that return iterators-to-const.
| 2 |
371 |
DM-2029
|
02/13/2015 10:32:12
|
Update Confluence build instructions to match github move
|
The Build tool documentation https://confluence.lsstcorp.org/display/LDMDG/The+LSST+Software+Build+Tool refers to git clone [email protected]:LSST/DMS/devenv/lsstsw.git This should be updated to reflect the move to GitHub git clone https://github.com/lsst/lsstsw.git
| 0 |
372 |
DM-2050
|
02/18/2015 04:09:10
|
Integration and test monitoring architecture Part I
|
[retitled to better capture cycle scope] Develop and deploy a layer to capture the outputs, initially numeric, of integration testing afterburners such as sdss_demo, hsc_demo, and others developed this cycle. Also capture meta-information such as execution time and memory footprint. Propose a log format to standardise production of such information. Investigate a notification system based on trending away from expected values. Investigate data provisioning of integration tests, such as storage of test data in GithubLFS. [75% JMP 25% JH]
| 100 |
373 |
DM-2052
|
02/18/2015 04:32:29
|
Maintain list of OSes that pass build and integration testing
|
Provide automatically generated and updated pages showing operating systems that are successfully building and integrating the stack from source. [FE at 75%, JH at 75%]
| 20 |
374 |
DM-2054
|
02/18/2015 04:39:22
|
Release engineering Part One
|
Bucket for public stack releases [FE at 75%, JH at 75%]
| 40 |
375 |
DM-2057
|
02/18/2015 13:36:11
|
Attend Scale 13x conference
|
Attend database talks, in particular the MaxScale proxy talk (http://www.socallinuxexpo.org/scale/13x/presentations/advanced-query-routing-and-proxying-maxscale?utm_campaign=north-american-trade-shows&utm_source=hs_email&utm_medium=email&utm_content=16099082&_hsenc=p2ANqtz-_MFjfxvpCdmV_Ax2RKDdOGypHPQ85UL-UMuy0eRs_MrlJ2qJVp-MXx-g7_-dAQsq0trpA61hkZrzO-3gp6bKVkpK52fQ&_hsmi=16099082). If anyone has questions they would like me to ask, please post them here as well. I will post notes to this issue.
| 2 |
376 |
DM-2058
|
02/18/2015 14:14:32
|
Data loader should always create overlap tables
|
We have discovered that some overlap tables that are supposed to exist were not actually created. It looks like the partitioner is not creating overlap files when there is no overlap data, and the loader is not creating the overlap table if there is no input file. The situation is actually symmetric: there could be a non-empty overlap table but an empty/missing chunk table. When we create one table we should always create the other as well.
| 2 |
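The symmetric-creation rule above can be sketched as: take the union of chunk IDs seen in either kind of input, and create both tables of each pair (the table-name pattern and function name are illustrative, not the actual loader code):

```python
def tables_to_create(chunks_with_data, overlaps_with_data):
    """Return (chunk table, overlap table) name pairs the loader should make.

    Sketch of the fix: whenever either member of a pair has input data,
    create BOTH tables, so a chunk table and its overlap table always
    exist together, one of them possibly empty.
    """
    all_ids = sorted(set(chunks_with_data) | set(overlaps_with_data))
    return [("Object_%d" % i, "ObjectFullOverlap_%d" % i) for i in all_ids]
```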
377 |
DM-2060
|
02/18/2015 15:17:18
|
Rename TaskMsgFactory2
|
Rename TaskMsgFactory2 to TaskMsgFactory. Please see DM-211 for more information.
| 0.5 |
378 |
DM-2094
|
02/19/2015 17:50:20
|
Port metaREST.py to db
|
metaREST_v0.py in metaserv is currently using MySQLdb instead of going through the db API, because we need to use parameter binding for security reasons. We should switch to using db once the db interfaces support it.
| 1 |
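The security concern here is SQL injection. A sketch of the parameter-binding style the story calls for, using the stdlib sqlite3 driver as a stand-in DB-API backend (the table, column, and function names are made up; MySQLdb/pymysql use %s placeholders instead of ?):

```python
import sqlite3

def find_owner(conn, db_name):
    # Safe: the driver binds db_name as data, so it can never be parsed
    # as SQL.  (sqlite3 uses '?' placeholders; MySQLdb/pymysql use '%s'.)
    cur = conn.execute("SELECT owner FROM repo WHERE dbName = ?", (db_name,))
    return [row[0] for row in cur.fetchall()]

# Toy schema standing in for a metaserv table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE repo (dbName TEXT, owner TEXT)")
conn.execute("INSERT INTO repo VALUES ('DC_W13_Stripe82', 'jacek')")
```

An injection attempt such as `x' OR '1'='1` simply fails to match any row, because the whole string is bound as a value rather than spliced into the query text.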
379 |
DM-2095
|
02/19/2015 18:27:38
|
Port dbREST.py to db
|
dbREST_v0.py in dbserv is currently using MySQLdb instead of going through the db API, because we need to use parameter binding for security reasons. We should switch to using db once the db interfaces support it.
| 1 |
380 |
DM-2096
|
02/19/2015 23:16:01
|
Long term database work planning
|
Long term planning (updating LDM-240).
| 8 |
381 |
DM-2097
|
02/19/2015 23:47:00
|
Package andyH's fixed xssi version (>2MB answer problem) in eups
|
See DM-1847 - Andy made a patch; it'd be good to apply it to the xrootd we use for our stack.
| 1 |
382 |
DM-2129
|
02/20/2015 14:54:26
|
S19 Improve Query Coverage in Qserv
|
This epic holds budgeted effort for work directed at improving query coverage (additional or previously unsupported query types) in Qserv
| 40 |
383 |
DM-2131
|
02/20/2015 17:10:14
|
Resolve compiler warnings in new measurement framework
|
When building {{meas_base}}, or any other measurement plugins which follow the same interface, with clang, I see a bunch of warnings along the lines of: {code} In file included from src/ApertureFlux.cc:34: include/lsst/meas/base/ApertureFlux.h:197:18: warning: 'lsst::meas::base::ApertureFluxAlgorithm::measure' hides overloaded virtual function [-Woverloaded-virtual] virtual void measure( ^ include/lsst/meas/base/Algorithm.h:183:18: note: hidden overloaded virtual function 'lsst::meas::base::SimpleAlgorithm::measure' declared here: different number of parameters (4 vs 2) virtual void measure( {code} This is an artefact of a [workaround for SWIG issues|https://confluence.lsstcorp.org/pages/viewpage.action?pageId=20284390]; the warnings aren't indicative of a fundamental problem, but if we can avoid them we should. While we're at it, we should also fix: {code} include/lsst/meas/base/ApertureFlux.h:233:1: warning: 'ApertureFluxResult' defined as a struct here but previously declared as a class [-Wmismatched-tags] struct ApertureFluxResult : public FluxResult { ^ include/lsst/meas/base/ApertureFlux.h:65:1: note: did you mean struct here? class ApertureFluxResult; ^~~~~ struct {code}
| 1 |
384 |
DM-2139
|
02/22/2015 01:24:23
|
Support DDL in MetaServ - implementation
|
DDL information is embedded as comments in the master version of the schema (in "cat" repo). Currently we are only using it for schema browser. This story involves building tools that will load the DDL schema into MetaServ. Design aspects are covered in DM-1770.
| 8 |
385 |
DM-2141
|
02/22/2015 15:21:45
|
Add meas_extensions_shapeHSM to lsstsw, lsst_distrib
|
meas_extensions_shapeHSM has just been resurrected from bitrot, and should be included in our distribution. Contrary to DM-2140, it should probably not be included in lsst_apps, as it's not clear we want to add a dependency on tmv and GalSim there.
| 1 |
386 |
DM-2157
|
02/23/2015 17:37:12
|
Data loader crashes on uncompressed data.
|
Vaikunth just mentioned to me that there is a crash in the data loader when it tries to load uncompressed data: {noformat} root - CRITICAL - Exception occured: local variable 'outfile' referenced before assignment Traceback (most recent call last): File "/home/vaikunth/src/qserv/bin/qserv-data-loader.py", line 312, in <module> sys.exit(loader.run()) File "/home/vaikunth/src/qserv/bin/qserv-data-loader.py", line 248, in run self.loader.load(self.args.database, self.args.table, self.args.schema, self.args.data) File "/home/vaikunth/src/qserv/lib/python/lsst/qserv/admin/dataLoader.py", line 168, in load return self._run(database, table, schema, data) File "/home/vaikunth/src/qserv/lib/python/lsst/qserv/admin/dataLoader.py", line 192, in _run files = self._gunzip(data) File "/home/vaikunth/src/qserv/lib/python/lsst/qserv/admin/dataLoader.py", line 388, in _gunzip result.append(outfile) UnboundLocalError: local variable 'outfile' referenced before assignment {noformat} It looks like we never tested the loader on uncompressed data and there is a bug in handling uncompressed data.
| 1 |
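A sketch of the fix for the crash above: make sure `outfile` is assigned on every branch, passing uncompressed inputs through unchanged (function and parameter names are illustrative, not the actual dataLoader.py code):

```python
import gzip
import os
import shutil
import tempfile

def gunzip_files(paths, tmpdir=None):
    """Return paths to uncompressed versions of the input files.

    Sketch of the fix: 'outfile' is now assigned on every branch;
    files that are not gzipped are simply passed through unchanged.
    """
    result = []
    for path in paths:
        if path.endswith(".gz"):
            outfile = os.path.join(tmpdir or tempfile.gettempdir(),
                                   os.path.basename(path)[:-3])
            with gzip.open(path, "rb") as fin, open(outfile, "wb") as fout:
                shutil.copyfileobj(fin, fout)
        else:
            outfile = path  # uncompressed input: use the file as-is
        result.append(outfile)
    return result
```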
387 |
DM-2159
|
02/23/2015 19:17:07
|
Implement Image Response for ImgServ
|
This story covers implementing proper response, and the header metadata for the fits image response.
| 3 |
388 |
DM-2161
|
02/23/2015 20:12:54
|
Setup webserv for SUI tests
|
We need to setup a service (eg on lsst-dev) that can be used by the IPAC team to play with our webserv/metaserv/dbserv/imgserv. The server runs on lsst-dev machine, port 5000. To ssh-tunnel, try: {code} ssh -L 5000:localhost:5000 lsst-dev.ncsa.illinois.edu {code} An example usage: {code} curl 'http://localhost:5000/db/v0/query?sql=SHOW+DATABASES+LIKE+"%Stripe%"' curl 'http://localhost:5000/db/v0/query?sql=SHOW+TABLES+IN+DC_W13_Stripe82' curl 'http://localhost:5000/db/v0/query?sql=DESCRIBE+DC_W13_Stripe82.DeepForcedSource' curl 'http://localhost:5000/db/v0/query?sql=DESCRIBE+DC_W13_Stripe82.Science_Ccd_Exposure' curl 'http://localhost:5000/db/v0/query?sql=SELECT+deepForcedSourceId,scienceCcdExposureId+FROM+DC_W13_Stripe82.DeepForcedSource+LIMIT+10' curl 'http://localhost:5000/db/v0/query?sql=SELECT+ra,decl,filterName+FROM+DC_W13_Stripe82.Science_Ccd_Exposure+WHERE+scienceCcdExposureId=125230127' curl 'http://localhost:5000/image/v0/raw/cutout?ra=7.90481567257&dec=-0.299951669961&filter=r&width=30.0&height=45.0' {code}
| 2 |
389 |
DM-2171
|
02/24/2015 20:55:15
|
Implement JSON Results for MetaServ and DbServ
|
Implement JSON results for Metadata Service (see all M* in https://confluence.lsstcorp.org/display/DM/API), and Database Service (see all D*) as defined in DM-1868
| 3 |
390 |
DM-2173
|
02/25/2015 12:08:08
|
Disable testDbLocal.py in db if auth file not found
|
tests/testDbLocal.py can easily fail if the required mysql authorization file is not found in the user's home dir. Skip the test instead of failing in that case.
| 1 |
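The skip described above can be sketched with unittest's skip decorators (the auth-file path here is hypothetical; the real test computes its own location):

```python
import os
import unittest

# Hypothetical location of the required mysql authorization file; the
# real test reads its location from the user's home directory.
AUTH_FILE = os.path.expanduser("~/.lsst/dbAuth-testLocal.ini")

@unittest.skipUnless(os.path.exists(AUTH_FILE),
                     "mysql auth file %s not found" % AUTH_FILE)
class DbLocalTestCase(unittest.TestCase):
    def testConnect(self):
        pass  # the real tests would talk to the local mysql server
```

With the decorator, a missing auth file reports the case as skipped instead of errored, so the test suite as a whole still passes.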
391 |
DM-2186
|
02/26/2015 13:56:25
|
Move astrometry_net wrapper code from meas_astrom to a new package
|
Remove all astrometry.net wrapper code from {{meas_astrom}} and put it in a new package named {{meas_extensions_astrometryNet}}.
| 2 |
392 |
DM-2190
|
02/26/2015 14:58:07
|
Documentation for data loader
|
Vaikunth had some "expected" troubles playing with data loader options for his DM-1570 ticket. Main issue I believe is the absence of the documented use cases and their corresponding data loader options. I'll try to add a bunch of common use cases to RST documentation and also verify that all options behave as expected.
| 2 |
393 |
DM-2193
|
02/27/2015 11:15:38
|
Add assertXNearlyEqual to afw
|
We often want to compare two WCS for approximate equality. afw/image/testUtils has similar functions to compare images and masks, and I would like to add one for WCS. This ended up being expanded to adding functions for many afw classes (not yet including image-like classes, though the existing functions in image/testUtils for that purpose should probably be wrapped or rewritten on a different ticket).
| 5 |
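A minimal sketch of one such near-equality helper, for angles, in plain Python (the real afw helpers operate on Angle/Coord/Wcs objects; the function name and tolerance here are illustrative):

```python
import math

def assert_angles_almost_equal(ang0, ang1, max_diff=1e-10):
    """Assert two angles (radians) are equal modulo 2*pi, within max_diff.

    Illustrative stand-in for the kind of helper this ticket adds; the
    real afw functions compare Angle/Coord/Wcs objects, not bare floats.
    """
    diff = (ang1 - ang0) % (2.0 * math.pi)
    if diff > math.pi:
        diff -= 2.0 * math.pi  # report the smaller, signed difference
    if abs(diff) > max_diff:
        raise AssertionError("angles differ by %g rad > %g" % (diff, max_diff))
```

Wrapping the difference into (-pi, pi] makes the assertion insensitive to full turns, which matters when comparing sky coordinates near the RA = 0/360 boundary.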
394 |
DM-2199
|
02/27/2015 16:20:43
|
Build 2015_03 Qserv release
|
See https://confluence.lsstcorp.org/display/DM/Qserv+Release+Procedure for recipe.
| 1 |
395 |
DM-2241
|
03/02/2015 17:40:38
|
Raw image definition and usage
|
SUI needs to serve raw data to the user community. We want to understand the use cases and the definition of raw data. More specifically, what metadata will be available in the FITS file that we call a raw image?
| 1 |
396 |
DM-2243
|
03/02/2015 22:00:41
|
Extend API: expose cursor
|
Extend API to expose cursor. This was brought up by Andy in DM-2137.
| 1 |
397 |
DM-2257
|
03/03/2015 16:07:11
|
Allow eups xrootd install script to be relocatable
|
The xrootd lib/ directory should be a relative symlink to lib64, not a full-path link.
| 1 |
398 |
DM-2270
|
03/04/2015 15:50:35
|
Move VMs to Docker containers
|
We anticipate being able to move from the VMs that we currently use to using docker. This will require some coordination with Greg Daues to see how HTCondor is configured.
| 2 |