Columns (name: dtype, observed range/length):
id: int64 (0 - 5.38k)
issuekey: string (length 4 - 16)
created: string (length 19)
title: string (length 5 - 252)
description: string (length 1 - 1.39M)
storypoint: float64 (0 - 100)
217
XD-808
09/04/2013 20:37:58
Update to spring-data-hadoop 1.0.1.RELEASE
This might mean we should adjust our hadoopDistro options to the ones supported in the new release - hadoop12 (default), cdh4, hdp13, phd1 and hadoop21
3
218
XD-819
09/06/2013 14:01:09
Add Service Activator Processor
Would be nice to have a ServiceActivator Processor available so that if one had an existing Spring bean they could simply describe the bean id and method name - without going through the full complexity of creating a processing module.
3
219
XD-842
09/12/2013 08:35:19
Add back classifier = 'dist' to distZip build target
Add back "classifier = 'dist'" to distZip build target - it was was accidentally removed.
1
220
XD-847
09/14/2013 10:45:13
Revise the available hadoopDistro options
We should adjust our --hadoopDistro options to the ones supported in the new spring-data-hadoop 1.0.1.RELEASE - hadoop12 (default), cdh4, hdp13, phd1, and hadoop20. This includes updating the wiki pages.
5
221
XD-849
09/16/2013 05:22:16
Gemfire modules should support connection via locator
The gemfire modules currently accept server host and port. Provide an option to specify a locator host and port
2
222
XD-850
09/16/2013 08:29:32
JAR version mismatches
Looks like there are some version mismatch issues with the build/packaging of the XD components. Looking in xd/lib I see the following, which looks suspicious: mqtt-client-0.2.1.jar vs. mqtt-client-1.0.jar; jackson-core-asl-1.9.13.jar vs. jackson-mapper-asl-1.9.12.jar; spring-integration-core-3.0.0.M3.jar vs. spring-integration-http-2.2.5.RELEASE.jar; spring-data-commons-1.6.0.M1.jar vs. spring-data-commons-core-1.4.0.RELEASE.jar
3
223
XD-872
09/19/2013 21:02:09
Make in-memory meta data stores persistent
Just wanted to create a story for this - so we can consider whether this should be addressed. In at least 2 modules we use non-persisted state. We may want to consider making it persistent: *Twitter Search* uses an in-memory *MetadataStore* that keeps track of the twitter ids. There exists a corresponding issue for Spring Integration: "Create a Redis-backed MetadataStore" See: https://jira.springsource.org/browse/INT-3085 *File Source*'s File Inbound Channel Adapter uses an AcceptOnceFileListFilter, which uses an in-memory Queue to keep track of duplicate files.
8
224
XD-873
09/19/2013 21:26:09
File Source - Provide option to pass on File object
This story may need to be broken into several stories. Particularly for Batch scenarios, one may not want to run a "file-to-string-transformer" on the payload file in the file source, but rather handle/pass the file reference itself (local SAN etc.) - e.g. in case somebody drops a 2GB file, or in scenarios where one wants to push those large files into HDFS and run hadoop jobs on the data. This is important for Batch Jobs as they need to access the file itself for the reader. We need to *keep in mind the various transports we support*. Not sure how Kryo handles file serialization. I would think we only need the File Meta Data to be persisted, not the file-data itself (make that configurable??).
8
225
XD-874
09/19/2013 21:33:03
For file based item reader jobs, step/job completion message should have name of file sent on named channel
It looks like we don't handle deletion of source files currently. We should provide some support for that - maybe there is a way to hook into Spring Integration's PseudoTransactionManager support: http://docs.spring.io/spring-integration/api/org/springframework/integration/transaction/PseudoTransactionManager.html The *File Source* should possibly also support file archival functionality (but that might also be a dedicated processor?). Not sure where we want to set the semantic boundaries for the File Source.
8
226
XD-885
09/20/2013 20:24:05
Add Batch Job Listeners Automatically
Add Batch Job Listeners Automatically * Each major listener category should send notifications to its own channel (StepExecution, Chunk, Item etc.) * Add attribute to disallow automatic adding of listeners
8
227
XD-892
09/23/2013 12:16:21
Spring Batch Behavior change from M2 to M3
In M3, the batch job behavior has changed. In M2, it was much easier to create and invoke a batch job. In M3, a trigger is required. Figuring that change out isn't a big deal, but the behavior of this batch job in M3 throws a stack trace, yet it executes. In M2, this same batch job runs fine with no stack trace. Logs are attached. I can't see a difference in the container log property files from M2 to M3. Turning the log settings down will suppress the traces, but I was not expecting the traces since they did not show up in M2. Stream Definitions: job create --name pdfLoadBatchJob --definition "batch-pdfload --inputPath='LOCAL_PDF_PATH' --hdfsPath='REMOTE_HDFS_PATH'" stream create --name pdfloadtrigger --definition "trigger > job:pdfLoadBatchJob"
1
228
XD-897
09/24/2013 08:44:15
The HDFS Sink should support copying File payloads
We should support *java.io.File* payloads in order to support non-textual file and large text file payloads being uploaded to HDFS. Currently text file payloads are converted to a text stream in memory, and non-String payloads are converted to JSON first, using an "object-to-json-transformer". Ultimately we need to support streams such as "file | hdfs" where the actual payload being copied to HDFS is not necessarily JSON or textual. Need to be able to support headers in the message that will indicate which HDFS file the data should be stored in.
8
229
XD-901
09/24/2013 15:54:33
Wrong Jetty Util on classpath for WebHdfs
We currently include jetty-util-6.1.26.jar but we need to add the correct jar for different distributions - PHD uses jetty-util-7.6.10.v20130312.jar. Need to check hadoop-hdfs dependencies for the distros and add jetty-util-* to the jar copy for each distro.
3
230
XD-904
09/27/2013 01:36:26
Fix hardcoded redis port from tests
kparikh-mbpro:spring-xd kparikh$ grep -r 6379 * | grep java spring-xd-analytics/src/test/java/org/springframework/xd/analytics/metrics/common/RedisRepositoriesConfig.java: cf.setPort(6379); spring-xd-analytics/src/test/java/org/springframework/xd/analytics/metrics/integration/GaugeHandlerTests.java: cf.setPort(6379); spring-xd-analytics/src/test/java/org/springframework/xd/analytics/metrics/integration/RichGaugeHandlerTests.java: cf.setPort(6379); spring-xd-dirt/src/test/java/org/springframework/xd/dirt/listener/RedisContainerEventListenerTest.java: cf.setPort(6379); The port should be resolved from a configurable property instead of being hardcoded (see the sketch after this record).
1
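A minimal sketch of that externalization - an assumed approach, not the project's actual fix, and the property name xd.test.redis.port is illustrative:
{code:java}
// Resolves the Redis port for tests from a system property, defaulting to 6379.
public final class RedisTestPort {

    private RedisTestPort() {
    }

    public static int get() {
        // Integer.getInteger reads the named system property and falls back to the default.
        return Integer.getInteger("xd.test.redis.port", 6379);
    }
}
{code}
Test configs would then call cf.setPort(RedisTestPort.get()) instead of hardcoding cf.setPort(6379).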
231
XD-908
09/30/2013 04:31:48
Add aggregate counter query by number of points
It should be possible to supply a start or end date (or none for the present), plus a "count" value for the number of points required (i.e. after or prior to the given time).
3
232
XD-912
09/30/2013 11:50:06
Support for registering custom message converters
Users need to register custom message converters used by modules.
5
233
XD-917
10/01/2013 11:30:32
Make the parser aware of message conversion configuration
Enhance the stream parser to take message conversion into account in order to validate or automatically configure converters. For example: {noformat:nopanel=true} source --outputType=my.Foo | sink --inputType=some.other.Bar {noformat} is likely invalid since XD doesn't know how to convert Foo->Bar.
8
234
XD-919
10/02/2013 07:04:05
Remove json parameter from twittersearch source
The json parameter is no longer required; use --outputType=application/json instead.
2
235
XD-928
10/08/2013 09:33:15
Refactor src/test/resources in Dirt
* In the testmodules.source ** Rename source-config to packaged-source ** Rename source-config to packaged-source-no-lib * All xml files should be prefixed with test. i.e. testsource, testsink * Make sure all tests pass with new configuration
1
236
XD-930
10/08/2013 10:40:56
Return rounded interval values from aggregate counter queries
The aggregate counter query result currently returns the interval that is passed in, whether or not it is aligned with the bucket resolution requested. It would be more intuitive if the time values returned were rounded (down) to the resolution of the query (i.e. whole minutes, hours, days or whatever); a rounding sketch follows this record.
2
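A minimal sketch of the rounding described above, assuming epoch-millisecond timestamps and UTC-aligned buckets (class and method names are illustrative):
{code:java}
import java.util.concurrent.TimeUnit;

public final class IntervalRounding {

    private IntervalRounding() {
    }

    // Rounds an epoch-millisecond timestamp down to the query resolution,
    // e.g. whole minutes, hours or days (days are UTC-aligned here).
    public static long roundDown(long epochMillis, TimeUnit resolution) {
        long bucketMillis = resolution.toMillis(1);
        return (epochMillis / bucketMillis) * bucketMillis;
    }

    public static void main(String[] args) {
        // 2014-02-12 15:39:46 UTC rounded down to the whole minute.
        long ts = 1392219586000L;
        System.out.println(roundDown(ts, TimeUnit.MINUTES));
    }
}
{code}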
237
XD-931
10/08/2013 14:40:24
Format option to display runtime module properties in shell
The runtime module properties require a format option when displayed in the Shell. Based on the PR (https://github.com/spring-projects/spring-xd/pull/340), the module properties are stored as String and displayed as is.
2
238
XD-939
10/09/2013 12:18:03
Make Runtime modules listing by ContainerId pageable
The RuntimeContainersController (from PR#340) returns the list of runtime modules. Instead we need to make it pageable.
2
239
XD-955
10/14/2013 10:08:27
Update Jobs documentation to include "job launch" command
This is currently missing and probably supersedes some of the stuff that's in there now.
1
240
XD-974
10/21/2013 14:23:12
The HDFS Sink should support compressing files as they are copied
Get a java.io.File and copy it into HDFS. Could be text or binary. Write compressed output with Hadoop and third-party codecs (see XD-277, XD-279). Should initially support: - bzip2 - LZO
8
241
XD-981
10/21/2013 21:11:57
Missing guava-11.0.2.jar dependency for hadoop distros
We used to have a shared guava-11.0.2.jar dependency in the lib dir. That's no longer there, so hadoop distros that require this now fail (at least any hadoop 2.0.x based ones). We should also upgrade to current Hadoop versions (Hadoop 2.2 stable).
3
242
XD-990
10/22/2013 15:41:21
The HDFS Store Library should support writing text with delimiter
Support writing lines of text separated by a delimiter, e.g. CSV (comma-separated values) or TSV (tab-separated values). No compression.
8
243
XD-991
10/22/2013 15:45:55
The HDFS Store Library should support compression when writing text
Need to support writing text in compressed format. Should initially support: - bzip2 - LZO
8
244
XD-992
10/22/2013 15:50:23
The HDFS Store Library should support writing to Sequence Files
Support for writing Sequence Files without compression. Need a means to specify the key/value to be used.
8
245
XD-993
10/22/2013 15:52:11
The HDFS Store Library should support compression when writing to Sequence Files
Support for using compression when writing Sequence Files - either block- or record-based compression.
8
246
XD-994
10/22/2013 16:18:16
The HDFS Sink should support writing POJOs to HDFS using Parquet
Writing POJOs using Kite SDK
8
247
XD-998
10/23/2013 09:54:37
Add documentation for gemfire cache-listener source
Need some sample usage, docs for https://github.com/spring-projects/spring-xd/tree/master/modules/source/gemfire
1
248
XD-1005
10/23/2013 22:15:20
UI: User should be able to filter the list of executions on the execution tab
On clicking the “Executions” tab, the user should see the list of all batch job executions. There should be options to filter job executions by a few criteria, such as “Job name”, “execution time”, etc.
3
249
XD-1006
10/23/2013 22:18:22
UI: User should be able to view job detail from a specific job execution at Job Executions page
On clicking "details" link on a job execution row, user should see the job details. Job detail page will show all the information about the job, where as the table listing of jobs on the Execution tab may have omitted some columns or aggregated values to convey information more easily.
3
250
XD-1007
10/23/2013 23:09:48
UI: User should be able to see step execution info in a table below job detail
On the job detail page, we should display all the step executions associated with the specific job execution in a table view.
3
251
XD-1016
10/25/2013 13:55:41
Provide an option to pretty print JSON output
Probably the cleanest approach is to provide a properties file in the xd config directory that enables this globally, e.g., json.pretty.print=true (see the Jackson sketch after this record). This will require some refactoring of the ModuleTypeConversion plugin, i.e., use DI in streams.xml
3
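A sketch of how such a global flag could drive Jackson pretty-printing - the property name comes from the description above; wiring this into the ModuleTypeConversion plugin is not shown:
{code:java}
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.databind.SerializationFeature;

public final class JsonOutput {

    private static final ObjectMapper MAPPER = new ObjectMapper();

    static {
        // Honour the proposed global flag, e.g. json.pretty.print=true.
        if (Boolean.getBoolean("json.pretty.print")) {
            MAPPER.enable(SerializationFeature.INDENT_OUTPUT);
        }
    }

    private JsonOutput() {
    }

    public static String write(Object payload) throws JsonProcessingException {
        return MAPPER.writeValueAsString(payload);
    }
}
{code}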
252
XD-1039
11/07/2013 02:10:12
Composed of Composed fails at stream deployment time
Although composition of a module out of an already composed module seems to work at the 'module compose' level, trying to deploy a stream with that more complex module fails with at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589) at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:312) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:724) Caused by: java.lang.IllegalArgumentException: each module before the last must provide 'output' at org.springframework.util.Assert.notNull(Assert.java:112) at org.springframework.xd.module.CompositeModule.initialize(CompositeModule.java:132) at org.springframework.xd.dirt.module.ModuleDeployer.deploy(ModuleDeployer.java:234) at org.springframework.xd.dirt.module.ModuleDeployer.deployModule(ModuleDeployer.java:224) at org.springframework.xd.dirt.module.ModuleDeployer.handleCompositeModuleDeployment(ModuleDeployer.java:180) at org.springframework.xd.dirt.module.ModuleDeployer.handleMessageInternal(ModuleDeployer.java:129) at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:73) ... 63 more
5
253
XD-1041
11/08/2013 09:12:37
Upgrade to Spring for Apache Hadoop 1.0.2.RELEASE and Pivotal HD 1.1
Make sure the sinks and jobs work against Pivotal HD 1.1
3
254
XD-1045
11/09/2013 12:56:25
Create project for model that is common between client and server
This would eliminate dependencies that are currently in the codebase, such as: * RESTModuleType and ModuleType enums * ModuleOption and DetailedModuleDefinitionResource.Option
5
255
XD-1047
11/10/2013 12:41:11
Allow Aggregate Counter to use timestamp field in data.
Currently the aggregate counter aggregates by the current time. However, the data may already have a timestamp in it (e.g. streams from activity events on a website). It would be useful as an alternative approach to be able to specify this field to aggregate on. This would have the following benefits: 1) The aggregate counts would be more accurate as they would reflect the actual event times and not have any lag from an intermediate messaging system they might have passed through. 2) If for whatever reason XD is down, comes back up and starts pulling queued messages from the messaging system, the aggregate counter will reflect the correct event time. Currently you would get a gap and then a spike as a backlog of messages would get allocated to the current aggregate count. 3) Old data could be rerun through XD, still creating the correct aggregate counts. Configuration would be something like stream create --name mytap --definition "tap:mystream > aggregatecounter --name=mycount --timestampField=eventtime" - without the timestampField option it would behave as it does currently.
5
256
XD-1048
11/10/2013 13:19:29
Extend aggregate counter to dynamically aggregate by field values in addition to time.
This would be a combination of the existing aggregate counter and field value counter functionality. For example, if the stream data was for car purchases, some fields might be colour, make and model. When analysing the aggregate data I don't just want to know how many were sold on Monday, but how many of each make or how many of each colour, or how many of a particular colour, make AND model. This would allow a dashboard-type client to 'drill down' into each dimension or combination of dimensions (in real time, without executing batch queries against the raw data). Ideally the aggregate counter would be specified as stream create --name mytap --definition "tap:mystream > aggregatecounter --name=mycount --fieldNames=colour,make,model" The keys would be dynamically created according to the field values in each record (i.e. in a similar way to the field value counter, you would not need to predefine field values) and keys would be created for all combinations of the fields specified. E.g. the record { "colour":"silver" , "make":"VW" , "model" : "Golf" } would increment the following key counters (in addition to the existing time buckets): <existing base counter - i.e. all fields for this time bucket> colour:silver make:VW model:Golf colour:silver.make:VW colour:silver.model:Golf make:VW.model:Golf colour:silver.make:VW.model:Golf - i.e. the actual keys would look something like aggregatecounters.mycount.make:VW.model:Golf.201307 etc. (a key-expansion sketch follows this record). This may seem like it would generate a lot of key combinations, but in practice the data generated will still be massively less than the raw data, and keys will only be created if that combination occurs in a time period. Also, some fields may be dependent on each other (such as make and model in the above example), so the number of possibilities for those composite keys would be a lot less than the number of values of one times the number of values of the other.
5
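A sketch of the key expansion the description implies - n fields yield 2^n - 1 non-empty composite keys per record (class and method names are illustrative):
{code:java}
import java.util.ArrayList;
import java.util.List;

public final class CompositeKeys {

    private CompositeKeys() {
    }

    // Builds one dot-joined key per non-empty subset of the field:value pairs,
    // preserving the configured field order within each key.
    public static List<String> combinations(List<String> fieldValuePairs) {
        List<String> keys = new ArrayList<>();
        int n = fieldValuePairs.size();
        for (int mask = 1; mask < (1 << n); mask++) {
            StringBuilder key = new StringBuilder();
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) != 0) {
                    if (key.length() > 0) {
                        key.append('.');
                    }
                    key.append(fieldValuePairs.get(i));
                }
            }
            keys.add(key.toString());
        }
        return keys;
    }

    public static void main(String[] args) {
        // Prints the seven keys for the silver VW Golf record above.
        combinations(List.of("colour:silver", "make:VW", "model:Golf"))
                .forEach(System.out::println);
    }
}
{code}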
257
XD-1060
11/12/2013 08:28:43
Add support for Hortonworks Data Platform 2.0
(apologies if a ticket already exists for this, but I didn't see one) I spun up the Hortonworks Data Platform 2.0 sandbox, but see it isn't supported by Spring XD yet. How hard would it be to add these distros? Is it just a matter of dropping in a lib folder for hadoop22 and/or hdp20, and allowing those options to be passed in via the --hadoopDistro option? I'm currently trying to work through the following tutorial, but using the HDP 2.0 sandbox instead of the 1.3 sandbox: http://hortonworks.com/hadoop-tutorial/using-spring-xd-to-stream-tweets-to-hadoop-for-sentiment-analysis/ Thanks!
1
258
XD-1061
11/12/2013 12:55:40
Upgrade asciidoctor-gradle-plugin to 0.7.0
Looks like we need to spend a cycle on Asciidoc - we still have the author-tag issue. I thought we could simply upgrade the asciidoctor-gradle-plugin to 0.7.0 (currently 0.4.1), but that breaks the docs being generated.
2
259
XD-1072
11/15/2013 11:00:46
Add bridge module
Add a bridge module per XD-956 to support definitions like topic:foo > queue:bar. Convenient for testing for XD-1066.
1
260
XD-1080
11/18/2013 20:17:33
Make deploy=false the default when creating a new job
The automatic deployment of the job makes it harder to understand the lifecycle of the job, and also does not allow for the opportunity to define any additional deployment metadata for how that job runs, e.g. is it partitioned, etc.
1
261
XD-1097
11/19/2013 03:20:56
Redo Hadoop distribution dependency management
The way we now include various Hadoop distributions is cumbersome to maintain. Need a better way of managing and isolating these dependencies on a module level rather than container level.
8
262
XD-1103
11/20/2013 14:03:34
JDBC sink is broken - looks like some config options got booted
The JDBC sink is broken. Simple "time | jdbc" results in: org.springframework.jdbc.BadSqlGrammarException: PreparedStatementCallback; bad SQL grammar [insert into test (payload) values(?)]; nested exception is java.sql.SQLSyntaxErrorException: user lacks privilege or object not found: TEST Looks like some config options got clobbered during bootification.
5
263
XD-1104
11/21/2013 04:01:11
Create Shell Integration test fixture for jdbc related sink
Would be nice to have some kind of regression testing on the jdbc sink, as it becomes more prominent in XD. Use an in-memory db where we expose e.g. a JdbcTemplate to assert state (see the fixture sketch after this record).
5
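A sketch of such a fixture using Spring's embedded-database support, assuming the jdbc sink's default single-column table layout mentioned in XD-1103:
{code:java}
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabase;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseBuilder;
import org.springframework.jdbc.datasource.embedded.EmbeddedDatabaseType;

public class JdbcSinkFixture {

    public static void main(String[] args) {
        // Stand up an in-memory HSQL database with the sink's default table shape.
        EmbeddedDatabase db = new EmbeddedDatabaseBuilder()
                .setType(EmbeddedDatabaseType.HSQL)
                .build();
        JdbcTemplate jdbc = new JdbcTemplate(db);
        jdbc.execute("create table test (payload varchar(2000))");

        // A real test would point the jdbc sink at this datasource, send messages
        // through the stream, then assert on the resulting rows.
        jdbc.update("insert into test (payload) values (?)", "hello");
        int rows = jdbc.queryForObject("select count(*) from test", Integer.class);
        System.out.println("rows = " + rows);

        db.shutdown();
    }
}
{code}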
264
XD-1105
11/21/2013 04:03:31
Add some test coverage to mqtt modules
Even though it may be hard to come up with an mqtt broker, an easy test that should be automated is somesource | mqtt --topic=foo with mqtt --topics=foo | somesink, asserting that what is emitted to somesource ends up in somesink.
3
265
XD-1108
11/22/2013 07:47:00
Restore lax command line options
Restore --foo=bar as well as --foo bar (a parsing sketch follows this record). Validation of values should be done as a separate story.
2
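A sketch of lax option collection - not the actual XD parser, and the class name is illustrative:
{code:java}
import java.util.HashMap;
import java.util.Map;

public final class LaxOptionParser {

    private LaxOptionParser() {
    }

    // Accepts both "--foo=bar" and "--foo bar"; a trailing bare "--foo" becomes a flag.
    public static Map<String, String> parse(String[] args) {
        Map<String, String> options = new HashMap<>();
        for (int i = 0; i < args.length; i++) {
            String arg = args[i];
            if (!arg.startsWith("--")) {
                continue;
            }
            String body = arg.substring(2);
            int eq = body.indexOf('=');
            if (eq >= 0) {
                options.put(body.substring(0, eq), body.substring(eq + 1)); // --foo=bar
            }
            else if (i + 1 < args.length && !args[i + 1].startsWith("--")) {
                options.put(body, args[++i]); // --foo bar
            }
            else {
                options.put(body, "true"); // bare flag
            }
        }
        return options;
    }
}
{code}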
266
XD-1112
11/25/2013 05:37:30
Add port scan (and ability to disable) to container launcher
Spring Boot supports port scanning if you set server.port=0 (and disabling the server with -1), so we could make that the default for the container node.
0
267
XD-1114
11/25/2013 07:03:47
Investigate dropped Module Deployment Requests
We have observed in unit tests (see AbstractSingleNodeStreamIntegrationTests) that the (Redis/SingleNode) tests occasionally fail. The root cause must be investigated further, but there is some evidence to suggest that the control messages (ModuleDeploymentRequests) are not always received and handled by the ModuleDeployer. This does not produce an error but results in runtime stream failures. This problem may be resolved as part of the planned Deployment SPI but is being tracked here until we are certain that it has been resolved.
5
268
XD-1115
11/25/2013 07:36:43
We no longer validate the --hadoopDistro options in the xd scripts
We no longer validate the --hadoopDistro options in the xd scripts. Seems the classes doing this validation were removed for boot. We do this validation in the xd-shell script.
3
269
XD-1122
11/26/2013 01:46:43
Add jmxPort to list of coerced cmd line options
Following merge of XD-1109. See discussion at https://github.com/spring-projects/spring-xd/commit/eaf886eab3b2ef07da55575029ccabb2c8a36af9#commitcomment-4701947
2
270
XD-1132
12/01/2013 12:08:41
JMS Module - add support for TOPICS
As a Spring XD user I need to listen on a JMS Topic and ingest the messages so I can process them. Currently the module only allows for Queues.
2
271
XD-1147
12/06/2013 09:30:28
Allow alternate transports to be used within a stream
Need to clarify if this means alternate transports within a stream, e.g. source |[rabbit]| processor |[redis]| sink, or specifying that a stream uses an alternate transport to the one configured for the container.
0
272
XD-1155
12/11/2013 12:21:10
The lib directory for hadoop12 contains a mix of hadoop versions
This causes issues depending on which version of the core/common jar gets loaded first - like: xd:>hadoop fs ls -ls: Fatal internal error java.lang.UnsupportedOperationException: Not implemented by the DistributedFileSystem FileSystem implementation   at org.apache.hadoop.fs.FileSystem.getScheme(FileSystem.java:213)   at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2401)   at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2411)   at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)   at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)   at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)   at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:166)   at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:351)   at org.apache.hadoop.fs.Path.getFileSystem(Path.java:287)   at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:325)   at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:224)   at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:207)   at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)   at org.apache.hadoop.fs.shell.Command.run(Command.java:154)   at org.apache.hadoop.fs.FsShell.run(FsShell.java:255)   at org.springframework.xd.shell.hadoop.FsShellCommands.run(FsShellCommands.java:412)   at org.springframework.xd.shell.hadoop.FsShellCommands.runCommand(FsShellCommands.java:407)   at org.springframework.xd.shell.hadoop.FsShellCommands.ls(FsShellCommands.java:110)   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)   at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)   at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)   at java.lang.reflect.Method.invoke(Method.java:606)   at org.springframework.util.ReflectionUtils.invokeMethod(ReflectionUtils.java:191)   at org.springframework.shell.core.SimpleExecutionStrategy.invoke(SimpleExecutionStrategy.java:64)   at org.springframework.shell.core.SimpleExecutionStrategy.execute(SimpleExecutionStrategy.java:48)   at org.springframework.shell.core.AbstractShell.executeCommand(AbstractShell.java:127)   at org.springframework.shell.core.JLineShell.promptLoop(JLineShell.java:483)   at org.springframework.shell.core.JLineShell.run(JLineShell.java:157)   at java.lang.Thread.run(Thread.java:724)
3
273
XD-1159
12/13/2013 08:09:46
Add a MongoDB Sink
This should be quite straightforward, since the Spring Data Mongo jars are already included. We have this working by just adding the attached sink context file and the spring-integration-mongodb jar. (This works for JSON string streams, but a mongo converter probably needs to be added to support Tuple conversion.)
5
274
XD-1160
12/13/2013 11:44:21
Standardize naming and unit for options across modules
We should standardize the options between modules: idleTimeout vs. timeout, rolloverSize vs. rollover. Also, need to standardize the unit used for timeout - should this be s or ms?
8
275
XD-1161
12/13/2013 13:33:27
Re-deployment of hdfs sink reuses filename of first deployment
Need to check for existing files with the same file counter
3
276
XD-1162
12/13/2013 17:07:49
Column option of JDBC sink should not convert underscore to property name.
The current implementation of the column option of the JDBC sink converts underscores to Java property names. If a database column contains an underscore, there is no way to store data. So JdbcMessagePayloadTransformer should not use JdbcUtils.convertUnderscoreNameToPropertyName even if the column contains "_".
1
277
XD-1170
12/16/2013 16:23:51
Splunk module is broken
The Splunk sink module doesn't work at all. It throws a java.lang.VerifyError like the following: nested exception is java.lang.VerifyError: class org.springframework.integration.splunk.outbound.SplunkOutboundChannelAdapter overrides final method onInit.()V This is because SplunkOutboundChannelAdapter refers to an old Spring Integration jar, but the recent AbstractReplyProducingMessageHandler (which SplunkOutboundChannelAdapter extends) makes the onInit method final. Hence it doesn't work. SplunkOutboundChannelAdapter should be fixed to not override the onInit method, and the jar file spring-integration-splunk-1.0.0.M1.jar replaced.
2
278
XD-1176
12/18/2013 10:42:19
Update to spring-data-hadoop 2.0.0.M4
Update dependencies to spring-data-hadoop 2.0.0.M4
1
279
XD-1182
12/19/2013 11:42:39
Update to spring-data-hadoop 2.0.0.M5
Update to spring-data-hadoop 2.0.0.M5 when it is released and remove the temporary DatasetTemplateAllowingNulls in spring-xd-hadoop We should also review the supported hadoop distros - think we should support anything that is current/stable: - hadoop12 - hadoop22 - phd1 (PHD 1.1) - hdp13 - hdp20 - cdh4
3
280
XD-1190
12/27/2013 07:20:33
Setup precedence order for module properties' property resolver
The PropertyResolver needs to follow the below precedence order on PropertySources when resolving the module properties, from lowest to highest: 0. application.yml 1. application.yml fragment 2. property placeholders 2a. property placeholder under 'shared' config directory 2b. property placeholder under module/(source/sink/processor)/config directory 3. environment variables 4. system properties 5. command line (a resolution sketch follows this record).
5
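A small demonstration of how such an ordering behaves with Spring's PropertySources - the three sources here stand in for the levels above (first-registered wins); names and values are illustrative:
{code:java}
import java.util.Map;

import org.springframework.core.env.MapPropertySource;
import org.springframework.core.env.MutablePropertySources;
import org.springframework.core.env.PropertySourcesPropertyResolver;

public class PrecedenceDemo {

    public static void main(String[] args) {
        MutablePropertySources sources = new MutablePropertySources();
        // Register from highest to lowest precedence: command line, system
        // properties, then application.yml defaults.
        sources.addLast(new MapPropertySource("commandLine",
                Map.of("server.port", "9001")));
        sources.addLast(new MapPropertySource("systemProperties",
                Map.of("server.port", "9002", "transport", "rabbit")));
        sources.addLast(new MapPropertySource("applicationYml",
                Map.of("server.port", "9003", "transport", "redis", "name", "xd")));

        PropertySourcesPropertyResolver resolver = new PropertySourcesPropertyResolver(sources);
        System.out.println(resolver.getProperty("server.port")); // 9001 - command line wins
        System.out.println(resolver.getProperty("transport"));   // rabbit
        System.out.println(resolver.getProperty("name"));        // xd - falls through to the lowest level
    }
}
{code}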
281
XD-1191
12/30/2013 08:18:57
JDBC sink destroys existing table
The jdbc sink deletes the existing table and creates a single-column payload one even if the properties file has 'initializeDatabase=false'.
3
282
XD-1217
01/10/2014 07:14:47
twittersearch and twitterstream should support compatible formats
Currently twitterstream emits native twitter json whereas twittersearch uses SI/Spring Social and emits spring social Tweet types. This makes it difficult to replace twitter sources and reuse XD stream definitions. This requires coordination with SS 1.1.0 and SI 4.0 GA releases. NOTE: I think it's a good idea to continue to support native twitter JSON, keep as an option for twitterstream, but the default should be Tweet types.
5
283
XD-1220
01/10/2014 12:30:46
Batch jobs should use application.yml provided connection as default
Batch jobs should use the connection provided in application.yml as the default. They now have their own configuration in batch-jdbc.properties. This config needs to account for any changes made to application.yml settings so the data is written to the batch metadata database by default.
5
284
XD-1228
01/14/2014 22:07:42
Provide an easy, prescriptive means to perform unit and basic stream integration tests.
AbstractSingleNodeStreamDeploymentIntegrationTests is the basis of 'state of the art' testing for a stream, allowing you to get a reference to the input and output channel of the stream http | filter | transform | file. One can send messages to the channel after the http module but before filter, and one can retrieve the messages that were sent to the channel after the transform module but before file. The current implementation inside AbstractSingleNodeStreamDeploymentIntegrationTests can be improved in terms of ease of use for end users. The issue is to create as simple a way as possible for a user to test their processing modules/stream definitions without having to actually do a real integration test by sending data to the input module. Either as a separate issue or as part of this one, the documentation https://github.com/spring-projects/spring-xd/wiki/Creating-a-Processor-Module should be updated to explicitly show how to use this issue's test functionality.
8
285
XD-1240
01/14/2014 22:58:40
Add to Acceptance Test EC2 CI build plan a stage that uses XD distributed mode with rabbit
See https://quickstart.atlassian.com/download/bamboo/get-started/bamboo-elements - "Stages are comprised of one or more Jobs, which run in parallel". We would like the tests across the rabbit and redis transports to run in parallel.
8
286
XD-1241
01/14/2014 22:59:58
Add to Acceptance Test EC2 job a stage that uses XD distributed mode with redis
See https://quickstart.atlassian.com/download/bamboo/get-started/bamboo-elements - "Stages are comprised of one or more Jobs, which run in parallel". We would like the tests across the rabbit and redis transports to run in parallel.
8
287
XD-1245
01/15/2014 07:04:56
Develop basic acceptance test application to exercise XD-EC2 deployment from CI
Create a first pass at an acceptance test app for a stream definition of http | log. This will involve creating two new projects in xd: 1. spring-xd-integration-test 2. spring-xd-acceptance-tests #1 will contain generally useful utility methods for acceptance tests, such as sending data over http and obtaining and asserting JMX values of specific modules. #2 will contain tests that use #1 to test the various out-of-the-box modules provided in XD.
5
288
XD-1252
01/17/2014 04:43:43
Allow processor script variables to be passed as module parameters
Currently, if we want to bind values to script variables we need to put them in a properties file, like so: xd:> stream create --name groovyprocessortest --definition "http --port=9006 | script --location=custom-processor.groovy --properties-location=custom-processor.properties | log" Ideally it should be: xd:> stream create --name groovyprocessortest --definition "http --port=9006 | script --location=custom-processor.groovy --foo=bar --baz=boo | log"
5
289
XD-1255
01/20/2014 09:00:31
Create assertion to get count of messages processed by a specific module in a stream
The modules are exposed via JMX and in turn exposed over http via jolokia. See https://jira.springsource.org/browse/XD-343. This issue is to develop a helper method that, given a stream id and/or module name, asserts that the number of messages processed after sending stimulus messages is as expected, e.g. int originalCount = getCount("testStream", "file"); //do stuff that generates 100 messages assertCount("testStream", "file", 100, originalCount) (a helper skeleton follows this record). For now we can assume we know the location of the modules by assuming we have only one container deployed.
5
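A skeleton of that helper, with the jolokia/JMX lookup abstracted behind a function so the assertion logic stands alone - a sketch under that assumption, not the eventual implementation:
{code:java}
import java.util.function.ToIntBiFunction;

public final class MessageCountAssertions {

    // Fetches the current processed-message count for (streamName, moduleName),
    // e.g. by querying the module's MBean over http via jolokia.
    private final ToIntBiFunction<String, String> countFetcher;

    public MessageCountAssertions(ToIntBiFunction<String, String> countFetcher) {
        this.countFetcher = countFetcher;
    }

    public int getCount(String streamName, String moduleName) {
        return countFetcher.applyAsInt(streamName, moduleName);
    }

    // Asserts that exactly expectedDelta messages were processed since the baseline.
    public void assertCount(String streamName, String moduleName, int expectedDelta, int originalCount) {
        int actualDelta = getCount(streamName, moduleName) - originalCount;
        if (actualDelta != expectedDelta) {
            throw new AssertionError("Expected " + expectedDelta + " messages through "
                    + streamName + "." + moduleName + " but saw " + actualDelta);
        }
    }
}
{code}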
290
XD-1256
01/21/2014 02:49:06
Running XD as service
It is useful to configure the operating system so that it will start Spring XD automatically on boot. For example, in Linux it would be great if the Spring XD distro contained an init.d script to run it as a service. A typical init.d script gets executed with arguments such as "start", "stop", "restart", "pause", etc. In order for an init.d script to be started or stopped by init during startup and shutdown, the script needs to handle at least the "start" and "stop" arguments.
2
291
XD-1267
01/24/2014 15:33:07
Improve configuration option handling
There are inconsistencies in our current approach to handling module options (using a property file for defaults vs. classes has different behavior in terms of overriding with system properties). Need to rationalize the behavior.
0
292
XD-1270
01/27/2014 20:51:07
Add states to the deployment of stream
Improve how the state of the stream is managed. A deploy command moves the stream from the undeployed state to the deploying state. If all modules in the stream are successfully deployed, the stream state is ‘deployed’. If one or more module deployments failed, the stream state is failed; any modules that were successfully deployed are still running. Sending an undeploy command will stop all modules of the stream and return the stream to the undeployed state. For the individual modules that failed, we will be able to find out which ones failed. Not yet sure if we can try to redeploy just those parts of the stream that failed. See the [design doc|https://docs.google.com/a/gopivotal.com/document/d/1kWtoH_xEF1wMklzQ8AZaiuhBZWIlpCDi8G9_hAP8Fgc/edit#heading=h.2rk74f16ow4i] for more details. Story points for this issue are the total of all the story points for the subtasks.
20
293
XD-1273
01/28/2014 06:02:59
The use of labelled modules and taps needs more explanation
https://github.com/spring-projects/spring-xd/wiki/Taps mentions this but the explanation needs more elaboration and examples, e.g. mystream -> "http | flibble: transform --expression=payload.toUpperCase() | file" "tap:stream:mystream.flibble > transform --expression=payload.replaceAll('A','.') | log"
1
294
XD-1282
01/30/2014 06:41:13
Add caching to ModuleOptionsMetadataResolver
Will likely involve having the module identity (type+name) be part of the OptionsMetadata identity/cache key
5
295
XD-1296
02/10/2014 01:10:41
A few integration tests fail if JMX is enabled
If JMX is enabled, some of the integration tests fail. This is similar to what we see in XD-1295. One example of this case is the test classes that extend StreamTestSupport. In StreamTestSupport, the @BeforeClass has this line: moduleDeployer = containerContext.getBean(ModuleDeployer.class); When JMX is enabled, the IntegrationMBeanExporter creates a JdkDynamicProxy for the ModuleDeployer (since it is of type MessageHandler), and thereby the above line to get the bean by the implementing class type (ModuleDeployer) fails. There are a few other places where we refer to the implementing classes in getBean(). Looks like we need to fix those as well.
2
296
XD-1300
02/10/2014 19:01:06
Handling boolean type module option properties defaults in option metadata
There are a few boolean-type module option properties whose default values are specified in the module definitions rather than their corresponding ModuleOptionsMetadata. Also, when using boolean we need to have the module option use the primitive type boolean rather than the Boolean type. Currently, these are some of the module options that require this change: "initializeDatabase" in the filejdbc and hdfsjdbc job modules, the aggregator processor module, and the jdbc sink module; "restartable" in all the job modules; "deleteFiles" in the filejdbc and filepollhdfs job modules
3
297
XD-1301
02/11/2014 08:54:05
MBeans are not destroyed if stream is created and destroyed with no delay
Problem: the container that the stream was deployed to will not allow new streams to be deployed. Once the error occurs, the only solution is to terminate the XD Container and restart it. To reproduce, create a stream foo and destroy the stream, then create the stream foo again. This is best done programmatically; taking the same steps using the "shell" may not reproduce the problem, i.e. if you put a sleep of 1-2 seconds between the destroy and the next create, it works fine.
5
298
XD-1307
02/12/2014 03:53:20
Use HATEOAS Link templates
HATEOAS 0.9 introduced some support for templated links. This should be leveraged to properly handle e.g. /streams/{id} instead of using string concatenation (see the expansion sketch after this record).
5
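A minimal illustration of template expansion with Spring's UriTemplate in place of string concatenation (the host and port are illustrative):
{code:java}
import java.net.URI;

import org.springframework.web.util.UriTemplate;

public class LinkTemplateDemo {

    public static void main(String[] args) {
        // Expand the templated link rather than concatenating strings.
        UriTemplate template = new UriTemplate("http://localhost:9393/streams/{id}");
        URI uri = template.expand("mystream");
        System.out.println(uri); // http://localhost:9393/streams/mystream
    }
}
{code}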
299
XD-1309
02/12/2014 06:08:55
JSR303 validation of options interferes with dsl completion
When using a JSR303 annotated class for module options, the binding failures should be bypassed, as they interfere with completion proposals.
5
300
XD-1310
02/12/2014 07:56:20
Misleading error message when trying to restart a job exec
Disregard the missing date that is caused by another problem. Here is the setup: {noformat} xd:>job execution list Id Job Name Start Time Step Execution Count Status -- -------- -------------------------------- -------------------- --------- 13 foo Europe/Paris 0 STARTING 12 foo 2014-02-12 15:39:46 Europe/Paris 1 FAILED 11 foo 2014-02-12 15:39:29 Europe/Paris 1 COMPLETED 10 foo 2014-02-12 15:38:36 Europe/Paris 1 COMPLETED 9 foo 2014-02-12 15:38:21 Europe/Paris 1 COMPLETED 8 foo Europe/Paris 0 STARTING 7 foo 2014-02-12 15:25:41 Europe/Paris 1 COMPLETED 6 foo 2014-02-12 15:25:04 Europe/Paris 1 FAILED 5 foo 2014-02-12 15:14:32 Europe/Paris 1 FAILED 4 foo 2014-02-12 15:14:13 Europe/Paris 1 FAILED 3 foo 2014-02-12 15:13:54 Europe/Paris 1 FAILED 2 foo 2014-02-12 15:13:18 Europe/Paris 1 FAILED 1 foo 2014-02-12 15:12:58 Europe/Paris 1 FAILED 0 foo 2014-02-12 15:11:44 Europe/Paris 1 FAILED xd:>job execution restart --id 12 Command failed org.springframework.xd.rest.client.impl.SpringXDException: Job Execution 12 is already running. {noformat} while the server exception is a bit better: {noformat} Caused by: org.springframework.batch.core.repository.JobExecutionAlreadyRunningException: A job execution for this job is already running: JobInstance: id=11, version=0, Job=[foo] at org.springframework.batch.core.repository.support.SimpleJobRepository.createJobExecution(SimpleJobRepository.java:120) {noformat} I'd argue we should not speak in terms of execution ids if possible, but rather in terms of job names
1
301
XD-1311
02/12/2014 07:58:43
Job execution list should mention jobs that have been deleted
Create a job, execute it a couple of times, destroy it and then invoke job execution list. The job name column should mention that a job is defunct (even though a job with the same name could have been re-created in the interim)
3
302
XD-1312
02/12/2014 08:01:19
Job execution restart fails with NPE
Create a job, launch it but make it fail (e.g. filejdbc with a missing file). job execution list => it's there, as FAILED. Good. job execution restart <theid> ==> fails with NPE: {noformat} 16:59:42,160 ERROR http-nio-9393-exec-7 rest.RestControllerAdvice:191 - Caught exception while handling a request java.lang.NullPointerException at org.springframework.batch.core.job.AbstractJob.execute(AbstractJob.java:351) at org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:135) at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50) at org.springframework.batch.core.launch.support.SimpleJobLauncher.run(SimpleJobLauncher.java:128) at sun.reflect.GeneratedMethodAccessor157.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) at org.springframework.batch.core.configuration.annotation.SimpleBatchConfiguration$PassthruAdvice.invoke(SimpleBatchConfiguration.java:117) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207) at com.sun.proxy.$Proxy39.run(Unknown Source) at org.springframework.batch.admin.service.SimpleJobService.restart(SimpleJobService.java:179) at org.springframework.xd.dirt.plugins.job.DistributedJobService.restart(DistributedJobService.java:77) at org.springframework.xd.dirt.rest.BatchJobExecutionsController.restartJobExecution(BatchJobExecutionsController.java:146) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.springfram {noformat}
5
303
XD-1313
02/12/2014 08:02:49
Commands that start a job should return a representation of the JobExecution
See discussion at https://github.com/spring-projects/spring-xd/pull/572
0
304
XD-1314
02/12/2014 09:05:27
Create XD .zip distribution for YARN
Create XD .zip distribution for YARN: add an additional sub-project to the spring-xd repo for building the xd-YARN.zip. Link into the main build file. Produce a new artifact spring-xd-v-xyz-yarn.zip as part of the nightly CI process -- we will now have 2 artifacts, the main xd.zip distribution and xd-yarn.zip. Does not include any Hadoop distribution libraries. Does include spring-hadoop jars for Apache22 ‘unflavored’.
3
305
XD-1316
02/12/2014 11:51:29
UI: Fix E2E test warning
When running E2E tests the following warning may be observed: {code} Running "karma:e2e" (karma) task INFO [karma]: Karma v0.10.9 server started at http://localhost:7070/_karma_/ INFO [launcher]: Starting browser PhantomJS TypeError: Cannot read property 'verbose' of undefined at enableWebsocket (/Users/hillert/dev/git/spring-xd/spring-xd-ui/node_modules/grunt-connect-proxy/lib/utils.js:101:18) at Object.utils.proxyRequest [as handle] (/Users/hillert/dev/git/spring-xd/spring-xd-ui/node_modules/grunt-connect-proxy/lib/utils.js:109:5) at next (/Users/hillert/dev/git/spring-xd/spring-xd-ui/node_modules/grunt-contrib-connect/node_modules/connect/lib/proto.js:193:15) at Object.livereload [as handle] (/Users/hillert/dev/git/spring-xd/spring-xd-ui/node_modules/grunt-contrib-connect/node_modules/connect-livereload/index.js:147:5) at next (/Users/hillert/dev/git/spring-xd/spring-xd-ui/node_modules/grunt-contrib-connect/node_modules/connect/lib/proto.js:193:15) at Function.app.handle (/Users/hillert/dev/git/spring-xd/spring-xd-ui/node_modules/grunt-contrib-connect/node_modules/connect/lib/proto.js:201:3) at Server.app (/Users/hillert/dev/git/spring-xd/spring-xd-ui/node_modules/grunt-contrib-connect/node_modules/connect/lib/connect.js:65:37) at Server.EventEmitter.emit (events.js:98:17) at HTTPParser.parser.onIncoming (http.js:2108:12) at HTTPParser.parserOnHeadersComplete [as onHeadersComplete] (http.js:121:23) at Socket.socket.ondata (http.js:1966:22) at TCP.onread (net.js:525:27) {code}
2
306
XD-1319
02/13/2014 03:27:49
Allow mixins of ModuleOptionsMetadata
A lot of modules have similar options. Moreover, job modules often have options that belong to at least two domains (e.g. jdbc + hdfs). I think that by using FlattenedCompositeModuleOptionsMetadata, we could come up with a way to combine several options POJOs into one. Something like: {code:java} public class JdbcHdfsOptionsMetadata { @OptionsMixin private JdbcOptionsMetadata jdbc; @OptionsMixin private HdfsOptionsMetadata hdfs; } {code} This would expose e.g. "driverClass" as well as "rolloverSize" as top-level options. Values could actually be injected into the fields, so that e.g. custom validation could occur (default validation for the mixin class would occur by default).
5
307
XD-1320
02/13/2014 07:53:12
Make Batch Job Restarts Work with Distributed Nodes
Job restart fails with NPE. See PR for XD-1090: https://github.com/spring-projects/spring-xd/pull/572
0
308
XD-1321
02/13/2014 11:45:14
Add XD deployment for YARN
Add YARN-specific code based on Janne's prototyping. Add YARN Client and AppMaster implementations and startup config files. This includes shell scripts to deploy XD to YARN. Test working on the Apache 2.2 distribution. We can modify config files; everything should be possible to override by providing command-line args or env variables. ./xd-yarn-deploy --zipFile /tmp/spring-xd-yarn.zip --config /tmp/spring-xd-yarn.yml
8
309
XD-1322
02/13/2014 11:47:07
Add way to provide module config options for XD on YARN
There seems to be some intersection between the work for this issue and the rationalization of how module properties are handled. There will be changes to configuration/property management support such that each module (source, sink, etc.) will also be able to be overridden in spring-xd.yml (or wherever -Dspring.config.location points to). The HDFS sink module, for example, will have default values based on its OptionsMetadata and will be of the form <type>.<module>.<option> That means in the configuration for the hdfs.xml sink, there would be a config section such as {code:xml} <configuration> fs.default.name=${sink.hdfs.hd.fs} mapred.job.tracker=${sink.hdfs.hd.jt} yarn.resourcemanager.address=${sink.hdfs.hd.rm} mapreduce.framework.name=${sink.hdfs.mr.fw} </configuration> {code} With default values defined by a HdfsSinkOptionsMetadata class. The hdfs.xml module file would not contain any references to a properties file. A file specified by -Dspring.config.location could override the values in a config section such as sink: hdfs: hd.fs : hdfs://foobarhost:8020 hd.jt : 10.123.123.123:9000 etc.
5
310
XD-1326
02/14/2014 14:27:35
Provide xd-shell integration for deploying XD on YARN
Commands such as: yarn app list yarn deploy-xd --zipFile /tmp/myapp.zip --config /tmp/myconfig.yml
8
311
XD-1327
02/14/2014 18:02:33
Rabbit source module with outputType fails to deploy
To replicate the issue: Create stream: stream create rabbittest --definition "rabbit --queues=test --outputType=text/plain | log" Stacktrace thrown: 17:59:56,436 ERROR http-nio-9393-exec-3 rest.RestControllerAdvice:191 - Caught exception while handling a request java.lang.IllegalArgumentException: Module option named outputType is already present at org.springframework.xd.module.options.FlattenedCompositeModuleOptionsMetadata.<init>(FlattenedCompositeModuleOptionsMetadata.java:56) at org.springframework.xd.module.options.DelegatingModuleOptionsMetadataResolver.resolve(DelegatingModuleOptionsMetadataResolver.java:49) at org.springframework.xd.dirt.stream.XDStreamParser.parse(XDStreamParser.java:117) at org.springframework.xd.dirt.stream.AbstractDeployer.save(AbstractDeployer.java:73) at org.springframework.xd.dirt.rest.XDController.save(XDController.java:227) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:214) at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132) at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:749) at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:690) at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83) at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:945) at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:876) at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961) at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:863) at javax.servlet.http.HttpServlet.service(HttpServlet.java:647) at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837) at javax.servlet.http.HttpServlet.service(HttpServlet.java:728) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:305) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.springframework.boot.actuate.trace.WebRequestTraceFilter.doFilter(WebRequestTraceFilter.java:114) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.springframework.boot.actuate.autoconfigure.EndpointWebMvcAutoConfiguration$ApplicationContextFilterConfiguration$1.doFilterInternal(EndpointWebMvcAutoConfiguration.java:131) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:108) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at 
org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:77) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:108) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:88) at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:108) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.springframework.boot.actuate.autoconfigure.MetricFilterAutoConfiguration$MetricsFilter.doFilter(MetricFilterAutoConfiguration.java:97) at org.springframework.boot.actuate.autoconfigure.MetricFilterAutoConfiguration$MetricsFilter.doFilter(MetricFilterAutoConfiguration.java:82) at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:243) at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:210) at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:222) at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:123) at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:472) at org.apache.catalina.valves.RemoteIpValve.invoke(RemoteIpValve.java:680) at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:171) at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:99) at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:118) at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:407) at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1004) at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:589) at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1680) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722)
1
312
XD-1329
02/18/2014 10:00:57
Add a Kafka Source
This would use the Kafka Spring Integration Extension. We have a version of this working but had to modify the adapter code, as it's not currently compatible with Spring Integration 4. See INTEXT-97
8
313
XD-1330
02/18/2014 15:58:58
Enhance HadoopFileSystemTestSupport to obtain resource for a specific hadoop distro
It looks like the HadoopFileSystemTestSupport test rule by default runs against hadoop 1.2; we should add a way to run the hadoop-centric tests against a given hadoop distro. Currently, if the test is run against a version other than 1.2, the rule says: 15:47:34,469 ERROR main hadoop.HadoopFileSystemTestSupport:95 - HADOOP_FS IS NOT AVAILABLE, SKIPPING TESTS org.apache.hadoop.ipc.RemoteException: Server IPC version 9 cannot communicate with client version 4 at org.apache.hadoop.ipc.Client.call(Client.java:1113) at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:229) at com.sun.proxy.$Proxy8.getProtocolVersion(Unknown Source) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:601) at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:85) at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:62) at com.sun.proxy.$Proxy8.getProtocolVersion(Unknown Source) at org.apache.hadoop.ipc.RPC.checkVersion(RPC.java:422) at org.apache.hadoop.hdfs.DFSClient.createNamenode(DFSClient.java:183) at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:281) at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:245) at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:100) at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1446) at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:67) at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1464) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:263) at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:124) at org.springframework.xd.test.hadoop.HadoopFileSystemTestSupport.obtainResource(HadoopFileSystemTestSupport.java:49) at org.springframework.xd.test.AbstractExternalResourceTestSupport.apply(AbstractExternalResourceTestSupport.java:58) at org.junit.rules.RunRules.applyAll(RunRules.java:26) at org.junit.rules.RunRules.<init>(RunRules.java:15) at org.junit.runners.BlockJUnit4ClassRunner.withTestRules(BlockJUnit4ClassRunner.java:379) at org.junit.runners.BlockJUnit4ClassRunner.withRules(BlockJUnit4ClassRunner.java:340) at org.junit.runners.BlockJUnit4ClassRunner.methodBlock(BlockJUnit4ClassRunner.java:256) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70) at org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50) at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238) at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63) at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236) at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53) at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:229) at org.junit.runners.ParentRunner.run(ParentRunner.java:309) at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:50) at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:467) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:683) at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:390) at 
org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:197)
3
314
XD-1333
02/19/2014 23:58:15
Add config file fragment support to the XD Windows .bat scripts
Support for external configuration fragment files (by setting spring.config.location) in the XD startup scripts has not been added to the xd-admin, xd-container and xd-singlenode .bat scripts. Please refer: https://github.com/spring-projects/spring-xd/issues/582
2
315
XD-1336
02/20/2014 09:53:00
Allow easy integration with other types of message transports - remove enums for transport layers
If a third-party messaging solution wants to be the transport layer in Spring XD, they must currently fork the Spring XD code base and change the enums. Example: CommonDistributedOptions.ControlTransport currently limits to the following options (rabbit, redis). So if a third-party messaging system, like ZeroMQ, wanted to plug in, they would have to add to the enum. Here is another example where GemFire was used as the messaging system: https://github.com/charliemblack/spring-xd/blob/master/spring-xd-dirt/src/main/java/org/springframework/xd/dirt/server/options/CommonDistributedOptions.java#L38 All messaging enums should be removed in favor of an extensible model.
1
316
XD-1337
02/20/2014 10:04:44
Stream partitioning metadata should allow updating at runtime - dynamically / anytime
In a running system, sometimes the algorithm for partitioning the data might overload a given server with work. When that happens we might need to "rebalance" the partitioned work / data to achieve an even balance of stream throughput across servers in a given compute group. We can think of this dynamic rebalancing behavior as an extension of a failure use case. In the failure scenario we need to re-partition the stream to other servers in the group. We should allow third parties to plug in to help with this capability. As an example, GemFire will report the new partitioning meta-data when this type of failure / rebalance happens.
8