id | issuekey | created | title | description | storypoint |
---|---|---|---|---|---|
417 | XD-1629 | 04/23/2014 15:30:54 | RabbitMessageBus should prefix all created queues with a prefix in order to support HA | To configure Rabbit HA, a naming convention should be used to identify the queues that need to be mirrored. | 3 |
418 | XD-1630 | 04/24/2014 10:12:34 | Packaging of lib directory for shell contains many jars that are not used | Between M5 and M6 the size of the shell/lib directory went up by ~50 MB. Investigate and stop packaging jars that are not used. | 2 |
419 | XD-1632 | 04/24/2014 19:29:08 | Use unique queue names in shell tests | There seems to be some cross-talk among the shell integration tests. It looks like the same singlenode application might get shared among the test classes when they run in parallel. Using unique queue names across the tests seems to fix the issue for now. | 1 |
420 | XD-1635 | 04/25/2014 11:59:52 | Documentation: Hovering over some of the examples corrupts the text | If you mouse over any of the examples in the documentation (the grey boxes containing code, shell commands, etc.), a label for the type of code/example appears, typically in the upper right-hand corner, e.g. 'Ruby', 'Javascript', etc. 1) The labels that appear seem to be random and incorrect. Shell scripts show as 'Ruby' and 'Javascript'. 2) More importantly, on some of the examples the label appears in front of, and as part of, the example, corrupting it. To see this, hover your mouse over the two examples (grey boxes) here: http://docs.spring.io/spring-xd/docs/1.0.0.M6/reference/html/#_xd_shell_in_distributed_mode There may be more, but these are the ones I noticed. -Derek | 2 |
421 | XD-1636 | 04/25/2014 12:30:49 | servers.yaml's 'xd: -> transport: rabbit' overrides xd-singlenode's default of local transport | When working w/ SXD xd-singlenode, out of the box, it defaults to using all embedded components (transport, analytics, hsqldb, & zookeeper), which is easy and a great way to get going. This is also great for development. When I then started trying out the M6 distributed mode I set my transport to rabbit in servers.yaml (now that the --transport option is gone). Rabbit is my preferred transport here. I then went back to running the singlenode, for simplicity, and then got an exception saying that the singlenode couldn't contact RabbitMQ/AMQP (I was no longer running rabbit). I then had to add the '--transport local' flag back to xd-singlenode. Having the --transport option on xd-singlenode but not on xd-container is confusing. Also I would expect xd-singlenode to default to local transport unless I specify another option in --transport. -Derek | 1 |
422 | XD-1637 | 04/25/2014 12:39:55 | Re-enable JSHint during grunt build | JSHint should be enabled in the grunt build. There are a few minor issues that need to be fixed. | 1 |
423 | XD-1641 | 04/25/2014 18:16:58 | Upon a container departure, redeployment of batch job fails on an existing container | When there are multiple containers (A, B and C) and a batch job is deployed into one of the containers A. When the container A goes down, the admin server tries re-deploy the job module that was deployed in container A into other matching container. But, when the re-deployment happens, it tries to update the distributed job locator as if a new job is being deployed and following exception is thrown: 17:13:38,811 ERROR DeploymentsPathChildrenCache-0 cache.PathChildrenCache:550 - java.lang.RuntimeException: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'job': Post-processing of the FactoryBean's object failed; nested exception is org.springframework.xd.dirt.job.BatchJobAlreadyExistsException: Batch Job with the name myjob3 already exists at org.springframework.xd.dirt.server.ContainerRegistrar.deployJob(ContainerRegistrar.java:411) at org.springframework.xd.dirt.server.ContainerRegistrar.onChildAdded(ContainerRegistrar.java:355) at org.springframework.xd.dirt.server.ContainerRegistrar.access$8(ContainerRegistrar.java:349) at org.springframework.xd.dirt.server.ContainerRegistrar$DeploymentListener.childEvent(ContainerRegistrar.java:695) at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:494) at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:488) at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92) at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:253) at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83) at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:485) at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35) at org.apache.curator.framework.recipes.cache.PathChildrenCache$11.run(PathChildrenCache.java:755) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334) at java.util.concurrent.FutureTask.run(FutureTask.java:166) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:722) Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'job': Post-processing of the FactoryBean's object failed; nested exception is org.springframework.xd.dirt.job.BatchJobAlreadyExistsException: Batch Job with the name myjob3 already exists at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:167) at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.getObjectFromFactoryBean(FactoryBeanRegistrySupport.java:103) at org.springframework.beans.factory.support.AbstractBeanFactory.getObjectForBeanInstance(AbstractBeanFactory.java:1514) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:252) at 
org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:195) at org.springframework.beans.factory.support.DefaultListableBeanFactory.preInstantiateSingletons(DefaultListableBeanFactory.java:699) at org.springframework.context.support.AbstractApplicationContext.finishBeanFactoryInitialization(AbstractApplicationContext.java:760) at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:482) at org.springframework.boot.SpringApplication.refresh(SpringApplication.java:648) at org.springframework.boot.SpringApplication.run(SpringApplication.java:311) at org.springframework.boot.builder.SpringApplicationBuilder.run(SpringApplicationBuilder.java:130) at org.springframework.xd.module.core.SimpleModule.initialize(SimpleModule.java:241) at org.springframework.xd.dirt.module.ModuleDeployer.deploy(ModuleDeployer.java:186) at org.springframework.xd.dirt.module.ModuleDeployer.deployAndStore(ModuleDeployer.java:176) at org.springframework.xd.dirt.module.ModuleDeployer.deployAndStore(ModuleDeployer.java:166) at org.springframework.xd.dirt.server.ContainerRegistrar.deployModule(ContainerRegistrar.java:230) at org.springframework.xd.dirt.server.ContainerRegistrar.deployJob(ContainerRegistrar.java:399) ... 20 more Caused by: org.springframework.xd.dirt.job.BatchJobAlreadyExistsException: Batch Job with the name myjob3 already exists at org.springframework.xd.dirt.plugins.job.DistributedJobLocator.addJob(DistributedJobLocator.java:114) at org.springframework.xd.dirt.plugins.job.BatchJobRegistryBeanPostProcessor.postProcessAfterInitialization(BatchJobRegistryBeanPostProcessor.java:106) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.applyBeanPostProcessorsAfterInitialization(AbstractAutowireCapableBeanFactory.java:421) at org.springframework.beans.factory.support.AbstractAutowireCapableBeanFactory.postProcessObjectFromFactoryBean(AbstractAutowireCapableBeanFactory.java:1698) at org.springframework.beans.factory.support.FactoryBeanRegistrySupport.doGetObjectFromFactoryBean(FactoryBeanRegistrySupport.java:164) ... 36 more | 2 |
424 | XD-1642 | 04/28/2014 14:34:35 | Fail fast admin server if admin's embedded tomcat couldn't start | During admin server startup, if the embedded Tomcat fails to start, the admin server instance stays up and running. Since its Tomcat isn't running, it cannot handle any REST client requests. In this scenario, the admin server process itself needs to fail fast with a better error message. | 1 |
425 | XD-1650 | 04/29/2014 12:58:34 | Update HDFS sink to accept a partition strategy | Add configuration for a partition strategy to the HDFS sink to support writing files into subdirectories based on a partition key provided in a header or field of the stream's message data. When writing with the HDFS Store DataWriter, the partition key value should be passed in for the write operation. Partition configuration could be made available to the sink using a --format parameter that could then be used in XML config like: {code} expression="new java.text.SimpleDateFormat('${format}').format(${timestamp}) {code} similar to the time source (a hedged evaluation sketch follows the table). | 8 |
426 | XD-1651 | 04/29/2014 13:00:45 | Update HDFS sink to use unique id (GUID) as part of file name | The HDFS sink needs to have a unique identifier for the container id added as part of the file name. Part of the file name in the directory will be the container id (GUID) - like base-path/logfile-GUID-1.txt | 5 |
427 | XD-1653 | 04/29/2014 15:55:08 | Add More Sophisticated Retry Configuration to the Rabbit MessageBus | XD-1019 added simple (stateless) retry to the message bus. Use stateful retry and an {{AmqpRejectAndDontRequeueRecoverer}} enabling failed messages to be requeued on the broker until successful (perhaps because another instance can handle the message); also provides a mechanism to route failed messages to a dead-letter exchange. Requires setting the message id header in bus-generated messages. Also add profiles and properties for common retry/backoff policies. | 8 |
428 | XD-1654 | 04/30/2014 07:44:45 | Change twittersearch default outputType to be application/json | The current output type is a Java object - this raises issues with respect to consumers in other JVMs that do not have the Spring Social tweet object on the main container classpath. See https://jira.spring.io/browse/XD-1370 Will also create another issue to update twittersearch to generate the raw twitterstream output vs. the structure of the Spring Social tweet object | 1 |
429 | XD-1656 | 04/30/2014 12:54:56 | The type StubDatasetOperations must implement the inherited abstract method DatasetOperations.getDatasetDescriptor(Class<T>) | The StubDatasetOperations class needs to either be declared abstract or implement the inherited methods from DatasetOperations | 1 |
430 | XD-1663 | 05/02/2014 11:27:46 | Tap naming consistency for stream taps | Currently, when creating the taps for streams, the name of the pub/sub channel inside the message bus would be "tap:<name-of-the-stream>.<module-name>.<module-index>". For instance, the following stream with name "test": http | transform --expression=payload.toLowerCase() | file will have the exchanges 'topic.tap:test.http.0', 'topic.tap:test.transform.1' when using the rabbit message bus. Though the stream config parser takes care of translating what the user provides in the DSL (for example, tap:stream:test.transform.1 to the message bus exchange topic.tap:test.transform.1), it would be better to have this consistency inside the message bus channel name as well. Also, this would be in sync with how we name taps for jobs (tap:job:*). | 2 |
431 | XD-1667 | 05/03/2014 00:29:35 | Add Streams page to show job triggers | The streams page needs to be added to the UI, at least to show the job triggers that are created while scheduling XD jobs. | 2 |
432 | XD-1668 | 05/03/2014 00:33:45 | Modularize angular app modules based on the functionality | When adding the streams page to the UI (from XD-1667), it is necessary to modularize the angular app modules based on the functionality/components (job, stream, auth, etc.). As we expand into more components and use cases in the UI, this definitely makes it easier to concentrate on specific modules based on the functionality. | 5 |
433 | XD-1670 | 05/05/2014 13:15:13 | NPE when a container departs | When a container departs the cluster the admin will try to redeploy any modules that container was running. If the stream was *destroyed* and the container exited before it had the chance to clean up its deployments under {{/xd/deployments/modules}} (for example, with {{kill -9}}) the following NPE occurs: {noformat} java.lang.NullPointerException at org.springframework.xd.dirt.server.ContainerListener.loadStream(ContainerListener.java:347) at org.springframework.xd.dirt.server.ContainerListener.onChildLeft(ContainerListener.java:403) at org.springframework.xd.dirt.server.ContainerListener.childEvent(ContainerListener.java:158) at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:494) at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:488) at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92) at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293) at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83) at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:485) at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35) at org.apache.curator.framework.recipes.cache.PathChildrenCache$11.run(PathChildrenCache.java:755) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:724) {noformat} If the stream was *undeployed* the following stack appears: {noformat} 15:13:06,002 ERROR ContainersPathChildrenCache-0 cache.PathChildrenCache:550 - java.lang.RuntimeException: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /xd/deployments/streams/t0 at org.springframework.xd.dirt.server.ContainerListener.onChildLeft(ContainerListener.java:468) at org.springframework.xd.dirt.server.ContainerListener.childEvent(ContainerListener.java:159) at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:494) at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:488) at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92) at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293) at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83) at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:485) at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35) at org.apache.curator.framework.recipes.cache.PathChildrenCache$11.run(PathChildrenCache.java:755) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) at java.util.concurrent.FutureTask.run(FutureTask.java:266) at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:744) Caused by: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /xd/deployments/streams/t0 at org.apache.zookeeper.KeeperException.create(KeeperException.java:111) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.getData(ZooKeeper.java:1155) at org.apache.curator.framework.imps.GetDataBuilderImpl$4.call(GetDataBuilderImpl.java:302) at org.apache.curator.framework.imps.GetDataBuilderImpl$4.call(GetDataBuilderImpl.java:291) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107) at org.apache.curator.framework.imps.GetDataBuilderImpl.pathInForeground(GetDataBuilderImpl.java:287) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:279) at org.apache.curator.framework.imps.GetDataBuilderImpl.forPath(GetDataBuilderImpl.java:41) at org.springframework.xd.dirt.server.ContainerListener.loadStream(ContainerListener.java:358) at org.springframework.xd.dirt.server.ContainerListener.onChildLeft(ContainerListener.java:417) ... 16 more {noformat} In short, this logic makes the assumption that the stream is still present and deployed. It needs to take into account the fact that neither assumption can be made. | 2 |
434 | XD-1675 | 05/07/2014 15:40:19 | FilePollHdfs is not writing results to hdfs | XD Deployment Description: XD Cluster (1 Container) Environment: EC2 Type Of Test: Manual Test Test Failed On filepollhdfs (only test that was run) Build Used Built May 7, 10:29 UTC From the shell, attempted to create filepollhdfs however no results were written to hdfs (hadoop22). The commands executed were the following: job create myjob --definition "filepollhdfs --names=forename,surname,address" --deploy stream create mystream --definition "file --dir=67fc27a6-224d-4c67-a02a-40730bcf8906 --pattern='*.out' > queue:job:myjob" --deploy No warnings nor exceptions were displayed till I changed the log4j.logger.org.springframework to INFO and restarted the container. Then when I copied the sample file to the monitored directory the log reported: 21:30:07,605 INFO DeploymentsPathChildrenCache-0 module.ModuleDeployer:118 - deployed SimpleModule [name=file, type=source, group=mystream, index=0 @61612c7c] Exception in thread "inbound.job:myjob-redis:queue-inbound-channel-adapter1" org.springframework.messaging.core.DestinationResolutionException: failed to look up MessageChannel with name 'errorChannel' in the BeanFactory (and there is no HeaderChannelRegistry present). at org.springframework.integration.support.channel.BeanFactoryChannelResolver.resolveDestination(BeanFactoryChannelResolver.java:108) at org.springframework.integration.support.channel.BeanFactoryChannelResolver.resolveDestination(BeanFactoryChannelResolver.java:44) at org.springframework.integration.channel.MessagePublishingErrorHandler.resolveErrorChannel(MessagePublishingErrorHandler.java:111) at org.springframework.integration.channel.MessagePublishingErrorHandler.handleError(MessagePublishingErrorHandler.java:78) at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:55) at java.lang.Thread.run(Thread.java:724) Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'errorChannel' is defined at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanDefinition(DefaultListableBeanFactory.java:641) at org.springframework.beans.factory.support.AbstractBeanFactory.getMergedLocalBeanDefinition(AbstractBeanFactory.java:1159) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:282) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:273) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) at org.springframework.integration.support.channel.BeanFactoryChannelResolver.resolveDestination(BeanFactoryChannelResolver.java:99) When using the attached sample file, you need to rename the file to try2.out. | 5 |
435 | XD-1677 | 05/08/2014 09:07:23 | Add "log-full-message" Property to the Log Sink | Allows looking at message headers without turning on debugging. | 1 |
436 | XD-1683 | 05/09/2014 15:51:28 | syslog-tcp throws exception when receiving syslog data | XD Deployment Description XD Cluster (1 Container) Environment EC2 Type Of Test Manual test via shell Test Failed On syslog-tcp (only test that was run) Build Used Built May 7, 10:29 UTC [Setting up the Environment] * Used the wiki instructions to setup the syslog on the ec2 instance. * Deploy the stream below: stream create mystream --definition "syslog-tcp | file --binary=true --mode=REPLACE" --deploy * On the EC2 Instance execute the line below: logger -p local3.info -t TESTING "Test Syslog Message" [What occurred] Stream fails to process inbound syslog information and throws the exception below: Exception in thread "inbound.mystream.0-redis:queue-inbound-channel-adapter17" org.springframework.messaging.core.DestinationResolutionException: failed to look up MessageChannel with name 'errorChannel' in the BeanFactory (and there is no HeaderChannelRegistry present). at org.springframework.integration.support.channel.BeanFactoryChannelResolver.resolveDestination(BeanFactoryChannelResolver.java:108) at org.springframework.integration.support.channel.BeanFactoryChannelResolver.resolveDestination(BeanFactoryChannelResolver.java:44) at org.springframework.integration.channel.MessagePublishingErrorHandler.resolveErrorChannel(MessagePublishingErrorHandler.java:111) at org.springframework.integration.channel.MessagePublishingErrorHandler.handleError(MessagePublishingErrorHandler.java:78) at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:55) at java.lang.Thread.run(Thread.java:724) Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'errorChannel' is defined at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanDefinition(DefaultListableBeanFactory.java:641) at org.springframework.beans.factory.support.AbstractBeanFactory.getMergedLocalBeanDefinition(AbstractBeanFactory.java:1159) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:282) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:273) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) at org.springframework.integration.support.channel.BeanFactoryChannelResolver.resolveDestination(BeanFactoryChannelResolver.java:99) ... 5 more | 5 |
437 | XD-1686 | 05/12/2014 07:50:02 | Pluralization of admin nodes leadership selector group path (/xd/admin) | Currently, the admin nodes that participate in the leadership election are grouped under /xd/admin. Since there are multiple lock nodes that correspond to all the admin servers that participate in leadership election, we can pluralize this node name to /xd/admins. | 1 |
438 | XD-1695 | 05/12/2014 10:35:57 | Research how to secure Admin's REST endpoints | As a user, I'd like to have the option to provide security configurations so that I can access REST endpoints in a secured manner. Ideally, all the listed [REST|https://github.com/spring-projects/spring-xd/wiki/REST-API#xd-resources] endpoints need to be wrapped within a security layer. *Scope of this spike:* * Research Spring Security and Spring Boot and the OOTB features * Design considerations and approach for XD * Developer experience ** How will users configure security credentials? ** How will the DSL shell be handled? ** How will the Admin UI be handled? | 8 |
439 | XD-1701 | 05/14/2014 05:49:33 | hdfs sink loads Codecs class during 'module info --name sink:hdfs' command | The hdfs sink metadata causes loading of the org.springframework.data.hadoop.store.codec.Codecs class during the 'module info --name sink:hdfs' command, since the type is a specific Spring Hadoop class: options.codec.description = compression codec alias name options.codec.type = org.springframework.data.hadoop.store.codec.Codecs options.codec.default = Don't think we want to tie the sink module to specific Spring Hadoop classes during runtime of the admin; we can't be sure that the admin has Hadoop classes on the classpath in all environments, and there is no way of specifying the Hadoop distro for the admin. Wouldn't it be better to have this option as a String that is passed in to the module's context, which could then load the class? | 3 |
440 | XD-1704 | 05/14/2014 07:36:18 | Create doc section about quotes handling | Document the different "onion layers" that come into play with regard to quoting and escaping (shell, xd-parser, SpEL expressions in some cases) and provide practical examples for common scenarios | 5 |
441 | XD-1705 | 05/14/2014 08:24:19 | Add defaultYarnClasspath entry for phd20, cdh5 and hdp21 | Each Hadoop distro uses different settings for "yarn.application.classpath" and we should provide some starting points for the distros we support running XD on YARN for. We should add a commented out stub "defaultYarnClasspath" entry for phd20, cdh5 and hdp21 to replace the one for hadoop22 when someone deploys on these distros. | 3 |
442 | XD-1707 | 05/14/2014 08:55:46 | The Dynamic Router example in the docs throws an exception with Rabbit Transport | The example in the M6 documentation for the Dynamic Router (here: http://docs.spring.io/spring-xd/docs/1.0.0.M6/reference/html/#dynamic-router) for the SpEL-Based Routing throws an exception when processing the message (from the HTTP post) saying "No bean named 'queue:foo' is defined", when using RabbitMQ as the transport. I do not know a workaround. Steps to reproduce: 1) Run RabbitMQ locally 2) Run xd-singlenode --transport rabbit 3) xd:>stream create f --definition "queue:foo > transform --expression=payload+'-foo' | log" --deploy xd:>stream create b --definition "queue:bar > transform --expression=payload+'-bar' | log" --deploy xd:>stream create r --definition "http | router --expression=payload.contains('a')?'queue:foo':'queue:bar'" --deploy 4) xd:>http post --data "a" 5) This should give a stacktrace: Caused by: org.springframework.beans.factory.NoSuchBeanDefinitionException: No bean named 'queue:foo' is defined at org.springframework.beans.factory.support.DefaultListableBeanFactory.getBeanDefinition(DefaultListableBeanFactory.java:641) at org.springframework.beans.factory.support.AbstractBeanFactory.getMergedLocalBeanDefinition(AbstractBeanFactory.java:1159) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:282) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:273) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) at org.springframework.beans.factory.support.AbstractBeanFactory.doGetBean(AbstractBeanFactory.java:273) at org.springframework.beans.factory.support.AbstractBeanFactory.getBean(AbstractBeanFactory.java:200) at org.springframework.integration.support.channel.BeanFactoryChannelResolver.resolveDestination(BeanFactoryChannelResolver.java:99) ... 83 more | 1 |
443 | XD-1709 | 05/14/2014 13:22:08 | Handling JobExecution stop action if the JobExecution is COMPLETED | Currently, the flag "stoppable" on JobExecutionInfoResource is used to determine whether the jobExecution can be stopped. Since this flag is set to true even when the JobExecution status is COMPLETED, a completed jobExecution still reports that it can be stopped. | 1 |
444 | XD-1710 | 05/14/2014 14:33:45 | ProcessorTest.testfailedSink needs to use http as its test source | Also check the JMX output to see that the filter rejected the entry. | 5 |
445 | XD-1712 | 05/15/2014 06:12:42 | StreamUtil Cleanup | Update StreamUtils based on Code Review comments. | 3 |
446 | XD-1715 | 05/15/2014 11:35:53 | Create documentation section for the shell | Create a new section in the docs regarding shell usage, in particular how to represent single and double quotes. Include some discussion of basic commands to manipulate streams and jobs and to list modules. Describe how to pass in a file that can be executed when the shell starts up. Also point to the spring-shell ref docs for extensibility in terms of adding custom commands. | 3 |
447 | XD-1716 | 05/15/2014 11:40:28 | Document that modules can reference property values in servers.yml | Modules can use property values in servers.yml, which is very handy for keeping batch and hdfs functionality working without duplicating config values across servers.yml and modules.yml (or individual modules). The configuration section should highlight the common cases where this occurs (batch, hdfs, rabbitmq/mqtt), where using the server config values as defaults is useful, and note that they can still be overridden. | 1 |
448 | XD-1718 | 05/15/2014 18:38:47 | Twitter Search test uses case sensitive search when it should be case insensitive. | TwitterSearch does a case-insensitive search. Tests need to do a case-insensitive check for the keywords in the search result. | 3 |
449 | XD-1719 | 05/15/2014 18:47:25 | ZooKeeper Job deployments path state is not updated after successful deployment | After successful job deployment, the Job deployments path in ZK doesn't get updated with the data {"state": "deployed"}. Though this data is not used by the deployed instance repository (org.springframework.xd.dirt.stream.zookeeper.ZooKeeperJobRepository) to check the deployment status, it may be better to have this state updated, as is done for the stream deployment path. | 1 |
450 | XD-1723 | 05/16/2014 10:03:29 | '--type=' not supported by module delete as shown in documentation examples | In the Module Composition example here: http://docs.spring.io/spring-xd/docs/1.0.0.M6/reference/html/#composing-modules one of the examples is "module delete --name foo --type sink", which fails as the '--type' argument is not supported by the CLI. There are 3 other references to the '--type' argument in the documentation which may not be supported by the CLI anymore. | 1 |
451 | XD-1724 | 05/16/2014 10:07:16 | CLI error when not specifying module type in module commands is cryptic and not helpful | All of the CLI module commands that take a module name (e.g., 'module display source:mqtt') require that you preface the name with the module type. If you forget to do this, e.g., 'module display mqtt', you get a fairly cryptic exception which can confuse end users (a hedged parsing sketch follows the table). The exception is: java.lang.StringIndexOutOfBoundsException: Failed to convert 'mqtt' to type QualifiedModuleName for option 'name,' String index out of range: -1 | 2 |
452 | XD-1728 | 05/17/2014 13:24:31 | Add Support for Bold/Strong Fonts | Hitting this issue in Chrome: http://stackoverflow.com/questions/22891611/google-font-varela-round-doesnt-support-font-weight-in-chrome Looks like Chrome has some issues with making text bold if the font does not explicitly support it. | 2 |
453 | XD-1733 | 05/19/2014 11:50:07 | Investigate fall through of server.yml values when running in YARN | We don't support using @Configuration for modules ATM. The current code was committed at the same time as improvements to handling module configuration. We should switch reactor-ip.xml to include all bean definitions and remove references to @Configuration classes, or see how to add support for @Configuration. Another short-term hack is to put the prefix 'sink.reactor-ip' in all @Value annotations used in NetServerInboundChannelAdapterConfiguration. | 3 |
454 | XD-1735 | 05/19/2014 20:01:14 | FileJdbcTest & JdbcHdfsTest failing | JdbcHdfsTest and FileJdbcTest work for singlenode but not for admin & container on the same machine. | 5 |
455 | XD-1739 | 05/20/2014 10:27:56 | Container reconnection to ZK fails intermittently | As reported by Matt Stine: After closing and reopening a laptop, the following stack trace appears in the container log: {noformat} 00:47:28,226 INFO main-EventThread state.ConnectionStateManager:194 - State change: RECONNECTED 00:47:28,226 INFO ConnectionStateManager-0 zookeeper.ZooKeeperConnection:255 - >>> Curator connected event: RECONNECTED 00:47:28,322 ERROR ConnectionStateManager-0 listen.ListenerContainer:96 - Listener (org.springframework.xd.dirt.zookeeper.ZooKeeperConnection$DelegatingConnectionStateListener@6abf4158) threw an exception java.lang.RuntimeException: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists for /xd/containers/5a8deb7b-fd93-42a7-a393-2f15023e007a at org.springframework.xd.dirt.server.ContainerRegistrar.registerWithZooKeeper(ContainerRegistrar.java:301) at org.springframework.xd.dirt.server.ContainerRegistrar.access$100(ContainerRegistrar.java:93) at org.springframework.xd.dirt.server.ContainerRegistrar$ContainerAttributesRegisteringZooKeeperConnectionListener.onConnect(ContainerRegistrar.java:316) at org.springframework.xd.dirt.zookeeper.ZooKeeperConnection$DelegatingConnectionStateListener.stateChanged(ZooKeeperConnection.java:257) at org.apache.curator.framework.state.ConnectionStateManager$2.apply(ConnectionStateManager.java:222) at org.apache.curator.framework.state.ConnectionStateManager$2.apply(ConnectionStateManager.java:218) at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92) at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293) at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83) at org.apache.curator.framework.state.ConnectionStateManager.processEvents(ConnectionStateManager.java:215) at org.apache.curator.framework.state.ConnectionStateManager.access$000(ConnectionStateManager.java:42) at org.apache.curator.framework.state.ConnectionStateManager$1.call(ConnectionStateManager.java:110) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) Caused by: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists for /xd/containers/5a8deb7b-fd93-42a7-a393-2f15023e007a at org.springframework.xd.dirt.container.store.ZooKeeperContainerAttributesRepository.save(ZooKeeperContainerAttributesRepository.java:75) at org.springframework.xd.dirt.container.store.ZooKeeperContainerAttributesRepository.save(ZooKeeperContainerAttributesRepository.java:42) at org.springframework.xd.dirt.server.ContainerRegistrar.registerWithZooKeeper(ContainerRegistrar.java:295) ... 
15 more Caused by: org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists for /xd/containers/5a8deb7b-fd93-42a7-a393-2f15023e007a at org.apache.zookeeper.KeeperException.create(KeeperException.java:119) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783) at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:676) at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:660) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107) at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:656) at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:441) at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:431) at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:44) at org.springframework.xd.dirt.container.store.ZooKeeperContainerAttributesRepository.save(ZooKeeperContainerAttributesRepository.java:69) ... 17 more {noformat} This can occur if ZK does not remove the ephemeral node before the container creates a new one. This can be fixed in the following ways: * Remove the existing ephemeral node if it already exists * Register containers with a new UUID upon every new connection For now I'll implement the first solution. | 2 |
456 | XD-1740 | 05/20/2014 11:17:28 | ZooKeeper Admin server node data to have admin server host address | It would be useful to store the admin server IP address in the ZooKeeper leadership group node (/xd/admins) to identify the admin server and its admin port. | 1 |
457 | XD-1741 | 05/21/2014 08:47:04 | Register StringToByteArrayMessageConverter | The converter was not configured; therefore String to byte[] conversion for --outputType application/octet-stream fails for a String payload. | 1 |
458 | XD-1742 | 05/21/2014 10:37:23 | Remove toStringTransformer from tcp Source; Add Binary Support to the http Source | The TCP source unconditionally converts to String. This prevents binary transfers. Remove the transformer; if the user wants a String, (s)he can use {{tcp --outputType=text/plain;charset=UTF-8}} (assuming the byte stream has valid UTF-8 encoding). Another option would be to add a {{--binary}} option, but since conversion can already handle it, it's probably better to use that. On the other hand, a {{--binary}} option would enable backwards compatibility. The http source also unconditionally converts to String. | 1 |
459 | XD-1745 | 05/21/2014 15:09:38 | Support for hadoop name node HA configuration | Hadoop supports namenode HA with two name nodes running, one active and the other in standby. If the active name node fails, the standby name node has all the data readily available and can start serving requests. In this configuration the name node URL is no longer a host:port URL but a logical name that resolves to the active name node at runtime. This is to ensure a Spring XD stream can seamlessly handle a name node failure, for instance when writing to an hdfs sink (a hedged client-configuration sketch follows the table). | 5 |
460 | XD-1748 | 05/22/2014 10:03:53 | Update to Spring Integration 4.0.1 | Add messages store optimization to the `hdfs-dataset` | 1 |
461 | XD-1750 | 05/22/2014 17:28:32 | Exception handling at Module info command | When the module name is not prefixed with the appropriate module type, the module info command throws a StringIndexOutOfBoundsException: xd:>module info file java.lang.StringIndexOutOfBoundsException: Failed to convert 'file' to type QualifiedModuleName for option 'name,' String index out of range: -1 | 2 |
462 | XD-1751 | 05/22/2014 17:33:43 | Modules that use tomcat connection pool need to expose configurations | The filejdbc, hdfsjdbc, jdbchdfs & jdbc modules each use a Tomcat connection pool. At this time none of the configuration options offered by the Tomcat connection pool are available unless the user adds them to the appropriate module XML file. We need to allow the user to configure them via yml, property files and environment variables. | 8 |
463 | XD-1756 | 05/23/2014 12:20:25 | Update spring-data-hadoop version to 2.0.0.RC4 | Update spring-data-hadoop version to 2.0.0.RC4 and make necessary changes to the YARN configuration. | 3 |
464 | XD-1757 | 05/23/2014 12:33:54 | Resolve runtime module option properties using module metadata | Since the module metadata properties are resolved at runtime (when the module gets deployed), we can resolve the module option values that have already been resolved there. For example, currently the "runtime modules" command for the "log" module shows this: runtime modules Module Container Id Options ---------------- ------------------------------------ -------------------------------------------------------- s1.source.http-0 633f0fb1-5396-4bc0-8f1e-c9d5104e0ea7 {port=9000} s1.sink.log-1 633f0fb1-5396-4bc0-8f1e-c9d5104e0ea7 {name=${xd.stream.name}, expression=payload, level=INFO} In this case, we can resolve the module option "name" from the module metadata. | 2 |
465 | XD-1758 | 05/26/2014 08:08:37 | JMS Source (ActiveMQ) failing to use jmsUrl environment variable | Deployed on: SingleNode Ec2, SingleNode Mac SHA: 942c7868e3e0d0cf7730b536170438a0291f5cab [Description] The JMS Source (ActiveMQ) tried to access a broker on localhost. The current deployment uses the following to set the JMS broker: * export amq_url=tcp://ec2-54-221-32-82.compute-1.amazonaws.com:61616 [Analysis] After reviewing the configuration of jms-activemq-infrastructure-context.xml, it was noted that the brokerUrl environment variable has been changed from amq.url to amqUrl, while jms-activemq.properties has not been changed (still amq.url). After setting the following, the test still failed: * export amqUrl=tcp://ec2-54-221-32-82.compute-1.amazonaws.com:61616 After going into jms-activemq-infrastructure-context.xml and replacing amqUrl with amq.url, the jms source (activemq) returned to normal operation. [Incident] Acceptance tests on Saturday morning's build reported that the JMS Source failed. | 2 |
466 | XD-1760 | 05/27/2014 12:46:33 | Support in-memory transport for co-located modules | We are looking to speed up the message passing from source to sink and wondering if we could use an in-memory transport whenever we know that the source and sink modules are co-located on the same container. Currently we do not see a straightforward way of doing it. Option 1: Create a composite module and let users deploy a composite module by itself, or in other words deploy a stream with one module. Option 2: Let users define the transport as in-memory when defining a stream. This could be used along with the deployment manifest feature enforcing co-location of a source and sink module, with in-memory transport. cc @adenissov | 8 |
467 | XD-1765 | 05/27/2014 13:40:44 | Update documentation to list supported Hadoop distributions | After spring hadoop 2.0 RC4 update. | 1 |
468 | XD-1766 | 05/27/2014 14:21:06 | Failing tcp to file in script tests | build 22-May-2014 08:45:04 Creating stream tcptofile with definition 'tcp+--port%3D21234+--socketTimeout%3D2000+%7C+file+--dir%3D%2Ftmp%2Fxdtest%2Fbasic' ... build 22-May-2014 08:45:04 {"name":"tcptofile","deployed":null,"definition":"tcp --port=21234 --socketTimeout=2000 | file --dir=/tmp/xdtest/basic","links":[{"rel":"self","href":"http://127.0.0.1:9393/streams/tcptofile"}]} build 22-May-2014 08:45:04 build 22-May-2014 08:45:11 Destroying stream tcptofile ... build 22-May-2014 08:45:11 build 22-May-2014 08:45:11 build 22-May-2014 08:45:11 Expected blahblah does not match actual value (98,108,97,104,98,108,97,104) simple 22-May-2014 08:45:11 Failing task since return code of [/bin/sh /tmp/XD-SCRIPTS-RS-513-ScriptBuildTask-7280766559152712153.sh] was 1 while expected 0 simple 22-May-2014 08:45:11 Finished task 'Run basic_stream_tests' See https://build.spring.io/download/XD-SCRIPTS-RS/build_logs/XD-SCRIPTS-RS-513.log | 2 |
469 | XD-1767 | 05/27/2014 23:43:56 | JobExecution restart action should depend on job deployment status | At the JobExecution page, if the job execution is failed and restartable, then we should enable the "restart" action only if the job is deployed. Please see https://github.com/spring-projects/spring-xd/pull/884 for the discussion related to this. | 3 |
470 | XD-1768 | 05/28/2014 00:25:51 | User should be able to specify deploy properties for Jobs | When clicking deploy from the job definitions page, user should be able to specify the deployment manifest (module count, module criteria etc.,) | 3 |
471 | XD-1769 | 05/28/2014 00:26:34 | User should be able to provide job deployment properties | At the job definitions page, user should be able to provide the job deployment manifest (module count, criteria etc.,) | 3 |
472 | XD-1770 | 05/28/2014 09:13:29 | Handle NPE while deploying stream module at the Container | When trying to deploy a stream module, the ContainerRegistrar throws NPE if the deployment loader couldn't load a non-null stream based on the stream name. 07:10:29,902 ERROR DeploymentsPathChildrenCache-0 server.ContainerRegistrar:450 - Exception deploying module java.lang.NullPointerException at org.springframework.xd.dirt.server.ContainerRegistrar.deployStreamModule(ContainerRegistrar.java:549) at org.springframework.xd.dirt.server.ContainerRegistrar.onChildAdded(ContainerRegistrar.java:436) at org.springframework.xd.dirt.server.ContainerRegistrar.access$800(ContainerRegistrar.java:96) at org.springframework.xd.dirt.server.ContainerRegistrar$DeploymentListener.childEvent(ContainerRegistrar.java:803) at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:494) at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:488) at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92) at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293) at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83) at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:485) at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35) at org.apache.curator.framework.recipes.cache.PathChildrenCache$11.run(PathChildrenCache.java:755) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) | 1 |
473 | XD-1771 | 05/28/2014 10:31:07 | Update twitterSearchTest to handle the latest release of twitterSearch | The changes to twitterSearch mean that it will send multiple messages during the test. To support these changes: 1) Remove assertReceived, since the number of messages is indeterminate. 2) Change the file sink that captures the results to append mode, because each message would otherwise overwrite the previous message's result. | 5 |
474 | XD-1774 | 05/28/2014 22:42:59 | UI Automatically close notification messages | * Automatically close notification messages * Polish UI | 3 |
475 | XD-1777 | 05/29/2014 09:48:22 | Restore deployment properties for orphaned modules | As part of XD-1338 we modified how module deployment works. Now module deployment requests include deployment properties as the data for the ZooKeeper node. This allows us to reuse those properties when a container exits the cluster and the module is redeployed to another container. However, if there are no other containers to handle the deployment, the module deployment node is erased, along with the properties. This means no module will ever handle the partition that module was responsible for. This condition needs to be handled so that partitioned streams continue to function in cases where the cluster temporarily doesn't have enough containers to support the stream. | 20 |
476 | XD-1778 | 05/29/2014 11:38:29 | Check job "restartable" flag for JobExecution restart action | job create bogus --definition "jdbchdfs --sql='select * from bogus' --restartable=false" job deploy bogus job launch bogus http://localhost:9393/admin-ui/#/jobs/executions click "Restart Job Execution" on the failed job execution get message "Job was relaunched" container log has: 12:36:27,231 ERROR task-scheduler-10 handler.LoggingHandler:145 - org.springframework.messaging.MessageHandlingException: org.springframework.batch.core.repository.JobRestartException: JobInstance already exists and is not restartable | 2 |
477 | XD-1786 | 05/30/2014 17:14:47 | Support Partitioning/Bus Properties in the RedisMessageBus | PR: https://github.com/spring-projects/spring-xd/pull/926 | 5 |
478 | XD-1791 | 06/01/2014 06:59:50 | New job that executes a Spark job | Create an OOTB batch job that executes a job on Spark as a tasklet. It could be something along these lines: job create yarnJob --definition "sparkjob --master=spark://localhost:7077 --class=SimpleApp" (a hedged tasklet sketch follows the table) | 5 |
479 | XD-1805 | 06/04/2014 06:41:53 | Support the ability to create module definitions in Groovy | XML is currently required for module definitions. XD should also support Java @Config and Groovy bean definitions and potentially, SI DSLs. | 8 |
480 | XD-1812 | 06/05/2014 13:36:24 | Support Bus Producer Properties for Dynamic Producers | Pass module properties from stream plugin to {{MessageBusAwareChannelResolver}}. Disallow partitioning properties. | 2 |
481 | XD-1817 | 06/06/2014 09:58:51 | ContainerListener to redeploy modules based on stream order. | When redeploying in the case of a container failure, the modules are currently redeployed in a random order. The list of modules in the failed container needs to be sorted based on their position in a given stream and then redeployed. | 5 |
482 | XD-1823 | 06/06/2014 13:03:06 | Investigate need for UI Pagination | This issue could be more involved. Proper pagination may not be implemented correctly by the REST controller (making the respective service call). This would also necessitate some form of improved state management for the UI. E.g. * User is on page 5 of the listing of Job Executions * User views details * User presses the back-button (on the screen) * The listing of Job Executions *should* still be on page 5 | 8 |
483 | XD-1831 | 06/09/2014 11:11:43 | Mask Database Passwords in REST Controllers and Admin UI | When deploying a batch job, the UI displays the database password found in server.yml in plain text to the user. At the very least, this should be displayed in a password field so it is masked, and it should also be masked in the resulting definition at the bottom of the page. Ideally, we wouldn't provide the password on that page at all and only accept overriding options (if the user wants a password other than the configured one, enter it…otherwise, we'll use what we have). I'm finding that this occurs in other places as well. A full pass through the UI should be done to mask out passwords (or eliminate their display altogether). | 2 |
484 | XD-1838 | 06/11/2014 19:01:45 | FileSourceTest needs to apply label to source and sink | * Currently the FileSource acceptance tests are failing ** This is because the sink that checks the result for the file source test is a file sink. Both use the "file" token, thus causing a failure * SimpleFileSource and SimpleFileSink need to support a label method. * Update testFileSource to use the labels. | 3 |
485 | XD-1839 | 06/12/2014 07:11:17 | Do not allow the use of named channels in composed modules | This needs closer inspection, but here are some things that currently do not work, either at the parser level, or at actual deployment time: {noformat} xd:>module compose foo --definition "queue:bar > filter" Command failed org.springframework.xd.rest.client.impl.SpringXDException: Could not find module with name 'filter' and type 'sink' xd:>module compose foolog --definition "queue:foo > log"
Successfully created module 'foolog' with type sink ==> should fail (not a module, but a full stream) xd:>module compose foo --definition "queue:bar > filter | transform" Successfully created module 'foo' with type processor ==> should be source {noformat} | 8 |
486 | XD-1840 | 06/12/2014 09:43:40 | Document and review REST API | The REST API needs to be finalized and documented for the GA release. The API is to be reviewed by REST experts | 8 |
487 | XD-1850 | 06/13/2014 13:52:16 | IllegalStateException when deploying orphaned stream modules upon a matching container arrival | Upon a matching container arrival, if there are orphaned stream modules to be deployed, then following exception is thrown: java.lang.IllegalStateException: Container missing at org.springframework.util.Assert.state(Assert.java:385) at org.springframework.xd.dirt.core.StreamDeploymentsPath.hasDeploymentInfo(StreamDeploymentsPath.java:275) at org.springframework.xd.dirt.core.StreamDeploymentsPath.build(StreamDeploymentsPath.java:233) at org.springframework.xd.dirt.server.ContainerListener.getContainersForStreamModule(ContainerListener.java:337) at org.springframework.xd.dirt.server.ContainerListener.redeployStreams(ContainerListener.java:278) at org.springframework.xd.dirt.server.ContainerListener.onChildAdded(ContainerListener.java:186) at org.springframework.xd.dirt.server.ContainerListener.childEvent(ContainerListener.java:155) | 3 |
488 | XD-1851 | 06/13/2014 14:36:10 | Introduce cache to ZooKeeperContainerRepository | Add Cache implementation for ZooKeeperContainerRepository | 5 |
489 | XD-1854 | 06/17/2014 07:33:48 | Remove Hadoop v1 support | Going forward it seems that providing Hadoop v1 will be of lesser importance and we might as well drop it now. SHDP 2.1 will also drop any v1 support. Remove support for: - hadoop12 - Apache Hadoop 1.2.1 - cdh4 - Cloudera CDH 4.6.0 - hdp13 - Hortonworks Data Platform 1.3 Keep: - hadoop22 - Apache Hadoop 2.2.0 (default) - phd1 - Pivotal HD 1.1 - phd20 - Pivotal HD 2.0 - cdh5 - Cloudera CDH 5.0.0 - hdp21 - Hortonworks Data Platform 2.1 This should make configuration and documentation easier too. Not to mention testing. This affects startup scripts and the shell plus the build script. | 5 |
490 | XD-1856 | 06/17/2014 08:39:10 | Add option to specify fsUri to hdfs sinks | We should have an --fsUri parameter for hdfs and hdfs-dataset sinks so we can write to different file systems (hdfs, webhdfs) | 5 |
491 | XD-1857 | 06/17/2014 09:08:43 | Can't use webhdfs with hdfs sink | When using spring.hadoop.fsUri set to webhdfs://localhost/ I'm getting an error: java.lang.NoClassDefFoundError: javax/ws/rs/core/MediaType including the following in xd/lib seems to fix this: - jersey-core-1.9.jar - jersey-server-1.9.jar | 3 |
492 | XD-1860 | 06/17/2014 12:12:18 | Support for configuring more than one broker in rabbit source | The Spring XD rabbit source supports these options: http://docs.spring.io/spring-xd/docs/1.0.0.BUILD-SNAPSHOT/reference/html/#rabbit However, if there are multiple brokers available for a client to connect to, then there is no way to configure that when creating a stream. I believe there is support for this already in the rabbitmq client (the addresses field, if I remember right from the meeting), but it needs to be exposed as one of the options in defining a stream with the rabbitmq source. This way, if one of the brokers dies, the client can automatically switch to one of the other configured brokers and provide high availability on the client side. | 0 |
493 | XD-1861 | 06/17/2014 18:17:52 | Fix XD config initializer for ZK connection string | Spring Boot 1.1.1 has the following change: https://github.com/spring-projects/spring-boot/commit/b75578d99c8d435e1f8bf18d0dbb3a2ddf56fdc4 where an external property source's precedence is re-ordered to come after the application configuration properties. This change affects the Spring XD config initializer, which expects an external "zk-properties" property source to always take precedence over the application configuration properties. | 3 |
494 | XD-1863 | 06/19/2014 10:28:54 | Create way to deploy custom modules for XD on YARN | Need a way for end-user to package and add custom modules/scripts when deploying XD on YARN. Currently we have a zip file containing all code including modules. It's not convenient to un-zip/re-zip this archive to add custom modules/scripts. See - https://github.com/spring-projects/spring-xd/issues/931 | 5 |
495 | XD-1864 | 06/19/2014 12:02:55 | Add paging support for UI list views | As a user, I'd like to have _paging_ support so that I can scroll through the list of streams, jobs and containers. Currently the following error is thrown when we cross >20 rows: http://localhost:9393/jobs/definitions.json JSON Response: {code:xml} [ { links: [ ], logref: "IllegalStateException", message: "Not all instances were looked at" } ] {code} Stack trace: {code} 15:51:21,931 ERROR http-nio-9393-exec-9 rest.RestControllerAdvice - Caught exception while handling a request java.lang.IllegalStateException: Not all instances were looked at at org.springframework.util.Assert.state(Assert.java:385) {code} | 5 |
496 | XD-1869 | 06/20/2014 09:11:43 | Provide option for sources/sinks to configure mapped headers to/from Messages | See the discussion: https://gopivotal-com.socialcast.com/messages/20771872 | 1 |
497 | XD-1870 | 06/20/2014 15:18:32 | Rabbit Sink & Source --host and --port are not updating module host/port. | Acceptance tests failed on the Rabbit source and sink tests. The tests started failing when XD-1824 was introduced (Support RabbitMQ Cluster in source/sink). That story added an addresses option to support rabbit cluster failover. Currently, if a user sets --host and --port to point to a remote Rabbit instance, XD still uses the defaults host=localhost and port=5672. However, using --addresses does work. | 5 |
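Illustrative stream definitions for the regression; the stream names and the broker host are placeholders:
```
# --host/--port are currently ignored and the defaults (localhost:5672) are used:
stream create --name rabbitHostTest --definition "rabbit --host=remote-rabbit --port=5672 | log" --deploy
# Workaround: --addresses is honored:
stream create --name rabbitAddrTest --definition "rabbit --addresses=remote-rabbit:5672 | log" --deploy
```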
498 | XD-1897 | 06/29/2014 21:11:51 | Spring XD - Handling sink failures | If a sink fails for whatever reason, is it possible to handle it? For example, by sending the payload to an error queue for later processing when a JDBC or Mongo sink fails due to a loss of database connectivity? Or are the modules designed, by certain principles/contracts, not to handle such failures themselves? | 3 |
499 | XD-1899 | 06/30/2014 11:23:28 | IllegalStateException on single node shutdown | Upon shutdown via ^C, an IllegalStateException stack trace appears in the server logs. While harmless, the traces are annoying and should be prevented. | 2 |
500 | XD-1901 | 06/30/2014 11:28:45 | Job undeploy operation throws exception | Job `undeploy` operation throws the following stacktrace: ``` http-nio-9393-exec-5 zookeeper.ZooKeeperJobRepository - Exception while transitioning job 'j' state to undeploying org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /xd/deployments/jobs/j/status at org.apache.zookeeper.KeeperException.create(KeeperException.java:111) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.setData(ZooKeeper.java:1266) at org.apache.curator.framework.imps.SetDataBuilderImpl$4.call(SetDataBuilderImpl.java:260) at org.apache.curator.framework.imps.SetDataBuilderImpl$4.call(SetDataBuilderImpl.java:256) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107) at org.apache.curator.framework.imps.SetDataBuilderImpl.pathInForeground(SetDataBuilderImpl.java:252) at org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:239) at org.apache.curator.framework.imps.SetDataBuilderImpl.forPath(SetDataBuilderImpl.java:39) at org.springframework.xd.dirt.stream.zookeeper.ZooKeeperJobRepository.delete(ZooKeeperJobRepository.java:177) at org.springframework.xd.dirt.stream.zookeeper.ZooKeeperJobRepository.delete(ZooKeeperJobRepository.java:199) at org.springframework.xd.dirt.stream.zookeeper.ZooKeeperJobRepository.delete(ZooKeeperJobRepository.java:1) at org.springframework.xd.dirt.stream.AbstractInstancePersistingDeployer.undeploy(AbstractInstancePersistingDeployer.java:68) at org.springframework.xd.dirt.rest.XDController.undeploy(XDController.java:125) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ``` | 2 |
501 | XD-1905 | 07/01/2014 10:01:50 | DefaultContainerMatcher - Improve Logging and mention affected Module | When deploying a definition with container match criteria specified and no container can be selected, the log message is ambiguous and should mention the affected module: {code} 11:58:24,089 WARN DeploymentSupervisorCacheListener-0 cluster.DefaultContainerMatcher - No currently available containers match criteria 'somecriteria' {code} | 1 |
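An example deployment that can produce this warning when no running container matches; the stream name, module name, and criteria expression are placeholders (the criteria property is a SpEL expression evaluated against container attributes):
```
stream deploy --name mystream --properties "module.log.criteria=groups.contains('somecriteria')"
```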
502 | XD-1906 | 07/01/2014 10:14:09 | Handle Status Changes in Client (Dynamically update UI) | As a minimum, we need a common polling strategy on the client side to detect status changes of jobs, streams, etc. (e.g. during deployment of streams/jobs). Ideally, I would like to have this addressed on the server side as well. It would be nice if we could propagate events between containers and the admin server that inform about any changes in the system. We could then use those to notify connected UI clients. | 3 |
503 | XD-1907 | 07/01/2014 11:22:11 | Handle 'deploying' state at the Admin UI | When a job is in the "deploying" state, until we decide whether the job is actually "deployed" or "failed"/"incomplete", there is no way to know whether it is fine to launch/schedule it (though launch requests will still go to the job launch request queue). One option is to disable both "deploy" and "undeploy" until the state changes from "deploying". | 3 |
504 | XD-1908 | 07/01/2014 12:55:09 | Remove Retry from TCP Sink | Now that the bus supports retry it is no longer necessary to have the retry advice in the TCP Sink. | 1 |
505 | XD-1912 | 07/02/2014 01:06:54 | RabbitMQ source does not ingest data into the jdbc sink | I am using Spring XD to ingest data into Pivotal HD. My source is log files coming from Logstash through RabbitMQ. I am able to ingest the log files into HDFS (using the rabbit source and the hdfs sink). However, when I try to ingest the data directly into HAWQ using the jdbc sink, it does not work. Can a rabbit source load directly into databases like HAWQ? stream create --name pivotalqueue --definition "rabbit --host=<my host name> | jdbc --columns='column list'" --- Not working. I configured jdbc in jdbc.properties. There is no issue with the jdbc configuration, because I tested it with a simple tail source and it works and loads the data into HAWQ: stream create --name pivotalqueue --definition "tail --name=/tmp/xd/output/test.out | jdbc --columns='column list'" | 3 |
506 | XD-1915 | 07/04/2014 05:51:04 | Add Hadoop 2.4.x as an option | Hadoop 2.4.1 is now a stable release, so we should add support for running against it. | 3 |
507 | XD-1918 | 07/07/2014 09:15:38 | Update TypeConversion Page | The examples in the TypeConversion doc need updating with respect to the Spring Social Tweet class, which is no longer used. | 1 |
508 | XD-1925 | 07/07/2014 15:10:45 | Rename ModuleDeployer | For more info, please see here: https://github.com/spring-projects/spring-xd/pull/1021/files#r14617723 | 1 |
509 | XD-1940 | 07/09/2014 11:35:16 | Clean up duplicated dependencies from XD on YARN installation | Remove unnecessary/duplicated jars from the lib directory in spring-xd-yarn zip distribution | 3 |
510 | XD-1941 | 07/09/2014 14:39:02 | No main manifest attribute in xd-yarn-client jar | Error deploying to YARN: $ ./spring-xd-1.0.0.BUILD-SNAPSHOT-yarn/bin/xd-yarn push -p spring-xd-1.0.0.BUILD-SNAPSHOT-yarn no main manifest attribute, in spring-xd-1.0.0.BUILD-SNAPSHOT-yarn/lib/spring-xd-yarn-client-1.0.0.BUILD-SNAPSHOT.jar This is probably related to the Boot changes. | 3 |
511 | XD-1944 | 07/10/2014 07:29:48 | Error deploying stream when admin running and container arrives after stream deployment request | Steps to reproduce: 1. start xd-admin 2. start shell and create and deploy stream ("time | hdfs") 3. start container I got: [2014-07-10 09:10:29.019] boot - 19923 INFO [DeploymentSupervisorCacheListener-0] --- InitialDeploymentListener: Path cache event: /deployments/streams/test, type: CHILD_ADDED [2014-07-10 09:10:29.137] boot - 19923 INFO [Deployer] --- StreamDeploymentListener: Deploying stream Stream{name='test'} [2014-07-10 09:10:29.146] boot - 19923 WARN [Deployer] --- StreamDeploymentListener: No containers available for deployment of stream test [2014-07-10 09:10:29.146] boot - 19923 INFO [Deployer] --- StreamDeploymentListener: Stream Stream{name='test'} deployment attempt complete [2014-07-10 09:11:08.003] boot - 19923 INFO [DeploymentSupervisorCacheListener-0] --- ContainerListener: Path cache event: /containers/007c2bcc-13f4-466e-95d3-bd926bb456ea, type: CHILD_ADDED [2014-07-10 09:11:08.006] boot - 19923 INFO [DeploymentSupervisorCacheListener-0] --- ArrivingContainerModuleRedeployer: Container arrived: 007c2bcc-13f4-466e-95d3-bd926bb456ea [2014-07-10 09:11:08.176] boot - 19923 ERROR [DeploymentSupervisorCacheListener-0] --- PathChildrenCache: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /xd/deployments/streams/test/modules at org.apache.zookeeper.KeeperException.create(KeeperException.java:111) at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1590) at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:214) at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:203) at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:107) at org.apache.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:199) at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:191) at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:38) at org.springframework.xd.dirt.server.ArrivingContainerModuleRedeployer.deployUnallocatedStreamModules(ArrivingContainerModuleRedeployer.java:133) at org.springframework.xd.dirt.server.ArrivingContainerModuleRedeployer.deployModules(ArrivingContainerModuleRedeployer.java:106) at org.springframework.xd.dirt.server.ContainerListener.childEvent(ContainerListener.java:99) at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:509) at org.apache.curator.framework.recipes.cache.PathChildrenCache$5.apply(PathChildrenCache.java:503) at org.apache.curator.framework.listen.ListenerContainer$1.run(ListenerContainer.java:92) at com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297) at org.apache.curator.framework.listen.ListenerContainer.forEach(ListenerContainer.java:83) at org.apache.curator.framework.recipes.cache.PathChildrenCache.callListeners(PathChildrenCache.java:500) at org.apache.curator.framework.recipes.cache.EventOperation.invoke(EventOperation.java:35) at org.apache.curator.framework.recipes.cache.PathChildrenCache$10.run(PathChildrenCache.java:762) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) | 3 |
512 | XD-1948 | 07/10/2014 11:32:10 | Build should use Spring Boot plugin version 1.1.4 | The platform uses Boot version 1.1.4 so the plugin version used in build.gradle should match that. | 1 |
513 | XD-1950 | 07/10/2014 13:50:17 | Single step partition support on filejdbc module uses module's datasource | The filejdbc module's single step partition support configures to use jdbc module's datasource rather than XD's batch datasource. ``` org.springframework.messaging.MessageHandlingException: org.springframework.jdbc.UncategorizedSQLException: PreparedStatementCallback; uncategorized SQLException for SQL [SELECT JOB_EXECUTION_ID, START_TIME, END_TIME, STATUS, EXIT_CODE, EXIT_MESSAGE, CREATE_TIME, LAST_UPDATED, VERSION, JOB_CONFIGURATION_LOCATION from BATCH_JOB_EXECUTION where JOB_EXECUTION_ID = ?]; SQL state [null]; error code [0]; [SQLITE_ERROR] SQL error or missing database (no such table: BATCH_JOB_EXECUTION); nested exception is java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (no such table: BATCH_JOB_EXECUTION) at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:78) at org.springframework.integration.handler.ServiceActivatingHandler.handleRequestMessage(ServiceActivatingHandler.java:71) at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:170) at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) at org.springframework.integration.monitor.SimpleMessageHandlerMetrics.handleMessage(SimpleMessageHandlerMetrics.java:106) at org.springframework.integration.monitor.SimpleMessageHandlerMetrics.invoke(SimpleMessageHandlerMetrics.java:86) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207) at com.sun.proxy.$Proxy117.handleMessage(Unknown Source) at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116) at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101) at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:97) at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:255) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:223) at sun.reflect.GeneratedMethodAccessor107.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190) at 
org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) at org.springframework.integration.monitor.DirectChannelMetrics.monitorSend(DirectChannelMetrics.java:113) at org.springframework.integration.monitor.DirectChannelMetrics.doInvoke(DirectChannelMetrics.java:97) at org.springframework.integration.monitor.DirectChannelMetrics.invoke(DirectChannelMetrics.java:91) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207) at com.sun.proxy.$Proxy115.send(Unknown Source) at org.springframework.xd.dirt.integration.bus.LocalMessageBus$3.handleMessage(LocalMessageBus.java:188) at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116) at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101) at org.springframework.integration.dispatcher.UnicastingDispatcher.access$000(UnicastingDispatcher.java:48) at org.springframework.integration.dispatcher.UnicastingDispatcher$1.run(UnicastingDispatcher.java:92) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) Caused by: org.springframework.jdbc.UncategorizedSQLException: PreparedStatementCallback; uncategorized SQLException for SQL [SELECT JOB_EXECUTION_ID, START_TIME, END_TIME, STATUS, EXIT_CODE, EXIT_MESSAGE, CREATE_TIME, LAST_UPDATED, VERSION, JOB_CONFIGURATION_LOCATION from BATCH_JOB_EXECUTION where JOB_EXECUTION_ID = ?]; SQL state [null]; error code [0]; [SQLITE_ERROR] SQL error or missing database (no such table: BATCH_JOB_EXECUTION); nested exception is java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (no such table: BATCH_JOB_EXECUTION) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:84) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81) at org.springframework.jdbc.support.AbstractFallbackSQLExceptionTranslator.translate(AbstractFallbackSQLExceptionTranslator.java:81) at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:660) at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:695) at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:727) at org.springframework.jdbc.core.JdbcTemplate.query(JdbcTemplate.java:737) at org.springframework.jdbc.core.JdbcTemplate.queryForObject(JdbcTemplate.java:811) at org.springframework.batch.core.repository.dao.JdbcJobExecutionDao.getJobExecution(JdbcJobExecutionDao.java:267) at org.springframework.batch.core.explore.support.SimpleJobExplorer.getStepExecution(SimpleJobExplorer.java:142) at org.springframework.batch.integration.partition.StepExecutionRequestHandler.handle(StepExecutionRequestHandler.java:52) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.springframework.expression.spel.support.ReflectiveMethodExecutor.execute(ReflectiveMethodExecutor.java:63) at 
org.springframework.expression.spel.ast.MethodReference.getValueInternal(MethodReference.java:122) at org.springframework.expression.spel.ast.MethodReference.access$000(MethodReference.java:44) at org.springframework.expression.spel.ast.MethodReference$MethodValueRef.getValue(MethodReference.java:258) at org.springframework.expression.spel.ast.CompoundExpression.getValueInternal(CompoundExpression.java:84) at org.springframework.expression.spel.ast.SpelNodeImpl.getTypedValue(SpelNodeImpl.java:114) at org.springframework.expression.spel.standard.SpelExpression.getValue(SpelExpression.java:111) at org.springframework.integration.util.AbstractExpressionEvaluator.evaluateExpression(AbstractExpressionEvaluator.java:159) at org.springframework.integration.util.MessagingMethodInvokerHelper.processInternal(MessagingMethodInvokerHelper.java:268) at org.springframework.integration.util.MessagingMethodInvokerHelper.process(MessagingMethodInvokerHelper.java:142) at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:75) ... 41 more Caused by: java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (no such table: BATCH_JOB_EXECUTION) at org.sqlite.DB.newSQLException(DB.java:383) at org.sqlite.DB.newSQLException(DB.java:387) at org.sqlite.DB.throwex(DB.java:374) at org.sqlite.NestedDB.prepare(NestedDB.java:134) at org.sqlite.DB.prepare(DB.java:123) at org.sqlite.PrepStmt.<init>(PrepStmt.java:42) at org.sqlite.Conn.prepareStatement(Conn.java:404) at org.sqlite.Conn.prepareStatement(Conn.java:399) at org.sqlite.Conn.prepareStatement(Conn.java:383) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.apache.tomcat.jdbc.pool.ProxyConnection.invoke(ProxyConnection.java:126) at org.apache.tomcat.jdbc.pool.JdbcInterceptor.invoke(JdbcInterceptor.java:109) at org.apache.tomcat.jdbc.pool.DisposableConnectionFacade.invoke(DisposableConnectionFacade.java:80) at com.sun.proxy.$Proxy109.prepareStatement(Unknown Source) at org.springframework.jdbc.core.JdbcTemplate$SimplePreparedStatementCreator.createPreparedStatement(JdbcTemplate.java:1557) at org.springframework.jdbc.core.JdbcTemplate.execute(JdbcTemplate.java:638) ... 
63 more 12:23:37,941 INFO main-EventThread server.ContainerRegistrar:254 - Undeploying module [ModuleDescriptor@d192973 moduleName = 'filejdbc', moduleLabel = 'filejdbc', group = 'csvjdbcjob0', sourceChannelName = [null], sinkChannelName = [null], sinkChannelName = [null], index = 0, type = job, parameters = map['resources' -> 'file:///tmp/xdtest/jdbc/delete_after_use.csv', 'initializeDatabase' -> 'true', 'names' -> 'col1,col2,col3', 'deleteFiles' -> 'true', 'driverClassName' -> 'org.sqlite.JDBC', 'url' -> 'jdbc:sqlite:/tmp/xdtest/jdbc/jdbc.db'], children = list[[empty]]] 12:23:37,941 INFO main-EventThread module.ModuleDeployer:158 - removed SimpleModule [name=filejdbc, type=job, group=csvjdbcjob0, index=0 @73cc35b5] 12:23:37,944 ERROR task-scheduler-1 step.AbstractStep:225 - Encountered an error executing step step1-master in job csvjdbcjob0 org.springframework.integration.MessageTimeoutException: Timeout occurred before all partitions returned at org.springframework.batch.integration.partition.MessageChannelPartitionHandler.handle(MessageChannelPartitionHandler.java:141) at org.springframework.batch.core.partition.support.PartitionStep.doExecute(PartitionStep.java:106) at org.springframework.batch.core.step.AbstractStep.execute(AbstractStep.java:198) at org.springframework.batch.core.job.SimpleStepHandler.handleStep(SimpleStepHandler.java:148) at org.springframework.batch.core.job.flow.JobFlowExecutor.executeStep(JobFlowExecutor.java:64) at org.springframework.batch.core.job.flow.support.state.StepState.handle(StepState.java:67) at org.springframework.batch.core.job.flow.support.SimpleFlow.resume(SimpleFlow.java:162) at org.springframework.batch.core.job.flow.support.SimpleFlow.start(SimpleFlow.java:141) at org.springframework.batch.core.job.flow.FlowJob.doExecute(FlowJob.java:134) at org.springframework.batch.core.job.AbstractJob.execute(AbstractJob.java:304) at org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:135) at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50) at org.springframework.batch.core.launch.support.SimpleJobLauncher.run(SimpleJobLauncher.java:128) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) at org.springframework.batch.core.configuration.annotation.SimpleBatchConfiguration$PassthruAdvice.invoke(SimpleBatchConfiguration.java:127) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207) at com.sun.proxy.$Proxy44.run(Unknown Source) at org.springframework.batch.integration.launch.JobLaunchingMessageHandler.launch(JobLaunchingMessageHandler.java:50) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:606) at org.springframework.expression.spel.support.ReflectiveMethodExecutor.execute(ReflectiveMethodExecutor.java:63) at org.springframework.expression.spel.ast.MethodReference.getValueInternal(MethodReference.java:122) at org.springframework.expression.spel.ast.MethodReference.access$000(MethodReference.java:44) at org.springframework.expression.spel.ast.MethodReference$MethodValueRef.getValue(MethodReference.java:258) at org.springframework.expression.spel.ast.CompoundExpression.getValueInternal(CompoundExpression.java:84) at org.springframework.expression.spel.ast.SpelNodeImpl.getTypedValue(SpelNodeImpl.java:114) at org.springframework.expression.spel.standard.SpelExpression.getValue(SpelExpression.java:111) at org.springframework.integration.util.AbstractExpressionEvaluator.evaluateExpression(AbstractExpressionEvaluator.java:159) at org.springframework.integration.util.MessagingMethodInvokerHelper.processInternal(MessagingMethodInvokerHelper.java:268) at org.springframework.integration.util.MessagingMethodInvokerHelper.process(MessagingMethodInvokerHelper.java:142) at org.springframework.integration.handler.MethodInvokingMessageProcessor.processMessage(MethodInvokingMessageProcessor.java:75) at org.springframework.integration.handler.ServiceActivatingHandler.handleRequestMessage(ServiceActivatingHandler.java:71) at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:170) at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) at org.springframework.integration.monitor.SimpleMessageHandlerMetrics.handleMessage(SimpleMessageHandlerMetrics.java:106) at org.springframework.integration.monitor.SimpleMessageHandlerMetrics.invoke(SimpleMessageHandlerMetrics.java:86) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207) at com.sun.proxy.$Proxy117.handleMessage(Unknown Source) at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116) at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101) at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:97) at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:255) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:223) at sun.reflect.GeneratedMethodAccessor107.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at 
java.lang.reflect.Method.invoke(Method.java:606) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) at org.springframework.integration.monitor.DirectChannelMetrics.monitorSend(DirectChannelMetrics.java:113) at org.springframework.integration.monitor.DirectChannelMetrics.doInvoke(DirectChannelMetrics.java:97) at org.springframework.integration.monitor.DirectChannelMetrics.invoke(DirectChannelMetrics.java:91) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207) at com.sun.proxy.$Proxy115.send(Unknown Source) at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:109) at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:44) at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:94) at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.sendMessage(AbstractReplyProducingMessageHandler.java:260) at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.sendReplyMessage(AbstractReplyProducingMessageHandler.java:241) at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.produceReply(AbstractReplyProducingMessageHandler.java:205) at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleResult(AbstractReplyProducingMessageHandler.java:199) at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:177) at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) at org.springframework.integration.monitor.SimpleMessageHandlerMetrics.handleMessage(SimpleMessageHandlerMetrics.java:106) at org.springframework.integration.monitor.SimpleMessageHandlerMetrics.invoke(SimpleMessageHandlerMetrics.java:86) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207) at com.sun.proxy.$Proxy117.handleMessage(Unknown Source) at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:116) at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:101) at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:97) at 
org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:77) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:255) at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:223) at sun.reflect.GeneratedMethodAccessor107.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:317) at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:190) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) at org.springframework.integration.monitor.DirectChannelMetrics.monitorSend(DirectChannelMetrics.java:113) at org.springframework.integration.monitor.DirectChannelMetrics.doInvoke(DirectChannelMetrics.java:97) at org.springframework.integration.monitor.DirectChannelMetrics.invoke(DirectChannelMetrics.java:91) at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:207) at com.sun.proxy.$Proxy115.send(Unknown Source) at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:109) at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:44) at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:94) at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.sendMessage(AbstractReplyProducingMessageHandler.java:260) at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.sendReplyMessage(AbstractReplyProducingMessageHandler.java:241) at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.produceReply(AbstractReplyProducingMessageHandler.java:205) at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleResult(AbstractReplyProducingMessageHandler.java:199) at org.springframework.integration.handler.AbstractReplyProducingMessageHandler.handleMessageInternal(AbstractReplyProducingMessageHandler.java:177) at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:78) at org.springframework.integration.endpoint.PollingConsumer.handleMessage(PollingConsumer.java:74) at org.springframework.integration.endpoint.AbstractPollingEndpoint.doPoll(AbstractPollingEndpoint.java:205) at org.springframework.integration.endpoint.AbstractPollingEndpoint.access$000(AbstractPollingEndpoint.java:55) at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:149) at org.springframework.integration.endpoint.AbstractPollingEndpoint$1.call(AbstractPollingEndpoint.java:146) at org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller$1.run(AbstractPollingEndpoint.java:284) at org.springframework.integration.util.ErrorHandlingTaskExecutor$1.run(ErrorHandlingTaskExecutor.java:52) at org.springframework.core.task.SyncTaskExecutor.execute(SyncTaskExecutor.java:50) at org.springframework.integration.util.ErrorHandlingTaskExecutor.execute(ErrorHandlingTaskExecutor.java:49) at 
org.springframework.integration.endpoint.AbstractPollingEndpoint$Poller.run(AbstractPollingEndpoint.java:278) at org.springframework.scheduling.support.DelegatingErrorHandlingRunnable.run(DelegatingErrorHandlingRunnable.java:54) at org.springframework.scheduling.concurrent.ReschedulingRunnable.run(ReschedulingRunnable.java:81) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:178) at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:292) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) ``` | 1 |
514 | XD-1953 | 07/11/2014 11:09:24 | Stacktrace on container with deployed modules is shutdown | When the container that has deployed module is shutdown, following stacktrace is thrown: 10:10:27,560 INFO main-EventThread server.ContainerRegistrar:254 - Undeploying module [ModuleDescriptor@3a615460 moduleName = 'job', moduleLabel = 'job', group = 'j4', sourceChannelName = [null], sinkChannelName = [null], sinkChannelName = [null], index = 0, type = job, parameters = map[[empty]], children = list[[empty]]] 10:10:27,560 INFO main-EventThread module.ModuleDeployer:158 - removed SimpleModule [name=job, type=job, group=j4, index=0 @7df1aff2] 10:10:27,561 ERROR main-EventThread imps.CuratorFrameworkImpl:555 - Watcher exception java.lang.IllegalStateException: org.springframework.context.annotation.AnnotationConfigApplicationContext@422fd7b7 has been closed already at org.springframework.context.support.AbstractApplicationContext.assertBeanFactoryActive(AbstractApplicationContext.java:956) at org.springframework.context.support.AbstractApplicationContext.getBean(AbstractApplicationContext.java:978) at org.springframework.xd.module.core.SimpleModule.getComponent(SimpleModule.java:164) at org.springframework.xd.dirt.plugins.AbstractMessageBusBinderPlugin.unbindConsumerAndProducers(AbstractMessageBusBinderPlugin.java:219) at org.springframework.xd.dirt.plugins.job.JobPlugin.removeModule(JobPlugin.java:70) at org.springframework.xd.dirt.module.ModuleDeployer.removeModule(ModuleDeployer.java:204) at org.springframework.xd.dirt.module.ModuleDeployer.destroyModule(ModuleDeployer.java:162) at org.springframework.xd.dirt.module.ModuleDeployer.handleUndeploy(ModuleDeployer.java:140) at org.springframework.xd.dirt.module.ModuleDeployer.undeploy(ModuleDeployer.java:112) at org.springframework.xd.dirt.server.ContainerRegistrar.undeployModule(ContainerRegistrar.java:256) at org.springframework.xd.dirt.server.ContainerRegistrar$JobModuleWatcher.process(ContainerRegistrar.java:753) at org.apache.curator.framework.imps.NamespaceWatcher.process(NamespaceWatcher.java:67) at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522) at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498) 10:10:27,561 INFO main-EventThread zookeeper.ClientCnxn:512 - EventThread shut down 10:10:27,564 INFO Thread-2 jmx.EndpointMBeanExporter:433 - Unregistering JMX-exposed beans on shutdown | 2 |
515 | XD-1956 | 07/11/2014 12:16:59 | filepollhdfs --deleteFiles=true has no effect, files are not deleted | Setting --deleteFiles=true no longer has any effect. This also causes the Script Integration Tests to fail. We suspect this is related to the change here: https://github.com/spring-projects/spring-xd/commit/6dbac167758ce23b9a4dbf07169b2d26d1eddef1 | 3 |
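An example job definition exercising the broken option; the job name and field names are placeholders, and the exact filepollhdfs options are assumed from the module's documented usage rather than confirmed by this report:
```
job create --name csvToHdfs --definition "filepollhdfs --names=col1,col2,col3 --deleteFiles=true" --deploy
```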
516 | XD-1957 | 07/11/2014 12:40:42 | Remove footer from admin UI | Please see the discussion here: https://github.com/spring-projects/spring-xd/pull/1052#issuecomment-48761686 | 1 |