Columns: question, answer, tag (130 classes), question_id (int64), score (int64)
I am new to Cassandra. I am reading about the num_tokens parameter for virtual nodes in the cassandra.yaml file, and I don't think I quite understand what it is doing or how tokens/partitions are assigned. What is really going on here? The default value of 256 does not make sense if we are really talking about the number of tokens per node. Is num_tokens really the number of token partitions per node? Let us pick 2 nodes, A and B, to begin with, then add a 3rd node C, and try explaining how things work. To begin, each node is configured with num_tokens = 256. Now, when A and B come up: 1) How many tokens do A and B get when they join the cluster? 2) What partition ranges do A and B get, and how is that decided? 3) What kind of metadata is stored in Cassandra to know which partition ranges A and B carry? 4) What happens when C joins? How does Cassandra decide what partition ranges C gets, and how many partitions should be put on C? How are the partition ranges for A and B decided when C joins? Anybody kind enough to clarify in detail for the benefit of everyone?
1) A gets 256 token ranges and B gets 256 token ranges. To keep this simple, though, let's give them each 2 tokens and pretend the token range is 0 to 30. Given the tokens A: 10, 15 and B: 3, 11: 2) Partition ranges are determined by granting each node the range from each of its tokens up until the next assigned token on the ring, so the nodes are responsible for the following ranges: (3-9: B), (10: A), (11-14: B), (15-30, 0-2: A). 3) Token ownership is exchanged through gossip, detailing which nodes have which tokens; this metadata allows every node to know which nodes are responsible for which ranges. Keyspace/replication settings also change where data is actually saved. 4) If C joins, also with 2 tokens, 20 and 5, the nodes will now be responsible for the following ranges: (3-4: B), (5-9: C), (10: A), (11-14: B), (15-19: A), (20-30, 0-2: C). Vnodes are powerful because when C joins the cluster it gets its data from multiple nodes (5-9 from B and 20-30, 0-2 from A), sharing the load between those machines. In this toy example you can see that having only 2 tokens allows some nodes to host the majority of the data while others get almost none. As the number of vnodes increases, the balance between the nodes improves, because the ranges are randomly subdivided more and more. At 256 tokens per node you are extremely likely to have distributed an even amount of data to each node in the cluster. For more information on vnodes: http://www.datastax.com/dev/blog/virtual-nodes-in-cassandra-1-2
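To make the range assignment concrete, here is a toy sketch in Java that reproduces the example above. This is an illustration only, not Cassandra's implementation: real tokens are large partitioner hash values, and the 0-30 ring, node names and token values here are the invented ones from the example.

import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class ToyRing {
    public static void main(String[] args) {
        // token -> owning node; a node owns [its token, next token) on a ring of 0..30
        TreeMap<Integer, String> ring = new TreeMap<>();
        ring.put(10, "A"); ring.put(15, "A"); // A's tokens
        ring.put(3, "B");  ring.put(11, "B"); // B's tokens
        // Uncomment to add C and watch ranges 5-9 and 20-30,0-2 move to it:
        // ring.put(20, "C"); ring.put(5, "C");
        List<Integer> tokens = new ArrayList<>(ring.keySet()); // sorted ascending
        for (int i = 0; i < tokens.size(); i++) {
            int start = tokens.get(i);
            int end = tokens.get((i + 1) % tokens.size()); // the last range wraps around
            System.out.printf("[%d .. %d) -> %s%n", start, end, ring.get(start));
        }
    }
}

Run as-is, it prints (3-9: B), (10: A), (11-14: B) and the wrapping (15-30, 0-2: A) range from the example.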
Cassandra
19,995,342
12
I am trying to set up a completely basic Titan Rexster Cassandra instance, but I can't seem to crack it. I have tried a whole lot of things, but no matter how much I read I am not able to set it up properly. What I want is a Titan-Rexster-Cassandra instance running in embedded mode with a few indexes, including Elasticsearch. From everything I have read, this is what I should get when I download titan-server-0.4.0 and run the bin/titan.sh start command, and this does start the server. However: When I try to add an index to it, nothing happens. When I try to populate it over RexPro, nothing is added. When I restart the server, my graph is gone; it is no longer in the Rexster list of graphs when I go to http://localhost:8182/graphs. So it appears that my data does not persist, or at least disappears for Rexster. I feel like I have tried just about everything to get this to work: Changing the .properties to include the search index like so: storrage.index.search.backend=elasticsearch... Changing the .properties files (all of them) to use cassandra, embeddedcassandra and cassandrathrift for storage.backend. Trying to start the server with properties as indicated in this question to point to specific config files. I have looked through the titan.sh file to see what actually happens, then gone to the config files it points to and had a look at what goes on there, upon which I have tried a lot of things such as the above. I have struggled with this for well over a week, probably two or even more, and I am starting to lose faith. I am considering going back to Neo4j, but unfortunately I really need the scalability of Titan; if I can't get it to work then it is no use. I feel like there might be some trivial but essential thing that I have not figured out, or forgot. Does anyone know of a guide that brings you from absolute scratch (e.g., starting on a fresh VM), or close to it, to a Titan-Rexster-Cassandra instance running with an Elasticsearch index? Or perhaps, if you are awesome, you could provide such a guide? I feel lost :( Key points: Ubuntu 12.04 (also tried 13.10, same issue); Titan 0.4.0. Goal: persistence, a vertex name property indexed with Elasticsearch, and edges with weight. Connecting with Ruby rexpro like this: require "rexpro" #the "rexpro" gem rexpro_client = Rexpro::Client.new(host: 'the.ip.of.my.machine.running.rexster', port: 8184) results = rexpro_client.execute("g.getClass()", graph_name: "graph").results #=> returns the following: class com.thinkaurelius.titan.graphdb.database.StandardTitanGraph The steps I follow to reproduce the problem where the DB does not persist: On Windows Azure: Create a new small (1 core, 1.75GB RAM) VM with Ubuntu 12.04 LTS with name vmname (or whatever).
Log on to this VM with SSH when it is ready (ssh [email protected] -p 22)
Run: sudo apt-get update
Run: sudo apt-get install openjdk-7-jdk openjdk-7-jre p7zip-full
Run: mkdir /home/azureuser/Downloads
Run: wget -O /home/azureuser/Downloads/titan-server-0.4.0.zip "http://s3.thinkaurelius.com/downloads/titan/titan-server-0.4.0.zip"
Run: cd /home/azureuser/Downloads/
Run: 7z x titan-server-0.4.0.zip
Run: cd /home/azureuser/Downloads/titan-server-0.4.0
Run: sudo bin/titan.sh -c cassandra-es start
Run: sudo bin/rexster-console.sh
In the Rexster console, run: g = rexster.getGraph("graph"); it returns titangraph[cassandra:null]
CTRL-C out of the Rexster console
Run: sudo bin/titan.sh stop
Run: sudo bin/titan.sh -c cassandra-es start
Run: sudo bin/rexster-console.sh
In the Rexster console, run: g = rexster.getGraph("graph"). Now this returns null, not a graph.
There appear to be some issues here when shutting down and starting up again. On shutdown: [WARN] ShutdownManager - ShutdownListener JVM Shutdown Hook Remover threw an exception, continuing with shutdown On startup #2: Starting Cassandra... xss = -Dtitan.logdir=/home/azureuser/Downloads/titan-server-0.4.0/log -ea -javaagent:/home/azureuser/Downloads/titan-server-0.4.0/lib/jamm-0.2.5.jar -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms840M -Xmx840M -Xmn100M -XX:+HeapDumpOnOutOfMemoryError -Xss256k Starting Titan + Rexster... INFO 12:00:12,780 Logging initialized INFO 12:00:12,805 JVM vendor/version: OpenJDK 64-Bit Server VM/1.7.0_25 INFO 12:00:12,806 Heap size: 870318080/870318080 INFO 12:00:12,806 Classpath: /home/azureuser/Downloads/titan-server-0.4.0/conf:/home/azureuser/Downloads/titan-server-0.4.0/build/classes/main:/home/azureuser/Downloads/titan-server-0.4.0/build/classes/thrift:/home/azureuser/Downloads/titan-server-0.4.0/lib/activation-... INFO 12:00:13,397 JNA mlockall successful INFO 12:00:13,419 Loading settings from file:/home/azureuser/Downloads/titan-server-0.4.0/conf/cassandra.yaml INFO 12:00:14,093 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap INFO 12:00:14,093 disk_failure_policy is stop INFO 12:00:14,101 Global memtable threshold is enabled at 276MB INFO 12:00:14,878 Initializing key cache with capacity of 41 MBs. INFO 12:00:14,892 Scheduling key cache save to each 14400 seconds (going to save all keys). INFO 12:00:14,894 Initializing row cache with capacity of 0 MBs and provider org.apache.cassandra.cache.SerializingCacheProvider INFO 12:00:14,955 Scheduling row cache save to each 0 seconds (going to save all keys).
INFO 12:00:15,273 Opening db/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-ib-2 (167 bytes) INFO 12:00:15,347 Opening db/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-ib-1 (264 bytes) INFO 12:00:15,376 Opening db/cassandra/data/system/schema_columnfamilies/system-schema_columnfamilies-ib-11 (717 bytes) INFO 12:00:15,387 Opening db/cassandra/data/system/schema_columnfamilies/system-schema_columnfamilies-ib-9 (6183 bytes) INFO 12:00:15,392 Opening db/cassandra/data/system/schema_columnfamilies/system-schema_columnfamilies-ib-10 (687 bytes) INFO 12:00:15,411 Opening db/cassandra/data/system/schema_columns/system-schema_columns-ib-2 (209 bytes) INFO 12:00:15,416 Opening db/cassandra/data/system/schema_columns/system-schema_columns-ib-1 (3771 bytes) INFO 12:00:15,450 Opening db/cassandra/data/system/local/system-local-ib-3 (109 bytes) INFO 12:00:15,455 Opening db/cassandra/data/system/local/system-local-ib-2 (120 bytes) INFO 12:00:15,521 Opening db/cassandra/data/system/local/system-local-ib-1 (356 bytes) Processes forked. Setup may take some time. Run bin/rexster-console.sh to connect. azureuser@neugle:~/Downloads/titan-server-0.4.0$ INFO 12:00:16,705 completed pre-loading (8 keys) key cache. INFO 12:00:16,777 Replaying db/cassandra/commitlog/CommitLog-2-1383479792488.log, db/cassandra/commitlog/CommitLog-2-1383479792489.log INFO 12:00:16,802 Replaying db/cassandra/commitlog/CommitLog-2-1383479792488.log INFO 12:00:17,178 Finished reading db/cassandra/commitlog/CommitLog-2-1383479792488.log INFO 12:00:17,179 Replaying db/cassandra/commitlog/CommitLog-2-1383479792489.log INFO 12:00:17,179 Finished reading db/cassandra/commitlog/CommitLog-2-1383479792489.log INFO 12:00:17,191 Enqueuing flush of Memtable-local@1221155490(52/52 serialized/live bytes, 22 ops) INFO 12:00:17,194 Writing Memtable-local@1221155490(52/52 serialized/live bytes, 22 ops) INFO 12:00:17,204 Enqueuing flush of Memtable-users@1341189399(28/28 serialized/live bytes, 2 ops) INFO 12:00:17,211 Enqueuing flush of Memtable-system_properties@1057472358(26/26 serialized/live bytes, 1 ops) INFO 12:00:17,416 Completed flushing db/cassandra/data/system/local/system-local-ib-4-Data.db (84 bytes) for commitlog position ReplayPosition(segmentId=1383480016398, position=142) INFO 12:00:17,480 Writing Memtable-users@1341189399(28/28 serialized/live bytes, 2 ops) INFO 12:00:17,626 Completed flushing db/cassandra/data/system_auth/users/system_auth-users-ib-1-Data.db (64 bytes) for commitlog position ReplayPosition(segmentId=1383480016398, position=142) INFO 12:00:17,630 Writing Memtable-system_properties@1057472358(26/26 serialized/live bytes, 1 ops) INFO 12:00:17,776 Completed flushing db/cassandra/data/titan/system_properties/titan-system_properties-ib-1-Data.db (64 bytes) for commitlog position ReplayPosition(segmentId=1383480016398, position=142) INFO 12:00:17,780 Log replay complete, 12 replayed mutations INFO 12:00:17,787 Fixing timestamps of schema ColumnFamily schema_keyspaces... INFO 12:00:17,864 Enqueuing flush of Memtable-local@1592659210(65/65 serialized/live bytes, 2 ops) INFO 12:00:17,872 Writing Memtable-local@1592659210(65/65 serialized/live bytes, 2 ops) [INFO] Application - .:Welcome to Rexster:. 
INFO 12:00:18,027 Completed flushing db/cassandra/data/system/local/system-local-ib-5-Data.db (97 bytes) for commitlog position ReplayPosition(segmentId=1383480016398, position=297) INFO 12:00:18,036 Enqueuing flush of Memtable-schema_keyspaces@1453195003(527/527 serialized/live bytes, 12 ops) INFO 12:00:18,038 Writing Memtable-schema_keyspaces@1453195003(527/527 serialized/live bytes, 12 ops) [INFO] RexsterProperties - Using [/home/azureuser/Downloads/titan-server-0.4.0/conf/rexster-cassandra-es.xml] as configuration source. INFO 12:00:18,179 Completed flushing db/cassandra/data/system/schema_keyspaces/system-schema_keyspaces-ib-3-Data.db (257 bytes) for commitlog position ReplayPosition(segmentId=1383480016398, position=1227) [INFO] Application - Rexster is watching [/home/azureuser/Downloads/titan-server-0.4.0/conf/rexster-cassandra-es.xml] for change. [WARN] AstyanaxStoreManager - Couldn't set custom Thrift Frame Size property, use 'cassandrathrift' instead. INFO 12:00:18,904 Cassandra version: 1.2.2 INFO 12:00:18,906 Thrift API version: 19.35.0 INFO 12:00:18,906 CQL supported versions: 2.0.0,3.0.1 (default: 3.0.1) [INFO] ConnectionPoolMBeanManager - Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,name=ClusterTitanConnectionPool,ServiceType=connectionpool [INFO] CountingConnectionPoolMonitor - AddHost: 127.0.0.1 INFO 12:00:19,087 Loading persisted ring state INFO 12:00:19,097 Starting up server gossip INFO 12:00:19,162 Enqueuing flush of Memtable-local@114523622(251/251 serialized/live bytes, 9 ops) INFO 12:00:19,169 Writing Memtable-local@114523622(251/251 serialized/live bytes, 9 ops) INFO 12:00:19,314 Completed flushing db/cassandra/data/system/local/system-local-ib-6-Data.db (238 bytes) for commitlog position ReplayPosition(segmentId=1383480016398, position=51470) INFO 12:00:19,369 Compacting [SSTableReader(path='db/cassandra/data/system/local/system-local-ib-3-Data.db'), SSTableReader(path='db/cassandra/data/system/local/system-local-ib-2-Data.db'), SSTableReader(path='db/cassandra/data/system/local/system-local-ib-4-Data.db'), SSTableReader(path='db/cassandra/data/system/local/system-local-ib-1-Data.db'), SSTableReader(path='db/cassandra/data/system/local/system-local-ib-6-Data.db'), SSTableReader(path='db/cassandra/data/system/local/system-local-ib-5-Data.db')] INFO 12:00:19,479 Starting Messaging Service on port 7000 INFO 12:00:19,585 Using saved token [7398637255000140098] INFO 12:00:19,588 Enqueuing flush of Memtable-local@365797436(84/84 serialized/live bytes, 4 ops) INFO 12:00:19,588 Writing Memtable-local@365797436(84/84 serialized/live bytes, 4 ops) INFO 12:00:19,666 Compacted 6 sstables to [db/cassandra/data/system/local/system-local-ib-7,]. 1,004 bytes to 496 (~49% of original) in 286ms = 0.001654MB/s. 6 total rows, 1 unique. Row merge counts were {1:0, 2:0, 3:0, 4:0, 5:0, 6:1, } INFO 12:00:19,796 Completed flushing db/cassandra/data/system/local/system-local-ib-8-Data.db (120 bytes) for commitlog position ReplayPosition(segmentId=1383480016398, position=51745) INFO 12:00:19,810 Enqueuing flush of Memtable-local@1775610672(50/50 serialized/live bytes, 2 ops) INFO 12:00:19,812 Writing Memtable-local@1775610672(50/50 serialized/live bytes, 2 ops) INFO 12:00:19,967 Completed flushing db/cassandra/data/system/local/system-local-ib-9-Data.db (109 bytes) for commitlog position ReplayPosition(segmentId=1383480016398, position=51919) INFO 12:00:20,088 Node localhost/127.0.0.1 state jump to normal INFO 12:00:20,108 Startup completed! Now serving reads. 
^C azureuser@neugle:~/Downloads/titan-server-0.4.0$ sudo bin/rexster-console.sh[WARN] GraphConfigurationContainer - Could not load graph graph. Please check the XML configuration. [WARN] GraphConfigurationContainer - GraphConfiguration could not be found or otherwise instantiated: [com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration]. Ensure that it is in Rexster's path. com.tinkerpop.rexster.config.GraphConfigurationException: GraphConfiguration could not be found or otherwise instantiated: [com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration]. Ensure that it is in Rexster's path. at com.tinkerpop.rexster.config.GraphConfigurationContainer.getGraphFromConfiguration(GraphConfigurationContainer.java:137) at com.tinkerpop.rexster.config.GraphConfigurationContainer.<init>(GraphConfigurationContainer.java:54) at com.tinkerpop.rexster.server.XmlRexsterApplication.reconfigure(XmlRexsterApplication.java:99) at com.tinkerpop.rexster.server.XmlRexsterApplication.<init>(XmlRexsterApplication.java:47) at com.tinkerpop.rexster.Application.<init>(Application.java:96) at com.tinkerpop.rexster.Application.main(Application.java:188) Caused by: java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager at com.thinkaurelius.titan.diskstorage.Backend.instantiate(Backend.java:339) at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:351) at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:294) at com.thinkaurelius.titan.diskstorage.Backend.<init>(Backend.java:112) at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:682) at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:72) at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:40) at com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration.configureGraphInstance(TitanGraphConfiguration.java:25) at com.tinkerpop.rexster.config.GraphConfigurationContainer.getGraphFromConfiguration(GraphConfigurationContainer.java:119) ... 5 more Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at com.thinkaurelius.titan.diskstorage.Backend.instantiate(Backend.java:328) ... 13 more Caused by: com.thinkaurelius.titan.diskstorage.TemporaryStorageException: Temporary failure in storage backend at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.ensureKeyspaceExists(AstyanaxStoreManager.java:429) at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.<init>(AstyanaxStoreManager.java:172) ... 
18 more Caused by: com.netflix.astyanax.connectionpool.exceptions.BadRequestException: BadRequestException: [host=127.0.0.1(127.0.0.1):9160, latency=42(60), attempts=1]InvalidRequestException(why:Keyspace names must be case-insensitively unique ("titan" conflicts with "titan")) at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159) at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:65) at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28) at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:151) at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:69) at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:256) at com.netflix.astyanax.thrift.ThriftClusterImpl.executeSchemaChangeOperation(ThriftClusterImpl.java:146) at com.netflix.astyanax.thrift.ThriftClusterImpl.addKeyspace(ThriftClusterImpl.java:246) at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.ensureKeyspaceExists(AstyanaxStoreManager.java:424) ... 19 more Caused by: InvalidRequestException(why:Keyspace names must be case-insensitively unique ("titan" conflicts with "titan")) at org.apache.cassandra.thrift.Cassandra$system_add_keyspace_result.read(Cassandra.java:33158) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) at org.apache.cassandra.thrift.Cassandra$Client.recv_system_add_keyspace(Cassandra.java:1408) at org.apache.cassandra.thrift.Cassandra$Client.system_add_keyspace(Cassandra.java:1395) at com.netflix.astyanax.thrift.ThriftClusterImpl$9.internalExecute(ThriftClusterImpl.java:250) at com.netflix.astyanax.thrift.ThriftClusterImpl$9.internalExecute(ThriftClusterImpl.java:247) at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60) ... 
26 more [WARN] GraphConfigurationContainer - Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager java.lang.IllegalArgumentException: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager at com.thinkaurelius.titan.diskstorage.Backend.instantiate(Backend.java:339) at com.thinkaurelius.titan.diskstorage.Backend.getImplementationClass(Backend.java:351) at com.thinkaurelius.titan.diskstorage.Backend.getStorageManager(Backend.java:294) at com.thinkaurelius.titan.diskstorage.Backend.<init>(Backend.java:112) at com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration.getBackend(GraphDatabaseConfiguration.java:682) at com.thinkaurelius.titan.graphdb.database.StandardTitanGraph.<init>(StandardTitanGraph.java:72) at com.thinkaurelius.titan.core.TitanFactory.open(TitanFactory.java:40) at com.thinkaurelius.titan.tinkerpop.rexster.TitanGraphConfiguration.configureGraphInstance(TitanGraphConfiguration.java:25) at com.tinkerpop.rexster.config.GraphConfigurationContainer.getGraphFromConfiguration(GraphConfigurationContainer.java:119) at com.tinkerpop.rexster.config.GraphConfigurationContainer.<init>(GraphConfigurationContainer.java:54) at com.tinkerpop.rexster.server.XmlRexsterApplication.reconfigure(XmlRexsterApplication.java:99) at com.tinkerpop.rexster.server.XmlRexsterApplication.<init>(XmlRexsterApplication.java:47) at com.tinkerpop.rexster.Application.<init>(Application.java:96) at com.tinkerpop.rexster.Application.main(Application.java:188) Caused by: java.lang.reflect.InvocationTargetException at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57) at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) at java.lang.reflect.Constructor.newInstance(Constructor.java:526) at com.thinkaurelius.titan.diskstorage.Backend.instantiate(Backend.java:328) ... 13 more Caused by: com.thinkaurelius.titan.diskstorage.TemporaryStorageException: Temporary failure in storage backend at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.ensureKeyspaceExists(AstyanaxStoreManager.java:429) at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.<init>(AstyanaxStoreManager.java:172) ... 
18 more Caused by: com.netflix.astyanax.connectionpool.exceptions.BadRequestException: BadRequestException: [host=127.0.0.1(127.0.0.1):9160, latency=42(60), attempts=1]InvalidRequestException(why:Keyspace names must be case-insensitively unique ("titan" conflicts with "titan")) at com.netflix.astyanax.thrift.ThriftConverter.ToConnectionPoolException(ThriftConverter.java:159) at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:65) at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:28) at com.netflix.astyanax.thrift.ThriftSyncConnectionFactoryImpl$ThriftConnection.execute(ThriftSyncConnectionFactoryImpl.java:151) at com.netflix.astyanax.connectionpool.impl.AbstractExecuteWithFailoverImpl.tryOperation(AbstractExecuteWithFailoverImpl.java:69) at com.netflix.astyanax.connectionpool.impl.AbstractHostPartitionConnectionPool.executeWithFailover(AbstractHostPartitionConnectionPool.java:256) at com.netflix.astyanax.thrift.ThriftClusterImpl.executeSchemaChangeOperation(ThriftClusterImpl.java:146) at com.netflix.astyanax.thrift.ThriftClusterImpl.addKeyspace(ThriftClusterImpl.java:246) at com.thinkaurelius.titan.diskstorage.cassandra.astyanax.AstyanaxStoreManager.ensureKeyspaceExists(AstyanaxStoreManager.java:424) ... 19 more Caused by: InvalidRequestException(why:Keyspace names must be case-insensitively unique ("titan" conflicts with "titan")) at org.apache.cassandra.thrift.Cassandra$system_add_keyspace_result.read(Cassandra.java:33158) at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78) at org.apache.cassandra.thrift.Cassandra$Client.recv_system_add_keyspace(Cassandra.java:1408) at org.apache.cassandra.thrift.Cassandra$Client.system_add_keyspace(Cassandra.java:1395) at com.netflix.astyanax.thrift.ThriftClusterImpl$9.internalExecute(ThriftClusterImpl.java:250) at com.netflix.astyanax.thrift.ThriftClusterImpl$9.internalExecute(ThriftClusterImpl.java:247) at com.netflix.astyanax.thrift.AbstractOperationImpl.execute(AbstractOperationImpl.java:60) ... 26 more [INFO] HttpReporterConfig - Configured HTTP Metric Reporter. [INFO] ConsoleReporterConfig - Configured Console Metric Reporter. [INFO] HttpRexsterServer - HTTP/REST thread pool configuration: kernal[4 / 4] worker[8 / 8] [INFO] HttpRexsterServer - Using org.glassfish.grizzly.strategies.LeaderFollowerNIOStrategy IOStrategy for HTTP/REST. [INFO] HttpRexsterServer - Rexster Server running on: [http://localhost:8182] [INFO] RexProRexsterServer - Using org.glassfish.grizzly.strategies.LeaderFollowerNIOStrategy IOStrategy for RexPro. [INFO] RexProRexsterServer - RexPro thread pool configuration: kernal[4 / 4] worker[8 / 8] [INFO] RexProRexsterServer - Rexster configured with no security. [INFO] RexProRexsterServer - RexPro Server bound to [0.0.0.0:8184] [INFO] ShutdownManager$ShutdownSocketListener - Bound shutdown socket to /127.0.0.1:8183. Starting listener thread for shutdown requests. ^C
As a first note, embeddedcassandra is no longer what you want in Titan 0.4.0; you can read more about that here. In the Titan Server distribution for 0.4.0, Cassandra and Rexster run in separate JVMs and should generally run out-of-the-box from the distribution. Also note that I would recommend creating types/indices via the Gremlin Console directly; I like being "close to the graph" when working with TypeMaker. You can read more about such production implementation patterns here. As for your specific problem, your issue helped uncover a hole in the documentation (which has since been remedied). To ensure that Elasticsearch gets started with Titan Server, make sure that you do:

bin/titan.sh -c cassandra-es start

At this point you can connect via Rexster to construct and query Elasticsearch indices. Here's an example from the Rexster Console:

rexster[groovy]> g = rexster.getGraph("graph")
==>titangraph[cassandra:null]
rexster[groovy]> g.makeKey("name").dataType(String.class).indexed("search",Vertex.class).make()
==>v[74]
rexster[groovy]> g.commit()
==>null
rexster[groovy]> g.addVertex([name:'marko'])
==>v[4]
rexster[groovy]> g.addVertex([name:'stephen'])
==>v[8]
rexster[groovy]> g.commit()
==>null
rexster[groovy]> g.V.has('name',PREFIX,'mar')
==>v[4]

Note that by starting Titan Server in this mode, Elasticsearch runs embedded in the Titan instance started by Rexster, which means that Elasticsearch will not be accessible from outside that particular Titan instance; remote connections will not be possible. So if you are trying to connect via a Titan Gremlin Console, I don't believe it will work: connections have to run through Rexster.
Cassandra
19,712,610
12
There are two ways to support wide rows in CQL3: one is to use composite keys, and the other is to use collections like map, list and set. The composite-key method can have millions of columns (transposed to rows), which solves some of our use cases. However, if we use collections, I want to know whether there is a limit on the number/amount of data that a collection can store (earlier, with Thrift, C* supported up to 2 billion columns in a row).
It is strongly recommended to store only a limited amount of data in collections and maps, for these reasons: Collections and maps are fetched as a whole, entirely; you cannot "slice" a collection, so putting lots of data in collections/maps will hurt read performance. The CQL3 implementation of lists is not performant for insertion/removal in the middle of the list. Append/prepend operations are quite fast, but inserting/removing an element at index i requires a read-before-write: part of the list has to be re-written because elements need to be shifted to the correct index. Insertion/removal for sets and maps is more performant, since they use the column key for storage/sorting/indexing. Now, to answer your question of whether there is a hard limit on the number of elements in a collection/map: originally the answer was no; technically there was no limit other than the classical 2 billion column limit that already existed in Thrift. Update: yes, it is limited to 65,536 elements, as GlynD mentions above. The related JIRA is CASSANDRA-5428.
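To illustrate the append vs. insert-at-index point, here is a small sketch using the DataStax Java driver. The keyspace/table names are invented for the example, and IF NOT EXISTS assumes Cassandra 2.0+.

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class CollectionDemo {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        session.execute("CREATE KEYSPACE IF NOT EXISTS demo WITH replication = " +
                "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE IF NOT EXISTS demo.events (id int PRIMARY KEY, tags list<text>)");
        // Append to the list: fast, no read-before-write
        session.execute("UPDATE demo.events SET tags = tags + ['appended'] WHERE id = 1");
        // Set the element at index 0: triggers an internal read-before-write
        session.execute("UPDATE demo.events SET tags[0] = 'replaced' WHERE id = 1");
        // Collections are always fetched whole; there is no server-side slicing
        System.out.println(session.execute("SELECT tags FROM demo.events WHERE id = 1").one());
        cluster.close();
    }
}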
Cassandra
18,573,507
12
I am using the DataStax driver as a client for connecting to Cassandra. I have successfully connected to the Cassandra cluster/keyspace/column families through Java, and I am now trying to fire some queries on a Cassandra column family through Java. It works for simple queries like ResultSet results = session.execute("select * from demodb.customer where id = 1"); Now I want to take the id parameter from the user and pass it to the session.execute() statement. How should I go about it?
Here is a code example of inserting data about an image using a prepared statement:

PreparedStatement statement = getSession().prepare(
    "INSERT INTO pixelstore.image " +
    "(image_name, " +
    " upload_time, " +
    " upload_by, " +
    " file_type, " +
    " file_size" +
    ") VALUES (?, ?, ?, ?, ?);");

// create the bound statement and initialise it with your prepared statement
BoundStatement boundStatement = new BoundStatement(statement);
session.execute( // this is where the query is executed
    boundStatement.bind( // here you are binding the 'boundStatement'
        "background", TimeUtil.getTimeUUID(), "lyubent", "png", "130527"));

There have been two recent blog posts on Planet Cassandra with a demo of what the driver can do; they contain code examples, so check them out: Materialized View with Cassandra and DataStax Java Driver; Small Java Application using DataStax Java Driver and Cassandra 1.2 working
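To directly answer the original question (a SELECT with a user-supplied id), the same pattern applies. A minimal sketch, assuming the demodb.customer table from your question, an already-open Session named session, and imports from com.datastax.driver.core:

PreparedStatement select = session.prepare("SELECT * FROM demodb.customer WHERE id = ?");
int userId = 1; // substitute the value taken from the user
ResultSet results = session.execute(select.bind(userId));
for (Row row : results) {
    System.out.println(row);
}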
Cassandra
17,419,142
12
What is the command to update a column family and alter its gc_grace_seconds value using cassandra-cli?
For cqlsh:

alter table <table_name> with GC_GRACE_SECONDS = <timeout>;

For example:

alter table yawn with GC_GRACE_SECONDS = 3600;

where yawn is our table name and 3600 seconds is one hour.
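Since the question asked about cassandra-cli specifically: there the attribute is named gc_grace rather than gc_grace_seconds (quoted from memory, so verify with help update column family; in your version):

update column family yawn with gc_grace = 3600;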
Cassandra
15,526,379
12
What is the maximum number of keyspaces allowed in a Cassandra cluster? The wiki page on limitations doesn't mention one. Is there such a limit?
A keyspace is basically just a Map entry to Cassandra... you can have as many as you have memory for. Millions, easily. ColumnFamilies are more expensive, since Cassandra will reserve a minimum of 1MB for each CF's memtable: http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-0-performance
Cassandra
11,256,881
12
In the cassandra-cli get result below, we are able to retrieve the column names and values. But how do we retrieve the timestamps? And is there a better way to get values by using the timestamp?

[default@sample] get user[bob];
=> (column=name, value=bobdroid, timestamp=1335361733545850)
=> (column=email, [email protected], timestamp=1335361733545850)
=> (column=age, value=23, timestamp=1335361733545850)
=> (column=password, value=MTIz, timestamp=1335361733545850)
Returned 4 results.
Elapsed time: 4 msec(s).
Just ran across this thread, and found that the answer is out of date. CQL now exposes the internal timestamps using the writetime() function: select key,columnfoo,writetime(columnfoo) from tablebar;
Cassandra
10,346,839
12
I'm planning a side project where I will be dealing with time-series-like data and would like to give one of those shiny new NoSQL DBs a try, so I am looking for a recommendation. For a (growing) set of symbols I will have a list of (time, value) tuples (increasing over time). Not all symbols will be updated: some symbols may be updated while others may not, and completely new symbols may be added. The database should therefore allow: Adding symbols with an initial one-element (tuple) list, e.g. A: [(2012-04-14 10:23, 50)]. Updating symbols with a new tuple (append that tuple to the list of that symbol). Reading the data for a given symbol (ideally even letting me specify the time frame for which the data should be returned). The create and update operations should possibly be atomic. If reading multiple symbols at once is possible, that would be interesting. Performance is not critical; updates/creates will happen roughly once every few hours.
I believe literally all the major NoSQL databases will support that requirement, especially if you don't actually have a large volume of data (which begs the question: why NoSQL?). That said, I recently had to design and work with a NoSQL database for time series data, so I can give some input on that design, which can then be extrapolated to the others. Our chosen database was Cassandra, and our design was as follows: a single keyspace for all 'symbols'; each symbol was a new row; each time entry was a new column in that symbol's row; each value (there can be more than a single value) was the value part of the time entry. This lets you achieve everything you asked for, most notably reading the data for a single symbol, using a range if necessary (column range calls). Although you said performance wasn't critical, it was for us, and this design was quite performant too: all data for any single symbol is by definition sorted (column name sort) and always stored on the same node (no cross-node communication for simple queries). Finally, this design translates well to other NoSQL databases that have dynamic columns. Further to this, here's some information on using MongoDB (and capped collections if necessary) as a time series store: MongoDB as a Time Series Database. Finally, here's a discussion of SQL vs NoSQL for time series: https://dba.stackexchange.com/questions/7634/timeseries-sql-or-nosql I can add the following to that discussion: The learning curve for NoSQL will be higher; you don't get the added flexibility and functionality for free in terms of 'soft costs'. Who will be supporting this database operationally? If you expect this functionality to grow in future (either more fields added to each time entry, or much larger capacity in terms of the number of symbols or the size of each symbol's time series), then definitely go with NoSQL. The flexibility benefit is huge, and the scalability you get (with the above design) on both the 'per symbol' and 'number of symbols' axes is almost unbounded. I say almost unbounded because the maximum number of columns per row is in the billions, while the maximum number of rows per keyspace is, I believe, unbounded.
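For what it's worth, the same row-per-symbol design can be written down in CQL3 terms, where the clustering column plays the role of the dynamic column name. A hedged sketch using the DataStax Java driver; the keyspace/table/column names are invented, and it assumes a CQL3-capable Cassandra:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public class TimeSeriesSketch {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
        Session session = cluster.connect();
        session.execute("CREATE KEYSPACE IF NOT EXISTS ts WITH replication = " +
                "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        // One partition per symbol; each (time, value) tuple is a clustered column in that row
        session.execute("CREATE TABLE IF NOT EXISTS ts.series (" +
                "symbol text, time timestamp, value double, " +
                "PRIMARY KEY (symbol, time))");
        // Create and update are the same write, atomic within the symbol's row
        session.execute("INSERT INTO ts.series (symbol, time, value) " +
                "VALUES ('A', '2012-04-14 10:23:00', 50)");
        // Read one symbol, optionally restricted to a time frame (a sorted column-range read)
        System.out.println(session.execute(
                "SELECT time, value FROM ts.series WHERE symbol = 'A' " +
                "AND time >= '2012-04-01' AND time < '2012-05-01'").all());
        cluster.close();
    }
}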
Cassandra
10,157,931
12
I'm studying Apache Cassandra version 0.7.6 with Java and Hector, and I tried to create a cluster and a keyspace, and to insert a column into that newly created keyspace. From looking at examples I understood that a keyspace is equivalent to a database in SQL databases, and that column families are equivalent to tables. Knowing this, I tried to create my simple example structure:

Cluster tutorialCluster = HFactory.getOrCreateCluster("TutorialCluster", "127.0.0.1:9160");
ConfigurableConsistencyLevel ccl = new ConfigurableConsistencyLevel();
ccl.setDefaultReadConsistencyLevel(HConsistencyLevel.ONE);
Keyspace tutorialKeyspace = HFactory.createKeyspace("Tutorial", tutorialCluster, ccl);
Mutator<String> mutator = HFactory.createMutator(tutorialKeyspace, stringSerializer);
mutator.addInsertion("CA Burlingame", "StateCity",
    HFactory.createColumn(650L, "37.57x122.34", longSerializer, stringSerializer));
MutationResult mr = mutator.execute();

But when I run this with Cassandra started, it throws an exception:

Exception in thread "main" me.prettyprint.hector.api.exceptions.HInvalidRequestException: InvalidRequestException(why:Keyspace Tutorial does not exist)
at me.prettyprint.cassandra.connection.HThriftClient.getCassandra(HThriftClient.java:70)
at me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:226)

But I already created the "Tutorial" keyspace and used it in the mutator.
The createKeyspace() call in HFactory creates a Hector Keyspace object for local use, but it does not actually create a keyspace in Cassandra. For that you want to use the addKeyspace() and addColumnFamily() methods on the actual Cluster object: https://github.com/rantav/hector/blob/master/core/src/main/java/me/prettyprint/hector/api/Cluster.java#L117
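A minimal sketch of what that can look like with the Hector API, using the keyspace and column family names from the question (written from memory, so treat the exact method signatures as an assumption and check them against your Hector version):

import java.util.Arrays;

import me.prettyprint.hector.api.Cluster;
import me.prettyprint.hector.api.ddl.ColumnFamilyDefinition;
import me.prettyprint.hector.api.ddl.KeyspaceDefinition;
import me.prettyprint.hector.api.factory.HFactory;

public class CreateTutorialKeyspace {
    public static void main(String[] args) {
        Cluster cluster = HFactory.getOrCreateCluster("TutorialCluster", "127.0.0.1:9160");
        if (cluster.describeKeyspace("Tutorial") == null) {
            ColumnFamilyDefinition cfDef =
                HFactory.createColumnFamilyDefinition("Tutorial", "StateCity");
            KeyspaceDefinition ksDef = HFactory.createKeyspaceDefinition(
                "Tutorial", "org.apache.cassandra.locator.SimpleStrategy", 1, Arrays.asList(cfDef));
            // true = block until the schema change has propagated
            cluster.addKeyspace(ksDef, true);
        }
        // Only after this will HFactory.createKeyspace("Tutorial", cluster, ccl)
        // refer to a keyspace that actually exists on the server.
    }
}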
Cassandra
9,168,538
12
I'm trying to do some research to find the best option for session management in a multi-server environment and was wondering what people have found successful, and why. Pros and cons: RDBMS: slower; better used for other data. Memcached: you can't take down a memcached server without losing sessions. Redis: fixes that problem of memcached, but what about ease of scalability and fault tolerance? Cassandra: has good fault tolerance. Pros and cons? MongoDB, others? Thanks!
Personally, I use Cassandra to persist PHP session data. It is stored in a single column on a single row, as session_id:{session_data_as_json}, and I set the TTL on the column so that garbage cleanup happens automatically. Works a treat. I went with Cassandra because it already holds all our other user data. For caching, I enabled APC on all front-end webservers and haven't had any issues. Is this the best approach? Not sure; it was fit for purpose for the environment, technologies and business rules I needed to satisfy. Side note: I did start working on a native PHP -> Cassandra session handler, https://github.com/sdolgy/php-cassandra-sessions; this shows how the TTLs are set with phpcassa and Cassandra.
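The TTL mechanism is not PHP-specific. In CQL terms the same idea looks like this (a hypothetical sketch in the DataStax Java driver style of the earlier answers; the table name and values are invented, and session is an already-open Session):

session.execute("CREATE TABLE IF NOT EXISTS demo.sessions (session_id text PRIMARY KEY, data text)");
// USING TTL makes Cassandra expire the row automatically: the "garbage cleanup" mentioned above
session.execute("INSERT INTO demo.sessions (session_id, data) " +
        "VALUES ('abc123', '{\"user\": 42}') USING TTL 1800"); // a 30-minute session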
Cassandra
8,570,659
12
I am trying to find out whether Cassandra has any limitations on node hardware specs, such as a maximum amount of storage per node. I intend to use a couple of nodes with 48TB of storage each (2TB x 24 hard drives, 7200rpm) with some good dual Xeon processors. I have looked for any such limitation but didn't find any material on this issue. Also, why is there so little buzz about Cassandra recently, while it is getting mature and is up to version 0.8, with most articles/blogs relating to 0.6 only?
Cassandra distributes its data by row, so the only hard limitation is that a row must be able to fit on a single node. So the short answer is no. The longer answer is that you'll want to make sure you set up separate storage areas for your permanent data and your commit logs. One other thing to keep in mind is that you'll still run into seek-speed issues. One of the nice things about Cassandra is that you don't need a single node with that much data (and in fact it's probably not well advised: your storage will outpace your processing power). If you use smaller nodes (hard-drive-space wise), then your storage and processing capabilities will scale together.
Cassandra
7,190,573
12
I have used relational DBs a lot and decided to venture out to the other types available. This particular product looks good and promising: http://neo4j.org/ Has anyone used graph-based databases? What are the pros and cons from a usability perspective? Have you used them in a production environment? What was the requirement that prompted you to use them?
I used a graph database in a previous job. We weren't using neo4j; it was an in-house thing built on top of Berkeley DB, but it was similar. It was used in production (it still is). The reason we used a graph database was that the data being stored by the system and the operations the system was doing with the data were exactly the weak spot of relational databases and exactly the strong spot of graph databases. The system needed to store collections of objects that lack a fixed schema and are linked together by relationships. To reason about the data, the system needed to do a lot of operations that would be a couple of traversals in a graph database, but that would be quite complex queries in SQL. The main advantages of the graph model were rapid development time and flexibility. We could quickly add new functionality without impacting existing deployments. If a potential customer wanted to import some of their own data and graft it on top of our model, it could usually be done on site by the sales rep. Flexibility also helped when we were designing a new feature, saving us from trying to squeeze new data into a rigid data model. Having a weird database let us build a lot of our other weird technologies, giving us lots of secret sauce to distinguish our product from those of our competitors. The main disadvantage was that we weren't using standard relational database technology, which can be a problem when your customers are enterprisey. Our customers would ask why we couldn't just host our data on their giant Oracle clusters (our customers usually had large datacenters). One of the team actually rewrote the database layer to use Oracle (or PostgreSQL, or MySQL), but it was slightly slower than the original. At least one large enterprise even had an Oracle-only policy, but luckily Oracle bought Berkeley DB. We also had to write a lot of extra tools; we couldn't just use Crystal Reports, for example. The other disadvantage of our graph database was that we built it ourselves, which meant that when we hit a problem (usually with scalability) we had to solve it ourselves. If we'd used a relational database, the vendor would have already solved the problem ten years ago. If you're building a product for enterprisey customers and your data fits the relational model, use a relational database if you can. If your application doesn't fit the relational model but does fit the graph model, use a graph database. If it only fits something else, use that. If your application doesn't need to fit into the current blub architecture, use a graph database, or CouchDB, or BigTable, or whatever fits your app and you think is cool. It might give you an advantage, and it's fun to try new things. Whatever you choose, try not to build the database engine yourself unless you really like building database engines.
Neo4j
1,000,162
136
I'm starting to develop with Neo4j using the REST API. I saw that there are two options for performing complex queries - Cypher (Neo4j's query language) and Gremlin (the general purpose graph query/traversal language). Here's what I want to know - is there any query or operation that can be done by using Gremlin and can't be done with Cypher? or vice versa? Cypher seems much more clear to me than Gremlin, and in general it seems that the guys in Neo4j are going with Cypher. But - if Cypher is limited compared to Gremlin - I would really like to know that in advance.
For general querying, Cypher is enough and is probably faster. The advantage of Gremlin over Cypher is when you get into high-level traversing. In Gremlin, you can better define the exact traversal pattern (or your own algorithms), whereas in Cypher the engine tries to find the best traversing solution itself. I personally use Cypher because of its simplicity and, to date, I have not had any situation where I had to use Gremlin (except when working with Gremlin's graphML import/export functions). I expect, however, that even if I did need Gremlin, I would use it for one specific query I found on the net and never come back to it again. You can learn Cypher really fast (in days) and then continue with the (longer-run) general Gremlin.
Neo4j
13,824,962
130
We can delete all nodes and relationships with the following query:

MATCH (n) OPTIONAL MATCH (n)-[r]-() DELETE n,r

But a newly created node still gets an internal id of ({last node internal id} + 1); it doesn't reset to zero. How can we reset the Neo4j database so that a newly created node gets id 0? (From 2.3 onwards we can delete all nodes with their relationships using: MATCH (n) DETACH DELETE n)
Shut down your Neo4j server, do a rm -rf data/graph.db and start up the server again. This procedure completely wipes your data, so handle with care.
Neo4j
23,310,114
118
I am using Neo4j for one of my projects. There is a node which has only a single property, name. I want to get that node using its ID; it already has an ID, but when I use this code: MATCH (s:SKILLS{ID:65110}) return s it returns nothing (here's my node). If the query is wrong, then how do I query the node using that number?
MATCH (s) WHERE ID(s) = 65110 RETURN s The ID function gets you the id of a node or relationship. This is different from any property called id or ID that you create.
Neo4j
22,369,520
111
I'm trying to create a query using Cypher that will "find" missing ingredients that a chef might have. My graph is set up like so: (ingredient_value)-[:is_part_of]->(ingredient) (ingredient) would have a key/value of name="dye colors". (ingredient_value) could have a key/value of value="red" and "is part of" the (ingredient, name="dye colors"). (chef)-[:has_value]->(ingredient_value)<-[:requires_value]-(recipe)-[:requires_ingredient]->(ingredient) I'm using this query to get all the ingredients, but not their actual values, that a recipe requires, but I would like to return only the ingredients that the chef does not have, instead of all the ingredients each recipe requires. I tried (chef)-[:has_value]->(ingredient_value)<-[:requires_value]-(recipe)-[:requires_ingredient]->(ingredient)<-[:has_ingredient*0..0]-chef but this returned nothing. Is this something that can be accomplished with Cypher/Neo4j, or is it best handled by returning all ingredients and sorting through them myself? Bonus: is there also a way to use Cypher to match all values that a chef has to all values that a recipe requires? So far I've only returned all partial matches from chef-[:has_value]->ingredient_value<-[:requires_value]-recipe and aggregated the results myself.
Update 01/10/2013: Came across this in the Neo4j 2.0 reference: Try not to use optional relationships. Above all, don't use them like this: MATCH a-[r?:LOVES]->() WHERE r IS NULL, where you just make sure that they don't exist. Instead do it like this:

MATCH (a) WHERE NOT (a)-[:LOVES]->()

Using Cypher to check that a relationship doesn't exist:

... MATCH source-[r?:someType]-target WHERE r is null RETURN source

The ? mark makes the relationship optional. OR, in Neo4j 2, do:

... OPTIONAL MATCH source-[r:someType]-target WHERE r is null RETURN source

Now you can check for a non-existing (null) relationship.
Neo4j
10,952,332
110
Is it possible to create/delete different databases in the graph database Neo4j like in MySQL? Or, at least, how to delete all nodes and relationships of an existing graph to get a clean setup for tests, e.g., using shell commands similar to rmrel or rm?
You can just remove the entire graph directory with rm -rf, because Neo4j is not storing anything outside that: rm -rf data/* Also, you can of course iterate through all nodes and delete their relationships and the nodes themselves, but that might be too costly just for testing ...
Neo4j
4,498,523
109
I know this question has been asked by many people already; from my research, here are some questions asked before: How to delete all relationships in neo4j graph? https://groups.google.com/forum/#!topic/neo4j/lgIaESPgUgE But after all that we still can't solve our problem: we just want to delete ALL nodes and ALL relationships, so that after deleting ALL we see 0 nodes, 0 properties and 0 relationships left. (This is the screenshot I took after executing the delete ALL suggested by the forum.) My question is still the same: how do I delete all nodes and all relationships in Neo4j?
As of 2.3.0 and up to 3.3.0:

MATCH (n) DETACH DELETE n

(Docs)

Pre 2.3.0:

MATCH (n) OPTIONAL MATCH (n)-[r]-() DELETE n,r

(Docs)
Neo4j
14,252,591
102
I can't find how to return a node's labels with Cypher. Does anybody know the syntax for this operation?
To get all distinct node labels: MATCH (n) RETURN distinct labels(n) To get the node count for each label: MATCH (n) RETURN distinct labels(n), count(*)
Neo4j
18,398,576
82
I know that there are similar questions on Stack Overflow, but I don't feel they answer the following. Graph databases, to my understanding, mostly store data following this schema: Table/Collection 1: store nodes with UID. Table/Collection 2: store relations referencing nodes via UID. This allows storing arbitrary types of graphs. Now, as I understand it, triple stores store nothing but triples: Triple/Collection 1: store triples (2 nodes, 1 relation). Now I would see the following distinction regarding use cases: Graph databases: when you have known, static connections. Triple stores: when you have loosely connected nodes and are often looking for new connections. I am confused by the fact that people do not seem to be discussing which one to use according to these criteria. Most articles I find talk about arguments like speed or compatibility. But is this not the most relevant point? Put the other way round: imagine having a clearly connected, user-defined graph. Why on earth would you want to store that as triples only, losing all the info about connections? Or having to implement some custom solution storing IDs in the triple subject. Imagine having loosely collected nodes that you want to query for unknown relations using SPARQL. Graph databases do support that, but for this they have to build another index, I assume, and would be slower? EDIT: I see that "losing info about connections" is the wrong way to put it. If you do as shown in the accepted answer and insert several triples for 2 nodes + 1 relation, then you keep all the info, specifically the info about which exact nodes are connected.
The main difference between graph databases and triple stores is how they model the graph. In a triple store (or quad store), the data tends to be very atomic. What I mean is that the "nodes" in the graph tend to be primitive data types like string, integer, date, etc. Relationships link primitives together, and so the "unit of discourse" in a triple store is a triple, and not a node or a relationship, typically. By contrast, other graph databases are often called "property stores" because nodes are data containers that correspond to objects in a domain. A node stands in for an object, and has properties; they act as rich data types specified by the graph modelers, more than just primitive data types. In these graph databases, nodes and relationships are the "unit of discourse". Let's say I have a person named "Bob" who knows "Susan". In RDF, it would be something like this: <http://example.org/person/1> :hasName "Bob". <http://example.org/person/1> foaf:knows <http://example.org/person/2>. <http://example.org/person/2> :hasName "Susan". In a graph database like neo4j, it would be this: (a:Person {name: "Bob"})-[:KNOWS]->(b:Person {name: "Susan"}) Notice that in RDF, it's 3 relationships but only one of those relationships actually expresses semantics between two entities. The other two relationships are just tracking properties of a single higher-level entity (the person). In neo4j, it's 1 relationship amongst two nodes, with each node having a property. In RDF you'll tend to identify things by URI, in neo4j it's a database object that gets a database ID automatically. That's what I mean about the difference between a more atomic/primitive store (triple stores) and a richer property graph. RDF and triple stores are mostly built for the kinds of architectural challenges you'd run into with the semantic web. For example, XML namespacing is built in, on the architectural assumption that you'll be mixing and matching the use of many different vocabularies and namespaces. (That right there is a very "semantic web" assumption). So in SPARQL and RDF you'll see typically at least the use of xsd, rdf, and rdfs namespaces concurrently, and probably also owl, skos, and many others. SPARQL and RDF/RDFS also have many hooks and features that are there explicitly to make things like ontology inference easier. You'll tend to identify things with URIs as a way of "namespacing your identifiers" but also because some people may want to de-reference the URI...again the assumption here is a wide data sharing arrangement between many parties. Property stores by contrast are keyed towards different use cases, like flexible modeling of data within one model/namespace, mappings between objects and graphs for persistence of enterprise applications, rapid evolvability, and so on. You'll tend to identify things with your own scheme (or an internal database ID). An auto-incrementing integer may not be best form of ID for any random consumer on the web, (and they certainly can't be de-referenced like URLs) but they might not be your first thought for a company internal application. So which is better? The more atomic triple store format, or a rich property graph? Do you need to mix and match many different vocabularies in one query or data model? Do you need to create an OWL ontology or do inference? Do you need to serialize a bunch of java objects in memory to a database? Do you need to do fast traversal of long paths? Those types of questions would guide your selection. 
Graphs are graphs; both of them do graphs, so I don't think there's much difference in terms of what they can represent, or how you go about thinking about a problem in "graph terms". The differences boil down to the architecture under the hood and what sorts of use cases you think you'll need. I won't tell you one is better than the other, but choose wisely.
Neo4j
30,166,007
81
Currently ulimit -n shows 10000. I want to increase it to 40000. I've edited /etc/sysctl.conf and put fs.file-max=40000. I've also edited /etc/security/limits.conf and updated the hard and soft values. But ulimit still shows 10000. After making all these changes I rebooted my laptop. I have access to the root password.

usr_name@usr_name-lap:/etc$ /sbin/sysctl fs.file-max
fs.file-max = 500000

I added the following lines in /etc/security/limits.conf:

* soft nofile 40000
* hard nofile 40000

I also added the following line in /etc/pam.d/su:

session required pam_limits.so

I've tried every possible way suggested on other forums, but I can reach a maximum limit of 10000, not beyond that. What can be the issue? I'm making this change because Neo4j throws a maximum open file limits reached error.
What you are doing will not work for the root user. Maybe you are running your services as root, and hence you don't see the change. To increase the ulimit for the root user you should replace the * by root; * does not apply to the root user. The rest is the same as what you did. I will re-quote it here. Add the following lines to /etc/security/limits.conf:

root soft nofile 40000
root hard nofile 40000

And then add the following line to /etc/pam.d/common-session:

session required pam_limits.so

This will update the ulimit for the root user. As mentioned in the comments, you may not even have to reboot to see the change.
Neo4j
21,515,463
79
I would like to do a search starting the traversal from 2 labels (an OR condition). For example, I need to find all the nodes which have either the label 'Male' or the label 'Female' and whose name property matches =~ '.ail.'.
You can put this condition in the WHERE clause:

MATCH (n) WHERE n:Male OR n:Female RETURN n

EDIT: As @tbaum points out, this performs an AllNodesScan. I wrote the answer when labels were fairly new and expected the query planner to eventually implement it with a NodeByLabelScan for each label, as it does for the single-label case:

MATCH (n) WHERE n:Male RETURN n

I still think this is a reasonable expression of the query, and that it is reasonable to expect the query planner to implement it with label scans, but as of Neo4j 2.2.3 the query is still implemented with an AllNodesScan and a label filter. Here, therefore, is a more verbose alternative. Since the label disjunction signifies a set union, and this union can be expressed in different ways, we can express it in a way that the query planner implements without scanning all nodes, instead starting with a NodeByLabelScan per label:

MATCH (n:Male) WHERE n.name =~ '.ail.' RETURN n
UNION
MATCH (n:Female) WHERE n.name =~ '.ail.' RETURN n

This means expressing the query once for each label and joining the parts with an explicit UNION. This is not unreasonable, at least for a small number of labels, but it's not clear to me why the query planner shouldn't be able to infer the same implementation from the simpler query, so I have opened a github issue here.
Neo4j
20,003,769
66
It appears that LIKE is not supported in Cypher queries. Is there any other construct that would perform the same task? For instance: start n = node(*) where n.Name LIKE('%SUBSTRING%') return n.Name, n;
using regular expressions: http://neo4j.com/docs/developer-manual/current/#query-where-regex start n = node(*) where n.Name =~ '.*SUBSTRING.*' return n.Name, n;
Neo4j
13,828,953
66
How can I add a label to an existing node using a Cypher query?
That's in the reference docs, see http://docs.neo4j.org/chunked/stable/query-set.html#set-set-a-label-on-a-node — you need to use SET to add a label to an existing node: match (n {id:desired-id}) set n :newLabel return n
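If the node needs several labels, SET can chain them in one clause — a minimal sketch, assuming a string id property ('desired-id' is a placeholder):
MATCH (n {id:'desired-id'}) SET n:Label1:Label2 RETURN n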
Neo4j
21,625,081
58
I'm new to Neo4j - just started playing with it yesterday evening. I've notice all nodes are identified by an auto-incremented integer that is generated during node creation - is this always the case? My dataset has natural string keys so I'd like to avoid having to map between the Neo4j assigned ids and my own. Is it possible to use string identifiers instead?
Think of the node-id as an implementation detail (like the rowid of relational databases: it can be used to identify nodes, but should not be relied upon never to be reused). You would add your natural keys as properties to the node and then index your nodes with the natural key (or enable auto-indexing for them). E.g. in the Java API: Index<Node> idIndex = db.index().forNodes("identifiers"); Node n = db.createNode(); n.setProperty("id", "my-natural-key"); idIndex.add(n, "id", n.getProperty("id")); // later Node n = idIndex.get("id","my-natural-key").getSingle(); // node or null With the auto-indexer you would enable auto-indexing for your "id" field. // via configuration GraphDatabaseService db = new EmbeddedGraphDatabase("path/to/db", MapUtils.stringMap( Config.NODE_KEYS_INDEXABLE, "id", Config.NODE_AUTO_INDEXING, "true" )); // programmatic (not persistent) db.index().getNodeAutoIndexer().startAutoIndexingProperty( "id" ); // Nodes with property "id" will be automatically indexed at tx-commit Node n = db.createNode(); n.setProperty("id", "my-natural-key"); // Usage ReadableIndex<Node> autoIndex = db.index().getNodeAutoIndexer().getAutoIndex(); Node n = autoIndex.get("id","my-natural-key").getSingle(); See: http://docs.neo4j.org/chunked/milestone/auto-indexing.html And: http://docs.neo4j.org/chunked/milestone/indexing.html
Neo4j
9,051,442
56
Any rule of thumb on where to use a label vs a node property vs a relationship + node? Let's have an example: say I have a store and I want to put my products in neo4j. Their identifier is the product sku, and I also want to have a categorization on them, like this one is for clothes, food, electronics, and you get the idea. I'll be having a free search on my graph, like the user can search anything, and I'd return all the things related to that search string. Would it be better to use: I have a node with sku 001, and I'll tag it with a label of Food. I have a node with sku 001, and have a property on this node called category:"Food" I have a node with sku 001, and I'll create another node for the Food, and will create a relationship of "category" to relate them. I have read that if you'll be looking up a property, it's better off as a relationship + node, as traversing is much faster than looking up properties of a node. TIA
Whether you should use a property, a label or a node for the category depends on how you will be querying the data. (I'll assume here that you have a fairly small, fairly fixed set of categories.) Use a property if you won't be querying by category, but just need to return the category of a node that has been found by other means. (For example: what is the category of the item with sku 001?) Use a label if you need to query by category. (For example: what are all the foods costing less than $10?) Use a node if you need to traverse the category without knowing what it is. (For example: what are the ten most popular items in the same category as one that the user has chosen?)
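To make the three options concrete, here is a minimal Cypher sketch; the Item label, the sku/price/category properties and the IN_CATEGORY relationship type are hypothetical names for illustration:
MATCH (i:Item {sku:'001'}) RETURN i.category // property: read the category of a known item
MATCH (f:Food) WHERE f.price < 10 RETURN f // label: query by category
MATCH (:Item {sku:'001'})-[:IN_CATEGORY]->(c)<-[:IN_CATEGORY]-(other:Item) RETURN other LIMIT 10 // node: traverse the category without naming it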
Neo4j
22,340,475
55
I will make an application with a lot of similar items (millions), and I would like to store them in a MySQL database, because I would like to do a lot of statistics and search on specific values for specific columns. But at the same time, I will store relations between all the items, that are related in many connected binary-tree-like structures (transitive closure), and relation databases are not good at that kind of structures, so I would like to store all relations in Neo4j which have good performance for this kind of data. My plan is to have all data except the relations in the MySQL database and all relations with item_id stored in the Neo4j database. When I want to lookup a tree, I first search the Neo4j for all the item_id:s in the tree, then I search the MySQL-database for all the specified items in a query that would look like: SELECT * FROM items WHERE item_id = 45 OR item_id = 345435 OR item_id = 343 OR item_id = 78 OR item_id = 4522 OR item_id = 676 OR item_id = 443 OR item_id = 4255 OR item_id = 4345 Is this a good idea, or am I very wrong? I haven't used graph-databases before. Are there any better approaches to my problem? How would the MySQL-query perform in this case?
A few thoughts on this: I would try modelling your Neo4j domain model to include the attributes of each node in the graph. By separating your data into two different data stores you might limit some operations that you might want to do. I guess it comes down to what you will be doing with your graph. If, for example, you want to find all the nodes connected to a specific node whose attributes (i.e. name, age, whatever) are certain values, would you first have to find the correct node ID in your MySQL database and then go into Neo4j? This just seems slow and overly complicated when you could do all this in Neo4j. So the question is: will you need the attributes of a node when traversing the graph? Will your data change or is it static? Having two separate data stores will complicate matters. Whilst generating statistics using a MySQL database might be easier than doing everything in Neo4j, the code required to traverse a graph to find all the nodes that meet defined criteria isn't overly difficult. What these stats are should drive your solution. I can't comment on the performance of the MySQL query to select node ids. I guess that comes down to how many nodes you will need to select and your indexing strategy. I agree about the performance side of things when it comes to traversing a graph though. This is a good article on just this: MySQL vs. Neo4j on a Large-Scale Graph Traversal and in this case, when they say large, they only mean a million vertices/nodes and four million edges. So it wasn't even a particularly dense graph.
Neo4j
2,541,891
54
I'm trying out Neo4j for the first time. I'm using the 2.0-RC1 community edition. I've created some nodes: MERGE (u:User{username:'admin',password:'admin'}) MERGE (r1:Role{name:'ROLE_ADMIN'}) MERGE (r2:Role{name:'ROLE_WEB_USER'}) MERGE (r3:Role{name:'ROLE_REST_USER'}) and now I want to add relationships between the nodes. However, I don't want to clear out the existing database created with the script above, add the statements and run it again. I want to add relationships to the existing nodes. Google helped me find this: START n=node(*), m=node(*) where has(n.username) and has(m.name) and n.username = 'admin' and m.name = 'ROLE_WEB_USER' create (n)-[:HAS_ROLE]->(m) Which works fine (even though I don't understand all the syntax). However, I am aware that this finds any node with a username property and any node with a name property, instead of using labels to check that it has the right type of node. How can I do the same using labels?
In Neo4j 2.0 you can create schema indexes for your labels and the properties you use for lookup: CREATE INDEX ON :User(username) CREATE INDEX ON :Role(name) To create relationships you might use: MATCH (u:User {username:'admin'}), (r:Role {name:'ROLE_WEB_USER'}) CREATE (u)-[:HAS_ROLE]->(r) The MATCH will use an index if possible. If there is no index, it will lookup up all nodes carrying the label and see if the property matches. N.B. the syntax above will only work with Neo4j 2.0.0-RC1 and above.
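If the statement may run more than once and you want to avoid duplicate relationships, MERGE works on the relationship too — a sketch using the same labels and properties as above:
MATCH (u:User {username:'admin'}), (r:Role {name:'ROLE_WEB_USER'}) MERGE (u)-[:HAS_ROLE]->(r)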
Neo4j
20,456,002
52
Using Cypher how can I get all nodes in a graph? I am running some testing against the graph and I have some nodes without relationships so am having trouble crafting a query. The reason I want to get them all is that I want to delete all the nodes in the graph at the start of every test.
So, this gives you all nodes: MATCH (n) RETURN n; If you want to delete everything from a graph, you can do something like this: MATCH (n) OPTIONAL MATCH (n)-[r]-() DELETE n, r; Updated for 2.0+ Edit: Now in 2.3 they have DETACH DELETE, so you can do something like: MATCH (n) DETACH DELETE n;
Neo4j
12,903,873
52
What is the difference between graph-based databases (http://neo4j.org/) and object-oriented databases (http://www.db4o.com/)?
I'd answer this differently: object and graph databases operate on two different levels of abstraction. An object database's main data elements are objects, the way we know them from an object-oriented programming language. A graph database's main data elements are nodes and edges. An object database does not have the notion of a (bidirectional) edge between two things with automatic referential integrity etc. A graph database does not have the notion of a pointer that can be NULL. (Of course one can imagine hybrids.) In terms of schema, an object database's schema is whatever the set of classes is in the application. A graph database's schema (whether implicit, by convention of what String labels mean, or explicit, by declaration as models as we do it in InfoGrid for example) is independent of the application. This makes it much simpler, for example, to write multiple applications against the same data using a graph database instead of an object database, because the schema is application-independent. On the other hand, using a graph database you can't simply take an arbitrary object and persist it. Different tools for different jobs I would think.
Neo4j
2,218,118
52
I'm looking for something similar to the MySQL ( SHOW INDEXES ). I was able to get a list of indexes using py2neo in Python graphDB = neo4j.GraphDatabaseService() indexes = graphDB.get_indexes(neo4j.Node) print(format(indexes)) but I wanted to know if there's a way to do something similar in Cypher.
neo4j 3.1 now supports this as a built-in procedure that you can CALL from Cypher: CALL db.indexes(); http://neo4j.com/docs/operations-manual/3.1/reference/procedures/
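In Neo4j 4.2 and later, the procedure has in turn been superseded by a dedicated Cypher command:
SHOW INDEXES;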
Neo4j
19,801,599
51
I am currently on design phase of a MMO browser game, game will include tilemaps for some real time locations (so tile data for each cell) and a general world map. Game engine I prefer uses MongoDB for persistent data world. I will also implement a shipping simulation (which I will explain more below) which is basically a Dijkstra module, I had decided to use a graph database hoping it will make things easier, found Neo4j as it is quite popular. I was happy with MongoDB + Neo4J setup but then noticed OrientDB , which apparently acts like both MongoDB and Neo4J (best of both worlds?), they even have VS pages for MongoDB and Neo4J. Point is, I heard some horror stories of MongoDB losing data (though not sure it still does) and I don't have such luxury. And for Neo4J, I am not big fan of 12K€ per year "startup friendly" cost although I'll probably not have a DB of millions of vertexes. OrientDB seems a viable option as there may be also be some opportunities of using one database solution. In that case, a logical move might be jumping to OrientDB but it has a small community and tbh didn't find much reviews about it, MongoDB and Neo4J are popular tools widely used, I have concerns if OrientDB is an adventure. My first question would be if you have any experience/opinion regarding these databases. And second question would be which Graph Database is better for a shipping simulation. Used Database is expected to calculate cheapest route from any vertex to any vertex and traverse it (classic Dijkstra). But also have to change weights depending on situations like "country B has embargo on country A so any item originating from country A can't pass through B, there is flood at region XYZ so no land transport is possible" etc. Also that database is expected to cache results. I expect no more than 1000 vertexes but many edges. Thanks in advance and apologies in advance if questions are a bit ambiguous PS : I added ArangoDB at title but tbh, hadn't much chance to take a look. Late edit as of 18-Apr-2016 : After evaluating responses to my questions and development strategies, I decided to use ArangoDB as their roadmap is more promising for me as they apparently not trying to add tons of hype features that are half baked.
Disclaimer: I am the author and owner of OrientDB. As a developer, in general, I don't like companies that hide costs and let you play with their technology for a while and, as soon as you're tied to it, start asking for money. Actually, once you've invested months developing an application that uses a non-standard language or API, you're screwed: pay, or migrate the application at huge cost. You know, OrientDB is FREE for any usage, even commercial. Furthermore OrientDB supports standards like SQL (with extensions) and the main Java API is the TinkerPop Blueprints, the "JDBC" standard for graph databases. Furthermore OrientDB also supports Gremlin. The OrientDB project is growing every day with new contributors and users. The Community Group (a free channel to ask for support) is the most active community in the graph database market. If you have doubts about which graph database to use, my suggestion is to pick what is closest to your needs, but then use standards as much as you can. In this way an eventual switch would have a low impact.
Neo4j
26,704,134
49
How do you create multiple databases on one server using neo4j? I have multiple clients, and I want to separate all client information into different database to avoid data leaks.
You need to have multiple Neo4j installations with different port configurations in conf/neo4j.properties and conf/neo4j-server.properties. Alternatively you might use some virtualization or container tool like http://docker.io for a more sophisticated approach.
Neo4j
25,659,378
49
How can I show all nodes and relationships in Data Browser tab? What are sample index queries that I can type in in search field?
You may also want to try a cypher query such as: START n=node(*) RETURN n; It's very obvious, and it will return all the existing nodes in the database. EDIT : the following displays the nodes and the relationships : START n=node(*) MATCH (n)-[r]->(m) RETURN n,r,m;
Neo4j
8,372,788
48
Is it possible to run a case-insensitive cypher query on neo4j? Try that: http://console.neo4j.org/ When I type into this: start n=node(*) match n-[]->m where (m.name="Neo") return m it returns one row. But when I type into this: start n=node(*) match n-[]->m where (m.name="neo") return m it does not return anything; because the name is saved as "Neo". Is there a simple way to run case-insensitive queries?
Yes, by using case insensitive regular expressions: WHERE m.name =~ '(?i)neo' https://neo4j.com/docs/cypher-manual/current/clauses/where/#case-insensitive-regular-expressions
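On Neo4j 3.0+, an alternative that avoids regular expressions is to normalize the case of both sides with toLower():
MATCH (n)-[]->(m) WHERE toLower(m.name) = 'neo' RETURN m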
Neo4j
13,439,278
46
Greetings, Is there any open source graph database available other than Neo4J? NOTE: Why not Neo4J? Neo4J is open source, but counts primitives (number of nodes, relationships & properties) if you are using it for commercial use, and does not have any straightforward pricing information on the official website, so there can be potential vendor lock-in. (Although I have just started my company, and don't have budget to spend money on software anyway.) So it is out of the question. Regards,
OrientDB (old link) appears to support graph storage in much the same way as Neo4j
Neo4j
1,754,628
44
In SQL: Delete From Person Where ID = 1; In Cypher, what's the script to delete a node by ID? (Edited: ID = Neo4j's internal Node ID)
(Answer updated for 2024!) Assuming you're referring to Neo4j's internal element id: MATCH (p:Person) where elementId(p)=1 DETACH DELETE p Assuming you're referring to Neo4j's (legacy) internal id: MATCH (p:Person) where ID(p)=1 DETACH DELETE p If you're referring to your own property 'id' on the node: MATCH (p:Person {id:1}) DETACH DELETE p
Neo4j
28,144,751
43
Hi, I created a neo4j database with a custom Java application and tried to change the path in the configuration file in order to connect to the created database. While trying to check the data in the webadmin console, only node 0 is visible (it seems that the database is empty). I tried to import the same database into Gephi and it's not empty. Furthermore, when I tried to switch back to the original database, which also wasn't empty, only node 0 appeared in webadmin. I tried to modify the neo4j-server.properties file the following way: #***************************************************************** # Administration client configuration #***************************************************************** # location of the servers round-robin database directory. possible values: # - absolute path like /var/rrd # - path relative to the server working directory like data/rrd # - commented out, will default to the database data directory. org.neo4j.server.webadmin.rrdb.location=data/rrd # REST endpoint for the data API # Note the / in the end is mandatory #org.neo4j.server.webadmin.data.uri=/db/data/ #original database org.neo4j.server.webadmin.data.uri="/db/mydatabase" #my database # REST endpoint of the administration API (used by Webadmin) org.neo4j.server.webadmin.management.uri=/db/manage/ # Low-level graph engine tuning file org.neo4j.server.db.tuning.properties=conf/neo4j.properties After switching back to the original database (commenting the new path and uncommenting the old) org.neo4j.server.webadmin.data.uri=/db/data/ #original database #org.neo4j.server.webadmin.data.uri="/db/mydatabase" #my database the old one seemed to be empty as well. Does anyone know how and where to set the path in order to see the appropriate database in the webadmin console and to be able to execute queries on the desired database? Thank you!
You first need to confirm that the database you are connecting to was properly shut down (meaning you should not take an image of a running database). If you are in server mode, set the location of the database in the file conf/neo4j-server.properties by editing the line below: org.neo4j.server.database.location=data/graph.db If you are using embedded Neo4j you can set the location of your db while instantiating the GraphDatabaseService as follows: new EmbeddedGraphDatabase("Path To Db Directory");
Neo4j
10,888,280
42
Is there a way to create bidirectional relationship in Neo4j using Cypher? I would like the relationship to be bidirectional rather than making two unidirectional relationships in both directions For eg: (A)<-[FRIEND]->(B) Rather than: (A)-[FRIEND]->(B) (A)<-[FRIEND]-(B) Thanks in advance :)
No, there isn't. All relationships in neo4j have a direction, starting and ending at a given node. There are a small number of workarounds. Firstly, as you've suggested, we can either have two relationships, one going from A to B and the other from B to A. Alternatively, when writing our MATCH query, we can specify to match patterns directionlessly, by using a query such as MATCH (A)-[FRIEND]-(B) RETURN A, B which will not care about whether A is friends with B or vice versa, and allows us to choose a direction arbitrarily when we create the relationship.
Neo4j
24,010,932
40
Using Cypher, how can I find a node where a property doesn't exist? For example, I have two nodes: A = {foo: true, name: 'A'}, B = { name: 'B'} Now I'd like to find B, selecting it on the basis of not having the foo property set. How can I do this?
As Michael Hunger mentioned MATCH (n) WHERE NOT EXISTS(n.foo) RETURN n On older versions of Neo4j you can use HAS: # Causes error with later versions of Neo4j MATCH (n) WHERE NOT HAS(n.foo) RETURN n
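On Neo4j 4.x/5, where exists() on properties is deprecated and later removed, the idiomatic form is the IS NULL predicate:
MATCH (n) WHERE n.foo IS NULL RETURN n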
Neo4j
35,400,674
39
Is there a GUI tool which allows you to look at the contents of the Neo4j database visually.
The easiest is to start the neo4j server and view your graph via the webadmin: http://docs.neo4j.org/chunked/stable/tools-webadmin.html
Neo4j
10,814,336
39
I want to match between entities by multiple relationship types. Is it possible to say the following query: match (Yoav:Person{name:"Yoav"})-[:liked & watched & ... ]->(movie:Movie) return movie I need "and" between all the relation types; Yova liked and watched and .. a movie.
Yes, you can do something like: match (gal:Person{name:"Yoav"})-[:liked|:watched|:other]->(movie:Movie) return movie Take a look in the docs: Match on multiple relationship types EDIT: From the comments: I need "and" between the relation types.. you gave me an "or" In this case, you can do: match (Yoav:Person{name:"Yoav"})-[:liked]->(movie:Movie), (Yoav)-[:watched]->(movie), (Yoav)-[:other]->(movie) return movie
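Another way to get the "and" semantics without repeating the MATCH clause is to move the extra patterns into WHERE as pattern predicates — a sketch, reusing the hypothetical :other type from above:
MATCH (yoav:Person {name:"Yoav"})-[:liked]->(movie:Movie) WHERE (yoav)-[:watched]->(movie) AND (yoav)-[:other]->(movie) RETURN movie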
Neo4j
46,132,345
38
I would like to find out all the incoming and outgoing relationships for a node. I tried couple of queries suggested in other questions but not having much luck. These are the two I tried MATCH (a:User {username: "user6"})-[r*]-(b) RETURN a, r, b I only have 500 nodes and it runs forever. I gave up after an hour. I tried this MATCH (c:User {username : 'user6'})-[r:*0..1]-(d) WITH c, collect(r) as rs RETURN c, rs But I get this error WARNING: Invalid input '*': expected whitespace or a rel type name (line 1, column 35 (offset: 34)) "MATCH (c {username : 'user6'})-[r:*0..1]-(d)" What would be correct way to get all the relationships for a node? I'm using version 3.0.3
The simplest way to get all relationships for a single node is like this: MATCH (:User {username: 'user6'})-[r]-() RETURN r
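If you also want each relationship's type and its direction relative to the user, type() and startNode() can be added to the RETURN — a minimal sketch:
MATCH (u:User {username:'user6'})-[r]-(n) RETURN type(r), startNode(r) = u AS outgoing, n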
Neo4j
38,423,683
37
The version I use is neo4j-enterprise-2.2.0-M02 My question is : How can I configure a user (like add a new user, change the password ,etc) in backend or browser, instead of REST API? Can I do it via neo4j-shell? imagine that I am a DBA, it is not very convenient to do this by REST API. Any help will be greatly appreciated!
You can use the browser instead of the API. Just go to http://localhost:7474 (or whatever IP to which the web console is bound) and you will be prompted to change the password. Once authenticated, use the command :server change-password to change the password again. It is not yet possible to create multiple user accounts within the system. You can use the command :help server to see available authentication commands.
Neo4j
27,645,951
37
I am trying to make a database where, every time a node doesn't exist, it will create a new one and set a relationship between this node and another. If the node exists, both nodes get a relationship. My problem is that, if I try to connect 2 existing nodes, the 2nd node will be recreated. I tried with MERGE and CREATE UNIQUE; neither worked. My example code: CREATE (test1 {name:'1'}) MATCH (n) WHERE n.name = '1' MERGE (n)-[:know {r:'123'}]->(test3 {name:'3'}) MATCH (n) WHERE n.name = '1' MERGE (n)-[:know {r:'123'}]->(test2 {name:'2'}) Up to here it works, but with: MATCH (n) WHERE n.name = '3' MERGE (n)-[:know {r:'123'}]->(test2 {name:'2'}) it creates a new node "2" instead of connecting to the existing one.
When using MERGE on full patterns, the behavior is that either the whole pattern matches, or the whole pattern is created. MERGE will not partially use existing patterns — it’s all or nothing. If partial matches are needed, this can be accomplished by splitting a pattern up into multiple MERGE clauses. http://docs.neo4j.org/chunked/stable/query-merge.html MERGE (n)-[:know {r:'123'}]->(test2 {name:'2'}) will try to match the entire pattern and since it does not exist, it creates it. What you can do is: MERGE (n {name: '3'}) //Create if a node with name='3' does not exist else match it MERGE (test2 {name:'2'}) //Create if a node with name='2' does not exist else match it MERGE (n)-[:know {r:'123'}]->(test2) //Create the relation between these nodes if it does not already exist
Neo4j
24,015,854
37
I know how to remove a vertex by id in Gremlin. But now I'm need to cleanup the database. How do I delete multiple vertices? Deleting 1 v is like this: ver = g.v(1) g.removeVertex(ver) I mean something like SQL TRUNCATE. How do you remove the vertices / vertexes without removing the class?
In more recent terms as of Gremlin 2.3.0, removal of all vertices would be best accomplished with: g.V.remove() UPDATE: For version Gremlin 3.x you would use drop(): gremlin> graph = TinkerFactory.createModern() ==>tinkergraph[vertices:6 edges:6] gremlin> g = graph.traversal() ==>graphtraversalsource[tinkergraph[vertices:6 edges:6], standard] gremlin> g.V().drop().iterate() gremlin> graph ==>tinkergraph[vertices:0 edges:0] Note that drop() does not automatically iterate the Traversal as remove() did so you have to explicitly call iterate() for the deletion to occur. Iteration in the Gremlin Console is discussed in detail in this tutorial. Also, consider that different graph systems will potentially have their own methods for more quickly and efficiently removing all data in that system. For example, JanusGraph has this approach: JanusGraphFactory.drop(graph) where "graph" is a JanusGraph instance you want cleared out.
Neo4j
12,814,305
36
Recently I have been looking into graph databases like Neo4j and into logic programming in Prolog and miniKanren. From what I have learned so far, both allow specifying facts and relations between them, and also querying the resulting system for some selections. So, actually I cannot see much difference between them in that they both can be used to build a graph and query it, but using different syntax. However, they are presented as totally different kinds of software. Except the technicality that databases maybe propose a more space-time effective storage technology, and except that tiny logic cores like miniKanren are simpler and embeddable, what is the actual difference between graph databases and logic programming languages, if they are both just a graph database + query API?
No, logic programming as embodied by those things and neo4j are quite different. On one level, you're right that they conceptually both amount to graph storage and graph query. But for logic programming, it's only conceptually graph query, there's no guarantee that it's actually stored that way (where with neo4j, it is). Second, with logic programming you're usually trying to establish horn clauses that allow you to reason through lots of data. You can think of a horn clause as a simple rule, like "If a person is male, and is the direct ancestor of a biological child, that implies that person is a father". In cypher with neo4j, you would describe a graph pattern you wish to match, that results in data, e.g.: MATCH (p:Person)-[:father*]->(maleAncestor:Person) RETURN maleAncestor This tells to traverse the graph by father relationships, and return male ancestors. In a logic programming language, you wouldn't do it this way. You might specify that a being a father of b means that a is male, and a is an ancestor of b. This would implicitly and transitively state that for all valid a/b pairings. Then you'd ask a question, "who are the male ancestors"? The programming environment would then answer that by exploiting your rules. That would have the effect of building a traversal through the data that's very similar to the cypher I specified above, but the way you go about understanding your data and building that traversal is totally different. Logic programming languages usually work via predicate resolution. A graph query language like cypher works by a combination of pattern matching, and explicit path designation. They're very different.
Neo4j
29,192,927
35
Looking at Neo4j, and the 32 billion relationship limit has me worried (imagine 40 million users who upload 500 photos, have 500 friends, make 500 comments etc., and before you know it you are past 32 billion). So I have some concerns and have to make sure I'm making the best choice on which database to use. Not looking for subjective answers nor debate here - i.e. which one is better etc. - rather, since I'm betting a startup's future on which graph database it uses, I need to know the risks the different databases present, such as Neo4j not having more than 32 billion relationships. Now, several companies have called their graph databases the "leading graph database".. but let's look past the hype - which one has the most financial backing? Which DB enjoys large community support? Which one has a solid company behind it for commercial support? Which one is most likely to be mature enough so if you wanted, you could easily create Facebook with minimal effort? It's easy to choose a graph database on technical features or familiarity - but I'm looking for more than that - I want to make sure that a few years from now the company is still around. I want to make sure I'm not choosing to go with Neo4j based on hype and the momentum it currently (temporarily?) has... And what other graph databases can contend with Neo4j to create a full-fledged social network similar to Facebook (again, not looking for better, just looking for a solid competitor)? Please don't let this turn into a subjective Neo vs Dex debate - just facts and solid answers please.
Disclaimer: I work for/with Neo4j Just talking about the maturity here (not technicalities) - Neo Technology as a company with more than 50 employees, $25M funding and a thriving user-base with half a million downloads, 30k new databases running each month and an active community won't go away. You can also check the SO questions to see the community activity. We have a healthy set of customers in many domains from big ones like Adobe (runs creative cloud on Neo4j), Cisco (Org-Management, MDM), social networks like Viadeo and many Job search companies (GlassDoor, and others) to startups like fiftythree who published the popular "Paper" app on iOS. Our community site neo4j.org should be a good place to go, to get started, you find there introductory content as well as information on programming languages, drivers and deployments that should help you get started. Emil, Ian and Jim wrote an introductory book about "graph databases" with O'Reilly which is currently available as a free ebook download. So you see we're not just taking care about our own product but also the bigger graph ecosystem, also with many conference talks, meetup groups (41 worldwide) and support of the open source ecosystem. Hope that helps you deciding. P.S. Regarding your concerns: The size limits (which are artificially anyway) will be increased this year.
Neo4j
15,623,384
35
I'm defining the relationship between two entities, Gene and Chromosome, in what I think is the simple and normal way, after importing the data from CSV: MATCH (g:Gene),(c:Chromosome) WHERE g.chromosomeID = c.chromosomeID CREATE (g)-[:PART_OF]->(c); Yet, when I do so, neo4j (browser UI) complains: This query builds a cartesian product between disconnected patterns. If a part of a query contains multiple disconnected patterns, this will build a cartesian product between all those parts. This may produce a large amount of data and slow down query processing. While occasionally intended, it may often be possible to reformulate the query that avoids the use of this cross product, perhaps by adding a relationship between the different parts or by using OPTIONAL MATCH (identifier is: (c)). I don't see what the issue is. chromosomeID is a very straightforward foreign key.
The browser is telling you that: It is handling your query by doing a comparison between every Gene instance and every Chromosome instance. If your DB has G genes and C chromosomes, then the complexity of the query is O(GC). For instance, if we are working with the human genome, there are 46 chromosomes and maybe 25000 genes, so the DB would have to do 1150000 comparisons. You might be able to improve the complexity (and performance) by altering your query. For example, if we created an index on :Gene(chromosomeID), and altered the query so that we initially matched just on the node with the smallest cardinality (the 46 chromosomes), we would only do O(G) (or 25000) "comparisons" -- and those comparisons would actually be quick index lookups! This is approach should be much faster. Once we have created the index, we can use this query: MATCH (c:Chromosome) WITH c MATCH (g:Gene) WHERE g.chromosomeID = c.chromosomeID CREATE (g)-[:PART_OF]->(c); It uses a WITH clause to force the first MATCH clause to execute first, avoiding the cartesian product. The second MATCH (and WHERE) clause uses the results of the first MATCH clause and the index to quickly get the exact genes that belong to each chromosome. [UPDATE] The WITH clause was helpful when this answer was originally written. The Cypher planner in newer versions of neo4j (like 4.0.3) now generate the same plan even if the WITH is omitted, and without creating a cartesian product. You can always PROFILE both versions of your query to see the effect with/without the WITH.
Neo4j
33,352,673
34
Usually I can find everything I need already on SO but not this time. I'm looking for a very simple way to exclude labels, for example (pseudo code): match (n) where n not in (Label1, Label2) return n Sorry about crappy query. In short I have labels x,y,z and I want to return all of them apart from z. Thnx!
This should do it: MATCH (n) WHERE NOT n:Label1 AND NOT n:Label2 RETURN n;
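If the excluded labels come as a list (or a parameter), you can filter on labels(n) instead — a sketch:
MATCH (n) WHERE NONE(l IN labels(n) WHERE l IN ['Label1','Label2']) RETURN n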
Neo4j
32,817,075
34
I know I can create a unique constraint on a single property with Cypher like CREATE CONSTRAINT ON (p:Person) ASSERT p.name IS UNIQUE. But I was wondering whether it is possible to create a unique constraint which involves multiple properties. If so, how?
neo4j (2.0.1) does not currently support a uniqueness constraint that covers multiple properties simultaneously. However, I can think of a workaround that might be acceptable, depending on your use cases. Let's say you want properties a, b, and c to be unique as a group. You can add an extra property, d, that concatenates the stringified values of a, b, and c, using appropriate delimiter(s) to separate the substrings (such that, for example, the a/b delimiter is a character that never appears in a or b). You can then create a uniqueness constraint on d. [UPDATE] Neo4j 3.3 added support for uniqueness constraints that cover multiple properties, via node key constraints. However, this feature is only available in the Enterprise Edition. [UPDATE] Current versions of Neo4j Desktop now support: CREATE CONSTRAINT FOR (n:Person) REQUIRE (n.firstname, n.surname) IS NODE KEY
Neo4j
22,498,054
34
I understand it is possible to use the wildcard (*) symbol to return all references in a Cypher query, such as: MATCH p:Product WHERE p.price='1950' RETURN *; ==> +----------------------------------------------------------------+ ==> | p | ==> +----------------------------------------------------------------+ ==> | Node[686]{title:"Giorgio Armani Briefcase",price:"1950",... | ==> +----------------------------------------------------------------+ However, the result is a row with a single node 'column' named "p", from which the properties can be accessed. However, I'd like the result-set 'rows' to have the property names as 'columns'. Something like: MATCH p:Product WHERE p.price='1950' RETURN p.*; ==> +-------------------------------------------+ ==> | title | price | ... | ==> +-------------------------------------------+ ==> | "Giorgio Armani Briefcase" | "1950" | ... | ==> +-------------------------------------------+ That particular query isn't valid, but is there a way to achieve the same result (short of listing all the properties explicitly, as in p.title,p.price,p... )?
You can't do this in Cypher yet. I think it would be a nice feature though, if you want to request it. Edit (thanks for comment pointing it out): You can now do this as of 2.2: MATCH (p:Product) WHERE p.price='1950' RETURN keys(p);
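Since Neo4j 3.1, a map projection gets all properties back as a single map, which is usually closest to what is wanted here:
MATCH (p:Product) WHERE p.price='1950' RETURN p {.*};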
Neo4j
17,735,005
34
I can't find a way to change a relationship type in Cypher. Is this operation possible at all? If not: what's the best way achieve this result?
Unfortunately there is no direct change of rel-type possible at the moment. You can do: MATCH (n:User {name:"foo"})-[r:REL]->(m:User {name:"bar"}) CREATE (n)-[r2:NEWREL]->(m) // copy properties, if necessary SET r2 = r WITH r DELETE r
Neo4j
22,670,369
33
I have created a node with a wrong label. Is there any way to change node label or relationship type without re-creating it? I have tried something like MATCH n WHERE Id(n)=14 SET n.Labels = 'Person' but it is fault...
MATCH (n:OLD_LABEL {id:14}) REMOVE n:OLD_LABEL SET n:NEW_LABEL Guess this query explains itself.
Neo4j
22,542,802
33
I have two nodes user and files with a relationship :contains; the relationship has a property id which is an array, represented as (:user)-[:contains{id:[12345]}]->(:files) However, I want to populate the property array id with the values 1111 and 14567 sequentially using Cypher queries, and I don't find any method to push values into the array. After inserting 1111 into property id it will be: (:user)-[:contains{id:[12345,1111]}]->(:files) After inserting 14567 into property id it will be: (:user)-[:contains{id:[12345,1111,14567]}]->(:files) I don't know how to append values to a property array sequentially.
Adding values to an array is analogous to incrementing an integer or concatenating a string and is signified the same way, in your case (let c be your [c:contains {id:[12345]}]) c.id = c.id + 1111 // [12345,1111] c.id = c.id + 14567 // [12345,1111,14567] or c.id = c.id + [1111,14567] // [12345,1111,14567]
Neo4j
21,979,782
33
I am looking at integrating Neo4j into a Clojure system I am building. The first question I was asked was why I didn't use Datomic. Does anyone have a good answer for this? I have heard of and seen videos on Datomic, but I don't know enough about Graph Databases to know the difference between Neo4j and Datomic, and what difference it would make to me?
There are a few fundamental difference between them: Data Model Both Neo4j and Datomic can model arbitrary relationships. They both use, effectively, an EAV (entity-attribute-value) schema so they both can model many of the same problem domains except Datomic's EAV schema also embeds a time dimension (i.e. EAVT) which makes it very powerful if you want to perform efficient queries against your database at arbitrary points in time. This is something that non-immutable data stores (Neo4j included) could simply not do. Data Access Both Neo4j and Datomic provide traversal APIs and query languages: Queries Both Neo4j and Datomic provide declarative query languages (Cypher and Datalog, respectively) that support recursive queries except Datomic's Datalog provides far superior querying capabilities by allowing custom filtering and aggregate functions to be implemented as arbitrary JVM code. In practice, this means Cypher's built-in functions can effectively be superseded by Clojure's sequence library. This is possible because your application, not the database, is the one running queries. Traversal Traversal APIs are always driven by application code, which means both Neo4j and Datomic are able to walk a graph using arbitrary traversal, filtering and data transformation code except Neo4j requires a running transaction which in practice means it's time-bounded. Data Consistency Another fundamental difference is that Datomic queries don't require database coordination (i.e. no read transactions) and they always work with a consistent data snapshot which means you could perform multiple queries and data transformations over an arbitrary period of time and guarantee your results will always be consistent and that no transaction will timeout (because there's none). Again, this is impossible to do in non-immutable data stores like the vast majority of existing databases (Neo4j included). This also applies to their traversal APIs. Both Neo4j and Datomic are transactional (ACID) systems, but because Neo4j uses traditional interactive transactions -using optimistic concurrency controls-, queries need to happen inside transactions (need to be coordinated) which imposes timeout constraints to your queries. In practice, this means that for very complex, long-running queries, you'll end-up splitting your queries, so they finish within certain time limits, giving up data consistency. Working Set If for some reason your queries needed to involve a huge amount of data (more than it would normally fit in memory) and you couldn't stream the results (since Datomic provides streaming APIs), Datomic would probably not be a good fit since you wouldn't be taking advantage of Datomic's architecture, forcing peers to constantly evict their working memory, performing additional network calls and decompressing data segments.
Neo4j
17,895,129
33
I am trying to run queries from the neo4j browser to reproduce results from my neo4j-javascript-driver client. What is the syntax for defining query parameters in the neo4j browser? I recently attended a neo4j training session in NYC where the trainer (David Fauth) did this; unfortunately, I did not take notes on it, since I figured that I could read up on this online... but no success.
In neo4j-browser you need type for example: :params {nodes: [{name: "John", age: 18}, {name: "Phill", age: 23}]} Then you can use params as usual: UNWIND {nodes} as node MERGE (A:User {name: node.name, age: node.age}) RETURN A For clear params in neo4j-browser type :params {}. For additional help type :help params.
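Note that the {param} placeholder style above is the legacy syntax; from Neo4j 3.x onward the recommended form is $param:
UNWIND $nodes AS node MERGE (A:User {name: node.name, age: node.age}) RETURN A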
Neo4j
42,397,773
32
I have created a new node labeled User: CREATE (n:User) I want to add a name property to my User node. I tried: MATCH (n { label: 'User' }) SET n.surname = 'Taylor' RETURN n but it seems to have no effect. How can I add properties to an already created node? Thank you very much.
Your matching by label is incorrect, the query should be: MATCH (n:User) SET n.surname = 'Taylor' RETURN n What you wrote is: "match a user whose label property is User". Label isn't a property, this is a notion apart. As Michael mentioned, if you want to match a node with a specific property, you've got two alternatives: MATCH (n:User {surname: 'Some Surname'}) or: MATCH (n:User) WHERE n.surname = 'Some Surname' Now the combo: MATCH (n:User {surname: 'Some Surname'}) SET n.surname = 'Taylor' RETURN n
Neo4j
24,407,716
32
I've deleted all my nodes and relationships (Delete all nodes and relationships in neo4j 1.8), but I see that in Neo4j Browser the "property keys" that existed before the deletion remain. See the picture below: How can I make all the "Property Keys" go away too, so I can end up with a fresh new database? I understand this orphan property keys do not pose a problem themselves, but they clutter the browser experience and will start confusing with newer properties. Thanks!
You should be able to clear everything out by: stopping your Neo4j database deleting everything matching data/graph.db/* (look inside the graph.db folder) starting up again.
Neo4j
33,982,639
31
I'm using neo4j and making executing this query: MATCH (n:Person) RETURN n.name LIMIT 5 I'm getting the names but i need the ids too. Please help!
Since ID isn't a property, it's returned using the ID function. MATCH (n:Person) RETURN ID(n) LIMIT 5
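In Neo4j 5 the numeric internal id is deprecated in favour of elementId():
MATCH (n:Person) RETURN elementId(n) LIMIT 5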
Neo4j
26,203,538
31
I'm trying to figure out what is the difference between MERGE and CREATE UNIQUE. I know these features: #MERGE# I'm able to create node, if doesn't exist pattern. MERGE (n { name:"X" }) RETURN n; This create node "n" with property name, empty node "m" and relationship RELATED. MERGE (n { name:"X" })-[:RELATED]->(m) RETURN n, m; #CREATE UNIQUE# I'm not able to create node like this. CREATE UNIQUE (n { name:"X" }) RETURN n; If exists node "n", create unique makes empty node "m" and relationship RELATED. MATCH (n { name: 'X' }) CREATE UNIQUE (n)-[:RELATED]->(m) RETURN n, m; If this pattern exists, nothing created, only returns pattern. From my point of view, I see MERGE and CREATE UNIQUE are quite same queries, but with CREATE UNIQUE you can't create start node in relationship. I would be grateful if someone could explain this issue and compare these queries.
CREATE UNIQUE has slightly more obscure semantics than MERGE. MERGE was developed as an alternative with more intuitive behavior than CREATE UNIQUE; if in doubt, MERGE is usually the right choice. The easiest way to think of MERGE is as a MATCH-or-create. That is, if something in the database would MATCH the pattern you are using in MERGE, then MERGE will just return that pattern. If nothing matches, the MERGE will create all missing elements in the pattern, where a missing element means any unbound identifier. Given MATCH (a {uid:123}) MERGE (a)-[r:LIKES]->(b)-[:LIKES]->(c) "a" is a bound identifier from the perspective of the MERGE. This means cypher somehow already knows which node it represents. This statement can have two outcomes. Either the whole pattern already exists, and nothing will be created, or parts of the pattern are missing, and a whole new set of relationships and nodes matching the pattern will be created. Examples // Before merge: (a)-[:LIKES]->()-[:LIKES]->() // After merge: (a)-[:LIKES]->()-[:LIKES]->() // Before merge: (a)-[:LIKES]->()-[:OWNS]->() // After merge: (a)-[:LIKES]->()-[:OWNS]->() (a)-[:LIKES]->()-[:LIKES]->() // Before merge: (a) // After merge: (a)-[:LIKES]->()-[:LIKES]->()
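One MERGE feature with no CREATE UNIQUE equivalent is worth adding: ON CREATE and ON MATCH let you set properties depending on whether the pattern was created or found:
MERGE (n:Person {name:'X'}) ON CREATE SET n.created = timestamp() ON MATCH SET n.lastSeen = timestamp() RETURN n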
Neo4j
22,773,562
31
I'm using Linux 16.04 OS. I have installed a fresh Neo4j. I have referenced the exegetic and digitalocean sites. By default there's a graph.db database. My question is: how do I create a new database and create nodes and relationships between nodes? As shown in the picture, the default DB name is graph.db.
Since you're using Neo 3.x, to create a new database without removing your existing one, you can simply edit the neo4j.conf file in your conf directory of your $NEO4J_HOME. Search for dbms.active_database=, which should have the default value of graph.db. Replace it with some other name and start neo4j again. Now, a new database will be created under that directory name. To switch back to your previous db, repeat the steps, just replace your new value with graph.db in the configuration file.
Neo4j
45,784,232
29
I am trying to get the relationship type of a very simple Cypher query, like the following: MATCH (n)-[r]-(m) RETURN n, r, m; Unfortunately this returns an empty object for r. This is troublesome since I can't distinguish between the different types of relationships. I can monkey-patch this by adding a property like [r:KNOWS {type:'KNOWS'}] but I am wondering if there isn't a direct way to get the relationship type. I even followed the official Neo4J tutorial (as described below), demonstrating the problem. Graph Setup: create (_0 {`age`:55, `happy`:"Yes!", `name`:"A"}) create (_1 {`name`:"B"}) create _0-[:`KNOWS`]->_1 create _0-[:`BLOCKS`]->_1 Query: MATCH p=(a { name: "A" })-[r]->(b) RETURN * JSON RESPONSE BODY: { "results": [ { "columns": [ "a", "b", "p", "r" ], "data": [ { "row": [ { "name": "A", "age": 55, "happy": "Yes!" }, { "name": "B" }, [ { "name": "A", "age": 55, "happy": "Yes!" }, {}, { "name": "B" } ], {} ] }, { "row": [ { "name": "A", "age": 55, "happy": "Yes!" }, { "name": "B" }, [ { "name": "A", "age": 55, "happy": "Yes!" }, {}, { "name": "B" } ], {} ] } ] } ], "errors": [] } As you can see, I get an empty object for r, which makes it impossible to distinguish between the relationships. NOTE: I am running Neo4J v.2.2.2
Use the type() function. MATCH (n)-[r]-(m) RETURN type(r);
Neo4j
31,485,802
28
I'm new to the Graph Database scene, looking into Neo4j and learning Cypher, we're trying to model a graph database, it's a fairly simple one, we got users, and we got movies, users can VIEW movies, RATE movies, create playlists and playlists can HAVE movies. The question is regarding the Super Node performance issue. And I will quote something from a very good book I am currently reading - Learning Neo4j by Rik Van Bruggen, so here it is: A very interesting problem then occurs in datasets where some parts of the graph are all connected to the same node. This node, also referred to as a dense node or a supernode, becomes a real problem for graph traversals because the graph database management system will have to evaluate all of the connected relationships to that node in order to determine what the next step will be in the graph traversal. The solution to this problem proposed in the book is to have a Meta node with 100 connections to it, and the 101th connection to be linked to a new Meta node that is linked to the previous Meta Node. I have seen a blog post from the official Neo4j Blog saying that they will fix this problem in the upcoming future (the blog post is from January 2013) - http://neo4j.com/blog/2013-whats-coming-next-in-neo4j/ More exactly they say: Another project we have planned around “bigger data” is to add some specific optimizations to handle traversals across densely-connected nodes, having very large numbers (millions) of relationships. (This problem is sometimes referred to as the “supernodes” problem.) What are your opinions on this issue? Should we go with the Meta node fanning-out pattern or go with the basic relationship that every tutorial seem to be using? Any other suggestions?
UPDATE - October 2020. This article is the best source on this topic, covering all aspects of super nodes (my original answer below) It's a good question. This isn't really an answer, but why shouldn't we be able to discuss this here? Technically I think I'm supposed to flag your question as "primarily opinion based" since you're explicitly soliciting opinions, but I think it's worth the discussion. The boring but honest answer is that it always depends on your query patterns. Without knowing what kinds of queries you're going to issue against this data structure, there's really no way to know the "best" approach. Supernodes are problems in other areas as well. Graph databases sometimes are very difficult to scale in some ways, because the data in them is hard to partition. If this were a relational database, we could partition vertically or horizontally. In a graph DB when you have supernodes, everything is "close" to everything else. (An Alaskan farmer likes Lady Gaga, so does a New York banker). Moreso than just graph traversal speed, supernodes are a big problem for all sorts of scalability. Rik's suggestion boils down to encouraging you to create "sub-clusters" or "partitions" of the super-node. For certain query patterns, this might be a good idea, and I'm not knocking the idea, but I think hidden in here is the notion of a clustering strategy. How many meta nodes do you assign? How many max links per meta-node? How did you go about assigning this user to this meta node (and not some other)? Depending on your queries, those questions are going to be very hard to answer, hard to implement correctly, or both. A different (but conceptually very similar) approach is to clone Lady Gaga about a thousand times, and duplicate her data and keep it in sync between nodes, then assert a bunch of "same as" relationships between the clones. This isn't that different than the "meta" approach, but it has the advantage that it copies Lady Gaga's data to the clone, and the "Meta" node isn't just a dumb placeholder for navigation. Most of the same problems apply though. Here's a different suggestion though: you have a large-scale many-to-many mapping problem here. It's possible that if this is a really huge problem for you, you'd be better off breaking this out into a single relational table with two columns (from_id, to_id), each referencing a neo4j node ID. You then might have a hybrid system that's mostly graph (but with some exceptions). Lots of tradeoffs here; of course you couldn't traverse that rel in cypher at all, but it would scale and partition much better, and querying for a particular rel would probably be much faster. One general observation here: whether we're talking about relational, graph, documents, K/V databases, or whatever -- when the databases get really big, and the performance requirements get really intense, it's almost inevitable that people end up with some kind of a hybrid solution with more than one kind of DBMS. This is because of the inescapable reality that all databases are good at some things, and not good at others. So if you need a system that's good at most everything, you're going to have to use more than one kind of database. :) There is probably quite a bit neo4j can do to optimize in these cases, but it would seem to me that the system would need some kinds of hints on access patterns in order to do a really good job at that. Of the 2,000,000 relations present, how to the endpoints best cluster? Are older relationships more important than newer, or vice versa?
Neo4j
27,568,265
28
If I have the cypher query MATCH (a)-[r]->(b) I can get the labels of a and b fine line so MATCH (a)-[r]->(b) RETURN labels(a), labels(b) But when I want the label of r using the same syntax MATCH (a)-[r]->(b) RETURN labels(r) I get Type mismatch: expected Node but was Relationship How do I return the label of r, the relationship?
In Neo4j, relationships don't have labels - they have a single type, so it would be: MATCH (a)-[r]->(b) RETURN TYPE(r)
Neo4j
23,999,044
28
Is there a cypher command to drop all constraints? I know I can drop specific constraints. DROP CONSTRAINT ON (book:Book) ASSERT book.isbn IS UNIQUE However I want to clear all constraints as part of teardown after testing. Can't find anything in the docs, but something like: DROP CONSTRAINT * Update: My testing setup. Writing a tiny promise-based nodejs cypher client. I want to test defining unique indexes in application code.
Note using APOC you can drop all indexes and constraints via CALL apoc.schema.assert({}, {}).
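On Neo4j 4.2+ without APOC, you can list all constraints and drop each one by name (constraint_name below is a placeholder for whatever name the SHOW output reports):
SHOW CONSTRAINTS;
DROP CONSTRAINT constraint_name;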
Neo4j
22,357,379
28
I was looking on the scalability of Neo4j, and read a document written by David Montag in January 2013. Concerning the sharding aspect, he said the 1st release of 2014 would come with a first solution. Does anyone know if it was done or its status if not? Thanks!
Disclosure: I'm working as VP Product for Neo Technology, the sponsor of the Neo4j open source graph database. Now that we've just released Neo4j 2.0 (actually 2.0.1 today!) we are embarking on a 2.1 release that is mostly oriented around (even more) performance & scalability. This will increase the upper limits of the graph to an effectively unlimited number of entities, and improve various other things. Let me set some context first, and then answer your question. As you probably saw from the paper, Neo4j's current horizontal-scaling architecture allows read scaling, with writes all going to master and fanning out. This gets you effectively unlimited read scaling, and into the tens of thousands of writes per second. Practically speaking, there are production Neo4j customers (including Snap Interactive and Glassdoor) with around a billion people in their social graph... in all cases behind an active and heavily-hit web site, being handled by comparatively quite modest Neo4j clusters (no more than 5 instances). So that's one key feature: the Neo4j of today has an incredible computational density, and so we regularly see fairly small clusters handling a substantially large production workload... with very fast response times. More on the current architecture can be found here: www.neotechnology.com/neo4j-scales-for-the-enterprise/ And a list of customers (which includes companies like Wal-Mart and eBay) can be found here: neotechnology.com/customers/ One of the world's largest parcel delivery carriers uses Neo4j to route all of their packages, in real time, with peaks of 3000 routing operations per second, and zero downtime. (This arguably is the world's largest and most mission-critical use of a graph database and of a NOSQL database; though unfortunately I can't say who it is.) So in one sense the tl;dr is that if you're not yet as big as Wal-Mart or eBay, then you're probably ok. That oversimplifies it only a bit. There is the 1% of cases where you have sustained transactional write workloads into the 100s of thousands per second. However even in those cases it's often not the right thing to load all of that data into the real-time graph. We usually advise people to do some aggregation or filtering, and bring only the more important things into the graph. Intuit gave a good talk about this. They filter a billion B2B transactions into a much smaller number of aggregate monthly transaction relationships with aggregated counts and currency amounts by direction. Enter sharding... Sharding has gained a lot of popularity these days. This is largely thanks to the other three categories of NOSQL, where joins are an anti-pattern. Most queries involve reading or writing just a single piece of discrete data. Just as joining is an anti-pattern for key-value stores and document databases, sharding is an anti-pattern for graph databases. What I mean by that is... the very best performance will occur when all of your data is available in memory on a single instance, because hopping back and forth all over the network whenever you're reading and writing will significantly slow things down, unless you've been really really smart about how you distribute your data... and even then. Our approach has been twofold: Do as many smart things as possible in order to support extremely high read & write volumes without having to resort to sharding. This gets you the best and most predictable latency and efficiency. 
In other words: if we can be good enough to support your requirement without sharding, that will always be the best approach. The link above describes some of these tricks, including the deployment pattern that lets you shard your data in memory without having to shard it on disk (a trick we call cache-sharding). There are other tricks along similar lines, and more coming down the pike... Add a secondary architecture pattern into Neo4j that does support sharding. Why do this if sharding is best avoided? As more people find more uses for graphs, and data volumes continue to increase, we think eventually it will be an important and inevitable thing. This would allow you to run all of Facebook for example, in one Neo4j cluster (a pretty huge one)... not just the social part of the graph, which we can handle today. We've already done a lot of work on this, and have an architecture developed that we believe balances the many considerations. This is a multi-year effort, and while we could very easily release a version of Neo4j that shards naively (that would no doubt be really popular), we probably won't do that. We want to do it right, which amounts to rocket science.
Neo4j
21,558,589
28
I would like to predefine some graph data for neo4j and be able to load it, maybe via a console tool. I'd like it to be precisely the same as MySQL CLI and .sql files. Does anyone know if there exists a file format like .neo or .neo4j? I couldn't find such thing in the docs...
We usually do .cql or .cypher for script files. You can pipe one to the shell to run it, like so:

./neo4j-shell -c < MY_FILE.cypher

Michael Hunger was also doing some great work on this feature just recently. He got performance up and noise down from the console. I hope it gets into the 1.9 release.
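For illustration, the contents of such a script might look like this minimal sketch (file name and data are made up, not from any official example):

// seed.cypher: hypothetical example contents
CREATE (a {name:'Alice'}), (b {name:'Bob'}), (a)-[:KNOWS]->(b);

Piping that file into neo4j-shell should execute each semicolon-terminated statement in order.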
Neo4j
15,161,221
28
I was trying to save directed graphs into databases for further processing and query. And neo4j seems to fit my needs. However, I don't seem to find a good tutorial regarding the following: Creating the database and put data in. Making queries. I want to be able to do them both manually and automatically (i.e. using a program). The official manual keeps talking about stuff like Maven, Index, REST API and so on, basically things I don't care about at all for now. So any good hands-on tutorial on neo4j? Or any other graph databases you think is good for total beginners with simple needs (i.e. store graph and query graph)?
For getting started just download the Neo4j Server and start it. Then go to http://localhost:7474 for the integrated web-admin UI, which allows you to enter data visually, browse/visualize it, and query it. Please have a look at the Neo4j Koans by Jim Webber and Ian Robinson, which are materials used in real-world tutorials. Otherwise also have a look at http://video.neo4j.org for some screencasts and presentations, and at the collection of introduction links on the neo4j delicious site.
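Once the server is up, a couple of minimal Cypher statements (names made up here; syntax per Cypher 2.x) are enough to confirm you can store and query a graph:

// create two nodes and a relationship, then read them back
CREATE (a {name:'Alice'})-[:KNOWS]->(b {name:'Bob'});

MATCH (a {name:'Alice'})-[:KNOWS]->(b)
RETURN a.name, b.name;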
Neo4j
8,623,080
28
Is there a GUI-builder for neo4j? I want to be able to quickly add new nodes, set labels, set properties and relationships all in a gui-environment by clicking on nodes in a visualisation. I have searched, but have found nothing. Thanks.
@Zuriar Two years after your original post :) but nevertheless... Now there is also Graphileon InterActor (http://www.graphileon.com), an enhanced user interface for Neo4j. Multi-panel; create/update nodes and relations without writing a single line of code.

UPDATE August 15th, 2018: We have replaced the Sandbox and Community Edition with the Personal Edition. This version is free as well, and is distributed as a desktop app for MacOS, Windows and Linux. For more info, visit our blog.

UPDATE June 22nd, 2020: We released version 2.7.0 of the Personal Edition, which supports Neo4j 4.0. For release notes, go here: https://docs.graphileon.com/graphileon/Release_notes.html

UPDATE Aug 8th, 2022: Graphileon is now also available as a fully managed Cloud Service. Read more about it here: https://graphileon.com/graphileon-cloud-has-arrived/

Disclosure: I work for Graphileon
Neo4j
32,462,505
27
Neo4j is a great tool for mapping relational data, but I am curious under what conditions it would not be a good tool to use. In which use cases would using neo4j be a bad idea?
You might want to check out this slide deck and in particular slides 18-22. Your question could have a lot of details to it, but let me try to focus on the big pieces. Graph databases are naturally indexed by relationships. So graph databases will be good when you need to traverse a lot of relationships. Graphs themselves are very flexible, so they'll be good when the inter-connections between your data need to change from time to time, or when the data about your core objects that's important to store needs to change. Graphs are a very natural method of modeling some (but not all) data sources, things like peer to peer networks, road maps, organizational structures, etc. Graphs tend to not be good at managing huge lists of things. For example, if you were going to build a customer transaction database with analytics (where you need 1 million customers, 50 million transactions, and all you do is post transactions all day long) then it's probably not a good fit. RDBMS is great at that, notice how that use case doesn't exploit relationships really. Make sure to read those two links I provided, they have much more discussion.
Neo4j
30,133,924
27
How to delete labels in neo4j? Actually I deleted all nodes and relationships, then I recreated the movie database, and still the labels I created before appeared on the web interface. I also tried using a different location for the database, and even after an uninstall and reinstall the labels still appeared. Why? Where are the labels stored? After the uninstall, the program, the database folder and the appdata folder were deleted. How to reproduce? Install neo4j -> use the movie database example -> create (l:SomeLabel {name:"A freaky label"}) -> delete the node -> stop neo, create new folder -> start neo -> create movie schema -> match (n) return (n) -> SomeLabel appears, even if you changed the folder or did an uninstall/reinstall. Is there a way to delete labels even if there is no node with them?
There isn't at the moment (Neo4j 2.0.1) a way to explicitly delete a label once it has been created. Neo4j Browser will display all labels which are reported by the REST endpoint at: http://localhost:7474/db/data/labels Separately, the Neo4j Browser sidebar which displays labels doesn't properly refresh the listing when it loses connection with Neo4j. A web browser reload should work. Lastly, there was a bug in Neo4j Browser's visualization which would display all labels for which a style had been created. If using a version of Neo4j which has the bug, you can clear the styling by clicking on "View Stylesheet" in the property inspector, then clicking the fire extinguisher icon. All of that needs usability improvement, admittedly. Cheers, Andreas
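To see for yourself which labels the browser is picking up, you can query the endpoint mentioned above directly; a plain GET (shown here with curl) should return the label list as JSON:

curl http://localhost:7474/db/data/labels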
Neo4j
21,983,425
27
I am quite new to the zookeeper port, which I have been coming across for the past few days. I encountered the zookeeper port keyword on two occasions: while configuring a neo4j db cluster (link) and while running a compiled voltdb catalog (link) (See Network Configuration Arguments). Then I came across Apache Zookeeper (which I guess is related to distributed applications; I am a newbie in distributed applications as well). Hence these questions came to mind: is there any implementation of Apache Zookeeper in the above 2 scenarios? What exactly does this zookeeper port do internally? Any help would be appreciated. Thanks.
Zookeeper is used in distributed applications mainly for configuration management and high-availability operations. Zookeeper does this with a master-slave architecture. Neo4j and VoltDB might be using zookeeper for this purpose. Coming to understanding the ports: suppose you have 3 servers for zookeeper. You need to mention them in the configuration as:

clientPort=2181
server.1=zookeeper1:2888:3888
server.2=zookeeper2:2888:3888
server.3=zookeeper3:2888:3888

Out of these, one server will be the master and the rest will be slaves. If any server goes down, zookeeper elects a leader automatically. Servers listen on three ports: 2181 for client connections; 2888 for follower connections, if they are the leader; and 3888 for other server connections during the leader election phase.
Neo4j
18,168,541
27
When I run this query:

START n1=node(7727), n2=node(7730)
MATCH n1-[r:SKILL]->n2
RETURN r

it gives me a list of duplicate relationships that I have between the two nodes. What do I add to the cypher query to iterate over the relationships so as to keep one relationship and delete the rest?
To do this for two known nodes:

start n=node(1), m=node(2)
match (n)-[r]->(m)
with n,m,type(r) as t, tail(collect(r)) as coll
foreach(x in coll | delete x)

To do this globally for all relationships (be warned this operation might be very expensive depending on the size of your graph):

start r=relationship(*)
match (s)-[r]->(e)
with s,e,type(r) as typ, tail(collect(r)) as coll
foreach(x in coll | delete x)
Neo4j
18,202,197
26
Is it possible to have a cypher query paginated? For instance, a list of products, but I don't want to display/retrieve/cache all the results, as I can have a lot of results. I'm looking for something similar to OFFSET/LIMIT in SQL. Is cypher SKIP + LIMIT + ORDER BY a good option? http://docs.neo4j.org/chunked/stable/query-skip.html
SKIP and LIMIT combined is indeed the way to go. Be aware that using ORDER BY inevitably makes Cypher scan every node that is relevant to your query; the same goes for a WHERE clause. Performance should not be that bad, though.
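As a sketch of what that looks like in practice, assuming a hypothetical Product label with a name property, this fetches the third page of 20 results:

MATCH (p:Product)
RETURN p
ORDER BY p.name
SKIP 40 LIMIT 20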
Neo4j
16,338,670
26
What are the pros and cons of MongoDB (document-based), HBase (column-based) and Neo4j (object graph)? I'm particularly interested in knowing some of the typical use cases for each one. What are good examples of problems that graphs can solve better than the alternatives? Maybe any SlideShare or Scribd worthy presentation?
MongoDB
Scalability: Highly available and consistent, but sucks at relations and many distributed writes. Its primary benefit is storing and indexing schemaless documents. Document size is capped at 4mb and indexing only makes sense for limited depth. See http://www.paperplanes.de/2010/2/25/notes_on_mongodb.html
Best suited for: Tree structures with limited depth
Use Cases: Diverse Type Hierarchies, Biological Systematics, Library Catalogs

Neo4j
Scalability: Highly available but not distributed. Powerful traversal framework for high-speed traversals in the node space. Limited to graphs around several billion nodes/relationships. See http://highscalability.com/neo4j-graph-database-kicks-buttox
Best suited for: Deep graphs with unlimited depth and cyclical, weighted connections
Use Cases: Social Networks, Topological analysis, Semantic Web Data, Inferencing

HBase
Scalability: Reliable, consistent storage in the petabytes and beyond. Supports very large numbers of objects with a limited set of sparse attributes. Works in tandem with Hadoop for large data processing jobs. See http://www.ibm.com/developerworks/opensource/library/os-hbase/index.html
Best suited for: directed, acyclic graphs
Use Cases: Log analysis, Semantic Web Data, Machine Learning
Neo4j
3,735,784
26
I'm trying to find all the nodes with more than one incoming relationship. Given this data:

a-[has]->b
a-[has]->c
d-[has]->b

So, I'm looking for a query that returns 'b', because it has more than one incoming relationship. This query is close; it returns 'a' and 'b', because they both have 2 relations:

match (n)--()
with n, count(*) as rel_cnt
where rel_cnt > 1
return n;

However, this query (the addition of '-->') doesn't return any, and I don't know why:

match (n)-->()
with n, count(*) as rel_cnt
where rel_cnt > 1
return n;

Am I going about this all wrong?
Does this work for you?

MATCH ()-[r:has]->(n)
WITH n, count(r) as rel_cnt
WHERE rel_cnt > 1
RETURN n;

I am assuming, perhaps incorrectly, that 'has' is the appropriate relationship type. If not, then try:

MATCH ()-[r]->(n)
WITH n, count(r) as rel_cnt
WHERE rel_cnt > 1
RETURN n;
Neo4j
22,998,090
25
I am creating a new Neo4j database. I have a type of node called User and I would like an index on the user properties Identifier and EmailAddress. How does one go about setting up an index when the database is new? I have noticed in the neo4j.properties file there looks to be support for creating indexes. However, when I set these like so:

# Autoindexing
# Enable auto-indexing for nodes, default is false
node_auto_indexing=true
# The node property keys to be auto-indexed, if enabled
node_keys_indexable=EmailAddress,Identifier

and add a node and do a query to find an Identifier that I know exists:

START n=node:Identifier(Identifier = "USER0")
RETURN n;

then I get a MissingIndexException: Index `Identifier` does not exist. How do I create an index and use it in a start query? I only want to use config files and cypher to achieve this, i.e. at the present time I am only playing in the Power Tool Console.
Add the following to the neo4j.properties file:

# Autoindexing
# Enable auto-indexing for nodes, default is false
node_auto_indexing=true
# The node property keys to be auto-indexed, if enabled
node_keys_indexable=EmailAddress,Identifier

Create the auto index for nodes:

neo4j-sh (0)$ index --create node_auto_index -t Node

Check if it exists:

neo4j-sh (0)$ index --indexes

Should return:

Node indexes: node_auto_index

When querying, use the following syntax to specify the index:

start a = node:node_auto_index(Identifier="USER0")
return a;

As the node is auto-indexed, the name of the index is node_auto_index. This information came from a comment at the bottom of this page.

Update: In case you want to index your current data which was there before automatic indexing was turned on (where Property_Name is the name of your index):

START nd=node(*)
WHERE has(nd.Property_Name)
WITH nd
SET nd.Property_Name = nd.Property_Name
RETURN count(nd);
Neo4j
12,877,678
25
What is the difference between a Graph Database (e.g. Neo4J) and a Network Database (e.g. IDS, CODASYL)? In principle are they the same thing?
Network databases like CODASYL are still more or less based on a hierarchical data model, thinking in terms of parent-child (or owner-member, in CODASYL terminology) relationships. This also means that in a network database you can't relate arbitrary records to each other, which makes it hard to work with graph-oriented datasets. For example, you may use a graph database to analyze what relationships exist between entities. Also, network databases use fixed records with a predefined set of fields, while graph databases use the more flexible Property Graph Model, allowing for arbitrary key/value pairs on both nodes/vertices and relationships/edges.
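To make the property-graph point concrete, here is a minimal Cypher sketch (names invented) with key/value pairs on both a node and a relationship:

// properties live on the relationship as well as on the nodes
CREATE (a:Person {name:'Ann'})-[:KNOWS {since:2010}]->(b:Person {name:'Bob'})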
Neo4j
5,040,617
25
I want to get a list of all connected nodes starting from node 0, as shown in the diagram.
Based on your comment: I want to get a list of all the connected nodes. For example, in the above case, when I search for connected nodes for 0 it should return nodes 1, 2, 3. This query will do what you want:

MATCH ({id : 0})-[*]-(connected)
RETURN connected

The above query will return all nodes connected with a node with id=0 (I'm assuming that the numbers inside the nodes are values of an id property) at any depth, in both directions, and considering any relationship type. Take a look at the section Relationships in depth of the docs. While this will work fine for small graphs, note that this is a very expensive operation. It will go through the entire graph starting from the start point ({id : 0}) considering any relationship type. This is really not a good idea for production environments.
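If you want to bound that cost, one option is to cap the traversal depth in the variable-length pattern. A sketch limiting it to three hops (the id property is taken from the question; DISTINCT avoids duplicates when several paths reach the same node):

MATCH ({id: 0})-[*1..3]-(connected)
RETURN DISTINCT connected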
Neo4j
45,032,283
24
Suppose you're Twitter, and: You have (:User) and (:Tweet) nodes; Tweets can get flagged; and You want to query the list of flagged tweets currently awaiting moderation. You can either add a label for those tweets, e.g. :AwaitingModeration, or add and index a property, e.g. isAwaitingModeration = true|false. Is one option inherently better than the other? I know the best answer is probably to try and load test both :), but is there anything from Neo4j's implementation POV that makes one option more robust or suited for this kind of query? Does it depend on the volume of tweets in this state at any given moment? If it's in the 10s vs. the 1000s, does that make a difference? My impression is that labels are better suited for a large volume of nodes, whereas indexed properties are better for smaller volumes (ideally, unique nodes), but I'm not sure if that's actually true. Thanks!
UPDATE: Follow-up blog post published.

This is a common question when we model datasets for customers, and a typical use case for Active/NonActive entities. Here is some feedback from what I've experienced, valid for Neo4j 2.1.6:

Point 1. You will see no difference in db accesses between matching on a label or on an indexed property and returning the nodes.

Point 2. The difference shows up when such nodes are at the end of a pattern, for example:

MATCH (n:User {id:1})
WITH n
MATCH (n)-[:WRITTEN]->(post:Post)
WHERE post.published = true
RETURN n, collect(post) as posts;

PROFILE MATCH (n:User) WHERE n._id = 'c084e0ca-22b6-35f8-a786-c07891f108fc'
WITH n
MATCH (n)-[:WRITTEN]->(post:BlogPost)
WHERE post.active = true
RETURN n, size(collect(post)) as posts;

(PROFILE operator table elided for readability; the Filter on hasLabel(post:BlogPost) AND post.active took 3 db hits, the SimplePatternMatcher 12, and the SchemaIndex lookup 2. Total database accesses: 17)

In this case, Cypher will not make use of the index :Post(published). Thus the use of labels is more performant in the case where you have e.g. an ActivePost label:

PROFILE MATCH (n:User) WHERE n._id = 'c084e0ca-22b6-35f8-a786-c07891f108fc'
WITH n
MATCH (n)-[:WRITTEN]->(post:ActivePost)
RETURN n, size(collect(post)) as posts;

(PROFILE operator table elided; the hasLabel(post:ActivePost) filter took 1 db hit, the SimplePatternMatcher 4, and the SchemaIndex lookup 2. Total database accesses: 7)

Point 3. Always use labels for positives. For the case above, having a Draft label would force you to execute the following query:

MATCH (n:User {id:1})
WITH n
MATCH (n)-[:POST]->(post:Post)
WHERE NOT post:Draft
RETURN n, collect(post) as posts;

meaning that Cypher will open each node's label headers and filter on them.

Point 4. Avoid having to match on multiple labels:

MATCH (n:User {id:1})
WITH n
MATCH (n)-[:POST]->(post:Post:ActivePost)
RETURN n, collect(post) as posts;

PROFILE MATCH (n:User) WHERE n._id = 'c084e0ca-22b6-35f8-a786-c07891f108fc'
WITH n
MATCH (n)-[:WRITTEN]->(post:BlogPost:ActivePost)
RETURN n, size(collect(post)) as posts;

(PROFILE operator table elided; filtering on both labels took 2 db hits, the SimplePatternMatcher 8, and the SchemaIndex lookup 2. Total database accesses: 12)

This results in the same process for Cypher as in point 3.

Point 5. If possible, avoid the need to match on labels at all by having well-typed, named relationships:

MATCH (n:User {id:1})
WITH n
MATCH (n)-[:PUBLISHED]->(p)
RETURN n, collect(p) as posts;

MATCH (n:User {id:1})
WITH n
MATCH (n)-[:DRAFTED]->(post)
RETURN n, collect(post) as posts;

PROFILE MATCH (n:User) WHERE n._id = 'c084e0ca-22b6-35f8-a786-c07891f108fc'
WITH n
MATCH (n)-[:DRAFTED]->(post)
RETURN n, size(collect(post)) as posts;

(PROFILE operator table elided; the SimplePatternMatcher took 0 db hits and the SchemaIndex lookup 2. Total database accesses: 2)

This will be more performant, because it uses all the power of the graph and just follows the relationships from the node, resulting in no more db accesses than matching the user node, and thus no filtering on labels.

This was my 0,02€
Neo4j
27,956,367
24
My question is from the point of view of a developer (not specifically with respect to the user) and may be a bit messy. I want to know how the structure of nodes and relationships gets stored logically in the database. For instance, when I say I have some information, where is it? In a book, the answer would be: in the form of a grid or lines on a page. In the case of an RDBMS, data is stored in a grid/tabular format. But I am unable to understand how a graph gets stored in Neo4j/a graph database. I am using neo4j client 2.1.2.
http://www.slideshare.net/thobe/an-overview-of-neo4j-internals is very outdated, but it gives you a good overview of Neo4j's logical representation. A node references:

its first label (my guess is that labels are stored as a singly linked list)
its first property (properties are organized as a singly linked list)
its start/end relationships

Relationships are organized as doubly linked lists. A relationship points to:

its first property (same as nodes)
the predecessor and successor relationship of its start node
the predecessor and successor relationship of its end node

Because of this chaining structure, the notion of traversal (i.e. THE way of querying data) easily emerges. That's why a graph database like Neo4j excels at traversing graph-structured data. My rough guess would also be that, since Neo4j version 2.1 (and its newly introduced dense node management), a node's relationships are segregated by type. By doing so, if a node N is for example a start node for 5 relationships of type A and for 5 million rels of type B, traversing rels of type A for N remains O(n=5).
Neo4j
24,366,078
24
I need to make a string 'contains' filter in Neo4j. The idea is simple. A good example: from a database of persons, I need to retrieve all the people whose name contains the substring 'car'. How can I do this?
As an additional update, from neo4j 3.0 it may be more readable to use:

MATCH (n)
WHERE n.name CONTAINS 'car'
RETURN n

(Edited to include Maciej's fix to my response, thank you!)
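If the match should ignore case, a common variant (standard Cypher as of 3.x; the name property is taken from the question) is to normalize both sides before comparing:

MATCH (n)
WHERE toLower(n.name) CONTAINS 'car'
RETURN n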
Neo4j
24,094,882
24
I am using Neo4j 2.0 and using the following query to find the count of a particular relationship from a particular node. I have to check the number of relationships named LIVES from a particular PERSON node. My query is:

match (p:PERSON)-[r:LIVES]->(u:CITY)
where count(r)>1
return count(p);

The error shown is: SyntaxException: Invalid use of aggregating function count(...). How should I correct it?
What you want is a version of HAVING: people living in more than one city?

MATCH (p:PERSON)-[:LIVES]->(c:CITY)
WITH p, count(c) as rels, collect(c) as cities
WHERE rels > 1
RETURN p, cities, rels
Neo4j
22,346,526
24
Does http://localhost:7474/browser/ not support multiple unrelated queries? This code:

MATCH (a {cond:'1'}), (b {cond:'x'}) CREATE a-[:rel]->b
MATCH (a {cond:'2'}), (b {cond:'y'}) CREATE a-[:rel]->b
MATCH (a {cond:'3'}), (b {cond:'z'}) CREATE a-[:rel]->b

causes an error: WITH is required between CREATE and MATCH. But since my queries aren't related, I don't think I should need a WITH. How do I do the above without having to enter it one line at a time?
As a workaround you can do:

MATCH (a {cond:'1'}), (b {cond:'x'}) CREATE a-[:rel]->b
WITH 1 as dummy
MATCH (a {cond:'2'}), (b {cond:'y'}) CREATE a-[:rel]->b
WITH 1 as dummy
MATCH (a {cond:'3'}), (b {cond:'z'}) CREATE a-[:rel]->b

See also the import blog post: http://blog.neo4j.org/2014/01/importing-data-to-neo4j-spreadsheet-way.html
Neo4j
21,778,435
24
Let's say I have a user:

CREATE (n { name: 'Tamil' })

and 2 roles:

CREATE (n { name: 'developer' })
CREATE (n { name: 'tester' })

Then I make a relationship between the user and each of the 2 roles:

CYPHER 1.9
START a = node(*), b = node(*)
WHERE a.name = 'Tamil' AND b.name = 'developer'
CREATE (a)-[r:HAS_ROLE]->(b)
RETURN r

CYPHER 1.9
START a = node(*), b = node(*)
WHERE a.name = 'Tamil' AND b.name = 'tester'
CREATE (a)-[r:HAS_ROLE]->(b)
RETURN r

Now I want to remove the tester role relationship from the user. I tried:

CYPHER 1.9
START a = node:node_auto_index('name:Tamil')
MATCH a-[r:HAS_ROLE]-()
RETURN r

But it returns both of the relationships. I know that I can attach properties to relationships, but again, I don't know the cypher syntax for that. I am new to Neo4j. Any suggestions would be really great! Thanks!
I deleted the relationship on your original graph with this query:

START n=node(*)
MATCH (n)-[rel:HAS_ROLE]->(r)
WHERE n.name='Tamil' AND r.name='tester'
DELETE rel
Neo4j
19,016,947
24
I would like to retrieve a specific number of random nodes. The graph consists of 3,000,000 nodes, where some are sources, some are targets, and some are both. The aim is to retrieve random sources; as I don't know how to select at random, the program generates k random numbers from 1 to 3,000,000, which represent node IDs, and then discards all randomly selected nodes that are not sources. As this procedure is time-consuming, I wonder whether it is possible to directly select random sources with a cypher query. To select all sources, the query would be the following:

START t=node(*)
MATCH (a)-[:LEADS_TO]->(t)
RETURN a

Does anyone know how it would be possible to select a limited number of random nodes directly with cypher or, if not possible, suggest any workaround?
You can use this construction:

MATCH (a)-[:LEADS_TO]->(t)
RETURN a, rand() as r
ORDER BY r

It should return you a random set of objects. Tested with Neo4j 2.1.3.
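To get exactly k nodes rather than the full shuffled set, you can add a LIMIT to the same query; a sketch for k = 10 (relationship type as in the question):

MATCH (a)-[:LEADS_TO]->(t)
RETURN a, rand() as r
ORDER BY r
LIMIT 10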
Neo4j
12,510,696
24
I'm new to Neo4j and I have a weird requirement. I have some nodes:

CREATE (a:node {title:1})
CREATE (b:node {title:2})
CREATE (c:node {title:3})
CREATE (d:node {title:4})

and multiple relationships between them:

CREATE (a)-[:RELATES{jump:[1]}]->(b)
CREATE (b)-[:RELATES{jump:[1]}]->(c)
CREATE (c)-[:RELATES{jump:[1]}]->(d)
CREATE (a)-[:RELATES{jump:[2]}]->(c)
CREATE (c)-[:RELATES{jump:[2]}]->(d)
CREATE (d)-[:RELATES{jump:[1]}]->(b)
CREATE (a)-[:RELATES{jump:[3]}]->(d)
CREATE (d)-[:RELATES{jump:[3]}]->(c)
CREATE (c)-[:RELATES{jump:[3]}]->(b)

The graph and the relationships are shown here: I want to view the graph such that only the relationships I'm interested in are visible. Now when I do something like this:

MATCH (a)-[r]->(b)
WHERE 1 IN r.jump
RETURN a,b

I get something like: Is there a way I can hide (not delete) the irrelevant relationships while displaying the graph? Maybe something like this (edited in an image tool): PS: Let grey be white.
In neo4j 3.2.1 this feature has been relocated to the bottom left corner, under the gear icon: "Connect result nodes" (checked by default, thus returning all relationships between nodes included in the result).
Neo4j
37,603,618
23
I want to execute several queries at the same time in the browser console. Here are my queries:

CREATE (newNode1:NEW_NODE)
CREATE (newNode2:NEW_NODE)
MATCH (n1:LABEL_1 {id: "node1"})
CREATE (newNode1)-[:LINKED_TO]->(n1)
MATCH (n2:LABEL_2 {id: "node2"})
CREATE (newNode2)-[:LINKED_TO]->(n2)

When I execute them one by one there is no problem, but when I execute them at the same time, I get the following error: WITH is required between CREATE and MATCH. Is there any way to correct this?
Add a couple of WITHs?

CREATE (newNode1:NEW_NODE)
CREATE (newNode2:NEW_NODE)
WITH newNode1, newNode2
MATCH (n1:LABEL_1 {id: "node1"})
CREATE (newNode1)-[:LINKED_TO]->(n1)
WITH newNode1, newNode2
MATCH (n2:LABEL_2 {id: "node2"})
CREATE (newNode2)-[:LINKED_TO]->(n2)

Alternatively, you could do it in a different order and avoid the WITHs, the difference being that it won't create anything if n1/n2 don't MATCH:

MATCH (n1:LABEL_1 { id: "node1" })
MATCH (n2:LABEL_2 { id: "node2" })
CREATE (newNode1:NEW_NODE)-[:LINKED_TO]->(n1)
CREATE (newNode2:NEW_NODE)-[:LINKED_TO]->(n2)
Neo4j
21,297,679
23
I need to delete all relationships between all nodes. Is there any way to delete all relationships in the neo4j graph? Note that I am using ruby bindings - the neography gem. There is no info about that in the wiki of the gem. I've also tried to find a way to do it in the neo4j documentation without any result. Neo4j version is 1.7.2.
In cypher, deleting all relationships:

start r=relationship(*)
delete r;

Creating relationships between all nodes, I'd assume:

start n=node(*), m=node(*)
create unique n-[r:RELTYPE]-m;

But you rather don't want to have too many vertices, since it collapses on low memory (at least in my case: 1 million vertices and 1GB RAM).
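Note that the START-based syntax above is from older Cypher. In versions where START has been deprecated, the same relationship deletion can be written as a plain MATCH (a minimal sketch; on a large graph you would want to batch this rather than delete everything in one transaction):

MATCH ()-[r]->()
DELETE r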
Neo4j
12,899,538
23
Using Neo4J v2.2.3 community version. From inside the web admin console, does anyone know of any way to log out?
Type this into the browser's input line:

:server disconnect
Neo4j
31,189,719
22
I just got started on Neo and tried to look for prior questions on this topic. I need help renaming one of the property keys. I created the following node:

CREATE (Commerce:Category {title:' Commerce', Property:'Category', Owner:'Magic Pie', Manager:'Simple Simon'})

Now I want to rename title to name. Is there a way to do it? I don't want to delete the node, as there are hundreds of nodes with the property "title".
Yes, you want to SET a new property name with the value of the old property title. And then REMOVE the old property title. Something like this...

MATCH (c:Category)
WHERE c.name IS NULL
SET c.name = c.title
REMOVE c.title

If you have MANY nodes, it is advisable to perform the operation in smaller batches. Here is an example of limiting the operation to 10k at a time.

MATCH (c:Category)
WHERE c.name IS NULL
WITH c LIMIT 10000
SET c.name = c.title
REMOVE c.title
Neo4j
28,618,410
22
I have a neo4j db with the following: a:Foo, b:Bar, and about 10% of the db has (a)-[:has]->(b). I need to get only the nodes that do NOT have that relationship! Previously, doing

()-[r?]-()

would've been perfect! However, it is no longer supported :( Instead, doing as they suggest:

OPTIONAL MATCH (a:Foo)-[r:has]->(b:Bar)
WHERE b is NULL
RETURN a

gives me a null result, since OPTIONAL MATCH needs BOTH nodes to either be there or BOTH nodes not to be there... So how do I get all the a:Foo nodes that are NOT attached to b:Bar? Note: the dataset is millions of nodes, so the query needs to be efficient or it times out.
That would be:

MATCH (a:Foo)
WHERE not ((a)-[:has]->(:Bar))
RETURN a;
Neo4j
25,673,223
22
I'm storing some nodes in a Neo4j graph database, and each node has property values that can be localized to various languages. Is there any best practice for storing multi-language property values?
There are a couple of ways to model this. Which one is the best depends on your use case and the way you want to use the i18n-ized properties. I'm sketching some examples below, assuming n is a node whose productName and color properties should be translated into various languages. I'll use Cypher-like notation.

1) Storing all translations with the node:

CREATE (n {
  productName:'default', color:'default',
  productName_en:'engl.prod.name', color_en:'red',
  productName_fr:'fr.prod.name', color_fr:'rouge'
})

You apply a naming convention and use <propertyName>_<lang> for the i18n-ized variants. This approach is simplistic and not really graphy.

2) Have a subnode per language and entity; indicate the language by relationship type:

CREATE (n {productName:'default', color:'default'}),
  (n)-[:TRANSLATION_EN]->({productName: 'engl.prod.name', color:'red'}),
  (n)-[:TRANSLATION_FR]->({productName: 'fr.prod.name', color:'rouge'})

So you have 1 additional node and 1 additional rel per language. In Neo4j 2.0 you might additionally mark the translation nodes with a label indicating the language. By this you can easily extract a list of all text in language xyz.

3) Just like 2), but use a generic relationship type TRANSLATION with a property on it indicating the language.

There are a couple more approaches; e.g. you could theoretically use array properties as well. As said, there is no silver bullet: it depends on your use case and requirements.
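For approach 2), reading a property in the requested language with a fallback to the node's default might look like this sketch (coalesce picks the first non-null value; the node and values are from the example above, and the lookup pattern is illustrative only):

MATCH (n {productName:'default'})
OPTIONAL MATCH (n)-[:TRANSLATION_FR]->(t)
RETURN coalesce(t.productName, n.productName) as productName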
Neo4j
19,924,253
21
How do I return all the labels for a node using a Cypher query? Note that I don't know the node id in advance, I do some sort of index match to get it.
You can get labels by using the labels() method. Example (Neo4j 2.0): let's say you have the name property indexed and would like to search on that basis. The following query would give you all nodes, and their labels, which have name = "some_name":

MATCH (r)
WHERE r.name="some_name"
RETURN ID(r), labels(r);

If you know one of the labels of the starting node, that's even better. For some known label called Label, this query would give you all nodes along with all labels associated with each node:

MATCH (r:Label {name:"some_name"})
RETURN ID(r), labels(r);

Need more assistance? Go through the Cypher docs for labels()!
Neo4j
19,125,442
21
If someone builds a database on top of another database, as Twitter has done, does that database inherit the limitations and inefficiencies of the underlying database? I'm specifically interested in Titan DB (http://thinkaurelius.com) because of its claim to support splitting the dataset efficiently across nodes. Titan claims to support distributing data across nodes because of the efficiency of Cassandra. However, Neo4j claims that the reason they aren't distributing data between nodes, but rather duplicating the whole dataset on every node, is that any graph traversal that leaves one node, and therefore has to move across an ethernet network, is way too slow to be practical. Since Cassandra has no knowledge of the graph, it cannot optimize to keep graph traversals on one node. Therefore, most graph traversals will cross node boundaries. Is Titan's claim to scale efficiently across nodes true?
Titan determines the key sort order of the underlying storage backend (BOP for Cassandra, default for HBase) and then assigns ids to vertices such that vertices assigned to the same partition block have ids that land on the same physical machine. In other words, Titan "understands" how the underlying storage backend distributes the data, and uses graph partitioning techniques that exploit this awareness. Titan uses semi-automatic partitioning, which incorporates domain knowledge. In the Pearson benchmark (http://arli.us/edu-planet-scale) the graph was partitioned according to universities, which is a near-optimal partitioning criterion for this particular dataset. Without partitioning, scaling to 120 billion edges would be near impossible. Titan builds on top of proven technologies (for scale, persistence, hot backup, availability, disaster recovery, etc.) while innovating at the graph layer. This is the same route that both Twitter's Flock and Facebook's Tao have taken. While this means that Titan is slower at very deep traversals, it does allow Titan to scale to very large graphs or very many concurrent transactions (reads and writes).
Neo4j
17,811,472
21
I realized only after importing a ton of nodes that I had created relationships called START, which is a reserved keyword. Querying the DB through the Cypher console hence always complains about the reserved keyword:

SyntaxException: reserved keyword "start n=node(0) match n<-[:START]-r return count(r)"

The only workaround that comes to mind is creating new copy relationships with a different name and then deleting the old ones. Is there an easy way to rename all of these relationships, or some way to escape reserved keywords in Cypher?
To do the equivalent of a rename, you can create a new one and delete the old one like so:

match (n1)-[old:`Start`]->(n2)
create (n1)-[new:StartDate]->(n2)
delete old

n.b. use backticks like those around `Start` to escape reserved keywords
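One caveat: the query above does not carry over any properties stored on the old relationships. If yours have properties, newer Cypher versions let you copy them all with the SET new = old form before deleting; a sketch under that assumption:

match (n1)-[old:`Start`]->(n2)
create (n1)-[new:StartDate]->(n2)
set new = old
delete old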
Neo4j
13,816,712
21
There is some hype around graph databases. I'm wondering why. What are the possible problems that one can be confronted with in today's web environment that can be solved using graph databases? And are graph databases suitable for classical applications, i.e. can one be used as a drop-in replacement for a Relational Database? So in fact it's two questions in one. Related: Has anyone used Graph-based Databases (http://neo4j.org/)?
Many relational representations of graphs aren't particularly efficient for all operations you might want to perform. For example, if one wants the connected set of all nodes where edges satisfy a given predicate, starting from a given node, there's no natural way in SQL to express that. Likely you'll either do a query for edges with the predicate, and then have to exclude disconnected edges locally, or have a very verbose conversation with the database server following one set of links to the next in iterated queries. Graphs aren't a general replacement for relational databases. RDBs deal primarily in sets (tables), while graphs are primarily interesting because of the "shape" of interconnections. With relational DBs you follow links of a predetermined depth (a fixed number of joins) between sets, with results progressively filtered and grouped, while graphs are usually navigated to arbitrary and recursively-defined depth (i.e. not a predetermined number of "joins"). You can abuse either to match the characteristics of the other, but they'll have different strengths.
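To make the arbitrary-depth point concrete, here is a Cypher sketch of the kind of query that has no fixed-join SQL equivalent; the LINK relationship type and name property are made up for illustration:

// follow LINK edges to any depth from a starting node
MATCH (origin {name:'a'})-[:LINK*]->(reachable)
RETURN DISTINCT reachable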
Neo4j
1,159,190
21
I am starting a new project and I am looking at using MongoDB as the document storage facility and Neo4j as the mechanism to map relationships between documents and then I want to expose the results of my queries via rest API. What would one say are the advantages and disadvantages of doing it this manner? Are there any better ways of achieving this perhaps using a different NoSQL document store? Are there any examples one could find online where someone has tried to do something similar?
I have been thinking about using these two together for a while, because my data is already in mongodb. But I don't want to add one more DB on top of the existing architecture, because the addition of neo4j will require more resources, e.g. memory and disk space, not to mention the time invested in maintaining 2 DBs. Another problem I can think of is that when you shard your data with mongodb, you'll also have to manage your neo4j data w.r.t. these new shards. Scaling in neo4j is done through clusters, and it is part of the enterprise edition, which is commercial. I did further research and found out that OrientDB can store the data as documents and is a graph db. Another way is building the relationships in MongoDB itself, writing your logic on top of that, and exposing this logic through a REST API.
Neo4j
15,114,997
20
I am a CS research student at UW, and my group is at the point of trying to visualize specific network traffic that is put into a neo4j graph DB in real time. I have read about many different tools such as gephi, cytoscape, rickshaw (based on D3.js), some others, and D3.js itself. We are so far going forward with D3.js, but wanted to get the community's opinion. We can't use cytoscape because of neo4j, and we feel that D3.js would work the best with semi-large data in a fast real-time environment. Suggestions? Perhaps for another question, but also feel free to weigh in: best way to implement neo4j? Java, Ruby, node.js? Thank you!
There's no silver-bullet solution for this kind of problem; much depends on what you have in mind to do, the team, and the budget (of money and time) you have. I wouldn't recommend D3 unless you meet one of the following:

you want to create a brand new way to visualize your data
you have people in your team skilled with D3 - that can be you
you already have other D3 widgets/viz to integrate

If you don't meet any of the entries above, I would put D3 to one side and tell you to have a look at:

SigmaJS, open source and free JavaScript library.
KeyLines, commercial JavaScript toolkit.
VivaGraphJS, open source and free JS library.

Disclaimer: I'm one of the KeyLines developers.

Depending on the size of the data you have, the choice of library can change: if you plan to have no more than 3-400 nodes on your chart and don't need particular styling/animations, then SigmaJS is, I think, more than fine; if you're looking for something more advanced for styling or animation, I would recommend KeyLines, because it is designed to handle these kinds of situations (providing an incremental layout) and it scales up to 2000 nodes with no problems - although I might suggest having a filter on the side at that size. I would name VivaGraph as a last resort: SigmaJS has a WebGL renderer as well and provides a much nicer rendering IMHO. VivaGraphJS will soon be replaced with ngraph, which will use an agnostic approach for renderers: you can use PIXI, Fabric or whatever you want... Using a WebGL renderer makes sense when you load your assets once and reuse them all the time: if you're styling your chart elements in a real-time scenario, there's no advantage over Canvas IMHO.
Neo4j
14,867,132
20