Columns: question (string, lengths 11–28.2k), answer (string, lengths 26–27.7k), tag (string, 130 classes), question_id (int64, 935–78.4M), score (int64, 10–5.49k)
What is the best way to drop a collection in MongoDB? I am using the following: db.collection.drop() As described in the manual: db.collection.drop() Removes a collection from the database. The method also removes any indexes associated with the dropped collection. The method provides a wrapper around the drop command. But how can I drop it from the command line?
Either of these is a valid way to do it: from the command line, mongo <dbname> --eval 'db.<collection>.drop()', or from within the shell, db.<collection>.drop(). For example, for a collection mycollection in a database mydb you would say: mongo mydb --eval 'db.mycollection.drop()' or db.mycollection.drop() This is the way I fully tested it, creating a database mydb with a collection mycollection. Create db mydb: > use mydb switched to db mydb Create a collection mycollection: > db.createCollection("mycollection") { "ok" : 1 } Show all the collections there: > db.getCollectionNames() [ "mycollection", "system.indexes" ] Insert some dummy data: > db.mycollection.insert({'a':'b'}) WriteResult({ "nInserted" : 1 }) Make sure it was inserted: > db.mycollection.find() { "_id" : ObjectId("55849b22317df91febf39fa9"), "a" : "b" } Delete the collection and make sure it is not present any more: > db.mycollection.drop() true > db.getCollectionNames() [ "system.indexes" ] This also works (I am not repeating the previous commands, since it is just about recreating the database and the collection): $ mongo mydb --eval 'db.mycollection.drop()' MongoDB shell version: 2.6.10 connecting to: mydb true $
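For completeness, the same drop can be issued from a driver as well; here is a minimal pymongo sketch (assuming a local MongoDB on the default port, a recent pymongo, and the mydb/mycollection names from the example above):

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
db = client["mydb"]
db["mycollection"].drop()              # removes the collection and its indexes; a no-op if it does not exist
print(db.list_collection_names())      # the dropped collection no longer appears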
MongoDB
30,948,151
75
I know this may be a silly question, but I read in an e-book that there is an upsert option in MongoDB's insert. I couldn't find proper documentation about it. Can someone educate me about this?
Since upsert is defined as an operation that "creates a new document when no document matches the query criteria", there is no place for upserts in the insert command. It is an option for the update command. If you execute a command like the one below, it works as an update if there is a document matching the query, or as an insert with the document described by the update argument. db.collection.update(query, update, {upsert: true}) MongoDB 3.2 adds replaceOne: db.collection.replaceOne(query, replacement, {upsert: true}) which has similar behavior, but its replacement cannot contain update operators.
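To make the update-with-upsert behaviour concrete, here is a small pymongo sketch; the collection and field names are made up for illustration:

from pymongo import MongoClient

coll = MongoClient()["mydb"]["inventory"]

# If a document with sku "abc" exists it is updated; otherwise a new one is
# inserted, built from the query fields plus the $set fields.
result = coll.update_one({"sku": "abc"}, {"$set": {"qty": 5}}, upsert=True)
print(result.matched_count, result.upserted_id)  # upserted_id is set only when an insert happened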
MongoDB
19,974,216
75
Can I use a combination of OR and AND in MongoDB queries? The code below doesn't work as expected: db.things.find({ $and:[ {$or:[ {"first_name" : "john"}, {"last_name" : "john"} ]}, {"phone": "12345678"} ]}); database content: > db.things.find(); { "_id" : ObjectId("4fe8734ac27bc8be56947d60"), "first_name" : "john", "last_name" : "hersh", "phone" : "2222" } { "_id" : ObjectId("4fe8736dc27bc8be56947d61"), "first_name" : "john", "last_name" : "hersh", "phone" : "12345678" } { "_id" : ObjectId("4fe8737ec27bc8be56947d62"), "first_name" : "elton", "last_name" : "john", "phone" : "12345678" } { "_id" : ObjectId("4fe8738ac27bc8be56947d63"), "first_name" : "eltonush", "last_name" : "john", "phone" : "5555" } When running the above query I get nothing! > db.things.find({$and:[{$or:[{"first_name" : "john"}, {"last_name" : "john"}]},{"phone": "12345678"}]}); > I'm using mongo 1.8.3
db.things.find({ $and: [ { $or: [ {"first_name": "john"}, {"last_name": "john"} ] }, { "phone": "12345678" } ] }) $and takes an array of two expressions: the $or clause and the phone condition. $or takes an array of two expressions: first_name and last_name. Note that the field name must match the documents exactly ("phone", lower case). If this still doesn't work, upgrade to a newer version of MongoDB: the $and operator was only introduced in version 2.0, and you are on 1.8.3.
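The same combined filter expressed through a driver, as a pymongo sketch (the database name is an assumption; the collection and data are the ones from the question):

from pymongo import MongoClient

things = MongoClient()["test"]["things"]

query = {
    "$and": [
        {"$or": [{"first_name": "john"}, {"last_name": "john"}]},
        {"phone": "12345678"},
    ]
}
for doc in things.find(query):
    print(doc)  # with the sample data above, the two documents whose phone is 12345678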
MongoDB
11,196,101
75
Does MongoDB offer a find or query method to test whether an item exists based on any field value? We just want to check existence, not return the full contents of the item.
Since you don't need the count, you should make sure the query will return after it found the first match. Since count performance is not ideal, that is rather important. The following query should accomplish that: db.Collection.find({ /* criteria */}).limit(1).size(); Note that find().count() by default does not honor the limit clause and might hence return unexpected results (and will try to find all matches). size() or count(true) will honor the limit flag. If you want to go to extremes, you should make sure that your query uses covered indexes. Covered indexes only access the index, but they require that the field you query on is indexed. In general, that should do it because a count() obviously does not return any fields. Still, covered indexes sometimes need rather verbose cursors: db.values.find({"value" : 3553}, {"_id": 0, "value" : 1}).limit(1).explain(); { // ... "cursor" : "BtreeCursor value_1", "indexOnly" : true, // covered! } Unfortunately, count() does not offer explain(), so whether it's worth it or not is hard to say. As usual, measurement is a better companion than theory, but theory can at least save you from the bigger problems.
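In a driver, the cheapest existence check is to ask for at most one document and project almost nothing. A pymongo sketch of that idea (collection and field names mirror the example above):

from pymongo import MongoClient

coll = MongoClient()["mydb"]["values"]

# find_one returns after the first match; projecting only _id keeps the result tiny.
exists = coll.find_one({"value": 3553}, {"_id": 1}) is not None
print(exists)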
MongoDB
8,389,811
75
When installing MongoDb, I get the option to install it as a service. What does that mean? If I don't select that option, what difference would it make? Also, selecting "install as a service" will bring up additional options, such as "Run service as a network service user" or "run service as a local or domain user". What do these options do?
I'm speaking from the perspective of Windows development, but the concepts are similar for other operating systems, such as Linux. What are services? Services are application types that run in the system's background. These are applications such as task schedulers and event loggers. If you look at Task Manager > Processes, you can see that you have a series of Service Hosts, which are containers hosting your Windows services. What difference does setting MongoDB up as a service make? Running MongoDB as a service gives you some flexibility with how you can run and deploy MongoDB. For example, you can have MongoDB run at startup and restart on failures. If you don't set MongoDB up as a service, you will have to start the MongoDB server manually every time. So, what is the difference between a network service and a local service? Running MongoDB as a network service means that the service has permission to access the network with the same credentials as the computer you are using. Running MongoDB as a local service runs it without network connectivity.
MongoDB
52,068,925
74
I'm using node.js and MongoDB (I'm new to Mongo). I have a document like this: Tag : { name: string, videoIDs: array } The idea is that the server receives a JSON like: { name: "sport", videoId: "34f54e34c" } With this JSON, it has to find the tag with the same name and check whether the array already contains the videoId; if not, insert it into the array. How can I check the array and append data?
You can use the $addToSet operator to check whether the element exists before appending it to the array. db.tags.update( {name: 'sport'}, {$addToSet: { videoIDs: "34f54e34c" } } ); In this update statement example, MongoDB will find the tag document that matches name == 'sport', and then check whether the videoIDs array contains 34f54e34c. If not, it appends it to the array. For detailed usage of $addToSet, see the documentation.
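The same $addToSet update from Python, as a pymongo sketch (assuming a tags collection shaped like the documents in the question):

from pymongo import MongoClient

tags = MongoClient()["mydb"]["tags"]

# Appends the id only if it is not already present in the array.
result = tags.update_one({"name": "sport"}, {"$addToSet": {"videoIDs": "34f54e34c"}})
print(result.modified_count)  # 0 if the id was already in the array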
MongoDB
38,970,835
74
How would I get an array containing all values of a certain field for all of my documents in a collection? db.collection: { "_id" : ObjectId("51a7dc7b2cacf40b79990be6"), "x" : 1 } { "_id" : ObjectId("51a7dc7b2cacf40b79990be7"), "x" : 2 } { "_id" : ObjectId("51a7dc7b2cacf40b79990be8"), "x" : 3 } { "_id" : ObjectId("51a7dc7b2cacf40b79990be9"), "x" : 4 } { "_id" : ObjectId("51a7dc7b2cacf40b79990bea"), "x" : 5 } "db.collection.ListAllValuesForfield(x)" Result: [1,2,3,4,5] Also, what if this field was an array? { "_id" : ObjectId("51a7dc7b2cacf40b79990be6"), "y" : [1,2] } { "_id" : ObjectId("51a7dc7b2cacf40b79990be7"), "y" : [3,4] } { "_id" : ObjectId("51a7dc7b2cacf40b79990be8"), "y" : [5,6] } { "_id" : ObjectId("51a7dc7b2cacf40b79990be9"), "y" : [1,2] } { "_id" : ObjectId("51a7dc7b2cacf40b79990bea"), "y" : [3,4] } "db.collection.ListAllValuesInArrayField(y)" Result: [1,2,3,4,5,6,1,2,3,4] Additionally, can I make this array unique? [1,2,3,4,5,6]
db.collection.distinct('x') should give you an array of unique values for that field.
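A pymongo sketch covering all three parts of the question (unique values, array fields, and the non-unique flattened list via aggregation); the collection name is illustrative and result order is not guaranteed:

from pymongo import MongoClient

coll = MongoClient()["mydb"]["collection"]

print(coll.distinct("x"))   # e.g. [1, 2, 3, 4, 5]
print(coll.distinct("y"))   # array fields are flattened and de-duplicated: e.g. [1, 2, 3, 4, 5, 6]

# Non-unique, flattened list of the array values:
all_y = [d["y"] for d in coll.aggregate([{"$unwind": "$y"}, {"$project": {"_id": 0, "y": 1}}])]
print(all_y)                # e.g. [1, 2, 3, 4, 5, 6, 1, 2, 3, 4]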
MongoDB
23,273,123
74
If I delete the 3.1G journal file, sudo service mongodb restart will fail. However, this file is taking too much space. How can I solve this problem? How can I remove it? bash$ du -sh /var/lib/mongodb/* 4.0K _tmp 65M auction_development.0 128M auction_development.1 17M auction_development.ns 3.1G journal 4.0K mongod.lock
TL;DR: You have two options. Use the --smallfiles startup option when starting MongoDB to limit the size of the journal files to 128MB, or turn off journalling using the --nojournal option. Using --nojournal in production is usually a bad idea, and it usually makes sense to use the same journalling settings and write concerns in development so that dev and prod behave the same way. The long answer: No, deleting the journal file isn't safe. The idea of journalling is this: a write comes in. Now, to make that write persistent (and the database durable), the write must somehow go to the disk. Unfortunately, writes to the disk take eons compared to writes to RAM, so the database is in a dilemma: not writing to the disk is risky, because an unexpected shutdown would cause data loss. But writing to the disk for every single write operation would decrease the database's performance so badly that it becomes unusable for practical purposes. So instead of writing to the data files themselves, and instead of doing it for every request, the database simply appends to a journal file where it stores all the operations that haven't been committed to the actual data files yet. This is a lot faster, because the journal file is already 'hot' since it's read and written to all the time, it's only one file rather than a bunch of files, and, lastly, it writes all pending operations in a batch every 100ms by default. Deleting this file in the middle of something wreaks havoc.
MongoDB
19,533,019
74
Using mongoskin, I can do a query like this, which will return a cursor: myCollection.find({}, function(err, resultCursor) { resultCursor.each(function(err, result) { }) }) However, I'd like to call some async functions for each document, and only move on to the next item on the cursor after this has called back (similar to the eachSeries structure in the async.js module). E.g.: myCollection.find({}, function(err, resultCursor) { resultCursor.each(function(err, result) { externalAsyncFunction(result, function(err) { //externalAsyncFunction completed - now want to move to next doc }); }) }) How could I do this? Thanks. UPDATE: I don't want to use toArray() as this is a large batch operation, and the results might not fit in memory in one go.
A more modern approach that uses async/await: const cursor = db.collection("foo").find({}); while(await cursor.hasNext()) { const doc = await cursor.next(); // process doc here } Notes: This may become even simpler when async iterators arrive. You'll probably want to add try/catch for error checking. The containing function should be async, or the code should be wrapped in (async function() { ... })() since it uses await. If you want, add await new Promise(resolve => setTimeout(resolve, 1000)); (pause for 1 second) at the end of the while loop to show that it does process docs one after the other.
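If you prefer Python, the same one-document-at-a-time pattern looks like this with the Motor async driver (a sketch; external_async_function and the database/collection names are placeholders):

import asyncio
from motor.motor_asyncio import AsyncIOMotorClient

async def external_async_function(doc):
    await asyncio.sleep(0.1)  # stand-in for real async work per document

async def main():
    coll = AsyncIOMotorClient()["mydb"]["foo"]
    async for doc in coll.find({}):          # the cursor streams documents instead of loading them all
        await external_async_function(doc)   # the next document is fetched only after this completes

asyncio.run(main())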
MongoDB
18,119,387
74
Consider following is the Node.js code: function My_function1(_params) { db.once('open', function (err){ //Do some task 1 }); } function My_function2(_params) { db.once('open', function (err){ //Do some task 2 }); } See the link for best practice, which says not to close any connections https://groups.google.com/forum/#!topic/node-mongodb-native/5cPt84TUsVg I have seen log file contains following data: Fri Jan 18 11:00:03 Trying to start Windows service 'MongoDB' Fri Jan 18 11:00:03 Service running Fri Jan 18 11:00:03 [initandlisten] MongoDB starting : pid=1592 port=27017 dbpath=\data\db\ 64-bit host=AMOL-KULKARNI Fri Jan 18 11:00:03 [initandlisten] db version v2.2.1, pdfile version 4.5 Fri Jan 18 11:00:03 [initandlisten] git version: d6...e0685521b8bc7b98fd1fab8cfeb5ae Fri Jan 18 11:00:03 [initandlisten] build info: windows sys.getwindowsversion(major=6, minor=1, build=7601, platform=2, service_pack='Service Pack 1') BOOST_LIB_VERSION=1_49 Fri Jan 18 11:00:03 [initandlisten] options: { config: "c:\mongodb\mongod.cfg", logpath: "c:\mongodb\log\mongo.log", service: true } Fri Jan 18 11:00:03 [initandlisten] journal dir=/data/db/journal Fri Jan 18 11:00:03 [initandlisten] recover begin Fri Jan 18 11:00:04 [initandlisten] recover lsn: 6624179 Fri Jan 18 11:00:04 [initandlisten] recover /data/db/journal/j._0 Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:59343 < lsn:6624179 Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:118828 < lsn:6624179 Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:238138 < lsn:6624179 Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:835658 < lsn:6624179 Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:955218 < lsn:6624179 Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:3467218 < lsn:6624179 Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:3526418 < lsn:6624179 Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:3646154 < lsn:6624179 Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section seq:3705844 < lsn:6624179 Fri Jan 18 11:00:04 [initandlisten] recover skipping application of section more... 
Fri Jan 18 11:00:05 [initandlisten] recover cleaning up Fri Jan 18 11:00:05 [initandlisten] removeJournalFiles Fri Jan 18 11:00:05 [initandlisten] recover done Fri Jan 18 11:00:10 [initandlisten] query MYDB.system.namespaces query: { options.temp: { $in: [ true, 1 ] } } ntoreturn:0 ntoskip:0 nscanned:5 keyUpdates:0 nreturned:0 reslen:20 577ms Fri Jan 18 11:00:10 [initandlisten] waiting for connections on port 27017 Fri Jan 18 11:00:10 [websvr] admin web console waiting for connections on port 28017 Fri Jan 18 11:01:10 [PeriodicTask::Runner] task: WriteBackManager::cleaner took: 32ms Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50076 #1 (1 connection now open) Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50077 #2 (2 connections now open) Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50078 #3 (3 connections now open) Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50079 #4 (4 connections now open) Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50080 #5 (5 connections now open) Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50081 #6 (6 connections now open) Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50082 #7 (7 connections now open) Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50083 #8 (8 connections now open) Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50084 #9 (9 connections now open) Fri Jan 18 13:36:27 [initandlisten] connection accepted from 192.168.0.1:50085 #10 (10 connections now open) ........................................... Fri Jan 18 13:36:48 [initandlisten] connection accepted from 192.168.0.1:50092 #97 (97 connections now open) Doesn't this create overhead on the server by opening multiple connections and not closing them? Does it handle connection pooling internally? But in the MongoDB docs it is mentioned that "This is normal behavior for applications that do not use request pooling". Can somebody help me understand this?
You open a Db connection once with MongoClient and reuse it across your application. If you need to use multiple dbs, you use the .db function on the Db object to work on a different db using the same underlying pool of connections. A pool is kept to ensure a single blocking operation cannot freeze up your node.js application. The default size is 5 connections in a pool. http://mongodb.github.io/node-mongodb-native/driver-articles/mongoclient.html I also forgot to add: as the other answer pointed out, setting up a new TCP connection is expensive time-wise and memory-wise, which is why you reuse connections. Also, a new connection will cause a new thread to be created on MongoDB, using memory on the db side as well.
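The same discipline applies in Python: create one client per process and reuse it, letting the driver manage its pool. A pymongo sketch (the pool size of 5 is only shown to mirror the node driver default mentioned above; pymongo's own default is larger):

from pymongo import MongoClient

# Created once at application startup and shared; MongoClient is thread-safe
# and maintains its own connection pool.
client = MongoClient("mongodb://localhost:27017/", maxPoolSize=5)

users_db = client["users"]
logs_db = client["logs"]     # a different database, same underlying pool

def get_user(user_id):
    return users_db["accounts"].find_one({"_id": user_id})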
MongoDB
14,495,975
74
I really like MongoDB's automatically generated ids. They are really useful. However, is it safe to use them publicly? Let's say there is a posts collection, and the /posts page takes an id parameter (something like /posts/4d901acd8df94c1fe600009b) and displays info about it. This way the user/hacker will know the real object id of the document. Is it okay or is it not secure? Thanks
The ObjectID documentation states that the automatically generated IDs include a 3-byte machine ID (presumably a hash of the MAC address). It's not inconceivable that someone could figure out things about your internal network by comparing those three bytes in various ids, but unless you're working for the Pentagon that doesn't seem worth worrying about (you're much more likely to be vulnerable to something more boring like a misconfigured Apache). Other than that, Epcylon's right; there's nothing inherently insecure about exposing ids through URLs. Whether it's ugly is another matter, of course. You can base64 them to make them shorter (been thinking about this myself), but then there's the weird fact that they're all about half the same.
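To illustrate the 'base64 them to make them shorter' idea, a small Python sketch using the bson package that ships with pymongo (this only shortens the representation, it does not hide anything):

import base64
from bson import ObjectId

oid = ObjectId("4d901acd8df94c1fe600009b")

short = base64.urlsafe_b64encode(oid.binary).decode()          # 16 characters instead of 24
restored = ObjectId(base64.urlsafe_b64decode(short.encode()))  # round-trips back to the same id

print(short, restored == oid)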
MongoDB
4,587,523
74
I have two collections. User: { "_id" : ObjectId("584aac38686860d502929b8b"), "name" : "John" } Role: { "_id" : ObjectId("584aaca6686860d502929b8d"), "role" : "Admin", "userId" : "584aac38686860d502929b8b" } I want to join these collections based on the userId (in the role collection) and _id (in the user collection). I tried the below query: db.role.aggregate({ "$lookup": { "from": "user", "localField": "userId", "foreignField": "_id", "as": "output" } }) This gives me the expected results as long as I store userId as an ObjectId. When my userId is a string there are no results. PS: I tried foreignField: '_id'.valueOf() and foreignField: '_id'.toString(), but no luck matching/joining an ObjectId field against a string field. Any help will be appreciated.
You can use the $toObjectId aggregation operator (MongoDB 4.0+), which converts a string id to an ObjectId: db.role.aggregate([ { "$lookup": { "from": "user", "let": { "userId": "$_id" }, "pipeline": [ { "$addFields": { "userId": { "$toObjectId": "$userId" }}}, { "$match": { "$expr": { "$eq": [ "$userId", "$$userId" ] } } } ], "as": "output" }} ]) Or you can use the $toString aggregation operator (MongoDB 4.0+), which converts an ObjectId to a string: db.role.aggregate([ { "$addFields": { "userId": { "$toString": "$_id" }}}, { "$lookup": { "from": "user", "localField": "userId", "foreignField": "userId", "as": "output" }} ])
MongoDB
41,093,647
73
Is it necessary to give 'worker' information in the Procfile? If yes, what is it actually? I have already added the web: node server/server.js entry in the Procfile.
A Procfile is a mechanism for declaring what commands are run by your application's dynos on the Heroku platform. That's from Process Types and the Procfile, which is a good introduction, but basically you use the Procfile to tell Heroku how to run the various pieces of your app. The part to the left of the colon on each line is the process type; the part on the right is the command to run to start that process. Process types can be named anything, although web is special, as Heroku will route HTTP requests to processes started with the web name. Other processes, such as background workers, can be named anything, and you can use the Heroku toolbelt to start or stop those processes by referring to them by name. So, in short, a worker entry is not necessary unless you want to run some other process in the background, controlled with the heroku ps command.
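For illustration, a Procfile with an optional worker entry might look like the sketch below; the worker command is hypothetical, and you would only add such a line if you actually have a background process to run (you could then scale it with heroku ps:scale worker=1):

web: node server/server.js
worker: node server/worker.js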
MongoDB
16,128,395
73
I am in the middle of building a new app which will have very similar features to Facebook, and although it obviously won't ever have to deal with the likes of 400,000,000 users, it will still be used by a substantial user base and most of them will demand it run very, very quickly. I have extensive experience with MySQL, but a social app offers complexities which MySQL is not well suited to. I know Facebook, Twitter etc. have moved towards Cassandra for a lot of their data, but I am not sure how far to go with it. For example, would you store such things as user data - usernames, passwords, addresses etc. - in Cassandra? Would you store e-mails, comments, status updates etc. in Cassandra? I have also read a lot that something like Neo4j is much better for representing the friend relationships used by social apps, as it is a graph database. I am only just starting down the NoSQL route so any guidance is greatly appreciated. Would anyone be able to advise me on this? I hope I am not being too general!
I would suggest doing some testing with MySQL and with Cassandra. When we had to make a choice between PostgreSQL and MongoDB in one of my jobs, we compared query time on millions of records in both and found out that with about 10M records Postgres would provide us with adequate response times. We knew that we wouldn't get to that number of records for at least a couple of years, and we had experience with Postgres (while MongoDB wasn't very mature at the time), so we went with Postgres. My point is that you can probably look at MySQL benchmarks, do some performance tests yourself, estimate the size of your dataset and how it's going to grow, and make an informed decision that way. As for mixing relational and non-relational databases, it's something we considered as well, but decided that it would be too much of a hassle, as that would mean maintaining two kinds of software, and writing quite a bit of glue code to get the data from both. I think Cassandra would be perfectly capable of storing all your data.
Neo4j
2,581,465
11
What's the difference between these two lines? call apoc.periodic.iterate("MATCH (n:Nodes) return n", "DETACH DELETE n", {batchSize:10000, iterateList:true}) call apoc.periodic.commit("match (n:Nodes) limit {limit} detach delete n RETURN count(*)",{limit:10000}) What is the best way to delete lots of nodes?
The procedure apoc.periodic.iterate takes two queries: the first one creates a set of nodes, and the second one is executed for each result of the first query. So in your example, you match all the Nodes in your database and then delete them with a batch size of 10000. The procedure apoc.periodic.commit takes only one query, and the procedure will execute the query again, and again, and again ... till its result is 0. So in your example, you take the first 10000 nodes and delete them. You repeat this behaviour till there are no more Nodes in your database. To sum up, both queries give the same result, but not in the same way. apoc.periodic.iterate will take a little more RAM than apoc.periodic.commit (the procedure needs to build the set of nodes first), but one good thing with it is that you can use all your CPUs via the configuration parameter parallel:true (but be careful about locks). If you have a really huge number of nodes to delete, I recommend using apoc.periodic.commit.
Neo4j
51,171,928
11
Most of the reasons for using a graph database seem to be that relational databases are slow when making graph like queries. However, if I am using GraphQL with a data loader, all my queries are flattened and combined using the data loader, so you end up making simpler SELECT * FROM X type queries instead of doing any heavy joins. I might even be using a No-SQL database which is usually pretty fast at these kinds of flat queries. If this is the case, is there a use case for Graph databases anymore when combined with GraphQL? Neo4j seems to be promoting GraphQL. I'd like to understand the advantages if any.
GraphQL doesn't negate the need for graph databases at all, the connection is very powerful and makes GraphQL more performant and powerful. You mentioned: However, if I am using GraphQL with a data loader, all my queries are flattened and combined using the data loader, so you end up making simpler SELECT * FROM X type queries instead of doing any heavy joins. This is a curious point, because if you do a lot of SELECT * FROM X and the data is connected by a graph loader, you're still doing the joins, you're just doing them in software outside of the database, at another layer, by another means. If even that software layer isn't joining anything, then what you gain by not doing joins in the database you're losing by executing many queries against the database, plus the overhead of the additional layer. Look into the performance profile of sequencing a series of those individual "easy selects". By not doing those joins, you may have lost 30 years value of computer science research...rather than letting the RDMBS optimize the query execution path, the software layer above it is forcing a particular path by choosing which selects to execute in which order, at which time. It stands to reason that if you don't have to go through any layer of formalism transformation (relational -> graph) you're going to be in a better position. Because that formalism translation is a cost you must pay every time, every query, no exceptions. This is sort of equivalent to the obvious observation that XML databases are going to be better at executing XPath expressions than relational databases that have some XPath abstraction on top. The computer science of this is straightforward; purpose-built data structures for the task typically outperform generic data structures adapted to a new task. I recommend Jim Webber's article on the motivations for a native graph database if you want to go deeper on why the storage format and query processing approach matters. What if it's not a native graph database? If you have a graph abstraction on top of an RDBMS, and then you use GraphQL to do graph queries against that, then you've shifted where and how the graph traversal happens, but you still can't get around the fact that the underlying data structure (tables) isn't optimized for that, and you're incurring extra overhead in translation. So for all of these reasons, a native graph database + GraphQL is going to be the most performant option, and as a result I'd conclude that GraphQL doesn't make graph databases unnecessary, it's the opposite, it shows where they shine. They're like chocolate and peanut butter. Both great, but really fantastic together. :)
Neo4j
50,134,500
11
Getting results on a pandas dataframe from a cypher query on a Neo4j database with py2neo is really straightforward, as: >>> from pandas import DataFrame >>> DataFrame(graph.data("MATCH (a:Person) RETURN a.name, a.born LIMIT 4")) a.born a.name 0 1964 Keanu Reeves 1 1967 Carrie-Anne Moss 2 1961 Laurence Fishburne 3 1960 Hugo Weaving Now I am trying to create (or better MERGE) a set of nodes and relationships from a pandas dataframe into a Neo4j database with py2neo. Imagine I have a dataframe like: LABEL1 LABEL2 p1 n1 p2 n1 p3 n2 p4 n2 where the labels are the column headers and the properties are the values. I would like to reproduce the following cypher query (taking the first row as an example) for every row of my dataframe: query=""" MATCH (a:Label1 {property:p1}) MERGE (a)-[r:R_TYPE]->(b:Label2 {property:n1}) """ I know I can tell py2neo just to graph.run(query), or even run a LOAD CSV cypher script in the same way, but I wonder whether I can iterate through the dataframe and apply the above query row by row WITHIN py2neo.
You can use DataFrame.iterrows() to iterate through the DataFrame and execute a query for each row, passing in the values from the row as parameters. for index, row in df.iterrows(): graph.run(''' MATCH (a:Label1 {property:$label1}) MERGE (a)-[r:R_TYPE]->(b:Label2 {property:$label2}) ''', parameters = {'label1': row['label1'], 'label2': row['label2']}) That will execute one transaction per row. We can batch multiple queries into one transaction for better performance. tx = graph.begin() for index, row in df.iterrows(): tx.evaluate(''' MATCH (a:Label1 {property:$label1}) MERGE (a)-[r:R_TYPE]->(b:Label2 {property:$label2}) ''', parameters = {'label1': row['label1'], 'label2': row['label2']}) tx.commit() Typically we can batch ~20k database operations in a single transaction.
Neo4j
45,738,180
11
I am using Neo4j CE 3.1.1 and I have a relationship WRITES between authors and books. I want to find the N (say N=10 for example) books with the largest number of authors. Following some examples I found, I came up with the query: MATCH (a)-[r:WRITES]->(b) RETURN r, COUNT(r) ORDER BY COUNT(r) DESC LIMIT 10 When I execute this query in the Neo4j browser I get 10 books, but these do not look like the ones written by most authors, as they show only a few WRITES relationships to authors. If I change the query to MATCH (a)-[r:WRITES]->(b) RETURN b, COUNT(r) ORDER BY COUNT(r) DESC LIMIT 10 Then I get the 10 books with the most authors, but I don't see their relationship to authors. To do so, I have to write additional queries explicitly stating the name of a book I found in the previous query: MATCH ()-[r:WRITES]->(b) WHERE b.title="Title of a book with many authors" RETURN r What am I doing wrong? Why isn't the first query working as expected?
Aggregations only have context based on the non-aggregation columns, and with your match, a unique relationship will only occur once in your results. So your first query is asking for each relationship on a row, and the count of that particular relationship, which is 1. You might rewrite this in a couple different ways. One is to collect the authors and order on the size of the author list: MATCH (a)-[:WRITES]->(b) RETURN b, COLLECT(a) as authors ORDER BY SIZE(authors) DESC LIMIT 10 You can always collect the author and its relationship, if the relationship itself is interesting to you. EDIT If you happen to have labels on your nodes (you absolutely SHOULD have labels on your nodes), you can try a different approach by matching to all books, getting the size of the incoming :WRITES relationships to each book, ordering and limiting on that, and then performing the match to the authors: MATCH (b:Book) WITH b, SIZE(()-[:WRITES]->(b)) as authorCnt ORDER BY authorCnt DESC LIMIT 10 MATCH (a)-[:WRITES]->(b) RETURN b, a You can collect on the authors and/or return the relationship as well, depending on what you need from the output.
Neo4j
42,238,183
11
Every time I try to divide something in neo4j, I keep getting zero. I am using the following query: MATCH (m:Member)-[:ACTIVITY{issue_d:"16-Jan"}]->(l:Loan) MATCH (m)-[:ACTIVITY]->(p:Payments) WHERE l.installment<1000 AND p.total_pymnt>0 RETURN (l.funded_amnt-p.total_pymnt),(l.funded_amnt-p.total_pymnt)/(l.funded_amnt), l.funded_amnt, p.total_pymnt, m.member_id LIMIT 1000; I checked to make sure that my values for funded_amnt and total_pymnt are not messing up the operation, and they seem fine. Even when I just do 500/l.funded_amnt I still get zero. What am I doing wrong?
Multiply your numerator by 1.0. MATCH (m:Member)-[:ACTIVITY {issue_d:"16-Jan"}]->(l:Loan) MATCH (m)-[:ACTIVITY]->(p:Payments) WHERE l.installment < 1000 AND p.total_pymnt > 0 RETURN (l.funded_amnt - p.total_pymnt), ((l.funded_amnt - p.total_pymnt) * 1.0) / l.funded_amnt, l.funded_amnt, p.total_pymnt, m.member_id LIMIT 1000; Integer division discards the remainder, so you need to multiply your numerator by 1.0 or wrap it in toFloat(). RETURN 5 / 12; // 0 RETURN 5 * 1.0 / 12; // 0.4166666666666667 RETURN toFloat(5) / 12; // 0.4166666666666667
Neo4j
37,599,289
11
I'm trying to import a local csv file but I get an invalid syntax error. LOAD CSV WITH HEADERS FROM file:C:/csv/user.csv Invalid input '/' (line 1, column 35 (offset: 34)) "LOAD CSV WITH HEADERS FROM file:C:/csv/user.csv"
You need to put the filename in quotes, and add a few more slashes: LOAD CSV WITH HEADERS FROM "file:///C:/csv/user.csv" Full documentation here.
Neo4j
37,299,077
11
My local Neo4j has a lot of transaction logs in data/graph.db: 251M 3 Sep 16:44 neostore.transaction.db.0 255M 3 Sep 20:01 neostore.transaction.db.1 255M 3 Sep 23:20 neostore.transaction.db.2 251M 4 Sep 19:34 neostore.transaction.db.3 250M 4 Sep 22:16 neostore.transaction.db.4 134M 5 Sep 05:02 neostore.transaction.db.5 16B 5 Sep 09:57 neostore.transaction.db.6 16B 7 Sep 16:44 neostore.transaction.db.7 I'm backing the graph.db folder up (I have stopped the neo4j instance) in order to reload in another offsite instance, so it would be nice to reduce the folder size. What methods are there to control these logs? How do I check if a given neostore.transaction.db.X file has been successfully processed? Is it safe to remove older processed files? Logical logs are referred to in the docs, which I believe are the same files: http://neo4j.com/docs/stable/configuration-logical-logs.html In conf/neo4j.properties I've changed the option keep_logical_logs to 100M size: # Keep logical logs, helps debugging but uses more disk space, enabled for # legacy reasons To limit space needed to store historical logs use values such # as: "7 days" or "100M size" instead of "true". keep_logical_logs=100M size and restarted neo4j, but it hasn't removed any of the old log files. Can I do this manually when neo4j has stopped? Or are all of these files required? I stopped neo4j, made a backup of the graph.db directory, removed all bar neostore.transaction.db.7 and started neo4j again. It appears to be happy but... Thanks!
If your database is in good condition, you can delete all the neostore.transaction.db.x files, but I recommend backing them up first. Stop Neo4j. Delete the neostore.transaction.db.x files. Start Neo4j.
Neo4j
32,442,951
11
I'm trying to access neo4j running on an aws ec2 instance from the command line where I get authorisation errors. I've enabled org.neo4j.server.webserver.address=0.0.0.0 and get a 503 error on the first statement and the same errors for the rest using the ec2 host name. ubuntu@ip-10-0-0-192:/etc/neo4j$ curl http://localhost:7474/ { "management" : "http://localhost:7474/db/manage/", "data" : "http://localhost:7474/db/data/" }ubuntu@ip-10-0-0-192:/etc/neo4j$ curl http://localhost:7474/db/data/ { "errors" : [ { "message" : "No authorization header supplied.", "code" : "Neo.ClientError.Security.AuthorizationFailed" } ] }ubuntu@ip-10-0-0-192:/etc/neo4j$ curl http://localhost:7474/user/neo4j/ { "errors" : [ { "message" : "No authorization header supplied.", "code" : "Neo.ClientError.Security.AuthorizationFailed" } ] ubuntu@ip-10-0-0-192:/etc/neo4j$ curl http://localhost:7474/user/neo4j/password { "errors" : [ { "message" : "No authorization header supplied.", "code" : "Neo.ClientError.Security.AuthorizationFailed" } ] Am I logging in correctly or have I missed a step somewhere? Any help is appreciated.
You need to provide an authorization header in your request: Authorization: Basic bmVvNGo6bmVvNGo= curl --header "Authorization: Basic bmVvNGo6bmVvNGo=" http://localhost:7474 bmVvNGo6bmVvNGo= is the Base64 encoding of the default Neo4j credentials neo4j:neo4j. As noted by @michael-hunger, for the auth APIs you still need authorization: curl -u neo4j:password http://localhost:7474 Or turn off authorization in the Neo4j configuration conf/neo4j-server.properties: # Disable authorization dbms.security.auth_enabled=false Here is more information about that: http://neo4j.com/docs/stable/rest-api-security.html
Neo4j
31,966,591
11
Anybody know of any Graph DB's that support time series data? Ideally we're looking for one that will scale well, and ideally use Cassandra or HBase as their persistent store.
Why would you want to do that? Best practice would be to store the dependency graph (in other words, the "Model" of the time series data) in a graphdb, but the actual time series in something more suited to that. Eg a KV store or a log-specific tool like Splunk... See the KNMI (Dutch Weather Service) example for a case study: http://vimeopro.com/neo4j/graphconnect-europe-2015/video/128351859 Cheers! Rik
Neo4j
31,129,492
11
How can I get a node by property value? I mean something like this: I tried match (n) where has (n.name = 'Mark') return n but it's incorrect. And also, how can I find the node with the max property value? I have nodes with a property "VIEWS" and I want to see the node with the most views.
So close... MATCH (n) WHERE n.name = 'Mark' RETURN n It is better to include a node label if you have one that will serve to segregate your node from other nodes of different types. This way if you have an index on the name property and label combination you will get better search responsiveness. For instance, you can create the index... CREATE INDEX ON :Person(name) And then query with the Person label. MATCH (n:Person) WHERE n.name = 'Mark' RETURN n Or alternatively you can query this way... MATCH (n:Person {name:'Mark'}) RETURN n To find the person with the most views... MATCH (n:Person) RETURN n, n.views ORDER BY n.views desc LIMIT 1 To find the most views without the person... MATCH (n:Person) RETURN max(n.views)
Neo4j
29,382,025
11
I have nodes without a label but with a property NodeType. Is there a way to set the label of those nodes to the value of the NodeType property? Thanks!
No, currently there is no way to define a label from a variable. You'll have to do it in your application by fetching all the nodes you want to add a label to and sending a Cypher query to add the label. A quick example in PHP: $nodes = $client->sendCypherQuery('MATCH (n) WHERE n.nodeType = "MyType" RETURN n'); foreach ($nodes as $node) { $label = $node->getProperty('nodeType'); $id = $node->getId(); $client->sendCypherQuery('MATCH (n) WHERE id(n) = '.$id.' SET n :'.$label); }
Neo4j
26,536,573
11
Previously I had a problem when making a 'backup', as shown in this question, where I got an error when trying to restore the database because I made the copy while the database was running. So I did an experiment with a new database from another computer (this time with Ubuntu). I tried this: I created some nodes and relations, very few, like 10 (the matrix example). Then I stopped the neo4j service. I copied the folder data, which contains graph.db, to another location. After that I deleted the graph.db folder and started neo4j. It automatically created a new graph.db folder and the database runs as new without any data; that is normal. Then I stopped it again and pasted the old graph.db folder back. I get an error: Starting Neo4j Server...WARNING: not changing user waiting for server to be ready... Failed to start within 120 seconds. The error appears after 5 seconds, not after 120 seconds. I tried pasting the folder called data. Same error. How should I backup and restore in neo4j community offline manually? I read in some posts that you only copy and restore, but that does not work. Thank you for your help
Online backup, in a sense of taking a consistent backup while Neo4j is running, is only available in Neo4j enterprise edition. Enterprise edition's backup also features a verbose consistency check of the backup, something you do not get in community either. The only safe option in community edition is to shutdown Neo4j cleanly and copy away the graph.db folder recursively. I'm typically using: cd data tar -zcf graph.db.tar.gz graph.db/ For restoring you shut down neo4j, clean out a existing graph.db folder and restore the original graph.db folder from your backup: cd data rm -rf graph.db tar -zxf graph.db.tar.gz
Neo4j
25,567,744
11
How do I set up the following in neo4j community edition version 2.x: failover, a master-slave setup, a cluster? Is HA (high availability) different from a cluster setup in neo4j?
HA, failover and clustering are only available in Neo4j's enterprise edition. For detailed documentation please refer to http://docs.neo4j.org/chunked/stable/ha.html Neo4j enterprise edition is licensed open source via AGPL or via commercial licensing provided by Neo Technology. The commercial licenses come with support as well. Since I'm working for Neo Technology please reach out to me directly in case you want to know more about the commercial side.
Neo4j
24,646,962
11
I'm looking at visualization options for a graph database project that I have coming up. Part of the job is to provide an interactive visualization of the data for public website visitors. The standard Neo4j Server Web Interface does all I would need it to and more. I was wondering if I could simply embed it in a webpage or provide a public URL (one that could be accessed without a login) that general users could use to view the visualization without being able to edit it or add nodes/relationships. If you know of any examples of how this can be done, I would be very grateful. Thanks!
The Neo4j browser is an Angular.js application using d3.js for visualization. The code is all open source and on https://github.com/neo4j/neo4j/tree/2.2/community/browser/lib/visualization so you can check it out there. In general http://maxdemarzi.com is a good source for visualization blog posts, as is http://neo4j.org/develop/visualization
Neo4j
21,506,825
11
In this cypher query, the longest path/paths between nodes that are connected by INCLUDE relationships with the property status='on' will be returned, but I also want to get the last node of the path/paths. query: START n=node(*) MATCH p=n-[rels:INCLUDE*]->m WHERE ALL (rel IN rels WHERE rel.status='on') WITH COLLECT(p) AS paths, MAX(length(p)) AS maxLength RETURN FILTER(path IN paths WHERE length(path)= maxLength) AS longestPaths How should I add it to the query? Thanks.
This would give two arrays. The first array is the last item in each path, the second is each path: START n=node(*) MATCH p=n-[rels:INCLUDE*]->m WHERE ALL (rel IN rels WHERE rel.status='on') WITH COLLECT(p) AS paths, MAX(length(p)) AS maxLength WITH FILTER(path IN paths WHERE length(path)= maxLength) AS longestPaths RETURN EXTRACT(path IN longestPaths | LAST(path)) as last, longestPaths
Neo4j
19,772,472
11
I need to delete some node properties from my graph. Following the cypher guidelines I have tried the following: START n=node(1) DELETE n.property RETURN n I get an error message: Expression `Property` yielded `true`. Don't know how to delete that. I can replicate this on console.neo4j.org. How are you supposed to delete the property of a node?
What version of Neo4j are you using? Since Neo4j 2.0 (I'm not sure what milestone exactly, tried it with M03), properties are not "deleted" anymore but "removed": START n=node(1) REMOVE n.property RETURN n Should work with Neo4j 2.x. This is also reflected in the documentation. On the right side of the page (perhaps after some loading time) you have a pull-down menu for choosing your Neo4j version. When you go to the DELETE documentation and choose the 2.0.0-M03 milestone, you will notice that the "Delete a property" menu point disappears (link to the M03 documentation on DELETE: http://docs.neo4j.org/chunked/2.0.0-M03/query-delete.html). Instead, the documentation for 2.0.0-M03 on REMOVE (here: http://docs.neo4j.org/chunked/2.0.0-M03/query-remove.html) does now list the "Remove a property" section.
Neo4j
18,010,551
11
I'm trying to use Neo4j in my app, but I have a problem with big log files. Are they necessary, or is there some way to reduce their number and size? At the moment I see files like: nioneo_logical.log.v0 nioneo_logical.log.v1 nioneo_logical.log.v2 etc and they are ~26MB each (over 50% of the neo4j folder).
These files are created whenever the logical logs are rotated. You can configure retention rules for them in the server properties file; see details here: http://docs.neo4j.org/chunked/stable/configuration-logical-logs.html You can safely remove them (but only the *.v* files) if your database is shut down and in a clean state. Don't remove them while the db is running, because they could be needed for recovery after a crash.
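For reference, the retention setting in conf/neo4j.properties (for the 1.x/2.x-era versions discussed here) looks like the snippet below; treat the exact values as examples:

# Keep logical logs for 7 days, or cap them by size, or keep none at all:
keep_logical_logs=7 days
# keep_logical_logs=100M size
# keep_logical_logs=false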
Neo4j
14,696,819
11
How can I get the version of the currently running neo4j-server (or server information in general) via REST? Is there any "/status" URI or something similar?
Try this one: GET http://localhost:7474/db/manage/server/version This will give you a JSON response like { "edition" : "community", "version" : "2.3.3" }
Neo4j
10,881,485
11
This is how you can sort (order) results from Neo4j graph using Gremlin: g.v(id).out('knows').sort{it.name} or g.v(id).out('knows').sort{a,b -> a.name <=> b.name} This is how to limit result using offset/skip and limit: g.v(id).out('knows')[0..9] However if you combine both sort and limit g.v(id).out('knows').sort{it.name}[0..9] it would throw an error... javax.script.ScriptException: groovy.lang.MissingMethodException: No signature of method: java.util.ArrayList$ListItr.getAt() is applicable for argument types: (groovy.lang.IntRange) values: [0..9] Possible solutions: getAt(java.lang.String), getAt(int), next(), mean(), set(java.lang.Object), putAt(java.lang.String, java.lang.Object)
It took me a while to figure out that native Groovy methods like sort do not return Pipes, but iterators, iterables, etc. As such, to convert one of these objects back into a Pipeline flow you need to use _(): g.v(id).out('knows').sort{it.name}._()[0..9]
Neo4j
10,367,331
11
I'm playing around with neo4j, and I was wondering, is it common to have a type property on nodes that specify what type of Node it is? I've tried searching for this practice, and I've seen some people use name for a purpose like this, but I was wondering if it was considered a good practice or if indexes would be the more practical method? An example would be a "User" node, which would have type: user, this way if the index was bad, I would be able to do an all-node scan and look for types of user.
Labels have been added to neo4j 2.0. They fix this problem. You can create nodes with labels: CREATE (me:American {name: "Emil"}) RETURN me; You can match on labels: MATCH (n:American) WHERE n.name = 'Emil' RETURN n You can set any number of labels on a node: MATCH (n) WHERE n.name='Emil' SET n :Swedish:Bossman RETURN n You can delete any number of labels on a node: MATCH (n { name: 'Emil' }) REMOVE n:Swedish Etc...
Neo4j
10,239,709
11
I'm getting ready to start a project where I will be building a recommendation engine for restaurants. I have been waffling between neo4j (graph db) and mongodb (document db). My nodes/documents will be things like restaurant and person. I know I will want some edges, something like person->likes->restaurant, or person->ate_at->restaurant. My main query, however, will be to find restaurants within X miles of location Y. If I have 20 restaurants within X miles of Y, but not connected by any edges, how will neo4j be able to handle the spatial query? I know with mongodb I can index on lat/long and query all restaurant types. Does neo4j offer the same functionality in a disconnected graph? When it comes to answering questions like 'which restaurants do my friends eat at most often?', is neo4j (graph db) the way to go? Or will mongodb (document db) provide me similar functionality?
Neo4j Spatial introduces a spatial RTree index (or other means of indexing) that is part of the graph itself. That means even disconnected domain entities will be found via the spatial search if you index them (that is, relationships will connect the spatial index to the restaurants). Also, this is flexible enough that you can combine the raw bounding-box search in the RTree with other things, like a check on the restaurants' categories, in the same go, since you can hop out of and into different parts of the graph. This way, Neo4j Spatial supports the full range of search capabilities that you would expect from a full topology, like combined searches and searches on polygons with holes, etc. Be aware that Neo4j Spatial is at 0.7, so be gentle and ask on http://groups.google.com/group/neo4j/about :)
Neo4j
9,605,271
11
I wonder what REST API clients are available for use from Ruby (not JRuby, so native bindings are not an option)? Ideally, I would want an API similar to the neo4j gem or ActiveRecord (validations, migrations, observers etc.). The currently available (REST) tooling doesn't even come close to what we have, for example, in ActiveRecord: neography - just a plain REST API, nothing to do with models etc. neology - just a wrapper over neography, and isn't a full-featured ActiveModel. architect4r - conforms to ActiveModel, but provides only one way to query data (the Cypher language), and has no index support. I like the code of architect4r a little bit more (primarily because it uses ActiveModel). But neology seems to be the more pragmatic choice as it is already using neography under the hood. The choice is pretty small and tough. Could you please tell me when one should be used rather than the other? Also, any recommendations that would help me decide on a gem are very welcome. Thanks.
The short answer is that there is no any mature ActiveModel-like gems for RESTful neo4j. The most common scenario is to just use Neography.
Neo4j
8,335,136
11
Has anyone got any experience of using Neo4j with terabyte-sized datasets? I would like to hear about your experiences with how Neo4j performs.
As long as your disk is large and fast enough and your memory allows for caching of the relevant (hot) portion of your data, you shouldn't run into issues. There are optimizations for tuning the Neo4j datastore to specific needs. Otherwise it depends on the kind of your dataset. Query performance shouldn't be an issue, insert performance might suffer if you have to do a lot of index lookups for joining imported nodes (But the Neo4j team works on that). Perhaps you should join the Neo4j mailing list to answer all your questions more consistently.
Neo4j
5,680,169
11
I recently started researching database features. At the moment I'm looking into the Neo4j graph database. Unfortunately, I can't find every bit of information I need. I found most information except the following: Supported datatypes? (Integer, …) Max. database size? Max. nodes in db? Max. relations in db?
The supported datatypes: boolean or boolean[] byte or byte[] short or short[] int or int[] long or long[] float or float[] double or double[] char or char[] java.lang.String or String[] Source: Neo4j API docs There's no limit on database size, but the current release (1.2) has limitations on the number of nodes, relationships and properties. The limit on each of these is 4 billion. The work on increasing the limits is done right now, and will be included in a milestone release soon. The new limit is 32B on nodes and relationships and 64B on properties. In the 1.3.M03 milestone release support for a more efficient way of storing short strings was included, which will lower disk consumption considerably for many datasets. See Better support for short strings in Neo4j.
Neo4j
5,152,164
11
I have a cypher script file and I would like to run it directly. All answers I could find on SO to the best of my knowledge use the command neo4j-shell which in my version (Neo4j server 3.5.5) seems to be deprecated and substituted with the command cyphershell. Using the command sudo ./neo4j-community-3.5.5/bin/cypher-shell --help I got the following instructions. usage: cypher-shell [-h] [-a ADDRESS] [-u USERNAME] [-p PASSWORD] [--encryption {true,false}] [--format {auto,verbose,plain}] [--debug] [--non-interactive] [--sample-rows SAMPLE-ROWS] [--wrap {true,false}] [-v] [--driver-version] [--fail-fast | --fail-at-end] [cypher] A command line shell where you can execute Cypher against an instance of Neo4j. By default the shell is interactive but you can use it for scripting by passing cypher directly on the command line or by piping a file with cypher statements (requires Powershell on Windows). My file is the following which tries to create a graph from csv files and it comes from the book "Graph Algorithms". WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data" AS base WITH base + "transport-nodes.csv" AS uri LOAD CSV WITH HEADERS FROM uri AS row MERGE (place:Place {id:row.id}) SET place.latitude = toFloat(row.latitude), place.longitude = toFloat(row.latitude), place.population = toInteger(row.population) WITH "https://github.com/neo4j-graph-analytics/book/raw/master/data/" AS base WITH base + "transport-relationships.csv" AS uri LOAD CSV WITH HEADERS FROM uri AS row MATCH (origin:Place {id: row.src}) MATCH (destination:Place {id: row.dst}) MERGE (origin)-[:EROAD {distance: toInteger(row.cost)}]->(destination) When I try to pass the file directly with the command: sudo ./neo4j-community-3.5.5/bin/cypher-shell neo_4.cypher first it asks for username and password but after typing the correct password (the wrong password results in the error The client is unauthorized due to authentication failure.) I get the error: Invalid input 'n': expected <init> (line 1, column 1 (offset: 0)) "neo_4.cypher" ^ When I try piping with the command: sudo cat neo_4.cypher| sudo ./neo4j-community-3.5.5/bin/cypher-shell -u usr -p 'pwd' no output is generated and no graph either. How to run a cypher script file with the neo4j command cypher-shell?
Use cypher-shell -f yourscriptname. Check with --help for more description.
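Two concrete invocations as a sketch, assuming Neo4j is listening on the default bolt port; the -f flag is present in recent cypher-shell builds, and piping works as the --help text above already notes:

cypher-shell -a bolt://localhost:7687 -u neo4j -p <password> -f neo_4.cypher
cat neo_4.cypher | cypher-shell -u neo4j -p <password>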
Neo4j
56,038,659
10
What I did: neo4j console (works fine), Ctrl-C. Upon restarting I get the message above. I deleted /var/lib/neo4j/data/databases/graph.db/store_lock, then I get Externally locked: /var/lib/neo4j/data/databases/graph.db/neostore Is there any way of cleaning the lock (short of reinstalling)?
Killing the Java process and deleting the store_lock worked for me: Found the lingering process, ps aux | grep "org.neo4j.server" killed it, kill -9 <pid-of-neo4js-java-process> and deleted sudo rm /var/lib/neo4j/data/databases/graph.db/store_lock Allegedly, just killing the lingering process may do the trick but I went ahead and deleted the lock anyway.
Neo4j
44,757,181
10
Has there been any update to the syntax of an IF/ELSE statement in Cypher? I know about CASE and the FOREACH "hacks" but they are so unsightly to read :) I was wanting to do something with optional parameters such as: CASE WHEN exists($refs.client) THEN MATCH (cl:client {uuid: $refs.client}) END ... // and later use it like CASE WHEN exists(cl) THEN DELETE tcr MERGE (t)-[:references]->(cl) END // and again in my return RETURN { client: CASE WHEN exists(cl) THEN {uuid: cl.uuid} ELSE NULL END, } I know that doesn't make a lot of sense given the context, but I'm basically passing in a refs object which may or may not contain parameters (or the parameters exist and are NULL) Somewhere I read there might be an update to how an "if/else" may be handled in neo4j so I really just wanted to check in and see if anyone was aware of a "nicer" way to handle cases like this. Currently, I just handle all my queries in code and run a bunch of smaller queries, but it requires duplicate lookups for creating and deleting references. I'd like to move it all into one larger query so I can use variable references. Again, I know I could use FOREACH...CASE, but when there is a lot of smaller cases like this, it gets hairy. Currently the error is { Error: Invalid input 'S': expected 'l/L' (line 7, column 9 (offset: 246)) " CASE true WHEN exists($refs.client) THEN MATCH (cl:client {uuid: $refs.client}) END" ^ I also know that I can use WITH...CASE if I'm passing back a known value, but cannot do a MATCH inside it. One of the reasons for wanting to do MATCH inside the CASE at the top of the query, is because I want the query to fail if the property on refs exists but the MATCH does not succeed. Using OPTIONAL MATCH does not accomplish this. EDIT Oh, also... I'm reviewing using MATCH (cl:client {uuid: $refs.client}) WHERE exists($refs.client) but I recall that not working correctly. EDIT I can do MATCH...WHERE exists() but later it's futile if I can't do MERGE WHERE exists() EDIT For reference to show why I'm asking about an IF/ELSE, here is the query I'm looking at. I've modified it from the above example so it doesn't error out. MATCH (u:user {uuid: $uid})-[:allowed_to {read: true}]->(c:company {uuid: $cid}) MATCH (t:timesheet {uuid: $tid})<-[:owns]-(:timesheets)<-[:owns]-(u) // Make sure the incoming references are valid or fail query // Here, I'd like only do a match IF $refs.client exists and IS NOT NULL. If it is null or does not exist, I don't want the query to fail. OPTIONAL MATCH will not fail if the value is passed in is invalid but will simply return NULL. Which is why IF/ELSE (or CASE) would be helpful here. 
MATCH (cl:client {uuid: $refs.client}) MATCH (ca:case {uuid: $refs.case}) MATCH (s:step {uuid: $refs.step}) MATCH (n:note {uuid: $refs.note}) // clone timesheet entry to a revision CREATE (t)-[:assembled_with]->(r:revision) SET r = t, r.created_at = $data.updated_at WITH * // Get the old references MATCH (t)-[tcr:references]->(rc:client) MATCH (t)-[tcar:references]->(rca:case) MATCH (t)-[tsr:references]->(rs:step) MATCH (t)-[tnr:references]->(rn:note) // Copy old references to revision (won't create new relationships with NULL) MERGE (r)-[:references]->(rc) MERGE (r)-[:references]->(rca) MERGE (r)-[:references]->(rs) MERGE (r)-[:references]->(rn) // Update the current timesheet with new data SET t += $data // If new references are incoming, delete the old ones and update for new ones DELETE tcr DELETE tcar DELETE tsr DELETE tnr MERGE (t)-[:references]->(cl) MERGE (t)-[:references]->(ca) MERGE (t)-[:references]->(s) MERGE (t)-[:references]->(n) WITH * // Get the new count of revisions MATCH (t)-[:assembled_with]->(_r:revision) RETURN { uuid: t.uuid, start: t.start, end: t.end, description: t.description, client: CASE WHEN exists(cl.uuid) THEN {uuid: cl.uuid} ELSE NULL END, case: CASE WHEN exists(ca.uuid) THEN {uuid: ca.uuid} ELSE NULL END, step: CASE WHEN exists(s.uuid) THEN {uuid: s.uuid} ELSE NULL END, note: CASE WHEN exists(n.uuid) THEN {uuid: n.uuid} ELSE NULL END, revisions: count(_r) }
APOC Procedures just updated with support for conditional cypher execution. You'll need version 3.1.3.7 or greater (if using Neo4j 3.1.x), or version 3.2.0.3 or greater (if using Neo4j 3.2.x). Here's an example of some of the cases you mentioned, using the new procedures: CALL apoc.when($refs.client IS NOT NULL, "MATCH (cl:client {uuid: refs.client}) RETURN cl", '', {refs:$refs}) YIELD value WITH value.cl as cl // which might be null... ... ... CALL apoc.do.when(cl IS NOT NULL, "DELETE tcr MERGE (t)-[:references]->(cl)", '', {tcr:tcr, t:t, cl:cl}) YIELD value ... ... RETURN { client: cl {.uuid}, ... } In your return, map projection is enough to meet your needs, you'll get an object with the uuid if cl exists, or a null for client if not.
Neo4j
43,481,472
10
I have a graph database that maps out connections between buildings and bus stations, where the graph contains other connecting pieces like roads and intersections (among many node types). What I'm trying to figure out is how to filter a path down to only return specific node types. I have two related questions that I'm currently struggling with. Question 1: How do I return the labels of nodes along a path? It seems like a logical first step is to determine what type of nodes occur along the path. I have tried the following: MATCH p=(a:Building)-[:CONNECTED_TO*..5]-(b:Bus) WITH nodes(p) AS nodes RETURN DISTINCT labels(nodes); However, I'm getting a type exception error that labels() expects data of type node and not Collection. I'd like to dynamically know what types of nodes are on my paths so that I can eventually filter my paths. Question 2: How can I return a subset of the nodes in a path that match a label I identified in the first step? Say I found that between (a:Building) and (d1:Bus) and (d2:Bus) I can expect to find (:Intersection) nodes and (:Street) nodes. This is a simplified model of my graph: (a:Building)--(:Street)--(:Street)--(b1:Bus) \--(:Street)--(:Intersection)--(:Street)--(b2:Bus) I've written a MATCH statement that would look for all possible paths between (:Building) and (:Bus) nodes. What would I need to do next to filter to selectively return the Street nodes? MATCH p=(a:Building)-[r:CONNECTED_TO*]-(b:Bus) // Insert logic to only return (:Street) nodes from p Any guidance on this would be greatly appreciated!
To get the distinct labels along matching paths: MATCH p=(a:Building)-[:CONNECTED_TO*..5]-(b:Bus) WITH NODES(p) AS nodes UNWIND nodes AS n WITH LABELS(n) AS ls UNWIND ls AS label RETURN DISTINCT label; To return the nodes that have the Street label. MATCH p=(a:Building)-[r:CONNECTED_TO*]-(b:Bus) WITH NODES(p) AS nodes UNWIND nodes AS n WITH n WHERE 'Street' IN LABELS(n) RETURN n;
Neo4j
39,733,178
10
I am working on migrating data from Postgres to a graph database manually. I have written the script below: import psycopg2 from py2neo import authenticate, Graph authenticate("localhost:7474", "neo4j", "password") n4j_graph = Graph("http://localhost:7474/db/data/") try: conn=psycopg2.connect("dbname='db_name' user='user' password='password'") except: print "good bye" cur = conn.cursor() try: cur.execute("""SELECT * from table_name""") except: print "not found" rows = cur.fetchall() for row in rows: username = row[4] email = row[7] s = '''MERGE (u:User { username: "%(username)s"}) MERGE (e:Email { email: "%(email)s"}) CREATE UNIQUE (u)-[:BELONGS_TO]->(e)''' %{"username": username, "email": email} print s n4j_graph.cypher.execute(s) Error: AttributeError: 'Graph' object has no attribute 'cypher' I resolved this issue by updating py2neo to version 2.0.8. pip uninstall py2neo pip install py2neo==2.0.8 I am following the py2neo documentation. However, in production I am still getting: AttributeError: 'Graph' object has no attribute 'cypher' GET 404 response What can be the issue?
I had this problem too. In my case I was looking at the py2neo v2 documentation, but py2neo v3 was installed on my machine. You should check your py2neo version and replace .cypher.execute({query}) with .run({query}). The previous version of py2neo allowed Cypher execution through Graph.cypher.execute(). This facility is now instead accessible via Graph.run(), which returns a lazily-evaluated Cursor rather than an eagerly-evaluated RecordList.
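Applied to the script in the question, the fix is a one-line change (a sketch; the rest of the script stays as it is):
# py2neo v3: Graph.run() replaces Graph.cypher.execute()
n4j_graph.run(s)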
Neo4j
37,530,309
10
I have recently installed Neo4j 3.0, and since I need to enable outside access, I need the configuration file. In 2.3.3 the configuration files were located within the /var/lib/neo4j/ structure, but I am not able to locate them anywhere in the 3.0 version. I know the file has been renamed to neo4j.conf. My folder structure in the above directory is: plugins import data certificates I am running Ubuntu 16.04 (Xenial Xerus). I have tried the documentation; however, it doesn't describe the location. I also already tried find -name "neo4j.conf" without luck.
[UPDATED] According to the 3.0.0 Operations Manual, the default location of the config file for "Debian" is: /etc/neo4j/neo4j.conf
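Since the question is about enabling outside access, the relevant lines in that file look roughly like the sketch below. The exact key names vary between 3.x releases, so treat them as assumptions to check against the comments in your shipped neo4j.conf:
# /etc/neo4j/neo4j.conf
# accept non-local connections (3.0-style connector keys; verify for your version)
dbms.connector.http.address=0.0.0.0:7474
dbms.connector.bolt.address=0.0.0.0:7687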
Neo4j
36,919,507
10
I try to import CSV in a Neo4j Database and I have a problem. On my desktop computer (windows 7, java 1.8.0_40-b25), the LOAD CSV works great. But on the server (windows 2012 R2, java 1.8.0_65-b17), i have this error message "URI is not hierarchical". I try to put the data on C:, F: ... no change. Here's the code : USING PERIODIC COMMIT 100 LOAD CSV WITH HEADERS FROM "file:F:/Neo4JData/Destination.csv" AS line MERGE (d:Destination {`Code`: line.`Code`}); Thanks for your help.
Are you using 2.3.0 Community Edition? Try: USING PERIODIC COMMIT 10000 LOAD CSV WITH HEADERS FROM 'file:///F:\\Neo4JData\\Destination.csv' AS line MERGE (d:Destination {`Code`: line.`Code`});
Neo4j
33,481,042
10
I need to group the data from a neo4j database and then to filter out everything except the top n records of every group. Example: I have two node types : Order and Article. Between them there is an "ADDED" relationship. "ADDED" relationship has a timestamp property. What I want to know (for every article) is how many times it was among the first two articles added to an order. What I tried is the following approach: get all the Order-[ADDED]-Article sort the result from step 1 by order id as first sorting key and then by timestamp of ADDED relationship as second sorting key; for every subgroup from step 2 representing one order, keep only the top 2 rows; Count distinct article ids in the output of step 3; My problem is that I got stuck at step 3. Is it possible to get top 2 rows for every subgroup representing an order? Thanks, Tiberiu
Try MATCH (o:Order)-[r:ADDED]->(a:Article) WITH o, r, a ORDER BY o.oid, r.t WITH o, COLLECT(a)[..2] AS topArticlesByOrder UNWIND topArticlesByOrder AS a RETURN a.aid AS articleId, COUNT(*) AS count Results look like articleId count 8 6 2 2 4 5 7 2 3 3 6 5 0 7 on this sample graph created with FOREACH(opar IN RANGE(1,15) | MERGE (o:Order {oid:opar}) FOREACH(apar IN RANGE(1,5) | MERGE (a:Article {aid:TOINT(RAND()*10)}) CREATE o-[:ADDED {t:timestamp() - TOINT(RAND()*1000)}]->a ) )
Neo4j
32,951,651
10
I was wondering if I could run multiple standalone instances of neo4j on a single machine. I understand that I could configure multiple instances as HA cluster (here), but that is not my intention, I only need two totally different and independent instances of neo4j on my machine (Which is a Mac OSX if that makes a difference). This is only for my dev testing and I tried having two separate directories with different data/ and setting two different ports for them, but only one runs properly. I would appreciate any help coming my way. Thank you.
The easiest way is to unpack the Neo4j installation into two different locations. In one of the locations you need to change the port settings in conf/neo4j-server.properties and, if neo4j-shell is enabled, in conf/neo4j.properties as well. Also consider setting dbms.pagecache.memory to a reasonable value. By default each instance will eat up to 75% of RAM minus heap space, which is too much when running multiple instances on one box. Based on @mepla's findings: the https port in neo4j-server.properties needs to be changed as well.
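A sketch of the typical changes for the second copy (2.x-era property names; double-check them against the comments in your config files):
# second instance, conf/neo4j-server.properties
org.neo4j.server.webserver.port=7475
org.neo4j.server.webserver.https.port=7476
# second instance, conf/neo4j.properties (only if the remote shell is enabled)
remote_shell_port=1338
# keep the page cache bounded so the instances don't fight over RAM
dbms.pagecache.memory=2g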
Neo4j
32,548,590
10
I want to delete an element from an array property on a node using Cypher. I know the value of the element I want to delete, but not its index. e.g. suppose I have a node like ({some_array: ["apples", "oranges"]}) I want a query like (pseudocode): MATCH (n) REMOVE "oranges" IN n.some_array
Cypher doesn't have functions for mutating arrays, but you can create a new array with "oranges" removed using FILTER and write it back to the same property: MATCH (n) WHERE HAS(n.some_array) SET n.some_array = FILTER(x IN n.some_array WHERE x <> "oranges");
Neo4j
31,953,794
10
is there pagination support for custom queries in SDN4? If yes, how does it work? If no, is there a workarround? I have the following Spring Data Neo4j 4 repository: @Repository public interface TopicRepository extends GraphRepository<Topic>,IAuthorityLookup { // other methods omitted @Query("MATCH (t:Topic)-[:HAS_OFFICER]->(u:User) " + "WHERE t.id = {0} " + "RETURN u") public Page<User> topicOfficers(Long topicId, Pageable pageable); } And the corresponding testcase: @Test public void itShouldReturnAllOfficersAsAPage() { Pageable pageable = new PageRequest(1,10); Page<User> officers = topicRepository.topicOfficers(1L, pageable); assertNotNull(officers); } When I run the test, I run into the following exception Failed to convert from type java.util.ArrayList<?> to type org.springframework.data.domain.Page<?> for value '[org.lecture.model.User@1]'; nested exception is org.springframework.core.convert.ConverterNotFoundException: No converter found capable of converting from type java.util.ArrayList<?> to type org.springframework.data.domain.Page<?> This is my setup: dependencies { //other dependencies omitted compile("org.neo4j:neo4j-cypher-dsl:2.0.1") compile "org.neo4j.app:neo4j-server:2.2.2" compile(group: 'org.springframework.data', name: 'spring-data-neo4j', version: '4.0.0.BUILD-SNAPSHOT') compile(group: 'org.springframework.data', name: 'spring-data-neo4j', version: '4.0.0.BUILD-SNAPSHOT', classifier: 'tests') testCompile(group: 'org.neo4j', name: 'neo4j-kernel', version: '2.2.2', classifier: 'tests') testCompile(group: 'org.neo4j.app', name: 'neo4j-server', version: '2.2.2', classifier: 'tests') testCompile(group: 'org.neo4j', name: 'neo4j-io', version: '2.2.2', classifier: 'tests') } The Snapshot I use should able to handle pagination, since the following test runs just fine: @Test public void itShouldReturnAllTopicsAsAPage() { Pageable pageable = new PageRequest(1,10); Page<Topic> topics = topicRepository.findAll(pageable); assertNotNull(topics); }
This is now allowed using Sort or Pageable interfaces in your query, and was fixed in DATAGRAPH-653 and marked as fixed in version 4.2.0.M1 (currently in pre-release). Queries such as the following are possible: @Query("MATCH (movie:Movie {title={0}})<-[:ACTS_IN]-(actor) RETURN actor") List<Actor> getActorsThatActInMovieFromTitle(String movieTitle, Sort sort); and: @Query("MATCH (movie:Movie {title={0}})<-[:ACTS_IN]-(actor) RETURN actor") Page<Actor> getActorsThatActInMovieFromTitle(String movieTitle, PageRequest page); (sample from Cypher Examples in the Spring Data + Neo4j docs) Finding Spring Data Neo4j Pre-Release Milestone Builds: You can view the dependencies information for any release on the project page. And for the 4.2.0.M1 build the information for Gradle (you can infer Maven) is: dependencies { compile 'org.springframework.data:spring-data-neo4j:4.2.0.M1' } repositories { maven { url 'https://repo.spring.io/libs-milestone' } } Any newer final release should be used instead.
Neo4j
30,624,435
10
Currently I use Neo4j 2.2.0-RC01. It has basic auth enabled by default. How can I disable the default basic auth on Neo4j 2.2.0-RC01?
In file conf/neo4j-server.properties, change the dbms.security.auth_enabled to false and restart Neo4j: # Require (or disable the requirement of) auth to access Neo4j dbms.security.auth_enabled=false
Neo4j
29,096,616
10
Is it possible to extract in a single cypher query a limited set of nodes and the total number of nodes? match (n:Molecule) with n, count(*) as nb limit 10 return {N: nb, nodes: collect(n)} The above query properly returns the nodes, but returns 1 as number of nodes. I certainly understand why it returns 1, since there is no grouping, but can't figure out how to correct it.
The following query returns the counter for the entire number of rows (which I guess is what was needed). Then it matches again and limits your search, but the original counter is still available since it is carried through via the WITH-statement. MATCH (n:Molecule) WITH count(*) AS cnt MATCH (n:Molecule) WITH n, cnt LIMIT 10 RETURN { N: cnt, nodes:collect(n) } AS molecules
Neo4j
27,805,248
10
I’m trying to use LOAD CSV to create nodes with the labels being set to values from the CSV. Is that possible? I’m trying something like: LOAD CSV WITH HEADERS FROM 'file:///testfile.csv' AS line CREATE (x:line.label) ...but I get an invalid syntax error. Is there any way to do this?
bicpence, First off, this is pretty easy to do with a Java batch import application, and they aren't hard to write. See this batch inserter example. You can use opencsv to read your CSV file. If you would rather stick with Cypher, and if you have a finite set of labels to work with, then you could do something like this: USING PERIODIC COMMIT 1000 LOAD CSV WITH HEADERS FROM 'file:///testfile.csv' AS LINE CREATE (n:load {lab:line.label, prop:line.prop}); CREATE INDEX ON :load(lab); MATCH (n:load {lab:'label1'}) SET n:label1 REMOVE n:load REMOVE n.lab; MATCH (n:load {lab:'label2'}) SET n:label2 REMOVE n:load REMOVE n.lab; Grace and peace, Jim
Neo4j
24,992,977
10
Is it possible to change a label on a node using Cypher? I have a node with label Book, as shown below. I want to change the Book label to DeletedBook. (u:Person)-[r]-(b:Book{id:id1}) (u:Person)-[r]-(b:DeletedBook{id:id1})
You can do that using REMOVE on the Book label and SET on the new label: MATCH (p:Person)-[r]-(b:Book {id: id1}) REMOVE b:Book SET b:DeletedBook RETURN b You should check out the Neo4j Cypher Refcard for a complete reference to Cypher 2.x.
Neo4j
24,056,127
10
I have stored a double (-0.1643) as string ("-0.1643") in a property on a neo4j relationship. If I try to filter on this value with a numeric comparison: MATCH (n1:Node)-[r:RELATION]-(n2:Node) WHERE r.number < -0.1 RETURN n1, n2 Cypher throws an error: Don't know how to compare that. Left: "-0.1643" (String); Right: -0.1 (Double) Neo.ClientError.Statement.InvalidSyntax Obviously, I could store the data as a numeric value. But is it possible to convert the string to double in cypher? Something like: MATCH (n1:Node)-[r:RELATION]-(n2:Node) WHERE as.double(r.number) < -0.1 RETURN n1, n2
Check out release 2.0.2. It added the type-conversion functions toInt, toFloat and toStr. It looks like toDouble doesn't exist, but perhaps float is precise enough for you? http://www.neo4j.org/release-notes#2.0.2
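Applied to the query from the question, that looks like this (assuming Neo4j 2.0.2+ so that toFloat is available):
MATCH (n1:Node)-[r:RELATION]-(n2:Node)
WHERE toFloat(r.number) < -0.1
RETURN n1, n2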
Neo4j
21,349,366
10
Under Neo4j v1.9.x, I used the following sort of code. private Category CreateNodeCategory(Category cat) { var node = client.Create(cat, new IRelationshipAllowingParticipantNode<Category>[0], new[] { new IndexEntry(NeoConst.IDX_Category) { { NeoConst.PRP_Name, cat.Name }, { NeoConst.PRP_Guid, cat.Nguid.ToString() } } }); cat.Nid = node.Id; client.Update<Category>(node, cat); return cat; } The reason being that the Node Id was auto generated and I could use it later for a quick look up, start bits in other queries, etc. Like the following: private Node<Category> CategoryGet(long nodeId) { return client.Get<Category>((NodeReference<Category>)nodeId); } This enables the following which appeared to work well. public Category CategoryAdd(Category cat) { cat = CategoryFind(cat); if (cat.Nid != 0) { return cat; } return CreateNodeCategory(cat); } public Category CategoryFind(Category cat) { if (cat.Nid != 0) { return cat; } var node = client.Cypher.Start(new { n = Node.ByIndexLookup(NeoConst.IDX_Category, NeoConst.PRP_Name, cat.Name)}) .Return<Node<Category>>("n") .Results.FirstOrDefault(); if (node != null) { cat = node.Data; } return cat; } Now the cypher Wiki, examples and bad-habits recommend using the .ExecuteWithoutResults() in all the CRUD. So the question I have is how do you have an Auto Increment value for the node ID?
First up, for Neo4j 2 and onwards, you always need to start with the frame of reference "how would I do this in Cypher?". Then, and only then, do you worry about the C#. Now, distilling your question, it sounds like your primary goal is to create a node, and then return a reference to it for further work. You can do this in cypher with: CREATE (myNode) RETURN myNode In C#, this would be: var categoryNode = graphClient.Cypher .Create("(category {cat})") .WithParams(new { cat }) .Return(cat => cat.Node<Category>()) .Results .Single(); However, this still isn't 100% what you were doing in your original CreateNodeCategory method. You are creating the node in the DB, getting Neo4j's internal identifier for it, then saving that identifier back into the same node. Basically, you're using Neo4j to generate auto-incrementing numbers for you. That's functional, but not really a good approach. I'll explain more ... First up, the concept of Neo4j even giving you the node id back is going away. It's an internal identifier that actually happens to be a file offset on disk. It can change. It is low level. If you think about SQL for a second, do you use a SQL query to get the file byte offset of a row, then reference that for future updates? A: No; you write a query that finds and manipulates the row all in one hit. Now, I notice that you already have an Nguid property on the nodes. Why can't you use that as the id? Or if the name is always unique, use that? (Domain relevant ids are always preferable to magic numbers.) If neither are appropriate, you might want to look at a project like SnowMaker to help you out. Next, we need to look at indexing. The type of indexing that you're using is referred to in the 2.0 docs as "Legacy Indexing" and misses out on some of the cool Neo4j 2.0 features. For the rest of this answer, I'm going to assume your Category class looks like this: public class Category { public Guid UniqueId { get; set; } public string Name { get; set; } } Let's start by creating our category node with a label: var category = new Category { UnqiueId = Guid.NewGuid(), Name = "Spanners" }; graphClient.Cypher .Create("(category:Category {category})") .WithParams(new { category }) .ExecuteWithoutResults(); And, as a one-time operation, let's establish a schema-based index on the Name property of any nodes with the Category label: graphClient.Cypher .Create("INDEX ON :Category(Name)") .ExecuteWithoutResults(); Now, we don't need to worry about manually keeping indexes up to date. 
We can also introduce an index and unique constraint on UniqueId: graphClient.Cypher .Create("CONSTRAINT ON (category:Category) ASSERT category.UniqueId IS UNIQUE") .ExecuteWithoutResults(); Querying is now very easy: graphClient.Cypher .Match("(c:Category)") .Where((Category c) => c.UniqueId == someGuidVariable) .Return(c => c.As<Category>()) .Results .Single(); Rather than looking up a category node, to then do another query, just do it all in one go: var productsInCategory = graphClient.Cypher .Match("(c:Category)<-[:IN_CATEGORY]-(p:Product)") .Where((Category c) => c.UniqueId == someGuidVariable) .Return(p => p.As<Product>()) .Results; If you want to update a category, do that in one go as well: graphClient.Cypher .Match("(c:Category)") .Where((Category c) => c.UniqueId == someGuidVariable) .Update("c = {category}") .WithParams(new { category }) .ExecuteWithoutResults(); Finally, your CategoryAdd method currently 1) does one DB hit to find an existing node, 2) a second DB hit to create a new one, 3) a third DB hit to update the ID on it. Instead, you can compress all of this to a single call too using the MERGE keyword: public Category GetOrCreateCategoryByName(string name) { return graphClient.Cypher .WithParams(new { name, newIdIfRequired = Guid.NewGuid() }) .Merge("(c:Category { Name = {name})") .OnCreate("c") .Set("c.UniqueId = {newIdIfRequired}") .Return(c => c.As<Category>()) .Results .Single(); } Basically, Don't use Neo4j's internal ids as a way to hack around managing your own identities. (But they may release some form of autonumbering in the future. Even if they do, domain identities like email addresses or SKUs or airport codes or ... are preferred. You don't even always need an id: you can often infer a node based on its position in the graph.) Generally, Node<T> will disappear over time. If you use it now, you're just accruing legacy code. Look into labels and schema-based indexing. They will make your life easier. Try and do things in the one query. It will be much faster. Hope that helps!
Neo4j
19,534,511
10
In the below query, does the 2nd match pattern john-[r?:HAS_SEEN]->(movie) run on the result of the first match john-[:IS_FRIEND_OF]->(user)-[:HAS_SEEN]->(movie) . I am trying to understand if this is similar to the unix pipe concept i.e. the result of the 1st pattern is the input to the 2nd pattern. start john=node(1) match john-[:IS_FRIEND_OF]->(user)-[:HAS_SEEN]->(movie), john-[r?:HAS_SEEN]->(movie) where r is null return movie;
I don't think I would compare multiple MATCH clauses to the UNIX pipes concept. Using multiple, comma-separated matches is just a way of breaking out of the 1-dimensional constraint of writing relationships with a single sentence. For example, the following is completely valid: MATCH a--b, b--c, c--d, d--e, a--c At the very end I went back and referenced a and c even though they weren't used in the clause directly before. Again, this is just a way of drawing 2 dimensions' worth of relationships by only using 1-dimensional sentences. We're drawing a 2-dimensional picture with several 1-dimensional pieces. On a side note, I WOULD compare the WITH clause to UNIX pipes -- I'd call them analogous. WITH will pipe out any results it finds into the next set of clauses you give it.
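As an illustration of that WITH-as-pipe analogy, here is a minimal sketch in the style of the question's query (the threshold is made up): find movies seen by more than one of John's friends.
START john=node(1)
MATCH john-[:IS_FRIEND_OF]->(user)-[:HAS_SEEN]->(movie)
WITH movie, count(*) AS friendsSeen // aggregate, then pipe into the next clause
WHERE friendsSeen > 1
RETURN movie, friendsSeen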
Neo4j
16,466,625
10
I've been recently exposed to the world of graph databases. Its quite an interesting paradigm shift for an old relational dog like me. Also quite recently, I've been tinkering with liquibase and its been quite a neat tool in managing databases. So, two worlds collide and I was just wondering if there are any tools out there that take on liquibase-like change management for graph databases. I'm especially interested in neo4j and orientdb.
Liquigraph exists now and although still quite new, the author is very receptive to feedback and is actively working on the project.
Neo4j
15,312,760
10
I'm working on a project where I have to deal with graphs... I'm using a graph to get routes by bus and bike between two stops. The fact is, all my relationships contain the time needed to go from the start point of the relationship to the end. In order to get the shortest path between two nodes, I'm using the shortest path function of Cypher. But sometimes the shortest path is not the fastest... Is there a way to get all paths between two nodes not linked by a relationship? Thanks EDIT: In fact I changed my graph to make it easier. I still have all my nodes. Now the relationship type corresponds to the time needed to go from one node to another. The shortestPath function of Cypher gives the path which contains the fewest relationships. I would like it to return the path where the sum of all the types (the time) is the smallest. Is that possible? Thanks
In cypher, to get all paths between two nodes not linked by a relationship, and sort by a total in a weight, you can use the reduce function introduced in 1.9: start a=node(...), b=node(...) // get your start nodes match p=a-[r*2..5]->b // match paths (best to provide maximum lengths to prevent queries from running away) where not(a-->b) // where a is not directly connected to b with p, relationships(p) as rcoll // just for readability, alias rcoll return p, reduce(totalTime=0, x in rcoll: totalTime + x.time) as totalTime order by totalTime You can throw a limit 1 at the end, if you need only the shortest.
Neo4j
14,814,124
10
I had an embedded neo4j server with admin console working within a Play 2.0.1 application. I recently upgraded to the release candidate for compatibilities with DeadBolt and found that the application no longer runs. To start the server I was doing the following: graphDb = (GraphDatabaseAPI) new GraphDatabaseFactory() .newEmbeddedDatabaseBuilder(CONF_DBMETA_LOCATION) .setConfig(ShellSettings.remote_shell_enabled, "true") .newGraphDatabase(); ServerConfigurator config; config = new ServerConfigurator(graphDb); // let the server endpoint be on a custom port srv = new WrappingNeoServerBootstrapper(graphDb, config); srv.start(); Unfortunately I then get: > java.lang.RuntimeException: > org.neo4j.kernel.lifecycle.LifecycleException: Component > 'org.neo4j.kernel.logging.LogbackService@4c043845' failed to > initialize. Please see attached cause exception. I have tried removing slf4j and logback dependencies from my Build.scala where neo4j-server is added but to no avail. It seems that the wrong logback.xml is being loaded by neo4j. Also, if I add notTransitive() to the neo4j-server dependency the logback.xml warnings at startup go away. I imagine that the neo4j specific logback.xml is embedded within the jar(s) and is causing the issue. One potential solution I see is to write a custom configuration via code, but I'm unsure how to do this. Any thoughts? For reference, I get these errors at startup: > 22:11:05,124 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find > resource [logback.groovy] > 22:11:05,125 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find > resource [logback-test.xml] > 22:11:05,125 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource > [logback.xml] at > [jar:file:/Users/steve/Code/play-2.1-RC1/repository/local/play/play_2.10/2.1-RC1/jars/play_2.10.jar!/logback.xml] > 22:11:05,126 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] > occurs multiple times on the classpath. 
> 22:11:05,126 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] > occurs at > [jar:file:/Users/steve/Code/play-2.1-RC1/framework/../repository/cache/org.neo4j.app/neo4j-server/jars/neo4j-server-1.9-SNAPSHOT.jar!/logback.xml] > 22:11:05,126 |-WARN in ch.qos.logback.classic.LoggerContext[default] - Resource [logback.xml] > occurs at > [jar:file:/Users/steve/Code/play-2.1-RC1/repository/local/play/play_2.10/2.1-RC1/jars/play_2.10.jar!/logback.xml] > 22:11:05,139 |-INFO in ch.qos.logback.core.joran.spi.ConfigurationWatchList@733b8bc1 - URL > [jar:file:/Users/steve/Code/play-2.1-RC1/repository/local/play/play_2.10/2.1-RC1/jars/play_2.10.jar!/logback.xml] > is not of type file > 22:11:05,265 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - debug > attribute not set > 22:11:05,614 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate > appender of type [ch.qos.logback.core.ConsoleAppender] > 22:11:05,625 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as > [STDOUT] > 22:11:05,657 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming > default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for > [encoder] property > 22:11:05,707 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level > of ROOT logger to ERROR > 22:11:05,707 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching > appender named [STDOUT] to Logger[ROOT] > 22:11:05,707 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of > configuration. > 22:11:05,709 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator@4a546701 - Registering > current configuration as safe fallback point See below for the full exception: > play.api.UnexpectedException: Unexpected exception[RuntimeException: > org.neo4j.kernel.lifecycle.LifecycleException: Component > 'org.neo4j.kernel.logging.LogbackService@4c043845' failed to > initialize. Please see attached cause exception.] at > play.core.ReloadableApplication$$anonfun$get$1$$anonfun$1.apply(ApplicationProvider.scala:134) > ~[play_2.10.jar:2.1-RC1] at > play.core.ReloadableApplication$$anonfun$get$1$$anonfun$1.apply(ApplicationProvider.scala:101) > ~[play_2.10.jar:2.1-RC1] at scala.Option.map(Option.scala:145) > ~[scala-library.jar:na] at > play.core.ReloadableApplication$$anonfun$get$1.apply(ApplicationProvider.scala:101) > ~[play_2.10.jar:2.1-RC1] at > play.core.ReloadableApplication$$anonfun$get$1.apply(ApplicationProvider.scala:99) > ~[play_2.10.jar:2.1-RC1] at > scala.util.Either$RightProjection.flatMap(Either.scala:523) > [scala-library.jar:na] Caused by: java.lang.RuntimeException: > org.neo4j.kernel.lifecycle.LifecycleException: Component > 'org.neo4j.kernel.logging.LogbackService@4c043845' failed to > initialize. Please see attached cause exception. 
at > org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:258) > ~[neo4j-kernel-1.9.M03.jar:na] at > org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:88) > ~[neo4j-kernel-1.9.M03.jar:na] at > org.neo4j.graphdb.factory.GraphDatabaseFactory$1.newDatabase(GraphDatabaseFactory.java:83) > ~[neo4j-kernel-1.9.M03.jar:1.9.M03] at > org.neo4j.graphdb.factory.GraphDatabaseBuilder.newGraphDatabase(GraphDatabaseBuilder.java:206) > ~[neo4j-kernel-1.9.M03.jar:1.9.M03] at > EmbeddedGraphDB.initializeDatabase(EmbeddedGraphDB.java:70) > ~[na:na] at > EmbeddedGraphDB.<init>(EmbeddedGraphDB.java:51) > ~[na:na] Caused by: org.neo4j.kernel.lifecycle.LifecycleException: > Component 'org.neo4j.kernel.logging.LogbackService@4c043845' failed to > initialize. Please see attached cause exception. at > org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:471) > ~[neo4j-kernel-1.9.M03.jar:1.9.M03] at > org.neo4j.kernel.lifecycle.LifeSupport.init(LifeSupport.java:62) > ~[neo4j-kernel-1.9.M03.jar:1.9.M03] at > org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:96) > ~[neo4j-kernel-1.9.M03.jar:1.9.M03] at > org.neo4j.kernel.InternalAbstractGraphDatabase.run(InternalAbstractGraphDatabase.java:245) > ~[neo4j-kernel-1.9.M03.jar:na] at > org.neo4j.kernel.EmbeddedGraphDatabase.<init>(EmbeddedGraphDatabase.java:88) > ~[neo4j-kernel-1.9.M03.jar:na] at > org.neo4j.graphdb.factory.GraphDatabaseFactory$1.newDatabase(GraphDatabaseFactory.java:83) > ~[neo4j-kernel-1.9.M03.jar:1.9.M03] Caused by: > org.neo4j.kernel.lifecycle.LifecycleException: Component > 'org.neo4j.kernel.logging.LogbackService$1@1955bd61' was successfully > initialized, but failed to start. Please see attached cause exception. > at > org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.start(LifeSupport.java:495) > ~[neo4j-kernel-1.9.M03.jar:1.9.M03] at > org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:105) > ~[neo4j-kernel-1.9.M03.jar:1.9.M03] at > org.neo4j.kernel.logging.LogbackService.init(LogbackService.java:106) > ~[neo4j-kernel-1.9.M03.jar:1.9.M03] at > org.neo4j.kernel.lifecycle.LifeSupport$LifecycleInstance.init(LifeSupport.java:465) > ~[neo4j-kernel-1.9.M03.jar:1.9.M03] at > org.neo4j.kernel.lifecycle.LifeSupport.init(LifeSupport.java:62) > ~[neo4j-kernel-1.9.M03.jar:1.9.M03] at > org.neo4j.kernel.lifecycle.LifeSupport.start(LifeSupport.java:96) > ~[neo4j-kernel-1.9.M03.jar:1.9.M03] Caused by: > java.lang.NoSuchMethodError: > org.codehaus.janino.ClassBodyEvaluator.setImplementedInterfaces([Ljava/lang/Class;)V > at > ch.qos.logback.core.joran.conditional.PropertyEvalScriptBuilder.build(PropertyEvalScriptBuilder.java:48) > ~[logback-core.jar:na] at > ch.qos.logback.core.joran.conditional.IfAction.begin(IfAction.java:67) > ~[logback-core.jar:na] at > ch.qos.logback.core.joran.spi.Interpreter.callBeginAction(Interpreter.java:276) > ~[logback-core.jar:na] at > ch.qos.logback.core.joran.spi.Interpreter.startElement(Interpreter.java:148) > ~[logback-core.jar:na] at > ch.qos.logback.core.joran.spi.Interpreter.startElement(Interpreter.java:130) > ~[logback-core.jar:na] at > ch.qos.logback.core.joran.spi.EventPlayer.play(EventPlayer.java:50) > ~[logback-core.jar:na] EDIT1 More Details I removed the logback.xml file from play_2.10.jar and no longer get the duplicate warning from logback at startup of the play application. 
I then tried locating putting the contents of both the neo4j logback.xml and play2.1 logback.xml as custom-logback.xml within the root of my play project. The same path as Play.application().path() Perhaps this is the wrong location for neo4j to pick it up? When reviewing dependencies I have one janino required by neo4j-server. Also, I'm not seeing any conflicts in jars for logging but perhaps I'm missing something. Here's my dependency hierarchy from 'play dependencies': https://gist.github.com/4559389 I also tried copying the default configuration listed on the Play2.1 wiki as below into custom-logback.xml with no success: <configuration> <conversionRule conversionWord="coloredLevel" converterClass="play.api.Logger$ColoredLevel" /> <appender name="FILE" class="ch.qos.logback.core.FileAppender"> <file>${application.home}/logs/application.log</file> <encoder> <pattern>%date - [%level] - from %logger in %thread %n%message%n%xException%n</pattern> </encoder> </appender> <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> <encoder> <pattern>%coloredLevel %logger{15} - %message%n%xException{5}</pattern> </encoder> </appender> <logger name="play" level="INFO" /> <logger name="application" level="INFO" /> <root level="ERROR"> <appender-ref ref="STDOUT" /> <appender-ref ref="FILE" /> </root> </configuration> EDIT 2 Definitely seems to be an issue with the logback dependency. Neo4j depends on 0.9.30 and play depends on 1.0.7 it seems. I'm guessing there's an api change between those versions that when the library gets loaded by ?janino? it can't find the appropriate method. Still unsure as to how to specify in the logback.xml properly to select the proper dependency at runtime. Graphs were generated by yed + sbt-dependency-graph.
With regard to the Neo4j lifecycle exception that gets thrown because Play 2.1's newer version of logback is not compatible with Neo4j's. I ran into this issue and ended up just overriding Play's logback to an older, compatible version by putting this in my Build.scala's project dependencies: "ch.qos.logback" % "logback-core" % "1.0.3" force(), // this should override the Play version "ch.qos.logback" % "logback-classic" % "1.0.3" force(), For good measure I also tried excluding any log4j transitive dependencies being pulled in by setting SBT's ivyXML parameter: ivyXML := <dependencies> <exclude module="log4j"/> </dependencies> This is obviously a fragile fix but, at least for Play 2.1-RC2, it seems to work. I still have issues actually configuring the logging for Neo4j so I'll try and update this answer later. Update: Since I am new to Logback I had a bit of difficulty configuring it with Play/Neo4j. To prevent Logback from erroring and drowning me in status messages I needed to put a file called custom-logback.xml in my Play app's conf directory. I think the Neo4j logging config requires this. Mine contains the following: <included> <logger name="eu.mypackage" level="info"> </logger> <logger name="org.neo4j" level="warn"> </logger> <root level="warn"> </root> </included> Also in my conf directory, I seemed to need a file called logback.properties which (in my case) contains just this line: CONSOLE_LEVEL=ERROR (Logback experts, feel free to correct any of this.)
Neo4j
14,373,029
10
I am trying to use dBpedia with neo4j ontop of ruby on rails. Assuming I have installed neo4j and downloaded one of the dBpedia datasets. How do I import the dbpedia dataset into neo4j ?
The simplest way to load dbpedia into Neo4j is to use the dbpedia4neo library. This is a Java library, but you don't need to know any Java because all you need to do is run the executable. You could rewrite this in JRuby if you want, but regular Ruby won't work because it relies on Blueprints, a Java library with no Ruby equivalent. Here are the two key files, which provide the loading procedure. https://github.com/oleiade/dbpedia4neo/blob/master/src/main/java/org/acaro/dbpedia4neo/inserter/DBpediaLoader.java https://github.com/oleiade/dbpedia4neo/blob/master/src/main/java/org/acaro/dbpedia4neo/inserter/TripleHandler.java Here is a description of what's involved. Blueprints is translating the RDF data to a graph representation. To understand what's going on under the hood, see Blueprints Sail Ouplementation: After you download the dbpedia dump files, you should be able to build the dbpedia4neo Java library and run it without modifying the Java code. First, clone the oleiade's fork of the GitHub repository and change to the dbpedia4neo directory: $ git clone https://github.com/oleiade/dbpedia4neo.git $ cd dbpedia4neo (Oleiade's fork includes a minor Blueprints update that does sail.initialize(); See https://groups.google.com/d/msg/gremlin-users/lfpNcOwZ49Y/WI91ae-UzKQJ). Before you build it, you will need to update the pom.xml to use more current Blueprints versions and the current Blueprints repository (Sonatype). To do this, open pom.xml and at the top of the dependencies section, change all of the TinkerPop Blueprints versions from 0.6 to 0.9. While you are in the file, add the Sonatype repository to the repositories section at the end of the file: <repository> <id>sonatype-nexus-snapshots</id> <name>Sonatype Nexus Snapshots</name> <url>https://oss.sonatype.org/content/repositories/releases</url> </repository> Save the file and then build it using maven: $ mvn clean install This will download and install all the dependencies for you and create a jar file in the target directory. To load dbpedia, use maven to run the executable: $ mvn exec:java \ -Dexec.mainClass=org.acaro.dbpedia4neo.inserter.DBpediaLoader \ -Dexec.args="/path/to/dbpedia-dump.nt" The dbpedia dump is large so this will take a while to load. Now that the data is loaded, you can access the graph in one of two ways: Use JRuby and the Blueprints-Neo4j API directly. Use regular Ruby and the Rexster REST server, which is similar to Neo4j Server except that it supports multiple graph databases. For an example of how to create a Rexster client, see Bulbs, a Python framework I wrote that supports both Neo4j Server and Rexster. http://bulbflow.com/ https://github.com/espeed/bulbs https://github.com/espeed/bulbs/tree/master/bulbs/rexster Another approach to all this would be to process the dbpedia RDF dump file in Ruby, write out the nodes and relationships to a CSV file, and use the Neo4j batch importer to load it. But this will require that you manually translate the RDF data into Neo4j relationships.
Neo4j
12,212,015
10
In neo4j should all nodes connect to node 0 so that you can create a traversal that spans across all objects? Is that a performance problem when you get to large datasets? If so, how many nodes is too much? Is it ok not to have nodes connect to node 0 if I don't see a use case for it now, assuming I use indexes for finding specific nodes?
There is no need or requirement to connect everything to the root node. Indexes work great in finding starting points for your traversal. If you have, say, less than 5000 nodes connected to a starting node (like the root node), then a relationship scan is cheaper than an index lookup. To judge what is better, you need to know a bit more about the domain.
Neo4j
12,186,803
10
I am trying to load my whole Neo4j DB into RAM so querying will work faster. When passing the properties map to the graph creation, I do not see the process taking more space in memory than it did before, and it is also not proportional to the size of the files on disk. What could be the problem, and how can it be fixed? Thanks
Neo4j loads all the data lazily, meaning it loads data into memory on first access. The caching option only controls the GC strategy, i.e. when (or if) the references will be GCed. To load the whole graph into memory, your cache type must be strong and you need to traverse the whole graph once. You can do it like this: // untested java code import org.neo4j.helpers.collection.IteratorUtil; // ... for(Node node : graph.getAllNodes()) { IteratorUtil.count(node.getRelationships()); } This way all nodes and relationships will be used once and thus loaded into the cache.
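The cache type itself is a configuration setting; a sketch of what that looks like for the 1.x/2.x embedded configuration (the key name should be verified for your version):
# conf/neo4j.properties (or passed via setConfig on the database builder)
cache_type=strong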
Neo4j
9,995,949
10
The class GraphDatabaseService does not seem to provide any method to drop/clear the database. Is there any other means to drop/clear the current embedded database with Java?
Just perform a GraphDatabaseService.shutdown() and after it has returned, remove the database files (using code like this). You could also use getAllNodes() to iterate over all nodes, delete their relationships and the nodes themselves. Maybe avoid deleting the reference node. If your use case is testing, then you could use the ImpermanentGraphDatabase, which will delete the database after shutdown. To use ImpermanentGraphDatabase add the neo4j-kernel tests jar/dependency to your project. Look for the file with a name ending with "tests.jar" on maven central.
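A rough sketch of that iterate-and-delete approach (untested; it assumes the old embedded API of that era, where getAllNodes() and Transaction.finish() exist, and for large graphs you would batch this over several transactions):
// imports: org.neo4j.graphdb.Direction, Node, Relationship, Transaction
Transaction tx = graphDb.beginTx();
try {
    // pass 1: delete each relationship exactly once, from its start node
    for (Node node : graphDb.getAllNodes()) {
        for (Relationship rel : node.getRelationships(Direction.OUTGOING)) {
            rel.delete();
        }
    }
    // pass 2: delete the nodes, optionally keeping the reference node (id 0)
    for (Node node : graphDb.getAllNodes()) {
        if (node.getId() != 0) {
            node.delete();
        }
    }
    tx.success();
} finally {
    tx.finish();
}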
Neo4j
5,335,951
10
This is my source code of Main.java. It was grabbed from neo4j-apoc-1.0 examples. The goal of modification to store 1M records of 2 nodes and 1 relation: package javaapplication2; import org.neo4j.graphdb.GraphDatabaseService; import org.neo4j.graphdb.Node; import org.neo4j.graphdb.RelationshipType; import org.neo4j.graphdb.Transaction; import org.neo4j.kernel.EmbeddedGraphDatabase; public class Main { private static final String DB_PATH = "neo4j-store-1M"; private static final String NAME_KEY = "name"; private static enum ExampleRelationshipTypes implements RelationshipType { EXAMPLE } public static void main(String[] args) { GraphDatabaseService graphDb = null; try { System.out.println( "Init database..." ); graphDb = new EmbeddedGraphDatabase( DB_PATH ); registerShutdownHook( graphDb ); System.out.println( "Start of creating database..." ); int valIndex = 0; for(int i=0; i<1000; ++i) { for(int j=0; j<1000; ++j) { Transaction tx = graphDb.beginTx(); try { Node firstNode = graphDb.createNode(); firstNode.setProperty( NAME_KEY, "Hello" + valIndex ); Node secondNode = graphDb.createNode(); secondNode.setProperty( NAME_KEY, "World" + valIndex ); firstNode.createRelationshipTo( secondNode, ExampleRelationshipTypes.EXAMPLE ); tx.success(); ++valIndex; } finally { tx.finish(); } } } System.out.println("Ok, client processing finished!"); } finally { System.out.println( "Shutting down database ..." ); graphDb.shutdown(); } } private static void registerShutdownHook( final GraphDatabaseService graphDb ) { // Registers a shutdown hook for the Neo4j instance so that it // shuts down nicely when the VM exits (even if you "Ctrl-C" the // running example before it's completed) Runtime.getRuntime().addShutdownHook( new Thread() { @Override public void run() { graphDb.shutdown(); } } ); } } After a few iterations (around 150K) I got error message: "java.lang.OutOfMemoryError: Java heap space at java.nio.HeapByteBuffer.(HeapByteBuffer.java:39) at java.nio.ByteBuffer.allocate(ByteBuffer.java:312) at org.neo4j.kernel.impl.nioneo.store.PlainPersistenceWindow.(PlainPersistenceWindow.java:30) at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.allocateNewWindow(PersistenceWindowPool.java:534) at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.refreshBricks(PersistenceWindowPool.java:430) at org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool.acquire(PersistenceWindowPool.java:122) at org.neo4j.kernel.impl.nioneo.store.CommonAbstractStore.acquireWindow(CommonAbstractStore.java:459) at org.neo4j.kernel.impl.nioneo.store.AbstractDynamicStore.updateRecord(AbstractDynamicStore.java:240) at org.neo4j.kernel.impl.nioneo.store.PropertyStore.updateRecord(PropertyStore.java:209) at org.neo4j.kernel.impl.nioneo.xa.Command$PropertyCommand.execute(Command.java:513) at org.neo4j.kernel.impl.nioneo.xa.NeoTransaction.doCommit(NeoTransaction.java:443) at org.neo4j.kernel.impl.transaction.xaframework.XaTransaction.commit(XaTransaction.java:316) at org.neo4j.kernel.impl.transaction.xaframework.XaResourceManager.commit(XaResourceManager.java:399) at org.neo4j.kernel.impl.transaction.xaframework.XaResourceHelpImpl.commit(XaResourceHelpImpl.java:64) at org.neo4j.kernel.impl.transaction.TransactionImpl.doCommit(TransactionImpl.java:514) at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:571) at org.neo4j.kernel.impl.transaction.TxManager.commit(TxManager.java:543) at org.neo4j.kernel.impl.transaction.TransactionImpl.commit(TransactionImpl.java:102) at 
org.neo4j.kernel.EmbeddedGraphDbImpl$TransactionImpl.finish(EmbeddedGraphDbImpl.java:329) at javaapplication2.Main.main(Main.java:62) 28.05.2010 9:52:14 org.neo4j.kernel.impl.nioneo.store.PersistenceWindowPool logWarn WARNING: [neo4j-store-1M\neostore.propertystore.db.strings] Unable to allocate direct buffer" Can anyone help? What did I do wrong, and how can I fix it? Tested on Windows XP 32-bit SP3. Maybe the solution is a custom configuration? Thanks for any advice!
This is a configuration issue on Windows, where Neo4j cannot use memory-mapped buffers. Instead, a Java buffer on the heap is created. In 1.0 this buffer was 470 MB by default, which is more than the default heap for the Windows JVM. You have two options: 1) Switch to APOC 1.1-SNAPSHOT instead of 1.0 in your pom.xml, which has an autoconfiguration that assigns at most 50% of the available JVM heap to Neo4j. 2) Adjust the JVM heap to more (e.g. 512 MB) by running Java with java -Xmx512m ...; you can even set that under JVM arguments in the Run Configurations in Eclipse. Let us know if this helps! Also, doing a full transaction for every node pair is going to take a long time. Try opening a transaction in the outer loop and committing only every 1000 node pairs. /peter
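A sketch of that batching change applied to the loops from the question (same API calls as the original code; the batch size of 1000 is arbitrary):
Transaction tx = graphDb.beginTx();
try {
    int valIndex = 0;
    for (int i = 0; i < 1000000; ++i) {
        Node firstNode = graphDb.createNode();
        firstNode.setProperty(NAME_KEY, "Hello" + valIndex);
        Node secondNode = graphDb.createNode();
        secondNode.setProperty(NAME_KEY, "World" + valIndex);
        firstNode.createRelationshipTo(secondNode, ExampleRelationshipTypes.EXAMPLE);
        ++valIndex;
        if (valIndex % 1000 == 0) {
            tx.success();
            tx.finish();            // commit this batch
            tx = graphDb.beginTx(); // start the next one
        }
    }
    tx.success();
} finally {
    tx.finish();
}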
Neo4j
2,927,329
10
I'm pretty sure the following query used to work for me on Presto: select segment, sum(count) from modeling_trends where segment='2557172' and date = '2016-06-23' and count_time between '2016-06-23 14:00:00.000' and '2016-06-23 14:59:59.000' group by 1; Now when I run it (on Presto 0.147 on EMR) I get an error about assigning varchar to date/timestamp. I can make it work using: select segment, sum(count) from modeling_trends where segment='2557172' and date = cast('2016-06-23' as date) and count_time between cast('2016-06-23 14:00:00.000' as TIMESTAMP) and cast('2016-06-23 14:59:59.000' as TIMESTAMP) group by segment; but it feels dirty... is there a better way to do this?
Unlike some other databases, Trino doesn't automatically convert between varchar and other types, even for constants. The cast works, but a simpler way is to use the type constructors: WHERE segment = '2557172' AND date = date '2016-06-23' AND count_time BETWEEN timestamp '2016-06-23 14:00:00.000' AND timestamp '2016-06-23 14:59:59.000' You can see examples for various types here: https://trino.io/docs/current/language/types.html
Presto
38,037,713
59
I have external tables created in AWS Athena to query S3 data, however, the location path has 1000+ files. So I need the corresponding filename of the record to be displayed as a column in the table. select file_name , col1 from table where file_name = "test20170516" In short, I need to know INPUT__FILE__NAME(hive) equivalent in AWS Athena Presto or any other ways to achieve the same.
You can do this with the $path pseudo column. select "$path" from table
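For the filtering use case in the question, a sketch (the LIKE pattern assumes the file name appears somewhere in the full S3 object path):
SELECT "$path", col1
FROM table_name
WHERE "$path" LIKE '%test20170516%'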
Presto
44,011,433
59
I have the following query that I am trying to run on Athena. SELECT observation_date, COUNT(*) AS count FROM db.table_name WHERE observation_date > '2017-12-31' GROUP BY observation_date However it is producing this error: SYNTAX_ERROR: line 3:24: '>' cannot be applied to date, varchar(10) This seems odd to me. Is there an error in my query or is Athena not able to handle greater than operators on date columns? Thanks!
You need to use a cast to format the date correctly before making this comparison. Try the following: SELECT observation_date, COUNT(*) AS count FROM db.table_name WHERE observation_date > CAST('2017-12-31' AS DATE) GROUP BY observation_date Check it out in SQL Fiddle: SQL Fiddle UPDATE 17/07/2019, to reflect the comments: SELECT observation_date, COUNT(*) AS count FROM db.table_name WHERE observation_date > DATE('2017-12-31') GROUP BY observation_date
Presto
51,269,919
58
Is there any analog of NVL in Presto DB? I need to check if a field is NULL and return a default value. Currently I solve this like this: SELECT CASE WHEN my_field is null THEN 0 ELSE my_field END FROM my_table But I'm curious if there is something that could simplify this code.
The ISO SQL function for that is COALESCE coalesce(my_field,0) https://prestodb.io/docs/current/functions/conditional.html P.S. COALESCE can be used with multiple arguments. It will return the first (from the left) non-NULL argument, or NULL if not found. e.g. coalesce (my_field_1,my_field_2,my_field_3,my_field_4,my_field_5)
Presto
43,275,356
42
Why is Presto faster than Spark SQL? Besides that, what is the difference between Presto and Spark SQL in computing architecture and memory management?
In general, it is hard to say if Presto is definitely faster or slower than Spark SQL. It really depends on the type of query you're executing, the environment and the engine tuning parameters. However, what I see in the industry (Uber and Netflix, for example) is Presto used for ad-hoc SQL analytics whereas Spark is used for ETL/ML pipelines. One possible explanation: there is not much overhead for scheduling a query in Presto. The Presto coordinator is always up and waiting for queries. On the other hand, Spark takes a lazy approach. It takes time for the driver to negotiate resources with the cluster manager, copy jars and start processing. Another one is that the Presto architecture is quite straightforward. It has a coordinator that does SQL parsing, planning and scheduling, and a set of workers that execute a physical plan. On the other hand, Spark core has many more layers in between. Besides the stages that Presto has, Spark SQL has to cope with the resiliency built into RDDs, and do resource management and negotiation for its jobs. Please also note that Spark SQL has a cost-based optimizer that performs better on complex queries, while Presto (0.199) has a legacy rule-based optimizer. There is an ongoing effort to bring CBO to Presto, which might potentially beat Spark SQL performance.
Presto
50,014,017
41
Looking at the Date/Time Athena documentation, I don't see a function to do this, which surprises me. The closest I see is date_trunc('week', timestamp) but that results in something like 2017-07-09 00:00:00.000 while I would like the format to be 2017-07-09 Is there an easy function to convert a timestamp to a date?
The reason for not having a conversion function is that this can be achieved with a type cast. So a converting query would look like this: select DATE(current_timestamp)
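Combined with the date_trunc call from the question, a sketch:
SELECT date(date_trunc('week', current_timestamp))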
Presto
51,292,219
29
I'm trying to obtain a random sample of N rows from Athena. But since the table from which I want to draw this sample is huge the naive SELECT id FROM mytable ORDER BY RANDOM() LIMIT 100 takes forever to run, presumably because the ORDER BY requires all data to be sent to a single node, which then shuffles and orders the data. I know about TABLESAMPLE but that allows one to sample some percentage of rows rather than some number of them. Is there a better way of doing this?
Athena is actually Presto under the hood. You can use TABLESAMPLE to get a random sample of your table. Let's say you want a 10% sample of your table; your query will be something like: SELECT id FROM mytable TABLESAMPLE BERNOULLI(10) Note that there are both BERNOULLI and SYSTEM sampling. Here is the documentation for it.
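If you need roughly N rows rather than a percentage, one common workaround is to oversample a little and then cap the result (a sketch; the 1% figure is an assumption to tune to your table size, and the rows kept by LIMIT are simply the first to arrive rather than a second random draw):
SELECT id
FROM mytable
TABLESAMPLE BERNOULLI (1)
LIMIT 100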
Presto
44,510,714
28
I am running a query like: SELECT f.*, p.countryName, p.airportName, a.name AS agentName FROM ( SELECT f.outboundlegid, f.inboundlegid, f.querydatetime, cast(f.agent as bigint) as agent, cast(f.querydestinationplace as bigint) as querydestinationplace, f.queryoutbounddate, f.queryinbounddate, f.quoteageinminutes, f.price FROM flights f WHERE querydatetime >= '2018-01-02' AND querydatetime <= '2019-01-10' ) f INNER JOIN ( SELECT airportId, airportName, countryName FROM airports WHERE countryName IN ('Philippines', 'Indonesia', 'Malaysia', 'Hong Kong', 'Thailand', 'Vietnam') ) p ON f.querydestinationplace = p.airportId INNER JOIN agents a ON f.agent = a.id ORDER BY f.outboundlegid, f.inboundlegid, f.agent, querydatetime DESC What's wrong with it? Or how can I optimize it? It gives me Query exhausted resources at this scale factor I have a flights table and I want to query for flights inside a specific country
I have been facing this problem since the beginning of Athena; the problem is the ORDER BY clause. Athena is essentially an EMR cluster with Hive and PrestoDB installed. The problem you are facing is: even if your query is distributed across X number of nodes, the ordering phase must be done by just a single node, the master node in this case. So in the end, you can only order as much data as the master node's memory can hold. You can test this by reducing the amount of data the query returns, for example by reducing the time range.
Presto
54,375,913
26
New to Presto; any pointers on how I can use LATERAL VIEW EXPLODE in Presto for the table below? I need to filter on names in my Presto query. CREATE EXTERNAL TABLE `id`( `id` string, `names` map<string,map<string,string>>, `tags` map<string,map<string,string>>) ROW FORMAT SERDE 'org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe' STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat' OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat' LOCATION 's3://test' ; Sample names value: {3081={short=Abbazia 81427 - Milan}, 2057={short=Abbazia 81427 - Milan}, 1033={short=Abbazia 81427 - Milan}, 4105={short=Abbazia 81427 - Milan}, 5129={short=Abbazia 81427 - Milan}}
From the documentation: https://trino.io/docs/current/appendix/from-hive.html Trino [formerly PrestoSQL] supports UNNEST for expanding arrays and maps. Use UNNEST instead of LATERAL VIEW explode(). Hive query: SELECT student, score FROM tests LATERAL VIEW explode(scores) t AS score; Presto query: SELECT student, score FROM tests CROSS JOIN UNNEST(scores) AS t (score);
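Applied to the names column from the question's table (a sketch; unnesting a map yields a key column and a value column, and since each value is itself a map, its fields are read with a subscript):
SELECT id, name_key, name_value['short'] AS short_name
FROM "id"
CROSS JOIN UNNEST(names) AS t (name_key, name_value)
WHERE name_value['short'] = 'Abbazia 81427 - Milan'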
Presto
51,314,218
23
I've got an Athena table where some fields have a fairly complex nested format. The backing records in S3 are JSON. Along these lines (but we have several more levels of nesting): CREATE EXTERNAL TABLE IF NOT EXISTS test ( timestamp double, stats array<struct<time:double, mean:double, var:double>>, dets array<struct<coords: array<double>, header:struct<frame:int, seq:int, name:string>>>, pos struct<x:double, y:double, theta:double> ) ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe' WITH SERDEPROPERTIES ('ignore.malformed.json'='true') LOCATION 's3://test-bucket/test-folder/' Now we need to be able to query the data and import the results into Python for analysis. Because of security restrictions I can't connect directly to Athena; I need to be able to give someone the query and then they will give me the CSV results. If we just do a straight select * we get back the struct/array columns in a format that isn't quite JSON. Here's a sample input file entry: {"timestamp":1520640777.666096,"stats":[{"time":15,"mean":45.23,"var":0.31},{"time":19,"mean":17.315,"var":2.612}],"dets":[{"coords":[2.4,1.7,0.3], "header":{"frame":1,"seq":1,"name":"hello"}}],"pos": {"x":5,"y":1.4,"theta":0.04}} And example output: select * from test "timestamp","stats","dets","pos" "1.520640777666096E9","[{time=15.0, mean=45.23, var=0.31}, {time=19.0, mean=17.315, var=2.612}]","[{coords=[2.4, 1.7, 0.3], header={frame=1, seq=1, name=hello}}]","{x=5.0, y=1.4, theta=0.04}" I was hoping to get those nested fields exported in a more convenient format - getting them in JSON would be great. Unfortunately it seems that cast to JSON only works for maps, not structs, because it just flattens everything into arrays: SELECT timestamp, cast(stats as JSON) as stats, cast(dets as JSON) as dets, cast(pos as JSON) as pos FROM "sampledb"."test" "timestamp","stats","dets","pos" "1.520640777666096E9","[[15.0,45.23,0.31],[19.0,17.315,2.612]]","[[[2.4,1.7,0.3],[1,1,""hello""]]]","[5.0,1.4,0.04]" Is there a good way to convert to JSON (or another easy-to-import format) or should I just go ahead and do a custom parsing function?
I have skimmed through all the documentation and unfortunately there seems to be no way to do this as of now. The only possible workaround when querying Athena is to select the nested struct fields individually:

SELECT
  my_field,
  my_field.a,
  my_field.b,
  my_field.c.d,
  my_field.c.e
FROM my_table

Or I would convert the data to JSON using post-processing. The script below shows how:

#!/usr/bin/env python
import io
import re

pattern1 = re.compile(r'(?<={)([a-z]+)=', re.I)
pattern2 = re.compile(r':([a-z][^,{}. [\]]+)', re.I)
pattern3 = re.compile(r'\\"', re.I)

with io.open("test.csv") as f:
    # the first line holds the column names
    headers = list(map(lambda h: h.strip(), f.readline().split(",")))
    for line in f.readlines():
        orig_line = line
        data = []
        # re-attach each header to its (quoted) CSV field
        for i, l in enumerate(line.split('","')):
            data.append(headers[i] + ":" + re.sub('^"|"$', "", l))
        line = "{" + ','.join(data) + "}"
        # quote bare keys and bare string values
        line = pattern1.sub(r'"\1":', line)
        line = pattern2.sub(r':"\1"', line)
        print(line)

The output on your input data is:

{"timestamp":1.520640777666096E9,"stats":[{"time":15.0, "mean":45.23, "var":0.31}, {"time":19.0, "mean":17.315, "var":2.612}],"dets":[{"coords":[2.4, 1.7, 0.3], "header":{"frame":1, "seq":1, "name":"hello"}}],"pos":{"x":5.0, "y":1.4, "theta":0.04} }

which is valid JSON.
Presto
49,308,410
22
It's very convenient to be able to set script variables. For example,

SET start_date = 20151201;
SELECT * FROM some_table where date = {$hiveconf:start_date};

Does Presto have this capability?
You can do this:

WITH VARIABLES AS (SELECT VALUE AS VAR1, VALUE AS VAR2)
SELECT *
FROM TABLE
CROSS JOIN VARIABLES
WHERE COLUMN = VAR1
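As a more concrete sketch of the same idea (the table and column names below are made up for illustration):

WITH vars AS (
  SELECT DATE '2015-12-01' AS start_date
)
SELECT t.*
FROM some_table t
CROSS JOIN vars
WHERE t.event_date >= vars.start_date;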
Presto
34,301,577
21
I have this CSV file:

reference,address
V7T452F4H9,"12410 W 62TH ST, AA D"

The following options are being used in the table definition:

ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  'quoteChar'='\"',
  'separatorChar'=',')

but it still won't recognize the double quotes in the data, and the comma inside the double-quoted field is messing up the data. When I run the Athena query, the result looks like this:

reference     address
V7T452F4H9    "12410 W 62TH ST

How do I fix this issue?
This is what I do to solve it:

1 - Create a crawler that doesn't overwrite the target table properties. I used boto3 for this, but it can be created in the AWS console too (change the xxx- placeholders):

import boto3

client = boto3.client('glue')

response = client.create_crawler(
    Name='xxx-Crawler-Name',
    Role='xxx-Put-here-your-rol',
    DatabaseName='xxx-databaseName',
    Description='xxx-Crawler description if u need it',
    Targets={
        'S3Targets': [
            {
                'Path': 's3://xxx-Path-to-s3/',
                'Exclusions': [
                ]
            },
        ]
    },
    SchemaChangePolicy={
        'UpdateBehavior': 'LOG',
        'DeleteBehavior': 'LOG'
    },
    Configuration='{ \
        "Version": 1.0, \
        "CrawlerOutput": { \
            "Partitions": {"AddOrUpdateBehavior": "InheritFromTable" \
            }, \
            "Tables": {"AddOrUpdateBehavior": "MergeNewColumns" } \
        } \
    }'
)

# run the crawler
response = client.start_crawler(
    Name='xxx-Crawler-Name'
)

2 - Edit the serialization lib for the table in the AWS console, as described here: https://docs.aws.amazon.com/athena/latest/ug/glue-best-practices.html#schema-csv-quotes

3 - Run the crawler again, as you normally would.

4 - That's it. Your second run should not change any data in the table; it's just to verify that it works ¯\(ツ)/¯.
Presto
50,354,123
20
I'm using the latest (0.117) Presto and trying to execute CROSS JOIN UNNEST with a complex JSON array like this:

[{"id": 1, "value":"xxx"}, {"id":2, "value":"yy"}, ...]

To do that, first I tried to make an ARRAY with the values of id by:

SELECT CAST(JSON_EXTRACT('[{"id": 1, "value":"xxx"}, {"id":2, "value":"yy"}]', '$..id') AS ARRAY<BIGINT>)

but it doesn't work. What is the best JSON Path to extract the values of id?
This will solve your problem. It is a more generic cast to an ARRAY of JSON (less prone to errors given an arbitrary map structure):

select TRANSFORM(CAST(JSON_PARSE(arr1) AS ARRAY<JSON>), x -> JSON_EXTRACT_SCALAR(x, '$.id'))
from (values ('[{"id": 1, "value":"xxx"}, {"id":2, "value":"yy"}]')) t(arr1)

Output in Presto:

[1,2]

...

I ran into a situation where a list of JSONs was nested within a JSON. My list of JSONs had an ambiguous nested map structure. The following approach returns an array of values given a specific key in a list of JSONs:

1 - Extract the list using JSON_EXTRACT.
2 - Cast the list as an array of JSONs.
3 - Loop through the JSON elements in the array using the TRANSFORM function and extract the value of the key that you are interested in.

TRANSFORM(CAST(JSON_EXTRACT(json, '$.path.toListOfJSONs') AS ARRAY<JSON>), x -> JSON_EXTRACT_SCALAR(x, '$.id')) as id
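And since the original goal was CROSS JOIN UNNEST, the two can be combined. A sketch only (my_table and json_col are placeholder names):

SELECT t.id_value
FROM my_table
CROSS JOIN UNNEST(
  TRANSFORM(CAST(JSON_PARSE(json_col) AS ARRAY<JSON>),
            x -> JSON_EXTRACT_SCALAR(x, '$.id'))
) AS t (id_value);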
Presto
32,478,518
19
I'm working in an environment where I have an S3 service being used as a data lake, but not AWS Athena. I'm trying to set up Presto to be able to query the data in S3, and I know I need to define the data structure as Hive tables through the Hive Metastore service. I'm deploying each component in Docker, so I'd like to keep the container size as minimal as possible. What components from Hive do I need to be able to run just the Metastore service? I don't really care about running Hive, just the Metastore. Can I trim down what's needed, or is there already a pre-configured package just for that? I haven't been able to find anything online that doesn't include downloading all of Hadoop and Hive. Is what I'm trying to do possible?
There is a workaround: you do not need Hive to run Presto. I haven't tried it with any distributed file system like S3, but the code suggests it should work (at least with HDFS). In my opinion it is worth trying, because you do not need any new Docker image for Hive at all.

The idea is to use the builtin FileHiveMetastore. It is neither documented nor advised for production use, but you could play with it. Schema information is stored next to the data in the file system. Obviously, it has its pros and cons. I do not know the details of your use case, so I don't know if it fits your needs.

Configuration:

connector.name=hive-hadoop2
hive.metastore=file
hive.metastore.catalog.dir=file:///tmp/hive_catalog
hive.metastore.user=cox

Demo:

presto:tiny> create schema hive.default;
CREATE SCHEMA
presto:tiny> use hive.default;
USE
presto:default> create table t (t bigint);
CREATE TABLE
presto:default> show tables;
 Table
-------
 t
(1 row)

Query 20180223_202609_00009_iuchi, FINISHED, 1 node
Splits: 18 total, 18 done (100.00%)
0:00 [1 rows, 18B] [11 rows/s, 201B/s]

presto:default> insert into t (values 1);
INSERT: 1 row

Query 20180223_202616_00010_iuchi, FINISHED, 1 node
Splits: 51 total, 51 done (100.00%)
0:00 [0 rows, 0B] [0 rows/s, 0B/s]

presto:default> select * from t;
 t
---
 1
(1 row)

After the above I was able to find the following on my machine:

/tmp/hive_catalog/
/tmp/hive_catalog/default
/tmp/hive_catalog/default/t
/tmp/hive_catalog/default/t/.prestoPermissions
/tmp/hive_catalog/default/t/.prestoPermissions/user_cox
/tmp/hive_catalog/default/t/.prestoPermissions/.user_cox.crc
/tmp/hive_catalog/default/t/.20180223_202616_00010_iuchi_79dee041-58a3-45ce-b86c-9f14e6260278.crc
/tmp/hive_catalog/default/t/.prestoSchema
/tmp/hive_catalog/default/t/20180223_202616_00010_iuchi_79dee041-58a3-45ce-b86c-9f14e6260278
/tmp/hive_catalog/default/t/..prestoSchema.crc
/tmp/hive_catalog/default/.prestoSchema
/tmp/hive_catalog/default/..prestoSchema.crc
Presto
48,932,907
19
I am new to Presto, and can't quite figure out how to check if a key is present in a map. When I run a SELECT query, this error message is returned:

Key not present in map: element

SELECT value_map['element'] FROM mytable WHERE name = 'foobar'

Adding AND contains(value_map, 'element') does not work.

The data type is a string array:

SELECT typeof('value_map') FROM mytable

returns varchar(9).

How would I only select records where 'element' is present in the value_map?
You can look up a value in a map, if the key is present, with element_at, like this:

SELECT element_at(value_map, 'element')
FROM ...
WHERE element_at(value_map, 'element') IS NOT NULL
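Equivalently, you can test for the key itself. A sketch, assuming value_map really is a MAP column (note that in the question typeof was called on the string literal 'value_map', which is why it reported varchar(9)):

SELECT value_map['element']
FROM mytable
WHERE name = 'foobar'
  AND contains(map_keys(value_map), 'element');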
Presto
55,426,024
19
Does Presto SQL really lack TOP X functionality in SELECT statements? If so, is there a workaround in the meantime? https://prestodb.io/
If you simply want to limit the number of rows in the result set, you can use LIMIT, with or without ORDER BY:

SELECT department, salary
FROM employees
ORDER BY salary DESC
LIMIT 10

If you want the top values per group, you can use the standard SQL row_number() window function. For example, to get the top 3 employees per department by salary:

SELECT department, salary
FROM (
    SELECT department, salary,
           row_number() OVER (
               PARTITION BY department
               ORDER BY salary DESC) AS rn
    FROM employees
)
WHERE rn <= 3
Presto
37,667,265
18
I have an issue while formatting a timestamp with the Amazon Athena service:

select date_format(current_timestamp, 'y')

returns just 'y' (the string).

The only way I found to format dates in Amazon Athena is through CONCAT + YEAR + MONTH + DAY functions, like this:

select CONCAT(cast(year(current_timestamp) as varchar), '_', cast(day(current_timestamp) as varchar))
select current_timestamp
     , date_format(current_timestamp, '%Y_%m_%d')
     , format_datetime(current_timestamp, 'y_M_d')
;

+---------------------+------------+-----------+
|        _col0        |   _col1    |   _col2   |
+---------------------+------------+-----------+
| 2017-05-19 14:46:12 | 2017_05_19 | 2017_5_19 |
+---------------------+------------+-----------+

https://prestodb.io/docs/current/functions/datetime.html
Presto
44,064,923
18
I know that MSCK REPAIR TABLE updates the metastore with the current partitions of an external table. To do that, you only need to do ls on the root folder of the table (given the table is partitioned by only one column) and get all its partitions, clearly a < 1s operation. But in practice, the operation can take a very long time to execute (or even time out if run on AWS Athena).

So my question is, what does MSCK REPAIR TABLE actually do behind the scenes and why? How does MSCK REPAIR TABLE find the partitions?

Additional data in case it's relevant: our data is all on S3, it's slow both when running on EMR (Hive) and on Athena (Presto), there are ~450 partitions in the table, every partition has on average 90 files and overall 3 gigabytes per partition, and the files are in Apache Parquet format.
You are right in the sense that it reads the directory structure, creates partitions out of it, and then updates the Hive metastore. In fact, more recently the command was improved to remove non-existing partitions from the metastore as well. The example that you are giving is very simple since it has only one level of partition keys. Consider a table with multiple partition keys (2-3 partition keys is common in practice).

msck repair will have to do a full tree traversal of all the sub-directories under the table directory, parse the file names, make sure that the file names are valid, check whether each partition already exists in the metastore, and then add only the partitions which are not present in the metastore. Note that each listing on the filesystem is an RPC to the namenode (in the case of HDFS) or a web-service call in the case of S3 or ADLS, which can add a significant amount of time. Additionally, in order to figure out whether a partition is already present in the metastore or not, it needs to do a full listing of all the partitions which the metastore knows of for the table. Both these steps can potentially increase the time taken for the command on large tables.

The performance of msck repair table was improved considerably recently in Hive 2.3.0 (see HIVE-15879 for more details). You may want to tune hive.metastore.fshandler.threads and hive.metastore.batch.retrieve.max to improve the performance of the command.
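A related point, not part of the original answer: if you already know which partitions were added since the last run, registering them explicitly is usually much cheaper than a full MSCK REPAIR, because it skips the tree traversal entirely. A sketch with hypothetical table and path names:

ALTER TABLE my_table ADD IF NOT EXISTS
  PARTITION (dt = '2019-01-01') LOCATION 's3://my-bucket/my_table/dt=2019-01-01/';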
Presto
53,667,639
18
How do I check if a map has no keys in Presto? If I have a way to check if an array is empty, I can use the map_keys function to determine if the map is empty.
You can use the cardinality function:
https://prestodb.io/docs/current/functions/array.html#cardinality

select cardinality(array[]) = 0;
 _col0
-------
 true
(1 row)
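Applied to a map column, as in the question, you can combine it with map_keys (my_table and my_map are placeholder names; on reasonably recent Presto versions cardinality also accepts a map directly):

SELECT *
FROM my_table
WHERE cardinality(map_keys(my_map)) = 0;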
Presto
44,192,105
17
I am trying to do what I think is a simple date diff function but for some reason, my unit value is being read as a column ("dd"), so I keep getting a "column cannot be resolved" error. I am using AWS Athena. My code is this:

SELECT
  "reservations"."id" "Booking_ID"
, "reservations"."bookingid" "Booking_Code"
, "reservations"."property"."id" "Property_id"
, CAST("from_iso8601_timestamp"("reservations"."created") AS date) "Created"
, CAST("from_iso8601_timestamp"("reservations"."arrival") AS date) "Arrival"
, CAST("from_iso8601_timestamp"("reservations"."departure") AS date) "Departure"
, CAST("from_iso8601_timestamp"("reservations"."modified") AS date) "Modified"
, date_diff("dd", CAST("from_iso8601_timestamp"("reservations"."created") AS date), CAST("from_iso8601_timestamp"("reservations"."arrival") AS date)) "LoS"
FROM "database".reservations
LIMIT 5;

I am trying to get the difference in days between the "created date" and the "arrival date". I have tried date_diff with DD, "DD", "dd", dd, Day, day, and "day", and I get the same error.
Athena is based on Presto. See Presto documentation for date_diff() -- the unit is regular varchar, so it needs to go in single quotes: date_diff('day', ts_from, ts_to)
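Applied to the query in the question, the length-of-stay expression would look roughly like this:

date_diff('day',
          CAST(from_iso8601_timestamp("reservations"."created") AS date),
          CAST(from_iso8601_timestamp("reservations"."arrival") AS date)) "LoS"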
Presto
58,326,786
17
I have timestamps stored in time since epoch (ms) and I would like to query and display results using a date formatted like 'yyyy-mm-dd'.
cast(from_unixtime(unixtime) as date) See https://prestodb.io/docs/current/functions/datetime.html for more datetime functions.
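One caveat: the question stores epoch time in milliseconds, while from_unixtime expects seconds, so divide first. A sketch (ts_ms and my_table are placeholder names):

SELECT cast(from_unixtime(ts_ms / 1000) as date) AS d
FROM my_table;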
Presto
44,420,926
16
This question is primarily about older versions of PrestoSQL, which have been resolved in the (now renamed) Trino project as of version 346. However, Amazon's Athena project is based off of Presto versions 0.217 (Athena Engine 2) and 0.172 (Athena Engine 1), which do have the issues described below. This question was written specifically around Athena Engine 1 / PrestoSQL version 0.172.

Questions (tl;dr)

1. What is the difference between ROWS BETWEEN and RANGE BETWEEN in Presto window functions? Are these just synonyms for each other, or are there core conceptual differences?
2. If they are just synonyms, why does ROWS BETWEEN allow more options than RANGE BETWEEN?
3. Is there a query scenario where it's possible to use the exact same parameters on ROWS BETWEEN and RANGE BETWEEN and get different results?
4. If using just unbounded/current row, is there a scenario where you'd use RANGE instead of ROWS (or vice-versa)?
5. Since ROWS has more options, why isn't it mentioned at all in the documentation? o_O

Comments

The Presto documentation is fairly quiet about even RANGE, and doesn't mention ROWS. I haven't found many discussions or examples around window functions in Presto. I'm starting to sort through the Presto code-base to try to figure this out. Hopefully someone can save me from that, and we can improve the documentation together. The Presto code has a parser and test cases for the ROWS variant, but there's no mention in the documentation of ROWS. The test cases I found with both ROWS and RANGE don't test anything different between the two syntaxes. They almost look like synonyms, but they do behave differently in my testing, and have different allowed parameters and validation rules.

The following examples can be run with the starburstdata/presto Docker image running Presto 0.213-e-0.1. Typically I run Presto 0.172 through Amazon Athena, and have almost always ended up using ROWS.

RANGE

RANGE seems to be limited to "UNBOUNDED" and "CURRENT ROW". The following returns an error: range between 1 preceding and 1 following

use tpch.tiny;
select custkey, orderdate,
       array_agg(orderdate) over (
           partition by custkey
           order by orderdate asc
           range between 1 preceding and 1 following
       ) previous_orders
from orders
where custkey in (419, 320)
  and orderdate < date('1996-01-01')
order by custkey, orderdate asc;

ERROR: Window frame RANGE PRECEDING is only supported with UNBOUNDED

The following range syntaxes do work fine (with expected differing results).
All following examples based on the above query, just changing the range range between unbounded preceding and current row custkey | orderdate | previous_orders ---------+------------+-------------------------------------------------------------------------- 320 | 1992-07-10 | [1992-07-10] 320 | 1992-07-30 | [1992-07-10, 1992-07-30] 320 | 1994-07-08 | [1992-07-10, 1992-07-30, 1994-07-08] 320 | 1994-08-04 | [1992-07-10, 1992-07-30, 1994-07-08, 1994-08-04] 320 | 1994-09-18 | [1992-07-10, 1992-07-30, 1994-07-08, 1994-08-04, 1994-09-18] 320 | 1994-10-12 | [1992-07-10, 1992-07-30, 1994-07-08, 1994-08-04, 1994-09-18, 1994-10-12] 419 | 1992-03-16 | [1992-03-16] 419 | 1993-12-29 | [1992-03-16, 1993-12-29] 419 | 1995-01-30 | [1992-03-16, 1993-12-29, 1995-01-30] range between current row and unbounded following custkey | orderdate | previous_orders ---------+------------+-------------------------------------------------------------------------- 320 | 1992-07-10 | [1992-07-10, 1992-07-30, 1994-07-08, 1994-08-04, 1994-09-18, 1994-10-12] 320 | 1992-07-30 | [1992-07-30, 1994-07-08, 1994-08-04, 1994-09-18, 1994-10-12] 320 | 1994-07-08 | [1994-07-08, 1994-08-04, 1994-09-18, 1994-10-12] 320 | 1994-08-04 | [1994-08-04, 1994-09-18, 1994-10-12] 320 | 1994-09-18 | [1994-09-18, 1994-10-12] 320 | 1994-10-12 | [1994-10-12] 419 | 1992-03-16 | [1992-03-16, 1993-12-29, 1995-01-30] 419 | 1993-12-29 | [1993-12-29, 1995-01-30] 419 | 1995-01-30 | [1995-01-30] range between unbounded preceding and unbounded following custkey | orderdate | previous_orders ---------+------------+-------------------------------------------------------------------------- 320 | 1992-07-10 | [1992-07-10, 1992-07-30, 1994-07-08, 1994-08-04, 1994-09-18, 1994-10-12] 320 | 1992-07-30 | [1992-07-10, 1992-07-30, 1994-07-08, 1994-08-04, 1994-09-18, 1994-10-12] 320 | 1994-07-08 | [1992-07-10, 1992-07-30, 1994-07-08, 1994-08-04, 1994-09-18, 1994-10-12] 320 | 1994-08-04 | [1992-07-10, 1992-07-30, 1994-07-08, 1994-08-04, 1994-09-18, 1994-10-12] 320 | 1994-09-18 | [1992-07-10, 1992-07-30, 1994-07-08, 1994-08-04, 1994-09-18, 1994-10-12] 320 | 1994-10-12 | [1992-07-10, 1992-07-30, 1994-07-08, 1994-08-04, 1994-09-18, 1994-10-12] 419 | 1992-03-16 | [1992-03-16, 1993-12-29, 1995-01-30] 419 | 1993-12-29 | [1992-03-16, 1993-12-29, 1995-01-30] 419 | 1995-01-30 | [1992-03-16, 1993-12-29, 1995-01-30] ROWS The three working examples for RANGE above all work for ROWS and produce identical output. 
rows between unbounded preceding and current row rows between current row and unbounded following rows between unbounded preceding and unbounded following output omitted - identical to above However, ROWS allows for far more control, since you can also do the syntax above that fails with range: rows between 1 preceding and 1 following custkey | orderdate | previous_orders ---------+------------+-------------------------------------- 320 | 1992-07-10 | [1992-07-10, 1992-07-30] 320 | 1992-07-30 | [1992-07-10, 1992-07-30, 1994-07-08] 320 | 1994-07-08 | [1992-07-30, 1994-07-08, 1994-08-04] 320 | 1994-08-04 | [1994-07-08, 1994-08-04, 1994-09-18] 320 | 1994-09-18 | [1994-08-04, 1994-09-18, 1994-10-12] 320 | 1994-10-12 | [1994-09-18, 1994-10-12] 419 | 1992-03-16 | [1992-03-16, 1993-12-29] 419 | 1993-12-29 | [1992-03-16, 1993-12-29, 1995-01-30] 419 | 1995-01-30 | [1993-12-29, 1995-01-30] rows between current row and 1 following custkey | orderdate | previous_orders ---------+------------+-------------------------- 320 | 1992-07-10 | [1992-07-10, 1992-07-30] 320 | 1992-07-30 | [1992-07-30, 1994-07-08] 320 | 1994-07-08 | [1994-07-08, 1994-08-04] 320 | 1994-08-04 | [1994-08-04, 1994-09-18] 320 | 1994-09-18 | [1994-09-18, 1994-10-12] 320 | 1994-10-12 | [1994-10-12] 419 | 1992-03-16 | [1992-03-16, 1993-12-29] 419 | 1993-12-29 | [1993-12-29, 1995-01-30] 419 | 1995-01-30 | [1995-01-30] rows between 5 preceding and 2 preceding custkey | orderdate | previous_orders ---------+------------+-------------------------------------------------- 320 | 1992-07-10 | NULL 320 | 1992-07-30 | NULL 320 | 1994-07-08 | [1992-07-10] 320 | 1994-08-04 | [1992-07-10, 1992-07-30] 320 | 1994-09-18 | [1992-07-10, 1992-07-30, 1994-07-08] 320 | 1994-10-12 | [1992-07-10, 1992-07-30, 1994-07-08, 1994-08-04] 419 | 1992-03-16 | NULL 419 | 1993-12-29 | NULL 419 | 1995-01-30 | [1992-03-16]
ROWS is literally the number of rows before and after the current row that you want to aggregate. So ORDER BY day ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING will end up with 3 rows: the current row, 1 row before and 1 row after, regardless of the value of orderdate.

RANGE will look at the values of orderdate and decide what should be aggregated and what not. So ORDER BY day RANGE BETWEEN 1 PRECEDING AND 1 FOLLOWING would theoretically take all lines with values of orderdate-1, orderdate and orderdate+1 - this can be more than 3 lines (see more explanations here).

In Presto, ROWS is fully implemented, but RANGE is only partially implemented, and you can only use it with CURRENT ROW and UNBOUNDED.

NOTE: Recent versions of Trino (formerly known as Presto SQL) have full support for RANGE and GROUPS framing. See this blog post for an explanation of how they work.

The best way in Presto to see the difference between the two is to make sure you have repeated values in the ORDER BY clause:

WITH
tt1 (custkey, orderdate, product) AS
( SELECT * FROM ( VALUES
    ('a','1992-07-10', 3),
    ('a','1993-08-10', 4),
    ('a','1994-07-13', 5),
    ('a','1995-09-13', 5),
    ('a','1995-09-13', 9),
    ('a','1997-01-13', 4),
    ('b','1992-07-10', 6),
    ('b','1992-07-10', 4),
    ('b','1994-07-13', 5),
    ('b','1994-07-13', 9),
    ('b','1998-11-11', 9) ) )
SELECT *,
       array_agg(product) OVER (partition by custkey) c,
       array_agg(product) OVER (partition by custkey order by orderdate) c_order,
       array_agg(product) OVER (partition by custkey order by orderdate RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) range_ubub,
       array_agg(product) OVER (partition by custkey order by orderdate ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) rows_ubub,
       array_agg(product) OVER (partition by custkey order by orderdate RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) range_ubc,
       array_agg(product) OVER (partition by custkey order by orderdate ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) rows_ubc,
       array_agg(product) OVER (partition by custkey order by orderdate RANGE BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) range_cub,
       array_agg(product) OVER (partition by custkey order by orderdate ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) rows_cub,
       -- array_agg(product) OVER (partition by custkey order by orderdate RANGE BETWEEN 2 PRECEDING AND 2 FOLLOWING) range22,
       -- SYNTAX_ERROR: line 19:65: Window frame RANGE PRECEDING is only supported with UNBOUNDED
       array_agg(product) OVER (partition by custkey order by orderdate ROWS BETWEEN 2 PRECEDING AND 2 FOLLOWING) rows22
from tt1
order by custkey, orderdate, product

You can run it, see the full results, and learn from them. I'll put here only some interesting columns:

custkey  orderdate   product  range_ubc           rows_ubc
a        10/07/1992  3        [3]                 [3]
a        10/08/1993  4        [3, 4]              [3, 4]
a        13/07/1994  5        [3, 4, 5]           [3, 4, 5]
a        13/09/1995  5        [3, 4, 5, 5, 9]     [3, 4, 5, 5]
a        13/09/1995  9        [3, 4, 5, 5, 9]     [3, 4, 5, 5, 9]
a        13/01/1997  4        [3, 4, 5, 5, 9, 4]  [3, 4, 5, 5, 9, 4]
b        10/07/1992  4        [6, 4]              [6, 4]
b        10/07/1992  6        [6, 4]              [6]
b        13/07/1994  5        [6, 4, 5, 9]        [6, 4, 5]
b        13/07/1994  9        [6, 4, 5, 9]        [6, 4, 5, 9]
b        11/11/1998  9        [6, 4, 5, 9, 9]     [6, 4, 5, 9, 9]

If you look at the 5th line, orderdate:13/09/1995, product:5 (note: 13/09/1995 appears twice for custkey:a), you can see that ROWS indeed took all rows from the top up to the current line. But if you look at RANGE, you see it also includes the value from the row after, because that row has the exact same orderdate and is therefore considered part of the same window.
Presto
60,302,379
16
I'm getting values from nested maps and it's hard to figure out what data type each value is. Is there a typeof function that can tell me the data type of each value?
Yes, there is the typeof function:

presto> select typeof(1), typeof('a');
  _col0  |   _col1
---------+------------
 integer | varchar(1)
(1 row)
Presto
44,192,096
15
I have a column in my dataset that has a datatype of bigint:

Col1    Col2
1       1519778444938790
2       1520563808877450
3       1519880608427160
4       1520319586578960
5       1519999133096120

How do I convert Col2 to the following format: year-month-day hr:mm:ss

I am not sure what format my current column is in, but I know that it is supposed to be a timestamp. Any help will be great, thanks!
Have you tried functions like from_unixtime? You could use it to convert Unix time to a timestamp, and then use date_format to display it the way you want. Note that in your example the Unix time is in microseconds, while from_unixtime expects seconds, so you need to divide by 1,000,000 first. I have not tested it, but your expression should look roughly like:

date_format(from_unixtime(col2 / 1000000), '%Y-%m-%d %h:%i:%s')

Notice that from_unixtime also accepts a time zone.

Please visit this page to see more details about date-related functions: https://docs.starburstdata.com/latest/functions/datetime.html
Presto
50,050,603
15
In Presto:

SHOW SCHEMAS; returns all schemas
SHOW TABLES FROM foo; returns all tables for the foo schema

Is there a simple way to return tables from all schemas in Presto?
You can use:

select table_schema, table_name from information_schema.tables;
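To narrow the result down, for example excluding information_schema itself, something along these lines should work:

SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_schema <> 'information_schema'
ORDER BY table_schema, table_name;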
Presto
40,938,321
14
I'd like to convert my timestamp columns to date and time format. How should I write the query in Presto? My timestamp is UTC time. Thank you very much.

Timestamp format: "1506929478589"

After conversion it should look like: "2016-10-25 21:04:08.436"
You can convert timestamp to date with cast(col as date) or date(col).
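Note that the sample value in the question (1506929478589) looks like epoch milliseconds rather than a timestamp column, so it would need converting first. A sketch (ts_millis and my_table are placeholder names):

SELECT from_unixtime(ts_millis / 1000.0) AS ts,
       cast(from_unixtime(ts_millis / 1000.0) as date) AS d
FROM my_table;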
Presto
46,886,856
14
Is this possible in SQL (preferably Presto)? I want to reshape this table:

id, array
1, ['something']
1, ['something else']
2, ['something']

To this table:

id, array
1, ['something', 'something else']
2, ['something']
In Presto you can use array_agg. Assuming that on input, all your arrays are single-element, this would look like this:

select id, array_agg(array[0]) from ... group by id;

If, however, your input arrays are not necessarily single-element, you can combine this with flatten, like this:

select id, flatten(array_agg(array)) from ... group by id;
Presto
52,501,221
14
It seems like there is no native function for that purpose in Presto SQL. Do you know any way to efficiently aggregate a group and return its median?
approx_percentile() should be a reasonable approach. Assuming a table like mytable(id, val) that you want to aggregate by id:

select id, approx_percentile(val, 0.5) median_val
from mytable
group by id
Presto
64,030,409
14
Typically, to create a table in Presto (from existing db tables), I do:

create table abc as (
  select...
)

But to make my code simple, I've broken out subqueries like this:

with sub1 as (
  select...
),
sub2 as (
  select...
),
sub3 as (
  select...
)
select
from sub1
join sub2 on ...
join sub3 on ...

Where do I put the create table statement here? The actual query is more complex than the above, so I am trying to avoid having to put the subqueries within the main query.
This is possible with an INSERT INTO, not sure about CREATE TABLE:

INSERT INTO s1
WITH q1 AS (...)
SELECT * FROM q1

Maybe you could give this a shot:

CREATE TABLE s1 as
WITH q1 AS (...)
SELECT * FROM q1
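Tying it back to the shape of the question, a sketch (t1 and t2 are hypothetical source tables):

CREATE TABLE abc AS
WITH
  sub1 AS (SELECT id, x FROM t1),
  sub2 AS (SELECT id, y FROM t2)
SELECT sub1.id, sub1.x, sub2.y
FROM sub1
JOIN sub2 ON sub1.id = sub2.id;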
Presto
42,563,301
13
In my case, Presto connects to a MySQL database which has been configured to be case-insensitive. But any search through Presto seems to be case-sensitive.

Questions:
1) Is there a way to configure Presto searches to be case-insensitive? If not, can something be changed in the Presto-MySQL connector to make the searches case-insensitive?
2) If the underlying DB is case-insensitive, shouldn't Presto searches also be case-insensitive? (I presume that Presto only generates the query plan and the actual execution happens on the underlying database.)

Example: Consider the below table on MySQL.

name
____
adam
Alan

select * from table where name like '%a%'
// returns adam, Alan on MySQL
// returns only adam on Presto

select * from table where name = 'Adam'
// returns adam on MySQL
// returns NIL on Presto
You have to explicitly ask for case-insensitive comparison by normalizing compared values either to-lower, or to-upper, like this:

select * from table where lower(name) like '%a%';
select * from table where lower(name) = lower('Adam');
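For pattern matching, a case-insensitive regular expression is another option worth trying (a sketch; verify the behavior on your Presto version):

select * from table where regexp_like(name, '(?i)adam');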
Presto
42,850,329
13
This query in Presto:

select *, cast(ts_w_tz as timestamp) as ts,
       cast(substr(cast(ts_w_tz as varchar), 1, 23) as timestamp) as local_ts_workaround
from (select timestamp '2018-02-06 23:00:00.000 Australia/Melbourne' as ts_w_tz);

Returns:

                  ts_w_tz                    |           ts            |   local_ts_workaround
---------------------------------------------+-------------------------+-------------------------
 2018-02-06 23:00:00.000 Australia/Melbourne | 2018-02-06 12:00:00.000 | 2018-02-06 23:00:00.000

As you can see, the act of casting the timestamp with timezone to a timestamp has resulted in the timestamp being converted back to UTC time (e.g. ts). IMO the correct behaviour should be to return the 'wall reading' of the timestamp, as per local_ts_workaround.

I realise there are many posts about how Presto's handling of this is wrong and doesn't conform to the SQL standard, and that there is a fix in the works. But in the meantime this is a major pain since the effect is that there appears to be no built-in way to get a localised timestamp withOUT timezone (as per local_ts_workaround).

Obviously, I have the string conversion workaround for now, but this seems horrible. I am wondering if anyone has a better workaround or can point out something that I have missed? Thanks.
It seems like there's no great solution, but building off of the previous answer, I like this a little better... see the date_format_workaround column:

select *,
       cast(from_iso8601_timestamp(date_format(ts_w_tz, '%Y-%m-%dT%H:%i:%s')) as timestamp) as date_format_workaround,
       cast(ts_w_tz as timestamp) as ts,
       cast(substr(cast(ts_w_tz as varchar), 1, 23) as timestamp) as local_ts_workaround
from (select timestamp '2018-02-06 23:00:00.000 Australia/Melbourne' as ts_w_tz);
Presto
48,633,900
13
I have a very simple csv file on S3:

"i","d","f","s"
"1","2018-01-01","1.001","something great!"
"2","2018-01-02","2.002","something terrible!"
"3","2018-01-03","3.003","I'm an oil man"

I'm trying to create a table across this using the following command:

CREATE EXTERNAL TABLE test (i int, d date, f float, s string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
LOCATION 's3://mybucket/test/'
TBLPROPERTIES ("skip.header.line.count"="1");

When I query the table (select * from test) I'm getting an error like this:

HIVE_BAD_DATA: Error parsing field value '2018-01-01' for field 1: For input string: "2018-01-01"

Some more info:
If I change the d column to a string the query will succeed.
I've previously parsed dates in text files using Athena; I believe using LazySimpleSerDe.
Definitely seems like a problem with the OpenCSVSerde.
The documentation definitely implies that this is supported.
Looking for anyone who has encountered this, or any suggestions.
In fact, it is a problem with the documentation that you mentioned. You were probably referring to this excerpt:

[OpenCSVSerDe] recognizes the DATE type if it is specified in the UNIX format, such as YYYY-MM-DD, as the type LONG.

Understandably, you were formatting your date as YYYY-MM-DD. However, the documentation is deeply misleading in that sentence. When it refers to UNIX format, it actually has UNIX Epoch Time in mind.

Based on the definition of UNIX Epoch, your dates should be integers (hence the reference to the type LONG in the documentation). Your dates should be the number of days that have elapsed since January 1, 1970.

For instance, your sample CSV should look like this:

"i","d","f","s"
"1","17532","1.001","something great!"
"2","17533","2.002","something terrible!"
"3","17534","3.003","I'm an oil man"

Then you can run that exact same command:

CREATE EXTERNAL TABLE test (i int, d date, f float, s string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
LOCATION 's3://mybucket/test/'
TBLPROPERTIES ("skip.header.line.count"="1");

If you query your Athena table with select * from test, you will get:

 i   d            f       s
--- ------------ ------- ---------------------
 1   2018-01-01   1.001   something great!
 2   2018-01-02   2.002   something terrible!
 3   2018-01-03   3.003   I'm an oil man

An analogous problem also compromises the explanation on TIMESTAMP in the aforementioned documentation:

[OpenCSVSerDe] recognizes the TIMESTAMP type if it is specified in the UNIX format, such as yyyy-mm-dd hh:mm:ss[.f...], as the type LONG.

It seems to indicate that we should format TIMESTAMPs as yyyy-mm-dd hh:mm:ss[.f...]. Not really. In fact, we need to use UNIX Epoch Time again, but this time with the number of milliseconds that have elapsed since midnight, 1 January 1970.

For instance, consider the following sample CSV:

"i","d","f","s","t"
"1","17532","1.001","something great!","1564286638027"
"2","17533","2.002","something terrible!","1564486638027"
"3","17534","3.003","I'm an oil man","1563486638012"

And the following CREATE TABLE statement:

CREATE EXTERNAL TABLE test (i int, d date, f float, s string, t timestamp)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
LOCATION 's3://mybucket/test/'
TBLPROPERTIES ("skip.header.line.count"="1");

This will be the result set for select * from test:

 i   d            f       s                     t
--- ------------ ------- --------------------- -------------------------
 1   2018-01-01   1.001   something great!      2019-07-28 04:03:58.027
 2   2018-01-02   2.002   something terrible!   2019-07-30 11:37:18.027
 3   2018-01-03   3.003   I'm an oil man        2019-07-18 21:50:38.012
Presto
52,564,194
13
Currently, my table has three different fields: id1, id2 and actions. actions is of type string. For example, my table looks something like the table given below:

id1    | id2   | actions
---------------------------
"a1"     "a2"    "action1"
"b1"     "b2"    "action2"
"a1"     "a2"    "action3"

If the values of id1 and also the values of id2 are the same for any number of rows, I want to combine those rows so that the actions field becomes a list of strings. If none of the rows share the same values for id1 and id2, I still want to convert the actions field to a list, but with only one string. For example, the output of the query should look something like the following:

id1    | id2   | actions
---------------------------
"a1"     "a2"    ["action1", "action3"]
"b1"     "b2"    ["action2"]

I know some basics of Presto and can join columns based on conditions, but I was not sure if this can be achieved with a query. If it can, what is a good approach to implementing this logic?
Try using ARRAY_JOIN with ARRAY_AGG:

SELECT id1, id2, ARRAY_JOIN(ARRAY_AGG(actions), ',') actions
FROM yourTable
GROUP BY id1, id2;
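If you want an actual array (as in the question's expected output) rather than a comma-joined string, ARRAY_AGG alone should be enough:

SELECT id1, id2, ARRAY_AGG(actions) actions
FROM yourTable
GROUP BY id1, id2;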
Presto
55,370,212
13
Recently, I've experienced an issue with AWS Athena when there is quite a high number of partitions.

The old version had a database and tables with only one partition level, say id=x. Take one table, for example, where we store payment parameters per id (product), and there are not that many IDs, around 1000-5000. When querying that table with an id in the where clause, like ".. where id = 10", the queries returned pretty fast.

Assume we update the data twice a day. Lately, we've been thinking of adding another partition level for day, like "../id=x/dt=yyyy-mm-dd/..". This means the number of partitions grows by the number of IDs each day; if a month passes and we have 3000 IDs, we'd get approximately 3000x30 = 90000 partitions a month, a rapid growth in the number of partitions.

On, say, 3 months of data (~270k partitions), we'd like a query like the following to return in at most 20 seconds or so:

select count(*) from db.table where id = x and dt = 'yyyy-mm-dd'

It takes about a minute.

The Real Case

It turns out Athena first fetches all the partitions (metadata) and S3 paths (regardless of the where clause) and only then filters those S3 paths down to the ones matching the where condition. The first part (fetching all S3 paths by partition) takes time proportional to the number of partitions: the more partitions you have, the slower the query executes. Intuitively, I expected Athena to fetch only the S3 paths matching the where clause; that would be the whole point of partitioning. Maybe it fetches all paths.

Does anybody know a workaround, or are we using Athena the wrong way? Should Athena be used only with a small number of partitions?

Edit

In order to clarify the statement above, I add a piece from a support mail.

from Support

... You mentioned that your new system has 360000 which is a huge number. So when you are doing select * from <partitioned table>, Athena first download all partition metadata and searched S3 path mapped with those partitions. This process of fetching data for each partition lead to longer time in query execution. ...

Update

An issue was opened on the AWS forums; the linked issue raised on the AWS forums is here.

Thanks.
This is impossible to properly answer without knowing the amount of data, what file formats, and how many files we're talking about.

TL; DR I suspect you have partitions with thousands of files and that the bottleneck is listing and reading them all.

For any data set that grows over time you should have a temporal partitioning, on date or even time, depending on query patterns. If you should have partitioning on other properties depends on a lot of factors and in the end it often turns out that not partitioning is better. Not always, but often.

Using reasonably sized (~100 MB) Parquet can in many cases be more effective than partitioning. The reason is that partitioning increases the number of prefixes that have to be listed on S3, and the number of files that have to be read. A single 100 MB Parquet file can be more efficient than ten 10 MB files in many cases.

When Athena executes a query it will first load partitions from Glue. Glue supports limited filtering on partitions, and will help a bit in pruning the list of partitions – so to the best of my knowledge it's not true that Athena reads all partition metadata.

When it has the partitions it will issue LIST operations to the partition locations to gather the files that are involved in the query – in other words, Athena won't list every partition location, just the ones in partitions selected for the query. This may still be a large number, and these list operations are definitely a bottleneck. It becomes especially bad if there is more than 1000 files in a partition because that's the page size of S3's list operations, and multiple requests will have to be made sequentially.

With all files listed Athena will generate a list of splits, which may or may not equal the list of files – some file formats are splittable, and if files are big enough they are split and processed in parallel.

Only after all of that work is done the actual query processing starts. Depending on the total number of splits and the amount of available capacity in the Athena cluster your query will be allocated resources and start executing.

If your data was in Parquet format, and there was one or a few files per partition, the count query in your question should run in a second or less. Parquet has enough metadata in the files that a count query doesn't have to read the data, just the file footer. It's hard to get any query to run in less than a second due to the multiple steps involved, but a query hitting a single partition should run quickly.

Since it takes two minutes I suspect you have hundreds of files per partition, if not thousands, and your bottleneck is that it takes too much time to run all the list and get operations in S3.
Presto
59,488,379
13
I have a list of creation timestamps and ending timestamps, and I would like to get the number of seconds elapsed from creation to ending. I could not find any way to do that without using a Unix timestamp (which I don't have at the moment). Something like this:

datediff('second', min(creation_time), max(ending_time))

creation_time = '2017-03-20 10:55:00'
date_diff:

date_diff('second', min(creation_time), max(ending_time))
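If creation_time and ending_time are stored as varchar, as the sample value '2017-03-20 10:55:00' might suggest, cast them first; a sketch:

date_diff('second',
          min(cast(creation_time as timestamp)),
          max(cast(ending_time as timestamp)))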
Presto
42,899,288
12
I'm running a query with:

select bar_tbl.thing1
from foo
cross join unnest(bar) as t(bar_tbl)

And got the error:

Error Query failed: Cannot unnest type: row

Why? The bar column looks like this:

{thing1=abc, thing2=def}
Turns out I was trying to expand a row, which doesn't make sense. I should have just done:

select bar.thing1
from foo
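For contrast, UNNEST is the right tool when the column is an array of rows rather than a single row. A sketch with a hypothetical table where bar is array(row(thing1 varchar, thing2 varchar)):

select t.thing1, t.thing2
from foo
cross join unnest(bar) as t (thing1, thing2)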
Presto
49,949,652
12