Columns: question (string, length 11 to 28.2k), answer (string, length 26 to 27.7k), tag (130 classes), question_id (int64, 935 to 78.4M), score (int64, 10 to 5.49k)
I am looking to use Flower (https://github.com/mher/flower) to monitor my Celery tasks in place of the django-admin as recommended in their docs (http://docs.celeryproject.org/en/latest/userguide/monitoring.html#flower-real-time-celery-web-monitor). However, because I am new to this I am a little confused that Flower's page is served only over HTTP, and not HTTPS. How can I enable security for my Celery tasks such that any old user can't just visit the no-login-needed website http://flowerserver.com:5555 and change something? I have considered Celery's own documentation on this, but unfortunately there is no mention of how to secure Flower's API or web UI. All it says: [Need more text here] Thanks! Update: My question is in part a duplicate of this one: How do I add authentication and endpoint to Django Celery Flower Monitoring? However, I clarify that question here by asking how to run it using an environment that includes nginx, gunicorn, and celery all on the same remote machine. I too am wondering about how to set up Flower's externally accessible URL, but would also prefer something like https instead of http if possible (or some way of securing the web UI and accessing it remotely). I also need to know if leaving Flower running is a considerable security risk for anyone who may gain access to Flower's internal API, and what the best way of securing this could be, or if it should just be disabled altogether and used only on an as-needed basis.
You can run flower with the --auth flag, which will authenticate using a particular Google email: celery flower [email protected] Edit 1: Newer versions of Flower require a couple more flags and a registered OAuth2 client with the Google Developer Console: celery flower \ [email protected] \ --oauth2_key="client_id" \ --oauth2_secret="client_secret" \ --oauth2_redirect_uri="http://example.com:5555/login" oauth2_redirect_uri has to be the actual Flower login URL, and it also has to be added to the authorized redirect URLs in the Google Developer Console. Unfortunately this feature doesn't work properly in the current stable version 0.7.2, but it is now fixed in the development version 0.8.0-dev with this commit. Edit 2: You can configure Flower using basic authentication: celery flower --basic_auth=user1:password1,user2:password2 Then block port 5555 for all but localhost and configure a reverse proxy for nginx or for Apache: ProxyRequests off ProxyPreserveHost On ProxyPass / http://localhost:5555 Then make sure the proxy module is on: sudo a2enmod proxy sudo a2enmod proxy_http In case you can't set it up on a separate subdomain, e.g. flower.example.com (config above), you can set it up for example.com/flower: run flower with url_prefix: celery flower --url_prefix=flower --basic_auth=user1:password1,user2:password2 in the Apache config: ProxyPass /flower http://localhost:5555 Of course, make sure SSL is configured, otherwise there is no point :)
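Since the question is about an nginx/gunicorn environment rather than Apache, the equivalent nginx reverse-proxy setup might look roughly like the sketch below; the hostname, certificate paths and upstream port are placeholders, and Flower is assumed to be bound to localhost (e.g. celery flower --address=127.0.0.1 --basic_auth=...) so it is only reachable through the proxy:

    server {
        listen 443 ssl;
        server_name flower.example.com;

        ssl_certificate     /etc/ssl/certs/flower.example.com.crt;
        ssl_certificate_key /etc/ssl/private/flower.example.com.key;

        location / {
            # pass everything to the local Flower instance
            proxy_pass http://127.0.0.1:5555;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }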
NGINX
19,689,510
58
I'm trying to have a docker container with nginx work as a reverse proxy to other docker containers and I keep getting "Bad Gateway" on locations other than the base location '/'. I have the following server block: server { listen 80; location / { proxy_pass "http://game2048:8080"; } location /game { proxy_pass "http://game:9999"; } } It works for http://localhost but not for http://localhost/game, which gives "Bad Gateway" in the browser and this on the nginx container: [error] 7#7: *6 connect() failed (111: Connection refused) while connecting to upstream, client: 172.17.0.1, server: , request: "GET /game HTTP/1.1", upstream: "http://172.17.0.4:9999/game", host: "localhost" I use the official nginx docker image and put my own configuration on it. You can test it and see all details here: https://github.com/jollege/ngprox1 Any ideas what goes wrong? NB: I have set local hostname entries on the docker host to match those names: 127.0.1.1 game2048 127.0.1.1 game
I fixed it! I set the server name in different server blocks in nginx config. Remember to use docker port, not host port. server { listen 80; server_name game2048; location / { proxy_pass "http://game2048:8080"; } } server { listen 80; server_name game; location / { # Remember to refer to docker port, not host port # which is 9999 in this case: proxy_pass "http://game:8080"; } } The github repo has been updated to reflect the fix, the old readme file is there under ./README.old01.md. Typical that I find the answer when I carefully phrase the question to others. Do you know that feeling?
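An alternative sketch, in case you would rather keep a single server block instead of one per container: when proxy_pass is given with a URI part (the trailing slash), nginx replaces the matched location prefix before passing the request upstream, so /game/foo is forwarded as /foo. The port follows the fix above (the container port 8080, not the host port 9999):

    location /game/ {
        # the trailing slash strips the /game/ prefix before proxying
        proxy_pass "http://game:8080/";
    }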
NGINX
47,091,356
57
I have a Sinatra application hosted with Unicorn, and nginx in front of it. When the Sinatra application errors out (returns 500), I'd like to serve a static page, rather than the default "Internal Server Error". I have the following nginx configuration: server { listen 80 default; server_name *.example.com; root /home/deploy/www-frontend/current/public; location / { proxy_pass_header Server; proxy_set_header Host $http_host; proxy_redirect off; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Scheme $scheme; proxy_connect_timeout 5; proxy_read_timeout 240; proxy_pass http://127.0.0.1:4701/; } error_page 500 502 503 504 /50x.html; } The error_page directive is there, and I have sudo'd as www-data (Ubuntu) and verified I can cat the file, thus it's not a permission problem. With the above config file, and service nginx reload, the page I receive on error is still the same "Internal Server Error". What's my error?
error_page handles errors that are generated by nginx. By default, nginx will return whatever the proxy server returns regardless of http status code. What you're looking for is proxy_intercept_errors This directive decides if nginx will intercept responses with HTTP status codes of 400 and higher. By default all responses will be sent as-is from the proxied server. If you set this to on then nginx will intercept status codes that are explicitly handled by an error_page directive. Responses with status codes that do not match an error_page directive will be sent as-is from the proxied server.
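A minimal sketch of how that fits into the configuration from the question (the root and upstream come from the original config; the internal directive just keeps the error page from being requested directly):

    location / {
        proxy_pass http://127.0.0.1:4701/;
        proxy_intercept_errors on;
    }

    error_page 500 502 503 504 /50x.html;

    location = /50x.html {
        root /home/deploy/www-frontend/current/public;
        internal;
    }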
NGINX
8,715,064
57
I am new to Nginx and hope to get some help. I want to extract certain data (certain fields set by my PHP scripts) from browser cookie in nginx so that I can log it. If possible, I want to do this just by modifying nginx configuration. Any pointer/help would be greatly appreciated.
You can access cookie values by using the $cookie_COOKIE_NAME_GOES_HERE variable. See Nginx Documentation
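For the logging part of the question, a hedged sketch: assuming the PHP scripts set a cookie named sessionid (a hypothetical name), you could define a custom log format in the http context and use it for the access log:

    log_format with_cookie '$remote_addr [$time_local] "$request" $status sessionid="$cookie_sessionid"';
    access_log /var/log/nginx/access.log with_cookie;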
NGINX
26,128,412
56
I want to parse NGINX error logs. However, there seems to be no documentation at all concerning the log format used. While the meaning of some fields like the date is pretty obvious, others are not at all. In addition, I cannot be sure that my parser is complete if I do not have documentation of all the possible fields. Since it seems you can change the access log format, but not that of the error log, I really have no idea how to get the information I need. Does anyone know of such documentation?
From reading src/core/ngx_log.c I guess the general error log format seems to be YYYY/MM/DD HH:MM:SS [LEVEL] PID#TID: *CID MESSAGE With PID and TID being the logging process and thread id and CID a number identifying a (probably proxied) connection, probably a counter. The *CID part is optional.
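For illustration only, a hypothetical line in that shape (the trailing client/server/request fields are extra context appended to the message by the HTTP module and are not part of the fixed prefix):

    2016/01/21 13:55:02 [error] 25044#0: *1 open() "/var/www/favicon.ico" failed (2: No such file or directory), client: 127.0.0.1, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "localhost"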
NGINX
16,711,573
56
I use reverse proxy with Nginx and I want to force the request into HTTPS, so if a user wants to access the url with http, he will be automatically redirected to HTTPS. I'm also using a non-standard port. Here is my nginx reverse proxy config: server { listen 8001 ssl; ssl_certificate /home/xxx/server.crt; ssl_certificate_key /home/xxx/server.key; location / { proxy_pass https://localhost:8000; proxy_redirect off; proxy_set_header Host $host:$server_port; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Ssl on; proxy_set_header X-Forwarded-Proto https; } } I've tried many things and also read posts about it, including this serverfault question, but nothing has worked so far.
Found something that is working well : server { listen 8001 ssl; ssl_certificate /home/xxx/server.crt; ssl_certificate_key /home/xxx/server.key; error_page 497 301 =307 https://$host:$server_port$request_uri; location /{ proxy_pass http://localhost:8000; proxy_redirect off; proxy_set_header Host $host:$server_port; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Ssl on; } }
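The error_page 497 trick above covers plain-HTTP requests hitting the SSL port itself. If clients may also come in on the standard HTTP port, a separate redirect-only server block is the usual complement; a sketch assuming the same host and the non-standard HTTPS port 8001:

    server {
        listen 80;
        server_name example.com;
        return 301 https://$host:8001$request_uri;
    }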
NGINX
15,429,043
56
I'm using Nginx as a reverse proxy. What is the difference between these headers: proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; In some documents/tutorials I see both are used, in others only the first. They seem similar, so I'd like to understand how they differ and whether I need to use both at the same time.
What is the difference between these headers? Did you check the $proxy_add_x_forwarded_for variable documentation? the X-Forwarded-For client request header field with the $remote_addr variable appended to it, separated by a comma. If the X-Forwarded-For field is not present in the client request header, the $proxy_add_x_forwarded_for variable is equal to the $remote_addr variable. If the incoming request already contains the X-Forwarded-For header, let's say X-Forwarded-For: 203.0.113.195, 150.172.238.178 and your request is coming from the IP 198.51.100.17, the new X-Forwarded-For header value (to be passed to the upstream) will be X-Forwarded-For: 203.0.113.195, 150.172.238.178, 198.51.100.17 If the incoming request doesn't contain the X-Forwarded-For header, this header will be passed to the upstream as X-Forwarded-For: 198.51.100.17 On the other hand, the X-Real-IP header being set the way you show in your question will always be equal to the $remote_addr nginx internal variable, in this case it will be X-Real-IP: 198.51.100.17 (unless the ngx_http_realip_module gets involved to change that variable value to something other than the actual remote peer address; read the module documentation to find out all the details; this SO question has some useful examples/additional details too.) Whether I need to use both at the same time? The very first question should be "do I need to add any of those headers to the request going to my backend at all?" That really depends on your backend app. Does it count on any of those headers? Do those header values actually make any difference to the app behavior? How does your backend app treat those header values? As you can see, the request origin is assumed to be the very first address from the X-Forwarded-For addresses list. On the other hand, that header can be easily spoofed, so some server setups may allow that header from trusted sources only, removing it otherwise. If you set the X-Real-IP header in your server setup, it will always contain the actual remote peer address; if you don't, and you've got a spoofed request with the X-Real-IP header already present in it, it will be passed to your backend as is, which may be really bad if your app prefers to rely on that header rather than the X-Forwarded-For one. Different backend apps may behave differently; you can check this GitHub issue discussion to get the idea. Summarizing all this up: if you definitely know what headers your backend app can actually process and how it will be done, you should set the required headers according to the way they will be processed and skip the non-required ones to minimize the proxied payload. If you don't, and you don't know if your app can be spoofed with an incorrect X-Forwarded-For header, and you don't have a trusted proxy server(s) in front of your nginx instance, the safest way will be to set both according to the actual remote peer address: proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header X-Real-IP $remote_addr; If you know for sure your backend app cannot be spoofed with a wrong X-Forwarded-For HTTP header and you want to provide it with all the information you've got in the original request, use the example you've shown in your question: proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Real-IP $remote_addr; Some additional technical information. Actually, those X-Forwarded-... HTTP headers are some kind of non-standard headers.
According to MDN, it was assumed that the standard headers for transmitting such information would be Via, described in RFC 7230, and Forwarded, described in RFC 7239. However, X-Forwarded-For, X-Forwarded-Host and X-Forwarded-Proto became the alternative, de-facto standard versions instead of those. Instead of using X-Forwarded-Host, which may or may not be interpreted by your backend app, a more reliable approach is to explicitly set the Host HTTP header for the proxied request using either proxy_set_header Host $host; or proxy_set_header Host $http_host; or even proxy_set_header Host $server_name; (you can check the difference between the $host, $http_host and $server_name nginx internal variables here.) On the other hand, the X-Forwarded-Proto header is used quite often to tell the backend app whether the original request was made over the encrypted HTTPS protocol or not. Sometimes you can even see the X-Forwarded-Proxy header used in the configuration; as for me, this one looks senseless since the backend app should not behave differently depending on the reverse proxy software you actually use; however I can believe there can be web apps that really can deal with that one in some useful way. MDN does not mention the X-Real-IP header at all; however there are definitely quite a lot of web apps that should be provided with that one. One more technical detail. Like some other reverse proxy servers, nginx will "fold" multiple X-Forwarded-For headers into a single one, so the proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; and the proxy_set_header X-Forwarded-For $http_x_forwarded_for; proxy_set_header X-Forwarded-For $remote_addr; configuration fragments will behave identically, passing a single X-Forwarded-For header to your backend app, no matter which configuration is used.
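Since the answer mentions ngx_http_realip_module: a minimal sketch of using it so that $remote_addr itself reflects the original client when the request arrives via a trusted proxy (203.0.113.10 is a hypothetical load-balancer address):

    set_real_ip_from 203.0.113.10;
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;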
NGINX
72,557,636
55
I used this sh file to install Nginx. When I modify the nginx.conf and try to reload or restart Nginx it didn't restart. I used below command. sudo systemctl restart nginx gave me sudo: unable to resolve host localhost.localdomain sudo: systemctl: command not found and this one sudo service nginx restart sudo: unable to resolve host localhost.localdomain nginx: unrecognized service and this one sudo /etc/init.d/nginx restart sudo: unable to resolve host localhost.localdomain sudo: /etc/init.d/nginx: command not found
The nginx web server can be restarted using any one of the following command line syntax. Use systemctl on systemd based version such as Ubuntu Linux 16.04LTS and above: sudo systemctl restart nginx OR sudo service nginx restart OR (older Ubuntu Linux version): sudo /etc/init.d/nginx restart The same commands can be used to start / stop / restart the nginx server on a Red Hat 7 versions: sudo systemctl start nginx sudo systemctl stop nginx sudo systemctl restart nginx OR sudo service nginx start sudo service nginx stop sudo service nginx restart OR sudo /etc/init.d/nginx start sudo /etc/init.d/nginx stop sudo /etc/init.d/nginx restart To view status of your nginx server, use any one of the following command: sudo service nginx status OR sudo /etc/init.d/nginx status OR for Red Hat 7, CentOS 7 and higher sudo systemctl status nginx
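Whichever of the above applies to your init system, it is usually worth validating the configuration before reloading, for example:

    sudo nginx -t && sudo systemctl reload nginx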
NGINX
42,451,592
55
A lot of Django app deployments over Amazon's EC2 use HTTP servers NGINX and Gunicorn. I was wondering what they actually do and why both are used in parallel. What is the purpose of running them both in parallel?
They aren't used in parallel. NGINX is a reverse proxy. It's first in line. It accepts incoming connections and decides where they should go next. It also (usually) serves static media such as CSS, JS and images. It can also do other things such as encryption via SSL, caching etc. Gunicorn is the next layer and is an application server. NGINX sees that the incoming connection is for www.domain.com and knows (via configuration files) that it should pass that connection onto Gunicorn. Gunicorn is a WSGI server which is basically a: simple and universal interface between web servers and web applications or frameworks Gunicorn's job is to manage and run the Django instance(s) (similar to using django-admin runserver during development) The contrast to this setup is to use Apache with the mod_wsgi module. In this situation, the application server is actually a part of Apache, running as a module.
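A minimal sketch of that division of labour, assuming Gunicorn listens on 127.0.0.1:8000 and static files were collected to /srv/app/static (both are placeholders):

    server {
        listen 80;
        server_name www.domain.com;

        location /static/ {
            # nginx serves static media directly
            alias /srv/app/static/;
        }

        location / {
            # everything else is handed to Gunicorn
            proxy_pass http://127.0.0.1:8000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }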
NGINX
13,182,892
55
My site is using nginx, and I am testing the site with header testing tools, e.g. http://www.webconfs.com/http-header-check.php, but every time it says 400 Bad Request; below is the output from the tool. Though all my pages load perfectly fine in the browser, and when I look in the Chrome console it says status code 200 OK. HTTP/1.1 400 Bad Request => Server => nginx Date => Fri, 07 Sep 2012 09:40:09 GMT Content-Type => text/html Content-Length => 166 Connection => close I really don't understand what the problem with my server config is. A bit of googling suggests increasing the buffer size, and I increased it to the following: large_client_header_buffers 4 16k; The same results persist. Can someone guide me in the right direction?
As stated by Maxim Dounin in the comments above: When nginx returns 400 (Bad Request) it will log the reason into error log, at "info" level. Hence an obvious way to find out what's going on is to configure error_log to log messages at "info" level and take a look into error log when testing.
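A minimal sketch (the log path is a placeholder):

    error_log /var/log/nginx/error.log info;

Then reload nginx, repeat the failing request from the header tool, and watch the log, e.g. with tail -f /var/log/nginx/error.log, to see the logged reason for the 400.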
NGINX
12,315,832
55
I know that this is a common question, and there are answers for the same, but the reason I ask this question is because I do not know how to approach the solution. Depending on the way I decide to do it, the solution I can pick changes. Anyways, I have an AWS EC2 instance. My DNS is handled by Route53 and I own example.com. Currently, on my instance, there are two services running: example.com:80 [nginx/php/wordpress] example.com:8142 [flask] What I want to do is, make app.example.com point to example.com:8142. How exactly do I go about doing this? I am pretty sure that I will have to point app.example.com to the same IP as example.com, since it is the same box that will be serving it. And, nginx will be the first one to handle these requests at port 80. Is there a way with which I can make nginx forward all requests to localhost:8142? Is there a better way that I can solve this problem?
You could add a virtual host for app.example.com that listens on port 80 then proxy pass all requests to flask: server { listen 80; server_name app.example.com; location / { proxy_pass http://localhost:8142; } }
NGINX
23,649,444
54
How can I check that nginx is serving the .gz version of static files, if they exist? I compiled nginx with the gzip static module, but I don't see any mention of the .gz version being served in my logs. (I have minified global.js and global.css files with .gz versions of them in the same directory). The relevant part of nginx.conf looks like this: gzip on; gzip_static on; gzip_http_version 1.0; gzip_disable "MSIE [1-6]\."; gzip_vary on; gzip_comp_level 2; gzip_proxied any; gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; Any pointers would be appreciated.
Use strace. First, you need to detect PID of nginx process: # ps ax | grep nginx 25043 ? Ss 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf 25044 ? S 0:02 nginx: worker process Ok, so 25044 is the worker process. Now, we trace it: # strace -p 25044 2>&1 | grep gz open("/var/www/css/ymax.css.gz", O_RDONLY|O_NONBLOCK) = 438 open("/var/www/css/patches/patch_my_layout.css.gz", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory) open("/var/www/yaml/core/iehacks.css.gz", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory) open("/var/www/js/koznazna5.js.gz", O_RDONLY|O_NONBLOCK) = -1 ENOENT (No such file or directory) open("/var/www/css/ymax.css.gz", O_RDONLY|O_NONBLOCK) = 216 As you can see, it is trying to find .gz versions of files.
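An alternative check that does not require strace, sketched under the assumption that the file is reachable at the URL below: request it with gzip accepted and compare the reported Content-Length with the size of the .gz file on disk. When gzip_static serves the pre-compressed file the length typically matches exactly, whereas on-the-fly gzip usually sends a chunked response without a Content-Length:

    curl -sI -H 'Accept-Encoding: gzip' http://example.com/global.css | grep -iE 'content-(encoding|length)'
    ls -l /path/to/docroot/global.css.gz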
NGINX
2,460,821
54
I have been working with Spring and now would like to learn Spring Boot and microservices. I understand what a microservice is all about and how it works. While going through the docs I came across many things used to develop microservices along with Spring Boot, which I am very confused about. I have listed the systems below, and the questions: Netflix Eureka - I understand this is a service discovery platform. All services will be registered to the Eureka server and all microservices are Eureka clients. Now my doubt is, without having an API gateway, is there any use for this service registry? This is to understand the actual use of a service registry. ZUUL API gateway - I understand ZUUL can be used as an API gateway, which is basically a load balancer that calls the appropriate microservice corresponding to the request URL. Is that assumption correct? Will the API gateway interact with Eureka for getting the appropriate microservice? NGINX - I have read NGINX can also be used as an API gateway? Is that possible? Also I read somewhere else that NGINX can be used as a service registry, that is as an alternative to Eureka! Thus which is right? API gateway or service registry or both? I know nginx is a webserver and reverse proxies can be powerfully configured. AWS API gateway - Can this also be used as an alternative to ZUUL? RIBBON - what is Ribbon used for? I didn't understand! AWS ALB - This can also be used for load balancing. Thus do we need ZUUL if we have AWS ALB? Please help
without having an API gateway is there any use with this service registry ? Yes. For example you can use it to locate (IP and port) of all your microservices. This comes in handy for devops type work. For example, at one project I worked on, we used Eureka to find all instances of our microservices and ping them for their status (/health, /info). I understand ZUUL can be used as API gateway which is basically a load balancer , that calls appropriate microservice corresponding to request URL. iS that assumption correct? Yes but it can do a lot more. Essentially because Zuul is more of a framework/library that you turn into a microservice, you can code it to implement any kind of routing logic you can come up with. It is very powerful in that sense. For example, lets say you want to change how you route based on time of day or any other external factors, with Zuul you can do it. will the api gateway interact with Eureka for getting the appropriate microservice? Yes. You configure Zuul to point to Eureka. It becomes a client to Eureka and even subscribes to Eureka for realtime updates (which instances have joined or left). I have read NGINX can also be used as API gateway? Also i read some where else like NGINX can be used as a service registry , that is as an alternate for Eureka ! Thus which is right? Api gateway or service registry or both? Nginx is pretty powerful and can do API gateway type work. But there are some major differences. AFAIK, microservices cannot dynamically register with Nginx, please correct me if I am wrong... as they can with Eureka. Second, while I know Nginx is highly (very highly) configurable, I suspect its configuration abilities do not come close to Zuul's routing capabilities (due to having the whole Java language at your disposal within Zuul to code your routing logic). It could be the case that there are service discovery solutions that work with Nginx. So Nginx will take care of the routing and such, but service discovery will still require a solution. Is this can also be used as an alternate for ZUUL? Yes AWS API Gateway can be used as a Zuul replacement of sorts. The issue here, just like Nginx, is service discovery. AWS API Gateway lets you apply logic to your routing... though not as open ended as Zuul. for what ribbon is used? While you can use the Ribbon library directly, for the most part consider it as an internal dependency of Zuul. It helps Zuul do the simple load balancing that it does. Please note that this project is in maintenance mode and not recommended any more. This can also be used for load balancing. Thus do we need ZUUL if we have AWS ALB? You can use ALB with ECS (elastic container service) to replace Eureka/Zuul. ECS will take care of the service discover for you and will map all instances of a particular service to a Target Group. Your ALB routing table can then route to Target Groups based on simple routing rules. The routing rules in ALB are very simple though, but improving over time.
NGINX
52,834,628
53
I am currently in the design phase of an MMO browser game; the game will include tilemaps for some real-time locations (so tile data for each cell) and a general world map. The game engine I prefer uses MongoDB for the persistent data world. I will also implement a shipping simulation (which I will explain more below), which is basically a Dijkstra module. I had decided to use a graph database hoping it would make things easier, and found Neo4j as it is quite popular. I was happy with the MongoDB + Neo4J setup but then noticed OrientDB, which apparently acts like both MongoDB and Neo4J (best of both worlds?); they even have VS pages for MongoDB and Neo4J. Point is, I heard some horror stories of MongoDB losing data (though not sure it still does) and I don't have such luxury. And for Neo4J, I am not a big fan of the 12K€ per year "startup friendly" cost, although I'll probably not have a DB of millions of vertices. OrientDB seems a viable option as there may also be some opportunities of using one database solution. In that case, a logical move might be jumping to OrientDB, but it has a small community and to be honest I didn't find many reviews about it; MongoDB and Neo4J are popular, widely used tools, so I have concerns about whether OrientDB is an adventure. My first question would be if you have any experience/opinion regarding these databases. And my second question would be which graph database is better for a shipping simulation. The database used is expected to calculate the cheapest route from any vertex to any vertex and traverse it (classic Dijkstra). But it also has to change weights depending on situations like "country B has an embargo on country A so any item originating from country A can't pass through B, there is a flood at region XYZ so no land transport is possible" etc. Also that database is expected to cache results. I expect no more than 1000 vertices but many edges. Thanks in advance, and apologies in advance if the questions are a bit ambiguous. PS: I added ArangoDB to the title but to be honest I haven't had much chance to take a look. Late edit as of 18-Apr-2016: After evaluating responses to my questions and development strategies, I decided to use ArangoDB as their roadmap is more promising for me, as they are apparently not trying to add tons of half-baked hype features.
Disclaimer: I am the author and owner of OrientDB. As a developer, in general, I don't like companies that hide costs and let you play with their technology for a while and then, as soon as you're tied to it, start asking for money. Actually, once you have invested months developing an application that uses a non-standard language or API, you're stuck: pay, or migrate the application at huge cost. You know, OrientDB is FREE for any usage, even commercial. Furthermore, OrientDB supports standards like SQL (with extensions) and the main Java API is TinkerPop Blueprints, the "JDBC" standard for graph databases. Furthermore, OrientDB also supports Gremlin. The OrientDB project is growing every day with new contributors and users. The Community Group (a free channel to ask for support) is the most active community in the GraphDB market. If you have doubts about which GraphDB to use, my suggestion is to get what is closest to your needs, but then use standards as much as you can. That way an eventual switch would have a low impact.
ArangoDB
26,704,134
49
I am looking to dip my hands into the world of Multi-Model DBMS, I have no particular use cases, just want to start learning. I find that there are two prominent ones - OrientDB vs ArangoDB, but was unable to find any meaningful comparison, unopinionated between them. Can someone shed some light on the difference in features between the two, and any caveats in using one over the other? If I learn one would I be able to easily transition to the other? (I tagged FoundationDB as well, but it is proprietary and I probably won't consider it) This question asks for a general comparison between OrientDB vs ArangoDB for someone looking to learn about Multi-model DBMS, and not an opinionated answer about which is better.
Disclaimer: I would no longer recommend OrientDB, see my comments below. I can provide a slightly less biased opinion, having used both ArangoDB and OrientDB. It's still biased as I'm the author of OrientDB's node.js driver - oriento but I don't have a vested interest in either company or product, I've just necessarily used OrientDB more. ArangoDB and OrientDB are both targeting a similar market and have a lot of similarities: Both are multi-model, you can use them to store documents, graphs and simple key / values. Both have support for Gremlin, but it's firmly a second class citizen compared to their own preferred query languages. Both support server-side "stored procedures" in JavaScript. In both systems this comes via a slightly less than idiomatic JavaScript API, although ArangoDB's is a lot better. This is getting fixed in a forthcoming version of OrientDB. Both offer REST APIs, both aim to be usable as an "API Server" via JavaScript request handlers. This is a lot more practical in ArangoDB than OrientDB. Both are distributed under a permissive license. Both are ACID and have transaction support, but in both the transactions are server-side operations - they're more like atomic batches of commands rather than the kinds of transactions you might be used to in a traditional RDBMS. However, there are a lot of differences: ArangoDB has no concept of "links", which are a very useful feature in OrientDB. They allow unidirectional relationships (just like a hyperlink on the web), without the overhead of edges. ArangoDB is written in C++ (and JavaScript), whereas OrientDB is written in Java. Both have their advantages: Being written in C++ means ArangoDB uses V8, the same high performance JavaScript engine that powers node.js and Google Chrome. Whereas being written in Java means OrientDB uses Nashorn, which is still fast but not the fastest. This means that ArangoDB can offer a greater level of compatibility with the node.js ecosystem compared to OrientDB. Being written in Java means that OrientDB runs on more platforms, including e.g. Raspberry PI. It also means that OrientDB can leverage a lot of other technologies written in Java, e.g. OrientDB has superb full text / geospatial search support via Lucene, which is not available to ArangoDB. OrientDB uses a dialect of SQL as its query language, whereas ArangoDB uses its own custom language called AQL. In theory, AQL is better because it's designed explicitly for the problem, in practise though it feels quite similar to SQL but with different keywords, and is yet another language to learn while OrientDB's implementation feels a lot more comfortable if you're used to SQL. SQL is declarative whereas AQL is imperative - YMMV here. ArangoDB is a "mostly-memory" database, it works best when most of your data fits in RAM. This may or may not be suitable for your needs. OrientDB doesn't have this restriction (but also loves RAM). OrientDB is fully object oriented - it supports classes with properties and inheritance. This is exceptionally useful because it means that your database structure can map 1-1 to your application structure, with no need for ugly hacks like ActiveRecord. ArangoDB supports something fairly similar via models in Foxx, but it's more like an optional addon rather than a core part of how the database works. ArangoDB offers a lot of flexibility via Foxx, but it has not been designed by people with strong server-side JS backgrounds and reinvents the wheel a lot of the time. 
Rather than leveraging frameworks like express for their request handling, they created their own clone of Sinatra, which of course makes it almost the same as express (express is also a Sinatra clone), but subtly different, and means that none of express's middleware or plugins can be reused. Similarly, they embed V8, but not libuv, which means they do not offer the same non blocking APIs as node.js and therefore users cannot be sure about whether a given npm module will work there. This means that non trivial applications cannot use ArangoDB as a replacement for the backend, which negates a lot of the potential usefulness of Foxx. OrientDB supports first class property level and database level indices. You can query and insert into specific indexes directly for maximum efficiency. I've not seen support for this in ArangoDB. OrientDB is the more established option, with many high profile users. ArangoDB is newer, less well known, but growing fast. ArangoDB's documentation is excellent, and they offer official drivers for many different programming languages. OrientDB's documentation is not quite as good, and while there are drivers for most platforms, they're community powered and therefore not always kept up to date with bleeding edge OrientDB features. If you're using Java (or a Java bridge), you can embed OrientDB directly within your application, as a library. This use case is not possible in ArangoDB. OrientDB has the concept of users and roles, as well as Record Level Security. This may be a killer feature for you, it is for me. It also supports token based authentication, so it's possible to use OrientDB as your primary means of authorizing/authenticating users. OrientDB also has LDAP integration. In contrast, ArangoDB support only a very simple auth option. Both systems have their own advantages, so choosing between them comes down to your own situation: If you're building a small application, and you're a web developer optimizing for developer productivity, it will probably be easier to get up and running quickly with ArangoDB. If you're building a larger application, which could potentially store many gigabytes or terabytes of data, or have many thousands of concurrent users, or have "enterprise" use cases, or need fine grained security controls, OrientDB is the one for you. If you're storing RDF or similarly structured linked data, choose OrientDB. If you're using Java, just choose OrientDB. Note: This is (my opinion of) the state of play today, things change quickly and I would not underestimate the ruthless efficiency of the awesome team behind ArangoDB, I just think that it's not quite there yet :) Charles Pick (codemix.com)
ArangoDB
28,553,942
36
I am trying to understand what the limits of Arangodb are and what the ideal setup is. From what I have understood arango stores all the collection data in the virtual memory and ideally you want this to fit in the RAM. If the collection grows and cannot fit in the RAM it will be swapped to disk. So my first question. If my db grows will I need to adjust the swap partition/file to accommodate the db? Since arango also syncs the data to disk does this mean that the data will always be located in the RAM and disk? So if I have a db that's 1.5GB and my RAM is 1GB I will need to at least have 0.5GB of swap disk and 1.5GB of regular disk space? I am a bit confused how arango uses the virtual memory. Right now I have 7 collections that are practically empty. I have 1GB of RAM and 1GB of swap disk. The admin reports that arango is using 4.5GB of virtual memory. How is this possible if the swap disk is 1GB? It's currently using 80MB of RAM. Shouldn't this be 224MB if the journal size is 32MB for each collection? What is the recommendation on the journal size vs collection size? Can this be dynamically adjusted as the collection grows? What kind of performance is expected if the swap disk is used a lot when the disk is an SSD? If the swap disk is used a lot would the performance be similar to using a more traditional db such as mysql?
ArangoDB stores all data in memory-mapped files. Each collection can have 0 to n datafiles, with a default filesize of 32 MB each (note that this filesize can be adjusted globally or on a per-collection level). An empty collection (that never had any data) will not have a datafile. The first write to a collection will create the datafile, and whenever a datafile is full, a new one will be created automatically. Collections allocate datafiles in chunks of 32 MB by default. If you have many small collections this might waste some memory. If you have few but big collections, the potential waste (free space at the end of a datafile) probably doesn't matter too much. Whenever any ArangoDB operation reads data from or writes data to a memory-mapped datafile, the operating system will first translate the offset into the file into a page number. This is because each datafile is implicitly split into pages of a specific size. How big a page is is platform-dependent, but let's assume pages are 4 KB in size. So a datafile with a default filesize will have 8192 pages. After the OS has translated the offset into the file into a page number, it will make sure the data of the requested page is present in physical RAM. If the page is not yet in physical RAM, the operating system will issue a page fault to trigger loading of the requested page from disk or swap into physical RAM. This will eventually make the complete page available in RAM, and any reads or writes to the page's data may occur after that. All of this is done by the operating system's virtual memory manager. The operating system is free to map as many pages from a datafile into RAM as it thinks is good. For example, when a memory-mapped file is accessed sequentially, the operating system will likely be clever and read-ahead many pages, so they are already in physical RAM when actually accessed. The OS is also free to swap out some or all pages of a datafile. It will likely swap out pages if there is not enough physical RAM available to keep all pages from all datafiles in RAM at the same time. It may also swap out pages that haven't been used for a while, to make RAM available for other operations. It will likely use some LRU algorithm for this. How the virtual memory manager of an OS behaves exactly is wildly different across platforms and implementations. Most systems also allow configuring the VM subsystem. For example, here are some parameters for Linux's VM subsystem. It is therefore hard to tell how much physical memory ArangoDB will actually use for a given number of collections and their datafiles. If the collections aren't accessed at all, having the datafiles memory-mapped might use almost no RAM as the OS has probably swapped the collections out fully or at least partially. If the collections are heavily in use, the OS will likely have their datafiles fully mapped into RAM. But in both cases the memory counts as memory-mapped. This is why you can have a much higher virtual memory usage than you have physical RAM. As mentioned before, the OS has to do a lot of work when accessing pages that are not in RAM, and you want to avoid this if possible. If the total size of your frequently used collections exceeds the size of the physical RAM, the OS has no alternative but to swap pages out and in a lot when you access these collections. Using an SSD for the swap will likely be better than using a spinning HDD, but is still far slower than RAM access.
Long story short: the data of your active collections (datafiles plus indexes) should fit into physical RAM if possible, or you will see a lot of disk activity. Apart from that, ArangoDB does not only allocate virtual memory for the collection datafiles, but it also starts a few V8 threads (V8 is the JavaScript engine in ArangoDB) that also use virtual memory. This virtual memory is not file-backed. In an empty ArangoDB V8 accounts for most of the virtual memory usage. For example, on my 64 bit computer, the V8 threads consume about 5 GB of virtual memory (but ArangoDB in total only uses 140 MB RAM), whereas on my 32 bit computer with less RAM, the V8 threads use about 600 - 700 MB virtual memory. In your case, with the 4.5 GB VM usage, I suspect V8 is the reason, too. The virtual memory usage for the V8 threads obviously correlates with the number of V8 threads started. For example, increasing the value of the startup parameter --server.threads will start more threads and use more virtual memory for V8, and lowering the value will start less threads and use less virtual memory.
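As a small, hedged illustration of the per-collection datafile (journal) size discussed at the start of this answer, in arangosh (for the 2.x series discussed here) the size can be set when creating a collection, and figures() shows how much space an existing collection currently occupies; the collection name and size below are just examples:

    // create a collection whose datafiles are 4 MB instead of the default 32 MB
    db._create("smallCollection", { journalSize: 4 * 1024 * 1024 });

    // inspect datafile / memory usage of an existing collection
    db.smallCollection.figures();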
ArangoDB
24,380,071
19
I'm trying to use ArangoDB to get a list of friends-of-friends. Not just a basic friends-of-friends list, I also want to know how many friends the user and the friend-of-a-friend have in common and sort the result. After several attempts at (re)writing the best performing AQL query, this is what I ended up with: LET friends = ( FOR f IN GRAPH_NEIGHBORS('graph', @user, {"direction": "any", "includeData": true, "edgeExamples": { name: "FRIENDS_WITH"}}) RETURN f._id ) LET foafs = (FOR friend IN friends FOR foaf in GRAPH_NEIGHBORS('graph', friend, {"direction": "any", "includeData": true, "edgeExamples": { name: "FRIENDS_WITH"}}) FILTER foaf._id != @user AND foaf._id NOT IN friends COLLECT foaf_result = foaf WITH COUNT INTO common_friend_count RETURN { user: foaf_result, common_friend_count: common_friend_count } ) FOR foaf IN foafs SORT foaf.common_friend_count DESC RETURN foaf Unfortunately, performance is not as good as I would've liked. Compared to the Neo4j versions of the same query(and data), AQL seems quite a bit slower (5-10x). What I'd like to know is... How can I improve our query to make it perform better?
I am one of the core developers of ArangoDB and tried to optimize your query. As I do not have your dataset I can only talk about my test dataset and would be happy to hear if you can validate my results. First of all I am running on ArangoDB 2.7 but in this particular case I do not expect a major performance difference to 2.6. In my dataset I could execute your query as it is in ~7sec. First fix: In your friends statement you use includeData: true and only return the _id. With includeData: false GRAPH_NEIGHBORS directly returns the _id and we can also get rid of the subquery here LET friends = GRAPH_NEIGHBORS('graph', @user, {"direction": "any", "edgeExamples": { name: "FRIENDS_WITH" }}) This got it down to ~ 1.1 sec on my machine. So I expect that this will be close to the performance of Neo4J. Why does this have a high impact? Internally we first find the _id value without actually loading the document's JSON. In your query you do not need any of this data, so we can safely continue with not opening it. But now for the real improvement Your query goes the "logical" way and first gets the user's neighbors, then finds their neighbors, counts how often a foaf is found and sorts it. This has to build up the complete foaf network in memory and sort it as a whole. You can also do it in a different way: 1. Find all friends of user (only _ids) 2. Find all foaf (complete document) 3. For each foaf find all foaf_friends (only _ids) 4. Find the intersection of friends and foaf_friends and COUNT them This query would look like this: LET fids = GRAPH_NEIGHBORS("graph", @user, { "direction":"any", "edgeExamples": { "name": "FRIENDS_WITH" } } ) FOR foaf IN GRAPH_NEIGHBORS("graph", @user, { "minDepth": 2, "maxDepth": 2, "direction": "any", "includeData": true, "edgeExamples": { "name": "FRIENDS_WITH" } } ) LET commonIds = GRAPH_NEIGHBORS("graph", foaf._id, { "direction": "any", "edgeExamples": { "name": "FRIENDS_WITH" } } ) LET common_friend_count = LENGTH(INTERSECTION(fids, commonIds)) SORT common_friend_count DESC RETURN {user: foaf, common_friend_count: common_friend_count} Which in my test graph was executed in ~ 0.024 sec So this gave me a factor of 250 faster execution time and I would expect this to be faster than your current query in Neo4j, but as I do not have your dataset I cannot verify it; it would be good if you could do it and tell me. One last thing With the edgeExamples: {name : "FRIENDS_WITH" } it is the same as with includeData: in this case we have to find the real edge and look into it. This could be avoided if you store your edges in separate collections based on their name. And then remove the edgeExamples as well. This will further increase the performance (especially if there are a lot of edges). Future Stay tuned for our next release, we are right now adding some more functionality to AQL which will make your case much easier to query and should give another performance boost.
ArangoDB
33,279,811
19
I want to build a social network. (E.g. Persons have other persons as friends) and I guess a graph database would do the trick better than a classic database. I would like to store attributes on the edges and on the nodes. They can be json, but I do not care if the DB understands JSON. ArangoDB can also store documents and Neo4J is "only" a graph Database. I would like to have an user node an to each person 2 eg. Users -[username]-> person Users -[ID]-> person And there is a need that there is an index on the edges. I do not want a different database, so it would be nice to store an image (byte array) in the database, maybe even different sizes for each image / video whatever. Also posts and such should be stored in the database. What I got is that Neo4j better supports an manufacture independent query language, but I guess it is easier and better to learn the manufacturer standard. Any recommendations on which database management system is better suited? I will be writing the code in Java (and some Scala).
Both ArangoDB and Neo4j are capable of doing the job you have in mind. Both projects have amazing documentation and getting answers for either of them is easy. Both can be used from Java (though Neo4j can be embedded). One thing that might help your decision making process is recognizing that many NoSQL databases solve a much narrower problem than people appreciate. Sarah Mei wrote an epic blog post about MongoDB, using an example with some data about TV shows. From the summary: MongoDB’s ideal use case is even narrower than our television data. The only thing it’s good at is storing arbitrary pieces of JSON. I believe that Neo4j solves a similarly narrow problem, as evidenced by how common it is to use Neo4j alongside some other data store. I don't know that storing picture or video data is a great idea in either ArangoDB or Neo4j. I would look to store it on some other server (like S3) and save the url to that file in Neo4j/Arango. While it's true that it is possible to create queries that only a graph database can answer, the performance of graph database on any given query varies wildly and can give you some pretty surprising results. For instance, here is a paper from the International Journal of Computer Science and Information Technologies doing a comparison of Neo4j vs MySQL, Vertica and VoltDB with queries you would assume Neo4j would be amazing at: The idea is that a "social network" does not automatically imply the superiority, or even the use of a graph database (especially since GraphQL and Falcor were released). To address your question about query languages. There is no standard language for graph databases. AQL is a query language that provides a unified interface for working with key/value, document and graph data. Cypher is a graph query language. Badwolf Query Language is a SPARQL inspired language for temporal graphs. These languages exist because they tackle different problems. The databases that support them also tackle different problems. Neo4j has an example of "polyglot persistence" on their site: I think that is the problem that ArangoDB and AQL is out to solve, the hypothesis being that it's possible to solve that without being worse than specialists like Neo4j. So far it looks like they might be right.
ArangoDB
35,118,458
14
Is it possible to link documents from different collections in ArangoDB as it is in OrientDB? In OrientDB you can create a field of type LINK and specify the type linked. That creates a relation between both documents. Do I have to use edge collections to do this in ArangoDB? I'm trying to define a main collection and a secondary collection with extra information to supplement the main one. I don't want to have all the data in the main collection as this is shared between other entities. Thanks in advance.
There are actually two options: Using Joins You can define an attribute on the main document containing information that identifies the sub-document (e.g. by its _key) and then use AQL to join the two documents in your query: FOR x IN maindocuments FILTER x.whatever < 42 FOR y in secondarydocuments FILTER x.sub = y._key RETURN MERGE(x,y) Using Edges You can define an edge collection holding all the "relations" between your documents. The edge documents can also optionally additional information on the edges themselves. FOR x in maindocuments LET n = NEIGHBORS("maindocuments", "edgecollection", x._id, "any"); RETURN MERGE(x, n[0].vertex); However there is no such thing like a foreign key constraint in ArangoDB. You can refer to non-existing documents in your edges or delete the sub-document without the main document being informed. The benefit of the second approach is that you can use an arbitrary number of edges between these documents and even decide on 0, 1 or more during runtime of your application without any modification. With the first approach you have to decide that at the beginning by making the attribute a single value or a list of values.
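For the edge-collection approach, a minimal AQL sketch of creating one such relation (the document keys are made up; the collection names follow the examples above):

    INSERT {
      _from: "maindocuments/12345",
      _to: "secondarydocuments/67890",
      note: "optional data stored on the relation itself"
    } INTO edgecollection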
ArangoDB
26,589,705
13
I try to connect to ArangoDB which located in another server from my PC but seems unsuccessful. I then tried to access it by using the Web UI provided by typing the server ip http://x.x.x.x:8529 but failed too. I tried my luck on the localhost ArangoDB and replace it with my own PC ip address and it doesn't work too. It only works when the ip name is 127.0.0.1 or the name is localhost. May I know how to access arangoDB. FYI, I have tried the approach here. Remote javascript interaction with arangodb but cannot get through. Appreciate if anyone can help. Thanks.
The server by default only opens its ports on 127.0.0.1 and without any authentication. You can edit the config file "arangod.conf" to change this. Change the line endpoint = tcp://127.0.0.1:8529 to endpoint = tcp://0.0.0.0:8529 In order to enable authentication you can change disable-authentication = yes to disable-authentication = no
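After editing arangod.conf, restart the service and check which address the server is listening on; a rough sketch (the service name can differ between installations):

    sudo service arangodb restart
    sudo netstat -tlnp | grep 8529    # should now show 0.0.0.0:8529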
ArangoDB
28,081,017
13
I'm building an app based around a d3 force-directed graph with ArangoDB on the backend, and I want to be able to load node and link data dynamically from Arango as efficiently as possible. I'm not an expert in d3, but in general the force layout seems to want the its data as an array of nodes and an array of links that have the actual node objects as their sources and targets, like so: var nodes = [ {id: 0, reflexive: false}, {id: 1, reflexive: true }, {id: 2, reflexive: false} ], links = [ {source: nodes[0], target: nodes[1], left: false, right: true }, {source: nodes[1], target: nodes[2], left: false, right: true } ]; Currently I am using the following AQL query to get neighboring nodes, but it is quite cumbersome. Part of the difficulty is that I want to include edge information for nodes even when those edges are not traversed (in order to display the number of links a node has before loading those links from the database). LET docId = "ExampleDocClass/1234567" // get data for all the edges LET es = GRAPH_EDGES('EdgeClass',docId,{direction:'any',maxDepth:1,includeData:true}) // create an array of all the neighbor nodes LET vArray = ( FOR v IN GRAPH_TRAVERSAL('EdgeClass',docId[0],'any',{ maxDepth:1}) FOR v1 IN v RETURN v1.vertex ) // using node array, return inbound and outbound for each node LET vs = ( FOR v IN vArray // inbound and outbound are separate queries because I couldn't figure out // how to get Arango to differentiate inbout and outbound in the query results LET oe = (FOR oe1 IN GRAPH_EDGES('EdgeClass',v,{direction:'outbound',maxDepth:1,includeData:true}) RETURN oe1._to) LET ie = (FOR ie1 IN GRAPH_EDGES('EdgeClass',v,{direction:'inbound',maxDepth:1,includeData:true}) RETURN ie1._from) RETURN {'vertexData': v, 'outEdges': oe, 'inEdges': ie} ) RETURN {'edges':es,'vertices':vs} The end output looks like this: http://pastebin.com/raw.php?i=B7uzaWxs ...which can be read almost directly into d3 (I just have to deduplicate a bit). My graph nodes have a large amount of links, so performance is important (both in terms of load on the server and client, and file size for communication between the two). I am also planning on creating a variety of commands to interact with the graph aside from simply expanding neighboring nodes. Is there a way to better structure this AQL query (e.g. by avoiding four separate graph queries) or avoid AQL altogether using arangojs functions or a FOXX app, while still structuring the response in the format I need for d3 (including link data with each node)?
sorry for the late reply, we were busy building v2.8 ;) I would suggest to do as many things as possible on the database side, as copying and serializing/deserializing JSON over the network is typically expensive, so transferring as little data as possible should be a good aim. First of all i have used your query and executed it on a sample dataset i created (~ 800 vertices and 800 edges are hit in my dataset) As a baseline i used the execution time of your query which in my case was ~5.0s So i tried to create the exact same result as you need in AQL only. I have found some improvements in your query: 1. GRAPH_NEIGHBORS is a bit faster than GRAPH_EDGES. 2. If possible avoid {includeData: true} if you do not need the data Especially if you need to/from vertices._id only GRAPH_NEIGHBORS with {includeData: false} outperforms GRAPH_EDGES by an order of magnitude. 3. GRAPH_NEIGHBORS is deduplicated, GRAPH_EDGES is not. Which in your case seems to be desired. 3. You can get rid of a couple of subqueries there. So here is the pure AQL query i could come up with: LET docId = "ExampleDocClass/1234567" LET edges = GRAPH_EDGES('EdgeClass',docId,{direction:'any',maxDepth:1,includeData:true}) LET verticesTmp = (FOR v IN GRAPH_NEIGHBORS('EdgeClass', docId, {direction: 'any', maxDepth: 1, includeData: true}) RETURN { vertexData: v, outEdges: GRAPH_NEIGHBORS('EdgeClass', v, {direction: 'outbound', maxDepth: 1, includeData: false}), inEdges: GRAPH_NEIGHBORS('EdgeClass', v, {direction: 'inbound', maxDepth: 1, includeData: false}) }) LET vertices = PUSH(verticesTmp, { vertexData: DOCUMENT(docId), outEdges: GRAPH_NEIGHBORS('EdgeClass', docId, {direction: 'outbound', maxDepth: 1, includeData: false}), inEdges: GRAPH_NEIGHBORS('EdgeClass', docId, {direction: 'inbound', maxDepth: 1, includeData: false}) }) RETURN { edges, vertices } This yields the same result format as your query and has the advantage that every vertex connected to docId is stored exactly once in vertices. Also docId itself is stored exactly once in vertices. No deduplication required on client side. But, in outEdges / inEdges of each vertices all connected vertices are also exactly once, i do not know if you need to know if there are multiple edges between vertices in this list as well. This query uses ~0.06s on my dataset. However if you put some more effort into it you could also consider to use a hand-crafted traversal inside a Foxx application. This is a bit more complicated but might be faster in your case, as you do less subqueries. The code for this could look like the following: var traversal = require("org/arangodb/graph/traversal"); var result = { edges: [], vertices: {} } var myVisitor = function (config, result, vertex, path, connected) { switch (path.edges.length) { case 0: if (! result.vertices.hasOwnProperty(vertex._id)) { // If we visit a vertex, we store it's data and prepare out/in result.vertices[vertex._id] = { vertexData: vertex, outEdges: [], inEdges: [] }; } // No further action break; case 1: if (! 
result.vertices.hasOwnProperty(vertex._id)) { // If we visit a vertex, we store it's data and prepare out/in result.vertices[vertex._id] = { vertexData: vertex, outEdges: [], inEdges: [] }; } // First Depth, we need EdgeData var e = path.edges[0]; result.edges.push(e); // We fill from / to for both vertices result.vertices[e._from].outEdges.push(e._to); result.vertices[e._to].inEdges.push(e._from); break; case 2: // Second Depth, we do not need EdgeData var e = path.edges[1]; // We fill from / to for all vertices that exist if (result.vertices.hasOwnProperty(e._from)) { result.vertices[e._from].outEdges.push(e._to); } if (result.vertices.hasOwnProperty(e._to)) { result.vertices[e._to].inEdges.push(e._from); } break; } }; var config = { datasource: traversal.generalGraphDatasourceFactory("EdgeClass"), strategy: "depthfirst", order: "preorder", visitor: myVisitor, expander: traversal.anyExpander, minDepth: 0, maxDepth: 2 }; var traverser = new traversal.Traverser(config); traverser.traverse(result, {_id: "ExampleDocClass/1234567"}); return { edges: result.edges, vertices: Object.keys(result.vertices).map(function (key) { return result.vertices[key]; }) }; The idea of this traversal is to visit all vertices from the start vertex to up to two edges away. All vertices in 0 - 1 depth will be added with data into the vertices object. All edges originating from the start vertex will be added with data into the edges list. All vertices in depth 2 will only set the outEdges / inEdges in the result. This has the advantage that, vertices is deduplicated. and outEdges/inEdges contain all connected vertices multiple times, if there are multiple edges between them. This traversal executes on my dataset in ~0.025s so it is twice as fast as the AQL only solution. hope this still helps ;)
ArangoDB
33,855,799
12
I am trying to put together a unit test setup with ArangoDB. For that I need to be able to reset the test database around every test. I know we can directly delete a database from the REST API, but the documentation mentions that creation and deletion can "take a while". Would that be the recommended way to do this kind of setup, or is there an AQL statement to do something similar?
After some struggling with similar need I have found this solution: for (let col of db._collections()) { if (!col.properties().isSystem) { db._drop(col._name); } }
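A small usage sketch (not from the answer): wrapping that loop in a helper a test suite can call before each test. In arangosh and in Foxx test contexts `db` is already available; `beforeEach` stands in for whatever hook your test runner provides.

    // drop every non-system collection so each test starts from a clean database
    function resetDatabase(db) {
      for (let col of db._collections()) {
        if (!col.properties().isSystem) {
          db._drop(col._name);
        }
      }
    }

    // e.g. in a mocha-style suite:
    // beforeEach(function () { resetDatabase(require("@arangodb").db); });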
ArangoDB
34,841,879
12
I'm root and I forgot my password. What can I do now? I tried to reinstall ArangoDB and remove all databases, but after the new installation the old password still exists.
Stop the service and start the server without authentication:

    service arangodb3 stop
    /usr/sbin/arangod --server.authentication false

and then

    require("@arangodb/users").replace("root", "my-changed-password");
    exit

followed by

    service arangodb3 restart // **VERY IMPORTANT STEP!!!**

If you don't restart the server, everyone can have access to your database.
ArangoDB
38,555,962
12
In the context of ArangoDB, there are different database shells to query data: arangosh: The JavaScript based console AQL: Arangodb Query Language, see http://www.arangodb.org/2012/06/20/querying-a-nosql-database-the-elegant-way MRuby: Embedded Ruby Although I understand the use of JavaScript and MRuby, I am not sure why I would learn, and where I would use AQL. Is there any information on this? Is the idea to POST AQL directly to the database server?
AQL is ArangoDB's query language. It has a lot of ways to query, filter, sort, limit and modify the result that will be returned. It should be noted that AQL only reads data. (Update: This answer was targeting an older version of ArangoDB. Since version 2.2, the features have been expanded and data modification on the database is also possible with AQL. For more information on that, visit the documentation link at the end of the answer.) You cannot store data to the database with AQL. In contrast to AQL, the Javascript or MRuby can read and store data to the database. However their querying capabilities are very basic and limited, compared to the possibilities that open up with AQL. It is possible though to send AQL queries from javascript. Within the arangosh Javascript shell you would issue an AQL query like this: arangosh> db._query('FOR user IN example FILTER user.age > 30 RETURN user').toArray() [ { _id : "4538791/6308263", _rev : "6308263", age : 31, name : "Musterfrau" } ] You can find more info on AQL here: http://www.arangodb.org/manuals/current/Aql.html
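To the asker's last point, yes, you can also POST AQL directly to the server over its HTTP API. A minimal sketch, assuming a server on the default port 8529 and the example collection from above (add credentials if authentication is enabled; the exact response fields may vary between versions):

    curl -s -X POST http://localhost:8529/_api/cursor \
         --data '{"query": "FOR user IN example FILTER user.age > 30 RETURN user"}'
    # => {"result":[{"_id":"...","age":31,"name":"Musterfrau"}],"hasMore":false,...}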
ArangoDB
14,933,258
11
I am trying to update the attribute on a json document in an embedded array using AQL. How do i update the "addressline" for "home" type address using AQL below? User: { name: "test", address: [ {"addressline": "1234 superway", type:"home"}, {"addressline": "5678 superway", type:"work"} ] } AQL Attempt so far for u in users for a in u.address FILTER a.type='home' UPDATE u WITH {<What goes here to update addressline?>} in users Thank you for the help. Regards, Anjan
To do this we have to work with temporary variables. We will collect the sublist in there and alter it. We choose a simple boolean filter condition to make the query better comprehensible. First lets create a collection with a sample: database = db._create('complexCollection') database.save({ "topLevelAttribute" : "a", "subList" : [ { "attributeToAlter" : "oldValue", "filterByMe" : true }, { "attributeToAlter" : "moreOldValues", "filterByMe" : true }, { "attributeToAlter" : "unchangedValue", "filterByMe" : false } ] }) Heres the Query which keeps the subList on alteredList to update it later: FOR document in complexCollection LET alteredList = ( FOR element IN document.subList LET newItem = (! element.filterByMe ? element : MERGE(element, { attributeToAlter: "shiny New Value" })) RETURN newItem) UPDATE document WITH { subList: alteredList } IN complexCollection While the query as it is is now functional: db.complexCollection.toArray() [ { "_id" : "complexCollection/392671569467", "_key" : "392671569467", "_rev" : "392799430203", "topLevelAttribute" : "a", "subList" : [ { "filterByMe" : true, "attributeToAlter" : "shiny New Value" }, { "filterByMe" : true, "attributeToAlter" : "shiny New Value" }, { "filterByMe" : false, "attributeToAlter" : "unchangedValue" } ] } ] This query will probably be soonish a performance bottleneck, since it modifies all documents in the collection regardless whether the values change or not. Therefore we want to only UPDATE the documents if we really change their value. Therefore we employ a second FOR to test whether subList will be altered or not: FOR document in complexCollection LET willUpdateDocument = ( FOR element IN document.subList FILTER element.filterByMe LIMIT 1 RETURN 1) FILTER LENGTH(willUpdateDocument) > 0 LET alteredList = ( FOR element IN document.subList LET newItem = (! element.filterByMe ? element : MERGE(element, { attributeToAlter: "shiny New Value" })) RETURN newItem) UPDATE document WITH { subList: alteredList } IN complexCollection
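Applied to the users collection from the question, the same pattern would look roughly like this (the new addressline value is made up for illustration):

    FOR u IN users
      LET newAddress = (
        FOR a IN u.address
          RETURN a.type == 'home'
            ? MERGE(a, { addressline: '999 new street' })
            : a)
      UPDATE u WITH { address: newAddress } IN users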
ArangoDB
29,105,660
11
I'm learning more about ArangoDB and it's Foxx framework. But it's not clear to me what I gain by using that framework over building my own stand alone nodejs app for API/access control, logic, etc. What does Foxx offer that a regular nodejs app wouldn't?
Full disclosure: I'm an ArangoDB core maintainer and part of the Foxx team. I would recommend taking a look at the webinar I gave last year for a detailed overview of the differences between Foxx and Node and the advantages of using Foxx when you are using ArangoDB. I'll try to give a quick summary here. If you apply ideas like the Single Responsibility Principle to your architecture, your server-side code has two responsibilities: Backend: persist and query data using the backend data storage (i.e. ArangoDB or other databases). Frontend: transform the query results into a format acceptable for the client (e.g. HTML, JSON, XML, CSV, etc). In most conventional applications, these two responsibilities are fulfilled by the same monolithic application code base running in the same process. However the task of interacting with the data storage usually requires writing a lot of code that is specific to the database technology. You need to write queries (e.g. using SQL, AQL, ReQL or any other technology-specific language) or use database-specific drivers. Additionally in many non-trivial applications you need to interact with things like stored procedures which are also part of the "backend code" but live in the database. So in addition to having the application server do two different tasks (storage and rendering), half the code for one of the tasks ends up living somewhere else, often using an entirely different language. Foxx lets you solve this problem by allowing you to move the logic we identified as the "backend" of your server-side code into ArangoDB. Not only can you hide all the nitty gritty of query languages, edges and collections behind a more application-specific API, you also eliminate the network overhead often necessary to handle requests that would cause more than a single roundtrip to the database. For trivial applications this may mean that you can eliminate the Node server completely and access your Foxx API directly from the client. For more complicated scenarios you may want to use Node to build external micro services your Foxx service can tap into (e.g. to interface with external non-HTTP APIs). Or you just put your conventional Node app in front of ArangoDB and use Foxx to create an HTTP API that better represents your application's problem domain than the database's raw HTTP API. It's also worth keeping in mind that structurally Foxx services aren't entirely dissimilar from Node applications. You can use NPM dependencies and split your code up into modules and it can all live in version control and be deployed from zip bundles. If you're not convinced I'd suggest giving it a try by implementing a few of your most frequent queries as Foxx endpoints and then deciding whether you want to move more of your logic over or not.
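As a concrete illustration of that last suggestion, a Foxx endpoint wrapping a single query might look roughly like this. This is a sketch against the ArangoDB 3.x Foxx router API; the users collection and the route are made up for the example:

    'use strict';
    const createRouter = require('@arangodb/foxx/router');
    const db = require('@arangodb').db;
    const aql = require('@arangodb').aql;

    const router = createRouter();
    module.context.use(router);

    // GET /older-than/30 -> all users above the given age
    router.get('/older-than/:age', function (req, res) {
      const users = db._query(aql`
        FOR u IN users
          FILTER u.age > ${Number(req.pathParams.age)}
          RETURN { name: u.name, age: u.age }
      `).toArray();
      res.send(users);
    });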
ArangoDB
35,313,141
11
Currently my workflow with Emacs when I am coding in C or C++ involves three windows. The largest on the right contains the file I am working with. The left is split into two, the bottom being a shell which I use to type in compile or make commands, and the top is often some sort of documentation or README file that I want to consult while I am working. Now I know there are some pretty expert Emacs users out there, and I am curious what other Emacs functionally is useful if the intention is to use it as a complete IDE. Specifically, most IDEs usually fulfill these functions is some form or another: Source code editor Compiler Debugging Documentation Lookup Version Control OO features like class lookup and object inspector For a few of these, it's pretty obvious how Emacs can fit these functions, but what about the rest? Also, if a specific language must be focused on, I'd say it should be C++. Edit: One user pointed out that I should have been more specific when I said 'what about the rest'. Mostly I was curious about efficient version control, as well as documentation lookup. For example, in SLIME it is fairly easy to do a quick hyperspec lookup on a Lisp function. Is there a quick way to look up something in C++ STL documentation (if I forgot the exact syntax of hash_map, for example)?
You'll have to be specific as to what you mean by "the rest". Except for the object inspector (that I"m aware of), emacs does all the above quite easily: editor (obvious) compiler - just run M-x compile and enter your compile command. From there on, you can just M-x compile and use the default. Emacs will capture C/C++ compiler errors (works best with GCC) and help you navigate to lines with warnings or errors. Debugging - similarly, when you want to debug, type M-x gdb and it will create a gdb buffer with special bindings Documentation Lookup - emacs has excellent CScope bindings for code navigation. For other documentation: Emacs also has a manpage reader, and for everything else, there's the web and books. version control - there are lots of Emacs bindings for various VCS backends (CVS, SCCS, RCS, SVN, GIT all come to mind) Edit: I realize my answer about documentation lookup really pertained to code navigation. Here's some more to-the-point info: Looking up manpages, info manuals, and Elisp documentation from within emacs Looking up Python documentation from within Emacs. Google searching will no doubt reveal further examples. As the second link shows, looking up functions (and whatever) in other documentation can be done, even if not supported out of the box.
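Not from the answer, just a small configuration sketch tying together the compile/debug pieces mentioned above (the default compile command and the key choices are arbitrary):

    ;; offer "make -k" by default when running M-x compile
    (setq compile-command "make -k ")
    ;; re-run the last compilation with one key
    (global-set-key (kbd "<f5>") 'recompile)
    ;; start the gdb interface described above
    (global-set-key (kbd "C-c g") 'gdb)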
Slime
63,421
178
I do most of my development in Common Lisp, but there are some moments when I want to switch to Scheme (while reading Lisp in Small Pieces, when I want to play with continuations, or when I want to do some scripting in Gauche, for example). In such situations, my main source of discomfort is that I don't have Slime (yes, you may call me an addict). What is Scheme's closest counterpart to Slime? Specifically, I am most interested in: Emacs integration (this point is obvious ;)) Decent tab completion (ideally, c-w-c-c TAB should expand to call-with-current-continuation). It may be even symbol-table based (ie. it doesn't have to notice a function I defined in a let at once). Function argument hints in the minibuffer (if I have typed (map |) (cursor position is indicated by |)), I'd like to see (map predicate . lists) in the minibuffer Sending forms to the interpreter Integration with a debugger. I have ordered the features by descending importance. My Scheme implementations of choice are: MzScheme Ikarus Gauche Bigloo Chicken It would be great if it worked at least with them.
SLIME's contrib directory seems to have SWANK implementations for MIT Scheme and Kawa.
Slime
110,911
45
In clojure I have lines like this that define default values: (def *http-port* 8080) I've now decided to formalize these kinds of values into a configuration unit and I would like to undefine the value *http-port* so that I can find the locations that still refer to this value and change them to use the new value. I'm doing a refactoring in other words by moving the value to a different location. The way I've been doing this is to quit slime and try to restart the slime session. During maven's compile phase errors like these are picked up and I can find and fix one reference at a time. I then fix the error, wash rinse and repeat. This is obviously frustrating. How would I do this while connected to a slime session?
If I understand you correctly, ns-unmap should do what you want: user=> foo java.lang.Exception: Unable to resolve symbol: foo in this context (NO_SOURCE_FILE:1) user=> (def foo 1) #'user/foo user=> foo 1 user=> (ns-unmap (find-ns 'user) 'foo) nil user=> foo java.lang.Exception: Unable to resolve symbol: foo in this context (NO_SOURCE_FILE:1)
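Applied to the refactoring described in the question, a sketch (my.project.config is a hypothetical namespace name):

    ;; remove the stale definition from the current namespace ...
    (ns-unmap *ns* '*http-port*)
    ;; ... or from another namespace by name
    (ns-unmap 'my.project.config '*http-port*)
    ;; recompiling files with C-c C-k will now flag every remaining reference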
Slime
4,208,680
32
I've tried to migrate to Emacs several times for Clojure development, following a variety of blogposts, screencast and tutorials, but somewhere along the way something always went wrong - keybindings that didn't work, incompatible versions, etc, and I found myself scrambling back to Vim. But I know I want Paredit and SLIME. So, I'm going to try again, this time backed by the powerful Stack Overflow™ community. I hope that the answer to this question will remain up-to-date, and can serve as a reference for tentative converts like me. What I'd like is: The latest stable release of Clojure Aquamacs (if it's good enough for Rich Hickey, it's good enough for me), a recent version Clojure Mode SLIME/SWANK Paredit Anything else that's indispensible? Step-by-step instructions to install the above would be excellent - preferably in shell script format. I'd also like some hints on how to get started with the most common Clojure-related actions (including key-bindings), including links to documentation and cheatsheets.
These are the steps I took to set them up without using ELPA. Hope this helps. Get SLIME using MacPorts sudo port -v install slime Get paredit curl -O http://mumble.net/~campbell/emacs/paredit.el Get clojure & clojure-contrib Either using MacPorts sudo port -v install clojure clojure-contrib Or downloading directly curl -O http://build.clojure.org/snapshots/org/clojure/clojure/1.1.0-master-SNAPSHOT/clojure-1.1.0-master-20091202.150145-1.jar curl -O http://build.clojure.org/snapshots/org/clojure/clojure-contrib/1.1.0-master-SNAPSHOT/clojure-contrib-1.1.0-master-20091212.205045-1.jar Get clojure-mode and swank-clojure (Emacs side) git clone http://github.com/technomancy/clojure-mode.git git clone http://github.com/technomancy/swank-clojure.git Get swank-clojure (Clojure side) Either downloading pre-built jar file curl -O http://repo.technomancy.us/swank-clojure-1.1.0.jar Or building from source (assuming lein is installed) cd path/to/dir/swank-clojure lein jar Put clojure, clojure-contrib and swank-clojure .jar files in ~/.swank-clojure or ~/.clojure (the default places where swank-clojure.el searches for them). Add to either ~/.emacs or ~/Library/Preferences/Aquamacs Emacs/customization.el (change paths to match your own settings) (add-to-list 'load-path "/opt/local/share/emacs/site-lisp/slime/") (add-to-list 'load-path "/opt/local/share/emacs/site-lisp/slime/contrib/") ;; Change these paths to match your settings (add-to-list 'load-path "path/to/dir/clojure-mode/") (add-to-list 'load-path "path/to/dir/swank-clojure/") (add-to-list 'load-path "path/to/dir/paredit/") ;; Customize swank-clojure start-up to reflect possible classpath changes ;; M-x ielm `slime-lisp-implementations RET or see `swank-clojure.el' for more info (defadvice slime-read-interactive-args (before add-clojure) (require 'assoc) (aput 'slime-lisp-implementations 'clojure (list (swank-clojure-cmd) :init 'swank-clojure-init))) (require 'slime) (require 'paredit) (require 'clojure-mode) (require 'swank-clojure) (eval-after-load "slime" '(progn ;; "Extra" features (contrib) (slime-setup '(slime-repl slime-banner slime-highlight-edits slime-fuzzy)) (setq ;; Use UTF-8 coding slime-net-coding-system 'utf-8-unix ;; Use fuzzy completion (M-Tab) slime-complete-symbol-function 'slime-fuzzy-complete-symbol) ;; Use parentheses editting mode paredit (defun paredit-mode-enable () (paredit-mode 1)) (add-hook 'slime-mode-hook 'paredit-mode-enable) (add-hook 'slime-repl-mode-hook 'paredit-mode-enable))) ;; By default inputs and results have the same color ;; Customize result color to differentiate them ;; Look for `defface' in `slime-repl.el' if you want to further customize (custom-set-faces '(slime-repl-result-face ((t (:foreground "LightGreen"))))) (eval-after-load "swank-clojure" '(progn ;; Make REPL more friendly to Clojure (ELPA does not include this?) ;; The function is defined in swank-clojure.el but not used?!? (add-hook 'slime-repl-mode-hook 'swank-clojure-slime-repl-modify-syntax t) ;; Add classpath for Incanter (just an example) ;; The preferred way to set classpath is to use swank-clojure-project (add-to-list 'swank-clojure-classpath "path/to/incanter/modules/incanter-app/target/*")))
Slime
2,120,533
31
I have centered-cursor-mode activated globaly, like this: (require 'centered-cursor-mode) (global-centered-cursor-mode 1) It works fine, but there are some major modes where I would like to disable it automatically. For example slime-repl and shell. There is another question dealing with the same problem, but another minor mode. Unfortunately the answers only offer workarounds for this specific minor mode (global-smart-tab-mode), that doesn't work with centered-cursor-mode. I tried this hook, but it has no effect. The variable doesn't change. (eval-after-load "slime" (progn (add-hook 'slime-repl-mode-hook (lambda () (set (make-local-variable 'centered-cursor-mode) nil))) (slime-setup '(slime-repl slime-autodoc))))
Global minor modes created with the define-globalized-minor-mode1 macro are a bit tricky. The reason your code doesn't appear to do anything is that globalized modes utilise after-change-major-mode-hook to activate the buffer-local minor mode that they control; and that hook runs immediately after the major mode's own hooks4. Individual modes may implement custom ways of specifying some kind of black list or other method of preventing the mode from being enabled in certain circumstances, so in general it would be worth looking at the relevant M-x customize-group options for the package to see if such facilities exist. However, a nice clean general way of achieving this for ANY globalized minor mode is eluding me for the moment. It's a shame that the MODE-enable-in-buffers function defined by that macro doesn't do the same (with-current-buffer buf (if ,global-mode ...)) check which is performed by the global mode function. If it did, you could simply use slime-repl-mode-hook to make the global mode variable buffer-local and nil. A quick hack is to check2 what the turn-on argument is for the globalized minor mode definition (in this instance it's centered-cursor-mode itself3), and write some around advice to stop that from being evaluated for the modes in question. (defadvice centered-cursor-mode (around my-centered-cursor-mode-turn-on-maybe) (unless (memq major-mode (list 'slime-repl-mode 'shell-mode)) ad-do-it)) (ad-activate 'centered-cursor-mode) Something we can do (with an easy re-usable pattern) is immediately disable the buffer-local minor mode again after it has been enabled. An after-change-major-mode-hook function added with the APPEND argument to add-hook will reliably run after the globalized minor mode has acted, and so we can do things like: (add-hook 'term-mode-hook 'my-inhibit-global-linum-mode) (defun my-inhibit-global-linum-mode () "Counter-act `global-linum-mode'." (add-hook 'after-change-major-mode-hook (lambda () (linum-mode 0)) :append :local)) 1 or its alias define-global-minor-mode which I feel should be avoided, due to the potential for confusion with "global" minor modes created with define-minor-mode. "Globalized" minor modes, while still involving a global minor mode, work very differently in practice, so it is better to refer to them as "globalized" rather than "global". 2 C-hf define-globalized-minor-mode RET shows that turn-on is the third argument, and we check that in the mode definition with M-x find-function RET global-centered-cursor-mode RET. 3 with this approach, that fact is going to prevent you from ever enabling this minor mode with slime-repl-mode or shell-mode buffers, whereas a globalized minor mode with a separate turn-on function could still be invoked in its non-global form if you so desired. 4 https://stackoverflow.com/a/19295380/324105
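For completeness, the same re-usable pattern applied to the modes from the question (an untested sketch):

    (defun my-inhibit-global-centered-cursor-mode ()
      "Counter-act `global-centered-cursor-mode' in this buffer."
      (add-hook 'after-change-major-mode-hook
                (lambda () (centered-cursor-mode 0))
                :append :local))

    (add-hook 'slime-repl-mode-hook 'my-inhibit-global-centered-cursor-mode)
    (add-hook 'shell-mode-hook 'my-inhibit-global-centered-cursor-mode)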
Slime
6,837,511
31
Is there a way to stop a running operation in the SLIME REPL? The Clojure SLIME folks apparently have some way to do this, so how about in ordinary Common Lisp? Thanks /Erik
As expected, it turns out it was quite simple. To stop a running operation use the command slime-interrupt (C-c C-b).
Slime
2,899,320
21
I'm wondering what are some efficient ways to debug Common Lisp interactively using Emacs and SLIME. What I did before: As someone who learned C and Python using IDEs (VS and PyCharm), I am used to setting break points, adding watches, and do stepping. But when I started to use CL I found the debugging workflow fundamentally different. I did not find good ways to set break points, step though lines and see how variables change. The dumb method I used was adding "print" in code and run the code over and over again, which is very inefficient. I know we can "inspect" variables in SLIME but not sure how to do it interactively. What I found: I came across this video on the development of a Morse code translator recently, and it shows a complete process of how to debug interactively in SLIME, which has been very informative and enlightening. It is as if we could "talk" to the compiler. What I want: I searched online but found minimal tutorials demonstrating how an experienced Lisper actually develop and debug their programs. I am eager to learn such experiences. How to debug interactively? What are some good practices and tips? How to add breakpoint and step? What shortcuts / tools / workflow do you use most frequently / find most useful when debugging?
There is a number of things you can do: You can trace a function call (see TRACE and UNTRACE in Common Lisp or slime-toggle-trace-fdefinition*). This helps with recursive calls: you can see what you pass and what they return at each level. Standard thing: add (format t ...) in places. I guess, no need to comment. If the code breaks, you will get into debugger. From there you can examine the stack, see what was called and what arguments were passed. See @jkiiski link: it has really great information about it, including (break) form that will act as a breakpoint and get you to debugger. Spoiler alert: you can change the values in the inspector, you can change and re-compile your code and restart from (almost) any place in the stack. Last but not least: to debug macros, you will need slime-macroexpand-1 (wrapper over MACROEXPAND-1) and even better C-c M-e for macro stepper. One last advice: if you are to do a serious debugging, include (declaim (optimize (debug 3))) in your file, otherwise some CL implementations have a tendency to optimize away the calls on the stack or make arguments inaccessible.
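A tiny illustration (not from the answer) of points 1 and 3 together with the final advice:

    (declaim (optimize (debug 3)))   ; keep frames and locals inspectable

    (defun average (xs)
      (when (null xs)
        (break "average called with an empty list"))  ; acts as a breakpoint
      (/ (reduce #'+ xs) (length xs)))

    ;; at the REPL: (trace average), then (average '(1 2 3))
    ;; prints each call and its return value; (untrace average) turns it off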
Slime
37,754,935
20
I'm using Emacs, with CLISP and Slime, and want to be able to draw pictures on the screen. I'm specifically thinking about drawing graphs, but anything that would let me draw basic shapes and manipulate them would be able to get me started.
Doug is right; CAPI will work fine. Other things you can try: cltk: http://www.cliki.net/Lisp-Tk I know that Allegro has something for Windows programming also, but I've never tried it. What may also work is cells-gtk: http://common-lisp.net/project/cells-gtk/ Again, I can only tell you that it exists, but not how bad it is or if it even really works... Nor can I comment on the quality of http://www.cliki.net/GTK%20binding But that's mostly what is available. Corman Lisp probably has something to offer for Windows programming also. Anyway, the choices on Windows are relatively slim. You can probably have the most confidence in CAPI, which is used for the LispWorks IDE on Windows, Linux, MacOS X and on quite a few big Unices... Regards
Slime
450,538
17
Can I use Common Lisp and Clojure from within Emacs at the same time? I would like to have each Lisp REPL in its own buffer, and if I did this, how could I control which buffer sent its data to which Lisp?
Yes. In the documentation to Slime you will find slime-lisp-implementations. Here is how I have it defined in my .emacs: (setq slime-lisp-implementations '((cmucl ("/usr/local/bin/lisp") :coding-system iso-8859-1-unix) (sbcl ("/usr/local/bin/sbcl" "--core" "/Users/pinochle/bin/sbcl.core-with-swank") :init (lambda (port-file _) (format "(swank:start-server %S :coding-system \"utf-8-unix\")\n" port-file))) (clozure ("/Users/pinochle/bin/ccl")) (clojure ("/Users/pinochle/bin/clojure") :init swank-clojure-init))) You start up your lisps using M-- M-x Slime. It will ask you which Lisp to start up, and you use the name you defined in slime-lisp-implementations. In this example, I would use cmucl, sbcl, clozure or clojure. You can switch the "active" REPL using the command C-c C-x c. For more info, see the Slime Documentation on controlling multiple connections.
Slime
1,223,394
14
I want to use the functions in the clojure.contrib.trace namespace in slime at the REPL. How can I get slime to load them automatically? A related question, how can I add a specific namespace into a running repl? On the clojure.contrib API it describes usage like this: (ns my-namespace (:require clojure.contrib.trace)) But adding this to my code results in the file being unable to load with an "Unable to resolve symbol" error for any function from the trace namespace. I use leiningen 'lein swank' to start the ServerSocket and the project.clj file looks like this (defproject test-project "0.1.0" :description "Connect 4 Agent written in Clojure" :dependencies [[org.clojure/clojure "1.2.0-master-SNAPSHOT"] [org.clojure/clojure-contrib "1.2.0-SNAPSHOT"]] :dev-dependencies [[leiningen/lein-swank "1.2.0-SNAPSHOT"] [swank-clojure "1.2.0"]]) Everything seems up to date, i.e. 'lein deps' doesn't produce any changes. So what's up?
You're getting "Unable to resolve symbol" exceptions because :require doesn't pull in any Vars from the given namespace, it only makes the namespace itself available. Thus if you (:require foo.bar) in your ns form, you have to write foo.bar/quux to access the Var quux from the namespace foo.bar. You can also use (:require [foo.bar :as fb]) to be able to shorten that to fb/quux. A final possiblity is to write (:use foo.bar) instead; that makes all the Vars from foo.bar available in your namespace. Note that it is generally considered bad style to :use external libraries; it's probably ok within a single project, though. Re: automatically making stuff available at the REPL: The :require, :use and :refer clauses of ns forms have counterparts in the require, use and refer functions in clojure.core. There are also macros corresponding to :refer-clojure and :import. That means that in order to make clojure.contrib.trace available at the REPL you can do something like (require 'clojure.contrib.trace) or (require '[clojure.contrib.trace :as trace]). Note that because require is a function, you need to quote the library spec. (use and refer also take quoted lib specs; import and refer-clojure require no quoting.) The simplest way to have certain namespaces available every time you launch a Clojure REPL (including when you do it with SLIME) is to put the appropriate require calls in ~/.clojure/user.clj. See the Requiring all possible namespaces blog post by John Lawrence Aspden for a description of what you might put in user.clj to pull in all of contrib (something I don't do, personally, though I do have a (use 'clojure.contrib.repl-utils) in there).
Slime
2,854,618
14
My superficial understanding is that 'swank-clojure' makes 'M-x slime-connect' possible. I mean, it gives a connection to a clojure server something like 'lein swank'. Is my understanding correct? If not, what's the purpose of swank? Then, is there any 'swank-SOMETHING_ELSE' for other lisp like implementations? For example, swank-clisp? Do I need 'swank-clojure' for using SLIME/Clojure with 'M-x slime'? ADDED I found this link pretty useful.
SLIME and swank form a client server architecture to run and debug lisp programs. SLIME is the emacs frontend and swank is the backend. In between they create a network socket and communicate by sending across messages (S-expressions). In short it is just an RPC mechanism between emacs and the actual lisp backend. The fact that the slime and swank are separate, are connected over a network and communicate via rpc messages means that they can be anywhere. So, slime can connect to a remote host/port to swank. All other forms you see (lein swank etc etc) do the same. They start swank on a port allowing for a remote connection of slime. swank-clojure is the clojure port of swank. originally swank-clojure came with a helper elisp file called swank-clojure.el. The job of this file was to enable manual setup of swank parameters like the classpaths, jvm parameters etc. Since other tools like lein came along later, swank-clojure.el was deprecated. But it still lives on at: http://github.com/vu3rdd/swank-clojure-extra and provides the M-x swank-clojure-project which enables starting swank on a lein project. It should be noted that SLIME originated in (and is still being actively developed for) Common Lisp. Infact, the clojure port of swank only has a subset of the features enjoyed by the original SLIME/swank versions. SLIME exists for all major variants of Common Lisp. There is a partial port of it for Scheme48. There are some partial implementations available under the contrib directory. If you know that swank is already running on a port, use slime-connect. If you just want to use slime on a project, swank-clojure-project and lein swank seem to be the way to go.
Slime
3,550,971
14
This is a double question for you amazingly kind Stacked Overflow Wizards out there. How do I set emacs/slime/swank to use UTF-8 when talking with Clojure, or use UTF-8 at the command-line REPL? At the moment I cannot send any non-roman characters to swank-clojure, and using the command-line REPL garbles things. It's really easy to do regular expressions on latin text: (re-seq #"[\w]+" "It's really true that Japanese sentences don't need spaces?") But what if I had some japanese? I thought that this would work, but I can't test it: (re-seq #"[(?u)\w]+" "日本語 の 文章 に は スペース が 必要 ない って、 本当?") It gets harder if we have to use a dictionary to find word breaks, or to find a katakana-only word ourselves: (re-seq #"[アイウエオ-ン]" "日本語の文章にはスペースが必要ないって、本当?") Thanks!
Can't help with swank or Emacs, I'm afraid. I'm using Enclojure on NetBeans and it works well there. On matching: As Alex said, \w doesn't work for non-English characters, not even the extended Latin charsets for Western Europe: (re-seq #"\w+" "prøve") =>("pr" "ve") ; Norwegian (re-seq #"\w+" "mañana") => ("ma" "ana") ; Spanish (re-seq #"\w+" "große") => ("gro" "e") ; German (re-seq #"\w+" "plaît") => ("pla" "t") ; French The \w skips the extended chars. Using [(?u)\w]+ instead makes no difference, same with the Japanese. But see this regex reference: \p{L} matches any Unicode character in category Letter, so it actually works for Norwegian (re-seq #"\p{L}+" "prøve") => ("prøve") as well as for Japanese (at least I suppose so, I can't read it but it seems to be in the ballpark): (re-seq #"\p{L}+" "日本語 の 文章 に は スペース が 必要 ない って、 本当?") => ("日本語" "の" "文章" "に" "は" "スペース" "が" "必要" "ない" "って" "本当") There are lots of other options, like matching on combining diacritical marks and whatnot, check out the reference. Edit: More on Unicode in Java A quick reference to other points of potential interest when working with Unicode. Fortunately, Java generally does a very good job of reading and writing text in the correct encodings for the location and platform, but occasionally you need to override it. This is all Java, most of this stuff does not have a Clojure wrapper (at least not yet). java.nio.charset.Charset - represents a charset like US-ASCII, ISO-8859-1, UTF-8 java.io.InputStreamReader - lets you specify a charset to translate from bytes to strings when reading. There is a corresponding OutputStreamWriter. java.lang.String - lets you specify a charset when creating a String from an array of bytes. java.lang.Character - has methods for getting the Unicode category of a character and converting between Java chars and Unicode code points. java.util.regex.Pattern - specification of regexp patterns, including Unicode blocks and categories. Java characters/strings are UTF-16 internally. The char type (and its wrapper Character) is 16 bits, which is not enough to represent all of Unicode, so many non-Latin scripts need two chars to represent one symbol. When dealing with non-Latin Unicode it's often better to use code points rather than characters. A code point is one Unicode character/symbol represented as an int. The String and Character classes have methods for converting between Java chars and Unicode code points. unicode.org - the Unicode standard and code charts. I'm putting this here since I occasionally need this stuff, but not often enough to actually remember the details from one time to the next. Sort of a note to my future self, and it might be useful to others starting out with international languages and encodings as well.
Slime
3,101,279
12
I used Aquamacs so far, and I need to install and run Clojure using SLIME. I googled to get some way to use Clojure on SLIME of Aquamacs, but without success. Questions Is it possible to install Clojure on Aquamacs? Or, can you guess why Clojure on Aquamacs doesn't work? Is it normal that Emacs and Aquamacs can't share the same ELPA? Is it possible to use ELPA to install Conjure on Emacs/Aquamacs? I was told that one can use 'lein swank' to run as a server, do you know how to do that? Sequences that I tried (and half succeeded) I tried with Mac OS X Emacs, and by following the steps I could make it work. I mean, I could run Clojure with SLIME. Emacs for Mac OS X Step 1) Install ESK. Git clone and copy all the files into the .emacs.d directory Add the following code to .emacs and relaunch (when (load (expand-file-name "~/.emacs.d/package.el")) (package-initialize)) Step2) Install using ELPA M-x package-list-packages to select packages Install clojure-mode, clojure-test-mode slime, slime-repl swank-clojure M-x slime to install the clojure Add the following code to .emacs and relaunch ;; clojure mode (add-to-list 'load-path "/Users/smcho/.emacs.d/elpa/clojure-mode-1.7.1") (require 'clojure-mode-autoloads) (add-to-list 'load-path "/Users/smcho/.emacs.d/elpa/clojure-test-mode-1.4") (require 'clojure-test-mode-autoloads) ;; slime ;(setq inferior-lisp-program "/Users/smcho/bin/clojure") (add-to-list 'load-path "/Users/smcho/.emacs.d/elpa/slime-20100404") (require 'slime-autoloads) (add-to-list 'load-path "/Users/smcho/.emacs.d/elpa/slime-repl-20100404") (require 'slime-repl-autoloads) ;; swank-clojure (add-to-list 'load-path "/Users/smcho/.emacs.d/elpa/swank-clojure-1.1.0") (require 'slime-repl-autoloads) Aquamacs Now I could use Clojure on Emacs, I tried the same(or very similar) method to run Clojure on Aquamacs once more. Step 1) Install ESK for Aquamacs Copy the files to ~/Library/Preference/Aquamacs Emacs Modify "~/Library/Preferences/Aquamacs Emacs/Preferences.el" to add the following (setq kitfiles-dir (concat (file-name-directory (or (buffer-file-name) load-file-name)) "/aquamacs-emacs-starter-kit")) ; set up our various directories to load (add-to-list 'load-path kitfiles-dir) (require 'init) Step2) * Follow the same step as before to install all the (same) packages, but "M-x slime" gives me the following error message. "Symbol's function definition is void: define-slime-contrib" ELPA I tried to combine the packages from Emacs and Aquamacs, but they don't combine. I thought I could use the ELPA itself, not from the ESK to make it shared. The result was not good, as ELPA couldn't download the swank-conjure package. Success - Running Aquamacs/Clojure with 'lein swank'. Please refer to this.
Aquamacs most definitely works with Clojure, since the author of Clojure uses it. However, I use Emacs, and after you perform the steps above in the Emacs section, I recommend checking out labrepl, http://github.com/relevance/labrepl If you don't have leiningen, the link to get and install it is in the instructions of the labrepl readme file. I found it extremely helpful when first learning how to set up an environment for Clojure programming. You can take apart the project.clj file in labrepl and piece together how it works pretty easily. Not to mention the lessons and training in the built in web application that comes with labrepl. If you want to use lein swank instead: Make sure you have leiningen installed. In your project.clj dev dependencies you want to have an entry like this: [leiningen/lein-swank "1.1.0"] http://clojars.org/leiningen/lein-swank Then after you've done lein deps you should be able to run lein swank and then from within Emacs run M-x slime-connect and just press enter through the defaults. If you're going to go this route, here is the link directly to leiningen so you can skip the labrepl repository: http://github.com/technomancy/leiningen
Slime
3,261,714
12
In certain kinds of code it's relatively easy to cause an infinite loop without blowing the stack. When testing code of this nature using clojure-test, is there a way to abort the current running tests without restarting the swank server? Currently my workflow has involved $ lein swank Connect to swank with emacs using slime-connect, and switch to the the tests, execute with C-c C-,, tests run until infinite loop, then just return but one cpu is still churning away on the test. The only way to stop this I have found is to restart lein swank, but it seems like this would be a relatively common problem? Anyone have a better solution?
Yes, it is a common problem for programmers to write infinite loops in development :). And the answer is very simple. It's called "Interrupt Command" and it is C-c C-b Leiningen has nothing to do with this. This is SLIME/Swank/Clojure. When you evaluate code in Emacs you are spawning a new thread within Clojure. SLIME keeps reference to those threads and shows you how many are running in the Emacs modeline. If you're in a graphical environment you can click the modeline where it indicates your namespace and see lots of options. One option is "Interrupt Command" Eval (while true) and C-c C-b to get a dialog showing a java.lang.ThreadDeath error with probably just one option. You can type 0 or q to quit that thread, kill that error message buffer and return focus to your previous buffer.
Slime
5,113,403
12
I'm trying to write some python, and I'm used to the lispy way of doing things, a REPL in EMACS and the ability to send arbitrary code snippets to the REPL. I like this way of developing code, and python's built-in IDLE seems to do it pretty well. However I do like EMACS as an editor. What's the best thing analogous to SLIME for Python? So far: It seems that the trick is to open a python file, and then to use 'Start Interpreter' from the Python menu, after which you get an Inferior Python buffer. You can then use C-c C-c to load the whole buffer you're editing into the 'REPL', and use normal copy and paste to put snippets into the REPL. This works as far as it goes. Is there any way to say 'reevaluate the big thing that the cursor is in now and display the answer', or 'reevaluate the thing the cursor is just at the end of and display the answer', like M-C-x and C-x-e in SLIME? And it all seems to work better if you use the python-mode.el from Bozhidar's answer
There is ipython.el which you can use for some extended functionality over the vanilla python-mode. Ropemacs provides a few extra completion and refactoring related options that might help you. This is discussed here. However, I don't expect you're going to get anything close to SLIME.
Slime
5,316,175
12
When I type something wrong in DOS/Linux and it yells at me, I can push the up arrow and then modify my line - maybe it was missing a '-' or something. I just installed Lispbox, and the up arrow moves the cursor up the REPL history. How do I put the last line I entered back on the current line? Say I type + 3 2 but obviously I meant (+ 3 2). How do I get it to say "+ 3 2" so I can just push "Home", "(", "End", ")"? Or is there some MUCH easier M-x waaahFIXIT command for this?
Try (slime-repl-previous-input), which is bound to M-p by default. (Meta is normally the Alt key.) M-p / M-n is the standard way of going backwards / forwards through history in Emacs - it works in the minibuffer too.
Slime
7,870,770
12
I was trying to install SLIME. I downloaded the zipped package and according to the README file, I have to put this piece of code in my Emacs configuration file: (add-to-list 'load-path "~/hacking/lisp/slime/") ; your SLIME directory (setq inferior-lisp-program "/opt/sbcl/bin/sbcl") ; your Lisp system (require 'slime) (slime-setup) Setting the SLIME directory is straightforward, but what about the Lisp "system"? How do I find it?
Some Linuxes come with CMUCL preinstalled, but since you seem to want to use SBCL, you would need to install it. In a terminal, or in Emacs via M-x shell. If you are using a Debian-like distro, you can use apt-get or aptitude with the following: $ sudo apt-get install sbcl or $ sudo aptitude install sbcl On a RHEL-like distro: $ sudo yum install sbcl After SBCL is installed, you can set inferior-lisp-program to "sbcl". Also, I'd advise installing SLIME through quicklisp-slime-helper. You would need to install some Lisp you like (let it be SBCL for this purpose, as described above), then do this in the same shell (suppose you are on a Debian-like Linux): $ sudo apt-get install wget $ cd ~/Downloads $ wget http://beta.quicklisp.org/quicklisp.lisp $ sbcl --load ./quicklisp.lisp Wait until you see the Lisp shell prompt, then: * (quicklisp-quickstart:install) * (ql:add-to-init-file) * (ql:quickload "quicklisp-slime-helper") * (quit) Now you are back in the regular shell. Launch Emacs, if not open yet, and open your init file with C-x C-f ~/.emacs. Add the lines below to it (instead of what you posted above): (load (expand-file-name "~/quicklisp/slime-helper.el")) (setq inferior-lisp-program "sbcl") Or replace "sbcl" with the Lisp implementation you installed. Look into the Quicklisp documentation for more information. You will find that you will be using Quicklisp later anyway, so it's useful to get it all in one place from the start.
Slime
12,607,716
12
I'm using Emacs with clojure mode and slime connected to a swank server produced by running lein swank and would really love to be able to easily jump to function definitions within my project. Can I do this with out having to manually rebuild tags every time I change branches?
If you're using SLIME this can be done easily with M-. EDIT: When Clojure code is compiled, the location of definitions is stored. Note that this works best when you compile entire files. Jumping to a definition that you evaluated with C-x C-e doesn't work so well (though it does work for Common Lisp and SLIME).
Slime
2,374,246
11
When I start swank through Leiningen, it accepts the next SLIME connection and off I go. I would really like to have several Emacs instances connect to the same swank instance. Can I do this? Can I do this through Leiningen?
Well, you can start your first SLIME normally, then (require 'swank.swank) (or maybe it's required by default... not sure), do (swank.swank/start-repl port) with port replaced by some port number and you can connect a second instance of SLIME to that newly created REPL. I've done it just now, with one Emacs connecting to a REPL started with lein swank, (swank.swank/start-repl 4006) in the first Emacs, M-x slime-connect in the second Emacs (providing 4006 as the port number), then I could do this: ; first Emacs (def x 5) ; second Emacs x ; evaluates to 5 (def y 1234) ; first Emacs y ; evaluates to 1234 Cool, no? :-) Update: Oh, BTW -- (swank.swank/start-repl) starts the new REPL in the background and does not block the REPL you use to execute it. The return value is nil, so I'm not sure how to kill the new REPL... (Update 2: Removed something I'm no longer sure about.) Update 3: While the above method is perfectly general in that it makes it possible to connect an extra client regardless of how the original Swank instance has been started, it might be more convenient to start Swank with the command lein swank 4005 "localhost" :dont-close true The port and host name arguments must mentioned explicitly if :dont-close true is to be passed. 4005 and "localhost" are the default values. This will make it possible to disconnect from Swank and reconnect later, but also to connect a number of clients simultaneously. (I just noticed that this is possible while answering this new question on how to enable reconnections to Leiningen-started Swank; it suddenly occurred to me to check if :dont-close would also cause simultaneous connections to be accepted -- and it does.)
Slime
2,374,776
11
I can't use auto indentation function on emacs + slime + sbcl when I define my function and so on. My .emacs file configuration is this: (setq inferior-lisp-program "D:/emacs/sbcl_1.0.37/sbcl.exe" lisp-indent-function 'common-lisp-indent-function slime-complete-symbol-function 'slime-fuzzy-complete-symbol slime-startup-animation nil slime-enable-evaluate-in-emacs t slime-log-events t slime-outline-mode-in-events-buffer nil slime-repl-return-behaviour :send-only-if-after-complete slime-autodoc-use-multiline-p t slime-highlight-compiler-notes t) (add-to-list 'load-path "d:/emacs/site-lisp/slime") ; your SLIME directory (require 'slime) (slime-setup) Can someone help me?
The slime section in my .emacs: ;;; SLIME (setq inferior-lisp-program "/usr/bin/sbcl") (add-to-list 'load-path "/usr/share/emacs/site-lisp/slime/") (require 'slime) (require 'slime-autoloads) (slime-setup '(slime-fancy)) (global-set-key "\C-cs" 'slime-selector)
Slime
3,132,000
11
Is there a way to expand the current command at the Clojure repl like I'd be able to do in Common Lisp? For example say I have typed: Math/ I would like the tab key to expand to all the available variables and functions in that namespace. I'm using Clojure as inferior-lisp would like to know how to do this from the plain vanilla repl in Clojure, and through swank slime.
Another vote in favour of clojure-mode and slime under Emacs. In particular, if you set up auto-complete, then you can use my ac-slime package to get context-aware tab completion in a dropdown list. Here's a screencast showing it in action. And, further to technomancy's comment about hippie-expand, here's how to tie slime completion into hippie-expand. Update: as of 2012, nrepl, nrepl.el and ac-nrepl are replacing slime and ac-slime; same functionality, smaller and cleaner codebase. Update2: as of Oct 2013 nrepl.el is renamed to cider and it and ac-nrepl have moved to the clojure-emacs organisation on github. nrepl remains as the server component
Slime
4,289,480
11
During development I defined an 'initialize-instance :after' method which after a while was not needed anymore and actually gets in my way because inside it calls code that is not valid anymore. Since the unintern function does not have an argument for the qualifier, is there any way I can "unintern" the symbol-qualifier combination of a method so that I don't have to slime-restart-inferior-lisp and load the project again from the start?
You can use the standard functions find-method and remove-method to do it. Note that remove-method takes the generic function as its first argument, and find-method wants actual specializer objects: (remove-method #'frob (find-method #'frob '(:before) (mapcar #'find-class '(vehicle t)))) I find it's much easier to use the SLIME inspector. If your function is named frob, you can use M-x slime-inspect #'frob RET to see a list of all methods on frob and select individual methods for removal.
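For the asker's concrete case, the call would look roughly like this (my-class is a stand-in for whatever class the :after method specializes on):

    (remove-method #'initialize-instance
                   (find-method #'initialize-instance
                                '(:after)
                                (list (find-class 'my-class))))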
Slime
5,976,470
11
I've been dancing around LISP for decades, but now have decided to get serious. I'm going through the online version of Practical Common LISP. This is my setup: MacOSX 10.7.8 Xcode 4.5.2 SBCL 1.0.55.0-abb03f9 Emacs 24.2.1 (x86_64-apple-darwin, NS apple-appkit-1038.36) SLIME 1.6 I tried to follow the instructions listed in the link: http://emacs-sbcl-slime.blogspot.com/2010/11/sbcl-emacs-slime-macosx.html …but the problem is that on the MacOSX platform, nothing seems to be located where it should. SBCL was installed using its own script…it is working. I setup the SBCL_HOME env var as instructed. Emacs was installed by dmg from this link: http://emacs-sbcl-slime.blogspot.com/2010/11/sbcl-emacs-slime-macosx.html …and is running. SLIME, however (which was download via cvs to ˜/.emacs.d/slime) doesn't appear to be recognized. I can't get the "CL-USER>" prompt described by the author. Any help would be greatly appreciated!
Copy the entire directory of SLIME to emacs/site-lisp. Ensure your Lisp is accessible from the terminal: just type sbcl in Terminal and the Lisp interpreter should start. Then put into your .emacs file something like (setq inferior-lisp-program "sbcl") and it should work.
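Putting that together with the checkout location mentioned in the question, a minimal .emacs sketch would be:

    ;; point Emacs at the SLIME checkout (the question used ~/.emacs.d/slime)
    (add-to-list 'load-path "~/.emacs.d/slime")
    (require 'slime)
    ;; sbcl must be reachable from Emacs' exec-path
    (setq inferior-lisp-program "sbcl")
    ;; optional, but gives the usual REPL and contribs
    (slime-setup '(slime-fancy))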
Slime
13,822,504
11
I noticed how SLIME (lisp development package for Emacs) does not come with a frame-source-location function for CLISP, so you can't automagically jump to a source location when inside the debugger. Given that, I figured CLISP users must be using some other IDE (though I guess IDE is a little bit misleading here, maybe they're just using a different Emacs package). So what IDE/Emacs package are CLISP programmers using?
I think Emacs and SLIME are still what those people use.
Slime
339,662
10
I set up emacs for both clojure and common lisp, but I want also (slime-setup '(slime-fancy)) for common lisp. If I add that line to init.el, clojure won't work: it gives me repl, but it hangs after I run any code. My configuration For clojure: I set up clojure-mode, slime, slime-repl via ELPA I run $ lein swank in project directory Then M-x slime-connect to hack clojure For common lisp I place this after ELPA code in init.el: (add-to-list 'load-path "~/.elisp/slime") (require 'slime) (add-to-list 'slime-lisp-implementations '(sbcl ("/opt/local/bin/sbcl") :coding-system utf-8-unix)) ;; (slime-setup '(slime-fancy)) So if I uncomment the last line, clojure will be broken. But slime-fancy a very important meta package for hacking common lisp. Is there a way to set them both up to work without changing configuration and restarting when I need to switch languages? Update I found that slime-autodoc loaded with slime-fancy is the cause of hangs. (slime-setup '(slime-fancy)) (setq slime-use-autodoc-mode nil) This configuration lets run both common lisp and clojure SLIMEs. Even simultaneously. But without slime-autodoc. I also found I'm using the CVS version of SLIME since I manually do (add-to-list 'load-path "~/.elisp/slime") after ELPA code. That does not solve the problem. Maybe there is a version from some magic date which works with clojure? Here a guy says CVS version works for him: http://www.youtube.com/watch?v=lf_xI3fZdIg&feature=player_detailpage#t=221s
Here is a solution. (using hooks) That is ugly but quite convenient. (add-hook 'slime-connected-hook (lambda () (if (string= (slime-lisp-implementation-type) "Clojure") (setq slime-use-autodoc-mode nil) (setq slime-use-autodoc-mode t)) )) (add-hook 'slime-mode-hook (lambda () (if (eq major-mode 'clojure-mode) (slime-autodoc-mode 0) (slime-autodoc-mode 1)))) Update If the problem still exists on the slime-repl buffer, try the following code: (add-hook 'slime-repl-mode-hook (lambda () (if (string= (slime-lisp-implementation-type) "Clojure") (progn (setq slime-use-autodoc-mode nil) (slime-autodoc-mode 0)) (progn (setq slime-use-autodoc-mode t) (slime-autodoc-mode 1)))))
Slime
4,419,544
10
I just started with common-lisp, having come from C++ and Python. I'm trying to run a simple SDL program that does nothing other than show an image on-screen. I can get it working from within SLIME. The problem is, it won't work when run from the shell as a script. My program looks like this: #!/usr/bin/sbcl --script (asdf:operate 'asdf:load-op :lispbuilder-sdl) (defun main () (sdl:with-init () (sdl:window 320 240) (sdl:draw-surface (sdl:load-image "image.png")) (sdl:update-display) (sdl:with-events () (:quit-event () t) (:video-expose-event () (sdl:update-display))))) (main) When I run this as a script, I get the following error: mkg@chisel:~/projects/common-lisp/sandbox$ ./hello-world.lisp unhandled ASDF:MISSING-COMPONENT in thread #<SB-THREAD:THREAD "initial thread" RUNNING {AA5E849}>: component "lispbuilder-sdl" not found 0: (SB-DEBUG::MAP-BACKTRACE #<CLOSURE (LAMBDA #) {AAF1EF5}>)[:EXTERNAL] (... long backtrace omitted) Oddly, this program works fine if I do the following. I open the program in Emacs, start SLIME in another window, and in the SLIME window, I enter the first line of the program: (asdf:operate 'asdf:load-op :lispbuilder-sdl) Then, in the editor window, I hit C-c C-k (compile/load file). This pops up a window showing image.png, as expected. Why does this not work when run as a shebang script? How can I fix it?
As the man page for sbcl says, --script implies --no-sysinit --no-userinit --disable-debugger --end-toplevel-options, which means that initialization files are not read, and so if you set up ASDF registry there it is not set up, and so it cannot find the lispbuilder-sdl system. You need to either set up the registry in the script itself, or save an executable core with the registry already set up and call that instead of the default sbcl. Usually you can also save libraries in the core instead of loading them in the script, but I am not quite sure how that interacts with non-Lisp libraries and resources.
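A common way to do the first option is to load Quicklisp explicitly at the top of the script. A sketch (not from the answer), assuming Quicklisp is installed in the default ~/quicklisp location:

    #!/usr/bin/sbcl --script
    ;; --script skips the init files, so load Quicklisp (and thus ASDF's
    ;; knowledge of installed systems) explicitly before anything else
    (load (merge-pathnames "quicklisp/setup.lisp" (user-homedir-pathname)))
    (ql:quickload :lispbuilder-sdl)
    ;; ... the rest of the original script unchanged ...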
Slime
4,914,636
10
When writing Common Lisp code, I use SLIME. In particular, I compile the buffer containing definitions of functions by using C-c C-k, and then switch to the REPL to run those functions. Putting executable code to run those functions in the buffer does not appear to work so well. If the code has bugs it can make a mess. It would be handy to have a way to include code that doesn't get compiled in the buffer, but does get run from the command line, e.g. when doing sbcl --script foo.lisp If that were the case, I would not have to keep adding and removing code every time I wanted to run code from the command line. Does there exist such a condition? This is analogous to the Python condition if __name__=='__main__':, which is false if a Python file is imported as a module, but true if it is run as a script. This blog post entitled "Using SBCL for Common Lisp shell scripting", found by random Googling, has ;; If run from command line, the following if-test will succeed (if (> (length sb-ext:*posix-argv*) 1) ;; Code that is executed from the command line only goes here ) The code included indeed does not get run by the compiler inside SLIME, but it doesn't get run by sbcl --script either. UPDATE: Thanks to Vsevolod Dyomkin for his helpful answer and the followups. Some notes about that answer follow, compiled from the comments to that answer. @Vsevolod, if you add these to your answer, I'll delete them. First, I asked for a way to run code from the command line, but not from the interpreter. The supplied solution does more; it also gives a way to run code from the interpreter but not the command line. The first step is to define a reader macro function for the macro character #!. As stated in the link "Upon encountering a macro character, the Lisp reader calls its reader macro function". The reader function is defined by the call to set-dispatch-macro-character. So, when the #! character is seen, the set-dispatch-macro-character causes the lambda function defined in the body to be called. This function then adds the keyword :noscript to the *features* variable. See also a discussion of what reader macros are good for in the SO question Read macros: what do you use them for?. Observe that the keyword :noscript is added to *features* precisely when the #! character is present. Furthermore, the #! character is present when the code is run inside the interpreter, e.g. when using SLIME, but is apparently absent (removed) from the program's text when sbcl --script is run. Therefore, :noscript is added to *features* when the code is run in the interpreter, but not when run as a script. We now use the builtin reader macros #-/#+, which as Vsevolod said, behave similarly to C's #IFDEF/#IFNDEF. They check for a symbol in *features*. In this case, #-:noscript checks for the absence of :noscript, and #+:noscript checks for the presence of :noscript. If those conditions are satisfied, it runs the corresponding code. To wrap a block of code, one can use progn like this: #-:noscript (progn <your code here>). Finally, one needs to call set-dispatch-macro-character before running code that uses this functionality. In the case of sbcl, one can put it in the initialization file ~/.sbclrc. Observe that this approach does not depend on the Common Lisp implementation being SBCL. A simpler alternative, as mentioned in the sbcl-devel list, is to use the fact that the keyword :SWANK appears when one types *features* in a REPL inside emacs using SLIME.
SLIME should probably more accurately be called SLIME/SWANK, as these two are the client/server components of a client-server architecture. I found this blog post called Understanding SLIME, which was helpful. So, one can use #-:swank and #+:swank just like #-:noscript and #+:noscript, except that one doesn't need to write any code. Of course, this then won't work if one is using the command line interpreter sbcl, for instance, since then :SWANK will not appear in *features*.
You can use the following trick: Define a dispatch function for shebang: (set-dispatch-macro-character #\# #\! (lambda (stream c n) (declare (ignore c n)) (read-line stream) `(eval-when (:compile-toplevel :load-toplevel :execute) (pushnew :noscript *features*)))) In your script file use #-:noscript: #!/usr/local/bin/sbcl --script (defun test (a) (print a)) (test 1) #-:noscript (test 2) #+:noscript (test 3) Executing ./test.lisp will print 1 and 2, while C-c C-k will output 1 and 3. EDITS This trick should work, because the shebang line is removed altogether by sbcl --script, but not removed, when the file is loaded through SLIME or other mechanisms. The drawback of this approach is that we condition on absence of :noscript in features, and not presence of :script. To amend it, pushing of the appropriate feature should be done in sbcl --script processing itself.
Slime
9,796,353
10
When calling the saveAll method of my JpaRepository with a long List<Entity> from the service layer, trace logging of Hibernate shows single SQL statements being issued per entity. Can I force it to do a bulk insert (i.e. multi-row) without needing to manually fiddle with EntityManger, transactions etc. or even raw SQL statement strings? With multi-row insert I mean not just transitioning from: start transaction INSERT INTO table VALUES (1, 2) end transaction start transaction INSERT INTO table VALUES (3, 4) end transaction start transaction INSERT INTO table VALUES (5, 6) end transaction to: start transaction INSERT INTO table VALUES (1, 2) INSERT INTO table VALUES (3, 4) INSERT INTO table VALUES (5, 6) end transaction but instead to: start transaction INSERT INTO table VALUES (1, 2), (3, 4), (5, 6) end transaction In PROD I'm using CockroachDB, and the difference in performance is significant. Below is a minimal example that reproduces the problem (H2 for simplicity). ./src/main/kotlin/ThingService.kt: package things import org.springframework.boot.autoconfigure.SpringBootApplication import org.springframework.boot.runApplication import org.springframework.web.bind.annotation.RestController import org.springframework.web.bind.annotation.GetMapping import org.springframework.data.jpa.repository.JpaRepository import javax.persistence.Entity import javax.persistence.Id import javax.persistence.GeneratedValue interface ThingRepository : JpaRepository<Thing, Long> { } @RestController class ThingController(private val repository: ThingRepository) { @GetMapping("/test_trigger") fun trigger() { val things: MutableList<Thing> = mutableListOf() for (i in 3000..3013) { things.add(Thing(i)) } repository.saveAll(things) } } @Entity data class Thing ( var value: Int, @Id @GeneratedValue var id: Long = -1 ) @SpringBootApplication class Application { } fun main(args: Array<String>) { runApplication<Application>(*args) } ./src/main/resources/application.properties: jdbc.driverClassName = org.h2.Driver jdbc.url = jdbc:h2:mem:db jdbc.username = sa jdbc.password = sa hibernate.dialect=org.hibernate.dialect.H2Dialect hibernate.hbm2ddl.auto=create spring.jpa.generate-ddl = true spring.jpa.show-sql = true spring.jpa.properties.hibernate.jdbc.batch_size = 10 spring.jpa.properties.hibernate.order_inserts = true spring.jpa.properties.hibernate.order_updates = true spring.jpa.properties.hibernate.jdbc.batch_versioned_data = true ./build.gradle.kts: import org.jetbrains.kotlin.gradle.tasks.KotlinCompile plugins { val kotlinVersion = "1.2.30" id("org.springframework.boot") version "2.0.2.RELEASE" id("org.jetbrains.kotlin.jvm") version kotlinVersion id("org.jetbrains.kotlin.plugin.spring") version kotlinVersion id("org.jetbrains.kotlin.plugin.jpa") version kotlinVersion id("io.spring.dependency-management") version "1.0.5.RELEASE" } version = "1.0.0-SNAPSHOT" tasks.withType<KotlinCompile> { kotlinOptions { jvmTarget = "1.8" freeCompilerArgs = listOf("-Xjsr305=strict") } } repositories { mavenCentral() } dependencies { compile("org.springframework.boot:spring-boot-starter-web") compile("org.springframework.boot:spring-boot-starter-data-jpa") compile("org.jetbrains.kotlin:kotlin-stdlib-jdk8") compile("org.jetbrains.kotlin:kotlin-reflect") compile("org.hibernate:hibernate-core") compile("com.h2database:h2") } Run: ./gradlew bootRun Trigger DB INSERTs: curl http://localhost:8080/test_trigger Log output: Hibernate: select thing0_.id as id1_0_0_, thing0_.value as value2_0_0_ from thing thing0_ where thing0_.id=? 
Hibernate: call next value for hibernate_sequence Hibernate: select thing0_.id as id1_0_0_, thing0_.value as value2_0_0_ from thing thing0_ where thing0_.id=? Hibernate: call next value for hibernate_sequence Hibernate: select thing0_.id as id1_0_0_, thing0_.value as value2_0_0_ from thing thing0_ where thing0_.id=? Hibernate: call next value for hibernate_sequence Hibernate: select thing0_.id as id1_0_0_, thing0_.value as value2_0_0_ from thing thing0_ where thing0_.id=? Hibernate: call next value for hibernate_sequence Hibernate: select thing0_.id as id1_0_0_, thing0_.value as value2_0_0_ from thing thing0_ where thing0_.id=? Hibernate: call next value for hibernate_sequence Hibernate: select thing0_.id as id1_0_0_, thing0_.value as value2_0_0_ from thing thing0_ where thing0_.id=? Hibernate: call next value for hibernate_sequence Hibernate: select thing0_.id as id1_0_0_, thing0_.value as value2_0_0_ from thing thing0_ where thing0_.id=? Hibernate: call next value for hibernate_sequence Hibernate: select thing0_.id as id1_0_0_, thing0_.value as value2_0_0_ from thing thing0_ where thing0_.id=? Hibernate: call next value for hibernate_sequence Hibernate: select thing0_.id as id1_0_0_, thing0_.value as value2_0_0_ from thing thing0_ where thing0_.id=? Hibernate: call next value for hibernate_sequence Hibernate: select thing0_.id as id1_0_0_, thing0_.value as value2_0_0_ from thing thing0_ where thing0_.id=? Hibernate: call next value for hibernate_sequence Hibernate: select thing0_.id as id1_0_0_, thing0_.value as value2_0_0_ from thing thing0_ where thing0_.id=? Hibernate: call next value for hibernate_sequence Hibernate: select thing0_.id as id1_0_0_, thing0_.value as value2_0_0_ from thing thing0_ where thing0_.id=? Hibernate: call next value for hibernate_sequence Hibernate: select thing0_.id as id1_0_0_, thing0_.value as value2_0_0_ from thing thing0_ where thing0_.id=? Hibernate: call next value for hibernate_sequence Hibernate: select thing0_.id as id1_0_0_, thing0_.value as value2_0_0_ from thing thing0_ where thing0_.id=? Hibernate: call next value for hibernate_sequence Hibernate: insert into thing (value, id) values (?, ?) Hibernate: insert into thing (value, id) values (?, ?) Hibernate: insert into thing (value, id) values (?, ?) Hibernate: insert into thing (value, id) values (?, ?) Hibernate: insert into thing (value, id) values (?, ?) Hibernate: insert into thing (value, id) values (?, ?) Hibernate: insert into thing (value, id) values (?, ?) Hibernate: insert into thing (value, id) values (?, ?) Hibernate: insert into thing (value, id) values (?, ?) Hibernate: insert into thing (value, id) values (?, ?) Hibernate: insert into thing (value, id) values (?, ?) Hibernate: insert into thing (value, id) values (?, ?) Hibernate: insert into thing (value, id) values (?, ?) Hibernate: insert into thing (value, id) values (?, ?)
To get a bulk insert with Spring Boot and Spring Data JPA you need only two things: set the option spring.jpa.properties.hibernate.jdbc.batch_size to the value you need (for example: 20), and use the saveAll() method of your repo with the list of entities prepared for inserting. A working example is here. Regarding the transformation of the insert statements into something like this: INSERT INTO table VALUES (1, 2), (3, 4), (5, 6) — such a rewrite is available in PostgreSQL: you can set the option reWriteBatchedInserts to true in the jdbc connection string: jdbc:postgresql://localhost:5432/db?reWriteBatchedInserts=true and then the jdbc driver will do this transformation. You can find additional info about batching here. UPDATED Demo project in Kotlin: sb-kotlin-batch-insert-demo UPDATED Hibernate transparently disables insert batching at the JDBC level if you use an IDENTITY identifier generator.
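For reference, a minimal sketch of the two relevant pieces — the batching/driver settings and a non-IDENTITY key — with illustrative values; this is not taken from the linked demo project:
# application.properties
spring.datasource.url=jdbc:postgresql://localhost:5432/db?reWriteBatchedInserts=true
spring.jpa.properties.hibernate.jdbc.batch_size=20
spring.jpa.properties.hibernate.order_inserts=true
// Kotlin entity: use a sequence-based generator, since IDENTITY disables JDBC batching
// (needs import javax.persistence.GenerationType)
@Entity
data class Thing(
    var value: Int,
    @Id @GeneratedValue(strategy = GenerationType.SEQUENCE)
    var id: Long = -1
)
The repository.saveAll(things) call stays unchanged; with these settings Hibernate groups the inserts into batches of 20 and the PostgreSQL driver rewrites each batch into a single multi-row INSERT.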
CockroachDB
50,772,230
114
Is there any way to do local development with cloud spanner? I've taken a look through the docs and the CLI tool and there doesn't seem to be anything there. Alternatively, can someone suggest a SQL database that behaves similarly for reads (not sure what to do about writes)? EDIT: To clarify, I'm looking for a database which speaks the same flavour of SQL as Cloud Spanner so I can do development locally. The exact performance characteristics are not as important as the API and consistency behaviour. I don't think Cockroach meets these requirements?
There is currently no local development option for Cloud Spanner. Your best option for now would be to start a single-node instance on GCP. There currently isn't another database that operates like Cloud Spanner; however, CockroachDB operates on similar principles. Since they don't have access to atomic clocks and GPS units, they do make different trade-offs, in particular around reads & writes and the lack of 'stale reads'. You can read more on the Jepsen blog: Where Spanner waits after every write to ensure linearizability, CockroachDB blocks only on contested reads. As a consequence, its consistency guarantees are slightly weaker.
CockroachDB
42,289,920
21
In MySQL, I can use AUTO INCREMENT to generate unique IDs for my application’s customers. How do I get similar functionality when using CockroachDB?
Applications cannot use constructs like SEQUENCE or AUTO_INCREMENT and also expect horizontal scalability -- this is a general limitation of any distributed database. Instead, CockroachDB provides its own SERIAL type which generates increasing but not necessarily contiguous values. For example, you would use: CREATE TABLE customers (id SERIAL PRIMARY KEY, name STRING); Then when you’re inserting values, you would use something like: INSERT INTO customers (name) VALUES ('Kira Randell') RETURNING id; This would return the randomly generated ID, which you’d be able to use elsewhere in your application
CockroachDB
43,330,072
10
As far as I know, ClickHouse allows only inserting new data. But is it possible to delete blocks older than some period, to avoid filling up the HDD?
Lightweight delete Available since v22.8 Standard DELETE syntax for MergeTree tables has been introduced in #37893. SET allow_experimental_lightweight_delete = 1; DELETE FROM merge_table_standard_delete WHERE id = 10; Altering data using Mutations See the docs on the Mutations feature https://clickhouse.yandex/docs/en/query_language/alter/#mutations. The feature was implemented in Q3 2018. Delete data ALTER TABLE <table> DELETE WHERE <filter expression> "Dirty" delete all You always have to specify a filter expression. If you want to delete all the data through a Mutation, specify something that's always true, e.g.: ALTER TABLE <table> DELETE WHERE 1=1 Update data It's also possible to mutate (UPDATE) in a similar way ALTER TABLE <table> UPDATE column1 = expr1 [, ...] WHERE <filter expression> Mind it's async Please note that all commands above do not execute the data mutation directly (in sync). Instead they schedule a ClickHouse Mutation that is executed independently (async) in the background. That is the reason why the ALTER TABLE syntax was chosen instead of the typical SQL UPDATE/DELETE. You can check unfinished Mutations' progress via SELECT * FROM system.mutations WHERE is_done = 0 ...unless you change the mutations_sync setting to 1 (synchronously waits for the current server) or 2 (waits for all replicas). Altering data without using Mutations There's a TRUNCATE TABLE statement with syntax as follows: TRUNCATE TABLE [IF EXISTS] [db.]name [ON CLUSTER cluster] This synchronously truncates the table. It will check the table size, so it won't allow you to delete if the table size exceeds max_table_size_to_drop. See docs here: https://clickhouse.tech/docs/en/sql-reference/statements/truncate/
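Applied to the original question (dropping data older than some period), a sketch using the mutation syntax above — the table and date column names are illustrative:
-- schedules an async mutation that removes rows older than 30 days
ALTER TABLE events DELETE WHERE event_date < today() - 30;
Newer ClickHouse versions also support a TTL expression on MergeTree tables, which drops expired data automatically and may be a better fit than repeated manual deletes.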
ClickHouse
52,355,143
21
I know there's a bunch of system tables. If one has access to those where do I find the currently installed version?
SELECT version() ┌─version()───┐ │ 20.9.1.4571 │ └─────────────┘ SELECT * FROM system.build_options ┌─name──────────────────────┬─value────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐ │ VERSION_FULL │ ClickHouse 20.9.1.4571 │ │ VERSION_DESCRIBE │ v20.9.1.4571-testing
ClickHouse
63,847,537
19
I have an event table (MergeTree) in clickhouse and want to run a lot of small inserts at the same time. However the server becomes overloaded and unresponsive. Moreover, some of the inserts are lost. There are a lot of records in clickhouse error log: 01:43:01.668 [ 16 ] <Error> events (Merger): Part 201 61109_20161109_240760_266738_51 intersects previous part Is there a way to optimize such queries? I know I can use bulk insert for some types of events. Basically, running one insert with many records, which clickhouse handles pretty well. However, some of the events, such as clicks or opens could not be handled in this way. The other question: why clickhouse decides that similar records exist, when they don't? There are similar records at the time of insert, which have the same fields as in index, but other fields are different. From time to time I also receive the following error: Caused by: ru.yandex.clickhouse.except.ClickHouseUnknownException: ClickHouse exception, message: Connect to localhost:8123 [ip6-localhost/0:0:0:0:0:0:0:1] timed out, host: localhost, port: 8123; Connect to ip6-localhost:8123 [ip6-localhost/0:0:0:0:0:0:0:1] timed out ... 36 more Mostly during project build when test against clickhouse database are run.
Clickhouse has a special type of table for this - Buffer. It's stored in memory and allows many small inserts without a problem. We have nearly 200 different inserts per second - it works fine. Buffer table: CREATE TABLE logs.log_buffer (rid String, created DateTime, some String, d Date MATERIALIZED toDate(created)) ENGINE = Buffer('logs', 'log_main', 16, 5, 30, 1000, 10000, 1000000, 10000000); Main table: CREATE TABLE logs.log_main (rid String, created DateTime, some String, d Date) ENGINE = MergeTree(d, sipHash128(rid), (created, sipHash128(rid)), 8192); Details in the manual: https://clickhouse.yandex/docs/en/operations/table_engines/buffer/
ClickHouse
40,592,010
15
I'm running Clickhouse in a docker container on a Windows host. I tried to create an account towards making it an admin account. It looks like the default user does not have permission to create other accounts. How can I get around this error and create an admin account? docker-compose exec -T dash-clickhouse clickhouse-client --query="CREATE USER 'foo' IDENTIFIED WITH sha256_password BY 'bar'" gave the error Received exception from server (version 20.7.2): Code: 497. DB::Exception: Received from localhost:9000. DB::Exception: default: Not enough privileges. To execute this query it's necessary to have the grant CREATE USER ON *.*.
To fix it, you need to enable the access_management setting in the users.xml file: # execute an interactive bash shell on the container docker-compose exec {container_name} bash # docker exec -it {container_name} bash # install a preferred text editor (I prefer 'nano') apt-get update apt-get install nano # open the users.xml file in the editor nano /etc/clickhouse-server/users.xml Uncomment the access_management setting for the default user and save the changes: .. <!-- User can create other users and grant rights to them. --> <!-- <access_management>1</access_management> --> ..
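After saving users.xml and restarting the container, the CREATE USER from the question should succeed; a sketch of then making the new user a full admin (same names as in the question):
CREATE USER foo IDENTIFIED WITH sha256_password BY 'bar';
GRANT ALL ON *.* TO foo WITH GRANT OPTION;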
ClickHouse
64,166,492
15
I have a clickhouse table that has one Array(UInt16) column. I want to be able to filter results from this table to only get rows where the values in the array column are above a threshold value. I've been trying to achieve this using some of the array functions (arrayFilter and arrayExists) but I'm not familiar enough with the SQL/Clickhouse query syntax to get this working. I've created the table using: CREATE TABLE IF NOT EXISTS ArrayTest ( date Date, sessionSecond UInt16, distance Array(UInt16) ) Engine = MergeTree(date, (date, sessionSecond), 8192); Where the distance values will be distances from a certain point at a certain amount of seconds (sessionSecond) after the date. I've added some sample values so the table looks like the following: Now I want to get all rows which contain distances greater than 7. I found the array operators documentation here and tried the arrayExists function but it's not working how I'd expect. From the documentation, it says that this function "Returns 1 if there is at least one element in 'arr' for which 'func' returns something other than 0. Otherwise, it returns 0". But when I run the query below I get three zeros returned where I should get a 0 and two ones: SELECT arrayExists( val -> val > 7, arrayEnumerate(distance)) FROM ArrayTest; Eventually I want to perform this select and then join it with the table contents to only return rows that have an exists = 1 but I need this first step to work before that. Am I using the arrayExists wrong? What I found more confusing is that when I change the comparison value to 2 I get all 1s back. Can this kind of filtering be achieved using the array functions? Thanks
You can use arrayExists in the WHERE clause. SELECT * FROM ArrayTest WHERE arrayExists(x -> x > 7, distance) = 1; Another way is to use ARRAY JOIN, if you need to know which values are greater than 7: SELECT d, distance, sessionSecond FROM ArrayTest ARRAY JOIN distance as d WHERE d > 7
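If you also want just the matching values rather than the whole array, arrayFilter (mentioned in the question) takes the same lambda — a small sketch against the same table:
SELECT
    sessionSecond,
    arrayFilter(x -> x > 7, distance) AS far_distances  -- keep only the values above the threshold
FROM ArrayTest
WHERE arrayExists(x -> x > 7, distance);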
ClickHouse
47,591,813
13
I'm designing a schema for a large Clickhouse table with string fields that can be pretty sparse. I'm wondering if these fields should be nullable or if I should store an empty string "" as a default value. Which would be better in terms of storage?
You should store an empty string "". A Nullable column takes more disk space and slows down queries up to two times. This is expected behaviour by design. Inserts are slowed down as well, because Nullable columns are stored in 4 files but non-Nullable ones in only 2 files for each column. https://gist.github.com/den-crane/e43f8d0ad6f67ab9ffd09ea3e63d98aa
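A small sketch of the non-Nullable variant — an empty-string default plus an explicit filter (table and column names are illustrative):
CREATE TABLE events (id UInt64, comment String DEFAULT '') ENGINE = MergeTree ORDER BY id;
-- the "IS NOT NULL" check becomes a comparison against the empty string
SELECT count() FROM events WHERE comment != '';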
ClickHouse
63,057,886
13
Suppose I have a given time range. For explanation, let's consider something simple, like whole year 2018. I want to query data from ClickHouse as a sum aggregation for each quarter so the result should be 4 rows. The problem is that I have data for only two quarters so when using GROUP BY quarter, only two rows are returned. SELECT toStartOfQuarter(created_at) AS time, sum(metric) metric FROM mytable WHERE created_at >= toDate(1514761200) AND created_at >= toDateTime(1514761200) AND created_at <= toDate(1546210800) AND created_at <= toDateTime(1546210800) GROUP BY time ORDER BY time 1514761200 – 2018-01-01 1546210800 – 2018-12-31 This returns: time metric 2018-01-01 345 2018-04-01 123 And I need: time metric 2018-01-01 345 2018-04-01 123 2018-07-01 0 2018-10-01 0 This is simplified example but in real use case the aggregation would be eg. 5 minutes instead of quarters and GROUP BY would have at least one more attribute like GROUP BY attribute1, time so desired result is time metric attribute1 2018-01-01 345 1 2018-01-01 345 2 2018-04-01 123 1 2018-04-01 123 2 2018-07-01 0 1 2018-07-01 0 2 2018-10-01 0 1 2018-10-01 0 2 Is there a way to somehow fill the whole given interval? Like InfluxDB has fill argument for group or TimescaleDb's time_bucket() function with generate_series() I tried to search ClickHouse documentation and github issues and it seems this is not implemented yet so the question perhaps is whether there's any workaround.
From ClickHouse 19.14 you can use the WITH FILL clause. It can fill quarters in this way: WITH ( SELECT toRelativeQuarterNum(toDate('1970-01-01')) ) AS init SELECT -- build the date from the relative quarter number toDate('1970-01-01') + toIntervalQuarter(q - init) AS time, metric FROM ( SELECT toRelativeQuarterNum(created_at) AS q, sum(rand()) AS metric FROM ( -- generate some dates and metrics values with gaps SELECT toDate(arrayJoin(range(1514761200, 1546210800, ((60 * 60) * 24) * 180))) AS created_at ) GROUP BY q ORDER BY q ASC WITH FILL FROM toRelativeQuarterNum(toDate(1514761200)) TO toRelativeQuarterNum(toDate(1546210800)) STEP 1 ) ┌───────time─┬─────metric─┐ │ 2018-01-01 │ 2950782089 │ │ 2018-04-01 │ 2972073797 │ │ 2018-07-01 │ 0 │ │ 2018-10-01 │ 179581958 │ └────────────┴────────────┘
ClickHouse
50,238,568
12
I am not clear about these two terms. Does one block have a fixed number of rows? Is one block the minimum unit read from disk? Are different blocks stored in different files? Is the range of one block bigger than a granule? That is, can one block cover several granules of a skip index?
https://clickhouse.tech/docs/en/operations/table_engines/mergetree/#primary-keys-and-indexes-in-queries The primary key is sparse. By default it contains 1 value for each 8192 rows (= 1 granule). Let's disable adaptive granularity (for the test) -- index_granularity_bytes=0 create table X (A Int64) Engine=MergeTree order by A settings index_granularity=16,index_granularity_bytes=0; insert into X select * from numbers(32); index_granularity=16 -- 32 rows = 2 granules, the primary index has 2 values: 0 and 16 select marks, primary_key_bytes_in_memory from system.parts where table = 'X'; ┌─marks─┬─primary_key_bytes_in_memory─┐ │ 2 │ 16 │ └───────┴─────────────────────────────┘ 16 bytes === 2 values of INT64. Adaptive index granularity means that granule sizes vary, because wide rows (many bytes) need (for performance) fewer (<8192) rows per granule. With index_granularity_bytes = 10MB (~1k per row * 8192), each granule has about 10MB. If the row size is 100k (long Strings), a granule will have 100 rows (not 8192). Skip index granules GRANULARITY 3 -- means that an index will store one value for each 3 table granules. create table X (A Int64, B Int64, INDEX IX1 (B) TYPE minmax GRANULARITY 4) Engine=MergeTree order by A settings index_granularity=16,index_granularity_bytes=0; insert into X select number, number from numbers(128); 128/16 = 8, the table has 8 granules, INDEX IX1 stores 2 values of minmax (8/4) So the minmax index stores 2 values -- (0..63) and (64..128) 0..63 -- points to the first 4 table granules. 64..128 -- points to the second 4 table granules. set send_logs_level='debug' select * from X where B=77 [ 84 ] <Debug> dw.X (SelectExecutor): **Index `IX1` has dropped 1 granules** [ 84 ] <Debug> dw.X (SelectExecutor): Selected 1 parts by date, 1 parts by key, **4 marks** to read from 1 ranges SelectExecutor checked the skip index - 4 table granules can be skipped because 77 is not in 0..63. And another 4 granules must be read (4 marks) because 77 is in (64..128) -- some of those 4 granules have B=77.
ClickHouse
60,255,863
12
I have a String field with timestamp like this: "2020-01-13T07:34:25.804445Z". And i want to parse it to datetime (to use in Grafana filters, for example). But i getting this error: SELECT SELECT "@timestamp" AS timestamp, CAST(timestamp AS DateTime) as datetime from table Cannot parse string '2020-01-13T06:55:05.704Z' as DateTime: syntax error at position 19 (parsed just '2020-01-13T06:55:05'). I found variable date_time_input_format on documentation which "allows extended parsing". But it says that this setting doesn't apply to date and time functions. So how do i cast string date with timezone to DateTime?
SELECT parseDateTimeBestEffortOrNull('2020-01-13T07:34:25.804445Z') ┌─parseDateTimeBestEffortOrNull('2020-01-13T07:34:25.804445Z')─┐ │ 2020-01-13 07:34:25 │ └──────────────────────────────────────────────────────────────┘ https://clickhouse.yandex/docs/en/query_language/functions/type_conversion_functions/#type_conversion_functions-parsedatetimebesteffort
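Applied to the query from the question, a sketch (the table name is a placeholder):
SELECT
    "@timestamp" AS ts,
    parseDateTimeBestEffortOrNull("@timestamp") AS datetime  -- usable in Grafana time filters
FROM my_table;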
ClickHouse
59,712,399
10
I'm getting "Read timed out" when running a query on a 1,3b row db. It is not a particular advanced query that groups together hashtags in tweets: SELECT case when match(hashtag, '[Cc]orona.*|COVID.*|[Cc]ovid.*|[Cc]oVID_19.*|[Cc]orvid19.*|COVD19.*|CORONA.*|KILLTHEVI.*|SARSCoV.*|ChineseVi.*|WuhanVir.*|ChinaVir.*|[Vv]irus.*| [Qq]uarantine|[Pp]andemic.*|[Cc]linical[Tt]rial.*|FlattenTheCurve.*|SocialDistancing.*|StayHome.*|StayTheFHome.*|StayAtHome.*|stopthespread.*| SafeHands.*|WashYourHands.*|SelfIsolation.*') then 'COVID19' when match(hashtag, '[Jj]anta[Cc]urfew.*|[Jj]anata[Cc]urfew.*') then 'JantaCurfew' when match(hashtag, 'Bhula.*') then 'Bhula' when match(hashtag, '[Ss]t[Pp]atrick.*|HappyStPatrick') then 'StPatricks day' when match(hashtag, '[Cc]hina.*') then 'China' when match(hashtag, '[Ii]taly.*') then 'Italy' when match(hashtag, '[Ii]ran.*') then 'Iran' when match(hashtag, '[Ii]ndia.*') then 'India' when match(hashtag, '[Hh]appy[Mm]others[Dd]ay.*|[Mm]others[Dd]ay.*') then 'MothersDay' else hashtag END as Hashtag, SUM(CASE WHEN created >= '2020-05-14 00:00:00' AND created <= '2020-03-14 23:59:59' THEN 1 END) "May 14th'20", SUM(CASE WHEN created >= '2020-05-13 00:00:00' AND created <= '2020-03-13 23:59:59' THEN 1 END) "May 13th'20", SUM(CASE WHEN created >= '2020-05-12 00:00:00' AND created <= '2020-03-12 23:59:59' THEN 1 END) "May 12th'20" FROM twitterDBhashtags group by Hashtag order by "May 12th'20" DESC limit 20; Clickhouse is running on a striped hdd and accessed through GB network. How can the timeout, if that is the challenge, be changed to allow for more time? I would very much want to be able to run multi minutes queries without getting the "Read timed out" message, if possible.
The ClickHouse JDBC driver has socket_timeout = 30000 (30 s) by default. Under the Advanced tab you can configure advanced connection settings, e.g., Character Coding. Connection / Advanced properties / New property -> socket_timeout = 300000
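If you connect with a plain JDBC URL rather than through the GUI dialog, the same property can usually be passed as a query parameter — an illustrative connection string (host and database are placeholders):
jdbc:clickhouse://localhost:8123/default?socket_timeout=300000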
ClickHouse
63,621,318
10
I'm going to migrate data from PostgreSQL database to Yandex's ClickHouse. One of the fields in a source table is of type JSON - called additional_data. So, PostgreSQL allows me to access json attributes during e.g. SELECT ... queries with ->> and -> and so on. I need the same behavior to persist in my resulting table in ClickHouse storage. (i.e. the ability to parse JSON during select queries and/or when using filtering and aggregation clauses) Here is what I've done during CREATE TABLE ... in ClickHouse client: create table if not exists analytics.events ( uuid UUID, ..., created_at DateTime, updated_at DateTime, additional_data Nested ( message Nullable(String), eventValue Nullable(String), rating Nullable(String), focalLength Nullable(Float64) ) ) engine = MergeTree ORDER BY (uuid, created_at) PRIMARY KEY uuid; Is that a good choice how to store JSON-serializable data? Any Ideas? Maybe It's better to store a JSON data as a plain String instead of Nested and playing with It using special functions?
Although ClickHouse uses fast JSON libraries (such as simdjson and rapidjson) for parsing, I think the nested fields should be faster. If the JSON structure is fixed or changes predictably, consider denormalizing the data: .. created_at DateTime, updated_at DateTime, additional_data_message Nullable(String), additional_data_eventValue Nullable(String), additional_data_rating Nullable(String), additional_data_focalLength Nullable(Float64) .. On one hand, it can significantly increase the row count and disk space; on the other hand, it should give a significant increase in performance (especially with the right indexing). Moreover, the disk size can be reduced using the LowCardinality type and codecs. Some other remarks: avoid using Nullable types; prefer a replacement such as '', 0, etc. (see the explanation in Clickhouse string field disk usage: null vs empty). The UUID type doesn't give index monotonicity; this should be much better (More secrets of ClickHouse Query Performance): .. ORDER BY (created_at, uuid); consider using Aggregating engines to significantly increase the speed of calculating aggregated values. In any case, before making a final decision you need to do manual testing on a data subset (this applies both to choosing the schema (JSON as string / Nested type / denormalized) and to choosing the column codec).
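For completeness, the plain-String alternative the question mentions would lean on ClickHouse's JSON functions at query time — a sketch, assuming additional_data is stored as a String column holding the raw JSON:
SELECT
    JSONExtractString(additional_data, 'message')     AS message,
    JSONExtractFloat(additional_data, 'focalLength')  AS focal_length
FROM analytics.events
WHERE JSONExtractString(additional_data, 'rating') = '5';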
ClickHouse
64,131,915
10
Is it possible to alter a table engine in clickhouse table like in MySQL, something like this: CREATE TABLE example_table (id UInt32, data String) ENGINE=MergeTree() ORDER BY id; ALTER example_table ENGINE=SummingMergeTree(); Because I didn't find such capability in the documentation. If it is not possible, are there any plans to implement it in near future, or what architecture limitations prevent from doing this?
It's possible to change an Engine by several ways. But it's impossible to change PARTITION BY / ORDER BY. That's why it's not documented explicitly. So in 99.99999% cases it does not make any sense. SummingMergeTree uses table's ORDER BY as a collapsing rule and the existing ORDER BY usually does not suit. Here is an example of one the ways (less hacky one), (you can copy partitions from one table to another, it's almost free operation, it exploits FS hardlinks and does not copy real data). (COW -- copy on write). CREATE TABLE example_table (id UInt32, data Float64) ENGINE=MergeTree() ORDER BY id; Insert into example_table values(1,1), (1,1), (2,1); CREATE TABLE example_table1 (id UInt32, data Float64) ENGINE=SummingMergeTree() ORDER BY id; -- this does not copy any data (instant & almost free command) alter table example_table1 attach partition tuple() from example_table; SELECT * FROM example_table1; ┌─id─┬─data─┐ │ 1 │ 1 │ │ 1 │ 1 │ │ 2 │ 1 │ └────┴──────┘ optimize table example_table1 final; select * from example_table1; ┌─id─┬─data─┐ │ 1 │ 2 │ │ 2 │ 1 │ └────┴──────┘ One more way (edit metadata file, also ZK records if a table Replicated) detach table example_table; vi /var/lib/clickhouse/metadata/default/example_table.sql replace MergeTree with SummingMergeTree attach table example_table; SHOW CREATE TABLE example_table ┌─statement──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┐ │ CREATE TABLE default.example_table ( `id` UInt32, `data` Float64 ) ENGINE = SummingMergeTree ORDER BY id SETTINGS index_granularity = 8192 │ └────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘ SELECT * FROM example_table; ┌─id─┬─data─┐ │ 1 │ 1 │ │ 1 │ 1 │ │ 2 │ 1 │ └────┴──────┘ optimize table example_table final; SELECT * FROM example_table; ┌─id─┬─data─┐ │ 1 │ 2 │ │ 2 │ 1 │ └────┴──────┘
ClickHouse
68,716,267
10
Druid is used for both real time and batch processing. But can it totally replace hadoop? If not why? As in what is the advantage of hadoop over druid? I have read that druid is used along with hadoop. So can the use of Hadoop be avoided?
We are talking about two slightly related but very different technologies here. Druid is a real-time analytics system and is a perfect fit for timeseries and time-based event aggregation. Hadoop is HDFS (a distributed file system) + MapReduce (a paradigm for executing distributed processes), which together have created an ecosystem for distributed processing and act as an underlying/influencing technology for many other open source projects. You can set up Druid to use Hadoop; that is, to fire MR jobs to index batch data and to read its indexed data from HDFS (of course it will cache them locally on the local disk). If you want to ignore Hadoop, you can do your indexing and loading from a local machine as well, of course with the penalty of being limited to one machine.
Druid
24,121,947
11
I currently connect to the druid cluster through the druid connector in Apache Superset. I heard that SQL can be used to query druid. Is it possible to point my SQL database connection to druid?
Follow the steps below. You need to use the latest version of pydruid to enable SQLAlchemy support. For me pydruid 0.4.1 is working fine. On Superset, in the Databases section, you need to provide the SQLAlchemy URI druid://XX.XX:8082/druid/v2/sql/ using a broker ip/host. The third thing you need to do is enable druid.sql.enable=true on the broker. I hope this will help you.
Druid
50,182,494
10
I have a web application that is <distributable/>, but also deployed to stand alone instances of Wildfly for local development work. Sometimes we have calls to the backend that can stall for a few seconds, which often leads to exceptions like the one shown below. How can I fix this given that I have no control over long running backend requests? 14:55:04,808 ERROR [org.infinispan.interceptors.InvocationContextInterceptor] (default task-6) ISPN000136: Error executing command LockControlCommand, writing keys []: org.infinispan.util.concurrent.TimeoutException: ISPN000299: Unable to acquire lock after 15 seconds for key LA6Q5r9L1q-VF2tyKE9Pc_bO9yYtzXu8dYt8l-BQ and requestor GlobalTransaction:<null>:37:local. Lock is held by GlobalTransaction:<null>:36:local at org.infinispan.util.concurrent.locks.impl.DefaultLockManager$KeyAwareExtendedLockPromise.lock(DefaultLockManager.java:236) at org.infinispan.interceptors.locking.AbstractLockingInterceptor.lockAllAndRecord(AbstractLockingInterceptor.java:199) at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.checkPendingAndLockAllKeys(AbstractTxLockingInterceptor.java:199) at org.infinispan.interceptors.locking.AbstractTxLockingInterceptor.lockAllOrRegisterBackupLock(AbstractTxLockingInterceptor.java:165) at org.infinispan.interceptors.locking.PessimisticLockingInterceptor.visitLockControlCommand(PessimisticLockingInterceptor.java:184) at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:110) at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99) at org.infinispan.interceptors.TxInterceptor.invokeNextInterceptorAndVerifyTransaction(TxInterceptor.java:157) at org.infinispan.interceptors.TxInterceptor.visitLockControlCommand(TxInterceptor.java:215) at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:110) at org.infinispan.interceptors.base.CommandInterceptor.invokeNextInterceptor(CommandInterceptor.java:99) at org.infinispan.interceptors.InvocationContextInterceptor.handleAll(InvocationContextInterceptor.java:107) at org.infinispan.interceptors.InvocationContextInterceptor.visitLockControlCommand(InvocationContextInterceptor.java:81) at org.infinispan.commands.control.LockControlCommand.acceptVisitor(LockControlCommand.java:110) at org.infinispan.interceptors.InterceptorChain.invoke(InterceptorChain.java:336) at org.infinispan.cache.impl.CacheImpl.lock(CacheImpl.java:828) at org.infinispan.cache.impl.CacheImpl.lock(CacheImpl.java:810) at org.infinispan.cache.impl.AbstractDelegatingAdvancedCache.lock(AbstractDelegatingAdvancedCache.java:177) at org.wildfly.clustering.web.infinispan.session.InfinispanSessionMetaDataFactory.getValue(InfinispanSessionMetaDataFactory.java:84) at org.wildfly.clustering.web.infinispan.session.InfinispanSessionMetaDataFactory.findValue(InfinispanSessionMetaDataFactory.java:69) at org.wildfly.clustering.web.infinispan.session.InfinispanSessionMetaDataFactory.findValue(InfinispanSessionMetaDataFactory.java:39) at org.wildfly.clustering.web.infinispan.session.InfinispanSessionFactory.findValue(InfinispanSessionFactory.java:61) at org.wildfly.clustering.web.infinispan.session.InfinispanSessionFactory.findValue(InfinispanSessionFactory.java:40) at org.wildfly.clustering.web.infinispan.session.InfinispanSessionManager.findSession(InfinispanSessionManager.java:234) at org.wildfly.clustering.web.undertow.session.DistributableSessionManager.getSession(DistributableSessionManager.java:140) at 
io.undertow.servlet.spec.ServletContextImpl.getSession(ServletContextImpl.java:726) at io.undertow.servlet.spec.HttpServletRequestImpl.getSession(HttpServletRequestImpl.java:370) at au.com.agic.settings.listener.SessionInvalidatorListener.clearSession(SessionInvalidatorListener.java:57) at au.com.agic.settings.listener.SessionInvalidatorListener.requestInitialized(SessionInvalidatorListener.java:52) at io.undertow.servlet.core.ApplicationListeners.requestInitialized(ApplicationListeners.java:245) at io.undertow.servlet.handlers.ServletInitialHandler.handleFirstRequest(ServletInitialHandler.java:283) at io.undertow.servlet.handlers.ServletInitialHandler.dispatchRequest(ServletInitialHandler.java:263) at io.undertow.servlet.handlers.ServletInitialHandler.access$000(ServletInitialHandler.java:81) at io.undertow.servlet.handlers.ServletInitialHandler$1.handleRequest(ServletInitialHandler.java:174) at io.undertow.server.Connectors.executeRootHandler(Connectors.java:202) at io.undertow.server.HttpServerExchange$1.run(HttpServerExchange.java:793) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) at java.lang.Thread.run(Thread.java:745)
I believe the answer is to update the infinispan configuration like this, which increases the lock timeout to 60 seconds. <cache-container name="web" default-cache="passivation" module="org.wildfly.clustering.web.infinispan"> <local-cache name="passivation"> <locking isolation="REPEATABLE_READ" striping="false" acquire-timeout="60000" /> <transaction mode="BATCH" /> <file-store passivation="true" purge="false" /> </local-cache> <local-cache name="persistent"> <locking isolation="REPEATABLE_READ" striping="false" acquire-timeout="60000" /> <transaction mode="BATCH" /> <file-store passivation="false" purge="false" /> </local-cache> </cache-container>
Infinispan
35,711,423
14
We are working in the design phase of new project where we need to decide the caching framework. We need decide whether to go with EHCache with Terracotta or Infinispan for caching requirement? Can anyone provide me the advantages & disadvantages of EHCache and Infinispan? Thanks in advance.
Is your environment distributed? If so, Infinispan would have a scalability advantage due to its p2p design. Even in standalone (non-clustered) mode, you'd get to take advantage of the non-blocking nature of Infinispan internals, state-of-the-art eviction algorithms (LIRS), etc. Have a look at this article for a discussion of Infinispan as a local cache. DISCLAIMER: I am the founder and project lead of Infinispan.
Infinispan
5,621,209
13
I use Observables in couchbase. What is the difference between Schedulers.io() and Schedulers.computation()?
A brief introduction to RxJava schedulers. Schedulers.io() – This is used to perform non-CPU-intensive operations like making network calls, reading discs/files, database operations, etc. It maintains a pool of threads. Schedulers.newThread() – Using this, a new thread will be created each time a task is scheduled. It’s usually suggested not to use this scheduler unless there is a very long-running operation. The threads created via newThread() won’t be reused. Schedulers.computation() – This scheduler can be used to perform CPU-intensive operations like processing huge data, bitmap processing etc. The number of threads created using this scheduler completely depends on the number of CPU cores available. Schedulers.single() – This scheduler will execute all the tasks in the sequential order they are added. This can be used when sequential execution is required. Schedulers.immediate() – This scheduler executes the task immediately in a synchronous way by blocking the main thread. Schedulers.trampoline() – It executes the tasks in a First In – First Out manner. All the scheduled tasks will be executed one by one, limiting the number of background threads to one. Schedulers.from() – This allows us to create a scheduler from an executor, limiting the number of threads to be created. When the thread pool is occupied, tasks will be queued.
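A small illustrative sketch of how these are typically combined (RxJava 2 style; bucket.get(), process() and log() are placeholders, not real SDK calls):
Observable.fromCallable(() -> bucket.get("doc-id"))   // blocking I/O work (placeholder)
        .subscribeOn(Schedulers.io())                 // run the call on the io() pool
        .observeOn(Schedulers.computation())          // hand the result to a computation thread
        .subscribe(doc -> process(doc),               // CPU-side processing (placeholder)
                   err -> log(err));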
Couchbase
33,370,339
36
I tried to edit document via couchbase console, and caught this warning message: Warning: Editing of document with size more than 2.5kb is not allowed How can I increase max editing document size?
You can raise the limit or disable it completely on version 2.2. To raise the limit, edit the file /opt/couchbase/lib/ns_server/erlang/lib/ns_server/priv/public/js/documents.js at line 214: var DocumentsSection = { docsLimit: 1000, docBytesLimit: 2500, init: function () { var self = this; The docBytesLimit variable is set to 2500; increase it to your preferred value. To disable it completely, you can comment out the conditional statement and return a false value. At line 362 comment out the statement and return false: function isJsonOverLimited(json) { //return getStringBytes(json) > self.docBytesLimit; return false; } Hope this helps. There are limitations as to how much your WYSIWYG editor can handle, so please be careful, and as always, editing core files can have negative results. We did it on our system and it works for us.
Couchbase
19,090,611
28
What does this error mean .. It runs fine in Eclipse but not in intellij idea Exception in thread "main" java.lang.VerifyError: Cannot inherit from final class at java.lang.ClassLoader.defineClass1(Native Method) at java.lang.ClassLoader.defineClassCond(ClassLoader.java:631) at java.lang.ClassLoader.defineClass(ClassLoader.java:615) at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:141) at java.net.URLClassLoader.defineClass(URLClassLoader.java:283) at java.net.URLClassLoader.access$000(URLClassLoader.java:58) at java.net.URLClassLoader$1.run(URLClassLoader.java:197) at java.security.AccessController.doPrivileged(Native Method) at java.net.URLClassLoader.findClass(URLClassLoader.java:190) at java.lang.ClassLoader.loadClass(ClassLoader.java:306) at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301) at java.lang.ClassLoader.loadClass(ClassLoader.java:247) at com.couchbase.client.ViewConnection.createConnections(ViewConnection.java:120) at com.couchbase.client.ViewConnection.<init>(ViewConnection.java:100) at com.couchbase.client.CouchbaseConnectionFactory.createViewConnection(CouchbaseConnectionFactory.java:179) at com.couchbase.client.CouchbaseClient.<init>(CouchbaseClient.java:243) at com.couchbase.client.CouchbaseClient.<init>(CouchbaseClient.java:175) at com.couchbase.App.putincbase(App.java:122) at examplesCons.TestCons.run(TestCons.java:89) at examplesCons.TestCons.main(TestCons.java:121) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:597) at com.intellij.rt.execution.application.AppMain.main(AppMain.java:120) I get this error when I try to run couchbase using couchbase-client-1.1.6.jar from Intellij IDea.
If you're using Kotlin, add open to your class (extends RealmObject) declaration: open class Foo() { }
Couchbase
18,138,136
26
With the two merging under the same roof recently, it has become difficult to determine what the major differences between Membase and Couchbase. Why would one be used over the other?
I want to elaborate on the answer given by James. At the moment Couchbase server is CouchDB with GeoCouch integration out of the box. What is great about CouchDB is that you have the ability to create structured documents and do map-reduce queries on those documents. Membase server is memcached with persistence and a very simple cluster management interface. Its strengths are the ability to do very low latency queries as well as the ability to easily add and remove servers from a cluster. Late this summer, however, Membase and CouchDB will be merged together to form the next version of Couchbase. So what will the new version of Couchbase look like? Right now in Membase the persistence layer for memcached is implemented with SQLite. After the merger of these two products CouchDB will be the new persistence layer. This means that you will get the low latency requests and great cluster management that were provided by Membase, and you will also get the great document-oriented model that CouchDB is known for.
Couchbase
6,170,909
25
I've got a Eclipse project which I somehow managed to get working in Android Studio awhile back. It uses TouchDB library/project which I now want to upgrade to their latest offering couchbase-lite-android which looks like it comes ready built for Android Studio with gradle files. However I'm at a loss how to go ahead and import this project into my existing one. File -> Import Project gives me 3 options, create project from existing sources, import from external model (mavern), import from external model (gradle) If I choose gradle it builds couchdbase-lite-android then opens it into it's own Android Studio window, it definitely doesn't get imported into my current project. Any ideas...
Try going to File -> "Import Module" instead of "Import Project". In Android Studio, an entire window is a project. Each top-level item in that project is called a module. Coming from the Eclipse world, it'd be: Eclipse workspace = Android Studio project Eclipse project = Android Studio module
Couchbase
17,625,219
21
Write operations on Couchbase accept a parameter cas (create and set). Also the return result object of any non-data fetching query has cas property in it. I Googled a bit and couldn't find a good conceptual article about it. Could anyone tell me when to use CAS and how to do it? What should be the common work-flow of using CAS? My guess is we need to fetch CAS for the first write operation and then pass it along with next write. Also we need to update it using result's CAS. Correct me if I am wrong.
CAS actually stands for check-and-set, and is a method of optimistic locking. The CAS value is associated with each document which is updated whenever the document changes - a bit like a revision ID. The intent is that instead of pessimistically locking a document (and the associated lock overhead) you just read it's CAS value, and then only perform the write if the CAS matches. The general use-case is: Read an existing document, and obtain it's current CAS (get_with_cas) Prepare a new value for that document, assuming no-one else has modified the document (and hence caused the CAS to change). Write the document using the check_and_set operation, providing the CAS value from (1). Step 3 will only succeed (perform the write) if the document is unchanged between (1) and (3) - i.e. no other user has modified it in the meantime. Typically if (3) does fail you would retry the whole sequence (get_with_cas, modify, check_and_set). There's a much more detailed description of check-and-set in the Couchbase Developer Guide under Concurrent Document Mutations.
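A hedged sketch of that read-modify-write loop using the classic Java client's gets()/cas() calls (method names can differ between SDK versions; modify() is a placeholder for your own update logic):
// Optimistic-locking retry loop
CASResponse result;
do {
    CASValue<Object> current = client.gets("user::42");          // 1. read value + current CAS
    Object updated = modify(current.getValue());                 // 2. prepare the new value (placeholder)
    result = client.cas("user::42", current.getCas(), updated);  // 3. write only if the CAS is unchanged
} while (result == CASResponse.EXISTS);                          // EXISTS => someone else wrote first, retry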
Couchbase
22,601,503
21
In a Phonegap offline/online project: What is the difference between using PouchDB and using CouchBase Lite with the new LiteGap plugin? Are they two different solutions to the same problem? Can the PouchDB API be used to interact with a local CouchBase Lite database?
After some research, and since this is a relatively new topic, I thought it would be interesting to share my experience by replying to my own question: What is the difference between using PouchDB and using CouchBase Lite with the new LiteGap plugin? PouchDB can create a local database (websql or IndexedDB) on the device and replicate it with an external CouchDB. It can also be used as a client for an external CouchDB. Couchbase Lite creates an iOS/Android database on the device, accessible by default on http://localhost:5984. You can then replicate the local Couchbase Lite with other external Couchbase/CouchDB services. LiteGap allows you to create and use a Couchbase Lite db in a PhoneGap project. Both solutions are available to use in a Phonegap project. Are they two different solutions to the same problem? In short, no. PouchDB is cross-platform so you can use it in a web project and also in a hybrid app. It also provides a useful API to interact directly with a local db or an external CouchDB. Being based on websql/IndexedDB technologies, you have storage limitations that keep asking the user to allow more local storage space for your web/app. Couchbase Lite is a native iOS/Android solution that sets up a Couchbase database on the device localhost. Together with the LiteGap plugin, you can use it in a Phonegap project. Can the PouchDB API be used to interact with a local CouchBase Lite database? Yes, but some functionality was not working as expected in my tests. First, Couchbase Lite has no javascript HTTP API, so I thought to use Pouch to act just as a client. PouchDB can use external Couch services, so we set up Pouch to use the device's Couchbase Lite on localhost:5984. Now, with Pouch you can create a database, put() or replicate from local to the cloud. However, I found problems replicating from cloud to local using Pouch's replicate.from method. One workaround is to set up 2-way replication using good old $.ajax to post to the device's http://localhost:5984/_replicate as if you were using node curl (passing object data with source, target, continuous etc..). I hope this helps someone making decisions on which technologies to use when creating an offline/online syncable hybrid app.
Couchbase
18,416,289
20
I've been browsing the net trying to find a solution that will allow us to generate unique IDs in a regionally distributed environment. I looked at the following options (among others): SNOWFLAKE (by Twitter) It seems like a great solutions, but I just don't like the added complexity of having to manage another software just to create IDs; It lacks documentation at this stage, so I don't think it will be a good investment; The nodes need to be able to communicate to one another using Zookeeper (what about latency / communication failure?) UUID Just look at it: 550e8400-e29b-41d4-a716-446655440000; Its a 128 bit ID; There has been some known collisions (depending on the version I guess) see this post. AUTOINCREMENT IN RELATIONAL DATABASE LIKE MYSQL This seems safe, but unfortunately, we are not using relational databases (scalability preferences); We could deploy a MySQL server for this like what Flickr does, but again, this introduces another point of failure / bottleneck. Also added complexity. AUTOINCREMENT IN A NON-RELATIONAL DATABASE LIKE COUCHBASE This could work since we are using Couchbase as our database server, but; This will not work when we have more than one clusters in different regions, latency issues, network failures: At some point, IDs will collide depending on the amount of traffic; MY PROPOSED SOLUTION (this is what I need help with) Lets say that we have clusters consisting of 10 Couchbase Nodes and 10 Application nodes in 5 different regions (Africa, Europe, Asia, America and Oceania). This is to ensure that content is served from a location closest to the user (to boost speed) and to ensure redundancy in case of disasters etc. Now, the task is to generate IDs that wont collide when the replication (and balancing) occurs and I think this can be achieved in 3 steps: Step 1 All regions will be assigned integer IDs (unique identifiers): 1 - Africa; 2 - America; 3 - Asia; 4 - Europe; 5 - Ociania. Step 2 Assign an ID to every Application node that is added to the cluster keeping in mind that there may be up to 99 999 servers in one cluster (even though I doubt: just as a safely precaution). This will look something like this (fake IPs): 00001 - 192.187.22.14 00002 - 164.254.58.22 00003 - 142.77.22.45 and so forth. Please note that all of these are in the same cluster, so that means you can have node 00001 per region. Step 3 For every record inserted into the database, an incremented ID will be used to identify it, and this is how it will work: Couchbase offers an increment feature that we can use to create IDs internally within the cluster. To ensure redundancy, 3 replicas will be created within the cluster. Since these are in the same place, I think it should be safe to assume that unless the whole cluster is down, one of the nodes responsible for this will be available, otherwise a number of replicas can be increased. Bringing it all together Say a user is signing up from Europe: The application node serving the request will grab the region code (4 in this case), get its own ID (say 00005) and then get an incremented ID (1) from Couchbase (from the same cluster). We end up with 3 components: 4, 00005,1. Now, to create an ID from this, we can just join these components into 4.00005.1. To make it even better (I'm not too sure about this), we can concatenate (not add them up) the components to end up with: 4000051. In code, this will look something like this: $id = '4'.'00005'.'1'; NB: Not $id = 4+00005+1;. Pros IDs look better than UUIDs; They seem unique enough. 
Even if a node in another region generated the same incremented ID and has the same node ID as the one above, we always have the region code to set them apart; They can still be stored as integers (probably Big Unsigned integers); It's all part of the architecture, no added complexities. Cons No sorting (or is there)? This is where I need your input (most) I know that every solution has flaws, and possibly more that what we see on the surface. Can you spot any issues with this whole approach? Thank you in advance for your help :-) EDIT As @DaveRandom suggested, we can add the 4th step: Step 4 We can just generate a random number and append it to the ID to prevent predictability. Effectively, you end up with something like this: 4000051357 instead of just 4000051.
You are concerned about IDs for two reasons: Potential for collisions in a complex network infrastructure Appearance Starting with the second issue, Appearance. While a UUID certainly isn't a great beauty when it comes to an identifier, there are diminishing returns as you introduce a truly unique number across a complex data center (or data centers) as you mention. I'm not convinced that there is a dramatic change in perception of an application when a long number versus a UUID is used for example in a URL to a web application. Ideally, neither would be shown, and the ID would only ever be sent via Ajax requests, etc. While a nice clean memorable URL is preferable, it's never stopped me from shopping at Amazon (where they have absolutely hideous URLs). :) Even with your proposal, the identifiers, while they would be shorter in the number of characters than a UUID, they are no more memorable than a UUID. So, the appearance likely would remain debatable. Talking about the first point..., yes, there are a few cases where UUIDs have been known to generate conflicts. While that shouldn't happen in a properly configured and consistently obtained architecture, I can see how it might happen (but I'm personally a lot less concerned about it). So, if you're talking about alternatives, I've become a fan of the simplicity of the MongoDB ObjectId and its techniques for avoiding duplication when generating an ID. The full documentation is here. The quick relevant pieces are similar to your potential design in several ways: ObjectId is a 12-byte BSON type, constructed using: a 4-byte value representing the seconds since the Unix epoch, a 3-byte machine identifier, a 2-byte process id, and a 3-byte counter, starting with a random value. The timestamp can often be useful for sorting. The machine identifier is similar to your application server having a unique ID. The process id is just additional entropy, and finally to prevent conflicts, there is a counter that is auto incremented whenever the timestamp is the same as the last time an ObjectId is generated (so that ObjectIds can be created rapidly). ObjectIds can be generated on the client or on the database. Further, ObjectIds do take up fewer bytes than a UUID (but only 4). Of course, you could not use the timestamp and drop 4 bytes. For clarification, I'm not suggesting you use MongoDB, but be inspired by the technique they use for ID generation. So, I think your solution is decent (and maybe you want to be inspired by MongoDB's implementation of a unique ID) and doable. As to whether you need to do it, I think that's a question only you can answer.
Couchbase
18,248,644
16
I'm unclear about the requirements for using Couchbase-lite. Is it possible to use Couchbase-lite with CouchDB? Or does Couchbase-lite require Couchbase Server and Sync Gateway? Thanks!
According to the documentation, it is 100% compatible with both CouchDB and Couchbase: http://docs.couchbase.com/couchbase-lite/cbl-concepts/#can-couchbase-lite-replicate-with-apache-couchdb-servers
I also found this blog post on syncing iOS with CouchDB, which might be useful: http://blog.lunarlogic.io/2013/synchronization-using-couchdb/
Edit: The official Couchbase link above isn't valid anymore; however, the following official article from Couchbase lists the other databases that are compatible (CouchDB, PouchDB, Cloudant): http://developer.couchbase.com/documentation/mobile/current/develop/guides/couchbase-lite/native-api/replication/index.html
Couchbase
20,489,162
15
I have a Debian server with about 16GB of RAM that I'm using with nginx, several heavy MySQL databases, and some custom PHP apps. I'd like to implement a memory cache between MySQL and PHP, but the databases are too large to store everything in RAM. From my research so far, an LRU cache seems like the better fit. Does this rule out Redis? Couchbase is also a consideration.
Supposing there is a single server running the nginx + PHP + MySQL instances with some free RAM remaining, the easiest way to use that RAM to cache data is simply to increase the buffer caches of the MySQL instances. Databases already use LRU-like mechanisms to manage their buffers. Now, if you need to move part of the processing away from the databases, then pre-caching may be an option. Before looking at memcached/Redis, a shared-memory cache integrated with PHP such as APC will be efficient provided only one server is considered (actually more efficient than Redis/memcached). Both memcached and Redis can be considered for remote caching (i.e. to share the cache between various nodes). I would not rule out Redis for this: it can easily be configured for this purpose. Both allow you to define a memory limit and handle the cache with LRU-like behavior. However, I would not use Couchbase here, which is an elastic (i.e. meant to be used on several nodes) NoSQL key/value store (i.e. not a cache). You could probably move some data from your MySQL instances to a Couchbase cluster, but using it just for caching is over-engineering, IMO.
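As a sketch of the Redis option mentioned above: Redis can be capped with a memory limit and an LRU eviction policy (maxmemory and maxmemory-policy allkeys-lru in redis.conf), and the application layer then checks the cache before hitting MySQL. The snippet below shows that read-through pattern in Python with the redis-py client, purely for illustration; the key naming and TTL are arbitrary choices, and the query function is a hypothetical stand-in for the real database call.

import json
import redis

r = redis.Redis(host="localhost", port=6379)

def fetch_user(user_id, run_mysql_query):
    """Read-through cache: try Redis first, fall back to MySQL, then populate."""
    key = f"user:{user_id}"           # arbitrary key naming convention
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)
    row = run_mysql_query(user_id)    # hypothetical stand-in for the real DB call
    # The TTL is optional: with maxmemory-policy allkeys-lru, Redis evicts the
    # least recently used keys on its own once the memory limit is reached.
    r.setex(key, 3600, json.dumps(row))
    return row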
Couchbase
9,213,498
14
For an iPhone app I decided to give a NoSQL DB a try, because of the nature of the data I need to store locally. The most sophisticated solution I found is Couchbase Mobile, but it seems that the project is still only in beta. Is it too soon to use it?
Couchbase Mobile is currently beta, with plans for a GA/1.0 at the end of September (2011). By the next developer preview release at the end of August the iOS version should be entirely ready for you to start development with. The Android version is lagging a little in terms of documentation, but should also be ready for active development at the end of August. If you are hard core, you can start today, but if you wait a few weeks we'll be ready for even the non-hardcore.
Couchbase
7,063,402
11
Is there any stable NoSQL database for iOS other than Couchbase? Couchbase is currently in beta, which I don't want to use in an app with many users (although I like Couchbase very much). Any suggestions? Thanks!
There are several projects to get a CouchDB-compatible API available on mobile devices. TouchDB, a native iOS build PouchDB, an HTML5 implementation, for web and PhoneGap apps
Couchbase
10,471,867
11
I'm using iOS Couchbase Mobile to have a couchdb server on an iPad that uses replication to sync with a server on https://cloudant.com. cloudant uses HTTPS, and when I try replicating on the iPad, i just get spammed by errors. This is a known issue, as seen on this FAQ article. It recommends using 1.0.2 to fix the issue, but how do i know if I'm running it on Erlang R14? Version Info On myserver.cloudant.com: {"couchdb":"Welcome","version":"1.0.2","cloudant_build":"1.3.49"} On iOS Couchbase Mobile: {"couchdb":"Welcome","version":"2.0.0-beta"} (For some reason it says I'm using 2.0.0-beta on iOS, even though I downloaded this version (2.0.1).) Here's the kind of error that I get: [info] [<0.327.0>] Retrying HEAD request to https://user:[email protected]/mydb/ in 16.0 seconds due to error {'EXIT', {no_ssl_server, {gen_server,call, [<0.347.0>, {send_req, {{url, "https://user:[email protected]/mydb/", "mycompany.cloudant.com",443,"mycompany","password", "/mydb/",https,hostname}, [{"Accept","application/json"}, {"User-Agent","CouchDB/2.0.0-beta"}], head,<<>>, [{response_format,binary}, {inactivity_timeout,30000}, {is_ssl,true}, {socket_options,[{keepalive,true},{nodelay,false}]}, {ssl_options,[{depth,3},{verify,verify_none}]}], infinity}}, infinity]}}}
The issue of enabling https connection between CouchBase Mobile for iOS and another CouchDB/CouchBase instance is also discussed here: https://groups.google.com/d/msg/mobile-couchbase/DDHSisVWEyo/hxtlVRhQtwkJ Apparently it can be done.
Couchbase
10,521,451
11
I am currently building a website around a Couchbase database, and if it gets popular it is likely that I will be hosting the site and database on more than 2 machines at some stage in the future. It's a fair way off still, so I would like some information to help me decide which direction to go from here. My questions are: does anybody know if I am allowed to deploy the free edition (CE) of Couchbase on more than 2 nodes? If the answer varies depending on the version, could you please tell me which version permits this (if any)? If it is not possible to deploy the free version of Couchbase on more than 2 nodes, could someone explain whether this is prevented by software, or by law? I found the following statement on the Couchbase website: "Community Edition (CE) are best for non-commercial developers, where taking some time to figure out or resolve issues doesn't result in major problems. There are no constraints on using these binaries on production systems", which makes it sound like the software can be installed on as many machines as desired in production without a requirement to pay. But then another Couchbase page reads: "Looking for the free version? Our Enterprise Edition Free version offers the full functionality of Couchbase Server, with free unlimited use in development and up to two nodes in a production cluster." So I'm confused. Maybe this last one is just referring to the cost of support and not any cost associated with the software itself?
You can deploy Community Edition on 2 nodes and more. Restrictions exist only for Enterprise Edition: http://www.couchbase.com/couchbase-support
Couchbase
12,685,801
11
I am preparing to build an Android/iOS app that will require me to make complex polygon and containment geospatial queries. I like Apache Cassandra's no single point of failure, fault tolerance and data center awareness. Cassandra does not have direct support for geospatial queries (that I am aware of) but MongoDB and Couchbase Server do. MongoDB has scaling issues and I'm not sure if Couchbase would be a better alternative than Cassandra with Solr or Elasticsearch. Would I be making a mistake by going with Datastax Enterprise (DSE), Cassandra and Elasticsearch over Couchbase Server? Will there be a noticeable difference in load times for web pages with the Cassandra/ES back end vs. Couchbase?
Aerospike just released Server Community Edition 3.7.0, which includes Geospatial Indexes as a feature. Aerospike can now store GeoJSON objects and execute various queries, allowing an application to track rapidly changing Geospatial objects or simply ask the question of “what’s near me”. Internally, we use Google’s S2 library and Geo Hashing to encode and index these points and regions. The following types of queries are supported: Points within a Region Points within a Radius Regions a Point is in This can be combined with a User-Defined Function (UDF) to filter the results – i.e., to further refine the results to only include Bars, Restaurants or Places of Worship near you – even ones that are currently open or have availability. Additionally, finding the Region a point is in allows, for example, an advertiser to figure out campaign regions that the mobile user is in – and therefore place a geospatially targeted advertisement. Internally, the same storage mechanisms are used, which enables highly concurrent reads and writes to the Geospatial data or other data held on the record. Geospatial data is a lot of fun to play around with, so we have included a set of examples based on Open Street Map and Yelp Dataset Challenge data. Geospatial is an Experimental feature in the 3.7.0 release. It’s meant for developers to try out and provide feedback. We think the APIs are good, but in an experimental feature, based on the feedback from the community, Aerospike may choose to modify these APIs by the time this feature is GA. It’s not intended for Production usage right now (though we know some developers will go directly to Production ...)
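To make the GeoJSON side of this concrete, the shapes involved are plain GeoJSON objects: a Point for "where am I" and a Polygon for a campaign region. A rough Python sketch of building them is below; it deliberately stops short of the Aerospike client calls themselves (storing the value in a geo-indexed bin and running a geo predicate), since the exact client API depends on the SDK version you use, and the coordinates here are arbitrary.

import json

# A user's current location (note GeoJSON order: [longitude, latitude])
point = {"type": "Point", "coordinates": [-122.0, 37.5]}

# A rectangular campaign region as a closed Polygon ring
region = {
    "type": "Polygon",
    "coordinates": [[
        [-122.5, 37.0],
        [-121.5, 37.0],
        [-121.5, 38.0],
        [-122.5, 38.0],
        [-122.5, 37.0],   # first and last vertex must match to close the ring
    ]],
}

# These JSON values are what a geo-indexed bin would hold; the three query types
# described above (points within a region, points within a radius, regions a
# point is in) are then expressed as predicates against that bin.
print(json.dumps(point))
print(json.dumps(region))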
Couchbase
24,121,192
11
I am creating a demo project for reative programming with springboot and Couchbase. I have set the below properties in application.properties file: spring.couchbase.bootstrap-hosts=localhost spring.couchbase.bucket.name=vanquish spring.couchbase.bucket.password= spring.data.couchbase.repositories.type=auto As I don't have any bucket level password while creating it. Still, service is not able to start because of below exception: Caused by: com.couchbase.client.java.error.InvalidPasswordException: Passwords for bucket "vanquish" do not match. at com.couchbase.client.java.CouchbaseAsyncCluster$OpenBucketErrorHandler.call(CouchbaseAsyncCluster.java:651) ~[java-client-2.5.9.jar:na] at com.couchbase.client.java.CouchbaseAsyncCluster$OpenBucketErrorHandler.call(CouchbaseAsyncCluster.java:634) ~[java-client-2.5.9.jar:na] at rx.internal.operators.OperatorOnErrorResumeNextViaFunction$4.onError(OperatorOnErrorResumeNextViaFunction.java:140) ~[rxjava-1.3.8.jar:1.3.8] at rx.internal.operators.OnSubscribeMap$MapSubscriber.onError(OnSubscribeMap.java:88) ~[rxjava-1.3.8.jar:1.3.8] at rx.observers.Subscribers$5.onError(Subscribers.java:230) ~[rxjava-1.3.8.jar:1.3.8] at rx.internal.operators.OperatorObserveOn$ObserveOnSubscriber.checkTerminated(OperatorObserveOn.java:273) ~[rxjava-1.3.8.jar:1.3.8] at rx.internal.operators.OperatorObserveOn$ObserveOnSubscriber.call(OperatorObserveOn.java:216) ~[rxjava-1.3.8.jar:1.3.8] at rx.internal.schedulers.ScheduledAction.run(ScheduledAction.java:55) ~[rxjava-1.3.8.jar:1.3.8] I tried searching all properties but not able to find any relevant property to set username and password or setting the password in couchbase for the bucket.
Assuming that you're using a couchBase version 5.x: According to the couchBase documentation: To access cluster-resources, Couchbase Server users — administrators and applications — must specify a username and password. Steps to follow: Open your couchBase admin console: http://<couchBase-host>:8091/ui/index.html#!/overview Click on 'Security' click on 'Add user' In the 'Add user' form add these parameters: User Name: This must be the bucket name, in your case vanquish. Password: Set the password that you want, this must be the value set in spring.couchbase.bucket.password. Roles: Go to Roles -> Bucket Roles -> Bucket Admin and select your bucket, in your case vanquish. Click on 'Save'. After doing this and set the password in spring.couchbase.bucket.password it should work.
Couchbase
51,496,589
11
For example, I have created a cluster with 1GB of RAM per node. After some time I want to increase the RAM for the cluster, for example to 2GB per node. I assumed that I could do that through the Couchbase Console, but the "edit" button is disabled for every node. Can someone advise me on a solution? Thanks.
You can do so with the couchbase-cli utility that is installed with Couchbase. This tool should be located with the other Couchbase binaries on your system (e.g., C:\Program Files\Couchbase\Server\bin). From the command line: c:\>couchbase-cli cluster-init -c <CLUSTER_IP> -u <USERNAME> -p <PASSWORD> --cluster-init-ramsize=<NEW_RAM_SIZE> With actual values: c:\>couchbase-cli cluster-init -c 127.0.0.1:8091 -u Administrator -p s3cr3t --cluster-init-ramsize=4096 More information may be found at http://www.couchbase.com/docs/couchbase-manual-1.8/couchbase-admin-cmdline-couchbase-cli.html
Couchbase
11,210,719
10
I have one Job Distributor that publishes messages on different channels. Further, I want to have two (and more in the future) Consumers that work on different tasks and run on different machines. (Currently I have only one and need to scale it.)

Let's name these tasks (just examples):
FIBONACCI (generates Fibonacci numbers)
RANDOMBOOKS (generates random sentences to write a book)

Those tasks run up to 2-3 hours and should be divided equally among the Consumers. Every Consumer can have x parallel threads for working on these tasks. So I say (those numbers are just examples and will be replaced by variables):
Machine 1 can consume 3 parallel jobs for FIBONACCI and 5 parallel jobs for RANDOMBOOKS
Machine 2 can consume 7 parallel jobs for FIBONACCI and 3 parallel jobs for RANDOMBOOKS

How can I achieve this? Do I have to start x threads for each channel to listen on, on each Consumer? When do I have to ack? My current approach for only one Consumer is: start x threads for each task; each thread is a DefaultConsumer implementing Runnable. In the handleDelivery method, I call basicAck(deliveryTag, false) and then do the work.

Further: I want to send some tasks to a special consumer. How can I achieve that in combination with the fair distribution mentioned above?

This is my code for publishing:

String QUEUE_NAME = "FIBONACCI";
Channel channel = this.clientManager.getRabbitMQConnection().createChannel();
channel.queueDeclare(QUEUE_NAME, true, false, false, null);
channel.basicPublish("", QUEUE_NAME, MessageProperties.BASIC, Control.getBytes(this.getArgument()));
channel.close();

This is my code for the Consumer:

public final class Worker extends DefaultConsumer implements Runnable {

    @Override
    public void run() {
        try {
            this.getChannel().queueDeclare(this.jobType.toString(), true, false, false, null);
            this.getChannel().basicConsume(this.jobType.toString(), this);
            this.getChannel().basicQos(1);
        } catch (IOException e) {
            // catch something
        }
        while (true) {
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                Control.getLogger().error("Exception!", e);
            }
        }
    }

    @Override
    public void handleDelivery(String consumerTag, Envelope envelope, AMQP.BasicProperties properties, byte[] bytes) throws IOException {
        String routingKey = envelope.getRoutingKey();
        String contentType = properties.getContentType();
        this.getChannel().basicAck(deliveryTag, false); // Is this right?
        // Start new Thread for this task with my own ExecutorService
    }
}

The Worker class is started twice in this case: once for FIBONACCI and once for RANDOMBOOKS.

UPDATE
As the answers stated, RabbitMQ would not be the best solution for this; a Couchbase or MongoDB pull approach would be better. I'm new to those systems; is there anybody who could explain to me how this would be achieved?
Here's a conceptual view of how I would build this on Couchbase. You have some number of machines to process jobs, and some number of machines (maybe the same ones) creating jobs to do.

You can create a document for each job in a bucket in Couchbase (and set its type to "job" or something if you're mixing it with other data in that bucket). Each job description, along with the specific commands to be done, could include the time it was created, the time it is due (if there's a specific time due) and some sort of generated work value. This work value would be in arbitrary units. Each consumer of jobs would know how many work units it can do at a time, and how many are available (because other workers may be working). So a machine with, say, 10 work units of capacity that has 6 work units being done would do a query looking for jobs of 4 work units or less.

In Couchbase there are views, which are incrementally updated map/reduce jobs; I think you'll only need the map phase here. You'd write a view that lets you query on time due, time entered into the system and number of work units. This way you can get "the most overdue job of 4 work units or less." This kind of query, as capacity frees up, will get the most overdue jobs first, though you could get the largest overdue job, and if there are none, then the largest not-overdue job. (Here "overdue" is the delta between the current time and the due date on the job.) Couchbase views allow for very sophisticated queries like this. And while they are incrementally updated, they are not perfectly realtime. Thus you wouldn't be looking for a single job, but a list of job candidates (ordered however you wish).

So, the next step would be to take the list of job candidates and check a second location, possibly a membase bucket (i.e. RAM-cached, non-persistent), for a lock file. The lock file would have multiple phases (here you do a little bit of partition-resolving logic, using CRDTs or whatever methods work best for your needs). Since this bucket is RAM-based, it's faster than views and will have less lag from total state.

If there's no lock file, then create one with a status flag of "provisional". If another worker gets the same job and sees the lock file, it can just skip that job candidate and do the next one on the list. If, somehow, two workers attempt to create lock files for the same job, there will be a conflict. In the case of a conflict you can just punt. Or you can have logic where each worker makes an update to the lock file (CRDT resolution, so make these updates idempotent so that siblings can be merged), possibly putting in a random number or some priority figure.

After a specified period of time (probably a few seconds) the lock file is checked by the worker, and if it has not had to engage in any race-resolution changes, it changes the status of the lock file from "provisional" to "taken". It then updates the job itself with a status of "taken" or some such, so that it will not show up in the views when other workers are looking for available jobs.

Finally, you'll want to add another step where, before doing the query to get the job candidates described above, you do a special query to find jobs that were taken but where the worker involved has died (e.g. jobs that are overdue). One way to know when workers die is that the lock file put in the membase bucket should have an expiration time that will cause it to disappear eventually. Possibly this time could be short, and the worker simply touches it to update the expiration (this is supported in the Couchbase API). If a worker dies, eventually its lock files will dissipate, and the orphaned jobs will be marked as "taken" but have no lock file, which is a condition the workers looking for jobs can check for.

So, in summary: each worker does a query for orphaned jobs; if there are any, it checks to see if there is a lock file for them in turn, and if none exists, it creates one and follows the normal locking protocol as above. If there are no orphaned jobs, then it looks for overdue jobs and follows the locking protocol. If there are no overdue jobs, then it just takes the oldest job and follows the locking protocol. Of course this will also work if there's no such thing as "overdue" for your system, and if timeliness doesn't matter, then instead of taking the oldest job you can use another method.

One other method might be to create a random value between 1 and N, where N is a reasonably large number, say 4x the number of workers, and have each job be tagged with that value. Each time a worker goes looking for a job, it could roll the dice and see if there are any jobs with that number. If not, it would do so again until it finds a job with that number. This way, instead of multiple workers contending for the few "oldest" or highest-priority jobs, with more likelihood of lock contention, they would be spread out, at the cost of time in the queue being more random than in a FIFO situation. The random method could also be applied in the situation where you have load values that have to be accommodated (so that a single machine doesn't take on too much load): instead of taking the oldest candidate, just take a random candidate from the list of viable jobs and try to do it.

Edit to add: In the step above where I say "possibly putting in a random number", what I mean is this: if the workers know the priority (e.g. which one needs to do the job most), they can put a figure representing this into the file. If there's no concept of "needing" the job, then they can both roll the dice. They update this file with their roll of the dice. Then both of them can look at it and see what the other rolled. If they lost, then they punt and the other worker knows it has the job. This way you can resolve which worker takes a job without a lot of complex protocol or negotiation. I'm assuming both workers are hitting the same lock file here; it could also be implemented with two lock files and a query that finds all of them. If, after some amount of time, no worker has rolled a higher number (and new workers thinking about the job would know that others are already rolling on it, so they'd skip it), you can take the job safely, knowing you are the only worker working on it.
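A minimal Python sketch of the worker loop described above, assuming the design as stated: every callable passed in is a hypothetical stand-in (the real versions would be Couchbase view queries and key/value operations with TTLs); the point is only the claim sequence of orphaned jobs first, then overdue jobs, then the oldest job, taking one only after a provisional lock survives a short settling period.

import random
import time

SETTLE_SECONDS = 3  # how long a provisional lock must survive before we trust it

def claim_and_run(free_units, query_candidates, try_provisional_lock,
                  lock_still_mine, mark_taken, run_job, release_lock):
    """One pass of the worker loop; every callable argument is a hypothetical helper."""
    for phase in ("orphaned", "overdue", "any"):
        # e.g. a view query ordered by how overdue the job is, capped by free capacity
        for job in query_candidates(phase=phase, max_work_units=free_units):
            lock = try_provisional_lock(job)      # write a "provisional" lock doc
            if lock is None:
                continue                          # someone else got there first
            time.sleep(SETTLE_SECONDS + random.random())
            if not lock_still_mine(job, lock):    # lost the race resolution
                continue
            mark_taken(job, lock)                 # flip lock + job status to "taken"
            run_job(job)
            release_lock(job, lock)
            return job
    return None  # nothing claimable on this pass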
Couchbase
12,277,067
10
In a Couchbase URL, e.g. server:port/pools/default, what exactly is a Couchbase pool? Will it always be default, or can we change it? There is some text about it at http://www.couchbase.com/docs/couchbase-manual-1.8/couchbase-admin-restapi-key-concepts-resources.html but I cannot really get it 100%. Can anyone please explain?
A long time ago the Couchbase engineers intended to build out a concept of having pools similar to zfs pools, but for a distributed database. The feature isn't dead, but just never got much attention compared to other database features that needed to be added. What ended up happening was that the pools/default just ended up being a placeholder for something that the engineers wanted to build in the future. In the old days the idea was that a pool would be a subset of buckets that was assigned to a subset of nodes in the cluster and that this would help with management of large clusters (100+ nodes). So right now I would say don't worry about the whole pools concept because in the current (2.x releases) this is a placeholder that doesn't have any special meaning. In the future though there will likely be a feature around the pools concept and it will be well documented. Please also note that no decisions have been made about what Couchbase will do with pools, how exactly they will work, or when they will be implemented. This post is only meant to give the history for why the pools concept exists.
Couchbase
16,978,324
10
I'm learning Couchbase, now on version 3.x. My question is: when should I use an N1QL query vs. a View query? And are there performance differences between them?
Note: I have a situation: a bucket with two document types for my traveling app, Route and City. A Route doc holds the information about the traveling route and an array of City ids that are part of it, and another doc holds each city's information (each city has its own doc). Example:
//Bucket : "Traveling App"
{
  "type" : "route",
  "name" : "The Great Adventure",
  "cities" : ["234", "h4345", "h42da"]
}
{
  "type" : "city",
  "name" : "Little Town",
  "UID" : "234"
}
When I query for a certain traveling route, should I do an N1QL query or a View query? Because I would have to first open the Route doc, get the cities array, then get each City doc. And I think this architecture would be best, because some routes have very few cities and others have a lot.
N1QL looks promising for your data. Even though it is, as another poster points out, in developer preview, it's worth exploring. You can NEST traveling_app with itself to get all city docs 'nested' with each route: SELECT r.name, c FROM traveling_app r NEST traveling_app c ON KEYS r.cities; To get say the city names for a particular route, join the traveling_app with itself using the route's cities as keys: SELECT c.name as city_name FROM traveling_app r JOIN traveling_app c ON KEYS r.cities WHERE r.name = "The Great Adventure"; These queries will operate the same, regardless of how many cities a route has.
Couchbase
28,500,633
10
In Couchbase, is it possible to retrieve multiple documents using a key prefix as the query string, so that it returns all the key-values whose keys start with the supplied prefix (something like a LIKE operator), without using Views or queries/indices? I am designing my keys the way shown in Slide 51 of this presentation: http://www.slideshare.net/Couchbase/couchbase-103-data-modeling
If you don't want to use a view or n1ql query, there is no way to retrieve documents without knowing their exact keys. That is, you can only retrieve your prefix-based keys if you have a way to generate the possible keys on the client side in advance, e.g. User-1, User-2 ... User-n. You can, however, do the sort of prefix query you're talking about in n1ql without creating any additional indexes, because with n1ql you will already have a primary index on all the document keys. So you can do something like "SELECT META(myBucket).id FROM myBucket WHERE META(myBucket).id LIKE "prefix%";
Couchbase
30,573,992
10
I've been testing Couchbase 5 and created a bucket called fp-conversion-data which has some JSON data in it. I have been trying to run some simple queries such as: SELECT * FROM fp-conversion-data limit 5; Instead of getting the expected results, I keep getting this error: [ { "code": 4010, "msg": "FROM expression term must have a name or alias", "query_from_user": "SELECT * FROM fp-conversion-data limit 5;" } ]
I think the problem is that you have dashes in the name of the bucket. Use backticks (`) around the bucket name. Try this: SELECT * FROM `fp-conversion-data` LIMIT 5;
Couchbase
48,510,648
10
I have two repository interfaces that connect to MongoDB and Couchbase:
public interface UserRepositoryMongo extends MongoRepository<User, Long> {
}
public interface UserRepositoryCouch extends CouchbasePagingAndSortingRepository<User, Long> {
}
Is there a way to interchangeably @Autowire these repositories into UserService based on a condition? The condition will be inside a property file: application.properties.
Note: These repositories can have custom methods too.
We can use either ConditionalOnProperty or ConditionalOnExpression to switch between two different repository implementation. If we want to control the autowiring with simple property presence/absence or property value, then ConditionalOnProperty can be used. If complex evaluation is required, then we can use ConditionalOnExpression. ConditionalOnProperty (presence/absence of a property) @Qualifier("specificRepo") @ConditionalOnProperty("mongo.url") public interface UserRepositoryMongo extends MongoRepository<User, Long>{ } @Qualifier("specificRepo") @ConditionalOnProperty("couch.url") public interface UserRepositoryCouch extends CouchbasePagingAndSortingRepository<User, Long>{ } ConditionalOnProperty (based on value) @ConditionalOnProperty("repo.url", havingValue="mongo", matchIfMissing = true) //this will be default implementation if no value is matching public interface UserRepositoryMongo extends MongoRepository<User, Long> { } @ConditionalOnProperty("repo.url", havingValue="couch") public interface UserRepositoryCouch extends CouchbasePagingAndSortingRepository<User, Long> { } ConditionalOnExpression @ConditionalOnExpression("#{'${repository.url}'.contains('couch')}") public interface UserRepositoryCouch extends CouchbasePagingAndSortingRepository<User, Long> { } UPDATE Use CrudRepository/Repository type to inject based on your requirement. public class DemoService { @Autowired @Qualifier("specificRepo") private CrudRepository repository; } Based on bean created, either UserRepositoryMongo or UserRepositoryCouch will be autowired. Make sure only one bean is instantiated to avoid ambiguity error.
Couchbase
62,124,675
10
I am evaluating what might be the best migration option. Currently, I am on a sharded MySQL (horizontal partition), with most of my data stored in JSON blobs. I do not have any complex SQL queries (already migrated away after since I partitioned my db). Right now, it seems like both MongoDB and Cassandra would be likely options. My situation: Lots of reads in every query, less regular writes Not worried about "massive" scalability More concerned about simple setup, maintenance and code Minimize hardware/server cost
Lots of reads in every query, fewer regular writes Both databases perform well on reads where the hot data set fits in memory. Both also emphasize join-less data models (and encourage denormalization instead), and both provide indexes on documents or rows, although MongoDB's indexes are currently more flexible. Cassandra's storage engine provides constant-time writes no matter how big your data set grows. Writes are more problematic in MongoDB, partly because of the b-tree based storage engine, but more because of the multi-granularity locking it does. For analytics, MongoDB provides a custom map/reduce implementation; Cassandra provides native Hadoop support, including for Hive (a SQL data warehouse built on Hadoop map/reduce) and Pig (a Hadoop-specific analysis language that many think is a better fit for map/reduce workloads than SQL). Cassandra also supports use of Spark. Not worried about "massive" scalability If you're looking at a single server, MongoDB is probably a better fit. For those more concerned about scaling, Cassandra's no-single-point-of-failure architecture will be easier to set up and more reliable. (MongoDB's global write lock tends to become more painful, too.) Cassandra also gives a lot more control over how your replication works, including support for multiple data centers. More concerned about simple setup, maintenance and code Both are trivial to set up, with reasonable out-of-the-box defaults for a single server. Cassandra is simpler to set up in a multi-server configuration since there are no special-role nodes to worry about. If you're presently using JSON blobs, MongoDB is an insanely good match for your use case, given that it uses BSON to store the data. You'll be able to have richer and more queryable data than you would in your present database. This would be the most significant win for Mongo.
Cassandra
2,892,729
764
I have been reading articles around the net to understand the differences between the following key types. But it just seems hard for me to grasp. Examples will definitely help make understanding better. primary key, partition key, composite key clustering key
There is a lot of confusion around this, I will try to make it as simple as possible. The primary key is a general concept to indicate one or more columns used to retrieve data from a Table. The primary key may be SIMPLE and even declared inline: create table stackoverflow_simple ( key text PRIMARY KEY, data text ); That means that it is made by a single column. But the primary key can also be COMPOSITE (aka COMPOUND), generated from more columns. create table stackoverflow_composite ( key_part_one text, key_part_two int, data text, PRIMARY KEY(key_part_one, key_part_two) ); In a situation of COMPOSITE primary key, the "first part" of the key is called PARTITION KEY (in this example key_part_one is the partition key) and the second part of the key is the CLUSTERING KEY (in this example key_part_two) Please note that both partition and clustering key can be made by more columns, here's how: create table stackoverflow_multiple ( k_part_one text, k_part_two int, k_clust_one text, k_clust_two int, k_clust_three uuid, data text, PRIMARY KEY((k_part_one, k_part_two), k_clust_one, k_clust_two, k_clust_three) ); Behind these names ... The Partition Key is responsible for data distribution across your nodes. The Clustering Key is responsible for data sorting within the partition. The Primary Key is equivalent to the Partition Key in a single-field-key table (i.e. Simple). The Composite/Compound Key is just any multiple-column key Further usage information: DATASTAX DOCUMENTATION Small usage and content examples ***SIMPLE*** KEY: insert into stackoverflow_simple (key, data) VALUES ('han', 'solo'); select * from stackoverflow_simple where key='han'; table content key | data ----+------ han | solo COMPOSITE/COMPOUND KEY can retrieve "wide rows" (i.e. you can query by just the partition key, even if you have clustering keys defined) insert into stackoverflow_composite (key_part_one, key_part_two, data) VALUES ('ronaldo', 9, 'football player'); insert into stackoverflow_composite (key_part_one, key_part_two, data) VALUES ('ronaldo', 10, 'ex-football player'); select * from stackoverflow_composite where key_part_one = 'ronaldo'; table content key_part_one | key_part_two | data --------------+--------------+-------------------- ronaldo | 9 | football player ronaldo | 10 | ex-football player But you can query with all keys (both partition and clustering) ... select * from stackoverflow_composite where key_part_one = 'ronaldo' and key_part_two = 10; query output key_part_one | key_part_two | data --------------+--------------+-------------------- ronaldo | 10 | ex-football player Important note: the partition key is the minimum-specifier needed to perform a query using a where clause. If you have a composite partition key, like the following eg: PRIMARY KEY((col1, col2), col10, col4)) You can perform query only by passing at least both col1 and col2, these are the 2 columns that define the partition key. The "general" rule to make query is you must pass at least all partition key columns, then you can add optionally each clustering key in the order they're set. so, the valid queries are (excluding secondary indexes) col1 and col2 col1 and col2 and col10 col1 and col2 and col10 and col 4 Invalid: col1 and col2 and col4 anything that does not contain both col1 and col2
Cassandra
24,949,676
659
I am a newbie in Cassandra and am trying to implement a toy application using it. I created a keyspace and a few column families in my Cassandra DB, but I forgot the name of my cluster. I am trying to find out whether there is any query that can list all the available keyspaces. Does anybody know such a query or command?
[cqlsh 4.1.0 | Cassandra 2.0.4 | CQL spec 3.1.1 | Thrift protocol 19.39.0] Currently, the command to use is: DESC[RIBE] keyspaces;
Cassandra
18,712,967
244
I scaffolded an app using JHipster: a microservice gateway using a Cassandra DB and built with Maven, which was building fine after scaffolding. I ran the gulp command for live reload of the UI and made a slight change to the navbar and home page, which was also working fine. Then I made some changes in the JSON files of home and navbar and some minor changes such as adding a search box, and it failed to reload. I stopped gulp and Maven and restarted them. Maven builds, but the site no longer loads on localhost, and when I run gulp it shows me this error:

gulp
fs.js:952
  return binding.readdir(pathModule._makeLong(path), options.encoding);
         ^

Error: ENOENT: no such file or directory, scandir '/home/hartron/foodnetteam/codebase/mandi/node_modules/node-sass/vendor'
    at Error (native)
    at Object.fs.readdirSync (fs.js:952:18)
    at Object.getInstalledBinaries (/home/hartron/foodnetteam/codebase/mandi/node_modules/node-sass/lib/extensions.js:121:13)
    at foundBinariesList (/home/hartron/foodnetteam/codebase/mandi/node_modules/node-sass/lib/errors.js:20:15)
    at foundBinaries (/home/hartron/foodnetteam/codebase/mandi/node_modules/node-sass/lib/errors.js:15:5)
    at Object.module.exports.missingBinary (/home/hartron/foodnetteam/codebase/mandi/node_modules/node-sass/lib/errors.js:45:5)
    at module.exports (/home/hartron/foodnetteam/codebase/mandi/node_modules/node-sass/lib/binding.js:15:30)
    at Object.<anonymous> (/home/hartron/foodnetteam/codebase/mandi/node_modules/node-sass/lib/index.js:14:35)
    at Module._compile (module.js:570:32)
    at Object.Module._extensions..js (module.js:579:10)

Could anyone tell me the solution for this?
I sometimes also get this error when starting my gulp server. My workaround is to just run: npm rebuild node-sass And then gulp starts nicely afterward.
Cassandra
45,251,645
182
What does going with a document based NoSQL option buy you over a KV store, and vice-versa?
A key-value store provides the simplest possible data model and is exactly what the name suggests: it's a storage system that stores values indexed by a key. You're limited to querying by key, and the values are opaque; the store doesn't know anything about them. This allows very fast read and write operations (a simple disk access), and I see this model as a kind of non-volatile cache (i.e. well suited if you need fast access by key to long-lived data).

A document-oriented database extends the previous model: values are stored in a structured format (a document, hence the name) that the database can understand. For example, a document could be a blog post plus its comments and tags, stored in a denormalized way. Since the data are transparent, the store can do more work (like indexing fields of the document), and you're not limited to querying by key. As I hinted, such databases allow you to fetch an entire page's data with a single query and are well suited for content-oriented applications (which is why big sites like Facebook or Amazon like them).

Other kinds of NoSQL databases include column-oriented stores, graph databases and even object databases, but this goes beyond the question.

See also:
Comparing Document Databases to Key-Value Stores
Analysis of the NoSQL Landscape
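As a toy illustration of the distinction, here is a minimal Python sketch: both "stores" are plain in-memory dicts (purely hypothetical, not any real database API), but the key-value side can only answer get-by-key, while the document side can filter on fields inside the stored documents because it understands their structure.

# Key-value store: values are opaque blobs, so the only operation is get(key).
kv_store = {
    "post:42": b'{"title": "Hello", "tags": ["nosql", "kv"], "comments": 3}',
}
blob = kv_store.get("post:42")   # fast, but the store cannot look inside the bytes

# Document store: values are structured documents, so fields can be queried/indexed.
doc_store = {
    "post:42": {"title": "Hello", "tags": ["nosql", "kv"], "comments": 3},
    "post:43": {"title": "Bye", "tags": ["graph"], "comments": 0},
}

def find_by_tag(tag):
    # A real document database would use a secondary index instead of a full scan.
    return [doc for doc in doc_store.values() if tag in doc["tags"]]

print(find_by_tag("nosql"))   # [{'title': 'Hello', 'tags': ['nosql', 'kv'], 'comments': 3}]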
Cassandra
3,046,001
169