Columns: question (string), answer (string), tag (130 classes), question_id (int64), score (int64)
I've been banging my head on this issue for days now and have finally reached a brick wall. I've been trying to get my stack to run: http://django-websocket-redis.readthedocs.org/en/latest/running.html#django-with-websockets-for-redis-behind-nginx-using-uwsgi I've been looking at some other SO articles like this one: nginx - uWSGI HTTP + websocket config They seem to have a similar issue i am encountering but the solution does not work for me. Basically, i keep encountering the nginx 502 bad gateway screen whenever i try starting up my uWSGI processes. I have two separate uwsgi processes running, as per instructions in the documentation. When i run the websocket uwsgi instance, i get the following: *** running gevent loop engine [addr:0x487690] *** [2015-05-27 00:45:34,119 wsgi_server] DEBUG: Subscribed to channels: subscribe-broadcast, publish-broadcast which tells me that that uwsgi instance is running okay. Then i run my next uwsgi process and no error logs there either... When i navigate to the page in the browser, the page with hang for a few seconds, before getting the 502 Bad Gateway Screen. According to NGINX logs, NGINX says: 2015/05/26 22:46:08 [error] 18044#0: *3855 upstream prematurely closed connection while reading response header from upstream, client: 192.168.59.3, server: , request: "GET /chat/ HTTP/1.1", upstream: "uwsgi://unix:/opt/django/django.sock:", host: "192.168.59.103:32768" This is the only error log i get when trying to access the page in the web browser. Any ideas anyone??? Below are some of my config files: nginx.conf user www-data; worker_processes 4; pid /run/nginx.pid; events { worker_connections 768; } http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # SSL Settings ## ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE ssl_prefer_server_ciphers on; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/django.conf; } I have the following django.conf file, which extends nginx.conf upstream django { server unix:/opt/django/django.sock; } server { listen 80 default_server; charset utf-8; client_max_body_size 20M; sendfile on; keepalive_timeout 0; large_client_header_buffers 8 32k; location /media { alias /opt/django/app/media/media; } location /static { alias /opt/django/app/static; } location / { include /opt/django/uwsgi_params; } location /ws/ { proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_pass http://unix:/opt/django/app.sock; proxy_buffers 8 32k; proxy_buffer_size 64k; } } And two files that are responsible for my uwsgi processes as follows: runserver_uwsgi.ini: [uwsgi] ini = :runserver [default] userhome = /opt/django chdir = %dapp/ master = true module = chatserver.wsgi:application no-orphans = true threads = 1 env = DJANGO_SETTINGS_MODULE=myapp.settings vacuum = true [runserver] ini = :default socket = 
/opt/django/app.sock module = wsgi_django buffer-size = 32768 processes = 4 chmod-socket=666 and wsserver_uwsgi.ini [uwsgi] ini = :wsserver [default] userhome = /opt/django chdir = %dapp/ master = true module = chatserver.wsgi:application no-orphans = true threads = 1 env = DJANGO_SETTINGS_MODULE=chatserver.settings vacuum = true [wsserver] ini = :default http-socket = /opt/django/django.sock module = wsgi_websocket http-websockets = true processes = 2 gevent = 1000 chmod-socket=666
I found the issue. My [runserver] socket (app.sock) should be pointed under upstream django and my [wsserver] socket (django.sock) should be pointed under location /ws/ like so: upstream django { server unix:/opt/django/app.sock; } server { listen 80 default_server; charset utf-8; client_max_body_size 20M; sendfile on; keepalive_timeout 0; large_client_header_buffers 8 32k; location /media { alias /opt/django/app/media/media; } location /static { alias /opt/django/app/static; } location / { include /opt/django/uwsgi_params; } location /ws/ { proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_pass http://unix:/opt/django/django.sock; proxy_buffers 8 32k; proxy_buffer_size 64k; } }
Redis
30,470,448
22
I'm using the redis-cli tool to observe redis-server latency. Here's an example: ubuntu:~$ redis-cli --latency -h 127.0.0.1 -p 6379 min: 0, max: 15, avg: 0.12 (2839 samples) The question is: what do these values actually mean? I'm struggling to find documentation on this beyond what's available through the tool's own help document.
The redis-cli --latency -h <host> -p <port> command is a tool that helps troubleshoot and understand latency problems you may be experiencing with Redis. It does so by measuring the time for the Redis server to respond to the Redis PING command, in milliseconds. In this context latency is the maximum delay between the time a client issues a command and the time the reply to the command is received by the client. Usually Redis processing time is extremely low, in the sub microsecond range, but there are certain conditions leading to higher latency figures. -- Redis latency problems troubleshooting So when we ran the command redis-cli --latency -h 127.0.0.1 -p 6379, redis-cli entered a special mode in which it continuously samples latency (by running PING). Now let's break down the data it returns: min: 0, max: 15, avg: 0.12 (2839 samples) What's (2839 samples)? This is the number of times redis-cli recorded issuing the PING command and receiving a response. In other words, this is your sample data. In our example we recorded 2839 requests and responses. What's min: 0? The min value represents the minimum delay between the time the CLI issued PING and the time the reply was received. In other words, this was the absolute best response time from our sampled data. What's max: 15? The max value is the opposite of min. It represents the maximum delay between the time the CLI issued PING and the time the reply to the command was received. This is the longest response time from our sampled data. In our example of 2839 samples, the longest transaction took 15 ms. What's avg: 0.12? The avg value is the average response time in milliseconds across all our sampled data. So on average, over our 2839 samples, the response time was 0.12 ms. Basically, higher numbers for min, max, and avg are a bad thing. Some good follow-up material on how to use this data: Redis latency problems troubleshooting Redis latency monitoring framework How fast is Redis? Redis Performance Thoughts
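If you want to reproduce roughly what redis-cli --latency reports from your own code, the measurement is just a loop of timed PINGs. Here is a minimal sketch using the redis-py client (the host, port and sample count are assumptions, not something the tool itself prescribes):

# Rough approximation of what redis-cli --latency measures, using redis-py.
# Assumes a Redis server on 127.0.0.1:6379 and `pip install redis`.
import time
import redis

client = redis.Redis(host="127.0.0.1", port=6379)

samples = []
for _ in range(1000):                     # number of PINGs to sample (arbitrary)
    start = time.perf_counter()
    client.ping()                         # the same command the latency mode uses
    samples.append((time.perf_counter() - start) * 1000)   # milliseconds

print("min: %.2f, max: %.2f, avg: %.2f (%d samples)"
      % (min(samples), max(samples), sum(samples) / len(samples), len(samples)))

The numbers will not match redis-cli exactly (client overhead differs), but they illustrate what min/max/avg are computed from.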
Redis
27,735,411
22
How do I install and configure Redis on AWS ElasticBeanstalk? Does anyone know how to write an .ebextension script to accomplish that?
The accepted answer is great if you are using ElastiCache (like RDS, but for Memcached or Redis). But, if what you are trying to do is tell EB to provision Redis into the EC2 instance in which it spins up your app, you want a different config file, something like this gist: packages: yum: gcc-c++: [] make: [] sources: /home/ec2-user: http://download.redis.io/releases/redis-2.8.4.tar.gz commands: redis_build: command: make cwd: /home/ec2-user/redis-2.8.4 redis_config_001: command: sed -i -e "s/daemonize no/daemonize yes/" redis.conf cwd: /home/ec2-user/redis-2.8.4 redis_config_002: command: sed -i -e "s/# maxmemory <bytes>/maxmemory 500MB/" redis.conf cwd: /home/ec2-user/redis-2.8.4 redis_config_003: command: sed -i -e "s/# maxmemory-policy volatile-lru/maxmemory-policy allkeys-lru/" redis.conf cwd: /home/ec2-user/redis-2.8.4 redis_server: command: src/redis-server redis.conf cwd: /home/ec2-user/redis-2.8.4 IMPORTANT: The commands are executed in alphabetical order by name, so if you pick different names than redis_build, redis_config_xxx, redis_server, make sure they are such that they execute in the way you expect. Your other option is to containerize your app with Redis using Docker, then deploy your app as some number of Docker containers, instead of whatever language you wrote it in. Doing that for a Flask app is described here. You can jam it all into one container and deploy that way, which is easier, but doesn't scale well, or you can use AWS' Elastic Beanstalk multi-container deployments. If you have used docker-compose, you can use this tool to turn a docker-compose.yml into the form AWS wants, Dockerrun.aws.json.
Redis
26,528,395
22
I installed the StackExchange.Redis client in C#. I can delete a single key or an array of keys, but I don't know how to delete keys with a prefix. Another solution could be to first get all keys by pattern and then delete them, but I don't know how to get keys by pattern either.
You can do the following to batch delete items from the Redis cache (StackExchange.Redis.StrongName v1.0.488): foreach (var ep in _muxer.GetEndPoints()) { var server = _muxer.GetServer(ep); var keys = server.Keys(database: _redisDatabase, pattern: pattern + "*").ToArray(); _db.KeyDeleteAsync(keys); } _muxer is an instance of ConnectionMultiplexer. It does not delete by pattern on the server side as you asked, but it is much faster than deleting each key separately.
Redis
26,488,830
22
Is SQL Server 2014's In-Memory OLTP (Hekaton) the same or similar concept with Redis? I use Redis for in-memory storage (storage in RAM) and caching, while having a separate SQL Server database (like StackExchange does). Can Hekaton do the same thing?
They're similar in both being primarily in-memory, but that's about it. Redis is an in-memory key-value database. It can persist data to disk if you configure it to, but it keeps the entire dataset in memory, so you need enough RAM for that. The key-value architecture supports various data types, so you can store a value as a simple string, or as lists, sets, hashes, etc. Basically all the data structures you can use inside a programming language are available in Redis natively. SQL Server Hekaton (In-Memory OLTP) is a new engine designed to run relational tables in memory. All the data for these tables is kept in RAM but also stored to disk, so they are fully durable. Hekaton can take individual tables in a SQL Server database and run them with a different engine that uses MVCC (instead of pages and locks) and other optimizations, so operations are thousands of times faster than the traditional disk-based engine. There is a lot of research that went into this, and the primary use-case would be to take a table that is under heavy load and switch it to run in-memory to increase performance and scalability. Hekaton was not meant to run an entire database in memory (although you can do that if you really want to) but rather as a new engine designed to handle specific cases while keeping the interface the same. Everything to the end-user is identical to the rest of SQL Server: you can use SQL, stored procedures, triggers, indexes, atomic operations with ACID properties, and you can work seamlessly with data in both regular and in-memory tables. Because of the performance potential of Hekaton, you can use it to replace Redis if you need the speed and want to model your data within traditional relational tables. If you need the other key-value and data structure features of Redis, you're better off staying with that. With SQL 2016 SP1 and newer, all tiers of SQL Server have access to the same features and the only difference is pricing for support and capacity.
Redis
25,402,890
22
I've been looking around and I'm unable to find how to subscribe to keyspace notifications in Redis using the StackExchange.Redis library. Checking the available tests, I've found pub/sub using channels, but that works more like a service bus/queue rather than subscribing to specific Redis key events. Is it possible to take advantage of this Redis feature using StackExchange.Redis?
The regular subscriber API should work fine - there is no assumption on use-cases, so keyspace notifications come through like any other channel. However, I do kinda agree that this is inbuilt functionality that could perhaps benefit from helper methods on the API, and perhaps a different delegate signature - to encapsulate the syntax of the keyspace notifications so that people don't need to duplicate it. For that, I suggest you log an issue so that it doesn't get forgotten. Simple sample of how to subscribe to a keyspace event First of all, it's important to check that Redis keyspace events are enabled. For example, events should be enabled on keys of type Set. This can be done using the CONFIG SET command: CONFIG SET notify-keyspace-events KEs Once keyspace events are enabled, it's just a matter of subscribing to the pub/sub channel: using (ConnectionMultiplexer connection = ConnectionMultiplexer.Connect("localhost")) { IDatabase db = connection.GetDatabase(); ISubscriber subscriber = connection.GetSubscriber(); subscriber.Subscribe("__keyspace@0__:*", (channel, value) => { if ((string)channel == "__keyspace@0__:users" && (string)value == "sadd") { // Do stuff if some item is added to a hypothetical "users" set in Redis } } ); } Learn more about keyspace events here.
Redis
23,180,765
22
Totally new to nodejs and redis. Node.js is working fine and NPM works fine too. I want to play around with Redis so I ran: npm install redis and this seemed to work ok but now I'm trying to run: redis-server and I'm getting a Command Not Found error. I'm on a Mac if that's relevant. Can anyone offer some advice?
npm install redis doesn't install redis, it installs a redis client for node. You need to install the redis server.
Redis
22,362,674
22
I haven't been able to find anything in the documentation about how messages in a channel are stored with Redis publish/subscribe. When you publish to a Redis channel, is that message stored or persisted? If so, how long is it stored, and how do you get historical messages? Otherwise, I'm assuming that it just broadcasts the message and drops/deletes it after doing so?
Pub/sub messages are not queued, let alone persisted. They are only buffered in the socket buffers and immediately sent to the subscribers in the same event loop iteration as the publication. If a subscriber fails to read a message, that message is lost for the subscriber.
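A quick way to convince yourself of this is that PUBLISH only returns the number of subscribers that received the message; anything published while nobody is listening is simply dropped. A small redis-py sketch (channel name and host are placeholders, not part of the original answer):

# Shows that pub/sub messages are broadcast-and-forget: PUBLISH returns how many
# subscribers got the message, and a late subscriber never sees earlier messages.
import redis

r = redis.Redis(host="localhost", port=6379)

print(r.publish("news", "hello"))   # 0 -> no subscribers, the message is gone for good

p = r.pubsub()
p.subscribe("news")                 # subscribing now does NOT replay "hello"
print(r.publish("news", "world"))   # 1 -> only messages published from now on arrive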
Redis
18,079,951
22
I'm currently building a web app and would like to use Redis to store sessions. At login, the session is inserted into Redis with a corresponding user id, and expiration set at 15 minutes. I would now like to implement reverse look-up for sessions (get the sessions with a certain user id). The problem is how to implement this, since I can't search the Redis keyspace. One way would be to have a Redis set for each userId, containing all of its session ids. But since Redis doesn't allow expiration of an item in a set, and sessions are set to expire, there would be a ton of nonexistent session ids in the sets. What would be the best way to remove ids from sets on key expiration? Or is there a better way of accomplishing what I want (reverse look-up)?
On the current release branch of Redis (2.6), you cannot get notifications when items expire. It will probably change in future versions. In the meantime, to support your requirement, you need to manually implement expiration notification support. So you have: session:<sessionid> -> a hash storing your session data - one of the fields is <userid> user:<userid> -> a set of <sessionid> You need to remove the sessionid from the user set when the session expires. So you can maintain an additional sorted set whose score is a timestamp. When you create session 10 for user 100: MULTI HMSET session:10 userid:100 ... other session data ... SADD user:100 10 ZADD to_be_expired <current timestamp + session timeout> 10 EXEC Then, you need to build a daemon which will poll the zset to identify the sessions to expire (ZRANGEBYSCORE). For each expired session, it has to maintain the data structure: pop the session out of the zset (ZREMRANGEBYRANK) retrieve the session userid (HMGET) delete the session (DEL) remove the session from the userid set (SREM) The main difficulty is to ensure there are no race conditions when the daemon polls and processes the items. See my answer to this question to see how it can be implemented: how to handle session expire basing redis?
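To make the daemon part more concrete, here is a rough redis-py sketch of the polling loop described above. The key names follow the answer's convention; the poll interval is arbitrary, and this naive version is not race-free (see the linked answer for how to protect it with WATCH/MULTI or a script):

# Naive cleanup daemon: pop expired sessions from the zset and clean up the
# session hash and the per-user set. Assumes the key layout described above.
import time
import redis

r = redis.Redis()

def expire_sessions():
    now = time.time()
    for session_id in r.zrangebyscore("to_be_expired", 0, now):
        session_key = b"session:" + session_id
        user_id = r.hget(session_key, "userid")
        pipe = r.pipeline()
        pipe.zrem("to_be_expired", session_id)
        pipe.delete(session_key)
        if user_id is not None:
            pipe.srem(b"user:" + user_id, session_id)
        pipe.execute()

while True:
    expire_sessions()
    time.sleep(5)   # poll interval (arbitrary)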
Redis
16,741,476
22
Redis recommends a method of using SET with optional parameters as a locking mechanism, i.e. SET lock 1 EX 10 NX will set a lock only if it does not already exist, and it will expire after 10 seconds. I'm using Node Redis, which has a set() method, but I'm not sure how to pass it the additional parameters to have the key expire and not be created if it already exists, or even if it's possible. Perhaps I have to use setnx() and expire() as separate calls?
After reading the Node Redis source code, I found that all methods accept an arbitrary number of arguments. When an error about an incorrect number of arguments is generated, it is generated by Redis, not the node module. My early attempts to supply multiple arguments failed because I only had Redis 2.2.x installed; SET only accepts the NX and EX arguments as of 2.6.12. So with Redis 2.6.12 installed, the following method calls will work with Node Redis to set a key if it doesn't exist and have it expire after 5 minutes: client.set('hello', 'world', 'NX', 'EX', 300, function(err, reply) {...}); client.set(['hello', 'world', 'NX', 'EX', 300], function(err, reply) {...});
Redis
15,861,424
22
Using the console, how can I tell if Sidekiq is connected to a Redis server? I want to be able to do something like this: if (sidekiq is connected to redis) # pseudo-code MrWorker.perform_async('do_work', user.id) else MrWorker.new.perform('do_work', user.id) end
You can use Redis info provided by Sidekiq: redis_info = Sidekiq.redis { |conn| conn.info } redis_info['connected_clients'] # => "16" Took it from Sidekiq's Sinatra status app.
Redis
15,843,637
22
In a relational database, I have a user table, a category table and a user-category table that implements the many-to-many relationship. What form does this structure take in Redis?
With Redis, relationships are typically represented by sets. A set can be used to represent a one-way relationship, so you need one set per object to represent a many-to-many relationship. It is pretty useless to try to compare a relational database model to Redis data structures. With Redis, everything is stored in a denormalized way. Example: # Here are my categories > hset category:1 name cinema ... more fields ... > hset category:2 name music ... more fields ... > hset category:3 name sports ... more fields ... > hset category:4 name nature ... more fields ... # Here are my users > hset user:1 name Jack ... more fields ... > hset user:2 name John ... more fields ... > hset user:3 name Julia ... more fields ... # Let's establish the many-to-many relationship # Jack likes cinema and sports # John likes music and nature # Julia likes cinema, music and nature # For each category, we keep a set of reference on the users > sadd category:1:users 1 3 > sadd category:2:users 2 3 > sadd category:3:users 1 > sadd category:4:users 2 3 # For each user, we keep a set of reference on the categories > sadd user:1:categories 1 3 > sadd user:2:categories 2 4 > sadd user:3:categories 1 2 4 Once we have this data structure, it is easy to query it using the set algebra: # Categories of Julia > smembers user:3:categories 1) "1" 2) "2" 3) "4" # Users interested by music > smembers category:2:users 1) "2" 2) "3" # Users interested by both music and cinema > sinter category:1:users category:2:users 1) "3"
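If you drive this from a client library rather than redis-cli, it looks the same; for instance, a shortened sketch with redis-py (IDs and names mirror the example above):

# Same denormalized many-to-many model, expressed with redis-py.
import redis

r = redis.Redis()

r.hset("category:1", "name", "cinema")
r.hset("category:2", "name", "music")
r.hset("user:1", "name", "Jack")
r.hset("user:3", "name", "Julia")

# Jack likes cinema; Julia likes cinema and music
r.sadd("category:1:users", 1, 3)
r.sadd("category:2:users", 3)
r.sadd("user:1:categories", 1)
r.sadd("user:3:categories", 1, 2)

print(r.smembers("user:3:categories"))                    # Julia's categories
print(r.sinter("category:1:users", "category:2:users"))   # users liking both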
Redis
10,907,942
22
I just wrote a simple piece of code to perf test Redis + gevent to see how async helps performance, and I was surprised to find bad performance. Here is my code. If you get rid of the first two lines that monkey patch this code, then you will see the "normal execution" timing. On a Ubuntu 12.04 LTS VM, I am seeing a timing of: without monkey patch - 54 secs With monkey patch - 61 seconds Is there something wrong with my code / approach? Is there a perf issue here? #!/usr/bin/python from gevent import monkey monkey.patch_all() import timeit import redis from redis.connection import UnixDomainSocketConnection def UxDomainSocket(): pool = redis.ConnectionPool(connection_class=UnixDomainSocketConnection, path = '/var/redis/redis.sock') r = redis.Redis(connection_pool = pool) r.set("testsocket", 1) for i in range(100): r.incr('testsocket', 10) r.get('testsocket') r.delete('testsocket') print timeit.Timer(stmt='UxDomainSocket()', setup='from __main__ import UxDomainSocket').timeit(number=1000)
This is expected. You run this benchmark on a VM, on which the cost of system calls is higher than on physical hardware. When gevent is activated, it tends to generate more system calls (to handle the epoll device), so you end up with less performance. You can easily check this point by using strace on the script. Without gevent, the inner loop generates: recvfrom(3, ":931\r\n", 4096, 0, NULL, NULL) = 6 sendto(3, "*3\r\n$6\r\nINCRBY\r\n$10\r\ntestsocket\r"..., 41, 0, NULL, 0) = 41 recvfrom(3, ":941\r\n", 4096, 0, NULL, NULL) = 6 sendto(3, "*3\r\n$6\r\nINCRBY\r\n$10\r\ntestsocket\r"..., 41, 0, NULL, 0) = 41 With gevent, you will have occurences of: recvfrom(3, ":221\r\n", 4096, 0, NULL, NULL) = 6 sendto(3, "*3\r\n$6\r\nINCRBY\r\n$10\r\ntestsocket\r"..., 41, 0, NULL, 0) = 41 recvfrom(3, 0x7b0f04, 4096, 0, 0, 0) = -1 EAGAIN (Resource temporarily unavailable) epoll_ctl(5, EPOLL_CTL_ADD, 3, {EPOLLIN, {u32=3, u64=3}}) = 0 epoll_wait(5, {{EPOLLIN, {u32=3, u64=3}}}, 32, 4294967295) = 1 clock_gettime(CLOCK_MONOTONIC, {2469, 779710323}) = 0 epoll_ctl(5, EPOLL_CTL_DEL, 3, {EPOLLIN, {u32=3, u64=3}}) = 0 recvfrom(3, ":231\r\n", 4096, 0, NULL, NULL) = 6 sendto(3, "*3\r\n$6\r\nINCRBY\r\n$10\r\ntestsocket\r"..., 41, 0, NULL, 0) = 41 When the recvfrom call is blocking (EAGAIN), gevent goes back to the event loop, so additional calls are done to wait for file descriptors events (epoll_wait). Please note this kind of benchmark is a worst case for any event loop system, because you only have one file descriptor, so the wait operations cannot be factorized on several descriptors. Furthermore, async I/Os cannot improve anything here since everything is synchronous. It is also a worst case for Redis because: it generates many roundtrips to the server it systematically connects/disconnects (1000 times) because the pool is declared in UxDomainSocket function. Actually your benchmark does not test gevent, redis or redis-py: it exercises the capability of a VM to sustain a ping-pong game between 2 processes. If you want to increase performance, you need to: use pipelining to decrease the number of roundtrips make the pool persistent across the whole benchmark For instance, consider with the following script: #!/usr/bin/python from gevent import monkey monkey.patch_all() import timeit import redis from redis.connection import UnixDomainSocketConnection pool = redis.ConnectionPool(connection_class=UnixDomainSocketConnection, path = '/tmp/redis.sock') def UxDomainSocket(): r = redis.Redis(connection_pool = pool) p = r.pipeline(transaction=False) p.set("testsocket", 1) for i in range(100): p.incr('testsocket', 10) p.get('testsocket') p.delete('testsocket') p.execute() print timeit.Timer(stmt='UxDomainSocket()', setup='from __main__ import UxDomainSocket').timeit(number=1000) With this script, I get about 3x better performance and almost no overhead with gevent.
Redis
10,656,953
22
I've seen many people using Redis as a cache lately, why not Mongo? As far as I could tell Redis can set an expire date on an index, like memcache but otherwise are there any reasons not to use Mongo for this? I ask as I'm doing a large join in MySQL and then changing the data after selecting it. I'm already using memcache on other parts of the site but saving this in Mongo would allow me to do geospatial searches on the cached data.
A lot of people do use MongoDB as a low-to-medium grade cache and it works just great. Because it offers more functionality than a simple key-value store via ad-hoc queryability, it isn't as pure a caching layer as memcached or Redis (it can be slower to insert and retrieve data). Extremely high performance is attainable (the working set is in RAM after all), but the data model is heavier. However, on the flip side, MongoDB does offer a persistence layer that makes a lot more sense (to most developers) for the type of data that is most likely needed at a later time, unlike Redis.
Redis
10,317,732
22
I'm currently playing around with Redis and I've got a few questions. Is it possible to get values from an array of keys? Example: users:1:name "daniel" users:1:age "24" users:2:name "user2" users:2:age "24" events:1:attendees "users:1", "users:2" When I redis.get events:1:attendees it returns "users:1", "users:2". I can loop through this list and get users:1, get users:2. But this feels wrong; is there a way to get all the attendees' info in one get? In Rails I would do something like this: @event.attendees.each do |att| att.name end But in Redis I can't, because it returns the keys and not the actual object stored at that key. Thanks :)
Doing a loop on the items and synchronously accessing each element is not very efficient. With Redis 2.4, there are various ways to do what you want: by using the sort command by using pipelining by using variadic parameter commands With Redis 2.6, you can also use Lua scripting, but this is not really required here. By the way, the data structure you described could be improved by using hashes. Instead of storing user data in separate keys, you could group them in a hash object. Using the sort command You can use the Redis sort command to retrieve the data in one roundtrip. redis> set users:1:name "daniel" OK redis> set users:1:age 24 OK redis> set users:2:name "user2" OK redis> set users:2:age 24 OK redis> sadd events:1:attendees users:1 users:2 (integer) 2 redis> sort events:1:attendees by nosort get *:name get *:age 1) "user2" 2) "24" 3) "daniel" 4) "24" Using pipelining The Ruby client support pipelining (i.e. the capability to send several queries to Redis and wait for several replies). keys = $redis.smembers("events:1:attendees") res = $redis.pipelined do keys.each do |x| $redis.mget(x+":name",x+":age") end end The above code will retrieve the data in two roundtrips only. Using variadic parameter command The MGET command can be used to retrieve several data in one shot: redis> smembers events:1:attendees 1) "users:2" 2) "users:1" redis> mget users:1:name users:1:age users:2:name users:2:age 1) "daniel" 2) "24" 3) "user2" 4) "24" The cost here is also two roundtrips. This works if you can guarantee that the number of keys to retrieve is limited. If not, pipelining is a much better solution.
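If you are doing this from Python instead of Ruby, the pipelining variant is nearly identical with redis-py (a sketch assuming the same key layout as above):

# Pipelined retrieval of attendee data with redis-py: two round-trips in total.
import redis

r = redis.Redis()

keys = r.smembers("events:1:attendees")        # first round-trip
pipe = r.pipeline(transaction=False)
for key in keys:
    key = key.decode()                         # smembers returns bytes by default
    pipe.mget(key + ":name", key + ":age")
results = pipe.execute()                       # second round-trip
print(results)   # e.g. [[b'daniel', b'24'], [b'user2', b'24']]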
Redis
10,155,398
22
Port 6379 is open on the server, and I can successfully run telnet localhost 6379 over SSH. I tried both the Predis and phpredis client libraries in PHP, but it still does not work: Predis gives a "Permission denied" error when opening a socket to 6379. phpredis gives "redis server went away".
Problem solved, type: /usr/sbin/setsebool httpd_can_network_connect=1 By default, SELinux does not allow Apache to make socket connections. More information can be found here.
Redis
8,765,848
22
I am currently testing the insertion of keys into a local Redis database. I have more than 5 million keys and only 4 GB of RAM, so at some point I hit the RAM capacity and swap fills up (and my PC goes down)... My problem: how can I monitor memory usage on the machine that hosts the Redis database, so that I can raise an alert and stop inserting keys into Redis? Thanks.
Memory is a critical resource for Redis performance. Used memory defines total number of bytes allocated by Redis using its allocator (either standard libc, jemalloc, or an alternative allocator such as tcmalloc). You can collect all memory utilization metrics data for a Redis instance by running “info memory”. 127.0.0.1:6379> info memory Memory used_memory:1007280 used_memory_human:983.67K used_memory_rss:2002944 used_memory_rss_human:1.91M used_memory_peak:1008128 used_memory_peak_human:984.50K Sometimes, when Redis is configured with no max memory limit, memory usage will eventually reach system memory, and the server will start throwing “Out of Memory” errors. At other times, Redis is configured with a max memory limit but noeviction policy. This would cause the server not to evict any keys, thus preventing any writes until memory is freed. The solution to such problems would be configuring Redis with max memory and some eviction policy. In this case, the server starts evicting keys using eviction policy as memory usage reaches the max. Memory RSS (Resident Set Size) is the number of bytes that the operating system has allocated to Redis. If the ratio of ‘memory_rss’ to ‘memory_used’ is greater than ~1.5, then it signifies memory fragmentation. The fragmented memory can be recovered by restarting the server.
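If you want to automate this, the same INFO data is available from any client, so a small watchdog can raise an alert (or make your writers back off) before the machine starts swapping. A minimal redis-py sketch; the threshold and poll interval are arbitrary assumptions:

# Poll INFO memory and warn once used_memory crosses a threshold.
import time
import redis

r = redis.Redis()
MAX_BYTES = 3 * 1024 ** 3        # e.g. leave headroom on a 4 GB machine (assumption)

while True:
    used = r.info("memory")["used_memory"]    # same figure as the INFO memory command
    if used >= MAX_BYTES:
        print("Redis memory above threshold (%d bytes) - stop inserting / alert" % used)
    time.sleep(10)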
Redis
6,450,932
22
How do I start using Redis database with ASP.NET? What I should install and what I should download? I'm using Visual Studio 2008 with C#.
FYI, both the: RedisStackOverflow with C# Source Code and RedisAdminUI with C# Source Code are open source ASP.NET web applications that only use the ServiceStack.Redis C# client. Here is an example of how you would use an Inversion of control (IoC) container to register a Redis client connection pool and its accompanying IRepository with an IoC: //Register any dependencies you want injected into your services container.Register<IRedisClientsManager>(c => new PooledRedisClientManager()); container.Register<IRepository>(c => new Repository(c.Resolve<IRedisClientsManager>())); Note: if you're just starting out with the client, I recommend you go through the C# Client Wiki, Especially the Designing a Simple Blog application with Redis tutorial*.
Redis
5,006,326
22
I want to scale my Node.js Socket application vertically and horizontally and I haven´t found a sophisticated solution yet. My application has two use-cases: Broadcast messages from one user to all others Push messages from one user to a subset of users On one hand, I´ve read that I need Redis for both cases together with socket.io-redis On the other hand, I´ve watched this video and read this SO answer where it says that Redis isn´t reliable and it´s not guaranteed that the published messages will arrive, so you should only use it for clustering/vertical scaling Microsoft Azures solution to use ServiceBus is out of question, because I don´t want to use Azure. Instead of Redis, the guy recommends using RabbitMQ for horizontal scaling. For the vertical scaling there is also socket.io-clusterhub, an IPC for node processes, but it seems to work only on Socket.io <= v0.9.0 Then there is this guy, who has implemented his own method to pass messages to other nodes via HTTP requests, which makes somehow sense. But why HTTP requests if you could also establish direct socket connections between servers, push the message to all servers simultaneously and overcome the delay of going from one server to another? As a conclusion I thought maybe I could go with Redis on EACH server, just for the exchange of messages when clustering my application on multiple processes, together with RabbitMQ as a S2S communication solution. But it seems a bit like an overkill to have one Redis per Server and another central RabbitMQ. Is there any known shorter/better solution to scale Socket.io reliably in both directions? EDIT: I´ve tried using a single Redis Server for multiple Node.js Servers, where each of them uses Clustering via sticky-session over all cores. While the Clustering at its own works like a charm with redis, there seems to be a problem when using multiple servers. Messages won´t arrive at the other nodes.
I'd say Kafka is a good fit for the horizontal scaling. It is a fairly sophisticated way of distributing a huge number of events across servers (which, in the end, is what you want). This is a good read about it: https://engineering.linkedin.com/kafka/running-kafka-scale Regarding vertical scaling, instead of socket.io-clusterhub I would use something called PM2 (https://github.com/Unitech/pm2), which allows you to dynamically scale the app across the cores of each machine, as well as control the logs and report to keymetrics.io (if you are using it). If you need any snippets, ask me and I will edit the answer, but there are quite a few in the PM2 GitHub repo.
Redis
37,116,615
21
I have a Redis cluster of 6 instances, 3 master and 3 slaves. My ASP .NET Core application uses it as a cache. Sometimes I get such an error: StackExchange.Redis.RedisTimeoutException: Timeout awaiting response (outbound=0KiB, inbound=5KiB, 5504ms elapsed, timeout is 5000ms), command=GET, next: GET [email protected], inst: 0, qu: 0, qs: 6, aw: False, rs: DequeueResult, ws: Idle, in: 0, in-pipe: 5831, out-pipe: 0, serverEndpoint: 5.178.85.30:7002, mgr: 9 of 10 available, clientName: f0b7b81f5ce5, PerfCounterHelperkeyHashSlot: 9236, IOCP: (Busy=0,Free=1000,Min=2,Max=1000), WORKER: (Busy=10,Free=32757,Min=2,Max=32767), v: 2.0.601.3402 (Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts) (Most recent call last)
As I can see from your exception message, your minimum worker thread count is too low for the traffic you have. WORKER: (Busy=10,Free=32757,Min=2,Max=32767) You had 10 busy worker threads when this exception happened, while the minimum was only 2 to start with. When your application runs out of available threads to complete an operation, .NET starts a new one (up to the maximum value, of course) and waits a bit to see if an additional worker thread is needed. If your application still needs worker threads, then .NET starts another one. Then another, then another... But this requires time; it doesn't happen in 0 ms. By looking at your exception message, we can see that .NET had to create 8 additional worker threads (10 - 2 = 8). While they were being created, this particular Redis operation waited and eventually timed out. You can call the ThreadPool.SetMinThreads(Int32, Int32) method at the beginning of your application to set the minimum thread count. I suggest you start with ThreadPool.SetMinThreads(10, 10) and tweak it as you test. Additional reading: https://learn.microsoft.com/en-us/dotnet/api/system.threading.threadpool.setminthreads https://stackexchange.github.io/StackExchange.Redis/Timeouts.html
Redis
57,661,799
21
I would like to insert data into a sorted set in Redis using Python in order to do complex queries, for example on ranges. import redis redisClient = redis.StrictRedis(host='localhost', port=6379, db=0) redisClient.zadd("players", 1, "rishu") But when I run the above piece of code, I get the following error: AttributeError: 'str' object has no attribute 'items' What am I doing wrong here? I used this link for reference: https://pythontic.com/database/redis/sorted%20set%20-%20add%20and%20remove%20elements
@TheDude was close. In newer versions of the client (redis-py 3.0 and later), the method signature has changed. Along with ZADD, the MSET and MSETNX signatures were also changed. The old signature was: data = "hello world" score = 1 redis.zadd("redis_key_name", data, score) # not used in redis-py > 3.0 The new signature is: data = "hello world" score = 1 redis.zadd("redis_key_name", {data: score}) To add multiple items at once: data1 = "foo" score1 = 10 data2 = "bar" score2 = 20 redis.zadd("redis_key_name", {data1: score1, data2: score2}) Instead of args/kwargs, a dict is expected, with the key as the data and the value as the ZADD score. There are no changes in retrieving the data back.
Redis
53,553,009
21
I have seen answers in couple of threads but didn't work out for me and since my problem occurs occasionally, asking this question if any one has any idea. I am using jedis version 2.8.0, Spring Data redis version 1.7.5. and redis server version 2.8.4 for our caching application. I have multiple cache that gets saved in redis and get request is done from redis. I am using spring data redis APIs to save and get data. All save and get works fine, but getting below exception occasionally: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool | org.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the poolorg.springframework.data.redis.RedisConnectionFailureException: Cannot get Jedis connection; nested exception is redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool org.springframework.data.redis.connection.jedis.JedisConnectionFactory.fetchJedisConnector(JedisConnectionFactory.java:198) org.springframework.data.redis.connection.jedis.JedisConnectionFactory.getConnection(JedisConnectionFactory.java:345) org.springframework.data.redis.core.RedisConnectionUtils.doGetConnection(RedisConnectionUtils.java:129) org.springframework.data.redis.core.RedisConnectionUtils.getConnection(RedisConnectionUtils.java:92) org.springframework.data.redis.core.RedisConnectionUtils.getConnection(RedisConnectionUtils.java:79) org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:191) org.springframework.data.redis.core.RedisTemplate.execute(RedisTemplate.java:166) org.springframework.data.redis.core.AbstractOperations.execute(AbstractOperations.java:88) org.springframework.data.redis.core.DefaultHashOperations.get(DefaultHashOperations.java:49) My redis configuration class: @Configuration public class RedisConfiguration { @Value("${redisCentralCachingURL}") private String redisHost; @Value("${redisCentralCachingPort}") private int redisPort; @Bean public StringRedisSerializer stringRedisSerializer() { StringRedisSerializer stringRedisSerializer = new StringRedisSerializer(); return stringRedisSerializer; } @Bean JedisConnectionFactory jedisConnectionFactory() { JedisConnectionFactory factory = new JedisConnectionFactory(); factory.setHostName(redisHost); factory.setPort(redisPort); factory.setUsePool(true); return factory; } @Bean public RedisTemplate<String, Object> redisTemplate() { RedisTemplate<String, Object> redisTemplate = new RedisTemplate<>(); redisTemplate.setConnectionFactory(jedisConnectionFactory()); redisTemplate.setExposeConnection(true); // No serializer required all serialization done during impl redisTemplate.setKeySerializer(stringRedisSerializer()); //`redisTemplate.setHashKeySerializer(stringRedisSerializer()); redisTemplate.setHashValueSerializer(new GenericSnappyRedisSerializer()); redisTemplate.afterPropertiesSet(); return redisTemplate; } @Bean public RedisCacheManager cacheManager() { RedisCacheManager redisCacheManager = new RedisCacheManager(redisTemplate()); redisCacheManager.setTransactionAware(true); redisCacheManager.setLoadRemoteCachesOnStartup(true); redisCacheManager.setUsePrefix(true); return redisCacheManager; } } Did anyone faced this issue or have any idea on this, why might this happen?
We were facing the same problem with RxJava: the application was running fine, but after some time no connections could be acquired from the pool anymore. After days of debugging we finally figured out what caused the problem: redisTemplate.setEnableTransactionSupport(true) somehow caused spring-data-redis to not release connections. We needed transaction support for MULTI / EXEC, but in the end we changed the implementation to get rid of this problem. We still don't know whether this is a bug or wrong usage on our side.
Redis
43,492,474
21
I am connecting to a Redis sentinel using the code given below var Redis = require('ioredis'); var redis = new Redis({ sentinels: [{ host: '99.9.999.99', port: 88888 }], name: 'mymaster' }); I am setting the value of a key by using following code: function (key, data) { var dataInStringFormat = JSON.stringify(data); /// conbverting obj to string /// making promise for returning var promise = new Promise(function (resolve, reject) { /// set data in redis redis.set(key, dataInStringFormat) .then(function (data) { resolve(data); }, function (err) { reject(err); }); }); return promise; } Can you please help me by providing a solution to set an expire time for the key value e.g. 12 hours
It's documented: redis.set('key', 100, 'EX', 10) where EX and 10 mean the key expires after 10 seconds. If you want to use milliseconds, replace EX with PX.
Redis
41,237,001
21
Suppose I have [Slave IP Address], which is the slave of [Master IP Address]. Now my master server has been shut down, and I need to promote this slave to master MANUALLY (WITHOUT using Sentinel automatic failover, WITH a Redis command). Is it possible to do this without restarting the Redis service (and losing all the cached data)?
use SLAVEOF NO ONE to promote a slave to master http://redis.io/commands/slaveof
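If you would rather do it from a client library than redis-cli, most clients expose the same command; for example, with redis-py (the host is a placeholder), calling slaveof() with no arguments issues SLAVEOF NO ONE:

# Promote a replica to master without restarting it.
import redis

replica = redis.Redis(host="slave-ip-address", port=6379)
replica.slaveof()                                # equivalent to SLAVEOF NO ONE
print(replica.info("replication")["role"])       # should now report 'master'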
Redis
34,155,977
21
Please consider the following example >>import redis >>redis_db_url = '127.0.0.1' >>r = redis.StrictRedis(host = redis_db_url,port = 6379,db = 0) >>r.sadd('a',1) >>r.sadd('a',2) >>r.sadd('a',3) >>r.smembers('a') [+] output: set(['1', '3', '2']) >>r.sadd('a',set([3,4])) >>r.smembers('a') [+] output: set(['1', '3', '2', 'set([3, 4])']) >>r.sadd('a',[3,4]) >>r.smember('a') [+] set(['1', '[3, 4]', '3', '2', 'set([3, 4])']) According to the official documentation in https://redis-py.readthedocs.org/en/latest/ sadd(name, *values) Add value(s) to set name So is it a bug or I am missing something ?
When you see the syntax *values in an argument list, it means the function takes a variable number of arguments. Therefore, call it as r.sadd('a', 1, 2, 3) You can pass an iterable by using the splat operator to unpack it: r.sadd('a', *set([3, 4])) or r.sadd('a', *[3, 4])
Redis
31,035,274
21
We're using Redis to store various application configurations in a DB 0. Is it possible to query Redis for every key/valuie pair within the database, without having to perform two separate queries and joining the key/value pairs yourself? I would expect functionality similar to the following: kv = redis_conn.getall() # --OR-- # kv = redis_conn.mget('*') ... Where kv would return a tuple of tuples, list of lists, or a dictionary: However after scouring StackOverflow, Google and Redis documentation, the only solution I can derive (I haven't yet found anyone else asking this question..) is something similar to the following: import redis red = redis.Redis(host='localhost', db=0) keys = red.keys() vals = red.mget(keys) kv = zip(keys, vals) Am I crazy here, in assuming there to be a more elegant approach to this problem? Additional Info Every value within this database is a String. My question is not how to retrieve values for each unique data type or data-type related at all. Rather, my question is: Is there a way to say "hey Redis, return to me every string value in the database" without having to ask for the keys, then query for the values based on the keys returned?
Different Redis types are read with different commands, so you have to look at each key's data type to determine how to get the values from the key. So: keys = redis.keys('*') for key in keys: type = redis.type(key) if type == "string": val = redis.get(key) if type == "hash": vals = redis.hgetall(key) if type == "zset": vals = redis.zrange(key, 0, -1) if type == "list": vals = redis.lrange(key, 0, -1) if type == "set": vals = redis.smembers(key)
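One caveat worth adding: KEYS('*') blocks the server while it walks the entire keyspace, so on anything non-trivial an incremental SCAN is safer. A sketch of the same idea with redis-py (note that without decode_responses=True the type comes back as bytes):

# Same per-type dispatch, but iterating with SCAN so the server is never blocked.
import redis

r = redis.Redis()

def value_for(key):
    t = r.type(key)
    if t == b"string":
        return r.get(key)
    if t == b"hash":
        return r.hgetall(key)
    if t == b"zset":
        return r.zrange(key, 0, -1)
    if t == b"list":
        return r.lrange(key, 0, -1)
    if t == b"set":
        return r.smembers(key)

all_values = {key: value_for(key) for key in r.scan_iter("*")}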
Redis
19,282,580
21
I'm performing some analysis on a data stream and publishing the results on a Redis channel. Consumers subscribe to these channels and get real-time data feeds. All historical data analysis results are lost. Now I want to add the ability to store historical data in Redis so that consumers can query this historical data (mainly by time). Since the analysis results are partitioned by time what would be the a good design to store the results in Redis?
Use redis sorted sets. Sorted sets store data based on "scores", so in your case, just use a time stamp in millis; the data will be sorted automatically, allowing you to retrieve historical items using start/end date ranges, here's an example... Add items to a sorted set... zadd historical <timestamp> <dataValue> ..add some sample data.. zadd historical 1 data1 zadd historical 2 data2 zadd historical 3 data3 zadd historical 4 data4 zadd historical 5 data5 zadd historical 6 data6 zadd historical 7 data7 ..retrieve a subset of items using start/end range... zrangebyscore historical 2 5 ..returns... 1) "data2" 2) "data3" 3) "data4" 4) "data5" So, in your case, if you want to retrieve all historical items for the last day, just do this... zrangebyscore historical <currentTimeInMillis - 86400000> <currentTimeInMillis>
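The same pattern from Python with redis-py, using millisecond timestamps as scores (a sketch only; the key name and JSON encoding are assumptions, and the zadd call uses the redis-py 3.x mapping signature):

# Store time-partitioned analysis results in a sorted set and query by time range.
import json
import time
import redis

r = redis.Redis()

def store_result(result):
    now_ms = int(time.time() * 1000)
    r.zadd("historical", {json.dumps(result): now_ms})   # score = timestamp in ms

def results_last_24h():
    now_ms = int(time.time() * 1000)
    raw = r.zrangebyscore("historical", now_ms - 86400000, now_ms)
    return [json.loads(item) for item in raw]

store_result({"metric": "cpu", "value": 0.42})
print(results_last_24h())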
Redis
17,153,154
21
We are deploying a large-scale web application that uses only Redis as a data store. I notice that the benchmark of our Redis master is around 8000 transactions per second on EC2, far less than the stated benchmarks on dedicated hardware. I understand that there is a performance penalty for running Redis on a virtual machine like EC2, but I would love some pointers from people who have deployed Redis in production environments on EC2: what EC2 setup have you found most effective for getting more out of Redis? Thanks.
EC2 is probably not the best environment to run Redis on, given the virtualized hardware, but it is a popular one, and there are a number of points to know to get the best from Redis on this platform. I'm one of the authors of http://redis.io/topics/benchmarks and http://redis.io/topics/latency, which cover most of the topics I present below. This is just a summary of the main points. Virtualization toll It is not specific to EC2, but Redis is significantly slower when running on a VM (in terms of maximum supported throughput). This is due to the fact that, for basic operations, Redis does not add much overhead to the epoll/read/write system calls required to handle client connections (like memcached, or other efficient key/value stores). System calls are typically more expensive on a VM, and they represent a significant part of Redis activity (especially in benchmarks). Under those conditions, a 50% decrease in maximum throughput compared to bare metal is not uncommon. Of course, it also depends on the quality of the hypervisor. For EC2, Xen is used. Benchmarking in good conditions Benchmarking can be tricky, especially on a platform like EC2. One point often forgotten is to ensure a proper configuration for both the benchmark client and server. For instance, do not run redis-benchmark on a CPU-starved micro-instance (which will likely be throttled down by Amazon) while targeting your Redis server. Both machines are equally important to get a good maximum throughput. Actually, to evaluate Redis performance, you need to: run redis-benchmark locally (on the same machine as the server), assuming you have more than one vCPU core. run redis-benchmark remotely (from a different VM), on a machine whose QoS configuration is equivalent to the server machine. So you can evaluate and compare performance of the machines and the network. On EC2, you will have the best results with second-generation M3 instances (or high-memory, or cluster compute instances), so you can benefit from HVM (hardware virtualization) instead of relying on slower para-virtualization. The fork issue This is not specific to EC2, but to Xen: forking a large process can be really slow on Xen (it looks better with KVM). For Redis this is a big problem if you plan to use persistence: both persistence options (RDB or AOF) require the main thread to fork and launch background save or rewrite processes. In some cases, fork latency can freeze the Redis event loop for several seconds. The more memory managed by the Redis instance, the more latency. On EC2, be sure to use an HVM-enabled instance (M3, high-memory, cluster); it will mitigate the issue. Then, if you have large memory requirements, and your application can tolerate it, consider running several smaller Redis instances on the same machine, and shard your data. It can decrease the latency due to fork operations to an acceptable level. Persistence configuration This is a key point to get good performance from Redis (both on VMs and bare metal), so please take the time to carefully read http://redis.io/topics/persistence If you use RDB, keep in mind the memory copy-on-write mechanism will start duplicating pages once the background save process has been forked off. So you need to ensure there is enough memory for Redis itself, plus some margin to cope with the COW. The amount of extra memory depends on your workload. The more you write in the instance, the more extra memory you need.
Please note that writing a file may also consume some memory (because of the filesystem cache), so during a Redis background save, you need to account for Redis memory, COW overhead, and the size of the dump file. The machine running the Redis server must never swap. If it does, the result will be catastrophic. Contrary to some other stores, Redis is not virtual memory friendly. With Linux, be sure to set sensible system parameters: vm.overcommit_memory=1 and vm.swappiness=0 (or a very low value anyway). Do not use old kernel versions: they are quite bad at enforcing a low swappiness (resulting in swapping when a large file is written). If you use AOF, review the fsync options. It is a tradeoff between raw performance and durability of the write operations. You need to make a choice and define a strategy. You also need to get familiar with the EC2 storage options. On some VMs, you have the choice between ephemeral storage and EBS. On others, you only have EBS. Ephemeral storage is generally faster, and you will probably run into fewer issues than with EBS, but you can easily lose your data in case of disk failure, a reboot of the host, etc. You could put RDB snapshots on ephemeral storage and then copy the resulting files to EBS directories, as a tradeoff between performance and robustness. EBS is remote storage: it may eat into the standard network bandwidth allocated to the VM and impact the maximum throughput of Redis. If you plan to use EBS, consider selecting the "EBS-optimized" option to establish a QoS between the standard network and storage links. Finally, a very common setup for performance-demanding instances on EC2 is to deactivate persistence on the master and only activate it on a slave instance. It is probably less safe for the data, but it may prevent a lot of potential latency issues on the master.
Redis
11,765,502
21
I am thinking about creating an open source data management web application for various types of data. A privileged user must be able to add new entity types (for example a 'user' or a 'family') add new properties to entity types (for example 'gender' to 'user') remove/modify entities and properties These will be common tasks for the privileged user. He will do this through the web interface of the application. In the end, all data must be searchable and sortable by all types of users of the application. Two questions trouble me: a) How should the data be stored in the database? Should I dynamically add/remove database tables and/or columns during runtime? I am no database expert. I am stuck with the imagination that in terms of relational databases, the application has to be able to dynamically add/remove tables (entities) and/or columns (properties) at runtime. And I don't like this idea. Likewise, I am thinking if such dynamic data should be handled in a NoSQL database. Anyway, I believe that this kind of problem has an intelligent canonical solution, which I just did not find and think of so far. What is the best approach for this kind of dynamic data management? b) How to implement this in Python using an ORM or NoSQL? If you recommend using a relational database model, then I would like to use SQLAlchemy. However, I don't see how to dynamically create tables/columns with an ORM at runtime. This is one of the reasons why I hope that there is a much better approach than creating tables and columns during runtime. Is the recommended database model efficiently implementable with SQLAlchemy? If you recommend using a NoSQL database, which one? I like using Redis -- can you imagine an efficient implementation based on Redis? Thanks for your suggestions! Edit in response to some comments: The idea is that all instances ("rows") of a certain entity ("table") share the same set of properties/attributes ("columns"). However, it will be perfectly valid if certain instances have an empty value for certain properties/attributes. Basically, users will search the data through a simple form on a website. They query for e.g. all instances of an entity E with property P having a value V higher than T. The result can be sorted by the value of any property. The datasets won't become too large. Hence, I think even the stupidest approach would still lead to a working system. However, I am an enthusiast and I'd like to apply modern and appropriate technology as well as I'd like to be aware of theoretical bottlenecks. I want to use this project in order to gather experience in designing a "Pythonic", state-of-the-art, scalable, and reliable web application. I see that the first comments tend to recommending a NoSQL approach. Although I really like Redis, it looks like it would be stupid not to take advantage of the Document/Collection model of Mongo/Couch. I've been looking into mongodb and mongoengine for Python. By doing so, do I take steps into the right direction? Edit 2 in response to some answers/comments: From most of your answers, I conclude that the dynamic creation/deletion of tables and columns in the relational picture is not the way to go. This already is valuable information. Also, one opinion is that the whole idea of the dynamic modification of entities and properties could be bad design. As exactly this dynamic nature should be the main purpose/feature of the application, I don't give up on this. 
From the theoretical point of view, I accept that performing operations on a dynamic data model must necessarily be slower than performing operations on a static data model. This is totally fine. Expressed in an abstract way, the application needs to manage the data layout, i.e. a "dynamic list" of valid entity types and a "dynamic list" of properties for each valid entity type the data itself I am looking for an intelligent and efficient way to implement this. From your answers, it looks like NoSQL is the way to go here, which is another important conclusion.
The SQL or NoSQL choice is not your problem. You need to read little more about database design in general. As you said, you're not a database expert(and you don't need to be), but you absolutely must study a little more the RDBMS paradigm. It's a common mistake for amateur enthusiasts to choose a NoSQL solution. Sometimes NoSQL is a good solution, most of the times is not. Take for example MongoDB, which you mentioned(and is one of the good NoSQL solutions I've tried). Schema-less, right? Err.. not exactly. You see when something is schema-less means no constraints, validation, etc. But your application's models/entities can't stand on thin air! Surely there will be some constraints and validation logic which you will implement on your software layer. So I give you mongokit! I will just quote from the project's description this tiny bit MongoKit brings structured schema and validation layer on top of the great pymongo driver Hmmm... unstructured became structured. At least we don't have SQL right? Yeah, we don't. We have a different query language which is of course inferior to SQL. At least you don't need to resort to map/reduce for basic queries(see CouchDB). Don't get me wrong, NoSQL(and especially MongoDB) has its purpose, but most of the times these technologies are used for the wrong reason. Also, if you care about serious persistence and data integrity forget about NoSQL solutions. All these technologies are too experimental to keep your serious data. By researching a bit who(except Google/Amazon) uses NoSQL solutions and for what exactly, you will find that almost no one uses it for keeping their important data. They mostly use them for logging, messages and real time data. Basically anything to off-load some burden from their SQL db storage. Redis, in my opinion, is probably the only project who is going to survive the NoSQL explosion unscathed. Maybe because it doesn't advertise itself as NoSQL, but as a key-value store, which is exactly what it is and a pretty damn good one! Also they seem serious about persistence. It is a swiss army knife, but not a good solution to replace entirely your RDBMS. I am sorry, I said too much :) So here is my suggestion: 1) Study the RDBMS model a bit. 2) Django is a good framework if most of your project is going to use an RDBMS. 3) Postgresql rocks! Also keep in mind that version 9.2 will bring native JSON support. You could dump all your 'dynamic' properties in there and you could use a secondary storage/engine to perform queries(map/reduce) on said properties. Have your cake and eat it too! 4) For serious search capabilities consider specialized engines like solr. EDIT: 6 April 2013 5) django-ext-hstore gives you access to postgresql hstore type. It's similar to a python dictionary and you can perform queries on it, with the limitation that you can't have nested dictionaries as values. Also the value of key can be only of type string. Have fun Update in response to OP's comment 0) Consider the application 'contains data' and has already been used for a while I am not sure if you mean that it contains data in a legacy dbms or you are just trying to say that "imagine that the DB is not empty and consider the following points...". In the former case, it seems a migration issue(completely different question), in the latter, well OK. 1) Admin deletes entity "family" and all related data Why should someone eliminate completely an entity(table)? Either your application has to do with families, houses, etc or it doesn't. 
Deleting instances (rows) of families is understandable, of course.
2) Admin creates entity "house". Same as #1. If you introduce a brand new entity in your app then most probably it will encapsulate semantics and business logic, for which new code must be written. This happens to all applications as they evolve through time and of course warrants the creation of a new table, or maybe ALTERing an existing one. But this process is not part of the functionality of your application; i.e. it happens rarely, and is a migration/refactoring issue.
3) Admin adds properties "floors", "age", ... Why? Don't we know beforehand that a House has floors? That a User has a gender? Adding and removing this type of attribute dynamically is not a feature, but a design flaw. It is part of the analysis/design phase to identify your entities and their respective properties.
4) Privileged user adds some houses. Yes, he is adding an instance (row) to the existing entity (table) House.
5) User searches for all houses with at least five floors cheaper than 100 $. A perfectly valid query which can be achieved with either an SQL or a NoSQL solution. In Django it would be something along these lines: House.objects.filter(floors__gte=5, price__lt=100) provided that House has the attributes floors and price. But if you need to do text-based queries, then neither SQL nor NoSQL will be satisfying enough, because you don't want to implement faceting or stemming on your own! You will use some of the already discussed solutions (Solr, ElasticSearch, etc.).
Some more general notes: The examples you gave about Houses, Users and their properties do not warrant any dynamic schema. Maybe you simplified your example just to make your point, but you talk about adding/removing entities (tables) as if they were rows in a db. Entities are supposed to be a big deal in an application. They define the purpose of your application and its functionality. As such, they can't change every minute. Also you said: "The idea is that all instances ("rows") of a certain entity ("table") share the same set of properties/attributes ("columns"). However, it will be perfectly valid if certain instances have an empty value for certain properties/attributes." This seems like a common case where an attribute has null=True. And as a final note, I would like to suggest that you try both approaches (SQL and NoSQL), since it doesn't seem like your career depends on this project. It will be a beneficial experience, as you will understand first-hand the pros and cons of each approach. Or even how to "blend" these approaches together.
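To make the null=True point concrete, here is a minimal, hypothetical Django sketch of the static House model the answer argues for; the field names floors and price come from the answer's example query, while the field types, precision and the extra age field are assumptions made purely for illustration:

    # models.py - illustrative sketch only; types and nullability are assumptions
    from django.db import models

    class House(models.Model):
        floors = models.IntegerField(null=True)   # attribute may legitimately be empty for some rows
        price = models.DecimalField(max_digits=10, decimal_places=2, null=True)
        age = models.IntegerField(null=True)

    # the query from the answer then works unchanged:
    # House.objects.filter(floors__gte=5, price__lt=100)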
Redis
10,672,939
21
The documentation for transactions says: "we may deprecate and finally remove transactions" and "everything you can do with a Redis transaction, you can also do with a script" http://redis.io/topics/transactions But does it? I see a problem with this. Within a transaction you can WATCH multiple variables, read those variables, and based on the unique state of those variables you can make a completely different set of writes before calling EXEC. If anything interferes with the state of those variables in the intervening time, EXEC will not perform the transaction. (Which allows you to retry it. This is a perfect transaction system.) An EVAL script will not let you do that. According to the documentation on this page: "Scripts as pure functions...The script always evaluates the same Redis write commands with the same arguments given the same input data set. Operations performed by the script cannot depend on any hidden (non explicit) information or state that may change as script execution proceeds or between different executions of the script, nor can it depend on any external input from I/O devices." http://redis.io/commands/eval The problem I see with EVAL is that you cannot GET the state of those variables inside of the script and make a unique set of writes based on the state of those variables. Again: "The script always evaluates the same Redis write commands with the same arguments given the same input data set." So the resulting writes are already determined (cached from the first run) and the EVAL script doesn't care what the GET values are inside of the script. The only thing you can do is perform GET for those variables before calling EVAL and then pass those variables to the EVAL script, but here's the problem: now you have an atomicity problem between calling GET and calling EVAL. In other words, all the variables that you would have done a WATCH for a transaction, in the case of EVAL you instead need to GET those variables and then pass them to the EVAL script. Since the atomic nature of the script isn't guaranteed until the script actually starts, and you need to GET those variables before calling EVAL to start the script, that leaves an opening where the state of those variables could change between the GET and passing them to EVAL. Therefore, the guarantee of atomicity that you have with WATCH you do not have with EVAL for a very important set of use cases. So why is there talk about deprecating transactions when that would cause important Redis functionality to be lost? Or is there actually a way to do this with EVAL scripts that I do not understand yet? Or are there features planned that can solve this for EVAL? (Hypothetical example: if they made WATCH work with EVAL the same way that WATCH works with EXEC, that might work.) Is there a solution to this? Or am I to understand that Redis may not be completely transaction safe in the long term?
It's true that Lua scripts can do whatever transactions can, but I don't think Redis transactions are going away. "An EVAL script will not let you watch variables" - When an EVAL script is running, nothing else can run concurrently. So, watching variables is pointless. You can be sure that nobody else has modified the variables once you have read the values in the script. "The problem I see with EVAL is that you cannot GET the state of those variables inside of the script and make a unique set of writes based on the state of those variables" - Not true. You can pass keys to the EVAL script. Within your EVAL script, you can read the values from Redis, and then, based on those values, conditionally execute other commands. The script is still deterministic. If you take that script and run it on the slave, it will still execute the same write commands, because the master and the slave have the same data.
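As a concrete sketch of such a conditional write, here is a small compare-and-set script; the key and values are made up for the example. It only performs the SET when the current value matches, yet it remains deterministic because the decision depends solely on data held in Redis:

    -- cas.lua: set KEYS[1] to ARGV[2] only if its current value equals ARGV[1]
    local current = redis.call('GET', KEYS[1])
    if current == ARGV[1] then
      redis.call('SET', KEYS[1], ARGV[2])
      return 1
    end
    return 0

It can be run, for instance, with redis-cli --eval cas.lua mykey , expectedvalue newvalue (key names go before the comma, arguments after).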
Redis
10,532,520
21
I'm trying to use Redis for sessions in my Express app. I do the following: var express = require('express'); var RedisStore = require('connect-redis')(express); app.configure('development', function(){ app.use(express.session({ secret: "password", store: new RedisStore({ host: "127.0.0.1", port: "6379", db: "mydb" }) })); Later on, in my app, if I do something like: var whatever = req.session.someProperty; I get: Cannot read property 'someProperty' of undefined This indicates that req.session is undefined (I can see this from a console.log entry in my config section). I've definitely got Redis running, and can see my app connects to it initially (using redis-cli monitor).
Sessions won't work unless you have these 3 in this order: app.use(express.cookieParser()); app.use(express.session()); app.use(app.router); I'm not sure if router is mandatory to use sessions, but it breaks them if it's placed before them.
Redis
10,191,692
21
Using redis-rb in a Rails app, the following doesn't work: irb> keys = $redis.keys("autocomplete*") => ["autocomplete_foo", "autocomplete_bar", "autocomplete_bat"] irb> $redis.del(keys) => 0 This works fine: irb> $redis.del("autocomplete_foo", "autocomplete_bar") => 2 Am I missing something obvious? The source is just: # Delete a key. def del(*keys) synchronize do @client.call [:del, *keys] end end which looks to me like it should work to pass it an array...?
A little coding exploration of the way the splat operator works: def foo(*keys) puts keys.inspect end >> foo("hi", "there") ["hi", "there"] >> foo(["hi", "there"]) [["hi", "there"]] >> foo(*["hi", "there"]) ["hi", "there"] So passing in a regular array will cause that array to be evaluated as a single item, so that you get an array inside an array within your method. If you preface the array with * when you call the method: $redis.del(*keys) That lets the method know to unpack it/not to accept any further arguments. So that should solve the problem that you're having! Just for the sake of further clarification, this works: >> foo("hello", *["hi", "there"]) This causes a syntax error: >> foo("hello", *["hi", "there"], "world")
Redis
6,061,996
21
I'm trying to use redis-store as my Rails 3 cache_store. I also have an initializer/app_config.rb which loads a yaml file for config settings. In my initializer/redis.rb I have: MyApp::Application.config.cache_store = :redis_store, APP_CONFIG['redis'] However, this doesn't appear to work. If I do: Rails.cache in my rails console I can clearly see it's using the ActiveSupport.Cache.FileStore as the cache store instead of redis-store. However, if I add the config in my application.rb file like this: config.cache_store = :redis_store it works just fine, except the app config initializer is loaded after application.rb, so I don't have access to APP_CONFIG. Has anyone experienced this? I can't seem to set a cache store in an initializer.
After some research, a probable explanation is that the initialize_cache initializer is run way before the rails/initializers are. So if it's not defined earlier in the execution chain then the cache store won't be set. You have to configure it earlier in the chain, like in application.rb or environments/production.rb. My solution was to move the APP_CONFIG loading before the app gets configured, like this: APP_CONFIG = YAML.load_file(File.expand_path('../config.yml', __FILE__))[Rails.env] and then in the same file: config.cache_store = :redis_store, APP_CONFIG['redis'] Another option was to put the cache_store in a before_configuration block, something like this: config.before_configuration do APP_CONFIG = YAML.load_file(File.expand_path('../config.yml', __FILE__))[Rails.env] config.cache_store = :redis_store, APP_CONFIG['redis'] end
Redis
5,810,289
21
I have an application which inserts record to a postgresql table and after the insert, I want to send a PUBLISH command to redis. Is it possible to pass an object of that record to redis' PUBLISH command so the subscriber on the other end will receive the object too?
Redis has no notion of "objects"; all Redis gets are bytes, specifically strings! So when you want to publish an object you have to serialize it in some way and deserialize it on the subscriber.
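For example, a minimal sketch using the redis-py client and JSON; the channel name and record fields are invented for illustration, and any serialization format works as long as both sides agree on it:

    import json
    import redis

    r = redis.Redis()
    record = {"id": 42, "name": "example"}        # hypothetical row just inserted into PostgreSQL
    r.publish("records", json.dumps(record))      # serialize the object to a string before publishing

    # subscriber side: deserialize the payload back into an object
    p = r.pubsub()
    p.subscribe("records")
    for message in p.listen():
        if message["type"] == "message":
            obj = json.loads(message["data"])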
Redis
5,190,914
21
One of the decisions I need to make is what caching framework to use in my system. With so many to choose from, I am currently investigating redis, ehcache and memcached. Can anyone point to performance benchmarks of these three particular frameworks? Also an overview of their features - I am particularly interested in disadvantages, ie. situations where you would use one over the other.
A small feature comparison is here: http://toddrobinson.com/appfabric/appfabric-cache-feature-comparisons/ UPDATE 25.02.2016 Dead link fixed thanks to WebArchive.org: http://web.archive.org/web/20140205010302/http://toddrobinson.com/appfabric/appfabric-cache-feature-comparisons/
Redis
4,208,912
21
Since Redis does not allow re-setting the expire date on a key (because of issues with replication), I'd like to know: is there any method to check whether a key is set to expire or not? Thank you
Use the TTL command. If an expiration is set, it returns the number of seconds until the key expires; otherwise it returns -1.
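For illustration, a quick redis-cli session; note that on Redis 2.8 and later, TTL returns -2 for a key that does not exist at all, while older versions returned -1 in that case too:

    redis> SET mykey "hello"
    OK
    redis> TTL mykey
    (integer) -1          # key exists but has no expiration set
    redis> EXPIRE mykey 120
    (integer) 1
    redis> TTL mykey
    (integer) 120         # seconds remaining until the key expires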
Redis
2,888,340
21
I'm trying to connect to my local Redis server from inside a Docker container, but am unsuccessful. Here is the setup I have done so far: I have Redis up and running on my host machine. I am able to connect to it via redis-cli. I started an interactive Docker container from an Ubuntu image. I have installed redis-tools inside the Docker container. I tried to connect to Redis via redis-cli: redis-cli -h 172.17.0.3 -p 6379 172.17.0.3 is the IP address I got via ifconfig inside the Docker container. I appended bind 0.0.0.0 to my local Redis' redis.conf. After all my setup, I tried running the Docker container. However, I received an error stating "connection refused". I then tried to forward the port 6379 to 6379 when running the Docker container, but received a different error stating the address is already in use. What's the trick I'm not thinking of in order to get a working connection? Thanks in advance!
You should not connect to the IP address of the container, but to the IP of the host (the one you see on the host for the Docker bridge). Looking at your question, it should be 172.17.0.1.
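In other words, assuming the default bridge network where the host is typically reachable at 172.17.0.1 from inside the container, something like this should work (on Docker for Mac/Windows, host.docker.internal can be used instead):

    # inside the container, target the host's bridge address rather than the container's own IP
    redis-cli -h 172.17.0.1 -p 6379 ping
    # Docker Desktop (Mac/Windows) alternative
    redis-cli -h host.docker.internal -p 6379 ping

Also note that publishing -p 6379:6379 is unnecessary here: Redis is listening on the host, not inside the container, which is why the port mapping failed with "address already in use".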
Redis
47,376,417
20
I am using StackExchange.Redis in my application to store key/values. I need to flush the entire db now which Redis is using. I found a way via command How do I delete everything in Redis? but how can I do this with StackExchange.Redis? I was not able to find any method for that? I searched for Empty, RemoveAll etc on IDatabase object and found nothing.
The easiest way is to use FlushDatabase method or FlushDatabaseAsync from IServer ConnectionMultiplexer redis = ConnectionMultiplexer.Connect("localhost,allowAdmin=true"); var server = redis.GetServer("localhost"); server.FlushDatabase();
Redis
35,452,081
20
I'm backing a real-time websocket server application with MongoDB. The client base is growing, and single-threaded performance is no longer enough. I need a pub/sub layer to distribute messages across threads. I would normally go for Redis, but since the app already uses MongoDB, I could avoid the dependency using tailable cursors. However, I worry about performance. How does MongoDB's tailable cursor performance compare to that of Redis for a pub/sub architecture?
Actually, they are very different beasts. A MongoDB tailable cursor would work a bit like a queue. It can work with a capped collection so you do not have to explicitly delete items in the collection. It is quite efficient, but keep in mind that MongoDB will lock the whole collection (the database, actually) at each write operation, so that limits the scalability. Another scalability limitation is the number of connections. Each client connection will add a connection thread in the mongod servers (or mongos). Still, you can expect tens of thousands of items per second without major problems, which may be enough for a range of applications. On the other hand, Redis can generally handle many more connections simultaneously, because each connection does not create a thread (Redis is a single-threaded event loop). It is also extremely CPU efficient, because it does not queue the items at all. With Redis pub/sub, the items are propagated to the subscribers in the same event loop iteration as the publication. The items are not even stored in memory; Redis does not even have a single index to maintain. They are only retrieved from a socket buffer to be pushed into another socket buffer. However, because there is no queuing, delivery of Redis pub/sub messages is not guaranteed at all. If a subscriber is down when a message is published, the message will be lost for that subscriber. With Redis, you can expect hundreds of thousands of items per second on a single core, especially if you use pipelining and multiple publication clients.
Redis
24,761,612
20
I've been tasked to work on a project for a client that has a site which he is estimating will receive 1-2M hits per day. He has an existing database of 58M users that need to get seeded on a per-registration basis for the new brand. Most of the site's content is served up from external API supplied data with most of the data stored on our Mongo setup being profile information and saved API parameters. NginX will be on port 80 and load balancing to a Node cluster on ports 8000 - 8010. My question is what to do about caching. I come from a LAMP background so I'm used to either writing static HTML files with PHP and serving those up to minimize MySQL load or using Memcached for sites that required a higher level of caching. This setup is a bit foreign to me. Which is the most ideal as far as minimal response time and CPU load? 1: Page-level caching with NginX Reference: http://andytson.com/blog/2010/04/page-level-caching-with-nginx/ server { listen 80; servername mysite.com; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; location / { proxy_pass http://localhost:8080/; proxy_cache anonymous; } # don't cache admin folder, send all requests through the proxy location /admin { proxy_pass http://localhost:8080/; } # handle static files directly. Set their expiry time to max, so they'll # always use the browser cache after first request location ~* (css|js|png|jpe?g|gif|ico)$ { root /var/www/${host}/http; expires max; } } 2: Redis as a cache bucket The hash() function is the numbers() function on this page: http://jsperf.com/hashing-strings function hash(str) { var res = 0, len = str.length; for (var i = 0; i < len; i++) { res = res * 31 + str.charCodeAt(i); } return res; } var apiUrl = 'https://www.myexternalapi.com/rest/someparam/someotherparam/?auth=3dfssd6s98d7f09s8df98sdef'; var key = hash(apiUrl).toString(); // 1.8006908172911553e+136 myRedisClient.set(key,theJSONresponse, function(err) {...}); 3: Node write JSON files The hash() function is the numbers() function on this page: http://jsperf.com/hashing-strings function hash(str) { var res = 0, len = str.length; for (var i = 0; i < len; i++) { res = res * 31 + str.charCodeAt(i); } return res; } var fs = require('fs'); var apiUrl = 'https://www.myexternalapi.com/rest/someparam/someotherparam/?auth=3dfssd6s98d7f09s8df98sdef'; var key = hash(apiUrl).toString(); // 1.8006908172911553e+136 fs.writeFile('/var/www/_cache/' + key + '.json', theJSONresponse, function(err) {...}); 4: Varnish in front I did some research and benchmarks like the ones shown on this site are leaning me away from this solution, but I'm still open to consider it if it makes the most sense: http://todsul.com/nginx-varnish
I would do a combination, and use Redis to cache session user API calls that have a short TTL, and use Nginx to cache long term RESTless data and static assets. I wouldn't write JSON files as I imagine the file system IO would be the slowest and most CPU intensive of the options listed.
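For the short-TTL Redis part, the idea is simply to store the serialized API response under a key with an expiry; the key naming and TTL below are hypothetical:

    SETEX cache:api:somehash 60 "{...json response...}"   # cache the response for 60 seconds
    GET cache:api:somehash                                # returns nil once the TTL has elapsed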
Redis
15,555,896
20
I tried to use the Z axis data from SensorEvent.values, but it doesn't detect rotation of my phone in the XY plane, ie. around the Z-axis. I am using this as a reference for the co-ordinate axes. Is it correct? How do I measure that motion using accelerometer values? These games do something similar: Extreme Skater, Doodle Jump. PS: my phone orientation will be landscape.
Essentially, there are 2 cases here: the device is lying flat, and not flat. Flat here means the angle between the surface of the device screen and the world xy plane (I call it the inclination) is less than 25 degrees or larger than 155 degrees. Think of the phone lying flat or tilted up just a little bit from a table. First you need to normalize the accelerometer vector. That is, if g is the vector returned by the accelerometer sensor event values. In code: float[] g = new float[3]; g = event.values.clone(); double norm_Of_g = Math.sqrt(g[0] * g[0] + g[1] * g[1] + g[2] * g[2]); // Normalize the accelerometer vector g[0] = g[0] / norm_Of_g; g[1] = g[1] / norm_Of_g; g[2] = g[2] / norm_Of_g; Then the inclination can be calculated as int inclination = (int) Math.round(Math.toDegrees(Math.acos(g[2]))); Thus if (inclination < 25 || inclination > 155) { // device is flat } else { // device is not flat } For the case of lying flat, you have to use a compass to see how much the device has rotated from the starting position. For the case of not flat, the rotation (tilt) is calculated as follows: int rotation = (int) Math.round(Math.toDegrees(Math.atan2(g[0], g[1]))); Now rotation = 0 means the device is in its normal position. That is portrait without any tilt for most phones, and probably landscape for a tablet. So if you hold a phone as in your picture above and start rotating, the rotation will change, and when the phone is in landscape the rotation will be 90 or -90, depending on the direction of rotation.
Tilt
11,175,599
38
The Android game My Paper Plane is a great example of how to implement tilt controls, but I've been struggling to understand how I can do something similar. I have the following example that uses getOrientation() from the SensorManager. The whole thing is on pastebin here. It just prints the orientation values to text fields. Here is the most relevant snippet: private void computeOrientation() { if (SensorManager.getRotationMatrix(m_rotationMatrix, null, m_lastMagFields, m_lastAccels)) { SensorManager.getOrientation(m_rotationMatrix, m_orientation); /* 1 radian = 57.2957795 degrees */ /* [0] : yaw, rotation around z axis * [1] : pitch, rotation around x axis * [2] : roll, rotation around y axis */ float yaw = m_orientation[0] * 57.2957795f; float pitch = m_orientation[1] * 57.2957795f; float roll = m_orientation[2] * 57.2957795f; /* append returns an average of the last 10 values */ m_lastYaw = m_filters[0].append(yaw); m_lastPitch = m_filters[1].append(pitch); m_lastRoll = m_filters[2].append(roll); TextView rt = (TextView) findViewById(R.id.roll); TextView pt = (TextView) findViewById(R.id.pitch); TextView yt = (TextView) findViewById(R.id.yaw); yt.setText("azi z: " + m_lastYaw); pt.setText("pitch x: " + m_lastPitch); rt.setText("roll y: " + m_lastRoll); } } The problem is that the values this spits out look like nonsense, or at least there's no way to isolate which type of motion the user performed. I've drawn a diagram to indicate the 2 types of motion that I'd like to detect - 1. "tilt" for pitch and 2. "rotate" for roll/steering: (That's an isometric-ish view of a phone in landscape mode, of course) When I tilt the phone forwards and backwards along its long axis - shown by 1. - I expected only 1 of the values to change much, but all of them seem to change drastically. Similarly, if I rotate the phone about an imaginary line that comes out of the screen - shown in 2. - I'd hope that only the roll value changes, but all the values change a lot. The problem is when I calibrate my game - which means recording the current values of the angles x, y and z - I later don't know how to interpret incoming updated angles to say "ok, looks like you have tilted the phone and you want to roll 3 degrees left". It's more like "ok, you've moved the phone and you're a-tiltin' and a-rollin' at the same time", even if the intent was only a roll. Make sense? Any ideas? I've tried using remapCoordinateSystem to see if changing the axis has any effect. No joy. I think I'm missing something fundamental with this :-(
You mixed up the accelerometer and magnetic sensor arrays. The code should be: if (SensorManager.getRotationMatrix(m_rotationMatrix, null, m_lastAccels, m_lastMagFields)) { Check out getRotationMatrix(..)
Tilt
4,576,493
25
I'm trying to create a sprockets preprocessor for Rails that finds .png.rb files in the asset pipeline and uses them to generate png screenshots of various pages in my application. I've read up on this topic quite a bit but I can't seem to find any straightforward documentation on how to get this set up. Help, please? Here's what I have so far: /initializers/sprockets.rb: require 'screenshot_preprocessor' Rails.application.assets.register_mime_type('screenshot/png', '.png.rb') Rails.application.assets.register_preprocessor('screenshot/png', ScreenshotPreprocessor) /lib/screenshot_preprocessor.rb: class ScreenshotPreprocessor # What API do I need to provide here? # - What methods do I need to provide? # - What parameters does Sprockets pass me? # - What do I need to return to Sprockets? end
Okay, I'm still not sure where to find documentation on this. But, by reading Sprockets' source code, playing around with the pry debugger, and reading blog posts from people who have done similar things with Sprockets, I was able to come up with this: /initializers/sprockets.rb: require 'screenshot_generator' Rails.application.assets.register_engine('.screenshot', ScreenshotGenerator) /lib/screenshot_generator.rb: require_relative 'capybara_screenshot' # Don't worry about this, it's not # relevant to this question. class ScreenshotGenerator < Sprockets::Processor def evaluate(context, locals) generator_class = ScreenshotGenerator.get_generator_class(context.pathname) return generator_class.new.generate end private def self.get_generator_class(generator_file) # This evaluates the Ruby code in the given file and returns a class that # can generate a binary string containing an image file. # (Code omitted for brevity) end end This works fine for me now, but I'd really prefer to see some real documentation on how Sprockets preprocessors, postprocessors, and engines work. If anyone finds any such documentation, please post an answer.
Tilt
18,128,633
11
I have launched my application using the Quarkus dev mode (mvn quarkus:dev) and I would like to be able to debug it. How can do that?
When launching a Quarkus app simply using mvn quarkus:dev, the running application is configured to open port 5005 for remote debugging. That means that all you have to do is point your remote debugger to that port and you will be able to debug it in your favorite IDE/lightweight editor. If however you would like to be able to suspend the application until a debugger is connected then simply execute: Maven: mvn quarkus:dev -Dsuspend Gradle: ./gradlew quarkusDev -Dsuspend=true The same port is used (5005) but this time the application doesn't start until a remote debugger is connected. You can use -Ddebug to change the debugging port. UPDATE As of version 2020.3, IntelliJ Ultimate should recognize a quarkus application and automatically create a launch configuration that uses quarkus:dev under the hood.
Quarkus
55,190,015
53
First of all, I have a multi-module maven hierarchy like this: ├── project (parent pom.xml) │   ├── service │   ├── api-library So now to the problem: I am writing a JAX-RS Endpoint in the service module which uses classes in the api-library. When I start Quarkus, I am getting this warning: 13:01:18,784 WARN [io.qua.dep.ste.ReflectiveHierarchyStep] Unable to properly register the hierarchy of the following classes for reflection as they are not in the Jandex index: - com.example.Fruit - com.example.Car Consider adding them to the index either by creating a Jandex index for your dependency or via quarkus.index-dependency properties. These two classes com.example.Fruit and com.example.Car are located in the api-library module. So I think I need to add them to the Jandex index-dependency in the application.properties. But how can I add Jandex index-dependencies into Quarkus?
Quarkus automatically indexes the main module but, when you have additional modules containing CDI beans, entities, objects serialized as JSON, you need to explicitly index them. There are a couple of different (easy to implement) options to do so. Using the Jandex Maven plugin Add the following to the pom.xml of the module you want to index: <build> <plugins> <plugin> <groupId>io.smallrye</groupId> <artifactId>jandex-maven-plugin</artifactId> <version>3.1.2</version> <executions> <execution> <id>make-index</id> <goals> <goal>jandex</goal> </goals> </execution> </executions> </plugin> </plugins> </build> It's the most beneficial option if your dependency is external to your project and you want to build the index once and for all. Using the Gradle Jandex plugin If you are using Gradle, there is a third party plugin allowing to generate a Jandex index: https://github.com/kordamp/jandex-gradle-plugin . Adding an empty META-INF/beans.xml If you add an empty META-INF/beans.xml file in the additional module src/main/resources, the classes will also be indexed. The classes will be indexed by Quarkus itself. Indexing other dependencies If you can't modify the dependency (think of a third-party dependency, for instance), you can still index it by adding an entry to your application.properties: quarkus.index-dependency.<name>.group-id= quarkus.index-dependency.<name>.artifact-id= quarkus.index-dependency.<name>.classifier=(this one is optional) with <name> being a name you choose to identify your dependency.
Quarkus
55,513,502
51
Recently I swapped from Thorntail to Quarkus and I'm facing some difficulties trying to find out how to set environment variables in application.properties. In Thorntail I used something like ${env.HOST: localhost}, which basically means: use the environment variable, and if you don't find anything, use localhost as the default. Is that possible in a Quarkus application.properties? I haven't found any issue on GitHub or anyone who has answered this problem.
In application.properties you can use: somename=${HOST:localhost} which will correctly expand the HOST environment variable and use localhost as the default value if HOST is not set. See this for more information.
Quarkus
55,796,370
25
I have two apps running. App1: reads from AMQ, enriches the message and sends the message to App2 through another AMQ. App2: reads the message and calls another project for processing. I want to debug both apps at the same time and see how the message changes over time. When I start App2 with mvn compile quarkus:dev I get this: [ERROR] Port 5005 in use, not starting in debug mode Of course the app is running, but without the debugger. Is there some way to change the default debug port in Quarkus? PS: I just tried -Dquarkus.debug.port=5006, but nothing happens... Thanks
The -Ddebug system property can be used to specify a debug port as well. In your case, mvn compile quarkus:dev -Ddebug=5006 should work. See this javadoc https://github.com/quarkusio/quarkus/blob/1.8.1.Final/devtools/maven/src/main/java/io/quarkus/maven/DevMojo.java#L140-L166 for more info.
Quarkus
55,289,627
22
I have some configurations in my application.properties file ... quarkus.datasource.url=jdbc:postgresql://...:5432/.... quarkus.datasource.driver=org.postgresql.Driver quarkus.datasource.username=user quarkus.datasource.password=password quarkus.hibernate-orm.database.generation=update ... I have a scheduler with a @Transactional method that takes a long time to finish executing: @ApplicationScoped class MyScheduler { ... @Transactional @Scheduled(every = "7200s") open fun process() { ... my slow process goes here... entityManager.persist(myObject) } } And then, the transactional method receives a timeout error like this: 2019-06-24 20:11:59,874 WARN [com.arj.ats.arjuna] (Transaction Reaper) ARJUNA012117: TransactionReaper::check timeout for TX 0:ffff0a000020:d58d:5cdad26e:81 in state RUN 2019-06-24 20:12:47,198 WARN [com.arj.ats.arjuna] (DefaultQuartzScheduler_Worker-3) ARJUNA012077: Abort called on already aborted atomic action 0:ffff0a000020:d58d:5cdad26e:81 Caused by: javax.transaction.RollbackException: ARJUNA016102: The transaction is not active! Uid is 0:ffff0a000020:d58d:5cdad26e:81 I believe that I must increase the timeout of my transactional method, but I don't know how I can do this. Could someone help me, please?
Seems that this has changed -> it is now possible to set the Transaction timeout: https://quarkus.io/guides/transaction You can configure the default transaction timeout, the timeout that applies to all transactions managed by the transaction manager, via the property: quarkus.transaction-manager.default-transaction-timeout = 240s -> specified as a duration (java.time.Duration format). Default is 60 sec
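If you only need a longer timeout for that one scheduler method rather than globally, recent Quarkus versions also provide a @TransactionConfiguration annotation (io.quarkus.narayana.jta.runtime.TransactionConfiguration); treat the sketch below as an assumption to verify against your Quarkus version (shown in Java, with javax vs jakarta imports depending on the version):

    import io.quarkus.narayana.jta.runtime.TransactionConfiguration;
    import io.quarkus.scheduler.Scheduled;
    import javax.enterprise.context.ApplicationScoped;   // jakarta.* on newer Quarkus
    import javax.transaction.Transactional;              // jakarta.* on newer Quarkus

    @ApplicationScoped
    public class MyScheduler {

        @Transactional
        @TransactionConfiguration(timeout = 7200)  // timeout in seconds, for this transaction only
        @Scheduled(every = "7200s")
        public void process() {
            // long-running work, then entityManager.persist(myObject) as in the question
        }
    }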
Quarkus
56,746,385
21
When I use something like the following in my Quarkus application: @Path("v1") @Produces(APPLICATION_JSON) public class HelloWorldResource { @Inject private SomeBean someBean; } then I get the following warning during the build process. [INFO] [io.quarkus.arc.processor.BeanProcessor] Found unrecommended usage of private members (use package-private instead) in application beans: - @Inject field acme.jaxrs.v1.HelloWorldResource#someBean Everything seems to work just fine, so why is Quarkus suggesting to change private to package-private?
If a property is package-private, Quarkus can inject it directly without requiring any reflection to come into play. That is why Quarkus recommends package-private members for injection as it tries to avoid reflection as much as possible (the reason for this being that less reflection means better performance which is something Quarkus strives to achieve). See section 2 of this guide for more details.
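Concretely, the warning goes away if the injected field from the question is made package-private, i.e. the private modifier is simply dropped; nothing else needs to change:

    @Path("v1")
    @Produces(APPLICATION_JSON)
    public class HelloWorldResource {

        @Inject
        SomeBean someBean;   // package-private: Quarkus can set it directly, no reflection needed
    }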
Quarkus
55,101,095
21
As per the Quarkus documentation : In Quarkus, the preferred datasource and connection pooling implementation is Agroal. But, I don't see any review or comparison of 'Agroal' with the well known JDBC Connection Pooling implementation 'HikariCP'. What makes 'Agroal' better than 'HikariCP', except that BOTH Quarkus and Agroal are from RedHat?
With Agroal you can update the configuration at runtime ("Configuration property overridable at runtime"), while Hikari doesn't support that ("You can't dynamically update the property values by resetting them on the config object"). Another reason is Quarkus integration: Agroal "features first class integration with the other components in Quarkus, such as security, transaction management components, health metrics".
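For completeness, pool sizing with Agroal is plain Quarkus configuration; the property names below are taken from recent Quarkus releases and should be checked against the documentation for the version you use:

    # application.properties - Agroal connection pool tuning (names per recent Quarkus versions)
    quarkus.datasource.jdbc.min-size=2
    quarkus.datasource.jdbc.max-size=16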
Quarkus
60,137,423
19
I have one project to parse some info from a large file. The project uses maven and java: And the structure bellow: When I run the application from my IDEA, I can read the file with: public void buffer() throws IOException { try (InputStream inputStream = getClass().getResourceAsStream("/151279.txt"); BufferedReader reader = new BufferedReader(new InputStreamReader(inputStream))) { String contents = reader.lines() .collect(Collectors.joining(System.lineSeparator())); } } Then, if I run: ./mvnw package java -jar target/file-parser-1.0-SNAPSHOT-runner.jar Everything goes well. Even when I generate the GraalNative jar and run the application from the native generate jar with: ./mvnw package -Pnative -Dquarkus.native.container-build=true java -jar target/file-parser-1.0-SNAPSHOT-native-image-source-jar/file-parser-1.0-SNAPSHOT-runner.jar it all works Well. But then, when I run the commands to build and run with docker, is where I got my error: docker build -f src/main/docker/Dockerfile.native -t quarkus/file-parser docker run -i --rm -p 8080:8080 quarkus/file-parser 2020-03-16 17:48:04,908 ERROR [io.qua.ver.htt.run.QuarkusErrorHandler] (executor-thread-1) HTTP Request to /init failed, error id: 8471ff6c-f124-4e0f-9d83-afe7f066b3a8-1: org.jboss.resteasy.spi.UnhandledException: java.lang.NullPointerException at org.jboss.resteasy.core.ExceptionHandler.handleApplicationException(ExceptionHandler.java:106) at org.jboss.resteasy.core.ExceptionHandler.handleException(ExceptionHandler.java:372) at org.jboss.resteasy.core.SynchronousDispatcher.writeException(SynchronousDispatcher.java:209) at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:496) at org.jboss.resteasy.core.SynchronousDispatcher.lambda$invoke$4(SynchronousDispatcher.java:252) at org.jboss.resteasy.core.SynchronousDispatcher.lambda$preprocess$0(SynchronousDispatcher.java:153) at org.jboss.resteasy.core.interception.jaxrs.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:363) at org.jboss.resteasy.core.SynchronousDispatcher.preprocess(SynchronousDispatcher.java:156) at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:238) at io.quarkus.resteasy.runtime.standalone.RequestDispatcher.service(RequestDispatcher.java:73) at io.quarkus.resteasy.runtime.standalone.VertxRequestHandler.dispatch(VertxRequestHandler.java:120) at io.quarkus.resteasy.runtime.standalone.VertxRequestHandler.access$000(VertxRequestHandler.java:36) at io.quarkus.resteasy.runtime.standalone.VertxRequestHandler$1.run(VertxRequestHandler.java:85) at org.jboss.threads.ContextClassLoaderSavingRunnable.run(ContextClassLoaderSavingRunnable.java:35) at org.jboss.threads.EnhancedQueueExecutor.safeRun(EnhancedQueueExecutor.java:2011) at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.doRunTask(EnhancedQueueExecutor.java:1535) at org.jboss.threads.EnhancedQueueExecutor$ThreadBody.run(EnhancedQueueExecutor.java:1395) at org.jboss.threads.DelegatingRunnable.run(DelegatingRunnable.java:29) at org.jboss.threads.ThreadLocalResettingRunnable.run(ThreadLocalResettingRunnable.java:29) at java.lang.Thread.run(Thread.java:748) at org.jboss.threads.JBossThread.run(JBossThread.java:479) at com.oracle.svm.core.thread.JavaThreads.threadStartRoutine(JavaThreads.java:460) at com.oracle.svm.core.posix.thread.PosixJavaThreads.pthreadStartRoutine(PosixJavaThreads.java:193) Caused by: java.lang.NullPointerException at java.io.Reader.<init>(Reader.java:78) at 
java.io.InputStreamReader.<init>(InputStreamReader.java:72) at com.erickmob.fileparser.service.ParseService.buffer(ParseService.java:74) at com.erickmob.fileparser.service.ParseService_ClientProxy.buffer(ParseService_ClientProxy.zig:98) at com.erickmob.fileparser.resource.ParseResource.hello(ParseResource.java:27) at java.lang.reflect.Method.invoke(Method.java:498) at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:151) at org.jboss.resteasy.core.MethodInjectorImpl.lambda$invoke$3(MethodInjectorImpl.java:122) at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616) at java.util.concurrent.CompletableFuture.uniApplyStage(CompletableFuture.java:628) at java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:1996) at java.util.concurrent.CompletableFuture.thenApply(CompletableFuture.java:110) at org.jboss.resteasy.core.MethodInjectorImpl.invoke(MethodInjectorImpl.java:122) at org.jboss.resteasy.core.ResourceMethodInvoker.internalInvokeOnTarget(ResourceMethodInvoker.java:594) at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTargetAfterFilter(ResourceMethodInvoker.java:468) at org.jboss.resteasy.core.ResourceMethodInvoker.lambda$invokeOnTarget$2(ResourceMethodInvoker.java:421) at org.jboss.resteasy.core.interception.jaxrs.PreMatchContainerRequestContext.filter(PreMatchContainerRequestContext.java:363) at org.jboss.resteasy.core.ResourceMethodInvoker.invokeOnTarget(ResourceMethodInvoker.java:423) at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:391) at org.jboss.resteasy.core.ResourceMethodInvoker.lambda$invoke$1(ResourceMethodInvoker.java:365) at java.util.concurrent.CompletableFuture.uniComposeStage(CompletableFuture.java:995) at java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:2137) at java.util.concurrent.CompletableFuture.thenCompose(CompletableFuture.java:110) at org.jboss.resteasy.core.ResourceMethodInvoker.invoke(ResourceMethodInvoker.java:365) at org.jboss.resteasy.core.SynchronousDispatcher.invoke(SynchronousDispatcher.java:477) ... 19 more Does anyone can help me with this? How can I read a txt file on src/main/resources on a Docker Container? Dockerfile.Native: #### # This Dockerfile is used in order to build a container that runs the Quarkus application in native (no JVM) mode # # Before building the docker image run: # # mvn package -Pnative -Dquarkus.native.container-build=true # # Then, build the image with: # # docker build -f src/main/docker/Dockerfile.native -t quarkus/file-parser . # # Then run the container using: # # docker run -i --rm -p 8080:8080 quarkus/file-parser # ### FROM registry.access.redhat.com/ubi8/ubi-minimal:8.1 WORKDIR /work/ COPY target/*-runner /work/application # set up permissions for user `1001` RUN chmod 775 /work /work/application \ && chown -R 1001 /work \ && chmod -R "g+rwX" /work \ && chown -R 1001:root /work EXPOSE 8080 USER 1001 CMD ["./application", "-Dquarkus.http.host=0.0.0.0"] DockerFile.jvm #### # This Dockerfile is used in order to build a container that runs the Quarkus application in JVM mode # # Before building the docker image run: # # mvn package # # Then, build the image with: # # docker build -f src/main/docker/Dockerfile.jvm -t quarkus/file-parser-jvm . 
# # Then run the container using: # # docker run -i --rm -p 8080:8080 quarkus/file-parser-jvm # ### FROM registry.access.redhat.com/ubi8/ubi-minimal:8.1 ARG JAVA_PACKAGE=java-1.8.0-openjdk-headless ARG RUN_JAVA_VERSION=1.3.5 ENV LANG='en_US.UTF-8' LANGUAGE='en_US:en' # Install java and the run-java script # Also set up permissions for user `1001` RUN microdnf install openssl curl ca-certificates ${JAVA_PACKAGE} \ && microdnf update \ && microdnf clean all \ && mkdir /deployments \ && chown 1001 /deployments \ && chmod "g+rwX" /deployments \ && chown 1001:root /deployments \ && curl https://repo1.maven.org/maven2/io/fabric8/run-java-sh/${RUN_JAVA_VERSION}/run-java-sh-${RUN_JAVA_VERSION}-sh.sh -o /deployments/run-java.sh \ && chown 1001 /deployments/run-java.sh \ && chmod 540 /deployments/run-java.sh \ && echo "securerandom.source=file:/dev/urandom" >> /etc/alternatives/jre/lib/security/java.security # Configure the JAVA_OPTIONS, you can add -XshowSettings:vm to also display the heap size. ENV JAVA_OPTIONS="-Dquarkus.http.host=0.0.0.0 -Djava.util.logging.manager=org.jboss.logmanager.LogManager" COPY target/lib/* /deployments/lib/ COPY target/*-runner.jar /deployments/app.jar EXPOSE 8080 USER 1001 ENTRYPOINT [ "/deployments/run-java.sh" ] Refs: https://www.baeldung.com/java-classpath-resource-cannot-be-opened
You need to make sure that the resource is included in the native image (it isn't by default). Add a src/main/resources/resources-config.json that includes something like: { "resources": [ { "pattern": "151279\\.txt$" } ] } You will also need to set the following property: quarkus.native.additional-build-args =-H:ResourceConfigurationFiles=resources-config.json See this for more details.
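As an alternative to the JSON file, newer Quarkus versions also expose a configuration property for including resources in the native image; verify that it exists in the version you are on:

    # application.properties
    quarkus.native.resources.includes=151279.txt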
Quarkus
60,711,034
16
I plan to use PostgreSQL as the database for my Quarkus application but I would like the convenience of using H2 in my tests. Is there a way I can accomplish such a feat?
Update 2 Recent versions of Quarkus can launch H2 automatically in dev and test mode when quarkus-jdbc-h2 is on the classpath and no URL configuration is provided. See this for more information. Also, you should favor the datasource "kind" when configuring the driver, instead of pointing to a driver explicitly. In short: Add io.quarkus:quarkus-jdbc-h2 as a test scoped dependency Configure your app this: quarkus.datasource.kind=postgresql %test.quarkus.datasource.kind=h2 # Only set this in the prod profile, otherwise tests won't work %prod.quarkus.datasource.jdbc.url=jdbc:postgresql://mypostgres:5432 Update As of version 1.13, Quarkus can launch H2 automatically in dev and test mode when quarkus-jdbc-h2 is on the classpath and no URL configuration is provided. See this for more information. Original answer Quarkus provides the H2DatabaseTestResource which starts an in memory H2 database as part of the test process. You will need to add io.quarkus:quarkus-test-h2 as a test scoped dependency and annotate your test with @QuarkusTestResource(H2DatabaseTestResource.class). You will also need to have something like: quarkus.datasource.url=jdbc:h2:tcp://localhost/mem:test quarkus.datasource.driver=org.h2.Driver in src/test/resources/application.properties In order for the application use PostgreSQL as part of its regular run, quarkus-jdbc-postgresql should be a dependency and quarkus.datasource.url=jdbc:postgresql://mypostgres:5432 quarkus.datasource.driver=org.postgresql.Driver should be set in src/main/resources/application.properties
Quarkus
55,063,778
16
Apologies if this has been answered before, but I can't seem to find a good answer. What is the context of how @QuarkusTest runs versus @QuarkusIntegrationTest? So far, all I've got is that the integration test runs against a packaged form of the app (.jar, native compilation), whereas the plain @QuarkusTest doesn't. But that doesn't explain much, and apologies if this comes from a lack of understanding of test runtimes. To start a test instance of Quarkus (via @QuarkusTest), does it not compile and package into a jar? It makes sense not to, I suppose, and just test against the running compiled classes, but I would rather get the real answer than assume. https://quarkus.io/guides/getting-started-testing#native-executable-testing
Besides the difference you mention, there's another crucial difference between @QuarkusTest and @QuarkusIntegrationTest. With @QuarkusTest, the test runs in the same process as the tested application, so you can inject the application's beans into the test instance etc., while with @QuarkusIntegrationTest, the tested application runs in an external process, so you can only interact with it over network.
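A minimal sketch of that practical difference; GreetingService and the /hello endpoint are hypothetical stand-ins, not types from the question:

    import io.quarkus.test.junit.QuarkusIntegrationTest;
    import io.quarkus.test.junit.QuarkusTest;
    import io.restassured.RestAssured;
    import jakarta.inject.Inject;   // javax.inject.Inject on older Quarkus
    import org.junit.jupiter.api.Assertions;
    import org.junit.jupiter.api.Test;

    @QuarkusTest
    class GreetingServiceTest {

        @Inject
        GreetingService service;   // same JVM as the application, so beans can be injected

        @Test
        void greets() {
            Assertions.assertEquals("hello", service.greet());
        }
    }

    @QuarkusIntegrationTest
    class GreetingResourceIT {     // tested application runs in a separate process

        @Test
        void greetsOverHttp() {
            RestAssured.given().when().get("/hello").then().statusCode(200);   // network-only interaction
        }
    }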
Quarkus
71,022,392
15
I would like my Quarkus application to run on a port other than the default. How can I accomplish that?
The Quarkus configuration property to be used is quarkus.http.port (the default value is 8080). If this property is set in application.properties then that value will be used. The property can also be overridden at runtime as follows: When running a Quarkus application in JVM mode you can set the port using the quarkus.http.port System property. For example: java -Dquarkus.http.port=8081 -jar example-runner.jar The same property applies to GraalVM Native Mode images. For example: ./example-runner -Dquarkus.http.port=8081
Quarkus
55,043,620
15
A Spring Boot application provides a property to set the web console URL of the H2 database: spring.h2.console.path=/h2 Is there a way to set this same property in a Quarkus application? If not, then what is the default web console URL?
Yes, there is a way. But it's not quite as simple as in Spring Boot because Quarkus does not do the same first-class support for H2 as Spring Boot does. First, you need to activate Servlet support in Quarkus. Then, you go ahead and configure the H2 servlet in a web.xml deployment descriptor or in a undertow-handlers.conf if you're familiar with it. Here we go: Assuming that you already have the quarkus-jdbc-h2 extension added Add the quarkus-vertx and quarkus-undertow extensions Create the deployment descriptor under src/main/resources/META-INF/web.xml Configure the H2 console Servlet like so <?xml version="1.0" encoding="UTF-8"?> <!DOCTYPE web-app PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN" "http://java.sun.com/dtd/web-app_2_3.dtd"> <web-app> <display-name>My Web Application</display-name> <servlet> <servlet-name>h2-console</servlet-name> <servlet-class>org.h2.server.web.WebServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>h2-console</servlet-name> <url-pattern>/h2/*</url-pattern> </servlet-mapping> </web-app> Run ./mvnw quarkus:dev and go to http://localhost:8080/h2 where the console should show up. If you need to set a parameter use <init-param> like e.g.: <servlet> <servlet-name>h2-console</servlet-name> <servlet-class>org.h2.server.web.WebServlet</servlet-class> <init-param> <param-name>webAllowOthers</param-name> <param-value>true</param-value> </init-param> </servlet> http://www.h2database.com/html/tutorial.html#usingH2ConsoleServlet
Quarkus
61,853,691
14
I've created a quarkus quick start project with mvn io.quarkus:quarkus-maven-plugin:0.13.1:create \ -DprojectGroupId=com.demo.quarkus \ -DprojectArtifactId=quarkus-project \ -DclassName="com.demo.quarkus.HelloResource" \ -Dpath="/hello" And afterwards when I run: mvn clean package I get the following error: java.lang.RuntimeException: java.net.BindException: Address already in use: bind at io.undertow.Undertow.start(Undertow.java:247) at io.quarkus.undertow.runtime.UndertowDeploymentTemplate.doServerStart(UndertowDeploymentTemplate.java:349) at io.quarkus.undertow.runtime.UndertowDeploymentTemplate.startUndertow(UndertowDeploymentTemplate.java:262) at io.quarkus.deployment.steps.UndertowBuildStep$boot9.deploy(Unknown Source) at io.quarkus.runner.ApplicationImpl1.doStart(Unknown Source) at io.quarkus.runtime.Application.start(Application.java:96) at io.quarkus.runner.RuntimeRunner.run(RuntimeRunner.java:119) at io.quarkus.test.junit.QuarkusTestExtension.doJavaStart(QuarkusTestExtension.java:236) at io.quarkus.test.junit.QuarkusTestExtension.createTestInstance(QuarkusTestExtension.java:301) at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.invokeTestInstanceFactory(ClassTestDescriptor.java:299) at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.instantiateTestClass(ClassTestDescriptor.java:289) at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.instantiateTestClass(ClassTestDescriptor.java:281) at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.instantiateAndPostProcessTestInstance(ClassTestDescriptor.java:269) at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.lambda$testInstancesProvider$2(ClassTestDescriptor.java:259) at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.lambda$testInstancesProvider$3(ClassTestDescriptor.java:263) at java.util.Optional.orElseGet(Optional.java:267) at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.lambda$testInstancesProvider$4(ClassTestDescriptor.java:262) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.lambda$prepare$0(TestMethodTestDescriptor.java:98) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.prepare(TestMethodTestDescriptor.java:97) at org.junit.jupiter.engine.descriptor.TestMethodTestDescriptor.prepare(TestMethodTestDescriptor.java:68) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$prepare$1(NodeTestTask.java:107) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.prepare(NodeTestTask.java:107) at org.junit.platform.engine.support.hierarchical.NodeTestTask.execute(NodeTestTask.java:75) at java.util.ArrayList.forEach(ArrayList.java:1257) at org.junit.platform.engine.support.hierarchical.SameThreadHierarchicalTestExecutorService.invokeAll(SameThreadHierarchicalTestExecutorService.java:38) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$5(NodeTestTask.java:139) at org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$7(NodeTestTask.java:125) at org.junit.platform.engine.support.hierarchical.Node.around(Node.java:135) at org.junit.platform.engine.support.hierarchical.NodeTestTask.lambda$executeRecursively$8(NodeTestTask.java:123) at 
org.junit.platform.engine.support.hierarchical.ThrowableCollector.execute(ThrowableCollector.java:73)
    ... (JUnit platform and IntelliJ runner frames omitted) ...
    at com.intellij.rt.execution.junit.JUnitStarter.main(JUnitStarter.java:70)
Caused by: java.net.BindException: Address already in use: bind
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.Net.bind(Net.java:425)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:74)
    at org.xnio.nio.NioXnioWorker.createTcpConnectionServer(NioXnioWorker.java:178)
    at org.xnio.XnioWorker.createStreamConnectionServer(XnioWorker.java:310)
    at io.undertow.Undertow.start(Undertow.java:193)
    ... 56 more

The same failure is then reported a second time (duplicate frames omitted):

org.junit.jupiter.api.extension.TestInstantiationException: TestInstanceFactory [io.quarkus.test.junit.QuarkusTestExtension] failed to instantiate test class [com.baeldung.quarkus.HelloResourceTest]: Failed to start quarkus
    at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.invokeTestInstanceFactory(ClassTestDescriptor.java:314)
    ... (JUnit platform and IntelliJ runner frames omitted) ...
Caused by: java.lang.RuntimeException: Failed to start quarkus
    at io.quarkus.runner.ApplicationImpl1.doStart(Unknown Source)
    at io.quarkus.runtime.Application.start(Application.java:96)
    at io.quarkus.runner.RuntimeRunner.run(RuntimeRunner.java:119)
    at io.quarkus.test.junit.QuarkusTestExtension.doJavaStart(QuarkusTestExtension.java:236)
    at io.quarkus.test.junit.QuarkusTestExtension.createTestInstance(QuarkusTestExtension.java:301)
    at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.invokeTestInstanceFactory(ClassTestDescriptor.java:299)
    ... 47 more
Caused by: java.lang.RuntimeException: java.net.BindException: Address already in use: bind
    at io.undertow.Undertow.start(Undertow.java:247)
    at io.quarkus.undertow.runtime.UndertowDeploymentTemplate.doServerStart(UndertowDeploymentTemplate.java:349)
    at io.quarkus.undertow.runtime.UndertowDeploymentTemplate.startUndertow(UndertowDeploymentTemplate.java:262)
    at io.quarkus.deployment.steps.UndertowBuildStep$boot9.deploy(Unknown Source)
    ... 53 more
Caused by: java.net.BindException: Address already in use: bind
    at sun.nio.ch.Net.bind0(Native Method)
    at sun.nio.ch.Net.bind(Net.java:433)
    at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:223)
    at org.xnio.nio.NioXnioWorker.createTcpConnectionServer(NioXnioWorker.java:178)
    at io.undertow.Undertow.start(Undertow.java:193)
    ... 56 more

The origin of the error seems to be coming from this test:

@QuarkusTest
public class HelloResourceTest {

    @Test
    public void testHelloEndpoint() {
        given()
          .when().get("/hello/filipe")
          .then()
             .statusCode(200)
             .body(is("hello"));
    }
}

I'm not sure which port it is trying to bind to. I suppose 8080.
Any ideas how to override the default port using application.properties? I'm on Windows. Thanks!
It seems that besides the normal HTTP port, there is also a separate default port used during tests. So in order to override the port used in tests, the following property needs to be overridden: quarkus.http.test-port=8888 This will run the tests using port 8888. The answer to this question pointed me in the right direction: How can I configure the port a Quarkus application runs on? As well as the comment pointing to https://github.com/quarkusio/quarkus/blob/0.14.0/extensions/undertow/runtime/src/main/java/io/quarkus/undertow/runtime/HttpConfig.java More info here: https://quarkus.io/guides/getting-started-testing#controlling-the-test-port
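For context, a minimal application.properties sketch showing how the two ports relate (8888 is just an example value; 8080 and, as far as I know, 8081 are the defaults):

# port used for a normal run (default 8080)
quarkus.http.port=8080
# port used by @QuarkusTest during tests (default 8081)
quarkus.http.test-port=8888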
Quarkus
57,095,830
14
When I run my Quarkus application it listens/binds to localhost only by default. How can I alter this behavior?
UPDATE With the inclusion of this PR in Quarkus, starting with version 0.12.0 the configuration explained in the following section will no longer be needed since Quarkus will use 0.0.0.0 as the default host. By default Quarkus only listens on localhost (127.0.0.1). To make Quarkus listen on all network interfaces (something that is very handy, for example, when running inside a Docker container or Kubernetes Pod), the quarkus.http.host property needs to be set. If you always want your Quarkus application to listen on all interfaces you can set quarkus.http.host=0.0.0.0 in your application.properties (under src/main/resources). If you would rather keep the default setting and only override at runtime, you can do that as follows: When running a Quarkus application in JVM mode you can set the host using the quarkus.http.host system property, setting it to 0.0.0.0. For example: java -Dquarkus.http.host=0.0.0.0 -jar example-runner.jar The same property applies to GraalVM native images. For example: ./example-runner -Dquarkus.http.host=0.0.0.0
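If you prefer to keep the override in configuration but limit it to a profile, a minimal application.properties sketch could look like this (the %dev prefix is the standard Quarkus dev-profile syntax; pick one of the two lines, they are alternatives):

# listen on all interfaces in every profile...
quarkus.http.host=0.0.0.0
# ...or only while developing
%dev.quarkus.http.host=0.0.0.0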
Quarkus
55,043,764
14
I have a value configured in my Quarkus application.properties skipvaluecheck=true Now whenever I execute my tests, I want this value to be set to false instead of true. But I do not want to change it in application.properties because it will affect the latest application deployment. I just want my tests to be executed with the value false so that my test coverage goes green in Sonar. From Java code, I fetch this value by doing the following: ConfigProvider.getConfig().getValue("skipvaluecheck", Boolean.class); Something similar already exists in Spring Boot and I am curious if such a thing also exists in Quarkus: Override default Spring-Boot application.properties settings in Junit Test
You need to define an implementation of io.quarkus.test.junit.QuarkusTestProfile and add it to the test via @TestProfile. Something like:

@QuarkusTest
@TestProfile(MyTest.BuildTimeValueChangeTestProfile.class)
public class MyTest {

    @Test
    public void testSomething() {
    }

    public static class BuildTimeValueChangeTestProfile implements QuarkusTestProfile {

        @Override
        public Map<String, String> getConfigOverrides() {
            return Map.of("skipvaluecheck", "false");
        }
    }
}

More details can be found here
Quarkus
69,267,442
13
I am looking for a way to change the log level of one or multiple classes/packages of a Quarkus app (JVM) during runtime. Is there an API I can use to programmatically change the levels, e.g. by exposing a REST API or does there already exist some other solution? I am aware of https://quarkus.io/guides/logging but this only discusses changing the log levels statically via a JVM property or applications.properties.
Apparently Quarkus uses java.util.logging under the hood, so I created a simple REST resource like this:

import javax.ws.rs.*;
import java.util.logging.*;

@Path("/logging")
public class LoggingResource {

    private static Level getLogLevel(Logger logger) {
        for (Logger current = logger; current != null; current = current.getParent()) {
            Level level = current.getLevel();
            if (level != null)
                return level;
        }
        return Level.INFO;
    }

    @GET
    @Path("/{logger}")
    @Produces("text/plain")
    public String logger(@PathParam("logger") String loggerName, @QueryParam("level") String level) {
        // get the logger instance
        Logger logger = Logger.getLogger(loggerName);

        // change the log-level if requested
        if (level != null && level.length() > 0)
            logger.setLevel(Level.parse(level));

        // return the current log-level as a string
        return getLogLevel(logger).getName();
    }
}

Now I can get the current log level like this: curl http://myserver:8080/logging/com.example.mypackage And set the log level like this: curl http://myserver:8080/logging/com.example.mypackage?level=DEBUG
Quarkus
63,250,947
13
I would like to add an HTTP interceptor to my Quarkus application so I can intercept all HTTP requests. How can that be achieved?
Quarkus uses RESTEasy as its JAX-RS engine. That means that you can take advantage of all of RESTEasy's features, including Filters and Interceptors. For example to create a very simple security mechanism, all you would need to do is add code like the following: @Provider public class SecurityInterceptor implements ContainerRequestFilter { @Override public void filter(ContainerRequestContext context) { if ("/secret".equals(context.getUriInfo().getPath())) { context.abortWith(Response.accepted("forbidden!").build()); } } } It should be noted that this only works for requests that are handled by JAX-RS in Quarkus. If the requests are handled by pure Vert.x or Undertow, the filtering mechanisms of those stacks will need to be used. UPDATE When using RESTEasy Reactive with Quarkus, the @ServerRequestFilter annotation can be used instead of implementing ContainerRequestFilter. See this for more information class Filters { @ServerRequestFilter(preMatching = true) public void preMatchingFilter(ContainerRequestContext requestContext) { // make sure we don't lose cheese lovers if("yes".equals(requestContext.getHeaderString("Cheese"))) { requestContext.setRequestUri(URI.create("/cheese")); } } @ServerRequestFilter public Optional<RestResponse<Void>> getFilter(ContainerRequestContext ctx) { // only allow GET methods for now if(ctx.getMethod().equals(HttpMethod.GET)) { return Optional.of(RestResponse.status(Response.Status.METHOD_NOT_ALLOWED)); } return Optional.empty(); } }
Quarkus
56,448,061
13
I am using a Quarkus application with the Hibernate extension and I would like Hibernate to show the generated SQL queries. I am not sure how that can be accomplished. What's the proper way to configure such a feature?
The Quarkus property that controls this behavior is quarkus.hibernate-orm.log.sql (which is set to false by default). By simply setting quarkus.hibernate-orm.log.sql=true in application.properties, Quarkus will show and format the SQL queries that Hibernate issues to the database. Note that the Hibernate configuration is not overridable at runtime. For a complete set of properties that can be used to control Quarkus/Hibernate behavior, see this guide
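For reference, a minimal application.properties sketch; quarkus.hibernate-orm.log.sql is the property from the answer, while the format-sql property and the org.hibernate.type.descriptor.sql log category are standard Hibernate knobs that I believe also apply here:

# log the SQL statements Hibernate issues
quarkus.hibernate-orm.log.sql=true
# pretty-print the statements (I believe this defaults to true)
quarkus.hibernate-orm.log.format-sql=true
# optionally log bound parameter values via the Hibernate category
quarkus.log.category."org.hibernate.type.descriptor.sql".level=TRACE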
Quarkus
55,044,148
13
I'm getting this error below when I try to call a DynamoDB AWS service: Multiple HTTP implementations were found on the classpath. To avoid non-deterministic loading implementations, please explicitly provide an HTTP client via the client builders, set the software.amazon.awssdk.http.service.impl system property with the FQCN of the HTTP service to use as the default, or remove all but one HTTP implementation from the classpath I'm using DynamoDbClient My pom.xml: <dependency> <groupId>software.amazon.awssdk</groupId> <artifactId>dynamodb</artifactId> </dependency> <dependency> <groupId>software.amazon.awssdk</groupId> <artifactId>url-connection-client</artifactId> </dependency> Client configuration: @Singleton public class DynamoClientConfig { @Produces @ApplicationScoped public DynamoDbClient getClientDb() { return DynamoDbClient.builder().region(Region.SA_EAST_1).build(); } I'm using Java and Quarkus. Does anyone know what it could be and how to fix it?
Sorted out! I added this parameter to the DynamoDbClient builder and it worked: .httpClient(UrlConnectionHttpClient.builder().build())
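To put that in context, a minimal sketch of the producer from the question with the explicit HTTP client (imports are from AWS SDK v2; adjust the region to your own):

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;
import javax.inject.Singleton;
import software.amazon.awssdk.http.urlconnection.UrlConnectionHttpClient;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;

@Singleton
public class DynamoClientConfig {

    @Produces
    @ApplicationScoped
    public DynamoDbClient getClientDb() {
        // explicitly pick the URL connection client so the SDK does not have to
        // choose between multiple HTTP implementations found on the classpath
        return DynamoDbClient.builder()
                .region(Region.SA_EAST_1)
                .httpClient(UrlConnectionHttpClient.builder().build())
                .build();
    }
}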
Quarkus
73,129,078
12
In Spring, it is possible to set the Logging Category Level via environment variables. I've tried the same in a Quarkus application with the following logger declaration: package org.my.group.resteasyjackson; public class JacksonResource { private static final Logger LOGGER = LoggerFactory.getLogger(JacksonResource.class); @GET public Set<Quark> list() { LOGGER.info("Hello"); return quarks; } } Executing the build artifact with QUARKUS_LOG_CATEGORY_ORG_MY_LEVEL=WARN java -jar my-artifactId-my-version-runner.jar will log anything at info level (since it is the default), therefore the "Hello" message. However, inserting quarkus.log.category."org.my".level=WARN in the application.properties file works as desired. Are environment variables in this use case not usable for Quarkus applications?
Just tried with quarkus 1.13.1 and adding extra underscores for the quotes seems to work, try: QUARKUS_LOG_CATEGORY__ORG_MY__LEVEL=WARN
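In other words, assuming the org.my package from the question, these two forms should be equivalent (the double underscore stands in for the quotes around the category name):

# application.properties
quarkus.log.category."org.my".level=WARN

# environment variable (shell)
export QUARKUS_LOG_CATEGORY__ORG_MY__LEVEL=WARN
java -jar my-artifactId-my-version-runner.jar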
Quarkus
65,503,420
12
When starting a quarkus jar, I don't see any server starting up, all I see is: C:\Java Projects\quarkus-demo\target>java -jar quarkus-demo-1.0-SNAPSHOT-runner.jar 2020-01-04 18:25:54,199 WARN [io.qua.net.run.NettyRecorder] (Thread-1) Localhost lookup took more than one second, you ne ed to add a /etc/hosts entry to improve Quarkus startup time. See https://thoeni.io/post/macos-sierra-java/ for details. 2020-01-04 18:25:54,521 INFO [io.quarkus] (main) quarkus-demo 1.0-SNAPSHOT (running on Quarkus 1.1.0.Final) started in 3. 231s. Listening on: http://0.0.0.0:8080 2020-01-04 18:25:54,522 INFO [io.quarkus] (main) Profile prod activated. 2020-01-04 18:25:54,522 INFO [io.quarkus] (main) Installed features: [cdi, resteasy, resteasy-jackson] What does that mean? Is it not using an application server? Is Quarkus an application server or something similar? I can't seem to find any information online about this.
Quarkus uses Vert.x/Netty. From https://developers.redhat.com/blog/2019/11/18/how-quarkus-brings-imperative-and-reactive-programming-together/: Quarkus uses Vert.x and Netty at its core. And, it uses a bunch of reactive frameworks and extensions on top to help developers. Quarkus is not just for HTTP microservices, but also for event-driven architecture. Its reactive nature makes it very efficient when dealing with messages (e.g., Apache Kafka or AMQP). The secret behind this is to use a single reactive engine for both imperative and reactive code. Quarkus does this quite brilliantly. Between imperative and reactive, the obvious choice is to have a reactive core. What that helps with is fast non-blocking code that handles almost everything going via the event-loop thread (IO thread). But, if you were creating a typical REST application or a client-side application, Quarkus also gives you the imperative programming model. For example, Quarkus HTTP support is based on a non-blocking and reactive engine (Eclipse Vert.x and Netty). All the HTTP requests your application receives are handled by event loops (IO thread) and then are routed towards the code that manages the request. Depending on the destination, it can invoke the code managing the request on a worker thread (servlet, Jax-RS) or use the IO thread (reactive route).
Quarkus
59,592,676
12
In the Quarkus Application Configuration Guide it mentions how to configure an app with profiles (eg. %dev.quarkus.http.port=8181). But is there a way to access a Profile (or Environment) API so I can log the active profiles ? For example something like Spring: @ApplicationScoped public class ApplicationLifeCycle { private static final Logger LOGGER = LoggerFactory.getLogger("ApplicationLifeCycle"); @Inject Environment env; void onStart(@Observes StartupEvent ev) { LOGGER.info("The application is starting with profiles " + env.getActiveProfiles()); }
ProfileManager.getActiveProfile()?
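A minimal sketch of how that could be used in the ApplicationLifeCycle bean from the question, assuming the io.quarkus.runtime.configuration.ProfileManager class that ships with Quarkus 1.x:

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.event.Observes;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import io.quarkus.runtime.StartupEvent;
import io.quarkus.runtime.configuration.ProfileManager;

@ApplicationScoped
public class ApplicationLifeCycle {

    private static final Logger LOGGER = LoggerFactory.getLogger("ApplicationLifeCycle");

    void onStart(@Observes StartupEvent ev) {
        // ProfileManager returns the single active profile name, e.g. "dev", "test" or "prod"
        LOGGER.info("The application is starting with profile " + ProfileManager.getActiveProfile());
    }
}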
Quarkus
56,617,504
12
Quarkus getting started unittest describes how to mock injected services. However when trying to apply this to an injected rest client this does not seem to work. In my application the class attribute to be injected is defined like this @Inject @RestClient MyService myService; In my test code I created a mock service like this: @Alternative() @Priority(1) @ApplicationScoped public class MockMyService extends MyService { @Override public MyObject myServicemethos() { return new MyObject(); } } Please note that this service is not registered or annotated as a RestClient. Running my unittests like this gives the following error: org.junit.jupiter.api.extension.TestInstantiationException: TestInstanceFactory [io.quarkus.test.junit.QuarkusTestExtension] failed to instantiate test class [...MyMediatorTest]: io.quarkus.builder.BuildException: Build failure: Build failed due to errors [error]: Build step io.quarkus.arc.deployment.ArcAnnotationProcessor#build threw an exception: javax.enterprise.inject.spi.DeploymentException: javax.enterprise.inject.UnsatisfiedResolutionException: Unsatisfied dependency for type ...MyService and qualifiers [@RestClient] - java member: ...MyMediator#myService - declared on CLASS bean [types=[java.lang.Object, ...MyMediator], qualifiers=[@Default, @Any], target=...MyMediator] at org.junit.jupiter.engine.descriptor.ClassTestDescriptor.invokeTestInstanceFactory(ClassTestDescriptor.java:314) ... I can probably overcome this by adding an additional service layer. But that feels like heading in the wrong direction. How can I solve this. Kind regards, misl
I just hit the same problem. There have been updates in the documentation and some corner cases that I faced, but a Google search sent me here first, so I'll add my investigation results for future readers. According to the documentation you no longer need to create a mock class https://quarkus.io/guides/getting-started-testing#using-injectmock-with-restclient but can do it like this:

Service class:
@RegisterRestClient(configKey = "country-api")
@ApplicationScoped
public interface MyService

Service usage:
@Inject
@RestClient
MyService myService;

Mock it in the test like:
@InjectMock
@RestClient
MyService myService;

So far so good, but if you follow the documentation https://quarkus.io/guides/rest-client and need a configKey, you will probably end up with something like this in your config file:

# Your configuration properties
country-api/mp-rest/url=https://restcountries.eu/rest
country-api/mp-rest/scope=javax.inject.Singleton

And then you will hit these issues: Ability to use InjectMock on MicroProfile RestClient # 8622, Usage of InjectMock on MicroProfile RestClient # 9630, Error when trying to Test RestClient # 12585. They say: if you are using configKey with @RegisterRestClient, take care NOT to have country-api/mp-rest/scope=javax.inject.Singleton in the config file, because it takes precedence over the @ApplicationScoped annotation on the MyService interface.
Quarkus
56,393,462
12
I want to send a simple POST request to another application to trigger some action there. I have a quarkus project and want to send the request from inside my CreateEntryHandler - is this possible in a simple way? Or do I need to add something like Apache Httpclient to my project? Does it make sense in combination with quarkus?
The other application, I assume, has an API endpoint? Let's say that the API endpoint you are trying to call in the other app is: POST /v1/helloworld From your Quarkus application, you will have to do the following: Register a RestClient as a service Specify the service information in your configuration properties Inject and use this service --- In your current application --- Pay close attention to the package name. It has to match exactly in your application.properties file.

HelloWorldService.java

package com.helloworld.services;

@Path("/v1")
@RegisterRestClient
public interface HelloWorldService {

    @POST
    @Path("/helloworld")
    Response callHelloWorld(HelloWorldPojo payloadToSend);
}

// Notice that we are not including the /v1 in the mp-rest/url. Why? Because it is included in the @Path of the RestClient.

Update your application.properties to include the following:

com.helloworld.services.HelloWorldService/mp-rest/url=https://yourOtherApplication.com/API

--- Your HelloWorldPojo which you will send as payload ---

HelloWorldPojo.java

@JsonPropertyOrder({"id", "name"})
public class HelloWorldPojo {

    private long id;
    private String name;

    // Setters
    // Getters
}

In another service where you actually want to use this:

ServiceWhichCallsYourOtherAPI.java

@RequestScoped
public class ServiceWhichCallsYourOtherAPI {

    @Inject
    @RestClient
    HelloWorldService helloWorldService;

    public void methodA() {
        HelloWorldPojo payloadToSend = new HelloWorldPojo();
        payloadToSend.setId(123);
        payloadToSend.setName("whee");
        helloWorldService.callHelloWorld(payloadToSend);
    }
}

The POST request will then go to https://yourOtherApplication.com/API/v1/helloworld The JSON will look like: { "id":123, "name":"whee" } Really great read: https://quarkus.io/guides/rest-client
Quarkus
66,347,707
11
I'm currently working with Quarkus and Swagger-UI as delivered by quarkus-smallrye-openapi. We have OIDC from Azure AD as security, which is currently not supported by Swagger-UI (see Swagger docs), so I can't add the "real" authorization to Swagger. This means I can't use Swagger, since my endpoints are at least secured with @RolesAllowed. We have an endpoint to fetch a mock security token, but I don't know how to tell Swagger to take this token. Basically I want to tell Swagger-UI "Here, I have this token, add it as Authorization: Bearer XXX to all requests", but I don't know how to do that in Quarkus.
Register security scheme @Path("/sample") @SecuritySchemes(value = { @SecurityScheme(securitySchemeName = "apiKey", type = SecuritySchemeType.HTTP, scheme = "Bearer")} ) public class SampleResource { Mark the operation's security requirement with the scheme name registered. @GET @SecurityRequirement(name = "apiKey") String hello() { Authorize option should be now available on swagger page. Enter your mock api key here. Trigger the service from swagger ui. You could now see Authorization: Bearer <VALUE> header set in request.
Quarkus
64,154,593
11
Due to security concerns, my company doesn't allow us to use containers on our laptops. So we can't use the normal quarkus:dev to run our tests that connect to PostgreSQL. But they provide us with a remote machine where we can use Podman to run some containers. What I'm doing now is to manually ssh to that machine and start a PostgreSQL container before running local tests. What I would like is to do this automatically, and also to find a way to do the same on Jenkins when we need to run a pipeline to release a new version.
This is a common scenario, and the intention of Quarkus's developer joy features is to allow it to work in a frictionless way, without requiring scripts or manual tunneling. There are two options, although which one works best for you will depend a bit on how your company's remote podman is set up. Remote dev services. (When you run Quarkus in dev mode, the automatic provisioning of unconfigured services is called 'dev services'.) The idea here is that you use dev services normally, but under the covers, the container client is connecting to the remote instances. For Dev services, Testcontainers provides container connectivity under the covers. This should work transparently as long as podman run works. You'd set it up using something like podman system connection add remote --identity ~/.ssh/my-key ssh://my-host/podman/podman.sock podman system connection default remote If you don't have a local podman client, or if the podman connection settings don't sort it out, setting DOCKER_HOST to the right remote socket will also tell Testcontainers where to look. Remote dev mode. Here, the whole application is running in a container on the remote server. Changes to your local files are reflected in the remote instance. To use remote dev mode, you build a special jar and then launch it in the remote environment. Add the following to your application.properties: %dev.quarkus.package.type=mutable-jar Then build the jar (they could be in the application.properties, but then you couldn't commit it to source control): QUARKUS_LIVE-RELOAD_PASSWORD=<arbitrary password> ./mvnw install The install will build you a normal fast-jar dockerfile. Run it in your remote environment with QUARKUS_LAUNCH_DEVMODE=true added to the podman run command. Then, locally, instead of mvn quarkus:dev, you'd run ./mvnw quarkus:remote-dev -Dquarkus.live-reload.url=http://my-remote-host:8080 https://quarkus.io/guides/maven-tooling#remote-development-mode has a more complete set of instructions. This option does have more moving parts and more latency, since you're transferring your whole application to the remote server every time code changes. So if you can, just configuring podman and using remote dev services is probably better. A third option, which probably isn't relevant for you, is to use Testcontainers Cloud. Quarkus dev services use Testcontainers under the covers, and Testcontainers Cloud is a convenient way of running Testcontainers remotely.
Quarkus
78,259,962
10
I dusted off an old Java project implemented with Quarkus and updated the dependencies to Quarkus 2.4.0. However, I've noticed that when I start the application it also fires up a Docker PostgreSQL container. I have another DB for testing, so I don't need Quarkus to create one for me. I couldn't locate any configuration properties to set in application.properties that would prevent this from being created. Am I missing something? Is there a flag somewhere that I need to set?
You can use quarkus.devservices.enabled=false to disable all DevServices, or use the specific properties for each one - which in your case would be quarkus.datasource.devservices.enabled=false
Quarkus
69,919,634
10
I am trying to debug a basic Quarkus app by running the command ./mvnw compile quarkus:dev on IntelliJ (as stated in the Quarkus docs) and it seems to run ok (gives me the following message: Listening for transport dt_socket at address: 5005) I can call the APIs on port 8080 and all fine but when I try to call the same API on port 5005 I get the following error Debugger failed to attach: handshake failed - received >GET /api/domai< - expected >JDWP-Handshake<. I tried configuring a Remote Debug configuration as shown in the image but doesn't seem to work. Does anyone know how to solve this?
One process can listen on multiple TCP/IP sockets for multiple things. The debug port is the port 5005 where you attach the remote debugger to. The API calls still need to go to port 8080, though. When you hit a breakpoint, you will see it in your debugger.
Quarkus
69,900,521
10
I'm doing some tests with Quarkus and PanacheRepository and I'm getting trouble in update an entity data. The update doesn't work, the field values are not updated. In short: I create an entity and persist the data, after that in another request I get the entity from database using repository.findById(id);, change some field value, but the new value is not persisted in the database. I tried call repository.persist(person); after but the behavior is the same, the data is not updated. I tried this with Quarkus version 1.9.0.Final, 1.9.0.CR1, 1.8.3.Final I'm using postgreSQL 12. I also tried with mysql 5.7.26 I use Eclipse 2020-06 (4.16.0) only to write code and I run the application in the command line, with: ./mvnw compile quarkus:dev I've created a brand new simple application and the behavior is the same. Here is the main configurations and some code snippets: pom.xml <properties> <compiler-plugin.version>3.8.1</compiler-plugin.version> <maven.compiler.parameters>true</maven.compiler.parameters> <maven.compiler.source>11</maven.compiler.source> <maven.compiler.target>11</maven.compiler.target> <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding> <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding> <quarkus-plugin.version>1.9.0.Final</quarkus-plugin.version> <quarkus.platform.artifact-id>quarkus-universe-bom</quarkus.platform.artifact-id> <quarkus.platform.group-id>io.quarkus</quarkus.platform.group-id> <quarkus.platform.version>1.9.0.Final</quarkus.platform.version> <surefire-plugin.version>3.0.0-M5</surefire-plugin.version> </properties> <dependencies> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-jsonb</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-hibernate-orm-panache</artifactId> </dependency> <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-jdbc-postgresql</artifactId> </dependency> </dependencies> application.properties: quarkus.datasource.db-kind = postgresql quarkus.datasource.username = theusername quarkus.datasource.password = thepassword quarkus.datasource.jdbc.url = jdbc:postgresql://localhost:5432/testpanache # drop and create the database at startup (use `update` to only update the schema) quarkus.hibernate-orm.database.generation = drop-and-create Entity: @Entity public class Person { @Id @GeneratedValue public Long id; public String name; @Override public String toString() { return "Person [id=" + id + ", name= '" + name + "']"; } } REST Resource: @Path("/people") @Produces(MediaType.APPLICATION_JSON) public class PersonResource { @Inject PersonRepository repository; @GET @Produces(MediaType.TEXT_PLAIN) public List<Person> hello() { return repository.listAll(); } @POST @Transactional public void create() { Person person = new Person(); person.name = "some name"; repository.persist(person); } @PUT @Path("{id}") @Transactional public Person update(@PathParam("id") Long id) { Person person = repository.findById(id); person.name = "updated updated updated"; // does not work // repository.persist(person); // does not work // repository.persistAndFlush(person); // does not work repository.getEntityManager().merge(person); // does not work return person; } } Repository: @ApplicationScoped public class PersonRepository implements PanacheRepository<Person> { } I made some requests using curl to demonstrate the behavior: $ curl -w "\n" http://localhost:8080/people [] $ curl -X POST http://localhost:8080/people $ curl -w "\n" http://localhost:8080/people 
[{"id":1,"name":"some name"}] $ curl -X PUT http://localhost:8080/people/1 {"id":1,"name":"updated updated updated"} $ curl -w "\n" http://localhost:8080/people [{"id":1,"name":"some name"}] So, the list starts empty, the second POST request creates a Person with "some name", as shown by the third request; the fourth request does a PUT that is intended to change the name to "updated updated updated", but the fifth request shows the name was not updated. Although it's not needed, I tried repository.persist(person);, repository.persistAndFlush(person);, and even repository.getEntityManager().merge(person); (one each time) as show in the PersonResource snippet above. But none of them made effect. What I am missing? PS.: I tried to change my entity to extends PanacheEntity and used Person.findById(id); to find the entity, this way the subsequent updates did make effect. But it's not my point, I wanna use PanacheRepository and want to understand what I'm missing with this.
Please consider accessing your entity using getter/setter, so Hibernate Proxies will work properly. @Entity public class Person { @Id @GeneratedValue public Long id; private String name; // <--- private field public void setName(String name) { this.name = name; } @Override public String toString() { return "Person [id=" + id + ", name= '" + name + "']"; } } and in your resource: @PUT @Path("{id}") @Transactional public Person update(@PathParam("id") Long id) { Person person = repository.findById(id); person.setName("updated updated updated"); // <--- this way works repository.persist(person); return person; }
Quarkus
64,503,946
10
While migrating my JAX-RS application from Jersey to Quarkus/Resteasy, I came across a behavior change with the method evaluatePreconditions(Date lastModified). Indeed, in my use case, the last modified date contains milliseconds and unfortunately the date format of the headers If-Modified-Since and Last-Modified doesn't support milliseconds as we can see in the RFC 2616. Jersey trims the milliseconds from the provided date (as we can see here) while in Resteasy, the date is not modified so it actually compares dates (the date from the header If-Modified-Since and the provided date) with different precisions (respectively seconds versus milliseconds) which ends up with a mismatch so an HTTP status code 200. The code that illustrates the issue: @Path("/evaluatePreconditions") public class EvaluatePreconditionsResource { @GET @Produces(MediaType.TEXT_PLAIN) public Response findData(@Context Request request) { final Data data = retrieveData(); final Date lastModified = Timestamp.valueOf(data.getLastModified()); final Response.ResponseBuilder responseBuilder = request.evaluatePreconditions(lastModified); if (responseBuilder == null) { // Last modified date didn't match, send new content return Response.ok(data.toString()) .lastModified(lastModified) .build(); } // Sending 304 not modified return responseBuilder.build(); } private Data retrieveData() { // Let's assume that we call a service here that provides this value // The date time is expressed in GMT+2, please adjust it according // to your timezone return new Data( LocalDateTime.of(2020, 10, 2, 10, 23, 16, 1_000_000), "This is my content" ); } public static class Data { private final LocalDateTime lastModified; private final String content; public Data(LocalDateTime lastModified, String content) { this.lastModified = lastModified; this.content = content; } public LocalDateTime getLastModified() { return lastModified; } @Override public String toString() { return content; } } } The corresponding result with Jersey: curl -H "If-Modified-Since: Fri, 02 Oct 2020 08:23:16 GMT" \ -I localhost:8080/evaluatePreconditions HTTP/1.1 304 Not Modified ... The corresponding result with Quarkus/Resteasy: curl -H "If-Modified-Since: Fri, 02 Oct 2020 08:23:16 GMT" \ -I localhost:8080/evaluatePreconditions HTTP/1.1 200 OK Last-Modified: Fri, 02 Oct 2020 08:23:16 GMT ... This behavior has already been raised in the Resteasy project, but for the team, trimming the date would add a new bug because if the data/resource is modified several times within the same second, we would get a 304 if we trim the date and 200 if we don't, which is a fair point. However, I maybe wrong but according to what I understand from the RFC 7232, if several modifications can happen within the same second, we are supposed to rely on an ETag too which means that in the JAX-RS specification, we are supposed to use evaluatePreconditions(Date lastModified, EntityTag eTag) instead. So what is the correct behavior according to the JAX-RS specification regarding this particular case?
The implementation of Request.evaluatePreconditions(Date lastModified) at Resteasy 4.5 is wrong. The implementation at class org.jboss.resteasy.specimpl.RequestImpl relies on a helper class DateUtil which expect the Last-Modified header to be in one of the formats: RFC 1123 "EEE, dd MMM yyyy HH:mm:ss zzz", RFC 1036 "EEEE, dd-MMM-yy HH:mm:ss zzz" or ANSI C "EEE MMM d HH:mm:ss yyyy". Of these three formats, only ANSI C is listed at RFC 7231 Section 7.1.1.1 and it is obsolete. The preferred format for an HTTP 1.1. header is as specified in RFC 5322 Section 3.3 and this format does not contain milliseconds. The format that Resteasy implementation refers as RFC 1123 actually comes from RFC 822 Section 5 but RFC 822 is for text messages (mail) not for HTTP headers. Java supports milliseconds at Date but HTTP headers do not. Therefore, comparing dates with different precisions is a bug. The correct implementation is the one at Jersey ContainerRequest which before comparing rounds down the date to the nearest second. JAX-RS spec 1.1 does not say anything specifically at this regard. Or, at least, I've not been able to find it. JAX-RS spec does not need to address this issue. The implementation must handle HTTP headers as per HTTP specs, which do not include milliseconds in header timestamps.
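Until the Resteasy behavior changes, one possible workaround (not part of the answer above, just a sketch; the helper class name is made up) is to truncate the milliseconds yourself before calling evaluatePreconditions:

import java.sql.Timestamp;
import java.time.LocalDateTime;
import java.time.temporal.ChronoUnit;
import java.util.Date;

public final class HttpDates {

    private HttpDates() {
    }

    // HTTP date headers only carry whole seconds, so drop the milliseconds
    // before handing the value to Request.evaluatePreconditions(...)
    public static Date toHttpPrecision(LocalDateTime lastModified) {
        return Date.from(Timestamp.valueOf(lastModified)
                .toInstant()
                .truncatedTo(ChronoUnit.SECONDS));
    }
}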
Quarkus
64,170,860
10
I'm trying to resolve dependency injection with Repository Pattern using Quarkus 1.6.1.Final and OpenJDK 11. I want to achieve Inject with Interface and give them some argument(like @Named or @Qualifier ) for specify the concrete class, but currently I've got UnsatisfiedResolutionException and not sure how to fix it. Here is the my portion of code. UseCase class: @ApplicationScoped public class ProductStockCheckUseCase { @Inject @Named("dummy") ProductStockRepository repo; public int checkProductStock() { ProductStock stock = repo.findBy(""); return stock.getCount(); } } Repository Interface: public interface ProductStockRepository { public ProductStock findBy(String productId); } Repository Implementation: @Named("dummy") public class ProductStockDummyRepository implements ProductStockRepository { public ProductStock findBy(final String productId) { final ProductStock productStock = new ProductStock(); return productStock; } } And here is a portion of my build.gradle's dependencies: dependencies { implementation 'io.quarkus:quarkus-resteasy' implementation 'io.quarkus:quarkus-arc' implementation enforcedPlatform("${quarkusPlatformGroupId}:${quarkusPlatformArtifactId}:${quarkusPlatformVersion}") testImplementation 'io.quarkus:quarkus-junit5' testImplementation 'io.rest-assured:rest-assured' } When I run this (e.g. ./gradlew assemble or ./gradlew quarkusDev ), I've got the following errors: Caused by: javax.enterprise.inject.UnsatisfiedResolutionException: Unsatisfied dependency for type ProductStockRepository and qualifiers [@Named(value = "dummy")] - java member: ProductStockCheckUseCase#repo - declared on CLASS bean [types=[ProductStockCheckUseCase, java.lang.Object], qualifiers=[@Default, @Any], target=ProductStockCheckUseCase] Do you have any ideas how to fix this? or is it wrong idea to implement this kind of interface injection and specify the concrete class with argument/annotation? I've read and tried the following articles: Some official docs: Quarkus - Contexts and Dependency Injection https://quarkus.io/guides/cdi-reference JSR 365: Contexts and Dependency Injection for Java 2.0 https://docs.jboss.org/cdi/spec/2.0/cdi-spec.html#default_bean_discovery Interfaces on Demand with CDI and EJB 3.1 https://www.oracle.com/technical-resources/articles/java/intondemand.html 23.7 Injecting Beans - Java Platform, Enterprise Edition: The Java EE Tutorial (Release 7) https://docs.oracle.com/javaee/7/tutorial/cdi-basic007.htm The other blogs and SOs: java - how inject implementation of JpaRepository - Stack Overflow how inject implementation of JpaRepository java - How to inject two instances of two different classes which implement the same interface? - Stack Overflow How to inject two instances of two different classes which implement the same interface? Java EE Context and Dependency Injection @Qualifier https://memorynotfound.com/context-dependency-injection-qualifier/
My guess is that you need to add a scope annotation to your ProductStockDummyRepository. Probably either @Singleton or @ApplicationScoped.
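For illustration, a minimal sketch of the repository from the question with a bean-defining scope added (same class names as in the question):

import javax.enterprise.context.ApplicationScoped;
import javax.inject.Named;

@Named("dummy")
@ApplicationScoped
public class ProductStockDummyRepository implements ProductStockRepository {

    @Override
    public ProductStock findBy(final String productId) {
        // dummy implementation, same as in the question
        return new ProductStock();
    }
}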
Quarkus
63,412,802
10
Trying out testcontainers for integration testing. I am testing rest api endpoint. Here is the technology stack - quarkus, RESTEasy and mongodb-client I am able to see MongoDB container is started successfully but getting exception. Exception: "com.mongodb.MongoSocketOpenException: Exception opening socket" 2020-04-26 15:13:18,330 INFO [org.tes.doc.DockerClientProviderStrategy] (main) Loaded org.testcontainers.dockerclient.UnixSocketClientProviderStrategy from ~/.testcontainers.properties, will try it first 2020-04-26 15:13:19,109 INFO [org.tes.doc.UnixSocketClientProviderStrategy] (main) Accessing docker with local Unix socket 2020-04-26 15:13:19,109 INFO [org.tes.doc.DockerClientProviderStrategy] (main) Found Docker environment with local Unix socket (unix:///var/run/docker.sock) 2020-04-26 15:13:19,258 INFO [org.tes.DockerClientFactory] (main) Docker host IP address is localhost 2020-04-26 15:13:19,305 INFO [org.tes.DockerClientFactory] (main) Connected to docker: Server Version: 19.03.8 API Version: 1.40 Operating System: Docker Desktop Total Memory: 3940 MB 2020-04-26 15:13:19,524 INFO [org.tes.uti.RegistryAuthLocator] (main) Credential helper/store (docker-credential-desktop) does not have credentials for quay.io 2020-04-26 15:13:20,106 INFO [org.tes.DockerClientFactory] (main) Ryuk started - will monitor and terminate Testcontainers containers on JVM exit 2020-04-26 15:13:20,107 INFO [org.tes.DockerClientFactory] (main) Checking the system... 2020-04-26 15:13:20,107 INFO [org.tes.DockerClientFactory] (main) ✔︎ Docker server version should be at least 1.6.0 2020-04-26 15:13:20,230 INFO [org.tes.DockerClientFactory] (main) ✔︎ Docker environment should have more than 2GB free disk space 2020-04-26 15:13:20,291 INFO [🐳 .2]] (main) Creating container for image: mongo:4.2 2020-04-26 15:13:20,420 INFO [🐳 .2]] (main) Starting container with ID: d8d142bcdef8e2ebe9c09f171845deffcda503d47aa4893cd44e72d7067f0cdd 2020-04-26 15:13:20,756 INFO [🐳 .2]] (main) Container mongo:4.2 is starting: d8d142bcdef8e2ebe9c09f171845deffcda503d47aa4893cd44e72d7067f0cdd 2020-04-26 15:13:22,035 INFO [🐳 .2]] (main) Container mongo:4.2 started in PT3.721S 2020-04-26 15:13:24,390 INFO [org.mon.dri.cluster] (main) Cluster created with settings {hosts=[127.0.0.1:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500} 2020-04-26 15:13:24,453 INFO [org.mon.dri.cluster] (main) Cluster created with settings {hosts=[127.0.0.1:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500} 2020-04-26 15:13:24,453 INFO [org.mon.dri.cluster] (cluster-ClusterId{value='5ea5dd542fb66c613dc74629', description='null'}-127.0.0.1:27017) Exception in monitor thread while connecting to server 127.0.0.1:27017: com.mongodb.MongoSocketOpenException: Exception opening socket at com.mongodb.internal.connection.SocketChannelStream.open(SocketChannelStream.java:63) at com.mongodb.internal.connection.InternalStreamConnection.open(InternalStreamConnection.java:126) at com.mongodb.internal.connection.DefaultServerMonitor$ServerMonitorRunnable.run(DefaultServerMonitor.java:117) at java.lang.Thread.run(Thread.java:748) Caused by: java.net.ConnectException: Connection refused at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method) at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:714) at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:122) at 
com.mongodb.internal.connection.SocketStreamHelper.initialize(SocketStreamHelper.java:64) at com.mongodb.internal.connection.SocketChannelStream.initializeSocketChannel(SocketChannelStream.java:72) at com.mongodb.internal.connection.SocketChannelStream.open(SocketChannelStream.java:60) ... 3 more If I use docker run then my test case works properly. docker run -p 27017:27017 --name mongodb mongo:4.2 using testcontainer as mentioned @ https://www.testcontainers.org/quickstart/junit_5_quickstart/ @Container static GenericContainer mongodb = new GenericContainer<>("mongo:4.2").withExposedPorts(27017);
I can't say for certain without seeing your test configuration, but I'm guessing that it works with docker run and not Testcontainers because docker run exposes a fixed port (always 27017) but Testcontainers will expose port 27017 as a random port (to avoid port conflicts on test machines). To use Testcontainers with a Quarkus test, your tests must follow this flow: Start containers. This is necessary because the random exposed port for MongoDB can only be known after the container has been started. Obtain randomized ports from Testcontainers after containers are started, then set any test configuration properties that depend on container ports. For example: static GenericContainer mongodb = new GenericContainer<>("mongo:4.2").withExposedPorts(27017); static { mongodb.start(); System.setProperty("quarkus.mongodb.connection-string", "mongodb://" + mongodb.getContainerIpAddress() + ":" + mongodb.getFirstMappedPort()); } Let Quarkus start. Since Quarkus does not support dynamic configuration, you must set the MongoDB port before Quarkus starts.
Quarkus
61,447,252
10
I've implemented JWT RBAC in my Quarkus application, but I don't want to provide tokens whenever I'm testing my application locally. EDIT: What I've tried so far is setting these properties to "false", without any effect: quarkus.oauth2.enabled=false quarkus.security.enabled=false quarkus.smallrye-jwt.enabled=false Currently I've commented out all of the //@RolesAllowed({"user"}) annotations to "disable" auth locally. Is there any property to disable security / enable endpoints for any given role?
You can implement an AuthorizationController (io.quarkus.security.spi.runtime.AuthorizationController) public class DisabledAuthController extends AuthorizationController { @ConfigProperty(name = "disable.authorization") boolean disableAuthorization; @Override public boolean isAuthorizationEnabled() { return disableAuthorization; } } In Quarkus guides, you can find more information https://quarkus.io/guides/security-customization#disabling-authorization
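As a usage sketch (disable.authorization is just the custom property defined in the controller above, not a built-in Quarkus setting), you could then toggle it per profile in application.properties:

# keep authorization on by default
disable.authorization=false
# switch it off for local development and tests
%dev.disable.authorization=true
%test.disable.authorization=true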
Quarkus
59,774,964
10
I would like to change the logging level of my Quarkus application. How can I do that either from the configuration file or at runtime?
The property that controls the root logging level is quarkus.log.level (and defaults to INFO). This property can be set either in application.properties or can be overridden at runtime using -Dquarkus.log.level=DEBUG. You can also specify more fine grained logging using quarkus.log.category. For example for RESTEasy you could set: quarkus.log.category."org.jboss.resteasy".level=DEBUG For more information about logging in Quarkus, please check this guide.
Quarkus
55,044,060
10
I would like to override the properties I have configured in my configuration file in my Quarkus application. How can I accomplish that?
Properties in Quarkus are generally configured in src/main/resources/application.properties. This is true both for properties that configure the behavior of Quarkus (like the http port it listens to or the database URL to connect to for example) and properties that are specific to your application (for example a greeting.message property). The overridability of the former depends on the configuration in question. For example, the http properties (like quarkus.http.port) are overridable. The later are always overridable at runtime. When running a Quarkus application in JVM mode you can, for example, do: java -Dgreeting.message=hi -jar example-runner.java Similarly, when running a Quarkus application that has been converted to a native binary using the GraalVM (specifically the SubstrateVM system), you could do: ./example-runner -Dgreeting.message=hi More information can be found on the "Quarkus - Configuring Your Application" official guide
Quarkus
55,043,399
10
How do I squash my last N commits together into one commit?
You can do this fairly easily without git rebase or git merge --squash. In this example, we'll squash the last 3 commits. If you want to write the new commit message from scratch, this suffices: git reset --soft HEAD~3 git commit If you want to start editing the new commit message with a concatenation of the existing commit messages (i.e. similar to what a pick/squash/squash/…/squash git rebase -i instruction list would start you with), then you need to extract those messages and pass them to git commit: git reset --soft HEAD~3 && git commit --edit -m"$(git log --format=%B --reverse HEAD..HEAD@{1})" Both of those methods squash the last three commits into a single new commit in the same way. The soft reset just re-points HEAD to the last commit that you do not want to squash. Neither the index nor the working tree are touched by the soft reset, leaving the index in the desired state for your new commit (i.e. it already has all the changes from the commits that you are about to "throw away"). Edit Based on Comments Since you have rewritten that history, you must then use the --force flag to push this branch back to the remote. This is what the force flag is meant for, but you can be extra careful and always fully define your target: git push --force-with-lease origin <branch-name>
Squash
5,189,560
5,492
This gives a good explanation of squashing multiple commits: http://git-scm.com/book/en/Git-Branching-Rebasing but it does not work for commits that have already been pushed. How do I squash the most recent few commits both in my local and remote repos? When I do git rebase -i origin/master~4 master, keep the first one as pick, set the other three as squash, and then exit (via c-x c-c in emacs), I get: $ git rebase -i origin/master~4 master # Not currently on any branch. nothing to commit (working directory clean) Could not apply 2f40e2c... Revert "issue 4427: bpf device permission change option added" $ git rebase -i origin/master~4 master Interactive rebase already started where 2f40 is the pick commit. And now none of the 4 commits appear in git log. I expected my editor to be restarted so that I could enter a commit message. What am I doing wrong?
Squash commits locally with: git rebase -i origin/master~4 master where ~4 means the last 4 commits. This will open your default editor. Here, replace pick in the second, third, and fourth lines (since you are interested in the last 4 commits) with squash. The first line (which corresponds to the oldest commit) should be left with pick. Save this file. Afterwards, your editor will open again, showing the messages of each commit. Comment out the ones you are not interested in (in other words, leave the commit message that will correspond to this squashing uncommented). Save the file and close it. You will then need to force push, for example with: git push origin +master Difference between --force and + From the documentation of git push: Note that --force applies to all the refs that are pushed, hence using it with push.default set to matching or with multiple push destinations configured with remote.*.push may overwrite refs other than the current branch (including local refs that are strictly behind their remote counterpart). To force a push to only one branch, use a + in front of the refspec to push (e.g. git push origin +master to force a push to the master branch).
Squash
5,667,884
859
How do you squash your entire repository down to the first commit? I can rebase to the first commit, but that would leave me with 2 commits. Is there a way to reference the commit before the first one?
As of git 1.6.2, you can use git rebase --root -i For each commit except the first, change pick to squash in the editor that pops up.
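For illustration, the todo list you end up editing looks roughly like this (hashes and messages are made up):

pick   f7f3f6d Initial commit
squash 310154e Add feature X
squash a5f4a0d Fix typo in README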
Squash
1,657,017
785
With git rebase --interactive <commit> you can squash any number of commits together into a single one. That's all great unless you want to squash commits into the initial commit. That seems impossible to do. Are there any ways to achieve it? Moderately related: In a related question, I managed to come up with a different approach to the need of squashing against the first commit, which is, well, to make it the second one. If you're interested: git: how to insert a commit as the first, shifting all the others?
Update July 2012 (git 1.7.12+) You now can rebase all commits up to root, and select the second commit Y to be squashed with the first X. git rebase -i --root master pick sha1 X squash sha1 Y pick sha1 Z git rebase [-i] --root $tip This command can now be used to rewrite all the history leading from "$tip" down to the root commit. See commit df5df20c13 (rebase -i: support --root without --onto, 2012-06-26) on GitHub from Chris Webb (arachsys). As noted in the comments, a git push --force-with-lease (safer than --force, as Mikko Mantalainen reminds us) would be needed after any rebase operation, if you need to publish that rework in a remote repository. Original answer (February 2009) I believe you will find different recipes for that in the SO question "How do I combine the first two commits of a git repository?" Charles Bailey provided there the most detailed answer, reminding us that a commit is a full tree (not just diffs from a previous states). And here the old commit (the "initial commit") and the new commit (result of the squashing) will have no common ancestor. That mean you can not "commit --amend" the initial commit into new one, and then rebase onto the new initial commit the history of the previous initial commit (lots of conflicts) (That last sentence is no longer true with git rebase -i --root <aBranch>) Rather (with A the original "initial commit", and B a subsequent commit needed to be squashed into the initial one): Go back to the last commit that we want to form the initial commit (detach HEAD): git checkout <sha1_for_B> Reset the branch pointer to the initial commit, but leaving the index and working tree intact: git reset --soft <sha1_for_A> Amend the initial tree using the tree from 'B': git commit --amend Temporarily tag this new initial commit (or you could remember the new commit sha1 manually): git tag tmp Go back to the original branch (assume master for this example): git checkout master Replay all the commits after B onto the new initial commit: git rebase --onto tmp <sha1_for_B> Remove the temporary tag: git tag -d tmp That way, the "rebase --onto" does not introduce conflicts during the merge, since it rebases history made after the last commit (B) to be squashed into the initial one (which was A) to tmp (representing the squashed new initial commit): trivial fast-forward merges only. That works for "A-B", but also "A-...-...-...-B" (any number of commits can be squashed into the initial one this way)
Squash
598,672
692
I'm trying to understand the difference between a squash and a rebase. As I understand it, one performs a squash when doing a rebase.
Merge commits: retains all of the commits in your branch and interleaves them with commits on the base branch
Merge squash: retains the changes but omits the individual commits from history
Rebase: this moves the entire feature branch to begin on the tip of the master branch, effectively incorporating all of the new commits in master
More on here The first two diagrams come from About pull request merges on the GitHub Docs
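Roughly, this is how each option is invoked on the command line (branch names are placeholders):

# merge commit: keeps the feature branch's individual commits
git checkout master
git merge --no-ff feature

# squash merge: applies all the changes as a single new commit
git checkout master
git merge --squash feature
git commit

# rebase: replays the feature commits on top of master's tip
git checkout feature
git rebase master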
Squash
2,427,238
619
I've been using Git Extensions for a while now (it's awesome!) but I haven't found a simple answer to the following: Sometimes, when typing a commit message, I make a typo. My friend showed me how to fix it the following way (in Git Extensions): Right-click on the commit > Advanced > Fixup commit Then I simply check the box "Amend" and rewrite my message and voila! My commit message is fixed. However, this other option "Squash commit"... I have always wondered what it does?! My question is: Would someone simply explain to me the exact difference between Squash commit and Fixup commit in Git/Git Extensions? They look kind of... "similar" to me:
I do not know what Git Extensions does with it specifically, but git rebase has an option to automatically squash or fixup commits with squash! or fixup! prefixes, respectively: --autosquash, --no-autosquash When the commit log message begins with "squash! ..." (or "fixup! ..."), and there is a commit whose title begins with the same ..., automatically modify the todo list of rebase -i so that the commit marked for squashing comes right after the commit to be modified, and change the action of the moved commit from pick to squash (or fixup). The difference between squash and fixup is that during the rebase, the squash operation will prompt you to combine the messages of the original and the squash commit, whereas the fixup operation will keep the original message and discard the message from the fixup commit.
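As a quick sketch of how those prefixes are normally created and consumed (the <sha> placeholders stand for the commit being fixed up):

# create a commit whose message starts with "fixup! ..."
git commit --fixup=<sha>

# or one whose message starts with "squash! ..."
git commit --squash=<sha>

# later, let rebase reorder and squash them automatically
git rebase -i --autosquash <base>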
Squash
16,758,131
199
I've got eight commits on a branch that I'd like to email to some people who aren't git enlightened, yet. So far, everything I do either gives me 8 patch files, or starts giving me patch files for every commit in the branch's history, since the beginning of time. I used git rebase --interactive to squash the commits, but now everything I try gives me zillions of patches from the beginning of time. What am I doing wrong? git format-patch master HEAD # yields zillions of patches, even though there's # only one commit since master
I'd recommend doing this on a throwaway branch as follows. If your commits are in the "newlines" branch and you have switched back to your "master" branch already, this should do the trick: [adam@mbp2600 example (master)]$ git checkout -b tmpsquash Switched to a new branch "tmpsquash" [adam@mbp2600 example (tmpsquash)]$ git merge --squash newlines Updating 4d2de39..b6768b2 Fast forward Squash commit -- not updating HEAD test.txt | 2 ++ 1 files changed, 2 insertions(+), 0 deletions(-) [adam@mbp2600 example (tmpsquash)]$ git commit -a -m "My squashed commits" [tmpsquash]: created 75b0a89: "My squashed commits" 1 files changed, 2 insertions(+), 0 deletions(-) [adam@mbp2600 example (tmpsquash)]$ git format-patch master 0001-My-squashed-commits.patch
Squash
616,556
182
I'm trying to squash a range of commits - HEAD to HEAD~3. Is there a quick way to do this, or do I need to use rebase --interactive?
Make sure your working tree is clean, then:

git reset --soft HEAD~3
git commit -m 'new commit message'
Squash
7,275,508
147
I made a pull request on GitHub. Now the owner of the repository is saying to squash all the commits into one. When I type git rebase -i, Notepad opens with the following content:

noop

# Rebase 0b13622..0b13622 onto 0b13622
#
# Commands:
#  p, pick = use commit
#  r, reword = use commit, but edit the commit message
#  e, edit = use commit, but stop for amending
#  s, squash = use commit, but meld into previous commit
#  f, fixup = like "squash", but discard this commit's log message
#  x, exec = run command (the rest of the line) using shell
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

I searched on Google but I do not understand how to do this.
Just a simple addition to help someone else looking for this solution. You can pass in the number of previous commits you would like to squash. For example:

git rebase -i HEAD~3

This will bring up the last 3 commits in the editor.
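For instance (the commit hashes and messages below are made up for illustration), the editor would show something like:

pick a1b2c3d First commit on the branch
pick b2c3d4e Second commit
pick c3d4e5f Third commit

Changing the last two lines from pick to squash (or s) and saving melds all three into a single commit, after which git prompts for the combined commit message.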
Squash
14,534,397
101
Here's a workflow that I commonly deal with at work.

git checkout -b feature_branch
# Do some development
git add .
git commit
git push origin feature_branch

At this point the feature branch is up for review from my colleagues, but I want to keep developing on other features that are dependent on feature_branch. So while feature_branch is in review...

git checkout feature_branch
git checkout -b dependent_branch
# Do some more development
git add .
git commit

Now I make some changes in response to the code review on feature_branch:

git checkout feature_branch
# Do review fixes
git add .
git commit
git checkout dependent_branch
git merge feature_branch

Now this is where we have problems. We have a squash policy on master, which means that feature branches that are merged into master have to be squashed into a single commit.

git checkout feature_branch
git log                            # Look for hash at beginning of branch
git rebase -i first_hash_of_branch # Squash feature_branch into a single commit
git merge master

Everything is cool, except with dependent_branch. When I try to rebase dependent_branch onto master or try to merge master into it, git is confused by the re-written/squashed history and basically marks every single change in dependent_branch as a conflict. It's a PITA to go through and basically re-do or de-conflict all of the changes in dependent_branch. Is there some solution to this?

Sometimes I'll manually create a patch and apply it off a fresh branch of master, but if there are any real conflicts with that, it's even worse to fix:

git checkout dependent_branch
git diff > ~/Desktop/dependent_branch.diff
git checkout master
git checkout -b new_dependent_branch
patch -p1 < ~/Desktop/dependent_branch.diff
# Pray for a clean apply.

Any ideas? I know this happens because of the re-written history during the squash, but that's a requirement that I can't change. What's the best solution / workaround? Is there some magic I can do? Or is there a faster way to do all the steps involved with manually creating the diff?
A little bit about why this happens: I'll let O be "original master" and FB be "new master", after a feature branch has been merged in.

Say feature_branch looks like:

O - A - B - C

dependent_feature has a few extra commits on top of that:

O - A - B - C - D - E - F

You merge your original feature branch into master and squash it down, giving you:

O - FB

Now, when you try to rebase the dependent branch, git is going to try to figure out the common ancestor between those branches. While it originally would have been C, if you had not squashed the commits down, git instead finds O as the common ancestor. As a result, git is trying to replay A, B, and C, which are already contained in FB, and you're going to get a bunch of conflicts.

For this reason, you can't really rely on a typical rebase command, and you have to be more explicit about it by supplying the --onto parameter:

git rebase --onto master HEAD~3   # instruct git to replay only the last
                                  # 3 commits, D, E and F, onto master.

Modify the HEAD~3 parameter as necessary for your branches, and you shouldn't have to deal with any redundant conflict resolution.

Some alternate syntax, if you don't like specifying ranges and you haven't deleted your original feature branch yet:

git rebase --onto master feature_branch dependent_feature
# replay all commits, starting at feature_branch
# exclusive, through dependent_feature inclusive,
# onto master
Squash
22,593,087
101
In Docker 1.13 the new --squash parameter was added. I'm now hoping to reduce the size of my images as well as being able to "hide" secret files I have in my layers. Below you can see the difference between doing a build with and without the --squash parameter (screenshots omitted: "Without Squash" and "With Squash").

Now to my question. If I add a secret file in my first layer, then use the secret file in my second layer, and then finally remove my secret file in the third layer, and then build with the --squash flag: will there be any way now to get the secret file?
If I add a secret file in my first layer, then use the secret file in my second layer, and then finally remove my secret file in the third layer, and then build with the --squash flag: will there be any way now to get the secret file?

Answer: your image won't have the secret file.

How --squash works: once the build is complete, Docker creates a new image loading the diffs from each layer into a single new layer and references all the parent's layers. In other words: when squashing, Docker will take all the filesystem layers produced by a build and collapse them into a single new layer.

This can simplify the process of creating minimal container images, but may result in slightly higher overhead when images are moved around (because squashed layers can no longer be shared between images). Docker still caches individual layers to make subsequent builds fast.

Please note this feature squashes all the newly built layers into a single layer; it is not squashing to scratch.

Side notes: Docker 1.13 also has support for compressing the build context that is sent from the CLI to the daemon, using the --compress flag. This will speed up builds done on remote daemons by reducing the amount of data sent. Please note that as of Docker 1.13 this feature is experimental.

Update 2024: the squash feature was moved to BuildKit and later removed from BuildKit:

WARNING: experimental flag squash is removed with BuildKit. You should squash inside build using a multi-stage Dockerfile for efficiency.

As the warning suggests, you need to use multi-stage builds instead of squashing layers. Example:

# syntax=docker/dockerfile:1
FROM golang:1.21
WORKDIR /src
COPY <<EOF ./main.go
package main

import "fmt"

func main() {
  fmt.Println("hello, world")
}
EOF
RUN go build -o /bin/hello ./main.go

FROM scratch
COPY --from=0 /bin/hello /bin/hello
CMD ["/bin/hello"]
Squash
41,764,336
91
Which one should one use to hide microcommits? Is the only difference between git merge --squash and git merge --no-ff --no-commit the denial of the other parents?
The differences

These options exist for separate purposes, and your repository ends up differently. Let's suppose that your repository looks like this after you are done developing on the topic branch (the original answer illustrates each case with a diagram, omitted here):

--squash: if you checkout master and then run git merge --squash topic; git commit -m topic, the topic changes land as a single ordinary commit on master.

--no-ff --no-commit: if instead you run git merge --no-ff --no-commit; git commit -m topic, you get a real merge commit on master with the topic branch as a second parent.

Hiding micro-commits

If you really want to hide (I mean delete from your repository) your micro-commits, use --squash, because, as the diagrams show, you are not really hiding your micro-commits if you do not squash. Moreover, you do not usually push your topic branches to the world. Topic branches are for a topic to mature.

If you want your history to have all your micro-commits, but leave them in another line of development (the green line in the original diagrams), use --no-ff --no-commit. But please remember that (a) this is not a branch, and (b) it does not really mean anything in Git, because it is just another parent of your commit.

Please refer to Git Branching - What a Branch Is if you really want to understand.
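One way to check which situation you ended up in after the fact (a sketch; topic is the branch name used in the answer):

git log --graph --oneline master   # after --squash there is no second parent line for topic
git branch --merged master         # lists topic only after a real (non-squash) merge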
Squash
11,983,749
83
As the title says, I am not really clear about the differences between a git merge --squash and a git merge --no-commit. As far as I understand the help page for git merge, both commands would leave me in an updated working-tree, where it is still possible to edit and then to do a final commit (or multiple commits). Could someone clarify the differences of those 2 options? When would I use one instead of the other?
git merge --no-commit

This performs the merge just like a normal merge, but stops before creating the merge commit automatically. The commit you then make yourself will still be a merge commit: when you look at the history, it will appear as a normal merge.

git merge --squash

This will merge the changes into your working tree (and index) without creating a merge commit. When you commit the merged changes, it will look like a new "normal" commit on your branch, without a merge commit in the history. It's almost as if you had cherry-picked all the merged changes.
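One way to see the difference is to inspect the parents of the resulting commit (a sketch; other_branch is a placeholder name, and the two blocks are alternatives, not meant to be run back to back):

# alternative A: --no-commit, the resulting commit has two parents
git merge --no-commit other_branch
git commit -m "merge other_branch"
git log -1 --pretty=%P    # prints two parent hashes

# alternative B: --squash, the resulting commit has a single parent
git merge --squash other_branch
git commit -m "squash other_branch"
git log -1 --pretty=%P    # prints one parent hash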
Squash
9,599,411
70
A common development workflow for us is to checkout branch b, commit a bunch to it, then squash all those commits into one (still on b). However, during the rebase -i process to squash all the commits, there are frequently conflicts at multiple steps. I essentially want to alter the branch into one commit that represents the state of the repository at the time of the final commit on b. I've done some searching but I haven't found exactly what I'm looking for. I don't want to merge --squash because we would like to test the squashed feature branch before merging.
If you don't need the commit information, then you could just do a soft reset. The files then remain as they were, and when you commit, this commit will be on top of the commit you reset to.

To find the commit to reset to:

git merge-base HEAD BRANCH_YOU_BRANCHED_FROM

Then:

git reset --soft COMMIT_HASH

Then re-craft the commit, perhaps:

git commit -am 'This is the new re-created one commit'
Squash
17,354,353
51
I merged an upstream of a large project with my local git repo. Prior to the merge I had a small amount of history that was easy to read through, but after the merge a massive amount of history is now in my repo. I have no need for all the history commits from the upstream repo. There have been other commits made after this upstream merge that I would like to keep. How do I squash all that history that was merged from the upstream into one commit while keeping the commits made after the upstream merge?
I was able to squash several commits after multiple merges from the master branch using the strategy found here: https://stackoverflow.com/a/17141512/1388104

git checkout my-branch            # The branch you want to squash
git branch -m my-branch-old       # Change the name to something old
git checkout master               # Checkout the master branch
git checkout -b my-branch         # Create a new branch
git merge --squash my-branch-old  # Get all the changes from your old branch
git commit                        # Create one new commit

You will have to force an update if you need to push your squashed branch to a remote repository that you have previously pushed to, e.g.

git push origin my-branch -f
Squash
14,043,961
45
I'm trying to use git merge --squash with the --no-ff parameter, but git does not allow it. Does anyone have a suggestion to help me? I can't use a fast-forward merge, and I need to use the --squash parameter to group a lot of commits that were made in another branch. Thanks!
It probably doesn't let you because such a command wouldn't make sense.

The documentation for --squash says (emphasis mine):

--squash
    Produce the working tree and index state as if a real merge happened (except for the merge information), but do not actually make a commit or move the HEAD, nor record GIT_DIR/MERGE_HEAD to cause the next git commit command to create a merge commit. This allows you to create a single commit on top of the current branch whose effect is the same as merging another branch (or more in case of an octopus).

The --no-ff flag does:

    Create a merge commit even when the merge resolves as a fast-forward.

You are essentially asking git to make a commit and NOT make a commit at the same time.

If you want to preserve all of the history of your branch, you should use the --no-ff flag. Commit d is a merge commit that has two parents, a and c:

a ------ d -- ....
 \      /
  b -- c

If you want all of the commits on that branch to be treated as a single unit of work, then use --squash. Commit d is a regular commit that contains all of the changes from commits b and c but has only one parent, a:

a ---- d -- ....
 \
  b -- c
Squash
14,321,748
39
If I'm in the following situation,

$ git log --oneline
* abcdef commit #b
* 123456 commit #a

I know I can always run

$ git reset HEAD~
$ git commit --amend

However, I tried to run

$ git rebase -i HEAD~2

but I got

fatal: Needed a single revision
invalid upstream HEAD~2

Hence my question: is there a way to use git rebase to squash these two commits or not?
You want to rebase to the root commit of your master branch. More specifically, to squash the two commits, you need to run

git rebase -i --root

and then substitute squash for pick on the second line in the buffer of the editor that pops up:

pick 123456 a
squash abcdef b

I refer you to the git-rebase man page for more details about that flag:

--root
    Rebase all commits reachable from <branch>, instead of limiting them with an <upstream>. This allows you to rebase the root commit(s) on a branch. [...]

Example of an interactive rebase of the root

# Set things up
$ mkdir testgit
$ cd testgit
$ git init

# Make two commits
$ touch README
$ git add README
$ git commit -m "add README"
$ printf "foo\n" > README
$ git commit -am "write 'foo' in README"

# Inspect the log
$ git log --oneline --decorate --graph
* 815b6ca (HEAD -> master) write 'foo' in README
* 630ede6 add README

# Rebase (interactively) the root of the current branch:
# - Substitute 'squash' for 'pick' on the second line; save and quit the editor.
# - Then write the commit message of the resulting commit; save and quit the editor.
$ git rebase -i --root
[detached HEAD c9003cd] add README; write 'foo' in README
 Date: Sat May 16 17:38:43 2015 +0100
 1 file changed, 1 insertion(+)
 create mode 100644 README
Successfully rebased and updated refs/heads/master.

# Inspect the log again
$ git log --oneline --decorate --graph
* c9003cd (HEAD -> master) add README; write 'foo' in README
Squash
30,277,149
36
I have a branch with about 20 commits. The first SHA on the branch is bc3c488... The last SHA on the branch is 2c2be6... How can I merge all the commits together? I want to do this without using interactive rebase as there are so many commits. I need this for a github Pull Request where I am being asked to merge my commits. Need to do this without doing a git merge --squash as I need to squash locally and another developer does the merge and wants me to do the squash first before merging.
If the first SHA is HEAD, you can also use this approach:

git reset --soft $OLD_SHA; git add -A; git commit --amend --no-edit

Be careful: this command will change the history of the repo.

If you want to squash commits that are in the middle of your history:

|---* --- 0 --- 1 ---- 2 --- 3 --- * --- * --- * --- HEAD

like the commits 1, 2 and 3 in this case, I would really recommend using rebase -i.
Squash
33,901,565
34
I'm trying to rebase and squash all my commits from the current branch to master. Here is what I'm trying to do:

git checkout -b new-feature

I make a couple of commits, and after that I was trying:

git rebase -i master

In this case the commits remain in the new-feature branch.

git checkout master
git rebase -i new-feature

This gives me an edit window with a noop message. I know about the command:

git merge --squash new-feature

But I'm currently working on learning the rebase command.
Let's go through the steps.

1 - We create a new feature branch:

git checkout -b new-feature

2 - Now you can add/remove and update whatever you want on your new branch:

git add <new-file>
git commit -am "Added new file"

git rm <file-name>
git commit -am "Removed a file"

echo "add more stuff to file" >> <new-file>
git commit -am "Updated files"

3 - Next, pick and squash any commits down into one nice, pretty commit message:

git rebase -i master

The key thing you need to remember here is to change the text that says "pick" to "squash" for all of the commits after the first commit. This will squash all of the commits down onto your master branch.

4 - Select the master branch:

git checkout master

5 - Move HEAD and the master branch to where new-feature is:

git rebase new-feature

You can try all of the commands out in this visual tool: http://pcottle.github.io/learnGitBranching/
Squash
15,727,597
30
git branch --merged doesn't appear to play nicely with --squash. If you do a normal git merge, then git branch --merged tells you which branches have been merged. This is not the case however if the --squash option is used, even though the resulting tree is the same. I doubt this is a git defect and would like to know if there is some git-fu I'm missing, or if I have misunderstood something. In short: I want to use --squash, but also want git to tell me if the branch I squashed into another one has been --merged.
You can't get there from here (as the fellow giving directions said). More precisely, it does not make sense.

The problem is that git merge --squash does not actually do a merge. Suppose your branch history looks like this, for instance (with branches topic and devel):

          H ⬅ I ⬅ J   <-- topic
        ⬋
... ⬅ F ⬅ G
        ⬉
          K ⬅ L       <-- devel

If you check out devel and merge topic, you get a new merge commit M that contains the result of the merge, and M has two parents:

          H ⬅ I ⬅ J   <-- topic
        ⬋          ⬆
... ⬅ F ⬅ G        ⬆
        ⬉          ⬆
          K ⬅ L ⬅ M   <-- devel

But if you use git merge --squash topic you get, instead, a new commit (let's label it S for squash):

          H ⬅ I ⬅ J   <-- topic
        ⬋
... ⬅ F ⬅ G
        ⬉
          K ⬅ L ⬅ S   <-- devel

where (as you already noted) the contents (the tree) of commit S make all the files come out the same as they would in commit M. But there's no back-link (parent arrow) from S to topic. It's not a merge at all, it's just taking all the changes from topic, squashing them into a single change, and adding that as an entirely independent commit.

Now, the other thing about git merge --squash is that it does not make the final commit. So you could create the .git files that git would on a "regular" merge, and do a commit that has the two parents you'd get on a "real" merge. And then you'd get ... exactly what you get if you run git merge topic: a commit (label it S or M, it does not matter) that has the same tree again, but now has two parent-arrows, pointing to L and J, just like M.

In fact, running git merge --squash is almost exactly the same as running git merge --no-commit, except for the tracing files left behind when the merge is done (git commit uses some of these to set up the parents). The squash version does not write to .git/MERGE_HEAD and .git/MERGE_MODE. (It does create .git/MERGE_MSG, the same as git merge --no-commit, and it also creates .git/SQUASH_MSG.)

So, basically, you have your choice: a real merge (two or more parents on the final commit), or a squash (same tree-combining mechanisms, but only one parent on the final commit). And, since git branch --merged works by looking at the "parent arrows" of each commit stored in the repository, only a real merge is really a merge, so only a real merge can be discovered later by git branch.
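For what it's worth, the low-level version of what that paragraph describes (stage the squashed result, then record both parents yourself) could look roughly like the sketch below; the branch names devel and topic come from the answer, everything else is plain git plumbing:

git checkout devel
git merge --squash topic                                    # stage the combined changes, no commit
tree=$(git write-tree)                                      # tree of the staged result
commit=$(git commit-tree -p HEAD -p topic -m "merge topic" "$tree")
git reset --soft "$commit"                                  # point devel at the new two-parent commit

As the answer points out, this is just a long-winded git merge topic: once the commit has two parents, git branch --merged will report topic as merged, but then it is no longer a squash.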
Squash
19,308,790
30
What is the difference between the amend and squash commands? I tried both, and they seem to do much the same thing when it comes to tidying up history.
In Git, commits are rarely actually destroyed; they just become orphaned, or detached, meaning that they are not pointed to or reachable by a reference like a branch or tag.

"Amending" and "squashing" are similar concepts, though. Typically, amending is a single-commit operation in which you want to combine work that you have staged with your HEAD commit. This can be very convenient if you have just created a commit and realize that you need to add some content to it. Simply recall your commit command and use the --amend option.

Squashing is the more abstract term. I would say that an amend is a type of squash. Whenever you combine commits you could say that you are squashing them. If you have been working on a branch for a little while and have made 5 commits that, taken together, should be 1 commit, you can interactively rebase to squash them together.

There are several ways in Git to amend/squash, but they all center around the concept of organizing your commit history (which means re-writing the history of a branch) in the spirit of making it easier to grok.
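A small sketch of each (the file name and commit count are just illustrative):

# amend: fold newly staged work into the current HEAD commit
git add forgotten_file.txt
git commit --amend --no-edit   # keep the existing message
# or: git commit --amend       # open the editor to reword it

# squash: combine the last 5 commits via interactive rebase
git rebase -i HEAD~5           # mark commits 2-5 as "squash" (or "fixup") in the todo list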
Squash
35,044,229
24
I have the following git workflow:

1. Create new feature branch
2. Work on feature branch
3. Commit often
4. Once feature is complete, merge into master branch
5. Rinse and repeat

However, sometimes, I have the need to revert a whole feature from master. This could involve a whole lot of reverting. (The reason for needing to revert a feature is that I have a website that works off of one repo. From there, we use a script that deploys the site to either our Production site or Staging site. Both are done from our master branch. Don't ask, that's just what I've been given to work with. Sometimes, I'm working on something that I stage, but then an immediate change needs to be made, so I needed some way to pull my changes in order to clean the repo.)

I'm thinking that the easiest way to do so is if each feature branch has only one commit. Then I could revert that commit. So naturally, I am thinking of squashing all commits of a feature branch into one, prior to merging it into master. So now my workflow would look like:

1. Create new feature branch
2. Work on feature branch
3. Commit often
4. Once feature is complete: git rebase -i HEAD~number_of_commits (or, if a remote branch is available, origin/feature_branch)

Is there any issue with this logic? Does it go against any best practices? I did some testing myself and the whole workflow seems to run smoothly and solves my problem, but I wanted to run the idea by other (smarter) Git-ers to see if there is anything wrong with it. Thanks!
You should look at leveraging the squash merge capability of git, i.e. git merge --squash, so that you do not rewrite history unnecessarily.

Both git merge --squash and git rebase --interactive can be used to produce a squashed commit with the same resultant work-tree, but they are intended to serve 2 totally different purposes. Your tree eventually ends up looking different in the two cases.

Initial tree:

a -- b -- c -- d                master
      \
       \-- e -- f               feature1

After git checkout master; git merge --squash feature1; git commit:

a -- b -- c -- d -- F           master
      \
       \-- e -- f               feature1

After git checkout master; git rebase -i feature1 and choosing to pick c and squash d:

a -- b            /-- F         master
      \          /
       \-- e -- f               feature1

As you can see from the difference, you do not rewrite the history of any branch when using git merge --squash, but you end up rewriting the history of master when using git rebase -i.

Also note that the actual commits (the ones which got squashed) will be present in your git history in both cases, but only as long as you have some branch or tag reference through which those commits are reachable. In other words, in the above example, if you delete feature1 after doing merge --squash, you will not be able to actually view the commits e or f in the future (especially after the 90-day reflog period). The same applies to the commits c and d in the rebase example.
Squash
16,449,029
20