Dataset columns: question (string, lengths 11–28.2k), answer (string, lengths 26–27.7k), tag (string, 130 classes), question_id (int64, 935–78.4M), score (int64, 10–5.49k).
This question is about Redis persistence. I'm using redis as a 'fast backend' for a social networking website. It's a single-server setup. I've been transferring PostgreSQL responsibilities to Redis steadily. Currently in /etc/redis/redis.conf, the appendonly setting is set to appendonly no. Snapshotting settings are save 900 1, save 300 10, save 60 10000. All this is true for both production and development. As per production logs, save 60 10000 gets invoked heavily. Does this mean that practically, I'm getting backups every 60 seconds? Some literature suggests using AOF and RDB backups together. Thus I was weighing turning appendonly on and using appendfsync everysec. For anyone who has had experience of both sides of the coin: 1) Will using appendonly on and appendfsync everysec cause a performance downgrade? Will it hit the CPU? The write load is on the high side. 2) Once I restart the redis server with these new settings, I'll still lose the last 60 secs of my data, correct? 3) Are restart times something to worry about? My dump.rdb file is small; ~90MB. I'm trying to find out more about redis persistence, and getting my expectations right. Personally, I'm fine with losing 60s of data in the case of a catastrophe, thus whether I should use AOF is also something I'm pondering. Feel free to chime in. Thanks!
Does this mean that practically, I'm getting backups every 60 seconds? NO. Redis does a background save after 60 seconds only if at least 10000 keys have been changed. Otherwise, it doesn't do a background save. Will using appendonly on and appendfsync everysec cause a performance downgrade? Will it hit the CPU? The write load is on the high side. It depends on many things, e.g. disk performance (SSD vs. HDD), write/read load (QPS), data model, and so on. You need to do a benchmark with your own data in your specific environment. Once I restart the redis server with these new settings, I'll still lose the last 60 secs of my data, correct? NO. If you turn on both AOF and RDB, when Redis restarts, the AOF file will be used to rebuild the database. Since you configure it with appendfsync everysec, you will only lose the last 1 second of data. Are restart times something to worry about? My dump.rdb file is small; ~90MB. If you turn on AOF, then when Redis restarts, it replays the log in the AOF file to rebuild the database. Normally the AOF file is larger than the RDB file, and recovery might be slower than recovering from the RDB file. Should you worry about that? Do a benchmark with your own data in your specific environment. EDIT IMPORTANT NOTICE Assume that you have already set Redis to use RDB saving and have written lots of data to Redis. After a while, you want to turn on AOF saving. NEVER MODIFY THE CONFIG FILE TO TURN ON AOF AND RESTART REDIS, OTHERWISE YOU'LL LOSE EVERYTHING. Because, once you set appendonly yes in redis.conf and restart Redis, it will load data from the AOF file, no matter whether the file exists or not. If the file doesn't exist, it creates an empty file and tries to load data from that empty file, so you'll lose everything. In fact, you don't have to restart Redis to turn on AOF. Instead, you can use the config set command to dynamically turn it on: config set appendonly yes.
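To illustrate that last point, here is a minimal redis-py sketch (assuming a locally running instance on the default port) that turns AOF on at runtime and then persists the setting, instead of editing redis.conf and restarting:

    import redis

    r = redis.Redis(host="localhost", port=6379)
    # Enable AOF on the running instance; Redis rewrites the AOF in the background.
    r.config_set("appendonly", "yes")
    r.config_set("appendfsync", "everysec")
    # Write the runtime change back into redis.conf so it survives the next restart
    # (CONFIG REWRITE only works if the server was started with a config file).
    r.config_rewrite()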
Redis
39,953,542
31
TL;DR: which of the three options below is the most efficient for paginating with Redis? I'm implementing a website with multiple user-generated posts, which are saved in a relational DB, and then copied to Redis in the form of Hashes with keys like site:{site_id}:post:{post_id}. I want to perform simple pagination queries against Redis, in order to implement lazy-load pagination (i.e. the user scrolls down, we send an Ajax request to the server asking for the next bunch of posts) in a Pinterest-style interface. Then I created a Set to keep track of published post ids, with keys like site:{site_id}:posts. I've chosen Sets because I don't want to have duplicated IDs in the collection and I can do it quickly with a simple SADD (no need to check if the id exists) on every DB update. Well, as Sets aren't ordered, I'm weighing the pros and cons of the options I have to paginate: 1) Using the SSCAN command to paginate my already-implemented sets In this case, I could persist the returned Scan cursor in the user's session, then send it back to the server on the next request (it doesn't seem reliable with multiple users accessing and updating the database: at some point the cursor would be invalid and return weird results - unless there is some caveat that I'm missing). 2) Refactor my sets to use Lists or Sorted Sets instead Then I could paginate using LRANGE or ZRANGE. A List seems to be the most performant and natural option for my use case. It's perfect for pagination and ordering by date, but I simply can't check for a single item's existence without looping over the whole list. Sorted Sets seem to combine the advantages of both Sets and Lists, but consume more server resources. 3) Keep using regular sets and store the page number as part of the key It would be something like site:{site_id}:{page_number}:posts. It was the recommended way before the Scan commands were implemented. So, the question is: which one is the most efficient / simplest approach? Is there any other recommended option not listed here?
"Best" is best served subjective :) I recommend you go with the 2nd approach, but definitely use Sorted Sets over Lists. Not only do the make sense for this type of job (see ZRANGE), they're also more efficient in terms of complexity compared to LRANGE-ing a List.
Redis
32,076,783
31
What are the steps required to set up Redis database on Webfaction shared hosting account?
Introduction Because of the special environment restrictions of Webfaction servers the installation instructions are not as straightforward as they otherwise would be. Nevertheless, at the end you will have a fully functioning Redis server that stays up even after a reboot. I personally installed Redis by the following procedure about half a year ago and it has been running flawlessly since. A little word of warning though, half a year is not a long time, especially because the server has not been under heavy use. The instructions consist of five parts: Installation, Testing, Starting the Server, Managing the Server and Keeping the Server Running. Installation Log in to your Webfaction shell ssh [email protected] Download the latest Redis from the Redis download site. > mkdir -p ~/src/ > cd ~/src/ > wget http://download.redis.io/releases/redis-2.6.16.tar.gz > tar -xzf redis-2.6.16.tar.gz > cd redis-2.6.16/ Before running make, check whether your server is 32- or 64-bit Linux. The installation script does not handle 32-bit environments well, at least on Webfaction's CentOS 5 machines. The command to check this is uname -m. If Linux is 32 bit the result will be i686, if 64 bit then x86_64. See this answer for details. > uname -m i686 If your server is 64 bit (x86_64) then simply run make. > make But if your server is 32 bit (i686) then you must do a little extra work. There is a command make 32bit but it produces an error. Edit a line in the installation script to make make 32bit work. > nano ~/src/redis-2.6.16/src/Makefile Change line 214 from this $(MAKE) CFLAGS="-m32" LDFLAGS="-m32" to this $(MAKE) CFLAGS="-m32 -march=i686" LDFLAGS="-m32 -march=i686" and save. Then run make with the 32bit flag. > cd ~/src/redis-2.6.16/ ## Note the dir, no trailing src/ > make 32bit The executables are created in the directory ~/src/redis-2.6.16/src/. The executables include redis-cli, redis-server, redis-benchmark and redis-sentinel. Testing (optional) As the output of the installation suggests, it would be nice to ensure that everything works as expected by running the tests. Hint: To run 'make test' is a good idea ;) Unfortunately the testing requires Tcl 8.6.0 to be installed, which is not the case by default, at least on the machine web223. So you must install it first, from source. See the Tcl/Tk installation notes and compiling notes. > cd ~/src/ > wget http://prdownloads.sourceforge.net/tcl/tcl8.6.0-src.tar.gz > tar -xzf tcl8.6.0-src.tar.gz > cd tcl8.6.0-src/unix/ > ./configure --prefix=$HOME > make > make test # Optional, see notes below > make install Testing Tcl with make test will take time and will also fail due to WebFaction's environment restrictions. I suggest you skip this. Now that we have Tcl installed we can run the Redis tests. The tests will take a long time and also temporarily use quite a large amount of memory. > cd ~/src/redis-2.6.16/ > make test After the tests you are ready to continue. Starting the Server First, create a custom application via the Webfaction Control Panel (Custom app (listening on port)). Name it, for example, fooredis. Note that you do not have to create a domain or website for the app if Redis is used only locally, i.e. from the same host. Second, make a note of the socket port number that was given to the app. Let the example be 23015. Copy the previously compiled executables to the app's directory. You may choose to copy all or only the ones you need. > cd ~/webapps/fooredis/ > cp ~/src/redis-2.6.16/src/redis-server . > cp ~/src/redis-2.6.16/src/redis-cli . Copy also the sample configuration file.
You will soon modify that. > cp ~/src/redis-2.6.16/redis.conf . Now Redis is already runnable. There are a couple of problems, though. First, the default Redis port 6379 might already be in use. Second, even if the port were free, yes, you could start the server, but it stops running the moment you exit the shell. For the first, redis.conf must be edited, and for the second, you need a daemon, which is also solved by editing redis.conf. Redis is able to run itself in daemon mode. For that you need to set up a place where the daemon stores its process ids, PIDs. Usually pidfiles are stored in /var/run/ but because of the environment restrictions you must select a place for them in your home directory. For a reason explained later in the part Managing the Server, a good choice is to put the pidfile under the same directory as the executables. You do not have to create the file yourself, Redis creates it for you automatically. Now open redis.conf for editing. > cd ~/webapps/fooredis/ > nano redis.conf Change the configuration in the following manner. daemonize no -> daemonize yes pidfile /var/run/redis.pid -> pidfile /home/foouser/webapps/fooredis/redis.pid port 6379 -> port 23015 Now, finally, start the Redis server. Specify the conf file so Redis listens on the right port and runs as a daemon. > cd ~/webapps/fooredis/ > ./redis-server redis.conf > See it running. > cd ~/webapps/fooredis/ > ./redis-cli -p 23015 redis 127.0.0.1:23015> SET myfeeling Phew. OK redis 127.0.0.1:23015> GET myfeeling "Phew." redis 127.0.0.1:23015> (ctrl-d) > Stop the server if you want to. > ps -u $USER -o pid,command | grep redis 718 grep redis 10735 ./redis-server redis.conf > kill 10735 or > cat redis.pid | xargs kill Managing the Server For ease of use, and as preparatory work for the next part, make a script that helps to open the client and start, restart and stop the server. An easy solution is to write a makefile. When writing a makefile, remember to use tabs instead of spaces. > cd ~/webapps/fooredis/ > nano Makefile # Redis Makefile client cli: ./redis-cli -p 23015 start restart: ./redis-server redis.conf stop: cat redis.pid | xargs kill The rules are quite self-explanatory. What is special about the second rule is that while in daemon mode, calling ./redis-server does not create a new process if there is one running already. The third rule has some quiet wisdom in it. If redis.pid were not stored under the directory of fooredis but, for example, in /var/run/redis.pid, then it would not be so easy to stop the server. This is especially true if you run multiple Redis instances concurrently. To execute a rule: > make start Keeping the Server Running You now have an instance of Redis running in daemon mode, which allows you to quit the shell without stopping it. This is still not enough. What if the process crashes? What if the server machine is rebooted? To cover these you have to create two cronjobs. > export EDITOR=nano > crontab -e Add the following two lines and save. */5 * * * * make -C ~/webapps/fooredis/ -f ~/webapps/fooredis/Makefile start @reboot make -C ~/webapps/fooredis/ -f ~/webapps/fooredis/Makefile start The first one ensures every five minutes that fooredis is running. As said above, this does not start a new process if one is already running. The second one ensures that fooredis is started immediately after the server machine reboots and long before the first rule kicks in. Some more sophisticated methods could be used for this, for example forever.
See also this Webfaction Community thread for more about the topic. Conclusion Now you have it. Lots of things done, but maybe more will come. Things you may like to do in the future which were not covered here include the following. Setting a password, preventing other users from flushing your databases. (See redis.conf) Limiting the memory usage (See redis.conf) Logging the usage and errors (See redis.conf) Backing up the data once in a while. Any ideas, comments or corrections?
Redis
18,622,630
31
I'm using Resque workers to process jobs in a queue. I have a large number of jobs (> 1M) in a queue, and there are some jobs that I need to remove (added by error). Creating the queue with the jobs was not an easy task, so clearing the queue using resque-web and adding the correct jobs again is not an option for me. Appreciate any advice. Thanks!
In resque's sources (Job class) there's such method, guess it's what you need :) # Removes a job from a queue. Expects a string queue name, a # string class name, and, optionally, args. # # Returns the number of jobs destroyed. # # If no args are provided, it will remove all jobs of the class # provided. # # That is, for these two jobs: # # { 'class' => 'UpdateGraph', 'args' => ['defunkt'] } # { 'class' => 'UpdateGraph', 'args' => ['mojombo'] } # # The following call will remove both: # # Resque::Job.destroy(queue, 'UpdateGraph') # # Whereas specifying args will only remove the 2nd job: # # Resque::Job.destroy(queue, 'UpdateGraph', 'mojombo') # # This method can be potentially very slow and memory intensive, # depending on the size of your queue, as it loads all jobs into # a Ruby array before processing. def self.destroy(queue, klass, *args)
Redis
10,274,974
31
I have installed redis. The default given name to me is plinking-narwhal. Now I would like to install a service with my assigned name. But first I want to remove the existing one. I had tried deleting them without success. $ kubectl get all NAME READY STATUS RESTARTS AGE pod/plinking-narwhal-redis-master-0 1/1 Running 0 12m pod/plinking-narwhal-redis-slave-9b645b597-2vh82 1/1 Running 7 12m NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 15m service/plinking-narwhal-redis-master ClusterIP 10.109.186.189 <none> 6379/TCP 12m service/plinking-narwhal-redis-slave ClusterIP 10.99.122.12 <none> 6379/TCP 12m NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE deployment.apps/plinking-narwhal-redis-slave 1 1 1 1 12m NAME DESIRED CURRENT READY AGE replicaset.apps/plinking-narwhal-redis-slave-9b645b597 1 1 1 12m NAME DESIRED CURRENT AGE statefulset.apps/plinking-narwhal-redis-master 1 1 12m master $ helm delete stable/redis Error: invalid release name, must match regex ^(([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9])+$ and the length must not longer than 53 master $ helm delete --name plinking-narwhal stable/redis Error: unknown flag: --name
You probably need: $ helm delete redis or if you completely want to remove the release: $ helm delete redis --purge stable/redis is not allowed as an expression because of the slash(/) If you'd like to see the name of the releases you can simply run: $ helm list -aq
Redis
55,036,464
30
When using the node_redis Node.js module with Redis, should I just use one connection, since Redis is a single-threaded process, or shall I create a pool of connections to improve performance?
Just use a single connection. Both Node and Redis are effectively single-threaded. I don't think you'll gain anything by having multiple connections. I asked a similar question before starting to develop with Redis and it seems that one client per application is a pretty effective pattern. Highlighting an important note from the comments on this question (thanks to oskarth & stockholmux): for long running processes like servers, it is okay to use a single long lived connection.
Redis
21,976,270
30
In redis-cli, what is the command to print all the values in a list without knowing in advance the size of the list? I see lrange, but it requires naming the start index and the end index.
You use -1 to indicate end of list so: LRANGE key 0 -1 would print all.
Redis
20,829,778
30
I need to see all available keys in Redis. This question: Redis command to get all available keys? Adequately covers the case where I run redis-cli with no arguments, then type keys *. However, how do I get all keys with a single command? redis-cli keys * returns: (error) ERR wrong number of arguments for 'keys' command Even though there are keys set, which checks out if you use the two-command solution.
You need to do redis-cli keys '*' to keep your shell from expanding * into a list of filenames.
Redis
12,119,075
30
I know the KEYS command, but that only returns the keys (I'm guessing all of the keys with type String), and apparently sets aren't considered keys. Is there a command for getting all of the sets in the database? What about other data types (hash, list, sorted set)? http://redis.io/topics/data-types
I know the KEYS command, but that only returns the keys (I'm guessing all of the keys with type String), and apparently sets aren't considered keys. The KEYS command returns results no matter what data type your keys hold, since it searches key names. At the lowest level of abstraction each data type in redis is key/value based, where the value can be represented as one of several (advanced) data structures (string, hash, list, set, sorted set). You can see that the KEYS command also works for sets in its examples. Is there a command for getting all of the sets in the database? What about other data types (hash, list, sorted set)? As far as I know there is no dedicated command for this functionality, and the KEYS command is applied to the entire data set of your database. However, there is a TYPE command which can determine the data type of a specified key.
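If you do need to enumerate all keys of a given type, here is a hedged redis-py sketch based on SCAN (which, unlike KEYS, does not block the server for long); on Redis 6+ you could also pass the TYPE option to SCAN itself:

    import redis

    r = redis.Redis(decode_responses=True)

    # Collect the names of all keys that hold sets; the same idea works for
    # "hash", "list", "zset" or "string" by changing the comparison.
    set_keys = [key for key in r.scan_iter(match="*", count=1000) if r.type(key) == "set"]
    print(set_keys)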
Redis
7,462,457
30
I'm getting more into Node.js and am enjoying it. I'm moving more into web application development. I have wrapped my head around Node.js and currently using Backbone for the front end. I'm making a few applications that uses Backbone to communicate with the server using a RESTful API. In Node.js, I will be using the Express framework. I'm reaching a point where I need a simple database on the server. I'm used to PostgreSQL and MySQL with Django, but what I'm needing here is some simple data storage etc. I know about CouchDB, MongoDB and Redis, but I'm just not sure which one to use? Is any one of them better suited for Node.js? Is any one of them better for beginners, moving from relational databases? I'm just needing some guidance on which to choose, I've come this far, but when it's coming to these sort of databases, I'm just not sure...
Is any one of them better suited for Node JS? Better suited specifically for Node.js, probably not, but each of them is better suited for certain scenarios based on your application needs or use cases. Redis is an advanced key-value store and probably the fastest one among the three NoSQL solutions. Besides basic key data manipulation it supports rich data structures such as lists, sets, hashes, or pub/sub functionality, which can be really handy, namely in statistics or other real-time madness. It however lacks any sort of querying language. CouchDB is a document-oriented store which is very durable, offers MVCC, a REST interface, a great replication system and map-reduce querying. It can be used for a wide range of scenarios and can substitute for your RDBMS, however if you are used to ad hoc SQL queries then you may have certain problems with its map-reduce views. MongoDB is also a document-oriented store, like CouchDB, and it supports ad hoc querying besides map-reduce, which is probably one of the crucial features why people searching for an RDBMS substitution choose MongoDB over the other NoSQL solutions. Is any one of them better for beginners, moving from relational databases? Since you are coming from the RDBMS world and you are probably used to SQL then, I think, you should go with MongoDB because, unlike Redis or CouchDB, it supports ad hoc queries and the querying mechanism is similar to SQL. However there may be areas, depending on your application scenarios, where Redis or CouchDB may be better suited to do the job.
Redis
6,507,953
30
I'm currently migrating some data to Redis and I'm considering using a sorted set to store approximately 1.4e6 items (with associated scores/counts). Is this number of items in a set likely to exceed a practical limit, making it too painful to use the set? I plan on running 64 bit redis, so available memory for the data should not be a problem. Does anyone have experience with a sorted set this size? If so, how are your insertion and query times for the set?
It depends what you want to do with the set. The simple operations are mostly O(log n) which means that they take only twice as long for a million item set as they do for a thousand item set. Unless you have something seriously broken in your config like a memory limit smaller than the set, performance shouldn't be a problem. Where you need to be careful is with operations on multiple sets, particularly union - that will take a thousand times longer for the million item set. In practical terms this isn't necessarily a problem though - either it will be fast enough for your purposes anyway (redis has commands documented as too slow for production use that are still best measured in milliseconds) or you can adjust the order of operations to avoid running union on really large sets.
Redis
6,076,342
30
I'm looking for a way to store a list of items for a user, that will expire within 24 hours. Is there a way to accomplish this using Redis? I was thinking of just using the list and setting an expiration for each individual item, is there a better way?
I use: ZADD - add a new unique value to the sorted set. ZRANGE - get all current values from the set ordered by score (ZRANGEBYSCORE has been deprecated in favor of ZRANGE ... BYSCORE). ZREMRANGEBYSCORE - remove all members between two scores from the set. In this solution the score = timestamp. For example, insert 3 values: ZADD mykey 160 val1 // 1 ZADD mykey 161 val2 // 1 ZADD mykey 120 val3 // 1 Get sorted values between scores (between -infinity and 400): ZRANGE mykey -inf 400 BYSCORE // ['val3', 'val1', 'val2'] Remove values (between -infinity and 121) - val3 will be removed: ZREMRANGEBYSCORE mykey -inf 121 // 1 (Again) get sorted values between scores (between -infinity and 400): ZRANGE mykey -inf 400 BYSCORE // ['val1', 'val2']
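Applied to the original question (per-user items that should disappear after 24 hours), here is a small redis-py sketch of the same idea, using Unix timestamps as scores; the key name is illustrative:

    import time
    import redis

    r = redis.Redis(decode_responses=True)
    DAY = 24 * 60 * 60

    def add_item(user_id, item):
        r.zadd(f"user:{user_id}:items", {item: time.time()})

    def get_items(user_id):
        key = f"user:{user_id}:items"
        # Drop everything older than 24 hours, then return whatever is left.
        r.zremrangebyscore(key, "-inf", time.time() - DAY)
        return r.zrangebyscore(key, "-inf", "+inf")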
Redis
48,044,721
29
I know that it is possible to pass your own config file but I'd rather edit the handful of values I care about in the default config. I'm having a hard time finding a default redis.conf anywhere though, do I just have to COPY my own into the container?
The default image from redis does not have a redis.conf. Here is the link for the image on Docker Hub: https://hub.docker.com/_/redis/ You will have to copy one into the image, or map it in from the host using a volume mapping.
Redis
37,402,551
29
I've found different zookeeper definitions across multiple resources. Maybe some of them are taken out of context, but look at them pls: A canonical example of Zookeeper usage is distributed-memory computation... ZooKeeper is an open source Apache™ project that provides a centralized infrastructure and services that enable synchronization across a cluster. Apache ZooKeeper is an open source file application program interface (API) that allows distributed processes in large systems to synchronize with each other so that all clients making requests receive consistent data. I've worked with Redis and Hazelcast, that would be easier for me to understand Zookeeper by comparing it with them. Could you please compare Zookeeper with in-memory-data-grids and Redis? If distributed-memory computation, how does zookeeper differ from in-memory-data-grids? If synchronization across cluster, than how does it differs from all other in-memory storages? The same in-memory-data-grids also provide cluster-wide locks. Redis also has some kind of transactions. If it's only about in-memory consistent data, than there are other alternatives. Imdg allow you to achieve the same, don't they?
https://zookeeper.apache.org/doc/current/zookeeperOver.html By default, Zookeeper replicates all your data to every node and lets clients watch the data for changes. Changes are sent very quickly (within a bounded amount of time) to clients. You can also create "ephemeral nodes", which are deleted within a specified time if a client disconnects. ZooKeeper is highly optimized for reads, while writes are very slow (since they generally are sent to every client as soon as the write takes place). Finally, the maximum size of a "file" (znode) in Zookeeper is 1MB, but typically they'll be single strings. Taken together, this means that zookeeper is not meant to store much data, and definitely not a cache. Instead, it's for managing heartbeats/knowing what servers are online, storing/updating configuration, and possibly message passing (though if you have large #s of messages or high throughput demands, something like RabbitMQ will be much better for this task). Basically, ZooKeeper (and Curator, which is built on it) helps in handling the mechanics of clustering -- heartbeats, distributing updates/configuration, distributed locks, etc. It's not really comparable to Redis, but for the specific questions... It doesn't support any computation and for most data sets, won't be able to store the data with any reasonable performance. It's replicated to all nodes in the cluster (there's nothing like Redis clustering where the data can be distributed). All messages are processed atomically in full and are sequenced, so there are no real transactions. It can be USED to implement cluster-wide locks for your services (it's very good at that in fact), and there are a lot of locking primitives on the znodes themselves to control which nodes access them. Sure, but ZooKeeper fills a niche. It's a tool for making distributed applications play nicely with multiple instances, not for storing/sharing large amounts of data. Compared to using an IMDG for this purpose, Zookeeper will be faster, manages heartbeats and synchronization in a predictable way (with a lot of APIs for making this part easy), and has a "push" paradigm instead of "pull" so nodes are notified very quickly of changes. The quotation from the linked question... A canonical example of Zookeeper usage is distributed-memory computation ... is, IMO, a bit misleading. You would use it to orchestrate the computation, not provide the data. For example, let's say you had to process rows 1-100 of a table. You might put 10 ZK nodes up, with names like "1-10", "11-20", "21-30", etc. Client applications would be notified of this change automatically by ZK, and the first one would grab "1-10" and set an ephemeral node clients/192.168.77.66/processing/rows_1_10 The next application would see this and go for the next group to process. The actual data to compute would be stored elsewhere (ie Redis, SQL database, etc). If the node failed partway through the computation, another node could see this (after 30-60 seconds) and pick up the job again. I'd say the canonical example of ZooKeeper is leader election, though. Let's say you have 3 nodes -- one is master and the other 2 are slaves. If the master goes down, a slave node must become the new leader. This type of thing is perfect for ZK.
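As a sketch of the ephemeral-node idea above, here is what claiming a work unit might look like with the Python kazoo client (host, path and payload are illustrative, not taken from the answer):

    from kazoo.client import KazooClient

    zk = KazooClient(hosts="127.0.0.1:2181")
    zk.start()
    # Claim the "rows 1-10" work unit; the ephemeral node disappears automatically
    # if this client dies, so another worker can pick the unit up again.
    zk.create("/clients/192.168.77.66/processing/rows_1_10",
              b"in-progress", ephemeral=True, makepath=True)
    # ... do the work, then disconnect ...
    zk.stop()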
Redis
37,293,928
29
When I run the command redis-cli INFO, one of the returned values indicates the avg_ttl. I'm unsure what unit of time this is represented in? Example: # Keyspace db0:keys=706818,expires=228745,avg_ttl=1521990750
This is a bit confusing indeed. The TTL command's return value is in seconds, the PTTL command's return value is in milliseconds, and avg_ttl from INFO is in milliseconds. Also, note that this average value avg_ttl is just an estimate based on a random sampling of keys.
Redis
26,226,588
29
I need to create a solution using PHP, with a MySQL database containing lots of data. My program will have to handle many requests; I think that if I work with a cache and an OO database I'll get a good result, but I don't have experience. I think, for example, that if I cache in a redis database the information that is saved in MySQL, performance will be improved, but I don't know if this is a good idea, so I would like someone to help me choose. Sorry if my English is not very good, I'm from Brazil.
Yes, redis is good for that. But to get the gist, there are basically two approaches to caching. Depending on whether you use a framework (and which one) or not, you may have the first option available as standard or via a plug-in: Cache database queries, that is - selected queries and their results will be kept in redis for quicker access for a given time or until the cache is cleared (useful after updating the database). In this case you can use built-in mysql query caching, which will be simpler than using an additional key-value store, or you can override the default database integration with your own class that makes use of the cache (for example http://pythonhosted.org/johnny-cache/). Custom caching, that is creating your own structures to be kept in cache and periodically or manually refilling them with data fetched from the database. It is more flexible and potentially more powerful, because you can use built-in redis features such as lists or sorted sets, which make update overhead much smaller. It requires a bit more coding, but it usually offers better results, since it is more customized. A good example is keeping top articles in the form of a redis list of ids, and then accessing the serialized article(s) with a given id from redis as well. You can keep that article unnormalized - i.e. the serialized object can contain the user id as well as the user name, so that you can keep the overhead of additional queries to a minimum. It is up to you to decide which approach to take; I personally almost always go with approach number two. But, of course, everything depends on how much time you have, and what the application is supposed to do - you might as well start with mysql query caching and, if the results are not good enough, move to redis and custom caching.
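A minimal sketch of the first approach (caching a query result with a TTL); it is shown with redis-py for brevity, and the phpredis calls (setex/get) are analogous; the key name and the 300-second TTL are assumptions:

    import json
    import redis

    r = redis.Redis(decode_responses=True)

    def get_top_articles(fetch_from_mysql, ttl=300):
        cached = r.get("cache:top_articles")
        if cached is not None:
            return json.loads(cached)
        rows = fetch_from_mysql()  # hits MySQL only on a cache miss
        r.setex("cache:top_articles", ttl, json.dumps(rows))
        return rows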
Redis
16,268,950
29
I'm using node_redis and I'd like to save a structure like: { users : "alex" : { "email" : "[email protected]", "password" : "alex123"}, "sandra" : { "email" : "[email protected]", "password" : "sandra123"}, ... } Currently, for each user I create a JSON object: jsonObj = { "email" : "[email protected]", "password" : "alex123"} and do a db.hmset("alex", JSON.stringify(jsonObj)) Is it possible to embed this structure in another structure (the users one)? How could I set users["alex"] with this structure?
As far as I know there isn't native support for nested structures in Redis, but they can be modeled, for example, with set+hash (similar to hierarchical trees). Hashes are probably best suited for storing the fields and values of a single JSON object. What I would do is store each user with a prefix (which is a Redis convention), for example: db.hmset("user:alex", JSON.stringify(jsonObj)); and then use sets to group users into one set with a key named users. I can then get all of the user keys with the smembers command and access each of them individually with hgetall.
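A short redis-py sketch of that layout (hset with a mapping plays the role of hmset, which is deprecated in newer clients; names mirror the question):

    import redis

    r = redis.Redis(decode_responses=True)

    # One hash per user, grouped together by a set of user keys.
    r.hset("user:alex", mapping={"email": "[email protected]", "password": "alex123"})
    r.sadd("users", "user:alex")

    # users["alex"] then becomes:
    alex = r.hgetall("user:alex")
    # ...and the whole users structure:
    everyone = {key: r.hgetall(key) for key in r.smembers("users")}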
Redis
5,701,491
29
Question is about keeping Redis data alive between docker-compose up and docker-compose down. In the docker-compose.yaml file bellow db service uses - postgres_data:/var/lib/postgresql/data/ volume to keep data alive. I would like to do something like this for redis service but I can not find workable solution to do so. Only one way I have managed to achieve this goal is to store data in local storage - ./storage/redis/data:/data. All experiments with external volume gave no results. Question is -is it possible somehow to store redis data between docker-compose down and docker-compose up in a volume likewise it made in DB service? Sorry if question is naive… Thanks version: '3.8' services: web: build: . command: python /code/manage.py runserver 0.0.0.0:8000 env_file: - ./series/.env volumes: - .:/code ports: - 8000:8000 depends_on: - db - redis db: build: context: . dockerfile: postgres.dockerfile restart: always env_file: - ./series/.env environment: - POSTGRES_DB=postgres - POSTGRES_USER=postgres - POSTGRES_PASSWORD=1q2w3e volumes: - postgres_data:/var/lib/postgresql/data/ ports: - target: 5432 published: 5433 protocol: tcp mode: host redis: image: redis:alpine command: redis-server --appendonly yes ports: - target: 6379 published: 6380 protocol: tcp mode: host volumes: - ./storage/redis/data:/data restart: always environment: - REDIS_REPLICATION_MODE=master volumes: postgres_data:
You just need to add a named volume for Redis data next to the postgres_data: volumes: postgres_data: redis_data: Then change host path to the named volume: redis: ... volumes: - redis_data:/data If Redis saved data with host path, then the above will work for you. I mention that because you have to configure Redis to enable persistent storage (see Redis Docker Hub page https://hub.docker.com/_/redis). Beware, running docker-compose down -v will destroy volumes as well.
Redis
63,906,856
28
Using dd = {'ID': ['H576','H577','H578','H600', 'H700'], 'CD': ['AAAAAAA', 'BBBBB', 'CCCCCC','DDDDDD', 'EEEEEEE']} df = pd.DataFrame(dd) Pre Pandas 0.25, this below worked. set: redisConn.set("key", df.to_msgpack(compress='zlib')) get: pd.read_msgpack(redisConn.get("key")) Now, there are deprecated warnings.. FutureWarning: to_msgpack is deprecated and will be removed in a future version. It is recommended to use pyarrow for on-the-wire transmission of pandas objects. The read_msgpack is deprecated and will be removed in a future version. It is recommended to use pyarrow for on-the-wire transmission of pandas objects. How does pyarrow work? And, how do I get pyarrow objects into and back from Redis. reference: How to set/get pandas.DataFrame to/from Redis?
Here's a full example to use pyarrow for serialization of a pandas dataframe to store in redis apt-get install python3 python3-pip redis-server pip3 install pandas pyarrow redis and then in python import pandas as pd import pyarrow as pa import redis df=pd.DataFrame({'A':[1,2,3]}) r = redis.Redis(host='localhost', port=6379, db=0) context = pa.default_serialization_context() r.set("key", context.serialize(df).to_buffer().to_pybytes()) context.deserialize(r.get("key")) A 0 1 1 2 2 3 I just submitted PR 28494 to pandas to include this pyarrow example in the docs. Reference docs: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_msgpack.html https://arrow.apache.org/docs/python/ipc.html#arbitrary-object-serialization https://arrow.apache.org/docs/python/memory.html#pyarrow-buffer https://stackoverflow.com/a/37957490/4126114
Redis
57,949,871
28
I have installed and compiled Redis from source and am attempting to connect to an Amazon ElastiCache (Redis) cluster. I can connect to the default localhost with no problem, but attempting to connect to an AWS endpoint causes what seems to be an infinite hangup. With defaults: $ redis-server /etc/redis.conf # daemonized, uses localhost $ redis-cli ping PONG $ sudo service redis_6379 status Redis is running (12919) $ redis-cli shutdown # or sudo service redis_6379 stop Now, here is an attempt to connect to the endpoint, copies from AWS documentation on the topic: redis-cli -c -h my_example_endpoint_name.eaogs8.ng.0001.use1.cache.amazonaws.com -p 6379 ping This hangs up infinitely without anything being issued to stderr/stdout. (Please note this is an example endpoint name; I have verified I am using the primary endpoint listed at the AWS console.) I suspect this may be related to the security group settings for the cluster on the AWS side but am not sure specifically what could/should be modified. I appreciate suggestions of what could be blocking the connection and can provide info on the cluster itself as needed.
I was also seeing the call to redis-cli hang up infinitely, but in my case it did not stem from incorrectly-configured security groups. Instead, it occurred because I had created my Redis cluster with the 'Encryption in-transit' option set to 'Yes'. This meant my database endpoint needed to be accessed through an SSL tunnel, which redis-cli does not do. For my application, encryption in-transit wasn't actually necessary so I created a new Redis cluster with that option not selected. More details on what you need to do differently when using in-transit encryption can be found here: https://aws.amazon.com/premiumsupport/knowledge-center/elasticache-connect-redis-node/
Redis
52,043,233
28
On a fresh Ubuntu 16.04 EC2 instance the warnings appear like so: WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect. WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled. How to eliminate them permanently?
Like the warning suggests, just add the line vm.overcommit_memory=1 to the bottom of /etc/sysctl.conf, with something like sudo vi /etc/sysctl.conf. But permissions don't allow you to edit THP as the warning suggests, so instead do sudo apt install hugepages and add the command sudo hugeadm --thp-never to the bottom of your .bashrc, with something like sudo vi ~/.bashrc. Then just sudo reboot and next time you SSH in run redis-server and the warnings are gone!
Redis
41,203,492
28
I cache some data in redis: I read data from redis if it exists, otherwise I read data from the database and write it to redis. I find that there are several ways to update redis after updating the database. For example: set keys in redis to expire; update redis immediately after updating the database; put data in an MQ and use a consumer to update redis. I'm a little confused and don't know how to choose. Could you tell me the advantages and disadvantages of each way, and ideally point out other ways to update redis or recommend some blog posts about this problem.
The actual data store and the cache should be synchronized using the third approach you've already described in your question. As you add data to your definitive store (i.e. your SQL database), you need to enqueue this data to some service bus or message queue, and let some asynchronous service do the whole synchronization using some kind of background process. You don't want to get into these cases (when not using a service bus and an asynchronous service): making your requests or processes slower because the user needs to wait until the data is stored in both your database and your cache; running the risk of a failure during the caching process and not being able to have a retry policy (which is usually a built-in feature in a service bus or some message queues) - also, such a failure can end up in partial or complete cache corruption and you won't be able to automatically and easily schedule some task to fix the situation. About using Redis key expiration: it's a good idea. Since Redis can expire keys using its built-in mechanism, you shouldn't have to implement key expiration in the background process itself. If a key exists, it's because it's still valid. BTW, you won't always be in this case (if a key isn't expired it means that it shouldn't be overwritten). It might depend on your actual domain.
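As a minimal sketch of that queue-based approach, here is one way to wire it up with a plain Redis list as the queue (a real service bus would add retries and dead-lettering; key names and the TTL are illustrative):

    import json
    import redis

    r = redis.Redis(decode_responses=True)

    def on_database_write(user_id, row):
        # After committing to the database, enqueue the change instead of
        # updating the cache inline in the request.
        r.lpush("cache:sync:queue", json.dumps({"id": user_id, "row": row}))

    def cache_sync_worker():
        while True:
            _, payload = r.brpop("cache:sync:queue")  # blocks until a job arrives
            job = json.loads(payload)
            r.setex(f"user:{job['id']}", 3600, json.dumps(job["row"]))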
Redis
36,302,972
28
In Redis, keys user* will print all keys starting with user. For example: keys user* 1) "user2" 2) "user1" Now, I want all keys that don't start with user to be printed. How could I do that?
IMPORTANT: always use SCAN instead of (the evil) KEYS Redis' pattern matching is somewhat functionally limited (see the implementation of stringmatchlen in util.c) and does not provide that which you seek ATM. That said, consider the following possible routes: Extend stringmatchlen to match your requirements, possibly submitting it as a PR. Consider what you're trying to do - fetching a subset of keys is always going to be inefficient unless you index them, consider tracking the names of all non-user keys (i.e.g. in a Redis Set) instead. If you are really insistent on scanning the entire keyspace and match against negative patterns, one way to accomplish that is with a little bit of Lua magic. Consider the following dataset and script: 127.0.0.1:6379> dbsize (integer) 0 127.0.0.1:6379> set user:1 1 OK 127.0.0.1:6379> set use:the:force luke OK 127.0.0.1:6379> set non:user a OK Lua (save this as scanregex.lua): local re = ARGV[1] local nt = ARGV[2] local cur = 0 local rep = {} local tmp if not re then re = ".*" end repeat tmp = redis.call("SCAN", cur, "MATCH", "*") cur = tonumber(tmp[1]) if tmp[2] then for k, v in pairs(tmp[2]) do local fi = v:find(re) if (fi and not nt) or (not fi and nt) then rep[#rep+1] = v end end end until cur == 0 return rep Output - first time regular matching, 2nd time the complement: foo@bar:~$ redis-cli --eval scanregex.lua , "^user" 1) "user:1" foo@bar:~$ redis-cli --eval scanregex.lua , "^user" 1 1) "use:the:force" 2) "non:user"
Redis
29,942,541
28
What is a zset in a redis database? I have a redis database with some data. In order to get the values: KEYS *apple* 1) "compleet-index:products:apple" 2) "compleet-index:brands:apple" Afterwards, to get the key: GET compleet-index:productos:apple and I got the response (error) WRONGTYPE Operation against a key holding the wrong kind of value I get the type TYPE compleet-index:productos:iphone zset When I run DUMP compleet-index:productos:iphone I obtain hex codes.
Short answer: use ZRANGE compleet-index:products:apple 0 -1 WITHSCORES ZSET is a short name for a Redis Sorted Set, a Redis data type documented here. A sorted set key holds multiple members, each associated with a floating-point score.
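Equivalently from a client library, e.g. a redis-py one-liner (key name taken from the question):

    import redis

    r = redis.Redis(decode_responses=True)
    # Returns a list of (member, score) pairs, lowest score first.
    print(r.zrange("compleet-index:products:apple", 0, -1, withscores=True))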
Redis
29,800,178
28
I need to install redis in the Amazon cloud. I need it as part of my npm module kue (deployment). Can anyone link me to a step-by-step tutorial or explain how to do it, considering that my Linux and administration skills range from not good to bad.
If you enable the Extra Packages for Enterprise Linux (EPEL) repository that's present on Amazon Linux, you can install with yum. sudo yum-config-manager --enable epel sudo yum install redis # Start redis server sudo redis-server /etc/redis.conf
Redis
27,690,142
28
I have this code to add object and index field in Stackexchange.Redis. All methods in transaction freeze thread. Why ? var transaction = Database.CreateTransaction(); //this line freeze thread. WHY ? await transaction.StringSetAsync(KeyProvider.GetForID(obj.ID), PreSaveObject(obj)); await transaction.HashSetAsync(emailKey, new[] { new HashEntry(obj.Email, Convert.ToString(obj.ID)) }); return await transaction.ExecuteAsync();
Commands executed inside a transaction do not return results until after you execute the transaction. This is simply a feature of how transactions work in Redis. At the moment you are awaiting something that hasn't even been sent yet (transactions are buffered locally until executed) - but even if it had been sent: results simply aren't available until the transaction completes. If you want the result, you should store (not await) the task, and await it after the execute: var fooTask = tran.SomeCommandAsync(...); if(await tran.ExecuteAsync()) { var foo = await fooTask; } Note that this is cheaper than it looks: when the transaction executes, the nested tasks get their results at the same time - and await handles that scenario efficiently.
Redis
25,976,231
28
I was able to do this in ServiceStack.redis by using, IRedisTypedClient<ObjectName> myObj = redisClient.As<ObjectName>(); But I couldn't find any examples to do this in StackExchange.Redis. Do I have to Serialize to JSON and then store them? Thanx in advance.
At the current time, SE.Redis does not attempt to offer serialisation - there are simply too many different ways of doing that. I'm rather of the opinion that the library should do one thing, not 7. It should be possible to add any hybrid serialisation etc concerns simply by extension methods or other plumbing/wrapping code, choosing any serialisation strategy you choose, and any library you choose.
Redis
25,536,312
28
I have a Flask app that takes parameters from a web form, queries a DB with SQL Alchemy and returns Jinja-generated HTML showing a table with the results. I want to cache the calls to the DB. I looked into Redis (Using redis as an LRU cache for postgres), which led me to http://pythonhosted.org/Flask-Cache/. Now I am trying to use Redis + Flask-Cache to cache the calls to the DB. Based on the Flask-Cache docs, it seems like I need to set up a custom Redis cache. class RedisCache(BaseCache): def __init__(self, servers, default_timeout=500): pass def redis(app, config, args, kwargs): args.append(app.config['REDIS_SERVERS']) return RedisCache(*args, **kwargs) From there I would need to something like: # not sure what to put for args or kwargs cache = redis(app, config={'CACHE_TYPE': 'redis'}) app = Flask(__name__) cache.init_app(app) I have two questions: What do I put for args and kwargs? What do these mean? How do I set up a Redis cache with Flask-Cache? Once the cache is set up, it seems like I would want to somehow "memoize" the calls the DB so that if the method gets the same query it has the output cached. How do I do this? My best guess would be to wrap the call the SQL Alchemy in a method that could then be given memoize decorator? That way if two identical queries were passed to the method, Flask-Cache would recognize this and return to the appropriate response. I'm guessing that it would look like this: @cache.memoize(timeout=50) def queryDB(q): return q.all() This seems like a fairly common use of Redis + Flask + Flask-Cache + SQL Alchemy, but I am unable to find a complete example to follow. If someone could post one, that would be super helpful -- but for me and for others down the line.
You don't need to create custom RedisCache class. The docs is just teaching how you would create new backends that are not available in flask-cache. But RedisCache is already available in werkzeug >= 0.7, which you might have already installed because it is one of the core dependencies of flask. This is how I could run the flask-cache with redis backend: import time from flask import Flask from flask_cache import Cache app = Flask(__name__) cache = Cache(app, config={'CACHE_TYPE': 'redis'}) @cache.memoize(timeout=60) def query_db(): time.sleep(5) return "Results from DB" @app.route('/') def index(): return query_db() app.run(debug=True) The reason you're getting "ImportError: redis is not a valid FlaskCache backend" is probably because you don't have redis (python library) installed which you can simply install by: pip install redis.
Redis
24,589,123
28
I am a novice in using Redis DB. After reading some of the documentation and looking into some of the examples on the Internet and also scanning stackoverflow.com, I can see that Redis is very fast and scales well, but this comes at the price that we have to think out at design time how our data will be accessed and what operations it will have to undergo. This I can understand, but I am a little confused about searching in the data, which was so easy, however slow, with plain old SQL. I could do it in one way with the KEYS command, but it is an O(N) operation and not O(log(N)). So I would lose one of the advantages of Redis. What do more experienced colleagues say here? Let's take an example use case: we need to store personal data for approx. 100,000 people and those data need to be searched by name or phone number. For this I would use the following structures: 1. SET for storing all persons' ids {id1, id2, ...} 2. HASH for each person to store personal data and name it like map:<id> e.g. map:id1{name:<name>, phone:<number>, etc...} Solution 1: 1. HASH for storing all persons' ids but the key should be the phone number 2. Then with the command KEYS 123* all ids of people whose phone number starts with 123 could be retrieved. On the basis of the ids, the other personal data could then be retrieved. 3. And so forth: for each field to be searched, a separate HASH should be created. But a major drawback of this solution is that the attribute values must also be unique, so that the assignment of the phone number and the ids in the HASH would be unambiguous. On the other hand, O(N) runtime is not ideal. Moreover, this uses more space than would be necessary and the KEYS command degrades access performance. (http://redis.io/commands/keys) How should it be done the right way? I could also imagine that the ids would go in a ZSET and the data to be searched could be the scores, but this only makes it possible to work with ranges, not with searches. Thank you also in advance, regards, Tamas Answer summary: Actually, both responses state that Redis was not designed to search in the values of the keys. If this use case is necessary, then workarounds need to be implemented, as shown in my original solution or in the solution below. The solution below by Eli has much better performance than my original one because the access to the keys can be considered constant; only the list of ids needs to be iterated through, which for the access gives effectively constant runtime. This data model also allows one person to have the same phone number as someone else, and likewise for names etc., so a 1-n relationship is also possible (I would say, in old ERD terminology). The drawback of this solution is that it consumes much more space than mine, and phone numbers of which only the starting digits are known cannot be searched. Thanks for both responses.
Redis is for use cases where you need to access and update data at very high frequency and where you benefit from use of data structures (hashes, sets, lists, strings, or sorted sets). It's made to fill very specific use cases. If you have a general use case like very flexible searching, you'd be much better served by something built for this purpose like Elasticsearch or SOLR. That said, if you must do this in Redis, here's how I'd do it (assuming users can share names and phone numbers): name:some_name -> set([id1, id2, etc...]) name:some_other_name -> set([id3, id4, etc...]) phone:some_phone -> set([id1, id3, etc...]) phone:some_other_phone -> set([id2, id4, etc...]) id1 -> {'name' : 'bob', 'phone' : '123-456-7891', etc...} id2 -> {'name' : 'alice', 'phone' : '987-456-7891', etc...} In this case, we're making a new key for every name (prefixed with "name:") and every phone number (prefixed "phone:"). Each key points to a set of ids that have all the info you want for a user. When you search for a phone, for example, you'll do: SMEMBERS 'phone:123-456-7891' and then loop through the results and return whatever info on each (name in our example) in your language of choice (you can do this whole thing in server-side Lua on the Redis box to go even faster and avoid network back-and-forth, if you want): for id in results: HGET id 'name' Your cost here will be O(m) where m is the number of users with the given phone number, and this will be a very fast operation on Redis because of how optimized it is for speed. It'll be overkill in your case because you probably don't need things to go so fast, and you'd prefer having flexible search, but this is how you would do it.
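A compact redis-py version of that lookup, using a pipeline so the per-id HGETs go out in a single round trip (key names follow the answer):

    import redis

    r = redis.Redis(decode_responses=True)

    def find_by_phone(phone):
        ids = list(r.smembers(f"phone:{phone}"))
        pipe = r.pipeline()
        for user_id in ids:
            pipe.hget(user_id, "name")
        # One network round trip for all ids instead of one per id.
        return dict(zip(ids, pipe.execute()))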
Redis
17,193,176
28
I've been using PostgreSQL for the longest time. All of my data lives inside Postgres. I've recently looked into redis and it has a lot of powerful features that would otherwise take a couple of lines in Django (python) to do. Redis data is persistent as long as the machine it's running on doesn't go down, and you can configure it to write out the data it's storing to disk every 1000 keys or every 5 minutes or so, depending on your choice. Redis would make a great cache and it would certainly replace a lot of functions I have written in python (upvoting a user's post, viewing their friends list, etc...). But my concern is, all of this data would somehow need to be translated over to postgres. I don't trust storing this data in redis. I see redis as a temporary storage solution for quick retrieval of information. It's extremely fast and this far outweighs doing repetitive queries against postgres. I'm assuming the only way I could technically write the redis data to the database is to save() whatever I get from the 'get' query from redis to the postgres database through Django. That's the only solution I could think of. Do you know of any other solutions to this problem?
Redis is increasingly used as a caching layer, much like a more sophisticated memcached, and is very useful in this role. You usually use Redis as a write-through cache for data you want to be durable, and write-back for data you might want to accumulate then batch write (where you can afford to lose recent data). PostgreSQL's LISTEN and NOTIFY system is very useful for doing selective cache invalidation, letting you purge records from Redis when they're updated in PostgreSQL. For combining it with PostgreSQL, you will find the Redis foreign data wrapper provider that Andrew Dunstain and Dave Page are working on very interesting. I'm not aware of any tool that makes Redis into a transparent write-back cache for PostgreSQL. Their data models are probably too different for this to work well. Usually you write changes to PostgreSQL and invalidate their Redis cache entries using listen/notify to a cache manager worker, or you queue changes in Redis then have your app read them out and write them into Pg in chunks.
Redis
17,033,031
28
Now that Stack Overflow uses redis, do they handle cache invalidation the same way? i.e. a list of identities hashed to a query string + name (I guess the name is some kind of purpose or object type name). Perhaps they then retrieve individual items that are missing from the cache directly by id (which bypasses a bunch of database indexes and uses the more efficient clustered index instead perhaps). That'd be smart (the rehydration that Jeff mentions?). Right now, I'm struggling to find a way to pivot all of this in a succinct way. Are there any examples of this kind of thing that I could use to help clarify my thinking prior to doing a first cut myself? Also, I'm wondering where the cutoff is between using a .net cache (System.Runtime.Caching or System.Web.Caching) and going out and using redis. Or is Redis just hands down faster? Here's the original SO question from 2009: https://meta.stackexchange.com/questions/6435/how-does-stackoverflow-handle-cache-invalidation A couple of other links: https://meta.stackexchange.com/questions/69164/does-stackoverflow-use-caching-and-if-so-how/69172#69172 https://meta.stackexchange.com/questions/110320/stack-overflow-db-performance-and-redis-cache
I honestly can't decide if this is a SO question or a MSO question, but: Going off to another system is never faster than querying local memory (as long as it is keyed); simple answer: we use both! So we use: local memory else check redis, and update local memory else fetch from source, and update redis and local memory This then, as you say, causes an issue of cache invalidation - although actually that isn't critical in most places. But for this - redis events (pub/sub) allow an easy way to broadcast keys that are changing to all nodes, so they can drop their local copy - meaning: next time it is needed we'll pick up the new copy from redis. Hence we broadcast the key-names that are changing against a single event channel name. Tools: redis on ubuntu server; BookSleeve as a redis wrapper; protobuf-net and GZipStream (enabled / disabled automatically depending on size) for packaging data. So: the redis pub/sub events are used to invalidate the cache for a given key from one node (the one that knows the state has changed) immediately (pretty much) to all nodes. Regarding distinct processes (from comments, "do you use any kind of shared memory model for multiple distinct processes feeding off the same data?"): no, we don't do that. Each web-tier box is only really hosting one process (of any given tier), with multi-tenancy within that, so inside the same process we might have 70 sites. For legacy reasons (i.e. "it works and doesn't need fixing") we primarily use the http cache with the site-identity as part of the key. For the few massively data-intensive parts of the system, we have mechanisms to persist to disk so that the in-memory model can be passed between successive app-domains as the web naturally recycles (or is re-deployed), but that is unrelated to redis. Here's a related example that shows the broad flavour only of how this might work - spin up a number of instances of the following, and then type some key names in: static class Program { static void Main() { const string channelInvalidate = "cache/invalidate"; using(var pub = new RedisConnection("127.0.0.1")) using(var sub = new RedisSubscriberConnection("127.0.0.1")) { pub.Open(); sub.Open(); sub.Subscribe(channelInvalidate, (channel, data) => { string key = Encoding.UTF8.GetString(data); Console.WriteLine("Invalidated {0}", key); }); Console.WriteLine( "Enter a key to invalidate, or an empty line to exit"); string line; do { line = Console.ReadLine(); if(!string.IsNullOrEmpty(line)) { pub.Publish(channelInvalidate, line); } } while (!string.IsNullOrEmpty(line)); } } } What you should see is that when you type a key-name, it is shown immediately in all the running instances, which would then dump their local copy of that key. Obviously in real use the two connections would need to be put somewhere and kept open, so would not be in using statements. We use an almost-a-singleton for this.
Redis
9,596,877
28
I am trying to scale a simple socket.io app across multiple processes and/or servers. Socket.io supports RedisStore but I'm confused as to how to use it. I'm looking at this example, http://www.ranu.com.ar/post/50418940422/redisstore-and-rooms-with-socket-io but I don't understand how using RedisStore in that code would be any different from using MemoryStore. Can someone explain it to me? Also what is difference between configuring socket.io to use redisstore vs. creating your own redis client and set/get your own data? I'm new to node.js, socket.io and redis so please point out if I missed something obvious.
but I don't understand how using RedisStore in that code would be any different from using MemoryStore. Can someone explain it to me? The difference is that when using the default MemoryStore, any message that you emit in a worker will only be sent to clients connected to the same worker, since there is no IPC between the workers. Using the RedisStore, your message will be published to a redis server, which all your workers are subscribing to. Thus, the message will be picked up and broadcast by all workers, and all connected clients. Also what is difference between configuring socket.io to use redisstore vs. creating your own redis client and set/get your own data? I'm not intimately familiar with RedisStore, and so I'm not sure about all differences. But doing it yourself would be a perfectly valid practice. In that case, you could publish all messages to a redis server, and listen to those in your socket handler. It would probably be more work for you, but you would also have more control over how you want to set it up. I've done something similar myself: // Publishing a message somewhere var pub = redis.createClient(); pub.publish("messages", JSON.stringify({type: "foo", content: "bar"})); // Socket handler io.sockets.on("connection", function(socket) { var sub = redis.createClient(); sub.subscribe("messages"); sub.on("message", function(channel, message) { socket.send(message); }); socket.on("disconnect", function() { sub.unsubscribe("messages"); sub.quit(); }); }); This also means you have to take care of more advanced message routing yourself, for instance by publishing/subscribing to different channels. With RedisStore, you get that functionality for free by using socket.io channels (io.sockets.of("channel").emit(...)). A potentially big drawback with this is that socket.io sessions are not shared between workers. This will probably mean problems if you use any of the long-polling transports.
Redis
9,267,292
28
I'm currently running multiple redis instances on one box. Each have their own config, init.d, and listen on different ports. My application(s) have no problem connecting via the redis clients, but I'd like to be able to connect to each one using redis-cli. I couldn't find any information on $:redis-cli [options] in either the redis-doc or on redis.io. Any ideas?
You can specify the server host and port using -h and -p parameters. E.g.: redis-cli -h 127.0.0.1 -p 6379
Redis
6,206,971
28
Why does Redis, a datastore, have Pub/Sub features? My first thought is that it's the wrong layer to implement such a thing. But maybe I need to think outside the box.
Redis is defined as a data structure server. Redis provides multiple kinds of functionality, such as memcache-style caching, queues, pub/sub, etc. This is very useful for a cloud app/web stack where three components, RabbitMQ (queuing) + XMPP (pubsub) + Memcache, can currently be replaced with Redis. Queuing is not as feature-rich as RabbitMQ, though.
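To make the pub/sub part concrete, here is a minimal redis-cli session (the channel name "news" is just an illustrative choice): subscribe in one terminal and publish from another.
# terminal 1: block and listen on a channel
redis-cli SUBSCRIBE news
# terminal 2: push a message to every current subscriber
redis-cli PUBLISH news "build 42 finished"
# terminal 1 then prints the channel name and the message payload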
Redis
4,938,520
28
Problem On my Ruby on Rails app, I keep getting the error below for the Heroku Redis Premium 0 add-on: OpenSSL::SSL::SSLError: SSL_connect returned=1 errno=0 state=error: certificate verify failed (self signed certificate in certificate chain) Heroku Redis documentation mentions that I need to enable TLS in my Redis client's configuration in order to connect to a Redis 6 database. To achieve this, I have read the SSL/TLS Support documentation on redis-rb. My understanding from it is: I need to assign ca_file, cert and key for Redis.new#ssl_params. The question is how to set these for Redis or through Sidekiq on Heroku? Updates Update 3: Heroku support provided an answer which solved the problem. Update 2: Created a Heroku support ticket and am waiting for a response. Update 1: Asked on Sidekiq's Github issues and was advised to write to Heroku support. I will update this question when I get an answer. Related Info I have verified the app does work when the add-on is either one of the below: hobby-dev for Redis 6 premium 0 for Redis 5 Versions: Ruby – 3.0.0p0 Ruby on Rails – 6.1.1 Redis – 6.0 redis-rb – 4.2.5 Sidekiq – 6.2.1 Heroku Stack – 20 Some links that helped me to narrow down the issue: https://bibwild.wordpress.com/2020/11/24/are-you-talking-to-heroku-redis-in-cleartext-or-ssl/ https://mislav.net/2013/07/ruby-openssl/
Solution Use OpenSSL::SSL::VERIFY_NONE for your Redis client. Sidekiq # config/initializers/sidekiq.rb Sidekiq.configure_server do |config| config.redis = { ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE } } end Sidekiq.configure_client do |config| config.redis = { ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE } } end Redis Redis.new(url: 'url', driver: :ruby, ssl_params: { verify_mode: OpenSSL::SSL::VERIFY_NONE }) Reason Redis 6 requires TLS to connect. However, Heroku support explained that they manage requests from the router level to the application level involving Self Signed Certs. Turns out, Heroku terminates SSL at the router level and requests are forwarded from there to the application via HTTP while everything is behind Heroku's Firewall and security measures. Sources https://ogirginc.github.io/en/heroku-redis-ssl-error https://devcenter.heroku.com/articles/securing-heroku-redis#connecting-directly-to-stunnel
Redis
65,834,575
27
I am trying to insert multiple key/values at once on Redis (some values are sets, some are hashes) and I get this error: ERR CROSSSLOT Keys in request don't hash to the same slot. I'm not doing this from redis-cli but from some Go code that needs to write multiple key/values to a redis cluster. I see other places in the code where multiple key values are done this way and I don't understand why mine don't work. What are the hash requirements to not have this error? Thanks
In a cluster topology, the keyspace is divided into hash slots. Different nodes will hold a subset of hash slots. Multiple keys operations, transactions, or Lua scripts involving multiple keys are allowed only if all the keys involved are in hash slots belonging to the same node. Redis Cluster implements all the single key commands available in the non-distributed version of Redis. Commands performing complex multi-key operations like Set type unions or intersections are implemented as well as long as the keys all belong to the same node. You can force the keys to belong to the same node by using Hash Tags
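A short illustration of hash tags: only the part between { and } is hashed, so keys that share the same tag land in the same slot and can be used together in one multi-key command (the key names below are illustrative):
# both keys hash on "user:42", so they live in the same slot
redis-cli -c SET '{user:42}:name'  "Ada"
redis-cli -c SET '{user:42}:email' "ada@example.com"
# a multi-key command now works with no CROSSSLOT error
redis-cli -c MGET '{user:42}:name' '{user:42}:email'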
Redis
38,042,629
27
I tested all the transaction commands (MULTI, EXEC, WATCH, DISCARD) in redis-cli. But when I tried them with redis-py the following error occurred: AttributeError: 'Redis' object has no attribute 'multi' I have tried the following code snippet: import redis,time r = redis.Redis() try: r.set("transError",10) r.watch("transError") var = r.get("transError") var = int(var) + 1 print "Run other client to simulate an error without transaction" time.sleep(4) r.multi() r.set("transError",var) r.execute() print "Value in first client",r.get("transError") except redis.WatchError: print "Value Altered" I have seen code examples that use multi() and execute(), but they are not working for me. Any help?
In redis-py MULTI and EXEC can only be used through a Pipeline object. Try the following: r = redis.Redis() p = r.pipeline() p.set("transError", var) p.execute() With the monitor command through the redis-cli you can see MULTI, SET, EXEC sent when p.execute() is called. To omit the MULTI/EXEC pair, use r.pipeline(transaction=False).
Redis
31,769,163
27
I am trying to insert a large(-ish) number of elements in the shortest time possible and I tried these two alternatives: 1) Pipelining: List<Task> addTasks = new List<Task>(); for (int i = 0; i < table.Rows.Count; i++) { DataRow row = table.Rows[i]; Task<bool> addAsync = redisDB.SetAddAsync(string.Format(keyFormat, row.Field<int>("Id")), row.Field<int>("Value")); addTasks.Add(addAsync); } Task[] tasks = addTasks.ToArray(); Task.WaitAll(tasks); 2) Batching: List<Task> addTasks = new List<Task>(); IBatch batch = redisDB.CreateBatch(); for (int i = 0; i < table.Rows.Count; i++) { DataRow row = table.Rows[i]; Task<bool> addAsync = batch.SetAddAsync(string.Format(keyFormat, row.Field<int>("Id")), row.Field<int>("Value")); addTasks.Add(addAsync); } batch.Execute(); Task[] tasks = addTasks.ToArray(); Task.WaitAll(tasks); I am not noticing any significant time difference (actually I expected the batch method to be faster): for approx 250K inserts I get approx 7 sec for pipelining vs approx 8 sec for batching. Reading from the documentation on pipelining, "Using pipelining allows us to get both requests onto the network immediately, eliminating most of the latency. Additionally, it also helps reduce packet fragmentation: 20 requests sent individually (waiting for each response) will require at least 20 packets, but 20 requests sent in a pipeline could fit into much fewer packets (perhaps even just one)." To me, this sounds a lot like batching behaviour. I wonder if behind the scenes there's any big difference between the two, because at a simple check with procmon I see almost the same number of TCP Sends on both versions.
Behind the scenes, SE.Redis does quite a bit of work to try to avoid packet fragmentation, so it isn't surprising that it is quite similar in your case. The main difference between batching and flat pipelining are: a batch will never be interleaved with competing operations on the same multiplexer (although it may be interleaved at the server; to avoid that you need to use a multi/exec transaction or a Lua script) a batch will be always avoid the chance of undersized packets, because it knows about all the data ahead of time but at the same time, the entire batch must be completed before anything can be sent, so this requires more in-memory buffering and may artificially introduce latency In most cases, you will do better by avoiding batching, since SE.Redis achieves most of what it does automatically when simply adding work. As a final note; if you want to avoid local overhead, one final approach might be: redisDB.SetAdd(string.Format(keyFormat, row.Field<int>("Id")), row.Field<int>("Value"), flags: CommandFlags.FireAndForget); This sends everything down the wire, neither waiting for responses nor allocating incomplete Tasks to represent future values. You might want to do something like a Ping at the end without fire-and-forget, to check the server is still talking to you. Note that using fire-and-forget does mean that you won't notice any server errors that get reported.
Redis
27,796,054
27
I try to run Celery example on Windows with redis backend. The code looks like: from celery import Celery app = Celery('risktools.distributed.celery_tasks', backend='redis://localhost', broker='redis://localhost') @app.task(ignore_result=False) def add(x, y): return x + y @app.task(ignore_result=False) def add_2(x, y): return x + y I start the tasks using iPython console: >>> result_1 = add.delay(1, 2) >>> result_1.state 'PENDING' >>> result_2 = add_2.delay(2, 3) >>> result_2.state 'PENDING' It seems that both tasks were not executed, but Celery worker output shows that they succeeded: [2014-12-08 15:00:09,262: INFO/MainProcess] Received task: risktools.distributed.celery_tasks.add[01dedca1-2db2-48df-a4d6-2f06fe285e45] [2014-12-08 15:00:09,267: INFO/MainProcess] Task celery_tasks.add[01dedca1-2db2-48df-a4d6-2f06fe28 5e45] succeeded in 0.0019998550415s: 3 [2014-12-08 15:00:24,219: INFO/MainProcess] Received task: risktools.distributed.celery_tasks.add[cb5505ce-cf93-4f5e-aebb-9b2d98a11320] [2014-12-08 15:00:24,230: INFO/MainProcess] Task celery_tasks.add[cb5505ce-cf93-4f5e-aebb-9b2d98a1 1320] succeeded in 0.010999917984s: 5 I've tried to troubleshoot this issue according to Celery documentation, but none of the advices were useful. What am I doing wrong and how can I receive results from a Celery task? UPD: I've added a task without ignore_result parameter, but nothing has changed @app.task def add_3(x, y): return x + y >>>r = add_3.delay(2, 2) >>>r.state 'PENDING'
According to the question "Celery 'Getting Started' not able to retrieve results; always pending" and https://github.com/celery/celery/issues/2146, this is a Windows issue. Starting the Celery worker with the -P threads or --pool=solo option solves the issue.
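For example, with the app module from the question (adjust the module path to your own project layout), the worker could be started like this:
celery -A risktools.distributed.celery_tasks worker --pool=solo --loglevel=info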
Redis
27,357,732
27
I want to use Redis as cache storage for multiple applications on the same physical machine. I know at least two ways of doing it: by running several Redis instances on different ports, or by using different Redis databases for different applications. But I don't know which one is better for me. What are the advantages and disadvantages of these methods? Is there any better way of doing it?
Generally, you should prefer the 1st approach, i.e. dedicated Redis servers. Shared databases are managed by the same Redis process and can therefore block each other. Additionally, shared databases share the same configuration (although in your case this may not be an issue since all databases are intended for caching). Lastly, shared databases are not supported by Redis Cluster. For more information refer to this blog post: https://redislabs.com/blog/benchmark-shared-vs-dedicated-redis-instances
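A minimal sketch of the dedicated-instance approach on one machine (the ports and file names are illustrative): give each application its own config file and port, then point each application at its own port.
# /etc/redis/app1.conf contains e.g. "port 6380" plus its own dir and logfile
# /etc/redis/app2.conf contains e.g. "port 6381"
redis-server /etc/redis/app1.conf
redis-server /etc/redis/app2.conf
# quick smoke test
redis-cli -p 6380 ping
redis-cli -p 6381 ping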
Redis
27,217,502
27
How do I use the Flask-Cache @cache.cached() decorator with Flask-Restful? For example, I have a class Foo that inherits from Resource, and Foo has get, post, put, and delete methods. How can I invalidate cached results after a POST? @api.resource('/whatever') class Foo(Resource): @cache.cached(timeout=10) def get(self): return expensive_db_operation() def post(self): update_db_here() ## How do I invalidate the value cached in get()? return something_useful()
As Flask-Cache implementation doesn't give you access to the underlying cache object, you'll have to explicitly instantiate a Redis client and use it's keys method (list all cache keys). The cache_key method is used to override the default key generation in your cache.cached decorator. The clear_cache method will clear only the portion of the cache corresponding to the current resource. This is a solution that was tested only for Redis and the implementation will probably differ a little when using a different cache engine. from app import cache # The Flask-Cache object from config import CACHE_REDIS_HOST, CACHE_REDIS_PORT # The Flask-Cache config from redis import Redis from flask import request import urllib redis_client = Redis(CACHE_REDIS_HOST, CACHE_REDIS_PORT) def cache_key(): args = request.args key = request.path + '?' + urllib.urlencode([ (k, v) for k in sorted(args) for v in sorted(args.getlist(k)) ]) return key @api.resource('/whatever') class Foo(Resource): @cache.cached(timeout=10, key_prefix=cache_key) def get(self): return expensive_db_operation() def post(self): update_db_here() self.clear_cache() return something_useful() def clear_cache(self): # Note: we have to use the Redis client to delete key by prefix, # so we can't use the 'cache' Flask extension for this one. key_prefix = request.path keys = [key for key in redis_client.keys() if key.startswith(key_prefix)] nkeys = len(keys) for key in keys: redis_client.delete(key) if nkeys > 0: log.info("Cleared %s cache keys" % nkeys) log.info(keys)
Redis
24,816,799
27
I'm a newbie in some of the AWS services. I was following this documentation link: http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/GettingStarted.ConnectToCacheNode.Redis.html and I have already installed redis-cli with brew on my computer (I'm on a Mac), but I'm still getting the same error when trying to connect to the node: $ redis-cli -h mynode.abcdef.0001.usw2.cache.amazonaws.com -p 6379 Error: Could not connect to Redis at mynode.abcdef.0001.usw2.cache.amazonaws.com:6379: Operation timed out Yes, I have configured the VPC Security Group to allow all inbound traffic to my Node and the problem persists. Security Group Conf: Node Description: Any ideas?
You can't connect to ElastiCache from outside of AWS. It's just the way it is set up. It would be nice to be able to do so for debugging and development, but for production it doesn't really make sense to introduce that much latency into a system whose main purpose is to give as-fast-as-possible results. From the AWS FAQ: Please note that IP-range based access control is currently not enabled for Cache Clusters. All clients to a Cache Cluster must be within the EC2 network, and authorized via security groups as described above. http://aws.amazon.com/elasticache/faqs/
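A common debugging workaround, consistent with the above, is to run the check from inside the VPC, e.g. from an EC2 instance that the cache cluster's security group allows (the endpoint below is the one from the question):
# on an EC2 instance in the same VPC / allowed security group
redis-cli -h mynode.abcdef.0001.usw2.cache.amazonaws.com -p 6379 ping
# expected reply: PONG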
Redis
22,723,812
27
I am currently using Redis for my app, and its features are really excellent for my application (lists, sets, sorted sets, etc.). My application relies heavily on sorted sets, lists, and sets, and their related functions (push to a list, get a list, union of sets, etc.). The only problem I am facing right now is that my data is large, most of my data does not need to be in memory, and I want to store it on disk. I need an on-disk database with Redis data structures. I read about Cassandra but I am not sure if it supports sorted sets, sets, and lists; or at least if it does, I could not find methods to manipulate them the way Redis does. Thanks.
There are numerous on-disk databases with Redis-like datastructures or even trying to be drop-in protocol-compatible replacements for Redis. There are excellent recommendations in "Is there something like Redis DB, but not limited with RAM size?" - pity the community considers such questions to be off-topic. In particular, SSDB is an actively-maintained Redis-like on-disk database (but not directly compatible), and Ardb is an actively-maintained drop-in replacement for Redis that stores the data on disk. Disclaimer: I have not used either of them (yet).
Redis
15,362,076
27
Basic question: Using Node.js I would like to get all the keys in my Redis DB. My Redis DB looks like this when I call keys *: aXF x9U lOk So each record I have has a unique key, generated as a random string. Now I would like to call something like foreach(key in Redis) and get all keys in Redis. Would it be possible to accomplish a "SELECT * FROM Redis"-like query with Node.js & Redis?
Sure, you'll need to install the redis module for nodejs which can be found at https://github.com/redis/node-redis. npm install redis Then you would do: var redis = require('redis'), client = redis.createClient(); client.keys('*', function (err, keys) { if (err) return console.log(err); for(var i = 0, len = keys.length; i < len; i++) { console.log(keys[i]); } }); Generally speaking you won't want to always return all of the keys (performance will be bad for larger data sets), but this will work if you are just testing things out. There is even a nice warning in the Redis documentation: Warning: consider KEYS as a command that should only be used in production environments with extreme care. It may ruin performance when it is executed against large databases. This command is intended for debugging and special operations, such as changing your keyspace layout. Don't use KEYS in your regular application code. If you're looking for a way to find keys in a subset of your keyspace, consider using sets.
Redis
12,793,938
27
I am using Predis; it's subscribed to a channel and listening. It throws the following error (below) and dies after exactly 60 seconds. It's surely not my web server's error or its timeout. There is a similar issue being discussed here. I could not get much out of it. I tried setting connection_timeout in the Predis conf file to 0, but that doesn't help much. Also, if I keep using the worker (sending it data which it processes), it doesn't give any error. So it's likely a timeout somewhere, and specifically on the connection. Here is my code snippet, which is likely producing the error, because if data is given to the worker it runs this code and goes forward, which produces no error after that. $pubsub = $redis->pubSub(); $pubsub->subscribe($channel1); foreach ($pubsub as $message) { //doing stuff here and unsubscribing from channel } Trace PHP Fatal error: Uncaught exception 'Predis\Network\ConnectionException' with message 'Error while reading line from the server' in Predis/Network/ConnectionBase.php:159 Stack trace: #0 library/vendor/predis/lib/Predis/Network/StreamConnection.php(195): Predis\Network\ConnectionBase->onConnectionError('Error while rea...') #1 library/vendor/predis/lib/Predis/PubSub/PubSubContext.php(259): Predis\Network\StreamConnection->read() #2 library/vendor/predis/lib/Predis/PubSub/PubSubContext.php(206): Predis\PubSub\PubSubContext->getValue() #3 pdf/file.php(16): Predis\PubSub\PubSubContext->current() #4 {main} thrown in Predis/Network/ConnectionBase.php on line 159 I checked the redis.conf timeout too; it's also disabled.
Just set the read_write_timeout connection parameter to 0 or -1 to fix this. e.g. $redis = new Predis\Client('tcp://10.0.0.1:6379'."?read_write_timeout=0"); Setting connection parameters is documented in the README. The author of Redis noted the relevance of the read_write_timeout parameter to this error in an issue on GitHub, in which he notes that: If you are using Predis in a daemon-like script you should set read_write_timeout to -1 if you want to completely disable the timeout (this value works with older and newer versions of Predis). Also, remember that you must disable the default timeout of Redis by setting timeout = 0 in redis.conf or Redis will drop the connection of idle clients after 300 seconds of inactivity.
Redis
11,776,029
27
Hi all and thanks in advance. I am new to the NoSQL game but my current place of employment has tasked me with set comparisons of some big data. Our system has customer tag sets and targeted tag sets. A tag is an 8-digit number. A customer tag set may have up to 300 tags but averages 100 tags. A targeted tag set may have up to 300 tags but averages 40 tags. Pre-calculating is not an option as we are shooting for a potential customer base of a billion users. (These tags are hierarchical, so having one tag implies that you also have its parent and ancestor tags. Put that info aside for the moment.) When a customer hits our site, we need to intersect their tag set against one million targeted tag sets as fast as possible. The customer set must contain all elements of the targeted set to match. I have been exploring my options and the set intersection in Redis seems like it would be ideal. However, my trawling through the internet has not revealed how much RAM would be required to hold one million tag sets. I realize the intersection would be lightning fast, but is this a feasible solution with Redis? I realize this is brute force and inefficient. I also wanted to use this question as a means to get suggestions for ways this type of problem has been handled in the past. As stated before, the tags are stored in a tree. I have begun looking at MongoDB as a possible solution as well. Thanks again
This is an interesting problem, and I think Redis can help here. Redis can store sets of integers using an optimized "intset" format. See http://redis.io/topics/memory-optimization for more information. I believe the correct data structure here is a collection of targeted tag sets, plus a reverse index to map tags to their targeted tag sets. To store two targeted tag sets: 0 -> [ 1 2 3 4 5 6 7 8 ] 1 -> [ 6 7 8 9 10 ] I would use: # Targeted tag sets sadd tgt:0 1 2 3 4 5 6 7 8 sadd tgt:1 2 6 7 8 9 10 # Reverse index sadd tag:0 0 sadd tag:1 0 sadd tag:2 0 1 sadd tag:3 0 sadd tag:4 0 sadd tag:5 0 sadd tag:6 0 1 sadd tag:7 0 1 sadd tag:8 0 1 sadd tag:9 1 sadd tag:10 1 This reverse index is quite easy to maintain when targeted tag sets are added/removed from the system. The global memory consumption depends on the number of tags which are common to multiple targeted tag sets. It is quite easy to store pseudo-data in Redis and simulate the memory consumption. I have done it using a simple node.js script. For 1 million targeted tag sets (tags being 8 digits numbers, 40 tags per set), the memory consumption is close to 4 GB when there are very few tags shared by the targeted tag sets (more than 32M entries in the reverse index), and about 500 MB when the tags are shared a lot (only 100K entries in the reverse index). With this data structure, finding the targeted tag sets containing all the tags of a given customer is extremely efficient. 1- Get customer tag set (suppose it is 1 2 3 4) 2- SINTER tag:1 tag:2 tag:3 tag:4 => result is a list of targeted tag sets having all the tags of the customer The intersection operation is efficient because Redis is smart enough to order the sets per cardinality and starts with the set having the lowest cardinality. Now I understand you need to implement the converse operation (i.e. finding the targeted tag sets having all their tags in the customer tag set). The reverse index can still help. Here in an example in ugly pseudo-code: 1- Get customer tag set (suppose it is 1 2 3 4) 2- SUNIONSTORE tmp tag:1 tag:2 tag:3 tag:4 => result is a list of targeted tag sets having at least one tag in common with the customer 3- For t in tmp (iterating on the selected targeted tag sets) n = SCARD tgt:t (cardinality of the targeted tag sets) intersect = SINTER customer tgt:t if n == len(intersect), this targeted tag set matches So you never have to test the customer tag set against 1M targeted tag sets. You can rely on the reverse index to restrict the scope of the search to an acceptable level.
Redis
11,095,331
27
I want to be able to run a callback when any change is made in my Redis collection. The callback would take the key and value as inputs. Is something like this possible? Thanks!
UPDATE (2020): Whoever is reading this: these are ancient answers, disregard them all. What you need is Redis Keyspace Notifications, which have been around for ages. See https://redis.io/topics/notifications
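A minimal sketch of keyspace notifications with redis-cli (the flags and channel pattern come from the Redis documentation; database index 0 is an assumption):
# enable notifications (off by default); KEA = keyspace + keyevent + all event classes
redis-cli CONFIG SET notify-keyspace-events KEA
# terminal 1: listen for every key event in database 0
redis-cli PSUBSCRIBE '__keyevent@0__:*'
# terminal 2: trigger an event
redis-cli SET foo bar
# terminal 1 receives a message on channel __keyevent@0__:set with payload "foo"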
Redis
10,676,951
27
— I built a simple app that pulls in data (50 items) from a Redis DB and throws it up at localhost. I did an ApacheBench (c = 100, n = 50000) and I'm getting a semi-decent 150 requests/sec on a dual-core T2080 @ 1.73GHz (my 6 y.o laptop), but the proc usage is very disappointing as shown: Only one core is used, which is as per design in Node, but I think I can nearly double my requests/sec to ~300, maybe even more, if I can use Node.js clusters. I fiddled around quite a bit but I haven't been able to figure out how to put the code given here for use with my app which is listed below: var express = require( 'express' ), app = express.createServer(), redis = require( 'redis' ).createClient(); app.configure( function() { app.set( 'view options', { layout: false } ); app.set( 'view engine', 'jade' ); app.set( 'views', __dirname + '/views' ); app.use( express.bodyParser() ); } ); function log( what ) { console.log( what ); } app.get( '/', function( req, res ) { redis.lrange( 'items', 0, 50, function( err, items ) { if( err ) { log( err ); } else { res.render( 'index', { items: items } ); } }); }); app.listen( 8080 ); I also want to emphasize that the app is I/O intensive (not CPU-intensive, which would've made something like threads-a-gogo a better choice than clusters). Would love some help in figuring this out.
Actually, your workload is not really I/O bound: it is CPU bound due to the cost of jade-based dynamic page generation. I cannot guess the complexity of your jade template, but even with simple templates, generating HTML pages is expensive. For my tests I used this template: html(lang="en") head title Example body h1 Jade - node template engine #container ul#users each user in items li User:#{user} I added 100 dummy strings to the items key in Redis. On my box, I get 475 req/s with node.js CPU at 100% (which means 50% CPU consumption on this dual core box). Let's replace: res.render( 'index', { items: items } ); by: res.send( '<html lang="en"><head><title>Example</title></head><body><h1>Jade - node template engine</h1><div id="container"><ul id="users"><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li><li>User:NOTHING</li></ul></div></body></html>' ); Now, the result of the benchmark is close to 2700 req/s. So the bottleneck is clearly due to the formatting of the HTML page. Using the cluster package in this situation is a good idea, and it is straightforward. The code can be modified as follows: var cluster = require('cluster') if ( cluster.isMaster ) { for ( var i=0; i<2; ++i ) cluster.fork(); } else { var express = require( 'express' ), app = express.createServer(), redis = require( 'redis' ).createClient(); app.configure( function() { app.set( 'view options', { layout: false } ); app.set( 'view engine', 'jade' ); app.set( 'views', __dirname + '/views' ); app.use( express.bodyParser() ); }); function log( what ) { console.log( what ); } app.get( '/', function( req, res ) { redis.lrange( 'items', 0, 50, function( err, items ) { if( err ) { log( err ); } else { res.render( 'index', { items: items } ); } }); }); app.listen( 8080 ); } Now the result of the benchmark is close to 750 req/s with 100 % CPU consumption (to be compared with the initial 475 req/s).
Redis
10,663,809
27
We have configured the Redis server with one master and two slaves. If my master fails, how can we handle the failover without restarting the Redis server?
Update: Today, I would recommend checking out redis-sentinel, a tool by Redis' author antirez for monitoring and automatic failover. Original reply: Check the SLAVEOF command: http://redis.io/commands/slaveof When you discover that your master fails, issue a SLAVEOF NO ONE on one of your slaves to promote it to master. Then point your other slave to it's new master. See also "Upgrading or restarting a Redis instance without downtime": http://redis.io/topics/admin For managing configuration files you could do something along these lines (caution: Not tested, meant as an example). The example below assumes two configuration files for each server (/etc/redis/server1.master.conf, /etc/redis/server1.slave.conf, etc), one having that server as a slave of some predefined master: #!/bin/sh master() { server_name=$1 redis-cli slaveof no one ln -sf /etc/redis/$server_name.master.conf /etc/redis/$server_name.conf } # Usage: slave(server1 server2 6379) slave() { server_name=$1 master=$2 master_port=$3 redis-cli slaveof $master $master_port ln -sf /etc/redis/$server_name.slave.conf /etc/redis/$server_name.conf } Instead of having the predefined configuration files, you could edit them on the fly with e.g. sed. Basically, you would make sure to always have a slaveof stanza in the configuration files, either pointing to a master or slaveof no one. Then rewrite the configuration using sed (again, not tested, just meant as food for thought): #!/bin/sh master() { server_name=$1 config=$server_name.conf redis-cli slaveof no one sed -i "s/^slaveof.*/slaveof no one/" $config } # Usage: slave(server1 server2 6379) slave() { server_name=$1 config=$server_name.conf master=$2 master_port=$3 redis-cli slaveof $master $master_port sed -i "s/^slaveof.*/slaveof $master $master_port/" $config }
Redis
9,223,840
27
I have separated the data using a colon. redis> keys party:* 1) "party:congress:president" 2) "party:bjp:president" 3) "party:bjp" 4) "party:sena" Is there any command that will list all the parties? In the case of the above example, I expect: congress bjp sena
No, there is no command to do that. But it would be trivial to implement it on the client side, if you really have to. Applications should never use the KEYS command to retrieve data. KEYS blocks the whole Redis instance while it linearly scans the millions of keys you have stored. It is more of a debugging command, meant to be used in administration tools. With Redis, there is no btree structure to index the keys, so you cannot query for keys, except if your keys are themselves stored in an existing collection (set, zset, etc.).
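A sketch of the collection-based approach this points to: maintain a set of party names yourself at write time, then read it back instead of scanning the keyspace (the key name parties is just an illustrative choice):
# whenever you create party:<name>:... keys, also index the name
redis-cli SADD parties congress
redis-cli SADD parties bjp
redis-cli SADD parties sena
# later: list all parties without touching KEYS
redis-cli SMEMBERS parties
# 1) "congress"  2) "bjp"  3) "sena"  (order is not guaranteed for a set)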
Redis
8,744,207
27
Let's say I have an object (User) which consists of a few properties (ID, Name, Surename, Age). Which way is better to store this object in Redis: store each property value in a dedicated key, for example user:{id}:id, user:{id}:name, user:{id}:surename, user:{id}:age, or store the whole User object as a JSON string in one key, for example user:{id}:json (the value of the key will be something like this: {"ID": 123, "Name": "Johny", "Surename": "Bravo", "Age": 22})?
According to these two sources, the optimal solution would probably be to use hashes, because of the memory consumption of dedicated keys and of long JSON strings stored as a single key's value.
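A minimal sketch of the hash approach for the object from the question (HMSET is used here because it works on both old and new Redis versions; on Redis 4.0+ a multi-field HSET does the same):
# one hash per user, one field per property
redis-cli HMSET user:123 id 123 name "Johny" surename "Bravo" age 22
# read a single property
redis-cli HGET user:123 age
# or the whole object
redis-cli HGETALL user:123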
Redis
5,252,456
27
What is Redis's database size to memory ratio? For instance, if I have an 80MB database, how much RAM will Redis use (when used with a normal web app)?
Redis will use a bit more RAM than disk. The dumpfile format is probably a bit more densely packed. This is some numbers from a real production system (a 64 bit EC2 large instance running Redis 2.0.4 on Ubuntu 10.04): $ redis-cli info | grep used_memory_human used_memory_human:1.36G $ du -sh /mnt/data/redis/dump.rdb 950M /mnt/data/redis/dump.rdb As you can see, the dumpfile is a few hundred megs smaller than the memory usage. In the end it depends on what you store in the database. I have mainly hashes in mine, with only a few (perhaps less than 1%) sets. None of the keys contain very large objects, the average object size is 889 bytes.
Redis
4,731,873
27
I am trying to convey that the authentication/security scheme requires setting a header as follows: Authorization: Bearer <token> This is what I have based on the swagger documentation: securityDefinitions: APIKey: type: apiKey name: Authorization in: header security: - APIKey: []
Maybe this can help: swagger: '2.0' info: version: 1.0.0 title: Bearer auth example description: > An example for how to use Bearer Auth with OpenAPI / Swagger 2.0. host: basic-auth-server.herokuapp.com schemes: - http - https securityDefinitions: Bearer: type: apiKey name: Authorization in: header description: >- Enter the token with the `Bearer: ` prefix, e.g. "Bearer abcde12345". paths: /: get: security: - Bearer: [] responses: '200': description: 'Will send `Authenticated`' '403': description: 'You do not have necessary permissions for the resource' You can copy&paste it to https://editor.swagger.io to check out the results. There are also several examples in the Swagger Editor web with more complex security configurations which could help you. Important: In this example, API consumers must include the "Bearer" prefix as part of the token value. For example, when using Swagger UI's "Authorize" dialog, you need to enter Bearer your_token instead of just your_token.
OpenAPI
32,910,065
181
I have JSON schema file where one of the properties is defined as either string or null: "type":["string", "null"] When converted to YAML (for use with OpenAPI/Swagger), it becomes: type: - 'null' - string but the Swagger Editor shows an error: Schema "type" key must be a string What is the correct way to define a nullable property in OpenAPI?
This depends on the OpenAPI version. OpenAPI 3.1 Your example is valid in OpenAPI 3.1, which is fully compatible with JSON Schema 2020-12. type: - 'null' # Note the quotes around 'null' - string # same as type: ['null', string] The above is equivalent to: oneOf: - type: 'null' # Note the quotes around 'null' - type: string The nullable keyword used in OAS 3.0.x (see below) does not exist in OAS 3.1, it was removed in favor of the 'null' type. OpenAPI 3.0.x Nullable strings are defined as follows: type: string nullable: true This is different from JSON Schema syntax because OpenAPI versions up to 3.0.x use their own flavor of JSON Schema ("extended subset"). One of the differences is that the type must be a single type and cannot be a list of types. Also there's no 'null' type; instead, the nullable keyword serves as a type modifier to allow null values. OpenAPI 2.0 OAS2 does not support 'null' as the data type, so you are out of luck. You can only use type: string. However, some tools support x-nullable: true as a vendor extension, even though nulls are not part of the OpenAPI 2.0 Specification. Consider migrating to OpenAPI v. 3 to get proper support for nulls.
OpenAPI
48,111,459
176
What is the correct way to declare a date in a swagger-file object? I would think it is: startDate: type: string description: Start date example: "2017-01-01" format: date But I see a lot of declarations like these: startDate: type: string description: Start date example: "2017-01-01" format: date pattern: "YYYY-MM-DD" minLength: 0 maxLength: 10
The OpenAPI Specification says that you must use: type: string format: date # or date-time The internet date/time standard used by OpenAPI is defined in RFC 3339, section 5.6 (effectively ISO 8601) and examples are provided in section 5.8. So date values should look like "2018-03-20" and date-time values like "2018-03-20T09:12:28Z". As such, when using date or date-time, the pattern should be omitted. If you need to support dates/times formatted in a way that differs from RFC 3339, you are not allowed to specify your parameter as format: date or format: date-time. Instead, you should specify type: string with an appropriate pattern and remove format. Finally, note that a pattern of "YYYY-MM-DD" is invalid according to the specification: pattern must be a regular expression, not a placeholder or format string.
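For example, if a field really must carry a non-RFC 3339 value such as DD/MM/YYYY, a sketch of the pattern-based fallback described above could look like this (the regex is illustrative and does not validate month lengths):
startDate:
  type: string
  description: Start date in DD/MM/YYYY form (not an RFC 3339 date)
  example: "20/03/2018"
  pattern: '^\d{2}/\d{2}/\d{4}$'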
OpenAPI
49,379,006
160
I have an API reference in a Swagger file. I want to create a very simple mock server, so that when I call e.g.: mymockurl.com/users it will return a predefined JSON (no need to connect to a database). What's the easiest way to do this? I'm not a backend guy.
An easy way to create a simple mock from an OpenAPI (fka Swagger) spec without code is to use a tool called Prism, available at http://github.com/stoplightio/prism and written in TypeScript. This command line is all you need: ./prism run --mock --list --spec <your swagger spec file> The mock server will return a dynamic response based on the OpenAPI spec. If examples are provided in the spec, Prism will return them; if not, it will generate dummy data based on the spec. Edit (Aug 2020): The command has changed in the latest version. The following will do: prism mock <your spec file> It accepts Swagger and Postman docs as well.
OpenAPI
38,344,711
123
I am developing a REST API. during development I have used postman (chrome extension) to use and document my API. It is a wonderful tool and I have most of my API endpoints in it. However, as we near release I would like to document this API in swagger, how would I do that? Is there a way that I can generate swagger based off of the postman export?
As of 2022 my recommendation is the excellent Rust/Wasm tool from Kevin Swiber, which is available online, as a Rust crate and as an npm module. https://kevinswiber.github.io/postman2openapi/ APIMatic API Transformer can process a Postman collection (v1 or v2) as an input format and produce Swagger 1.2 or 2.0, and now OpenAPI 3.0.0 as output. It has its own API and a Web front-end, and also a command-line version.
OpenAPI
31,299,098
118
When using JSON Schema and Open API specification (OAS) to document a REST API, how do I define the UUID property?
There's no built-in type for UUID, but the OpenAPI Specification suggests using type: string format: uuid From the Data Types section (emphasis mine): Primitives have an optional modifier property: format. OAS uses several known formats to define in fine detail the data type being used. However, to support documentation needs, the format property is an open string-valued property, and can have any value. Formats such as "email", "uuid", and so on, MAY be used even though undefined by this specification. For example, Swagger Codegen maps format: uuid to System.Guid in C# or java.util.UUID in Java. Tools that don't support format: uuid will handle it as just type: string.
OpenAPI
50,204,588
113
How to I define in OpenAPI/Swagger if a field is optional or required and what is the default?
By default, fields in a model are optional unless you put them in the required list. Below is an example - id, category are optional fields, name is required. Note that required is not an attribute of fields, but an attribute of the object itself - it's a list of required properties. type: object required: # List the required properties here - name properties: id: type: integer format: int64 category: $ref: '#/definitions/Category' name: type: string example: doggie Ref: https://github.com/swagger-api/swagger-codegen/blob/master/modules/swagger-codegen/src/test/resources/2_0/petstore.yaml#L658 If this is the model for the request body, you'll probably also need to mark the body itself as required: # swagger: '2.0' parameters: - in: body name: body required: true # <---- schema: $ref: '#/definitions/Pet' # openapi: 3.x.x requestBody: required: true # <---- content: ... To specify the default value of optional fields, you can use the default attribute. Here is an example: type: object properties: huntingSkill: type: string description: The measured skill for hunting default: lazy
OpenAPI
40,113,049
109
Let's say I've got a parameter like limit. This one gets used all over the place and it's a pain to have to change it everywhere if I need to update it: parameters: - name: limit in: query description: Limits the number of returned results required: false type: number format: int32 Can I use $ref to define this elsewhere and make it reusable? I came across this ticket which suggests that someone wants to change or improve feature, but I can't tell if it already exists today or not?
This feature already exists in Swagger 2.0. The linked ticket talks about some specific mechanics of it which doesn't affect the functionality of this feature. At the top level object (referred to as the Swagger Object), there's a parameters property where you can define reusable parameters. You can give the parameter any name, and refer to it from paths/specific operations. The top level parameters are just definitions and are not applied to all operations in the spec automatically. You can find an example for it here - https://github.com/swagger-api/swagger-spec/blob/master/fixtures/v2.0/json/resources/reusableParameters.json - even with a limit parameter. In your case, you'd want to do this: # define a path with parameter reference /path: get: parameters: - $ref: "#/parameters/limitParam" - $ref: "#/parameters/offsetParam" # define reusable parameters: parameters: limitParam: name: limit in: query description: Limits the number of returned results required: false type: integer format: int32 offsetParam: name: offset in: query description: Offset from which start returned results required: false type: integer format: int32
OpenAPI
27,005,105
106
This question is not a duplicate of (Swagger - Specify Optional Object Property or Multiple Responses) because that OP was trying to return a 200 or a 400. I have a GET with an optional parameter; e.g., GET /endpoint?selector=foo. I want to return a 200 whose schema is different based on whether the parameter was passed, e.g.,: GET /endpoint -> {200, schema_1} GET /endpoint?selector=blah -> {200, schema_2} In the yaml, I tried having two 200 codes, but the viewer squashes them down as if I only specified one. Is there a way to do this? Edit: the following seems related: https://github.com/OAI/OpenAPI-Specification/issues/270
OpenAPI 2.0 OAS2 does not support multiple response schemas per status code. You can only have a single schema, for example, a free-form object (type: object without properties). OpenAPI 3.x In OAS3 you can use oneOf to define multiple possible request bodies or response bodies for the same operation: openapi: 3.0.0 ... paths: /path: get: responses: '200': description: Success content: application/json: schema: oneOf: - $ref: '#/components/schemas/ResponseOne' - $ref: '#/components/schemas/ResponseTwo' However, it's not possible to map specific response schemas to specific parameter values. You'll need to document these specifics verbally in the description of the response, operation and/or parameter. Here's a possibly related enhancement request: Allow operationObject overloading with get-^ post-^ etc Note for Swagger UI users: Older versions of Swagger UI (before v. 3.39.0) do not automatically generate examples for oneOf and anyOf schemas. As a workaround, you can specify a response example or examples manually. Note that using multiple examples require Swagger UI 3.23.0+ or Swagger Editor 3.6.31+. responses: '200': description: Success content: application/json: schema: oneOf: - $ref: '#/components/schemas/ResponseOne' - $ref: '#/components/schemas/ResponseTwo' example: # <--- Workaround for Swagger UI < 3.39.0 foo: bar
OpenAPI
36,576,447
90
I am using Swagger to document my REST services. One of my services requires a CSV file to be uploaded. I added the following to the parameters section in my JSON API definition: { "name": "File", "description": "The file in zip format.", "paramType": "body", "required": true, "allowMultiple": false, "dataType": "file" } and now I see the file upload option on my Swagger UI page. But when I select a file and click "try it out", I get the following error: NS_ERROR_XPC_BAD_OP_ON_WN_PROTO: Illegal operation on WrappedNative prototype object in jquery-1.8.0.min.js (line 2) The page is continuously processing and I am not getting any response. Any ideas what could be wrong?
OpenAPI Specification 2.0 In Swagger 2.0 (OpenAPI Specification 2.0), use a form parameter (in: formData) with the type set to file. Additionally, the operation's consumes must be multipart/form-data. consumes: - multipart/form-data parameters: - name: file in: formData # <----- description: The uploaded file data required: true type: file # <----- OpenAPI Specification 3.0 In OpenAPI Specification 3.0, files are defined as binary strings, that is, type: string + format: binary (or format: byte, depending on the use case). File input/output content is described with the same semantics as any other schema type (unlike OpenAPI 2.0): Multi-part request, single file: requestBody: content: multipart/form-data: schema: type: object properties: # 'file' will be the field name in this multipart request file: type: string format: binary Multi-part request, array of files (supported in Swagger UI 3.26.0+ and Swagger Editor 3.10.0+): requestBody: content: multipart/form-data: schema: type: object properties: # The property name 'file' will be used for all files. file: type: array items: type: string format: binary POST/PUT file directly (the request body is the file contents): requestBody: content: application/octet-stream: # any media type is accepted, functionally equivalent to `*/*` schema: # a binary file of any type type: string format: binary Note: the semantics are the same as other OpenAPI 3.0 schema types: # content transferred in binary (octet-stream): schema: type: string format: binary Further information: Considerations for File Uploads Special Considerations for multipart Content File Upload and Multipart Requests
OpenAPI
14,455,408
82
There is a function in my REST web service that works with the GET method and has two optional parameters. I tried to define it in Swagger but I encountered the error Not a valid parameter definition after I set required to false. I found out that if I set the required value to true the error goes away. Here is a sample of my Swagger code. ... paths: '/get/{param1}/{param2}': get: ... parameters: - name: param1 in: path description: 'description regarding param1' required: false type: string - name: param2 in: path description: 'description regarding param2' required: false type: string I didn't experience this with parameters in body or the ones in query. I think this problem is only related to parameters in path. I could not find any solution in the Swagger specification files either. Is there any other way to define optional parameters in Swagger, or is there a mistake in my code? Any help would be appreciated.
Given that path parameter must be required according to the OpenAPI/Swagger spec, you can consider adding 2 separate endpoints with the following paths: /get/{param1}/{param2} when param2 is provided /get/{param1}/ when param2 is not provided
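A minimal Swagger 2.0 sketch of the two-endpoint workaround, reusing the parameter names from the question (assumes a swagger: '2.0' document around it; response bodies are placeholders):
paths:
  /get/{param1}/{param2}:
    get:
      parameters:
        - name: param1
          in: path
          required: true
          type: string
        - name: param2
          in: path
          required: true
          type: string
      responses:
        '200':
          description: OK
  /get/{param1}/:
    get:
      parameters:
        - name: param1
          in: path
          required: true
          type: string
      responses:
        '200':
          description: OK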
OpenAPI
35,011,192
76
Is there a generator to convert OpenAPI 3.0 to Swagger 2.0? Mashery, an API gateway, requires Swagger 2.0 format on input to open endpoint.
LucyBot api-spec-converter (online version, GitHub repo, Node.js module) can convert from OpenAPI 3.0 to 2.0. API Transformer (paid service) also claims to be able to convert OpenAPI 3.0 back to OpenAPI 2.0. It has a command-line version too. Keep in mind that OAS3→OAS2 convertion is lossy in general, because OAS3 has features that did not exist in OAS2 (such as multiple servers, oneOf/anyOf, different schemas per media type, objects in query string parameters, cookie parameters, and others).
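As a rough sketch of how the api-spec-converter CLI is typically invoked (check the project README for the exact flags; openapi_3 and swagger_2 are the format identifiers I believe it uses):
npm install -g api-spec-converter
api-spec-converter --from=openapi_3 --to=swagger_2 --syntax=yaml openapi.yaml > swagger.yaml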
OpenAPI
56,637,299
76
Given the following OpenAPI definition Person: required: - id type: object properties: id: type: string Which of the below objects are valid? Only A, or A & B? A. {"id": ""} B. {"id": null} C. {} This boils down to the question whether "required = true" means "non-null value" or "property must be present". The JSON schema validator at https://json-schema-validator.herokuapp.com/ says that B. be invalid because null doesn't satisfy the type: string constraint. Note that it doesn't complain because id is null but because null is not a string. BUT how relevant is this for OpenAPI/Swagger?
The required keyword in OpenAPI Schema Objects is taken from JSON Schema and means: An object instance is valid against this keyword if every item in the [required] array is the name of a property in the instance. In other words, required means "property must be present", regardless of its value. The type, format, etc. of the property value are separate constraints that are evaluated separately from required, but together as a combined schema. In your example: {"id": ""} is valid: ✓ validates against required ✓ the value "" validates against type: string {"id": null} is NOT valid: ✓ validates against required ✗ null does NOT validate against type: string (see the notes about nulls below) {} is NOT valid: ✗ does NOT validate against required Note that 'null' as a type is not supported in OpenAPI 2.0 but is supported in OpenAPI 3.1, and 3.0 has nullable to handle nulls. So, {"id": null} is valid against this OpenAPI 3 schema: Person: required: - id type: object properties: id: # OAS 3.1 type: [string, 'null'] # OAS 3.0 # type: string # nullable: true
OpenAPI
45,575,493
68
I am trying to follow this: https://www.dariawan.com/tutorials/spring/documenting-spring-boot-rest-api-springdoc-openapi-3/ How do I deal with annotations like: @ApiModel(value = "Response container") @ApiModelProperty(value = "Iventory response", required = true)
Migrating from SpringFox Remove springfox and swagger 2 dependencies. Add springdoc-openapi-ui dependency instead. <dependency> <groupId>org.springdoc</groupId> <artifactId>springdoc-openapi-ui</artifactId> <version>@springdoc.version@</version> </dependency> Replace swagger 2 annotations with swagger 3 annotations (it is already included with springdoc-openapi-ui dependency). Package for swagger 3 annotations is io.swagger.v3.oas.annotations. @ApiParam -> @Parameter @ApiOperation -> @Operation @Api -> @Tag @ApiImplicitParams -> @Parameters @ApiImplicitParam -> @Parameter @ApiIgnore -> @Parameter(hidden = true) or @Operation(hidden = true) or @Hidden @ApiModel -> @Schema @ApiModelProperty -> @Schema This step is optional: Only if you have multiple Docket beans replace them with GroupedOpenApi beans. Before: @Bean public Docket publicApi() { return new Docket(DocumentationType.SWAGGER_2) .select() .apis(RequestHandlerSelectors.basePackage("org.github.springshop.web.public")) .paths(PathSelectors.regex("/public.*")) .build() .groupName("springshop-public") .apiInfo(apiInfo()); } @Bean public Docket adminApi() { return new Docket(DocumentationType.SWAGGER_2) .select() .apis(RequestHandlerSelectors.basePackage("org.github.springshop.web.admin")) .paths(PathSelectors.regex("/admin.*")) .build() .groupName("springshop-admin") .apiInfo(apiInfo()); } Now: @Bean public GroupedOpenApi publicApi() { return GroupedOpenApi.builder() .setGroup("springshop-public") .pathsToMatch("/public/**") .build(); } @Bean public GroupedOpenApi adminApi() { return GroupedOpenApi.builder() .setGroup("springshop-admin") .pathsToMatch("/admin/**") .build(); } If you have only one Docket -- remove it and instead add properties to your application.properties: springdoc.packagesToScan=package1, package2 springdoc.pathsToMatch=/v1, /api/balance/** Add bean of OpenAPI type. See example: @Bean public OpenAPI springShopOpenAPI() { return new OpenAPI() .info(new Info().title("SpringShop API") .description("Spring shop sample application") .version("v0.0.1") .license(new License().name("Apache 2.0").url("http://springdoc.org"))) .externalDocs(new ExternalDocumentation() .description("SpringShop Wiki Documentation") .url("https://springshop.wiki.github.org/docs")); } If the swagger-ui is served behind a proxy: https://springdoc.org/faq.html#how-can-i-deploy-the-doploy-springdoc-openapi-ui-behind-a-reverse-proxy. To customise the Swagger UI https://springdoc.org/faq.html#how-can-i-configure-swagger-ui. To hide an operation or a controller from documentation https://springdoc.org/faq.html#how-can-i-hide-an-operation-or-a-controller-from-documentation-.
OpenAPI
59,291,371
68
I'm currently using Swagger in my NestJS project, and I have the explorer enabled: in main.js const options = new DocumentBuilder() .setTitle('My App') .setSchemes('https') .setDescription('My App API documentation') .setVersion('1.0') .build() const document = SwaggerModule.createDocument(app, options) SwaggerModule.setup('docs', app, document, { customSiteTitle: 'My App documentation', }) With this, the explorer is accessible in /docs which is what I expected. But I was wondering if it's possible to add any Authentication layer to the explorer, so only certain requests are accepted. I want to make this explorer accessible in production, but only for authenticated users.
Securing access to your Swagger with HTTP Basic Auth using NestJS with Express First run npm i express-basic-auth then add the following to your main.{ts,js}: import * as basicAuth from "express-basic-auth"; // ... // Sometime after NestFactory add this to add HTTP Basic Auth app.use( // Paths you want to protect with basic auth "/docs*", basicAuth({ challenge: true, users: { yourUserName: "p4ssw0rd", }, }) ); // Your code const options = new DocumentBuilder() .setTitle("My App") .setSchemes("https") .setDescription("My App API documentation") .setVersion("1.0") .build(); const document = SwaggerModule.createDocument(app, options); SwaggerModule.setup( // Make sure you use the same path just without `/` and `*` "docs", app, document, { customSiteTitle: "My App documentation", } ); // ... With this in place you will be prompted on any of the /docs route with a HTTP Basic Auth prompt. We add the * to also protect the generated JSON (/docs-json) and YAML (/docs-json) OpenAPI files. If you have any other route beginning with /docs, that should not be protected, you should rather explicitly name the routes you want to protect in an array ['/docs', '/docs-json', '/docs-yaml']. You should not put the credentials in your code/repository but rather in your .env and access via the ConfigService. I have seen this solution first here.
OpenAPI
54,802,832
65
I'm using http://editor.swagger.io to design an API and I get an error which I don't know how to address: Schema error at paths['/employees/{employeeId}/roles'].get.parameters[0] should NOT have additional properties additionalProperty: type, format, name, in, description Jump to line 24 I have other endpoints defined in a similar way, and don't get this error. I wondered if I had some issue with indentation or unclosed quotes, but that doesn't seem to be the case. Google also did not seem to provide any useful results. swagger: "2.0" info: description: Initial draft of the API specification version: '1.0' title: App 4.0 API host: api.com basePath: /v1 tags: - name: employees description: Employee management schemes: - https paths: /employees/{employeeId}/roles: get: tags: - employees summary: "Get a specific employee's roles" description: '' operationId: findEmployeeRoles produces: - application/json parameters: - name: employeeId <====== Line 24 in: path description: Id of employee whose roles we are fetching type: integer format: int64 responses: '200': description: successful operation schema: type: array items: $ref: '#/definitions/Role' '403': description: No permission to see employee roles '404': description: EmployeeId not found Any Hints?
The error message is misleading. The actual error is that your path parameter is missing required: true. Path parameters are always required, so remember to add required: true to them.
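For reference, a corrected version of the parameter block from the question (a minimal sketch, Swagger 2.0 syntax) would look like this:

parameters:
  - name: employeeId
    in: path
    description: Id of employee whose roles we are fetching
    required: true
    type: integer
    format: int64

With required: true in place, the validator no longer reports the misleading "additional properties" error.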
OpenAPI
45,549,663
64
I am familiar with the Microsoft stack. I am using OData for some of my restful services. Recently I came across Swagger for API documentation and I am trying to understand how it relates to OData. Both of them seem to be RESTful specifications. Which one is widely used?
Swagger is a specification for documenting APIs. By creating a swagger document for your API, you can pass it to an instance of Swagger UI, which renders the document in a neat, readable format and provides tooling to invoke your APIs. See the swagger.io website for further information. OData is a specification for creating data services over HTTP; it defines how a service should be constructed and what patterns it should follow. For example, the $top directive provides the first n results of a data set. OData is currently at version 4, but the v2 documentation has a very good overview. Swashbuckle is a NuGet package for the Microsoft stack that produces swagger documents for your APIs automatically, based on inspecting the code and additional metadata you provide to shape the output document. If you want Swashbuckle to automatically generate swagger documents for an OData API you are building, then you can use Swashbuckle.OData to provide this for you. If you are using .NET Core, then it gets a little more complex, but a full example can be found at the .NET Core Swagger OData sample. OpenAPI is a specification for describing APIs; Swagger is an implementation of the OpenAPI standard. You can find more details here.
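To make the "documenting APIs" part concrete, a Swagger (OpenAPI 2.0) document is just a structured description of your endpoints and models. A minimal illustrative sketch (the API name and path here are made up, not taken from any particular service) looks like this:

swagger: "2.0"
info:
  title: Orders API
  version: "1.0"
basePath: /api
paths:
  /orders:
    get:
      summary: List orders
      responses:
        '200':
          description: A list of orders

Tools such as Swagger UI or Swashbuckle consume or produce documents of this shape, whereas OData constrains how the service itself behaves (query options such as $top, $filter, and so on).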
OpenAPI
32,858,371
58
Although I have seen the examples in the OpenAPI spec: type: object additionalProperties: $ref: '#/definitions/ComplexModel' it isn't obvious to me why the use of additionalProperties is the correct schema for a Map/Dictionary. It also doesn't help that the only concrete thing that the spec has to say about additionalProperties is: The following properties are taken from the JSON Schema definition but their definitions were adjusted to the Swagger Specification. Their definition is the same as the one from JSON Schema, only where the original definition references the JSON Schema definition, the Schema Object definition is used instead. items allOf properties additionalProperties
Chen, I think your answer is correct. Some further background that might be helpful: In JavaScript, which was the original context for JSON, an object is like a hash map of strings to values, where some values are data, others are functions. You can think of each name-value pair as a property. But JavaScript doesn't have classes, so the property names are not predefined, and each object can have its own independent set of properties. JSON Schema uses the properties keyword to validate name-value pairs that are known in advance; and uses additionalProperties (or patternProperties, not supported in OpenAPI 2.0) to validate properties that are not known. For clarity: The property names, or "keys" in the map, must be strings. They cannot be numbers, or any other value. As you said, the property names should be unique. Unfortunately the JSON spec doesn't strictly require uniqueness, but uniqueness is recommended, and expected by most JSON implementations. More background here. properties and additionalProperties can be used alone or in combination. When additionalProperties is used alone, without properties, the object essentially functions as a map<string, T> where T is the type described in the additionalProperties sub-schema. Maybe that helps to answer your original question. When evaluating an object against a single schema, if a property name matches one of those specified in properties, its value only needs to be valid against the sub-schema provided for that property. The additionalProperties sub-schema, if provided, will only be used to validate properties that are not included in the properties map. There are some limitations of additionalProperties as implemented in Swagger's core Java libraries. I've documented these limitations here.
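As a small illustration of combining the two keywords (a sketch, not taken from the question): the following schema validates one known property, id, and treats every other key as a string entry in the map:

type: object
properties:
  id:
    type: integer
additionalProperties:
  type: string

An instance such as { "id": 1, "en": "English text", "de": "Deutscher Text" } is valid against it, because id is checked against properties and the remaining keys are checked against additionalProperties.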
OpenAPI
41,239,913
58
How to specify a property as null or a reference? discusses how to specify a property as null or a reference using jsonschema. I'm looking to do the same thing with swagger. To recap the answer to the above, with jsonschema, one could do this: { "definitions": { "Foo": { # some complex object } }, "type": "object", "properties": { "foo": { "oneOf": [ {"$ref": "#/definitions/Foo"}, {"type": "null"} ] } } } The key point to the answer was the use of oneOf. The key points to my question: I have a complex object which I want to keep DRY so I put it in a definitions section for reuse throughout my swagger spec: values of other properties; response objects, etc. In various places in my spec a property may be a reference to such an object OR be null. How do I specify this with Swagger which doesn't support oneOf or anyOf? Note: some swagger implementations use x-nullable (or some-such) to specify a property value can be null, however, $ref replaces the object with what it references, so it would appear any use of x-nullable is ignored.
OpenAPI 3.1 Define the property as anyOf of the $ref and type: 'null'. YAML version: foo: anyOf: - type: 'null' # Note the quotes around 'null' - $ref: '#/components/schemas/Foo' JSON version: "foo": { "anyOf": [ { "type": "null" }, { "$ref": "#/components/schemas/Foo" } ] } Why use anyOf and not oneOf? oneOf will fail validation if the referenced schema itself allows nulls, whereas anyOf will work. OpenAPI 3.0 YAML version: foo: nullable: true allOf: - $ref: '#/components/schemas/Foo' JSON version: "foo": { "nullable": true, "allOf": [ { "$ref": "#/components/schemas/Foo" } ] } In OAS 3.0, wrapping $ref into allOf is needed to combine the $ref with other keywords - because $ref overwrites any sibling keywords. This is further discussed in the OpenAPI Specification repository: Reference objects don't combine well with “nullable”
OpenAPI
40,920,441
56
Are there any tools/libraries to convert OpenAPI 2.0 definitions to OpenAPI 3.0, without doing it one per row?
Swagger Editor Paste your OpenAPI 2.0 definition into https://editor.swagger.io and select Edit > Convert to OpenAPI 3 from the menu. Swagger Converter Converts OpenAPI 2.0 and Swagger 1.x definitions to OpenAPI 3.0. https://converter.swagger.io/api/convert?url=OAS2_YAML_OR_JSON_URL This gives you JSON. If you want YAML, send the request with the Accept: application/yaml header: curl "https://converter.swagger.io/api/convert?url=OAS2_YAML_OR_JSON_URL" -H "Accept: application/yaml" -o ./openapi.yaml API docs: https://converter.swagger.io GitHub repo: https://github.com/swagger-api/swagger-converter Swagger Codegen version 3.x Can also convert OpenAPI 2.0 and Swagger 1.x definitions to OpenAPI 3.0. Swagger Codegen has a CLI version, Maven plugin, Docker images. Here's an example using the command-line version (you can download the latest JAR from Maven Central). Write the entire command on one line. Use openapi-yaml to get YAML or openapi to get JSON. java -jar swagger-codegen-cli-3.0.19.jar generate -l openapi-yaml -i https://petstore.swagger.io/v2/swagger.yaml -o OUT_DIR GitHub repo: https://github.com/swagger-api/swagger-codegen
OpenAPI
59,749,513
56
I have a series of parameters in Swagger like this "parameters": [ { "name": "username", "description": "Fetch username by username/email", "required": false, "type": "string", "paramType": "query" }, { "name": "site", "description": "Fetch username by site", "required": false, "type": "string", "paramType": "query" }, { "name": "survey", "description": "Fetch username by survey", "required": false, "type": "string", "paramType": "query" } ], One parameter MUST be filled out but it doesn't matter which one, the others can be left blank. Is there a way to represent this in Swagger?
Mutually exclusive parameters are possible (sort of) in OpenAPI 3.x: Define the mutually exclusive parameters as object properties, and use oneOf or maxProperties to limit the object to just 1 property. Use the parameter serialization method style: form and explode: true, so that the object is serialized as ?propName=value. An example using the minProperties and maxProperties constraints: openapi: 3.0.0 ... paths: /foo: get: parameters: - in: query name: filter required: true style: form explode: true schema: type: object properties: username: type: string site: type: string survey: type: string minProperties: 1 maxProperties: 1 additionalProperties: false Using oneOf: parameters: - in: query name: filter required: true style: form explode: true schema: type: object oneOf: - properties: username: type: string required: [username] additionalProperties: false - properties: site: type: string required: [site] additionalProperties: false - properties: survey: type: string required: [survey] additionalProperties: false Another version using oneOf: parameters: - in: query name: filter required: true style: form explode: true schema: type: object properties: username: type: string site: type: string survey: type: string additionalProperties: false oneOf: - required: [username] - required: [site] - required: [survey] Note that Swagger UI and Swagger Editor do not support the examples above yet (as of March 2018). This issue seems to cover the parameter rendering part. There's also an open proposal in the OpenAPI Specification repository to support interdependencies between query parameters so maybe future versions of the Specification will have a better way to define such scenarios.
OpenAPI
21,134,029
54
How to enable "Authorize" button in springdoc-openapi-ui (OpenAPI 3.0 /swagger-ui.html) for Bearer Token Authentication, for example JWT. What annotations have to be added to Spring @Controller and @Configuration classes?
I prefer to use bean initialization instead of annotations. import io.swagger.v3.oas.models.Components; import io.swagger.v3.oas.models.OpenAPI; import io.swagger.v3.oas.models.info.Info; import io.swagger.v3.oas.models.security.SecurityRequirement; import io.swagger.v3.oas.models.security.SecurityScheme; import org.springframework.beans.factory.annotation.Value; import org.springframework.context.annotation.Bean; import org.springframework.context.annotation.Configuration; import org.springframework.util.StringUtils; @Configuration public class OpenApi30Config { private final String moduleName; private final String apiVersion; public OpenApi30Config( @Value("${module-name}") String moduleName, @Value("${api-version}") String apiVersion) { this.moduleName = moduleName; this.apiVersion = apiVersion; } @Bean public OpenAPI customOpenAPI() { final String securitySchemeName = "bearerAuth"; final String apiTitle = String.format("%s API", StringUtils.capitalize(moduleName)); return new OpenAPI() .addSecurityItem(new SecurityRequirement().addList(securitySchemeName)) .components( new Components() .addSecuritySchemes(securitySchemeName, new SecurityScheme() .name(securitySchemeName) .type(SecurityScheme.Type.HTTP) .scheme("bearer") .bearerFormat("JWT") ) ) .info(new Info().title(apiTitle).version(apiVersion)); } } The line .addSecurityItem(new SecurityRequirement().addList(securitySchemeName)) adds a global security requirement, so you do not have to declare security on every @Operation-annotated method individually.
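For reference, a configuration along these lines should surface in the generated /v3/api-docs description roughly as follows (a sketch; the actual title and version come from your module-name and api-version properties):

openapi: 3.0.1
info:
  title: Module API
  version: v1
security:
  - bearerAuth: []
components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
      bearerFormat: JWT

This is what makes Swagger UI render the Authorize button and attach the entered token as an Authorization: Bearer header on "Try it out" requests.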
OpenAPI
59,898,874
54
I have a POST request that uses the following JSON request body. How can I describe this request body using OpenAPI (Swagger)? { "testapi":{ "testapiContext":{ "messageId":"kkkk8", "messageDateTime":"2014-08-17T14:07:30+0530" }, "testapiBody":{ "cameraServiceRq":{ "osType":"android", "deviceType":"samsung555" } } } } So far I tried the following, but I'm stuck at defining the body schema. swagger: "2.0" info: version: 1.0.0 title: get camera license: name: MIT host: localhost basePath: /test/service schemes: - http consumes: - application/json produces: - application/json paths: /getCameraParameters: post: summary: Create new parameters operationId: createnew consumes: - application/json - application/xml produces: - application/json - application/xml parameters: - name: pet in: body description: The pet JSON you want to post schema: # <--- What do I write here? required: true responses: 200: description: "200 response" examples: application/json: { "status": "Success" } I want to define the input body inline, as a sample for documentation.
I made it work with: post: consumes: - application/json produces: - application/json - text/xml - text/html parameters: - name: body in: body required: true schema: # Body schema with atomic property examples type: object properties: testapi: type: object properties: messageId: type: string example: kkkk8 messageDateTime: type: string example: '2014-08-17T14:07:30+0530' testapiBody: type: object properties: cameraServiceRq: type: object properties: osType: type: string example: android deviceType: type: string example: samsung555 # Alternatively, we can use a schema-level example example: testapi: testapiContext: messageId: kkkk8 messageDateTime: '2014-08-17T14:07:30+0530' testapiBody: cameraServiceRq: osType: android deviceType: samsung555
OpenAPI
31,390,806
53
In OpenAPI (Swagger) 2.0, we could define header parameters like so: paths: /post: post: parameters: - in: header name: X-username But in OpenAPI 3.0.0, parameters are replaced by request bodies, and I cannot find a way to define header parameters, which would further be used for authentication. What is the correct way to define request headers in OpenAPI 3.0.0?
In OpenAPI 3.0, header parameters are defined in the same way as in OpenAPI 2.0, except the type has been replaced with schema: paths: /post: post: parameters: - in: header name: X-username schema: type: string When in doubt, check out the Describing Parameters guide. But in Swagger 3.0.0 parameters are replaced by request bodies. This is only true for form and body parameters. Other parameter types (path, query, header) are still defined as parameters. define header parameters, which would further be used for authentication. A better way to define authentication-related parameters is to use securitySchemes rather than define these parameters explicitly in parameters. Security schemes are used for parameters such as API keys, app ID/secret, etc. In your case: components: securitySchemes: usernameHeader: type: apiKey in: header name: X-Username paths: /post: post: security: - usernameHeader: [] ...
OpenAPI
50,117,059
46
I have an Asp.Net web API 5.2 project in c# and generating documentation with Swashbuckle. I have model that contain inheritance something like having an Animal property from an Animal abstract class and Dog and Cat classes that derive from it. Swashbuckle only shows the schema for the Animal class so I tried to play with ISchemaFilter (that what they suggest too) but I couldn't make it work and also I cannot find a proper example. Anybody can help?
It seems Swashbuckle doesn't implement polymorphism correctly and I understand the point of view of the author about subclasses as parameters (if an action expects an Animal class and behaves differently if you call it with a dog object or a cat object, then you should have 2 different actions...) but as return types I believe that it is correct to return Animal and the objects could be Dog or Cat types. So to describe my API and produce a proper JSON schema in line with correct guidelines (be aware of the way I describe the disciminator, if you have your own discriminator you may need to change that part in particular), I use document and schema filters as follows: SwaggerDocsConfig configuration; ..... configuration.DocumentFilter<PolymorphismDocumentFilter<YourBaseClass>>(); configuration.SchemaFilter<PolymorphismSchemaFilter<YourBaseClass>>(); ..... public class PolymorphismSchemaFilter<T> : ISchemaFilter { private readonly Lazy<HashSet<Type>> derivedTypes = new Lazy<HashSet<Type>>(Init); private static HashSet<Type> Init() { var abstractType = typeof(T); var dTypes = abstractType.Assembly .GetTypes() .Where(x => abstractType != x && abstractType.IsAssignableFrom(x)); var result = new HashSet<Type>(); foreach (var item in dTypes) result.Add(item); return result; } public void Apply(Schema schema, SchemaRegistry schemaRegistry, Type type) { if (!derivedTypes.Value.Contains(type)) return; var clonedSchema = new Schema { properties = schema.properties, type = schema.type, required = schema.required }; //schemaRegistry.Definitions[typeof(T).Name]; does not work correctly in SwashBuckle var parentSchema = new Schema { @ref = "#/definitions/" + typeof(T).Name }; schema.allOf = new List<Schema> { parentSchema, clonedSchema }; //reset properties for they are included in allOf, should be null but code does not handle it schema.properties = new Dictionary<string, Schema>(); } } public class PolymorphismDocumentFilter<T> : IDocumentFilter { public void Apply(SwaggerDocument swaggerDoc, SchemaRegistry schemaRegistry, System.Web.Http.Description.IApiExplorer apiExplorer) { RegisterSubClasses(schemaRegistry, typeof(T)); } private static void RegisterSubClasses(SchemaRegistry schemaRegistry, Type abstractType) { const string discriminatorName = "discriminator"; var parentSchema = schemaRegistry.Definitions[SchemaIdProvider.GetSchemaId(abstractType)]; //set up a discriminator property (it must be required) parentSchema.discriminator = discriminatorName; parentSchema.required = new List<string> { discriminatorName }; if (!parentSchema.properties.ContainsKey(discriminatorName)) parentSchema.properties.Add(discriminatorName, new Schema { type = "string" }); //register all subclasses var derivedTypes = abstractType.Assembly .GetTypes() .Where(x => abstractType != x && abstractType.IsAssignableFrom(x)); foreach (var item in derivedTypes) schemaRegistry.GetOrRegister(item); } } What the previous code implements is specified here, in the section "Models with Polymorphism Support. 
It basically produces something like the following: { "definitions": { "Pet": { "type": "object", "discriminator": "petType", "properties": { "name": { "type": "string" }, "petType": { "type": "string" } }, "required": [ "name", "petType" ] }, "Cat": { "description": "A representation of a cat", "allOf": [ { "$ref": "#/definitions/Pet" }, { "type": "object", "properties": { "huntingSkill": { "type": "string", "description": "The measured skill for hunting", "default": "lazy", "enum": [ "clueless", "lazy", "adventurous", "aggressive" ] } }, "required": [ "huntingSkill" ] } ] }, "Dog": { "description": "A representation of a dog", "allOf": [ { "$ref": "#/definitions/Pet" }, { "type": "object", "properties": { "packSize": { "type": "integer", "format": "int32", "description": "the size of the pack the dog is from", "default": 0, "minimum": 0 } }, "required": [ "packSize" ] } ] } } }
OpenAPI
34,397,349
44
I'm writing an OpenAPI definition in Swagger Editor. One of my type definitions contains an array containing child elements of the same type as the parent. I.e. something like this: definitions: TreeNode: type: object properties: name: type: string description: The name of the tree node. children: type: array items: $ref: '#/definitions/TreeNode' However, Swagger Editor doesn't pick up the recursive reference in the children array, which is simply shown as an array of "undefined" elements. Does anybody have an idea on how to do this?`
Your definition is perfectly fine. It's a known issue with rendering recursive schemas in Swagger Editor and Swagger UI: https://github.com/swagger-api/swagger-ui/issues/3325 To work around the "Example Value" showing null/"string"/undefined instead of a recursive element, you can add a custom example to your schema: definitions: TreeNode: type: object properties: name: type: string description: The name of the tree node. children: type: array items: $ref: '#/definitions/TreeNode' example: name: foo children: - name: bar - name: baz children: - name: qux
OpenAPI
36,866,035
42
I want to combine an API specification written using the OpenAPI 3 spec, that is currently divided into multiple files that reference each other using $ref. How can I do that?
I wrote a quick tool to do this recently. I call it openapi-merge. There is a library and an associated CLI tool: https://www.npmjs.com/package/openapi-merge https://www.npmjs.com/package/openapi-merge-cli In order to use the CLI tool you just write a configuration file and then run npx openapi-merge-cli. The configuration file is fairly simple and would look something like this: { "inputs": [ { "inputFile": "./gateway.swagger.json" }, { "inputFile": "./jira.swagger.json", "pathModification": { "stripStart": "/rest", "prepend": "/jira" } }, { "inputFile": "./confluence.swagger.json", "disputePrefix": "Confluence", "pathModification": { "prepend": "/confluence" } } ], "output": "./output.swagger.json" } For more details, see the README on the NPM package.
OpenAPI
54,586,137
42
I'm preparing my API documentation by doing it per hand and not auto generated. There I have headers that should be sent to all APIs and don't know if it is possible to define parameters globally for the whole API or not? Some of these headers are static and some has to be set when call to API is made, but they are all the same in all APIs, I don't want to copy and paste parameters for each API and each method as this will not be maintainable in the future. I saw the static headers by API definition but there is no single document for how somebody can set them or use them. Is this possible at all or not?
It depends on what kind of parameters they are. The examples below are in YAML (for readability), but you can use http://www.json2yaml.com to convert them to JSON. Security-related parameters: Authorization header, API keys, etc. Parameters used for authentication and authorization, such as the Authorization header, API key, pair of API keys, etc. should be defined as security schemes rather than parameters. In your example, the X-ACCOUNT looks like an API key, so you can use: swagger: "2.0" ... securityDefinitions: accountId: type: apiKey in: header name: X-ACCOUNT description: All requests must include the `X-ACCOUNT` header containing your account ID. # Apply the "X-ACCOUNT" header globally to all paths and operations security: - accountId: [] or in OpenAPI 3.0: openapi: 3.0.0 ... components: securitySchemes: accountId: type: apiKey in: header name: X-ACCOUNT description: All requests must include the `X-ACCOUNT` header containing your account ID. # Apply the "X-ACCOUNT" header globally to all paths and operations security: - accountId: [] Tools may handle security schemes parameters differently than generic parameters. For example, Swagger UI won't list API keys among operation parameters; instead, it will display the "Authorize" button where your users can enter their API key. Generic parameters: offset, limit, resource IDs, etc. OpenAPI 2.0 and 3.0 do not have a concept of global parameters. There are existing feature requests: Allow for responses and parameters shared across all endpoints Group multiple parameter definitions for better maintainability The most you can do is define these parameters in the global parameters section (in OpenAPI 2.0) or the components/parameters section (in OpenAPI 3.0) and then $ref all parameters explicitly in each operation. The drawback is that you need to duplicate the $refs in each operation. swagger: "2.0" ... paths: /users: get: parameters: - $ref: '#/parameters/offset' - $ref: '#/parameters/limit' ... /organizations: get: parameters: - $ref: '#/parameters/offset' - $ref: '#/parameters/limit' ... parameters: offset: in: query name: offset type: integer minimum: 0 limit: in: query name: limit type: integer minimum: 1 maximum: 50 To reduce code duplication somewhat, parameters that apply to all operations on a path can be defined on the path level rather than inside operations. paths: /foo: # These parameters apply to both GET and POST parameters: - $ref: '#/parameters/some_param' - $ref: '#/parameters/another_param' get: ... post: ...
OpenAPI
19,590,197
41
I'm struggling with the syntax of swagger to describe a response type. What I'm trying to model is a hash map with dynamic keys and values. This is needed to allow a localization. The languages may vary, but english should always be provided. The response would look like this in JSON: { id: "1234", name: { en: "english text", de: "Deutscher Text" } } My first try was looking like that, but I have no idea how to write the part for the name. AdditionalProperties seems to be a key, but I can't wrap my head around it. Also the requirement for english text is a mystery to me in this syntax and the example also does not seem to work as expected. It generates an empty $folded: in the UI. delayReason: type: object properties: id: type: string description: Identifier for a delay reason. name: type: object additionalProperties: type: string required: [id, name] example: id: 123 name: en: english text de: Deutscher Text But this produces: There is also no clue in this that the result will have a language code as a key and the text as the value of the hash map.
Your usage of additionalProperties is correct and your model is correct. additionalProperties In Swagger/OpenAPI, hashmap keys are assumed to be strings, so the key type is not defined explicitly. additionalProperties define the type of hashmap values. So, this schema type: object additionalProperties: type: string defines a string-to-string map such as: { "en": "English text", "de": "Deutscher Text" } If you needed a string-to-integer map such as: { "en": 5, "de": 3 } you would define additionalProperties as having value type integer: type: object additionalProperties: type: integer Required key in a hashmap To define en as a required key in the hashmap: type: object properties: en: type: string required: [en] additionalProperties: type: string Complete example definitions: delayReason: type: object properties: id: type: string description: Identifier for a delay reason. name: type: object description: A hashmap with language code as a key and the text as the value. properties: en: type: string description: English text of a delay reason. required: [en] additionalProperties: type: string required: [id, name] example: id: '123' # Note the quotes to force the value as a string name: en: English text de: Deutscher Text There is also no clue in this that the result will have a language code as a key and the text as the value of the hash map. Things like that can be documented verbally in the description. the example also does not seem to work as expected. It generates an empty $folded: in the UI. Not sure what the problem was with your original spec, but the spec above is valid and looks fine in the Swagger Editor.
OpenAPI
41,097,913
40
I have an existing Spring REST API for which I want to generate the OpenAPI 3.0 YAML file and not Swagger 2.0 JSON/YAML? Since as of now, SpringFox does not support YAML generation. It generates JSON with Swagger 2.0 (which follows OPEN API 3.0 spec). Also, there is https://github.com/openapi-tools/swagger-maven-plugin but it does not seem to support Spring Rest. I tried the Kongchen spring-maven-plugin which is able to generate the YAML file but with Swagger 2.0 definition and not OPEN API 3.0 like : swagger: "2.0" info: description: "Test rest project" version: "1.0" title: "Some desc" termsOfService: "http://swagger.io/terms/" contact: name: "Rest Support" url: "http://www.swagger.io/support" email: "[email protected]" license: name: "Apache 2.0" url: "http://www.apache.org/licenses/LICENSE-2.0.html" host: "example.com" basePath: "/api/" So my question is how can I generate the OPEN API YAML file like : openapi: 3.0.0 info: description: Some desc version: "1.0" title: Test rest project termsOfService: http://swagger.io/terms/ contact: name: Rest Support url: http://www.swagger.io/support email: [email protected] license: name: Apache 2.0 url: http://www.apache.org/licenses/LICENSE-2.0.html I am currently using swagger-maven-plugin to generate YAML file with Swagger 2.0 definition and converting it to Open API 3.0 definition using swagger2openapi at https://mermade.org.uk/openapi-converter Question 1: Can spring-maven-plugin capture io.swagger.v3.oas.annotations to generate the YAML ? Question 2: What is the best way to generate the YAML with OPEN API definitions in a Spring MVC Project? Question 3: Can io.swagger.v3.oas be used with Spring projects or it is only for JAX-RS projects?
We have lately used the springdoc-openapi Java library. It helps automate the generation of API documentation for Spring Boot projects and automatically deploys swagger-ui to the application, so the documentation is available in HTML format using the official swagger-ui jars. The Swagger UI page should then be available at http://server:port/context-path/swagger-ui.html and the OpenAPI description will be available in JSON format at http://server:port/context-path/v3/api-docs, where server is the server name or IP, port is the server port, and context-path is the context path of the application. The documentation is also available in YAML format at /v3/api-docs.yaml. Add the library to the list of your project dependencies (no additional configuration is needed): <dependency> <groupId>org.springdoc</groupId> <artifactId>springdoc-openapi-ui</artifactId> <version>1.2.3</version> </dependency>
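If you later want to serve the UI or the description on different paths, springdoc exposes properties for that. An optional application.yml sketch (the defaults shown above work without any of this):

springdoc:
  api-docs:
    path: /v3/api-docs
  swagger-ui:
    path: /swagger-ui.html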
OpenAPI
54,921,110
37
I'm trying to build a Swagger model for a time interval, using a simple string to store the time (I know that there is also datetime): definitions: Time: type: string description: Time in 24 hour format "hh:mm". TimeInterval: type: object properties: lowerBound: $ref: "#/definitions/Time" description: Lower bound on the time interval. default: "00:00" upperBound: $ref: "#/definitions/Time" description: Upper bound on the time interval. default: "24:00" For some reason the generated HTML does not show the lowerBound and upperBound "description", but only the original Time "description". This makes me think I'm not doing this correctly. So the question is if using a model as a type can in fact be done as I'm trying to do.
TL;DR: $ref siblings are supported (to an extent) in OpenAPI 3.1. In previous OpenAPI versions, any keywords alongside $ref are ignored. OpenAPI 3.1 Your definition will work as expected when migrated to OpenAPI 3.1. This new version is fully compatible with JSON Schema 2020-12, which allows $ref siblings in schemas. openapi: 3.1.0 ... components: schemas: Time: type: string description: Time in 24 hour format "hh:mm". TimeInterval: type: object properties: lowerBound: # ------- This will work in OAS 3.1 ------- # $ref: "#/components/schemas/Time" description: Lower bound on the time interval. default: "00:00" upperBound: # ------- This will work in OAS 3.1 ------- # $ref: "#/components/schemas/Time" description: Upper bound on the time interval. default: "24:00" Outside of schemas - for example, in responses or parameters - $refs only allow sibling summary and description keywords. Any other keywords alongside these $refs will be ignored. # openapi: 3.1.0 # This is supported parameters: - $ref: '#/components/parameters/id' description: Entity ID # This is NOT supported parameters: - $ref: '#/components/parameters/id' required: true Here're some OpenAPI feature requests about non-schema $ref siblings that you can track/upvote: Allow sibling elements with $ref that overrides the references definition Allow required as sibling of $ref (like summary/description) Extend/override properties of a parameter OpenAPI 2.0 and 3.0.x In these versions, $ref works by replacing itself and all of its sibling elements with the definition it is pointing at. That is why lowerBound: $ref: "#/definitions/Time" description: Lower bound on the time interval. default: "00:00" becomes lowerBound: type: string description: Time in 24 hour format "hh:mm". A possible workaround is to wrap $ref into allOf - this can be used to "add" attributes to a $ref but not override existing attributes. lowerBound: allOf: - $ref: "#/definitions/Time" description: Lower bound on the time interval. default: "00:00" Another way is to use replace the $ref with an inline definition. definitions: TimeInterval: type: object properties: lowerBound: type: string # <------ description: Lower bound on the time interval, using 24 hour format "hh:mm". default: "00:00" upperBound: type: string # <------ description: Upper bound on the time interval, using 24 hour format "hh:mm". default: "24:00"
OpenAPI
33,629,750
36
I am designing an API and I want to define an enum Severity which can have values LOW, MEDIUM or HIGH. Internally Severity gets stored as an integer so I want to map these to 2,1 and 0 respectively. Is there a way to do this in an OpenAPI definition? This is currently what I have for Severity: severity: type: string enum: - HIGH - MEDIUM - LOW
OpenAPI 3.1 OpenAPI 3.1 uses the latest JSON Schema, and the recommended way to annotate individual enum values in JSON Schema is to use oneOf+const instead of enum. This way you can specify both custom names (title) and descriptions for enum values. Severity: type: integer oneOf: - title: HIGH const: 2 description: An urgent problem - title: MEDIUM const: 1 - title: LOW const: 0 description: Can wait forever Note: This schema still uses 2, 1 and 0 (i.e. the const values) as the enum values in the actual request/response payload, but code generators can use the title to assign custom names to those values in client/server code, for example: # Python class Severity(Enum): HIGH = 2 MEDIUM = 1 LOW = 0 OpenAPI 3.0 and 2.0 These versions do not have a way to define custom names for enum values, but some tools provide x- extensions for this purpose. For example: OpenAPI Generator supports x-enum-varnames and x-enum-descriptions. openapi-typescript-codegen also supports these extensions. Severity: type: integer enum: [2, 1, 0] x-enum-varnames: - HIGH - MEDIUM - LOW x-enum-descriptions: - An urgent problem - A medium-priority problem - Can wait forever AutoRest supports x-ms-enum. NSwag supports x-enumNames: Severity: type: integer enum: [2, 1, 0] x-enumNames: [HIGH, MEDIUM, LOW] Speakeasy supports x-speakeasy-enums. Check with your tooling vendors to see if they have a similar extension.
OpenAPI
66,465,888
34
I'm having a hard time trying to figure out how I can nest models in OpenAPI 2.0. Currently I have: SomeModel: properties: prop1: type: string prop2: type: integer prop3: type: $ref: OtherModel OtherModel: properties: otherProp: type: string I have tried many other ways: prop3: $ref: OtherModel # or prop3: schema: $ref: OtherModel # or prop3: type: schema: $ref: OtherModel None of the above seem to work. However, with arrays works just fine: prop3: type: array items: $ref: OtherModel
The correct way to model it in OpenAPI 2.0 would be: swagger: '2.0' ... definitions: SomeModel: type: object properties: prop1: type: string prop2: type: integer prop3: $ref: '#/definitions/OtherModel' # <----- OtherModel: type: object properties: otherProp: type: string If you use OpenAPI 3.0, models live in components/schemas instead of definitions: openapi: 3.0.1 ... components: schemas: SomeModel: type: object properties: prop1: type: string prop2: type: integer prop3: $ref: '#/components/schemas/OtherModel' # <----- OtherModel: type: object properties: otherProp: type: string Remember to add type: object to your object schemas because the type is not inferred from other keywords.
OpenAPI
26,287,962
32
How to define constant string variable in swagger open api 3.0 ? If I define enum it would be like as follows "StatusCode": { "title": "StatusCode", "enum": [ "success", "fail" ], "type": "string" } But enums can be list of values, Is there any way to define string constant in swagger open api 3.0 code can be executed form the http://editor.swagger.io/
As @Helen already pointed out, and as you can read in the linked answer, currently it does not seem to get any better than an enum with only one value. Full example that can be pasted into http://editor.swagger.io/: { "openapi": "3.0.0", "info": { "title": "Some API", "version": "Some version" }, "paths": {}, "components": { "schemas": { "StatusCode": { "title": "StatusCode", "enum": [ "The only possible value" ], "type": "string" } } } } There is a related topic on Github which is unsolved as of now: https://github.com/OAI/OpenAPI-Specification/issues/1313
OpenAPI
51,780,038
32
I have a project (Spring Boot App + Kotlin) that I would like to have an Open API 3.0 spec for (preferably in YAML). The Springfox libraries are nice but they generate Swagger 2.0 JSON. What is the best way to generate an Open Api 3.0 spec from the annotations in my controllers? Is writing it from scratch the only way?
We have used springdoc-openapi library in our kotlin project, and it meets our need for automating the generation of API documentation using spring boot projects. It automatically deploys swagger-ui to a spring-boot application The Swagger UI page should then be available at: - http://server:port/context-path/swagger-ui.html The OpenAPI description will be available at the following url for json format: - http://server:port/context-path/v3/api-docs Add the library to the list of your project dependencies (No additional configuration is needed) <dependency> <groupId>org.springdoc</groupId> <artifactId>springdoc-openapi-ui</artifactId> <version>1.2.32</version> </dependency>
OpenAPI
55,938,207
31
I would like to POST a json body with Swagger, like this : curl -H "Content-Type: application/json" -X POST -d {"username":"foobar","password":"xxxxxxxxxxxxxxxxx", "email": "[email protected]"}' http://localhost/user/register Currently, I have this definition : "/auth/register": { "post": { "tags": [ "auth" ], "summary": "Create a new user account", "parameters": [ { "name": "username", "in": "query", "description": "The username of the user", "required": true, "type": "string" }, { "name": "password", "in": "query", "description": "The password of the user", "required": true, "type": "string", "format": "password" }, { "name": "email", "in": "query", "description": "The email of the user", "required": true, "type": "string", "format": "email" } ], "responses": { "201": { "description": "The user account has been created", "schema": { "$ref": "#/definitions/User" } }, "default": { "description": "Unexpected error", "schema": { "$ref": "#/definitions/Errors" } } } } } But the data are sent in the URL. Here the generated curl provided by Swagger : curl -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' 'http://localhost/user/register?username=foobar&password=password&email=foo%40bar.com' I understand that the query keywork is not good, but I didn't find the way to POST a JSON body. I tried formData but it didn't work.
You need to use the body parameter: "parameters": [ { "in": "body", "name": "body", "description": "Pet object that needs to be added to the store", "required": false, "schema": { "$ref": "#/definitions/Pet" } } ], and #/definitions/Pet is defined as a model: "Pet": { "required": [ "name", "photoUrls" ], "properties": { "id": { "type": "integer", "format": "int64" }, "category": { "$ref": "#/definitions/Category" }, "name": { "type": "string", "example": "doggie" }, "photoUrls": { "type": "array", "xml": { "name": "photoUrl", "wrapped": true }, "items": { "type": "string" } }, "tags": { "type": "array", "xml": { "name": "tag", "wrapped": true }, "items": { "$ref": "#/definitions/Tag" } }, "status": { "type": "string", "description": "pet status in the store", "enum": [ "available", "pending", "sold" ] } }, "xml": { "name": "Pet" } }, Ref: https://github.com/OpenAPITools/openapi-generator/blob/master/modules/openapi-generator/src/test/resources/2_0/petstore.json#L35-L43 OpenAPI/Swagger v2 spec: https://github.com/OAI/OpenAPI-Specification/blob/master/versions/2.0.md#parameter-object For OpenAPI v3 spec, body parameter has been deprecated. To define the HTTP payload, one needs to use the requestBody instead, e.g. https://github.com/OpenAPITools/openapi-generator/blob/master/modules/openapi-generator/src/test/resources/3_0/petstore.json#L39-L41 OpenAPI v3 spec: https://github.com/OAI/OpenAPI-Specification/blob/master/versions/3.0.0.md#requestBodyObject
OpenAPI
35,411,628
30
I have a yaml specification that has been updated from swagger 2.0 to openapi 3.0.0 The file itself is about 7,000 lines so it is challenging to validate by hand. I need to figure out which tags I have are no longer compatible with openapi 3.0.0. How can I validate my schema? Are there any command line tools I can use? I do not want to copy/paste this code somewhere online because I don't want to expose all the routes publicly.
Swagger Editor https://editor.swagger.io performs validation on the client side, meaning your definition is not sent anywhere. You can also run the editor locally, e.g. offline. Notes: Because of lazy loading you may need to expand all operations and models in the UI panel to see all of the errors. Warnings are displayed as gutter icons, apart from the error list. Other validators https://openapi.tools has a list of OpenAPI validation tools, including command-line tools and Node.js modules.
OpenAPI
60,216,133
30
The API I'm trying to describe has a structure where the root object can contain an arbitrary number of child objects (properties that are themselves objects). The "key", or property in the root object, is the unique identifier of the child object, and the value is the rest of the child object's data. { "child1": { ... bunch of stuff ... }, "child2": { ... bunch of stuff ... }, ... } This could similarly be modeled as an array, e.g.: [ { "id": "child1", ... bunch of stuff ... }, { "id": "child2", ... bunch of stuff ... }, ... ] but this both makes it structurally less clear what the identifying property is and makes uniqueness among the children's ID implicit rather than explicit, so we want to use an object, or a map. I've seen the Swagger documentation for Model with Map/Dictionary Properties, but that doesn't adequately suit my use case. Writing something like: "Parent": { "additionalProperties": { "$ref": "#/components/schemas/Child", } Yields something like this: This adequately communicates the descriptiveness of the value in the property, but how do I document what the restrictions are for the "key" in the object? Ideally I'd like to say something like "it's not just any arbitrary string, it's the ID that corresponds to the child". Is this supported in any way?
Your example is correct. how do I document what the restrictions are for the "key" in the object? Ideally I'd like to say something like "it's not just any arbitrary string, it's the ID that corresponds to the child". Is this supported in any way? OpenAPI 3.1 OAS 3.1 fully supports JSON Schema 2020-12, including patternProperties. This keyword lets you define the format of dictionary keys by using a regular expression: "Parent": { "type": "object", "patternProperties": { "^child\d+$": { "$ref": "#/components/schemas/Child" } }, "description": "A map of `Child` schemas, where the keys are IDs that correspond to the child" } Or, if the property names are defined by an enum, you can use propertyNames to define that enum: "Parent": { "type": "object", "propertyNames": { "enum": ["foo", "bar"] }, "additionalProperties": { "$ref": "#/components/schemas/Child" } } OpenAPI 3.0 and 2.0 Dictionary keys are assumed to be strings, but there's no way to limit the contents/format of keys. You can document any restrictions and specifics verbally in the schema description. Adding schema examples could help illustrate what your dictionary/map might look like. "Parent": { "type": "object", "additionalProperties": { "$ref": "#/components/schemas/Child" }, "description": "A map of `Child` schemas, where the keys are IDs that correspond to the child", "example": { "child1": { ... bunch of stuff ... }, "child2": { ... bunch of stuff ... }, } If the possible key names are known (for example, they are part of an enum), you can define your dictionary as a regular object and the keys as individual object properties: // Keys can be: key1, key2, key3 "Parent": { "type": "object", "properties": { "key1": { "$ref": "#/components/schemas/Child" }, "key2": { "$ref": "#/components/schemas/Child" }, "key3": { "$ref": "#/components/schemas/Child" } } } Then you can add "additionalProperties": false to really ensure that only those keys are used.
OpenAPI
46,552,863
28
Using this schema definition: schemas: AllContacts: type: array items: $ref: '#/definitions/ContactModel1' example: - id: 1 firstName: Sherlock lastName: Holmes - id: 2 firstName: John lastName: Watson I get this expected result: [ { "id": 1, "firstName": "Sherlock", "lastName": "Holmes" }, { "id": 2, "firstName": "John", "lastName": "Watson" } ] Now I'd like to reuse the Holmes example for both the single user (ContactModel1) and as part of an array of users (AllContacts). But if I use the referenced examples: schemas: AllContacts: type: array items: $ref: '#/definitions/ContactModel1' example: Homes: $ref: '#/components/examples/Homes' Watson: $ref: '#/components/examples/Watson' examples: Holmes: value: id: 1 first_name: Sherlock last_name: Holmes Watson: value: id: 2 first_name: John last_name: Watson I get this unexpected result in Swagger UI: [ { "value": { "id": 1, "first_name": "Sherlock", "last_name": "Holmes", }, "$$ref": "#/components/examples/Holmes" }, { "value": { "id": 2, "first_name": "John", "last_name": "Watson", }, "$$ref": "#/components/examples/Watson" } ] and a similar unexpected example for GET /user/1: [ { "value": { "id": 1, "first_name": "Sherlock", "last_name": "Holmes", }, "$$ref": "#/components/examples/Holmes" } ] What am I doing wrong? I am using this doc as reference: https://swagger.io/docs/specification/adding-examples/#reuse
This is NOT a valid definition: components: schemas: AllContacts: type: array items: $ref: '#/definitions/ContactModel1' example: Homes: $ref: '#/components/examples/Homes' Watson: $ref: '#/components/examples/Watson' 1) The example syntax is wrong. OpenAPI 3.0 has two keywords for examples - example (singular) and examples (plural). They work differently: example requires an inline example and does not support $ref. examples is a map (collection) of named examples. It supports $ref - but you can only $ref whole examples, not individual parts of an example. This also means it's not possible to build an example from multiple $refs. Note that not all elements support plural examples. Note for Swagger UI users: Swagger UI currently supports example (singular) but not examples (plural). Support for examples is tracked in this issue. 2) The Schema Object only supports singular example but not plural examples. In other words, schemas support inline examples only. 3) In OpenAPI 3.0, schema references use the format #/components/schemas/..., not #/definitions/... I'd like to use the same EXAMPLE definition for Holmes in both cases, the array of users and the single user. There's no way to reuse a part of an example in this case. You'll have to repeat the example value in both schemas: components: schemas: ContactModel1: type: object properties: ... example: id: 1 first_name: Sherlock last_name: Holmes AllContacts: type: array items: $ref: '#/components/schemas/ContactModel1' example: - id: 1 first_name: Sherlock last_name: Holmes - id: 2 first_name: John last_name: Watson
OpenAPI
49,839,121
28
Is there any way to document the following query? GET api/v1/users?name1=value1&name2=value where the query parameter names are dynamic and will be received from the client. I'm using the latest Swagger API.
Free-form query parameters can be described using OpenAPI 3.x, but not OpenAPI 2.0 (Swagger 2.0). The parameter must have type: object with the serialization method style: form and explode: true. The object will be serialized as ?prop1=value1&prop2=value2&..., where individual prop=value pairs are the object properties. openapi: 3.0.3 ... paths: /users: get: parameters: - in: query # Arbitrary name. It won't appear in the request URL, but will be used # in server & client code generated from this OAS document. name: params schema: type: object # If the parameter values are of specific type, e.g. string: additionalProperties: type: string # If the parameter values can be of different types # (e.g. string, number, boolean, ...) # additionalProperties: true # `style: form` and `explode: true` is the default serialization method # for query parameters, so these keywords can be omitted style: form explode: true Free-form query parameters are supported in Swagger UI 3.15.0+ and Swagger Editor 3.5.6+. In the parameter editor, enter the parameter names and values in the JSON object format, e.g. { "prop1": "value1", "prop2": "value2" }. "Try it out" will send them as param=value query parameters: Not sure about Codegen support though.
OpenAPI
49,582,559
27
I want to use Swagger Codegen for OpenAPI 3.0 YAML file. And I see Swagger Codegen 3.0.0-rc0 is available. But when I try to use that I run into issues. Following are the details: My pom.xml file with swagger-codegen plugin: <plugin> <groupId>io.swagger</groupId> <artifactId>swagger-codegen-maven-plugin</artifactId> <version>3.0.0-rc0</version> <executions> <execution> <goals> <goal>generate</goal> </goals> <configuration> <inputSpec>${basedir}/src/main/resources/mySpec.yaml</inputSpec> <output>target/generated-sources</output> <language>spring</language> <generateApis>false</generateApis> <modelPackage>com.kj.model</modelPackage> <apiPackage>com.kj</apiPackage> <configOptions> <sourceFolder>swagger</sourceFolder> <library>spring-mvc</library> <interfaceOnly>true</interfaceOnly> <useBeanValidation>true</useBeanValidation> <dateLibrary>java8</dateLibrary> <java8>true</java8> </configOptions> </configuration> </execution> </executions> </plugin> With the above plugin when I run the maven build, I got this ServiceConfigurationError, here is the stack trace: Exception in thread "main" java.util.ServiceConfigurationError: io.swagger.codegen.CodegenConfig: Provider io.swagger.codegen.languages.java.JavaClientCodegen not found at java.util.ServiceLoader.fail(ServiceLoader.java:239) at java.util.ServiceLoader.access$300(ServiceLoader.java:185) at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:372) at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404) at java.util.ServiceLoader$1.next(ServiceLoader.java:480) at io.swagger.codegen.CodegenConfigLoader.forName(CodegenConfigLoader.java:19) at io.swagger.codegen.config.CodegenConfigurator.toClientOptInput(CodegenConfigurator.java:392) at io.swagger.codegen.plugin.CodeGenMojo.execute(CodeGenMojo.java:512) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134) In order to fix this I added swagger-codegen-generators dependency within the maven plugin section of pom file: <dependencies> <dependency> <groupId>io.swagger</groupId> <artifactId>swagger-codegen-generators</artifactId> <version>1.0.0-SNAPSHOT</version> </dependency> </dependencies> So with this earlier mentioned issue got resolved but now I see this NPE java.lang.NullPointerException at io.swagger.codegen.languages.SpringCodegen.preprocessOpenAPI(SpringCodegen.java:429) at io.swagger.codegen.DefaultGenerator.configureGeneratorProperties(DefaultGenerator.java:199) at io.swagger.codegen.DefaultGenerator.generate(DefaultGenerator.java:716) at io.swagger.codegen.plugin.CodeGenMojo.execute(CodeGenMojo.java:534) at org.apache.maven.plugin.DefaultBuildPluginManager.executeMojo(DefaultBuildPluginManager.java:134) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:208) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:154) at org.apache.maven.lifecycle.internal.MojoExecutor.execute(MojoExecutor.java:146) at org.apache.maven.lifecycle.internal.LifecycleModuleBuilder.buildProject(LifecycleModuleBuilder.java:117) As you would have noted already that I am using <language>spring</language> and <library>spring-mvc</library>. Please let me know if codegen has worked for someone for 3.0.0-rc0 with these configurations. Note: I looked at this old post which is similar but at that time 3.0.0-rc0 was not available.
To use Swagger Codegen with Maven plug-in for OpenAPI 3.0.0 spec, you may consider using OpenAPI Generator instead (which is a community-driven version of Swagger Codegen). <dependency> <groupId>org.openapitools</groupId> <artifactId>openapi-generator-maven-plugin</artifactId> <version>3.3.4</version> </dependency> Ref: https://github.com/OpenAPITools/openapi-generator#12---artifacts-on-maven-central (please refer to the Q&A on why we forked Swagger Codegen)
OpenAPI
49,616,529
27
I recently upgraded my API to a .net core 3.1 server using Swashbuckle 5 with the newtonsoft json nuget, which produces an openapi 3 schema. I then use NSwag to generate a C# API. Previously I had a .net core 2.2 server with swashbuckle 4, producing a swagger 2.0 api schema. I have a generic response class for all responses, containing some metadata about the response like status code and a message, plus a Payload property of Generic type T containing the meat of the response. When the response is an error code, I set the payload property to null. I am struggling to find a way to define my api so that swashbuckle and NSwag combined produce a C# api that will allow the payload property to be null on deserialization. (swagger 2.0 / swashbuckle 4 worked without issue). Try as I might, the Payload property always gets the annotation [Newtonsoft.Json.JsonProperty("payload", Required = Newtonsoft.Json.Required.DisallowNull...] and the [System.ComponentModel.DataAnnotations.Required] annotation. As I understand it, open API 3 now allows "$ref" properties to have the "nullable": true attribute in the schema definition. If I add this manually to my definition after it is created, NSwag correctly removes the Required attribute in the CSharp api and crucially sets the JsonProperty Required attribute to be "Default" (not required) instead of "DisallowNull". However, nothing that I mark up the payload property with causes the nullable: true to appear in my schema json definition. What I want is this: "properties": { "payload": { "nullable": true, "$ref": "#/components/schemas/VisualService.Client.Models.MyResultClass" }, What I get is this: "properties": { "payload": { "$ref": "#/components/schemas/VisualService.Client.Models.MyResultClass" }, What would also work is setting the "nullable"=true on the definition of the referenced $ref object itself. I can't find a way to do this either. I have tried the following remedies, to no success. Marking up the property in the dto class with JsonProperty in different ways: [JsonProperty(Required = Required.AllowNull)] public T Payload { get; set; } [AllowNull] public T Payload { get; set; } [MaybeNull] public T Payload { get; set; } Trying to tell Swashbuckle / Newtonsoft to use my custom Json Resolver as described in this github issue- doesn't seem to obey services.AddControllers() .AddNewtonsoftJson(options => { options.SerializerSettings.ContractResolver = MyCustomResolver(); I created my own custom attribute and filter to try to set the property as nullable [NullableGenericProperty] public T Payload { get; set; } [AttributeUsage(AttributeTargets.Property)] public class NullableGenericPropertyAttribute : Attribute { } public class SwaggerNullablePayloadFilter : ISchemaFilter { public void Apply(OpenApiSchema schema, SchemaFilterContext context) { if (schema?.Properties == null || context?.Type == null) return; var nullableGenericProperties = context.Type.GetProperties() .Where(t => t.GetCustomAttribute<NullableGenericPropertyAttribute>() != null); foreach (var excludedProperty in nullableGenericProperties) { if (schema.Properties.ContainsKey(excludedProperty.Name.ToLowerInvariant())) { var prop = schema.Properties[excludedProperty.Name.ToLowerInvariant()]; prop.Nullable = true; prop.Required = new HashSet<string>() { "false" }; } } } } I had minor success with this one, in that adding the prop.Nullable = true; caused the attribute[System.ComponentModel.DataAnnotations.Required] to be removed from the c# api. 
However, the [Newtonsoft.Json.JsonProperty("payload", Required = Newtonsoft.Json.Required.DisallowNull...] annotation still remained, so it didn't help that much. I added prop.Required = new HashSet<string>() { "false" }; as an additional attempt, but it doesn't seem to do anything. I could downgrade to .NET Core 2.2 / Swashbuckle 4 again, but 2.2 is out of long-term support and I want to stay on 3.1 if at all possible. I could also do a find and replace on my generated API client every time, but I don't want to have to remember to do that manually every time I regenerate the API, which can be several times a day during development. I've got a hacky workaround: I intercept the JSON response and add "nullable": true on the server where it's needed, by using a regex match on the response body JSON string before serving it to the client. It's really hacky though, and I'd like a native way to do this if it exists. Any and all help appreciated!
There is a setting that accomplishes this: UseAllOfToExtendReferenceSchemas. It changes the schema to the following, which NSwag can use to allow nulls for $ref properties: "payload": { "required": [ "false" ], "allOf": [ { "$ref": "#/components/schemas/MyResultClass" } ], "nullable": true }, Use it like this: _ = services.AddSwaggerGen(setup => { setup.SwaggerDoc("v1", new OpenApiInfo { Title = AppConst.SwaggerTitle, Version = "v1" }); setup.UseAllOfToExtendReferenceSchemas(); ...
OpenAPI
62,424,769
27
I noticed that OpenAPI Path Items and some other constructs have both summary and description fields. What is the difference between them, and what is the purpose of each? To me, they seem to accomplish the same thing, and I did not find anything about this in the documentation. It might seem like a nonsense question at first, but since OpenAPI can be used to generate API code, drive documentation, and so on, I think it makes sense to clear up the purpose of these fields.
summary is short, description is more detailed. Think of the summary as a short one or two sentence explanation of what the intended purpose of the element is. You won't be able to describe all the subtle details, but at a high level, it should be able to explain the purpose of the element. Many documentation tools will only display the summary when there's a list of different components or endpoints, so this is a good place to put just enough information to let an unfamiliar reader know if this is the thing that will let them do what they want to do. On the other hand, description is where the full details should go. For example, if you have special enum values, you can include a table of the behavior of each value. If you have an endpoint with special behavior not easily defined in OpenAPI, this is where you'd explain to the reader those details. Many elements may be straightforward and not need a lot of details, so you may find your summary is sufficient. Different documentation tools may automatically use summary if description is not present (or vice versa), though you will want to verify that your particular tool does this. My personal preference is to default to description, and only use summary when the description is too verbose.
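To make the distinction concrete, here is a minimal sketch (assuming FastAPI, which appears later in this document; the endpoint and wording are made up for illustration) where the decorator's summary and description arguments are emitted into the generated OpenAPI operation as the corresponding fields:
from fastapi import FastAPI

app = FastAPI()

@app.get(
    "/contacts/{contact_id}",          # hypothetical endpoint, for illustration only
    summary="Get a single contact",    # short text shown in endpoint lists
    description=(
        "Returns the contact identified by contact_id. "
        "Use this when you already know the contact's id; "
        "use GET /contacts to list or search contacts instead."
    ),                                 # fuller text shown on the expanded operation
)
async def get_contact(contact_id: int):
    return {"id": contact_id, "firstName": "Sherlock", "lastName": "Holmes"}
In the rendered docs, the summary is what a reader scans in the operation list, while the description carries the details once the operation is expanded.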
OpenAPI
66,936,118
27
Referencing the OpenAPI 2.0 Schema Object (also published as the Swagger 2.0 Schema Object) and its definition of the discriminator field: Adds support for polymorphism. The discriminator is the schema property name that is used to differentiate between other schema that inherit this schema. The property name used MUST be defined at this schema and it MUST be in the required property list. When used, the value MUST be the name of this schema or any schema that inherits it. My confusions/questions: It is ambiguous to me what role exactly it plays in inheritance or polymorphism. Could someone please explain discriminator with a working example showing what exactly it does and what happens if we do not use it? Are there any errors, warnings, or tools that depend on it for some operations? Is it the case that swagger-editor does not support discriminator, and the field is used by some other tools? What I have tried so far: I have tried to use swagger-editor and the example from the same documentation (also mentioned below) to play around with this property and see any of its special behaviors. I changed the property, removed it, and extended the Dog model one level deeper and tried the same on the new sub-model, but I did not see any changes in the preview of swagger-editor. I tried searching online, especially Stack Overflow questions, but did not find any relevant information. The sample code I used to do experiments: definitions: Pet: type: object discriminator: petType properties: name: type: string petType: type: string required: - name - petType Cat: description: A representation of a cat allOf: - $ref: '#/definitions/Pet' - type: object properties: huntingSkill: type: string description: The measured skill for hunting default: lazy enum: - clueless - lazy - adventurous - aggressive required: - huntingSkill Dog: description: A representation of a dog allOf: - $ref: '#/definitions/Pet' - type: object properties: packSize: type: integer format: int32 description: the size of the pack the dog is from default: 0 minimum: 0 required: - packSize
According to this Google group, discriminator is used on top of the allOf property and is defined in the super type for polymorphism. If discriminator is not used, the allOf keyword simply describes that a model contains the properties of other models, for composition. In your sample code, Pet is a super type with the petType property identified as the discriminator, and Cat is a sub type of Pet. The following is a JSON example of a Cat object: { "petType": "Cat", "name": "Kitty" } The discriminator is intended to indicate the property used to identify the type of an object. Assuming there are tools that properly support definition objects with a discriminator, it is possible to determine the type by scanning that property; for example, the object can be identified as a Cat according to petType. However, the discriminator field is not well defined in the current version's specification or the samples (see issue #403). As far as I know, there are no tools provided by Swagger that properly support it at the time of writing. discriminator may be used if the model has a property used to determine the type. In that case, it is a natural fit and it can serve as an indicator that helps other developers understand the polymorphism relationship. If third-party tools like ReDoc, which supports discriminator (see petType in this gif and example), are considered, you may find it useful.
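To illustrate what a discriminator-aware consumer could do, here is a rough Python sketch (not generated by any Swagger tool; the dataclasses simply mirror the Pet/Cat/Dog definitions from the question) that uses petType to pick the model to deserialize into:
import json
from dataclasses import dataclass

@dataclass
class Cat:
    name: str
    petType: str
    huntingSkill: str = "lazy"

@dataclass
class Dog:
    name: str
    petType: str
    packSize: int = 0

# Map discriminator values (schema names) to concrete models.
PET_TYPES = {"Cat": Cat, "Dog": Dog}

def parse_pet(payload: str):
    data = json.loads(payload)
    # The discriminator value names the subtype of Pet this object represents.
    return PET_TYPES[data["petType"]](**data)

print(parse_pet('{"petType": "Cat", "name": "Kitty", "huntingSkill": "adventurous"}'))
print(parse_pet('{"petType": "Dog", "name": "Rex", "packSize": 3}'))
Without the discriminator, a tool reading the allOf composition has no declared way to tell which subtype an incoming object is.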
OpenAPI
39,683,846
26
I am defining an API specification in SwaggerHub using OpenAPI 2.0. The /contacts request returns an array of contacts. The definition is below: /contacts: get: tags: - contacts summary: Get all the contacts description: This displays all the contacts present for the user. operationId: getContact produces: - application/json - application/xml responses: 200: description: successful operation schema: $ref: '#/definitions/AllContacts' 400: description: Invalid id supplied 404: description: Contact not found 500: description: Server error definitions: AllContacts: type: array items: - $ref: '#/definitions/ContactModel1' - $ref: '#/definitions/ContactModel2' ContactModel1: type: object properties: id: type: integer example: 1 firstName: type: string example: 'someValue' lastName: type: string example: 'someValue' ContactModel2: type: object properties: id: type: integer example: 2 firstName: type: string example: 'someValue1' lastName: type: string example: 'someValue1' For some reason, it only returns the second object not the whole array of objects. I am using OpenAPI 2.0 and suspect that the arrays are not well supported in this version.
An array of objects is defined as follows. The value of items must be a single model that describes the array items. definitions: AllContacts: type: array items: $ref: '#/definitions/ContactModel' ContactModel: type: object properties: id: type: integer example: 1 firstName: type: string example: Sherlock lastName: type: string example: Holmes By default, Swagger UI displays the array examples with just one item, like so: [ { "id": 1, "firstName": "Sherlock", "lastName": "Holmes" } ] If you want the array example to include multiple items, specify the multi-item example in the array model: definitions: AllContacts: type: array items: $ref: '#/definitions/ContactModel1' example: - id: 1 firstName: Sherlock lastName: Holmes - id: 2 firstName: John lastName: Watson
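If it helps to sanity-check the shape, here is a small sketch using the third-party Python jsonschema package (an assumption; OpenAPI schemas are not exactly JSON Schema, so the model below is a hand-translated equivalent of AllContacts) showing that an array whose items points at a single object model accepts a list of such objects:
from jsonschema import ValidationError, validate

contact_model = {
    "type": "object",
    "properties": {
        "id": {"type": "integer"},
        "firstName": {"type": "string"},
        "lastName": {"type": "string"},
    },
}
all_contacts = {"type": "array", "items": contact_model}

# A list of contact objects validates against the array schema.
validate(
    [{"id": 1, "firstName": "Sherlock", "lastName": "Holmes"},
     {"id": 2, "firstName": "John", "lastName": "Watson"}],
    all_contacts,
)

# A single object (not wrapped in an array) is rejected.
try:
    validate({"id": 1, "firstName": "Sherlock", "lastName": "Holmes"}, all_contacts)
except ValidationError as e:
    print(e.message)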
OpenAPI
46,167,981
26
I have created a RESTful API, and I am now defining the OpenAPI 3.0 JSON representation for the usage of this API. I require a parameter conditionally, only when another parameter is present. So I can't really use either required: true or required: false, because it needs to be conditional. Should I just define it as required: false, and then in the summary and/or description say that it is required when the other parameter is being used? Or is there a way of defining a dependency between parameters? I haven't found anything in the spec that mentions a case like this.
From the docs: Parameter Dependencies OpenAPI 3.0 does not support parameter dependencies and mutually exclusive parameters. There is an open feature request at github.com/OAI/OpenAPI-Specification/issues/256. What you can do is document the restrictions in the parameter description and define the logic in the 400 Bad Request response. For more info - swagger.io/docs/specification/describing-parameters
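Since the dependency can only be documented, not declared, the enforcement has to live in the API itself. Here is a minimal sketch (assuming FastAPI, with made-up sort/order parameters) of describing the rule in the parameter description and returning 400 when it is violated:
from typing import Optional
from fastapi import FastAPI, HTTPException, Query

app = FastAPI()

@app.get("/items")
async def list_items(
    sort: Optional[str] = Query(None, description="Field to sort by."),
    order: Optional[str] = Query(
        None, description="Sort direction; required when sort is provided."
    ),
):
    # The spec documents the restriction; the handler enforces it as a 400 response.
    if sort is not None and order is None:
        raise HTTPException(status_code=400, detail="order is required when sort is used")
    return {"sort": sort, "order": order}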
OpenAPI
63,209,596
26
I'm writing an OpenAPI spec for an existing API. This API returns status 200 for both success and failure, but with a different response structure. For example, in the signup API, if the user signed up successfully, the API sends status 200 with the following JSON: { "result": true, "token": RANDOM_STRING } And if a duplicate user already exists, the API also sends status 200, but with the following JSON: { "result": false, "errorCode": "00002", // this code means "duplicate account" "errorMsg": "duplicated account already exist" } In this case, how should I define the response?
This is possible in OpenAPI 3.0 but not in 2.0. OpenAPI 3.0 supports oneOf for specifying alternate schemas for the response. You can also add multiple response examples, such as successful and failed responses. Swagger UI supports multiple examples since v. 3.23.0. openapi: 3.0.0 ... paths: /something: get: responses: '200': description: Result content: application/json: schema: oneOf: - $ref: '#/components/schemas/ApiResultOk' - $ref: '#/components/schemas/ApiResultError' examples: success: summary: Example of a successful response value: result: true token: abcde12345 error: summary: Example of an error response value: result: false errorCode: "00002" errorMsg: "duplicated account already exist" components: schemas: ApiResultOk: type: object properties: result: type: boolean enum: [true] token: type: string required: - result - token ApiResultError: type: object properties: result: type: boolean enum: [false] errorCode: type: string example: "00002" errorMsg: type: string example: "duplicated account already exist" In OpenAPI/Swagger 2.0, you can only use a single schema per response code, so the most you can do is define the varying fields as optional and document their usage in the model description or operation description. swagger: "2.0" ... definitions: ApiResult: type: object properties: result: type: boolean token: type: string errorCode: type: string errorMsg: type: string required: - result
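On the consuming side, because both shapes arrive with HTTP 200, a client has to branch on the result field to know which schema it received. A small Python sketch (field names taken from the question, everything else illustrative):
import json

def handle_signup_response(body: str) -> str:
    data = json.loads(body)
    if data["result"]:
        # Matches the ApiResultOk alternative.
        return "signed up, token=" + data["token"]
    # Otherwise it matches the ApiResultError alternative.
    return "error " + data["errorCode"] + ": " + data["errorMsg"]

print(handle_signup_response('{"result": true, "token": "abcde12345"}'))
print(handle_signup_response('{"result": false, "errorCode": "00002", "errorMsg": "duplicated account already exist"}'))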
OpenAPI
47,447,403
25
I'm looking for some library or example code to format FastAPI validation messages into a human-readable format. E.g. this endpoint: @app.get("/") async def hello(name: str): return {"hello": name} will produce the following JSON output if the name query parameter is missing: { "detail":[ { "loc":[ "query", "name" ], "msg":"field required", "type":"value_error.missing" } ] } So my question is, how to: Transform it into something like "name field is required" (for all kinds of possible errors) to show in toasts. Use it to display form validation messages. Generate forms themselves from the API description, if that's possible.
FastAPI has great exception handling, so you can customize your exceptions in many ways. You can raise an HTTPException; HTTPException is a normal Python exception with additional data relevant for APIs. But you can't return it, you need to raise it, because it's a Python exception: from fastapi import HTTPException ... @app.get("/") async def hello(name: str): if not name: raise HTTPException(status_code=404, detail="Name field is required") return {"Hello": name} By adding name: str as a query parameter it automatically becomes required, so you need to add Optional: from typing import Optional ... @app.get("/") async def hello(name: Optional[str] = None): error = {"Error": "Name field is required"} if name: return {"Hello": name} return error $ curl 127.0.0.1:8000/?name=imbolc {"Hello":"imbolc"} ... $ curl 127.0.0.1:8000 {"Error":"Name field is required"} But in your case, I think the best way to handle errors in FastAPI is to override the validation_exception_handler: from fastapi import FastAPI, Request, status from fastapi.encoders import jsonable_encoder from fastapi.exceptions import RequestValidationError from fastapi.responses import JSONResponse ... @app.exception_handler(RequestValidationError) async def validation_exception_handler(request: Request, exc: RequestValidationError): return JSONResponse( status_code=status.HTTP_422_UNPROCESSABLE_ENTITY, content=jsonable_encoder({"detail": exc.errors(), "Error": "Name field is missing"}), ) ... @app.get("/") async def hello(name: str): return {"hello": name} You will get a response like this: $ curl 127.0.0.1:8000 { "detail":[ { "loc":[ "query", "name" ], "msg":"field required", "type":"value_error.missing" } ], "Error":"Name field is missing" } You can customize your content however you like: { "Error":"Name field is missing", "Customize":{ "This":"content", "Also you can":"make it simpler" } }
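To get messages like "name field is required" for every kind of validation error (the first bullet of the question), the handler can flatten exc.errors() instead of returning it raw. This is only a sketch; the exact msg wording ("field required" vs. "Field required") depends on the pydantic version, so treat the formatting as an assumption:
from fastapi import FastAPI, Request, status
from fastapi.exceptions import RequestValidationError
from fastapi.responses import JSONResponse

app = FastAPI()

@app.exception_handler(RequestValidationError)
async def validation_exception_handler(request: Request, exc: RequestValidationError):
    messages = []
    for error in exc.errors():
        # error["loc"] is e.g. ("query", "name"); the last element is the field name.
        field = error["loc"][-1]
        messages.append(f"{field} field: {error['msg']}")  # e.g. "name field: field required"
    return JSONResponse(
        status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
        content={"errors": messages},
    )

@app.get("/")
async def hello(name: str):
    return {"hello": name}
A request without the name parameter would then return something like {"errors": ["name field: field required"]}, which is simple enough to show directly in a toast.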
OpenAPI
58,642,528
25