question | answer | tag | question_id | score
---|---|---|---|---
I'm trying to do a simple insert into a postgres table, but am getting an error that the value I'm trying to insert is being interpreted as a column name
INSERT INTO "imageTagBusinessMainCategory"
(id, businessMainCategory)
VALUES
(DEFAULT, "auto dealer")
Where id is set up to be the primary key, and auto increment, and not null. Those are the boxes I ticked when I set up the table in phpPgAdmin.
I'm getting this error though:
ERROR: ERROR: column "auto dealer" does not exist
Query = INSERT
INTO "imageTagBusinessMainCategory"
(id, businessMainCategory)
VALUES
(DEFAULT,
"auto dealer")
I've put my table name in double quotes, as I've read here I should.
And used DEFAULT to auto-increment the id as I've read here I should.
Any ideas?
Thanks!
| Use 'auto dealer' instead. PostgreSQL interprets " as being quotes for identifiers, ' as being quotes for strings.
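Applied to the query from the question:
INSERT INTO "imageTagBusinessMainCategory"
(id, businessMainCategory)
VALUES
(DEFAULT, 'auto dealer');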
Also:
If this is a new project, just don't use mixed case tables; it is a
source of frustration later. Instead of being able to use any case in
your SQL statements, you must both quote the identifier name and get
the case correct.
There is no need to specify id/DEFAULT, you're
asking it to do what it would have done already. I haven't met a DBMS
that requires you to include columnName/DEFAULT if you want it to
put the default value in the column, so I don't think this extra KV
pair is going to make what is happening clearer to anyone reading
your code later.
| PostgreSQL | 12,428,496 | 138 |
I am trying to write the following query on postgresql:
select name, author_id, count(1),
(select count(1)
from names as n2
where n2.id = n1.id
and t2.author_id = t1.author_id
)
from names as n1
group by name, author_id
This would certainly work on Microsoft SQL Server, but it does not at all on PostgreSQL. I read its documentation a bit and it seems I could rewrite it as:
select name, author_id, count(1), total
from names as n1, (select count(1) as total
from names as n2
where n2.id = n1.id
and n2.author_id = t1.author_id
) as total
group by name, author_id
But that returns the following error on PostgreSQL: "subquery in FROM cannot refer to other relations of same query level". So I'm stuck. Does anyone know how I can achieve that?
Thanks
| I'm not sure I understand your intent perfectly, but perhaps the following would be close to what you want:
select n1.name, n1.author_id, count_1, total_count
from (select id, name, author_id, count(1) as count_1
from names
group by id, name, author_id) n1
inner join (select id, author_id, count(1) as total_count
from names
group by id, author_id) n2
on (n2.id = n1.id and n2.author_id = n1.author_id)
Unfortunately this adds the requirement of grouping the first subquery by id as well as name and author_id, which I don't think was wanted. I'm not sure how to work around that, though, as you need to have id available to join in the second subquery. Perhaps someone else will come up with a better solution.
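If the intent is each group's count alongside a per-author total, a window function (available since PostgreSQL 8.4) can avoid the self-join entirely — a sketch, not a drop-in answer, since the original intent is unclear:
select name, author_id,
       count(*) as count_1,
       sum(count(*)) over (partition by author_id) as total_count
from names
group by name, author_id;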
| PostgreSQL | 3,004,887 | 137 |
I want to remotely connect to a Postgres instance. I know we can do this using the psql command passing the hostname
I tried the following:
psql -U postgres -p 5432 -h hostname
I modified the /etc/postgresql/9.3/main/pg_hba.conf file on the target machine to allow remote connections by default
I added the following line to the file
host all all source_ip/32 trust
I restarted the cluster using
pg_ctlcluster 9.2 mycluster stop
pg_ctlcluster 9.2 mycluster start
However, when I try to connect from the source_ip, I still get the error
Is the server running on host "" and accepting TCP/IP connections on port 5432?
What am I doing wrong here?
| I resolved this issue using the steps below:
Whitelist your DB host with your network team to make sure you have access to the remote host
Install a recent version of the PostgreSQL client (psql)
Run the command below:
psql -h <REMOTE HOST> -p <REMOTE PORT> -U <DB_USER> <DB_NAME>
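If the error persists, the server may not be listening on the network at all — by default PostgreSQL only listens on localhost. You can check from a local psql session (changing the setting requires editing postgresql.conf and restarting the server):
SHOW listen_addresses;  -- default is 'localhost'; set listen_addresses = '*' in postgresql.conf for remote access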
| PostgreSQL | 32,824,388 | 136 |
I have a table where a column is of datatype timestamp, which contains multiple records per day.
I want to select all rows corresponding to a given day.
How do I do it?
| Assuming you actually mean timestamp because there is no datetime in Postgres
Cast the timestamp column to a date, that will remove the time part:
select *
from the_table
where the_timestamp_column::date = date '2015-07-15';
This will return all rows from July 15th.
Note that the above will not use an index on the_timestamp_column. If performance is critical, you need to either create an index on that expression or use a range condition:
select *
from the_table
where the_timestamp_column >= timestamp '2015-07-15 00:00:00'
and the_timestamp_column < timestamp '2015-07-16 00:00:00';
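For the expression-index route mentioned above, a minimal sketch (the index name is an assumption; this is only indexable because the column is timestamp without time zone, for which the cast to date is immutable):
create index idx_the_table_date on the_table (cast(the_timestamp_column as date));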
| PostgreSQL | 31,433,747 | 136 |
Is something like this possible?
INSERT INTO Table2 (val)
VALUES ((INSERT INTO Table1 (name) VALUES ('a_title') RETURNING id));
i.e. using the returned value as the value to insert a row in a second table, with a reference to the first table?
| You can do so starting with Postgres 9.1:
with rows as (
INSERT INTO Table1 (name) VALUES ('a_title') RETURNING id
)
INSERT INTO Table2 (val)
SELECT id
FROM rows
Before 9.1 (or as an alternative), if you're only interested in the id, you can do so with a trigger:
create function t1_ins_into_t2()
    returns trigger
as $$
begin
    insert into table2 (val) values (new.id);
    return new;
end;
$$ language plpgsql;
create trigger t1_ins_into_t2
    after insert on table1
    for each row
    execute procedure t1_ins_into_t2();
| PostgreSQL | 6,560,447 | 136 |
The table in question contains roughly ten million rows.
for event in Event.objects.all():
    print event
This causes memory usage to increase steadily to 4 GB or so, at which point the rows print rapidly. The lengthy delay before the first row printed surprised me – I expected it to print almost instantly.
I also tried Event.objects.iterator() which behaved the same way.
I don't understand what Django is loading into memory or why it is doing this. I expected Django to iterate through the results at the database level, which would mean the results would be printed at roughly a constant rate (rather than all at once after a lengthy wait).
What have I misunderstood?
(I don't know whether it's relevant, but I'm using PostgreSQL.)
| Nate C was close, but not quite.
From the docs:
You can evaluate a QuerySet in the following ways:
Iteration. A QuerySet is iterable, and it executes its database query the first time you iterate over it. For example, this will print the headline of all entries in the database:
for e in Entry.objects.all():
    print e.headline
So your ten million rows are retrieved, all at once, when you first enter that loop and get the iterating form of the queryset. The wait you experience is Django loading the database rows and creating objects for each one, before returning something you can actually iterate over. Then you have everything in memory, and the results come spilling out.
From my reading of the docs, iterator() does nothing more than bypass QuerySet's internal caching mechanisms. I think it might make sense for it to do a one-by-one thing, but that would conversely require ten million individual hits on your database. Maybe not all that desirable.
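For reference, the constant-memory mechanism at the database level is a server-side cursor, which fetches rows in batches instead of materializing the whole result on the client. A minimal SQL sketch (the table name app_event is an assumption):
BEGIN;
DECLARE event_cur CURSOR FOR SELECT * FROM app_event;
FETCH FORWARD 1000 FROM event_cur;  -- repeat until an empty result is returned
CLOSE event_cur;
COMMIT;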
Iterating over large datasets efficiently is something we still haven't gotten quite right, but there are some snippets out there you might find useful for your purposes:
Memory Efficient Django QuerySet iterator
batch querysets
QuerySet Foreach
| PostgreSQL | 4,222,176 | 136 |
I am working on a web application using Python (Django) and would like to know whether MySQL or PostgreSQL would be more suitable when deploying for production.
In one podcast Joel said that he had some problems with MySQL and the data wasn't consistent.
I would like to know whether someone has had any such problems. Also, when it comes to performance, which can be more easily tweaked?
| A note to future readers: The text below was last edited in August 2008. That's nearly 11 years ago as of this edit. Software can change rapidly from version to version, so before you go choosing a DBMS based on the advice below, do some research to see if it's still accurate.
Check for newer answers below.
Better?
MySQL is much more commonly provided by web hosts.
PostgreSQL is a much more mature product.
There's this discussion addressing your "better" question
Apparently, according to this web page, MySQL is fast when concurrent access levels are low, and when there are many more reads than writes. On the other hand, it exhibits low scalability with increasing loads and write/read ratios. PostgreSQL is relatively slow at low concurrency levels, but scales well with increasing load levels, while providing enough isolation between concurrent accesses to avoid slowdowns at high write/read ratios. It goes on to link to a number of performance comparisons, because these things are very... sensitive to conditions.
So if your decision factor is, "which is faster?" Then the answer is "it depends. If it really matters, test your application against both." And if you really, really care, you get in two DBAs (one who specializes in each database) and get them to tune the crap out of the databases, and then choose. It's astonishing how expensive good DBAs are; and they are worth every cent.
When it matters.
Which it probably doesn't, so just pick whichever database you like the sound of and go with it; better performance can be bought with more RAM and CPU, and more appropriate database design, and clever stored procedure tricks and so on - and all of that is cheaper and easier for random-website-X than agonizing over which to pick, MySQL or PostgreSQL, and specialist tuning from expensive DBAs.
Joel also said in that podcast that the comment would come back to bite him, because people would be saying that MySQL was a piece of crap - that Joel couldn't get a count of rows back. The plural of anecdote is not data. He said:
MySQL is the only database I've ever programmed against in my career that has had data integrity problems, where you do queries and you get nonsense answers back, that are incorrect.
and he also said:
It's just an anecdote. And that's one of the things that frustrates me, actually, about blogging or just the Internet in general. [...] There's just a weird tendency to make anecdotes into truths and I actually as a blogger I'm starting to feel a little bit guilty about this
| PostgreSQL | 27,435 | 136 |
How can I do such query in Postgres?
IF (select count(*) from orders) > 0
THEN
DELETE from orders
ELSE
INSERT INTO orders values (1,2,3);
| DO
$do$
BEGIN
IF EXISTS (SELECT FROM orders) THEN
DELETE FROM orders;
ELSE
INSERT INTO orders VALUES (1,2,3);
END IF;
END
$do$
There are no procedural elements in standard SQL. The IF statement is part of the default procedural language PL/pgSQL. You need to create a function or execute an ad-hoc statement with the DO command.
You need a semicolon (;) at the end of each statement in plpgsql (except for the final END).
You need END IF; at the end of the IF statement.
A sub-select must be surrounded by parentheses:
IF (SELECT count(*) FROM orders) > 0 ...
Or:
IF (SELECT count(*) > 0 FROM orders) ...
This is equivalent and much faster, though:
IF EXISTS (SELECT FROM orders) ...
Alternative
The additional SELECT is not needed. This does the same, faster:
DO
$do$
BEGIN
DELETE FROM orders;
IF NOT FOUND THEN
INSERT INTO orders VALUES (1,2,3);
END IF;
END
$do$
Though unlikely, concurrent transactions writing to the same table may interfere. To be absolutely sure, write-lock the table in the same transaction before proceeding, as sketched below.
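A minimal sketch of that locking variant — EXCLUSIVE MODE blocks concurrent writers (but not plain readers) until the transaction ends:
DO
$do$
BEGIN
   LOCK TABLE orders IN EXCLUSIVE MODE;  -- held until the end of the transaction

   DELETE FROM orders;

   IF NOT FOUND THEN
      INSERT INTO orders VALUES (1,2,3);
   END IF;
END
$do$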
| PostgreSQL | 11,299,037 | 135 |
I'm using Heroku with the Crane Postgres option and I was running a query on the database from my local machine when my local machine crashed. If I run
select * from pg_stat_activity
one of the entries has
<IDLE> in transaction
in the current_query_text column.
As a result, I can't drop the table that was being written to by the query that was terminated. I have tried using pg_cancel_backend(N) and it returns True but nothing seems to happen.
How can I terminate this process so that I can drop the table?
| This is a general PostgreSQL answer, and not specific to Heroku
Possibly easiest quickfix
The simple-stupid answer to this question may be ... just restart postgresql!
Here is another way of quickly killing all long-lasting "idle in transaction":
SELECT pg_terminate_backend(pid) from pg_stat_activity
WHERE state in ('<IDLE> in transaction', 'idle in transaction')
AND now()-xact_start>interval '1 minute';
More complicated quickfix
Find the PID by running this sql*):
SELECT now()-xact_start as trans_time, pid, query from pg_stat_activity
WHERE state != 'idle' ORDER BY xact_start;
You'll find the pid in the first (left) column, and the first (top) row is likely to be the query you'd like to terminate. Use select * to get more information about the queries. I'll assume the pid is 1234 below.
You may cancel a query through SQL (i.e. without shell access) as long as it's yours*) or you have super user access:
select pg_cancel_backend(1234);
That's a "friendly" request to cancel the 1234-query, and with some luck it will disappear after a while. If required, the following is more of a "hard terminate" command which could cause it to cancel more quickly:
select pg_terminate_backend(1234);
If you have shell access and root or postgres permissions you can also do it from the shell. To "cancel" one can do:
kill -INT 1234
and to "terminate", simply:
kill 1234
DO NOT:
kill -9 1234
... that will often result in the whole postgres server going down in flames, and then you may as well restart postgres. Postgres is pretty robust, so the data won't be corrupted, but I'd recommend against using "kill -9" in any case :-)
Permanent fix - dealing with the root cause
A long-lasting "idle in transaction" often means that the transaction was not terminated with a "commit" or a "rollback", meaning that the application is buggy or not properly designed to work with transactional databases - so to properly fix this issue, it's needed to ensure the application always does a commit or a rollback after running queries, even read-only queries (it's also possible to enable auto-commit).
Long-lasting "idle in transaction" should be avoided, as it may (dependent on your usage pattern) cause major performance problems.
Footnotes: The query may need mending on very old or future versions of PostgreSQL, and on very old versions of PostgreSQL only superuser can cancel queries.
| PostgreSQL | 11,291,456 | 135 |
The statement below gives me the date and time.
How could I modify the statement so that it returns only the date (and not the time)?
SELECT to_timestamp( TRUNC( CAST( epoch_ms AS bigint ) / 1000 ) );
| You use to_timestamp function and then cast the timestamp to date
select to_timestamp(epoch_column)::date;
You can use more standard cast instead of ::
select cast(to_timestamp(epoch_column) as date);
More details:
/* Current time */
select now(); -- returns timestamp
/* Epoch from current time;
Epoch is number of seconds since 1970-01-01 00:00:00+00 */
select extract(epoch from now());
/* Get back time from epoch */
-- Option 1 - use to_timestamp function
select to_timestamp( extract(epoch from now()));
-- Option 2 - add seconds to 'epoch'
select timestamp with time zone 'epoch'
+ extract(epoch from now()) * interval '1 second';
/* Cast timestamp to date */
-- Based on Option 1
select to_timestamp(extract(epoch from now()))::date;
-- Based on Option 2
select (timestamp with time zone 'epoch'
+ extract(epoch from now()) * interval '1 second')::date;
In your case:
select to_timestamp(epoch_ms / 1000)::date;
PostgreSQL Docs
| PostgreSQL | 16,609,722 | 134 |
I need to run a select without actually connecting to any table. I just have a predefined hardcoded set of values I need to loop over:
foo
bar
fooBar
And I want to loop through those values. I can do:
select 'foo', 'bar', 'fooBar';
But this returns it as one row:
?column? | ?column? | ?column?
----------+----------+----------
foo | bar | fooBar
(1 row)
I am using Postgresql.
| select a
from (
values ('foo'), ('bar'), ('fooBar')
) s(a);
http://www.postgresql.org/docs/current/static/queries-values.html
| PostgreSQL | 15,948,614 | 134 |
How can I query all GRANTS granted to an object in postgres?
For example, I have a table "mytable":
GRANT SELECT, INSERT ON mytable TO user1
GRANT UPDATE ON mytable TO user2
I need something which gives me:
user1: SELECT, INSERT
user2: UPDATE
| I already found it:
SELECT grantee, privilege_type
FROM information_schema.role_table_grants
WHERE table_name='mytable'
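To collapse that into one row per user, in the format shown in the question, string_agg (PostgreSQL 9.0+) works — a sketch:
SELECT grantee, string_agg(privilege_type, ', ') AS privileges
FROM information_schema.role_table_grants
WHERE table_name='mytable'
GROUP BY grantee;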
| PostgreSQL | 7,336,413 | 134 |
How to use newline character in PostgreSQL?
This is an incorrect script from my experiment:
select 'test line 1'||'\n'||'test line 2';
I want the SQL editor to display this result from my script above:
test line 1
test line 2
But unfortunately I just get this result from my script when I run it in sql editor:
test line 1 test line 2
| The backslash has no special meaning in SQL, so '\n' is a backslash followed by the character n.
To use "escape sequences" in a string literal you need to use an "extended" constant:
select 'test line 1'||E'\n'||'test line 2';
Another option is to use the chr() function:
select 'test line 1'||chr(10)||'test line 2';
Or simply put the newline in the string constant:
select 'test line 1
test line 2';
Whether or not this is actually displayed as two lines in your SQL client, depends on your SQL client.
UPDATE: a good answer from @thedayturns, where you can have a simpler query:
select E'test line 1\ntest line 2';
| PostgreSQL | 36,028,908 | 133 |
I'm trying to integrate PostgreSQL and SQLAlchemy but SQLAlchemy.create_all() is not creating any tables from my models.
My code:
from flask import Flask
from flask.ext.sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2://login:pass@localhost/flask_app'
db = SQLAlchemy(app)
db.create_all()
db.session.commit()
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True)
    email = db.Column(db.String(120), unique=True)

    def __init__(self, username, email):
        self.username = username
        self.email = email

    def __repr__(self):
        return '<User %r>' % self.username
admin = User('admin', '[email protected]')
guest = User('guest', '[email protected]')
db.session.add(admin)
db.session.add(guest)
db.session.commit()
users = User.query.all()
print users
But I get this error: sqlalchemy.exc.ProgrammingError: (ProgrammingError) relation "user" does not exist
How can I fix this?
| You should put your model class before create_all() call, like this:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2://login:pass@localhost/flask_app'
db = SQLAlchemy(app)
class User(db.Model):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(80), unique=True)
    email = db.Column(db.String(120), unique=True)

    def __init__(self, username, email):
        self.username = username
        self.email = email

    def __repr__(self):
        return '<User %r>' % self.username
with app.app_context():
    db.create_all()
    db.session.add(User('admin', '[email protected]'))
    db.session.add(User('guest', '[email protected]'))
    db.session.commit()
    users = User.query.all()
    print(users)
If your models are declared in a separate module, import them before calling create_all().
Say, the User model is in a file called models.py,
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql+psycopg2://login:pass@localhost/flask_app'
db = SQLAlchemy(app)
# See important note below
from models import User
with app.app_context():
    db.create_all()
    db.session.add(User('admin', '[email protected]'))
    db.session.add(User('guest', '[email protected]'))
    db.session.commit()
    users = User.query.all()
    print(users)
Important note: It is important that you import your models after initializing the db object since, in your models.py you also need to import the db object from this module.
| PostgreSQL | 20,744,277 | 133 |
I was trying to delete PostgreSQL user:
DROP USER ryan;
I received this error:
Error in query:
ERROR: role "ryan" cannot be dropped because some objects depend on it
DETAIL: privileges for database mydatabase
I looked for a solution from these threads:
PostgreSQL - how to quickly drop a user with existing privileges
How to drop user in postgres if it has depending objects
Still have the same error.
This happens after I grant all permissions to user "ryan" with:
GRANT ALL PRIVILEGES ON DATABASE mydatabase TO ryan;
GRANT ALL PRIVILEGES ON SCHEMA public TO ryan;
| DROP USER (or DROP ROLE, same thing) cannot proceed while the role still owns anything or has any granted privileges on other objects.
Get rid of all privileges with DROP OWNED (which isn't too obvious from the wording). The manual:
[...] Any privileges granted to the given roles on objects in the current
database and on shared objects (databases, tablespaces) will also be revoked.
So the reliable sequence of commands to drop a role is:
REASSIGN OWNED BY ryan TO placeholder_role; -- some trusted role
DROP OWNED BY ryan;
Rather not re-assign to a superuser, which could lead to unintended privilege escalation. (Think of SECURITY DEFINER functions ...)
Run both commands in every database of the same cluster where the role owns anything or has any privileges!
And finally:
DROP USER ryan;
REASSIGN OWNED changes ownership for all objects currently owned by the role.
DROP OWNED then only revokes privileges (ownerships out of the way).
Alternatively, you can skip REASSIGN OWNED. Then DROP OWNED will (also) drop all objects owned by the user. (Are you sure?!)
Related:
Drop a role with privileges (with a function to generate commands for all relevant DBs)
Find objects linked to a PostgreSQL role
| PostgreSQL | 51,256,454 | 132 |
I installed Postgres with this command
sudo apt-get install postgresql postgresql-client postgresql-contrib libpq-dev
Using psql --version on terminal I get psql (PostgreSQL) 9.3.4
then I installed pgadmin with
sudo apt-get install pgadmin3
Later I opened the UI and created the server with the connection information, but I got an error.
How can I fix it?
| Modify password for role postgres:
sudo -u postgres psql postgres
alter user postgres with password 'postgres';
Now connect to pgadmin using username postgres and password postgres
Now you can create roles & databases using pgAdmin
How to change PostgreSQL user password?
| PostgreSQL | 24,917,832 | 132 |
What is the best way to find records with duplicate values across multiple columns using Postgres, and Activerecord?
I found this solution here:
User.find(:all, :group => [:first, :email], :having => "count(*) > 1" )
But it doesn't seem to work with postgres. I'm getting this error:
PG::GroupingError: ERROR: column "parts.id" must appear in the GROUP BY clause or be used in an aggregate function
| Tested & Working Version
User.select(:first,:email).group(:first,:email).having("count(*) > 1")
Also, this is a little unrelated but handy. If you want to see how many times each combination was found, put .size at the end:
User.select(:first,:email).group(:first,:email).having("count(*) > 1").size
and you'll get a result set back that looks like this:
{[nil, nil]=>512,
["Joe", "[email protected]"]=>23,
["Jim", "[email protected]"]=>36,
["John", "[email protected]"]=>21}
Thought that was pretty cool and hadn't seen it before.
Credit to Taryn, this is just a tweaked version of her answer.
| PostgreSQL | 21,669,202 | 132 |
In Ubuntu, I installed PostgreSQL database and created a superuser for the server.
If I forgot the password of the postgresql superuser, how can I reset it (the password) for that user?
I tried uninstalling it and then installing it again but the previously created superuser is retained.
| Assuming you're the administrator of the machine, Ubuntu has granted you the right to sudo to run any command as any user.
Also assuming you did not restrict the rights in the pg_hba.conf file (in the /etc/postgresql/9.1/main directory), it should contain this line as the first rule:
# Database administrative login by Unix domain socket
local all postgres peer
(About the file location: 9.1 is the major postgres version and main the name of your "cluster". It will differ if using a newer version of postgres or non-default names. Use the pg_lsclusters command to obtain this information for your version/system).
Anyway, if the pg_hba.conf file does not have that line, edit the file, add it, and reload the service with sudo service postgresql reload.
Then you should be able to log in with psql as the postgres superuser with this shell command:
sudo -u postgres psql
Once inside psql, issue the SQL command:
ALTER USER postgres PASSWORD 'newpassword';
In this command, postgres is the name of a superuser. If the user whose password is forgotten was ritesh, the command would be:
ALTER USER ritesh PASSWORD 'newpassword';
References: PostgreSQL 9.1.13 Documentation, Chapter 19. Client Authentication
Keep in mind that you need to type postgres with a single S at the end
If leaving the password in clear text in the history of commands or the server log is a problem, psql provides an interactive meta-command to avoid that, as an alternative to ALTER USER ... PASSWORD:
\password username
It asks for the password with a double blind input, then hashes it according to the password_encryption setting and issue the ALTER USER command to the server with the hashed version of the password, instead of the clear text version.
| PostgreSQL | 14,588,212 | 132 |
I need to sort a PostgreSQL table ascending by a date/time field, e.g. last_updated.
But that field is allowed to be empty or null, and I want records with null in last_updated to come before non-null last_updated.
Is this possible?
order by last_updated asc -- and null last_updated records first ??
| Postgres has the NULLS FIRST | LAST modifiers for ORDER BY expression:
... ORDER BY last_updated NULLS FIRST
The typical use case is with descending sort order (DESC), which produces the complete inversion of the default ascending order (ASC) with null values first - which is often not desirable. To sort NULL values last:
... ORDER BY last_updated DESC NULLS LAST
To support the query with an index, make it match:
CREATE INDEX foo_idx ON tbl (last_updated DESC NULLS LAST);
Postgres can read btree indexes backwards, so that's effectively almost the same as just:
CREATE INDEX foo_idx ON tbl (last_updated);
For some query plans it matters where NULL values are appended. See:
Performance impact of view on aggregate function vs result set limiting
| PostgreSQL | 9,510,509 | 132 |
Is there a postgresql function that will return a timestamp rounded to the nearest minute? The input value is a timestamp and the return value should be a timestamp.
| Use the built-in function date_trunc(text, timestamp), for example:
select date_trunc('minute', now())
Edit: This truncates to the most recent minute. To get a rounded result, add 30 seconds to the timestamp first, for example:
select date_trunc('minute', now() + interval '30 second')
This returns the nearest minute.
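Wrapped into a reusable function, since the question asks for one (the function name is an assumption):
create function round_minute(ts timestamp) returns timestamp as $$
    select date_trunc('minute', ts + interval '30 second');
$$ language sql immutable;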
See Postgres Date/Time Functions and Operators for more info
| PostgreSQL | 6,195,439 | 132 |
I would like to define a best practice for storing timestamps in my Postgres database in the context of a multi-timezone project.
I can
choose TIMESTAMP WITHOUT TIME ZONE and remember which timezone was used at insertion time for this field
choose TIMESTAMP WITHOUT TIME ZONE and add another field which will contain the name of the timezone that was used at insertion time
choose TIMESTAMP WITH TIME ZONE and insert the timestamps accordingly
I have a slight preference for option 3 (timestamp with time zone) but would like to have an educated opinion on the matter.
|
First off, PostgreSQL’s time handling and arithmetic is fantastic and Option 3 is fine in the general case. It is, however, an incomplete view of time and timezones and can be supplemented:
Store the name of a user’s time zone as a user preference (e.g. America/Los_Angeles, not -0700).
Have user events/time data submitted local to their frame of reference (most likely an offset from UTC, such as -0700).
In application, convert the time to UTC and store it using a TIMESTAMP WITH TIME ZONE column.
Return time requests local to a user's time zone (i.e. convert from UTC to America/Los_Angeles).
Set your database's timezone to UTC.
This option doesn’t always work because it can be hard to get a user’s time zone, hence the hedged advice to use TIMESTAMP WITH TIME ZONE for lightweight applications. That said, let me explain some background aspects of this Option 4 in more detail.
Like Option 3, the reason for the WITH TIME ZONE is because the time at which something happened is an absolute moment in time. WITHOUT TIME ZONE yields a relative time. Don't ever, ever, ever mix absolute and relative TIMESTAMPs.
From a programmatic and consistency perspective, ensure all calculations are made using UTC as the time zone. This isn’t a PostgreSQL requirement, but it helps when integrating with other programming languages or environments. Setting a CHECK on the column to make sure the write to the time stamp column has a time zone offset of 0 is a defensive position that prevents a few classes of bugs (e.g. a script dumps data to a file and something else sorts the time data using a lexical sort). Again, PostgreSQL doesn’t need this to do date calculations correctly or to convert between time zones (i.e. PostgreSQL is very adept at converting times between any two arbitrary time zones). To ensure data going in to the database is stored with an offset of zero:
CREATE TABLE my_tbl (
my_timestamp TIMESTAMP WITH TIME ZONE NOT NULL DEFAULT NOW(),
CHECK(EXTRACT(TIMEZONE FROM my_timestamp) = '0')
);
test=> SET timezone = 'America/Los_Angeles';
SET
test=> INSERT INTO my_tbl (my_timestamp) VALUES (NOW());
ERROR: new row for relation "my_tbl" violates check constraint "my_tbl_my_timestamp_check"
test=> SET timezone = 'UTC';
SET
test=> INSERT INTO my_tbl (my_timestamp) VALUES (NOW());
INSERT 0 1
It's not 100% perfect, but it provides a strong enough anti-footshooting measure that makes sure the data is already converted to UTC. There are lots of opinions on how to do this, but this seems to be the best in practice from my experience.
Criticisms of database time zone handling is largely justified (there are plenty of databases that handle this with great incompetence), however PostgreSQL’s handling of timestamps and timezones is pretty awesome (despite a few "features" here and there). For example, one such feature:
-- Make sure we're all working off of the same local time zone
test=> SET timezone = 'America/Los_Angeles';
SET
test=> SELECT NOW();
now
-------------------------------
2011-05-27 15:47:58.138995-07
(1 row)
test=> SELECT NOW() AT TIME ZONE 'UTC';
timezone
----------------------------
2011-05-27 22:48:02.235541
(1 row)
Note that AT TIME ZONE 'UTC' strips time zone info and creates a relative TIMESTAMP WITHOUT TIME ZONE using your target’s frame of reference (UTC).
When converting from an incomplete TIMESTAMP WITHOUT TIME ZONE to a TIMESTAMP WITH TIME ZONE, the missing time zone is inherited from your connection:
test=> SET timezone = 'America/Los_Angeles';
SET
test=> SELECT EXTRACT(TIMEZONE_HOUR FROM NOW());
date_part
-----------
-7
(1 row)
test=> SELECT EXTRACT(TIMEZONE_HOUR FROM TIMESTAMP WITH TIME ZONE '2011-05-27 22:48:02.235541');
date_part
-----------
-7
(1 row)
-- Now change to UTC
test=> SET timezone = 'UTC';
SET
-- Create an absolute time with timezone offset:
test=> SELECT NOW();
now
-------------------------------
2011-05-27 22:48:40.540119+00
(1 row)
-- Creates a relative time in a given frame of reference (i.e. no offset)
test=> SELECT NOW() AT TIME ZONE 'UTC';
timezone
----------------------------
2011-05-27 22:48:49.444446
(1 row)
test=> SELECT EXTRACT(TIMEZONE_HOUR FROM NOW());
date_part
-----------
0
(1 row)
test=> SELECT EXTRACT(TIMEZONE_HOUR FROM TIMESTAMP WITH TIME ZONE '2011-05-27 22:48:02.235541');
date_part
-----------
0
(1 row)
The bottom line:
store a user’s time zone as a named label (e.g. America/Los_Angeles) and not an offset from UTC (e.g. -0700)
use UTC for everything unless there is a compelling reason to store a non-zero offset
treat any timestamp arriving with a non-zero UTC offset as an input error
never mix and match relative and absolute timestamps
also use UTC as the timezone in the database if possible
Random programming language note: Python's datetime data type is very good at maintaining the distinction between absolute vs relative times (albeit frustrating at first until you supplement it with a library like PyTZ).
EDIT
Let me explain the difference between relative vs absolute a bit more.
Absolute time is used to record an event. Examples: "User 123 logged in" or "a graduation ceremonies start at 2011-05-28 2pm PST." Regardless of your local time zone, if you could teleport to where the event occurred, you could witness the event happening. Most time data in a database is absolute (and therefore should be TIMESTAMP WITH TIME ZONE, ideally with a +0 offset and a textual label representing the rules governing the particular timezone - not an offset).
A relative event would be to record or schedule the time of something from the perspective of a yet-to-be-determined time zone. Examples: "our business's doors open at 8am and close at 9pm", "let's meet every Monday at 7am for a weekly breakfast meeting," or "every Halloween at 8pm." In general, relative time is used in a template or factory for events, and absolute time is used for almost everything else. There is one rare exception that’s worth pointing out which should illustrate the value of relative times. For future events that are far enough in the future where there could be uncertainty about the absolute time at which something could occur, use a relative timestamp. Here’s a real world example:
Suppose it’s the year 2004 and you need to schedule a delivery on October 31st in 2008 at 1pm on the West Coast of the US (i.e. America/Los_Angeles/PST8PDT). If you stored that using absolute time as '2008-10-31 21:00:00.000000+00'::TIMESTAMP WITH TIME ZONE, the delivery would have shown up at 2pm because the US Government passed the Energy Policy Act of 2005 that changed the rules governing daylight savings time. In 2004 when the delivery was scheduled, the date 10-31-2008 would have been Pacific Standard Time (+0800), but starting in year 2005+ timezone databases recognized that 10-31-2008 would have been Pacific Daylight Savings time (+0700). Storing a relative timestamp with the time zone would have resulted in a correct delivery schedule because a relative timestamp is immune to Congress’ ill-informed tampering. Where the cutoff between using relative vs absolute times for scheduling things is, is a fuzzy line, but my rule of thumb is that scheduling for anything in the future further than 3-6mo should make use of relative timestamps (scheduled = absolute vs planned = relative ???).
The other/last type of relative time is the INTERVAL. Example: "the session will time out 20 minutes after a user logs in". An INTERVAL can be used correctly with either absolute timestamps (TIMESTAMP WITH TIME ZONE) or relative timestamps (TIMESTAMP WITHOUT TIME ZONE). It is equally correct to say, "a user session expires 20min after a successful login (login_utc + session_duration)" or "our morning breakfast meeting can only last 60 minutes (recurring_start_time + meeting_length)".
Last bits of confusion: DATE, TIME, TIME WITHOUT TIME ZONE and TIME WITH TIME ZONE are all relative data types. For example: '2011-05-28'::DATE represents a relative date since you have no time zone information which could be used to identify midnight. Similarly, '23:23:59'::TIME is relative because you don't know either the time zone or the DATE represented by the time. Even with '23:59:59-07'::TIME WITH TIME ZONE, you don't know what the DATE would be. And lastly, DATE with a time zone is not in fact a DATE, it is a TIMESTAMP WITH TIME ZONE:
test=> SET timezone = 'America/Los_Angeles';
SET
test=> SELECT '2011-05-11'::DATE AT TIME ZONE 'UTC';
timezone
---------------------
2011-05-11 07:00:00
(1 row)
test=> SET timezone = 'UTC';
SET
test=> SELECT '2011-05-11'::DATE AT TIME ZONE 'UTC';
timezone
---------------------
2011-05-11 00:00:00
(1 row)
Putting dates and time zones in databases is a good thing, but it is easy to get subtly incorrect results. Minimal additional effort is required to store time information correctly and completely, however that doesn’t mean the extra effort is always required.
| PostgreSQL | 6,151,084 | 132 |
When creating a table in PostgreSQL, default constraint names will assigned if not provided:
CREATE TABLE example (
a integer,
b integer,
UNIQUE (a, b)
);
But using ALTER TABLE to add a constraint it seems a name is mandatory:
ALTER TABLE example ADD CONSTRAINT my_explicit_constraint_name UNIQUE (a, b);
This has caused some naming inconsistencies on projects I've worked on, and prompts the following questions:
Is there a simple way to add a constraint to an extant table with the name it would have received if added during table creation?
If not, should default names be avoided altogether to prevent inconsistencies?
| The standard names for indexes in PostgreSQL are:
{tablename}_{columnname(s)}_{suffix}
where the suffix is one of the following:
pkey for a Primary Key constraint
key for a Unique constraint
excl for an Exclusion constraint
idx for any other kind of index
fkey for a Foreign key
check for a Check constraint
Standard suffix for sequences is
seq for all sequences
Proof of your UNIQUE-constraint:
NOTICE: CREATE TABLE / UNIQUE will
create implicit index
"example_a_b_key" for table "example"
| PostgreSQL | 4,107,915 | 132 |
OperationalError at /admin/
FATAL: Peer authentication failed for user "myuser"
This is the error I am receiving when I try to get to my Django admin site. I had been using MySQL database no problem. I am new to PostgreSQL, but decided to switch because the host I ultimately plan to use for this project does not have MySQL.
Therefore, I figured I could go through the process of installing PostgreSQL, run a syncdb and be all set.
Problem is that I cannot seem to get my app to connect to the database. I can login to PostgreSQL via command line or desktop app that I downloaded. Just not in the script.
Also, I can use manage.py shell to access the db just fine.
Any thoughts?
| I took a peek at the exception and noticed it had to do with my connection settings. I went back to settings.py and saw I did not have a HOST set up. I added localhost and voila.
My settings.py did not have a HOST for MySQL database, but I needed to add one for PostgreSQL to work.
In my case, I added localhost to the HOST setting and it worked.
Here is the DATABASES section from my settings.py.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': '<MYDATABASE>',
        'USER': '<MYUSER>',
        'PASSWORD': '<MYPASSWORD>',
        'HOST': 'localhost',  # the missing piece of the puzzle
        'PORT': '',           # optional, I don't need this since I'm using the standard port
    }
}
| PostgreSQL | 8,167,602 | 131 |
Postgres is the database
Can I use a NULL value for a IN clause? example:
SELECT *
FROM tbl_name
WHERE id_field IN ('value1', 'value2', 'value3', NULL)
I want to limit to these four values.
I have tried the above statement and it doesn't work, well it executes but doesn't add the records with NULL id_fields.
I have also tried to add an OR condition but this just makes the query run and run with no end in sight.
SELECT *
FROM tbl_name
WHERE other_condition = bar
AND another_condition = foo
AND id_field IN ('value1', 'value2', 'value3')
OR id_field IS NULL
Any suggestions?
|
An in statement will be parsed identically to field=val1 or field=val2 or field=val3. Putting a null in there will boil down to field=null which won't work.
(Comment by Marc B)
I would do this for clarity
SELECT *
FROM tbl_name
WHERE
(id_field IN ('value1', 'value2', 'value3') OR id_field IS NULL)
| PostgreSQL | 6,362,112 | 130 |
I'm in the process of creating a table and it made me wonder.
If I store, say, cars that have a make (e.g. BMW, Audi, etc.), will it make any difference to the query speed if I store the make as an int or a varchar?
So is
SELECT * FROM table WHERE make = 5 AND ...;
Faster/slower than
SELECT * FROM table WHERE make = 'audi' AND ...;
or will the speed be more or less the same?
| Int comparisons are faster than varchar comparisons, for the simple fact that ints take up much less space than varchars.
This holds true both for unindexed and indexed access. The fastest way to go is an indexed int column.
As I see you've tagged the question postgresql, you might be interested in the space usage of different data types:
int fields occupy between 2 and 8 bytes, with 4 being usually more than enough ( -2147483648 to +2147483647 )
character types occupy 4 bytes plus the actual strings.
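A minimal sketch of the normalized layout this implies — an indexed integer foreign key into a lookup table (all names are assumptions):
CREATE TABLE make (
    id   serial PRIMARY KEY,
    name text NOT NULL UNIQUE
);

CREATE TABLE car (
    id      serial PRIMARY KEY,
    make_id integer NOT NULL REFERENCES make (id)
);

CREATE INDEX car_make_id_idx ON car (make_id);

-- the fast lookup then becomes:
SELECT c.* FROM car c JOIN make m ON m.id = c.make_id WHERE m.name = 'audi';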
| PostgreSQL | 2,346,920 | 130 |
I get the following error when inserting data from mysql into postgres.
Do I have to manually remove all null characters from my input data?
Is there a way to get postgres to do this for me?
ERROR: invalid byte sequence for encoding "UTF8": 0x00
| PostgreSQL doesn't support storing NULL (\0x00) characters in text fields (this is obviously different from the database NULL value, which is fully supported).
Source: http://www.postgresql.org/docs/9.1/static/sql-syntax-lexical.html#SQL-SYNTAX-STRINGS-UESCAPE
If you need to store the NULL character, you must use a bytea field - which should store anything you want, but won't support text operations on it.
Given that PostgreSQL doesn't support it in text values, there's no good way to get it to remove it. You could import your data into bytea and later convert it to text using a special function (in perl or something, maybe?), but it's likely going to be easier to do that in preprocessing before you load it.
| PostgreSQL | 1,347,646 | 130 |
We have Spring-boot/Hibernate/PostgreSQL application in our project and use Hikari as the connection pool.
We keep running into the following problem: after a few hours the number of active connections grows to the limit and we get errors like this (full stack trace is at the end of the question):
Caused by: java.sql.SQLTransientConnectionException: HikariPool-0 - Connection is not available, request timed out after 30000ms.
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:213) ~[HikariCP-2.4.1.jar:na]
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:163) ~[HikariCP-2.4.1.jar:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:85) ~[HikariCP-2.4.1.jar:na]
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:139) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:380) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:228) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
... 126 common frames omitted
Here is the version info:
Spring-boot version: 1.2.3.RELEASE
HikariCP version: 2.4.1
Hibernate version: 4.3.6.Final
Postgresql jdbc: 9.3-1102-jdbc41
Server version: Apache Tomcat/8.0.23
JVM Version: 1.8.0_45-b14
JPA/Hibernate config:
jpa:
    database-platform: org.hibernate.dialect.PostgreSQL82Dialect
    database: POSTGRESQL
    openInView: false
    show_sql: false
    generate-ddl: false
    hibernate:
        ddl-auto: none
        naming-strategy: org.hibernate.cfg.EJB3NamingStrategy
    properties:
        hibernate.cache.use_second_level_cache: true
        hibernate.cache.use_query_cache: false
        hibernate.generate_statistics: false
        hibernate.cache.region.factory_class: org.hibernate.cache.ehcache.SingletonEhCacheRegionFactory
        hibernate.enable_lazy_load_no_trans: true
        hibernate.event.merge.entity_copy_observer: allow
HikariCP config:
2015-10-06 12:26:44,252 DEBUG [localhost-startStop-1] HikariConfig: HikariPool-0 - configuration:
2015-10-06 12:26:44,274 DEBUG [localhost-startStop-1] HikariConfig: allowPoolSuspension.............false
2015-10-06 12:26:44,274 DEBUG [localhost-startStop-1] HikariConfig: autoCommit......................true
2015-10-06 12:26:44,274 DEBUG [localhost-startStop-1] HikariConfig: catalog.........................
2015-10-06 12:26:44,274 DEBUG [localhost-startStop-1] HikariConfig: connectionInitSql...............
2015-10-06 12:26:44,274 DEBUG [localhost-startStop-1] HikariConfig: connectionTestQuery.............
2015-10-06 12:26:44,274 DEBUG [localhost-startStop-1] HikariConfig: connectionTimeout...............30000
2015-10-06 12:26:44,274 DEBUG [localhost-startStop-1] HikariConfig: dataSource......................
2015-10-06 12:26:44,274 DEBUG [localhost-startStop-1] HikariConfig: dataSourceClassName.............org.postgresql.ds.PGSimpleDataSource
2015-10-06 12:26:44,274 DEBUG [localhost-startStop-1] HikariConfig: dataSourceJNDI..................
2015-10-06 12:26:44,274 DEBUG [localhost-startStop-1] HikariConfig: dataSourceProperties............{user=postgres, password=<masked>, databaseName=lms, serverName=*.*.*.*:5432}
2015-10-06 12:26:44,274 DEBUG [localhost-startStop-1] HikariConfig: driverClassName.................
2015-10-06 12:26:44,274 DEBUG [localhost-startStop-1] HikariConfig: healthCheckProperties...........{}
2015-10-06 12:26:44,274 DEBUG [localhost-startStop-1] HikariConfig: healthCheckRegistry.............
2015-10-06 12:26:44,274 DEBUG [localhost-startStop-1] HikariConfig: idleTimeout.....................30000
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: initializationFailFast..........true
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: isolateInternalQueries..........false
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: jdbc4ConnectionTest.............false
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: jdbcUrl.........................
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: leakDetectionThreshold..........0
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: maxLifetime.....................1800000
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: maximumPoolSize.................20
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: metricRegistry..................com.codahale.metrics.MetricRegistry@63d2fc48
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: metricsTrackerFactory...........
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: minimumIdle.....................10
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: password........................<masked>
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: poolName........................HikariPool-0
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: readOnly........................false
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: registerMbeans..................false
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: scheduledExecutorService........
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: threadFactory...................
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: transactionIsolation............
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: username........................
2015-10-06 12:26:44,275 DEBUG [localhost-startStop-1] HikariConfig: validationTimeout...............5000
2015-10-06 12:26:44,276 INFO [localhost-startStop-1] HikariDataSource: HikariPool-0 - is starting.
2015-10-06 12:26:44,432 DEBUG [localhost-startStop-1] PoolElf: HikariPool-0 - Connection.setNetworkTimeout() is not supported (Method org.postgresql.jdbc4.Jdbc4Connection.getNetworkTimeout() is not yet implemented.)
Full stack trace:
2015-10-06 12:09:36,885 DEBUG [http-nio-8080-exec-25] PoolElf: HikariPool-0 - Reset (nothing) on connection org.postgresql.jdbc4.Jdbc4Connection@3cc4d919
2015-10-06 12:09:42,651 DEBUG [Hikari housekeeper (pool HikariPool-0)] HikariPool: Before cleanup pool HikariPool-0 stats (total=20, active=20, idle=0, waiting=1)
2015-10-06 12:09:42,652 DEBUG [Hikari housekeeper (pool HikariPool-0)] HikariPool: After cleanup pool HikariPool-0 stats (total=20, active=20, idle=0, waiting=1)
2015-10-06 12:10:06,885 DEBUG [http-nio-8080-exec-25] HikariPool: Timeout failure pool HikariPool-0 stats (total=20, active=20, idle=0, waiting=0)
2015-10-06 12:10:06,885 WARN [http-nio-8080-exec-25] SqlExceptionHelper: SQL Error: 0, SQLState: null
2015-10-06 12:10:06,885 ERROR [http-nio-8080-exec-25] SqlExceptionHelper: HikariPool-0 - Connection is not available, request timed out after 30000ms.
2015-10-06 12:10:06,885 DEBUG [http-nio-8080-exec-25] PoolElf: HikariPool-0 - Reset (nothing) on connection org.postgresql.jdbc4.Jdbc4Connection@3cc4d919
2015-10-06 12:10:06,886 ERROR [http-nio-8080-exec-25] ErrorPageFilter: Forwarding to error page from request [/api/courses/121387/quizzes/i6fa2562510bf8578712380a87a433e97/student/30175] due to exception [org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.hibernate.exception.JDBCConnectionException: Could not open connection]
java.lang.RuntimeException: org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.hibernate.exception.JDBCConnectionException: Could not open connection
at lms.security.xauth.XAuthTokenFilter.doFilter(XAuthTokenFilter.java:86) ~[XAuthTokenFilter.class:na]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:120) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.header.HeaderWriterFilter.doFilterInternal(HeaderWriterFilter.java:64) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:91) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.context.request.async.WebAsyncManagerIntegrationFilter.doFilterInternal(WebAsyncManagerIntegrationFilter.java:53) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:213) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:176) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [catalina.jar:8.0.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [catalina.jar:8.0.23]
at org.springframework.boot.actuate.trace.WebRequestTraceFilter.doFilterInternal(WebRequestTraceFilter.java:102) ~[spring-boot-actuator-1.2.3.RELEASE.jar:1.2.3.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [catalina.jar:8.0.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [catalina.jar:8.0.23]
at org.springframework.web.filter.CharacterEncodingFilter.doFilterInternal(CharacterEncodingFilter.java:85) ~[spring-web-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [catalina.jar:8.0.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [catalina.jar:8.0.23]
at org.springframework.boot.context.web.ErrorPageFilter.doFilter(ErrorPageFilter.java:113) [spring-boot-1.2.3.RELEASE.jar:1.2.3.RELEASE]
at org.springframework.boot.context.web.ErrorPageFilter.access$000(ErrorPageFilter.java:59) [spring-boot-1.2.3.RELEASE.jar:1.2.3.RELEASE]
at org.springframework.boot.context.web.ErrorPageFilter$1.doFilterInternal(ErrorPageFilter.java:88) [spring-boot-1.2.3.RELEASE.jar:1.2.3.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.boot.context.web.ErrorPageFilter.doFilter(ErrorPageFilter.java:106) [spring-boot-1.2.3.RELEASE.jar:1.2.3.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [catalina.jar:8.0.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [catalina.jar:8.0.23]
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:219) [catalina.jar:8.0.23]
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:106) [catalina.jar:8.0.23]
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:502) [catalina.jar:8.0.23]
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:142) [catalina.jar:8.0.23]
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:79) [catalina.jar:8.0.23]
at org.apache.catalina.valves.AbstractAccessLogValve.invoke(AbstractAccessLogValve.java:617) [catalina.jar:8.0.23]
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:88) [catalina.jar:8.0.23]
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:518) [catalina.jar:8.0.23]
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1091) [tomcat-coyote.jar:8.0.23]
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:668) [tomcat-coyote.jar:8.0.23]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.doRun(NioEndpoint.java:1521) [tomcat-coyote.jar:8.0.23]
at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:1478) [tomcat-coyote.jar:8.0.23]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_45]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_45]
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61) [tomcat-util.jar:8.0.23]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_45]
Caused by: org.springframework.web.util.NestedServletException: Request processing failed; nested exception is org.hibernate.exception.JDBCConnectionException: Could not open connection
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:978) ~[spring-webmvc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:868) ~[spring-webmvc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:648) ~[servlet-api.jar:na]
at org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:842) ~[spring-webmvc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729) ~[servlet-api.jar:na]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:291) [catalina.jar:8.0.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [catalina.jar:8.0.23]
at com.codahale.metrics.servlet.AbstractInstrumentedFilter.doFilter(AbstractInstrumentedFilter.java:104) ~[metrics-servlet-3.1.1.jar:3.1.1]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [catalina.jar:8.0.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [catalina.jar:8.0.23]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52) ~[tomcat-websocket.jar:8.0.23]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [catalina.jar:8.0.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [catalina.jar:8.0.23]
at org.springframework.boot.actuate.autoconfigure.EndpointWebMvcAutoConfiguration$ApplicationContextHeaderFilter.doFilterInternal(EndpointWebMvcAutoConfiguration.java:291) ~[spring-boot-actuator-1.2.3.RELEASE.jar:1.2.3.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [catalina.jar:8.0.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [catalina.jar:8.0.23]
at org.springframework.web.filter.HiddenHttpMethodFilter.doFilterInternal(HiddenHttpMethodFilter.java:77) ~[spring-web-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107) [spring-web-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:239) [catalina.jar:8.0.23]
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206) [catalina.jar:8.0.23]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:316) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.invoke(FilterSecurityInterceptor.java:126) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.access.intercept.FilterSecurityInterceptor.doFilter(FilterSecurityInterceptor.java:90) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.access.ExceptionTranslationFilter.doFilter(ExceptionTranslationFilter.java:114) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.session.SessionManagementFilter.doFilter(SessionManagementFilter.java:122) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.authentication.AnonymousAuthenticationFilter.doFilter(AnonymousAuthenticationFilter.java:111) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:168) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:48) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:330) ~[spring-security-web-4.0.0.RELEASE.jar:na]
at lms.security.xauth.XAuthTokenFilter.doFilter(XAuthTokenFilter.java:84) ~[XAuthTokenFilter.class:na]
... 46 common frames omitted
Caused by: org.hibernate.exception.JDBCConnectionException: Could not open connection
at org.hibernate.exception.internal.SQLExceptionTypeDelegate.convert(SQLExceptionTypeDelegate.java:65) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:49) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:126) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:112) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:235) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.getConnection(LogicalConnectionImpl.java:171) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doBegin(JdbcTransaction.java:67) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.begin(AbstractTransactionImpl.java:162) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.internal.SessionImpl.beginTransaction(SessionImpl.java:1435) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.collection.internal.AbstractPersistentCollection.withTemporarySessionIfNeeded(AbstractPersistentCollection.java:250) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.collection.internal.AbstractPersistentCollection.readElementByIndex(AbstractPersistentCollection.java:376) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.collection.internal.PersistentMap.get(PersistentMap.java:164) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at java_util_Map$get.call(Unknown Source) ~[na:na]
at lms.service.QuizService.processAnswers(QuizService.groovy:66) ~[QuizService.class:na]
at lms.service.QuizService$$FastClassBySpringCGLIB$$4dcc8beb.invoke(<generated>) ~[spring-core-4.1.6.RELEASE.jar:na]
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) ~[spring-core-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:717) ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor$1.proceedWithInvocation(TransactionInterceptor.java:99) ~[spring-tx-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:281) ~[spring-tx-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:96) ~[spring-tx-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:653) ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at lms.service.QuizService$$EnhancerBySpringCGLIB$$37a60c0a.processAnswers(<generated>) ~[spring-core-4.1.6.RELEASE.jar:na]
at lms.web.rest.CourseResource.saveQuizResults(CourseResource.java:537) ~[CourseResource.class:na]
at lms.web.rest.CourseResource$$FastClassBySpringCGLIB$$e3d2ba4d.invoke(<generated>) ~[spring-core-4.1.6.RELEASE.jar:na]
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204) ~[spring-core-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:717) ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157) ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.security.access.intercept.aopalliance.MethodSecurityInterceptor.invoke(MethodSecurityInterceptor.java:68) ~[spring-security-core-4.0.0.RELEASE.jar:na]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at com.ryantenney.metrics.spring.TimedMethodInterceptor.invoke(TimedMethodInterceptor.java:48) ~[metrics-spring-3.0.4.jar:na]
at com.ryantenney.metrics.spring.TimedMethodInterceptor.invoke(TimedMethodInterceptor.java:34) ~[metrics-spring-3.0.4.jar:na]
at com.ryantenney.metrics.spring.AbstractMetricMethodInterceptor.invoke(AbstractMetricMethodInterceptor.java:59) ~[metrics-spring-3.0.4.jar:na]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179) ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:653) ~[spring-aop-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at lms.web.rest.CourseResource$$EnhancerBySpringCGLIB$$ff854301.saveQuizResults(<generated>) ~[spring-core-4.1.6.RELEASE.jar:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.8.0_45]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[na:1.8.0_45]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_45]
at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_45]
at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221) ~[spring-web-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:137) ~[spring-web-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:110) ~[spring-webmvc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:776) ~[spring-webmvc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:705) ~[spring-webmvc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85) ~[spring-webmvc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:959) ~[spring-webmvc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:893) ~[spring-webmvc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:966) ~[spring-webmvc-4.1.6.RELEASE.jar:4.1.6.RELEASE]
... 81 common frames omitted
Caused by: java.sql.SQLTransientConnectionException: HikariPool-0 - Connection is not available, request timed out after 30000ms.
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:213) ~[HikariCP-2.4.1.jar:na]
at com.zaxxer.hikari.pool.HikariPool.getConnection(HikariPool.java:163) ~[HikariCP-2.4.1.jar:na]
at com.zaxxer.hikari.HikariDataSource.getConnection(HikariDataSource.java:85) ~[HikariCP-2.4.1.jar:na]
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:139) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:380) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:228) ~[hibernate-core-4.3.6.Final.jar:4.3.6.Final]
... 126 common frames omitted
2015-10-06 12:10:12,651 DEBUG [Hikari housekeeper (pool HikariPool-0)] HikariPool: Before cleanup pool HikariPool-0 stats (total=20, active=19, idle=1, waiting=0)
2015-10-06 12:10:12,652 DEBUG [Hikari housekeeper (pool HikariPool-0)] HikariPool: After cleanup pool HikariPool-0 stats (total=20, active=19, idle=1, waiting=0)
Any clue would be helpful!
| I managed to fix it finally. The problem is not related to HikariCP.
The problem persisted because of some complex methods in REST controllers executing multiple changes in the DB through JPA repositories. For some reason, calls to these interfaces resulted in a growing number of "frozen" active connections, exhausting the pool. Either annotating these methods as @Transactional or wrapping all the logic in a single call to a transactional service method seems to solve the problem.
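For illustration, a minimal sketch of that fix in Java: the class and method names are taken from the stack trace above, while the method body and parameters are hypothetical.

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class QuizService {

    // Running all repository calls inside one transaction means Hibernate
    // no longer opens ad-hoc temporary sessions for lazy collections,
    // which is what was pinning connections from the Hikari pool.
    @Transactional
    public void processAnswers(/* hypothetical parameters */) {
        // ... JPA repository calls ...
    }
}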
| PostgreSQL | 32,968,530 | 129 |
I am trying to connect to PostgreSQL but I am getting this error.
org.postgresql.util.PSQLException: Connection refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
My pg_hba.conf file is like this.
TYPE DATABASE USER CIDR-ADDRESS METHOD
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5
I would be much obliged if anyone would be so kind as to explain what's going on here and how I should correct it.
| The error you quote has nothing to do with pg_hba.conf; it's failing to connect, not failing to authorize the connection.
Do what the error message says:
Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections
You haven't shown the command that produces the error. Assuming you're connecting on localhost port 5432 (the defaults for a standard PostgreSQL install), then either:
PostgreSQL isn't running
PostgreSQL isn't listening for TCP/IP connections (listen_addresses in postgresql.conf)
PostgreSQL is only listening on IPv4 (0.0.0.0 or 127.0.0.1) and you're connecting on IPv6 (::1) or vice versa. This seems to be an issue on some older Mac OS X versions that have weird IPv6 socket behaviour, and on some older Windows versions.
PostgreSQL is listening on a different port to the one you're connecting on
(unlikely) there's an iptables rule blocking loopback connections
(If you are not connecting on localhost, it may also be a network firewall that's blocking TCP/IP connections, but I'm guessing you're using the defaults since you didn't say).
So ... check those:
ps -f -u postgres should list postgres processes
sudo lsof -n -u postgres |grep LISTEN or sudo netstat -ltnp | grep postgres should show the TCP/IP addresses and ports PostgreSQL is listening on
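If those checks show that PostgreSQL is not listening on TCP/IP at all, the relevant postgresql.conf lines look like this (a sketch; a server restart is required after changing listen_addresses):

listen_addresses = 'localhost'    # or '*' to listen on all interfaces
port = 5432                       # the default port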
BTW, I think you must be on an old version. On my 9.3 install, the error is rather more detailed:
$ psql -h localhost -p 12345
psql: could not connect to server: Connection refused
Is the server running on host "localhost" (::1) and accepting
TCP/IP connections on port 12345?
| PostgreSQL | 20,825,734 | 129 |
Since my approach for a test query which I worked on in this question did not work out, I'm trying something else now. Is there a way to tell pg's random() function to get me only numbers between 1 and 10?
| If by numbers between 1 and 10 you mean any float that is >= 1 and < 10, then it's easy:
select random() * 9 + 1
This can be easily tested with:
# select min(i), max(i) from (
select random() * 9 + 1 as i from generate_series(1,1000000)
) q;
min | max
-----------------+------------------
1.0000083274208 | 9.99999571684748
(1 row)
If you want integers, that are >= 1 and < 10, then it's simple:
select trunc(random() * 9 + 1)
And again, simple test:
# select min(i), max(i) from (
select trunc(random() * 9 + 1) as i from generate_series(1,1000000)
) q;
min | max
-----+-----
1 | 9
(1 row)
| PostgreSQL | 1,400,505 | 129 |
Is it possible to combine multiple CTEs in a single query?
I am looking for a way to get a result like this:
WITH cte1 AS (
...
),
WITH RECURSIVE cte2 AS (
...
),
WITH cte3 AS (
...
)
SELECT ... FROM cte3 WHERE ...
As you can see, I have one recursive CTE and two non-recursive ones.
| Use the key word WITH once at the top. If any of your Common Table Expressions (CTE) are recursive (rCTE) you have to add the keyword RECURSIVE at the top once also, even if not all CTEs are recursive:
WITH RECURSIVE
cte1 AS (...) -- can still be non-recursive
, cte2 AS (SELECT ...
UNION ALL
SELECT ...) -- recursive term
, cte3 AS (...)
SELECT ... FROM cte3 WHERE ...
The manual:
If RECURSIVE is specified, it allows a SELECT subquery to
reference itself by name.
Bold emphasis mine. And, even more insightful:
Another effect of RECURSIVE is that WITH queries need not be ordered:
a query can reference another one that is later in the list. (However,
circular references, or mutual recursion, are not implemented.)
Without RECURSIVE, WITH queries can only reference sibling WITH
queries that are earlier in the WITH list.
Bold emphasis mine again. Meaning that the order of WITH clauses is meaningless when the RECURSIVE key word has been used.
BTW, since cte1 and cte2 in the example are not referenced in the outer SELECT and are plain SELECT commands themselves (no side effects), they are never executed (unless referenced in cte3).
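For illustration, a minimal self-contained sketch combining one non-recursive and one recursive CTE (the names and values are made up):

WITH RECURSIVE
  params AS (SELECT 5 AS max_n)                     -- non-recursive CTE
, seq AS (                                          -- recursive CTE
    SELECT 1 AS n
    UNION ALL
    SELECT n + 1 FROM seq, params WHERE n < max_n
  )
, doubled AS (SELECT n, n * 2 AS n2 FROM seq)       -- references an earlier CTE
SELECT * FROM doubled WHERE n2 > 4;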
| PostgreSQL | 35,248,217 | 128 |
For Postgres, I keep getting this error multiple times even though I have already set the location of the bin folder to the path variable in Windows 8. Is there something else I'm missing?
| This answer has been added to the documentation, but in case you are still looking.
An update because I was trying it on Windows 10 you do need to set the path to the following:
;C:\Program Files\PostgreSQL\14\bin;C:\Program Files\PostgreSQL\9.5\lib
PS : 14 is the current version, check whatever version you are on.
You can do that either through the CMD by using set PATH [the path]
or from my
computer => properties => advanced system settings=> Environment
Variables => System Variables
Then search for path.
Important: don't replace the PATHs that are already there just add one beside them as follows ;C:\Program Files\PostgreSQL\9.5\bin ;C:\Program Files\PostgreSQL\9.5\lib
Please note: On windows 10, if you follow this: computer => properties => advanced system settings=> Environment Variables => System Variables> select PATH, you actually get the option to add new row. Click Edit, add the /bin and /lib folder locations and save changes.
Then close your command prompt if it's open and then start it again
try psql --version
If it gives you an answer then you are good to go if not try echo %PATH% and see if the path you set was added or not and if it's added is it added correctly or not.
Important note:
Replace 9.5 with your current version number. As of 2021, that is 13.
For 2022 is 14.
| PostgreSQL | 30,401,460 | 128 |
Postgres 9.1 database contains tables yksus1 .. yksus9 in the public schema. pgAdmin shows those definitions as in the code below.
How can I move those tables to the firma1 schema?
Other tables in the firma1 schema have foreign key references to those tables' primary keys. Foreign key references to those tables come only from tables in the firma1 schema.
Some of those tables contain data.
If the tables are moved to the firma1 schema, foreign key references should also be updated to the firma1.yksusN tables.
Table structures cannot be changed.
It looks like the primary key sequences are already in the firma1 schema, so those should not be moved.
Version string PostgreSQL 9.1.2 on x86_64-unknown-linux-gnu, compiled by gcc-4.4.real (Debian 4.4.5-8) 4.4.5, 64-bit
CREATE TABLE yksus1
(
yksus character(10) NOT NULL DEFAULT ((nextval('firma1.yksus1_yksus_seq'::regclass))::text || '_'::text),
veebis ebool,
nimetus character(70),
"timestamp" character(14) DEFAULT to_char(now(), 'YYYYMMDDHH24MISS'::text),
username character(10) DEFAULT "current_user"(),
klient character(40),
superinden character(20),
telefon character(10),
aadress character(50),
tlnr character(15),
rus character(60),
CONSTRAINT yksus1_pkey PRIMARY KEY (yksus)
);
ALTER TABLE yksus1
OWNER TO mydb_owner;
CREATE TRIGGER yksus1_trig
BEFORE INSERT OR UPDATE OR DELETE
ON yksus1
FOR EACH STATEMENT
EXECUTE PROCEDURE setlastchange();
The other tables are similar:
CREATE TABLE yksus2
(
yksus character(10) NOT NULL DEFAULT ((nextval('firma1.yksus2_yksus_seq'::regclass))::text || '_'::text),
nimetus character(70),
"timestamp" character(14) DEFAULT to_char(now(), 'YYYYMMDDHH24MISS'::text),
osakond character(10),
username character(10) DEFAULT "current_user"(),
klient character(40),
superinden character(20),
telefon character(10),
aadress character(50),
tlnr character(15),
rus character(60),
CONSTRAINT yksus2_pkey PRIMARY KEY (yksus),
CONSTRAINT yksus2_osakond_fkey FOREIGN KEY (osakond)
REFERENCES yksus2 (yksus) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE NO ACTION DEFERRABLE INITIALLY IMMEDIATE
);
ALTER TABLE yksus2
OWNER TO mydb_owner;
CREATE TRIGGER yksus2_trig
BEFORE INSERT OR UPDATE OR DELETE
ON yksus2
FOR EACH STATEMENT
EXECUTE PROCEDURE setlastchange();
| ALTER TABLE yksus1
SET SCHEMA firma1;
More details in the manual: http://www.postgresql.org/docs/current/static/sql-altertable.html
Associated indexes, constraints, and sequences owned by table columns are moved as well.
Not sure about the trigger function, though; there is an equivalent ALTER FUNCTION .. SET SCHEMA ... as well.
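Since the question involves nine similarly named tables, here is a sketch that moves them all in one go. It assumes they all live in public and match the yksusN naming pattern (the underscore in LIKE matches exactly one character):

DO $$
DECLARE
    tbl text;
BEGIN
    FOR tbl IN
        SELECT tablename FROM pg_tables
        WHERE schemaname = 'public' AND tablename LIKE 'yksus_'
    LOOP
        EXECUTE format('ALTER TABLE public.%I SET SCHEMA firma1', tbl);
    END LOOP;
END$$;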
| PostgreSQL | 17,770,117 | 128 |
1 S postgres 5038 876 0 80 0 - 11962 sk_wai 09:57 ? 00:00:00 postgres: postgres my_app ::1(45035) idle
1 S postgres 9796 876 0 80 0 - 11964 sk_wai 11:01 ? 00:00:00 postgres: postgres my_app ::1(43084) idle
I see a lot of them. We are trying to fix our connection leak. But meanwhile, we want to set a timeout for these idle connections, maybe 5 minutes max.
| It sounds like you have a connection leak in your application because it fails to close pooled connections. You aren't having issues just with <idle> in transaction sessions, but with too many connections overall.
Killing connections is not the right answer for that, but it's an OK-ish temporary workaround.
Rather than re-starting PostgreSQL to boot all other connections off a PostgreSQL database, see: How do I detach all other users from a postgres database? and How to drop a PostgreSQL database if there are active connections to it? . The latter shows a better query.
For setting timeouts, as @Doon suggested see How to close idle connections in PostgreSQL automatically?, which advises you to use PgBouncer to proxy for PostgreSQL and manage idle connections. This is a very good idea if you have a buggy application that leaks connections anyway; I very strongly recommend configuring PgBouncer.
A TCP keepalive won't do the job here, because the app is still connected and alive, it just shouldn't be.
In PostgreSQL 9.2 and above, you can use the new state_change timestamp column and the state field of pg_stat_activity to implement an idle connection reaper. Have a cron job run something like this:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'regress'
AND pid <> pg_backend_pid()
AND state = 'idle'
AND state_change < current_timestamp - INTERVAL '5' MINUTE;
In older versions you need to implement complicated schemes that keep track of when the connection went idle. Do not bother; just use pgbouncer.
| PostgreSQL | 13,236,160 | 128 |
I'm receiving this message but I can't find the postgresql.conf file:
OperationalError: could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "???" and accepting
TCP/IP connections on port 5432?
could not connect to server: Connection refused (0x0000274D/10061)
Is the server running on host "???" and accepting
TCP/IP connections on port 5432?
| On my machine:
C:\Program Files\PostgreSQL\8.4\data\postgresql.conf
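If the server is running and you can connect with psql or pgAdmin, you can also ask it directly where its configuration file lives:

SHOW config_file;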
| PostgreSQL | 4,465,475 | 128 |
I have just installed Postgres and have been tinkering with it and various configurations for 1-2 hours.
I am stuck on being unable to change to the postgres-user
$ su - postgres yields the following error: su: unknown login: postgres
$ sudo -u postgres psql yields the following error: sudo: unknown user: postgres
These attempts are made as the normal user. Trying them as root has the exact same results. I have installed postgres via Homebrew on OS X, and I have read the instructions multiple times.
| psql: Logs me in with my default username
psql -U postgres: Logs me in as the postgres user
Sudo doesn't seem to be required for me.
I use Postgres.app for my OS X postgres database. It removed the headache of making sure the installation was working and the database server was launched properly. Check it out here: http://postgresapp.com
Edit: Credit to @Erwin Brandstetter for correcting my use of the arguments.
| PostgreSQL | 21,122,598 | 127 |
I have a Rails app whose databases are in SQLite (both dev and production). Since I am moving to Heroku, I want to convert my database to PostgreSQL.
Anyway, I heard that the local development database does not need to be changed from SQLite, so I don't need to change that; however, how do I go about changing the production environment from SQLite to PostgreSQL?
Has anyone ever done this before and can help?
P.S. I'm not sure what exactly this process is called, but I've heard about migrating the database from SQLite to PostgreSQL. Is that what needs to be done?
| You can change your database.yml to this instead of using the out of the box sqlite one:
development:
adapter: postgresql
encoding: utf8
database: project_development
pool: 5
username:
password:
test: &TEST
adapter: postgresql
encoding: utf8
database: project_test
pool: 5
username:
password:
production:
adapter: postgresql
encoding: utf8
database: project_production
pool: 5
username:
password:
cucumber:
<<: *TEST
| PostgreSQL | 6,710,654 | 127 |
I've managed to bork my local development environment.
All my local Rails apps are now giving the error:
PGError
could not connect to server: Permission denied
Is the server running locally and accepting
connections on Unix domain socket "/var/pgsql_socket/.s.PGSQL.5432"?
I've no idea what's caused this.
While searching for a solution I've updated all bundled gems, updated system gems, updated MacPorts. No joy.
Others have reported this issue when upgrading from OSX Leopard to Lion, due to confusion over which version of Postgres should be used (i.e., OSX version or MacPorts version). I've been running Lion for several months, so it seems strange that this should happen now.
I'm reluctant to mess around too much without first understanding what the problem is. How can I debug this methodically?
How can I determine how many versions of PostgreSQL are on my system, which one is being accessed, and where it is located? How do I fix this if the wrong PostgreSQL is being used?
Sorry for the noob questions. I'm still learning how this works! Thanks for any pointers.
EDIT
Some updates based on suggestions and comments below.
I tried to run pg_lsclusters which returned a command not found error.
I then tried to locate my pg_hba.conf file and found these three sample files:
/opt/local/share/postgresql84/pg_hba.conf.sample
/opt/local/var/macports/software/postgresql84/8.4.7_0/opt/local/share/postgresql84/pg_hba.conf.sample
/usr/share/postgresql/pg_hba.conf.sample
So I assume 3 versions of PSQL are installed? Macports, OSX default and ???.
I then did a search for the launchctl startup script ps -ef | grep postgres which returned
0 56 1 0 11:41AM ?? 0:00.02 /opt/local/bin/daemondo --label=postgresql84-server --start-cmd /opt/local/etc/LaunchDaemons/org.macports.postgresql84-server/postgresql84-server.wrapper start ; --stop-cmd /opt/local/etc/LaunchDaemons/org.macports.postgresql84-server/postgresql84-server.wrapper stop ; --restart-cmd /opt/local/etc/LaunchDaemons/org.macports.postgresql84-server/postgresql84-server.wrapper restart ; --pid=none
500 372 1 0 11:42AM ?? 0:00.17 /opt/local/lib/postgresql84/bin/postgres -D /opt/local/var/db/postgresql84/defaultdb
500 766 372 0 11:43AM ?? 0:00.37 postgres: writer process
500 767 372 0 11:43AM ?? 0:00.24 postgres: wal writer process
500 768 372 0 11:43AM ?? 0:00.16 postgres: autovacuum launcher process
500 769 372 0 11:43AM ?? 0:00.08 postgres: stats collector process
501 4497 1016 0 12:36PM ttys000 0:00.00 grep postgres
I've posted the contents of postgresql84-server.wrapper at http://pastebin.com/Gj5TpP62.
I tried to run port load postgresql184-server but received an error Error: Port postgresql184-server not found.
I'm still very confused how to fix this, and appreciate any "for dummies" pointers.
Thanks!
EDIT2
This issue began after I had some problems with daemondo. My local Rails apps were crashing with an application error along the lines of "daemondo gem can not be found". I then went through a series of bundle updates, gem updates, port updates and brew updates to try and find the issue.
Could this error be an issue with daemondo?
| This really looks like a file permissions error. Unix domain sockets are files and have user permissions just like any other. It looks as though the OSX user attempting to access the database does not have file permissions to access the socket file. To confirm this I've done some tests on Ubuntu and psql to try to generate the same error (included below).
You need to check the permissions on the socket file and its directories /var and /var/pgsql_socket. Your Rails app (OSX user) must have execute (x) permissions on these directories (preferably grant everyone permissions) and the socket should have full permissions (wrx). You can use ls -lAd <file> to check these, and if any of them are a symlink you need to check the file or dir the link points to.
You can change the permissions on the dir for youself, but the socket is configured by postgres in postgresql.conf. This can be found in the same directory as pg_hba.conf (You'll have to figure out which one). Once you've set the permissions you will need to restart postgresql.
# postgresql.conf should contain...
unix_socket_directory = '/var/run/postgresql' # dont worry if yours is different
#unix_socket_group = '' # default is fine here
#unix_socket_permissions = 0777 # check this one and uncomment if necessary.
EDIT:
I've done a quick search on Google which you may wish to look into to see if it is relevant.
This might well result in any attempt to find your config file failing.
http://www.postgresqlformac.com/server/howto_edit_postgresql_confi.html
Error messages:
User not found in pg_hba.conf
psql: FATAL: no pg_hba.conf entry for host "[local]", user "couling", database "main", SSL off
User failed password auth:
psql: FATAL: password authentication failed for user "couling"
Missing unix socket file:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Unix socket exists, but server not listening to it.
psql: could not connect to server: Connection refused
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Bad file permissions on unix socket file:
psql: could not connect to server: Permission denied
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
| PostgreSQL | 8,465,508 | 126 |
I have a defined an array field in postgresql 9.4 database:
character varying(64)[]
Can I have an empty array e.g. {} for default value of that field?
What will be the syntax for setting so?
I'm getting the following error when setting just the braces {}:
SQL error:
ERROR: syntax error at or near "{"
LINE 1: ...public"."accounts" ALTER COLUMN "pwd_history" SET DEFAULT {}
^
In statement:
ALTER TABLE "public"."accounts" ALTER COLUMN "pwd_history" SET DEFAULT {}
| You need to use the explicit array initializer and cast that to the correct type:
ALTER TABLE public.accounts
ALTER COLUMN pwd_history SET DEFAULT array[]::varchar[];
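An equivalent form uses the array literal instead of the constructor; both produce an empty varchar array:

ALTER TABLE public.accounts
    ALTER COLUMN pwd_history SET DEFAULT '{}'::varchar[];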
| PostgreSQL | 30,933,266 | 125 |
I have tried using the PGPASSWORD environment variable and .pgpass, and neither of these two will allow me to authenticate to the database. I have chmod'd .pgpass to appropriate permissions and also tried:
export PGPASSWORD=mypass and PGPASSWORD=mypass
The password DOES contain a \; however, I was encasing it in single quotes (PGPASS='mypass\') and it still will not authenticate.
I'm running:
pg_dump dbname -U username -Fc
and I still receive
pg_dump: [archiver (db)] connection to database "dbname" failed: FATAL: Peer authentication failed for user "username"
| The Quick Solution
The problem is that it's trying to perform local peer authentication based on your current username. If you would like to use a password you must specify the hostname with -h.
pg_dump dbname -U username -h localhost -F c
Explanation
This is due to the following in your pg_hba.conf
local all all peer
host all all 127.0.0.1/32 md5
This tells Postgres to use peer authentication for local users which requires the postgres username to match your current system username. The second line refers to connections using a hostname and will allow you to authenticate with a password via the md5 method.
My Preferred Development Config
NOTE: This should only be used on single-user workstations. This could lead to a major security vulnerability on a production or multi-user machine.
When developing against a local postgres instance I like to change my local authentication method to trust. This will allow connecting to postgres via a local unix socket as any user with no password. It can be done by simply changing peer above to trust and reloading postgres.
# Don't require a password for local connections
local all all trust
| PostgreSQL | 10,430,645 | 125 |
I have PostgreSQL 9.2 Installed in Windows 7 and I have windows XP installed in Virtual Machine, how do I connect these two databases and allow remote access to add/edit the database from both Systems ?
| In order to remotely access a PostgreSQL database, you must set the two main PostgreSQL configuration files:
postgresql.conf
pg_hba.conf
Here is a brief description of how you can set them (note that the following description is purely indicative: to configure a machine safely, you must be familiar with all the parameters and their meanings).
First of all configure PostgreSQL service to listen on port 5432 on all network interfaces in Windows 7 machine:
open the file postgresql.conf (usually located in C:\Program Files\PostgreSQL\9.2\data) and set the parameter
listen_addresses = '*'
Check the network address of the Windows XP virtual machine, and set the parameters in the pg_hba.conf file (located in the same directory as postgresql.conf) so that PostgreSQL can accept connections from the virtual machine's hosts.
For example, if the Windows XP machine has the IP address 192.168.56.2, add this line to the pg_hba.conf file:
host all all 192.168.56.1/24 md5
this way, PostgreSQL will accept connections from all hosts on the network 192.168.56.XXX.
Restart the PostgreSQL service in Windows 7 (Services -> PostgreSQL 9.2: right click and restart the service). Install pgAdmin on the Windows XP machine and try to connect to PostgreSQL.
| PostgreSQL | 18,580,066 | 124 |
I have a column arr which is of array type.
I need to get rows where the arr column contains the value s.
This query:
SELECT * FROM table WHERE arr @> ARRAY['s']
gives the error:
ERROR: operator does not exist: character varying[] @> text[]
Why does it not work?
p.s. I know about any() operator, but why doesn't @> work?
| There is no @> operator defined between character varying[] and text[], and an array literal like ARRAY['s'] defaults to text[]; cast it to match the column's type:
SELECT * FROM table WHERE arr @> ARRAY['s']::varchar[]
| PostgreSQL | 16,606,357 | 124 |
database.yml:
# SQLite version 3.x
# gem install sqlite3
#
# Ensure the SQLite 3 gem is defined in your Gemfile
# gem 'sqlite3'
development:
adapter: postgresql
encoding: utf8
database: sampleapp_dev #can be anything unique
#host: localhost
#username: 7stud
#password:
#adapter: sqlite3
#database: db/development.sqlite3
pool: 5
timeout: 5000
# Warning: The database defined as "test" will be erased and
# re-generated from your development database when you run "rake".
# Do not set this db to the same as development or production.
test:
adapter: postgresql
encoding: utf8
database: sampleapp_test #can be anything unique
#host: localhost
#username: 7stud
#password:
#adapter: sqlite3
#database: db/test.sqlite3
pool: 5
timeout: 5000
production:
adapter: postgresql
database: sampleapp_prod #can be anything unique
#host: localhost
#username: 7stud
#password:
#adapter: sqlite3
#database: db/production.sqlite3
pool: 5
timeout: 5000
pg_hba.conf:
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all trust
# IPv4 local connections:
host all all 127.0.0.1/32 trust
# IPv6 local connections:
host all all ::1/128 trust
# Allow replication connections from localhost, by a user with the
# replication privilege.
#local replication postgres md5
#host replication postgres 127.0.0.1/32 md5
#host replication postgres ::1/128 md5
I changed the METHOD in the first three lines from md5 to trust, but I still get the error.
And no matter what combinations of things I try in database.yml, when I do:
~/rails_projects/sample_app4_0$ bundle exec rake db:create:all
I always get the error:
fe_sendauth: no password supplied
I followed this tutorial to get things setup:
https://pragtob.wordpress.com/2012/09/12/setting-up-postgresql-for-ruby-on-rails-on-linux
Mac OSX 10.6.8
PostgreSQL 9.2.4 installed via enterpriseDB installer
Install dir: /Library/PostgreSQL/9.2
| After making changes to the pg_hba.conf or postgresql.conf files, the cluster needs to be reloaded to pick up the changes.
From the command line: pg_ctl reload
From within a db (as superuser): select pg_reload_conf();
From PGAdmin: right-click db name, select "Reload Configuration"
Note: the reload is not sufficient for changes like enabling archiving, changing shared_buffers, etc -- those require a cluster restart.
| PostgreSQL | 17,996,957 | 123 |
I have the following simplified table in Postgres:
User Model
id (UUID)
uid (varchar)
name (varchar)
I would like a query that can find the user on either its UUID id or its text uid.
SELECT * FROM user
WHERE id = 'jsdfhiureeirh' or uid = 'jsdfhiureeirh';
My query generates an invalid input syntax for uuid since I'm obviously not using a UUID in this instance.
How do I polish this query or check if the value is a valid UUID?
| Found it! Casting the UUID column to ::text stops the error. Not sure about the performance hit but on about 5000 rows I get more than adequate performance.
SELECT * FROM user
WHERE id::text = 'jsdfhiureeirh' OR uid = 'jsdfhiureeirh';
SELECT * FROM user
WHERE id::text = '33bb9554-c616-42e6-a9c6-88d3bba4221c'
OR uid = '33bb9554-c616-42e6-a9c6-88d3bba4221c';
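If you would rather validate the input before comparing it against the uuid column, a sketch using a regular expression for the canonical 8-4-4-4-12 form:

SELECT 'jsdfhiureeirh' ~* '^[0-9a-f]{8}-([0-9a-f]{4}-){3}[0-9a-f]{12}$' AS is_valid_uuid;
-- returns false here, so you would fall back to comparing against uid instead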
| PostgreSQL | 46,433,459 | 122 |
I have the following table called module_data. Currently it has three rows of entries:
id data
0ab5203b-9157-4934-8aba-1512afb0abd0 {"title":"Board of Supervisors Meeting","id":"1i3Ytw1mw98"}
7ee33a18-63da-4432-8967-bde5a44347a0 {"title":"Board of Supervisors Meeting","id":"4-dNAg2mn6o"}
8d71ca35-74eb-4751-b635-114bf04843f1 {"title":"COPD 101", "id":"l9O0jCR-sxg"}
Column data's datatype is jsonb. I'm trying to query it using the LIKE operator. Something like the following:
SELECT * FROM module_data WHERE title LIKE '%Board%';
I've been looking at the jsonb support and there doesn't seem to be a like operator. If anyone has any advice.
| If the data column is of text type, cast it to json and use ->>:
select * from module_data where data::json->>'title' like '%Board%'
If it's already json:
select * from module_data where data->>'title' like '%Board%'
| PostgreSQL | 42,918,348 | 122 |
I have a PostgreSQL database on a Linux system that I want to access from my Windows PC. But the only Windows binaries I have been able to find are the full installer, which includes the database server and client.
Is it possible to get a client-only Windows binary install for PostgreSQL from anywhere?
(To clarify, I want the standard PostgreSQL client, psql - not a GUI client or independent tool).
| As of 2020, when you click download the full installer from here , click next and next and you get the option to install only the command line - tools
. Remember to add the path to the bin folder in the PATH variable.
| PostgreSQL | 33,854,798 | 122 |
I've been instructed "not to bother with LIKE" and use ~ instead. What is wrong with LIKE and how is ~ different?
Does ~ have a name in this context or do people say "use the tilde operator"?
| ~ is the regular expression operator, and has the capabilities implied by that. You can specify a full range of regular expression wildcards and quantifiers; see the documentation for details. It is certainly more powerful than LIKE, and should be used when that power is needed, but they serve different purposes.
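A few illustrative comparisons (the strings are made up):

SELECT 'abcdef' LIKE 'abc%';   -- true: % is the LIKE wildcard for any sequence
SELECT 'abcdef' ~ '^abc';      -- true: ^ anchors the regex at the start
SELECT 'abcdef' ~ 'c.e';       -- true: . matches any single character
SELECT 'abcdef' LIKE 'c.e';    -- false: LIKE has no regex metacharacters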
| PostgreSQL | 12,452,395 | 122 |
I have a table of about 100M rows that I am going to copy to alter, adding an index. I'm not so concerned with the time it takes to create the new table, but will the created index be more efficient if I alter the table before inserting any data or insert the data first and then add the index?
Creating the index after the data insert is the more efficient way (it is even often recommended to drop indexes before a batch import and recreate them afterwards).
Synthetic example (PostgreSQL 9.1, slow development machine, one million rows):
CREATE TABLE test1(id serial, x integer);
INSERT INTO test1(id, x) SELECT x.id, x.id*100 FROM generate_series(1,1000000) AS x(id);
-- Time: 7816.561 ms
CREATE INDEX test1_x ON test1 (x);
-- Time: 4183.614 ms
Insert and then create index - about 12 sec
CREATE TABLE test2(id serial, x integer);
CREATE INDEX test2_x ON test2 (x);
-- Time: 2.315 ms
INSERT INTO test2(id, x) SELECT x.id, x.id*100 FROM generate_series(1,1000000) AS x(id);
-- Time: 25399.460 ms
Create index and then insert - about 25.5 sec (more than two times slower)
| PostgreSQL | 3,688,731 | 122 |
Is there a tool or method to analyze Postgres, and determine what missing indexes should be created, and which unused indexes should be removed? I have a little experience doing this with the "profiler" tool for SQLServer, but I'm not aware of a similar tool included with Postgres.
| I like this to find missing indexes:
SELECT
relname AS TableName,
to_char(seq_scan, '999,999,999,999') AS TotalSeqScan,
to_char(idx_scan, '999,999,999,999') AS TotalIndexScan,
to_char(n_live_tup, '999,999,999,999') AS TableRows,
pg_size_pretty(pg_relation_size(relname :: regclass)) AS TableSize
FROM pg_stat_all_tables
WHERE schemaname = 'public'
AND 50 * seq_scan > idx_scan -- more than 2%
AND n_live_tup > 10000
AND pg_relation_size(relname :: regclass) > 5000000
ORDER BY relname ASC;
This checks if there are more sequence scans than index scans. If the table is small, it gets ignored, since Postgres seems to prefer sequence scans for them.
The above query reveals missing indexes.
The next step would be to detect missing combined indexes. I guess this is not easy, but doable. Maybe analyzing the slow queries ... I heard pg_stat_statements could help...
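For the other half of the question, indexes that are never used, a common sketch based on pg_stat_user_indexes (reset the statistics and let a representative workload run first):

SELECT s.schemaname,
       s.relname      AS tablename,
       s.indexrelname AS indexname,
       pg_size_pretty(pg_relation_size(s.indexrelid)) AS index_size
FROM pg_stat_user_indexes s
JOIN pg_index i ON i.indexrelid = s.indexrelid
WHERE s.idx_scan = 0        -- never used since the last stats reset
  AND NOT i.indisunique     -- keep unique indexes, they enforce constraints
ORDER BY pg_relation_size(s.indexrelid) DESC;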
| PostgreSQL | 3,318,727 | 122 |
I'm trying to build a Flask app using Postgres with Docker. I'd like to connect to an AWS RDS instance of Postgres, but use Docker for my Flask app. However, when trying to set up psycopg2 it runs into an error because it can't find pg_config. Here's the error:
Building api
Step 1/5 : FROM python:3.6.3-alpine3.6
---> 84c98ca3b5c5
Step 2/5 : WORKDIR /usr/src/app
---> Using cache
---> 407c158f5ee4
Step 3/5 : COPY . .
---> 966df18d329e
Step 4/5 : RUN pip install -r requirements.txt
---> Running in 284cc97aeb63
Collecting aniso8601==1.3.0 (from -r requirements.txt (line 1))
Downloading aniso8601-1.3.0.tar.gz (57kB)
Collecting click==6.7 (from -r requirements.txt (line 2))
Downloading click-6.7-py2.py3-none-any.whl (71kB)
Collecting Flask==0.12.2 (from -r requirements.txt (line 3))
Downloading Flask-0.12.2-py2.py3-none-any.whl (83kB)
Collecting Flask-RESTful==0.3.6 (from -r requirements.txt (line 4))
Downloading Flask_RESTful-0.3.6-py2.py3-none-any.whl
Collecting Flask-SQLAlchemy==2.3.2 (from -r requirements.txt (line 5))
Downloading Flask_SQLAlchemy-2.3.2-py2.py3-none-any.whl
Collecting itsdangerous==0.24 (from -r requirements.txt (line 6))
Downloading itsdangerous-0.24.tar.gz (46kB)
Collecting Jinja2==2.9.6 (from -r requirements.txt (line 7))
Downloading Jinja2-2.9.6-py2.py3-none-any.whl (340kB)
Collecting MarkupSafe==1.0 (from -r requirements.txt (line 8))
Downloading MarkupSafe-1.0.tar.gz
Collecting psycopg2==2.7.3.1 (from -r requirements.txt (line 9))
Downloading psycopg2-2.7.3.1.tar.gz (425kB)
Complete output from command python setup.py egg_info:
running egg_info
creating pip-egg-info/psycopg2.egg-info
writing pip-egg-info/psycopg2.egg-info/PKG-INFO
writing dependency_links to pip-egg-info/psycopg2.egg-info/dependency_links.txt
writing top-level names to pip-egg-info/psycopg2.egg-info/top_level.txt
writing manifest file 'pip-egg-info/psycopg2.egg-info/SOURCES.txt'
Error: pg_config executable not found.
Please add the directory containing pg_config to the PATH
or specify the full executable path with the option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-build-01lf5grh/psycopg2/
ERROR: Service 'api' failed to build: The command '/bin/sh -c pip install -r requirements.txt' returned a non-zero code: 1
Here's my Dockerfile:
FROM python:3.6.3-alpine3.6
WORKDIR /usr/src/app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "app.py"]
Many others seem to have a similar issue locally, but none of them involve using Docker. This seems like a Docker issue because I can set up a local virtual environment and the setup works just fine since I have Postgres installed locally and it's able to find my local pg_config.
It appears that during the container build/setup, Docker is trying to find pg_config within the container. Is there a way to install a pg_config in the container, even though I won't be using a containerized instance of Postgres, but rather the instance on RDS?
Any and all suggestions on how to get around this are welcomed.
| Tested with Python 3.4.8, 3.5.5, 3.6.5 and 2.7.14 (just replace 3 with 2):
# You can use a specific version too, like python:3.6.5-alpine3.7
FROM python:3-alpine
WORKDIR /usr/src/app
COPY requirements.txt .
RUN \
apk add --no-cache postgresql-libs && \
apk add --no-cache --virtual .build-deps gcc musl-dev postgresql-dev && \
python3 -m pip install -r requirements.txt --no-cache-dir && \
apk --purge del .build-deps
COPY . .
CMD ["python3", "app.py"]
Explanation: to build Psycopg you need the packages gcc musl-dev postgresql-dev. Then you also need that pg_config executable: while simply installing postgresql-dev will work, postgresql-libs does fine too and takes up some 12 MB less space.
Here's the original version of the answer (based on this Dockerfile) where I manually install Python onto a pure Alpine image, because at that time Python did not provide a Docker image with Python 3.6 on Alpine 3.7. If you want to install Python 2.7 like that, also do apk add py2-pip (called py-pip in older Alpine repos).
FROM alpine:3.7
WORKDIR /usr/src/app
COPY requirements.txt .
RUN \
apk add --no-cache python3 postgresql-libs && \
apk add --no-cache --virtual .build-deps gcc python3-dev musl-dev postgresql-dev && \
python3 -m pip install -r requirements.txt --no-cache-dir && \
apk --purge del .build-deps
COPY . .
CMD ["python3", "app.py"]
| PostgreSQL | 46,711,990 | 121 |
I'm trying to map the results of a query to JSON using the row_to_json() function that was added in PostgreSQL 9.2.
I'm having trouble figuring out the best way to represent joined rows as nested objects (1:1 relations)
Here's what I've tried (setup code: tables, sample data, followed by query):
-- some test tables to start out with:
create table role_duties (
id serial primary key,
name varchar
);
create table user_roles (
id serial primary key,
name varchar,
description varchar,
duty_id int, foreign key (duty_id) references role_duties(id)
);
create table users (
id serial primary key,
name varchar,
email varchar,
user_role_id int, foreign key (user_role_id) references user_roles(id)
);
DO $$
DECLARE duty_id int;
DECLARE role_id int;
begin
insert into role_duties (name) values ('Script Execution') returning id into duty_id;
insert into user_roles (name, description, duty_id) values ('admin', 'Administrative duties in the system', duty_id) returning id into role_id;
insert into users (name, email, user_role_id) values ('Dan', '[email protected]', role_id);
END$$;
The query itself:
select row_to_json(row)
from (
select u.*, ROW(ur.*::user_roles, ROW(d.*::role_duties)) as user_role
from users u
inner join user_roles ur on ur.id = u.user_role_id
inner join role_duties d on d.id = ur.duty_id
) row;
I found if I used ROW(), I could separate the resulting fields out into a child object, but it seems limited to a single level. I can't insert more AS XXX statements, which I think I would need in this case.
I am afforded column names, because I cast to the appropriate record type, for example with ::user_roles, in the case of that table's results.
Here's what that query returns:
{
"id":1,
"name":"Dan",
"email":"[email protected]",
"user_role_id":1,
"user_role":{
"f1":{
"id":1,
"name":"admin",
"description":"Administrative duties in the system",
"duty_id":1
},
"f2":{
"f1":{
"id":1,
"name":"Script Execution"
}
}
}
}
What I want to do is generate JSON for joins (again 1:1 is fine) in a way where I can add joins, and have them represented as child objects of the parents they join to, i.e. like the following:
{
"id":1,
"name":"Dan",
"email":"[email protected]",
"user_role_id":1,
"user_role":{
"id":1,
"name":"admin",
"description":"Administrative duties in the system",
"duty_id":1
"duty":{
"id":1,
"name":"Script Execution"
}
}
}
| Update: In PostgreSQL 9.4 this improves a lot with the introduction of to_json, json_build_object, json_object and json_build_array, though it's verbose due to the need to name all the fields explicitly:
select
json_build_object(
'id', u.id,
'name', u.name,
'email', u.email,
'user_role_id', u.user_role_id,
'user_role', json_build_object(
'id', ur.id,
'name', ur.name,
'description', ur.description,
'duty_id', ur.duty_id,
'duty', json_build_object(
'id', d.id,
'name', d.name
)
)
)
from users u
inner join user_roles ur on ur.id = u.user_role_id
inner join role_duties d on d.id = ur.duty_id;
For older versions, read on.
It isn't limited to a single row, it's just a bit painful. You can't alias composite rowtypes using AS, so you need to use an aliased subquery expression or CTE to achieve the effect:
select row_to_json(row)
from (
select u.*, urd AS user_role
from users u
inner join (
select ur.*, d
from user_roles ur
inner join role_duties d on d.id = ur.duty_id
) urd(id,name,description,duty_id,duty) on urd.id = u.user_role_id
) row;
produces, via http://jsonprettyprint.com/:
{
"id": 1,
"name": "Dan",
"email": "[email protected]",
"user_role_id": 1,
"user_role": {
"id": 1,
"name": "admin",
"description": "Administrative duties in the system",
"duty_id": 1,
"duty": {
"id": 1,
"name": "Script Execution"
}
}
}
You will want to use array_to_json(array_agg(...)) when you have a 1:many relationship, btw.
The above query should ideally be able to be written as:
select row_to_json(
ROW(u.*, ROW(ur.*, d AS duty) AS user_role)
)
from users u
inner join user_roles ur on ur.id = u.user_role_id
inner join role_duties d on d.id = ur.duty_id;
... but PostgreSQL's ROW constructor doesn't accept AS column aliases. Sadly.
Thankfully, they optimize out the same. Compare the plans:
The nested subquery version; vs
The latter nested ROW constructor version with the aliases removed so it executes
Because CTEs are optimisation fences, rephrasing the nested subquery version to use chained CTEs (WITH expressions) may not perform as well, and won't result in the same plan. In this case you're kind of stuck with ugly nested subqueries until we get some improvements to row_to_json or a way to override the column names in a ROW constructor more directly.
Anyway, in general, the principle is that where you want to create a json object with columns a, b, c, and you wish you could just write the illegal syntax:
ROW(a, b, c) AS outername(name1, name2, name3)
you can instead use scalar subqueries returning row-typed values:
(SELECT x FROM (SELECT a AS name1, b AS name2, c AS name3) x) AS outername
Or:
(SELECT x FROM (SELECT a, b, c) AS x(name1, name2, name3)) AS outername
Additionally, keep in mind that you can compose json values without additional quoting, e.g. if you put the output of a json_agg within a row_to_json, the inner json_agg result won't get quoted as a string, it'll be incorporated directly as json.
e.g. in the arbitrary example:
SELECT row_to_json(
(SELECT x FROM (SELECT
1 AS k1,
2 AS k2,
(SELECT json_agg( (SELECT x FROM (SELECT 1 AS a, 2 AS b) x) )
FROM generate_series(1,2) ) AS k3
) x),
true
);
the output is:
{"k1":1,
"k2":2,
"k3":[{"a":1,"b":2},
{"a":1,"b":2}]}
Note that the json_agg product, [{"a":1,"b":2}, {"a":1,"b":2}], hasn't been escaped again, as text would be.
This means you can compose json operations to construct rows, you don't always have to create hugely complex PostgreSQL composite types then call row_to_json on the output.
| PostgreSQL | 13,227,142 | 121 |
Using PostgreSQL 9.0, I have a group role called "staff" and would like to grant all (or certain) privileges to this role on tables in a particular schema. None of the following work
GRANT ALL ON SCHEMA foo TO staff;
GRANT ALL ON DATABASE mydb TO staff;
Members of "staff" are still unable to SELECT or UPDATE on the individual tables in the schema "foo" or (in the case of the second command) to any table in the database unless I grant all on that specific table.
What can I do to make my and my users' lives easier?
Update: Figured it out with the help of a similar question on serverfault.com.
GRANT ALL PRIVILEGES ON ALL TABLES IN SCHEMA foo TO staff;
| You found the shorthand to set privileges for all existing tables in the given schema. The manual clarifies:
(but note that ALL TABLES is considered to include views and foreign tables).
Bold emphasis mine. serial columns are implemented with nextval() on a sequence as column default and, quoting the manual:
For sequences, this privilege allows the use of the currval and nextval functions.
So if there are serial columns, you'll also want to grant USAGE (or ALL PRIVILEGES) on sequences
GRANT USAGE ON ALL SEQUENCES IN SCHEMA foo TO mygrp;
Note: IDENTITY columns in Postgres 10 or later use implicit sequences that don't require additional privileges. (Consider upgrading serial columns.)
What about new objects?
You'll also be interested in DEFAULT PRIVILEGES for users or schemas:
ALTER DEFAULT PRIVILEGES IN SCHEMA foo GRANT ALL PRIVILEGES ON TABLES TO staff;
ALTER DEFAULT PRIVILEGES IN SCHEMA foo GRANT USAGE ON SEQUENCES TO staff;
ALTER DEFAULT PRIVILEGES IN SCHEMA foo REVOKE ...;
This sets privileges for objects created in the future automatically - but not for pre-existing objects.
Default privileges are only applied to objects created by the targeted user (FOR ROLE my_creating_role). If that clause is omitted, it defaults to the current user executing ALTER DEFAULT PRIVILEGES. To be explicit:
ALTER DEFAULT PRIVILEGES FOR ROLE my_creating_role IN SCHEMA foo GRANT ...;
ALTER DEFAULT PRIVILEGES FOR ROLE my_creating_role IN SCHEMA foo REVOKE ...;
Note also that all versions of pgAdmin III have a subtle bug and display default privileges in the SQL pane, even if they do not apply to the current role. Be sure to adjust the FOR ROLE clause manually when copying the SQL script.
| PostgreSQL | 10,352,695 | 121 |
I need to document an API written in pure Flask 2 and I'm looking for what is a consolidated approach for doing this.
I found different viable solutions but being new to Python and Flask I'm not able to choose among them. The solutions I found are:
https://github.com/marshmallow-code/apispec
https://github.com/jmcarp/flask-apispec
https://github.com/marshmallow-code/flask-smorest
In order to separate the different API endpoints I use Flask blueprints.
The structure of a MWE is as follows:
I first defined two simple domain objects, Author and Book.
# author.py
class Author:
def __init__(self, id: str, name: str):
self.id = id
self.name = name
# book.py
class Book:
def __init__(self, id: str, name: str):
self.id = id
self.name = name
Next, I created a simple GET endpoint for both of them using two separate blueprints.
# author_apy.py
import json
from flask import Blueprint, Response
from domain.author import Author
author = Blueprint("author", __name__, url_prefix="/authors")
@author.get("/")
def authors():
authors: list[Author] = []
for i in range(10):
author: Author = Author(str(i), "Author " + str(i))
authors.append(author)
authors_dicts = [author.__dict__ for author in authors]
return Response(json.dumps(authors_dicts), mimetype="application/json")
and
# book_api.py
import json
from flask import Blueprint, Response
from domain.book import Book
book = Blueprint("book", __name__, url_prefix="/books")
@book.get("/")
def books():
books: list[Book] = []
for i in range(10):
book: Book = Book(str(i), "Book " + str(i))
books.append(book)
books_dicts = [book.__dict__ for book in books]
return Response(json.dumps(books_dicts), mimetype="application/json")
In the end I simply registered both the blueprints under the Flask app.
# app.py
from flask import Flask
from api.author.author_api import author
from api.book.book_api import book
app = Flask(__name__)
app.register_blueprint(author, url_prefix="/authors")
app.register_blueprint(book, url_prefix="/books")
@app.get('/')
def hello_world():
return 'Flask - OpenAPI'
if __name__ == '__main__':
app.run()
The whole source code is also available on GitHub.
Considering this minimal working example, I'd like to know what is the quickest way to automate the generation of an OpenAPI v3 yaml/JSON file, e.g. exposed on a /api-doc.yaml endpoint.
PS: this is my first API using Python and Flask. I am trying to reproduce what I'm able to do with Spring-Boot and SpringDoc
| Following the suggestion of migrating from Flask to FastAPI I gave it a try and rewrote the Flask-Example of the question. The source code is also available on GitHub.
The structure of the project is almost identical, with some additional features available(e.g. the CORS Middleware):
The models of the domain are slightly different and extend the BaseModel from Pydantic.
# author.py
from pydantic import BaseModel
class Author(BaseModel):
id: str
name: str
and
# book.py
from pydantic import BaseModel
class Book(BaseModel):
id: str
name: str
With FastAPI the equivalent of the Flask Blueprint is the APIRouter.
Below are the two controllers for the authors
# author_api.py
from fastapi import APIRouter
from domain.author import Author
router = APIRouter()
@router.get("/", tags=["Authors"], response_model=list[Author])
def get_authors() -> list[Author]:
authors: list[Author] = []
for i in range(10):
authors.append(Author(id="Author-" + str(i), name="Author-Name-" + str(i)))
return authors
and the books
# book_api.py
from fastapi import APIRouter
from domain.book import Book
router = APIRouter()
@router.get("/", tags=["Books"], response_model=list[Book])
def get_books() -> list[Book]:
books: list[Book] = []
for i in range(10):
books.append(Book(id="Book-" + str(i), name="Book-Name-" + str(i)))
return books
It is important to note that the response model of the API endpoints is defined using Python types thanks to Pydantic. These object types are then converted into JSON schemas for the OpenAPI documentation.
In the end I simply registered/included the APIRouters under the FastAPI object and added a configuration for CORS.
# app.py
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from domain.info import Info
from api.author.author_api import router as authors_router
from api.book.book_api import router as books_router
app = FastAPI()
app.include_router(authors_router, prefix="/authors")
app.include_router(books_router, prefix="/books")
app.add_middleware(CORSMiddleware,
allow_credentials=True,
allow_origins=["*"],
allow_methods=["*"],
allow_headers=["*"],
)
@app.get("/", response_model=Info)
def info() -> Info:
info = Info(info="FastAPI - OpenAPI")
return info
The generated OpenAPI documentation is accessible at the endpoint /openapi.json while the UI (aka Swagger UI, Redoc) is accessible at /docs
and /redoc
To conclude, this is the automatically generated OpenAPI v3 documentation in JSON format, which can be used to easily generate an API client for other languages (e.g. using the OpenAPI-Generator tools).
{
"openapi": "3.0.2",
"info": {
"title": "FastAPI",
"version": "0.1.0"
},
"paths": {
"/authors/": {
"get": {
"tags": [
"Authors"
],
"summary": "Get Authors",
"operationId": "get_authors_authors__get",
"responses": {
"200": {
"description": "Successful Response",
"content": {
"application/json": {
"schema": {
"title": "Response Get Authors Authors Get",
"type": "array",
"items": {
"$ref": "#/components/schemas/Author"
}
}
}
}
}
}
}
},
"/books/": {
"get": {
"tags": [
"Books"
],
"summary": "Get Books",
"operationId": "get_books_books__get",
"responses": {
"200": {
"description": "Successful Response",
"content": {
"application/json": {
"schema": {
"title": "Response Get Books Books Get",
"type": "array",
"items": {
"$ref": "#/components/schemas/Book"
}
}
}
}
}
}
}
},
"/": {
"get": {
"summary": "Info",
"operationId": "info__get",
"responses": {
"200": {
"description": "Successful Response",
"content": {
"application/json": {
"schema": {
"$ref": "#/components/schemas/Info"
}
}
}
}
}
}
}
},
"components": {
"schemas": {
"Author": {
"title": "Author",
"required": [
"id",
"name"
],
"type": "object",
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"name": {
"title": "Name",
"type": "string"
}
}
},
"Book": {
"title": "Book",
"required": [
"id",
"name"
],
"type": "object",
"properties": {
"id": {
"title": "Id",
"type": "string"
},
"name": {
"title": "Name",
"type": "string"
}
}
},
"Info": {
"title": "Info",
"required": [
"info"
],
"type": "object",
"properties": {
"info": {
"title": "Info",
"type": "string"
}
}
}
}
}
}
In order to start the application we also need an ASGI server for production, such as Uvicorn or Hypercorn.
I used Uvicorn and the app is started using the command below:
uvicorn app:app --reload
It is then available on the port 8000 of your machine.
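If you specifically need a YAML endpoint such as the /api-doc.yaml asked for in the question, a minimal sketch (assuming PyYAML is installed; the route name is arbitrary):
import yaml
from fastapi.responses import Response

@app.get("/api-doc.yaml", include_in_schema=False)
def api_doc_yaml() -> Response:
    # app.openapi() returns the generated OpenAPI schema as a dict
    return Response(yaml.dump(app.openapi()), media_type="application/yaml")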
| OpenAPI | 67,849,806 | 14 |
With org.openapitools:openapi-generator-maven-plugin, I have noticed that using allOf composed of multiple objects in a response does not generate a class combining these multiple objects. Instead it uses the first class defined in the allOf section.
Here is a minimal example (openapi.yaml):
openapi: 3.0.0
info:
title: Test
version: v1
paths:
/test:
get:
operationId: get
responses:
'200':
description: Get
content:
application/json:
schema:
allOf:
- $ref: '#/components/schemas/A'
- $ref: '#/components/schemas/B'
components:
schemas:
A:
type: object
properties:
attA:
type: string
B:
type: object
properties:
attB:
type: integer
When generating the classes in Java via :
mvn org.openapitools:openapi-generator-maven-plugin:5.2.0:generate \
-Dopenapi.generator.maven.plugin.inputSpec=openapi.yaml \
-Dopenapi.generator.maven.plugin.generatorName=java
It shows a warning:
[WARNING] allOf with multiple schemas defined. Using only the first one: A
As expected, it generates classes A and B. But, when calling get(), the value returned by the call is of type A:
DefaultApi api = new DefaultApi();
A a = api.get();
Instead, I would have expected a composite object containing A and B properties (attA and attB), like this (result from https://editor.swagger.io/):
I have created an issue on GitHub, but hopefully someone here may have had the same issue and managed to fix it.
Also, I can't modify the openapi.yaml file because it's an OpenAPI spec provided by an API I have to call, so modifying it would make no sense and would make it difficult to manage if their OpenAPI spec changes over time.
| Version 6.0.0 of openapi-generator-maven-plugin solves the issue by generating a class (Get200Response) composed of the two objects A and B. After generating the classes using:
mvn org.openapitools:openapi-generator-maven-plugin:6.0.0:generate \
-Dopenapi.generator.maven.plugin.inputSpec=openapi.yaml \
-Dopenapi.generator.maven.plugin.generatorName=java
I can see that new Get200Response class:
package org.openapitools.client.model;
// ...
public class Get200Response {
public static final String SERIALIZED_NAME_ATT_A = "attA";
@SerializedName(SERIALIZED_NAME_ATT_A)
private String attA;
public static final String SERIALIZED_NAME_ATT_B = "attB";
@SerializedName(SERIALIZED_NAME_ATT_B)
private Integer attB;
// ...
}
And I was able to make the following code work. In that example, I have a dummy webserver listening on port 5000 and defining a /test endpoint returning {"attA": "hello", "attB": 1}.
package org.example;
import org.openapitools.client.ApiClient;
import org.openapitools.client.ApiException;
import org.openapitools.client.api.DefaultApi;
import org.openapitools.client.model.Get200Response;
public class Main {
public static void main(String[] args) throws ApiException {
ApiClient apiClient = new ApiClient();
apiClient.setBasePath("http://localhost:5000");
DefaultApi api = new DefaultApi(apiClient);
Get200Response r = api.get();
System.out.println(r.getAttA());
System.out.println(r.getAttB());
}
}
This successfully prints:
hello
1
| OpenAPI | 68,773,761 | 14 |
I have an API that I created in .NetCore 3.1 and have enabled Swagger (OAS3) using Swashbuckle. By default, when my app starts, it brings up the Swagger page using this URL:
http://{port}/swagger/index.html
I would like to customize the Swagger URL so that it includes the name of the application that is running. The reason I am doing this is because I am using path-based routing with Nginx running in AWS Fargate.
I will have several API containers running in the Fargate task and Nginx will receive the REST requests coming from the Application Load Balancer and from the path (e.g. /api/app1), it will route the request to the correct container port for the target application.
So, for example, I have three apps: App1 on port 5000, App2 on Port 5001 and App3 on port 5003.
If the user makes a request to https://api/app1, Nginx will detect the path and forward the request to port 5000, which is App1's container port.
However, to make sure that the correct Swagger page comes up, I need to add "api/App1" to Swagger's URL so that Nginx will forward the request to the correct container. In this case, it's App1.
In other words, I want my Swagger URL to look like this:
https://api/app1/swagger/index.html
What I've tried
In my Startup.cs file I have added the following:
// Define prefix for application
private readonly string baseApplicationRoute = "api/app1";
// Enable OAS3 JSON middleware
app.UseSwagger(c =>
{
c.RouteTemplate = baseApplicationRoute+"/{documentName}/swagger.json";
});
app.UseSwaggerUI(c =>
{
var endpoint = $"/{baseApplicationRoute}/{version.ToLower()}/swagger.json";
c.SwaggerEndpoint(endpoint, $"APP1 API - {version}");
c.RoutePrefix = string.Empty;
});
This compiles and works, however it is still using the same Swagger URL of:
http://{port}/swagger/index.html
I think all this is doing is changing the location of the swagger.json because on the Swagger UI that comes up it is showing:
/api/app1/v1/swagger.json
My launchSettings.json file is specifying the "launchUrl" as "swagger".
I think I'm close, but I am obviously missing something. To recap I'm just trying to change:
The default Swagger URL
http://{port}/swagger/index.html
To my custom one here:
http://{port}/api/app1/v1/swagger/index.html
That way Nginx can detect "/api/app1" and route to the correct container.
What am i missing?
| I found the solution to this issue:
In the Configure section of Startup.cs I did the following:
First I added the folowing variable:
private readonly string swaggerBasePath = "api/app";
Next I configured the path using UseSwagger and UseSwaggerUI to use the swaggerBasePath variable:
app.UseSwagger(c =>
{
c.RouteTemplate = swaggerBasePath+"/swagger/{documentName}/swagger.json";
});
app.UseSwaggerUI(c =>
{
c.SwaggerEndpoint($"/{swaggerBasePath}/swagger/v1/swagger.json", $"APP API - {version}");
c.RoutePrefix = $"{swaggerBasePath}/swagger";
});
Finally, I modified launchSettings.json to point to the new base path:
"launchUrl": "api/app/swagger",
Then is was able to hit the Swagger page using:
https://localhost/api/app/swagger/index.html
I tested this with Nginx and it was able to route to the correct container; a minimal sketch of the location block involved is shown below.
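# Sketch only: the upstream address is an assumption for this setup
location /api/app/ {
    proxy_pass http://localhost:5000;
    proxy_set_header Host $host;
}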
I can easily tweak the base path (for instance to add the API version number) by simply modifying the swaggerBasePath variable and tweaking the launchSettings.json file to match the value of the variable.
Hopefully, this will help someone in the future.
| OpenAPI | 62,376,063 | 14 |
Springdoc automatically generates a API documentation for all handler methods. Even if there are no OpenAPI annotations.
How can I hide endpoints from the API documentation?
| The @io.swagger.v3.oas.annotations.Hidden annotation can be used at the method or class level of a controller to hide one or all endpoints.
(See: https://springdoc.org/faq.html#how-can-i-hide-an-operation-or-a-controller-from-documentation)
Example:
@Hidden // Hide all endpoints
@RestController
@RequestMapping(path = "/test")
public class TestController {
private String test = "Test";
@Operation(summary = "Get test string", description = "Returns a test string", tags = { "test" })
@ApiResponses(value = { @ApiResponse(responseCode = "200", description = "Success" ) })
@GetMapping(value = "", produces = MediaType.TEXT_PLAIN_VALUE)
public @ResponseBody String getTest() {
return test;
}
@Hidden // Hide this endpoint
@PutMapping(value = "", consumes = MediaType.TEXT_PLAIN_VALUE)
@ResponseStatus(HttpStatus.OK)
public void setTest(@RequestBody String test) {
this.test = test;
}
}
Edit:
It's also possible to generate the API documentation only for controllers of specific packages.
Add following to your application.properties file:
springdoc.packagesToScan=package1, package2
(See: https://springdoc.org/faq.html#how-can-i-explicitly-set-which-packages-to-scan)
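Similarly, the generated documentation can be restricted by path; a sketch with example paths:
springdoc.pathsToMatch=/v1, /api/**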
| OpenAPI | 62,102,261 | 14 |
I have a data model definition in OpenAPI 3.0, using SwaggerHub to display the UI. I want one of the properties of a model to be related, which is an array of properties of the same model.
Foo:
properties:
title:
type: string
related:
type: array
items:
$ref: '#/components/schemas/Foo'
The parser doesn't seem to like this - the UI shows the related property as an empty array. Is this kind of self-reference possible in OpenAPI 3.0?
| Your definition is correct, it's just that Swagger UI currently does not render circular-referenced definitions properly. See issue #3325 for details.
What you can do is add a model example, and Swagger UI will display this example instead of trying to generate an example from the definition.
Foo:
type: object
properties:
title:
type: string
related:
type: array
items:
$ref: '#/components/schemas/Foo'
example: # <-------
title: foo
related:
- title: bar
- title: baz
related:
- title: qux
Alternatively, you can add an example just for the related array:
Foo:
type: object
properties:
title:
type: string
related:
type: array
items:
$ref: '#/components/schemas/Foo'
example: # <--- Make sure "example" is on the same level as "type: array"
- title: bar
- title: baz
related:
- title: qux
| OpenAPI | 50,950,278 | 14 |
I am writing an OpenAPI (Swagger) definition where a query parameter can take none, or N values, like this:
/path?sort=field1,field2
How can I write this in OpenAPI YAML?
I tried the following, but it does not produce the expected result:
- name: sort
in: query
schema:
type: string
enum: [field1,field2,field3]
allowEmptyValue: true
required: false
description: Sort the results by attributes. (See http://jsonapi.org/format/1.1/#fetching-sorting)
| A query parameter containing a comma-separated list of values is defined as an array. If the values are predefined, then it's an array of enum.
By default, an array may have any number of items, which matches your "none or more" requirement. If needed, you can restrict the number of items using minItems and maxItems, and optionally enforce uniqueItems: true.
OpenAPI 2.0
The parameter definition would look as follows. collectionFormat: csv indicates that the values are comma-separated, but this is the default format so it can be omitted.
parameters:
- name: sort
in: query
type: array # <-----
items:
type: string
enum: [field1, field2, field3]
collectionFormat: csv # <-----
required: false
description: Sort the results by attributes. (See http://jsonapi.org/format/1.1/#fetching-sorting)
OpenAPI 3.x
collectionFormat: csv from OpenAPI 2.0 has been replaced with style: form + explode: false. style: form is the default style for query parameters, so it can be omitted.
parameters:
- name: sort
in: query
schema:
type: array # <-----
items:
type: string
enum: [field1, field2, field3]
required: false
description: Sort the results by attributes. (See http://jsonapi.org/format/1.1/#fetching-sorting)
explode: false # <-----
I think there's no need for allowEmptyValue, because an empty array will be effectively an empty value in this scenario. Moreover, allowEmptyValue is not recommended for use since OpenAPI 3.0.2 "as it will be removed in a future version."
| OpenAPI | 50,538,138 | 14 |
I am adding Swagger UI to my Spring Boot application. When I try to access swagger-ui.html, I get a 404 error.
Config class :
@Configuration
public class SwaggerConfig {
@Bean
public OpenAPI springShopOpenAPI() {
return new OpenAPI()
.info(new Info().title("JOYAS-STOCK API Docs")
.description("JOYAS-STOCK REST API documentation")
.version("v1.0.0"));
}
}
appliaction.properties :
#swagger-ui config
springdoc.swagger-ui.path=/swagger-ui
springdoc.swagger-ui.operationsSorter=method
springdoc.swagger-ui.tagsSorter=alpha
pom.xml :
<dependency>
<groupId>org.springdoc</groupId>
<artifactId>springdoc-openapi-ui</artifactId>
<version>1.6.13</version>
</dependency>
error message :
Whitelabel Error Page
This application has no explicit mapping for /error, so you are seeing this as a fallback.
There was an unexpected error (type=Not Found, status=404).
I started with the implementation of the Swagger configuration,
and apparently it's not working.
Click to see a screenshot of the error
| Resolved.
The issue was in the versions: they were not compatible! I was using springdoc-openapi v1 with Spring Boot 3,
which is wrong; with Spring Boot 3, springdoc-openapi v2 should be used.
see documentation : https://springdoc.org/v2/
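For reference, a sketch of the Spring Boot 3 compatible dependency (springdoc v2; pick a current 2.x version):
<dependency>
    <groupId>org.springdoc</groupId>
    <artifactId>springdoc-openapi-starter-webmvc-ui</artifactId>
    <version>2.1.0</version>
</dependency>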
| OpenAPI | 74,776,863 | 13 |
This is my FastAPI main.py file.
from fastapi import FastAPI
from project.config.settings import base as settings
app = FastAPI(docs_url=f"{settings.URL_ROOT}/{settings.DOCS_URL}", redoc_url=None)
app.openapi_version = "3.0.0"
# some functions here
I deployed this project to a server, but when I go to the docs address on my server, 1.2.3.4/url_root/docs_url, it shows me the following message:
Unable to render this definition
The provided definition does not specify a valid version field.
Please indicate a valid Swagger or OpenAPI version field.
Supported version fields are swagger: "2.0" and those that match openapi: 3.0.n (for example, openapi: 3.0.0).
What's the problem and how can I solve it?
UPDATE:
FastAPI is behind Nginx. All of my endpoints are working correctly, but I cannot see the docs.
| You should check this page for proxy settings.
But as far as I understand, you can fix this by just adding the root prefix to openapi_url as well:
app = FastAPI(
docs_url=f"/url_root/docs_url",
openapi_url="/url_root/openapi.json",
redoc_url=None)
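Alternatively, if Nginx strips the prefix before forwarding, a sketch using FastAPI's root_path parameter keeps the default /docs and /openapi.json paths working behind the proxy:
# Sketch: tells FastAPI about the proxy prefix instead of hardcoding URLs
app = FastAPI(root_path="/url_root")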
| OpenAPI | 71,171,535 | 13 |
Consider this OAS3 spec (testMinMax3.yaml):
openapi: 3.0.1
info:
title: OpenAPI Test
description: Test
license:
name: Apache-2.0
url: http://www.apache.org/licenses/LICENSE-2.0.html
version: 1.0.0
servers:
- url: http://localhost:9999/v2
paths:
/ping:
post:
summary: test
description: test it
operationId: pingOp
requestBody:
content:
application/json:
schema:
$ref: '#/components/schemas/SomeObj'
required: true
responses:
200:
description: OK
content: {}
components:
schemas:
SomeObj:
type: string
minLength: 1
maxLength: 3
Does non-empty (aka minLength: 1) imply required even if a field is not set, based on the OpenAPI spec (1)? Or does that apply only if a value for the field is provided by the user (2)?
| No.
minLength and required are separate constraints. minLength means that if a string value is provided, its length must be minLength or more.
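To make a field mandatory inside an object, the analogous constraint is the required keyword; a minimal sketch (the wrapper object is hypothetical):
SomeWrapper:
  type: object
  required:
    - someObj        # the field must be present
  properties:
    someObj:
      type: string
      minLength: 1   # and, when present, must be non-empty
      maxLength: 3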
| OpenAPI | 67,812,850 | 13 |
I want to have a description for RequestBody in spring boot openapi 3 .
so i make my code like this :
@PostMapping(produces = "application/json", consumes = "application/json")
public ResponseEntity<Book> addBook(
@Schema(
description = "Book to add.",
required=true,
schema=@Schema(implementation = Book.class))
@Valid @RequestBody Book book
) {
return ResponseEntity.ok(bookRepository.add(book));
}
RequestBody description is Book to add.
My desire UI is like this :
But nothings happen ! There is no description in my UI.
description was added to Schemas panel Book entity !!!
What is the problem ?
| From your Code Snippet it seems to me as if your description actually belongs into the @RequestBody Annotation instead of the @Schema Annotation.
With @Schema you define and describe your Models but what you actually want to do is to describe the parameter in the context of your operation.
Try something along the lines of:
@PostMapping(produces = "application/json", consumes = "application/json")
public ResponseEntity<Book> addBook(
@RequestBody(description = "Book to add.", required = true,
content = @Content(
schema=@Schema(implementation = Book.class)))
@Valid Book book
) {
return ResponseEntity.ok(bookRepository.add(book));
}
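Note that both Spring and Swagger define a @RequestBody annotation. To keep Spring's body binding and still document the parameter, one approach (a sketch) is to fully qualify the annotations:
@PostMapping(produces = "application/json", consumes = "application/json")
public ResponseEntity<Book> addBook(
        // Swagger annotation documents the body in the OpenAPI output
        @io.swagger.v3.oas.annotations.parameters.RequestBody(
                description = "Book to add.", required = true,
                content = @Content(schema = @Schema(implementation = Book.class)))
        // Spring annotation performs the actual body binding
        @org.springframework.web.bind.annotation.RequestBody @Valid Book book
) {
    return ResponseEntity.ok(bookRepository.add(book));
}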
| OpenAPI | 64,645,528 | 13 |
I would like the OpenAPI Generator (https://github.com/OpenAPITools/openapi-generator) to be able to generate a Pageable parameter in the API, matching the implementation in Spring Data. I've been trying to find a suitable, out-of-the-box solution, but couldn't find one.
Ideally, this Pageable parameter should be added only to GET methods in the following manner:
default ResponseEntity<User> getUser(@ApiParam(value = "value",required=true) @PathVariable("id") Long id, **Pageable pageable**)
So after implementing this interface in my Controller I would need to override it and have this aforementioned Pageable parameter. I don't want to have separate parameters for size or page, only this Pageable here.
Thanks for any tips and help!
| Unfortunately this is no final solution but it is half way. Maybe it is of help anyway.
By defining the pageable parameters (size, page etc.) as an object query parameter it is possible to tell the generator to use the Spring object instead of generating a Pageable class from the api. This is done by an import mapping.
in gradle:
openApiGenerate {
....
importMappings = [
'Pageable': 'org.springframework.data.domain.Pageable'
]
}
which tells the generator to use the Spring class instead of the one defined in the api:
openapi: 3.0.2
info:
title: Spring Page/Pageable API
version: 1.0.0
paths:
/page:
get:
parameters:
- in: query
name: pageable
required: false
schema:
$ref: '#/components/schemas/Pageable'
responses:
...
components:
schemas:
Pageable:
description: minimal Pageable query parameters
type: object
properties:
page:
type: integer
size:
type: integer
The issue with the mapping is that the generator still adds a @RequestParam() annotation and that breaks it again. It only works if it is NOT annotated.
If you are a bit adventurous you could try openapi-processor-spring (i'm the author). It it does handle the example above. But it may have other limitations you don't like.
| OpenAPI | 61,307,411 | 13 |
FastAPI automatically generates a schema in the OpenAPI spec for UploadFile parameters.
For example, this code:
from fastapi import FastAPI, File, UploadFile
app = FastAPI()
@app.post("/uploadfile/")
async def create_upload_file(file: UploadFile = File(..., description="The file")):
return {"filename": file.filename}
will generate this schema under components:schemas in the OpenAPI spec:
{
"Body_create_upload_file_uploadfile__post": {
"title": "Body_create_upload_file_uploadfile__post",
"required":["file"],
"type":"object",
"properties":{
"file": {"title": "File", "type": "string", "description": "The file","format":"binary"}
}
}
}
How can I explicitly specify the schema for UploadFiles (or at least its name)?
I have read FastAPI's docs and searched the issue tracker but found nothing.
| I answered this over on FastAPI#1442, but just in case someone else stumbles upon this question here is a copy-and-paste from the post linked above:
After some investigation this is possible, but it requires some monkey patching. Using the example given here, the solution looks like so:
from fastapi import FastAPI, File, UploadFile
from typing import Callable
app = FastAPI()
@app.post("/files/")
async def create_file(file: bytes = File(...)):
return {"file_size": len(file)}
@app.post("/uploadfile/")
async def create_upload_file(file: UploadFile = File(...)):
return {"filename": file.filename}
def update_schema_name(app: FastAPI, function: Callable, name: str) -> None:
"""
Updates the Pydantic schema name for a FastAPI function that takes
in a fastapi.UploadFile = File(...) or bytes = File(...).
This is a known issue that was reported on FastAPI#1442 in which
the schema for file upload routes were auto-generated with no
customization options. This renames the auto-generated schema to
something more useful and clear.
Args:
app: The FastAPI application to modify.
function: The function object to modify.
name: The new name of the schema.
"""
for route in app.routes:
if route.endpoint is function:
route.body_field.type_.__name__ = name
break
update_schema_name(app, create_file, "CreateFileSchema")
update_schema_name(app, create_upload_file, "CreateUploadSchema")
| OpenAPI | 60,765,317 | 13 |
The documentation for defining general API information using the quarkus-smallrye-openapi extension is extremely sparse, and does not explain how to use all the annotations for setting up the OpenAPI generation.
For some background, I am using a clean and largely empty project (quarkus version1.0.1.FINAL) generated from code.quarkus.io, with a single class defined as followed (With the attempted @OpenAPIDefinition annotation):
@OpenAPIDefinition(
info = @Info(
title = "Custom API title",
version = "3.14"
)
)
@Path("/hello")
public class ExampleResource {
@GET
@Produces(MediaType.TEXT_PLAIN)
public String hello() {
return "hello";
}
}
Through much digging I eventually found that general API information (contact info, version, etc.) is defined using the @OpenAPIDefinition annotation, but when it is used on my existing endpoint definition, no changes are made to the generated OpenAPI specification. What am I doing wrong?
| Try putting the annotation on the JAX-RS Application class. I realize you don't need one of those in a Quarkus application, but I think it doesn't hurt either. For reference in the specification TCK:
https://github.com/eclipse/microprofile-open-api/blob/master/tck/src/main/java/org/eclipse/microprofile/openapi/apps/airlines/JAXRSApp.java
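A minimal sketch of what that could look like (the class name is arbitrary; imports from javax.ws.rs.core and org.eclipse.microprofile.openapi.annotations are assumed):
@OpenAPIDefinition(
    info = @Info(
        title = "Custom API title",
        version = "3.14"
    )
)
public class MyApplication extends Application {
    // No members needed; the class exists only to carry the annotation
}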
| OpenAPI | 59,168,710 | 13 |
The DRF docs mention this:
Note that when using viewsets the basic docstring is used for all
generated views. To provide descriptions for each view, such as for
the list and retrieve views, use docstring sections as described
in Schemas as documentation: Examples.
But the link is bad, and the similar link, https://www.django-rest-framework.org/api-guide/schemas/, doesn't mention these "sections."
How do I distinctly document my different possible REST actions within my single Viewset when it is composed like,
class ViewSet(mixins.ListModelMixin,
mixins.RetrieveModelMixin,
mixins.CreateModelMixin,
mixins.UpdateModelMixin,
):
| I came here from Google after spending ages tracking this down. There is indeed a special formatting of the docstring to document individual methods for ViewSets.
The relevant example must have been removed from the documentation at some point but I was able to track this down in the source. It is handled by the function get_description in https://github.com/encode/django-rest-framework/blob/master/rest_framework/schemas/coreapi.py
The docstring format is based on the action names (if view.action is defined):
"""
General ViewSet description
list: List somethings
retrieve: Retrieve something
update: Update something
create: Create something
partial_update: Patch something
destroy: Delete something
"""
If view.action is not defined, it falls back to the method name: get, put, patch, delete.
Each new section begins with a lower case HTTP method name followed by colon.
| OpenAPI | 57,367,230 | 13 |
I can't find a sample of currency data type in the object definition, nor a document on the subject.
| There is no built-in "currency" type. You would typically use type: number with an optional format modifier to indicate the meaning of the numeric type:
type: number
format: currency
format can have arbitrary values, so you can use format: currency or format: decimal or whatever your tool supports. Tools that recognize the given format will map the value to the corresponding type.
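If you control the API design, a common alternative (a sketch, not something the spec mandates) is a money object with a string amount plus an ISO 4217 currency code, which avoids binary floating-point rounding:
price:
  type: object
  properties:
    amount:
      type: string
      example: "99.95"
    currency:
      type: string
      description: ISO 4217 currency code
      example: "USD"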
| OpenAPI | 46,350,701 | 13 |
For backward-compatibility reasons, I need to support both the paths /ab and /a-b.
The request and response objects are going to be the same for both of the paths.
Can I have something like the following in my Swagger spec so that I do not have to repeat the request and response object definitions for both the paths.
paths:
/ab:
/a-b:
post:
...
| Yes, you can have a path item that references another path item:
paths:
/ab:
post:
summary: ...
...
responses:
...
/a-b:
$ref: '#/paths/~1ab' # <------------
Here, ~1ab is an encoded version of /ab (see below).
One limitation of this approach is that you cannot have operationId in all operations of the referenced path item. This is because the copy of the path ends up with the same operationId values, but operationId must be unique.
Encoding $ref values
If the characters ~ and / are present in node names (as in case of path names, e.g. /ab) they must be encoded: ~ as ~0, and / as ~1:
/ab → ~1ab → $ref: '#/paths/~1ab'
/foo/bar → ~1foo~1bar → $ref: '#/paths/~1foo~1bar'
/ab~cd → ~1ab~0cd → #/paths/~1ab~0cd
Additionally, { } and other characters not allowed in URI fragment identifiers (RFC 3986, section 3.5) need to be percent-encoded. For example, { becomes %7B, and } becomes %7D.
/{zzz}
→ ~1{zzz} ( / replaced with ~1)
→ ~1%7Bzzz%7D (percent-encoded)
→ $ref: '#/paths/~1%7Bzzz%7D'
/foo/{zzz}
→ ~1foo~1{zzz} ( / replaced with ~1)
→ ~1foo~1%7Bzzz%7D (percent-encoded)
→ $ref: '#/paths/~1foo~1%7Bzzz%7D'
Note that you need to encode just the path name and not the #/paths/ prefix.
| OpenAPI | 44,150,758 | 13 |
petstore_auth:
type: oauth2
authorizationUrl: http://swagger.io/api/oauth/dialog
flow: implicit
scopes:
write:pets: modify pets in your account
read:pets: read your pets
This is a securityDefinitions example from the Swagger Specification. What does the write:pets and read:pets intended for? Is that some categories for the paths?
| write:pets and read:pets are Oauth2 scopes and are not related to OpenAPI (fka. Swagger) operations categorization.
Oauth2 scopes
When an API is secured with Oauth, scopes are used to give different rights/privilege to the API consumer. Scopes are defined by a name (you can use whatever you want).
Oauth scopes authorization in SwaggerUI which can act as an API consumer:
In this case this oauth2 secured API propose 2 scopes:
write:pets: modify pets in your account
read:pets: read your pets
When describing an API with an OpenAPI (fka. Swagger) specification, you can define these scopes as shown in the question.
But only defining these scope is useless if you do not declare which operation(s) is covered by these scopes.
It is done by adding this to an operation:
security:
- petstore_auth:
- read:pets
In this example, the operation is accessible to the API consumer only if he was allowed to use the read:pets scope.
Note that a single operation can belong to multiple oauth2 scopes and also multiple security definitions.
You can read more about security in OpenAPI (fka. Swagger) here
Security Scheme Object
Security Requirement Object object definition
Part 6 of my Writing OpenAPI (Swagger) Specification Tutorial about Security
OpenAPI (fka. Swagger) operation categorization
Regardless of OAuth2 scope, if you need to categorize an API's operations, you can use tags:
tags:
- pets
By adding this to an operation it will be put in the category pets.
A single operation can belong to multiple categories.
Theses categories are used by SwaggerUI to regroup operations. In the screen capture below, we can see 3 categories (pet, store and user):
You can read more about categories here:
Tag Object
Operation Object
Part 7 of my Writing OpenAPI (Swagger) Specification Tutorial about Documentation
Here's the full example using Oauth2 scopes and a category
swagger: "2.0"
info:
version: "1.0.0"
title: Swagger Petstore
securityDefinitions:
petstore_auth:
type: oauth2
authorizationUrl: http://petstore.swagger.io/api/oauth/dialog
flow: implicit
scopes:
write:pets: modify pets in your account
read:pets: read your pets
paths:
/pets/{petId}:
parameters:
- in: path
name: petId
description: ID of pet that needs to be fetched
required: true
type: integer
format: int64
get:
tags:
- pets
summary: Find pet by ID
responses:
"404":
description: Pet not found
"200":
description: A pet
schema:
$ref: "#/definitions/Pet"
security:
- petstore_auth:
- read:pets
delete:
tags:
- pets
summary: Deletes a pet
responses:
"404":
description: Pet not found
"204":
description: Pet deleted
security:
- petstore_auth:
- write:pets
definitions:
Pet:
type: object
properties:
id:
type: integer
format: int64
name:
type: string
example: doggie
| OpenAPI | 38,371,355 | 13 |
The @nestjs/swagger doc describes here that defining an extra model should be done this way:
@ApiExtraModels(ExtraModel)
export class CreateCatDto {}
But what is ExtraModel here? The doc is not very clear about this.
| It worked for me when I set @ApiExtraModels(MyModelClass) at the top of the controller.
Thanks for this topic and also to this comment in the GitHub issue.
I don't want to list all models in the extraModels array in SwaggerModule.createDocument, so this is a great solution for me.
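For reference, a sketch of the controller-level usage combined with getSchemaPath (both from @nestjs/swagger) to actually reference the model; ExtraModel stands in for whatever class you need:
import { Controller, Get } from '@nestjs/common';
import { ApiExtraModels, ApiOkResponse, getSchemaPath } from '@nestjs/swagger';

@ApiExtraModels(ExtraModel) // registers the model with the generated document
@Controller('cats')
export class CatsController {
  @Get()
  @ApiOkResponse({ schema: { $ref: getSchemaPath(ExtraModel) } })
  findAll() {
    return [];
  }
}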
| OpenAPI | 61,143,316 | 12 |
I'm using drf_yasg for swagger documentation. When I publish my DRF app behind AWS Application Load Balancer and set listener to listen on 443 HTTPS and redirect to my EC2 on which DRF is running, swagger UI is trying to send a request to endpoint http://example.com/status rather than e.g. https://example.com/status. This creates a Google Chrome error:
swagger-ui-bundle.js:71 Mixed Content: The page at 'https://example.com/swagger#/status/status_list' was loaded over HTTPS, but requested an insecure resource 'http://example.com/status'. This request has been blocked; the content must be served over HTTPS.
So my solution to solve this was to explicitly set my server URL in drf_yasg.views.get_schema_view. So my code looks like:
schema_view = get_schema_view(
openapi.Info(
title="Server Api Documentation",
default_version="v1",
description="",
url="http://example.com/status"
)
# noinspection PyUnresolvedReferences
swagger_patterns = [
path("", schema_view.with_ui("swagger", cache_timeout=0), name="schema-swagger-ui"),
I would like to be able not to explicitly set URL string but rather choose Schemes between HTTP or HTTPS.
Is it possible in drf_yasg?
| Add these in your Django settings.py
# Setup support for proxy headers
USE_X_FORWARDED_HOST = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
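An ALB sets X-Forwarded-Proto for you; if another proxy such as Nginx terminates TLS in front of Django instead, make sure it actually forwards the header the setting trusts (a sketch):
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Host $host;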
| OpenAPI | 58,013,545 | 12 |
We have some Azure Functions exposed through Api Management? Can Api Management expose a /swagger endpoint automatically, the same way the Swashbuckle package does for api's in Asp.Net.
| Azure API Management cannot automatically generate the Swagger page. It can only provide you the API definition file. You can then use other tools (such as Swagger UI) with the definition file to generate the page you need.
Besides, Azure API Management provides a developer portal UI (https://youapimanagementname.portal.azure-api.net) that shows how to use all the APIs.
| OpenAPI | 56,027,231 | 12 |
I have some model definition inside a XSD file and I need to reference these models from an OpenApi definition. Manually remodeling is no option since the file is too large, and I need to put it into a build system, so that if the XSD is changed, I can regenerate the models/schemas for OpenApi.
What I tried and what nearly worked is using xsd2json and then converting it with the node module json-schema-to-openapi. However, xsd2json drops some of the complexElement models. For example, "$ref": "#/definitions/tns:ContentNode" is used inside one model as the child type, but there is no definition for ContentNode in the schema, even though the XSD contains a complexElement definition for ContentNode.
Another approach which I haven't tried yet, but seems a bit excessive to me, is using xjc to generate Java models from the XSD and then using JacksonSchema to generate the JSON schema.
Is there any established library or way, to use XSD in OpenApi?
| I ended up implementing the second approach using jaxb to convert the XSD to java models and then using Jackson to write the schemas to files.
Gradle:
plugins {
id 'java'
id 'application'
}
group 'foo'
version '1.0-SNAPSHOT'
sourceCompatibility = 1.8
repositories {
mavenCentral()
}
dependencies {
testCompile group: 'junit', name: 'junit', version: '4.12'
compile group: 'com.fasterxml.jackson.module', name: 'jackson-module-jsonSchema', version: '2.9.8'
}
configurations {
jaxb
}
dependencies {
jaxb (
'com.sun.xml.bind:jaxb-xjc:2.2.7',
'com.sun.xml.bind:jaxb-impl:2.2.7'
)
}
application {
mainClassName = 'foo.bar.Main'
}
task runConverter(type: JavaExec, group: 'application') {
classpath = sourceSets.main.runtimeClasspath
main = 'foo.bar.Main'
}
task jaxb {
System.setProperty('javax.xml.accessExternalSchema', 'all')
def jaxbTargetDir = file("src/main/java")
doLast {
jaxbTargetDir.mkdirs()
ant.taskdef(
name: 'xjc',
classname: 'com.sun.tools.xjc.XJCTask',
classpath: configurations.jaxb.asPath
)
ant.jaxbTargetDir = jaxbTargetDir
ant.xjc(
destdir: '${jaxbTargetDir}',
package: 'foo.bar.model',
schema: 'src/main/resources/crs.xsd'
)
}
}
compileJava.dependsOn jaxb
With a converter main class, that does something along the lines of:
package foo.bar;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.JsonMappingException;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.fasterxml.jackson.module.jsonSchema.JsonSchema;
import com.fasterxml.jackson.module.jsonSchema.JsonSchemaGenerator;
import foo.bar.model.Documents;
public class Main {
public static void main(String[] args) {
ObjectMapper mapper = new ObjectMapper();
JsonSchemaGenerator schemaGen = new JsonSchemaGenerator(mapper);
try {
JsonSchema schema = schemaGen.generateSchema(Documents.class);
System.out.print(mapper.writerWithDefaultPrettyPrinter().writeValueAsString(schema));
} catch (JsonMappingException e) {
e.printStackTrace();
} catch (JsonProcessingException e) {
e.printStackTrace();
}
}
}
It is still not perfect though,... this would need to iterate over all the model classes and generate a file with the schema. Also it doesn't use references, if a class has a member of another class, the schema is printed inline instead of referencing. This requires a bit more customization with the SchemaFactoryWrapper but can be done.
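A sketch of that iteration, writing one schema file per model (the class list and output directory are assumptions; uses java.nio.file.Files/Paths, java.util.Arrays and StandardCharsets):
for (Class<?> model : Arrays.asList(Documents.class /* , other generated classes */)) {
    // Generate the JSON schema for each generated JAXB model class
    JsonSchema schema = schemaGen.generateSchema(model);
    Files.write(Paths.get("schemas", model.getSimpleName() + ".json"),
            mapper.writerWithDefaultPrettyPrinter().writeValueAsString(schema)
                    .getBytes(StandardCharsets.UTF_8));
}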
| OpenAPI | 56,018,335 | 12 |
I am currently migrating our API docs (which were Swagger 1.5) to Swagger 2.0 (OpenApi 3.0)
The API docs are Swagger docs which get generated with java annotations using maven packages swagger-annotations and swagger-jaxrs. I have already updated the pom.xml with new versions so it looks like:
<dependency>
<groupId>io.swagger.core.v3</groupId>
<artifactId>swagger-annotations</artifactId>
<version>2.0.6</version>
</dependency>
<dependency>
<groupId>io.swagger.core.v3</groupId>
<artifactId>swagger-jaxrs2</artifactId>
<version>2.0.6</version>
</dependency>
And also all the old annotations are replaced with the new ones (which change quite a lot) and looks fine.
The thing is we were using a BeanConfig to define the docs general config and auto-scan all the REST resources so the documentation got generated automatically at /swagger.json.
The problem is I can't find the "new way" of doing such a thing as creating a BeanConfig and auto-scanning the resources so everything gets generated at /swagger.json or /openapi.json (maybe now it is something like OpenAPIDefinition?)
If somebody could point me in the right direction I would be very grateful...
| After some research, I found some documentation about it in their GitHub for JAX-RS applications. The result is something similar to what I was doing, but now instead of using a BeanConfig, it uses OpenAPI and Info:
@ApplicationPath("/sample")
public class MyApplication extends Application {
public MyApplication(@Context ServletConfig servletConfig) {
super();
OpenAPI oas = new OpenAPI();
Info info = new Info()
.title("Swagger Sample App bootstrap code")
.description("This is a sample server Petstore server. You can find out more about Swagger " +
"at [http://swagger.io](http://swagger.io) or on [irc.freenode.net, #swagger](http://swagger.io/irc/). For this sample, " +
"you can use the api key `special-key` to test the authorization filters.")
.termsOfService("http://swagger.io/terms/")
.contact(new Contact()
.email("[email protected]"))
.license(new License()
.name("Apache 2.0")
.url("http://www.apache.org/licenses/LICENSE-2.0.html"));
oas.info(info);
SwaggerConfiguration oasConfig = new SwaggerConfiguration()
.openAPI(oas)
.prettyPrint(true)
.resourcePackages(Stream.of("io.swagger.sample.resource").collect(Collectors.toSet()));
try {
new JaxrsOpenApiContextBuilder()
.servletConfig(servletConfig)
.application(this)
.openApiConfiguration(oasConfig)
.buildContext(true);
} catch (OpenApiConfigurationException e) {
throw new RuntimeException(e.getMessage(), e);
}
}
}
| OpenAPI | 54,185,836 | 12 |
I have an endpoint with query parameters that use square brackets:
GET /info?sort[name]=1&sort[age]=-1
Here, name and age are the field names from my model definition.
How can I write an OpenAPI (Swagger) definition for these parameters?
| It depends on which version of OpenAPI (Swagger) you use.
OpenAPI 3.x
The sort parameter can be defined an an object with the name and age properties. The parameter serialization method should be style: deepObject and explode: true.
openapi: 3.0.0
...
paths:
/info:
get:
parameters:
- in: query
name: sort
schema:
type: object
properties:
name:
type: integer
example: 1
age:
type: integer
example: -1
style: deepObject
explode: true
responses:
'200':
description: OK
This is supported in Swagger UI 3.15.0+ and Swagger-Editor 3.5.6+.
Important: The deepObject serialization style only supports simple non-nested objects with primitive properties, such as in the example above. The behavior for nested objects and arrays of objects is not defined.
In other words, while we can define
?param[foo]=...¶m[bar]=...
there's currently no way to define more nested query parameters such as
?param[0][foo]=...¶m[1][bar]=...
or
?param[foo][0][smth]=...&?param[foo][1][smth]=
If you need the syntax for deeply nested query parameters, upvote and follow this feature request:
Support deep objects for query parameters with deepObject style
OpenAPI 2.0 (Swagger 2.0)
sort[name] and sort[age] need to be defined as individual parameters:
swagger: '2.0'
...
paths:
/info:
get:
parameters:
- in: query
name: sort[name]
type: integer
- in: query
name: sort[age]
type: integer
responses:
200:
description: OK
| OpenAPI | 48,491,688 | 12 |
I want to extend the "200SuccessDefault" response with a schema or example.
paths:
/home:
...
responses:
200:
$ref: '#/components/responses/200SuccessDefault'
content:
application/json:
schema:
type: array
items:
$ref: '#/components/schemas/PieChartElement'
examples:
PieChart:
$ref: '#/components/examples/PieChart_1'
This approach runs into an error, the schema and examples fields are ignored:
Sibling values alongside $refs are ignored. To add properties to a $ref, wrap the $ref into allOf, or move the extra properties into the referenced definition (if applicable).
I tried allOf:
paths:
/home:
responses:
200:
allOf:
- $ref: '#/components/responses/200SuccessDefault'
- content:
application/json:
schema:
type: array
items:
$ref: '#/components/schemas/PieChartElement'
examples:
PieChart:
$ref: '#/components/examples/PieChart_1'
This approach runs into the error:
should NOT have additional properties additionalProperty: allOf
should have required property 'description' missingProperty: description
| You cannot extend a referenced response object. But you can use a shared schema object and extend it using allOf within schema.
Inside allOf you can put:
your $ref
a new type extending your default response
If you want to give an example of an entire extended response (JSON), just put it into "application/json".
An example of OpenAPI would be:
"202":
description: Extended response sample
content:
application/json:
schema:
allOf:
- $ref: "#/components/schemas/application"
- type: object
properties:
anotherProperty:
type: string
maxLength: 200
example: "Property example"
example: {id: 1234, anotherProperty: "Hello"}
| OpenAPI | 72,868,180 | 11 |
I have an application which provides an API with JAX-RS (Java API for RESTful Web Services / JSR-311).
For documentation purposes I provide an URL according to the OpenAPI-Specification, which is generated by Eclipse MicroProfile OpenAPI.
Everything is working fine, except the descriptions of the methods and parameters, which I need to add twice - in annotations and in JavaDoc:
/**
* Finds all resources with the given prefix.
*
* @param prefix
* the prefix of the resource
* @return the resources that start with the prefix
*/
@GET
@Path("/find/{prefix}")
@Produces(MediaType.APPLICATION_JSON)
@Operation(description = "Finds all resources with the given prefix")
public List<Resource> find(
@Parameter(description = "The prefix of the resource")
@PathParam("prefix") final String prefix) {
...
}
I know that no runtime library can read the JavaDoc (because it is not part of the class files), which is the main reason for the annotations. But I wonder if there is some other option in one of the OpenAPI generation tools (Swagger, Eclipse MicroProfile OpenAPI, ...) that spares me from manually syncing the documentation.
In another project, for example, I'm using a doclet which serializes the JavaDoc and stores it in the class path, to present Beans API documentation to the user at runtime.
But even if I make use of this doclet here, I see no option to provide those JavaDoc descriptions to the OpenAPI libraries at runtime.
| I got it running with Eclipse Microprofile OpenAPI.
I had to define my own OASFilter:
public class JavadocOASDescriptionFilter implements OASFilter {
@Override
public void filterOpenAPI(final OpenAPI openAPI) {
openAPI.getComponents().getSchemas().forEach(this::initializeSchema);
openAPI.getPaths().forEach(this::initializePathItem);
}
private void initializeSchema(final String name, final Schema schema) {
final SerializedJavadoc javadoc = findJavadocForSchema(name);
if (StringUtils.isEmpty(schema.getDescription())) {
schema.setDescription(javadoc.getTypeComment());
}
if (schema.getProperties() != null) {
schema.getProperties().forEach((property, propertySchema) -> {
if (StringUtils.isEmpty(propertySchema.getDescription())) {
propertySchema.setDescription(javadoc.getAttributeComments().get(property));
}
});
}
}
...
}
Then I had to declare that filter in META-INF/microprofile-config.properties:
mp.openapi.filter=mypackage.JavadocOASDescriptionReader
See here for the discussion on this topic: https://github.com/eclipse/microprofile-open-api/issues/485
| OpenAPI | 65,935,055 | 11 |
I have a class where one of the properties can be a string or an array of strings; I am not sure how I should define it in Swagger.
@ApiProperty({
description: `to email address`,
type: ???, <- what should be here?
required: true,
})
to: string | Array<string>;
I tried
@ApiProperty({
description: `to email address(es)`,
additionalProperties: {
oneOf: [
{ type: 'string' },
{ type: 'Array<string>' },
],
},
required: true,
})
and
@ApiProperty({
description: `to email address(es)`,
additionalProperties: {
oneOf: [
{ type: 'string' },
{ type: 'string[]' },
],
},
required: true,
})
and
@ApiProperty({
description: `to email address(es)`,
additionalProperties: {
oneOf: [
{ type: 'string' },
{ type: '[string]' },
],
},
required: true,
})
but the result is like below image, which is not correct
| Please try
@ApiProperty({
oneOf: [
{ type: 'string' },
{
type: 'array',
items: {
type: 'string'
}
}
]
})
Array<TItem> can be expressed in OpenAPI with {type: 'array', items: { type: TItem } }
| OpenAPI | 64,939,247 | 11 |
I have an OpenAPI 3.0 spec (in YAML format), and would like to generate Java code for the API. I want to do this as part of an automated build (preferably using Gradle), so I can create the service interface, and the implementation of the interface as part of an automated process.
This working example shows how to do it, however it uses a Swagger 2.0 specification YAML: https://github.com/galovics/swagger-codegen-gradle/tree/first-server-side
I've forked this example and added an OpenAPI 3.0 spec, however it then fails to build: https://github.com/robjwilkins/swagger-codegen-gradle/tree/openapi_v3_test
The error is:
failed to read resource listing
com.fasterxml.jackson.core.JsonParseException: Unrecognized token
'openapi': was expecting (JSON String, Number, Array, Object or token
'null', 'true' or 'false') at [Source: (String)"openapi: 3.0.0
(PR showing changes: https://github.com/robjwilkins/swagger-codegen-gradle/pull/1/files)
My understanding is the code which needs updated is in build.gradle:
buildscript {
repositories {
mavenCentral()
}
dependencies {
classpath("io.swagger.codegen.v3:swagger-codegen:3.0.16")
}
}
possibly io.swagger.codegen.v3:swagger-codegen:3.0.16 doesn't recognize OpenAPI 3.0?
The Swagger Core v3 project seems to be focused on generating a YAML/JSON spec from code (rather than code from spec): https://github.com/swagger-api/swagger-core
Any help with this problem would be appreciated. Thanks :)
| I've now got this working (thanks to @Helen for help)
The edits required were in build.grade.
First I had to amend the build scripts to pull in a different dependency:
buildscript {
repositories {
mavenCentral()
}
dependencies {
classpath('io.swagger.codegen.v3:swagger-codegen-maven-plugin:3.0.16')
}
}
The change some of the imports:
import io.swagger.codegen.v3.CodegenConfigLoader
import io.swagger.codegen.v3.DefaultGenerator
import io.swagger.codegen.v3.ClientOptInput
import io.swagger.codegen.v3.ClientOpts
import io.swagger.v3.parser.OpenAPIV3Parser
And update the generateServer task:
ext.apiPackage = 'com.example.api'
ext.modelPackage = 'com.example.model'
task generateServer {
doLast {
def openAPI = new OpenAPIV3Parser().read(rootProject.swaggerFile.toString(), null, null)
def clientOpts = new ClientOptInput().openAPI(openAPI)
def codegenConfig = CodegenConfigLoader.forName('spring')
codegenConfig.setOutputDir(project.buildDir.toString())
clientOpts.setConfig(codegenConfig)
def clientOps = new ClientOpts()
clientOps.setProperties([
'dateLibrary' : 'java8', // Date library to use
'useTags' : 'true', // Use tags for the naming
'interfaceOnly' : 'true', // Generating the Controller API interface and the models only
'apiPackage' : project.apiPackage,
'modelPackage' : project.modelPackage
])
clientOpts.setOpts(clientOps)
def generator = new DefaultGenerator().opts(clientOpts)
generator.generate() // Executing the generation
}
}
updated build.gradle is here: https://github.com/robjwilkins/swagger-codegen-gradle/blob/openapi_v3_test/user-service-contract/build.gradle
| OpenAPI | 59,875,910 | 11 |
I need to define in OpenAPI a JSON response with an array. The array always contains 2 items and the first one is always a number and second one is always a string.
[1, "a"] //valid
["a", 1] //invalid
[1] //invalid
[1, "a", 2] //invalid
I've found out that JSON schema does support that by passing a list of items in items instead of single object (source), but OpenAPI explicitly forbids that and accepts only a single object (source). How can that be expressed in OpenAPI?
| You need OpenAPI 3.1 to define tuples precisely. In earlier versions, you can only define generic arrays without a specific item order.
OpenAPI 3.1
Your example can be defined as:
# openapi: 3.1.0
type: array
prefixItems:
# The 1st item
- type: integer
description: Description of the 1st item
# The 2nd item
- type: string
description: Description of the 2nd item
# Define the 3rd etc. items if needed
# ...
# The total number of items in this tuple
minItems: 2
maxItems: 2
additionalItems: false # can be omitted if `maxItems` is specified
OpenAPI 3.1 is fully compatible with JSON Schema 2020-12, including the prefixItems keyword (the new name for the tuple form of items from earlier JSON Schema drafts).
OpenAPI 3.0.x and earlier
Earlier OpenAPI versions do not have a way to describe tuples. The most you can do is define "an array of 2 items that can be either number or string", but you cannot specifically define the type of the 1st and 2nd items. You can, however, mention additional constraints in the schema description.
# openapi: 3.0.0
type: array
items:
oneOf:
- type: integer
- type: string
minItems: 2
maxItems: 2
description: >-
The first item in the array MUST be an integer,
and the second item MUST be a string.
If you are designing a new API rather than describing an existing one, a possible workaround is to use an object instead of an array to represent this data structure.
| OpenAPI | 57,464,633 | 11 |
I am writing a new API and documenting it using Swagger/OpenAPI. It seems to be a good standard to document error responses, that the developer can expect to encounter.
But I cannot find any guide lines or best practices about Internal Server Error. Every path could in theory throw an unhandled exception. I do not expect it to happen, but it might. Should all paths have a response with status code 500 "Internal Server Error" or should I only document responses the developer can do anything about, i.e. 2xx, 3xx and 4xx?
| The offical documentation shows an example for specifying all 5xx status codes in the responses section, but it does not go into details about the specific status code, or the message returned. It also mentions that the API specification should only contain known errors:
Note that an API specification does not necessarily need to cover all possible HTTP response codes, since they may not be known in advance. However, it is expected to cover successful responses and any known errors. By “known errors” we mean, for example, a 404 Not Found response for an operation that returns a resource by ID, or a 400 Bad Request response in case of invalid operation parameters.
You could follow the same approach and specify it like in the example. I think it's not important or even recommended to try to describe it more specifically, since you might not be able to cover all cases anyway and the client is not expected to act on the message returned for internal server errors (possibly other than retrying later). So for example, I would not recommend specifying a message format for it.
Omitting any responses with 5xx HTTP error codes makes sense as well.
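If you do want to acknowledge server errors without promising details, a status-code range is enough; a minimal sketch:
responses:
  '200':
    description: OK
  '5XX':
    description: Unexpected server error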
| OpenAPI | 54,989,351 | 11 |
I've tried to add a nested array of arbitrary types.
These are my annotations:
* @OA\Property(
* @OA\Schema(
* type="array",
* @OA\Items(
* type="array",
* @OA\Items(type={})
* )
* ),
* description="bla bla bla"
* )
| I've found the solution:
* @OA\Property(
* type="array",
* @OA\Items(
* type="array",
* @OA\Items()
* ),
* description="bla bla bla"
* )
The issue was @OA\Schema
| OpenAPI | 53,947,062 | 11 |
In the OpenAPI 3.0 Specification, the root OpenAPI Object has the servers property which is an array of Server Objects. And the Path Item Object also allows an optional servers property.
The description given in the Specification does not give a clear idea of how servers can be helpful.
What is the significance of the servers property? Do we have any example which explains the use cases of servers both as a direct property of the root OpenAPI object and also as a property of a path item?
| servers specifies one or more target servers for the API, in other words, the base URL for API calls. The endpoint paths (e.g. /users/{id}) are defined relative to these servers. Some APIs have a single target server; others may offer several servers, e.g. sandbox vs. production, or regional servers for different geographical areas (example: AWS).
By default, all operations in an OpenAPI definition use the globally defined servers, but servers may also be overridden for specific paths and operations. This is useful for APIs where some operations use a different server than the rest of the operations. This way you can document all operations in a single API definition instead of splitting it into multiple definitions, one per server.
Example: Dropbox API
Most endpoints are on the api.dropboxapi.com domain.
Content upload/download endpoints are on content.dropboxapi.com.
Longpoll endpoint is on notify.dropboxapi.com.
OAuth endpoints are on www.dropbox.com.
The Dropbox API definition might look like this:
openapi: 3.0.0
info:
  title: Dropbox API
  version: 1.0.0
servers:
  - url: 'https://api.dropboxapi.com/2'
paths:
  # These endpoints are on api.dropboxapi.com (use global `servers`)
  /file_requests/list:
    ...
  /users/get_account:
    ...
  /files/upload:
    # File upload/download uses another target server
    servers:
      - url: 'https://content.dropboxapi.com/2'
    ...
  /files/list_folder/longpoll:
    # Longpolling uses another target server
    servers:
      - url: 'https://notify.dropboxapi.com/2'
    ...
Check out the API Host and Base Path guide for more details and examples.
| OpenAPI | 50,546,573 | 11 |
I'm trying to document an existing API that contains various endpoints whose authentication is optional. That is, more data is returned if the user is authorized than if they were not authorized.
I could not find this explicitly in the OpenAPI v3 spec. Is there a coding trick to define this situation?
My present work-around is to code for authorization, yet in a description of the endpoint write that authorization is optional. This works and seems adequate. Yet the purist in me wonders if there is another way.
| To make security optional, add an empty requirement {} to the security array:
security:
  - {}          # <----
  - api_key: []
This means the endpoint can be called with or without security.
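Put together, a minimal sketch might look like this (the api_key scheme and the X-API-Key header name are illustrative):
components:
  securitySchemes:
    api_key:
      type: apiKey
      in: header
      name: X-API-Key
security:
  - {}           # anonymous access allowed
  - api_key: []  # or authenticated with an API key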
Source: this comment in the OpenAPI Spec repository.
| OpenAPI | 47,659,324 | 11 |
At the time of writing this the OpenAPI 3 spec is relatively new. I am struggling to find any documentation generators that support version 3.0.
Does anyone know of generators that support OpenAPI v3.0?
| You can try OpenAPI Generator (https://openapi-generator.tech), which supports both OpenAPI spec v2, v3 and released a stable version (3.0.0) a few days ago.
Using docker, you can easily generate the API documentation:
docker run --rm -v ${PWD}:/local openapitools/openapi-generator-cli generate \
-i https://raw.githubusercontent.com/openapitools/openapi-generator/master/modules/openapi-generator/src/test/resources/2_0/petstore.yaml \
-g html2 \
-o /local/out/html2
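If you prefer not to use Docker, the generator is also published as an npm wrapper (this assumes a Java runtime is installed); an invocation along these lines should produce the same output:
npm install -g @openapitools/openapi-generator-cli
openapi-generator-cli generate -i petstore.yaml -g html2 -o out/html2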
| OpenAPI | 46,290,950 | 11 |
I have generated my API client with openapi-generator-cli generate -i https://linktomybackendswagger/swagger.json -g typescript-axios -o src/components/api --additional-properties=supportsES6=true
Now I have all the files inside my project but I have no clue how to implement this.
How do I instantiate the API? Where do I configure the access token to be used? How do I know each method name for an endpoint?
After 2 hours of googling I can't seem to find a documentation for what seems like the most basic setup questions. Maybe I'm just looking in the wrong places. Can someone point me in the right direction?
| Ok, so I figured out a way that I think is clean that I will document here for others that are going down the same path, which is:
Using an API that is using Authorization: Bearer <Token here>
Created the client with openapi-generator-cli using -g typescript-axios
Using OAS3
Let's say you have an endpoint called UserPolicies. After generating the code via CLI each endpoint will have its own class inside the generated file api.ts with the name extended like so UserPoliciesApi.
For using that endpoint the following setup is required.
Example: Inside UserPolicyList.tsx:
import { UserPoliciesApi } from './components/api/api';
import { Configuration } from './components/api/configuration';

const openapiConfig = new Configuration();
openapiConfig.baseOptions = {
  headers: { Authorization: 'Bearer ' + cookies.access_token },
};
const openapi = new UserPoliciesApi(openapiConfig);
Let's assume you want to make a GET call for api/Policies; you can do so with:
openapi.userPoliciesGetPolicies().then((response) => {
  console.log(response);
})
.catch((error) => {
  console.log(error);
});
Now, what I found inconvenient with this design is the boilerplate code necessary for making one simple api call. I wanted to be able to simply do one import and already have the access_token setup.
So I created a wrapper class like this MyApi.tsx:
import { Cookies } from 'react-cookie';
import { Configuration } from './components/api/configuration';
import { UserPoliciesApi } from './components/api/api';

class MyApi {
  private cookies: Cookies;

  constructor() {
    this.cookies = new Cookies();
  }

  private accessToken = () => {
    return this.cookies.get('access_token');
  };

  private configuration = () => {
    const openapiConfig = new Configuration();
    openapiConfig.baseOptions = {
      headers: { Authorization: 'Bearer ' + this.accessToken() },
    };
    return openapiConfig;
  };

  public userPoliciesApi = () => {
    const api = new UserPoliciesApi(this.configuration());
    return api;
  };
}

export default MyApi;
Now you can easily replace the boilerplate and make the call like this:
Inside UserPolicyList.tsx:
import MyApi from './components/myapi/MyApi';

const api = new MyApi();
const userPoliciesApi = api.userPoliciesApi();
userPoliciesApi.userPoliciesGetPolicies().then((response) => {
  console.log(response);
})
.catch((error) => {
  console.log(error);
});
| OpenAPI | 70,185,507 | 10 |
I do have my .net data classes, containing a few decimal fields (for example quantity). I generate an openapi.json out of it running dotnet swagger.
...
"quantity": {
"type": "number",
"format": "double"
},
...
As you can see it produces a type "number" with format "double". And nswag creates a client, where the field is a double too.
Can I configure somehow to set format as decimal? When I edit the openapi.json manually, the nswag creation produces a decimal as expected.
I tried to add this annotation, but it doesn't change anything: [JsonSchema(JsonObjectType.Number, Format = "decimal")]
| Try adding this line into your .AddSwaggerGen() definition
services.AddSwaggerGen(c =>
{
    c.MapType<decimal>(() => new OpenApiSchema { Type = "number", Format = "decimal" });
    // ...
});
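With that mapping registered, the generated openapi.json should describe decimal properties along these lines (illustrative fragment):
"quantity": {
  "type": "number",
  "format": "decimal"
}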
| OpenAPI | 69,523,654 | 10 |
I use OpenAPI spec to generate Java POJOs. What do I need to specify in Open API yaml to generate the equivalent of below POJO ?
...
@JsonIgnore
public String ignoredProperty;
...
I have the yaml spec as below
openapi: 3.0.0
info:
  title: Cool API
  description: A Cool API spec
  version: 0.0.1
servers:
  - url: http://api.cool.com/v1
    description: Cool server for testing
paths:
  /
  ...
components:
  schemas:
    MyPojo:
      type: object
      properties:
        id:
          type: integer
        name:
          type: string
        # I want the below attribute to be ignored as a part of JSON
        ignoreProperty:
          type: string
| The openapi generator supports vendor extensions. Specifically, for the Java generator, it supports the following extensions as of the time of writing. However, an up-to-date list can be found here.
| Extension name | Description | Applicable for | Default value |
|---|---|---|---|
| x-discriminator-value | Used with model inheritance to specify value for discriminator that identifies current model | MODEL | |
| x-implements | Ability to specify interfaces that the model must implement | MODEL | empty array |
| x-setter-extra-annotation | Custom annotation that can be specified over java setter for specific field | FIELD | When field is array & uniqueItems, then this extension is used to add @JsonDeserialize(as = LinkedHashSet.class) over setter, otherwise no value |
| x-tags | Specify multiple swagger tags for operation | OPERATION | null |
| x-accepts | Specify custom value for 'Accept' header for operation | OPERATION | null |
| x-content-type | Specify custom value for 'Content-Type' header for operation | OPERATION | null |
| x-class-extra-annotation | List of custom annotations to be added to model | MODEL | null |
| x-field-extra-annotation | List of custom annotations to be added to property | FIELD | null |
| x-webclient-blocking | Specifies if method for specific operation should be blocking or non-blocking (ex: return Mono<T>/Flux<T> or return T/List<T>/Set<T> & execute .block() inside generated method) | OPERATION | false |
You can use the x-field-extra-annotation vendor extension listed above to add annotations to any field. So, for your example, you can add the following:
openapi: 3.0.0
info:
  title: Cool API
  description: A Cool API spec
  version: 0.0.1
servers:
  - url: http://api.cool.com/v1
    description: Cool server for testing
paths:
  /
  ...
components:
  schemas:
    MyPojo:
      type: object
      properties:
        id:
          type: integer
        name:
          type: string
        # I want the below attribute to be ignored as a part of JSON
        ignoreProperty:
          type: string
          x-field-extra-annotation: "@com.fasterxml.jackson.annotation.JsonIgnore"
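With this extension in place, the generated POJO should come out roughly like the following (a hypothetical excerpt; the exact output varies by generator version and configuration):
public class MyPojo {
    private Integer id;
    private String name;

    // the vendor extension places the extra annotation on this field
    @com.fasterxml.jackson.annotation.JsonIgnore
    private String ignoreProperty;

    // getters and setters omitted
}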
| OpenAPI | 64,898,455 | 10 |
Swagger documentation says you can do that:
https://swagger.io/docs/specification/grouping-operations-with-tags/
But unfortunately drf-yasg does not implement this feature:
https://github.com/axnsan12/drf-yasg/issues/454
It is said that I can add a custom generator class, but that is a very general answer. I can see that drf_yasg.openapi.Swagger takes an info block, and I suspect that this would be the right place to put a global tags section as an additional init argument, but that goes deeper than customizing the generator class, and I lack knowledge of this module.
Does anybody have a solution to this particular problem, or at least a link to some sort of tutorial on how to properly customize the generator class?
| Not sure if this is exactly what you are looking for, but I think it might help.
To set tags I use the @swagger_auto_schema decorator, which can be applied in a few different ways depending mostly on the type of Views used in your project. Complete details can be found in the drf-yasg docs, here.
When using Views derived from APIView, you could do something like this:
class ClientView(APIView):
    @swagger_auto_schema(tags=['my custom tag'])
    def get(self, request, client_id=None):
        pass
According to the docs, the limitation is that tags only takes a list of strs as its value. So, from here on, I believe there is no support for the extra tag attributes described in the Swagger docs.
Anyway, if you only need to define a summary or a description for each operation in the generated docs, you can define them using the decorator or a class-level docstring. Here is an example:
class ClientView(APIView):
    '''
    get:
    Client List serialized as JSON.
    This is a description from a class level docstring.
    '''
    def get(self, request, client_id=None):
        pass

    @swagger_auto_schema(
        operation_description="POST description override using decorator",
        operation_summary="this is the summary from decorator",
        # request_body is used to specify parameters
        request_body=openapi.Schema(
            type=openapi.TYPE_OBJECT,
            required=['name'],
            properties={
                'name': openapi.Schema(type=openapi.TYPE_STRING),
            },
        ),
        tags=['my custom tag']
    )
    def post(self, request):
        pass
Good luck!
| OpenAPI | 62,572,389 | 10 |
What is the actual advantage of using OpenApi over swagger?
I am new to openApi technology, just wanted to know what more features are present in openApi than in swagger. The online documents didn't helped me. Can anyone help me.
| OpenAPI is essentially a further development of Swagger, which is why the specification continues at version 3.0.0 rather than starting over at 1.0.0.
If you read the swagger blog Swagger was handed over to the OpenAPI Initiative, and all the swagger tools like editor.swagger.io support openapi, and conversions between the two.
as they write
OpenAPI = Specification
Swagger = Tools for implementing the specification
(and swagger is also the term for the first two iterations of the spec)
If you are not restricted to a specific version, I would recommend OpenAPI, since the community is in theory bigger and a lot has happened since Swagger v2.0.0 in terms of simplification and ease of use.
More security schemes are supported, and parameter types are enhanced based on whether they are in the path, query, header or a cookie.
Also there is an improvement in how you can define examples. I have participated in a project where we would have liked to use OpenAPI instead of Swagger for this reason; unfortunately, the API gateway did not support it yet...
| OpenAPI | 61,019,331 | 10 |
Before Swashbuckle 5 it was possible to define and register a ISchemaFilter that could provide an example implementation of a model:
public class MyModelExampleSchemaFilter : ISchemaFilter
{
    public void Apply(Schema schema, SchemaFilterContext context)
    {
        if (context.SystemType.IsAssignableFrom(typeof(MyModel)))
        {
            schema.Example = new MyModel
            {
                Name = "model name",
                value = 42
            };
        }
    }
}
The Schema.Example would take an arbitrary object, and it would properly serialize when generating the OpenApi Schema.
However, with the move to .NET Core 3 and Swashbuckle 5 the Schema.Example property is no longer an object and requires the type Microsoft.OpenApi.Any.IOpenApiAny. There does not appear to be a documented path forward regarding how to provide a new example.
I've attempted, based on looking at code within Microsoft.OpenApi, to build my own implementation of an IOpenApiAny, but any attempt to use it to generate an example fails from within Microsoft.OpenApi.Writers.OpenApiWriterAnyExtensions.WriteObject(IOpenApiWriter writer, OpenApiObject entity) before its Write method is even called. I don't claim that the code below is fully correct, but I would have expected it, at a minimum, to light up a path toward how to move forward.
/// <summary>
/// A class that recursively adapts a unidirectional POCO tree into an <see cref="IOpenApiAny" />
/// </summary>
/// <remarks>
/// <para>This will fail if a graph is provided (backwards and forwards references)</para>
/// </remarks>
public class OpenApiPoco : IOpenApiAny
{
    /// <summary>
    /// The model to be converted
    /// </summary>
    private readonly object _model;

    /// <summary>
    /// Initializes a new instance of the <see cref="OpenApiPoco" /> class.
    /// </summary>
    /// <param name="model">the model to convert to an <see cref="IOpenApiAny" /> </param>
    public OpenApiPoco(object model)
    {
        this._model = model;
    }

    /// <inheritdoc />
    public AnyType AnyType => DetermineAnyType(this._model);

    #region From Interface IOpenApiExtension

    /// <inheritdoc />
    public void Write(IOpenApiWriter writer, OpenApiSpecVersion specVersion)
    {
        this.Write(this._model, writer, specVersion);
    }

    #endregion

    private static AnyType DetermineAnyType(object model)
    {
        if (model is null)
        {
            return AnyType.Null;
        }

        var modelType = model.GetType();
        if (modelType.IsAssignableFrom(typeof(int))
            || modelType.IsAssignableFrom(typeof(long))
            || modelType.IsAssignableFrom(typeof(float))
            || modelType.IsAssignableFrom(typeof(double))
            || modelType.IsAssignableFrom(typeof(string))
            || modelType.IsAssignableFrom(typeof(byte))
            || modelType.IsAssignableFrom(typeof(byte[])) // Binary or Byte
            || modelType.IsAssignableFrom(typeof(bool))
            || modelType.IsAssignableFrom(typeof(DateTimeOffset)) // DateTime
            || modelType.IsAssignableFrom(typeof(DateTime)) // Date
           )
        {
            return AnyType.Primitive;
        }

        if (modelType.IsAssignableFrom(typeof(IEnumerable))) // test after primitive check so as to avoid catching string and byte[]
        {
            return AnyType.Array;
        }

        return AnyType.Object; // Assume object
    }

    private void Write(object model, [NotNull] IOpenApiWriter writer, OpenApiSpecVersion specVersion)
    {
        if (writer is null)
        {
            throw new ArgumentNullException(nameof(writer));
        }

        if (model is null)
        {
            writer.WriteNull();
            return;
        }

        var modelType = model.GetType();
        if (modelType.IsAssignableFrom(typeof(int))
            || modelType.IsAssignableFrom(typeof(long))
            || modelType.IsAssignableFrom(typeof(float))
            || modelType.IsAssignableFrom(typeof(double))
            || modelType.IsAssignableFrom(typeof(string))
            || modelType.IsAssignableFrom(typeof(byte[])) // Binary or Byte
            || modelType.IsAssignableFrom(typeof(bool))
            || modelType.IsAssignableFrom(typeof(DateTimeOffset)) // DateTime
            || modelType.IsAssignableFrom(typeof(DateTime)) // Date
           )
        {
            this.WritePrimitive(model, writer, specVersion);
            return;
        }

        if (modelType.IsAssignableFrom(typeof(IEnumerable))) // test after primitive check so as to avoid catching string and byte[]
        {
            this.WriteArray((IEnumerable) model, writer, specVersion);
            return;
        }

        this.WriteObject(model, writer, specVersion); // Assume object
    }

    private void WritePrimitive(object model, IOpenApiWriter writer, OpenApiSpecVersion specVersion)
    {
        switch (model.GetType())
        {
            case TypeInfo typeInfo
                when typeInfo.IsAssignableFrom(typeof(string)): // string
                writer.WriteValue((string) model);
                break;
            case TypeInfo typeInfo
                when typeInfo.IsAssignableFrom(typeof(byte[])): // assume Binary; can't differentiate from Byte and Binary based on type alone
                // if we chose to treat byte[] as Byte we would Base64 it to string. eg: writer.WriteValue(Convert.ToBase64String((byte[]) propertyValue));
                writer.WriteValue(Encoding.UTF8.GetString((byte[]) model));
                break;
            case TypeInfo typeInfo
                when typeInfo.IsAssignableFrom(typeof(bool)): // boolean
                writer.WriteValue((bool) model);
                break;
            case TypeInfo typeInfo
                when typeInfo.IsAssignableFrom(typeof(DateTimeOffset)): // DateTime as DateTimeOffset
                writer.WriteValue((DateTimeOffset) model);
                break;
            case TypeInfo typeInfo
                when typeInfo.IsAssignableFrom(typeof(DateTime)): // Date as DateTime
                writer.WriteValue((DateTime) model);
                break;
            case TypeInfo typeInfo
                when typeInfo.IsAssignableFrom(typeof(double)): // Double
                writer.WriteValue((double) model);
                break;
            case TypeInfo typeInfo
                when typeInfo.IsAssignableFrom(typeof(float)): // Float
                writer.WriteValue((float) model);
                break;
            case TypeInfo typeInfo
                when typeInfo.IsAssignableFrom(typeof(int)): // Integer
                writer.WriteValue((int) model);
                break;
            case TypeInfo typeInfo
                when typeInfo.IsAssignableFrom(typeof(long)): // Long
                writer.WriteValue((long) model);
                break;
            case TypeInfo typeInfo
                when typeInfo.IsAssignableFrom(typeof(Guid)): // Guid (as a string)
                writer.WriteValue(model.ToString());
                break;
            default:
                throw new ArgumentOutOfRangeException(nameof(model),
                    model?.GetType().Name,
                    "unexpected model type");
        }
    }

    private void WriteArray(IEnumerable model, IOpenApiWriter writer, OpenApiSpecVersion specVersion)
    {
        writer.WriteStartArray();
        foreach (var item in model)
        {
            this.Write(item, writer, specVersion); // recursive call
        }
        writer.WriteEndArray();
    }

    private void WriteObject(object model, IOpenApiWriter writer, OpenApiSpecVersion specVersion)
    {
        var propertyInfos = model.GetType().GetProperties();
        writer.WriteStartObject();
        foreach (var property in propertyInfos)
        {
            writer.WritePropertyName(property.Name);
            var propertyValue = property.GetValue(model);
            switch (propertyValue.GetType())
            {
                case TypeInfo typeInfo // primitives
                    when typeInfo.IsAssignableFrom(typeof(string)) // string
                         || typeInfo.IsAssignableFrom(typeof(byte[])) // assume Binary or Byte
                         || typeInfo.IsAssignableFrom(typeof(bool)) // boolean
                         || typeInfo.IsAssignableFrom(typeof(DateTimeOffset)) // DateTime as DateTimeOffset
                         || typeInfo.IsAssignableFrom(typeof(DateTime)) // Date as DateTime
                         || typeInfo.IsAssignableFrom(typeof(double)) // Double
                         || typeInfo.IsAssignableFrom(typeof(float)) // Float
                         || typeInfo.IsAssignableFrom(typeof(int)) // Integer
                         || typeInfo.IsAssignableFrom(typeof(long)) // Long
                         || typeInfo.IsAssignableFrom(typeof(Guid)): // Guid (as a string)
                    this.WritePrimitive(propertyValue, writer, specVersion);
                    break;
                case TypeInfo typeInfo // Array test after primitive check so as to avoid catching string and byte[]
                    when typeInfo.IsAssignableFrom(typeof(IEnumerable)): // Enumerable as array of objects
                    this.WriteArray((IEnumerable) propertyValue, writer, specVersion);
                    break;
                case TypeInfo typeInfo // object
                    when typeInfo.IsAssignableFrom(typeof(object)): // Object
                default:
                    this.Write(propertyValue, writer, specVersion); // recursive call
                    break;
            }
        }
        writer.WriteEndObject();
    }
}
What is the proper way to transition ISchemaFilter examples to Swashbuckle 5.0 so that the appropriate serialization rules are respected?
| They have an example on the repo:
https://github.com/domaindrivendev/Swashbuckle.AspNetCore/blob/9bb9be9b318c576d236152f142aafa8c860fb946/test/WebSites/Basic/Swagger/ExamplesSchemaFilter.cs#L8
public class ExamplesSchemaFilter : ISchemaFilter
{
    public void Apply(OpenApiSchema schema, SchemaFilterContext context)
    {
        schema.Example = GetExampleOrNullFor(context.Type);
    }

    private IOpenApiAny GetExampleOrNullFor(Type type)
    {
        switch (type.Name)
        {
            case "Product":
                return new OpenApiObject
                {
                    [ "id" ] = new OpenApiInteger(123),
                    [ "description" ] = new OpenApiString("foobar"),
                    [ "price" ] = new OpenApiDouble(14.37)
                };
            default:
                return null;
        }
    }
}
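For completeness, the filter still has to be registered with Swashbuckle; a minimal sketch of the wiring (inside ConfigureServices or the equivalent startup code):
services.AddSwaggerGen(c =>
{
    // run the filter for every generated schema
    c.SchemaFilter<ExamplesSchemaFilter>();
});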
| OpenAPI | 60,515,825 | 10 |
I have an endpoint to create an address and one to update it. Describing this in an OpenAPI spec, I'd like to use a component for the address so that I don't have to specify the address twice. Now the problem is that the address object used for updating should include a property "id", but the one used for creating doesn't.
So basically, I'm looking for a way to describe the full address (incl. the id property) in the components section and then reference to the create endpoint, but excluding the "id" property there.
| You can accomplish this using the readOnly keyword, which provides a standardized method to achieve the desired outcome.
You can use the readOnly and writeOnly keywords to mark specific properties as read-only or write-only. This is useful, for example, when GET returns more properties than used in POST – you can use the same schema in both GET and POST and mark the extra properties as readOnly. readOnly properties are included in responses but not in requests, and writeOnly properties may be sent in requests but not in responses.
Example:
type: object
properties:
  id:
    # Returned by GET, not used in POST/PUT/PATCH
    type: integer
    readOnly: true
  username:
    type: string
  password:
    # Used in POST/PUT/PATCH, not returned by GET
    type: string
    writeOnly: true
If a readOnly or writeOnly property is included in the required list, required affects just the relevant scope – responses only or requests only. That is, read-only required properties apply to responses only, and write-only required properties – to requests only.
See the OpenAPI documentation on readOnly and writeOnly for details.
Petstore example:
Notice that we use the same model for both the payload and response. However, the id property is excluded from the payload due to its readOnly attribute being set to true.
...
Pet:
  required:
    - name
    - photoUrls
  type: object
  properties:
    id:
      type: integer
      format: int64
      example: 10
      readOnly: true
    ...
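To illustrate the effect with hypothetical payloads: a POST request body omits the read-only id, while the GET response includes it:
# POST request body (id omitted because it is readOnly)
{ "name": "doggie", "photoUrls": [] }

# GET response body (id included)
{ "id": 10, "name": "doggie", "photoUrls": [] }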
| OpenAPI | 60,472,631 | 10 |
I am looking for the proper way to specify an Authorization header with a custom type/prefix like "ApiKey" in OpenAPI 3.
The custom Authorization header should look like
Authorization: ApiKey myAPIKeyHere
All my attempts to specify the securitySchemes entry with type: apiKey seem to produce other results...
The closest I got is something like:
securitySchemes:
  ApiKeyAuth:
    type: apiKey
    in: header
    name: ApiKey
... but this generates the ApiKey: myAPIKeyHere header instead of Authorization: ApiKey myAPIKeyHere.
How can such a requirement be specified?
| I think I have found a way that seems acceptable - although not perfect. Would like to see something better in the future...
It seems that there is no other way than to add the custom type to the value (aided by a description like below).
components:
  securitySchemes:
    ApiKey:
      type: apiKey
      name: Authorization
      in: header
      description: 'Prefix the value with "ApiKey" to indicate the custom authorization type'

security:
  - ApiKey: []
This does at least produce the correct header in curl (if applied correctly).
| OpenAPI | 59,694,733 | 10 |
OpenAPI is good for RESTful services, and at the moment I'm hacking it to describe an asynchronous messaging system (specifically Kafka) by using POST to a /topic so that I can use redoc to create a website for the API.
I am trying to see if there's already an established way of documenting this, especially since the GET /events endpoint used for event sourcing is getting larger and larger by the day.
| It seems asyncAPI is basically what you are looking for: openapi but for topics instead of REST endpoints.
https://www.asyncapi.com/docs/getting-started/coming-from-openapi/
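A minimal AsyncAPI sketch for a Kafka topic might look like this (the server, channel and payload names are illustrative):
asyncapi: '2.0.0'
info:
  title: Events API
  version: '1.0.0'
servers:
  production:
    url: broker.example.com:9092
    protocol: kafka
channels:
  events:            # maps to the Kafka topic name
    subscribe:
      message:
        payload:
          type: object
          properties:
            eventId:
              type: string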
| OpenAPI | 59,143,626 | 10 |
This is my code:
definitions:
  User:
    type: object
    properties:
      id:
        type: integer
      username:
        type: string
      first_name:
        type: string
      last_name:
        type: string
      password:
        type: string
      created_at:
        type: string
        format: date-time
      updated_at:
        type: string
        format: date-time
    required:
      - username
      - first_name
      - last_name
      - password

/api/users:
  post:
    description: Add a new user
    operationId: store
    parameters:
      - name: user
        description: User object
        in: body
        required: true
        type: string
        schema:
          $ref: '#/definitions/User'
    produces:
      - application/json
    responses:
      "200":
        description: Success
        properties:
          success:
            type: boolean
          data:
            $ref: '#/definitions/User'
As you can see, in the post key under /api/users I used the User definition as my schema on it.
I want to lessen my code so I reused the User definition as my schema. The problem here is that I do not need the id, created_at and updated_at fields.
Is there a way to just inherit some of the fields except the fields mentioned? Also, I would love some suggestions to make it better since I'm trying to learn swagger. Thank you.
| As explained in this answer to a similar question:
You would have to define the models separately.
However, you have options for the cases of exclusion and difference.
If you're looking to exclude, which is the easy case, create a model without the excluded property, say ModelA. Then define ModelB as ModelA plus the additional property:
ModelB:
  allOf:
    - $ref: "#/definitions/ModelA"
    - type: object
      properties:
        id:
          type: string
If you're looking to define the difference, follow the same method
above, and exclude the id from ModelA. Then define ModelB and ModelC
as extending ModelA and add the id property to them, each with its own
restrictions. Mind you, JSON Schema can allow you to follow the
original example above for some cases to "override" a definition.
However, since it is not really overriding, and one needs to
understand the concepts of JSON Schema better to not make simple
mistakes, I'd recommend going this path for now.
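Applied to the User schema from the question, that approach might look like the sketch below (UserBase is a name introduced here purely for illustration):
definitions:
  UserBase:        # everything except the server-managed fields
    type: object
    properties:
      username:
        type: string
      first_name:
        type: string
      last_name:
        type: string
      password:
        type: string
    required:
      - username
      - first_name
      - last_name
      - password
  User:            # full representation returned by the API
    allOf:
      - $ref: '#/definitions/UserBase'
      - type: object
        properties:
          id:
            type: integer
          created_at:
            type: string
            format: date-time
          updated_at:
            type: string
            format: date-time
The POST body can then reference UserBase while responses reference User.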
| OpenAPI | 57,339,131 | 10 |
I'm trying to make an OpenAPI autogenerated PHP client using anyOf and allOf properties.
The goal is to be able to return an array with polymorphism in it: objects of different types.
Also those objects have a common base object as well.
In my example schema, Items is an array which items can be of types ItemOne or ItemTwo.
Both types of items have an own property (itemOneProperty and itemTwoProperty, respectively), and a common property baseItemProperty (which is inherited from BaseItem with the allOf keyword).
Here you have the API specification yaml:
openapi: 3.0.0
info:
  title: Test API
  version: 1.0.0
servers:
  - url: https://api.myjson.com/bins
paths:
  /roxgd:
    get:
      operationId: getItems
      responses:
        '200':
          description: Success
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/Items'
components:
  schemas:
    Items:
      type: array
      items:
        anyOf:
          - $ref: '#/components/schemas/ItemOne'
          - $ref: '#/components/schemas/ItemTwo'
    BaseItem:
      type: object
      properties:
        baseItemProperty:
          type: string
    ItemOne:
      allOf:
        - $ref: '#/components/schemas/BaseItem'
        - type: object
          properties:
            itemOneProperty:
              type: string
    ItemTwo:
      allOf:
        - $ref: '#/components/schemas/BaseItem'
        - type: object
          properties:
            itemTwoProperty:
              type: string
This is the endpoint I'm sending the requests to: https://api.myjson.com/bins/roxgd
And it returns this example json:
[
  {
    "type": "ItemOne",
    "baseItemProperty": "foo1",
    "itemOneProperty": "bar1"
  },
  {
    "type": "ItemTwo",
    "baseItemProperty": "foo2",
    "itemTwoProperty": "bar2"
  }
]
The PHP client is generated with no errors, but when I call the getItems method I get this Fatal Error:
PHP Fatal error: Uncaught Error: Class 'AnyOfItemOneItemTwo' not found in /home/user/projects/openapi-test/lib/ObjectSerializer.php:309
Stack trace:
#0 /home/user/projects/openapi-test/lib/ObjectSerializer.php(261): MyRepo\OpenApiTest\ObjectSerializer::deserialize(Object(stdClass), 'AnyOfItemOneIte...', NULL)
#1 /home/user/projects/openapi-test/lib/Api/DefaultApi.php(182): MyRepo\OpenApiTest\ObjectSerializer::deserialize(Array, 'AnyOfItemOneIte...', Array)
#2 /home/user/projects/openapi-test/lib/Api/DefaultApi.php(128): MyRepo\OpenApiTest\Api\DefaultApi->getItemsWithHttpInfo()
#3 /home/user/projects/tests-for-openapi-test/test.php(10): MyRepo\OpenApiTest\Api\DefaultApi->getItems()
#4 {main}
thrown in /home/user/projects/openapi-test/lib/ObjectSerializer.php on line 309
The same occurs if I use the oneOf property, but the error I get is: Uncaught Error: Class 'OneOfItemOneItemTwo' not found....
My setup works ok when I use any other valid yaml (without the polymorphism).
Also, I checked this related question already, but that is about UI, which I'm not using at all.
Do you know where could be the error? A mistake in my yaml doc? A bug in the PHP client generator?
Edit: I'm using openapi-generator v4.0.3 (latest release at this point).
| After more researching I found there's an open issue with the inheritance in the openapi-generator from version 4.0.0 onwards.
https://github.com/OpenAPITools/openapi-generator/issues/2845
| OpenAPI | 57,313,269 | 10 |
I have my swagger definition like :
someDef:
  type: object
  properties:
    enable:
      type: boolean
      default: false
    nodes:
      type: array
      maxItems: 3
      items:
        type: object
        properties:
          ip:
            type: string
            default: ''
My nodes property is an array and it has maxItems: 3.
I want the number of items in nodes to be either 0 or 3.
Thanks in advance.
| "Either 0 or 3 items" can be defined in OpenAPI 3.x (openapi: 3.x.x) but not in OpenAPI 2.0 (swagger: '2.0').
OpenAPI 3.x
You can use oneOf in combination with minItems and maxItems to define the "either 0 or 3 items" condition:
# openapi: 3.0.0
nodes:
  type: array
  items:
    type: object
    properties:
      ip:
        type: string
        default: ''
  oneOf:
    - minItems: 0
      maxItems: 0
    - minItems: 3
      maxItems: 3
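For illustration, under this schema an empty array and a three-element array validate, while anything in between fails:
[]                                                          # valid (0 items)
[{"ip": "1.1.1.1"}, {"ip": "2.2.2.2"}, {"ip": "3.3.3.3"}]   # valid (3 items)
[{"ip": "1.1.1.1"}]                                         # invalid (1 item)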
Note while oneOf is part of the OpenAPI 3.0 Specification (i.e. you can write API definitions that include oneOf), actual tooling support for oneOf may vary.
OpenAPI 2.0
OAS 2 does not have a way to define "either 0 or 3 items". The most you can do is to use maxItems: 3 to define the upper limit.
| OpenAPI | 57,035,988 | 10 |
I have my openapi: 3.0.0 YAML file, I'm looking for a way to generate test data response (JSON object) from schema.
This is what I am looking for, but I can't get it working for openapi: 3.0.0, the code works perfectly for "swagger": "2.0" definitions.
I have tried to get the code working with Swagger Java libraries 2.x, which support OpenAPI 3.0. I know I need to use version 2.x of Swagger.
import io.swagger.parser.SwaggerParser;
import io.swagger.models.*;
import io.swagger.inflector.examples.*;
import io.swagger.inflector.examples.models.Example;
import io.swagger.inflector.processors.JsonNodeExampleSerializer;
import io.swagger.util.Json;
import io.swagger.util.Yaml;
import java.util.Map;
import com.fasterxml.jackson.databind.module.SimpleModule;
// Load your OpenAPI/Swagger definition
Swagger swagger = new SwaggerParser().read("http://petstore.swagger.io/v2/swagger.json");
// Create an Example object for the Pet model
Map<String, Model> definitions = swagger.getDefinitions();
Model pet = definitions.get("Pet");
Example example = ExampleBuilder.fromModel("Pet", pet, definitions, new HashSet<String>());
// Another way:
// Example example = ExampleBuilder.fromProperty(new RefProperty("Pet"), swagger.getDefinitions());
// Configure example serializers
SimpleModule simpleModule = new SimpleModule().addSerializer(new JsonNodeExampleSerializer());
Json.mapper().registerModule(simpleModule);
// Convert the Example object to string
// JSON example
String jsonExample = Json.pretty(example);
System.out.println(jsonExample);
This code is working, just need to get the same code working for openapi: 3.0.0.
| import io.swagger.v3.parser.OpenAPIV3Parser;
import io.swagger.v3.oas.models.OpenAPI;
import io.swagger.v3.oas.models.media.Schema;
import io.swagger.oas.inflector.examples.models.Example;
import io.swagger.oas.inflector.examples.ExampleBuilder;
import com.fasterxml.jackson.databind.module.SimpleModule;
import io.swagger.oas.inflector.processors.JsonNodeExampleSerializer;
import io.swagger.util.Json;
import java.util.Map;

OpenAPI swagger = new OpenAPIV3Parser().read("C:\\Users\\ABC\\Downloads\\Petstore-1.0.yaml");
Map<String, Schema> definitions = swagger.getComponents().getSchemas();
Schema model = definitions.get("Pet");
Example example = ExampleBuilder.fromSchema(model, definitions);
SimpleModule simpleModule = new SimpleModule().addSerializer(new JsonNodeExampleSerializer());
Json.mapper().registerModule(simpleModule);
String jsonExample = Json.pretty(example);
System.out.println(jsonExample);
Dependency for swagger inflector
compile group: 'io.swagger', name: 'swagger-inflector', version: '2.0.0'
| OpenAPI | 55,978,052 | 10 |
I have a new OpenAPI setup via SwaggerHub. Is there an option to force a certain Accept header globally?
I have set up the Content-Type on the response:
openapi: 3.0.0
paths:
  /test-path:
    get:
      responses:
        '200':
          description: OK
          content:
            application/vnd.company.v1.0.0+json:
When sending a different Accept header via a cURL request, the following output is returned:
{"message":"Missing matching response for specified Accept header"}
That makes sense, since we aren't providing any response for that.
| Unlike OpenAPI/Swagger 2.0, which has global consumes and produces, OpenAPI 3.0 requires that request and response media types be defined in each operation individually. There's no way to define the Content-Type or requests or responses globally.
You can, however, $ref common response definitions (such as error responses), which can reduce the repetition.
openapi: 3.0.2
...
paths:
  /foo:
    get:
      responses:
        '400':
          $ref: '#/components/responses/ErrorResponse'
  /bar:
    get:
      responses:
        '400':
          $ref: '#/components/responses/ErrorResponse'
components:
  responses:
    ErrorResponse:
      description: An error occurred
      content:
        application/vnd.error+json:
          schema:
            ...
| OpenAPI | 54,145,884 | 10 |
I'm using the L5-Swagger 5.7.* package (a wrapper of Swagger-php) and tried to describe a Laravel REST API. So, my code is like this:
/**
 * @OA\Post(path="/subscribers",
 *   @OA\RequestBody(
 *     @OA\MediaType(
 *       mediaType="application/json",
 *       @OA\Schema(
 *         type="object",
 *         @OA\Property(property="email", type="string")
 *       )
 *     )
 *   ),
 *   @OA\Response(response=201, description="Successful created"),
 *   @OA\Response(response=422, description="Error: Unprocessable Entity")
 * )
 */
public function publicStore(SaveSubscriber $request)
{
    $subscriber = Subscriber::create($request->all());
    return new SubscriberResource($subscriber);
}
But when I try to send a request via the swagger panel I get this code:
curl -X POST "https://examile.com/api/subscribers" -H "accept: */*" -H "Content-Type: application/json" -H "X-CSRF-TOKEN: " -d "{\"email\":\"bademail\"}"
As you can see, accept is not application/json, so Laravel doesn't identify this as an AJAX request. So when I send wrong data and expect to get a 422 with errors, in reality I get a 200 code with the errors in the "session". A request (XHR) through the swagger panel is also processed incorrectly; the cURL code is just for clarity.
Also, I found that in the previous version something like this was used:
* @SWG\Post(
* ...
* consumes={"multipart/form-data"},
* produces={"text/plain, application/json"},
* ...)
But now it's already out of date.
So, how do I get a 422 code without a redirect if validation fails? Or maybe add an 'XMLHttpRequest' header? What is the best thing to do here?
| The response(s) didn't specify a mimetype.
@OA\Response(response=201, description="Successful created"),
If you specify a json response, swagger-ui will send an Accept: application/json header.
PS. Because json is so common swagger-php has a @OA\JsonContent shorthand, this works for the response:
@OA\Response(response=201, description="Successful created", @OA\JsonContent()),
and the requestbody:
@OA\RequestBody(
    @OA\JsonContent(
        type="object",
        @OA\Property(property="email", type="string")
    )
),
| OpenAPI | 53,168,311 | 10 |
Swagger UI refuses to make a request to https with self signed certificate.
The problem is as follows:
curl -X POST "https://localhost:8088/Authenticate" -H "accept: application/json" -H "Content-Type: application/json" -d "{ \"username\": \"user\", \"password\": \"user\"}"
The command above is generated by Swagger automatically, and when run it returns:
TypeError: Failed to fetch
Running it manually (not via Swagger UI) returns:
curl: (60) SSL certificate problem: self signed certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
I want to make it like this instead (adding the --insecure parameter):
curl -X POST "https://localhost:8088/Authenticate" -H "accept: application/json" -H "Content-Type: application/json" -d "{ \"username\": \"user\", \"password\": \"user\"}" --insecure
This will allow me to perform the desired request. Is there a way to add custom parameters to the autogenerated Swagger curl? Thanks.
| Firstly, the cURL command is for display and copy-pasting only. Swagger UI does not actually use cURL for requests – it's a web page so it makes requests using JavaScript (fetch API or XMLHttpRequest or similar).
As explained here, Swagger UI does not support self-signed certificates (emphasis mine):
It appears you have a problem with your certificate. There's no way for Swagger-UI (or any in-browser application) to bypass the certificate verification process built into the browser, for security reasons that are out of our control.
It looks like you have a self-signed certificate. You'll want to look into how to get your computer to trust your certificate, here's a guide for Google Chrome, which it looks like you're using:
Getting Chrome to accept self-signed localhost certificate
So if you want to use Swagger UI to test a server that uses a self-signed certificate, you'll need to configure your browser (or your computer) to trust that certificate. Check out these questions for suggestions:
Chrome: Getting Chrome to accept self-signed localhost certificate
Firefox: Is there a way to make Firefox ignore invalid ssl-certificates?
Is there a way to add custom parameters to autogenerated Swagger curl?
The code to generate the cURL command lives here:
https://github.com/swagger-api/swagger-ui/blob/master/src/core/curlify.js
You can fork Swagger UI and tweak the code as your needs dictate.
But as explained above – changing the displayed cURL command won't change the actual "try it out" behavior, because "try it out" cannot bypass certificate validation.
| OpenAPI | 49,997,748 | 10 |