url | repository_url | labels_url | comments_url | events_url | html_url | id | node_id | number | title | user | labels | state | locked | assignee | assignees | milestone | comments | created_at | updated_at | closed_at | author_association | active_lock_reason | body | reactions | timeline_url | performed_via_github_app | state_reason | draft | pull_request |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/coleifer/peewee/issues/1918 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1918/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1918/comments | https://api.github.com/repos/coleifer/peewee/issues/1918/events | https://github.com/coleifer/peewee/issues/1918 | 438,960,401 | MDU6SXNzdWU0Mzg5NjA0MDE= | 1,918 | [Question] Nested execution_context replacement in peewee 3 | {
"login": "petroprotsakh",
"id": 29977583,
"node_id": "MDQ6VXNlcjI5OTc3NTgz",
"avatar_url": "https://avatars.githubusercontent.com/u/29977583?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/petroprotsakh",
"html_url": "https://github.com/petroprotsakh",
"followers_url": "https://api.github.com/users/petroprotsakh/followers",
"following_url": "https://api.github.com/users/petroprotsakh/following{/other_user}",
"gists_url": "https://api.github.com/users/petroprotsakh/gists{/gist_id}",
"starred_url": "https://api.github.com/users/petroprotsakh/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/petroprotsakh/subscriptions",
"organizations_url": "https://api.github.com/users/petroprotsakh/orgs",
"repos_url": "https://api.github.com/users/petroprotsakh/repos",
"events_url": "https://api.github.com/users/petroprotsakh/events{/privacy}",
"received_events_url": "https://api.github.com/users/petroprotsakh/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Oh, interesting, I hadn't thought of nested execution contexts but I can see how that might happen in a large enough codebase.\r\n\r\nSo when you use nested execution context, you expect a transaction to cover the outermost block, right?",
"Basically my thought was that inner `__exit__` commits it's transaction, but shouldn't try to close the connection. Don't know if it's possible without some sort of `execution_context_depth`.",
"Fixed. Behaves like nested `atomic()` blocks, with the additional behavior that the outermost block will close the connection upon exit.",
"Thanks much, that was quick!"
] | 2019-04-30T20:22:04 | 2019-05-01T14:18:31 | 2019-05-01T13:48:17 | NONE | null | Having an existing codebase on peewee 2.x, I recently upgraded to peewee 3.9.3 and ran into trouble with connection context usage. According to the [breaking changes doc](http://docs.peewee-orm.com/en/latest/peewee/changes.html#database), all `execution_context()` calls were replaced with the database itself as a context manager. But it looks like the new approach does not allow nested `with` statements, as several parts of the code now fail with `OperationalError: Attempting to close database while transaction is open.`
Considering this simplified example with peewee 2.x:
```python
In [2]: from playhouse.pool import PooledSqliteDatabase
In [3]: db = PooledSqliteDatabase(':memory:')
In [4]: with db.execution_context():
...: with db.execution_context():
...: pass
...:
DEBUG:peewee.pool:No connection available in pool.
DEBUG:peewee.pool:Created new connection 140547264495376.
DEBUG:peewee:('BEGIN', None)
DEBUG:peewee.pool:No connection available in pool.
DEBUG:peewee.pool:Created new connection 140547375042608.
DEBUG:peewee.pool:Returning 140547375042608 to pool.
DEBUG:peewee.pool:Returning 140547264495376 to pool.
```
A new connection was instantiated for the nested block. In peewee 3, by contrast, the first connection is reused, and the inner `__exit__` then fails trying to close it while still inside the outer transaction:
```python
In [2]: from playhouse.pool import PooledSqliteDatabase
In [3]: db = PooledSqliteDatabase(':memory:')
In [4]: with db:
...: with db:
...: pass
...:
DEBUG:peewee.pool:No connection available in pool.
DEBUG:peewee.pool:Created new connection 139878371475504.
DEBUG:peewee:('BEGIN', None)
DEBUG:peewee.pool:Returning 139878371475504 to pool.
---------------------------------------------------------------------------
OperationalError Traceback (most recent call last)
<ipython-input-4-34a6cd606493> in <module>
1 with db:
2 with db:
----> 3 pass
4
peewee.py in __exit__(self, exc_type, exc_val, exc_tb)
2795 top.__exit__(exc_type, exc_val, exc_tb)
2796 finally:
-> 2797 self.close()
2798
2799 def connection_context(self):
peewee.py in close(self)
2833 'before opening a connection.')
2834 if self.in_transaction():
-> 2835 raise OperationalError('Attempting to close database while '
2836 'transaction is open.')
2837 is_open = not self._state.closed
OperationalError: Attempting to close database while transaction is open.
```
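One way to avoid the inner `__exit__` closing the shared connection is depth counting, where only the outermost block performs the open and close. A generic sketch of the idea (hypothetical helper, not peewee API):

```python
class ReentrantResource:
    """Depth-counted context manager: the expensive open/close
    (think connect()/close() on a database) only happens at the
    outermost nesting level."""

    def __init__(self, on_open, on_close):
        self._depth = 0
        self._on_open = on_open
        self._on_close = on_close

    def __enter__(self):
        if self._depth == 0:
            self._on_open()
        self._depth += 1
        return self

    def __exit__(self, exc_type, exc_val, exc_tb):
        self._depth -= 1
        if self._depth == 0:
            self._on_close()


events = []
ctx = ReentrantResource(lambda: events.append('open'),
                        lambda: events.append('close'))
with ctx:
    with ctx:
        pass        # the inner exit does NOT trigger on_close
print(events)       # ['open', 'close']
```

Peewee's eventual fix (see the comments above) follows the same principle: nested blocks behave like nested `atomic()` savepoints, and only the outermost block closes the connection.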
My question is: are there any options to make this behave like the old `execution_context`?
The codebase is pretty large, and it's hard to guarantee the absence of nested contexts now or in the future.
The newly introduced `connection_context` method does not help when both blocks need a transaction.
Should I nevertheless eliminate any chance of nested context calls, or implement my own connection management at the app level? What is the proper way to handle this from peewee's point of view? Was the old behavior (a new connection for the nested block) incorrect?
Thanks. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1918/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1917 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1917/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1917/comments | https://api.github.com/repos/coleifer/peewee/issues/1917/events | https://github.com/coleifer/peewee/issues/1917 | 438,761,153 | MDU6SXNzdWU0Mzg3NjExNTM= | 1,917 | Documentation: playhouse.reflection module does not contain generate_models | {
"login": "erezmarmor",
"id": 16885498,
"node_id": "MDQ6VXNlcjE2ODg1NDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/16885498?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/erezmarmor",
"html_url": "https://github.com/erezmarmor",
"followers_url": "https://api.github.com/users/erezmarmor/followers",
"following_url": "https://api.github.com/users/erezmarmor/following{/other_user}",
"gists_url": "https://api.github.com/users/erezmarmor/gists{/gist_id}",
"starred_url": "https://api.github.com/users/erezmarmor/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/erezmarmor/subscriptions",
"organizations_url": "https://api.github.com/users/erezmarmor/orgs",
"repos_url": "https://api.github.com/users/erezmarmor/repos",
"events_url": "https://api.github.com/users/erezmarmor/events{/privacy}",
"received_events_url": "https://api.github.com/users/erezmarmor/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"False.\r\n\r\n```python\r\nIn [1]: from playhouse.reflection import generate_models\r\n\r\nIn [2]: from peewee import PostgresqlDatabase\r\n\r\nIn [3]: db = PostgresqlDatabase('my_app')\r\n\r\nIn [4]: models = generate_models(db) # models is now a dict of table name -> model class.\r\n```",
"\r\n",
"\r\n@coleifer , are there known compatibility issues with python3?",
"No, sorry I didn't see you had an older version. The generate_models helper was added here:\r\n\r\nhttps://github.com/coleifer/peewee/commit/23dad105456bf4034d3ee21c6d29e56a7e2b9c7a"
] | 2019-04-30T12:50:00 | 2019-05-01T10:51:21 | 2019-04-30T13:41:26 | NONE | null | **peewee version=3.6.4**
Both of these pages (there might be more, I haven't checked):
- http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#reflection
- http://docs.peewee-orm.com/en/latest/peewee/interactive.html#interactive
The documentation provides this import as an example:
`from playhouse.reflection import generate_models`
However, this function is in fact a method of the `Introspector` class (i.e. not available at module scope).
The proper statements might look like:
```python
from peewee import PostgresqlDatabase
from playhouse.reflection import Introspector

db = PostgresqlDatabase('my_app')
models = Introspector.from_database(db).generate_models()
``` | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1917/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1916 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1916/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1916/comments | https://api.github.com/repos/coleifer/peewee/issues/1916/events | https://github.com/coleifer/peewee/issues/1916 | 438,578,462 | MDU6SXNzdWU0Mzg1Nzg0NjI= | 1,916 | peewee update thrown exception 'UnknownField' object has no attribute 'get_sort_key' | {
"login": "0xxfu",
"id": 11550519,
"node_id": "MDQ6VXNlcjExNTUwNTE5",
"avatar_url": "https://avatars.githubusercontent.com/u/11550519?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/0xxfu",
"html_url": "https://github.com/0xxfu",
"followers_url": "https://api.github.com/users/0xxfu/followers",
"following_url": "https://api.github.com/users/0xxfu/following{/other_user}",
"gists_url": "https://api.github.com/users/0xxfu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/0xxfu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/0xxfu/subscriptions",
"organizations_url": "https://api.github.com/users/0xxfu/orgs",
"repos_url": "https://api.github.com/users/0xxfu/repos",
"events_url": "https://api.github.com/users/0xxfu/events{/privacy}",
"received_events_url": "https://api.github.com/users/0xxfu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You are clearly using models generated using the `pwiz` tool. If pwiz cannot determine the right field type to use for a column, it uses a placeholder \"UnknownField\". You will need to edit your model definitions and replace the UnknownField with an appropriate field type (e.g. TextField, IntegerField, whatever) if you wish to be able to use it in queries.",
"> You are clearly using models generated using the `pwiz` tool. If pwiz cannot determine the right field type to use for a column, it uses a placeholder \"UnknownField\". You will need to edit your model definitions and replace the UnknownField with an appropriate field type (e.g. TextField, IntegerField, whatever) if you wish to be able to use it in queries.\r\n\r\nThanks.\r\nDue to pwiz generate mysql bit field to UnKnownField. "
] | 2019-04-30T02:16:46 | 2019-05-01T15:01:04 | 2019-04-30T13:38:53 | NONE | null | Code:
```python
query = Order.update(is_review=1, review_time=review_time).where(Order.order_sn == order_sn)
query.execute()
```
Exception:
```
cursor = database.execute(self)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\peewee.py", line 2952, in execute
sql, params = ctx.sql(query).query()
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\peewee.py", line 601, in sql
return obj.__sql__(self)
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\peewee.py", line 2363, in __sql__
for k, v in sorted(self._update.items(), key=ctx.column_sort_key):
File "C:\Users\Administrator\AppData\Local\Programs\Python\Python37\lib\site-packages\peewee.py", line 555, in column_sort_key
return item[0].get_sort_key(self)
AttributeError: 'UnknownField' object has no attribute 'get_sort_key'
```
Environment:
peewee version: 3.9.5
python: 3.7 | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1916/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1915 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1915/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1915/comments | https://api.github.com/repos/coleifer/peewee/issues/1915/events | https://github.com/coleifer/peewee/issues/1915 | 438,024,168 | MDU6SXNzdWU0MzgwMjQxNjg= | 1,915 | ValueError: invalid literal for int() with base 10: 'f5W1vg' | {
"login": "mouday",
"id": 24365682,
"node_id": "MDQ6VXNlcjI0MzY1Njgy",
"avatar_url": "https://avatars.githubusercontent.com/u/24365682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mouday",
"html_url": "https://github.com/mouday",
"followers_url": "https://api.github.com/users/mouday/followers",
"following_url": "https://api.github.com/users/mouday/following{/other_user}",
"gists_url": "https://api.github.com/users/mouday/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mouday/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mouday/subscriptions",
"organizations_url": "https://api.github.com/users/mouday/orgs",
"repos_url": "https://api.github.com/users/mouday/repos",
"events_url": "https://api.github.com/users/mouday/events{/privacy}",
"received_events_url": "https://api.github.com/users/mouday/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Well, for one, that's quite an old version of Peewee.\r\n\r\nThe other issue is the use of \"#\" in the password. Peewee's db_url module uses the stdlib url parsing library, which seems to choke on the \"#\" in the password for whatever reason.\r\n\r\nI'd suggest opening a bug on the Python bug-tracker, if one does not exist.\r\n\r\nFor Peewee, you'll need to figure out a workaround or just avoid the db_url module in this case."
] | 2019-04-28T07:22:42 | 2019-04-28T16:51:16 | 2019-04-28T16:51:16 | NONE | null | peewee 2.8.2
python 2.7.5
```python
from playhouse.db_url import connect
db_url = "mysql://root:f5W1vg##[email protected]:3306/demo"
db = connect(db_url)
```
```
Traceback (most recent call last):
db = connect(db_url)
line 85, in connect
connect_kwargs = parseresult_to_dict(parsed)
line 49, in parseresult_to_dict
if parsed.port:
port = int(port, 10)
ValueError: invalid literal for int() with base 10: 'f5W1vg'
```
Maybe the password includes some unsupported characters?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1915/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1915/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1914 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1914/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1914/comments | https://api.github.com/repos/coleifer/peewee/issues/1914/events | https://github.com/coleifer/peewee/issues/1914 | 437,889,234 | MDU6SXNzdWU0Mzc4ODkyMzQ= | 1,914 | Join on expression raises AttributeError: 'Expression' object has no attribute 'name' | {
"login": "mikedrawback",
"id": 2569501,
"node_id": "MDQ6VXNlcjI1Njk1MDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2569501?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mikedrawback",
"html_url": "https://github.com/mikedrawback",
"followers_url": "https://api.github.com/users/mikedrawback/followers",
"following_url": "https://api.github.com/users/mikedrawback/following{/other_user}",
"gists_url": "https://api.github.com/users/mikedrawback/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mikedrawback/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mikedrawback/subscriptions",
"organizations_url": "https://api.github.com/users/mikedrawback/orgs",
"repos_url": "https://api.github.com/users/mikedrawback/repos",
"events_url": "https://api.github.com/users/mikedrawback/events{/privacy}",
"received_events_url": "https://api.github.com/users/mikedrawback/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Well, part of the issue is that you are creating foreign-key constraints to a composite primary key -- but the fk constraints are to the individual columns that comprise the primary key rather than the (model_number, color) tuple. So, your schema is actually pretty broken.\r\n\r\nYou would instead do something like this (which is described in [the documentation](http://docs.peewee-orm.com/en/latest/peewee/models.html#table-constraints)):\r\n\r\n```python\r\nclass Sku(Model):\r\n upc_code = peewee.CharField(primary_key=True)\r\n model_number = peewee.CharField()\r\n color = peewee.CharField()\r\n\r\n class Meta:\r\n constraints = [SQL('FOREIGN KEY(model_number, color) '\r\n 'REFERENCES product(model_number, color)')]\r\n database = db\r\n\r\n @property\r\n def product(self):\r\n return Product.get(\r\n (Product.model_number == self.model_number) &\r\n (Product.color == self.color))\r\n\r\n @product.setter\r\n def product(self, obj):\r\n self.model_number = obj.model_number\r\n self.color = obj.color\r\n```\r\n\r\nI've fixed the underlying issue, however, so your example is running correctly now.",
"Thank you for fixing and for the advice on composite foreign key constraints, much appreciated.",
"Whoops, I still had left the `Sku.model_number` and `Sku.color` as foreign key fields -- they should actually be `CharField`. Fixed the code snippet above so that it now reads:\r\n\r\n```python\r\nclass Sku(Model):\r\n upc_code = peewee.CharField(primary_key=True)\r\n model_number = peewee.CharField()\r\n color = peewee.CharField()\r\n\r\n class Meta:\r\n constraints = [SQL('FOREIGN KEY(model_number, color) '\r\n 'REFERENCES product(model_number, color)')]\r\n database = db\r\n```"
] | 2019-04-27T00:57:03 | 2019-04-27T19:29:21 | 2019-04-27T13:13:14 | NONE | null | When joining two models on an expression, I started getting an error on 3.9.4 and 3.9.5.
My models:
```python
import peewee

db = peewee.SqliteDatabase(':memory:')

class Product(peewee.Model):
    model_number = peewee.CharField()
    color = peewee.CharField()

    class Meta:
        database = db
        primary_key = peewee.CompositeKey('model_number', 'color')

class Sku(peewee.Model):
    upc_code = peewee.CharField(primary_key=True)
    model_number = peewee.ForeignKeyField(Product, field=Product.model_number)
    color = peewee.ForeignKeyField(Product, field=Product.color)

    class Meta:
        database = db

join_expr = ((Product.model_number == Sku.model_number) &
             (Product.color == Sku.color))
```
On version 3.9.3, this works:
```Product.select().join(Sku, on=join_expr)```
On version 3.9.4+, I get this error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "peewee.py", line 698, in inner
method(clone, *args, **kwargs)
File "peewee.py", line 6603, in join
on, attr, constructor = self._normalize_join(src, dest, on, attr)
File "peewee.py", line 6530, in _normalize_join
attr = fk_field.name
AttributeError: 'Expression' object has no attribute 'name'
```
I can't quite figure out what changed that is causing this. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1914/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1913 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1913/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1913/comments | https://api.github.com/repos/coleifer/peewee/issues/1913/events | https://github.com/coleifer/peewee/issues/1913 | 437,758,553 | MDU6SXNzdWU0Mzc3NTg1NTM= | 1,913 | Using a BlobField with a context manager results in a AttributeError on 3.9.4 | {
"login": "poljar",
"id": 552026,
"node_id": "MDQ6VXNlcjU1MjAyNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/552026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/poljar",
"html_url": "https://github.com/poljar",
"followers_url": "https://api.github.com/users/poljar/followers",
"following_url": "https://api.github.com/users/poljar/following{/other_user}",
"gists_url": "https://api.github.com/users/poljar/gists{/gist_id}",
"starred_url": "https://api.github.com/users/poljar/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/poljar/subscriptions",
"organizations_url": "https://api.github.com/users/poljar/orgs",
"repos_url": "https://api.github.com/users/poljar/repos",
"events_url": "https://api.github.com/users/poljar/events{/privacy}",
"received_events_url": "https://api.github.com/users/poljar/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Fixed. Pushing 3.9.5",
"Thanks for the quick response."
] | 2019-04-26T16:49:15 | 2019-04-27T12:41:32 | 2019-04-27T12:41:32 | NONE | null | The following code snippet results in an AttributeError on 3.9.4; 3.9.3 worked as expected:
```python
from peewee import SqliteDatabase, Model, BlobField

db = SqliteDatabase(":memory:")

class Table(Model):
    field = BlobField()

models = [Table]

with db.bind_ctx(models):
    db.create_tables(models)
```
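For context, `bind_ctx` temporarily points the models at a database and restores the previous binding on exit. A generic sketch of that pattern (hypothetical code, not peewee internals) shows why any per-database hook fired during the restore step must tolerate a `None` database:

```python
from contextlib import contextmanager

class FakeModel:
    database = None          # class-level binding, like peewee's Meta

@contextmanager
def bind_ctx(models, database):
    # Remember current bindings, which may be None for unbound models.
    previous = {m: m.database for m in models}
    for m in models:
        m.database = database
    try:
        yield
    finally:
        # Restore step: a hook fired here can receive a None database.
        for m, db in previous.items():
            m.database = db

with bind_ctx([FakeModel], "sqlite-in-memory"):
    print(FakeModel.database)   # 'sqlite-in-memory'
print(FakeModel.database)       # None
```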
The following error is thrown after running the code snippet:
```
Traceback (most recent call last):
File "peewee_fail.py", line 14, in <module>
db.create_tables(models)
File "/usr/lib/python3.7/site-packages/peewee.py", line 5829, in __exit__
model.bind(db, self.bind_refs, self.bind_backrefs)
File "/usr/lib/python3.7/site-packages/peewee.py", line 6175, in bind
cls._meta.set_database(database)
File "/usr/lib/python3.7/site-packages/peewee.py", line 5667, in set_database
hook(database)
File "/usr/lib/python3.7/site-packages/peewee.py", line 4422, in _db_hook
self._constructor = database.get_binary_type()
AttributeError: 'NoneType' object has no attribute 'get_binary_type'
```
The error doesn't happen if we're using a TextField nor does it happen if we don't use a context manager for the database. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1913/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1912 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1912/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1912/comments | https://api.github.com/repos/coleifer/peewee/issues/1912/events | https://github.com/coleifer/peewee/issues/1912 | 436,022,434 | MDU6SXNzdWU0MzYwMjI0MzQ= | 1,912 | bulk_update failed when using composite primary keys | {
"login": "Raysmond",
"id": 4071863,
"node_id": "MDQ6VXNlcjQwNzE4NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/4071863?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Raysmond",
"html_url": "https://github.com/Raysmond",
"followers_url": "https://api.github.com/users/Raysmond/followers",
"following_url": "https://api.github.com/users/Raysmond/following{/other_user}",
"gists_url": "https://api.github.com/users/Raysmond/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Raysmond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Raysmond/subscriptions",
"organizations_url": "https://api.github.com/users/Raysmond/orgs",
"repos_url": "https://api.github.com/users/Raysmond/repos",
"events_url": "https://api.github.com/users/Raysmond/events{/privacy}",
"received_events_url": "https://api.github.com/users/Raysmond/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting.",
"MySQL doesn't appear to support that syntax, as far as I can tell. [Docs](https://dev.mysql.com/doc/refman/5.7/en/control-flow-functions.html#operator_case) for 5.7. One would need to write:\r\n```\r\nCASE\r\n WHEN site_id = X1 AND id = Y1 THEN ...\r\n WHEN site_id = X2 AND id = Y2 THEN ...\r\nEND\r\n```\r\n\r\nSimilarly, it doesn't appear to support tuple comparison in the WHERE clause.\r\n\r\nTesting with a relatively recent version of Sqlite, I was able to get it working with minimal changes. Postgres looked to be roughly the same, but required explicit CASTs in places I wasn't expecting.\r\n\r\nIn short, I don't feel there's enough to be gained here by trying to support all these various edge-cases, so I've caused bulk_update() to raise an error if the model uses a composite pk.",
"Actually MySQL supports tuple comparison in the WHERE clause and the CASE clause you gave is right.\r\n\r\nI just confirmed it in MySQL 5.7. For example:\r\n\r\n```sql\r\nUPDATE `article` \r\nSET `title` = CASE \r\n\tWHEN site_id = 1 and id = 'a1' THEN 'a11111'\r\n\tWHEN site_id = 1 and id = 'a2' THEN 'a22222'\r\nEND \r\nWHERE (`site_id`, `id`) IN ((1, 'a1'), (1, 'a2'))\r\n```\r\n\r\nThis shall do the bulk_update work when using composite keys. By the way, composite keys are widely using in our team, since we're using a distributed MySQL database. In this case, `site_id` is used to locate the node and `id` is the unique identifier. \r\n\r\nSo do you think it's possible to support the syntax above?",
"Possibly, but as I was saying, Sqlite and Postgres both have their own quirks as well. Not to mention potential incompatibility in MySQL prior to 5.7?"
] | 2019-04-23T06:38:44 | 2019-04-23T17:23:16 | 2019-04-23T15:59:03 | NONE | null | ```python
class RawArticle(BaseModel):
    id = CharField()
    site_id = IntegerField(null=False)
    link_title = CharField(null=True)

    class Meta:
        table_name = 'dl_raw_article'
        primary_key = CompositeKey('site_id', 'id')
```
I'm using MySQL.
```python
articles = [a for a in RawArticle.select()]
print len(articles) # I have 4 articles
articles[0].link_title = 'l1'
articles[1].link_title = 'l2'
articles[2].link_title = 'l3'
articles[3].link_title = 'l4'
with DB.atomic():
    RawArticle.bulk_update(articles, [RawArticle.link_title])
```
Here is the output
```
4
('UPDATE `dl_raw_article` SET `link_title` = CASE `dl_raw_article`.`site_id`, `dl_raw_article`.`id` WHEN (%s, %s) THEN %s WHEN (%s, %s) THEN %s WHEN (%s, %s) THEN %s WHEN (%s, %s) THEN %s END WHERE (`dl_raw_article`.`site_id`, `dl_raw_article`.`id` IN ((%s, %s), (%s, %s), (%s, %s), (%s, %s)))', [1, u'3eeda72c764090900bc35d72b1912a62', u'l1', 1, u'ae5503c96c81e717146d80a80a5aee76', u'l2', 1, u'f218ff642f1ae2a0f2eb0a956a3afb60', u'l3', 2, u'c0adb05d30afb8d4182795a86466f72e', u'l4', 1, u'3eeda72c764090900bc35d72b1912a62', 1, u'ae5503c96c81e717146d80a80a5aee76', 1, u'f218ff642f1ae2a0f2eb0a956a3afb60', 2, u'c0adb05d30afb8d4182795a86466f72e'])
2019-04-23 13:52:00,473:peewee:4451321280::90471::DEBUG: ('UPDATE `dl_raw_article` SET `link_title` = CASE `dl_raw_article`.`site_id`, `dl_raw_article`.`id` WHEN (%s, %s) THEN %s WHEN (%s, %s) THEN %s WHEN (%s, %s) THEN %s WHEN (%s, %s) THEN %s END WHERE (`dl_raw_article`.`site_id`, `dl_raw_article`.`id` IN ((%s, %s), (%s, %s), (%s, %s), (%s, %s)))', [1, u'3eeda72c764090900bc35d72b1912a62', u'l1', 1, u'ae5503c96c81e717146d80a80a5aee76', u'l2', 1, u'f218ff642f1ae2a0f2eb0a956a3afb60', u'l3', 2, u'c0adb05d30afb8d4182795a86466f72e', u'l4', 1, u'3eeda72c764090900bc35d72b1912a62', 1, u'ae5503c96c81e717146d80a80a5aee76', 1, u'f218ff642f1ae2a0f2eb0a956a3afb60', 2, u'c0adb05d30afb8d4182795a86466f72e'])
Traceback (most recent call last):
File "/Users/raysmond/Baidu/Code/baidu/brand-ns/delphinus/knowledge/domain/peewee_models.py", line 190, in <module>
RawArticle.bulk_update(articles, [RawArticle.link_title])
File "/Users/raysmond/Baidu/Code/baidu/brand-ns/delphinus/venv/lib/python2.7/site-packages/peewee.py", line 5982, in bulk_update
.where(cls._meta.primary_key.in_(id_list))
File "/Users/raysmond/Baidu/Code/baidu/brand-ns/delphinus/venv/lib/python2.7/site-packages/peewee.py", line 1778, in inner
return method(self, database, *args, **kwargs)
File "/Users/raysmond/Baidu/Code/baidu/brand-ns/delphinus/venv/lib/python2.7/site-packages/peewee.py", line 1849, in execute
return self._execute(database)
File "/Users/raysmond/Baidu/Code/baidu/brand-ns/delphinus/venv/lib/python2.7/site-packages/peewee.py", line 2316, in _execute
cursor = database.execute(self)
File "/Users/raysmond/Baidu/Code/baidu/brand-ns/delphinus/venv/lib/python2.7/site-packages/peewee.py", line 2949, in execute
return self.execute_sql(sql, params, commit=commit)
File "/Users/raysmond/Baidu/Code/baidu/brand-ns/delphinus/venv/lib/python2.7/site-packages/peewee.py", line 2943, in execute_sql
self.commit()
File "/Users/raysmond/Baidu/Code/baidu/brand-ns/delphinus/venv/lib/python2.7/site-packages/peewee.py", line 2725, in __exit__
reraise(new_type, new_type(*exc_args), traceback)
File "/Users/raysmond/Baidu/Code/baidu/brand-ns/delphinus/venv/lib/python2.7/site-packages/peewee.py", line 2936, in execute_sql
cursor.execute(sql, params or ())
File "/Users/raysmond/Baidu/Code/baidu/brand-ns/delphinus/venv/lib/python2.7/site-packages/MySQLdb/cursors.py", line 205, in execute
self.errorhandler(self, exc, value)
File "/Users/raysmond/Baidu/Code/baidu/brand-ns/delphinus/venv/lib/python2.7/site-packages/MySQLdb/connections.py", line 36, in defaulterrorhandler
raise errorclass, errorvalue
peewee.ProgrammingError: (1149, "syntax error! errno: 1 errmsg: syntax error, unexpected ',', in [0:74-74] key:,")
```
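Aside: the `WHEN cond AND cond THEN` form that MySQL 5.7 accepts (shown in the comments above) can be sketched with plain string building. This is a hypothetical helper with values inlined via `repr` purely for illustration; real code must use parameterized queries:

```python
def composite_bulk_update_sql(table, set_col, key_cols, rows):
    """Build the CASE WHEN k1 = v1 AND k2 = v2 THEN ... form that
    MySQL 5.7 accepts for composite-key bulk updates."""
    whens = []
    keys = []
    for row in rows:
        cond = " AND ".join("%s = %r" % (c, row[c]) for c in key_cols)
        whens.append("WHEN %s THEN %r" % (cond, row[set_col]))
        keys.append("(%s)" % ", ".join(repr(row[c]) for c in key_cols))
    return ("UPDATE %s SET %s = CASE %s END WHERE (%s) IN (%s)"
            % (table, set_col, " ".join(whens),
               ", ".join(key_cols), ", ".join(keys)))

rows = [{"site_id": 1, "id": "a1", "title": "a11111"},
        {"site_id": 1, "id": "a2", "title": "a22222"}]
print(composite_bulk_update_sql("article", "title", ["site_id", "id"], rows))
```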
Obviously the generated SQL has a syntax error, since the part
```sql
CASE `dl_raw_article`.`site_id`, `dl_raw_article`.`id` WHEN (%s, %s)
```
should be
```sql
CASE (`dl_raw_article`.`site_id`, `dl_raw_article`.`id`) WHEN (%s, %s)
``` | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1912/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1911 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1911/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1911/comments | https://api.github.com/repos/coleifer/peewee/issues/1911/events | https://github.com/coleifer/peewee/issues/1911 | 435,429,222 | MDU6SXNzdWU0MzU0MjkyMjI= | 1,911 | [Question] Extending Query class to add new query clause | {
"login": "DpodDani",
"id": 11577512,
"node_id": "MDQ6VXNlcjExNTc3NTEy",
"avatar_url": "https://avatars.githubusercontent.com/u/11577512?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DpodDani",
"html_url": "https://github.com/DpodDani",
"followers_url": "https://api.github.com/users/DpodDani/followers",
"following_url": "https://api.github.com/users/DpodDani/following{/other_user}",
"gists_url": "https://api.github.com/users/DpodDani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DpodDani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DpodDani/subscriptions",
"organizations_url": "https://api.github.com/users/DpodDani/orgs",
"repos_url": "https://api.github.com/users/DpodDani/repos",
"events_url": "https://api.github.com/users/DpodDani/events{/privacy}",
"received_events_url": "https://api.github.com/users/DpodDani/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Well, the easiest way of course is just to use the `SQL()` helper to insert the sql literal directly (SQL() also supports parameterization):\r\n\r\n```python\r\n\r\nquery = MyModel.select(SQL('TOP 10'))\r\n\r\nquery = MyModel.select(SQL('TOP %s', (10,)))\r\n```\r\n\r\nBut more generally to be able to translate a call to `.limit(10)` method, you will probably have a hard time. Unfortunately, at this time, Peewee doesn't provide the level of flexibility in overriding the essentials of the query-building APIs. That's because Model.select delegates to a ModelSelect which extends Select, etc., and it's not straightforward how one would generalize a change to the query building. You could subclass Model and override the select classmethod to provide your own implementation of ModelSelect, of course.",
"Thank you. I will play around with those suggestions 👍 "
] | 2019-04-20T19:08:37 | 2019-04-22T14:20:28 | 2019-04-22T14:07:35 | NONE | null | Hey,
First of all, thank you for creating this wonderful Python database interface API; it has been fun and educational tinkering around in the code base.
This is not really an issue, but more of a question. I am trying to extend Peewee to work with a SQL Server database; however, I cannot seem to figure out how to implement a TOP clause - as in `SELECT TOP 10 name from [db].[schema].[table]`. How do you suggest I go about this?
Thank you,
Daniel | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1911/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1911/timeline | null | completed | null | null |
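As a portable footnote to the answer above: on engines without SQL Server's `TOP` (SQLite, Postgres, MySQL), the same result is expressed with `LIMIT`, which peewee supports directly. A minimal sqlite3 sketch with a hypothetical table:

```python
# Sketch of the portable equivalent of SQL Server's TOP clause: on
# SQLite/Postgres/MySQL the same effect is achieved with LIMIT.
# Table and column names here are hypothetical.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE item (name TEXT)')
conn.executemany('INSERT INTO item (name) VALUES (?)',
                 [('a',), ('b',), ('c',)])
# SELECT TOP 2 name FROM item  -->  SELECT name FROM item LIMIT 2
rows = conn.execute('SELECT name FROM item LIMIT 2').fetchall()
print(rows)
```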
https://api.github.com/repos/coleifer/peewee/issues/1910 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1910/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1910/comments | https://api.github.com/repos/coleifer/peewee/issues/1910/events | https://github.com/coleifer/peewee/issues/1910 | 434,952,531 | MDU6SXNzdWU0MzQ5NTI1MzE= | 1,910 | Optional Limit after Query Object was Created | {
"login": "r0bc94",
"id": 9255088,
"node_id": "MDQ6VXNlcjkyNTUwODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9255088?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/r0bc94",
"html_url": "https://github.com/r0bc94",
"followers_url": "https://api.github.com/users/r0bc94/followers",
"following_url": "https://api.github.com/users/r0bc94/following{/other_user}",
"gists_url": "https://api.github.com/users/r0bc94/gists{/gist_id}",
"starred_url": "https://api.github.com/users/r0bc94/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/r0bc94/subscriptions",
"organizations_url": "https://api.github.com/users/r0bc94/orgs",
"repos_url": "https://api.github.com/users/r0bc94/repos",
"events_url": "https://api.github.com/users/r0bc94/events{/privacy}",
"received_events_url": "https://api.github.com/users/r0bc94/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"With method chaining, we like to act that the queries at each step are immutable, so a copy is returned whenever the query is changed. You just need to reassign:\r\n\r\n```python\r\nentries = entries.limit(limit)\r\n```\r\n\r\nBut your code is gross on just about every possible level for such a short piece of code.\r\n\r\n* Default of -1 is gross. It could just as easily be zero, if you take a look at the way you've written your conditional...\r\n* But you don't need to use -1 or 0, just use None\r\n* In fact, you don't even need the condition, because specifying .limit(None) is the same as not specifying any limit\r\n* Why coerce to a list before returning? That forces the query to be evaluated and effectively limits the reuse of that function for any subsequent query-building. Just return the query object -- that way you can apply additional filters/etc if you need to later.\r\n\r\n```python\r\ndef get_entries(limit=None):\r\n return Entry.select().limit(limit)\r\n```",
"Hey, thanks for your reply. \r\n\r\n> But your code is gross on just about every possible level for such a short piece of code.\r\n\r\nI would use the excuse, that it was pretty late as I was writing this but I guess I'm just dumb (: \r\n\r\n> But you don't need to use -1 or 0, just use None\r\n\r\nI wasn't aware of this. Even if in the documentation, this value default to `None`, I didn't find a hint that if `None` if passed, the query is evaluated without a `Limit`. \r\n\r\n> Why coerce to a list before returning? That forces the query to be evaluated and effectively limits the reuse of that function for any subsequent query-building. Just return the query object -- that way you can apply additional filters/etc if you need to later.\r\n\r\nYou are having a point here I guess. My thought on this was to wrap all peewee specific calls into a dedicated module and returning pure python objects.\r\n\r\nBut since the object which is returned from `select()` is iterate-able, there is no need to build a python list from the result."
] | 2019-04-18T20:45:47 | 2019-04-19T15:38:57 | 2019-04-18T20:59:26 | NONE | null | Consider that you have a wrapper method, which obtains a list of all `Entry` objects from a table.
This method has an optional `limit` parameter that limits the fetched results. If the `limit` parameter is set to -1, all rows should be selected.
When defining this wrapper function like so:
```Python
def getEntries(limit=-1):
allEntries = Entry.select()
if limit != -1 and limit >= 0:
allEntries.limit(limit)
return list(allEntries)
```
and calling it with `getEntries()` works fine. However, if I set any value for the `limit` parameter (`getEntries(limit=10)`), the limit is not applied.
Also, when I inspect the produced query with `print(allEntries.sql())` before the `return` statement, the `LIMIT` clause is missing.
My question now is: how can I implement an optional limit parameter without creating a whole new query object?
"url": "https://api.github.com/repos/coleifer/peewee/issues/1910/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1910/timeline | null | completed | null | null |
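The advice in the comments above (use `None` as the "no limit" sentinel, since `.limit(None)` simply omits the clause, and reassign because chained query methods return copies) can be sketched with plain sqlite3; the table here is hypothetical:

```python
# Sketch of the recommended pattern: treat "no limit" as None and only
# append a LIMIT clause when a value is given, instead of a -1 sentinel.
# Table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE entry (id INTEGER PRIMARY KEY)')
for _ in range(5):
    conn.execute('INSERT INTO entry DEFAULT VALUES')

def get_entries(limit=None):
    sql, params = 'SELECT id FROM entry', ()
    if limit is not None:
        sql += ' LIMIT ?'
        params = (limit,)
    return conn.execute(sql, params).fetchall()

print(len(get_entries()))   # all rows
print(len(get_entries(2)))  # limited
```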
https://api.github.com/repos/coleifer/peewee/issues/1909 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1909/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1909/comments | https://api.github.com/repos/coleifer/peewee/issues/1909/events | https://github.com/coleifer/peewee/issues/1909 | 434,910,778 | MDU6SXNzdWU0MzQ5MTA3Nzg= | 1,909 | Postgres UUID as Primary Key, last_id not returned on save or create. | {
"login": "Ryanb58",
"id": 3086302,
"node_id": "MDQ6VXNlcjMwODYzMDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3086302?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Ryanb58",
"html_url": "https://github.com/Ryanb58",
"followers_url": "https://api.github.com/users/Ryanb58/followers",
"following_url": "https://api.github.com/users/Ryanb58/following{/other_user}",
"gists_url": "https://api.github.com/users/Ryanb58/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Ryanb58/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Ryanb58/subscriptions",
"organizations_url": "https://api.github.com/users/Ryanb58/orgs",
"repos_url": "https://api.github.com/users/Ryanb58/repos",
"events_url": "https://api.github.com/users/Ryanb58/events{/privacy}",
"received_events_url": "https://api.github.com/users/Ryanb58/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Historically Peewee hasn't really offered much support for fields whose defaults are generated on the server. Postgresql has a nice RETURNING clause support which we can use to obtain the server-generated values without incurring an additional query, but no such thing exists for MySQL or Sqlite at the time of writing. MySQL and Sqlite *do*, however, have APIs for obtaining the primary key of the last-inserted row when that row uses an auto-incrementing integer primary key.\r\n\r\nSo the way it worked is -- you use an auto-incrementing integer primary key (the default when declaring a peewee model), and your IDs will be populated correctly after calling save() or create().\r\n\r\nIf you are using a non-auto-incrementing ID, and the value is generated from the Python application, that's no problem. Because you are generating the ID, everything works.\r\n\r\nHowever -- if you have a non-auto-incrementing ID *and* it is generated by the database server -- historically Peewee has not populated the ID after the insert.\r\n\r\nI've made a change in 1be9f329523297cf27855c9276de3904aa968189 that will more aggressively set the primary-key on the model after insert, in the event that it is available (via RETURNING or whatever). 
Since Peewee already utilizes the RETURNING clause when inserting models into postgres dbs, Peewee should now be working for your use case.\r\n\r\n```python\r\nemail = EmailAddresses.create(\r\n account_uuid= str(account_uuid),\r\n email= \"[email protected]\",\r\n primary=True,\r\n verified=True)\r\nprint(email.uuid) # will print the uuid now.\r\n```\r\n\r\nFor the more general case of populating arbitrary model fields from the database after save, Peewee currently does not do this automatically.\r\n\r\nIf you had multiple fields that were being populated by the DB on insert, you could just use the insert + returning APIs to achieve an equivalent effect for arbitrary columns:\r\n\r\n```python\r\niq = EmailAddress.insert(\r\n account_uuid=str(account_uuid),\r\n primary=True,\r\n verified=True).returning(EmailAddress)\r\n# The insert+returning query returns an EmailAddress instance fully-populated:\r\nemail = list(iq.objects())[0]\r\n```"
] | 2019-04-18T18:54:39 | 2019-04-18T20:45:43 | 2019-04-18T20:33:25 | NONE | null | Hello, I am running python 3.7 with pee wee 3.9.3.
My model seems to not have the primary key attached to the instance after saving the record to the database.
Setup:
```
DATABASE = PostgresqlDatabase(None)
class BaseModel(Model):
class Meta:
database = DATABASE
class EmailAddresses(BaseModel):
uuid = UUIDField(constraints=[SQL("DEFAULT uuid_generate_v4()")], primary_key=True)
account_uuid = TextField()
created_on = DateTimeField(null=True, default=datetime.datetime.now)
email = CharField()
primary = BooleanField(null=True)
verified = BooleanField(null=True)
verified_on = DateTimeField(null=True)
class Meta:
table_name = 'email_addresses'
indexes = (
(('uuid', 'account_uuid'), True),
)
```
I have tried this:
```
email = EmailAddresses.create(
account_uuid= str(account_uuid),
email= "[email protected]",
primary=True,
verified=True
)
```
and
```
email = EmailAddresses()
email.account_uuid = str(account_uuid)
email.email = "[email protected]"
email.primary = True
email.verified = True
email.save()
```
The result each time is :
```
ipdb> email.uuid
ipdb>
```
yet when I go to the database via pgcli or adminer.. I see the record with the primary key.
Am I missing something? Shouldn't the primary key be returned and placed on the object after I "create" or "save" the object to the database? | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1909/timeline | null | completed | null | null |
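The workaround mentioned in the answer above (generate the ID in the application so it is known before the INSERT, sidestepping the need for RETURNING support) can be sketched with sqlite3 and a hypothetical schema:

```python
# Sketch of client-side key generation: if the primary key comes from
# the application rather than a server-side DEFAULT, it is known before
# the INSERT and no RETURNING clause is needed. Schema is hypothetical.
import sqlite3
import uuid

conn = sqlite3.connect(':memory:')
conn.execute(
    'CREATE TABLE email_addresses (uuid TEXT PRIMARY KEY, email TEXT)')

new_id = str(uuid.uuid4())  # generated in the application
conn.execute('INSERT INTO email_addresses (uuid, email) VALUES (?, ?)',
             (new_id, 'test@example.com'))
row = conn.execute('SELECT uuid FROM email_addresses').fetchone()
print(row[0] == new_id)
```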
https://api.github.com/repos/coleifer/peewee/issues/1908 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1908/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1908/comments | https://api.github.com/repos/coleifer/peewee/issues/1908/events | https://github.com/coleifer/peewee/issues/1908 | 433,819,516 | MDU6SXNzdWU0MzM4MTk1MTY= | 1,908 | [Question] how to insert literal sql fragment to query | {
"login": "james-lawrence",
"id": 2835871,
"node_id": "MDQ6VXNlcjI4MzU4NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2835871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/james-lawrence",
"html_url": "https://github.com/james-lawrence",
"followers_url": "https://api.github.com/users/james-lawrence/followers",
"following_url": "https://api.github.com/users/james-lawrence/following{/other_user}",
"gists_url": "https://api.github.com/users/james-lawrence/gists{/gist_id}",
"starred_url": "https://api.github.com/users/james-lawrence/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/james-lawrence/subscriptions",
"organizations_url": "https://api.github.com/users/james-lawrence/orgs",
"repos_url": "https://api.github.com/users/james-lawrence/repos",
"events_url": "https://api.github.com/users/james-lawrence/events{/privacy}",
"received_events_url": "https://api.github.com/users/james-lawrence/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"For the first one, depends on the data-type of the timestamp column, the database driver and the db you're using. You can try using `.where(MyModel.timestamp > float('-inf'))` but it may or may not work. A much saner way would be to use `datetime.datetime.min` instead of \"-infinity\".\r\n\r\nFor insert / on conflict, there are [extensive docs](http://docs.peewee-orm.com/en/latest/peewee/querying.html#upsert) along with [API documentation](http://docs.peewee-orm.com/en/latest/peewee/api.html#Insert.on_conflict).\r\n\r\nTo reference the default, you can use `SQL('DEFAULT')`, e.g.:\r\n\r\n```python\r\n\r\nTable.insert(id=1, col1='foo').on_conflict(\r\n conflict_target=[Table.id],\r\n update={Table.col1: SQL('DEFAULT')})\r\n```\r\n\r\nAlso, please read the documentation. I've clearly requested that, if you have questions, you do not open github issues: http://docs.peewee-orm.com/en/latest/peewee/contributing.html#questions -- stackoverflow, the mailing-list, and the irc channel are all good places to ask questions.",
"note: i believe peewee.AsIs('-infinity') and peewee.AsIs('DEFAULT') is solving me issue. but I find no reference to the method in the documentation.",
"That's fine, or `SQL('-infinity')` / `SQL('DEFAULT')` would probably be the more usual way.",
"I had issues w/ SQL inproperly quoting '-infinity'. that was the first thing I tried. ./shrug.",
"```\r\nModel.select().where(Model.timestamp > peewee.SQL('-infinity')).execute()\r\n```\r\n\r\nfails with `peewee.ProgrammingError: column \"infinity\" does not exist` because it doesn't quote.\r\nand if you try to force the quotes it starts escaping them (which is fine btw)\r\n\r\n```\r\nModel.select().where(Model.created_at > peewee.AsIs('-infinity')).execute()\r\n```\r\nsucceeds because it quotes.\r\n\r\nthe fact AsIs isn't anywhere in the documentation lead to the issue being filed.",
"> the fact AsIs isn't anywhere in the documentation lead to the issue being filed.\r\n\r\nAnd yet your issue makes no mention of `AsIs` until we get down to the comments.\r\n\r\nDid you try `Value('-infinity')` ?"
] | 2019-04-16T14:41:49 | 2019-04-16T20:12:57 | 2019-04-16T15:25:34 | CONTRIBUTOR | null | how does one express:
`SELECT * FROM table WHERE timestamp > '-infinity';`
or
`INSERT INTO table (...., col1) VALUES (....) ON CONFLICT (id) DO UPDATE SET col1 = DEFAULT;`
w/ peewee? | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1908/timeline | null | completed | null | null |
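The "much saner way" from the answer above, comparing against `datetime.datetime.min` instead of the Postgres-specific `'-infinity'` literal, can be sketched with sqlite3 (timestamps stored as ISO-8601 text, a common SQLite convention):

```python
# Sketch of the suggested alternative: datetime.datetime.min works as an
# ordinary bound parameter, so no literal-SQL escape hatch is needed.
# Table and column names are hypothetical.
import sqlite3
import datetime

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (ts TEXT)')
conn.execute('INSERT INTO t (ts) VALUES (?)',
             (datetime.datetime(2019, 4, 16).isoformat(),))
# ISO-8601 strings compare lexicographically, so this acts as a floor.
floor = datetime.datetime.min.isoformat()
rows = conn.execute('SELECT ts FROM t WHERE ts > ?', (floor,)).fetchall()
print(len(rows))
```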
https://api.github.com/repos/coleifer/peewee/issues/1907 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1907/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1907/comments | https://api.github.com/repos/coleifer/peewee/issues/1907/events | https://github.com/coleifer/peewee/issues/1907 | 432,766,727 | MDU6SXNzdWU0MzI3NjY3Mjc= | 1,907 | Add truncate_table back into v3 | {
"login": "caidanw",
"id": 9907093,
"node_id": "MDQ6VXNlcjk5MDcwOTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9907093?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/caidanw",
"html_url": "https://github.com/caidanw",
"followers_url": "https://api.github.com/users/caidanw/followers",
"following_url": "https://api.github.com/users/caidanw/following{/other_user}",
"gists_url": "https://api.github.com/users/caidanw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/caidanw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/caidanw/subscriptions",
"organizations_url": "https://api.github.com/users/caidanw/orgs",
"repos_url": "https://api.github.com/users/caidanw/repos",
"events_url": "https://api.github.com/users/caidanw/events{/privacy}",
"received_events_url": "https://api.github.com/users/caidanw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Fixed.",
"Thank you so much!"
] | 2019-04-12T22:23:40 | 2019-04-13T23:23:42 | 2019-04-13T14:10:13 | NONE | null | I would love to see this functionality in v3 as it was available in v2. It doesn't make sense to omit this simple and very useful function. I would prefer to truncate over drop tables when I don't need a table structure change.
I saw your previous comments on a [related issue](https://github.com/coleifer/peewee/issues/1345) but I feel that this still holds value and should be added back into v3.
Is there any specific reason that this shouldn't be added? I can volunteer to port the functionality from v2 -> v3 with documentation and unit tests. Let me know if this is something you're interested in, thanks! | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1907/timeline | null | completed | null | null |
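For engines that lack a TRUNCATE statement (such as SQLite), a truncate presumably boils down to an unfiltered DELETE, which clears the rows while keeping the table structure intact — the property the issue author wants. A quick sqlite3 sketch:

```python
# Sketch of a truncate-equivalent on SQLite: DELETE with no WHERE clause
# empties the table but leaves its structure in place.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE t (id INTEGER PRIMARY KEY, v TEXT)')
conn.executemany('INSERT INTO t (v) VALUES (?)', [('a',), ('b',)])
conn.execute('DELETE FROM t')  # truncate-equivalent
remaining = conn.execute('SELECT COUNT(*) FROM t').fetchone()[0]
# Table still exists and is usable afterwards:
conn.execute('INSERT INTO t (v) VALUES (?)', ('c',))
print(remaining)
```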
https://api.github.com/repos/coleifer/peewee/issues/1906 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1906/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1906/comments | https://api.github.com/repos/coleifer/peewee/issues/1906/events | https://github.com/coleifer/peewee/issues/1906 | 431,968,690 | MDU6SXNzdWU0MzE5Njg2OTA= | 1,906 | don't release lock when update failed | {
"login": "sison-yuan",
"id": 18308699,
"node_id": "MDQ6VXNlcjE4MzA4Njk5",
"avatar_url": "https://avatars.githubusercontent.com/u/18308699?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sison-yuan",
"html_url": "https://github.com/sison-yuan",
"followers_url": "https://api.github.com/users/sison-yuan/followers",
"following_url": "https://api.github.com/users/sison-yuan/following{/other_user}",
"gists_url": "https://api.github.com/users/sison-yuan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sison-yuan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sison-yuan/subscriptions",
"organizations_url": "https://api.github.com/users/sison-yuan/orgs",
"repos_url": "https://api.github.com/users/sison-yuan/repos",
"events_url": "https://api.github.com/users/sison-yuan/events{/privacy}",
"received_events_url": "https://api.github.com/users/sison-yuan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You probably need to run that in a transaction and rollback the transaction in the event an integrity error occurs:\r\n\r\n```python\r\n try:\r\n with database.atomic():\r\n if grouping_key_id:\r\n grouping_key_obj = hook_log.query_grouping_key_by_id(grouping_key_id)\r\n grouping_key_obj.service_name = service_name\r\n grouping_key_obj.grouping_key = grouping_key\r\n grouping_key_obj.save()\r\n else:\r\n hook_log.create_grouping_key(service_name, grouping_key)\r\n except IntegrityError:\r\n # do something\r\n pass\r\n```",
"oh!! I find when I create the Database object, I can set autorollback flag to True, it is useful.",
"Yes, you can definitely use `autorollback`. This feature was requested by some users and although I do *not* like it, enough people have expressed interest that it is implemented.\r\n\r\nI would never use it in my own code. Better to use explicit transactions around data-modifying code (`with db.atomic(): ...`)."
] | 2019-04-11T11:21:05 | 2019-04-12T04:00:47 | 2019-04-11T15:57:34 | NONE | null | I use mysqldb + peewee, when I update a row to trigger an IntegrityError on an unique key filed, first time, it is normal, but when second, it is in a mysql lock wait until timeout.
when i open the autocommit flag in the mysqldb, the problem is not appear recurrence, I think it is a bug.
i run the mysql 5.7.25 READ-COMMITED , mysql-python 1.2.3
this is my table define
``` sql
CREATE TABLE `groupingKey` (
`id` int(11) NOT NULL AUTO_INCREMENT,
`serviceName` varchar(200) NOT NULL,
`groupingKey` text NOT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `groupingKey_serviceName_uindex` (`serviceName`)
) ENGINE=InnoDB AUTO_INCREMENT=13 DEFAULT CHARSET=utf8
```
My model definition:
``` python
class GroupingKey(BaseModel):
service_name = CharField(column_name='serviceName', unique=True, index=True)
grouping_key = TextField(column_name='groupingKey')
class Meta:
table_name = 'groupingKey'
```
My code:
``` python
try:
if grouping_key_id:
grouping_key_obj = hook_log.query_grouping_key_by_id(grouping_key_id)
grouping_key_obj.service_name = service_name
grouping_key_obj.grouping_key = grouping_key
grouping_key_obj.save()
else:
hook_log.create_grouping_key(service_name, grouping_key)
except IntegrityError:
# do something
pass
```
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1906/timeline | null | completed | null | null |
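The fix suggested in the first comment — wrap the write in a transaction and roll back when an IntegrityError occurs, so the failed statement's locks are released — can be sketched with sqlite3 standing in for MySQL (the unique column plays the role of `serviceName`):

```python
# Sketch of explicit rollback on IntegrityError, so that locks held by
# the failed statement are released instead of lingering until timeout.
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('CREATE TABLE grouping_key (service_name TEXT UNIQUE)')
conn.execute('INSERT INTO grouping_key VALUES (?)', ('svc-a',))
conn.commit()

try:
    # Duplicate value violates the unique constraint.
    conn.execute('INSERT INTO grouping_key VALUES (?)', ('svc-a',))
    conn.commit()
except sqlite3.IntegrityError:
    conn.rollback()  # release anything held by the failed statement

# Subsequent work proceeds normally after the rollback:
count = conn.execute('SELECT COUNT(*) FROM grouping_key').fetchone()[0]
print(count)
```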
https://api.github.com/repos/coleifer/peewee/issues/1905 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1905/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1905/comments | https://api.github.com/repos/coleifer/peewee/issues/1905/events | https://github.com/coleifer/peewee/issues/1905 | 431,115,771 | MDU6SXNzdWU0MzExMTU3NzE= | 1,905 | Allow disabling lazy loading - 3.x | {
"login": "brendanblackwood",
"id": 207637,
"node_id": "MDQ6VXNlcjIwNzYzNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/207637?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brendanblackwood",
"html_url": "https://github.com/brendanblackwood",
"followers_url": "https://api.github.com/users/brendanblackwood/followers",
"following_url": "https://api.github.com/users/brendanblackwood/following{/other_user}",
"gists_url": "https://api.github.com/users/brendanblackwood/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brendanblackwood/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brendanblackwood/subscriptions",
"organizations_url": "https://api.github.com/users/brendanblackwood/orgs",
"repos_url": "https://api.github.com/users/brendanblackwood/repos",
"events_url": "https://api.github.com/users/brendanblackwood/events{/privacy}",
"received_events_url": "https://api.github.com/users/brendanblackwood/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Sure - `ForeignKeyField` has an `accessor_class`, which is `ForeignKeyAccessor`. This is a descriptor and it implements a `get_rel_instance()` method, which is what does the actual query for the related obj.\r\n\r\nI haven't tested this, but probably something like this ought to work:\r\n\r\n```python\r\nclass NoQueryForeignKeyAccessor(ForeignKeyAccessor):\r\n def get_rel_instance(self, instance):\r\n value = instance.__data__.get(self.name)\r\n if value is not None:\r\n if self.name in instance.__rel__:\r\n return instance.__rel__[self.name]\r\n else:\r\n # Ordinarily this would be a query...Instead let's just return the FK col value?\r\n return value\r\n elif not self.field.null:\r\n raise self.rel_model.DoesNotExist\r\n\r\nclass NoQueryForeignKeyField(ForeignKeyField):\r\n accessor_class = NoQueryForeignKeyAccessor\r\n```",
"Makes sense. Thanks!",
"I've got an implementation in a branch you can check out: https://github.com/coleifer/peewee/tree/feature/disable-fk-lazy-load\r\n\r\nIt adds an option to `ForeignKeyField`, `lazy_load`, which defaults to True but can be set to False to prevent lazy loading the related model instance.",
"Perfect! I briefly played with it and that works well.",
"Merged into master, will be included in the next release."
] | 2019-04-09T18:36:13 | 2019-04-14T15:40:53 | 2019-04-09T20:02:25 | NONE | null | I'm upgrading from 2.x to 3.x (finally) and had previously disabled lazy loading of foreign keys via method 2 from #1248. It looks like descriptors have changed/been removed in 3.x. Is there a way to achieve a similar behavior now? Thanks. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1905/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1905/timeline | null | completed | null | null |
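The `NoQueryForeignKeyAccessor` sketched in the comments above is a data descriptor; the underlying pattern can be shown in plain Python with no peewee involved (class names here are illustrative, not peewee API):

```python
# Plain-Python sketch of the descriptor idea behind disabling lazy
# loading: instead of issuing a query on attribute access, the
# descriptor hands back the stored foreign-key value.

class RawForeignKey:
    def __set_name__(self, owner, name):
        self.name = name

    def __set__(self, instance, value):
        instance.__dict__[self.name] = value

    def __get__(self, instance, owner=None):
        if instance is None:
            return self
        # A lazy-loading accessor would run a SELECT here; we simply
        # return the raw key value that was stored.
        return instance.__dict__.get(self.name)

class Tweet:
    user = RawForeignKey()

t = Tweet()
t.user = 42        # the foreign-key column value
print(t.user)      # no query performed on access
```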
https://api.github.com/repos/coleifer/peewee/issues/1904 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1904/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1904/comments | https://api.github.com/repos/coleifer/peewee/issues/1904/events | https://github.com/coleifer/peewee/issues/1904 | 430,395,633 | MDU6SXNzdWU0MzAzOTU2MzM= | 1,904 | Set statement_timeout per connection or per transaction | {
"login": "tuukkamustonen",
"id": 94327,
"node_id": "MDQ6VXNlcjk0MzI3",
"avatar_url": "https://avatars.githubusercontent.com/u/94327?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tuukkamustonen",
"html_url": "https://github.com/tuukkamustonen",
"followers_url": "https://api.github.com/users/tuukkamustonen/followers",
"following_url": "https://api.github.com/users/tuukkamustonen/following{/other_user}",
"gists_url": "https://api.github.com/users/tuukkamustonen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tuukkamustonen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tuukkamustonen/subscriptions",
"organizations_url": "https://api.github.com/users/tuukkamustonen/orgs",
"repos_url": "https://api.github.com/users/tuukkamustonen/repos",
"events_url": "https://api.github.com/users/tuukkamustonen/events{/privacy}",
"received_events_url": "https://api.github.com/users/tuukkamustonen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Peewee uses a special hook, `Database._initialize_connection(self, conn)`, which you can subclass and override to implement special logic when a new connection is opened.\r\n\r\nUntested, but presumably something like this:\r\n\r\n```python\r\nclass PgBouncerPostgresqlDatabase(PostgresqlDatabase):\r\n def _initialize_connection(self, conn):\r\n curs = conn.cursor()\r\n curs.execute('set statement_timeout=\\'1ms\\'')\r\n```\r\n\r\nI'm not sure what the ramifications of transaction-mode would be, and how that would interact with Peewee's own connection management or whatever. But yeah, I guess you could override the `begin()` method as well:\r\n\r\n```python\r\nclass PgBouncerPostgresqlDatabase(PostgresqlDatabase):\r\n def _initialize_connection(self, conn):\r\n curs = conn.cursor()\r\n curs.execute('set statement_timeout=\\'1ms\\'')\r\n\r\n def begin(self):\r\n self.execute_sql('SET ...')\r\n```",
"I'll give those a try, thanks!",
"Tried the suggestions above.\r\n\r\n```python\r\n def begin(self):\r\n self.execute_sql('SET ...')\r\n```\r\n\r\nThis seems to work only when transaction is explicitly defined (e.g. `with db.atomic()`). It doesn't trigger with `autocommit=True`.\r\n\r\nAlso, it only worked for me if I _didn't_ use `LOCAL` and only did `SET statement_timeout=...`.\r\n\r\nPostgres docs say:\r\n\r\n> The effects of SET LOCAL last only till the end of the current transaction, whether committed or not.--\r\n\r\nIn the logs, I cannot see any `BEGIN`, `COMMIT` or `SAVEPOINT` messages so I'm not sure what `db.atomic()` actually triggers. I guess `SET LOCAL` gets run in wrong context and thus doesn't go into effect. I looked into code but it would take time to get into it...\r\n\r\nOn the other hand:\r\n\r\n```python\r\n def _initialize_connection(self, conn):\r\n curs = conn.cursor()\r\n curs.execute('set statement_timeout=\\'1ms\\'')\r\n```\r\n\r\nThis works fine. So, I think I'm just going to use session-wide parameters instead of in-transaction (`LOCAL`).",
"> This seems to work only when transaction is explicitly defined (e.g. with db.atomic()). It doesn't trigger with autocommit=True.\r\n\r\nYes, that's correct.\r\n\r\n> In the logs, I cannot see any BEGIN, COMMIT or SAVEPOINT messages so I'm not sure what db.atomic() actually triggers\r\n\r\nThe transactions are managed behind-the-scenes by `psycopg2`. You can see in the peewee code that it calls `rollback()` or `commit()` on the underlying psycopg2 connection object. The [managing transactions](http://docs.peewee-orm.com/en/latest/peewee/database.html#managing-transactions) document is worth reading if you're unclear on Peewee's APIs or how to use them.\r\n\r\nPeewee always runs in autocommit mode by default. That is: Peewee will issue commit unless there is an active manual_commit or atomic block wrapping the code. If you want to explicitly manage transactions/commit yourself, you wrap the corresponding code in the `manual_commit()` context manager. If you want to wrap multiple operations in a transaction or savepoint, you wrap the corresponding code in an `atomic()` context manager. Otherwise **each statement is effectively in its own transaction**.",
"This might also help: http://initd.org/psycopg/docs/usage.html#transactions-control",
"A few findings, should anyone read this.\r\n\r\n```python\r\n def _initialize_connection(self, conn):\r\n curs = conn.cursor()\r\n curs.execute('SET statement_timeout=\\'1s\\'')\r\n```\r\n\r\nThis opens up new transaction, because:\r\n\r\n> In Psycopg transactions are handled by the connection class. **By default, the first time a command is sent to the database (using one of the cursors created by the connection), a new transaction is created.** The following database commands will be executed in the context of the same transaction – not only the commands issued by the first cursor, but the ones issued by all the cursors created by the same connection. Should any command fail, the transaction will be aborted and no further command will be executed until a call to the rollback() method.\r\n\r\nSo, if you use the code snippet above, it opens up transaction, but does not commit. This means connection is left hanging as \"idle in transaction\" state. If you have slow processing or I/O, that means the connection is kept open / reserved for the whole of this time (as in my case, where I am calling another service that may sometimes take 10-30s to respond).\r\n\r\nIf you're not using connection pooler, then this is just fine. You reserve DB connections for the duration of connect-close cycle, anyway.\r\n\r\nHowever, I am running pgBouncer in *transaction*-pooling mode, so each connection gets served a connection to pgBouncer, but only transactions are routed to the actual PG database, so it's important when you `BEGIN`/`COMMIT`. 
Opening, but not committing the transaction shows as \"idle in transaction\" in PG and that transaction is then holding an actual pgBouncer->DB connection, just sitting idle, blocking others.\r\n\r\nSo be sure to commit even this statement:\r\n\r\n```python\r\n def _initialize_connection(self, conn):\r\n with conn.cursor() as curs: # auto-releases cursor, not required but nicer\r\n curs.execute('SET statement_timeout=\\'1s\\'')\r\n conn.commit() # REQUIRED\r\n```\r\n\r\nFinally, with *transaction*-pooling mode in pgBouncer, nothing actually guarantees that the `SET ...` statement would be run for each *pgBouncer->PG* connection as the first call. Any transaction may get served with a pgBouncer->PG connection, which means any `SELECT`, `UPDATE` or whatever you might have there may be served a fresh pgBouncer->PG connection, where `SET ...` wasn't actually called yet.\r\n\r\nAnd sure, `SET ...` statements leak from connection to connection, so it's completely messed up (but for me _that_ is fine as I use the same `statement_timeout` for all connections, so leaking does no harm).\r\n\r\nDue to the above, I tried switching to *session*-pooling mode, but ran into a potential issue with pgBouncer, ref https://github.com/pgbouncer/pgbouncer/issues/384. However, even if I managed to switch to session-pooling, it wouldn't help, because I'm opening a DB connection (via peewee) when inbound (web app) request processing starts and closing it when the response is about to get returned. With session-pooling mode, I would need a finer-grained connect/close cycle, again to avoid connections hanging in the \"idle in transaction\" state, doing nothing (as they wait on processing or I/O).\r\n\r\nSo, potential solutions here:\r\n\r\n1. 
Wrap all calls `with db.atomic()` to explicitly define transactions and use the suggested:\r\n\r\n class PgBouncerPostgresqlDatabase(PostgresqlDatabase):\r\n def begin(self):\r\n self.execute_sql('SET ...')\r\n\r\n However, wrapping calls in `with db.atomic()` blocks is problematic, because of code re-use and structure - you don't always know if a transaction is already open or not, so you might actually end up with nested transactions.\r\n\r\n2. Write a context manager (similar to `db.atomic()`) that connects/closes the connection and wrap it over all statements (this would allow using session-pooling... but there's no real benefit here).\r\n\r\n3. Use pgBouncer's `query_timeout` as a substitute for `statement_timeout` (if it's ok, see below *)\r\n\r\n4. Use a server-level `statement_timeout` (and override it when/if needed, e.g. when debugging and running potentially slow queries)\r\n\r\n5. Set `statement_timeout` in the pgBouncer->DB connection via `connect_query` (`options=-c statement_timeout=...` would do the same but it's not allowed). I will go with this.\r\n\r\n*pgBouncer has `query_timeout` (broken in older versions and fixed in 1.9.0+), but it's documented as:\r\n\r\n> **query_timeout**\r\n> Queries running longer than that are canceled. This should be used **only with slightly smaller server-side `statement_timeout`**, to apply only for network problems. [seconds]\r\n\r\nSo apparently it cannot be used as a substitute for `statement_timeout` (but unfortunately, its docs don't explain why).",
"Thank you for the excellent comment and sharing ur experience w/this stuff."
] | 2019-04-08T11:23:06 | 2019-04-29T07:38:45 | 2019-04-08T16:30:11 | NONE | null | Using peewee with postgres, you can normally set `statement_timeout` like this:
```python
conn = PostgresqlExtDatabase(
...
options='-c statement_timeout=1ms'
)
```
However, `pgbouncer` doesn't allow `options` (you can ignore it, but it just has no effect then). So I'm planning to do something like:
```python
database.connect()
database.execute_sql("SET statement_timeout = '1ms'")
```
This seems to work, but I wonder if it's the correct approach(?)
Also, when you run `pgbouncer` in `transaction` mode, each transaction may get served a different connection. In this case, I think I should rather:
```sql
BEGIN
SET LOCAL statement_timeout = '1ms'
...
```
Is it possible to achieve this with peewee so that I wouldn't need to manually do something like this:
```python
with db.atomic():
db.execute_sql('SET ...')
```
(`transaction` pooling mode is better than `session` mode because connections don't lie idle when a connection is open but no DB activity is happening.) | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1904/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1903 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1903/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1903/comments | https://api.github.com/repos/coleifer/peewee/issues/1903/events | https://github.com/coleifer/peewee/pull/1903 | 429,977,906 | MDExOlB1bGxSZXF1ZXN0MjY3OTk4Nzk4 | 1,903 | Add note to quickstart about deferring database initialization | {
"login": "carlwgeorge",
"id": 12187228,
"node_id": "MDQ6VXNlcjEyMTg3MjI4",
"avatar_url": "https://avatars.githubusercontent.com/u/12187228?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/carlwgeorge",
"html_url": "https://github.com/carlwgeorge",
"followers_url": "https://api.github.com/users/carlwgeorge/followers",
"following_url": "https://api.github.com/users/carlwgeorge/following{/other_user}",
"gists_url": "https://api.github.com/users/carlwgeorge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/carlwgeorge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/carlwgeorge/subscriptions",
"organizations_url": "https://api.github.com/users/carlwgeorge/orgs",
"repos_url": "https://api.github.com/users/carlwgeorge/repos",
"events_url": "https://api.github.com/users/carlwgeorge/events{/privacy}",
"received_events_url": "https://api.github.com/users/carlwgeorge/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The quick-start is supposed to be just that -- a quick start. It is nowhere near comprehensive.\r\n\r\nThe [database](http://docs.peewee-orm.com/en/latest/peewee/database.html) documentation contains tons of information:\r\n\r\n* [run-time database configuration](http://docs.peewee-orm.com/en/latest/peewee/database.html#run-time-database-configuration)\r\n* [dynamically defining a database](http://docs.peewee-orm.com/en/latest/peewee/database.html#dynamically-defining-a-database)\r\n* [setting the database at run-time](http://docs.peewee-orm.com/en/latest/peewee/database.html#setting-the-database-at-run-time)\r\n* [testing peewee applications](http://docs.peewee-orm.com/en/latest/peewee/database.html#testing)"
] | 2019-04-06T00:35:18 | 2019-04-06T04:07:48 | 2019-04-06T04:07:48 | CONTRIBUTOR | null | I'm new to peewee, and must admit I spent way too much time trying to figure out how to make my application use a temporary database during tests. I'm generating the database path with [appdirs](https://github.com/ActiveState/appdirs) and changing the environment during tests with [pytest's monkeypatch](https://docs.pytest.org/en/latest/reference.html#_pytest.monkeypatch.MonkeyPatch.setenv). Run-time database configuration was exactly what I needed, I just didn't know to search for that term. Eventually I found the documentation for that, but I think it's important enough to make note of it in the quickstart. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1903/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/1903",
"html_url": "https://github.com/coleifer/peewee/pull/1903",
"diff_url": "https://github.com/coleifer/peewee/pull/1903.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/1903.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/1902 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1902/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1902/comments | https://api.github.com/repos/coleifer/peewee/issues/1902/events | https://github.com/coleifer/peewee/issues/1902 | 427,721,284 | MDU6SXNzdWU0Mjc3MjEyODQ= | 1,902 | regexp not working with database proxy (sqlite) | {
"login": "wice90",
"id": 9954078,
"node_id": "MDQ6VXNlcjk5NTQwNzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9954078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wice90",
"html_url": "https://github.com/wice90",
"followers_url": "https://api.github.com/users/wice90/followers",
"following_url": "https://api.github.com/users/wice90/following{/other_user}",
"gists_url": "https://api.github.com/users/wice90/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wice90/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wice90/subscriptions",
"organizations_url": "https://api.github.com/users/wice90/orgs",
"repos_url": "https://api.github.com/users/wice90/repos",
"events_url": "https://api.github.com/users/wice90/events{/privacy}",
"received_events_url": "https://api.github.com/users/wice90/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"How are you initializing your proxy? i.e., what is the database instance you are passing to it to initialize it?\r\n\r\nAdditionally, Peewee does not automatically register a user-defined regexp implementation for sqlite.",
"The `playhouse.sqlite_ext.SqliteExtDatabase` *does* allow you to register a REGEXP user-defined function - [documentation](http://docs.peewee-orm.com/en/latest/peewee/sqlite_ext.html#SqliteExtDatabase):\r\n\r\n```python\r\nfrom playhouse.sqlite_ext import SqliteExtDatabase\r\n\r\n# Instantiate the sqlite-ext db, and tell it we want to register a regexp function.\r\ndb = SqliteExtDatabase('my_database.db', regexp_function=True)\r\n```\r\n\r\nUsing the standard `peewee.SqliteDatabase` it is also possible to register a user-defined function. You **do not** need to do any crazy shit like hand-editing the code. Here are the docs for registering a user-defined function: http://docs.peewee-orm.com/en/latest/peewee/api.html#SqliteDatabase.func\r\n\r\nExample:\r\n\r\n```python\r\ndb = SqliteDatabase('my_app.db')\r\n\r\[email protected]()\r\ndef regexp(expr, s):\r\n return re.search(expr, s) is not None\r\n```\r\n\r\n--------\r\n\r\nNow, regarding the use of `Proxy` - you cannot register user-defined functions using proxy database, as this is a sqlite-only feature. So you would do something like this:\r\n\r\n```python\r\ndb = Proxy()\r\n\r\n# ... define models, whatever\r\n\r\n# Whenever you actually *do* initialize your proxy db, *at that time* you would\r\n# register the user-defined regexp func.\r\nsqlite_db = SqliteDatabase('my_app.db')\r\ndb.initialize(sqlite_db)\r\n\r\[email protected]()\r\ndef regexp(expr, s):\r\n return re.search(expr, s) is not None\r\n```"
] | 2019-04-01T13:58:16 | 2019-04-01T14:50:16 | 2019-04-01T14:49:55 | NONE | null | When I try to execute a query with regex (using the regex() method) on a database with proxy I get an error.
This is the table I am operating on:
```from peewee import *
proxy = Proxy()
class BaseModel(Model):
class Meta:
database = proxy
class Table2(BaseModel):
column1= TextField(null=True)
column2= TextField(null=True)
```
The error:
> Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/peewee.py", line 2875, in execute_sql
cursor.execute(sql, params or ())
sqlite3.OperationalError: no such function: REGEXP
Printing the .sql() method returns
> ('SELECT "t1"."id", "t1"."column1", "t1"."column2" FROM "TABLE2" AS "t1" WHERE (("t1"."column1" = ?) AND ("t1"."column2" REGEXP ?))', ['app_name_1', '.*'])
I tried to execute the regex method on a database connection without a proxy, and it worked as expected. Printing the .sql() method returned a different value:
> ('SELECT "t1"."id" FROM "TABLE" AS "t1" WHERE ("t1"."name" ~ %s)', ['.* 4'])
PS: I made some changes to peewee.py (in class SqliteDatabase):
- I created a new function:
```
def regexp(self, expr, item):
reg = re.compile(expr)
return reg.search(item) is not None
```
- Under the function `_connect`, after `sqlite3.connect`, I added the following line:
`conn.create_function("REGEXP", 2, self.regexp)`
After these updates, regex worked on a database accessed via proxy.
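Note that the same effect is available without editing `peewee.py` at all: SQLite's `REGEXP` operator simply delegates to a user-defined function registered on the connection. A stdlib-only sketch (table name and pattern are illustrative):

```python
import re
import sqlite3


def regexp(expr, item):
    # SQLite rewrites "X REGEXP Y" as regexp(Y, X), so the first
    # argument is the pattern and the second is the column value.
    return item is not None and re.search(expr, item) is not None


conn = sqlite3.connect(':memory:')
conn.create_function('REGEXP', 2, regexp)
conn.execute('CREATE TABLE t (name TEXT)')
conn.executemany('INSERT INTO t (name) VALUES (?)',
                 [('app_name_1',), ('other',)])
rows = conn.execute('SELECT name FROM t WHERE name REGEXP ?',
                    ('app_name_.*',)).fetchall()
print(rows)  # -> [('app_name_1',)]
```

Registering the function once per connection (peewee exposes the same hook via `@db.func()`) avoids patching library code.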
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1902/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1902/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1901 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1901/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1901/comments | https://api.github.com/repos/coleifer/peewee/issues/1901/events | https://github.com/coleifer/peewee/issues/1901 | 427,526,493 | MDU6SXNzdWU0Mjc1MjY0OTM= | 1,901 | When joining subquery, the .get() model instance result doesn't contain that attribute. | {
"login": "eric-spitfire",
"id": 46429515,
"node_id": "MDQ6VXNlcjQ2NDI5NTE1",
"avatar_url": "https://avatars.githubusercontent.com/u/46429515?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eric-spitfire",
"html_url": "https://github.com/eric-spitfire",
"followers_url": "https://api.github.com/users/eric-spitfire/followers",
"following_url": "https://api.github.com/users/eric-spitfire/following{/other_user}",
"gists_url": "https://api.github.com/users/eric-spitfire/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eric-spitfire/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eric-spitfire/subscriptions",
"organizations_url": "https://api.github.com/users/eric-spitfire/orgs",
"repos_url": "https://api.github.com/users/eric-spitfire/repos",
"events_url": "https://api.github.com/users/eric-spitfire/events{/privacy}",
"received_events_url": "https://api.github.com/users/eric-spitfire/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Uhh, this example is damn near inscrutable. Can you make another minimal example? Like wtf is A, B, C and D, and how are they related? It seems like C and B have an \"a_id\", and then D has a \"order_id\"?",
"OK, I think I've made an example that is clear.\r\n\r\n```python\r\nclass User(Base):\r\n username = TextField()\r\n\r\nclass Game(Base):\r\n user = ForeignKeyField(User)\r\n name = TextField()\r\n\r\nclass Score(Base):\r\n game = ForeignKeyField(Game)\r\n points = IntegerField()\r\n```\r\n\r\nTo get a list of users and the sum of their points across all games they've played:\r\n\r\n```python\r\n# Subquery calculates just the user id and the sum of points.\r\nsubq = (Game\r\n .select(Game.user, fn.SUM(Score.points).alias('total_points'))\r\n .join(Score, JOIN.LEFT_OUTER)\r\n .group_by(Game.user))\r\n\r\n# Query all users and get their total points.\r\nquery = (User\r\n .select(User.username, subq.c.total_points)\r\n .join(subq, on=(User.id == subq.c.user_id)))\r\n```\r\n\r\nPeewee sees that you're joining on a subquery whose primary model is `Game`. Thus, Peewee assumes we're looking at attributes and values related to the game table. Peewee attempts to reconstruct the model graph when iterating the results, so you would write:\r\n\r\n```python\r\nfor user in query:\r\n print(user.username, user.game.total_points)\r\n```\r\n\r\nIf you don't want Peewee to try and do the model-graph construction, you can simply patch all joined attributes onto the User instance via the \"objects()\" method:\r\n\r\n```python\r\nfor user in query.objects():\r\n print(user.username, user.total_points)\r\n```",
"Thanks for your time, sorry for the unclear example previously. I've investigated further and discovered if there are two joins, only the model graph for the first join is constructed. See below.\r\n\r\n```\r\nclass User(Base):\r\n username = TextField()\r\n\r\nclass Game(Base):\r\n user = ForeignKeyField(User)\r\n name = TextField()\r\n\r\nclass Score(Base):\r\n game = ForeignKeyField(Game)\r\n points = IntegerField()\r\n\r\nclass ProfilePicture(Base):\r\n url = TextField()\r\n user = ForeignKeyField(User)\r\n \r\nsubq = (Game\r\n .select(Game.user, fn.SUM(Score.points).alias('total_points'))\r\n .join(Score, JOIN.LEFT_OUTER)\r\n .group_by(Game.user))\r\n\r\n# Query all users and get their total points, and their pictures.\r\n# .profile_picture is present, but .game is not.\r\nquery = (User\r\n .select(User.username, subq.c.total_points)\r\n .join(ProfilePicture, on=(User.id == ProfilePicture.user_id), attr=\"profile_picture\")\r\n .join(subq, on=(User.id == subq.c.user_id)))\r\n\r\n\r\n# Query all users and get their total points, and their pictures.\r\n# .game is present, but .profile_picture is not.\r\nquery = (User\r\n .select(User.username, subq.c.total_points)\r\n .join(subq, on=(User.id == subq.c.user_id))\r\n .join(ProfilePicture, on=(User.id == ProfilePicture.user_id), attr=\"profile_picture\"))\r\n```",
"I'm not sure this is quite right:\r\n\r\n```python\r\n# Query all users and get their total points, and their pictures.\r\n# .profile_picture is present, but .game is not.\r\nquery = (User\r\n .select(User.username, subq.c.total_points)\r\n .join(ProfilePicture, on=(User.id == ProfilePicture.user_id), attr=\"profile_picture\")\r\n .join(subq, on=(User.id == subq.c.user_id)))\r\n```\r\n\r\nYou're not actually selecting any columns from the `ProfilePicture` model...\r\n\r\nAdditionally, Peewee uses join context which has to do with the last-model being joined. You probably want to modify your queries:\r\n\r\n```python\r\nquery = (User\r\n .select(User.username, subq.c.total_points)\r\n .join(subq, on=(User.id == subq.c.user_id))\r\n .switch(User)\r\n .join(ProfilePicture, on=(User.id == ProfilePicture.user_id), attr=\"profile_picture\"))\r\n```\r\n\r\nOr you could just:\r\n\r\n```python\r\nquery = (User\r\n .select(User.username, subq.c.total_points)\r\n .join_from(User, subq, on=(User.id == subq.c.user_id))\r\n .join_from(User, ProfilePicture, attr=\"profile_picture\"))\r\n```",
"I think I'll give the last two approaches a try..."
] | 2019-04-01T06:04:30 | 2019-04-02T04:10:47 | 2019-04-01T15:22:48 | NONE | null | When joining a subquery into a Model query, I can only retrieve the subquery column via `.dicts` but not `.get`.
I'd like to be able to get the value from the model instance returned by `.get`, so as not to flatten the result / foreign key objects that might also be retrieved in the same query into a single dictionary.
Does this make sense?
Here's an example. (I removed the foreign key joins for brevity.)
```
def select_instances():
summed_value = (
B.select(
B.a_id, fn.SUM(D.value).alias("summed_value")
)
.join(D, on=(B.id == D.order_id))
.group_by(B.a_id)
).alias("summed_value")
return (
A.select(
A,
fn.SUM(C.value).alias("value"),
summed_value.c.summed_value.alias("summed_value"),
)
.join(
C,
JOIN.LEFT_OUTER,
on=(A.id == C.a_id),
)
.join(
summed_value,
JOIN.LEFT_OUTER,
on=(A.id == summed_value.c.a_id),
)
.group_by(
A,
summed_value.c.summed_value,
summed_value.c.a_id,
)
)
instance = select_instances().where(A.id == 1).get()
print (instance.summed_value) # AttributeError
instances = select_instances().where(A.id == 1).dicts()
print(instances) # Can see summed_value
``` | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1901/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1900 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1900/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1900/comments | https://api.github.com/repos/coleifer/peewee/issues/1900/events | https://github.com/coleifer/peewee/pull/1900 | 427,388,572 | MDExOlB1bGxSZXF1ZXN0MjY2MDA4NDIw | 1,900 | Make sure pwiz take care of camelCase | {
"login": "eggachecat",
"id": 18111656,
"node_id": "MDQ6VXNlcjE4MTExNjU2",
"avatar_url": "https://avatars.githubusercontent.com/u/18111656?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eggachecat",
"html_url": "https://github.com/eggachecat",
"followers_url": "https://api.github.com/users/eggachecat/followers",
"following_url": "https://api.github.com/users/eggachecat/following{/other_user}",
"gists_url": "https://api.github.com/users/eggachecat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eggachecat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eggachecat/subscriptions",
"organizations_url": "https://api.github.com/users/eggachecat/orgs",
"repos_url": "https://api.github.com/users/eggachecat/repos",
"events_url": "https://api.github.com/users/eggachecat/events{/privacy}",
"received_events_url": "https://api.github.com/users/eggachecat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I appreciate the patch. I've chosen to implement it slightly differently (default to using snake case, more tests, factored table-naming into a function to use in both places it is used)."
] | 2019-03-31T11:57:56 | 2019-03-31T17:36:12 | 2019-03-31T17:36:12 | NONE | null | Add feature/testcases so that if original table/column name is camelCase the autogenerated one by pwiz will be snake_case
Let's say we have a table named `CamelCaseTableName` who has a column named `camelCase`, with `pwiz`,
## Before
```python
class Camelcasetablename(BaseModel):
camelcase = CharField(column_name='camelCase')
class Meta:
table_name = 'camelCaseTableName'
```
## After
```python
class CamelCaseTableName(BaseModel):
camel_case = CharField(column_name='camelCase')
class Meta:
table_name = 'camelCaseTableName'
``` | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1900/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1900/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/1900",
"html_url": "https://github.com/coleifer/peewee/pull/1900",
"diff_url": "https://github.com/coleifer/peewee/pull/1900.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/1900.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/1899 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1899/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1899/comments | https://api.github.com/repos/coleifer/peewee/issues/1899/events | https://github.com/coleifer/peewee/pull/1899 | 427,384,984 | MDExOlB1bGxSZXF1ZXN0MjY2MDA1OTI1 | 1,899 | Make sure pwiz take care of camelCase | {
"login": "eggachecat",
"id": 18111656,
"node_id": "MDQ6VXNlcjE4MTExNjU2",
"avatar_url": "https://avatars.githubusercontent.com/u/18111656?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eggachecat",
"html_url": "https://github.com/eggachecat",
"followers_url": "https://api.github.com/users/eggachecat/followers",
"following_url": "https://api.github.com/users/eggachecat/following{/other_user}",
"gists_url": "https://api.github.com/users/eggachecat/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eggachecat/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eggachecat/subscriptions",
"organizations_url": "https://api.github.com/users/eggachecat/orgs",
"repos_url": "https://api.github.com/users/eggachecat/repos",
"events_url": "https://api.github.com/users/eggachecat/events{/privacy}",
"received_events_url": "https://api.github.com/users/eggachecat/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2019-03-31T11:12:14 | 2019-03-31T11:30:25 | 2019-03-31T11:29:26 | NONE | null | Add a feature and test cases so that if the original table/column name is camelCase, the name autogenerated by pwiz will be snake_case.
Let's say we have a table named `CamelCaseTableName` that has a column named `camelCase`. With `pwiz`:
## Before
```python
class Camelcasetablename(BaseModel):
camelcase = CharField(column_name='camelCase')
class Meta:
table_name = 'camelCaseTableName'
```
## After
```python
class CamelCaseTableName(BaseModel):
camel_case = CharField(column_name='camelCase')
class Meta:
table_name = 'camelCaseTableName'
```
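The conversion itself can be sketched in a few lines (an illustrative helper, not the exact implementation in this PR):

```python
import re


def make_snake_case(name):
    # Put an underscore before each capital letter that follows a
    # lower-case letter or digit, then lower-case the whole string:
    # 'camelCaseTableName' -> 'camel_case_table_name'.
    return re.sub(r'(?<=[a-z0-9])([A-Z])', r'_\1', name).lower()


print(make_snake_case('camelCaseTableName'))  # -> camel_case_table_name
print(make_snake_case('camelCase'))           # -> camel_case
print(make_snake_case('already_snake'))       # -> already_snake
```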
`I will make a new PR since I noticed that there's some unnecessary reformatting thanks to pycharm... ` | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1899/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/1899",
"html_url": "https://github.com/coleifer/peewee/pull/1899",
"diff_url": "https://github.com/coleifer/peewee/pull/1899.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/1899.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/1898 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1898/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1898/comments | https://api.github.com/repos/coleifer/peewee/issues/1898/events | https://github.com/coleifer/peewee/issues/1898 | 426,827,128 | MDU6SXNzdWU0MjY4MjcxMjg= | 1,898 | Use Result from Select Query will create a connection | {
"login": "DoubleX69",
"id": 28057284,
"node_id": "MDQ6VXNlcjI4MDU3Mjg0",
"avatar_url": "https://avatars.githubusercontent.com/u/28057284?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DoubleX69",
"html_url": "https://github.com/DoubleX69",
"followers_url": "https://api.github.com/users/DoubleX69/followers",
"following_url": "https://api.github.com/users/DoubleX69/following{/other_user}",
"gists_url": "https://api.github.com/users/DoubleX69/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DoubleX69/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DoubleX69/subscriptions",
"organizations_url": "https://api.github.com/users/DoubleX69/orgs",
"repos_url": "https://api.github.com/users/DoubleX69/repos",
"events_url": "https://api.github.com/users/DoubleX69/events{/privacy}",
"received_events_url": "https://api.github.com/users/DoubleX69/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Select queries are lazily evaluated in peewee. How else do you think it would work, given that peewee uses method chaining to build a query?\r\n\r\nSo when you are building your query -- nothing happens until you:\r\n\r\n* iterate over it\r\n* call .get() / .first() / .count() etc on it\r\n* explicitly call .execute()\r\n\r\nFor your example, you could just coerce to a list if you wanted."
] | 2019-03-29T05:46:48 | 2019-03-29T16:28:32 | 2019-03-29T16:28:32 | NONE | null | Example:
```python
class User(BaseModel):
username = CharField(unique=True)
class Meta:
db_table = 'user'
@classmethod
def query_user(cls):
with DATABASE.connection_context():
query = User.select()
return query
q = User.query_user()
print(DATABASE.is_closed())
for i in q:
print(i.username)
print(DATABASE.is_closed()) ##here will be False
```
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1898/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1897 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1897/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1897/comments | https://api.github.com/repos/coleifer/peewee/issues/1897/events | https://github.com/coleifer/peewee/issues/1897 | 426,690,164 | MDU6SXNzdWU0MjY2OTAxNjQ= | 1,897 | Can you elaborate more on why AutoField doesn't add 'AUTOINCREMENT' to the column definition in SQLite? | {
"login": "dougthor42",
"id": 5386897,
"node_id": "MDQ6VXNlcjUzODY4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5386897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dougthor42",
"html_url": "https://github.com/dougthor42",
"followers_url": "https://api.github.com/users/dougthor42/followers",
"following_url": "https://api.github.com/users/dougthor42/following{/other_user}",
"gists_url": "https://api.github.com/users/dougthor42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dougthor42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dougthor42/subscriptions",
"organizations_url": "https://api.github.com/users/dougthor42/orgs",
"repos_url": "https://api.github.com/users/dougthor42/repos",
"events_url": "https://api.github.com/users/dougthor42/events{/privacy}",
"received_events_url": "https://api.github.com/users/dougthor42/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You can use this, from the sqlite extension: http://docs.peewee-orm.com/en/latest/peewee/sqlite_ext.html#AutoIncrementField\r\n\r\nThe info in #147 is for a very old version."
] | 2019-03-28T20:21:59 | 2019-03-28T21:12:00 | 2019-03-28T21:12:00 | NONE | null | Let me preface by saying that I was originally going to ask if this was intended behavior, but I did *at least some* research 😆.
I'm well aware of how SQLite handles `INTEGER PRIMARY KEY` (alias for the int64 `ROWID`) and how adding `AUTOINCREMENT` modifies that. https://www.sqlite.org/autoinc.html which you and others have linked many times before.
In [#805](https://github.com/coleifer/peewee/issues/805#issuecomment-168504439) you say:
> SQLite and autoincrement is more involved than you might think. There's a reason peewee doesn't specify AUTOINCREMENT with SQLite ... Peewee instead will use the unique rowid which uses the algorithm MAX(rowid) + 1 to generate new rowids, rather than ensuring they are unique across deletions, etc.
In [#147](https://github.com/coleifer/peewee/issues/147#issuecomment-12056456):
> Because SQLite incurs additional overhead to guarantee monotonically incrementing PKs, I will keep the default behavior which simply guarantees they are unique.
Questions:
+ Is this still true in peewee version 3.8 and above?
+ If so, can you expand on the reasoning for that?
+ Is there a way to force `AUTOINCREMENT` to be added, cpu/memory/disk overhead be damned?
+ In #147 you tell someone to use the `fields` arg of `SqliteDatabase`, but that doesn't work (`TypeError: 'fields' is an invalid keyword argument for this function`)
+ I haven't found anything in the docs ([sqlite extensions](http://docs.peewee-orm.com/en/latest/peewee/sqlite_ext.html#sqlite-ext), [primary keys](http://docs.peewee-orm.com/en/latest/peewee/models.html#primary-keys-composite-keys-and-other-tricks)) that mentions it. Perhaps I'm just blind?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1897/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1896 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1896/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1896/comments | https://api.github.com/repos/coleifer/peewee/issues/1896/events | https://github.com/coleifer/peewee/issues/1896 | 426,002,590 | MDU6SXNzdWU0MjYwMDI1OTA= | 1,896 | Playhouse migrate MySQL DESCRIBE does not escape the table name | {
"login": "arnulfojr",
"id": 1023023,
"node_id": "MDQ6VXNlcjEwMjMwMjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1023023?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arnulfojr",
"html_url": "https://github.com/arnulfojr",
"followers_url": "https://api.github.com/users/arnulfojr/followers",
"following_url": "https://api.github.com/users/arnulfojr/following{/other_user}",
"gists_url": "https://api.github.com/users/arnulfojr/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arnulfojr/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arnulfojr/subscriptions",
"organizations_url": "https://api.github.com/users/arnulfojr/orgs",
"repos_url": "https://api.github.com/users/arnulfojr/repos",
"events_url": "https://api.github.com/users/arnulfojr/events{/privacy}",
"received_events_url": "https://api.github.com/users/arnulfojr/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks, I don't know how I missed that before. Fixed.",
"Thanks for the quick fix!"
] | 2019-03-27T14:44:56 | 2019-05-18T08:39:10 | 2019-03-27T19:31:14 | NONE | null | Hello,
Logs from `migrator.add_column` when the table name is not escaped, e.g. `order`:
```bash
('ALTER TABLE `order` ADD COLUMN `type` VARCHAR(255)', [])
[2019-03-27 14:37:27 +0000] [1] [DEBUG] ('UPDATE `order` SET `type` = %s', ['RECURRING_FEE'])
[2019-03-27 14:37:27 +0000] [1] [DEBUG] ('DESCRIBE order;', None)
[2019-03-27 14:37:27 +0000] [1] [ERROR] (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'order' at line 1")
```
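For illustration, the escaping rule MySQL expects is simple enough to sketch in a few lines — this is not peewee's actual quoting code (peewee handles quoting through the database's identifier settings), just a standalone demonstration of backtick-quoting:

```python
def quote_mysql_ident(name):
    """Wrap an identifier in backticks, doubling any embedded backticks."""
    return "`%s`" % name.replace("`", "``")

print(quote_mysql_ident("order"))       # `order`
print(quote_mysql_ident("weird`name"))  # `weird``name`
```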
This could be easily avoided by doing ```DESCRIBE `order`;``` in:
https://github.com/coleifer/peewee/blob/3e00b633ef4733df34d39e5c789a8b6826e3942f/playhouse/migrate.py#L502 | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1896/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1895 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1895/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1895/comments | https://api.github.com/repos/coleifer/peewee/issues/1895/events | https://github.com/coleifer/peewee/pull/1895 | 425,687,795 | MDExOlB1bGxSZXF1ZXN0MjY0NzI1NDAz | 1,895 | Update app.py | {
"login": "abhikrni",
"id": 35332154,
"node_id": "MDQ6VXNlcjM1MzMyMTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/35332154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abhikrni",
"html_url": "https://github.com/abhikrni",
"followers_url": "https://api.github.com/users/abhikrni/followers",
"following_url": "https://api.github.com/users/abhikrni/following{/other_user}",
"gists_url": "https://api.github.com/users/abhikrni/gists{/gist_id}",
"starred_url": "https://api.github.com/users/abhikrni/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/abhikrni/subscriptions",
"organizations_url": "https://api.github.com/users/abhikrni/orgs",
"repos_url": "https://api.github.com/users/abhikrni/repos",
"events_url": "https://api.github.com/users/abhikrni/events{/privacy}",
"received_events_url": "https://api.github.com/users/abhikrni/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Huh?"
] | 2019-03-26T23:07:40 | 2019-03-27T03:56:01 | 2019-03-27T03:56:00 | NONE | null | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1895/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/1895",
"html_url": "https://github.com/coleifer/peewee/pull/1895",
"diff_url": "https://github.com/coleifer/peewee/pull/1895.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/1895.patch",
"merged_at": null
} |
|
https://api.github.com/repos/coleifer/peewee/issues/1894 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1894/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1894/comments | https://api.github.com/repos/coleifer/peewee/issues/1894/events | https://github.com/coleifer/peewee/pull/1894 | 425,052,590 | MDExOlB1bGxSZXF1ZXN0MjY0MjM4MjUw | 1,894 | Add support for Postgres's plainto_tsquery | {
"login": "kkinder",
"id": 1115018,
"node_id": "MDQ6VXNlcjExMTUwMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1115018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kkinder",
"html_url": "https://github.com/kkinder",
"followers_url": "https://api.github.com/users/kkinder/followers",
"following_url": "https://api.github.com/users/kkinder/following{/other_user}",
"gists_url": "https://api.github.com/users/kkinder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kkinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkinder/subscriptions",
"organizations_url": "https://api.github.com/users/kkinder/orgs",
"repos_url": "https://api.github.com/users/kkinder/repos",
"events_url": "https://api.github.com/users/kkinder/events{/privacy}",
"received_events_url": "https://api.github.com/users/kkinder/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I've made a slightly more compact patch for the actual implementation, plus some improvements to the docs and added a couple tests:\r\n\r\nfa8fc729f37b1ab4da420fb51f8bbc90d4074fb7"
] | 2019-03-25T18:28:10 | 2019-03-25T21:21:14 | 2019-03-25T21:21:14 | NONE | null | peewee's `TSVectorField` helpfully includes a `match` method to add a `tsquery` to a query using `to_tsquery`. However, `to_tsquery` is one of two ways you can convert a string to a `tsquery`, per the [Postgres docs](https://www.postgresql.org/docs/9.1/textsearch-controls.html):
> PostgreSQL provides the functions to_tsquery and plainto_tsquery for converting a query to the tsquery data type. to_tsquery offers access to more features than plainto_tsquery, but is less forgiving about its input.
This pull request adds a `use_plain` parameter to `TSVectorField.match`, thus letting the programmer choose whether to use `to_tsquery` or `plainto_tsquery`. That's especially helpful when you're writing an application with casual user input for a search query. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1894/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/1894",
"html_url": "https://github.com/coleifer/peewee/pull/1894",
"diff_url": "https://github.com/coleifer/peewee/pull/1894.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/1894.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/1893 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1893/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1893/comments | https://api.github.com/repos/coleifer/peewee/issues/1893/events | https://github.com/coleifer/peewee/issues/1893 | 424,650,498 | MDU6SXNzdWU0MjQ2NTA0OTg= | 1,893 | ForeignKeyField(..., primary_key=True) | {
"login": "fredrikchabot",
"id": 4558741,
"node_id": "MDQ6VXNlcjQ1NTg3NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/4558741?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fredrikchabot",
"html_url": "https://github.com/fredrikchabot",
"followers_url": "https://api.github.com/users/fredrikchabot/followers",
"following_url": "https://api.github.com/users/fredrikchabot/following{/other_user}",
"gists_url": "https://api.github.com/users/fredrikchabot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fredrikchabot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fredrikchabot/subscriptions",
"organizations_url": "https://api.github.com/users/fredrikchabot/orgs",
"repos_url": "https://api.github.com/users/fredrikchabot/repos",
"events_url": "https://api.github.com/users/fredrikchabot/events{/privacy}",
"received_events_url": "https://api.github.com/users/fredrikchabot/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Just add a `property` or whatever:\r\n\r\n```python\r\nclass Rone(Model):\r\n @property\r\n def r2(self):\r\n return self.rtwo.get()\r\n```"
] | 2019-03-24T19:30:24 | 2019-03-25T03:05:10 | 2019-03-25T03:05:10 | NONE | null | When defining a ForeignKeyField who is also the Primary key for the table, It would make sense to have the backref to be a reference to the record and not a select instance as there can be only one record it is referring to.
```python
class Rone(BaseModel):
    xxx = CharField(primary_key=True)
    name = CharField(null=True)

class Rtwo(BaseModel):
    xxx = ForeignKeyField(Rone, db_column='xxx', backref='rtwo', primary_key=True)
    num = IntegerField(null=True)
```
Works as expected:
```python
rtwo = Rtwo.get(xxx='monkey')
print(rtwo.xxx.name)
```
Works differently than I would like:
```python
rone = Rone.get(xxx='monkey')
print(rone.rtwo.num)        # doesn't work
print(rone.rtwo.get().num)  # works
```
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1893/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1892 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1892/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1892/comments | https://api.github.com/repos/coleifer/peewee/issues/1892/events | https://github.com/coleifer/peewee/issues/1892 | 424,497,585 | MDU6SXNzdWU0MjQ0OTc1ODU= | 1,892 | Does "Model.ForeignKey.id IN subquery" work? | {
"login": "PicoSushi",
"id": 37548801,
"node_id": "MDQ6VXNlcjM3NTQ4ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/37548801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PicoSushi",
"html_url": "https://github.com/PicoSushi",
"followers_url": "https://api.github.com/users/PicoSushi/followers",
"following_url": "https://api.github.com/users/PicoSushi/following{/other_user}",
"gists_url": "https://api.github.com/users/PicoSushi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PicoSushi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PicoSushi/subscriptions",
"organizations_url": "https://api.github.com/users/PicoSushi/orgs",
"repos_url": "https://api.github.com/users/PicoSushi/repos",
"events_url": "https://api.github.com/users/PicoSushi/events{/privacy}",
"received_events_url": "https://api.github.com/users/PicoSushi/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You want \r\n\r\n```python\r\n# Moved to its own line for clarity.\r\nlectures = Lecture.select(Lecture.id).order_by(Lecture.created.desc()).limit(5)\r\nquery = (Attend\r\n .select()\r\n .where(Attend.lecture.in_(lectures)))\r\n```\r\n\r\nIf you plan on accessing attributes on either the `Lecture` or the `Student`, you can select them and then you will not incur an additional query to resolve the foreign-key lookup:\r\n\r\n```python\r\nlectures = Lecture.select(Lecture.id).order_by(Lecture.created.desc()).limit(5)\r\nquery = (Attend\r\n .select(Attend, Lecture, Student)\r\n .join_from(Attend, Lecture)\r\n .join_from(Attend, Student)\r\n .where(Attend.lecture.in_(lectures)))\r\n```\r\n\r\nSince you're joining, you could also just do the join on the subquery!\r\n\r\n```python\r\nlectures = Lecture.select(Lecture.id Lecture.name).order_by(Lecture.created.desc()).limit(5)\r\nquery = (Attend\r\n .select(Attend, Student, lectures.c.name)\r\n .join_from(Attend, Student)\r\n .join_from(Attend, lectures, on=(Attend.lecture == lecture.c.id))\r\n\r\n# This will only execute one query -- the foreign-keys are already loaded.\r\nfor attend in query:\r\n print(attend.student.name, attend.lecture.name)\r\n```",
"It worked! Thanks for quick and clear answer!"
] | 2019-03-23T12:52:04 | 2019-03-24T07:00:32 | 2019-03-23T15:01:59 | NONE | null | I wrote code as below...
```python
import datetime
from peewee import SqliteDatabase, Model, TextField, DateTimeField, ForeignKeyField, IntegerField, JOIN
DATABASE = "example.db"
db = SqliteDatabase(DATABASE)
class BaseModel(Model):
class Meta:
legacy_table_names = False
database = db
class Student(BaseModel):
name = TextField(index=True, unique=True)
class Lecture(BaseModel):
name = TextField(index=True, unique=True)
created = DateTimeField(default=datetime.datetime.now)
class Attend(BaseModel):
student = ForeignKeyField(Student)
lecture = ForeignKeyField(Lecture)
def attendances():
query = Attend.select(
).where(Attend.lecture.id << (Lecture.select(Lecture.id).order_by(Lecture.created.desc()).limit(5)))
print(query.sql())
return query
def init_db():
db.connect()
db.create_tables([Student, Lecture, Attend])
def main():
init_db()
for attend in attendances():
print(attend)
if __name__ == '__main__':
main()
```
And got the following output and error:
```
('SELECT "t1"."id", "t1"."student_id", "t1"."lecture_id" FROM "attend" AS "t1" WHERE ("t2"."id" IN (SELECT "t2"."id" FROM "lecture" AS "t2" ORDER BY "t2"."created" DESC LIMIT ?))', [5])
Traceback (most recent call last):
File "/home/picosushi/pyenv/py3/lib/python3.7/site-packages/peewee.py", line 2714, in execute_sql
cursor.execute(sql, params or ())
sqlite3.OperationalError: no such column: t2.id
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "m2.py", line 48, in <module>
main()
File "m2.py", line 43, in main
for attend in attendances():
File "/home/picosushi/pyenv/py3/lib/python3.7/site-packages/peewee.py", line 6032, in __iter__
self.execute()
File "/home/picosushi/pyenv/py3/lib/python3.7/site-packages/peewee.py", line 1625, in inner
return method(self, database, *args, **kwargs)
File "/home/picosushi/pyenv/py3/lib/python3.7/site-packages/peewee.py", line 1696, in execute
return self._execute(database)
File "/home/picosushi/pyenv/py3/lib/python3.7/site-packages/peewee.py", line 1847, in _execute
cursor = database.execute(self)
File "/home/picosushi/pyenv/py3/lib/python3.7/site-packages/peewee.py", line 2727, in execute
return self.execute_sql(sql, params, commit=commit)
File "/home/picosushi/pyenv/py3/lib/python3.7/site-packages/peewee.py", line 2721, in execute_sql
self.commit()
File "/home/picosushi/pyenv/py3/lib/python3.7/site-packages/peewee.py", line 2512, in __exit__
reraise(new_type, new_type(*exc_args), traceback)
File "/home/picosushi/pyenv/py3/lib/python3.7/site-packages/peewee.py", line 186, in reraise
raise value.with_traceback(tb)
File "/home/picosushi/pyenv/py3/lib/python3.7/site-packages/peewee.py", line 2714, in execute_sql
cursor.execute(sql, params or ())
peewee.OperationalError: no such column: t2.id
```
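The SQL shape this query is aiming for — an `IN` over a self-contained subquery with `ORDER BY ... LIMIT` — is itself fine, which can be sanity-checked with the stdlib `sqlite3` module (the table and column names below are simplified stand-ins for the models above):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lecture (id INTEGER PRIMARY KEY, created TEXT);
CREATE TABLE attend (id INTEGER PRIMARY KEY, lecture_id INTEGER);
INSERT INTO lecture (id, created) VALUES
    (1, '2019-01-01'), (2, '2019-01-02'), (3, '2019-01-03');
INSERT INTO attend (lecture_id) VALUES (1), (2), (3), (3);
""")

# Attend rows whose lecture is among the two most recently created lectures.
rows = conn.execute("""
    SELECT id, lecture_id FROM attend
    WHERE lecture_id IN (
        SELECT id FROM lecture ORDER BY created DESC LIMIT 2)
""").fetchall()
print(rows)  # three rows, all referencing lectures 2 and 3
```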
I think the mistake is in my peewee query. Any help is welcome. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1892/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1891 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1891/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1891/comments | https://api.github.com/repos/coleifer/peewee/issues/1891/events | https://github.com/coleifer/peewee/issues/1891 | 424,226,437 | MDU6SXNzdWU0MjQyMjY0Mzc= | 1,891 | insert_many fails on sqlite 3.9.2 | {
"login": "mining8",
"id": 24619927,
"node_id": "MDQ6VXNlcjI0NjE5OTI3",
"avatar_url": "https://avatars.githubusercontent.com/u/24619927?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mining8",
"html_url": "https://github.com/mining8",
"followers_url": "https://api.github.com/users/mining8/followers",
"following_url": "https://api.github.com/users/mining8/following{/other_user}",
"gists_url": "https://api.github.com/users/mining8/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mining8/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mining8/subscriptions",
"organizations_url": "https://api.github.com/users/mining8/orgs",
"repos_url": "https://api.github.com/users/mining8/repos",
"events_url": "https://api.github.com/users/mining8/events{/privacy}",
"received_events_url": "https://api.github.com/users/mining8/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Holy shit, Sqlite 3.6? That's like 10 years old.\r\n\r\nIt doesn't support bulk insert, so you will need to use a loop in this case."
] | 2019-03-22T14:06:43 | 2019-03-22T14:58:42 | 2019-03-22T14:58:41 | NONE | null | hi, Thank you for your great project.
On peewee version 3.9.2, I'm trying to use insert_many with sqlite 3.6.21, python 2.7.10:
Test code:
```python
from peewee import *

db = SqliteDatabase('test.db')

class BaseModel(Model):
    class Meta:
        database = db

class TestDB(BaseModel):
    username = CharField(default='')

def Test2():
    usernames = ['charlie', 'huedddy', 'peewee', 'mickey']  # only 1 item is success
    row_dicts = ({'username': username} for username in usernames)
    # Insert 4 new rows
    TestDB.insert_many(row_dicts).execute()

db.connect()
db.create_tables([TestDB])
Test2()
```
Inserting a single row succeeds, but inserting multiple rows produces the following error:
```
File "D:/Code/py/main.py", line 79, in Test
  TestDB.insert_many(row_dicts).execute()
File "C:\Python27\lib\site-packages\peewee.py", line 1698, in inner
  return method(self, database, *args, **kwargs)
File "C:\Python27\lib\site-packages\peewee.py", line 1769, in execute
  return self._execute(database)
File "C:\Python27\lib\site-packages\peewee.py", line 2473, in _execute
  return super(Insert, self)._execute(database)
File "C:\Python27\lib\site-packages\peewee.py", line 2236, in _execute
  cursor = database.execute(self)
File "C:\Python27\lib\site-packages\peewee.py", line 2848, in execute
  return self.execute_sql(sql, params, commit=commit)
File "C:\Python27\lib\site-packages\peewee.py", line 2839, in execute_sql
  raise
File "C:\Python27\lib\site-packages\peewee.py", line 2627, in __exit__
  reraise(new_type, new_type(*exc_args), traceback)
File "C:\Python27\lib\site-packages\peewee.py", line 2835, in execute_sql
  cursor.execute(sql, params or ())
peewee.OperationalError: near ",": syntax error
```
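For context, the failing statement uses multi-row `INSERT ... VALUES (...), (...)` syntax, which SQLite only gained in 3.7.11; on older servers the row-at-a-time fallback always works. A plain DB-API sketch (not peewee code) of that fallback, using `executemany` to run a single-row INSERT once per tuple:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE testdb (id INTEGER PRIMARY KEY, username TEXT DEFAULT '')")

usernames = ['charlie', 'huedddy', 'peewee', 'mickey']
# executemany repeats one single-row INSERT per parameter tuple, so it does
# not depend on the multi-VALUES syntax introduced in SQLite 3.7.11.
conn.executemany("INSERT INTO testdb (username) VALUES (?)",
                 [(u,) for u in usernames])
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM testdb").fetchone()[0])  # 4
```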
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1891/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1890 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1890/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1890/comments | https://api.github.com/repos/coleifer/peewee/issues/1890/events | https://github.com/coleifer/peewee/issues/1890 | 424,057,249 | MDU6SXNzdWU0MjQwNTcyNDk= | 1,890 | Order by FIELD function of MySQL | {
"login": "SparkleBo",
"id": 31443274,
"node_id": "MDQ6VXNlcjMxNDQzMjc0",
"avatar_url": "https://avatars.githubusercontent.com/u/31443274?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SparkleBo",
"html_url": "https://github.com/SparkleBo",
"followers_url": "https://api.github.com/users/SparkleBo/followers",
"following_url": "https://api.github.com/users/SparkleBo/following{/other_user}",
"gists_url": "https://api.github.com/users/SparkleBo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SparkleBo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SparkleBo/subscriptions",
"organizations_url": "https://api.github.com/users/SparkleBo/orgs",
"repos_url": "https://api.github.com/users/SparkleBo/repos",
"events_url": "https://api.github.com/users/SparkleBo/events{/privacy}",
"received_events_url": "https://api.github.com/users/SparkleBo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You should be able to just use `fn.FIELD()`, since FIELD is a sql function:\r\n\r\n```python\r\nmy_query = User.select().order_by(fn.FIELD(User.id, 4, 1, 3, 2))\r\n```"
] | 2019-03-22T05:50:57 | 2019-03-22T14:57:14 | 2019-03-22T14:57:14 | NONE | null | MySQL supports the FIELD function to sort results in a specified order, but I didn't find how to use it in peewee. Does peewee support this? | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1890/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1889 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1889/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1889/comments | https://api.github.com/repos/coleifer/peewee/issues/1889/events | https://github.com/coleifer/peewee/issues/1889 | 423426903 | MDU6SXNzdWU0MjM0MjY5MDM= | 1889 | Peewee doesn't resolve DeferredForeignKey if class being referred to is already instantiated | {
"login": "kkinder",
"id": 1115018,
"node_id": "MDQ6VXNlcjExMTUwMTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1115018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kkinder",
"html_url": "https://github.com/kkinder",
"followers_url": "https://api.github.com/users/kkinder/followers",
"following_url": "https://api.github.com/users/kkinder/following{/other_user}",
"gists_url": "https://api.github.com/users/kkinder/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kkinder/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kkinder/subscriptions",
"organizations_url": "https://api.github.com/users/kkinder/orgs",
"repos_url": "https://api.github.com/users/kkinder/repos",
"events_url": "https://api.github.com/users/kkinder/events{/privacy}",
"received_events_url": "https://api.github.com/users/kkinder/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"> The reason is that when you're splitting up a dozen or more tables, where each model gets its own file, it becomes too hard to figure out what's created first and what isn't. It's far easier to always use DeferredForeignKey and put dependent imports at the bottom.\r\n\r\nWhat you're saying, in effect, is \"I have a dependency graph but I'm too lazy to think about it\".\r\n\r\nYou have foreign-key constraints in the database, foreign-key dependencies between model classes, and import-order dependencies between Python modules. If you get the import-order figured out, you should have no trouble getting the foreign-keys to work. If this seems too hard, I'd suggest you have too many Python modules, and you might consolidate them.\r\n\r\nThat being said, this feels kinda like a bug. The reason why Peewee doesn't resolve these is because Peewee doesn't keep an internal registry of every model subclass. Deferred foreign keys are resolved as a side-effect during model class creation, which typically happens at import time.\r\n\r\nTo fix this, I'd think that Peewee (or the DeferredForeignKey class) would need to track every model class that is defined, which doesn't seem great.",
"> What you're saying, in effect, is \"I have a dependency graph but I'm too lazy to think about it\".\r\n\r\nWell, we're talking about the *order of imports* mattering. That is to say, one of these might work, while the other will not:\r\n\r\n```\r\nfrom artist import Artist\r\nfrom album import Album\r\n```\r\n\r\nvs:\r\n\r\n```\r\nfrom album import Album\r\nfrom artist import Artist\r\n```\r\n\r\nThat can result in a situation where all your unittests might clear, and then when you sort your imports alphabetically, it breaks, for example. Just broadly speaking, it's my feeling that the order you import modules shouldn't matter.\r\n\r\n> To fix this, I'd think that Peewee (or the DeferredForeignKey class) would need to track every model class that is defined, which doesn't seem great.\r\n\r\nYeah, that's more or less the tree I was barking up too, and it doesn't seem great.\r\n\r\nFor the moment, my actual solution is, in the main file, is this:\r\n\r\n```\r\nfor table in all_tables:\r\n DeferredForeignKey.resolve(table)\r\n```\r\n\r\nThat verifies all the things are resolved.\r\n\r\n*As an aside:* Arguably it's a separate bug that failing to resolve a DeferredForeignKey column results in a syntax error from the database because the column type definition is missing. But to get this to happen, I had to be using uuid's.\r\n",
"> Well, we're talking about the order of imports mattering. That is to say, one of these might work, while the other will not\r\n\r\nOnly because the resolution of the dependency graph has been pushed til later. Ordinarily, using regular `ForeignKeyField`, you would have to resolve your dependencies between the modules themselves in order to declare the `ForeignKeyField()` in the first place.\r\n\r\n`DeferredForeignKey` really exists for one reason: to make it possible to declare circular foreign-key dependencies. This is described where it is documented,\r\n\r\n* http://docs.peewee-orm.com/en/latest/peewee/api.html#DeferredForeignKey\r\n* http://docs.peewee-orm.com/en/latest/peewee/models.html#circular-foreign-key-dependencies\r\n\r\nI'd suggest ironing out your module organization.\r\n\r\nConsider if this were not \"peewee\" but *literally any other python library* and you have all these interdependencies between the classes. You would need to resolve these types of issues if you wanted to break them out into a ton of little modules.",
"I think, however, you're over-simplifying the matter as inter-dependency between classes. It is, but only at create time.\r\n\r\nIf you have a fairly modular library, or a handful of libraries, managing dependencies is one thing, but managing import order is another, especially when you can't control in what order the consumers of your library might choose to import them. I'd argue that, just like how you maintain a list (on a class attribute) in one direction for deferred foreign keys, it would make sense to do it in the other way.\r\n\r\nHaving said that, if the official Peewee solution is just to be careful about in what order you import modules, and since most editors and programmers just go with alphabetical order (I think that's even in PEP8), I'll stick with manually resolving them inside the library to avoid the ambiguity. Thanks for the thorough reply!",
"Peewee model classes are apples-to-apples the same as any other python classes, functions, etc. If you have objects that depend on other objects within your collection of modules, you'll have to resolve those dependencies. Peewee models are the same as any other python objects in this regard.\r\n\r\nThe problem `DeferredForeignKey` was designed to solve is mutual dependence.\r\n\r\nUsing `DeferredForeignKey` to paste over a fragmented or poorly-thought-out dependency graph is possible, but never advisable.\r\n\r\nFor what its worth, when I have two modules that are somehow interdependent, the solution is usually to either move them into a single module - or create a third module that imports and resolves both deps.\r\n\r\nDiscussion with real examples available here: http://charlesleifer.com/blog/structuring-flask-apps-a-how-to-for-those-coming-from-django/",
"Fair enough. I think your update to the docs would have also kept me going down the rabbit hole I originally went down, which was one of getting sql syntax errors because the `DeferredForeignKey` was never resolved.\r\n\r\nWhat I might say is that perhaps, *as a feature* some kind of `LazyForeignKey` that actually does what I'm describing would be useful. At least in my opinion."
] | 2019-03-20T19:12:05 | 2019-03-25T21:52:31 | 2019-03-21T19:34:59 | NONE | null | Here's a quick example:
```
# models.py
import peewee
database = peewee.PostgresqlDatabase("foobar")
database.execute_sql('CREATE EXTENSION IF NOT EXISTS "uuid-ossp"')
class Artist(peewee.Model):
id = peewee.UUIDField(primary_key=True)
name = peewee.CharField()
class Meta:
database = database
class Album(peewee.Model):
id = peewee.UUIDField(primary_key=True)
artist = peewee.DeferredForeignKey('Artist')
title = peewee.CharField()
class Meta:
database = database
print(peewee.DeferredForeignKey._unresolved) # album.artist is unresolved
database.create_tables([Album, Artist])
```
In this instance, because Peewee *must* resolve the foreign key to figure out the type of the column, you'll get an exception:
```
peewee.ProgrammingError: syntax error at or near "NOT"
LINE 1: ...album" ("id" UUID NOT NULL PRIMARY KEY, "artist" NOT NULL, ...
```
The reason is that this column was never resolved: `artist = peewee.DeferredForeignKey('Artist')`. It's still in the `DeferredForeignKey._unresolved` set, thus resulting in a syntax error upon table creation.
You might ask, if the table is already created as a class, why not use ForeignKey directly? The reason is that when you're splitting up a dozen or more tables, where each model gets its own file, it becomes too hard to figure out what's created first and what isn't. It's far easier to always use DeferredForeignKey and put dependent imports at the bottom.
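The check-on-creation strategy being suggested can be sketched library-agnostically — none of the following is peewee code, just a minimal registry where a deferred reference resolves immediately if its target class already exists, and lazily otherwise:

```python
class DeferredRef:
    _registry = {}    # class name -> class, filled as classes are defined
    _unresolved = []  # refs still waiting for their target class

    def __init__(self, target_name):
        self.target_name = target_name
        self.target = None
        if target_name in self._registry:
            # Target already defined: resolve immediately.
            self.target = self._registry[target_name]
        else:
            self._unresolved.append(self)

    @classmethod
    def register(cls, klass):
        cls._registry[klass.__name__] = klass
        # Resolve any refs that were declared before their target existed.
        for ref in [r for r in cls._unresolved if r.target_name == klass.__name__]:
            ref.target = klass
            cls._unresolved.remove(ref)
        return klass

@DeferredRef.register
class Artist:
    pass

ref_after = DeferredRef("Artist")   # resolves immediately
ref_before = DeferredRef("Album")   # pending until Album is defined

@DeferredRef.register
class Album:
    pass

print(ref_after.target is Artist, ref_before.target is Album)  # True True
```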
I think the solution would be to have DeferredForeignKey check to see whether the class being referred already exists and if it does, immediately resolve it. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1889/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1888 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1888/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1888/comments | https://api.github.com/repos/coleifer/peewee/issues/1888/events | https://github.com/coleifer/peewee/issues/1888 | 423,221,043 | MDU6SXNzdWU0MjMyMjEwNDM= | 1,888 | How to realize `count + distinct + group_by` query? | {
"login": "hustlibraco",
"id": 5344453,
"node_id": "MDQ6VXNlcjUzNDQ0NTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5344453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hustlibraco",
"html_url": "https://github.com/hustlibraco",
"followers_url": "https://api.github.com/users/hustlibraco/followers",
"following_url": "https://api.github.com/users/hustlibraco/following{/other_user}",
"gists_url": "https://api.github.com/users/hustlibraco/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hustlibraco/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hustlibraco/subscriptions",
"organizations_url": "https://api.github.com/users/hustlibraco/orgs",
"repos_url": "https://api.github.com/users/hustlibraco/repos",
"events_url": "https://api.github.com/users/hustlibraco/events{/privacy}",
"received_events_url": "https://api.github.com/users/hustlibraco/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Pretty straightforward:\r\n\r\n```python\r\nquery = (PortInfo\r\n .select(PortInfo.state, fn.COUNT(PortInfo.ip.distinct(), PortInfo.port))\r\n .group_by(PortInfo.state))\r\n```\r\n\r\nTranslates into:\r\n\r\n```sql\r\nSELECT \"t1\".\"state\", COUNT(DISTINCT \"t1\".\"ip\", \"t1\".\"port\") \r\nFROM \"portinfo\" AS \"t1\" \r\nGROUP BY \"t1\".\"state\"\r\n```"
] | 2019-03-20T12:18:59 | 2019-03-21T19:38:29 | 2019-03-21T19:38:29 | NONE | null | SQL like this:
```
-- portinfo: ip port state
SELECT
state,
count( DISTINCT ip, port )
FROM
port_info
GROUP BY
state
```
I have tried two methods: using `concat_ws`, and a subquery. Is there a more elegant way to write this SQL in peewee? | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1888/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1887 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1887/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1887/comments | https://api.github.com/repos/coleifer/peewee/issues/1887/events | https://github.com/coleifer/peewee/issues/1887 | 421,504,671 | MDU6SXNzdWU0MjE1MDQ2NzE= | 1,887 | Make None a default when initializing SqliteDatabase | {
"login": "impredicative",
"id": 566650,
"node_id": "MDQ6VXNlcjU2NjY1MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/566650?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/impredicative",
"html_url": "https://github.com/impredicative",
"followers_url": "https://api.github.com/users/impredicative/followers",
"following_url": "https://api.github.com/users/impredicative/following{/other_user}",
"gists_url": "https://api.github.com/users/impredicative/gists{/gist_id}",
"starred_url": "https://api.github.com/users/impredicative/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/impredicative/subscriptions",
"organizations_url": "https://api.github.com/users/impredicative/orgs",
"repos_url": "https://api.github.com/users/impredicative/repos",
"events_url": "https://api.github.com/users/impredicative/events{/privacy}",
"received_events_url": "https://api.github.com/users/impredicative/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"If you don't initialize your `SqliteDatabase` instance with a database parameter, then you have to call `.init()` at a later time. By explicitly requiring `None`, it reinforces the idea that the database is not initialized yet, and the actual initialization is happening later in a call to `.init()`.",
"A very clear exception is raised, and a message is logged too if the user forgets to use `.init()`:\r\n\r\n```\r\n raise InterfaceError('Error, database must be initialized '\r\npeewee.InterfaceError: Error, database must be initialized before opening a connection.\r\n```\r\n\r\nThe above error is IMHO sufficient reinforcement. Why is further reinforcement necessary?",
"You can always subclass it and override\r\n\r\n```python\r\nclass SpecialSqliteDatabase(SqliteDatabase):\r\n def __init__(self, database=None, *args, **kwargs):\r\n super(SpecialSqliteDatabase, self).__init__(database, *args, **kwargs)\r\n```"
] | 2019-03-15T12:53:23 | 2019-03-26T04:24:47 | 2019-03-15T16:55:38 | NONE | null | Instead of me having to do `SqliteDatabase(None)`, can `None` be made a default? I should then just be able to do `SqliteDatabase()`. Currently it raises `TypeError`. It seems needlessly noisy to have to specify `None` explicitly. Thanks. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1887/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1886 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1886/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1886/comments | https://api.github.com/repos/coleifer/peewee/issues/1886/events | https://github.com/coleifer/peewee/issues/1886 | 421,000,589 | MDU6SXNzdWU0MjEwMDA1ODk= | 1,886 | 'column_name' argument of DeferredForeignKey field is ignored | {
"login": "droserasprout",
"id": 10263434,
"node_id": "MDQ6VXNlcjEwMjYzNDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/10263434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/droserasprout",
"html_url": "https://github.com/droserasprout",
"followers_url": "https://api.github.com/users/droserasprout/followers",
"following_url": "https://api.github.com/users/droserasprout/following{/other_user}",
"gists_url": "https://api.github.com/users/droserasprout/gists{/gist_id}",
"starred_url": "https://api.github.com/users/droserasprout/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/droserasprout/subscriptions",
"organizations_url": "https://api.github.com/users/droserasprout/orgs",
"repos_url": "https://api.github.com/users/droserasprout/repos",
"events_url": "https://api.github.com/users/droserasprout/events{/privacy}",
"received_events_url": "https://api.github.com/users/droserasprout/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for the excellent ticket -- I'll take a look.",
"I cannot replicate this issue. I've added some tests which I would have thought would reveal the problem, but they are passing without issue: ea12dadf5f370e470e05c760c02be4e7dca1edcd\r\n\r\nIf you can comment / share a test-case that replicates the problem, I will reopen.",
"@coleifer thanks for your time. I can reproduce this issue in my notebook but it's messy now, will provide updated tests after some digging.\r\n\r\n@NicolasCaous first, you wrong, database is initialized properly. Second, you're pretty rude, man. I have provided just enough data to get maintainer's answer that's not an expected behaviour in some way.",
"At the time of `Task.select` execution `CreateUser` field is not resolved yet. I'd expect `DeferredForeignKey` to act like `IntegerField` (*_id) in those situations. Or maybe some note should be added to the docs.\r\n\r\n```\r\nfrom playhouse import postgres_ext as pw\r\nfrom playhouse.pool import PooledPostgresqlExtDatabase\r\nfrom playhouse.postgres_ext import (ArrayField, AutoField, CharField,\r\n DateTimeField, DeferredForeignKey,\r\n ForeignKeyField, IntegerField)\r\nfrom psycopg2.extras import DictCursor\r\n\r\npg_main = PooledPostgresqlExtDatabase(\r\n None,\r\n timeout=180,\r\n stale_timeout=900,\r\n max_connections=32,\r\n cursor_factory=DictCursor\r\n)\r\n\r\npg_main.init(\r\n database='postgres',\r\n user='postgres',\r\n password=None,\r\n host='172.17.0.5',\r\n port=5432\r\n)\r\n\r\nclass Task(pw.Model):\r\n ID = AutoField(column_name='id')\r\n UpdateDate = DateTimeField(column_name='updatedt', null=True)\r\n CreateDate = DateTimeField(column_name='createdt', null=True)\r\n State = IntegerField(column_name='state', default=0)\r\n CreateUser = DeferredForeignKey('User', column_name='createuser', null=True, field='ID', index=False)\r\n UpdateUser = DeferredForeignKey('User', column_name='updateuser', null=True, field='ID', index=False)\r\n \r\n class Meta:\r\n database = pg_main\r\n table_name = 'tasks'\r\n\r\n# class User(pw.Model):\r\n# ID = AutoField(column_name='id')\r\n# UpdateDate = DateTimeField(column_name='updatedt', null=True)\r\n# CreateDate = DateTimeField(column_name='createdt', null=True)\r\n# State = IntegerField(column_name='state', default=0)\r\n# CreateUser = IntegerField(column_name='createuser', null=True, index=False)\r\n# UpdateUser = IntegerField(column_name='updateuser', null=True, index=False)\r\n \r\n# class Meta:\r\n# database = pg_main\r\n# table_name = 'users'\r\n \r\nwith pg_main:\r\n# User.drop_table()\r\n# User.create_table()\r\n# Task.drop_table()\r\n# Task.create_table()\r\n tsk = Task.select().first()\r\n```",
"Yeah I removed his comments. I was not being sarcastic when I said this was an excellent ticket.\r\n\r\nOk I didn't think about the possibility that you would attempt a query before all the models were imported / eval'd. I'll take another look.",
"Honestly, I kinda feel like attempting to query with an unresolved deferred foreign-key should be an error. But for now, I've just decided to carry forward the column name, so that it works as you'd expect."
] | 2019-03-14T12:51:27 | 2019-03-15T16:54:12 | 2019-03-15T16:53:45 | NONE | null | ```
class Task(Model):
    [...]
    CreateUser = DeferredForeignKey('User', column_name='createuser', null=True, field='ID', index=False)
    UpdateUser = DeferredForeignKey('User', column_name='updateuser', null=True, field='ID', index=False)

>>> Task.select().first()
2019-03-14 12:41:03,109 DEBUG [peewee.py:2823] ('SELECT "t1"."id", "t1"."updatedt", "t1"."createdt", "t1"."state", "t1"."CreateUser", "t1"."UpdateUser", "t1"."name", "t1"."type", "t1"."start_time", "t1"."finish_time", "t1"."deadline", "t1"."result", "t1"."scenario", "t1"."wait_time", "t1"."cases_id", "t1"."locked_by", "t1"."description", "t1"."comments", "t1"."responsible_role", "t1"."previous_tasks" FROM "tasks" AS "t1" LIMIT %s', [1])
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/peewee.py", line 2835, in execute_sql
cursor.execute(sql, params or ())
File "/usr/local/lib/python3.6/site-packages/psycopg2/extras.py", line 141, in execute
return super(DictCursor, self).execute(query, vars)
psycopg2.ProgrammingError: column t1.CreateUser does not exist
LINE 1: ..., "t1"."updatedt", "t1"."createdt", "t1"."state", "t1"."Crea...
^
HINT: Perhaps you meant to reference the column "t1.createuser".
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/site-packages/peewee.py", line 1698, in inner
return method(self, database, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/peewee.py", line 1958, in first
return self.peek(database, n=n)
File "/usr/local/lib/python3.6/site-packages/peewee.py", line 1698, in inner
return method(self, database, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/peewee.py", line 1949, in peek
rows = self.execute(database)[:n]
File "/usr/local/lib/python3.6/site-packages/peewee.py", line 1698, in inner
return method(self, database, *args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/peewee.py", line 1769, in execute
return self._execute(database)
File "/usr/local/lib/python3.6/site-packages/peewee.py", line 1943, in _execute
cursor = database.execute(self)
File "/usr/local/lib/python3.6/site-packages/playhouse/postgres_ext.py", line 464, in execute
cursor = self.execute_sql(sql, params, commit=commit)
File "/usr/local/lib/python3.6/site-packages/peewee.py", line 2842, in execute_sql
self.commit()
File "/usr/local/lib/python3.6/site-packages/peewee.py", line 2627, in __exit__
reraise(new_type, new_type(*exc_args), traceback)
File "/usr/local/lib/python3.6/site-packages/peewee.py", line 178, in reraise
raise value.with_traceback(tb)
File "/usr/local/lib/python3.6/site-packages/peewee.py", line 2835, in execute_sql
cursor.execute(sql, params or ())
File "/usr/local/lib/python3.6/site-packages/psycopg2/extras.py", line 141, in execute
return super(DictCursor, self).execute(query, vars)
peewee.ProgrammingError: column t1.CreateUser does not exist
LINE 1: ..., "t1"."updatedt", "t1"."createdt", "t1"."state", "t1"."Crea...
^
HINT: Perhaps you meant to reference the column "t1.createuser".
```
May be related to #1812 | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1886/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1886/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1885 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1885/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1885/comments | https://api.github.com/repos/coleifer/peewee/issues/1885/events | https://github.com/coleifer/peewee/issues/1885 | 420,896,007 | MDU6SXNzdWU0MjA4OTYwMDc= | 1,885 | ForeignKeyField error | {
"login": "vannesspeng",
"id": 11361345,
"node_id": "MDQ6VXNlcjExMzYxMzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/11361345?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vannesspeng",
"html_url": "https://github.com/vannesspeng",
"followers_url": "https://api.github.com/users/vannesspeng/followers",
"following_url": "https://api.github.com/users/vannesspeng/following{/other_user}",
"gists_url": "https://api.github.com/users/vannesspeng/gists{/gist_id}",
"starred_url": "https://api.github.com/users/vannesspeng/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/vannesspeng/subscriptions",
"organizations_url": "https://api.github.com/users/vannesspeng/orgs",
"repos_url": "https://api.github.com/users/vannesspeng/repos",
"events_url": "https://api.github.com/users/vannesspeng/events{/privacy}",
"received_events_url": "https://api.github.com/users/vannesspeng/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"First: format your code. PLEASE. Here is a link: https://guides.github.com/features/mastering-markdown/\r\n\r\nSecond: you have two foreign-keys pointing to `User`. You need to specify a name for the back-reference that doesn't collide.\r\n\r\nIn your example, I would do something like:\r\n\r\n```python\r\nsender = ForeignKeyField(User, backref='outbox')\r\nreceiver = ForeignKeyField(User, backref='inbox')\r\n```\r\n\r\nThen you would do:\r\n\r\n```python\r\nuser = User(...)\r\nfor message in user.outbox: # Messages where \"user\" is \"message.sender\"\r\n # ...\r\n\r\nfor message in user.inbox: # Messages where \"user\" is \"message.receiver\"\r\n # ...\r\n```"
] | 2019-03-14T08:51:47 | 2019-03-14T17:33:48 | 2019-03-14T17:33:47 | NONE | null | ```python
class Message(BaseModel):
    sender = ForeignKeyField(User, verbose_name="sender")
    receiver = ForeignKeyField(User, verbose_name="receiver")
    message_type = IntegerField(choices=MESSAGES_TYPES, verbose_name="category")
    message = CharField(max_length=500, verbose_name="content", null=True)
    parent_content = CharField(max_length=500, verbose_name="title", null=True)
```
I do not know why. The error is:
```
Traceback (most recent call last):
  File "D:/python-study/gitcode/TornadoForum/Myforum/tools/init_db.py", line 8, in <module>
    from Myforum.apps.messages.models import Message
  File "D:\python-study\gitcode\TornadoForum\Myforum\apps\messages\models.py", line 18, in <module>
    class Message(BaseModel):
  File "C:\Users\sd\.virtualenvs\TornadoForum-e7zC9psu\lib\site-packages\peewee.py", line 4903, in __new__
    field.add_to_class(cls, name)
  File "C:\Users\sd\.virtualenvs\TornadoForum-e7zC9psu\lib\site-packages\peewee.py", line 1523, in add_to_class
    invalid('The related_name of %(field)s ("%(backref)s") '
  File "C:\Users\sd\.virtualenvs\TornadoForum-e7zC9psu\lib\site-packages\peewee.py", line 1517, in invalid
    raise AttributeError(msg % context)
AttributeError: The related_name of message.receiver ("message_set") is already in use by another foreign key.
``` | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1885/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1884 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1884/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1884/comments | https://api.github.com/repos/coleifer/peewee/issues/1884/events | https://github.com/coleifer/peewee/issues/1884 | 420,742,584 | MDU6SXNzdWU0MjA3NDI1ODQ= | 1,884 | Binding to EXCLUDED in upsert WHERE expression | {
"login": "brandond",
"id": 370103,
"node_id": "MDQ6VXNlcjM3MDEwMw==",
"avatar_url": "https://avatars.githubusercontent.com/u/370103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brandond",
"html_url": "https://github.com/brandond",
"followers_url": "https://api.github.com/users/brandond/followers",
"following_url": "https://api.github.com/users/brandond/following{/other_user}",
"gists_url": "https://api.github.com/users/brandond/gists{/gist_id}",
"starred_url": "https://api.github.com/users/brandond/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/brandond/subscriptions",
"organizations_url": "https://api.github.com/users/brandond/orgs",
"repos_url": "https://api.github.com/users/brandond/repos",
"events_url": "https://api.github.com/users/brandond/events{/privacy}",
"received_events_url": "https://api.github.com/users/brandond/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Seems like I can do this, but I'm not sure if this is correct or not:\r\n\r\n```python\r\nEXCLUDED = Entity('EXCLUDED') \r\n```",
"I mistakenly thought that `EXCLUDED` only applied to the SET part...didn't realize it could also be used in the `WHERE` portion. Thanks, I will put together a fix.",
"Fixed. Docs along with examples are now online: http://docs.peewee-orm.com/en/latest/peewee/api.html#EXCLUDED\r\n\r\nIt is exported if you use `from peewee import *`, or you can use `from peewee import EXCLUDED`."
] | 2019-03-13T22:10:34 | 2019-03-14T04:43:25 | 2019-03-14T04:41:25 | NONE | null | For databases with upsert support, the row that would have been inserted if not for the conflict can be addressed in SQL as `excluded`. Quoting the SQLite documentation:
> Column names in the expressions of a DO UPDATE refer to the original unchanged value of the column, before the attempted INSERT. To use the value that would have been inserted had the constraint not failed, add the special "excluded." table qualifier to the column name.
How do I bind to this in the on_conflict statement? I would expect to be able to do something like this, but I can't figure out where to import EXCLUDED from.
```python
.on_conflict(conflict_target=[Host.ip_address],
preserve=[Host.instance_id, Host.account_id, Host.tags],
where=(EXCLUDED.state != 'terminated'))
```
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1884/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1883 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1883/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1883/comments | https://api.github.com/repos/coleifer/peewee/issues/1883/events | https://github.com/coleifer/peewee/issues/1883 | 420,663,883 | MDU6SXNzdWU0MjA2NjM4ODM= | 1,883 | Playhouse PooledPostgresqlDatabase | {
"login": "cypmaster14",
"id": 18011557,
"node_id": "MDQ6VXNlcjE4MDExNTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/18011557?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cypmaster14",
"html_url": "https://github.com/cypmaster14",
"followers_url": "https://api.github.com/users/cypmaster14/followers",
"following_url": "https://api.github.com/users/cypmaster14/following{/other_user}",
"gists_url": "https://api.github.com/users/cypmaster14/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cypmaster14/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cypmaster14/subscriptions",
"organizations_url": "https://api.github.com/users/cypmaster14/orgs",
"repos_url": "https://api.github.com/users/cypmaster14/repos",
"events_url": "https://api.github.com/users/cypmaster14/events{/privacy}",
"received_events_url": "https://api.github.com/users/cypmaster14/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Peewee ```3.9.2``` \r\nPsycopg2 ```2.7.7```\r\n",
"Are you messing with `._connections` in any way? The `._connections` list is a heap of `(timestamp, conn)` 2-tuples.\r\n\r\nPerhaps the issue is that there are two connections in the `_connections` heap with the exact same timestamp (down to the microsecond?).\r\n\r\nCan you replicate this?",
"I just followed the steps from the docs( ```http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#connection-pool```)\r\n\r\nThis is the full traceback.\r\n```\r\n[2019-03-13 19:36:23,065] - [CP Server Thread-19] - [ERROR - app.py] Exception on /api/v1/test \r\nTraceback (most recent call last):\r\n File \"D:\\Python36_x64\\test_project\\lib\\site-packages\\flask\\app.py\", line 2292, in wsgi_app\r\n response = self.full_dispatch_request()\r\n File \"D:\\Python36_x64\\test_project\\lib\\site-packages\\flask\\app.py\", line 1815, in full_dispatch_request\r\n rv = self.handle_user_exception(e)\r\n File \"D:\\Python36_x64\\test_project\\lib\\site-packages\\flask\\app.py\", line 1718, in handle_user_exception\r\n reraise(exc_type, exc_value, tb)\r\n File \"D:\\Python36_x64\\test_project\\lib\\site-packages\\flask\\_compat.py\", line 35, in reraise\r\n raise value\r\n File \"D:\\Python36_x64\\test_project\\lib\\site-packages\\flask\\app.py\", line 1811, in full_dispatch_request\r\n rv = self.preprocess_request()\r\n File \"D:\\Python36_x64\\test_project\\lib\\site-packages\\flask\\app.py\", line 2087, in preprocess_request\r\n rv = func()\r\n File \"D:\\Projects\\test_project\\test_api\\server.py\", line 135, in _db_connect\r\n test_api.libs.models.database.connect()\r\n File \"D:\\Python36_x64\\test_project\\lib\\site-packages\\playhouse\\pool.py\", line 108, in connect\r\n return super(PooledDatabase, self).connect(reuse_if_open)\r\n File \"D:\\Python36_x64\\test_project\\lib\\site-packages\\peewee.py\", line 2777, in connect\r\n self._state.set_connection(self._connect())\r\n File \"D:\\Python36_x64\\test_project\\lib\\site-packages\\playhouse\\pool.py\", line 125, in _connect\r\n ts, conn = heapq.heappop(self._connections)\r\nTypeError: '<' not supported between instances of 'psycopg2.extensions.connection' and 'psycopg2.extensions.connection'\r\n```\r\n\r\nI have a CheeryPy Server, with 100 threads, that is based on a Flask App.\r\nThe 
PooledPostgresqlDatabase configuration is: `max_connections=50, stale_timeout=300`",
"Can you inspect the error a bit?\r\n\r\n```python\r\ntry:\r\n test_api.libs.models.database.connect()\r\nexcept TypeError:\r\n print(test_api.libs.models.database._connections)\r\n```",
"There are multiple connections with the same timestamp\r\n```\r\n[\r\n (1552507043.5142784, ....),\r\n (1552507043.498655, ....),\r\n (1552507043.498655, ....),\r\n (1552507043.5142784, ....)\r\n]\r\n```",
"Can you also show the connections in the tuple? I'd like to see whether the connections are also the same.",
"The connections aren't the same. They have different ids.",
"All of the connections are different",
"Perfect, thanks. I wonder if there's a race condition? Two of the same timestamps, down to the microsecond, is just hard for me to imagine in the course of ordinary usage.\r\n\r\nCan you tell me how you're replicating this? Any and all information you can provide will help.\r\n\r\nHow are you triggering the error, for instance? Thanks",
"Additionally, if you could enable pool logging, by adding the following:\r\n\r\n```python\r\nimport logging\r\nlogger = logging.getLogger('peewee.pool')\r\nlogger.addHandler(logging.StreamHandler())\r\nlogger.setLevel(logging.DEBUG)\r\n```\r\n\r\nThat would provide some visibility into what might be causing the issue.\r\n\r\nSo:\r\n\r\nWhat steps do you follow to replicate this? i.e., does it happen on startup, or if you make a bunch of requests, or what?\r\n\r\nCould you try adding the logging and do whatever to trigger the issue?\r\n\r\nThank you.",
"For example, I've tried spinning up a bunch of threads and in each one opening and closing a pool connection several times. I've tried this with a max connections higher and lower than the thread count (lower so that there is contention and some threads have to block while waiting for a connection). I haven't seen any issues so far.\r\n\r\n```python\r\nfrom peewee import *\r\nfrom playhouse.pool import PooledPostgresqlDatabase\r\n\r\ndb = PooledPostgresqlDatabase('peewee_test', max_connections=10,\r\n timeout=10)\r\n\r\nimport threading\r\nimport time\r\n\r\ndef open_sleep_close():\r\n for _ in range(10):\r\n with db: # Hold the connection open for 1/10th of a second.\r\n time.sleep(0.1)\r\n\r\nthreads = [threading.Thread(target=open_sleep_close) for _ in range(50)]\r\nfor t in threads:\r\n t.start()\r\n\r\nprint('waiting...')\r\nfor t in threads:\r\n t.join()\r\n\r\nprint('- in use? %s' % len(db._in_use)) # prints 0\r\nprint('- available? %s' % len(db._connections)) # prints 10 (or whatever max connections is set to).\r\n```",
"The exception occurs after half an hour the server has started. The server receives some kind of heartbeats from other scripts. It isn't usually flooded with requests. When the exception occurs, from what I can see in the logs, the server isn't flooded with requests. At the second when the exception occurs, there are 3 or 4 requests.",
"And another interesting fact is that the server runs on two different hosts and the exception appears just on one of the hosts.",
"I have added the logger of peewee and I don't see something unusual in the logs. Just creation of connection when none are available, closing connections because they are stale, retrieving connections from the pool and returning connection to the pool.\r\nMeanwhile, the exception still occurs.",
"> I have added the logger of peewee and I don't see something unusual in the logs. Just creation of connection when none are available, closing connections because they are stale, retrieving connections from the pool and returning connection to the pool.\r\n> Meanwhile, the exception still occurs.\r\n\r\nThis is not very helpful. I didn't want your \"interpretation\" of the logs, but the logs themselves, especially in the times leading up to (and including) the exception occurring.\r\n\r\nSimilarly, if you could share your database declaration with me -- e.g., what parameters you're using to invoke the DB?\r\n\r\nI think this is just something weird in your setup, but I'd like to be as thorough as possible, and for that I need you to help by providing information.",
"Please comment with more information: what settings are you using with your database (stale timeout, Max conns, timeout, etc). It'd be really helpful to see the pool logs as well.\r\n\r\nAs I can't replicate the problem I'm closing for now. If you can provide more information I'll gladly reopen."
] | 2019-03-13T18:50:49 | 2019-03-15T23:49:03 | 2019-03-15T23:49:03 | NONE | null | Hello,
Some unexpected error occurs when the pooled connection is used for a Postgresql database.
```
File "D:\Python36_x64\test_project\lib\site-packages\playhouse\pool.py", line 125, in _connect
ts, conn = heapq.heappop(self._connections)
TypeError: '<' not supported between instances of 'psycopg2.extensions.connection' and 'psycopg2.extensions.connection'
```
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1883/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1883/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1882 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1882/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1882/comments | https://api.github.com/repos/coleifer/peewee/issues/1882/events | https://github.com/coleifer/peewee/issues/1882 | 420,453,592 | MDU6SXNzdWU0MjA0NTM1OTI= | 1,882 | auto field for postgres but not primary key | {
"login": "ra-esmith",
"id": 24212262,
"node_id": "MDQ6VXNlcjI0MjEyMjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/24212262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ra-esmith",
"html_url": "https://github.com/ra-esmith",
"followers_url": "https://api.github.com/users/ra-esmith/followers",
"following_url": "https://api.github.com/users/ra-esmith/following{/other_user}",
"gists_url": "https://api.github.com/users/ra-esmith/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ra-esmith/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ra-esmith/subscriptions",
"organizations_url": "https://api.github.com/users/ra-esmith/orgs",
"repos_url": "https://api.github.com/users/ra-esmith/repos",
"events_url": "https://api.github.com/users/ra-esmith/events{/privacy}",
"received_events_url": "https://api.github.com/users/ra-esmith/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I am going to try\r\n\r\nfrom playhouse.signals import Model, pre_save\r\n\r\nclass MyModel(Model):\r\n data = IntegerField()\r\n\r\n@pre_save(sender=MyModel)\r\ndef on_save_handler(model_class, instance, created):\r\n # find max value of temp_id in model\r\n # increment it by one and assign it to model instance object\r\n next_value = MyModel.select(fn.Max(MyModel.temp_id))[0].temp_id +1\r\n instance.temp_id = next_value\r\n\r\nwhich seems likely to work ... though rather involved perhaps ...",
"You use sequences for that.\r\n\r\n```python\r\nmy_field = IntegerField(sequence='my_field_seq')\r\n```"
] | 2019-03-13T11:38:32 | 2019-03-13T12:27:52 | 2019-03-13T12:27:51 | NONE | null | Good Morning,
How would I, using the current version of peewee.
Define a field on my table which is an integer, serially increases in value, and is not a primary key field. Auto Field gets the serial, but always looks to be primary. I have some old code which already has a primary key, but it the primary key is not serial and I wish to chunk some operations based this new serial field.
Thanks,
Evan | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1882/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1881 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1881/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1881/comments | https://api.github.com/repos/coleifer/peewee/issues/1881/events | https://github.com/coleifer/peewee/issues/1881 | 419,859,760 | MDU6SXNzdWU0MTk4NTk3NjA= | 1,881 | MaxConnectionsExceeded: Exceeded maximum connections. | {
"login": "ghost",
"id": 10137,
"node_id": "MDQ6VXNlcjEwMTM3",
"avatar_url": "https://avatars.githubusercontent.com/u/10137?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghost",
"html_url": "https://github.com/ghost",
"followers_url": "https://api.github.com/users/ghost/followers",
"following_url": "https://api.github.com/users/ghost/following{/other_user}",
"gists_url": "https://api.github.com/users/ghost/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ghost/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ghost/subscriptions",
"organizations_url": "https://api.github.com/users/ghost/orgs",
"repos_url": "https://api.github.com/users/ghost/repos",
"events_url": "https://api.github.com/users/ghost/events{/privacy}",
"received_events_url": "https://api.github.com/users/ghost/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Sorry, I missed applying the wrapper to some functions.",
"Running each request in a new thread is kinda silly...but ok? Presumably your wsgi server would handle all that. So you're probably running threads-in-threads?\r\n\r\nAnyway...ok?"
] | 2019-03-12T08:34:12 | 2019-03-12T12:51:39 | 2019-03-12T08:45:12 | NONE | null | I wrote a wrapper to handle each request I received. Each request is in a separate thread.
Code Samples:
```
t = Thread(target=handle_request)
t.start()
```
The db config and wrapper:
```
peewee_database = PooledMySQLDatabase(mysql_config['db'], host=mysql_config['host'], user=mysql_config['user'],
password=mysql_config['password'], charset='utf8mb4', max_connections=20,
stale_timeout=300)
def db_wrapper(func):  # outer decorator function (omitted from the original paste)
    def wrap(*args, **kwargs):
        db = peewee_database
        try:
            db.connect()
            return func(*args, **kwargs)
        finally:
            db.manual_close()
    return wrap
```
Exception:
```
File "/usr/lib/python2.7/site-packages/peewee.py", line 2681, in execute_sql
cursor = self.cursor(commit)
File "/usr/lib/python2.7/site-packages/peewee.py", line 2667, in cursor
self.connect()
File "/usr/lib/python2.7/site-packages/playhouse/pool.py", line 108, in connect
return super(PooledDatabase, self).connect(reuse_if_open)
File "/usr/lib/python2.7/site-packages/peewee.py", line 2630, in connect
self._state.set_connection(self._connect())
File "/usr/lib/python2.7/site-packages/playhouse/pool.py", line 154, in _connect
raise MaxConnectionsExceeded('Exceeded maximum connections.')
MaxConnectionsExceeded: Exceeded maximum connections.
```
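A minimal sketch of the decorator shape the wrapper above appears to intend, using the stdlib `sqlite3` module in place of the pooled MySQL database (the names `with_connection` and `one_plus_one` are hypothetical). The key point, echoed in the comments below, is that every function that touches the database must actually be decorated — any undecorated call path leaves its connection checked out of the pool:

```python
import functools
import sqlite3

def with_connection(func):
    """Open a connection for the duration of one call, then close it."""
    @functools.wraps(func)
    def wrap(*args, **kwargs):
        db = sqlite3.connect(":memory:")  # stand-in for the pooled database
        try:
            return func(db, *args, **kwargs)
        finally:
            db.close()  # stand-in for db.manual_close()
    return wrap

@with_connection
def one_plus_one(db):
    return db.execute("SELECT 1 + 1").fetchone()[0]

print(one_plus_one())  # 2
```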
My peewee version: 3.3.4 | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1881/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1881/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1880 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1880/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1880/comments | https://api.github.com/repos/coleifer/peewee/issues/1880/events | https://github.com/coleifer/peewee/issues/1880 | 419,801,552 | MDU6SXNzdWU0MTk4MDE1NTI= | 1,880 | pool.map cannot pickle database or results object | {
"login": "tsikerdekis",
"id": 232134,
"node_id": "MDQ6VXNlcjIzMjEzNA==",
"avatar_url": "https://avatars.githubusercontent.com/u/232134?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tsikerdekis",
"html_url": "https://github.com/tsikerdekis",
"followers_url": "https://api.github.com/users/tsikerdekis/followers",
"following_url": "https://api.github.com/users/tsikerdekis/following{/other_user}",
"gists_url": "https://api.github.com/users/tsikerdekis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tsikerdekis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tsikerdekis/subscriptions",
"organizations_url": "https://api.github.com/users/tsikerdekis/orgs",
"repos_url": "https://api.github.com/users/tsikerdekis/repos",
"events_url": "https://api.github.com/users/tsikerdekis/events{/privacy}",
"received_events_url": "https://api.github.com/users/tsikerdekis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Pickle doesn't like the fact that you've nested your `User` model inside the `class Database`. Consider reorganizing your code. It will work fine to pickle Peewee model classes and instances (though this is really not advisable!), but not in the way you have written your code.\r\n\r\nA couple other things to mention:\r\n\r\n* `BigIntegerField(primary_key=True)` requires you to manually specify the IDs (assuming you plan to create new rows). For auto-incrementing you would use `BigAutoField()` instead.\r\n* Peewee model classes can already be used like dictionaries. So, `User[1]` would return the user with id=1.",
"Hi @coleifer , thank you for your reply, I made some changes to the code but it is still not working. I made a test code for you to evaluate:\r\n\r\nusers.py\r\n```\r\nfrom datetime import datetime\r\nimport os\r\nfrom peewee import *\r\n\r\n# Global database object\r\nDB = SqliteDatabase(None)\r\n\r\nclass Database:\r\n db = None\r\n same_url_limit = 1 # Defines how many same urls are allowed, e.g., in cases of group projects this may be 2\r\n logger = None\r\n db_filename = None\r\n\r\n def __init__(self, logger=None, db_filename='database.sqlite3'):\r\n self.db_filename = db_filename\r\n self.logger = logger\r\n self.db = DB\r\n self.db.init(db_filename)\r\n self.db.connect()\r\n self.db.create_tables([Users])\r\n os.chmod(db_filename, 0o666)\r\n\r\n def __del__(self):\r\n\r\n self.db.close()\r\n\r\n\r\nclass Users(Model):\r\n \"\"\"\r\n The class containing local information obtained through canvas about each user, assignment status, last commit,\r\n plagiarism status and other important information for ATHINA to function.\r\n \"\"\"\r\n user_id = BigIntegerField(primary_key=True)\r\n course_id = BigIntegerField(default=0)\r\n\r\n class Meta:\r\n database = DB\r\n```\r\n\r\ntester.py\r\n```\r\nfrom users import *\r\nimport multiprocessing\r\n\r\n\r\nclass Tester:\r\n user_data = None\r\n\r\n def __init__(self, user_data):\r\n self.user_data = user_data\r\n\r\n def test(self, test):\r\n print(test)\r\n\r\n def parallel_map(self, user_ids):\r\n compute_pool = multiprocessing.Pool(processes=2)\r\n user_object_results = compute_pool.map(self.test, user_ids)\r\n return user_object_results\r\n\r\ntester = Tester(Database('file.sqlite3'))\r\ntester.parallel_map([1,2,3])\r\n```\r\n\r\nI am getting the following error:\r\n```\r\nTraceback (most recent call last):\r\n File \"tester.py\", line 20, in <module>\r\n tester.parallel_map([1,2,3])\r\n File \"tester.py\", line 16, in parallel_map\r\n user_object_results = compute_pool.map(self.test, user_ids)\r\n File 
\"/usr/lib/python3.5/multiprocessing/pool.py\", line 260, in map\r\n return self._map_async(func, iterable, mapstar, chunksize).get()\r\n File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 608, in get\r\n raise self._value\r\n File \"/usr/lib/python3.5/multiprocessing/pool.py\", line 385, in _handle_tasks\r\n put(task)\r\n File \"/usr/lib/python3.5/multiprocessing/connection.py\", line 206, in send\r\n self._send_bytes(ForkingPickler.dumps(obj))\r\n File \"/usr/lib/python3.5/multiprocessing/reduction.py\", line 50, in dumps\r\n cls(buf, protocol).dump(obj)\r\nTypeError: can't pickle _thread.lock objects\r\n\r\n```",
"Is that `_thread.lock` coming from peewee? It's not clear to me. Your code organization is kind of fucked.\r\n\r\nLook: here is a minimal example with none of this crazy shit, and it works fine:\r\n\r\n```python\r\nfrom multiprocessing import Pool\r\nfrom peewee import *\r\n\r\ndb = SqliteDatabase('/tmp/test-mp.db')\r\n\r\nclass Register(Model):\r\n value = IntegerField()\r\n class Meta:\r\n database = db\r\n\r\nwith db:\r\n db.drop_tables([Register])\r\n db.create_tables([Register])\r\n\r\n\r\ndef create_register(i):\r\n with db:\r\n Register.create(value=i)\r\n\r\nPool(processes=4).map(create_register, range(10))\r\n\r\nwith db:\r\n for row in Register.select().order_by(Register.id):\r\n print(row.id, row.value)\r\n```",
"I suspect that thread_lock comes from peewee. It makes sense that the sqlite connection won't be sharable to other threads. The issue is that I need to have separate modules and separate the db logic with my tester module. The problem is that if the self is passed on the mapped function and that contains the connection then you get that error. There is a work around.\r\n\r\nBefore calling pool, I close the database (del has an alias that closes it) and I connect to the db from within the function. I get multiple connections to the same file but it would be the same like running several python scripts that do operations on the same sqlite db.\r\n\r\n```\r\nfrom users import *\r\nimport multiprocessing\r\n\r\n\r\nclass Tester:\r\n user_data = None\r\n\r\n def __init__(self, user_data):\r\n self.user_data = user_data\r\n\r\n def test(self, test):\r\n self.user_data = Database('file.sqlite3')\r\n print(Users[1])\r\n print(test)\r\n Users[1].course_id = 1\r\n Users[1].save()\r\n del self.user_data\r\n return True\r\n\r\n def parallel_map(self, user_ids):\r\n compute_pool = multiprocessing.Pool(processes=2)\r\n user_object_results = compute_pool.map(self.test, user_ids)\r\n return user_object_results\r\n\r\ntester = Tester(Database('file.sqlite3'))\r\n#Users(user_id=1).save(force_insert=True)\r\nUsers[1]\r\ndel tester.user_data\r\ntester.parallel_map([1, 2, 3])\r\ntester.user_data = Database('file.sqlite3')\r\n```",
"I *feel* like I'm speaking english and my code was clear and concise.",
"One more thought: when you fork, any open connections are carried into the new process. Which usually results in bad things happening when you try to use them from the child process. So: **always** close connections before fork. The same applies to pickling connections.\r\n\r\nBasically: if you want to do multiple processes, always try to minimize the database state (eg open conns) before fork. "
] | 2019-03-12T04:54:39 | 2019-03-15T12:39:59 | 2019-03-12T15:46:22 | NONE | null | I have a db file:
```
from datetime import datetime, timezone
import pickle
import os
from peewee import *
# Global database object
DB = SqliteDatabase(None)
class Database:
# db is meant to be a dict
# contains object User and indexed by a user's user_id
# Why set it as None? See below:
# http://effbot.org/zone/default-values.htm
db = None
same_url_limit = 1 # Defines how many same urls are allowed, e.g., in cases of group projects this may be 2
logger = None
db_filename = None
def __init__(self, logger=None, db_filename='database.sqlite3'):
self.db_filename = db_filename
self.logger = logger
self.db = DB
self.db.init(db_filename)
self.db.connect()
self.db.create_tables([Database.Users])
os.chmod(db_filename, 0o666)
def __del__(self):
self.db.close()
class Users(Model):
"""
The class containing local information obtained through canvas about each user, assignment status, last commit,
plagiarism status and other important information for ATHINA to function.
"""
user_id = BigIntegerField(primary_key=True)
course_id = BigIntegerField(default=0)
user_fullname = TextField(default="")
secondary_id = TextField(default="")
repository_url = TextField(default="")
url_date = DateTimeField(default=datetime(1, 1, 1, 0, 0)) # When a new url was found
new_url = BooleanField(default=False) # Switched when new url is discovered on e-learning site
commit_date = DateTimeField(default=datetime(1, 1, 1, 0, 0)) # Day of the last commit
same_url_flag = BooleanField(default=False) # Is repo url found to be similar with N other students?
plagiarism_to_grade = BooleanField(default=False) # Signifies whether a user received a new grade (plagiarism)
last_plagiarism_check = DateTimeField(default=datetime.now())
last_graded = DateTimeField(default=datetime(1, 1, 1, 0, 0))
last_grade = SmallIntegerField(null=True)
changed_state = BooleanField(default=False)
class Meta:
database = DB
```
I have a class in another file that when it is called, it gets the user object as a variable
```
class Tester:
user_data = None
def __init__(self, user_data):
self.user_data = user_data
def parallel_map(self, user_ids):
compute_pool = multiprocessing.Pool(processes=self.configuration.processes)
user_object_results = compute_pool.map(self.process_student_assignment, user_ids)
return user_object_results
```
Problem is, I get the following error:
`multiprocessing.pool.MaybeEncodingError: Error sending result: '<multiprocessing.pool.ExceptionWithTraceback object at 0x7f3c1b55b198>'. Reason: 'PicklingError("Can't pickle <class 'athina.users.UsersDoesNotExist'>: attribute lookup UsersDoesNotExist on athina.users failed",)'
` | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1880/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1879 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1879/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1879/comments | https://api.github.com/repos/coleifer/peewee/issues/1879/events | https://github.com/coleifer/peewee/issues/1879 | 419,203,044 | MDU6SXNzdWU0MTkyMDMwNDQ= | 1,879 | defining table_name dynamically in _meta database not functioning properly | {
"login": "eliteuser26",
"id": 31997745,
"node_id": "MDQ6VXNlcjMxOTk3NzQ1",
"avatar_url": "https://avatars.githubusercontent.com/u/31997745?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eliteuser26",
"html_url": "https://github.com/eliteuser26",
"followers_url": "https://api.github.com/users/eliteuser26/followers",
"following_url": "https://api.github.com/users/eliteuser26/following{/other_user}",
"gists_url": "https://api.github.com/users/eliteuser26/gists{/gist_id}",
"starred_url": "https://api.github.com/users/eliteuser26/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/eliteuser26/subscriptions",
"organizations_url": "https://api.github.com/users/eliteuser26/orgs",
"repos_url": "https://api.github.com/users/eliteuser26/repos",
"events_url": "https://api.github.com/users/eliteuser26/events{/privacy}",
"received_events_url": "https://api.github.com/users/eliteuser26/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"```python\r\nWeathervalue._meta.table_name = self.selectedstation+'_weathervalue'\r\ndel Weathervalue._meta.table # Remove cached table property, will regenerate\r\n```\r\n\r\nCan you test making the addition above?",
"Issue resolved. According to the documentation, I needed to create a table model for each table in the database in this case 2 tables even though fields are identical. I added the table name in the table model in the meta field. Based on which one I selected I accessed the corresponding table model. I originally created only one table model for both which didn't worked. \r\n\r\nI noted in the context manager when using the with statement it doesn't properly close the database connection as I was able to verify with the db.is_closed() statement. I needed to add the db.close() to close the connection. Should I create a separate issue for this?",
"> I noted in the context manager when using the with statement it doesn't properly close the database connection as I was able to verify with the db.is_closed() statement. I needed to add the db.close() to close the connection. Should I create a separate issue for this?\r\n\r\nThis is tested. Probably some issue on your end:\r\n\r\nhttps://github.com/coleifer/peewee/blob/master/tests/database.py#L165-L186",
"Thanks. It is probably in my code the way I initialized the database. I will look at it again. "
] | 2019-03-10T16:15:38 | 2019-03-10T19:36:09 | 2019-03-10T18:54:25 | NONE | null | I am trying to define the table_name in the model's _meta dynamically in the Python code. When defining the default table to use for a select query, it doesn't access the proper table once a name has been set, even though the connection to the database was closed.
Used this command for defining the table_name before opening the database dynamically:
```
Weathervalue._meta.table_name = self.selectedstation+'_weathervalue'
with db:
    year_function = fn.YEAR(Weathervalue.Date_Value)
    year_db = (Weathervalue
               .select(year_function.alias('year'))
               .group_by(year_function)
               .order_by(year_function))
db.close()
```
It does use the table_name the first time around but it doesn't use it on subsequent calls when the name changes. I tried to define the table_name after opening the database connection but to no avail. It doesn't seem to have any effect after opening at least once. It retains the table_name on subsequent calls to the database. It doesn't reset the table_name to None when closing the database.
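Not the reporter's code — a plain-`sqlite3` sketch of the resolution described in the comments (one table per station, selected by name). The schema and station names here are hypothetical; the table name is drawn from a whitelist, since a table name cannot be a bound SQL parameter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
for station in ("north", "south"):
    conn.execute(f"CREATE TABLE {station}_weathervalue (date_value TEXT)")
conn.execute("INSERT INTO north_weathervalue VALUES ('2019-01-01')")
conn.execute("INSERT INTO south_weathervalue VALUES ('2018-06-15')")

def years(station):
    # Whitelist the station name: never interpolate untrusted input into SQL.
    assert station in ("north", "south")
    sql = f"SELECT DISTINCT substr(date_value, 1, 4) FROM {station}_weathervalue"
    return [row[0] for row in conn.execute(sql)]

print(years("north"), years("south"))  # ['2019'] ['2018']
conn.close()
```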
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1879/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1878 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1878/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1878/comments | https://api.github.com/repos/coleifer/peewee/issues/1878/events | https://github.com/coleifer/peewee/pull/1878 | 419,030,096 | MDExOlB1bGxSZXF1ZXN0MjU5NjYzMTM1 | 1,878 | Update peewee.py | {
"login": "jieshukai",
"id": 38142715,
"node_id": "MDQ6VXNlcjM4MTQyNzE1",
"avatar_url": "https://avatars.githubusercontent.com/u/38142715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jieshukai",
"html_url": "https://github.com/jieshukai",
"followers_url": "https://api.github.com/users/jieshukai/followers",
"following_url": "https://api.github.com/users/jieshukai/following{/other_user}",
"gists_url": "https://api.github.com/users/jieshukai/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jieshukai/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jieshukai/subscriptions",
"organizations_url": "https://api.github.com/users/jieshukai/orgs",
"repos_url": "https://api.github.com/users/jieshukai/repos",
"events_url": "https://api.github.com/users/jieshukai/events{/privacy}",
"received_events_url": "https://api.github.com/users/jieshukai/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I'll pass. Plus you've got tests failing due to unbound fields not having a `model` attribute."
] | 2019-03-09T03:54:07 | 2019-03-09T04:26:30 | 2019-03-09T04:26:30 | NONE | null | support mysql comment | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1878/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/1878",
"html_url": "https://github.com/coleifer/peewee/pull/1878",
"diff_url": "https://github.com/coleifer/peewee/pull/1878.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/1878.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/1877 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1877/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1877/comments | https://api.github.com/repos/coleifer/peewee/issues/1877/events | https://github.com/coleifer/peewee/issues/1877 | 418,737,931 | MDU6SXNzdWU0MTg3Mzc5MzE= | 1,877 | NULLS not supported on SQLite | {
"login": "mkoura",
"id": 2352619,
"node_id": "MDQ6VXNlcjIzNTI2MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2352619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mkoura",
"html_url": "https://github.com/mkoura",
"followers_url": "https://api.github.com/users/mkoura/followers",
"following_url": "https://api.github.com/users/mkoura/following{/other_user}",
"gists_url": "https://api.github.com/users/mkoura/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mkoura/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mkoura/subscriptions",
"organizations_url": "https://api.github.com/users/mkoura/orgs",
"repos_url": "https://api.github.com/users/mkoura/repos",
"events_url": "https://api.github.com/users/mkoura/events{/privacy}",
"received_events_url": "https://api.github.com/users/mkoura/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"NULLS is not supported by Sqlite. For SQLite you can use a case expression, however: http://docs.peewee-orm.com/en/latest/peewee/api.html#Case",
"I've added a feature check and compatibility layer so that you can specify ``nulls='first'`` or ``nulls='last'`` with any database, and Peewee will generate the appropriate SQL. For postgres, it uses the usual non-standard ``nulls first/last``, but for sqlite and mysql, an equivalent ``CASE`` statement will be generated.",
"Awesome, thanks a lot!"
] | 2019-03-08T10:56:57 | 2019-03-10T16:13:17 | 2019-03-09T04:12:51 | NONE | null | Syntax like `ORDER BY somevalue DESC NULLS LAST` doesn't work on SQLite. We are using `column.desc(nulls='LAST')` in one of our models and when using SQLite (for unit testing), this fails with
```
Traceback (most recent call last):
File "<venv>/lib64/python3.7/site-packages/peewee.py", line 2835, in execute_sql
cursor.execute(sql, params or ())
sqlite3.OperationalError: near "NULLS": syntax error
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
...
File "<venv>/lib64/python3.7/site-packages/peewee.py", line 2835, in execute_sql
cursor.execute(sql, params or ())
peewee.OperationalError: near "NULLS": syntax error
```
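For reference, older SQLite versions reject the `NULLS` keyword, but the same ordering can be emulated by sorting on a `CASE` expression first — roughly the SQL that peewee's `Case()` helper (and, after the fix, `nulls='last'` on SQLite) compiles to. A stdlib-only sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (v INTEGER)")
conn.executemany("INSERT INTO t (v) VALUES (?)",
                 [(3,), (None,), (1,), (None,), (2,)])

# The CASE key sorts NULL rows after non-NULL rows; within each group,
# v DESC applies -- equivalent to "ORDER BY v DESC NULLS LAST".
rows = conn.execute(
    "SELECT v FROM t "
    "ORDER BY CASE WHEN v IS NULL THEN 1 ELSE 0 END, v DESC"
).fetchall()
print([r[0] for r in rows])  # [3, 2, 1, None, None]
conn.close()
```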
I didn't find any mention that `nulls` argument is PostgreSQL specific so I assume this behavior is a bug. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1877/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1876 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1876/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1876/comments | https://api.github.com/repos/coleifer/peewee/issues/1876/events | https://github.com/coleifer/peewee/issues/1876 | 417,698,604 | MDU6SXNzdWU0MTc2OTg2MDQ= | 1,876 | AttributeError: 'Connection' object has no attribute 'server_version' | {
"login": "strongbugman",
"id": 16114285,
"node_id": "MDQ6VXNlcjE2MTE0Mjg1",
"avatar_url": "https://avatars.githubusercontent.com/u/16114285?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/strongbugman",
"html_url": "https://github.com/strongbugman",
"followers_url": "https://api.github.com/users/strongbugman/followers",
"following_url": "https://api.github.com/users/strongbugman/following{/other_user}",
"gists_url": "https://api.github.com/users/strongbugman/gists{/gist_id}",
"starred_url": "https://api.github.com/users/strongbugman/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/strongbugman/subscriptions",
"organizations_url": "https://api.github.com/users/strongbugman/orgs",
"repos_url": "https://api.github.com/users/strongbugman/repos",
"events_url": "https://api.github.com/users/strongbugman/events{/privacy}",
"received_events_url": "https://api.github.com/users/strongbugman/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for the quick report. This should be fixed -- I had been testing against pymysql and neglected to handle `MySQLdb` correctly.\r\n\r\nI've pushed 3.9.1 which contains the fix."
] | 2019-03-06T09:17:47 | 2019-03-06T13:10:32 | 2019-03-06T13:08:07 | NONE | null | Help! I hit a problem when I call the `create_tables` function with version 3.9.0:
```
../../../.local/share/virtualenvs/shanbay_sea/lib/python3.6/site-packages/peewee.py:2989: in create_tables
model.create_table(**options)
../../../.local/share/virtualenvs/shanbay_sea/lib/python3.6/site-packages/peewee.py:5999: in create_table
and cls.table_exists():
../../../.local/share/virtualenvs/shanbay_sea/lib/python3.6/site-packages/peewee.py:5989: in table_exists
return cls._schema.database.table_exists(M.table.__name__, M.schema)
../../../.local/share/virtualenvs/shanbay_sea/lib/python3.6/site-packages/peewee.py:2967: in table_exists
return table_name in self.get_tables(schema=schema)
../../../.local/share/virtualenvs/shanbay_sea/lib/python3.6/site-packages/peewee.py:3620: in get_tables
return [table for table, in self.execute_sql(query, ('VIEW',))]
../../../.local/share/virtualenvs/shanbay_sea/lib/python3.6/site-packages/peewee.py:2827: in execute_sql
cursor = self.cursor(commit)
../../../.local/share/virtualenvs/shanbay_sea/lib/python3.6/site-packages/peewee.py:2813: in cursor
self.connect()
../../../.local/share/virtualenvs/shanbay_sea/lib/python3.6/site-packages/playhouse/pool.py:108: in connect
return super(PooledDatabase, self).connect(reuse_if_open)
../../../.local/share/virtualenvs/shanbay_sea/lib/python3.6/site-packages/peewee.py:2776: in connect
self._state.set_connection(self._connect())
../../../.local/share/virtualenvs/shanbay_sea/lib/python3.6/site-packages/playhouse/pool.py:155: in _connect
conn = super(PooledDatabase, self)._connect()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <playhouse.pool.PooledMySQLDatabase object at 0x1079802e8>
def _connect(self):
if mysql is None:
raise ImproperlyConfigured('MySQL driver not installed!')
conn = mysql.connect(db=self.database, **self.connect_params)
if self._server_version is None:
> version_raw = conn.server_version
E AttributeError: 'Connection' object has no attribute 'server_version'
conn = <_mysql.connection open to 'mysql' at 0x7fbddbb4fc18>
self = <playhouse.pool.PooledMySQLDatabase object at 0x1079802e8>
```
Are there some broken changes? | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1876/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1876/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1875 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1875/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1875/comments | https://api.github.com/repos/coleifer/peewee/issues/1875/events | https://github.com/coleifer/peewee/issues/1875 | 416,955,253 | MDU6SXNzdWU0MTY5NTUyNTM= | 1,875 | Inserting a timestamp value of 0 or "1970-01-01 00:00:00" followed by a get returns None | {
"login": "dougthor42",
"id": 5386897,
"node_id": "MDQ6VXNlcjUzODY4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5386897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dougthor42",
"html_url": "https://github.com/dougthor42",
"followers_url": "https://api.github.com/users/dougthor42/followers",
"following_url": "https://api.github.com/users/dougthor42/following{/other_user}",
"gists_url": "https://api.github.com/users/dougthor42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dougthor42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dougthor42/subscriptions",
"organizations_url": "https://api.github.com/users/dougthor42/orgs",
"repos_url": "https://api.github.com/users/dougthor42/repos",
"events_url": "https://api.github.com/users/dougthor42/events{/privacy}",
"received_events_url": "https://api.github.com/users/dougthor42/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This relates to the following code:\r\n\r\n```python\r\n def python_value(self, value):\r\n if value is not None and isinstance(value, (int, float, long)):\r\n if value == 0:\r\n return\r\n```\r\n\r\nI suppose this is a bug, so I've removed the \"if value == 0\" conditional.\r\n\r\nMy original thinking was that a zero timestamp is a special-case which was equivalent to \"no timestamp\". But even if that is true, the use of NULL/None carries consequences that are not present when using a zero value. Should be fixed by df94b2db9fa93b0f56cbca2f95b2bf42ba27bb01",
"Awesome, thanks! I kinda figured it was something like that.\r\n\r\nThe only reason that I ran into the issue is because I was testing my own program and noticed that I couldn't fix particular edge case without some hacks."
] | 2019-03-04T19:24:41 | 2019-03-04T22:28:46 | 2019-03-04T20:38:06 | NONE | null | # Summary
The `.get` method on a model with a `TimestampField` field will return `None` if the value stored is the POSIX epoch `1970-01-01 00:00:00` (POSIX timestamp `0`).
# Steps to reproduce:
Here's a short example program:
```python
# example.py
from datetime import datetime
from peewee import *
db = SqliteDatabase(":memory:")
class Example(Model):
timestamp = TimestampField(utc=True)
class Meta:
database = db
def __str__(self):
return "<{}, {}>".format(self.id, repr(self.timestamp))
db.connect()
db.create_tables([Example])
# Insert some data
a = Example(timestamp=datetime.utcnow())
a.save()
print("(Instance) timestamp=utcnow: {}".format(a))
a_get = Example.get(Example.id == 1)
print("(Query) timestamp=utcnow: {}".format(a_get))
posix_epoch = datetime(1970, 1, 1)
b = Example(timestamp=posix_epoch)
b.save()
print("(Instance) Timestamp=posix_epoch: {}".format(b))
b_get = Example.get(Example.id == 2)
print("(Query) Timestamp=posix_epoch: {}".format(b_get))
db.close()
```
# Expected Result
The timestamp value returned by the `.get` method should be the POSIX epoch datetime (the stored integer value is `0`):
```shell
$ python example.py
(Instance) timestamp=utcnow: <1, datetime.datetime(2019, 3, 4, 19, 8, 47, 912755)>
(Query) timestamp=utcnow: <1, datetime.datetime(2019, 3, 4, 19, 8, 48)>
(Instance) timestamp=posix_epoch: <2, datetime.datetime(1970, 1, 1, 0, 0, 0, 0)>
(Query) timestamp=posix_epoch: <2, datetime.datetime(1970, 1, 1, 0, 0, 0)>
```
# Actual Result
The timestamp value returned by `.get` is `None`:
```shell
$ python example.py
(Instance) timestamp=utcnow: <1, datetime.datetime(2019, 3, 4, 19, 16, 10, 217857)>
(Query) timestamp=utcnow: <1, datetime.datetime(2019, 3, 4, 19, 16, 10)>
(Instance) Timestamp=posix_epoch: <2, datetime.datetime(1970, 1, 1, 0, 0)>
(Query) Timestamp=posix_epoch: <2, None>
```
# Comments
+ I'm testing against an SQLite database. I have not checked MySQL or PostgreSQL.
+ You'll notice that the sub-second resolution is dropped. As you know, this is expected because all backends store the value in an `integer` column. I might open another issue to adjust the column type, but that's out of scope.
+ The record instance has the correct value, but the result from `.get` does not
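The edge case itself is easy to pin down with the stdlib: POSIX timestamp `0` is exactly the epoch, and since `0` is falsy in Python, any `if not value`-style conversion silently turns a real epoch timestamp into "missing":

```python
from datetime import datetime, timezone

epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
assert int(epoch.timestamp()) == 0  # the epoch *is* timestamp 0

stored = 0  # what the integer column holds for this row
print(bool(stored))  # False -- a falsy check mistakes the epoch for "no value"
print(datetime.fromtimestamp(stored, tz=timezone.utc))  # 1970-01-01 00:00:00+00:00
```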
# Versions
+ Python: 3.6.7 64-bit
+ OS: Ubuntu 18.04 on Windows Subsystem for Linux
```shell
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.2 LTS
Release: 18.04
Codename: bionic
$ uname -a
Linux Thorium 4.4.0-17763-Microsoft #253-Microsoft Mon Dec 31 17:49:00 PST 2018 x86_64 x86_64 x86_64 GNU/Linux
```
+ Windows 10 Pro, version 1809
+ Peewee versions tested
+ Master 735ff26b
+ v3.8.2 from pypi | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1875/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1875/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1874 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1874/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1874/comments | https://api.github.com/repos/coleifer/peewee/issues/1874/events | https://github.com/coleifer/peewee/issues/1874 | 416,874,080 | MDU6SXNzdWU0MTY4NzQwODA= | 1,874 | SSL and peewee | {
"login": "ra-esmith",
"id": 24212262,
"node_id": "MDQ6VXNlcjI0MjEyMjYy",
"avatar_url": "https://avatars.githubusercontent.com/u/24212262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ra-esmith",
"html_url": "https://github.com/ra-esmith",
"followers_url": "https://api.github.com/users/ra-esmith/followers",
"following_url": "https://api.github.com/users/ra-esmith/following{/other_user}",
"gists_url": "https://api.github.com/users/ra-esmith/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ra-esmith/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ra-esmith/subscriptions",
"organizations_url": "https://api.github.com/users/ra-esmith/orgs",
"repos_url": "https://api.github.com/users/ra-esmith/repos",
"events_url": "https://api.github.com/users/ra-esmith/events{/privacy}",
"received_events_url": "https://api.github.com/users/ra-esmith/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"From the [docs](http://docs.peewee-orm.com/en/latest/peewee/database.html):\r\n\r\n> The Database initialization method expects the name of the database as the first parameter. Subsequent keyword arguments are passed to the underlying database driver when establishing the connection, allowing you to pass vendor-specific parameters easily.\r\n\r\n```python\r\ndb = PostgresqlDatabase('my_db', sslmode='require', ...etc...)\r\n```",
"I am using Flask, see below. I am unsure how to pass in the sslmode require into my configuration. Does not seem the call above is something I can do. Also I tried using SSLify ... but that does not seem to be putting sqlmode require on for postgres.\r\n\r\n\r\n\r\n----------------\r\n# here is my app code to startup ...\r\n\r\nfrom __future__ import print_function\r\n\r\nimport json\r\nimport logging\r\nimport os\r\n\r\nfrom flask import Flask\r\nfrom flask_caching import Cache\r\nfrom flask_peewee.db import Database\r\nfrom flask_sslify import SSLify\r\nfrom flask_htmlmin import HTMLMIN\r\n\r\nfrom peewee import PostgresqlDatabase\r\n\r\ntry:\r\n theapp_app = Flask(__name__)\r\n theapp_app.config['MINIFY_PAGE'] = True\r\n htmlmin = HTMLMIN(theapp_app)\r\n \r\n theapp_app.logger.setLevel(logging.ERROR)\r\n # theapp_app.logger.setLevel(logging.INFO)\r\n theapp_app.debug = False\r\n sslify = SSLify(theapp_app)\r\n theapp_app.jinja_env.globals.update(len=len, dumps=json.dumps)\r\n if 'ON_HEROKU' in os.environ:\r\n theapp_app.config.from_object('theapp_config.ProductionConfig')\r\n cache = Cache(theapp_app, config={'CACHE_TYPE': 'filesystem', 'CACHE_DIR': 'cache'})\r\n else:\r\n theapp_app.config.from_object('theapp_config.DevelopmentConfig')\r\n cache = Cache(theapp_app, config={'CACHE_TYPE': 'simple'})\r\n db = Database(theapp_app)\r\n # TODO: how to pass sslmode into the current setup above ... \r\n # sslmode='require' as an option ... for heroku\r\n # Example: but how to use flask theapp_app above\r\n # db = PostgresqlDatabase('my_app', user='postgres', password='secret', host='10.1.0.9', port=5432)\r\n\r\nexcept Exception as e:\r\n print(\"Exception: \" + str(e))",
"So I am trying the following, as I am on heroku which binds the environment variable DATABASE_URL\r\nif is_running_on_heroku():\r\n class ProductionConfig(BaseConfig):\r\n def __init__(self):\r\n pass\r\n\r\n urlparse.uses_netloc.append('postgres')\r\n url = urlparse.urlparse(os.environ['DATABASE_URL'])\r\n DATABASE = {\r\n 'name': url.path[1:],\r\n 'user': url.username,\r\n 'password': url.password,\r\n 'host': url.hostname,\r\n 'port': url.port,\r\n 'engine': 'peewee.PostgresqlDatabase',\r\n 'sslmode':'require' # see if we can get sslmode for heroku on\r\n }\r\n DEBUG = False\r\n\r\n\r\n base_url = ra_config[\"base_url\"]\r\nelse:\r\n base_url = ra_config[\"ra\"][\"local_base_url\"]\r\n\r\n\r\nand I think that as password and other args are somehow passed into the database that I can also put sslmode here and it will float along in also ...",
"Yes. If you look at the code you can see that parameters from the database config dict are passed to the Peewee database constructor."
] | 2019-03-04T16:06:52 | 2019-04-04T23:11:45 | 2019-03-04T18:48:09 | NONE | null | Hello,
Not sure if this is something peewee can help with.
I have been using peewee for a few years now; it works great. Oddly, this morning I am getting lots of errors from Heroku related to SSL.
The Heroku docs suggest that I should pass `sslmode='require'` when connecting to the database, as Heroku requires SSL for all connections: https://devcenter.heroku.com/articles/heroku-postgresql#heroku-postgres-ssl
Something like:
```python
import os
import psycopg2

DATABASE_URL = os.environ['DATABASE_URL']
conn = psycopg2.connect(DATABASE_URL, sslmode='require')
```
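For peewee, the same URL can be split into keyword arguments, since `PostgresqlDatabase` forwards extra kwargs to the underlying driver. A minimal stdlib-only sketch (the `pg_params` helper name is my own, hypothetical):

```python
from urllib.parse import urlparse

def pg_params(database_url, sslmode='require'):
    """Parse a postgres:// URL into keyword args for the driver."""
    url = urlparse(database_url)
    return {
        'user': url.username,
        'password': url.password,
        'host': url.hostname,
        'port': url.port,
        'sslmode': sslmode,
    }

params = pg_params('postgres://user:secret@host:5432/my_db')
print(params['sslmode'])  # require
# Then, presumably: PostgresqlDatabase('my_db', **params)
```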
Where in my python code does peewee make a DB connection, and how can I tell peewee to use ssl ... and what do I need to do for that to work.
Thanks!
Evan | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1874/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1873 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1873/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1873/comments | https://api.github.com/repos/coleifer/peewee/issues/1873/events | https://github.com/coleifer/peewee/issues/1873 | 415,328,107 | MDU6SXNzdWU0MTUzMjgxMDc= | 1,873 | Unexpected results when using insert_many | {
"login": "tornikenats",
"id": 1811195,
"node_id": "MDQ6VXNlcjE4MTExOTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/1811195?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tornikenats",
"html_url": "https://github.com/tornikenats",
"followers_url": "https://api.github.com/users/tornikenats/followers",
"following_url": "https://api.github.com/users/tornikenats/following{/other_user}",
"gists_url": "https://api.github.com/users/tornikenats/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tornikenats/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tornikenats/subscriptions",
"organizations_url": "https://api.github.com/users/tornikenats/orgs",
"repos_url": "https://api.github.com/users/tornikenats/repos",
"events_url": "https://api.github.com/users/tornikenats/events{/privacy}",
"received_events_url": "https://api.github.com/users/tornikenats/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Yes, this is expected. You need to provide dictionaries with the *same* keys and values (or tuples of the same length). When you issue a SQL bulk insert, you list the expected columns, then the tuples of values, e.g.:\r\n\r\n```sql\r\ninsert into tbl_name (cola, colb, colc) values (?, ?, ?), (?, ?, ?), (?, ?, ?)\r\n```\r\n\r\nSo you list that you're inserting into \"tbl_name\" (columns \"a\", \"b\" and \"c\"). You need to provide values for all 3 columns for each row inserted.\r\n\r\nIf columns are not explicitly specified when doing a bulk insert with Peewee, then Peewee inspects the first row being inserted to derive the list of columns being inserted. Your second example, Peewee would assume you are inserting values for columns \"email\" and \"first_name\".\r\n\r\nSo you're trying to do:\r\n\r\n```sql\r\ninsert into testaccount (email, first_name) values (?, ?), (?, <error! no value>)\r\n```\r\n\r\nWhich doesn't work.\r\n\r\nJust make sure your dictionaries all contain the same fields *or* explicitly specify the fields you wish to insert.\r\n\r\nhttp://docs.peewee-orm.com/en/latest/peewee/api.html#Model.insert_many"
] | 2019-02-27T21:09:50 | 2019-02-27T21:36:14 | 2019-02-27T21:36:14 | NONE | null | When you try to insert_many, different first objects lead to different results. See below:
```python
#models.py
from peewee import (
MySQLDatabase,
Model,
CharField
)
from playhouse.db_url import connect
db_url = 'mysql://root@localhost/test'
db = connect(db_url)
class TestAccount(Model):
email = CharField()
first_name = CharField(null=True, default=None)
class Meta:
database = db
```
```python
#app.py
from models import TestAccount, db
# Result: all first_name are set to null
#1 [email protected] NULL
#2 [email protected] NULL
users_1 = [
{
'email': '[email protected]'
},
{
'email': '[email protected]',
'first_name': 'John'
},
]
# Result: ValueError: Missing value for "<CharField: TestAccount.first_name>".
users_2 = [
{
'email': '[email protected]',
'first_name': 'John'
},
{
'email': '[email protected]'
},
]
db.create_tables([TestAccount])
TestAccount.insert_many(users_1).execute()
TestAccount.insert_many(users_2).execute()
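# A hedged workaround (mine, not from the report above): normalize every
# row dict to the same keys before inserting, so column inference from the
# first row cannot miss fields.
def normalize_rows(rows, keys=('email', 'first_name')):
    return [{k: row.get(k) for k in keys} for row in rows]

# TestAccount.insert_many(normalize_rows(users_2)).execute()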
``` | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1873/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1873/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1872 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1872/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1872/comments | https://api.github.com/repos/coleifer/peewee/issues/1872/events | https://github.com/coleifer/peewee/issues/1872 | 415,131,019 | MDU6SXNzdWU0MTUxMzEwMTk= | 1,872 | [PostgreSQL - Connection pooling] No results to fetch at restart of application | {
"login": "edebernis",
"id": 2584209,
"node_id": "MDQ6VXNlcjI1ODQyMDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2584209?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edebernis",
"html_url": "https://github.com/edebernis",
"followers_url": "https://api.github.com/users/edebernis/followers",
"following_url": "https://api.github.com/users/edebernis/following{/other_user}",
"gists_url": "https://api.github.com/users/edebernis/gists{/gist_id}",
"starred_url": "https://api.github.com/users/edebernis/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/edebernis/subscriptions",
"organizations_url": "https://api.github.com/users/edebernis/orgs",
"repos_url": "https://api.github.com/users/edebernis/repos",
"events_url": "https://api.github.com/users/edebernis/events{/privacy}",
"received_events_url": "https://api.github.com/users/edebernis/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"So if you are exiting/terminating your Python process, there's no need to call `close_idle()` because the process will exit anyways, closing everything.\r\n\r\nI'm not sure what's going on as far as when the subprocesses are getting forked (before or after the connection pool is initialized? hopefully before). Or what the shutdown/restart behavior is... Or what kinds of queries would trigger the errors you're observing... So many questions. Could this just a timing issue where your db isn't ready when you're attempting to connect/query?\r\n\r\nIf you can reproduce this reliably without requiring docker or uwsgi or bottle web framework (e.g., just peewee and postgres) I'll reopen."
] | 2019-02-27T13:43:13 | 2019-02-27T21:52:16 | 2019-02-27T21:52:16 | NONE | null | Hello,
I am using Peewee in combination with Bottle web framework, uWSGI and Docker.
I have 10 uWSGI workers running in a Docker container. Each worker connects to a PostgreSQL database using connection pooling :
```
from playhouse.pool import *
database = PooledPostgresqlDatabase(None)
database.init(db_name, max_connections=2, stale_timeout=300, **db_args)
```
Each worker opens a DB connection before each request and closes it afterwards:
```
@webapp.hook('before_request')
def before_request():
database.connect()
@webapp.hook('after_request')
def after_request():
if not database.is_closed():
database.close()
```
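peewee also ships `Database.connection_context()`, which guarantees the connection is returned even if the request handler raises. The ensure-close pattern itself, shown with the standard library's sqlite3 module as a testable stand-in (assumption: purely illustrative, not your Postgres setup):

```python
import sqlite3
from contextlib import closing

# closing() releases the connection even on exceptions, mirroring what a
# per-request connection_context() gives you with a peewee database.
with closing(sqlite3.connect(':memory:')) as conn:
    conn.execute('CREATE TABLE t (x INTEGER)')
    conn.execute('INSERT INTO t VALUES (1)')
    count = conn.execute('SELECT COUNT(*) FROM t').fetchone()[0]
print(count)  # 1
```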
I am also using an exit handler to close all idle connections when gracefully stopping a uWSGI worker:
```
import atexit
atexit.register(exit_handler, database)
def exit_handler(database):
database.close_idle()
sys.exit(0)
```
The issue I have is when restarting the Docker container. For approximately 5 to 10 minutes, I get a lot of the following PostgreSQL errors (`psycopg2.ProgrammingError: no results to fetch`) on many peewee requests (not all).
And on the database side, I get many warnings like these:
```
WARNING: there is no transaction in progress
WARNING: there is already a transaction in progress
```
However, when looking at active connections to the DB from the app using netstat, all connections are correctly dropped when stopping the container.
Any ideas about this one?
Thanks for your help ! | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1872/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1871 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1871/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1871/comments | https://api.github.com/repos/coleifer/peewee/issues/1871/events | https://github.com/coleifer/peewee/issues/1871 | 414,240,532 | MDU6SXNzdWU0MTQyNDA1MzI= | 1,871 | Dropping foreign key constraints | {
"login": "arel",
"id": 153497,
"node_id": "MDQ6VXNlcjE1MzQ5Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/153497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arel",
"html_url": "https://github.com/arel",
"followers_url": "https://api.github.com/users/arel/followers",
"following_url": "https://api.github.com/users/arel/following{/other_user}",
"gists_url": "https://api.github.com/users/arel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arel/subscriptions",
"organizations_url": "https://api.github.com/users/arel/orgs",
"repos_url": "https://api.github.com/users/arel/repos",
"events_url": "https://api.github.com/users/arel/events{/privacy}",
"received_events_url": "https://api.github.com/users/arel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"For reference (for anyone googling this) if you want to drop a table in MySQL without regard to its foreign key constraints, you can temporarily [`SET FOREIGN_KEY_CHECKS = 0`](https://stackoverflow.com/a/8538716/2438538). Something like this:\r\n\r\n```python\r\n# disable foreign key checks (lasts for this session only)\r\ndb.execute_sql(\"SET FOREIGN_KEY_CHECKS=0\")\r\n\r\n# drop any tables as usual\r\ndb.drop_tables([ModelA, ModelB])\r\n\r\n# (optionally) re-enable foreign key checks\r\ndb.execute_sql(\"SET FOREIGN_KEY_CHECKS=1\")\r\n```",
"This is a consequence of using an ill-advised database schema.\r\n\r\nPeewee's migrations module (`playhouse.migrate`; ([docs](http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#migrate)) contains helpers for dropping foreign-key constraints, but in the case of a circular FK you'll need to disable the constraint check like you have in your example."
] | 2019-02-25T18:21:13 | 2019-02-27T21:24:17 | 2019-02-27T21:24:17 | NONE | null | I have a situation with [circular foreign key dependencies](http://docs.peewee-orm.com/en/latest/peewee/models.html#circular-foreign-key-dependencies), and I followed the approach of using a `DeferredForeignKey`. I am using MySQL.
As mentioned in the docs, in order to use this approach I need to explicitly create foreign key constraints:
> To create the tables and the foreign-key constraint, you can use the `SchemaManager.create_foreign_key()` method to create the constraint after creating the tables
However, once they are created, _how do I drop foreign key constraints in peewee?_ Does it make sense to add a `drop_foreign_key()` method? Otherwise, if I drop a table during development, for example, I get a `peewee.InternalError`:
```
peewee.InternalError: (3730, "Cannot drop table 'image' referenced by a foreign key constraint 'fk_user_profile_image_id_refs_image' on table 'user'.")
```
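Absent a built-in `drop_foreign_key()`, a workable MySQL-side approach is to wrap the drops between `FOREIGN_KEY_CHECKS` toggles, as suggested in the comments. A small helper that only builds the statement list (helper name is my own; executing each statement through `db.execute_sql` is an assumption):

```python
def drop_tables_sql(table_names):
    """Build MySQL statements that drop tables regardless of FK constraints."""
    stmts = ['SET FOREIGN_KEY_CHECKS=0']
    stmts += ['DROP TABLE IF EXISTS `%s`' % name for name in table_names]
    stmts.append('SET FOREIGN_KEY_CHECKS=1')
    return stmts

for sql in drop_tables_sql(['image', 'user']):
    print(sql)
```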
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1871/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1870 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1870/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1870/comments | https://api.github.com/repos/coleifer/peewee/issues/1870/events | https://github.com/coleifer/peewee/issues/1870 | 413,927,519 | MDU6SXNzdWU0MTM5Mjc1MTk= | 1,870 | _in_use size exceed Maximum connections despite calling db.close_idle(), db.manual_close() or db.close_stale(60) | {
"login": "ernest-andela",
"id": 46436012,
"node_id": "MDQ6VXNlcjQ2NDM2MDEy",
"avatar_url": "https://avatars.githubusercontent.com/u/46436012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ernest-andela",
"html_url": "https://github.com/ernest-andela",
"followers_url": "https://api.github.com/users/ernest-andela/followers",
"following_url": "https://api.github.com/users/ernest-andela/following{/other_user}",
"gists_url": "https://api.github.com/users/ernest-andela/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ernest-andela/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ernest-andela/subscriptions",
"organizations_url": "https://api.github.com/users/ernest-andela/orgs",
"repos_url": "https://api.github.com/users/ernest-andela/repos",
"events_url": "https://api.github.com/users/ernest-andela/events{/privacy}",
"received_events_url": "https://api.github.com/users/ernest-andela/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"`is_closed` is implemented on `peewee.Database` in `peewee.py`. What it does is check whether there is a connection already open (for the current thread, if you're using multiple threads).\r\n\r\nThe logic of `manual_close` could be described in this way:\r\n\r\n1. does the current thread have a connection open? If not, there's nothing to do - return `False`\r\n2. get a reference to the currently-open connection\r\n3. remove the connection from the list of connections currently-in-use by the pool\r\n4. call the `Database.close()` method, which resets the connection state and internally calls `._close()`, which is overridden by the pool subclass.\r\n5. (inside `_close()`) this is overridden by the pool subclass to recycle the conn, which will not happen as the conn is no longer marked as being in-use\r\n6. call `_close()` again, this time with a flag indicating that yes -- we really want to close the connection -- at which point it will be closed.\r\n\r\nThe symptoms you are describing do not make a lot of sense to me... Are you using a multi-threaded application? If so, then `is_closed()` is with respect to the current thread, and maybe you're calling it from a different thread.\r\n\r\nSomething is definitely weird in your app or your setup. For instance, I use the peewee pooled database on a handful of flask apps and it is quite well-behaved -- doesn't leak conns, etc. Your stale timeout of 1 second seems odd, but shouldn't be causing the issue.\r\n\r\nYou can always turn on logging:\r\n\r\n```python\r\nimport logging\r\nlogger = logging.getLogger('peewee.pool')\r\nlogger.addHandler(logging.StreamHandler())\r\nlogger.setLevel(logging.DEBUG)\r\n```\r\n\r\nAlso note the pool is unit-tested.\r\n\r\nIf you can post some more information indicating this is a bug, or a minimal script that reproduces the issue, I will gladly reopen the issue."
] | 2019-02-25T04:26:48 | 2019-02-28T04:53:52 | 2019-02-28T04:53:52 | NONE | null | Calling the manual_close() method always return False because of this line in the code https://github.com/coleifer/peewee/blob/master/playhouse/pool.py#L194 . Where can l find the method is_closed() on this line https://github.com/coleifer/peewee/blob/master/playhouse/pool.py#L194?
**Below is my initialization code for the connection**
```db = PooledPostgresqlDatabase(max_connections=30, timeout=300, stale_timeout=1, **db_args)```
**However, I am having the following issues:**
1. db.is_closed() is always True
2. self._in_use quickly grows to 30, causing a maximum-connections error
**Calling any of the following does not help:**
1. db.close_idle()
2. db.manual_close()
3. db.close_stale(60)
I am connecting to the database in **before_request** and closing it in **teardown_request**.
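For reference, the pool's bookkeeping can be mimicked in a few lines. A tiny pure-Python analogue (no peewee; all names are mine) shows how `_in_use` hits the ceiling whenever a checkout is never matched by a close:

```python
class TinyPool:
    """Toy stand-in for a pooled database's checkout/checkin bookkeeping."""
    def __init__(self, max_connections=2):
        self.max_connections = max_connections
        self._in_use = set()
        self._idle = []
        self._counter = 0

    def connect(self):
        if self._idle:
            conn = self._idle.pop()
        elif len(self._in_use) + len(self._idle) < self.max_connections:
            self._counter += 1
            conn = self._counter  # stand-in for a real DB connection
        else:
            raise ValueError('Exceeded maximum connections.')
        self._in_use.add(conn)
        return conn

    def close(self, conn):
        self._in_use.discard(conn)
        self._idle.append(conn)

pool = TinyPool(max_connections=2)
a = pool.connect()
b = pool.connect()  # pool is now full; a third connect() would raise
pool.close(a)       # checkin makes the slot reusable again
c = pool.connect()  # recycles the connection that was checked back in
```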
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1870/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1869 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1869/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1869/comments | https://api.github.com/repos/coleifer/peewee/issues/1869/events | https://github.com/coleifer/peewee/pull/1869 | 413,801,880 | MDExOlB1bGxSZXF1ZXN0MjU1NjcxODg4 | 1,869 | Travis CI recommends removing the sudo tag | {
"login": "cclauss",
"id": 3709715,
"node_id": "MDQ6VXNlcjM3MDk3MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/3709715?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cclauss",
"html_url": "https://github.com/cclauss",
"followers_url": "https://api.github.com/users/cclauss/followers",
"following_url": "https://api.github.com/users/cclauss/following{/other_user}",
"gists_url": "https://api.github.com/users/cclauss/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cclauss/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cclauss/subscriptions",
"organizations_url": "https://api.github.com/users/cclauss/orgs",
"repos_url": "https://api.github.com/users/cclauss/repos",
"events_url": "https://api.github.com/users/cclauss/events{/privacy}",
"received_events_url": "https://api.github.com/users/cclauss/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"So this change completely fucks the 3.8 tests and the solution is to mark it as \"allow failures\"? Nope.",
"OK, I owe you an apology actually, as the psycopg2 driver installation seems to be failing on 3.8 regardless of any other changes. Sorry about that."
] | 2019-02-24T09:33:12 | 2019-02-27T21:40:31 | 2019-02-27T21:13:20 | NONE | null | [Travis are now recommending removing the __sudo__ tag](https://blog.travis-ci.com/2018-11-19-required-linux-infrastructure-migration).
"_If you currently specify __sudo: false__ in your __.travis.yml__, we recommend removing that configuration_" | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1869/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/1869",
"html_url": "https://github.com/coleifer/peewee/pull/1869",
"diff_url": "https://github.com/coleifer/peewee/pull/1869.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/1869.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/1868 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1868/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1868/comments | https://api.github.com/repos/coleifer/peewee/issues/1868/events | https://github.com/coleifer/peewee/issues/1868 | 413,779,102 | MDU6SXNzdWU0MTM3NzkxMDI= | 1,868 | a | {
"login": "efake2002",
"id": 37454883,
"node_id": "MDQ6VXNlcjM3NDU0ODgz",
"avatar_url": "https://avatars.githubusercontent.com/u/37454883?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/efake2002",
"html_url": "https://github.com/efake2002",
"followers_url": "https://api.github.com/users/efake2002/followers",
"following_url": "https://api.github.com/users/efake2002/following{/other_user}",
"gists_url": "https://api.github.com/users/efake2002/gists{/gist_id}",
"starred_url": "https://api.github.com/users/efake2002/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/efake2002/subscriptions",
"organizations_url": "https://api.github.com/users/efake2002/orgs",
"repos_url": "https://api.github.com/users/efake2002/repos",
"events_url": "https://api.github.com/users/efake2002/events{/privacy}",
"received_events_url": "https://api.github.com/users/efake2002/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think that I have a solution to your problem. But I warn you, I'm quite a inexperienced programmer and this is monkey patching (I know, I know, shame on me, but it works!). So take this solution with a grain of salt.\r\n\r\nI have implemented this months ago, so I don't really know what it is doing under the hood anymore. The trick is that I have implemented a helper called `metaclass_resolver` to get rid of the `metaclass conflict` and inherit from N metaclasses with no problems.\r\n\r\n# The code\r\n\r\n```\r\ndef metaclass_resolver(*classes):\r\n metaclass = tuple(set(type(cls) for cls in classes))\r\n metaclass = (\r\n metaclass[0]\r\n if len(metaclass) == 1\r\n else type(\"_\".join(mcls.__name__ for mcls in metaclass), metaclass, {})\r\n ) # class M_C\r\n return metaclass(\"_\".join(cls.__name__ for cls in classes), classes, {})\r\n```\r\n\r\n### Usage\r\n\r\n```\r\nPython 3.7.2 (default, Dec 26 2018, 08:50:25) \r\n[GCC 6.4.0] on linux\r\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\r\n>>> from util.metaclass_resolver import metaclass_resolver\r\n>>> from peewee import *\r\n>>> \r\n>>> from abc import ABC\r\n>>> \r\n>>> class dummy(metaclass_resolver(ABC, Model)): pass\r\n...\r\n>>> issubclass(dummy, ABC)\r\nTrue\r\n>>> issubclass(dummy, Model)\r\nTrue\r\n```\r\n\r\n# Real Usage Case\r\n\r\nTo show that this (somewhat) works, I will share how this is working in my current system.\r\n\r\nEvery table `table_name` on my database has a sibling called `table_name_history` to keep track of all the changes on the table. 
To do this, I'm using a simple inheritance to create the columns:\r\n\r\n```\r\nclass BaseModel(Model):\r\n ...\r\n\r\nclass BaseHistory(Model):\r\n ...\r\n\r\nclass Dummy(BaseModel):\r\n dummy_field = DummyPeeweeField()\r\n\r\nclass DummyHistory(Dummy, BaseHistory):\r\n dummy = ForeingKeyField(Dummy)\r\n```\r\n\r\nThe problem was that I couldn't inherit unique fields, as they would need to be not unique in the history table.\r\n\r\nI resorted to monkey patching a metaclass to remove the unique constraint when the class was beeing loaded.\r\n\r\n```\r\nclass UniqueFieldFixMeta(type):\r\n def __init__(cls, name, bases, clsdict):\r\n if (\r\n name != \"UniqueFieldFix\"\r\n and name != \"Model_UniqueFieldFix\"\r\n and name != \"BaseHistory\"\r\n ):\r\n print(\"Fixing history unique columns for\")\r\n print(cls)\r\n for key, value in cls._meta.fields.items():\r\n value.unique = False\r\n super(UniqueFieldFixMeta, cls).__init__(name, bases, clsdict)\r\n\r\n\r\nclass UniqueFieldFix(metaclass=UniqueFieldFixMeta):\r\n pass\r\n```\r\n\r\nNow, the `BaseHistory` looks like this:\r\n\r\n```\r\nclass BaseHistory(metaclass_resolver(Model, UniqueFieldFix)):\r\n ...\r\n```\r\n\r\nAnd it works wonderfully.\r\n\r\n### Please, More Salt Sir\r\n\r\nAgain, I don't really know if this is what you are looking for. If it is, you need to be comfortable with some monkey patching. I'm just a script kid.",
"a",
"What happened to the issue? @efake2002",
"The original comment:\r\n\r\n-------------------------------\r\n\r\nHi.\r\n\r\nFirst of all, thanks for this amazing library. It's helped and simplified our work a lot.\r\n\r\nThis is closely related to #1722 but not quite the same.\r\n\r\nIn #1722, @coleifer told us that he would not be implementing support for the abstract base class module that Python 3 provides built-in. I respect your decision to not implement it. It's your library and your personal choice.\r\n\r\nCurrently, if we try to inherit from both ABC and Peewee's Model class like so:\r\n\r\nclass AbstractModel(ABC, peewee.Model):\r\nit throws this error:\r\n\r\nTypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases\r\nAre there any work arounds for this on our side? My team really needs this functionality to define abstract models with the orm. We have a ton of models and we need some way to distinguish between the abstract and the concrete models we're creating. Therefore, is there any way to implement this on our side? What is the easiest way of doing this in relation to peewee?\r\n\r\nI know that Django provides this functionality and I'd like to implement something similar as well. Something like https://docs.djangoproject.com/en/2.1/topics/db/models/#meta-inheritance would be amazing for us. Please let me know how we can accomplish this.\r\n\r\nSome of my team members are considering moving to some other orm like Pony because of this issue, but we really love this library and would prefer to stay if there was some work around for this. Please let me know.\r\n\r\nThanks for your time and any help is much appreciated!",
"> Some of my team members are considering moving to some other orm like Pony because of this issue, but we really love this library and would prefer to stay if there was some work around for this. Please let me know.\r\n\r\nI love it when people say that they're \"thinking about switching to <other project>\" to try and get my attention. By all means, have fun with a magical ORM that decompiles generator expression bytecode and is 99% magic!",
"(I know, everyone hates comments on closed issues, but)\r\n\r\nIf anyone gets directed here via Google, abstraction of peewee's Model class _is_ possible. You just have to use the `ABCMeta` class vs an `ABC` subclass.\r\n\r\nSo you can do something like this:\r\n\r\n```\r\nfrom abc import ABCMeta, abstractmethod\r\nfrom peewee import Model\r\n\r\nclass BaseModel(Model):\r\n __metaclass__ = ABCMeta\r\n\r\n @abstractmethod\r\n def some_method(self):\r\n pass\r\n\r\n ...\r\n\r\nclass SecondBaseModel(BaseModel):\r\n __metaclass__ = ABCMeta\r\n\r\n @abstractmethod\r\n def some_other_method(self):\r\n pass \r\n\r\n ...\r\n\r\n# ^ etc\r\n\r\nclass Dummy(SecondBaseModel):\r\n dummy_field = DummyPeeweeField()\r\n\r\n def some_method(self):\r\n ...\r\n\r\n def some_other_method(self):\r\n ...\r\n\r\n```\r\n\r\nIDE's don't seem to like this (lol) and you may have to re-state the abstract functions down your sub classes. However, this definitely runs!\r\n\r\nEdit: Formatting / grammar"
] | 2019-02-24T04:05:30 | 2022-03-03T19:23:09 | 2019-02-25T00:06:46 | NONE | null | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1868/timeline | null | completed | null | null |
|
https://api.github.com/repos/coleifer/peewee/issues/1867 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1867/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1867/comments | https://api.github.com/repos/coleifer/peewee/issues/1867/events | https://github.com/coleifer/peewee/issues/1867 | 413,438,113 | MDU6SXNzdWU0MTM0MzgxMTM= | 1,867 | CentOS 6 support | {
"login": "goooroooX",
"id": 27283908,
"node_id": "MDQ6VXNlcjI3MjgzOTA4",
"avatar_url": "https://avatars.githubusercontent.com/u/27283908?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/goooroooX",
"html_url": "https://github.com/goooroooX",
"followers_url": "https://api.github.com/users/goooroooX/followers",
"following_url": "https://api.github.com/users/goooroooX/following{/other_user}",
"gists_url": "https://api.github.com/users/goooroooX/gists{/gist_id}",
"starred_url": "https://api.github.com/users/goooroooX/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/goooroooX/subscriptions",
"organizations_url": "https://api.github.com/users/goooroooX/orgs",
"repos_url": "https://api.github.com/users/goooroooX/repos",
"events_url": "https://api.github.com/users/goooroooX/events{/privacy}",
"received_events_url": "https://api.github.com/users/goooroooX/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Those test failures are likely due to your using an absolutely ancient version of sqlite. This ancient version of sqlite does not seem to support bulk insert. That is the problem. Upgrade sqlite to a newer version."
] | 2019-02-22T14:52:50 | 2019-02-22T15:27:24 | 2019-02-22T15:27:24 | NONE | null | Hi,
It seems the peewee module lacks CentOS 6 support. I have tried it in a Docker container and in normal CentOS 6.8/6.9 installations, and in all cases I get the same exception (Python 2.6/2.7, x86/x64):
` File "/app/base.py", line 233, in feed
LogSourceModel.insert_many(batch).execute()
File "/app/libs/peewee.py", line 1625, in inner
return method(self, database, *args, **kwargs)
File "/app/libs/peewee.py", line 1696, in execute
return self._execute(database)
File "/app/libs/peewee.py", line 2358, in _execute
return super(Insert, self)._execute(database)
File "/app/libs/peewee.py", line 2121, in _execute
cursor = database.execute(self)
File "/app/libs/peewee.py", line 2727, in execute
return self.execute_sql(sql, params, commit=commit)
File "/app/libs/peewee.py", line 2721, in execute_sql
self.commit()
File "/app/libs/peewee.py", line 2512, in __exit__
reraise(new_type, new_type(*exc_args), traceback)
File "/app/libs/peewee.py", line 2714, in execute_sql
cursor.execute(sql, params or ())
OperationalError: near ",": syntax error`
CentOS 7 is OK.
I've executed the built-in tests, results are attached; they also contain the same exceptions.
[peewee_test_results.txt](https://github.com/coleifer/peewee/files/2894498/peewee_test_results.txt)
Can you please comment on this? Is there any easy way to make it work on CentOS 6?
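Edit: for anyone else hitting this on CentOS 6: the stock sqlite there predates multi-row `VALUES` (added in SQLite 3.7.11), which is the SQL shape `insert_many()` generates, so the parser chokes on the comma. A rough, stdlib-only sketch of the version check and a portable fallback (the table and column names here are made up):

```python
import sqlite3

# Multi-row inserts like "INSERT INTO t VALUES (1), (2)" need SQLite >= 3.7.11;
# CentOS 6 ships an older library, which is why the bulk-insert SQL fails to parse.
needs_fallback = sqlite3.sqlite_version_info < (3, 7, 11)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log_source (name TEXT)")
rows = [("a",), ("b",), ("c",)]

# Portable fallback: issue one-row inserts inside a single transaction.
# executemany() works on any SQLite version and is still reasonably fast.
with conn:
    conn.executemany("INSERT INTO log_source (name) VALUES (?)", rows)

print(conn.execute("SELECT COUNT(*) FROM log_source").fetchone()[0])  # 3
```

On a modern SQLite the multi-row `VALUES` statement parses fine, so a fallback like this is only needed on old libraries; upgrading sqlite (as suggested above) is the cleaner fix.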
Thank you! | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1867/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1866 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1866/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1866/comments | https://api.github.com/repos/coleifer/peewee/issues/1866/events | https://github.com/coleifer/peewee/pull/1866 | 413,313,778 | MDExOlB1bGxSZXF1ZXN0MjU1MzI1MTk4 | 1,866 | Removed incorrect import. ManyToManyField is in peewee module | {
"login": "oscarcp",
"id": 462631,
"node_id": "MDQ6VXNlcjQ2MjYzMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/462631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/oscarcp",
"html_url": "https://github.com/oscarcp",
"followers_url": "https://api.github.com/users/oscarcp/followers",
"following_url": "https://api.github.com/users/oscarcp/following{/other_user}",
"gists_url": "https://api.github.com/users/oscarcp/gists{/gist_id}",
"starred_url": "https://api.github.com/users/oscarcp/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/oscarcp/subscriptions",
"organizations_url": "https://api.github.com/users/oscarcp/orgs",
"repos_url": "https://api.github.com/users/oscarcp/repos",
"events_url": "https://api.github.com/users/oscarcp/events{/privacy}",
"received_events_url": "https://api.github.com/users/oscarcp/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2019-02-22T09:25:08 | 2019-02-22T22:05:21 | 2019-02-22T22:05:21 | CONTRIBUTOR | null | Sorry, this line in the docs has been bugging me for years. It's been a while since ManyToManyField has been moved to the peewee core and is not located in the playhouse module anymore. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1866/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1866/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/1866",
"html_url": "https://github.com/coleifer/peewee/pull/1866",
"diff_url": "https://github.com/coleifer/peewee/pull/1866.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/1866.patch",
"merged_at": "2019-02-22T22:05:21"
} |
https://api.github.com/repos/coleifer/peewee/issues/1865 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1865/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1865/comments | https://api.github.com/repos/coleifer/peewee/issues/1865/events | https://github.com/coleifer/peewee/issues/1865 | 412,727,563 | MDU6SXNzdWU0MTI3Mjc1NjM= | 1,865 | Update multiple rows by id | {
"login": "gurland",
"id": 30530987,
"node_id": "MDQ6VXNlcjMwNTMwOTg3",
"avatar_url": "https://avatars.githubusercontent.com/u/30530987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gurland",
"html_url": "https://github.com/gurland",
"followers_url": "https://api.github.com/users/gurland/followers",
"following_url": "https://api.github.com/users/gurland/following{/other_user}",
"gists_url": "https://api.github.com/users/gurland/gists{/gist_id}",
"starred_url": "https://api.github.com/users/gurland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/gurland/subscriptions",
"organizations_url": "https://api.github.com/users/gurland/orgs",
"repos_url": "https://api.github.com/users/gurland/repos",
"events_url": "https://api.github.com/users/gurland/events{/privacy}",
"received_events_url": "https://api.github.com/users/gurland/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"If you're using Postgres or a recent-ish sqlite, you can try to use the `ValuesList` with a common table expression in an UPDATE query. Postgres also supports UPDATE FROM which could also work. You can see an example here, as I've added a testcase: c1b6e5f8c5a2b6799f24b2df8788f43a0ba8811d"
] | 2019-02-21T03:08:03 | 2019-02-21T16:56:25 | 2019-02-21T16:56:25 | NONE | null | I have a list of dicts `[{"pk": 1, "value": 12}, ...]`; how can I update all rows by their `id`?
The only way I found is:
```
m = MyModel.get(id=pk)
m.value = value
m.save()
```
But that is very slow | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1865/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1864 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1864/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1864/comments | https://api.github.com/repos/coleifer/peewee/issues/1864/events | https://github.com/coleifer/peewee/issues/1864 | 412,382,656 | MDU6SXNzdWU0MTIzODI2NTY= | 1,864 | UnboundLocalError when doing join().join_from() | {
"login": "Defman21",
"id": 7100645,
"node_id": "MDQ6VXNlcjcxMDA2NDU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7100645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Defman21",
"html_url": "https://github.com/Defman21",
"followers_url": "https://api.github.com/users/Defman21/followers",
"following_url": "https://api.github.com/users/Defman21/following{/other_user}",
"gists_url": "https://api.github.com/users/Defman21/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Defman21/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Defman21/subscriptions",
"organizations_url": "https://api.github.com/users/Defman21/orgs",
"repos_url": "https://api.github.com/users/Defman21/repos",
"events_url": "https://api.github.com/users/Defman21/events{/privacy}",
"received_events_url": "https://api.github.com/users/Defman21/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Nevermind, I made a mistake in my code and `ModelAModelB` was not `ModelAModelB`."
] | 2019-02-20T11:40:16 | 2019-02-20T14:05:09 | 2019-02-20T14:05:09 | NONE | null | I have a query that looks like this:
```
ModelAModelB
.select(ModelA, ModelB)
.join(ModelC, on=(ModelC.some_field == ModelAModelB.some_field))
.join_from(ModelAModelB, ModelA) # Exception occurs on this line
.join_from(ModelAModelB, ModelB)
```
For some reason, today I got an error:
```
2019-02-20T11:33:07.005118570Z File "/usr/local/lib/python3.7/site-packages/peewee.py", line 6061, in join_from
2019-02-20T11:33:07.005130799Z return self.join(dest, join_type, on, src, attr)
2019-02-20T11:33:07.005142575Z File "/usr/local/lib/python3.7/site-packages/peewee.py", line 606, in inner
2019-02-20T11:33:07.005154599Z method(clone, *args, **kwargs)
2019-02-20T11:33:07.005166359Z File "/usr/local/lib/python3.7/site-packages/peewee.py", line 6050, in join
2019-02-20T11:33:07.005178361Z on, attr, constructor = self._normalize_join(src, dest, on, attr)
2019-02-20T11:33:07.005190257Z File "/usr/local/lib/python3.7/site-packages/peewee.py", line 6008, in _normalize_join
2019-02-20T11:33:07.005202328Z return (on, attr, constructor)
2019-02-20T11:33:07.005214005Z UnboundLocalError: local variable 'constructor' referenced before assignment
```
Is there something I'm doing wrong? I thought about changing `join` to `join_from(ModelAModelB, ModelC)`, but I'm not sure that's the reason for this error. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1864/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1864/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1863 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1863/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1863/comments | https://api.github.com/repos/coleifer/peewee/issues/1863/events | https://github.com/coleifer/peewee/issues/1863 | 411,606,754 | MDU6SXNzdWU0MTE2MDY3NTQ= | 1,863 | TypeError: type str doesn't define __round__ method | {
"login": "faulander",
"id": 38588197,
"node_id": "MDQ6VXNlcjM4NTg4MTk3",
"avatar_url": "https://avatars.githubusercontent.com/u/38588197?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/faulander",
"html_url": "https://github.com/faulander",
"followers_url": "https://api.github.com/users/faulander/followers",
"following_url": "https://api.github.com/users/faulander/following{/other_user}",
"gists_url": "https://api.github.com/users/faulander/gists{/gist_id}",
"starred_url": "https://api.github.com/users/faulander/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/faulander/subscriptions",
"organizations_url": "https://api.github.com/users/faulander/orgs",
"repos_url": "https://api.github.com/users/faulander/repos",
"events_url": "https://api.github.com/users/faulander/events{/privacy}",
"received_events_url": "https://api.github.com/users/faulander/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The error occurs because you're passing a string as the value of a `TimestampField`. I'm not sure where your data is coming from or what form it is in, but you should probably convert it to either a python `datetime.datetime` object or an integer/float timestamp."
] | 2019-02-18T18:53:16 | 2019-02-18T19:02:03 | 2019-02-18T19:02:02 | NONE | null | Class Definition:
```
class Update(Model):
date = TimestampField()
model = TextField()
provider = TextField()
title = TextField()
filename = TextField()
collected = BooleanField()
class Meta:
database = db
```
Call:
```
Update.get_or_create(
date=up['date'],
model=up['model'],
provider=up['provider'],
title=up['title'],
defaults={'filename': up['filename'], 'collected': 0})
```
results in:
Traceback (most recent call last):
File "/home/harald/.vscode/extensions/ms-python.python-2019.1.0/pythonFiles/ptvsd_launcher.py", line 45, in <module>
main(ptvsdArgs)
File "/home/harald/.vscode/extensions/ms-python.python-2019.1.0/pythonFiles/lib/python/ptvsd/__main__.py", line 348, in main
run()
File "/home/harald/.vscode/extensions/ms-python.python-2019.1.0/pythonFiles/lib/python/ptvsd/__main__.py", line 253, in run_file
runpy.run_path(target, run_name='__main__')
File "/usr/lib/python3.6/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/usr/lib/python3.6/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/usr/lib/python3.6/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/harald/Projects/bc/bc.py", line 34, in <module>
defaults={'filename': up['filename'], 'collected': 0})
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 5677, in get_or_create
return query.get(), False
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 6042, in get
return clone.execute(database)[0]
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 1625, in inner
return method(self, database, *args, **kwargs)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 1696, in execute
return self._execute(database)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 1847, in _execute
cursor = database.execute(self)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 2726, in execute
sql, params = ctx.sql(query).query()
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 567, in sql
return obj.__sql__(self)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 2067, in __sql__
ctx.literal(' WHERE ').sql(self._where)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 567, in sql
return obj.__sql__(self)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 1299, in __sql__
.sql(self.lhs)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 567, in sql
return obj.__sql__(self)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 1299, in __sql__
.sql(self.lhs)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 567, in sql
return obj.__sql__(self)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 1299, in __sql__
.sql(self.lhs)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 567, in sql
return obj.__sql__(self)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 1301, in __sql__
.sql(self.rhs))
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 571, in sql
return self.sql(Value(obj))
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 567, in sql
return obj.__sql__(self)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 1225, in __sql__
return ctx.value(self.value, self.converter)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 585, in value
value = self.state.converter(value)
File "/home/harald/Projects/bc/.venv/lib/python3.6/site-packages/peewee.py", line 4444, in db_value
return int(round(value * self.resolution))
TypeError: type str doesn't define __round__ method | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1863/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1862 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1862/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1862/comments | https://api.github.com/repos/coleifer/peewee/issues/1862/events | https://github.com/coleifer/peewee/issues/1862 | 411,560,674 | MDU6SXNzdWU0MTE1NjA2NzQ= | 1,862 | KeyError / AttributeError | {
"login": "databe",
"id": 10868706,
"node_id": "MDQ6VXNlcjEwODY4NzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/10868706?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/databe",
"html_url": "https://github.com/databe",
"followers_url": "https://api.github.com/users/databe/followers",
"following_url": "https://api.github.com/users/databe/following{/other_user}",
"gists_url": "https://api.github.com/users/databe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/databe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/databe/subscriptions",
"organizations_url": "https://api.github.com/users/databe/orgs",
"repos_url": "https://api.github.com/users/databe/repos",
"events_url": "https://api.github.com/users/databe/events{/privacy}",
"received_events_url": "https://api.github.com/users/databe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You're using \"colon\" to separate the parameters? Is this just a typo in the code you've provided? You need equals signs.\r\n\r\nYour code:\r\n\r\n```python\r\nclass Driver(Model):\r\n SiteId: BigIntegerField(null=True)\r\n DriverId: IntegerField(null=False)\r\n```\r\n\r\nCorrect code:\r\n\r\n```python\r\nclass Driver(Model):\r\n SiteId = BigIntegerField(null=True)\r\n DriverId = IntegerField(null=False)\r\n```",
"Thanks! now it's working fine...my fault!!"
] | 2019-02-18T16:34:45 | 2019-02-18T16:58:23 | 2019-02-18T16:44:54 | NONE | null | Hi, I'm using peewee because I'm retrieving data from an API and want to put it into a Postgres database. First I tried with 'create()', 'insert()' and 'get_or_create()' and everything worked fine. But now I'm working with a bigger data table and I have to use 'insert_many()'. Apart from that function, the rest of the code is similar, but adapted to the table in question. With the data I retrieve first, I create a dictionary with all that data and then I try to insert the data using the 'insert_many()' function. This is my code:
```
class Driver(Model):
SiteId: BigIntegerField(null=True)
DriverId: IntegerField(null=False)
Name: CharField(null=True)
ImageUri: CharField(null=True)
FmDriverId: IntegerField(null=True)
EmployeeNumber: CharField(null=True)
IsSystemDriver: BooleanField(null=True)
MobileNumber: CharField(null=True)
Email: CharField(null=True)
ExtendedDriverId: CharField(null=True)
ExtendedDriverIdType: CharField(null=True)
Country: CharField(null=True)
class Meta:
database = db
primary_key = CompositeKey('DriverId')
db_table = 'tmp_drivers'
def insertoDatos(datos): #the data are correct
try:
print('TRY 1')
db.connect() #creo conexion
db.create_table([Driver]) #also create_tables([Driver])
except:
print('EXCEPT 1')
print('\nAlgo ha ocurrido al crear la tabla.')
try:
print('TRY 2')
with db.atomic():
Driver.insert_many(datos).execute()
db.close()
print('Datos insertados en la base de datos')
traceback.print_exc()
except:
print('EXCEPT 2')
print('\nAlgo ha ocurrido al insertar los datos en la base de datos\n')
traceback.print_exc()
```
using the code above I got this KeyError:
`Traceback (most recent call last):
File "db_drivers.py", line 113, in insertoDatos
Driver.insert_many(datos).execute()
File "C:\Program Files (x86)\Python37-32\lib\site-packages\peewee.py", line 5577, in insert_many
return ModelInsert(cls, insert=rows, columns=fields)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\peewee.py", line 6377, in __init__
self._returning = self.model._meta.get_primary_keys()
File "C:\Program Files (x86)\Python37-32\lib\site-packages\peewee.py", line 5312, in get_primary_keys
for field_name in self.primary_key.field_names])
File "C:\Program Files (x86)\Python37-32\lib\site-packages\peewee.py", line 5312, in <listcomp>
for field_name in self.primary_key.field_names])
KeyError: 'DriverId'`
But if I change the Model, (I only change the definition of the primary key):
```
class Driver(Model):
SiteId: BigIntegerField(null=True)
DriverId: IntegerField(primary_key=True)
Name: CharField(null=True)
ImageUri: CharField(null=True)
FmDriverId: IntegerField(null=True)
EmployeeNumber: CharField(null=True)
IsSystemDriver: BooleanField(null=True)
MobileNumber: CharField(null=True)
Email: CharField(null=True)
ExtendedDriverId: CharField(null=True)
ExtendedDriverIdType: CharField(null=True)
Country: CharField(null=True)
class Meta:
database = db
#primary_key = CompositeKey('DriverId')
db_table = 'tmp_drivers'
```
I got this:
`Traceback (most recent call last):
File "db_drivers.py", line 113, in insertoDatos
Driver.insert_many(datos).execute()
File "C:\Program Files (x86)\Python37-32\lib\site-packages\peewee.py", line 1625, in inner
return method(self, database, *args, **kwargs)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\peewee.py", line 1696, in execute
return self._execute(database)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\peewee.py", line 2358, in _execute
return super(Insert, self)._execute(database)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\peewee.py", line 2119, in _execute
cursor = self.execute_returning(database)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\peewee.py", line 2126, in execute_returning
cursor = database.execute(self)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\playhouse\postgres_ext.py", line 463, in execute
sql, params = ctx.sql(query).query()
File "C:\Program Files (x86)\Python37-32\lib\site-packages\peewee.py", line 567, in sql
return obj.__sql__(self)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\peewee.py", line 2342, in __sql__
self._generate_insert(self._insert, ctx)
File "C:\Program Files (x86)\Python37-32\lib\site-packages\peewee.py", line 2244, in _generate_insert
column = getattr(self.table, key)
AttributeError: type object 'Driver' has no attribute 'SiteId'`
My environment: W10, python 3.7.2, peewee 3.8.2
I need to understand what's happening, so I need help.
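Edit (for future readers): as the comments point out, the colons were the problem. `SiteId: BigIntegerField(null=True)` is a bare annotation, not an assignment, so no class attribute is ever created and peewee's metaclass sees no fields, which explains both tracebacks. A minimal demonstration of the difference (the class and names here are illustrative only, not peewee code):

```python
# A bare annotation in a class body is recorded in __annotations__ but
# never creates an attribute, while an assignment does.
class Example:
    annotated: int      # annotation only -- no attribute is created
    assigned = 42       # assignment -- a real class attribute

print(hasattr(Example, "annotated"))           # False
print(hasattr(Example, "assigned"))            # True
print("annotated" in Example.__annotations__)  # True
```

The annotation's right-hand side is still evaluated (so the `BigIntegerField(...)` objects were constructed), but they were immediately discarded, which is why the failure was silent until query time.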
Thanks in advance!! | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1862/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1861 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1861/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1861/comments | https://api.github.com/repos/coleifer/peewee/issues/1861/events | https://github.com/coleifer/peewee/issues/1861 | 411,203,880 | MDU6SXNzdWU0MTEyMDM4ODA= | 1,861 | 0.5 second connection overhead | {
"login": "willgdjones",
"id": 1719848,
"node_id": "MDQ6VXNlcjE3MTk4NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1719848?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/willgdjones",
"html_url": "https://github.com/willgdjones",
"followers_url": "https://api.github.com/users/willgdjones/followers",
"following_url": "https://api.github.com/users/willgdjones/following{/other_user}",
"gists_url": "https://api.github.com/users/willgdjones/gists{/gist_id}",
"starred_url": "https://api.github.com/users/willgdjones/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/willgdjones/subscriptions",
"organizations_url": "https://api.github.com/users/willgdjones/orgs",
"repos_url": "https://api.github.com/users/willgdjones/repos",
"events_url": "https://api.github.com/users/willgdjones/events{/privacy}",
"received_events_url": "https://api.github.com/users/willgdjones/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Your snippet is opening and closing a database connection at every request. There is an overhead time built into it (You can't escape it). Maybe you should consider altering the way in which you manage connections. Something like:\r\n\r\nOpen a connection at startup.\r\n\r\nOn `@app.before_request` verify that the connection wasn't closed by the database (or by any other reason). If it's closed, open it again, otherwise, recycle the connection.\r\n\r\nYou may be interested in a [pooled connection](http://docs.peewee-orm.com/en/latest/peewee/database.html?highlight=pooled#connection-pooling).\r\n\r\n## Edit\r\nThis is a bad advice, see @coleifer comment for clarification.",
"Hi @NicolasCaous - thanks for those thoughts and link.\r\n\r\nI'll take your first suggestion. Do you know of the helper that checks if the connection is open? I am searching for it now.",
"> Do you know of the helper that checks if the connection is open?\r\n\r\n[This one should do it.](http://docs.peewee-orm.com/en/latest/peewee/api.html#Database.is_closed)",
"The reason for opening and closing connections has to do with thread-safety and being explicit. Most wsgi apps will use either threads or green-threads. Peewee's `Database` implementation, by default, stores connection state in a threadlocal. Opening a connection when the request starts is a good idea because it is explicit -- and it won't fail some arbitrary time later when the application actually tries to query the db. Closing a connection when the request is done is important so that you **release the resources** (which are associated with the thread that made the request). It is important to clean these up!\r\n\r\nNow, 0.5 seconds is a lot of overhead for establishing a db connection...I would suggest you have some serious issues in your network. But it should be on the order of milliseconds. See [this blog post](http://www.craigkerstiens.com/2014/05/22/on-connection-pooling/) for a bit of discussion.\r\n\r\nThat being said, you can avoid the setup cost by using a connection pool. Peewee provides these, and they use the same APIs as the regular non-pooled dbs, so you can drop it in.\r\n\r\nWhen using a connection pool with peewee it is **absolutely imperative** that you `close()` the connection when you're done using it (e.g., in the request teardown). Closing it releases the connection so it can be used again by a subsequent request. If you do not close it, your pool will grow and grow, which is bad.\r\n\r\nSo: follow the best practices. They are there for a reason.\r\n\r\nYou can mitigate the setup costs by using a connection pool, but you still need to call connect and close so the resources are recycled properly!\r\n\r\nLastly, 500ms is too much, I think you probably should investigate your network setup -- somethings very wrong there.",
"Regarding @NicolasCaous comment,\r\n\r\n> On `@app.before_request` verify that the connection wasn't closed by the database (or by any other reason). If it's closed, open it again, otherwise, recycle the connection.\r\n\r\nUnless your WSGI server is single-threaded (ie no python threads, no green threads), this is bad advice.\r\n\r\nAs I said, Peewee stores connection state in a threadlocal. So if a greenlet is spawned to handle each request, it will have no visibility into the connection(s) being used by other greenlets, so it will always appear that the connection has not been opened.\r\n\r\nThe proper fix is to use a connection pool. http://docs.peewee-orm.com/en/latest/peewee/playhouse.html#pool",
"Hi @coleifer - thank you for your detailed comments!\r\n\r\nI will go for connection pooling - it's clear that it is the best solution here.\r\n\r\nRegarding the 500ms overhead, I'm not sure where this could originate from. I'm running a Flask server on Heroku on 2 standard-1x machines that are running 2 web processes and connecting to a Postgres server on Amazon RDS. It could be something to do with that.\r\n\r\n\r\n\r\n\r\n\r\n\r\n\r\n",
"I see that the standard connection parameters look something like this:\r\n\r\n```\r\ndb = PooledPostgresqlExtDatabase(\r\n 'my_app',\r\n max_connections=32,\r\n stale_timeout=300, # 5 minutes.\r\n user='postgres')\r\n```\r\n\r\nIs it possible to use a database connection string here?\r\n\r\nThanks!",
"Yes, you can use a connection string to configure the pooled database connection.\r\n\r\nhttp://docs.peewee-orm.com/en/latest/peewee/playhouse.html#connect\r\n\r\nThe kwargs are parsed from the URL query parameters. \r\n",
"Regarding heroku, are you using your own RDS database or the database provided by heroku?",
"I'm using my own RDS database.",
"Oh no...that's certainly the problem. You're connecting to your database over the public internet, which I don't think is ever a good idea. That introduces a lot of latency as every query has to go out across the internet. Typically you'll deploy your database in the same network as your application server, and you'll never expose it to the public internet."
] | 2019-02-17T16:10:20 | 2019-02-18T12:32:40 | 2019-02-17T17:25:18 | NONE | null | I'm having some issues with request response times ever since I followed the best practices defined in the docs:
Code in docs:
```
from flask import Flask
from peewee import *
database = SqliteDatabase('my_app.db')
app = Flask(__name__)
# This hook ensures that a connection is opened to handle any queries
# generated by the request.
@app.before_request
def _db_connect():
database.connect()
# This hook ensures that the connection is closed when we've finished
# processing the request.
@app.teardown_request
def _db_close(exc):
if not database.is_closed():
database.close()
```
After this change, all handled requests have a 0.5 second overhead in response time.
<img width="574" alt="screenshot 2019-02-17 at 16 04 04" src="https://user-images.githubusercontent.com/1719848/52915570-afc33080-32cd-11e9-9ca5-745154519890.png">
I'm currently investigating and was wondering if this was something that could be somehow caused by Peewee's handling/closing of new connections, or if it is something else.
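Edit: following the pooling suggestion from the comments, the shape of the fix is clearer to me now: keep `connect()`/`close()` in the request hooks, but have them recycle connections instead of re-establishing them. A toy, stdlib-only model of the idea (this is not peewee's actual pool implementation, just an illustration of why pooling removes the per-request setup cost):

```python
import queue

# Toy illustration of what a connection pool buys you: connect() hands back
# an already-open connection when one is idle, and close() returns it to
# the pool instead of tearing it down.
class ToyPool:
    def __init__(self, factory, max_connections=4):
        self._factory = factory
        self._idle = queue.LifoQueue(max_connections)
        self.created = 0  # how many real connections were ever established

    def connect(self):
        try:
            return self._idle.get_nowait()   # reuse: no setup cost
        except queue.Empty:
            self.created += 1
            return self._factory()           # only pay setup when pool is empty

    def close(self, conn):
        self._idle.put_nowait(conn)          # recycle for the next request

pool = ToyPool(lambda: object())
for _ in range(100):                         # 100 sequential "requests"
    conn = pool.connect()
    pool.close(conn)                         # must close, or the pool grows!

print(pool.created)  # 1 -- only the first request paid the connection cost
```

With peewee the equivalent would be swapping the database class for one of the pooled ones in `playhouse.pool` and keeping the same hooks; the crucial part, as noted in the comments, is still calling `close()` so connections are returned to the pool.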
Thanks for your help.
Will | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1861/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1861/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1860 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1860/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1860/comments | https://api.github.com/repos/coleifer/peewee/issues/1860/events | https://github.com/coleifer/peewee/issues/1860 | 411,130,967 | MDU6SXNzdWU0MTExMzA5Njc= | 1,860 | PostgreSQL support for on_conflict with partial indexes | {
"login": "iamyohann",
"id": 1330242,
"node_id": "MDQ6VXNlcjEzMzAyNDI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1330242?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iamyohann",
"html_url": "https://github.com/iamyohann",
"followers_url": "https://api.github.com/users/iamyohann/followers",
"following_url": "https://api.github.com/users/iamyohann/following{/other_user}",
"gists_url": "https://api.github.com/users/iamyohann/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iamyohann/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iamyohann/subscriptions",
"organizations_url": "https://api.github.com/users/iamyohann/orgs",
"repos_url": "https://api.github.com/users/iamyohann/repos",
"events_url": "https://api.github.com/users/iamyohann/events{/privacy}",
"received_events_url": "https://api.github.com/users/iamyohann/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Yes you're right, I didn't think about partial indexes when designing the API. To fix this, I've added a new parameter, `conflict_where`, which should take the expression(s) that comprise the WHERE clause of the partial index.",
"That was quick! Cheers",
"@coleifer \r\nIs there a way to ignore duplicates that contains partial index with multiple conditions. \r\nPostgres Version: psql (10.17 (Ubuntu 10.17-1.pgdg20.04+1))\r\n\r\n**Partial Index** \r\n` \"participant_event_id_user_id_unique\" UNIQUE, btree (event_id, user_id) WHERE deleted = false AND event_id IS NOT NULL AND role <> 'test-admin'::text`.\r\n\r\n**Query**\r\n`insert into participant (role, user_id, created_by, event_id) VALUES ('proctor', 21937, 'com.talview.27585f36-4b68-4bbf-b41f-20a6c4a37d83', 2463) on conflict(user_id, event_id) where (\"role\" != 'test-admin') DO Nothing;`\r\n\r\n**Error**\r\nERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification",
"For it to match the partial index it must contain the same where clause conditions, as I understand it."
] | 2019-02-17T00:24:57 | 2021-07-02T13:00:26 | 2019-02-17T21:10:20 | NONE | null | I'm attempting to upsert multiple records into a PostgreSQL table containing a partial index.
The insert statement looks something like this in plain SQL
```
INSERT INTO abc (a, b, c, value1, value2) VALUES (1, 1, null, '0.2'::numeric, '0.02'::numeric)
ON CONFLICT (b, c) where a is null
DO UPDATE
SET value1 = '0.555'::numeric, value2 = '0.555'::numeric;
```
The table `abc` has a partial unique index on `(b, c) where a is null`.
However `OnConflict` does not appear to support PostgreSQL `index_predicate` as specified in the above query (`ON CONFLICT (b,c) where a is null`).
There is a `where` in `OnConflict` but that seems to be for the `conflict_action` section, not for the `conflict_target` section as specified here https://www.postgresql.org/docs/10/sql-insert.html
This results in an error being raised:
```
peewee.ProgrammingError: there is no unique or exclusion constraint matching the ON CONFLICT specification
```
Is it possible to add support for `index_predicate` with `OnConflict` or am I missing something here?
----------------------------------------
Peewee version = `3.8.2`
PostgreSQL version = `PostgreSQL 10.5 on x86_64-apple-darwin18.0.0, compiled by Apple LLVM version 10.0.0 (clang-1000.10.43.1), 64-bit`
Python version = `3.6.x` | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1860/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1859 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1859/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1859/comments | https://api.github.com/repos/coleifer/peewee/issues/1859/events | https://github.com/coleifer/peewee/issues/1859 | 411,101,597 | MDU6SXNzdWU0MTExMDE1OTc= | 1,859 | Issue with varchar fields in postgres | {
"login": "dorel14",
"id": 10910133,
"node_id": "MDQ6VXNlcjEwOTEwMTMz",
"avatar_url": "https://avatars.githubusercontent.com/u/10910133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dorel14",
"html_url": "https://github.com/dorel14",
"followers_url": "https://api.github.com/users/dorel14/followers",
"following_url": "https://api.github.com/users/dorel14/following{/other_user}",
"gists_url": "https://api.github.com/users/dorel14/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dorel14/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dorel14/subscriptions",
"organizations_url": "https://api.github.com/users/dorel14/orgs",
"repos_url": "https://api.github.com/users/dorel14/repos",
"events_url": "https://api.github.com/users/dorel14/events{/privacy}",
"received_events_url": "https://api.github.com/users/dorel14/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"That doesn't make any sense to me. Peewee doesn't automatically convert anything to ArrayField. People use their model classes with both SQLite and Postgres (ie Sqlite for tests, Postgres for production), and I've never heard of such an issue.\r\n\r\nIf you can provide instructions on how to reproduce this, I'll take a look. But you have not given me any information that I can use...other than you seem to be confused.",
"Ok, fault of my code, excuse me.\r\nI'm discovering peewee and it's not very simple.\r\n"
] | 2019-02-16T18:59:03 | 2019-02-16T23:08:32 | 2019-02-16T21:37:07 | NONE | null | Hello
I changed my database from SQLite to Postgres, and varchar columns are being converted into array fields, so all the new text I insert is wrapped in brackets.
How can I deactivate this?
Thanks | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1859/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1859/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1858 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1858/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1858/comments | https://api.github.com/repos/coleifer/peewee/issues/1858/events | https://github.com/coleifer/peewee/issues/1858 | 410,941,270 | MDU6SXNzdWU0MTA5NDEyNzA= | 1,858 | For Update in Subquery Fails with PostgreSQL | {
"login": "href",
"id": 273163,
"node_id": "MDQ6VXNlcjI3MzE2Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/273163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/href",
"html_url": "https://github.com/href",
"followers_url": "https://api.github.com/users/href/followers",
"following_url": "https://api.github.com/users/href/following{/other_user}",
"gists_url": "https://api.github.com/users/href/gists{/gist_id}",
"starred_url": "https://api.github.com/users/href/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/href/subscriptions",
"organizations_url": "https://api.github.com/users/href/orgs",
"repos_url": "https://api.github.com/users/href/repos",
"events_url": "https://api.github.com/users/href/events{/privacy}",
"received_events_url": "https://api.github.com/users/href/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Well, the reason your specific example is bad is because you need to associate `User` model with a Postgres database instance. But the error is the same even after adding the database, just for different reasons. Fixed.",
"Thanks for fixing this so super quickly 🙂"
] | 2019-02-15T20:58:06 | 2019-02-15T22:02:19 | 2019-02-15T21:21:17 | NONE | null | I'm trying to use SELECT .. FOR UPDATE SKIP LOCKED in a subquery to delete some records concurrently, and I think I stumbled upon a bug. Usually, my FOR UPDATE selects work fine in my Postgres Database, but this query cannot be rendered by Peewee.
When I run this example code:
```python
from peewee import Model
from peewee import AutoField
from peewee import TextField
class User(Model):
id = AutoField()
name = TextField()
User.delete().where(User.id.in_(User.select(User.id).for_update())).sql()
```
I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/denis/.virtualenvs/centime/lib/python3.7/site-packages/peewee.py", line 1692, in sql
return context.parse(self)
File "/Users/denis/.virtualenvs/centime/lib/python3.7/site-packages/peewee.py", line 600, in parse
return self.sql(node).query()
File "/Users/denis/.virtualenvs/centime/lib/python3.7/site-packages/peewee.py", line 567, in sql
return obj.__sql__(self)
File "/Users/denis/.virtualenvs/centime/lib/python3.7/site-packages/peewee.py", line 2374, in __sql__
ctx.literal(' WHERE ').sql(self._where)
File "/Users/denis/.virtualenvs/centime/lib/python3.7/site-packages/peewee.py", line 567, in sql
return obj.__sql__(self)
File "/Users/denis/.virtualenvs/centime/lib/python3.7/site-packages/peewee.py", line 1295, in __sql__
if op_in and Context().parse(self.rhs)[0] == '()':
File "/Users/denis/.virtualenvs/centime/lib/python3.7/site-packages/peewee.py", line 600, in parse
return self.sql(node).query()
File "/Users/denis/.virtualenvs/centime/lib/python3.7/site-packages/peewee.py", line 567, in sql
return obj.__sql__(self)
File "/Users/denis/.virtualenvs/centime/lib/python3.7/site-packages/peewee.py", line 2084, in __sql__
raise ValueError('FOR UPDATE specified but not supported '
ValueError: FOR UPDATE specified but not supported by database.
```
This happens on PostgreSQL (not that I have specified that yet in this example), even though PostgreSQL supports FOR UPDATE. My other FOR UPDATE queries work out fine and when I remove the check leading up to the ValueError the query executes correctly as well. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1858/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1857 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1857/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1857/comments | https://api.github.com/repos/coleifer/peewee/issues/1857/events | https://github.com/coleifer/peewee/issues/1857 | 410,856,608 | MDU6SXNzdWU0MTA4NTY2MDg= | 1,857 | save_optimistic misbehaving when foreign key is assigned before saved | {
"login": "NicolasCaous",
"id": 24411365,
"node_id": "MDQ6VXNlcjI0NDExMzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/24411365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NicolasCaous",
"html_url": "https://github.com/NicolasCaous",
"followers_url": "https://api.github.com/users/NicolasCaous/followers",
"following_url": "https://api.github.com/users/NicolasCaous/following{/other_user}",
"gists_url": "https://api.github.com/users/NicolasCaous/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NicolasCaous/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NicolasCaous/subscriptions",
"organizations_url": "https://api.github.com/users/NicolasCaous/orgs",
"repos_url": "https://api.github.com/users/NicolasCaous/repos",
"events_url": "https://api.github.com/users/NicolasCaous/events{/privacy}",
"received_events_url": "https://api.github.com/users/NicolasCaous/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Well, for what its worth, this seems like a very silly and inefficient way to use Peewee... As you are essentially doing two INSERTs and an UPDATE when you only need to be doing two INSERTs. I'd suggest that the right way would be to first create the `Phone` object, then to create the `Person`...not creating the `Person`, `Phone`, then updating the `Person`.\r\n\r\nHowever I agree this is probably a bug.\r\n\r\nHere's a fix:\r\n\r\n```python\r\n # Update any data that has changed and bump the version counter.\r\n field_data = dict(self.__data__)\r\n current_version = field_data.pop('version', 1)\r\n self._populate_unsaved_relations(field_data) # ADD THIS LINE.\r\n field_data = self._prune_fields(field_data, self.dirty_fields)\r\n if not field_data:\r\n raise ValueError('No changes have been made.')\r\n```",
"> this seems like a very silly and inefficient way to use Peewee...\r\n\r\nI strongly agree :laughing: , this is just a Proof of Concept. I ran into this issue in a totally different context. Since it was a complex scenario, I decided to create this PoC just for the issue for simplicity sake.\r\n\r\n> [If you have found a bug in the code and submit a failing test-case, then hats-off to you, you are a hero!](url)\r\n\r\nA hero once again!\r\n"
] | 2019-02-15T16:52:03 | 2019-02-15T19:06:37 | 2019-02-15T18:49:24 | NONE | null | I'm not sure if this is intended behavior or a valid issue. @coleifer please help :)
# The Issue
Assume [Optimistic Locking](http://docs.peewee-orm.com/en/3.6.0/peewee/hacks.html#optimistic-locking) is in place as in the docs
When a foreign key is assigned before it exists in the database, it results in a None (aka NULL) reference after the `_prune_fields` call. This should be a valid sequence of events, but, for some reason, it's not:
```
>>> from peewee import *
>>>
>>> db = SqliteDatabase(':memory:')
>>>
>>> class BaseModel(Model):
... class Meta:
... database = db
...
>>>
>>> class Phone(BaseModel):
... number = CharField(max_length=255)
...
>>> class Person(BaseModel):
... phone = ForeignKeyField(Phone, backref="person", null=True)
...
>>> db.connect()
True
>>> db.create_tables([Phone, Person])
>>>
>>> person = Person()
>>> person
<Person: None>
>>> person.save()
1
>>> person
<Person: 1>
>>>
>>> phone = Phone(number="dummy")
>>> phone
<Phone: None>
>>> person.phone = phone
>>>
>>> person.phone
<Phone: None>
>>> person._prune_fields(dict(person.__data__), person.dirty_fields) # Simulating "save_optimistic", here phone should be None, so it's correct.
{'phone': None}
>>> phone.save() # Now save
1
>>> phone # Save was successful
<Phone: 1>
>>> person.phone # Relation is also correct
<Phone: 1>
>>> person._prune_fields(dict(person.__data__), person.dirty_fields) # Phone should be 1, not None now, incorrect behavior
{'phone': None}
>>>
>>>
>>> person2 = Person()
>>> person2
<Person: None>
>>> person2.save()
1
>>> person2
<Person: 2>
>>>
>>> phone2 = Phone(number="dummy")
>>> phone2
<Phone: None>
>>> phone2.save() # Simulating save before assigning
1
>>> phone2
<Phone: 2>
>>> person2.phone = phone2
>>> person2._prune_fields(dict(person2.__data__), person2.dirty_fields) # now it's correct
{'phone': 2}
```
For copy and paste:
```
from peewee import *
db = SqliteDatabase(':memory:')
class BaseModel(Model):
class Meta:
database = db
class Phone(BaseModel):
number = CharField(max_length=255)
class Person(BaseModel):
phone = ForeignKeyField(Phone, backref="person", null=True)
db.connect()
db.create_tables([Phone, Person])
person = Person()
person
person.save()
person
phone = Phone(number="dummy")
phone
person.phone = phone
person.phone
person._prune_fields(dict(person.__data__), person.dirty_fields) # Simulating "save_optimistic", here phone should be None, so it's correct.
phone.save() # Now save
phone # Save was successful
person.phone # Relation is also correct
person._prune_fields(dict(person.__data__), person.dirty_fields) # Phone should be 1, not None now, incorrect behavior
person2 = Person()
person2
person2.save()
person2
phone2 = Phone(number="dummy")
phone2
phone2.save() # Simulating save before assigning
phone2
person2.phone = phone2
person2._prune_fields(dict(person2.__data__), person2.dirty_fields) # now it's correct
```
# Peewee Version
```
$ python -c "from peewee import __version__; print(__version__)"
3.8.0
```
Happens with any database (Tested with Sqlite and MySQL) | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1857/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1856 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1856/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1856/comments | https://api.github.com/repos/coleifer/peewee/issues/1856/events | https://github.com/coleifer/peewee/issues/1856 | 410,854,847 | MDU6SXNzdWU0MTA4NTQ4NDc= | 1,856 | Passing single model to bind_ctx causes uninitialized Proxy error | {
"login": "MushuEE",
"id": 7189267,
"node_id": "MDQ6VXNlcjcxODkyNjc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7189267?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MushuEE",
"html_url": "https://github.com/MushuEE",
"followers_url": "https://api.github.com/users/MushuEE/followers",
"following_url": "https://api.github.com/users/MushuEE/following{/other_user}",
"gists_url": "https://api.github.com/users/MushuEE/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MushuEE/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MushuEE/subscriptions",
"organizations_url": "https://api.github.com/users/MushuEE/orgs",
"repos_url": "https://api.github.com/users/MushuEE/repos",
"events_url": "https://api.github.com/users/MushuEE/events{/privacy}",
"received_events_url": "https://api.github.com/users/MushuEE/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"`(Testbed)` is not a tuple.\r\n\r\n`(Testbed,)` is a tuple.\r\n\r\nThis is your problem.",
"Yes, I mentioned that, I opened because the Error message received was misleading. Hopefully others might find this helpful if they hit the same issue, thanks."
] | 2019-02-15T16:47:53 | 2019-02-15T18:13:21 | 2019-02-15T18:09:04 | NONE | null | I was trying to make a fixture to do some unit testing for my Peewee methods when I hit this issue:
```
@pytest.fixture()
def test_db():
"""Test to check db."""
_db = SqliteDatabase(":memory:")
dbs = (Testbed)
with _db.bind_ctx(dbs):
_db.create_tables(dbs)
yield _db
_db.drop_tables(dbs)
```
Will trigger
```
@pytest.fixture()
def test_db():
"""Test to check db."""
_db = SqliteDatabase(":memory:")
dbs = (Testbed)
> with _db.bind_ctx(dbs):
def __getattr__(self, attr):
if self.obj is None:
> raise AttributeError('Cannot use uninitialized Proxy.')
E AttributeError: Cannot use uninitialized Proxy.
```
That sent me down a bit of a rabbit hole.
The issue ended up being that I needed `[Testbed]`; in other fixtures I had `(Model1, Model2, Model3)`, which works because it is iterable. If there were a way to pass a single model, or to raise a clearer error for this case, I think it would be helpful. Thanks | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1856/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1855 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1855/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1855/comments | https://api.github.com/repos/coleifer/peewee/issues/1855/events | https://github.com/coleifer/peewee/issues/1855 | 409,473,029 | MDU6SXNzdWU0MDk0NzMwMjk= | 1,855 | Bulk Upsert | {
"login": "takehaya",
"id": 10973623,
"node_id": "MDQ6VXNlcjEwOTczNjIz",
"avatar_url": "https://avatars.githubusercontent.com/u/10973623?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/takehaya",
"html_url": "https://github.com/takehaya",
"followers_url": "https://api.github.com/users/takehaya/followers",
"following_url": "https://api.github.com/users/takehaya/following{/other_user}",
"gists_url": "https://api.github.com/users/takehaya/gists{/gist_id}",
"starred_url": "https://api.github.com/users/takehaya/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/takehaya/subscriptions",
"organizations_url": "https://api.github.com/users/takehaya/orgs",
"repos_url": "https://api.github.com/users/takehaya/repos",
"events_url": "https://api.github.com/users/takehaya/events{/privacy}",
"received_events_url": "https://api.github.com/users/takehaya/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Questions like this belong on StackOverflow.\r\n\r\nThe documentation in peewee:\r\n\r\n* http://docs.peewee-orm.com/en/latest/peewee/querying.html#upsert\r\n* http://docs.peewee-orm.com/en/latest/peewee/api.html#Insert.on_conflict\r\n\r\nConsult the manual for the SQL database you are using to get a better idea of how to write such a query.",
"sorry.I looked at StackOverflow and read the document, but there was no example.\r\n\r\nList can not be taken with\" on_conflict (update = books) \"\r\nHow to pass \"list\" to \"on_conflict(update=*)\"?",
"I'm suggesting that you don't understand how insert with \"on conflict\" clause works. And that you should familiarize yourself with the way it is used."
] | 2019-02-12T19:54:48 | 2019-02-13T13:02:17 | 2019-02-12T20:58:28 | NONE | null | hi. I want to use update and preserve with bulk upsert.
I would like to run the following code. How should I do this?
```
Book.insert_many(books).on_conflict(preserve=[Book.disposal], update=books).execute()
``` | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1855/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1854 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1854/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1854/comments | https://api.github.com/repos/coleifer/peewee/issues/1854/events | https://github.com/coleifer/peewee/issues/1854 | 409,414,408 | MDU6SXNzdWU0MDk0MTQ0MDg= | 1,854 | UNION query is not using parenthesis | {
"login": "Polsaker",
"id": 2000719,
"node_id": "MDQ6VXNlcjIwMDA3MTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/2000719?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Polsaker",
"html_url": "https://github.com/Polsaker",
"followers_url": "https://api.github.com/users/Polsaker/followers",
"following_url": "https://api.github.com/users/Polsaker/following{/other_user}",
"gists_url": "https://api.github.com/users/Polsaker/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Polsaker/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Polsaker/subscriptions",
"organizations_url": "https://api.github.com/users/Polsaker/orgs",
"repos_url": "https://api.github.com/users/Polsaker/repos",
"events_url": "https://api.github.com/users/Polsaker/events{/privacy}",
"received_events_url": "https://api.github.com/users/Polsaker/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"So the issue (I didn't realize this, just saw it in [the mysql docs](https://dev.mysql.com/doc/refman/8.0/en/union.html)) is that MySQL wants parentheses when either of the subqueries contains an order by clause.\r\n\r\nIn the meantime, is it possible for you to avoid ordering the subqueries, and simply apply the ordering afterwards to the resulting union? It looks like both use the same column, \"time\", so this should be easy.",
"I tried this:\r\n\r\n```\r\n s1 = SiteLog.select(SiteLog.time)\r\n s2 = SubLog.select(SubLog.time)\r\n final = (s1 | s2)\r\n final = final.order_by(final.c.time.desc())\r\n```\r\n\r\nThe resulting query is\r\n\r\n```\r\nSELECT `t1`.`time` FROM `site_log` AS `t1` UNION SELECT `t2`.`time` FROM `sub_log` AS `t2` ORDER BY `t3`.`time` DESC\r\n```\r\n\r\nand of course it throws an error because a `t3` appeared there for some reason...",
"I ended up using `final = final.order_by(SQL('time DESC'))` as a workaround",
"Yeah, I'm sorry that your intuition (final.c.time.desc()) did not work -- that would be the first thing I would try, too, of course. I'll look at fixing that particular issue -- referencing the top-level columns from a compound query is a little tricky, but I think Peewee can do better. In the meantime, yeah, SQL() is the way to go.\r\n\r\nI've pushed a patch that tells MySQL to use parentheses around the parts of a compound select query (be815ae), which should address the syntax issue hopefully.",
"Yeah, MySQL is seemingly a bit inconsistent. Here the ci is running against MariaDB and the compound-select queries are failing once parentheses were added:\r\n\r\nhttps://travis-ci.org/coleifer/peewee/jobs/492305967\r\n\r\nI'm going to revert the change to `MySQLDatabase.compound_select_parentheses`.",
"The main issue is fixed here:\r\n\r\n36bd887ac07647c60dfebe610b34efabec675706\r\n\r\nSqlite never allows parentheses in compound select. Postgres allows parentheses, including nesting (e.g. if you have more than 2 queries). MySQL allows one layer of parentheses (but it looks like in MariaDB 10.4 nesting will be supported).",
"Added another patch (aff2a92) to allow using \".c\" to reference parts of a compound select query in the order-by clause, as you had tried to do originally.",
"Thanks! This all was pretty quick!",
"See also #2014 -- mysql and mariadb are a pile of inconsistency when it comes to parsing sql.\r\n\r\nBug for MySQL (opened in 2007) was finally fixed for MySQL 8.0 release: https://bugs.mysql.com/bug.php?id=25734\r\n\r\nMariaDB thought they fixed it for 10.4: https://jira.mariadb.org/browse/MDEV-11953\r\n\r\nBut they missed some scenarios, including using compound select queries inside an IN expression (e.g.). I've opened a ticket, so we'll see what happens: https://jira.mariadb.org/browse/MDEV-20606"
] | 2019-02-12T17:27:06 | 2019-09-17T02:40:23 | 2019-02-12T20:56:32 | NONE | null | Hello, I'm doing the following query (on mysql):
```
s1 = SiteLog.select(SiteLog.time).order_by(SiteLog.time.desc())
s2 = SubLog.select(SubLog.time).order_by(SubLog.time.desc())
final = (s1 | s2)
```
and the sql result is
```
SELECT `t1`.`time` FROM `site_log` AS `t1` ORDER BY `t1`.`time` DESC UNION SELECT `t2`.`time` FROM `sub_log` AS `t2` ORDER BY `t2`.`time` DESC
```
This lacks the parentheses around each query, so the MySQL server ends up throwing a syntax error:
```
peewee.ProgrammingError: (1064, "You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'UNION SELECT `t2`.`time` FROM `sub_log` AS `t2` ORDER BY `t2`.`time` DESC' at line 1")
```
What am I doing wrong? | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1854/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1854/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1853 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1853/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1853/comments | https://api.github.com/repos/coleifer/peewee/issues/1853/events | https://github.com/coleifer/peewee/issues/1853 | 409,327,080 | MDU6SXNzdWU0MDkzMjcwODA= | 1,853 | v3 multiple databases support | {
"login": "tyfncn",
"id": 5200913,
"node_id": "MDQ6VXNlcjUyMDA5MTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5200913?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tyfncn",
"html_url": "https://github.com/tyfncn",
"followers_url": "https://api.github.com/users/tyfncn/followers",
"following_url": "https://api.github.com/users/tyfncn/following{/other_user}",
"gists_url": "https://api.github.com/users/tyfncn/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tyfncn/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tyfncn/subscriptions",
"organizations_url": "https://api.github.com/users/tyfncn/orgs",
"repos_url": "https://api.github.com/users/tyfncn/repos",
"events_url": "https://api.github.com/users/tyfncn/events{/privacy}",
"received_events_url": "https://api.github.com/users/tyfncn/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Yes, you can use `database.bind_ctx()`, or `Model.bind_ctx()`:\r\n\r\n* http://docs.peewee-orm.com/en/latest/peewee/api.html#Database.bind_ctx\r\n* http://docs.peewee-orm.com/en/latest/peewee/api.html#Model.bind_ctx"
] | 2019-02-12T14:28:45 | 2019-02-12T17:24:30 | 2019-02-12T17:24:30 | NONE | null | First, thank you for such great software. I've been using it for almost 3 years now. My issue is the missing "Using()" context manager. It was very useful while backing up to a new SQLite database of the same models. It seems it is deprecated in v3. Is there any replacement for this?
The old docs were here: http://docs.peewee-orm.com/en/2.10.2/peewee/database.html#using-multiple-databases
My use case is somewhat like this:
```python
new_db = SqliteDatabase("backup_2019.db")
data_list = []
for rec in Record.select():
data_list.append({"name": rec.name, "desc": rec.desc})
with Using(new_db, [Record]):
for data in data_list:
Record.create(**data)
```
I would like to update to the latest version, but this particular use case holds me back. Any help would be greatly appreciated. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1853/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1853/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1852 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1852/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1852/comments | https://api.github.com/repos/coleifer/peewee/issues/1852/events | https://github.com/coleifer/peewee/issues/1852 | 409,151,329 | MDU6SXNzdWU0MDkxNTEzMjk= | 1,852 | Avoid quoting of table/column names | {
"login": "ananis25",
"id": 16446513,
"node_id": "MDQ6VXNlcjE2NDQ2NTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/16446513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ananis25",
"html_url": "https://github.com/ananis25",
"followers_url": "https://api.github.com/users/ananis25/followers",
"following_url": "https://api.github.com/users/ananis25/following{/other_user}",
"gists_url": "https://api.github.com/users/ananis25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ananis25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ananis25/subscriptions",
"organizations_url": "https://api.github.com/users/ananis25/orgs",
"repos_url": "https://api.github.com/users/ananis25/repos",
"events_url": "https://api.github.com/users/ananis25/events{/privacy}",
"received_events_url": "https://api.github.com/users/ananis25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Just saw this 5ff2d8a where @coleifer moved from a tuple of characters to strings, so clearly Cython must provide speedup better than 50 p/c. \r\n\r\nDo you think it'd still be useful to allow a tuple for the above use-case?\r\n\r\nInternet also helped me with a simple way to get around this, so the cython method isn't used. Please close the issue if it isn't worth the trouble :)\r\n```\r\nimport sys\r\nsys.modules['playhouse._speedups'] = None\r\n```",
"I've gone ahead and removed the speedups extension as it was a holdover from 2.x and is not really necessary anymore. It provides a tiny performance improvement for identifier quoting, but its so minimal that I don't think it justifies the existence of the additional C extension. So: going forward in the next release, the \"speedups\" module will be gone.\r\n\r\n> Needed since I haven't created the SQL tables I'm reading from.\r\n\r\nI don't understand what quoting has to do with this?",
"Please comment as to why quoted identifiers are causing problems with your DB/application. For now I am closing, as the speedups-specific issue is fixed.",
"Sure. I'm using an existing table in Postgres where the identifiers were not created with quotes.\r\n```\r\nCREATE TABLE tableA (userId int);\r\n```\r\n\r\nWhen using Peewee to construct queries for this table, I created a class with the column identifiers camel-cased (a solution would be to use lower-case names, but the data schema everywhere else uses camelcase). Hence the query comes out to be off.\r\n\r\n```\r\nclass tableA(Model):\r\n userId = IntegerField()\r\n\r\nprint(tableA.select(tableA.userId).sql())\r\n> 'SELECT \"t1\".\"userId\" FROM \"tablea\" AS \"t1\"'\r\n# this throws since the column \"t1.userId\" doesn't exist\r\n```\r\n\r\nWith the `speedups` routine removed, it'd be easy to override quoting by providing `('', '')` as quote characters. Thank you very much for looking!\r\n",
"You can override the table and column names:\r\n\r\n```python\r\nclass AnyName(Model):\r\n anything = TextField(column_name=\"colA\")\r\n class Meta:\r\n table_name = \"tableA\"\r\n```\r\n\r\nIf you read the doc's you would have probably saved us both time.",
"http://docs.peewee-orm.com/en/latest/peewee/models.html#field-initialization-arguments\r\n\r\nhttp://docs.peewee-orm.com/en/latest/peewee/models.html#model-options-and-table-metadata",
"Thanks for the reference. I do make use of the table_name parameter It hadn't struck me that using column_name would've solved this. Really appreciate your quick response.",
"Glad to have been able to help."
] | 2019-02-12T07:07:45 | 2019-02-13T14:05:43 | 2019-02-12T18:10:37 | NONE | null | I couldn't quite figure out how to generate table and column names without quote characters. This is needed since I didn't create the SQL tables I'm reading from. Is there a way to achieve this without monkey patching?
Suggestion:
A possible way to do this could be to use a modified Database class with the quote variable set to a tuple of two empty strings. This fails, however, since the `quote_char` variable is expected to be of type `str` in the Cython `quote` function at `playhouse/_speedups.pyx`:
```
class PostgresModified(peewee.PostgresqlDatabase):
quote = ('', '') # tuple of empty strings in place of double quotes which is the peewee default
# an empty string doesn't work since the quote function in Peewee.py uses join method on the string.
```
Using a tuple instead of a string also halves the time the `quote` function takes. Given that @coleifer has written a Cython method presumably to speed it up, could this be useful: accept `quote` as a 2-character string but store it as a 2-tuple? | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1852/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1852/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1851 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1851/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1851/comments | https://api.github.com/repos/coleifer/peewee/issues/1851/events | https://github.com/coleifer/peewee/issues/1851 | 408,803,896 | MDU6SXNzdWU0MDg4MDM4OTY= | 1,851 | Field class doesn't have an `index_type` attribute | {
"login": "ananis25",
"id": 16446513,
"node_id": "MDQ6VXNlcjE2NDQ2NTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/16446513?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ananis25",
"html_url": "https://github.com/ananis25",
"followers_url": "https://api.github.com/users/ananis25/followers",
"following_url": "https://api.github.com/users/ananis25/following{/other_user}",
"gists_url": "https://api.github.com/users/ananis25/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ananis25/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ananis25/subscriptions",
"organizations_url": "https://api.github.com/users/ananis25/orgs",
"repos_url": "https://api.github.com/users/ananis25/repos",
"events_url": "https://api.github.com/users/ananis25/events{/privacy}",
"received_events_url": "https://api.github.com/users/ananis25/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This is only used internally by some special fields in the `playhouse.postgres_ext` module. However you *can* specify a custom index type via the `using=` parameter of `ModelIndex`:\r\n\r\n```python\r\nclass Table(Model):\r\n timestamp = DateTimeField()\r\n\r\nTable.add_index(Table.index('timestamp', using='BRIN'))\r\n```",
"I *do* think this is a legit suggestion that it should be applicable to all field types... I'm going to look at expanding the scope of the `index_type` bit to maybe apply across all fields.",
"Fixed -- now it is possible to write:\r\n\r\n```python\r\ncolumn_a = DateTimeField(index=True, index_type='BRIN')\r\n```",
"Thank you! The commit diff shows it was clearly more involved that it looks. \r\n\r\nThe `add_index` method was also doable but for my use-case, I needed to redefine the Model class every time I used it. Another hack that worked was:\r\n\r\n```\r\nclass Table(Model):\r\n columnA = DateTimeField(index = True)\r\n columnA.index_type = 'BRIN'\r\n\r\n class Meta:...\r\n\r\n```"
] | 2019-02-11T14:20:52 | 2019-02-12T03:26:03 | 2019-02-11T21:52:56 | NONE | null | The ModelIndex class looks for the index_type argument in the definition of a Field when creating an index for it, but it is not defined as an attribute for the Field class.
```
class ModelIndex(Index):
def __init__(self, model, fields, unique=False, safe=True, where=None,
using=None, name=None):
self._model = model
if name is None:
name = self._generate_name_from_fields(model, fields)
if using is None:
for field in fields:
if isinstance(field, Field) and hasattr(field, 'index_type'):
using = field.index_type
```
Might this be an oversight? If available, creating indexes of a type other than the default (B-tree) for fields would be really simple:
```
class Table(Model):
columnA = DateTimeField(index_type = 'BRIN')
```
Also, thank you for the great library :) | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1851/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1850 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1850/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1850/comments | https://api.github.com/repos/coleifer/peewee/issues/1850/events | https://github.com/coleifer/peewee/issues/1850 | 408,502,553 | MDU6SXNzdWU0MDg1MDI1NTM= | 1,850 | Add support for column name option in ForeignKeyField | {
"login": "michaellzc",
"id": 8373004,
"node_id": "MDQ6VXNlcjgzNzMwMDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/8373004?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaellzc",
"html_url": "https://github.com/michaellzc",
"followers_url": "https://api.github.com/users/michaellzc/followers",
"following_url": "https://api.github.com/users/michaellzc/following{/other_user}",
"gists_url": "https://api.github.com/users/michaellzc/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michaellzc/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michaellzc/subscriptions",
"organizations_url": "https://api.github.com/users/michaellzc/orgs",
"repos_url": "https://api.github.com/users/michaellzc/repos",
"events_url": "https://api.github.com/users/michaellzc/events{/privacy}",
"received_events_url": "https://api.github.com/users/michaellzc/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"ForeignKeyField *does* support `column_name`. What are you talking about?"
] | 2019-02-10T05:22:08 | 2019-02-10T16:52:18 | 2019-02-10T16:52:18 | NONE | null | ```py
class User(Model):
    user_id = IntegerField(primary_key=True)

class Tweet(Model):
    tweet_id = IntegerField(primary_key=True)
    reply_to = ForeignKeyField('self')  # Reference to itself
```
The current behaviour of the above snippet automatically changes the column name of `reply_to` to `reply_to_id`. AFAIK the `ForeignKeyField` class does not expose a `column_name` parameter the way the `Field` class does.
It would be great if we could parametrize `column_name` for every `Field` class.
Here is what I have in mind:
```py
class Tweet(Model):
    tweet_id = IntegerField(primary_key=True)
reply_to = ForeignKeyField('self', column_name='reply_to')
``` | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1850/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1849 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1849/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1849/comments | https://api.github.com/repos/coleifer/peewee/issues/1849/events | https://github.com/coleifer/peewee/pull/1849 | 408,415,599 | MDExOlB1bGxSZXF1ZXN0MjUxNjY0MzAw | 1,849 | [docs]fix optimistic locking document in hacks.rst | {
"login": "ymym3412",
"id": 9605058,
"node_id": "MDQ6VXNlcjk2MDUwNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9605058?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ymym3412",
"html_url": "https://github.com/ymym3412",
"followers_url": "https://api.github.com/users/ymym3412/followers",
"following_url": "https://api.github.com/users/ymym3412/following{/other_user}",
"gists_url": "https://api.github.com/users/ymym3412/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ymym3412/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ymym3412/subscriptions",
"organizations_url": "https://api.github.com/users/ymym3412/orgs",
"repos_url": "https://api.github.com/users/ymym3412/repos",
"events_url": "https://api.github.com/users/ymym3412/events{/privacy}",
"received_events_url": "https://api.github.com/users/ymym3412/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [] | 2019-02-09T09:37:57 | 2019-02-15T02:28:23 | 2019-02-10T16:55:35 | CONTRIBUTOR | null | In peewee 3, the Model class has no attribute `_dict`, but it does have `__dict__`.
Also, in hacks.rst there is no definition of `ConflictDetectedException`.
Referred to this blog post:
http://charlesleifer.com/blog/optimistic-locking-in-peewee-orm/
Thx. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1849/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/1849",
"html_url": "https://github.com/coleifer/peewee/pull/1849",
"diff_url": "https://github.com/coleifer/peewee/pull/1849.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/1849.patch",
"merged_at": "2019-02-10T16:55:35"
} |
https://api.github.com/repos/coleifer/peewee/issues/1848 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1848/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1848/comments | https://api.github.com/repos/coleifer/peewee/issues/1848/events | https://github.com/coleifer/peewee/issues/1848 | 408,244,925 | MDU6SXNzdWU0MDgyNDQ5MjU= | 1,848 | Map result of raw query to a model | {
"login": "IvaYan",
"id": 2164810,
"node_id": "MDQ6VXNlcjIxNjQ4MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2164810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/IvaYan",
"html_url": "https://github.com/IvaYan",
"followers_url": "https://api.github.com/users/IvaYan/followers",
"following_url": "https://api.github.com/users/IvaYan/following{/other_user}",
"gists_url": "https://api.github.com/users/IvaYan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/IvaYan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/IvaYan/subscriptions",
"organizations_url": "https://api.github.com/users/IvaYan/orgs",
"repos_url": "https://api.github.com/users/IvaYan/repos",
"events_url": "https://api.github.com/users/IvaYan/events{/privacy}",
"received_events_url": "https://api.github.com/users/IvaYan/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"In that case you can do:\r\n\r\n```python\r\nresult = MyModel.raw('SELECT value1, value2 FROM ...')\r\nfor m in result:\r\n print(m.value1, m.value2)\r\n```"
] | 2019-02-08T17:12:29 | 2019-02-08T17:52:02 | 2019-02-08T17:52:02 | NONE | null | Hello!
Is it possible to map the result of a raw query to a particular Model, assuming that I know the structure of the values returned by the query? Something like this:
```py
class MyModel(Model):
# declare fields here, as usual
value1 = peewee.IntegerField()
    value2 = peewee.IntegerField()

result = database.execute('SELECT value1, value2 FROM ...').map_to(MyModel)
for m in result:
print(m.value1, m.value2)
```
The `result` variable contains model instances instead of raw tuples. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1848/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1847 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1847/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1847/comments | https://api.github.com/repos/coleifer/peewee/issues/1847/events | https://github.com/coleifer/peewee/issues/1847 | 408,054,759 | MDU6SXNzdWU0MDgwNTQ3NTk= | 1,847 | Support for different blob types in mysql | {
"login": "maxnoe",
"id": 5488440,
"node_id": "MDQ6VXNlcjU0ODg0NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5488440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxnoe",
"html_url": "https://github.com/maxnoe",
"followers_url": "https://api.github.com/users/maxnoe/followers",
"following_url": "https://api.github.com/users/maxnoe/following{/other_user}",
"gists_url": "https://api.github.com/users/maxnoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxnoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxnoe/subscriptions",
"organizations_url": "https://api.github.com/users/maxnoe/orgs",
"repos_url": "https://api.github.com/users/maxnoe/repos",
"events_url": "https://api.github.com/users/maxnoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxnoe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Just subclass and specify the data-storage class:\r\n\r\n```python\r\nclass LongBlobField(BlobField):\r\n    field_type = 'LONGBLOB'\r\n```",
"This is not compatible with using a proxy that can either be a sqlite or mysql, as sqlite has no longblob type, or am I mistaken?",
"https://www.sqlite.org/datatype3.html\r\n\r\nFrom 3.1.3:\r\n\r\n> If the declared type for a column contains the string \"BLOB\" or if no type is specified then the column has affinity BLOB."
] | 2019-02-08T08:43:08 | 2019-02-08T17:51:24 | 2019-02-08T16:04:33 | NONE | null | I need to store some binary data larger than 65 KB in a MySQL database.
peewee only seems to support a single `BlobField`, which maps to MySQL's `BLOB`, capped at roughly 65 KB.
How can I make sure the table is created with a `MEDIUMBLOB` or `LONGBLOB` datatype while also maintaining SQLite compatibility for local testing? | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1847/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1846 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1846/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1846/comments | https://api.github.com/repos/coleifer/peewee/issues/1846/events | https://github.com/coleifer/peewee/issues/1846 | 408,007,557 | MDU6SXNzdWU0MDgwMDc1NTc= | 1,846 | readonly field support | {
"login": "SteveByerly",
"id": 1393464,
"node_id": "MDQ6VXNlcjEzOTM0NjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/1393464?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SteveByerly",
"html_url": "https://github.com/SteveByerly",
"followers_url": "https://api.github.com/users/SteveByerly/followers",
"following_url": "https://api.github.com/users/SteveByerly/following{/other_user}",
"gists_url": "https://api.github.com/users/SteveByerly/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SteveByerly/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SteveByerly/subscriptions",
"organizations_url": "https://api.github.com/users/SteveByerly/orgs",
"repos_url": "https://api.github.com/users/SteveByerly/repos",
"events_url": "https://api.github.com/users/SteveByerly/events{/privacy}",
"received_events_url": "https://api.github.com/users/SteveByerly/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Not exactly answering the issue, but rather than use `bulk_create()` why not just use `insert_many()`?",
"I tend to prefer creating the instances of my objects directly. It's easier for me to see which object I'm creating and see that I'm assigning values properly. I also find this method a bit easier to search and refactor.\r\n\r\n`insert_many` gets close with dictionaries, but you're still one-step removed since the fields (dict keys) have no direct relation to the object within the code. the line where the dict is created might be several lines apart from the `insert_many` call using the model class.",
"I dunno...specifying default values using database constraints is totally fine, but it doesn't play very well with an active-record ORM like Peewee. Unless you are using Postgres, which supports INSERT...RETURNING, you would need to issue a subsequent SELECT to obtain the values that are being stored in these columns. You might consider using the `insert_many()` APIs.",
"So I've been kicking this around for a few days. The main APIs that would be impacted by such a change would be `save()`, `create()` and `bulk_create()`. My concern is that implementing something that causes certain columns to be excluded from these calls will also require a separate mechanism to override this exclusion. Since there already exist APIs for performing arbitrary insert/update queries, I think I'll pass on implementing this kind of functionality."
] | 2019-02-08T05:04:57 | 2019-02-20T20:00:31 | 2019-02-20T20:00:31 | NONE | null | I created a `BaseTrackingModel` to capture some tracking data on insert/update.
```python
class BaseTrackingModel(BaseModel):
created_at = DateTimeField(null=False, constraints=[SQL('DEFAULT CURRENT_TIMESTAMP')])
updated_at = DateTimeField(null=True, constraints=[SQL('ON UPDATE CURRENT_TIMESTAMP')])
```
I quickly ran into an issue since I was not setting the `created_at` field, which I naively expected to be set by the database.
Some operations have an `only` param to define which fields to persist. The `insert_many` method uses a fields mapping which can accomplish the same behavior.
However, the `bulk_create` method has no similar option, though I was successful in overriding the `bulk_create` class method to filter out these field names.
I feel like this is a core feature I would want built into the base Field, so I wouldn't need to worry about every point where data is persisted.
Do you have any thoughts or previous details on this type of functionality? Would you be able to point me in the right direction of where you could imagine the functionality living? | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1846/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1846/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1845 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1845/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1845/comments | https://api.github.com/repos/coleifer/peewee/issues/1845/events | https://github.com/coleifer/peewee/issues/1845 | 407,211,453 | MDU6SXNzdWU0MDcyMTE0NTM= | 1,845 | Get timestamp from database as int | {
"login": "jansedlon",
"id": 13948180,
"node_id": "MDQ6VXNlcjEzOTQ4MTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/13948180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jansedlon",
"html_url": "https://github.com/jansedlon",
"followers_url": "https://api.github.com/users/jansedlon/followers",
"following_url": "https://api.github.com/users/jansedlon/following{/other_user}",
"gists_url": "https://api.github.com/users/jansedlon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jansedlon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jansedlon/subscriptions",
"organizations_url": "https://api.github.com/users/jansedlon/orgs",
"repos_url": "https://api.github.com/users/jansedlon/repos",
"events_url": "https://api.github.com/users/jansedlon/events{/privacy}",
"received_events_url": "https://api.github.com/users/jansedlon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"So you should:\r\n\r\n* Use `DateTimeField` if you want to work with `datetime` in python and store the data using the `datetime` data-type in your sql db.\r\n* Use `TimestampField` if you want to work with `datetime` in python and store the data using an integer in your sql db.\r\n* Use `IntegerField` if you want to work with timestamps as integers in python and store the data as an integer in your db."
] | 2019-02-06T12:25:54 | 2019-02-06T15:18:28 | 2019-02-06T15:18:28 | NONE | null | I have a column of type Timestamp (or DateTime). If I save it to the database, it's saved as an integer, but when I read it back from the database, it's returned as a datetime. Is there any way to get it as it is stored in the database (an integer)? | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1845/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1844 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1844/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1844/comments | https://api.github.com/repos/coleifer/peewee/issues/1844/events | https://github.com/coleifer/peewee/issues/1844 | 406,665,304 | MDU6SXNzdWU0MDY2NjUzMDQ= | 1,844 | Expose serialization options in sqlite_ext.JSONField (or use better defaults) | {
"login": "zmwangx",
"id": 4149852,
"node_id": "MDQ6VXNlcjQxNDk4NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4149852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zmwangx",
"html_url": "https://github.com/zmwangx",
"followers_url": "https://api.github.com/users/zmwangx/followers",
"following_url": "https://api.github.com/users/zmwangx/following{/other_user}",
"gists_url": "https://api.github.com/users/zmwangx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zmwangx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zmwangx/subscriptions",
"organizations_url": "https://api.github.com/users/zmwangx/orgs",
"repos_url": "https://api.github.com/users/zmwangx/repos",
"events_url": "https://api.github.com/users/zmwangx/events{/privacy}",
"received_events_url": "https://api.github.com/users/zmwangx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"The `sqlite_ext.JSONField` relies on SQLite's `json1` extension. We convert Python data-types to JSON strings, but it's up to sqlite to parse, validate and then store the resulting JSON data. Sqlite efficiently stores this, so there's nothing to worry about:\r\n\r\n```\r\nsqlite> create table foo (data json);\r\nsqlite> insert into foo (data) values (json('{ \"foo\": \"bar\" }'));\r\nsqlite> select * from foo;\r\n{\"foo\":\"bar\"}\r\n```\r\n\r\nRegarding the handling of unicode, I've added a test-case that shows that the library is doing the right thing:\r\n\r\neec3823e3d00a5ec52d0ff059f23a82a02512f36\r\n\r\nOr,\r\n\r\n```python\r\nIn [1]: from playhouse.sqlite_ext import *\r\n\r\nIn [2]: db = SqliteDatabase(':memory:')\r\n\r\nIn [3]: class KeyData(Model):\r\n ...: key = TextField()\r\n ...: data = JSONField()\r\n ...: class Meta:\r\n ...: database = db\r\n ...: \r\n\r\nIn [4]: KeyData.create_table()\r\n\r\nIn [5]: unicode_str = '中文'\r\n\r\nIn [6]: data = {'foo': unicode_str}\r\n\r\nIn [7]: KeyData.create(key='k1', data=data)\r\nOut[7]: <KeyData: 1>\r\n\r\nIn [8]: KeyData.get(KeyData.key == 'k1').data\r\nOut[8]: {'foo': '中文'}\r\n```",
"Sorry, either I wasn't clear or I'm doing something wrong. Adapting your example a little bit:\r\n\r\n```py\r\n#!/usr/bin/env python3\r\n\r\nimport logging\r\n\r\nfrom playhouse.sqlite_ext import *\r\n\r\nlogger = logging.getLogger(\"peewee\")\r\nlogger.addHandler(logging.StreamHandler())\r\nlogger.setLevel(logging.DEBUG)\r\n\r\ndb = SqliteDatabase(\"/tmp/json1.db\")\r\n\r\n\r\nclass KeyData(Model):\r\n key = TextField()\r\n data = JSONField()\r\n\r\n class Meta:\r\n database = db\r\n\r\n\r\nKeyData.create_table()\r\nunicode_str = \"中文\"\r\ndata = {\"foo\": unicode_str}\r\nKeyData.create(key=\"k1\", data=data)\r\n```\r\n\r\nExecution output:\r\n\r\n```sql\r\n('CREATE TABLE IF NOT EXISTS \"keydata\" (\"id\" INTEGER NOT NULL PRIMARY KEY, \"key\" TEXT NOT NULL, \"data\" JSON NOT NULL)', [])\r\n('INSERT INTO \"keydata\" (\"key\", \"data\") VALUES (?, ?)', ['k1', '{\"foo\": \"\\\\u4e2d\\\\u6587\"}'])\r\n```\r\n\r\nNote that the value of `data` does NOT go through the `json` function. \r\n\r\nNow, dumping the database:\r\n\r\n```sql\r\n$ sqlite3 /tmp/json1.db .dump\r\nPRAGMA foreign_keys=OFF;\r\nBEGIN TRANSACTION;\r\nCREATE TABLE IF NOT EXISTS \"keydata\" (\"id\" INTEGER NOT NULL PRIMARY KEY, \"key\" TEXT NOT NULL, \"data\" JSON NOT NULL);\r\nINSERT INTO keydata VALUES(1,'k1','{\"foo\": \"\\u4e2d\\u6587\"}');\r\nCOMMIT;\r\n```\r\n\r\nYour raw SQL example is exactly what I have in mind (as what should be done)\r\n\r\n```sql\r\ninsert into foo (data) values (json('{ \"foo\": \"bar\" }'));\r\n```\r\n\r\nemphasis on `json(...)`.\r\n\r\n> the library is doing the right thing:\r\n\r\nDefinitely, there's no bug here, it's just that JSON is not efficiently stored.",
"Thanks for the clarification, I'll look into it some more.",
"By the way, while SQLite's `json` compresses whitespaces, it does preserve Unicode escape sequences, so `ensure_ascii=True` is still a problem.",
"I'm somewhat confused about `ensure_ascii`. I made a simple test using the interactive shell.\r\n\r\nI create \"bstr\" (a bytestring) and \"ustr\" (unicode) -- bstr contains the utf8 representation of the two characters in your example. ustr, being unicode, consists of the unicode code-points corresponding to those characters.\r\n\r\nHere is the behavior with Python 2. The unicode version results in the unicode code-point escapes *regardless* of `ensure_ascii`. It is only by serializing the UTF8-encoded `bytes` value with `ensure_ascii=False` that we get the equivalent of the \"raw\" characters:\r\n\r\n```python\r\nIn [2]: bstr = '中文'\r\n\r\nIn [3]: bstr\r\nOut[3]: '\\xe4\\xb8\\xad\\xe6\\x96\\x87'\r\n\r\nIn [4]: ustr = bstr.decode('utf8')\r\n\r\nIn [5]: print ustr\r\n中文\r\n\r\nIn [6]: json.dumps(bstr)\r\nOut[6]: '\"\\\\u4e2d\\\\u6587\"'\r\n\r\nIn [7]: json.dumps(bstr, ensure_ascii=False)\r\nOut[7]: '\"\\xe4\\xb8\\xad\\xe6\\x96\\x87\"'\r\n\r\nIn [8]: json.dumps(ustr)\r\nOut[8]: '\"\\\\u4e2d\\\\u6587\"'\r\n\r\nIn [9]: json.dumps(ustr, ensure_ascii=False)\r\nOut[9]: u'\"\\u4e2d\\u6587\"'\r\n```\r\n\r\nHere is the behavior with Python 3 -- note that we cannot serialize bytestrings so we'll be dealing with the unicode only:\r\n\r\n```python\r\nIn [1]: import json\r\n\r\nIn [2]: ustr = '中文'\r\n\r\nIn [3]: json.dumps(ustr)\r\nOut[3]: '\"\\\\u4e2d\\\\u6587\"'\r\n\r\nIn [4]: json.dumps(ustr, ensure_ascii=False)\r\nOut[4]: '\"中文\"'\r\n```\r\n\r\n-------------------------------\r\n\r\nUnder-the-hood, we're still talking about UTF8-encoded bytestrings. So in Python 2, the only way to avoid storing the code-points is to serialize the value as an already-encoded bytestring, because when using unicode we get the escapes regardless.\r\n\r\nAnyways...the end-result is potentially 2 or 3 extra bytes for these code-points. Is this the crux of the issue? The extra bytes?",
"See above patch for changes. You can now specify a custom json_dumps or json_loads when declaring your JSONField.",
"Cool, I'll give it a spin shortly. Regarding this:\r\n\r\n> Here is the behavior with Python 2. The unicode version results in the unicode code-point escapes _regardless_ of `ensure_ascii`. It is only by serializing the UTF8-encoded `bytes` value with `ensure_ascii=False` that we get the equivalent of the \"raw\" characters:\r\n\r\nActually no, you seem to be tricked by py27's confusing return types and the repr representations that look similar...\r\n\r\n> ```py\r\n> In [9]: json.dumps(ustr, ensure_ascii=False)\r\n> Out[9]: u'\"\\u4e2d\\u6587\"'\r\n> ```\r\n\r\nThese are the unescaped Unicode code points U+4E2D and U+6587 (equivalent to the Python 3 `ensure_ascii=False` version), escaped for `repr`. In py27, `json.dumps` on a `str` gives you a `str`, and `json.dumps` on a `unicode` gives you a `unicode`.\r\n\r\n> Anyways...the end-result is potentially 2 or 3 extra bytes for these code-points. Is this the crux of the issue? The extra bytes?\r\n\r\nThat's one problem. The other problem is when you look at JSON values directly without `json_extract` (e.g. in a command line shell, or in a GUI like sqlitebrowser), escaped codepoints are not readable.",
"> In py27, `json.dumps` on a `str` gives you a `str`, and `json.dumps` on a `unicode` gives you a `unicode`.\r\n\r\nActually that's also false... It seems you only get a `unicode` when `ensure_ascii=False`. Anyway, very confusing. Glad I went all in on Python 3 years ago...",
"Ooof, thanks for pointing that out, yeah I moved over to Py3 as well but I still continue to support Py2 for my libraries, at least until Py2 is officially dead.",
"Excuse me.\r\nThe Postgresql extension seems to have the same problem.\r\nThe JSONField() in playhouse.postgres_ext does not have json_dumps and json_loads options.\r\n\r\nIs it possible to add this feature to Postgres the same as for SQLite?",
"You can provide a custom json serialization handler to the Postgres JSONField already:\r\n\r\nhttps://github.com/coleifer/peewee/blob/55ef182840f869f63ea973421e3a572a117236c8/playhouse/postgres_ext.py#L293-L296",
"WOW, thanks for the fast reply. I didn't realize that.\r\nBinaryJSONField seems to work fine, even without a custom dumps function.",
"If for some reason someone ends up here for the `Your version of psycopg2 does not support JSON.` exception then try to run `pip install --upgrade psycopg2` and try again. Hope this helps someone even though it looks like a dumb or obvious solution."
] | 2019-02-05T07:52:52 | 2022-11-09T13:58:42 | 2019-02-05T18:16:54 | CONTRIBUTOR | null | Currently, `sqlite_ext.py` seems to use `json.dumps` without any options, which means the defaults `separators=(', ', ': ')` and `ensure_ascii=True` are assumed. These are a waste of space, and `ensure_ascii` also reduces human readability of non-ASCII text without real world benefit (since SQLite is Unicode-safe).
Compare the result of SQLite JSON1 extension's builtin `json` function:
```sql
INSERT INTO entry(doc) VALUES(json('{ "lang": "中文" }'));
```
=>
```sql
INSERT INTO entry VALUES(1,'{"lang":"中文"}');
```
and peewee:
```py
Entry.insert(doc=dict(lang='中文'))
```
=>
```sql
INSERT INTO entry VALUES(1,'{"lang": "\u4e2d\u6587"}');
```
It would be nice if serialization options are exposed, or just use `separators=(',', ':')` and `ensure_ascii=False`. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1844/timeline | null | completed | null | null |
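Following up on the serialization thread above: once a custom serializer can be passed to `JSONField` (as the maintainer's patch for this issue describes), a compact, UTF-8-preserving `dumps` is a one-liner with the standard library. The `JSONField(json_dumps=...)` usage in the trailing comment is a sketch based on that patch, not verified here:

```python
import json

def compact_dumps(obj):
    # Compact separators drop the padding spaces; ensure_ascii=False stores
    # raw UTF-8 characters instead of \uXXXX escapes (SQLite is Unicode-safe).
    return json.dumps(obj, separators=(',', ':'), ensure_ascii=False)

doc = {'lang': '中文'}
print(json.dumps(doc))     # default: {"lang": "\u4e2d\u6587"}
print(compact_dumps(doc))  # {"lang":"中文"} -- matches SQLite's json() output

# Per the patch referenced in this thread, the serializer could then be
# supplied when declaring the field, e.g.:
#   doc = JSONField(json_dumps=compact_dumps)
```

This produces the same compact representation as SQLite JSON1's builtin `json()` function shown in the issue body.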
https://api.github.com/repos/coleifer/peewee/issues/1843 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1843/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1843/comments | https://api.github.com/repos/coleifer/peewee/issues/1843/events | https://github.com/coleifer/peewee/pull/1843 | 406,338,175 | MDExOlB1bGxSZXF1ZXN0MjUwMDU3ODgw | 1,843 | Implement connection_connection for Proxy, fixes #1842 | {
"login": "maxnoe",
"id": 5488440,
"node_id": "MDQ6VXNlcjU0ODg0NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5488440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxnoe",
"html_url": "https://github.com/maxnoe",
"followers_url": "https://api.github.com/users/maxnoe/followers",
"following_url": "https://api.github.com/users/maxnoe/following{/other_user}",
"gists_url": "https://api.github.com/users/maxnoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxnoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxnoe/subscriptions",
"organizations_url": "https://api.github.com/users/maxnoe/orgs",
"repos_url": "https://api.github.com/users/maxnoe/repos",
"events_url": "https://api.github.com/users/maxnoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxnoe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Implementation in 91f18c6, as described in #1842."
] | 2019-02-04T13:50:28 | 2019-02-04T15:11:11 | 2019-02-04T15:11:11 | NONE | null | A fix for #1842, also adding a test case | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1843/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/1843",
"html_url": "https://github.com/coleifer/peewee/pull/1843",
"diff_url": "https://github.com/coleifer/peewee/pull/1843.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/1843.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/1842 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1842/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1842/comments | https://api.github.com/repos/coleifer/peewee/issues/1842/events | https://github.com/coleifer/peewee/issues/1842 | 406,327,877 | MDU6SXNzdWU0MDYzMjc4Nzc= | 1,842 | Cannot use Proxy connection_context decorator. | {
"login": "maxnoe",
"id": 5488440,
"node_id": "MDQ6VXNlcjU0ODg0NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5488440?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/maxnoe",
"html_url": "https://github.com/maxnoe",
"followers_url": "https://api.github.com/users/maxnoe/followers",
"following_url": "https://api.github.com/users/maxnoe/following{/other_user}",
"gists_url": "https://api.github.com/users/maxnoe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/maxnoe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/maxnoe/subscriptions",
"organizations_url": "https://api.github.com/users/maxnoe/orgs",
"repos_url": "https://api.github.com/users/maxnoe/repos",
"events_url": "https://api.github.com/users/maxnoe/events{/privacy}",
"received_events_url": "https://api.github.com/users/maxnoe/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"While `Proxy` is primarily indicated for use with deferred database objects, it is in a sense a general abstraction. Adding such a method to `Proxy` produces a bit more tight-coupling to `Database`, which I'd prefer to avoid.\r\n\r\nThe issue is that `database.connection_context()` is evaluated as an import-time side-effect. And at import-time we don't necessarily have an initialized Proxy, as you point out.\r\n\r\nWe actually could rewrite the code as:\r\n\r\n```python\r\ndatabase = Proxy()\r\n\r\ndef get_all_users():\r\n return list(User.select())\r\n\r\nget_all_users = database.connection_context()(get_all_users)\r\n```\r\n\r\nWhen we do that, it's clear that we're calling a function on the uninitialized proxy, asking that it be evaluated immediately. I know you understand all this, but I figure it's worth writing out as I'm thinking this through.\r\n\r\nSo a way to work around this would be to use the context-manager instead, within the function:\r\n\r\n```python\r\ndef get_all_users():\r\n with database.connection_context():\r\n return list(User.select())\r\n```\r\n\r\nAlso note that `connection_context()` isn't the only database decorator you might use. There are also:\r\n\r\n* `database` (the object itself supports use as a decorator or context manager)\r\n* `database.atomic` and related functions\r\n\r\nI think my preference will be to create a subclass, `DatabaseProxy`, which will implement placeholders for connection_context as the other decorators."
] | 2019-02-04T13:24:19 | 2019-02-04T15:10:44 | 2019-02-04T15:10:44 | NONE | null | Defining methods or functions with the `connection_context` decorator does not work with Proxies.
It throws `AttributeError: Cannot use uninitialized Proxy`.
I see no reason why this should be necessary at definition time.
```python
from peewee import Proxy, Model, TextField
database = Proxy()
class User(Model):
username = TextField(unique=True)
class Meta:
database = database
@database.connection_context()
def get_all_users():
return list(User.select())
``` | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1842/timeline | null | completed | null | null |
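The import-time failure the maintainer walks through above can be reproduced with a tiny stand-in proxy. This is an illustration of the mechanism only, not peewee's actual implementation; `RealDB` and its return value are invented for the sketch:

```python
class Proxy:
    """Tiny stand-in for peewee's Proxy -- illustration only."""
    def __init__(self):
        self.obj = None

    def initialize(self, obj):
        self.obj = obj

    def __getattr__(self, attr):
        # Reached only for names not found normally, i.e. forwarded ones.
        if self.obj is None:
            raise AttributeError('Cannot use uninitialized Proxy.')
        return getattr(self.obj, attr)


database = Proxy()

# Decorator expressions run at import time, so this touches the proxy
# before initialize() has been called:
try:
    @database.connection_context()
    def get_all_users_eager():
        pass
except AttributeError as exc:
    decorator_error = str(exc)

# Deferring the attribute lookup to call time works, because by then the
# proxy has been initialized -- this is the context-manager workaround.
def get_all_users():
    return database.connection_context()

class RealDB:  # hypothetical backend, invented for this sketch
    def connection_context(self):
        return 'connected'

database.initialize(RealDB())
result = get_all_users()
```

The same reasoning applies to `database.atomic()` and the other decorators mentioned above.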
https://api.github.com/repos/coleifer/peewee/issues/1841 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1841/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1841/comments | https://api.github.com/repos/coleifer/peewee/issues/1841/events | https://github.com/coleifer/peewee/issues/1841 | 406,110,357 | MDU6SXNzdWU0MDYxMTAzNTc= | 1,841 | WHERE TRUE condition | {
"login": "againagainst",
"id": 654455,
"node_id": "MDQ6VXNlcjY1NDQ1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/654455?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/againagainst",
"html_url": "https://github.com/againagainst",
"followers_url": "https://api.github.com/users/againagainst/followers",
"following_url": "https://api.github.com/users/againagainst/following{/other_user}",
"gists_url": "https://api.github.com/users/againagainst/gists{/gist_id}",
"starred_url": "https://api.github.com/users/againagainst/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/againagainst/subscriptions",
"organizations_url": "https://api.github.com/users/againagainst/orgs",
"repos_url": "https://api.github.com/users/againagainst/repos",
"events_url": "https://api.github.com/users/againagainst/events{/privacy}",
"received_events_url": "https://api.github.com/users/againagainst/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Eh, you can do `Value(1) == Value(1)`...but the way I would handle that is to accumulate a list and then use `reduce`, e.g.:\r\n\r\n```python\r\nfrom functools import reduce\r\nimport operator\r\n\r\n# ...\r\n\r\nexpr_list = []\r\nif removed:\r\n expr_list.append(User.removed.is_null(False))\r\nif deactived:\r\n expr_list.append(User.active.is_null(True))\r\n# ...\r\nquery = User.select()\r\nif expr_list:\r\n query = query.where(reduce(operator.and_, expr_list))\r\nreturn query\r\n```\r\n\r\nA couple notes on your code, by the way:\r\n\r\n* `~(Users.removed >> None)` is NOT the same as `IS NOT NULL`. It is `NOT (... IS NULL)`. Better to use `User.removed.is_null(False)` which translates to Users.removed IS NOT NULL.\r\n* It is recommended never to use plural for model names. Better to use `User`.\r\n* Returning a query is always more flexible. Rather than return a list of dicts, return a query, and only convert to a list-of-dicts at the last possible moment."
] | 2019-02-03T17:28:20 | 2019-02-04T01:54:19 | 2019-02-04T01:54:19 | NONE | null | Is there any possibility to make the simple `TRUE` condition for where clause? I would like to use it as an initial condition and combine with some extra conditions if necessary:
```
def get_users(self, removed=False, deactivated=False):
cond = Model.TRUE
if removed:
cond &= ~(Users.removed >> None)
...
return [to_dict(user) for user in Users.select().where(cond)]
``` | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1841/timeline | null | completed | null | null |
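The `reduce`-based pattern recommended above composes expressions with `operator.and_`, which simply invokes each expression's overloaded `&`. A toy expression class (invented here purely for illustration; peewee's real expression nodes overload `&` in the same spirit) makes the mechanics concrete:

```python
import operator
from functools import reduce

class Expr:
    """Toy expression node; combining with & nests the conditions."""
    def __init__(self, sql):
        self.sql = sql

    def __and__(self, other):
        return Expr('({} AND {})'.format(self.sql, other.sql))


expr_list = []
removed, deactivated = True, True
if removed:
    expr_list.append(Expr('removed IS NOT NULL'))
if deactivated:
    expr_list.append(Expr('active IS NULL'))

# Guarding with `if expr_list:` matters: reduce() over an empty list
# raises TypeError, which is why the recommended code only calls
# .where(...) when at least one condition was accumulated.
combined = reduce(operator.and_, expr_list) if expr_list else None
print(combined.sql)  # (removed IS NOT NULL AND active IS NULL)
```

With a single-element list, `reduce` returns that element unchanged, so no degenerate `TRUE` placeholder is ever needed.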
https://api.github.com/repos/coleifer/peewee/issues/1840 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1840/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1840/comments | https://api.github.com/repos/coleifer/peewee/issues/1840/events | https://github.com/coleifer/peewee/issues/1840 | 405,183,585 | MDU6SXNzdWU0MDUxODM1ODU= | 1,840 | LEFT OUTER JOIN not match item exception | {
"login": "serkandaglioglu",
"id": 1669906,
"node_id": "MDQ6VXNlcjE2Njk5MDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1669906?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/serkandaglioglu",
"html_url": "https://github.com/serkandaglioglu",
"followers_url": "https://api.github.com/users/serkandaglioglu/followers",
"following_url": "https://api.github.com/users/serkandaglioglu/following{/other_user}",
"gists_url": "https://api.github.com/users/serkandaglioglu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/serkandaglioglu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/serkandaglioglu/subscriptions",
"organizations_url": "https://api.github.com/users/serkandaglioglu/orgs",
"repos_url": "https://api.github.com/users/serkandaglioglu/repos",
"events_url": "https://api.github.com/users/serkandaglioglu/events{/privacy}",
"received_events_url": "https://api.github.com/users/serkandaglioglu/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It's joining on the company quota id field. Please share your model definitions, as I can't tell what \"intern_request\" is, and how it may differ from the company_quota_id field.",
"```\r\nclass AppInternRequest(BaseModel):\r\n\tid = AutoField()\r\n\r\n\tschool = ForeignKeyField(AppSchool,column_name=\"school_id\",to_field=\"school_id\",backref=\"school_quotas\")\r\n\tsemester = ForeignKeyField(AppSemester,column_name=\"semester_id\",to_field=\"semester_id\")\r\n\tcompany = ForeignKeyField(AppCompany,column_name=\"company_id\",to_field=\"company_id\")\r\n\tdepartment = ForeignKeyField(AppSchoolToDepartment,column_name=\"department_id\",to_field=\"department_id\")\r\n\tinserter = ForeignKeyField(User,column_name=\"inserter_id\",to_field=\"user_id\",backref=\"inserter_com_stu_quotas\")\r\n\tstudent = ForeignKeyField(AppSchoolToStudent,column_name=\"student_id\",to_field=\"student_id\",backref=\"student_com_stu_quotas\")\r\n\tcompany_address = ForeignKeyField(AppAddress,column_name=\"company_address_id\",to_field=\"address_id\",backref=\"address_quotas\")\r\n\r\n\r\n\tcompany_department = TextField(null=True)\r\n\tdescription = TextField(null=True)\r\n\tinsert_date = DateTimeField(null=True,default=peeweeDateTimeFieldNow)\r\n\tpay_status_id = IntegerField(column_name='pay_status_id', null=True)\r\n\tstudent_gender_id = IntegerField(column_name='student_gender_id', null=True)\r\n\tstudent_quota = IntegerField(null=False,default=1)\r\n\tcreator_in_company = TextField(null=True)\r\n\t\r\n\tclass Meta:\r\n\t\ttable_name = 'app_intern_request'\r\n\r\nclass AppInternPreference(BaseModel):\r\n\tid = AutoField()\r\n\r\n\tsemester = ForeignKeyField(AppSemester,column_name=\"semester_id\",to_field=\"semester_id\")\r\n\tcompany = ForeignKeyField(AppCompany,column_name=\"company_id\",to_field=\"company_id\")\r\n\tstudent = ForeignKeyField(AppSchoolToStudent,column_name=\"student_id\",to_field=\"student_id\")\r\n\tcompany_address = ForeignKeyField(AppAddress,column_name=\"company_address_id\",to_field=\"address_id\",backref=\"address_student_preferences\")\r\n\tcompany_quota = ForeignKeyField(AppInternRequest, column_name=\"company_quota_id\", 
to_field=\"id\", backref=\"quota_student_preferences\")\r\n\r\n\tinsert_date = DateTimeField(null=True,default=peeweeDateTimeFieldNow)\r\n\tdescription = TextField(null=True)\r\n\tpreference_order = IntegerField(null=False,default=10)\r\n\r\n\tclass Meta:\r\n\t\ttable_name = 'app_intern_preference'\r\n\t\t\r\n\r\n```",
"Something's not right. The foreign key from `AppInternPreference` to `AppInternRequest` is named \"company_quota\", but in your example code you're using \"x.intern_request\".\r\n\r\nThat is the problem. I have no idea what \"intern_request\" is (a property? a typo?).\r\n\r\nAnother thing -- the \"company_quota\" foreign-key is not nullable, so why are you using a LEFT OUTER join? There's no possibility of the foreign-key being null, so you should be using an INNER join.\r\n\r\nGarbage ticket, by the way. Put an ounce of effort into it next time.",
"x.intern_request should be x.company_quota it is my mistake on copy board. Sorry for that.\r\nI created two test tables for explain the problem.\r\n\r\nDATABASE TABLE SCRIPT\r\n\r\n```SQL\r\nCREATE TABLE `test_tweet` (\r\n `tweet_id` int(11) NOT NULL AUTO_INCREMENT,\r\n `tweet_content` varchar(255) NOT NULL,\r\n PRIMARY KEY (`tweet_id`)\r\n) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;\r\n\r\nCREATE TABLE `test_user` (\r\n `user_id` int(11) NOT NULL AUTO_INCREMENT,\r\n `username` varchar(255) NOT NULL,\r\n `wall_tweet_id` int(11) DEFAULT NULL,\r\n PRIMARY KEY (`user_id`),\r\n KEY `wall_tweet` (`wall_tweet_id`)\r\n) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;\r\n```\r\n\r\nNOTE : I didnt define foreignkey in database but test_user.wall_tweet_id takes test_tweet.tweet_id or null. I dont want to define foreign key in database.\r\n\r\n**MODELS**\r\n\r\n```Python\r\nclass TestTweet(BaseModel):\r\n tweet_id = AutoField()\r\n tweet_content = CharField(null=True)\r\n\r\n class Meta:\r\n table_name = 'test_tweet'\r\n\r\nclass TestUser(BaseModel):\r\n user_id = AutoField()\r\n username = CharField(null=True)\r\n wall_tweet = ForeignKeyField(TestTweet, column_name=\"wall_tweet_id\", to_field=\"tweet_id\")\r\n\r\n class Meta:\r\n table_name = 'test_user'\r\n```\r\n\r\nWhen I run below code it throws TestTweetDoesNotExist . wall_tweet_id can be null so I use JOIN.LEFT_OUTER.\r\n\r\n```\r\nusers = (TestUser\r\n .select(TestUser,TestTweet)\r\n .join(TestTweet,peewee.JOIN.LEFT_OUTER)\r\n )\r\nfor x in users:\r\n print x.wall_tweet\r\n```\r\n\r\nx.wall_street : should not write 'null' instead of raise exception?\r\n",
"You say \"wall_tweet_id\" can be NULL, but you do not have it marked as such in your model.\r\n\r\n```python\r\nwall_tweet = ForeignKeyField(TestTweet, null=True)\r\n```\r\n\r\nThis is the issue with your code. Changing `null=True` will enable it to work as expected.",
"> You say \"wall_tweet_id\" can be NULL, but you do not have it marked as such in your model.\r\n> \r\n> ```python\r\n> wall_tweet = ForeignKeyField(TestTweet, null=True)\r\n> ```\r\n> \r\n> This is the issue with your code. Changing `null=True` will enable it to work as expected.\r\n\r\nI did but still same error",
"False...\r\n\r\n```python\r\nfrom peewee import *\r\n\r\ndb = SqliteDatabase(':memory:')\r\n\r\nclass Base(Model):\r\n class Meta:\r\n database = db\r\n\r\nclass Tweet(Base):\r\n content = TextField()\r\n\r\nclass User(Base):\r\n username = TextField()\r\n favorite_tweet = ForeignKeyField(Tweet, null=True)\r\n\r\ndb.create_tables([User, Tweet])\r\n\r\nt1 = Tweet.create(content='t1')\r\nt2 = Tweet.create(content='t2')\r\nUser.create(username='u1', favorite_tweet=t1)\r\nUser.create(username='u2')\r\n\r\nimport logging\r\nlogger = logging.getLogger('peewee')\r\nlogger.addHandler(logging.StreamHandler())\r\nlogger.setLevel(logging.DEBUG)\r\n\r\nquery = (User\r\n .select(User, Tweet)\r\n .join(Tweet, JOIN.LEFT_OUTER)\r\n .order_by(User.username))\r\nfor user in query:\r\n print(user.username, user.favorite_tweet)\r\n```\r\n\r\nPRINTS:\r\n\r\n```\r\n('SELECT \"t1\".\"id\", \"t1\".\"username\", \"t1\".\"favorite_tweet_id\", \"t2\".\"id\", \"t2\".\"content\" FROM \"user\" AS \"t1\" LEFT OUTER JOIN \"tweet\" AS \"t2\" ON (\"t1\".\"favorite_tweet_id\" = \"t2\".\"id\") ORDER BY \"t1\".\"username\"', [])\r\n(u'u1', <Tweet: 1>)\r\n(u'u2', None)\r\n```",
"I delete everything and use your tutorial with Mysql. It worked. Thanks a lot."
] | 2019-01-31T10:42:26 | 2019-01-31T21:22:26 | 2019-01-31T15:41:28 | NONE | null | I use JOIN.LEFT_OUTER in my query. When I print x.intern_request I see two runned query in debug console.
**My Query**
```
items = (AppInternPreference
.select(AppInternPreference, AppInternRequest)
.join(AppInternRequest, peewee.JOIN.LEFT_OUTER)
)
for x in items:
print x.intern_request
```
**In Debug Console First Query is**
`('SELECT t1.id, t1.semester_id, t1.company_id, t1.student_id, t1.company_address_id, t1.company_quota_id, t1.insert_date, t1.description, t1.preference_order, t2.id, t2.school_id, t2.semester_id, t2.company_id, t2.department_id, t2.inserter_id, t2.student_id, t2.company_address_id, t2.company_department, t2.description, t2.insert_date, t2.pay_status_id, t2.student_gender_id, t2.student_quota, t2.creator_in_company FROM app_intern_preference AS t1 LEFT OUTER JOIN app_intern_request AS t2 ON (t1.company_quota_id = t2.id)', [])`
**In Debug Console Second Query is**
`('SELECT t1.id, t1.school_id, t1.semester_id, t1.company_id, t1.department_id, t1.inserter_id, t1.student_id, t1.company_address_id, t1.company_department, t1.description, t1.insert_date, t1.pay_status_id, t1.student_gender_id, t1.student_quota, t1.creator_in_company FROM app_intern_request AS t1 WHERE (t1.id = %s) LIMIT %s OFFSET %s', [79, 1, 0])`
In the second query peewee is trying to select intern_request again. But I already joined it in my query, so it shouldn't fetch intern_request a second time; `print x.intern_request` should print None instead.
What is wrong with my query? | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1840/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1840/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1839 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1839/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1839/comments | https://api.github.com/repos/coleifer/peewee/issues/1839/events | https://github.com/coleifer/peewee/pull/1839 | 404,870,668 | MDExOlB1bGxSZXF1ZXN0MjQ4OTU1Mzk2 | 1,839 | Add support for subselects with CTEs. | {
"login": "iksteen",
"id": 1001206,
"node_id": "MDQ6VXNlcjEwMDEyMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1001206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iksteen",
"html_url": "https://github.com/iksteen",
"followers_url": "https://api.github.com/users/iksteen/followers",
"following_url": "https://api.github.com/users/iksteen/following{/other_user}",
"gists_url": "https://api.github.com/users/iksteen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iksteen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iksteen/subscriptions",
"organizations_url": "https://api.github.com/users/iksteen/orgs",
"repos_url": "https://api.github.com/users/iksteen/repos",
"events_url": "https://api.github.com/users/iksteen/events{/privacy}",
"received_events_url": "https://api.github.com/users/iksteen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I've merged an equivalent patch, which expands on your own and adds comments and test-cases: 6707199769e0f8878988242bca347f577e3f0e7b\r\n\r\nThank you very much for digging into this and finding such a concise solution."
] | 2019-01-30T17:04:23 | 2019-01-30T18:25:02 | 2019-01-30T18:25:02 | NONE | null | By delaying the emission of CTEs and emitting them in the subquery's context, they are contained within its surrouding parentheses. To avoid adding extra parentheses to the CTE queries, force the context for the CTE list to not have the subquery flag.
This fixes #1809. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1839/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1839/timeline | null | null | false | {
"url": "https://api.github.com/repos/coleifer/peewee/pulls/1839",
"html_url": "https://github.com/coleifer/peewee/pull/1839",
"diff_url": "https://github.com/coleifer/peewee/pull/1839.diff",
"patch_url": "https://github.com/coleifer/peewee/pull/1839.patch",
"merged_at": null
} |
https://api.github.com/repos/coleifer/peewee/issues/1838 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1838/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1838/comments | https://api.github.com/repos/coleifer/peewee/issues/1838/events | https://github.com/coleifer/peewee/issues/1838 | 404,392,846 | MDU6SXNzdWU0MDQzOTI4NDY= | 1,838 | Common Table Expression and prefetch | {
"login": "iksteen",
"id": 1001206,
"node_id": "MDQ6VXNlcjEwMDEyMDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/1001206?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/iksteen",
"html_url": "https://github.com/iksteen",
"followers_url": "https://api.github.com/users/iksteen/followers",
"following_url": "https://api.github.com/users/iksteen/following{/other_user}",
"gists_url": "https://api.github.com/users/iksteen/gists{/gist_id}",
"starred_url": "https://api.github.com/users/iksteen/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/iksteen/subscriptions",
"organizations_url": "https://api.github.com/users/iksteen/orgs",
"repos_url": "https://api.github.com/users/iksteen/repos",
"events_url": "https://api.github.com/users/iksteen/events{/privacy}",
"received_events_url": "https://api.github.com/users/iksteen/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This is probably a duplicate of #1809, but I'll leave it open for @coleifer to judge.",
"Thanks for reporting. I've taken the description from this issue and added it as a comment to #1809. I'll deal with this as a single issue."
] | 2019-01-29T17:04:17 | 2019-01-29T20:22:03 | 2019-01-29T20:22:03 | NONE | null | Consider the following example (note that this is obviously a silly example, but just enough to show the problem):
```
from peewee import *
db = PostgresqlDatabase('scoreboard')
class Base(Model):
class Meta:
database = db
class Challenge(Base):
pass
class Attachment(Base):
challenge = ForeignKeyField(Challenge, backref='attachments')
cte = (
Challenge
    .select(fn.MAX(Challenge.id).alias('min_id'))
.cte('demo')
)
q = (
Challenge
.select(Challenge)
.join(cte, on=(Challenge.id == cte.c.min_id))
.with_cte(cte)
)
p = Attachment.select()
r = prefetch(q, p)
print(list(r))
```
The generated SQL for the prefetch of Attachment will be:
```
SELECT "t1"."id", "t1"."challenge_id" FROM "attachment" AS "t1"
WHERE (
"t1"."challenge_id" IN
WITH "demo" AS ((SELECT MAX("t2"."id") AS "min_id" FROM "challenge" AS "t2"))
(SELECT "t3"."id" FROM "challenge" AS "t3" INNER JOIN "demo" ON ("t3"."id" = "demo"."min_id"))
)
```
While the CTE queries and the subquery themselves do get extra parentheses, the base query is not parenthesized, leaving the `WITH` part bare and resulting in invalid syntax.
The expected query would be:
```
SELECT "t1"."id", "t1"."challenge_id" FROM "attachment" AS "t1"
WHERE (
    "t1"."challenge_id" IN (
        WITH "demo" AS (SELECT MAX("t2"."id") AS "min_id" FROM "challenge" AS "t2")
        SELECT "t3"."id" FROM "challenge" AS "t3" INNER JOIN "demo" ON ("t3"."id" = "demo"."min_id")
    )
)
```
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1838/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1837 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1837/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1837/comments | https://api.github.com/repos/coleifer/peewee/issues/1837/events | https://github.com/coleifer/peewee/issues/1837 | 403,573,052 | MDU6SXNzdWU0MDM1NzMwNTI= | 1,837 | PooledPostgresqlExtDatabase is imported as None | {
"login": "LearnedVector",
"id": 8495552,
"node_id": "MDQ6VXNlcjg0OTU1NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8495552?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LearnedVector",
"html_url": "https://github.com/LearnedVector",
"followers_url": "https://api.github.com/users/LearnedVector/followers",
"following_url": "https://api.github.com/users/LearnedVector/following{/other_user}",
"gists_url": "https://api.github.com/users/LearnedVector/gists{/gist_id}",
"starred_url": "https://api.github.com/users/LearnedVector/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/LearnedVector/subscriptions",
"organizations_url": "https://api.github.com/users/LearnedVector/orgs",
"repos_url": "https://api.github.com/users/LearnedVector/repos",
"events_url": "https://api.github.com/users/LearnedVector/events{/privacy}",
"received_events_url": "https://api.github.com/users/LearnedVector/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You need to install psycopg2.",
"awesome! thank you for the super fast response."
] | 2019-01-27T17:10:55 | 2019-01-27T17:23:27 | 2019-01-27T17:14:56 | NONE | null | ```python
from playhouse.pool import PooledPostgresqlExtDatabase
print(PooledPostgresqlExtDatabase)  # prints None
```
Digging into the code, it looks like a silent import error is causing this. Any idea why?
https://github.com/coleifer/peewee/blob/master/playhouse/pool.py#L282
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1837/timeline | null | completed | null | null |
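The maintainer's one-line answer ("You need to install psycopg2") points at the cause: playhouse appears to guard optional database drivers with a try/except around the import (see the pool.py link in the issue body), and when the driver is missing the exported class name is simply bound to `None`. A simplified sketch of that guard, with deliberately invented module and class names so the "missing driver" branch always runs:

```python
# Simplified version of the optional-import guard used in playhouse:
try:
    import a_driver_that_is_not_installed  # stand-in for psycopg2
except ImportError:
    a_driver_that_is_not_installed = None

if a_driver_that_is_not_installed is not None:
    class PooledBackend(object):
        """Only defined when the driver is importable."""
else:
    # The name still exists, so `from module import PooledBackend`
    # succeeds -- but what you get is None, exactly as reported above.
    PooledBackend = None

print(PooledBackend)  # prints None here, since the stand-in driver is missing
```

So the import itself never fails; the symptom is only visible when you try to use (or print) the imported name.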
https://api.github.com/repos/coleifer/peewee/issues/1836 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1836/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1836/comments | https://api.github.com/repos/coleifer/peewee/issues/1836/events | https://github.com/coleifer/peewee/issues/1836 | 403,566,178 | MDU6SXNzdWU0MDM1NjYxNzg= | 1,836 | select from multiple subqueries | {
"login": "michalchrzastek",
"id": 38867528,
"node_id": "MDQ6VXNlcjM4ODY3NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/38867528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michalchrzastek",
"html_url": "https://github.com/michalchrzastek",
"followers_url": "https://api.github.com/users/michalchrzastek/followers",
"following_url": "https://api.github.com/users/michalchrzastek/following{/other_user}",
"gists_url": "https://api.github.com/users/michalchrzastek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michalchrzastek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michalchrzastek/subscriptions",
"organizations_url": "https://api.github.com/users/michalchrzastek/orgs",
"repos_url": "https://api.github.com/users/michalchrzastek/repos",
"events_url": "https://api.github.com/users/michalchrzastek/events{/privacy}",
"received_events_url": "https://api.github.com/users/michalchrzastek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Your code is completely unreadable. Please reduce to a minimal, clear example if you think there is a bug.\r\n\r\nHowever this does not seem like a bug report, but rather like a question. In that case, the proper place to post is StackOverflow or you can try asking in the IRC channel (peewee on freenode), or on the mailing list.\r\n\r\nA hint is that you should probably use `from_()` to list your multiple query sources. Or refactor to use common table expressions.",
"There are some examples that may help, here:\r\n\r\nhttp://docs.peewee-orm.com/en/latest/peewee/query_examples.html#joins-and-subqueries",
"Ok, what about this:\r\n\r\n```\r\nSelect q1.name,q1.countA,q2.countB\r\nFROM\r\n(select name, count()countA )q1,\r\n(select name, count()countB )q2\r\n```\r\n\r\nHow to combine 2 subqueries into 1?",
"```python\r\n\r\nq1 = Foo.select(Foo.name, fn.COUNT(Foo.bar))\r\nq2 = Foo.select(Foo.name, fn.COUNT(Foo.baz).alias('count'))\r\n\r\nquery = Foo.select(q1.c.name, q2.c.name, q2.c.count).from_(q1, q2)\r\n```\r\n\r\nUntested, but that's the jist of it.",
"Thanks, this helped."
] | 2019-01-27T16:03:07 | 2019-01-31T17:33:59 | 2019-01-27T16:40:05 | NONE | null | Hi, trying to combine multiple subqueries into one, but can't see any guide for this, can you advise please.
```
SELECT q5.grpby,
q5.cat AS category,
q5.sumallmonths AS total,
q4.avg12month AS avg12mnt,
FROM (
SELECT tag_list."groupName" AS cat,
tag_list."tagGroup" AS grpby,
sum(transactions.trans_amnt) AS sumallmonths
FROM tag_list
LEFT JOIN transactions ON transactions.trans_cat = tag_list.id
GROUP BY tag_list."tagGroup", tag_list."groupName"
) q5,
( SELECT tag_list."groupName" AS cat,
round(sum(transactions.trans_amnt) / 12::numeric, 2) AS avg12month
FROM tag_list
LEFT JOIN transactions ON transactions.trans_cat = tag_list.id
AND transactions.trans_date >= date_trunc('month'::text, now()::date - '1 year'::interval)
AND transactions.trans_date <= (date_trunc('month'::text, now())::date - 1)
GROUP BY tag_list."groupName"
) q4
WHERE q4.cat::text = q5.cat::text
ORDER BY q5.grpby;
```
and my peewee attempt:
```
def get_statistics(self):
q5=tag_list.select(
tag_list.groupName.alias('tag')
,tag_list.tagGroup.alias('grpBy')
,fn.SUM(transactions.trans_amnt).alias('sumAllMonths'))
.join(transactions, on=(transactions.trans_cat == tag_list.id)).group_by(tag_list.tagGroup,tag_list.groupName)).order_by(tag_list.tagGroup)
q4=(tag_list.select(
tag_list.groupName.alias('tag')
,tag_list.tagGroup.alias('grpBy')
,fn.SUM(transactions.trans_amnt).alias('avg12Months'))
.join(transactions, on=(transactions.trans_cat == tag_list.id)).where(transactions.trans_date.between(first12MonthsAgo,lastOfPrevMonth)).group_by(tag_list.tagGroup,tag_list.groupName)).order_by(tag_list.tagGroup)
query = q4 & q5
return query
```
the webpage doesn't throw any error, and no results | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1836/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1836/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1835 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1835/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1835/comments | https://api.github.com/repos/coleifer/peewee/issues/1835/events | https://github.com/coleifer/peewee/issues/1835 | 403,482,195 | MDU6SXNzdWU0MDM0ODIxOTU= | 1,835 | Model.get() method fails silently | {
"login": "arel",
"id": 153497,
"node_id": "MDQ6VXNlcjE1MzQ5Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/153497?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/arel",
"html_url": "https://github.com/arel",
"followers_url": "https://api.github.com/users/arel/followers",
"following_url": "https://api.github.com/users/arel/following{/other_user}",
"gists_url": "https://api.github.com/users/arel/gists{/gist_id}",
"starred_url": "https://api.github.com/users/arel/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/arel/subscriptions",
"organizations_url": "https://api.github.com/users/arel/orgs",
"repos_url": "https://api.github.com/users/arel/repos",
"events_url": "https://api.github.com/users/arel/events{/privacy}",
"received_events_url": "https://api.github.com/users/arel/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This gets translated into:\r\n\r\n```sql\r\nselect * from image where 1442\r\n```\r\n\r\nSo the db interprets that, most likely, as simply `true` and returns the first row of this arbitrarily-large result set.\r\n\r\nI'm aware of this issue, but not positive about the best way to fix it. Python is highly dynamic, and so is peewee. Adding validation to all SQL clauses (not just WHERE, which is in this example), seems prohibitively complicated.\r\n\r\nPerhaps, though, the `.get()` method can be fixed up.",
"I've chosen to resolve this by supporting .get() with a single integer parameter, treating it as `.get(primary-key == value)`. So in the common case where one just passes an int, it will work as expected.",
"OK, thanks!"
] | 2019-01-26T20:53:18 | 2019-01-28T13:20:45 | 2019-01-27T14:17:10 | NONE | null | I came across a behavior with [`Model.get`](http://docs.peewee-orm.com/en/latest/peewee/api.html#Model.get) I found confusing and could be improved or at least better documented.
Calling `Model.get()` is a shorthand for "selecting with a limit of 1", which is clear. When you provide it filters, such as `.get(id=1442)`, things work as expected too. What was unexpected to me was that if you forget to specify the filter parameter by name such as `.get(1442)` the query does *not* fail. Rather, it returns an arbitrary result as if `.get(True)` or simply `.get()` were given. In my case, this led to a subtle bug because the two results were coincidentally very similar. For instance:
```
In [15]: Image.get(id=1442)
Out[15]: <Image: 1442>
In [16]: Image.get(1442)
Out[16]: <Image: 1433>
```
Is this the intended / desired behavior for some reason I'm missing? Or, would it be possible to raise an exception whenever the query expression is not an instance of type [Expression](http://docs.peewee-orm.com/en/latest/peewee/api.html#Expression)?
P.S. thanks for your work on this library! | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1835/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1834 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1834/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1834/comments | https://api.github.com/repos/coleifer/peewee/issues/1834/events | https://github.com/coleifer/peewee/issues/1834 | 401,512,324 | MDU6SXNzdWU0MDE1MTIzMjQ= | 1,834 | Mariadb 10.3.3+ change on values() function | {
"login": "altunyurt",
"id": 126674,
"node_id": "MDQ6VXNlcjEyNjY3NA==",
"avatar_url": "https://avatars.githubusercontent.com/u/126674?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/altunyurt",
"html_url": "https://github.com/altunyurt",
"followers_url": "https://api.github.com/users/altunyurt/followers",
"following_url": "https://api.github.com/users/altunyurt/following{/other_user}",
"gists_url": "https://api.github.com/users/altunyurt/gists{/gist_id}",
"starred_url": "https://api.github.com/users/altunyurt/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/altunyurt/subscriptions",
"organizations_url": "https://api.github.com/users/altunyurt/orgs",
"repos_url": "https://api.github.com/users/altunyurt/repos",
"events_url": "https://api.github.com/users/altunyurt/events{/privacy}",
"received_events_url": "https://api.github.com/users/altunyurt/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for the heads-up. This looks like a pain.\r\n\r\n* MySQL uses VALUES: https://dev.mysql.com/doc/refman/8.0/en/insert-on-duplicate.html\r\n* MariaDB's changes do not appear to be backwards-compatible: https://mariadb.com/kb/en/library/values-value/"
] | 2019-01-21T21:14:08 | 2019-01-22T01:10:21 | 2019-01-22T01:10:21 | NONE | null | This is more like a heads up! issue.
https://mariadb.com/kb/en/library/values-value/ states that
```
INSERT INTO t (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE c=VALUES(a)+VALUES(b);
```
should be written as
```
INSERT INTO t (a,b,c) VALUES (1,2,3),(4,5,6)
ON DUPLICATE KEY UPDATE c=VALUE(a)+VALUE(b); <== this has changed
```
starting from 10.3.3.
Apparently peewee will fail at "on duplicate ... update ... values" queries on mariadb 10.3.3+
Judging from https://github.com/coleifer/peewee/blob/55f515be7012fcf8e33fe7f0b2726b7a36b729c0/tests/sql.py#L1287 | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1834/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1833 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1833/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1833/comments | https://api.github.com/repos/coleifer/peewee/issues/1833/events | https://github.com/coleifer/peewee/issues/1833 | 399,576,409 | MDU6SXNzdWUzOTk1NzY0MDk= | 1,833 | Feature Request: Restore `Query` __repr__/__str__ functionality | {
"login": "ibushong",
"id": 9298422,
"node_id": "MDQ6VXNlcjkyOTg0MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9298422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibushong",
"html_url": "https://github.com/ibushong",
"followers_url": "https://api.github.com/users/ibushong/followers",
"following_url": "https://api.github.com/users/ibushong/following{/other_user}",
"gists_url": "https://api.github.com/users/ibushong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibushong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibushong/subscriptions",
"organizations_url": "https://api.github.com/users/ibushong/orgs",
"repos_url": "https://api.github.com/users/ibushong/repos",
"events_url": "https://api.github.com/users/ibushong/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibushong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"That has been replaced by calling the `.sql()` method on a query."
] | 2019-01-15T23:09:19 | 2019-01-16T01:11:31 | 2019-01-16T01:11:31 | NONE | null | In peewee v2 it was convenient when debugging to print out the raw sql of a query with just `print query`, but in v3 this just gives the generic `<peewee.ModelSelect object at 0x10abce150>`. For now I've just added it to my base peewee model, leveraging `sql()`, but I think this might be a nice feature to add back in v3?
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1833/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1832 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1832/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1832/comments | https://api.github.com/repos/coleifer/peewee/issues/1832/events | https://github.com/coleifer/peewee/issues/1832 | 399,569,209 | MDU6SXNzdWUzOTk1NjkyMDk= | 1,832 | When create_tables: OperationalError: near "AS": syntax error | {
"login": "rafaleo",
"id": 25869026,
"node_id": "MDQ6VXNlcjI1ODY5MDI2",
"avatar_url": "https://avatars.githubusercontent.com/u/25869026?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rafaleo",
"html_url": "https://github.com/rafaleo",
"followers_url": "https://api.github.com/users/rafaleo/followers",
"following_url": "https://api.github.com/users/rafaleo/following{/other_user}",
"gists_url": "https://api.github.com/users/rafaleo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/rafaleo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/rafaleo/subscriptions",
"organizations_url": "https://api.github.com/users/rafaleo/orgs",
"repos_url": "https://api.github.com/users/rafaleo/repos",
"events_url": "https://api.github.com/users/rafaleo/events{/privacy}",
"received_events_url": "https://api.github.com/users/rafaleo/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This class is there - it's not the case. It doesn't work even if I pass only [Student] table.\r\n```\r\nclass User(BaseModel):\r\n username = pw.CharField(unique=True)\r\n```",
"What tutorial are you referring to? Please share the link.\r\n\r\nThe problem is your usage of `IdentityField`. You are using a `SqliteDatabase` in your example, but the documentation for `IdentityField` *clearly* states that this field is for postgresql 10.0 and newer ONLY.\r\n\r\nhttp://docs.peewee-orm.com/en/latest/peewee/api.html#IdentityField\r\n\r\nYou should use an `IntegerField` for storing a point value.",
"Ok. I've found that `IdentityField` is somehow a wrong wolf, I've changed to `IntegerField` and it's all right. If it's not an issue just ignore this."
] | 2019-01-15T22:45:36 | 2019-01-15T22:55:06 | 2019-01-15T22:48:42 | NONE | null | Hi. I've just installed peewee for the first time and try to execute some example. Based on your tutorial one (which works) I've created this simple code:
```
import peewee as pw
db = pw.SqliteDatabase('my_students.db')
class BaseModel(pw.Model):
class Meta:
database = db
class Student(BaseModel):
name = pw.CharField(unique=True)
points = pw.IdentityField()
db.connect()
db.create_tables([User, Student])
```
but it throws an error:
```
db.create_tables([User, Student])
Traceback (most recent call last):
File "<ipython-input-83-9aeeec4d4064>", line 1, in <module>
db.create_tables([User, Student])
File "C:\Anaconda3\lib\site-packages\peewee.py", line 2789, in create_tables
model.create_table(**options)
File "C:\Anaconda3\lib\site-packages\peewee.py", line 5670, in create_table
cls._schema.create_all(safe, **options)
File "C:\Anaconda3\lib\site-packages\peewee.py", line 4919, in create_all
self.create_table(safe, **table_options)
File "C:\Anaconda3\lib\site-packages\peewee.py", line 4805, in create_table
self.database.execute(self._create_table(safe=safe, **options))
File "C:\Anaconda3\lib\site-packages\peewee.py", line 2653, in execute
return self.execute_sql(sql, params, commit=commit)
File "C:\Anaconda3\lib\site-packages\peewee.py", line 2647, in execute_sql
self.commit()
File "C:\Anaconda3\lib\site-packages\peewee.py", line 2438, in __exit__
reraise(new_type, new_type(*exc_args), traceback)
File "C:\Anaconda3\lib\site-packages\peewee.py", line 177, in reraise
raise value.with_traceback(tb)
File "C:\Anaconda3\lib\site-packages\peewee.py", line 2640, in execute_sql
cursor.execute(sql, params or ())
OperationalError: near "AS": syntax error
```
Can't find any key difference between my code and tutorial. Please give me a hint if it's not a bug.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1832/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1831 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1831/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1831/comments | https://api.github.com/repos/coleifer/peewee/issues/1831/events | https://github.com/coleifer/peewee/issues/1831 | 399,390,724 | MDU6SXNzdWUzOTkzOTA3MjQ= | 1,831 | postgresql: create doesn't support returning | {
"login": "james-lawrence",
"id": 2835871,
"node_id": "MDQ6VXNlcjI4MzU4NzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/2835871?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/james-lawrence",
"html_url": "https://github.com/james-lawrence",
"followers_url": "https://api.github.com/users/james-lawrence/followers",
"following_url": "https://api.github.com/users/james-lawrence/following{/other_user}",
"gists_url": "https://api.github.com/users/james-lawrence/gists{/gist_id}",
"starred_url": "https://api.github.com/users/james-lawrence/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/james-lawrence/subscriptions",
"organizations_url": "https://api.github.com/users/james-lawrence/orgs",
"repos_url": "https://api.github.com/users/james-lawrence/repos",
"events_url": "https://api.github.com/users/james-lawrence/events{/privacy}",
"received_events_url": "https://api.github.com/users/james-lawrence/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I've discovered how to do it.... mainly by happenstance. but my original two issues still hold.\r\n\r\n```\r\nins = Example.insert(title = details.title).returning(Example).objects(Example).execute()\r\n```",
"Correct. `create()` is not intended to be used with `returning()` since the return value of `create()` is already a model instance (with any auto-incrementing ID populated).\r\n\r\n`returning()` can be used with `insert()` (or `insert_many`). By default it returns tuples of values, but as you noted, it is possible to return model instances.",
"@coleifer documentation should be improved for returning at a minimum before this is closed. took far too long to discover how to get general (and expected) response from peewee when using returning.",
"What specifically did you find confusing?",
"specifically [returning clause docs](http://docs.peewee-orm.com/en/latest/peewee/querying.html#returning-clause)\r\n\r\nmakes no reference to the ability to adjust the results of the query. an explicit link back to the [previous section](http://docs.peewee-orm.com/en/latest/peewee/querying.html#retrieving-row-tuples-dictionaries-namedtuples) would be very helpful. as when I was looking into the returning clause I never saw the ability to adjust the deserialization section.\r\n\r\nwhich is pretty much the first thing someone is going to want to do after looking into returning().",
"Thanks, I noticed a quirk when adding tests... UPDATE and DELETE will return `Model` instances by default when you have a `RETURNING` clause. I think there was a regression that caused `INSERT` to return `tuple` objects by default (when you have a non-default returning clause).\r\n\r\nI've also addressed this inconsistency, so that now if you specify a RETURNING clause on an INSERT, the default row type returned will be model instances.\r\n\r\nI've also updated the docs.",
"http://docs.peewee-orm.com/en/latest/peewee/querying.html#returning-clause",
"looks good thanks."
] | 2019-01-15T15:14:54 | 2019-01-15T18:49:14 | 2019-01-15T18:41:11 | CONTRIBUTOR | null | was attempting to insert records into the database and ran into this issue.
problem is two fold:
- create doesn't support returning afaik.
- documentation around returning is [rather sparse](http://docs.peewee-orm.com/en/latest/peewee/querying.html#returning-clause).
```python
m = Example.create(title = details.title).returning(Example)
# INSERT INTO "example" ("title") VALUES (%s) RETURNING "example"."id"
# *** AttributeError: 'KbImports' object has no attribute 'returning'
```
```
m = Example.insert(title = details.title).returning(Example).execute()
x = [y for y in m]
x[0].id
# INSERT INTO "example" ("title") VALUES (%s) RETURNING "example"."id", "kb_imports"."created_at", "kb_imports"."title", "kb_imports"."updated_at"
# *** AttributeError: 'tuple' object has no attribute 'id'
```
how does one go from an insert to a model object.... | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1831/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1831/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1830 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1830/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1830/comments | https://api.github.com/repos/coleifer/peewee/issues/1830/events | https://github.com/coleifer/peewee/issues/1830 | 399,230,734 | MDU6SXNzdWUzOTkyMzA3MzQ= | 1,830 | Does Peewee supports mysql Date() function? | {
"login": "littledemon",
"id": 6930793,
"node_id": "MDQ6VXNlcjY5MzA3OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6930793?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/littledemon",
"html_url": "https://github.com/littledemon",
"followers_url": "https://api.github.com/users/littledemon/followers",
"following_url": "https://api.github.com/users/littledemon/following{/other_user}",
"gists_url": "https://api.github.com/users/littledemon/gists{/gist_id}",
"starred_url": "https://api.github.com/users/littledemon/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/littledemon/subscriptions",
"organizations_url": "https://api.github.com/users/littledemon/orgs",
"repos_url": "https://api.github.com/users/littledemon/repos",
"events_url": "https://api.github.com/users/littledemon/events{/privacy}",
"received_events_url": "https://api.github.com/users/littledemon/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"```python\r\nquery = (ShopTable\r\n .select(\r\n fn.DATE(ShopTable.mydate).alias('date_value'), \r\n fn.SUM(ShopTable.price).alias('price_sum'),\r\n ShopTable.username)\r\n .where(fn.DATE(ShopTable.mydate) > '2011-01-07')\r\n .group_by(fn.DATE(ShopTable.mydate), ShopTable.username))\r\n```",
"In the future please post questions like \"how do I...\" to StackOverflow. This is the issue tracker."
] | 2019-01-15T08:13:30 | 2019-01-15T17:24:39 | 2019-01-15T17:24:16 | NONE | null | I have below command :
```
SELECT Date(mydate),sum(price),username
from shop_table
where Date(mydate)>'2011-01-07'
GROUP BY Date(mydate),username
```
Because the dates include the time, I convert them into 'only date' using Date() function, so regardless of hours I can understand the total amount of sales in one day.
But is there an ability in peewee using 'fn' to do this? | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1830/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1829 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1829/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1829/comments | https://api.github.com/repos/coleifer/peewee/issues/1829/events | https://github.com/coleifer/peewee/issues/1829 | 398,755,793 | MDU6SXNzdWUzOTg3NTU3OTM= | 1,829 | Issue with `Meta.table_alias` (or `Meta.alias`) | {
"login": "ibushong",
"id": 9298422,
"node_id": "MDQ6VXNlcjkyOTg0MjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/9298422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ibushong",
"html_url": "https://github.com/ibushong",
"followers_url": "https://api.github.com/users/ibushong/followers",
"following_url": "https://api.github.com/users/ibushong/following{/other_user}",
"gists_url": "https://api.github.com/users/ibushong/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ibushong/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ibushong/subscriptions",
"organizations_url": "https://api.github.com/users/ibushong/orgs",
"repos_url": "https://api.github.com/users/ibushong/repos",
"events_url": "https://api.github.com/users/ibushong/events{/privacy}",
"received_events_url": "https://api.github.com/users/ibushong/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I ran into this issue with peewee 3.8.0/3.8.1. 3.7.1 (< 3.8.0) works as expected.",
"Thanks for reporting. I will look into this.",
"This seems like it will not be very easy to resolve, given the architecture of the query-builder and the way I've chosen to implement the current `table_alias` functionality.\r\n\r\nThe reason for the breakage is due to the switch to using fully-qualified column names in UPDATE queries (and in some other places as well).\r\n\r\nI think the best fix for now is to actually remove support for `table_alias`, as it is quite broken in its current form.",
"I see. Just for some background, the reason I needed to use `table_alias` in v2 was because I has some raw SQL expressions where I needed to reference the table names, which was difficult using the default \"t1\", \"t2\", etc. names since they change depending on the order of joins. e.g. I couldn't just do `.order_by(SQL(\"t1.field\"))` because \"t1\" could correspond to different tables, depending on how the query was generated. By using `table_alias = \"tag\"`, I could do `.order_by(SQL(\"tag.field\"))` (using over-simplified expressions here).\r\n\r\nIs there a way in v3 for me to reference the table name in a raw expression?\r\n\r\nOr also, is there a way to configure peewee to use the plain table names in queries, rather than aliasing them to the \"t1\", \"t2\", etc names (I'm curious on why it does this?)\r\n",
"Do you have an example of where you need to use plain SQL expressions (i.e., via `SQL()`)? Peewee's query-builder should hopefully be able to handle generating the correct SQL.\r\n\r\nYou can still call `ModelClass.alias()` and provide an alias. For example:\r\n\r\n```python\r\nIn [4]: class Tag(Model):\r\n ...: entry = ForeignKeyField(Entry)\r\n ...: tag = TextField()\r\n ...: class Meta:\r\n ...: database = db\r\n ...: \r\n\r\nIn [5]: TX = Tag.alias('tx')\r\n\r\nIn [6]: TX.select(TX.tag).sql()\r\nOut[6]: ('SELECT \"tx\".\"tag\" FROM \"tag\" AS \"tx\"', [])\r\n```",
"I needed to use a `CASE` expression in `order_by()`, so had to do raw SQL since (AFAIK) v2 doesn't support `CASE`. But now I see that v3 does, so that might solve my problem. I will try re-writing it.\r\n\r\nYour example should also work too though.\r\n\r\nThanks!",
"Update: I was able to fix up the CASE statement by implementing with the new `Case()` operator. \r\n\r\nHowever I also have some raw MATCH/AGAINST statements that also reference static table names, e.g. `.order_by(SQL(\"MATCH(venue.name) AGAINST(%s IN BOOLEAN MODE)\", [name_like + \"*\"]))`.\r\n\r\nIt doesn't look like v3 has a native `Match()`operator, and the aliasing example above actually won't work for me because I use query chaining a lot, so if the base query uses `VenueAlias = Venue.alias('venue')` (in order for the match expression to work), then chaining onto the query at a higher-level (e.g. `q = q.where(Venue.name == \"something\")`) doesn't work (I would prefer not to have to modify all these top-level queries to account for the aliasing).\r\n\r\nIs there a better way to write this MATCH expression so that it gets the correct table name?\r\n\r\nI feel like all of this would be resolved if the query generator used the actual table names instead of \"t1\", \"t2\"... I actually tried this small patch and it solved a lot of my issues:\r\n\r\n```python\r\nclass AliasManager(object):\r\n ...\r\n def add(self, source):\r\n if source not in self.mapping:\r\n self._counter += 1\r\n # self[source] = 't%d' % self._counter # ORIG\r\n self[source] = '%s' % source._path # Use actual table name\r\n return self.mapping[source]\r\n```\r\n\r\nThoughts?",
"> I feel like all of this would be resolved if the query generator used the actual table names instead of \"t1\", \"t2\"... I actually tried this small patch and it solved a lot of my issues:\r\n\r\nPeewee does use the actual table names for queries besides `SELECT`, but since a `SELECT` query may reference the same table twice in different contexts (e.g. a self-join, or joining the same table twice), it's easier to just use computed aliases.\r\n\r\nPlus, one only really worries about the aliases when trying to mix the query-builder with hand-written SQL, which as I said should usually be avoidable.\r\n\r\nFor `MATCH...AGAINST` I'd suggest wrapping it up in the query-builder primitives like `NodeList`:\r\n\r\n```python\r\n# MATCH(venue.name) AGAINST(%s IN BOOLEAN MODE)\r\ndef Match(field, value):\r\n match = fn.MATCH(field)\r\n val_in_boolean_mode = NodeList((value, SQL('IN BOOLEAN MODE')))\r\n against = fn.AGAINST(val_in_boolean_mode)\r\n return NodeList((match, against))\r\n```\r\n\r\nYou can then:\r\n\r\n```python\r\nVenue.select().order_by(Match(Venue.name, 'foo*'))\r\n```",
"Ah, right.\r\n\r\nThis `Match()` implementation is definitely the way to go. Thanks so much for the help! (and for peewee in general)\r\n",
"Glad to help!"
] | 2019-01-14T05:45:55 | 2019-01-16T13:43:20 | 2019-01-15T22:09:51 | NONE | null | I'm encountering this issue when trying to migrate from v2 to v3.
Let's say I have a DB table called `tag2` that I want to reference with a peewee model call `Tag`. If I set `Meta.table_name = 'tag2'`, everything works fine. But if I add `Meta.table_alias = 'tag_alias'`, then UPDATE queries come out like this:
```
UPDATE `tag2` SET `name` = 'something else' WHERE (`tag_alias`.`id` = 3)
```
Interestingly, If I set `Meta.alias = 'tag_alias'`, then UPDATE queries comes out correct, but then SELECT/JOIN queries do not use the alias.
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1829/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1829/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1828 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1828/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1828/comments | https://api.github.com/repos/coleifer/peewee/issues/1828/events | https://github.com/coleifer/peewee/issues/1828 | 398,599,726 | MDU6SXNzdWUzOTg1OTk3MjY= | 1,828 | Prefetch many to many fields | {
"login": "Behoston",
"id": 7823689,
"node_id": "MDQ6VXNlcjc4MjM2ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7823689?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Behoston",
"html_url": "https://github.com/Behoston",
"followers_url": "https://api.github.com/users/Behoston/followers",
"following_url": "https://api.github.com/users/Behoston/following{/other_user}",
"gists_url": "https://api.github.com/users/Behoston/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Behoston/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Behoston/subscriptions",
"organizations_url": "https://api.github.com/users/Behoston/orgs",
"repos_url": "https://api.github.com/users/Behoston/repos",
"events_url": "https://api.github.com/users/Behoston/events{/privacy}",
"received_events_url": "https://api.github.com/users/Behoston/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Because you're still going through the many-to-many accessor, you're incurring a query each time you list the tags.\r\n\r\nYou might check out the `prefetch` helper: http://docs.peewee-orm.com/en/latest/peewee/relationships.html#using-prefetch\r\n\r\nBut the problem is this is a relational database, and it doesn't exactly work to say \"get me just one of this and then all the related of tags\" in a single query. You can of course:\r\n\r\n```python\r\nfield_of_study = FieldOfStudy.get(FieldOfStudy.id == 3)\r\ntags = [tag for tag in field_of_study.tags]\r\n```\r\n\r\nOne trick I use is to use `GROUP_CONCAT` to get a list of tags in a single query:\r\n\r\n```python\r\nquery = (FieldOfStudy\r\n .select(FieldOfStudy, fn.GROUP_CONCAT(Tag.tag).alias('tag_list'))\r\n .join(StudyTag, JOIN.LEFT_OUTER)\r\n .join(Tag)\r\n .where(FieldOfStudy.id == 3)\r\n .group_by(FieldOfStudy))\r\n\r\nfos = query.get()\r\nprint(fos.tag_list)\r\n# tag1,tag2,tag3\r\n```",
"I have a quick follow-up question. I apologize for the naivete of the question, I'm not very experienced with SQL. I have these two models (simplified), using Sqlite as a backend: \r\n\r\n```\r\nclass Node(Model):\r\n replay_entries = ManyToManyField(ReplayEntry, backref=\"owners\")\r\n\r\nclass ReplayEntry(Model):\r\n observation = BlobField(null=True)\r\n```\r\n\r\nwhen I try to run:\r\n```\r\nfrom peewee import JOIN, fn\r\nquery = (Node\r\n .select(Node, fn.GROUP_CONCAT(ReplayEntry.id).alias('replay_list'))\r\n .join(Node.replay_entries.get_through_model(), JOIN.LEFT_OUTER)\r\n .join(ReplayEntry)\r\n .where(Node.unique_id == 3)\r\n .group_by(ReplayEntry))\r\nfos = query.get()\r\n```\r\nThe object returned has a replay_list that contains just a single integer (e.g. 2), not a list of ReplayEntry ids. Though at the particular moment I run it there should be 4000 ids. I think the single integer returned is specifically the id of the first ReplayEntry in the node's replay_entries list. If I change .get() to .limit(10), then I get 10 Nodes returned, each with a replay_list that corresponds to just the next replay_entry id. I.e. a Node with replay_list=2, then a Node with replay_list=4, then a Node with replay_list=7. Where I would instead expect a single Node with replay_list=[2, 4, 7...]\r\n\r\nBut I think there might be something more fundamental about this query that I'm not understanding. How does it know that I want the ReplayEntrys specifically on replay_entries? (What if I had another ManyToManyField that referenced ReplayEntry?) \r\n\r\nOverall all I want to do is have a quick way to get all of the replay_entries ids associated with a given node. (The naive way using `[entry.id for entry in node.replay_entries]` is unfortunately too slow for my purposes, so I'm trying to figure out how to speed it up.) \r\n\r\nThanks!\r\n",
"If you want the ReplayEntry IDs concatenated then you need to be grouping by `Node` rather than `ReplayEntry`.",
"Oh gotcha, thanks!"
] | 2019-01-12T22:51:04 | 2021-03-01T18:42:46 | 2019-01-13T15:35:00 | NONE | null | I have a problem with prefetching a many-to-many field.
I want to fetch a FieldOfStudy with all of its tags. I found https://github.com/coleifer/peewee/issues/1707, but it's not applicable to my case. I want a ready-to-use object.
My models:
```python
class Model(peewee.Model):
    class Meta:
        abstract = True
        database = db


class Tag(Model):
    name = peewee.CharField()


class FieldOfStudy(Model):
    name = peewee.CharField()
    tags = peewee.ManyToManyField(Tag, backref='fields_of_study')


StudyTag = FieldOfStudy.tags.get_through_model()
```
And my query:
```python
field_of_study = models.FieldOfStudy.select(
    models.FieldOfStudy,
    models.StudyTag,
    models.Tag,
).join(
    models.StudyTag,
).join(
    models.Tag,
).where(
    models.FieldOfStudy.id == 3,
).get()
print(list(field_of_study.tags))
print(list(field_of_study.tags))
print(list(field_of_study.tags))
print(list(field_of_study.tags))
```
Console output:
```python
('SELECT "t1"."id", "t1"."name", "t2"."id", "t2"."fieldofstudy_id", "t2"."tag_id", "t3"."id", "t3"."name" FROM "fieldofstudy" AS "t1" INNER JOIN "fieldofstudy_tag_through" AS "t2" ON ("t2"."fieldofstudy_id" = "t1"."id") INNER JOIN "tag" AS "t3" ON ("t2"."tag_id" = "t3"."id") WHERE ("t1"."id" = %s) LIMIT %s OFFSET %s', [3, 1, 0])
('SELECT "t1"."id", "t1"."name" FROM "tag" AS "t1" INNER JOIN "fieldofstudy_tag_through" AS "t2" ON ("t2"."tag_id" = "t1"."id") INNER JOIN "fieldofstudy" AS "t3" ON ("t2"."fieldofstudy_id" = "t3"."id") WHERE ("t2"."fieldofstudy_id" = %s)', [3])
('SELECT "t1"."id", "t1"."name" FROM "tag" AS "t1" INNER JOIN "fieldofstudy_tag_through" AS "t2" ON ("t2"."tag_id" = "t1"."id") INNER JOIN "fieldofstudy" AS "t3" ON ("t2"."fieldofstudy_id" = "t3"."id") WHERE ("t2"."fieldofstudy_id" = %s)', [3])
('SELECT "t1"."id", "t1"."name" FROM "tag" AS "t1" INNER JOIN "fieldofstudy_tag_through" AS "t2" ON ("t2"."tag_id" = "t1"."id") INNER JOIN "fieldofstudy" AS "t3" ON ("t2"."fieldofstudy_id" = "t3"."id") WHERE ("t2"."fieldofstudy_id" = %s)', [3])
[<Tag: 10>, <Tag: 11>]
[<Tag: 10>, <Tag: 11>]
('SELECT "t1"."id", "t1"."name" FROM "tag" AS "t1" INNER JOIN "fieldofstudy_tag_through" AS "t2" ON ("t2"."tag_id" = "t1"."id") INNER JOIN "fieldofstudy" AS "t3" ON ("t2"."fieldofstudy_id" = "t3"."id") WHERE ("t2"."fieldofstudy_id" = %s)', [3])
[<Tag: 10>, <Tag: 11>]
[<Tag: 10>, <Tag: 11>]
```
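For reference, the single-query `GROUP_CONCAT` trick shown in the replies can be sketched with the stdlib `sqlite3` module. The table and column names below are simplified stand-ins for the models above, not the exact names peewee generates:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE fieldofstudy (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE tag (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE study_tag (fieldofstudy_id INTEGER, tag_id INTEGER);
    INSERT INTO fieldofstudy VALUES (3, 'biology');
    INSERT INTO tag VALUES (10, 'cells'), (11, 'genes');
    INSERT INTO study_tag VALUES (3, 10), (3, 11);
""")
# One query: the tags arrive pre-joined as a comma-separated string.
row = con.execute("""
    SELECT f.name, GROUP_CONCAT(t.name)
    FROM fieldofstudy AS f
    LEFT JOIN study_tag AS st ON st.fieldofstudy_id = f.id
    LEFT JOIN tag AS t ON st.tag_id = t.id
    WHERE f.id = 3
    GROUP BY f.id
""").fetchone()
print(row)  # e.g. ('biology', 'cells,genes') -- tag order is not guaranteed
```

Peewee's `fn.GROUP_CONCAT(...)` builds the same SQL, as the first reply shows.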
I really don't want to query the database each time I need those tags. The perfect solution would be one query with the tags already prefetched. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1828/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1828/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1827 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1827/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1827/comments | https://api.github.com/repos/coleifer/peewee/issues/1827/events | https://github.com/coleifer/peewee/issues/1827 | 396,915,930 | MDU6SXNzdWUzOTY5MTU5MzA= | 1,827 | Window function in order_by causes syntax error | {
"login": "zmwangx",
"id": 4149852,
"node_id": "MDQ6VXNlcjQxNDk4NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4149852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zmwangx",
"html_url": "https://github.com/zmwangx",
"followers_url": "https://api.github.com/users/zmwangx/followers",
"following_url": "https://api.github.com/users/zmwangx/following{/other_user}",
"gists_url": "https://api.github.com/users/zmwangx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zmwangx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zmwangx/subscriptions",
"organizations_url": "https://api.github.com/users/zmwangx/orgs",
"repos_url": "https://api.github.com/users/zmwangx/repos",
"events_url": "https://api.github.com/users/zmwangx/events{/privacy}",
"received_events_url": "https://api.github.com/users/zmwangx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Thanks for reporting. This should be fixed by 55f515b."
] | 2019-01-08T13:52:21 | 2019-01-10T21:17:52 | 2019-01-10T21:17:41 | CONTRIBUTOR | null | Consider the following simple example:
```py
import logging

import peewee

db = peewee.SqliteDatabase("/tmp/test.db")


class Transaction(peewee.Model):
    user_id = peewee.IntegerField()

    class Meta:
        database = db


db.create_tables([Transaction], safe=True)
Transaction.insert_many(
    dict(user_id=user_id) for user_id in [1, 2, 1, 3, 4, 2, 3, 1]
).execute()

logger = logging.getLogger("peewee")
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)

# Using a pre-defined window object is okay.
win = peewee.Window(partition_by=[Transaction.user_id], order_by=[Transaction.id])
(
    Transaction.select()
    .order_by(peewee.fn.FIRST_VALUE(Transaction.id).over(win))
    .window(win)
    .execute()
)

# Defining a window directly in order_by is not okay.
(
    Transaction.select()
    .order_by(
        peewee.fn.FIRST_VALUE(Transaction.id).over(
            partition_by=[Transaction.user_id], order_by=[Transaction.id]
        )
    )
    .execute()
)
```
Executing this produces:
```sql
('SELECT "t1"."id", "t1"."user_id" FROM "transaction" AS "t1" WINDOW w AS (PARTITION BY "t1"."user_id" ORDER BY "t1"."id") ORDER BY FIRST_VALUE("t1"."id") OVER w', [])
('SELECT "t1"."id", "t1"."user_id" FROM "transaction" AS "t1" ORDER BY FIRST_VALUE("t1"."id") OVER w AS (PARTITION BY "t1"."user_id" ORDER BY "t1"."id")', [])
Traceback (most recent call last):
File "/path/to/venv/lib/python3.7/site-packages/peewee.py", line 2712, in execute_sql
cursor.execute(sql, params or ())
sqlite3.OperationalError: near "AS": syntax error
```
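For reference, the inline `OVER (...)` form — with no named window and no alias — is valid SQL on its own, runnable with the stdlib `sqlite3` module. Window functions need SQLite 3.25+, so this sketch guards on the library version:

```python
import sqlite3

rows = None
if sqlite3.sqlite_version_info >= (3, 25, 0):
    con = sqlite3.connect(":memory:")
    con.executescript("""
        CREATE TABLE txn (id INTEGER PRIMARY KEY, user_id INTEGER);
        INSERT INTO txn (user_id) VALUES (1),(2),(1),(3),(4),(2),(3),(1);
    """)
    # The window definition appears inline after OVER, with no "w AS":
    rows = con.execute("""
        SELECT id, user_id FROM txn
        ORDER BY FIRST_VALUE(id) OVER (PARTITION BY user_id ORDER BY id)
    """).fetchall()
    print(len(rows))  # 8
```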
The second query apparently shouldn't have `w AS` after `OVER`; removing that gives us a valid query. I'm not sure why a window alias is used in this case, and in a problematic manner; a window alias is not used if `FIRST_VALUE(...).over(...)` appears in `select()` instead. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1827/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1826 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1826/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1826/comments | https://api.github.com/repos/coleifer/peewee/issues/1826/events | https://github.com/coleifer/peewee/issues/1826 | 396,289,529 | MDU6SXNzdWUzOTYyODk1Mjk= | 1,826 | Bug in bm25 ranking function with more than one term | {
"login": "simonw",
"id": 9599,
"node_id": "MDQ6VXNlcjk1OTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/9599?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonw",
"html_url": "https://github.com/simonw",
"followers_url": "https://api.github.com/users/simonw/followers",
"following_url": "https://api.github.com/users/simonw/following{/other_user}",
"gists_url": "https://api.github.com/users/simonw/gists{/gist_id}",
"starred_url": "https://api.github.com/users/simonw/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/simonw/subscriptions",
"organizations_url": "https://api.github.com/users/simonw/orgs",
"repos_url": "https://api.github.com/users/simonw/repos",
"events_url": "https://api.github.com/users/simonw/events{/privacy}",
"received_events_url": "https://api.github.com/users/simonw/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I think the correct formula for that line is:\r\n\r\n x = X_O + ((i * term_count) + j) * 3\r\n\r\nIf I make that single line change to my example script I get the following output, which I think is correct:\r\n\r\n```\r\nsearch = dog cat\r\n============\r\n('both of them', 'both dog dog and cat here')\r\n[2, 2, 5, 4, 5, 3, 6, 0, 1, 1, 2, 4, 2, 0, 1, 1, 1, 3, 2]\r\nterm_count=2, col_count=2, total_docs=5\r\nterm (i) = 0, column (j) = 0\r\n avg_length=4.0, doc_length=3.0\r\n[0, 1, 1, 2, 4, 2, 0, 1, 1, 1, 3, 2]\r\n term_frequency_in_this_column=0.0, docs_with_term_in_this_column=1.0\r\nterm (i) = 0, column (j) = 1\r\n avg_length=5.0, doc_length=6.0\r\n[2, 4, 2, 0, 1, 1, 1, 3, 2]\r\n term_frequency_in_this_column=2.0, docs_with_term_in_this_column=2.0\r\nterm (i) = 1, column (j) = 0\r\n avg_length=4.0, doc_length=3.0\r\n[0, 1, 1, 1, 3, 2]\r\n term_frequency_in_this_column=0.0, docs_with_term_in_this_column=1.0\r\nterm (i) = 1, column (j) = 1\r\n avg_length=5.0, doc_length=6.0\r\n[1, 3, 2]\r\n term_frequency_in_this_column=1.0, docs_with_term_in_this_column=2.0\r\n-0.749035952142196\r\n```",
"Here's the spreadsheet I used to figure out the correct formula: https://docs.google.com/spreadsheets/d/1htR7CWjmF25TZQ8BLzAIh2QHCiInFlfBZbhg7lt-mrM/edit?usp=sharing",
"Just for reference, here are example implementations I based my code on:\r\n\r\n* https://github.com/rads/sqlite-okapi-bm25/blob/master/okapi_bm25.c\r\n* https://github.com/parantapa/sqlite-okapi-bm25/blob/master/okapi_bm25.c\r\n\r\nRelevant snip:\r\n\r\n```c\r\n for (int i = 0; i < termCount; i++) {\r\n int currentX = X_OFFSET + (3 * searchTextCol * (i + 1));\r\n double termFrequency = matchinfo[currentX];\r\n double docsWithTerm = matchinfo[currentX + 2];\r\n\r\n double idf = log(\r\n (totalDocs - docsWithTerm + 0.5) /\r\n (docsWithTerm + 0.5)\r\n );\r\n\r\n double rightSide = (\r\n (termFrequency * (K1 + 1)) /\r\n (termFrequency + (K1 * (1 - B + (B * (docLength / avgLength)))))\r\n );\r\n\r\n sum += (idf * rightSide);\r\n }\r\n```\r\n\r\nSpecifically, where \"searchTextCol\" corresponds to \"j\" in my example:\r\n\r\n```c\r\nint currentX = X_OFFSET + (3 * searchTextCol * (i + 1));\r\n```",
"Huh... maybe the bug is in their code as well?\r\n\r\nI'm 80% sure I'm right about this, but that's why I posted so much supporting documentation: this definitely needs fresh eyes on it!",
"It looks like someone reported the same bug against one of those repos: https://github.com/rads/sqlite-okapi-bm25/issues/2",
"Indeed, I've got scrap paper out and am seeing it, too.",
"You suggested `x = X_O + ((i * term_count) + j) * 3`\r\n\r\nBut I think it is `x = X_O + ((i * col_count) + j) * 3` and it looks like the issue on the linked repo agrees with that.",
"Yes you're right - looks like a bug in my spreadsheet. Your version is giving me the expected results.",
"Ugh, travis-ci and its ancient sqlite...last patch should do the trick.",
"The reason I spotted this is I've been building a standalone library for packaging up some SQLite FTS4 functions. It includes an `annotate_matchinfo()` function which attempts to convert the matchinfo array into something a lot more useful.\r\n\r\nYou can see that in action here: https://datasette-sqlite-fts4.datasette.io/24ways-fts4-52e8a02?sql=select%0D%0A++++json_object%28%0D%0A++++++++%22label%22%2C+articles.title%2C+%22href%22%2C+articles.url%0D%0A++++%29+as+article%2C%0D%0A++++articles.author%2C%0D%0A++++rank_score%28matchinfo%28articles_fts%2C+%22pcx%22%29%29+as+score%2C%0D%0A++++rank_bm25%28matchinfo%28articles_fts%2C+%22pcnalx%22%29%29+as+bm25%2C%0D%0A++++json_object%28%0D%0A++++++++%22pre%22%2C+annotate_matchinfo%28matchinfo%28articles_fts%2C+%22pcxnalyb%22%29%2C+%22pcxnalyb%22%29%0D%0A++++%29+as+annotated_matchinfo%2C%0D%0A++++matchinfo%28articles_fts%2C+%22pcxnalyb%22%29+as+matchinfo%2C%0D%0A++++decode_matchinfo%28matchinfo%28articles_fts%2C+%22pcxnalyb%22%29%29+as+decoded_matchinfo%0D%0Afrom%0D%0A++++articles_fts+join+articles+on+articles.rowid+%3D+articles_fts.rowid%0D%0Awhere%0D%0A++++articles_fts+match+%3Asearch%0D%0Aorder+by+bm25&search=jquery+maps\r\n\r\nThe library I am building is here: https://github.com/simonw/sqlite-fts4",
"I believe this is fixed now. Thanks for the help! I'll push a new release today or tomorrow with the patch.",
"3.8.1"
] | 2019-01-06T20:04:37 | 2019-01-07T16:33:32 | 2019-01-07T16:08:54 | NONE | null | I think I've found a bug in the bm25 implementation:
https://github.com/coleifer/peewee/blob/a24b36da3a101458a854e6a4319f4bb8d8cb478f/playhouse/sqlite_ext.py#L1160-L1175
The specific problem is here:
https://github.com/coleifer/peewee/blob/a24b36da3a101458a854e6a4319f4bb8d8cb478f/playhouse/sqlite_ext.py#L1173-L1175
This code is supposed to extract the `term_frequency` and `docs_with_term` for term `i` and column `j`. BUT... I don't think the array pointer arithmetic here is correct. In particular, with more than one term I seem to be getting the wrong results.
After quite a lot of digging around, I think I've prepared an example that illustrates the problem. My code is here: https://gist.github.com/simonw/e0b9156d66b41b172a66d0cfe32d9391
I created a modified version of the bm25 function which outputs debugging information, then ran some sample searches through it. The output illustrating the problem is this:
```
search = dog cat
============
('both of them', 'both dog dog and cat here')
[2, 2, 5, 4, 5, 3, 6, 0, 1, 1, 2, 4, 2, 0, 1, 1, 1, 3, 2]
term_count=2, col_count=2, total_docs=5
term (i) = 0, column (j) = 0
avg_length=4.0, doc_length=3.0
term_frequency_in_this_column=0.0, docs_with_term_in_this_column=1.0
term (i) = 0, column (j) = 1
avg_length=5.0, doc_length=6.0
term_frequency_in_this_column=2.0, docs_with_term_in_this_column=2.0
term (i) = 1, column (j) = 0
avg_length=4.0, doc_length=3.0
term_frequency_in_this_column=0.0, docs_with_term_in_this_column=1.0
term (i) = 1, column (j) = 1
avg_length=5.0, doc_length=6.0
term_frequency_in_this_column=0.0, docs_with_term_in_this_column=1.0
-0.438011195601579
```
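To make the off-by-one concrete, the two offset formulas can be compared directly against the matchinfo array from that dump (plain Python, no peewee; the corrected formula is the one arrived at in the discussion above):

```python
# "pcnalx" layout: [p, c, n, a0, a1, l0, l1, then 3 ints per (term, column)].
matchinfo = [2, 2, 5, 4, 5, 3, 6, 0, 1, 1, 2, 4, 2, 0, 1, 1, 1, 3, 2]
term_count, col_count = matchinfo[0], matchinfo[1]
X_O = 3 + 2 * col_count  # skip p, c, n plus the avg-length and length blocks

def buggy(i, j):
    return X_O + (3 * j * (i + 1))

def fixed(i, j):
    return X_O + ((i * col_count) + j) * 3

# Term 1 ('cat') in column 1 ('both dog dog and cat here'):
print(matchinfo[buggy(1, 1)], matchinfo[fixed(1, 1)])  # 0 1
```

The buggy formula lands on the triple for term 1 / column 0 instead, which is why the reported frequency reads 0.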
That's for a search for `dog cat` against the following five documents:
```
CREATE VIRTUAL TABLE docs USING fts4(c0, c1);
INSERT INTO docs (c0, c1) VALUES ("this is about a dog", "more about that dog dog");
INSERT INTO docs (c0, c1) VALUES ("this is about a cat", "stuff on that cat cat");
INSERT INTO docs (c0, c1) VALUES ("something about a ferret", "yeah a ferret ferret");
INSERT INTO docs (c0, c1) VALUES ("both of them", "both dog dog and cat here");
INSERT INTO docs (c0, c1) VALUES ("not mammals", "maybe talk about fish");
```
The bug is illustrated by the very last section of the above example output, this bit:
```
term (i) = 1, column (j) = 1
avg_length=5.0, doc_length=6.0
term_frequency_in_this_column=0.0, docs_with_term_in_this_column=1.0
```
Here the output is showing that the document `('both of them', 'both dog dog and cat here')` was found to match the search for `dog cat` - but that the statistics for the last term and column (so the term `cat` in the column `both dog dog and cat here`) have `term_frequency_in_this_column` of 0.0.
This is incorrect! The word cat appears once in that column, so this value should be 1.0.
The bug is in the `x = X_O + (3 * j * (i + 1))` line which calculates the offset within the matchinfo array. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1826/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1825 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1825/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1825/comments | https://api.github.com/repos/coleifer/peewee/issues/1825/events | https://github.com/coleifer/peewee/issues/1825 | 396,097,318 | MDU6SXNzdWUzOTYwOTczMTg= | 1,825 | sqlite_ext.FTSModel: MATCH'ing a single column | {
"login": "zmwangx",
"id": 4149852,
"node_id": "MDQ6VXNlcjQxNDk4NTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4149852?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zmwangx",
"html_url": "https://github.com/zmwangx",
"followers_url": "https://api.github.com/users/zmwangx/followers",
"following_url": "https://api.github.com/users/zmwangx/following{/other_user}",
"gists_url": "https://api.github.com/users/zmwangx/gists{/gist_id}",
"starred_url": "https://api.github.com/users/zmwangx/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/zmwangx/subscriptions",
"organizations_url": "https://api.github.com/users/zmwangx/orgs",
"repos_url": "https://api.github.com/users/zmwangx/repos",
"events_url": "https://api.github.com/users/zmwangx/events{/privacy}",
"received_events_url": "https://api.github.com/users/zmwangx/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Correct, one needed to use the `match()` helper. I've added a method to `SearchField` as you suggested.",
"Thanks for the quick fix!"
] | 2019-01-04T23:06:07 | 2019-01-05T14:31:01 | 2019-01-05T01:58:41 | CONTRIBUTOR | null | SQLite FTS allows MATCH'ing a single column instead of the full table. An example from https://www.sqlite.org/fts3.html:
```sql
-- Example schema
CREATE VIRTUAL TABLE mail USING fts3(subject, body);
-- Example table population
INSERT INTO mail(docid, subject, body) VALUES(1, 'software feedback', 'found it too slow');
INSERT INTO mail(docid, subject, body) VALUES(2, 'software feedback', 'no feedback');
INSERT INTO mail(docid, subject, body) VALUES(3, 'slow lunch order', 'was a software problem');
-- Example queries
SELECT * FROM mail WHERE subject MATCH 'software'; -- Selects rows 1 and 2
SELECT * FROM mail WHERE body MATCH 'feedback'; -- Selects row 2
SELECT * FROM mail WHERE mail MATCH 'software'; -- Selects rows 1, 2 and 3
SELECT * FROM mail WHERE mail MATCH 'slow'; -- Selects rows 1 and 3
```
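The docs example above runs as-is against the stdlib `sqlite3` module (assuming the SQLite build ships with FTS3 compiled in, as most do — this sketch guards for builds without it):

```python
import sqlite3

ids = None
con = sqlite3.connect(":memory:")
try:
    con.executescript("""
        CREATE VIRTUAL TABLE mail USING fts3(subject, body);
        INSERT INTO mail(docid, subject, body) VALUES(1, 'software feedback', 'found it too slow');
        INSERT INTO mail(docid, subject, body) VALUES(2, 'software feedback', 'no feedback');
        INSERT INTO mail(docid, subject, body) VALUES(3, 'slow lunch order', 'was a software problem');
    """)
except sqlite3.OperationalError:
    pass  # this SQLite build lacks FTS3
else:
    # Column-scoped MATCH: only the 'subject' column is searched.
    ids = [r[0] for r in con.execute(
        "SELECT docid FROM mail WHERE subject MATCH 'software'")]
    print(ids)  # [1, 2]
```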
Naturally (to me) one would want to write the column matching code like this:
```py
# SELECT * FROM mail WHERE subject MATCH 'software';
Mail.select().where(Mail.subject.match('software'))
```
However, this doesn't work, and instead it seems one has to write
```py
from playhouse.sqlite_ext import match
Mail.select().where(match(Mail.subject, 'software'))
```
and the `match` function isn't documented; only the `match` method of `FTSModel` is documented.
Maybe a `match` method could be added to `SearchField`?
Please excuse me if I missed anything obvious. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1825/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1824 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1824/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1824/comments | https://api.github.com/repos/coleifer/peewee/issues/1824/events | https://github.com/coleifer/peewee/issues/1824 | 395,637,053 | MDU6SXNzdWUzOTU2MzcwNTM= | 1,824 | db_value() doesn't work fine as expected | {
"login": "handalin",
"id": 2045180,
"node_id": "MDQ6VXNlcjIwNDUxODA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2045180?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/handalin",
"html_url": "https://github.com/handalin",
"followers_url": "https://api.github.com/users/handalin/followers",
"following_url": "https://api.github.com/users/handalin/following{/other_user}",
"gists_url": "https://api.github.com/users/handalin/gists{/gist_id}",
"starred_url": "https://api.github.com/users/handalin/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/handalin/subscriptions",
"organizations_url": "https://api.github.com/users/handalin/orgs",
"repos_url": "https://api.github.com/users/handalin/repos",
"events_url": "https://api.github.com/users/handalin/events{/privacy}",
"received_events_url": "https://api.github.com/users/handalin/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I have no idea what \"refresh the page\" implies. If you can produce a test-case that replicates this I will re-open. At the moment my sense is that the problem is in your code somewhere."
] | 2019-01-03T16:26:27 | 2019-01-03T18:12:06 | 2019-01-03T18:12:06 | NONE | null | This is my DB configuration:
```
database = MySQLDatabase('creditCardManager',
    **{
        'host': 'localhost',
        'password': 'xxx',
        'user': 'yyy',
        'use_unicode': True,
        'charset': 'utf8',
        'autorollback': True,
    })
```
and I define a custom field:
```
class EnumField(SmallIntegerField):
    def __init__(self, choices, default, *args, **kwargs):
        self.default_db_value = default[0]
        self.default_python_value = default[1]
        self.int_from_str = {v: k for k, v in choices}
        self.str_from_int = {k: v for k, v in choices}
        return super(SmallIntegerField, self).__init__(*args, **kwargs)

    def db_value(self, value):
        if isinstance(value, str):
            value = value.decode('utf8')
        return self.int_from_str.get(value, self.default_db_value)

    def python_value(self, value):
        return self.str_from_int.get(value, self.default_python_value)
```
I define a model:
```
MY_CHOICES = [
    (1, u'ON'),
    (2, u'OFF'),
]

class Card(BaseModel):
    cid = PrimaryKeyField()
    my_field = EnumField(choices=MY_CHOICES, default=(0, u'unknown'))
```
And I have an updating API, which does:
```
card = Card.get(cid = cid)
card.my_field = u'ON'
card.save()
```
The problem is: if I follow the operations below, everything works fine:

[Op-A]
1. restart my server
2. send a request to update this field

But when I follow these operations instead:

[Op-B]
1. refresh the page
2. send a request again (by clicking a button on my page) to update this field

then db_value() doesn't work as expected:
```
OperationalError: (1366, "Incorrect integer value: 'ON' for column 'bank' at row 1")
```
What's the difference between [Op-A] and [Op-B]?
Thanks a lot. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1824/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1823 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1823/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1823/comments | https://api.github.com/repos/coleifer/peewee/issues/1823/events | https://github.com/coleifer/peewee/issues/1823 | 395,554,139 | MDU6SXNzdWUzOTU1NTQxMzk= | 1,823 | insert_many(), KeyError: '"-" is not a recognized field.' | {
"login": "michalchrzastek",
"id": 38867528,
"node_id": "MDQ6VXNlcjM4ODY3NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/38867528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michalchrzastek",
"html_url": "https://github.com/michalchrzastek",
"followers_url": "https://api.github.com/users/michalchrzastek/followers",
"following_url": "https://api.github.com/users/michalchrzastek/following{/other_user}",
"gists_url": "https://api.github.com/users/michalchrzastek/gists{/gist_id}",
"starred_url": "https://api.github.com/users/michalchrzastek/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/michalchrzastek/subscriptions",
"organizations_url": "https://api.github.com/users/michalchrzastek/orgs",
"repos_url": "https://api.github.com/users/michalchrzastek/repos",
"events_url": "https://api.github.com/users/michalchrzastek/events{/privacy}",
"received_events_url": "https://api.github.com/users/michalchrzastek/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"You don't need to quote the values. Peewee creates a parameterized query and the driver will handle all that for you.",
"That's what I thought, so why this error:\r\n```\r\nraise KeyError('\"%s\" is not a recognized field.' % field)\r\nKeyError: '\"-\" is not a recognized field.'\r\n```",
"Perhaps the data you are passing to `insert_many` may not be in the right format. You need to pass either a list of `dict` objects, or a list of tuples (then specify the fields explicitly using the `fields=` parameter). There are many examples."
] | 2019-01-03T12:21:02 | 2019-01-03T19:34:47 | 2019-01-03T18:13:46 | NONE | null | Hello,
I'm trying to insert multiple rows, but get this error:
`KeyError: '"-" is not a recognized field.'`
I figured it's the minus sign before the figures; I was able to work around it by adding quotes using concatenation:
`transAmnt = "'" + data[1] + "'"`
But then I get this error:
`KeyError: '"\'" is not a recognized field.'`
I think I'm just chasing my own tail and missing something... Can you advise, please?
And here is my code:
MODEL:
```
class transactions(Model):
    trans_cat = IntegerField()
    trans_amnt = DecimalField()
    trans_date = DateField()
    trans_desc = CharField()
    uploadTime = DateTimeField(default=datetime.datetime.now)
    cardType = CharField(1,default="D")

    class Meta:
        database = finDB
```
INSERT:
```
with open(filename,'r') as fr:
    lines = fr.readlines()
    dictKeys = ['trans_amnt','trans_date','trans_desc','cardType']
    dictList = []
    for line in lines: #split line into correct column
        if re.match(r"^\d+.*$",line):
            data = line.split(';')
            old_date = data[0]
            datetimeobject = datetime.strptime(old_date,'%d/%m/%Y')
            transDate = datetimeobject.strftime('%Y-%m-%d')
            transDesc = "'" + data[2][:-1] + "'"
            transAmnt = "'" + data[1] + "'"
            transCard = statementType
            dataSource = [transAmnt,transDate,transDesc,transCard]
            dictList.append(dict(zip(dictKeys,dataSource)))
    with modelFinance.finDB.atomic():
        modelFinance.transactions.insert_many(dataSource).execute()
```
PRINT:
print(dictList):
BEFORE ADDING QUOTES
```
[{'trans_date': '2018-12-05', 'cardType': 'C', 'trans_amnt': '-6.60', 'trans_desc': 'APY, text, more text **1212\n'}, {'trans_date': '2018-12-06', 'cardType': 'C', 'trans_amnt': '-1.01', 'trans_desc': 'APY, texttext, more texttext **1111\n'}]
```
AFTER ADDING QUOTES
```
[{'trans_desc': "'APY, text, more text **1212", 'cardType': 'C', 'trans_amnt': "'-6.60'", 'trans_date': '2018-12-05'}, {'trans_desc': "'APY, texttext, more texttext **1111'", 'cardType': 'C', 'trans_amnt': "'-1.01'", 'trans_date': '2018-12-06'}]
```
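For what it's worth, a sketch of the row-building without the manual quoting — the driver parameterizes values, so negative amounts need no quotes, and (per the replies) it's `dictList`, the list of dicts, that should go to `insert_many()`:

```python
from datetime import datetime

line = "05/12/2018;-6.60;APY, text, more text **1212\n"
data = line.rstrip("\n").split(";")
row = {
    "trans_amnt": data[1],  # '-6.60' as-is, no quoting needed
    "trans_date": datetime.strptime(data[0], "%d/%m/%Y").strftime("%Y-%m-%d"),
    "trans_desc": data[2],
    "cardType": "C",
}
print(row["trans_amnt"], row["trans_date"])  # -6.60 2018-12-05
# then: modelFinance.transactions.insert_many(dictList).execute()
```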
Thanks
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1823/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1822 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1822/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1822/comments | https://api.github.com/repos/coleifer/peewee/issues/1822/events | https://github.com/coleifer/peewee/issues/1822 | 395,151,721 | MDU6SXNzdWUzOTUxNTE3MjE= | 1,822 | primary_key=True not working | {
"login": "dyadav7",
"id": 8121360,
"node_id": "MDQ6VXNlcjgxMjEzNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8121360?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dyadav7",
"html_url": "https://github.com/dyadav7",
"followers_url": "https://api.github.com/users/dyadav7/followers",
"following_url": "https://api.github.com/users/dyadav7/following{/other_user}",
"gists_url": "https://api.github.com/users/dyadav7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dyadav7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dyadav7/subscriptions",
"organizations_url": "https://api.github.com/users/dyadav7/orgs",
"repos_url": "https://api.github.com/users/dyadav7/repos",
"events_url": "https://api.github.com/users/dyadav7/events{/privacy}",
"received_events_url": "https://api.github.com/users/dyadav7/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"It is clearly explained here:\r\n\r\nhttp://docs.peewee-orm.com/en/latest/peewee/models.html#id4\r\n\r\nYou need to specify `force_insert=True` when saving with a non-integer primary key. Please read the docs."
] | 2019-01-02T05:19:16 | 2019-01-02T18:49:54 | 2019-01-02T18:49:54 | NONE | null | What is the issue with the model below?
class Tags(BaseModel):
    tag = CharField(primary_key=True)
    cmd = CharField()
I am unable to save to the database. Run the attached script to see the issue.
I was unable to find an answer by checking the docs.
[peewe_issue.py.txt](https://github.com/coleifer/peewee/files/2719957/peewe_issue.py.txt)
I get the output below:
$ peewe_issue.py
Show all entries: 0
ret=0
Traceback (most recent call last):
File "/home/deepaky/bin/peewe_issue.py", line 37, in <module>
main()
File "/home/deepaky/bin/peewe_issue.py", line 32, in main
assert(ret)
AssertionError
$
| {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1822/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1822/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1821 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1821/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1821/comments | https://api.github.com/repos/coleifer/peewee/issues/1821/events | https://github.com/coleifer/peewee/issues/1821 | 394,978,907 | MDU6SXNzdWUzOTQ5Nzg5MDc= | 1,821 | Prefetch with limit might give unexpected results (at least with SQLite3) | {
"login": "moubctez",
"id": 12608048,
"node_id": "MDQ6VXNlcjEyNjA4MDQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/12608048?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/moubctez",
"html_url": "https://github.com/moubctez",
"followers_url": "https://api.github.com/users/moubctez/followers",
"following_url": "https://api.github.com/users/moubctez/following{/other_user}",
"gists_url": "https://api.github.com/users/moubctez/gists{/gist_id}",
"starred_url": "https://api.github.com/users/moubctez/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/moubctez/subscriptions",
"organizations_url": "https://api.github.com/users/moubctez/orgs",
"repos_url": "https://api.github.com/users/moubctez/repos",
"events_url": "https://api.github.com/users/moubctez/events{/privacy}",
"received_events_url": "https://api.github.com/users/moubctez/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"This is mentioned here: http://docs.peewee-orm.com/en/latest/peewee/relationships.html#using-prefetch",
"> sometimes does not work, when there are more Invoices with the same number.\r\n\r\nWell of course. You're limiting the selection to a single `Invoice` but multiple exist that match the `WHERE` clause. If you need them to be ordered deterministically, then you will need to explicitly order them.\r\n\r\nPeewee contains as little magic as possible, so ordering your queries is left to you.",
"Thanks for the explaination.\r\nHow about this fix: query `Invoice`, then do a prefetch based on ids from the first query?"
] | 2018-12-31T13:14:05 | 2019-01-03T21:12:51 | 2019-01-02T18:52:36 | NONE | null | I have tables called **Invoice** and **Item**, which essentially look like this:
```
class Invoice(Base):
    number = IntegerField(index=True)


class Item(Base):
    invoice_id = ForeignKeyField(Invoice, backref='items', on_delete='CASCADE')
```
Executing `Invoice.select().where(Invoice.number == 1001).limit(1).prefetch(Item)` sometimes does not work when there are multiple Invoices with the same number.
In the log I see peewee making two queries with LIMIT, but each returns a different Invoice.
```
('SELECT "t1"."id", "t1"."invoice_id" FROM "item" AS "t1" WHERE ("t1"."invoice_id" IN (SELECT "t2"."id" FROM "invoice" AS "t2" WHERE ("t2"."number" = ?) LIMIT ?))', [1001, 1])
('SELECT "t1"."id", "t1"."number" FROM "invoice" AS "t1" WHERE ("t1"."number" = ?) LIMIT ?', [1001, 1])
```
It turns out that SQLite3 returns nondeterministic results, depending on the number of columns in the query:
```
sqlite> SELECT id FROM invoice WHERE number = 1001 LIMIT 1;
225
sqlite> SELECT id, number FROM invoice WHERE number = 1001 LIMIT 1;
153
```
To fix this I've added `order_by()`.
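This behavior is permitted by SQL: without an ORDER BY, LIMIT may pick any matching row. A standard-library `sqlite3` sketch of the fix, mirroring the ids from the session above (illustrative only, not peewee code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoice (id INTEGER PRIMARY KEY, number INTEGER)")
conn.executemany("INSERT INTO invoice (id, number) VALUES (?, ?)",
                 [(153, 1001), (225, 1001)])

# Without ORDER BY, "LIMIT 1" may return either matching row.
# With an explicit ORDER BY, the result is deterministic:
(invoice_id,) = conn.execute(
    "SELECT id FROM invoice WHERE number = ? ORDER BY id LIMIT 1", (1001,)
).fetchone()
print(invoice_id)  # 153
```

In peewee terms, the equivalent fix would be appending something like `.order_by(Invoice.id)` before `.limit(1)`, so both queries generated by `prefetch()` agree on which Invoice they select.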
I propose this issue be documented. I am not sure if it is only a problem with SQLite3, but maybe an exception should be raised when using **limit** with **prefetch** without **order_by**, or something like **order_by(id)** should be added as a default. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1821/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1820 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1820/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1820/comments | https://api.github.com/repos/coleifer/peewee/issues/1820/events | https://github.com/coleifer/peewee/issues/1820 | 394,210,324 | MDU6SXNzdWUzOTQyMTAzMjQ= | 1,820 | Ask: can we close a connection in pool automatically? | {
"login": "likang",
"id": 850711,
"node_id": "MDQ6VXNlcjg1MDcxMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/850711?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/likang",
"html_url": "https://github.com/likang",
"followers_url": "https://api.github.com/users/likang/followers",
"following_url": "https://api.github.com/users/likang/following{/other_user}",
"gists_url": "https://api.github.com/users/likang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/likang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/likang/subscriptions",
"organizations_url": "https://api.github.com/users/likang/orgs",
"repos_url": "https://api.github.com/users/likang/repos",
"events_url": "https://api.github.com/users/likang/events{/privacy}",
"received_events_url": "https://api.github.com/users/likang/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"I don't see how closing immediately would be faster in the long run. I suppose if a burst of requests was received very quickly you might save time because you are opening fewer new connections, as they get recycled faster.\r\n\r\nIn my opinion its foolish to override the transaction APIs. Should you wish to run the same application code with a non-pooled database, the behavior is going to be non-obvious, which is the cardinal sin.\r\n\r\nJust call `db.close()` when its safe to release the connection, or use the database instance as a context manager."
] | 2018-12-26T17:34:18 | 2018-12-26T21:38:09 | 2018-12-26T21:38:09 | NONE | null | One of the best practices about connection management is close it when you are done. Normally a hook after request will take this job. like this:
```Python
@hook('after_request')
def _close_db():
    if not db.is_closed():
        db.close()
```
But I am wondering: can we close the connection as soon as possible? Especially with a connection pool, since you can "return" the connection and get it back from the pool very quickly. This would be very useful when you have finished your database work but still have a long job to wait on before you can finish the request, like this:
```Python
@get('/')
def index():
    foo = Foo.get_by_id(1)
    bar = requests.get('/fetch-bar')
    return foo.name + bar.text
```
You can see before we get `bar` info by HTTP, we already don't need the connection, we can "return" this connection back to pool.
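The "return it to the pool early" idea can be modeled with a toy pool (a sketch of the concept only; peewee's pooled databases in `playhouse.pool` have their own API, where `db.close()` recycles the connection rather than closing it):

```python
import sqlite3

class ToyPool:
    """A toy connection pool: close() returns the connection for reuse."""
    def __init__(self, factory):
        self._factory = factory
        self._idle = []

    def connect(self):
        # Reuse an idle connection if one is available, else open a new one.
        return self._idle.pop() if self._idle else self._factory()

    def close(self, conn):
        self._idle.append(conn)  # recycled, not actually closed

pool = ToyPool(lambda: sqlite3.connect(":memory:"))
conn = pool.connect()
pool.close(conn)               # released as soon as the DB work is done...
assert pool.connect() is conn  # ...so the next request reuses it
```

The earlier a connection goes back to the idle list, the sooner a concurrent request can reuse it instead of opening a new one, which is the performance effect described below.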
Sure, we can close the connection manually, but there will be lots of db operations in a project, and I think it would be great if this could be done automatically. So I tried overriding the `commit`, `rollback` and `atomic` methods of `Database`, like this:
```Python
class AutoReturnPooledMySQL(PooledMySQLDatabase):
    def commit(self):
        super(AutoReturnPooledMySQL, self).commit()
        if not self.in_transaction():
            self.close()

    def rollback(self):
        super(AutoReturnPooledMySQL, self).rollback()
        if not self.in_transaction():
            self.close()

    def atomic(self):
        ctx = super(AutoReturnPooledMySQL, self).atomic()
        return _auto_return_atomic(ctx)


class _auto_return_atomic:
    def __init__(self, ctx):
        self.ctx = ctx

    def __enter__(self):
        return self.ctx.__enter__()

    def __exit__(self, exc_type, exc_val, exc_tb):
        e = self.ctx.__exit__(exc_type, exc_val, exc_tb)
        if not self.ctx.db.in_transaction():
            self.ctx.db.close()
        return e


db = AutoReturnPooledMySQL(None)
```
In my test, the auto-close connection pool indeed increases the system's performance when lots of concurrent requests come. Still, I don't know whether this kind of modification is good or bad, or whether there is any hidden risk. I just wrote my thoughts down, and hope there will be an answer after a discussion. Thank you. | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1820/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1820/timeline | null | completed | null | null |
https://api.github.com/repos/coleifer/peewee/issues/1819 | https://api.github.com/repos/coleifer/peewee | https://api.github.com/repos/coleifer/peewee/issues/1819/labels{/name} | https://api.github.com/repos/coleifer/peewee/issues/1819/comments | https://api.github.com/repos/coleifer/peewee/issues/1819/events | https://github.com/coleifer/peewee/issues/1819 | 393,926,350 | MDU6SXNzdWUzOTM5MjYzNTA= | 1,819 | Deriving model classes with ManyToManyField | {
"login": "mried",
"id": 9843448,
"node_id": "MDQ6VXNlcjk4NDM0NDg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9843448?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mried",
"html_url": "https://github.com/mried",
"followers_url": "https://api.github.com/users/mried/followers",
"following_url": "https://api.github.com/users/mried/following{/other_user}",
"gists_url": "https://api.github.com/users/mried/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mried/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mried/subscriptions",
"organizations_url": "https://api.github.com/users/mried/orgs",
"repos_url": "https://api.github.com/users/mried/repos",
"events_url": "https://api.github.com/users/mried/events{/privacy}",
"received_events_url": "https://api.github.com/users/mried/received_events",
"type": "User",
"site_admin": false
} | [] | closed | false | null | [] | null | [
"Ahh, yeah, `ManyToManyField` may not play nicely with inheritance without some additional effort. Can you not just replicate the definition of the `ManyToManyField`, as you have it commented-out currently?",
"Whoops... I must admit, the super simple example works after removing the comment-`#`. The example was wrong and didn't show the problem I've got in real code... The following produces the error again:\r\n```python\r\nclass Project(BaseModel):\r\n rating = IntegerField()\r\n\r\n\r\nclass User(BaseModel):\r\n projects = ManyToManyField(Project, backref='users')\r\n\r\n\r\nclass VProject(Project):\r\n # users = ManyToManyField(User, backref='projects') # Comment 1\r\n is_top_rated = BooleanField()\r\n\r\n @classmethod\r\n def create_view(cls):\r\n db.execute_sql('''CREATE VIEW vproject AS\r\n SELECT project.*, project.rating >= 5 AS is_top_rated\r\n FROM project''')\r\n```\r\nI placed the `ManyToManyField` at the `User` class which is basically the \"other end\" of the relationship of my initial example. `VProject` still derives from `Project` which has no `ManyToManyField` explicitly. This gives the exact same exception as written in my first post.\r\nSince replicating the `ManyToManyField` definitions helped so far, I added the line which is commented out here (Comment 1). After adding that, it seems to work but, I'm afraid this might have side effects I don't see right now.",
"Thanks for the comment, this is now fixed.",
"Thanks for the fix.\r\n\r\nThis seems to fix the problem I mentioned in my first post, but not the one in the second post. Here is a test which still produces the `IndexError`:\r\n```python\r\n def test_manytomany_inheritance_2(self):\r\n class BaseModel(TestModel):\r\n class Meta:\r\n database = self.database\r\n class Project(BaseModel):\r\n name = TextField()\r\n class User(BaseModel):\r\n username = TextField()\r\n projects = ManyToManyField(Project, backref='users')\r\n class VProject(Project):\r\n pass\r\n\r\n PThrough = Project.users.through_model\r\n self.assertTrue(PThrough.project.rel_model is Project)\r\n self.assertTrue(PThrough.user.rel_model is User)\r\n\r\n VPThrough = VProject.users.through_model\r\n self.assertTrue(VPThrough.vproject.rel_model is VProject)\r\n self.assertTrue(VPThrough.user.rel_model is User)\r\n```\r\n(can be placed at line 205 in `tests/manytomany.py`)\r\n\r\nIs it possible to fix this, too?",
"Yeah...this is all very weird, though, right? I mean, I think it's a bug and I have a one-line fix. But it struck me as rather confusing trying to think about inheriting many-to-many. A many-to-many between two tables implies a third table that has foreign-keys to the two sources:\r\n\r\nStudent >-- StudentCourse --< Course\r\n\r\nSo `StudentCourse` has a foreign-key `student_id` and `course_id`.\r\n\r\nIf you want to subclass either `Student` or `Course`, then presumably the junction table would need to be \"subclassed\" as well. So if `HonorStudent` is a subclass of `Student`, we need a new junction table as well:\r\n\r\nHonorStudent >-- HonorStudentCourse --< Course\r\n\r\nFor auto-generated through models, which is what the test-case I added deals with, I think this is OK (if confusing).\r\n\r\nBut what if the through model is user-defined? Should it automatically be subclassed, and have its foreign-key modified?\r\n\r\nI think the safest and most correct approach is really not to ever inherit many-to-many fields...because we're not really talking about inheriting a \"field\"...we're talking about the creation of a completely new table to store the junction -- and that seems to get into confusing territory quickly.",
"I'm going to instead favor a slightly better error message and a strong admonishment to not subclass many-to-many.",
"I totally agree because there is another thing that came into my mind: the `backref` fields.\r\n\r\nGiven the following code:\r\n```python\r\nclass User(BaseModel):\r\n pass\r\n\r\n\r\nclass Project(BaseModel):\r\n users = ManyToManyField(User, backref='projects')\r\n```\r\nHaving a `User` instance `user`, I can access `user.projects` and get a list of `Project` instances. All fine until now.\r\n\r\nNow we derive `VProject` from `Project` and access `user.projects` again. What do we get? A list of `Project` instances? A list of `VProject` instances? This is absolutely not clear to the user (at least to me 😉).\r\n\r\nTo sum up: Don't derive `ManyToManyField`s. It may produce unforeseen consequences 🔥. So an explicit warning like you suggested sounds like the best solution.",
"Yes, exactly...the backrefs were what was causing half of the trouble anyways, since in \"many-to-many\"-land, a backref is exactly the same as an ordinary `ManyToManyField`."
] | 2018-12-24T20:08:07 | 2019-01-15T18:46:20 | 2019-01-02T19:04:10 | NONE | null | Hi!
I recently switched from peewee 2.8.5 to 3.8.0 (and from Python 2.7 to Python 3.7, but I don't think this is the reason for the problem described below). After doing so, I've got a problem with derived model classes if they contain a `ManyToManyField`. Here is a super simple (and stupid) example, which raises an `IndexError`:
```python
class User(BaseModel):
    pass


class Project(BaseModel):
    users = ManyToManyField(User, backref='projects')
    rating = IntegerField()


class VProject(Project):  # This line throws the exception
    # users = ManyToManyField(User, backref='projects')  # Needed in peewee 2.8.5 for a reason I don't remember
    is_top_rated = BooleanField()

    @classmethod
    def create_view(cls):
        db.execute_sql('''CREATE VIEW vproject AS
                          SELECT project.*, project.rating >= 5 AS is_top_rated
                          FROM project''')
```
As you can see, I use the derived class to access a view which has basically the same layout as a table - it only adds a column (the real case is a bit more complicated, of course). It worked perfectly using peewee 2.8.5, but now I get an `IndexError`:
```
Traceback (most recent call last):
  File "peeweetest.py", line 20, in <module>
    class VProject(Project):
  File "venv\lib\site-packages\peewee.py", line 5421, in __new__
    cls._meta.add_field(name, field)
  File "venv\lib\site-packages\peewee.py", line 5239, in add_field
    field.bind(self.model, field_name, set_attribute)
  File "venv\lib\site-packages\peewee.py", line 4702, in bind
    super(ManyToManyField, self).bind(model, name, set_attribute)
  File "venv\lib\site-packages\peewee.py", line 3963, in bind
    setattr(model, name, self.accessor_class(model, self, name))
  File "venv\lib\site-packages\peewee.py", line 4651, in __init__
    self.src_fk = self.through_model._meta.model_refs[self.model][0]
IndexError: list index out of range
```
If I understand the peewee code which throws the exception correctly, it tries to get some information about the through model of the derived class (`VProject` in my example), but the through model only has the info for the base class (`Project`).
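For context, a `ManyToManyField` implies a junction ("through") table holding foreign keys to both sides, which is part of why inheritance is tricky here: a subclass would arguably need its own junction table. A plain-SQL sketch of the implied schema (illustrative; the exact DDL peewee generates may differ):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE user (id INTEGER PRIMARY KEY);
CREATE TABLE project (id INTEGER PRIMARY KEY, rating INTEGER);
-- The through table implied by ManyToManyField:
CREATE TABLE project_user (
    project_id INTEGER REFERENCES project (id),
    user_id    INTEGER REFERENCES user (id)
);
""")
conn.execute("INSERT INTO user (id) VALUES (1)")
conn.execute("INSERT INTO project (id, rating) VALUES (10, 7)")
conn.execute("INSERT INTO project_user VALUES (10, 1)")

# project.users traverses the junction table:
rows = conn.execute("""
    SELECT u.id FROM user u
    JOIN project_user pu ON pu.user_id = u.id
    WHERE pu.project_id = 10
""").fetchall()
print(rows)  # [(1,)]
```

A `VProject` subclass has no corresponding foreign key in `project_user`, which is consistent with the `model_refs` lookup coming back empty.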
Is there a way to get around this error without removing the `ManyToManyField`? | {
"url": "https://api.github.com/repos/coleifer/peewee/issues/1819/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/coleifer/peewee/issues/1819/timeline | null | completed | null | null |